## After CSC16 (cont’d)

Umit V. Çatalyürek presented our work on the directed acyclic graph (DAG) partitioning problem. In this problem, we are given a DAG $G=(V,E)$ and an integer $k\geq 2$. The aim is to partition the vertex set $V$ into $k$ parts $V_1,\ldots, V_k$ in such a way that the parts have (almost) equal weight and the sum of the costs of all arcs having their endpoints in different parts is minimized. Vertices can have weights, and the edges can have costs. Up to now, all is standard. What is not standard is that the quotient graph of the parts should be acyclic. In other words, the directed graph $G'=(V', E')$, where $V'=\{V_1,\ldots,V_k\}$ and $V_i\rightarrow V_j\in E'$ iff $v_i\rightarrow v_j\in E$ for some $v_i\in V_i$ and $v_j\in V_j$, should be acyclic.
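The acyclicity constraint is easy to check programmatically. Here is a minimal sketch (the function names are my own, not from the presented work): it builds the quotient graph of a given partition, tests it for cycles with Kahn's topological-sort algorithm, and computes the usual edge cut.

```python
from collections import deque

def edge_cut(part, edges, cost=None):
    """Sum of costs of arcs whose endpoints lie in different parts (unit costs by default)."""
    return sum((1 if cost is None else cost[i])
               for i, (u, v) in enumerate(edges) if part[u] != part[v])

def quotient_is_acyclic(k, part, edges):
    """True iff the quotient graph of the k parts is a DAG (Kahn's algorithm)."""
    succ = [set() for _ in range(k)]
    for u, v in edges:                  # V_i -> V_j iff some arc u -> v crosses the parts
        if part[u] != part[v]:
            succ[part[u]].add(part[v])
    indeg = [0] * k
    for p in range(k):
        for q in succ[p]:
            indeg[q] += 1
    queue = deque(p for p in range(k) if indeg[p] == 0)
    seen = 0
    while queue:
        p = queue.popleft()
        seen += 1
        for q in succ[p]:
            indeg[q] -= 1
            if indeg[q] == 0:
                queue.append(q)
    return seen == k                    # every part popped <=> no cycle in the quotient

# Tiny DAG: 0 -> 1 -> 2 and 0 -> 2.
edges = [(0, 1), (1, 2), (0, 2)]
assert quotient_is_acyclic(2, [0, 0, 1], edges)      # {0,1} | {2}: acyclic, cut = 2
assert not quotient_is_acyclic(2, [0, 1, 0], edges)  # {0,2} | {1}: quotient has a 2-cycle
```

The second assertion shows what makes the problem non-standard: a balanced partition with a small cut can still induce a cyclic quotient graph and hence be infeasible here.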

John R. Gilbert wanted to understand the complexity of the problem, with respect to the undirected version. He is an expert on the subject matter (see, e.g., [2]). He asked what happens if we orient the edges of the $n\times n$ model problem. If you are not familiar with this jargon, it is the $n\times n$ mesh with each node connected to its immediate neighbors in the four main directions, if those neighbors exist. See the small image for an $8\times 8$ example.
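For concreteness, here is a sketch of my own building this model problem in code; the $n\times n$ mesh has $n^2$ vertices and $2n(n-1)$ edges.

```python
def mesh_edges(n):
    """Edges of the n-by-n model-problem mesh: each node is connected to its
    immediate neighbors in the four main directions (when they exist)."""
    edges = []
    for i in range(n):
        for j in range(n):
            if i + 1 < n:
                edges.append(((i, j), (i + 1, j)))  # vertical neighbor
            if j + 1 < n:
                edges.append(((i, j), (i, j + 1)))  # horizontal neighbor
    return edges

# The 8x8 example has 64 vertices and 2*8*7 = 112 edges.
assert len(mesh_edges(8)) == 112
```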

Partitioning these kinds of meshes is a very hard problem. Gary Miller had mentioned their optimal partitioning in his invited talk (about something else). Rob Bisseling [1, Ch. 4] has a great text about partitioning these meshes and their cousins in 3D. I had surveyed known methods in a paper with Anaël Grandjean [3]. In particular, Anaël found out about discrete isoperimetric problems [5] and observed that the shape of an optimal partition, at a corner of the mesh or in its interior, had already been discussed in the related literature. He also showed that the Cartesian partitioning is optimal for the edge cut. Anaël also proposed efficient heuristics which produce connected parts. See the cited papers for nice images. Ours were based on our earlier work with Umit [4].

Anyhow, let’s return to the acyclic partitioning of DAGs, and John’s question. He suggested that we look at the electrical spiral heater to get an orientation. This orientation results in a total order of the vertices. The figures below show the ordering of the $8\times 8$ and the $16\times 16$ meshes. Only some of the edges are shown; all edges of the mesh, including those not shown, go from the lower-numbered vertex to the higher-numbered one.

As seen in the figures above, the spiral ordering is a total order, and hence there is only one acyclic way to cut the meshes into two parts with the same number of vertices; blue and red show the two parts.
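The orderings in the figures are easy to reconstruct in code. The sketch below numbers the mesh along a ring-by-ring spiral (my own reconstruction; the spiral in the figures may start at a different corner or turn the other way) and checks the key point: the spiral is a Hamiltonian path, so orienting every mesh edge from the lower to the higher number yields a total order, and every arc crossing the balanced prefix/suffix cut goes from the blue (lower) half to the red (higher) half, which makes the quotient graph acyclic.

```python
def spiral_order(n):
    """Number the n-by-n mesh vertices 0..n*n-1 along a ring-by-ring spiral."""
    num, k = {}, 0
    top, bottom, left, right = 0, n - 1, 0, n - 1
    while top <= bottom and left <= right:
        for j in range(left, right + 1):           # top row, left to right
            num[(top, j)] = k; k += 1
        for i in range(top + 1, bottom + 1):       # right column, downward
            num[(i, right)] = k; k += 1
        if top < bottom:
            for j in range(right - 1, left - 1, -1):  # bottom row, right to left
                num[(bottom, j)] = k; k += 1
        if left < right:
            for i in range(bottom - 1, top, -1):   # left column, upward
                num[(i, left)] = k; k += 1
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return num

def check_spiral_cut(n):
    num = spiral_order(n)
    pos = {v: xy for xy, v in num.items()}
    # The spiral is a Hamiltonian path: consecutive numbers are mesh neighbors,
    # so the low-to-high orientation induces a total order on the vertices.
    for v in range(n * n - 1):
        (i, j), (p, q) = pos[v], pos[v + 1]
        assert abs(i - p) + abs(j - q) == 1
    # Balanced cut: blue = first half of the order, red = second half.
    blue = n * n // 2
    for (i, j), v in num.items():
        for (p, q) in ((i + 1, j), (i, j + 1)):    # each mesh edge once
            if p < n and q < n:
                u, w = sorted((v, num[(p, q)]))    # the arc goes u -> w (low to high)
                if (u < blue) != (w < blue):       # crossing arc:
                    assert u < blue and w >= blue  # always blue -> red, never back

for n in (2, 4, 8, 16):
    check_spiral_cut(n)
```

Since every crossing arc points from the blue part to the red part, the two-part quotient graph has a single arc and is trivially acyclic; no other balanced cut of a total order has this property.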

Theorem: Consider the $n\times n$ mesh whose edges are oriented following the electrical spiral heater ordering. The unique acyclic cut with $n^2/2$ vertices on each side has $n^2-4n+3$ edges in the cut, for $n\geq 8$.

The theorem can be proved by observing that each blue vertex on the border (excluding the corners) has one arc going to a red vertex; each blue vertex in the interior, except the one labeled $n^2/2$, has two such arcs; and the vertex labeled $n^2/2$ has three such arcs. The condition $n\geq 8$ comes from the assumption that there are blue vertices in the interior of the mesh. This is a lot of edges to cut!

Later, John said that he visited the Maxwell Museum of Anthropology at UNM after the CSC16 workshop, and saw similar designs by the original native New Mexicans.

### References

1. Rob H. Bisseling, Parallel Scientific Computation: A Structured Approach using BSP and MPI, 1st ed., Oxford University Press, 2004.
2. John R. Gilbert, Some nested dissection order is nearly optimal. Inf. Process. Lett. 26(6): 325–328 (1988).
3. Anaël Grandjean and Bora Uçar, On partitioning two dimensional finite difference meshes for distributed memory parallel computers. PDP 2014: 9–16.
4. Bora Uçar and Umit V. Çatalyürek, On the scalability of hypergraph models for sparse matrix partitioning. PDP 2014: 593–600.
5. Da-Lun Wang and Ping Wang, Discrete isoperimetric problems, SIAM Journal on Applied Mathematics, 32(4):860–870 (1977).

## After CSC16

CSC16 was held last week. Kudos to Assefaw and Erik, the chairs of the workshop.

There is so much to talk about. We will have a series of posts about the workshop and related things. Here are some bits and pieces.

The workshop had three invited talks, 19 contributed talks, and eight posters, and was attended by more than 60 people. There will be a proceedings with 11 papers, published by SIAM and hosted on its publication platform.

We also celebrated the 60th birthdays of Alex Pothen and Rob Bisseling.

There was a best paper award. It went to Fredrik Manne, Md. Naim, Håkon Lerring, and Mahantesh Halappanavar for their paper titled On Stable Marriages and Greedy Matching. Congratulations! The citation by the best paper award committee (Uwe Naumann, Alex Pothen, and Sivan Toledo) reads:

for the way the paper brings together several decades of work on stable marriages with the more recent work on approximation algorithms for weighted matchings, and the consequences for the average case complexity of the latter algorithms.

A heads up: the CSC18 meeting will likely be in Bergen, Norway. Erik cracked a joke about this, saying that the best paper awardees should take on organizing the next meeting.

## On HPC Days in Lyon

Last week (6–8 April, 2016) we had an incredible meeting called HPC Days in Lyon. This three-day event featured only invited long talks and invited mini-symposium talks. The meeting was organized with the generous support of Labex MILYON.

Rob Bisseling and Alex Pothen contributed to a mini-symposium on combinatorial scientific computing.

Rob talked about hypergraph partitioning and how to use it with an iterative solver. We usually get this question: how many mat-vecs (or iterations of a solver) does one need to perform to offset the cost of hypergraph partitioning? Rob’s main point in this talk was that one can estimate the number of iterations and spend some more time partitioning the hypergraph, if the number of iterations allows it. He has an ongoing project on optimally bisecting sparse matrices (see the link); his talk included updates (theoretical and practical) on this project. He says he adds a matrix a day to this page. As of now, there are 263 matrices. Chapeau! as the French say.

Also, he said (well, maybe it slipped out after a few glasses of Côtes du Rhône) that a new edition of his book (Parallel Scientific Computation: A Structured Approach using BSP and MPI) will be coming out. There is new material; in particular, a few sections on sorting algorithms and a complete chapter on graph algorithms (mainly matching). Stay tuned! Rob will be at SIAM PP next week. I will try to get more information about his book.

[I have just realized that I did not put Alex’s photo anywhere yet. So let’s have his face too.]

Alex discussed approximation algorithms for matching and $b$-matching problems. He took up the challenge of designing parallel algorithms for matching problems, where concurrency is usually limited. He discussed approximation algorithms with provable guarantees and great parallel performance for the $b$-matching and a related edge cover problem. He also discussed an application of these algorithms to a data privacy problem he has been working on.

Alex arrived in Lyon early, and we did some work. With Alex, we always end up discussing matching problems; this was no exception. We looked at the foundations of bottleneck matching algorithms. Alex and I will be attending SIAM PP16 next week. If you know/like these algorithms, please attend the CSC mini-symposia so that we can talk.

I chaired an invited talk by Yousef Saad.

The talk was 90 minutes long, without a break! It was very engaging and illuminating. I enjoyed it very much and appreciated how he communicates deep mathematics to innocent (or ignorant;)) computer scientists. His two books, Iterative Methods for Sparse Linear Systems (2nd edition) and Numerical Methods for Large Eigenvalue Problems (2nd edition), are available at his web page and attest to this.
Here is a crash course on Krylov subspace methods from his talk.

Let $x_0$ be an initial guess and $r_0=b-Ax_0$ be the initial residual.
Define $K_m=\textrm{span}\{r_0, Ar_0,\ldots,A^{m-1}r_0\}$ and $L_m$ another subspace of dimension $m$.
The basic Krylov step is then:
$x_m=x_0 + \delta$ where $\delta\in K_m$ and $b-Ax_m \perp L_m$.

At this point, the reader/listener gets the principle and starts wondering: which choices of $L_m$ make sense? How do I keep all $m$ vectors? How do I get something orthogonal to them? Yousef had another slide:

1. $L_m=K_m$; the class of Galerkin or orthogonal projection methods (e.g., CG), where $\|x^*-\tilde{x}\|_A=\min_{z\in x_0+K_m}\|x^*-z\|_{A}.$
2. $L_m=AK_m$; the class of minimal residual methods (e.g., ORTHOMIN, GMRES), where $\|b-A\tilde{x}\|_2=\min_{z\in x_0+K_m}\|b-Az\|_2$.

So we learned the alternatives for $L_m$, and we probably guessed correctly that we do not always need to keep all $m$ vectors (e.g., CG), that sometimes we need all of them (e.g., GMRES without restart), and that even then we can take a short-cut and restart. Getting orthogonal vectors could be tougher, especially if we do not store all $m$ vectors. Now that we have a guide, a feeling, and a few questions, we can turn to resources to study.
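The two conditions above are easy to play with numerically. The sketch below is my own illustration, not from Yousef's slides: it builds an orthonormal basis $Q$ of $K_m$ and imposes each condition directly. The choice $L_m=K_m$ leads to a small projected system, and $L_m=AK_m$ to a small least-squares problem; in both cases we verify $b-Ax_m\perp L_m$.

```python
import numpy as np

def krylov_basis(A, r0, m):
    """Orthonormal basis of K_m = span{r0, A r0, ..., A^(m-1) r0} via QR."""
    K = np.column_stack([np.linalg.matrix_power(A, i) @ r0 for i in range(m)])
    Q, _ = np.linalg.qr(K)
    return Q

def galerkin_step(A, b, x0, m):
    """L_m = K_m: solve the projected system (Galerkin condition, CG-like)."""
    r0 = b - A @ x0
    Q = krylov_basis(A, r0, m)
    y = np.linalg.solve(Q.T @ A @ Q, Q.T @ r0)  # enforce Q^T (r0 - A*delta) = 0
    return x0 + Q @ y

def minres_step(A, b, x0, m):
    """L_m = A K_m: minimize ||b - A x||_2 over x0 + K_m (GMRES-like)."""
    r0 = b - A @ x0
    Q = krylov_basis(A, r0, m)
    y, *_ = np.linalg.lstsq(A @ Q, r0, rcond=None)
    return x0 + Q @ y

rng = np.random.default_rng(0)
n, m = 20, 5
M = rng.standard_normal((n, n))
A = M.T @ M + n * np.eye(n)        # SPD, so the projected Galerkin system is nonsingular
b, x0 = rng.standard_normal(n), np.zeros(n)

xg = galerkin_step(A, b, x0, m)
xm = minres_step(A, b, x0, m)
Q = krylov_basis(A, b - A @ x0, m)
assert np.allclose(Q.T @ (b - A @ xg), 0, atol=1e-8)        # residual orthogonal to K_m
assert np.allclose((A @ Q).T @ (b - A @ xm), 0, atol=1e-6)  # residual orthogonal to A K_m
```

Production codes build the basis with Arnoldi (or the Lanczos three-term recurrence when $A$ is symmetric) instead of explicitly forming the power basis, which quickly becomes ill-conditioned; the sketch keeps the power basis only to mirror the definition of $K_m$ above.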

## Good news again!

Professor Thomas F. Coleman has been named among the Class of 2016 of SIAM Fellows. Tom is currently the Ophelia Lazaridis Research Chair at the University of Waterloo, Canada. He served earlier as the Dean of the Faculty of Mathematics at Waterloo (2005–2010), and also as the Director of the Theory Center at Cornell University (1998–2005). Tom’s research contributions are in optimization algorithms, financial optimization, automatic differentiation, and in CSC. Tom, together with Jorge Moré of Argonne National Laboratory, pioneered modeling the estimation of sparse Jacobian and Hessian matrices as graph coloring problems, and thereby developed efficient algorithms for computing these derivative matrices. Tom was the PhD advisor of one of us (Alex Pothen) and of Bruce Hendrickson at Cornell, and through his mentoring and research has profoundly influenced the CSC community.

Xiaoye Sherry Li has also been named among the Class of 2016 of SIAM Fellows (the whole list is here). She is very well known internationally for her work on methods and software for sparse matrix computations. In particular, she is the lead author behind SuperLU (software for solving general sparse systems of linear equations). Her citation also highlights the enabling role of her contributions in large-scale scientific and engineering applications. Sherry has recently been elected to lead the Scalable Solvers Group in Berkeley Lab’s Computational Research Division (CRD).

Congratulations to Tom and Sherry! We are also fortunate to have Sherry serve on the CSC Steering Committee.

Alex and Bora

## A recent survey on direct methods

I have just read a recent survey by Timothy A. Davis, Sivasankaran Rajamanickam, and Wissam Sid-Lakhdar, “A survey of direct methods for sparse linear systems” (link). The authors state their goal in the abstract:

The goal of this survey article is to impart a working knowledge of the underlying theory and practice of sparse direct methods for solving linear systems and least-squares problems, and to provide an overview of the algorithms, data structures, and software available to solve these problems, so that the reader can both understand the methods and know how best to use them.

I very much appreciated the breadth of the survey. It reviews the earlier work on methods for the classical problems (e.g., LU, QR, and Cholesky factorizations) and gives the context of the recent work (e.g., GPU acceleration of the associated software; the more recent problems of updating/downdating; exploiting low-rank approximations for efficiency).

One of the most impressive parts of the survey is its reference list. This one has 604 bibliographic items (if I did not make any errors in counting). There is great scholarly work in collecting 604 bibliographic items, reading through them, and putting them into a well-organized survey. There are virtually no bulk references; all citations come with at least a few words. This assiduous approach got me excited, and I dug into the reference list. The earliest cited works are from 1957 (one by Berge [1], and one by Markowitz [2]); the latest are from 2015 (there are a number of them). There are no cited papers from the years 1958, 1959, 1960, 1962, 1964, and 1965. Here is a histogram of the number of papers per 5-year period (centered at years 1955 to 2015 by increments of 5, e.g., 1955:5:2015).

The histogram tells us at least two things: (i) much of the activity at the foundations of today’s methods is from the years 1990–2000; (ii) the field is very active, considering that the survey gives an overview of the fundamentals, and recent developments that did not fit neatly into the grand/traditional lines of the world of direct methods are only summarized in a relatively short section (Section 12).

I underlined another quotation from the survey:

Well-designed mathematical software has long been considered a cornerstone of scholarly contributions in computational science.

This is a great survey, even for those who know the area. Kudos to Tim, Siva, and Wissam for having crafted it.

### References

1. Claude Berge, Two theorems in graph theory, Proceedings of the National Academy of Sciences of the United States of America 43(9), 842–844, 1957 (link).
2. Harry M. Markowitz, The elimination form of the inverse and its application to linear programming, Management Science, 3 (3), 255–269, 1957 (link).

## More good news

I feel so privileged to be able to post these announcements.

Aydın Buluç, a great scholar and a good friend, has received the IEEE TCSC Award for Excellence for Early Career Researchers. This award recognizes up to three individuals who have made outstanding, influential, and potentially long-lasting contributions in the field of scalable computing within five years of receiving their PhD degree as of January 1 of the year of the award. Aydın received a plaque at the SC15 conference, held in Austin, TX, on November 15–20, 2015.

Congratulations Aydın!

## Some great news

We have received some very good news. Three of our colleagues received recognition from two different technical societies.

Umit V. Catalyurek and Tim Davis have been elevated to IEEE Fellow. Umit’s elevation came with the citation “for contributions to combinatorial scientific computing and parallel computing” and Tim’s with “for contributions to sparse matrix algorithms and software”. This is a great accomplishment, as there is a rigorous evaluation procedure. Here is a link to the complete list of 2016 IEEE Fellows.

Ali Pinar has been named an ACM Distinguished Scientist. This attests to his great contributions to the advancement of the science of computing, and to building the knowledge base within the field of computer science. Here is a link to the complete list of ACM Distinguished Members.

Here is what Tim says on this occasion:

It is really an amazing honor.  IEEE is the first association I joined, way back in the early 80s when I was an undergraduate at Purdue.  So it brings back many fond memories.  I used SPICE in my undergrad days for my circuit simulation homework problems … and now there are circuit simulators out there, commercial and government-lab, that use my solvers.  It’s hard to believe that it’s come full circle, and it’s very encouraging news to hear that I was elected as an IEEE Fellow.  I have had lots to be thankful for this Thanksgiving season (I got the news just days before Thanksgiving).

Congratulations to Umit, Tim and Ali for the recognition!