On HPC Days in Lyon

Last week (6–8 April, 2016) we had an incredible meeting called HPC Days in Lyon. This three-day event featured only invited long talks and invited mini-symposium talks. The meeting was organized with the generous support of Labex MILYON.

Rob Bisseling and Alex Pothen contributed to a mini-symposium on combinatorial scientific computing.

Rob Bisseling

Rob talked about hypergraph partitioning and how to use it with an iterative solver. We usually get this question: how many mat-vecs (or iterations in a solver) does one need to perform to offset the cost of hypergraph partitioning? Rob's main point in this talk was that one can estimate the number of iterations and spend more time partitioning the hypergraph, if the number of iterations allows it. He has an ongoing project on optimally bisecting sparse matrices (see the link); his talk included theoretical and practical updates to this project. He says he adds a matrix a day to this page. As of now, there are 263 matrices. Chapeau! as the French say.
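To put the trade-off in concrete terms, here is a back-of-the-envelope model of my own (not from Rob's slides). Suppose partitioning costs T_{\textrm{part}} seconds and reduces the communication time of a single mat-vec from t_{\textrm{plain}} to t_{\textrm{hyp}}. Then partitioning pays off over m iterations whenever m\,(t_{\textrm{plain}}-t_{\textrm{hyp}}) > T_{\textrm{part}}; an estimate of m thus tells you how large a partitioning budget you can afford.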

Also, he said (well, maybe it slipped out after a few glasses of Côtes du Rhône) that a new edition of his book (Parallel Scientific Computation: A Structured Approach using BSP and MPI) will be coming out. There is new material; in particular, a few sections on sorting algorithms and a complete chapter on graph algorithms (mainly matching). Stay tuned! Rob will be at SIAM PP next week. I will try to get more information about his book.

[I have just realized that I did not put Alex’s photo anywhere yet. So let’s have his face too.]

Alex Pothen

Alex discussed approximation algorithms for matching and b-matching problems. He took up the challenge of designing parallel algorithms for matching problems, where concurrency is usually limited. He discussed approximation algorithms with provable guarantees and great parallel performance for b-matching and a related edge cover problem. He also discussed an application of these algorithms to a data privacy problem he has been working on.
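For readers new to these problems, here is a minimal Python sketch of the classic greedy 1/2-approximation for maximum-weight matching, as context only; the algorithms in Alex's talk are more sophisticated, parallel variants with their own guarantees.

def greedy_matching(edges):
    # edges: list of (u, v, weight) tuples of an undirected graph.
    # Scan edges in non-increasing weight order and keep an edge when
    # both endpoints are still free; the resulting matching has at
    # least half the weight of a maximum-weight matching.
    matched = set()
    matching = []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching

For example, greedy_matching([(0, 1, 3.0), (1, 2, 2.0), (2, 3, 2.5)]) returns [(0, 1, 3.0), (2, 3, 2.5)].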

Alex arrived in Lyon earlier and we did some work. With Alex, we always end up discussing matching problems; this was no exception. We looked at the foundations of bottleneck matching algorithms. Alex and I will be attending SIAM PP16 next week. If you know/like these algorithms, please attend the CSC mini-symposia so that we can talk.

I chaired an invited talk by Yousef Saad.

Yousef Saad

The talk was 90 minutes long, without a break! It was very engaging and illuminating. I enjoyed it very much and appreciated how he communicates deep math to innocent (or ignorant;)) computer scientists. His two books, Iterative Methods for Sparse Linear Systems (2nd edition) and Numerical Methods for Large Eigenvalue Problems (2nd edition), are available at his web page and attest to this.
Here is a crash course on Krylov subspace methods from his talk.

Let x_0 be an initial guess and r_0=b-Ax_0 be the initial residual.
Define K_m=\textrm{span}\{r_0, Ar_0,\ldots,A^{m-1}r_0\}, and let L_m be another subspace of dimension m.
The basic Krylov step is then:
x_m=x_0 + \delta, where \delta\in K_m and b-Ax_m \perp L_m.

At this point, the reader/listener gets the principle and starts wondering: what choices of L_m make sense? How do I keep all m vectors? How do I get something orthogonal to them? Yousef had another slide:

1. L_m=K_m; the class of Galerkin or orthogonal projection methods (e.g., CG), where \|x^*-\tilde{x}\|_A=\min_{z\in x_0+K_m}\|x^*-z\|_{A}.
2. L_m=AK_m; the class of minimal residual methods (e.g., ORTHOMIN, GMRES), where \|b-A\tilde{x}\|_2=\min_{z\in x_0+K_m}\|b-Az\|_2.

So we learned the alternatives for L_m, and we probably guessed correctly that in some cases we do not need to keep all m vectors (e.g., CG), in others we need them all (e.g., GMRES without restarting), and even then we can truncate or restart. Getting orthogonal vectors can be tougher, especially if we do not store all m vectors. Now that we have a guide, a feeling, and a few questions, we can turn to the resources to study.
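To make the projection concrete, here is a minimal numpy sketch of the Galerkin choice L_m=K_m (my own illustration, essentially the Full Orthogonalization Method, not code from the talk): build an orthonormal basis of K_m with Arnoldi, then enforce b-Ax_m \perp K_m by solving a small projected system. Practical solvers such as CG exploit symmetry to avoid storing all m basis vectors.

import numpy as np

def galerkin_krylov_step(A, b, x0, m):
    # Build an orthonormal basis V of K_m = span{r0, A r0, ..., A^{m-1} r0}
    # by Arnoldi with modified Gram-Schmidt; H = V^T A V is the small
    # Hessenberg matrix that the projection produces.
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    V = np.zeros((b.shape[0], m))
    H = np.zeros((m, m))
    V[:, 0] = r0 / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        if j + 1 < m:  # no breakdown handling: this is a sketch
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
    # Galerkin condition V^T (b - A x_m) = 0 reduces to H y = beta * e_1.
    e1 = np.zeros(m)
    e1[0] = beta
    y = np.linalg.solve(H, e1)
    return x0 + V @ y  # x_m = x_0 + delta, with delta in K_m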

Good news again!

Professor Thomas F. Coleman has been named to the Class of 2016 of SIAM Fellows. Tom is currently the Ophelia Lazaridis Research Chair at the University of Waterloo, Canada. He earlier served as the Dean of the Faculty of Mathematics at Waterloo (2005-2010), and as the Director of the Theory Center at Cornell University (1998-2005). Tom's research contributions are in optimization algorithms, financial optimization, automatic differentiation, and CSC. Together with Jorge Moré of Argonne National Laboratory, Tom pioneered modeling the estimation of sparse Jacobian and Hessian matrices as graph coloring problems, and thereby developed efficient algorithms for computing these derivative matrices. Tom was the PhD advisor of one of us (Alex Pothen) and of Bruce Hendrickson at Cornell, and through his mentoring and research has profoundly influenced the CSC community.
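As a rough illustration of that modeling idea (a toy sketch of my own, not Tom and Jorge's actual algorithms): columns of a Jacobian that share no nonzero row can be estimated together with a single finite-difference evaluation, and finding such groups is exactly a distance-1 coloring of the column intersection graph.

import numpy as np

def color_columns(S):
    # S: boolean sparsity pattern of a Jacobian (rows x columns).
    # Greedily give each column the smallest color not used by an
    # earlier column that shares a nonzero row with it; all columns of
    # one color can be estimated with one finite difference.
    n = S.shape[1]
    colors = [-1] * n
    for j in range(n):
        used = {colors[k] for k in range(j) if np.any(S[:, j] & S[:, k])}
        c = 0
        while c in used:
            c += 1
        colors[j] = c
    return colors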

Xiaoye Sherry Li has also been named to the Class of 2016 of SIAM Fellows (the whole list is here). She is very well known internationally for her work on methods and software for sparse matrix computations; in particular, she is the lead author of SuperLU (software for solving general sparse systems of linear equations). Her citation also highlights the enabling role of her contributions in large-scale scientific and engineering applications. Sherry has recently been elected to lead the Scalable Solvers Group in Berkeley Lab's Computational Research Division (CRD).

Congratulations to Tom and Sherry! We are also fortunate to have Sherry serve on the CSC Steering Committee.

Alex and Bora