Last week (6–8 April 2016) we had an incredible meeting called HPC Days in Lyon. This three-day event featured only invited long talks and invited mini-symposium talks. The meeting was organized with the generous support of the Labex MILYON.
Rob Bisseling and Alex Pothen contributed to a mini-symposium on combinatorial scientific computing.

Rob talked about hypergraph partitioning and how to use it with an iterative solver. We usually get this question: how many mat-vecs (or iterations of a solver) does one need to perform to offset the cost of hypergraph partitioning? Rob's main point in this talk was that one can estimate the number of iterations and spend some more time partitioning the hypergraph, if the number of iterations allows it. He has an ongoing project on optimally bisecting sparse matrices (see the link); his talk included theoretical and practical updates to this project. He says he adds a matrix a day to this page. As of now, there are 263 matrices. Chapeau! as the French say.
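To make the trade-off concrete, here is a back-of-envelope sketch. The cost model and the numbers are mine, not Rob's, and purely illustrative: partitioning pays off when the per-mat-vec saving, accumulated over the estimated number of iterations, exceeds the one-time partitioning cost.

```python
# A hypothetical break-even estimate (not from Rob's talk): hypergraph
# partitioning pays off when the time saved per mat-vec, multiplied by
# the estimated number of solver iterations, exceeds its one-time cost.

def partitioning_pays_off(partition_time, matvec_time_naive,
                          matvec_time_partitioned, estimated_iterations):
    """Return True if the one-time partitioning cost is recovered
    within the estimated number of solver iterations."""
    saving_per_matvec = matvec_time_naive - matvec_time_partitioned
    return saving_per_matvec * estimated_iterations > partition_time

# Illustrative numbers (seconds), purely made up:
print(partitioning_pays_off(partition_time=30.0,
                            matvec_time_naive=0.050,
                            matvec_time_partitioned=0.035,
                            estimated_iterations=2500))  # True: 37.5 s saved > 30 s
```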
Also, he said (well, maybe it slipped out after a few glasses of Côtes du Rhône) that a new edition of his book (Parallel Scientific Computation: A Structured Approach using BSP and MPI) will be coming out. There is new material; in particular, a few sections on sorting algorithms and a complete chapter on graph algorithms (mainly matching). Stay tuned! Rob will be at SIAM PP next week. I will try to get more information about his book.
[I have just realized that I did not put Alex’s photo anywhere yet. So let’s have his face too.]

Alex discussed approximation algorithms for matching and $b$-matching problems. He took up the challenge of designing parallel algorithms for matching problems, where concurrency is usually limited. He presented approximation algorithms with provable guarantees and great parallel performance for the $b$-matching problem and a related edge cover problem. He also discussed an application of these algorithms in a data privacy problem he has been working on.
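For flavor, here is a minimal sketch of the classic sequential greedy 1/2-approximation for maximum weight matching. This is not Alex's algorithm (his are designed for parallelism, which plain greedy lacks), but it shows the kind of guarantee involved: matching edges in non-increasing weight order yields at least half of the optimal weight.

```python
# Greedy 1/2-approximation for maximum weight matching: scan edges in
# non-increasing weight order and take an edge whenever both endpoints
# are still free. This is the sequential baseline; the parallel
# algorithms in the talk achieve the same guarantee with far more
# concurrency.

def greedy_matching(edges):
    """edges: iterable of (weight, u, v). Returns a list of matched
    edges whose total weight is at least half of the optimum."""
    matched = set()
    matching = []
    for w, u, v in sorted(edges, reverse=True):
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v, w))
    return matching

print(greedy_matching([(5, 'a', 'b'), (4, 'b', 'c'), (3, 'c', 'd')]))
# [('a', 'b', 5), ('c', 'd', 3)]
```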
Alex arrived in Lyon a little early and we did some work. With Alex, we always end up discussing matching problems; this was no exception. We looked at the foundations of bottleneck matching algorithms. Alex and I will be attending SIAM PP16 next week. If you know/like these algorithms, please attend the CSC mini-symposia so that we can talk.
I chaired an invited talk by Yousef Saad.

The talk was 90 minutes long, without a break! It was very engaging and illuminating. I enjoyed it very much and appreciated how he communicates deep math to innocent (or ignorant ;)) computer scientists. His two books, Iterative Methods for Sparse Linear Systems (2nd edition) and Numerical Methods for Large Eigenvalue Problems (2nd edition), are available at his web page and attest to this.
Here is a crash course on Krylov subspace methods from his talk.
Let $x_0$ be an initial guess and $r_0 = b - Ax_0$ be the initial residual. Define $K_m = \mathrm{span}\{r_0, Ar_0, A^2 r_0, \ldots, A^{m-1} r_0\}$ and $L_m$, another subspace of dimension $m$. The basic Krylov step is then: $x_m = x_0 + \delta$, where $\delta \in K_m$ and $b - Ax_m \perp L_m$.
At this point, the reader/listener gets the principle and starts wondering: what are the choices of $L_m$ that make sense? How do I keep all $m$ basis vectors? How do I get something orthogonal to them? Yousef had another slide:
1. $L_m = K_m$; the class of Galerkin, or orthogonal projection, methods (e.g., CG), where $\|x_* - x_m\|_A = \min_{x \in x_0 + K_m} \|x_* - x\|_A$ when $A$ is SPD ($x_*$ being the exact solution).
2. $L_m = AK_m$; the class of minimal residual methods (e.g., ORTHOMIN, GMRES), where $\|b - Ax_m\|_2 = \min_{x \in x_0 + K_m} \|b - Ax\|_2$.
So we learned the alternatives for $L_m$, and we probably guessed correctly that we do not always need to keep all $m$ vectors (e.g., CG), sometimes need all of them (e.g., GMRES without restart), and even when we need them we can take a short-cut and restart. Getting orthogonal vectors could be tougher, especially if we do not store all $m$ vectors. Now that we have a guide, a feeling, and a few questions, we can turn to the resources to study.
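To make the $L_m = K_m$ case concrete, here is a minimal, unpreconditioned conjugate gradient sketch in Python/NumPy, following the textbook formulation (see Saad's book for the real treatment). Note that it keeps only a handful of vectors at any time, as promised above.

```python
import numpy as np

# A minimal sketch of conjugate gradients, the L_m = K_m (Galerkin)
# example above, for a symmetric positive definite A. Textbook CG,
# without preconditioning; only a constant number of vectors is stored.

def cg(A, b, x0, tol=1e-10, max_iter=1000):
    x = x0.copy()
    r = b - A @ x          # initial residual r_0
    p = r.copy()           # first search direction spans K_1
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)   # step length: minimizes the A-norm of the error along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # new direction, A-conjugate to the previous ones
        rs = rs_new
    return x

# Small SPD example:
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(cg(A, b, np.zeros(2)))  # approx [0.0909, 0.6364]
```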