CSC Mini-symposium at SIAM PP16

The SIAM Conference on Parallel Processing for Scientific Computing (PP16) took place in Paris, April 12–15, 2016. This was the first SIAM PP held outside the US.

As announced before, we had a mini-symposium on CSC in three sessions. SIAM keeps records of the program, the speakers, and the abstracts (here). As the organizers, we thought we should take this one step further and make the PDFs of the talks available as well, wherever possible.

Here is the list of talks, in the order of speaker line-up.


1. Computational surgery: Visualization with augmented matrices (Alex Pothen)

Authors: Alex Pothen (Purdue University, USA) and Mu Wang (Purdue University, USA)

Abstract. Not available yet.

Talk: No files yet.

Comments: Alex had a video showcasing the use of the solver in real time, updating the mesh and showing the result of the surgery. The video was prepared with the help of professionals. The audience was captivated and silent during the video!


2. Parallel combinatorial algorithms in sparse matrix computation? (Esmond Ng)

Authors: Mathias Jacquelin (Lawrence Berkeley National Laboratory, USA), Esmond Ng (Lawrence Berkeley National Laboratory, USA), Barry Peyton (Dalton State College, USA), Yili Zheng (Lawrence Berkeley National Laboratory, USA), and Kathy Yelick (Lawrence Berkeley National Laboratory, USA).

Abstract. Combinatorial techniques are used in several phases of sparse matrix computation. For large-scale problems, while numerical phases are often executed in parallel, most of these combinatorial techniques are serial and can become bottlenecks. We are investigating the extent to which some of the combinatorial techniques can be performed in parallel.

Talk: No files yet.

Comments: RCM was discussed as the showcase. I think Aydin and Ariful were also involved (the second slide of the talk had this information). Given the group's experience in distributed-memory BFS, it is no surprise that RCM is implemented on top of it. The target was not small-world or social-network graphs; the focus was on graphs with large diameters, so the parallelization problem is rather tough. Sorting (by vertex degrees) is required for a formal RCM (I could not catch which sorting algorithm was used); this step incurred cost and was detrimental to performance. Perhaps, in an application, one can skip the sorting and obtain a variant of RCM (after all, RCM is a heuristic). Also, Esmond pointed out that the motivation for this work is that the matrix/graph may already be distributed in another context: instead of collecting the global data on a central processor, solving the problem there, and distributing the result back to everyone, one could solve the problem in parallel. In RCM, pseudo-peripheral nodes are traditionally used as starting vertices; they are again found by BFS. There is recent work on finding graph diameters with a few rounds of BFS; maybe review this. A minimal serial RCM sketch follows below.
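For readers who want to see where that sorting enters, here is a minimal serial RCM sketch in plain Python (an illustration under my own assumptions, not the distributed-memory implementation of the talk).

```python
from collections import deque

def rcm(adj, start):
    """Reverse Cuthill-McKee ordering of the component containing `start`.

    adj: dict mapping each vertex to a list of neighbors (undirected graph);
    start: a (pseudo-)peripheral vertex.  Serial sketch for illustration only.
    """
    order = []
    visited = {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        # The sorting step discussed above: enqueue unvisited neighbors
        # in order of increasing degree.
        for w in sorted((u for u in adj[v] if u not in visited),
                        key=lambda u: len(adj[u])):
            visited.add(w)
            queue.append(w)
    return order[::-1]  # reverse the Cuthill-McKee order
```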


3. Parallel graph matching algorithms using matrix algebra (Ariful Azad)

Authors: Ariful Azad (Lawrence Berkeley National Laboratory, USA) and Aydin Buluç (Lawrence Berkeley National Laboratory, USA).

Abstract. We present distributed-memory parallel algorithms for computing matchings in bipartite graphs. We consider both exact and approximate algorithms for cardinality and weighted matching problems. We replace the asynchronous data access patterns of traditional matching algorithms with a small subset of more structured, bulk-synchronous functions based on matrix algebra. Relying on communication-avoiding algorithms for the underlying matrix-algebraic modules, different matching algorithms achieve good speedups on tens of thousands of cores on current supercomputers.

Talk: (arifulAzad-pp16) file.


4. On the Birkhoff–von Neumann decomposition (Bora Uçar)

Authors: Michele Benzi (Emory University, USA), Fanny Dufossé (Inria Lille-Nord Europe, France),  Kamer Kaya (Sabanci University, Turkey), Alex Pothen (Purdue University, USA), and Bora Uçar (CNRS and ENS Lyon, France).

Abstract. Not available yet.

Talk: (boraUcar-pp16) file.


5. A Partitioning problem for load balancing and reducing communication from the field of quantum chemistry (Edmond Chow)

Authors: Edmond Chow (Georgia Institute of Technology, USA)

Abstract. We present a combinatorial problem and potential solutions arising in parallel computational chemistry. The Hartree-Fock (HF) method has a very complex data access pattern. Much research has been devoted over the past 20 years to parallelizing this important method, based primarily on intuition and experience. A formal approach for parallelizing HF while reducing communication may come from graph and hypergraph partitioning. Besides providing a potential solution, this approach may also shed light on the optimality of existing approaches.

Talk: (edmondChow-pp16) file.


6. Community detection on GPU (Fredrik Manne)

Authors: Md Naim (University of Bergen, Norway) and Fredrik Manne (University of Bergen, Norway).

Abstract. There has been considerable interest in community detection for finding the modularity structure in real-world data. Such data sets can arise from social networks as well as various scientific domains. The Louvain method is one popular method for this problem, as it is simple and fast. It can also be used to detect hierarchical structures in the data. However, its inherently sequential nature and cache-unfriendly workloads make it difficult to parallelize. This is particularly true for co-processor architectures. In this work we show how these obstacles can be overcome and present results from implementing the algorithm on a GPU.

Talk: (fredrikManne-pp16) file.

Comments: Md Naim could not attend the conference (dommage), and Fredrik gave the talk in his place.


7. Scalable parallel algorithms for de novo assembly of complex genomes (Evangelos Georganas)

Authors: Evangelos Georganas (University of California, Berkeley, USA)

Abstract. A critical problem for computational genomics is the problem of de novo genome assembly: the development of robust scalable methods for transforming short randomly sampled sequences into the contiguous and accurate reconstruction of complex genomes. While advanced methods exist for assembling the small and haploid genomes of prokaryotes, the genomes of eukaryotes are more complex. We address this challenge head on by developing HipMer, an end-to-end high performance de novo assembler designed to scale to massive concurrencies. HipMer employs an efficient Unified Parallel C (UPC) implementation and computes the assembly of the human genome in only 8.4 minutes using 15,360 cores of a Cray XC30 system.

Talk: (evangelosGeorganas-pp16) file.


8. Faster and more scalable sparse matrix-matrix multiplication (Aydin Buluç)

Authors: Aydin Buluç (Lawrence Berkeley National Laboratory, USA)

Abstract. We present a faster and more scalable implementation of the sparse matrix-matrix multiplication (SpGEMM) kernel. The implementation exploits multiple levels of parallelism, using a scalable three-dimensional algorithm for inter-node parallelism and multithreaded subroutines for intra-node parallelism. The three-dimensional formalism has characteristics that are special for the sparse case, which we thoroughly explain. We then provide results on applications in Markov graph clustering and Algebraic Multigrid based graph coarsening.

Talk: (aydinBuluc-pp16) file.
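For readers new to SpGEMM, here is a tiny row-wise (Gustavson-style) sketch of the serial kernel that such parallel implementations distribute and thread; this is a plain Python illustration, not the code behind the talk.

```python
def spgemm(A_rows, B_rows):
    """Row-wise (Gustavson-style) sparse matrix-matrix multiplication.

    A_rows, B_rows: lists of dicts, one {column: value} dict per row.
    Returns C = A * B in the same format.  Serial sketch for illustration.
    """
    C_rows = []
    for a_row in A_rows:
        acc = {}                                 # sparse accumulator for the current row of C
        for k, a_val in a_row.items():           # for every nonzero A[i, k] ...
            for j, b_val in B_rows[k].items():   # ... scatter a_val * B[k, :]
                acc[j] = acc.get(j, 0.0) + a_val * b_val
        C_rows.append(acc)
    return C_rows
```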


9. Directed graph partitioning (Umit V. Catalyurek)

Authors: Julien Herrmann (The Ohio State University, USA), Umit V. Catalyurek (The Ohio State University, USA), Kamer Kaya (Sabanci University, Turkey), and Bora Uçar (CNRS and ENS Lyon, France).

Abstract. In scientific computing, directed graphs are commonly used for modeling dependencies among entities. However, when some of these problems are modeled as graph partitioning problems, directionality is generally ignored. Accurate modeling of some of the problems necessitates taking the directionality into account, which adds constraints that cannot be easily addressed by the current state-of-the-art partitioning methods and tools. In this talk, we will discuss some example problems, models, and potential solution approaches for them.

Talk: (pdf) file.


10. Parallel approximation algorithms for b-Edge Covers and data privacy (Arif Khan)

Authors: Arif Khan (Purdue University, USA), and Alex Pothen (Purdue University, USA).

Abstract. We propose a new 3/2-approximation algorithm, called LSE, for computing b-Edge Covers and its application to a data privacy problem called adaptive k-Anonymity. b-Edge Cover is a special case of the well-known Set Multicover problem and also a generalization of the Edge Cover problem in graphs. The objective is to choose a subset C of the edges in an edge-weighted graph, such that at least a specified number b(v) of edges in C are incident on each vertex v and the sum of the edge weights is minimized. We implement the algorithm on serial and shared-memory parallel processors and compare its performance against a collection of inherently sequential approximation algorithms that have been proposed for the Set Multicover problem. With LSE, i) we propose the first shared-memory parallel algorithm for the adaptive k-Anonymity problem, and ii) we give new theoretical results regarding privacy guarantees that are significantly stronger than the best previously known results.

Talk: (arifKhan-pp16) file.
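For intuition about the problem itself (not about LSE, whose details are in the talk), here is a naive greedy heuristic for b-Edge Cover in plain Python; it repeatedly picks the edge with the smallest weight per unit of remaining deficiency and makes no approximation-ratio claim.

```python
def greedy_b_edge_cover(edges, b):
    """Naive greedy heuristic for minimum-weight b-Edge Cover (illustration only:
    this is NOT the LSE algorithm of the talk and claims no 3/2 guarantee).

    edges: list of (u, v, weight) with vertices numbered 0..n-1;
    b: list giving the required coverage b(v) of each vertex v.
    """
    deficit = list(b)
    remaining = set(range(len(edges)))
    cover = []
    while any(d > 0 for d in deficit) and remaining:
        # Effective weight of an edge: its weight divided by the number of
        # still-deficient endpoints it would help cover.
        def effective(i):
            u, v, w = edges[i]
            gain = (deficit[u] > 0) + (deficit[v] > 0)
            return w / gain if gain else float("inf")
        best = min(remaining, key=effective)
        if effective(best) == float("inf"):
            break  # no remaining edge touches a deficient vertex
        u, v, _ = edges[best]
        cover.append(edges[best])
        remaining.remove(best)
        deficit[u] = max(deficit[u] - 1, 0)
        deficit[v] = max(deficit[v] - 1, 0)
    return cover
```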


11. Clustering sparse matrices with information from both numerical values and pattern (Daniel Ruiz)

Authors: Iain S. Duff (Science & Technology Facilities Council, United Kingdom and CERFACS, Toulouse, France), Philip Knight (University of Strathclyde, United Kingdom), Sandrine Mouysset (Université de Toulouse, France), Daniel Ruiz (ENSEEIHT, France), and Bora Uçar (CNRS and ENS Lyon, France).

Abstract. Given any square fully indecomposable matrix A, we can apply a two-sided diagonal scaling to |A| to render it doubly stochastic. The Perron-Frobenius theorem is a key tool to exploit, and we aim to use spectral properties of doubly stochastic matrices to reveal hidden block structure in matrices. We also combine this with classical graph analysis techniques to design partitioning algorithms for large sparse matrices based on both numerical values and pattern information.

Talk: (danielRuiz-pp16) file.
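As background on the scaling step (this is the classical Sinkhorn-Knopp iteration, not the authors' clustering algorithm), the two-sided diagonal scaling can be sketched as follows.

```python
import numpy as np

def sinkhorn_knopp(A, max_iter=1000, tol=1e-8):
    """Classical Sinkhorn-Knopp iteration (illustration only): find positive
    vectors r, c such that diag(r) @ |A| @ diag(c) is doubly stochastic.
    Converges when A is fully indecomposable.
    """
    B = np.abs(np.asarray(A, dtype=float))
    n = B.shape[0]
    r = np.ones(n)
    c = np.ones(n)
    for _ in range(max_iter):
        c = 1.0 / (B.T @ r)                 # normalize the column sums
        r = 1.0 / (B @ c)                   # normalize the row sums
        S = (r[:, None] * B) * c[None, :]   # current scaled matrix
        if np.max(np.abs(S.sum(axis=0) - 1.0)) < tol:  # rows are exact; check columns
            break
    return r, c
```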


12. Parallel graph coloring on manycore architectures (Mehmet Deveci)

Authors: Mehmet Deveci (Sandia National Laboratories, USA), Erik Boman (Sandia National Laboratories, USA), and Siva Rajamanickam (Sandia National Laboratories, USA).

Abstract. In scientific computing, the problem of finding sets of independent tasks is usually addressed with graph coloring. We study performance-portable graph coloring algorithms for many-core architectures. We propose a novel edge-based algorithm and enhancements of the speculative Gebremedhin-Manne algorithm that exploit these architectures. We show superior quality and execution time of the proposed algorithms on GPUs and Xeon Phi compared to previous work. We present effects of coloring on applications such as Gauss-Seidel preconditioned solvers.

Talk: (mehmetDeveci-pp16) file.
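As background (this is the textbook sequential greedy rule, not the GPU algorithms of the talk), distance-1 coloring can be sketched as below; speculative approaches apply this rule to many vertices concurrently and then iteratively fix the resulting conflicts.

```python
def greedy_coloring(adj):
    """Sequential greedy distance-1 coloring (illustration only).

    adj: dict mapping each vertex to an iterable of its neighbors.
    Assigns each vertex the smallest color not used by its colored neighbors.
    """
    color = {}
    for v in adj:
        forbidden = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in forbidden:
            c += 1
        color[v] = c
    return color
```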


On HPC Days in Lyon

Last week (April 6–8, 2016) we had an incredible meeting called HPC Days in Lyon. This three-day event featured only invited long talks and invited mini-symposium talks. The meeting was organized with the generous support of the Labex MILYON.

Rob Bisseling and Alex Pothen contributed to a mini-symposium on combinatorial scientific computing.

[Photo: Rob Bisseling]

Rob talked about hypergraph partitioning and how to use it with an iterative solver. We often get this question: how many mat-vecs (or iterations of a solver) does one need to perform to offset the cost of hypergraph partitioning? Rob's main point in this talk was that one can estimate the number of iterations and spend some more time partitioning the hypergraph, if that number justifies it. He has an ongoing project on optimally bisecting sparse matrices (see the link); his talk included updates (theoretical and practical) on this project. He says he adds a matrix a day to that page. As of now, there are 263 matrices. Chapeau! as the French say.

Also, he said (well, maybe it slipped out after a few glasses of Côtes du Rhône) that a new edition of his book (Parallel Scientific Computation: A Structured Approach using BSP and MPI) will be coming out. There is new material; in particular, a few sections on sorting algorithms and a complete chapter on graph algorithms (mainly matching). Stay tuned! Rob will be at SIAM PP next week; I will try to get more information about his book.

[I have just realized that I did not put Alex’s photo anywhere yet. So let’s have his face too.]

[Photo: Alex Pothen]

Alex discussed approximation algorithms for matching and b-matching problems. He took up the challenge of designing parallel algorithms for matching problems, where concurrency is usually limited. He discussed approximation algorithms with provable guarantees and great parallel performance for b-matching and a related edge cover problem. He also discussed an application of these algorithms to a data privacy problem he has been working on.

Alex arrived in Lyon a bit earlier and we did some work. With Alex, we always end up discussing matching problems, and this was no exception: we looked at the foundations of bottleneck matching algorithms. Alex and I will be attending SIAM PP16 next week. If you know/like these algorithms, please attend the CSC mini-symposium sessions so that we can talk.

I chaired an invited talk by Yousef Saad.

[Photo: Yousef Saad]

The talk was 90 minutes long, without a break! It was very engaging and illuminating. I enjoyed it very much and appreciated how he communicates deep mathematics to innocent (or ignorant ;)) computer scientists. His two books, Iterative Methods for Sparse Linear Systems (2nd edition) and Numerical Methods for Large Eigenvalue Problems (2nd edition), are available at his web page and attest to this.
Here is a crash course on Krylov subspace methods from his talk.

Let x_0 be an initial guess and r_0=b-Ax_0 be the initial residual.
Define K_m=\textrm{span}\{r_0, Ar_0,\ldots,A^{m-1}r_0\} and let L_m be another subspace of dimension m.
The basic Krylov step is then:
x_m=x_0 + \delta where \delta\in K_m and b-Ax_m \perp L_m.

At this point, the reader/listener gets the principle and starts wondering: which choices of L_m make sense? How do I keep all m vectors? How do I get something orthogonal to them? Yousef had another slide:

1. L_m=K_m; class of Galerkin or orthogonal projection methods (e.g., CG), where \|x^*-\tilde{x}\|_{A}=\min_{z\in K_m}\|x^*-z\|_{A}.
2. L_m=AK_m; class of minimal residual methods (e.g., ORTHOMIN, GMRES) where \|b-A\tilde{x}\|_2=\min_{z\in K_m}\|b-Az\|_2.

So we learned the alternatives for L_m, and we probably guessed correctly that we do not always need to keep all m vectors (e.g., CG), that sometimes we need all of them (e.g., GMRES without restart), and that even then we can cut things short and restart. Getting orthogonal vectors can be tougher, especially if we do not store all m vectors. Now that we have a guide, a feeling, and a few questions, we can turn to the resources to study.
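To make the L_m = K_m case concrete, here is a bare-bones conjugate gradient iteration in NumPy (a sketch under the usual assumption that A is symmetric positive definite, with no preconditioning); it realizes the Galerkin condition b - Ax_m \perp K_m without ever storing the m basis vectors.

```python
import numpy as np

def cg(A, b, x0=None, tol=1e-8, max_iter=1000):
    """Bare-bones conjugate gradient for symmetric positive definite A
    (a sketch of the L_m = K_m case; dense NumPy A, no preconditioning)."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
    r = b - A @ x                  # initial residual r_0
    p = r.copy()                   # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # new A-conjugate direction
        rs = rs_new
    return x
```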

Good news again!

Professor Thomas F. Coleman has been named among the Class of 2016 of SIAM Fellows. Tom is currently the Ophelia Lazaridis Research Chair at the University of Waterloo, Canada. He earlier served as the Dean of the Faculty of Mathematics at Waterloo (2005–2010) and as the Director of the Theory Center at Cornell University (1998–2005). Tom's research contributions are in optimization algorithms, financial optimization, automatic differentiation, and CSC. Together with Jorge Moré of Argonne National Laboratory, Tom pioneered modeling the estimation of sparse Jacobian and Hessian matrices as graph coloring problems, and thereby developed efficient algorithms for computing these derivative matrices. Tom was the PhD advisor of one of us (Alex Pothen) and of Bruce Hendrickson at Cornell, and through his mentoring and research he has profoundly influenced the CSC community.

Xiaoye Sherry Li has also been named among the Class of 2016 of SIAM Fellows (the whole list is here). She is very well known internationally for her work on methods and software for sparse matrix computations. In particular, she is the lead author behind SuperLU (software for solving general sparse systems of linear equations). Her citation also highlights the enabling role of her contributions in large-scale scientific and engineering applications. Sherry has been recently elected to lead the Scalable Solvers Group in Berkeley Lab’s Computational Research Division (CRD).

Congratulations to Tom and Sherry! We are also fortunate  to have Sherry serve on the CSC Steering Committee.

Alex and Bora

A recent survey on direct methods

I have just read a recent survey by Timothy A. Davis, Sivasankaran Rajamanickam, and Wissam Sid-Lakhdar, “A survey of direct methods for sparse linear systems” (link). The authors state their goal in the abstract:

The goal of this survey article is to impart a working knowledge of the underlying theory and practice of sparse direct methods for solving linear systems and least-squares problems, and to provide an overview of the algorithms, data structures, and software available to solve these problems, so that the reader can both understand the methods and know how best to use them.


I very much appreciated the breadth of the survey. It reviews the earlier work on methods for the classical problems (e.g., LU, QR, and Cholesky factorizations) and gives the context of the recent work (e.g., GPU acceleration of the associated software, the more recent problems of updating/downdating, and the exploitation of low-rank approximations for efficiency).

One of the most impressive parts of such surveys is the reference list. This one has 604 bibliographic items (if I did not make any errors in counting). There is great scholarly work in collecting 604 bibliographic items, reading through them, and putting them into a well-organized survey. There are virtually no bulk references; all citations come with at least a few words. This assiduous approach got me excited, and I dug into the reference list. The earliest cited works are from 1957 (one by Berge [1] and one by Markowitz [2]); the latest are from 2015 (there are a number of them). There are no cited papers from the years 1958, 1959, 1960, 1962, 1964, and 1965. Here is a histogram of the number of papers per 5-year period (centered at the years 1955 to 2015 in increments of 5, i.e., 1955:5:2015).

[Figure: histogram of the number of cited papers per 5-year period]

The histogram tells at least two things: (i) much of the activity at the foundations of today's methods is from the years 1990–2000; (ii) the field is very active, considering that the survey gives an overview of the fundamentals, and the recent developments that did not fit neatly into the grand/traditional lines of the world of direct methods are only summarized in a relatively short section (Section 12).

I underlined another quotation from the survey:

Well-designed mathematical software has long been considered a cornerstone of scholarly contributions in computational science.

This is a great survey, even for those who know the area. Kudos to Tim, Siva, and Wissam for having crafted it.

References

  1. Claude Berge, Two theorems in graph theory, Proceedings of the National Academy of Sciences of the United States of America 43(9), 842–844, 1957  (link).
  2. Harry M. Markowitz, The elimination form of the inverse and its application to linear programming, Management Science, 3 (3), 255–269, 1957 (link).

On the Birkhoff-von Neumann decomposition

Michele Benzi, Alex Pothen, and I have been making use of the celebrated Birkhoff-von Neumann theorem on doubly stochastic matrices. The theorem says that any doubly stochastic matrix can be written as a convex combination of finitely many permutation matrices. Formally, let \mathbf{A} be a doubly stochastic matrix. Then,

\mathbf{A}=\sum_{j=1}^k \alpha_j \mathbf{P}_j\;,

where \alpha_j>0, \sum_j \alpha_j=1 and each \mathbf{P}_j is a permutation matrix.

Given this formulation, one wonders if the decomposition is unique. Well, the answer is “No”. Then, one asks what can be said about the number k. And this is the main topic of this post.

Richard A. Brualdi [1] discusses many things, among which are a lower bound and an upper bound on k. The minimum number of permutation matrices is equal to the maximum cardinality of a set of nonzero positions of \mathbf{A}, no two of which can appear together in a single permutation matrix contained in the pattern of \mathbf{A}. An easy lower bound is then the maximum number of nonzeros in a row or a column of \mathbf{A}. The upper bound is \mathrm{nnz}(\mathbf{A})-2n+2 for a fully indecomposable matrix \mathbf{A} with \mathrm{nnz}(\mathbf{A}) nonzeros; more generally, if there are b fully indecomposable blocks, then the upper bound is \mathrm{nnz}(\mathbf{A})-2n+b+1.

What about computing the minimum number of permutation matrices? It turns out that this is an NP-complete problem. Let us state it in the form of a standard optimization problem.

Input:  A doubly stochastic matrix \mathbf{A}.
Output: A Birkhoff-von Neumann decomposition of \mathbf{A} as \mathbf{A} = \alpha_1\mathbf{P}_1 + \alpha_2\mathbf{P}_2 + \cdots +\alpha_k\mathbf{P}_k.
Measure: The number k of permutation matrices in the decomposition.

The NP-completeness of the decision version of this problem is shown in a recent technical report [2].
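For concreteness, here is a minimal greedy sketch (my own illustration, not the algorithm of the report) that produces some Birkhoff-von Neumann decomposition; it relies on SciPy's linear_sum_assignment to pick a permutation inside the support of the residual and makes no attempt to minimize k.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff_greedy(A, tol=1e-12):
    """Greedy Birkhoff-von Neumann decomposition of a doubly stochastic matrix A.

    Returns (alphas, perms) so that A is (approximately) sum_j alphas[j] * P_j,
    where perms[j][i] is the column of the 1 in row i of P_j.  Heuristic sketch
    only: it does not try to minimize the number k of permutation matrices.
    """
    R = np.array(A, dtype=float)      # residual matrix
    n = R.shape[0]
    alphas, perms = [], []
    while R.max() > tol:
        # Find a permutation lying entirely in the support of the residual:
        # entries outside the support get a prohibitively large cost.
        cost = np.where(R > tol, -R, n + 1.0)
        rows, cols = linear_sum_assignment(cost)
        vals = R[rows, cols]
        if vals.min() <= tol:         # no support-only permutation left
            break
        alpha = float(vals.min())     # largest coefficient we can peel off
        alphas.append(alpha)
        perms.append(cols.copy())
        R[rows, cols] -= alpha        # subtract alpha * P_j from the residual
    return alphas, perms
```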

References

  1. Richard A. Brualdi, Notes on the Birkhoff algorithm for doubly stochastic matrices, Canadian Mathematical Bulletin, 25(2), 191–199, 1982 (doi).
  2. Fanny Dufossé and Bora Uçar, Notes on Birkhoff-von Neumann decomposition of doubly stochastic matrices, Technical Report, RR-8852, Inria Grenoble Rhône-Alpes, 2016 (link).

More good news

I feel so privileged to be able to post these announcements.

[Photo: Aydın Buluç]

Aydın Buluç, a great scholar and a good friend, has received the IEEE TCSC Award for Excellence for Early Career Researchers. This award recognizes up to three individuals who have made outstanding, influential, and potentially long-lasting contributions in the field of scalable computing within five years of receiving their PhD degree, as of January 1 of the year of the award. Aydın received a plaque at the SC15 conference, held in Austin, TX, on November 15–20, 2015.

Congratulations Aydın!

CSC Mini-symposium at SIAM PP16

The SIAM PP16 (April 12–15, 2016, Paris) program has just come out. Aydın Buluç, Alex Pothen, and I will be organizing a CSC mini-symposium with three parts.

The mini-symposium has the following description:

Combinatorial algorithms and tools are used in enabling parallel scientific computing applications. The general approach is to identify performance issues in an application and to design, analyze, and implement combinatorial algorithms to tackle the identified issues. The proposed minisymposium gathers 12 talks, covering applications in bioinformatics, solvers of linear systems, and data analysis, and graph algorithms for those applications. The objective is to summarize the latest combinatorial algorithmic developments and the needs of the applications. The goal is to cross-fertilize both domains: the applications will raise new challenges for the combinatorial algorithms, and the combinatorial algorithms will address some of the existing problems of the applications.

The twelve excellent talks are spread over two days: Tuesday, April 12, 2016, with the first part at 1:10 PM – 2:50 PM and the second part at 3:20 PM – 5:00 PM; and Wednesday, April 13, 2016, with the third part at 10:35 AM – 12:15 PM.

See you there!