After CSC’16 (cont’d): The best paper

Now that the Proceedings of the CSC Workshop 2016 have appeared (link), we can look at the papers. In this post, we are going to look at the best paper by Manne, Naim, Lerring, and Halappanavar, “On stable marriages and greedy matchings” (doi).

As noted in the citation of the award, this paper brings together recent half-approximation algorithms for weighted matchings and the classical Gale-Shapley and McVitie-Wilson algorithms for the Stable Marriage problem. This is helpful in two ways.

First, the authors show that a recent half-approximation algorithm for computing greedy matchings, the Suitor algorithm (along with its variants), is equivalent to the classical algorithms for the Stable Marriage problem. This correspondence enables the authors to propose multi-core and many-core parallel algorithms for the Stable Marriage problem, based on the greedy matching algorithms. This way, if I am not mistaken, they develop the first GPU algorithm for the Stable Marriage problem.

Second, the extensive theoretical work on the Stable Marriage algorithms explains the behavior of the Suitor matching algorithm. The worst-case number of proposals in the Suitor algorithm can be O(n^2), where n is the number of vertices in the bipartite graph. However, if the weights are assigned randomly, the expected number of proposals is O(n \log n), which follows from a classical analysis by Donald Knuth [1, 2].

This work represents a nice marriage between theory and practice: the practical algorithms for greedy matching help in designing parallel algorithms for the Stable Marriage problem, and the theoretical understanding of the Stable Marriage problem sheds light on the behavior of the greedy weighted matching algorithms.

Vocabulary

Stable marriage problem: The problem is defined on a complete bipartite graph G=(L\cup R, L\times R). Each vertex on the left has a total ranking of the vertices on the right; similarly, each vertex on the right has a total ranking of all vertices on the left. The aim is to find a matching M such that no l\in L and r\in R would both obtain a higher-ranked partner if they were to abandon their current partners in M and rematch with each other.
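To make the proposal mechanism concrete, here is a minimal Python sketch of the classical Gale-Shapley algorithm for this problem (the dict-based input format is my own choice for illustration):

def gale_shapley(left_pref, right_pref):
    # left_pref[l]: list of right vertices, most preferred first
    # right_pref[r]: list of left vertices, most preferred first
    rank = {r: {l: i for i, l in enumerate(p)} for r, p in right_pref.items()}
    nxt = {l: 0 for l in left_pref}    # next entry of l's preference list
    partner = {}                       # r -> the proposal r currently holds
    free = list(left_pref)             # left vertices still unmatched
    while free:
        l = free.pop()
        r = left_pref[l][nxt[l]]       # l proposes to its best remaining r
        nxt[l] += 1
        if r not in partner:
            partner[r] = l             # r accepts its first proposal
        elif rank[r][l] < rank[r][partner[r]]:
            free.append(partner[r])    # r trades up; the old partner is free again
            partner[r] = l
        else:
            free.append(l)             # r rejects; l proposes further down its list
    return sorted((l, r) for r, l in partner.items())

With the left vertices proposing, this computes the stable matching that is best possible for every left vertex (and worst possible for every right vertex).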

Greedy matching algorithm: Here we compute approximations to matchings of maximum weight in weighted graphs. The Greedy algorithm considers the edges in non-increasing order of weight: the heaviest remaining edge (u,v) is added to the matching, whereupon all edges incident on u and v are removed. The Greedy algorithm computes a half-approximate matching, that is, one whose weight is at least half the maximum.
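In code, the Greedy algorithm is only a few lines; here is a sketch that takes the graph as a plain edge list (my own choice of format):

def greedy_matching(edges):
    # edges: list of (u, v, weight) triples
    matched = set()                    # vertices already covered
    matching = []
    for u, v, w in sorted(edges, key=lambda e: e[2], reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v))    # heaviest remaining edge enters the matching
            matched.update((u, v))     # all other edges at u and v are now blocked
    return matching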

Suitor algorithm: This is a proposal-based algorithm [3]. Vertices can propose in any order; each vertex proposes to its heaviest neighbor that does not already hold a better-weight offer. A vertex can annul the proposal a neighbor currently holds if it has a heavier edge to offer that neighbor, in which case the displaced suitor proposes again. When two vertices propose to each other, they are matched. The Suitor algorithm computes the same matching as the one obtained by the Greedy algorithm and the Locally Dominant edge algorithm described by Robert Preis [4]!
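Here is a minimal sequential sketch of the Suitor algorithm, assuming distinct positive edge weights (with ties, a consistent tie-breaking rule is needed, which I omit):

def suitor_matching(adj):
    # adj[u]: list of (neighbor, weight) pairs; weights distinct and positive
    suitor = {u: None for u in adj}    # the proposal u currently holds
    offer = {u: 0.0 for u in adj}      # weight of that proposal
    todo = list(adj)                   # vertices that still have to propose
    while todo:
        u = todo.pop()
        best, w_best = None, 0.0
        for v, w in adj[u]:            # heaviest neighbor whose current offer u beats
            if w > offer[v] and w > w_best:
                best, w_best = v, w
        if best is not None:
            jilted = suitor[best]      # u's proposal annuls the previous one
            suitor[best], offer[best] = u, w_best
            if jilted is not None:
                todo.append(jilted)    # the displaced vertex proposes again
    # the matching consists of the mutual proposals
    return {tuple(sorted((u, v))) for u, v in suitor.items()
            if v is not None and suitor[v] == u}

With distinct weights, this returns the same matching as the Greedy sketch above.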

References

  1. Vicki Knoblauch, Marriage matching: A conjecture of Donald Knuth, Economics Working Papers, Paper 200715, http://digitalcommons.uconn.edu/econ_wpapers/200715, 2007.
  2. Donald E. Knuth, Stable Marriage and its Relation to Other Combinatorial Problems, CRM Proceedings and Lecture Notes, Vol. 10, American Mathematical Society, 1997.
  3. Fredrik Manne and Mahantesh Halappanavar, New effective multithreaded matching algorithms, in Proc. IPDPS 2014, IEEE 28th International Parallel and Distributed Processing Symposium, Phoenix, AZ, USA, pp. 519–528, 2014.
  4. Robert Preis, Linear time 1/2-approximation algorithm for maximum weighted matching in general graphs, in Proc. STACS 99, 16th Annual Symposium on Theoretical Aspects of Computer Science, Trier, Germany, pp. 259–269, 1999.

After CSC’16

CSC’16 was held last week. Kudos to Assefaw and Erik, the chairs of the workshop.

There is so much to talk about. We will have a series of posts about the workshop and related things. Here are some bits and pieces.

The workshop had three invited talks, 19 contributed talks, and eight posters, and was attended by 60+ people. There will be a proceedings with 11 papers, published by SIAM and hosted on its publication platform.

We also celebrated the 60th birthdays of Alex Pothen and Rob Bisseling.


There was a best paper award. It went to Fredrik Manne, Md. Naim, Håkon Lerring, and Mahantesh Halappanavar for their paper titled On Stable Marriages and Greedy Matchings. Congratulations. The citation by the best paper award committee (Uwe Naumann, Alex Pothen, and Sivan Toledo) reads:

for the way the paper brings together several decades of work on stable marriages with the more recent work on approximation algorithms for weighted matchings, and the consequences for the average case complexity of the latter algorithms.


A heads up: the CSC’18 meeting will likely be in Bergen, Norway. Erik cracked a joke about this, saying that the best paper awardees should take on organizing the next meeting.

A recent survey on direct methods

I have just read a recent survey by Timothy A. Davis, Sivasankaran Rajamanickam, and Wissam Sid-Lakhdar, “A survey of direct methods for sparse linear systems” (link). The authors state their goal in the abstract:

The goal of this survey article is to impart a working knowledge of the underlying theory and practice of sparse direct methods for solving linear systems and least-squares problems, and to provide an overview of the algorithms, data structures, and software available to solve these problems, so that the reader can both understand the methods and know how best to use them.


I have very much appreciated the breadth of the survey. It reviews the earlier work on methods for the classical problems (e.g., LU, QR, and Cholesky factorizations) and gives the context of the recent work (e.g., GPU acceleration of the associated software, the more recent problems of updating/downdating, and the exploitation of low-rank approximations for efficiency).

One of the most impressive parts of the survey is its reference list. This one has 604 bibliographic items (if I did not make any errors in counting). There is great scholarly work in collecting 604 bibliographic items, reading through them, and putting them into a well-organized survey. There are virtually no bulk references; all citations come with at least a few words. This assiduous approach got me excited, and I dug into the reference list. The earliest cited works are from 1957 (one by Berge [1] and one by Markowitz [2]); the latest are from 2015 (there are a number of them). There are no cited papers from the years 1958, 1959, 1960, 1962, 1964, and 1965. Here is a histogram of the number of papers per 5-year period (centered at the years 1955, 1960, …, 2015).

[Figure: histogram of the number of cited papers per 5-year period, centered at the years 1955 through 2015.]

The histogram tells at least two things: (i) much of the activity at the foundations of today’s methods is from the years 1990–2000; (ii) the field is very active, considering that the survey gives an overview of the fundamentals, and recent developments that did not fit neatly into the grand/traditional lines of the world of direct methods are summarized only in a relatively short section (Section 12).

I underlined another quotation from the survey:

Well-designed mathematical software has long been considered a cornerstone of scholarly contributions in computational science.

This is a great survey, even for those who know the area. Kudos to Tim, Siva, and Wissam for having crafted it.

References

  1. Claude Berge, Two theorems in graph theory, Proceedings of the National Academy of Sciences of the United States of America 43(9), 842–844, 1957  (link).
  2. Harry M. Markowitz, The elimination form of the inverse and its application to linear programming, Management Science, 3 (3), 255–269, 1957 (link).

On the Birkhoff-von Neumann decomposition

Michele Benzi, Alex Pothen, and I have been making use of the celebrated Birkhoff-von Neumann theorem on doubly stochastic matrices. The theorem says that any doubly stochastic matrix can be written as a convex combination of finitely many permutation matrices. Formally, let \mathbf{A} be a doubly stochastic matrix. Then,

\mathbf{A}=\sum_{j=1}^k \alpha_j \mathbf{P}_j\;,

where \alpha_j>0, \sum_j \alpha_j=1, and each \mathbf{P}_j is a permutation matrix.
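For instance,

\begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix} = 0.7 \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + 0.3 \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\;,

a decomposition with k=2.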

Given this formulation, one wonders if the decomposition is unique. Well, the answer is “no”. Then, one asks what can be said about the number k. This is the main topic of this post.

Richard A. Brualdi [1] discusses many things, among them a lower bound and an upper bound on k. The minimum number of permutation matrices is equal to the maximum cardinality of a set of nonzero positions of \mathbf{A}, no two of which can appear together in a single permutation matrix in the pattern of \mathbf{A}. An easy lower bound is then the maximum number of nonzeros in a row or a column of \mathbf{A}. The upper bound is \mathrm{nnz}(\mathbf{A})-2n+2 for a fully indecomposable matrix \mathbf{A} with \mathrm{nnz}(\mathbf{A}) nonzeros; more generally, if there are b fully indecomposable blocks, then the upper bound is \mathrm{nnz}(\mathbf{A})-2n+b+1.
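To make the bounds concrete: for the n\times n matrix with all entries equal to 1/n, every position is a nonzero, so the lower bound is n (the number of nonzeros in a row), and, the matrix being fully indecomposable with \mathrm{nnz}=n^2, the upper bound is n^2-2n+2. Here the lower bound is attained: the n cyclic permutation matrices, each with coefficient 1/n, give a decomposition with k=n.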

What about the minimum number of permutation matrices? It turns out that finding it is an NP-complete problem (in its decision version). Let’s state it in the form of a standard optimization problem.

Input:  A doubly stochastic matrix \mathbf{A}.
Output: A Birkhoff-von Neumann decomposition of \mathbf{A} as \mathbf{A} = \alpha_1\mathbf{P}_1 + \alpha_2\mathbf{P}_2 + \cdots +\alpha_k\mathbf{P}_k.
Measure: The number k of permutation matrices in the decomposition.

The NP-completeness of the decision version of this problem is shown in a recent technical report [2].
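For the curious, here is a minimal Python sketch of the greedy construction behind the Birkhoff algorithm discussed in [1]: repeatedly extract a permutation matrix lying in the support of the remaining matrix and subtract it with the largest feasible coefficient. Using SciPy’s linear_sum_assignment to find such a permutation is my own shortcut for illustration, not something prescribed by the references.

import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff(A, tol=1e-12):
    # Greedy Birkhoff construction on a doubly stochastic matrix A.
    R = A.astype(float).copy()
    alphas, perms = [], []
    while R.max() > tol:
        # Cost 1 on (near-)zero entries and 0 elsewhere: a zero-cost
        # assignment is a permutation that avoids the zeros of R.
        rows, cols = linear_sum_assignment((R <= tol).astype(float))
        diag = R[rows, cols]
        if diag.min() <= tol:
            break                      # no supported permutation (numerical trouble)
        alpha = diag.min()             # largest coefficient we can subtract
        P = np.zeros_like(R)
        P[rows, cols] = 1.0
        alphas.append(alpha)
        perms.append(P)
        R -= alpha * P                 # at least one more entry becomes zero
    return alphas, perms

Each iteration zeroes out at least one entry of the remaining matrix, and the last iteration zeroes out n of them, so the loop runs at most \mathrm{nnz}(\mathbf{A})-n+1 times. The greedy choice of permutation need not attain the minimum k, which is precisely what the NP-completeness result above is about.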

References

  1. Richard A. Brualdi, Notes on the Birkhoff algorithm for doubly stochastic matrices, Canadian Mathematical Bulletin, 25(2), 191–199, 1982 (doi).
  2. Fanny Dufossé and Bora Uçar, Notes on Birkhoff-von Neumann decomposition of doubly stochastic matrices, Technical Report, RR-8852, Inria Grenoble Rhône-Alpes, 2016 (link).