----------------------- REVIEW 1 ---------------------
PAPER: 46
TITLE: Asynchronous Convex Consensus in the Presence of Crash Faults
AUTHORS: Lewis Tseng and Nitin Vaidya
----------- REVIEW -----------
SUMMARY:
This paper proposes and solves a new problem, called convex consensus,
which is a generalization of the vector consensus problem. In both
problems, the input of each process is a d-dimensional vector of real
numbers (i.e., a single point in d-space). For the vector consensus
problem, the output must be another d-vector (point), which is inside
the convex hull of the inputs, whereas in this new problem, the output
must be a convex polytope which is inside the convex hull of the
inputs.
The fault model primarily studied in this paper is "crash faults with
incorrect inputs", in which there are at most f faulty processes, and
a faulty processor might have an incorrect input and might crash at
some point. Algorithms for this model can be transformed into
algorithms tolerant of Byzantine failures.
Convex consensus cannot be solved in an asynchronous system (due to [FLP]),
so the paper considers an approximate version of the problem, in which
the Hausdorff distance between the outputs of two correct processes must
be at most some parameter epsilon. (The Hausdorff distance is defined
as follows for two polytopes, h1 and h2: First, consider the maximum,
over every point in h1, of the shortest distance from that point to any
point in h2. Then consider this quantity with the roles of h1 and h2
reversed. Finally, take the larger of these two quantities.)
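For concreteness, the definition above can be sketched for finite point sets (e.g., polytope vertex sets). Note that for actual polytopes the inner distance is to the whole set, not only its vertices, so this is purely an illustration; function names are my own:

```python
import math

# Directed Hausdorff distance: max over points of a of the
# shortest distance from that point to any point of b.
def directed_hausdorff(a, b):
    return max(min(math.dist(p, q) for q in b) for p in a)

# Symmetric Hausdorff distance: larger of the two directed distances.
def hausdorff(a, b):
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```

For instance, hausdorff([(0, 0), (1, 0)], [(0, 0), (1, 1)]) evaluates to 1.0, since every point of each set is within distance 1 of the other set.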
Prior lower bounds imply that in order for this problem to be
solvable, the total number of nodes n must be more than (d+2)*f. An
algorithm is given that meets this bound. In the algorithm, after an
initial round that eliminates the effect of faulty inputs, each
process keeps a current estimate of the output polytope; for an
appropriate number of asynchronous rounds, processes exchange
polytopes and compute new ones as linear combinations (equally
weighted) of those received. As in prior work, the correctness proof
rephrases the algorithm as a series of matrix multiplications, and
appeals to Tverberg's Theorem. It is also shown that, in some sense,
the output polytopes are optimally large.
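The equally weighted combination step described above can be sketched roughly as follows (the representation and names are my own, not the paper's; a real implementation would also take the convex hull of the resulting point set):

```python
from itertools import product

# Each polytope is represented by its vertex list. The equally weighted
# linear combination {(1/k) * (x_1 + ... + x_k) : x_i in h_i} has all of
# its vertices among the averages of one vertex chosen from each polytope.
def equal_weight_combination(polytopes):
    k = len(polytopes)
    points = set()
    for choice in product(*polytopes):
        dim = len(choice[0])
        points.add(tuple(sum(v[d] for v in choice) / k for d in range(dim)))
    return points  # superset of the combined polytope's vertices
```

For two 1-D segments [0, 2] and [0, 4], the combination is the segment [0, 3], and the candidate vertex set is {0, 1, 2, 3}.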
A brief discussion of modifying the algorithm for the classic crash
model (where inputs of faulty processes are correct) is included.
The last topic in the paper discusses applications of convex consensus
to the problem of minimizing a cost function over a domain that consists
of the convex hull of the correct inputs.
EVALUATION: As is made clear in the paper, this is an extension of the
authors' previous work on vector consensus. Several of the ideas are
the same or similar.
Overall, the presentation was quite clear, although in a few places a
discussion of the intuition behind the formal statements would be
helpful. It would also be helpful to provide more discussion about
possible applications, especially relating to the convex function
optimization.
DETAILED COMMENTS:
page 1: "consensus" is misspelled "consnesus"
page 2, Problem definition: I believe that any vector consensus
algorithm would satisfy the condition. That is, the validity
condition does not rule out such "degenerate" solutions. Is there a
way to indicate that the output polytope(s) should be as large as
possible? This would relate to the optimality condition proved about
the algorithm.
page 4, Definition 2: don't capitalize "Linear"
page 5: It would be helpful to give a bit of intuition about the
algorithm. Why is it important to use the stable vector primitive in
round 0 but not in subsequent rounds? I guess the point is to rule
out the contribution of faulty inputs at the beginning, but
subsequently the cost of the stable vector primitive is not needed?
page 6, Lemma 1: restrict t to be at most t_end?
page 7, equation (7): restrict t to be at most t_end?
page 8, equation (12): As this is key to the correctness of the
algorithm, it would be nice to give some intuition for this claim,
regardless of whether Lemma 5 and its proof end up in a later version.
Similarly intuition for the inequality in between (12) and (13) would
be appreciated.
page 8, definition of t_end: "to be the smallest integer t" (add in the t)
page 9, Section 4: It would be helpful to have more discussion about
applications of the function optimization.
page 11: There is a journal version of reference [8].
----------------------- REVIEW 2 ---------------------
PAPER: 46
TITLE: Asynchronous Convex Consensus in the Presence of Crash Faults
AUTHORS: Lewis Tseng and Nitin Vaidya
----------- REVIEW -----------
The paper presents the problem of convex consensus, which is similar to vector consensus except that each entity agrees on a convex hull of points. The problem is relevant and the paper is well written, but I see three issues:
(issue 1: viability) The main difficulty of vector consensus is still present here: the exponential local computation (computing $\mathcal{H}$, the convex hull of $n-f$ points, over every possible set of incorrect inputs). Is that cost "inherent"? Would a specialized algorithm for problems like convex function optimization mitigate the issue? No discussion is given.
This matters because this problem is technically a relaxation of the vector consensus problem, so a better local-computation perspective is expected (or at least a discussion of it). The cost is particularly relevant since the authors motivate the problem in terms of "other interesting problems" and "convex function optimization" (page 3), and it is unclear why such a general algorithm is preferable given such a high cost. I feel that this issue should be tackled, or at least thoroughly discussed, because otherwise almost all technical merit is already present in the vector consensus paper of PODC'13.
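To make the cost concern above concrete: even just enumerating the candidate $(n-f)$-subsets of inputs, before any hull computation, grows combinatorially in $n$. The helper below is purely illustrative (my own, not from the paper):

```python
from math import comb

# Number of (n - f)-sized input subsets a process may need to consider
# when any f of the n inputs could be incorrect.
def candidate_input_sets(n, f):
    return comb(n, n - f)
```

For example, with n = 7 and f = 2 there are already 21 candidate subsets, and the count explodes for larger systems.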
(issue 2: convergence) The convergence "speed" (1 - 1/n) gets slower and slower as the number of processes increases. Again, a relaxation of vector consensus "calls for" either (a) a better convergence speed or (b) a discussion of why slow convergence is necessary. The paper focuses on the optimality of the convex polytopes, but nothing is discussed regarding this issue.
Also, what if you don't have bounds on the coordinates? Can you still estimate $t_end$ somehow? Would the technique in [1] work here?
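To illustrate the convergence concern: assuming a per-round contraction factor of (1 - 1/n) and a known initial diameter D (both assumptions taken from this review's framing, not from the paper), the number of rounds needed to reach epsilon grows roughly like n * ln(D/epsilon):

```python
import math

# Smallest t with D * (1 - 1/n)**t <= eps, i.e. the number of
# averaging rounds needed under a (1 - 1/n) contraction per round.
def rounds_needed(diameter, eps, n):
    return math.ceil(math.log(eps / diameter) / math.log(1 - 1 / n))
```

For example, shrinking a unit diameter to 0.1 takes 22 rounds with n = 10, and proportionally more as n grows.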
(issue 3: novelty) Technically, the algorithm is an application of the stable vectors of [2] (not [14]) in the first round, and then a series of convergence steps. Processes also "increase" their knowledge of the polytope of correct processes using the $L$ operator.
The simplicity is definitely appreciated, but I'm doubtful about the novelty of the technique. In other words, while the problem is interesting, the technique to solve it is a straightforward simplification of vector consensus + the use of stable vectors, both already established in the literature ([20] and [2], respectively).
If issues (1-2) above were tackled, then a strong contribution would exist.
(minor issue) The suitability of the incorrect input model is not thoroughly discussed. Perhaps bad sensors? (In that case, local computation is relevant.) Byzantine failures? (Emphasize the transformation of [6].) I think the paper should discuss it better.
FINAL CONSIDERATIONS: The problem is relevant and the proof is elegant; however, it seems to me that some novelty is missing, given that this is a relaxation (simplification) of vector consensus. That novelty could take the form of either a faster local computation or faster convergence; neither is provided here. I suggest further examination (or at least a further, incisive discussion!) of these issues.
-- specific comments --
pg 3/ contributions: what *other* problems could benefit from the convex consensus problem? You cite this in the main contributions but do not specify.
pg 4/ stable vector paragraph: [14] is not a reference for stable vectors (it is a brief announcement). [2] defines stable vectors; [14] uses [2] in the same Byzantine communication model as described in Attiya & Welch. Given the importance of stable vectors, I'd suggest citing the source [2] directly. [14] seems irrelevant for the purpose here.
pg 6/ Lemma 2: wouldn't that be $i \in V \setminus (F[0] \cup F[1])$, because there might be processes that crash right away?
pg 9, sec 3.3: Once more on stable vectors: they were originally defined for crash failures, so you can simply refer to [2] instead of saying "this is feasible".
----------------------- REVIEW 3 ---------------------
PAPER: 46
TITLE: Asynchronous Convex Consensus in the Presence of Crash Faults
AUTHORS: Lewis Tseng and Nitin Vaidya
----------- REVIEW -----------
The paper picks up on the interesting consensus generalizations recently introduced. In particular, the authors introduce the generalized notion of "approximate convex consensus" where processes need to output a convex polytope contained within the convex hull of the inputs at the fault-free processes. The polytope approximation quality is measured in terms of the distance between
the output polytopes.
The main contribution is a convex consensus algorithm that is optimal in terms of robustness and polytope approximation, and that also has implications for convex optimization.
Overall, this is an interesting result and a well-written paper. Unfortunately, the paper does not adequately put the technical novelty into perspective. It would be nice to discuss in more detail the additional insights needed compared to previous work, e.g., on Barycentric agreement. Accordingly, it would be nice to have a small subsection not only on the main result but also on the technical contribution in terms of new tools.
----------------------- REVIEW 4 ---------------------
PAPER: 46
TITLE: Asynchronous Convex Consensus in the Presence of Crash Faults
AUTHORS: Lewis Tseng and Nitin Vaidya
----------- REVIEW -----------
The paper presents an asynchronous distributed (message-passing) algorithm for approximate "convex consensus" in the presence of crash faults and incorrect inputs. The algorithm is shown to produce results that have optimal quality, in a specified sense. Applications to convex function optimization are also given.
The results are technically involved, but the setting and results seem somewhat incremental.
----------------------- REVIEW 5 ---------------------
PAPER: 46
TITLE: Asynchronous Convex Consensus in the Presence of Crash Faults
AUTHORS: Lewis Tseng and Nitin Vaidya
----------- REVIEW -----------
The manuscript discusses a method for obtaining a convex consensus between all fault-free processes in a distributed asynchronous environment with Byzantine failures.
The method presented is straightforward, and the results seem solid. Some applications are presented.
----------------------- REVIEW 6 ---------------------
PAPER: 46
TITLE: Asynchronous Convex Consensus in the Presence of Crash Faults
AUTHORS: Lewis Tseng and Nitin Vaidya
----------- REVIEW -----------
Brief review:
Overall, this is a nicely written paper, with a nice collection of results on the problem of "convex consensus." This is a new generalization of "approximate" Byzantine agreement, with a strong fault-tolerance guarantee.
Even so, my general impression is that this paper is somewhere between "borderline" and "weak accept" as the techniques are not all that different from existing papers, and the analysis builds significantly on previous approaches.
----------------------- REVIEW 7 ---------------------
PAPER: 46
TITLE: Asynchronous Convex Consensus in the Presence of Crash Faults
AUTHORS: Lewis Tseng and Nitin Vaidya
----------- REVIEW -----------
The introduced convex abstraction is nice and may lead to future simplifications of arguments.