----------------------------------------------
Submission number: 10
Corresponding author: Guanfeng Liang
Submission title: Error-Free Multi-Valued Consensus with Byzantine Failures
----------------------------------------------
---------------------------- REVIEW 1 -------------------------- PAPER: 10
TITLE: Error-Free Multi-Valued Consensus with Byzantine Failures
The paper gives a deterministic algorithm for consensus on a message of length
L, using O(Ln + n^6 + Ln^0.5) bits in a synchronous model, i.e., linear in the
message length per processor when L > n^6. Consensus here means that the agreed-upon output
must match the input of a good processor only when the message is agreed upon by
all the good processors. Otherwise, the processors can agree on any default
message. This seems to be the definition previously used in the literature,
although one could imagine a stronger definition. Previous work uses linear
communication even for smaller L but has a nonzero probability of failure and
relies on hash functions.
The paper is well written. The technique seems very nice. Briefly, it relies on
coding each chunk of the messages in n pieces such that n-2t pieces are needed
for reconstruction (using Reed-Solomon codes). Each processor sends its piece to
the other processors. Processors maintain an agreed-upon graph of trusted edges,
using a previously known one-bit consensus algorithm (costing n^2 bits per
invocation) to communicate bits
representing trust. Either agreements on chunks occur or inconsistencies are
detected with a more expensive process revealing a bad edge. As there can't be
too many bad edges (incident to bad processors), the complexity works out.
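The coding step this review summarizes can be sketched as follows. This is a generic Reed-Solomon erasure code over a small prime field, not the paper's actual construction; the field size, function names, and systematic layout are all illustrative assumptions.

```python
# Hedged sketch: each chunk of k = n - 2t data symbols is encoded into n
# pieces so that any k surviving pieces suffice to reconstruct the chunk
# (Reed-Solomon erasure coding). Parameters here are illustrative only.

P = 257  # small prime field GF(257); a real system would size this to the data

def _lagrange_eval(points, x):
    # Evaluate, at x, the unique polynomial through the (xi, yi) pairs, mod P.
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

def rs_encode(data, n):
    """Systematic encoding: piece x (x = 0..n-1) is p(x), where p is the
    polynomial interpolating data at x = 0..k-1; pieces 0..k-1 equal data."""
    pts = list(enumerate(data))
    return [_lagrange_eval(pts, x) for x in range(n)]

def rs_reconstruct(survivors, k):
    """Recover the k data symbols from any k surviving (index, value) pairs."""
    assert len(survivors) >= k, "need at least k = n - 2t pieces"
    return [_lagrange_eval(survivors[:k], x) for x in range(k)]
```

For example, with n = 7 and t = 2, any k = 3 of the 7 pieces recover the chunk, which is what allows each processor to forward only its own piece rather than the whole message.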
Detailed comments
Page 2 lines 10-11: Why is Omega(nL) a lower bound on the communication
complexity of consensus on an L-bit value? Either explain or give a reference.
Page 2 line 11: algortihm -> algorithm
Page 2 line 19: lower bounded -> bounded below
Page 4 line -7: parenthetical sentences should begin with a capital letter and
should have their period before the right parenthesis.
Page 5 line 1: A (n,n-2t) -> An (n,n-2t). The distance of a Reed-Solomon code
is not defined.
It is confusing to use the notation V/A to denote the sequence of elements of V
indexed by elements of A, since / is also used for division. V|A might be a
better choice.
Is a stronger form of validity possible?
---------------------------- REVIEW 2 -------------------------- PAPER: 10
TITLE: Error-Free Multi-Valued Consensus with Byzantine Failures
This paper presents an improvement to multi-valued Byzantine agreement in
synchronous settings. Specifically, it is known that the communication
complexity of a single-bit agreement protocol is Omega(n^2) bits; so a naïve
multi-valued protocol on L-bit values by repeating L instances incurs O(n^2 L)
complexity. The paper at hand reduces the complexity to O(n L), for sufficiently
large values of L. The main technique used in the paper is interesting in its
own right. The main cause of complexity in a single-bit protocol is the
malicious actions by faulty participants. However, by incurring those costs, the
faulty participants risk exposure. The idea in this paper is to take advantage
of this: The solution it presents constructs a fault-detection graph in early
rounds of the protocol, so that the faulty participants cannot continue causing
communication overheads. Overall, the result is technically challenging, in a
very well studied area. Of course, the progress it makes is quite marginal, as
we are squeezing the performance of such protocols to the limit; but this should
not be held against it. I found the introduction and the presentation in the paper
quite clear and easy to read, relative to the technical depth of the result.
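The fault-detection graph idea described in this review can be sketched as bookkeeping like the following. The class, method names, and the t*(n-1) removal bound in the comments are illustrative assumptions, a simple overestimate rather than the paper's exact protocol or accounting.

```python
# Hedged sketch of the fault-detection ("diagnosis") graph: parties start by
# trusting every pairwise link, and each exposed inconsistency removes an edge
# that is guaranteed to touch at least one faulty party.
from itertools import combinations

class DiagnosisGraph:
    def __init__(self, n, t):
        self.n, self.t = n, t
        # Start with the complete trust graph on n parties.
        self.trusted = {frozenset(e) for e in combinations(range(n), 2)}

    def accuse(self, i, j):
        # An inconsistency on link (i, j) implicates an endpoint; all parties
        # agree (e.g., via single-bit consensus on the accusation) to drop it.
        self.trusted.discard(frozenset((i, j)))

    def is_trusted(self, i, j):
        return frozenset((i, j)) in self.trusted

    def removal_budget(self):
        # Every removed edge touches one of the <= t faulty parties, so at
        # most t*(n-1) removals can ever occur; this caps how many times the
        # expensive detection phase runs over the whole execution.
        return self.t * (self.n - 1)
```

Since edges between two good parties are never removed, faulty parties can trigger the costly detection phase only a bounded number of times, which is why the amortized overhead stays low.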
---------------------------- REVIEW 3 -------------------------- PAPER: 10
TITLE: Error-Free Multi-Valued Consensus with Byzantine Failures
This paper proposes a deterministic protocol for Byzantine agreement among $n$
parties on an $L$-bit value. The communication complexity is less than "the
typical" $O(n^2L)$ and converges to $O(nL)$ for large $L$. Previously, only
probabilistic such protocols were known.
The result seems interesting and the technique of using both 1) error correcting
codes and 2) a diagnosis graph seems novel. It does not seem a particularly
hard result to get a Las Vegas algorithm, given that probabilistic protocols
were already known, *and* multivalued BA seems like an easy domain for changing
a Monte Carlo algorithm to a Las Vegas one. However, the authors go beyond this
and give a completely deterministic algorithm. I also like the connection to
error correcting codes, since I think there are deeper connections between ECCs
and BA that should be explored.
A weakness of the paper is that it is challenging to verify all steps (partly
because of writing issues). In particular, it should be demonstrated how to
compute $P_{match}$ efficiently; similarly for $P_{decide}$ in Step 3.(h). Another
weakness is that it requires L to be Omega(n^6) to obtain the O(nL) result. The
value Omega(n^6) is much too high for practicality. However, reducing this
exponent could be an interesting area of future work.
Note to authors: Secret sharing need not make cryptographic assumptions and is
not necessarily any more complicated than ECCs. In the abstract and intro, you
should just say that your algorithm makes no cryptographic assumptions and drop
the reference to secret sharing.
Typos:
- "Our algorithm require no" -> "requires no"
- "produce the" -> "produces the"
- "represent a" -> "represents a"
- "received the from" -> "received from"
Algorithm 1:
- 1.(a) Typo: $S_i[j]$.
- 2.(a) Maybe state explicitly what $R_j$ is?