Distributed Computing (DISC) Group
Prof. Vaidya and his students conduct research on topics in distributed computing, with an emphasis on the design and theoretical analysis of distributed algorithms. Ongoing research addresses the following three areas:
- Robust distributed optimization and machine learning:
Multi-agent distributed optimization has many applications. In recent years, its application in the context of machine learning has received significant attention. We are exploring three research directions in this context: (i) making distributed optimization and learning robust to tampering with data and communication during training, (ii) privacy-preserving optimization and machine learning, and (iii) making machine learning robust to adversarial examples.
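As a concrete illustration of direction (i), one standard robustness technique from this literature (not necessarily the group's own method) is coordinate-wise trimmed-mean aggregation, which tolerates up to f arbitrarily corrupted gradients out of n. The function name and the example gradients below are illustrative:

```python
def trimmed_mean(gradients, f):
    """Aggregate n gradient vectors by discarding the f largest and f
    smallest values in each coordinate, then averaging the rest."""
    n = len(gradients)
    assert n > 2 * f, "need more honest workers than trimmed entries"
    dim = len(gradients[0])
    aggregate = []
    for j in range(dim):
        column = sorted(g[j] for g in gradients)
        kept = column[f:n - f]  # drop f extremes on each side
        aggregate.append(sum(kept) / len(kept))
    return aggregate

# Two honest workers report similar gradients; one adversary sends junk.
grads = [[1.0, -2.0], [1.2, -1.8], [1000.0, -1000.0]]
robust = trimmed_mean(grads, f=1)  # -> [1.2, -2.0], adversary's values trimmed
```

Because the f extreme values in each coordinate are discarded, a single Byzantine worker cannot pull the aggregate arbitrarily far from the honest gradients.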
- Distributed shared memory systems: Distributed shared memory abstractions are useful to implement inter-process communication and coordination in a distributed setting. Key-value stores that are in common use today provide such an abstraction.
A consistency model specifies the behavior of the distributed shared memory as observed by the processes, and different consistency models are often desired
in different contexts. In our work, we are exploring consistency models for emerging applications such as social networking and distributed robotics.
Our interest is in identifying suitable algorithms for achieving the desired notions of consistency, designing algorithms that implement useful primitives
on top of these consistency models, and debugging programs under different consistency models.
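To illustrate why the choice of consistency model matters, the following minimal sketch simulates a key-value register under a weak (eventual) model: a write applied at one replica is not visible at another replica until updates propagate. The class and method names are hypothetical, not the API of any real store:

```python
class EventualReplica:
    """A toy key-value replica under eventual consistency: writes are
    applied locally and reach other replicas only via later syncs."""

    def __init__(self):
        self.store = {}

    def write(self, key, value):
        self.store[key] = value

    def read(self, key):
        return self.store.get(key)  # may return a stale (or missing) value

    def sync_from(self, other):
        self.store.update(other.store)  # crude one-way anti-entropy step

a, b = EventualReplica(), EventualReplica()
a.write("x", 1)
stale = b.read("x")   # None: the write has not yet propagated to b
b.sync_from(a)
fresh = b.read("x")   # 1 after synchronization
```

Under a stronger model such as linearizability, the first read at replica b would already be required to return 1; algorithms implementing that guarantee must coordinate replicas before acknowledging reads or writes.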
- Distributed computation over wireless networks: We are exploring the performance of distributed computations
over wireless networks, exploiting the (lossy) broadcast property of the wireless channel. Our past work in this area has included design
of algorithms for distributed consensus, distributed optimization, distributed mutual exclusion, and leader election in wireless networks.
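As one example of the consensus problems mentioned above, a classic iterative average-consensus scheme has each node repeatedly replace its value with the average over itself and the neighbors whose broadcasts it hears. The sketch below assumes a 4-node ring topology and an illustrative round count; it is not the group's specific algorithm:

```python
def consensus_round(values, neighbors):
    """One round: every node averages its own value with its neighbors'."""
    return [
        sum(values[j] for j in [i] + neighbors[i]) / (1 + len(neighbors[i]))
        for i in range(len(values))
    ]

# 4-node ring: node i hears broadcasts from its two ring neighbors.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
values = [0.0, 4.0, 8.0, 4.0]
for _ in range(50):
    values = consensus_round(values, neighbors)
# all values converge toward the initial average, 4.0
```

Because each node weights itself and its two neighbors equally, the update matrix is doubly stochastic, so the network average is preserved every round and all values converge to it. Lossy wireless broadcasts effectively make the neighbor sets vary over time, which is one of the complications such algorithms must handle.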