Distributed Communication-constrained Learning

28 August 2022, Crowne Plaza, Belgrade


STATOS 2022 features cutting-edge, in-depth one-hour plenary talks by world-class experts in wireless communications, distributed/federated learning, and learning-communication co-design. The workshop will take place in room Baltic at the Crowne Plaza in Belgrade, the same venue as EUSIPCO 2022.


  • Alexander Jung

    Alexander Jung (Website)

    Alexander Jung obtained his Ph.D. (sub auspiciis) from Technical University Vienna in 2012. After postdoctoral periods at TU Vienna and ETH Zurich, he joined Aalto University as an Assistant Professor for Machine Learning in 2015. He leads the group “Machine Learning for Big Data”, which studies explainable machine learning on network-structured data. Alex first-authored a paper that won a Best Student Paper Award at IEEE ICASSP 2011. He received an AWS Machine Learning Research Award and was named "Computer Science Teacher of the Year" at Aalto University in 2018. He currently serves as an Associate Editor for IEEE Signal Processing Letters and as Chair of the IEEE Finland Joint Chapter on Signal Processing and Circuits and Systems. His textbook "Machine Learning: The Basics" was published by Springer in 2022.

  • Danijela Cabric

    Danijela Cabric (Website)

    Danijela Cabric is a Professor in the Electrical and Computer Engineering Department at the University of California, Los Angeles (UCLA). She received her M.S. from UCLA in 2001 and her Ph.D. from the University of California, Berkeley in 2007, both in Electrical Engineering. In 2008, she joined UCLA as an Assistant Professor, where she heads the Cognitive Reconfigurable Embedded Systems lab. Her current research projects include novel radio architectures, signal processing, communications, machine learning, and networking techniques for spectrum sharing, 5G millimeter-wave, massive MIMO, and IoT systems. She is a principal investigator in three large cross-disciplinary multi-university centers: SRC/JUMP ComSenTer, CONIX, and NSF SpectrumX.
    Prof. Cabric received the Samueli Fellowship in 2008, the Okawa Foundation Research Grant in 2009, the Hellman Fellowship and the National Science Foundation Faculty Early Career Development (CAREER) Award in 2012, and the Qualcomm Faculty Award in 2020 and 2021. She serves as an Associate Editor for several IEEE journals and on the IEEE Signal Processing for Communications and Networking Technical Committee. She was the General Chair of the IEEE Vehicular Networking Conference (VNC) in 2019 and of IEEE Dynamic Spectrum Access (DySPAN) in 2021, and a Distinguished Lecturer for the IEEE Communications Society from 2018 to 2019. Prof. Cabric is an IEEE Fellow.

  • Stefan Vlaski

    Stefan Vlaski (Website)

    Stefan Vlaski is a Lecturer in the Communications and Signal Processing Group within the Department of Electrical and Electronic Engineering at Imperial College London, where he conducts research at the intersection of machine learning, network science, and optimisation with applications in signal processing and communications. He received the B.Sc. degree in Electrical Engineering from Technical University Darmstadt, Germany, in 2013, and the M.S. and Ph.D. degrees in Electrical and Computer Engineering from the University of California, Los Angeles, USA, in 2014 and 2019, respectively. From 2019 to 2021 he was a Postdoctoral Researcher with the Adaptive Systems Laboratory at École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. He was a recipient of the German National Scholarship at TU Darmstadt and the Graduate Division Fellowship at UCLA. His papers were recognized in Best Student Paper contests at IEEE ICASSP 2016 and IEEE CAMSAP 2019, and his research has led to patents assigned to UCLA and Amazon.

  • Lara Dolecek

    Lara Dolecek (Website)

    Lara Dolecek is a Full Professor in the Electrical and Computer Engineering Department, with a courtesy appointment in the Mathematics Department, at the University of California, Los Angeles (UCLA). She holds B.S. (with honors), M.S., and Ph.D. degrees in Electrical Engineering and Computer Sciences, as well as an M.A. degree in Statistics, all from the University of California, Berkeley. She received the 2007 David J. Sakrison Memorial Prize for the most outstanding doctoral research in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. Prior to joining UCLA, she was a postdoctoral researcher with the Laboratory for Information and Decision Systems at the Massachusetts Institute of Technology. She received the IBM Faculty Award (2014), the Northrop Grumman Excellence in Teaching Award (2013), the Intel Early Career Faculty Award (2013), the University of California Faculty Development Award (2013), an Okawa Research Grant (2013), the NSF CAREER Award (2012), and the Hellman Fellowship Award (2011). With her research group and collaborators, she has received numerous best paper awards. Her research interests span coding and information theory, graphical models, statistical methods, and algorithms, with applications to emerging systems for data storage and computing. She currently serves as an Associate Editor for the IEEE Transactions on Information Theory and as Secretary of the IEEE Information Theory Society. Prof. Dolecek is a 2021-2022 Distinguished Lecturer of the IEEE Information Theory Society and has served as a consultant for a number of companies specializing in data communications and storage.

  • Yonina Eldar

    Yonina Eldar (Website)

    Yonina C. Eldar received the B.Sc. degree in Physics in 1995 and the B.Sc. degree in Electrical Engineering in 1996 both from Tel-Aviv University (TAU), Tel-Aviv, Israel, and the Ph.D. degree in Electrical Engineering and Computer Science in 2002 from the Massachusetts Institute of Technology (MIT), Cambridge. From January 2002 to July 2002 she was a Postdoctoral Fellow at the Digital Signal Processing Group at MIT.
    She is currently a Professor in the Department of Mathematics and Computer Science, Weizmann Institute of Science, Rehovot, Israel, where she holds the Dorothy and Patrick Gorman Professorial Chair and heads the Center for Biomedical Engineering. She was previously a Professor in the Department of Electrical Engineering at the Technion, where she held the Edwards Chair in Engineering. She is also a Visiting Professor at MIT, a Visiting Scientist at the Broad Institute, an Adjunct Professor at Duke University, an Advisory Professor of Fudan University, and was a Visiting Professor at Stanford. She is a member of the Israel Academy of Sciences and Humanities (elected 2017), an IEEE Fellow, a EURASIP Fellow, and a Fellow of the 8400 Health Network.

  • Schedule

    1:00-2:00 — Alexander Jung (Aalto University): Networked Federated Learning
    Many important application domains generate distributed collections of heterogeneous local datasets. These local datasets are related via an intrinsic network structure that arises from domain-specific notions of similarity between local datasets. Networked federated learning combines the information in local datasets with their network structure to learn accurate personalized models in a distributed fashion. We formulate networked federated learning as an instance of regularized empirical risk minimization using generalized total variation (GTV) as a regularizer. This formulation unifies and considerably extends recent approaches to federated learning. We develop a duality between GTV minimization and network flow optimization, which proves useful both computationally and conceptually. The network flow picture lends itself naturally to precise conditions on the network structure and local models under which network optimization algorithms succeed.
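As a rough sketch of the kind of objective the abstract describes (the notation below is ours, not necessarily that of the talk), GTV-regularized empirical risk minimization over a network of nodes $\mathcal{V}$ with edges $\mathcal{E}$ can be written as

```latex
\min_{\{\mathbf{w}_i\}_{i \in \mathcal{V}}} \;
\sum_{i \in \mathcal{V}} L_i(\mathbf{w}_i)
\;+\; \lambda \sum_{\{i,j\} \in \mathcal{E}} A_{ij}\, \phi\!\left(\mathbf{w}_i - \mathbf{w}_j\right)
```

where $L_i$ is the local empirical risk at node $i$, the edge weights $A_{ij}$ encode the similarity between local datasets, and $\phi$ is a penalty (e.g., a norm) whose choice determines the particular GTV variant. Small $\lambda$ yields nearly independent local models; large $\lambda$ pushes connected nodes towards a shared model.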
    2:00-3:00 — Danijela Cabric (UCLA): UAV Swarm Enabled Communications: System Design for Spectrum and Energy Efficiency with Security Considerations
    Multi-UAV deployments create new opportunities for wireless communications. By coordinating the UAVs, a swarm can act as a virtual antenna array and use multi-antenna communication schemes such as distributed MIMO and distributed beamforming (BF).
    Distributed MIMO enables a swarm of UAVs to transmit multiple data streams simultaneously to a multi-antenna ground station (GS), thus improving spectral efficiency. Due to the line-of-sight propagation between the swarm and the GS, the MIMO channel is highly correlated, leading to limited multiplexing gains. By optimizing the UAV positions, the swarm can attain the maximum capacity given by the single-user bound. To achieve this capacity, we propose a centralized approach using block coordinate descent and a distributed iterative approach using linear controllers.
    Distributed BF can extend the communication range of a remotely deployed swarm, avoiding the energy wasted in traveling towards the destination radio. To beamform, the UAVs typically rely on feedback from the destination; however, noisy feedback degrades the BF gains. To limit this degradation, we developed an analytical framework that predicts the BF gains at a given SNR and used it to optimize the signaling with the destination. The proposed framework was verified experimentally in the lab and using UAV-mounted software-defined radios (SDRs). We also developed a feedback-free BF approach that eliminates the need for destination feedback entirely in a LOS channel. In this approach, one BF radio acts as a guide and moves so as to point the beam of the remaining radios towards the destination. This approach tolerates localization error and was demonstrated using SDRs.
    The security considerations apply beyond UAVs to any wireless device and include radio authentication and the interpretation of unauthorized signals. For device authentication, we leveraged the radios' RF fingerprints extracted using deep learning and formulated an open-set classification problem to reject signals from unauthorized transmitters. We compared several approaches and studied the impact of the training dataset on performance. To blindly decode unauthorized signals, we proposed the dual path network (DPN), which combines digital signal processing and deep learning for modulation classification and blind symbol decoding. The DPN design yields interpretable outputs and, by jointly estimating the unknown parameters, improves modulation classification accuracy.
    3:00-4:00 — Stefan Vlaski (Imperial College London): Provable and Efficient Learning over Networks
    Rapid increases in the availability of data and computational resources have led to a paradigm shift in many areas of engineering and beyond. While in the past the design and operation of engineering systems were based on carefully crafted (physical) models, an abundance of data and processing capability has allowed these models to be replaced by data-driven solutions, where the modelling step itself is delegated to the machine. Such underdetermined learning tasks frequently give rise to highly non-convex optimisation problems, which can be NP-hard in the worst case. Yet the practical success of gradient-based algorithms in many applications (such as back-propagation for deep learning) suggests that an important subset of non-convex optimisation problems can be solved both efficiently and reliably. Analytical guarantees of this kind have appeared only recently.
    At the same time, the democratisation of technology has caused both data and computational resources to be accessible at dispersed and heterogeneous locations, rather than at powerful centralised processing centers. Data is generated and processed on our mobile devices, in sensors scattered throughout “smart cities” and “smart grids”, and in vehicles on the road. Central aggregation of raw data is frequently neither efficient nor feasible, owing to communication constraints, privacy, and robustness to link and node failures. The purpose of decentralised optimisation and learning is then to devise intelligent, data-driven engineering systems by means of decentralised processing and peer-to-peer interactions, preserving the strong performance guarantees of centralised architectures while ensuring communication efficiency, privacy, and robustness. This talk will review recent results on the development and analysis of learning algorithms over networks and the associated performance trade-offs.
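To make the decentralised-learning setting concrete, the following minimal sketch (our toy example, not code from the talk) runs an adapt-then-combine diffusion strategy for a distributed least-squares problem over a four-agent ring; the network, data, combination matrix, and step size are all hypothetical:

```python
import numpy as np

# Toy decentralized least-squares via adapt-then-combine (ATC) diffusion.
rng = np.random.default_rng(0)
n_agents, dim = 4, 3
w_true = rng.standard_normal(dim)

# Each agent observes its own noisy linear measurements (local data).
A = [rng.standard_normal((20, dim)) for _ in range(n_agents)]
b = [Ai @ w_true + 0.01 * rng.standard_normal(20) for Ai in A]

# Doubly stochastic combination matrix for a ring of 4 agents.
C = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

W = np.zeros((n_agents, dim))    # one iterate per agent
mu = 0.01                        # step size
for _ in range(500):
    # Adapt: each agent takes a local gradient step on its own cost.
    psi = np.array([W[k] - mu * A[k].T @ (A[k] @ W[k] - b[k])
                    for k in range(n_agents)])
    # Combine: each agent averages its neighbours' intermediate iterates.
    W = C @ psi

# All agents agree and approximate the global minimizer.
print(np.max(np.abs(W - w_true)))
```

Each agent only touches its own data and its neighbours' iterates, yet all agents converge to a small neighbourhood of the global minimizer, illustrating how peer-to-peer processing can match a centralised solution.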
    4:00-4:30 — Coffee break
    4:30-5:30 — Lara Dolecek (UCLA): Variable Coded Batch Matrix Multiplication
    Coded computing is a popular method for overcoming stragglers in distributed systems using tools from channel coding. A key task is coded matrix-matrix multiplication. The majority of the prior literature on coded matrix-matrix computation has focused on two directions: matrix partitioning for a single computation task and batch processing of multiple distinct computation tasks. While these works provide codes with good straggler resilience and fast decoding for their problem spaces, such codes cannot take advantage of the natural redundancy of re-using matrices across batch jobs. In this talk, we introduce the Variable Coded Distributed Batch Matrix Multiplication (VCDBMM) problem, which tasks a distributed system with performing batch matrix multiplication where matrices are not necessarily distinct among batch jobs. We then present Flexible Cross-Subspace Alignment (FCSA) codes that are flexible enough to exploit this redundancy. We provide a full characterization of FCSA codes, which allow for a wide variety of system complexities including good straggler resilience and fast decoding, and we discuss their performance in practical scenarios.
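As background on why coding helps with stragglers, the sketch below shows a classical toy scheme (a simple (3,2) MDS-style parity code, not the FCSA codes of the talk): three workers each compute part of a matrix-vector product, and the result can be decoded from any two of them:

```python
import numpy as np

# Toy straggler-tolerant coded computation with one parity worker.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 5))
x = rng.standard_normal(5)

A1, A2 = A[:2], A[2:]            # split the task into two halves
tasks = [A1, A2, A1 + A2]        # third worker gets the parity block

# Pretend worker 0 straggles: we only receive results from workers 1 and 2.
r1 = tasks[1] @ x                # = A2 @ x
r2 = tasks[2] @ x                # = (A1 + A2) @ x

# Decode: any two of the three results determine A @ x.
y = np.concatenate([r2 - r1, r1])
print(np.allclose(y, A @ x))     # True
```

The coded schemes in the talk generalize this idea to batches of matrix-matrix products while additionally exploiting matrices shared across jobs.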
    5:30-6:30 — Yonina Eldar (Weizmann Institute of Science): Model-Based Deep Learning: Applications to Imaging and Communications
    Deep neural networks provide unprecedented performance gains in many real-world problems in signal and image processing. Despite these gains, the future development and practical deployment of deep networks are hindered by their black-box nature, i.e., a lack of interpretability and the need for very large training sets.
    Signal processing and communications, on the other hand, have traditionally relied on classical statistical modeling techniques that utilize mathematical formulations representing the underlying physics, prior information, and additional domain knowledge. Simple classical models are useful but sensitive to inaccuracies, and may perform poorly when real systems display complex or dynamic behavior. Here we introduce various approaches to model-based learning that merge parametric models with optimization tools and classical algorithms, leading to efficient, interpretable networks trained from reasonably sized training sets. We will consider examples of applying such model-based deep networks to image deblurring, image separation, super-resolution in ultrasound and microscopy, and efficient communication systems, and finally we will see how model-based methods can also be used for efficient diagnosis of COVID-19 using X-ray and ultrasound.


    Deniz Gunduz (Imperial College London)

    Anja Klein (Technical University of Darmstadt)

    Marius Pesavento (Technical University of Darmstadt)

    Abdelhak Zoubir (Technical University of Darmstadt)

    Nikos Sidiropoulos (University of Virginia)

    Georgios Giannakis (University of Minnesota)

    Goran Dimić (Institute Mihajlo Pupin)

    Past Workshops

    STATOS 2018

    STATOS 2016

    STATOS 2013