NSF-AoF: Lean-6G: Learning to Network the Edge in 6G
This is a joint project between Yale University and the University of Helsinki, funded in the US by the National Science Foundation (NSF) and in Finland by the Academy of Finland (AoF).
Background

This project aims to develop theoretical foundations for in-network data management that exploit different types of network resources and enhance the performance of distributed machine learning models and the corresponding applications. Distributed machine learning leverages resources at the network edge, near the data sources, for model training and inference, thereby offloading these tasks from centralized cloud platforms and satisfying application demands for low latency, scalability, and privacy.

Training ML models at realistic 5G/IoT network scales requires processing the enormous volumes of data generated by end-user devices, leading to a huge amount of data communication between network users and the central cloud platform. As a consequence, the data-uploading and training procedure causes significant performance degradation in real-time applications (e.g., AR/VR). Finally, since the training data are collected by network users, preserving the privacy of those users with an affordable loss in accuracy is an important issue in distributed ML.

Developing data-aware resource allocation algorithms for distributed ML systems is a complex and open problem. Existing work on distributed ML can be classified into two categories: (i) work focusing on how to run ML tasks (model training and inference) at the edge devices in a collaborative and resource-optimized manner, and (ii) work focusing on how to exploit ML to find better solutions to the optimization problems in the edge network. It has been shown that data quality is a critical factor in machine learning algorithms, and that data-importance-aware network resource allocation and training algorithms can improve the performance of ML systems. Therefore, a unified treatment of data selection, communication, and computing is required to improve the performance of distributed ML systems.
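As a toy illustration of the data-importance-aware selection idea described above (a minimal sketch, not the project's actual algorithms), the following Python example has each edge device rank its local samples by the current model's loss and upload a gradient computed only on its most informative samples, which the server then averages in a FedAvg-style round. All names, the scalar model, and the loss-based importance proxy are illustrative assumptions.

```python
import random

def importance(sample, w):
    # Proxy for data importance: squared error of the current model on
    # this sample; a higher loss suggests a more informative sample.
    x, y = sample
    return (w * x - y) ** 2

def select_top_k(data, w, budget):
    # Keep only the `budget` most important samples, saving uplink bandwidth.
    return sorted(data, key=lambda s: importance(s, w), reverse=True)[:budget]

def federated_round(devices, w, budget, lr=0.05):
    # One communication round: each device uploads a gradient computed on
    # its selected samples only; the server averages them (FedAvg-style).
    grads = []
    for data in devices:
        chosen = select_top_k(data, w, budget)
        # Gradient of the mean squared error for the scalar model y ~ w * x.
        g = sum(2 * (w * x - y) * x for x, y in chosen) / len(chosen)
        grads.append(g)
    return w - lr * sum(grads) / len(grads)

random.seed(0)
TRUE_W = 3.0

def sample():
    x = random.uniform(-1.0, 1.0)
    return x, TRUE_W * x + random.gauss(0, 0.1)  # noisy label

# Four edge devices, each holding 50 local samples.
devices = [[sample() for _ in range(50)] for _ in range(4)]

w = 0.0
for _ in range(200):
    w = federated_round(devices, w, budget=10)  # upload only 10 of 50 samples
print(round(w, 2))
```

Even though each device communicates statistics for only a fifth of its data per round, the learned weight approaches the true value, which is the kind of communication/accuracy trade-off a unified treatment of data selection, communication, and computing seeks to optimize.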
More importantly, protecting against data leakage in the distributed resource-sharing process is an unresolved practical issue in protocol realization.

Goals

This project aims to address the above challenges, arising in distributed machine learning systems, through theoretical research and healthcare applications. Overall, the major goals of this project can be summarized as follows:
Contact
Prof. Leandros Tassiulas, Yale University