posted by organizer: yuhanpanda

FL - NeurIPS 2019 : The 2nd International Workshop on Federated Learning for Data Privacy and Confidentiality (in Conjunction with NeurIPS 2019)


When Dec 13, 2019 - Dec 13, 2019
Where West 118–120, Vancouver Convention Center
Submission Deadline Sep 9, 2019
Notification Due Sep 30, 2019
Categories    machine learning   artificial intelligence   federated learning   data privacy

Call For Papers

FL-NeurIPS 2019 Workshop Program

Date: Friday, 13 December, 2019
Venue: West 118 – 120, Vancouver Convention Center, Vancouver, Canada

08:55 – 09:00 | Opening Remarks by Lixin Fan

09:00 – 09:30 | Invited Talk by Qiang Yang
Title: Federated Learning in Recommendation Systems
09:30 – 10:00 | Invited Talk by Ameet Talwalkar
Title: Personalized Federated Learning

10:00 – 10:30 | Tea Break & Poster Exhibition
10:30 – 11:00 | Invited Talk by Max Welling
Title: Ingredients for Bayesian, Privacy Preserving, Distributed Learning
11:00 – 11:30 | Invited Talk by Dawn Song
Title: Decentralized Federated Learning with Data Valuation

Session 1: Effectiveness and Robustness
11:30 – 11:40 | Paul Pu Liang, Terrance Liu, Liu Ziyin, Russ Salakhutdinov and Louis-Philippe Morency. Think Locally, Act Globally: Federated Learning with Local and Global Representations
11:40 – 11:50 | Daniel Peterson, Pallika Kanani and Virendra Marathe. Private Federated Learning with Domain Adaptation
11:50 – 12:00 | Daliang Li and Junpu Wang. FedMD: Heterogeneous Federated Learning via Model Distillation
12:00 – 12:10 | Yihan Jiang, Jakub Konečný, Keith Rush and Sreeram Kannan. Improving Federated Learning Personalization via Model Agnostic Meta Learning

12:10 – 13:30 | Lunch & Poster Exhibition

13:30 – 14:00 | Invited Talk by Daniel Ramage
Title: Federated Learning at Google – Systems, Algorithms, and Applications in Practice
14:00 – 14:30 | Invited Talk by Françoise Beaufays
Title: Applied Federated Learning – What it Takes to Make it Happen, and Deployment in GBoard, the Google Keyboard

Session 2: Communication and Efficiency
14:30 – 14:40 | Jianyu Wang, Anit Sahu, Zhouyi Yang, Gauri Joshi and Soummya Kar. MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
14:40 – 14:50 | Sebastian Caldas, Jakub Konečný, H. Brendan McMahan and Ameet Talwalkar. Mitigating the Impact of Federated Learning on Client Resources
14:50 – 15:00 | Yang Liu, Yan Kang, Xinwei Zhang, Liping Li and Mingyi Hong. A Communication Efficient Vertical Federated Learning Framework
15:00 – 15:10 | Ahmed Khaled, Konstantin Mishchenko and Peter Richtárik. Better Communication Complexity for Local SGD

15:10 – 15:30 | Tea Break & Poster Exhibition
15:30 – 16:00 | Invited Talk by Raluca Ada Popa
Title: Helen: Coopetitive Learning for Linear Models
16:30 – 17:00 | Invited Talk by Yiqiang Chen
Title: FOCUS: Federated Opportunistic Computing for Ubiquitous Systems

Session 3: Privacy and Fairness
17:00 – 17:10 | Xin Yao, Tianchi Huang, Rui-Xiao Zhang, Ruiyu Li and Lifeng Sun. Federated Learning with Unbiased Gradient Aggregation and Controllable Meta Updating
17:10 – 17:20 | Zhicong Liang, Bao Wang, Stanley Osher and Yuan Yao. Exploring Private Federated Learning with Laplacian Smoothing
17:20 – 17:30 | Tribhuvanesh Orekondy, Seong Joon Oh, Yang Zhang, Bernt Schiele and Mario Fritz. Gradient-Leaks: Understanding Deanonymization in Federated Learning
17:30 – 17:40 | Aleksei Triastcyn and Boi Faltings. Federated Learning with Bayesian Differential Privacy

17:40 – 18:00 | Panel Discussion (Mediated by: Qiang Yang)
1. Yiqiang Chen, Professor, Institute of Computing Technology, Chinese Academy of Sciences
2. Boi Faltings, Professor, EPFL, AAAI Fellow
3. Chunyan Miao, Professor, Chair, School of Computer Science and Engineering, Nanyang Technological University, Singapore
4. Daniel Ramage, Research Scientist, Google Research
5. Dawn Song, Professor, University of California, Berkeley
6. Max Welling, Professor, University of Amsterdam; VP Technologies, Qualcomm

18:00 – 18:05 | Closing Remarks by Brendan McMahan

End of the Workshop

Proceed to Reception Venue:

Vancouver Marriott Pinnacle Downtown Hotel, Level 3 Pinnacle Ball Room

19:00 – 21:00 | WeBank AI Night, Reception & Award Ceremony

19:00 – 19:10 | Welcome Speech by WeBank CAIO Prof Qiang Yang
19:10 – 19:30 | WeBank and MILA Partnership Announcement
19:30 – 19:50 | WeBank and Tencent Partnership Announcement
19:50 – 20:10 | FL-NeurIPS 2019 Award Ceremony
20:10 – 21:00 | Reception and Networking


Distinguished Paper Awards:
1. Daniel Peterson, Pallika Kanani and Virendra Marathe. Private Federated Learning with Domain Adaptation
2. Daliang Li and Junpu Wang. FedMD: Heterogeneous Federated Learning via Model Distillation

Distinguished Student Paper Awards:
1. Paul Pu Liang, Terrance Liu, Liu Ziyin, Russ Salakhutdinov and Louis-Philippe Morency. Think Locally, Act Globally: Federated Learning with Local and Global Representations
2. Jianyu Wang, Anit Sahu, Zhouyi Yang, Gauri Joshi and Soummya Kar. MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling

Privacy and security have become major concerns in recent years, particularly as companies and organizations collect increasingly detailed information about their products and users. This information can enable machine learning that produces more helpful products, but it also expands the potential for misuse and heightens public concern about how companies use data, particularly when private data about individuals is involved. Recent research shows that privacy and utility need not be at odds; both can be addressed through careful design and analysis. The need for such research is reinforced by new legal constraints, led by the European Union’s General Data Protection Regulation (GDPR), which is already inspiring novel legislative approaches around the world, such as the Cybersecurity Law of the People’s Republic of China and the California Consumer Privacy Act of 2018.

A specific approach with the potential to address a number of problems in this space is Federated Learning. Federated Learning is relevant whenever one wants to train a machine learning model on a dataset stored across multiple locations, without the ability to move the data to any central location. This seemingly mild restriction renders many state-of-the-art machine learning techniques impractical. One class of applications arises when data is generated by different users of a smartphone app and stays on users’ phones for privacy reasons; for example, Google’s Gboard mobile keyboard already uses federated learning in multiple places. Another class involves data collected by different organizations that cannot be shared for confidentiality reasons. The same restrictions can also arise independently of privacy concerns, as with data streams collected by IoT devices or self-driving cars, which must be processed on-device because transmitting and storing the sheer volume of data is infeasible.
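The core training loop in this setting can be illustrated with Federated Averaging: clients train locally on data that never leaves them, and a server only aggregates model parameters. The sketch below is a minimal NumPy illustration on a linear-regression loss, not any production system's implementation; all function names are ours.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few epochs of gradient descent
    on a least-squares loss. Only the updated weights are returned;
    the raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, rounds=10):
    """Server loop: each round, every client trains locally, and the
    server averages the returned models weighted by local dataset size."""
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        total = sum(sizes)
        global_w = sum(n / total * w for n, w in zip(sizes, updates))
    return global_w
```

In practice each round would sample only a subset of clients and add secure aggregation or differential privacy on top; the weighted average above is the bare skeleton those mechanisms wrap.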

At this moment, the pace of research innovation in federated learning is hampered by the relative complexity of properly setting up even simple experiments that reflect the practical setting. This issue is exacerbated in academic settings, which typically lack access to actual user data. Recently, multiple open-source projects were created to address this high barrier to entry. For example, LEAF is a benchmarking framework that contains preprocessed datasets, each with a “natural” partitioning that aims to reflect the type of non-identically distributed data partitions encountered in practical federated environments. Federated AI Technology Enabler (FATE), led by WeBank, is an open-source technical framework that enables distributed and scalable secure computation protocols based on homomorphic encryption and multi-party computation, supporting federated learning architectures with various machine learning algorithms; WeBank is also leading a related IEEE standard proposal. TensorFlow Federated (TFF), led by Google, is an open-source framework on top of TensorFlow for flexibly expressing arbitrary computation on decentralized data. TFF enables researchers to experiment with federated learning on their own datasets or those provided by LEAF. Google has also published a systems paper describing the design of its production system, which supports tens of millions of mobile phones. We expect these projects to encourage academic researchers and industry engineers to work more closely together in addressing these challenges and eventually make a significant positive impact. We support reproducible research and will sponsor a prize for the best contribution that also provides code to reproduce its results.
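The non-identically distributed partitions these benchmarks emphasize can also be simulated from any labeled dataset. A common pathological split sorts examples by label, cuts them into shards, and deals a few shards to each client, so each client sees only a couple of classes. This is a generic sketch of that idea (our own helper, not an API of LEAF or TFF):

```python
import numpy as np

def label_skew_partition(labels, n_clients, shards_per_client=2, seed=0):
    """Simulate a non-IID federated split: sort example indices by label,
    cut them into n_clients * shards_per_client shards, and deal shards
    to clients at random, so each client sees few distinct classes."""
    rng = np.random.default_rng(seed)
    order = np.argsort(labels)  # indices grouped by label
    shards = np.array_split(order, n_clients * shards_per_client)
    perm = rng.permutation(len(shards))  # deal shards at random
    return [np.concatenate([shards[j] for j in perm[i::n_clients]])
            for i in range(n_clients)]
```

Partitions like this are what make naive averaging struggle, which is why benchmarks ship “natural” splits rather than uniform random ones.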

The workshop aims to bring together academic researchers and industry practitioners with common interests in this domain. For industry participants, we intend to create a forum for communicating which problems are practically relevant. For academic participants, we hope to make it easier to become productive in this area. Overall, the workshop should provide an opportunity to share the most recent and innovative work in the area and to discuss open problems and relevant approaches. Encouraged technical topics include general computation based on decentralized data (i.e., not only machine learning) and how such computations can be combined with other research fields, such as differential privacy, secure multi-party computation, computational efficiency, and coding theory. Contributions in theory as well as applications are welcome, particularly proposals for novel system design.

Call for Contributions
We welcome high quality submissions in the broad area of federated learning (FL). A few (non-exhaustive) topics of interest include:
1. Optimization algorithms for FL, particularly communication-efficient algorithms tolerant of non-IID data
2. Approaches that scale FL to larger models, including model and gradient compression techniques
3. Novel applications of FL
4. Theory for FL
5. Approaches to enhancing the security and privacy of FL, including cryptographic techniques and differential privacy
6. Bias and fairness in the FL setting
7. Attacks on FL including model poisoning, and corresponding defenses
8. Incentive mechanisms for FL
9. Software and systems for FL
10. Novel applications of techniques from other fields to the FL setting: information theory, multi-task learning, model-agnostic meta-learning, etc.
11. Work on fully decentralized (peer-to-peer) learning, which overlaps significantly with FL in both interest and techniques

Submissions in the form of extended abstracts must be at most 4 pages long (excluding references) and adhere to the NeurIPS 2019 format. Submissions should be anonymized. The workshop will not have formal proceedings, but authors of accepted contributions will be expected to present a poster at the workshop.

Submission link:

Organizing Committee
Lixin Fan (WeBank, China)
Jakub Konečný (Google, USA)
Yang Liu (WeBank, China)
Brendan McMahan (Google, USA)
Virginia Smith (Carnegie Mellon University, USA)
Han Yu (Nanyang Technological University, Singapore)

Program Committee
Adria Gascon (The Alan Turing Institute / University of Warwick, UK)
Anis Elgabli (University of Oulu, Finland)
Aurélien Bellet (Inria, France)
Ayfer Ozgur (Stanford University, USA)
Bingsheng He (National University of Singapore, Singapore)
Boi Faltings (Ecole Polytechnique Fédérale de Lausanne, Switzerland)
Chaoping Xing (Nanyang Technological University, Singapore)
Chaoyang He (University of Southern California, USA)
Dimitrios Papadopoulos (Hong Kong University of Science and Technology, Hong Kong)
Fabio Casati (University of Trento, Italy)
Farinaz Koushanfar (University of California San Diego, USA)
Gauri Joshi (Carnegie Mellon University, USA)
Graham Cormode (University of Warwick, UK)
Jalaj Upadhyay (Apple, USA)
Ji Feng (Sinnovation Ventures AI Institute, China)
Jianshu Weng (AI Singapore, Singapore)
Jihong Park (University of Oulu, Finland)
Joshua Gardner (University of Michigan, USA)
Jun Zhao (Nanyang Technological University, Singapore)
Keith Bonawitz (Google, USA)
Lalitha Sankar (Arizona State University, USA)
Leye Wang (Peking University, China)
Marco Gruteser (Google, USA)
Martin Jaggi (Ecole Polytechnique Fédérale de Lausanne, Switzerland)
Mehdi Bennis (University of Oulu, Finland)
Mingshu Cong (The University of Hong Kong, Hong Kong)
Nguyen Tran (The University of Sydney, Australia)
Peter Kairouz (Google, USA)
Pingzhong Tang (Tsinghua University, China)
Praneeth Vepakomma (Massachusetts Institute of Technology, USA)
Prateek Mittal (Princeton University, USA)
Richard Nock (Data61, Australia)
Rui Lin (Chalmers University of Technology, Sweden)
Sewoong Oh (University of Illinois at Urbana-Champaign, USA)
Shiqiang Wang (IBM, USA)
Siwei Feng (Nanyang Technological University, Singapore)
Tara Javidi (University of California San Diego, USA)
Xi Weng (Peking University, China)
Yihan Jiang (University of Washington, USA)
Yong Cheng (WeBank, China)
Yongxin Tong (Beihang University, China)
Zelei Liu (Nanyang Technological University, Singapore)
Zheng Xu (University of Science and Technology of China, China)
