FML 2019 : IEEE BigData 2019 - Special Track on Federated Machine Learning
Call For Papers
Special Session Programme
Tuesday, December 10th, 2019
Location: Santa Barbara C, Westin Bonaventure Hotel & Suites, Los Angeles, CA, USA
16:20 - 16:25 | Opening Address by Prof. Qiang Yang
16:25 - 16:40 | Federated Learning with Bayesian Differential Privacy: Aleksei Triastcyn and Boi Faltings
16:40 - 16:55 | SGNN: A Graph Neural Network Based Federated Learning Approach by Hiding Structure: Guangxu Mei, Ziyu Guo, Shijun Liu, and Li Pan
16:55 - 17:10 | Measure Contribution of Participants in Federated Learning: Guan Wang, Charlie Xiaoqian Dang, and Ziye Zhou
17:10 - 17:25 | Profit Allocation for Federated Learning: Tianshu Song, Yongxin Tong, and Shuyue Wei
17:25 - 17:30 | Break
17:30 - 17:45 | Secure and Efficient Federated Transfer Learning: Shreya Sharma, Chaoping Xing, Yang Liu, and Yan Kang
17:45 - 18:00 | Infer Latent Privacy for Attribute Network in Knowledge Graph: Zeyuan Cui, Li Pan, Shijun Liu, and Lizhen Cui
18:00 - 18:15 | Privacy-preserving Heterogeneous Federated Transfer Learning: Dashan Gao, Yang Liu, Anbu Huang, Ce Ju, Han Yu, and Qiang Yang
18:15 - 18:30 | Power Demand Response Incentive Pricing Model: Kun Zhang, Yuliang Shi, Yuecan Liu, and Zhongmin Yan
Privacy and security have become key concerns in our digital age. Companies and organizations collect a wealth of data on a daily basis, but data owners must be cautious when unlocking the value in that data, since the data most useful for machine learning often tends to be confidential. The European Union’s General Data Protection Regulation (GDPR) brings new legislative challenges to the big data and artificial intelligence (AI) community. Many common operations in the big data domain, such as merging user data from various sources to build an AI model, are illegal under the new regulatory framework if they are performed without explicit user authorization.
To explore how the AI research community can adapt to this new regulatory reality, we organize this special track on Federated Machine Learning (FML). The special track focuses on machine learning and big data analytics techniques that preserve privacy and security. Technical issues include, but are not limited to, data collection, integration, training, and modelling, in both centralized and distributed settings. The special track intends to provide a forum to discuss open problems and to share the most recent ground-breaking work on the study and application of GDPR-compliant machine learning. It will also serve as a venue for networking: researchers from different communities interested in these problems will have ample time to share thoughts and experience, promoting possible long-term collaborations. Both theoretical and application-based contributions are welcome.
The special track seeks to explore new ideas with particular focus on addressing the following challenges:
• Security and Regulation Compliance: How can a solution meet security and compliance requirements? Does it ensure data privacy and model security?
• Collaboration and Expansion: Does the solution connect business partners across different parties and industries? Does it exploit and extend the value of data while observing user privacy and data security?
• Promotion and Empowerment: Is the solution sustainable and intelligent? Does it include incentive mechanisms that encourage parties to participate on a continuous basis? Does it promote a stable, win-win business ecosystem?
We welcome submissions on recent advances in privacy-preserving, secure machine learning and artificial intelligence systems. All accepted papers will be presented during the conference, and at least one author of each accepted paper is expected to register for and attend the conference. Topics include but are not limited to:
1. Adversarial learning, data poisoning, adversarial examples, adversarial robustness, black-box attacks
2. Architecture and privacy-preserving learning protocols
3. Federated learning and distributed privacy-preserving algorithms
4. Human-in-the-loop for privacy-aware machine learning
5. Incentive mechanism and game theory
6. Privacy-aware, knowledge-driven federated learning
7. Privacy-preserving techniques (secure multi-party computation, homomorphic encryption, secret sharing techniques, differential privacy) for machine learning
8. Responsible, explainable, and interpretable AI
9. Security for privacy
10. Trade-off between privacy and efficiency
11. Approaches to make AI GDPR-compliant
12. Crowd intelligence
13. Data value and economics of data federation
14. Open-source frameworks for distributed learning
15. Safety and security assessment of AI solutions
16. Solutions to data security and small-data challenges in industries
17. Standards of data privacy and security
Please submit a full-length paper (up to 10 pages in IEEE 2-column format) through the online submission system.
Paper Submission Page: http://wi-lab.com/cyberchair/2019/bigdata19/scripts/ws_submit.php?subarea=SP
Papers should be formatted according to the IEEE Computer Society Proceedings Manuscript Formatting Guidelines (https://www.ieee.org/conferences/publishing/templates.html).
- Yang Liu (WeBank, China)
- Han Yu (Nanyang Technological University, Singapore)
- Bo Li (University of Illinois at Urbana-Champaign, USA)
- Yang Liu (UC Santa Cruz, USA)
- Zelei Liu (Nanyang Technological University, Singapore)
- Jian Lou (Vanderbilt University, USA)
- Qiaozhu Mei (University of Michigan, USA)
- Yongxin Tong (Beihang University, China)
- Zhiguang Wang (Facebook, USA)
- Hui Xiong (Rutgers University, USA)
- Haibin Zhang (University of Maryland Baltimore County, USA)
- Haifeng Zhang (Carnegie Mellon University, USA)
- Junbo Zhang (JD.com, China)