STMUS 2025: International Workshop on Secure and Trustworthy Machine Unlearning Systems (co-located with ESORICS)
Link: https://www.ntu.edu.sg/dtc/esorics-workshop
Call For Papers | |||||||||||||||
Machine Unlearning (MU) is an emerging and promising technology that addresses the need for safe AI systems to comply with privacy regulations and safety requirements by removing undesired knowledge from AI models. As AI integration deepens across various sectors, the ability to selectively forget and eliminate knowledge from trained models, without retraining them from scratch, offers significant advantages. This not only aligns with important data protection principles such as the “Right To Be Forgotten” (RTBF) but also enhances AI by removing undesirable, unethical, and even harmful memory from models.
However, the development of machine unlearning systems introduces complex security challenges. For example, when unlearning services are integrated into Machine Learning as a Service (MLaaS), multiple participants are involved, e.g., model developers, service providers, and users. Adversaries might exploit vulnerabilities in unlearning systems to attack ML models, e.g., by injecting backdoors, degrading model utility, or exploiting information leakage. Such attacks can be mounted by crafting unlearning requests, poisoning unlearning data, or reconstructing data and inferring membership using knowledge obtained from the unlearning process. Unlearning systems are therefore susceptible to a range of threats, risks, and attacks that could lead to misuse, resulting in privacy breaches and data leakage. The intricacy of these vulnerabilities calls for sophisticated strategies for threat identification, risk assessment, and the implementation of robust security measures against both internal and external attacks. Despite its significance, there remains a widespread lack of comprehensive understanding and consensus among the research community, industry stakeholders, and government agencies regarding methodologies and best practices for implementing secure and trustworthy machine unlearning systems. This gap underscores the need for greater collaboration and knowledge exchange to develop practical and effective mechanisms that ensure the safe and ethical use of machine unlearning techniques.

Topics include but are not limited to:

1. Architectures and algorithms for efficient machine unlearning.
2. Security vulnerabilities and threats specific to machine unlearning.
3. Strategies to manage vulnerabilities in machine unlearning systems.
4. Machine unlearning for large-scale AI models, e.g., large language models and multi-modal large models.
5. Evaluation of machine unlearning effectiveness, including metrics and testing methodologies.
6. Machine unlearning for data privacy and public trust.
7. Machine Unlearning as a Service.
8. Machine unlearning in distributed systems, e.g., federated unlearning.
9. Real-world applications and case studies of unlearning for AI systems.