AICS 2025: The Second International Workshop on Artificial Intelligence for Cybersecurity
Link: https://fllm-conference.org/2025/Workshops/AICS2025/
Call For Papers | |||||||||||||||
The 2nd Workshop on Artificial Intelligence for Cybersecurity (AICS) explores the growing intersection between AI, particularly Foundation Models (FMs) and Large Language Models (LLMs), and cybersecurity. As LLMs become embedded in security-critical systems and operations, they bring unprecedented capabilities in automation, reasoning, and threat detection. At the same time, they introduce new attack vectors, privacy concerns, and governance challenges. This workshop provides a focused venue to examine how FMs and LLMs can be designed, adapted, and deployed to support cybersecurity tasks such as anomaly detection, secure software engineering, threat intelligence, and response automation. AICS also invites discussion on the vulnerabilities of these models themselves, including adversarial attacks, data leakage, and misuse.
By convening experts from the AI, cybersecurity, and policy domains, AICS aims to foster multidisciplinary dialogue and chart a responsible path toward integrating AI into secure digital ecosystems. The workshop invites original research, tools, case studies, and position papers that address the technical, practical, and ethical aspects of AI for cybersecurity, as well as cybersecurity for AI systems.

We welcome research contributions, position papers, and case studies on topics including but not limited to:

- LLMs for threat intelligence, anomaly detection, intrusion detection, and fraud prevention
- Prompt engineering for secure task design
- Adversarial attacks on FLLMs (e.g., prompt injection, jailbreaks, evasion)
- Privacy-preserving learning and inference with foundation models
- Security vulnerabilities in open-source and fine-tuned models
- Cybersecurity in multimodal and federated LLM systems
- Secure deployment and governance of LLM-powered systems
- Explainability, robustness, and trust in AI-based security tools
- Misuse detection (e.g., phishing, malware generation, abuse of generative models)
- Case studies and real-world applications of AI for cyber defense
- Ethical, legal, and policy issues in AI and cybersecurity