
IFIP Summer School 2024 : 19th IFIP Summer School on Privacy and Identity Management: Generating Futures


Link: https://ifip-summerschool.github.io/call_for_paper/
 
When: Sep 10, 2024 - Sep 13, 2024
Where: Madrid
Submission Deadline: Apr 21, 2024
Notification Due: May 7, 2024
Final Version Due: Jul 7, 2024
Categories: privacy, identity management, law, computer science
 

Call For Papers

Deadline update: You can submit abstracts until April 21st, 2024!

Generating Futures
Generative AI as a commodity might still be considered a novelty, but it is already significantly impacting society. Its ultimate consequences are yet to be discovered, but the immediate effects are already visible, especially in combination with other technologies: from assistants linked to e-commerce platforms to home control systems, all the way to the workplace, particularly in the creative industries, there seem to be many possible applications.

At the same time, in several instances, the outcomes of AI systems have already led to harm, such as discrimination or fake news. If we do not want the providers of AI systems to generate our future on their own, we will need a broad societal effort to decide how to deal with this technology. Policymakers and data protection authorities have already clarified, for example, that generative AI must comply with privacy and data protection rules before it is used. Identity management, security, and confidentiality are also pivotal for developing safe AI.

The 19th IFIP Summer School on Privacy and Identity Management welcomes contributions dealing with all privacy and identity management-related research. Nonetheless, this year, we emphasize the interplay between novel AI applications—with a focus on generative AI—and privacy and identity management.

The advent of generative AI technologies has led to many privacy concerns and ethical dilemmas, highlighted by recent high-profile cases, as providers of AI services quite often operate globally while disregarding local privacy regulations and ethical considerations. Key issues of these services include the use of personal data, initially collected for other purposes, to train machine-learning models, and the indiscriminate web scraping by AI tools, which may capture personal data in violation of data protection rules, e.g., without consent. A particularly concerning aspect is the unintentional leakage of personal data used in the training phase of these AI models, such as a synthetic face generation tool replicating a training subject’s face. Another critical issue is the lack of transparency regarding the fate of data entered into AI tools as prompts, which is often reused without authorization to improve subsequent versions of those tools, affecting the personal data contained in such queries.

Moreover, the tendency of generative AI to produce false or ‘hallucinated’ information, especially when it relates to individuals, poses a significant threat to the principles of data integrity and accuracy. The ease of creating fake photo and video montages, such as ‘deep fakes’, through generative AI tools further exacerbates privacy concerns.

The models underlying generative AI systems often contain racist and sexist material from the open web. Especially when such models are used in decision-making processes, there are serious risks of discrimination and other harms for those affected by the resulting decisions, particularly people who are already marginalized. These risks are exacerbated when decisions are fully automated. Current models also largely lack explainability and disregard algorithmic transparency principles. Even the software development process is not immune: tools like Copilot influence the privacy features of the products being developed, raising questions about adherence to privacy requirements and best practices in code generation.

While many of these issues predate generative AI, they are significantly amplified by its emergence. Generative AI products and services must conform to principles such as fairness, accountability, transparency, security, relevance, and non-discrimination to respect and enhance the privacy of the data subjects involved. Our goal is to foster a comprehensive understanding of the challenges and opportunities presented by the intersection of AI and privacy and to explore innovative solutions for the future that respect the rights of those concerned.

The 19th IFIP Summer School on Privacy and Identity Management aims to create a platform for spreading awareness and facilitating interactions and knowledge exchanges around old and new issues by taking a holistic approach to society and technology. We support interdisciplinary research exchange and foster discussions through keynote lectures, tutorials, and workshops. Participants will benefit from presenting their research and receiving meaningful feedback. The IFIP Summer School culminates in the publication of selected papers by the participants as an edited volume published by Springer.

We welcome contributions addressing these issues from any of the following perspectives (and especially the interdisciplinary combination of more than one): sociological, legal, technical, ethical, political, surveillance, intersectional, anthropological, economic, historical, media & communication, regulatory, philosophical, critical, disabilities, and psychological studies in the areas of privacy, data protection, and identity management.

The IFIP Summer School encourages not only interdisciplinarity but also broader diversity. It particularly welcomes submissions on how to foster gender and cultural balance in privacy and identity research and policy, and notably, tutorials and workshops about how to raise awareness in these matters.

Paper Submission and Review Process
The research paper presentations focus on involving students and on encouraging the publication of high-quality, thorough research papers by students and young researchers. To this end, the school will have the following process for submissions:

- Submit an extended abstract of at least 2 and at most 4 pages in Springer LNCS style. From these submissions, the PC chairs select papers within the scope of this call for presentation at the Summer School.
- Applicants whose abstract has been accepted must submit a full-length paper (up to 16 pages), also in Springer LNCS format, before the Summer School. The full-length paper will be published in the pre-proceedings.
- At the Summer School, each author will give a presentation followed by a discussion.
- After the Summer School, authors are invited to submit a revised version (again up to 16 pages in LNCS format) for the proceedings. They are expected to take into account the comments and discussions from the Summer School.
- These submissions will be reviewed by the Programme Committee. Based on these reviews, papers may be accepted, conditionally accepted, or rejected.
- Accepted and (after satisfactory revision) conditionally accepted papers will be included in the Summer School’s proceedings, which will be published by Springer.
Proposals for Workshops and Tutorials
A workshop is an interactive session of one or two hours that focuses on involving students in discussion. In it, participants jointly work on a topic or project related to the Summer School theme. Workshop activities are summarized in short papers that recapitulate the outcomes and the discussions raised at the Summer School, for inclusion in the proceedings. Proposals for workshops should contain a 2-page statement presenting the topic and summarising the planned activity and the expected contributions from the audience members, e.g., responding to a questionnaire or conducting a small experiment. Proposers should indicate whether any special equipment is needed for the workshop, such as audio-visual systems or computational equipment and support.

Tutorials are one- or two-hour presentations. They should deal with topics of interest to the interdisciplinary audience of the Summer School and provide knowledge on theoretical, empirical, methodological, practical, or other aspects relevant to it. Tutorial proposals should contain a 2-page summary and state the level and background required for audience members to follow the tutorial.

Possible Topics
(can include, but are not limited to)

Technical and Organisational Measures, Methods, and Tools for Privacy and Data Protection that address:
- Transparency and information provision to data subjects: Addressing the entire lifecycle of generative AI systems, from data collection (including unstructured data scraped from the Web) to secondary uses such as improving models from the contents of user prompts. This includes developing tools to detect unlawful use of personal data by generative AI systems.
- Intervenability and control over personal data: Focusing on mechanisms within generative AI systems, such as machine unlearning, to support deleting personal data from existing models.
- Unlinkability and anonymization: Ensuring the anonymization of personal data during the training and fine-tuning of models to prevent identification.
- Confidentiality in generative AI tools: Exploring local training, federated learning, and encrypted processing (e.g., homomorphic encryption) in generative AI systems; a minimal illustrative sketch follows this list.
- Accuracy in AI-generated personal data: Investigating approaches like humility rules, external verification for triangulation, human-on-the-loop (HOTL) feedback mechanisms, and self-improvement protocols that include learning from mistakes.
- Integrity of generative models: Focusing on preventing, detecting, and correcting model poisoning and data poisoning.
- Explainability of AI models: Including both external analyses (e.g., probing, counterfactuals) and internal model improvements (e.g., backtracking to identify influential model features, and source provenance in AI responses). This also encompasses detecting and avoiding bias in generated content and decisions.
- Purpose limitation in data processing: Implementing binding policies to ensure data is used solely for its intended purpose in the context of generative AI.
- Evaluation and performance assessment: Assessing generative AI solutions’ adherence to privacy goals.
- Trustworthiness and Privacy by Design: Incorporating trustworthiness and privacy considerations into the development process of generative AI solutions, focusing on system assurance and compliance traceability.
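
As a concrete illustration of the confidentiality topic above, the following minimal sketch (in Python, using purely hypothetical names and toy data) shows the basic structure of federated averaging: each client trains a small model locally on its private records, and only the model parameters, never the raw data, are shared with and averaged by a coordinating server. It is an illustrative sketch under these simplifying assumptions, not a method endorsed or prescribed by the Summer School.

    # Toy federated averaging (FedAvg) sketch: raw data stays on each client;
    # only model weights are exchanged. All names and data are hypothetical.
    import random

    def local_sgd(weights, data, lr=0.01, epochs=5):
        # One client's local training on its private (x, y) pairs.
        w, b = weights
        for _ in range(epochs):
            for x, y in data:
                err = (w * x + b) - y
                w -= lr * err * x   # gradient of the squared error w.r.t. w
                b -= lr * err       # gradient of the squared error w.r.t. b
        return (w, b)

    def fed_avg(client_datasets, rounds=20):
        # Server loop: broadcast global weights, collect local updates, average them.
        global_w = (0.0, 0.0)
        for _ in range(rounds):
            updates = [local_sgd(global_w, data) for data in client_datasets]
            global_w = (sum(u[0] for u in updates) / len(updates),
                        sum(u[1] for u in updates) / len(updates))
        return global_w

    if __name__ == "__main__":
        # Three hypothetical clients, each holding private samples of y = 2x + 1.
        clients = [[(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in range(10)]
                   for _ in range(3)]
        print(fed_avg(clients))  # approaches (2.0, 1.0) without pooling raw data

In a real generative-AI setting the local models would be far larger, and the averaging step would typically be combined with secure aggregation or differential privacy; the sketch only shows the basic property that training data never leaves the client.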

Law, Regulation and Governance:
- Data protection and/or privacy implications of generative AI (transparency, lawfulness, special categories of data, (joint) controllers, data subjects’ rights, DPIAs, publicly available personal data, explainability, etc.).
- European and other legislation on data and data governance (Data Act, Data Governance Act, Digital Services Act, Digital Markets Act, Artificial Intelligence Act, etc.).
- The interaction of data protection with liability regulation (product liability reform, AI liability proposal) and/or AI (AI Act).
- Socio-legal implications of generative AI for children, workers, disadvantaged groups, companies, public bodies, governments, society, etc.
- Governance institutions, policy processes, and regulatory bodies at different levels (e.g., national, regional, global).
- The role of experts and the general public in shaping regulation and governance.
- Data justice, data fairness and equality.
- Fundamental rights and accountability in technology and data practices.
- Certification and standardisation.
- Landscape of Generative AI compliance with various regulations.
- AI solutions for regulatory compliance, which aid organizations in adhering to privacy regulations.

Effects and Impacts (negative or positive):
- Discriminatory effects of technology.
- Technology-enabled social profiling and social exclusion.
- Digital divides, digital dividends, data sovereignty.
- Communities, societies, cultures, and technological mediation.
- Data practices, AI, and the Global South.

Socio-Technical Perspectives:
- Awareness, attitudes, skills, and behavior of citizens and public and private organizations.
- Approaches for diversity, non-discrimination and democratic enhancement.
- Surveillance, surveillance pressures, chilling effects.
- Critical perspectives on data practices.
- Welfare, solidarity, and care.
- Data economy and ecosystems, new business models.
- Trade-offs, tensions and conflicts between participation in digital cultures and privacy aspects.
- Historical development of data practices.
- Training, awareness, and empowerment of end-users, focusing on educating and equipping them to recognize and address privacy issues in AI effectively.

Why should I submit?
Accepted papers will be discussed thoroughly during the School, giving students an opportunity to be published in the IFIP AICT series by Springer.

All individual members of IFIP member societies (over 40 national IT societies plus ACM and others) plus all members of IFIP Technical Committees and Working Groups are entitled to a discount of at least 10% on all registration fees.

We encourage submissions from students from emerging economies: support to ease student travel can be requested from the IFIP Digital Equity Fund. For details, contact ifipsc2024 (at) easychair.org.
