
GPTMB 2026 : The Third International Conference on Generative Pre-trained Transformer Models and Beyond


Link: https://www.iaria.org/conferences2026/GPTMB26.html
 
When Jul 5, 2026 - Jul 9, 2026
Where Nice, France
Submission Deadline Mar 14, 2026
Notification Due May 4, 2026
Final Version Due May 31, 2026
Categories    AI   LLM   tools   data
 

Call For Papers

CfP: GPTMB 2026 || July 5 - 9, 2026 - Nice, France

INVITATION:

=================

Please consider contributing to, and/or forwarding to the appropriate groups, the following opportunity to submit and publish original scientific results:

- GPTMB 2026, The Third International Conference on Generative Pre-trained Transformer Models and Beyond

GPTMB 2026 is scheduled for July 5 - 9, 2026 in Nice, France, under the DigiTech 2026 umbrella.

The submission deadline is March 14, 2026.

Authors of selected papers will be invited to submit extended article versions to one of the IARIA Journals: https://www.iariajournals.org

All events will be held in hybrid mode: on site, online, or via prerecorded videos, voiced presentation slides, or PDF slides.

=================


============== GPTMB 2026 | Call for Papers ===============

CALL FOR PAPERS, TUTORIALS, PANELS


GPTMB 2026, The Third International Conference on Generative Pre-trained Transformer Models and Beyond

General page: https://www.iaria.org/conferences2026/GPTMB26.html

Submission page: https://www.iaria.org/conferences2026/SubmitGPTMB26.html


Event schedule: July 5 - 9, 2026


Contributions:

- regular papers [in the proceedings, digital library]

- short papers (work in progress) [in the proceedings, digital library]

- ideas: two pages [in the proceedings, digital library]

- extended abstracts: two pages [in the proceedings, digital library]

- posters: two pages [in the proceedings, digital library]

- posters: slide only [slide-deck posted at www.iaria.org]

- presentations: slide only [slide-deck posted at www.iaria.org]

- demos: two pages [posted at www.iaria.org]


Submission deadline: March 14, 2026


Extended versions of selected papers will be published in IARIA Journals: https://www.iariajournals.org

Print proceedings will be available via Curran Associates, Inc.: https://www.proceedings.com/9769.html

Articles will be archived in the free access ThinkMind Digital Library: https://www.thinkmind.org


The topics suggested by the conference can be discussed in terms of concepts, state of the art, research, standards, implementations, running experiments, applications, and industrial case studies. Authors are invited to submit complete unpublished papers that are not under review in any other conference or journal, in the following (but not limited to) topic areas.

All tracks are open to both research and industry contributions.
Before submission, please check and comply with the editorial rules: https://www.iaria.org/editorialrules.html


GPTMB 2026 Topics (for topics and submission details: see CfP on the site)

Call for Papers: https://www.iaria.org/conferences2026/CfPGPTMB26.html

============================================================

GPTMB 2026 Tracks (topics and submission details: see CfP on the site)

Generative-AI basics

- Generative pre-trained transformer (GPT) models

- Transformer-based models and LLMs (Large Language Models)

- Combination of GPT models and Reinforcement learning models

- Creativity and originality in GPT-based tools

- Taxonomy of context-based LLM training

- Deep learning and LLMs

- Retrieval augmented generation (RAG) and fine-tuning LLMs

- LLM and Reinforcement Learning from Human Feedback (RLHF)

- LLMs (autoregressive, retrieval-augmented, autoencoding, reinforcement learning, etc.)

- Computational resources for LLM training and for LLM-based applications

LLMs

- Large Language Models (LLM) taxonomy

- Model characteristics (architecture, size, training data and duration)

- Building, training, and fine-tuning LLMs

- Performance (accuracy, latency, scalability)

- Capabilities (content generation, translation, interactive)

- Domain (medical, legal, financial, education, etc.)

- Ethics and safeness (bias, fairness, filter, explainability)

- Legal (data privacy, data exfiltration, copyright, licensing)

- Challenges (integrations, mismatching, overfitting, underfitting, hallucinations, interpretability, bias mitigation, ethics)

LLM-based tools

- Challenging requirements on basic actions and core principles

- Methods for optimized selection of model size and complexity

- Fine-tuning and personalization mechanisms

- Multimodal input/output capabilities (text with visual, audio, and other data types)

- Adaptive learning or continuous learning (training optimization, context-awareness)

- Range of languages and dialects, including regional expansion

- Scalability, Understandability, and Explainability

- Tools for Software development, planning, workflows, coding, etc.

- Cross-interdisciplinary applications (finance, healthcare, technology, etc.)

- Computational requirements and energy consumption

- Efficient techniques (quantization, pruning, etc.)

- Reliability and security of LLM-based applications

- Co-creation, open source, and global accessibility

- Ethical considerations (bias mitigation, fairness, responsibility)

Small-language models and tiny-language models

- Architecture and design principles specific to small language models

- Tiny language models for smartphones, IoT devices, edge devices, and embedded systems

- Tools for small language models (DistilBERT, TinyBERT, MiniLM, etc.)

- Knowledge distillation, quantization, low latency, resource optimization

- Energy efficiency for FPGAs and specialized ASICs for model deployment

- Tiny language models for real-time translation apps and mobile-based chatbots

- Tiny language models and federated learning for privacy

- Small language models with vision capabilities for multimodal applications

- Hardware considerations (energy, quantization, pruning, etc.)

- Tiny language models and hardware accelerators (GPUs, TPUs, and ML-custom ASICs)

Critical Issues on Input Data

- Datasets: accuracy, granularity, precision, false/true negative/positive

- Visible vs invisible (private, personalized) data

- Data extrapolation

- Output biases and biased datasets

- Sensitivity and specificity of datasets

- Fake and incorrect information

- Volatile data

- Time sensitive data

Critical Issues on Processing

- Process truthfulness

- Understandability, Interpretability, and Explainability

- Detect biases and incorrectness

- Incorporate the interactive feedback

- Incorporate corrections

- Retrieval augmented generation (RAG) for LLM input

- RLHF for fine-tuning LLM output

Output quality

- Output biases and biased datasets

- Sensitivity and specificity of datasets

- Context-aware output

- Fine/Coarse text summarization

- Quality of Data pre-evaluation (obsolete, incomplete, fake, noisy, etc.)

- Validation of output

- Detect and explain hallucinations

- Detect biased and incorrect summarization before spreading it

Education and academic liability issues

- Curricula revision for embedding AI-based tools and methodologies

- User awareness of output trustworthiness

- Copyright infringement rules

- Plagiarism and self-plagiarism tools

- Ownership infringement

- Mechanisms for reference verification

- Dealing with hidden self-references

Regulations and limitations

- Regulations (licensing, testing, compliance thresholds, decentralized/centralized innovations)

- Mitigate societal risks of GPT models

- Capturing emotion and sentience

- Lack of personalized (individual) memory and memories (past facts)

- Lack of instant personalized thinking (personalized summarization)

- Risk of GPTM-based decisions

- AI awareness

- AI-induced deskilling

Case studies with analysis and testing AI applications

- Lessons learned with existing tools (ChatGPT, Bard AI, ChatSonic, etc.)

- Predictive analytics in healthcare

- Medical Diagnostics

- Medical Imaging

- Pharmacology

- AI-based therapy

- AI-based finance

- AI-based planning

- AI-based decision

- AI-based systems control

- AI-based education

- AI-based cyber security


------------------------

GPTMB 2026 Committee: https://www.iaria.org/conferences2026/ComGPTMB26.html

IARIA Ambassadors

Steve Chan, Decision Engineering Analysis Laboratory, USA
Dirceu Cavendish, Kyushu Institute of Technology, Japan
Monika Maria Moehring, Study Centre for Blind and Disabled Students, Technische Hochschule Mittelhessen, Gießen, Germany
Carlos Becker Westphall, Federal University of Santa Catarina, Brazil
Lasse Berntzen, University of South-Eastern Norway, Norway
Les Sztandera, Thomas Jefferson University, Philadelphia, USA
Andreas Rausch, TU Clausthal, Clausthal-Zellerfeld, Germany
Timothy Phan, NASA, Jet Propulsion Laboratory, USA
Manuela Vieira, CTS/ISEL/IPL, Portugal
Luigi Lavazza, Università dell'Insubria - Varese, Italy
