posted by user: LESHEM

BabyLM Challenge 2023: CfP for the BabyLM shared task, hosted at CoNLL/CMCL 2023


Link: https://babylm.github.io/
 
When Jan 1, 2023 - Sep 1, 2023
Where CoNLL/CMCL 2023
Submission Deadline TBD
Categories    machine learning   natural language processing   computational linguistics   pretraining
 

Call For Papers

Announcing the BabyLM Challenge, the shared task at CoNLL/CMCL 2023!


The goal of this shared task is to encourage researchers with an interest in pretraining and/or cognitive modeling to focus their efforts on optimizing pretraining given data limitations inspired by human development. Additionally, we hope to democratize research on pretraining—which is typically thought to be practical only for large industry groups—by formulating an exciting open problem and establishing a community around it.


A huge effort has been put towards optimizing LM pretraining at massive scales in the last several years. While increasingly larger models often get the most attention, datasets have also grown by orders of magnitude. For example, Chinchilla is exposed to 1.4 trillion words during training—well over 10000 words for every one word a 13-year-old human has encountered in their entire life.
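To make the scale gap concrete, the ratio can be checked with a quick back-of-the-envelope calculation. Note that the ~100M-word figure for a 13-year-old is a rough developmental estimate assumed here for illustration, not a number stated in this call:

```python
# Back-of-the-envelope check of the training-data scale gap described above.
chinchilla_words = 1.4e12     # words Chinchilla is exposed to during training
human_words_by_13 = 100e6     # rough estimate of words heard by age 13 (assumption)

ratio = chinchilla_words / human_words_by_13
print(f"Chinchilla sees roughly {ratio:,.0f}x more words than a 13-year-old")
```

At these assumed figures the ratio comes out around 14,000x, consistent with the "well over 10000" framing above.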


Focusing on scaled-down pretraining has several potential benefits. First, small-scale pretraining can be a sandbox for developing novel techniques that improve data efficiency. These techniques could then be scaled up to the larger datasets commonly seen in applied NLP, or used to enhance current approaches to modeling low-resource languages. Second, improving our ability to train LMs on the same kinds and quantities of data that humans learn from will hopefully give us greater access to plausible cognitive models of humans and help us understand what allows humans to acquire language so efficiently.


The task has three tracks. Two of them restrict the training data to pre-released datasets of 10M and 100M words, respectively, and are dedicated to exploring approaches such as architectural variations, self-supervised objectives, and/or curriculum learning. The final track restricts only the amount of text used, allowing innovation in the choice of the data, its domain, and even its modality (e.g., data from sources other than text is welcome). We will release a shared evaluation pipeline that covers a variety of benchmarks and tasks, including targeted syntactic evaluations and natural language understanding.


Important dates:

January 2023: Training data released (see website for download)

March 2023: Evaluation pipeline released

July 15, 2023: Results due

August 1, 2023: Paper submissions due

Date TBA: Presentation at CoNLL


For more information, visit the BabyLM website https://babylm.github.io/ or consult our extended call for papers.
