posted by organizer: EGPAIConf

AEGAP 2018 : ARCHITECTURES AND EVALUATION FOR GENERALITY, AUTONOMY & PROGRESS IN AI


Link: http://cadia.ru.is/workshops/aegap2018/
 
When Jul 13, 2018 - Jul 15, 2018
Where Stockholm
Submission Deadline TBD
Categories: architectures for ai, ai evaluation, generality in ai, autonomy in ai
 

Call For Papers

The Joint Workshop on Architectures and Evaluation for Generality, Autonomy and Progress in AI (AEGAP) focuses on our field's original grand dream: the creation of cognitive autonomous agents with general intelligence that matches (or exceeds) that of humans. We want AI that understands its users and their values, so that we can form beneficial and satisfying relationships with it.

In 2018, it is about three decades since John McCarthy published a new version of his 1971 Turing Award Lecture, "Generality in Artificial Intelligence". Since he coined the term "Artificial Intelligence", the field has come a long way. Progress has certainly been made: AI has grown from a niche science into a multi-billion dollar endeavor that solves many tasks, and into a household term often viewed as the future of everything. However, it is not clear exactly how much progress has been made, especially with respect to AI's grand dream.

As the task turned out to be more difficult than anticipated in the 1950s, a divide-and-conquer approach was adopted that has resulted in a very successful but fractured field. AEGAP aims to bring together researchers from different sub-disciplines to discuss how the different approaches and techniques can contribute to the goal of building beneficial AI with high levels of generality and autonomy. To achieve this goal we will likely need to build large-scale, complex and dynamic architectures that can integrate bottom-up and top-down approaches. One hopeful avenue may be to combine logic- or rule-based top-down approaches with neuroscience-inspired bottom-up approaches, so that intelligence might emerge from their interplay.

This cannot be done without methods for evaluating the different approaches to AI, both as they exist now and as they are developed in the future. While we can readily see the performance of AI systems in specific domains, it is more difficult to assess progress in AI, ML and autonomous agents when the focus is on generality and autonomy. Real progress in this direction only takes place when a system exhibits enough autonomous flexibility to find a diversity of solutions for a range of tasks, some of which may not be known until after the system is deployed. Many evaluation platforms exist, but open research questions remain about how to define batteries or curricula of tasks that capture notions such as generality, transfer or learning to learn, with gradients of difficulty that actually represent the progress we want to make in several directions. The question of fully autonomous reproducibility must also be understood as the goals become more open and general.

We welcome regular papers, short papers, demo papers about benchmarks or tools, and position papers, and encourage discussion over a broad list of topics. As AEGAP is the result of a merger between the Third Workshop on Evaluating Generality and Progress in Artificial Intelligence (EGPAI), the Second Workshop on Architectures for Generality & Autonomy (AGA) and the First Workshop on General AI Architecture of Emergence and Autonomy (AAEA), we are interested in submissions on both evaluation and architectures.

Related Resources

EDML 2019   1st Workshop on Evaluation and Experimental Design in Data Mining and Machine Learning @ SDM 2019
COINS-DOCTORAL 2019   DOCTORAL SYMPOSIUM & Work in Progress (WiP) | Internet of Things | IoT | Big Data | Artificial Intelligence | Machine Learning | Cloud Computing | Blockchain | EDA
IoT-ASAP 2019   3rd International Workshop on Engineering IoT Systems: Architectures, Services, Applications, and Platforms
ENASE 2019   14th International Conference on Evaluation of Novel Approaches to Software Engineering
SIGCOMM 2019   Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication
QEST 2019   16th International Conference on Quantitative Evaluation of SysTems
VALUETOOLS 2019   12th EAI International Conference on Performance Evaluation Methodologies and Tools
CLEF 2019   CLEF 2019 | Conference and Labs of the Evaluation Forum
CASES 2019   International Conference on Compilers, Architectures, and Synthesis for Embedded Systems
DSD 2019   22nd Euromicro Conference on Digital System Design