posted by organizer: EGPAIConf

AEGAP 2018 : ARCHITECTURES AND EVALUATION FOR GENERALITY, AUTONOMY & PROGRESS IN AI


Link: http://cadia.ru.is/workshops/aegap2018/
 
When Jul 13, 2018 - Jul 15, 2018
Where Stockholm
Submission Deadline TBD
Categories    architectures for ai   ai evaluation   generality in ai   autonomy in ai
 

Call For Papers

The Joint Workshop on Architectures and Evaluation for Generality, Autonomy and Progress in AI (AEGAP) focuses on our field's original grand dream: the creation of cognitive autonomous agents with general intelligence that matches (or exceeds) that of humans. We want AI that understands its users and their values, so it can form beneficial and satisfying relationships with them.

In 2018, it is about three decades since John McCarthy published a new version of his 1971 Turing Award Lecture as “Generality in Artificial Intelligence”. Since he coined the term "Artificial Intelligence", the field has come a long way. Progress has certainly been made as AI grew from a niche science into a multi-billion dollar endeavor that solves many tasks, and into a household term often viewed as the future of everything. However, it is not clear exactly how much progress has been made, especially with respect to AI's grand dream.

As the task turned out to be more difficult than anticipated in the 1950s, a divide-and-conquer approach was adopted, resulting in a very successful but fractured field. AEGAP aims to bring together researchers from different sub-disciplines to discuss how their different approaches and techniques can contribute to the goal of building beneficial AI with high levels of generality and autonomy. To achieve this goal, we will likely need to build large-scale, complex and dynamic architectures that can integrate bottom-up and top-down approaches. One hopeful avenue may be to combine logic- or rule-based top-down approaches with neuroscience-inspired bottom-up approaches, so that intelligence might emerge from their interplay.

This cannot be done without methods for evaluating the different approaches to AI, both as they exist now and as they are developed in the future. While we can readily see the performance of AI systems in specific domains, it is more difficult to assess progress in AI, ML and autonomous agents when we put the focus on generality and autonomy. Real progress in this direction only takes place when a system exhibits enough autonomous flexibility to find a diversity of solutions for a range of tasks, some of which may not be known until after the system is deployed. Many evaluation platforms exist, but open research questions remain about how to define batteries or curricula of tasks that capture notions such as generality, transfer or learning to learn, with gradients of difficulty that actually represent the progress we want to make in several directions. The question of fully autonomous reproducibility must also be understood as the goals become more open and general.

We welcome regular papers, short papers, demo papers about benchmarks or tools, and position papers, and encourage discussions over a broad list of topics. As AEGAP is the result of a merger between the Third Workshop on Evaluating Generality and Progress in Artificial Intelligence (EGPAI), the Second Workshop on Architectures for Generality & Autonomy (AGA) and the First Workshop on General AI Architecture of Emergence and Autonomy (AAEA), we are interested in submissions on both evaluation and architectures.
