Deep Learning is currently the dominant topic in Machine Learning and Artificial Intelligence, generating newsworthy headlines on a weekly basis and keeping academia, industry, and the general public on their toes. Still, a growing number of voices (including one of the "fathers of Deep Learning") now question how long this level of progress can be sustained, and how far Deep Learning as a paradigm can actually go.
To understand many of these worries, it is worth looking back a few decades into the history of AI and ML. While early work on knowledge representation and inference was primarily symbolic, these approaches subsequently fell out of favor and were largely supplanted by connectionist methods (eventually leading to the present dominance of Deep Learning). Symbolic methods excelled at tasks requiring reasoning and related forms of structured knowledge processing, but proved far too inflexible and brittle for tasks requiring adaptive behaviour learned from (possibly noisy) data. The situation today is a mirror image thereof: while Deep Learning approaches excel at processing noisy data in numerous application domains, the corresponding systems lack top-down control and the ability to reason over learned information.
Using "Cognitive Computation" as a shared umbrella term and overall aim, this symposium brings together established leaders in the fields of neural computation, logic and artificial intelligence, knowledge representation, natural language understanding, machine learning, cognitive science, and computational neuroscience. They are invited to share their views on the "Big Questions" stated above, to outline relevant parts of their recent work, and to engage in discussion with each other and the audience on the future of AI and ML.