Open-Endedness in Simulation: a definition and its consequences

Susan Stepney,  University of York, UK

Abstract
Open-ended behaviour in simulated systems is one goal of artificial life, yet the term “open-ended” is rarely defined. Here I discuss a recent definition in terms of models and meta-models, and its consequences for discovering multi-scale open-endedness in computer simulations.
Susan Stepney was originally a theoretical astrophysicist. She moved to industry, where she used formal methods to prove the security of smart card applications. In 2002 she returned to academia as Professor of Computer Science at the University of York, UK, where she leads the Non-Standard Computation research group. Since 2012 she has been Director of the York Centre for Complex Systems Analysis. She is on the board of directors of the International Society for Artificial Life, and is a member of EPSRC’s ICT Strategic Advisory Team.
Her current research interests include unconventional models of computation, complex systems, artificial chemistries, emergence, open-ended evolution, and bio-inspired computing.

Parallel problem solving through crowds and machines

Josh Bongard, University of Vermont, USA

Abstract
 
Cloud robotics and the Internet of Things are enabling ever-larger combinations of people and machines to solve increasingly challenging problems. In this talk I will outline some of our work to recruit large numbers of non-experts to solve various problems in robotics by drawing on the twin human instincts to build and to teach. I will first describe the DotBot project, in which participants build the body plans of simulated robots while search methods improve the controllers for them. I will then explain how features were learned from the human-designed robots to empower a subsequent, fully automated system in which computers optimized robot bodies and brains. I will conclude by introducing the Twitch Plays Robotics project, in which participants teach robots how to ground the symbols of human languages in action and social prediction.
Josh Bongard obtained his Bachelor’s degree in Computer Science from McMaster University, Canada; his Master’s degree from the University of Sussex, United Kingdom; and his PhD from the University of Zurich, Switzerland; and served as a postdoctoral associate at Cornell University. In 2006 he was named a Microsoft New Faculty Fellow, as well as one of the top 35 innovators under the age of 35 by MIT’s Technology Review Magazine. In 2011 he received a Presidential Early Career Award for Scientists and Engineers (PECASE) from Barack Obama at the White House. Josh is currently the Veinott Professor of Computer Science at the University of Vermont. His research foci include evolutionary robotics, crowdsourcing, and machine science.

Navigation with a tiny brain: getting home without knowing where you are

Andrew Philippides, University of Sussex, Brighton, UK


Abstract

The use of visual information for navigation is a universal strategy for sighted animals, amongst whom social insects are particular experts. The general interest in studies of insect navigation is in part due to their small brains; biomimetic engineers can take inspiration from elegant and parsimonious control solutions, while biologists look for a description of the minimal cognitive requirements for complex spatial behaviours. We take an interdisciplinary approach to studying visually guided navigation by combining behavioural experiments with modelling and robotics to understand how complex behaviour can emerge from the combination of a simple sensory system and brain, interacting with innate behaviours, all tuned to the natural habitat. In so doing, we show that an agent can robustly navigate without ever knowing where it is, without specifying when or what it should learn, and without requiring it to recognise specific objects, places, routes or maps. This leads to an algorithm in which navigation is driven by familiarity detection rather than explicit recall, with sensory data specifying actions, not locations. Route navigation is thus recast as a search for familiar views, allowing an agent to encode routes through visually complex worlds in a single-layer neural network after a single training run. We suggest that this work is a specific example of a more general idea which has implications for engineers seeking nature-inspired solutions: by considering how animals directly acquire and use task-specific information through specialised sensors, brains and behaviours, we can solve complex problems without complex processing.
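
The familiarity-driven scheme sketched in the abstract can be illustrated in a few lines of code. The sketch below is only a minimal illustration under stated assumptions: the class and function names are hypothetical, and a simple pixel-difference score stands in for the single-layer neural network mentioned in the abstract; it is not the authors' implementation.

    import numpy as np

    # Minimal sketch of familiarity-based route navigation.
    # Assumption: views are 1-D panoramic images, and "familiarity" is
    # approximated by distance to the closest view stored during one
    # training traversal of the route (a stand-in for the learned network).

    def rotate(panoramic_view, heading):
        # Shift the panorama by the pixel offset corresponding to `heading`.
        return np.roll(panoramic_view, heading)

    class FamiliarityNavigator:
        def __init__(self):
            self.memory = []          # views stored during a single training run

        def train(self, route_views):
            # Store the views experienced along the training route.
            self.memory = [v.astype(float) for v in route_views]

        def familiarity(self, view):
            # Higher is more familiar: negative distance to the closest stored view.
            v = view.astype(float)
            return -min(np.sum((m - v) ** 2) for m in self.memory)

        def choose_heading(self, current_view, headings):
            # Scan candidate headings and pick the one whose rotated view looks
            # most familiar; no location is ever estimated or recalled.
            return max(headings, key=lambda h: self.familiarity(rotate(current_view, h)))

    # Example use: train on views from one route traversal, then pick a heading.
    nav = FamiliarityNavigator()
    nav.train([np.random.rand(360) for _ in range(50)])   # placeholder training views
    heading = nav.choose_heading(np.random.rand(360), headings=range(-90, 91, 5))

The point of the sketch is the design choice highlighted in the abstract: the sensory data drive the choice of action directly, so the agent never needs to know where it is.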

Short bio
I am a Reader in the Department of Informatics at the University of Sussex and co-director of the Centre for Computational Neuroscience and Robotics (http://www.sussex.ac.uk/ccnr/). Having read Mathematics at King’s College, Cambridge, I moved to Sussex to do an MSc in Artificial Intelligence and Adaptive Systems followed by a doctorate in Computational Neuroscience and Robotics, and have been at Sussex ever since. My research is interdisciplinary and is best described as computational neuroethology; that is, I combine behavioural experiments with computational and robotic models to understand, and take inspiration from, biological systems. Current research topics include: visual navigation in insects and robots, neuromodulation in (real and artificial) neural networks, analysis of biological imaging data, and agent-based modelling applied to crowd movement and human migration.