The reinforcement learning team works on both fundamental and applied AI research. Reinforcement learning is the study of sequential decision making under uncertainty, and hence encompasses many problems, from robotic manipulation to coordinating multiple goal-based agents. Our aim is to improve our understanding of biological and artificial intelligence, and to use these insights to develop AI systems that can be applied to complex tasks in the real world.

KEY WORDS
#Deep Learning
#Reinforcement Learning
#Imitation Learning
#Evolutionary Computation
#Artificial Life

Highlights

Brain-Computer Interface Robot Control

Brain-computer interfaces (BCIs) provide a direct connection between our neural signals and computers, enabling humans with neurological or physiological disorders to interact with the world in ways that would otherwise be impossible. However, BCI inputs are challenging to work with, requiring sophisticated signal processing and machine learning techniques. Coupling BCIs with robots through the paradigm of shared autonomy would enable users to physically interact with the world. Unfortunately, operating robots in the real world (outside of highly constrained factory settings) is also challenging, requiring state-of-the-art methods in artificial intelligence and control theory. We are currently integrating and developing research in all of these areas in order to achieve more generalisable BCI robot control.
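To give a rough flavour of the shared-autonomy paradigm, the toy sketch below linearly blends a (noisy) command decoded from the user's neural signals with a command from an assistive policy. The function `blend_control` and the blending coefficient `alpha` are illustrative assumptions for this sketch, not part of our actual system.

```python
import numpy as np

def blend_control(u_user: np.ndarray, u_assist: np.ndarray, alpha: float) -> np.ndarray:
    """Linearly blend a decoded user command with an assistive policy's command.

    alpha = 1.0 gives the user full control; alpha = 0.0 gives full autonomy.
    """
    assert 0.0 <= alpha <= 1.0
    return alpha * u_user + (1.0 - alpha) * u_assist

# Toy 2D end-effector velocity commands.
u_user = np.array([0.5, 0.1])    # noisy command decoded from BCI signals
u_assist = np.array([0.3, 0.4])  # assistive command towards an inferred goal
print(blend_control(u_user, u_assist, alpha=0.7))  # [0.44 0.19]
```

In practice, the blending coefficient is often set dynamically, for example based on how confident the system is about the user's inferred goal.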

Benchmarking Imitation Learning

Researchers have developed many imitation learning methods over the last few years, each claiming state-of-the-art performance on various tasks. However, results can easily be influenced by factors outside of a method's claimed contributions, such as data preprocessing, extensive hyperparameter tuning, or simply more computation. In A Pragmatic Look at Deep Imitation Learning, we broke down the contributions of many methods and developed a unified framework for imitation learning, allowing us to perform a fair comparison between algorithms. We have also released an open source library for other researchers to study and apply imitation learning methods.
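For readers unfamiliar with the area, the sketch below shows behavioural cloning, the simplest imitation learning baseline: the policy is trained by supervised regression onto an expert's actions. This is a generic, minimal example (with random data standing in for expert demonstrations), not the unified framework from the paper or the released library.

```python
import torch
import torch.nn as nn

# Toy stand-in for expert demonstrations: 1024 (state, action) pairs.
states = torch.randn(1024, 8)   # 8-dimensional states
actions = torch.randn(1024, 2)  # 2-dimensional continuous actions

policy = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 2))
optimiser = torch.optim.Adam(policy.parameters(), lr=3e-4)

# Behavioural cloning: supervised regression onto the expert's actions.
for epoch in range(100):
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(policy(states), actions)
    loss.backward()
    optimiser.step()
```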

Procedural Content Generation for Space Engineers

Many video games rely on procedural content generation (PCG) in order to (semi-)autonomously create a large amount of game content. For some assets, such as vehicles, the generated content must be both functional and aesthetically pleasing. In addition, the designer, or even the player, typically wants a variety of options to choose from. To achieve this for the Space Engineers 3D sandbox game, we have been using a combination of techniques from evolutionary computation. In Evolving Spaceships with a Hybrid L-system Constrained Optimisation Evolutionary Algorithm, we developed a novel hybrid evolutionary algorithm to generate interesting and functional spaceships. We later improved upon this in Surrogate Infeasible Fitness Acquirement FI-2Pop for Procedural Content Generation by developing a novel fitness function and further applying our approach within the well-known MAP-Elites "quality-diversity" algorithm.
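To give a flavour of the quality-diversity approach, here is a minimal sketch of the core MAP-Elites loop: an archive keeps the fittest individual found in each cell of a discretised behaviour space, so the search yields a diverse set of high-quality solutions rather than a single optimum. The fitness and descriptor functions below are toy placeholders, not our spaceship evaluation, and the sketch omits the surrogate model and the feasible-infeasible populations from our papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):     # toy placeholder: prefer genomes near the origin
    return -float(np.sum(x ** 2))

def descriptor(x):  # toy 2D behaviour descriptor, mapped into [0, 1)
    return np.clip((x[:2] + 1.0) / 2.0, 0.0, 0.999)

BINS = 10
archive = {}  # cell index (tuple) -> (fitness, genome)

for _ in range(10_000):
    if archive:  # mutate a uniformly sampled elite
        keys = list(archive)
        parent = archive[keys[rng.integers(len(keys))]][1]
        x = parent + 0.1 * rng.standard_normal(4)
    else:        # bootstrap with a random genome
        x = rng.uniform(-1.0, 1.0, 4)
    cell = tuple((descriptor(x) * BINS).astype(int))
    f = fitness(x)
    if cell not in archive or f > archive[cell][0]:  # keep the best per cell
        archive[cell] = (f, x)

print(f"{len(archive)} cells filled; best fitness {max(f for f, _ in archive.values()):.3f}")
```

In FI-2Pop-style variants, infeasible individuals (e.g. spaceships that violate functional constraints) are kept in a second population and evolved towards feasibility rather than being discarded outright.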

Members

Kai Arulkumaran, Ph.D.
Research Team Lead
Kai is a Research Team Lead at Araya. He received his B.A. in Computer Science at the University of Cambridge in 2012 and his Ph.D. in Bioengineering at Imperial College London in 2020. He has previously worked at DeepMind, Microsoft Research, Facebook AI Research, Twitter Cortex and NNAISENSE. His research interests are deep learning, reinforcement learning, evolutionary computation and theoretical neuroscience.
Manuel Baltieri, Ph.D.
Chief Researcher
Manuel is a Chief Researcher at Araya and a Visiting Researcher at the University of Sussex. After graduating with a B.Eng. in Computer Engineering and Business Administration at the University of Trento, he received an M.Sc. in Evolutionary and Adaptive Systems and a Ph.D. in Computer Science and AI, both from the University of Sussex. Following that, he was awarded a JSPS/Royal Society postdoctoral fellowship, and worked in the Lab for Neural Computation and Adaptation at RIKEN CBS with Taro Toyoizumi, until he joined Araya at the end of 2021. His research interests include artificial intelligence and artificial life, theories of agency and individuality, origins of life, embodied cognition and decision making.
Rousslan Dossa, Ph.D.
Chief Researcher
Rousslan is a Chief Researcher at Araya. He received his Ph.D. from Kobe University in 2023. His research interests span deep reinforcement learning, with an emphasis on self-supervised learning, human cognition-inspired decision-making, neuroscience, and evolutionary computing.
Shivakanth Sujit
Senior Researcher
Shivakanth is a Senior Researcher at Araya. He received his M.Sc. in 2023 from Mila, Quebec, working with Prof. Samira Ebrahimi Kahou. He is interested in deep reinforcement learning for robotics and LLMs. Before joining Mila, he completed his undergraduate degree in Control Engineering at NIT Trichy, India; this background drives his research in combining insights from control theory and RL to build agents that can safely interact with the real world.
Shogo Akiyama
Senior Researcher
Shogo is a Senior Researcher at Araya. He received his B.S. in Computer Science in 2019. He has previously worked as an AI and web application engineer. His research interests are reinforcement learning and natural language processing.
Marina Di Vincenzo
Senior Researcher
Marina is a Senior Researcher at Araya. She received her B.A. in Psychology at the University of Urbino “Carlo Bo” in 2017, her M.S. in Neuroscience and Psychological Rehabilitation at the University “La Sapienza” of Rome in 2021, and, in the same year, a Diploma in Artificial Intelligence at the Institute of Sciences and Technologies of Cognition of the National Research Council (ISTC-CNR). Her research focuses on User-Centered Design in Assistive Neurotechnology.
Hannah Kodama Douglas
Senior Researcher
Hannah is a Senior Researcher at Araya. She received her B.S. in Statistics at Carnegie Mellon University in 2020 before completing a postbaccalaureate position at the National Institutes of Health in the Unit of Neural Computation and Behavior. She then received her M.S. in Computational Neuroscience from Princeton University in 2024. She is interested in applying insights from neuroscience and machine learning to develop practical brain-machine interfaces.
Luca Nunziante
Senior Researcher
Luca is a Senior Researcher at Araya. After receiving his bachelor's degree in Electronic and Computer Engineering at the University of Campania Luigi Vanvitelli, he completed his M.Sc. in Artificial Intelligence and Robotics at La Sapienza University of Rome in 2024. During his master's, he visited the Space Robotics Laboratory at Tohoku University, Japan. His research interests are robot control, artificial intelligence, and the intersection of the two.