“Restructuring human sciences based on decoding of emotional information”.
PI: Junichi Chikazoe. (October 2021-March 2024)
The influence of emotion on human behavior is among the most fundamental themes of the human sciences. However, it is not easy to construct mathematical models of human behavior based on emotional variables, because the subjective emotions of individuals are not directly observable. Recently, dramatic advances in machine learning techniques have enabled us to decode hidden emotional information from neural and physiological information. In this project, we will propose new models in the human sciences, including linguistics, economics, and aesthetics. We are looking for collaborators across the broad areas of the human sciences as well as in neuroscience and data science.
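The idea of decoding a hidden emotional state from observable signals can be illustrated with a minimal sketch (everything here is hypothetical: the features, labels, and nearest-centroid decoder are illustrative stand-ins, not the project's actual methods):

```python
from statistics import mean

# Toy "emotion decoder": nearest-centroid classification of an emotion label
# from physiological features (hypothetical: [heart rate, skin conductance]).
def fit_centroids(samples):
    """samples: list of (features, label). Returns label -> centroid."""
    by_label = {}
    for feats, label in samples:
        by_label.setdefault(label, []).append(feats)
    return {label: [mean(dim) for dim in zip(*rows)]
            for label, rows in by_label.items()}

def decode(centroids, feats):
    """Predict the label whose centroid is closest (squared Euclidean)."""
    return min(centroids,
               key=lambda lbl: sum((f - c) ** 2
                                   for f, c in zip(feats, centroids[lbl])))

# Made-up training data: two labeled emotional states.
training = [([62, 0.2], "calm"), ([58, 0.3], "calm"),
            ([95, 0.9], "stressed"), ([100, 1.1], "stressed")]
centroids = fit_centroids(training)
print(decode(centroids, [90, 0.8]))  # -> stressed
```

Real decoding pipelines would replace the toy features with high-dimensional neural recordings and the centroid rule with modern machine-learning models, but the structure — fit on labeled signals, then predict the unobservable emotional variable — is the same.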
“Using open-ended algorithms to generate video game content”.
PI: Kai Arulkumaran. (2021-2022)
This project aims to test the power of open-endedness in computational creativity and human-machine collaborative design, culminating in an algorithm that can create fully functional and visually appealing game content in Space Engineers. The project will provide insights into the role of multi-agent interactions in open-ended, lifelong learning, connecting it to GoodAI’s research into lifelong-learning AI (the Badger architecture).
“Creating a new framework for multi-agent AI systems”.
PI: Martin Biehl. (2021-2023)
Current artificial intelligence is limited in its scope and is far from human-level intelligence. One of the key missing components is the ability to pursue multiple goals that are dynamic and that depend on knowledge acquired from previous tasks.
In this project, we will produce a new framework for multi-agent AI systems that aims to advance the field of multi-agent learning and inform the development of GoodAI’s own artificial intelligence framework, the Badger architecture. It will do so both by deepening our understanding of existing algorithms and how they can be applied, and by developing novel methods.
This project will focus on four areas where current artificial intelligence methods fall short:
1. The ability to have multiple, often competing goals
2. Coordination and communication from an information-theoretic perspective
3. Dynamic scalability of multi-agent systems
4. Dynamically changing goals that depend on knowledge acquired through observations
“Liberation from Biological Limitations via Physical, Cognitive and Perceptual Augmentation”.
PI: Ryota Kanai. (December 2020-November 2025)
This project aims to develop cybernetic avatars that can be controlled via intention. This intention will be estimated from brain activity, from information observed on the surface of the human body, and through interactions. We will integrate intention-estimation methods using AI technologies and enhance the functionality of cybernetic avatars controlled by brain-machine interfaces (BMIs), while considering the ethical implications. By 2050, we will create the ultimate BMI cybernetic avatars that can be freely operated by human intention.
“Linking machine intelligence to consciousness”.
PI: Ryota Kanai. (July 25, 2018–July 24, 2021)
What fundamental elements underlie consciousness—and can machines replicate those elements?
Intelligence and consciousness are often viewed as distinct. While intelligence is characterized by objective performance on well-defined tasks, consciousness is characterized by internal, subjective experience that is not directly observable from a third-person perspective. Or so the popular notion goes.
Ryota Kanai challenges this dichotomy. His team’s bold research on artificial general intelligence (AGI) seeks to uncover a novel link between these two concepts and, ultimately, whether consciousness can emerge in machines.
The team hypothesizes that AGI requires the capacity for mental simulation based on generative models, and that the physical implementation of such functions necessitates internal causal structures in the system that correspond to consciousness.
They will test this idea with a two-stage constructivist approach:
1. Identifying putative functions of consciousness and implementing them into AI systems.
2. Characterizing the intrinsic information structures of such systems to infer the presence of conscious experience.
As suggested by empirical observations from psychology and neuroscience, the ability to carry out model-based planning might be a key functional component of consciousness in biological systems. Building on this idea, the team will develop neural network systems that perform model-based planning using generative models of the environment and of the agent itself (i.e., a self-model). In the second stage, they will analyze the causal structure within such systems. They predict that AI systems capable of model-based planning necessarily exhibit highly integrated information, considered a signature of consciousness in biological brains.
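The core idea of model-based planning — using a generative model to mentally simulate the consequences of actions before taking them — can be sketched minimally as follows (a hypothetical toy world and planner, not the team’s actual neural-network systems):

```python
import itertools

# Toy deterministic forward model of a 1-D world with states 0..10
# (hypothetical). It plays the role of the generative model: it predicts
# the next state from the current state and an action, without acting.
def forward_model(state, action):
    return max(0, min(10, state + action))  # actions: -1 (left), 0, +1 (right)

def reward(state):
    return -abs(state - 7)  # goal state is 7; reward peaks there

def plan(state, horizon=3, actions=(-1, 0, 1)):
    """Model-based planning: mentally simulate every action sequence
    with the forward model and return the first action of the best one."""
    best_seq, best_return = None, float("-inf")
    for seq in itertools.product(actions, repeat=horizon):
        s, total = state, 0.0
        for a in seq:  # imagined rollout, no interaction with the real world
            s = forward_model(s, a)
            total += reward(s)
        if total > best_return:
            best_return, best_seq = total, seq
    return best_seq[0]

# Starting at state 2, the planner repeatedly imagines futures and moves
# toward the goal state 7.
state = 2
for _ in range(5):
    state = forward_model(state, plan(state))
print(state)  # -> 7
```

In the project’s framing, the interesting question is not the planner itself but the causal structure such imagination machinery induces: the second stage would analyze systems like this (at neural-network scale) for integrated information.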
In summary, the team expects to develop a theoretical framework that characterizes machine and biological intelligence from the perspective of intrinsic information. These results will offer new design principles for building AGI, opening broader discussions on machine consciousness.
PI: Ryota Kanai. (October 2015–March 2021)
Artificial intelligence is now being revolutionized by the advent of deep neural networks, inspired by the multilayered structures of the brain, together with the availability of big data. However, we have yet to recreate the natural intelligence of biological systems, which understands the meanings of events and generates voluntary actions. In this project, we will implement computational principles offered by contemporary theories of the brain and consciousness, create artificial systems that have conscious experience, intention, and will, and apply this technology to a wide range of data-processing tasks in real-life environments.