Intelligence and consciousness are often viewed as distinct. While intelligence is characterized by objective performance on well-defined tasks, consciousness is characterized by internal, subjective experience that is not directly observable from a third-person perspective. Or so the popular notion goes.
Ryota Kanai challenges this dichotomy. His team’s bold research on artificial general intelligence (AGI) seeks to uncover a novel link between the two concepts and, ultimately, to determine whether consciousness can emerge in machines.
The team hypothesizes that AGI requires the ability to perform mental simulation based on generative models, and that physically implementing such functions necessitates internal causal structures in the system that correspond to consciousness.
They will test this idea with a two-stage constructivist approach:
1. Identifying putative functions of consciousness and implementing them in AI systems.
2. Characterizing the intrinsic information structures of such systems to infer the presence of conscious experience.
As empirical observations from psychology and neuroscience suggest, the ability to carry out model-based planning might be a key functional component of consciousness in biological systems. Building on this idea, the team will develop neural network systems that perform model-based planning using generative models of the environment and of the agent itself (i.e., a self-model). In the second stage, they will analyze the causal structure within such systems. They predict that AI systems capable of model-based planning necessarily exhibit highly integrated information, considered a signature of consciousness in biological brains.
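The core idea of model-based planning, deciding by mentally simulating futures with an internal model rather than acting by trial and error, can be illustrated with a toy sketch. This is purely illustrative and makes up its own miniature world; the transition function, horizon, and reward here are assumptions for the example, not the team's actual architecture.

```python
import itertools

# Toy 1-D world: states 0..4, with a goal at state 4.
# The "generative model" is a known transition function the agent uses
# to simulate futures without acting -- a minimal stand-in for the
# learned world models described above.

GOAL = 4

def model(state, action):
    """Predict the next state for action -1 (left) or +1 (right)."""
    return max(0, min(4, state + action))

def plan(state, horizon=4):
    """Model-based planning: simulate every action sequence up to
    `horizon` steps inside the model, score each by time spent at the
    goal, and return the first action of the best sequence."""
    best_seq, best_score = None, float("-inf")
    for seq in itertools.product([-1, +1], repeat=horizon):
        s, score = state, 0
        for a in seq:
            s = model(s, a)          # simulate, don't act
            if s == GOAL:
                score += 1
        if score > best_score:
            best_seq, best_score = seq, score
    return best_seq[0]

print(plan(0))  # -> 1 (the planner chooses to move right)
```

A learned generative model would replace the hand-written `model` function, and the exhaustive search would be replaced by something more scalable, but the structure, imagine, evaluate, then act, is the same.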
In summary, the team expects to develop a theoretical framework that characterizes machine and biological intelligence from the perspective of intrinsic information. These results will offer new design principles for building AGI and open a broader discussion of machine consciousness.