Our team has been developing neuroimaging biomarkers of neuropsychiatric disorders as well as novel analysis methods that combine artificial intelligence and neuroscience. Our work has been supported by the Japan Agency for Medical Research and Development (AMED), a Grant-in-Aid for Transformative Research Areas (B), and a Grant-in-Aid for Scientific Research (B).


Junichi Chikazoe
Research Team Leader

Junichi is a research team leader at Araya. He received his M.D. and Ph.D. from the Graduate School of Medicine, the University of Tokyo. He did his postdoc with Adam K. Anderson at the University of Toronto and Cornell University. After working as an associate professor at the National Institute for Physiological Sciences, he joined Araya in 2021. His research interest is creating AI with emotion. He currently focuses on understanding how value/valence emerges from sensory information by combining fMRI and deep learning. Please see his latest work at the following link.

Google Scholar

Dan Lee
Dan is a researcher at Araya. He received his PhD in Psychology at the University of Toronto, and his BASc in Computer Engineering at the University of Waterloo. He is interested in emotions and value, their representations, and their function in intelligence.
Haruki Niwa

Haruki is a master’s student at Aichi Prefectural University. He is interested in machine learning.


(1) Development of neuroimaging biomarkers of neuropsychiatric disorders (AMED)

Diagnosis of neuropsychiatric disorders relies on doctors' subjective judgment, so objective biomarkers are needed. Recent studies have reported that non-invasive measurement of functional connectivity with resting-state fMRI can provide such biomarkers. However, most currently available methods lack sufficient accuracy for diagnosis, suggesting the limitations of traditional analysis approaches. In this project, we develop methods to improve resting-state fMRI analysis through 1) analysis of the time series of brain-state information by linear combination modeling at rest, 2) dissociation of brain-state and task-related information by linear combination modeling during task performance, and 3) specification of the causality between neuronal responses reflecting brain-state and task-related information by optogenetics in mice. The human and mouse research is promoted through collaboration between an expert in human fMRI (Dr. Chikazoe at Araya) and an expert in calcium imaging and optogenetics in mice (Dr. Agetsuma at the National Institute for Physiological Sciences).
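The idea behind step 1 — modeling the fMRI time series as a linear combination of brain-state patterns — can be illustrated with a minimal numpy sketch on simulated data. All shapes, noise levels, and variable names here are illustrative assumptions, not the project's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: T time points x V voxels, K basis brain-state patterns.
T, V, K = 200, 500, 4
states = rng.standard_normal((K, V))        # basis brain-state patterns (K x V)
weights_true = rng.standard_normal((T, K))  # latent state time courses (T x K)
bold = weights_true @ states + 0.1 * rng.standard_normal((T, V))  # simulated fMRI

# Linear combination modeling: solve bold ~ weights @ states for the weights
# at every time point via least squares, yielding a brain-state time series.
weights_hat, *_ = np.linalg.lstsq(states.T, bold.T, rcond=None)
weights_hat = weights_hat.T                 # back to T x K

# With low noise, the recovered weights track the true state time courses.
err = np.linalg.norm(weights_hat - weights_true) / np.linalg.norm(weights_true)
print(round(err, 3))
```

The recovered weight time series is what would then be analyzed (or, during task performance, separated from task-related regressors as in step 2).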

(2) Emotional informatics (Grant-in-Aid for Transformative Research Areas (B))

The influence of emotion on human behavior is one of the most fundamental themes of the Human Sciences. However, it is not easy to construct mathematical models of human behavior based on emotional variables, as individuals' subjective emotions are not directly observable. Recently, dramatic advances in machine learning techniques have enabled us to decode hidden emotional information from neural and physiological signals. Based on emotional information decoded from brain activity, we will provide novel models, concepts, and theories in the field of Human Sciences.
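In its simplest form, decoding hidden emotional information from brain activity means training a classifier that maps multivoxel patterns to emotional labels. A minimal numpy sketch on simulated data follows; the binary positive/negative labels, trial counts, and signal structure are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: N trials x V voxels; each trial carries a hidden binary
# emotional state (e.g. positive vs. negative) embedded in the voxel pattern.
N, V = 400, 100
signal = rng.standard_normal(V)                   # voxel pattern carrying emotion
labels = rng.integers(0, 2, size=N)               # hidden emotional labels
patterns = (rng.standard_normal((N, V))
            + np.outer(2 * labels - 1, signal))   # noisy multivoxel patterns

# Split into training and test trials, then fit a least-squares linear decoder.
train, test = np.arange(N) < 300, np.arange(N) >= 300
w, *_ = np.linalg.lstsq(patterns[train], 2.0 * labels[train] - 1, rcond=None)
pred = (patterns[test] @ w > 0).astype(int)       # decoded emotional labels

# Decoding accuracy on held-out trials.
accuracy = (pred == labels[test]).mean()
print(round(accuracy, 2))
```

Real decoders use richer labels (continuous valence/arousal ratings) and cross-validation, but the logic — learn a mapping from brain activity to an emotional variable, then evaluate on held-out data — is the same.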

(3) Correspondence between AI and brains

Humans, and now computers, can derive subjective valuations from sensory events, although the underlying transformation process is largely unknown. In this study, we investigated its neural mechanisms by comparing convolutional neural networks (CNNs) to the corresponding representations in humans. Specifically, we optimized CNNs to predict aesthetic valuations of paintings and examined the relationship between CNN representations and brain activity via multivoxel pattern analysis. Activity in the primary visual cortex and in higher association cortices resembled computations in shallow and deep CNN layers, respectively. The vision-to-value transformation is thus a hierarchical process, consistent with the principal gradient that connects unimodal to transmodal brain regions (i.e., the default mode network). Activity in the frontal and parietal cortices was approximated by a goal-driven CNN. Consequently, representations in the hidden layers of CNNs can be understood and visualized through their correspondence with brain activity, facilitating parallels between artificial intelligence and neuroscience. A preprint of this study is available on bioRxiv.
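One standard way to quantify such CNN-brain correspondence is representational similarity analysis (RSA): compare the pairwise dissimilarity structure of a CNN layer's stimulus responses with that of a brain region's voxel responses. The sketch below uses simulated data, with the "brain" responses generated as a noisy readout of the layer so that the two geometries should agree; all shapes and names are illustrative, not the study's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(2)

def rdm(features):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the feature patterns of every pair of stimuli."""
    return 1.0 - np.corrcoef(features)

# Hypothetical data: responses to 20 stimuli in a CNN layer and a brain region.
stimuli, units, voxels = 20, 64, 100
layer = rng.standard_normal((stimuli, units))
readout = rng.standard_normal((units, voxels))
brain = layer @ readout + 0.5 * rng.standard_normal((stimuli, voxels))

# Correlate the upper triangles of the two RDMs to score the correspondence.
iu = np.triu_indices(stimuli, k=1)
similarity = np.corrcoef(rdm(layer)[iu], rdm(brain)[iu])[0, 1]
print(round(similarity, 2))
```

Repeating this for each CNN layer against each brain region yields the layer-by-region correspondence profile: a hierarchical transformation shows up as shallow layers matching early visual cortex and deep layers matching association cortex.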