Computers Generate Faces Based on Mental Maps
Published in Brain/Neurology.
October 21, 2020
Cognitive neuroscientists have long debated whether people store visual-like “pictures in the brain” that are activated when they think of something, or whether mental representations are organized more semantically, as sets of features. For example, if asked to think of a blond person, will someone conjure in their “mind’s eye” a typical blond person such as Taylor Swift, or will they simply activate some sort of blondness feature?
To help tease apart these possibilities, some cognitive neuroscientists are now collaborating with computer scientists, pairing machine learning with EEG measurements to map mental representations. In a new paper in Scientific Reports, Lauri Kangassalo and colleagues at the University of Helsinki used such a technique to computer-generate face images matching the facial features participants were asked to think about.
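The core loop described here can be sketched in a few lines. The snippet below is a toy illustration, not the authors' code: it assumes displayed faces correspond to points in a generative model's latent space, and that an EEG-based classifier labels each face as "relevant" to the imagined feature or not; the relevant latent codes are then averaged to synthesize a new face. The latent dimensionality, the distance-based stand-in for the EEG classifier, and all variable names are assumptions for illustration.

```python
import numpy as np

# Toy sketch of the closed-loop idea: faces shown to a participant are
# points in a generative model's latent space; responses tagged as
# "relevant" are averaged to estimate the imagined face's latent code.
rng = np.random.default_rng(0)
LATENT_DIM = 16                       # assumed latent dimensionality
target = rng.normal(size=LATENT_DIM)  # latent code of the imagined feature

# Simulate 200 displayed faces as random latent vectors.
shown = rng.normal(size=(200, LATENT_DIM))

# Stand-in for the EEG relevance classifier: here a face counts as
# "relevant" when its latent code is close to the target. In the real
# experiment this label would come from brain responses, not distances.
dists = np.linalg.norm(shown - target, axis=1)
relevant = dists < np.quantile(dists, 0.1)  # top 10% most target-like

# Average the relevant latent codes; decoding this vector through the
# generative model would yield the estimated face.
estimate = shown[relevant].mean(axis=0)
print(estimate.shape)
```

Averaging relevant latent vectors pulls the estimate toward the imagined feature while the irrelevant variation cancels out, which is why the estimate lands closer to the target than the mean of all displayed faces would.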
“Faces, in particular, seem to work great, maybe because it is a very constrained part of vision that humans seem to be incredibly good at; only very recently, new methods in machine learning have been able to surpass our ability to recognize thousands of distinct faces,” says Michiel Spapé, a co-author on the new paper. “As such models very much started by borrowing from research on human physiology and perceptual processes, I think it’d be a beautiful irony if now we could use them to solve old questions in cognitive psychology as well.”