A new study has shown that the brain regions controlling facial expressions in macaques work together in unexpected ways, challenging prior assumptions about their division of labor. Researchers led by Geena Ianni at the University of Pennsylvania used advanced neural recordings to reveal how these gestures are encoded. The findings could pave the way for future brain-computer interfaces that decode facial signals for patients with neurological impairments.
Neuroscientists have long puzzled over how the brain generates facial expressions, assuming a clear split between areas handling emotional signals and those managing deliberate movements like speaking. However, a study published in Science on January 20, 2026, upends this view through experiments on macaques, primates with facial musculature similar to that of humans.
Geena Ianni and her team at the University of Pennsylvania began by scanning the macaques' brains with fMRI while filming their faces during social interactions. The animals viewed videos of other macaques, interactive avatars, or live companions, prompting natural facial behaviors such as lip-smacking to signal submission, threat faces to deter rivals, and neutral chewing.
Using these scans, the researchers pinpointed key brain areas: the primary motor cortex, ventral premotor cortex, primary somatosensory cortex, and cingulate motor cortex. They then implanted microelectrode arrays into these regions with sub-millimeter precision, the first such effort to record from many neurons simultaneously during facial gesture production.
Contrary to expectations, all four areas activated for every gesture, from social signals to chewing, in a coordinated pattern. "We expected a division where the cingulate cortex governs social signals, while the motor cortex is specialized in chewing," Ianni noted, but the data showed otherwise.
Further analysis revealed distinct neural codes. The cingulate cortex employs a static code: a firing pattern that persists for up to 0.8 seconds and likely integrates social context and sensory input. In contrast, the motor and somatosensory cortices use dynamic codes, with rapidly shifting firing rates that control precise muscle movements such as subtle lip twitches.
"The static means the firing pattern of neurons is persistent across both multiple repetitions... and across time," Ianni explained, suggesting it stabilizes the gesture's intent while dynamic areas execute the details.
This foundational work (doi.org/10.1126/science.aea0890) builds toward neural prostheses that could restore facial communication for patients with stroke or paralysis. Ianni remains optimistic: "I hope our work goes towards enabling... more naturalistic and rich communication designs that will improve lives." Yet she cautions that reliable devices remain years away, likening the field's current state to early speech-decoding technology from the 1990s.
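As a rough picture of what "decoding facial signals" would involve, here is a minimal, hypothetical sketch (Python with NumPy and scikit-learn) that trains a plain logistic-regression classifier to label synthetic population firing rates as lip-smack, threat, or chew. The trial counts, array size, and rate model are invented for illustration; this is not the decoder a real prosthesis, or the study, would use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
gestures = ["lipsmack", "threat", "chew"]   # gesture labels borrowed from the article
n_trials, n_neurons = 200, 96               # per-gesture trial count and array size are assumptions

# Synthetic data: each gesture gets its own mean population firing-rate vector,
# and single trials are Poisson spike counts around that mean.
X, y = [], []
for label, _ in enumerate(gestures):
    mean_rates = rng.gamma(shape=2.0, scale=5.0, size=n_neurons)
    X.append(rng.poisson(lam=mean_rates, size=(n_trials, n_neurons)))
    y.append(np.full(n_trials, label))
X, y = np.vstack(X), np.concatenate(y)

# A plain multinomial logistic regression stands in for whatever decoder a real device would use.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print(f"held-out accuracy on synthetic trials: {clf.score(X_test, y_test):.2f}")
```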