Viral Topic

Brain-response foundation model

March 26, 2026 · AI at Meta, Chubby, Aakash Gupta

Aakash Gupta spotlights a release from FAIR Paris: a model that predicts brain responses to what you see, hear, or read. He calls out its "70x higher resolution" and contrasts the quiet launch with Meta's Reality Labs losses.

Today we're introducing TRIBE v2 (Trimodal Brain Encoder), a foundation model trained to predict how the human brain responds to almost any sight or sound. It draws on 500+ hours of fMRI recordings from 700+ people.
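For context, models like this build on the classic fMRI encoding-model recipe: fit a regularized linear map from stimulus features to each voxel's response, then score held-out predictions by correlation. The sketch below illustrates that general technique on synthetic data; all sizes and the ridge penalty are assumptions, not details from Meta's release.

```python
# Minimal sketch of a classic fMRI encoding model (illustrative only):
# regress each voxel's response on features of the stimulus.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_trs, n_features, n_voxels = 2000, 512, 1000    # assumed sizes
X = rng.standard_normal((n_trs, n_features))     # stimulus features per fMRI frame
W_true = rng.standard_normal((n_features, n_voxels)) * 0.1
Y = X @ W_true + rng.standard_normal((n_trs, n_voxels))  # synthetic voxel responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

model = Ridge(alpha=10.0)   # one linear map per voxel, fit jointly
model.fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)

# Encoding models are usually scored by per-voxel correlation between
# predicted and measured time courses.
r = np.array([np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)])
print(f"median voxel correlation: {np.median(r):.3f}")
```

What makes a release like TRIBE a "foundation model" is mainly scale: the features come from large pretrained networks, and the mapping is fit jointly across hundreds of subjects rather than one at a time.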
“Meta just dropped TRIBE v2”
“predicts how your brain responds to sight, sound, and language.”
“Trained on 500+ hours of fMRI data from 700+ people”
“predict a new person's brain activity without any retraining”
“Meta has lost $73 billion on Reality Labs since 2020.”
“quietly, the FAIR team in Paris releases a model that predicts how your brain responds to anything you see, hear, or read.”
“70x higher resolution”
Meta just released TRIBE v2, a foundation model that predicts human brain activity across vision, sound, and language. It acts like a digital model of the human brain, predicting neural responses to images, audio, and text.
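Concretely, "trimodal" suggests an architecture along these lines: frozen vision, audio, and text embeddings projected into a shared space, fused, and mapped to per-voxel predictions, with a learned per-subject embedding so a new participant needs only a new embedding rather than a retrained model. The PyTorch sketch below is a hypothetical illustration; the dimensions, the additive fusion, and the subject-embedding trick are assumptions, not Meta's published design.

```python
# Hypothetical trimodal encoder head (not Meta's actual architecture):
# fuse per-modality embeddings and map them to per-voxel fMRI predictions.
import torch
import torch.nn as nn

class TrimodalEncoder(nn.Module):
    def __init__(self, d_vision=768, d_audio=512, d_text=768,
                 d_model=1024, n_subjects=700, n_voxels=1000):
        super().__init__()
        # Project each modality into a shared space, then fuse by summation.
        self.proj_v = nn.Linear(d_vision, d_model)
        self.proj_a = nn.Linear(d_audio, d_model)
        self.proj_t = nn.Linear(d_text, d_model)
        self.subject = nn.Embedding(n_subjects, d_model)  # per-person offset
        self.head = nn.Sequential(
            nn.LayerNorm(d_model), nn.GELU(), nn.Linear(d_model, n_voxels)
        )

    def forward(self, v, a, t, subject_id):
        z = self.proj_v(v) + self.proj_a(a) + self.proj_t(t)
        z = z + self.subject(subject_id)  # condition on who is being scanned
        return self.head(z)               # predicted response per voxel

model = TrimodalEncoder()
batch = 8
v = torch.randn(batch, 768)   # stand-ins for real pretrained features
a = torch.randn(batch, 512)
t = torch.randn(batch, 768)
sid = torch.randint(0, 700, (batch,))
pred = model(v, a, t, sid)
print(pred.shape)  # torch.Size([8, 1000])
```

Under this kind of design, generalizing to a new person means fitting one small embedding vector while the shared trunk stays frozen, which is one plausible reading of the "without any retraining" claim quoted above.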
Sources: AI at Meta, Chubby, Aakash Gupta, AshutoshShrivastava, neuroscienceresearch
