Meta releases TRIBE v2, an AI model that predicts brain responses to images, sound, and text
Headline
Meta’s FAIR team released TRIBE v2, a model trained on fMRI data that predicts brain activity from visual, audio, and text inputs.
Summary
Tech analyst Aakash Gupta (200k+ followers) tweeted about Meta’s release of TRIBE v2, a foundation model that predicts brain activity in response to images, sounds, and text. The model achieves 70x higher resolution than v1 and can generalize to new subjects without retraining. Gupta frames this as strategically valuable for Meta, connecting it to the company’s hardware efforts (Ray-Ban smart glasses, neural wristbands) and arguing that Reality Labs’ $73 billion in losses represent an investment in understanding human attention for advertising. The release shows continued convergence between neuroscience and AI research.
Analysis
Gupta points to Meta’s pattern of open-sourcing AI research (as with LLaMA) to build ecosystems while keeping commercial advantages. TRIBE v2’s CC BY-NC license restricts commercial use, consistent with this approach. The model was trained on 451.6 hours of fMRI data from 25 subjects and tested on 1,117.7 hours from 720 subjects, achieving 2-3x accuracy gains over previous methods. It builds on v1’s first-place finish at Algonauts 2025.
The release points toward “in-silico neuroscience”: using AI to simulate brain experiments rather than running them directly. Gupta’s speculation about connections to Meta’s ad platform and hardware remains unconfirmed, though the potential for AI-driven personalization in consumer devices is real. The ethical questions around attention-targeting technology remain open.
For AI development, this work advances multi-modal models by training them against neural data, which could inform sensory processing in future systems. The research represents Meta’s long-term bet on AI-neuroscience convergence, separate from Reality Labs’ near-term financial pressures.
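To make the idea of "training multi-modal models against neural data" concrete, here is a minimal toy sketch of a classic fMRI encoding model: ridge regression from concatenated per-modality stimulus features to per-voxel responses. This is NOT Meta's TRIBE architecture (which is not detailed here); all dimensions, data, and names below are hypothetical synthetic stand-ins chosen for illustration.

```python
# Illustrative encoding-model sketch (not Meta's TRIBE method):
# predict per-voxel fMRI responses from concatenated multimodal
# stimulus features using closed-form ridge regression.
import numpy as np

rng = np.random.default_rng(0)

n_timepoints, n_voxels = 200, 50
d_vision, d_audio, d_text = 32, 16, 16  # hypothetical feature sizes

# Synthetic stimulus features per modality (stand-ins for embeddings
# from vision/audio/text models)
vision = rng.normal(size=(n_timepoints, d_vision))
audio = rng.normal(size=(n_timepoints, d_audio))
text = rng.normal(size=(n_timepoints, d_text))

# Concatenate modalities into one design matrix
X = np.hstack([vision, audio, text])

# Synthetic "brain" responses: a linear map of the features plus noise
true_w = rng.normal(size=(X.shape[1], n_voxels))
Y = X @ true_w + 0.1 * rng.normal(size=(n_timepoints, n_voxels))

# Ridge regression: W = (X^T X + lam*I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Score with per-voxel Pearson correlation (a standard encoding metric)
Y_hat = X @ W
r = np.array([np.corrcoef(Y[:, v], Y_hat[:, v])[0, 1]
              for v in range(n_voxels)])
print(f"mean voxelwise correlation: {r.mean():.3f}")
```

Benchmarks like Algonauts score models on exactly this kind of held-out voxelwise prediction accuracy; modern entries replace the linear map with large multi-modal networks trained end to end.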
Impact Assessment