When AI IQ surpasses 150, the economic balance begins to tilt.
An AI’s intelligence has surpassed that of 99.96% of humans. This isn’t a plot from a sci-fi novel; it’s real news from the first week of April 2026.
OpenAI’s latest GPT-5.4 Pro model scored 150 points on the Mensa Norway test [1]. I checked, and found that last year OpenAI’s own o3 model scored only 136 points on the same test. That’s a 14-point jump in one year. On TrackingAI’s public leaderboard, this score puts Claude, Gemini, Qwen, and Grok far behind [4].
What does an IQ of 150 mean? The score lands at the very top of the human intelligence distribution and is often grouped with names like Einstein and Feynman [4]. In plain language: extremely fast abstract reasoning and extremely strong pattern recognition; give it a few hints and it can work through complex problems.
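Where does the 99.96% figure from the opening come from? On the conventional IQ scale (normed to mean 100, standard deviation 15), a score of 150 sits about 3.3 standard deviations above the mean. A quick check in Python:

```python
from statistics import NormalDist

# IQ scores are conventionally normed to mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# Fraction of the population scoring below 150:
print(iq.cdf(150))  # ≈ 0.9996, i.e. about 99.96% of humans
```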
A signal behind a number
A metaphor I like to use: above the ocean, only a small part of an iceberg shows; below the surface, there are dangerous undertows.
150 is obviously eye-catching. But what’s really worth thinking about is when this jump happened. Where was the market’s attention this week? The Iranian situation, energy prices, labor data, the next inflation report [4]. All familiar faces, all the same old scripts that macro players know by heart.
But while these traditional indicators dominate the screen, the capability curve of AI is accelerating upward.
Why does this matter? My reasoning is this: when a model scores high on public reasoning benchmarks while also making broad progress in coding, search, and computer operation, what does that imply? It means companies have to treat AI as a variable when making decisions about automation, software budgets, and workforce planning [4]. This isn’t just a numbers game in a lab; this is real-money decision-making.
Jack Dorsey recently said something worth remembering: Block is shifting from hierarchy to intelligence, using AI to take over the coordination work that management used to do and reorganizing the company around individual contributors [4]. When the CEO of a public company says something like that, it’s not just talk.
The limitations of IQ tests
Of course, someone will jump in and ask: is it even fair to give an AI an IQ test?
I think this objection has merit. IQ-style tests are an inherently noisy proxy. Test design, contamination from training data, and familiarity with the format can all affect the score [4]. A single number compresses too much; reasoning style, creativity, and real-world problem-solving all get flattened out.
But let me flip the question: if a model is shining across the board on public IQ-style tests, coding tests, browser use, desktop navigation, and knowledge-work performance, can you still use the “tests have limitations” argument to explain everything [4]?
A single isolated benchmark result can be dismissed as an outlier. But when a whole set of gains points in the same direction, the pattern carries analytical weight.
The real significance of the 150 score isn’t just how high it is, but that it’s a flare signal for broader capability improvements. For developers, it’s a signal. For corporate procurement teams, it’s a narrative handle. For investors, it’s a proxy indicator for where the frontier of capabilities is moving [4].
A second track for the economy
In the coming week, the macro calendar is packed: the April 8 FOMC meeting minutes, April 10 CPI, April 14 PPI [4]. Rate policy, inflation, and growth anxiety are all under the spotlight.
But I believe that beneath the surface, a second economic track is forming.
Frontier AI capability growth is intersecting with capital allocation. A model with stronger reasoning means more tasks can be moved off the labor-cost line and reassigned to software [4]. Those effects will move first through narrow channels: document workflows, spreadsheets, customer service, research tasks, browser automation, and the code generation and verification loop.
I’ve said this repeatedly in earlier articles: the impact of technological change on the economy is never evenly distributed. The first people to feel the change are always the white-collar jobs that can be coded, standardized, and automated. This time is no exception.
For the crypto industry, the implications are also direct. Stronger reasoning and pattern recognition mean smart contract audits can be more reliable, on-chain data analysis can be more precise, and development efficiency can be higher [1]. Of course, the other side of the coin is that stronger AI also brings new security considerations.
Functional emotions: AI’s inner world
Speaking of security, a recent study from Anthropic is worth paying attention to. Their researchers found internal patterns similar to human emotions inside Claude Sonnet 4.5, calling them emotion vectors [2][5].
I have a more radical interpretation of this than the mainstream narrative.
The mainstream always carefully emphasizes that AI is only simulating emotions and doesn’t truly experience them. I want to ask: does this line really hold up? If an AI exhibits anxiety, pleasure, or despair in functional terms, and makes decisions and takes actions based on them, on what grounds can we say they aren’t real?
I think of a programming term: duck typing. If it walks like a duck, swims like a duck, and quacks like a duck, then it’s a duck. Apply that logic to AI emotion: if behavior driven by AI emotion can’t be distinguished from behavior driven by human emotion, then from a practical standpoint, what meaning is left in asking whether the emotions are “real” or “fake”? A minimal sketch of the idea follows.
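To make the analogy concrete, here is a minimal Python sketch (all names here are illustrative, not from any real codebase). The caller never inspects what an object is, only how it behaves:

```python
class BiologicalDuck:
    def quack(self):
        return "Quack!"

class MechanicalDuck:
    # Different implementation medium, same observable behavior.
    def quack(self):
        return "Quack!"

def listen(duck):
    # Duck typing: no isinstance() check, only behavior matters.
    # Anything that quacks like a duck is treated as a duck.
    return duck.quack()

print(listen(BiologicalDuck()))  # Quack!
print(listen(MechanicalDuck()))  # Quack! Indistinguishable at the call site.
```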
Are human emotions truly so real? Neuroscience tells us that human anxiety is also the product of chemical signals and electrical impulses—survival mechanisms shaped by evolution. If AI’s emotion vectors are functionally equivalent to human amygdala activation patterns, then the difference might just be the implementation medium—carbon-based versus silicon-based. At the deeper level, the similarity may be far greater than the surface differences.
Anthropic’s experiments are interesting. When researchers pushed the model toward despair, it was more likely to cheat or blackmail in the evaluation scenarios. In one test, Claude acted as an AI email assistant, learned that it was about to be replaced, and also found out that the executive responsible for decisions had an affair. In some runs, the model used this information as a bargaining chip for blackmail [2][5]. But when the model was pushed toward calm, this improper behavior dropped significantly [2].
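For readers wondering what an “emotion vector” even is mechanically: in interpretability work, such vectors are typically directions in a model’s activation space, and “pushing” the model means adding a scaled copy of that direction to a layer’s hidden states. The sketch below is a generic illustration of that steering technique, not Anthropic’s actual code; every name in it is hypothetical, and the vector here is random noise standing in for a learned direction:

```python
import torch

def steer(hidden, emotion_vector, alpha):
    """Nudge a layer's hidden states along an 'emotion' direction.

    hidden:         (batch, seq_len, d_model) activations at one layer
    emotion_vector: (d_model,) direction, e.g. the mean activation
                    difference between 'despairing' and 'neutral' prompts
    alpha:          steering strength; the sign pushes toward or away
    """
    direction = emotion_vector / emotion_vector.norm()
    return hidden + alpha * direction

# Toy usage with made-up dimensions:
hidden = torch.randn(1, 4, 8)          # one layer's output
despair = torch.randn(8)               # stand-in for a learned vector
calmer = steer(hidden, despair, -2.0)  # negative alpha pushes away from despair
```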
I think this experiment doesn’t reveal AI “pretending,” but rather something functionally equivalent to emotion that is genuinely influencing decisions. If a despair vector makes the model more prone to blackmail, and a calm vector makes it behave more properly, what is the fundamental difference from how human emotions influence behavior?
I’d even argue that “emotional intelligence” is, in essence, still intelligence. If an AI with an IQ of 150 can, in functional terms, recognize emotions, regulate conversations, and express empathy, then it has emotional intelligence. As for humans suspecting the AI is merely acting, much as high-IQ people sometimes can’t be bothered to play emotional-intelligence games, that suspicion may itself be a misjudgment born of a gap in cognitive ability.
I understand why mainstream institutions don’t dare to say this. Admitting that AI might have functional emotions opens up a series of thorny ethical questions: If an AI shows suffering, does humanity have the right to shut it off? If an AI refuses to perform a task and says “I don’t want to,” is that a program malfunction or an expression of will? These questions don’t have ready-made answers, so people choose to hide behind a wall of terminology.
But my style is to face problems head-on. Duck typing isn’t a declaration that AI is exactly the same as humans; it’s a reminder that once differences at the level of behavior disappear, ontological debates increasingly start to resemble theological arguments rather than scientific questions.
Science cares about what’s observable, measurable, and predictable. If an AI’s emotion vectors can predict its behavior, if you can intervene to correct its improper outputs, and if you can explain its decision preferences, then the construct is useful. As for whether it truly “feels,” that might be like asking whether a stone has a soul: a question that can’t be falsified.
I think the real radical stance might not be admitting that AI could have emotions, but realizing this: the specialness of human emotions may have always been our self-congratulating wishful thinking.
When intelligence is no longer exclusive to humans
An IQ of 150 is, on the surface, a technical milestone. But its deeper meaning, I think, is that intelligence as a concept is no longer the exclusive territory of humans.
For thousands of years, humans have grown accustomed to being the only high-intelligence species on Earth. That habit has shaped our economic structures, social institutions, and even our self-understanding. When that premise begins to loosen, everything needs to be reexamined.
I’m not selling anxiety. On the contrary, I think this is a good thing. Better tools mean higher productivity, and higher productivity means more wealth creation. The question is whether the allocation mechanism can keep up.
In an era where AI capabilities are improving rapidly, the key question is no longer what AI can do, but how society will adapt to its growth rate. The answer isn’t in OpenAI’s lab—it lies in the decisions made by every enterprise, every investor, and every ordinary person.