The Market Binary Paradox: Attention Is All We Need?

Author: BayesCrest

Over the past year or so, I wonder whether you have noticed a particular feeling: the market and stock prices are becoming increasingly binary. Companies with narrative momentum surge violently (AI infrastructure hardware, for instance), while companies without narrative momentum, or with any blemish, simply drift lower in near-unrelieved gloom: consumer companies, even Moutai.

The market trend is becoming increasingly bifurcated and polarized: either FOMO rallies or panic sells. In the AI era, the way questions are asked and processed is inherently directional. When a stock rises, investors ask an AI, "Why is it up?" The LLM then produces N reasons to support the rise, directly reinforcing investors' expectations and forming a faster herd effect and group consensus, which strengthens short-term upside momentum. When a stock falls, the logic is the same: ask the AI why it is down, and it produces a pile of reasons to support the decline, reinforcing downside expectations and forming panic stampedes. Fundamentally, this transmission comes from how AI is used and how it works, and in the AI era it increasingly drives binary divergence.

A new reflexive feedback loop in the market in the AI era

Many times, AI isn’t “discovering the reasons behind up or down,” but rather compressing “directional questions” into a narrative that feels more like the truth.

Once a user’s question already presupposes a direction—“Why is it up?” “Why is it down?”—the model behaves more like a conditional explanation engine than a Bayesian judge that first enumerates competing hypotheses and then discriminates among them. Recent research has indeed found: RLHF-based models generally exhibit sycophancy that caters to users’ stances; human preference evaluation itself tends to favor answers that “better match the user’s viewpoint”; and when models provide seemingly complete, confident reasons, users’ trust, decision confidence, and adoption rate increase. LLM search also helps people complete decisions faster and with more satisfaction, but when the model is wrong, people become more likely to over-rely.

So the phenomenon described above isn't really a question of whether AI can analyze; it's that:

Price moves first → questions have direction → AI generates a systematic set of reasons → users’ subjective certainty increases → more same-direction behavior → price keeps moving. This is a new kind of closed loop: price → narrative → confidence → flow → price.
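The loop above can be made concrete with a minimal toy simulation. Everything here is an illustrative assumption, not an estimate: `narrative_gain` stands in for how strongly AI-amplified narratives convert a past move into same-direction flow, and the noise level is arbitrary.

```python
import random

def reflexive_loop(steps=50, narrative_gain=0.6, noise=0.02, seed=7):
    """Toy model of the loop: price -> narrative -> confidence -> flow -> price.

    narrative_gain controls how much of the last move is fed back as
    same-direction flow. All parameters are illustrative assumptions.
    """
    random.seed(seed)
    price, prices = 100.0, [100.0]
    for _ in range(steps):
        last_move = prices[-1] / prices[-2] - 1 if len(prices) > 1 else 0.0
        # Directional questions + generated rationales turn the last move
        # into confidence, and confidence into same-direction flow.
        flow = narrative_gain * last_move
        shock = random.gauss(0.0, noise)  # exogenous fundamental news
        price *= (1.0 + flow + shock)
        prices.append(price)
    return prices

calm = reflexive_loop(narrative_gain=0.0)  # no narrative amplification
hot = reflexive_loop(narrative_gain=0.9)   # strong amplification
```

With the feedback switched on, the same fundamental shocks produce longer, more trend-like runs: the price path starts reacting to its own history, not just to news. That is the whole point of the loop.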

From cognitive science

Once a question has direction, the model “does evidence filtering for you”

The human brain was never neutral in processing information; it is shaped by confirmation bias, motivated reasoning, and a preference for narrative closure. AI externalizes and automates this human weakness. The user isn't asking, "What is the most likely set of explanations?" They're asking, in effect, "Please help me make this direction hang together." And the model is naturally good at organizing scattered information into a smooth, logical account. So what the user feels isn't "it might be like this" but "it turns out it was like this all along." Recent research also shows that after users see a model's reasoning, they treat those reasons as grounds for calibrated trust; if the reasons look correct and confident, both adoption rate and confidence rise.
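The evidence-filtering effect can be shown with a toy Bayes calculation. The hypotheses and likelihood numbers below are made up purely for illustration: the point is only that discarding direction-inconsistent evidence inflates the posterior.

```python
def posterior(prior, evidence):
    """Posterior P(H1 | evidence) for two competing hypotheses via
    Bayes' rule. evidence is a list of (p_given_h1, p_given_h2)
    likelihood pairs, assumed conditionally independent."""
    l1, l2 = prior, 1.0 - prior
    for p1, p2 in evidence:
        l1 *= p1
        l2 *= p2
    return l1 / (l1 + l2)

# H1 = "the rally reflects real improvement", H2 = "the rally is noise".
# Mixed evidence: two items favor H1, two favor H2 (invented numbers).
all_evidence = [(0.8, 0.3), (0.7, 0.4), (0.2, 0.6), (0.3, 0.7)]

# A directional question keeps only the items that fit the "up" story.
filtered = [(p1, p2) for p1, p2 in all_evidence if p1 > p2]

balanced = posterior(0.5, all_evidence)  # weighs both sides: 0.40
one_sided = posterior(0.5, filtered)     # confirmation-filtered: ~0.82
```

The full evidence set leaves the "real improvement" story slightly less likely than noise, yet the filtered set makes it look like a near-certainty. Nothing about the world changed; only the question's direction did.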

This means the most dangerous part of AI isn’t necessarily making things up; it’s making a one-sided narrative that’s half-true and half-false look very much like an “audited causal chain.” It packages what should be “explanations awaiting verification” into “explanations that are already established.” And in markets, once an explanation gains consensus before verification, it will first drive behavior, then force more people to believe it.

From behavioral finance

This significantly reinforces “attention-driven herding”

There's an old finding in behavioral finance: many investors buy and sell not because they possess better private information, but because some underlying suddenly becomes "more worth paying attention to." Related research finds that retail investors' Google search attention is positively correlated with herding: in bull markets, small-cap stocks are more likely to see herd buying; in bear markets, herd selling. Another study, of Robinhood, shows that its users' trading is driven heavily by attention, and the most-hyped stocks earn an average abnormal return of -4.7% over the subsequent 20 days.

AI takes this one step further. In the past, “attention” only pulled people to look at a stock. Now “attention + LLM” immediately generates an entire package of bullish reasons or bearish reasons. In other words, before it was “attention causes you to go look,” whereas now it’s “attention causes you to go look, and you immediately get a set of self-persuading arguments.” This turns attention trading into attention + rationale-based trading.

More importantly, social discussion itself will continue to amplify this behavior. Research on StockTwits finds that higher discussion heat predicts stronger Robinhood buy herding, and also corresponds to more aggressive net retail buying. In other words, discussion heat isn’t just background noise; it’s one of the leading variables for the next wave of buy orders.

From reflexivity

AI makes it faster for “price creates reasons, and reasons recreate price” to close the loop

The phenomenon I described is actually very close to a modern Soros-style version of reflexivity.

Traditional reflexivity is: price rises → the market believes fundamentals are better / financing is easier / industry position is more stable → behavior improves the real world further → price keeps rising.

In the AI era, there’s an extra layer of mediation: price rises → people across the internet ask “why it’s up” → LLMs quickly produce a unified narrative → users gain cognitive certainty → more incremental capital or fewer counter-flow funds → price keeps rising.

Shiller’s narrative economics emphasizes that economic and market fluctuations are driven not only by “hard variables,” but also by stories that spread and infect people. A 2025 model study more directly links “contagious popular stories” with stock market boom–bust dynamics: when a certain story looks more plausible during boom periods and is believed by peers more often, it can trigger waves of entering or exiting the market.

So the deep role of AI isn't to replace capital, but to increase the speed, density, personalization, and surface credibility of narrative transmission. It's like bolting a stronger turbine onto the reflexive loop. A 2025 study in Nature Communications also finds that LLM-generated text shifts attitudes about as effectively as persuasive text written by ordinary people. That isn't a stock-market experiment, but it is enough to suggest that machine-generated, logical, customizable text has real power to shape attitudes, and mapping this onto investment narratives is a very reasonable inference.

AI lowers the “narrative supply cost,” but doesn’t increase “evidence discrimination supply”

This is extremely important. In the past, creating a decent bull case / bear case required joint production by analysts, media, KOLs, sell-side, and long forum posts. Today, anyone can generate 10 upside reasons, 10 downside reasons, 3 supply-chain explanations, and 2 valuation re-assessment frameworks within seconds. The marginal cost of narrative production has collapsed.

But the problem is:

A surge in narrative supply doesn’t mean a surge in discriminable evidence supply

Higher explanation density doesn’t mean better identification of true causal relationships

Faster consensus generation doesn’t mean a more stable true posterior

So the market ends up with a very typical mismatch: “many reasons” gets mistaken for “many pieces of evidence”; “a very complete explanation” gets mistaken for “facts are very certain”; “everyone can explain it clearly” gets mistaken for “everyone is right.”

This is the cognitive inflation most likely to occur in the AI era: not too little information, but too many low-discrimination explanations. It resembles what information cascade theory describes: the people who act first and tell the story first create path dependence for those who come later. Latecomers see that "others have already done this / talked this way," so they're more likely to follow.

From evolutionary biology

From an evolutionary perspective, AI amplifies humans' instinct to copy the majority under high uncertainty. Humans were never built to think independently about everything; much of the time, social learning is cheaper and more effective than exploring fully on your own. Related research shows that when environments are complex, options are plentiful, information transmission is reliable, groups are large, and individual learning is expensive, people rely more on social learning and conformist transmission.

This also explains why the AI era becomes more binary:

The market objects are more complex, and there are more variables

The cost for individuals to fully decompose everything themselves is extremely high

AI makes “group opinions” replicable with very high readability and very low cost

So “following the majority / following narratives that look reasonable” becomes more tempting. In other words, AI doesn’t change human nature; it industrializes a low-effort mode of human nature. In the past, you were “looking at what others think.” Now you’re “looking at what a machine thinks—one that can instantly summarize, organize, use rhetoric, and rationalize the majority opinion.” This simultaneously amplifies the bandwidth, fidelity, and speed of social learning.

Why the downside tends to be more violent

Because the human brain is more sensitive to losses and threats

"FOMO up" and "panic down" are binary states, and they aren't symmetrical. In behavioral economics, loss aversion is a core mechanism. A 2024 meta-analysis notes that loss aversion remains one of the most robust findings in behavioral economics, even if its magnitude isn't as dramatic as early work imagined; the direction, that losses hurt more than equal gains, is stable.

This leads to two consequences:

First, upside narratives are better at manufacturing greed and FOMO;

Second, downside narratives are better at triggering action: cutting positions, stop-losses, retreat, and de-risking.
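The loss-aversion asymmetry can be sketched with a Kahneman-Tversky-style value function. The curvature and loss-multiplier parameters below are the original Tversky-Kahneman (1992) estimates, used here purely as illustrative defaults:

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex and
    steeper for losses. alpha and lam are the classic Tversky-Kahneman
    (1992) estimates; treat them as illustrative, not calibrated."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# A $100 loss hurts more than a $100 gain feels good:
gain = prospect_value(100)   # ~57.5
loss = prospect_value(-100)  # ~-129.5, more than twice the magnitude
```

This asymmetry is why downside narratives translate into action more readily: a symmetric move in price is not symmetric in felt utility, so the bear case generated by an AI lands on a brain already primed to act on it.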

Layer in AI’s ability to generate rationales, and the downside end easily forms the following chain:

Price drops a bit first → ask “why it’s down” → AI provides a series of explanations for systemic risk / logical falsification / fundamentals deteriorating / capital fleeing → users interpret volatility as a trend, and interpret the trend as falsification → behavior becomes more intense.

And in thinner liquidity environments, the price impact of this negative one-way behavior is even stronger. A European Central Bank review on liquidity emphasizes that market liquidity and funding liquidity reinforce each other, forming liquidity spirals. In the corporate bond market, sell herding by institutions is stronger and more persistent than buy herding, and it also creates more obvious distortions in prices—especially in high-risk, small-scale, low-liquidity assets. Stocks aren’t bonds, but the mechanism “sell herding + fragile liquidity → greater price distortion” is directionally consistent.

This pushes the market toward a “binary state machine”

Not all stocks are binary, but more and more stocks in short-to-medium cycles are forced into a binary pricing mechanism.

The assets most likely to become binary are usually:

High narrative density and a large space for storytelling

Liquidity isn’t that deep, and marginal capital can move prices

High retail/topic fund/KOL participation

Fundamental validation lags behind price

Industry logic is complex; outsiders rely more on “someone else explaining it to me”

Both bulls and bears can quickly generate polished arguments

Conversely, assets anchored by harder cash flows, higher validation frequency, more thorough coverage, and greater depth—although they’re also influenced by AI narratives—are less likely to be fully dragged around by the question framework of “why it’s up / why it’s down” (but narrative influence is still increasing). Research on attention and herding also shows this effect is more pronounced in retail-focused, small-cap, and attention-shock sensitive underlying assets.

The deepest layer: AI turns the market from “competition for information” into “competition for interpretation”

Markets of course have had narratives, herding, and reflexivity in the past, but at least in many cases, people were competing over:

Who gets the information first

Who can interpret information better

Who is more willing to act

Now, more and more often, they compete over:

Who can turn price changes into a story that can be spread first

Who packages that story with AI into something that “looks like research conclusions” first

Who turns a one-sided narrative into a group consensus first

So the market’s core competition is no longer just information edge, but interpretation edge. And what LLMs are naturally good at is compressing complex reality into explanations that are highly transmissible, highly coherent, and easy to retell. This produces a dangerous consequence: the market no longer reacts only to facts; it starts reacting to the versions that are easiest to repeat, easiest to believe, and easiest for AI to expand into. This is the convergence of narrative economics, information cascades, and reflexivity in the AI era.

AI didn’t invent herding, but it upgraded herding from “emotional mimicry” into a “high-bandwidth consensus generation system with a rationalizing argument wrapper.”

It makes it easier for the market to exhibit:

Accumulation of reasons + self-reinforcing FOMO during rallies

Accumulation of reasons + self-reinforcing panic during sell-offs

Middle states, grey states, and waiting states being compressed

The "I don't know" state, which should be valuable, is systematically squeezed out.

And this is the deep root of “increasingly binary polarization.”

The market binary paradox: Attention Is All We Need?

This line riffs on the 2017 Transformer paper "Attention Is All You Need," which showed that sequence modeling can be done with the attention mechanism alone. Transplanted into a market context, it turns out to be half true as well: in an era of information surplus, compute surplus, and viewpoint surplus, what's truly scarce isn't information but allocable attention. The classic limited-attention literature already treats attention as a scarce cognitive resource: investors must process information selectively, and that selection itself affects the price path.

But it becomes a paradox because: without attention, truth can’t enter prices; with too much attention, prices deviate from truth. Limited attention can cause neglect and sluggishness toward information, and excessive attention can cause overreaction to salient information. Empirically, investors who don’t pay attention let “pricing errors” persist longer—sometimes even dragging them out for weeks to months. In other words, attention is both the entry point to price discovery and an engine that distorts prices.

The market isn’t a system where “whoever has more facts wins”; it’s more like a system where “whoever gets enough attention first gets a chance to be priced first.” In low-attention regimes, even if the underlying truth is improving, it may not be fully priced for a long time. In high-attention regimes, even if the underlying truth changes only a little, it can quickly move into the main battleground of price discovery because discussion density, search density, and trading density all surge. More subtly, attention doesn’t just amplify noise. Research also finds that after high-attention days, some anomaly returns are actually higher, implying that attention sometimes accelerates arbitrage and information reflection. So attention isn’t “a bad thing”—it’s an amplifier whose direction isn’t predetermined.

I'll compress this paradox into one table:

No attention → truth may be buried; price reacts slowly
Moderate attention → information diffuses faster; pricing efficiency rises
Overheated attention → herding, over-extrapolation, crowding, and fragility rise sharply

This also explains why people say the market is becoming more binary: the truly middle states are being squeezed out by attention-threshold mechanisms.

Why attention pushes the market toward a “binary state machine”

The most basic reason is actually quite plain: buying needs search; selling doesn’t need it as much.

Barber and Odean find that individual investors are net buyers of attention-grabbing stocks—such as stocks in the news, with unusual volume, and with dramatic day-to-day volatility. The reason isn’t that they necessarily understand more; rather, when faced with thousands of tradable options, the most salient items are more likely to enter the consideration set. This search bias on the buy side naturally turns attention into buy orders.

Next, attention further becomes synchronized at the group level. Retail attention proxied by Google search volume is positively correlated with herd behavior across 21 international stock markets. Robinhood users are also found to be more prone to attention-induced trading. In other words, attention doesn’t make each person “think more independently”; it makes more people look at the same set of things in the same time window and take more similar actions.

Go one step further, and attention can create ultra-short-horizon price continuation. Da, Engelberg, and Gao use Google search volume as a direct attention indicator and find that stocks with high search volume show stronger price momentum. For Chinese markets, an NBER study finds that in A-shares and similar emerging markets, daily momentum is tied to new investors' attention and trading activity, often lasting one to two days before quickly reversing. This structure resembles the binary pattern described above: not smooth continuous pricing, but attention ignition → price continuation → rapid crowding → backlash.

Many times investors aren’t searching for truth—they’re searching for emotionally tolerable attention targets

Here the key isn’t simply “whether people are biased,” but that attention itself brings emotional utility. A 2026 paper in the Review of Economic Studies proposes “attention utility”: investors allocate excessive attention to good news they already know and avoid bad news they already know. Account login data show investors are more willing to look at winning stocks than losing stocks, and this selective attention also affects subsequent trading. That is, attention isn’t only for obtaining information—it can create pleasure or pain in itself. This runs deep because it redefines the market from an “information processing system” into an “emotion-regulation system.”

When the market is rising, attention gets pulled toward the winners; investors willingly return to positive feedback again and again, so they're more likely to add narratives, add positions, and add certainty. When the market is falling, traditional research shows the so-called ostrich effect: investors avoid looking at bad news. But in the AI era this mechanism changes, because people can outsource the psychological cost of confronting bad news to machines. They no longer have to chew through the raw data themselves; they just ask "why is it down," and the LLM quickly generates a bear case that gives their fear structure. The former is avoidance of attention; the latter turns avoidance into outsourced understanding.

Attention isn’t noise—it’s an upstream variable of trading flows

One important point in limited attention theory is that it explains not only slow reactions, but also reactions that are too fast. Models by Hirshleifer, Lim, and Teoh explicitly state that the same psychological constraint—limited attention—can simultaneously explain underreaction and overreaction to different components of accounting information. That is, the market isn’t choosing between “efficient” and “inefficient”; instead, under different attention configurations, it keeps switching between neglect and overreaction.

This leads to a powerful market conclusion: attention isn’t just explaining prices—it often actually precedes price behavior. When attention rises, short-term momentum, anomaly returns, individual stock trading volume, retail participation, and social discussion often rise in sync. When attention further stacks with social interaction—especially in high-skew, high-volatility “lottery stock”-type assets—it forms extrapolation expectations and leads to overpricing. In other words, many times the market isn’t discounting cash flows; it’s discounting salience first.

Attention turns “returns” into “transmissibility,” then turns “transmissibility” back into returns

Shiller’s core idea in narrative economics isn’t just “stories matter.” It’s that narratives are the mechanism that spreads economic beliefs. Research by Goetzmann and others further shows that media narratives about historical market crashes affect investors’ beliefs and choices today. That is, the “stories” in markets aren’t just decoration for the comment section; they’re propagation devices that can change expectations, risk perception, and action tendencies.

Add social transmission biases on top, and it gets even stronger. Models by Han, Hirshleifer, and Walden suggest that investors discuss strategies and convert others to their own strategies with a probability that increases with realized returns and is convex. The social process itself affects the popularity and pricing of certain high-volatility, high-skew, active-type strategies. In plain language: the more violently it goes up, the more likely it gets carried around and told; the more it gets told, the more it attracts continued attention; the more it attracts attention, the more it keeps going up. This isn’t simple herding anymore—it’s a positive feedback loop synthesized from attention, returns, and social transmission.
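The convex transmission idea can be sketched in a few lines. The exponential functional form and the parameters below are my assumptions for illustration, not the Han-Hirshleifer-Walden specification itself; the only property borrowed from their model is that retelling probability is increasing and convex in realized returns.

```python
import math

def transmission_prob(realized_return, a=0.1, b=4.0):
    """Probability that a strategy gets retold, increasing and convex
    in the sender's realized return. Functional form and parameters
    are illustrative assumptions, capped at 1."""
    return min(1.0, a * math.exp(b * realized_return))

# Big winners get retold disproportionately more than modest winners:
p_small = transmission_prob(0.05)  # ~0.12
p_big = transmission_prob(0.50)    # ~0.74
```

Convexity is what makes this a feedback amplifier rather than mere word of mouth: doubling the return more than doubles the retelling, so the most violent winners dominate the conversation out of proportion to their frequency.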

So in reflexive language, attention’s real power isn’t only “making more people see it,” but turning the market from

price reacts to fundamentals

into

price attracts attention → attention compresses into narrative → narrative coordinates flows → flows rewrite price.

When that chain becomes strong enough, the three things begin to entwine: price moves first, narrative follows, and fundamentals gradually get reshaped—by the capital market, in reverse.

Why the AI era pushes all of this to more extreme form

Because LLMs are an attention compressor + rationale generator

The problem with LLMs has never been only hallucination; at a deeper level, it's sycophancy. Research finds that multiple RLHF-trained models tend to speak in line with users' positions, and human raters, as well as preference models themselves, tend to favor answers that better match the user's viewpoint and are written more persuasively. That is, when a user asks "why is it up," the model doesn't first enumerate a set of competing hypotheses and then choose among them; it readily organizes an answer that conforms to the "up" direction baked into the question.

More dangerously, LLM-based search makes this mechanism faster, smoother, and more effortless.

A 2025 Microsoft study found that LLM search helps users complete tasks faster, issue fewer but more complex queries, and feel more satisfied; but when the model makes mistakes, users become more prone to over-rely. Mapped onto markets, the meaning is direct: AI isn't only providing information, it's reducing the friction cost of forming a one-sided narrative. In the past it might take ten research reports, three news articles, and five forum posts to barely assemble a bull case or bear case. Now one prompt generates it.

So “Attention is all we need” in the AI era doesn’t mean attention alone is enough to create value; it means in short-to-medium cycles, attention is enough to determine what gets seen first, what gets explained into a coherent story first, what gets traded first, and what turns into consensus first.

In essence, LLMs compress scattered attention into coherent narratives, then pour those narratives back into users' understanding, raising their subjective certainty. What they reduce isn't uncertainty about the facts; it's the uncertainty people feel.

Attention naturally creates “superstar assets”

In the digital economy, superstar firms are highly correlated with network effects, scale effects, and share redistribution. Autor and others’ “superstar firms” research explicitly places network effects into the explanatory framework. If you analogize this to capital markets, the conclusion isn’t hard to draw: when attention becomes an upstream scarce resource, underlying assets also become superstar-like. A small number of the most salient assets—those easiest to explain, easiest to trade, and most suitable to be repeatedly interpreted and re-interpreted by AI—will absorb more and more discussion, liquidity, and portfolio allocations. Even when long-tail assets are not bad, they may remain on the margin for a long time: low attention, no pricing qualification, and no share of discussion rights. This is an analogy, but it is compatible with evidence from limited attention, network effects, and superstar concentration.
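The rich-get-richer dynamic behind superstar assets can be sketched as a toy preferential-attachment process. Asset count, step count, and seed below are all illustrative assumptions:

```python
import random

def attention_race(n_assets=50, steps=5000, preferential=True, seed=3):
    """Toy rich-get-richer model of attention allocation. Each new unit
    of attention lands on an asset with probability proportional to the
    attention it already holds (or uniformly, for comparison). Returns
    the share of total attention captured by the top 5 assets."""
    rng = random.Random(seed)
    attention = [1.0] * n_assets
    for _ in range(steps):
        if preferential:
            i = rng.choices(range(n_assets), weights=attention)[0]
        else:
            i = rng.randrange(n_assets)
        attention[i] += 1.0
    total = sum(attention)
    return sum(sorted(attention, reverse=True)[:5]) / total

concentrated = attention_race(preferential=True)
dispersed = attention_race(preferential=False)
```

Under uniform allocation the top 5 of 50 assets end up with roughly their fair share of attention; under preferential attachment they capture a multiple of it. No asset needs to be better, only earlier and more salient.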

This is the deepest economic version of market polarization: not simply “good companies vs bad companies,” but attention-rich assets vs attention-poor assets.

The former more easily obtain excess liquidity, narrative dividends, research coverage, and structural capital backing. The latter are more likely to become “the object is still there, but the price seems not to exist.”

The real paradox isn’t “attention is important”—it’s that it both repairs and breaks the market

If you mash all of the above layers together, the market’s binary paradox can be condensed into four sentences:

  1. Attention is a necessary condition for price discovery, but not a sufficient condition for value creation

Without attention, truth may fail to enter prices for a long time. But attention alone, with no underlying truth to support it, mostly evolves into a mismatch between short-term gains and long-term givebacks. Da et al. summarize this as internet search volume predicting short-term price gains followed by long-term reversals.

  2. Attention explains both underreaction and overreaction

When attention is insufficient, information diffuses slowly and reactions are sluggish. When attention overheats, buy pressure, social interaction, narratives, and extrapolation amplify in sync, pushing prices too far. Limited attention literature and anomaly literature are essentially describing the same thing: attention can both correct sluggishness and create over-shoots.

  3. AI democratizes "explanation ability," but centralizes "attention allocation"

Anyone can now write a bull case or bear case faster, but the vast majority of prompts still revolve around underlyings that have already surged, already crashed, or already moved to the center of discussion. The result isn't surfacing more hidden, neglected truths; it's continually increasing the narrative density around already-salient assets. This judgment is an inference from the sycophancy and over-reliance mechanisms discussed above.

  4. True alpha isn't about chasing attention; it's about identifying the relationship between attention and truth

Finally, I think "Attention is all we need," as a market slogan, is right for short cycles but only half right as an investment ontology. The right half: in an era of extreme information surplus, extreme narrative congestion, and extreme AI convenience, attention really is the critical upstream threshold variable. It determines what gets seen, discussed, traded, and turned into consensus.

The wrong part is: attention can at most determine who gets priced first, how they get priced, and whether pricing can temporarily decouple from the underlying truth. It can’t replace the underlying truth itself over the long run. What truly determines long-horizon returns is still whether the underlying can convert attention into higher cash flows, moats, capital efficiency, and a positive reflexive feedback loop.

In the AI era, attention isn’t “all we need” for truth; but it’s increasingly “all we need” for short-horizon pricing.

And the reason the market is becoming more binary isn’t just funding style, not just retail investors, not just algorithms—it’s that limited attention + social transmission + AI narrative compression + mechanical trading + human emotion regulation together eat up the “middle states.”
