The Missing "Four RPM": An Intergenerational Conspiracy About Power, Authority, and Agents
(Source: Caixin Media App)
In early 2026, the anxiety of the Chinese internet crystallized in a "lobster."
Search for Openclaw on any Chinese search engine and you will see the near-miraculous behavior of this new "lobster" agent framework: it breaks out of the role of an AI consultant that can only spit characters into a chat box and becomes a private intelligent assistant that directly takes over the browser, moves the mouse on a phone, clicks captchas, and scrapes data across platforms. Some call this a revolution in artificial intelligence; others, in those silent clicks, hear the countdown to being replaced.
However, behind the visual spectacle is a cold reality.
Most users, after picking a large model and wiring up servers and APIs, marvel at "AI growing limbs." What they don't realize is that every second this "lobster" runs, tokens in the background are burning like cash. To compensate for the clumsiness of its current logic, the "lobster" pays the price of hundreds or even thousands of futile trial-and-error attempts to complete an operation a human finishes in three seconds.
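The gap between "what a human does in three seconds" and what a GUI-mimicking agent pays for it can be made concrete with a back-of-envelope calculation. All numbers below are illustrative assumptions, not measured Openclaw figures:

```python
# Back-of-envelope comparison of the two interaction styles.
# Every constant here is an illustrative assumption, NOT a measured figure.

VISION_TOKENS_PER_SCREENSHOT = 1_500   # encoding one high-res screenshot
REASONING_TOKENS_PER_STEP    = 400     # the model "thinks" about where to click
RETRIES_PER_ACTION           = 20      # trial-and-error until the click lands

API_TOKENS_PER_ACTION        = 50      # a hypothetical direct machine-to-machine call

# Each retry re-pays the full perception + reasoning cost.
gui_cost = RETRIES_PER_ACTION * (VISION_TOKENS_PER_SCREENSHOT + REASONING_TOKENS_PER_STEP)
api_cost = API_TOKENS_PER_ACTION

print(f"GUI-mimicry cost per action: {gui_cost} tokens")
print(f"Native-interface cost:       {api_cost} tokens")
print(f"Overhead factor:             {gui_cost // api_cost}x")
```

Even with these modest assumptions, the overhead is three orders of magnitude; changing the constants changes the number, not the shape of the conclusion.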
So, an extremely twisted value-chain gap appears:
At this stage, the productivity gains are not yet clear, but the windfall from friction costs has already landed in some pockets. Middlemen reselling API access, script sellers offering one-click deployment environments, and course operators selling "the fear of being replaced"—all are joining a collective feast around this "cash-burning machine" that wears technology's name.
This naturally leads one to think:
Is this mode—in which, during periods of technological explosion, people exploit the defects of "semi-finished products," cognitive asymmetry, and the pain of transition to plunder the entire industry chain—a "compute hegemony" unique to the AI era? Or is it a commercial specter that has haunted human technological history all along, never really changing, persisting with unwavering consistency?
In the face of rapidly arriving AI agents, how should we position ourselves so that ordinary people can find a place where their value won't be shaved away?
Let’s cut through the 2026 fog, step out of the world of code, and take a look at the industrial revolution of humanity more than 200 years ago.
The missing "four RPM"
In 1786, at Quarry Bank Mill in the southern suburbs of Manchester, Samuel Greg was staring at a huge wooden undershot waterwheel spanning the River Bollin.
That was the power heart of the industrial world of the day. Built from high-quality oak and pig iron, more than 15 feet in diameter, it gave off a dull, thunderous sound as it turned that could carry to farms a mile away. In Samuel's plan, the waterwheel would turn at 12 revolutions per minute—a figure that determined the steady output of the 150 spinning machines in his factory.
But that year, the weather became Samuel’s enemy.
The water level dropped by 30 centimeters. That giant wooden heart began to struggle, its speed slowly falling from 12 revolutions per minute to 8.
Those missing 4 revolutions were, without question, an invisible earthquake for the textile industry and for Samuel.
Spinning machines had almost obsessive requirements for power stability. A drop in speed caused uneven tension, making the fine yarn snap repeatedly during high-speed spinning. Top-grade fine cloth became coarse cloth that could only be sold off cheaply. At the same time, contract pressure squeezed Samuel from every side: his orders with London merchants settled weekly, and a one-third drop in capacity meant not just lost profit but hefty breach-of-contract penalties.
But just as he was sinking into trouble, a letter from Birmingham appeared on Samuel’s desk. The letter writer was Matthew Boulton, business partner of James Watt.
The letter described a monster that generated perpetual power without needing river water—only coal: the Watt steam engine.
But for Samuel at the time, this blueprint did not mean “progress.” It represented a painful choice. To install the machine, he needed to dismantle half the factory building to build a massive smokestack, and he also needed to retrain the workers—who had only seen water, never fire.
While Samuel was still weighing whether to accept the blueprint, a wealthy merchant in the Piccadilly area of Manchester—Peter Drinkwater—had already made a decision that shocked the whole city.
He decided to completely abandon the water source and build a cotton spinning mill in the center of Manchester, powered entirely by Watt steam engines.
Drinkwater was the "tech geek" of his era. He not only bought the most expensive steam engines but demanded extreme frequency stability from them, personally writing to Matthew Boulton to request the latest parallel-motion linkage. Yet around 1786, Drinkwater's factory became a giant money-burning black hole. Because the steam engines belched heavy black smoke and shook violently, he had to pay expensive compensation to nearby residents. Worse still, coal prices in Manchester fluctuated wildly at the time, and he found he was not merely spinning yarn—he was gambling on coal futures.
In contrast to Drinkwater, the Robinson family of Papplewick, Nottinghamshire, were die-hards of the waterpower age. When the black smoke of steam engines rose, the Robinsons felt no threat; they saw it as the struggle of losers. They were convinced that if waterworks engineering were pushed to its extreme, steam engines would have no room to survive. So in the mid-to-late 1780s, the Robinson family built at Papplewick the most complex tiered reservoir system in human history. They mobilized thousands of laborers to construct huge stone dams and underground aqueduct channels, and even set up a meticulous "meteorological observation station" to forecast rainfall precisely.
But the good times didn't last. The Robinsons fell into a deadly loop of diminishing marginal returns: the money they invested to recover the last two revolutions exceeded the price of two steam engines. They staked everything on "ever more perfect water engineering," overlooking the fact that the baseline of the industry's most fundamental metric had already shifted.
Drinkwater represented an aggressive approach with “high coupling.” One foot was in the future, but the other foot was dragged down by extremely immature industrial supporting systems—expensive maintenance workers, unstable coal supply, fragile parts, and the like. He was a pioneer of his time, but he also paid the highest “tech novelty fee.”
Meanwhile, the Robinson family was a typical “stock-optimization fanatic.” They tried to compensate for the gap in the power source itself through extreme diligence and precise civil engineering. This was “defensive investment”: they built a magnificent palace on the ruins of the old era, but they failed to notice that the road had already been built elsewhere.
Samuel couldn't see that far into the future. His problem was immediate, and in the end, amid hesitation, he chose an extremely awkward "middle path": he hired a group of mysterious "millwrights" and set up, beside the waterwheel, a crude coal-fired auxiliary water-pumping system.
His logic was simple: since the river water wasn’t enough, I’ll use coal to pump the downstream water back upstream, artificially creating a small waterfall and forcing that old waterwheel to return to 12 revolutions. While this seemingly comical “semi-automation” plan looked foolish, commercially it was oddly clear-headed. It perfectly avoided collapse caused by dramatic technological change. At the same time, in that narrow turning point, it gave Samuel a chance to catch his breath.
But what’s truly interesting is that in this gap, it wasn’t just Samuel. There was also a group of “arbitrageurs” produced by technical change.
Those small merchants selling “water pump parts” in Lancashire; those speculators claiming they could provide “coal procurement hedging for price protection”; and those technical intermediaries who specifically helped mill owners like Samuel “optimize transmission efficiency.” They didn’t care whether steam engines represented the future. They only cared about how much Samuel—someone who wanted to cross over but didn’t dare fully—was willing to pay “tolerance cost” during the pain of transformation.
This is the truest undertone of history: technological iteration is never a long-distance race run in sync by everyone. It is a slow forward crawl, full of compromise and friction, made up of countless Samuels, Drinkwaters, Robinson families, and countless workers new and old.
Give AI a pair of false hands
Pull our focus out of that damp, smoke-and-oil Lancashire of 1786 and turn to 2026, when AI agents are flourishing, and we can see an eerie similarity between the two periods.
More than 200 years ago, Samuel Greg faced a rotational-speed problem caused by a 30-centimeter drop in the River Bollin water level; and today, what we’re going through is a more hidden, but equally deadly, “dry spell of interaction.”
If we want to demystify Openclaw, we need to see clearly how it works and the niche it occupies. Today's mainstream Windows, macOS, and every web browser we can find are, in essence, designed for human eyes, intuition, and fingers. The core logic of this architecture—the GUI, or graphical user interface—is: what the eye can see, the hand can potentially operate.
But for AI at this stage, this is precisely a barren vacuum zone.
Large language models have near-excessive logical processing power, but faced with these interfaces designed for humans, they remain paralyzed geniuses—brains without fingers. Today's operating systems reserve no native "machine-to-machine" ports for AI.
So, if AI wants to take over your computer, it has to confront those visual challenges that are hard to reconcile.
What Openclaw does is a comically laborious job on top of the existing visual UI. It cannot see a web page's underlying logic, so it imitates humans by taking continuous "screenshots" to perceive the world. It cannot directly invoke click functions, so it locates buttons through pixel-level annotation. It lacks stable real-time feedback, so it retries repeatedly to confirm that an action landed.
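The screenshot-locate-click-verify loop described above can be sketched in a few lines. This is a minimal illustration of the pattern, not Openclaw's actual code; every function name here (`capture_screen`, `locate_element`, `send_click`) is a hypothetical stand-in, with stubs replacing the real screen and mouse:

```python
# Minimal sketch of a GUI agent's perceive-act-verify loop.
# All names are hypothetical stand-ins; the stubs simulate the environment
# so the sketch is self-contained and runnable.

from dataclasses import dataclass

@dataclass
class Observation:
    screenshot: bytes   # raw pixels the model must interpret
    tokens_spent: int   # vision encoding dominates the cost

def capture_screen() -> Observation:
    """Stub: a real agent would grab the framebuffer here."""
    return Observation(screenshot=b"\x00" * 1024, tokens_spent=1500)

def locate_element(obs: Observation, target: str) -> tuple[int, int]:
    """Stub: a multimodal model guesses pixel coordinates of `target`."""
    return (420, 310)   # a guess, which is exactly why retries exist

def send_click(xy: tuple[int, int]) -> bool:
    """Stub: dispatch a synthetic mouse click; report whether it landed."""
    return xy == (420, 310)

def click_with_retries(target: str, max_tries: int = 3) -> tuple[int, int]:
    """Screenshot -> locate -> click -> verify, retrying until success."""
    total_tokens = 0
    for attempt in range(1, max_tries + 1):
        obs = capture_screen()
        total_tokens += obs.tokens_spent   # every retry re-pays the vision cost
        if send_click(locate_element(obs, target)):
            return attempt, total_tokens
    raise RuntimeError(f"gave up on {target!r} after {max_tries} tries")

attempts, cost = click_with_retries("Submit button")
```

Note where the cost lives: each failed attempt re-pays the full screenshot-encoding bill, which is why this architecture burns tokens so much faster than a native machine-to-machine call would.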
Once we see the deficiencies of AI at the so-called visual layer, it becomes easy to understand Openclaw’s real face and the niche it occupies. It is not that perfect “native AI steam engine” representing the future. It is merely a “digital auxiliary water pump” forcibly architected to solve the dry-spell problem of “AI can’t directly operate interfaces.”
This kind of “anthropomorphic” attempt is, in essence, an extremely forceful yet very clumsy “patch solution.”
In the fracture of AI interaction, it uses expensive compute resources to simulate the most basic—even near-instinctive—physical actions of humans. Admittedly, Openclaw proves that AI can “be able” to operate a computer like a person. But it hasn’t proven that this is an “economical” product with commercial sustainability and even potential to evolve.
The circled land-grab movement in the digital world
Tencent Computer Manager officially launched its AI assistant tool QClaw, whose website is already live. ByteDance's "lobster," ArkClaw, launched as well, aiming to build a dedicated intelligent companion online 24/7. Xiaomi's miclaw opened closed-beta applications, building tiered permission management to keep user data and control rights safe… In the half month since Openclaw went viral, major vendors have followed one after another, announcing or even shipping agent-framework products directly modeled on it. For a semi-finished product like this, what is all this competition really about?
In 1786, whoever held the patent for the steam engine and the supply of coal controlled the factory’s life or death. And in 2026, this kind of agent framework represents a restructuring of power at the interaction layer—a “land-grab movement” for digital real estate. Over the past two decades, the core of power on the internet has been “search engines” and “app stores.” But when agents start taking over the browser, users no longer need to click ads or browse pages—they only need to tell the agent a target, and it can help them accomplish it.
Looking at it from another angle: ByteDance has in fact been working closely with ZTE since the end of last year on a physical AI agent framework, launching an "AI Native" phone built around the Nubia M153. Through deep operating-system-level cooperation with the manufacturer, the Doubao assistant obtains high-risk permissions at the Android layer via "system-level integration," enabling it—under human authorization—to perform operations indistinguishable from a human's, such as comparing prices and ordering food. ByteDance hoped this systemic leap would carry it straight into the future, but it underestimated the "squeeze-out" alliance of its competitors: the experimental phone was blocked by major apps within days, and its permissions were steadily pared back.
ByteDance's in-house apps are mostly entertainment and self-media; Alibaba, by comparison, has had a much smoother ride on this path. Alibaba's Qianwen AI may have changed nothing at the hardware level, but it too uses an agent framework—driving apps within the Alibaba ecosystem to perform fully "hands-off" automated operations, even producing highly human-like voice when placing orders by phone. This is surprising and worth acknowledging. Still, whether such operations deepen information bubbles, and how the law should draw the boundaries of privacy, deserve continued discussion.
In short, it is obvious why the major vendors are fighting over the "ultimate right to distribute traffic." It is not that the lobster is so useful; it is that the lobster is becoming the new gateway to traffic. If users one day interact with the world only through agents, then whoever stands outside the agent ecosystem will silently "disappear" from the digital world. Once users grow used to hearing only the agent's report, merchants that cannot enter the agent's recommendation algorithm have, in effect, already died a social death online. This is no longer a competition over efficiency or usability. It is a competition over "the power to define reality"—over survival.
The collusion behind the "semi-finished product" inefficiency trap
You may ask: since ByteDance and Alibaba both see the potential of "system-level integration," why not rebuild the system outright so AI runs cheaper and more efficiently? The answer lies in those missing "four RPM": in a vacuum period for technical standards, maintaining "stability" and "inefficiency" is the shortest path to extracting extraordinary profits.
The developer of 2026 is paying an expensive "digital carbon tax." When you connect an agent framework, you are not simply buying productivity—you are stepping into an energy trap with terrible value for money. For energy providers, the teething pains of an early technology are precisely the honey pot of profit. Two hundred and forty years on, this logic has been perfectly replicated in the AI wave as "consumption upgrades" at the infrastructure level.
Because current operating systems lack native AI interfaces, developers are forced to use the most expensive compute resources to handle the most trivial interaction tasks. This resource mismatch of “cannon shells to swat flies” is packaged in vendors’ research-and-development reports as “productivity breakthroughs,” but in the account books, it is real, concrete spending.
From Openclaw's launch through implementation and deployment, you need servers, token budgets, local memory and storage… The compute behemoths behind the scenes—NVIDIA, Amazon AWS, and domestic players like Alibaba Cloud and Tencent Cloud—may be happier still. The high consumption driven by Openclaw's "unintelligent" approach looks, in essence, like a collusion of infrastructure suppliers: Mac mini prices spike, token subscription packages multiply, and so on. They have walked away from the guiding theme of the traditional software era—"optimize algorithms, save resources"—and chosen the opposite direction: letting the "red lobster" consume tokens at hundreds of times the rate of traditional command operations, taking high-definition screenshots, running visual encoding, and performing multimodal inference, all to click one button precisely.
What vendors are building in this "semi-finished stage" is a patch solution that is complex, high-barrier, and deeply dependent on their own compute ecosystems. This is a consumption-driven technology revolution. A "lobster" that is clumsy to operate, needs constant debugging, and greedily devours tokens is—from the vendors' point of view—a far better product than a smart, efficient, money-saving AI.
A new order on the fault line
As individuals, are we really that powerless and helpless?
I don't think so. Peel away the "sugar coating" of "human nature," and what remains is the confrontation between systems and individuals. From a macro perspective, perhaps we need to understand the principle that "being harvested is evolution." We need not shy away from "harvesting": every token Openclaw burns—every mis-click and logical slip produced by these "inefficient runs"—is extremely precious training data, feeding the major companies' future "native AI operating systems."
As for the Samuel of that era, the reason he didn't go bankrupt came down to an endogenous buffering logic inside his "mill architecture": decoupling capability and a deep decision-making feedback loop.
The key to decoupling capability is cutting off reliance on any single path. When decoupling capability is low, one broken link collapses the whole system.
In our era, we can of course believe that one day resources will be massively integrated and all information will interconnect. But until then, while each party carries out its land-grab, can we still run our core business without depending on one specific model or one specific platform's closed loop? That is the "decoupling capability" we should seek. The stronger it is, the stronger your ability to avoid being "shaved" at moments of technical rupture. The premium you pay buys exactly this kind of freedom.
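In code, "decoupling capability" looks like a narrow interface between your core logic and any vendor's model. The sketch below is a minimal illustration of this design principle; the class and method names (`TextModel`, `complete`, `VendorA`) are hypothetical, not any real SDK:

```python
# Sketch of decoupling: core business logic talks only to a narrow interface,
# so any single model vendor can be swapped out without a rewrite.
# All names here are illustrative, not a real vendor SDK.

from typing import Protocol

class TextModel(Protocol):
    """The only contract core logic knows about."""
    def complete(self, prompt: str) -> str: ...

class VendorA:
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"

class VendorB:
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"

def summarize_order(model: TextModel, order: str) -> str:
    # Core logic never imports a vendor SDK directly, so a price hike
    # or API shutdown upstream is a one-line swap, not a rewrite.
    return model.complete(f"Summarize: {order}")

print(summarize_order(VendorA(), "3 units, ship Friday"))
print(summarize_order(VendorB(), "3 units, ship Friday"))
```

The point is not the three tiny classes but the dependency direction: your logic depends on the `Protocol`, and each vendor adapts to it, never the reverse.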
Second, the depth of the decision-making feedback loop. To Samuel, Watt's steam engine was a black box: if it broke, it had to be shipped to Birmingham or Manchester for professional engineers to repair. The auxiliary water pump, crude as it was, had an extremely short loop—local ironworkers could fix it, roadside coal vendors could supply it, and Samuel himself could decide the day's water volume. None of this required seeking permission from the so-called upstream of the industrial chain; he could complete the "last link" of power generation himself.
Many of today's so-called AI startups have very little actual depth: they pass user needs to a large model and spit back the answer, with the entire decision loop external to them. What matters for feedback-loop depth is what you can inject during an agent's execution—personal logic the model cannot replace, industry experience, feedback patterns. Amid Openclaw's chaotic jumping around, only you can define what a "correct click" is, and only you decide when to "cut losses." The major companies can supply "general power," but only you can decide what "specific order" to create.
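Those two injections—your definition of "correct" and your stop-loss rule—can be expressed as hooks wrapped around the agent's execution. This is a sketch under assumed names (`run_agent_step`, `run_with_hooks` and the budget parameter are all hypothetical), with a stub standing in for the model call:

```python
# Sketch of a "deep feedback loop": the agent supplies general power, while
# user-supplied hooks define what counts as correct and when to cut losses.
# All names are hypothetical; run_agent_step is a stub for a real model call.

from typing import Callable, Optional

def run_agent_step(task: str) -> str:
    """Stub for one agent action; a real agent would call a model here."""
    return f"draft result for {task}"

def run_with_hooks(
    task: str,
    is_correct: Callable[[str], bool],   # YOUR domain definition of "correct"
    max_budget: int,                     # YOUR stop-loss: attempts you'll pay for
) -> tuple[Optional[str], int]:
    spent = 0
    while spent < max_budget:
        result = run_agent_step(task)
        spent += 1
        if is_correct(result):
            return result, spent         # success, judged by your logic
    return None, spent                   # cut losses: budget exhausted

# Domain knowledge injected from outside the model:
ok = lambda r: "draft" in r
result, spent = run_with_hooks("compare prices", ok, max_budget=5)
```

The vendor owns `run_agent_step`; you own `is_correct` and `max_budget`. That division is exactly the feedback-loop depth the paragraph above argues for.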
In the Manchester of 1786, Samuel Greg stood before his clattering, cobbled-together water pump.
To top engineers like Boulton, the thing was inefficient, belched black smoke, and was frankly an embarrassment to industrial civilization. But in Greg's ledger, it was his only lever against the drying River Bollin, against London's contract pressure, and against Boulton's steam-engine patent monopoly.
Two hundred and forty years later, facing the red lobster on the screen, bouncing between clicks and screenshots while madly consuming tokens, we need neither to mythologize it nor to wait for some "perfect future."
To be harvested is a kind of gravitational pull toward progress. But under this grand consumption-driven dynamic, what truly determines whether you live or die is whether you possess, outside the “power system” defined by big companies, a decoupled endogenous logic architecture of your own.
Technology can collapse and power can change hands; only those "nonlinear variables"—the ones who piece together order on the fault line and can still wake and leap clearly under gravity—are the true subject of history.