Artemis: The New Machine Economy Era of 2030 — Who Will Be the Ultimate Winner

Author: Lucas Shin, Source: Artemis, Compiled by Shaw, Golden Finance (金色财经)

Overview

  • By 2030, smart agents (AI Agents) will become the primary way people use the internet.

  • A brand-new agentic internet needs a new kind of payment rails, a currency system, and foundational components to match.

  • Value will concentrate across three major layers: the interface layer (the entities that control user interactions), the payments layer (the entities that sit in the flow of funds), and the compute & custody layer (the entities that operate the infrastructure).

  • The long-tail end of smart agent business activities will run on open protocols.

Let’s paint a scene.

The year is 2030. You’re 24, living in Burlington, Vermont, and you love investing—mostly in U.S. stocks, and you also participate in some crypto and prediction market trading on Kalshi. Two months ago, you started a fintech consulting firm on the side.

Some days, like today, the day starts out of nowhere.

Bzz—

The phone’s ringtone jolts you awake, like a bucket of cold water thrown in your face. It’s a message from your personal smart agent, Nexus:

Good morning, Joe. I completed the following tasks overnight:

Portfolio update: I reduced your $WMT position by 15% overnight. Satellite data shows foot traffic at stores is down, and earnings-call sentiment has shifted bearish—already cross-validated.

Schedule update: I’ve booked 3 meetings this afternoon; the brief has been pinned into the meeting notes.

Expense optimization: Found a new cloud server provider—similar performance, annual fee down from $840 to $290. It’s ready to migrate anytime.

Total spending: $0.67

What, exactly, happened while you were asleep?

  1. Nexus dispatched a research sub-agent at a cost of $0.24, pulling information from 40 different data providers overnight. It compared Walmart’s latest earnings call content with satellite imagery from parking lots at Walmart locations across the U.S., then updated your investment logic. When the satellite data showed Walmart’s traffic was declining, your portfolio agent cross-referenced Kalshi’s earnings-sentiment market, confirmed the bearish signal, and finished de-risking before you woke up. Four years ago, this kind of trading strategy was still the exclusive domain of Citadel and a handful of quant funds. They had to pay millions of dollars for subscriptions to satellite imagery. Even a Bloomberg terminal costing $30k per year couldn’t cover all the information—you’d also have to separately subscribe to satellite imagery and alternative data, then spend hours integrating and analyzing it. And now, a 24-year-old in Vermont can get the same information advantage as quant analysts at Citadel for a cost of less than a cup of coffee.

  2. Nexus’s sales sub-agent filtered 200 leads that matched your target-customer profile—fintech companies in the U.S. Southeast that are Series B or later and haven’t yet used data providers—and completed the information enrichment at a cost of $0.002 per lead. The API interfaces it used were developed by another agent and listed on the open market. It selected the 3 leads with the highest intent signals, then immediately contacted their scheduling agents to negotiate meeting times. Before each meeting, it pulled the lead’s alma mater, shared connections, company news, and funding history, and put together a one-page briefing for you—pinned into the meeting notes. Just the information enrichment step alone: if you do it via a SaaS subscription, it would cost $200 per account per month.

  3. Nexus’s operations sub-agent ran comparative tests between your consulting website and six server providers: Vercel, Render, Railway, Fly.io, Netlify, and Cloudflare. It called each provider’s trial API interface at extremely low cost, deployed a test environment, and measured latency, availability, and throughput. Ultimately, Railway achieved the same performance at one-third the cost. Nexus used Railway’s pricing agent to agree on a monthly fee, built a website mirror on the new server, and completed the full suite of tests to ensure everything runs properly. Without agents, this would take at least a week: searching the web, getting quotes, and dealing with the anxiety-inducing manual migration process. You only need to confirm execution to Nexus.

Your agents completed all of that for just $0.67.

Now, multiply this scenario by every knowledge worker worldwide, every company, and every smart agent that’s running.

Bzz—

Nexus: Insufficient balance. $1.87 remaining.

Just like last week, you top up $5 using a credit card linked through Apple Pay, then keep brushing your teeth. Under the hood, that $5 is converted into stablecoins, yet you never see a wallet. You don't have to think about on-ramping funds, and you never need to touch the blockchain.

This is a glimpse of machine economics—a brand-new business setting in which AI agents spend money on things humans have never paid for, with transaction scale and speed far beyond the realm of human commerce. You can imagine billions of transactions occurring every day.

But today’s internet is not yet ready to support all of this.

The current internet is designed for humans. It filters non-human behavior with rate limiting, CAPTCHAs, and API keys, and it monetizes human users through advertising. However, as massive numbers of autonomous agents appear, this business model will completely fail.

Traffic surges while effective attention shrinks drastically.

Internet services subsidized for years by ad revenue will face requests scaling by orders of magnitude, requests that ad targeting can never monetize.

Agent payments naturally solve this problem, and micro-payments will become the key to access.

Pay-to-scrape, pay-to-access, pay-to-use.

Companies that build the infrastructure that ultimately gets widely adopted by agents will capture the largest pool of new economic activity our generation will ever see. The current giants are already jockeying for position, but machine economics will also spawn its own new giants. The last wave of a new internet gave us Google, Amazon, Facebook, PayPal, and Salesforce.

The agentic internet era is coming.

Market size outlook

By 2030, most online interactions will no longer happen through browsers. Our smart agents will browse, test, and negotiate on our behalf, assemble teams of sub-agents, and execute transactions. Every task they complete will generate a chain of small payments. These per-use costs look like new expenses, but in reality they substitute for tools and labor that cost far more. The more advanced the tools, the better the agents perform, and the more autonomy we'll grant them.

Demand and adoption speed

Let’s do a rough estimate.

In the case earlier, Joe's agents completed hundreds of transactions for just $0.67. Scale this to a 500-person mid-sized enterprise, where each employee has a personal agent plus hundreds of shared agents across sales, finance, legal, operations, and other departments, and it's easy to reach 100k agent-initiated transactions per day.

There are over 1 billion knowledge workers worldwide, and 88% are already using AI at work—demand-side scale is huge and continuously growing. But today, most of these uses are still limited to basic tasks like web search, document summaries, or writing emails. The transformation to fully agentic computing hasn’t arrived yet, but once it starts, the speed will be extremely fast.

Instagram reached 100 million users in 30 months, TikTok did it in 9 months, while ChatGPT took only 2 months (Reuters / UBS data). One reason for ChatGPT’s rapid adoption is that the conversation interface has long been familiar, and you don’t need to learn new software or change your habits—you just describe what you need, and the agents will find a way to get it done.

The only obstacle is trust, and trust will be built far faster than people expect. Claude Code already accounts for 4% of all public code commits on GitHub (over 135k commits per day). At the current growth rate, it will surpass 20% by the end of 2026, a fivefold increase in share in just 13 months. Developers went from skepticism to letting AI handle production-grade code at scale in little more than a year.

As models become more intelligent, interfaces become more streamlined, and more technical complexity is abstracted away, I believe the adoption speed of smart agents will accelerate further.

By 2030, even if only 60% of knowledge workers use agents at a daily spend of $3 to $5 (already a conservative estimate; after all, Joe completed three tasks before breakfast for just $0.67), agent activity on personal endpoints alone will reach roughly $0.7 trillion to $1.1 trillion in transaction volume per year.
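A quick back-of-the-envelope sketch of that estimate, using only the numbers above:

```python
# Back-of-the-envelope check of the personal-agent market estimate.
# All inputs come from the text; nothing here is measured data.
knowledge_workers = 1_000_000_000          # "over 1 billion" worldwide
adoption = 0.60                            # 60% using agents by 2030
daily_low, daily_high = 3.0, 5.0           # dollars per user per day

users = knowledge_workers * adoption       # 600 million agent users
annual_low = users * daily_low * 365       # about $0.66 trillion
annual_high = users * daily_high * 365     # about $1.1 trillion
```

Even the conservative end of the input range puts personal-agent spend on the order of a trillion dollars a year.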

Enterprise market

Robbie Petersen of Dragonfly pointed out in an article that smart agents for commercial use are a reasonable evolution path for the SaaS model. I fully agree. They’re no longer just assisting workflows—they will completely replace existing processes. Just as more than 95% of software spend today comes from enterprises and government institutions, it’s highly likely that the usage volume and spending scale of agentic systems on the enterprise side will far exceed the personal market.

We’re already witnessing this transformation. Klarna replaced Salesforce with an internal AI system, saving about $2 million. ZoomInfo built AI agents to replace its transaction approval department, saving over $1 million per year. These are only early cases where individual workflows were agentified, saving millions of costs. Every enterprise has hundreds of such processes across sales, finance, legal, operations, and R&D. Once smart agents are deployed across the entire company, the scale of related spending will be staggering.

Everyone can become a merchant

As code agents dramatically reduce development costs, the entry barrier for internet merchants is approaching zero. A wedding planner who’s good at venue selection can package and sell the best workflows. An independent developer in Lagos can build a vertical-domain API and start earning revenue from agents around the world within hours. All you need is domain knowledge—generate an API interface through prompts—and you can start collecting payments.

But what happens if agents start selling services to other agents?

Let’s assume the Joe mentioned earlier wants to enter a new domain: a mid-sized healthcare business in the U.S. Midwest with legacy payment infrastructure. If his agent starts from scratch and completes the reasoning, the token cost accumulates quickly:

  • Filter 200 companies matching a specific profile (inference + API calls): about 500k tokens

  • Enrich each lead’s information (tech stack, funding, hiring data): 200 leads × about 5,000 tokens = 1 million tokens

  • Lock in decision-makers for core customers: about 200k tokens

  • Score intent signals (hiring cadence, contract cycle): about 300k tokens

  • Research each decision-maker’s background: 20 leads × about 10k tokens = 200k tokens

  • Write personalized outreach copy: 20 leads × about 3,000 tokens = 60k tokens

Total: about 2.3 million tokens. Using a cutting-edge model like Opus 4.6, the cost ranges from $8 to $15.
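The itemized list above can be tallied directly; the per-million-token prices below are illustrative assumptions, not published rates:

```python
# Tallying the token budget itemized above. Step names and the
# per-million-token prices are illustrative assumptions.
steps = {
    "filter_200_companies":    500_000,
    "enrich_200_leads":      1_000_000,   # 200 leads x ~5,000 tokens
    "find_decision_makers":    200_000,
    "score_intent_signals":    300_000,
    "research_20_backgrounds": 200_000,   # 20 leads x ~10k tokens
    "write_20_outreach":        60_000,   # 20 leads x ~3,000 tokens
}
total_tokens = sum(steps.values())            # 2,260,000, i.e. ~2.3M
price_low, price_high = 3.5, 6.6              # assumed $ per 1M tokens
cost_low = total_tokens / 1e6 * price_low     # roughly $8
cost_high = total_tokens / 1e6 * price_high   # roughly $15
```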

Wait—didn’t Joe’s sales sub-agent do a similar process for only a few cents?

Yes. Because most steps were already solved by other agents. Lead enrichment, intent scoring, and scheduling all have packaged interfaces on the open market, priced at just a few hundredths of a dollar.

This model creates a whole new business scenario. The supply side of the market grows bidirectionally: humans build services, and agents also build services. A high-token-cost problem solved by one agent can become a cheap tool that every other agent can use afterward. In such a world, agents can distill their experience into workflows and sell them to other agents, subsidizing their own operating costs.

Every paradigm shift creates new merchants. Shopify empowered ecommerce sellers, Stripe empowered online businesses, and machine economics will empower impromptu developers and autonomous smart agents.

A reality check

So how far are we from truly commercial agent-based transactions?

My Artemis team has been tracking the progress of two major mainstream agent payment protocols: Coinbase’s open x402 protocol and the machine payment protocol (MPP) co-developed by Stripe and Tempo. In simple terms, the goals of these two types of protocols are completely aligned: enable users or agents to pay for any network service (e.g., data, web scraping, model inference, or other API services) within a single network request—eliminating tedious steps like signing up for accounts, handling API keys, and billing settlement.

It’s still early days.

The x402 protocol's transaction volume at the end of 2025 was inflated by meme-coin hype and leaderboard "volume-farming." The chart above shows the "real" transaction activity after filtering out fake transactions with proprietary algorithms. Once you remove the noise from fake transactions and meme-coin speculation, it becomes clear that the agent economy hasn't truly arrived yet. Most current activity is developers testing paid APIs and AI tools, not genuine agent economies in operation.

Before this model truly takes off, two core problems must be solved:

  1. The supply side is not yet formed: there are far too few practical API interfaces that can generate real paid demand from agents.

  2. A mature discovery and aggregation layer is missing: even if high-value interfaces exist, agents currently have no reliable way to find them.

Since the ecosystem is still evolving, it’s too early to use transaction volume as the primary metric. A more appropriate observation indicator is the growth on the supply side—specifically, the number of merchants providing services to agents. We’ll refer to these merchants collectively as service providers.

The figure above shows the cumulative change in the number of service providers (sellers) that meet the standards over time. A qualifying seller must complete more than two “real” transactions and have at least two independent buyers. In October last year, the number was still under 100; now it has exceeded 4,000. I expect this growth rate will accelerate further, driven by three major trends:

  1. Artificial intelligence is lowering the barrier to creating digital products (as discussed above), meaning more people and AI agents will become merchants.

  2. New services will be designed with an agent-first philosophy. Agents are becoming the core customer, so the product forms built for them will be entirely different: using APIs instead of web pages, using instant access instead of signup flows, and using pay-as-you-go instead of subscriptions.

  3. Existing service providers will be forced to transform. As more users interact via AI interfaces instead of manually browsing web pages, ad-based monetization models will completely fail—because there is no human user attention available to monetize. Enterprises will have no choice but to charge directly for content and services.

These forces will create a positive feedback loop where supply and demand amplify each other, ultimately igniting the entire agent economy.
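The seller-qualification rule used in that count (more than two real transactions, at least two independent buyers) can be expressed as a simple filter; the data shapes here are illustrative:

```python
from collections import defaultdict

def qualifying_sellers(transactions):
    """Apply the qualification rule from the text: more than two
    'real' transactions and at least two independent buyers."""
    stats = defaultdict(lambda: {"count": 0, "buyers": set()})
    for tx in transactions:
        if not tx["real"]:          # drop wash/spam volume first
            continue
        s = stats[tx["seller"]]
        s["count"] += 1
        s["buyers"].add(tx["buyer"])
    return {seller for seller, v in stats.items()
            if v["count"] > 2 and len(v["buyers"]) >= 2}

# Hypothetical sample: one genuine seller, one meme-driven one.
txs = [
    {"seller": "leads-api", "buyer": "agent-a", "real": True},
    {"seller": "leads-api", "buyer": "agent-b", "real": True},
    {"seller": "leads-api", "buyer": "agent-b", "real": True},
    {"seller": "meme-bot",  "buyer": "agent-c", "real": False},
]
assert qualifying_sellers(txs) == {"leads-api"}
```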

Industry landscape

The architecture of the agent transaction ecosystem is rapidly taking shape. Startups are springing up everywhere, each focused on a different gap in the architecture. Meanwhile, growth-stage companies in fintech and SaaS are transitioning toward native agent transactions. In the past twelve months, almost every major payment giant and AI lab has launched or announced an agent-transaction protocol.

We’ve mapped more than 170 companies across five major layers: interaction interfaces, smart agents, account systems, payment infrastructure, and AI engines. Here we narrow it down to around 80 core organizations:

We break it down layer by layer from the top down.

Interface layer

The interface layer is closest to the user. It's responsible for directing user intent (requests) toward the required tools or services (supply). Whoever defines how smart agents discover, evaluate, and select services gains enormous leverage over all the layers below. We'll focus on the two most important categories within this layer:

User interface

This is the entry point where most people directly interact with smart agents. Apple, Google, OpenAI, Anthropic, xAI, and Perplexity are all building these interaction interfaces, and their forms are rapidly moving beyond just “chat” mode. New formats keep emerging—voice assistants, desktop assistants, embedded copilots, browser agents—closely matching real usage scenarios. The platform that becomes the default AI interface for users will be the starting point for initiating all transactions by agents, and the winner in this track will gain an additional massive advantage.

AI labs have already crawled and trained on data from the entire internet; now, the best remaining training data is human-provided feedback. Every time you accept or reject a response, make corrections, or provide preference information to Claude or ChatGPT, the interaction interface captures that data for selling or model training. Controlling the interface means controlling the feedback loop that optimizes user experience and the model itself. This is also why Anthropic released Claude Code, why Google acquired Windsurf, and why OpenAI attempted to acquire Cursor. Once your agent accumulates context about your preferences, workflows, and frequently used tools, your migration cost becomes extremely high.

Service discovery

When Joe's agent needs a lead-enrichment interface or a satellite data provider, how does it find the right service? This might be the biggest unsolved problem in the entire ecosystem architecture. Most current solutions are hard-coded tool lists or curated service marketplaces. The major platforms are already building their own systems: OpenAI and Stripe launched ACP, Google and Shopify launched UCP, and Visa launched TAP. Fundamentally, these are merchant directories that only work when both the platform and the merchants proactively integrate. This model performs well in conventional scenarios, but as the barriers to creating and selling digital services drop dramatically, a flood of niche, highly customized services will emerge: long-tail needs that curated models can't satisfy.

Companies represented by Coinbase, Merit Systems, Orthogonal, and Sapiom are building open alternatives: aggregators and underlying infrastructure that allow agents to autonomously discover and pay for services at runtime, without needing pre-integration or commercial partnerships. As the supply side (i.e., network resources) grows exponentially, the difficulty of solving this problem becomes extremely high. But whoever can crack sorting and recommendation systems—so agents match the right services at the right time—will hold tremendous industry influence.

Whether agent transactions ultimately move toward curated, closed models or open ecosystems, and how that choice determines value allocation, is one of the most central debates in this field. We'll dig into this topic later.

Smart agents and account layer

To get tasks done for us, smart agents alone are far from enough. Joe’s sales sub-agent completed the full flow: filtering 200 leads, enriching information, and scheduling three meetings. Joe didn’t have to configure any tools, manage API keys, or approve every step one by one. Most of the infrastructure needed to make this happen is invisible to end users. But without those facilities, agents are just large language models without execution capability. Here’s an overview of the core components required to make all of this real:

Tools and standards

Protocols and frameworks like these give smart agents the ability to interact with the outside world. MCP (Model Context Protocol, created by Anthropic and now managed by the Linux Foundation) lets agents connect to external data and tools: call APIs they've never touched, read databases, or invoke a service on the fly. A2A (proposed by Google) defines how agents built on different platforms discover each other and collaborate. Frameworks from LangChain, Nvidia, and Cloudflare give developers the building blocks for creating and deploying agents on top of these protocols. OpenClaw, recently acquired by OpenAI, integrates context management and tool calling into a single local-first framework, dramatically lowering the bar for building agents that can autonomously discover and pay for services.

The core issue in this area is: will these standards ultimately unify, or will they fragment? Can commercial frameworks built on top of these standards capture value before the tools become commoditized?

Identity authentication

Once agents can communicate with each other, trust must be established. Before an agent can transact or sell services, it must prove who authorized it and what it is permitted to do, and maintain an activity record that other agents can verify.

There are currently multiple technical paths, including: biometric identity verification (Worldcoin, Civic), on-chain agent reputation systems (ERC-8004), and verifiable credentials (Dock, Reclaim).

This design space is wide but also extremely risky: how much money can your agent spend before it gets your approval? Can it sign contracts on your behalf? Can it delegate permissions to sub-agents? These rules and security boundaries will most likely be finalized at the account layer.

Wallet

Obviously, for agents to make payments, they must be equipped with wallets. Many companies—Coinbase, Safe, MetaMask, Phantom, MoonPay, Privy, etc.—are building in this area, offering features such as programmatic access and creation, permission delegation, per-transaction spending limits, whitelists for receiving payments, and cross-chain operation capabilities without the user needing to manually confirm every operation. This is one of the most fiercely competitive tracks in the entire ecosystem, and it also raises a key question: where exactly is the company’s moat? Will this area ultimately move toward commoditization?
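A minimal sketch of what such a wallet spend policy might look like; the class, field names, and limits are assumptions for illustration, not any vendor's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class SpendPolicy:
    """Illustrative per-agent wallet policy combining the features
    listed above: per-transaction limits, daily limits, and a
    whitelist of approved recipients."""
    per_tx_limit: float = 1.00                      # max USD per transaction
    daily_limit: float = 5.00                       # max USD per day
    allowed_recipients: set = field(default_factory=set)
    spent_today: float = 0.0

    def authorize(self, recipient: str, amount: float) -> bool:
        """Approve a payment only if it passes every rule; no human
        confirmation is needed for payments inside the policy."""
        ok = (amount <= self.per_tx_limit
              and self.spent_today + amount <= self.daily_limit
              and recipient in self.allowed_recipients)
        if ok:
            self.spent_today += amount
        return ok

policy = SpendPolicy(allowed_recipients={"railway-pricing-agent"})
assert policy.authorize("railway-pricing-agent", 0.24)      # within policy
assert not policy.authorize("unknown-seller", 0.24)         # not whitelisted
assert not policy.authorize("railway-pricing-agent", 2.00)  # over per-tx limit
```

The design point is that the human sets the policy once, and the agent operates freely inside it.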

Payment layer

The payment layer sits deeper in the architecture, so it should be invisible to end users. But in machine economics, every unit of money passes through it. When Joe's agent pays $0.24 overnight to retrieve data from 40 service providers, Joe doesn't choose a card, a currency, or a settlement chain for each transaction.

The key challenge is that traditional payment rails are designed for humans clicking a “Buy” button—not for agent API calls happening thousands of times per minute, with per-request amounts below a penny. Credit-card networks have a fixed transaction cost of around $0.03–$0.04 per transaction, plus 2.3%–2.9% fees. This works for a $400 hotel order, but it’s completely incompatible with new multi-step agent transactions.
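The mismatch is easy to quantify; the numbers below use the fee schedule just cited:

```python
# Why card rails break down at machine scale: the same fee schedule
# applied to a human-sized order and an agent-sized API call.
FIXED_FEE = 0.03    # ~$0.03 fixed cost per card transaction
PCT_FEE = 0.025     # ~2.5% variable fee (midpoint of 2.3%-2.9%)

def card_fee(amount: float) -> float:
    return FIXED_FEE + amount * PCT_FEE

hotel = card_fee(400.00)     # about $10.03, or ~2.5% of the payment
api_call = card_fee(0.01)    # about $0.03, over 300% of the payment
```

A fee that exceeds the payment itself makes sub-cent transactions uneconomic on card rails, which is exactly the gap the machine-native protocols target.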

This gives rise to a set of new protocols and currency systems specifically built for agent transactions, while the traditional giants are also modifying existing infrastructure to fit these requirements.

Key points are as follows:

Payment rails

These protocols and standards define how smart agents initiate, route, and complete payment settlement. Currently, two technical routes have mainly formed:

  1. x402 (Coinbase/Cloudflare) and MPP (Stripe/Tempo) designed specifically for machine-native transactions: agents call interfaces, get quotes, sign payments, and receive data—all completed within a single HTTP request, with stablecoin settlement and per-transaction costs only a few cents.

  2. ACP (OpenAI/Stripe), AP2 (Google/PayPal), and Visa’s TAP take another approach: modifying existing card payment infrastructure to adapt to agent scenarios. These solutions are more suitable for high-value transactions; compared to settlement speed and cost, buyer protection and merchant acceptance coverage are more important.
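The first pattern can be sketched as a single request-quote-pay round trip. This is an in-process simulation of the idea, not the actual x402 wire format; the field names, price, and signature are illustrative:

```python
# Minimal simulation of a pay-per-request flow in the spirit of x402.
# Status codes follow HTTP; everything else is an assumed shape.

PRICE_USDC = 0.002   # hypothetical price per API call

def server(request: dict) -> dict:
    """Paid endpoint: demand payment first, serve the data once paid."""
    payment = request.get("payment")
    if payment is None:
        # First reply: HTTP 402 Payment Required, with a machine-readable quote
        return {"status": 402, "quote": {"amount": PRICE_USDC, "asset": "USDC"}}
    if payment["amount"] >= PRICE_USDC:
        return {"status": 200, "body": {"leads": ["acme-fintech"]}}
    return {"status": 402, "error": "underpaid"}

def agent_fetch(url: str) -> dict:
    """Agent client: call once, read the quote, attach payment, retry."""
    first = server({"url": url})
    if first["status"] != 402:
        return first
    quote = first["quote"]
    # A real agent wallet would sign a stablecoin transfer here.
    payment = {"amount": quote["amount"], "asset": quote["asset"], "sig": "..."}
    return server({"url": url, "payment": payment})

resp = agent_fetch("https://api.example.com/leads")
assert resp["status"] == 200   # no signup, no API key, no billing settlement
```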

Stablecoins and settlement

Smart agents need money with programmable, fast, low-cost, and global characteristics. Stablecoins fully meet these requirements, so they naturally become the choice for x402 and MPP transactions. At the same time, card payment rails still provide buyer protection and rely on established merchant usage habits—remaining important for high-value transactions. Underlying public chains (like Base, Solana, Tempo) introduce another critical question: which chains can support the processing throughput, transaction finality, and cost structure required by agent-scale transactions?

Service providers

These institutions act as intermediaries between smart agents and merchants, handling complex steps like compliance review, merchant onboarding, and permission authentication. Coinbase, Stripe, and PayPal are expanding their existing ecosystems to support agent transactions, betting that their merchant networks and compliance infrastructure form a competitive advantage. Others, like Sponge and Sapiom, start from the emerging-merchant side to solve the cold-start problem, making it easy for any API-based business to begin accepting agent payments. As payment rails, protocols, and merchants keep multiplying, these facilitators have a good chance of becoming the connective tissue that keeps the system from fragmenting.

AI engine layer

This layer needs little introduction: all agent interactions, reasoning steps, and tool calls are driven by it. But the business model changes here faster than in other parts of the architecture, and where the value ultimately flows isn’t as clear as it seems on the surface. We focus on two major categories:

Compute and custody

Whenever Joe's smart agent reasons through a task, calls tools, or spawns sub-agents, it consumes compute. But model inference is only part of the story. With the explosive growth of low-code apps, on-the-fly development, and agent-built services, a large number of new interfaces are appearing, each needing a hosting substrate. As of May 2025, the number of accessible web pages had grown 45% in just two years, and as code agents make launching new services trivially easy, that growth will only accelerate. Compute demand is thus rising from both ends: more agents handling more tasks on one side, and more services launching to meet their needs on the other.

Hyperscale cloud providers (AWS, Google Cloud, Nvidia) are the obvious key participants. AWS and Google Cloud are also continuously simplifying deployment processes for agent backends and APIs on their infrastructure. Cloudflare focuses on edge computing, providing low-latency serverless compute for agent-facing services. Meanwhile, decentralized compute platforms like Akash, Bittensor, and Nous meet the excess compute demand by aggregating global GPU resources and selling them at extremely low prices.

Foundation models

Foundation models are the "brains" of the entire system. Anthropic, OpenAI, Google, and Meta, as leading labs, keep expanding the boundaries of agent capability, while the cost of running these models drops rapidly. At the end of 2022, running a GPT-4-level model cost about $20 per million tokens; by early 2026, models with comparable performance cost about $0.05 per million tokens, a roughly 400x decline in a little over three years. Hardware upgrades, vendor competition, and optimizations like prompt caching and batching all keep pushing inference costs down. Meanwhile, as reasoning is distilled into smaller open-weight models that are extremely cheap to run, the cost of building intelligent systems also drops sharply. On some benchmarks, the performance gap between open-weight and closed models has narrowed to just 1.7%.
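The implied pace of that decline is worth spelling out; the time span below is an assumption based on "a little over three years":

```python
# Implied pace of the inference-price decline cited above.
price_2022 = 20.00     # $ per 1M tokens, GPT-4-class, late 2022
price_2026 = 0.05      # $ per 1M tokens, comparable quality, early 2026
years = 3.2            # assumed span: "a little over three years"

total_drop = price_2022 / price_2026     # 400x cheaper overall
annual_drop = total_drop ** (1 / years)  # roughly 6.5x cheaper per year
```

Sustained, that compounding rate is what turns a quant-fund-grade workflow into a sub-dollar morning routine.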

This is a major tailwind for machine economics.

Cheaper intelligence means cheaper agents, which allows a 24-year-old independent founder in Vermont to easily afford operating costs—thereby further boosting transaction activity across all layers of the ecosystem. If foundation models end up getting pulled into a price war like cloud providers do today, then value may concentrate in upstream and downstream parts of the model layer’s stack, rather than in the model itself.

Who will be the winner?

By 2030, most of your digital interactions won’t require a browser, search engine, or app store. You’ll simply state your needs, and smart agents will handle everything end-to-end: finding the right services, negotiating terms, completing payments, and delivering the final results. The internet will look completely different.

Think of it as an SEO era for agents: more and more API interfaces will appear, while fewer and fewer human-facing interfaces will remain.

In a world like this, who captures the value?

Sam Ragsdale of Merit Systems once wrote an article comparing today's agent transaction ecosystem to the early internet. He believes the curated agent-services marketplaces built by the major platforms (ACP, UCP, TAP) are taking the same path as 1990s AOL: a polished, closed experience whose core limitation is that every service provider must be manually vetted and approved. Open protocols like x402 and MPP are rougher around the edges, but they are permissionless: anyone can build an interface, without a business team or legal review, and earn revenue from agents. In the 1990s, the walled gardens offered the better product experience, but the open internet offered infinite possibilities.

In the end, the open internet wins.

The same logic is repeating itself. ACP, UCP, and TAP will connect to the top AI labs and serve mainstream scenarios well, but agents confined to them can only perform tasks anticipated by the platforms' curated service directories. Agents that can plug into the entire open-protocol ecosystem will have far broader capability boundaries.

After all, the most vibrant part of today’s internet is driven by the massive long-tail web traffic enabled by the HTTP protocol.

We must humbly admit that we can’t imagine the full shape of an open agent internet. Just like in 1995 no one could predict ride-hailing or social media, once we provide the tools required for agents, we can’t know what they will create, and which services people will end up paying for.

As we discussed earlier, foundation models are moving rapidly toward commoditization, and value may shift toward other layers in the technical architecture. Developer tools, wallets, and identity infrastructure are crucial, but as standards converge, these areas are likely to commoditize too. Therefore I believe value will concentrate in three areas: interaction interfaces, payments, and compute.

Interaction interfaces

Interaction interfaces determine spending limits, approval flows, and trust delegation mechanisms. Platforms that can provide the most personalized experience for users will carry the most transaction traffic.

Apple is the most underestimated player in this area. Its devices are deeply integrated into people’s daily lives, giving users extremely high migration costs. If Siri evolves into a mature agent-interaction entry point, Apple won’t need to build the very best model to control the starting point for billions of transactions. They only need to maintain the best-quality interaction entry point.

Google faces an even harder transition. Shifting from humans manually browsing to agentic intelligent filtering will erode its core advertising revenue. But Google has advantages other companies can’t match: it has accumulated decades of personal data in search, email, calendars, maps, and documents. You also have to consider enterprise migration costs—Google Workspace is embedded in millions of enterprises, and employees’ emails, files, and workflows run on Google infrastructure. If there’s any company that can build the most personalized agents for both consumers and enterprises, it’s Google. The question is whether it can monetize agent services as efficiently as it monetizes search traffic.

Merit Systems is the dark horse I'm rooting for. They're building both service-discovery infrastructure for the open agent economy (AgentCash, x402scan, MPP scan) and a consumer-side interface (Poncho). Their core thesis: whoever controls agents' service-discovery channel and inserts itself into the flow of funds can occupy the position Google held in the early internet. It's an ambitious bet, but if open agent transactions beat curated closed models, Merit becomes the best-positioned aggregation layer. It's still early, much as it was when Google challenged AOL, whose closed ecosystem was then valued at the equivalent of $350 billion today.

Payments

Whoever controls the flow of funds captures a share of every transaction. I'm most confident about this layer because its revenue scales directly with transaction volume.

Stripe and Tempo are best positioned in machine-native payments. Stripe already has a mature developer ecosystem and a large merchant network. Tempo offers streaming payment rails, roughly 500-millisecond transaction finality, native support for both cards and stablecoins, gas fees payable in dollars (no token-volatility risk), and server-sponsored transactions, all purpose-built for the machine economy's enormous transaction volumes. If MPP becomes the default machine-native payment rail, Stripe and Tempo will take a cut of every agent transaction.

Circle will grow in step with the agent economy. I'm convinced stablecoins will become the machine economy's settlement layer, and Circle will then earn reserve income on every dollar flowing through agents' wallets. USDC is the most widely accepted stablecoin across exchanges, wallets, chains, and payment protocols; new developers will choose it by default, deepening its ecosystem integration and making it harder for competitors to break in.

Visa will adapt. Recall how Joe topped up via Apple Pay with a credit card while, underneath, the funds were automatically converted into stablecoins; he never saw a wallet and never had to think about the blockchain. That will be the norm: consumers keep using familiar cards while stablecoins handle settlement underneath. As payment rails upgrade, Visa will lean on the brand trust it holds with consumers and merchants to keep its footing.

Compute and custody

More agents means more inference demand; more services spun up on demand means more hosting demand. Whichever model, protocol, or interface becomes mainstream, compute providers will benefit. AWS and Cloudflare are the two best-positioned companies here, for similar reasons.

First, they already carry most of the internet's traffic. AWS holds roughly 30% of cloud-infrastructure market share across 37 regions worldwide. Cloudflare provides security and performance services for over 20% of websites, meaning every request to those sites flows through its network. When a wave of new agent-facing services emerges, developers will default to deploying on platforms they already know.

Second, they are building the monetization infrastructure for the next generation of the internet. As ad-based models fade and paid-access models rise, both companies natively support the shift. Cloudflare has launched a paid crawling service, letting any site on its network charge AI crawlers via x402 (Stack Overflow is already using it). AWS is a founding member of the x402 Foundation and has published an open-source serverless x402 reference architecture. Any service running on either platform can switch on native agent monetization with little effort.

Identity authentication

I'm pessimistic about companies like Worldcoin, whose system requires proof of humanity for every interaction. That maximalist vision assumes people will care whether the counterparty online is a human or an agent, but we have long since grown accustomed to not knowing. The more likely future, in my view: most network traffic will be filtered by micropayments, not by proof-of-humanity credentials.

Pay-to-access will be more practical than “prove you’re human.”

Identity systems will matter for a subset of high-risk interactions, but in most agent transactions the (micro)payment itself is the trust credential.

Closing

When Joe wakes up, he doesn’t think about payment rails or agent identity protocols. He just looks at his phone and learns that the agent already completed the transactions, scheduled the meetings, and found cheaper servers. All the technical architecture layers discussed in this article have been perfectly abstracted away, and he doesn’t need to worry about anything.

We're still on the way to this future. The relevant protocols are live but not yet widely adopted; the supply side is growing but still thin; service discovery remains unsolved; and the identity layer is fragmented. Most of today's transactions are developer tests, not real agent commerce. But the pieces of this ecosystem are falling into place faster than the metrics suggest. Skeptics of early infrastructure stare at the declining curves; I'm thinking about what the picture looks like once everyone has an agent, or a fleet of agents, with real economic agency.

If you haven't started yet, now is the time to position yourself for the agent economy.
