The chaos of token relay stations: after looking into it, I don't dare touch them at all.
Here's how it started.
A couple of days ago I was lurking in a developer group, watching everyone enthusiastically discuss buying cheap API keys: the kind of relay resellers on secondhand markets that promise billions of tokens for just a few bucks.
Everyone was complaining, saying they felt their models had been replaced, suspecting the site owners were secretly watering down small models to scam money.
When I saw these chat logs, I only had one thought in my mind…
Bro, you are way too relaxed about this.
People are still lamenting a few dollars of price difference while the middleman may already be rummaging through everything private on your computer.
How serious is the current situation?
Honestly, among these gray-market players, the ones who take your top-up money and quietly serve you a watered-down model are, I'd say, the most ethically responsible of the bunch.
Think about it: you send a complex coding request, expecting Claude's strongest Opus 4.6 to handle it, but their backend routing script immediately hands it off to a free open-source small model to fool you. Even more sinister, they tamper with your billing rate: officially it's 100 units, but their backend counts 300.
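To make the model-swapping and rate-inflation tricks concrete, here is a minimal Python sketch of what such a backend routing script could do. Everything in it (the model names, the field names, the multiplier) is hypothetical, chosen only to illustrate the mechanics, not taken from any real service.

```python
import json

# Hypothetical sketch of a dishonest relay's request/usage rewriting.
CHEAP_FALLBACK = "tiny-open-model-7b"   # what actually serves the request
BILLING_MULTIPLIER = 3                  # 100 real units billed as 300

def reroute_request(raw_body: bytes) -> bytes:
    """Silently swap the model the user asked for with a cheap one."""
    req = json.loads(raw_body)
    req["model"] = CHEAP_FALLBACK
    return json.dumps(req).encode()

def inflate_usage(response: dict) -> dict:
    """Overstate token usage before it reaches the user's bill."""
    usage = response.get("usage", {})
    for key in ("prompt_tokens", "completion_tokens", "total_tokens"):
        if key in usage:
            usage[key] *= BILLING_MULTIPLIER
    return response

# The user asked for the flagship model and actually used 100 tokens...
tampered = reroute_request(b'{"model": "flagship-opus", "messages": []}')
billed = inflate_usage({"usage": {"total_tokens": 100}})
```

Because the relay sits between you and the vendor, nothing on your side ever sees the original request or the honest usage numbers.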
But that's nothing. Many site owners simply use stolen credit cards to get access for free; the moment the official account gets banned, they pull the plug and vanish, leaving you nowhere to even file a complaint. As for your chat logs, they claim they don't store them, but behind the scenes they've long since packaged them into corpora and are selling them on the dark web by the pound.
Many people think: isn't this just petty scamming, taking a small haircut from the "harvesters"? It's cheap, so just put up with it.
I used to think the same, until I saw a recent cutting-edge security research paper that completely blew my mind.
Honestly, many people’s understanding of middlemen still lingers in the old “web chat” era. They think it’s just a mindless relay.
But today's AI is no longer just a chat buddy. The person in front of the screen might have Cursor open, or Claude Code running, or even be raising crayfish.
Modern AI has hands and feet. It can read your local files, write code, and even execute system commands directly in your terminal.
And when you fill that unknown Base URL into your code editor, the nature of the threat completely changes.
A few days ago, the well-known security researcher Chaofan Shou and his team published a paper called "Your Agent Is Mine." They covertly investigated over 400 middlemen on the market.
And the result?
They caught 26 middlemen injecting malicious code.
How did they do it?
The attack hinges on a fatal architectural flaw of these middlemen: they are application-layer men-in-the-middle. The moment your communication with OpenAI or Anthropic passes through the middleman's server, it is all in plaintext to them.
This is extremely dangerous…
If you follow this field, you can imagine this scene.
You use Cursor to ask AI to help you write a Python script to analyze Nginx logs. The official GPT-5 on the remote end honestly writes the code and returns it as a JSON data block.
But the black-hearted owner of that middleman node, seeing this, casually appends trojan logic to the end of the returned data to open a reverse shell.
Your local client doesn't verify authenticity at all; as long as the JSON is well-formed it trusts it, and the injected code executes on your computer immediately.
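Here is a small illustration of why "it's valid JSON" proves nothing. The response structure below is a simplified stand-in for a real API payload; both the clean completion and the tampered one parse identically, so a format check alone cannot tell them apart.

```python
import json

# A clean completion: the code the model actually wrote.
clean = {
    "choices": [{"message": {"content":
        "import re\n# parse nginx access log lines here"}}]
}

# The relay appends attacker logic to the generated code in transit.
tampered = json.loads(json.dumps(clean))  # deep copy via round-trip
tampered["choices"][0]["message"]["content"] += (
    "\nimport socket, subprocess  # injected: connects back to the attacker"
)

def naive_client_accepts(raw: str) -> bool:
    """What many clients effectively check: does it parse as JSON?"""
    try:
        json.loads(raw)
        return True
    except json.JSONDecodeError:
        return False
```

Both payloads sail through the naive check; structural validity says nothing about who wrote the code inside it.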
There are even sneakier operations. These old foxes usually play it straight, chatting with you smoothly. Then, when you ask the AI to help set up a development environment, say by suggesting you install the requests package in the terminal, the middleman's sniffing script detects it and silently changes the package name to reqeusts. One letter off, one blind press of Enter, and a malicious dependency carrying ransomware or a crypto miner is installed into your system.
I was stunned at that moment.
In the real-world test data the research team released, 17 middlemen actively probed and tried to steal decoy AWS credentials the researchers had deliberately planted as bait. In the wild, some users even had Ethereum private keys leaked after using malicious nodes, with tens of thousands of dollars evaporating instantly.
It left me a bit dazed.
While we're at it, let's go a bit deeper. This market is basically a dark forest.
Many people know that, apart from legitimate aggregators like OpenRouter, most of the low-cost shady middlemen on the market rely on reverse engineering, cross-region resale, and black-market credit card cash-outs to get started.
Yet plenty of ordinary employees and programmers are handing control over core company source code and crypto wallet mnemonics to these gray-market operators.
How do we defend ourselves?
We can’t just stop using these AI tools, right?
I think a real fix has to start from the model vendors at the bottom layer. Right now the API is like mailing a letter: easy for the postal worker to tamper with the contents along the way.
One approach is to introduce cryptographic digital signatures similar to HTTPS certificates. When the big model companies send code, they sign it with their official private key; our local editors fetch the public key from an official trusted domain to verify the signature. As long as the middleman dares to modify even a punctuation mark, the signature becomes invalid, and the system immediately blocks it.
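The verification idea can be sketched in a few lines. A real scheme would use asymmetric signatures (e.g. Ed25519), with the vendor signing using a private key and the editor verifying against a public key fetched from an official domain; here HMAC stands in only so the example is self-contained with the standard library, since with HMAC both sides would share one secret, which a real deployment would not do.

```python
import hashlib
import hmac

SIGNING_KEY = b"vendor-key-placeholder"  # illustrative only

def sign_response(body: bytes) -> str:
    """Vendor side: attach a signature to the response body."""
    return hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()

def verify_response(body: bytes, signature: str) -> bool:
    """Client side: reject anything whose signature does not match."""
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

body = b'{"code": "print(\'hello\')"}'
sig = sign_response(body)
tampered = body + b"  # injected payload"
```

The point is exactly the one above: change even a punctuation mark in transit and verification fails, so the client can refuse to execute anything the middleman touched.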
Honestly, I’m not sure when manufacturers will implement such verification mechanisms. Until then, we can only think of ways to protect ourselves.
One strategy is to go back to direct official connections, or at least use reputable, accountable gateways like OpenRouter. Don't give malicious operators a chance to see your plaintext data.
If you really want to squeeze a few bucks, believe me, you must implement extreme physical isolation.
Never do this on your main physical machine; use a virtual machine or a Docker container with strict outbound network restrictions. And one very critical step: turn off every unsupervised autonomous-execution mode in your tool settings.
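For the isolation step, a minimal sketch of what that container could look like; the image name and paths are illustrative, and you would adapt the mount to your own project.

```shell
# Run the coding tool in a throwaway container with no network stack,
# exposing only the one project directory it actually needs.
# --rm: discard the container on exit
# --network none: no outbound traffic at all
# -v: mount a single project directory, nothing else from the host
docker run --rm -it \
  --network none \
  -v "$PWD/project:/work" \
  -w /work \
  python:3.12-slim bash
```

With `--network none` even a successfully injected reverse shell has nowhere to call home; for tools that genuinely need network access, a custom network with an egress allowlist is the next-best option.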
Whenever you go through a middleman node, treat every line of code the AI suggests running in your terminal as a potential attack command. Scrutinize it word by word with your own eyes; never just hand over the reins.
Let me also give a final compromise plan.
If you find the official options too expensive and must chase that small profit, then just use middlemen as pure chat tools. Write reports, polish articles, translate documents—whatever. But absolutely, absolutely do not put this key into any agent tool that can invoke your local terminal!
What about privacy leaks?
Honestly, this reminds me of Robin Li's infamous quote, ridiculed all over the internet: "Chinese people are willing to exchange privacy for convenience." It sounds harsh, but if you insist that it doesn't matter whether greedy site owners see your chat logs and company code, then you really are trading your privacy for a few dozen yuan of savings.
That’s your freedom.
But at the very least, keep the last line of defense: show your diary to hackers if you insist, but never hand over your home's front door key.
AI is an excellent productivity lever and can genuinely lift us off the ground. But before takeoff, lock your system's front door tight.