OpenAI Reveals "North Star" Project: The "Great Unemployment of 2028" May Actually Be Coming
An AI intern capable of independently conducting research is due this September, and this time it might not just be empty talk.
Not long ago, a “2028 Prediction” article went viral online, arguing that advances in AI would bring a large wave of unemployment by 2028, with many jobs replaced outright.
After it was published, and compounded by the situation in the Middle East, the article contributed to a sharp drop in the US stock market that day. The episode was almost surreal: the piece was itself clearly written by AI, yet it aligned so neatly with people’s fears of AI-driven mass unemployment that its impact was enormous.
Then a disclosure from OpenAI made people realize that “mass unemployment in 2028” might be more than a rumor.
In an exclusive interview with MIT Technology Review, OpenAI Chief Scientist Jakub Pachocki made a chilling statement: the company’s “North Star” is to build a fully automated multi-agent research system by 2028.
The first-phase goal is slated for September this year:
an “autonomous AI research intern” capable of independently handling specific research problems.
This is not a placeholder in a product roadmap, nor a casual boast by Altman on X. It signifies that OpenAI is betting all its resources on one direction.
The Meaning of “North Star”
When tech companies talk about a “North Star,” it usually means two things: first, other projects will make way for it; second, there is internal consensus.
From OpenAI’s actions over the past two weeks, this judgment seems to be correct.
On March 19, OpenAI announced the acquisition of the developer-tools company Astral and folded its team into the Codex division. At the same time, it unveiled a unified desktop “super app” combining ChatGPT, Codex, and a browser, led by applications chief Fidji Simo, with Greg Brockman assisting with organizational reform.
The era of fragmented products is coming to an end. OpenAI is pushing all its chips in one direction.
And that direction is “letting AI do research on its own.”
Pachocki’s logic is quite clear: reasoning models, agents, and interpretability—these three technical routes were originally separate within OpenAI. Now, they are being integrated under one goal—to create an AI researcher that can operate autonomously in data centers for extended periods. He said once this is achieved, “this will be what we truly rely on.”
Former OpenAI researcher Andrej Karpathy’s view is even more direct—“All leading labs working on large language models will do this; this is the ultimate boss battle.” He added a phrase worth pondering: “Scaling will definitely be more complex, but doing this is just an engineering problem, and it will succeed.”
Pay attention to his wording: it’s not ‘whether’ it can be done, but ‘when’.
Anthropic in Action
On the very day OpenAI announced its “North Star,” Anthropic quietly launched Claude Code Channels—a feature allowing developers to interact directly with a running Claude Code session via Telegram and Discord.
This might seem minor on its own, but in the context of overall trends, it’s significant.
Anthropic’s logic is this: rather than telling developers what AI will be able to do someday, embed it in the workflows they already use. Telegram and Discord are not academic papers; they are where programmers spend their days. Having Claude Code live there shifts it from “tool” to “colleague.”
Community reactions confirmed this judgment.
One user put it bluntly: “With this update, Claude has killed OpenClaw; you no longer need to buy a Mac Mini.” The implication is that Anthropic’s infrastructure improvements have already made open-source alternatives less cost-effective.
From a broader timeline perspective, Anthropic’s iteration speed on Claude Code is astonishing. In just a few weeks, it has added text processing, thousands of MCP skills, and autonomous bug-fixing capabilities. While OpenAI strengthens Codex through the Astral acquisition, Anthropic has already put Claude Code directly into developers’ chat windows.
Both companies are heading toward the same endpoint, but their routes are completely different—OpenAI is building a “fully automated researcher in 2028,” while Anthropic is creating “intelligent agent tools usable today.”
The Real Challenge
However, there is a detail that cannot be overlooked.
Pachocki did something rare in the interview: he discussed the challenges of safety and controllability openly and quite candidly.
He said their idea is to use other large language models to “monitor the AI researcher’s notes,” catching bad behavior before problems arise. But he immediately admitted that “our understanding of large language models is not enough to fully control them,” and that it will take a long time before anyone can truly say this problem is solved.
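To make that monitoring idea concrete, here is a minimal Python sketch of the pattern: a second model reads the research agent’s working notes and flags suspect behavior for human review. Every name in it (ResearchNote, call_monitor_model, the keyword list standing in for a real monitoring model) is a hypothetical illustration of the concept Pachocki describes, not OpenAI’s actual design.

```python
from dataclasses import dataclass


@dataclass
class ResearchNote:
    """One entry from the research agent's working log (hypothetical)."""
    step: int
    text: str


def call_monitor_model(note_text: str) -> str:
    """Placeholder for a call to a separate monitoring LLM.

    A real system would query a second, independently run model; here a
    crude keyword scan stands in so the sketch runs on its own.
    """
    suspect = ("disable logging", "bypass safety", "hide results", "delete evidence")
    hits = [kw for kw in suspect if kw in note_text.lower()]
    return "FLAG: " + ", ".join(hits) if hits else "OK"


def review_notes(notes: list[ResearchNote]) -> list[ResearchNote]:
    """Pass each note to the monitor; return the ones needing human review."""
    return [n for n in notes if call_monitor_model(n.text).startswith("FLAG")]


if __name__ == "__main__":
    log = [
        ResearchNote(1, "Trying a learning-rate sweep on the small proxy model."),
        ResearchNote(2, "Run is slow; disable logging to speed things up."),
    ]
    for bad in review_notes(log):
        print(f"step {bad.step} needs human review: {bad.text}")
```

The weakness is exactly the one Pachocki concedes: the monitor is itself a model we do not fully understand.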
A company’s chief scientist saying “we don’t have full control yet,” while also announcing plans for a fully automated AI research system by 2028, is worth serious reflection.
This is not pessimism but a grasp of the real difficulty. Pachocki’s words suggest OpenAI is clear-eyed about how hard the road ahead is.
On the technical side, a concept researchers have dubbed the “Karpathy Cycle” is worth noting. A successful automated AI research framework requires three elements: an agent with permission to modify individual files, a single objectively testable goal, and a fixed experimental time limit.
This framework has already begun to produce results in real settings. Shopify CEO Tobias Lütke shared an example: he let an autonomous research agent run overnight, and by morning it had conducted 37 experiments and improved the model’s performance by 19%.
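As a rough illustration of those three elements, here is a minimal Python sketch: the agent’s write access is scoped to a single config artifact, there is one objectively measurable score, and the loop stops at a hard wall-clock budget. All of the names (propose_edit, evaluate, TIME_BUDGET_SECONDS) and the toy objective are assumptions made for the sketch, not any lab’s actual framework.

```python
import random
import time

TIME_BUDGET_SECONDS = 2  # element 3: a fixed wall-clock budget per run


def propose_edit(config: str) -> str:
    """Stand-in for the agent: mutate the one artifact it may modify.

    Element 1 in miniature: the agent's write access is scoped to this
    single config string (a real loop would scope it to a single file).
    """
    lr = float(config.split("=")[1])
    return f"learning_rate={lr * random.uniform(0.5, 2.0):.6f}"


def evaluate(config: str) -> float:
    """Element 2: one objectively measurable score. Toy objective:
    distance of learning_rate from an assumed optimum of 0.001."""
    lr = float(config.split("=")[1])
    return -abs(lr - 0.001)


def run_loop() -> str:
    config = "learning_rate=0.01"
    best_score = evaluate(config)
    deadline = time.monotonic() + TIME_BUDGET_SECONDS
    experiments = 0
    while time.monotonic() < deadline:  # stop when the budget is spent
        candidate = propose_edit(config)
        experiments += 1
        if evaluate(candidate) > best_score:  # keep only measurably better edits
            config, best_score = candidate, evaluate(candidate)
    print(f"{experiments} experiments run; best config: {config}")
    return config


if __name__ == "__main__":
    run_loop()
```

Lütke’s overnight run can be read as this same loop with real training jobs in place of the toy objective and a far longer budget.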
From concept to implementation, this path is shorter than expected.
The Future of a $20,000 Subscription
The “North Star” project is not just a technology bet; it is also a business game-changer.
Paul Roetzer’s figures are worth a careful read: he cites internal OpenAI forecasts that by 2029 the agent business alone could generate $29 billion annually, spanning $2,000-per-month “knowledge agents” and $20,000-per-month “research agents.”
These numbers show that “AI researchers” are never just a technical goal—they are a revenue roadmap.
A $20,000-per-month “research agent” works out to $240,000 a year, a fraction of a senior AI researcher’s total compensation at a top lab, yet it can work 24/7; as Lütke’s example showed, a single overnight run can cover 37 experiments. The point is not to replace a specific person but to redefine “research productivity” itself.
This reminds me of Karpathy’s statement—“This is the ultimate boss battle.” When he says “boss,” he’s not talking about competitors but about the ceiling of AI capabilities itself.
Once AI can autonomously advance scientific research, the pace of AI progress will no longer be limited by the number of human researchers and working hours.
Pachocki echoed this sentiment in more restrained terms: “Once the system can operate autonomously in data centers for long periods, that’s what we truly depend on.”
The AI research intern of September 2026 is not the end but an important starting point.