Chinese AI catches up: DeepSeek releases R1 model to challenge US technological leadership
Chinese AI lab DeepSeek recently launched DeepSeek-R1, an open-source reasoning model that has drawn widespread attention across the industry. The company claims the model performs comparably to OpenAI's o1 on certain AI benchmarks. R1 has been released on the AI development platform Hugging Face under the MIT license, allowing users to commercialize it without restriction.
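For readers who want to try the open weights, the snippet below is a minimal sketch of pulling the release from Hugging Face with the `transformers` library. The repo id `deepseek-ai/DeepSeek-R1` and the generation settings are illustrative assumptions rather than details confirmed in this article, and a model of this size realistically needs a multi-GPU server to host in full.

```python
# Minimal sketch: loading the open weights via Hugging Face transformers.
# The repo id below is an assumption based on the release described above.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,   # the repo ships custom model code
    torch_dtype="auto",       # keep the published weight precision
    device_map="auto",        # shard across whatever GPUs are available
)

prompt = "Prove that the square root of 2 is irrational."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```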
DeepSeek claims that R1 outperforms o1 on several benchmarks, including the American Invitational Mathematics Examination (AIME), MATH-500, and SWE-bench Verified. AIME uses other AI models to grade reasoning performance, MATH-500 is a collection of word problems, and SWE-bench Verified measures performance on programming tasks.
The R1 model has advantages but is constrained by politics
As a reasoning model, R1 is said to verify its own work, which makes it more reliable than conventional models in fields such as physics, science, and mathematics. Reasoning models typically need longer compute times, from several seconds to several minutes, but their higher accuracy is a significant advantage on complex problems.
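To make that latency trade-off concrete, here is a hedged sketch that times a single reasoning request through an OpenAI-compatible client. The endpoint URL, environment variable, and model name are assumptions, not details confirmed in this article; substitute whatever deployment you actually use.

```python
# Illustrative sketch of the latency trade-off: reasoning models spend
# extra time "thinking" before answering. Endpoint and model name are
# assumptions (an OpenAI-compatible hosted API is presumed).
import os
import time

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # assumed environment variable
    base_url="https://api.deepseek.com",     # assumed endpoint
)

start = time.perf_counter()
response = client.chat.completions.create(
    model="deepseek-reasoner",               # assumed model name for R1
    messages=[{"role": "user", "content": "How many primes are below 100?"}],
)
elapsed = time.perf_counter() - start

print(f"Answer after {elapsed:.1f}s:")
print(response.choices[0].message.content)
```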
According to the technical report, R1 contains 671 billion parameters, far more than many existing models. Parameter count generally correlates with a model's problem-solving ability, which makes R1 a massive model. However, D