Sam Altman: OpenAI has spent tens of millions of dollars on ChatGPT users' "please" and "thank you"
OpenAI CEO Sam Altman has revealed a surprising fact: the world's overly polite ChatGPT users are costing the AI giant tens of millions of dollars in invisible extra computation every year. (Synopsis: OpenAI releases o3 and o4-mini, its strongest reasoning models yet: they can reason over images, select tools automatically, and set new highs in math and coding) (Background: OpenAI is quietly building its own community platform, taking aim at Musk's X)

In artificial intelligence, efficiency and cost control are perennial themes, and in a recent public exchange Sam Altman, CEO of OpenAI, pointed to an unexpected source of cost: polite language from users. Altman observed that many users habitually add phrases such as "please", "thank you", and "could you please help me...?" when talking to ChatGPT. Meaningful in human relationships, these words are nothing but extra computational cost for a language model.

The price of human politeness

Altman said in a post that replying to these polite words has cost tens of millions of dollars. Large language models such as ChatGPT process a prompt by breaking the text into minimal units called tokens, which the model uses to understand input and generate output. A token can be part of a word, a complete word, or a punctuation mark. The longer the prompt, the more tokens it contains, and the model consumes compute for every token it processes. That spend may be only a small share of OpenAI's overall costs (which also include model training, server maintenance, and R&D), but it shows that at the scale of mass-market AI, even small user habits can have a significant economic impact.

The token economics of a prompt

When a user types a question or instruction into ChatGPT, the system first breaks the text into a token sequence. "Please tell me how the weather is today?" might be split into something like ["please", "tell", "me", "today", "weather", "how", "?"], while the terser "Today's weather?" might become just ["today", "weather", "?"]. The two prompts mean the same thing, but the longer one incurs extra cost in proportion to its extra tokens. The model then runs an inference phase over those tokens: the more tokens in the input, the more initial information the model must process, and longer inputs also tend to elicit longer responses, further increasing the compute spent on output. Processing every token requires matrix operations on powerful GPUs.

Many users, however, pushed back on Altman's remarks: beyond the cost question, humans project their need for politeness onto the AI. ChatGPT's conversation is so smooth and natural that users subconsciously treat it as a human-like conversation partner. Humans have deep-rooted social norms around communication, and abandoning those norms just because the other party is a machine is itself a cost; people may simply be following them unconsciously. More polite phrasing can also elicit friendlier, more cooperative responses, producing answers people find more satisfying.

All told, Altman's point that user courtesy adds tens of millions of dollars to ChatGPT's costs is an interesting angle: it reveals both the staggering compute behind large-scale AI services and the pressure on providers to supply that computing power. User demand, in turn, has pushed models to become ever more natural, and in the era of AI and language models, consumers may be the biggest winners.
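The token counting described above can be sketched in a few lines. This is a deliberately crude illustration: the `count_tokens` splitter and the `COST_PER_TOKEN` price below are stand-ins invented for this sketch, not OpenAI's actual tokenizer or pricing. Production models use subword schemes such as BPE, but the proportional point is the same: a longer, politer prompt costs more because it contains more tokens.

```python
import re

# Hypothetical per-token price, for illustration only -- real API
# pricing varies by model and is not stated in the article.
COST_PER_TOKEN = 0.000002  # dollars

def count_tokens(text: str) -> int:
    """Crude stand-in for a real tokenizer: split the text into
    runs of word characters and individual punctuation marks."""
    return len(re.findall(r"\w+|[^\w\s]", text))

polite = "Please tell me how the weather is today, thank you?"
terse = "Today's weather?"

extra = count_tokens(polite) - count_tokens(terse)
print(f"polite prompt: {count_tokens(polite)} tokens")
print(f"terse prompt:  {count_tokens(terse)} tokens")
print(f"extra cost per request: ${extra * COST_PER_TOKEN:.8f}")
```

Multiplied across hundreds of millions of requests, even a few extra tokens per prompt add up, which is the scale effect Altman is describing.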
〈Sam Altman: OpenAI spent tens of millions of dollars responding to ChatGPT users' "please" and "thank you"〉 This article was first published in BlockTempo, "Dynamic Trend - The Most Influential Blockchain News Media".