Discussion over the discharge of nuclear wastewater into the sea! The Japanese government was revealed to have been using AI tools to monitor the entire internet in real time for "false information"
Source: Xinzhiyuan
Editors: Aeneas, So Sleepy
In the past few days, the news that Japan has officially started to discharge nuclear-contaminated water into the Pacific Ocean has attracted widespread attention.
Just before the discharge began, some media reported that the Japanese government had been using AI tools since last year to monitor remarks related to the Fukushima nuclear power plant's wastewater discharge plan.
In June of this year, the AI discovered a South Korean media report claiming that senior officials of the Japanese Ministry of Foreign Affairs had made huge political donations to the International Atomic Energy Agency (IAEA).
It is worth noting that this monitoring covers not only information aimed at Japanese audiences, but also information about Japan published in other countries and regions.
Event Review
In March 2011, an earthquake and tsunami knocked out the cooling system at the Fukushima Daiichi nuclear power plant, causing nuclear fuel in three reactors to melt down and leak radioactive material. The ensuing massive pollution forced tens of thousands of people to evacuate.
More than 1.3 million cubic meters of seawater has since been used to cool the reactor core, which overheated after the explosion.
This contaminated water is collected and stored in more than 1,000 stainless-steel tanks on the site.
Among the 64 radioactive elements present in the contamination, those posing the main threat to human health are carbon-14, iodine-131, cesium-137, strontium-90, cobalt-60, and tritium (hydrogen-3).
To treat this nuclear wastewater, Tokyo Electric Power Company (TEPCO) adopted its Advanced Liquid Processing System (ALPS), a process divided into five stages of co-precipitation, adsorption, and physical filtration.
In April 2021, the Japanese government officially approved the plan to discharge this treated wastewater into the sea.
Despite concerns expressed by various countries and international organizations, this has not stopped Japan from advancing the plan.
At the same time, the Japanese Ministry of Foreign Affairs began using AI to monitor online reports about the radioactive substances in the wastewater, and to drown out such information by producing large volumes of promotional material.
On July 21, Japan's Ministry of Foreign Affairs released an animated video on Twitter explaining, in Japanese, English, French, Spanish, Russian, Arabic, Chinese, and Korean, the safety measures taken during the wastewater treatment process.
The video explains how the plant's water is purified to regulatory standards through the Advanced Liquid Processing System (ALPS), and emphasizes that before being released into wider ocean areas, the discharged water is diluted 100-fold with seawater.
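The 100-fold dilution claim is simple arithmetic. As a minimal sketch (the starting concentration below is an invented example value, not a measured one):

```python
# Illustration of the dilution arithmetic described above. The input
# concentration is a made-up example number, not a measured value.

def diluted_concentration(initial_bq_per_l: float, dilution_factor: float) -> float:
    """Concentration after mixing 1 part treated water with (factor - 1) parts seawater."""
    return initial_bq_per_l / dilution_factor

# Hypothetical tritium concentration of 140,000 Bq/L, diluted 100x
print(diluted_concentration(140_000, 100))  # 1400.0
```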
Monitoring speech with AI
In fact, this kind of internet public-opinion monitoring has already been explored deeply and extensively in the AI field.
One of the most popular approaches combines algorithms, machine-learning models, and human reviewers to deal with "fake news" published on social media.
A 2018 study of Twitter showed that fake news stories are 70% more likely to be retweeted than real news.
Meanwhile, real news takes about six times as long to reach a group of 1,500 people, and most true stories rarely reach more than 1,000 people; by contrast, popular fake news items can reach as many as 100,000.
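The "algorithms plus machine learning" part of such a pipeline can be sketched with a tiny Naive Bayes text classifier. The training headlines below are invented for illustration; real systems use far larger models and labelled corpora.

```python
# Toy Naive Bayes fake-news classifier: a minimal sketch of the machine-learning
# component described above. All training data here is invented.
import math
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (text, label). Returns per-label word counts and label counts."""
    counts = defaultdict(Counter)
    labels = Counter()
    for text, label in samples:
        labels[label] += 1
        counts[label].update(text.lower().split())
    return counts, labels

def classify(text, counts, labels):
    """Pick the label maximising log P(label) + sum log P(word|label), Laplace-smoothed."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, -math.inf
    for label in labels:
        total = sum(counts[label].values())
        score = math.log(labels[label] / sum(labels.values()))
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

samples = [
    ("shocking miracle cure doctors hate", "fake"),
    ("you will not believe this shocking secret", "fake"),
    ("government publishes quarterly inflation report", "real"),
    ("agency releases annual safety report", "real"),
]
counts, labels = train(samples)
print(classify("shocking secret cure", counts, labels))     # fake
print(classify("quarterly safety report", counts, labels))  # real
```

In production such a classifier would only be one signal; as the text notes, human reviewers stay in the loop.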
Sphere is the first AI model capable of scanning hundreds of thousands of citations at once to check whether they support the corresponding claims.
When Sphere finds suspicious sources, it can recommend stronger sources or corrections to help improve the accuracy of an entry.
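The core idea of checking whether a source supports a claim can be illustrated with a toy retrieval step. Sphere itself uses dense neural retrieval over web-scale corpora; the bag-of-words cosine similarity below is only a stand-in for that, with an invented threshold:

```python
# Toy illustration of ranking candidate source passages by similarity to a
# claim and flagging weak support. Not Sphere's actual retrieval, which is
# dense and web-scale; the threshold here is an arbitrary example.
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity between two texts under a bag-of-words model."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_source(claim, sources, threshold=0.3):
    """Return the best-matching source, or None if nothing clears the threshold."""
    top = max(sources, key=lambda s: cosine(claim, s))
    return top if cosine(claim, top) >= threshold else None

claim = "the reactor cooling system failed after the tsunami"
sources = [
    "the tsunami knocked out the reactor cooling system",
    "football results from the weekend league",
]
print(best_source(claim, sources))  # the tsunami knocked out the reactor cooling system
```

Returning `None` corresponds to Sphere flagging a citation as suspicious and prompting a search for a stronger source.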
The development of Sphere marks Meta's efforts to address misinformation on the platform.
Meta has faced harsh criticism from users and regulators for several years over misinformation spread on Facebook, Instagram, and WhatsApp; CEO Mark Zuckerberg was even called before Congress to discuss the issue.
Discovering fake news by exploring social-media propagation patterns
In Europe, there is also the Fandango project, which is building software tools to help journalists and fact-checkers detect fake news.
The system also looks for web pages or social-media posts whose wording and opinions resemble fake news already flagged by fact-checkers.
The GoodNews project upends traditional AI fake-news detection tools.
Moreover, fake news often takes the form of images, which are difficult to analyze with natural-language-processing techniques, so rather than examining content, GoodNews studies how news spreads.
Its results suggest that fake news gets far more shares than likes on Facebook, while regular posts tend to get more likes than shares. By spotting such patterns, GoodNews attaches credibility scores to news items.
From these patterns, the team trained an AI algorithm, teaching the model which stories were false and which were not.
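The shares-versus-likes pattern can be turned into a simple score. This is only a toy version of the idea; GoodNews's actual model is learned from spreading graphs, and the function and its neutral fallback value below are invented:

```python
# Toy credibility score based on the share/like pattern described above:
# share-heavy spread is treated as a warning sign. This is an invented
# illustration, not GoodNews's actual (learned, graph-based) model.
def credibility_score(likes: int, shares: int) -> float:
    """Map the share/like balance to a 0..1 score (1 = likely credible)."""
    total = likes + shares
    if total == 0:
        return 0.5  # no engagement signal at all
    return likes / total  # share-heavy items score low

print(credibility_score(likes=900, shares=100))  # 0.9
print(credibility_score(likes=100, shares=900))  # 0.1
```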
Beyond pure text, the rapid development of visual generative models such as Stable Diffusion has made the DeepFake problem increasingly serious.
In multimodal media manipulation, the faces of important figures in news photos (for example, the French president's face) are swapped, and key phrases or words in the accompanying text are tampered with (the positive phrase "is welcome to" altered to the negative phrase "is forced to resign").
Currently, this work has been accepted by CVPR 2023.
The proposed model, HAMMER, adopts a two-tower architecture for multimodal semantic fusion and reasoning, and detects and localizes multimodal tampering in a fine-grained, hierarchical manner through shallow and deep manipulation reasoning.
In shallow manipulation reasoning, Manipulation-Aware Contrastive Learning aligns the unimodal semantic features extracted by the image encoder and the text encoder. The unimodal embeddings then exchange information through a cross-attention mechanism, and a Local Patch Attentional Aggregation mechanism is designed to localize tampered image regions.
In deep manipulation reasoning, multimodal semantic features are further fused by the modality-aware cross-attention mechanism in a multimodal aggregator. On this basis, dedicated multimodal sequence tagging and multimodal multi-label classification localize tampered words in the text and detect finer-grained tampering types.
Experimental results show that HAMMER detects and localizes multimodal media tampering more accurately than existing multimodal and unimodal detection methods.
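The two reasoning stages can be sketched schematically. This is not the authors' code: all dimensions and weights below are random placeholders, and it only shows the rough shape of attention-weighted patch aggregation, per-token tampering tags, and a multi-label (independent-sigmoid) type head.

```python
# Schematic sketch of the shallow/deep reasoning stages described above.
# Dimensions and weights are random placeholders, not trained parameters.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# 16 image patches with 32-dim embeddings (placeholder features)
patches = rng.normal(size=(16, 32))
query = rng.normal(size=(32,))           # a learned query in the real model

# Shallow stage: attention weights over patches -> aggregated region feature
attn = softmax(patches @ query)          # (16,) weights summing to 1
region_feature = attn @ patches          # (32,) attention-weighted pool

# Deep stage: per-token tagging ("is this word tampered?") and multi-label
# classification over tamper types, with placeholder weights.
tokens = rng.normal(size=(8, 32))        # 8 text-token embeddings
tag_w = rng.normal(size=(32,))
token_tags = (tokens @ tag_w) > 0        # (8,) boolean tamper tags

type_w = rng.normal(size=(32, 4))        # 4 hypothetical tamper types
type_logits = region_feature @ type_w
type_probs = 1 / (1 + np.exp(-type_logits))  # independent sigmoids: multi-label

print(attn.shape, region_feature.shape, token_tags.shape, type_probs.shape)
```

Using independent sigmoids rather than one softmax reflects the multi-label setting: an item can exhibit several tampering types at once (e.g., both a face swap and a text edit).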