Debate over the discharge of nuclear wastewater into the sea! The Japanese government revealed to be using AI tools to monitor the entire internet in real time for "false information"

Source: Xinzhiyuan

Editors: Aeneas, So Sleepy

[Introduction] Media reports revealed that as early as last year, the Japanese government began using AI tools to detect remarks related to the discharge of Fukushima nuclear wastewater, and to respond within hours.

In the past few days, the news that Japan has officially started to discharge nuclear-contaminated water into the Pacific Ocean has attracted widespread attention.

Just before the discharge began, media reported that the Japanese government had been using AI tools since last year to monitor any remarks related to the Fukushima nuclear power plant's wastewater discharge plan.

In June of this year, the AI discovered a South Korean media report claiming that senior officials of the Japanese Ministry of Foreign Affairs had made huge political donations to the International Atomic Energy Agency (IAEA).

Within hours, the Japanese government responded, dismissing the report as "groundless" in both English and Japanese.

According to earlier reports by Nikkei Asia, the Japanese Ministry of Foreign Affairs planned to launch a brand-new AI system in 2023 to collect and analyze information on social media and other platforms, and to track its impact on public opinion over the medium and long term.

Notably, the system's scope covers not only information aimed at Japanese audiences, but also information about Japan circulating in other countries and regions.

### Event Review

In March 2011, an earthquake and tsunami knocked out the cooling system at the Fukushima Daiichi nuclear power plant, causing nuclear fuel in three reactors to melt down and leak radioactive material. The ensuing massive pollution forced tens of thousands of people to evacuate.

More than 1.3 million cubic meters of seawater has since been used to cool the reactor cores, which overheated after the explosions.

This contaminated water has been collected and stored in more than 1,000 stainless steel tanks on the site.

Of the 64 radionuclides in the contaminated water, those posing the main threat to human health are carbon-14, iodine-131, cesium-137, strontium-90, cobalt-60, and tritium.

To treat this nuclear wastewater, Tokyo Electric Power Company (TEPCO) adopted its self-developed Advanced Liquid Processing System (ALPS), a five-stage process built around co-precipitation, adsorption, and physical filtration.

However, such large quantities of water also make sustainable storage increasingly difficult.

In April 2021, the Japanese government officially approved the discharge of the treated nuclear wastewater into the sea.

Despite concerns expressed by various countries and international organizations, this has not stopped Japan from advancing the plan.

At the same time, the Japanese Ministry of Foreign Affairs began using AI to monitor online reports about the radioactive substances contained in the wastewater, and to dilute the visibility of such information by producing large quantities of promotional material.

On July 21, the Ministry of Foreign Affairs of Japan released an animated video on Twitter explaining, in Japanese, English, French, Spanish, Russian, Arabic, Chinese, and Korean, the safety measures taken in Japan's treatment of the nuclear wastewater.

The video explains how the plant's water is purified to regulatory standards through the Advanced Liquid Processing System (ALPS), and emphasizes that before being released into the wider ocean, the discharged wastewater is diluted roughly 100-fold with seawater.
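As a rough sanity check on what a 100-fold dilution means in practice, here is a minimal back-of-the-envelope computation. The tank-side tritium concentration below is an illustrative assumption, not a measured value; the 1,500 Bq/L discharge target and 60,000 Bq/L regulatory limit are the figures TEPCO and Japanese regulators have publicly cited for tritium.

```python
# Back-of-the-envelope check of a 100-fold seawater dilution.
# The tank concentration is an illustrative assumption, NOT a
# measured TEPCO value.

TANK_TRITIUM_BQ_PER_L = 140_000      # assumed tank-side tritium activity
DILUTION_FACTOR = 100                # "diluted 100 times by seawater"
DISCHARGE_TARGET_BQ_PER_L = 1_500    # TEPCO's stated operational target
REGULATORY_LIMIT_BQ_PER_L = 60_000   # Japanese regulatory limit for tritium

diluted = TANK_TRITIUM_BQ_PER_L / DILUTION_FACTOR
print(f"After dilution: {diluted:,.0f} Bq/L")
print(f"Below discharge target: {diluted < DISCHARGE_TARGET_BQ_PER_L}")
print(f"Below regulatory limit: {diluted < REGULATORY_LIMIT_BQ_PER_L}")
```

With the assumed starting activity, a 100-fold dilution lands at 1,400 Bq/L, under both thresholds; the point of the check is only that the dilution factor, rather than the ALPS treatment itself, is what brings tritium below the discharge target.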

### AI monitoring of online speech

In fact, this kind of internet public-opinion monitoring has already been explored deeply and extensively in the AI field.

One of the most common approaches combines algorithms, machine learning models, and human reviewers to deal with "fake news" published on social media.

A 2018 study of Twitter data showed that fake news stories are 70% more likely to be retweeted by humans than real news.

Meanwhile, real news takes about six times longer to reach a group of 1,500 people, and most of the time rarely reaches more than 1,000 people; popular fake news, by contrast, can reach as many as 100,000.

To address this, Meta launched a brand-new AI tool, Sphere, in 2022 to help ensure the accuracy of information.

Sphere is the first AI model capable of scanning hundreds of thousands of citations at once to check whether they support the corresponding claims.

Sphere's dataset includes 134 million public web pages. It relies on the internet's collective knowledge to quickly scan hundreds of thousands of web citations for factual errors.

Meta said Sphere has scanned all pages on Wikipedia to see whether it could identify citations that do not support the claims made on those pages.

When Sphere finds suspicious sources, it can recommend stronger sources or corrections to help improve the accuracy of an entry.
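Sphere itself is a large retrieval-and-verification system, but its core check (does this cited passage actually support this claim?) can be illustrated with off-the-shelf sentence embeddings. The sketch below is a toy stand-in for that idea, not Meta's implementation; the model name and the 0.5 threshold are arbitrary choices for the example.

```python
# Toy claim-vs-citation check with sentence embeddings. This is
# NOT Sphere's implementation; the model and threshold are
# arbitrary choices for illustration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

claim = "The Fukushima Daiichi plant lost its cooling systems in 2011."
citations = [
    "A 2011 earthquake and tsunami disabled cooling at Fukushima Daiichi.",
    "Cherry blossom season in Japan typically peaks in early April.",
]

claim_emb = model.encode(claim, convert_to_tensor=True)
cite_embs = model.encode(citations, convert_to_tensor=True)
scores = util.cos_sim(claim_emb, cite_embs)[0]

for citation, score in zip(citations, scores):
    verdict = "supports" if float(score) > 0.5 else "does not clearly support"
    print(f"[{float(score):.2f}] {verdict}: {citation}")
```

A production verifier would use a natural language inference model rather than raw cosine similarity, since similarity alone cannot distinguish a citation that supports a claim from one that contradicts it.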

Previously, many AI systems could identify claims that lacked citations, but Meta's researchers noted that picking out dubious claims and determining whether their cited sources actually support them requires "deep understanding and analysis" by an AI system.

The development of Sphere marks Meta's efforts to address misinformation on the platform.

Meta has faced harsh criticism from users and regulators for several years over misinformation spread on Facebook, Instagram, and WhatsApp. CEO Mark Zuckerberg was even called before Congress to discuss the issue.

### Detecting fake news and studying how it spreads on social media

In Europe, there is also the Fandango project, which is building software tools to help journalists and fact-checkers detect fake news.

Whether content has been altered with Photoshop or DeepFake techniques, Fandango's system can reverse-engineer the changes, using algorithms to help journalists spot doctored content.
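Fandango's actual pipeline is not detailed in the report, but one classical technique for spotting doctored JPEGs is error level analysis (ELA): resave the image at a known quality and look for regions whose recompression error differs from the rest of the frame. The sketch below is a minimal ELA pass under that assumption, not Fandango's code; the file names are hypothetical.

```python
# Minimal error level analysis (ELA) sketch for flagging edited
# regions in a JPEG. A classical heuristic, NOT Fandango's pipeline.
import io

from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Recompress at a known JPEG quality.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer).convert("RGB")

    # Regions edited after the last save tend to show a different
    # error level than the untouched background.
    diff = ImageChops.difference(original, recompressed)
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

if __name__ == "__main__":
    error_level_analysis("suspect.jpg").save("suspect_ela.png")  # hypothetical paths
```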

In addition, the system looks for web pages or social media posts with similar words and opinions based on fake news that has been flagged by fact-checkers.

The system is underpinned by a range of AI algorithms, particularly natural language processing.

Michael Bronstein, a professor at the University of Lugano in Switzerland and Imperial College London in the United Kingdom, took an atypical AI approach to detecting fake news.

The project, called GoodNews, upends traditional fake news AI detection tools.

In the past, such tools have analyzed the distinctive semantic characteristics of fake news, but they often run into obstacles: WhatsApp, for instance, is encrypted and does not allow access to content.

Moreover, fake news often takes the form of images, which are difficult to analyze with natural language processing techniques.

So Professor Bronstein's team turned the traditional model on its head to study how fake news spreads.

The results suggest that on Facebook, fake news gets far more shares than likes, while regular posts tend to get more likes than shares. By spotting such patterns, GoodNews attaches credibility scores to news items.

The team's first model uses graph-based machine learning and was trained on Twitter data, some of which journalists had proven false.

From this, they trained the AI algorithm, teaching the model which stories were false and which were not.
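GoodNews proper uses graph machine learning over the full retweet cascade; as a much simpler stand-in for the same idea, the sketch below scores stories from two hand-crafted propagation features (share-to-like ratio and cascade depth) with an off-the-shelf classifier. All feature values and labels here are made-up toy data, not the project's dataset.

```python
# Toy stand-in for propagation-based credibility scoring. GoodNews
# uses graph ML over full retweet cascades; this sketch uses only
# two hand-crafted spread features. All data below is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per story: [shares / (likes + 1), cascade depth]
X_train = np.array([
    [3.2, 9.0], [2.8, 11.0], [4.1, 14.0],  # "fake": shared far more than liked
    [0.4, 3.0], [0.6, 2.0], [0.3, 4.0],    # "real": liked more than shared
])
y_train = np.array([1, 1, 1, 0, 0, 0])     # 1 = fake, 0 = real

clf = LogisticRegression().fit(X_train, y_train)

story = np.array([[2.5, 10.0]])            # a new story's spread pattern
fake_prob = clf.predict_proba(story)[0, 1]
print(f"Credibility score: {1 - fake_prob:.2f}")  # higher = more credible
```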

### Multimodal DeepFake detection leaves AIGC nowhere to hide

Beyond pure text, the rapid development of visual generation models such as Stable Diffusion has made the DeepFake problem increasingly serious.

In multimodal media tampering, the faces of important figures in news photos are replaced (for example, the French president's face), and key phrases or words in the accompanying text are altered (for example, the positive phrase "is welcome to" changed to the negative "is forced to resign").

To meet this new challenge, researchers proposed a hierarchical multimodal tampering-reasoning model that detects cross-modal semantic inconsistencies in tampered samples by fusing and reasoning over semantic features across modalities.

Currently, this work has been accepted by CVPR 2023.

Specifically, the authors propose HAMMER (HierArchical Multi-modal Manipulation rEasoning tRansformer), a hierarchical multimodal tampering-reasoning model.

Built on a two-tower architecture for multimodal semantic fusion and reasoning, the model performs fine-grained, hierarchical detection and localization of multimodal tampering through shallow and deep tampering reasoning.

The HAMMER model has the following two characteristics:

  1. In shallow tampering reasoning, manipulation-aware contrastive learning is used to align the unimodal semantic features of the image and text extracted by the image encoder and text encoder. The unimodal embeddings then exchange information through a cross-attention mechanism, and a Local Patch Attentional Aggregation mechanism is designed to localize tampered image regions;

  2. In deep tampering reasoning, a modality-aware cross-attention mechanism in the multimodal aggregator further fuses the multimodal semantic features. On this basis, dedicated multimodal sequence tagging and multimodal multi-label classification are performed to localize tampered words in the text and to detect finer-grained tampering types (a schematic sketch of this two-tower design follows below).
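The sketch below is not the authors' code, only a minimal schematic of the two-tower design just described: separate image and text encoders, cross-attention fusion, and separate heads for binary detection, image-patch localization, and token-level tagging. The encoders are stubbed with small Transformer layers, and all dimensions are placeholder choices.

```python
# Minimal schematic of a HAMMER-style two-tower detector. NOT the
# official HAMMER code; encoders are stubbed and all dimensions
# are placeholder choices.
import torch
import torch.nn as nn

class TwoTowerTamperDetector(nn.Module):
    def __init__(self, dim=256, vocab=30522):
        super().__init__()
        # Unimodal towers (real systems start from pretrained ViT/BERT).
        self.patch_embed = nn.Linear(768, dim)   # image patch features -> dim
        self.token_embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.image_tower = nn.TransformerEncoder(layer, num_layers=2)
        self.text_tower = nn.TransformerEncoder(layer, num_layers=2)

        # Cross-attention fusion: text tokens attend to image patches.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4,
                                                batch_first=True)

        # Heads: binary detection, patch localization, token tagging.
        self.detect_head = nn.Linear(dim, 2)   # real vs. tampered
        self.patch_head = nn.Linear(dim, 1)    # per-patch tamper score
        self.token_head = nn.Linear(dim, 2)    # per-token tamper tag

    def forward(self, patch_feats, token_ids):
        img = self.image_tower(self.patch_embed(patch_feats))
        txt = self.text_tower(self.token_embed(token_ids))
        fused, _ = self.cross_attn(query=txt, key=img, value=img)
        return {
            "detection": self.detect_head(fused.mean(dim=1)),
            "patch_scores": self.patch_head(img).squeeze(-1),
            "token_tags": self.token_head(fused),
        }

model = TwoTowerTamperDetector()
patch_feats = torch.randn(1, 196, 768)        # e.g. 14x14 ViT patch features
token_ids = torch.randint(0, 30522, (1, 64))  # a tokenized caption
out = model(patch_feats, token_ids)
print({name: tuple(t.shape) for name, t in out.items()})
```

The real model additionally trains the shallow stage with manipulation-aware contrastive learning and aggregates patch attention for localization; the sketch only shows where those heads would sit in the architecture.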

Experimental results show that HAMMER detects and localizes multimodal media tampering more accurately than existing multimodal and unimodal detection methods.

Judging from the visualization results of multi-modal tamper detection and localization, HAMMER can accurately perform tamper detection and localization tasks simultaneously.

In addition, the model attention visualization results on tampered words further demonstrate that HAMMER performs multimodal tampering detection and localization by focusing on image regions that are semantically inconsistent with the tampered text.
