2026/02/10
January 21, 2026 NPI Open Webinar: "Research Project for Risks in the Information Sphere"

As people's reliance on social media as a source of information expands, we are entering an era in which online information shapes public opinion. Alongside this trend, in addition to the spread of disinformation, we are also seeing new methods of influence operations, such as the amplification of biased opinions, a development that increases risks in the information sphere from a security perspective. In response to these new trends in foreign influence operations, it is becoming essential for the defending side to utilize technologies such as generative AI.

With the cooperation of Sakana AI, the moderator led discussions with the panelists and members of the NPI Research Project for Risks in the Information Sphere, covering the latest trends in influence operations in the information sphere and the potential for countering disinformation using generative AI.


Moderator
Osawa Jun, Chair, Research Project for Risks in the Information Sphere; Senior Fellow, NPI


Presentations

  • "Influence Operations (FIMI), Information Warfare, and Cognitive Warfare: The Emerging Cognitive Domain Crisis in Japan"
    Osawa Jun, Chair, Research Project; Senior Fellow, NPI
  • "AI × Intelligence: Applications in Cognitive Warfare"
    Ishii Junya, Geopolitical Analyst; Project Manager, Applied Team, Sakana AI Co., Ltd.

Discussion and Q&A Session

  • Tsuchiya Takahiro, Professor, Institute for Liberal Arts and Sciences, Kyoto University of Foreign Studies
  • Nagasako Tomoko, Researcher, Office of Cyber Domain Awareness, Information-technology Promotion Agency, Japan (IPA)
  • Fuse Satoru, Executive Chief Fellow, Institute for International Socio-Economic Studies (IISE)
  • Miyazaki Yoko, Tokyo Office Director, Okinawa Institute of Science and Technology; Former Senior Research Fellow, SmartNews Media Research Institute
  • Mochinaga Dai, Associate Professor, Cybersecurity Laboratory, College of Systems Engineering and Science, Shibaura Institute of Technology
  • Suzuki Ryohei, Doctoral student, Graduate School of Law and International Relations, Hitotsubashi University

On the day of the webinar, a lively discussion took place with many participants from government ministries and agencies, corporations, research institutions, and the mass media. The main points of the discussion are as follows.


■ Moderator Osawa Jun, "Influence Operations (FIMI), Information Warfare, and Cognitive Warfare: The Emerging Cognitive Domain Crisis in Japan"

  • The proportion of people using the Internet as a source of information has been increasing year by year, and the influence of social media on public opinion formation is also growing. At the same time, the information on social media cannot necessarily be said to be thoroughly verified by those who post it, leaving room for information manipulation and use in influence operations.

  • Influence operations are a form of information warfare aimed at creating division and instability within the target state's society and interfering with decision-making. These operations seek to divide public opinion, weaken institutions, or impact political agendas.

  • Traditionally, Russia's influence operations relied on state-run propaganda media and troll armies to generate disinformation and spread it on social media, aiming to sow societal confusion. In recent years, however, they have increasingly used local influencers and AI-generated "persona accounts" disguised as local citizens to post disinformation. In addition, bot accounts powered by generative AI are used to amplify these posts intensively and disseminate them across social media.

  • In recent years, China has begun incorporating Russian-style influence operations into its own efforts. These reportedly include impersonating voters in the United States to amplify disinformation that stokes political and social conflict there; Chinese actors also operate fake news sites disguised as local media in approximately 30 countries, including Japan, to disseminate information favorable to China.

  • It is necessary to analyze influence operations by China and Russia in real time. However, because the volume of data in the social media sphere is extremely large, visualization through big data analysis, such as that conducted by the European External Action Service (EEAS), is essential. It is also necessary to explore the potential of AI-driven analysis. (A minimal sketch of this kind of volume monitoring follows.)
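
As one concrete illustration of the kind of volume monitoring this implies, the following minimal sketch flags sudden surges in daily post counts on a topic. It is only a stand-in under stated assumptions: the input is a plain list of post timestamps, and the detector is a rolling z-score, whereas systems like the EEAS's combine far larger data and far richer signals.

```python
# Minimal sketch: flag sudden surges in daily post volume on a topic.
# Assumes `timestamps` is a list of datetime objects for matching posts;
# real monitoring pipelines operate at far larger scale with richer signals.
from collections import Counter
from datetime import timedelta
from statistics import mean, pstdev

def daily_counts(timestamps):
    """Count posts per calendar day, filling empty days with zero."""
    counts = Counter(ts.date() for ts in timestamps)
    start, end = min(counts), max(counts)
    days = [start + timedelta(days=d) for d in range((end - start).days + 1)]
    return [(day, counts.get(day, 0)) for day in days]

def flag_bursts(series, window=7, threshold=3.0):
    """Flag days whose volume exceeds the trailing mean by `threshold` sigmas."""
    bursts = []
    for i in range(window, len(series)):
        baseline = [count for _, count in series[i - window:i]]
        mu, sigma = mean(baseline), pstdev(baseline) or 1.0
        day, count = series[i]
        if (count - mu) / sigma > threshold:
            bursts.append((day, count))
    return bursts
```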

■ Presenter Ishii Junya: "AI × Intelligence: Applications in Cognitive Warfare"

  • Sakana AI Co., Ltd. positions defense and intelligence as its main business areas. Social media analysis is a domain in which the strengths of AI can be maximized, and the company recognizes countering cognitive warfare as an urgent issue for Japan. From this perspective, using its proprietary AI technology, the company's team extracts the diverse narratives present in social media and visualizes the discourse space to generate a range of insights (a baseline sketch of this idea appears after this list). In addition, the team applies disinformation detection technologies and conducts concrete analyses.

  • As a recent example, Sakana AI conducted and reported on an analysis across multiple social media platforms of narratives concerning Japan-China relations following Prime Minister Takaichi Sanae's remarks on a potential Taiwan contingency and "Survival-Threatening Situations" on November 7, 2025. The analysis revealed, for example, that critical posts regarding Japan's foreign and security policies did not increase rapidly immediately after the Prime Minister's remarks. Rather, the surge in such posts and the corresponding rise in engagement occurred after November 13, 2025, when Vice Minister of Foreign Affairs Sun Weidong summoned Japan's Ambassador to China Kanasugi Kenji and demanded a retraction of the Prime Minister's remarks. Earlier, on November 9, 2025, posts critical of China had begun to rise sharply, primarily in response to insulting remarks about Prime Minister Takaichi posted on the social media platform X by Consul-General of China in Osaka Xue Jian on November 8, 2025. From this series of developments on social media, it is considered possible to discern China's intentions and understand how China seeks to influence the discourse space.
  • When Sakana AI applied its disinformation detection technology, posts such as "Public safety in Japan is deteriorating" or "Crimes targeting Chinese people are occurring frequently" could not be formally classified as fake, since they were framed as statements by the Chinese government. However, the technology was able to identify that the content of these claims diverged from official announcements by Japanese authorities such as the national government and the National Police Agency. Taking other factors into account, it determined that, viewed comprehensively, these posts are highly likely to constitute disinformation (a simplified consistency check is sketched after this list). The technology likewise judged as inaccurate a post stating that "Japan said it would defend Taiwan."

  • Furthermore, by utilizing agent-based modeling technology, the company is developing a simulation in which AI-generated personas react as users when specific actions are taken on social media, thereby reproducing the dynamics of the social media discourse sphere (a stylized baseline of such a simulation is sketched below).
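
Sakana AI's narrative-extraction technology is proprietary, so the following is only a common public baseline for the idea described in the first bullet above: embed posts, cluster the embeddings, and treat each cluster as a candidate narrative with a representative post. The model name and cluster count are illustrative assumptions, not the company's actual choices.

```python
# Baseline sketch of narrative extraction: embed posts, cluster them, and
# surface a representative post per cluster as a candidate "narrative".
# The embedding model and cluster count are illustrative, not Sakana AI's.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def extract_narratives(posts, n_clusters=8):
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(posts, normalize_embeddings=True)
    km = KMeans(n_clusters=n_clusters, n_init="auto", random_state=0).fit(embeddings)
    narratives = {}
    for label in range(n_clusters):
        members = np.where(km.labels_ == label)[0]
        # The member closest to the centroid stands in for the cluster's narrative.
        rep = members[np.argmax(embeddings[members] @ km.cluster_centers_[label])]
        narratives[label] = {"size": len(members), "representative": posts[rep]}
    return narratives
```

Tracking cluster sizes over time then gives one view of the discourse space: which narratives dominate, and when each began to grow.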
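The disinformation detection described above hinges in part on checking claims against official announcements. A simplified stand-in for that single ingredient is natural language inference (NLI): score each (official statement, claim) pair for contradiction or entailment. The public checkpoint below is an illustrative choice, not the detector Sakana AI actually uses, and a contradiction verdict is only one signal to be weighed with other factors, exactly as in the talk.

```python
# Hedged sketch: check whether a claim is consistent with official statements
# using a public NLI cross-encoder. This stands in for one ingredient of
# Sakana AI's proprietary detector; the checkpoint is an illustrative choice.
from sentence_transformers import CrossEncoder

# Per its model card, this checkpoint scores (premise, hypothesis) pairs over
# the labels ['contradiction', 'entailment', 'neutral'].
nli = CrossEncoder("cross-encoder/nli-deberta-v3-base")
LABELS = ["contradiction", "entailment", "neutral"]

def check_claim(claim, official_statements):
    """Return the NLI verdict of each official statement against the claim."""
    scores = nli.predict([(stmt, claim) for stmt in official_statements])
    return [
        (stmt, LABELS[row.argmax()])
        for stmt, row in zip(official_statements, scores)
    ]
```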
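Finally, the persona simulation in the last bullet builds, conceptually, on classic agent-based cascade models. Sakana AI drives its personas with generative AI, while the stylized baseline below uses a simple threshold rule: an agent reposts once a set fraction of its neighbours have.

```python
# Stylized agent-based sketch of discourse dynamics: personas on a random
# network repost a message once enough neighbours have. This is the classic
# threshold-cascade baseline, not Sakana AI's generative-AI-driven simulation.
import random

def simulate_cascade(n_agents=500, degree=8, threshold=0.25, seeds=5, steps=30):
    rng = random.Random(0)
    neighbours = [rng.sample(range(n_agents), degree) for _ in range(n_agents)]
    active = set(rng.sample(range(n_agents), seeds))  # initial posters
    history = [len(active)]                           # cumulative reach per step
    for _ in range(steps):
        newly = {
            agent for agent in range(n_agents)
            if agent not in active
            and sum(nb in active for nb in neighbours[agent]) / degree >= threshold
        }
        if not newly:
            break                                     # cascade has died out
        active |= newly
        history.append(len(active))
    return history

print(simulate_cascade())  # e.g. how far amplification spreads step by step
```
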

The comments from each panelist are as follows.


■ Panelist Miyazaki Yoko
Cases of disinformation using generative AI have been observed even in U.S. operations targeting Venezuela. Videos showing Venezuelan citizens celebrating the arrest of President Nicolás Maduro were circulated, but fact-checking confirmed that the videos were disinformation. The risk that emotionally charged AI-generated videos could be spread by influencers, amplified by algorithms, and further disseminated across platforms, ultimately allowing disinformation to become established as fact, cannot be ignored.

Regarding potential policy applications of the AI technologies presented, it is possible to rapidly detect false accounts and the reach of individual influencers, and to issue warnings in comments such as "Authenticity is in question and under verification." By predicting in advance how audiences might respond, it is also feasible to design and test counter-narratives, turning detection into proactive public communication informed by likely responses. There is significant value in countermeasure technologies that function outside the platform operators themselves, allowing the accumulation of data and insights.


■ Panelist Fuse Satoru
It is important to recognize that information warfare and influence operations by countries around the world are being conducted even in peacetime.

Attention must be paid to whether the decision-making and quality of debate in democratic countries are being undermined by such influence operations. The fact that corporate-led analyses of influence operations, such as those presented by Mr. Ishii in this session, are being made widely known is significant from the perspective of raising public awareness; it will also contribute to strengthening the public's resilience against risks in the information sphere. I hope that companies, research institutions, and researchers will continue to actively engage in this work.

Furthermore, while social media tends to attract the most attention when analyzing influence operations, it is important to consider the reality that China and Russia spread information and narratives favorable to their own interests through state-run media, thereby giving these messages a sense of authority. From this perspective, the role that "old media" such as television and newspapers can play in countering disinformation must continue to be carefully monitored as well.


■ Panelist Tsuchiya Takahiro
The mere presence of large amounts of disinformation does not necessarily lead to changes in people's behavior. At present, it is difficult to say that the cognitive warfare conducted by state actors has been successful. However, we must remain vigilant about the risk that, in the future, information whose veracity is difficult to determine without the use of AI could spread and lead to changes in people's behavior.


■ Panelist Mochinaga Dai
As AI technology comes into use, it becomes increasingly important to identify the strategic objectives of the actors behind influence operations. Attempts to distort people's cognition through misinformation have existed since before World War II, so we can confirm that the phenomena we see today are not entirely new. What has changed are the tools that produce these phenomena, which can now be said to give them a much greater impact. Going forward, it will be necessary to use new technologies to address influence operations in real time.


■ Panelist Suzuki Ryohei
In addition to technological aspects such as detecting disinformation and organizing narratives, it is important to pay attention to the social context in which disinformation spreads easily. Under conditions such as the rise of populist rhetoric, societal polarization, and dependence on social media, disinformation tends to propagate more readily.

With the latest technology developed by Sakana AI, there may be potential applications for visualizing the degree of division in discussions on specific topics, as well as predicting vulnerability clusters based on social media posting data.
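
As a hypothetical sketch of this "degree of division" idea: given stance scores in [-1, 1] for posts on one topic (from any stance classifier), a simple polarization proxy combines how evenly the camps are split with how far apart they sit. The metric below is illustrative only, not Sakana AI's.

```python
# Illustrative division score: 0 for a one-sided debate, approaching 1 when
# two equally sized camps sit at opposite poles. Assumes stances in [-1, 1].
from statistics import mean

def division_score(stances):
    pro = [s for s in stances if s > 0]
    anti = [s for s in stances if s < 0]
    if not pro or not anti:
        return 0.0                       # one-sided debate: no division
    balance = 4 * (len(pro) / len(stances)) * (len(anti) / len(stances))
    gap = (mean(pro) - mean(anti)) / 2   # 1.0 when the camps sit at +1 and -1
    return balance * gap

print(division_score([0.9, 0.8, 0.6, -0.7, -0.9, -0.8]))  # ~0.78: sharply divided
```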


■ Panelist Nagasako Tomoko
It is important to analyze not only disinformation but also the narratives themselves. For example, in the formation of narratives such as "Japan's militarization," information manipulation can occur through satirical cartoons that contain no outright disinformation, and detecting and countering such methods remains a challenge.

While China and Russia employ different methods in their foreign influence operations, there are also cases of coordination between the two countries. Cases of cross-posting across multiple platforms and of activity spanning different languages have been observed, highlighting the need for multi-layered monitoring.

Additionally, it is crucial to detect AI-generated bot networks before key events such as elections or diplomatic activities. At the same time, mechanisms are needed to capture the "warming-up" activities that occur before disinformation is actively disseminated.
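
One common signal for the bot-network and "warming-up" detection mentioned here is coordination: clusters of accounts posting near-identical text within a short window. The sketch below illustrates only that single signal under simple assumptions; real detection combines many such features.

```python
# Hedged sketch of one coordination signal: many distinct accounts posting
# the same text within a short time window. Real bot-network detection
# combines many such signals; this illustrates only the basic idea.
from collections import defaultdict
from datetime import timedelta

def coordinated_groups(posts, window=timedelta(minutes=10), min_accounts=5):
    """posts: list of (account_id, timestamp, text); returns suspicious groups."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text.strip().lower()].append((ts, account))
    groups = []
    for text, items in by_text.items():
        items.sort()                      # order each text's posts by time
        for i in range(len(items)):
            inside = {acc for ts, acc in items[i:] if ts - items[i][0] <= window}
            if len(inside) >= min_accounts:
                groups.append((text, sorted(inside)))
                break
    return groups
```

Run routinely ahead of elections or major diplomatic events, a rising count of such groups could serve as the kind of early-warning signal the panelist describes.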
