Speaking in a slightly stilted tone and with awkward grammar, an American-accented voice in a YouTube video last month ridiculed Washington’s handling of the war between Israel and Hamas, claiming the United States was unable to “play its role as a mediator like China” and “now finds itself in a position of significant isolation.” The 10-minute post was one of 4,500 videos in a large network of YouTube channels spreading pro-China and anti-U.S. narratives, according to a report this week from the Australian Strategic Policy Institute, a security-focused think tank.
Some videos used artificially generated avatars or voice-overs, making the campaign the first influence operation known to the institute to pair A.I. voices with video essays. The campaign’s goal, according to the report, was to influence global opinion in favor of China and against the United States. The videos promoted narratives that Chinese technology was superior to America’s, that the United States was doomed to economic collapse, and that China and Russia were responsible geopolitical players. Some of the clips fawned over Chinese companies like Huawei and denigrated American companies like Apple.
Content from at least 30 channels in the network has drawn nearly 120 million views and 730,000 subscribers since last year, along with occasional ads from Western companies, the report found. Some of the videos featured titles and scripts that appeared to be direct translations of common Chinese phrases and the names of Chinese companies, the report said. Others mentioned information traceable to news stories produced and circulated primarily in mainland China. Disinformation, such as the false claim that some Southeast Asian nations had adopted the Chinese yuan as their own currency, was common. The videos often reacted quickly to current events. Jacinta Keast, an analyst at the Australian institute, wrote that the coordinated campaign might be “one of the most successful influence operations related to China ever witnessed on social media.”
YouTube said in a statement that its teams work around the clock to protect its community, adding that “we have invested heavily in robust systems to proactively detect coordinated influence operations.” The company said it welcomed research efforts and that it had shut down several of the channels mentioned in the report for violating the platform’s policies. Efforts to push pro-China messaging have proliferated in recent years, but have featured largely low-quality content that attracted limited engagement or failed to sustain meaningful audiences, Ms. Keast said.
“This campaign actually leverages artificial intelligence, which gives it the ability to create persuasive threat content at scale at a very limited cost compared to previous campaigns we’ve seen,” she said. Several other recent reports have suggested that China has become more aggressive in pushing propaganda denigrating the United States. Historically, its influence operations have focused on defending the Communist Party government and its policies on issues like the persecution of Uyghurs or the fate of Taiwan.
China began targeting the United States more directly during the mass pro-democracy protests in Hong Kong in 2019 and continued through the Covid-19 pandemic, echoing longstanding Russian efforts to discredit American leadership and influence at home and abroad. Over the summer, researchers at Microsoft and other companies unearthed evidence of inauthentic accounts that China employed to falsely accuse the United States of using energy weapons to ignite the deadly wildfires in Hawaii in August. In a report in September, the State Department accused China of using “deceptive and coercive methods” to shape the global information environment, including the creation of fake social media accounts and even fake news organizations. Other research suggests that China has actively spread disinformation in Taiwan claiming that the United States will eventually betray the island.
Meta announced last month that it had removed 4,789 Facebook accounts from China that were impersonating Americans to debate political issues, warning that the campaign appeared to be laying the groundwork for interference in the 2024 presidential elections. It was the fifth network with ties to China that Meta had detected this year, the most of any country. The advent of artificial intelligence seems to have drawn special interest from Beijing. Ms. Keast of the Australian institute said that disinformation peddlers were increasingly using easily accessible video editing and A.I. programs to create large volumes of convincing content.
She said that the network of pro-China YouTube channels most likely fed English-language scripts into readily available online text-to-video software or other programs that require no technical expertise and can produce clips within minutes. Such programs often allow users to select A.I.-generated voice narration and customize the gender, accent and tone of voice. Some of the voices used in the pro-China network were clearly synthetic. Ms. Keast noted that the audio lacked natural pauses and included pronunciation mistakes and occasional notes of electronic interference. Occasionally, multiple channels in the network used the same voice. (One group of videos, however, tried to dupe viewers into thinking a real person was speaking, incorporating audio such as “I’m your host, Steffan.”)
In 39 of the videos, Ms. Keast found at least 10 artificially generated avatars advertised by a British A.I. company. She wrote that she also discovered what may be the first example in an influence operation of a digital avatar created by a Chinese company — a woman in a red dress named Yanni. The scale of the pro-China network is probably even larger, according to the report. Similar channels appeared to target Indonesian and French people. Three separate channels posted videos about chip production that used similar thumbnail images and the same title translated into English, French and Spanish.