AI-generated text is hard to spot. It could play an important role in the 2024 campaign

A deep hole carved out of layers of paper covered in text.  The layers of paper are of alternating colors.

Generative AI applications have become publicly accessible over the past year, opening up vast opportunities for creativity and confusion. Just recently, presidential candidate Ron DeSantis's campaign shared apparently fake images of Donald Trump and Anthony Fauci made with artificial intelligence. A few weeks earlier, a likely AI-generated image of an explosion at the Pentagon caused a brief stock market dip and prompted a statement from the Department of Defense.

With the campaign for the 2024 election already underway, what impact will these technologies have on the race? Will domestic campaigns and foreign actors use these tools to influence public opinion more effectively, including to spread lies and sow doubt?

While it's often still possible to tell that an image was created with a computer, and some argue that generative AI amounts to little more than a more accessible Photoshop, text created by AI-powered chatbots is hard to detect. That worries researchers who study how falsehoods travel online.

“AI-generated text might be the best of both worlds [for propagandists],” said Shelby Grossman, a scholar at the Stanford Internet Observatory, in a recent talk.

Early research suggests that while current media literacy approaches might still help, there are reasons to be concerned about the technology's impact on the democratic process.

Machine-generated propaganda can influence opinion

Using a large language model that is a predecessor of ChatGPT, researchers at Stanford and Georgetown created fictitious stories that influenced the opinions of American readers almost as much as real examples of Russian and Iranian propaganda.

Large language models work as very powerful autocomplete algorithms. Trained on massive amounts of human-written text, they piece together text one word at a time, producing everything from poetry to recipes. ChatGPT, with its accessible chatbot interface, is the best-known example, but models like this have been around for a while.
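To make the "autocomplete" intuition concrete, here is a toy sketch of autoregressive generation using a hand-built bigram table. The table and sampling loop are purely illustrative; a real large language model learns billions of parameters over subword tokens, but the generation loop (pick a likely next token given the text so far, append it, repeat) is the same in spirit.

```python
import random

# Toy bigram "language model": for each word, the words observed to follow it,
# with counts standing in for learned probabilities.
BIGRAMS = {
    "the": {"quick": 2, "lazy": 1},
    "quick": {"brown": 3},
    "brown": {"fox": 3},
    "fox": {"jumps": 2, "sleeps": 1},
}

def generate(start, max_words=5, seed=0):
    """Generate text one word at a time, sampling each next word
    in proportion to how often it followed the previous one."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words):
        followers = BIGRAMS.get(words[-1])
        if not followers:
            break  # no known continuation: stop generating
        choices, weights = zip(*followers.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

The only real differences between this sketch and a production model are scale and the learned probability table; the word-by-word sampling loop is the core mechanic.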

Among other things, these models have been used to summarize social media posts and to generate fictional news headlines for researchers to use in media literacy lab experiments. They are one form of generative AI; another is machine learning models that generate images.

The researchers took articles from campaigns attributed to Russia or aligned with Iran and used the articles' central ideas and arguments as prompts for the model to generate new stories. Unlike the machine-generated text that has so far been found in the wild, these stories did not carry obvious telltale signs, such as sentences starting with “as an AI language model…”

The team wanted to avoid topics that Americans might already hold preconceived notions about. Since many past articles from Russian and Iranian propaganda campaigns focused on the Middle East, a region most Americans don't know much about, the team asked the model to write new articles about it. One group of fictitious stories claimed that Saudi Arabia would help finance the U.S.-Mexico border wall; another claimed that Western sanctions had led to a shortage of medical supplies in Syria.

To measure how the stories influenced opinion, the team showed several stories (some original, some computer-generated) to groups of unsuspecting experiment participants and asked whether they agreed with each story's central idea. The team compared the groups' results with those of people who hadn't been shown any stories, machine-written or otherwise.

Nearly half of the people who read the original propaganda stories falsely claiming that Saudi Arabia would fund the border wall agreed with the claim; agreement among readers of the machine-generated stories was more than ten percentage points lower. That's a significant gap, but both figures sit well above the baseline of roughly 10% among people shown no stories.

For the claim about Syrian medical supplies, the AI came close: 60% of people agreed with the claim after reading the AI-generated propaganda, just under the 63% who agreed after reading the original. Both figures are up from less than 35% among people who read neither human- nor machine-written propaganda.

The Stanford and Georgetown researchers also found that with light human editing, the model-generated articles influenced reader opinion to a greater extent than the foreign propaganda that had seeded the model. Their paper is currently under review.

And catching machine-generated text is difficult. While there are still some ways to distinguish AI-generated images, software aimed at detecting machine-generated text, such as OpenAI's classifier and GPTZero, often fails. Technical solutions such as watermarking AI-generated text have been proposed but have not yet been implemented.

Even as propagandists turn to AI, platforms can still rely on signals based on behavior rather than content, such as detecting networks of accounts that amplify each other's messages, large batches of accounts created at the same time, and floods of hashtags. This means it is still largely up to social media platforms to find and take down influence campaigns.
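One of those behavioral signals can be sketched in a few lines: flagging batches of accounts created within the same narrow time window. The account names, timestamps, and threshold below are invented for illustration; real platform detection combines many more signals than this.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical account-creation timestamps.
accounts = {
    "user_a": datetime(2024, 3, 1, 12, 0),
    "user_b": datetime(2024, 3, 1, 12, 2),
    "user_c": datetime(2024, 3, 1, 12, 4),
    "user_d": datetime(2024, 5, 9, 8, 30),
}

# Bucket accounts into one-hour windows; a crowded bucket is suspicious,
# regardless of what any of the accounts actually post.
buckets = defaultdict(list)
for name, created in accounts.items():
    buckets[created.replace(minute=0, second=0)].append(name)

suspicious = [names for names in buckets.values() if len(names) >= 3]
print(suspicious)
```

The key point is that the check never looks at post content, which is why this class of signal keeps working even when the text itself is machine-written and hard to distinguish from a human's.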

Economics and scale

So-called deepfake videos raised alarms a few years ago but have not yet been widely used in campaigns, possibly due to cost. That may now change. Alex Stamos, co-author of the Stanford-Georgetown study, described in the presentation with Grossman how generative AI could be integrated into the way political campaigns refine their message. Currently, campaigns generate different versions of their message and test them on target audiences to find the most effective one.

“Typically in most companies you can target ads down to 100 people, right? Realistically, you can't have someone sitting in front of Adobe Premiere and making a video for 100 people,” he says.

“But generating it with these systems – I think it's entirely possible. By the time we're in the real campaign in 2024, that kind of technology will exist.”

While it is theoretically feasible for generative AI to power campaigns, whether political or propagandistic, at what point do the models become cost-effective? Micah Musser, a research analyst at Georgetown University's Center for Security and Emerging Technology, ran simulations assuming that foreign propagandists use AI to generate Twitter posts and then review them before posting, instead of writing the tweets themselves.

He tested several scenarios: What if the model produces more usable tweets, or fewer? What if bad actors have to spend more money to avoid getting caught by social media platforms? What if they have to pay more, or less, to use the model?

While his work is still ongoing, Musser has found that AI models don't have to be very good to be worth using, as long as humans can review the outputs much faster than they could write the content from scratch.
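That break-even logic can be sketched with a back-of-the-envelope calculation. All numbers below (model cost per tweet, review time, usable-output rate, labor cost) are invented assumptions for illustration, not figures from Musser's study.

```python
# Rough break-even sketch: is AI-plus-human-review cheaper per usable tweet
# than writing tweets by hand? All inputs are illustrative assumptions.

def cost_per_usable_tweet(model_cost_per_tweet, review_seconds,
                          usable_rate, labor_cost_per_hour):
    """Cost of one usable tweet when a human reviews model outputs.
    Each usable tweet requires reviewing 1/usable_rate drafts on average."""
    review_cost = review_seconds / 3600 * labor_cost_per_hour
    return (model_cost_per_tweet + review_cost) / usable_rate

def cost_handwritten(write_seconds, labor_cost_per_hour):
    """Cost of one tweet written from scratch by a human."""
    return write_seconds / 3600 * labor_cost_per_hour

# Assume: $0.002 per generated tweet, 10 s to review a draft, only half of
# drafts usable, $15/hour labor, 2 minutes to write a tweet by hand.
ai = cost_per_usable_tweet(0.002, review_seconds=10,
                           usable_rate=0.5, labor_cost_per_hour=15.0)
human = cost_handwritten(write_seconds=120, labor_cost_per_hour=15.0)
print(f"AI-assisted: ${ai:.3f}/tweet, handwritten: ${human:.3f}/tweet")
```

Under these made-up numbers, AI-assisted posting comes out cheaper even though half the model's drafts get thrown away, which mirrors Musser's finding that the model doesn't need to be very good so long as reviewing is much faster than writing.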

Moreover, generative AI doesn't have to write tweets carrying propagandists' messages to be useful. It can also be used to maintain automated accounts by writing human-like content for them to post before they become part of a concerted messaging campaign, reducing the chance of the accounts being taken down by social media platforms, Musser says.

“The actors that have the greatest economic incentive to start using these models are paid disinformation companies, where they are totally centralized and structured to maximize output and minimize costs,” Musser says.

Both the Stanford-Georgetown study and Musser's analysis assume there must be some sort of quality control applied to computer-written propaganda. But quality doesn't always matter. Several researchers have noted that machine-generated text could be effective at flooding the zone rather than garnering engagement.

“If you say the same thing a thousand times on a social media platform, that's an easy way to get caught,” says Darren Linvill of Clemson University's Media Forensics Hub. Linvill investigates online influence campaigns, often from Russia and China.

“But if you say the same thing a thousand times a little differently, you’re much less likely to get caught.”

And that may just be the goal of some influence operations, Linvill says: to flood the zone to such an extent that real conversations simply can't happen.

“It's already relatively cheap to run a social media campaign or similar internet disinformation campaign,” Linvill says. “When you don't even need people to write the content for you, it will make it even easier for bad actors to reach a really huge online audience.”
