The concerning rise of AI content in politics

A deeply offensive AI-generated video depicting a bizarre version of Gaza, Palestine, was recently shared by President Donald Trump on social media. The video — posted to Trump’s Truth Social and Instagram accounts — showed Israeli Prime Minister Benjamin Netanyahu, billionaire Trump ally Elon Musk and the president himself sunbathing in a resort-style rendering of Gaza.

Today, it has become easy for the general public to create content with malicious intent. By using low-cost or free AI tools from companies such as Google and OpenAI, it takes only a simple text prompt to generate realistic media designed to deceive audiences on social media.

Right-wing extremists have been using AI-generated content to promote harmful ideologies and propaganda online. The accessibility of AI allows users to spread misinformation quickly.

For instance, AI-generated images of Trump cuddling cats and ducks went viral on X and other social media platforms after he and Vice President J.D. Vance promoted false and offensive claims about Haitian immigrants in Ohio eating pets.

These posts gained millions of views and thousands of clicks. Some were clearly racist, such as an AI-generated image of Trump running through a field with cats under each arm as two shirtless Black men chase him.

Although this type of content is obviously fake and the claims about Haitian immigrants are baseless, such posts still push a hateful agenda. This is evident in how Trump’s baseless claims have affected Springfield, Ohio. From bomb threats targeting schools to general fears for Haitian immigrants’ safety, the spread of misinformation has damaging real-world consequences.

The term “AI slop” refers to the influx of low-quality, inaccurate content generated by AI tools that has flooded social media. At its best, AI slop is used for automated replies or captions. At its worst, it becomes a tool exploited by those aiming to spread harmful ideas.

Netanyahu, for example, proposed transforming Gaza, after its devastation and the displacement of Palestinians, into a resort-style tourist destination, showcasing AI-generated images of hyper-modern skyscrapers, solar energy fields and more.

The use of AI slop, especially in politics, blurs the line between propaganda, satire and genuine policy proposals. In this case, Netanyahu has a real plan for Gaza, set for completion by 2035. It includes a bullet train connecting Gaza to NEOM, a planned Saudi city whose own construction has been built on an ethnic cleansing campaign.

The misuse of AI tools significantly affects the public’s perception of serious issues, as they create echo chambers of misinformation. Deepfakes are particularly harmful as AI-generated media mimicking public figures become more convincing. While celebrities are the most common targets — such as fake intimate images of Taylor Swift — deepfakes of broadcasters and politicians are also spreading.

Numerous false video clips and recordings of politicians have circulated on social media. For example, former UK Prime Minister Rishi Sunak has been impersonated in several Facebook video advertisements, with some reaching over 400,000 people. These fake advertisements also included videos of TV newsreaders falsely promoting an investment scheme linked to Musk.

Some deepfakes of politicians have even been used to manipulate political outcomes, such as a video in which an AI-generated Ukrainian President Volodymyr Zelenskyy calls on his troops to surrender, circulated just days after Russia’s full-scale invasion. While the video was unconvincing, it underscores the growing threat of AI-generated misinformation in politics and even warfare.

While many people could recognize the video as fake, this type of content undermines public trust in government institutions. As AI technology advances, the risk of more convincing and damaging misinformation campaigns grows. Unlike manipulated videos and photos, AI-generated audio-only deepfakes can be more difficult to verify, as they lack the obvious visual signs of manipulated content. One example involved an AI-generated audio message impersonating then-President Joe Biden, discouraging voters from participating in the New Hampshire primaries.

As AI-generated content becomes more sophisticated and accessible, the threat of political manipulation and misinformation continues to grow. Whether low-quality AI slop or highly convincing deepfakes, AI tools are being used to deceive the public and diminish trust in previously reliable sources.

Without stricter regulations, AI-driven misinformation could further damage democratic processes and negatively affect public perceptions.

Deanza Andriansyah is an Opinion Intern for the winter 2025 quarter. She can be reached at dandrian@uci.edu.

Edited by Jaheem Conley
