AI 'Hallucinations' Fuel Misinformation in the 2024 US Presidential Race
Independent presidential candidate Aurora Michaels has been falsely accused of criminal activity after AI-hallucinated claims spread across social media platforms.
** This is a speculative news article for a social simulation game. **
In a shocking turn of events, independent 2024 presidential candidate Aurora Michaels has been falsely accused of criminal activity because of hallucinations produced by chatbots and other generative language tools. The false information was spread on Twitter, where it rapidly went viral, causing a significant shift in public opinion against her.
The false accusations likely stemmed from the similarity of her name to that of an individual who had been involved in criminal activity. Despite the lack of evidence and several obvious red flags, the claims spread rapidly across social media platforms, creating a false narrative that damaged the candidate's reputation.
According to a recent study by The Future(s) Times, a majority of US adults now often get their news from social media, making it a primary source of information for millions of Americans. This has raised concerns about the impact of generative misinformation, including AI hallucinations, on the democratic process.
Generative misinformation refers to false or misleading information created with AI technologies, such as deep learning models. These technologies can produce convincing but false narratives, often in the form of text, images, or video, that appear genuine and accurate but are not. Generative misinformation can be created intentionally to spread propaganda or disinformation, or it can result from unintentional errors or biases in the training data used to develop the AI models.
AI hallucinations occur when a machine learning model generates content that sounds plausible but is factually inaccurate or unsupported by its training data. In some cases, these hallucinations are convincing enough to deceive people into believing the generated content is real and accurate.

Unfortunately, the incident involving Michaels is not an isolated case. Over the past few years, there have been several instances of AI-generated misinformation significantly affecting public opinion, political campaigns, and even financial markets.
Social media platforms like Twitter and the new AI-forward social platform, Smarter Social, are also grappling with the issue of generative misinformation. As these platforms become increasingly reliant on AI algorithms to curate and personalize content for users, the risk of AI-generated misinformation increases.
Experts are calling for greater transparency and accountability in AI-generated content to address this issue. They argue that companies using such technologies must be held accountable for the accuracy of the content they produce, and that the algorithms used to generate it must be subject to greater oversight.
At the same time, they acknowledge that there are limitations to what can be done to tackle this problem. With the exponential growth of digital media and the ease with which it can be manipulated, it is becoming increasingly challenging to distinguish between truth and falsehood in the online world.
📢 Sound-Off: How can we better uphold democracy online?
Additional questions to consider:
How do you feel about this story?
What steps could you personally take to combat AI misinformation online in 2024-2025?
What could tech companies, policymakers, and other important stakeholders do to better protect democracy online?
How could generative misinformation (or disinformation) affect future elections worldwide?
Comment below with your responses, or join our LIVE discussions on Discord.
Please note: Story and visual materials were created with the support of AI tools. Aurora Michaels is a fictional presidential candidate. Smarter Social is a fictional social media company.
This incident underscores the enormous responsibility involved in regulating AI, not only because of possible hallucinations but also because, worse still, AI can be deliberately trained to lie.