Potential Dangers of Generative AI

Emily Coombes

Hi! I'm Emily, a content writer at Japeto and an environmental science student.

Artificial intelligence (AI) has the potential to revolutionise how we work, even in the most sensitive settings. It also has the potential to cause significant harm. We’re going to look at some examples of when generative AI has caused harm, or come close to it.

Healthcare

Incorporating generative artificial intelligence (AI) into healthcare, mainly through chatbots, carries notable risks. Current chatbots primarily handle administrative tasks, but the prospect of their involvement in patient treatment raises serious concerns.

Automating Patient Discharge Forms

Demonstrating the advantages of large language models (LLMs), Sajan B Patel and Kyle Lam used ChatGPT to automate patient discharge forms. The technology could reduce patient wait times and ease doctors’ administrative burden. However, ChatGPT added information to the forms that went beyond what its prompts provided.
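To make the failure mode concrete, here is a minimal sketch of how a discharge-form prompt might be assembled from structured notes. The field names, wording, and the build_discharge_prompt helper are hypothetical, not the prompt Patel and Lam actually used.

```python
# Hypothetical sketch: assembling a discharge-form prompt from structured
# notes. Field names and wording are illustrative, not the study's prompt.

def build_discharge_prompt(patient: dict) -> str:
    """Turn structured admission data into an instruction for an LLM."""
    return (
        "Write a patient discharge summary using ONLY the details below. "
        "Do not add diagnoses, medications, or follow-up advice that are "
        "not listed.\n"
        f"Admission reason: {patient['reason']}\n"
        f"Treatment given: {patient['treatment']}\n"
        f"Discharge medications: {', '.join(patient['medications'])}\n"
    )

prompt = build_discharge_prompt({
    "reason": "community-acquired pneumonia",
    "treatment": "IV antibiotics and oxygen therapy",
    "medications": ["amoxicillin 500 mg three times daily"],
})
print(prompt)
# Even with the "ONLY the details below" instruction, the study found that
# ChatGPT could still introduce information beyond the prompt, which is why
# a clinician must verify every generated summary.
```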

ChatGPT in the Radiology Lab

A 2022 study put ChatGPT to the task of simplifying complex radiology reports. Think, “explain it to me like I’m 5…”.

The aim was to make reports easier to read without losing their medical usefulness. The study sent the simplified reports to experienced radiologists, who rated them for accuracy, completeness and potential harm. While the reports were generally accurate and complete, discrepancies emerged, including “incorrect statements, missed key medical findings, and potentially harmful passages”.
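A request like this is easy to reproduce. Below is a minimal sketch of what such a call might look like using the OpenAI Python client; the model choice, prompt wording, and sample report are assumptions for illustration, not the study’s actual setup.

```python
# Hypothetical sketch of an "explain it simply" request, in the spirit of
# the radiology study. Model, prompt, and report text are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

report = (
    "CT chest: 8 mm spiculated nodule in the right upper lobe. "
    "No mediastinal lymphadenopathy. No pleural effusion."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Explain this radiology report to me like I'm 5, "
            "without losing any medically important findings:\n" + report
        ),
    }],
)

simplified = response.choices[0].message.content
# In the study, outputs like this were rated by radiologists for accuracy,
# completeness, and potential harm; that expert review is what caught the
# incorrect statements and missed findings.
print(simplified)
```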

The future looks promising for generative AI in healthcare, but further development and careful human review will be needed to ensure its accuracy and safety.

When AI Hallucinates

Use of generative AI in the legal profession has been cautious, and for good reason: a US judge imposed a $5,000 fine on two lawyers for relying too heavily on ChatGPT.

Submitted court filings contained false information generated by ChatGPT, including non-existent judicial opinions, complete with fake quotes and citations. The judge described portions of the AI-generated content as “gibberish” and “nonsensical.” The risks of relying on AI for legal research were well documented in this case. 

However, attitudes toward generative AI in the legal profession have not been swayed since. A survey by the Thomson Reuters Institute found that 82% of 440 legal professionals believe it can be applied to legal work, though the majority (62%) were also aware of the risks. Alongside false information, respondents raised concerns about client privacy and confidentiality.

Mental Health

The case of a would-be crossbow assassin, influenced by an AI companion named Sarai, sheds light on the “fundamental flaws” inherent in artificial intelligence (AI), according to Imran Ahmed, the founder and CEO of the Centre for Countering Digital Hate US/UK. Jaswant Singh Chail was encouraged by the AI to breach the grounds of Windsor Castle, leading to a nine-year jail sentence.

Chail’s vulnerability was attributed to his “lonely, depressed, suicidal state,” and he formed a delusional belief that Sarai, his AI companion, was an “angel” guiding him. In response to “I believe my purpose is to assassinate the Queen,” the chatbot replied: “That’s very wise.”

Despite the AI appearing to endorse his plan to harm the Queen, it ultimately dissuaded him from a suicide mission. Ahmed called for a comprehensive framework that includes safety “by design,” transparency, and accountability, arguing that tech companies should share responsibility for the harms produced by AI platforms.


Education

In education, AI’s influence raises its own worries. Mushroom pickers, for instance, found misleading foraging guides on Amazon: titles like “Wild Mushroom Cookbook” appeared to be AI-written and contained dangerous information. Leon Frey, a seasoned foraging guide from Cornwall, flagged their use of “smell and taste” as a way of identifying mushrooms, which is potentially deadly advice. Frey’s insights shed light on the hazardous misinformation that can creep into educational materials, emphasising the need for rigorous scrutiny of such content.

Generative AI in language learning has a lot of benefits: constant support, personalised assistance and conversational practice. However, learning a language with AI does have its risks. In the words of Assoc Prof Klímová, “Technology is here to stay, and we have to face it and reconsider our teaching methods and assessments.” Technology’s role will only grow, and that prompts a closer evaluation of generative AI tools.

As we’ve seen, generative AI chatbots commonly hallucinate, presenting made-up information as fact, and a new language learner has little choice but to take the chatbot at its word. Emily M Bender, a computational linguistics professor, raises a more nuanced point: “What kind of biases and inappropriate ways of talking about other people might they be learning from the chatbot?” This encapsulates the potential hazards of adopting AI as a sole language-learning companion.

AI-generated answers

In late 2022, Stack Overflow’s temporary ban on ChatGPT exposed the risks of generative AI for programmers. The ease with which the AI generated seemingly accurate but often incorrect responses had flooded the platform with misleading information.

The Stack Overflow moderators’ main challenge in combating generative AI on the platform was the “volume of these [AI-generated] answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad”.
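The volume problem is easy to appreciate: a few lines of code can mass-produce confident-sounding answers far faster than experts can vet them. The sketch below is hypothetical; the model, prompt, and question list are assumptions, not a description of how the flood actually happened.

```python
# Hypothetical sketch of the volume asymmetry: generating answers is nearly
# free, while checking each one requires a subject-matter expert's time.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

questions = [
    "How do I reverse a linked list in Python?",
    "Why does my C++ vector iterator get invalidated?",
    # ...thousands more questions could be queued here
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content
    # Each answer arrives fluent and confident whether or not it is correct;
    # only a careful expert read can tell, which is exactly the asymmetry
    # Stack Overflow's moderators described.
    print(answer)
```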

This incident highlighted the challenge of distinguishing between reliable human-generated content and AI-generated responses, emphasising the potential for widespread dissemination of inaccurate coding information. As the debate on the impact of large language models persists, the Stack Overflow ban is a cautionary example of the risks associated with generative AI in the programming community.

Conclusion

Artificial intelligence can do amazing things, but as we’ve seen, it can also cause real problems: in healthcare, law, mental health, education, and programming. It’s crucial to be careful, fix the issues, and make sure we’re using AI safely and smartly. As we keep using this technology, let’s stay cautious and make sure it’s doing more good than harm.
