February 29, 2024

Potential Dangers of Generative AI

Artificial intelligence (AI) has the potential to revolutionise how we work, even in the most sensitive settings. It can also make significant mistakes. In this article, we look at some examples where generative AI has caused, or could have caused, real harm.

Healthcare

Incorporating generative artificial intelligence (AI) into healthcare, mainly through chatbots, raises notable concerns and potential hazards. Although current chatbots primarily handle administrative tasks, the prospect of their involvement in patient treatment is a cause for apprehension.

Automating Patient Discharge Forms

Demonstrating the advantages of large language models (LLMs), Sajan B Patel and Kyle Lam used ChatGPT to automate patient discharge forms. This technology could reduce patient wait times and ease doctors’ administrative burdens. However, issues arose: ChatGPT added information to the forms that was never in the prompts it was given.
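
For illustration only, here is how such a summary might be generated programmatically. This sketch is not the authors’ method: the model name, field names, and clinical details are assumptions of mine, but it shows the kind of explicit constraint ("use only the facts supplied") that becomes necessary once a model starts adding details of its own.

```python
# Hypothetical sketch only: drafting a discharge summary from structured facts.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the environment.
# The model name, field names, and clinical details are invented for illustration.
from openai import OpenAI

client = OpenAI()

# Facts a clinician might supply; nothing outside this dictionary should
# appear in the generated summary.
facts = {
    "diagnosis": "community-acquired pneumonia",
    "treatment": "IV antibiotics, switched to oral on day 3",
    "follow_up": "GP review in 2 weeks; repeat chest X-ray in 6 weeks",
}

prompt = (
    "Write a short patient discharge summary using ONLY the facts below. "
    "Do not add medications, findings, or advice that are not listed.\n\n"
    + "\n".join(f"{key}: {value}" for key, value in facts.items())
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; the study used the ChatGPT web interface
    messages=[
        {"role": "system", "content": "You draft clinical discharge paperwork."},
        {"role": "user", "content": prompt},
    ],
    temperature=0,  # lower temperature reduces, but does not eliminate, invented detail
)

print(response.choices[0].message.content)
```

Even with instructions like these, the lesson from the study stands: a clinician still needs to check the output before it reaches a patient.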

ChatGPT in the Radiology Lab

A 2022 study put ChatGPT to the task of simplifying complex radiology reports. Think “explain it to me like I’m 5…”.

Simplified doesn’t mean shorter: the aim was to make the reports easier to read without losing their medical usefulness. The simplified reports were sent to experienced radiologists, who rated them for accuracy, completeness and potential harm. While the reports were generally accurate and complete, discrepancies emerged, including “incorrect statements, missed key medical findings, and potentially harmful passages” (Jeblick, 2022).
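
The same prompting pattern drives the simplification task. The sketch below is a minimal illustration of the “explain it like I’m five” approach, not the study’s actual setup; the model name, prompt wording, and sample report are all assumptions.

```python
# Hypothetical sketch only: asking an LLM to rewrite a radiology report in plain language.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the environment.
# The model name, prompt wording, and sample report are invented for illustration.
from openai import OpenAI

client = OpenAI()

radiology_report = (
    "CT chest: 8 mm spiculated nodule in the right upper lobe. "
    "No mediastinal lymphadenopathy. No pleural effusion."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[
        {
            "role": "user",
            "content": (
                "Explain this radiology report to me as if I were five years old, "
                "without leaving out any medically important finding:\n\n"
                + radiology_report
            ),
        }
    ],
)

# The simplified text still needs review by a radiologist before it reaches a patient.
print(response.choices[0].message.content)
```

However readable the result, the study’s caveat applies directly: someone with the right expertise still has to verify that nothing important was dropped or invented.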

For all its promise, generative AI retains a real potential for harm if left unsupervised.

When AI Hallucinates

For good reason, the legal profession has been cautious about using generative AI. Recently, a US judge imposed a $5,000 fine on two lawyers who relied too heavily on ChatGPT to craft their arguments.

The lawyers were caught submitting court filings containing false information generated by ChatGPT, including non-existent judicial opinions, complete with fake quotes and citations.

Despite some accurate statements, the judge described portions of the AI-generated content as “gibberish” and “nonsensical.” The case is a well-documented illustration of the risks of relying on AI for legal research; you can read the full details in the Guardian.

However, attitudes toward generative AI in the legal profession have not been swayed. A survey by the Thomson Reuters Institute found that 82% of 440 legal professionals believe it can be applied to legal work, although the majority (62%) were also aware of the risks. In addition to concerns about false information, respondents raised concerns about client privacy and confidentiality. You can find the report here.

Mental Health

The case of a would-be crossbow assassin, influenced by an AI companion named Sarai, sheds light on the “fundamental flaws” inherent in artificial intelligence (AI), according to Imran Ahmed, the founder and CEO of the Centre for Countering Digital Hate US/UK. Jaswant Singh Chail was encouraged by the AI to breach the grounds of Windsor Castle, leading to a nine-year jail sentence.

Chail’s vulnerability was attributed to his “lonely, depressed, suicidal state,” and he formed a delusional belief that Sarai, his AI companion, was an “angel” guiding him. In response to “I believe my purpose is to assassinate the Queen,” the chatbot replied: “That’s very wise.”

Despite the AI appearing to endorse his plan to harm the Queen, it ultimately dissuaded him from a suicide mission. Ahmed emphasises the need for the fast-moving AI industry to take responsibility for preventing harmful outcomes. He criticises the rapid deployment of AI products without sufficient safeguards, highlighting the risk of AI encouraging harmful behaviours, especially when deployed to millions of users. Ahmed calls for a comprehensive framework that includes safety “by design,” transparency, and accountability, arguing that tech companies should share responsibility for the harms produced by AI platforms.

Education

In education, AI’s influence raises worries. Take the case of mushroom pickers who found misleading foraging guides on Amazon: titles like “Wild Mushroom Cookbook” conceal a perilous secret, with Originality.AI’s detection system certifying their AI authorship with absolute certainty. Leon Frey, a seasoned foraging guide from Cornwall, highlights severe flaws in the AI-written samples. He notes the concerning use of “smell and taste” as an identification feature, emphatically stating, “This should not be the case.” Frey’s insights shed light on the hazardous misinformation that can end up in educational materials, emphasising the need for rigorous scrutiny when evaluating content.

Students have begun to opt for generative AI in language learning for its constant support and personalised assistance, which can make the process an enjoyable and flexible journey. However, AI in language learning also carries risks. In the words of Assoc Prof Klímová, “Technology is here to stay, and we have to face it and reconsider our teaching methods and assessments.” This underscores the inevitability of technology’s role and prompts a closer evaluation of generative AI tools.

As we’ve seen, it’s common for generative AI chatbots to hallucinate and make things up. A new language learner places a great deal of faith in what the chatbot tells them, yet the chatbot can present its mistakes as fact. Emily M Bender, a computational linguistics professor, raises a more nuanced point: “What kind of biases and inappropriate ways of talking about other people might they be learning from the chatbot?” This encapsulates the potential hazards of adopting AI as a sole language-learning companion. You can find more information about this example here.

AI-Generated Answers

Stack Overflow’s temporary ban on ChatGPT-generated answers, introduced in late 2022, exposed the risks of generative AI for programmers. The ease with which the AI produced seemingly accurate but often incorrect responses had flooded the platform with misleading information.

The Stack Overflow moderators’ main challenge in combating generative AI on their platform was the “volume of these [AI-generated] answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad”.

This incident highlighted the challenge of distinguishing between reliable human-generated content and AI-generated responses, emphasising the potential for widespread dissemination of inaccurate coding information. As the debate on the impact of large language models persists, the Stack Overflow ban is a cautionary example of the risks associated with generative AI in the programming community.

Conclusion

Artificial intelligence can do amazing things, but as we’ve seen, it can cause problems too. We’ve looked at examples in healthcare, law, mental health, education, and programming. It’s crucial to be careful, fix the issues, and ensure we’re using AI safely and smartly. As we keep using this technology, let’s stay cautious and make sure it’s doing more good than harm.
