December 19, 2023

Questioning

Data Dilemma


The ethical use of data has become a critical concern in the landscape of artificial intelligence (AI) and chatbots. Instances of data misuse, breaches, and questionable practices have raised eyebrows and prompted discussions about where the boundaries for AI should lie. We’re exploring some specific cases where that line has been crossed.

Bugs, bugs, bugs

Photo by Zac Wolff on Unsplash

On March 20, 2023, an OpenAI data breach exposed personal details for around 1.2% of active users, including conversation titles from their chat history and payment information.

The bug, while short-lived, even got ChatGPT banned in Italy for a period. OpenAI claimed it was compliant with the EU’s General Data Protection Regulation (GDPR). The company made its data policies more transparent as a result, but the incident also serves as a warning for generative AI services that are trained on their users’ data.

Data for research (that you didn't consent to)

Photo by Tim Mossholder on Unsplash

The mental health helpline Shout faced backlash for sharing data with third-party researchers, including Imperial College London. The removal of its promise not to share individual conversations, and the lack of transparency around that change, eroded user trust and highlighted the need for ethical data practices in sensitive domains.

Shout, the UK’s largest crisis text line for urgent mental health support, was scrutinised for breaching user trust by sharing millions of messages, including those from vulnerable users and children under 13, with third-party researchers. Initially launched with a £3 million investment from the Royal Foundation of the Duke and Duchess of Cambridge, Shout assured users that individual conversations would never be shared.

However, concerns arose when that privacy promise was removed from the Shout website in 2021. More than 10.8 million messages were then used in a project with Imperial College London that employed artificial intelligence to predict behaviours, including suicidal thoughts.

“Trust is everything in health – and in mental health most of all. When you start by saying in your FAQs that ‘individual conversations cannot and will not ever be shared’; and suddenly move to training AI on hundreds of thousands of ‘full conversations’, you’ve left behind the feelings and expectations of the vulnerable people you serve” – Cori Crider, lawyer and co-founder of Foxglove, a digital rights advocacy group.

While Shout claims the data was anonymised, privacy experts, data ethicists, and helpline users raised concerns because the project used full conversations to infer users’ gender, age, and disability. The revelation prompted questions about informed consent, especially for vulnerable individuals in crisis, and about the ethics of using sensitive conversations for AI research. Shout subsequently faced an assessment by the Information Commissioner’s Office regarding its handling of user data. It is clear that transparent and ethical data practices are vital in mental health support services.

Another case at Crisis Text Line

Crisis Text Line (CTL) is a suicide prevention hotline where users can get direct support from a trained volunteer. Its troubles came after its UK affiliate Shout was criticised for sharing data with a third-party research project. CTL faced criticism for sharing data with the for-profit company Loris AI for commercial purposes: it was uncovered that the helpline’s data was being used by Loris to develop AI software for corporate customer service packages.

Critics, including former volunteers, found this use of sensitive data disrespectful, stressing the delicate nature of conversations dealing with mental health crises. They argued that such data, often involving vulnerable situations, should remain private. A former volunteer, Reierson, initiated a petition urging the non-profit to stop monetising crisis conversations. The scrutiny prompted CTL to stop data sharing with Loris AI, underlining the importance of accountability.

Pretending to be human

Photo by Count Chris on Unsplash

Smishing is a type of cyberattack that combines SMS with phishing to obtain personal information. Attackers often impersonate a trustworthy entity, such as a bank, government agency, or legitimate service provider. In 2020, a smishing campaign used a fake Apple chatbot to trick users into claiming an iPhone 12. The chatbot guided the individual through a fake claims process before requesting their credit card details to cover shipping. Being cautious of unexpected messages, checking links before clicking, and not sharing personal information are crucial to protecting people who may be vulnerable to smishing.

Summary

In conclusion, the ethical boundaries of chatbot development are often tested, and these real-world examples highlight the need for responsible data practices. The Japeto team ensures all personally identifiable information (PII) is scrubbed. All messages sent to a Japeto chatbot go to Japeto and that’s it. Nowhere else. As AI continues to advance, good data management is paramount to building and maintaining trust.
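As a rough illustration of what PII scrubbing can involve, here is a minimal sketch of a regex-based redaction pass in Python. The patterns, placeholder labels, and the scrub_pii helper are hypothetical examples rather than Japeto’s actual implementation, and a production scrubber would need far broader coverage.

```python
import re

# Illustrative patterns only – real PII scrubbing typically needs much wider
# coverage (names, addresses, card numbers) and often a dedicated library.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub_pii(message: str) -> str:
    """Replace anything matching a PII pattern with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"[{label} removed]", message)
    return message

print(scrub_pii("Call me on +44 7700 900123 or email jo@example.com"))
# -> "Call me on [phone removed] or email [email removed]"
```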

Got a project?

Let us talk it through