
ChatGPT’s human voice mode is so convincing that people may become ’emotionally reliant’ on it, OpenAI fears


CHATGPT’S human voice mode is so realistic that people may become “emotionally reliant” on it, its creators warn.

OpenAI, who are behind ChatGPT, have revealed concerns that users may develop an emotional dependency on the chatbot’s forthcoming voice mode.

OpenAI has warned its users could become ’emotionally dependent’ on ChatGPT’s voice mode (Credit: Getty)

The ChatGPT-4o voice mode is currently being analysed for safety before being rolled out.

It allows users, to some extent, to speak naturally with the AI assistant as if it were a real person.

While this may be very convenient for many users, it carries the risk of emotional reliance and “increasingly miscalibrated trust” in an AI model, risks that could be exacerbated by interactions with a remarkably human-like voice.

That voice is also capable of taking the user’s emotions into account via their tone of voice.

The findings of the safety review were published this week and highlighted concerns about language that reflected a sense of shared bonds between the human and the AI, Wired reports.

The review said: “While these instances appear benign, they signal a need for continued investigation into how these effects might manifest over longer periods of time.”

It also warned that dependence on the AI might affect relationships with other humans.

The paper said: “Human-like socialisation with an AI model may produce externalities impacting human-to-human interactions.

“For instance, users might form social relationships with the AI, reducing their need for human interaction – potentially benefiting lonely individuals but possibly affecting healthy relationships.

“Extended interaction with the model might influence social norms.

“For example, our models are deferential, allowing users to interrupt and ‘take the mic’ at any time, which, while expected for an AI, would be anti-normative in human interactions.”

The review also pointed out the possibility of over-reliance and dependence.

It said: “The ability to complete tasks for the user, while also storing and ‘remembering’ key details and using those in the conversation, creates both a compelling product experience and the potential for over-reliance and dependence.”

The team behind the study said there will be further work carried out on the potential for emotional reliance on the voice-based version of ChatGPT.

The feature attracted widespread attention earlier this summer due to the voice’s remarkable resemblance to the actress Scarlett Johansson.

The Hollywood A-lister, 39, who voiced an AI that its user fell in love with in the movie Her, turned down an offer to voice OpenAI’s assistant.

What is ChatGPT?

ChatGPT is a new artificial intelligence tool

ChatGPT, which was launched in November 2022, was created by San Francisco-based startup OpenAI, an AI research firm.

It’s part of a new generation of AI systems.

ChatGPT is a language model that can produce text.

It can converse, generate readable text on demand and produce images and video based on what it has learned from a vast database of digital books, online writings and other media.

ChatGPT essentially works like a written dialogue between the AI system and the person asking it questions.

GPT stands for Generative Pre-Trained Transformer and describes the type of model that can create AI-generated content.

If you give it a prompt, for example asking it to “write a short poem about flowers”, it will create a chunk of text based on that request.
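
For the technically curious, the lines below are a minimal sketch of how that same kind of prompt could be sent to an OpenAI model from code rather than the chat window. It assumes the openai Python package is installed and an API key is stored in the OPENAI_API_KEY environment variable; the model name shown is purely illustrative.

    # Minimal sketch: sending a prompt to an OpenAI model from Python.
    # Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # picks up the API key from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": "Write a short poem about flowers"}],
    )

    # The generated text comes back as the assistant's reply.
    print(response.choices[0].message.content)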

ChatGPT can also hold conversations and even learn from things you’ve said.

It can handle very complicated prompts and is even being used by businesses to help with work.

But note that it might not always tell you the truth.

“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” OpenAI CEO Sam Altman said in 2022.

While the end result sounds suspiciously like her, OpenAI’s boss Sam Altman has insisted her voice wasn’t cloned.

OpenAI is not alone in recognising the potential risk of AI assistants that mimic human interaction.

Google DeepMind issued a paper in April which examined the potential ethical challenges raised by more capable AI assistants.

Co-author Iason Gabriel, a staff research scientist at Google DeepMind, told Wired that chatbots’ ability to use language “creates this impression of genuine intimacy,” saying he had found an experimental voice interface for Google DeepMind’s AI to be especially sticky.

He said: “There are all these questions about emotional entanglement.”

The ChatGPT-4o voice mode is currently being analysed for safety before being rolled out (Credit: Getty)
