Friday, 14 July 2023

ChatGPT creator in trouble: OpenAI probed over false, harmful statements about real people

US regulators are investigating artificial intelligence company OpenAI over the potential risks that ChatGPT, its AI chatbot, poses to consumers by generating false information.

The Federal Trade Commission (FTC), in a letter to OpenAI, has requested information on how the company addresses the reputational risks its products pose to individuals. The inquiry reflects the growing regulatory scrutiny of this technology.

Sam Altman in trouble?
OpenAI’s CEO, Sam Altman, has stated that the company will cooperate with the FTC. Unlike traditional internet searches, which return a list of links, ChatGPT produces human-like responses in a matter of seconds. This and similar AI products are expected to significantly change the way people access online information.

Competing tech companies are rushing to develop their own versions of this technology, which has sparked intense debates on issues such as data usage, response accuracy, and potential violations of authors’ rights during the training process.

Altman said OpenAI had spent years on safety research and months making ChatGPT “safer and more aligned before releasing it”.

“We protect user privacy and design our systems to learn about the world, not private individuals,” he said on Twitter.

The FTC’s letter inquires about the measures taken by OpenAI to address the potential generation of false, misleading, disparaging, or harmful statements about real individuals. The FTC is also examining OpenAI’s approach to data privacy, including data acquisition for training and informing the AI system.

OpenAI’s potential for errors
Altman reiterated OpenAI’s commitment to safety research and to protecting user privacy, saying the company designs its systems to learn about the world rather than about private individuals.

Altman had previously appeared before Congress and acknowledged the technology’s potential for errors. He advocated for regulations and the creation of a new agency to oversee AI safety, expressing the company’s willingness to collaborate with the government to prevent mishaps.

“I think if this technology goes wrong, it can go quite wrong… we want to be vocal about that,” Altman said at the time. “We want to work with the government to prevent that from happening.”

AI hallucinations are dangerous
The Washington Post initially reported the FTC investigation, providing a copy of the letter. The FTC, led by Chair Lina Khan, has been actively policing major tech companies, prompting debates about the extent of the agency’s authority.

“We’ve heard about reports where people’s sensitive information is showing up in response to an inquiry from somebody else,” Ms Khan said.

“We’ve heard about libel, defamatory statements, flatly untrue things that are emerging. That’s the type of fraud and deception that we are concerned about,” she added.

Khan has voiced concerns about ChatGPT’s output, citing instances of sensitive information being exposed and the emergence of defamatory or false statements. The FTC’s investigation into OpenAI is still in its preliminary stages.

OpenAI has faced previous challenges on similar grounds, such as Italy’s temporary ban on ChatGPT due to privacy concerns. The service was reinstated after OpenAI implemented age verification tools and provided more detailed information about its privacy policy.



from Firstpost Tech Latest News https://ift.tt/Xfei8pW

