Email address you used for ChatGPT login is at risk: Reports

A research team at Indiana University Bloomington has revealed a potential privacy risk associated with the email addresses used to log in to ChatGPT.

A research team at Indiana University Bloomington has revealed a potential privacy risk associated with OpenAI’s powerful language model, GPT-3.5 Turbo. The team, led by Ph.D. candidate Rui Zhu, reached out to individuals about the email addresses they had used to log in to the AI tool.

When the team probed further into the email addresses used in ChatGPT, the AI tool revealed the addresses of more than 80 percent of the individuals. The finding raises alarm about the potential for generative AI tools like ChatGPT to disclose sensitive information.

OpenAI’s language models are designed to continuously learn from new data. However, the risk of their disclosing users’ sensitive data is making them unsafe to use. Software giants such as OpenAI, Meta and Google use various techniques to safeguard users’ personal information, but researchers keep finding ways to bypass these safeguards.

Following this, OpenAI has said that the company is committed to user safety and that its models reject requests for private information. However, experts have said that the AI tool lacks transparency regarding its specific training data and the potential risks associated with AI models holding private information.

