Sat. May 25th, 2024

OpenAI is concerned that GPT-4 could make AI too powerful by recognising and reading people’s faces – indiansupdate

OpenAI’s ChatGPT, the well-known AI-powered chatbot used for a wide range of tasks, has moved beyond simply interacting with text. It is now capable of recognising and reading people’s faces.

The most recent version of ChatGPT, known as GPT-4, now includes an intriguing new feature: image analysis. Users can engage with the bot not only through words, but also by submitting images, asking questions about them, and even having it recognise the faces of specific individuals. Potential applications include helping users identify and troubleshoot problems shown in photographs, such as diagnosing a broken-down car’s engine or identifying a mysterious rash.
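As an illustration of the kind of interaction described above, a request mixing a question with an image could be assembled roughly as follows. This is a sketch in Python: the message structure follows OpenAI’s published chat API, but the model name and image URL are placeholder assumptions, and actually sending the request would require the `openai` client and an API key, so this only builds the payload.

```python
# Sketch: assembling a multimodal (text + image) chat request.
# Model name and image URL are placeholders, not taken from the article.

def build_image_question(question: str, image_url: str) -> dict:
    """Return a chat message combining a text question with an image reference."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

payload = {
    "model": "gpt-4o",  # assumed vision-capable model name
    "messages": [
        build_image_question(
            "What might be wrong with this engine?",
            "https://example.com/engine.jpg",  # placeholder image
        )
    ],
}
```

In a real application this payload would be passed to the chat completions endpoint, and the model’s reply would describe what it sees in the image.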

Jonathan Mosen, the CEO of an employment agency for blind people, was one of the early adopters of this enhanced version, trying out the visual analysis capability during a trip. Using ChatGPT, he was able to distinguish between the dispensers in a hotel restroom and learn their contents in detail, far exceeding the capabilities of standard image analysis software.

But OpenAI is concerned about the possible dangers of facial recognition. While the chatbot’s visual analysis is limited to recognising a few individuals, the company is aware of the ethical and legal issues surrounding facial recognition technology, particularly those relating to privacy and consent. As a result, the app no longer provides Mosen with information about people’s faces.

Sandhini Agarwal, a policy researcher at OpenAI, says the company intends to engage in a transparent dialogue with the public about incorporating visual analysis capabilities into its chatbot. It is eager to gather feedback and democratic input from people in order to set clear norms and safety procedures. In addition, OpenAI’s nonprofit arm is exploring ways to involve the public in the development of AI standards to ensure responsible and ethical practices.

The addition of visual analysis to ChatGPT is a logical step, given the model’s training data, which includes images and text gathered from online sources.

Yet OpenAI is aware of potential issues, such as “hallucinations,” in which the system generates misleading or inaccurate information in response to images. For example, when shown a photograph of a person on the verge of fame, the chatbot may incorrectly produce the name of a different prominent figure.

Microsoft, a major OpenAI investor, has access to the visual analysis tool and is testing it in its Bing chatbot in a limited rollout. Still, these firms are treading carefully in order to protect user privacy and address concerns before broad distribution.