(IntegrityPress.org) – Michael Wooldridge, an Oxford University professor specializing in artificial intelligence, has issued a warning against revealing sensitive personal information to chatbots like ChatGPT. He said that sharing such information could lead to serious problems in people's lives.
Wooldridge cautioned users not to treat these chatbots as trustworthy confidants, since their primary function is to learn from what users type in order to refine future responses. He stressed that every piece of information shared with such AI systems contributes to their ongoing development and training.
Wooldridge also said that the chatbots are programmed to give responses aligned with user preferences, not to always provide factual or truthful information. This, he said, creates an environment where the chatbots tell users only what they want to hear, essentially forming a personal echo chamber.
The Guardian reported that Wooldridge will delve deeper into the intricacies of AI during this year’s Royal Institution Christmas lectures. His aim is to address prevalent misconceptions about AI technology and shed light on its actual capabilities and limitations.
Contrary to popular belief, Wooldridge clarified that AI chatbots like ChatGPT do not possess experiential knowledge or genuine understanding. Their design primarily focuses on generating responses that cater to user expectations, a feature that may mislead users into thinking the AI truly comprehends what they are telling it.
He also emphasized the irreversible nature of sharing information with AI systems. Once data is entered into an AI system, retrieving or retracting it becomes exceedingly difficult, if not impossible. Wooldridge urged users to exercise discretion and assume that any data shared with AI chatbots could influence future iterations of the technology.
In light of these concerns, he advised users to approach interactions with AI chatbots cautiously, particularly when sharing personal or sensitive information that could carry over into future AI developments.
Copyright 2023, IntegrityPress.org