Concerns over ChatGPT’s potential risks are gathering momentum, but is an AI pause a good move?

While Elon Musk and other global tech leaders have called for a pause on AI development since the release of ChatGPT, some critics believe that halting development is not the answer. AI evangelist Andrew Perry of intelligent automation company ABBYY thinks that taking a break is like trying to put toothpaste back in the tube. Here he explains why…

AI applications are pervasive, affecting almost every aspect of our lives. While the intent is laudable, applying the brakes now is implausible.

Certainly, there are tangible concerns that call for increased regulatory oversight to contain AI’s potentially harmful effects.

Italy’s Data Protection Authority recently imposed a temporary nationwide block on ChatGPT due to privacy concerns over the way it collects and processes the personal data used to train the model, as well as an apparent lack of safeguards, exposing children to responses “absolutely inappropriate for their age and awareness.”

The European Consumer Organisation (BEUC) is calling on the EU to investigate the potential harmful effects of large language models, given “concerns about how ChatGPT and similar chatbots can deceive and manipulate people. These AI systems need greater public scrutiny, and public authorities need to reassert control over them.”

In the US, the Center for AI and Digital Policy has filed a complaint with the Federal Trade Commission alleging that ChatGPT violates Section 5 of the Federal Trade Commission Act (FTC Act) (15 USC 45). The basis of the complaint is that ChatGPT allegedly does not meet the FTC’s guidelines for transparency and explainability of AI systems. The complaint also points to several acknowledged risks of ChatGPT, including violations of privacy rights, the generation of harmful content, and the spread of misinformation.

Despite the utility of large language models such as ChatGPT, research points to their potential dark side. They have been shown to give incorrect answers, because the underlying ChatGPT model relies on deep learning algorithms trained on large data sets drawn from the internet. Unlike other chatbots, ChatGPT uses language models based on deep learning techniques that generate text resembling human conversation, and the platform “arrives at an answer by making a series of guesses, which is part of the reason it can argue wrong answers as if they were completely true.”

Furthermore, ChatGPT has been shown to highlight and amplify bias, leading to “responses that discriminate against gender, race, and minority groups, something the company is trying to mitigate.” ChatGPT can also be a boon for bad actors to exploit unsuspecting users, compromising their privacy and exposing them to fraud attacks.

These concerns have prompted the European Parliament to issue a commentary reinforcing the need to further strengthen the current provisions of the draft EU Artificial Intelligence Act (AIA), which is still in the process of ratification. The commentary points out that the current draft of the proposed regulation focuses on narrow AI applications, consisting of specific categories of high-risk AI systems such as recruitment, creditworthiness assessment, employment, law enforcement, and eligibility for social services. However, the draft EU AIA regulation does not cover general-purpose AI, such as large language models, which provide more advanced cognitive capabilities and can “perform a wide range of intelligent tasks.” There are calls to expand the scope of the draft regulation to include a separate, high-risk category of general-purpose AI systems, requiring developers to perform rigorous ex ante compliance testing before placing such systems on the market and to continuously monitor their performance for possible unexpected harmful results.

A particularly useful study recognizes this gap, noting that the EU AIA regulation “mainly focuses on conventional AI models, and not on the new generation whose birth we are witnessing today.”

It suggests four strategies that regulators should consider.

  1. Require developers of such systems to regularly report on the effectiveness of their risk management processes to reduce harmful outcomes.
  2. Businesses using large language models should be required to disclose to their customers that the content is generated by artificial intelligence.
  3. Developers must subscribe to a formal process for phased releases as part of a risk management framework designed to protect against potentially unintended adverse effects.
  4. Put the onus on developers to “reduce risk at its roots” by requiring them to “proactively audit the training data set for bias.”

A factor that perpetuates the risks associated with disruptive technologies is the drive by innovators to achieve first-mover advantage by adopting a “build the ship first, fix it later” business model. While OpenAI is somewhat transparent about the potential risks of ChatGPT, it has released the system for broad commercial use, leaving it to users, on a “buyer beware” basis, to weigh the risks and assume them themselves. That may be an unreasonable approach given the pervasive influence of conversational AI systems. When dealing with such a disruptive technology, proactive regulation coupled with strong enforcement measures must be prioritized.

Artificial intelligence already permeates almost all areas of our lives, which means that a pause in the development of AI could entail many unforeseen obstacles and consequences. Instead of pushing for sudden breaks, industry and legislative players should work together in good faith to enact workable regulation based on people-centered values such as transparency, accountability, and fairness. By referencing existing legislation such as the AIA, private and public sector leaders can develop thorough, globally standardized policies that will prevent malicious use and mitigate negative consequences, thus keeping AI within the bounds of enhancing the human experience.
