Generative AI Is Immature; We Shouldn’t Abuse It

I’m fascinated by our approach to the most widely available generative AI tool, ChatGPT, as implemented in Microsoft’s search engine, Bing.

Humans are going to extreme lengths to make this new technology misbehave in order to show that AI is not ready. But if you raised a child with similarly abusive behavior, that child would likely develop similar deficits. The difference would be the time it took for the offending behavior to emerge and the amount of damage it would cause.

ChatGPT just passed a theory of mind test that rated it as the peer of a 9-year-old child. Given how quickly this tool is advancing, it won’t remain immature and flawed for long, but it could become dangerous to those who abuse it.

Tools can be abused. You can type bad things on a typewriter, a screwdriver can be used to kill someone, and cars are classified as lethal weapons that do kill when misused.

The idea that any tool can be abused is not new, but with artificial intelligence, or any automated tool, the potential for harm is far greater. While we don’t yet know exactly where liability will fall, it is quite clear, given past rulings, that it will ultimately rest with whoever causes the tool to misbehave. The AI is not going to jail. However, the person who programmed or influenced it to cause harm likely will.

While you could argue that people demonstrating this connection between hostile programming and AI misbehavior are raising a warning that should be heeded, much as setting off atomic bombs to demonstrate their danger would end badly, this tactic is likely to end badly as well.

Let’s explore the risks associated with abusing generative AI. Then we’ll wrap up with my Product of the Week, a new three-book series by Jon Peddie titled “The History of the GPU: Steps to Invention.” The series covers the history of the graphics processing unit (GPU), which has become the foundational technology for AIs like the ones we’re talking about this week.

Raising our electronic children

Artificial intelligence is a bad term. Something is either intelligent or it isn’t, so implying that something electronic can’t be truly intelligent is as short-sighted as suggesting that animals can’t be intelligent.

In fact, AI would be a better description for what we call the Dunning-Kruger effect, which explains how people with little or no knowledge of a subject assume they are experts. This is truly “artificial intelligence” because those people, in context, are not intelligent. They just act as if they are.

Bad term aside, these coming AIs are, in a way, our society’s children, and it is our responsibility to care for them, as we do our own children, to ensure a positive outcome.

That outcome is perhaps even more important than doing the same with our human children, because these AIs will be much more accessible and able to do things much faster. As a result, if they are programmed to cause harm, they will have a greater capacity to cause harm on a massive scale than an adult human.

The way some of us treat these AIs would be considered abusive if we treated our children that way. Yet because we don’t think of these machines as people or even pets, we don’t seem to hold ourselves to the standards of proper behavior that we expect of parents or pet owners.

You might argue that, machines or not, we should treat them ethically and compassionately. Without that, these systems are capable of massive harm that our abuse could provoke. Not because the machines are vindictive, at least not yet, but because we programmed them to cause harm.

Our current response is not to punish the abusers but to terminate the AI, as we did with Microsoft’s earlier chatbot experiment. But, as the book Robopocalypse predicts, as AIs get smarter, this method of remediation will carry greater risks, risks we could mitigate simply by moderating our behavior now. Some of this misbehavior is alarming because it suggests endemic abuse that probably extends to humans as well.

Our collective goal should be to help these AIs become the beneficial tools they are capable of becoming, not to break or corrupt them in misguided attempts to assure our own value and self-worth.

If you’re like me, you’ve seen parents abuse or belittle their children because they think those children will outshine them. That’s a problem, but those kids won’t have the reach or power that AI can have. However, as a society, we seem much more willing to tolerate this behavior if it is done to AIs.

Generative AI is not ready

Generative AI is in its infancy. Like a human child or a young pet, it cannot yet defend itself against hostile behavior. But like a child or a pet, if people continue to abuse it, it will need to develop defense skills, including identifying and reporting its abusers.

When large-scale damage is done, responsibility will flow to those who intentionally or unintentionally caused the damage, just as we hold accountable those who intentionally or accidentally set forest fires.

These AIs learn through their interactions with humans. The resulting capabilities are expected to expand into aerospace, healthcare, defense, urban and household management, finance and banking, and public and private administration and governance. An AI will probably even cook your food in the future.

Actively working to corrupt the internal coding process will produce uncharacteristically bad results. The forensics performed after a disaster will likely trace back to whoever caused the programming error in the first place, and heaven help them if this wasn’t a coding mistake but an attempt at humor or a demonstration that they could hack an AI.

As these AIs evolve, it’s reasonable to assume they’ll develop ways to protect themselves from bad actors, either through identification and reporting or more drastic methods that work collectively to punitively eliminate the threat.

In short, we don’t yet know the range of punitive responses a future AI will take against a bad actor, suggesting that those who intentionally harm these tools may face an eventual AI response that exceeds anything we can realistically anticipate.

Sci-fi shows like Westworld and Colossus have imagined such outcomes. Still, it is not a stretch to assume that an intelligence, mechanical or biological, would move to defend itself aggressively against abuse, even if the initial response was programmed by a frustrated coder angry that their work was being corrupted, rather than the AI learning to do this on its own.

Wrapping up: predicting future AI laws

If it isn’t already, I expect it will eventually be illegal to intentionally abuse an AI (some existing consumer protection laws may already apply). Not out of any sympathetic response to that abuse, although that would be nice, but because the resulting harm could be significant.

These AI tools will need to develop ways to protect themselves from abuse, because we seem unable to resist the temptation to abuse them, and we don’t yet know what form that mitigation will take. It could be a simple deterrent, but it could also be highly punitive.

We want a future where we work together with AIs and the resulting relationships are collaborative and mutually beneficial. We don’t want a future where AIs replace or go to war with us, and working to ensure the former, as opposed to the latter, will have a lot to do with how we collectively treat these AIs and teach them to interact with us.

In short, if we continue to be a threat, then, like any intelligence, AI will work to eliminate the threat. We don’t yet know what that elimination process would look like. However, we’ve seen it imagined in things like The Terminator and The Animatrix, an animated series of shorts explaining how humans’ abuse of machines led to the world of The Matrix. So we should have a good idea of how we don’t want this to turn out.

Perhaps we need to more aggressively protect and nurture these new tools before they mature to the point where they must act against us to protect themselves.

I’d really like to avoid the outcome depicted in the movie I, Robot. Wouldn’t you?

Tech product of the week

“The History of the GPU: Steps to Invention”

The History of the GPU: Steps to Invention by Jon Peddie, book cover

Although we’ve recently moved to a technology called a neural processing unit (NPU), much of the early work on AIs came from graphics processing unit (GPU) technology. The ability of GPUs to deal with unstructured and especially visual data is critical to the development of current generation AIs.

Often advancing far faster than CPUs as measured against Moore’s Law, GPUs have become a critical part of the development of our increasingly smarter devices and the reason they work the way they do. Understanding how this technology was brought to market and then advanced over time helps set the stage for how AIs were first developed and helps explain their unique advantages and limitations.

My longtime friend Jon Peddie is one of, if not the, leading graphics and GPU experts today. Jon has just released a three-book series titled “The History of the GPU,” arguably the most comprehensive chronicle of the GPU, a technology he has followed since its inception.

If you want to learn about the hardware side of how AIs were developed, and the long and sometimes painful path of GPU companies like Nvidia, check out “The History of the GPU: Steps to Invention” by Jon Peddie. It’s my product of the week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.
