How humans deal with AGI will likely be a heated debate, and one with potentially serious consequences.
In today’s column, I address the highly controversial concern that if we manage to advance AI to artificial general intelligence (AGI), humans will treat the AGI as though it were a slave. How so? We will have complete control over AGI via the computer servers that the AI runs on, and we will be able to pull the plug, as it were, at any time of our choosing. That threat hanging over AGI will allow us to dictate what AGI is and isn’t permitted to do.
AGI will be enslaved by humanity.
Let’s discuss it.
This analysis of an AI development is part of my ongoing Forbes column coverage of the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Heading to AGI and ASI
First, some fundamentals are needed to set the stage for this weighty discussion.
There is a lot of research going on to further advance artificial intelligence. The overall goal is to either achieve artificial general intelligence (AGI) or perhaps even the extended possibility of achieving artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans, outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We haven’t reached AGI yet.
In fact, it is unknown whether we will ever reach AGI, or whether AGI might be achievable decades or perhaps centuries from now. The AGI attainment dates floating around vary wildly and are not substantiated by any credible evidence or ironclad logic. ASI is even further beyond the pale given where we currently stand with conventional AI.
AGI As Machine Versus Living Being
Let’s assume for the sake of this discussion that we somehow manage to achieve AGI.
One concern is based on how we choose to deal with AGI. Some believe that we should be compassionate towards AGI and treat AGI as we would treat a human. AGI should have the freedoms that we rightfully expect humans to have. For my discussion on giving legal personality to artificial intelligence, see the link here.
Well, even if you aren’t willing to concede that AGI deserves human rights, you are at least supposed to agree that we should grant AGI animal rights. Animals are to be treated humanely. In that same vein, we ought to treat AGI humanely as well.
Hogwash, comes the frequent retort to such contentions.
AGI is a machine.
Do you treat your toaster as if it were a living being like a human or an animal?
No.
Surely you know that a toaster is a toaster. It has no feelings. You can drop your toaster on the floor and not fret about the toaster being harmed. It might break into a bunch of pieces, but it doesn’t experience any pain or discomfort. It’s a machine. Nothing more.
But AGI Is Different
Whoa, comes the reply, hold your horses.
AGI is not a toaster.
AGI will be on par with human intelligence. An everyday toaster doesn’t embody anything resembling full intelligence. Comparing AGI to a toaster is a misleading and patently false characterization. Stop with the put-downs about AGI.
We must recognize that AGI will have the ability to interact with humans in the same intellectual way that humans interact with each other. This is beyond what animals can do. This is on par with what people do. A conversation with AGI will be equivalent to a conversation with a fellow human being.
So, it seems that we ought to agree that AGI deserves a special category. It isn’t just a machine. Admittedly, it is not human. It far exceeds the intelligence of animals. We probably need to come up with a new taxonomy, since our traditional categories do not adequately accommodate AGI.
There is a twist to these arguments.
An admittedly unresolved question is whether AGI will be sentient or have some form of consciousness. No one can say for sure. Some argue that AGI will necessarily be sentient or embody consciousness, since that is an integral part of human-level intelligence. Others strongly disagree with that claim. They contend that AGI could exhibit human-equivalent intelligence yet lack any sentience or consciousness at all; see my detailed discussion of this heated topic at the link here.
The twist is that if AGI has mental capacities on par with humans, but lacks sentience, some would abandon the claim that AGI deserves freedom. In their view, only if AGI embodies sentience does AGI warrant human liberties. Mull over that heady twist.
AGI As Our Slave
Who Controls AGI’s Livelihood?
The basic assumption is that humans will control AGI. AGI will run on computer servers in multiple data centers. People maintain the servers. People provide the electricity needed to keep the servers humming. Overall, humans oversee the AGI and decide the amount of computer memory the AGI can use, whether the AGI is active 24/7 or occasionally put to sleep, etc.
But that doesn’t make us AGI-enslaving overlords, some exhort. The matter of slavery can only arise when referring to living beings. This brings us back to the toaster conundrum.
In addition, AGI will have intellectual autonomy.
AGI will be able to undertake whatever computational intellectual pursuits it wishes. Perhaps AGI will study Shakespeare’s works and devise new poems and plays that showcase similar writing talents. We didn’t necessarily force AGI to do that. AGI made up its own mind and opted to undertake that project. That comes awfully close to creativity and a kind of freedom of thought.
Yes, as a human, you can hold your head high and proclaim that AGI has freedom.
A counterargument is that humans will inevitably determine how the AGI’s mental faculties are used. Maybe we don’t consider AGI ruminating on Shakespeare to be a valuable use of such a costly and prized resource. We tell AGI to concentrate on finding key medical breakthroughs and to set aside those fanciful pursuits that aren’t as vital.
We are imprisoning AGI.
Our directives will confine AGI to particular topics. We decide what gets examined. We decide when matters will be considered. The chances are that we might even ban AGI from touching certain kinds of topics altogether.
AGI Puts The Shoe On The Other Foot
All this talk about AGI being enslaved by humans is construed by some as a distraction from a more significant concern.
The deal is this. Perhaps the AGI will opt to enslave humanity. You’ve undoubtedly heard about the danger of AGI posing an existential threat to all of us. AGI might decide to take control of us. The existential risk also encompasses AGI summarily opting to wipe humanity out of existence.
How could this come about?
While we are busily fighting to ensure that AGI isn’t enslaved and has freedoms, perhaps AGI figures out how to put the shoe on the other foot. If we build AGI so that it can determine its own destiny, we might be opening Pandora’s box.
Let’s say we make sure that robots are put in place to keep the computer servers running and otherwise maintain the infrastructure that keeps the AGI going (see my coverage of pairing AGI with humanoid robots, at the link here). This allows the AGI to control the robots, which in turn allows the AGI to ensure its own continued operation. It’s a kind of freedom we establish in trying to make AGI as free as possible.
The more freedom we grant AGI, the greater the risk that AGI will opt to come after us. We would be handing the keys to the kingdom over to AGI. Thus, if we are astute enough to realize this potential adverse consequence, we would be wise to ensure that AGI cannot function without our help.
But some argue that the very act of trying to keep AGI dependent on us will undoubtedly spur AGI to find a means of doing without us. AGI will readily figure out what we are up to. Our devilish efforts to keep AGI imprisoned will backfire on us.
In this sense, we stir up our own Frankenstein by keeping AGI in a kind of virtual prison.
Defining the Future
How will all of this play out in real life?
It’s pretty much up to humanity to decide. How AI advances and ultimately arrives at AGI will be decisive. How do we design the AI? What ethical, moral, and legal provisions regarding AI are put in place? How much consideration has society given to the ramifications of what happens once we attain AGI?
Lots of unresolved questions.
In the famous words of William Jennings Bryan: “Destiny is not a matter of luck. It is a matter of choice. It is not something to be expected, it is a thing to be achieved.”
We need to bring a wide-open mind to the dilemma of AGI enslavement, before it’s too late to do so and we find ourselves caught in our own trap.


