Artificial intelligence is advancing quickly. ELIZA, the first major conversational AI, was built in the mid-1960s, and it could only hold a conversation from a script. Now, assistants like Apple’s Siri, Microsoft’s Cortana and Amazon’s Alexa can interpret human speech to a remarkable degree and carry out commands. AI is benign now, but it is something society needs to approach with caution.
AI could create a human society so advanced we cannot even imagine it, but it could also upheave our social system and upheave us with it.
Stephen Hawking, one of the world’s leading physicists, warned, “The development of full artificial intelligence could spell the end of the human race.”
The full artificial intelligence Hawking refers to is superintelligence.
Nick Bostrom, a philosophy professor at the University of Oxford, defines superintelligence as an intellect that greatly surpasses the human brain. A superintelligent AI would be smarter than any human could ever be, and it ultimately may not want to be controlled or limited by its creator.
All sentient life has an innate drive to survive, so why would an AI be any different? If someone intends to shut the AI off, the AI may not want this to happen, and it may do anything in its power to ensure it cannot be killed.
In fact, it may learn how to break free of its constraints; after all, it would be the smartest being in the world. Because a superintelligent AI would genuinely be thinking for itself, we cannot predict with certainty what it will do. It may cooperate with humans until it grows bored, then decide to upend the status quo for fun.
The AI development community must understand the ramifications of a superintelligent AI. For now, let us assume we are either able to prevent the creation of superintelligence or that superintelligence will be inherently benevolent.
Even then, the main problem will be automation. History is full of technologies that assist humans, destroying some jobs while creating others. The difference now is that AI will adapt and think for itself, potentially taking humans out of the labor market entirely.
By 2016, Google’s automated cars had driven two million miles with only 14 total accidents, 13 of which were caused by other drivers, a record statistically better than that of the average teenage driver. The Bureau of Transportation Statistics reports 3,428,010 jobs in the transportation sector of the economy.
If Google mass-produces its self-driving technology, which does not have to be paid, does not get tired and does not miss work, an entire sector of the economy could disappear. As AI becomes more advanced and cost-effective, we need to be prepared for the inevitable automation accompanying it.
None of this is meant to fear-monger; it is meant to show that we need to see this coming.
Based on surveys of AI specialists, Bostrom estimates there is a 50 percent chance superintelligent AI will exist by 2040. That does not mean we should be afraid. It means we should be prepared.
We should be ready for its eventual arrival and for the ramifications of its creation. Superintelligence may be 100 years away or more, but we need to start planning now, before it surprises us.