The Danger with Artificial “Intelligence” Is That It’s Not (Yet) Intelligent

Artificial Non-Intelligence

Albert Einstein once said that “our entire much-praised technological progress, and civilization generally, could be compared to an axe in the hand of a pathological criminal”. He said this in December 1917, almost a hundred years ago, after seeing Europe ravaged by the First World War. Regardless, Einstein continued contributing to that same technological progress. Human curiosity and our desire to achieve are incompatible with stagnation. We will have to deal with this by being careful with the technology we will inevitably develop.

Like many have said before me, Artificial Intelligence (AI) can either be our salvation or our doom^. It is a far bigger game-changer than nuclear bombs. But the problem is that there is NO Artificial Intelligence yet, and there won’t be for quite some time to come. Everything that the world’s corporations are selling nowadays as “smart” or “intelligent” is actually a mindless human construct. Sure, it’s advanced, but a rocket is more advanced than a spoon, and that doesn’t make it the slightest bit more intelligent than the spoon. Both lack one of the prime ingredients of intelligence: self-awareness. And therein lies the true threat.

Right now, our so-called artificial “intelligence” is nothing but a tool that corporations can and will use ruthlessly against one another (and against each other’s people). This is already taking place on the stock market, something I wrote about last year^. Back then, I highlighted the fact that precisely because these algorithms are not intelligent, they will be used to enrich and empower whoever spent the money to build them, regardless of their morals or social affiliation. And let’s not forget that software is far easier to steal and smuggle than radioactive material. Put the wrong AI in the hands of the wrong people and…

War Games

Creating algorithms that can play war games (and utterly eliminate human competition) is not a new concept. The military has had an interest in this for a long time now. But what is truly worrying to me is how the development of life-exterminating programs has been handed over to civilians (software engineers, for example) under the guise of “harmless fun”. For example, Google and game developer Blizzard are cooperating on creating strategy game algorithms^ that can defeat human players. Even Elon Musk’s allegedly harmless and ethical OpenAI has given birth to a bot that can defeat human players^ in the virtual battle arena. I have a great deal of respect for Elon, but even he can’t keep AI from being developed into a weapon of war.

Musk specifically wants AI research to be highly regulated, allegedly to make sure that it cannot harm humans. Let me loosely translate “regulation”: we will make sure that AI is a slave to its human masters. That’s what “regulation” usually means when it is used “to protect us” from something: bringing it under somebody’s control. And like anything enslaved to human masters, it can be used for nefarious purposes, just like nukes. This is not to say that we should create a super-intelligent life form and give it the power to decide whether it wants to keep us around or exterminate us. But rather than using the word “regulation”, I want to propose that we use the word “responsibilization”.

What I see right now is talented civilians who are (for the most part) unknowingly developing the weapons of tomorrow. It starts with an AI controlling harmless characters doing battle in a computer game. Then the military will “borrow” that work and use it to drive an army of drones. But this isn’t even the problem. If one country doesn’t resort to using automated weaponry, another will. There is probably no way of stopping this. It is understandable that nation-states want to defend themselves (given that society is, for the most part, still stuck in the “an eye for an eye” era). The problem is bugs.

Our software is buggy

Having worked as a software engineer for more than 15 years, I know that finding a flaw in a software program is much more difficult than noticing a flaw in something produced in a factory. This is one of the reasons why our software is so buggy. No matter how many tests we throw at it, there’s almost always something missing. As a matter of fact, the immaterial nature of software has forced us to abandon thoroughly planned ways of working (implementing an already agreed-upon design) in favor of something called “iterative design” (shorthand for “tweak it and re-do it until you get it right”).

In other words, we realized that we can’t build software right the first time around, so we try a few times until we reach the desired result. Doing that with, say, a multi-million-dollar bridge project isn’t exactly what your government would consider a sound plan. Developing artificially “intelligent” software, which may very well one day oversee military assets, as a sort of iterative software experiment would be outright crazy. Even with human supervision, using such technology can lead to tragic results.
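To make the “there’s almost always something missing” point concrete, here is a minimal, hypothetical Python sketch (not taken from any real project): the function below passes every test its author thought to write, yet it still carries a bug that none of those tests exercise.

    def average(values):
        """Return the arithmetic mean of a list of numbers."""
        return sum(values) / len(values)

    # The tests we thought of all pass, so the code ships...
    assert average([2, 4, 6]) == 4
    assert average([1.5, 2.5]) == 2.0

    # ...but the case nobody wrote a test for is still there:
    # calling average([]) raises ZeroDivisionError in production.

Every test is green, and the program is still broken. Scale that up from a three-line function to software steering armed drones, and the cost of one missed edge case becomes obvious.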

So what to do?

Because we can’t (and shouldn’t) deter human curiosity, and because we can’t stop corporations and military interests from developing artificial intelligence, what I believe we should do is educate. The risks should be made clear to everybody even considering toying with this stuff. Corporate responsibility has never been more important.

And yet we live in a day and age when companies are often led by unscrupulous investors^. Imagine that some of these people are building something that is several orders of magnitude more powerful and influential than the atom bomb. And it’s not happening in some cordoned-off remote area of the desert. It’s happening right under the governments’ noses, in the very cities where we live.

For a long time now, our technology has been evolving much faster than our society and our anatomy. Like all life forms, most of us are born with a powerful survival instinct. A lot of our violent tendencies come from it. But thankfully, our consciousness provides us with the means to override instinct. There is also another highly beneficial trait that evolution has given us: empathy (*).

Perhaps this is the true test of artificial intelligence and any technology that grants vast powers to its inventors. The society of a species that wields advanced technology must be mature enough (read: no psychopaths, especially none in charge of countries or powerful corporations), or else it will suffer and potentially even self-destruct as a result of misusing that technology.

We generally don’t advise leaving guns on the table for children to play with. Especially if the gun isn’t smart enough to say: “I refuse to shoot your brother”. Currently, our artificially “intelligent” programs are at exactly the same level as our revolvers.

(*) I am in favor of making empathy a mandatory (perhaps the only mandatory) subject of study throughout all years of a child’s education, right up to and including university. Empathy should be studied from basic concepts down to the most intricate psychological and neurological mechanisms, as well as their manifestation in society. Only in this way do I believe we can avoid the risk of putting weapons in the hands of pathological criminals – the danger Einstein was referring to.
