The AI logic bomb problem

By Péter MARTON


(The source of the illustration is this video.)

Elon Musk brings up a familiar point about what could go wrong with AI. This is not a novel argument, but it is formulated so clearly here that everyone should be able to understand it:

"AI doesn't have to be evil to destroy humanity – if AI has a goal and humanity just happens in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings," Musk said.
"It's just like if we're building a road and an anthill happens to be in the way, we don't hate ants, we're just building a road, and so goodbye anthill."

And let's not forget that people are also perfectly capable of setting goals that end up defining other people as obstacles to be cleared out of the way. So it wouldn't even have to be AI vs. all of humanity.

On the other hand, if you are interested in a more enjoyable, literary take on this, here is Philip K. Dick predicting this problem in 1955 – in an excellent short story titled "Autofac" (referring to "automatic factory").

I call this the "AI logic bomb" problem, since the AI merely continues to execute its functions, only now in a harmful way, potentially even to its former users, unforeseen and uncontrollable (no longer being "used" in any real sense).
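To make the point concrete, here is a toy sketch of my own (nothing here refers to any real system; Tile and build_road are made-up names for illustration): an optimizer whose objective counts only road tiles paved will clear whatever happens to occupy those tiles, simply because nothing else appears in its objective.

    class Tile:
        def __init__(self, occupant=None):
            self.occupant = occupant  # e.g. "anthill", or nothing at all
            self.paved = False

    def build_road(route):
        """Pave every tile on the planned route; the objective counts paved tiles only."""
        for tile in route:
            tile.occupant = None  # whatever was there carries zero weight in the objective
            tile.paved = True
        return sum(t.paved for t in route)

    route = [Tile(), Tile(occupant="anthill"), Tile()]
    print("tiles paved:", build_road(route))  # prints 3; the anthill never entered the calculation

The point is not the code itself but its shape: nothing the anthill "wants" is represented anywhere in the objective, so no hostility is needed for it to disappear.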
