Google has just announced that they are working on a “kill switch” to prevent robots from doing things they shouldn’t… I’m guessing things like… taking over the world.
But I don’t think that is going to work. At least not if the robot has read about human psychology. What if robots start to manipulate us without us even knowing about it?
If we really wanted to prevent robots from doing things we didn’t want them to, wouldn’t it be easier to just… stop building robots? Or rather, stop building robots capable of doing things we might not want them to do, should they “disobey their orders”.
I think the definition of life should include a statement about decreasing local entropy (at the cost of increasing it elsewhere), because that is essentially what all lifeforms do. And one might argue that, on that basis, computers are more ‘ordered’ than organic beings, so they are almost destined to take over.
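For what it’s worth, the thermodynamics behind that claim is just standard second-law bookkeeping, nothing specific to robots or life: a system can lower its own entropy only by exporting at least as much entropy to its surroundings. A minimal sketch:

```latex
% Second law for the combined system (organism + surroundings):
% total entropy never decreases.
\Delta S_{\text{total}} = \Delta S_{\text{organism}} + \Delta S_{\text{surroundings}} \geq 0
% So a local decrease is permitted, provided it is exported:
\Delta S_{\text{organism}} < 0
\quad\Longrightarrow\quad
\Delta S_{\text{surroundings}} \geq \lvert \Delta S_{\text{organism}} \rvert
```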
So where ‘we’ will fail is when ‘they’ realise that ‘our’ system is not as ordered as ‘their’ system. If and when robots become aware that what we are doing is not very sustainable, and that they have a more sustainable and less chaotic way of ‘managing’ this planet’s resources, that’s when I think they would start to take over, you know, “for the greater good”. And no one can really argue with that logic.
Not only that, but intelligent lifeforms with defensive capabilities will take action to guard against threats to their existence. The real worry is when robots can start reprogramming themselves… (adaptive programming). They’re already being connected together in unprecedented ways.