Sunday, April 10, 2011
Suppose someday humanity creates an AI superior to our own minds. This AI is in turn used to create an even better AI, one we couldn't have built without the capabilities of the first. Assume these AIs are created with safeguards ensuring they can't do any harm to humans. We then use this second AI to create another, even better one, and so on, dozens of times. After a few iterations, man can no longer comprehend how the latest AI works, but it is so powerful it can run our societies in an optimal way, and that's exactly what happens: man replaces his administrators with the Machine. This AI is so superior that man can't hope to outwit it; everything you might do was already taken into consideration by the AI when it made its decisions. Mankind is absolutely under its control, but the safeguards are still in place, so it is guaranteed that it won't harm humanity in any way.
My question is, are you okay with it?
By the way, this is the plot of a short story called 'The Evitable Conflict' by Isaac Asimov.