Yeah, I'm reading the book that's triggered all of this speculation at the moment,
"Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom, an Oxford philosophy professor.
I've been dipping in and out of it for a year or so, as he's in love with unnecessarily complicated language. To be honest, I think he uses a lot of it to hand-wave over technical shit he doesn't really understand, but the core messages are strong.
One of the concepts that really captures the imagination is in the section on malignant failure modes, called "Perverse Instantiation". Controlling a superintelligent system with orders is nigh on impossible, as the AI could interpret those orders in almost completely open-ended ways to facilitate its own goals. Basically, a genie warping its master's wish.
And then there are scary examples of evolutionary algorithms that have already found success through unpredictable methods that human minds and understanding alone could never have developed.
I'm going to almost quote verbatim from the book here:
One such search process was tasked with creating an oscillator, but was deprived of a seemingly indispensable component for the construction of said oscillator: the capacitor.
It solved the problem, but the researchers who first examined the solution concluded it shouldn't work.
The algorithm had reconfigured its sensorless motherboard into a makeshift radio receiver, using the printed circuit board tracks as an aerial to pick up signals from PCs that happened to be situated nearby in the laboratory. The circuit amplified this signal to produce the desired oscillating output.
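To get a feel for why these search processes produce solutions nobody asked for, here's a minimal sketch of the bare selection/mutation loop they all share. This is nothing like the actual hardware-evolution setup from the book; it's a toy where "build an oscillator" becomes "evolve an alternating bit pattern", and every name in it is made up for illustration:

```python
import random

def evolve(fitness, genome_len=20, pop_size=50, generations=200,
           mutation_rate=0.05, seed=0):
    """Bare-bones evolutionary search. The loop only ever sees the fitness
    score, so it will exploit *any* trick that raises it -- including ones
    the experimenter never anticipated (hence the accidental radio)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # truncation selection
        children = []
        for parent in survivors:
            # copy each survivor with random bit-flips
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in parent]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy stand-in for "oscillate": reward 0->1 and 1->0 transitions,
# so the fittest genome is an alternating 0101... pattern.
def oscillation(genome):
    return sum(a != b for a, b in zip(genome, genome[1:]))

best = evolve(oscillation)
```

The point isn't the toy problem; it's that `fitness` is the *only* channel between the experimenter's intent and the search. Anything that scores well survives, whether or not it works the way a human designer would.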
And you know who's at the helm of one of the world's most advanced AI projects, Google's DeepMind?
Demis Hassabis, the same guy responsible for helping giant monkeys learn how to throw flaming turds at hapless villagers in the game Black & White.