This can't be right. For when I said, "I certainly think in the early days maliciousness will not occur to AIs; they will be the tools of the nefarious," you pulled me up on it?
The real question is why someone is so concerned about this possibility. Why? Are we afraid that a computer will make our wife fall in love with it? There is something anti-scientific in this attitude. Anyway ... we [the "creators"] decide the future of computers, AIs, software ... Do we need self-aware machines? Maybe yes, maybe no. If yes, we will have self-aware machines. Nothing simpler than that. You could wonder: is it scientifically possible? No, it isn't. So far ...
I think learning computers will be essential as technology advances. However, there is a world of difference between learning about specific things to do with their function and learning how to take over the planet.
How could you know? Perhaps your computer is self-aware right now. Unless it has free will, it would have no way to tell you. I would give even odds that the internet is already self-aware. If self-awareness is an emergent property of information and complexity, as has been postulated, then the internet should be there already. The computing power of everything on the internet surely rivals a living consciousness by now.
As I've pointed out before, the concern about what an AI could do arises LONG LONG before it has anything resembling self-awareness. If you think the proposed concerns are benign, then address those concerns, rather than making up fictitious nonsense.
When they learn to better control our power grid, our stock and commodities markets, etc., there is a LOT to be concerned about, as those are critical components of how America and the rest of the world work. Using terms like "take over the world" sounds like an attempt to belittle the clear issues of concern without actually addressing them.
The internet is a communication device; it has no computing power of its own. What you are espousing is equivalent to saying that the telephone system created a human hive mind.
Again, you are vague about what you are worried about. Do you mean the risk of them being hacked, or some kind of action by the computers themselves?
All of the computers connected are nodes in the largest network on the planet. Think of each computer as a brain cell.
No, that's your problem. Each computer is not a brain cell. Each computer is a brain not speaking to all the other computers. Each computer does not have the intelligence to reach out while you're abed and speak to like-minded computers. Think of chickens in a shed: they all have brains, they can all communicate, but they don't form some super chicken brain.
Well, if the data transmitted between these internet brain cells is indicative of the internet's thoughts, then it spends a great deal of time thinking about naked women.
Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore throughout the universe a dozen pictures of what he was doing.

He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe -- ninety-six billion planets -- into the supercircuit that would connect them all into one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.

Dwar Reyn spoke briefly to the watching and listening trillions. Then after a moment's silence he said, "Now, Dwar Ev." Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel. Dwar Ev stepped back and drew a deep breath. "The honor of asking the first question is yours, Dwar Reyn."

"Thank you," said Dwar Reyn. "It shall be a question which no single cybernetics machine has been able to answer." He turned to face the machine. "Is there a God?"

The mighty voice answered without hesitation, without the clicking of a single relay. "Yes, now there is a God."

Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch. A bolt of lightning from the cloudless sky struck him down and fused the switch shut.

-- "Answer" by Fredric Brown
I'm a lucky IT manager, since I've never met an evil artificial intelligence. Mainly because, so far, AIs are not self-aware. It seems that self-awareness can make you evil if you cannot manage it.
I do NOT believe self-awareness is all there is to be concerned about. And I would estimate that we're a long way off from having machines with that characteristic. However ... the bot that can beat all humans at Go, the bot that can beat all humans at chess, etc., all share the characteristic that they are NOT self-aware, yet they have a goal and they can win against even the best human experts. A bot that has significant learning capability AND has access to our markets could probably wreak havoc far worse than the day traders who bought and sold GameStop stock. The same could be true in other areas, too. As we automate our electric grid, one could imagine control software with learning capability (absent some defense on our part) manipulating the electricity market. My main point here is that concern should come LONG LONG before there is anything remotely like self-awareness.
Nope. I think computers will drive cars and learn about road behaviour. But they won't learn politics and take over the planet. More likely, if they do learn politics, they will build a rocket and leave.
As far as we can detect, Earth is the very best place in the universe. So why would they leave? "Taking over the world" does NOT require an interest in politics. An AI that owns electric power distribution, wins at our markets, etc., probably doesn't really have to care about politics.
As far as we can detect, which is diddlysquat, but in any case I was being flippant. And tell me, why would an AI be interested in money?
That's a possibility. I have considered robotic colonization of space before, and it's possible, especially considering the biological limits of our species with respect to time.