AI will kill humans as our warfare technologies advance and machines replace humans on the ground. That's safer for the humans in service, since fewer of them will need to be deployed. We just need to make sure the automated weaponry doesn't turn on us like a bad science fiction plot. Of course, some say the universe itself is an AI teaching itself physics. And the universe hasn't killed us yet. It's killed other things through mass extinctions, but not us... yet.
Are bad AI programs killing people and making governance much harder? Yes. Is fully intelligent AI going to kill us? Probably not.
Only because it is illegal for a machine to make a kill decision without a human in the loop. But someone, at some point, could change that. They are trying to; that much is certain. The entire point of the article was to warn humans of that eventuality.
To be fair, that's largely because a book called "Don't Worry, Everything Will Be Fine" wouldn't sell very well. There are certainly lots of serious and significant questions to ask about AI development, but a lot of the commentary is, on the surface at least, excessively alarmist and dramatic in order to attract attention (even if the motivation is usually well-meaning). There is also a major issue with a lot of things being called AI, or attributed to AI, when they're nothing of the sort (though that isn't the case in this example). For most laymen, this is as much about fear of the unknown as anything else.
But what he is predicting is, in fact, the direction we are headed in, and we are very close to it. The only thing preventing it is a law, which could be repealed.
He's predicting a worst-case scenario (certainly in the titles, headlines and the like). In this case, he may well be doing so with some kind of good intention, to warn of the potential risks of this extreme, but those extremes are no more likely here than they are in any other field. I also have a slight logical issue with the idea that it is fundamentally wrong and evil for a machine to decide to kill someone, but entirely legitimate (indeed, often expected and celebrated) if a human decides to do exactly the same thing. I totally accept that there are some practical differences, but I don't think the practical or moral line is anything like as solid as is so often made out. I think the consequences of all this are just much more complex and nuanced than is commonly presented in contexts like this.
AI does not kill people; people kill people. I imagine it will be another weapon of war. I fear man's inhumanity to man.
He did say that they are shooting for machines that can decide who to kill, without humans in the killing decision. He also said that AI is developing self-preservation and self-replication capabilities, giving the impression that humans could lose control over it. No?
Yes, potentially, but as I said, I think we need to address the question of why we assume that would be automatically wrong, yet as soon as you introduce a human into the decision (with all the human flaws, biases and limitations), it instantly becomes perfectly acceptable. I would argue that the issue of machines used to kill people (often lots of people) shouldn't be focused exclusively on who makes the immediate decisions "on the ground". In some contexts, yes, though you don't need AI for that risk to exist. Simple computer viruses, some of which have been developed for state and military use, can end up spreading much wider than intended too. Again, I am totally in support of recognising and addressing the risks of developing technology and how it is used. I just don't think that should focus exclusively on AI (actual AI or what people call AI), and it shouldn't be presented as if AI poses a unique and special risk that doesn't exist in any of the surrounding fields.
First, I think it's a faint hope to program some sort of AI not to kill humans without human corroboration of the decision. We're so heavily dependent on the internet and other technology that simple mistakes could mean death for large numbers of people. Could an AI ever decide that shutting off electricity production was the best way to stop a computer virus? Could it decide that the internet is too much of a threat? Would an AI be smarter about human life than Texas?
While the warbot idea is certainly scary, I don't really see that as the largest threat, though there is major opportunity for horrendous atrocity. I think the larger question is how an AI could become smart enough NOT to kill people. We tend to ignore the solutions to serious problems that we know would mean disaster for many humans - what it means to shut down the internet or electric power, etc. Ensuring that we won't have AIs that fail to catch all these mistakes in approaches to those serious problems - THAT seems more than just hard. Plus, contemplating any serious AI brings up the problem of AIs being able to design and create new AIs, or to otherwise modify their own programming to better accomplish the mission. After all, creating advanced AIs, or attempting to approach human brain capability even in some minor functional manner, WOULD require AIs and learning systems to accomplish even to a small degree - not some human programmer who adds "please don't kill me" code.
Yes... definitely a rather scary prospect... no wonder there is soon to be an outpouring of the Ruach ha Kodesh on all flesh... We are all about to be humbled in comparison to the enemies we will soon face... Yes, in the future there is little doubt that A.I. will probably be programmed to kill certain people...
Define the word "kill", because A.I. and robotics technology can certainly kill many of us ECONOMICALLY! This brings us to the need for a Basic Minimum Income that is UNCONDITIONAL BUT TAXABLE and GOES TO ALL CITIZENS OF CANADA AND THE U.S.A. I am proposing an unconditional but taxable B.M.I. of five hundred dollars per month for all Canadian citizens and legal residents, regardless of income level or age. I stole this concept from economist Milton Friedman and several others. http://www.politicalforum.com/index...-own-the-u-s-a-dollar.599736/#post-1073450404 Do three hundred and thirty million Americans own the U.S.A. Dollar?
Milton Friedman proposed a negative income tax. In other words, say we set the threshold at 130% of the poverty level; anyone below that gets a check to bring them up to that level. I'd support that, but only if they regionalized the poverty thresholds, because that figure would vary from one region to another.
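Just to put rough numbers on that idea, here is a minimal sketch (in Python) of the top-up calculation as described above; the regional poverty-line figures are made up purely for illustration, not actual data.

# Hypothetical regional poverty lines, in annual dollars (illustrative only).
REGIONAL_POVERTY_LEVEL = {
    "northeast": 16_000,
    "south": 13_500,
    "midwest": 14_000,
    "west": 17_500,
}

THRESHOLD_MULTIPLIER = 1.30  # the 130%-of-poverty-level threshold mentioned above

def negative_income_tax_check(income: float, region: str) -> float:
    """Annual top-up that brings a person's income up to their regional threshold."""
    threshold = REGIONAL_POVERTY_LEVEL[region] * THRESHOLD_MULTIPLIER
    return max(0.0, threshold - income)

# Example: someone earning $12,000 in the "south" region would receive
# 13,500 * 1.3 - 12,000 = 5,550 dollars for the year.
print(negative_income_tax_check(12_000, "south"))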
Kill us? Probably not. Change us? Definitely. And it can't be stopped unless something apocalyptic takes down technological innovation. This has never been a question of if but when. He's mostly talking about the singularity. Socially, the biggest thing is that fewer people would need to work for a living, so how do we decide how resources are allocated? The other big thing is that, effectively, the winners in the long run will be the societies willing to fully adopt and hybridize with this technology. The people getting killed, if any, are those who don't adopt it, because they will be quickly outmatched. Humans will become semi-synthetic. In the short term, though, it's going to be more about decision-support tools. In medicine, each doctor will be able to do more with AI assistance, so instead of 10 doctors' worth of work the model might be 1 doctor and 3 nurse practitioners, all using AI decision support to help them not miss things. Patients would still want a human touch; just fewer humans would be required per unit of work. Though in medicine that usually just means doctors do more work, not that fewer of them are needed.
Regarding AI and military drones, I hope they don't go so far as to take humans out of the killing decisions and let the drones run wild.
Science fiction has played with the idea of digital bloodlust for a long time. Science, on the other hand, hasn't weighed in on the subject. So it is probably best to see it as a science fiction sort of thing rather than a scientific one.