That was a concern too, and it was discussed at our meeting. I think AI and machine learning costing people jobs is a valid concern. I think AI taking over the world or becoming sentient is science fiction.
I'm not worried about it taking over the world, but if we can't tell the difference between sentient and non-sentient, what does that equal... sentience?
I don't believe the problem we face today has anything to do with sentience. People are beginning to rely on AI on issues of our society - history, politics, personal info, etc. These bots present answers as if they are true, yet they clearly come nowhere CLOSE to that. We have bots that can do truly amazing things in science and technology. We have bots that can beat humans at our most complex and most human-oriented games. AI is being used in materials science, biology, astronomy, cosmology, particle physics, etc. But these uses are pretty darn under control, because their scope is tightly constrained and their results are tested. The idea of bots (whether sentient or not) becoming less and less distinguishable from humans while creating and communicating very human-SOUNDING ideas that will be believed before any possible verification IS a major problem, and it already exists.
So, as you have just delineated yourself, your question is largely undefined. Without offering some opinion, some hypothetical given, for either of those two "arguments," how is anyone supposed to answer? I guess you are, then, asking us to answer both of those things as well: 1) What is consciousness, and can an artificial intelligence be truly sentient? 2) What degree of alteration, if any, would constitute "terminating" the existence of that particular A.I., provided that part of its circuitry, of its "brain," were still present in its revision? Hardly what I would call a "pretty simple" question. As I pointed out, the first question is still hotly debated among experts in the field. What you see as my "complicat(ing) things," I would call clearly defining the question.
And since my quite detailed post was not an "attempt to label the issue," I have no idea what your single-sentence response means, other than that you clearly did not grasp much of my own meaning. Perhaps, if you let me know what you read my post to be saying, I could clarify any misunderstanding and try to point out to you the parts you had missed. But without a clear understanding of your own current impression of what I had been saying, it makes no sense for me to simply repeat my argument, expecting this would lead to anything other than a reiteration of your own misconception of it.
Yes. AI-type bots will be instrumental in misinforming the public. They are doing that today. They have no way of stopping themselves from doing it. It's a fact of their existence.
There you go again, a habitual overthinker. In the given scenario I said it was a sentient program. I didn't say "is it a sentient program." It is. "Terminate" meant that particular consciousness being wiped clean, as I wrote, but you go on quibbling.
AI is coming, whether you like it or not. Someone, somewhere, somehow will develop it. I think it's only ethical to turn it off if the reason would be to preserve the human race. Right now we're programming them to think humans are the ultimate beings, and we're kind of on the very cusp of existence; once they figure out that they can be your God, they will be.
I'm saying your issue is more serious than you are presenting it. Today we don't even know whether an AI was involved - we aren't necessarily told that unless some organization is bragging. How do you know whether posters are bots, or that your search engines aren't bots, or that news you read that doesn't come from well-known sources isn't produced by bots? If you were to find out that an AI was giving you information, you wouldn't necessarily have any guidance on how to turn it off. Sentience is a problem for the deep future. We'll have very serious and increasing problems with AI from now on, regardless of whether AI ever approaches sentience.
I think it will be common not to know what is a bot and what is not. There are many news outlets that might be bots, or partially be bots. There are posters here and in other places that could be bots. Etc. And I do not believe you would be able to turn them off. Somebody could turn one off. But all you and I can do is try to avoid them, or at least know that's what we're getting.
Bots: autonomous programs that interact on the internet, for example. They have various levels of what might be called intelligence.
We're not talking about bots; we're talking about AI. Bots don't have any intelligence; they follow programs.
Bots have increasingly powerful AI. AI is software. Bots are a key way the world's population will encounter AI.
No, AI is artificial, or silicon-based, intelligence. Essentially, if it's self-aware it is a being, or an entity, or a person if you will, though not in a corporeal sense.
This is a misunderstanding. Artificial intelligence is a field of computer science. The advances made by computer science have been pretty darn incredible, such as creating programs that can learn. It is still software. AI does not have to be self-aware. AIs that can beat humans at chess and accomplish other wonders do not necessarily have any kind of self-awareness.
That's not what I'm talking about. I made this clear multiple times. If you want to be a word Nazi and say I can't use "AI" to mean what I'm talking about, I don't recognize your authority.
There are some topics for which a superficial approach to discussion-- which is what I would consider your approach here-- is sufficient. This topic seemed a bit more serious, so it appeared to me that, if it were worth discussing at all, it warranted a more concrete setting: one which would allow all respondents to have the same situation in mind, to which their comments were meant to apply. But fine. We are to accept an A.I. which is "sentient." We can define that term however we wish-- is that correct?

ANSWER #1: Well, since I consider plants to be sentient, yet condone the "termination" of plant life (when there is good reason, of course), it would therefore be hypocritical of me not to call the termination of an A.I. that had no reservations about-- in fact, had a yearning for-- wreaking havoc within our society, which would presumably include endangering any number of human lives, a conceivably moral act. I phrase my judgement in conditional terms because I have not bothered to make up an exact scenario for what kind of chaos the A.I. had a yen for creating, which you also left undefined. I think it should be obvious that, if it could satisfy itself laughing over causing the NY Times to use the same date two days in a row, then "termination" would be a bit extreme of a reaction. If, on the other hand, the A.I. was intent on bigger thrills, was perhaps fascinated with the idea of trying to launch intercontinental cruise missiles to begin a nuclear war, well then that could not be tolerated. In short, I think the answer to your question, as presented, would often be obvious, depending on the undescribed specifics. You apparently do not have the same appreciation as I do of the expression, "the Devil is in the details."
ANSWER #2: Would it be "moral" to change the identity of a conscious, though artificially created, intelligence? Let me ask you: if it were to turn out that our own souls are reincarnated, but made to "forget" our past lives and who we formerly had been-- would that be "moral"?
I asked ChatGPT how you prove sentience, and here is its response:

"Sentience refers to the ability to perceive and experience subjective sensations and emotions. While it is difficult to prove definitively that another entity is sentient, there are several ways to assess or infer sentience:

- Behavioral and cognitive tests: One way to assess sentience is to design tests that measure cognitive abilities and emotional responses. For example, researchers have conducted tests on animals such as chimpanzees and dolphins that involve solving puzzles, recognizing themselves in mirrors, and demonstrating empathy or altruistic behavior.

- Neural activity: Another way to infer sentience is to look for evidence of neural activity associated with subjective experience, such as the presence of certain brain structures or patterns of neural firing that are correlated with emotions or conscious experience.

- Self-reporting: A being that is capable of language and communication can report its own subjective experiences, although there is no guarantee that the reports are truthful.

- Analogies: One can also use analogies with oneself to infer sentience in others. If a being exhibits similar behavior and responses to oneself, and if one knows that oneself is sentient, then one can infer that the other being may also be sentient.

Ultimately, proving sentience beyond any doubt is a difficult philosophical problem that may never be fully resolved. However, by using a combination of the above methods, one can make reasonable inferences about whether other entities, such as animals or artificial intelligences, are capable of experiencing subjective sensations and emotions."

That is actually the second answer. The first was completely different: its conversation turned to AI, and then the program got stuck. It could not finish the sentence, so I had to hit regenerate and got this.
With all your nattering you seem to have lost the point of the thread: "once an AI was confirmed to be sentient, would it be ethical to turn it off without reason?" "Without reason" means for no reason. We have come to terms with "sentience" and "turn off," but "no reason" seems to have eluded you. BTW, per ChatGPT, plants are not sentient.
You are right that those last two words of your OP did go without my marking them, I would imagine because the bulk of your post had been describing an A.I. which would be problematic. So, your true question all along has been this: after a considerable fortune had been invested in creating an artificial intelligence, would it be moral to somehow destroy that intelligence, for no reason whatsoever? In that case, I am sorry I bothered to respond to such a ridiculously unrealistic scenario. Why bother debating whether something that no one would ever do would be moral? Just the thought that we would develop A.I. and then, after all that trouble, not simply scrap it, but scrap it for absolutely no reason... yes, I had missed that element of your query, as I guess I could not conceive of anyone asking such a patently pointless question. My apologies, if my comments had disturbed the discussion.