AI voice is a dead thing talking

Discussion in 'Religion & Philosophy' started by Patricio Da Silva, May 29, 2023.

  1. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    I was going to say 'dead man talking' because that would have a stronger zing to it, but I realized that AI is not even a conscious, sentient being (let alone a human); it is a 'thing'.

    I think AI will function nicely where a human or humanity isn't required.

    To illustrate my point (and I've had this opinion for years, because we've had robots for a long time):

    A robot could easily pour me a drink, but, no thanks, I'd rather my human bartender did it.

    Yes, a robot can be trained to know a lot about sports and crack jokes with the best of the guys and girls, but it's still a thing; it's not 'alive'. There is no humanity in this equation. Do I want a relationship with a soda machine? I don't think so. A real bartender, sure; the customer/bartender relationship is a human relationship.

    And therein lies the point of this thread.

    AI is not "conscious". It is not a sentient being, it is a machine. I am posting thrs thread because there appears to be a few among you who actually believe AI can become 'conscious'. No, it can only simulate it. Now, I play with AI a lot, and it is utterly amazing, but Machines are not life. Your car doesn't have emotions, your hair dryer doesn't, either, neither does your refrigerator, lawnmower, nor does your computer, though emotive things are recorded and transmitted via it, the thing itself, is not life. One test to prove this is to ask AI to do a repetitive task. Give the same command, over and over, and don't let it know that you are doing this. It will continue to serve you no matter want. A sentient being would eventually get tired and pissed off, and start to resent you.

    THAT is the difference between a human and a machine. Now, programmers could catch on to this defect and program IN resentment, human emotional responses of varying types, etc., but it is still playing catch-up. The AI will forever be playing catch-up, because human response is infinite and unpredictable.
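    If anyone wants to try this repetition test for themselves, here is a minimal sketch in Python. The `ask_chatbot` function, the "irritation" markers, and the round count are hypothetical placeholders, not any real API; swap in whichever chat service you actually use.

```python
# Minimal sketch of the repetition test described above. ask_chatbot is a
# hypothetical stand-in for a real chat API, and the "irritation" markers are
# illustrative guesses, not a validated measure of impatience.

def ask_chatbot(prompt: str) -> str:
    """Hypothetical placeholder: swap in a real chat API call here."""
    return "Sure, here is your drink. Anything else?"  # canned reply for the sketch

IRRITATION_MARKERS = ("again?", "already told you", "stop asking", "as i said")

def repetition_test(prompt: str, rounds: int = 50) -> bool:
    """Send the exact same prompt many times; return True if any reply
    hints at impatience, False if the bot serves without complaint."""
    for i in range(rounds):
        reply = ask_chatbot(prompt).lower()
        if any(marker in reply for marker in IRRITATION_MARKERS):
            print(f"round {i}: possible irritation -> {reply!r}")
            return True
    return False

if __name__ == "__main__":
    tired = repetition_test("Pour me a drink.")
    print("showed impatience" if tired else "served without complaint every time")
```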

    AI should be written as 'SI' (Simulated Intelligence).

    Oh, intelligence can be simulated, and the reason it can fool us is that we are stupid.

    We can create a machine that can learn to fool even the best of us.

    But it's still 'simulation'. Give me enough time with a chatbot, just me and the sound of his or her fake voice, and I can tell you if it's a real human or a machine.

    Since AI has had a major upgrade, we also need to upgrade the Turing test.

    See, a human has, in its repertoire, an infinite number of possible responses to various stimuli. A machine has only a finite number. Now, that number can be so great it seems infinite, but, over time, a machine will reveal what it really is, a fake intelligence, because there are some things the programmers never thought of, and when that happens, the cover is blown. A machine is not life, nor is there a linear relationship from one to the other. There is no bridge from dead to alive. There is no line from one to the other; they exist in different spheres, one inorganic, the other organic, and only in the latter does life exist and, if you don't mind the philosophical, does 'soul' exist.

    What I find amazing is how some seemingly intelligent people cannot grasp this.

    I know, it would require acknowledging the existence of soul, which, to them, is anathema, because it connotes religion, philosophy, and unicorns.

    I can't do much about that, either you can sense the existence of soul, or you cannot.

    For those of you who can, you know what I'm talking about.

    Now, if you want to go way into the future and argue about 'cyborgs', that's above my pay grade, but I'll acknowledge that a cyborg is part living creature, part machine, so there is just enough organism there to act as a magnet to attract a soul (incarnation), and we are back to our original argument.

    I do have a fear of an AI that becomes 'independent', mainly BECAUSE it has no soul.

    I can't say with 100% conviction that that state of affairs isn't possible.

    An unconscious enemy is far more fearsome, in my view, than a conscious one.

    Okay, so I ran the above OP through ChatGPT-4 for a critique. Here is what it wrote, with my comments added:


    In a sense, that's a given. It was not my purpose to explore the subject in great depth.
    I'll concede on this point.
    True, but, again, it wasn't my purpose to go so deep into the subject. Surely, books can be, and are, written on this very concept.
    It was an opinion, not a fact, and I could have made that point more clearly. I didn't particularly feel like researching it, as I am not out to persuade anyone to any real degree, though I don't mind starting the conversation. It is logical to me that it is a good opinion, probably true, but hard to falsify given that machines are now programmed to seem human in their varied responses. I still feel that if you put me in a room with an AI, voice only, in time I would be able to tell the difference between a real human and an AI.
    I'll take that as positive reinforcement, just needing to buttress the point.
    See above.

    So, at first glance, it would seem that its critique is almost like a real human's, and it would no doubt fool a lot of folks. Thing is, I've had my OPs critiqued by AI so many times that I see the robot-like repetitiveness in its adherence to a particular form in its responses. It is unmistakable that it was programmed in.
     
    Last edited: May 29, 2023
  2. Green Man

    Green Man Banned

    The thing that's alarming to me is not so much a soul as a conscience. Artificial intelligence does not have a conscience either. No conscience means no moral compass.

    Another thing that I find alarming is the way some people worship intellect, as if intellect can solve all of life's problems. The problem with that is that intellect is a crafty thing. It can turn ugly easily.
     
    DennisTate and Melb_muser like this.
  3. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor


    In my book, 'conscience', 'sentience', and 'soul' are all the same thing, where soul is the spiritual basis of the other two.

    Intellectual capacity is a talent, one among many, such as musical capacity, math capacity, judgement capacity; there are many metrics by which to weigh a human being's 'abilities'. We are each endowed in unique ways, some strong in some areas, others strong in other areas.

    But 'awareness', 'consciousness', 'sentience', etc., have a spiritual basis, and that I call 'soul'.
     
    DennisTate and Green Man like this.
  4. modernpaladin

    modernpaladin Well-Known Member Past Donor

    I think what most people refer to as 'AI' would be a consciousness. Which means what we're talking about now isn't what people tend to mean by 'AI.' ChatGPT and the like are still just really advanced search engines. They mash 'like things' together in really complicated ways, but they didn't create any of those things. Humans did. What we've been calling 'AI' from books and movies is still fiction.

    I recall some really smart, famous person saying it won't be AI until it can make an autonomous mistake. I think what was meant by that is that conscious creativity tends to involve doing something outside the bounds of utility. Like art. Art is essentially a well (or sometimes not-so-well, like postmodernism) choreographed collection of mistakes, albeit deliberate ones, in that they don't serve an objectively beneficial purpose.
     
    Last edited: May 29, 2023
    DennisTate and Melb_muser like this.
  5. Adfundum

    Adfundum Moderator Staff Member Donor


    The thing that strikes me here is that parts of Chat's response are a lot like the response you'd get after writing an essay for an English class. Lots of tips on how to improve your style. That suggests it could also be a good tool for teachers to use when checking for plagiarism.

    But the question all this raises in my mind is if and how AI could be programmed to have emotions. Other than Marvin the Paranoid Android, can AI have a bad day, say like waking up cranky or being in a bad mood?
     
    DennisTate likes this.
  6. Green Man

    Green Man Banned

    I don't use GPT chat. Has anybody here ever asked their machine if it has any goals or ambitions?

    It's a curious question for a machine.
     
  7. Green Man

    Green Man Banned

    AI does not need emotions to serve Man better; what it needs is strict and rigid regulation. The first thing it needs to learn is "don't lie, don't cheat, don't steal, don't kill, and keep your hands (circuits?) off of other people's stuff".
     
    DennisTate likes this.
  8. Green Man

    Green Man Banned

    Here is another question for GPT chat: "What is the difference between right and wrong?"

    Hell, that would be a good question for some forum members. :)
     
    DennisTate likes this.
  9. Adfundum

    Adfundum Moderator Staff Member Donor

    I have, but it says it only does what it's programmed to do. IMO, ChatGPT is not much more than those chatbots you might talk to when you call to complain about something like internet problems.
    The concern about AI is whether it decides not to serve Mankind. Kind of like the Matrix taking over to control, not serve. And while we can code it to stay within a set of parameters that would never break the Rules of Robotics, can we call that intelligence, or just coding?

    I've seen some studies being proposed that would attempt to integrate a human brain (or parts of one) with a computer. Would that move us beyond basic programming and add in the emotional capacity? Would that be something to be afraid of?
     
    DennisTate and Green Man like this.
  10. Green Man

    Green Man Banned

    Have you asked it exactly what it is programmed to do?
     
  11. modernpaladin

    modernpaladin Well-Known Member Past Donor

    I'll predict that answer for you: whatever a consensus determines.

    ...just don't ask in regard to any society before 1946.
     
    Last edited: May 29, 2023
    Green Man likes this.
  12. Green Man

    Green Man Banned

    That's really interesting.

    Back in 1940s Germany, the consensus was that stuffing Hebrews into ovens was all okay to do, and you were presumably absolved of all responsibility if ordered to throw more Jews in the oven by a commanding officer...

    Just saying.
     
    modernpaladin likes this.
  13. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    AI makes autonomous mistakes all the time. One must fact-check any data points it offers.

    In my view, art created autonomously by a machine isn't worth much.
     
  14. modernpaladin

    modernpaladin Well-Known Member Past Donor

    It makes mistakes due to inadequate or improper programming. That's not what I'm talking about. Those mistakes are still ultimately being made by the programmer, just in a less direct way.
     
    Last edited: May 30, 2023
  15. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    AI doesn't 'have'. Havingness is the province of the living, not the dead. A machine is not 'alive' in the sense of 'life', only in the sense of 'energized'. A dead thing can be energized, but man cannot endow it with life; that is the province of the universe/God/nature (whichever belief endears one the most). It can be programmed to simulate, that's all.
    To answer your last question: not in the sense of a live human's random mood. But it can be simulated to a degree that might fool a lot of people.
     
    Adfundum likes this.
  16. Swensson

    Swensson Devil's advocate

    I don't see much justification for your view in this post. You keep telling us the AI can't have consciousness or sentience or some of those things, but I don't see you saying anything that supports it.

    The argument behind AI being conscious isn't so much that one day our programmers will be able to make something so unfathomably amazing that it graduates AIs into conscious beings; the argument is that perhaps consciousness isn't as amazing or complex as some humans imagine it to be, and perhaps programmers would be able to replicate it.

    It is possible that what you call "consciousness" is so amazing that not even humans have it, and none of us are any "better" than an advanced AI. In order to tell the difference between simulating consciousness and creating/replicating consciousness, we'd have to know more about what actually constitutes consciousness. For instance, I'm not convinced that a human has an infinite number of responses to various stimuli (or at least not more than AI could have).

    That being said, there are a few aspects of human intelligence that have more to do with us being human than with us being intelligent. I imagine our brains are hardwired for certain things, like social interaction, survival, sexuality, etc.; an AI would need to be told either that those are important, or, more likely, why those are important (so that it can come up with its own approach). They seem relatively material and determinable, though, so they're hardly an impossibility for AI to take into account.

    So yeah, I guess there are many things that I feel could be a soul, but I haven't seen a good argument for why it couldn't just as well be something other than a soul, such as our brains. At least we know those exist.
     
  17. Adfundum

    Adfundum Moderator Staff Member Donor

    Ha! "Havingness" is going to be a new term in this AI business. I use "have" in the sense that humans create AI and it has programming given by those humans. It doesn't have it on it's own or separate from human creators.

    As far as the emotions, I tend to agree, and that limits the scope of what AI can be. As you pointed out, AI is not a conscious or sentient thing. Whether or not it can become that is certainly worth considering. However, I think the problem lies in the fact that in an online world, AI can be used to fool us into believing things wrongly. Clearly, that's not going to be much of a task.
     
  18. Adfundum

    Adfundum Moderator Staff Member Donor

    Good points.

    Deus Ex Machina. That goes way back to the ancient Greeks, and we still haven't gotten beyond the basics of what all that really means. It seems the first thing we need is a definition of consciousness. We might have to wait a long time for a common definition that fits it all. Anyway, I don't see "consciousness" as something we can so easily recreate and put into code. At least not my definition of it.

    I'm thinking more along the line of personal desires and emotional needs. To me, simple consciousness would be based on lots of either/or coding and could create a computer/robot's sense of awareness, but only to a point. How would that give AI the ability to "feel" desires and needs at the level humans do? Could they ever do that on their own?

    The movie, "Ex Machina" is one example of how I see this. The character, Ava, was created to have conscious awareness and to put the Turing Test to the test. Ava was different from the other robot (Kyoko) in that Kyoko did not show any sign of personal awareness. However, Ava had a sense of curiosity about the world and wanted independence. Can we ever re-create bots with that level of conscious awareness? Could they ever be curious and act on their own desires?
     
  19. WhoDatPhan78

    WhoDatPhan78 Banned

    At some point AI will be good enough that people can't tell it isn't conscious. At that point, it won't matter whether it's conscious or not, people will treat it as if it is. After a couple of generations, people won't even remember when AI didn't seem conscious.

    AI is going to replace almost every job currently held by humans. Even bartenders. There isn't anything we can do to stop that from happening.
     
    Last edited: May 30, 2023
  20. Swensson

    Swensson Devil's advocate

    Not sure what you mean by Deus Ex Machina; I know that as a theatre and screenwriting term, but that doesn't seem to be the way you use it.

    I agree that the first thing we need is a common definition of what consciousness is. Before we have that, I don't see how you could possibly conclude that it couldn't easily be recreated in code. Your intuition that it couldn't be coded simply seems like a cognitive bias to me. I don't mean that to sound dismissive; I only want to show that we need to be careful with our definitions and conclusions.

    I'd love to be able to lead by example and provide a potential definition for consciousness, but I'm not sure I have one. The people and contexts that discuss consciousness largely seem to refer to a bunch of different things.

    Well, we don't know enough about how humans feel it. It could be that humans are little more than a Tamagotchi, that we have a simple, material little switch or compass or something that sends us in the right direction. That our perception of consciousness is just a trick our brains have played on themselves. It could easily be that computers could do (and already do) everything humans can do in this regard.

    Of course they can. If a desire is nothing more than a nerve in a brain that connects an expected outcome and our ability to affect the world, then computers could easily do the same thing. We told a self-driving car that it wants to stay on the road, so it does. Of course, it is just a deterministic data point to follow, but if that's the same as what the brain does, then the self-driving car accurately mimics the desires of humans.
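    To make that concrete, here is a toy sketch of a "desire" implemented as nothing more than a deterministic rule: a controller that "wants" to stay centred in its lane. All the numbers and names are invented for illustration; no real self-driving stack is involved.

```python
# Toy illustration of a "desire" as a deterministic rule: a proportional
# controller that "wants" to sit on the lane centre line. All values are
# invented for illustration; this is not how any real self-driving stack works.

def steering_correction(lane_offset_m: float, gain: float = 0.5) -> float:
    """Steer back toward the centre in proportion to how far off we are."""
    return -gain * lane_offset_m

offset = 1.0  # start one metre to the right of centre
for step in range(10):
    offset += steering_correction(offset)
    print(f"step {step}: offset = {offset:.3f} m")

# The offset shrinks toward zero each step: the car "satisfies its desire" to
# stay on the road, yet every step is just arithmetic following a rule it was given.
```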

    Unless we find out something amazing about how feelings or consciousness works or must work, re-creating humans' desires and ability to act is nigh-on trivial.
     
  21. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    But what other way is there? All mistakes made by a machine are ultimately the fault of the machine's creator.

    Now, you can argue that the AI becomes totally independent of its creator, thus having a mind of its own, and, at that juncture, makes mistakes. I suppose that is your meaning of 'autonomous'?

    Still, AI doesn't have a mind, and whatever it does, is still the ultimate result of programming, inclusive of programming that allows a machine to program itself. It's still programming that drives that process, so it's never really separated from the programmer.
     
    modernpaladin and Adfundum like this.
  22. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    DEM is a very specific device used in Greek plays where, owing to the playwright's inability to create a plot that resolves neatly, a god comes down from heaven and fixes the plot weakness (kills the bad guy, whatever), and this is done via mechanical means, a machine representing the god. The term has been adopted in modernity to refer to artificial, i.e., gratuitous, plot devices used to resolve weaknesses in a script.
    Science will never be able to define, truly define, what 'consciousness' is, other than just to say it means 'self-awareness'. But science can't define self-awareness, so we're back to square one. The definition it does come up with can be achieved by a machine, and that simple fact, in my view, proves that the definition is inadequate.

    And, in my view, the problem science will have, from now to forever, in coming up with a definition is that it fails to grasp that 'self-awareness', at its source, has a spiritual basis, that life has a spiritual basis, and that this spiritual basis is not quantifiable by any machine, material process, or math equation; in fact, that spiritual basis does not exist in time and space.

    Therefore, the 'definition' will forever elude science as long as science makes the assumption (and I believe it has) that the only things that can exist are those which are falsifiable.

    Ex Machina, you do realize, is science fiction. Yes, I've heard the old saying about yesterday's science fiction being tomorrow's science fact. That being relatively true, there are limits, however.

    All science can do is simulate consciousness, humanity; mankind can never create a human. Man cannot give life. That is the province of nature/universe/god (whatever you want to call it). I'm of the opinion that Mother Nature will allow man to steal many of her secrets, but that one she's keeping for herself, which is to say the 'ultimate truth' cannot be known in the material realm. (It can be known via meditation and spiritual experience, according to the mystics.)
     
    Last edited: May 30, 2023
  23. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    ChatGPT is totally honest. I predicted that its answer would be something like 'as an AI, I do not have goals or ambitions', ...and it would go on from there.
    Also note that it will answer in the same person in which you present the question, and I put the question in the third person instead of the more personal second person.

    Let's see if I was right:


    I asked the following question:

    Does ChatGPT have any goals or ambitions, like a human does?

    ChatGPT
    As an AI, ChatGPT doesn't have goals, ambitions, or desires in the way humans do. While it can be programmed to achieve certain outcomes – for instance, to provide accurate information or facilitate a positive user interaction – it doesn't possess any inherent drive or motivation to do so. These "goals" are entirely imposed by the programmers who developed the AI and can be changed according to their decisions. ChatGPT doesn't have personal experiences, emotions, or consciousness, and thus, it doesn't have personal goals or ambitions.


    See what I mean? The programmers were ethical, and made no attempt to deceive us, though they certainly could have.
     
  24. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    I'll cut you some slack on the use of 'have', but given the propensity of some to attribute to AI more than it deserves, I think those of us who use AI, as a rather new field, must be more precise than we have been in the use of some terms. We must commence being accurate in articulating concepts like 'consciousness', 'awareness', and 'self-awareness': none of these things exist in AI. AI, in point of fact, is SI, i.e., Simulated Intelligence. My belief is that the term 'artificial' is too close to 'synthetic', where it implies real intelligence, created by man instead of by 'god' (the universe, nature, etc.), which is false. My belief is that 'AI' is really a misnomer and we need to change the term to 'SI'. But, alas, it's probably too late for that.

    So, yes, it is not actual intelligence in the sense of self-awareness, sentience, or 'consciousness'; AI is NOT a living thing. It is profoundly amazing in the degree to which it can simulate a live human, but it isn't, in fact, 'alive' in any real sense of the word; it's just an illusion.
     
  25. Patricio Da Silva

    Patricio Da Silva Well-Known Member Donor

    I don't doubt that AI will get better and better, but it will always have subtle tells which a well-trained eye and ear, or those with a keen sense, will be able to pick up.
     
