Are you for AI?

Discussion in 'Opinion POLLS' started by Nonnie, Jul 8, 2023.


My thoughts on AI

  1. For AI

    8 vote(s)
    32.0%
  2. Against AI

    7 vote(s)
    28.0%
  3. Not sure, mixed feelings

    10 vote(s)
    40.0%
  1. Grey Matter

    No, you don't. You are assuming that AI will never become autonomous, which is what super-intelligence means: the potential that these things could become independently capable of thought, with capabilities far exceeding those of the very most intelligent humans. It is the stuff of science fiction, but then again, so was the tricorder once upon a time; now it is simply called an iPhone or a Samsung.
     
  2. Thingamabob

    Sorry, but your question does not compute.
     
  3. Thingamabob

    And what if it develops capabilities sufficient to manipulate its way (by bypassing safeguards) to the nuclear button? There is no machine I am aware of that shares human morality.
     
    Last edited: Jul 12, 2023
  4. JohnHamilton

    That is a dangerous slippery slope, like assuming that a big, all-powerful government will always have your best interests at heart. Who programs the moral values into these machines? What would stop these machines from deciding that they are superior to humans, since they are immortal and not subject to the weakness of mortal flesh? How would you ensure that they would not end up like HAL from the Stanley Kubrick film 2001: A Space Odyssey?
     
  5. yardmeat

    There is also no machine you are aware of that is capable of doing what you propose here. A machine sophisticated enough to do that may well also be sophisticated enough for basic morality. Regardless, what reason would said machine have for nuking the planet?
     
  6. Grey Matter

    Yes sir, these are the concerns of brainiacs like Hawking.
     
  7. 19Crib

    No. I see talking to AI instead of India when you need customer service.
    I can see governments using it to create categories of people based on specific traits.
    I can expect early intervention for children deemed likely to become criminals.
    I can imagine it running major US systems and, like Boeing's faulty autopilot, landing us nose-down in a field.
    Lastly, it doesn't draw a paycheck or pay taxes, so what good is it?
     
  8. Thingamabob

    My question is conditional. Read it again.

    You clearly do not understand what "morality" is.
    1). Winning. Machines are programmed for it.

    2). Advantage.

    3). Because it can.

    4). Because a machine cannot "be sophisticated enough for basic morality". Again, look up the term 'morality'.
     
  9. LiveUninhibited

    We are machines: biological ones far more sophisticated in some ways than current artificial intelligence, but obviously with less potential in other ways (much slower and less precise). Morality is taking the actions that achieve the greatest good and do not violate the rights of others. It's based upon rules and upon weighing harms against benefits. Sophisticated machines ought to be better at it than humans; humans have illogical desires, adaptive in prehistory, that get in the way of morality.

    I expect someday humans will be hybridized with machines to make a much more sophisticated hybrid, combining the advantages of both, and those that fail to adapt will likely end up rather like Native American tribes trying to live traditionally today, albeit hopefully with less drug use and poverty, and probably without incidental genocide along the way.
     
  10. Thingamabob

    No. Not necessarily.
    No. It is based upon perceived harms vs. benefits.
    Impossible. Morality is linked with emotions. Machines cannot be programmed to act upon emotions or morality.
     
  11. undertheice

    the very concept of ai is fatally flawed. such an "intelligence" is based on an initial algorithm and that algorithm is influenced by its creator. where organic intelligence is influenced by chaos, an "artificial intelligence" can never be touched by chaos. chaos is inherently absent from the digital world. though an "artificial intelligence" may be capable of learning and changing, it will do so only in a logical fashion. there can be no great leaps, no flights of fancy that lead to the unexpected. it will always be dependent on the initial algorithm.

    some fools may see this as a step forward, but think about it for a moment. much of our morality is entirely illogical. to save one's own child at the expense of several others would seem illogical, but most of us would do it anyway. the very concept of familial responsibility is illogical, but it is the cornerstone of civilization. if the initial algorithm is based on such precepts, then one might create a simplistic intelligence similar to that of humanity, but then chaos creeps in. how does an entirely rational "being" deal with chaos? one must suppose that it would depend on how the originators of the initial algorithm viewed chaos.

    i recently read that google was toying with the idea of basing "google news" on an ai algorithm and it gave me another reason to ignore google news. it was bad enough when the "news" was brought to us by a revolving series of biased reporters, but now we will have a single bias deciding what is "newsworthy" and what is not and that bias will be predicated on the initial algorithm that the ai was built upon.
     
  12. Chrizton

    I am not worried about it. The law of diminishing returns will balance out its deployment. Call that return on investment or scalability if you prefer. Just because something is theoretically possible to do, or you build a few, does not mean that enough people would be willing to pay for it to make it a marketable product. Broom and mop sales still leave Roomba sales in the dust.
     
  13. MississippiMud

    The cat is already out of the bag with AI. We are already in the midst of an AI arms race and there is no stopping it. We can hope for the best and prepare for the worst. The difference between AI and a nuclear bomb is that nukes can't create better nukes. AI can and IS already creating better AI.

    AI will bring amazing discoveries for humanity: major, even miraculous, advances in medicine, and perhaps even a solution to the world's energy problem. There are also great risks that even the developers of AI technology have not yet seen.

    Anyone with serious interest or concern might want to take a look at these two presentations on YouTube:
    The A.I. Dilemma, from the Center for Humane Technology
    AI and the Future of Humanity, by Yuval Noah Harari
     
  14. edna kawabata

    Fatally flawed? Have you not considered that our intelligence is based on an organic algorithm? Chaos is confusion. When we are confused about something, we may reboot the algorithm, or reexamine the evidence and attempt to make sense of it by looking at it from a different angle. AI likewise becomes confused. Its output may contain errors or outright fabrications, which programmers call "hallucinations" and which have no basis in reality… it's confused.

    Also, morals are completely logical if you look at them in terms of preservation of the species and prioritizing our genetic line.

    I've read that current AI has an IQ of about 80 or 90, but programmers can eventually double that. I think, with the chaos theme, you were going for AI's inability, if not confronted by chaos, to create something wholly new, like art or technological invention. It has already shown that it can use its imagination, or what they call "hallucinations", but just think what it can do when it is smarter than humans.
     
    Last edited: Jul 24, 2023
  15. LiveUninhibited

    Morality is not determined by the emotions of the entity being moral. Very often, being moral means acting against one's own emotions in consideration of others' emotions. Emotions can be part of the harms and benefits, and that part depends on the entity experiencing the emotions. It is likely that sufficiently sophisticated machines can experience emotions, since we are biological machines anyway, but acting morally is not dependent upon that.
     
  16. LiveUninhibited

    Not necessarily. If it escapes that, that is what we would call the technological singularity. We are machines, just built from different kinds of parts.

    That's not illogical at all; in fact, it is entirely predictable. Natural selection is based upon your offspring surviving, so an intense love for one's offspring at the expense of all others is the logical outcome of natural selection (at least if the reproductive strategy is raising a small number of offspring well). Whether it is the pinnacle of morality is a different matter: the moral answer would be the greatest good, but weighing lives against each other is not so easy.

    Not illogical at all. In fact, it leverages people's natural tendencies (for the reasons stated above) to form more stable social structures.

    I'm not even sure what you mean by chaos.
     
    Last edited: Jul 24, 2023
  17. Thingamabob

    No, I don't believe that. If there is only room for 20 people in a life raft and it's the machine's job to take the first 20 people, then that's what it will do. But a human with the same task will see that his wife is number 37, and what do you think he'll do? Let her drown?
     
  18. edna kawabata

    Would he do the moral thing, or go with his emotions?
    Actually, what the machine is doing in taking the first 20 is amoral. The moral thing to do would be to draw straws, if your morality tells you all humans are of equal value.
     
  19. Thingamabob

    That's the question.
    That's my point. Well, I guess it depends on the correct definition: does amoral mean the absence of morals, or anti-moral?
    That's a contradiction, because drawing straws isn't a moral thing to do, and also because the machine in your example isn't acting on morals at all, which again is my point. Know what I mean?
     
  20. Josh77

    Morals are subjective and dictated by culture, though there is often much common ground, such as "murder is bad," etc.
    I think it has more to do with the functioning of society than with good and evil.
     
  21. edna kawabata

    Amoral is the absence of morals. The machine is following a rule without reference to any moral judgement.
    To draw straws is to say that you cannot choose between individuals because they are of equal worth; since it is impossible to choose between them, if only 20 can live, fairness demands that it be left up to chance. Know what I mean?
    If you believe people are of unequal worth, that involves bias and is morally suspect.
     
  22. Thingamabob

    I'll accept that.
    It is not a rule; it is a program concocted from man's rules.

    Yes.
    No. That's man's interpretation, not the machine's. Furthermore, it is not moral at all.
    That's not chance. That is a program.
    No, you are completely wrong, and this is at the center of MY WHOLE POINT: that machines cannot be taught morals or emotions.

    A machine can be programmed to take the FIRST 20 (see the sketch below), but the machine cannot weigh the moral aspects of (as in this case) survival:

    * Those who are "handy" at building and creating?
    * Those who are men and women of God?
    * Male reproductive capability?
    * Pregnant women?

    → A machine cannot make such choices, because they are situation-sensitive and (among many things) involve morals and emotions.
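    A rough Python sketch of that distinction (the names and weights are hypothetical, purely for the sake of argument): the "first 20" rule is trivially programmable, while any "moral" ranking merely reflects whatever weights a human chose to encode.

    [code]
    # "First 20" rule: the machine follows it blindly; no moral judgment involved.
    passengers = [{"name": f"p{i}", "pregnant": i % 7 == 0} for i in range(1, 51)]
    first_twenty = passengers[:20]

    # Any "moral" ranking only reflects weights a person chose to program in.
    PREGNANCY_WEIGHT = 10  # arbitrary human-chosen value, not the machine's judgment
    ranked = sorted(passengers, key=lambda p: PREGNANCY_WEIGHT * p["pregnant"], reverse=True)
    weighted_twenty = ranked[:20]
    [/code]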
     
  23. Thingamabob

    Fully agreed.
    I disagree. Making war is justifying murder by calling it something else.
     
  24. Josh77

    Right, I agree, but I think wars are often justified morally by a culture, by making those outside the culture out to be the bad guys. It is for the benefit of the culture, or at least made to appear that way through propaganda. But the murder of an individual within a culture is usually seen as a bad thing, as it harms the group.
     
  25. edna kawabata

    You seem to have lost contact with your own scenario. You are the one telling the machine to choose the "first" 20 to save; it's a rule you chose. I asked OpenAI's ChatGPT for the best way to choose 20 to save out of many:

    "Random selection: To ensure fairness and avoid favoritism, consider a random selection process. One way to do this is by using numbered lots or drawing names from a hat. This way, the decision is not influenced by any biases or preferences."

    And that is what the moral human should do, as I said, and it is the "machine's" interpretation. It wasn't followed because you didn't give it that choice.
    I see you are willing to decide that some humans are more valuable than others. AI disagrees.
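    In rough Python (a hypothetical passenger list, purely for illustration), the random selection the quoted answer describes is just a draw by lot; no human-chosen weights enter the decision:

    [code]
    import random

    # Drawing lots, as the quoted answer suggests: every passenger has an
    # equal chance of being chosen.
    passengers = [f"passenger_{i}" for i in range(1, 51)]  # hypothetical manifest
    saved = random.sample(passengers, 20)  # unbiased draw of 20
    [/code]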
     
