Growing debate on AI singularity

Discussion in 'Computers & Tech' started by Same Issues, Jul 5, 2014.

  1. Same Issues

    Same Issues Well-Known Member

    Joined:
    Jun 10, 2014
    Messages:
    1,561
    Likes Received:
    533
    Trophy Points:
    113
    Since reading Hugo de Garis's The Artilect War a few years ago I have been interested in the topic of artificial intelligence and where it will lead us in the future. Especially since the start of the millennium, most of the leaders in the field of AI have been raising the dangers that will be presented when machines reach an intelligence far greater than our own. Computers have been increasing in speed exponentially, following Moore's law, and with further work in quantum computing it seems they will only continue to get faster and "will outmatch not only your own intelligence, but the world's combined human intelligence too."

    As The Artilect War argues, I think this issue will soon dominate world politics, and I agree with de Garis that some engineers will try to produce godlike, massively intelligent machines almost as a religious pursuit, regardless of the consequences it could have on the human race (he even admits to being a "Cosmist" himself). I also agree with him that at some point "a general" will have his hand in making a monster before someone else (someone else being a hostile nation to his) does.

    As the article below states (as well as the book it cites), people will increasingly use cybernetics to increase lifespan and intelligence, but at what point do we stop, since all biological systems are perishable unless replaced? I tend to have a negative outlook on the possibilities of the future of robotics and AI. Anyone else?
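    As a side note, the exponential growth that Moore's law describes is easy to sketch numerically. The baseline figures below (the Intel 4004's roughly 2,300 transistors in 1971, doubling about every two years) are commonly cited illustrative values, not a precise model:

```python
# Rough sketch of Moore's law: transistor counts doubling about
# every two years. Baseline is the Intel 4004 (~2,300 transistors, 1971),
# a commonly cited historical example -- illustrative, not exact.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Estimate transistor count assuming steady doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for y in (1971, 1991, 2011):
    print(y, round(transistors(y)))
```

    Ten doublings (twenty years) multiplies the count by about a thousand, which is why the curve looks so dramatic over a few decades.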


    By 2045 'The Top Species Will No Longer Be Humans,' And That Could Be A Problem
    http://finance.yahoo.com/news/2045-physicist-says-top-species-123359838.html
    Del Monte spoke to us over the phone about his thoughts surrounding artificial intelligence and the singularity, an indeterminate point in the future when machine intelligence will outmatch not only your own intelligence, but the world's combined human intelligence too.

    The average estimate for when this will happen is 2040, though Del Monte says it might be as late as 2045. Either way, it's a timeframe of within three decades.


    also

    http://www.bbc.com/news/technology-27343076
    'Killer robots' to be debated at UN
     
  2. Sadistic-Savior

    Sadistic-Savior New Member Past Donor

    Joined:
    Jul 20, 2004
    Messages:
    32,931
    Likes Received:
    89
    Trophy Points:
    0
    The singularity is not going to happen in our lifetime...certainly not before quantum computers (real ones) become commonplace. AI is MUCH harder to do than a lot of people seem to think it is. Simply having a computer that can predict human actions with some success is not enough to qualify as self-aware.
     
  3. Same Issues

    Same Issues Well-Known Member

    Joined:
    Jun 10, 2014
    Messages:
    1,561
    Likes Received:
    533
    Trophy Points:
    113
    Those researchers are still trying to define what intelligence is. Once they do (or start to get a grasp on it), things will move towards a more "real" AI, or a self-aware AI.

    But it's true, it's probably a ways off. Still, 50 to 100 years is not that far in the future, and it's a realistic goal in the grand scheme of things.
     