The controversial nature of using computer algorithms to make decisions

Discussion in 'Law & Justice' started by kazenatsu, Apr 27, 2018.

  1. kazenatsu

    kazenatsu Well-Known Member Past Donor

    Joined:
    May 15, 2017
    Messages:
    34,821
    Likes Received:
    11,301
    Trophy Points:
    113
    Unknown to most people, computer algorithms are now being used in many places to make decisions about how individual people get treated. They are already used in several states to help sentence people convicted of crimes, and loan companies use them to decide who gets a loan. But these algorithms have many potential pitfalls and may not make the fairest decisions for the people whose lives are affected.



    A quick imaginary example to illustrate the point. Suppose African Americans statistically have a higher prevalence of using a certain type of drug. The computer algorithm notices that defendants convicted of crimes involving this type of drug are more likely to reoffend with violent crimes. The computer is now essentially using unfair discrimination to give some criminals longer sentences, even though there is no direct cause-and-effect connection between using that particular type of drug and committing a violent crime.
     
    Last edited: Apr 27, 2018
  2. HonestJoe

    HonestJoe Well-Known Member Past Donor

    Joined:
    Oct 28, 2010
    Messages:
    14,890
    Likes Received:
    4,867
    Trophy Points:
    113
    First, “computer algorithm” and “artificial intelligence” aren’t the same thing. There is far too much blurring of technical terms in discussions in this area, even by people who should know better.

    Simple algorithms are really just sets of fixed rules that a computer program applies to sets of data. There’s technically nothing preventing the same rules from being applied by human beings; it would just typically be much slower and much more prone to error. Countless things these days are managed by this kind of software, though in scenarios where the outcome is potentially harmful or significant, there is typically some form of human intervention to confirm or override the results.
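
    To make that distinction concrete, here is a toy sketch (the rules, names, and thresholds are entirely made up, not any real lender's criteria): a "simple algorithm" in this sense is nothing more than fixed rules applied mechanically to each case, with a human-review branch for the borderline outcomes.

    ```python
    # Hypothetical fixed-rule decision procedure. A person could apply the
    # same rules by hand; the computer just does it faster and consistently.

    def loan_decision(income, debt, missed_payments):
        """Apply fixed, human-readable rules; return a decision string."""
        if missed_payments > 2:
            return "deny"
        if debt > 0.5 * income:
            # Borderline case: route to a human to confirm or override.
            return "refer to human reviewer"
        return "approve"

    print(loan_decision(income=60000, debt=10000, missed_payments=0))  # approve
    print(loan_decision(income=60000, debt=40000, missed_payments=0))  # refer to human reviewer
    ```

    Nothing here "learns" anything; every outcome is traceable to a rule someone wrote down, which is what separates this from the self-adjusting systems discussed below.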

    As far as I’m aware, true AI (software capable of making decisions entirely independently of its core programming, effectively changing the rules it applies) is not used in any live systems anywhere, though there will be various systems in development, both academic and commercial. I’d expect any implementation of this kind of thing, especially early adoption, to have all sorts of human monitoring and overrides.

    Interesting example, though is it really discrimination? If people convicted of particular offences are truly statistically more likely to reoffend, would it be wrong to treat them differently on that basis, regardless of any racial imbalances in use of the drug?

    Regardless, what makes you imagine a human making the same decisions wouldn’t be subject to the same bias? Indeed, in this kind of thing humans are probably more likely to allow bias to influence their decisions, since our subconscious (and sometimes conscious) biases aren’t rational. The computer could be told to ignore race and it would do so unconditionally. I don’t think any human is capable of that, however much they might want to.
     
  3. kazenatsu

    kazenatsu Well-Known Member Past Donor

    Yes, because the connection might be due entirely to race. Say you notice a correlation between crime A and the probability of future offenses. But when you break the data down by the race of the offenders, the connection holds only for African Americans, not for the other racial groups who committed crime A, and a high percentage of those convicted of crime A in the statistics are African American. That means the computer algorithm can predict future offense rates only because it has found a data input that statistically correlates with the subject's race. You are now essentially using race to predict future re-offense rates.
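
    A minimal sketch of that effect, with entirely made-up numbers (the groups, counts, and rates are all hypothetical): within each group, a crime-A conviction adds no information about reoffending, yet pooled across groups it looks predictive, purely because conviction rates and base reoffense rates both track group membership.

    ```python
    # Synthetic data illustrating a proxy variable. In group A, 40% reoffend
    # whether or not they committed crime A; in group B, 10% do either way.
    # But group A has a much higher crime-A conviction rate, so pooled data
    # makes crime A look predictive of reoffending.

    def rate(records, **conds):
        """Fraction of matching records that reoffended."""
        matches = [r for r in records if all(r[k] == v for k, v in conds.items())]
        return sum(r["reoffended"] for r in matches) / len(matches)

    records = []
    # (group, committed crime A, total count, number who reoffended)
    for group, crime_a, n, n_reoff in [
        ("A", True, 80, 32), ("A", False, 20, 8),   # 40% reoffend either way
        ("B", True, 20, 2),  ("B", False, 80, 8),   # 10% reoffend either way
    ]:
        records += [{"group": group, "crime_A": crime_a, "reoffended": i < n_reoff}
                    for i in range(n)]

    # Within each group, crime A tells you nothing extra...
    print(rate(records, group="A", crime_A=True), rate(records, group="A", crime_A=False))  # 0.4 0.4
    print(rate(records, group="B", crime_A=True), rate(records, group="B", crime_A=False))  # 0.1 0.1
    # ...but pooled across groups, it appears predictive:
    print(rate(records, crime_A=True), rate(records, crime_A=False))  # 0.34 0.16
    ```

    An algorithm trained on the pooled data, even with race removed as an input, would still give longer predicted-risk scores to crime-A offenders, because crime A is acting as a stand-in for group membership.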
     
    Last edited: Apr 27, 2018
  4. kazenatsu

    kazenatsu Well-Known Member Past Donor

    Computer algorithms can be racist. There was a case a few years ago where Google programmers incorporated picture-recognition technology into their search function. The program would teach itself to make connections between names and pictures, but the algorithm was not completely perfect: when people typed "gorilla" into the image search, some of the pictures displayed were of Black people. Google apologized, and programmers made a quick fix to the algorithm to prevent that from happening again.

    https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai
     
    Last edited: Apr 27, 2018
  5. dixon76710

    dixon76710 Well-Known Member

    Joined:
    Mar 9, 2010
    Messages:
    58,859
    Likes Received:
    4,554
    Trophy Points:
    113
    "There is nothing more painful for me at this stage in my life than to walk down the street and hear footsteps and start to think about robbery and then look around and see it’s somebody white and feel relieved." The Reverend Jesse Jackson. (Kennedy 1998, 16)
     