AI Bias: Human Bias in Artificial-Intelligence-Developed Algorithms

Ever wonder how smart assistants like Siri or Alexa keep getting better at understanding voice commands, or how Netflix keeps improving its content suggestions? Or how you end up seeing ads for something you never searched for but that was exactly what you needed? These are some of the most visible, and least critical, interactions we have with Artificial-Intelligence-developed algorithms every day of our lives. These algorithms are designed to learn from us, analyze our interests and decision-making patterns, and ultimately outsmart us.

The Wall Street Journal published an interesting article describing how the use of AI is expanding: “We are witnessing a turning point for artificial intelligence, as more of it comes down from the clouds and into our smartphones and automobiles (…) Shield AI, a contractor for the Department of Defense, has put a great deal of AI into quadcopter-style drones which have already carried out—and continue to be used in—real-world combat missions.” (Mims, 2021). This is just one more example of how humanity keeps developing Artificial-Intelligence-based automation and transitioning ever more critical decision-making into the hands of Artificial-Intelligence-developed algorithms.

As I have discussed in previous posts, data-based algorithmic decision making can contain unwanted bias. It is therefore important to develop effective methods that prevent these algorithms from encoding unwanted bias, especially now that machine learning and AI are expanding into critical aspects of our society.
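To make "unwanted bias" a little more concrete, here is a minimal sketch of one widely used fairness check, the demographic parity difference: the gap between groups in how often a model produces a positive decision. This is my own illustration, not a method from the article or any particular standard; the function name and data are hypothetical.

```python
# Minimal sketch of a demographic parity check.
# All names and data are hypothetical illustrations, not a reference implementation.

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rates
    across groups (0.0 means every group is decided at the same rate)."""
    stats = {}  # group -> [positive predictions, total predictions]
    for pred, group in zip(predictions, groups):
        totals = stats.setdefault(group, [0, 0])
        totals[0] += pred
        totals[1] += 1
    rates = [pos / total for pos, total in stats.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval decisions (1 = approved) for two groups.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A gap near zero does not prove an algorithm is fair, of course; it is just one measurable signal of the kind a quality standard could require evaluators to report.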

I am trying to stay positive and expect that the more this topic is discussed, the more public interest it will gain. We should expect rigorous methods for evaluating algorithms and require the algorithms to meet quality standards, and we should avoid the easy mistake of bending the standards to fit the algorithms. Figure 1 expresses this sentiment with some good-quality humor.


Cartoon: "Despite our great research results, some have questioned our AI-based methodology. But we trained a classifier on a collection of good and bad methodology sections, and it says ours is fine."
Figure 1. AI Methodology. (Source: xkcd.com, n.d.)

 

References:

Mims, C. (2021, June 26). How AI is taking over our gadgets. The Wall Street Journal. https://www.wsj.com/articles/how-ai-is-taking-over-our-gadgets-11624680004

AI methodology [Cartoon]. (n.d.). xkcd. https://xkcd.com/2451/
