Algorithmic Bias: Is Perfectly Imperfect Good Enough?


A Forbes article titled “Perfectly Imperfect: Coping With The ‘Flaws’ Of Artificial Intelligence (AI)” claims that AI will never be perfect and that society should expect and accept its bias. Implicit bias, poor data, and people’s expectations, it argues, ensure that AI can never be perfect; it goes further and calls AI perfectly imperfect (Sahota, 2020). Nevertheless, the article’s argument overlooks one thing: the unparalleled scale and speed at which AI can operate. As a McKinsey & Company article on bias in AI notes, “extensive evidence suggests that AI models can embed human and societal biases and deploy them at scale” (Silberg & Manyika, 2020). XKCD (AI Hiring Algorithm, n.d.) illustrates this well in Figure 1.

 

Cartoon showing a person explaining that a machine-learning hiring algorithm concluded the best people to hire are those who show interest in further developing the algorithm.

Figure 1. AI Hiring Algorithm. (Source: xkcd.com, n.d.)

 

Although it is reasonable to accept that no creation can be perfect and that the best we can do is minimize bias and other flaws in AI, we must consider AI’s large-scale reach as the main difference between it and other imperfect human creations, or even humans themselves. For instance, bias in a company’s HR department may produce an unwanted pattern that becomes visible when navigating the data. If an AI-developed algorithm decides that this pattern is relevant to its task, it may not only repeat the pattern but ultimately prioritize and replicate it as fast as the algorithm can recalculate, which in practice is nearly instantaneous. Even so, we do need to accept that some bias in AI will always be present. The argument here is that we should still reduce that bias as much as possible, or at least build tools that evaluate these algorithms to identify potential biases. We could think of such processes or programs as auditors that check for uncontrolled bias growth within AI-based algorithms.
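To make the “auditor” idea concrete, here is a minimal sketch of what one such check could look like. It computes selection rates by group for a hiring model’s decisions and applies the four-fifths (80%) rule used in US employment guidance as a rough adverse-impact flag. The data, group labels, and function names are all hypothetical, invented for illustration; a real audit would use many more metrics and real decision logs.

```python
# Hypothetical bias "auditor" sketch: flags uneven selection rates
# across groups using the four-fifths (80%) rule. All data is made up.

def selection_rates(decisions):
    """decisions: list of (group, hired) pairs -> hiring rate per group."""
    totals, hires = {}, {}
    for group, hired in decisions:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + (1 if hired else 0)
    return {g: hires[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group selection rate to the highest.
    Values below 0.8 suggest potential adverse impact."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Toy audit log: group A hired 40/100, group B hired 20/100.
decisions = ([("A", True)] * 40 + [("A", False)] * 60
             + [("B", True)] * 20 + [("B", False)] * 80)
ratio = disparate_impact(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.20 / 0.40 = 0.50 -> flag
```

An auditor like this could run periodically over the algorithm’s recent decisions, so that a pattern being amplified at machine speed is caught long before a human reviewer would notice it in the data.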

 

 

 

References:

 

Sahota, N. (2020, June 15). Perfectly Imperfect: Coping With The ‘Flaws’ Of Artificial Intelligence (AI). Forbes. https://www.forbes.com/sites/cognitiveworld/2020/06/15/perfectly-imperfect-coping-with-the-flaws-of-artificial-intelligence-ai/?sh=3990d40663ee

 

AI Hiring Algorithm. (n.d.). [Cartoon]. xkcd. https://xkcd.com/2237/

 

Silberg, J., & Manyika, J. (2020, July 22). Tackling bias in artificial intelligence (and in humans). McKinsey & Company. https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans#
