Algorithmic Bias Examples

    One of the most famous cases of algorithmic bias at the corporate level was Amazon’s AI recruiting tool. Reuters reported that Amazon had developed an AI tool to filter job candidates, but it had to be shut down because of the bias found in it: “machine-learning specialists uncovered a big problem: their new recruiting engine did not like women (…) computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry” (Dastin, 2018). Amazon did not have a technical problem, but a conceptual one. Artificial intelligence is only as good as the data it is trained on, and that data can carry many human biases. As software-based alternatives to complex, data-heavy human processes become more common, companies much smaller than Amazon will also start delegating certain decisions to such systems. The concern is that, without proper prevention, issues like the one in Amazon’s tool will multiply across many industries. There is a need for standards or regulations to prevent algorithms, and especially machine-learning-produced algorithms, from circumventing existing laws such as anti-discrimination legislation.
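
    As a rough illustration of what such a check could look like (this is a generic sketch in Python with invented numbers, not Amazon’s system or data), one can compare a screening model’s selection rates across groups and apply the EEOC’s “four-fifths rule” of thumb:

```python
# A generic disparate-impact check on a screening model's decisions.
# Illustrative only: the data below is invented and is not Amazon's.
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs -> selection rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def disparate_impact_ratios(rates, reference_group):
    """Each group's selection rate divided by the reference group's rate.
    Under the EEOC's four-fifths rule, ratios below 0.8 suggest adverse impact."""
    reference_rate = rates[reference_group]
    return {group: rate / reference_rate for group, rate in rates.items()}

# Hypothetical outcomes from a resume-screening model.
decisions = ([("men", True)] * 40 + [("men", False)] * 60
             + [("women", True)] * 15 + [("women", False)] * 85)

rates = selection_rates(decisions)
print(rates)                                  # {'men': 0.4, 'women': 0.15}
print(disparate_impact_ratios(rates, "men"))  # {'men': 1.0, 'women': 0.375}
```

    In this made-up example the ratio for women falls well below 0.8, which is exactly the kind of signal a standard or audit requirement could force teams to check before deployment.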

    The medical industry has also been adopting AI. One well-known application is imaging screening. Scientific American reported that “algorithms trained with gender-imbalanced data do worse at reading chest x-rays for an underrepresented gender, and researchers are already concerned that skin-cancer detection algorithms, many of which are trained primarily on light-skinned individuals, do worse at detecting skin cancer affecting darker skin” (Kaushal, 2020). Once again, the cause of the bias lies in the quality of the datasets used to train the AI.
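
    The same point can be made programmatically: a single aggregate accuracy number can hide large gaps between subgroups. Below is a minimal sketch (the numbers and group labels are invented, not taken from the cited article) of reporting a screening model’s sensitivity per group rather than overall:

```python
# Reporting a screening model's sensitivity (recall) per subgroup instead of
# one aggregate number. All records below are invented for illustration.
def sensitivity(labels, predictions):
    """Fraction of truly positive cases the model actually flags."""
    true_positives = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
    actual_positives = sum(labels)
    return true_positives / actual_positives if actual_positives else float("nan")

def per_group_sensitivity(records):
    """records: iterable of (group, true_label, predicted_label) triples."""
    by_group = {}
    for group, y, p in records:
        labels, predictions = by_group.setdefault(group, ([], []))
        labels.append(y)
        predictions.append(p)
    return {group: sensitivity(*pair) for group, pair in by_group.items()}

# Hypothetical chest x-ray results: the model misses far more cases in the
# group that was underrepresented in its training data.
records = ([("group_a", 1, 1)] * 90 + [("group_a", 1, 0)] * 10
           + [("group_b", 1, 1)] * 60 + [("group_b", 1, 0)] * 40)

print(per_group_sensitivity(records))  # {'group_a': 0.9, 'group_b': 0.6}
```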

    Another example of bias was UnitedHealth Group’s algorithm, which was allegedly racially biased and led to a lower level of care for Black patients. NBC News reported that “the algorithm determined that black patients spent $1,800 less in medical costs per year than white patients with the same chronic conditions, leading the algorithm to conclude incorrectly that the black patients must be healthier since they spend less on health care” (Gawronski, 2019). I am embedding a video of the interview CBS conducted with Melanie Evans (CBS News, 2019), a Wall Street Journal reporter who wrote a piece on this topic (Evans & Mathews, 2019). In the interview, Evans also makes some good points about why bias in algorithms is a concern.


New York Regulator Probes UnitedHealth Algorithm for Racial Bias (Source: CBS News)
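
    To see why spending is a poor proxy for need, consider a toy example (this is not the actual UnitedHealth algorithm; only the $1,800 gap comes from the reporting above): two hypothetical patients with the same chronic conditions, one of whom spends $1,800 less per year. A model that ranks by predicted cost treats the lower spender as lower priority for extra care, while ranking by the conditions themselves would not:

```python
# A toy version of the cost-as-proxy problem. Only the $1,800 spending gap
# comes from the reporting above; everything else is hypothetical.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    chronic_conditions: int   # crude stand-in for actual health need
    annual_cost: float        # historical spending: the proxy being criticized

patients = [
    Patient("white_patient", chronic_conditions=4, annual_cost=10_000),
    Patient("black_patient", chronic_conditions=4, annual_cost=10_000 - 1_800),
]

# Ranking by cost: equal need, but the lower spender looks "healthier"
# and drops down the list for extra care management.
by_cost = sorted(patients, key=lambda p: p.annual_cost, reverse=True)
print([p.name for p in by_cost])   # ['white_patient', 'black_patient']

# Ranking by the health conditions themselves: the spending gap no longer decides.
by_need = sorted(patients, key=lambda p: p.chronic_conditions, reverse=True)
print([p.name for p in by_need])   # tie on need; the cost gap is ignored
```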

References

Dastin, J. (2018, October 11). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

Kaushal, A. R. A. (2020, November 17). Health Care AI Systems Are Biased. Scientific American. https://www.scientificamerican.com/article/health-care-ai-systems-are-biased/

Gawronski, Q. (2019, November 7). Racial bias found in widely used health care algorithm. NBC News. https://www.nbcnews.com/news/nbcblk/racial-bias-found-widely-used-health-care-algorithm-n1076436

Evans, M., & Mathews, A. W. (2019, October 26). New York Regulator Probes UnitedHealth Algorithm for Racial Bias. The Wall Street Journal. https://www.wsj.com/articles/new-york-regulator-probes-unitedhealth-algorithm-for-racial-bias-11572087601

CBS News. (2019, November 1). Racial bias found in health care company algorithm [Video]. YouTube. https://www.youtube.com/watch?v=KYYVjT0mQB8&feature=youtu.be
