Concerns About Bias in Machine Learning and Artificial Intelligence

    As advances in science and technology continue to push the boundaries of computational power, digital data has become the preferred way to store and search human knowledge. Machine Learning approaches to processing this vast and growing universe of datasets are increasingly used to keep pace with the speed at which our digital universe expands, including the growing cloud infrastructure enabled by high-speed data transmission. Machine Learning has opened the door to autonomous, algorithm-based decision-making, and these technologies are the building blocks of Artificial Intelligence.

    Basic Artificial Intelligence technology is currently used to support, or even entirely take over, decision-making in multiple industry sectors, and human ingenuity may one day bring more autonomous versions of Artificial Intelligence into reality. In a 2015 National Geographic article, Worrall warned us about how present AI already is in our lives: “We may not be aware of it, but machine learning is already an integral part of our daily lives, from the product choices that Amazon offers us to the surveillance of our data by the National Security Agency”.

Amazon Alexa Echo Dot 3rd Generation on a table
Figure 1. Amazon Alexa Echo Dot, 3rd Gen. (Source: Gugleta, 2019).

    As a computer science student particularly interested in data science, one of my concerns is that humanity’s craving for technological advancement may be leaving out some important conceptual, perhaps even philosophical, questions that need to be asked before our creations are out in the wild interacting with the people we care for. The transmission of human bias into Artificial-Intelligence-developed algorithms is a concerning topic. Silberg and Manyika (2020) warn us about this bias in a McKinsey publication:

“Underlying data rather than the algorithm itself are most often the main source of the issue. Models may be trained on data containing human decisions or on data that reflect second-order effects of societal or historical inequities (…) Bias can also be introduced into the data through how they are collected or selected for use”.
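    The point Silberg and Manyika make can be illustrated with a minimal sketch. The data below is entirely hypothetical: imagine historical hiring records in which past human decisions favored applicants from group "A" over group "B". A simple learner that just imitates the majority past decision for each group, with no malicious intent coded anywhere, still reproduces the historical bias, because the bias lives in the data, not in the algorithm:

```python
# Minimal sketch with hypothetical data: a neutral algorithm trained on
# biased historical decisions ends up repeating those decisions.
from collections import Counter

# Hypothetical past hiring records as (group, hired) pairs. The labels
# reflect past human choices that favored group "A".
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 20 + [("B", 0)] * 80)

def train(records):
    """'Learn' by taking the majority past decision for each group."""
    votes = {}
    for group, hired in records:
        votes.setdefault(group, Counter())[hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = train(history)
print(model)  # the model simply mirrors the historical pattern: {'A': 1, 'B': 0}
```

    Nothing in `train` mentions either group by name, yet the learned "model" systematically rejects group "B" applicants, which is exactly the kind of second-order effect the quote describes.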

    Society, including governments and all industries involved, needs to reassess rules and processes, perhaps even by implementing certain regulations, to audit and control human bias in our digital “smart” creations.
 
References

Worrall, S. (2015, October 7). How artificial intelligence will revolutionize our lives. National Geographic. https://www.nationalgeographic.com/culture/article/151007-computers-artificial-intelligence-ai-robots-data-ngbooktalk?loggedin=true
 
Silberg, J., & Manyika, J. (2020, July 22). Tackling bias in artificial intelligence (and in humans). McKinsey & Company. https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans#

Gugleta, L. (2019). Amazon Alexa Echo Dot 3rd Gen. [Photograph]. Unsplash. https://unsplash.com/photos/Ub4CggGYf2o

