An Insightful and Inspiring Tech Talk by Cathy O'Neil on Algorithmic Bias

As a computer science major, I spend most of my days learning and perfecting the skills required to communicate in zeros and ones with fascinating non-sentient systems and to embed logic into their memories so that they follow my commands. This sorcery is usually referred to as coding!

Just like with magic, we can also program these systems to consume large quantities of data and spit back the countless secrets hidden in it, or answer complex questions that would take us (humans) several lifetimes to compile. This magic is the result of algorithms defined in code. Well-designed, code-embedded, data-based decision-making logic has been the attainable holy grail for many organizations in recent years, and some of us data sorcerers have been happily coding, non-stop, as many decisions in our lives as caffeine allows.

 

We often debate the quality of the code and the algorithms, but we rarely talk about the source data, and even more rarely about its quality. We assume the data has flaws such as input errors or missing values, but I had never heard of bias in data until I watched Cathy O’Neil’s tremendously insightful tech talk on algorithmic and data bias. Cathy says: “So we are the ones that are biased, and we are injecting those biases into the algorithms by choosing what data to collect (…) But by trusting the data that's actually picking up on past practices and by choosing the definition of success, how can we expect the algorithms to emerge unscathed? We can't. We have to check them. We have to check them for fairness.” (O’Neil, 2017).

 

Cathy proposes auditing algorithms for bias, particularly when their decisions have direct implications for people’s lives. She talks about checking integrity (data quality), definitions of success, accuracy, and the long-term effects of the algorithms. I found this video eye-opening for anyone who has never heard of data or algorithmic bias. There are so many implications when we talk about unintended bias in our data.
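To make the auditing idea concrete, here is a minimal sketch of one common check: comparing an algorithm's positive-outcome rates across two groups. The data, the group labels, and the 80% threshold (the so-called "four-fifths rule") are my own illustrative assumptions, not part of Cathy's talk.

```python
def selection_rate(decisions):
    """Fraction of people who received a positive decision (1 = yes, 0 = no)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher group's.
    Values below roughly 0.8 are a common red flag worth investigating."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Hypothetical model outputs for two demographic groups
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 6 of 8 approved -> 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3 of 8 approved -> 0.375

print(f"Disparate impact ratio: {disparate_impact_ratio(group_a, group_b):.2f}")
```

A real audit would go further, as Cathy suggests: questioning how the training data was collected, who defined "success", and what the long-term feedback effects are. A single ratio is only a starting point.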


Cathy's talk took place in 2017. In 2022, the topic seems to be gaining momentum. Bloomberg reports that the FTC and the CFPB are already looking into it at the consumer level: "Given the growing use of artificial intelligence (AI) and automated decision-making tools in consumer-facing decisions, we expect federal regulators in 2022 to continue their recent track record of interest in potential discrimination and unfairness, as well as data accuracy and transparency." (“Potential Bias in AI Consumer Decision Tools Eyed by FTC, CFPB,” 2022).


This is certainly an exciting topic, and I look forward to continuing my research on it. You can watch Cathy’s TED Talk via the link in the references below. I hope you find her position as interesting as I did. Let's join the conversation.


References:


O’Neil, Cathy. (2017, April). The era of blind faith in big data must end [Video]. TED Conferences. https://www.ted.com/talks/cathy_o_neil_the_era_of_blind_faith_in_big_data_must_end


Potential Bias in AI Consumer Decision Tools Eyed by FTC, CFPB. (2022, February 3). Bloomberg Law. https://news.bloomberglaw.com/tech-and-telecom-law/potential-bias-in-ai-consumer-decision-tools-eyed-by-ftc-cfpb

