Posts

Could the FTC Order the Deletion of Biased Algorithms in the Near Future?

Continuing my endeavor to better understand algorithmic bias, and what is being done about it, I recently stumbled upon a fascinating article in Protocol about how the Federal Trade Commission (FTC) had instructed the company WW International to delete all of its illegally obtained private data, along with any algorithms derived from that data. The subject was intriguing: “a new standard for penalizing tech companies that violate privacy and use deceptive data practices: algorithmic destruction” (Kaye, 2022). I started looking into the details by reading the FTC’s press release on the settlement, which resulted from a complaint filed by the Department of Justice (DOJ) on behalf of the FTC against WW International (formerly Weight Watchers). The press release indicated that WW International had collected personal information from children as young as eight without parental permission. The settlement required not only that the illegally collected data be deleted, but also any algorithms derived from it.

Algorithmic Bias and Filter Bubbles

While researching algorithmic bias I found a very insightful TED Talk by Eli Pariser about filter bubbles. Pariser says that a filter bubble is “your own personal, unique universe of information that you live in online. And what's in your filter bubble depends on who you are, and it depends on what you do. But the thing is that you don't decide what gets in. And more importantly, you don't actually see what gets edited out” (Pariser, 2011). In his talk he describes how algorithms define what gets filtered into, or out of, our information feeds.

Beware online "filter bubbles" (source: TED Talks)

A good example of this filtering is what we experience on social media. Algorithms decide what to show and what not to show. This decision making seems neutral, but as I have discussed in previous posts, no algorithm is neutral, given that algorithms are only mathematical representations of a particular set of human views about what should be prioritized and what should not.
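To make the filtering concrete, here is a minimal sketch, in Python, of the kind of engagement-driven ranking Pariser describes. The items, topics, and scoring rule are all hypothetical, invented purely for illustration:

```python
# Toy "filter bubble": the feed ranker only surfaces items whose topics
# overlap with what the user already clicked. Everything here is made up.

def rank_feed(items, click_history, top_n=3):
    """Score each item by topic overlap with past clicks; return the top N."""
    clicked_topics = {t for item in click_history for t in item["topics"]}
    def score(item):
        return len(clicked_topics & set(item["topics"]))
    return sorted(items, key=score, reverse=True)[:top_n]

history = [{"topics": ["sports", "cars"]}, {"topics": ["sports"]}]
feed = [
    {"title": "Playoffs recap", "topics": ["sports"]},
    {"title": "New EV review",  "topics": ["cars", "tech"]},
    {"title": "Local election", "topics": ["politics"]},
    {"title": "Trade rumors",   "topics": ["sports", "cars"]},
]

shown = [item["title"] for item in rank_feed(feed, history)]
print(shown)  # → ['Trade rumors', 'Playoffs recap', 'New EV review']
```

The ranking is nothing more than set overlap, yet it already produces a bubble: the politics story never surfaces, and, just as Pariser warns, the user never sees what was edited out.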

Algorithmic Bias Examples

One of the most famous cases of algorithmic bias at the corporate level was Amazon’s AI recruiting tool. Reuters reported that Amazon had developed an AI tool to filter job candidates, but it had to be shut down due to bias found in the tool: “machine-learning specialists uncovered a big problem: their new recruiting engine did not like women (…) computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry” (Dastin, 2018). Amazon did not have a technical problem, but a conceptual one. Artificial Intelligence is only as good as the baseline data it is fed, and that data can be tainted with many human biases. As more software-based alternatives to complex, data-heavy human processes emerge, companies much smaller than Amazon will also start using software-based intelligence to make certain decisions.
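The Amazon case can be reduced to a toy sketch: “train” a screener on a skewed hiring history and the skew becomes the model. Everything below (the tokens, the history, the scoring rule) is invented for illustration and is not Amazon’s actual system:

```python
from collections import Counter

def train(history):
    """history: list of (resume_tokens, was_hired) pairs.
    Returns a per-token score: positive if the token appeared more often
    in hired resumes than in rejected ones."""
    hired, rejected = Counter(), Counter()
    for tokens, was_hired in history:
        (hired if was_hired else rejected).update(tokens)
    vocab = set(hired) | set(rejected)
    return {t: hired[t] - rejected[t] for t in vocab}

def screen(resume_tokens, scores):
    """Rank a new resume by summing the learned token scores."""
    return sum(scores.get(t, 0) for t in resume_tokens)

# Skewed history: "womens_chess_club" happens to co-occur with rejections.
history = [
    (["python", "java"], True),
    (["python", "sql"], True),
    (["java", "sql"], True),
    (["python", "womens_chess_club"], False),
    (["sql", "womens_chess_club"], False),
]
scores = train(history)

print(screen(["python", "sql"], scores))                       # → 2
print(screen(["python", "sql", "womens_chess_club"], scores))  # → 0
```

Two resumes with identical skills get different scores because the model faithfully learned the correlation baked into its history. That is the conceptual problem: the code is correct, the data is not.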

The Dangers of Asking the Wrong Questions to Artificial Intelligence - A TED Talk Review

Artificial Intelligence may not want to harm us, but it will if we ask the wrong question. This sounds like a twisted game from a horror movie, but it is one of the biggest problems we currently face with machine learning and AI navigating our data to achieve certain goals, make certain decisions, or even make predictions about us. In the TED talk “The Danger of AI is Weirder than You Think”, Janelle Shane explains her findings from testing AI on goals and questions she proposed to it (Shane 03:15–05:21). The results are unexpected, yet real and therefore concerning. She found that AI is very effective at achieving tasks and answering the questions posed. Nevertheless, the wrong question, or the wrong set of instructions for answering a question, could be disastrous for us. Janelle’s TED talk is well performed and very visual. She manages to bring in interesting examples that support her point in a very playful way, which is certainly appreciated.
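Shane’s point, that the danger lies in the question we ask, can be sketched with a toy optimizer. The actions and numbers below are hypothetical; the only point is that the optimizer faithfully answers the question exactly as stated:

```python
# Toy "wrong question": we ask an optimizer to minimize the *reported*
# error count instead of the actual one. All values are made up.

actions = {
    # action: (actual_errors_after, reported_errors_after)
    "fix the bug":           (2, 2),
    "rewrite the module":    (1, 1),
    "disable error logging": (9, 0),  # errors remain, but none are reported
}

def optimize(objective):
    """Pick the action that scores lowest under the stated objective."""
    return min(actions, key=objective)

best_meant = optimize(lambda a: actions[a][0])  # the question we meant to ask
best_asked = optimize(lambda a: actions[a][1])  # the question we actually asked

print(best_meant)  # → rewrite the module
print(best_asked)  # → disable error logging
```

The second answer is technically perfect and practically disastrous, which is exactly the pattern Shane demonstrates: the system did not misbehave, we mis-specified.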

Light at The End of The Data Bias Tunnel

Particularly in the case of Machine Learning and Artificial Intelligence (AI), the root cause of bias in the algorithms they generate lies in the quality of the data the machine is given to learn from. Low-quality or unaudited datasets can easily solidify, or even exponentially increase, human bias in the new logic created by Machine Learning and AI.

Research suggests Copyright Law could be used to improve the type of data AI learns from and to improve its learning process. Levendowski suggests that the Fair Use Doctrine in copyright law could allow developers to use data otherwise unavailable due to copyright restrictions, supplying their software with potentially less biased datasets (2018). A good example could be automakers relying on Fair Use to share collected datasets on driving and pedestrian patterns. The dangers of using biased or low-quality datasets to teach AI how to drive cars are greater than the individual benefit of the companies that o…

A Very Simple Bias Algorithm Using Real Data

While researching examples of biased algorithms, I was not able to find simple ones that would not require comprehensive, in-depth knowledge of other topics like coding or statistics, so I thought it would be fun to make and analyze a very simple algorithm for bias. The goal is to understand how bias works in an algorithm.

I've picked the cities of Lewiston (ID) and Clarkston (WA) for this example. These two cities share the Lewis-Clark valley at the confluence of the Snake and the Clearwater rivers. Now, let's assume we work for a real estate investment firm looking into building new housing developments. Let's assume another algorithm had already segmented the real estate market in the LC valley into four sections based on population and other parameters: Red, Yellow, Blue, and Green. Let's assume we are tasked with creating an algorithm that will decide where the company should invest next. The four parameters we are going to use for our very simple algorithm…
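The excerpt cuts off before naming the four parameters, so here is a hypothetical sketch of what such a scoring algorithm might look like. The section names (Red, Yellow, Blue, Green) come from the post; the parameters, numbers, and weights are stand-ins I invented:

```python
# Sketch of a toy investment-scoring algorithm over four market sections.
# All parameter values and weights below are hypothetical.

sections = {
    #          (median_income, pop_growth_%, vacancy_%, avg_lot_price)
    "Red":    (45_000, 1.2,  8.0,  60_000),
    "Yellow": (62_000, 0.8,  4.0,  95_000),
    "Blue":   (38_000, 2.1, 11.0,  48_000),
    "Green":  (71_000, 0.5,  3.0, 120_000),
}

def invest_score(stats):
    income, growth, vacancy, price = stats
    # Every weight here is an opinion, not a fact. Weighting income this
    # heavily, for example, systematically steers investment away from
    # poorer neighborhoods, which is exactly how bias enters an algorithm.
    return (income / 1_000) * 0.5 + growth * 10 - vacancy * 2 - price / 10_000

ranked = sorted(sections, key=lambda s: invest_score(sections[s]), reverse=True)
print(ranked)  # → ['Green', 'Yellow', 'Blue', 'Red']
```

Nothing in the code is mathematically wrong; the bias lives entirely in which parameters were chosen and how they were weighted, which is the point the post sets out to demonstrate.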

Relying On, but Not Trusting Algorithms to Prevent Bias

One critical aspect of understanding how to treat algorithms is understanding the nature of our relationship with them. Algorithmic logic is math at its core. Given the objective nature of math, we tend to describe algorithms as objective by association. Nevertheless, algorithms are not objective by nature. Quite the opposite: they are opinions written in code, glued together with math. In other words, we cannot claim a slice of apple pie to be as fresh and nutritious as fruit just because it was made with apples, no matter how hard some of us may wish it to fall into that category.

Figure 1. Computers vs Humans. (Source: xkcd.com, n.d.)

The result of this, sometimes unfortunate, instinct to define by association can build a false sense of trust in algorithms. These logics are great tools to guide us through processes, but we must be careful about concluding a relationship of trust from those interactions. As algorithms get more complex, we move from hum…
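Even the simplest math carries an opinion. As a sketch (with made-up salary numbers), consider two equally “objective” ways to summarize the same data:

```python
import statistics

# Same data, two correct summaries, two different opinions about what
# "typical" means. The salary figures are invented for illustration.

salaries = [42_000, 45_000, 47_000, 50_000, 52_000, 310_000]  # one executive

mean_typical = statistics.mean(salaries)      # pulled far upward by the outlier
median_typical = statistics.median(salaries)  # ignores the outlier entirely

print(round(mean_typical))    # → 91000
print(round(median_typical))  # → 48500
```

Both results are correct math; choosing which one an algorithm reports is the opinion written in code, and a reader who trusts the output by association never sees that choice being made.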