Posts

Showing posts from April, 2022

Could the FTC Order to Delete Biased Algorithms in the Near Future?

Continuing my endeavor to better understand algorithmic bias, as well as what is being done about it, I recently stumbled upon a fascinating article in Protocol about how the Federal Trade Commission (FTC) had instructed the company WW International to delete all of its illegally obtained private data, along with any algorithms derived from that data. The subject was intriguing: “a new standard for penalizing tech companies that violate privacy and use deceptive data practices: algorithmic destruction” (Kaye, 2022). I started looking further into the details by reading the FTC’s press release on the settlement, which resulted from a complaint filed by the Department of Justice (DOJ) on behalf of the FTC against WW International (formerly Weight Watchers). The press release indicated that WW International had collected personal information from children as young as eight without parental permission. The settlement required not only the illegally collected data to b

Algorithmic Bias and Filter Bubbles

While researching algorithmic bias, I found a very insightful TED Talk by Eli Pariser about filter bubbles. Pariser says that filter bubbles are “your own personal, unique universe of information that you live in online. And what's in your filter bubble depends on who you are, and it depends on what you do. But the thing is that you don't decide what gets in. And more importantly, you don't actually see what gets edited out” (Pariser, 2011). In his talk he describes how algorithms define what gets filtered into or out of our information feeds.

Beware online "filter bubbles" (source: TED Talks)

A good example of this filtering is what we experience on social media. Algorithms decide what to show and what not to show. This decision making seems neutral, but as I have discussed in previous posts, no algorithm is neutral: each is only a mathematical representation of a particular set of human views about what should and should not be prioritized.
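To make the filtering idea concrete, here is a minimal sketch of how an engagement-driven feed ranker can create a bubble. All the names, data, and the scoring rule are illustrative assumptions of mine, not any platform's actual code: the ranker simply favors items whose topics overlap with the user's past clicks, so content outside those topics quietly sinks out of view.

```python
# A toy feed ranker: score items purely by topic overlap with what
# the user clicked before. Nothing here is any platform's real logic.

def rank_feed(items, click_history):
    """Rank feed items by how many topics they share with past clicks."""
    clicked_topics = {t for item in click_history for t in item["topics"]}

    def score(item):
        return len(clicked_topics & set(item["topics"]))

    return sorted(items, key=score, reverse=True)

history = [{"topics": ["politics", "economy"]}]
feed = [
    {"title": "Local sports recap", "topics": ["sports"]},
    {"title": "Election analysis", "topics": ["politics"]},
    {"title": "Budget debate", "topics": ["economy", "politics"]},
]

ranked = rank_feed(feed, history)
# Items matching past clicks rise to the top; the sports story sinks.
# The user never decides this, and never sees what was pushed down.
```

The point of the sketch is Pariser's: the rule looks neutral (it is just counting overlaps), yet it systematically edits the user's view of the world based on who they already are.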

Algorithmic Bias Examples

One of the most famous cases of algorithmic bias at the corporate level was Amazon’s AI recruiting tool. Reuters reported that Amazon had developed an AI-based tool to filter job candidates, but it had to be shut down due to bias found in it: “machine-learning specialists uncovered a big problem: their new recruiting engine did not like women (…) computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry” (Dastin, 2018). Amazon did not have a technical problem, but rather a conceptual one. Artificial intelligence is only as good as the baseline data it is trained on, which can be tainted with many human biases. As more software-based alternatives emerge for complex human processes that require processing data, companies smaller than Amazon will also start using software-based intelligence to make certain decisio
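The "garbage in, garbage out" problem Dastin describes can be shown with a deliberately tiny example. This is a hypothetical sketch, not Amazon's system: a naive "model" that just memorizes the majority hiring outcome per group. Trained on historically skewed outcomes, it reproduces the skew and ignores skill entirely.

```python
# Toy illustration of biased training data: the model learns the
# historical pattern (men hired, women not), not actual qualification.
# The dataset and its labels are invented for illustration only.
from collections import Counter

# Historical records: (gender, skill, hired). Skewed toward hiring men.
training = [
    ("man", "high", 1), ("man", "low", 1), ("man", "high", 1),
    ("woman", "high", 0), ("woman", "high", 0), ("man", "low", 1),
]

def fit_majority(data):
    """A naive 'model': predict the majority outcome seen per gender."""
    outcomes = {}
    for gender, _skill, hired in data:
        outcomes.setdefault(gender, Counter())[hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = fit_majority(training)
# The model never looks at skill: it predicts 1 (hire) for men and
# 0 (reject) for women, faithfully reproducing the historical bias.
```

A real recruiting model is vastly more complex, but the failure mode is the same: patterns in biased past decisions become the definition of a "good" candidate.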