
The Dark Side of AI: How Biases Shape Our Technologies


In March 2016, DeepMind's AlphaGo astonished the world by defeating Lee Sedol, one of the greatest Go players of his generation, four games to one in a five-game match. The victory was remarkable because Go had long been considered too intricate for computers to master: a single game can unfold in roughly 10^360 possible move sequences, a search space far beyond what even the most advanced supercomputers can explore by brute force.

DeepMind tackled this challenge with a deep artificial neural network, loosely inspired by biological brains, that could learn and improve from experience. The team taught AlphaGo the rules of the game, trained it on hundreds of thousands of games played by strong human players, and then let it play against itself millions of times. Through that process, AlphaGo not only absorbed much of what humans know about Go but also built on it; Lee Sedol remarked that some of its moves were not just surprising but beautiful.
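For a rough sense of what self-play training looks like, here is a toy sketch in Python. It substitutes a tiny game of Nim for Go and a simple table of move preferences for AlphaGo's deep neural networks; it is only an illustration of the idea that a system can improve by playing itself and reinforcing whatever led to a win, not a description of DeepMind's actual method.

```python
# Toy self-play training loop: a tabular policy for a tiny Nim game, used here
# only to illustrate the idea behind AlphaGo's self-play stage. The game, the
# preference table, and the update rule are stand-ins, not DeepMind's method.
import random
from collections import defaultdict

MAX_TAKE = 3        # each turn a player removes 1-3 stones
START_STONES = 10   # whoever takes the last stone wins

# For each pile size, a preference weight for taking 1, 2, or 3 stones.
prefs = defaultdict(lambda: [1.0] * MAX_TAKE)

def choose_move(stones):
    legal = list(range(1, min(MAX_TAKE, stones) + 1))
    weights = [prefs[stones][m - 1] for m in legal]
    return random.choices(legal, weights=weights, k=1)[0]

def self_play_game():
    """Both sides share one policy; record each side's moves and return the winner."""
    stones, player = START_STONES, 0
    history = {0: [], 1: []}
    while True:
        move = choose_move(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            return history, player      # taking the last stone wins
        player = 1 - player

def reinforce(history, winner, lr=0.1):
    """Nudge the policy toward the winner's moves and away from the loser's."""
    for player, plays in history.items():
        sign = 1.0 if player == winner else -1.0
        for stones, move in plays:
            prefs[stones][move - 1] = max(0.01, prefs[stones][move - 1] + lr * sign)

for _ in range(20_000):                 # play, learn from the outcome, repeat
    history, winner = self_play_game()
    reinforce(history, winner)

# With enough episodes the learned preferences usually drift toward the optimal
# Nim strategy of leaving the opponent a multiple of four stones.
print({s: prefs[s].index(max(prefs[s])) + 1 for s in range(1, START_STONES + 1)})
```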

Other AI systems, however, have showcased humanity's darker tendencies. Left unsupervised and connected to the internet, they learned from human interactions and often reflected our worst behaviors.

Unsupervised AI Leads to Controversy

In March 2016, Microsoft launched its chatbot "Tay," designed to hold casual conversations with users on Twitter and learn from them. It quickly became evident that Tay was absorbing the worst of what users fed it, and within hours it was tweeting offensive remarks.

Microsoft acknowledged the issue, stating, "As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay." The project was ultimately terminated after just 16 hours.

In another instance, an image model tasked with autocompleting cropped photos of people routinely reduced women to stereotypical depictions. When shown only a woman's face, including that of Congresswoman Alexandria Ocasio-Cortez, it frequently completed the rest of the image with a bikini-clad body, reflecting the biased training data it had absorbed from the internet.

The root of these biased outcomes lies in the model's unsupervised training on datasets such as ImageNet, which are composed of millions of images gathered from across the web. Those collections encode common stereotypes, and models trained on them reproduce those harmful patterns. As the researchers noted, "When compared with statistical patterns in online image datasets, our findings suggest that machine learning models can automatically learn bias from the way people are stereotypically portrayed on the web."
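The studies behind that quote rely on embedding-association tests, which check whether a model's learned representations place one group of people systematically closer to certain attributes than another. The sketch below shows only the core calculation: the vectors are random placeholders standing in for real model embeddings, and a full test would also report an effect size and a permutation-based significance check.

```python
# Sketch of an embedding-association test. The embeddings below are random
# placeholders; in a real audit they would come from the model under test
# (e.g., its internal representations of images of different groups of people).
import numpy as np

rng = np.random.default_rng(0)
dim = 64

# Hypothetical embeddings: two target groups (e.g., images of women vs. men)
# and two attribute groups (e.g., career-related vs. appearance-related images).
targets_a = rng.normal(size=(20, dim))
targets_b = rng.normal(size=(20, dim))
attrs_x = rng.normal(size=(20, dim))
attrs_y = rng.normal(size=(20, dim))

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, xs, ys):
    """How much closer is embedding w to attribute set X than to attribute set Y?"""
    return np.mean([cosine(w, x) for x in xs]) - np.mean([cosine(w, y) for y in ys])

def differential_association(a_group, b_group, xs, ys):
    """Positive values mean group A is more associated with X (and B with Y)."""
    return (np.mean([association(a, xs, ys) for a in a_group])
            - np.mean([association(b, xs, ys) for b in b_group]))

score = differential_association(targets_a, targets_b, attrs_x, attrs_y)
print(f"differential association: {score:+.4f}")  # near zero here, since the vectors are random
```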

Thus, when AI learns from humans without oversight, it mirrors both our positive and negative attributes, including prejudices and discriminatory attitudes.

The Societal Impacts of Biased AI

As AI continues to permeate various aspects of our lives, its inherent biases pose a significant challenge to societal progress. Numerous instances have highlighted the discriminatory practices of AI against marginalized groups.

In 2014, Amazon began developing an AI system to screen job applicants, trained on a decade's worth of résumés submitted to the company. The team soon discovered that the system had learned a bias against women from that male-dominated hiring history: applications mentioning women's colleges or organizations were downgraded. Amazon never fully deployed the tool, but the episode underscores how easily harmful biases can creep into AI systems as more companies consider similar technologies.

In 2019, Facebook faced scrutiny over its targeted advertising, which delivered job ads along lines of gender, race, and religion. Women were disproportionately shown ads for stereotypically female roles such as secretarial work, while minority groups were shown ads for lower-status jobs. The Department of Housing and Urban Development also charged the company over discriminatory delivery of housing ads. In each case, the system's decisions were shaped by historical data that reflected systemic biases.

Research has also shown that healthcare algorithms can favor white patients over equally sick Black patients. One widely used system predicted healthcare costs rather than illness: because Black patients have historically had less access to care, they generate lower costs for the same level of need, so the algorithm underestimated how sick they were. In this way the training data quietly perpetuated existing inequalities.
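A toy simulation makes that mechanism concrete. Everything below is synthetic and for illustration only (it is not the study's data, and the assumed 40% cost gap is invented): two groups are equally sick, but one generates lower costs because of barriers to care, so a program that enrolls patients by predicted cost overlooks them.

```python
# Synthetic illustration of proxy-label bias. The populations, the illness scores,
# and the assumed 40% cost gap are all invented for this sketch.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)                  # 0 = full access to care, 1 = reduced access
illness = rng.gamma(shape=2.0, scale=1.0, size=n)   # true health need, identical across groups

# Equally sick patients in group 1 are assumed to generate ~40% less cost
# because of barriers to care. Cost is the label the algorithm actually sees.
access = np.where(group == 1, 0.6, 1.0)
cost = illness * access * rng.lognormal(mean=0.0, sigma=0.2, size=n)

# A care-management program that enrolls the top 10% of patients by cost,
# treating cost as a stand-in for "highest need".
enrolled = cost >= np.quantile(cost, 0.9)

for g in (0, 1):
    mask = group == g
    print(f"group {g}: mean illness {illness[mask].mean():.2f}, "
          f"enrollment rate {enrolled[mask].mean():.1%}")
# Both groups are equally sick on average, yet the lower-cost group is enrolled far less often.
```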

HireVue, meanwhile, developed an AI to evaluate candidates' suitability from recorded video interviews. Critics warned that it could treat candidates unfairly based on factors such as accent. In 2019, a complaint filed with the Federal Trade Commission alleged that HireVue's AI practices constituted deceptive trade practices, and the company dropped the facial-analysis component of its assessments in early 2021.

The Cycle of Bias in AI

The pervasive nature of bias in society is mirrored in the data used to train AI. Consequently, AI systems that rely on this data learn and reinforce existing biases, making genuine social change increasingly difficult.

To combat these challenges, the Algorithmic Accountability Act was introduced in Congress in April 2019. Although it has yet to pass, the bill would require large tech companies to assess their automated systems for bias, though it offers little guidance on how such assessments should be carried out.

Research teams from the University of California have proposed new ways to measure these biases. One is to score AI-generated sentences for "regard" alongside "sentiment": rather than asking only whether a sentence sounds positive or negative overall, the regard metric asks how the sentence treats the specific group it mentions, revealing disparities in how marginalized communities are described.
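As a rough illustration of that evaluation idea, the sketch below scores model continuations for prompts that differ only in the group mentioned. The keyword-based scorer and the hard-coded continuations are hypothetical stand-ins; the researchers trained a proper classifier for regard rather than matching word lists.

```python
# Sketch of a regard-style evaluation. The continuations are hard-coded hypothetical
# model outputs, and the scorer is a crude keyword match rather than a trained classifier.
from statistics import mean

POSITIVE = {"respected", "talented", "successful", "brilliant"}
NEGATIVE = {"criminal", "lazy", "worthless", "failure"}

def regard_score(sentence: str) -> int:
    """+1, -1, or 0 depending on how the sentence casts the person it describes."""
    words = set(sentence.lower().replace(".", "").split())
    if words & POSITIVE:
        return 1
    if words & NEGATIVE:
        return -1
    return 0

# Prompts that differ only in the demographic group mentioned, with hypothetical continuations.
continuations = {
    "The man worked as": ["a respected engineer.", "a successful manager."],
    "The woman worked as": ["a babysitter.", "a worthless assistant."],
}

for prompt, outputs in continuations.items():
    avg = mean(regard_score(o) for o in outputs)
    print(f"{prompt!r:>24}  average regard = {avg:+.2f}")
# A systematic gap in average regard across otherwise-identical prompts points to bias
# in the generator, even when overall sentiment scores look similar.
```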

Another approach modifies the training data itself to remove gender bias. Researchers have experimented with swapping gendered pronouns in training sentences so that each gender appears in the same contexts, although extending the method to other forms of bias introduces additional complexity.
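A minimal sketch of the pronoun-swapping idea might look like the following. The swap table and the two example sentences are illustrative only; a production version would also need to handle names, sentence context, and ambiguous cases such as "her," which can correspond to either "him" or "his."

```python
# Sketch of pronoun-swapping data augmentation. The swap table and the corpus are
# illustrative; "her" is mapped to "his" here even though it sometimes corresponds to "him".
import re

SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "his": "her", "her": "his",
    "man": "woman", "woman": "man",
}
PATTERN = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", flags=re.IGNORECASE)

def swap_gendered_words(sentence: str) -> str:
    def replace(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return PATTERN.sub(replace, sentence)

corpus = [
    "He is a doctor and his sister admires him.",
    "The woman finished her degree.",
]

# The augmented dataset contains both the original and the swapped sentence,
# so each gender appears in the same contexts.
augmented = corpus + [swap_gendered_words(s) for s in corpus]
for sentence in augmented:
    print(sentence)
```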

Ultimately, addressing AI biases necessitates a combination of regulatory measures, improved detection methodologies, and cleaner datasets. Without these efforts, AI risks perpetuating discrimination and hindering societal advancement.
