A beauty contest was judged by AI that didn't like dark skin

A beauty contest was judged by AI and the robots didn't like dark skin


The first international beauty contest decided by an algorithm has sparked controversy after the results revealed one glaring factor linking the winners



One expert says the results offer ‘the perfect illustration of the problem’ with machine bias. Photograph: Fabrizio Bensch/Reuters
Sam Levin in San Francisco
@SamTLevin

Thursday 8 September 2016 18.42 EDT. Last modified on Friday 9 September 2016 18.10 EDT.



The first international beauty contest judged by “machines” was supposed to use objective factors such as facial symmetry and wrinkles to identify the most attractive contestants. After Beauty.AI launched this year, roughly 6,000 people from more than 100 countries submitted photos in the hopes that artificial intelligence, supported by complex algorithms, would determine that their faces most closely resembled “human beauty”.

But when the results came in, the creators were dismayed to see that there was a glaring factor linking the winners: the robots did not like people with dark skin.

Out of 44 winners, nearly all were white, a handful were Asian, and only one had dark skin. That’s despite the fact that, although the majority of contestants were white, many people of color submitted photos, including large groups from India and Africa.

The ensuing controversy has sparked renewed debates about the ways in which algorithms can perpetuate biases, yielding unintended and often offensive results.

When Microsoft released the “millennial” chatbot named Tay in March, it quickly began using racist language and promoting neo-Nazi views on Twitter. And after Facebook eliminated human editors who had curated “trending” news stories last month, the algorithm immediately promoted fake and vulgar stories on news feeds, including one article about a man masturbating with a chicken sandwich.

While the seemingly racist beauty pageant has prompted jokes and mockery, computer science experts and social justice advocates say that in other industries and arenas, the growing use of prejudiced AI systems is no laughing matter. In some cases, it can have devastating consequences for people of color.

Beauty.AI – which was created by a “deep learning” group called Youth Laboratories and supported by Microsoft – relied on large datasets of photos to build an algorithm that assessed beauty. While there are a number of reasons why the algorithm favored white people, the main problem was that the data the project used to establish standards of attractiveness did not include enough minorities, said Alex Zhavoronkov, Beauty.AI’s chief science officer.

Although the group did not build the algorithm to treat light skin as a sign of beauty, the input data effectively led the robot judges to reach that conclusion.
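The mechanism Zhavoronkov describes can be sketched in a few lines. This is a hypothetical illustration, not Beauty.AI's actual pipeline; the groups, labels and proportions below are invented. A naive model that simply estimates P(winner | group) from a skewed dataset ends up "preferring" the overrepresented group without ever being told to:

```python
from collections import Counter

# Invented dataset: 90 light-skin examples vs 10 dark-skin examples.
# The "winner" labels simply follow what the skewed photo set contained.
training = [("light", "winner")] * 45 + [("light", "loser")] * 45 \
         + [("dark", "winner")] * 2 + [("dark", "loser")] * 8

def p_winner(group):
    """P(winner | group), as a naive model would estimate it from the data."""
    labels = [label for g, label in training if g == group]
    return Counter(labels)["winner"] / len(labels)

print(p_winner("light"))  # 0.5
print(p_winner("dark"))   # 0.2, the skew in the data becomes the model's "taste"
```

With so few dark-skin examples, the estimate for that group is driven by a handful of photos, so any imbalance in them is locked in as a learned "conclusion".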



Winners of the Beauty.AI contest in the category for women aged 18-29. Photograph: Beauty.AI 2.0 Winners
“If you have not that many people of color within the dataset, then you might actually have biased results,” said Zhavoronkov, who said he was surprised by the winners. “When you’re training an algorithm to recognize certain patterns … you might not have enough data, or the data might be biased.”

The simplest explanation for biased algorithms is that the humans who create them have their own deeply entrenched biases. That means that despite perceptions that algorithms are somehow neutral and uniquely objective, they can often reproduce and amplify existing prejudices.

The Beauty.AI results offer “the perfect illustration of the problem”, said Bernard Harcourt, Columbia University professor of law and political science who has studied “predictive policing”, which has increasingly relied on machines. “The idea that you could come up with a culturally neutral, racially neutral conception of beauty is simply mind-boggling.”

The case is a reminder that “humans are really doing the thinking, even when it’s couched as algorithms and we think it’s neutral and scientific,” he said.

Civil liberty groups have recently raised concerns that computer-based law enforcement forecasting tools – which use data to predict where future crimes will occur – rely on flawed statistics and can exacerbate racially biased and harmful policing practices.

“It’s polluted data producing polluted results,” said Malkia Cyril, executive director of the Center for Media Justice.

A ProPublica investigation earlier this year found that software used to predict future criminals is biased against black people, which can lead to harsher sentencing.

“That’s truly a matter of somebody’s life is at stake,” said Sorelle Friedler, a professor of computer science at Haverford College.

A major problem, Friedler said, is that minority groups by nature are often underrepresented in datasets, which means algorithms can reach inaccurate conclusions for those populations and the creators won’t detect it. For example, she said, an algorithm that was biased against Native Americans could be considered a success given that they are only 2% of the population.

“You could have a 98% accuracy rate. You would think you have done a great job on the algorithm.”
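Friedler's point is simple arithmetic. A minimal sketch (the numbers are illustrative, matching her 2% figure) shows how a headline accuracy score can hide a total failure on a small minority group:

```python
# The "98% accurate" trap: if a group is only 2% of the population, a
# model that fails on that group entirely can still report near-perfect
# overall accuracy.

def overall_accuracy(majority_share, acc_majority, acc_minority):
    """Population-wide accuracy, weighted by group size."""
    minority_share = 1.0 - majority_share
    return majority_share * acc_majority + minority_share * acc_minority

# Perfect on the 98% majority, useless on the 2% minority:
acc = overall_accuracy(majority_share=0.98, acc_majority=1.0, acc_minority=0.0)
print(f"{acc:.0%}")  # 98% -- looks like "a great job on the algorithm"
```

This is why overall accuracy alone cannot detect the bias; per-group accuracy has to be measured separately.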

Friedler said there are proactive ways algorithms can be adjusted to correct for biases, whether by improving the input data or by implementing filters to ensure that people of different races receive equal treatment.
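One common corrective approach of the kind Friedler describes is reweighting the training data so each group carries equal influence. This is an illustrative sketch, not Friedler's or Beauty.AI's actual method, and the group counts are invented:

```python
# Reweight examples so every group contributes equally to training,
# instead of in proportion to its (skewed) share of the dataset.

def group_weights(counts):
    """Per-example weight giving each group equal total influence."""
    n_groups = len(counts)
    total = sum(counts.values())
    # Each group should carry total / n_groups of the overall weight.
    return {g: total / (n_groups * c) for g, c in counts.items()}

counts = {"light": 90, "dark": 10}   # hypothetical dataset composition
weights = group_weights(counts)
# Both groups now have equal weighted mass (approx. 50.0 each).
print({g: counts[g] * w for g, w in weights.items()})
```

Underrepresented examples are weighted up, so the model can no longer "win" by ignoring the small group.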

Prejudiced AI programs aren’t limited to the criminal justice system. One study determined that significantly fewer women than men were shown online ads for high-paying jobs. Last year, Google’s photo app was found to have labeled black people as gorillas.

Cyril noted that algorithms are ultimately very limited in how they can help correct societal inequalities. “We’re overly relying on technology and algorithms and machine learning when we should be looking at institutional changes.”

Zhavoronkov said that when Beauty.AI launches another contest round this fall, he expects the algorithm will have a number of changes designed to weed out discriminatory results. “We will try to correct it.”

But the reality, he added, is that robots may not be the best judges of physical appearance: “I was more surprised about how the algorithm chose the most beautiful people. Out of a very large number, they chose people who I may not have selected myself.”
 

MrPentatonic

Superstar
Why would white people build robots that don't see things from their makers' perspective?

They're probably having a field day in robotics, artificial intelligence and machine learning because it's mostly white people and a tiny amount of Indians + Asians
 

Guile

Superstar
Supporter
"So God created mankind in his own image..." :mjpls:
 

KinksandCoils

African American Queen
There is a lot of looking for things to be angry about on here. Did you stumble upon this or go looking for it?

fukk cac products
 

IWunD3r

Superstar
Lmao
This shyt reminds me of the sex robot thread...
All ima say is, imagine a breh giving up organic p*ssy only to pay thousands for a sex robot that will call him n1g ger :russ:
 