An algorithm deduced the sexuality of people on a dating site with up to 91% accuracy, raising tricky ethical questions
It’s easy to imagine spouses using the technology on partners they suspect are closeted, or teenagers using the algorithm on themselves or their peers. More frighteningly, governments that continue to prosecute LGBT people could hypothetically use the technology to out and target populations. That means building this kind of software and publicizing it is itself controversial given concerns that it could encourage harmful applications.
But the authors argued that the technology already exists, and its capabilities are important to expose so that governments and companies can proactively consider privacy risks and the need for safeguards and regulations.
“It’s certainly unsettling. Like any new tool, if it gets into the wrong hands, it can be used for ill purposes,” said Nick Rule, an associate professor of psychology at the University of Toronto, who has published research on the science of gaydar. “If you can start profiling people based on their appearance, then identifying them and doing horrible things to them, that’s really bad.”
Rule argued it was still important to develop and test this technology: “What the authors have done here is to make a very bold statement about how powerful this can be. … Now we know that we need protections.”
Kosinski was not available for an interview, according to a Stanford spokesperson. The professor is known for his work with Cambridge University on psychometric profiling, including using Facebook data to make conclusions about personality. Donald Trump’s campaign and Brexit supporters deployed similar tools to target voters, raising concerns about the expanding use of personal data in elections.
In the Stanford study, the authors also noted that artificial intelligence could be used to explore links between facial features and a range of other phenomena, such as political views, psychological conditions or personality.
This type of research further raises concerns about the potential for scenarios like the science-fiction movie Minority Report, in which people can be arrested based solely on the prediction that they will commit a crime.
“AI can tell you anything about anyone with enough data,” said Brian Brackeen, CEO of Kairos, a face recognition company. “The question is, as a society, do we want to know?”
Brackeen, who called the Stanford data on sexual orientation “startlingly correct”, said there needs to be an increased focus on privacy and on tools to prevent the misuse of machine learning as it becomes more widespread and advanced.
Rule speculated about AI being used to actively discriminate against people based on a machine’s interpretation of their faces: “We should all be collectively concerned.”


