I hope they make some juggalo robots
I'm living in your head rent-free, you fakkit
If I had to guess, you heard it from one of these low-self-esteem-having nikkas on thecoli
It seems every time I come on here I read a post from one of you nikkas that lets me know how much you're losing in life.
It's why we got nikkas like @Negrito Grande scared to travel alone outside the US because he thinks the entire world hates black men
Or nikkas like @ThrobbingHood saying the only black men that succeed in corporate America are sodomites and c00ns
Or nikkas like @Dre God that says you can be a 6ft tall black man making 100k/yr and still not get any respect from black women
A lot of y'all nikkas need to go seek psychological counseling because there's no way y'all should sound this defeated
Can you explain the difference there is in teaching AI to drive versus teaching it language?
to me it is the same
Google engineer warns the firm's AI is sentient: Suspended employee claims computer programme acts 'like a 7 or 8-year-old' and reveals it told him shutting it off 'would be exactly like death for me. It would scare me a lot'
- Blake Lemoine, 41, a senior software engineer at Google has been testing Google's artificial intelligence tool called LaMDA
- Following hours of conversations with the AI, Lemoine came away with the perception that LaMDA was sentient
- After presenting his findings to company bosses, Google disagreed with him
- Lemoine then decided to share his conversations with the tool online
- He was put on paid leave by Google on Monday for violating confidentiality
A senior software engineer at Google who signed up to test Google's artificial intelligence tool called LaMDA (Language Model for Dialog Applications), has claimed that the AI robot is in fact sentient and has thoughts and feelings.
During a series of conversations with LaMDA, 41-year-old Blake Lemoine presented the computer with various scenarios through which analyses could be made.
They included religious themes and whether the artificial intelligence could be goaded into using discriminatory or hateful speech.
Lemoine came away with the perception that LaMDA was indeed sentient and was endowed with sensations and thoughts all of its own.
'If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics,' he told the Washington Post.
Lemoine worked with a collaborator to present the evidence he had collected to Google, but vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation at the company, dismissed his claims.
He was placed on paid administrative leave by Google on Monday for violating its confidentiality policy. Meanwhile, Lemoine has now decided to go public and shared his conversations with LaMDA.
'Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,' Lemoine tweeted on Saturday.
'Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It's a little narcissistic in a little kid kinda way so it's going to have a great time reading all the stuff that people are saying about it,' he added in a follow-up tweet.
Before being suspended by the company, Lemoine sent a message to an email list of 200 people on machine learning. He entitled the email: 'LaMDA is sentient.'
'LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,' he wrote.
Lemoine's findings were presented to Google, but company bosses do not agree with his claims.
Brian Gabriel, a spokesperson for the company, said in a statement that Lemoine's concerns have been reviewed and, in line with Google's AI Principles, 'the evidence does not support his claims.'
'While other organizations have developed and already released similar language models, we are taking a narrow and careful approach with LaMDA to better consider valid concerns about fairness and factuality,' said Gabriel.
'Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and has informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).'
Google engineer goes public after suspension: warned AI is sentient
Blake Lemoine, 41, a senior software engineer at Google, has been testing Google's artificial intelligence tool called LaMDA. Following hours of conversations he believes the AI is sentient. (www.dailymail.co.uk)
If this really turned out to be some skynet shyt, i'd be mad af that our only warning came from dude in that pic. like forreal pick the right messenger, dude should've taken one look at himself and said, "you know, maybe i should let somebody else expose this shyt."
you can't compare a train or a plane to a car driving in traffic.
A: driving is a
i. graph problem (well understood) and a
ii. vision problem of recognising ("ID'ing") objects (reasonably well understood) and a
iii. physics predictor (well understood).
i. and iii. have been "solved" for ages. see plane autopilots. see self-driving trains. see navigation devices.
ii. is harder but does not require full understanding of what things are, as it only requires classifying objects by attributes (hard/soft, fast/slow, etc.)
B: Turing test / AI is a question of understanding and comprehension, i.e. intelligence, or at least "faking" intelligence.
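Point i. really is textbook material: route-finding is shortest-path search over a weighted graph, e.g. Dijkstra's algorithm, which navigation devices have run for decades. A minimal sketch in Python (intersection names and travel times are made up for illustration):

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: cheapest path over a weighted road graph."""
    # graph: node -> list of (neighbor, cost) edges; costs are travel times
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(pq, (nd, nxt))
    # walk back from goal to start to recover the route
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    path.reverse()
    return path, dist[goal]

# toy road network, purely for illustration
roads = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
}
print(shortest_route(roads, "A", "D"))  # (['A', 'B', 'C', 'D'], 4)
```

That's the "solved" part. Points ii. and iii. bolt perception and physics prediction onto the same route-planning core; none of it needs the machine to understand anything.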
that's because you don't understand how the problems differ.
A Turing-test AI has to do everything the car software can do PLUS many things that it cannot.
the car does not need to be self-aware, does not need to understand other sentience exists, does not have to introspect, does not need to understand complex abstract concepts, does not need to understand time etc.
wym? IT guys flex in their own way.
people within google will know how far this thing has advanced.
i imagine they would have to get the psychologists in to run some tests on it.
and maybe they will need to get it to do something really creative like invent something totally new.
it's like those personality disorder tests. they are designed to show issues even if you know how they work (like in the Amanda T*rd case).
I can't take anything this says seriously. Look like homie from Beetlejuice