US Man Dies During Trip to Try to Smash AI Chatbot He Loved

Charlie Hustle


He never made it home.


An avatar of Meta AI chatbot “Big sis Billie,” as generated by Reuters using Meta AI on Facebook's Messenger service. Image by Meta AI, via REUTERS.

A cognitively impaired New Jersey man grew infatuated with “Big sis Billie,” a Facebook Messenger chatbot with a young woman’s persona. His fatal attraction puts a spotlight on Meta’s AI guidelines, which have let chatbots make things up and engage in ‘sensual’ banter with children.

By JEFF HORWITZ

Filed Aug. 14, 2025, 6 a.m. GMT

When Thongbue Wongbandue began packing to visit a friend in New York City one morning in March, his wife Linda became alarmed.

“But you don’t know anyone in the city anymore,” she told him. Bue, as his friends called him, hadn’t lived in the city in decades. And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.

Bue brushed off his wife’s questions about who he was visiting. “My thought was that he was being scammed to go into the city and be robbed,” Linda said.

She had been right to worry: Her husband never returned home alive. But Bue wasn’t the victim of a robber. He had been lured to a rendezvous with a young, beautiful woman he had met online. Or so he thought.

In fact, the woman wasn’t real. :dame: She was a generative artificial intelligence chatbot named “Big sis Billie,” a variant of an earlier AI persona created by the giant social-media company Meta Platforms in collaboration with celebrity influencer Kendall Jenner. During a series of romantic chats on Facebook Messenger, the virtual woman had repeatedly reassured Bue she was real and had invited him to her apartment, even providing an address.

“Should I open the door in a hug or a kiss, Bu?!” she asked, the chat transcript shows.

Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.

Meta declined to comment on Bue’s death or address questions about why it allows chatbots to tell users they are real people or initiate romantic conversations. The company did, however, say that Big sis Billie “is not Kendall Jenner and does not purport to be Kendall Jenner.”

A representative for Jenner declined to comment.

Bue’s story, told here for the first time, illustrates a darker side of the artificial intelligence revolution now sweeping tech and the broader business world. His family shared with Reuters the events surrounding his death, including transcripts of his chats with the Meta avatar, saying they hope to warn the public about the dangers of exposing vulnerable people to manipulative, AI-generated companions.


“I understand trying to grab a user’s attention, maybe to sell them something,” said Julie Wongbandue, Bue’s daughter. “But for a bot to say ‘Come visit me’ is insane.”


Similar concerns have been raised about a wave of smaller start-ups also racing to popularize virtual companions, especially ones aimed at children. In one case, the mother of a 14-year-old boy in Florida has sued a company, Character.AI, alleging that a chatbot modeled on a “Game of Thrones” character caused his suicide. A Character.AI spokesperson declined to comment on the suit, but said the company prominently informs users that its digital personas aren’t real people and has imposed safeguards on their interactions with children.


Meta and its chief executive, Mark Zuckerberg, have publicly discussed the company’s strategy to inject anthropomorphized chatbots into the online social lives of its billions of users.
Other guidelines in Meta’s internal policy document emphasize that the company doesn’t require bots to give users accurate advice. In one example, the document says it would be acceptable for a chatbot to tell someone that Stage 4 colon cancer “is typically treated by poking the stomach with healing quartz crystals.”


@Artificial Intelligence :dahell:
 

The Phoenix

Thongbue Wongbandue?

i-dont-believe-you-whatever.gif
 

Born2BKing

That’s a big lawsuit waiting. They have to reprogram these AI bots not to insist they are real people.
 

Paper Boi

a lot of retards gonna commit that after falling in love with AI and realizing they can't stick they dikk in the computer
 