bnew

Veteran
Joined: Nov 1, 2015 · Messages: 44,348 · Reputation: 7,364 · Daps: 134,326

DIY GPT-Powered Monocle Will Tell You What to Say in Every Conversation

A student coder used GPT-4 and open-source hardware to create RizzGPT and LifeOS, systems designed to feed you the right line for the right time.

By Chloe Xiang
April 26, 2023, 9:00am

IMAGE: BRYAN CHIANG

AI chatbots that can churn out convincing text are all the rage, but what if you could wear one on your face to feed you the right line for any given moment? To give you what Gen Z calls sparkling charisma: rizz?

“Say goodbye to awkward dates and job interviews,” a Stanford student developer named Bryan Chiang tweeted in March. “We made rizzGPT—real-time Charisma as a Service (CaaS). it listens to your conversation and tells you exactly what to say next.”

Chiang is one of many developers trying to create autonomous agents—so-called auto-GPTs—using OpenAI’s GPT-4 language model. Developers of auto-GPT models hope to create applications that can do a number of things on their own, such as formulate and execute a list of tasks, and write and debug their own code. In this case, Chiang created a GPT-powered monocle that people can wear; when someone asks the wearer a question, the glasses project a caption that the wearer can read out loud. The effect is something like a DIY version of Google Glass.

In the video below Chiang’s tweet, a man asks mock interview questions of the person behind the camera, who uses RizzGPT to reply. “I hear you’re looking for a job to teach React Native,” the interviewer says. “Thank you for your interest. I’ve been studying React Native for the past few months and I am confident that I have the skills and knowledge necessary for the job,” reads the GPT-4-generated caption, which the interviewee then says aloud.

“We have to make computing more personal and it could be integrated into every facet of our life, not just like when we're on our screens,” Chiang told Motherboard. “Even if we're out and about talking to friends, walking around, I feel like there's so much more that computers can do and I don't think people are thinking big enough.”

RizzGPT combines GPT-4 with Whisper, OpenAI’s speech recognition system, and Monocle AR glasses, an open-source device. Chiang said generative AI made the app possible because its ability to process text and audio lets it follow a live conversation.
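
Functionally, the loop is simple: transcribe what was just said, ask GPT-4 for a reply, and render it as a caption. Here is a minimal sketch of that pipeline in Python, assuming the `openai` SDK; it is an illustration of the design, not Chiang’s actual code, and the audio capture and monocle rendering are stubbed out.

```python
# Minimal sketch of a RizzGPT-style loop (not Chiang's actual code).
# Assumes the `openai` Python SDK (v1+) and OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

def suggest_reply(audio_path: str) -> str:
    # 1. Speech-to-text: Whisper turns the last few seconds of
    #    conversation into a transcript.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=f
        ).text

    # 2. Language model: GPT-4 drafts the wearer's next line.
    chat = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Suggest a short, charismatic reply the wearer can say next."},
            {"role": "user", "content": transcript},
        ],
    )
    # 3. Display: the real device renders this on the monocle;
    #    here we just return the caption text.
    return chat.choices[0].message.content

if __name__ == "__main__":
    print(suggest_reply("last_utterance.wav"))  # hypothetical audio clip
```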

After creating RizzGPT, Chiang trained the app further to create LifeOS. LifeOS, which was manually trained on Chiang’s personal messages, pictures of his friends, and other data, allowed the monocle to recognize his friends’ faces and bring up relevant details when talking to them.

I conducted a live demo with Chiang to see firsthand how RizzGPT works, holding a mock VICE reporter interview with him while he wore the glasses. I searched Google for the most popular interview questions and began the interview by saying, “Hi, thank you so much for your application for the reporter position at VICE. Today we will be interviewing you in consideration of this position.”

Chiang used RizzGPT to respond to all seven of my questions adequately and eloquently, but all the responses were broad and clichéd.

In response to “Why should we hire you?” he said, “Thank you for this position. I believe I am the best candidate for this job because I have a passion for journalism and a deep understanding of the current media landscape.” When I asked, “What are your weaknesses?” he replied, “My biggest weakness is my tendency to be too detail-oriented in my work.”

RizzGPT’s most creative response came to my question, “If you were an animal, which animal would you want to be and why?” Chiang answered, using the AI’s line: “If I were an animal, I would want to be a cheetah. They are incredibly fast and agile, which reflects my ambition, drive, and focus.”

Chiang acknowledged that RizzGPT still needs a lot more work on the hardware, voice, and personalization. He said that it’s still difficult to wear the monocle comfortably, that the GPT responses lag, resulting in an awkward pause between speakers, and that the monocle can’t refer to personal information without manual training. Still, he said he hopes his demos can show the next steps for generative AI and its everyday applications.

“Hopefully, by putting out these fun demos, it shows people that this is what’s possible and this is the future that we’re heading towards,” he said.
 

Wargames

One Of The Last Real Ones To Do It
Joined: Apr 1, 2013 · Messages: 23,190 · Reputation: 3,930 · Daps: 86,287 · Reppin: New York City

Google Glass might have a comeback lmao
 

bnew

Veteran
Joined: Nov 1, 2015 · Messages: 44,348 · Reputation: 7,364 · Daps: 134,326



Introducing LLaVA Lightning: Train a lite, multimodal GPT-4 with just $40 in 3 hours! With our newly introduced datasets and the efficient design of LLaVA, you can now turbocharge your language model with image reasoning capabilities, in an incredibly affordable way.🧵

Quote Tweet
Chunyuan Li (@ChunyuanLi) · Apr 18
🔥Visual Instruction Tuning with GPT-4!
We release LLaVA, a Language-and-Vision Assistant that exhibits some near multimodal GPT-4 level capabilities:
- 🤖Visual Chat: 85% relative score of GPT-4
- 🧪Science QA on reasoning: New SoTA 92.53%, beats multimodal chain-of-thoughts twitter.com/_akhaliq/statu…

9:41 PM · May 2, 2023

Haotian Liu (@imhaotian) · 21h
(2/5) Excited to release a 558K concept-balanced subset of LAION/CC/SBU & an 80K high-quality subset of LLaVA-Instruct-158K. The concept-balanced subset ensures broad concept coverage, and the high-quality visual instruct tuning data enables models' visual reasoning capability.

Haotian Liu (@imhaotian) · 21h
(3/5) Upgrade your Vicuna-7B to LLaVA-Lightning in just 3 hrs: 2 hrs pretraining + 1 hr visual instruct tuning. Train on 8x A100s using cloud spot instances for just $40. Let's make this research more accessible to researchers, academia, and millions of AI enthusiasts today!

Haotian Liu (@imhaotian) · 21h
(4/5) We're also upgrading LLaVA to support Vicuna v0 & v1 weights, with more checkpoints arriving this week! Plus, we're working to support more hardware – stay tuned!
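
Conceptually, the two stages in the thread map to: (1) pretrain a small projection layer that maps frozen vision-encoder features into the LLM's embedding space, then (2) unfreeze the LLM and finetune on visual instruction data. Below is a rough PyTorch sketch of that design; the class and module names are placeholders, not the LLaVA repo's real code, so consult the repo for the actual training scripts.

```python
# Rough PyTorch sketch of LLaVA's two-stage recipe as described in the
# thread; names (vision_encoder, language_model) are hypothetical.
import torch
import torch.nn as nn

class LlavaLiteSketch(nn.Module):
    def __init__(self, vision_encoder, language_model, vis_dim=1024, llm_dim=4096):
        super().__init__()
        self.vision = vision_encoder       # e.g. a frozen CLIP ViT
        self.llm = language_model          # e.g. Vicuna-7B (HF-style)
        # The projector is the only newly trained module in stage 1.
        self.projector = nn.Linear(vis_dim, llm_dim)

    def forward(self, images, text_embeds, labels):
        with torch.no_grad():                    # vision tower stays frozen
            vis_feats = self.vision(images)      # (B, n_patches, vis_dim)
        vis_tokens = self.projector(vis_feats)   # map into LLM token space
        inputs = torch.cat([vis_tokens, text_embeds], dim=1)
        # Real code must also pad `labels` to ignore the image positions;
        # that bookkeeping is omitted here.
        return self.llm(inputs_embeds=inputs, labels=labels).loss

def configure_stage(model: LlavaLiteSketch, stage: int):
    # Stage 1 (~2 hrs): align vision features to the LLM by training
    # only the projector on image-caption pairs.
    # Stage 2 (~1 hr): visual instruct tuning, unfreezing the LLM too.
    for p in model.vision.parameters():
        p.requires_grad = False
    for p in model.llm.parameters():
        p.requires_grad = (stage == 2)
    for p in model.projector.parameters():
        p.requires_grad = True
```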
 

bnew

Veteran
Joined: Nov 1, 2015 · Messages: 44,348 · Reputation: 7,364 · Daps: 134,326

Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes

Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister
Deploying large language models (LLMs) is challenging because they are memory-inefficient and compute-intensive for practical applications. In reaction, researchers train smaller task-specific models by either finetuning with human labels or distilling using LLM-generated labels. However, finetuning and distillation require large amounts of training data to achieve performance comparable to LLMs. We introduce Distilling step-by-step, a new mechanism that (a) trains smaller models that outperform LLMs, and (b) does so using less training data than finetuning or distillation require. Our method extracts LLM rationales as additional supervision for small models within a multi-task training framework. We present three findings across 4 NLP benchmarks: First, compared to both finetuning and distillation, our mechanism achieves better performance with far fewer labeled/unlabeled training examples. Second, compared to LLMs, we achieve better performance using substantially smaller model sizes. Third, we reduce both the model size and the amount of data required to outperform LLMs; our 770M T5 model outperforms the 540B PaLM model using only 80% of available data on a benchmark task.
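
The core mechanism in the abstract is a multi-task objective: the same small model learns both to predict the label and to generate the LLM's rationale, with the two tasks distinguished by input prefixes. Here is a minimal sketch with a T5 model from Hugging Face transformers; the prefix strings and loss weight are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the Distilling step-by-step multi-task loss:
# one small seq2seq model, two targets per example (label + rationale).
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("t5-small")
tok = T5Tokenizer.from_pretrained("t5-small")
RATIONALE_WEIGHT = 1.0  # lambda in the multi-task objective (illustrative)

def step_loss(question: str, label: str, rationale: str):
    def seq2seq_loss(prefix: str, target: str):
        inputs = tok(prefix + question, return_tensors="pt")
        targets = tok(target, return_tensors="pt").input_ids
        return model(**inputs, labels=targets).loss

    # Task 1: predict the label, as in standard finetuning/distillation.
    label_loss = seq2seq_loss("[label] ", label)
    # Task 2: reproduce the LLM-generated rationale as extra supervision.
    rationale_loss = seq2seq_loss("[rationale] ", rationale)
    return label_loss + RATIONALE_WEIGHT * rationale_loss

# Example training step on an LLM-annotated triple (values made up):
loss = step_loss(
    "Is 17 prime?",
    "yes",
    "17 has no divisors other than 1 and itself, so it is prime.",
)
loss.backward()
```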
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,348
Reputation
7,364
Daps
134,326

Will A.I. Become the New McKinsey?

The technology, as it’s currently imagined, promises to concentrate wealth and disempower workers. Is an alternative imaginable?
By Ted Chiang
May 4, 2023
 

Stir Fry

Dipped in Sauce
Supporter
Joined: Mar 1, 2015 · Messages: 30,266 · Reputation: 26,670 · Daps: 132,059

Started reading graphic novels recently, and stumbled across this one, which is really good. It's about the apocalypse, and how it's being led by the child of one of the four horsemen, who was raised by an AI machine with the intent of bringing about the end of the world lol


Click the image to advance the page, back arrow to go back. The site works great on my laptop, but I've been having a hard time getting it to work on my tablet, despite running pop-up blockers
 

bnew

Veteran
Joined: Nov 1, 2015 · Messages: 44,348 · Reputation: 7,364 · Daps: 134,326



decentralising the AI industry, just some language model APIs...

ChatGPT clone

Currently implementing new features and trying to scale it, so please be patient; it may be unstable
ChatGPT: this site was developed by me and includes GPT-4/3.5, internet access, and GPT jailbreaks like DAN
Run locally here: GitHub - xtekky/chatgpt-clone: ChatGPT interface with better UI
 