We're talking "real A.I. developments" breh, not some workarounds. I don't know much about what you just posted, but they were talking about Meta releasing those weights; it sounds like that kaiokendev guy posted a workaround he found on GitHub. The main tech though was Facebook, who you can probably follow on LinkedIn to view the team responsible.
From the ChatGPT community on Reddit: Meta's LLaMA LLM has leaked - Run Uncensored AI on your home PC! | Pedro Larroy
So now that the LLaMA ( https://lnkd.in/ewY-jPwf ) weights are kindly released by Meta, we can play with some models which were released as "delta weights", such as Vicuna ( https://lnkd.in/ed9Fyii3 ). This is interesting because Meta required you to apply for access. You can also run LLaMA 7B on... (www.linkedin.com)
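For anyone wondering what "delta weights" means there: the release is just the difference between the fine-tuned weights and the base LLaMA weights, so you need Meta's base model to reconstruct anything usable. Here's a minimal PyTorch sketch of the idea; the file names and flat checkpoint layout are made-up assumptions (the real Vicuna release ships its own conversion script).

```python
import torch

def apply_delta(base_path: str, delta_path: str, out_path: str) -> None:
    # Load both state dicts on CPU; a tensor-wise add needs no GPU.
    base = torch.load(base_path, map_location="cpu")
    delta = torch.load(delta_path, map_location="cpu")
    # Fine-tuned weight = base + (fine-tuned - base), tensor by tensor.
    merged = {name: tensor + delta[name] for name, tensor in base.items()}
    torch.save(merged, out_path)

# Hypothetical file names, just to show the call shape.
apply_delta("llama-7b.pt", "vicuna-7b-delta.pt", "vicuna-7b.pt")
```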
Somebody making real progress with expensive hardware or large dataset processing, you think that's happening in some random person's home lab? That kaiokendev doesn't even have a Twitter, but the fact he worked at Stripe: bet they got a LinkedIn though.
Meta spent millions training an A.I. model; they realized they were far behind OpenAI, so they open sourced their model and made it available to a select group of researchers. The model leaked back in early March and the open source community has been innovating at a really fast pace. The dev who managed to increase context length on an already trained model didn't find anything on GitHub; he publishes his own findings on a blog hosted on GitHub.
That's like saying the main tech for a Mustang was the Ford Model T.
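For context, what that dev (kaiokendev) actually wrote up is a rotary-position-embedding interpolation trick: instead of feeding the model position indices past the window it was trained on, you compress the longer range into the trained one. Here's a minimal sketch of the idea in PyTorch; the 2048-token trained window and the dimensions are illustrative assumptions, not his exact code.

```python
import torch

def rope_angles(seq_len: int, dim: int, trained_len: int = 2048) -> torch.Tensor:
    # Standard rotary-embedding inverse frequencies (base theta = 10000).
    inv_freq = 1.0 / (10000.0 ** (torch.arange(0, dim, 2).float() / dim))
    positions = torch.arange(seq_len).float()
    if seq_len > trained_len:
        # The interpolation trick: scale positions down so the longest one
        # still lands inside the [0, trained_len) range seen during training.
        positions = positions * (trained_len / seq_len)
    # (seq_len, dim // 2) table of rotation angles.
    return torch.outer(positions, inv_freq)

# Hypothetical usage: 8192 tokens squeezed into a 2048-position window.
angles = rope_angles(seq_len=8192, dim=128)
```

Scaling by trained_len / seq_len keeps every rotation angle inside the range the model saw during training, which is why a short fine-tune is enough to make the extended window usable.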