bnew · Veteran · Joined: Nov 1, 2015 · Messages: 44,965 · Reputation: 7,413 · Daps: 135,820
[images: PUlhiVP.png, 1UlcQZv.png]

:ohhh:
 

bnew · Veteran · Joined: Nov 1, 2015 · Messages: 44,965 · Reputation: 7,413 · Daps: 135,820








The RedPajama project aims to create a set of leading open-source models and to rigorously understand the ingredients that yield good performance. A few weeks ago we released the RedPajama base dataset based on the LLaMA paper, which has galvanized the open-source community. The 5 terabyte dataset has been downloaded hundreds of times and used to train models such as MPT, OpenLLaMA, and OpenAlpaca. Today we are excited to release RedPajama-INCITE models, including instruction-tuned and chat versions.


Today’s release includes our first models trained on the RedPajama base dataset: 3B- and 7B-parameter base models that aim to replicate the LLaMA recipe as closely as possible. In addition, we are releasing fully open-source instruction-tuned and chat models. Our key takeaways:
  • The 3B model is the strongest in its class, and its small size makes it extremely fast and accessible (it even runs on an RTX 2070, released over 5 years ago).
  • The instruction-tuned versions of the models achieve strong performance on HELM benchmarks. As expected, on HELM the 7B model scores 3 points higher than the base LLaMA model. We recommend using these models for few-shot downstream applications such as entity extraction, classification, or summarization.
  • The 7B model (which is 80% of the way through training) already outperforms Pythia 7B, which shows the importance of a larger dataset and the value of the RedPajama base dataset.
  • Based on our observations, we see a clear path to creating a better version of the RedPajama dataset that goes beyond the quality of LLaMA 7B; we will release it in the coming weeks. We plan to build models at larger scale with this new dataset.
  • We expect differences between LLaMA 7B and our replication, which we have investigated below.
The biggest takeaway is the demonstration that performant LLMs can be built quickly by the open-source community. This work builds on top of our 1.2 trillion token RedPajama dataset, EleutherAI’s Pythia training code, FlashAttention from Stanford and Together, the HELM benchmarks from Stanford CRFM, and generous support from MILA, EleutherAI, and LAION for compute time on the Summit supercomputer within the INCITE program award “Scalable Foundation Models for Transferable Generalist AI”. We believe these kinds of open collaborations, at larger scales, will be behind the best AI systems of the future.
“RedPajama 3B model is the strongest model in its class and brings a performant large language model to a wide variety of hardware.”
Today’s release includes the following models, all released under the permissive Apache 2.0 license, allowing for use in both research and commercial applications.
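For readers who want to try the release, here is a minimal usage sketch with the Hugging Face transformers library. The repository ID below is an assumption based on the announced naming (check the release page or the Hugging Face Hub for the exact model IDs), and the few-shot prompt is just an illustration of the classification-style usage recommended above.

```python
# Minimal sketch: run a RedPajama-INCITE model with Hugging Face transformers.
# The model ID is an assumption based on the announced naming scheme; verify
# the exact repository name on the Hugging Face Hub before running.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "togethercomputer/RedPajama-INCITE-Instruct-3B-v1"  # assumed ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # fp16 so the 3B model fits on a consumer GPU
    device_map="auto",
)

# A plain few-shot classification prompt, in the spirit of the recommended
# downstream uses (entity extraction, classification, summarization).
prompt = (
    "Label the sentiment of each review as positive or negative.\n"
    "Review: The battery lasts all day. Sentiment: positive\n"
    "Review: It broke after a week. Sentiment: negative\n"
    "Review: Setup was quick and painless. Sentiment:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=5, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```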
 

O.Red · Superstar · Joined: Jun 1, 2012 · Messages: 16,125 · Reputation: 4,838 · Daps: 62,106 · Reppin: NULL
And subsequently fail every exam... and if you somehow got through, you'd be a terrible employee who wouldn't know anything. That's the setback with humans no longer learning anything.
This is a fundamental human flaw I've noticed. We're such slaves to convenience and technology that we atrophy our natural and potential gifts

It's like whenever I hear a person say "Why learn math when calculators exist?" Calculators are fine, but learning and applying math has tangible benefits for intelligence and brain function. A lot of the most successful people are better at math than most of their peers.

Technology enslaves us when it transcends being a tool and becomes a means for passing the buck
 


bnew · Veteran · Joined: Nov 1, 2015 · Messages: 44,965 · Reputation: 7,413 · Daps: 135,820
NEWS · 02 May 2023

‘Remarkable’ AI tool designs mRNA vaccines that are more potent and stable


Software from Baidu Research yields jabs for COVID that have greater shelf stability and that trigger a larger antibody response in mice than conventionally designed shots.

Elie Dolgin

[Image: During the COVID-19 pandemic, mRNA vaccines against the coronavirus SARS-CoV-2 had to be kept at temperatures below –15 °C to maintain their stability. A new AI tool could improve that characteristic. Credit: Jean-Francois Monier/AFP via Getty]


An artificial intelligence (AI) tool that optimizes the gene sequences found in mRNA vaccines could help to create jabs with greater potency and stability that could be deployed across the globe.

Developed by scientists at the California division of Baidu Research, an AI company based in Beijing, the software borrows techniques from computational linguistics to design mRNA sequences with shapes and structures more intricate than those used in current vaccines. This enables the genetic material to persist for longer than usual. The more stable the mRNA that’s delivered to a person’s cells, the more antigens are produced by the protein-making machinery in that person’s body. This, in turn, leads to a rise in protective antibodies, theoretically leaving immunized individuals better equipped to fend off infectious diseases.

What’s more, the enhanced structural complexity of the mRNA offers improved protection against vaccine degradation. During the COVID-19 pandemic, mRNA-based shots against the coronavirus SARS-CoV-2 famously had to be transported and kept at temperatures below –15 °C to maintain their stability. This limited their distribution in resource-poor regions of the world that lack access to ultracold storage facilities. A more resilient product, optimized by AI, could eliminate the need for cold-chain equipment to handle such jabs.

The new methodology is “remarkable”, says Dave Mauger, a computational RNA biologist who previously worked at Moderna in Cambridge, Massachusetts, a maker of mRNA vaccines. “The computational efficiency is really impressive and more sophisticated than anything that has come before.”

Linear thinking

Vaccine developers already commonly adjust mRNA sequences to align with cells’ preferences for certain genetic instructions over others. This process, known as codon optimization, leads to more-efficient protein production. The Baidu tool takes this a step further, ensuring that the mRNA — usually a single-stranded molecule — loops back on itself to create double-stranded segments that are more rigid (see ‘Design optimization’).
[Figure: Design optimization. Before and after optimization of an mRNA sequence using the new AI tool. Source: adapted from ref. 1]
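As a rough illustration of the codon-optimization step described above (not of LinearDesign itself, which also optimizes the mRNA's secondary structure), here is a toy Python sketch that swaps each amino acid for a single preferred synonymous codon. The codon table is a small illustrative subset, not real codon-usage data.

```python
# Toy codon optimization: map each amino acid to one "preferred" synonymous
# codon. This sketches only the codon-choice step; LinearDesign additionally
# optimizes secondary structure, which this example does not attempt.
# The table is an illustrative subset, not real human codon-usage data.
PREFERRED_CODON = {
    "M": "AUG",  # methionine (start)
    "F": "UUC",  # phenylalanine
    "L": "CUG",  # leucine
    "S": "AGC",  # serine
    "K": "AAG",  # lysine
    "*": "UGA",  # stop
}

def naive_codon_optimize(protein: str) -> str:
    """Translate a one-letter amino-acid sequence into an mRNA coding sequence."""
    try:
        return "".join(PREFERRED_CODON[aa] for aa in protein)
    except KeyError as exc:
        raise ValueError(f"No codon listed for amino acid {exc}") from exc

if __name__ == "__main__":
    # A short hypothetical peptide followed by a stop signal.
    print(naive_codon_optimize("MFLSK*"))  # -> AUGUUCCUGAGCAAGUGA
```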
Known as LinearDesign, the tool takes just minutes to run on a desktop computer. In validation tests, it has yielded vaccines that, when evaluated in mice, triggered antibody responses up to 128 times greater than those mounted after immunization with more conventional, codon-optimized vaccines. The algorithm also helped to extend the shelf stability of vaccine designs up to sixfold in standard test-tube assays performed at body temperature.

“It’s a tremendous improvement,” says Yujian Zhang, former head of mRNA technology at StemiRNA Therapeutics in Shanghai, China, who led the experimental-validation studies.

So far, Zhang and his colleagues have tested LinearDesign-enhanced vaccines against only COVID-19 and shingles in mice. But the technique should prove useful when designing mRNA vaccines against any disease, says Liang Huang, a former Baidu scientist who spearheaded the tool’s creation. It should also help in mRNA-based therapeutics, says Huang, who is now a computational biologist at Oregon State University in Corvallis.

The researchers reported their findings on 2 May in Nature [1].

Optimal solutions

Already, the tool has been used to optimize at least one authorized vaccine: a COVID-19 shot from StemiRNA, called SW-BIC-213, that won approval for emergency use in Laos late last year. Under a licensing agreement established in 2021, the French pharma giant Sanofi has been using LinearDesign in its own experimental mRNA products, too.
https://www.nature.com/articles/d41586-023-00042-z
Executives at both companies stress that many design features factor into the performance of their vaccine candidates. But LinearDesign is “certainly one type of algorithm that can help with this”, says Sanofi’s Frank DeRosa, head of research and biomarkers at the company’s mRNA Center of Excellence.

Another was reported last year. A team led by Rhiju Das, a computational biologist at Stanford School of Medicine in California, demonstrated that even greater protein expression can be eked out of mRNA (in cultured human cells at least) if certain loop patterns are taken out of their strands, even when such changes loosen the overall rigidity of the molecule [2].

That suggests that alternative algorithms might be preferable, says theoretical chemist Hannah Wayment-Steele, a former member of Das’s team who is now at Brandeis University in Waltham, Massachusetts. Or, it suggests that manual fine-tuning of LinearDesign-optimized mRNA could lead to even better vaccine sequences.

But according to David Mathews, a computational RNA biologist at the University of Rochester Medical Center in New York, LinearDesign can do the bulk of the heavy lifting. “It gets people in the right ballpark to start doing any optimization,” he says. Mathews helped develop the algorithm and is a co-founder, along with Huang, of Coderna.ai, a start-up based in Sunnyvale, California, that is developing the software further. Their first task has been updating the platform to account for the types of chemical modification found in most approved and experimental mRNA vaccines; LinearDesign, in its current form, is based on an unmodified mRNA platform that has fallen out of favour among most vaccine developers.

A structured approach

But mouse studies and cell experiments are one thing. Human trials are another. Given that the immune system has evolved to recognize certain RNA structures as foreign — especially the twisted ladder shapes within many viruses that encode their genomes as double-stranded RNA — some researchers worry that an optimization algorithm such as LinearDesign could end up creating vaccine sequences that spur harmful immune reactions in people.

“That’s kind of a liability,” says Anna Blakney, an RNA bioengineer at the University of British Columbia in Vancouver, Canada, who was not involved in the study.

Early results from human clinical trials involving StemiRNA’s SW-BIC-213 suggest the extra structure is not a problem, however. In small booster trials reported so far, the shot’s side effects have proved no worse than those reported with other mRNA-based COVID-19 vaccines [3]. But as Blakney points out: “We’ll learn more about that in the coming years.”

Source: Nature news article, ‘Remarkable’ AI tool designs mRNA vaccines that are more potent and stable
 

bnew · Veteran · Joined: Nov 1, 2015 · Messages: 44,965 · Reputation: 7,413 · Daps: 135,820

ChatGPT can pick stocks better than your fund manager


Anna Cooban, CNN Digital
Published May 5, 2023 4:43 p.m. EDT

[Image: a phone screen shows the ChatGPT logo. Credit: Gabby Jones/Bloomberg/Getty Images]


A basket of stocks selected by ChatGPT, a chatbot powered by artificial intelligence (AI), has far outperformed some of the most popular investment funds in the United Kingdom.

Between March 6 and April 28, a dummy portfolio of 38 stocks gained 4.9 per cent while 10 leading investment funds clocked an average loss of 0.8 per cent, according to an experiment conducted by financial comparison site finder.com.

It wouldn't "be long until large numbers of consumers try to use [ChatGPT] for financial gain," Jon Ostler, Finder's CEO, said in a statement earlier this week.

Over the same eight-week period, the S&P 500 index, which tracks the 500 most valuable companies in the United States, rose 3 per cent. Europe's equivalent, the Stoxx Europe 600 index, ticked up 0.5 per cent in that time.

A typical investment fund pulls together money from multiple investors, and is overseen by a fund manager who decides how to invest that money.

Finder's analysts took the 10 most popular UK funds on trading platform Interactive Investor as a benchmark for assessing the performance of the ChatGPT-generated fund. Funds managed by HSBC and Fidelity were among those selected.

The analysts asked ChatGPT to select stocks based on some commonly used criteria, including picking companies with a low level of debt and a track record of growth. Microsoft, Netflix and Walmart were among the companies selected.
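As a rough, hypothetical sketch of the kind of screening criteria described above: filter a universe of companies by debt level and growth record. The tickers and figures below are invented for illustration; this is not Finder's prompt or ChatGPT's actual method.

```python
# Hypothetical screening sketch: keep companies with low debt and a growth
# record, echoing the criteria the analysts gave ChatGPT. All tickers and
# figures are invented for illustration.
from dataclasses import dataclass

@dataclass
class Fundamentals:
    ticker: str
    debt_to_equity: float      # total debt / shareholders' equity
    revenue_growth_3y: float   # average annual revenue growth over three years

UNIVERSE = [
    Fundamentals("AAA", debt_to_equity=0.3, revenue_growth_3y=0.12),
    Fundamentals("BBB", debt_to_equity=1.8, revenue_growth_3y=0.20),
    Fundamentals("CCC", debt_to_equity=0.5, revenue_growth_3y=0.02),
    Fundamentals("DDD", debt_to_equity=0.4, revenue_growth_3y=0.15),
]

def screen(universe, max_debt_to_equity=1.0, min_growth=0.05):
    """Return tickers with low leverage and a sustained growth record."""
    return [
        f.ticker
        for f in universe
        if f.debt_to_equity <= max_debt_to_equity and f.revenue_growth_3y >= min_growth
    ]

print(screen(UNIVERSE))  # -> ['AAA', 'DDD']
```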

While major funds have used AI for years to support their investment decisions, ChatGPT has put the technology in the hands of the general public, with the potential to guide the decisions of retail investors.

A survey of 2,000 UK adults conducted by Finder last week showed that 8 per cent had already used ChatGPT for financial advice, while 19 per cent said they would consider doing so.

Yet a much bigger 35 per cent said they would not consider using the chatbot to help them make decisions about their money.

Still, "fund managers may be starting to look nervously over their shoulders," Ostler said.

DISRUPTING FINANCE

In a study published in April, researchers at the University of Florida found that ChatGPT could predict the stock price movements of specific companies more accurately than some more basic analysis models.

Since research company OpenAI opened up access to ChatGPT in December, the chatbot has stunned users with its ability to provide lengthy, sophisticated responses to questions.

Its potential uses — from writing high school essays to dispensing medical guidance — have raised concerns that the technology could provide misleading information, allow students to cheat on exams, and oust real people from their jobs.

Ostler at Finder said the "safe and recommended" approach for individual investors was to conduct their own research or speak to a qualified financial adviser. He cautioned that it was too early for investors to trust AI with their finances.

Nevertheless, "the democratization of AI seems to be something that will disrupt and revolutionize financial industries," Ostler said.
 

bnew · Veteran · Joined: Nov 1, 2015 · Messages: 44,965 · Reputation: 7,413 · Daps: 135,820

ChatGPT: China detains man for allegedly generating fake train crash news, first known time person held over use of AI bot

  • Police in Gansu say suspect named Hong used artificial intelligence technology to concoct information and post it on multiple accounts
  • Chinese regulations that took effect in January require videos and photos made using deep synthesis tech to be clearly labelled to prevent public confusion



William Zheng
Published: 6:00pm, 8 May, 2023
A man has been arrested in Gansu province in China after allegedly using AI to create “false and untrue information”. Photo: Shutterstock

Chinese police have detained a man who allegedly used ChatGPT to generate fake news and disseminate it online in what may be the country’s first detention related to use of the bot.

Police in northwestern China’s Gansu province said in a statement on Sunday that a suspect surnamed Hong had been detained for “using artificial intelligence technology to concoct false and untrue information”.

The case first caught the attention of the cyber division of a county police bureau when they spotted a fake news article that claimed nine people had been killed in a local train accident on April 25.

The cybersecurity officers in Kongtong county found the article simultaneously posted by more than 20 accounts on Baijiahao, a blog-style platform run by Chinese search engine giant Baidu. The stories had received more than 15,000 clicks by the time they came to the authorities’ attention.


This is the first time the public has been made aware of a detention by Chinese authorities after Beijing’s first provisions to regulate the use of “deepfake” technology officially took effect in January.

The provisions, called the Administrative Provisions on Deep Synthesis for Internet Information Service, define deep synthesis as the use of technologies – including deep learning and augmented reality – to generate text, images, audio and video and to create virtual scenarios.

The police said they traced the origins of the article to a company owned by the suspect Hong, which operated personal media platforms registered in Shenzhen in Guangdong province in southern China. Some 10 days later a police team searched Hong’s home and his computer and detained him.

The statement said Hong confessed to bypassing Baijiahao’s duplication check function to publish on multiple accounts he had acquired. He input the elements of trending social stories in China from past years into ChatGPT to quickly produce different versions of the same fake story and uploaded them to his Baijiahao accounts, it said.

While ChatGPT is not directly available to Chinese IP addresses, Chinese users can still access its service if they have a reliable VPN connection.


The Gansu public security department said Hong was suspected of the crime of “picking quarrels and provoking trouble”, a charge that normally carries a maximum sentence of five years. But in cases that are deemed especially severe, offenders can be jailed for 10 years and given additional penalties.

In 2013, the Chinese authorities extended the charge to cover people deemed to have posted and spread false news or rumours online.

China’s top internet regulator has long voiced concern that unchecked development and use of deep synthesis technology could lead to its use in criminal activities such as online scams or defamation.

China’s regulations, which were jointly introduced by the Cyberspace Administration of China, the Ministry of Industry and Information Technology and the Ministry of Public Security, say videos and photos made using deep synthesis technology must be “clearly labelled” to prevent public confusion.

Previously, the regulation of deep synthesis was spread across multiple authorities; the move to implement a stand-alone regulation shows China wants to rein in the technology’s rapid development and address the regulatory challenges it poses.

One of the most common and notorious applications of the technology is the deepfake, where synthetic media is used to swap the face or voice of one person for another. Such content is getting more difficult to detect because of advances in the technology, which has now been used around the world to generate fake celebrity porn videos, produce fake news and commit financial fraud.

Western social media platforms such as Twitter and Facebook have also introduced measures to detect and prevent the spread of disinformation generated by deepfake technology.

As ChatGPT has gone viral in recent months, China’s law enforcement agencies have repeatedly voiced suspicion, and even warnings, about the technology.

In one of the first comments on the chatbot made by the Chinese security apparatus, police in Beijing specifically warned the public in February to be wary of “rumours” generated by ChatGPT.
 

bnew · Veteran · Joined: Nov 1, 2015 · Messages: 44,965 · Reputation: 7,413 · Daps: 135,820
OP, do you have a list of recommended sites, accounts/pages to follow? It would be great to add them on your first post if possible

I find some info from r/singularity, r/futurology, tech news sites, searching twitter, and browsing the profiles of people commenting on AI development.

I'll try to make a list this weekend of the accounts I posted tweets from here.

These user accounts are from all the tweets I posted here. It's not precise, since it likely includes accounts of commenters asking questions too, but the majority should be accounts that have discussed A.I. development. I'm not endorsing anything, since I only post individual tweets and know nothing about these users' other interests.

a multi-account nitter link:



 