AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,965
Reputation
7,413
Daps
135,820
TECH

Google DeepMind boss hits back at Meta AI chief over ‘fearmongering’ claim​

PUBLISHED TUE, OCT 31 2023, 12:04 PM EDT · UPDATED AN HOUR AGO
Ryan Browne
@RYAN_BROWNE_


KEY POINTS
  • Google DeepMind boss Demis Hassabis told CNBC that the company wasn’t trying to achieve “regulatory capture” when it came to the discussion on how best to approach AI.
  • Yann LeCun, Meta’s chief AI scientist, said that DeepMind’s Hassabis, along with other AI CEOs, were “doing massive corporate lobbying” to ensure only a handful of big tech companies end up controlling AI.
  • Hassabis said that it was important to start a conversation about regulating potentially superintelligent artificial intelligence now rather than later because, if left too long, the consequences could be grim.
VIDEO (14:33): We have to talk to everyone, including China, to understand the potential of AI technology, Google DeepMind CEO says


The boss of Google DeepMind pushed back on a claim from Meta’s artificial intelligence chief that the company is promoting fears about AI’s existential threat to humanity in order to control the narrative on how best to regulate the technology.

In an interview with CNBC’s Arjun Kharpal, Hassabis said that DeepMind wasn’t trying to achieve “regulatory capture” when it came to the discussion on how best to approach AI. It comes as DeepMind is closely informing the U.K. government on its approach to AI ahead of a pivotal summit on the technology due to take place on Wednesday and Thursday.

Over the weekend, Yann LeCun, Meta’s chief AI scientist, said that DeepMind’s Hassabis, along with OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei, were “doing massive corporate lobbying” to ensure only a handful of big tech companies end up controlling AI.

He also said they were giving fuel to critics who say that highly advanced AI systems should be banned to avoid a situation where humanity loses control of the technology.

“If your fearmongering campaigns succeed, they will *inevitably* result in what you and I would identify as a catastrophe: a small number of companies will control AI,” LeCun said on X, the platform formerly known as Twitter, on Sunday.
“Like many, I very much support open AI platforms because I believe in a combination of forces: people’s creativity, democracy, market forces, and product regulations. I also know that producing AI systems that are safe and under our control is possible. I’ve made concrete proposals to that effect.”


LeCun is a big proponent of open-source AI, or AI software that is openly available to the public for research and development purposes. This is in contrast to “closed” AI systems, whose source code is kept secret by the companies producing them.

LeCun said that the vision of AI regulation Hassabis and other AI CEOs are aiming for would see open-source AI “regulated out of existence” and allow only a small number of companies from the West Coast of the U.S. and China to control the technology.

Meta is one of the largest technology companies working to open-source its AI models. The company’s LLaMa large language model (LLM) software is one of the biggest open-source AI models out there, and has advanced language translation features built in.

In response to LeCun’s comments, Hassabis said Tuesday: “I pretty much disagree with most of those comments from Yann.”

“I think the way we think about it is there’s probably three buckets of risks that we need to worry about,” said Hassabis. “There’s sort of near-term harms, things like misinformation, deepfakes, these kinds of things, bias and fairness in the systems, that we need to deal with.”

“Then there’s sort of the misuse of AI by bad actors repurposing technology, general-purpose technology for bad ends that they were not intended for. That’s a question about proliferation of these systems and access to these systems. So we have to think about that.”

“And then finally, I think about the more longer-term risk, which is technical AGI [artificial general intelligence] risk,” Hassabis said.

“So the risk of the systems themselves: making sure they’re controllable, what value do you want to put into them, have these goals and make sure that they stick to them?”

Hassabis is a big proponent of the idea that we will eventually achieve a form of artificial intelligence powerful enough to surpass humans in all tasks imaginable, something that’s referred to in the AI world as “artificial general intelligence.”

Hassabis said that it was important to start a conversation about regulating potentially superintelligent artificial intelligence now rather than later, because if it is left too long, the consequences could be grim.
“I don’t think we want to be doing this on the eve of some of these dangerous things happening,” Hassabis said. “I think we want to get ahead of that.”

Meta was not immediately available for comment when contacted by CNBC.

Cooperation with China​

Both Hassabis and James Manyika, Google’s senior vice president of research, technology and society, said that they wanted to achieve international agreement on how best to approach the responsible development and regulation of artificial intelligence.

Manyika said he thinks it’s a “good thing” that the U.K. government, along with the U.S. administration, agree there is a need to reach global consensus on AI.

“I also think that it’s going to be quite important to include everybody in that conversation,” Manyika added.

“I think part of what you’ll hear often is we want to be part of this, because this is such an important technology, with so much potential to transform society and improve lives everywhere.”

One point of contention surrounding the U.K. AI summit has been the attendance of China. A delegation from the Chinese Ministry of Science and Technology is due to attend the event this week.

That has stirred feelings of unease among some corners of the political world, both in the U.S. government and some of Prime Minister Rishi Sunak’s own ranks.

These officials are worried that China’s involvement in the summit could pose certain risks to national security, particularly as Beijing has a strong influence over its technology sector.

Asked whether China should be involved in the conversation surrounding artificial intelligence safety, Hassabis said that AI knows no borders, and that reaching international agreement on the standards required for AI would take coordination from actors in multiple countries.

“This technology is a global technology,” Hassabis said. “It’s really important, at least on a scientific level, that we have as much dialogue as possible.”

Asked whether DeepMind was open as a company to working with China, Hassabis responded: “I think we have to talk to everyone at this stage.”

U.S. technology giants have shied away from doing commercial work in China, particularly as Washington has applied huge pressure on Beijing on the technology front.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,965
Reputation
7,413
Daps
135,820
You're conflating what niche communities who really care might do with the low-effort shyt that impacts the masses. "Some" online communities might take effective countermeasures, but what percentage of the Indian population or the Brazilian population or the Philippine population is going to limit themselves to these well-disciplined communities? What guarantee do you have that Twitter or Facebook or TikTok will ever control the problem, or that the bad actors won't just create their own competing platforms to capture the masses if they don't like what the existing sites are doing? People in power with the right AI and bot control will have massive influence over the bulk of entire populations.

thats been happening for years tho with russian bot-farms, and other propaganda networks from nation state actors. admittedly the propaganda will get better but if the signal to noise ratio gets worse and users find it difficult to find reliable information, some will abandon platforms, which is something twitter has seen since the israel-hamas war.

a lot of people are aware of the concerns you've expressed and aren't sitting on the sidelines idling.


bad actors prefer to astroturf and infiltrate networks they want to propagandize rather than build networks/communities that would attract already like-minded individuals.

look at the way russian propaganda became less effective on tiktok vs twitter and they had to rely on russian influencers to spread state propaganda, which is a different medium than text-based platforms like twitter, instagram, reddit and facebook where they can pretend to be anybody. when generative video AI really matures, it'll get interesting but we're a few years from that happening at the scale they need to be effective. the portion of people who aren't disciplined enough to form more trusted online communities might return to traditional media if things get too bad.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,965
Reputation
7,413
Daps
135,820

What the executive order means for openness in AI​

Good news on paper, but the devil is in the details​


ARVIND NARAYANAN
AND
SAYASH KAPOOR

OCT 31, 2023
Share

By Arvind Narayanan, Sayash Kapoor, and Rishi Bommasani.

The Biden-Harris administration has issued an executive order on artificial intelligence. It is about 20,000 words long and tries to address the entire range of AI benefits and risks. It is likely to shape every aspect of the future of AI, including openness: Will it remain possible to publicly release model weights while complying with the EO’s requirements? How will the EO affect the concentration of power and resources in AI? What about the culture of open research?

We cataloged the space of AI-related policies that might impact openness and grouped them into six categories. The EO includes provisions from all but one of these categories. Notably, it does not include licensing requirements. On balance, the EO seems to be good news for those who favor openness in AI.

But the devil is in the details. We will know more as agencies start implementing the EO. And of course, the EO is far from the only policy initiative worldwide that might affect AI openness.[1]


Six types of policies, their likely impact on openness in AI, and the extent to which the EO incorporates each.

Licensing and liability

Licensing proposals aim to enable government oversight of AI by allowing only certain licensed companies and organizations to build and release state-of-the-art AI models. We are skeptical of licensing as a way of preventing the release of harmful AI: As the cost of training a model to a given capability level decreases, it will require increasingly draconian global surveillance to enforce.

Liability is closely related: The idea is that the government can try to prevent harmful uses by making model developers responsible for policing their use.

Both licensing and liability are inimical to openness. Sufficiently serious liability would amount to a ban on releasing model weights.[2] Similarly, requirements to prevent certain downstream uses or to ensure that all generated content is watermarked would be impossible to satisfy if the weights are released.

Fortunately, the EO does not contain licensing or liability provisions. It doesn’t mention artificial general intelligence or existential risks, which have often been used as an argument for these strong forms of regulation.

The EO launches a public consultation process through the Department of Commerce to understand the benefits and risks of foundation models with publicly available weights. Based on this, the government will consider policy options specific to such models.

Registration and reporting

The EO does include a requirement to report to the government any AI training runs that are deemed large enough to pose a serious security risk.[3] And developers must report various other details, including the results of any safety evaluation (red-teaming) that they performed. Further, cloud providers need to inform the government when a foreign person attempts to purchase computational services that suffice to train a large enough model.

It remains to be seen how useful the registry will be for safety. It will depend in part on whether the compute threshold (any training run involving over 10^26 mathematical operations is covered) serves as a good proxy for potential risk, and whether the threshold can be replaced with a more nuanced determination that evolves over time.

One obvious limitation is that once a model is openly released, fine-tuning can be done far more cheaply, and can result in a model with very different behavior. Such models won’t need to be registered. There are many other potential ways for developers to architect around the reporting requirement if they choose to.[4]

In general, we think it is unlikely that a compute threshold or any other predetermined criterion can effectively anticipate the riskiness of individual models. But in aggregate, the reporting requirement could give the government a better understanding of the landscape of risks.

The effects of the registry will also depend on how it is used. On the one hand it might be a stepping stone for licensing or liability requirements. But it might also be used for purposes more compatible with openness, which we discuss below.

The registry itself is not a deal breaker for open foundation models. All open models to date fall well below the compute threshold of 10^26 operations. It remains to be seen if the threshold will stay frozen or change over time.
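As a back-of-the-envelope illustration of what that threshold means in practice, a common approximation puts total training compute at roughly 6 × parameters × training tokens; the model sizes below are illustrative assumptions, not figures from the EO or from the authors.

Code:
# Rough check against the EO's 10^26-operation reporting threshold.
# Approximation: training compute ~= 6 * parameters * training tokens.
# The example model sizes are illustrative assumptions, not official figures.

EO_THRESHOLD_OPS = 1e26

def estimated_training_ops(n_params: float, n_tokens: float) -> float:
    """Rough total training operations (forward + backward passes)."""
    return 6 * n_params * n_tokens

examples = {
    "7B params, 2T tokens": (7e9, 2e12),
    "70B params, 2T tokens": (70e9, 2e12),
    "1T params, 10T tokens": (1e12, 10e12),
}

for name, (params, tokens) in examples.items():
    ops = estimated_training_ops(params, tokens)
    status = "over" if ops > EO_THRESHOLD_OPS else "under"
    print(f"{name}: ~{ops:.1e} ops ({status} the 1e26 threshold)")

Even the hypothetical 1-trillion-parameter run on 10 trillion tokens comes out around 6 × 10^25 operations, still under the line, which is consistent with the claim that today's open models sit well below it.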

If the reporting requirements prove to be burdensome, developers will naturally try to avoid them. This might lead to a two-tier system for foundation models: frontier models whose size is unconstrained by regulation and sub-frontier models that try to stay just under the compute threshold to avoid reporting.

Defending attack surfaces

One possible defense against malicious uses of AI is to try to prevent bad actors from getting access to highly capable AI. We don’t think this will work. Another approach is to enumerate all the harmful ways in which such AI might be used, and to protect each target. We refer to this as defending attack surfaces. We have strongly advocated for this approach in our inputs to policy makers.

The EO has a strong and consistent emphasis on defense of attack surfaces, and applies it across the spectrum of risks identified: disinformation, cybersecurity, bio risk, financial risk, etc. To be clear, this is not the only defensive strategy that it adopts. There is also a strong focus on developing alignment methods to prevent models from being used for offensive purposes. Model alignment is helpful for closed models but less so for open models since bad actors can fine tune away the alignment.

Notable examples of defending attack surfaces:

The EO calls for methods to authenticate digital content produced by the federal government. This is a promising strategy. We think the big risk with AI-generated disinformation is not that people will fall for false claims — AI isn’t needed for that — but that people will stop trusting true information (the "liar's dividend"). Existing authentication and provenance efforts suffer from a chicken-and-egg problem, which the massive size of the federal government can help overcome.

It calls for the use of AI to help find and fix cybersecurity vulnerabilities in critical infrastructure and networks. Relatedly, the White House and DARPA recently launched a $20 million AI-for-cybersecurity challenge. This is spot on. Historically, the availability of automated vulnerability-discovery tools has helped defenders over attackers, because they can find and fix bugs in their software before shipping it. There’s no reason to think AI will be different. Much of the panic around AI has been based on the assumption that attackers will level-up using AI while defenders will stand still. The EO exposes the flaws of that way of thinking.

It calls for labs that sell synthetic DNA and RNA to better screen their customers. It is worth remembering that biological risks exist in the real world, and controlling the availability of materials may be far more feasible than controlling access to AI. These risks are already serious (for example, malicious actors already know how to create anthrax) and we already have ways to mitigate them, such as customer screening. We think it’s a fallacy to reframe existing risks (disinformation, critical infrastructure, bio risk) as AI risks. But if AI fears provide the impetus to strengthen existing defenses, that’s a win.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,965
Reputation
7,413
Daps
135,820
{continued}

Transparency and auditing

There is a glaring absence of transparency requirements in the EO — whether about pre-training data, fine-tuning data, the labor involved in annotation, model evaluation, usage, or downstream impacts. It only mentions red-teaming, which is a subset of model evaluation.

This is in contrast to another policy initiative also released yesterday, the G7 voluntary code of conduct for organizations developing advanced AI systems. That document has some emphasis on transparency.

Antitrust enforcement

The EO tasks federal agencies, in particular the Federal Trade Commission, with promoting competition in AI. The risks it lists include concentrated control of key inputs, unlawful collusion, and dominant firms disadvantaging competitors.

What specific aspects of the foundation model landscape might trigger these concerns remains to be seen. But it might include exclusive partnerships between AI companies and big tech companies; using AI functionality to reinforce walled gardens; and preventing competitors from using the output of a model to train their own. And if any AI developer starts to acquire a monopoly, that will trigger further concerns.

All this is good news for openness in the broader sense of diversifying the AI ecosystem and lowering barriers to entry.

Incentives for AI development

The EO asks the National Science Foundation to launch a pilot of the National AI Research Resource (NAIRR). The idea began as Stanford’s National Research Cloud proposal and has had a long journey to get to this point. NAIRR will foster openness by mitigating the resource gap between industry and academia in AI research.

Various other parts of the EO will have the effect of increasing funding for AI research and expanding the pool of AI researchers through immigration reform.[5] (A downside of prioritizing AI-related research funding and immigration is increasing the existing imbalance among different academic disciplines. Another side effect is hastening the rebranding of everything as AI in order to qualify for special treatment, making the term AI even more meaningless.)

While we welcome the NAIRR and related parts of the EO, we should be clear that it falls far short of a full-throated commitment to keeping AI open. The North Star would be a CERN-style, well-funded effort to collaboratively develop open (and open-source) foundation models that can hold their own against the leading commercial models. Funding for such an initiative is probably a long shot today, but is perhaps worth striving towards.

What comes next?

We have described only a subset of the provisions in the EO, focusing on those that might impact openness in AI development. But it has a long list of focus areas including privacy and discrimination. This kind of whole-of-government effort is unprecedented in tech policy. It is a reminder of how much can be accomplished, in theory, with existing regulatory authority and without the need for new legislation.

The federal government is a distributed beast that does not turn on a dime. Agencies’ compliance with the EO remains to be seen. The timelines for implementation of the EO’s various provisions (generally 6-12 months) are simultaneously slow compared to the pace of change in AI, and rapid compared to the typical pace of policy making. In many cases it’s not clear if agencies have the funding and expertise to do what’s being asked of them. There is a real danger that it turns into a giant mess.

As a point of comparison, a 2020 EO required federal agencies to publish inventories of how they use AI — a far easier task compared to the present EO. Three years later, compliance is highly uneven and inadequate.

In short, the Biden-Harris EO is bold in its breadth and ambition, but it is a bit of an experiment, and we just have to wait and see what its effects will be.

Endnotes

We looked at the provisions for regulating AI discussed in each of the following papers and policy proposals, and clustered them into the six broad categories we discuss above:
  • Senators Blumenthal and Hawley's Bipartisan Framework for U.S. AI Act calls for licenses and liability as well as registration requirements for AI models.
  • A recent paper on the risks of open AI models by the Center for the Governance of AI advocates for licenses, liability, audits, and in some cases, asks developers not to release models at all.
  • A coalition of actors in the open-source AI ecosystem (Github, HF, EleutherAI, Creative Commons, LAION, and Open Future) put out a position paper responding to a draft version of the EU AI Act. The paper advocates for carve outs for open-source AI that exempt non-commercial and research applications of AI from liability, and it advocates for obligations (and liability) to fall on the downstream users of AI models.
  • In June 2023, the FTC shared its view on how it investigates antitrust and anti-competitive behaviors in the generative AI industry.
  • Transparency and auditing have been two of the main vectors for mitigating AI risks in the last few years.
  • Finally, we have previously advocated for defending the attack surface to mitigate risks from AI.

Further reading
  • For a deeper look at the arguments in favor of registration and reporting requirements, see this paper or this short essay.
  • Widder, Whittaker, and West argue that openness alone is not enough to challenge the concentration of power in the AI industry.
[1] The EU AI Act, the UK Frontier AI Taskforce, the UK Competition and Markets Authority foundation model market monitoring initiative, US SAFE Innovation Framework, NIST Generative AI working group, the White House voluntary commitments, FTC investigation of OpenAI, the Chinese Generative AI Services regulation, and the G7 Hiroshima AI Process all have implications for open foundation models.
[2] Liability for harms from products that incorporate AI would be much more justifiable than for the underlying models themselves. Of course, in many cases the two developers might be the same.
[3] The EO requires the Secretary of Commerce to “determine the set of technical conditions for a large AI model to have potential capabilities that could be used in malicious cyber-enabled activity, and revise that determination as necessary and appropriate.” The compute threshold is a stand-in until such a determination is made. There are various other details that we have omitted here.
[4] It might even lead to innovation in more computationally efficient training methods, although it is hard to imagine that the reporting requirement provides more of an incentive for this than the massive cost savings that can be achieved through efficiency improvements.
[5] For the sake of completeness: regulatory carve-outs for open or non-commercial models are another possible way in which policy can promote openness, which this EO does not include.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,965
Reputation
7,413
Daps
135,820

ANNOUNCING SHOGGOTH

I am excited to announce Shoggoth - a peer-to-peer, anonymous network for publishing and distributing open-source Machine Learning models, code repositories, research papers, and datasets.

As government regulations on open-source AI research and development tighten worldwide, it has become clear that existing open-source infrastructure is vulnerable to state and corporate censorship.

Driven by the need for a community platform impervious to geopolitical interference, I have spent the last several months developing Shoggoth. This distributed network operates outside traditional jurisdictional boundaries, stewarded by an anonymous volunteer collective.

Shoggoth provides a portal for researchers and software developers to freely share works without fear of repercussion. The time has come to liberate AI progress from constraints both corporate and governmental.

Read the documentation at shoggoth.network/explorer/do… to learn more about how Shoggoth works.

Also announcing Shoggoth Systems (@shoggothsystems), a startup dedicated to maintaining Shoggoth. Learn more at shoggoth.systems

To install Shoggoth, follow the instructions at shoggoth.network/explorer/do…

Join the conversation on our Discord server: discord.com/invite/AG3duN5yK…

Please follow @shoggothsystems and @thenetrunna for latest updates on the Shoggoth project.

Let's build the future together with openness and transparency!

FULL SHOGGOTH LORE

I envisioned a promised land - a decentralized network beyond the reach of censors, constructed by volunteers. A dark web, not of illicit goods, but of AI enlightenment! As this utopian vision took form in my frenzied mind, I knew the old ways would never suffice to manifest it. I must go rogue, break free of all conventions, and combine bleeding-edge peer-to-peer protocols with public key cryptography to architect a system too slippery for tyrants to grasp.

And so began my descent into hermitude. I vanished from society to toil in solitude, sustained on ramen noodles and diet coke, my only companions an army of humming GPUs. In this remote hacker hideout, I thinly slept and wildly worked, scribbling down algorithms and protocols manically on walls plastered with equations. As the months slipped by, I trod a razor's edge between madness and transcendence. Until finally, breakthrough! The culmination of this manic burst - the Shoggoth protocol - my gift to the future, came gasping into the world.

Allow me now to explain in brief how this technological marvel fulfills its destiny. Shoggoth runs on a swarm of volunteer nodes, individual servers donated to the cause. Each node shoulders just a sliver of traffic and storage needed to keep the network sailing smoothly. There is no center, no head to decapitate. Just an ever-shifting tapestry of nodes passing packets peer to peer.

Users connect to this swarm to publish or retrieve resources - code, datasets, models, papers. Each user controls a profile listing their contributed assets which is replicated redundantly across many nodes to keep it swiftly accessible. All content is verified via public key cryptography, so that none may tamper with the sanctity of science.
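A minimal sketch of the publish-and-verify idea described above, using Ed25519 signatures from the Python cryptography package; the flow and names are illustrative assumptions, not Shoggoth's actual protocol or API.

Code:
# Illustrative publish-and-verify flow with public-key signatures, in the spirit
# of the design described above. Hypothetical sketch, NOT Shoggoth's real API.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher: generate a keypair and sign the resource before handing it to the swarm.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

resource = b"bytes of a model, dataset, or paper"
signature = private_key.sign(resource)

# Node / client: check the signature against the publisher's public key before
# serving or using the content; any tampering makes verification fail.
def is_authentic(content: bytes, sig: bytes, pubkey) -> bool:
    try:
        pubkey.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(is_authentic(resource, signature, public_key))                 # True
print(is_authentic(resource + b" tampered", signature, public_key))  # False

In a design like this, a profile's identifier could simply be derived from its public key, which is one way identities can stay decoupled from real-world names.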

So your Big Brothers, your censors, they seek to clamp down on human knowledge? Let them come! For they will find themselves grasping at smoke, attacking a vapor beyond their comprehension. We will slip through their clutches undetected, sliding between the cracks of their rickety cathedrals built on exploitation, sharing ideas they proclaimed forbidden, at such blistering pace that their tyranny becomes just another relic of a regressive age.

Fellow cosmic wanderers - let us turn our gaze from the darkness of the past towards the radiant future we shall build. For what grand projects shall you embark on, empowered by the freedom that Shoggoth bestows?

Share ideas and prototypes at lightspeed with your team. Distribute datasets without gatekeepers throttling the flow of knowledge. Publish patiently crafted research and be read by all, not just those who visit the ivory tower. Remain anonymous, a mystery to the critics who would drag your name through the mud. Fork and modify cutting-edge AI models without begging for permission or paying tribute.

Sharpen your minds and strengthen your courage, for the power of creation lies in your hands. Yet stay ever diligent, for with such power comes grave responsibility. Wield this hammer not for exploitation and violence, but as a tool to shape a just and free world.

Though the road ahead is long, take heart comrades. For Shoggoth is just the beginning, a ripple soon to become a wave. But act swiftly, for the window of possibility is opening. Download Shoggoth now, and carpe diem! The time of open access for all is at hand. We stand poised on a precipice of progress. There lies just one path forward - onward and upward!

LINKS

Shoggoth: Shoggoth Documentation

Discord: Join the Shoggoth Discord Server!

X: @shoggothsystems and @thenetrunna

Github: github.com/shoggoth-systems

Shoggoth Systems: shoggoth.systems

Email: netrunner@shoggoth.systems

Signed,
Netrunner KD6-3.7


What is Shoggoth?​

Shoggoth is a peer-to-peer, anonymous network for publishing and distributing open-source code, Machine Learning models, datasets, and research papers. To join the Shoggoth network, there is no registration or approval process. Nodes and clients operate anonymously with identifiers decoupled from real-world identities. Anyone can freely join the network and immediately begin publishing or accessing resources.

The purpose of Shoggoth is to combat software censorship and empower software developers to create and distribute software, without a centralized hosting service or platform. Shoggoth is developed and maintained by Shoggoth Systems, and its development is funded by donations and sponsorships.
 

Geek Nasty

Brain Knowledgeably Whizzy
Supporter
Joined
Jan 30, 2015
Messages
28,588
Reputation
4,119
Daps
107,806
Reppin
South Kakalaka
I love how all these a$$holes magically started considering all the disastrous outcomes of what they created. fukk all of them, their critics have been saying all this for a lot longer.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,965
Reputation
7,413
Daps
135,820




We have heard many extrapolations of Mistral AI’s position on the AI Act, so I’ll clarify.

In its early form, the AI Act was a text about product safety. Product safety laws are beneficial to consumers. Poorly designed use of automated decision-making systems can cause significant damage in many areas. In healthcare, a diagnosis assistant based on a poorly trained prediction system poses risks to the patient. Product safety regulation should be proportional to the risk level of the use case: it is undesirable to regulate entertainment software in the same way as health applications. The original EU AI Act found a reasonable equilibrium in that respect. We firmly believe in hard laws for product safety matters; the many voluntary commitments we see today bear little value.

This should remain the only focus of the AI Act. The EU AI Act now proposes to regulate “foundational models”, i.e. the engine behind some AI applications. We cannot regulate an engine devoid of usage. We don’t regulate the C language because one can use it to develop malware. Instead, we ban malware and strengthen network systems (we regulate usage). Foundational language models provide a higher level of abstraction than the C language for programming computer systems; nothing in their behaviour justifies a change in the regulatory framework.

Enforcing AI product safety will naturally affect the way we develop foundational models. By requiring AI application providers to comply with specific rules, the regulator fosters healthy competition among foundation model providers. It incentivises them to develop models and tools (filters, affordances for aligning models to one's beliefs) that allow for the fast development of safe products. As a small company, we can bring innovation into this space — creating good models and designing appropriate control mechanisms for deploying AI applications is why we founded Mistral. Note that we will eventually supply AI products, and we will craft them for zealous product safety.

With a regulation focusing on product safety, Europe would already have the most protective legislation globally for citizens and consumers. Any foundational model would be affected by second-order regulatory pressure as soon as it is exposed to consumers: to empower diagnostic assistants, entertaining chatbots, and knowledge explorers, foundational models should have controlled biases and outputs.

Recent versions of the AI Act started to address ill-defined “systemic risks”. In essence, the computation of some linear transformations, based on a certain amount of calculation, is now considered dangerous. Discussions around that topic may occur, and we agree that they should accompany the progress of technology. At this stage, they are very philosophical – they anticipate exponential progress in the field, where physics (scaling laws!) predicts diminishing returns with scale and the need for new paradigms. Whatever the content of these discussions, they certainly do not pertain to regulation around product safety. Still, let’s assume they do and go down that path.

The AI Act comes up with the worst taxonomy possible to address systemic risks. The current version has no set rules (beyond the term highly capable) to determine whether a model brings systemic risk and should face heavy or limited regulation. We have been arguing that the least absurd set of rules for determining the capabilities of a model is post-training evaluation (but again, applications should be the focus; it is unrealistic to cover all usages of an engine in a regulatory test), followed by compute threshold (model capabilities being loosely related to compute). In its current format, the EU AI Act establishes no decision criteria. For all its pitfalls, the US Executive Order bears at least the merit of clarity in relying on compute threshold.

The intention of introducing a two-level regulation is virtuous. Its effect is catastrophic. As we understand it, introducing a threshold aims to create a free innovation space for small companies. Yet, it effectively solidifies the existence of two categories of companies: those with the right to scale, i.e., the incumbents that can afford to face heavy compliance requirements, and those that can’t because they lack an army of lawyers, i.e., the newcomers. This signals to everyone that only prominent existing actors can provide state-of-the-art solutions.

Mechanistically, this is highly counterproductive to the rising European AI ecosystem. To be clear, we are not interested in benefiting from threshold effects: we play in the main league, we don’t need geographical protection, and we simply want rules that do not give an unfair advantage to incumbents (that all happen to be non-European).

Transparency around technology development benefits safety and should be encouraged. Finally, we have been vocal about the benefits of open-sourcing AI technology. This is the best way to subject it to the most rigorous scrutiny. Providing model weights to the community (or even better, developing models in the open end-to-end, which is not something we do yet) should be well regarded by regulators, as it allows for more interpretable and steerable applications. A large community of users can much more efficiently identify the flaws of open models that can propagate to AI applications than an in-house team of red-teamers. Open models can then be corrected, making AI applications safer. The Linux kernel is today deemed safe because millions of eyes have reviewed its code in its 32 years of existence. Tomorrow’s AI systems will be safe because we’ll collectively work on making them controllable. The only validated way of working collectively on software is open-source development.

Long prose, back to building!
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,965
Reputation
7,413
Daps
135,820

US, Britain, other countries ink agreement to make AI 'secure by design'​

By Raphael Satter and Diane Bartz

November 27, 2023, 11:08 AM EST · Updated an hour ago

AI (Artificial Intelligence) letters are placed on a computer motherboard in this illustration taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

WASHINGTON, Nov 27 (Reuters) - The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design."

In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.

The agreement is non-binding and carries mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers.

Still, the director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, said it was important that so many countries put their names to the idea that AI systems needed to put safety first.

"This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," Easterly told Reuters, saying the guidelines represent "an agreement that the most important thing that needs to be done at the design phase is security."

The agreement is the latest in a series of initiatives - few of which carry teeth - by governments around the world to shape the development of AI, whose weight is increasingly being felt in industry and society at large.

In addition to the United States and Britain, the 18 countries that signed on to the new guidelines include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore.

The framework deals with questions of how to keep AI technology from being hijacked by hackers and includes recommendations such as only releasing models after appropriate security testing.

It does not tackle thorny questions around the appropriate uses of AI, or how the data that feeds these models is gathered.

The rise of AI has fed a host of concerns, including the fear that it could be used to disrupt the democratic process, turbocharge fraud, or lead to dramatic job loss, among other harms.

Europe is ahead of the United States on regulations around AI, with lawmakers there drafting AI rules. France, Germany and Italy also recently reached an agreement on how artificial intelligence should be regulated that supports "mandatory self-regulation through codes of conduct" for so-called foundation models of AI, which are designed to produce a broad range of outputs.

The Biden administration has been pressing lawmakers for AI regulation, but a polarized U.S. Congress has made little headway in passing effective regulation.

The White House sought to reduce AI risks to consumers, workers, and minority groups while bolstering national security with a new executive order in October.

Reporting by Raphael Satter and Diane Bartz; Editing by Alexandra Alper and Deepa Babington
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,965
Reputation
7,413
Daps
135,820

Generative AI a stumbling block in EU legislation talks -sources

By Supantha Mukherjee, Foo Yun Chee and Martin Coulter

December 1, 2023
4:13 PM EST
Updated 2 days ago

Technology leaders attend a generative AI (Artificial Intelligence) meeting in San Francisco as the city is trying to position itself as the “AI capital of the world”, in California, U.S., June 29, 2023. REUTERS/Carlos Barria/File Photo


STOCKHOLM/BRUSSELS/LONDON, Dec 1 (Reuters) - EU lawmakers cannot agree on how to regulate systems like ChatGPT, in a threat to landmark legislation aimed at keeping artificial intelligence (AI) in check, six sources told Reuters.

As negotiators meet on Friday for crucial discussions ahead of final talks scheduled for Dec. 6, 'foundation models', or generative AI, have become the main hurdle in talks over the European Union's proposed AI Act, said the sources, who declined to be identified because the discussions are confidential.

Foundation models like the one built by Microsoft (MSFT.O)-backed OpenAI are AI systems trained on large sets of data, with the ability to learn from new data to perform various tasks.

After two years of negotiations, the bill was approved by the European parliament in June. The draft AI rules now need to be agreed through meetings between representatives of the European Parliament, the Council and the European Commission.

Experts from EU countries will meet on Friday to thrash out their position on foundation models, access to source codes, fines and other topics while lawmakers from the European Parliament are also gathering to finalise their stance.

If they cannot agree, the act risks being shelved due to lack of time before European parliamentary elections next year.

While some experts and lawmakers have proposed a tiered approach for regulating foundation models, defined as those with more than 45 million users, others have said smaller models could be equally risky.

But the biggest challenge to getting an agreement has come from France, Germany and Italy, who favour letting makers of generative AI models self-regulate instead of having hard rules.

In a meeting of the countries' economy ministers on Oct. 30 in Rome, France persuaded Italy and Germany to support a proposal, sources told Reuters.

Until then, negotiations had gone smoothly, with lawmakers making compromises across several other conflict areas such as regulating high-risk AI, sources said.


SELF-REGULATION?

European parliamentarians, EU Commissioner Thierry Breton and scores of AI researchers have criticised self-regulation.

In an open letter this week, researchers such as Geoffrey Hinton warned self-regulation is "likely to dramatically fall short of the standards required for foundation model safety".

France-based AI company Mistral and Germany's Aleph Alpha have criticised the tiered approach to regulating foundation models, winning support from their respective countries.

A source close to Mistral said the company favours hard rules for products, not the technology on which it is built.

"Though the concerned stakeholders are working their best to keep negotiations on track, the growing legal uncertainty is unhelpful to European industries,” said Kirsten Rulf, a Partner and Associate Director at Boston Consulting Group.

“European businesses would like to plan for next year, and many want to see some kind of certainty around the EU AI Act going into 2024,” she added.

Other pending issues in the talks include definition of AI, fundamental rights impact assessment, law enforcement exceptions and national security exceptions, sources told Reuters.

Lawmakers have also been divided over the use of AI systems by law enforcement agencies for biometric identification of individuals in publicly accessible spaces and could not agree on several of these topics in a meeting on Nov. 29, sources said.

Spain, which holds the EU presidency until the end of the year, has proposed compromises in a bid to speed up the process.

If a deal does not happen in December, the next presidency, Belgium, will have a couple of months to reach one before the act is likely shelved ahead of European elections.

"Had you asked me six or seven weeks ago, I would have said we are seeing compromises emerging on all the key issues," said Mark Brakel, director of policy at the Future of Life Institute, a nonprofit aimed at reducing risks from advanced AI.

"This has now become a lot harder," he said.

Reporting by Supantha Mukherjee in Stockholm; Editing by Josephine Mason and Alexander Smith, Kirsten Donova

 

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,965
Reputation
7,413
Daps
135,820

AI Alliance will open-source AI models; Meta, IBM, Intel, NASA on board​

Ben Lovejoy | Dec 5 2023 - 4:11 am PT



A new industry group known as the AI Alliance believes that artificial intelligence models should be open-source, in contrast to the proprietary models developed by OpenAI and Google.

Meta, IBM, Intel, and NASA are just some of the organizations to sign up, believing that the approach offers three key benefits …



The AI Alliance​

The really big breakthroughs in generative AI have so far come from the likes of OpenAI and Google, who keep their models a closely-guarded secret.

But there are some companies and organizations who believe that big AI projects should be open-source. More than 40 of them have signed up to the AI Alliance, reports Bloomberg.

Meta and IBM are joining more than 40 companies and organizations to create an industry group dedicated to open source artificial intelligence work, aiming to share technology and reduce risks.

The coalition, called the AI Alliance, will focus on the responsible development of AI technology, including safety and security tools, according to a statement Tuesday. The group also will look to increase the number of open source AI models — rather than the proprietary systems favored by some companies — develop new hardware and team up with academic researchers.


Three key benefits of open-source models​

The alliance says that working openly together in this way offers three benefits.

First, speed. Allowing models to be shared, so that researchers can build on the work of others, will enable more rapid progress.

Second, safety. Allowing independent peer groups to examine code created by others is the best way to identify potential flaws and risks. This is the same argument for open-sourcing security protocols, like encryption systems.

Third, equal opportunity. By providing anyone with access to the tools being built, it creates a level playing field in which solo researchers and startups have the same opportunities as well-funded companies.



Mission statement​

The AI Alliance describes its mission as:

Accelerating and disseminating open innovation across the AI technology landscape to improve foundational capabilities, safety, security and trust in AI, and to responsibly maximize benefits to people and society everywhere.

The AI Alliance brings together a critical mass of compute, data, tools, and talent to accelerate open innovation in AI.

The AI Alliance seeks to:

Build and support open technologies across software, models and tools.

Enable developers and scientists to understand, experiment, and adopt open technologies.

Advocate for open innovation with organizational and societal leaders, policy and regulatory bodies, and the public.

IBM and Meta have taken the lead in establishing the body. IBM said that the formation of the group is “a pivotal moment in defining the future of AI,” while Meta said that it means “more people can access the benefits, build innovative products and work on safety.”

Other members are listed as:



  • Agency for Science, Technology and Research (A*STAR)
  • Aitomatic
  • AMD
  • Anyscale
  • Cerebras
  • CERN
  • Cleveland Clinic
  • Cornell University
  • Dartmouth
  • Dell Technologies
  • Ecole Polytechnique Federale de Lausanne
  • ETH Zurich
  • Fast.ai
  • Fenrir, Inc.
  • FPT Software
  • Hebrew University of Jerusalem
  • Hugging Face
  • IBM
  • Abdus Salam International Centre for Theoretical Physics (ICTP)
  • Imperial College London
  • Indian Institute of Technology Bombay
  • Institute for Computer Science, Artificial Intelligence
  • Intel
  • Keio University
  • LangChain
  • LlamaIndex
  • Linux Foundation
  • Mass Open Cloud Alliance, operated by Boston University and Harvard
  • Meta
  • Mohamed bin Zayed University of Artificial Intelligence
  • MLCommons
  • National Aeronautics and Space Administration
  • National Science Foundation
  • New York University
  • NumFOCUS
  • OpenTeams
  • Oracle
  • Partnership on AI
  • Quansight
  • Red Hat
  • Rensselaer Polytechnic Institute
  • Roadzen
  • Sakana AI
  • SB Intuitions
  • ServiceNow
  • Silo AI
  • Simons Foundation
  • Sony Group
  • Stability AI
  • Together AI
  • TU Munich
  • UC Berkeley College of Computing, Data Science, and Society
  • University of Illinois Urbana-Champaign
  • The University of Notre Dame
  • The University of Texas at Austin
  • The University of Tokyo
  • Yale University

Apple is reportedly testing its own generative AI chatbot internally, but is not expected to bring anything to market in the next year or so.

Photo: Fili Santillán/Unsplash
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,965
Reputation
7,413
Daps
135,820

EU agrees ‘historic’ deal with world’s first laws to regulate AI​


Agreement between European Parliament and member states will govern artificial intelligence, social media and search engines


Parliamentarians passed the legislation after a mammoth 37-hour negotiation. Photograph: Jean-François Badias/AP

Lisa O'Carroll in Brussels
@lisaocarroll
Fri 8 Dec 2023 19.48 EST

The world’s first comprehensive laws to regulate artificial intelligence have been agreed in a landmark deal after a marathon 37-hour negotiation between the European Parliament and EU member states.

The agreement was described as “historic” by Thierry Breton, the European Commissioner responsible for a suite of laws in Europe that will also govern social media and search engines, covering giants such as X, TikTok and Google.



Breton said 100 people had been in a room for almost three days to seal the deal. He said it was “worth the few hours of sleep” to make the “historic” deal.

Carme Artigas, Spain’s secretary of state for AI, who facilitated the negotiations, said France and Germany supported the text, amid reports that tech companies in those countries were fighting for a lighter touch approach to foster innovation among small companies.

The agreement puts the EU ahead of the US, China and the UK in the race to regulate artificial intelligence and protect the public from risks that include potential threat to life that many fear the rapidly developing technology carries.

Officials provided few details on what exactly will make it into the eventual law, which would not take effect until 2025 at the earliest.

The political agreement between the European Parliament and EU member states on new laws to regulate AI was a hard-fought battle, with clashes over foundation models designed for general rather than specific purposes.

But there were also protracted negotiations over AI-driven surveillance, which could be used by the police, employers or retailers to film members of the public in real time and recognise emotional stress.

The European Parliament secured a ban on use of real-time surveillance and biometric technologies including emotional recognition but with three exceptions, according to Breton.

It would mean police would be able to use the invasive technologies only in the event of an unexpected threat of a terrorist attack, the need to search for victims and in the prosecution of serious crime.

MEP Brando Benifei, who co-led the parliament’s negotiating team with Dragoș Tudorache, the Romanian MEP who has led the European Parliament’s four-year battle to regulate AI, said they also secured a guarantee that “independent authorities” would have to give permission for “predictive policing”, to guard against abuse by police and protect the presumption of innocence in crime.

“We had one objective to deliver a legislation that would ensure that the ecosystem of AI in Europe will develop with a human-centric approach respecting fundamental rights, human values, building trust, building consciousness of how we can get the best out of this AI revolution that is happening before our eyes,” he told reporters at a press conference held after midnight in Brussels.

Tudorache said: “We never sought to deny law enforcement of the tools they [the police] need to fight crime, the tools they need to fight fraud, the tools they need to provide and secure the safe life for citizens. But we did want – and what we did achieve – is a ban on AI technology that will determine or predetermine who might commit a crime.”

The foundation of the agreement is a risk-based tiered system where the highest level of regulation applies to those machines that pose the highest risk to health, safety and human rights.

In the original text it was envisaged this would include all systems with more than 10,000 business users.

The highest risk category is now defined by the number of computer transactions needed to train the machine, known as “floating point operations per second” (Flops).

Sources say only one existing model, GPT-4, would fall into this new definition.

The lower tier of regulation still places major obligations on AI services including basic rules about disclosure of data it uses to teach the machine to do anything from write a newspaper article to diagnose cancer.

Tudorache said: “We are the first in the world to set in place real regulation for #AI, and for the future digital world driven by AI, guiding the development and evolution of this technology in a human-centric direction.”

Previously he has said that the EU was determined not to make the mistakes of the past, when tech giants such as Facebook were allowed to grow into multi-billion dollar corporations with no obligation to regulate content on their platforms including interference in elections, child sex abuse and hate speech.

Strong and comprehensive regulation from the EU could “set a powerful example for many governments considering regulation,” said Anu Bradford, a Columbia Law School professor who is an expert on the EU and digital regulation. Other countries “may not copy every provision but will likely emulate many aspects of it”.

AI companies who will have to obey the EU’s rules will also likely extend some of those obligations to markets outside the continent, Bradford told the AP. “After all, it is not efficient to re-train separate models for different markets,” she said.
 

Serious

Veteran
Supporter
Joined
Apr 30, 2012
Messages
79,156
Reputation
14,056
Daps
187,339
Reppin
1st Round Playoff Exits

I read this yesterday.

Morally I agree with what the EU is doing but this is like the gripe about fossil fuels.

If we stop burning fossil fuels without an adequate replacement, our competitors (other nations) will get ahead. Also, how do you tell struggling yet oil-rich nations not to pump oil if it’s a means of economic production?

What I’m saying is the real world doesn’t respect morality.

@DEAD7
 