AI Regulation - There have been multiple calls to regulate AI. It is too early to do so.

bnew


AI Regulation​

There have been multiple calls to regulate AI. It is too early to do so.


ELAD GIL
SEP 17, 2023


[While I was finalizing this post, Bill Gurley gave this great talk on incumbent capture and regulation].
ChatGPT has only been live for ~9 months and GPT-4 for 6 or so months. Yet there have already been strong calls to regulate AI due to misinformation, bias, existential risk, the threat of biological or chemical attack, potential AI-fueled cyberattacks, etc., without any tangible example of any of these things actually having happened with any real frequency compared to existing versions without AI. Many, like chemical attacks, are truly theoretical, without an ordered logic chain of how they would happen or any explanation as to why existing safeguards or laws are insufficient.


Sometimes, regulation of an industry can be positive for consumers or businesses. For example, FDA regulation of food can protect people from disease outbreaks, chemical manipulation of food, or other issues.
In most cases, regulation can be very negative for an industry and its evolution. It may force an industry to be government-centric versus user-centric, prevent competition and lock in incumbents, move production or economic benefits overseas, or distort the economics and capabilities of an entire industry.
Given the many positive potentials of AI, and the many negatives of regulation, calls for AI regulation are likely premature, but also in some cases clearly self-serving for the parties asking for it (it is not surprising the main incumbents say regulation is good for AI, as it will lock in their incumbency). Some notable counterexamples also exist where we should likely regulate things related to AI, but these are few and far between (e.g. export of advanced chip technology to foreign adversaries is a notable one).
In general, we should not push to regulate most aspects of AI now and let the technology advance and mature further for positive uses before revisiting this area.

First, what is at stake? Global health & educational equity + other areas

Too little of the dialogue today focuses on the positive potential of AI (I cover the risks of AI in another post). AI is an incredibly powerful tool to impact global equity for some of the biggest issues facing humanity. On the healthcare front, models such as Med-PaLM 2 from Google now outperform medical experts to the point where training the model using physician experts may make the model worse.

Imagine having a medical expert available via any phone or device anywhere in the world - one to which you can upload images and symptoms, follow up, and get ongoing diagnosis and care. This technology is available today and just needs to be properly packaged and delivered in a thoughtful way.
Similarly, AI can provide significant educational resources globally today. Even something as simple as auto-translating and dubbing all the educational text, video, or voice content in the world is a straightforward task given today's language and voice models. Adding a chat-like interface that can personalize and pace the learning of the student on the other end is coming shortly based on existing technologies. Significantly increasing global equity of education is a goal we can achieve if we allow ourselves to do so.
Additionally, AI can play a role in other areas, including economic productivity and national defense (covered well here).
AI is likely the single strongest motive force towards global equity in health and education in decades. Regulation is likely to slow down and confound progress towards these and other goals and use cases.

Regulation tends to prevent competition - it favors incumbents and kills startups

In most industries, regulation prevents competition. This famous chart of prices over time reflects how highly regulated industries (healthcare, education, energy) have their costs driven up over time, while less regulated industries (clothing, software, toys) drop costs dramatically over time. (Please note I do not believe these are inflation-adjusted - so 60-70% may be “break even” pricing once adjusted for inflation.)

Regulation favors incumbents in two ways. First, it increases the cost of entering a market, in some cases dramatically. The high cost of clinical trials and the extra hurdles put in place to launch a drug are good examples of this. A must-watch video is this one with Paul Janssen, one of the giants of pharma, in which he states that the vast majority of drug development budgets are wasted on tests imposed by regulators which “has little to do with actual research or actual development”. This is a partial explanation for why (outside of Moderna, an accident of COVID), no $40B+ market cap new biopharma company has been launched in almost 40 years (despite healthcare being 20% of US GDP).
Secondly, regulation favors incumbents via something known as “regulatory capture”. In regulatory capture, the regulators become beholden to a specific industry lobby or group - for example by receiving jobs in the industry after working as a regulator, or via specific forms of lobbying. Regulators thus have a strong incentive to “play nice” with incumbents and to bias regulations their way, in order to get favors later in life.

Regulation often blocks industry progress: Nuclear as an example.

Many of the calls to regulate AI suggest some analog to nuclear. For example, a registry of anyone building models and then a new body to oversee them. Nuclear is a good example of how in some cases regulators will block the entire industry they are supposed to watch over. For example, the Nuclear Regulatory Commission (NRC), established in 1975, has not approved a new nuclear reactor design for decades (indeed, not since the 1970s). This has prevented use of nuclear in the USA, despite actual data showing high safety profiles. France meanwhile has continued to have 70% of its power generated via nuclear, Japan is heading back to 30% with plans to grow to 50%, and the US has been declining down to 18%.
This is despite nuclear being both extremely safe (if one looks at data) and clean from a carbon perspective.

Indeed, most deaths from nuclear in the modern era have been from medical accidents or Russian sub accidents - something the actual regulator of nuclear power in the USA seems oddly unaware of.
Nuclear (and therefore Western energy policy) is ultimately a victim of bad PR, a strong eco-big oil lobby against it, and of regulatory constraints.

Regulation can drive an industry overseas

I am a short term AI optimist, and a long term AI doomer. In other words, I think the short term benefits of AI are immense, and most arguments made on tech-level risks of AI are overstated. For anyone who has read history, humans are perfectly capable of creating their own disasters. However, I do think in the long run (i.e. decades) AI is an existential risk for people. That said, at this point regulating AI will only send it overseas and federate and fragment the cutting edge of it outside US jurisdiction. Just as crypto is increasingly offshoring, and even regulatory-compliant companies like Coinbase are considering leaving the US due to government crackdowns on crypto, regulating AI now in the USA will just send it overseas.
The genie is out of the bottle and this technology is clearly incredibly powerful and important. Over-regulation in the USA has the potential to drive it elsewhere. This would be bad for not only US interests, but also potentially the moral and ethical frameworks in terms of how the most cutting edge versions of AI may get adopted. The European Union may show us an early form of this.
 


Regulation can distort economics, drive costs way up, and slow down progress in important societal areas: Healthcare as an example.

Regulation tends to distort the economics of an industry. Healthcare is a strong example where the person who benefits (the patient) is different from the person who decides what to get (the doctor), what can be paid for (the insurer) and what the market can even have to begin with (the regulator). This has caused a lack of competition for many parts of the healthcare industry, an inability to launch valuable products quickly, and in some cases, prevention of adoption of valuable products (due to lack of a payor).


It is telling that during COVID, when regulations were decreased, we had a flurry of vaccines developed in less than a year and multiple drugs tested and launched in anywhere from a few months to two years. All this was done with minimal patient side effects or bad outcomes. Something similar happened during WW2, when Churchill wanted to find a treatment for soldiers in the field with gonorrhea and made it a national mandate to find a cure. Penicillin was rediscovered and launched to market in less than 9 months.

Who do you want to make decisions on the future? Tech people are naive about “regulation”

Most people in tech have never had to deal with regulators. If you worked at Facebook in the early days or on a SaaS startup, the risk of regulation has likely not come up. Those who have dealt with regulators (such as later Meta employees or people who have run healthcare companies) realize that there tend to be many drawbacks to working in a regulated industry. Obviously, there are some positives from regulators when said regulators are functioning well and focused on their mission - e.g. data-driven prevention of consumer harm, in a way that does not overstep the legal frameworks of the country.
In many discussions I have had with tech people who call for regulation of AI, a few things have come out:
  1. Many people working in AI think deeply and genuinely about what they are working on, and want it to be very positive for the world. Indeed, the AI industry and its early emphasis on safety and alignment strikes me as the most forward-looking group I have seen in tech on the implications of their own technology. However….
  2. …Most people calling for regulation have never worked in a regulated industry nor dealt with regulators. Many of their viewpoints on what “regulation” means are quite naive.
    1. For example, a few people have told me they think regulation means “the government will ask a group of the smartest people working on AI to come together to set policy”. This seems to misunderstand a few basics of regulation - for example, the regulator may actually not understand much about the topic they are regulating, or be driven politically versus factually. Indeed, recent examples of “AI experts” consulting on regulation tend to have standard political agendas, versus being giants in the AI technology world.
    2. Most people do not understand that most “regulators” have varying internal viewpoints, and the group you interact with within a regulator may lead to a completely different outcome. For example, depending on which specific subgroup you engage with at the FDA, SEC, FTC, or other group, you may end up with a very different result for your company. Regulators are often staffed by people with their own motivations, political viewpoints, and career aspirations. This impacts how they work with companies and the industries they regulate. Many regulators are hired later in their careers into the larger companies they regulated - which is part of what causes regulatory capture over time.
  3. There seems to be a lack of appreciation for existing legal frameworks. There are laws and precedents that have been built up over time to cover many aspects of harm that may be caused in the short run by AI (hate speech, misinformation, and other “trust and safety” issues on the one hand, or use of AI to cause cyber attacks or physical harm on the other). These existing legal frameworks seem ignored by many of the people calling for regulation - many of whom do not seem to have any real knowledge of what laws already exist.
  4. Many people misunderstand that the political establishment would like nothing more than to seize more power over tech. The game in tech is often around impact and financial outcomes, while the game in DC is about power. Many who seek power would love to have a way to take over and control tech. Calling for AI regulation is creating an opening to seize broader power. As we saw with COVID policies, the “slippery slope” is real.
  5. It is worth pausing to ask yourself who you want setting norms for the AI industry - the CEOs and researchers behind the main AI companies (AKA self-regulation), or unelected government officials like Lina Khan or Gary Gensler. Which will lead to a worse outcome for AI and society?
Good questions to ask yourself about regulation, given recent times, include: Do you think the current regulators have dealt well recently with inflation, interest rates, COVID policy, drug regulation and the speed of developing life-saving drugs, local policies on crime and drug use, or other issues? What primary data did you use to decide whether these outcomes were good or bad, avoidable or not? This may be a litmus test for many other aspects of one's viewpoints.

Short term policy & what should be regulated?

There are some areas that seem reasonable to regulate for AI in the short run - but these should be highly targeted to pre-existing policies. For example:
  • Export controls. There are some things that make sense to regulate for AI now - for example, the export of advanced semiconductor manufacturing technology has been, and should continue to be, subject to export controls.
  • Incident reporting. Open Phil has a good position, excerpted here: “…similar to incident reporting requirements in other industries (e.g. aviation) or to data breach reporting requirements, and similar to some vulnerability disclosure regimes. Many incidents wouldn’t need to be reported publicly, but could be kept confidential within a regulatory body. The goal of this is to allow regulators and perhaps others to track certain kinds of harms and close-calls from AI systems, to keep track of where the dangers are and rapidly evolve mitigation mechanisms.”
  • Other areas. There may be other areas that make sense to regulate over time. Many of the areas people express strong concern about (misinformation, bias, etc.) have long-standing legal and regulatory structures in place already.
It should be noted that while I do think the short term risks of AI are overstated, the long term risks may be understated. I think we should regulate AI at the moment in time when we think it represents an actual existential risk, versus “more of the same” for what humanity has done in the past with or without technologies.

The first “AI election”

The 2024 presidential election may end up being our first “AI election” - in that many new generative AI technologies will likely be used at mass scale for the first time in an election. Examples of use may include things like:
  • Personalized text to speech for large scale, personalized robo-dialing in the natural sounding voice of the candidates
  • Large scale content generation and farming for social media and other targeting
  • Deep fakes of video or other content
Whether AI actually ends up impacting the election or not, it may still be blamed for whatever outcome happens, similar to social networks being blamed for the 2016 outcome. The election may be used by groups as an excuse to regulate AI.

To sum it all up…​

Regulation tends to squeeze much of the innovation and optimism out of an industry. It is no mistake that many companies stop innovating when the baleful eye of a regulator settles upon them. Examples from healthcare and biopharma, nuclear, and crypto all suggest regulation can stop or slow innovation significantly, cause offshoring of industries, and derail the positive purpose of an industry. Given the huge potential of AI for healthcare, education, and other basic areas of global equity, it is better to hold off on regulating AI for now.
 

mastermind

Too little of the dialogue today focuses on the positive potential of AI (I cover the risks of AI in another post). AI is an incredibly powerful tool to impact global equity for some of the biggest issues facing humanity. On the healthcare front, models such as Med-PaLM 2 from Google now outperform medical experts to the point where training the model using physician experts may make the model worse.

Imagine having a medical expert available via any phone or device anywhere in the world - one to which you can upload images and symptoms, follow up, and get ongoing diagnosis and care. This technology is available today and just needs to be properly packaged and delivered in a thoughtful way.
FOH :camby:

Citations Needed had a podcast on this topic.

 

GnauzBookOfRhymes

The correct model is how the international community came up with rules dealing with what type of research/experiments would be permitted once we made serious advances in understanding genes/microbiology etc etc. You HAVE to create the framework before the explosion. Otherwise it is too late.
 

Professor Emeritus

"Elad Gil is an entrepreneur, operating executive, and investor or advisor to private companies such as Airbnb, Coinbase, Checkr, Gusto, Instacart, OpenDoor, Pinterest, Square, Stripe, Wish. He is cofounder and chairman at Color Genomics. He was the VP of Corporate Strategy at Twitter, where he also ran product (Geo, Search) and operational teams (M&A and Corporate Development). Elad joined Twitter via the acquisition of MixerLabs, where he was co-founder and CEO. Elad spent many years at Google, where he started the mobile team-involved in all aspects of getting it up and running. He was involved with three acquisitions (including the Android team) and was the original product manager for Google Mobile Maps. Prior to Google, Elad had product management and market-seeding roles at a number of Silicon Valley companies. Elad received his Ph.D. from the Massachusetts Institute of Technology and has degrees in Mathematics and Biology from the University of California, San Diego.



Not sure this is a guy I would take seriously to give unbiased insight on corporate regulation.
 

bnew

FOH :camby:

Citations Needed had a podcast on this topic.



haven't finished the episode yet but i'm gonna check out the book they mentioned.

Race for profit: how banks and the real estate industry undermined Black homeownership​

https://ufile.io/f/rn0gi
 

Adeptus Astartes

fukk some "regulation", we need a Butlerian Jihad.

That won't happen, so I agree that America shouldn't jump the gun and push the developers abroad where there is no control at all.
 

GnauzBookOfRhymes

even if it's regulated it wont stop china and russia right ?

it's not stopping any independent developer anywhere either.

fukk some "regulation", we need a Butlerian Jihad.

That won't happen, so I agree that America shouldn't jump the gun and push the developers abroad where there is no control at all.

AI related threats don't discriminate. Russia/China and other nations are just as vulnerable (if not moreso) because their institutions (social contract) are fundamentally weaker.

There are UN treaties signed by most nations on earth that deal with limits on genetic engineering, biosafety, biological/chemical weapons, human cloning. And for the most part they've worked. The technology to do all kinds of crazy shyt in these areas has been widely available since the late 90s (and wayyyy before that in case of bio weapons) but governments have generally steered clear.
 

bnew


AI Alliance will open-source AI models; Meta, IBM, Intel, NASA on board​

Ben Lovejoy | Dec 5 2023 - 4:11 am PT



A new industry group known as the AI Alliance believes that artificial intelligence models should be open-source, in contrast to the proprietary models developed by OpenAI and Google.

Meta, IBM, Intel, and NASA are just some of the organizations to sign up, believing that the approach offers three key benefits …



The AI Alliance​

The really big breakthroughs in generative AI have so far come from the likes of OpenAI and Google, who keep their models a closely-guarded secret.

But there are some companies and organizations who believe that big AI projects should be open-source. More than 40 of them have signed up to the AI Alliance, reports Bloomberg.

Meta and IBM are joining more than 40 companies and organizations to create an industry group dedicated to open source artificial intelligence work, aiming to share technology and reduce risks.

The coalition, called the AI Alliance, will focus on the responsible development of AI technology, including safety and security tools, according to a statement Tuesday. The group also will look to increase the number of open source AI models — rather than the proprietary systems favored by some companies — develop new hardware and team up with academic researchers.


Three key benefits of open-source models​

The alliance says that working openly together in this way offers three benefits.

First, speed. Allowing models to be shared, so that researchers can build on the work of others, will enable more rapid progress.

Second, safety. Allowing independent peer groups to examine code created by others is the best way to identify potential flaws and risks. This is the same argument for open-sourcing security protocols, like encryption systems.

Third, equal opportunity. By providing anyone with access to the tools being built, it creates a level playing field in which solo researchers and startups have the same opportunities as well-funded companies.



Mission statement​

The AI Alliance describes its mission as:

Accelerating and disseminating open innovation across the AI technology landscape to improve foundational capabilities, safety, security and trust in AI, and to responsibly maximize benefits to people and society everywhere.

The AI Alliance brings together a critical mass of compute, data, tools, and talent to accelerate open innovation in AI.

The AI Alliance seeks to:

Build and support open technologies across software, models and tools.

Enable developers and scientists to understand, experiment, and adopt open technologies.

Advocate for open innovation with organizational and societal leaders, policy and regulatory bodies, and the public.

IBM and Meta have taken the lead in establishing the body. IBM said that the formation of the group is “a pivotal moment in defining the future of AI,” while Meta said that it means “more people can access the benefits, build innovative products and work on safety.”

Other members are listed as:



  • Agency for Science, Technology and Research (A*STAR)
  • Aitomatic
  • AMD
  • Anyscale
  • Cerebras
  • CERN
  • Cleveland Clinic
  • Cornell University
  • Dartmouth
  • Dell Technologies
  • Ecole Polytechnique Federale de Lausanne
  • ETH Zurich
  • Fast.ai
  • Fenrir, Inc.
  • FPT Software
  • Hebrew University of Jerusalem
  • Hugging Face
  • IBM
  • Abdus Salam International Centre for Theoretical Physics (ICTP)
  • Imperial College London
  • Indian Institute of Technology Bombay
  • Institute for Computer Science, Artificial Intelligence
  • Intel
  • Keio University
  • LangChain
  • LlamaIndex
  • Linux Foundation
  • Mass Open Cloud Alliance, operated by Boston University and Harvard
  • Meta
  • Mohamed bin Zayed University of Artificial Intelligence
  • MLCommons
  • National Aeronautics and Space Administration
  • National Science Foundation
  • New York University
  • NumFOCUS
  • OpenTeams
  • Oracle
  • Partnership on AI
  • Quansight
  • Red Hat
  • Rensselaer Polytechnic Institute
  • Roadzen
  • Sakana AI
  • SB Intuitions
  • ServiceNow
  • Silo AI
  • Simons Foundation
  • Sony Group
  • Stability AI
  • Together AI
  • TU Munich
  • UC Berkeley College of Computing, Data Science, and Society
  • University of Illinois Urbana-Champaign
  • The University of Notre Dame
  • The University of Texas at Austin
  • The University of Tokyo
  • Yale University

Apple is reportedly testing its own generative AI chatbot internally, but is not expected to bring anything to market in the next year or so.

 

Professor Emeritus

AI related threats don't discriminate. Russia/China and other nations are just as vulnerable (if not moreso) because their institutions (social contract) are fundamentally weaker.

There are UN treaties signed by most nations on earth that deal with limits on genetic engineering, biosafety, biological/chemical weapons, human cloning. And for the most part they've worked. The technology to do all kinds of crazy shyt in these areas has been widely available since the late 90s (and wayyyy before that in case of bio weapons) but governments have generally steered clear.


Wouldn't widespread AI make it far easier for non-state actors to do fukked up shyt with genetic engineering, biological weapons, etc.?
 

bnew

Wouldn't widespread AI make it far easier for non-state actors to do fukked up shyt with genetic engineering, biological weapons, etc.?

yes, but taking into account the deterrence model and the prisoner's dilemma, non-state actors might not engage in such behavior unless their goal is to wipe out humanity.

if a layman or someone slightly above a layman can use A.I to cause great harm, then they can rightfully assume any other person can do the same, and their safety and the safety of the people they care about isn't assured.
 

GnauzBookOfRhymes

Wouldn't widespread AI make it far easier for non-state actors to do fukked up shyt with genetic engineering, biological weapons, etc.?

Yes. Which is why you need to get the state actors on the same page.

Nonstate actors could probably get their hands on radioactive materials. But we haven't had a dirty bomb go off bc everyone knows that shyt wouldn't be tolerated.
 

bnew


EU agrees ‘historic’ deal with world’s first laws to regulate AI​


Agreement between European Parliament and member states will govern artificial intelligence, social media and search engines


Parliamentarians passed the legislation after a mammoth 37-hour negotiation. Photograph: Jean-François Badias/AP

Lisa O'Carroll in Brussels
@lisaocarroll
Fri 8 Dec 2023 19.48 EST

The world’s first comprehensive laws to regulate artificial intelligence have been agreed in a landmark deal after a marathon 37-hour negotiation between the European Parliament and EU member states.

The agreement was described as “historic” by Thierry Breton, the European Commissioner responsible for a suite of laws in Europe that will also govern social media and search engines, covering giants such as X, TikTok and Google.



Breton said 100 people had been in a room for almost three days to seal the deal. He said it was “worth the few hours of sleep” to make the “historic” deal.

Carme Artigas, Spain’s secretary of state for AI, who facilitated the negotiations, said France and Germany supported the text, amid reports that tech companies in those countries were fighting for a lighter touch approach to foster innovation among small companies.

The agreement puts the EU ahead of the US, China and the UK in the race to regulate artificial intelligence and protect the public from risks that include potential threat to life that many fear the rapidly developing technology carries.

Officials provided few details on what exactly will make it into the eventual law, which would not take effect until 2025 at the earliest.

The political agreement between the European Parliament and EU member states on new laws to regulate AI was a hard-fought battle, with clashes over foundation models designed for general rather than specific purposes.

But there were also protracted negotiations over AI-driven surveillance, which could be used by the police, employers or retailers to film members of the public in real time and recognise emotional stress.

The European Parliament secured a ban on use of real-time surveillance and biometric technologies including emotional recognition but with three exceptions, according to Breton.

It would mean police would be able to use the invasive technologies only in the event of an unexpected threat of a terrorist attack, the need to search for victims and in the prosecution of serious crime.

MEP Brando Benifei, who co-led the parliament’s negotiating team with Dragoș Tudorache, the Romanian MEP who has led the European Parliament’s four-year battle to regulate AI, said they also secured a guarantee that “independent authorities” would have to give permission for “predictive policing”, to guard against abuse by police and to uphold the presumption of innocence in crime.

“We had one objective to deliver a legislation that would ensure that the ecosystem of AI in Europe will develop with a human-centric approach respecting fundamental rights, human values, building trust, building consciousness of how we can get the best out of this AI revolution that is happening before our eyes,” he told reporters at a press conference held after midnight in Brussels.

Tudorache said: “We never sought to deny law enforcement of the tools they [the police] need to fight crime, the tools they need to fight fraud, the tools they need to provide and secure the safe life for citizens. But we did want – and what we did achieve – is a ban on AI technology that will determine or predetermine who might commit a crime.”

The foundation of the agreement is a risk-based tiered system where the highest level of regulation applies to those machines that pose the highest risk to health, safety and human rights.

In the original text it was envisaged this would include all systems with more than 10,000 business users.

The highest risk category is now defined by the amount of computation needed to train the machine, measured in floating point operations (Flops).

Sources say there is only one existing model, GPT-4, that would fall into this new definition.
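
For readers wondering what a compute threshold like this looks like in practice, here is a rough back-of-the-envelope sketch (my own illustration, not from the article). It assumes the commonly used approximation that training a dense transformer takes roughly 6 × parameters × training tokens floating-point operations, and the roughly 10^25 FLOP figure widely reported as the Act's top-tier threshold; the model size and token count below are hypothetical.

# Back-of-the-envelope sketch (illustration only; the 6*N*D rule of thumb and the
# ~1e25 FLOP threshold are assumptions described above, not figures from this article).

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Rough estimate of total training compute, in floating-point operations."""
    return 6 * parameters * tokens

# Hypothetical example: a 1-trillion-parameter model trained on 10 trillion tokens.
flops = estimated_training_flops(1e12, 10e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")       # ~6.0e+25
print("Exceeds the reported ~1e25 threshold:", flops > 1e25)  # True

The point of the sketch is that the tier is triggered by cumulative training compute, not by anything about what the model does once deployed.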

The lower tier of regulation still places major obligations on AI services, including basic rules about disclosure of the data used to teach the machine to do anything from writing a newspaper article to diagnosing cancer.

Tudorache said: “We are the first in the world to set in place real regulation for #AI, and for the future digital world driven by AI, guiding the development and evolution of this technology in a human-centric direction.”

Previously he has said that the EU was determined not to make the mistakes of the past, when tech giants such as Facebook were allowed to grow into multi-billion dollar corporations with no obligation to regulate content on their platforms including interference in elections, child sex abuse and hate speech.

Strong and comprehensive regulation from the EU could “set a powerful example for many governments considering regulation,” said Anu Bradford, a Columbia Law School professor who is an expert on the EU and digital regulation. Other countries “may not copy every provision but will likely emulate many aspects of it”.

AI companies who will have to obey the EU’s rules will also likely extend some of those obligations to markets outside the continent, Bradford told the AP. “After all, it is not efficient to re-train separate models for different markets,” she said.
 