REVEALED: Open A.I. Staff Warn "The progress made on Project Q* has the potential to endanger humanity" (REUTERS)

null

If I'm asking (because I don't know), why would you expect me to hold a position? I'm simply asking you to bolster your claims with proof; this is not an unreasonable ask.

yeah, but you know enough to say that the gradle file looks ok "to you", and you want to argue to prove "me" wrong rather than to prove my point about chatGPT wrong.

an example of that: focusing on the comments even though they prove nothing. another example: ignoring my other examples.

if you know gradle files well enough to look and see that one is "ok", then you know enough to take the 5 mins to run it.

:hubie:

i just re-ran both versions on multiple servers under multiple gradle project structures with 2 plugin versions and neither works as produced by the model.
 

bnew

by refining the query to this:

Q: "can you create a full example build.gradle file to build and package a docker image using the palantir docker plugin. the code build and copy into docker will be done in the dockerfile not in build.gradle."

I got it to exclude the java build (which it should not have added in the first place).

resulting in the better-looking version:

Code:
plugins {
    id 'com.palantir.docker-run' version '0.26.1'
}

group 'com.example'
version '1.0-SNAPSHOT'

repositories {
    jcenter()
}

dockerRun {
    name 'example-image'
    image project.group + '/' + project.name
    useComposeFiles(['docker/docker-compose.yml']) // if you have a docker-compose file
    daemonize true
}

// Define the Docker task as a dependency of the build task
build.dependsOn dockerRun

i will run this version on a different server.

i'm not sure about that group, as the code built in docker will be both c++ and java and neither should be visible at this level (totally different repos).

it was failing due to the group value earlier, but that was on an older, janky machine where the global gradle install is not in a good state, so I will test it on another server. it should make no difference since I'm running ./gradlew, but better to be certain.
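
for reference, this is roughly what i'd expect a working build file to look like for the "build everything inside the Dockerfile" case. it's only a sketch, not something verified here: as far as i can tell from the plugin docs, 'com.palantir.docker-run' only starts/stops containers, so this assumes the image-building plugin 'com.palantir.docker' instead, uses the 0.35.x version mentioned just below, and the name/path values are placeholders.

Code:
plugins {
    // 'com.palantir.docker' builds and tags images; 'docker-run' only runs containers
    id 'com.palantir.docker' version '0.35.0'
}

group = 'com.example'      // placeholder; may not even be needed at this level
version = '1.0-SNAPSHOT'   // placeholder

docker {
    // image name/tag to produce
    name "${project.name}:${project.version}"
    // all compiling/copying happens inside this Dockerfile, not in gradle
    dockerfile file('src/main/docker/Dockerfile')
    // files(...) could stage extra files into the build context if the Dockerfile needed them
}

running ./gradlew docker should then just drive a plain `docker build` against that Dockerfile.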

also: out of the box for the previous version (with java)

Screenshot-2023-12-15-at-01-35-21.png


palantir is on version 0.35/0.36 now, so it's pulling 0.26 from an example it has seen somewhere.

Screenshot-2023-12-15-at-01-16-42.png



i've come across version issues before with python, since i have 3 different versions installed but one version has more libraries than the others.
I tried step-back prompting to get the latest info to add to the original prompt. you could probably save some time by creating a concise custom instruction that lists your build environment, including the version/build.

whats the latest version of the software mentioned below:
can you create a full example build.gradle file to build and package a docker image using the palantir docker plugin. the code build and copy into docker will be done in the dockerfile not in build.gradle.

The latest version of the Palantir Docker Plugin is 0.35.0, which was created on 03 April 2023.

Here’s an example of a build.gradle file that uses the Palantir Docker Plugin to build and package a Docker image:



In this example, the Dockerfile is located in the src/main/docker/ directory. The buildArgs line specifies a build argument JAR_FILE which can be used in your Dockerfile to copy the built JAR into the Docker image. The actual build and copy operations are performed in the Dockerfile, not in the build.gradle file. Please replace 'src/main/docker/Dockerfile' with the actual path to your Dockerfile and "${project.name}-${project.version}.jar" with the actual name of your JAR file.

Please note that you need to run gradle build docker to build the JAR and the Docker image. The docker task depends on the build task, so the JAR will be built first, then the Docker image will be built.

This is just a basic example. Depending on your project structure and requirements, you might need to adjust this build.gradle file. For more information, please refer to the Palantir Docker Plugin documentation.
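
the actual snippet only came through as a screenshot, so here is a sketch reconstructed purely from the description above (assuming the docker { } extension of the 'com.palantir.docker' plugin; the version, repository and file names are placeholders, not the model's actual output):

Code:
plugins {
    id 'java'
    id 'com.palantir.docker' version '0.35.0'
}

group = 'com.example'
version = '1.0-SNAPSHOT'

repositories {
    mavenCentral()   // jcenter() is sunset; only needed if dependencies are added
}

docker {
    name "${project.group}/${project.name}:${project.version}"
    // Dockerfile lives under src/main/docker/ and does the actual COPY of the jar
    dockerfile file('src/main/docker/Dockerfile')
    // stage the built jar into the docker build context
    files tasks.jar.outputs
    // exposed to the Dockerfile as ARG JAR_FILE
    buildArgs([JAR_FILE: project.name + '-' + project.version + '.jar'])
}

// "the docker task depends on the build task, so the JAR will be built first"
tasks.named('docker') { dependsOn tasks.named('build') }

with that in place, ./gradlew build docker (or just ./gradlew docker, given the dependsOn) would line up with the "run gradle build docker" instruction above.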


AVMSkH8.png
 

null

Proving a negative is never a great idea.

There could be a million different variables on why that code might not work that has nothing to do with ChatGPT. Environment incorrectly setup, or the copy and paste process fukking up a number of possible things.

Demonstrating unaltered working code makes more sense.

it's not even that IMO.

i could dump the screen output for all of the runs but if someone wants to argue they could always find something else in there to argue about.

if both failed on his systems that would be the end.

given that he is familiar enough with gradle to know whether build files look "ok", and given that we are talking about fewer than 50 lines of scripted build code in each case, I think the best approach to bring gradle-unlettered but all-knowing, unfettered arguments to a close would be for him to check them himself.

i even sent him the commands to do so, and it just takes a few minutes.

if you know enough to hold that a script "looks ok" then you know enough to run it.

and on top of that, fewer assertions means less focus and more time spent .. without progress ..
 

null

@
(quoting the step-back prompting post above in full)


no assertion.

i already generated and checked that on two servers and it does not work.

i just re-ran both versions on multiple servers under multiple gradle project structures with 2 plugin versions and neither works as produced by the model.

see: REVEALED: Open A.I. Staff Warn "The progress made on Project Q* has the potential to endanger humanity" (REUTERS)

the next time you post me a mile-long text/image dump without making an assertion i will just ignore it.
 

null

school's in.

if i have time i'll set up zoom + a fresh server and we can run these together.

if i don't do it in the next two weeks it will not happen.

i'm travelling to the USA sometime in the next few days, so I will most likely do it once i get there.

:hubie:

this is the placeholder Dockerfile:

# Dockerfile

FROM ubuntu:22.04
LABEL maintainer="colibreh"

# located at $projectDir/src/main/docker/Dockerfile
 

bnew


OpenAI Says Board Can Overrule CEO on Safety of New AI Releases

The arrangement was mentioned in a set of guidelines released Monday explaining how the ChatGPT-maker plans to deal with AI risks.

Sam Altman, chief executive officer of OpenAI, at the Hope Global Forums annual meeting in Atlanta, Georgia, US, on Monday, Dec. 11, 2023.
Photographer: Dustin Chambers/Bloomberg

By Rachel Metz
December 18, 2023 at 1:03 PM EST


OpenAI said its board can choose to hold back the release of an AI model even if the company’s leadership has deemed it safe, another sign of the artificial intelligence startup empowering its directors to bolster safeguards for developing the cutting-edge technology.

The arrangement was spelled out in a set of guidelines released Monday explaining how the ChatGPT-maker plans to deal with what it may deem to be extreme risks from its most powerful AI systems. The release of the guidelines follows a period of turmoil at OpenAI after Chief Executive Officer Sam Altman was briefly ousted by the board, putting a spotlight on the balance of power between directors and the company’s c-suite.

OpenAI’s recently announced “preparedness” team said it will continuously evaluate its AI systems to figure out how they fare across four different categories — including potential cybersecurity issues as well as chemical, nuclear and biological threats — and work to lessen any hazards the technology appears to pose. Specifically, the company is monitoring for what it calls “catastrophic” risks, which it defines in the guidelines as “any risk which could result in hundreds of billions of dollars in economic damage or lead to the severe harm or death of many individuals.”

Aleksander Madry, who is leading the preparedness group and is on leave from a faculty position at the Massachusetts Institute of Technology, told Bloomberg News his team will send a monthly report to a new internal safety advisory group. That group will then analyze Madry’s team’s work and send recommendations to Altman and the company’s board, which was overhauled after ousting the CEO. Altman and his leadership team can make a decision about whether to release a new AI system based on these reports, but the board has the right to reverse that decision, according to the document.

OpenAI announced the formation of the “preparedness” team in October, making it one of three separate groups overseeing AI safety at the startup. There’s also “safety systems,” which looks at current products such as GPT-4, and “superalignment,” which focuses on extremely powerful — and hypothetical — AI systems that may exist in the future.

Madry said his team will repeatedly evaluate OpenAI’s most advanced, unreleased AI models, rating them “low,” “medium,” “high,” or “critical” for different types of perceived risks. The team will also make changes in hopes of reducing potential dangers they spot in AI and measure their effectiveness. OpenAI will only roll out models that are rated “medium” or “low,” according to the new guidelines.

“AI is not something that just happens to us that might be good or bad,” Madry said. “It’s something we’re shaping.”

Madry said he hopes other companies will use OpenAI’s guidelines to evaluate potential risks from their AI models as well. The guidelines, he said, are a formalization of many processes OpenAI followed previously when evaluating AI technology it has already released. He and his team came up with the details over the past couple months, he said, and got feedback from others within OpenAI.
 

null

(re-posting the zoom + fresh server offer and the placeholder Dockerfile from the post above)

UPDATE:

in the end i did this in shell in combination with Dockerfile (linux) commands.

the palantir repo is "defunct".

Screenshot-2023-12-20-at-23-11-16.png



i didn't like the look of bmuschko so just did it by hand.

the plugins are more important for composition and automating deployment into a docker image repo.
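
the same "by hand" approach could also be wired into gradle with a plain Exec task. this is just a sketch of that idea (what actually ran here was plain shell + Dockerfile commands); it assumes the docker CLI is on the PATH, and the Dockerfile path and image tag are placeholders:

Code:
// sketch: drive a plain `docker build` from gradle, no docker plugin involved
tasks.register('dockerBuild', Exec) {
    group = 'docker'
    description = 'Builds the image straight from the Dockerfile'
    workingDir projectDir
    commandLine 'docker', 'build',
            '-f', 'src/main/docker/Dockerfile',            // placeholder path
            '-t', "${project.name}:${project.version}",    // placeholder tag
            '.'
}

then ./gradlew dockerBuild does the same thing as typing the docker command by hand.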

if anyone wants to do the interactive test next week (from above) let me know.
 

bnew

AI research

Jan 5, 2024

A survey of 2,778 researchers shows how fragmented the AI science community is

DALL-E 3 prompted by THE DECODER

Matthias Bastian

Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.


The "2023 Expert Survey on Progress in AI" shows that the scientific community has no consensus on the risks and opportunities of AI, but everything is moving faster than once thought.

On the much-discussed question of whether the development of AI needs a pause, the survey reveals an undecided picture: about 35% support either a slower or a faster development compared to the current pace.

However, at 15.6%, the "much faster" group is three times larger than the "much slower" group. 27% say the current pace is appropriate.

Image: Grace et al.


AI development will continue to accelerate

The survey found that the pace of AI development will continue to accelerate. The overall forecast revealed a probability of at least 50 percent that AI systems will reach several milestones by 2028, many significantly earlier than previously thought.

These milestones include autonomously creating a payment processing website from scratch, creating a song indistinguishable from a new song by a well-known musician, and autonomously downloading and refining a comprehensive language model.

A fictional New York Times bestseller is expected to be written by AI around 2030. In the last survey, this estimate was around 2038.

Image: Grace et al.

Answers to the questions about "high-level machine intelligence" (HLMI) and "full automation of labor" (FAOL) also varied widely in some cases, but the overall forecast for both questions points to a much earlier occurrence than previously expected.

If scientific progress continues unabated, the probability that machines will outperform humans in all possible tasks without outside help is estimated at 10 percent by 2027 and 50 percent by 2047. This estimate is 13 years ahead of a similar survey conducted just one year earlier.

Image: Grace et al.

The likelihood of all human occupations being fully automated was estimated at 10 percent by 2037 and 50 percent by 2116 (compared to 2164 in the 2022 survey).


Image: Grace et al.

Existential fears also exist in AI science, but they are becoming more moderate

High hopes and gloomy fears often lie close together among the participants. More than half of the respondents (52%) expect positive or even very positive (23%) effects of AI on humanity.

In contrast, 27 percent of respondents see more negative effects of human-like AI. Nine percent expect extremely negative effects, including the extinction of humanity. Compared to last year's survey, the extreme positions have lost some ground.

Image: Grace et al.

While 68.3 percent of respondents believe that good consequences of a possible superhuman AI are more likely than bad consequences, 48 percent of these net optimists give a probability of at least 5 percent for extremely bad consequences, such as the extinction of humanity. Conversely, 59 percent of net pessimists gave a probability of 5 percent or higher for extremely good outcomes.

Image: Grace et al.

In terms of specific risks, disinformation and deepfakes are considered particularly threatening. This goes hand in hand with mass manipulation and AI-assisted population control by authoritarian rulers. By comparison, disruptions to the labor market are deemed less risky.

Image: Grace et al.

There was broad consensus (70 percent) that research into mitigating the potential risks of AI systems should be a higher priority.

The survey is based on responses from 2,778 attendees at six leading AI conferences. It was conducted in October 2023 and is the largest of its kind, according to the initiators. Compared to last year, more than three times as many attendees were surveyed across a broader range of AI research areas.




Summary

  • The "2023 Expert Survey on Progress in AI" shows that AI systems are expected to reach several milestones by 2028, many of them much sooner than previously thought, such as autonomously creating a website or generating a song in the style of a popular artist.
  • The probability of machines outperforming humans in all possible tasks without outside help is estimated at 10% by 2027 and 50% by 2047, while the full automation of all human occupations is estimated at 10% by 2037 and 50% by 2116.
  • Among the individual risks, disinformation and deepfakes are seen as particularly threatening. 70% of respondents agree that research into minimizing the potential risks of AI systems should be given higher priority.
 

bnew

@aXiom




Lk7fwHs.png

Blending Is All You Need

Based on the last month of LLM research papers, it's obvious to me that we are on the verge of seeing some incredible innovation around small language models.

Llama 7B and Mistral 7B made it clear to me that we can get more out of these small language models on tasks like coding and common sense reasoning.

Phi-2 (2.7B) made it even more clear that you can push these smaller models further with curated high-quality data.

What's next? More curated and synthetic data? Innovation around Mixture of Experts and improved architectures? Combining models? Better post-training approaches? Better prompt engineering techniques? Better model augmentation?

I mean, there is just a ton to explore here, as demonstrated in this new paper that integrates models of moderate size (6B/13B) which can compete with or surpass ChatGPT performance.





[Submitted on 4 Jan 2024 (v1), last revised 9 Jan 2024 (this version, v2)]

Blending Is All You Need: Cheaper, Better Alternative to Trillion-Parameters LLM

Xiaoding Lu, Adian Liusie, Vyas Raina, Yuwen Zhang, William Beauchamp
In conversational AI research, there's a noticeable trend towards developing models with a larger number of parameters, exemplified by models like ChatGPT. While these expansive models tend to generate increasingly better chat responses, they demand significant computational resources and memory. This study explores a pertinent question: Can a combination of smaller models collaboratively achieve comparable or enhanced performance relative to a singular large model? We introduce an approach termed "blending", a straightforward yet effective method of integrating multiple chat AIs. Our empirical evidence suggests that when specific smaller models are synergistically blended, they can potentially outperform or match the capabilities of much larger counterparts. For instance, integrating just three models of moderate size (6B/13B parameters) can rival or even surpass the performance metrics of a substantially larger model like ChatGPT (175B+ parameters). This hypothesis is rigorously tested using A/B testing methodologies with a large user base on the Chai research platform over a span of thirty days. The findings underscore the potential of the "blending" strategy as a viable approach for enhancing chat AI efficacy without a corresponding surge in computational demands.
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2401.02994 [cs.CL] (or arXiv:2401.02994v2 [cs.CL] for this version)

Submission history

From: Xiaoding Lu
[v1] Thu, 4 Jan 2024 07:45:49 UTC (8,622 KB)
[v2] Tue, 9 Jan 2024 08:15:42 UTC (8,621 KB)

 

bnew



Mark Zuckerberg’s new goal is creating artificial general intelligence

And he wants Meta to open source it. Eventually. Maybe.


By Alex Heath, a deputy editor and author of the Command Line newsletter. He’s covered the tech industry for over a decade at The Information and other outlets.

Jan 18, 2024, 12:59 PM EST



246967_Meta_Zuckerberg_Interview_final4_CVirginia.jpg

Cath Virginia / The Verge | Photos by Getty Images

Fueling the generative AI craze is a belief that the tech industry is on a path to achieving superhuman, god-like intelligence.

OpenAI’s stated mission is to create this artificial general intelligence, or AGI. Demis Hassabis, the leader of Google’s AI efforts, has the same goal.

Now, Meta CEO Mark Zuckerberg is entering the race. While he doesn’t have a timeline for when AGI will be reached, or even an exact definition for it, he wants to build it. At the same time, he’s shaking things up by moving Meta’s AI research group, FAIR, to the same part of the company as the team building generative AI products across Meta’s apps. The goal is for Meta’s AI breakthroughs to more directly reach its billions of users.

“We’ve come to this view that, in order to build the products that we want to build, we need to build for general intelligence,” Zuckerberg tells me in an exclusive interview. “I think that’s important to convey because a lot of the best researchers want to work on the more ambitious problems.”

Here, Zuckerberg is saying the quiet part aloud. The battle for AI talent has never been more fierce, with every company in the space vying for an extremely small pool of researchers and engineers. Those with the needed expertise can command eye-popping compensation packages to the tune of over $1 million a year. CEOs like Zuckerberg are routinely pulled in to try to win over a key recruit or keep a researcher from defecting to a competitor.

“We’re used to there being pretty intense talent wars,” he says. “But there are different dynamics here with multiple companies going for the same profile, [and] a lot of VCs and folks throwing money at different projects, making it easy for people to start different things externally.”

After talent, the scarcest resource in the AI field is the computing power needed to train and run large models. On this topic, Zuckerberg is ready to flex. He tells me that, by the end of this year, Meta will own more than 340,000 of Nvidia’s H100 GPUs — the industry’s chip of choice for building generative AI.


External research has pegged Meta’s H100 shipments for 2023 at 150,000, a number that is tied only with Microsoft’s shipments and at least three times larger than everyone else’s. When its Nvidia A100s and other AI chips are accounted for, Meta will have a stockpile of almost 600,000 GPUs by the end of 2024, according to Zuckerberg.

“We have built up the capacity to do this at a scale that may be larger than any other individual company,” he says. “I think a lot of people may not appreciate that.”

The realization

No one working on AI, including Zuckerberg, seems to have a clear definition for AGI or an idea of when it will arrive.

“I don’t have a one-sentence, pithy definition,” he tells me. “You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future super intelligence. But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition.”


He sees its eventual arrival as being a gradual process, rather than a single moment. “I’m not actually that sure that some specific threshold will feel that profound.”

As Zuckerberg explains it, Meta’s new, broader focus on AGI was influenced by the release of Llama 2, its latest large language model, last year. The company didn’t think that the ability for it to generate code made sense for how people would use an LLM in Meta’s apps. But it’s still an important skill to develop for building smarter AI, so Meta built it anyway.

“One hypothesis was that coding isn’t that important because it’s not like a lot of people are going to ask coding questions in WhatsApp,” he says. “It turns out that coding is actually really important structurally for having the LLMs be able to understand the rigor and hierarchical structure of knowledge, and just generally have more of an intuitive sense of logic.”


Meta is training Llama 3 now, and it will have code-generating capabilities, he says. Like Google’s new Gemini model, another focus is on more advanced reasoning and planning abilities.

“Llama 2 wasn’t an industry-leading model, but it was the best open-source model,” he says. “With Llama 3 and beyond, our ambition is to build things that are at the state of the art and eventually the leading models in the industry.”

Open versus closed

The question of who gets to eventually control AGI is a hotly debated one, as the near implosion of OpenAI recently showed the world.

Zuckerberg wields total power at Meta thanks to his voting control over the company’s stock. That puts him in a uniquely powerful position that could be dangerously amplified if AGI is ever achieved. His answer is the playbook that Meta has followed so far for Llama, which can — at least for most use cases — be considered open source.

“I tend to think that one of the bigger challenges here will be that if you build something that’s really valuable, then it ends up getting very concentrated,” Zuckerberg says. “Whereas, if you make it more open, then that addresses a large class of issues that might come about from unequal access to opportunity and value. So that’s a big part of the whole open-source vision.”

Without naming names, he contrasts Meta’s approach to that of OpenAI, which began with the intention of open sourcing its models but has become increasingly less transparent. “There were all these companies that used to be open, used to publish all their work, and used to talk about how they were going to open source all their work. I think you see the dynamic of people just realizing, ‘Hey, this is going to be a really valuable thing, let’s not share it.’”

While Sam Altman and others espouse the safety benefits of a more closed approach to AI development, Zuckerberg sees a shrewd business play. Meanwhile, the models that have been deployed so far have yet to cause catastrophic damage, he argues.

“The biggest companies that started off with the biggest leads are also, in a lot of cases, the ones calling the most for saying you need to put in place all these guardrails on how everyone else builds AI,” he tells me. “I’m sure some of them are legitimately concerned about safety, but it’s a hell of a thing how much it lines up with the strategy.”


Zuckerberg has his own motivations, of course. The end result of his open vision for AI is still a concentration of power, just in a different shape. Meta already has more users than almost any company on Earth and a wildly profitable social media business. AI features can arguably make his platforms even stickier and more useful. And if Meta can effectively standardize the development of AI by releasing its models openly, its influence over the ecosystem will only grow.

There’s another wrinkle: If AGI is ever achieved at Meta, the call to open source it or not is ultimately Zuckerberg’s. He’s not ready to commit either way.

“For as long as it makes sense and is the safe and responsible thing to do, then I think we will generally want to lean towards open source,” he says. “Obviously, you don’t want to be locked into doing something because you said you would.”

Don’t call it a pivot

In the broader context of Meta, the timing of Zuckerberg’s new AGI push is a bit awkward.

It has been only two years since he changed the company name to focus on the metaverse. Meta’s latest smart glasses with Ray-Ban are showing early traction, but full-fledged AR glasses feel increasingly further out. Apple, meanwhile, has recently validated his bet on headsets with the launch of the Vision Pro, even though VR is still a niche industry.

Zuckerberg, of course, disagrees with the characterization of his focus on AI being a pivot.

“I don’t know how to more unequivocally state that we’re continuing to focus on Reality Labs and the metaverse,” he tells me, pointing to the fact that Meta is still spending north of $15 billion a year on the initiative. Its Ray-Ban smart glasses recently added a visual AI assistant that can identify objects and translate languages. He sees generative AI playing a more critical role in Meta’s hardware efforts going forward.


He sees a future in which virtual worlds are generated by AI and filled with AI characters that accompany real people. He says a new platform is coming this year to let anyone create their own AI characters and distribute them across Meta’s social apps. Perhaps, he suggests, these AIs will even be able to post their own content to the feeds of Facebook, Instagram, and Threads.

Meta is still a metaverse company. It’s the biggest social media company in the world. It’s now trying to build AGI. Zuckerberg frames all this around the overarching mission of “building the future of connection.”

To date, that connection has been mostly humans interacting with each other. Talking to Zuckerberg, it’s clear that, going forward, it’s increasingly going to be about humans talking to AIs, too. It’s obvious that he views this future as inevitable and exciting, whether the rest of us are ready for it or not.
 