Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368

Dave24

Superstar
Joined
Dec 11, 2015
Messages
17,687
Reputation
2,819
Daps
23,515


0:00 - Introduction

0:43 - GPT-4

23:23 - Open sourcing GPT-4

39:41 - Defining AGI

47:38 - AGI alignment

1:30:30 - How AGI may kill us

2:22:51 - Superintelligence

2:30:03 - Evolution

2:36:33 - Consciousness

2:47:04 - Aliens

2:52:35 - AGI Timeline

3:00:35 - Ego

3:06:27 - Advice for young people

3:11:45 - Mortality

3:13:26 - Love



According to Eliezer Yudkowsky, the AI being developed will destroy all of humanity.



@010101

@MMS

@Scustin Bieburr

@Piff Perkins

@Micky Mikey

@Rhakim

@The M.I.C.

@GnauzBookOfRhymes

@DrBanneker

@bnew

@greenvale

@Mook

@Starski

@Consigliere
 

Dave24

Superstar
Joined
Dec 11, 2015
Messages
17,687
Reputation
2,819
Daps
23,515


We wanted to do an episode on AI… and we went deep down the rabbit hole. As we went down, we discussed ChatGPT and the new generation of AI, digital superintelligence, the end of humanity, and if there’s anything we can do to survive. This conversation with Eliezer Yudkowsky sent us into an existential crisis, with the primary claim that we are on the cusp of developing AI that will destroy humanity. Be warned before diving into this episode, dear listener. Once you dive in, there’s no going back.


0:00 Intro

10:00 ChatGPT

16:30 AGI

21:00 More Efficient than You

24:45 Modeling Intelligence

32:50 AI Alignment

36:55 Benevolent AI

46:00 AI Goals

49:10 Consensus

55:45 God Mode and Aliens

1:03:15 Good Outcomes

1:08:00 Ryan’s Childhood Questions

1:18:00 Orders of Magnitude

1:23:15 Trying to Resist

1:30:45 Miri and Education

1:34:00 How Long Do We Have?

1:38:15 Bearish Hope

1:43:50 The End Goal
 

Dave24

Superstar
Joined
Dec 11, 2015
Messages
17,687
Reputation
2,819
Daps
23,515

Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
 

The M.I.C.

The King In The West 👑
Supporter
Joined
Dec 2, 2015
Messages
25,121
Reputation
14,993
Daps
106,700
Reppin
Charlotte - Washington D.C.
These folks have been working on building these "mirrors" for the past forty years, and the cause for concern among high-profile folks like Musk and company is that they know what is guiding that "sentience" and they don't know if it'll turn on THEM.

There are very few prohibitive controls to keep it in actual human hands once the proverbial ball gets rolling. But artificial intelligence is a cornerstone of our New World Order. China has the lead in DEPLOYMENT of these A.I. systems to keep its massive population under control, but they're so well integrated within its infrastructure that it's deceptively easy to become used to them.
 

Mook

We should all strive to be like Mr. Rogers.
Supporter
Joined
Apr 30, 2012
Messages
22,985
Reputation
2,574
Daps
58,841
Reppin
Raleigh




Ain't no way I'm listening to Lex Fridman. How will it destroy us?
 

GnauzBookOfRhymes

Superstar
Joined
May 7, 2012
Messages
12,969
Reputation
2,889
Daps
48,521
Reppin
NULL
the crazy thing i think about is whether superintelligence can emerge on its own. that's what I'm scared of. we're so wrapped up in this idea that something we create will get smarter and smarter through machine learning, better hardware, better algorithms, etc. but life itself emerged through some random process that turned inorganic chemicals into organic molecules/proteins/rna/dna, went from uni- to multicellular, and eventually produced humans with intelligence. in the same way, is it possible that what we've done with computers (the fact that they already do so much autonomously, all connected, constantly learning, oftentimes self-diagnosing/repairing, creating and destroying, in ways that are basically undetectable because there are only so many programmers) can lead to a non-human "intelligence" that eventually develops some version of consciousness?

my sense is that countries are going to treat this like they do with nuclear proliferation. this means some form of disclosure, willingness to allow others to "inspect" certain programs/initiatives and punishing sanctions/war for the ones that refuse. this is something i could see leading to the use of nuclear weapons. imagine a nation announces some major breakthrough in computing that leads to capabilities previously considered decades away from development. other countries might not be willing to take a chance that this won't be used offensively.
 

scarhead

All Star
Joined
May 8, 2012
Messages
1,743
Reputation
370
Daps
6,670
I think we should shut everything down. Even if this guy's wrong (I hope he is), the risk/benefit just doesn't make sense to me. Yeah, AI will free us from doing menial tasks and can create some porn and fake-but-real-sounding vocals, but if there's a non-zero probability of it killing us all then we should stop it. I know it will bring about some medical advancements and other positive things too, but I think we're playing with fire now, and I also don't love the fact that the AI race is being dictated by MS and Google. We saw how detrimental social media can be to humanity with what Meta did with it. Now possibly the whole fate of humanity is in the hands of MS and Google? Nah…
 