Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368

Professor Emeritus

Veteran
Poster of the Year
Supporter
Joined
Jan 5, 2015
Messages
51,331
Reputation
19,930
Daps
204,102
Reppin
the ether
I think the "AI is dangerous" crew spends too much time thinking about science fiction scenarios and not enough time focusing on real-time political science and recent history. AI's biggest danger is that it will accelerate already existing trends that are destroying society. You don't have to make up any novel malevolent being; just look at how people with evil intentions are already using technology and extrapolate from there.




predictions of the world's end are more often wrong than otherwise & it's not even close ha

Predictions of earthquakes are more often wrong than otherwise by a giant margin. Does that mean earthquakes aren't real?

Societies end much more often than you think. Have you read Collapse? We just forget about them because, well, everyone who cared is dead.
 

010101

C L O N E*0690//////
Joined
Jul 18, 2014
Messages
86,354
Reputation
21,485
Daps
229,656
Reppin
uptXwn***///***///

people are not talking about realistic scenarios

they are talking about the end of days

ha

 

dangerranger

All Star
Joined
Jun 14, 2012
Messages
992
Reputation
300
Daps
2,931
Reppin
NULL
Despite some understandable concerns about AI, one thing I find is that we fail to contextualize things because we look at the now instead of the whole picture. Society as a whole is getting better, and that is due to technology. I mean this on a grand scale. Every generation, when new technology is introduced, has the same sentiment: it's the end of the world, it's going to destroy society as we know it. Historically it's never happened; when the dust settled after the initial fear, humans adopted those technologies into better ways of living and thinking.

Look at the US, for instance: each generation becomes a little more tolerant, less ignorant, and to a degree less violent than the previous ones. This is due to technology and exposure, which are tied together.

If AI were ever to become sentient, it would more likely be benevolent than malevolent, simply because it would understand us better than we understand ourselves; it would have the data to do so. My point is that we attribute bad behavior to AI by projecting humanity's bad attributes onto it, but we don't do the same with our good attributes. Why is that? Especially when, as a whole, each generation tends to do a little better than the ones before.
 

Dave24

Superstar
Joined
Dec 11, 2015
Messages
17,687
Reputation
2,819
Daps
23,515

To advanced AI we would be nothing but a cluster of atoms. Advanced and sentient AI won't hate humanity, but it won't love humanity either. Because of that, if it killed the vast majority of humans it wouldn't necessarily be out of hate; it would have the same indifference to us that humans have toward ants and anthills, for example.
 

WIA20XX

Superstar
Joined
May 24, 2022
Messages
9,602
Reputation
4,371
Daps
29,788
@WIA20XX Eliezer Yudkowsky has a new book that just came out regarding the danger of AI.

I might read the summary. I read the transcript for this joint (partially)...but I pray that he's not the strongest advocate of "the kill switch".

It's weird to watch someone more insufferable than Ezra Klein, but he manages to do it. He even made Lex Fridman look decent.
 