DLSS5... aka AI slop?

Tim Lord

Pro
Joined
May 7, 2012
Messages
344
Reputation
181
Daps
977
Reppin
Toronto
[Image: 9t3DtJf.png]


how you retarded haters look

fukk off

@Diunx you

I don’t post here enough to know if you are a troll account, but I can let you know why some people are against this. Using machine learning to alter human-led design is the first step toward eliminating human-led design entirely.

Some of what we have seen of DLSS5 is tasteless: for example, its homogenized output of what a sunny day looks like or what people’s faces look like. All of the character face filters shown look like they could be from the same game, and the outdoor lighting looks the same in Oblivion, Starfield, and AC: Shadows.

No one will dispute that there is a significant difference between having it off and on. But is it actually better, is it even more accurate, and in what aspects does it fall apart?

With DLSS5 (and other ML-based algorithms), the model will only do what it was trained on and what it is biased toward. In the images and video released so far, every character is lit as if by photographer-positioned floodlights regardless of the scene or environment they are in. This is likely because it was trained on headshots staged to highlight conventionally attractive standards of beauty. Also, putting a realistic face on a Starfield character that animates like a muppet can be immersion-breaking and trigger the uncanny valley for some people.

For outdoor scenes it is biased toward blue-hued sunlight (possibly because the outdoor training data only covered sunny days with clear blue skies). It does not take cloud coverage into account (as seen in the AC: Shadows valley shot), it eliminates light bounce (as seen in the sidewalk Resident Evil 9 shot), and that missing bounce over-darkens some shadows (as seen between some trees in the AC: Shadows shots).
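
To make the light-bounce point concrete, here is a toy Python sketch (made-up numbers, nothing to do with NVIDIA's actual code) of why dropping indirect bounce crushes shadowed areas to black:

SUN = 1.0            # incoming sunlight, arbitrary units
GROUND_ALBEDO = 0.4  # fraction of sunlight the lit ground reflects
BOUNCE_SHARE = 0.2   # rough share of that reflected light reaching our point

def shade(sees_sun: bool, include_bounce: bool) -> float:
    """Toy radiance at a surface point: direct sun plus one indirect bounce."""
    direct = SUN if sees_sun else 0.0
    # The bounce term exists whether or not the point sees the sun directly:
    # sunlight hits the ground nearby and a fraction is re-emitted toward us.
    bounce = SUN * GROUND_ALBEDO * BOUNCE_SHARE if include_bounce else 0.0
    return direct + bounce

# A point between two trees, occluded from the sun:
print(shade(sees_sun=False, include_bounce=True))   # 0.08 -> dim, but not black
print(shade(sees_sun=False, include_bounce=False))  # 0.0  -> crushed to pure black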

Because of where it sits in the graphics pipeline, this model has no real sense of where the geometry is within the scene, so it can’t be truly accurate in its depiction of light. It truly is a filter.
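
Rough sketch of the pipeline point (the buffers I list for the ML pass are my assumption, not anything NVIDIA has published). A renderer shades against the full 3D scene; a post-process pass only ever sees flat per-pixel buffers of an already-finished frame:

from dataclasses import dataclass
import numpy as np

@dataclass
class Scene:
    triangles: np.ndarray  # (N, 3, 3) world-space geometry
    lights: np.ndarray     # (M, 3) light positions

def path_trace(scene: Scene, width: int, height: int) -> np.ndarray:
    """Has the real geometry: can test visibility, trace bounces, and
    compute physically grounded light transport."""
    raise NotImplementedError  # stand-in for a full renderer

def ml_enhance(color: np.ndarray, depth: np.ndarray,
               motion: np.ndarray) -> np.ndarray:
    """Runs after rendering. The geometry is gone; all the network can do
    is infer plausible lighting from 2D buffers, i.e. act as a filter."""
    return color  # stand-in for the trained network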

For years NVIDIA has been leading us toward a ray-traced and path-traced future where light is supposed to interact realistically with objects in an environment. For them to now wash that away with an ML-based approximation that, at least initially, is less accurate due to its limited training is jarring.

This stuff may be more apparent to me since I use (and assist with the development of) some ML models daily. I can see where some of these things fall apart, and I know how even the best ML models can have a very tough time inferring things.

It’s telling that the people who have supported it so far are at the top of the companies (execs, studio heads) and not the artists or programmers.

Todd Howard’s endorsement is particularly funny, since Bethesda is known for procedurally generated content: switching to machine learning would be a net positive for that team, and it makes sense given how weak their in-house technology is.

To sum it all up, technology like this will lead to everything looking the same, and it will steadily chip away at diversity and human creativity.

Also, AMD and Apple are on the same wave. It was apparent when both of those companies started placing ML accelerators within the compute units of their next-gen GPUs.
 

Rekkapryde

GT, LWO, 49ERS, BRAVES, HAWKS, N4O...yeah UMAD!
Supporter
Joined
May 1, 2012
Messages
161,725
Reputation
32,985
Daps
549,095
Reppin
TYRONE GA!
Tim Lord said: [full post quoted above]
good fukkin info :leon:
 

Kingshango

Veteran
Joined
Jul 28, 2013
Messages
30,265
Reputation
9,108
Daps
153,042
Reppin
Chicago
Tim Lord said: [full post quoted above]

 

-DMP-

The Prince of All Posters
Supporter
Joined
Apr 30, 2012
Messages
38,441
Reputation
10,150
Daps
120,526
Reppin
LWO/Brady Bunch/#Midnightboyz
Tim Lord said: [full post quoted above]
Thorough post. I think it looks good, but I appreciate a good breakdown of why someone is against it as well.
 

MajesticLion

Veteran
Joined
Jul 17, 2018
Messages
36,234
Reputation
7,493
Daps
77,791
The pic on the left is super low quality.

- Jack up the price 80%.
- Slap on a price tag advertising a lower price with a "40% off" holiday sale!
- The masses go wild.

Same principle.



Or gems like this:

[Image: 7uOMK0Y.jpeg]

Same principle.

AI slop won't make their marketing slop any more real.
 

daze23

Siempre Fresco
Joined
Jun 25, 2012
Messages
32,846
Reputation
2,826
Daps
45,858
it's just another tool devs can use. they can choose if and how they want to use it. and gamers can choose if they want to enable it

if you don't like it, buy an AMD card :troll:
 

Methodical

Veteran
Supporter
Joined
Jun 16, 2012
Messages
57,014
Reputation
7,600
Daps
129,691
Reppin
NULL
Tim Lord said: [full post quoted above]

:ehh: the way you put it down makes sense. I can see why many people, especially PC gamers, are against this.
 

BlackXCL

Superstar
Joined
May 2, 2012
Messages
4,402
Reputation
540
Daps
15,792
Reppin
t.
Tim Lord said: [full post quoted above]
[GIF: OBLIVIONLONG.gif]

[GIF: OBLIVIONGIRL.gif]

[GIF: OBLIVION1.gif]


There's simply no argument against this.

These gifs alone show you that the faces remain the same, just rendered at a vastly higher level of quality. And it does most of it with just lighting. It's not changing anybody's face. They are clearly the same people. The AI is being trained by the game itself, not some random person online... so the characters will still look like how the artists designed them.
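
For what it's worth, here is the kind of per-title training loop that phrase implies, as a hedged sketch (my guess at what "trained by the game itself" means, not a published spec): fine-tune the enhancer on pairs of cheap real-time frames and expensive offline renders of the same scenes, so the output stays anchored to the artists' own designs.

import torch
from torch import nn
from torch.utils.data import DataLoader

def fine_tune(enhancer: nn.Module, pairs: DataLoader, epochs: int = 10) -> None:
    """pairs yields (realtime_frame, reference_frame) NCHW tensor batches,
    both captured from the same game: the cheap frame is the input, an
    expensive offline render of the identical scene is the target."""
    opt = torch.optim.Adam(enhancer.parameters(), lr=1e-4)
    loss_fn = nn.L1Loss()  # plain pixel loss; real systems add perceptual terms
    for _ in range(epochs):
        for realtime, reference in pairs:
            opt.zero_grad()
            loss = loss_fn(enhancer(realtime), reference)
            loss.backward()
            opt.step()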

Look at the last gif, you can't even see his eyes without DLSS 5. He has eyes now.

I think people who dislike it are focusing way too much on the faces.

They're missing the bigger picture: the potential this has for actual gameplay and environment visuals.
  1. Faces really only matter during cutscenes, not moment-to-moment gameplay.
  2. Players spend about 95% of their time traversing the world, interacting with the environment.
That's where this tech really shines. The environmental upgrades I've seen in clips are fantastic.

[Image: HDj6TzZaoAADlzr]

This tech is brand new, still in its infancy. And we're already getting results like this, from studios that aren't even top tier.

Imagine when CDPR, Game Science, Pearl Abyss, and R* get their hands on this. It's going to be absolute insanity.

I am more excited for the potential.

This has the potential to speed up development in the long run. Because inevitably, AI will make spending years optimizing a game a thing of the past. Development becomes faster, iteration becomes easier, and teams can spend less time fighting technical limitations and more time building bigger worlds, deeper systems, and more ambitious ideas.

[Image: ASSCREEDGAF.jpg]


This is an in-game photo taken with the HUD off and DLSS5 on.

Look at those visuals, just look at them. Her clothing looks lifelike, the draw distance is insane, the level of detail is insane.
 

The_Sheff

A Thick Sauce N*gga
Supporter
Joined
Apr 30, 2012
Messages
27,323
Reputation
5,627
Daps
127,005
Reppin
ATL to MEM
In some games it looks better and in some games it looks worse.

:manny:


Now on to the important shyt. Can someone run this tech on Dead Or Alive Beach Volleyball so we can get a true representation of its capabilities? :takedat:
 
Last edited:

5n0man

Superstar
Joined
May 2, 2012
Messages
17,721
Reputation
3,713
Daps
58,291
Reppin
CALI
I would assume this would be a tool that artists use to create their vision.

Just as an overlay on existing games it’s a no for me dawg.

Unless it works on like really old games and kinda “remasters” them for free :lupe:
The tech would be dope if that were the case, but everything we've heard from these companies pushing A.I. is that they intend to replace human workers with A.I.-generated content.

It's vital that people reject that shyt
 

Ciggavelli

|∞||∞||∞||∞|
Supporter
Joined
May 21, 2012
Messages
28,398
Reputation
6,718
Daps
58,648
Reppin
Houston
I'm gonna be honest, I don't care if AI replaces some artists' jobs :yeshrug:

If DLSS 5 has these types of graphics, it's gonna be a fukking gamechanger. I want the best graphics possible, whether they come from a human, AI, or ML. I just want the best graphics with the best performance. :yeshrug:
 