I don’t post here enough to know if you are a troll account, but I can tell you why some people are against this. Using machine learning to alter human-led design is the first step toward eliminating human-led design entirely.
Some of what we have seen of DLSS5 is tasteless. For example, its homogenized output of what a sunny day looks like or what people’s faces look like: all of the character face filters shown look like they could be from the same game, and the outdoor lighting in the environments looks identical in Oblivion, Starfield, and AC: Shadows.
No one will dispute that there is a significant difference between having it off and on. But how is it better, and is it even more accurate? And in what respects does it fall apart?
With DLSS5 (and other ML-based algorithms), it will only do what it was trained on and what it is biased toward. In the images and video released so far, every character is lit as if by photographer-positioned floodlights, regardless of the scene/environment they are in. That is because it was trained on headshots of people lit to highlight conventional standards of beauty. Also, putting a realistic face on a Starfield character that animates like a muppet can be immersion-breaking and trigger the uncanny valley for some people.
For outdoor scenes it is biased toward blue-hued sunlight (possibly because the outdoor training data was captured on sunny days with only clear blue skies). It does not take cloud coverage into account (as seen in the AC: Shadows valley shot), it eliminates light bounce (as seen in the Resident Evil 9 sidewalk shot), and it over-darkens some shadows because of that missing light bounce (as seen between some trees in the AC: Shadows shots).
This model has no real sense of where the geometry is within the scene, because of where it sits in the graphics pipeline, so it can’t be truly accurate in its depiction of light. It truly is a filter.
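To make that concrete, here is a minimal sketch (the function name, channel gains, and pixel values are my own illustrative assumptions, not anything from NVIDIA): a post-process pass only ever receives the finished 2D frame buffer, so any lighting it paints has to be inferred from pixel statistics rather than from the scene’s actual geometry.

```python
# Hypothetical sketch: a post-process "neural filter" stage.
# It receives nothing but final RGB pixels -- no depth buffer,
# no normals, no light list -- so geometry never reaches it.

def neural_filter(pixel_rgb):
    """Stand-in for an ML pass: nudges every pixel toward a learned
    'sunny day' prior (here, a fixed blue-leaning gain per channel)."""
    gains = (1.02, 1.04, 1.10)  # assumed bias toward blue-hued light, baked in by training
    return tuple(min(channel * gain, 1.0) for channel, gain in zip(pixel_rgb, gains))

# A flat gray pixel from any scene gets the same treatment: the blue
# channel rises the most, regardless of what was actually rendered.
flat_gray = (0.5, 0.5, 0.5)
print(neural_filter(flat_gray))
```

The signature is the whole point of the sketch: because the stage only sees pixels, it applies its learned prior uniformly, whether the scene is an overcast valley or a floodlit studio.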
For years NVIDIA has been leading us toward a ray-traced and path-traced future where light is supposed to interact realistically with the objects in an environment. For them to now wash that away with an ML-based approximation that, at least initially, is less accurate due to its limited training is jarring.
This stuff may be more apparent to me because I use (and assist with the development of) some ML models daily. I can see where some of these things fall apart, and I know that even the best ML models can have a very tough time inferring things.
It’s telling that the people who have supported it so far are at the top of the companies (execs, studio heads) and not the artists or programmers.
Todd Howard’s endorsement is particularly funny: Bethesda is known for procedurally generated content, so switching to machine learning would be a net positive for that team, and it makes sense given how weak their in-house technology is.
To sum it all up, technology like this will lead to everything looking the same and it will steadily chip away at diversity and human creativity.
Also, AMD and Apple are on the same wavelength. That became apparent when both companies started placing ML accelerators inside the compute units of their next-gen GPUs.



The way you put it makes sense. I can see why many people, especially PC gamers, are upset about this and against it.