I am waiting for GPUs to use the rotating kinetic power of the fans to feed back into the GPU and give them an ERS boost like in Formula 1 when scenes become too graphically demanding. If you steal my idea that is intellectual theft and I will be sad!
I’m running a 4070s
CP2077 with RT is around 50 fps with dips. Without RT I sit at 90 fps with max settings at 1440p
Ray tracing is just a way for Nvidia to make a technology proprietary and then force the industry to use it, all to keep Jensen in leather jackets. Don’t buy his cards; he has too many leather jackets!
amd cards can handle raytracing too though… soooooo.
As I’m sure you already know, the proprietary part comes from the implementation and the built-in hardware support for said implementation, which AMD is not compatible with (not in any usable way, at least)
AMD also has hardware support for raytracing and both are using the same API for raytracing. Nvidia just has a head start and deeper pockets.
This isn’t Cuda or Gameworks where the features depend on Nvidia hardware, it’s more like Tessellation where they can both do it but Nvidia cards did it better so they pushed developers into adding it into games.
As someone who has worked in visual FX for 20 years now, on over 15 films and 8 games, I can tell you raytracing is most definitely not simply a marketing tool.
Ray tracing is cool; the problem is, it’s still basically in beta. Once hardware catches up and you can still get good FPS, it won’t be an annoyance
Meme creator is clearly blind.
I never turn it on; the visual difference is too unimportant to warrant such a huge cost in hardware resources (and temperature). It looks different if you have side-by-side screenshots, or if you turn it off and on in-game, but in normal play the difference is far too slight to be worth it. Higher frames per second matter more than realistically simulated light beams. You can’t really have both in large AAA games.
Soooo, there’s a missing part here. The point (and drive) behind raytracing isn’t making games beautiful, it’s making them cheaper and less man-hour intensive to make/maintain.
The engine guys spend man-years every year working on that non-raytraced engine so it can do 150 fps. They’ve done every cheat, every side step, and spent every minute possible making it look like they haven’t done anything at all.
The idea is that they stop making/updating/supporting non-raytracing engines and let the GPUs pick up the slack, then use AI to artificially ‘upgrade’ the frame rate with interpolation.
It’s like when the Unity game engine came out: IMO, instead of having to program the whole thing up to your specific game, suddenly everyone could make a 3D platformer.
It does, again IMO, take the soul out of games.
To see how far rasterization has been stretched, and how that holds back development - Path of Exile 2 has a tech talk about their bare minimum settings. Artists weren’t allowed to rely on anything that could be turned off. They begged the programmers for specific gimmicks, and turned that cheap nonsense into a million blades of grass, raymarched cracks in translucent ice, and soft shadows with no Peter Panning.
Or, picking one specific trick: ambient occlusion was half of why Crysis humbled $5,000 PCs. There’s a slide deck for how a superior version of the same effect was achieved in Toy Story 3 on the Wii.
Real-time raytracing was unobtanium for decades because we kept moving the goalposts. The entire 3D games industry is built on cheating around simple parallel techniques being too expensive. By the time hardware catches up to where doing something the simple way is feasible, complex software has faked a wild variety of other effects. Meanwhile: games are designed to rely on what’s available. All of the tells for proper path-traced lighting have either been faked or avoided. Games don’t even do mirrors, anymore.
There’s a reason RTX shows off games from the late 1900s.
It’s not just a time limitation either tho, it also opens up a lot of room for artistic direction and game design
I don’t think you could possibly make something like Control’s shiny black blocks world look decent without raytraced reflections.
Also, anything with significantly large dynamic geometry usually needs either something like half the level’s file size duplicated for every possible state, or some form of raytracing, to work at all. (There are also things like voxel cone tracing that do their own optimized tracing, but those don’t work in 100% of situations and come with their own visual downsides.)
Don’t forget that temporal smear. I like to apply vaseline directly onto my monitor instead.
Don’t forget the 10 shadow copies of my car/weapon following me around. It’s like someone really liked having a trailing mouse cursor and thought everything should have it
Skyrim has “ray traced” shadows in certain places and works great. I was in a cave once and hiding behind a cliff. An enemy was wandering around the next room and I was able to use the shadow cast on him by a torch to observe his movements without having his actual body in my field of view.
All this modern RT nonsense does is make things look slightly better than screen space reflections and tank performance.
I would expect that to be a normal rasterized shadow map unless you can find any sources explicitly saying otherwise. Because even 1 ray per pixel in complex triangulated geometry wasn’t really practical in real time until probably at least 2018
I’m not sure how it worked, all I know is that it was real time and would react to player models, enemies or other things that would move in unpredictable ways, but only for specific light sources.
Yeah, that’s just rasterized shadow mapping. It’s very common; a lot of old games use it, as do modern ones. It’s used in basically any non-raytraced game with dynamic shadows. (I think the only other way to do it is directly projecting the geometry, which only a few very old games did, and which can only cast shadows onto a single flat surface.)
The idea is that you render the depth of the scene from the perspective of the light source. Then, for each pixel on the screen, to check whether it’s in shadow, you find its position on that depth texture. If it’s further from the light than whatever the depth map recorded there, it’s in shadow; otherwise it isn’t. This is filtered to make it smoother. The downsides: it can’t support shadows of variable width (aka literally every shadow) without extra hacks that don’t work in all cases; to get sharp shadows you need to render the depth map at a very high resolution; rendering a whole depth map is expensive; it renders unseen pixels; and it doesn’t scale down well to low resolutions (say, if you wanted 100 very distant shadow-casting lights).
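If it helps, here’s a toy sketch of that per-pixel test (purely illustrative, not from any engine; the depth map, bias value and names are all made up):

```python
# Toy version of the shadow-map test described above. Assumes we've already
# rendered a depth map from the light's point of view and can project a pixel's
# world position into it; everything here is invented for illustration.

def in_shadow(depth_map, light_uv, depth_from_light, bias=0.005):
    """True if this pixel is shadowed from the light.

    depth_map        -- 2D grid of closest-surface depths as seen by the light
    light_uv         -- (u, v) in [0, 1): where this pixel lands on that map
    depth_from_light -- this pixel's own distance from the light
    bias             -- small offset to avoid self-shadowing ("shadow acne")
    """
    h, w = len(depth_map), len(depth_map[0])
    x = min(int(light_uv[0] * w), w - 1)
    y = min(int(light_uv[1] * h), h - 1)
    closest = depth_map[y][x]
    # Something sits closer to the light at this spot -> we're behind it.
    return depth_from_light - bias > closest

# Tiny fake 4x4 depth map: the light sees an occluder at depth 2.0 on its left half.
depth_map = [[2.0, 2.0, 9.0, 9.0] for _ in range(4)]
print(in_shadow(depth_map, (0.1, 0.5), 5.0))  # True: the occluder at 2.0 blocks us
print(in_shadow(depth_map, (0.9, 0.5), 5.0))  # False: nothing is closer to the light
```

In a real engine that single hard compare gets filtered (several jittered samples averaged together), which is the “filtered to make it smoother” part.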
Raytraced shadows are actually very elegant since they operate on every screen pixel (allowing quality to naturally increase as you get closer to any area of interest in the shadow) and naturally support varying shadow widths at the cost of noise and maybe some more rays. Although they still scale expensively with many light sources, some modified stochastic methods still look very good and allow far more shadow casting lights than would ever have been possible with pure raster.
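For contrast, a minimal sketch of the stochastic shadow-ray idea: fire a handful of rays from the shaded point toward random spots on an area light and average the hits. The disk light, sphere occluder and ray count here are all invented for illustration, not taken from any game:

```python
import math, random

def hits_sphere(origin, direction, max_t, center, radius):
    # Standard ray/sphere intersection; only count hits between us and the light.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is normalized, so the quadratic's 'a' is 1
    if disc < 0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return 1e-4 < t < max_t

def soft_shadow(point, light_center, light_radius, occluder, rays=16):
    """Fraction of the area light visible from `point` (1.0 = fully lit)."""
    visible = 0
    for _ in range(rays):
        # Jitter the target across the light's disk (lying in the xz plane here).
        ang = random.uniform(0.0, 2.0 * math.pi)
        r = light_radius * math.sqrt(random.random())
        target = (light_center[0] + r * math.cos(ang),
                  light_center[1],
                  light_center[2] + r * math.sin(ang))
        d = [t - p for t, p in zip(target, point)]
        dist = math.sqrt(sum(x * x for x in d))
        d = [x / dist for x in d]
        if not hits_sphere(point, d, dist, occluder["center"], occluder["radius"]):
            visible += 1
    return visible / rays

occluder = {"center": (0.0, 1.0, 0.0), "radius": 0.5}
light = (0.0, 3.0, 0.0)
print(soft_shadow((0.9, 0.0, 0.0), light, 1.0, occluder))  # partly lit: a penumbra
print(soft_shadow((5.0, 0.0, 0.0), light, 1.0, occluder))  # 1.0: nothing in the way
```

The noise the comment mentions is exactly the run-to-run randomness you see in that first result; real renderers smooth it out with more rays and denoising.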
You don’t notice the lack of shadow casting lights much in games because the artists had to put in a lot of effort and modifications to make sure you wouldn’t.
That’s actually one specific torch!
It is unknown why it has this function, or why Bethesda left it in
Just Bethesda things
I’ve seen the effect in other places, though I guess technically they can stick that torch wherever they want as you explore.
If you mod, that’s likely why you found it in other places. The wiki isn’t kidding when it says it is found in only one place in the game (in vanilla at least.)
That’s because you can achieve that effect with only a few traced rays, instead of the hundreds used for soft shadows. But honestly, the same effect could probably be achieved dynamically with maybe 10 rays and a blur filter.
I think raytracing is fine for games that want a lot of realism. But I’m playing games with monsters and fantasy. My suspension of disbelief isn’t going to break because reflections aren’t quite right.
But I’m pretty much in the camp of, I want my games to look and feel like games. I like visual cues like highlighting items I can interact with or pick up. So lighting is always non-realistic.
Look at Tiny Glade; it’s a great example of what raytracing can bring to a stylized game. (They did use their own raytracing pipeline rather than the usual one; in their own words, ReSTIR was overkill for what their game needed.) Or look at like 95% of animated films. Including Arcane but excluding Stray.
What game is this?
Call of Duty Modern Warfare 2
Dark Souls 2
Pokémon Ruby/Saphire/Emerald
Yes, cool. But annoying after the first minute. And way too reflective.
<.< How is that annoying?
Baked lighting looks almost as good as ray tracing because, for games that use baked lighting, devs intentionally avoid scenes where it would look bad.
Half the stuff in this trailer (the dynamically lit animated hands, the beautiful lighting on the moving enemies) would be impossible without ray tracing. Or at the least it would look way way worse:
Practically impossible for this developer? Maybe. Technically impossible? No.
We do have realtime GI solutions which don’t require raytracing (voxel cone tracing, sdfgi, screenspace, etc). None of which require any ‘special’ hardware.
Raytracing is just simpler and doesn’t need as much manual work to handle cases where traditional rasterisation might fail (e.g. light leaking). But there aren’t many things it can do which we can’t already achieve with rasterisation tricks.
Raytracing is mostly useful for developers who don’t have the time/budget/skillset to get the same visual quality with traditional rasterisation. However, in an industry which seems to prioritise getting things released as cheaply and quickly as possible, we’re starting to see developers rely heavily on raytracing and not allocate many resources to making their non-RT pipeline look nice.
Some are even starting to release games which require raytracing to work at all, because they completely cut the non-RT pipeline out of their budget. So I’d argue that you’re incorrect in theory, but very correct in practice (and getting more correct with time).
That’s kinda the thing with ray tracing. You can save a lot of work but since you want your game available for gamers that don’t have the hardware you still have to do that work…
I’m expecting the next PlayStation to focus on ray tracing to set it apart in the market. They have the volume and it would be good for their exclusive titles.
Edit: Okay, maybe I’m just hoping rather than expecting. Sony can absolutely screw this up.
Raytracing is cool. Personally, I feel like the state consumers first got it in was atrocious, but it is cool. What I worry about is the AI-upscaling, fake-frame bullshit. It’s cool that the technology exists; like, sweet, my GPU can render this game at a lower resolution, then upscale it back at a far better frame rate than without upscaling, ideally stretching out my GPU purchase. But I feel like games (in the AAA scene at least) are so unoptimized now that you NEED all of these upscaling and fake-frame tricks. I’m not a dev, I don’t know shit about making games, just my 2 cents.
Optimization is usually possible, but it is easier said than done. Often sacrifices have to be made, but maybe it is still a better value per frame time. Sometimes there’s more that can be done, sometimes it really is just that hard to light and render that scene.
It’s hard to make any sweeping statements, but I will say that none of that potential optimization is going to happen without actually hiring graphics devs. Which costs money. And you know what corporations like to do when anything they don’t consider important costs money. So that’s probably a factor a lot of the time.
The joke is, LCDs smear anyway at low framerates.
No you’ve pretty much hit it on the head there. The higher ups want it shipped yesterday, if you can ship it without fixing those performance issues they’re likely going to make you do that.
Raytracing will be cool if hardware can catch up. It’s pretty pointless if you have to play upscaled just to turn the graphics up. And as you say, upscaling has its uses and is great tech, but when a game needs it to not look like dogshit (looking at you, Stalker 2) it worries me a lot.
I feel like if you have something on the level of a 3070 or above at 1080p, path tracing, even with the upscaling you need, can be an option. At least based on my experience with Portal RTX.
Personally I have a 3060, but (in the one other game I actually have played on it with raytracing support) I still turned on raytraced shadows in Halo Infinite because I couldn’t really notice a difference in responsiveness. There definitely was one (I have a 144hz monitor) but I just couldn’t notice it.
We’ve gotten so good at faking most lighting effects that honestly RTX isn’t a huge win except in certain types of scenes.
The difference is pretty big when there are lots of reflective surfaces, and especially when light sources move (prebaked shadows rarely do, and even when, it’s hardly realistic).
A big thing is that developers use less effort and the end result looks better. That’s progress. You could argue it’s kind of like when web developers finally were able to stop supporting IE9 - it wasn’t big for end users, but holy hell did the job get more enjoyable, faster and also cheaper.
Cyberpunk and Control are both great examples - both games are full of reflective surfaces and it shows. Getting a glimpse of my own reflection in a dark office is awesome, as is tracking enemy positions from cover using such reflections.
I have only ever seen Cyberpunk in 2k res, ultra graphics, ultra widescreen, ray-tracing and good fps at a friend’s house and it does indeed look nice. But in my opinion there are too many reflective surfaces. It’s like they are overdoing the reflectiveness on every object just because they can. They could have done a better job at making it look realistic.
For any other game, I’d agree, but cyberpunk being full of chrome is an aesthetic that predates the video games by a fair margin, haha.
My problem is more with wet surfaces and the like. Walking around the city, it feels like every little puddle is a mirror, and a spoon can also reflect way too much. I don’t mind shiny chrome body parts.
Oh, they are definitely intentionally overdoing it, since 90% of said reflective surfaces are ads, often reflecting other ads in them. The game is such an assault of advertising that I’ve found myself minding the advertisements in real-life public spaces a lot
More or less.
But, it takes a lot of work by designers to get the fake lighting to look natural. Raytracing would help avoid that toil if the game is forced RT.
That’s the same logic behind the really high hardware requirements nowadays.
Studios just want to save time and cut corners, and you offset that with really expensive cards.
Gamers need expensive hardware so designers have less work. Game still not cheaper.
I took pickles and tomatoes off my burger, where’s my $0.23 discount, damn it?!
Let’s assume cutting out tomatoes and pickles saved $0.23 per hamburger.
McDonald’s serves 6.5 million hamburgers a day.
That’s over $500 million in extra yearly profit for their shareholders (6.5 million × $0.23 × 365 ≈ $546 million). There’s actually a decent analogy there, I think. The hamburger won’t cost less, because the service of customization is itself less efficient: serving customers with their preference of with/without is more expensive than just pickles for all. Likewise, I imagine making a game that looks OK both with and without RT is more work than just with.
There is no analogy. It’s comparing recurring costs per product (you need a new tomato per 5 burgers) to a one-time cost that can be cut during development. And additional copies of a game don’t generate more costs.
There really isn’t.
The OP comment was that gamers need to buy expensive hardware so that developers can cut features/optimization.
The follow-up reply likened it to customizing your burger, but the better analogy (and the one I assumed) would be McDonald’s removing all tomatoes and pickles (saving money) and the user having to buy them separately to add to the burger.
The issues come if you know how they’re faking them. Sure, SSR can look good sometimes, but if you know what it is it becomes really obvious. Meanwhile raytraced reflections can look great always, with the cost of performance usually. It’s sometimes worth it, especially when done intelligently.
Not true. Screen space reflections consistently fails to produce accurate reflections.
There are cases where screen space can resolve a scene perfectly. Rare cases. That also happen to break down if the user can interact with the scene in any way.
Screenspace isn’t the only way to draw reflections without RT. It’s simply the fastest one.
Most gamers aren’t going to notice, and I can count on one hand the number of games that actually used reflections for anything gameplay related.
What I’m talking about is drawing accurate reflections, and I don’t know of any other technique that produces the same accuracy as RT
That’s like saying that “physics simulation is the only technique that produces accurately shaped water streams” - technically true but generally not a sufficient improvement over the shortcuts currently in use to make up for the downside that the technically most precise method is slow as fuck.
Game making is, at all levels, about finding shortcuts and simplifications (even games about the real world are riddled with simplifications, if only because the gameplay rules are a simplified version of real-world interactions; otherwise it would be boring as shit). On the visual side those shortcuts are all over the place even with RT: the damage on the walls, the clouds in the sky, the smoke rising from fires or the running water in the streams aren’t the product of physics simulations but, most likely, something like Perlin noise or good old particle effects, faked well enough to deceive human perception.
Yeah, sure, RT is, technically speaking in terms of visual fidelity alone, better than the usual tricks (say, an extra rendering step from the viewpoint of the main reflective surfaces such as mirrors). Is the higher fidelity (in, remember, a game space which is in many other ways riddled with shortcuts and simplifications) enough to overcome its downsides for most people? So far the market seems to be saying it’s not.
CD Projekt Red just showcased The Witcher 4 running RT at 60 fps on a PS5. Bullshit that it’s too slow to be available for most people.
From an article about it:
Now, it should be stressed that this is a build of The Witcher 4 specifically designed to show off Unreal Engine’s features. Yes, it’s running on a standard PS5, but it’s not necessarily indicative of the finished product.
So that’s like saying “under laboratory conditions it has been demonstrated to work”.
If you know what to look for you can notice it (mainly light bouncing off objects and tinting shadows with the color of those objects, such as the shadow above the green canvas here), but the difference from the non-RT version when one doesn’t know what to look for is minimal and IMHO not enough to justify upgrading one’s hardware, especially considering that so much of the rest (the water in the streams, the snow on the mountains, the shape of the mountains themselves, the mud splash when a guy is thrown into the mud, the foliage of the plants and so on) uses the visual shortcuts I mentioned.
Yeah, sure, it’s nice that shadows next to strongly lit colored surfaces get tinted with the color of that surface, but is that by itself worth upgrading one’s hardware?!
When most games with RT in them deliver that performance on one generation old hardware and all environments, then you will have proven the point that for most gamers it has no significant negative impact on performance.
RT hardware came out three generations ago, and I don’t think they really vary the number of rays much per environment (and RT itself is an O(log n) problem)
If you think that video is representative of the release game’s actual performance and fidelity, I have several bridges to sell you.
I don’t see them lying but that’s on you I guess
Depends on how you define “accurate”. Even full ray tracing is just an approximation based on relatively few light rays (on an order of magnitude that doesn’t even begin to approach reality) that is deemed to be close enough where increasing the simulation complexity doesn’t meaningfully improve visual fidelity, interpolated and passed through a denoising algorithm. You can do close enough with a clever application of light probes, screenspace effects, or using a second camera to render the scene onto a surface (at an appropriate resolution).
That’s true, but after a few frames RT (especially with nvidia’s ray reconstruction) will usually converge to ‘visually indistinguishable from reference’ while light probes and such will really never converge. I think that’s a pretty significant difference.
Reflection probes are one way. Basically a camera drawing a simpler version of the scene from a point into a cubemap. Decent for oddly shaped objects, although if you want a lot of them then you’d bake them and lose any real time changes. A common optimisation is to update them less than once a frame.
If you have one big flat plane like the sea, you can draw the world mirrored from underneath and just use that. GTA V did that (like ten years ago, without RT), along with the mirrors inside. You could make those look better by rendering them at a higher resolution. (Rough sketch of the mirrored-camera idea below.)
https://www.adriancourreges.com/blog/2015/11/02/gta-v-graphics-study-part-2/
Where RT is visibly better is with large odd shaped objects, or enormous amounts of them. I can’t say it’s worth the framerate hit if it takes you below 60fps though.
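Here’s that flat-plane trick in miniature: mirror the camera across the reflecting plane and render the scene again from there, then use that image as the reflection. The y = 0 plane and the names are just assumptions for illustration, not GTA V’s actual code:

```python
# Planar-reflection sketch: for a flat reflector like the sea at y = 0, mirror
# the camera across the plane and render a second pass from the mirrored camera.

def mirror_camera_across_y0(position, forward):
    """Mirror a camera (position + forward direction) across the plane y = 0."""
    mirrored_pos = (position[0], -position[1], position[2])
    mirrored_fwd = (forward[0], -forward[1], forward[2])
    return mirrored_pos, mirrored_fwd

# Camera 10 units above the water, looking slightly downward:
cam_pos, cam_fwd = (0.0, 10.0, 0.0), (0.0, -0.3, 1.0)
print(mirror_camera_across_y0(cam_pos, cam_fwd))
# -> ((0.0, -10.0, 0.0), (0.0, 0.3, 1.0)): render the reflection pass from here,
#    clipping geometry below the plane so underwater stuff doesn't leak into it.
```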
I haven’t personally played a game that uses more than one dynamic reflection probe at a time. They are pretty expensive, especially if you want them to look high resolution and want the shading in them to look accurate.
The best examples of raytracing are in applying it to old games, like Quake II or Portal or Minecraft.
Newer games were already hitting diminishing returns on photo realism. Adding ray tracing takes them from 95% photo realistic to 96%.
I disagree - adding RT to games that weren’t designed for it often (but not always) wrecks the original art direction.
Quake II is a great example; I think the raytraced version looks like absolute ass. Sure, it has fancy shadows and reflections, but all that does is highlight how old the assets are.
Portal with ray tracing is a really cool demo, and I’ve used it in the past to show off ray tracing. But man, it’s just not as pretty as the old Portal because it lacks the charm; it’s like those nature photos that are blown out with HDR
I always loved the graphics of Portal 2 but never really saw the appeal of Portal 1’s. I think the “with RTX” version was more on the Portal 2 side, so I was fine with it.
Same with Minecraft. Minecraft looks like crap, and improving the lighting, shadows and so on just shows that off even more.
Minecraft is a game that’s deliberately not about the looks.
I disagree, I think a lot of raytraced shaders successfully make the game look better while still leaning into the stylized look. I also think it’s unfair to say the game looks bad originally. It doesn’t look realistic, but it has a consistent and compelling visual style.
Look at the Minecraft update trailers for example. They go in that direction even further, by simplifying all of the textures. Yet even with the perfect offline path tracing, it doesn’t look bad.
I’m a big fan of raytraced Minecraft, but I also generally use texture packs that benefit from ray tracing. I’ve found that rather than going for realism, a high-resolution cartoony texture pack is what makes RT shine in Minecraft. Because yes, the game looks bad on purpose; lean into that and make it cartoonier too.
It makes for great survival horror when you’re on the cutest, cartooniest texture pack you can find and you’re out after dark without a torch or lantern, and it’s just pitch black except for the light of the moon, barely illuminating silhouettes against the deep purple sky. You see the monsters approaching. You turn a corner and see it, the most adorable thing you’ve ever seen, with death in its eyes made visible only by your own reflected moonlight. It’s too dark to run. Good luck.
often (but not always) wrecks the original art direction.
Which is sometimes a nice benefit. Not to mention the “layer” in a specific color that suddenly goes away if you enable LevelsPlus in ReShade. The most extreme example I’ve seen was Elex 1.
Early 3D graphics rendering was all ray tracing, but when video games started doing textured surfaces the developers quickly realised they could just fake it with alpha as long as the light sources were static.
Unless you count wireframe graphics. Idk when triangle rasterization first started being used, but it’s conceptually more similar to wireframe graphics than to ray tracing. Also, I don’t really know what you mean by “fake it with alpha”.