
OpenGL 4 reflections

Started by
110 comments, last by taby 1 year, 7 months ago

The one last thing that I absolutely need is reflections. Ever since I played Infinity Blade, I've been in love with reflections. Does anyone have any input, with regard to how I'd go about tackling the problem?


Currently, the state of the art is transitioning from reflection probes to ray tracing.

Reflection probes are usually offline-baked cube maps, either placed manually or within regular grids. PBS tutorials usually show how to apply a single probe; combining multiple is up to you.

Control is an example of a game that replaces this with tracing. They have a voxelized scene, plus SDF volumes to accelerate the tracing, afaik.
With DXR there is no more need for SDFs or voxels, using a BVH and actual triangles instead. But it adds the need for denoising glossy reflections.

Your latest screenshots show a lot of black, so you need diffuse GI too. Usually that's more important, as you can reduce the need for sharp reflections by using mainly diffuse materials.
Technically that's the same thing, so you should think about both in combination.
In practice, however, reflection probes need high signal resolution to support sharp reflections, at the cost of a lower spatial resolution of probes.
For diffuse GI probes it's the opposite: signal resolution can be low (e.g. SH2 or Valve's ambient cube), but spatial resolution should be high, so probes in a bright room do not leak into a nearby dark room, and to capture some of the nice gradients GI delivers. Lightmaps thus give better quality than probe grids, but are usually static and hard to use for large worlds.
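To illustrate why the ambient-cube representation is so cheap: it stores just six colors, one per axis direction, blended by the squared components of the surface normal. A minimal sketch (the probe colors below are made-up example values, not from any game):

```python
def ambient_cube(normal, cube):
    """Blend the six axis colors by the squared normal components.

    `cube` maps '+x','-x','+y','-y','+z','-z' to RGB tuples.
    `normal` is assumed normalized, so the weights sum to 1.
    """
    nx, ny, nz = normal
    wx, wy, wz = nx * nx, ny * ny, nz * nz
    cx = cube['+x'] if nx >= 0.0 else cube['-x']
    cy = cube['+y'] if ny >= 0.0 else cube['-y']
    cz = cube['+z'] if nz >= 0.0 else cube['-z']
    return tuple(wx * a + wy * b + wz * c for a, b, c in zip(cx, cy, cz))

# Example probe: bright sky above, dark ground below, gray sides.
probe = {
    '+x': (0.4, 0.4, 0.4), '-x': (0.4, 0.4, 0.4),
    '+y': (0.6, 0.7, 1.0), '-y': (0.1, 0.1, 0.1),
    '+z': (0.4, 0.4, 0.4), '-z': (0.4, 0.4, 0.4),
}
up = ambient_cube((0.0, 1.0, 0.0), probe)  # a surface facing up sees only the sky color
```

Six colors per probe is a very low signal resolution, which is exactly why you can afford a dense spatial grid of them.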

This really is not about which API you use, but about the question: do I work on a game, or on a renderer? : |

It's also about static vs. dynamic lighting. You could use MagicaVoxel's path tracer to bake GI into voxels eventually, but then you have no option to support a dynamic time of day, opening / closing doors becomes a problem, etc.

The point is to make it toon-like. I don't need GI, although that would be mightily awesome to implement.

Thanks for all of your input!

taby said:
The point is to make it toon-like. I don't need GI, although that would be mightily awesome to implement.

Cartoons have no reflections either, so it's questionable whether you really need them.
But in any case you need some form of ambient lighting. The simplest way is to use a constant ambient irradiance term, which many early 3D games did.
Or you use just one global ambient probe. Same limitations, just as easy, but you get some directional variation everywhere, even if no direct light hits the spot.
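The constant-ambient approach could be as simple as this sketch (the function name and the 0.2 default are illustrative, not taken from any particular game):

```python
def shade(albedo, n_dot_l, light_color, ambient=(0.2, 0.2, 0.2)):
    """Classic fixed-function-era shading: albedo * (ambient + diffuse).

    `n_dot_l` is the cosine between surface normal and light direction;
    the constant `ambient` term keeps unlit surfaces from going black.
    """
    d = max(n_dot_l, 0.0)  # clamp: surfaces facing away get no direct light
    return tuple(a * (amb + d * lc)
                 for a, amb, lc in zip(albedo, ambient, light_color))
```

A surface facing completely away from the light still returns `albedo * ambient`, which is the whole point, and also the source of the flat, washed-out look this approach is known for.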

For an outdoor game that's acceptable, especially if the art style is not realistic.
For some indoor environments it might be enough to have additional SSAO.

The same can work for reflections. Old games had a skybox, so you would use just that for everything. Again acceptable for outdoor, but bad for indoor where a ceiling should block reflections of the sky.
SSR can again help to fix some errors and add detail, but it's even less reliable than SSAO and its artifacts are worse. So you need a reflection probe to serve as a fallback in any case.
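Sampling a skybox or cube map for reflections needs the reflection direction, which is just the view vector mirrored about the surface normal. A sketch of that standard formula (this is what GLSL's built-in `reflect()` computes):

```python
def reflect(incident, normal):
    """Mirror `incident` about `normal`: r = i - 2 * dot(i, n) * n.

    `normal` is assumed unit length; the result is the direction you
    would use to sample a skybox / environment cube map.
    """
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))
```

For example, a view ray pointing straight down onto an upward-facing floor reflects straight back up, so the floor would show the sky directly overhead.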

Once you have this, extending it to a probe grid is trivial for the renderer, but it's some work to build a system to bake your probes. And having a cartoon / voxel art style does not really avoid the need for that, if you want nice lighting that respects the environment.

Thus, assuming your primary goal is the game, not the rendering, maybe it's best to tone down expectations of UE4-level reflections. (Or switch to using such an engine.)
Quake is a good example of something practical but low effort, and Quake 3 had ‘cool’ reflection effects using some cube maps, totally ignoring correctness. Like many games of that time, reflections were ‘per material’ but ignored the environment.
Minecraft is a good example for a non realistic but dynamic lighting approach.


There is also the screen-space reflections technique, which has the advantage of not requiring any probes or preprocessing, but it can only reflect things that are in the view. It can work well if you don't care about missing reflections sometimes.

@Aressera I would just love to have an implementation, to at least play with it so that I can learn how they did it. The so-called SSR algorithm would be great!!! Any method would be great. It's like literally the last thing I want for this graphics app… except maybe for bloom and bokeh effects. :D

taby said:
The so-called SSR algorithm would be great!!!

Like SSAO, that's not just one algorithm, but dozens. Likely everybody rolls their own. You can only search through the many papers / tutorials and see if something attracts you more than the others.

To do better, it helps to learn path tracing and physically based shading. Then you don't need references and can just implement an approximation picking compromises which make sense.

But likely you'll get results much faster if you search for existing examples and go from there, ofc.

Well, I would be super happy with a tiny implementation of SSR. If I can't find one, I'll look for a coder who can.

There seems to be a way:

  • Render to texture: Write screen-sized specularity map.
  • Render scene upside down.
  • Use specularity map to determine reflection strength.

This method should work, I think.
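Worth noting: the three steps above describe classic planar (mirrored-scene) reflections rather than SSR proper, and they can work well for flat floors or water. The final composite would be a per-pixel blend of the mirrored render into the main image, sketched here with flat lists of grayscale values standing in for the screen-sized textures:

```python
def composite(base, reflected, specularity):
    """Blend the mirrored-scene render into the base image per pixel.

    `specularity` is the screen-sized reflection-strength map from the
    first render pass: 0 = no reflection, 1 = perfect mirror.
    """
    return [b * (1.0 - s) + r * s
            for b, r, s in zip(base, reflected, specularity)]
```

In a real renderer this blend happens in the fragment shader, and the mirrored pass needs a clip plane at the reflector so geometry below it doesn't leak into the reflection.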

taby said:
There seems to be a way: Render to texture: Write screen-sized specularity map. Render scene upside down. Use specularity map to determine reflection strength. This method should work, I think.

Yes. In other words: for SSR, but also for SSAO, SSGI, and similar effects, it is good to have the G-buffer used for deferred rendering, because it makes material information available. E.g. you may want no AO on emissive surfaces, or may want to tone it down on foliage; for SSR you may want to know roughness to get a cone angle, etc.
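As a rough sketch of what such screen-space passes read, a G-buffer is just a set of screen-sized per-pixel attribute buffers. The field names here are illustrative, not a fixed layout from the thread:

```python
from dataclasses import dataclass, field

@dataclass
class GBuffer:
    """Minimal stand-in for a deferred-rendering G-buffer.

    Each list holds one value per pixel; a real G-buffer packs these
    into a handful of render-target textures.
    """
    width: int
    height: int
    depth: list = field(default_factory=list)      # view-space depth, for ray marching
    normal: list = field(default_factory=list)     # for reflection directions
    albedo: list = field(default_factory=list)
    roughness: list = field(default_factory=list)  # -> SSR cone angle
    emissive: list = field(default_factory=list)   # -> e.g. skip AO here
```

The key point is that SSR, SSAO, and SSGI all consume the same buffers, so the cost of laying them down is amortized across every screen-space effect.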

But if you go deferred, you lose MSAA, and with the resulting need for TAA you open up the next time-consuming rabbit hole.

Btw, I may have been the first to use SSR. I used it in a software rasterization engine made for early cell phones, to get reflections on a water surface. That was years before I saw it in the first games. : )
But then the iPhone came out. Phones with GPUs! I really did not see that coming. So my work was obsolete. : (

I did not use any ray tracing, just a single lookup, basically an educated guess.
But proper, modern SSR should not be that hard:

  • Begin with sharp, perfect mirror reflections.
  • Construct a ray and follow it in screen space.
  • If it hits some surface, you're lucky.
  • If it becomes occluded by some surface, or goes offscreen, you're fucked, and ideally you fall back to a reflection probe lookup or even real ray tracing.
  • To accelerate the tracing, a min-depth pyramid is useful, giving a similar advantage to a signed distance field for sphere tracing.
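A heavily simplified sketch of that marching loop, reduced to 1D: step a ray through a depth buffer and report the first cell it passes behind. Real SSR marches a 2D depth texture in clip space and would use the min-depth pyramid to take larger steps over empty regions; all names here are illustrative.

```python
def march(depth_buffer, start_x, start_depth, dx, dz, max_steps=64):
    """March a ray through a 1D depth buffer.

    Returns the index of the hit cell, or None if the ray leaves the
    screen or runs out of steps (the cases needing a probe fallback).
    """
    x, z = float(start_x), start_depth
    for _ in range(max_steps):
        x += dx
        z += dz
        ix = int(x)
        if ix < 0 or ix >= len(depth_buffer):
            return None          # ray went offscreen: fall back to a probe
        if z >= depth_buffer[ix]:
            return ix            # ray passed behind the stored surface: hit
    return None

# A flat "floor" at depth 5.0 with a bump sticking up at cell 6:
floor = [5.0] * 6 + [2.0, 5.0]
hit = march(floor, start_x=0, start_depth=0.0, dx=1.0, dz=0.4)
```

The fixed step size is the naive version; the divergence problems mentioned below come from different pixels needing very different numbers of these steps.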

Next step would be glossy reflections; here we have two options:
1. Calculate a cone angle from roughness and pick a random ray in the cone. To accumulate multiple samples, either trace multiple rays per frame, or better, use the temporal accumulation you already have from TAA.
2. Use mipmaps of the frame buffer to approximate cone tracing.
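Option 2 could be sketched like this: derive a cone half-angle from roughness, then convert the cone's footprint at the hit point into a mip level of the pre-blurred frame buffer. The roughness-to-angle mapping below is a crude illustrative approximation, not a formula from the thread:

```python
import math

def specular_cone_half_angle(roughness):
    # Crude mapping: mirror (0.0) -> 0 rad, fully rough (1.0) -> ~45 deg.
    return roughness * math.pi / 4.0

def mip_for_cone(half_angle, hit_distance, texel_size, num_mips):
    """Mip level whose texel footprint matches the cone radius at the hit.

    Each mip doubles the texel footprint, hence the log2; clamped to the
    mip chain, with a floor of one texel for perfect mirrors.
    """
    radius = math.tan(half_angle) * hit_distance
    texels = max(radius / texel_size, 1.0)
    return min(math.log2(texels), num_mips - 1)
```

A perfect mirror always samples mip 0, while a rough surface hitting something far away quickly saturates at the coarsest mip, which is why this approximation blurs plausibly but can't match true stochastic sampling.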

Even though it's a screenspace effect, it has similar performance problems to real ray tracing:
Rays of nearby pixels go into different directions, causing divergent memory access.
Rays of nearby pixels take different numbers of steps until they hit something, causing divergent execution.

If your GPU has RT support, I would try this first. No artifacts and future proof, some people say.
