
Low-cost irradiance indirect lighting?

Started by Josh Klint
10 comments, last by MJP 1 year, 8 months ago

I'm looking for a solution to handle indirect lighting of dynamic objects on low-end hardware (integrated graphics and GPUs whose names end in __50). I've done hand-placed environment probe volumes in the past with pretty good results. Each is basically a box volume of arbitrary dimensions; any object inside it uses a cubemap rendered from the center of that box. The idea is to put one volume in each room, with the edges touching the inside walls, and it gives an okay approximation of diffuse and specular reflection.

I think id's current engine does something similar to this, mixing probe volumes with screen-space reflections. I read a little bit about it but they like to speak in flowery terms to obfuscate what they are actually doing.

All these papers about irradiance volumes just seem to be about rendering a 3D grid of cubemaps, and they don't seem practical outside of the authors' carefully chosen setups. You would need a huge amount of memory to extend very far: a 512×512×512 grid of probes, each storing six colors at 4 bytes apiece, works out to roughly 3 GB of storage. I can imagine all kinds of problems when a wall lies too close to a probe.

If you used a smaller number of probes and made them follow the camera around, you would still take a bad performance hit whenever the camera shifts by one cell and a 128×128 slice of cells has to be updated.

I feel like all these papers are very dishonest about the usefulness of such an approach, unless I am missing something. For low-end hardware, is there something better than the hand-placed volumes I have described? I want to make sure there's no other good solution before I proceed.



Josh Klint said:
is there something better than the hand-placed volumes I have described?

Hard to say what's better or worse in general, but surely everybody agrees it sucks we have to place them manually.

So maybe you'd want to automate this?
I think a good start would be to calculate an SDF of the scene and search for local maxima of the distance, then merge maxima which are close together and can see each other.
I could imagine very little manual tweaking and placement would be necessary after that.
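In code, that placement pass could look roughly like this (a minimal C++ sketch, assuming the SDF is already baked into a dense grid; the DistanceField layout, the thresholds, and the crude sphere-traced visibility test are all illustrative assumptions, not a production implementation):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Illustrative dense distance field: one "distance to nearest surface" value per cell.
struct DistanceField
{
    int nx, ny, nz;        // grid resolution
    float cellSize;        // world-space size of one cell
    std::vector<float> d;  // nx*ny*nz distances

    float At(int x, int y, int z) const { return d[(z * ny + y) * nx + x]; }

    float Sample(const Vec3& p) const   // nearest-cell sample at a world position
    {
        int x = std::clamp(int(p.x / cellSize), 0, nx - 1);
        int y = std::clamp(int(p.y / cellSize), 0, ny - 1);
        int z = std::clamp(int(p.z / cellSize), 0, nz - 1);
        return At(x, y, z);
    }
};

// Step 1: candidate probe positions = cells whose distance is a local maximum
// over the 26 neighbors and at least minDistance away from any wall.
std::vector<Vec3> FindLocalMaxima(const DistanceField& sdf, float minDistance)
{
    std::vector<Vec3> maxima;
    for (int z = 1; z < sdf.nz - 1; ++z)
    for (int y = 1; y < sdf.ny - 1; ++y)
    for (int x = 1; x < sdf.nx - 1; ++x)
    {
        float center = sdf.At(x, y, z);
        if (center < minDistance)
            continue;

        bool isMax = true;
        for (int k = -1; k <= 1 && isMax; ++k)
        for (int j = -1; j <= 1 && isMax; ++j)
        for (int i = -1; i <= 1 && isMax; ++i)
            if ((i | j | k) != 0 && sdf.At(x + i, y + j, z + k) > center)
                isMax = false;

        if (isMax)
            maxima.push_back({ (x + 0.5f) * sdf.cellSize,
                               (y + 0.5f) * sdf.cellSize,
                               (z + 0.5f) * sdf.cellSize });
    }
    return maxima;
}

// Step 2 helper: crude sphere-trace through the SDF to test mutual visibility.
bool CanSee(const DistanceField& sdf, const Vec3& a, const Vec3& b)
{
    Vec3 dir = { b.x - a.x, b.y - a.y, b.z - a.z };
    float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    if (len < 1e-4f)
        return true;
    dir = { dir.x / len, dir.y / len, dir.z / len };

    float t = 0.0f;
    while (t < len)
    {
        Vec3 p = { a.x + dir.x * t, a.y + dir.y * t, a.z + dir.z * t };
        float dist = sdf.Sample(p);
        if (dist < 0.5f * sdf.cellSize)   // hit a wall before reaching b
            return false;
        t += dist;
    }
    return true;
}

// Step 2: greedily merge candidates that are close together and can see each other.
std::vector<Vec3> MergeCandidates(const DistanceField& sdf,
                                  const std::vector<Vec3>& candidates,
                                  float mergeRadius)
{
    std::vector<Vec3> probes;
    std::vector<bool> used(candidates.size(), false);
    for (size_t i = 0; i < candidates.size(); ++i)
    {
        if (used[i]) continue;
        Vec3 sum = candidates[i];
        int count = 1;
        used[i] = true;
        for (size_t j = i + 1; j < candidates.size(); ++j)
        {
            if (used[j]) continue;
            Vec3 delta = { candidates[j].x - candidates[i].x,
                           candidates[j].y - candidates[i].y,
                           candidates[j].z - candidates[i].z };
            float dist = std::sqrt(delta.x * delta.x + delta.y * delta.y + delta.z * delta.z);
            if (dist < mergeRadius && CanSee(sdf, candidates[i], candidates[j]))
            {
                sum.x += candidates[j].x; sum.y += candidates[j].y; sum.z += candidates[j].z;
                ++count;
                used[j] = true;
            }
        }
        probes.push_back({ sum.x / count, sum.y / count, sum.z / count });
    }
    return probes;
}
```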

Josh Klint said:
I've done hand-placed environment probe volumes in the past with pretty good results. Each is basically a box volume of arbitrary dimensions; any object inside it uses a cubemap rendered from the center of that box. The idea is to put one volume in each room

I always wanted to know: when rendering, how do you know which probe affects a pixel? I mean, if there are hundreds of probes in the scene, you don't iterate over all of them per pixel, do you?
Another way would be to render the bounding-box faces of the probes and accumulate their contributions into the framebuffer. I guess that's how it's done? But that's still fill-rate heavy and brute force.

JoeJ said:
I always wanted to know: when rendering, how do you know which probe affects a pixel? I mean, if there are hundreds of probes in the scene, you don't iterate over all of them per pixel, do you?

I did it by defining a box area. The probe is only used if the pixel is inside the box. You can also shift the cubemap coord to make the reflection match the camera position with this approach. However, it requires precise alignment of the volume to the edges of the room.
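The "shift the cubemap coord" step described here is usually a box-projection (parallax-corrected cubemap) lookup. Below is a minimal sketch of that math, written as CPU-side C++ for clarity even though it normally lives in the pixel shader; the Vec3 helpers and probe layout are illustrative and may not match the exact implementation being described:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 Add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 Mul(const Vec3& a, float s)       { return { a.x * s, a.y * s, a.z * s }; }

// Intersect a ray (from the shaded point along the reflection vector) with the
// probe's box, then build the cubemap lookup direction from the probe position to
// the hit point. Assumes the shaded point is inside the box (the same test that
// decides whether the probe is used at all).
Vec3 BoxProjectedLookup(const Vec3& worldPos,     // shaded pixel position
                        const Vec3& reflectDir,   // normalized reflection vector
                        const Vec3& boxMin, const Vec3& boxMax,
                        const Vec3& probePos)     // where the cubemap was rendered from
{
    const float eps = 1e-6f;
    auto safeDiv = [&](float num, float den)
    {
        return num / (std::fabs(den) > eps ? den : (den < 0.0f ? -eps : eps));
    };

    // Distance along the ray to the "far" box plane on each axis.
    float tx = std::max(safeDiv(boxMin.x - worldPos.x, reflectDir.x),
                        safeDiv(boxMax.x - worldPos.x, reflectDir.x));
    float ty = std::max(safeDiv(boxMin.y - worldPos.y, reflectDir.y),
                        safeDiv(boxMax.y - worldPos.y, reflectDir.y));
    float tz = std::max(safeDiv(boxMin.z - worldPos.z, reflectDir.z),
                        safeDiv(boxMax.z - worldPos.z, reflectDir.z));
    float t = std::min(tx, std::min(ty, tz));     // nearest exit from the box

    Vec3 hit = Add(worldPos, Mul(reflectDir, t)); // where the reflection ray leaves the box
    return Sub(hit, probePos);                    // direction to sample the cubemap with
}
```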


Which papers are you trying to follow here? I'm pretty sure none of them are being dishonest with you, but any technique has its limitations.

In general no games or engines are going to be working with a dense uniform 3D grid of irradiance probes at the sizes you're suggesting (512^3). They will generally have some sort of system that adds a degree of sparseness so that you have more probes where you need them, or they will have a very low density. Having multiple 3D grids that are hand-placed throughout playable areas is not uncommon, and neither are automatic solutions that attempt to analyze the scene geometry and place more probes in areas with higher scene complexity. It's also quite common to use something like L1 spherical harmonics for irradiance with 4 RGB coefficients, which if you can compress offline or at runtime to BC6H works out to 4 bytes per probe.
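To make the storage math concrete: L1 SH is 4 coefficients per color channel (one for band 0, three for band 1), and BC6H stores RGB half-float data at 8 bits per texel, so 4 texels per probe comes out to 4 bytes. Below is a hedged sketch of evaluating irradiance from such a probe; the struct layout and the m = -1, 0, +1 coefficient ordering are assumptions, and the constants are the standard cosine-convolution weights:

```cpp
struct RGB { float r, g, b; };

// One probe: 4 RGB SH coefficients (band 0, then band 1 in m = -1, 0, +1 order,
// i.e. the y, z, x linear terms). Layout is an assumption for illustration.
struct ProbeSH { RGB c[4]; };

// Evaluate irradiance in the direction of a unit normal (nx, ny, nz).
// Constants fold the SH basis functions together with the cosine-lobe
// convolution weights A0 = pi and A1 = 2*pi/3.
RGB EvaluateIrradianceL1(const ProbeSH& p, float nx, float ny, float nz)
{
    const float k0 = 0.886227f;  // pi       * 0.282095 (Y_0^0)
    const float k1 = 1.023328f;  // (2*pi/3) * 0.488603 (Y_1^m)

    RGB e;
    e.r = k0 * p.c[0].r + k1 * (ny * p.c[1].r + nz * p.c[2].r + nx * p.c[3].r);
    e.g = k0 * p.c[0].g + k1 * (ny * p.c[1].g + nz * p.c[2].g + nx * p.c[3].g);
    e.b = k0 * p.c[0].b + k1 * (ny * p.c[1].b + nz * p.c[2].b + nx * p.c[3].b);
    return e;   // multiply by albedo / pi for a Lambertian diffuse term
}
```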

This article is good:
https://www.adriancourreges.com/blog/2016/09/09/doom-2016-graphics-study/


Josh Klint said:
I did it by defining a box area. The probe is only used if the pixel is inside the box. You can also shift the cubemap coord to make the reflection match the camera position with this approach. However, it requires precise alignment of the volume to the edges of the room.

Yes, but that's not the problem I asked about.
Imagine we have an interior skyscraper scene with one probe per room, causing a big number of probes. How do we know which probes affect a pixel without iterating over all of them to make some weighted sum?
If we could use a huge dense regular volume grid, we could just map the pixel coords to the grid to get the probe(s), but we can't.
For static geometry, we could map the surface directly to the probes, e.g. using a texture map with a probe index per texel (or just lightmaps). I tried this, but global parametrization sucks, and gathering expertise in the field did not change this.

So we are left with using some form of spatial acceleration structure. But even if we build it offline, traversing it per pixel just to find data we may have already calculated offline as well is a disappointing cost.

Rasterizing all lights in the frustum to the screen like in the early days of deferred isn't great either, because it does not scale well with an increasing number of probes.

So we arrived at solutions like tiled or clustered shading, which basically build an acceleration structure at runtime, per frame, e.g. by binning lights into a grid of screen-space tiles. (Doom seemingly falls into this category as well.)
That's not too bad so far, maybe, but it's in screen space, so it does not help us shade hit points from ray tracing.

Thus we need something like a BVH for the lights. And even if I have that, I remain frustrated about the high traversal cost just to apply the already-calculated GI to the scene. (I might have something like a million surface probes around, so a lot.)
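For reference, the screen-space binning step itself is simple to sketch. The toy C++ version below bins per-probe screen rectangles into tiles so that shading a pixel only touches the probes listed for its tile; how the rectangles are produced (projecting a box or a bounding sphere) is left out, and all names are illustrative:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Screen-space bounding rectangle of one probe, in pixels.
struct ScreenRect { int minX, minY, maxX, maxY; };

// Per-frame tile grid: each tile keeps the indices of the probes overlapping it.
struct TileGrid
{
    int tileSize;
    int tilesX, tilesY;
    std::vector<std::vector<uint32_t>> probeIndices; // tilesX * tilesY lists

    TileGrid(int width, int height, int tileSizePx)
        : tileSize(tileSizePx),
          tilesX((width  + tileSizePx - 1) / tileSizePx),
          tilesY((height + tileSizePx - 1) / tileSizePx),
          probeIndices(tilesX * tilesY) {}

    // Rebuild the per-tile probe lists each frame.
    void Bin(const std::vector<ScreenRect>& probeRects)
    {
        for (auto& list : probeIndices)
            list.clear();
        for (uint32_t i = 0; i < probeRects.size(); ++i)
        {
            const ScreenRect& r = probeRects[i];
            int tx0 = std::max(r.minX / tileSize, 0);
            int ty0 = std::max(r.minY / tileSize, 0);
            int tx1 = std::min(r.maxX / tileSize, tilesX - 1);
            int ty1 = std::min(r.maxY / tileSize, tilesY - 1);
            for (int ty = ty0; ty <= ty1; ++ty)
                for (int tx = tx0; tx <= tx1; ++tx)
                    probeIndices[ty * tilesX + tx].push_back(i);
        }
    }

    // Shading a pixel then only iterates the probes binned to its tile.
    const std::vector<uint32_t>& ProbesForPixel(int px, int py) const
    {
        return probeIndices[(py / tileSize) * tilesX + (px / tileSize)];
    }
};
```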

Just saw some Unity presentation of automated / manually placed volume bricks of SH probes: http://advances.realtimerendering.com/s2022/index.html

JoeJ said:
Just saw some Unity presentation of automated / manually placed volume bricks of SH probes: http://advances.realtimerendering.com/s2022/index.html

Thank you, anything with Unity's name on it is a good recommendation of what not to do.

Each probe should have a defined volume; then it all works out perfectly.


Josh Klint said:
Thank you, anything with Unity's name on it is a good recommendation of what not to do.

Hmmm… why do people here recently keep forcing me to defend Unity, although I neither know nor care about it? But well, this time I'll let it slip.

I'm curious about the volume shapes you intend to support. Multiple primitives like boxes, spheres, etc.?

I'm just doing boxes right now. Here are my results: I added some screen-space reflections and mixed them together. They blend pretty nicely, maybe not smoothly enough for a game level made out of flat mirrors, but if you add normal maps to break up the surface this would definitely work well.


