Hello,
I have written an OpenGL program whose shaders implement Order Independent Transparency (OIT) using a per-pixel linked list approach. That part works fine.
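For context, the storage involved looks roughly like this (a minimal sketch, not my actual code; `Node`, `maxNodes`, and the buffer names are placeholders of mine):

```cpp
#include <GL/glew.h>
#include <cstdint>

// One entry of the per-pixel linked list: color, depth, index of next node.
struct Node {
    std::uint32_t packedColor;
    float         depth;
    std::uint32_t next;
    std::uint32_t pad; // keep a std430-friendly 16-byte stride
};

void allocateOITBuffers(GLsizeiptr maxNodes)
{
    // Node pool shared by all pixels; this is the allocation that can get huge.
    GLuint nodeBuffer;
    glGenBuffers(1, &nodeBuffer);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, nodeBuffer);
    glBufferData(GL_SHADER_STORAGE_BUFFER, maxNodes * sizeof(Node),
                 nullptr, GL_DYNAMIC_DRAW);

    // Atomic counter that hands out node indices to the fragment shader.
    GLuint counter;
    glGenBuffers(1, &counter);
    glBindBuffer(GL_ATOMIC_COUNTER_BUFFER, counter);
    glBufferData(GL_ATOMIC_COUNTER_BUFFER, sizeof(GLuint),
                 nullptr, GL_DYNAMIC_DRAW);
    // (The real program keeps these IDs around; omitted here for brevity.)
}
```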
If there are too many fragments, the memory I allocated up front is not enough, so naturally I then try to allocate a larger chunk of memory from the video card. I can detect when that allocation fails, so that's not a problem either; I have proper workarounds in place.
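That detection amounts to something like the following sketch (it assumes the SSBO to be reallocated is already bound; GL_OUT_OF_MEMORY is the documented error for a failed glBufferData):

```cpp
// Attempt the big allocation and report whether the driver accepted it.
bool tryAllocate(GLsizeiptr bytes)
{
    while (glGetError() != GL_NO_ERROR) {} // clear any stale errors first
    glBufferData(GL_SHADER_STORAGE_BUFFER, bytes, nullptr, GL_DYNAMIC_DRAW);
    return glGetError() != GL_OUT_OF_MEMORY;
}
```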
The problem is that I am probably requesting so much video memory that I'm not leaving much margin for anything else, and I get a TDR (Windows' Timeout Detection and Recovery reset). This only happens when I'm really stressing the system (e.g., zooming into a region such that I end up with a huge number of fragments to process).
I thought the best approach might simply be to find out how much memory the graphics card has available and cap my allocations so that I always leave a safe margin and the TDR never happens.
The program runs on both Linux and Windows. Are there ways - preferably standard C/C++ calls - to query the video card's memory so that I can always leave sufficient headroom? I understand I won't be able to allocate all the memory I want, and that's OK; I can deal with that. What I can't deal with is a TDR!
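For what it's worth, the closest things I am aware of are vendor-specific extensions rather than anything standard. A minimal sketch using GLEW, assuming GL_NVX_gpu_memory_info (NVIDIA) and GL_ATI_meminfo (AMD) are the only sources available:

```cpp
#include <GL/glew.h>

// Returns an estimate of free video memory in KiB, or -1 if no known
// vendor extension is available. The values are advisory only.
GLint queryFreeVideoMemoryKiB()
{
    if (GLEW_NVX_gpu_memory_info) {
        GLint freeKiB = 0;
        glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX,
                      &freeKiB);
        return freeKiB;
    }
    if (GLEW_ATI_meminfo) {
        // Four values: total free, largest free block, and the same pair
        // for auxiliary memory; all in KiB.
        GLint info[4] = {0};
        glGetIntegerv(GL_VBO_FREE_MEMORY_ATI, info);
        return info[0];
    }
    return -1; // e.g., Intel or Mesa drivers may expose neither extension
}
```

But both of these are hints at best and neither is available everywhere, which is exactly why I'm asking whether there is something more portable.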
Thanks.