**UPDATE 2**: I've posted a simple fragment shader model that should produce a reasonable thermal effect: http://mwomercs.com/...ost__p__2188498
**UPDATE 3**: I've posted a further mockup that uses an alternative to the pixelation scheme; it meets the same goals but should be more aesthetically appealing: http://mwomercs.com/...ost__p__2192493
I've had quite a few games with the new thermal now, and while I understand what you were hoping to achieve with the current shader modifications, the result is very unphysical and behaves non-intuitively. I find the new thermal quite a jarring mode to play with (particularly as a physicist with considerable experience working with thermal cameras)!
Rather than moan about it, I will offer you an alternative implementation that is 1) physically correct and 2) achieves your desired goal. I've written a lot of OpenGL shaders in the past, so I know what is practically achievable, and I hope you will find my suggestions helpful.
Looking through the current shader, I see you are currently just attenuating the intensity by object depth:
```hlsl
// In new thermal vision limit the max range of visible mechs to 700m
fDepth = clamp(fDepth, 0.0f, 700.0f);
fEdotN *= 1.0f - saturate((fDepth / 700.0f) * 0.9f);
fEdotN = max(VisionMtlParams.y, fEdotN);
```
So effectively you are simulating a crude linear "fog". The problem with this approach is that it is completely inconsistent with the other viewing modes. In particular, you can see mechs in normal vision that are invisible in thermal, something that is physically impossible (unless the air is filled with some exotic material that absorbs only long-wavelength IR... not likely!). This behaviour is confusing and unintuitive, and it has made thermal all but useless!
Here is an alternative approach, grounded in real physics: simulate a low-resolution thermal sensor.
By lowering the effective spatial resolution of the sensor, distant objects will average towards the background, by virtue of the fact that they occupy only a small fraction of the solid angle subtended by each of the thermal camera's pixels. This would be particularly pronounced on hot maps, where a mech would be much harder to see at long range: the background heat is of similar intensity, so the mech's signal averages away.
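In other words, each sensor pixel reports the average of everything within its footprint, roughly:

I_pixel = f * I_mech + (1 - f) * I_background

where f is the fraction of the pixel's footprint the mech covers (ignoring atmospheric absorption for simplicity). As f shrinks with distance, the mech's contribution sinks into the background.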
How would this be implemented? I can see two approaches:
1) Render the thermal scene to a low-resolution texture, e.g. 192x108 pixels, and stretch it across the view window. In practice this will need to be oversampled to prevent aliasing: say, render to a 768x432 buffer and resample down to 192x108.
2) Render the thermal scene at full resolution and downsample it to the required sensor resolution (a minimal sketch of the downsampling pass follows below).
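Either way, the downsampling pass is just a box filter. Here's a minimal sketch in D3D9-style HLSL (to match the snippet quoted above); the texture and resolution names are my own placeholders, not anything from your actual shader, and I've assumed the 768x432 -> 192x108 ratio from approach 1:

```hlsl
// Downsample pass: average each 4x4 block of oversampled thermal texels
// into one "sensor" pixel (768x432 -> 192x108). Run on a fullscreen quad
// rendered into the 192x108 target.
sampler2D g_ThermalTex;   // placeholder: oversampled thermal render target
float2    g_FullRes;      // placeholder: (768, 432)

float4 DownsamplePS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 sum = 0;
    // 4x4 box filter centred on this sensor pixel's footprint
    for (int y = 0; y < 4; ++y)
        for (int x = 0; x < 4; ++x)
            sum += tex2D(g_ThermalTex, uv + (float2(x, y) - 1.5f) / g_FullRes);
    return sum / 16.0f;
}
```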
How you display this downsampled image is a matter of preference: it could be scaled up using nearest-neighbour sampling or interpolated to give a smoother image. Something draws me to the pixelated version, as it will make interpretation of small heat sources a bit more challenging.
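If you go the pixelated route, the display pass only needs to snap its UVs to the sensor texel centres before sampling (again just a sketch with placeholder names; setting point filtering on the sampler state gets you the same result for free):

```hlsl
sampler2D g_SensorTex;   // placeholder: the 192x108 "sensor" image
float2    g_SensorRes;   // placeholder: (192, 108)

float4 DisplayPS(float2 uv : TEXCOORD0) : COLOR0
{
    // Snap to the centre of the underlying sensor texel for blocky output
    float2 snapped = (floor(uv * g_SensorRes) + 0.5f) / g_SensorRes;
    return tex2D(g_SensorTex, snapped);
}
```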
So what resolution should the "sensor" be? Well, that's up to you. The lower the resolution, the lower the effective range, by virtue of the spatial averaging!
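To give a feel for the numbers (assuming a 60 degree horizontal field of view, which I've picked purely for illustration): a 192-pixel-wide sensor gives about 0.31 degrees per pixel. A 15m-wide mech at 700m subtends roughly 1.2 degrees, i.e. about 4 pixels across; by around 3km it falls below a single pixel and simply averages into the background.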
You could also reduce the effectiveness of zooming in thermal mode by simply "digitally zooming" the image, rather than performing the "analogue" lens-based zoom simulation that is currently used.
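That would be trivial on top of the scheme above: just magnify a central crop of the already-downsampled sensor texture, so no new detail appears and the blocks simply get bigger. A sketch, with the zoom factor as an assumed parameter:

```hlsl
sampler2D g_SensorTex;   // placeholder: the downsampled "sensor" image
float     g_Zoom;        // assumed parameter, e.g. 2.0 for 2x digital zoom

float4 DigitalZoomPS(float2 uv : TEXCOORD0) : COLOR0
{
    // Remap screen UVs onto a central 1/g_Zoom-sized window of the sensor
    float2 cropped = (uv - 0.5f) / g_Zoom + 0.5f;
    return tex2D(g_SensorTex, cropped);
}
```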
The result: a fully physically accurate thermal sensor simulation that achieves the goal you intended, namely reducing the effective range of thermal vision.
It also offers some interesting upgrade module possibilities, for example:
1) Enhanced thermal camera: has double (or otherwise increased) pixel resolution for those who want to see further.
2) Thermal overlay module: overlays the thermal image on top of normal vision (blocky pixels for the thermal would make this very visually interesting).
I hope my post is useful/interesting, and I would welcome your feedback.
-Alex