Nightbird, on 12 September 2022 - 10:33 AM, said:
The internet is full of free information. Look up the role of CPU and GPU in video games.
The CPU processes what is in a scene, where all the players and objects are. The GPU turns that scene into pixels.
If a scene has 95% of pixels on your monitor from one camera pointing forwards and 5% of pixels from a rear camera, it still processes 100% of all pixels. However the CPU has to process 100% of all objects that belong to the front, and 100% of objects that belongs to the back.
first of all, a lot of that info is grossly out of date. you're describing the painter's algorithm, where you had to waste a lot of time sorting things back to front. then we figured out you could throw memory at the problem, and the z-buffer was born: you store the depth of the last pixel you rendered, and when you render another pixel in that spot, the hardware checks the depth and only replaces the pixel if the new one is closer. we have long since figured out you can throw memory at other problems too. in the old days of gpu rendering you only had enough memory for the frame buffer, so you had to do a lot of stuff cpu side and really micromanage things. but now you have several gigabytes of bloody fast vram.
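the z-buffer trick fits in a few lines. this is just a sketch of the idea, not a real api; `Framebuffer` and `put_pixel` are made-up names, and i'm using the convention that smaller depth means closer to the camera:

```cpp
#include <array>
#include <cstdint>
#include <limits>

constexpr int W = 4, H = 4;  // tiny framebuffer for illustration

struct Framebuffer {
    std::array<std::uint32_t, W * H> color{};
    std::array<float, W * H> depth;

    // every depth starts at "infinitely far away"
    Framebuffer() { depth.fill(std::numeric_limits<float>::infinity()); }

    // write the pixel only if it is closer than whatever is already there
    bool put_pixel(int x, int y, float z, std::uint32_t rgba) {
        int i = y * W + x;
        if (z >= depth[i]) return false;  // something closer was already drawn
        depth[i] = z;
        color[i] = rgba;
        return true;
    }
};
```

no sorting anywhere: you can throw geometry at it in any order and the depth test sorts it out per pixel, at the cost of one float per pixel of memory.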
speed comes from having everything ready to go at render time. every pixel, every polygon is sitting in some pre-defined object ready to go. the cpu doesn't have to deal with that stuff; it just tells the gpu the what and the where, without having to move any big data. better yet, you can cache that as an object, send it to the gpu once, and have it take care of things. if you are micromanaging primitives (lines, polygons, quads, etc.) like in the olden days, or doing a lot of redundant work, like running the whole scene setup twice, then it's going to be slow. but you don't have to do it that way.
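the "what and where" idea looks something like this. it's only a sketch of the pattern, not a real graphics api; `MeshStore`, `upload`, and `DrawCmd` are names i made up, with a plain vector standing in for vram:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct MeshStore {
    std::vector<std::vector<float>> meshes;  // stand-in for buffers in vram

    // pay the data-transfer cost once, at load time
    std::size_t upload(std::vector<float> vertices) {
        meshes.push_back(std::move(vertices));
        return meshes.size() - 1;            // handle, like an opengl vbo id
    }
};

// per frame the cpu only sends this: which mesh (the what), and a position
// (the where). no vertex data crosses the bus again.
struct DrawCmd {
    std::size_t mesh;
    float x, y, z;
};
```

the point is that after load time the cpu traffics in small handles and transforms, never in the big vertex arrays themselves.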
the models, the textures, and the worldspace transform for each instance are all there ready to go. you queue up the commands for putting the whole scene into world space, store them in a command list, and send it to the gpu. you set up the main viewport and run the list. then you switch to the rearview viewport, point the output at the render target instead of the frame buffer, and simply run the list a second time. if you instead go through the whole process of setting up the scene twice, rather than queueing up the redundant bits (so you only do them once), it's going to be a performance hit. that's just not the way it's done anymore.
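the record-once, replay-twice pattern can be sketched with plain std::function standing in for a real command list (names like `RenderTarget` and `record_scene` are illustrative, not any actual api):

```cpp
#include <functional>
#include <vector>

struct RenderTarget { int draws = 0; };  // stand-in for a framebuffer/texture

using CommandList = std::vector<std::function<void(RenderTarget&)>>;

// record the scene's draw commands ONCE
CommandList record_scene(int n_draw_calls) {
    CommandList list;
    for (int i = 0; i < n_draw_calls; ++i)
        list.push_back([](RenderTarget& rt) { rt.draws++; });
    return list;
}

// replaying is cheap: no scene setup, just run the recorded commands
void run(const CommandList& list, RenderTarget& rt) {
    for (auto& cmd : list) cmd(rt);
}
```

usage: record the scene once, `run` it against the screen, then `run` the same list against the rearview render target. the expensive setup happens once, the replay twice.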
Nightbird, on 12 September 2022 - 11:18 AM, said:
How do you know where that polygon face is? The CPU has to process it first. The GPU can choose to not render it if it not in view, but the location is not determined magically.
if your cpu has to touch every single polygon (other than packaging them up and shipping them to the gpu at load time), you are doing it wrong. when you want to move, rotate, or scale an object in the scene, you do it by setting a 4x4 matrix (at least that was the opengl way) and the graphics pipeline does the rest. that puts it in world space. run the data through another matrix to put it into screen space and apply perspective. you just have to say what (with a pointer to a vbo) and where (by specifying a matrix).
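that 4x4 matrix math is nothing exotic. here's the core of it, column-major like opengl, with a translation matrix as the example transform; the pipeline applies this same multiply to every vertex so the cpu never has to:

```cpp
#include <array>

using Mat4 = std::array<float, 16>;  // column-major, opengl convention
using Vec4 = std::array<float, 4>;   // homogeneous point (w = 1)

// matrix times point: this is what the pipeline does per vertex
Vec4 transform(const Mat4& m, const Vec4& v) {
    Vec4 out{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            out[row] += m[col * 4 + row] * v[col];
    return out;
}

// translation: identity with the offset sitting in the last column
Mat4 translation(float tx, float ty, float tz) {
    Mat4 m{};
    m[0] = m[5] = m[10] = m[15] = 1.0f;
    m[12] = tx; m[13] = ty; m[14] = tz;
    return m;
}
```

the cpu sets 16 floats per object per frame; the gpu grinds the multiply over millions of vertices. rotation, scale, and perspective are just different 16-float fillings of the same matrix.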
the graphics pipeline handles frustum culling and backface culling on the fly; you don't need the cpu to figure that out. if you for some reason need to tweak the polygon data, you are better off doing it in a vertex shader.
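backface culling in particular is dirt cheap, which is why the hardware just does it. one common formulation: after projection, check the triangle's 2d signed area, and with counter-clockwise front faces, a non-positive area means it faces away. a sketch of that test:

```cpp
struct Vec2 { float x, y; };

// twice-signed-area cross product of the two edge vectors
float signed_area(Vec2 a, Vec2 b, Vec2 c) {
    return 0.5f * ((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));
}

// counter-clockwise winding = front face; cull anything else
bool is_backface(Vec2 a, Vec2 b, Vec2 c) {
    return signed_area(a, b, c) <= 0.0f;
}
```

two multiplies and a compare per triangle, done in hardware, no cpu involvement.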
if we're talking physics, you generally use a different, much less detailed data set for that: primitives like spheres and boxes (aabbs and oriented boxes), convex hulls, etc. and i believe that in this game that is all handled server side. but rtt is not a physics problem, it's a rendering problem.
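to see why physics gets away with so much less detail: here is the whole aabb overlap test. no per-polygon work at all, just six compares per pair of boxes:

```cpp
struct AABB {
    float min[3];  // corner with the smallest x, y, z
    float max[3];  // corner with the largest x, y, z
};

// boxes overlap iff they overlap on every axis; one separated axis is enough
// to prove they don't touch
bool overlaps(const AABB& a, const AABB& b) {
    for (int axis = 0; axis < 3; ++axis)
        if (a.max[axis] < b.min[axis] || b.max[axis] < a.min[axis])
            return false;  // separated on this axis
    return true;
}
```

a mech model might be tens of thousands of polygons for rendering, but a handful of boxes like these for collision, which is why the two systems keep separate data sets.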