
Has Pgi Solved Or Found A Workaround For Those "engine Limitations" Yet?


68 replies to this topic

#61 Adridos

    Member

  • PipPipPipPipPipPipPipPipPipPipPip
  • Bridesmaid
  • 10,635 posts
  • Location: Hiding in a cake, left in green city called New A... something.

Posted 16 August 2013 - 11:12 PM

Purlana, on 07 July 2013 - 08:05 AM, said:

I always wondered why the zoom module didn't function like the regular zoom...? (But more powerful / higher magnification) Maybe I am missing something?


People were highly vocal about them having to do it the same way as in MW4.

They were told how it is, and they complained about PGI even further. Since PGI listened more at the time than nowadays, they bit the bullet and gave them what they wanted...

#62 Farpenoodle

    Member

  • PipPipPipPipPipPip
  • Moderate Giver
  • 240 posts

Posted 16 August 2013 - 11:40 PM

Copying from the frame buffer and pasting it on a scope != PIP :|

#63 New Day

    Member

  • PipPipPipPipPipPipPipPip
  • Veteran Founder
  • 1,394 posts
  • Location: Eye of Terror

Posted 16 August 2013 - 11:46 PM

Farix, on 06 July 2013 - 12:40 PM, said:

I know that a lot of players try to hold PGI as the party responsible for why DNF was so bad. However, PGI's role in DNF was rather minor. They were given a bad task (tacking on multiplayer because all games have multiplayer nowadays) and did the best they could.

BUT was it balanced?

#64 ollo

    Member

  • PipPipPipPipPipPipPipPip
  • The Merciless
  • 1,035 posts

Posted 16 August 2013 - 11:55 PM

Saxie, on 07 July 2013 - 02:34 AM, said:



Here's my question: why must it be PIP?


It doesn't have to be, and honestly I'd prefer a fullscreen mode. Just add another step to the FOV cycle, done. :lol:

#65 Vassago Rain

    Member

  • PipPipPipPipPipPipPipPipPipPipPip
  • Bridesmaid
  • 14,396 posts
  • Location: Exodus fleet, HMS Kong Circumflex accent

Posted 17 August 2013 - 12:47 AM

I knew someone would say 'it's not the real deal, because X,' when the real deal looks the exact same as what other games have.

So if the real deal is physically impossible, or impractical, just copy what's already out there. Or would you prefer the 8 bit zoom?

#66 MoonUnitBeta

    Member

  • PipPipPipPipPipPipPipPipPip
  • Philanthropist
  • 4,560 posts
  • Location: Canada ᕙ(⇀‸↼‶)ᕗ

Posted 17 August 2013 - 02:16 AM

TheUncle, on 07 July 2013 - 04:30 AM, said:

May I say a little bit about the topic?

Posted Image

This image uses the exact same technique Piranha is using - they are "reusing" the image they have already rendered as a texture and applying it to the scope. So this is NOT rendering the image twice. If AA were not enabled, you could make out the individual pixels in the scope, as they are simply upscaled. I am 100% certain of that.
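The "reuse the rendered frame" trick above can be sketched in a few lines. This is a toy model, not CryEngine code: the framebuffer is a plain 2-D array of pixel ids, and the scope is a nearest-neighbour upscale of a cropped region - which is exactly why, without AA, the duplicated pixels would be visible.

```python
# Toy sketch (not engine code): the scope texture is just a cropped,
# nearest-neighbour-upscaled copy of the already-rendered frame.

def crop(frame, x, y, w, h):
    """Cut the region of the framebuffer that the scope magnifies."""
    return [row[x:x + w] for row in frame[y:y + h]]

def upscale_nearest(region, factor):
    """Nearest-neighbour upscale: every source pixel becomes a factor x factor block."""
    out = []
    for row in region:
        wide = [px for px in row for _ in range(factor)]
        out.extend([list(wide) for _ in range(factor)])
    return out

# A 4x4 "framebuffer" of pixel ids; no geometry is rendered a second time.
frame = [[10 * r + c for c in range(4)] for r in range(4)]
scope = upscale_nearest(crop(frame, 1, 1, 2, 2), 2)
# The 2x2 centre region becomes a 4x4 scope image of duplicated pixels.
```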

-----------------------------------------------------------------------------

Now onto "Rendering in a second Viewport". I will try to make some generalizations so that non-geeks might understand it.

Why can old engines like Unreal 2 or Source do this without real hassle, while the ultra-modern CryEngine 3 can't?
Well actually it can, see here: http://www.crydev.ne...p?f=314&t=76779

But it is not very feasible. And the main reason is performance.

Basically, CryEngine 3 is a "deferred rendering" engine, while CryEngine 2, Source, Unreal etc. are "forward rendering" engines (as are the great majority of Unreal Engine 3 titles).

A forward rendering engine basically operates like this: You have a light and an object. If the object is in the light's radius, you compute the texture changes/lighting on the object for each polygon (triangle) of the object and apply the change.
Now this happens per-frame for dynamic lights, but engines like Source or Unreal have most of this precomputed, so that the lighting is "baked" into the object's texture, making it permanent.

Regardless of dynamic or precomputed lighting: realizing a second viewport, roughly in the same area as your first one, will work without major problems. Because the lighting is computed for the whole object, it does not matter whether you stand on the exact opposite side from the main viewport - everything will look great.
And the cost is manageable because the diffuse lighting and shadows have been computed for all objects already.
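The forward-rendering idea above can be shown as a toy loop (pseudo-engine, invented numbers - not real Source/Unreal API): lighting is summed per object from every light whose radius reaches it, so the result belongs to the object, not to any particular viewport, and a second camera can reuse it.

```python
# Toy 1-D forward shader: per-object lighting, viewport-independent.

def in_radius(obj, light):
    return abs(obj["pos"] - light["pos"]) <= light["radius"]

def forward_shade(objects, lights):
    """Return per-object brightness: sum of contributions from lights in range."""
    shaded = {}
    for obj in objects:
        total = 0.0
        for light in lights:
            if in_radius(obj, light):
                # Toy linear falloff (real engines do this per triangle).
                dist = abs(obj["pos"] - light["pos"])
                total += light["intensity"] * (1 - dist / light["radius"])
        shaded[obj["name"]] = total
    return shaded

objects = [{"name": "mech", "pos": 0.0}, {"name": "rock", "pos": 5.0}]
lights = [{"pos": 1.0, "radius": 3.0, "intensity": 1.0}]
lit = forward_shade(objects, lights)
# "mech" is lit (distance 1 of radius 3); "rock" is out of range, so 0.
# A second viewport can read `lit` directly - nothing here depends on a camera.
```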

Now onto deferred rendering. The term is broad and every engine uses different parts of this overall method, but what you need to know is this.
Before rendering a frame, a so-called "G-buffer" has to be set up, which is usually quite expensive to compute. This is basically an image which stores information from the current view (like depth) that is needed for correct lighting computation. With the G-buffer and the positions of the lights, we can calculate the resulting color values for each pixel on the screen.

So that means the lighting we calculated is only valid for our viewport, because we calculated it for each pixel instead of each object.
Now if we were to create a new viewport, we could not recycle any of the computed lighting, because it was tailored specifically to the pixels of the main viewport. A new G-buffer is needed and all lighting has to be recalculated. If the second viewport were the same size as the first one, we would cut our FPS in half - unless we ran out of video memory, which would worsen the whole thing immensely. And even if the second viewport were much smaller, setting up the G-buffer is still quite expensive.
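A matching toy sketch of the deferred path (again a pseudo-engine with a 1-D "screen", no real engine API): pass 1 fills a G-buffer with per-pixel surface data, pass 2 lights those pixels. Everything is tied to one viewport's pixels, which is why a zoom window can't reuse any of it.

```python
# Toy deferred shader: per-pixel G-buffer, then per-pixel lighting.

def build_gbuffer(viewport_pixels, scene):
    """Per pixel, store which surface is visible and how far away it is."""
    gbuffer = []
    for px in viewport_pixels:
        surf = scene[px]  # toy "rasterization": look up the visible surface
        gbuffer.append({"depth": surf["depth"], "albedo": surf["albedo"]})
    return gbuffer

def shade(gbuffer, light_intensity):
    """Final colour per pixel - valid only for the viewport that built this G-buffer."""
    return [g["albedo"] * light_intensity / (1 + g["depth"]) for g in gbuffer]

scene = {0: {"depth": 1.0, "albedo": 0.5}, 1: {"depth": 3.0, "albedo": 0.8}}
main_view = [0, 1]
colors = shade(build_gbuffer(main_view, scene), light_intensity=2.0)
# A second viewport showing pixel 1 alone cannot recycle `colors`;
# it must run build_gbuffer + shade again for its own pixels.
```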


I hope it was understandable so far. And those who know more of the specifics, please don't kill me with 'OMG deferred shading and lighting are not the same'.

So the point I wanted to make so far:
Implementing a second viewport would probably be possible, but the FPS drop (without heavy modifications) would be way too significant to justify it.

And please - even if your computer runs like a Ferrari, you would not be pleased to suddenly have 40 frames instead of 70 while trying to aim super-accurately.

What would be much more feasible, for instance, would be to render the zoomed area at double resolution, so that when upscaled the pixels match the screen resolution. Technically this should be possible, but I imagine it would be quite a bit of a hassle for the rendering engineers.
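Back-of-envelope arithmetic (my numbers, not PGI's) shows why this middle ground is attractive: re-rendering just the scope's patch at higher resolution costs only a fraction of a full frame's pixel work, unlike a full-size second viewport.

```python
# Illustrative cost estimate for rendering only the zoom region at higher
# resolution. All sizes here are made-up examples.

def extra_pixel_cost(frame_w, frame_h, region_w, region_h, factor):
    """Pixels rendered for the zoom region at factor x resolution,
    as a fraction of one full frame's pixels."""
    return (region_w * factor) * (region_h * factor) / (frame_w * frame_h)

# A 480x270 scope region rendered at 2x on a 1920x1080 screen costs
# about a quarter of a full frame's pixel work - far less than the
# 100% a same-size second viewport would add.
cost = extra_pixel_cost(1920, 1080, 480, 270, 2)
```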


-----------------------------------------------------------------------------
"Wait - but aren't the ocean reflections also technically something like a different viewport?"

Yes, but no direct lighting is applied to objects there, apart from the sun, which is still forward rendered AFAIK. No shadows, no fancy effects at all. And it is limited to very big objects (short view distance). The flaws are just very hard to spot with all the wave distortion. This is nowhere near the quality needed for another viewport.



-----------------------------------------------------------------------------
"So why did Piranha choose CryEngine in the first place? If it is so expensive to render everything, and we can't even have another viewport, and all the netcode..."

Deferred rendering has disadvantages, yes. But there are some big advantages. Once the G-buffer is set up, calculating lighting for individual point lights is very cheap. You can have hundreds of dynamic lights in your scene without too much hassle. And the lights also do not care about how many objects are lit, or about object complexity (well, apart from bump/normal mapping). This allows CryEngine to render every light on screen dynamically with OK performance.
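The trade-off can be put into a toy cost model (all numbers invented for illustration): forward cost grows with objects x lights, while deferred pays a fixed G-buffer setup plus a per-light cost proportional to the screen pixels each light covers - independent of object count.

```python
# Toy cost model for the forward-vs-deferred lighting trade-off.
# Units and constants are arbitrary; only the scaling behaviour matters.

def forward_cost(objects, lights, per_pair=1.0):
    """Forward: every (object, light) pair in range costs work."""
    return objects * lights * per_pair

def deferred_cost(gbuffer_setup, lights, pixels_per_light, per_pixel=0.001):
    """Deferred: fixed G-buffer setup, then per-light work scales with
    covered pixels, not with scene object count."""
    return gbuffer_setup + lights * pixels_per_light * per_pixel

fwd = forward_cost(objects=500, lights=100)
dfd = deferred_cost(gbuffer_setup=2000, lights=100, pixels_per_light=10_000)
# With hundreds of dynamic lights, the deferred path comes out far ahead
# despite its up-front G-buffer cost.
```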

Which also means that you do not have to precompute anything, really. Which means that when a level designer builds a map, the map will look exactly like that in the game. In Unreal Engine 3 you would have to recompute the lighting each time you changed anything in your map, which takes quite some time, even with a server farm attached to the process. So basically, as long as you do not "build" the lighting, you do not really know what the level is going to look like.
In CryEngine 3 every light is calculated each frame, so everything looks the same in the editor as it does in game. For a designer this is much more convenient: he does not lose time while the lighting is computed, and he instantly sees results when changing stuff.

Also CryEngine 3 has a very competent level editor overall. Setting up terrain is incredibly easy and quick, and modifying the day-time and sun light is very convenient.

So one could say CryEngine 3 is more developer friendly, while Unreal Engine is more gamer-friendly in terms of performance.


Finally, I want to make clear that I do have some idea about CryEngine and the like.

I posted some stuff about it already here: http://mwomercs.com/...22#entry1937322

And a dev kindly answered: http://mwomercs.com/...55#entry1942755


Apart from that, here are two videos of why CryEngine has deferred rendering :lol:




Good read!

It's long, so I liked it.
"That's what" - She.
I will now keep reading the long post.

Edited by MoonUnitBeta, 17 August 2013 - 02:51 AM.


#67 Modo44

    Member

  • PipPipPipPipPipPipPipPipPip
  • Bad Company
  • 3,559 posts

Posted 17 August 2013 - 11:09 AM

GingerBang, on 16 August 2013 - 10:40 AM, said:

You do realize that the weapons you fire travel a lot faster than 150 kph, right? Especially weapons fired while travelling at 100+ kph.

Yes. Weapon calculations have all variables set the moment a weapon is fired. This is obviously not true for mechs. Nor do mechs behave as points in space -- the structure to keep track of is at least an order of magnitude more complex. Captain Obvious, etc...

#68 GingerBang

    Dezgra

  • PipPipPipPipPipPip
  • 470 posts
  • LocationThe Airport Hilton

Posted 17 August 2013 - 11:43 AM

Modo44, on 17 August 2013 - 11:09 AM, said:

Yes. Weapon calculations have all variables set the moment a weapon is fired. This is obviously not true for mechs. Nor do mechs behave as points in space -- the structure to keep track of is at least an order of magnitude more complex. Captain Obvious, etc...



Actually.... you'd be surprised. The supposed issue is latency tracking, not the functionality of the frame moving.

#69 Modo44

    Member

  • PipPipPipPipPipPipPipPipPip
  • Bad Company
  • 3,559 posts

Posted 17 August 2013 - 10:05 PM

GingerBang, on 17 August 2013 - 11:43 AM, said:

Actually.... you'd be surprised. The supposed issue is latency tracking, not the functionality of the frame moving.

You'd be surprised how obviously connected it all is.




