
Out Of Sync...


213 replies to this topic

#201 Calon Farstar

    Member

  • Legendary Founder
  • 189 posts
  • Twitter: Link
  • Twitch: Link
  • Location: At Sea

Posted 24 April 2013 - 06:33 AM

Karl Berg, on 22 April 2013 - 01:38 PM, said:

Hey everyone, I must apologize greatly for this. There has been a lot of work under the hood in preparation for 12v12. Part of that work has been aggressive bandwidth optimization, so that users don't end up left out due to the increased network load when 12v12 is turned on.

Some of these changes actually went live a couple patches ago, and involved sending lots of small little acknowledgements back to the server so that the server could intelligently compress what it sends to your client. Unfortunately, once again we appear to have tripped some unfortunate behaviour in the engine which we are currently trying to track down and correct.

In the interim, I have prepared some emergency changes which are being pushed to testing immediately with the intent to launch a hot fix. QA has good reproduction steps for this issue now, so we should be able to easily catch cases like this in future.

We'll update you all as soon as we have more info about QA testing and hot fix approval.


THANK YOU!!! :wacko: ... Now hurry up, there's Dracs to kill!!!

#202 Neema Teymory

    NetCode Slayer

  • Developer
  • Developer
  • 86 posts

Posted 24 April 2013 - 09:04 AM

Neema Teymory, on 22 April 2013 - 11:23 AM, said:


I can't say for sure when exactly the fix will be applied. It depends on a number of factors. All I can say is that we are trying to get it in as soon as possible :D

I will be sure to post to this thread if there are any updates


As promised, here is an update: The hotfix is NOW available. Start patching :D

#203 Karl Berg

    Technical Director

  • 497 posts
  • Location: Vancouver

Posted 24 April 2013 - 09:21 AM

Hey all, as Neema mentioned the hot fix was accelerated and is getting pushed out this morning. QA reports are so far extremely positive; but please keep us updated if you continue to notice any behaviour of this sort.

#204 Syllogy

    Member

  • 2,698 posts
  • Location: Strana Mechty

Posted 24 April 2013 - 09:28 AM

Y'all are all-right.

Also, neener neener to the QQ trolls. :D

#205 Kin3ticX

    Member

  • The People's Hero
  • 2,926 posts
  • Location: Salt Mines of Puglandia

Posted 24 April 2013 - 10:26 AM

Check this out: first drop testing the new patch, and at least 4 of my teammates could not get the base to cap. I don't know if that is an old glitch or something related to the recent netcode fixes. (The other team was able to cap us out, however.)

I will continue to test for the de-sync bug.

Edited by Kin3ticX, 24 April 2013 - 10:28 AM.


#206 ElLocoMarko

    Member

  • 533 posts

Posted 24 April 2013 - 10:54 AM

It is hopefully fixed: (found on Twitter - the better place to look for announcements :D )


Garth Erlam, on 24 April 2013 - 08:23 AM, said:

AND WE'RE BACK UP! See you on the battlefield!

This hotfix addresses the 'rubber banding' issue. Servers will come down at 10am and be back up very shortly after.

Anyone who received ANY small amount of packet loss would, essentially, never send that data. This should fix that, and the rubber banding issue should be gone entirely.

On a similar thread, we're still trying to track down the cause of the various HUD issues that are occurring, but we have no ETA. I'll make a post the second I have news.

Edited by ElLocoMarko, 24 April 2013 - 10:55 AM.


#207 Kin3ticX

    Member

  • The People's Hero
  • 2,926 posts
  • Location: Salt Mines of Puglandia

Posted 24 April 2013 - 11:43 AM

After about 10 drops, so far so good with eliminating the lag issues.

#208 Scope666

    Member

  • The 1 Percent
  • 16 posts
  • Location: Branchburg, NJ

Posted 24 April 2013 - 11:56 AM

Karl Berg, on 24 April 2013 - 09:21 AM, said:

Hey all, as Neema mentioned the hot fix was accelerated and is getting pushed out this morning. QA reports are so far extremely positive; but please keep us updated if you continue to notice any behaviour of this sort.


I have to say I'm SO happy you guys found this and fixed it ... I had been experiencing rubber banding worse than most of my friends, and I usually have a very low ping (not that far from your servers, in the NYC area).

I've done a few drops so far and it seems like you fixed it ... Kudos!!!!

#209 ArtemisEntreriCRO

    Member

  • 14 posts

Posted 25 April 2013 - 05:04 AM

So, it seems that rubberbanding and out of sync are fixed for me, at least for now. Will check in a couple of hours when more people are playing.
Hopefully this game will be playable. Don't you dare touch that part of the code again.

#210 Profiteer

    Member

  • 353 posts
  • Location: New Zealand

Posted 25 April 2013 - 01:05 PM

I wasn't rubber-banding before the hotfix, but I am now.

In some games it was so bad I couldn't find my way off the base.

#211 Barnaby Jones

    Member

  • Survivor
  • 434 posts
  • Location: Texas

Posted 25 April 2013 - 04:39 PM

Esplodin, on 24 April 2013 - 04:57 AM, said:

This kind of post REALLY needs to be put in a known issues section. It's detailed, has QUALITY all over it, thoroughly explains why the failure is happening, and most importantly does so without blowing rainbows and skittles up our collective poop chutes or making excuses.

While it doesn't make the bug any easier to play through, it does make it somewhat less frustrating to know what is going on and that a fix is inbound. It's beta, and bugs happen. Dealing with them head on with the community is always the better route, since keeping them under wraps just lets rumors blossom. Rumors and conspiracies left unaddressed are far more damaging in the long run.

More quality communications like this would go a Looooong way to repairing some of the discontent the community has harbored. The other example I'd give is the one Nick posted about the testing suite you guys are developing. That is the other post that stands out in my mind as being excellent, since it reminded me, at least, that you guys have to create the massive infrastructure an MMO requires in order to work, as well as the game we see.

I'd compliment you to your boss, Karl, on this excellent writeup, but I've no idea who you report to! Feel free to pass this on to him/her!


QFT.

#212 Karl Berg

    Technical Director

  • 497 posts
  • Location: Vancouver

Posted 25 April 2013 - 06:41 PM

Profiteer, on 25 April 2013 - 01:05 PM, said:

I wasn't rubber-banding before the hotfix, but I am now. In some games it was so bad I couldn't find my way off the base.


Hey Profiteer, sorry to hear that :(. We'll have some further changes related to this soon. The hot fix reduced the number of packets sent to make the immediate issue go away, but it came at the cost of extra bandwidth, although bandwidth usage is still a good chunk lower than it was a month ago. I've spent the last few days digging through the network layer, and we've definitely found some erroneous behaviour with respect to flow control logic and message starvation. These changes are a bit more extensive, though, and will require some pretty extreme testing before we can launch anything.

#213 Rixsaw

    Member

  • The Blade
  • 58 posts

Posted 29 April 2013 - 05:42 AM

Karl Berg, on 23 April 2013 - 11:48 AM, said:


Maybe; the failure cases for this issue are usually much more extreme, but please let us know if the hotfix makes any difference.



... no comment

@Rixsaw

Not quite. Here is what was done:

Mech input state is queued up into a 20 Hz stream of traffic sent from your system to the server. The server processes this state and relays it to all other clients. Obviously then, for a 16 player game, you're sending one set of inputs up to the server and receiving inputs for 15 other players. For a 24 player game this is compounded: one set of inputs up and 23 sets of inputs down. This movement state traffic dominates all the other game traffic being sent in terms of bandwidth costs.
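
In rough numbers, that scaling looks like this (a minimal C++ sketch; the 30-byte payload per update is an assumed figure for illustration, not an actual number from the game):

#include <cstdio>
#include <initializer_list>

int main() {
    const double updateHz     = 20.0;   // input state updates per second (from the post)
    const double bytesPerMove = 30.0;   // assumed size of one compressed input update

    for (int players : {16, 24}) {
        double upBytesPerSec   = updateHz * bytesPerMove;                   // 1 stream up
        double downBytesPerSec = updateHz * bytesPerMove * (players - 1);   // N-1 streams down
        std::printf("%2d players: ~%4.0f B/s up, ~%5.0f B/s down per client\n",
                    players, upBytesPerSec, downBytesPerSec);
    }
    return 0;
}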

Taking a look at what is actually in that mech input state, you have some aim angles, throttle settings, jump jet status, torso turn settings, and a small collection of other essential states. At 20 Hz, a lot of that state doesn't change from one input to the next, so it's a fairly reasonable optimization to only send input values when they change, commonly referred to as delta compression. Most of our optimizations are focussed on the data being sent to you from the server, since that grows with player count.
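
A minimal sketch of that kind of per-field delta compression; the MechInput fields and the 1-byte change mask are illustrative assumptions, not the actual wire format:

#include <cstddef>
#include <cstdint>
#include <vector>

struct MechInput {
    float   yaw = 0, pitch = 0;    // aim angles
    int8_t  throttle = 0;          // throttle setting
    bool    jumpJets = false;      // jump jet status
    float   torsoTwist = 0;        // torso turn setting
};

// Write only the fields that changed relative to 'base', preceded by a 1-byte change mask.
std::vector<uint8_t> DeltaEncode(const MechInput& cur, const MechInput& base) {
    std::vector<uint8_t> out(1, 0);                     // out[0] = change mask
    auto writeField = [&out](const void* p, std::size_t n, int bit, bool changed) {
        if (!changed) return;
        out[0] |= uint8_t(1u << bit);
        const uint8_t* bytes = static_cast<const uint8_t*>(p);
        out.insert(out.end(), bytes, bytes + n);
    };
    writeField(&cur.yaw,        sizeof cur.yaw,        0, cur.yaw        != base.yaw);
    writeField(&cur.pitch,      sizeof cur.pitch,      1, cur.pitch      != base.pitch);
    writeField(&cur.throttle,   sizeof cur.throttle,   2, cur.throttle   != base.throttle);
    writeField(&cur.jumpJets,   sizeof cur.jumpJets,   3, cur.jumpJets   != base.jumpJets);
    writeField(&cur.torsoTwist, sizeof cur.torsoTwist, 4, cur.torsoTwist != base.torsoTwist);
    return out;                                         // unchanged fields cost only the mask bit
}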

To start, we add a sequence id to map transmitted movement state to a specific identifier. Now every move we send to the client is tagged with its sequence number. If this movement packet requires previous state to decompress, we add that base state identifier as well and delta compress the current state against that base state before transmission.

Well, we're using UDP, and all this traffic is unreliable and unordered. The underlying movement system is set up in such a way that it will reorder or simply reconstruct lost traffic over a very small window of time. If any received input is too old it's simply discarded.
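
For illustration, a tiny sketch of such a reorder-and-discard window; the window size and types are assumptions:

#include <cstdint>

// Accept moves that are at most a few ticks old (they get reordered back in);
// anything staler is discarded.
struct MoveReceiver {
    uint16_t newestSeq = 0;
    static constexpr int16_t kReorderWindow = 4;         // assumed tolerance, in 20 Hz ticks

    bool Accept(uint16_t seq) {
        int16_t age = int16_t(newestSeq - seq);           // wrap-safe sequence distance
        if (age < 0) { newestSeq = seq; return true; }    // newer than anything seen so far
        return age <= kReorderWindow;                     // slightly old: reorder and apply
    }
};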

We still have to deal with the problem of knowing which states the client has received, so for every state the client receives, it sends back a really small 'state ack' packet containing a small identifier and the last received sequence id. The CryNetwork layer handles combining all these tiny packets into optimally sized packets for transmission for us. Now on the server it's quite simple to always delta compress against the most recently ack'd state for each client.
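
A rough sketch of that ack bookkeeping on the server side, reusing the hypothetical MechInput struct from the earlier sketch; names and structure are assumptions, not CryNetwork's API (sequence wrap-around is ignored for brevity):

#include <cstdint>
#include <map>

// Per-client bookkeeping on the server.
struct ServerMoveChannel {
    std::map<uint16_t, MechInput> sentStates;   // seq id -> state we transmitted
    uint16_t lastAckedSeq = 0;
    bool     haveAck      = false;

    void OnStateSent(uint16_t seq, const MechInput& s) { sentStates[seq] = s; }

    void OnStateAck(uint16_t seq) {             // tiny 'state ack' from the client
        lastAckedSeq = seq;
        haveAck      = true;
        sentStates.erase(sentStates.begin(), sentStates.lower_bound(seq));  // drop stale baselines
    }

    // The next update is delta compressed against the newest state the client confirmed;
    // if there is no usable baseline yet, the caller sends a full (non-delta) state.
    const MechInput* DeltaBase() const {
        if (!haveAck) return nullptr;
        auto it = sentStates.find(lastAckedSeq);
        return it != sentStates.end() ? &it->second : nullptr;
    }
};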

It's the transmission of these ack packets, in combination with small levels of packet loss, that has messed things up. My guess is that our sending of lots of tiny messages is incorrectly triggering flow control logic in the network layer, but it will take some digging to really track down where and why this is happening.

Small update on the hotfix, QA has it and is testing it out in the stable environment. If all goes well, the absolute earliest it might end up getting pushed out would be sometime tomorrow.



Awesome, Karl! Your explanation was a bit more complicated than I necessarily needed, but I appreciate your willingness to flex your technical muscle ^_^

I am, among other things, a Voice Engineer, so we often have to deal with the same problem you are dealing with.

Video or voice conference calling also uses 20 Hz packetization, with a 150 ms stream buffer. The reason you use these is that most humans won't really notice a slight lag in voice that arrives 150 ms later. Maybe in the game the lag has to be tighter, so that may be where the issue is. Anyway, it looks like you fixed it; good job.

Basically, the netcode is a voice conference call :P Each user sends his audio up, the server combines it, and sends the official outbound audio to all participants.
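
Roughly, the playout (jitter) buffer described here might look like the following sketch; the Frame type, the 50 ms frame spacing (from the 20 Hz figure), and the skip-on-loss behaviour are illustrative assumptions, not any real codec's API:

#include <cstdint>
#include <map>
#include <utility>
#include <vector>

struct Frame { std::vector<uint8_t> samples; };     // one 50 ms chunk of audio (assumed)

struct JitterBuffer {
    static constexpr int64_t kDelayMs = 150;        // playout delay from the post
    static constexpr int64_t kFrameMs = 50;         // 20 Hz packetization -> 50 ms per frame

    std::map<uint16_t, Frame> frames;               // seq -> buffered frame
    int64_t  baseTimeMs = -1;                       // arrival time of the first frame
    uint16_t baseSeq    = 0;

    void OnPacket(uint16_t seq, Frame f, int64_t nowMs) {
        if (baseTimeMs < 0) { baseTimeMs = nowMs; baseSeq = seq; }
        frames[seq] = std::move(f);                 // late/reordered packets still slot in here
    }

    // Frame 'seq' is due kDelayMs after the stream started, plus its offset in the stream.
    // If it still hasn't arrived by then, the caller conceals (skips) it.
    bool TryPop(uint16_t seq, int64_t nowMs, Frame* out) {
        if (baseTimeMs < 0) return false;
        int64_t playAt = baseTimeMs + kDelayMs + int64_t(uint16_t(seq - baseSeq)) * kFrameMs;
        if (nowMs < playAt) return false;           // not due yet
        auto it = frames.find(seq);
        if (it == frames.end()) return false;       // lost or too late: play silence instead
        *out = std::move(it->second);
        frames.erase(it);
        return true;
    }
};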

#214 Lugh

    Member

  • The Widow Maker
  • 3,910 posts

Posted 29 April 2013 - 05:17 PM

Quazar, on 19 April 2013 - 07:05 PM, said:

I think it's some kind of AT&T routing problem.

Nope, I'm on Comcast in Delaware and having trouble too. It's bad network code on their end.




