
The Real Reason (IMHO) Why HSR Hitreg Sucks And Why PGI Probably Needs To Switch Server Providers Again.


70 replies to this topic

#41 Kunae

    Member

  • PipPipPipPipPipPipPipPipPip
  • 4,303 posts

Posted 26 June 2013 - 06:38 PM

Xajorkith, on 26 June 2013 - 03:04 PM, said:

Racks of computers are racks of computers and kilowatts of heat are kilowatts of heat. Datacentres have a lot of racks. If you think extra fans (trust me, there are LOTS of fans already in the rack units - it's absolutely deafening) can cool multiple racks in a small room, you haven't been in one. But you must have been in one in order to be correcting strangers and telling the people who work for PGI and IGP what's best for their infrastructure. Got to admire confidence.

Wow.

Just wow.

So much fail in that post...

As you obviously have no clue about the difference between a server room and a data center, you really may want to stop talking. Then you may want to apply to work in IGP's "server room", as that's all it qualifies as, and you seem to be of their caliber. (I am assuming from your ignorant snarkiness that you actually work in the industry, although clearly not anywhere real, if so.)

#42 Arcturious

    Member

  • PipPipPipPipPipPipPip
  • Legendary Founder
  • 785 posts
  • Location: Canberra, Australia

Posted 26 June 2013 - 08:01 PM

I've done a few traces as well and at the time I told support that it was the last two hops in general that had issues.

Since the latest hot fix, my ping has gone back to normal 95% of the time (220-230 ms from AU).

Last night I noticed I was having hit reg issues on some PPC and gauss shots. I hit tab and sure enough my ping had spiked to 280 ms and, most importantly, was fluctuating up and down by 20-50 ms.

This lasted most of the game before settling back down again.

There is definitely something flaky around Toronto as that's approximately where I start noticing ping issues on traces.

The biggest fluctuations seem to occur within the pnap.net addresses. Looking them up, pnap.net belongs to internap.com, a hosting provider that specifically advertises game hosting. I believe from searches they may also be used by Riot Games, so supposedly they aren't a fail company.

If you google "high latency" and pnap.net, though, you get quite a few results (including LoL issues). While this doesn't prove anything, as their infrastructure would be managed by different teams on different equipment around the globe, it may still be indicative of some underlying issue in their standard configs / QoS.

What does all this mean? Potentially absolutely nothing. Which is why I'm glad it's not my job to try to hunt these issues down :)

Edit: Check this out for example, taken from a trace I did before the latest hot fix. I'm at work so I only have historical data to go off; take this with a grain of salt, caveat, caveat, etc. I'll try and capture fresh results next time I notice the issue and update if it's any different.

11 178 ms 178 ms 177 ms sl-tisca1-369951-0.sprintlink.net [144.228.111.22] - Little variance
12 243 ms 242 ms 244 ms xe-5-0-0.tor10.ip4.tinet.net [141.136.107.66] - Little variance
13 235 ms 234 ms 234 ms internap-gw.ip4.tinet.net [77.67.70.94] - Little variance
14 271 ms 317 ms 266 ms border1.te9-1-bbnet2.tor001.pnap.net [70.42.24.196] - Bam, pnap
15 268 ms 239 ms 247 ms relay-1.mwtactics.com [70.42.29.65] - Slight Variance
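
If anyone wants to automate this rather than eyeballing three-sample traces, here's a rough Python sketch along the same lines (the hop list is copied from the trace above, the sample count is arbitrary, and it assumes a Unix-style ping binary, so treat it as illustrative only). It pings each hop repeatedly and prints the spread, which makes a flappy hop like the pnap border router stand out:

#!/usr/bin/env python
# Rough sketch: ping each hop from a traceroute many times and report the
# spread, so a hop with fluctuating latency stands out. Hop list is copied
# from the trace above; ping flags are Linux-style and differ on macOS/Windows.
import re
import subprocess
import statistics

HOPS = {
    "sprintlink":      "144.228.111.22",
    "tinet tor10":     "141.136.107.66",
    "internap-gw":     "77.67.70.94",
    "pnap border1":    "70.42.24.196",
    "mwtactics relay": "70.42.29.65",
}
SAMPLES = 20  # pings per hop

def ping_once(ip, timeout_s=2):
    """Return one RTT in ms, or None on loss/timeout."""
    out = subprocess.run(["ping", "-c", "1", "-W", str(timeout_s), ip],
                         capture_output=True, text=True).stdout
    m = re.search(r"time[=<]([\d.]+)\s*ms", out)
    return float(m.group(1)) if m else None

for name, ip in HOPS.items():
    rtts = [r for r in (ping_once(ip) for _ in range(SAMPLES)) if r is not None]
    loss = SAMPLES - len(rtts)
    if not rtts:
        print(f"{name:16} {ip:16} all {SAMPLES} pings lost")
        continue
    print(f"{name:16} {ip:16} min {min(rtts):6.1f}  avg {statistics.mean(rtts):6.1f}"
          f"  max {max(rtts):6.1f}  spread {max(rtts)-min(rtts):6.1f} ms  loss {loss}/{SAMPLES}")

Run it a couple of times during a laggy match: a stable hop should show a spread of a few ms, while a flaky one will swing by tens of ms.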

Edited by Arcturious, 26 June 2013 - 08:11 PM.


#43 Epikone

    Rookie

  • Urban Commando
  • 8 posts

Posted 26 June 2013 - 09:56 PM

Kunae, on 26 June 2013 - 06:38 PM, said:

Wow.

Just wow.

So much fail in that post...

As you obviously have no clue about the difference between a server-room and a data-center, you really may want to stop talking. Then you may want to apply to work in IGP's "server-room", as that's all it qualifies as, and you seem to be of their caliber.(I am assuming from your ignorant snarkiness, that you actually work in the industry, although it's clear not anywhere real, if so.)

As someone whose company has actually experienced a set of rolling failures in our CRAC units due to corrosion in pipes knocking out units in batches, in an average DC you have no option for cooling down DC power loads using portable fans. Portable cooling units, perhaps, if you can find suitable exhaust ports. But even then, it's a challenge to find the power and a location for units that work, as you need to exhaust the heated air from the hot aisles, and then the hot exhaust from the unit itself. Portable fans can't generate the pressure required to adequately ventilate a DC through the small entry/exit doors typically available. We had multiple windows in the DC, used venting and additional cooling, and only lost ~50% of the CRAC units at any one time, and we still had to shut off a number of servers to manage the overall heat load.
One of my teammates could comment on all the work he did directly (as he was managing the DC at the time) trying to route our hot aisles out windows, increase insulation of the cold aisles, put up plastic, etc. to keep the cold vs hot aisles actually working, but rest assured - an average closed-off DC has serious issues managing the failure of even a small part of its cooling system. As someone who has had to audit a number of DCs - we were extremely lucky to get off as lightly as we did; most DCs by design have solid insulated walls and a limited number of exits (normally the minimum for fire code, to meet internal security requirements), making airflow horrible.
Even ambient-cooled DCs only cool the heat exchangers from ambient air, not the servers themselves, or have to have specific building designs to maximise the exhaust of the hot air. But you knew this already, I'm sure. If you don't, check out the Facebook page for Facebook's new Swedish DC - particularly the wall of fans!
Now, if you want to suggest that PGI's isn't a proper DC, and is in a nice breezy location with perhaps a bunch of big doors at either end (like the entire wall), with all the aisles set up with plenty of room around them, then perhaps you have a point.
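
To put rough numbers on the fan question: carrying heat out of a room takes airflow to somewhere cooler, roughly P / (density × specific heat × temperature rise). The Python sketch below is a back-of-the-envelope illustration only; the 50 kW load and the ~2,000 CFM free-air rating for a 20" box fan are assumptions for illustration, not anything known about IGP's site.

# Back-of-the-envelope: airflow needed to carry X kW of heat out of a room
# at a given temperature rise. Numbers below are illustrative assumptions,
# not anything measured at PGI/IGP's hosting site.
RHO_AIR = 1.2      # kg/m^3, air density at roughly 20 C
CP_AIR = 1005.0    # J/(kg*K), specific heat of air

heat_load_w = 50_000.0   # assume a modest 50 kW of IT load
delta_t = 10.0           # K, allowed rise between intake and exhaust air

airflow_m3s = heat_load_w / (RHO_AIR * CP_AIR * delta_t)   # ~4.1 m^3/s
airflow_cfm = airflow_m3s * 2118.88                        # convert m^3/s to CFM

box_fan_cfm = 2000.0     # rough free-air rating of a 20" box fan (assumption);
                         # real throughput against any back-pressure is far lower

print(f"required airflow: {airflow_m3s:.1f} m^3/s  (~{airflow_cfm:,.0f} CFM)")
print(f"20-inch box fans needed just to move that air: {airflow_cfm / box_fan_cfm:.1f}")
print("...and that only works if there is somewhere 10 C cooler to push the air to.")

Even in that optimistic case you need several fans running at their free-air rating just to move the volume, and they still only relocate heat; without an adjacent space meaningfully cooler than the hot aisle, the room keeps warming.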

#44 Epikone

    Rookie

  • Urban Commando
  • 8 posts

Posted 26 June 2013 - 10:04 PM

Arcturious, on 26 June 2013 - 08:01 PM, said:

I've done a few traces as well and at the time I told support that it was the last two hops in general that had issues. Since the latest hot fix, my ping has gone back to normal 95% of the time (220-230 from AU). Last night I noticed I was having hit reg issues on some PPC and gauss shots. Hit tab and sure enough my ping had spiked to 280 and most importantly was fluctuating up and down by some 20-50ms. This lasted most of the game before settling back down again. There is definitely something flaky around Toronto as that's approximately where I start noticing ping issues on traces. The biggest fluctuations seem to occur within the pnap.net addresses. Looking them up, they are a subsidiary of internap.com, a hosting provider that specifically talks about game hosting. I believe from searches they may also be used by Riot games so supposedly they aren't a fail company. If you google high latency and pnap.net though you get quite a few results (including LoL issues). While this doesn't technically prove anything as their infrastructure would be managed by different teams on different equipment all around the globe, it may still be indicative of some underlying issue in their standard configs / QoS. What does all this mean? Potentially absolutely nothing. Which is why I'm glad it's not my job to try to hunt these issues down :) Edit: Check this out for example, taken from a trace I did before latest hot fix, at work so only have historical data to go off, take this with grain of salt, caveat, caveat, etc-eat. I'll try and capture fresh results next time I notice the issue and update if its any different. 11 178 ms 178 ms 177 ms sl-tisca1-369951-0.sprintlink.net [144.228.111.22] - Little variance 12 243 ms 242 ms 244 ms xe-5-0-0.tor10.ip4.tinet.net [141.136.107.66] - Little variance 13 235 ms 234 ms 234 ms internap-gw.ip4.tinet.net [77.67.70.94] - Little variance 14 271 ms 317 ms 266 ms border1.te9-1-bbnet2.tor001.pnap.net [70.42.24.196] - Bam, pnap 15 268 ms 239 ms 247 ms relay-1.mwtactics.com [70.42.29.65] - Slight Variance

We had issues like this with a single flaky border router. Every few minutes it would play up, buffer up and squirt out a bunch of packets in a rush, but as we had eight border routers at the time, it was terribly hard to diagnose.

#45 LoneGunman

    Member

  • PipPip
  • 41 posts
  • Location: CA

Posted 26 June 2013 - 10:41 PM

+1 to the OP for the research. I get that it doesn't necessarily equal a one-for-one explanation of lag issues with HSR but I'm sure it isn't helping. Hoping PGI finds a solution soon...

#46 Dexter Herbivore

    Member

  • PipPipPipPipPipPip
  • Bad Company
  • 241 posts
  • Location: Perth, WA

Posted 26 June 2013 - 11:01 PM

As per usual, a little bit of indicative research that may or may not mean anything has been conflated with "PGI SUXXORS AND ARE AMATEURS AND I DON'T KNOW WHY I KEEP PLAYING THIS GAME EXCEPT I LIEK COMPLAINING".

Grow up, all this research shows is there are sometimes lag spikes on a particular set of routers. It doesn't show why that happens, what causes it, what effect it has on gameplay or whose fault it is. So stop jumping straight to the conclusion that PGI is at fault somehow and/or that this is the sole cause of HSR problems. It's information that has been acknowledged by the devs who are in a much better position to determine the exact cause and effects, and what fixes may be required.

Well done on the original research by the way, this potentially is responsible for some problems and the dev team is now aware of it.

#47 Rushin Roulette

    Member

  • PipPipPipPipPipPipPipPipPip
  • WC 2018 Top 12 Qualifier
  • 3,514 posts
  • Location: Germany

Posted 27 June 2013 - 01:38 AM

Kunae, on 26 June 2013 - 10:09 AM, said:

Soon after they moved their servers to this "data-center", they had their AC unit go out... they only had one, no backup, and couldn't even figure out how to go to "Wal-Mart" and buy 30 20" fans to prop in the doors to help mitigate it.


I absolutely agree.... I mean, fans are really good at cooling the air moving through them, and any hot air coming from the back is automatically cooled to freezing temperatures within the 2-3 millimetres of the fan blade. If they had bought 20-30 fans from Walmart, then I'm sure they could have frozen the whole town the datacenter is located in (get 100 of those Walmart fans, and you can even solve the problem of global warming for the world and start a new ice age).

They could have also all bought sheaves of paper and waved them at the server racks... or even resorted to the old Game Boy tactic (open it up and blow across the connections)... it really worked back then and I don't see how this could not work for a server center.

P.S. I have never ever once in my life ever been sarcastic or exaggerated... really, honestly... never, ever.

Well, maybe once or twice here and there.

Edited by Rushin Roulette, 27 June 2013 - 03:30 AM.


#48 FREDtheDEAD

    Member

  • PipPipPipPipPipPip
  • The 1 Percent
  • 406 posts
  • Location: South Australia

Posted 27 June 2013 - 03:25 AM

Kunae, on 26 June 2013 - 06:38 PM, said:

Wow.

Just wow.

So much fail in that post...

As you obviously have no clue about the difference between a server-room and a data-center, you really may want to stop talking. Then you may want to apply to work in IGP's "server-room", as that's all it qualifies as, and you seem to be of their caliber.(I am assuming from your ignorant snarkiness, that you actually work in the industry, although it's clear not anywhere real, if so.)

Want to be specific about who obviously has no clue and is failing, when you're the one who thinks a few $10 domestic fans can cool an enclosed data centre (or server room, or any kind of enclosed space with kilowatts of heat being produced)? Actually, you're so convincing, I'm going off to patent cooling nuclear reactors with Walmart fans.

Any time you can come up with actual evidence that the servers MWO is using are as tragically badly run as you claim, post some more. The CVs of the network admins? How about their names? No? You don't know anything about them? What's the inventory? Do you have any server logs? Is everything you said unprovable opinion based on an agenda? Is that a chip on your shoulder or are you just pleased to see me?

#49 FREDtheDEAD

    Member

  • PipPipPipPipPipPip
  • The 1 Percent
  • 406 posts
  • Location: South Australia

Posted 27 June 2013 - 03:35 AM

Dexter Herbivore, on 26 June 2013 - 11:01 PM, said:

As per usual, a little bit of indicative research that may or may not mean anything has been conflated with "PGI SUXXORS AND ARE AMATEURS AND I DON'T KNOW WHY I KEEP PLAYING THIS GAME EXCEPT I LIEK COMPLAINING".

Grow up, all this research shows is there are sometimes lag spikes on a particular set of routers. It doesn't show why that happens, what causes it, what effect it has on gameplay or whose fault it is. So stop jumping straight to the conclusion that PGI is at fault somehow and/or that this is the sole cause of HSR problems. It's information that has been acknowledged by the devs who are in a much better position to determine the exact cause and effects, and what fixes may be required.

Well done on the original research by the way, this potentially is responsible for some problems and the dev team is now aware of it.

Thank you for having a sense of perspective! I love you and want to have your babies. Is that an over-reaction?

Yeah, maybe we'll pass on that.

Edited by Xajorkith, 27 June 2013 - 03:41 AM.


#50 DragonsFire

    Member

  • PipPipPipPipPipPipPip
  • Wrath
  • 655 posts

Posted 27 June 2013 - 06:01 AM

I'm also curious as to where this notion that PGI or IGP is managing their own datacenter came from. Given that the servers are in Toronto, I think it's more likely that they are in a co-lo scenario. Perhaps I missed the post where someone from PGI or IGP stated that they own and operate the DC that the servers are located in.

If they don't, though, then any cooling, power, or even routing failures are out of their control anyway, as those fall squarely in the lap of the DC owners/operators.

Edit: And given that their roadmap has them putting out regional servers, it makes sense that they would start in a co-lo scenario from a pure proof-of-concept standpoint (remote management, response time scenarios, etc).

Edited by DragonsFire, 27 June 2013 - 06:04 AM.


#51 Kunae

    Member

  • PipPipPipPipPipPipPipPipPip
  • 4,303 posts

Posted 27 June 2013 - 06:04 AM

Epikone, on 26 June 2013 - 09:56 PM, said:

As someone whose company has actually experienced a set of rolling failures in our CRAC units due to corrosion in pipes knocking out units in batches, in an average DC you have no option for cooling down DC power loads using portable fans. Portable cooling units, perhaps, if you can find suitable exhaust ports. But even then, it's a challenge to obtain the power and location to put units in that work as you need the exhaust the heated air from the hot aisles, and then the hot exhaust from the unit itself. Portable fans can't generate the pressure required to adequately ventilate a DC through the small entry/exit doors typically available. We had multiple windows in the DC, used venting, additional cooling and only lost ~50% of the CRAC units at any one time, and still had to shut off a number of servers to manage overall heat load. One of my teammates could comment on all the work he did directly (as he was managing the DC at the time) trying to route our hot isles out windows, increase insulation of the cold isles, and put up plastic etc. to keep cold vs hot isles actually working, but rest assured - an average closed-off DC has serious issues managing the failure of only a small part of its cooling system. As someone who has had to audit a number of DC's - we were extremely lucky to get off as lightly as we did, most DC's by design have solid insulated walls and limited numbers of exits (normally the minimum for fire code requirements, to meet internal security requirements) making airflow horrible. Even ambient cooling DC's only cool the heat exchangers from ambient, not the servers themselves, or have to have specific building designs to maximise the exhaust of the hot air. But you knew this already, I'm sure. If you don't, check out Facebook's new swedish DC facebook page - particularly the wall of fans! Now, if you want to suggest that PGI's isn't a proper DC, and is in a nice breezy location with perhaps a bunch of big doors at either end (like the entire wall), with all the isles setup for plenty of room around them, then perhaps you have a point.


The use of fans was bordering on hyperbole to make a point, I admit, but at the minimal scale IGP is set up at, it was within the scope of possibility. And I believe my whole point was that IGP (not PGI, as PGI is not hosting this) does not have a proper DC.

This is just one symptom of a lack of professionalism on their part, which goes to your follow-up point about a router randomly freaking out. It's been regular enough, though, that they should have been able to isolate it by this point.

I don't think they're even looking into it.

Again, I am NOT talking about PGI messing up, this is about IGP.

DragonsFire, on 27 June 2013 - 06:01 AM, said:

I'm also curious as to where this notion that PGI or IGP is managing their own Datacenter? Given that the servers are in Toronto, I think it's more likely that they are in a co-lo scenario. Perhaps I missed the post where someone from PGI or IGP stated that they own and operate the DC that the servers are located in. If they don't though, then any failures of the cooling, power or even routing nature are out of their control anyways as that falls squarely in the lap of the DC owners/operators.

They have stated that this is IGP's DC. They moved the servers there at the end of July 2012, iirc.

Citation? Not possible, as this was said during closed beta, whose forums are no longer accessible.

#52 DragonsFire

    Member

  • PipPipPipPipPipPipPip
  • Wrath
  • 655 posts

Posted 27 June 2013 - 07:16 AM

Kunae, on 27 June 2013 - 06:04 AM, said:


The use of fans, was bordering on hyperbole to make a point, I admit, but on the minimal scale that IGP is setup as, it was within the scope of possibility. And I believe my whole point was that IGP(not PGI, as PGI is not hosting this) does not have a proper DC.

This is just one symptom of a lack of professionalism on their part, which goes to your followup point, about a router randomly freaking out. It's been regular enough though, that they should have been able to isolate it by this point.

I don't think they're even looking into it.

Again, I am NOT talking about PGI messing up, this is about IGP.


They have stated that this is IGP's DC. They moved the servers there at the end of July 2012, iirc.

Citation? Not possible, as this was said during closed beta, who's forums are no longer accessible.


I will buy that IGP is paying for the hosting, but as I noted earlier, it seems unlikely that they would set up DC infrastructure (i.e. building, power, rack space, etc.) in Toronto when other solutions are available, and for the relatively small number of servers they are using. The IGP DC that you refer to is a cage at a colo facility provided by a hosting company.

In this scenario, the hosting provider appears to be Internap, as they own both the edge router and the local MWO IP space. The benefit of a hosted solution is less overhead for the company involved in terms of hardware, upkeep, etc. The downside is that they have less recourse over things such as power issues, targeted DDoS attacks at the hosting site itself (which would affect more than just MWO), or general network connectivity.

They might be able to point to an edge router freaking out, but without direct access to said router, they would be left with little recourse for pursuing and correcting the issue. That falls under the purview of the hosting company in particular, and IGP can make requests for the issue to be resolved, but the control ultimately lies with Internap Network Services.

#53 Wildstreak

    Member

  • PipPipPipPipPipPipPipPipPipPip
  • Civil Servant
  • 5,154 posts

Posted 27 June 2013 - 07:54 AM

Fragger56, on 21 June 2013 - 01:36 AM, said:

TLDR:
The pipes to PGI's servers are too small and full of crap, causing random lag that ***** hitreg.
PGI needs better server hosts.

Well, having worked at similar telecom sites, I would say it depends on site design and possibly other factors.
Not knowing where they are, other than that I think it's Montreal, there is not much to go by.

PS - can you tell the physical location of where the logjam occurs, anyhow?

#54 DragonsFire

    Member

  • PipPipPipPipPipPipPip
  • Wrath
  • 655 posts

Posted 27 June 2013 - 08:02 AM

Merchant, on 27 June 2013 - 07:54 AM, said:

Well, having worked at similar telecom sites, I would say it depends on site design and possibly other factors.
Not knowing where they are except I think Montreal is not much to go by.

PS - where the logjam occurs, can you tell physical location of that anyhow?


In the traces listed above you can see a latency increase at Internap's edge router in Toronto (border1.te9-1-bbnet2.tor001.pnap.net). This latency appears to carry over to the servers as well. Unfortunately, as was noted earlier, ICMP is a first-level troubleshooting tool that isn't always indicative of an issue and can often be misleading.
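
One cheap way to sanity-check whether a scary-looking middle hop actually matters is to watch it alongside the final destination over the same window: routers often deprioritize ICMP addressed to themselves, so a hop can look awful while traffic passing through it is fine. The Python sketch below is illustrative only (Linux-style ping flags, host names taken from the traces above); if only the suspect hop is noisy, it's probably just control-plane rate limiting, but if the relay wobbles in step with it, the path itself is suffering.

#!/usr/bin/env python
# Sketch: compare RTT jitter at a suspect middle hop vs. the final destination.
# If only the middle hop is noisy, it's likely just ICMP deprioritization on
# that router; if the destination wobbles too, the path itself is suffering.
import re
import subprocess
import statistics
import time

SUSPECT_HOP = "70.42.24.196"           # pnap border router from the traces above
DESTINATION = "relay-1.mwtactics.com"  # game relay from the traces above
SAMPLES = 30

def rtt_ms(host):
    out = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                         capture_output=True, text=True).stdout
    m = re.search(r"time[=<]([\d.]+)\s*ms", out)
    return float(m.group(1)) if m else None

hop_rtts, dst_rtts = [], []
for _ in range(SAMPLES):
    h, d = rtt_ms(SUSPECT_HOP), rtt_ms(DESTINATION)
    if h is not None:
        hop_rtts.append(h)
    if d is not None:
        dst_rtts.append(d)
    time.sleep(1)

for label, series in (("suspect hop", hop_rtts), ("destination", dst_rtts)):
    if len(series) > 1:
        print(f"{label:12} avg {statistics.mean(series):6.1f} ms  "
              f"stdev {statistics.stdev(series):5.1f} ms  n={len(series)}")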

#55 Kunae

    Member

  • PipPipPipPipPipPipPipPipPip
  • 4,303 posts

Posted 27 June 2013 - 08:14 AM

DragonsFire, on 27 June 2013 - 07:16 AM, said:

I will buy that IGP is paying for the hosting, but as I noted earlier, it seems unlikely that they would set up a DC infrastructure (ie building, power, rackspace etc) in Toronto when other solutions are available, and for the relatively small amount of servers they are using. The IGP DC that you refer to is a cage at a colo space provided by a hosting solution. In this scenario, the hosting solution appears to be Internap, as they own both the edge router and local MW IP space. The benefits of a hosting solution allows for less overhead for the company involved in terms of hardware, upkeep, etc. The downside is that they have less recourse over things such as power issues, targeted DDoS at the hosting site itself (which would affect more than just MWO), or general network connectivity. They might be able to point to an edge router freaking out, but without direct access to said router, they would be left with little recourse for pursuing and correcting the issue. That falls under the purview of the hosting company in particular, and IGP can make requests for the issue to be resolved, but the control ultimately lies with Internap Network Services.

It's possible.

I am merely going off of what IGP/PGI stated in July of last year.

If they are in a colo in someone else's DC, then the OP's title would seem to hold even more true, and the problem would be easier to correct.

#56 Wildstreak

    Member

  • PipPipPipPipPipPipPipPipPipPip
  • Civil Servant
  • 5,154 posts

Posted 27 June 2013 - 08:13 PM

DragonsFire, on 27 June 2013 - 08:02 AM, said:


In the traces listed above you can see a latency increase at Internaps edge router in Toronto (border1.te9-1-bbnet2.tor001.pnap.net). This latency appears to be carried over into the servers as well. Unfortunately as was noted earlier, ICMP is a first level troubleshooting tool that isn't always indicative of an issue and can often be misleading.

Well, going by what I remember, it seems PGI's servers are in Toronto, not Montreal like I thought. Anyhow, they should request testing on the line from where the problem happens, in both directions (toward their servers and away from them), along with the equipment, so as to isolate the problem.

Still, I wonder how these connections get made for us. I mean, I am in the NYC area; I would expect my connection to go from here through NY state to the servers, but for some reason it goes all the way to Detroit before crossing into Canada. Strange stuff.

#57 Arcturious

    Member

  • PipPipPipPipPipPipPip
  • Legendary Founder
  • 785 posts
  • Location: Canberra, Australia

Posted 27 June 2013 - 09:31 PM

For anyone interested in why certain routes are chosen and not others, here, meet the Internet (dramatic oversimplification ensues):

http://en.m.wikipedi...ateway_Protocol

It's scary sometimes just how complex this stuff is that we take for granted, with networks talking to networks within networks, all managed by different companies, organisations and governments. A truly crazy number of calculations occurs behind the scenes just for me to post this from my phone to this forum. It's frankly amazing it works at all.

I mean if we can do all this, giant stompy mechs seem actually not that unrealistic at all :)
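
To make that a bit more concrete: the route your packets take is chosen hop by hop by BGP-speaking routers according to policy, not geography. The toy Python sketch below walks through a heavily simplified slice of the BGP best-path decision; the routes, preferences, and AS numbers are invented purely for illustration, but they show why a NYC player's traffic can reasonably cross at Detroit when the Detroit-transiting route wins on local preference or AS-path length.

# Toy illustration of (part of) BGP best-path selection. Routes and AS numbers
# below are invented for illustration only; real BGP has many more tie-breakers.
from dataclasses import dataclass, field

@dataclass
class Route:
    via: str                 # human label for the exit point
    local_pref: int          # operator policy; higher wins
    as_path: list = field(default_factory=list)  # shorter wins
    med: int = 0             # lower wins (last resort in this toy)

def best_path(routes):
    # highest local_pref, then shortest AS path, then lowest MED
    return min(routes, key=lambda r: (-r.local_pref, len(r.as_path), r.med))

candidates = [
    Route(via="upstate NY peering", local_pref=100, as_path=[6453, 14085]),
    Route(via="Detroit transit",    local_pref=120, as_path=[1239, 3257, 14085]),
]
print("chosen exit:", best_path(candidates).via)
# Prints "Detroit transit": policy (local_pref) trumps the geographically
# shorter path, which is why traceroutes sometimes look like sightseeing trips.

Real BGP adds further tie-breakers (origin type, eBGP vs iBGP, IGP cost, router ID), but local policy trumping geography is the usual reason for scenic routes.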

#58 DragonsFire

    Member

  • PipPipPipPipPipPipPip
  • Wrath
  • 655 posts

Posted 27 June 2013 - 09:46 PM

Merchant, on 27 June 2013 - 08:13 PM, said:

Well, going by what I remember, seems like PGI's servers are in Toronto, not Montreal like I thought. Anyhow, they should request testing on the line from where the problem happens in both directions, to their servers and away from it, along with the equipment so to isolate the problem.

Still I wonder how these connections get made for us. I mean I am in the NYC area, I would expect my connection to go from here through NY state to the servers but for some reason it goes all the way to Detroit before crossing into Canada. Strange stuff.


From a testing perspective, it would generally depend on the agreement signed with the SP (service provider), the colo host, or in some cases both. They might be able to request a test of sorts, but they need metrics and an idea of where issues are cropping up for it to be of any use. On top of that, the unfortunate thing in a colo situation is that if they request potentially intrusive testing, there is a chance of impact not only to their service, but to any other services hosted there that traverse the router under test. This becomes a potential legal mucky muck. That's not to say it can't be done, but I'm just hoping to give an idea that it's not something that can be easily requested and performed. :)

Also, as Arcturious pointed out, the internet can be a quirky place. The reason for the path you take to the servers in Toronto comes down to a number of variables. Some of these are within the control of your SP; many of them are not. In the end, the idea is to get your traffic to its destination as fast as possible, and generally that's done pretty well. Sometimes, though, we don't get all the routers aligned correctly, and we miss the shot. I promise you, it's something we do work on daily either way!

#59 Deathlike

    Member

  • PipPipPipPipPipPipPipPipPipPipPipPipPipPip
  • Littlest Helper
  • 29,240 posts
  • Location: #NOToTaterBalance #BadBalanceOverlordIsBad

Posted 27 June 2013 - 10:29 PM

If someone would look into this IP: 70.42.29.75
AFAIK, this is the IP that seems to be having issues, causing me to be "dumped into the mechlab". It is a source of rage for me...

This IP is noted in the omicron logs, with respect to my connection to the server...

Edited by Deathlike, 27 June 2013 - 10:33 PM.


#60 Arcturious

    Member

  • PipPipPipPipPipPipPip
  • Legendary Founder
  • 785 posts
  • Location: Canberra, Australia

Posted 28 June 2013 - 05:11 AM

Again, while this doesn't prove anything, more logs can't hurt. I noticed my ping was jumping a tiny bit tonight. Here's the trace from while I saw my ping jump (risked death to alt-tab and get this!!)

Tracing route to relay-1.mwtactics.com [70.42.29.65]
over a maximum of 30 hops:

1 2 ms 3 ms 1 ms lns20.cbr1.on.ii.net [203.16.215.189]
2 4 ms 4 ms 3 ms lns20.cbr1.on.ii.net [203.16.215.189]
3 3 ms 6 ms 5 ms gi4-0-1.bdr1.cbr1.on.ii.net [150.101.160.77]
4 3 ms 4 ms 4 ms xe-0-0-0.cr1.cbr2.on.ii.net [150.101.33.84]
5 7 ms 6 ms 6 ms ae2.br1.syd4.on.ii.net [150.101.33.22] - Sydney :D
6 153 ms 154 ms 155 ms te0-0-0-1.br1.lax1.on.ii.net [203.16.213.69] - I arrive in the US of A!
7 182 ms 182 ms 182 ms sl-st30-la-.sprintlink.net [144.223.30.1]
8 154 ms 154 ms 155 ms sl-crs2-ana-0-15-0-0.sprintlink.net [144.232.19.226]
9 162 ms 163 ms 163 ms 144.232.25.79
10 200 ms 196 ms 194 ms 144.232.7.142
11 161 ms 161 ms 162 ms sl-tisca1-369951-0.sprintlink.net [144.228.111.22]
12 264 ms 294 ms 266 ms xe-5-0-0.tor10.ip4.tinet.net [141.136.107.66] - Hits Toronto, some jitter
13 230 ms 229 ms 228 ms internap-gw.ip4.tinet.net [77.67.70.94]
14 234 ms 226 ms 226 ms border1.te7-1-bbnet1.tor001.pnap.net [70.42.24.132]
15 261 ms 231 ms 260 ms relay-1.mwtactics.com [70.42.29.65] - Again a little jitter

tracert 70.42.29.65

Tracing route to relay-1.mwtactics.com [70.42.29.65]
over a maximum of 30 hops:

2 2 ms 1 ms 1 ms lns20.cbr1.on.ii.net [203.16.215.189]
3 2 ms 2 ms 3 ms gi4-0-1.bdr1.cbr1.on.ii.net [150.101.160.77]
4 5 ms 4 ms 3 ms xe-0-0-0.cr1.cbr2.on.ii.net [150.101.33.84]
5 6 ms 6 ms 6 ms ae2.br1.syd4.on.ii.net [150.101.33.22] - Whee, over the Ocean I go!
6 153 ms 153 ms 154 ms te0-0-0-1.br1.lax1.on.ii.net [203.16.213.69] - Arrive in LAX after a long and weary flight
7 184 ms 182 ms 182 ms sl-st30-la-.sprintlink.net [144.223.30.1]
8 157 ms 156 ms 155 ms sl-crs2-ana-0-15-0-0.sprintlink.net [144.232.19.226]
9 178 ms 181 ms 164 ms 144.232.25.79
10 193 ms 193 ms 194 ms 144.232.7.142
11 173 ms 163 ms 161 ms sl-tisca1-369951-0.sprintlink.net [144.228.111.22]
12 272 ms 335 ms 255 ms xe-5-0-0.tor10.ip4.tinet.net [141.136.107.66] - Ruh Ro!
13 228 ms 228 ms 230 ms internap-gw.ip4.tinet.net [77.67.70.94]
14 229 ms 244 ms 227 ms border1.te7-1-bbnet1.tor001.pnap.net [70.42.24.132]
15 265 ms 232 ms 232 ms relay-1.mwtactics.com [70.42.29.65] - So some random jitter

Now, this pingtest I also did doesn't show much, other than that I probably shouldn't be seeing such jitter to other networks in that area:

http://www.pingtest....lt/83121308.png

So, again, this is not really conclusive, and I've already sent very similar results to the support guys, so none of this data is new. It might just help others here see if they experience similar results from other countries / networks.

Would be awesome if a local Canadian could do a similar test and see if they see any jitter - as you can see from my results, I can go across Australia without any issues, so a Canadian should have no problems. If they see any jump over 10 ms on their local traffic, though, something may be up.
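
For anyone who wants to contribute comparable numbers without babysitting a tracert window, here's a rough sketch of that kind of test (it assumes a Unix-style ping binary and Python 3; the 10 ms threshold is just the figure mentioned above). It pings the relay repeatedly and reports the average, the spread, and how many samples jumped by more than 10 ms from the previous one.

#!/usr/bin/env python
# Rough sketch: repeatedly ping the MWO relay and summarize jitter, so results
# from different countries/ISPs can be compared. Assumes a Unix-style ping.
import re
import subprocess
import statistics
import time

TARGET = "relay-1.mwtactics.com"
SAMPLES = 60
JUMP_THRESHOLD_MS = 10.0   # the "anything over 10 ms locally is suspicious" figure above

def rtt_ms(host):
    out = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                         capture_output=True, text=True).stdout
    m = re.search(r"time[=<]([\d.]+)\s*ms", out)
    return float(m.group(1)) if m else None

rtts, jumps, prev = [], 0, None
for _ in range(SAMPLES):
    r = rtt_ms(TARGET)
    if r is not None:
        if prev is not None and abs(r - prev) > JUMP_THRESHOLD_MS:
            jumps += 1
        rtts.append(r)
        prev = r
    time.sleep(1)

if rtts:
    print(f"{TARGET}: n={len(rtts)}  avg {statistics.mean(rtts):.1f} ms  "
          f"min {min(rtts):.1f}  max {max(rtts):.1f}  "
          f"spread {max(rtts)-min(rtts):.1f} ms  >{JUMP_THRESHOLD_MS:.0f} ms jumps: {jumps}")
else:
    print("no replies received")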




