There's also a difference between a reaction time measurement and noticing input latency. Yes, the speed limits of action potentials may mean that it takes a quarter of a second, even acting on near-instinct, to receive an image, process it, and, longest of all, actually move my finger and click the mouse button. Signal propagation in the human body varies all over the place, but it can literally be less than a meter per second in some cases, as I recall (wrong side of biology for me, so those courses were a long ways back, but still).
That said, if I move a mouse and it takes a quarter of a second for the screen to register it, I am not going to be oblivious to that. That's why microstutter and frame pacing were such big deals. Companies didn't take them seriously for years, because they weren't measurable in a lab setting until things like FCAT came along, but jitters only dozens of milliseconds long were painfully noticeable, frame pacing issues of only a handful of milliseconds were at least bad, and cases where input lag was made worse became instantly noticeable. You notice this far more on a mouse/KB than with other control schemes because a mouse is a zero-order controller: cursor position maps directly to hand position, so any lag between the two is immediately visible.
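The measurement side is conceptually simple once you can actually capture per-frame timestamps, which is what FCAT-class tools made practical. Here's a minimal sketch of the kind of frame-time jitter analysis involved; the timestamps are made-up numbers for illustration, not real capture data:

// Rough sketch of frame-time jitter analysis, the sort of thing
// FCAT-style tools automate. Assumes you already have per-frame
// presentation timestamps; here they're hardcoded for illustration
// (60 fps nominal, with one ~30 ms stutter in the middle).
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> timestamps = {0.0, 16.7, 33.3, 50.0, 80.0, 96.7, 113.3};

    // Frame time = gap between consecutive presentation timestamps.
    std::vector<double> frametimes;
    for (size_t i = 1; i < timestamps.size(); ++i)
        frametimes.push_back(timestamps[i] - timestamps[i - 1]);

    double mean = 0.0;
    for (double ft : frametimes) mean += ft;
    mean /= frametimes.size();

    // Jitter here = deviation of each frame from the mean frame time.
    // A steady cadence feels smooth; 16-16-16-30-16 does not, even
    // though the average frame rate is nearly the same.
    for (size_t i = 0; i < frametimes.size(); ++i)
        std::printf("frame %zu: %.1f ms (deviation %+.1f ms)\n",
                    i + 1, frametimes[i], frametimes[i] - mean);
    return 0;
}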
You also can't compare network latency to input latency, because client actions are not locked to server-side communication, time-wise. In other words, if I twitch left in a shooter, the screen doesn't sit there until the computer registers the movement, communicates with the server, and gets back a confirmation of the movement. No, you twitch, you see yourself twitch, and the server catches up a handful of milliseconds later, so others see you with slight latency, but you yourself are not subject to that latency (and then various neat tricks like HSR, Host State Rewind, are used to continuously keep things synchronized).
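The idea is usually called client-side prediction. A toy sketch of it, with made-up names and timing rather than MWO's actual netcode:

// Toy sketch of client-side prediction: the client applies its own
// input immediately and only later reconciles with the server's
// authoritative state. All names and numbers here are invented for
// illustration; real netcode (including HSR) is far more involved.
#include <cstdio>

struct State { double x = 0.0; };

int main() {
    State predicted;        // what you see on your own screen
    State serverConfirmed;  // what the server has acknowledged

    double input = -1.0;    // twitch left

    // Frame N: apply input locally right away; no waiting on the network.
    predicted.x += input;
    std::printf("you see yourself at x = %.1f immediately\n", predicted.x);

    // ~50 ms later: the server's ack arrives and the authoritative
    // state catches up to what you already saw.
    serverConfirmed.x += input;
    std::printf("server confirms x = %.1f after the round trip\n",
                serverConfirmed.x);

    // Other players only ever see serverConfirmed (plus interpolation),
    // so they see you with slight latency; you never waited on it.
    return 0;
}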
TVs are not unplayably bad in many games, and I agree that for MWO it's at least going to be tolerable, but it is not ideal. Do it if you have to, but don't strive for that setup.