Wikipedia says “the peak angular speed of the eye during a saccade reaches up to 900°/s in humans”.
For a reality-like experience, you need sub-millisecond rendering latency. For a moderately complex 3D scene, current GPUs can't do anything close to that.
Modern mouse sensors can track ridiculous speeds and accelerations; 6000-10000 frames per second is the norm nowadays. The eye might rotate fast, but your brain turns off the picture during saccades (saccadic masking).
You’re right, a low-latency sensor is a minor problem. Low-latency rendering is much harder to solve.
A good-looking 3D scene usually takes around 10-15 milliseconds to render. For the past few decades, GPUs have been optimized for throughput, not for lower latency.
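A quick back-of-the-envelope calculation shows why that render time is a problem against saccade speeds. The numbers below just combine the figures already quoted in this thread (900°/s peak saccade speed, 10-15 ms render time) with the commonly cited ~2° width of the high-acuity fovea:

```python
# Back-of-the-envelope check using the figures quoted above.
peak_saccade_speed = 900.0   # degrees per second (Wikipedia figure)
render_time_ms = 12.0        # middle of the 10-15 ms range

# How far can the gaze move while one frame is still rendering?
gaze_drift = peak_saccade_speed * (render_time_ms / 1000.0)
print(f"gaze moves ~{gaze_drift:.1f} degrees during one render")  # ~10.8 degrees

# The high-acuity fovea covers only about 2 degrees of the visual field,
# so a frame foveated at the old gaze point lands well off target.
fovea_deg = 2.0
print(f"that is ~{gaze_drift / fovea_deg:.0f}x the foveal width")
```

So during a single 12 ms render, the gaze can sweep several times the width of the region you were trying to render in high detail.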
One common way existing VR products achieve lower perceived latency: render into a larger buffer, then shift the result to account for the head rotation that happened since rendering started, and present that to the headset.
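The shift-and-present trick can be sketched roughly like this. It's a toy model, not any real headset API: the buffer contents, the margin size, and the pixels-per-degree factor are all made up for illustration.

```python
# Toy sketch of the "render big, then shift" trick described above.
# All names and numbers here are illustrative, not a real VR API.
PIXELS_PER_DEGREE = 20          # assumed display density
VIEW_W, VIEW_H = 8, 4           # tiny viewport for the example
MARGIN = 40                     # extra pixels rendered on each side

def render_oversized(w, h):
    """Stand-in for the slow scene render: returns an h-by-w 'image'
    whose 'pixels' are just their (x, y) coordinates."""
    return [[(x, y) for x in range(w)] for y in range(h)]

def reproject(buffer, yaw_delta_deg, pitch_delta_deg):
    """Cheap reprojection: crop the oversized buffer, offset by the
    head rotation that happened while the frame was rendering."""
    dx = MARGIN + round(yaw_delta_deg * PIXELS_PER_DEGREE)
    dy = MARGIN + round(pitch_delta_deg * PIXELS_PER_DEGREE)
    return [row[dx:dx + VIEW_W] for row in buffer[dy:dy + VIEW_H]]

big = render_oversized(VIEW_W + 2 * MARGIN, VIEW_H + 2 * MARGIN)
# Head turned 1 degree right during the ~12 ms render:
frame = reproject(big, yaw_delta_deg=1.0, pitch_delta_deg=0.0)
print(frame[0][0])  # top-left pixel now sampled from a shifted region
```

The crop is nearly free compared to a full re-render, which is why this works for head rotation. The next comment explains why it doesn't help with gaze-dependent detail.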
With dynamic LOD based on eye tracking, this trick won't work. To show better detail at the center of the view, you have to actually re-render the scene with better LODs/textures near the gaze point. And that, my friend, is going to take 10-15 ms.