
It all comes down to the definition of "faster". Standard benchmarking assumes discrete, binary computation: the idea that a finite answer is produced at some rate. Take a fire control computer on a ship. It has maybe 30 inputs, all essentially analogue dials. It combines them into a continuous analogue answer, a firing solution for the guns (elevation and azimuth). It doesn't do that "X times per second" or to a particular level of accuracy. The answer is always just there, constantly changing and available to whoever asks for it, measurable to whatever level of precision you care to read it. If you sample the output every microsecond, then it is a computer that can generate an answer every microsecond. But that says more about the method of measurement than about the speed of the machine.


It's true that we measure the speed and precision of analog "computers" differently from how we measure them for digital computers, but it does not therefore follow that analog "computers" are all infinitely fast and perfectly precise. Any analog system has a finite bandwidth; signals above some cutoff frequency are strongly attenuated and before long are indistinguishable from noise. And analog systems also introduce error, which digital computation often does not. When digital computation does introduce error, you can decrease the size of the error exponentially just by computing with more digits, and there is no equivalent approach in the analog world.
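To make the "more digits" point concrete, here is a rough Python sketch (my illustration, not anything specific to fire control): computing sqrt(2) at increasing decimal precision and comparing against a high-precision reference shows the error shrinking by roughly a factor of ten for every extra digit carried.

    # Sketch: error of a digital computation falls off exponentially with digits carried.
    from decimal import Decimal, getcontext

    getcontext().prec = 60
    reference = Decimal(2).sqrt()       # high-precision reference value

    for digits in (4, 8, 16, 32):
        getcontext().prec = digits
        approx = Decimal(2).sqrt()      # same computation, fewer digits carried
        getcontext().prec = 60
        error = abs(approx - reference)
        print(f"{digits:>2} digits -> error ~ {error:.1E}")

There is no analog knob you can turn that buys precision this cheaply; in the analog world each extra digit of accuracy has to be fought for in the hardware itself.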

For mechanical naval fire control computers, the cutoff frequency is on the order of 100 Hz and the error is on the order of 1%. You won't learn anything interesting by sampling them every microsecond that you wouldn't learn by sampling them every millisecond.
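A rough numerical sketch of that sampling claim, taking the figures above as assumptions (useful content below ~100 Hz, ~1% mechanical error) and inventing a toy "firing solution" signal: 1 kHz samples already let you reconstruct the signal to well under the mechanism's own error, so sampling a thousand times faster only measures the same ~1%-accurate answer more often.

    import numpy as np

    def solution(t):
        # Toy band-limited "firing solution": a couple of components below 100 Hz.
        return np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 40 * t + 1.0)

    fs = 1_000                           # 1 kHz, comfortably above the 200 Hz Nyquist rate
    t_slow = np.arange(0, 1.0, 1 / fs)   # one second of 1 kHz samples
    samples = solution(t_slow)

    # Whittaker-Shannon (sinc) reconstruction on a 10x finer grid, away from the
    # edges of the record, compared against evaluating the signal directly.
    t_fine = np.arange(0.2, 0.8, 1e-4)
    reconstructed = np.sinc(fs * (t_fine[:, None] - t_slow[None, :])) @ samples
    max_err = np.max(np.abs(reconstructed - solution(t_fine)))

    print(f"max reconstruction error from 1 kHz samples: {max_err:.1e}")
    # Comes out well under the ~1% error of the mechanism itself, so a finer
    # sample rate only re-measures the same answer.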



