
I think those cases are a minority, and in most cases you would have been fine with just slightly larger ints. It doesn't seem to cause any issue in Python, for example, because the cases where integers grow exponentially (which is the same as making their size grow linearly, so it's still slow memory-wise) and without bound are not all that common.

For example, an overflow bug was famously present in the Java standard library for a very long time: in a core function performing a binary search, the midpoint of the two bounds was computed as a simple average, (max + min)/2, which overflows when max and min are big enough, even though the final result always fits in a standard integer.
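A minimal sketch of that midpoint bug and the standard fix (the method names here are mine, not the JDK's):

```java
public class Midpoint {
    // Buggy: (low + high) overflows when both are near Integer.MAX_VALUE,
    // yielding a negative "midpoint" even though the true result fits in an int.
    static int midBuggy(int low, int high) {
        return (low + high) / 2;
    }

    // Fixed: compute the offset from low, which never overflows for 0 <= low <= high.
    // Equivalently: (low + high) >>> 1, which treats the wrapped sum as unsigned.
    static int midSafe(int low, int high) {
        return low + (high - low) / 2;
    }

    public static void main(String[] args) {
        int low = Integer.MAX_VALUE - 1, high = Integer.MAX_VALUE;
        System.out.println(midBuggy(low, high)); // negative: the sum wrapped around
        System.out.println(midSafe(low, high));  // 2147483646, the correct midpoint
    }
}
```

The point stands either way: the inputs and the answer all fit comfortably in 32 bits; only the intermediate sum doesn't.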

In a lot of other cases, integers are used to address memory (or to represent array indexes, etc.), so in most code they will naturally be bounded by the available amount of memory, even if intermediate computations can temporarily overflow.



Converting a 'numeric overflow exception' into a 'you eventually run out of memory' failure is much worse, because exhausting memory causes system-wide impact instead of just killing the individual process handling a request/operation. OOM is also generally handled far less gracefully than an exception thrown at the point where the data first exceeds the expected range. Do you really want to kick off the Linux OOM killer every time someone uploads a corrupted PNG to your server?

In general, it's much better to catch out-of-range/corrupt data early, and silently promoting to a 256-bit int effectively hides corrupt data in the majority of cases. The maximum 64-bit int is really big.
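One way to get that "fail at the point of corruption" behavior in Java, for what it's worth, is the checked-arithmetic methods on java.lang.Math, which throw instead of silently wrapping:

```java
public class Checked {
    public static void main(String[] args) {
        long a = Long.MAX_VALUE, b = 1;
        try {
            // Math.addExact throws ArithmeticException on overflow
            // instead of wrapping around to a negative value.
            long sum = Math.addExact(a, b);
            System.out.println(sum);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected: " + e.getMessage());
        }
    }
}
```

The exception fires in the one process handling the bad input, right where the value left its expected range, instead of turning into a machine-wide memory problem later.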




