Hacker News

> 3. It compares GC to "ideal" manual memory management, ignoring the fact that manual memory management is not free either. Things like heap fragmentation or computational cost of running allocation/deallocation code do exist and may cause also some real problems. The costs lie elsewhere, but that doesn't mean they don't exist.

No, it doesn't; it uses malloc and free, counting the costs of allocation and deallocation using a traditional memory allocator. (In fact, if anything, that's unfair to traditional memory allocators, as high-performance code will often use things like bump-allocating arenas which significantly outperform malloc and free. It also uses dlmalloc, which is outpaced by jemalloc/tcmalloc these days, although if the benchmarks are not multithreaded the difference will be small.)
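For anyone unfamiliar with the bump-allocating arenas mentioned above, here is a minimal sketch in C (all names are illustrative, not from any particular library): allocation is just a pointer increment, and the whole arena is released with a single free, which is why arenas can significantly outperform general-purpose malloc/free.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Minimal bump-allocating arena (illustrative sketch). */
typedef struct {
    uint8_t *base;   /* start of the backing buffer */
    size_t   used;   /* bytes handed out so far     */
    size_t   cap;    /* total capacity              */
} Arena;

static int arena_init(Arena *a, size_t cap) {
    a->base = malloc(cap);
    a->used = 0;
    a->cap  = cap;
    return a->base != NULL;
}

/* Allocation is a bounds check plus a pointer bump. */
static void *arena_alloc(Arena *a, size_t n) {
    size_t aligned = (n + 15) & ~(size_t)15;   /* 16-byte alignment */
    if (a->used + aligned > a->cap) return NULL;
    void *p = a->base + a->used;
    a->used += aligned;
    return p;
}

/* One call releases every allocation made from the arena. */
static void arena_free_all(Arena *a) {
    free(a->base);
    a->base = NULL;
    a->used = a->cap = 0;
}
```

There is no per-object free at all, which is exactly the trade-off: it only works when all the objects share a lifetime.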

Heap fragmentation exists in GC systems as well, and fragmentation in modern allocators like jemalloc is very small.



Ok, point taken; however, their cost analysis is based on "simulated cycles", which is an extreme simplification. With modern CPUs doing caching, prefetching, and out-of-order execution, I seriously doubt it's accurate. malloc/free typically tend to scatter objects across the whole heap, while compacting GCs allocate from a contiguous region, so a properly designed research experiment would take that into account. Hans Boehm ran experiments on real programs and found that compacting GCs actually sped up some programs because of better cache friendliness.
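The locality gap between a compacted heap and a malloc-scattered one is easy to see in a toy setup (a sketch only, not the paper's methodology; all names here are made up). The same linked list is built two ways: nodes laid out back-to-back in one block, as a compacting GC would leave them, versus one malloc per node with padding allocations interleaved to spread nodes around the heap.

```c
#include <stddef.h>
#include <stdlib.h>

/* The same linked list built two ways, to compare heap locality. */
typedef struct Node { struct Node *next; long val; } Node;

/* Nodes adjacent in one block, as after GC compaction. */
static Node *build_contiguous(long n) {
    Node *arr = malloc((size_t)n * sizeof *arr);
    if (!arr) return NULL;
    for (long i = 0; i < n; i++) {
        arr[i].val  = i;
        arr[i].next = (i + 1 < n) ? &arr[i + 1] : NULL;
    }
    return arr;
}

/* One malloc per node, with a leaked padding allocation between each
   to scatter nodes across the heap (a crude stand-in for real churn). */
static Node *build_scattered(long n) {
    Node *head = NULL;
    for (long i = n - 1; i >= 0; i--) {
        Node *node = malloc(sizeof *node);
        if (!node) return head;
        node->val  = i;
        node->next = head;
        head = node;
        (void)malloc(48);   /* padding, intentionally leaked */
    }
    return head;
}

static long sum_list(const Node *head) {
    long s = 0;
    for (const Node *p = head; p; p = p->next) s += p->val;
    return s;
}
```

Timing `sum_list` over a few million nodes of each layout (e.g. with `clock()`) will typically show the contiguous version winning by a wide margin on a cache-based CPU, which is exactly the effect a simulated-cycle model can miss.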

As for heap fragmentation: it does not exist in some GC systems, such as the compacting G1 or C4 collectors. Fragmentation is also extremely workload dependent; it might be "very small" in most cases yet as much as 5x in others (Firefox struggled a lot with this).
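The kind of external fragmentation a compacting collector eliminates can be shown with a tiny first-fit simulation (purely illustrative; slot-based rather than byte-based): after freeing every other block in a full heap, half the space is free, yet no two-slot allocation can succeed because no hole is big enough. A compactor would slide the live blocks together and make the same request trivially satisfiable.

```c
#include <stdbool.h>

#define SLOTS 16                /* simulated heap: 16 fixed-size slots */
static bool used[SLOTS];

/* First-fit: find n adjacent free slots, mark them used, return the
   start index, or -1 if no sufficiently large hole exists. */
static int sim_alloc(int n) {
    for (int i = 0; i + n <= SLOTS; i++) {
        int j = 0;
        while (j < n && !used[i + j]) j++;
        if (j == n) {
            for (j = 0; j < n; j++) used[i + j] = true;
            return i;
        }
    }
    return -1;
}

static void sim_free(int i, int n) {
    for (int j = 0; j < n; j++) used[i + j] = false;
}

static int free_slots(void) {
    int c = 0;
    for (int i = 0; i < SLOTS; i++) if (!used[i]) c++;
    return c;
}
```

The same checkerboard pattern is why total free space is a misleading metric for non-compacting allocators: what matters is the size distribution of the holes.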



