
You call malloc/free, or, when working with GPUs, cudaMalloc/cudaFree. You can write your own memory pools or object pools, you can use destructors, and you can even implement your own reference-counting scheme.

This is what I use for my own multithreading runtime in Nim, and the memory subsystem makes it faster and more robust than any runtime I've benchmarked against, including OpenMP and Intel TBB. Memory subsystem details here: https://github.com/mratsim/weave/tree/master/weave/memory

An example of atomic refcounting is in this PR: https://github.com/mratsim/weave/blob/025387510/weave/dataty...

Also, one important thing: Nim's current GC is based on TLSF (http://www.gii.upv.es/tlsf/), a memory allocator designed for real-time systems that provides provably bounded O(1) allocation. You can also tune the Nim GC with a maximum pause time for latency-critical applications.



Does the standard library use malloc/free, or does it depend on the GC? This is the part that's puzzling to me: if the stdlib depends on the GC, then it's harder to say that the GC is optional. Technically optional, but not very practical.


The majority of stdlib modules do not depend on the GC.

Also, the new ARC memory manager replaces GC and can run in a kernel.


No, that's not true.

As soon as you use sequences, strings, or async, you depend on the GC.

You can, however, compile with --gc:destructors or --gc:arc so that those are managed via RAII.


I have used various modules with --gc:none.

I meant that the new ARC GC, which will replace the current one, can be used in a kernel.

It's still a GC, technically, but, quoting Araq on ARC:

Nim is getting the "one GC to rule them all". However calling it a GC doesn't do it justice, it's plain old reference counting with optimizations thanks to move semantics.



