Hacker News

Ahhh, interesting. That looks to me like it's more appropriate for "multiple threads working on independent problems" than "multiple threads working on the same problem" due to the potential serialization overhead and limitations.

    Each thread has its own lua_State. However, we provide a serialization scheme which allows automatic sharing for several Torch objects (storages, tensors and tds types). Sharing of vanilla Lua objects is not possible, but instances of classes that support serialization (e.g. classic objects using require 'classic.torch', or those created with torch.class) can be shared. Remember, though, that only the memory in tensor storages and tds objects will be shared by the instances; other fields will be copies. Also, if synchronization is required, it must be implemented by the user (i.e. with a mutex).
That's not general-purpose multi-processor threading; it solves the problem only for a very specific sub-case.
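To illustrate the distinction (this is my analogy, not Torch's API): Python's multiprocessing has the same shape as the model those docs describe. Each worker gets its own interpreter state, ordinary objects are serialized (copied) across the boundary, and only explicitly shared-memory buffers behave like Torch's shared tensor storages. A minimal sketch:

```python
# Analogy only: each worker has its own state; vanilla objects are
# copied across the boundary, while mp.Array is backed by shared
# memory (playing the role of a shared tensor storage).
import multiprocessing as mp

def worker(shared, plain, idx):
    shared[idx] = 42   # visible to the parent: shared-memory backing
    plain[idx] = 42    # invisible to the parent: 'plain' was copied

def demo():
    shared = mp.Array('i', [0, 0, 0])  # shared across processes
    plain = [0, 0, 0]                  # vanilla object: per-worker copy
    procs = [mp.Process(target=worker, args=(shared, plain, i))
             for i in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return list(shared), plain

if __name__ == "__main__":
    print(demo())
```

The mutations to the shared buffer survive; the mutations to the plain list vanish with each worker's copy, which is exactly the "only storages are shared, other fields are copies" caveat from the quoted docs.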

