
If you want to avoid opening Twitter (now known as X), here is the link to the paper: https://arxiv.org/pdf/1502.03808

TL;DR: "A typical IMAX image has 23 million pixels, and for Interstellar we had to generate many thousand images, so DNGR had to be very efficient. It has 40,000 lines of C++ code and runs across Double Negative's Linux-based render-farm. Depending on the degree of gravitational lensing in an image, it typically takes from 30 minutes to several hours running on 10 CPU cores to create a single IMAX image. Our London render-farm comprises 1633 Dell-M620 blade servers; each blade has two 10-core E5-2680 Intel Xeon CPUs with 156GB RAM. During production of Interstellar, several hundred of these were typically being used by our DNGR code."
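For scale, a back-of-envelope sketch from the quoted figures (the per-frame cost is a mid-range guess from the "30 minutes to several hours on 10 CPU cores" line, and "several hundred" blades is assumed to mean ~300):

```python
# Back-of-envelope from the numbers quoted above (assumptions, not measurements)
blades = 1633
cores_per_blade = 2 * 10              # two 10-core Xeon E5-2680s per blade
farm_cores = blades * cores_per_blade

# "30 minutes to several hours ... on 10 CPU cores" per IMAX frame;
# assume a mid-range 2 hours x 10 cores = 20 core-hours per frame
core_hours_per_frame = 2 * 10

# "several hundred" blades in use -> assume ~300 blades dedicated to DNGR
frames_per_day = 300 * cores_per_blade * 24 / core_hours_per_frame

print(farm_cores)       # 32660 cores in the whole farm
print(frames_per_day)   # 7200.0 frames/day under these assumptions
```

So even a few hundred blades could plausibly chew through thousands of frames a day, which squares with "many thousand images" over a production schedule.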



How fast could this run on a modern CPU with a modern GPU?


GPUs are almost irrelevant for VFX work at this scale due to the memory requirements. The render nodes used for Interstellar had 156GB of RAM, and a decade later the biggest GPUs still don't have that much memory (unless you count Macs I suppose but the existing software ecosystem is very CUDA-centric).

Small VFX shops do typically render on GPUs nowadays, but the high end is still dominated by CPU rendering.
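To make the memory argument concrete: the frame buffer itself is tiny, so it must be the scene data (volumes, geometry, textures) eating the ~156GB. Rough sizing, assuming a float32 RGBA buffer:

```python
# A 23-megapixel IMAX frame at 32-bit float RGBA is small compared
# to the ~156 GB of a render node; the scene data is what fills RAM.
pixels = 23_000_000
bytes_per_pixel = 4 * 4               # 4 channels (RGBA) x 4 bytes (float32)
frame_mb = pixels * bytes_per_pixel / 1e6

print(round(frame_mb))                # ~368 MB per frame buffer
```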


As a matter of fact, Pixar's render farm is mostly CPU-based.


That goes for all the biggest players - Pixar, WDAS, Dreamworks, ILM, WETA, Framestore... all do their final rendering on CPUs. Some of them have adopted a hybrid workflow where artists can use GPU rendering to quickly iterate on small slices of data though.


> The render nodes used for Interstellar had 156GB of RAM, and a decade later the biggest GPUs still don't have that much memory

The NVIDIA H200 GPU has 141 GB of memory, and the AMD Instinct MI300X has 192 GB of HBM3.


I was thinking of their graphics cards which currently max out at 48GB, but true, their HPC accelerators do have a lot more memory now. In the case of Nvidia they would be a trade-off for rendering though since their HPC chips lack the raytracing acceleration hardware which their graphics chips have.

Besides, in the current climate I don't think VFX companies would be eager to bid against AI companies for access to H100/H200s, the VFX companies don't have infinite venture capital goldrush money to burn...


There's a nice recreation on Shadertoy:

https://www.shadertoy.com/view/lstSRS

No idea if it's doing the same simulation though.
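For a feel of what's involved, here's a minimal sketch of light bending around a black hole. This is emphatically not the DNGR code (the paper handles a spinning Kerr black hole plus beam propagation); it just integrates the null-geodesic equation for a non-spinning Schwarzschild black hole, in geometric units G = c = 1, and recovers the classic weak-field bending angle of about 4GM/(c²b):

```python
import math

def deflection(M, b, dphi=1e-4, max_steps=200_000):
    """Bending angle of a light ray with impact parameter b around a
    Schwarzschild black hole of mass M (geometric units G = c = 1).
    Integrates u'' = 3*M*u^2 - u with u = 1/r, via RK4 in phi."""
    def acc(u):
        return 3.0 * M * u * u - u

    u, v = 0.0, 1.0 / b   # start at infinity: u = 0, du/dphi = 1/b
    phi = 0.0
    for _ in range(max_steps):
        # one classic RK4 step on the state (u, v)
        k1u, k1v = v, acc(u)
        k2u, k2v = v + 0.5*dphi*k1v, acc(u + 0.5*dphi*k1u)
        k3u, k3v = v + 0.5*dphi*k2v, acc(u + 0.5*dphi*k2u)
        k4u, k4v = v + dphi*k3v,     acc(u + dphi*k3u)
        u += dphi * (k1u + 2*k2u + 2*k3u + k4u) / 6.0
        v += dphi * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
        phi += dphi
        if u < 0.0:       # ray has escaped back out to infinity
            return phi - math.pi   # total sweep minus pi = bending angle
    raise RuntimeError("ray captured or step budget exceeded")
```

With M = 1 and b = 100 this gives roughly 0.041 radians, close to the 4M/b = 0.04 weak-field prediction; the real renderer fires one such geodesic (in Kerr, backwards from the camera) for every one of the 23 million pixels.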


So beautiful, thanks for sharing.


It's a particle simulation, so a lot of it is memory management.

I can't remember the actual size of the simulation in terms of number of particles, but it was stored on something like two 150TB fileservers.

With NVMe and cheap RAM, it would probably be at the point where it'd be useful on a GPU.
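Purely illustrative arithmetic on what ~300TB of particle data could mean. Every number below except the storage total is an assumption (bytes per particle, frame count), just to get an order of magnitude:

```python
# Illustrative only: the 2 x 150TB figure is from the comment above,
# everything else is an assumed round number.
total_bytes = 2 * 150e12              # two 150TB fileservers
bytes_per_particle = 6 * 4            # assume position + velocity, float32
frames = 5000                         # assume a few thousand cached frames

particle_frames = total_bytes / bytes_per_particle
particles_per_frame = particle_frames / frames

print(particles_per_frame)            # 2.5e9 -> billions of particles
```

Billions of particles per frame under those assumptions, which is the regime where the working set, not the FLOPs, dictates the hardware.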



