The Unreal engine also uses "portals" between rooms, where each room is either entirely visible or entirely invisible.
The two approaches have different pros and cons:
BSP is theoretically elegant, but requires a fairly expensive "tree walk" step for each frame. This walk jumps around memory at random and is inefficient on modern CPUs, because the branch predictor is ineffective and the memory layout is typically not contiguous. It is even worse on GPUs. Doom was written in 1993, when this was much less of a concern, but with Quake and later engines it became a scalability bottleneck.
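To make the pointer-chasing concrete, here's a minimal sketch of a front-to-back BSP walk. The node layout is hypothetical (2D splitting lines, not Doom's or Quake's actual structures), but it shows the per-node data-dependent branch and the two pointer loads that defeat branch predictors and caches:

```c
#include <stddef.h>

/* Hypothetical node layout -- real engines pack this differently. */
typedef struct BspNode {
    float nx, ny, d;             /* 2D splitting line: nx*x + ny*y = d    */
    struct BspNode *front, *back;
    int polygon_id;              /* polygon lying on this node's splitter */
} BspNode;

/* Emit polygon ids front-to-back relative to the camera, appending into
 * out[] starting at `count`; returns the new count. Every node costs a
 * data-dependent branch (which side is the camera on?) plus a pointer
 * load whose target address is unknown until the node itself is loaded. */
int bsp_walk(const BspNode *n, float cam_x, float cam_y, int *out, int count) {
    if (!n) return count;
    float side = n->nx * cam_x + n->ny * cam_y - n->d;
    const BspNode *first  = (side >= 0.0f) ? n->front : n->back;
    const BspNode *second = (side >= 0.0f) ? n->back  : n->front;
    count = bsp_walk(first, cam_x, cam_y, out, count);   /* camera's side  */
    out[count++] = n->polygon_id;                        /* splitter plane */
    return bsp_walk(second, cam_x, cam_y, out, count);   /* far side       */
}
```

The recursion itself is trivial; the cost is that each step's memory address depends on the previous load, so the CPU cannot run ahead.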
BSP maps require a very compute-intensive pre-processing step. Back then this limited modding and third-party maps because a very high-end PC was needed to compile a map. This processing also had numerical precision issues: theoretically precision isn't a problem, but in practice it is. Designers had to be careful to make sure the map was completely closed ("airtight") and that the BSP process didn't create thin triangles. This meant that map design tools had to enforce all sorts of constraints on alignment and snapping to prevent the geometry "blowing up". A gotcha is that BSP splits polygons, and hence the processing eats into the map geometry polygon budget!
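The polygon-budget gotcha comes from the splitting step itself: every time a splitting plane crosses a polygon, that polygon becomes two. A rough 2D sketch of the split (this is my own illustration, not code from any actual BSP compiler):

```c
typedef struct { float x, y; } Vec2;

/* Split a convex 2D polygon by the line nx*x + ny*y = d.
 * Vertices with nx*x + ny*y - d >= 0 go to front[], the rest to back[];
 * an edge crossing the line contributes an interpolated point to both
 * halves. Note the output halves together have MORE vertices than the
 * input -- this is where the polygon budget gets eaten, and where the
 * t = da / (da - db) division picks up floating-point precision error. */
void split_poly(const Vec2 *poly, int n,
                float nx, float ny, float d,
                Vec2 *front, int *front_n,
                Vec2 *back,  int *back_n) {
    *front_n = *back_n = 0;
    for (int i = 0; i < n; i++) {
        Vec2 a = poly[i], b = poly[(i + 1) % n];
        float da = nx * a.x + ny * a.y - d;
        float db = nx * b.x + ny * b.y - d;
        if (da >= 0) front[(*front_n)++] = a;
        if (da < 0)  back[(*back_n)++]  = a;
        if ((da < 0) != (db < 0)) {          /* edge crosses the line */
            float t = da / (da - db);
            Vec2 p = { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y) };
            front[(*front_n)++] = p;
            back[(*back_n)++]  = p;
        }
    }
}
```

One split of a quad yields two quads; a deep tree applies this recursively, which is why a carelessly placed brush could double chunks of the geometry.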
Portals are much simpler to implement, and each room can scale to any number of triangles. They're much more efficient with modern CPUs and GPUs, because each room can be sent as a single draw call ("batch" processing) instead of a list of individual triangles.
The problem with portal-based techniques is that there are sudden step-changes in the number of triangles needed to draw each frame as the camera moves. This can result in stuttering and sudden framerate drops. It can also blow through triangle budgets.
A common technique is to limit portal traversal depth in some way, which would cause "black" doorways in the distance and other similar artefacts. Similarly, certain map geometries were poorly supported, such as complex outdoor spaces with visibility from many directions into many rooms.
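The depth-limited traversal is simple to sketch. This is a bare-bones illustration of the idea, assuming rooms connected by portals and skipping the frustum clipping a real engine would do at each portal:

```c
#include <stddef.h>

#define MAX_PORTALS 8

typedef struct Room {
    int id;
    struct Room *neighbors[MAX_PORTALS]; /* rooms reachable via portals */
    int portal_count;
} Room;

/* Mark every room within `depth_left` portal hops of the camera's room.
 * seen[id] != 0 means visible. Real engines also clip each portal against
 * the accumulated view frustum; that test is omitted here, so this is a
 * pure depth-limited flood fill. Rooms beyond the cutoff stay unmarked --
 * those are the "black doorways" in the distance. */
void mark_visible(const Room *r, int depth_left, int *seen) {
    if (seen[r->id] >= depth_left + 1) return; /* already expanded with a
                                                  larger hop budget */
    seen[r->id] = depth_left + 1;
    if (depth_left == 0) return;               /* traversal-depth cutoff */
    for (int i = 0; i < r->portal_count; i++)
        mark_visible(r->neighbors[i], depth_left - 1, seen);
}
```

The outdoor-space problem also drops out of this sketch: a large open area with sightlines into many rooms means the traversal fans out through many portals at once, so the visible set (and the triangle count) explodes instead of staying bounded by a corridor.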
I remember building Unreal maps as a teen. I didn't really understand BSP back then, and online discussions regarding mapping were full of cargo-culting weird methods to avoid "BSP holes". They were basically holes in the level geometry that sometimes only appeared at specific camera angles, and not just in the distance - sometimes right in the corner of the room you're standing in. There were also two sliders in the dialog where you rebuilt the BSP, but I forgot what they were called, and at least back then I didn't understand what they did anyway. Changing them slightly could resolve the issue - only for you to realize there's now a new hole somewhere else after 5 minutes.
> Doom was written in 1993 when this was much less of a concern, but with Quake and later engines it became a scalability bottleneck.
Worth clarifying that the 386 and 486 had no branch predictor (the 486 was pipelined, but branch prediction only arrived with the Pentium). The Pentium was released during Doom's development, and essentially no home user owned one.
Quake made extensive use of the Pentium's faster FPU, so they didn't optimize for earlier CPUs.
As an aside, this is also why people keep saying that functional languages are "not much slower" than traditional procedural languages against all evidence.
People who learned and like LISP probably did so in the 1980s or 1990s, when CPUs were much simpler, memory access times were constant, and "cache lines" weren't even a thing yet! Back then, randomly jumping around memory following a linked list was only about 2x slower than reading an array. Linked lists were much faster for some incremental operations, so the net result was that the two were very comparable.
These days arrays can be processed using AVX-512 instructions at a rate of 64 bytes per clock or even higher, whereas even one missed cache entry of a linked-list pointer will stall the CPU for several hundred cycles, during which time the array logic could have chunked through kilobytes.
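The access-pattern difference is easy to see in code. Both loops below compute the same sum; the point is the dependency structure, not these particular functions (which are just my illustration):

```c
#include <stddef.h>

typedef struct Node { int value; struct Node *next; } Node;

/* Sequential reads: addresses are known in advance, so the hardware
 * prefetcher streams cache lines ahead and the compiler can vectorize
 * this loop (e.g. into AVX adds over whole cache lines). */
long sum_array(const int *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++) s += a[i];
    return s;
}

/* Pointer chasing: the NEXT address isn't known until the CURRENT node
 * has been loaded, so every cache miss serializes -- the CPU just waits. */
long sum_list(const Node *head) {
    long s = 0;
    for (const Node *p = head; p; p = p->next) s += p->value;
    return s;
}
```

With nodes scattered across the heap, `sum_list` pays up to one full memory round trip per element, while `sum_array` approaches memory bandwidth.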
My Symbolics 3640 Lisp Machine, a model introduced in the mid-1980s, had 20 MB RAM and 200 MB virtual memory on one or two disks. The OS alone was already a 50 MB image. The memory used by Lisp was typically much larger than the available (and/or affordable) RAM.
Access times were far from constant.
A full Garbage Collection over virtual memory took 30 minutes. That's why it had extensive support for incremental garbage collection, copying GC, manual memory management, areas for various data types, ephemeral GC with hardware support, cdr-coded lists, various types of multidimensional arrays, graphics cards with special video memory, 36 bit memory with additional ECC, ...
Before saving a new Lisp image to disk, one was using a special command to reorder objects in memory to optimize access times. That command also took 20-30 minutes.
Lisp Machines like this were developed because Lisp systems on mainframes or minicomputers were thrashing the machines during GC, making performance hell for Lisp users and other time-sharing users. Thus a personal machine was needed where the memory and CPU belong to one user. They were among the very first commercially available GUI-based personal workstations in 1981.
Sure, machines with virtual memory and caches were available in small numbers for a long time, but from the 1980s on the vast majority of programming was done on PCs. The first Intel PC processor with an on-chip cache was the 486.
I learned LISP on systems like an IBM mainframe, DEC-10 with TOPS-10, the DEC VAX 11/780 with VMS, SUN 3 & 4, Symbolics 3600/3640, ... all these had large virtual memories, with the resulting feature that the speed of memory access varies widely between cached memory, main memory and virtual memory.
I doubt that the early small-memory Intel-based machines played much of a role for Lisp programmers or even education (other than being used as a terminal to a different machine). Lisp was memory hungry, due to the interactive use (with resident dev tools) of a garbage-collected heap. The 386 supported virtual memory, and various other ways to extend the available RAM were used on early Intel machines under the various DOS variants...