What can we do to make rent-seeking hurt society less? IMO we should start by decoupling money from power. Right now, people are forced to participate in the rent-seeker's game because his wealth implies power over them.
In some areas, sales tax is actually multiple taxes from different overlapping jurisdictions: state, county, city, and sometimes a special tax district. ZIP codes don't align with any of these, so you need to know exactly where the buyer is in order to properly calculate sales tax.
There are places where adjacent addresses in the same ZIP code have different sales tax rates.
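As a sketch of why the exact location matters, here's a hypothetical rate table (addresses and rates invented) where two neighboring addresses fall into different sets of jurisdictions:

```go
package main

import "fmt"

// Hypothetical rate table: a full address resolves to the set of
// overlapping jurisdictions it sits in, each contributing its own rate.
// A ZIP-code lookup can't do this, since jurisdiction boundaries cut
// through ZIP codes.
var jurisdictionRates = map[string][]float64{
	// state + county + city + special tax district
	"123 Main St, Springfield": {0.0625, 0.0100, 0.0125, 0.0050},
	// next door, but outside the special district
	"125 Main St, Springfield": {0.0625, 0.0100, 0.0125},
}

// salesTaxRate sums the rates of every jurisdiction covering the address.
func salesTaxRate(address string) float64 {
	total := 0.0
	for _, r := range jurisdictionRates[address] {
		total += r
	}
	return total
}

func main() {
	fmt.Printf("%.4f\n", salesTaxRate("123 Main St, Springfield")) // 0.0900
	fmt.Printf("%.4f\n", salesTaxRate("125 Main St, Springfield")) // 0.0850
}
```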
Very cool :) I got a good enough score in the basic scenario by playing around a bit but it would be cool if you could link some kind of tutorial (e.g. to a digital PID video or something like that).
1) at/around powers of two
2) regardless of whether using GC or preallocating
makes me think it's got to do with cache sizes. For example, the M2 MacBook has around 16 MiB of cache, which holds approximately one million (2^20) nodes, where each node is a 64-bit number and a 64-bit pointer.
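That cache arithmetic can be checked in a few lines (16 MiB cache and 16-byte nodes assumed, per the comment above):

```go
package main

import "fmt"

// nodesInCache is back-of-the-envelope arithmetic: how many fixed-size
// list nodes fit entirely inside a cache of the given size.
func nodesInCache(cacheBytes, nodeBytes int) int {
	return cacheBytes / nodeBytes
}

func main() {
	const m2Cache = 16 << 20 // 16 MiB, roughly the M2's shared cache
	const nodeSize = 16      // 64-bit number + 64-bit pointer
	fmt.Println(nodesInCache(m2Cache, nodeSize)) // 1048576, i.e. 2^20
}
```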
The jumps in your measurements could be related to the cache hierarchy. On Linux (probably macOS too) you should be able to run "perf stat ./mergesort" to see the hit rates of your CPU's caches.
Thanks for reading my blog post and thinking about it. I thought of cache as well; the 3 machines I'm benchmarking on are all very different. The M2 MacBook has 128-byte cache lines, as opposed to the other 2 machines, which have 64-byte cache lines. The old Dell R530 has level 1 through 3 caches, the others have only levels 1 and 2. I'm still trying to understand CPU caching and see if I can correlate it to the performance drops somehow.
The linked list nodes are 16 bytes, and both 8-byte fields (.Next and .Data) get read and written during sorting. Up to a point, larger node sizes, which I would think would change caching, don't change where the performance drops occur: https://bruceediger.com/posts/mergesort-investigation-7/
Is the compiler, or a library you're using, somehow adding code that takes a different strategy at runtime when the linked list gets bigger than the performance-fall-off size?
Good question, but I don't think so. Two mergesort variants show it, one does not. A C transliteration of one of the variants that shows it also shows the performance drops. The Go compiler for macOS on M2 silicon probably generates very different code than the x86_64 compiler on Linux, yet they both have the same performance drops.
The code only uses standard Go packages. I wrote the linked list struct, I'm not using a package's struct or code. I don't even use standard package code during sorting, only in timing and list preparation.
What you suggest may be true, but it seems very unlikely.
Well no, obviously. Nix has not solved Rice's theorem. But you'll have this problem whether you're using BuildKit or Nix. The difference is that with Nix the final image only contains your explicitly stated dependencies, i.e. no apt, no Python, no Perl, no cacerts unless you explicitly include them.
You gain: model consistency guaranteed by the database; your backend basically only acts as an external API for the database.
You lose: modularity, since it makes it harder to swap out databases. Also, you have to write SQL for business logic, which many developers are bad at, dislike, or both.
I've seen a system running on this approach for ten years, and it has survived three generations of developers programming against this API. There are Python wx frontends, web frontends, Rust software, Java software, C software, etc. They all use the same database procedures for manipulating the model, so it stays consistent. Postgres is (kinda, not very) heavy for small projects, but it scales for medium up to large-ish projects (where it still scales, just not as trivially). One downside I've seen in this project is that some developers were afraid to change the SQL procedures, so they started working around them instead of adding new ones or changing the existing ones. So in addition to your regular workhorse programming language you also have to be pretty good at SQL.
IMO the Python version provides more information at a glance about the final state of res than the Go version: it's a list, len(res) <= len(foo), every element is an element of foo, and they appear in the same order.
The Go version may contain some typo or other that simply sets result to the empty list.
I'd argue that having idioms like list comprehension allows you to skim code faster, because you can skip over them (ah! we're simply shrinking foo a bit) instead of having to make sure that the loop doesn't do anything but append.
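For comparison, the Go loop form might look something like this (keep and foo are placeholder names, not from the original discussion); the reader has to scan the whole body to confirm it really does nothing but append:

```go
package main

import "fmt"

// keep is a placeholder predicate standing in for whatever condition
// the hypothetical comprehension `[x for x in foo if keep(x)]` uses.
func keep(x int) bool { return x%2 == 0 }

// filter is the loop equivalent of the comprehension. Unlike the
// comprehension, nothing about its shape guarantees that res is an
// ordered subset of foo; you have to read every line to check.
func filter(foo []int) []int {
	res := make([]int, 0, len(foo))
	for _, x := range foo {
		if keep(x) {
			res = append(res, x)
		}
	}
	return res
}

func main() {
	fmt.Println(filter([]int{1, 2, 3, 4})) // [2 4]
}
```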
This even goes both ways: do-notation in Haskell can make code harder to skim because you have to consider what monad you're currently in as it reassigns the meaning of "<-" (I say this as a big Haskell fan).
At the same time I've seen too much Go code that does err != nil style checking and misses a break or return statement afterwards :(
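A minimal sketch of that pitfall (fetch and its error are invented for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// fetch is a stand-in for any fallible operation.
func fetch() (string, error) { return "", errors.New("boom") }

func handler() string {
	s, err := fetch()
	if err != nil {
		fmt.Println("fetch failed:", err)
		// Dropping this `return` is the bug class described above:
		// execution would fall through and keep using the zero-value s
		// as if the call had succeeded.
		return ""
	}
	return s
}

func main() {
	fmt.Printf("%q\n", handler()) // ""
}
```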