If you follow the jurisdictional trail in the post, the field narrows quickly. The author describes a major international diving insurer, an instructor-driven student registration workflow, GDPR applicability, and explicit involvement of CSIRT Malta under the Maltese National Coordinated Vulnerability Disclosure Policy. That combination is highly specific.
There are only a few globally relevant diving insurers. DAN America is US-based. DiveAssure is not Maltese. AquaMed is German. The one large diving insurer that is actually headquartered and registered in Malta is DAN Europe. Given that the organization is described as being registered in Malta and subject to Maltese supervisory processes, DAN Europe becomes the most plausible candidate based on structure and jurisdiction alone.
> Since MCP is a common language for all agents at Stripe, not just minions, we built a central internal MCP server called Toolshed, which hosts more than 400 MCP tools spanning internal systems and SaaS platforms we use at Stripe.
Are there existing open source solutions for such a toolshed?
Fair point! To be clear: rari handled the traffic perfectly fine - the issue was an overly defensive rate limiter I had configured (and it was grouping proxy traffic incorrectly). The framework itself was cruising, I just had the safety rails set too tight. Adjusted now and it's handling the load as expected!
It seems inevitable, but in the meantime, that's vastly more expensive than running curl in a loop. In fact, it may be expensive enough that it cuts bot traffic down to a level I no longer care about defending against. GoogleBot, for example, had been crawling my stuff for years without breaking the site. If every bot were like that, I wouldn't care.
Serious question: in 2026, can you actually have a successful crawler with just curl? I just had to create one for a customer - for their own site - and nothing would have worked without using Chromium.
Probably not for most sites. Example of a site where it'd likely work: a blog made with a static site generator. Example of one where it wouldn't: darn near anything made with React.
It works for the majority of things a text-mining scraper would care to scrape. It's not just static sites but also any CMS like WordPress, as well as many JS apps that have server-side rendering. SPA-only sites aren't that common anymore, especially for things like blogs, news, and text-based social media.
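To make the "just curl" claim concrete, here's a minimal sketch of what that kind of fetch looks like in Java 11+ (the URL and User-Agent are placeholders, and the regex tag-stripping is only for illustration; a real scraper would use a proper HTML parser):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PlainFetch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; any server-rendered blog or WordPress post works the same way.
        String url = args.length > 0 ? args[0] : "https://example.com/";

        HttpClient client = HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.NORMAL)
                .build();

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("User-Agent", "plain-fetch-demo/0.1")
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // For server-rendered HTML the article text is already in the body;
        // no JavaScript engine is needed. A crude tag-strip is enough to check that.
        String text = response.body().replaceAll("(?s)<script.*?</script>", "")
                                     .replaceAll("<[^>]+>", " ")
                                     .replaceAll("\\s+", " ")
                                     .trim();

        System.out.println("Status: " + response.statusCode());
        System.out.println("First 300 chars of visible text: "
                + text.substring(0, Math.min(300, text.length())));
    }
}
```

For a server-rendered page, the article text is already in that response body; for an SPA-only site you'd mostly get an empty shell plus script tags, which is where headless Chromium starts to look necessary.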
Even that functions as a sort of proof of work, requiring a commitment of compute resources that is table stakes for individual users but multiplies the cost of making millions of requests.
One way to sharpen the question is to stop asking whether C is "fundamental" and instead ask whether it is forced by mild structural constraints. From that angle, its status looks closer to inevitability than convenience.
Take R as an ordered field with its usual topology and ask for a finite-dimensional, commutative, unital R-algebra that is algebraically closed and admits a compatible notion of differentiation with reasonable spectral behavior. You essentially land in C, up to isomorphism. This is not an accident, but a consequence of how algebraic closure, local analyticity, and linearization interact. Attempts to remain over R tend to externalize the complexity rather than eliminate it, for example by passing to real Jordan forms, doubling dimensions, or encoding rotations as special cases rather than generic elements.
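To make "you essentially land in C, up to isomorphism" concrete, the endpoint of that search is the familiar quotient construction (just a sketch of the standard fact, not the full classification argument):

```latex
% Adjoin a root of the irreducible real polynomial x^2 + 1:
\mathbb{C} \;\cong\; \mathbb{R}[x]/(x^2 + 1), \qquad i := x \bmod (x^2 + 1), \quad i^2 = -1.
% Up to isomorphism, the only finite-dimensional commutative R-algebras that are
% fields are R and this quotient; asking for algebraic closure rules out R.
```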
More telling is the rigidity of holomorphicity. The Cauchy-Riemann equations are not a decorative constraint; they encode the compatibility between the algebra structure and the underlying real geometry. The result is that analyticity becomes a global condition rather than a local one, with consequences like identity theorems and strong maximum principles that have no honest analogue over R.
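For reference, the constraint being invoked: writing f(x + iy) = u(x, y) + i v(x, y), complex differentiability of f is exactly the Cauchy-Riemann system

```latex
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},
\qquad
\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}.
% With continuous partials this is equivalent to holomorphy, and it already
% forces u and v to be harmonic: \Delta u = \Delta v = 0.
```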
I’m also skeptical of treating the reals as categorically more natural. R is already a completion, already non-algebraic, already defined via exclusion of infinitesimals. In practice, many constructions over R that are taken to be primitive become functorial or even canonical only after base change to C.
So while one can certainly regard C as a technical device, it behaves like a fixed point: impose enough regularity, closure, and stability requirements, and the theory reconstructs it whether you intend to or not. That does not make it metaphysically fundamental, but it does make it mathematically hard to avoid without paying a real structural cost.
This is the way I think. C is "nice" because it is constructed to satisfy so many "nice" structural properties simultaneously; that's what makes it special. This gives rise to "nice" consequences that are physically convenient across a variety of applications.
I work in applied probability, so I'm forced to use many different tools depending on the application. My colleagues and I would consider ourselves lucky if what we're doing allows for an application of some properties of C, as the maths will tend to fall out so beautifully.
Not meaning to derail an interesting conversation, but I'm curious about your description of your work as "applied probability". Can you say any more about what that involves?
Pure probability focuses on developing fundamental tools to work with random elements. It's applied in the sense that it usually draws upon techniques found in other traditionally pure mathematical areas, but is less applied than "applied probability", which is the development and analysis of probabilistic models, typically for real-world phenomena. It's a bit like statistics, but with more focus on the consequences of modelling assumptions rather than relying on data (although allowing for data fitting is becoming important, so I'm not sure how useful this distinction is anymore).
At the moment, using probabilistic techniques to investigate the operation of stochastic optimisers and other random elements in the training and deployment of neural networks is pretty popular, and that gets funding. But business as usual is typically looking at ecological models involving the interaction of many species, epidemiological models investigating the spread of disease, social network models, climate models, telecommunication and financial models, etc. Branching processes, Markov models, stochastic differential equations, point processes, random matrices, random graphs and networks; these are all common objects here. Actually figuring out their behaviour can require all kinds of assorted techniques, though; you get to pull from just about anything in mathematics to "get the job done".
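To give a flavour of the simplest of those objects, here is a toy Galton-Watson branching process with Poisson offspring - purely illustrative; the offspring law and the mean of 1.1 are made up and have nothing to do with any particular model mentioned above:

```java
import java.util.Random;

// Minimal Galton-Watson branching process: each individual in generation n
// independently produces a Poisson(mean) number of offspring in generation n+1.
public class GaltonWatson {

    // Sample a Poisson(lambda) variate via Knuth's method (fine for small lambda).
    static int poisson(double lambda, Random rng) {
        double l = Math.exp(-lambda), p = 1.0;
        int k = 0;
        do {
            k++;
            p *= rng.nextDouble();
        } while (p > l);
        return k - 1;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double mean = 1.1;      // offspring mean > 1: supercritical, may survive forever
        long population = 1;    // generation 0
        for (int gen = 1; gen <= 20 && population > 0; gen++) {
            long next = 0;
            for (long i = 0; i < population; i++) {
                next += poisson(mean, rng);
            }
            population = next;
            System.out.println("generation " + gen + ": " + population);
        }
        System.out.println(population == 0 ? "extinct" : "still alive after 20 generations");
    }
}
```

With offspring mean above 1 the process is supercritical, so it survives forever with positive probability; at or below 1 (excluding the trivial case of exactly one offspring) it dies out almost surely.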
In my work in academia (which I’m considering leaving), I’m very familiar with the common mathematical objects you mentioned. Where could I look for a job similar to yours? It sounds very interesting
Sorry, I'm in academia too, but my ex-colleagues who left found themselves doing nearly identical work: MFT research at hedge funds, climate modelling at our federal weather bureau, and SciML in big tech. I know of someone doing this kind of work in telecoms too, but I haven't spoken to them lately. Having said that, it's rough out there right now. A couple of people I know who are looking for another job right now (academia or otherwise) with this kind of training are not having much luck...
> Take R as an ordered field with its usual topology and ask for a finite-dimensional, commutative, unital R-algebra that is algebraically closed and admits a compatible notion of differentiation with reasonable spectral behavior.
No thank you, you can keep your R.
Damn... does this paragraph mean something in the real world?
Probably I've the brain of a gnat compared to you, but do all the things you just said have a clear meaning that you relate to the world around you?
> does this paragraph mean something in the real world?
It's actually both surprisingly meaningful and quite precise in its meaning, which also makes it completely unintelligible if you don't know the words it uses.
Ordered field: satisfying the properties of an algebraic field - so a set, an addition and a multiplication with the proper properties for these operations - with a total order, a binary relation with the proper properties.
Usual topology: we will use the most common metric (a distance function with a set of properties) on R, namely the absolute value of the difference.
Finite-dimensional: can be generated using only a finite number of elements
Commutative: the operation will give the same result for (a x b) and (b x a)
Unital: has an element which acts like 1 and returns the same element when applied, so (1 x a) = a
R-algebra: a formally defined algebraic object involving a set and three operations following multiple rules
Algebraically closed: a property the polynomials over this algebra must respect: they must always have a root. Untrue in R, because polynomials like x^2 + 1 have no real root. That's basically introducing i as a structural necessity.
Admits a notion of differentiation with reasonable spectral behaviour: this is the fuzziest part. Differentiation means we can build a notion of derivatives for functions on it, which is essential for calculus to work. The part about spectral behaviour is probably there to disqualify weird algebras isomorphic to C but where differentiation behaves differently. It seems redundant to me if you already have a finite-dimensional algebra.
It's not really complicated. It's more about being familiar with what the expression means. It's basically a fancy way to say that if you ask for something looking like R with a calculus acting like the one of functions on R but in higher dimensions, you get C.
I'm sure you don't have the brain of a gnat, and, even if you did, it probably wouldn't prevent you from understanding this.
As for whether these definitions have a clear meaning that one can relate to 'the world': I think so. To take just one example (I could do more), finite-dimensional means exactly what you think it means, and you certainly understand what I mean when I say our world is finite (or three, or four, or n) dimensional.
Commutative also means something very down to earth: if you understand why a*b = b*a or why putting your socks on and then your shoes and putting your shoes on and then your socks lead to different outcomes, you understand what it means for some set of actions to be commutative.
And so on.
These notions, like all others, have their origin in common sense and everyday intuition. They're not cooked up in a vacuum by some group of pretentious mathematicians, as much as that may seem to be the case.
Each of "ordered field", "unital R-algebra", etc. is the name of a set of rules and constraints. That's all it is. So you need to know those sets of rules to make sense of it. It has nothing to do with brain size or IQ :)
In other words, you define a new thing by simply enumerating the rules constraining it. As in: A Duck is a thing that Quacks, Flies, Swims and ... Where Quacks etc. is defined somewhere else.
Math and reality are, in general, completely distinct. Some math is originally developed to model reality, but nowadays (and for a long time) that's not the typical starting point, and mathematicians pushing boundaries in academia generally don't even think about how it relates to reality.
However, it is true (and an absolutely fascinating phenomenon) that we keep encountering phenomena in reality and then realize that an existing but previously purely academic branch of math is useful for modeling it.
To the best of our knowledge, such cases are basically coincidence.
Opposing view (that I happen to hold, at least if I had to choose one side or the other): not only is mathematics 'reality'; it is arguably the only thing that has a reasonable claim to being 'reality' itself.
After all, facts (whatever that means) about the physical world can only be obtained by proxy (through measurement), whereas mathematical facts are just... evident. They're nakedly apparent. Nothing is being modelled. What you call the 'model' is the object of study itself.
A denial of the 'reality' of pure mathematics would imply the claim that an alien civilisation given enough time would not discover the same facts or would even discover different – perhaps contradictory – facts. This seems implausible, excluding very technical foundational issues. And even then it's hard to believe.
> To the best of our knowledge, such cases are basically coincidence.
This couldn't be further from the truth. It's not coincidence at all. The reason that mathematics inevitably ends up being 'useful' (whatever that means; it heavily depends on who you ask!) is because it's very much real. It might be somewhat 'theoretical', but that doesn't mean it's made up. It really shouldn't surprise anyone that an understanding of the most basic principles of reality turns out to be somewhat useful.
I think you're not even disagreeing with me, we're just using different definitions of the word "reality". I meant to use it specifically as "the physical world" - which you are treating as distinct from mathematics as well in your second paragraph.
Mathematics is an abstract game of symbols and rules invented by humans. It has nothing to do with reality. However it is quite useful for modelling our understanding of reality.
"that we keep encountering phenomena in reality and then realize that an existing but previously purely academic branch of math is useful for modeling it."
Would you have some examples?
(The only example that I know of that might fit is quaternions, which were apparently not so useful when they were found/invented but nowadays are very useful for many 3D applications/computer graphics.)
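For what it's worth, the 3D-graphics use is easy to show in a few lines: a unit quaternion q rotates a vector v via q·v·q*. A toy sketch (Hamilton convention, Java 16+ records; not how any particular engine implements it):

```java
// Rotating a vector by a unit quaternion q via v' = q * v * conjugate(q).
public class QuaternionRotate {

    // (w, x, y, z) with w the scalar part.
    record Quat(double w, double x, double y, double z) {

        Quat multiply(Quat o) {
            return new Quat(
                w * o.w - x * o.x - y * o.y - z * o.z,
                w * o.x + x * o.w + y * o.z - z * o.y,
                w * o.y - x * o.z + y * o.w + z * o.x,
                w * o.z + x * o.y - y * o.x + z * o.w);
        }

        Quat conjugate() { return new Quat(w, -x, -y, -z); }
    }

    // Unit quaternion for a rotation of 'angle' radians about the unit axis (ax, ay, az).
    static Quat fromAxisAngle(double ax, double ay, double az, double angle) {
        double s = Math.sin(angle / 2);
        return new Quat(Math.cos(angle / 2), ax * s, ay * s, az * s);
    }

    public static void main(String[] args) {
        // Rotate the x-axis by 90 degrees about the z-axis; expect roughly (0, 1, 0).
        Quat q = fromAxisAngle(0, 0, 1, Math.PI / 2);
        Quat v = new Quat(0, 1, 0, 0);  // pure quaternion encoding the vector (1, 0, 0)
        Quat rotated = q.multiply(v).multiply(q.conjugate());
        System.out.printf("(%.3f, %.3f, %.3f)%n", rotated.x(), rotated.y(), rotated.z());
    }
}
```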
Group theory entering quantum physics is a particularly funny example, because some established physicists at the time really hated the purely academic nature of group theory that made it difficult to learn.[1]
If you include practical applications inside computers and not just the physical reality, then Galois theory is the most often cited example. Galois himself was long dead when people figured out that his mathematical framework was useful for cryptography.
Yes, the point of mathematics is so that a gnat could do it. These abstractions are about making life easy and making things that previously needed bespoke solutions to be more mechanical.
Historically, every major general-purpose technology followed the same trajectory. Printing reduced the quality of manuscripts while massively increasing access. Industrialization replaced craftsmanship with standardization. Early automobiles were unreliable and dangerous compared to horse-drawn transport, yet they won because they were sufficient and scalable. The internet degraded editorial standards while enabling unprecedented distribution. None of these shifts reversed. They stabilized at a new equilibrium where high quality persisted only in niches where it was economically justified.
> Early automobiles were unreliable and dangerous compared to horse-drawn transport
People have forgotten that a lot of people were killed by horses. Cities had to deal with vast quantities of manure and horse corpses. Horses knew they were slaves and you always had to be careful around them. Horses were expensive and required daily maintenance.
Yes, and that constraint shows up surprisingly early.
Even if you eliminate model latency and keep yourself fully in sync via a tight human-in-the-loop workflow, the shared mental model of the team still advances at human speed. Code review, design discussion, and trust-building are all bandwidth-limited in ways that do not benefit much from faster generation.
There is also an asymmetry: local flow can be optimized aggressively, but collaboration introduces checkpoints. Reviewers need time to reconstruct intent, not just verify correctness. If the rate of change exceeds the team’s ability to form that understanding, friction increases: longer reviews, more rework, or a tendency to rubber-stamp changes.
This suggests a practical ceiling where individual "power coding" outpaces team coherence. Past that point, gains need to come from improving shared artifacts rather than raw output: clearer commit structure, smaller diffs, stronger invariants, better automated tests, and more explicit design notes. In other words, the limiting factor shifts from generation speed to synchronization quality across humans.
I've seen this happen over and over again well before LLMs, when teams are sufficiently "code focused" that they don't care much at all about their teammates. The kind that would throw a giant architectural change over a weekend. You then get to either freeze a person for days, or end up with codebases nobody remembers, because the bigger architectural changes are kept secret.
With a good modern setup, everyone can be that "productive", and the only thing that keeps a project coherent is whether the original design holds, which makes rearchitecture a very rare event. It will also push us towards smaller teams in general, just because the idea of anyone managing a project with, say, 8 developers writing a codebase at full speed seems impossible, just like it was when we piled enough high-performance, talented people onto a project. It's just harder to keep coherence.
You can see this risk mentioned in The Mythical Man Month already. The idea of "The Surgery Team", where in practice you only have a couple of people truly owning a codebase, and most of the work we used to hand juniors just being done via AI. It'd be quite funny if the way we have to change our team organization moves towards old recommendations.
I've mostly done solo work, or very small teams with clear separation of concerns. But this reads as less of a case against power coding, and more of a case against teams!
You can ask the agent to reverse engineer its own design and provide a design document that can inform the code review discussion. Plus, hopefully human code review would only occur after several rounds of the agent refactoring its own one-shot slop into something that's up to near-human standards of surveyability and maintainability.
I think the productivity question hinges on what you count as the language versus the ecosystem. Very few nontrivial games are written in "just C". They are written in C plus a large pile of bespoke libraries, code generators, asset pipelines, and domain-specific conventions. At that point C is basically a portable assembly language with a decent macro system, and the abstraction lives outside the language. That can work if you have strong architectural discipline and are willing to pay the upfront cost. Most teams are not.
I agree on C++ being the worst of both worlds for many people. You get abstraction, but also an enormous semantic surface area and footguns everywhere. Java is interesting because the core language is indeed small and boring in a good way, much closer to C than people admit. The productivity gains mostly come from the standard library, GC, and tooling rather than clever language features. For games, the real disagreement is usually about who controls allocation, lifetime, and performance cliffs, not syntax.
> I agree on C++ being the worst of both worlds for many people. You get abstraction, but also an enormous semantic surface area and footguns everywhere.
Not only that, but who even knows C++? It keeps changing. Every few years "standard practice" is completely different. Such a waste of energy.
> Java is interesting because the core language is indeed small and boring in a good way, much closer to C than people admit.
I know. I used to be a Java hater, but then I learned it and it's alright... except the whole no-unsigned-integers thing. That still bothers me but it's just aesthetic really.
I like the lack of unsigned integers in Java. It simplifies the language while only removing a tiny bit of functionality. You can emulate almost all unsigned math using signed operations, whether you use a wider bit width or even the same bit width. The only really tricky operations are unsigned division/remainder and unsigned parseInt()/toString(), for which Java 8 added methods to smooth things over.
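Concretely, those Java 8 helpers in action - a quick sketch with an arbitrary value, just to show that plain int plus a few static methods covers the unsigned cases:

```java
public class UnsignedDemo {
    public static void main(String[] args) {
        // An int whose unsigned value is 4_294_967_294 (i.e. 2^32 - 2) but which
        // prints as -2 when interpreted as signed.
        int x = -2;

        // Addition, subtraction, multiplication and bitwise ops are identical for
        // signed and unsigned two's-complement values, so plain operators work:
        int sum = x + 10;

        // The genuinely different operations got helpers in Java 8:
        System.out.println(Integer.toUnsignedString(x));            // 4294967294
        System.out.println(Integer.toUnsignedLong(x));              // 4294967294
        System.out.println(Integer.divideUnsigned(x, 3));           // 1431655764
        System.out.println(Integer.remainderUnsigned(x, 3));        // 2
        System.out.println(Integer.compareUnsigned(x, 5));          // positive: "huge" beats 5
        System.out.println(Integer.parseUnsignedInt("4294967294")); // -2 as a signed int

        System.out.println(Integer.toUnsignedString(sum));          // 8: wrapped around, as unsigned math would
    }
}
```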
The reason for using unsigned is not that some operations need it; it is to declare a variable that can't be negative. If you don't have that, then you must check for a negative number in every single interface.