I did the exact same thing, except with a virtualized OPNsense router and bare-metal Kubernetes on one host. The Kubernetes setup broke and I downgraded from 32GB of RAM to 16GB. I may actually revisit the setup, since using FRR on OPNsense and Cilium BGP to peer your cluster with your home LAN is a really seamless way to self-host things in Kubernetes. Maybe there are other ways, maybe there is something simpler, but a homelab is about fun more than pure function.
Free Monads are everywhere. Learning Haskell at this time was such an amazing experience. Haskell has incredible library stability, the kmettoverse feels the same, my knowledge is still good enough for most situations, and while there are new streaming libraries, they accomplish the same things as conduit and pipes. LLMs are as decent as you would expect on Haskell, and have helped me debug some situations where I would otherwise be fighting GHC, usually with some flags turned on. AI has actually been helpful for learning, since in Haskell, once you figure something out, you solve it for a whole class of problems; the issue is that sometimes, in figuring that one thing out, it's so abstract you feel like you are hitting a cliff. Excited to still be writing Haskell in 2026. I hope it continues to avoid success at all costs.
People think category theory is weird and confusing, but really it just managed to name things (classes) that before were just "things". One might not know what a monad or functor is, but they have surely used them and have intuition for how they work.
Right. I don't know how many times I've been exasperated by how monads are perceived as difficult.
Do you understand "flatmap"? Good, that's literally all a monad is: a flatmappable.
Technically it's also an applicative functor, but at the end of the day, that gives us a few trivial things:
- a constructor (i.e., a way to put something inside your monad, exactly how `[1]` constructs a list out of a natural number)
- map (everyone understands this because we use it with lists constantly)
- ap, which is basically just "map for things with more than one parameter"
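To make those three concrete, here's a sketch using plain JS arrays as the monad. (`ap` isn't built into Array, so it's defined by hand here; the names are just for illustration.)

```javascript
// The three applicative-functor operations, using plain JS arrays.
const of = (x) => [x];                                 // constructor: put a value inside
const map = (f, xs) => xs.map(f);                      // map: apply a function inside
const ap = (fs, xs) => fs.flatMap((f) => xs.map(f));   // ap: apply wrapped functions to wrapped values

// A two-argument function, curried so we can feed it one argument at a time.
const pair = (a) => (b) => [a, b];

const partial = map(pair, [1, 2]);        // a list of partially-applied functions
const result = ap(partial, [10, 20]);
// result: [[1, 10], [1, 20], [2, 10], [2, 20]]
```

So `ap` really is just "map for functions with more than one parameter": map supplies the first argument, ap supplies the rest.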
Monads are easy. But when you tell someone "well, it's a box and you can unwrap it and modify things with a function that also returns a box, and you unwrap that box, take the thing out, and put it inside the original box"—
No. It is a flatmappable. That's it. Can you flatmap a list? Good. Then you already can use the entirety of monad-specific properties.
When you start talking about Maybe, Either, etc. then you've moved from explaining monads to explaining something else.
It's like saying "classes are easy" and then someone says "yeah, well, what about InterfaceOrienterMethodContainerArrangeableFilterableClass::filter". That's not a class! That's one method in a specific class. Not knowing it doesn't mean you don't understand classes. It just means you don't have the standard library memorized!
It's also important to note that in Haskell and other functional programming languages, there is no implied order of operations. You need a Monad type in order to express that certain things are supposed to happen after other things. Monads can also express that certain things happen "in between" two operations, which is why we have different kinds of Monads and mathematical axioms of what they're all supposed to do.
Outside of FP however, this seems really stupid. We're used to operations that happen in the order you wrote them in and function applications that just so happen to also print things to the screen or send bits across the network. If you live in this world, like most people do, then "flatmap" is a good metaphor for Monads because that's basically all they do in an imperative language[1].
Well, that, and async code. JavaScript decided to standardize on a Monad-shaped "thenable" specification for representing asynchronous processes, where most other programming languages would have gone with green threads or some other software-transparent async mechanism. To be clear, it's better than the callback soup you'd normally have[0], but working with bare thenables is still painful. Just like working with bare Monads, which is why Haskell and JavaScript both have syntax to work around them (do-notation, async/await, etc.).
Maybe/Either get talked about because they're the simplest Monads you can make, but it makes Monads sound like a spicy container type.
[0] The FP people call this "continuation-passing style"
[1] To be clear, Monads don't have to be list-shaped and most Monads aren't.
There is an implied order of operations in Haskell. Haskell always reduces to weak head normal form. This implies an ordering.
Monads have nothing to do with order (they follow the same ordering as Haskell's normalization guarantees).
> JavaScript decided to standardize on a Monad-shaped "thenable" specification for representing asynchronous processes,
It's impossible for something to be merely "monad-shaped". All asynchronous interfaces form a monad, whether you decide to follow the Haskell Monad type class or decide to do something else. They're all isomorphic and form a monad. Any model of computation forms a monad.
Assembly language quite literally forms a category over the monoid of endofunctors.
Jacquard loom programming also forms a category over the monoid of endofunctors, because all processes that sequence things with state form such a thing, whether you know it or not.
It's like claiming the Indians invented numbers to fit the addition algorithm. Putting the cart before the horse, because all formulations of the natural numbers form a group/ring with addition and multiplication defined the standard way (they also all form separate groups and rings that we barely ever use).
> All asynchronous interfaces form a monad whether you decide to follow the Haskell monad type class or decide to do something else
JS's then is categorically not a monad because it doesn't follow the monad laws.
fn1 : a -> Promise<b>
fn2 : b -> c
fn3 : b -> Promise<c>
With JavaScript, composing fn1 and fn2 with then gives you a -> Promise<c>. So then is isomorphic to map.
With JavaScript, composing fn1 and fn3 with then gives you a -> Promise<c>. So then is isomorphic to flatmap.
Therefore, with JavaScript, map is isomorphic to flatmap. Which obviously violates monad laws.
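You can watch that collapsing behavior directly. Since real Promises are asynchronous, here's a toy synchronous thenable (a made-up `SyncThenable` class, purely for illustration) that mimics the spec's flattening rule:

```javascript
// A toy synchronous "thenable" mimicking Promise's auto-flattening `then`.
// Illustration only: real Promises are async, but the flattening rule is the same.
class SyncThenable {
  constructor(value) { this.value = value; }
  static resolve(v) { return new SyncThenable(v); }
  then(fn) {
    const out = fn(this.value);
    // If the callback returns a thenable, adopt it instead of nesting.
    // This is why `then` acts as both map and flatMap.
    return out instanceof SyncThenable ? out : new SyncThenable(out);
  }
}

const fn2 = (b) => b + 1;                        // b -> c
const fn3 = (b) => SyncThenable.resolve(b + 1);  // b -> Thenable<c>

const viaMapShape = SyncThenable.resolve(41).then(fn2);
const viaFlatMapShape = SyncThenable.resolve(41).then(fn3);
// Both produce a flat Thenable holding 42; you can never observe Thenable<Thenable<c>>.
```

The point: the two signatures that a lawful monad keeps distinct (map vs. flatMap) are indistinguishable through `then`.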
There's a rather famous GitHub issue where someone points this out in the issue tracker for the `then` spec, and one of the devs in charge of it... leaves responses for posterity.
I mean, JavaScript is weakly typed and then always lifts its argument. Speaking of isomorphism doesn't make sense unless you're talking about a typed portion of the language.
> Maybe/Either get talked about because they're the simplest Monads you can make, but it makes Monads sound like a spicy container type.
Actually "spicy container type" is maybe a better definition of Monad than you may think. There's a weird sort of learning curve for Monads where the initial reaction is "it's just a spicy container type", you learn a bit and get to "it is not just a spicy container type", then eventually you learn a lot more and get to "sure fine, it's just a spicy container type, but I was wrong about what 'container' even means" and then settle back down to "it's a spicy container type, lol".
"It's a spicy container type" and "it's anything that is flatmappable" are two very related simplifications, if "container" is a good word for "a thing that is flatmappable". It's a terrible tautological definition, but it's actually not as bad of a definition as it sounds. (Naming things is hard, especially when you get way out into mathematical abstractions land.)
There are flatmappable things that don't have anything to do with ordering or sequencing. Maybe is a decent example: you only have a current state, you have no idea what the past states were or what order they were in.
Flatmappable things are generally (but not always) non-commutative: if you flatmap A into B you get a different thing than if you flatmap B into A. That can represent sequencing. With a Promise, `A.then(() => B)` is a different sequence than `B.then(() => A)`. But that's as much "domain specific" to the Promise Monad and what its flatmap operation is (which we commonly call `then` to make it more obvious what its flatmap operation does: it sequences, A then B) as anything fundamental to a Monad. The fundamental part is that it has a flatmap operator (or bind, or then, or SelectMany, or many other language- or domain-specific names), not anything to do with what that flatmap operator does (how it is implemented).
But this has absolutely nothing to do with "certain things are supposed to happen after other things", and CANNOT POSSIBLY have anything to do with that. Flatmap is a purely functional concept, and in the context of things that are purely functional, nothing ever actually happens. That's the whole point of "functional" as a concept. It cleanly separates the result of a computation from the process used to produce that result.
So one of your "simple" explanations must be wrong.
Because you're not used to abstract algebra. JavaScript arrays form a monad with flatMap as the bind operator. There are multiple ways to make a monad out of list-like structures.
And you are correct. Monads have nothing to do with sequencing (I mean, no more than any other non-commutative operator; remember, x^2 is not the same as 2^x).
Haskell handles sequencing by reducing to weak head normal form which is controlled by case matching. There is no connection to monads in general. The IO monad uses case matching in its implementation of flatmap to achieve a sensible ordering.
As for JavaScript's flatMap: a.flatMap(b).flatMap(c) is the same as a.flatMap(function (x) { return b(x).flatMap(c); }).
This is the same for promises:
a.then(b).then(c) is the same as a.then(function (x) { return b(x).then(c); }).
Literally everything for which this is true forms a monad and the monad laws apply.
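For instance, you can check that equality concretely with Array's built-in flatMap:

```javascript
// Checking the associativity law for Array's flatMap with concrete functions.
const a = [1, 2, 3];
const b = (x) => [x, x * 10];
const c = (x) => [x + 1];

const left = a.flatMap(b).flatMap(c);                 // compose step by step
const right = a.flatMap((x) => b(x).flatMap(c));      // compose inside the callback
// Both: [2, 11, 3, 21, 4, 31]
```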
Nota bene: then is not a monadic bind, because the implementation of then implies map is isomorphic to flatMap. This is because `then` turns the return value of a callback into a flat promise, even if the callback itself returns a promise.
That is to say, then checks the type of the return value and then takes on map or flatmap behavior depending on whether the return value of the callback is a Promise or not.
You don't? There's nothing special about monads. I don't know why everyone cares so much about them.
There are a few generic transforms you can use to avoid boilerplate, and you can reason about your code more easily if your monad follows all the monad laws. But let's be real: most programmers don't know how to reason about their code anyway, so it's a moot point.
People have different "aha" moments with monads. For me, it was realizing that something being a monad has to do with the type/class fitting the monad laws. If the monad laws hold for the type/class then you've got a monad, otherwise not.
So then when you look at List, Maybe, Either, et al. it's interesting to see how their conforming to the laws "unpacks" differently with respect to what they each do differently (what's happening to the data in your program), but the laws are just the same.
The reason this was an aha moment for me is that I struggled with wanting to understand a monad as another kind of thing — "I understand what a function is, I understand what objects and primitive values are, but I don't get that List and Maybe and Either are the same kind of thing, they seem like totally different things!"
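That "aha" can be made concrete. Here's a quick check of the three monad laws for the Array monad in JS, using JSON.stringify as a crude structural-equality test (a sketch, not a proof):

```javascript
// The three monad laws, checked concretely for the Array monad.
// "of" (aka return/pure/unit) wraps a value; flatMap is bind.
const of = (x) => [x];
const f = (x) => [x, x + 1];
const g = (x) => [x * 2];
const m = [1, 2];
const eq = (a, b) => JSON.stringify(a) === JSON.stringify(b);

// 1. Left identity: of(a).flatMap(f) == f(a)
const leftId = eq(of(3).flatMap(f), f(3));
// 2. Right identity: m.flatMap(of) == m
const rightId = eq(m.flatMap(of), m);
// 3. Associativity: m.flatMap(f).flatMap(g) == m.flatMap(x => f(x).flatMap(g))
const assoc = eq(m.flatMap(f).flatMap(g), m.flatMap((x) => f(x).flatMap(g)));
```

Swap in Maybe or Either for Array and the same three checks hold; only what happens to the data "inside" changes.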
Yes, I 100% agree. But I want to mention something that isn't a disagreement, just a further nuance:
1. my explanation of monad is sufficient for people who need to use them
2. your explanation of monad is necessary for people who might want to invent new ones
What I mean by this is that if you want to invent a new monad, you need to make sure your idea conforms to the monad laws. But if you're just going to consume existing monads, you don't need to know this. You only need to know the functions for working with a monad: flatmap (or map + flatten), ap(ply), and a constructor (of/just/pure). Everything else is specific to a given monad. For example, an Either's toOptional is not monadic; it's just turning Left _ into None and Right a into Some a.
And needing to know that these properties hold is unnecessary, as their very existence in the library is pretty solid evidence that you can use them, haha.
> everyday business and physics is monadic in function.
So?
> And if-then statements are functorial.
So?
All the "this is hard" stuff around these ideas seems to focus on managing to explain what these things are but I found that to progress at the speed of reading (so, about as easy as anything can be) once it occurred to me to find explanations that used examples in languages I was familiar with, instead of Haskell or Haskell-inspired pseudocode.
What I came out the other side of this with was: OK, I see what these are (it's incredibly simple, it turns out), and I even see how these ideas would be useful in Haskell and some similar languages, because they solve problems with, and help one communicate about, problems particular to those languages. I do not see why it matters for... anything else, unless I were to go out of my way to find reasons to apply these ideas (and why would I do that? No, I don't find "to make your code more purely functional" a compelling reason; I'm entirely fine with code I touch engaging only selectively, if at all, in any of that sort of thing).
There is no "so?". Haskell tends toward applicatives and monads because monads and applicatives are the preferences of Haskellers, just like JavaScript people may like dynamic typing, etc. These are design choices.
By modeling various things as monads, you get the various principled monad extensions. Unlike normal programming, where leaky abstractions are the expectation, using algebraic structures with principled laws means things just work.
But this has nothing to do with monads in particular. Haskell's choice to do a lot with monoids provides a similar guarantee about things that combine. It's a preference. Nothing like monoids exists in other languages, because people are told they have to think with "objects" or whatever.
> Do you understand "flatmap"? Good, that's literally all a monad is: a flatmappable.
Awesome! Now I understand.
> Technically it's also an applicative functor
Aaaand you've lost me. This is probably why people think monads are difficult. The explanations keep involving these unfamiliar terms and act like we need to already know them to understand monads. You say it's just a flatmappable, but then it's also this other thing that gives you more?
But words like "encapsulation" or "polymorphism" or even "autoincrement" also sound unfamiliar and scary to a young kid encountering them for the first time. But the kid learns their meaning along the way, in the desire to build their own game, or something. The feeling that one already knows a lot, sort of enough, and that it'd be painful and boring to learn another abstract thing is a grown-up problem :-\
Those words need definitions, but they can both be defined using words most people know.
Casual attempts at defining Monads often just sweep a pile of confusion around a room for a while, until everything gets hidden behind whatever odd piece of furniture that is familiar to the person generating the definition. They then imagine they have cleared up the confusion, but it is still there.
Most engineers don't have too much trouble understanding things like List<T>, Promise<T>, or even Optional<T>, which all demonstrate vividly what a monad does (except Promise in JS, which auto-flattens).
A monad is a generalization of all of them. It's a structure that covers values of type T, some "payload" (maybe one, like Promise; maybe many, like List; maybe even none, like List or Optional sometimes). You can ask it to perform operations on these values "inside": that's the map() operation. You can ask it to do a similar thing when the operation on each value produces a nested structure of the same kind, and flatten the result down to one level again: this is flatMap(). This is how Promises are chained. The result is again a structure of the same kind, maybe with a "payload" of a different type.
This is a really simple abstraction, simpler than most GoF patterns, to my mind, and more fundamental and useful.
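As a sketch, here's roughly what that abstraction looks like for an Optional-style structure (a made-up `Maybe` class for illustration, not from any particular library):

```javascript
// A minimal Optional-style structure with the two operations described above.
class Maybe {
  constructor(hasValue, value) { this.hasValue = hasValue; this.value = value; }
  static just(v) { return new Maybe(true, v); }
  static none() { return new Maybe(false, undefined); }
  // map: transform the payload inside, keeping the structure.
  map(f) { return this.hasValue ? Maybe.just(f(this.value)) : Maybe.none(); }
  // flatMap: like map, but the function itself returns a Maybe, and we
  // flatten instead of nesting Maybe<Maybe<T>>.
  flatMap(f) { return this.hasValue ? f(this.value) : Maybe.none(); }
}

const parsed = Maybe.just("42").map(Number);              // a Maybe holding 42
const halved = parsed.flatMap((n) =>
  n % 2 === 0 ? Maybe.just(n / 2) : Maybe.none());        // a Maybe holding 21
```

The same two method shapes fit List (many payloads) and Promise (one, delivered later); that is the generalization.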
Short definitions, followed by simple examples that clearly match the definition, are the best way to be clear.
Unlike how we define most things, definitions of monads often run into:
1. Just a lot of words, often almost stream of consciousness.
2. Use of supporting words in a technical sense associated with the concept being defined. Completely understood by anyone who already knows the concept; completely opaque to anyone else. Those words should be defined first, or not used.
3. Incorporating examples into the definition, which creates a kind of inductive menagerie. There are no obvious boundaries of a concept, or clarity shed on what is crucial or what is specific in the examples.
Dictionaries and most people don't define words this way, for good reason. It is a collage, not a definition.
--
I just spent too much time working on this. It is a deceptively difficult problem. I am certainly not critiquing anyone. To be completed later! For myself, if no one else.
I mean, people need to be familiar with mathematics. In mathematics, things form structures whether or not you understand them.
For example, the natural numbers form a ring and field over normal addition and multiplication, but you don't need to know ring theory to add numbers.
People need to stop worrying about not understanding things. No one understands everything.
Now imagine if every single explanation of natural numbers talked about rings and fields. Nobody ever just says "they're the counting numbers starting from one." A few of them might say, "they're the counting numbers starting from one, and they form a ring and field over addition and multiplication." And I might think, I understand the first part, but I'm not sure what the second part is and it sounds important, so maybe I still don't know what natural numbers are.
I'm not worried, but it's amusing to see this person say it's so simple, and then immediately trample on it.
Well, most people explain monads for no reason. I'm probably one of the rare Haskell developers who never explains them to anyone. They have nothing to do with IO.
If someone is concerned with how to do IO in a pure language, then I show them how it actually happens in GHC, which is via the type system enforcing that only one token of type RealWorld# is alive at once. There is ABSOLUTELY nothing you need to know about monads to understand IO in Haskell. It's just function composition and careful use of case to force the evaluation of a token of type RealWorld#. Nothing magic about it: you're just passing the state of the world around.
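If it helps, here's a rough sketch of that token-threading idea (purely illustrative JS; GHC's actual RealWorld# machinery lives at the type and primop level, and the names here are made up):

```javascript
// Each "IO action" is a function from a world token to [nextToken, result].
// Because each action needs the token the previous one returned, the
// data dependency forces an evaluation order with no extra machinery.
const putLine = (s) => (world) => {
  // pretend s is printed here, as a side effect of consuming the token
  return [{ id: world.id + 1 }, null];
};
const getNumber = () => (world) => [{ id: world.id + 1 }, 42];

// "Sequencing" is just threading the token through each action in turn.
const program = (world0) => {
  const [world1] = putLine("hello")(world0);
  const [world2, n] = getNumber()(world1);
  return [world2, n + 1];
};

const [finalWorld, answer] = program({ id: 0 });
// answer: 43; finalWorld.id: 2 (the token passed through both actions)
```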
I'm sorry, I wasn't clear. The "technically" was meant to signal, to the pedants here who get off on saying "well ACKshually": I didn't forget that, it's just not relevant :D
If you want a little more elucidation: what you need to know, unless you're aiming to be a functional programming god, is that:
- a monad is a FLATMAPPABLE
- all monads are also applicative functors, which I will explain last because it's kind of a twist on MAPPABLE
- all applicative functors, and thus all monads, are functors, which are MAPPABLEs
- an applicative functor is essentially a mappable for functions that take more than one parameter
I think applicative functors are the hardest to grok because it's not immediately obvious why they're necessary. The type signature is strange, and it's like "why would I ever put a function inside a container??" I wrote a lot of functional code in Kotlin and TypeScript before I finally understood their utility. The effect of this was that a lot of awkward code became much cleaner.
So let's begin with functor (i.e., a mappable):
Container<Integer>
if you have a function from Integer to Text, a functor allows you to convert the Integer to Text using a function called `map`. We do this with arrays all the time in Python, JavaScript, etc. It's a very familiar concept; we just don't call it a "functor" in those languages.
BUT, what if you have
Container<Integer>
and the function you want to map with takes two parameters. A classic example is you want to use the Integer as the first argument of a constructor. Let's say Pair.
So if Pair's constructor is: a -> a -> (a, a), you would first map Container<Integer> with PairConstructor. Now you have Container<Integer -> (Integer, Integer)>.
To pass in the second Integer to finish constructing the tuple, you use the special property of applicative functors. This is often called "ap" (like "map" without the "m").
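In code, that map-then-ap sequence might look like this, with a hypothetical one-slot Box standing in for Container (names are illustrative only):

```javascript
// A one-slot container with map and ap, for the Pair example above.
class Box {
  constructor(value) { this.value = value; }
  map(f) { return new Box(f(this.value)); }
  // ap: this Box holds a function; apply it to the value inside another Box.
  ap(other) { return other.map(this.value); }
}

const pairCtor = (a) => (b) => [a, b];    // a -> a -> (a, a), curried

const step1 = new Box(1).map(pairCtor);   // Box<Integer -> (Integer, Integer)>
const result = step1.ap(new Box(2));      // Box holding the pair [1, 2]
```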
---
Now, I would say the ACTUAL most important thing about applicative functors is this:
Imagine you had a list of words. You want to make an API call for each word. API calls are often modeled with the Async monad (which, as mentioned above, is also definitionally an applicative functor).
But if you mapped [Word] with CallApi, you would end up with [Async ApiResult]. This models "a list of successful and unsuccessful API calls."
But what if you wanted Async [ApiResult] instead? (One might say this is an attempt to model "all API calls successful, but if one API call fails, the whole operation is considered a failure.")
This is where applicative functors shine: pulling the applicative functor out of the container and wrapping the whole container. (There's more cool stuff to learn about the nature of this "container" but that'd be for another lesson, much like how you don't learn about primitives and interfaces on the same day in an OOP class.)
Recall that constructing a list of N items would be
a -> a -> a -> ... -> a (n times) -> [a]
That looks an awful lot like one MAP followed by (n-1) APs, based on the discussion above! And that's exactly what it is.
You can map the first api call and then ap the rest, and you end up going over the entire list, getting Async [ApiResult].
Now, there are a lot of ways languages go about solving this kind of "fail if one of the operations fails, rather than compiling a list of all successes and failures."
But the nice thing about using Functors, Monads, etc. is that you have a bunch of functions that work on these things, and they handle a ton of code so you don't have to.
That collection of Words above? It's a list. Lists are Traversable, and every Traversable has the following function:
traverse :: Applicative f => (a -> f b) -> t a -> f (t b)
In the example above, the Traversable is a list and the Applicative comes from apiCall, so your code is as simple as
traverse apiCall listOfWords
No juggling around anything. That's it. You know your result will be "list of successful results, or a failure."
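To see the shape of it without async machinery, here's a sketch of traverse for arrays, using a tiny hand-rolled Maybe as the Applicative (standing in for Async; `lookUp`, `just`, and `none` are all made-up names for illustration):

```javascript
// Minimal success/failure applicative, standing in for Async.
const just = (v) => ({ ok: true, value: v });
const none = () => ({ ok: false });

// traverse: map each element to a Maybe, collecting results;
// a single failure makes the whole result a failure.
const traverse = (f, xs) =>
  xs.reduce((acc, x) => {
    if (!acc.ok) return none();
    const fx = f(x);
    return fx.ok ? just([...acc.value, fx.value]) : none();
  }, just([]));

// A stand-in "API call" that fails on short words:
const lookUp = (w) => (w.length > 2 ? just(w.toUpperCase()) : none());

const allGood = traverse(lookUp, ["foo", "bar"]); // success holding ["FOO", "BAR"]
const oneBad = traverse(lookUp, ["foo", "no"]);   // failure: one element failed
```

Same idea as `traverse apiCall listOfWords`: you get "all results, or a single failure", with no juggling in user code.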
---
There are many more of these "type classes," and the real power comes from not needing to write much code anymore, because it's baked into the properties of the various type classes. Have a type that can be mapped to an orderable type? Bam, now your type is orderable and you never have to write a sort function for it. Etc.
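For example, here's the "map to an orderable type" idea as a sketch (a hypothetical `comparing` helper, not any particular library):

```javascript
// Derive a comparator from a key function instead of hand-writing a sort:
// if the key's type is orderable, your type is now sortable for free.
const comparing = (key) => (a, b) => {
  const ka = key(a), kb = key(b);
  return ka < kb ? -1 : ka > kb ? 1 : 0;
};

const users = [{ name: "b" }, { name: "a" }];
const sorted = [...users].sort(comparing((u) => u.name));
// sorted: [{ name: "a" }, { name: "b" }]
```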
Yeah, this is exactly what I was thinking. LLMs don't have precise geometrical reasoning from images. Having an intuition for how the models work is actually a defining skill in "prompt engineering".
Asking your favorite LLM actually helps a lot; unsurprisingly, they are generally well trained on LLM papers. In this case, though, it's important to realize the LLM is incapable of seeing or hearing or reading. Everything has to be transformed into a vector space. Images are generally cut into patches (like 16x16), which are themselves transformed by several neural networks into a semantic space represented by the model's parameters.
But this isn't hugely different from your vision. You don't see the pixel grid either. You have to use tools to measure things. You have the ability, over time, to iteratively interact with the image, perhaps by counting grid lines, but the LLM does not: it's a one-shot inference against this highly transformed image. Models have gotten better at complex visual tasks, including some types of counting, but they are not able to examine the image in any analytical way, or even in its original representation. It's just not possible.
It can, however, make tools that can. It's very good at working with PIL and other image-processing libraries, or even writing image-processing code de novo, and then using those tools to ground itself. Likewise, it cannot do math, but it can write a calculator that does highly complex mathematics on its behalf.
After a couple hours of playing around with it, it's a very solid entry, and very competitive with the big US releases. I'd say it's better than GLM4.6 and Kimi K2. Looking forward to v4.
Did you try with 60k+ context? I found previous releases to be lacklustre which I tentatively attributed to the longer context, due to the model being trained on a lot of short context data.
Yeah, financial bubble != useless technology. Maybe coding agents do cost $50 per month in the long term, but I might just pay that for entertainment and personal stuff. Like, I don't even try to vibe-code my job, but in the evenings, having a cool slop generator is good times.
Right now I use a Chinese vibe-code plan; really good value.
This article is pure hype. They just seem to offer an idea of "replication training," which is some vague agentic distributed RL. Multi-agent distributed reinforcement learning algorithms have been in the actual literature for a while. I suggest studying what DeepMind is doing for the current state of the art in agentic distributed RL.
I didn’t think it was vague. Given an existing piece of software, write a detailed spec on what it does and then reward the model for matching its performance.
The vague part is whether this will generalize to other non software domains.
Alan Turing had a great test (not a definition) of AGI, which we seem to have forgotten. No, I don't think an LLM can pass a Turing test (at least, I could break it).
I think it gave up trying to solve Pokemon. :) Seriously, aren't these ARC-AGI problems easy for most people? They usually involve some sort of pattern recognition and visual reasoning.