Just try using Claude via the API for an hour and you will see that the subscriptions are definitely not profitable (unless the share of "paying but dormant" users is very high).
The cursor-mirror skill and cursor_mirror.py script let you search through and inspect all of your chat histories -- the thinking bubbles and prompts, the context assembly, the tool and MCP calls and their parameters -- and analyze what the agent did, even after Cursor has summarized, pruned, and "forgotten" it. It's all still there in the chat log and SQLite databases.
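A minimal sketch of the kind of query this enables. The table name (`cursorDiskKV`) and JSON layout below are illustrative assumptions modeled loosely on Cursor's key/value SQLite store; the real schema may differ, so treat this as a shape, not a spec:

```python
import json
import sqlite3

# Hypothetical layout: a key/value table whose values are JSON blobs of
# chat events. Built in-memory here so the example is self-contained.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cursorDiskKV (key TEXT PRIMARY KEY, value BLOB)")
db.execute(
    "INSERT INTO cursorDiskKV VALUES (?, ?)",
    ("chat:123", json.dumps({
        "role": "assistant",
        "thinking": "plan the refactor",
        "tool_calls": [{"name": "read_file", "args": {"path": "main.py"}}],
    })),
)

# Grep every stored blob for a substring -- the "German toilet" move:
# nothing is flushed, so you can inspect the whole history after the fact.
def grep_history(conn, needle):
    hits = []
    for key, value in conn.execute("SELECT key, value FROM cursorDiskKV"):
        if needle in value:
            hits.append((key, json.loads(value)))
    return hits

hits = grep_history(db, "read_file")
print(hits[0][0])  # -> chat:123
```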
The cursor-mirror skill and reverse-engineered Cursor schemas:
The German Toilet of AI
"The structure of the toilet reflects how a culture examines itself." — Slavoj Žižek
German toilets have a shelf. You can inspect what you've produced before flushing. French toilets rush everything away immediately. American toilets sit ambivalently between.
cursor-mirror is the German toilet of AI.
Most AI systems are French toilets — thoughts disappear instantly, no inspection possible. cursor-mirror provides hermeneutic self-examination: the ability to interpret and understand your own outputs.
What context was assembled?
What reasoning happened in thinking blocks?
What tools were called and why?
What files were read, written, modified?
This matters for:
Debugging — Why did it do that?
Learning — What patterns work?
Trust — Is this skill behaving as declared?
Optimization — What's eating my tokens?
See: Skill Ecosystem for how cursor-mirror enables skill curation.
>Žižek on toilets. Slavoj Žižek during an architecture congress in Pamplona, Spain.
>The German toilets, the old kind -- now they are disappearing, but you still find them. It's the opposite. The hole is in front, so that when you produce excrement, they are displayed in the back, they don't disappear in water. This is the German ritual, you know? Use it every morning. Sniff, inspect your shits for traces of illness. It's high Hermeneutic. I think the original meaning of Hermeneutic may be this.
>Hermeneutics (/ˌhɜːrməˈnjuːtɪks/) is the theory and methodology of interpretation, especially the interpretation of biblical texts, wisdom literature, and philosophical texts. Hermeneutics is more than interpretive principles or methods we resort to when immediate comprehension fails. Rather, hermeneutics is the art of understanding and of making oneself understood.
----
Here's an example cursor-mirror analysis of an experiment: 23 runs with four agents playing several turns of Fluxx per run (1 run = 1 completion call), 1045+ events, 731 tool calls, 24 files created, 32 images generated, and 24 custom Fluxx cards created:
Cursor Mirror Analysis: Amsterdam Fluxx Championship -- Deep comprehensive scan of the entire FAFO tournament development:
Just an update re German toilets: no toilet installed in the last 30 years (that I know of) uses a shelf anymore. Dropping the shelf reduces water usage by about 50% per flush.
A paper on the same topic: On the Expressiveness of Softmax Attention: A Recurrent Neural Network Perspective, Gabriel Mongaras, Eric C. Larson, https://arxiv.org/abs/2507.23632
Linear attention is a first-degree approximation of Softmax attention, and model performance gets better as you increase the degree of the Taylor approximation.
I'm thinking about adapting an existing model to Taylor-approximated attention. I think it should be possible with some model surgery and rehabilitation training.
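The linear-attention-as-first-degree-Taylor claim can be sanity-checked numerically: replace the elementwise exp in softmax attention with a truncated Taylor series and watch the approximation error fall as the degree grows. A minimal NumPy sketch (shapes, scaling, and the choice of degrees are my own, not from the paper):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
d = 16
# Scale so logits are roughly unit-variance, keeping the Taylor series honest.
Q = rng.standard_normal((8, d)) / d**0.25
K = rng.standard_normal((8, d)) / d**0.25
V = rng.standard_normal((8, d))

def softmax_attn(Q, K, V):
    S = np.exp(Q @ K.T)
    return (S / S.sum(-1, keepdims=True)) @ V

def taylor_attn(Q, K, V, degree):
    # exp(x) ~= sum_{k<=degree} x^k / k!, applied elementwise to the logits.
    # degree=1 gives 1 + x: the linear-attention case from the comment above.
    X = Q @ K.T
    S = sum(X**k / math.factorial(k) for k in range(degree + 1))
    return (S / S.sum(-1, keepdims=True)) @ V

ref = softmax_attn(Q, K, V)
errs = [np.abs(taylor_attn(Q, K, V, p) - ref).max() for p in (1, 2, 4, 8)]
print(errs)  # error shrinks as the degree of the approximation grows
```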
You can use an agent while still understanding the code it generates in detail. In high stakes areas, I go through it line by line and symbol by symbol. And I rarely accept the first attempt. It’s not very different from continually refining your own code until it meets the bar for robustness.
Agents make mistakes which need to be corrected, but they also point out edge cases you haven’t thought of.
Definitely agreed, that is what I do as well.
At that point you have a good understanding of that code, which is in contrast to what the post I responded to suggests.
I agree and am the same. Using them to enhance my knowledge, as well as autocomplete on steroids, is the sweet spot. It's much easier to review code if I'm "writing" it line by line.
I think the reality is a lot of code out there doesn’t need to be good, so many people benefit from agents etc.
For 8+ bays you just need a SAS HBA card and one free PCI-E slot. Not to mention that many motherboards will have 6+ SATA ports already.
If anything, 2nd hand AMD gaming rigs make more sense than old servers.
I say that as someone with an always-off R720xd at home due to noise and heat. It was fun when I bought it during winter years ago, until summer came.
I've been turning off my home server, even though it's a modern PC rather than old server hardware, because it idles at 100 W, which is too much. It has a Ryzen 7900X in it.
Not sure if it's failing to reach the lower power states, or if it's the 10 HDDs spinning, or even the GPU. But I don't really have anything important running on it, so I can just turn it off.
> For 8+ bays you just need a SAS HBA card and one free PCI-E slot. Not to mention that many motherboards will have 6+ SATA ports already.
And what case are you putting them into? What if you want it rack mounted? What about >1gig networking? What if I want a GPU in there to do whisper for home assistant?
Used gaming rigs are great. But used servers still have loads of value too; compute just isn't one of them.
A rackmount case from Rosewill costs a couple of hundred bucks or so, new. And they'll remain useful for as long as things like ATX boards and 3.5" hard drives are useful.
I mean: An ATX case can be paid for once, and then be used for decades. (I'm writing this using a modern desktop computer with an ATX case that I bought in 2008.)
PCI Express lanes can be multiplied. There should frankly be more of this going on than there is, but it's still a thing that can be done.
Consumer boards built on the AMD X670E chipset, for instance, have some switching magic built in. There's enough direct CPU-connected lanes for an x16 GPU and a couple of x4 NVMe drives, and the NIC(s) and/or HBA(s) can go downstream of the chipset.
(Yeah, sure: It's limited to an aggregate 64 Gbps at the tail end, but that's not a problem for the things I do at home where my sights are set on 10Gbps networking and an HBA with a bunch of spinny disks. Your needs may differ.)
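The 64 Gbps figure is consistent with a PCIe 4.0 x4 chipset uplink (4 lanes at 16 GT/s with 128b/130b encoding). A quick back-of-envelope check against the stated home-lab load; the NIC and per-disk throughput numbers are assumptions for illustration:

```python
lanes = 4
gbps_per_lane = 16 * 128 / 130         # PCIe 4.0: 16 GT/s, 128b/130b encoding
uplink_gbps = lanes * gbps_per_lane    # ~63 Gbps of usable uplink

nic_gbps = 10                          # one 10GbE port (assumed)
hdds = 8                               # "a bunch of spinny disks" (assumed count)
hdd_mb_s = 280                         # optimistic sequential rate per disk (assumed)
hdd_gbps = hdds * hdd_mb_s * 8 / 1000  # MB/s -> Gbps across all disks

load_gbps = nic_gbps + hdd_gbps
print(f"uplink {uplink_gbps:.0f} Gbps, worst-case load {load_gbps:.1f} Gbps")
```

Even with every disk streaming sequentially and the NIC saturated, the load sits well under the uplink, which is the parent's point.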
It is always PWM under the hood; the question is how much was spent (or not) on the filtering network after the PWM stage. Is it closer to a buck converter, or is it straight-up flicker at the output?
Since these things have lots of LEDs, my first thought was to put a range of different tiny delays on them to induce destructive interference, so that the off parts of one LED's flicker are the on parts of another, to smooth out the overall output.
Actually that's not true, my first thought was "just use a layer of phosphor excited by the LEDs", but fluorescent tubes do that and people used to make the same complaints about flicker, so.
Looks like "flicker index" is a useful(?) search term, anyway.
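The phase-staggering idea can be sketched numerically: average several identical PWM waveforms whose phases are offset by 1/N of a period, so one channel's off-gaps overlap another's on-time, and compare the peak-to-peak ripple against a single channel. A toy simulation (duty cycle and LED count are arbitrary):

```python
import numpy as np

def pwm_wave(t, duty, phase):
    # 1 during the "on" part of each period, 0 otherwise (period = 1).
    return ((t + phase) % 1.0 < duty).astype(float)

t = np.linspace(0, 10, 10_000, endpoint=False)  # 10 PWM periods
duty = 0.3
n_leds = 8

single = pwm_wave(t, duty, 0.0)
# Stagger each LED's phase by 1/N of a period: the instantaneous count of
# "on" channels then only toggles between floor(N*duty) and ceil(N*duty).
staggered = np.mean(
    [pwm_wave(t, duty, k / n_leds) for k in range(n_leds)], axis=0
)

ripple = lambda x: x.max() - x.min()
print(ripple(single), ripple(staggered))
```

A single channel swings the full 0-to-1 range, while the staggered ensemble's total output only ripples by 1/N of that, which is the smoothing the comment is after.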
Nobody runs 4096 samples per pixel. In many cases 100-200 (or even less with denoising) are enough. You might run up to low-1000 if you want to resolve caustics.