
IDK why these vacuous corpo tropes appear on the front page of HN every now and then. Sounds like exactly what a quasi-technical, management-leaning staff engineer would say.

Sure, in the end we work for these faceless, meat-grinding machines. But more or less, we all have some semblance of autonomy, and I absolutely can choose not to work on a product that people hate. I can switch teams before switching companies.

To some extent, I also just do what leadership asks, keep my mouth shut, and collect paychecks. But whenever that happens, I don’t gaslight myself by writing a post on why it's supposed to be this way.

To me, this seems like someone who is married to their paycheck and would do whatever necessary to protect that.


It's great that they are recreating much of the fundamental software stack using LLMs. But if you're going to 'vibeslop,' at least do it in a language other than JavaScript.

I struggle to understand why anyone would want to generate code in TypeScript, unless what you're building truly can't be done in Go, Rust, or Kotlin - anything but JS.

I’m not sure how much of an improvement it really is to rewrite something from PHP to TypeScript while claiming security benefits.


I'm not bored of the technology per se, but of the people around it. The yappers, doomers, and shills are insufferable.

I dream of a SQL-like engine for distributed systems where you can declaratively say "svc A uses the results of B & C, where C depends on D."

Then the engine would find the best way to resolve the graph and fetch the results. You could still add your imperative logic on top of the fetched results, but you don't concern yourself with the minutiae of resilience patterns and how to traverse the dependency graph.
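A minimal sketch of what such an engine's core might look like, in Python. The service names and the `fetch` stub are hypothetical stand-ins; a real engine would make network calls with retries and timeouts, and would run independent nodes in parallel rather than sequentially:

```python
from graphlib import TopologicalSorter

# Hypothetical declarative spec: "A uses the results of B & C, where C depends on D"
DEPENDS_ON = {"A": {"B", "C"}, "B": set(), "C": {"D"}, "D": set()}

def fetch(svc, dep_results):
    # Stand-in for calling the actual service with its deps' results.
    return f"{svc}({','.join(sorted(dep_results))})"

def resolve(graph):
    results = {}
    # static_order() yields each service only after all of its dependencies
    for svc in TopologicalSorter(graph).static_order():
        results[svc] = fetch(svc, {results[d] for d in graph[svc]})
    return results

print(resolve(DEPENDS_ON)["A"])  # D first, then C and B, then A
```

The declarative part is just the `DEPENDS_ON` mapping; everything about traversal order lives in the engine, which is the point of the original comment.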


Isn't this a common architecture in CQRS systems?

Commands go to specific microservices, with local state persisted in a small DB; queries go to a global aggregation system.


You could build something like this using Mangle Datalog. The Go implementation supports extension predicates that you can use for "federated querying" with filter pushdown. Or you could model your dependency graph, then query the paths and do something custom.

You could also build a fancier federated querying system that combines the two, taking a Mangle query and then analyzing and rewriting it. For that you're on your own though - I prefer developers hand-crafting something that fits their needs to a big convoluted framework that tries to be all things to all people.


I think SQL alone is great if you didn't drink the microservice kool-aid. You can model dependencies between pieces of data, and the engine will enforce them (and the resulting correct code will probably be faster than what you could do otherwise).

Then you can run A, B, C, and D from a consistent snapshot of data and get correct results.

The only thing microservices let you do is scale stateless compute, which is (architecturally) trivial to scale without microservices.

I don't believe any serious server app has had a better solution to data consistency than SQL.

All these 'webscale' solutions I've seen basically throw out all the consistency guarantees of SQL for speed. But once you need to make sure that different pieces of data are actually consistent, you're forced to reimplement transactions, joins, locks, etc.
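The guarantees being described are easy to demo even with embedded SQLite from Python's stdlib (table names here are made up for illustration): a foreign key plus a transaction means the engine, not application code, keeps related rows consistent.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this per connection
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
conn.execute("""CREATE TABLE transfers (
    id INTEGER PRIMARY KEY,
    src INTEGER NOT NULL REFERENCES accounts(id),
    dst INTEGER NOT NULL REFERENCES accounts(id),
    amount INTEGER NOT NULL)""")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])

# One transaction: either all three statements apply, or none do.
with conn:
    conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 50 WHERE id = 2")
    conn.execute("INSERT INTO transfers (src, dst, amount) VALUES (1, 2, 50)")

# A transfer referencing a nonexistent account is rejected by the engine.
try:
    conn.execute("INSERT INTO transfers (src, dst, amount) VALUES (1, 99, 10)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

print(dict(conn.execute("SELECT id, balance FROM accounts")))
```

Reimplementing this across a handful of services - atomicity, referential integrity, isolation - is exactly the work the parent comment says 'webscale' stacks end up redoing by hand.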


Datomic?

I follow this religiously. The process of posting is manual, but it works fairly well if your intentions are good and you're not blog-spamming different forums.

But I intentionally haven't added a comment section to my blog [1]. Mostly because I don't get paid to write there and addressing the comments - even the good ones - requires a ton of energy.

Also, scaling the comment section is a pain. I had Disqus integrated into my Hugo site, but it became a mess when people started having actual discussions and the section got longer and longer.

If the write-ups are any useful, they generally appear here or on Reddit, and I often link back to those discussions in the articles. That's good enough for me.

[1]: https://rednafi.com


> If the write-ups are any useful, they generally appear here or on Reddit, and I often link back to those discussions in the articles

Totally agree, I do the same as well on my site; e.g.: https://anil.recoil.org/notes/tessera-zarr-v3-layout

There are quite a few useful linkbacks:

- The social URLs (Bluesky, Mastodon, Twitter, LinkedIn, HN, Lobsters, etc.) are just a key in my YAML frontmatter

- Then there's standard.site which is an ATProto registration that gets an article into that ecosystem https://standard-search.octet-stream.net

- And for longer articles I get a DOI from https://rogue-scholar.org (the above URL is also https://doi.org/10.59350/tk0er-ycs46) which gets it a bit more metadata.

On my TODO list is aggregating all the above into one static comment thread that I can render. Not sure it's worth the trouble beyond linking to each network as I'm currently doing, since there are rarely any cross-network conversations anyway.


Damn. I got a bunch of ideas around ATProto from this comment. Also found your blog. I wish digging out human-written blogs weren't such a chore. I like the idea of blogs, but their discoverability sucks big time.

I like Kagi's small web initiative to help people find personal sites: https://blog.kagi.com/small-web-updates

I just use HN as my comment platform. I have a Hugo shortcode that (very respectfully!) grabs the comments on a full rebuild, but only if those comments are not already cached and the post is less than 7 days old. The formatting looks quite good on my site. Feel free to check it out at the bottom of this post: https://mketab.org/blog/sqlite_kdbx
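A build-time fetch like that can be sketched in a few lines of Python against the public Algolia HN API, which returns a story's whole comment tree as nested JSON. This is a sketch, not the actual shortcode; the caching and 7-day check are left out:

```python
import json
import urllib.request

# Algolia's HN items endpoint returns the full nested comment tree.
API = "https://hn.algolia.com/api/v1/items/{}"

def flatten(item, depth=0):
    """Walk the nested 'children' tree into a flat (depth, author, text) list."""
    out = []
    if item.get("text"):
        out.append((depth, item.get("author", "[deleted]"), item["text"]))
    for child in item.get("children", []):
        out.extend(flatten(child, depth + 1))
    return out

def fetch_comments(story_id):
    with urllib.request.urlopen(API.format(story_id)) as resp:
        return flatten(json.load(resp))

# The shape flatten() produces, shown without hitting the network:
sample = {"author": "op", "children": [
    {"author": "alice", "text": "Nice post", "children": []},
]}
print(flatten(sample))
```

From there it's just templating: render each tuple with an indent proportional to its depth, and cache the JSON so rebuilds don't re-fetch.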

Nice blog, thanks for this one: https://rednafi.com/go/splintered-failure-modes/. Well written - I only needed to read it once, and now I remember it.

> If the write-ups are any useful, they generally appear here or on Reddit, and I often link back to those discussions in the articles. That's good enough for me.

If you have a Mastodon account, you can embed all responses to your Mastodon post into your site. See https://blog.nawaz.org/posts/2025/Jan/adding-fediverse-comme...
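The Mastodon approach works because replies to a post are available from a public, unauthenticated endpoint, `/api/v1/statuses/:id/context`. A minimal sketch (the instance and status ID are placeholders, and the HTML in `content` should be sanitized before embedding):

```python
import json
import urllib.request

CONTEXT_URL = "https://{instance}/api/v1/statuses/{status_id}/context"

def extract_replies(context):
    """Pull (author, html_content) pairs out of a context payload."""
    return [(s["account"]["acct"], s["content"])
            for s in context.get("descendants", [])]

def fetch_replies(instance, status_id):
    url = CONTEXT_URL.format(instance=instance, status_id=status_id)
    with urllib.request.urlopen(url) as resp:
        return extract_replies(json.load(resp))

# The shape of the payload, shown without hitting the network:
sample = {"descendants": [{"account": {"acct": "alice@example.social"},
                           "content": "<p>Great write-up!</p>"}]}
print(extract_replies(sample))
```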


We have vibe leadership mills where AGI-pilled leaders are driving their companies into a death spiral.

There is literally no reason to write it in a JVM language in 2026 when better options exist: Go for simplicity and maintainability, or Rust to get the most out of the machine.

Also, it'll be hard for them to lure good people to work on that thing. Absolutely no one is getting excited to write, vibe, or maintain Java.


I am not thrilled to use Java, but it really does what it says on the tin. A customer copied the JAR file I sent them to their AS/400 and it just worked. There is nothing quite like it.

Go binary says hello. No VM overhead. Everything is statically linked.

Hi Go binary, unfortunately you don't exist, because there is no cross-compiler for that platform. Also, please don't crash if you ever do get cross-compiled, since the target system doesn't understand your UTF-8 strings.

A mandatory read by Peter Norvig - even more relevant now:

https://norvig.com/21-days.html


I come from a developing country where the only OS people know is Windows. Macs used to be too expensive, and Linux didn’t have any of the applications people would use (read: pirate) for work.

Typically, college students and teachers would get $500 dingy laptops from Asus, Acer, and Dell. A decade ago, those machines were fine. My mom used one for 7 years, right until they retired Windows 7.

Then the machines started becoming absolutely useless with Windows 8, 10, and now 11. 8GB machines are barely usable now, with constant Windows updates and all the background telemetry services maxing out the disk.

Sure, people can turn off some of these rogue processes. But my point is - an OS should just disappear from the user’s view and let them work.

I don’t live in my home country and haven’t visited in a long time, but I’ve heard that people are really opting for second-hand MacBook Airs. Now with the MacBook Neo, more people will go that route.

Students are opting for cheap Windows machines and flashing them with Ubuntu to make them usable.


The world could use one less "how I slop" article at this point.

This reminds me of the early Medium days when everyone would write articles on how to make HTTP endpoints or how to use Pandas.

There’s not much skill involved in hauling agents, and you can still do it without losing your expertise in the stuff you actually like to work with.

For me, I work with these tools all the time, and reading these articles hasn’t added anything to my repertoire so far. It gives me the feeling of "bikeshedding about tools instead of actually building something useful with them."

We are collectively addicted to making software that no one wants to use. Even I don’t consistently use half the junk I built with these tools.

Another thing is that everyone yapping about how great AI is isn’t actually showing the tools’ capabilities in building greenfield stuff. In reality, we have to do a lot more brownfield work that’s super boring, and AI isn’t as effective there.

