Every library is a liability, especially in terms of API. There are many examples where the first take on a problem within a std lib was a bad choice and required a major overhaul. Once something is in the standard library, it's practically impossible to take it back without breaking the world if you don't control the API consumers.
I think this post needs better examples to showcase the issue, because right now the issue is not clear. Ideally you would want an example that uses the context.Cause function; see below.
Contexts and errors communicate information in different directions: errors let the upstream function know what happened within the call, while context lets downstream functions know what happened elsewhere in the system. As a consequence, there isn't much point in cancelling the context and returning the error right away if nobody else is listening to it.
Also, contexts can be chained by definition. If you need to be able to cancel a context with a cause or cancel it with a timeout, you can just make two contexts and use them.
Not only that, isn't this a "lie"? You're cancelling the context explicitly, but that's not necessary, is it? At the moment the above call fails, the called-into functions might not have finished yet; there might be cleanup running later on which will then refuse to run on this eagerly cancelled context. There is no need to cancel this eagerly.
Perhaps I'm not seeing the problem being solved, but bog-standard `return err` with "lazy" context cancellation (in a top-level `defer cancel()`), or eager (in a leaf I/O goroutine) seems to carry similar functionality. Stacking both with ~identical information seems redundant.
> For many business apps, they will never reach 2 billion unique values per table, so this will be adequate for their entire life. I’ve also recommended always using bigint/int8 in other contexts.
I'm sure every DBA has a war story that starts with a similar decision made in the past
The other reason is the volume of code being produced, combined with constant product changes. An innocent change, like mixing two close but still different concepts, can easily poison the whole codebase, take years to undo, and may even be nearly impossible to fix if it propagates to external systems outside of direct control
It's fair to complain about Rust complexity IMO. What's _not_ fair is pointing at Zig as an example of how it could be simpler, when it's not memory safe. The requirements are different. At that point we might as well say "why use Rust when you can just use Go or TypeScript"
I’ve made my own attempt at building a social network [0] and it’s still running for me and my friends.
Here are a few things I’ve encountered during development:
- Based on my experience, a private-only approach does not work. It’s all about network effects, and if users can’t send their posts to all their friends, they’ll just move on to a different platform
- A chronological friends-only feed is really boring. Not that it’s bad per se, but it’s hard to convince people to stay if the service is not entertaining
It could also be that I wasn’t able to market the project right. Good luck with your attempt!
There are two sides to the argument which I think should be treated separately: a) is it a good idea overall? and b) is the htmx implementation good enough?
a) I think so, yes. I've seen many more SPAs that have completely broken page navigation. This approach does not fit all use cases, but it makes sense once you remember that the whole idea of htmx is that you rely on the webserver giving you page updates, as opposed to having a thick JS app rendering it all. And yes, JS libraries have to be wrapped to function properly in many cases, but you would do the same with non-React components in any React app, for example
b) I don't think so. htmx's boost functionality is an afterthought and it will always be like this. Compare it with Turbo [1], where this is a core feature and the approach is to use Turbo together with Stimulus.js, which gives you automagical component lifecycle management. Turbo still has its pains (my favorite GH issue [2]), but otherwise it works fine
htmx boost functionality is an afterthought in the main use case it is marketed for (turning a traditional MPA into something that feels like a SPA), but it's actually super useful for the normal htmx use case of fetching partial updates and swapping them into a page.
If you do something like `<a href=/foo hx-get=/foo hx-target="#foo">XYZ</a>`, the intention is that it should work with or without JavaScript or htmx available. But the problem is that if you Ctrl-click or Cmd-click, htmx swallows the Ctrl/Cmd key and opens the link in the same tab instead of in a new tab!
But if you do `<a href=/foo hx-boost=true hx-target="#foo">XYZ</a>`, everything works as expected: left-click does the swap in the current tab, Ctrl/Cmd-click opens in a new tab, etc.
Another point: you are comparing htmx's boost, one feature out of many, to the entirety of Turbo? That seems like apples and oranges.
hx-boost is an afterthought and we haven't pushed the idea further because we expect view transitions via normal navigation to continue to fill in that area
Would like to second the Turbo rec. I've had good results with it for nontrivial use cases, and would like to hear from people who have had different experiences. Also, praying that everything gets cached on first load and hand-waving that view transitions will eventually work is not a position I want to hear from an engineer in a commercial context. Really happy to see the author bring more attention to how good vanilla web technologies have gotten, though.
This and similar posts are a bulletproof way to start a flame war.
Last time it was generics that were missing; now everyone is raging about sum types, and of course explicit errors are a topic of constant concern, and why panics and not exceptions?
Go is well designed to build good software quickly. Easy dependency handling, good tooling, vast ecosystem.
Go is well designed to help developers with automation and to help them catch mistakes; that's why it's easy to parse, and all language design decisions take that into account.
It's also designed for producing a lot of code, which requires the language to be easy to understand and programs easy to tweak, and that's what it provides, since you'll have a lot of developers tweaking the code.
We're in the industry of shipping different kinds of products and that imposes different constraints and results in different languages being used. Also, different people care about different stuff and languages form clusters of similar minded people around them, that's a choice too.
One of the things that surprised me in the article was their usage of J2K. They’ve been using it as part of IntelliJ, alright, but why did they have to run it headless? They even mentioned that it was open-sourced. And later they said that they were not able to make many improvements because it was in maintenance mode at JetBrains.
I mean, with the resources Meta has, I’m sure they could have rewritten the tool, made a fork, or done anything else to incorporate their changes (they talk about overrides) or to transform the tool into something better fitting their approach. Maybe it has been done; it’s just not clear from the article.
Local state is indeed a problem that's exacerbated by swapping logic. Simple example: you have a form with a collapsible block inside, or maybe a dynamic set of inputs (imagine you're adding items to a catalog and you want to allow submitting more than one). If you save the form and simply swap the HTML, the block state will be reset. Of course you could store the toggle state somewhere to pass it along to the backend, but that's already a pain compared to the SPA approach, where you don't even bother with that since no state is reset.
You could use the query string as mentioned in the article, but that's not really convenient when done in a custom way for every single case.
Having said that, I think the way to go could be a community effort to settle on ways to handle different UI patterns and interactions, with some bigger project to serve as a testing ground. And that includes the backend part too; we can look at what Rails does with Turbo
My opinion is that the htmx (and similar) approach is different enough to potentially require a different set of UI interactions, and it usually hurts when one tries to apply React-friendly interactions to it.