I wrote Clojure for about five years. Left when I changed jobs, not because I wanted to. It's genuinely one of the most productive languages I've used, and I still miss the REPL-driven workflow.
One thing I built: defun https://github.com/killme2008/defun -- a macro for defining Clojure functions with pattern matching, Elixir-style. Still probably my favorite thing I've open sourced.
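For anyone who hasn't seen it, a small taste in the spirit of the README (the `say-hi` clauses are illustrative):

```clojure
(require '[defun.core :refer [defun]])

;; Each clause is a core.match pattern over the argument vector;
;; the first matching clause wins, Elixir/Erlang style.
(defun say-hi
  ([:dennis] "Hi, good morning, dennis.")
  ([:catty]  "Hi, catty, what time is it?")
  ([other]   (str "Hi, " other)))

(say-hi :dennis)       ;=> "Hi, good morning, dennis."
(say-hi "someone else") ;=> "Hi, someone else"
```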
I had an idea about writing something similar for multimethods, but I never got around to thinking it through and trying it out.
The way defmulti and defmethod work is that they perform a concurrency-safe operation on a data structure (the method table), which is then used to dispatch to the right method when you call the function.
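A minimal illustration of that open-dispatch shape (my own toy example):

```clojure
;; defmulti installs a dispatch fn; each defmethod registers a
;; clause in the multimethod's internal method table (safe under
;; concurrent registration), and calls dispatch through it.
(defmulti area :shape)

(defmethod area :circle [{:keys [r]}]
  (* Math/PI r r))

(defmethod area :rect [{:keys [w h]}]
  (* w h))

(area {:shape :rect :w 2 :h 3}) ;=> 6
```

The key property is that the dispatch is open: any namespace can add a `defmethod` later without touching the original definition.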
My hunch is that it should be possible to do something similar using core.match. What I don't know is whether that's a good idea or a terrible one. When you're already doing pattern matching, you likely want to see everything in one place, like with your library.
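For comparison, a closed, everything-in-one-place version of a toy dispatch with core.match (hypothetical shapes again):

```clojure
(require '[clojure.core.match :refer [match]])

;; Unlike defmethod, all clauses live in one form - easier to read
;; at a glance, but not extensible from other namespaces.
(defn match-area [shape]
  (match shape
    {:shape :circle :r r}    (* Math/PI r r)
    {:shape :rect :w w :h h} (* w h)))

(match-area {:shape :rect :w 2 :h 3}) ;=> 6
```

That trade-off (closed but readable vs. open but scattered) is presumably what a match-based multimethod macro would have to negotiate.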
Clojure is such a fun language to write; it has great concurrency tools from the get-go, and since it has access to the entire Java ecosystem, you're never hurting for libraries.
I find myself missing Clojure-style multimethods in most languages that aren't Clojure (or Erlang); once multimethods clicked for me, it seemed so blatantly obvious to me that it's the "correct" way to do modular programming that I get a little annoyed at languages that don't support it. And core.async is simply wonderful. There are lots of great concurrency libraries in lots of languages that give you CSP-style semantics (e.g. Tokio in Rust) but none of them have felt quite as natural to me as core.async.
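A tiny taste of why core.async feels so natural: go blocks plus channels, with no callback plumbing (toy example):

```clojure
(require '[clojure.core.async :refer [chan go >! <!!]])

;; A go block parks (rather than blocking a thread) on channel ops;
;; <!! blocks the calling thread until a value is available.
(defn answer []
  (let [c (chan 1)]
    (go (>! c (* 6 7)))
    (<!! c)))

(answer) ;=> 42
```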
I haven't had a chance to touch Clojure in a serious capacity in a while, though I have been tempted to see if I can get Claude to generate decent bindings for Vert.x so I can port over a personal project I've been doing in Java.
I regret stumbling on Clojure around 2012-2013. I had every chance to learn it and work on a big Clojure project with very knowledgeable people, yet I looked it dead in the eye, right between the parentheses, and confidently said: "no, thank you!". It took me a few more years of enormous struggle with JavaScript - and, after exhausting my options, trying TypeScript, CoffeeScript, LiveScript, GorillaScript, IcedCoffeeScript, Fay, Haste, GHCJS, and Elm - to finally arrive at ClojureScript. Even though I was dealing with frontend at that time, I already had solid experience and had gone through other stacks: .NET (C#, F#, VB); Python; Haskell; Objective-C; ActionScript; Delphi; and some other lesser-known things.
I remember my initial confusion, but it didn't take long before I suddenly felt flabbergasted - shit just made sense. It was so down-to-earth, inexplicably pragmatic and reasonable, that it made me want to learn more. I hadn't even built anything with it; I was just browsing through related Google search results when I saw a "Clojure/Remote Conf" announcement. It was a no-brainer - I took a day off and joined from my home computer. I immediately became a fan-boy. The amount of crazy awesome stuff I saw, the people I met in the chats, the articles and books I put in my notes - all of that made me feel excited. After the conference I sat in my chair staring at the blank screen for 40 minutes or so. Thinking, meditating, contemplating whether that was a mid-career crisis or something. Knowing that on Monday I would have to go back to the same struggle, the same shit, the same mess I'd had for the past two years - everything that until this very point had made me feel depressed. On Monday I went back to work and said I was leaving because "I saw things I cannot unsee". I just knew I could never sneak some Clojure in there. So I left, even though it was a well-paid job fifteen minutes from my home.
Getting into Clojure radically re-opened my eyes to the entire concept of computing. Not only had I found a different way of programming - I felt enlightened, largely thanks to the people I met in the community, who deserve special acknowledgment. Clojurians are just made different - they are the kindest, most knowledgeable, most tolerant, and most sincere professionals I have ever met. Every single time I asked a question - no matter how dumb, provocative, or confusing it was - they gave me a much deeper and more thought-provoking answer than I expected. None of my inquiries were ever dismissed, ignored, or rejected. They'd gladly discuss just about anything - no matter the language, tool, technique, or idea - whatever helps you find answers or get closer to a solution. I know I have become a better programmer thanks to Clojure. More importantly, it helped me become a better person.
Yes, I regret stumbling on Clojure - I wish I had never seen it before I was ready for it. It makes me sad to think of the time I wasted. I wish someone persuasive had convinced me to learn it sooner.
Maybe it's less risky for them to offer the individual features. That way, if people get unexpected results, it's easier to blame the user rather than the tool/company.
All *Claw implementations should use a local model for the heartbeat. It doesn't need to do anything complex: pretty much just read a text file and make a true/false decision about whether there's something to do when it wakes up.
If so, it can either shove the full heartbeat file at a smarter model or try to intelligently spread the tasks across the appropriate models.
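A sketch of what that cheap gate might look like (Clojure only because it's the thread's lingua franca; `ask-local-model` is a hypothetical stand-in for whatever local-model binding you'd actually use):

```clojure
(require '[clojure.string :as str])

;; Returns true when the heartbeat file has actionable content.
;; The cheap checks run first; the local model is only consulted
;; when the file exists and is non-empty.
(defn pending-work? [heartbeat-file ask-local-model]
  (let [text (try (slurp heartbeat-file)
                  (catch java.io.FileNotFoundException _ ""))]
    (and (not (str/blank? text))
         (boolean
          (ask-local-model
           (str "Answer yes or no: is there anything actionable here?\n"
                text))))))
```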
Would love feedback from folks running observability/time-series workloads, especially on upgrade paths + repartition behavior in real clusters. GitHub releases/changelog: https://github.com/GreptimeTeam/greptimedb/releases
This thread overlaps a lot with "Observability 2.0 and the Database for It" (https://news.ycombinator.com/item?id=43789625). The core claim there is: treat logs/spans as structured "wide events", and build a storage/query layer that can handle high-cardinality events so many metrics become derived views rather than pre-modeled upfront. It also argues the hard part isn't "dump it in S3", it’s indexing/queryability + cost control at scale.
In an agentic AI world this pressure gets worse: telemetry becomes more JSON-ish, more high-cardinality (tool names, model/version, prompt/template IDs, step graphs), and more bursty, so pre-modeling every metric up front breaks down faster.
I can't believe they made this decision. It's detrimental to the open-source ecosystem and MinIO users, and it's not good for them either, just look at the Elasticsearch case.
Hi HN, I'm Dennis from Greptime. This article is based on a talk by our engineer Ruihang Xia, who is also a PMC member of Apache DataFusion.
The most surprising finding for me was the hash-seed trick: using the same random seed across HashMaps in a two-phase aggregation gives a ~10% speedup on ClickBench. The bucket distribution from the first phase can be preserved during the merge, eliminating rehashing overhead and keeping the CPU cache happy.
We also discuss why Rust's prost library can be significantly slower than Go's protobuf implementation, and how fixing it improved our end-to-end throughput by 40%.
Happy to discuss Rust performance optimization or DataFusion internals.
Glad to see that Ruby Under a Microscope is still being updated. It’s an essential read for anyone who wants to understand how Ruby works internally — and I truly enjoy reading it.
Thank you for giving GreptimeDB a shout-out—it means a lot to us. We created GreptimeDB to simplify the observability data stack with an all-in-one database, and we’re glad to hear it’s been helpful.
OpenTelemetry-native is a requirement, not an option, for the new observability data stack. I believe otel-arrow (https://github.com/open-telemetry/otel-arrow) has strong future potential, and we are committed to supporting and improving it.
FYI: I think SQL is great for building everything—dashboards, alerting rules, and complex analytics—but PromQL still has unique value in the Prometheus ecosystem. To be transparent, GreptimeDB still has some performance issues with PromQL, which we’ll address before the 1.0 GA.
Are you saying that you prefer SQL over PromQL for metrics queries? I haven't tried querying metrics via SQL yet, but generally speaking have found PromQL to be one of the easier query languages to learn - more straightforward and concise IME. What advantages does SQL offer here?
I didn’t mean SQL over PromQL — they’re designed for different layers of problems.
SQL has a broader theoretical scope: it’s a general-purpose language that can describe almost any kind of data processing or analytics workflow, given the right schema and functions.
PromQL, on the other hand, is purpose-built for observability — it’s optimized for time‑series data, streaming calculations, and real‑time aggregation. It’s definitely easier to learn and more straightforward when your goal is to reason about metrics and alerting.
SQL’s strengths are in relational joins, richer operator sets, and higher‑level abstraction, which make it more powerful for analytical use cases beyond monitoring. PromQL trades that flexibility for simplicity and immediacy — which is exactly what makes it great for monitoring.