Hacker News | new | past | comments | ask | show | jobs | submit | miguno's comments

Thanks for your contributions!

> In the latest Stack Overflow survey, it's back in the "top 5 of desired stacks to use for the next project" over a decade after its inception!

Oh, where did you find that?

The only info I could find was that Rails ranks 10th in the Web Frameworks category for Admired vs. Desired in the 2025 survey: https://survey.stackoverflow.co/2025/technology/#2-web-frame....


Yes, been there, forgot that. I have since created a shell helper function that prints a list of the "new and cool" CLI tools I recently added to my dotfiles setup, which helps me commit them to long-term memory.
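A minimal sketch of what such a helper could look like: it keeps a plain-text list of recently adopted tools in your dotfiles and prints it on demand. The file path, function name, and tool names below are illustrative assumptions, not details from the comment.

```shell
# Hypothetical helper: print CLI tools recently added to my dotfiles.
# Assumes a one-tool-per-line list maintained by hand at $HOME/.dotfiles/new-tools.txt.
new_tools() {
  local list="${HOME}/.dotfiles/new-tools.txt"
  if [ -f "$list" ]; then
    cat "$list"          # print the list so the names get re-read regularly
  else
    echo "No new-tools list found at $list" >&2
    return 1
  fi
}
```

Calling `new_tools` from a shell startup file (or just occasionally) keeps the tool names in front of you until they stick.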


+1 to XLD. I have been using it for years, it's a wonderful piece of software.


Wonderful and charming. Thanks for sharing the game and the link to the engine!


I have been noticing this trend increasingly myself. It's getting more and more difficult to use tools like Google search to find relevant content.

Many of my searches nowadays include suffixes like "site:reddit.com" (or similar havens of, hopefully, still mostly human-generated content) to produce reasonably useful results. There's so much spam pollution by sites like Medium.com that it's disheartening. It feels as if humanity on the Internet is already in retreat to its last homely homes, which are more closed than open to the outside.

On the positive side:

1. Self-managed blogs (like: not on Substack or Medium) by individuals have become a strong indicator for interesting content. If the blog runs on Hugo, Zola, Astro, you-name-it, there's hope.

2. As a result of (1), I have started to use an RSS reader again. Who would have thought!

I am still torn about what to make of Discord. On the one hand, the closed-by-design nature of the thousands of Discord servers, where content is locked in forever without a chance of being indexed by a search engine, has many downsides in my opinion. On the other hand, the servers I do frequent are populated by humans, not content-generating bots camouflaged as users.


It has been like this for me for the last 15 years.


That is sad to hear. Spending even just thirty minutes on the website to better communicate what one can/cannot do would go a long way.

In any case, thank you for your work.


Completely agree here. It would probably save the time otherwise spent explaining the timeline/status here, on the forum, etc.


That's what the AI robots will use as an explanation when they have f*cked us up. :-)


This looks cool, can't wait to try it!

At first, people might think "$10,000 demo problem? What a high number!" Realistically, in corporate environments, that number is an understatement. Add to that the long time (and pain) it takes to get every team's buy-in to help with capturing/generating that data.


Thanks! Yeah, I didn't want to seem cheeky throwing out a big round number, but it feels ballpark right based on all the situations I've been in.


Thanks for putting things in perspective, EdwardDiego.

> The fact that MM2 happened, and Confluent didn't try to stop it, despite it being awfully similar to Replicator, makes me think that Confluent are acting in good faith.

Let me share an anecdote related to this example. We (Confluent) were actually the ones who contributed the documentation for MirrorMaker v2 to the Apache Kafka docs (https://kafka.apache.org/documentation/#georeplication). The development lead on MM2 was (an engineer at) Cloudera, yet they never spent the time to provide user-facing documentation to the Kafka project. I don't want to speculate about reasons, yet I noticed that MM2 was documented in the Cloudera docs.

If we didn't care for the Kafka community at Confluent, we would not have spent our own resources and time to fill that gap, given that we have a proprietary product similar to MM2 (i.e., Confluent Replicator).

https://github.com/apache/kafka/pull/9983


Shit, wait, there's documentation for MirrorMaker 2 now? I spent most of my time implementing it by reading hypothetical examples in a KIP, and then diving into the actual code.

Hardly the most straightforward approach, and the missing docs were rather a gaping hole. Thanks for the background on how that hole developed.

I really appreciate Confluent putting that time into documenting something vital, that could compete with your own product, and IMO that does put a nail in the previous commenter's assertions about Confluent's alleged attempts to wall off necessary features of Kafka.

