Hacker News | resiros's comments

I use NetBird and can only recommend it.

Netbird is very good for my use case. Simple to set up, and just works.

It might simply be that it was not trained as much on Elixir RL environments compared to Gemini and GPT. I use it for both TS and Python, and it's certainly better than Gemini. For Codex, it depends on the task.

I wonder why AI labs have not worked on improving the quality of the text outputs. Is this, as the author claims, a property of the LLMs themselves? Or is there simply not much incentive to create the best writing LLM?

The argument is that the best writing is the unexpected, while an LLM's function is to deliver the expected next token.

Even more precisely, human writing contains unpredictability that is more or less intentional (what might be called author's intent), as well as much that is added subconsciously (what we might call quirks or imprinted behavior).

The first requires intention, something that, as far as we know, LLMs simply cannot truly have or express. The second is something that can be approximated, perhaps very well, but a mass of people using the same models with the same approximations still leads to a loss of distinction.

Perhaps LLMs that were fully individually trained could sufficiently replicate a person's quirks (I dunno), but that's hardly a scalable process.


Yeah, that makes banana.

What was the name of the last book you read?

I remember an article from a few weeks back[1] which mentioned that the current focus is on improving the technical abilities of LLMs. I can imagine many (if not most) of their current subscribers are paying for the technical ability rather than for creative writing.

This also reminded me that on OpenRouter, you can sort models by category. The ones tagged "Roleplay" and "Marketing" are probably going to have better writing compared to models like Opus 4 or ChatGPT 5.2.

[1]: https://www.techradar.com/ai-platforms-assistants/sam-altman...


That's like asking why McDonald's doesn't improve the quality of their hamburgers. They can, but only within the bounds of mass-produced cheap crap that maximizes profit. Otherwise they'd be a fundamentally different kind of company.

I mean, there are tons of writing tools that use AI, like Grammarly etc. For actual general-purpose LLMs, I don't think there's much incentive to make them write "better" in the artistic sense of the word... if the idea is to make the model good at tasks in general and communicate via language, that language should sound generic and boring. If it's too artistic or poetic or novel-like, the communication would come across as a bit unhinged.

"Update the dependencies in this repo"

"Of course, I will. It will be an honor, and may I say, a beautiful privilege for me to do so. Oh how I wonder if..." vs. "Okay, I'll be updating dependencies..."


I wish it would just say "k, updated xyz to 1.2.3 in Cargo.toml" instead of the entire pages it likes to output. I don't want to read all of that!

I used to feel the same, but you can just prompt it to reply with only one word when it's done. Most people prefer it to summarize because it's easier to track, so I guess that's the natural default.

I mean, no one is asking for artistic writing, just not such obvious AI slop. The fact that we can all now easily tell that some text has been written or edited by AI is already an issue. No amount of prompting can help with that.

The article frames this as "semantic ablation" but the underlying mechanism is more specific: it is distributional averaging. RLHF and DPO reward policies optimize for the modal response given a prompt distribution. That is not a bug in the training process, it is the objective function working as designed. The model learns to produce the response that the median annotator would rate highest, and that response is, almost by definition, the least distinctive one.

What is underappreciated is how much stylistic signal lives in what information retrieval people call "burstiness" -- the tendency for distinctive words to cluster rather than distribute evenly. Hemingway's short declarative stacking, DFW's recursive parentheticals, legal writing's formulaic precision -- these are all bursty patterns that a model trained to maximize expected reward will sand down. You can partially recover it with few-shot prompting, but the model is fighting its own reward gradient the entire time.
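As a rough illustration (assuming a simple variance-to-mean dispersion measure, which is one of several ways burstiness is quantified; the window size is arbitrary), a word that clusters scores well above 1, while an evenly distributed word scores at or below 1:

```python
# Sketch: burstiness as the variance-to-mean ratio (index of dispersion)
# of a word's counts over fixed-size token windows. An evenly spread
# (Poisson-like) word scores ~1 or below; clustered words score higher.
from statistics import mean, pvariance

def burstiness(tokens, word, window=50):
    counts = [tokens[i:i + window].count(word)
              for i in range(0, len(tokens), window)]
    m = mean(counts)
    return pvariance(counts) / m if m else 0.0

# "whale" appears only in clusters; "the" is spread evenly.
tokens = (["whale"] * 10 + ["the"] * 90) * 2
print(burstiness(tokens, "whale"))  # clustered -> well above 1
print(burstiness(tokens, "the"))    # even -> below 1
```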

The practical question is whether you can encode a style prior that survives the decoding process. The research on authorship attribution (stylometry) suggests the feature set is well-understood -- function word frequencies, sentence length distributions, type-token ratios, syntactic complexity metrics. But nobody has built a production system that uses those features as a constraint during generation rather than just detection.
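For reference, the detection-side features that stylometry relies on are cheap to compute. A minimal sketch follows; the function-word list and feature names here are illustrative, not any standard set:

```python
# Sketch of common stylometric features: function-word frequencies,
# sentence-length distribution, and type-token ratio. The function-word
# list below is a tiny illustrative sample, not a canonical set.
import re
from collections import Counter

FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in", "that", "it", "is", "was"}

def stylometric_features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"[a-z']+", text.lower())
    lengths = [len(re.findall(r"[a-z']+", s.lower())) for s in sentences]
    counts = Counter(tokens)
    return {
        "type_token_ratio": len(set(tokens)) / len(tokens),
        "mean_sentence_len": sum(lengths) / len(lengths),
        "function_word_freq": {w: counts[w] / len(tokens)
                               for w in FUNCTION_WORDS if w in counts},
    }

feats = stylometric_features("The cat sat. The cat sat on the mat, and it slept.")
print(feats["type_token_ratio"], feats["mean_sentence_len"])
```

Using such features as a decoding-time constraint, rather than a post-hoc classifier, is the part that (as the comment notes) nobody seems to have productionized.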


Yeah, but that's not what I am saying. I am saying its default writing style is for communicating with the user, not producing content/text, hence it has that distinctive style we all recognise. If you want AI writing that's not slop, there are tools trying to do that, but the default LLM writing style is unlikely to change, imo.

Honestly, just use OpenCode. It works with Claude Code Max, and the TUI is 100x better. The only thing that sucks is compaction.

How much longer is Anthropic going to allow OpenCode to use Pro/Max subscriptions? Yes, it's technically possible, but it's against Anthropic's ToS. [1]

1: https://blog.devgenius.io/you-might-be-breaking-claudes-tos-...


Consider switching to an OpenAI subscription, which allows OpenCode use.

Yeah. OpenAI allows any client and only one single fixed system prompt. All their control is on the backend, which is worse than Claude's.

Doesn't Claude Code have an agent SDK that officially allows you to use the good parts?

Yes but you can't use a subscription with that

There are also Azure versions of Opus

I have been unable to use OpenCode with my Claude Max subscription. It worked for a while, but then it seems like Anthropic started blocking it.

What’s 100x better about the TUI?

Nope, OpenCode is nowhere near Claude Code.

It's amazing how much other agentic tools suck in comparison to Claude Code. I'd love to have a proper alternative. But they all suck. I keep trying them every few months and keep running back to Claude Code.

Just yesterday I installed Cursor and Codex, and removed both after a few hours.

Cursor disrespected my setting to ask before editing files. Codex renamed my tabs after I had named them. It also went ahead and edited a bunch of my files after a fresh install without asking me. The heck, the default behavior should have been to seek permission at least the first time.

OpenCode does not allow me to scroll back and edit a prior prompt for reuse. It also keeps throwing all kinds of weird errors, especially when I'm trying to use free or lower-cost models.

Gemini CLI reads strange Python files when I'm working on a Node.js project, what the heck. It also never fixed the diff display issues in the terminal; it's always so difficult for me to actually see what edits it's trying to make before it makes them. It also frequently throws random internal errors.

At this point, I'm not sure we'll be seeing a proper competitor to Claude Code anytime soon.


Hmmm, I used OpenCode for a while and didn't have this experience. I felt like OpenCode was the better experience.

Same, I still use CC mainly due to it being so wildly better at compaction. The overall experience of using OpenCode was far superior - especially with the LSP configured.

I use Opencode as my main driver, and I don’t experience what you have experienced.

For instance, OpenCode has an /undo command, which allows you to scroll back and edit a prior prompt. It also supports forking conversations from any prior message.

I think it depends on the setup. I overwrote the default planning agent prompt of OpenCode to fit my own use cases and my own MCP servers. I've been using OpenAI's GPT Codex models, and they have been performing very well; I am able to make them do exactly what I ask.

Claude Code may do stuff fast, but in terms of quality and the ability to edit only what I want it to, I don't think it's the best. Claude Code often takes shortcuts or does extra stuff that I didn't ask for.


5.3 Codex on Cursor is better than Claude Code.

Not in my (limited) experience. I gave CC and codex detailed instructions for reworking a UI, and codex did a much worse job and took 5x as long to finish.

This is quite nice but limited in that it is single-player. In my opinion, the next generation of AI agents will be multi-player. Ramp's background agent is a good example https://builders.ramp.com/post/why-we-built-our-background-a...

Making this multi-player and creating the right representation for collaborating with agents are, in my opinion, the next bottlenecks. I wrote a small article about my thoughts there: https://x.com/mmabrouk_/status/2010803911486292154


Very interesting, but the limitation on the libraries you can use is quite restrictive.

I wonder if they plan to invest seriously in this.


It would be nice to have an open-source version that you can self-host. That would solve the abuse problem. Maybe with a service to create API keys.


Yeah, this is the next step. I first wanted to see whether this gets any traction. I think I will provide a dockerized version of the server part that you can run with a single command, and maybe some interface to create API keys and distribute them to your users.


Fair enough from a business standpoint, but seeing as there are massive privacy/security risks involved in exposing your data to an opaque service, the open source component is probably a non-optional aspect of the value prop.


How come? Just because it's open source doesn't mean that they run that exact binary on their servers. ngrok does pretty well without open-sourcing.


If you have the source and trust is a factor for you, the locus of trust moves: you can simply self-host and know exactly what you're running.


fwiw, ngrok started as open source


We're using pgrok for that in our organization. A small EC2 instance serves as the public endpoint.


That is really cool! Congrats on the launch!

I was surprised not to see a share and embed button. I would expect that could be huge for growth.


Thank you! There is a share button in the upper-right corner of the answer page screen :)


"TypeScript is now the most used language on GitHub. In August 2025, TypeScript overtook both Python and JavaScript. Its rise illustrates how developers are shifting toward typed languages that make agent-assisted coding more reliable in production. It doesn’t hurt that nearly every major frontend framework now scaffolds with TypeScript by default. Even still, Python remains dominant for AI and data science workloads, while the JavaScript/TypeScript ecosystem still accounts for more overall activity than Python alone."

I am not sure I agree with the conclusion that "developers are shifting toward typed languages that make agent-assisted coding more reliable in production". I see it more as full-stack development being democratized.

I am originally a Python/backend/ML engineer, but in the last few years I've built many frontends, simply because AI coding enables so much.

That was not an option previously.

