BalinKing's comments | Hacker News

In the paper itself, the abstract actually does have a paragraph break, so it's probably just an autoformatting issue or something.

AI-generated comments are disallowed by HN guidelines: https://news.ycombinator.com/newsguidelines.html#generated


I furthermore wish that "posting an LLM-generated comment (i.e., passing it off as your own)" was worthy of an instant ban, because I see this sort of behavior from non-green accounts as well.

EDIT: I meant (but totally forgot) to qualify that my "proposal" would only apply when the LLM-ness is self-obvious—idk, make up a "reasonable person" standard or something. Presumably, the moderators would err on the side of letting things slide. Even so, many comments I've seen are simply impossible for any reasonable person to claim as "human-written"—the default ChatGPT style is simply too distinct.


> I furthermore wish that "posting an LLM-generated comment (i.e., passing it off as your own)" was worthy of an instant ban

It pretty much is. It’s not hard and fast (sometimes we’ll warn people or email them to ask if it’s not certain) and it takes time for us to see things and act, especially when people don’t email us when they see these comments.

But as a general rule, accounts that post generated comments get banned.


I think your comment was generated by an LLM and hereby vote for your immediate and permanent instant ban.


I think that your comment was generated by Eliza, and hereby vote for you to get a karma boost for being Legit Old School, then an immediate and permanent instant ban.

I'm joking, of course. If your comment was generated by Eliza it would have started with "How do you feel about 'I think your comment...'" :)


Can you elaborate on that?


Eliza was one of the first chatbots from the mid to late 60s: https://en.wikipedia.org/wiki/ELIZA
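For anyone curious how it pulled this off: ELIZA's trick was little more than pattern matching and pronoun reflection. A toy sketch in Python (this is only meant to convey the flavor, not Weizenbaum's actual script, which used a richer rule system):

```python
# Toy ELIZA-style responder: reflect the user's pronouns and wrap the
# statement in a canned deflection. Illustrative only, not the original.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "you": "I", "your": "my", "am": "are",
}

def reflect(text: str) -> str:
    """Swap first- and second-person words, leaving everything else alone."""
    words = text.lower().split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement: str) -> str:
    """Answer any statement with ELIZA's classic deflection."""
    return f"How do you feel about '{reflect(statement)}'?"
```

So `respond("I think your comment was generated by an LLM")` produces the kind of reply joked about upthread.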


that's interesting, tell me more about one of the first chatbots from the mid to late 60s


BTW, what ELIZA implementation are y'all using? The Emacs Doctor?


What would it mean to you if we were all using the Emacs Doctor?


Emacs? Hah! I would appreciate it if you would continue.


Whooosh! I think you missed the joke. :-)

(I didn't, and I thank everyone involved for the nostalgic moment. Also, shout out to Dr. Sbaitso!)


There was also Dr. Abuse, a Spanish language chatbot created in 1992. I remember it being quite popular here in Argentina.


(I think you missed the joke.)


Joke's on you, all of my comments have been written by Dr. Sbaitso[0] since forever =)

[0] https://en.wikipedia.org/wiki/Dr._Sbaitso


Is it because do I feel about 'i think my comment' that you came to me?


I've seen people admit it. I've even seen a commenter say that they were an agent. We can do these cases.


Then nobody would admit it, so the problem persists. Except maybe for fully automated accounts. Those should of course be banned anyways.


Many HNers strongly argue that it's absolutely impossible to distinguish between AI text and non-AI text. Some of it seems to be a knee-jerk reaction to some of the occasional, one-sided stories of people who were accused of using LLMs and fired from their jobs. And some of it seems to be just hedging so that we don't develop a culture that could penalize their LLM-generated posts or code.

We had people defending the fired Ars Technica guy, even though he admitted to using an LLM in some sort of a contrived non-apology along the lines of "I did it because I had a cold".

My main problem with that is that you can just generate an infinite supply of LLM op-eds about LLMs, and is this really what we want to read every day? If I want to know what ChatGPT thinks about the risks or benefits of vibecoding, I'll just ask it.


> Many HNers strongly argue that it's absolutely impossible to distinguish between AI text and non-AI text.

And it's becoming more and more difficult - not just by AI getting "better" (and training removing many of the telltale signs), but also because regular people "learn" to write like an AI does. We're seeing it with "algospeak" - young terminally online people literally say stuff like "unalived" in the meatspace nowadays.

We're living in a 1984 LARP.


Hmm, some LLM text is hard to detect, sure.

Some is also horribly easy. If the text is full of:

- Overly positive commentary and encouragement

- Constant use of bullet point lists, bolding and emoji

- This quaint forced 'funniness', like a misplaced attempt at being lighthearted

- A lot of blablah that just missed the point

- Not concise and to the point, but also not super long

Then that really screams ChatGPT to me.

I think it's because this seems to be the default styling of ChatGPT. When people tailor their prompt to be more specific about style it's a lot harder to detect but if they just dump a few lines of instructions about the content into it, this is what you'll get. So the low-effort slop is still pretty easy to detect IMO.
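The tells listed above could even be checked mechanically. A toy heuristic sketch, where the pattern list and the whole approach are purely illustrative guesses rather than a real detector:

```python
import re

# Toy scorer for the "default ChatGPT style" tells described above.
# Each pattern is an illustrative guess at one tell; a real detector
# would need far more than surface regexes.
TELLS = {
    "bullet_lists": re.compile(r"^\s*[-*\u2022]\s", re.MULTILINE),
    "bold": re.compile(r"\*\*[^*]+\*\*"),
    "emoji": re.compile(r"[\U0001F300-\U0001FAFF]"),
    "hype": re.compile(r"\b(great question|absolutely right|game.changer)\b",
                       re.IGNORECASE),
}

def slop_score(text: str) -> int:
    """Count how many distinct tells appear at least once in the text."""
    return sum(1 for pat in TELLS.values() if pat.search(text))
```

A low-effort slop comment trips several tells at once, while ordinary prose trips none, which matches the intuition that the default styling is easy to spot but a tailored prompt is not.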


> This quaint forced 'funniness', like a misplaced attempt at being lighthearted

HN always downvotes attempts at humour, be they chatbot or brain generated :)


LOL. You just described your own comment!


Well, just the bullet points, but in this case I thought they were warranted. ChatGPT uses them whenever and wherever.


I thought it was intentional. Like a Poe's Law sort of thing.


Sure, it's obviously impossible to ID any single piece of writing as from an LLM without significant false positives.

But in practice, I frequently encounter a comment that either screams generic LLM slop or even just as a vague indefinable "vibe" due to one or more telltale signs, so that's red flag #1. Then, I go to the comment history, at that point if it's really a bot/claw/agent or a poster heavily using LLMs I'll usually find page after page of cookie cutter repeats of the exact same "LLM smell" (even if that account has been prompted to avoid em-dashes/lists/etc, they still trend towards repetition of their own style).

At that point a human moderator would have more than enough evidence to ban an account. It's not like we're talking about a death sentence or something. If no clear pattern of abuse from the long term commenting activity, then give them the benefit of the doubt and move on.


The moderators are supposed to just know it when they see it? It's that black and white to you? Or are lots of false positives a price we have to pay?


Yeah it's weird, there was one case where I thought it was AI but wasn't sure. Several other comments pointed it out, too. Author claimed he wrote it manually. (Which is honestly even more concerning!)

Maybe there can be a dedicated 'flag botspam' button?

Then again it's a nuanced issue. I see AI used in a large percentage of writing now, so would this rule apply to the article as well?


> Yeah it's weird, there was one case where I thought it was AI but wasn't sure. Several other comments pointed it out, too. Author claimed he wrote it manually. (Which is honestly even more concerning!)

I find the above comment concerning, so I ask: to what degree is the above commenter calibrated to ground truth? How would they know? How would we know?

[1]: https://en.wikipedia.org/wiki/Calibrated_probability_assessm...

It seems to me comments like the above are overconfident in the worst ways.


He was using a dozen obvious ChatGPT-isms. So either he was lying about writing it manually (the comforting option), or he actually writes like that, which is what I meant by concerning.

But yeah, there isn't a way to prove it one way or the other, even when it's "obvious".

I saw in some schools they're using systems where you have to type the essay in a web app, and the web app analyzes your keystrokes to determine if you're human.
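For illustration, a toy version of such a keystroke check: the idea being that human typing has irregular inter-key timing, while pasted or scripted input arrives at machine-regular intervals. The 10 ms threshold, and the approach itself, are just guesses at how such systems might work:

```python
from statistics import pstdev

def looks_human(keystroke_times_ms: list[float]) -> bool:
    """Flag input whose inter-key intervals are implausibly uniform.

    keystroke_times_ms: timestamps (in ms) of each keypress.
    The 10 ms variability threshold is an arbitrary illustrative value.
    """
    if len(keystroke_times_ms) < 3:
        return False  # too little data to judge
    gaps = [b - a for a, b in zip(keystroke_times_ms, keystroke_times_ms[1:])]
    return pstdev(gaps) > 10.0  # humans vary; scripts barely do
```

Perfectly even timestamps like `[0, 50, 100, 150]` would be rejected, while jittery human-like timing passes.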


Thanks for sharing. Speaking of keystroke analysis, have you read "Fall" by Neal Stephenson? A fun read; there is a generalization of this idea therein.


> Maybe there can be a dedicated 'flag botspam' button?

We already have flagging and downvoting?


Abusing the flag button by reporting LLM generated posts and comments (which are not breaking any current guidelines) seems like a good way to get your flags ignored.


Flagging isn’t only in case of breaking the guidelines. From the FAQ:

What does [flagged] mean?

Users flagged the post as breaking the guidelines or otherwise not belonging on HN.

In other words, submissions get flagged that users believe don’t belong on HN. LLM-written submissions can be one such case.


"Not belonging on HN" is an open invitation to flag anything someone disagrees with. Many posts are flagged simply because they express an unpopular opinion.

Community moderation won't fix this problem. It can only be mitigated if the site owners invest significant resources in addressing it. And judging by how little YC actually invests in HN, I wouldn't hold my breath. This website will succumb to this problem just like most others.


https://news.ycombinator.com/item?id=47290841

It is against the rules though


I would be worried the reason for the flag wasn't _immediately_ obvious. Maybe if there was a drop-down for the rule being violated it would help.


https://news.ycombinator.com/item?id=47261561 seems like a better source for the policy.


What a bizarre way to run a community. The guidelines make no mention of this "rule," does dang not have the ability to edit them?


It’s only going to get harder as people continue to model their writing on LLM style.


You're absolutely right.


You know it's bad when reading "you're absolutely right..." causes you to oscillate between wanting to laugh and also violently destroy the computer.


You are viewing this through exactly the right lens. But here is the kicker..


I laughed so hard. It has been a long time. Thanks!


I guess it's been fun but the internet is well and truly dead

If not already, then soon


Something we need to remember is that AI was trained on every public internet comment, the vast majority of which are legit terrible. The biggest tell that someone is using AI is having multiple paragraphs saying the same point over and over again. Even trolls are more succinct.


Huh, this is what specifically drove me to complain about LLM-generated tickets at work - multiple paragraphs rewording and emphasizing the same point, all of which was topically relevant, but not necessary.

(i.e. it was obvious in the first place, think along the lines of a ticket about a screen loading slowly, and then multiple paragraphs explaining the benefits of faster-loading screens.)


Dammit, am I going to get banned for rambling?


In some fraction of cases, it's really obvious.

I would argue that those cases are really the ones that cause an LLM-specific harm, i.e., which make people feel like they aren't exclusively among fellow humans.

If someone posts something that doesn't clearly read LLM-ish, but is otherwise terrible, it's not really different from if the same terrible thing had been written by hand.

I don't think anyone who objects to LLM comments is really demanding a super-low false negative rate. Just get rid of the zero-effort stuff. For example, recently I've seen a lot of comments from new accounts that are just sycophantic towards TFA and try to highlight / summarize a specific idea or two, but don't really demonstrate any original thought (just, like, basic reading comprehension and an ability to express agreement). And they'll take a paragraph to do so, where a human with the same level of interest in the material might just say "good post" (granted, there's an argument to be made for excluding that, too).


Sorry, updated my original comment—I meant to qualify it to only those cases where it's blatantly obvious. Obviously a lot of ambiguous comments will slip through as a result, but I agree with you that false negatives are better than false positives.


Your comments use em dashes. Many would claim those are vastly overrepresented in AI language and thus an account overly using them is blatantly AI.

I don't think your account is AI just by these few comments, but I would like to point out that most rubrics one might use to determine what is obviously AI might end up including the way you talk.

If there was a truly accurate tell, some algorithm you could feed a few sentences in and it could tell you "yep, this is 100% AI", then yeah sure use that. I don't know you could realistically build that machine, especially when it comes to the generation of text.


For what it's worth, there are modern LLM detectors with extremely low false-positive rates. The tech has advanced quite a bit since the ZeroGPT days. Personally I've gotten very good results from Pangram Labs. Still can't directly ban people though because false positives are always possible.


Are they great at detecting normal prompts that don't try to make the LLM speak non-LLM-ishly? If you make the LLM not use em dashes, "it's not; it's" phrases and similar things, and if you make it make a few mistakes here and there, would it still be detected? My point is that if people aren't trying to hide their LLM use, it might work, otherwise it probably wouldn't. How would a detector tool work against output where the prompt tells the LLM to alter the way it writes? Or if the LLM output is being modified by another LLM specifically designed to mimic certain styles?

Like, why would my comment (or yours, or any other comment) pass or fail the LLM check if I/you/someone else used specific prompts or another LLM to edit the output? It seems like these tools would work on 99.9% of the outputs, but those outputs likely weren't created in an adversarial way.


Is that false-positive rate from your own testing, or the author's claims? What is the source of ground truth?


I will never, ever forgive these techbros for ruining em dashes. I will also never stop using them -- they are a permanent part of my writing style -- no matter the personal consequences.


> Your comments use em dashes. Many would claim those are vastly overrepresented in AI language and thus an account overly using them are blatantly AI.

I've always found this funny. Doesn't macOS' default text substitution enable (annoying to me) things like em-dash, smart quotes, etc?


Can you show an example of "blatantly obvious"?



Oof. Some of those seemed reasonable at first. Ex: CloakHQ's comment on Compaq/DEC...

....until you start scrolling down the page and it becomes screamingly obvious that everything it says comes from the same template.

Maybe the problem isn't just that AI produces gobs of useless crap. Maybe what's worse is that it can produce even more mediocre crap that crowds out the good?

All oatmeal, no steak, leads to "starvation" by poor nutrition.


Can use AI to detect that


People accuse everything of being LLM generated these days. That'd be a tough rule to enforce.


Do this with submissions, too. Or at least put some indicator that it's AI generated.


I am more annoyed by the anti-AI luddites filling the comments with low value complaints than I am by quality content written partially by an LLM.

Those low value complaints add nothing to the conversation, and the content didn't make it to the front page because it was bad. If the sole objection is "AI bad", keep it to yourself...it's boring.


In every single article's comments now, there's always someone coming out of the woodwork to post "This article is written by LLM." These comments are about as useless as "The website's color scheme is annoying" and "The website breaks the [back button | scrollbar]." (which, by the way, are not allowed per the HN guidelines[1])

If anything should be banned, it's low-effort "This is AI" commentary. It adds absolute zero to the conversation.

1: https://news.ycombinator.com/newsguidelines.html

    Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
I'd argue that whether or not the article (or reply) was written by AI is a tangential annoyance at this point.


I have commented once or twice on articles being AI generated. I don't post them when I think the writer used AI to clean up some text; I only add them when there are paragraphs of meaningless or incorrect content.

Formats, name collisions or back-button breakage are tangential to the content of the article. Being AI generated isn't. And it does add to the overall HN conversation by making it easier to focus on meaningful content and not AI generated text.

Basically, if the writer didn't do a good job checking and understanding the content we shouldn't bother to either.


I very much agree.

The number of comments I see complaining about "it's not this, it's that" and other "LLMisms" definitely frustrates me more than the original content.


It's much more than a "tangential annoyance" and it adds a lot to the conversation--among other things, it establishes a norm that AI-generated blogspam is, well, spam and unwelcome.

Blogging, sharing blog posts, reading them, commenting on them--these are all acts of human communication. Farming any of these steps out to an LLM completely breaks down the social contract involved in participating in an online forum like this. What's the point?

It's the exact same effect that's playing out in many other areas where LLMs are encroaching: bypassing the "human effort" step has negative side effects that people who are only looking at the output are ignoring.

I actually find your opinion so infuriating that it's taking all my composure to not reply with something nastier. If you guys want to spend your time reading shitty LLM spam posts with shitty LLM comments, why don't you find another site to do it on instead of destroying this one.


To provide a heads up to others who feel similarly about whether something is worth spending time on: there isn't a problem speculating that something was produced by AI if there are indicators of insufficient human authorship, but that's a big if. If incorrect, such comments themselves become noise.

In its worst form, I've now seen many times in other communities users claiming submissions are AI for things that are provably not, merely to dismiss points of view the poster disagrees with by invoking calls to action from knee-jerk voters who have a disdain for generative AI. I've also seen it from users who, I suspect, feel intimidated by artwork from established traditional artists.

Thankfully on HN it hasn't reached that level but I have seen some here for instance still think use of em dashes with no surrounding spaces is some definitive proof by pointing to a style guide, without realizing other established style guides have always stated to omit the spaces (eg: Chicago Manual of Style). This just leads to falsely confident assessments and more unnecessary comment chains responding to them.

What one hopes for with curated communities is that people have discriminating taste at the submission and voting level. In my own case I'm looking for an experience from those who have seen a lot of things, only find particular things compelling, and are eager to share them. Compare that to a submission that reaches the front page, say popular programming language docs, that just provides another basis for rehashed discussion (and, cynically, the poster knows such generalized submissions do this and grow karma).


> it establishes a norm that AI-generated blogspam is, well, spam and unwelcome.

It is welcome though. Being on the front page regularly is evidence that people enjoy it or find it informative.

You may feel that others shouldn't be ALLOWED to enjoy it, but that's just your opinion and is almost always tangential to the actual topic.

Worse, you seem to believe that it needs to be labeled to help you identify it. Why? If it's good enough that you need help to spot it, then it's obviously of sufficiently high quality.


> Being on the front page regularly is evidence that people enjoy it or find it informative.

What makes you think that it's people who get it to the front page anymore? Or that most people aren't simply fooled by technology designed to mimic humans?

> Worse, you seem to believe that it needs to be labeled to help you identify it. Why?

Why not? Would adding a label and providing filtering capabilities hurt anyone else's experience?

Some people object to this content based on principle, not on its quality, or on how closely it resembles content authored by humans.


> Some people object to this content based on principle, not on its quality, or on how closely it resembles content authored by humans.

It's OK to have political opinions, even the ones that I disagree with. It's not OK to ruin every unrelated conversation ranting about them. Some folks around here have turned into that one uncle nobody likes inviting around to dinner anymore.

If a label would stop that I might be in favor of it. However, I'm certain it would instead be used to remove otherwise high quality content and ultimately reduce the utility of this place.


I agree with you, but...

> Blogging, sharing blog posts, reading them, commenting on them--these are all acts of human communication.

Not anymore. Bots are now the majority of producers and consumers of all content on the internet. The social contract you mention has been broken for years, and this new technology has further cemented that.

Those of us who value communication with humans will have to find other platforms where content authorship is strictly regulated, or, at the very least, where tools are provided to somewhat reliably filter out machine-generated content. Or retreat from public spaces altogether.

FWIW I have very little hope that this issue will be addressed on HN, considering [1].

[1]: https://www.ycombinator.com/companies/industry/ai


It's in a lot of people's interest to keep platforms like HN free of LLM spam, frankly. It's in our interest as people who want to keep our discussion site for actual human discussion (though from the other comments in this thread, this sentiment isn't universally shared, god knows why). It's also in the interest of AI companies since if they destroy internet spaces like this they lose valuable future training data. So I'm (perhaps foolishly) optimistic--or at least not completely pessimistic--that there's hope yet for us.

Incidentally I foresee similar issues to this training data pollution arising with LLM coding taking over software engineering--which it inevitably is going to continue to do, at least in the short term. If LLMs torpedo human engineering, who is going to create the new infrastructure (tools, frameworks, programming languages, etc) that LLMs are making such good use of today? It feels to me like we risk technological stagnation as our collective skills atrophy and the market value of our skills plummets. Kind of like airplane pilots forgetting how to debug planes or handle edge cases because they just rely on autopilot all the time.


Like you say, some people are interested in keeping the discussion for humans only. Although we can't really know whether any opinion expressed here is coming from a human or not, including this one.

As for "AI" companies, their only interest is increasing their valuation. Historically speaking, most companies prioritize short-term profits, but during a bull market the incentives are even more skewed towards it. So poisoning the well of training data is seen as a future problem for someone else to figure out, or not. In the meantime, carpe pecuniam.

> If LLMs torpedo human engineering, who is going to create the new infrastructure (tools, frameworks, programming languages, etc) that LLMs are making such good use of today?

LLMs, of course. :) I don't think the people building these tools have given these topics any serious thought. Whatever concerns they claim to have, regarding safety and otherwise, are merely performative.


Hey, I'm not a fan of LLM slop articles and blogspam either and if I could hold back the tide, I'd try to. But I'm just saying that pointing it out each and every time is just going to become its own form of spam. We're quickly entering a world where 99+% of what is written online, be it blogs, amateur news, or actual professional journalism, is LLM generated. You hate it, I hate it, but it's coming. The state of journalism is already in shambles and line must go up, so "everything written by AI" is sadly inevitable. Posting every time to remind people of that? I mean by the end of 2026 you might as well have a bot commenting on every article that it's probably LLM generated. I argue it adds no signal to the conversation.


I still think it has strong normative value. Maybe at some point when norms have become firmly established these comments will be pointless and spammy but I don't think we're anywhere close to that point yet.

A lot of blogging is essentially self-expression and that stuff won't be taken over by LLMs (it defeats the whole point). Other blogging is done with some kind of sales/promotional/brand purpose and the extent to which LLMs will dominate this will depend on how we as a society react to it (see the AI art battles) since if people react negatively to it it becomes counterproductive.


Perhaps it would be better to have comments that praise apparently human-written text?

I understand where you're coming from. I've been posting complaints about LLM-written articles almost as long as I've been here. (My analysis is definitely more complex than a search for blacklisted Unicode characters or words.)

But I've let off on that, partly because I agree the guideline is meant to encompass that kind of criticism (same with my comments about initial page content not rendering with JavaScript, honestly) but largely because it just seems futile. It's better material for a blog post than HN comments (and would be less repetitive).


I was thinking the same thing but didn't want to post my complaint about other commenters because I think that's against the rules too?


I generally don't, but I think it's de facto allowed in these "meta HN rules discussion" threads.


I think a steelman interpretation of the parent is that entirely LLM-generated projects should be disallowed. There's a lot of submissions on Show HN that seem completely vibe-coded to me (like, including the README), which is a very different situation IMO from someone who simply used Claude to write some—or even most—of the code. When even the human-facing portion of a submission is LLM-generated, it bothers a lot of people (myself included).


Agreed. Having some level of human input makes a submission at least meaningful. If the entire repo and all text is generated by an LLM, does it really matter if the human is the one posting the link? It's functionally indistinguishable from automated spam.


> I am more annoyed by the anti-AI luddites filling the comments with low value complaints than I am by quality content written partially by an LLM.

Low value content is still content, written by a human being with a specific point. I would argue that LLM written content is even worse than that, because what value does it add when you or I can just ask the LLM itself for it? Its existence is solely that of regurgitation.


Without engaging in more ad hominems, which are wrong by the way, what's the issue with labeling AI content as what it is?


It's one thing to have an AI-label. It's another to completely derail a conversation with a likely false AI accusation.

Example: https://news.ycombinator.com/item?id=47122272

You have to scroll a few pages before the actual article is discussed.

"This was LLM generated" is likely to float to the top of an article. That's where the best comments about the article deserve to go, not an off topic comment. An AI label should be much less obtrusive.


> You have to scroll a few pages before the actual article is discussed.

Or you could collapse the one thread containing those comments.


Join me and downvote them relentlessly.


> what's the issue with labeling AI content with what it is

1. Your guess is not always correct

2. Over time, AI content will get harder to guess until it is indistinguishable from human content

3. You're not helping anyone by posting "this is AI". Maybe it is, maybe it isn't, but it's not helpful. It just adds to the noise.


I'm not suggesting anyone post "this is AI", the submitter should vouch that it's AI or eventually get banned for spamming.

Ideally there could be a label on the submission that states it's AI


> Ideally there could be a label on the submission that states it's AI

A lot of people tried for #politics and that didn't work. I doubt you'll get #ai.


The guidelines haven't even been updated to say that AI generated posts and submissions aren't permitted, even though it's been the policy for a couple of years now if one searches for postings by the moderators. So outsiders and new HN users have no reason to know that it's not allowed. I'm sure there are reasons for it, but the inaction is all very mysterious from an outsider perspective.


This obviously should have been done years ago. @dang is there a reason it hasn't?


https://news.ycombinator.com/newsguidelines.html#generated

(@dang doesn't work and it took me a long time to find this comment again so I could reply!)


..so updating the guidelines is beyond the pale and suggesting it is downvote worthy?

How very interesting.


"Please don't comment about the voting on comments. It never does any good, and it makes boring reading."

https://news.ycombinator.com/newsguidelines.html


I disagree with this policy.

Some people can really benefit from using LLMs to help them write. E.g. non-native speakers.

LLM-assisted writing doesn't have to be low effort; it can help people express themselves better in many cases. I'd argue that someone who spent their time doing multiple passes with an LLM to get their phrasing just right has taken obviously more care than the majority of people on HN take before commenting.

And if you don't like the way something is written? Just down vote it. That's true whether or not it's partially/wholly written by an LLM.


Aren't down votes on this forum restricted to 500+ karma? And how would those compare to flagging? I'd hate for people under 500 karma to think they need to flag a post in order to have it get any attention by moderation. And, with your idea that LLMs help folks write, wouldn't that make the community worse for them?

And what about users like this, whose comment are very much entirely LLM generated and possibly even a bot? https://news.ycombinator.com/threads?id=BelVisgarra


I should clarify — I disagree with disallowing any comments that used LLMs in the writing. I think comments should be judged on their quality, not on how they were written.

I might agree (don't know) with the idea of limiting new accounts more heavily.


> I disagree with disallowing any comments that used LLMs in the writing.

I think the point here is that the community doesn't want to read AI slop, not that using an LLM to clean up your writing contains some inherent evil that prevents quality.

I don't want to accuse you of strawmanning the argument, but honestly, where did you ever see someone advocating the latter?


> LLM-assisted-writing doesn't have to be low effort, it can help people express themselves better in many cases.

Hard disagree. I have been learning another language and wouldn’t pretend to write posts after an LLM rewrote it because it is literally lower effort than learning the language correctly.

Like definitionally, you are using a machine to offload effort. I don’t know how you could claim that is not “low effort” when that’s the point of the tool.


I wasn't talking about someone learning the language and using this instead of learning it.

There are a lot of people who understand English fairly well, but are not actively learning the language, are not native speakers, and can use LLMs to catch grammar mistakes that they otherwise wouldn't notice. Or catch small nuances in what they are saying, small implications that could otherwise go unnoticed.

In general, I push back on people saying "I can't find a good/legitimate use for this technology, therefore there are no good/legitimate uses for it".


> In general, I push back on people saying "I can't find a good/legitimate use for this technology, therefore there are no good/legitimate uses for it".

Is that genuinely what you think most of the complaints on HN are saying?

IMNSHO that's an absurd statement to make about the other side of the argument. I'm still giving the benefit of the doubt here but jeeze, this really smells like a strawman.

There are dozens of whole classes of criticism of these tools that I see made on HN, and none of them fall into the category you described.

Ex: Saying "juniors who rely on Copilot/Claude/etc. become lazy and can create low-quality code without learning how to do better" is night and day different from what you're saying. And that's a criticism that must be addressed, or the entire global software industry will destroy itself in two generations.

Surely the difference between that and "we don't want anybody to use Grammarly in their subs that show up here" is completely obvious, yes?


Absolutely this:

> Some people can really benefit from using LLMs to help them write. E.g. non-native speakers.


I think all submissions to HN should be submitted via snail-mail, and must be handwritten. That would solve the problem.

/heavy sarcasm

That being said, my mother used to insist on hand-written cover letters from job applicants. Her rationale: it takes effort, so it weeds out all the applications from people who are just randomly spraying out applications for jobs they are not qualified for.


Unfortunately I don’t think that it would solve the problem: https://www.google.com/search?q=handwritten+mail+service&udm...


First interview question is to submit a handwriting sample.


I taught myself to type because most people can't read my handwriting.

I would be so screwed. :-(


Marking the sarcasm here really ruins your humour.


Other than this probably being challenging to enforce fairly, I think I agree that if you had strong proof of an account largely or completely posting comments/stories/whatever adulterated by an LLM, that is probably ban-worthy, like you said.


I think you need (at least) one exception to that rule. We have many people here whose first language is not English, and this is an English-only forum. For at least some of those people, an AI translation may give better clarity than their own attempt at writing in English.

So I would propose, in the ideal world where we could perfectly enforce the rules we chose, that the rule would be "AI for translation only". If it wrote your content, your comment is gone. If it translated content that you wrote, your comment is still welcome.


How ironic, a comment advocating for banning LLM comments using em dashes

What if someone used an LLM to just translate?


When I read comments like this, I think about the average Joe who says: "Most people are terrible drivers." Then, someone asks them: "Are you a terrible driver?" They respond: "Of course not. I am an excellent driver." A few people roll their eyes.

> worthy of an instant ban

First, it is not always possible to identify an LLM-generated comment. There are too many false positives. Imagine if this system were implemented, and one of your comments was identified as LLM-generated and you were instantly banned. How would you feel about it?


Maybe we need a reverse Turing test and award: humans write things that are indistinguishable from AI slop.

I have no idea what that could be useful for, but since the Turing test is now essentially beaten, maybe its usefulness has come and gone too.

> Imagine if this system was implemented, and one of your comments was identified as LLM-generated and you were instantly banned. How would you feel about it?

It sounds like a fast, efficient, inexpensive and foolproof recipe for destroying a community. Let's use that as a future test: anyone who advocates for it is undeniably trying to destroy HN, so they get downvoted to 1 karma and permanently blocked from voting on anything else.


Another thing I have raised multiple times on HN: I had a roommate a few years ago who was a non-native English speaker doing research as a post-doc. He would use ChatGPT to improve his writing in scientific papers. He would produce the first draft, then discuss with ChatGPT how to improve the grammar. At the time, I thought it was genius. He said it acted like an English tutor for him. When you see the reactionary anti-LLM comments on HN, never once do they mention LLM-assisted writing -- only all-human or all-LLM.


For now there is already a pretty effective mechanism in place, downvote and/or flag those comments that you think are across the line in that sense.

But in principle I agree with you; the rule for me is "if it wasn't worth your time to write, then it certainly isn't worth 1000x other people's time to read".


Exactly. If your LLM wrote it, then my LLM can read it. I don't want to.


God help us if we get to the point where we need an LLM agent to do the reading and filtering of all our social content for us. I am completely certain that is a downward spiral that ends with the collapse of our society and I give it 50/50 odds for killing off the entire species.


This is a SNR discussion, the N has just gone up an order of magnitude and may well go up multiple orders of magnitude more to the point that communication between people will be drowned out by non-people attempting to communicate with people. It's the spam problem all over again.

I predict the outcome will be roughly the same: a more or less working set of LLM filters that allow you to filter out LLM output at the price of some false positives and some false negatives. Coming soon to a browser near you: the Bye AI Plugin.


Slightly surprised to learn Master and Commander is “lowbrow”—is it just because it’s not an art film or whatever? Usually I’d expect Marvel films to be described that way (unfairly imo, when it comes to the Phase One batch at least)…


It's clearly in a different category from the "highbrow" examples like Solaris, just by virtue of being entertaining to a broad audience. In contrast Solaris is the kind of movie where there's a five minute unbroken scene that's just a guy driving in traffic and thinking about his life. (Like the author, I like them both!)


The ‘brow’ standards have dropped significantly, in a process Fussell has described as the general proletarianization of culture.

For a long time films that would be considered niche and arthouse were middlebrow, because film itself was at best a middlebrow medium.

To people still concerned with the various brows, Marvel films are below low. They are a sign of a debased and infantile film culture that caters to childish tastes and merchandising, not art.


Years ago I was surprised to read a critic who described Branagh's Hamlet as middlebrow. I mean, Henry V, sure - that only even qualifies as middlebrow because it's Shakespeare. I would assume it was lowbrow at the time it was written. I love the prologue, though.


Yeah I'd say the critic was most likely affirming the idea that film is a middlebrow medium. Seeing Hamlet at the Globe is high brow, but seeing Hamlet as the cinema is middlebrow.


The Globe is full of tourists, so it's multibrow at best.

Bourdieu's take was that the working classes like simple sentimental art, the middle classes like aspirational, middlebrow art because they feel they have something to prove, and the upper classes often prefer kitsch.

Although sometimes it's high status middlebrow kitsch, such as a lot of opera and light classical music, which is more sentimental than technical.

Most opera lovers have no idea who Luigi Nono was, and would care less if they did know.

https://www.youtube.com/watch?v=joteZTLpHdE

Highbrow art is the exclusive niche domain of intellectuals and academics UNLESS it's been commodified into a Veblen good, like contemporary art.


> Although sometimes it's high status middlebrow kitsch, such as a lot of opera and light classical music, which is more sentimental than technical.

Are you sure it's more sentimental than technical? Like, with-your-ears sure?

Note that it took something like 140 years for someone to write a tempo fugue using a piano technique in Chopin's 4th Ballade. That is to say-- sentimental composers are as good at hiding their technique as audiences are at missing it.


The film is by Peter Weir, who is capable of being quite highbrow. But, it stars Rusty, who is capable of being quite low brow. It depends. Are you not entertained?

The books are a bit highbrow and lowbrow. O'Brian did a fantastic highbrow biography of Joseph Banks and another of Picasso, but this series is lowbrow. Not as lowbrow as his translation of "Papillon". In his youth he was fêted as a quite highbrow up-and-coming thing. He wasn't Irish; he was an astute fake who left his first wife and reinvented himself in France under an assumed name he came to personify. John le Carré is similar.

The books are more highbrow than CS Forester. I upset a Hornblower fan(atic) by averring Bush has a homoerotic fixation on Hornblower.

De gustibus...


Marvel films are commercial tripe. Pure commodity fetishism and cheap spectacle. Utterly without literary merit.

Master and Commander is pretentious pulp. Real, quality media is obscure, and largely unpalatable to our debased modern sensibilities.


Fun name: seems like a reference to “Magit” both syntactically (being a portmanteau of “Magit” and “jujutsu”) and semantically (majutsu meaning “magic” in Japanese).


Minor nitpick, but I didn’t think テーマ (tēma, “theme”) was an abbreviation—Jisho and Wiktionary (for what they’re worth) say it’s from German Thema.


In that case, wouldn't you be happy to get more calls, so that the up-front "training" cost is worth it? Naïvely I'd expect that every additional call would _decrease_ the amortized price per call.


By the way, I noticed that the Japanese version has italicized kanji and kana. Is that normal? My Japanese is poor, but I've heard that italics are rarely used in Japanese. Is that not actually the case in practice?


You're right. Italic is rarely used in Japanese, and it seems the English styling was inadvertently applied to the Japanese text as well. I'll fix it right away.


To be fair, Japanese headlines use a specific writing style that is much more compressed than normal text, like how English newspaper headlines drop words like “a” or “is” to save space.


The apples one is LLM nonsense: the left example doesn’t include any code for the loop, whereas the streams version actually is iterating over a collection.

Regardless, FP-style code isn’t “shiny new stuff”—it’s been around for decades in languages like Lisp or Haskell. Functional programming is just as theoretically “fundamental” as imperative programming. (Not to mention that, these days, not even C corresponds that closely to what’s actually going on in hardware.)

