We see this in UK prisons too. Because the pay is so low, the work is dangerous, and conditions are kinda shit, there's now an increasing number of hybristophiliacs (people attracted to criminals) being found within the service, who start up romantic relationships with prisoners and corrupt it from the inside. It's harder to weed these people out when you're desperate for staff.
The solution is to pay these positions well enough to attract people who genuinely believe in the profession. But society optimises away from doing that because there's no obvious ROI outside of making running costs cheaper.
Great, and I do agree, we know who to pay more (and who less), but how do we get there democratically? Should we vote for the party that says "we support LGBT and an infinite influx of cheap labor" or the "we support the military and the free market" party?
I think it's one of those things we don't fix because the market conditions of democracy can never justify spending that money on prisons, and most families of people who need care see limited value in spending more for an experience they don't receive themselves. Improved spending on prisons only works in Nordic societies because of laws that would never pass elsewhere, mostly allowing ex-convicts not to disclose their past (with some exemptions for sensitive jobs). You need that because the ROI of spending more on prisons is rehabilitation into society (which saves you the money on re-offending), but most other societies don't accept that, and the maths only works out long term.
So we'll get the robots instead, and the cries in response to the horrors of malfunction will not be heard, because the victims are politically weak. I only hope we never do this in education.
It is a little frustrating how those without this perspective react with shock when they discover that some of us have gone no contact with one of our parents. I was chatting to a Muslim street preacher the other day and he told me that respect for your parents is a pillar of the faith, so that and my inability to grow a beard means I could never pick Islam.
> Anthropic’s enemies in the Pentagon, who had, months prior, convinced Trump that Anthropic was “woke” and should be banned for government use.
That people in government speak like this is utterly absurd. The quote from Donald Trump's follow-up tweet on t'social is considerably worse.
This was all due to Anthropic not wanting to take on a military contract, right? Or is it suggested it's more to do with Mythos? But why would it be, if they never released it.
> This was all due to Anthropic not wanting to take on a military contract, right?
No, they already had a contract (since 2024, revisited/renewed by the Trump admin in mid-2025) which included military usage. That contract, though, had some language about what Claude couldn't be used for, ostensibly because Anthropic was nervous about accuracy in lethal contexts. Hegseth and others were unhappy with the restrictions and wanted to just redo the contract to remove them. Anthropic didn't want that, at least with current models. Then everything blew up. Zvi has some great writeups with more than you probably want to know.
You have to distinguish between political rhetoric (“woke”) and the substance of the dispute.
The substance: traditionally, defense contracts don’t have clauses in them limiting what the military can do with the acquired technology. If Boeing or Lockheed Martin or Northrop Grumman sell a missile system to the Pentagon, they don’t try to impose contractual limits on who the Pentagon can fire the missiles at. Now, for some types of contracts - e.g. contracts to provide personnel - the Pentagon is used to contractual terms limiting uses - but not for hardware or software used in weapons systems / military planning / etc.
Along comes Anthropic, who argue AI is a fundamentally different technology, to which the old rules shouldn’t apply - they want contractual terms prohibiting certain uses (autonomous weapon systems without a human in the loop; domestic mass surveillance). The Biden admin buys the argument and agrees to those novel contractual terms. The Trump admin takes over, objects to them, and demands they be renegotiated. I think it was primarily a matter of principle and power - “software vendors don’t get to tell us what we can and can’t do” - rather than some immediate plan to do things the contract prohibits.
OpenAI negotiated a contract which replicated those terms - but with the proviso that the terms only apply insofar as they reiterate existing legal limits. Anthropic objected to that as a meaningless fudge - “we promise not to do X if X is illegal” is very weak, especially when contracting with the government - Congress could change the law tomorrow, or the government’s lawyers could change their interpretation of it, or an appellate court decision could impose a new understanding of it.
> Congress could change the law tomorrow, or the government’s lawyers could change their interpretation of it, or an appellate court decision could impose a new understanding of it.
And then it becomes legal. It’s not an empty argument, it simply means “someone higher than you took an initiative”.
I think in practice contracts to provide civilian personnel to the Pentagon contain clauses limiting the nature and location of the work - the Pentagon can’t contract for a clerical assistant in DC and then demand they go to Iraq to provide physical security - it violates the nature of the agreed work and the agreed location.
But contracts for personnel generally don’t contain restrictions on use beyond that. If the clerical assistant for DC is asked to provide clerical help to a military planning team who are planning an assault on Chicago, they (and their employer) don’t have legal grounds to refuse. If you are contracted to provide clerical assistance to military planners, you can’t legally say “Baghdad is fine, but Chicago is a no”. Saying that is a breach of contract - unless the courts rule that planning the assault was itself illegal, and I doubt the current SCOTUS majority would.
> in practice contracts to provide civilian personnel to the Pentagon contain clauses limiting the nature and location of the work
Which is what Anthropic was seeking to do. To be clear, I’m unsympathetic to Anthropic’s automated kill-decision objection. But I’m very sympathetic to their no-domestic-use requirement. And that aligns cleanly with precedent for civilian personnel having limits on “the nature and location” of their work.
> I’m unsympathetic to Anthropic’s automated kill-decision objection. But I’m very sympathetic to their no-domestic-use requirement.
One inherent issue with this set of choices is that you're OK with being killed on the basis of a decision made by a foreign AI agent.
Sadly, such a loophole already exists[0] and means that your own administration wouldn't be spying on or executing you with AI itself, but oops - some ally (not even an adversary!) might do it on their behalf.
legally and in practice, they cannot. Even considering the 4th amendment, in a time of war the military can commandeer a service as long as they are compensated.
Congress passing a changed law, and it holding up in court, is how it's supposed to work. The people's reps (with specifics interpreted by the courts) should be the ones that set the standard, as a country, for what types of weapons systems we want to deploy vs. what is immoral. Precedent: nerve agents, landmines, etc.
Honestly, Anthropic's stance feels like an oligarch stance: we have better morals than the American people, so we will decide what weapons systems the military will or won't use.
It's perfectly understandable if they don't want to sell weapons to the government. That is a noble thing. But Anthropic wanted that DoW money and also wanted to determine what is moral vs. not.
Those acts are allowed by Anthropic’s terms - they aren’t domestic mass surveillance, and (to the best of my knowledge) any AI targeting decisions were approved by a human in the loop.
Anthropic’s terms weren’t “don’t do anything illegal”; they were “here are two highly specific things which you aren’t allowed to do, whether they are legal or not”.
do you really think the bombings and kidnappings are new as of 2024? You think what we have been doing in the Middle East and Guantanamo Bay since 2001 is moral?
> If you can't write it down, why would you expect it to be universal and enforceable?
and this is the problem. It used to be the case that if you were smart enough to find an exploit you were also smart enough to realise what would happen if you irresponsibly disclosed it. I guess these tools have made that pattern no longer apply.
From my point of view, they told the kernel security team which is in charge of fixing this. If it’s important for them to tell other people, then it should’ve been written down and further reiterated when they made their report.
The skills to detect code exploits are not the same as the skills to navigate an informal org chart to the satisfaction of an amorphous audience of end users (i.e. us on HN).
That said… as they are a company that supposedly specializes in this field, and is trying to sell a product, I do believe they should do better. Right now, I don’t have much confidence in their product.
and it's your opinion that it doesn't. Shall we continue stating the obvious? We are communicating using glyphs. This language is English. We are on Hacker News. This branch of the conversation is extremely unproductive.
I asked a question and you replied with a statement. Your statement didn’t frame itself as an opinion but as fact.
The hilarious bit is that the idea that they needed to coordinate is clearly broken even in just this example. They did give prior notice to the Linux developers, who issued a patch. And they’re still getting raked over the coals in this comment page by armchair quarterbacks who have decided they needed to coordinate with specific distros. If they’d coordinated with those distros, somebody would have a pet distro that didn’t make the cut and they’d be pissed about that.
There are risks no matter how they do it, and there will be people who are pissed no matter how they do it. Security researchers don’t owe anybody a specific methodology.
you seemed to suggest with your initial statement that any disclosure was acceptable, as people would have been using the exploit prior to the disclosure. I don't think that's a strong argument, given that the people who were using the exploit prior to disclosure have now been joined by people who learned of it as a consequence of the disclosure happening before all the distributions were ready.
So I feel like the argument reduces to "why is it a problem that now anyone could exploit it, if some people were exploiting it already". Which imho isn't a sensible argument, because the issue is clearly the number of people capable of using the exploit for nefarious purposes, which has increased.
Idk why you felt the need to use quotes to wrap something I didn’t say, and that is a pretty uncharitable attempt at reframing my question. If you wanted a quote, here’s what I’d say:
“Because we can’t know if there was exploitation by existing parties who had discovered the vulnerability on their own, there are upsides to disclosing earlier so that affected users can take mitigating steps and review their systems for indicators of compromise. Additionally, the more projects the researchers pull into the loop for coordinated disclosure, the higher the likelihood that they further leak the vulnerability to more attackers.”
Idk why you felt the need to use quotes to wrap something I didn’t say. Despite the fact I didn't say that, it's a much more interesting argument than your original statement implies, and it is unfortunate we didn't start there.
However, the issue is that we cannot know whether the attack space has been broadened or lessened as a consequence of this disclosure, because of how eager it was. If it wasn't eager, then we could be much more comfortable in suggesting that the attack space has probably been reduced.
Given the exploit had been living in the Linux code base undetected for so long in the first place, and given the distributions are the principal attack vector for it, I think it's fair to say that by disclosing prior to the distributions being ready, the researcher has made the situation worse and should reflect on their actions.
… I used quotes to wrap something that I was saying. I even called out that it was something I was saying, as a more accurate variant of what you’d claimed I meant.
and I prefaced my quotes with the statement "So I feel like the argument reduces to". I mean, idk what punctuation I'm supposed to use there that doesn't offend you, but I just figured we can all read words and it was clear that I wasn't saying you said that, but rather that, as I read it, the argument was reducible to that, and I took issue with that potential reduction.
The idea about the available exploit space, and how the actors within it might or might not move, is a much more interesting avenue of conversation, and I thank you for elaborating on your initial comment. <3
I do however feel that it's hard to be confident about whether the attack space has been increased or reduced as a consequence of the eager disclosure. I feel we could make the case either way.
You could try to make that case either way, but as has been pointed out by others all over this thread, the system we've landed on (90/+30) is industry standard after over two and a half decades of experimentation.
Anything else inevitably has worse outcomes for the public good.
Having spent that entire time and then some on both offensive and defensive teams, I assure you longer delays after notification do NOT decrease the overall risk to the public.
There's a reason we've landed where we have as a security community.
it didn't have to be like this. If we had trusted NGOs with strong funding and a track record of independence and integrity, they could sit as a shim between token generation and the application: governments produce identity tokens, applications verify them through the shim, and the shim blocks each side from knowing of the other.
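Roughly something like this toy sketch, assuming stdlib HMAC stands in for the asymmetric or blind signatures a real system would need; every key, function, and attribute name here is invented purely for illustration:

```python
# Toy sketch of a blinding shim between a government token issuer and an
# application. All names are hypothetical; HMAC stands in for real signatures.
import hmac, hashlib, json, secrets

GOV_KEY = b"government-signing-key"  # hypothetical government signing key
NGO_KEY = b"ngo-shim-signing-key"    # hypothetical NGO shim signing key

def gov_issue_token(user_id, attributes):
    """Government issues an identity token; it never learns which app uses it."""
    payload = json.dumps({"user": user_id, "attrs": attributes}, sort_keys=True)
    sig = hmac.new(GOV_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def shim_verify_and_reissue(token, claim):
    """NGO shim checks the government token, then re-issues an anonymous
    attestation of a single claim (e.g. 'over_18'). The app never sees the
    identity; the government never sees the app."""
    expected = hmac.new(GOV_KEY, token["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return None
    attrs = json.loads(token["payload"])["attrs"]
    if not attrs.get(claim):
        return None
    # Re-issue with only the claim and a one-time nonce: no identity carried over.
    blinded = json.dumps({"claim": claim, "nonce": secrets.token_hex(8)})
    sig = hmac.new(NGO_KEY, blinded.encode(), hashlib.sha256).hexdigest()
    return {"attestation": blinded, "sig": sig}

def app_verify(attestation):
    """Application trusts only the NGO's signature; it learns the claim, not the user."""
    expected = hmac.new(NGO_KEY, attestation["attestation"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["sig"])

# government -> shim -> application, with the shim blinding both sides
token = gov_issue_token("alice", {"over_18": True})
print(app_verify(shim_verify_and_reissue(token, "over_18")))  # True
```

The point being the shim only forwards a minimal attestation, so the government never learns which application asked and the application never learns who the user is.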
I think it's specifically the opinionated within current generations who mostly hate it. Fruit celebrity love island shows that plenty of younger people are entirely content to consume its content. It's possible that most of the current generations would as well. Trash TV is extremely popular, and LLMs are well suited to producing that type of content.
Nicki Minaj gets a factor of 10 more listens than a band like Tool. The money will pick the close-to-zero production cost for 9/10ths of the viewers every time, and the platforms will prioritise the many.
I very much doubt a boycott has enough weight in the long run to hold back generated content from taking over most of the bigger spaces. We've already seen this happen in recent years, with staged content mopping up a lot of the most-viewed slots by manipulating potential viewers. Cheap influencer content has similarly squeezed cultivated content's ad revenue through volume and consistency on YouTube.
If you want change then the route would have to be a legal one, not a social movement. Especially since we've mostly forgotten how to do the groundwork for social movements, leaving us all hand-wringing and shouting into the void.
> Someone said it as a joke, but I want AI to be doing my dishes and sorting my laundry while I write books and compose music. I don't want AI writing books and composing music so I have more time to do my dishes and sort my laundry.
Well then we should maybe ask ourselves why reality TV gets more views than well-written work.