
> If each of us individually or as corporations should not be in the business of deciding what is "evil", who should be in that business?

This is easy imo. Two methods:

1. The law. It should not be legal for the US Govt to murder people at will. If it is legal, then of course they'll use tools to make it easier. Maybe AI, maybe Clippy. If they can't use AI, they'll fall back on some other way of doing it, as they've already been doing for several years.

2. Voting. For representatives who actually represent us and have our interests in mind rather than their own corrupt interests. And voting with our wallets against companies that do legal but morally bankrupt things.

Of course we're failing both of these hard right now. But imo the answer is not to give up and let corporations make the rules.

In other words, if it were legal for a normal citizen to murder anyone they wanted, of course they'd use Google Maps to help them do it. We don't put restrictions on how people can use Google Maps. Instead we've made murder illegal. We should be doing the same thing here.




It's illegal to drive drunk, or to read your cell phone and hit strangers head-on.

Nevertheless, it wasn't lawmakers, it was carmakers who innovated to build in airbags, seatbelts, lane assist, and so on, under the theory that though it's illegal, bad things are done anyway, and guardrails still matter.

Colloquialism: "belt and suspenders".

Many, like Volvo, go above and beyond the requirements to make their vehicles safer, and then, having demonstrated these guardrails, some become law as well (even as other makers in the industry kick and scream about being forced to comply, and riders rebel against buckling up).

As we haven't resolved this standoff in a century, we are unlikely to resolve it at the pace the expansion of AI demands. In this scenario, Anthropic is Volvo.


Exactly zero of those account for an individual's or company's ability to live by their own moral code.

And this AI software is not a mere static object like a hammer that can be handed off to a customer, where what it's used for, building a house or bashing a living skull, is their business.

This is a system that must be constantly maintained by its builders.

Moreover, even if we use your standard, the law, it has already been decided in Anthropic's favor.

What you require is that Anthropic actively participate in activities that they consider abhorrent and/or unwise. SCOTUS has already ruled that a business cannot even be required to sell a cake to someone if it does not like the intended purpose (in that case, a celebration of a gay wedding).


> even if we use your standard, the law, it has already been decided in Anthropic's favor.

I support Anthropic here. They had a deal with the Govt, and the Govt bullied them. That should not be allowed, and Anthropic is suing, which makes sense to me. Anthropic should be allowed to set any terms of use for their product that they want, and gain or lose business based on those terms. That's fine.

I'm saying that the failure is actually upstream. It should not be possible for Anthropic's AI to be used to mass-surveil or murder people, because those things should be illegal by law, and the govt should not be allowed to do them and should not be doing them. Somehow it isn't this way, though.

So now that we find ourselves in this failed state, we have to rely on Anthropic to be "the law": to identify what's "evil" and disallow it. I'm saying that's out of scope for a tech company and they shouldn't be expected to do that. They should only be in the business of making good tech, and then be free to let it be used by anyone for any purpose that the law allows.

This also means that if it's illegal to share information on how to build a bomb without AI, then it should be illegal for Claude to share that information with AI. So Anthropic does need to make sure they're not breaking the law themselves as well.


Ah, good, we generally agree.

For sure, Anthropic should NOT have been forced to decide the ethics of deploying their tech.

Nevertheless, they should always be considering the ethics of their own creations and actions, and it seems they are — as soon as they got bullied by a failing regime, they had the right answer: 'no, that is not ethical and we won't allow it with our products'.

The problem is that the law only very roughly captures what is right and just, so there are many things that are legal but unethical, and at the same time many things that are ethical but illegal. So we can't entirely outsource our personal or corporate ethics to the law.



