Hacker News | war321's comments

Ya love to see it.


Even if it's just open weights and not "true" open source, I'll still give Meta the appreciation of being one of the few big AI companies actually committed to open models. In an ecosystem where groups like Anthropic and OpenAI keep hemming and hawing about safety and the necessity of closed AI systems "for our sake", they stand out among the rest.


To me, the most interesting question is who will attempt to manipulate the models by stuffing them with content, essentially adding "duplicate" content (such as tautologies) to give certain ideas misallocated weight. I don't think an AI model will be able to detect this automatically unless it were truly intelligent; instead, it would need to be trained by competent humans.

And so the models with mechanisms for curating and preventing such misapplied weighting, and the organizations and individuals who accurately adjust the models, will in the end be the winners: the ones whose models have been honed closest to the truth.


Why would OpenAI's or Anthropic's approach be safer? Are people able to remove all the guardrails on the Llama models?



Humanity is so fortunate this "guardrails" mentality didn't catch on when we started publishing books. While too close for comfort, we got twice lucky that computing wasn't hampered by this mentality either.

This time, humanity narrowly averted complete disaster thanks to the huge efforts and resources of a small number of people.

I wonder if we are witnessing the end of humanity's open knowledge and compute (at least until we pass through a neo dark age and reach the next age of enlightenment).

Whether it'll be due to profit or control, it looks like humanity is poised to get fucked.


[flagged]


The EU hasn't made it hard to release models (yet). The EU has made it hard to train models on EU data. Meta has responded by blocking access to the models trained on non-EU data as a form of leverage/retribution. This is explained by your own reference.


They're not safer. The claim is that OpenAI will enforce guard rails and take steps to ensure model outputs and prompts are responsible... but only a fool would take them at their word.


Yeah... and Facebook said they would enforce censorship on their platforms to ensure content safety... that didn't turn out so well. Now they just censor anything remotely controversial, such as World War 2 historical facts or even just slightly offensive wording.


You're really just arguing about the tuning. I get that it's annoying as a user but as a moderator going into it with the mentality that any post is expendable and bringing down the banhammer on everything near the line keeps things civil. HN does that too with the no flame-bait rule.


HN moderation is quite poor and very subjective. The guidelines are not the site rules, the rules are made up on the spot.

HN censors too. Facebook just does it automatically, on a huge scale, with no reasoning behind each act of censorship.

Censorship is just tuning out the people or things you don't want. Censorship of your own content as a user is extremely annoying, and Facebook's censorship is quite unethical. It doesn't protect the safety of the users; it protects the safety of the business.

Also, in lots of instances Facebook censors things that are objectively not offensive. YouTube too. Safety for their brand.


Censorship isn't moderation.


The banhammer can quickly become a tool of net negative though, when actual facts are being repressed/censored.


Unfortunately, there are a number of AI safety people that are still crowing about how AI models need to be locked down, with some of them loudly pivoting to talking about how open source models aid China.

Plus there's still the spectre of SB-1047 hanging around.


They're actually updating their license to allow Llama outputs for training!

https://x.com/AIatMeta/status/1815766335219249513


Hasn't really happened with PyTorch or any of their other open sourced releases tbh.


They've been working on AI for a good while now. Open source especially is something they've championed since the mid-2010s at least, with things like PyTorch, GraphQL, and React. It's not something they've suddenly pivoted to since ChatGPT arrived in 2022.


BD's success (or lack thereof) with Atlas remains to be seen, given that the electrically actuated one is so recent and they're backed by Hyundai, who have shown a desire to drive down costs.


16k for a humanoid is insanely low. Guess China's aiming to dominate the humanoid robot market like they did with drones.


And robot vacuum cleaners.


The bottleneck for bioterrorism isn't AI telling you how to do something, it's producing the final result. You wanna curtail bioweapons, monitor the BSL labs, biowarfare labs, bioreactors, and organic 3D printers. ChatGPT telling me how to shoot someone isn't gonna help me if I can't get a gun.


This isn't related to my comment. I wasn't asking what if an AI invents a supervirus. I was asking what if someone invents a supervirus. AI isn't involved in this hypothetical in any way.

I was replying to a comment saying that nukes aren't commodities and can't target specific classes of people, and I don't understand why those properties in particular mean access to nukes should be kept secret and controlled.


As said every time this "why are we automating creativity when menial jobs exist?" response comes up:

1) Errors in art programs are less worrisome than errors in a physical robot. One going wrong adds extra fingers to a picture; the other potentially maims or kills you.

2) Moravec's paradox: reasoning requires very little computation compared to sensorimotor skills and perception.

3) Despite 1 and 2, we are constantly automating menial jobs!


Classifying image generation and manipulation tools as "art programs" is the most charitable possible reading. When they are used to generate disinformation, incitement, and propaganda, they are potentially maiming and killing humans. This failure mode is well known and the mitigations ineffective, yet here we are, about to take another leap forward after a performative period of "red teaming" in which some mitigation work happens but the harsher criticism is brushed off as paranoia.


I couldn't disagree more strongly that disinformation, incitement, or propaganda maim and kill people. People kill other people. Don't give killers an avenue to abdicate responsibility for their actions. Propaganda doesn't cause anyone to do anything. It may convince them, but those are entirely separate things with a clear, bright line between them. Best not mix them up.


It might be instructive to consider, for example, the history of genocide, in particular civilian collaboration in state-led genocide. It might be instructive to consider why the genocide convention criminalizes not only acts of genocide, but also incitement of genocide; why it criminalizes not only the failure to prevent genocide, but also the failure to prevent incitement of genocide. The US has an extraordinarily strong position on freedom of speech; it is nowhere near a universal moral value.

"People kill other people" is a statement so simple as to be devoid of any positive meaning. What are you actually trying to say? Don't justice systems almost universally contain notions of incitement of crime, criminal negligence to prevent a crime, and other accessory considerations to the actual act?

Don't justice systems almost universally have several levels of responsibility in relation to intent, which at its most basic level can be established by predictable outcomes?

If, for example, you are a leader of armed forces, and also a leader of organizations capable of creating propaganda. Let's say you create and distribute some propaganda (maybe using some AI tools), and a predictable outcome of that is that soldiers will be more lenient in their consideration of the rules of engagement and international law. In that case, one could at the very least establish that you were negligent in your creation and distribution of propaganda. The actual crime would have been the people killing people, namely your soldiers, but you would certainly be given some responsibility for that.

You can similarly take a small next step after that and consider that a company producing, distributing, and profiting from a dual use technology capable of creating propaganda and disinformation that can be responsible for crimes could be held at the very least morally accountable for those crimes, if not criminally.

Responsibility, accountability, moral and criminal, are not black and white notions. They are heaviest and easiest to attribute around physical acts of damage, but they stretch far and wide. To think otherwise is to allow the people with the most power to rampage unaccounted.


> freedom of speech ... is nowhere near a universal moral value

It depends on the basis from which you derive your (universal) moral values. Maximalist liberty as a universal moral value can be derived from the dual axioms of universal moral equality and the absence of a moral oracle. If you accept these axioms, it follows that there is no source of moral authority that can legitimately constrain the non-infringing actions of another (e.g. your right to wave your fists around ends where my nose begins). These ideas were first laid out in the Declaration of Independence and expanded on in the Declaration of the Rights of Man.

> What are you actually trying to say?

That the causal chain of an action is completely interrupted at the first agent/actor in the system, who bears full responsibility for their actions.

> justice systems almost universally

It very much depends on the justice system. If you look at US/British/Roman law, a guilty mind (mens rea) and a guilty act (actus reus) are core facts that must be established to prove a crime has been committed. These still apply in cases of, e.g., criminal negligence, where a reasonable person ought to have known that their actions would result in harm. Mens rea is quite challenging to prove in cases of incitement, and legal precedents vary.

In combination with the above causal thesis, I hold that restricting incitement is in all cases an overstep of federal authority and an infringement of fundamental liberty. Incitement as a crime seems to have been established to make policing easier, not because telling someone to do something makes you responsible for their actions.

> you were negligent in your creation and distribution of propaganda

People are not inanimate objects. They are decision-making agents. The world is not a Rube-Goldberg machine. The soldiers who do the killing are responsible for their own moral attitude, and their own actions. You cannot be reasonably expected to know how your ideas will impact the minds of others, since every mind is a black box. Everything that contradicts this does so with generalizations too broad to be predictively useful.

> You can similarly take a small next step

This is where everything goes insane. Where does the responsibility end? You're trying to piece the butterfly effect back together.

Are people who make and sell bullets responsible for shootings? What about those that refine brass and lead? What about those that mine for ore? Creating economic demand, or promoting an idea, are morally neutral actions. People buying goods are in no way responsible for the conditions of their manufacture. People promoting ideas are in no way responsible for the actions a listener may take. Responsibility is zero-sum. Don't allow slavers and murderers to dispense with even a tiny portion of the sum responsibility for their actions. They must bear it all.


Disinformation is art. Art is disinformation.

