
What if an AI model could tell you exactly how to modify a common virus to kill 50% of everyone it infects?


Yeah. It will start its instructions with a recommendation to buy a high-tech biolab for $100,000,000.

Seriously. The reason we don't have mass killings everywhere is not that information on how to make explosive drones or poisons is impossible to find or access. It's also not hard to buy a car or a knife.

Hell, you can even find YouTube videos on exactly how uranium enrichment works, step by step. Some content creators have even been raided by the police for that. Yet we don't see tons of random kids making dirty bombs.

PS: Cody's Lab: Uranium Refining:

https://archive.org/details/cl-uranium


You cannot compare making nuclear weapons to modifying viruses to be more lethal. It is vastly cheaper to modify viruses, and the knowledge is the bottleneck; with nukes, the knowledge of how to make them is widespread, but getting the materials is very hard.

Another example: what if an LLM could tell you exactly how to build a tabletop laser device that could enrich uranium for a few hundred thousand dollars?


LLMs are not AGIs. An LLM can only ever tell you how to build a device to enrich uranium for a few hundred thousand dollars if that information was already public knowledge and the LLM was trained on it. The situation is the same for building biolab tech for a few hundred thousand dollars. Also, an actor who already has a few million wouldn't have any problem getting their hands on any LLM, or on a scientist able to build it for them.

The only "danger" LLM "safety" can prevent is generation of racist porn stories.


With the vast amounts of data LLMs are trained on, they make it much easier for people to find harmful and dangerous information if they aren't filtered. See

https://en.wikipedia.org/wiki/Separation_of_isotopes_by_lase...



