Hacker News | lnenad's comments

It's really funny how people can say these things online without giving them a second thought. There are literal weapons being produced that are killing people daily. But no, it's the meme generator that's evil.

Because this is a tech forum, not a weapons forum. I'd wager that a sizeable chunk of folk decrying AI/LLMs in this manner also do, in fact, decry the same weapons you refer to. They just do it elsewhere because it's not typically on-topic here.

Context is tech, I agree. But is there no tech in weapons? Palantir? Drones? Are there developers who are proud when they make the kill machine 1% more precise, more optimized?

Plenty of HN threads about Palantir and drones also have people commenting about their evil.

Just because one thing is a lesser/different kind of evil doesn't mean we can't be vigilant about it as well.


I'm not arguing that, OP said

> RIP to one of the most evil products I've seen come out of the tech industry in my lifetime.

I'm saying Sora isn't even in the top 100 most evil products to come out of the tech industry.


I think the evil part is putting it in the hands of the general public. The ability to create propaganda and deep fakes gives everyone a powerful tool for manipulation. The rich and powerful are going to do whatever they want, anyway. Everyone having access to that same tool doesn't make it any less dangerous.

There's nothing inherently evil about a knife. Standing outside of a high school and handing a knife to every kid walking in is pretty evil though.


> The ability to create propaganda

This has been possible for pretty much the entire history of humanity. The bar has been lowered, but not by a lot imho.

I don't disagree on the rest, and I didn't say there aren't bad uses, but there are many, many good uses for AI/Sora. You can't say the same for weapons.


Genuinely curious what the [morally] good use cases for Sora would be.

Violence at scale is often facilitated by and preceded by propaganda at scale, which is one of Sora's only applications. Certain things are obvious to normal people, like "propaganda is real, powerful, bad, and of enormous historical significance".

This is textbook whataboutism.

Yes, literal weapons are bad, too. But that's not the current topic.


> one of

It is not. Why is that relevant to social entities?

How well you interact with other members of a society affects your chances of procreation, survival, and knowledge acquisition, i.e. it makes sense as a measure of intelligence.

It's a pretty ambiguous definition. The most powerful man in the world right now is not someone I consider a role model for social cognition, and yet there he is with the football for the second time, having demonstrated grandmaster skill at social cognition to get there.

You don't have to be empathetic and nice, just good at navigating society.

So in all seriousness with a bit of snark: Do you want a malevolent AGI? Because "good at navigating society" as the only benchmark here is how you get a malevolent AGI...

Evidence: cuckoos and cheaters all the way down the evolutionary ladder as a winning strategy and arms race against the hard workers.


I don't like a$$holes, but they do exist and they are part of our species, ergo intelligent. My opinion of them doesn't change that fact.

Yes, but we have a choice about whether the AGI is an a$$h0l3 or not. That's the difference here. You do see that, right?

I agree 100%.

Also, I am in the process of fine-tuning a small model on the data so that you'll be able to build diagrams inside the app.
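
For the curious, a minimal sketch of roughly what that fine-tune could look like, assuming a JSONL file of prompt/diagram-source pairs and LoRA via Hugging Face's peft. The model name, file name, and hyperparameters here are illustrative, not the actual setup:

    # Assumes diagram_pairs.jsonl holds records like
    # {"prompt": "...", "diagram": "..."} where "diagram" is the
    # diagram source (e.g. Mermaid) the model should learn to emit.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # any small causal LM works
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # LoRA trains small adapter matrices instead of all the weights,
    # keeping the fine-tune cheap enough for a single GPU.
    model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                             task_type="CAUSAL_LM"))

    def to_features(example):
        # One training text per pair: prompt, then the diagram source.
        text = example["prompt"] + "\n" + example["diagram"] + tokenizer.eos_token
        return tokenizer(text, truncation=True, max_length=1024)

    ds = load_dataset("json", data_files="diagram_pairs.jsonl")["train"]
    ds = ds.map(to_features, remove_columns=ds.column_names)

    # mlm=False makes the collator pad each batch and set labels for
    # ordinary next-token (causal LM) training.
    Trainer(model=model,
            train_dataset=ds,
            data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
            args=TrainingArguments(output_dir="diagram-lora",
                                   per_device_train_batch_size=4,
                                   num_train_epochs=3)).train()

At inference time the app would prompt the tuned model and render whatever diagram source it emits.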

It's really amazing how the stability of platforms has gone down in the last year or so.


If only this was correlated with something else going on in the industry...


Yes, the new normal is crazy. Claude/GitHub et al.

They are dogfooding their own tools and causing so much downtime, all in the spirit of "staying ahead".


> 100% of our code is written by AI

Yeah we can tell...


The schadenfreude is so fucking palpable


Weird take. Will you also look sour at devs who use local LLMs in ~50 years? Or is that different?


The mass immigration is probably still taking a toll.


Who is leaving your possession is just as relevant as who is driving your possession?


You're comparing actual humans to a pet?


What are you asking? Nchagnet is just acknowledging the existence of people who regret having kids, not making a value comparison.




They are objectively similar in that both are a big multi-decade commitment to a living being that you chose for yourself (yes, you did choose to have the kid, unless you live in a country with no birth control access), but saying something is similar is still not making a value comparison.


Yeah, of course you can choose the level at which you evaluate how things are similar. Yes, they both breathe. Yes, they have DNA. Both are objectively true.

Also, you keep saying "value comparison" like it's something I used against OP. I never mentioned anything about dog <=> child, nor did OP. I just meant that the core decision behind having either is different, so it's not comparable, even though you could boil it down to "you care for both".



How is saying that your biological offspring is different to a pet discrimination/unfair treatment?


It's still clunky, though. It's a great, cool thing that OP built, but it's just not very practical.


Even if you reduce LLMs to being complex autocomplete machines, they are still machines that were trained to emulate a corpus of human knowledge, and they have emergent behaviors based on that. So it's very logical to attribute human characteristics to them, even though they're not human.


I addressed that directly in the comment you’re replying to.

It’s understandable that people readily anthropomorphize algorithmic output designed to provoke anthropomorphized responses.

It is not desirable, safe, logical, or rational, since (to paraphrase) they are complex text transformation algorithms that can, at best, emulate training data reinforced by benchmarks, and they display emergent behaviours based on those.

They are not human, so attributing human characteristics to them is highly illogical. Understandable, but irrational.

That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them.

Pretending your MS RDBMS likes you better than Oracle's because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).


> It is not desirable, safe, logical, or rational, since (to paraphrase) they are complex text transformation algorithms that can, at best, emulate training data reinforced by benchmarks, and they display emergent behaviours based on those.

> They are not human, so attributing human characteristics to them is highly illogical

Nothing illogical about it. We attribute human characteristics when we see human-like behavior (that's what "attributing human characteristics" means, by definition), not just when we see humans behaving like humans.

Calling them "human" would be illogical, sure. But attributing human characteristics is highly logical. It's a "talks like a duck, walks like a duck" recognition, not essentialism.

After all, human characteristics are a continuum of external behaviors and internal processing, some of which we already share with primates and other animals (non-humans!), and some of which we can just as well share with machines or algorithms.

"Only humans can have human like behavior" is what's illogical. E.g. if we're talking about walking, there are modern robots that can walk like a human. That's human like behavior.

Speaking or reasoning like a human is not out of reach either. To a smaller or larger degree, or even to an "indistinguishable from a human on a Turing test" degree, other things besides humans, whether animals, machines, or algorithms, can do such things too.

> That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them.

The profit motives are irrelevant. Even a FOSS, not-for-profit hobbyist LLM would exhibit similar behaviors.

> Pretending your MS RDBMS likes you better than Oracle's because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).

Good thing that we aren't talking about RDBMS then...


It's something I commonly see when there's talk about LLMs/AI:

That humans are some special, ineffable, irreducible, unreproducible magic that a machine could never emulate. It's especially odd to see when we already have systems that are doing just that.


I agree 100% with everything you wrote.


> They are not human, so attributing human characteristics to them is highly illogical. Understandable, but irrational.

What? If a human child grew up with ducks, only did duck-like things, and never did any human things, would you say it would be irrational to attribute duck characteristics to them?

> That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them.

But thinking they're human is irrational. Attributing human characteristics to them, which is the sole purpose they were designed for, is rational.

> Pretending your MS RDBMS likes you better than Oracle's because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).

You're moving the goalposts.


Exactly this. Their characteristics are by design constrained to be as human-like as possible, and optimized for human-like behavior. It makes perfect sense to characterize them in human terms and to attribute human-like traits to their human-like behavior.

Of course, they are not humans, but the language and concepts developed around human nature are the set of semantics that most closely applies, with some LLM-specific traits added on.


I’d love to hear an actual counterpoint; perhaps there is an alternative set of semantics that closely maps to LLMs, because "text prediction" paradigms fail to adequately intuit the behavior of these devices, while anthropomorphic language is a blunt cudgel but at least gets in the ballpark.

If you stop comparing LLMs to the professional class and start comparing them to marginalized or low performing humans, it hits different. It’s an interesting thought experiment. I’ve met a lot of people that are less interesting to talk to than a solid 12b finetune, and would have a lot less utility for most kinds of white collar work than any recent SOTA model.


What? No?


I think the proper term is blursed.

