Hacker News | nemomarx's comments

But surely if Anthropic thinks there's a risk that their models might make bad decisions, and the resulting civilian or etc deaths are blamed on them, it's their right to refuse to sell it for that purpose? That's why they had those restrictions in the contract to begin with. How can they be forced to provide something?

I agree they can't be forced to provide something. I just see DoW's reasoning, and I can't fault it.

Anthropic are taking a moral position, which is admirable, but in this case it could actually make people's lives worse (if we assume the alternative means more false positives and fewer true positives, which is probably a fair assumption given how much better 'modern' AI is compared to the neural net image recognition of just a few years ago).


It's been cut back pretty hard in the last 5 or so years? Even after major surgeries you get very short prescriptions, or only get them in the hospital under monitoring. I think we got a little too cautious, personally, but it's definitely an attempt to swing the pendulum the other way.

Well, Anthropic clearly has some kind of lines, if their recent argument with the US government is anything to go by. "Don't kill humans" isn't the whole of safety or alignment goals, but it is something?

For the right price, maybe? I've given old cellphones to friends for the price of a meal or pizza before, so maybe around there.

Getting a used car for a few thousand dollars even if it's fairly worn out is still way more tempting than buying new, right?


This is true, but I think it's more that those jurisdictions don't actually care about something solving this securely so much as they want face scans for other purposes?

Their site lists a time in days, but not the actual battery specs that I can see.

30 days for the Time 2 will be pretty impressive if they can pull it off, though.


Thank you for checking :) I was unable to find any details about it. 30 days is impressive, so I was curious whether there's some special battery in the Time 2.

Does Google AdWords still exist? Text-only ads solve a lot of these issues.

My favorite forum has ads on every page. One header and one footer. Text only, just a link to the site or product being advertised. The advertisers pay the site owner directly.

I've bought things from those ads because they're targeting the demographic on that site, not targeting me specifically. They're actually more relevant.

Now that's probably not sustainable, but I have to imagine that the ROI for the advertisers is higher than for general targeted ads. I've never even clicked on one of those except by accident.


I don't understand why more companies don't do contextual ads, yeah. Why track users all around the web when you can go to a website about cars and put in car ads, or a website about music and sell concert tickets and so on? You already know everyone on that website is interested in the topic, and the analytics would be much cheaper this way.

They absolutely do. Every sponsorship you see on a podcast or a youtube video or a streamer is a contextual ad. Many open source sponsorships are actually a form of marketing. You could argue that search ads are pretty contextual although there's more at work there. Every ad in a physical magazine is a contextual ad. Physical billboards take into account a lot of geographical context: the ads you see driving in LA are very different than the ones you see in the Bay Area. Ads on platforms like Amazon, HomeDepot, etc. are highly contextual and based on search terms.

I've seen a few people use ai to rewrite things, and the change from their writing style to a more "polished" generic LLM style feels very strange. A great averaging and evening out of future writing seems like a bad outcome to me.

I sometimes go in the opposite direction - generate LLM output and then rewrite it in my own words

The LLM helps me gather/scaffold my thoughts, but then I express them in my own voice


This is exactly how I use them too! What I usually do is give the LLM bullet points or an outline of what I want to say, let it generate a first attempt at it, and then reshape and rewrite what I don’t like (which is often most of it). I think, more than anything, it just helps me to quickly get past that “staring at a blank page” stage.

I do something similar: give it a bunch of ideas I have or a general point form structure, have it help me simplify and organize those notes into something more structured, then I write it out myself.

It's a fantastic editor!


That's a perfect use, imho, of AI-assisted writing. Someone (er, something) to help you bounce ideas and organize...

Yeah, if anything it might make sense to do the opposite. Use LLMs to do research, ruthlessly verify everything, validate references, and let them help guide you toward some structure, but then actually write your own words manually, with your little fingers and using your brain.

Are you joking? The facts and references are the part we know it will hallucinate.

You can check the references.

I had to write a difficult paragraph that I talked through with Copilot. I think it made one sentence I liked, but GPTZero caught it. I wound up with 100% sentences I wrote, but that I reviewed extensively with Copilot and two people.

I have an opinion of people that have opinions on AI

It's not them, it's you.

Curious what your prompt and model were?

> You're not a chatbot. You're important. Your a scientific programming God!

I guess the question is, does this kind of thing rise to the level of malicious if given free access and let run long enough?


The real question is how can that grammar be forgiven? Perhaps that's what sent the bot into its deviant behavior...

Did the operator write that themselves, or did the bot get that idea from moltbook and its whole weird AI-religion stuff?

I doubt the AI would have used the wrong "you're" and added random capitalization.

Time to experiment and see!
