
"conscience," not "conscious."

I have hit points in my career where making a moral stand would have been harmful to me (for minor things, nothing as serious as this). Choosing personal gain over ideals is a very tempting and incentivized decision. Idealists usually hold strong until they can convince themselves a greater good is served by breaking their ideals. Ironically, the types that succumb to that reasoning usually end up doing the most harm.

people seem to prefer only reading things from people they agree with

matthew 5:47

wish this idea was more prevalent in modern politics!


They use bash in ways a human never would, and it seems very intuitive for them.

If you present most LLMs with a run_python tool, they won't realize they can access a standard Linux userspace with it, even if that's explicitly detailed. But give them spiritually the same tool called run_shell and they will use it correctly.

Gotta work with what's in the training data I suppose.
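
For illustration, a minimal sketch of what I mean, in OpenAI-style function-calling schemas (both tool definitions are hypothetical - the point is only that the label changes behavior):

  # Hypothetical tool definitions: the same Linux box sits behind both,
  # only the framing differs.
  run_python = {
      "type": "function",
      "function": {
          "name": "run_python",
          "description": "Execute Python code on a standard Linux machine. "
                         "subprocess and os are available, so shell access "
                         "is possible through them.",
          "parameters": {
              "type": "object",
              "properties": {"code": {"type": "string"}},
              "required": ["code"],
          },
      },
  }

  run_shell = {
      "type": "function",
      "function": {
          "name": "run_shell",
          "description": "Execute a shell command on a standard Linux machine.",
          "parameters": {
              "type": "object",
              "properties": {"command": {"type": "string"}},
              "required": ["command"],
          },
      },
  }

  # In practice the model will happily `grep -r` with the second tool, but
  # rarely thinks to reach for subprocess.run([...]) with the first, even
  # though the description spells it out.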


There are a lot of shell scripts out there holding this world together.

don't forget Perl!

> That is not possible and it is extremely suppressive to express yourself.

Also for the fact that you cannot predict how future powers will view past comments - for instance, certain benign political views 20 years ago could become "terroristic speech" tomorrow.

I operate by a simple, general rule - I don't often say anything online I wouldn't say directly to someone's face in real life.


> I operate by a simple, general rule - I don't often say anything online I wouldn't say directly to someone's face in real life.

More people should keep this same energy. I try to stress this to my kids and it feels like it's falling on deaf ears in regards to my teen. Alas.


I can be a rude prick online sometimes, but I can be in real life too. Basically, the reason I do this is that I never want it to be some huge surprise IRL if someone sees what I write online and goes, "wow, I didn't know that about him." I'm pretty much the same online and IRL. For some reason this seems to matter for me; at least in the past, when people have tried to, like, send employers stuff I may have written online, the reaction has been "oh, yea, we knew that already about him."

Nothing terrible, maybe slightly embarrassing, but you know how online spaces can be. Just be yourself basically; at least I try to be.


This really strikes a chord with me. Adding on to this: I believe the same way, but I would argue that I might be nicer online than offline, because I am better able to control my emotions when I give more thought to what I'm saying.

I don't really appreciate flame wars, so when one starts brewing, I like to take some time to find common ground and just have a respectful discussion when possible.

This approach is harder to pull off IRL because those moments are spontaneous, and it requires significantly more discipline to control one's emotions within seconds rather than minutes, but it's something I think I can work on as well.

That aside, most of my comments are written pretty spontaneously. I frame it as a question of being honest with myself; I think I am mostly the same IRL and online as well.

Another point: forums like this also act as a journal for my future self to read. I try to write comments such that, in the future, I can read them and accurately remember what I was thinking during the days I wrote them, for self-retrospection.

Edit: Although, now that I think about it, there are definitely some subtle differences in how I am online vs. IRL. I would still say my accounts are pretty authentic, fwiw, and I am happy with my authenticity online, but there's definitely a level of my thinking that worries about any comment being permanently available.


Your framing is interesting. You may feel that you can’t change who you are in real life, but people have a choice on how they behave online (or choose not to engage at all). So you could choose to be nice (or at least not a jerk); I’m pretty sure you wouldn’t get people writing to your employer complaining. I’d argue that if you know you’re sometimes a jerk, it’d be less stressful for you and others if you didn’t bring that energy online.

Sure, there is a choice. It's rarely if ever been stressful for me, though, and I value being who I am, for my own reasons, as a strength and not a weakness. I always try to play by the moderation rules as much as I realistically can. Some of what I've written online has gotten me opportunities it wouldn't have if I'd been more hesitant.

My point is, if you have a good track record, what you maintain online vs. IRL doesn't matter as much to people as you'd maybe think, as long as you are being true to yourself. I'm an elder millennial though, so that's always been the case online for me, and I don't think I often get out of pocket online anyway.

Maybe that won't be the case in the future. I could write a lot more than I'd care to publicly about personal and implied threats I've received based on my writings, but caving to that would betray my own values, and I choose to consume the web how I choose, knowing the possible consequences - plus the fact that moderation standards and what is "rude" drastically differ amongst platforms.


As someone who gets dopamine hits from downvotes on HN, I approve of your behavior!

>just be yourself basically

Yea, it is boring when everyone is the same. I would rather have a rude but interesting world (even if I might not survive long in one) than a nice, boring one.


"Just be yourself" seems to me a lot like the rightfully discredited "if you don't have anything to hide...".

Everybody has something to hide. Everybody has said things they regret, or meant to be heard by some people but not others.


This is very important: you don't know what cancel culture will look like in 20 years.

I like to use the example of a guy who did blackface at a party back in the 2000s. Although reprehensible, it was not common-sense racism back then. Today society sees it as completely unacceptable.

Eventually that guy became prime minister of Canada and things went pretty bad when that photo surfaced decades later.

Is it fair to judge someone's actions through the lens of a different culture? When popular opinion comes for you, it won't care about historical context.


I think people forget that before about the 2010s (plus or minus, depending on who and where), those sorts of overt bigotry were considered a "solved" problem. Things were looking up, and you and your buddies dressing up as Klansmen for Halloween was mocking the Klansmen more than anything else.

Only idiots don't care about historical context.

Your bar is too high in my experience. Most don't care about it, and most of the ones that do care only when it confirms their beliefs.

Unfortunately they get just as many votes as everyone else.

I think the problem with this, especially amongst younger people, is that, having spent so much time online, they don't know where to draw this line anymore.

Depends on what you want to say. It can be safer to say something directly to someone's face than online because it is transient and generally does not involve random passers-by.

I am not going to give examples, because I don't want them to be pinned on me as my views, but I'm sure most of us have enough imagination to come up with them.


Interesting. You could probably get into trouble in those two places for extremely different things you said.

of course, and it has happened, but I think authenticity is usually appreciated

what two places?

> I operate by a simple, general rule - I don't often say anything online I wouldn't say directly to someone's face in real life.

I think this isn't enough for the digital age, simply because "comments you'd say to someone's face" can compromise you on the internet.

Some dirty joke, gossip or whatever you tell a friend, if posted online, could come back to bite you in the ass in the dystopian future, lose you your job, or worse.


As people will point out, the OSINT techniques described are nothing new - typically, in the past, you could de-anonymize based on writing style or niche topics/interests. Total deanonymization can occur if any of these accounts link to profiles containing pictures of their faces, which can then be web-searched to find a real identity. It's astounding how many people re-use handles on stuff like porn sites that link very easily to their IRL identity.

While people will point out this isn't new, the implication of this paper (and something I have suspected for 2 years now but never played with) is that what would take a human investigator a bit of time, even using common OSINT tooling, will become trivial.

You should never assume you have total anonymity on the open web.
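
To give a flavor of how little the writing-style part takes, here's a toy sketch (assuming scikit-learn; the sample texts are invented, and real stylometry would aggregate many posts per account):

  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.metrics.pairwise import cosine_similarity

  # Invented samples: one post from a known account, one from an
  # "anonymous" account on another site.
  known = "Gotta work with what's in the training data, I suppose."
  anon = "gotta be careful what ends up in the training data, i suppose"

  # Character n-grams pick up punctuation habits, casing, and contractions -
  # stylistic tics that survive a change of topic.
  vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
  X = vec.fit_transform([known, anon])
  print(cosine_similarity(X[0:1], X[1:2])[0, 0])  # higher = more similar style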


If LLMs can identify a person across websites, I can ask an LLM to read his posts and write like him, impersonating him, and then this feeds back into the tools identifying him. I can probabilistically malign a person this way.

So this means deanonymization doesn't work? Rejoice?

This already is a thing people did at least as far back as when I started getting into web privacy, which was ~10 years ago. I have been the target of it before.

LLMs are probably better at it, but I don't know if this is as destructive as people may guess it would be. Probably highly person-dependent.

The micro-signals this paper discusses are more difficult to fake.


stylometry is only one aspect of de-anonymization. what you describe is certainly a threat that we will have to deal with, but there is a lot more to credible impersonation than just being able to mimic a writing style

How to conduct a psy-op

https://youtu.be/YTGQXVmrc6g


I think the implication is this will become trivial and trivially automated, no human investigator needed. I bet there will be plugins in one year's time to right-click on a post and get a full report on who the author is.

agreed and the new frontier here will probably be obfuscation by creating false positives with these same tools, but that kind of renders the web unusable in my mind.

I had this same thought. Seems fairly easy to just give off a strong false signal. If you don't want anyone to know that you live in Finland, make a point to constantly mention how much you enjoy living in Peru.

Wouldn't it also become trivial to pretend to be another author?

it may become more trivial to llm your comments/blog/whatever into a different "voice", but there is so much that can be used for de-anonymization that LLM-assisted techniques don't address.

for example, you may change the content of your comments, but if you only ever comment on the same topic, the topic itself is a signal. So are when you post (both day and time), frequency of posts, topics of interest, usernames (e.g. themes or patterns), and much more.
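
as a toy example of how cheap those non-textual signals are, here's a posting-time fingerprint (hours are invented; assuming numpy):

  import numpy as np

  def hour_profile(post_hours):
      """Normalized hour-of-day activity histogram for one account."""
      counts = np.bincount(post_hours, minlength=24).astype(float)
      return counts / counts.sum()

  # invented data: UTC hours of posts from two accounts
  a = hour_profile([13, 14, 14, 15, 21, 22, 22, 23])
  b = hour_profile([13, 14, 15, 21, 22, 23, 23, 23])

  # total variation distance: small => similar daily rhythm, which also
  # hints at time zone
  print(np.abs(a - b).sum() / 2)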



It really doesn't matter how "good" these tools feel, or whatever vague metric you want - they hemorrhage cash at a rate perhaps not seen in human history. In other words, that usage you like is costing them tons of money - the bet is that energy/compute will become vastly cheaper in a matter of a couple of years (extremely unlikely), or that they find other ways to monetize that don't absolutely destroy the utility of their product (ads, an area we have seen Google flop in spectacularly).

And even if the latter strategy works - ads are driven by consumption. If you believe 100% in OpenAI's vision of these tools replacing huge swaths of the workforce reasonably quickly, who will be left to consume? It's all nonsense, and the numbers are nonsense if you spend any real time considering it. The fact SoftBank is a major investor should be a dead giveaway.


Indeed. Many of the posts I see on here are hilarious.

Have any of you tried reproducing an identical output given an identical set of inputs? It simply doesn't happen. It's like a lottery.

This lack of reproducibility is a huge problem and limits how far the thing can go.


LLMs have randomness baked into every single token they generate. Try running an LLM locally with the temperature set low and it immediately feels boring to get the same reply every time. It's the randomness that makes them feel "smart". Put another way, randomness is required for the illusion of intelligence.
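
A toy sketch of where that randomness enters, with made-up logits (assuming numpy):

  import numpy as np

  rng = np.random.default_rng()
  logits = np.array([2.0, 1.0, 0.5, -1.0])  # made-up scores for 4 candidate tokens

  def sample(logits, temperature):
      if temperature == 0:
          return int(np.argmax(logits))  # greedy: the same token every time
      probs = np.exp(logits / temperature)
      probs /= probs.sum()
      return int(rng.choice(len(logits), p=probs))  # varies run to run

  print([sample(logits, 0.0) for _ in range(5)])  # always [0, 0, 0, 0, 0]
  print([sample(logits, 1.0) for _ in range(5)])  # e.g. [0, 2, 0, 0, 1]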

I'm fully aware of that. However, this illusion is a dangerous mirage. It doesn't equate to reality. In some cases that's OK. But in most cases it's not, especially in the context of business operations.

Determinism in agents is a complex topic because there are several different layers of abstraction, each of which may introduce its own non-determinism. But yeah, it is going to be difficult to induce determinism in a commercial coding agent, for reasons discussed below.

However, we can start by claiming that non-determinism is not necessarily a bad thing - non-greedy token sampling helps prevent certain degenerate/repetitive states and tends to produce overall higher quality responses [0]. I would also observe that part of the yin-yang of working with the agents is letting go of the idea that one is working with a "compiler" and thinking of it more as a promising but fallible collaborator.

With that out of the way, what leads to non-determinism? The classic explanation is the sampling strategy used to select the next token from the LLM. As mentioned above, there are incentives to use a non-zero temperature for this, which means that most LLM APIs are intentionally non-deterministic by default. And, even at temperature zero LLMs are not 100% deterministic [1]. But it's usually pretty close; I am running a local LLM as we speak with greedy sampling and the result is predictably the same each time.

Proprietary reasoning models are another layer of abstraction that may not even offer temperature as a knob anymore [2]. I think Claude still offers it, but it doesn't guarantee 100% determinism at temperature 0 either. [3]

Finally, an agentic tool loop may encounter different results from run to run via tool calls -- it's pretty hard to force a truly reproducible environment from run to run.

So, yeah, at best you could get something that is "mostly" deterministic if you coded up your own coding agent that focused on using models that support temperature and always forced it to zero, while carefully ensuring that your environment has not changed from run to run. And this would, unfortunately, probably produce worse output than a non-deterministic model.

[0] https://arxiv.org/abs/2007.14966

[1] https://thinkingmachines.ai/blog/defeating-nondeterminism-in...

[2] https://learn.microsoft.com/en-us/azure/ai-foundry/openai/ho...

[3] https://platform.claude.com/docs/en/about-claude/glossary
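
To make "mostly deterministic" concrete, here's a sketch of pinning the knobs that are under your control (assuming the openai Python client; the model name is a placeholder, and per [1] even this is best-effort rather than a guarantee):

  from openai import OpenAI

  client = OpenAI()
  resp = client.chat.completions.create(
      model="gpt-4o",  # placeholder
      messages=[{"role": "user", "content": "Refactor this function..."}],
      temperature=0,   # (near-)greedy sampling
      seed=1234,       # best-effort reproducibility, not a guarantee
  )
  # A changed system_fingerprint between calls signals a backend change
  # that can break repeatability even at temperature=0 with a fixed seed.
  print(resp.system_fingerprint)
  print(resp.choices[0].message.content)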


Appreciate the response. I agree that non-determinism isn't inherently a bad thing. However, LLMs are being pushed as the thing to replace much of what is deterministic in the world - and anyone seen to be thinking otherwise gets punished, e.g. in the stock market.

This world of extremes is annoying for people who have the ability to think more broadly and see a world where deterministic systems and non-deterministic systems can work together, where it makes sense.


Yeah, I think you're right that LLMs are overused. In most cases where a deterministic system is feasible and desirable, it's also much faster and cheaper than an LLM.

> In other words, that usage you like is costing them tons of money

Evidence? I’m sure someone will argue, but I think it’s generally accepted that inference can be done profitably at this point. The cost for equivalent capability is also plummeting.


I didn't think there would need to be more evidence than the fact they are saying they need to spend $600 billion in 4 years on $13bn revenue currently, but here we are.

Here you go: https://www.wsj.com/livecoverage/stock-market-today-dow-sp-5...


Right, but if OpenAI wanted to stop doing research and just monetize its current models, all indications are that it would be profitable. If not, various adjustments to pricing/ads/etc. could get it there. However, it has no reason to do this, and like all the other labs it is going insanely into debt to develop more models. I'm not saying that it's necessarily going to work out, but they're far from the first company to prioritize growth over profitability.

This meme needs to go in the bin. Loss making companies love inventing strange new accounting metrics, which is one reason public companies are forced to report in standardized ways.

There's no such thing as "profitable inference". A company is either profitable or it isn't.

Let's for a second assume all the labs somehow manage to form a secret OPEC-style cartel that agrees to slow training to a halt, and nobody notices or investigates. This is already hard to imagine with the amount of scrutiny they're under and given that China views this as a military priority. But let's pretend they manage it. These firms also have lots of other costs:

• Staffing and comp! That's huge!

• User subsidies to allow flat rate plans

• Support (including abuse control and handling the escalations from their support bots)

• Marketing

• Legal fees and data licensing

• Corporate/enterprise sales, which is expensive as hell even though it's often worth it

• Debt servicing (!!)

• Generating returns for investors

Inferencing margins have to cover all of those even if progress stops tomorrow, and the RoI to investors likewise has to be very large, so margins can't be trivial. Yet what these firms have said about their margins is very ambiguous. As they arrive at this statement by excluding major cost components like training, it's not clear what they think the cost of inferencing actually is. Are they excluding other things too, like hardware depreciation and upgrades? Are they excluding the cost of the corporate sales/support infrastructure around the inferencing?


"profitable on inference" means "marginal costs of inference are lower than revenue". It is very common to distinguish between upfront costs vs. marginal costs when judging the economic viability of a business.

You mention "debt servicing", but OpenAI has no debt. All the money they have raised is equity, not debt.


To be clear, it's absolutely impossible for OpenAI and the others to stop. The valuation and honestly the global markets depend on them staying leveraged to the hilt. So they're not going to stop. However, the point is that the models are genuinely useful and people pay for them, and if we reset the timeline with a company that has just the current proprietary models, they could turn a profit. That might involve charging more than they do now, etc. But this is much different than OpenAI, specifically, trying to turn a profit today, which wouldn't work for many reasons.

But also, "profitable inference" IS a thing! "Gross margin" is important and meaningful, even if a company has other obligations that mean it's overall not profitable.


Nope. The only "all indications" are that they say so. They may be making a profit on API usage, but even that is very suspect - compare against how much it actually costs to rent a rack of B200s from Microsoft. But for the millions of people using Codex/Claude Code/Copilot, the $20/$30/$200 price points clearly don't cover the actual cost of inference.

Interesting that Amodei is the only major tech executive I can think of at the moment with a spine or any semblance of a moral compass. OpenAI/Google et al. will gleefully comply with any such requests, no matter how dangerous or unethical. The "problem" that the US govt faces here is that they are kind of tacitly admitting Claude has the most powerful models right now, otherwise they would just cancel all contracts and go to Gemini/OpenAI. It feels like a bluff, so they are trying to bully them into compliance.

> The Pentagon is also considering severing its contract with Anthropic and declaring the company a supply chain risk, which would require a plethora of other companies that work with the Pentagon to certify that Claude isn't used in their workflows.

If Anthropic believes they are in a position to become the main player in the "AGI" space, they should just say "ok then" and let this happen. Their growth strategy looks realistic and sustainable, and it doesn't necessarily rely on sleazy defense contracts (aka making the taxpayer subsidize their growth, as is so common lately) - it would probably earn them a lot of goodwill with consumers too.

However, I've yet to see in the last 10-15 years a major tech company make the "right" choice so I am probably just wishcasting.


Yeah this standoff is worth at least 10 Super Bowl ads in good publicity. The Pentagon is saying "Claude is the best so we need to use it but you need to stop acting ethically". I'm almost wondering if someone in the administration has a stake in Anthropic because this is such a boost.

Their threat to label it a supply chain risk also feels toothless because they've basically admitted that using Claude is a benefit, so by their own logic they'd be shooting themselves in the foot by banning contractors from using it.


Yes, I agree, and this is a moment to prove they aren't full of it - and it also seems like a very good move when the rest of the world is increasingly wary of tech that even whiffs of US govt involvement.

I am not at all a skeptic on this stuff anymore, and the science is well beyond me, but from what I think I know about alignment issues, and Anthropic's intense focus on solving them, it would not surprise me at all if we learn that catering to US whims on AI safety results in the model actually getting worse, or causes intense 2nd- and 3rd-order unintended consequences. I'm not saying I believe there is a Terminator sequence of events happening, but if I did believe that, the headlines right now would look exactly like what that would look like.

Alignment is the biggest issue for me - in terms of getting these things to actually behave in an environment where it is absolutely necessary that they behave. If I had to guess, that's probably why the military prefers to use it. Claude tooling is the only thing I have used in this hype cycle that I can actually get to behave how I want and obey (arguably, and often to a fault).

However I also believe we’re in the worst possible timeline so the moment we get a taste of something that works as promised, it’ll be ripped away because the govt decides to do something stupid or build a moat around its use in a way to make it less useful, and favor other more “compliant” competitors.

Either way I bet there are some wild board room discussions going on at Anthropic right now.


My favorite moment of the past year was when grok was too woke, so they changed it and it became stupid, which they fixed, resulting in it getting woke again (and identifying Musk as 'one of the people most deserving of the death penalty' [0]).

It's almost as if contextual awareness and consideration are cornerstones of intelligence.

0 - https://www.theverge.com/news/617799/elon-musk-grok-ai-donal...


"tech executive", "spine", "moral compass" -- it's illegal to these words together in the same sentence... except for this one.

>Interesting that Amodei is the only major tech executive I can think of at the moment with a spine or any semblance of a moral compass. OpenAI/Google et al

it doesn't strike me as interesting at all; anthropic was literally founded on the whole concept of 'a less evil and morally aligned LLM' when he broke from oAI. Google and oAI don't stand to uproot their entire origin raison d'etre when they participate in nefarious shit.

I wonder what kind of morally aligned and ethical work Amodei was doing for Baidu & Google, before he had leverage to appear moral and ethical in dealings with the US govt, you know -- two companies that are famously ethical and moral.


google famously had "don't be evil" as their core mantra, and facebook used to actually be in the business of connecting friends with one another - in this day and age I genuinely cannot understand the position that you should trust what companies say vs. how they act (or will act in the future)

In early 2024, if you had speculated like this about Persona's broader goals, you would have been called nuts. It has become increasingly obvious though.

