Not who you’re replying to, but I can give some thoughts.
For anything math, it’s much more reliable to give agents tools. So if you want to verify that your real estate offer is in the 90–95th percentile of offerings in the past three months, don’t give Claude that data and ask it to calculate. Offload to a tool that can query Postgres.
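As a rough illustration, a tool like the sketch below (the `offers` table and column names are hypothetical, invented for the example) lets Postgres do the arithmetic while the agent just decides when to call it:

```python
# Minimal sketch of a percentile tool, assuming a hypothetical Postgres
# table "offers" with "amount" and "created_at" columns.
import psycopg2

def offer_percentile(offer_amount: float, dsn: str) -> float:
    """Percentile rank (0-100) of `offer_amount` among offers from the
    past three months. The database does the math, not the model."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT avg((amount <= %s)::int) * 100
            FROM offers
            WHERE created_at >= now() - interval '3 months'
            """,
            (offer_amount,),
        )
        pct = cur.fetchone()[0]
        if pct is None:  # no offers in the window
            raise LookupError("no recent offers to compare against")
        return float(pct)
```

Claude then gets back a single number it can’t miscalculate, instead of a pile of rows it might.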
It’s similar for anything that needs data from an external source of truth. For example, what payers (insurance companies) reimburse for a specific CPT code (medical procedure) can change at any time and may differ between today and when the service was provided two months ago. Have a tool that farms out the calculation, which itself uses a database or whatever to pull the rate data.
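To make the date-sensitivity concrete, the lookup could be keyed to the date of service rather than to today. Everything below (table, columns, function name) is invented for illustration:

```python
# Hedged sketch: effective-dated rate lookup, assuming a hypothetical
# "reimbursement_rates" table where each row has an "effective_from" date.
import datetime
import psycopg2

def reimbursement_rate(payer_id: str, cpt_code: str,
                       date_of_service: datetime.date, dsn: str) -> float:
    """Return the rate that was in effect on `date_of_service`,
    not whatever the payer happens to pay today."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT rate
            FROM reimbursement_rates
            WHERE payer_id = %s
              AND cpt_code = %s
              AND effective_from <= %s
            ORDER BY effective_from DESC
            LIMIT 1
            """,
            (payer_id, cpt_code, date_of_service),
        )
        row = cur.fetchone()
        if row is None:
            raise LookupError("no rate on file for that payer/code/date")
        return float(row[0])
```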
The LLM can orchestrate and figure out what needs to be done, like a human would, but having it do anything else is either scary (math) or expensive (burning context to constantly pull documentation).
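For the orchestration piece, here’s roughly what wiring the percentile tool into Claude could look like with the Anthropic Python SDK’s tool-use interface (the model name is a placeholder, and `offer_percentile` is the hypothetical tool sketched above):

```python
# Sketch of the orchestration side: Claude decides *whether* to call the
# tool; our code runs it, so the model never does the arithmetic itself.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

tools = [{
    "name": "offer_percentile",
    "description": "Percentile rank (0-100) of a real estate offer "
                   "among offers from the past three months.",
    "input_schema": {
        "type": "object",
        "properties": {"offer_amount": {"type": "number"}},
        "required": ["offer_amount"],
    },
}]

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; use whatever model you run
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user",
               "content": "Is my $450k offer in the 90th-95th percentile?"}],
)

for block in response.content:
    if block.type == "tool_use" and block.name == "offer_percentile":
        # dsn elided; in practice you'd send this result back to the model
        # as a tool_result block and let it finish the answer.
        result = offer_percentile(block.input["offer_amount"], dsn="...")
```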
HIPAA doesn't require any certification either. Some organizations voluntarily choose to earn certification from private companies that offer certifications for compliance with HIPAA privacy or administrative simplification rules, but this is completely optional.
For doing some reporting stuff internally, there isn’t a certification. But there are definitely humans who have to certify financial statements and communications for financial offerings.
> how can one prove code was AI generated beyond a reasonable doubt?
Subpoena the provider they use.
Even if they don’t retain the full context, they have to save API calls for billing and analytics. If you’re clauding in the hour before and after the commit, one can reasonably assume you built it with (if not exclusively by) AI.
I think if you ask for something generic like “shoes”, this could be true.
When I’ve worked with Claude on finding brands for fashion (e.g. here’s a small watchmaker I like, what are similar options?) it does research and picks great options. Some are big, others are small producers.
If you add to that the very broad limits of what the current administration considers "legal" (as in "pretty much anything we want to do"), I can understand feeling uneasy as a Google employee...
What does that mean? How does one come to a personal moral conclusion? Vibes?
(I take "moral framework" to mean a principled stance that gives objective grounding for a moral judgement. I agree that we can come to a moral judgement without putting it through a systematic and discursive defense, and I reject the notion that there are many moralities or that they are arbitrary, but it is also true that diverging conceptions of the basis of morality will frustrate agreement. Stopping at personal moral judgement does not lend itself to fruitful dialogue and understanding, as it constraints the domain of what is intersubjectively knowable.)
My moral framework can be different from yours. I, the individual, can come to the conclusion that something is immoral even when the rest of the group doesn’t agree with me. And (at least for my own moral framework) I should take action accordingly.
So I don’t need a shared framework to make the claim that something is immoral (to me).
The second is that it isn’t very interesting to stop at “personal moral judgement”. You’re having a dialogue, right? So, if you want to have a dialogue, you must explain your moral reasoning. I don’t like your parent’s use of the term “moral framework”, because it does lend itself to a relativistic interpretation - though charitably, the parent need not be a relativist, and is merely acknowledging the different stances of various moral theories. But also charitably, if we lack sufficient common moral ground, the first task is to find that common ground before we can discuss anything with two incommensurate views in play.
Perhaps. But they made $1.6 billion in net income in 2025, which, set against OpenAI’s roughly $9 billion loss, makes them about $10.6 billion more competent from a business perspective.
You can view it however you want, but reality disagrees with you. Palantir's profit comes from real customers paying real money for their real products.
And it's hilarious that you would compare Palantir to a crypto pump-and-dump while claiming OpenAI creates more value and is more successful.
Things are hacked together and extremely difficult to change (without a pile of more hacks), Palantir is more interested in embedding itself deeper and manipulating RFPs than in helping orgs operate more effectively, they waaaaaay overpromise during sales and can’t deliver, costs and timelines overrun by a lot, and they’ll shift the goalposts by trying to sell the next Magic Fix before the first thing is finished (because they oversold/botched the implementation) or has delivered value commensurate with its cost.