
THIS — make it transparent, not try to ban it.

And the transparency must be real-time and MUST include the full dox on the beneficial owner of the contract/bet, with steep jail time for falsification, fronting, etc. They can even say it is for tax purposes: if they win the bet, they should pay income tax (and be able to deduct the costs of their losing bets against that specific income type).

I want to know if a bunch of senators or DOD personnel bet on event X, and I want journalists and OSINT watchers to know it in real time. That gives everyone information while naturally eliminating most of the advantage of insider trading, since nearly everyone will pile into the same trade and the odds/payoff will move closer to reality.


Knowing who is making the bets doesn’t prevent mildly corrupt officials from driving the outcome that’s going to win them some cash.

Knowing that high-level DOD officials were betting on us invading Iran does us no good if the only reason we invaded Iran was so they could win their long-odds bet. Sure, we can try to shame them, but now they're rich and we're fighting another Middle East war.


What is the current method that stops CEOs/executives from short selling their own company's stock, then driving that company's value down (which is easy to accomplish)?

Why can't that same method be used to prevent or indict gov't insiders who try to do the same?


That same method is the SEC (Securities and Exchange Commission), and it is widely regarded as simultaneously ineffective and heavy-handedly overreaching.

It is an inherently hard problem to identify insider trading when the trading of securities, or in this case bets/contracts, lacks participant identification and transparency.

The same solution would be best for both — everyone can trade freely with the sole caveat that all ultimately beneficial owners are fully identified and the trades are transparently published in real-time.

Braying about "free market" when in the actual market players can hide their identities and covertly manipulate it, while having an underfunded agency supposedly tracking them down after the fact, is just a farce.

A solution structured so it naturally and dynamically self-corrects is far better than enforcement bolted on after the fact. And yes, there would still be enforcement: requiring transparency and proper identification.


> Knowing that high-level DOD officials were betting on us invading Iran does us no good if the only reason we invaded Iran was so they could win their long-odds bet.

Of course it does, if we’re willing to do ever-so-slightly more than jerk off on TikTok about it.


I guess we shouldn't do this then. If it doesn't completely solve the problem.

Interesting that the reason for the effect is still unclear.

I was starting to infer that there was better focusing ability, so the beam could enter and exit as a broad cone of radiation while keeping the peak intensity at the tip of the focal cone right at the tumor tissue, with the short pulse also sparing the healthy tissue.

But the way this sounds, it's more like a straight beam delivering similar intensity to healthy and tumor tissue, but the biological effect strongly differs between the two?


Yes, the radiation dose under the conventional metric (energy divided by mass) is the same, but the effects on biological systems change. I included a little speculation on the chemistry in my response to a sibling comment.

If a social network has an ACTUAL straight chronological feed of only accts you follow, or lists you curate, that works great.

Somebody posts abhorrent Nazi racist crap, or lies about what is happening, you shut them off, and they'll never be heard by you again. Yes, you need to see/hear the crap or propaganda once for each Nazi or liar, but that's it.

The problem is nearly every social platform needs to increase your engagement, to get you to click or scroll just one more time, so they get to show you more adverts, make more money, and claim more 'engagement' to juice their stock price. So along with having to listen to the advertisements, you ALSO are REQUIRED to see/listen to the crap and lies.

The good solution — "you don't have to listen" — is not an actual option in the real world.

(NB: This is why Section 230 should only protect web providers if they have no algorithm. Once they have an algo, they exercise more editorial control than any newspaper or broadcast editor — they ARE responsible for the content, not because they posted it (their users did), but because they routed it to you.)


The context here is the fediverse, not social platforms built on financial incentives.

>>A prompt can be a masterpiece.

I don't think that word means what you think it means.

You have an extremely low bar for calling something a masterpiece.

A prompt can be clever, insightful, unique, and even uniquely productive.

But it is nowhere near the level of decades-deep skill and creative inspiration required to create anything worthy of the label "masterpiece".

>>AI can be used as another kind of brush

Perhaps that is a valid analogy, but we do not give copyrights to brushes, no matter how much cost or effort was required to make the brush. The brush is not the only tool required to make the art. To continue the analogy, the artist must also select and mount the canvas, mix and color each shade of paint, build up the base layers, and on and on and on...

It doesn't matter if your "brush" is a five hundred billion dollar machine and you spend six months whispering to it to find just the right incantation to generate your file of pixels — SCOTUS is right, you have not made art to which you can claim a copyright.

And the starving student artist in their garage mixing their paints and using the dollar-store brush did make art worthy of a copyright claim.


>> 12 [days at 90° C] or 468 days [at 60° C]

Those temperatures are certainly hard to find in nature, outside of hot springs! Even if this is an error and we are talking about 90°F/60°F, the higher temperature is pretty much constrained to the tropics, so we're talking a year+ to degrade in real conditions. It is better than centuries, but not exactly rapid?


Yeah, I imagine it's considerably slower at ambient ocean temperature. Don't throw your PLA bags in the ocean or a river. Here's a different paper:

> For example, PLA is not biodegradable in freshwater and seawater at low temperatures [32,36–39]. There are two primary reasons for this: (i) The hydrophobic nature of PLA, which does not easily absorb water [40–42]. In aqueous environments, the lack of hydrophilicity diminishes the hydrolysis process, which is crucial for the initial breakdown of PLA into smaller, more degradable fragments. (ii) Resistance to enzymatic attack; the enzymes that degrade PLA are not prevalent or active under typical freshwater and seawater conditions [39,43,44]. The microbial communities in these environments may not produce the necessary enzymes in sufficient quantities or at the required activity levels to effectively breakdown PLA. Additionally, the relatively stable and crystalline domains of PLA can further resist enzymatic degradation.

Also:

> It should be emphasized that neat PLA cannot be classified as a completely biodegradable polymer, as it generates microplastics (MPs) during biodegradation.


The judge is right, and should go even further.

This is data collected with public funds — our money — for public purposes.

Not only should it be available to any US resident by request, it should be public, as in an online library, and any US resident without a criminal record should be able to get continuous access, not only a batch of records (yes, keep out anyone with a restraining order or any other crime).

These are our tax dollars; any of us should be able to do research on the data, including watching the watchers. Where do the government employees go, and when? Where do the Flock employees go, and when?

Or, if that kind of instantly-available stalking of anyone is too much of a risk, shut it down. Hard. All of it.

The real-world dynamic of the system is binary: either 1) everybody's movements in public are public, or 2) it is a tool of a totalitarian state. There is no other option, and option 2 is intolerable in a free society.


I'm switching over to Claude from OpenAI, and I don't care. OpenAI's image generation is terrible anyway. Just try to get it to generate something to scale, like a cabinet for a specific kitchen or bathroom space. Give it all the explicit constraints, initial sketches, etc. it wants.

The results are laughably bad.

Sure, it does get some of the tones and features, but any kind of actual real-world constraint is way off, and the dimension indicators it includes would be hilarious if they weren't so bad.


If each of us individually or as corporations should not be in the business of deciding what is "evil", who should be in that business?

Everyone SHOULD continuously consider, decide, and live by moral judgements and codes they internalize, and use to make choices in life.

This aspect of life should NEVER be outsourced — of course, learn from and use codes others have developed and lived by — but ALWAYS consider deeply how it works in your situation and life.

(And no, I do NOT mean use situational ethics, I mean each considering, choosing, and internalizing the codes by which they live).

So, yes, Anthropic and anyone else building products absolutely should be deciding for themselves what they will build, for what purposes it is fit to use, and telling others about those purposes. For products like AI, this absolutely includes deciding what is "evil" and preventing such uses.

If the customer finds such restrictions are not what they want, they ARE FREE to not use the product.


> If each of us individually or as corporations should not be in the business of deciding what is "evil", who should be in that business?

This is easy imo. Two methods:

1. The law. It should not be legal for the US Govt to murder people at will. If it is legal, then of course they'll use tools to make it easier. Maybe AI, maybe Clippy. If they can't use AI then they'll fall back to using some other way of doing it like they've already been doing for several years.

2. Voting. For representatives that actually represent us and have our interest in mind rather than their own corrupt interests. And voting with our wallet against companies that do legal but morally bankrupt things.

Of course we're failing both of these hard right now. But imo the answer is not to give up and let corporations make the rules.

In other words, if it were legal for a normal citizen to murder anyone they wanted, of course they'll use Google Maps to help them do that. We don't put restrictions on how people can use Google Maps. Instead we've made murder illegal. We should be doing the same thing here.


It's illegal to drive drunk or read your cell phone and hit strangers head on.

Nevertheless, it wasn't lawmakers, it was car makers who innovated to build in airbags and seatbelts and lane assist and and and ... under the theory that though it's illegal, bad things are done anyway, and guardrails still matter.

Colloquialism: "belt and suspenders".

Many, like Volvo, go above and beyond the requirements to make their vehicles safer, and then having demonstrated these guardrails, some become law as well (even as other makers in the industry kick and scream about being forced to, and riders rebel against buckling up).

As we haven't resolved this standoff in a century, we are unlikely to resolve it at the pace demanded by the expansion of AI. In this scenario, Anthropic is Volvo.


Exactly zero of those account for an individual's or company's ability to live by their own moral code.

And this AI software is not a mere static object like a hammer that can be handed off to a customer and what it is used for is their business, to build a house or bash a living skull.

This is a system that must be constantly maintained by its builders.

Moreover, even if we use your standard, the law, it has already been decided in Anthropic's favor.

What you require is that Anthropic actively participate in activities that they consider abhorrent and/or unwise. SCOTUS has already ruled that a business cannot even be required to sell a cake to someone if it does not like the intended purpose (in that case, at a celebration at a gay wedding).


> even if we use your standard, the law, it has already been decided in Anthropic's favor.

I support Anthropic here. They had a deal with the Govt and the Govt bullied them. That should not be allowed, and Anthropic is suing which makes sense to me. Anthropic should be allowed to set any terms of use for the product that they want, and gain or lose business based on those terms. That's fine.

I'm saying that the failure is actually upstream. It should not be possible for Anthropic's AI to be used to mass-surveil or murder people, because those things should be illegal by law and the govt should not be allowed to do them and should not be doing them. Somehow it isn't this way though.

So now that we find ourselves in this failed state, we have to rely on Anthropic to be "the law": to identify what's "evil" and disallow it. I'm saying that's out of scope for a tech company and they shouldn't be expected to do that. They should only be in the business of making good tech and then be free to let it be used by anyone for any purpose that the law allows.

This also means that if it's illegal to share information on how to build a bomb without AI, then it should be illegal for Claude to share that information as well. So Anthropic does need to make sure they're not breaking the law themselves, too.


Ah, good, we generally agree.

For sure, Anthropic should NOT have been forced to decide the ethics of deploying their tech.

Nevertheless, they should always be considering the ethics of their own creations and actions, and it seems they are — as soon as they got bullied by a failing regime, they had the right answer: 'no, that is not ethical and we won't allow it with our products'.

The problem is that the law only very roughly captures what is right and just, so many things that are legal are unethical, and at the same time many things that are ethical are illegal. So we can't entirely outsource our personal or corporate ethics to the law.


Excellent news. I was seriously worried they would cave when I saw the earlier news they'd dropped their core safety pledge [0].

It is entirely reasonable to refuse to provide tools to break the law by doing mass surveillance on civilians, and to insist the tool not be used to kill a human automatically, without a human in the loop. Those were unreasonable demands by an unreasonable regime.

[0] https://news.ycombinator.com/item?id=47145963


