Hacker News | bdbdbdb's comments

We always talk about what these powerful people "have done", as if it's all over. Surely Epstein's death did not bring about the end of billionaire sex trafficking? Someone stepped in. These guys are still raping people on private planes and private islands.

But why are we focusing on the raping, and not on what the American government is doing that has no clear rational motive without “Israel has captured the government” and a very clear rational motive with “Israel has captured the government”?

If the American government continues to perform actions that are blatantly against the interests of America and Americans, the impact of that on Americans is going to be (and may already be) massively worse than the person-to-person crimes we are focusing on.

Does it just feel so bad thinking about it that a lot of people have a hard time even going there mentally? I really don’t get it.


Why did he start the war?

Well, I have no idea. I'm just guessing it's not a reason I'd like.

I generally only attempt to scrutinize government action, not the government's reasons for action. Random citizens are at such an information disadvantage that I think it would be impossible to have an informed opinion, as an outsider, on the reasoning.

It could be as simple as "Iran kept trying to assassinate me so I'm going to assassinate them". Maybe he was pressured by Israel, I really have no idea.


The EU leadership I'm most proud of at this moment is our collective unwillingness to back up the US over the Iran war. I just wish we'd stood up to them sooner.

"Oh sure the EU is great now, but wait a few years" fox news, probably

I absolutely love this, it's beautiful. Kudos for the truly original ideas here in the UI. I love to see radial design patterns - everything else is boring grids and rows and columns.

This is the kind of innovation I love to see. The big AI companies' days are numbered if we can get the same quality in-house.

I guess when it can't be tripped up by simple things like multiplying numbers, counting to 100 sequentially or counting letters in a string without writing a python program, then I might believe it.

Also, no matter how many math problems it solves, it still gets lost in a codebase.


LLMs are bad at arithmetic and counting by design. It's an intentional tradeoff that makes them better at language and reasoning tasks.

If anybody really wanted a model that could multiply and count letters in words, they could just train one with a tokenizer and training data suited to those tasks. The model would then be able to count letters, but it would be bad at things like translation and programming - the stuff people actually use LLMs for. So people train with a tokenizer and training data suited to language tasks, and hence LLMs are good at language and bad at arithmetic.
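A minimal sketch of why subword tokenization hides letter counts. The vocabulary and the greedy longest-match segmentation below are hypothetical toys standing in for a learned BPE tokenizer, not any real model's implementation:

```python
# Hypothetical subword vocabulary (real tokenizers learn tens of
# thousands of these pieces from data via byte-pair encoding).
vocab = {"str": 0, "aw": 1, "berry": 2}

def tokenize(word, vocab):
    """Greedy longest-match segmentation, a toy stand-in for BPE."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab:
                tokens.append(vocab[piece])
                i = j
                break
        else:
            raise ValueError("no token covers: " + word[i:])
    return tokens

# The model sees opaque IDs, not characters:
print(tokenize("strawberry", vocab))  # [0, 1, 2]

# The fact "strawberry has three r's" is smeared across three IDs,
# which is why letter counting is awkward for a subword model, while
# a one-liner in code (the "write a python program" workaround) is trivial:
print("strawberry".count("r"))  # 3
```

The point isn't this particular segmentation; it's that character-level facts are not directly represented in the model's input at all.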


Arguments like "but AI cannot reliably multiply numbers" fundamentally misunderstand how AI works. AI cannot do basic math not because AI is stupid, but because basic math is an inherently difficult task for otherwise smart AI. Lots of human adults can do complex abstract thinking but when you ask them to count it's "one... two... three... five... wait I got lost".

> fundamentally misunderstand how AI works

Who does fundamentally understand how LLMs work? There are many claims flying around these days, all backed by some of the largest investments humans have ever collectively made. Lots of money stands to be lost because of fundamental misunderstandings.

Personally, I find that AI influencers conveniently brush away any evidence about how LLMs fundamentally work (like the inability to perform basic arithmetic), treating it as something that should be ignored in favor of results like TFA.

Do LLMs have utility? Undoubtedly. But it’s a giant red flag for me that their fundamental limitations, of which there are many, are verboten to be spoken about.


You're not doing yourself a favor when you point out "but they can't do arithmetic!" as if anyone says otherwise. Yes, we all know they can't do arithmetic, and that's just how they work.

I feel like I'm saying "this hammer is so cool, it's made driving nails a breeze" and people go "but it can't screw screws in! Why won't anyone talk about that! Hammers really aren't all they're cracked up to be".


Maybe because society has invested $trillions into this hammer and influencers are trying to convince CEOs to fire everyone and buy a bunch of hammers instead.

My comment even said “LLMs have utility”. I gave an inch, and now the mile must be taken.


Saying that the fundamental limitations are things like counting the number of rs in strawberry is boring, though. That's how tokens work and it's trivial to work around.

Talking about how they find it hard to say they aren't sure of something is a much more interesting limitation to talk about, for example.


> Talking about how they find it hard to say they aren't sure of something is a much more interesting limitation to talk about, for example.

Sure, thank you for steelmanning my argument. I didn’t think I needed to actually spell out all of the fundamental limitations of LLMs in this specific thread. They are spoken at length across the web, but are often met with pushback, which was my entire point.

Here’s another one: LLMs do not have a memory property. Shut off the power and turn it back on and you lose all context. Any “memory” feature implemented by companies that sell LLM wrappers is a hack on top of how LLMs work, like seeding a context window before letting the user interact with the LLM.
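The "seeding a context window" hack can be sketched like this. Everything here is hypothetical: `complete` is a stand-in for any stateless LLM API call, not a real library function, and the wrapper's "memory" is nothing but re-sending the transcript each turn:

```python
# Accumulated transcript -- this list IS the "memory" feature.
history = []

def complete(prompt):
    """Stand-in for a stateless model call; it forgets everything
    the moment it returns. Here it just echoes the context size."""
    return f"(reply given {len(prompt)} chars of context)"

def chat(user_message):
    """Wrapper that fakes memory by replaying the whole conversation."""
    history.append("User: " + user_message)
    prompt = "\n".join(history)   # entire transcript re-sent every turn
    reply = complete(prompt)
    history.append("Assistant: " + reply)
    return reply

chat("hello")
chat("what did I just say?")      # only "remembered" because it was re-sent
```

Every turn costs more than the last, and once the transcript outgrows the context window something has to be truncated or summarized away, which is exactly the fragility being discussed.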


But that's also like saying "humans don't have a memory property, any 'memory' is in the hippocampus". It's not useful to say that "an LLM you don't bother to keep training has no memory". Of course it doesn't, you removed its ability to form new memories!

So why then do we stop training LLMs and keep them stored at a specific state? Is it perhaps because the results become terrible and LLMs have a delicate optimal state for general use? This sounds like an even worse case for a model of intelligence.

Nope, it's not that, but it's nice of you to offer a straw man. Makes the argument flow better.

Not entirely a straw man. What is the purpose of storing and retrieving LLMs at a fixed state if not to guarantee a specific performance? Wouldn’t a strong model of intelligence be capable of, to extend your analogy, running without having its hippocampus lobotomized?

Given the precariousness of managing LLM context windows, I don’t think it’s particularly unfair to assume that LLMs that learn without limit become very unstable.

To steelman, if it’s possible, it may be prohibitively expensive. But somehow I doubt it’s possible.


It is, indeed, prohibitively expensive. But it's not impossible. The proof is in the fact that you can fine-tune LLMs.

Because no one owns a $300 billion hammer that literally runs on fancy calculators.

I read this back in 2009, happy to see it's still on the internet.

Obviously with today's electricity prices it would cost more than $5 per year, but even doubled it is extremely cheap.

My issue with the concept is space and convenience. My upright fridge is about this size, but it would take up too much space in my kitchen on its side. Worse again, you can't keep anything on top, because that's where the door is.

But more crucially, with a chest freezer you can only easily access the stuff on top. If something is a few levels down you have to move a lot of stuff to access it. I wish they came with shelves that cantilevered out like a toolbox, or a vertical lid on rails that lifted like a drawer.


There's an easy solution for the chest freezer: I've been using IKEA recycling bins; the plastic is cold-proof. See: https://youtu.be/ydbsVS5rbSM?is=FVhiLHx4Uh94nb0k

You could just have a vertical fridge with "tub" drawers which individually contain the cold air.

And most fridges do, I have a veg "crisper" and a meat drawer, and two boxes for cheese and other things.

I've read that if you can just minimise the amount of airflow in your fridge, even just by filling it with bottled water, it's more efficient, as there's less air to fall out when you open the door. Boxes are essentially this.


Why not just send text replies? You can already do that


I'm an atheist, and I don't believe in the antichrist, but it's hard not to see how closely he fits the bill. Only lies come out when he opens his mouth. Even if he says something true, he basically qualifies it with another untruth, and people lap it up. Even the media, even the cynical media, seem to report the things he says at face value. It boggles the mind sometimes.

Last week someone challenged him on his claim that Iran had Tomahawks and that they bombed their own school. It's the first time I've heard anyone directly challenge him. His response was "I don't know enough about it" - a classic BS backpedal, like any kid caught in a lie. The next day CNN's stories were "Trump doesn't know what's happening in the war, others are running it and he's unaware", completely missing the obvious truth: he lied to misdirect people about the school bombing, one person challenged it, and he lied again to backpedal.

Some days it's like he has a supernatural ability to get away with lies


I'm an atheist too, but I still see religions as having embedded wisdom - both descriptive of how past societies failed, and prescriptive in that they are parts of the foundations of our present societies.

(Of course they also have a lot of details that are easy to latch onto as mere justifications for doing immoral things. And as moral people move on from traditional religion, the share of people merely using it as a crutch for immorality grows.)

The archetype of a leader who engages in abjectly evil behavior while gathering ever more power and followers under a charm spell certainly rings true. But the dynamic is probably more like an individual being particularly adept at releasing the floodgates for our own worst impulses, rather than some supernatural power.

