This is because of massive unchecked corruption. In the UK this has become a multibillion-pound-per-year industry where connected landlords and agencies get lucrative contracts from the Home Office for housing immigrants in their properties, and then complete supply chains develop around this, with each entity skimming money.

There are billboards advertising offers of guaranteed rents, etc.


> The device can default to booting software signed by the manufacturer but the user should always be able to use a physical key to unlock the device and install his own keys and certificates instead.

This part is not going to happen, because the security services need their backdoors intact. If you supply the user with keys, they might flash the device with a more secure operating system, rendering any surveillance effort fruitless.


If I worked in a European intelligence agency, and considering how official US security policy revolves around bringing about regime change in Europe in support of far-right extremist parties, and how supportive the tech company leadership seems to be of those goals, I would probably think that locking that very real existential threat to their democracies out would be a worthwhile tradeoff.

European governments and security services have their own surveillance and control agendas; most of them already use Palantir to enforce them. It's not like there are any "good" guys against "bad" ones.

Just wait until the next time they ask for your member length and girth or flap size.

That's the Worldcoin Orb 2.0. Stick it in to identify yourself to make a payment.

To deposit a payment.

;)


Bumping dependency versions doesn't guarantee any improved safety, as new versions can introduce security issues (otherwise we wouldn't need to patch old versions that used to be new).

If you replace a dependency that has a known vulnerability with a different dependency that does not, surely that is objectively an improvement in at least that specific respect? Of course we can’t guarantee that it didn’t introduce some other problem as well, but not fixing known problems because of hypothetical unknown problems that might or might not exist doesn’t seem like a great strategy.

I think he's referring to this part of the article:

> Dependencies should be updated according to your development cycle, not the cycle of each of your dependencies. For example you might want to update dependencies all at once when you begin a release development cycle, as opposed to when each dependency completes theirs.

and is arguing in favor of targeted updates.
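
For what it's worth, here is a minimal sketch of what targeted updates could look like in practice, assuming a Python project; the VULNERABLE set is a made-up placeholder, and in reality it would come from a scanner such as pip-audit rather than being hard-coded:

    # Toy sketch of "targeted updates": bump only the outdated packages that
    # are actually on a known-vulnerable list, instead of bumping everything.
    import json
    import subprocess

    VULNERABLE = {"requests", "urllib3"}  # hypothetical example packages

    def outdated_packages():
        out = subprocess.run(
            ["pip", "list", "--outdated", "--format=json"],
            capture_output=True, text=True, check=True,
        ).stdout
        # each entry looks like {"name": ..., "version": ..., "latest_version": ...}
        return json.loads(out)

    def targeted_updates():
        for pkg in outdated_packages():
            if pkg["name"].lower() in VULNERABLE:
                print(f"bump {pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
            else:
                print(f"defer {pkg['name']} to the next release cycle")

    if __name__ == "__main__":
        targeted_updates()

Everything not on the list gets deferred to the start of the next release cycle, which is roughly the cadence the article argues for.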

It might surprise the younger crowd to see the number of Windows Updates you wouldn't have installed on a production machine, back when you made choices at that level. From this perspective Tesla's OTA firmware update scheme seems wildly irresponsible for the car owner.


Maybe. But at least everyone being on the same (new) version makes things simpler, compared to everyone being on different random versions of whatever used to be current when they were written.

Instagram is serving me literal porn when I browse shorts (for instance, women showing their private parts). It's amazing that they are unable, or maybe don't want, to block it.

Facebook is basically full of sexual content spam like that described in the OP article.

It's to the point I'd never open either app when in public.


> It's amazing that they are unable or maybe don't want to block it.

I'm not convinced they care about moderation outside of legal necessity.


This is illegal in the UK, as they have to do an age check for adult content. Also, showing a person porn without consent constitutes some form of sexual assault.

There is nothing smart about current LLMs. They just regurgitate text compressed in their memory based on probability. None of the current LLMs have any actual understanding of what you ask them to do or of what they respond with.

If LLMs just regurgitate compressed text, they'd fail on any novel problem not in their training data. Yet, they routinely solve them, which means whatever's happening between input and output is more than retrieval, and calling it "not understanding" requires you to define understanding in a way that conveniently excludes everything except biological brains.

I somewhat agree with you but I also realise that there are very few "novel" problems in the world. I think it's really just more complex problem spaces is all.

Same relative logic, just more of it/more steps or trials.


Yes, there are some fascinating emergent properties at play, but when they fail it's blatantly obvious that there's no actual intelligence nor understanding. They are very cool and very useful tools; I use them on a daily basis now, and the way I can just paste a vague screenshot with some vague text and they get it and give a useful response blows my mind every time. But it's very clear that it's all just smoke and mirrors: they're not intelligent, and you can't trust them with anything.

When humans fail a task, it’s obvious there is no actual intelligence nor understanding.

Intelligence is not as cool as you think it is.


It can still be cool- but maybe it's just not as rare.

I assure you, intelligence is very cool.

You'd think that, with how often Opus builds two separate code paths without feature parity when you try to vibe-code something complex, people wouldn't regard this whole thing so highly.

> they'd fail on any novel problem not in their training data

Yes, and that's exactly what they do.

No, none of the problems you gave to the LLM while toying around with them are in any way novel.


None of my codebases are in their training data, yet they routinely contribute to them in meaningful ways. They write code that I'm happy with that improves the codebases I work in.

Do you not consider that novel problem solving?


Correct, you are not doing any novel problem solving.

They don't solve novel problems. But if you have such a strong belief, please give us examples.

Depends how precisely you define novel - I don't think LLMs are yet capable of posing and solving interesting problems, but they have been used to address known problems, and in doing so have contributed novel work. Examples include Erdos Problem #728[0] (Terence Tao said it was solved "more or less autonomously" by an LLM), IMO problems (Deepmind, OpenAI and Huang 2025), GPT-5.2 Pro contributing a conjecture in particle physics[1], systems like AlphaEvolve leveraging LLMs + evolutionary algorithms to generate new, faster algorithms for certain problems[2].

[0] https://mathstodon.xyz/@tao/115855840223258103

[1] https://huggingface.co/blog/dlouapre/gpt-single-minus-gluons

[2] https://deepmind.google/blog/alphaevolve-a-gemini-powered-co...


We know that, but that does not make them unuseful. The opposite, in fact: they are extremely useful in the hands of non-idiots. We just happen to have an oversupply of idiots at the moment, which AI is here to eradicate. /Sort of satire.

So you are saying they are like copy: LLMs will copy some training data back to you? Why do we spend so much money training and running them if they "just regurgitate text compressed in their memory based on probability"? Billions of dollars to build a lossy grep.

I think you are confused about LLMs - they take in context, and that context makes them generate new things; for existing things we have cp. By your logic, pianos can't be creative instruments because they just produce the same 88 notes.


I have a gut feeling that a huge portion of the deficiencies we note with AI is just a reflection of the training data. For instance, the wiki/reddit/etc internet is just a soup of human descriptions of the world model, not the actual world model itself. There are gaps or holes in the knowledge because a codified summary of the world captures what is remarkable to us humans, not a 100% faithful, comprehensive description of the world. What is obvious to us humans with lived real-world experience often does not make it into the training data. A simple, demonstrable example is whether one should walk or drive to the car wash.

That's not how they work; pro tip: maybe don't comment until you have a good understanding?

Would you mind rectifying the wrong parts then?

Phrases like "actual understanding", "true intelligence" etc. are not conducive to productive discussion unless you take the trouble to define what you mean by them (which ~nobody ever does). They're highly ambiguous and it's never clear what specific claims they do or don't imply when used by any given person.

But I think this specific claim is clearly wrong, if taken at face value:

> They just regurgitate text compressed in their memory

They're clearly capable of producing novel utterances, so they can't just be doing that. (Unless we're dealing with a very loose definition of "regurgitate", in which case it's probably best to use a different word if we want to understand each other.)


The fact that the outputs are probabilities is not important. What is important is how that output is computed.

You could imagine that it is possible to learn certain algorithms/heuristics that "intelligence" is comprised of, no matter what is being output. Training for optimal compression of tasks and action-taking could lead to intelligence being the best solution.

This is far from a formal argument, but so is the stubborn reiteration of "it's just probabilities" or "it's just compression". Because this "just" thing is getting more and more capable of solving tasks that are surely not in the training data exactly like this.
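
A toy sketch of that point (everything below is made up purely for illustration; neither class is anything like a real LLM): both expose exactly the same probabilities-over-tokens interface, so "it outputs probabilities" on its own can't tell you whether the computation behind it is a memorized lookup or something more general.

    # Toy illustration only: two "models" with the same probabilities-out
    # interface, computed in very different ways.
    import math
    from collections import Counter

    VOCAB = ["the", "cat", "sat", "mat"]

    class LookupModel:
        """Pure regurgitation: probabilities are normalized memorized counts."""
        def __init__(self, corpus):
            self.counts = Counter(corpus)

        def next_token_probs(self, context):
            total = sum(self.counts[w] for w in VOCAB) or 1
            return {w: self.counts[w] / total for w in VOCAB}

    class TinyParametricModel:
        """Probabilities come from a (trivially small) function of the context."""
        def __init__(self, weights):
            self.weights = weights  # stand-in for learned parameters

        def next_token_probs(self, context):
            logits = {w: self.weights.get(w, 0.0) * len(context) for w in VOCAB}
            z = sum(math.exp(v) for v in logits.values())
            return {w: math.exp(v) / z for w, v in logits.items()}

    corpus = ["the", "cat", "sat", "the", "cat"]
    for model in (LookupModel(corpus), TinyParametricModel({"cat": 0.5, "mat": 0.2})):
        print(type(model).__name__, model.next_token_probs(["the"]))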


Huh? Their words are an accurate, if simplified, description of how they work.

The simplification is where it loses granularity. I could describe every human's life as they were born and then they died. That's 100% accurate, but there's just a little something lost by simplifying that much.

Just HI slop. Ask any decent model; it can explain what's wrong with this description.

If something is shit, it doesn't matter that it costs half the price of something okay.

"There is hardly anything in the world that some man cannot make a little worse and sell a little cheaper, and the people who consider price only are this man's lawful prey."

> stuck in loops

I wonder if there is some form of cheating. Many times I've found that after a while Gemini suddenly becomes like a Markov chain, spouting nonsense on repeat, and doesn't react to user input anymore.


Small local models will get into that loop. Fascinating that Gemini, running on bigger hardware and with many teams of people trying to sell it as a product, also runs into that issue.

Gemini 3 Flash is pure rubbish. It can easily get into loop mode and spout information no different from a Markov chain, repeating it over and over.
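
As a toy illustration of that failure mode (a deliberately contrived bigram table, nothing to do with how Gemini actually works): once greedy decoding hits a cycle in the transition table, the output repeats forever, which is exactly the Markov-chain-on-repeat behaviour described above.

    # Toy "loop mode": greedy decoding over a bigram table containing a cycle
    # repeats the same phrase forever.
    TRANSITIONS = {
        "<start>": {"the": 0.9, "a": 0.1},
        "the": {"model": 1.0},
        "model": {"keeps": 1.0},
        "keeps": {"repeating": 1.0},
        "repeating": {"the": 1.0},  # the cycle: repeating -> the -> model -> ...
    }

    def greedy_decode(start="<start>", max_tokens=12):
        tokens, current = [], start
        for _ in range(max_tokens):
            current = max(TRANSITIONS[current], key=TRANSITIONS[current].get)
            tokens.append(current)
        return " ".join(tokens)

    print(greedy_decode())
    # -> the model keeps repeating the model keeps repeating the model keeps repeating

Real decoders lean on sampling temperature and repetition penalties to dampen this, but once the context is dominated by the repeated phrase, even a big model can apparently stay stuck.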

But how will corrupt politicians make money with such reasonable policies?
