Hacker News | BeefySwain's comments

Can someone explain what the hell is going on here?

Do websites want to prevent automated tooling, as indicated by everyone putting everything behind Cloudflare and CAPTCHAs since forever, or do websites want you to be able to automate things? Because I don't see how you can have both.

If I'm using Selenium it's a problem, but if I'm using Claude it's fine??


In a nutshell: Google wants your websites to be more easily used by the agents they are putting in the browser and other products.

They own the user layer and models, and get to decide if your product will be used.

Think search monopoly, except your site doesn't even exist as far as users are concerned, it's only used via an agent, and only if Google allows.

The work of implementing this is on you. Google is building the hooks into the browser for you to do it; that's WebMCP.

It's all opaque; any oopsies/dark patterns will be blamed on the AI. The profits (and future ad revenue charged for sites to show up on the LLM's radar) will be claimed by Google.

The other AI companies are on board with this plan. Any questions?


Knowing Google, there’s a good chance it will turn out like AMP [0]: concerning, but only spotty adoption, and ultimately kind of abandoned/irrelevant.

It’s the Google way.

[0] https://en.wikipedia.org/wiki/Accelerated_Mobile_Pages


    > but only spotty adoption
While I'm glad AMP never got truly widespread adoption, it did get adopted in places that mattered -- notably, major news sites.

The number of times I've had to translate an AMP link that I found online before sending it onwards to friends, in the hopes of reducing the tracking impact, has been huge over the years. Now there are extensions that'll do it, but that hasn't always been the case, and they aren't foolproof either.

I do hope this MCP push fizzles, but I worry that Google could just double down and just expose users to less of the web (indirectly) by still only showing results from MCP-enabled pages. It'd be like burning the Library of Alexandria, but at this point I wouldn't put the tech giants above that.


Hopefully that's what happens, but it seems like compared to AMP there is more of a joint standardisation effort this time which worries me.

AMP lives on, mostly as AMP for Email, used by things like Google Workspace for performing actions within an email body (basically allow-listed JavaScript).

> It’s the Google way.

Don't forget the all-important last step: abruptly killing the product - no matter how popular or praiseworthy it is (or heck: even profitable!) if unnamed Leadership figures say so; vide: killedbygoogle.com


The irony is Google properties are more locked down than ever. When I use a commercial VPN I get ReCAPTCHA’ed half of the time doing every single Google search; and can’t use YouTube in Incognito sometimes, “Sign in to confirm you’re not a bot”.

There's also the newer push against what they're calling "model distillation," where their models get prompted in specific ways to try and extract their behaviour. Coming from a limited background in machine learning broadly, but especially the post-transformer stuff, that doesn't seem like something that could be done productively at any useful scale.

Model distillation is very useful!

Put it like this: Reinforcement Learning from Human Feedback (RLHF) is useful with hundreds of examples, and LLM distillation is basically the same thing.
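At its core, distillation just trains the student on the teacher's soft output distribution rather than hard labels. A toy sketch of that objective in plain Python (everything here is illustrative, not any lab's actual pipeline):

```python
import math

def softmax(logits, temperature=1.0):
    # Convert raw logits to a probability distribution.
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions:
    # the student is trained to match the teacher's full output
    # distribution, not just its argmax label.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, 0.2]
# A student that matches the teacher has zero loss;
# a mismatched one has a larger loss.
assert distillation_loss(teacher, [3.0, 1.0, 0.2]) < 1e-9
assert distillation_loss(teacher, [0.2, 1.0, 3.0]) > 0.1
```

The point is that each teacher response carries a whole distribution of signal, which is why relatively few examples go a long way, much like RLHF.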


That's by design, their own agents running on their hardware in their network will pass every recaptcha on every customer site

What about authentication? Do users have to be on Google SSO to use their WebMCP?

Here is the answer from Gemini:

> Google's Web Model Context Protocol (WebMCP) handles authentication by inheriting the user's existing browser session and security context. This means that an AI agent using WebMCP operates within the same authentication boundaries (session cookies, SSO, etc.) that apply to a human user, without requiring a separate authentication layer for the agent itself.


Here’s what Gemini says about copy-pasting AI answers:

> Avoid "lazy" posting—copying a prompt result and pasting it without any context. If the user wanted a raw AI answer, they likely would have gone to the AI themselves.


Oh ho, this is the succinct and correct evaluation. Buckle up y'all, you're gonna be taken for a ride.

We should definitely feel trepidation at the prospects of any LLM guided browser, in addition to WebMCP (e.g. Claude for Chrome enters the same opaque LLM-controlled/deferred decision process, OpenClaw etc).

Just one example: Prompting the browser to "register example.com" means that Google/Anthropic gets to hustle registrars for SEO-style priority. Using countermeasures like captcha locks you out of the LLM market.

Google's incentive to allow you to shop around via traditional web search is decreased since traditional ads won't be as lucrative (businesses will catch on that blanket targeted ads aren't as effective as a "referral" that directs an LLM to sign-up/purchase/exchange something directly)... expect web search quality to decline, perhaps intentionally.

The only way to combat this, as far as I can conceptualize, is with open models, which are not yet as good as private ones, in no small part due to the extraordinary investment subsidization. We can hope for the bubble to pop, but plan for a deader Internet.

Meanwhile, trust online, at large, begins to evaporate as nobody can tell what is an LLM vs a human-conducted browser. The Internet at large is entering some very dark waters.


The Google hate virus is thick here. It seems uncontroversial that users will likely want to use AI to find info for them and do things for them. So either Google provides users with what they want or they go out of business to some other company that provides what users want.

https://www.perplexity.ai/comet

https://chatgpt.com/atlas/

https://arc.net/max

That is not in any way to suggest companies are ok to do bad things. I don't see anything bad here. I just see the inevitable. People are going to want to ask some AI for whatever they used to get from the internet. Many are already doing this. Whoever enables that for users best will get the users.


> It seems uncontroversial that users will likely want to use AI to find info for them and do things for them

Lots of weasel words in there. You're doing a lot of work with "seems", "uncontroversial" and "likely". Power users and tech professionals probably want this, or their bosses really want it and they fall in line. But a large portion of 'normal' users still struggle with basic search, distrust AI, or just don't trust delegating tasks to opaque systems they can't inspect. "Users" is not a monolith.


It's the opposite. Only HNers distrust AI. The "normies" love it and are far less skeptical. Few of them recognize when it's messing up.

https://www.pewresearch.org/global/2025/10/15/how-people-aro...

Patently false; in every country surveyed, only a slim minority is more optimistic than sceptical.


> The "normies" love it and are far less skeptical. Few of them recognize when it's messing up.

This is not a discussion about accuracy. The market disagrees with you. Microsoft tried to shove AI into their user base with mixed results.



> Whoever enables that for users best will get the users.

And if it's anything like Uber, that'll be when the enshittification really kicks into gear.


I'm old enough to remember discussions around the meaning of `User-Agent` and why it was important that we include it in HTTP headers. Back before it was locked to `Chromium (Gecko; Mozilla 4.0/NetScape; 147.01 ...)`. We talked about a magical future where your PDA, car, or autonomous toaster could be browsing the web on your behalf, and consuming (or not consuming) the delivered HTML as necessary. Back when we named it "user agent" on purpose. AI tooling can finally realize this for the Web, but it's a shame that so many companies who built their empires on the shoulders of those visionaries think the only valid way to browse is with a human-eyeball-to-server chain of trust.

Me too, but it died when ads became the currency of the web. If the reason the site exists is to serve ads, they're not going to let you use a user agent that doesn't display the ads.

> If the reason the site exists is to use ads, they’re not going to let you use an user agent that doesn’t display the ads.

They've been giving it the old college try for the better part of two decades and the only website I've had to train myself not to visit is Twitch, whose ads have invaded my sightline one time too many, and I conceded that particular adblocking battle. I don't get the sense that it's high on the priority list for most sites out there (knock on wood).


People who block ads are a minority. Sites that serve heavy content like video would care if someone wastes their resources but blocks ads, but why would a site that serves a few KBs of text spend the resources on blocking such users or making the ads beat the ad blocker in a tiresome cat and mouse game?

Those users could even share or recommend the site to someone else who doesn't use ad blockers, so it actually makes sense to not try to battle ad blockers if you want to make your site more popular.

This makes sense for sites that rely on network effects, like forums or classified ad sites and so on. Unless they have a near monopoly or some really valuable content, they would benefit financially if they let people block their ads.

I can't back that up with data or anything, but it makes sense to me.


Many "news sites" are pretty hostile to me as someone with an adblocker. So I add them to my deny list of sites to never visit or hear from.

I once made the mistake of adding the site to the deny list of uBlock... The ads were so annoying I couldn't read the article anyway. So, never again.

Anyway, you're right in that I'll never share articles from those sites to people who don't use ad blockers.


Same, I just don't use Twitch when possible. Most streamers rehost their VODs on Youtube which has a better player anyway.

An ad blocker is only a few clicks away, and a surprisingly large number of users run one. So they might not like it, but they're already letting plenty of users use an agent that doesn't display the ads.

Not only ads. The primary anti-scraping use today is obfuscation, either as an anti-competitive practice or to hide unlawful behavior like IP infringement.

Just like back then, we were naive about folks abusing these things to the point where everyone needed to block them into oblivion. I think we are relearning those lessons 30 years later.

> AI tooling can finally realize this for the Web

There was a concept named Web 3.0 a while ago, aka the 'Semantic Web'. It wasn't the crypto/blockchain scam that we call Web3 today. The idea was to create a web of machine readable data based on shared ontologies. That would have effectively turned the web into a giant database of sorts, that the 'agents' could browse autonomously and derive conclusions from. This is sort of like how we browse the web to do research on any topic.

Since the data was already in a structured form in Web 3.0 instead of natural language, the agent would have been nowhere near the energy hogs that LLMs are today. Even the final conversion of conclusions into natural language would have been much more energy-efficient than the LLMs, since the conclusions were also structured. Combine that with the sorts of technology we have today, even a mediocre AI (by today's standards) would have performed splendidly.

Opponents called it impractical. But there already were smaller systems around from various scientific fields, operating on the same principle. And the proponents had already made a lot of headway. It was going to revolutionize information sharing. But what I think ultimately doomed it is the same reason you mentioned. The powers that be didn't want smarter people. They wanted people who earned them money. That means those who spend their attention on dead scrolling feeds, trash ads and slop.

> but it's a shame that so many companies who built their empires on the shoulders of those visionaries think the only valid way to browse is with a human-eyeball-to-server chain of trust.

Yes, this! But only when your eyeball and attention earns them profit. Otherwise they are perfectly content with operating behind your backs and locking you out of decisions about how you want to operate the devices you paid for in full. This is why we can't have good things. No matter which way you look, the ruins of all the dreams lead to the same culprit - the insatiable greed of a minority. That makes me question exactly how much wealth one needs to live comfortably or even lavishly till their death.


They wanna let you use the service the way they want.

An e-commerce site? Wanna automate buying their stuff - probably something they wanna allow in controlled forms.

Wanna scrape the site to compare prices? Maybe less so.


A brave new world for fraud and returns.

Also I just recently noticed Chrome now has a Klarna/BNPL thing as a built in payments option that I never asked for...


Yeah it's a payment method they added to Google Pay (Google Wallet? I don't know anymore). You can turn it off in autofill settings.

> Do websites want to prevent automated tooling, as indicated by everyone putting everything behind Cloudflare and CAPTCHAs since forever,

The proposal (https://docs.google.com/document/d/1rtU1fRPS0bMqd9abMG_hc6K9...) draws the line at headless automation. It requires a visible browsing context.

> Since tool calls are handled in JavaScript, a browsing context (i.e. a browser tab or a webview) must be opened. There is no support for agents or assistive tools to call tools "headlessly," meaning without visible browser UI.


That really just increases the processing power required to automate it. VM running Chrome to a virtual frame buffer, point agent at frame buffer, automate session. It's clunky, but probably not that much more memory intensive than current browser automation. You could probably ditch the frame buffer as well, except for giving the browser something to write out to. It can probably be /dev/null.

>Can someone explain what the hell is going on here?

Someone on the Chromium team is launching rapidly for a promotion.


Not fine if you use Claude. But it's fine if you are Google Flights and the user uses Gemini. The paid version of course.

I feel like this is a way to ultimately limit the ability to scrape but also the ability to use your own AI agent to take actions across the internet for you. Like how Amazon doesn’t let your agent to shop their site for you, but they’ll happily scrape every competitor’s website to enforce their anti competitive price fixing scheme. They want to allow and deny access on their terms.

WebMCP will become another channel controlled by big tech and it’ll come with controls. First they’ll lure people to use this method for the situations they want to allow, and then they’ll block everything else.


i’m seeing this at my corporate software job now. that service you used to need security and product approval for just to read its Swagger doc now has an MCP server you can install with 2 clicks.

Sometimes, it gets added there without your consent.

different threat model. cloudflare blocks automation that pretends to be human -- scraping, fake clicks, account stuffing. webmcp is a site explicitly publishing 'here are the actions i sanction.' you can block selenium on login and expose a webmcp flight search endpoint at the same time. one's unauthorized access, the other's a published api.

as a website operator, i want my website to not experience downtime and unreliability because of usage rates that exceed the rate at which humans load pages, and i want to not be defrauded.

if you want to access my website using automated tools, that's fine. but if there's a certain automated tool that is consistently used to either break the site or attempt to defraud me, i'm going to do my best to block that tool. and sometimes that means blocking other, similar tools.

if the webMCP client in chrome behaves in a reasonable way that prevents abuse, then i don't see a problem with it. if scammers discover they can use it to scam, then websites will block it too.


Oh, that's an easy one. LLMs have made people lose their god damned minds. It makes sense when you think about it as breaking a few eggs to get to the promised land omelette of laying off the development staff.

I think I have one explanation why for a website, exposing an MCP servers AND having captchas can make sense.

- an agent loading the real page is a waste for the server, because the data sent is a few megabytes, and you don't have the usual returns of a user seeing your ads

- BUT API requests (or here, MCP) are much lighter, a few dozen kB, so that makes the ROI positive again

At least that's my view : please tell me, anyone, if that reason doesn't make sense!
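For what it's worth, the back-of-envelope numbers do point in that direction. A rough sketch with made-up but plausible payload sizes:

```python
# Illustrative numbers only: compare serving a full page render
# to an agent vs serving a structured MCP/API response.
FULL_PAGE_BYTES = 3 * 1024 * 1024   # ~3 MB: HTML + JS + CSS + images
MCP_RESPONSE_BYTES = 30 * 1024      # ~30 kB of structured JSON

def monthly_transfer_gb(requests_per_month, bytes_per_request):
    # Total egress in gigabytes for a given request volume.
    return requests_per_month * bytes_per_request / 1024**3

agent_requests = 1_000_000
page_gb = monthly_transfer_gb(agent_requests, FULL_PAGE_BYTES)
mcp_gb = monthly_transfer_gb(agent_requests, MCP_RESPONSE_BYTES)
print(f"full pages: {page_gb:.0f} GB, MCP: {mcp_gb:.1f} GB")
```

With these assumptions, a million agent requests cost roughly 100x more egress as full page loads than as MCP responses, before even counting the lost ad impressions.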


I can deeply, deeply relate. X and Bluesky are both going nuts with ai and ai scams, but _both_ of them banned an advertising account because we were... using a bot to automate behavior because their APIs are only a subset of functionality.

Their vision is a world where they use all the automation regardless of safety or law, and we have to jump through extra hoops and engage in manual processes with AI that literally doesn't have the tool access to do what we need and will not contact a human.


These are obviously different people you're talking about here

Obviously if you wanted people to book flights with a bot then you could have provided a public API for that long ago.

I think potentially the subtlety here is a sort of cooperative mode - the computer filling out a lot of the forms and doing the grunt work, but it's important that the human is still in the loop - so they need to be able to share a UI with the agent.

Hence an agent-friendly web page, rather than just an API.


Remember when many websites had quite open public APIs? Over time this became less common, and existing things like FB added more limitations.

> Do websites want to prevent automated tooling, as indicated by everyone putting everything behind Cloudfare and CAPTCHAs since forever,

Not if they don't want their rankings to tank. Now you'll need to make your website machine friendly while the lords of walled gardens will relentlessly block any sort of 'rogue' automated agent from accessing their services.


I was also thinking about more or less the same thing with APIs and MCPs. The companies that didn't have any public APIs are now exposing MCPs. That, to me, is quite interesting. Maybe it is the FOMO effect.

Also, as someone who has tried to build tools that automate finding flights, The existing players in the space have made it nearly impossible to do. But now Google is just going to open the door for it?

Both. I imagine there's a tell when using this (e.g. a UA or other header). Sites can block unauthenticated sessions that use it, but allow it when they know who the user is.

I can't see walled garden platforms or any website that monetizes based on ads offering WebMCP. Agents using their site represent humans who aren't.

It’s weirder than that. There is a surge of companies working on how to provide automated access to things like payments, email, signup flows, etc to *Claw.

WebMCP should be a really easy way to add some handy automation functionality to your website. This is probably most useful for internal applications.

And what site is going to open their API up to everyone? Documented endpoints already exist; why make it more complicated?

They would rather you use an official API, follow the funnel they set up for you, and make purchases one way or another.

In early experiments with the Claude Chrome extension Google sites detected Claude and blocked it too. Shrug

Is the website Stripe or NYTimes?

Why should a browser care about how websites want you to use them?

In my opinion sites that want agent access should expose server-side MCP, server owns the tools, no browser middleman. Already works today.

Sites that don’t want it will keep blocking. WebMCP doesn’t change that.

Your point about selenium is absolutely right. WebMCP is an unnecessary standard. Same developer effort as server-side MCP but routed through the browser, creating a copy that drifts from the actual UI. For the long tail that won’t build any agent interface, the browser should just get smarter at reading what’s already there.

Wrote about it here: https://open.substack.com/pub/manveerc/p/webmcp-false-econom...


So... an API?

Most sites don't want to expose APIs, or don't care enough to handle the setup and maintenance of one.


Are you asking if Agents should use API?

(genuinely asking) why not SQLite by default?


We were not able to get good enough performance compared to LMDB. We will work on this more though, there are probably many ways performance can be increased by reducing load on the KV store.


Did you try WITHOUT ROWID? Your sqlite implementation[1] uses a BLOB primary key. In SQLite, this means each operation requires 2 b-tree traversals: The BLOB->rowid tree and the rowid->data tree.

If you use WITHOUT ROWID, you traverse only the BLOB->data tree.

Looking up lexicographically similar keys gets a huge performance boost since sqlite can scan a B-Tree node and the data is contiguous. Your current implementation is chasing pointers to random locations in a different b-tree.

I'm not sure exactly whether on disk size would get smaller or larger. It probably depends on the key size and value size compared to the 64 bit rowids. This is probably a well studied question you could find the answer to.

[1]: https://git.deuxfleurs.fr/Deuxfleurs/garage/src/commit/4efc8...
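For anyone wanting to try this, the change is a one-keyword schema tweak and is transparent to readers and writers. A quick sketch (table and column names here are illustrative, not Garage's actual schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Default rowid table: the BLOB key lives in an implicit unique
# index (BLOB -> rowid), and the row data lives in a second
# b-tree keyed by rowid, so each lookup traverses two trees.
db.execute("CREATE TABLE kv_rowid (k BLOB PRIMARY KEY, v BLOB)")

# WITHOUT ROWID: a single b-tree keyed directly by the BLOB, so a
# lookup is one traversal and neighbouring keys are contiguous.
db.execute(
    "CREATE TABLE kv_clustered (k BLOB PRIMARY KEY, v BLOB) WITHOUT ROWID"
)

# Behaviour is identical from the application's point of view.
for table in ("kv_rowid", "kv_clustered"):
    db.execute(f"INSERT INTO {table} VALUES (?, ?)", (b"key1", b"value1"))
    (v,) = db.execute(
        f"SELECT v FROM {table} WHERE k = ?", (b"key1",)
    ).fetchone()
    assert v == b"value1"
```

The SQLite docs do caution that WITHOUT ROWID works best when rows are small relative to a page, which matches the sibling comment about tables holding large CRDT values.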


Very interesting, thank you. It would probably make sense for most tables but not all of them because some are holding large CRDT values.


Other than knowing this about SQLite beforehand, is there any way one could discover that this is happening through tracing?
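One way to observe it without reading SQLite internals: `EXPLAIN QUERY PLAN` reports the rowid variant going through the implicit unique index, while the WITHOUT ROWID variant searches its primary-key b-tree directly. A sketch (table names illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t_rowid (k BLOB PRIMARY KEY, v BLOB)")
db.execute("CREATE TABLE t_norow (k BLOB PRIMARY KEY, v BLOB) WITHOUT ROWID")

def plan(table):
    # Last column of each EXPLAIN QUERY PLAN row is the detail text.
    rows = db.execute(
        f"EXPLAIN QUERY PLAN SELECT v FROM {table} WHERE k = x'ab'"
    ).fetchall()
    return " ".join(r[-1] for r in rows)

# Rowid table: lookup goes through the implicit unique index
# (sqlite_autoindex_...), then fetches the row from the rowid tree.
print(plan("t_rowid"))

# WITHOUT ROWID: lookup searches the primary-key b-tree directly.
print(plan("t_norow"))
```

For deeper tracing, `EXPLAIN` (without `QUERY PLAN`) dumps the bytecode, where the extra seek shows up as separate cursor operations on the index and the table.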


I learned that Turso apparently have plans for a rewrite of libsql [0] in Rust, and create a more 'hackable' SQLite alternative altogether. It was apparently discussed in this Developer Voices [1] video, which I haven't yet watched.

[0] https://github.com/tursodatabase/libsql

[1] https://www.youtube.com/watch?v=1JHOY0zqNBY


Keep in mind that write safety comes with performance penalties. You can turn off write protections and many databases will be super fast, but easily corrupt.


Could you use something like Fly's Corrosion to shard and distribute the SQLite data? It uses a CRDT reconciliation, which is familiar for Garage.


Garage already shards data by itself if you add more nodes, and it is indeed a viable path to increasing throughput.


Pendragon?


Consider that this isn't just a random AI slopped assortment of 9,000 tests, but instead is a robust suite of tests that cover 100% of the HTML5 spec.

Does this guarantee that it functions completely with no errors whatsoever? Certainly not. You need formal verification for that. I don't think that contradicts what Simon was advocating for though in this post.


I think it would be interesting if professional engineering becomes more like producing formally correct documents for the AI to implement.


We have these tools that we use to write formally correct documents.

They're called programing languages, and a deterministic algorithm translates them to machine code.

Are we sure English and a probabilistic algorithm is any better at this?


I actually hate AI in my core, to the point that if it gets too much more advanced I'll likely be in existential crisis, so don't attack me on those grounds. Given it exists, I'm going to find what's good about it though. I do think the problem of AI existing has to be confronted. Maybe one solution is what the human does is produce specs like the HTML 5 one, and what the AI does is implement it in software.


The equivalency here is not 9 billion versus 90 billion, it's 9 billion versus 90 million, and the question is how does the decline look? Does it look like the standard of living for everyone increasing so high that the replacement rate is in the single digit percentage range, or does it look like some version of Elysium where millions have immense wealth and billions have nothing and die off?


Do you mean gait recognition?


Lol yes, bad auto complete :-)


What's wrong with Docker for this?


I keep on hearing that Docker isn't designed as a security boundary for this kind of thing.

Firecracker is meant to be secure but it's a lot harder to work with.


Hey Simon, given it's you ... are you concerned about LLMs attempting to escape from within the confines of a Docker container or is this more about mitigating things like supply chain attacks?


I'm concerned about prompt injection attacks telling the LLM how to escape the Docker container.

You can almost think of a prompt injection attack as a supply chain attack - but regular supply chain attacks are a concern too, what if an LLM installs a new version of an NPM package that turns out to have been deliberately infected with malware that can escape a container?


When you use Docker you already have full control over the networking layer, since you can bind its networking to another container that acts as a proxy/filter. How does WASM offer that?

With reverse proxy you can log requests, or filter them if needed, restrict the allowed domains, do packet inspection if you want to go crazy mode.

And if an actor is able to tailor-fit a prompt to escape Docker, I think you have bigger issues in your supply chain.

I feel this WASM push is a bad solution. What does it bring that a VM or Docker can't do?

And escaping a Docker container is not that simple; it requires a lot of heavy lifting and is not always possible.
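To make the proxy/filter point concrete, the interesting piece is the per-request allow/deny decision such a filtering proxy makes. A minimal sketch of that check (domain names are placeholders, not a recommendation):

```python
# Egress allowlist check a filtering proxy container might apply
# before forwarding a request from the sandboxed container.
ALLOWED_DOMAINS = {"pypi.org", "files.pythonhosted.org", "github.com"}

def host_allowed(host, allowed=ALLOWED_DOMAINS):
    # Accept an exact match or any subdomain of an allowed domain.
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in allowed)

assert host_allowed("pypi.org")
assert host_allowed("api.github.com")
assert not host_allowed("evil.example.com")
assert not host_allowed("notgithub.com")  # suffix tricks are rejected
```

In practice this sits behind a real proxy (and the sandboxed container's network is bound to it), so denied hosts can also be logged for inspection.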


Aside from my worries about container escape, my main problem with Docker is the overhead of setting it up.

I want to build software that regular users can install on their own machines. Telling them they have to install Docker first is a huge piece of friction that I would rather avoid!

The lack of network support for WASM fits my needs very well. I don't want users running untrusted code which participates in DDoS attacks, for example.


You have the same lack of network support with cgroups containers if you configure them properly. It isn't as if the container is connected and traffic is filtered out; it's as though it's disconnected. You can configure it so that it has network support but traffic is filtered with iptables, but that does seem more dangerous, though in practice that isn't where the escapes are coming from. A network namespace can be left empty, without network interfaces, and a process made to use that empty namespace. That way there isn't any traffic flowing from an interface to be checked against iptables rules.


Escaping a container is apparently much easier than escaping a VM.


I think that threat is generally overblown in these discussions. Yes, container escape is less difficult than VM escape, but it still requires major kernel 0day to do; it is by no means easy to accomplish. Doubly so if you have some decent hygiene and don't run anything as root or anything else dumb.

When was the last time we have heard container escape actually happening?


Just because you haven't heard of it doesn't mean the risk isn't real.

It's probably better to make some kind of risk assessment and decide whether you're willing to accept this risk for your users / business. And what you can do to mitigate this risk. The truth is the risk is always there and gets smaller as you add several isolation mechanisms to make it insignificant.

I think you meant “container escape is not as difficult as VM escape.” A malicious workload doesn’t need to be root inside the container; the attack surface is the shared Linux kernel.

Not allowing root in a container might mitigate a container getting root access outside of a namespace. But if an escape succeeds, the attacker could leverage yet another privilege escalation mechanism to go from non-root to root.


To quote one of HN's resident infosec experts: Shared-kernel container escapes are found so often they're not even all that memorable.

More here: https://news.ycombinator.com/item?id=32319067


apparently...

Like it's also possible in a VM.

What about running unprivileged containers? You really need to open some doors to make escape easier!


Better not rely on unprivileged containers to save you. The problem is:

Breaking out of a VM requires a hypervisor vulnerability, which are rare.

Breaking out of a shared-kernel container requires a kernel syscall vulnerability, which are common. The syscall attack surface is huge, and much of it is exploitable even by unprivileged processes.

I posted this thread elsewhere here, but for more info: https://news.ycombinator.com/item?id=32319067


Is Podman unescapable compared to Docker?


They both use the same fundamental isolation mechanisms, so no.


They both can be highly unescapable. The podman community is smaller but it's more focused on solving technical problems than docker is at this point, which is trying to increase subscription revenue. I have gotten a configuration for running something in isolation that I'm happy with in podman, and while I think I could do exactly the same thing in Docker, it seems simpler in podman to me.


Apologies for repeating myself all over this part of the thread, but the vulnerabilities here are something that Podman and Docker can't really do anything about as long as they're sharing a kernel between containers.

The vulnerability is in kernel syscalls. More info here: https://news.ycombinator.com/item?id=32319067

If you're going to make containers hard to escape, you have to host them under a hypervisor that keeps them apart. Firecracker was invented for this. If Docker could be made unescapable on its own, AWS wouldn't need to run their container workloads under Firecracker.


This same, not especially informative content is being linked to again and again in this thread. If container escapes are so common, why has nobody linked to any of them rather than a comment saying "There are lots" from 3 years ago?


I did apologize, didn't I? :-)

Perspective is everything, I guess. You look at that three year old comment and think it's not particularly informative. I look at that comment and see an experienced infosec pro at Fly.io, who runs billions of container workloads and doesn't trust the cgroups+namespaces security boundary enough so goes to the trouble of running Firecracker instead. (There are other reasons they landed there, but the security angle's part of it.)

Anyway if you want some links, here are a few. If you want more, I'm sure you can find 'em.

CVE-2022-0492: https://unit42.paloaltonetworks.com/cve-2022-0492-cgroups

CVE-2022-0847: https://www.datadoghq.com/blog/engineering/dirty-pipe-contai...

CVE-2023-2640: https://www.crowdstrike.com/en-us/blog/crowdstrike-discovers...

CVE-2024-21626: https://nvd.nist.gov/vuln/detail/cve-2024-21626

Some are covered off by good container deployment hygiene and reducing privilege, but from my POV it looks like the container devs are plugging their fingers in a barrel that keeps springing new leaks.

(To be fair, modern Docker's a lot better than it used to be. If you run your container unprivileged and don't give it extra capabilities and don't change syscall filters or MAC policies, you've closed off quite a bit of the attack surface, though far from all of it.)

But keep in mind that shared-kernel containers are only as secure as the kernel, and today's secure kernel syscall can turn insecure tomorrow as the kernel evolves. There are other solutions to that (look into gVisor and ask yourself why Google went to the trouble to make it -- and the answer is not "because Docker's security mechanisms are good enough"), but if you want peace of mind I believe it's better to sidestep the whole issue by using a hypervisor that's smaller and much more auditable than a whole Linux kernel shared across many containers.


I mean, Docker runs with root privileges for the most part. Yes, I know Docker can run rootless too, but Podman does it out of the box.

So if your Docker container is vulnerable and something can somehow break out of it, then with default rootful Docker you might get root privileges, whereas with default Podman it runs as a user executable and the attacker might need another zero day or something to get root, y'know?


Docker would be hacky and cumbersome, especially when compared to anything assembly-like.


Campfire is definitely not FOSS: https://once.com/license


Interesting, because the repo only lists an MIT license, with no mention of those requirements. IANAL, but those license terms don't seem to be anywhere in the software repository.

https://github.com/basecamp/once-campfire


That's an outdated license web page dated 2024, see "Copyright © 2024" at the bottom.

The code was made free in 2025 per X post dated Sep 12, 2025 by Jason Fried [0], screenshot available [1].

A quote from Fried's tweet:

  Campfire...it's now 100% free...and open source.
Here's a quote from https://github.com/basecamp/once-campfire/blob/main/MIT-LICE...

  Permission is hereby granted, free of charge, to any person obtaining
  a copy of this software and associated documentation files (the
  "Software"), to deal in the Software without restriction...
[0] https://x.com/jasonfried/status/1966559597117964560

[1] https://files.catbox.moe/98t9vx.png


No it's not


I'm curious what tooling you are using to accomplish this?


I used Cline+Claude 3.7 Sonnet for the initial draft of this LLVM PR. There's a lot of handholding and the final version was much different than the original.

https://github.com/llvm/llvm-project/pull/130458

Right now I'm using Roo Code and Claude 4.0. Roo Code looks cooler and draws diagrams but I don't know if it's better.

