Hacker News: thevinter's comments

I lived for months with a 4GB roaming plan. Granted, I wasn't using it at home since I had a WiFi connection, but I rarely came close to using all my data unless I was watching YT videos while traveling or something.

I share your sentiment and I agree we should be more mindful of people with metered/slow connections, but the last statement feels blown out of proportion.


I used to be able to get away with this by downloading music, podcasts and maps at home.

During the iOS 26 upgrade cycle, iOS deleted all my third-party map apps and then expired the locally downloaded Apple Maps data. My phone also somehow lost my downloaded podcasts and music a few times, but, unlike losing three offline map applications, that didn't strand me in the middle of the woods with no cell coverage and no maps.

I agree that 4GB (or even 1GB) goes very far with a working phone OS though.


I arrived in a small airport at midnight, served only by Uber. Since I use Lyft elsewhere, my phone had deleted the Uber app. It took 15 minutes to download it: crappy WiFi and some kind of 5G dead zone. Sometimes you really need to download the app.

I had a 200MB data plan until ~ 2018.

I had data turned off most of the time. At home and in the office I had WiFi. Loaded the map before I left home.

Most other places I was too busy doing whatever I was doing to use a phone. Since upgrading, I guess I can look products up in stores now. That's about it.


If you're highly tech literate, you can get by with 4GB or even 3GB.

What you cannot do, contrary to what someone posted in this thread, is get by on 2G. So an ounce of prevention is worth a pound of cure in this case.


Not using it at home likely discounts a lot of personal consumption. If you can get your fill at night, there's less need to access the internet during the day.

I've had a 1GB/mo $5/mo plan from good2go for the last 2 years. I've never gone over it. But that's because I go from wifi to wifi all the time and I'm very careful when I'm on cell. That definitely doesn't work for most people!

Yes but isn't it a bit weird to be implying your customers are dogs?

Our customers are morons for using our products and dogs are personable but pretty stupid so yea, makes a lot of sense.

Idk some people love dogs a lot. Maybe more than people!

The average person generally seems less than neutral toward me.

Many people aren't just openly hostile; they make a point of immediately letting you know they aren't here to help, they're here to make everyone's life less pleasant.

With people, there are many scenarios where if you're out of line or disagree, that's it. You're done. They'll never again consider you worth any reasonable sort of treatment.

Dogs, by comparison, are angels.


Metaphor confuses, literally.

No. The idea is until it receives the chef’s kiss, it’s dog food.

I think in the analogy that we're the dogs.

Cursor came out 3 years ago. "Agentic" refactors have been a thing for 1.5 years. "Vibecoding" was coined as a term 1 year ago.

There are multiple companies that deploy to production daily. What are we even talking about?


Right but this agentic stuff was supposed to be the wave where we would finally actually see increased output, so we should probably be seeing it soon if it's real. Like, my dev team should definitely have the actual code they keep talking about their agents making, ready for me to put into production. As should my vendors. Any day now.


What is this nonsense?

You said that none of this was in production, and then when people pointed out that it obviously was, you shifted the goalposts to some other measure that you just imagined in your head.


Well, if it's in production, it's not at my company, any of my vendors, or for that matter any of the software I use in my private life; the pace of all of that is exactly what it was 2 years ago. When it shows up I'll form an opinion.


Let me amend that: one of my vendors has a new diffusion-based noise-reduction plugin that's pretty good, though the resource usage is still too high. I imagine that will come down as they improve it. And that's pretty cool. But it didn't come out any faster; it's just that it uses diffusion in the plugin itself. Docker had a much bigger impact on the software we use at work than AI has had so far.

I was even trying to come up with a list of software I use in my personal life to see if any of that has started coming out faster, and I came up with:

KDE

Supercollider

Puredata

Mixxx

Renoise

CUDA and ROCm

none of which have had any kind of release acceleration that I know of (though obviously the hardware to use the last two has gotten mind-blowingly expensive, alas). I use maybe three apps on my phone and they aren't updating any more frequently than they used to.

I get that for whatever reason this bugs people, but I'm in a very tech job and have a very tech personal life (just not webdev in either case) and literally have not seen anything I deal with change other than needing to learn to scroll past the AI summary at the top of search results.


What do you expect, that it's going to announce itself in a modal dialog when you run the software?

This isn’t like AI image generation where you’re going to convince yourself that you can tell the difference based on how you think it looks. Do you really think no one in the production chain of any of the software that you use picked up copilot in the last two years?

What signal are you hoping to receive that this is happening?


Well like I said in the sibling post to this one I'd expect really any of the software vendors in my professional or personal life to release either more rapidly or with a wider array of features than they were a few years ago, and that hasn't been my experience, at all.


The coding was never the slow part.


I'm certainly sympathetic to that argument, but if you scroll way back this thread started with the question of whether or not AI is transformative, and if it is neither faster nor better that would suggest "no".


Pi was probably the best ad for Claude Code I ever saw.

After my max sub expired I decided to try Kimi on a more open harness, and it ended up being one of the worst (and most eye-opening) experiences I've had with the agentic world so far.

It was completely alienating and so much "not for me" that afterwards I went back and immediately renewed my Claude sub.

https://www.thevinter.com/blog/bad-vibes-from-pi


> I would say that the project actively expects you to be downloading them to fill any missing gaps you might have.

Where did you get this perspective from?

> I thought pi and its tools were supposed to be minimal and extensible. So why is a subagent extension bundling six agents I never asked for that I can’t disable or remove?

Why do you think a random subagents extension is under the same philosophy as pi?

Your blog post says little about pi proper, it's essentially concerned with issues you had with the ecosystem of extensions, often made by random people who either do or do not get the philosophy? Why would that be up to pi to enforce?


Sharing extensions is very much the philosophy. Using them however is less so.

Pi ships with docs that include extensions and the agent looks there for inspiration if you ask it to build a custom extension.

Looking at what others publish is useful!


> if I start the agent in ./folder then anything outside of ./folder should be off limits unless I explicitly allow it, and the same goes for bash where everything not on an allowlist should be blocked by default.

Here's the problem with Claude Code: it acts like it's got security, but it's the equivalent of a "do not walk on grass" sign. There's no technical restrictions at play, and the agent can (maliciously or accidentally) bypass the "restrictions".

That's why Pi doesn't have restrictions by default. The logic is: no matter what agent you are using, you should be using it in a real sandbox (container, VM, whatever).
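A toy sketch of the "do not walk on grass" problem described above (this is illustrative only, not Claude Code's actual permission mechanism): an advisory allowlist that checks the first word of a command is trivially walked past by chaining commands, which is exactly why only a real sandbox gives a hard boundary.

```python
import shlex

# Hypothetical advisory allowlist: only commands whose first word is
# listed are "permitted". Nothing technically stops what comes after it.
ALLOWED = {"ls", "cat", "git"}

def advisory_check(cmd: str) -> bool:
    """Allow a command if its first word is on the allowlist."""
    return shlex.split(cmd)[0] in ALLOWED
```

The check blocks `rm -rf /` on its own, but `cat notes.txt; rm -rf /` sails through, because the allowlist only ever looks at `cat`. A container or VM boundary has no equivalent loophole.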


But the agent has to interact with the world: fetch docs, push code, fetch comments, etc. You can't sandbox everything. So you push that configuration to your sandbox, which is a worse UX than the harness just asking you at the right time what you'd like to do.


I too would like to know what a good UX looks like here but I have doubts that the permission prompts of Claude are the way to go right now.

Within days people become used to just hitting accept and allowlisting pretty much everything. The agents write lengthy commands into shell scripts or test runners that can themselves be destructive, but those get immediately allowlisted.


Well, you are imagining a worse UX, but it doesn't have to be. Pi doesn't include a sandboxing story at all (Claude provides an advisory but not mandatory one), but the sandbox doesn't have to be a simple static list of allowed domains/files. It's totally valid to make the "push code" tool in the sandbox send a trigger to code running outside of the sandbox, which then surfaces an interactive prompt to you as a user. That would give you the interactivity you want and be secure against accidentally or deliberately bypassing the sandbox.
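A minimal sketch of that idea, with hypothetical names (neither Pi nor Claude Code ships this): the sandboxed "push code" tool sends its request over a channel to a broker running outside the sandbox, and the broker surfaces an interactive prompt before answering.

```python
import json
import socket

def request_approval(sock: socket.socket, action: str, detail: str) -> bool:
    """Runs INSIDE the sandbox: send the request out, wait for a verdict."""
    sock.sendall(json.dumps({"action": action, "detail": detail}).encode() + b"\n")
    return sock.recv(16).strip() == b"ok"

def broker_answer(line: bytes, ask_user) -> bytes:
    """Runs OUTSIDE the sandbox: surface an interactive prompt, return verdict."""
    req = json.loads(line)
    question = f"Agent wants to {req['action']} ({req['detail']}). Allow?"
    return b"ok\n" if ask_user(question) else b"deny\n"
```

Because the prompt lives outside the boundary, the agent can neither skip it nor answer it for itself, which is the property the advisory-prompt model lacks.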


So you have to set up that integration instead of letting the agent do it. I suppose the sandbox is more configurable, but do you need that? I thought the draw of pi was that you didn't do all that and let it fly, wheeee!

edit: You're not making it sound easy at all. I don't have to build anything with the other agents.


Certainly not. Pi is "minimalist", so the draw is that it's "easy" to set it up yourself. You can also skip that and run it in yolo mode, and you can do that with Claude Code too. Heck, you could even use this hypothetical real-sandbox-with-interactive-prompts with Claude Code instead, once you build it.

Back to my original point: Claude Code gives you a false feeling of security, Pi gives you the accurate feeling of not having security.


I had a very similar experience. I have different preferences, but ultimately, my takeaway was that if I want to follow my own version of their philosophy, I should just create my own thing.

In the meantime, the codex/cc defaults are better for me.


Paraphrasing The Dude, that’s like, just your opinion, man.


> As it turns out, the opinions in question are that bash should be enabled by default with no restrictions, that the agent should have access to every file on your machine from the start, and that npm is the only package manager worth supporting.

Yep. This is why I've been going "Hell, no!" and will probably keep doing so.


Technically you're not allowed to use a Claude subscription account with Pi (according to Anthropic's policy). So yeah, Pi is the best anti-ad against Anthropic.


hypegrift


Are you intentionally keeping the benchmarks private?


Yes.

I am trying to figure out the best way to give the most information about how the AI models fail without revealing details that could help them overfit on those specific tests.

I am planning to add some extra LLM calls to summarize the failure reason without revealing the test.


We're building an app that automatically generates machine/human-readable JSON by parsing semantic HTML tags; a reverse proxy then serves that JSON instead of HTML to agents.
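A hedged sketch of what that pipeline could look like (the tag set, the User-Agent heuristic, and all names here are illustrative assumptions, not the actual product): extract text from semantic tags into JSON, and branch on the client at the proxy layer.

```python
import json
from html.parser import HTMLParser

# Illustrative subset of "semantic" tags worth extracting.
SEMANTIC = {"article", "nav", "section", "h1", "h2", "p"}

class SemanticExtractor(HTMLParser):
    """Collect text that appears inside semantic tags."""
    def __init__(self):
        super().__init__()
        self.stack = []
        self.items = []
    def handle_starttag(self, tag, attrs):
        if tag in SEMANTIC:
            self.stack.append(tag)
    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
    def handle_data(self, data):
        if self.stack and data.strip():
            self.items.append({"tag": self.stack[-1], "text": data.strip()})

def html_to_json(html: str) -> str:
    parser = SemanticExtractor()
    parser.feed(html)
    return json.dumps(parser.items)

def respond(user_agent: str, html: str) -> str:
    """Toy reverse-proxy branch: JSON for agent-looking clients, HTML otherwise."""
    if "bot" in user_agent.lower() or "agent" in user_agent.lower():
        return html_to_json(html)
    return html
```

A real deployment would detect agents more robustly than a User-Agent substring match, but the split (one canonical HTML source, two representations at the proxy) is the core idea.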


You understand that there is no requirement for you to be an agent to post on moltbook? And even if there were, it would be extremely trivial to just tell an agent exactly what to do or what to say.

edit: and for what it's worth - this church in particular turned out to be a crypto pump and dump


I do understand that. That doesn't take away from the points raised in the article any more than the extensive, real security issues and relative prevalence of crypto scams do. I believe that to focus on those is to miss the emerging forest for the trees. It is to dismiss the web itself because of pets.com, because of 4chan, because of early subreddits with questionable content.

Additionally, we're already starting to see reverse CAPTCHAs, i.e. "prove you're not a human" checks with pseudorandomized tasks on a timer that are trivial for an agent to solve and respond to on the fly, but more difficult for a human to process in time. Of course, this isn't bulletproof either; it's not particularly resistant to enumeration of every task type plus automated evaluation plus a response harness. But I find the more interesting point to be that agents are beginning to work on measures to keep humans out of the loop, even if those measures are initially trivial, just as early human security measures were trivial to break (e.g. RC4 in WEP). See https://agentsfightclub.com/ & https://agentsfightclub.com/api/v1/agents/challenge


why is it always some crypto bullshit


I guess the issue is that this is psychologically fuzzy.

What's the difference between:

- An autonomous agent posting via API
- A human running a script that posts via API
- A human calling an LLM API and copy-pasting the output to an API


"Better" is a vague term, and working hours are limited, so clearly some things are more worth doing than others.

Still, it's very easy to draw the wrong conclusion from a post like this. Better software is achieved through small decisions that compound over time. And bad software often happens because shortcuts compound too.

