It’s a different type of thing, really. I like rogue-likes because they are a… pretty basic… story about my character, rather than a perfectly crafted story about somebody else’s.
Even when I play a game like Expedition 33 or Elden Ring, my brain (for whatever reason) makes a solid split between the cutscene version of a character and the gameplay version. I mean, in some games the gameplay character is a wandering murderer, while the cutscene character has all sorts of moral compunctions about killing the big bad. They are clearly different dudes.
It looks like the Super Mario Bros series has a good showing, but only the first one. I bet 3 falls into an unlucky valley where the game-playing population was not quite as large as it is now, but it isn’t early enough to get the extreme nostalgia of the first one.
> One thread per core, pinned (affinity) to separate CPUs, each with their own epoll/kqueue fd
> Each major state transition (accept, reader) is handled by a separate thread, and transitioning one client from one state to another involves passing the file descriptor to the epoll/kqueue fd of the other thread.
So this seems like a little pipeline that all of the requests go through, right? For somebody who doesn’t do server stuff, is there a general idea of how many stages a typical server might be able to implement? And does it create a load-balancing problem? I’d expect some stages to be quite cheap…
> For somebody who doesn’t do server stuff, is there a general idea of how many stages a typical server might be able to implement?
On the HTTP server from the article, as I understood it, the 2 stages you’re seeing (accept, read) are all you have. Or maybe 3, if disposing of connections is slow.
I'm not sure which I prefer. On one hand, there's some expensive coordination in passing those file descriptors around. On the other hand, having separate code handle creating and closing the connections makes it easier to focus on the actual performance issues where they appear, and creates an opportunity to dispatch work smartly.
Of course, you can go all the way and make a green-threads server where every bit of IO puts the work back on the queue. But then you would use a single queue and dispatch the code that works on it. So you get more branching, but less coordination.
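To make the fd hand-off concrete, here's a minimal sketch of that two-stage design (my own illustration, not the article's code): one thread accepts connections, then passes each accepted socket to a reader thread that registers it with its own event loop. I'm using Python's portable `selectors` module in place of raw epoll/kqueue, and a `queue.Queue` as the hand-off channel; a real server would pin threads to cores and pass raw fds.

```python
import queue
import selectors
import socket
import threading

handoff = queue.Queue()   # accept thread -> reader thread
results = []

def acceptor(listener, n):
    # Stage 1: accept n connections, hand each socket to the reader.
    sel = selectors.DefaultSelector()
    sel.register(listener, selectors.EVENT_READ)
    accepted = 0
    while accepted < n:
        for key, _ in sel.select():
            conn, _addr = key.fileobj.accept()
            conn.setblocking(False)
            handoff.put(conn)          # the "state transition" hand-off
            accepted += 1
    sel.close()

def reader(n):
    # Stage 2: own selector; read one message per connection, then close.
    sel = selectors.DefaultSelector()
    done = 0
    while done < n:
        try:                           # register newly handed-off sockets
            while True:
                sel.register(handoff.get_nowait(), selectors.EVENT_READ)
        except queue.Empty:
            pass
        for key, _ in sel.select(timeout=0.1):
            data = key.fileobj.recv(4096)
            if data:
                results.append(data)
            sel.unregister(key.fileobj)
            key.fileobj.close()
            done += 1
    sel.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
port = listener.getsockname()[1]

t1 = threading.Thread(target=acceptor, args=(listener, 2))
t2 = threading.Thread(target=reader, args=(2,))
t1.start(); t2.start()

for msg in (b"hello", b"world"):
    c = socket.create_connection(("127.0.0.1", port))
    c.sendall(msg)
    c.close()

t1.join(); t2.join(); listener.close()
print(sorted(results))                 # [b'hello', b'world']
```

The Queue here stands in for whatever lock-free mechanism a tuned server would use; the point is just that each stage owns its own selector and connections migrate between them.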
> Second-order optimizers and natural gradient methods
Do second order optimizers help improve data efficiency? I assumed they’d help you get to the same minimum faster (but this is way outside my wheelhouse).
yes! typically the optimizer that trains faster also gets better data efficiency. it may not be absolutely true, but that has been my observation so far. also see https://arxiv.org/pdf/2510.09378 for second-order methods.
Fundamentally I don't believe second-order methods get better data efficiency by themselves, but changes to the optimizer can, because the convergence behavior changes. ML theory lags behind the results in practice.
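A toy illustration of the "same minimum, faster" intuition (my own example, not from the linked paper): on an ill-conditioned quadratic, a Newton step `x - f'(x)/f''(x)` lands on the minimum immediately, while plain gradient descent with a safe step size crawls there.

```python
# f(x) = 0.5 * a * (x - m)^2 with curvature a and minimum at m.
A, M = 100.0, 3.0

def grad(x):
    return A * (x - M)       # f'(x)

def hess():
    return A                 # f''(x), constant for a quadratic

def gd_steps(x, lr=0.005, tol=1e-6):
    # First-order: step size limited by curvature -> slow convergence.
    steps = 0
    while abs(grad(x)) > tol:
        x -= lr * grad(x)
        steps += 1
    return steps

def newton_steps(x, tol=1e-6):
    # Second-order: rescale the gradient by the curvature.
    steps = 0
    while abs(grad(x)) > tol:
        x -= grad(x) / hess()
        steps += 1
    return steps

print(gd_steps(0.0), newton_steps(0.0))   # prints: 29 1
```

Both reach the same minimum; the second-order method just takes far fewer updates, which is the "trains faster" part rather than data efficiency per se.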
> I don't understand how anyone can rationalize this bill in the face of what OpenAI just agreed to with the DoD.
NY doesn’t have any obligation to agree with the DoD. Also the applications seem quite different, although I don’t think AI should actually be relied on for either one!
> Wouldn't this also violate the "no state may limit or restrict the use of AI" that the current administration is pushing?
No, it doesn’t violate it. States can’t violate executive orders, because executive orders aren’t instructions for the states. The instructions are for the executive branch; for example, if this becomes law, the US Attorney General will try to find some way to fight it.
I’m sure this is true to some extent, about the lawyers. But also, I wonder (aka I don’t have any data to back this up, it’s just based on random stories I’ve heard) to what extent people use “I’m right but can’t afford the lawyer time” as a sort of pride-maintaining excuse. Or to what extent lawyers use that as a soft no to reject clients they don’t think have a strong enough case.
Which isn’t to say the world is fundamentally just. Just, in some cases the laws are legitimately stacked in favor of the big guys, or you sign a contract without carefully reading it, etc etc.
In my experience lawyers will tell you very directly when you don't have a good case, or when you do have a good case but it's not worth pursuing (the most likely scenario). Also, the one time I did pursue my case, it took around $50,000 in lawyer fees before I was able to convince the defendant to settle (for a large multiple of that $50K). If the other side had been more stubborn it would have been around $100K to take it to trial. If I hadn't had the money to pay the lawyer I would have been SOL, and most people don't have $50K to spend on an uncertain outcome like that. So “I’m right but can’t afford the lawyer time” is a very real scenario.
That $100k is also on the cheap side. If the other side has a lawyer and a lot of money to burn, they can easily hike that way up. Filing a billion motions that your lawyer has to respond to, deposing everyone you've ever met, going after every document you've ever looked at. The more money someone has, the easier it is to make you spend more money, even if you are right.
Right. My case was a very simple contract dispute with very little discovery and only a couple of people to depose, so I was lucky there. And the other side did have more money than me, but not so much that they could burn several hundred K on it without feeling it.
> So “I’m right but can’t afford the lawyer time” is a very real scenario.
For most cases like the ones we're talking about (NYC unlawful eviction and/or tenant harassment), if you have a good case, you don't have to pay up-front. A lawyer will take it on contingency and get paid by the defendant if you win.
In addition, there are also plenty of free legal resources dedicated to this exact topic as well.
True, but it is only an incredibly narrow subset of legal cases where contingency-based lawyers exist. As for non-LLM legal resources, they are just fine if you have all day to read them and all of another day to draft the required filings, but most people have jobs.
> As for non LLM legal resources, they are just fine if you have all day to read them and all of another day to draft the required filings, but most people have jobs.
You misunderstand. If you are facing tenant harassment in New York City, there are other avenues for you to resolve it that don't involve engaging a lawyer at all.
> True, but it is only an incredibly narrow subset of legal cases where contingency based lawyers exist.
Not really? If anything, there's a pretty narrow subset of cases where it's not possible to get someone on contingency but it is possible to use an LLM to meaningfully push your case forward without one.
$50k is going to be on the cheap side for any case that ultimately involves the court. Anytime a case goes to trial, you can easily be looking at $1M+.
There's a reason companies keep lawyers on staff. It's a whole lot cheaper to give a lawyer an annual salary than it is to hire out a law firm, as the standard rates for law firms are insanely high: on the low end, $150/hour; on the high end, $400. With things like 15-minute minimums (so that one drafted response ends up costing $100).
Take a deposition for 3 hours, with 2 lawyers, that'll be $2400.
Doubt that. There's no point in bringing in a litigator on day 1 of a trial, save for the fact that they are probably a better public speaker. Whatever needed to get done needed to be done well before the trial started.
Sure there is: if you can send back a strong response to a challenge, a potential litigant may back down, ultimately saving money.
On-staff legal counsel is there to be able to make the call on when a more expensive firm should be hired and brought in. There are a lot of BS lawsuits, however, that flow through. For example, every software company that gets big enough will likely get sued for some BS patent infringement. Having on-staff legal will be able to make the call of "yeah, you should just give them $10k to go away". That's a lot cheaper than hiring a firm to come in and then tell you "Yeah, you should give them $10k to go away".
Particularly for a business, it takes years before any case gets close to going to trial. Plenty of time for your counsel to make the determination on when bigger guns should be brought in.
>Sure there is: if you can send back a strong response to a challenge, a potential litigant may back down, ultimately saving money.
Do you litigate? Hiring a new attorney to show up day of trial only communicates to the other side that it's clown-city.
>On-staff legal counsel is there to be able to make the call on when a more expensive firm should be hired and brought in. There are a lot of BS lawsuits, however, that flow through. For example, every software company that gets big enough will likely get sued for some BS patent infringement. Having on-staff legal will be able to make the call of "yeah, you should just give them $10k to go away". That's a lot cheaper than hiring a firm to come in and then tell you "Yeah, you should give them $10k to go away".
Do you litigate? Do you know what's involved in actually getting to a trial? Let alone the day of trial? In-house is going to take depositions and brief summary judgment? In-house is going to prepare the pre-trial order? Get proposed jury instructions? Again, do you litigate?
>Particularly for a business, it takes years before any case gets close to going to trial. Plenty of time for your counsel to make the determination on when bigger guns should be brought in.
You said, in particular, "up until the point where you start a trial."
> You said, in particular, "up until the point where you start a trial."
That was wrong of me to say.
My intent was more to communicate that there's a lot of legal work before a case gets close to trial, or even discovery, which an in-house attorney can and will handle. Including evaluating whether a case needs the big guns called in.
For what it's worth (to you), I've only ever dealt with one in-house litigation team and they were monsters. Typically, once a suit is filed, you get someone serious on it. It'd be pretty crazy to have a non-litigator draft responsive pleadings like an answer or a motion to dismiss.
For the grad students especially, there’d be a career advancement incentive to still publish in the top journals. The professors might still want to publish in them just out of familiarity (with a little career incentive as well, although less pronounced than the grad students).
I think it’d be a big ask from someone whose role doesn’t typically cover that sort of decision.
Why can’t LLMs understand the big picture? I mean, a lot of companies have most of their information available in a digital form at this point, so it could be consumed by the LLM.
I think if anything, we have a better chance in the little picture: you can go to lunch with your engineering coworkers or talk to somebody on the factory floor and get insights that will never touch the computers.
Giant systems of constraints, optimizing many-dimensional user metrics: eventually we will hit the wall where it is easier to add RAM to machines than humans.
Most senior could make sense (although I’d like to see a collection of independent guilds coordinated by an LLM “CEO” just to see how it could work—might not be good enough yet, but it’d be an interesting experiment).
Ultimately, I suspect “AI” (although, maybe much more advanced than current LLMs) will be able to do just about any information based task. But in the end only humans can actually be responsible/accountable.
> Because LLMs don't understand things to begin with.
Ok, that’s fair. But I think the comment was making a distinction between the big picture and other types of “understanding.” I agree that it is incorrect to say LLMs understand anything, but I think that was just an informal turn of phrase. I’m saying I don’t think there’s something special about “big picture” information processing tasks, compared to in-detail information processing tasks, that makes them uniquely impossible for LLMs.
The other objections seem mostly to be issues with current tooling, or the sort of capacity problems that the LLM developers are constantly overcoming.
I would say that it's very germane to my original statement. Understanding is absolutely fundamental to strategy and it is pretty much why I can say LLMs can't be strategic.
To really strategize you have to have a mental model of, well, everything, and be able to sift through that model to know which elements are critical or not. And it includes absolutely everything -- human psychology to understand how people might feel about certain features or usage models, the future outlook for which popular framework to choose and whether it will be as viable next year as it is today. The geography and geopolitics of which cloud provider to use. The knowledge of human sentiment around ethical or moral concerns. The financial outlook for VC funding and interest rates. The list goes on and on. The scope of what information may be relevant is unlimited in time and space. It needs creativity, imagination, intuition, inventiveness, discernment.
Claude is perfectly capable of all of this. Give it access to meeting notes and notion/linear and it can elegantly connect the dots within the context of a given problem.
> I’m saying I don’t think there’s something special about “big picture” information processing tasks, compared to in-detail information processing tasks, that makes them uniquely impossible for LLM.
LLMs can do neither reliably because to do that you need understanding which LLMs don't have. You need to learn from the codebase and the project, which LLMs can't do.
On top of that, to have the big picture LLMs have to be inside your mind. To know and correlate the various Google Docs and Figma files, the Slack discussions, the various notes scattered on your system etc.
They can't do that either because, well, they don't understand or learn (and no, clawdbot will not help you with that).
> The other objections seem mostly to be issues with current tooling, or the sort of capacity problems that the LLM developers are constantly overcoming.
These are not limitations of tooling, and no, LLM developers are not even close to overcoming, especially not "constantly". The only "overcoming" has been the gimmicky "1 million token context" which doesn't really work.
The relationship between coding ability and memory requirement is nonlinear, right? Just some short Python code and an IDE? Probably fine. A complex IDE with all sorts of agentic stuff? Need more RAM. True enlightenment? Vim, even with some unnecessary extensions, will run on megabytes.