Hacker News | corbet's comments

I'm not sure what you're reading; there is a distinct lack of GPL advocates in that conversation.

I feel obligated to mention LWN - https://lwn.net/ - since that is exactly what we aspire to.


Something broke down somewhere ... I got emails a while back about the acquisition and giving options about whether to go along with the move or not.

Since Gusto is our payroll provider, I didn't see a reason not to do that... hopefully there will be less finger pointing the next time something goes screwy with the 401k transfers.


My guess is your different experience is precisely because you use Gusto as your payroll provider. My previous employer does not, or at least they did not when I was working there. This was truly the first and only email I've gotten about it, but I have always gotten regular transactional and notification emails from Guideline just fine, including yesterday with a confirmation that I'd changed my asset allocation!


The "compliance officer" at Bright Data, instead, offered me a special deal to protect my site from their bots ... they run a protection racket along with all the rest of their nastiness.


I worked for an Amazon scraping business and they used Luminati (Now Brightdata) for a few months until I figured out a way to avoid the ban hammer and got rid of their proxy.

They indeed provided "high quality" residential and cellular IPs and "normal quality" data-center IPs. You had to keep cycling the IP pool every 2-3 days, which cost extra. It felt super shady. It isn't their bots; they lease connections to whoever is paying, and they don't care what people do in there.


> ... until I figured out a way to avoid the ban hammer ...

You had my curiosity ... but now you have my attention.


Without bothering to check on Amazon, I successfully scraped Meta stuff for years at rates exceeding 20 Gbit/s without any proxies, just rotating IPv6 addresses on the same couple of blocks for every request.

There are usually silly bypasses like this that easily work, even with bigco stuff.


My heat pump, in Colorado, kept the house warm at -18°F last winter. Without firing up the backup resistance heating strip. I think it works.

(It is more expensive to operate than the natural-gas furnace was, though).


Insulation matters as well and I’m guessing your Colorado house is far newer and better sealed than most New England homes.


We do try to spell things out and/or link them in LWN articles to make the context available, but some things we just have to assume.

Additionally, spelling out "Berkeley Packet Filter" is not going to help any readers here; BPF is far removed from the days when its sole job was filtering packets, and that name will not tell readers anything about why BPF is important in the Linux kernel.


See, for example, the binder driver merged for 6.18. It's out there, and will land when it's ready.


In discussions like this, I sometimes feel that the importance of related work like the increasing use of Rust in Android and MS land is under-appreciated. Those who think C is fine often (it seems to me) make arguments along the lines that C just needs to have a less UB-prone variant along the lines of John Regehr and colleagues' "Friendly C" proposal,[0] which unfortunately Regehr about a year and a half later concluded couldn't really be landed by a consensus approach.[1] But he does suggest a way forwards: "an influential group such as the Android team could create a friendly C dialect and use it to build the C code (or at least the security-sensitive C code) in their project", which is what I would argue is happening; it's just that rather than nailing down a better C, several important efforts are all deciding that Rust is the way forward.

The avalanche has already started. It is too late for the pebbles to vote.

0: https://blog.regehr.org/archives/1180 1: https://blog.regehr.org/archives/1287


Oof. That's a depressing read:

> This post is a long-winded way of saying that I lost faith in my ability to push the work forward.

The gem of despair:

> Another example is what should be done when a 32-bit integer is shifted by 32 places (this is undefined behavior in C and C++). Stephen Canon pointed out on twitter that there are many programs typically compiled for ARM that would fail if this produced something besides 0, and there are also many programs typically compiled for x86 that would fail when this evaluates to something other than the original value.


Some parts of the industry with a lot of money and influence decided this is the way forward. IMHO Rust has the same issue as C++: it is too complex, and a memory safe C would be far more useful. It is sad that more resources are not invested into this.


I'm entirely unconvinced that a low-level† memory safe C that is meaningfully simpler than Rust is even possible, let alone desirable. IMHO, basically all of Rust's complexity comes from implementing the structure necessary to make it memory safe without making it too difficult to use††.

Even if it is possible, though, we don't have it. It seems like Linux should go with the solution we have in hand and can see works, not a solution that hasn't been developed or proved possible and practical.

Nor is memory safety the only thing Rust brings to the table; it also brings a more expressive type system that prevents other mistakes (just not as categorically) and lets you program faster. Supposing we got this memory safe C that somehow avoided this complexity... I don't think I'd even want to use it over the more expressive memory-safe language that also brings other benefits.

† A memory-safe managed C is possible of course (see https://fil-c.org/), but it seems unsuitable for a kernel.

†† There are some other alternatives to the choices Rust made, but none meaningfully less complex. Separately, you could ditch the complexity of async, I guess, but you can also just use Rust as if async didn't exist; it's a purely value-added feature. There are likely one or two other similar examples, though they don't immediately come to mind.


I don't think so. First, Rust did not come from nowhere; there were memory safe C variants before it that stayed closer to C. Second, I do not even believe that memory safety is so important that it trumps other considerations, e.g. the complexity of having two languages in the kernel (even if you ignore the complexity of Rust). Now, it is not my decision but Google's and other companies'. But I still think it is a mistake, and it highlights more the influence of certain tech companies on open source than anything else.


> First, Rust did not come from nowhere, there were memory safe C variants before it that stayed closer to C.

Can you give an example? One that remained a low level language, and remained ergonomic enough for practical use?

> Second, I do not even believe that memory safety is that important that this trumps other considerations

In your previous comment you stated "a memory safe C would be far more useful. It is sad that not more resources are invested into this". It seems to me that after suggesting that people should stop working on what they are working on and work on memory safe C instead you ought to be prepared to defend the concept of a memory safe C. Not to simply back away from memory safety being a useful concept in the first place.

I'm not particularly interested in debating the merits of memory safety with you, I entered this discussion upon the assumption that you had conceded them.


> Can you give an example? One that remained a low level language, and remained ergonomic enough for practical use?

They can't, of course, because there was no such language. Some people for whatever reason struggle to acknowledge that (1) Rust was not just the synthesis of existing ideas (the borrow checker was novel, and aspects of its thread safety story like Send and Sync were also AFAIK not found in the literature), and (2) to the extent that it was the synthesis of existing ideas, a number of these were locked away in languages that were not even close to being ready for industry adoption. There was no other Rust alternative (that genuinely aimed to replace C++ for all use cases, not just supplement it) just on the horizon or something around the time of Rust 1.0's release. Pretty much all the oxygen in the room for developing such a language has gone to Rust for well over a decade now, and that's why it's in the Linux kernel and [insert your pet language here] is not.

BTW, this is also why people are incentivized to figure out ways to solve complex cases like RCU projection through extensible mechanisms (like the generic field projection proposal) rather than ditching Rust as a language because it can't currently handle these ergonomically. The lack of alternatives to Rust is a big driving factor for people to find these abstractions. Conversely, having the weight of the Linux kernel behind these feature requests (instead of, e.g., some random hobbyist) makes it far more likely for them to actually get into the language.


I don't think there are many new ideas in Rust that did not exist previously in other languages. Lifetimes, non-aliasing pointers, etc. all certainly existed before. Rust is also only somewhat ready for industry use because suddenly some companies poured a lot of money into it. But it seems kind of random why they picked Rust. I do not think there is anything which makes it particularly good, and it certainly has issues.


"Lifetimes" didn't exist before. Region typing did, but it was not accompanied by a system like Rust's borrow checker, which is essential for actually creating a usable language. And we simply did not have the tooling required (e.g. step-indexed concurrent separation logic with higher order predicates) to prove a type system like that correct until around when Rust was released, either. Saying that this was a solved problem because Cyclone had region typing or because of MLKit, or people knew how to do ergonomic uniqueness types because of e.g. Clean, is the sort of disingenuous revisionist history I'm pushing back on.

> But it seems kind of random why they picked Rust. I do not think there is anything which makes it particularly good and it certainly has issues.

Like I said, they picked Rust because there was literally no other suitable language. You're avoiding actually naming one because you know this is true. Even among academic languages, very few targeted being able to replace C++ everywhere directly, as the language was deemed unsuitable for verification due to its complexity. People were much more focused on the idea of providing end-to-end verified proofs that C code matched its specification, but that is not a viable approach for a language intended to be used by regular industry programmers. Plenty of other research languages wanted to compete with C++ in specific domains where the problem fit a shape that made the safety problem more tractable, but they were not true general-purpose languages, and it was not clear how to extend them to become such (or whether the language designers even wanted to). Other languages might have thought they were targeting the C++ domain but made far too many performance sacrifices to be suitable candidates, or gave up on safety where the problem gets hard (how many "full memory safety" solutions completely give up on data races, for example? More than a few).

As a "C++ guy", Rust was the very first language that gave us what we actually wanted out of a language (zero performance compromises) while adding something meaningful that we couldn't do without it (full memory safety). Even where it fell short on performance or safety, the difference with other languages was that nobody said "well, you shouldn't care about that anyway because it's not that big a deal on modern CPUs" or "well, that's a stupid thing for a user to do, who cares about making that case safe?" The language designers genuinely wanted to see how far we could push things without compromises (and still do). The work to allow even complex Linux kernel concurrent patterns (like RCU or sequence locking) to be exposed through safe APIs, without explicitly hardcoding the safety proofs for the difficult parts into the language, is just an extension of the attitude that's been there since the beginning.


Rust isn't perfect, but it's basically the most viable language currently available for software such as Linux. It's definitely more of a C++ contender than anything else, but manages to be very usable in most other cases too. Rust 1.0 got a lot of things right with its compile-time features, and the utility of these features for "low-level" code has been demonstrated repeatedly. If a language is to replace Rust in the future, I expect it will take on many of Rust's strengths. Moreover, Rust is impressive at becoming better. The work on Rust-for-Linux, alongside various other improvements (e.g. the next trait solver, Polonius and place-based borrowing, the parallel rustc frontend), shows that Rust can evolve significantly without a huge addition in complexity. Actually, most changes should reduce its complexity. Yes, Rust has fumbled some areas, such as the async ecosystem, the macro ecosystem, and pointer-width integers, but its mistakes are also being considered for improvement. The only unfortunate thing is the lack of manpower to drive some of these improvements, but I'm in it for the long run. Frankly, I'd say that if the industry had to use only one language tomorrow, Rust is the best extant choice.

And, it's really funny that GP criticizes Rust but doesn't acknowledge that of course blood, sweat, and tears have already gone into less drastic variants for C or C++. Rust itself is one of the outputs of the solution space! Sure, hype is always a thing, but Rust has quite demonstrated its utility in the free market of programming languages. If Rust was not as promising as it is, I don't see why all of these companies and Linus Torvalds would seriously consider it after all these years of experience. I can accept if C had a valid "worse is better" merit to it. I think C++, if anything, has the worst value-to-hype ratio of any programming language. But Rust has never been a one-trick pony for memory safety, or a bag of old tricks. Like any good language, it offers its own way of doing things, and for many people, its way is a net improvement.


For example Cyclone, Checked C, Safe-C, Deputy, etc.

I agree that memory safety is useful, but I think the bigger problem is complexity, and Rust goes in the wrong direction. I also think that any investment into safety features in C tooling - even if not achieving perfect safety - would have a much higher return on investment and a bigger impact on the open-source ecosystem.


The occasional sharing of subscriber links in this way only does us good. If you enjoy the content, please subscribe and help ensure that there will be more of it!


Ossification does not come about from the decisions of "Linux maintainers". You need to look at the people who design, sell, and deploy middleboxes for that.


I disagree. There is plenty of ossification coming from inside the house. Just some examples off the top of my head are the stuck-in-1974 minimum RTO and ack delay time parameters, and the unwillingness to land microsecond timestamps.


Not a networking expert, but does TCP in IPv6 suffer the same maladies?


Yes.

Layer-4 TCP is pretty much just slapped on top of Layer-3 IPv4 or IPv6 in exactly the same way for both of them.

Outside of some nitpicky details, like how TCP MSS clamping works, it is basically the same.


…which is basically how it’s supposed to work (or how we teach that it’s supposed to work). (Not that you said anything to the contrary!)


The "middleboxes" excuse for not improving (or replacing) protocols in the past was horseshit. If a big incumbent player in the networking world releases a new feature that everyone wants (but nobody else has), everyone else (including 'middlebox' vendors) will bend over backwards to support it, because if you don't your competitors will and then you lose business. It was never a technical or logistical issue, it was an economic and supply-demand issue.

To prove it:

1. Add a new OSI Layer 4 protocol called "QUIC" and give it a new protocol number, and just for fun, change the UDP frame header semantics so it can't be confused for UDP.

2. Then release kernel updates to support the new protocol.

Nobody's going to use it, right? Because internet routers, home wireless routers, servers, shared libraries, etc would all need their TCP/IP stacks updated to support the new protocol. If we can't ship it over a weekend, it takes too long!

But wait. What if ChatGPT/Claude/Gemini/etc only supported communication over that protocol? You know what would happen: every vendor in the world would backport firmware patches overnight, bending over backwards to support it. Because they can smell the money.


Many years ago I worked in a Safeway grocery store. We would have occasional power failures that would leave the entire store dark; we would all be given flashlights to help customers find their way out.

The cash registers, though, had backup power, so the store could still take their money.

