Hacker News | new | past | comments | ask | show | jobs | submit | l1k's comments

NTB = Non-Transparent Bridge

DW = DesignWare


Some of the sites that went online around 1994 are still there:

https://north.pole.org/

https://town.hall.org/

I believe Carl Malamud (Internet Multicasting Service) was behind these.

The audio files are in Sun Audio format, which all browsers supported natively back then. Chromium apparently no longer does; you have to save the file and open it in VLC.
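For anyone curious, the .au container is trivial: a 24-byte big-endian header in front of the raw samples. A minimal Python sketch of the header layout (the demo builds a header in memory rather than reading a real file):

```python
import struct

def parse_au_header(data: bytes) -> dict:
    """Parse the 24-byte Sun Audio (.au) header: six big-endian u32 fields."""
    magic, offset, size, encoding, rate, channels = struct.unpack(">6I", data[:24])
    if magic != 0x2E736E64:  # the ASCII bytes ".snd"
        raise ValueError("not a Sun Audio file")
    return {"data_offset": offset, "data_size": size,
            "encoding": encoding, "sample_rate": rate, "channels": channels}

# Build a tiny header in memory: 8 kHz mono mu-law (encoding 1), no payload.
hdr = struct.pack(">6I", 0x2E736E64, 24, 0, 1, 8000, 1)
info = parse_au_header(hdr)
print(info["sample_rate"], info["channels"])  # 8000 1
```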


One of my pages from around then still just about lives:

http://www.exnet.com/springboard.html



If you need a lot of RAM, you usually have to buy servers with multiple CPUs to attach the memory to, because the amount of DRAM you can attach to a single CPU is limited.

If you don't have the need for all the extra CPUs, just being able to attach more memory to a single CPU through CXL may be cheaper.
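A toy cost model makes the point. Every number here is made up purely for illustration: without CXL you buy CPUs just to get more DIMM slots, while with CXL you pay only a per-TB premium for the extra memory.

```python
import math

def server_cost(cpus_for_compute: int, ram_tb: float,
                max_ram_per_cpu_tb: float = 4.0,
                cpu_cost: float = 8000.0, ram_cost_per_tb: float = 3000.0,
                cxl_overhead_per_tb: float = 500.0, use_cxl: bool = False) -> float:
    """Hypothetical pricing: CPUs added only for DIMM slots vs. a CXL premium."""
    if use_cxl:
        cpus = cpus_for_compute
        extra = cxl_overhead_per_tb * max(0.0, ram_tb - cpus * max_ram_per_cpu_tb)
    else:
        # Without CXL, buy enough sockets to host all the DRAM.
        cpus = max(cpus_for_compute, math.ceil(ram_tb / max_ram_per_cpu_tb))
        extra = 0.0
    return cpus * cpu_cost + ram_tb * ram_cost_per_tb + extra

# One CPU's worth of compute, but 12 TB of RAM:
print(server_cost(1, 12))                # 60000.0 (3 sockets just for DIMM slots)
print(server_cost(1, 12, use_cxl=True))  # 48000.0 (1 socket + CXL premium)
```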


The original Raspberry Pi SoC (BCM2835) is ARMv6 with VFPv2 hard-float support.

Debian's "armhf" architecture is ARMv7 with VFPv3. It doesn't support the BCM2835.

Debian's "armel" architecture is ARMv4t with soft float. It doesn't use the BCM2835 to its full potential.

So the BCM2835 sits awkwardly in between Debian's two stock 32-bit ARM architectures, which motivated the decision to recompile all packages for a BCM2835-specific "armhf" distribution (Raspbian).

In a sense, it's a historic artifact.
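To make the hard-float vs. soft-float distinction concrete: the float ABI a binary was built for is stamped into its ELF header's e_flags. A sketch using the standard EF_ARM_* flag values from binutils; the demo header is fabricated for illustration, not a real binary:

```python
import struct

EM_ARM = 40
EF_ARM_ABI_FLOAT_HARD = 0x400  # binary uses the hard-float (armhf) calling convention
EF_ARM_ABI_FLOAT_SOFT = 0x200  # binary uses the soft-float (armel) calling convention

def arm_float_abi(elf: bytes) -> str:
    """Report the float ABI recorded in a 32-bit little-endian ARM ELF header."""
    if elf[:4] != b"\x7fELF" or elf[4] != 1:       # EI_CLASS must be ELFCLASS32
        raise ValueError("not a 32-bit ELF")
    if struct.unpack_from("<H", elf, 18)[0] != EM_ARM:  # e_machine field
        raise ValueError("not an ARM binary")
    e_flags = struct.unpack_from("<I", elf, 36)[0]      # e_flags offset in ELF32
    if e_flags & EF_ARM_ABI_FLOAT_HARD:
        return "hard"
    if e_flags & EF_ARM_ABI_FLOAT_SOFT:
        return "soft"
    return "unknown"

# Fabricated minimal header for demonstration (not a runnable binary):
hdr = bytearray(52)
hdr[:4] = b"\x7fELF"
hdr[4] = hdr[5] = 1                              # 32-bit, little-endian
struct.pack_into("<H", hdr, 18, EM_ARM)
struct.pack_into("<I", hdr, 36, EF_ARM_ABI_FLOAT_HARD)
print(arm_float_abi(bytes(hdr)))  # hard
```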


Interesting.

For my "canned mainframe" (https://github.com/rbanffy/vm370) ARM images, I'm using Debian as a base, since it has the armv6 architecture listed and I didn't notice any adverse effects.

I wonder if I should have used an RPi-specific base image.

OTOH, that'd render the container image incompatible with other 32-bit ARM boards.


The point here is likely to pull the rug out from under scalpers' feet.

With the Raspberry Pi 5 out in two weeks, all the held-back inventory of older models will be dumped, prices will plummet, availability will become a non-issue.

In that sense it's a wise move.


Mouser sold about 3000 Pi 4s in the last couple of weeks. I'm hoping a few scalpers are about to get seriously burned.

Interestingly Digikey has over 2000 left. I wonder if they are limiting quantities.


Finding older models was also almost impossible in the past two years, so it's unlikely that the Raspberry Pi 5 will solve the issue. But even so, it's not a wise move: what is the point of bringing out a new model when they can't make it available to normal people?


The Raspberry Pi 5 will only be for sale to individuals until the end of this year (no industrial customers competing for inventory, as with the older models).


Plus, they're only launching the 4GB/8GB models to start. So there'll be another wave of cheaper ones a little later. Really hoping they still hit the $35 price point on the 1GB model.


> they can't make it available to normal people

I guess you haven't been looking recently? I can go to a local store and pick one up. It looks easy to pick up one online too.

https://rpilocator.com/


I think I've seen Pi kits at my local Target. The issue with those is that they're built around niche things I might not care about (so now I've got tech waste on my hands), and they might not include the exact model I want.

Note I'm not disagreeing, just saying in some cases, the ones in-store are kits.


I wouldn't expect Target to carry bare Raspberry Pi-- how many people walk into a Target wanting a Pi with no accessories?


I have and it is still a pain. Many websites still have limits on how many you can buy. The situation has improved but it is far from what your comment implies.


> Many websites still have limits on how many you can buy.

For a hobbyist / individual, is that really a big deal? I mean, how many do you need at one time?

Anyway, the claim all along has been that supplies would be "back to normal" by the end of this year, and so far things seem to be tracking that way. If you look at rpilocator.com now, the entire first two pages are full of green lines, which is a DRASTIC improvement compared to just 6 months ago. And some of the major distributors are getting in shipments of 5,000, 6,000 at a time of some models and having them in stock for weeks on end. So one can clearly see that the situation is improving rapidly.

That said, I will make no claim one way or the other with regards to the question of whether or not shipping a Pi 5 is a "good idea".


On a similar note, I'm genuinely curious as to why Pi chose the "authorized reseller" model instead of selling them directly.


B2C, small-quantity sales are not fun. B2B is selling pallets at a time.


I'd be surprised if reselling through authorized sellers isn't much simpler and more problem-free than selling them directly.

I also expect that using resellers ensures better odds of protecting the brand/project goodwill. Resellers deal with problems like "I paid a ton of cash for a board and it arrived late and/or broken". Support alone is a nightmare, and I recall that Raspberry Pi struggled with PR when they started out. I vaguely recall Liz Upton being behind some ill-advised episodes that didn't improve Raspberry Pi's image and would get any other PR person sacked.


They do also sell them directly, though not in large quantities. They even have a retail store somewhere in the UK.


Because worldwide sales are really hard.

Having trusted local resellers is a much more scalable way to sell to local markets.


If scalpers are able to sell a product at higher price, doesn't that mean the company priced the product too low?


I think scalping is more of a supply issue; raising the official price of the product would only mean scalpers need more cash when they do their buying.


Raspberry Pi should raise the price until scalping isn't profitable. Until they can meet the demand, keeping the price low just hands scalpers money that should be going toward future product development.
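The break-even arithmetic is simple. All fee and price numbers here are purely illustrative:

```python
def scalping_profit(official_price: float, resale_price: float,
                    marketplace_fee_rate: float = 0.13,
                    shipping: float = 5.0) -> float:
    """What a scalper clears per unit after marketplace fees and shipping."""
    return resale_price * (1 - marketplace_fee_rate) - official_price - shipping

# At a $35 list price and ~$100 resale, a scalper clears about $47 per unit:
print(round(scalping_profit(35, 100), 2))  # 47.0
# Raising the official price to ~$82 would zero out the margin:
print(round(scalping_profit(82, 100), 2))  # 0.0
```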


A lot of RISC CPU architectures that were popular in the 1990s declined because their promulgators stopped investing and bet on switching to IA-64 instead. Around the year 2000, VLIW was seen as the future, and all the CISC and RISC architectures were considered obsolete.

That strategic failure by competitors allowed x86 to grow market share at the high end, which benefited Intel more than the money lost on Itanium.


It's more complicated than that.

Sun didn't slow down on UltraSPARC or make an Itanium side bet. IBM did (and continues to) place their big hardware bet on Power--Itanium was mostly a cover your bases thing. I don't know what HP would have done--presumably either gone their own way with VLIW or kept PA-RISC going.

Pretty much all the other RISC/Unix players had to go to a standard processor; some were already on x86. Intel mostly recovered from Itanium specifically but it didn't do them any favors.


Actually, they did. Intel promised an aggressive delivery schedule and performance ramp. The industry took it hook, line, and sinker. Meanwhile, AMD decided not to limit 64-bit to the high end and brought out x86-64.

Sun did an IA64 port of Solaris, which is definitely an Itanium side bet.

HP was involved in the IA64 effort and definitely was planning on the replacement of pa-risc from day 1.


> HP was involved in the IA64 effort and definitely was planning on the replacement of pa-risc from day 1.

As I remember, and as https://en.wikipedia.org/wiki/Itanium agrees, Itanium originated at HP. So yes, a replacement for PA-RISC from day 1, but even more so...


Another way to look at the Itanic is that HP somehow conned Intel into betting the farm on building HP-PA3 for HP. Which is pretty impressive.


Sun didn't slow down on UltraSPARC but they were just not very good at designing processors.


This isn't really true. IBM/Motorola need to own the failure of POWER and PowerPC, and MIPS straight-up died on the performance side. Sun continued with UltraSPARC.

It wasn't that IA64 killed them, it's that they were getting shaky and IA64 appealed _because_ of that. Plus the lack of a 64bit x86.


It's simply economics: Intel had the volume. Sun and SGI didn't have the economics to invest the same amount, and they were also not chip companies; they either didn't invest enough in chip design or invested it wrongly.

Sun spent an unbelievable amount of money on dumb-ass processor projects.

Towards the end of the 90s all of them realized their business model would not do well against Intel, so pretty much all of them were looking for an exit, and the IA64 hype basically killed most of them. Sun stuck it out with SPARC with mixed results. IBM POWER continues, but in a thin slice of the market.

Ironically, there was a section of Digital and Intel who thought that Alpha should be the basis of 64-bit x86. That would have made Intel pretty dominant: Alpha (maybe a TSO version) with a 32-bit x86 compatibility mode.


Look closely at AMD designs (and staff) of the very late 90s and early 2000s, and/or all modern x86 parts, and you'll see that, more or less, that's what happened, just not with an Alpha mode.

Dirk Meyer (co-architect of the DEC Alpha 21064 and 21264) led the K7 (Athlon) project, and it ran on a licensed EV6 bus borrowed from the Alpha.

Jim Keller (co-architect of the DEC Alpha 21164 and 21264) led the K8 (first-gen x86-64) project, and there are a number of design decisions in the K8 evocative of the later Alpha designs.

The vast majority of x86 parts since the (NexGen Nx686 which became) AMD K6 and Pentium Pro (P6) have been internal RISC-ish cores with decoders that ingest x86 instructions and chunk them up to be scheduled on an internal RISC architecture.

It has turned out to be sort of a better-than-both-worlds thing, almost by accident. A major part of what did in the VLIW-ish designs was that "you can't statically schedule dynamic behavior", and a major problem for the RISC designs was that exposing architectural innovations required changing the ISA and/or memory behavior in visible ways from generation to generation, interfering with compatibility. So the RISC-behind-x86-decoder designs get to follow the state of the art, changing whatever they need to behind the decoder without breaking compatibility, AND get to have the decoder do the micro-scheduling dynamically.


Yes, that's very much part of the history.

However, I disagree that it's the best of both worlds.

RISC doesn't necessarily require changing the ISA, any more than x86 does.


I'm certainly not going to claim that x86 and its irregularities and extensions of extensions is in _any way_ a good choice for the lingua franca instruction set (or IR in this way of thinking). Its aggressively strictly ordered memory model likely even makes it particularly unsuitable, it just had good inertia and early entrance.

The "RISC of the 80s and 90s" RISC principles were that you exposed your actual hardware features and didn't microcode to keep circuit paths short and simple and let the compiler be clever, so at the time it sort of did imply you couldn't make dramatic changes to your execution model without exposing it to the instruction set. It was about '96 before the RISC designs (PA-RISC2.0 parts, MIPS R10000) started extensively hiding behaviors from the interface so they could go out-of-order.

That changed later, and yeah, modern "RISC" designs are rich instruction sets being picked apart into whatever micro ops are locally convenient by deep out of order dynamic decoders in front of very wide arrays of microop execution units (eg. ARM A77 https://en.wikichip.org/wiki/arm_holdings/microarchitectures... ), but it took a later change of mindset to get there.

Really, the A64 instruction set is one of the few in wide use that is clearly _designed_ for the paradigm, and that has probably helped with its success (and should continue to, as long as ARM, Inc. doesn't squeeze too hard on the licensing front).


Seems to me that you just have to be careful when bringing out a new version. You can't change the memory model from chip to chip, but that goes for x86 too. Not sure what other behaviors are not really changeable.

Can you give me an example of this? SPARC of the late 90s ran 32bit SPARC.


> Plus the lack of a 64bit x86.

If you look at the definitions of various structures and opcodes in x86 you'll notice gaps that would've been ideal for a 64-bit expansion, so I think they had a plan besides IA64, but AMD beat them to it (and IMHO with a far more inelegant extension.)
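For context on what AMD actually did with that opcode space: the 0x40 to 0x4F byte range (the old one-byte INC/DEC r32 opcodes) was repurposed as REX prefixes in 64-bit mode. A toy decoder showing the bit layout:

```python
def decode_rex(byte: int):
    """Decode an AMD64 REX prefix (0100WRXB); returns None if not a REX byte.

    In 32-bit mode these same bytes were the one-byte inc/dec r32 opcodes,
    which is exactly the kind of opcode-space reuse discussed above.
    """
    if byte & 0xF0 != 0x40:
        return None
    return {
        "W": (byte >> 3) & 1,  # 1 = 64-bit operand size
        "R": (byte >> 2) & 1,  # extends ModRM.reg
        "X": (byte >> 1) & 1,  # extends SIB.index
        "B": byte & 1,         # extends ModRM.rm / SIB.base / opcode reg
    }

print(decode_rex(0x48))  # {'W': 1, 'R': 0, 'X': 0, 'B': 0} (the common REX.W)
print(decode_rex(0x90))  # None (NOP, not a REX byte)
```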


  > and IMHO with a far more inelegant extension
what could they have done that would have been better?


> That strategic failure by competitors allowed x86 to grow market share at the high end, which benefited Intel more than the money lost on Itanium.

In that sense, Itanium was a resounding success for Intel (and AMD).


Itanium was a success right up until they actually made a chip.

What they should have done is hype Itanium, and then, the day it came out, say: yeah, this was a joke; what we actually did is buy Alpha from Compaq, and it's literally just Alpha with an x86 compatibility mode.

Then they would have dominated.


You'd be surprised. Here's an article I wrote on the modernization of PCIe hotplug in Linux:

https://lwn.net/Articles/767885/


Thunderbolt devices appear in the OS as a PCIe switch, so you need two additional bus numbers (one for the Switch Upstream Port and one for the Switch Downstream Port). If the device is hotplugged to a port which has run out of bus numbers, you'll get this error message.
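A back-of-the-envelope sketch of the budget problem, using the two-buses-per-device model from the comment (a deliberate simplification; real topologies vary):

```python
def hotplug_ok(reserved_buses: int, hotplugged_devices: int,
               buses_per_device: int = 2) -> bool:
    """Check whether the bus numbers reserved behind a port can absorb a chain.

    buses_per_device = 2 follows the comment's model: one bus for the Switch
    Upstream Port and one for the Switch Downstream Port of each device.
    """
    return hotplugged_devices * buses_per_device <= reserved_buses

# Say firmware reserved 8 spare bus numbers behind the Thunderbolt port:
print(hotplug_ok(8, 3))  # True  (6 buses needed, 8 available)
print(hotplug_ok(8, 5))  # False (10 buses needed, only 8 available)
```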

Mika Westerberg is constantly fine-tuning the allocation of PCI resources in the Linux kernel to avoid such scenarios. Some recent patches:

https://lore.kernel.org/linux-pci/20220905080232.36087-1-mik...

https://lore.kernel.org/linux-pci/20221130112221.66612-1-mik...

On macOS, it's possible to pause the PCI bus, reallocate resources and unpause the bus:

https://developer.apple.com/library/archive/documentation/Ha... (search for "Supporting PCIe Pause")

We don't have that on Linux unfortunately, so we depend on getting the initial resource allocation right.

Sergei Miroshnichenko has worked on such a reallocation feature for Linux but it hasn't been accepted into mainline yet and he hasn't posted a new version of his patches for almost two years, so the effort seems stalled:

https://lore.kernel.org/linux-pci/20201218174011.340514-1-s....


It sounds like the pause/unpause might be the way to fix this properly, since trying to be heuristically smarter sounds like a recipe for never-ending corner case bugs like the OP’s issue.

The patch for pausing and unpausing seems quite reasonable, except that it does require driver support (unsurprising - you’re literally reallocating the resources used by the driver!). I suppose if you had at least a few movable devices then you should be ok in the event of a hotplug event, so you’d have to hope that enough drivers bother to support the feature.

I wonder what is necessary to get people to care about the patch enough to fix it up and mainline it? I suppose the problem it fixes is still niche enough that not so many people are clamoring for the fix.


The PCI resource allocation code is fairly intricate and everyone is scared that changing it may cause regressions. Sergei's patch set is quite intrusive and it would be necessary to somehow break it up into smaller pieces that are slowly fed into mainline over several release cycles, always watching out for regression reports. So, the problem is known, but the engineers working on PCI code in the kernel are given higher priority stuff to work on by their employers, hence the issue hasn't gotten the attention it deserves.

Actually I forgot to mention there's another solution: A PCIe feature called Flattening Portal Bridge (PCIe Base Spec r6.0 section 6.26). That was introduced with PCIe 5.0. It's more likely that FPB support is added in mainline than the pause/unpause feature. It's supported by recent Thunderbolt chips and it's an official feature of the PCIe standard, so companies will prefer dedicating resources to it rather than some non-standard approach.


In the dynamic use cases, the PCIe spec is kind of shabby on the addressing space; it is theoretically fixed by FPB.

I guess this is unfortunate for those niche hardware use cases.

Isn't FPB in PCIe 4.0? (I am not a SIG member, so I cannot read the specs.)


I meant: I know about PCIe addressing (from the web, the Linux code, and a book I read years ago), but I cannot read the actual modern FPB specs.


Would a workaround be that whenever the kernel detects this happening (and it did: it printed the message to dmesg), it somehow increases an internal counter so that on the next reboot there will be more resources?

This would require the kernel being able to either update its own command line somehow, or having some permanent storage where it could keep the value.

Or this could all be done by systemd: detect that message, increase the resource, and the next reboot will fix it.
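A rough userland sketch of that idea. The matched dmesg text and the choice of `pci=hpbussize=` as the knob are my assumptions (that parameter does exist in kernel-parameters.txt, but verify the message your kernel actually prints), so treat this as illustrative only:

```python
import re

def next_reservation(dmesg_text: str, current: int, step: int = 8) -> int:
    """Bump a persisted reservation count when dmesg shows bus-number exhaustion.

    The pattern below is an assumed form of the kernel's hot-added bridge
    error; adjust it to whatever your kernel actually prints.
    """
    if re.search(r"No bus number available", dmesg_text):
        return current + step
    return current

def cmdline_snippet(reservation: int) -> str:
    # pci=hpbussize=N asks Linux to reserve at least N bus numbers
    # below each hotplug bridge.
    return f"pci=hpbussize={reservation}"

msg = "pci 0000:05:00.0: No bus number available for hot-added bridge"
new = next_reservation(msg, 8)
print(cmdline_snippet(new))  # pci=hpbussize=16
```

A systemd unit would persist `new` somewhere (e.g. /var) and write the snippet into the boot loader configuration, which is exactly the hackish part the sibling comment objects to.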


Kernel state does not survive reboots, AFAIK.

That would need help from userland, which is not involved in the early boot process.

You could, I guess, change kernel init parameters and save that in your boot loader, but that is very hackish.


Maybe it can be introduced gradually, making the reallocation an optional feature that a driver might support. Then drivers can independently implement the resource reallocation feature.

Mainline drivers can move gradually. If they want to be nice for out-of-tree drivers then they can describe a timeline for deprecating and removing the support for non-reallocating drivers.


What is the point in having all of the drivers be open sourced and mainlined if we're not willing to fix them to support this?


> What is the point in having all of the drivers be open sourced and mainlined if we're not willing to fix them to support this?

With open source and mainlined drivers, it's very difficult to change all the drivers and ensure they work.

Without open source and mainlined drivers, it becomes impossible.


Possibly it is hard, tedious, or the people able to fix it don’t think it is worth the effort.

Open source projects rely on volunteers mostly so it isn’t like there’s some outside force to appeal to. If nobody volunteers a solution, then it isn’t important enough to solve. The point is that, if it were important enough to fix, anybody with the requisite skills could do so.


[flagged]


You are free to comment on the question raised, instead of a dismissive empty ad-hom.


While tongue in cheek, the answer is accurate: open source does not mean "free service contract"; it means that you can take the code and modify it yourself (and preferably upstream the fix).

Patches come both from vendors and users experiencing an issue. Vendors take care of most things, but for esoteric problems you might only have a handful of people experiencing it. The vendor is unlikely to care, so if you do not write the patch or pay someone to do it, who will?

Still better than the competition, where such problems will never be fixed unless it generates sufficient bad PR...


You're doing the same thing: dragging the original conversation off into berating the commenter for not fixing it themselves, because you owe them nothing and they shouldn't be so entitled.

> "Still better than the competition, where such problems will never be fixed unless it generates sufficient bad PR..."

The competition comes under "pay someone else to do it".


What OS are you using where you can get the vendor to implement kernel features to fix obscure driver issues?

I'm sure there's an amount of money you can throw at Microsoft to get something done. I don't know how much it is, but I'm guessing it's more than it would cost to find a vendor to do it for Linux.

The serious answer to " What is the point in having all of the drivers be open sourced and mainlined if we're not willing to fix them to support this?" is "There are many points to this, but one of them is that it's possible to fix them to support it, if someone wants to put in the effort. It's worth something that it's theoretically possible if you really need it, even if no one else has done it yet.".

The answer "You can do it yourself" is meant to help them understand "Anyone can do it, someone needs to step up to the plate. But it's also true that it costs resources. If you're wondering why no one else has done it yet, it's the same reason you haven't done it yet".


With enough money you can get that kind of support from any Linux vendor, eg RedHat or Oracle.


> You're doing the same thing, dragging the original conversation off into berating the commentor for not fixing it themselves because you owe them nothing and they shouldn't be so entitled.

Ah, this argument again. Yeah, the maintainers owe you nothing, as they have already worked their asses off to give you something for free. You have the right to make polite bug reports and discuss fixes, but no one is entitled to force volunteers to do work.

But that is not the same thing as everyone having to fix their own shit. 100 million users do not need 100 million developers.

What matters is that the users that have issues can fix issues, and if the issue affects enough people, it will eventually affect someone able and willing to fix it. That is why open source works, but it requires that some people put in the effort, and many learn to do it exactly when they get annoyed by a bug.

So yeah, if you are not willing to wait for someone else to come around and volunteer to fix it, patch it yourself or pay someone else to do it. That's how the system works, regardless of how demeaning you feel this is to non-developers or developers that feel that their time is more valuable than that of others.

> The competition comes under "pay someone else to do it".

Sure, if you have enough money to convince Apple or Microsoft specifically to prioritize fixing your issue (which may be in an unsupported or deprecated configuration) above what else they were doing, which would cost a whole lot more than just engineer and manager time. You have no alternative, as only your specific vendor can make the fix. Realistically speaking, if you had that kind of money you probably already have employed engineers that you could get to fix your open source issues for you and would not be arguing on hacker news about the need to write patches.

For open source, you don't have to convince anyone in particular. Can't convince the first person you try with money? Just ask the next person, anyone can submit the patch.


I think you are mixing two arguments. One is a good, valid argument about how volunteer maintainers don't owe anyone anything, and absolutely don't deserve to be harassed, insulted, coerced, guilt-tripped, etc. The other is about internet Linux commenters (away from bug trackers and issue lists) replying in ways that close down and end conversation of anything which isn't toeing the 'party line' of how great Linux/FOSS/libre/gratis software/etc. is.

The parent comment by AnIdiotOnTheNet was not in the context of bug reports filed to maintainers, or insulting anyone, or demanding anything specific. The parent of that said that the patch looked good "but" would need driver support, perhaps suggesting that's a showstopper. AnIdiotOnTheNet asked what the point of having open source drivers is if they can't be fixed, or charitably steelmanned read as "the drivers are open so they can be fixed to work with the patch". Blueflow's reply "you are free to submit a patch" is technically correct, but low value - few people on HN aren't aware of that. The following "or request a refund" is conversation ending, "fix it or shut up, stop talking about it".

It's a common reply format on internet Linux discussions which is closer to 'silence wrongthink' or 'cancel culture' than tech discussion.

> "So yeah, if you are not willing to wait for someone else to come around and volunteer to fix it, patch it yourself or pay someone else to do it."

Or ... talk about it, rant about it, 'raise awareness', exercise freedom of speech. "Patch it or shut up" aren't the only options. And look, dkozel replied with a long and technically detailed comment[1] and didn't need anyone jumping in to silence unapproved questions.

[1] https://news.ycombinator.com/item?id=34016094


Your ideas and intentions might be good and noble, but at the end of the day it's the contributors and maintainers who burn out. From my impression, most people in OSS are already fed up with supporting users, and I am too. Telling users off with "fix it or shut up, stop talking about it" is a necessary step to protect yourself.


What it sounds like you’re saying is “If you don’t know how to code, GTFO” because in all likelihood the parent made this comment because they’re not capable.


Not quite. If you are a non-developer not patient enough to wait for others to volunteer, or are a developer thinking that your time is somehow more valuable than that of the maintainers, then GTFO. :)

Waiting patiently and politely reporting bugs is a fine strategy: if a problem affects enough people, it will eventually affect a developer capable and willing to fix it. If you want it to go faster, you will have to get your hands dirty; many contributors acquired their skills exactly because they were annoyed by an issue and decided to fix it.


Where did AnIdiotOnTheNet's comment include not being a developer, not being patient, or thinking their time is more valuable than others?

Why is it so much more important to put someone in their place and win internet points with ad-homs in the "community" which supposedly values freedom?


"Whats the point in ${enormous amount of work others did for me for free} ..." is never a polite move. Neither is your second sentence.

You two are just rude. No amount of freedom forces other people to bear with that. And if you think its not rude, still, the world is large enough to avoid each other.


> "Whats the point in ${enormous amount of work others did for me for free} ..."

That is the least charitable interpretation of it, and I think not at all what it actually says. A much better reading of it is, paraphrased:

"The PCI-e patch is nice, but it would be a dealbreaker waiting on driver support."

"The drivers are open source and checked into the same tree, so there's no dealbreaker there in waiting for vendors or coordinating with third party organisations and their release schedules {and that's the point of desiring open source drivers, so the system isn't hobbled by binary blob drivers and vendor release schedules}".


> dealbreaker

Very poor choice of words. If the deal is bad, request a refund!

I understand what you are trying to say, but i think that's just entitlement.

Just accept that, as a non-contributing OSS user, you have zero leverage over which features other people pour their energy into. If you do, that's a charitable exception and happens at the generosity of the one doing the work for you.

Edit: Let me quote the licensing terms that you agreed to and gave you permission to use the linux kernel at all:

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.


Can you get off your high horse about finding someone to punish for some imaginary sin which hasn't even been committed in this thread, and respond to some of the things I said and the points I made?

The "deal" in question was imaginary person A offering patches to the kernel to fix the articles's PCIe allocation, and the Kernel maintainers hypothetically refusing the patches on the grounds that it would break compatibility with drivers and so it cannot happen. Then the comment that the drivers could be updated to match, so the patch could hypothetically happen. There is NO entitlement anywhere in this imaginary scenario, there is no demand for anyone to write such patch, no expectation that someone get right on updating the drivers, no insinuation that someone owes a review of such a patch, nothing that you are so het up about has happened or been implied to happen, been demanded, expected, requested, suggested or implied.

> "Just accept that, as a non-contributing OSS user, you have zero leverage over which features other people pour their energy into."

Where did you come up with the idea that I am a non-contributing OSS user? Because it drives your superiority fantasy, I suspect, where "putdowns of the inferior" are the order of the day.


I wouldn't exactly call the comment you're replying to an ad hominem attack.


Why not? You don’t think it changes the topic from being about the content of the comment (open source drivers) to being about the person who wrote the comment (what they should and shouldn’t be permitted to say based on how much they paid or didn’t pay)?


If the problem lies in the entitlement of a person, an appropriate response is going to be an ad hominem, and rightfully so.


You have not shown any signs of this spectre of entitlement haunting your every comment. If you had, an appropriate response would be educational, helpful, or perhaps a link to Rich Hickey's Gist[1].

By your comments, it would be appropriate to ad-hom you now for your apparent entitlement to controlling other people's speech about OSS, right?

[1] https://gist.github.com/richhickey/1563cddea1002958f96e7ba95...


It's simpler: your attitude is bad, i told you off, you have no intentions to change your attitude, neither do i.

And if that's "controlling other people's speech" to you, then yes, your attitude is the problem. Realistically, i'm just some random dude on this forum; telling you off is the worst thing i can do to you.

Enjoy your refund.


> "you have no intentions to change your attitude, neither do i."

Multiple times now I have pointed out that I have not shown the behaviours you are accusing me of. As for who has intentions to change: the difference is that you are wrong.

> “i'm just some random dude on this forum, telling you off”

You are doing so like a teacher with the wrong understanding, and despite being corrected you are intent on making sure someone gets told off whether it’s appropriate or not.

> Enjoy your refund.

You are free to link to the place in this thread where I personally said I was unsatisfied with some piece of software, or unsatisfied with the work someone was doing on it, or in any way mentioned desiring a refund or expecting more work. I think you won't be able to find such a place, so your 'clever comment' falls flat.


The key takeaway, especially for the author, who wants things to "just work", is that he should have used a USB external disk instead of a Thunderbolt one. OSes tend to have more issues with Thunderbolt disks (such as the one you explained in detail) than with USB disks, because the latter are more common, so more corner-case bugs have been eliminated.


Yeah, first thing I thought was oh that drive looks esoteric.


Any book or documentation you recommend to read to someone interested in getting involved?


Filed chapter 11 twice.

Bad management made the wrong bet; they thought Itanium and Windows would take over the world.

But what really broke all UNIX workstation manufacturers' backs was the unwillingness to cannibalize their products with affordable machines. SGI workstations were not affordable to students, so they got x86 machines instead and installed Linux. Google was built with x86-based Linux boxes because that's what the founders were using and could afford. UNIX workstation manufacturers lost an entire generation of young engineers that way. Apple eventually offered what they should have: Sleek, affordable machines with a rock-solid UNIX underneath a polished UI.


This is why some people are so excited about RISC-V, BTW - they're re-enacting the exact same market play as x86 did back then. Starting out from low-end hardware only good for single-purpose use (we call that "embedded" these days) and scaling up to something that can run a proper OS, with MMU and virtual memory support. And doing it while beating everyone else on price, as well as potentially on performance.


I think it was 2001(?) when Industrial Light & Magic (ILM) replaced their SGI workstations with Linux boxes running Red Hat 7.5, powered by Nvidia Quadro2 GPUs.


That, and in ~2004 ILM, along with the rest of Lucasfilm and the games division, moved to the Lucas campus in the Presidio, during which a lot of SGI machines were scrapped.

Source: I was the designer of the datacenter and network cabling infra for the Presidio.

