
Side note: Formal theorem proving is even rarer than formal model checking!

It is funny how "hardware design" is commonly used in the chip industry to describe what semiconductor design/verification engineers do. And then there's PCB designers using those same chips in their _hardware designs_.

Also there's computer architects being like "So, are we hardware design? Interface design? Software? Something else?"...

Meanwhile, all the mechanical engineers are looking from the outside saying "The slightest scratch and your 'hard'ware is dead. Not so 'hard' really, eh?" ;) ;)

Every sector has its nomenclature and sometimes sectors bump into each other. SemiEngineering is very much in the chip design space.


PCB Design != Chip Design.

The article was about chip design.

Not trying to stop you debating the merits and shortcomings of PCB Design roles, just pointing out you may be discussing very very different jobs.


I'm talking about chip design: Verilog, VHDL, et al.

Very specifications-driven and easily tested. Very easy to outsource if you have a domestic engineer write the spec and test suite.

Mind you, I am not talking about IP-sensitive chip design or anything novel. I am talking about iterative improvements to well-known and solved problems e.g., a next generation ADC with slightly less output ripple.


Sure, so, yeah "general1465" seemed to be talking about PCB Design.

And from what I know of SemiEngineering's focus, they're talking about chip design in the sense of processor design (like Tenstorrent, Ampere, Ventana, SiFive, Rivos, Graphcore, Arm, Intel, AMD, Nvidia, etc.) rather than the kind of IP you're referring to. Although, I think there's still an argument to be made for the skill shortage in the broader semiconductor design areas.

Anyway, I agree with you that the commoditized IP that's incrementally improving, while very important, isn't going to pay as well as the "novel stuff" in processor design, or even in things like photonics.


> easily tested.

Definitely not. You do normally have pretty good specifications, but the level of testing required is much higher than software.

> Very easy to outsource

The previous company I was in tried to outsource some directed C tests. It did not go well. It's easy to outsource but it's even easier to get worthless tests back.


> the level of testing required is much higher than software

No dispute there. I suppose I meant "simply" instead of "easily".

Outside of aerospace software (specifically, aviation and spacecraft/NASA), the topology of the software solution space can change dramatically during development.

Stated differently: the cyclomatic complexity of a codebase is absurdly volatile, especially during the exploratory development stage, but even later on... things can very abruptly change.

AFAICT, this is not really the case with chip design. That is, the sheer amount of testing you have to do is high, but the very nature of *what you're testing* isn't changing under your feet all the time.

This means that the construction of a test suite can largely be front-loaded which I think of as "simple", I suppose...


* The fact that there are comments misunderstanding the article, that are talking about PCB Design rather than (Silicon) Chip Design, speaks to the problem facing the chip industry. Total lack of wider awareness and many misunderstandings.

* Chip design pays better than software in many cases and many places (US and UK included; but excluding comparisons to Finance/FinTech software, unless you happen to be in hardware for those two sectors)

* Software engineers make great digital logic verification engineers. They can also gradually be trained to do design too. There are significant and valuable skill and knowledge crossovers.

* Software engineers lack the knowledge to learn analogue design / verification, and there’s little to no knowledge-crossover.

* We have a shortage of engineers in the chip industry, particularly in chip design and verification, but also architecture, modelling/simulation, and low-level software. Unfortunately, the decline in hardware courses in academia is very long standing, and AI Software is just the latest fuel on the fire. AI Hardware has inspired some new people to join the industry but nothing like the tidal wave of new software engineers.

* The lack of open source hardware tools, workflows, high-quality examples, relative to the gross abundance of open source software, doesn’t help the situation, but I think it is more a symptom than it is a cause.


> * Chip design pays better than software in many cases and many places (US and UK included;

Where are these companies? All you ever hear from the hardware side of things is that the tools suck, everyone makes you sign NDAs for everything, and that the pay is around 30% less. You can come up with counterexamples like Nvidia I suppose, but that's a bit like saying just work for a startup that becomes a billion dollar unicorn.

If these well paying jobs truly exist (which I'm going to be honest I doubt quite a bit) the companies offering them seem to be doing a horrendous job advertising that fact.

The same seems to apply to software jobs in the embedded world as well, which seem to be consistently paid less than web developers despite arguably having a more difficult job.


Oh by the way, I agree, NDAs all the time, and many of the tools are super user-unfriendly. There's quite a bit of money being made in developing better tools.

As for a list of companies, in the UK or with a UK presence, the following come to mind: Graphcore, Fractile, Olix, Axelera, Codasip, Secqai, PQShield, Vaire, SCI Semiconductor and probably also look at Imagination Tech, AMD and Arm. There are many other companies of different sizes in the UK, these are just the ones that popped into my head in the moment tonight.

[Please note: I am not commenting on actual salaries paid by any of these companies, but if you went looking, I think you'd find roles that offer competitive compensation. My other comments mentioning salaries are based on salary guides I read at the end of last year, as well as my own experience paying people in my previous hardware startup up to May 2025 (VyperCore).]


[flagged]


What a ridiculous comment…

Depends if you're looking at startups/scaleups or the big companies. Arm, Imagination Tech, etc. for a very long time did not pay anything like as well (even if you were doing software work for them). That's shifted a lot in the UK in recent years (can't speak for the rest of the world). Even so, I hear Intel and AMD still pay lower base salary than you might get at a rival startup.

As for startups/scaleups, I can testify from experience that you'll get the following kind of base salaries in the UK outside of hardware-for-finance companies (not including options/benefits/etc.). Note that my experience is around CPU, GPU, AI accelerators, etc. - novel stuff, not just incrementing the version number of a microcontroller design:

* Graduate modelling engineer (software): £50k - £55k
* Graduate hardware design engineer: £45k - £55k

* Junior software engineer: £60k - £70k
* Junior hardware engineer: £60k - £70k

* Senior/lead software engineer (generalist; 3+ yoe): £75k - £90k
* Senior compiler engineer (3+ yoe): £100k - £120k
* Senior/lead hardware design engineer: £90k - £110k
* Senior/lead hardware verification engineer: £100k - £115k

* Staff engineering salaries (software, hardware, computer architecture): £100k - £130k and beyond
* Principal, director, VP, etc. engineering salaries: £130k+ (and £200k to £250k is not an unreasonable expectation for people with 10+ years' experience).

If you happen to be in physical design with experience on a cutting edge node: £250k - £350k (except at very early stage ventures)

Can you find software roles that pay more? Sure, of course you can. AI and Data Science roles can sometimes pay incredible salaries. But are there that many of those kinds of roles? I don't know - I think demand in hardware design outstrips availability in top-end AI roles, but maybe I'm wrong.

From personal experience, I've been paid double-digits percentage more being a computer architect in hardware startups than I have in senior software engineering roles in (complex) SaaS startups (across virtual conferencing, carbon accounting, and autonomous vehicle simulations). That's very much a personal journey and experience, so I appreciate it's not a reflection of the general market (unlike the figures I quoted above) so of course others will have found the opposite.

To get a sense of the UK markets for a wide range of roles across sectors and company sizes, I recommend looking at salary guides from the likes of:

* IC Resources
* SoCode
* Microtech
* Client-Server


> Senior/lead software engineer (generalist; 3+ yoe): £75k - £90k

For London. Maybe higher for Remote US.

For the rest of the country, it's a fair amount lower, typically around the £60k region.


I was quoting salaries for people in Bristol and Cambridge ;)

It feels like software jobs will be moving more to Europe generally.

It's just the work-ethic divide that's at issue. The staying power of San Francisco is that people were willing to work long hours for stock, but these days even stock isn't worth enough.


> The fact that there are comments misunderstanding the article, that are talking about PCB Design rather than (Silicon) Chip Design, speaks to the problem facing the chip industry. Total lack of wider awareness and many misunderstandings.

No, there is no misunderstanding. Even the US companies mentioned _in the very article_ that have both software and "chip design" roles (whatever you call it) will pay more to their software engineers. I have almost never heard of anyone moving from software to the design side; rather, most people move from the design side to software, which seems like the more natural path.


You've taken two separate points I made and rolled them into one, resulting in you arguing against a point I didn't make.

The "misunderstandings and lack of awareness" I was referring to is in regards to many people outside the semiconductor industry. These aspects are hurting our industry, by putting people off joining it. I was not referring to people inside the industry, nor the SemiEngineering article.

As for salaries: See my other comments. In addition, I think it's worth acknowledging that neither hardware nor software salaries are a flat hierarchy. Senior people in different branches of software or hardware are paid vastly different amounts (e.g. foundational AI models versus programming language runtimes...). For someone looking at whether to go into software or hardware roles, I would advise them that there's plenty of money to be made in either, so pursue the one which is more interesting to them. If people are purely money-motivated, they should disappear off into the finance sector - they'll make far more money there.

As for movement from software into hardware: I've primarily seen this with people moving into hardware verification - successfully so, and in line with what the article says too. The transfer of skills is effective, and verification roles at the kind of processor companies I've been in or adjacent to pay well; such engineers are in high demand. I'm speaking from a UK perspective. Other territories, well, I hear EU countries and the US are in a similar situation but I don't have that data.

Do more hardware engineers transition into software than the other way around? Yeah, for sure, but that's not the point I think anyone is arguing over. It's not "do people do this transition" (some do, most don't), rather it's:

"We would like more people to be making this transition from SW into HW. How do we achieve that?"

And to that I say: Let's dispel a few myths, have a data-driven conversation about compensation, and figure out what's really going to motivate people to shift. If it only came down to salary, everyone would go into finance/fintech (and an awful lot of engineering grads do...) but clearly there's more to the decision than just salary, and more to it than just market demand.


I knew a guy who was a digital verification engineer for Intel. He was unceremoniously laid off and ended up taking work doing some sort of compliance at a very low-paying state agency I was a developer at.

Pretty sharp guy; we worked together a few times on problems far outside both our responsibilities/domains. I always wondered why he ended up taking that gig. Must have been horrible doing compliance work for what was likely an enormous pay cut.


Software pays better, which is why so many hardware people switched, including myself. In my group, which is mixed between the two, my software job classification nets me a higher bonus and easier promotions

Edit: also never have to stay late to rework components on dozens of eval boards, and also never have to talk with manufacturers 10 timezones away


> Software pays better, which is why so many hardware people switched

Something I noticed years ago browsing jobs in random large companies:

hardware or anything close to hardware (firmware, driver dev, etc.) was all outsourced. Every single job I saw in that domain was in India or China/Taiwan.

High level software jobs (e.g. node.js developer to develop the web front end for some hardware device) were still in the US.

I’ve wondered if that’s impacted why so many hardware people ran off to software.


Maybe the grass-is-greener on the other side applies here, but I would find it a privilege to be in a position where I could take a pay cut and work on hardware.

Also, I'm not convinced hardware pays less; I would just do it for less pay.


> * Chip design pays better than software in many cases

You are comparing the narrowest niche of hardware engineering to the broad software profession overall?

> (US and UK included; but excluding comparisons to Finance/FinTech software, unless you happen to be in hardware for those two sectors)

How many hardware jobs are in the finance/fintech sector? I've never met anyone working on hardware in finance, nor have I seen a job posting for one. And I doubt the highest paid hardware engineer is making remotely close to what the highest paid software engineer in finance is making.

> but I think it is more a symptom than it is a cause.

Or parents, industry professionals, college professors/advisors, etc advise students on future job prospects and students choose accordingly.


> You are comparing

Following the lead of everyone else... But if you like I could compare chip design to a selected narrow niche of software that helps me make my point? It doesn't matter. The point is that "hardware doesn't pay" isn't universally true, in the same way "software pays well" is also an untrue universal statement. See my other comments for more nuance or dive into salary guides. One of my comments listed some starting points.

> How many hardware jobs are in the finance/fintech sector?

Quite a few in absolute terms. Not many in relative terms. Pick the view that matches what you wanted to hear.

High Frequency Trading uses FPGAs and custom-ASICs extensively. They're even building their own fully custom data centres (from the soil testing to the chips to the software - in some cases, all done in-house). It's a secretive industry though, by nature, so you'd have to go digging to find out what Jump Trading, XTX Markets, Optiver, etc. are up to -- in London, Bristol, Cambridge, Amsterdam - to name but a few cities with these jobs. I know because I have friends doing them :-).

> Or [...] advise students

Yeah I would love that! But it hasn't really been working as a mechanism for a long time now. Most such people I come across have no awareness of semiconductors. At UK universities, we don't have department-specific expert career advice services, so they're useless. Parents are rarely familiar with the field (as per the general population). Professionals have minimal to no contact with students, especially anyone under 18. Professors/advisors are the best bet but that's really only going to capture students that were _already_ showing an interest.

To be honest, I think having a few more popular US/UK/EU YouTube channels doing any kind of FPGA-based or silicon-based hardware design (i.e. not just RPi or PCB stuff) would help hugely. I've not worked out a content strategy for this space that I think will work - yet!


> To be honest, I think having a few more popular US/UK/EU YouTube channels doing any kind of FPGA-based or silicon-based hardware design (i.e. not just RPi or PCB stuff) would help hugely. I've not worked out a content strategy for this space that I think will work - yet!

It doesn't exist because it's impossible. Practical experience valuable enough to get you an apprenticeship is nonexistent without knowing an engineer dedicated enough to give up the two or three hours of downtime they have to teach you for free.

Software pays, hardware never will.


How does one pivot? It seems to me the job market demand is probably even more concentrated than the software market?

From Software into Hardware? Your fastest route in is to learn Python and find one of the many startups hiring for CocoTB-based verification roles. Depends a bit on what country you're in - I'm happy to give recommendations for the UK!
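
For a flavour of what that kind of verification work looks like, here's a minimal cocotb-style testbench sketch. The DUT and its signal names (clk, a, b, sum) and the one-cycle latency are hypothetical, purely to show the shape of a test:

```python
# Minimal cocotb-style testbench sketch. The DUT and its signals (clk, a, b, sum)
# are hypothetical -- adapt to whatever RTL you're actually driving.
import random

import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge


@cocotb.test()
async def adder_random_test(dut):
    """Drive random inputs and check the DUT against a Python reference."""
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())  # 100 MHz clock

    for _ in range(100):
        a, b = random.randint(0, 255), random.randint(0, 255)
        dut.a.value = a
        dut.b.value = b
        await RisingEdge(dut.clk)
        await RisingEdge(dut.clk)  # assume one cycle of pipeline latency
        assert dut.sum.value == a + b, f"{a} + {b} gave {dut.sum.value}"
```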

If you're feeling like learning SystemVerilog, then learn Universal Verification Methodology (UVM), to get into the verification end.

If you want to stay in software but be involved in chip design, then you need to learn C, C++ or Rust (though really C and C++ still dominate!). Then dabble in some particular application of those languages, such as embedded software (think: Arduino), firmware (play with any microcontroller or RPi - maybe even write your own bootloader), compiler (GCC/LLVM), etc.

The other route into the software end of chip design is entry-level roles in functional or performance modelling teams, or via creating and running benchmarks. One, the other, or both. This is largely all C/C++ (and some Python, some Rust) software that models how a chip works at some abstract level. At one level, it's just high-performance software. At another, you have to start to learn something of how a chip is designed to create a realistic model.
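
To make "functional modelling" a bit more concrete, here's a toy Python sketch of the idea: a model that executes a made-up three-instruction accumulator ISA. Real functional/performance models (C/C++, SystemC, gem5, and so on) are vastly more detailed, but the basic shape - read an instruction, update architectural state - is the same:

```python
# Toy functional model of a made-up 3-instruction accumulator ISA.
# Purely illustrative; real models track far more state and timing.
def run(program, memory):
    acc, pc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":      # acc <- memory[arg]
            acc = memory[arg]
        elif op == "ADD":     # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "STORE":   # memory[arg] <- acc
            memory[arg] = acc
        pc += 1
    return memory

# Example: memory[2] = memory[0] + memory[1]
print(run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], {0: 3, 1: 4, 2: 0}))
```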

And if you're really really stuck for "How on earth does a computer actually work", then feel free to check out my YouTube series that teaches 1st-year undergraduate computer architecture, along with building the same processor design in Minecraft (ye know, just for fun. All the taught material is the same!). [Shameless plug ;) ]


As a Software Engineer, I had long thought about learning (and possibly moving into) Hardware Chip Design and/or its ancillary support domains, i.e. what you have listed.

I understand that learning FPGA programming (Verilog/VHDL/etc.) is a first step in that journey. Would you agree? Have you looked at books like FPGAs for Software Programmers? - https://link.springer.com/book/10.1007/978-3-319-26408-0

For each of the domains you have listed, would you mind sharing books/tools/sites etc.?

For example, while researching the above long ago, I had come across the following:

C++ Modelling of SoC Systems Part 1: Processor Elements - https://www.linkedin.com/pulse/c-modelling-soc-systems-part-...

C++ Modelling of SoC Systems Part 2 : Infrastructure - https://www.linkedin.com/pulse/c-modelling-soc-systems-part-...

gem5 Simulator - https://www.gem5.org/

Verilator Simulator - https://www.veripool.org/verilator/

Maybe you can provide a step-by-step roadmap on how a software guy (C/C++, systems programming) can move on to hardware chip design?


When I became interested in FPGAs recently, I read this book https://nostarch.com/gettingstartedwithfpgas

I bought a cheap FPGA board based on Lattice's ice40. There are free OSS tools to write, simulate, and install your Verilog/VHDL design onto the ice40.

It's probably a far cry from what a professional FPGA programmer does with Vivado etc., but it might give you an inexpensive feel for the basics and whether you want to pursue it.


Right; I already have looked into this (I had gotten an FPGA board from Digilent with Vivado and a whole bunch of FPGA books a while ago) but have not done much with it. I generally like to read/learn different subjects for intellectual curiosity before looking at job/business/etc. opportunities using it.

What I was interested to know from the gp's comment is: what would it take today to actually get into this industry, how the current AI tools make it easier (if at all), and what one should concentrate on if one wants to approach Hardware Chip Design as a whole. The C++ SoC modeling articles I listed above were a great help for me to understand where my software skills could be of immediate value here. Since the gp seemed to be knowledgeable in this domain I was curious to know his take on the overall domain.


I thought this might be better captured in a blog post, so here you go: https://ednutting.com/2026/02/20/sw-to-digital-logic-design....

I think if I spent more than the last 90 minutes on this, I might come up with more detailed and nuanced opinions for you. Irrespective, hopefully this offers some useful thoughts.

Happy to continue the conversation here on HN.


Good introductory overview post.

However, most of it was already known and I was looking for something more specific/detailed given the OP article.

In the past I had worked on a custom SoC (on the software side) and had often interacted with the hardware designers to understand more of their domain. The first surprise was that most Verilog/RTL guys didn't know anything about software (not even assembly/C!) while of course embedded software guys (like myself) didn't know anything about HDLs. There was (and is) a very hard disconnect, which is quite interesting. In the spirit of the OP article, the book I linked to actually shows a path via Vivado HLS for software folks to move into hardware design using their C/C++ programming skills. But I would like to see some hardware designer here validate that approach in the real world. Especially now that you have powerful AI tools available to help you do stuff faster and easier.

With the rise of demanding AI/ML/Crypto applications, there is now a greater interest in designing new types of custom hardware requiring Hardware/Software Modeling(verification/benchmarking)/Co-Design/Co-Verification etc. They involve designing complete SoCs containing CPU/GPU/FPGAs based on specific designs. Given that hardware design is a universe of its own, not knowing the overall picture i.e. architecture/tools/methodologies/etc. makes it quite daunting for software folks to approach it.

PS: Maybe you can augment this, or create a new blog post, with an actual case study based on your experience of the steps involved in going from ideation to tapeout.


Getting an EE degree is always an option — but since CS isn’t an engineering degree, getting a second bachelor’s will take four years part-time.

I’m doing that now at ASU and the total requirement for me is 71 semester credits. Maybe I could have found a program for which I only needed 60ish, but that’s the only program in the country with part-time remote classes that will cover what I need (antennas and RF). Someone who is interested in digital design will have more options. (And I haven’t really looked at other countries so YMMV considerably outside the US.)


> Getting an EE degree is always an option

If you aren't from London, San Francisco or Taipei, don't even bother.

EE was a complete waste of my time. I wish I had gone into SE/CS instead.


> * The lack of open source hardware tools, workflows, high-quality examples, relative to the gross abundance of open source software, doesn’t help the situation, but I think it is more a symptom than it is a cause.

To this, I would point to librelane/yosys/TinyTapeout/waferspace and say there are quite a few opportunities to learn stuff, and there are OSS initiatives trying to _do stuff_ in this field. I wouldn't know how it applies to the wider industry, but the ecosystem definitely piqued my interest. I do write quite a bit of embedded software in my day-to-day work though, so I've got a rough idea of what is in a chip. Would love to have the time to dive deeper.


that's all digital though right?

Of the things mentioned, yes. But there’s opensource analogue stuff too. Still, even with the open source stuff that there is, it’s a hard hobby to get into from scratch. The barriers to entry are still relatively high compared to just whipping up a website or toying with a Raspberry Pi.

You can submit analog to TinyTapeout now!

I was going to see if I could quote some job postings from my employer to compare this, and then discovered that even the intranet jobs board does not have salary ranges posted. Sigh. Going to have to feed that back to someone.

> Software engineers make great digital logic verification engineers. They can also gradually be trained to do design too. There are significant and valuable skill and knowledge crossovers.

> Software engineers lack the knowledge to learn analogue design / verification, and there’s little to no knowledge-crossover.

Yes. These are much more specific skills than HN expects: you need an EE degree or equivalent to do analogue IC design, while you do not for software.

However I think the very specific-ness is a problem. If you train yourself in React you might not have the highest possible salary but you'll never be short of job postings. There are really not a lot of analogue designers, they have fairly low turnover, and you would need to work in specific locations. If the industry contracts you are in trouble.


If there is a shortage, and engineers are trainable, are there apprenticeships available? I’d gladly move to this field.

In the UK, yes there are apprenticeships available (generally at the bigger companies like Arm) but not a huge number of them.

The new UK Semiconductor Centre has recently been asking (among many other questions) why the industry hasn't taken up the govt apprenticeship schemes more given the lack of engineers. The answers as to why are ultimately "it's complicated".

Your view on the salary during an apprenticeship will depend a lot on where you're coming from and your expectations. They're generally lower than the UK Median Salary (for any type of job; April 2025 it was £39k) at around £30kpa, but you're being paid to learn (unlike university studies, where you spend money to learn). Also, god knows why, but the apprenticeships aren't always in the most in-demand areas (though if I had to guess, it would be because there already aren't enough employees to do the in-demand work, let alone spend some of that time training new people... which in the long term is a disaster but we're in a short-term-thinking kind of world).


I'm coming from an OK-paid job in the US, but like you said, any pay is better than paying a school, and you get real on-the-job experience, not some textbook version of reality.

Nah, we don't do that here; instead, the ideal entry-level applicant should have 5 years of experience when applying.

Our ideal apprenticeship applicant must have:

- 5 years of experience in the proprietary, unique-to-our-company tech stack

- a PhD in semiconductor physics (MSc with 10+ years of experience is also acceptable)

- Taiwanese and US citizenship

- a desire to work 16+ hours a day for 6 days a week


I rather hope the mods detach this and the other asinine comments you’ve left across these threads…

Asinine? What I don't want is for potential undergrads to waste their time and money futilely chasing a mirage formed by propaganda.

You can sue me for this, but I don't think lying to starry-eyed teenagers so they compete like starved beasts for an ultra long shot at some semblance of a career is a good thing.

When the dust settles, all that they will have learned will be completely and utterly useless and they will have to reskill immediately. How about doing the right thing from the start?


Are all positions onsite for these kind of jobs?

It varies a lot by company and by role.

Most jobs in the architecture/modelling/design/verification roles are basically like software roles (in terms of working patterns / work environment). So, fully remote, hybrid and fully on-site are all possibilities. Hybrid (1-3 days per week in-office) is probably the most common arrangement I've come across in the UK.

If you're moving into stuff like physical design then you start to get involved in chip bring-up, in which case you need to be in a lab which you're unlikely to be able to build at home. That's when on-site starts to become a requirement.


Only if chip-to-chip communication is as fast as on-chip communication. Which it isn’t.

Only if chip-to-chip communication was a bottleneck. Which it isn't.

If a layer completely fits in SRAM (as is probably the case for Cerebras), you only have to communicate the hidden states between chips for each token. The hidden states are very small (7168 floats for DeepSeek-V3.2 https://huggingface.co/deepseek-ai/DeepSeek-V3.2/blob/main/c... ), which won't be a bottleneck.

Things get more complicated if a layer does not fit in SRAM, but it still works out fine in the end.


It doesn't need to: during inference there's little data exchange between one chip and another (just a single embedding vector per token).

It's completely different during training because of the backward pass and weight update, which put a lot of strain on the inter-chip communication, but during inference even x4 PCIe4.0 is enough to connect GPUs together and not lose speed.
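
As a back-of-the-envelope check, using the hidden size quoted above (the bf16 assumption, decode rate, and usable link bandwidth are illustrative assumptions, not measurements):

```python
# Rough per-token inter-chip traffic during inference, assuming bf16 activations.
# hidden_size is from the DeepSeek-V3.2 config cited above; the decode rate and
# PCIe bandwidth figure are illustrative assumptions.
hidden_size = 7168           # values per hidden state
bytes_per_value = 2          # bf16
tokens_per_second = 1000     # assumed aggregate decode rate across requests

bytes_per_token = hidden_size * bytes_per_value        # ~14 KiB
traffic_bytes_per_s = bytes_per_token * tokens_per_second

pcie4_x4 = 7.9e9  # ~7.9 GB/s for a x4 PCIe 4.0 link (approximate)
print(f"{bytes_per_token / 1024:.1f} KiB/token, "
      f"{traffic_bytes_per_s / 1e6:.1f} MB/s "
      f"({100 * traffic_bytes_per_s / pcie4_x4:.2f}% of a x4 PCIe 4.0 link)")
```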


It affects both. These systems are vastly more complex than the naive mental models being discussed in these comments.

For one thing, going chip-to-chip is not a faultless process and does not operate at the same speed as on-chip communication. So, yes, throughput can be reduced by splitting a computation across two chips of otherwise equal speed.


You’re mixing up HBM and SRAM - which is an understandable confusion.

NVIDIA chips use HBM (High Bandwidth Memory) which is a form of DRAM - each bit is stored using a capacitor that has to be read and refreshed.

Most chips have caches on them built out of SRAM - a feedback loop of transistors that store each bit.

The big differences are in access time, power and density: SRAM is ~100 times faster than DRAM but DRAM uses much less power per gigabyte, and DRAM chips are much smaller per gigabyte of stored data.

Most processors have a few MB of SRAM as caches. Cerebras is kind of insane in that they’ve built one massive wafer-scale chip with a comparative ocean of SRAM (44GB).

In theory that gives them a big performance advantage over HBM-based chips.

As with any chip design though, it really isn’t that simple.


So what you’re saying is that Cerebras chips offer 44GB of what is comparable to L1 caches, while NVidia is offering 80GB of what is comparable to “fast DRAM” ?

Sort of. But SRAM is not all made equal - L1 caches are small because they’re fast, and vice-versa L3 SRAM caches are slow because they’re big.

To address a large amount of SRAM requires an approximately log(N) amount of logic just to do the addressing (gross approximation). That extra logic takes time for a lookup operation to travel through, hence large = slow.
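
A quick sketch of that scaling (the word size and capacities chosen are just illustrative; the point is that decoder depth grows roughly with log2 of capacity):

```python
# Illustrative only: address-decode depth grows roughly as log2(capacity),
# which is one reason bigger SRAM blocks are slower to access.
import math

for size_bytes in (64 * 1024, 2 * 1024**2, 32 * 1024**2, 44 * 1024**3):
    words = size_bytes // 8                   # assume 64-bit words
    addr_bits = math.ceil(math.log2(words))   # ~decoder depth
    print(f"{size_bytes / 1024**2:>10.1f} MiB -> {addr_bits} address bits")
```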

It’s also not one pool of SRAM. It’s thousands of small SRAM groups spread across the chip, with communication pathways in between.

So to have 44GB of SRAM is a very different architecture to 80GB of (unified) HBM (although even then that’s not true as most chips use multiple external memory interfaces).

HBM is high bandwidth. Whether that’s “fast” or not depends on the trade off between bandwidth and latency.

So, what I’m saying is this is way more complicated than it seems. But overall, yeah, Cerebras’ technical strategy is “big SRAM means more fast”, and they’ve not yet proven whether that’s technically true nor whether it makes economic sense.


Right. L3 caches, i.e. SRAMs of tens of MB or greater sizes have a latency that is only 2 to 3 times better than DRAM. SRAMs of only a few MB, like most L2 caches, may have a latency 10 times less than DRAM. L1 caches, of around 64 kB, may have a latency 3 to 5 times better than L2 caches.

The throughput of caches becomes much greater than that of DRAM only when they are separated, i.e. each core has its private L1+L2 cache memory, so the transfers between cores and private caches can be done concurrently, without interference between them.

When an SRAM cache memory is shared, the throughput remains similar to that of external DRAM.

If the Cerebras memory is partitioned in many small blocks, then it would have low latency and high aggregate throughput for data that can be found in the local memory block, but high latency and low throughput for data that must be fetched from far away.

On the other hand, if there are fewer bigger memory blocks, the best case latency and throughput would be worse, but the worst case would not be so bad.
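
A crude way to see that trade-off numerically (all hit rates and cycle counts below are made-up assumptions, purely to illustrate the shape of the argument):

```python
# Crude average-access-latency model for partitioned on-chip SRAM: some fraction
# of accesses hit the local block, the rest go to a remote block over the fabric.
# All numbers are made-up assumptions for illustration only.
def average_latency(local_hit_rate, local_cycles, remote_cycles):
    return local_hit_rate * local_cycles + (1 - local_hit_rate) * remote_cycles

# Many small blocks: very fast locally, expensive when you miss.
print(average_latency(0.90, 2, 60))   # -> 7.8 cycles on average
# Fewer, bigger blocks: slower locally, but remote trips hurt less.
print(average_latency(0.98, 10, 30))  # -> 10.4 cycles on average
```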


> L1 caches are small because they’re fast

I guess you meant to say they are fast because they are small?


Thanks, TIL.

This author thinks Cerebras chips were deployed at scale to serve users worldwide in just one month since the partnership announcement?

Seems like nonsense to me.


Did the author claim this?

OpenAI and Cerebras have been working together at some level for nearly a decade.


Cerebras has been serving their own inference users for sometime. Not unreasonable to deploy a turnkey product as-is to start a partnership and then iterate from there?

At least Einstein didn't just suddenly turn around and say:

```ai-slop

But wait, this equation is too simple, I need to add more terms or it won't model the universe. Let me think about this again. I have 5 equations and I combined them and derived e=mc^2 but this is too simple. The universe is more complicated. Let's try a different derivation. I'll delete the wrong outputs first and then start from the input equations.

<Deletes files with groundbreaking discovery>

Let me think. I need to re-read the original equations and derive a more complex formula that describes the universe.

<Re-reads equation files>

Great, now I have the complete picture of what I need to do. Let me plan my approach. I'm ready. I have a detailed plan. Let me check some things first.

I need to read some extra files to understand what the variables are.

<Reads the lunch menu for the next day>

Perfect. Now I understand the problem fully, let me revise my plan.

<Writes plan file>

Okay I have written the plan. Do you accept?

<Yes>

Let's go. I'll start by creating a To Do list:

- [ ] Derive new equation from first principles making sure it's complex enough to describe reality.

- [ ] Go for lunch. When the server offers tuna, reject it because the notes say I don't like fish.

```

(You know what's really sad? I wrote that slop without using AI and without referring to anything...)


That's some pretty good verbatim Claude Opus 4.6 if I'd say so myself

Hear hear!

I maintain it’s because productive people know how to focus on what matters, to cut through the noise, and it’s not just by carefully thinking things through (though that’s an important skill too). It’s partly because they “just don’t see” the noise - if you like, they’re not distracted by it, they can tune it out - or rather, they don’t need to spend any energy tuning it out because they don’t ‘see’ or hear it in the first place!

I’ve frequently been:
1. Complimented on my productivity
2. Told I need a less messy workspace/environment.

One of these is true, the other is a road to depression - wasting time and energy tidying up and then feeling like I got no actual work done because, well, I didn’t!

There’s obviously a limit - continual small bits of sorting and organising ensure I can still sit at my desk and find stuff on my computer, but it doesn’t need to be the extreme clear-desk policy that proponents of Clean Work seem to be pushing. There’s a huge zone in between the two extremes.


> if you like, they’re not distracted by it, they can tune it out - or rather, they don’t need to spend any energy tuning it out because they don’t ‘see’ or hear it in the first place!

The human body is not made of regular lines. You can see it in ergonomics accessories. They are not what we would call beautiful. While I love to tidy every once in a while (mostly for cleaning), everything eventually falls back into some organic arrangement where I don't need to think about what I need, and what I don't need eventually gets removed. I think about task planning, then I offload the result into the environment. Starting fresh every day would just gobble up my time reconstructing the environment again.


I don’t tidy up very often, but when I do, it doesn’t take much time or energy. I just dump everything that isn’t version controlled into a junk folder, and it feels great.

I keep Inbox Zero, mostly, using this system. If I haven't read it, how important could it have been? CTRL+A, DEL gets you to zero.

Instructions unclear: I purchased multiple bins, labeled them V1, V2, V3, and have dumped most of my pens, pencils and notebooks into them. What now?

Lol

