The Linux kernel seems like a poor example of professionalism. It's written in C, released with barely any testing (and thus has major regressions, such as the dup system call being broken a while back), and the community around it is very unfriendly (sometimes with reason).
The development model is interesting, and was basically bound to happen once you had the license, the internet and eventually the tools (git). It's a step forward from proprietary software but we're not at the level of discipline required to build bridges, skyscrapers and nuclear power plants.
Software failure is costly and impactful. Why not aspire to levels of completeness and rigor comparable to civil engineering?
As a software craftsman and a professional, I recognize the necessity to impose a practice on younger, less experienced programmers to guide them towards high quality work. Civil engineering is a good example for all coders to learn from.
Safety critical software projects do have levels of completeness and rigor comparable to civil engineering. An example is the space shuttle control software, which ran for decades without a single serious failure - by contrast, the shuttle hardware in the same time period suffered two lethal failures.
The price paid for that, of course, is levels of cost and bureaucracy comparable to civil engineering. Try to build a website for a startup that way and you'll be out of business long before you ship anything. The correct level of rigor depends on what you're doing.
In all seriousness, you could have any level of 'rigor' you want, but you probably don't want to pay for it. I used to work at a hardware company that used custom ASICs. Each new hardware platform used a handful of new ASICs and took around five years to develop. And if you made a mistake in an ASIC it was costly to fix--sometimes you could modify a metal mask, but a full respin was approaching $1M. In that environment, you do everything you can to not make a mistake.
I was always glad I was in software. I used to joke that if I made a mistake, it cost around one cent of electricity to recompile.
But over time, they started to treat software development like they did ASIC development, and it took forever to get a feature implemented. Seriously, even tiny things like "I've got a UI idea that I could code up in a couple of days and see how it works out; if it sucks, no harm no foul" were turned into six-month orgies of spec writing, endless meetings, multiple layers of buyoff, and just overall inefficiency.
It's interesting - to me, anyway - that there's endless talk about patterns and methodologies and various heavily promoted (but often questionable) project management traditions in software.
But there's no formal, explicit and established pattern/process which guides a project through the stages of invention, innovation, refinement (debugging and user feedback), deployment and promotion, and preparation for maintenance with documentation.
If you're lucky and/or experienced and/or have good project management some of these things will be done well.
If not - nope.
In your case I can see why ASIC culture crept over to software. But if software had an existing culture, you - and everyone else - would be able to try out new ideas without management terror that you were going to break something.
> Software failure is costly and impactful. Why not aspire to levels of completeness and rigor comparable to civil engineering?
Completeness and rigor are also costly and impactful. There's a tradeoff to be made there. Additionally, it's easier to be complete and rigorous in civil engineering, where the inputs are easier to model and predict.