I often tell people about this post by Dan Luu from 2014:
https://danluu.com/everything-is-broken/
It's about how very, very many bugs there are in lots of software, and how most of us developers have gotten used to unseeing them. Since reading it, I've always found it harder to unsee the bugs myself.
This post is a great writeup of another angle on the same phenomenon -- how even as we unsee the bugs we adapt our behavior to avoid them.
(Dan's post also has a proposed solution, but I've never found that one as convincing as the picture he paints of the problem.)
As a programming language developer, this is a persistent difficulty I’ve slowly been training myself out of. I know how to use a language I’ve written because, well, I wrote it—but if I sit down with a new user for an hour, they discover all the bugs I knew about and a few I didn’t! (This applies to all kinds of software, of course, it’s just particularly noticeable with developer tools.)
But an hour of someone’s time is hard to get, and once they’ve used the software a bit, they’re spent—you can’t make them naïve again. So fuzzing and randomised property testing have begun to take their place for me, as well as an adversarial dialogue in my own mind of “What happens if I try this stupid thing?” versus “The software should patiently and correctly do something reasonable, with no casualties, no matter what stupid thing the user does”, each constantly trying to one-up the other. It’s pretty fun, actually—I’ve thought of getting a “devil duck” and an “angel duck” to personify them, in addition to the standard-issue debugging duck. :)
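The "devil duck vs. angel duck" dialogue above maps pretty directly onto a randomised property test: the devil throws random garbage at the software, and the angel's claim ("do something reasonable, with no casualties") becomes the asserted property. Here is a minimal sketch in plain Python, with `parse_int` standing in as a hypothetical toy parser and `ParseError` as its one documented failure mode (both invented for illustration):

```python
import random
import string

class ParseError(Exception):
    """The only error a well-behaved parser should ever raise."""

def parse_int(src: str) -> int:
    # Toy stand-in for a real parser: optional sign, then ASCII digits.
    s = src.strip()
    if not s:
        raise ParseError("empty input")
    sign = 1
    if s[0] in "+-":
        sign = -1 if s[0] == "-" else 1
        s = s[1:]
    if not s or not s.isdigit():
        raise ParseError(f"not an integer: {src!r}")
    return sign * int(s)

def fuzz(trials: int = 10_000) -> None:
    # Devil duck: "What happens if I try this stupid thing?"
    # Angel duck: the parser must either succeed or raise ParseError,
    # never crash with anything unexpected.
    rng = random.Random(0)  # seeded, so failures are reproducible
    for _ in range(trials):
        garbage = "".join(
            rng.choice(string.printable) for _ in range(rng.randint(0, 12))
        )
        try:
            result = parse_int(garbage)
            # Property: anything that parses must round-trip.
            assert parse_int(str(result)) == result
        except ParseError:
            pass  # a documented, reasonable failure -- no casualties

fuzz()
```

The seeded `random.Random(0)` is the design choice worth copying: when the devil duck does win an argument, you can replay the exact same garbage while debugging. Libraries like Hypothesis automate this loop (plus shrinking failing inputs), but the core idea fits in a few lines.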