Part of it is the brevity, and part of it is the shortcuts. E.g. when I did my master's, one thing I quickly realised was that papers that expressed an algorithm in mathematical notation almost always lacked essential details.
My impression is that overly large leaps are immediately obvious in code, whereas in mathematical notation everyone accepts leaps that can obscure the fact that essential details have been left out.
E.g. you'd have papers on thresholding images for OCR (deciding what is background and what is foreground) where the results turned out to be highly dependent on the values of certain variables that were never defined, leaving you to reconstruct the parameters by trial and error if you wanted to reproduce the results.
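To make that concrete, here's a minimal sketch of Niblack-style local thresholding, a classic algorithm from that OCR literature. The parameter names and default values below are illustrative, not taken from any particular paper; the point is that the output hinges on two knobs, the window size `w` and the weight `k`, which papers would often write as bare symbols and never pin down.

```python
# Niblack-style local thresholding (sketch, pure stdlib).
# The result depends heavily on `w` and `k` -- exactly the kind of
# parameters papers would leave as undefined variables.
from statistics import mean, pstdev

def niblack_threshold(image, w=3, k=-0.2):
    """Return a binary image: True = foreground (dark ink).

    image: 2D list of grayscale values (0 = black, 255 = white).
    w: half-width of the local window, so the window is (2w+1) x (2w+1).
    k: weight on the local standard deviation.
    """
    h, wid = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(wid):
            # Gather the local window, clipped at the image borders.
            window = [image[j][i]
                      for j in range(max(0, y - w), min(h, y + w + 1))
                      for i in range(max(0, x - w), min(wid, x + w + 1))]
            # Niblack's rule: T = local mean + k * local stddev.
            t = mean(window) + k * pstdev(window)
            row.append(image[y][x] < t)  # darker than T => foreground
        out.append(row)
    return out
```

Run it with `k=-0.2` and again with `k=-0.5` on a noisy scan and the segmentation can change substantially, which is why an unspecified `k` makes the reported results effectively irreproducible.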
Today I'm immediately suspicious if results are presented as maths outside of fields where the maths is essential (and sometimes even then), because I see the notation as prone to being used to gloss over sloppy work, or to save space by leaving out essential details.
I'm sure this is not the case in all fields, and that people with a more extensive maths background will be able to fill in more of those leaps without much effort, and so it might very well be acceptable in some fields. But to me a notation that makes it that easy to hide missing details is a liability.
Code's easy for me (unless "mathy" in appearance like Haskell) but mathematical notation's always made me feel dyslexic.
I'd love to see this beauty or clarity or whatever that people find in mathematics, but I've never caught even a hint of it. Seems like it needs a good IDE to make up for deficiencies in its language.
That's because code has documented and testable semantics, whereas mathematical notation works more by convention than anything. It sits between natural language and code in terms of ambiguity, but it is flexible and clear enough that it remains the best way for practitioners to communicate with each other.
...that's the thing, actually: in programming, syntax defines semantics, because the syntax is what gets executed, so in practice "it means what it does (what is executed)".
(Linguists would want to murder me for saying this, I know.)
That's why some programming languages can even be defined by implementation. (Though as a programmer I try my best to avoid these languages...)
A lot of that is familiarity but I don't think all of it is.