It's really more of a proto-agile approach! I've never done the manual-first approach personally, though I had a boss who had been on a project run that way early in his career and always wanted to do things that way again. The idea is to skip distilling requirements into specifications, engineering documentation, and user documentation. Instead you gather requirements AS user documentation, and only work backwards from there as necessary. It's a bit like skipping straight from requirements to V&V activities in a traditional systems-engineering approach by writing tests first in test-driven development.
My comparison to waterfall was a reductionist take on the specifications-up-front-before-coding aspect. Compared to the full waterfall process, it is definitely a fresh take!
I miss the passive nature of the web. A page got loaded and then it was idle. Now I can stare at a page of text with my CPU spinning at 100% in the background; it adds nothing to my experience except a quiet hum from the CPU fans.
Not to mention animations. Adding a seconds hand to https://mro.name/o/testbild.svg produces 25% CPU load on modern hardware. Ridiculous. Even more ridiculous is that it gets done often enough anyway.
The TDD advocates I have been exposed to are mostly cargo-culting consultants, and this alone makes me very sceptical. It has some good core ideas, but I find that in practice it leads to unit tests that are highly coupled to implementation details. Quick to execute but almost worthless from a testing perspective. It's best to throw them out after the initial implementation is done, since they usually just become a burden to maintain. Black-box tests at system boundaries are the only tests that bring value long term, in my experience. I would really like to hear more details about projects that use TDD and how they use it a few years down the line. All I can find are toy examples.
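A toy Python sketch of the coupling being described (all names here are hypothetical, just for illustration): the first test pokes at an internal helper and breaks on any refactor, while the second only exercises the public boundary and survives one.

```python
# Hypothetical module under test.
class Pricing:
    def total(self, items):
        return self._apply_discount(sum(items))

    def _apply_discount(self, subtotal):
        # Internal detail: 10% off orders over 100.
        return subtotal * 0.9 if subtotal > 100 else subtotal


def test_coupled_to_implementation():
    # Breaks if _apply_discount is renamed, inlined, or restructured,
    # even though the observable behaviour is identical.
    assert Pricing()._apply_discount(200) == 180.0


def test_black_box_at_boundary():
    # Only uses the public API, so internal refactoring is free.
    assert Pricing().total([60, 60]) == 108.0
    assert Pricing().total([10, 20]) == 30


test_coupled_to_implementation()
test_black_box_at_boundary()
print("ok")  # prints "ok"
```

The second style is the kind of test that tends to stay useful years later, since it pins down behaviour rather than structure.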
I don't like people saying just hello and expecting me to answer before they get to the point. If I see the message and immediately respond hello back, then I have to sit and watch the 'person is writing' indicator until I get to the real reason they contacted me. This time is just active waiting and can't be used for anything. In the situation where I see the hello some time later and reply, the other person might not even be there anymore. Should I actively wait, or start something else and keep an eye on the chat, prepared to be interrupted? A lose-lose situation for me.
If you are working on an old code base for a reasonable time, I recommend getting to know the old developers. Check out the commit history, see who wrote what, and talk with them a bit if they are still around to get a feel for them. After a while you will probably see patterns: guy A usually wrote solid code, guy B was a bit sloppier. If you are investigating a bug in a specific area of the code, use git blame and start investigating any additions by guy B first.
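A quick sketch of that workflow on a throwaway demo repo (the repo, file, and author names are made up for illustration): `git shortlog` shows who touched the history, and `git blame` attributes each line so you can start the bug hunt with the historically sloppier author's additions.

```shell
# Build a tiny demo repo with two authors.
repo=$(mktemp -d)
cd "$repo"
git init -q .
printf 'careful code\n' > util.c
git add util.c
git -c user.name="Alice" -c user.email=a@example.com commit -q -m "add util"
printf 'careful code\nhasty fix\n' > util.c
git add util.c
git -c user.name="Bob" -c user.email=b@example.com commit -q -m "quick fix"

# Per-author commit counts across the history: who wrote what.
git shortlog -sn HEAD

# Line-by-line attribution for the suspicious file.
git blame --line-porcelain util.c | grep '^author '
```

In a real investigation you would of course run this against the buggy area of the actual code base, not a fresh repo.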
Also, have the humility and the generosity to assume complex code is that way for some reason. The people who came before you are unlikely to be much bigger idiots than you. If something looks wacko, slow down and ask yourself how it got to be that way.
This is a rule of thumb. You should always check your assumptions about whether code is correct. However, if somebody and their code have proven themselves unreliable, it is a useful optimization to first consider that they are wrong rather than that they are right (humility).
If you were lucky enough to reach them, that is! Attrition rates are such these days that many coders simply flee after a release or so; it also depends on how old the code base is. Git and other version control systems come in quite handy, I agree.
Yes, definitely. But just reading the code and looking at the history can sometimes be enough to build the picture, when the same names repeatedly show up in troublesome code.
A variant of no. 3 is used in some large-scale telecom equipment supporting multiple millions of attached users with serious uptime requirements. State is distributed, handled locally, and replicated to a sibling node for fallback protection in case a node goes down. The replication is dynamic, so new sibling relationships are established as nodes come and go. There is also some more persistent state (like billing information) that is used to recover in case of total node failure, but a lot of the transient state can be rebuilt when users are forced to reattach.
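The sibling-replication idea above can be sketched in a few lines of Python. This is a minimal toy model under my own assumptions (all class and function names are hypothetical, replication is shown as a synchronous dict copy, and real systems obviously do this over the network with far more care): each node mirrors its transient state to one sibling, and on failure the sibling promotes its replica and re-pairs with a surviving node.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.state = {}     # transient per-user state owned by this node
        self.replica = {}   # mirrored copy of the sibling's state
        self.sibling = None

    def pair_with(self, other):
        # (Re-)establish a sibling relationship, e.g. after topology changes.
        self.sibling, other.sibling = other, self
        other.replica = dict(self.state)
        self.replica = dict(other.state)

    def attach(self, user, session):
        self.state[user] = session
        if self.sibling:                     # replicate to the sibling
            self.sibling.replica[user] = session

    def lookup(self, user):
        return self.state.get(user)


def failover(failed, survivors):
    # The sibling promotes its replica to primary state and then
    # picks a new sibling from the remaining nodes.
    sib = failed.sibling
    sib.state.update(sib.replica)
    sib.replica = {}
    new = next(n for n in survivors if n is not sib)
    sib.pair_with(new)
    return sib


a, b, c = Node("a"), Node("b"), Node("c")
a.pair_with(b)
a.attach("user1", "session-1")
# Node a goes down; b still serves user1 from the promoted replica.
survivor = failover(a, [b, c])
print(survivor.lookup("user1"))  # prints "session-1"
```

The persistent billing-style state mentioned above would live outside this scheme entirely; only the cheap-to-rebuild transient state goes through the sibling pair.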
Agree with this. I obviously comment on bugs, and also on convoluted code that I know will be hard to understand later. If it is just small details, it is easier to change them myself when I inevitably have to revisit the code later on, rather than nitpicking and arguing during the review. If I never have to revisit the code, then just let it be. Out of sight, out of mind.