Most algorithms questions only prove one thing: whether a candidate has brushed up on algorithms beforehand. That has some usefulness as a proxy measure for conscientiousness, but offers little beyond that. Maybe it shows that they're able to follow the reasoning, but that could just as easily be learned by rote and parroted back. The fact is that no one comes up with an efficient algorithm from scratch in 5 minutes: the candidate either knows it going in, or they're screwed.
I've been a C++ and Java programmer for 12 years and not once needed to use algorithmic complexity on the job. The fact that so many places focus on this baffles me. Big-O questions should be reserved for system-architect positions only. Of all the arbitrary topics they could pick, I think they should focus on pedantic syntax questions for whatever language they use: show snippets of code and ask, "what's wrong with this picture?" That would at least select for people with better language mastery and, in theory, faster development times.
I don't really believe this. You use algorithmic complexity every time you choose between a HashMap and an Array. I'm a javascript developer, and I make that kind of choice several times a day.
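For what it's worth, the kind of choice I mean looks something like this (a minimal sketch; the function names and scenario are made up for illustration):

```javascript
// Hypothetical task: detect a duplicate ID in a list.
// The array version scans linearly on each lookup: O(n) per check, O(n^2) total.
function hasDuplicateWithArray(ids) {
  const seen = [];
  for (const id of ids) {
    if (seen.includes(id)) return true; // includes() walks the whole array
    seen.push(id);
  }
  return false;
}

// The Set (hash-based) version does an average O(1) lookup: O(n) total.
function hasDuplicateWithSet(ids) {
  const seen = new Set();
  for (const id of ids) {
    if (seen.has(id)) return true; // hash lookup, no scan
    seen.add(id);
  }
  return false;
}
```

Picking between those two is an algorithmic-complexity decision, even if nobody writes the big-O on a whiteboard first.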
Having said that, I've never once needed to know, off the top of my head, the kinds of questions that they ask in these interviews. And I highly doubt Google engineers often need to either.
Since Google is such a wellspring of computational resources and has incredible scalability requirements, I'd question your claim that they don't use algorithmic thinking regularly.
He is saying they do need to care about algorithmic complexity when choosing data structures or implementations, but rarely need to figure out really clever algorithmic tricks (of the sort that tend to be tested by harder algorithmic questions, like "find whether there is a loop in a singly linked list", which relies on the running-pointer technique you never need to use in real life).
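For the curious, the running-pointer trick mentioned above (Floyd's tortoise-and-hare) is short enough to sketch; the node shape and names here are my own:

```javascript
// Detect a cycle in a singly linked list of nodes shaped like { value, next }.
// Advance one pointer by 1 step and another by 2 steps per iteration:
// if the list has a loop, the fast pointer eventually laps the slow one
// and they meet; if not, the fast pointer runs off the end.
function hasLoop(head) {
  let slow = head;
  let fast = head;
  while (fast !== null && fast.next !== null) {
    slow = slow.next;       // 1 step
    fast = fast.next.next;  // 2 steps
    if (slow === fast) return true; // pointers met inside a cycle
  }
  return false; // reached the end of the list: no cycle
}
```

Elegant, O(1) extra space — and, as the parent says, not something you derive cold in a 45-minute interview unless you've seen it before.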
If you didn't know the answer to the question, would you have been able to come up with it on your own, from scratch?
Merely practicing interview questions without pondering them produces the false impression that they're useless trivia. Actually sitting down to work out the answers, and struggling with the clues inside the questions that lead to those answers, reveals a much richer conceptual tapestry for data structures, one that goes beyond memorizing tricks.
The test lost its validity over time, but how many people care about their craft enough to know how to write software without using a library, and are willing to reason about it from first principles, or at least with a systematic method? Those people would be capable of doing original work. The rest I'd give CRUD tasks.
An algorithms question proves today what it proved 15 years ago: that the candidate learned the algorithm. It does not and never did prove that the candidate spontaneously generated the solution in the interview. In other words, it isn’t an intelligence test. It isn’t even a programming aptitude test, but it is a conscientiousness test because the candidate went to the trouble of learning many algorithms successfully. That does count for something, but not necessarily what the interviewers expect.
If you want to test for programming aptitude, give the candidate a realistic but difficult real-world problem and let them solve it in an IDE.