Hacker News

I have been interviewed at Google and I was given a brainteaser.

"Can you estimate Google AdWords revenue for Italy - B2B?"



Get population estimates for all countries with AdWords revenue, get the AdWords revenue for each of those countries, fit a linear model, predict Italy. Or calculate AdWords revenue per capita and apply it.
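For what it's worth, the per-capita version of that approach is a one-liner to sketch. The country figures below are invented purely to illustrate the method; none of the revenue numbers are real:

```python
# Hypothetical data: population (millions) and AdWords revenue ($bn).
# All figures are made up for illustration only.
countries = {
    "Germany": (83, 4.0),
    "France":  (67, 3.1),
    "Spain":   (47, 2.0),
}

# Revenue-per-capita method: average the per-person revenue across
# comparable countries, then scale by Italy's population.
per_capita = [rev / pop for pop, rev in countries.values()]
avg_per_capita = sum(per_capita) / len(per_capita)

italy_population = 59  # millions
italy_estimate = avg_per_capita * italy_population
print(f"Estimated Italy revenue: ~${italy_estimate:.1f}bn")
```

A proper linear fit of revenue against population would work the same way, just with a slope and intercept instead of a single ratio.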


Er, if you can do that, then why not "get adword revenue for each of these countries, read the value for Italy"?


No, you had to give out a number.


How about $5B with a 99% confidence interval between $0 and $1 trillion. That's a number.


Sorry, that's not close enough. Can you do better?

(For the record, the person you originally responded to did not share that question with the intention of you actually trying to answer it. The point was to illustrate how ludicrous some of those questions are. In the interview you don't have access to any of the information you mentioned in your response.)


“Why would I need to estimate that if I worked at Google?”


“I’d like to understand how you structure problems, we can do a different topic if you like?”

“With ~50M people capable of clicking ads, and assuming they click one ad a day at 20cnt CPC, it would be around 3.6bn. I think my assumptions are on the high side, but order of magnitude is probably correct”

If done well, solving a problem (not a brain teaser) together can yield insight in how people approach new challenges.

If I’m hiring someone to do what they already know, this is indeed not a useful question. On the other hand, if I expect the role to tackle various business problems as well, I’m not sure what would be a much better way to get a sense of this. (Specifically because I’d want to discuss a not-too-complex problem that is outside your usual area of expertise.)
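As a sanity check, the arithmetic in the quoted estimate does follow from its own assumptions; it's only the inputs that are debatable:

```python
# The commenter's assumptions -- all rough guesses, not real data.
people = 50_000_000    # Italians plausibly clicking ads
clicks_per_day = 1     # average ad clicks per person per day
cpc_eur = 0.20         # assumed cost per click in euros

annual_revenue = people * clicks_per_day * 365 * cpc_eur
print(f"~EUR {annual_revenue / 1e9:.2f}bn")  # ~EUR 3.65bn
```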


What I find weird about these questions is that they are expected to be answered without any research. How often do you expect your people to solve a problem in a domain they know nothing about without doing any research? And the given example is odd: a lot of useful information that would help answer the question is public, and the obvious way to answer it would be to look that up first, then estimate the parts that aren't public (but you won't know what information is publicly available until you look for it).


In the context that I've seen them used, the interviewer usually provides information if asked (and says so up front). E.g. if you don't know the number of inhabitants of Italy, it should be completely okay to ask.

In any case I don't think it is about the calculation or the final number. It's really the approach that you develop that's interesting.

I've seen this mostly used in (management) consulting though, where you actually are regularly expected to solve problems that you might not have a deep understanding of. When I was interviewing developers I discussed how they would architect an application to solve a specific problem. It gives similar insight and is closer to the actual job.


The problem is that the interviewer asks for a number. If the question was phrased as "How would you go about finding out the GoogleAds B2B revenue in Italy for 2017, which tools and methods would you use", I'd totally go for that question.

The reason is that to many people (myself included) your answer above is nonsensical.

  “With ~50M people capable of clicking ads, and assuming they click one ad a day at 20cnt CPC, it would be around 3.6bn. I think my assumptions are on the high side, but order of magnitude is probably correct”  
Where did the "people capable of clicking ads" 50M estimate come from? Why one ad a day on average? Why not 0.2? Or 15? Why 20c CPC and not 0.001 euro cent? Why not 1USD?

I'm all for "first-principles based structured thinking", but asking someone to demonstrate it by generating a rudimentary formula that won't survive any contact with reality, and populating it with almost completely random values, seems like a weird way to go about it.


It's typical of people who think that a tiny amount of knowledge makes them experts in anything related to reasoning and data, as a good number of practitioners in this field do.


It's about knowing how to structure problems, whether the components are public or whether the problem space is completely unfamiliar.

E.g. someone who guesses "$3 billion" on only a gut feeling likely wouldn't be a strong hire even if the number is correct, while someone who can correctly decompose the final answer into its components and make reasonable estimates would be a much better candidate.

For estimation interview questions, and problem-solving questions more generally, a "structural" approach to breaking down problems is important because on the job the interviewee will need to be able to identify what levers are available to them.


Oh, that is how they do it. /s


This isn’t a brain teaser. It’s market size estimation, I’m guessing this was not an engineering role?


How is that question different from "please estimate how many golf balls fit in a school bus"?

The only things changing are the parameters.

EDIT: it was for the Search team (search spam prevention); can't remember the exact role, it's been 4-5 years, sorry.


If you were interviewing for a job at a company whose main revenue stream involved transporting golf balls in school buses, that might be a perfectly reasonable interview question.


If it was an engineering role I’d definitely call it a brain teaser. But for, say, a marketing role, an interviewer wants to see that you understand the way money flows through the business and what the parameters therein are. Maybe it’s still a brain teaser, but I would say it’s relevant to the job.


If someone is sending emails about golf balls and school buses, it's probably spam. ;)


"Estimating it from the taxes paid to the state, I would say 0€." (Yes, that’s not the kind of expected answer)


Satisfaction from saying this during an interview at Google would be worth not getting the job :)


TRUE STORY:

I interviewed at Google earlier in my career. In one of the interviews, the interviewer asked me to recite/reconstruct off the top of my head the convex hull algorithm. I remembered the lectures in undergrad algo class where the professor talked about it. I remembered where in CLRS it was covered. I remembered the general outline (efficient algorithms are O(n log n), because you have to sort the points as one of the steps). But I was struggling to remember the details, because, you know, I hadn't worked on that sort of thing in several _years_.

So, the interviewer started pacing around the room, impatient that I couldn't remember the details (or recite in 10 minutes what it took a professor with notes 2 lectures to go through). He told me finally "We expect Google engineers to be able to solve problems. What if you were a Google engineer, and you had to solve this problem, what would you do?"

There is, of course, only one right answer to that question, which I provided: "I would look it up in a book. I know exactly where to find it." I wouldn't spend time trying to reinvent an algorithm that someone else had already thought of.

The interviewer did not like that answer. I did not get a job offer. Now that I interview candidates at a different large tech company, I make a point to not get hung up on things that are easily researchable.
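For what it's worth, once you do look it up, the standard O(n log n) answer fits in a few lines. This sketch uses Andrew's monotone chain, one of several textbook variants (CLRS itself presents Graham scan, which also sorts first):

```python
def cross(o, a, b):
    # 2D cross product of vectors OA and OB; positive for a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: sort points, build lower and upper hulls.

    Returns hull vertices in counterclockwise order. O(n log n) from the sort.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()  # discard points that would make a right turn
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Last point of each half is the first point of the other; drop duplicates.
    return lower[:-1] + upper[:-1]

hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
print(hull)  # the interior point (1, 1) is dropped
```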


> "What if you were a Google engineer, and you had to solve this problem, what would you do?"

Well... I might ask one of the other people on the team (or in the company) for some help. Because I'm probably going to ask one of them for a review (or someone else will review it anyway). No man is an island, etc.

What did Sergey and Larry do to solve these problems? They hired more people!


They consulted existing literature, they asked in internet forums, etc

Expecting people to know and remember beyond the basics is simply ridiculous, especially in the age of Google.


> We expect Google engineers to be able to solve problems.

Sounds more like he expects Google engineers to have perfect recall, rather than to be able to solve problems.

That's just a trivia question, nothing to do with problem solving.


If I wanted to have a job that required perfect recall, I would have become a doctor. Maybe I should have if that is what this industry is going to require.


As an interviewer I always advocate for coding problems to be done at the computer and to allow internet searches. On the rare occasions someone does a cut-and-paste solution, I then modify the problem to see if they understand the solution enough to make the necessary change.

It's not perfect, but it's about as close as you can get to the actual work they are interviewing for.

I interviewed at Facebook recently and was shocked when they used Coderpad but did not allow code execution. Threw me for a loop, because I like to decompose problems and test that everything is still working as I add to it, something this did not allow me to do. I consider that a GOOD quality, why prevent me from demonstrating it?


I do that; I ask candidates to code in notepad instead of an environment where you can compile, and it's intentional.

Failures come in three types: bad understanding (failure to understand the problem), bad planning (a plan that doesn't actually solve the problem), and bad implementation (failure to implement the plan correctly). The third type includes typos and off-by-one errors that I don't want the candidate to get hung up on. I might ask about an issue if I spot it, but if not I'd rather keep going on the more relevant parts of the solution.

Also, you can leave part of the solution in pseudocode without getting distracted by a bunch of squiggly underlines, and handwave away parts of the problem that we don't care about.


> "What if you were a Google engineer, and you had to solve this problem, what would you do?"

"I would Google it", and this time the answer is very corporate! Still not what the interviewer expected, but it would be funny to watch him argue that his employer's main product is not qualified for the task.

Besides, I find it silly to ask for details that are easily found in printed material, where there's less chance of error than in remembering them.


> He told me finally "We expect Google engineers to be able to solve problems. What if you were a Google engineer, and you had to solve this problem, what would you do?"

At that point - he's talking to you as if he thinks you have the understanding of a child, basically.

There really is no positive recourse available other than to politely end the interview, and chalk up another day of your life (plus N days spent preparing) that you'll never get back.



