Do we have different definitions of "marginally qualified"? Idk, I feel I'm a decent engineer - I can certainly do whatever leetcode medium they throw at me, as much as that counts for anything - and can actually code, but I still get maybe 1 callback per 50 applications.
Does "marginally qualified" mean "Ivy League Competitive Programmer PhD" or something?
> Do we have different definitions of "marginally qualified"?
Not every applicant gets an interview. I recently hired. I got 90 applications and 80% of these were an instant "No": they didn't match the job description or had no permit to work in my country. I invited the rest. Simple interview: pair-program a dead simple app with me, from a prebuilt skeleton, in any framework of their choice. Make one GET request, render the result, and spot one needed optimization which then has to be implemented. 90% (I'm not joking) of candidates failed the first task. In half an hour, they couldn't send a GET request and parse the response as JSON. All were allowed to google and open any documentation they liked. Of all 18 who failed, 16 asked me if they could use an LLM for the task, which I denied.
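For scale, the first task described above amounts to roughly this much code (a minimal sketch; the URL is a placeholder, not the actual exercise, and the record shape is made up for illustration):

```python
import json
from urllib.request import urlopen

def fetch_json(url: str) -> dict:
    """Send a GET request and decode the response body as JSON."""
    with urlopen(url) as resp:  # hypothetical JSON endpoint
        return json.loads(resp.read().decode("utf-8"))

# The parsing half, shown on a canned response body:
body = '{"id": 1, "name": "Ada"}'
record = json.loads(body)
print(record["name"])  # -> Ada
```

That's the bar being discussed: one request, one decode, with documentation open.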
The GP is still sifting through hundreds of people, dozens of them capable, until he reaches somebody who convinces him of their competence. You were passed over because you weren't convincing enough.
Or maybe he is getting resumes from a channel that has fallen victim to machine-gun applying, and there are indeed thousands of unqualified people blasting resumes into every channel and only half a dozen real applicants.
TBF, I have no idea how to fix either one of those problems. Hiring is just completely broken.
I have only a resume to convince them. I have job experience at major companies, with examples of what I've built when there, personal projects I've made, a github link, a great GPA from a good school.
And I know similar Junior-mid people in the same boat. We can all do Fizzbuzz, we've all built things, and somehow we're not getting interviews, but people that can't do Fizzbuzz are.
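(For anyone who hasn't seen it: FizzBuzz, the screening task being referenced, is just this. A minimal version:)

```python
def fizzbuzz(n: int) -> list[str]:
    """Return 1..n as strings, with multiples of 3 replaced by 'Fizz',
    multiples of 5 by 'Buzz', and multiples of both by 'FizzBuzz'."""
    out = []
    for i in range(1, n + 1):
        s = ("Fizz" if i % 3 == 0 else "") + ("Buzz" if i % 5 == 0 else "")
        out.append(s or str(i))
    return out

print(fizzbuzz(15)[-1])  # -> FizzBuzz
```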
Do thousands of incompetents also machine-gun apply to, say, mechanical engineering, accounting, marketing, HR, or finance gigs? Is it just tech?
> Do thousands of incompetents also machine-gun apply
Enough that it's HR or some automated software that first screens your application.
> Something isn't adding up.
Yes. The pipeline "posting > application > screening" is now completely broken. In your case it's quite possible that it's HR or the screening software that your resume is failing to convince. I've been hoping for anecdotes and studies in this direction (from people who have access to the HR side and/or the screening software and are inclined to report), but those are at least not common. What we do hear from is tons of people who can program and who aren't getting even first interviews.
Big companies and small need to agree on some standards and create qualifying exams that they will actually accept as proof of competence. Degrees somehow don't prove anything, experience doesn't, blah blah blah. It's exhausting to have to prove to interviewers that I'm not mentally disabled at every turn and it's a waste of time for everyone.
Create certifications that actually count for something and aren't just a blip on a resume that may tick a box, but will actually move you past technical trivia questions. I know some people have a deep repulsion to this and I think it would be fine to have a technical interview gauntlet for those that choose not to engage with any type of certification and a simplified interview format for those that have passed the prerequisite tests.
I don't care how long, rigorous, or ridiculous the tests are. Just agree on some effing standard.
Watch out for what you ask for. Plenty of big vendors have certification programs. ... And for some combinations of field and vendor, they are red flags - rather than pre-validation. That is, far too many applicants have the certification but do not have the grounding knowledge without which the certification is sort of useless - potentially more dangerous than validating.
That's exactly what I mean though: agree on something that actually means something. Not certs that are just a checkbox where you still get grilled, because many people don't believe they carry much value, and others believe they're a negative signal.
>"And most refused to look at anybody deviating from their ideal background in my experience."
This is often because the culture of job-hopping for better pay every 18 months has eroded the willingness to pay for training or adaptation. Why pay for someone to learn if they're just gonna leave soon; the pre-trained person is a better deal if you'll have to pay to retain anyway.
Which was caused by cost cutting measures, MBA disease, in companies to begin with.
We’re just seeing the end of the cat and mouse struggle that’s been going on since the 60s. And massively accelerated in the 80s.
It’s unfortunate for companies though because they’re the ones that will lose out in the end when all the experienced people start retiring and they have no one to hire.
It’s an untenable position to not train people, period. There is no schooling you could go through that would educate a junior dev to the level of a senior dev. And it’s the same for any other role. Experience is not optional.
I think the primary stimulus that created the “job hopping culture” was actually the hot labor market for software developers. Other fields experienced real ‘cost-cutting’ without resulting in a lot of ‘job-hopping’.
I agree that this situation is undesirable, but it seems to be stable, somewhat like the result of repeated play of the prisoner’s dilemma.
That definitely massively accelerated it, but you’re looking way too short term; that’s only been the last 10 to 15 years.
I agree that other industries are not YET where software is, but you’re not looking hard enough if you don’t see the short tenures compared to the 25-30 years they used to have.
And yeah, it might be in an equilibrium now, but how long can it stay in an equilibrium? I’d guess at max 10 to 15 years.
I'm guessing the majority of people now in their 50s and 60s in computer-related careers had very eclectic jobs before settling down in computer-related stuff. After all, many never used computers at all until college or beyond.
My understanding is even in the early 2000s it was pretty much just firmware versus desktop software with a small niche for Mac developers.
Edit: my point was not that specialized software applications didn’t exist. It was that people were expected to be able to jump from stack to stack when they change roles in a way that has disappeared from modern job applications.
Well, and mainframes. And trading and financial systems. And numerical/scientific computing. And network services. And web sites and e-commerce. And flash, java applets, and browser plugins. And control systems. And operating systems and tooling. And cell phone applications. And games. And video/image/audio/music processing. etc etc
It was probably about as hard to move between those domains then as it is today. Which is to say that it's pretty hard and needs some concerted, non-trivial effort in shaping your experience and how you present it before trying to make a transition, and often either some kind of inside reference to vouch for you or an employer that was especially hard up for candidates. Or else an employer that straddled multiple domains and actively supported internal transitions.
Depending on what you could bring attention to in your prior experience and the size/needs of the new orgs you were seeking to move to, certain transitions were more feasible than others, but you could easily spend decades working on mind-numbing enterprise applications while wishing for opportunities in game development or trading or whatever and never get your resume so much as looked at. (And vice versa, even, for those who dreamed of "retiring" into the supposed quiet of enterprise apps or government IT or whatever.)
I basically agree with your edit. There was a lot more fluidity among roles and even just moving into computer roles from other engineering (and even non-engineering) fields. But that's not really what you wrote initially.
Other than fast factorization (Shor's algorithm) and a quadratic speedup on unstructured search (Grover's algorithm), is there anything that Quantum Computing can do? Those do seem important, but limited in scope - is this a solution in search of a problem?
I've heard it could get us very accurate high-detail physics simulations, which has potential, but don't know if that's legit or marketing BS.
Gary Marcus is cringe and wrong, but it's good to listen to folks who are cringe and wrong, because very occasionally, their willingness to be cringe means they're not wrong about something everyone thinks is true.
Gary Marcus constantly repeats the line that "deep learning has hit a wall!1!" - he was saying this pre-ChatGPT even! It's very easy to dunk on him for this.
That said, his willingness to push back against orthodoxy means he's occasionally right. Scaling really does seem to have plateaued since GPT-3.5, hallucinations are still a problem that is perhaps unsolvable under the current paradigm, and LLMs do seem to have problems with things far outside their training data.
Basically, while listening to Gary Marcus you will hear a lot of nonsense, but it will probably give you a better picture of reality if you can sort the wheat from the chaff. Listening only to Sam Altman, or other AI Hypelords, you'll think the Singularity is right around the corner. Listen to Gary Marcus, and you won't.
Sam Altman has been substantially more correct on average than Gary Marcus, but I believe Marcus is right that the Singularity narrative is bogus.
>Sam Altman has been substantially more correct on average than Gary Marcus
I've seen some of Marcus' other writing and he's definitely a colorful dude. But is Altman really right more often/substantively? Actually, the comparison shouldn't be to Altman but to the AI hype train in general.
And, while I might have missed some of Marcus's writing on specific points, on the broader themes he seems to be effectively exposing the AI hype.
He recently posted a question he put to grok3 — a variation on the trick LLM question (my characterization) of "count the number of this letter in this word." Apparently this Achilles heel is a well-known LLM shortcoming: models operate on tokens rather than individual characters, so letter-level tasks are disproportionately hard for them.
Weirdly though, I tried the same example he gave on lmarena and actually got the correct result from grok3, not what Gary got. So I am a little suspicious of his ... methodology?
Since LLMs are not deterministic it's possible we are both right (or were testing different variations on the model?). But there's a righteousness about his glee in finding these faults in LLMs. Never hedging with, "but your results may vary" or "but perhaps they will soon be able to accomplish this."
EDIT: the exact prompt (his typo 'world'): "Can you circle all the consonants in the world Chattanooga"
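For reference, the ground truth the LLM is being asked for is trivially computable (setting aside the prompt's "world" typo and treating "circle" as "identify"):

```python
word = "Chattanooga"
# Consonants = alphabetic characters that aren't vowels.
consonants = [c for c in word if c.isalpha() and c.lower() not in "aeiou"]
print(consonants)       # ['C', 'h', 't', 't', 'n', 'g']
print(len(consonants))  # 6
```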
I think it's fair to say though that if your results may vary, and be wrong, then they're not reliable enough for many use-cases. I'd have to see his full argument though to see if that's what he was claiming. I'm just trying to be charitable here.
I'm trying to be charitable as well — I suppose to both sides of the debate. Myself, I see pros and cons. The hype absolutely needs to be shut down, but a spokesperson that is more even-handed would be more convincing (in my opinion).
I don't see it as righteous glee, just a hope that people will see the problem: you could even begin to be suspicious of him. If it's so easy to get something wrong when you're trying to be correct, or to get something accidentally correct when you're trying to expose what's wrong... then what are we really doing here with these things?
Well, like any tool, hopefully using it where it makes sense. We already know that asking it to count vowels, etc. is not what we should be doing with these things. Writing code in Python however is a very different story.
If we get the Singularity, it's overwhelmingly likely Jesus will return concurrently.