Hacker News: phreeza's comments

Seems completely unsurprising?

Unless they are contributing to the survival of their offspring.

Which is one theory for why grandmothers (post-menopausal women) are a thing.

It can work the other way, too. Your offspring may be more likely to survive if you stop consuming resources once they become viable.

Are you sure that availability of resources was a limiting factor during a large part of human evolution?

i.e. what has driven human population growth: a fundamental change in the availability of natural resources, or a fundamental change in how humans exploited them?

I'd argue it's the latter, driven by accumulated knowledge, and before writing, the key repository of that knowledge was old people.


Humans have selective adaptations to reduce resource competition between older and younger members of populations - examples are menopause and testosterone levels.

Part of the reason it benefited us that some but not all people become old is that people require more attention during two phases of their lives. Our biological evolution has prioritized care for the very young over the very old with respect to limited resources (like attention), effectively until the modern age. In some cultures, for instance, those with teeth must pre-chew food for those without, or members were expected to engage in ritual suicide at a certain age.


I think it's a (common) mistake to view any organism at a point in time as perfectly adapted.

It's like saying a car's pistons are designed to wear out: because they do, and since the car is perfectly designed (the mistake), it must be for a reason.

Also take menopause: as it happens, a female already has at birth all the oocytes (eggs) she will ever have. Menopause happens when they run out.

What you are arguing is that the number at birth is optimised via a very indirect feedback loop, as opposed to the very direct one of how many resources to set aside for eggs in terms of maximising the number of direct children versus resources used. Occam's razor suggests the latter is going to be stronger.

If what you say were true, think about it: old people wouldn't gradually crumble due to wear and tear; they would have evolved some much more efficient death switch. I.e. women don't suddenly die post-menopause.


The vast majority of human evolution happened in non-humans

Sure - though the tuned behaviour around turning the innate immune system up and down is probably dominated by the more recent part of that long history.

Well, given that the biggest killer of humans throughout most of our history was starvation, I think there's a good chance that's true.

How much accumulated knowledge do hunter-gatherers have?


Reminds me of my personal HN glory days, now 16 years ago!! https://news.ycombinator.com/item?id=1395726


I got so confused by that list of stubs; I read several of them expecting some sort of narrative arc that would loop back to the opening.


Not OP, but the most common reason I've heard is military-age males with unresolved mandatory military service status.


Oh good point! I knew some young Turkish men a while back that could not visit Turkey for similar reasons.


Observationally, ad spamminess is inversely correlated with user intent and platform prestige. So I suspect it will take quite a while until it gets quite this bad for the premium platforms.


It will happen slowly over time:

1) Start with a notification that ads are coming (already there)
2) Add one ad to start with
3) Slowly increase the number of ads
4) Make them a huge part of the experience (like Google now)


2) Go elsewhere.


3) Private equity buys the company, and turns the screws on the sticky users until blood comes out of the tap.

4) eulogies on the front page of HN


I think you meant 'elegy'.


I can’t recall the last time I’ve seen a poem or song about their amazing journey. So no.


Was leetcode-style interviewing not a thing before that? Cracking the Coding Interview was published in 2008, so I would assume it was already quite established by then.


I would argue that back then leetcode-style interviews probably filtered for the real talent Google was looking for (and that made possible many things). Then companies started cargo-culting it and people started gaming the system.


I wasn't a dev at the time, but from my research into how we got here, Google started with more abstract mental-discovery questions.

Then people got frustrated on both ends: some people got through it by just BSing, and people who lack any creative or big-idea processing (coming from the Microsoft Outlook team, I imagine) couldn't comprehend a connection between an abstract discussion and solving a math test. So they whined and threw a temper tantrum until the process was based around dev work (leetcode), but the Google abstract discussion was still the key part of the interview; you didn't need to literally solve the leetcode question perfectly.

Then you needed to solve the leetcode question perfectly, because people who BSed through that process by knowing just enough to mislead interviewers snuck in and turned out to be toxic employees. So you have to get the question 100% correct, character for character, or it's wrong and you're a toxic bad dev. But what if you memorize that question? They're all posted online now (thus defeating the purpose of it being an abstract discussion centered around programming). Well, using statistics, we'll just determine that getting past 5-10 of them in a row is acceptable odds. And just in case that's not good enough, we'll have multiple prison guards in every interview. Each round, new guards. If any guard doesn't pass you, for any reason or for no reason, then you're out.

Then people started writing books about this, making videos about it, selling courses, all while working at FAANG. They're the ones interviewing, so they can get you through the interview if you buy their course. Of course they're not going to want to get rid of a process they've spent years mastering, one that makes them more money on the side than their full-time Google job. If they had to grind and get past the first guards who asked light, silly questions, why can't you get through hardcore legendary no-respawn no-tools timed-trial Elite Four group-panel lecture interviews? It's literally identical to what they went through, right?

Even still, it was relatively justifiable for Google, which had a legitimate reason to want masters of programming who could live-invent answers to these without a mental hashmap lookup of answers. They can solve problems fast and make things scalable for big internet work.

But now that every company is cloning that, they're picking and choosing to "improve" on Google's hiring process. Why should we settle for Google's engineers when our shoe-company website can do better? They fixed the flaw in Google's process, the discussion about your thought process, by fully automating it: live-monitored, solve 10 leetcode problems on your own time, and if it's not an exact character match then it's wrong and you're blacklisted.

New hires can't even get their foot in the door to begin to have an opportunity to invent Google Maps. All the time that would have gone to exploring ideas like that now has to go to memorizing leetcode and basic day-to-day survival under every layer of badly cloned bloat, like Agile Scrum meetings where the project manager screams at devs when they point a story "wrong," despite every sprint for the past 3 years failing to complete because everybody's overloaded. Google does it, so we just cloned what they're doing and made it better, and the lazy devs are sandbagging. Just imagine how bad it would be if we DIDN'T use leetcode. Thank god the CFO introduced leetcode and personally sits in every engineer interview to let us know when they're fake bad devs.


My first job, in California, was just transitioning to leetcode-style interviews in 2007 following the industry trend. So it was probably spreading around that time.


But this is missing exactly the gap OP seems to have: going from a next-token predictor (a language model in the classical sense) to an instruction-finetuned, RLHF-ed and "harnessed" tool.


The book has a sequel https://www.manning.com/books/build-a-reasoning-model-from-s...

It will give you an answer to the extent anybody can.


Not sure if you were maybe joking, but Seeing like a Bank is itself a pun on the famous book "Seeing like a state"! https://en.wikipedia.org/wiki/Seeing_Like_a_State

So you've come almost full circle!


It is the full circle! patio11 refers to that explicitly in the blog. But most people here probably saw and remember Pat's blog more than the book.


The book is very famous! I would guess more people have heard of it than read that specific BAM post.


You're almost certainly right. But I bet the tables tip distinctly the other way if you're talking about HN readers instead of everybody. So I'd guess you're both right.


These companies are spending billions on custom datasets for a gazillion valuable tasks and are clamping down on exfiltration for distillation. It's not guaranteed that open source models will continue to keep pace.


China might purchase the data and train their models just to make the AI bubble pop. A few billion to throw a wrench into your competing superpower's economy might be totally worth it.


If they did that, it would be pursuing a commoditize-the-complement strategy of some kind, not just "popping" a bubble. Same as nVidia publishing their own open models. If anything, the value of everything AI would rise even further due to Jevons paradox.


And yet open models have been trailing more closely lately?

