Hacker News | xitrium's comments

Can you move to a city? This is what most people I know in this situation do. Though I had a great time getting a car and taking myself out for hikes, sauna / spa days, activities and parties in the east bay near SF. Great place for practicing being alone. I had to think about it like dating myself - where would I have taken a date for fun? Try a bunch of things and see what sticks and remember you can appreciate moments by yourself with this mindset and it's like 80% as good.


Ironically I find cities more isolating than the countryside. At least in the countryside you have the beauty of nature. In many modern cities, there is less and less social connection and community. Sometimes I suppose it is finding the right groups... And sometimes you have to take the initiative and create in person groups.


The suburbs, though, are the worst of both worlds.

Cities at least are full of a huge variety of people looking to make connections.


Depends on the suburb and HOA. Mine has groups for books, card games, mahjongg, cycling, ladies lunch, men's lunch, happy hours, pickle ball, etc... Some are in our community center, some are hosted in people's homes. There are also occasional block parties, although they tend to revolve around kids.


+1 Moving to a city.


Came here to say this. Pure slop, wtf is everyone going on about? Are they all LLMs too?


Someone mentioned mass-psychosis a while ago. I feel the same.

What's happening isn't a singularity but a zombie apocalypse.


The last thing I read about the link between amyloid-β accumulation and Alzheimer's was that the entire field was full of fake data ( https://www.science.org/content/blog-post/faked-beta-amyloid... ). In particular, even treatments that directly reduce amyloid-β in the brain did not restore cognitive abilities.

At least this paper tests both cognitive abilities as well as "amyloid-β pathologies." I'm not at all an expert in this field but gold nanoparticles sound like something you'd see on a late night infomercial, lol.


In a massive field, one researcher's fraud does not make it "full of faked data," however concerning it is.

The problem is viewing individual papers as the unit of truth in science. The "self-correcting" nature of science will actually reject entire papers, and entire directions of inquiry. Including, maybe, a causal relationship between beta amyloid and AD, but maybe not.

The other key part of science is holding everything in a state of uncertainty. There's some "facts" but mostly just hints and clues. And with Alzheimer's disease in particular we are trying to make progress with completely inadequate vision; we really can't even measure so much of what we want to measure. Feynman said it back in the 1960s, too, physicists have failed to deliver the tools to biologists to really measure what needs to be measured. There have been advancements, and DNA sequencing technology in the past decade has been turned into the most clever sorts of information theoretic microscopy by combining DNA sequences with many other biochemical processes. But we as a species still can not measure a lot of the things we'd like to measure.


I appreciate your commitment to modernist capital-S Science here :) I'm familiar with how the field ought to work but after working in Andrew Gelman's lab for some years, also with how it can fail us. Here I think the researcher in question has had a much larger impact than you are allowing for. Here's a choice quote:

> Every single disease-modifying trial of Alzheimer’s has failed.

> The huge majority of those have addressed the amyloid hypothesis, of course, from all sorts of angles. Even the truest believers are starting to wonder. Dennis Selkoe’s entire career has been devoted to the subject, and he’s quoted in the Science article as saying that if the trials that are already in progress also fail, then “the A-beta hypothesis is very much under duress”. Yep.

And the original exposé is quite interesting if you haven't read it yet: https://www.science.org/content/article/potential-fabricatio...


The hypothesis was under great duress even in 2004, when I took a protein structure course that spent a lot of time on prions and beta amyloid. Many people have devoted their careers to chasing this down; only one, as far as we know, published impactful fake data.

However, the particular faked data, despite lots of citations, has apparently not led to any clinical trials:

> Did the AB*56 Work Lead to Clinical Trials? That’s a question that many have been asking since this scandal broke a few days ago. And the answer is that no, I have been unable to find a clinical trial that specifically targeted the AB*56 oligomer itself (I’ll be glad to be corrected on this point, though).


I wish to retract this comment, as it was not based on full information. I was going off of the data from the linked article, but here are many more cases of fraud from leaders in the field:

Marc Tessier-Lavigne https://stanforddaily.com/2023/02/17/internal-review-found-f...

Berislav Zlokovic https://www.science.org/content/article/misconduct-concerns-...

Hoau-Yan Wang https://www.science.org/content/article/co-developer-cassava...

Sylvain Lesné - the researcher from grandparent comment's article

This list taken from Chris Said on Twitter https://x.com/chris_said/status/1724448550493315436?s=46


But he's correct: amyloid plaque theory was founded on bad data. Whether amyloid plaques are causal agents is unclear, but it was made to appear clear by poisoned data, and many studies conducted afterwards, in good faith, assumed that the information and conclusions were sound. That no longer appears to be the case; instead, something along the amyloid-beta pathway is more likely the true causal factor, with plaques an association. It has spawned something of a wild goose chase in Alzheimer's research and treatment.


The faked data is not foundational to the field, if we are to believe the article linked from that comment.

> But my impression is that a lot of labs that were interested in the general idea of beta-amyloid oligomers just took the earlier papers as validation for that interest, and kept on doing their own research into the area without really jumping directly onto the AB*56 story itself. The bewildering nature of the amyloid-oligomer situation in live cells has given everyone plenty of opportunities for that! The expressions in the literature about the failure to find AB*56 (as in the Selkoe lab’s papers) did not de-validate the general idea for anyone - indeed, Selkoe’s lab has been working on amyloid oligomers the whole time and continues to do so. Just not Lesné’s oligomer.

And

> Did the AB*56 Work Lead to Clinical Trials? That’s a question that many have been asking since this scandal broke a few days ago. And the answer is that no, I have been unable to find a clinical trial that specifically targeted the AB*56 oligomer itself (I’ll be glad to be corrected on this point, though).


A most eloquent response that needed to be said.


It's hard to discern discourse in a field one is unfamiliar with; I'd tried with the Alzheimer's fiasco. Here's my tuppence:

The plaques are known to be linked to Alzheimer's, the debunking of one paper that messed with its figures does not detract from the whole body of research. The inefficacy of plaque-targeting treatments may not be proof that the plaques are not causal in nature, only that their damage is not reversible/fully understood.


Don't worry, it's not just this field, roughly 70% of medical studies are fake or severely flawed [0].

[0] https://news.ycombinator.com/item?id=37572394


That may or may not be the case, but you're rather detracting from the original comment, whether deliberately or not.

The issue in the Alzheimer's world is the possibility that the very disease mechanism concept underlying the vast majority of research and interventional trials into which countless multiple billions have been poured, is incorrect.

Within that space, this is orders of magnitude more fundamental and serious than a flip aside that lots of trials have problems, so who cares about another?


> the very disease mechanism concept underlying the vast majority of research and interventional trials into which countless multiple billions have been poured, is incorrect.

Not "is incorrect," but might be incorrect. And we almost certainly won't know it is correct until we actually have a therapy.

Those pursuing cures could have waited until there was more solid science, but they and their funders took on the risk, knowing full well that the amyloid hypothesis is not proven.

This is not some indictment of science, this is normal risk taking for a problem that hugely affects society.


> Not "is incorrect," but might be incorrect.

That's why the part you quoted is preceded by "the possibility that".


But the entire framing of the comment is that the amyloid hypothesis is taken as fact and not possibility, when in fact the core research question is whether it is true or not.

It is the best possible explanation so far, but four decades of research have not reached a definitive conclusion.

An open problem is not a problem for science, that's the fundamental focus of science. The problem is people misrepresenting what science is and what it aims to do.


(Late back to this but) It's far more skewed than you're making out. That the a-beta hypothesis is true is/was the vastly dominant prevailing belief in the field, to the extent that it hasn't been a question that many 'experts' were willing to meaningfully address.

To be clear: for decades, researchers wishing to pursue lines of inquiry contrary to the a-beta hypothesis struggled for traction and funding, and saw their careers struggle as a result[0]. As such, trying to disprove the a-beta hypothesis was not the core research question for many/most, for a long time.

[0] https://www.statnews.com/2019/06/25/alzheimers-cabal-thwarte...


(Edit: forgot to say, thanks for continuing the conversation, it is much appreciated! This comment may come across snippier than I intended, but please know I appreciate your effort here even though my experiences lead me to a different conclusion.) This article is just sensationalization of the standard scientific process. Grants do get awarded by friends and it does appear very much like a cabal. Or things like this:

> A top journal told one that it would not publish her paper because others hadn’t.

Oh the horror, not getting published in a top journal! Turns out that most good science gets published outside the top journals.

This sort of behavior is bad, and has always been part of the process, and may actually be better today than it was a century ago, as the clubs are not nearly so tight as they were back then.

Early in my career I remember reading some of ET Jaynes' (an early Bayesian reasoning guy) discussions of his early career, and how he had to very very carefully choose his topics so that he wouldn't upset the big personalities in physics and thereby have his entire career crushed. It's better these days than it was then!

There will be sour grapes about funding, just as there are when VCs all jump on the hype train for the same idea, but my only scientific exposure to the amyloid hypothesis for the past 20 years has been in terms of it being an unproven hypothesis. Exploratory routes toward alternative explanatory hypotheses should have been pursued, and were pursued, and will continue to be pursued, but the question of "how much" is exceptionally difficult to answer.

Perhaps I'm biased from being in science too long, but I've seen so many sensational Stat News articles that never pan out when pushed on. I wouldn't trust them at all with stuff like this.


Ok, this is awful, and clearly goes much deeper, and I'm starting to be very convinced of your position:

https://www.science.org/content/article/misconduct-concerns-...


Ah, the game of telephone.

> For more than 150 trials, Carlisle got access to anonymized individual participant data (IPD). By studying the IPD spreadsheets, he judged that 44% of these trials contained at least some flawed data: impossible statistics, incorrect calculations or duplicated numbers or figures, for instance. And in 26% of the papers, the problems were so widespread that the trial was impossible to trust, he judged — either because the authors were incompetent, or because they had faked the data.

Firstly, this is only from one journal, Anesthesiology. Second, the phrase "at least" indicates that while 44% had some amount of (presumably) flawed data, only 26% of the studies were bad enough to be judged fake or severely flawed by this one (admittedly esteemed) researcher in the field of anesthesiology. It's important to be skeptical and do your homework when you hear sweeping and/or shocking results. It's also important to read carefully, especially with science journalism because it is written for clicks and broad audiences, not to reduce ambiguity and adhere to strict standards of accuracy.


I didn’t go look up the quote but based on your version here it sounds like roughly half (44%) had some kind of suboptimality, and of those roughly a quarter (26%) had serious problems preventing them from being relied on.

That means 11% of the total papers should be discarded, which means 89% of the papers can be used.
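Spelled out, the two readings being disputed here differ only in whether the 26% nests inside the 44% or refers to all trials (the 150-trial count comes from the quote; which reading is right is exactly what's unclear):

```python
# Two readings of "44% had at least some flawed data ... 26% were impossible to trust"
n_trials = 150      # "more than 150 trials" per the quote; 150 used for illustration
some_flaws = 0.44   # fraction with at least some flawed data
severe = 0.26       # fraction judged impossible to trust

# Reading A: 26% of ALL trials were untrustworthy (the severe set is a subset of the flawed set)
untrustworthy_a = severe * n_trials             # 39 of 150 trials

# Reading B: 26% of the FLAWED trials were untrustworthy
untrustworthy_b = severe * some_flaws * n_trials  # ~17 of 150 trials, i.e. ~11% of the total

print(round(untrustworthy_a), round(untrustworthy_b))  # 39 17
```

Under reading B you get the ~11% figure above; under reading A, more than double that.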


As per usual, science reporting fails to use precise language. It can be interpreted either way, although I think your interpretation is the slightly larger leap based on phrasing. In any case, it is far below the 70% (and not directed broadly at all scientific research) that GP states.


I went and read the article but have not tracked down the original paper.

It is at least 26%, because that was the percentage of studies, among those that provided access to their data, that proved faked or fatally flawed.

It may be substantially higher.


> In particular, even treatments that directly reduce amyloid-β in the brain did not restore cognitive abilities.

Correct, but this doesn’t constitute “fake data”. It could be that amyloid-β is a marker rather than a causative factor. Or it could be that amyloid-β related damage is downstream, and removing amyloid-β after the damage has been done won’t remove the other damage.

It’s too quick to wave away an entire field because a single theory didn’t pan out. Most medical research proceeds with a lot of dead ends before it is figured out.


Very true, but it seems like you may not have seen the exposé on this field in Science: https://www.science.org/content/article/potential-fabricatio...


The current law is more general; it’s the current policy’s consumer price heuristic that has become a bad approximation to the law. I like “The Economists’ Hour” on the topic.


Statistical Rethinking is the first book that helped me make sense of statistical modeling (and probability as applied to modeling): https://xcelab.net/rm/statistical-rethinking/

Huge fan, can't recommend enough.
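For a taste of the book's approach, here's its early grid-approximation idea in a pure-Python sketch (the book itself uses R; this translation and the variable names are mine, but the globe-tossing data of 6 "water" in 9 tosses is its running example):

```python
# Grid approximation of a posterior over P(water) for the globe-tossing example.
w, n = 6, 9  # 6 "water" observations out of 9 tosses

grid_size = 1000
p_grid = [i / (grid_size - 1) for i in range(grid_size)]  # candidate values of P(water)
prior = [1.0] * grid_size                                 # flat prior
likelihood = [p**w * (1 - p)**(n - w) for p in p_grid]    # binomial kernel at each grid point
unnorm = [pr * lk for pr, lk in zip(prior, likelihood)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]                   # normalize to sum to 1

# With a flat prior, the posterior mode sits at the sample proportion 6/9 = 2/3
mode = p_grid[max(range(grid_size), key=lambda i: posterior[i])]
print(round(mode, 3))  # 0.667
```

The whole point of the early chapters is that you can do honest Bayesian updating with nothing but a loop like this, before any MCMC machinery shows up.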


did you go through the course or use the book independently? I got a couple of chapters in but fell off


I will say up front that I don't think the social good is worth what we are collectively paying for it, but I do think the market hours are a reasonable device. This is basically because there are humans involved and they need to sleep (Matt Levine has written about this).

If you want the best price, you need to have all of the market participants bidding together. Market hours serve as a coordinated period in which ~all market participants agree to be online and bidding. Prices, thus, get stale overnight. But we assume that that is mostly okay, as business is normally conducted during business hours, and we assume that transactions can wait until the next day. ACH transfers take multiple days! (technically so do stocks, but that's mostly invisible to retail traders).

If you're a retail trader, I would caution you somewhat against trading after-hours; there is very little liquidity and it could cost you 100s of bps more.
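For a sense of scale on what "100s of bps" means in dollars (the trade size and spread here are made-up illustrative numbers):

```python
# Rough cost of a wide after-hours spread on a hypothetical retail trade.
notional = 10_000        # dollars traded (illustrative)
extra_spread_bps = 200   # 200 basis points = 2% worse execution (illustrative)

# 1 basis point = 1/10,000 of notional
extra_cost = notional * extra_spread_bps / 10_000
print(extra_cost)  # 200.0 dollars lost to the spread
```

During regular hours the same trade in a liquid name might cost a few bps instead, which is the whole argument for coordinated market hours above.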


I like "The market can stay irrational longer than you can stay solvent." -Keynes


> Unfortunately, the rethinking package which is a major component of the book itself depends on the V8 engine for some reason.

This is my fault, in a sense. In order to get the new Stan compiler (written in OCaml) distributed via CRAN (which requires everything to be built from source on its antiquated build servers), we decided to use js_of_ocaml to translate the OCaml compiler into javascript. See this thread for more details: https://discourse.mc-stan.org/t/a-javascript-stanc3/11044

When I posted that, I didn't really think we would end up using it.


Thanks for this. It's great to get some more perspective on changes ^_^


This is weirdly deliberately misquoted, maybe as a joke? full: https://statmodeling.stat.columbia.edu/2013/07/21/bayes-rela...


It says above that quote: "These are satirical, but real"


I'm surprised to see this argument. Tooling and infrastructure are only as clean as the services they support. I don't think you get to wash your hands because all you did was build e.g. Palantir a giant database that's great for storing locations if you know customers will be assassinating political dissidents with it.


I don't (or no longer) think it's great form to say negative things about former employers on the Internet, so allow me to expound on the positive aspects of my point.

I think having great tooling / infrastructure at least gives stakeholders more options in terms of the business direction they're taking. You can pivot and execute faster, which to me means you can pivot away from something ethically bad and execute in a different direction faster.

Great tooling / infrastructure in my mind is also ethically salvageable and redeemable. A great tool can help an ethically positive division of the company just as it can help an ethically negative one. It may not always be black and white.

Lastly, great tooling / infrastructure generally requires top talent, which can move anywhere and is sensitive to things like ethics. Having great tooling / infrastructure, or the threat of losing great tooling / infrastructure by losing talent to ethical issues, can act as pressure for management to choose certain projects over others. I think Grasshopper is an example of one such decision.

