I know someone who used ChatGPT to diagnose themselves with a rare and specific disease. They paid out of pocket for some expensive and intrusive diagnostics that their doctor didn't want to perform, and it turned out, surprise, that they didn't have the disease. Their faith in ChatGPT nonetheless remains just as high.
I'm constantly amazed at the attitude that doctors are useless and that their multiple years of medical school and practical experience amount to little more than a Google search. Or as someone put it, "just because a doctor messed up once doesn't mean that you are the doctor now".
I have a family member with an uncommon (1/1000) genetic condition. The only doctor they have ever been to that didn’t google it in the exam room with us was the PI of a study on the condition.
The best part is they always immediately start badly explaining it to us like we’ve never heard of it either.
That, plus having our concerns repeatedly dismissed before we secured a diagnosis, has sincerely changed my view of Dr. Google.
They're not useless, but they're also human, with limited time and limited information to work from.
To me it's crazy that doctors rarely ask me if I'm taking any medications, for example, since meds can have some pretty serious side effects. ChatGPT Health reportedly connects to Apple Health and reads the medications you're on; to me that's huge.
> To me it's crazy that doctors rarely ask me if I'm taking any medications for example, since meds can have some pretty serious side effects.
This sounds very strange to me. Every medical appointment I've ever been to has required me to fill out an intake form where I list medications I'm taking.
Understanding drug interactions is the job of pharmacists (who are also doctors…of pharmacy). Instead of asking Apple Health or ChatGPT about your meds, please try talking to your pharmacist.
Doctors are wrong all the time as well. There are quite a few studies on this.
I would in no way trust a doctor over ChatGPT at this point. At least with ChatGPT I can ask it to cite the sources supporting its conclusions, and then I can verify them. I can't do that with a doctor; it's all "trust me bro".
Many, many, many doctors (including at a top-rated children's hospital in the US) spent 4+ years unsuccessfully trying to diagnose a very rare disease that my younger daughter had. Scores of appointments and tests. By the time she was 13, she weighed 56 lbs (25 kg) and was barely able to walk 100 yards. Psychiatrists even tried to imply that it was all imaginary and/or that she had an eating disorder.
Eventually, one super-nerdy intern walking rounds with the resident in the teaching hospital remembered a paper she had read, mentioned it during the case review, and they ran tests which confirmed it. They began a course of treatment and my daughter now lives normally (with the aid of daily medication.)
I fed a bunch of the early tests and case notes to ChatGPT and it diagnosed the disease correctly in minutes.
I surely wish we had had this technology a dozen years ago.
Same here, right now: I couldn't get up without numbing back pain and could barely walk. ChatGPT educated me on the quadratus lumborum muscle and how to address it, which was a lot better than my brain going "well, I'm wheelchair-bound".
Yep same, with the caveat that any actionable advice requires actual research from reliable sources afterwards (or at least making it cite sources).
I mean, I kinda get the concerns about misleading people, but… are people really that dumb? Okay, if it's telling you to drink more water, common sense. If you're scrubbing up to perform an at-home leg amputation because it misidentified a bruise, then that's really on you.
Yes, absolutely. The US has measles back in rotation because people are "self-educating" (aka taking to heart whatever idiocy they read online without a 2nd thought), and you think people self diagnosing with a sycophant sentence generator is anything but a recipe for disaster?
If we build a bridge over this river, sure, people can get across the river, but what about if the bridge fails and people fall into the water! Let's not build the bridge instead.
Same here. It’s a double-edged sword, though. I know some people who work in health care, including some doctors. They deal with a lot of hypochondriacs — people who imagine they have all sorts of issues and then try to MacGyver themselves to better health. You can’t read an HN thread on health care without dozens of those coming out of the woodwork to share their magical, special way of beating the system. Silicon Valley has a long history of people doing all sorts of weird crap. There's a great anecdote about Steve Jobs turning orange when he restricted himself to a diet of carrots because he believed god knows what. In the end he died young of pancreatic cancer. Probably not connected, but a smart person who did some wacky stuff that probably wasn't that good for him.
I'm on statins and am experiencing some of their side effects, which is a common thing. ChatGPT was useful for me in figuring some of that out. I've had other minor issues where even just trying to understand what the medication I'm being prescribed is supposed to do can be helpful. Doctors aren't great at explaining their decisions. "Just take pill x, you'll be fine".
Doctors have to diagnose patients in a way that isn't that different from how I would diagnose a technical issue. Except they are starved for information and have to get all of it out of a 10-15 minute consult with a patient who is only talking about vague symptoms. It's easy to see how that goes wrong sometimes, or how they would miss critical things. And they get to deal with all the hypochondriacs as well, so they have to poke through that too and can't assume the patient is actually being truthful/honest.
LLMs are useful tools if you know how to use them. But they can also lead to a lot of confirmation bias. The best doctors tell you what you need to hear, not what you want to hear. So, tools like this are great and now a reality that doctors need to deal with whether they like it or not.
Some of the Covid crisis intersected with early ChatGPT usage. It wasn't pretty. People bought into a lot of nonsense that they came up with while doom scrolling Reddit, or using early versions of LLMs. But things have improved since then. LLMs are better and less likely to go completely off the rails.
I try to look at this a bit rationally: I know I don't get the best care possible all the time because doctors have to limit time they spend on me and I'm publicly insured in Germany so subject to cost savings. I can help myself to some extent by doing my homework. But in the end, I have to trust my doctor to confirm things. My mode is that I use ChatGPT to understand what's going on and then try to give my doctor a complete picture so he has all the information needed to help me.
"The attribution of the invention of the architecture to von Neumann is controversial, not least because Eckert and Mauchly had done a lot of the required design work and claim to have had the idea for stored programs long before discussing the ideas with von Neumann and Herman Goldstine[3]"
Yes, von Neumann was tasked with writing up what Eckert and Mauchly were doing in the course of their contract building the ENIAC/EDVAC. It was meant to be an internal memo. Goldstine leaked the memo, and the ideas inside were attributed to its author, von Neumann. This also prevented any of the work from being patented, btw, since the memo served as prior art.
The events are covered in great detail in Jean Jennings Bartik's autobiography "Pioneer Programmer". According to her, von Neumann really wasn't that instrumental to this particular project, nor did he mean to take credit for things -- it was others, big fans of his, who hyped up his accomplishments.
I attended a lecture by Mauchly's son, Bill, titled "History of the ENIAC", in which he explains how the ENIAC was a dataflow computer that, back when it was just switches and patch cables, could do operations in parallel. There's a D. H. Lehmer quote: "This was a highly parallel machine, before von Neumann spoiled it." https://youtu.be/EcWsNdyl264
I think the stored-program concept was also present internally at IBM around those times, although as usual no single person got the credit for that: https://en.wikipedia.org/wiki/IBM_SSEC
It's worth noting that neither of those books contains any code at all.
I suppose that's what makes ISL being translated such a big deal. A sufficiently advanced student in ML/statistical modeling doesn't really need code at all, since it should be fairly trivial to translate the mathematical models into computational ones, and the ability to do so is a prerequisite to understanding these models in the first place.
Recommended Textbooks:
Pattern Recognition and Machine Learning, Christopher Bishop
Machine Learning: A probabilistic perspective, Kevin Murphy
[2] University of Toronto CSC 311: Introduction to Machine Learning
Suggested readings are optional; they are resources we recommend to help you understand the course material. All of the textbooks listed below are freely available online.
Bishop = Pattern Recognition and Machine Learning, by Chris Bishop
ESL = The Elements of Statistical Learning, by Hastie, Tibshirani, and Friedman.
[3] EPFL CS-433 Machine Learning:
Textbooks (not mandatory)
Gilbert Strang, Linear Algebra and Learning from Data
Christopher Bishop, Pattern Recognition and Machine Learning
[4] University of Washington CSE 446: Machine Learning
The required textbook for the course is:
[Murphy] Machine Learning: A Probabilistic Perspective, Kevin Murphy.
The following three texts are also excellent and their PDFs are available for free online.
[B] Pattern Recognition and Machine Learning, Christopher Bishop.
[HTF] The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Trevor Hastie, Robert Tibshirani, Jerome Friedman.
[5] Cornell University ECE4950: Machine Learning and Pattern Recognition
Materials
We will take materials from various sources. Some books are:
Pattern Recognition and Machine Learning, Christopher Bishop
Machine Learning: a Probabilistic Perspective, Kevin Murphy
[6] Princeton University COS 324: Introduction to Machine Learning
Optional Machine Learning Books
[Murphy] Kevin Murphy, Machine Learning: A Probabilistic Perspective, MIT Press.
[Bishop] Christopher M. Bishop, Pattern Recognition and Machine Learning, Springer.
[7] ETH Zurich Introduction to Machine Learning (2023)
Other Resources
K. Murphy. Machine Learning: a Probabilistic Perspective. MIT Press, 2012.
C. Bishop. Pattern Recognition and Machine Learning. Springer, 2007.
[8] TUM (Technical University of Munich) Machine Learning
This award-winning introductory Machine Learning lecture teaches the foundations of and concepts behind a wide range of common machine learning models.
Literature
Pattern Recognition and Machine Learning. Christopher Bishop. Springer-Verlag New York. 2006.
Machine Learning: A Probabilistic Perspective. Kevin Murphy. MIT Press. 2012
[9] MIT Introduction To Machine Learning:
Books: No textbook is required for this class, but students may find it helpful to purchase one of the following books. Bishop's book is much easier to read, whereas Murphy's book has substantially more depth and coverage (and is up to date).
Machine Learning: a Probabilistic Perspective, by Kevin Murphy (2012).
Pattern Recognition and Machine Learning, by Chris Bishop (2006).
[10] UC Berkeley CS-194-10: Introduction to Machine Learning:
Reading List (Preliminary Draft)
The first two books are very helpful, and are available online, so those (in addition to AIMA) will be the primary sources. Bishop has a wide range of solid mathematical derivations, while Witten and Frank focus much more on the practical side of applied machine learning and on the Weka package (a Java library and interface for machine learning).
Trevor Hastie, Rob Tibshirani, and Jerry Friedman, Elements of Statistical Learning, Second Edition, Springer, 2009. (Full pdf available for download.)
Kevin P. Murphy, Machine Learning: A Probabilistic Perspective. Unpublished. Access information will be provided.
Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, Third Edition, Prentice Hall, 2010.
Christopher Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
Ian Witten and Eibe Frank, Data Mining: Practical Machine Learning Tools and Techniques, Third Edition, Morgan Kaufmann, 2011.
ISL is a more introductory book than Bishop or Murphy. There's no reason not to read all of them; they're all excellent books that cover different topics. I'd also throw in The Elements of Statistical Learning, from the same authors as ISL(R/P). I've read ISL, ESL, and Bishop, and started Murphy but didn't finish it (no real reason, just lost track of it when I got busy). I highly recommend any and all of these texts.
I've heard good things about Bishop. However, I am an SE who would like to know more about what the ML team is doing and maybe work on some ML side projects. Would you recommend Bishop here, or is it considered too theoretical for such a case?
Bishop is going to be more theoretical than ISL. It is true that Bishop is taught as an introduction to ML at many universities, but if you want something more hands-on to start with, ISL is an excellent option. There is another text called "The Elements of Statistical Learning" that pairs well with ISL for a more theoretical treatment. I haven't looked at ESL in a long time; my only concern would be whether it covers some introductory deep learning topics. Most of ISL, ESL, and Bishop is more traditional machine learning, covering a wide variety of algorithms, so bear that in mind.
I see this "embarrassed millionaires" line a lot. It seems unbelievably cynical. Do you really think a meaningful fraction of workers are thinking "I'll oppose unions because even though I'm hurting workers, it'll be good for me when I'm rich"?
It's rarely a specific and conscious line of thought during the decision-making process. The more common case is making the working class feel like they're "just like" the wealthy—playing on the narrative that "anyone can get rich in America"—and then selling them on policies that actively work against their own interests, and for the interests of the wealthy. This step often looks like talking about things that would primarily affect the very wealthy as if they would hurt everyone. Things like "taxation is theft", "increasing taxes punishes success", "government small enough to drown in a bathtub", etc.
I'm not sure, but I think that when I looked into it, I found that Code starts a level or two lower, but doesn't go up as many layers of abstraction as TEOCS.
No it doesn't. I listened to the audiobook and agree it has very good lessons. One of the big ones has to do with empathy.
I wish I'd had a book like this when I was a kid.