Hacker News | a1studmuffin's comments

And that's a dangerous game because the cheaper compute gets, the more likely consumers are to self-host rather than pay a subscription.

Apple could figure out a way to neatly package it into their ecosystem.

Not really. Most people won't self host.

The general public will self-host once it's built into your next phone or laptop straight out of the box, or maybe available from the App Store.

I agree that that's what it would take, but compute would need to get very cheap for it to be feasible to keep models running locally. That's an awful lot of memory to have just sitting with the model running in it.

True. I was thinking more of power users. Do you think Opus-level capabilities will run on your average laptop in a year? I think that's pretty far away, if ever.

You can demonstrate "running" the latest open Kimi or GLM model on a top-of-the-line laptop at very low throughput (Kimi at 2 tok/s, which is slow when you account for thinking time) today, courtesy of Flash-MoE with SSD weights offload. That's not Opus-like, it's not an "average" laptop and it's not really usable for non-niche purposes due to the low throughput. But it's impressive in a way, and it does give a nice idea of what might be feasible down the line.
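For a rough sense of where that low throughput comes from, here's a back-of-the-envelope sketch in Python. Every number is an illustrative assumption (active parameter count, quantization, cache hit rate, SSD speed), not a measurement of any particular model or laptop:

    # Decode throughput when MoE expert weights are streamed from SSD per token.
    active_params = 32e9     # assumed params activated per token for a big MoE
    bytes_per_param = 0.55   # assumed ~4.4 bits/param after quantization
    ram_hit_rate = 0.5       # assumed fraction of needed weights cached in RAM
    ssd_bandwidth = 7e9      # assumed bytes/s for a fast PCIe 4.0 NVMe drive

    bytes_from_ssd = active_params * bytes_per_param * (1 - ram_hit_rate)
    print(f"~{ssd_bandwidth / bytes_from_ssd:.1f} tok/s")  # ~0.8 tok/s here

The per-token SSD read is the bottleneck; more RAM caching or heavier quantization is what pushes an estimate like this up toward the ~2 tok/s figure.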

This is a cool tech demo, not a video game. The fact that video game stocks slid by so much after this release just shows how reactive, speculative and uneducated most investors are.


Your health is ultimately your own responsibility - it's your body. You have free will, and your appetite for risk is yours alone. You can choose to ignore expert advice and refuse to wear a seatbelt, skip your rehabilitation exercises, invest all-in on crypto, or smoke cigarettes. None of this responsibility should fall on the expert if they communicated the risks clearly.


What you're communicating here, perhaps unintentionally, is that what matters is not results, but blame. If the doctor said what to do but the patient didn't do it, all that matters is the patient is to blame.

You've communicated that by ignoring or dismissing the question of whether better outcomes are possible through other means than demanding that everyone follow doctors' orders and blaming them if they don't.

"Who cares if better outcomes are possible, so long as blame is in the right place"? Is that how we want to approach this?


It's hard to help someone that doesn't want to be helped.


Struggling to change is different from not wanting to change. People seem to have trouble with basic distinctions like this when they're heavy into moralizing failure to change.


Profound point. My mother struggled with alcoholism and ultimately succumbed to that disease. In philosophy of mind they use “akrasia” and “akratic thinking” for acting against one's better judgement. It helped me somewhat in understanding what my mother was going through at that time.

She wanted to change, tried many times, and failed. Fault, guilt and blame are useless concepts to apply to the Other, and only in moderation should they be applied to the Self. There are deep disconnects between what we think, know and do.


I find it helps to explicitly abandon the expectation that each person has a unitary and consistent will.

Bob the gambler wants to quit and wants to wager, sometimes sequentially and sometimes simultaneously.

The question isn't whether the whole Bob "means it", but which version of Bob we want to ally with in the war against the other, and what conditions or limitations we put on that assistance.


Reading this thread it seems like you're the only one moralizing and looking down on people. I don't see anyone here shaming people for their choices. But somehow you seem to have read the worst interpretation of every reply.


Drugs expand what "helping yourself" means, to the point where people will actually do it.

Statins, GLP-1 agonists, etc. aren't magic, but they change people's behavior and bodies in such a way as to diminish the importance of willpower. So it's not that people are lacking; rather, our medicine is simply too primitive to help with a wide range of issues.


Not that hard in this case. Just give them a pill.


Or, as we're becoming aware with GLP-1 drugs, an injection. (For now!). It's better to help people behave better with drugs than moral condemnation. Almost infinitely better, as it turns out, regarding a lot of problematic behavior regarded as "untreatable" previously.


Why not both?


The old adage "You can lead a horse to water but you can't make it drink" applies here.


It may not be the case for statins specifically, but my main concern is side effects. If there was a panacea, I would support giving it to everyone, but lifestyle changes are usually more available, if not easier.


Yeah, this raises my hackles too. It took a fairly high dosage of Zepbound and many months for me to get to a normal set of eating habits after a couple of decades of bad ones; a surprise prediabetes result on my labs is what pushed me into the program, and I would not have done it by "white knuckling". I needed some medication to help me along. All these people saying "calories in, calories out" and "just start exercising, dude" are turning a complex issue into a "simple solution" that almost never works, because change takes time: a lot of time that many people don't feel, on a deep level, that they have to give it. So they just give up after a couple of weeks of "grit" and "will-power". Isn't it something like 1-3% who succeed over time, while the rest fail when trying to lose significant weight or fix other health issues that could be resolved by habit alone?


To me the terms mix, and it helps to separate the things that are externally manageable from the things that are not. The physical is complex but straightforward: the body's biochemistry operates on material in, biochemical mixing, expenditure out. The brain is physical - neurons, pathways, etc. The mind, OTOH, is a virtual little candle isolated in a prison of meat and bone, trying to understand how to interact with the world around it. External forces can alter the body and brain, but only the mind can change the mind. And it does, in ways that are very difficult to control, because the sole operator is part of the mechanism. People who try to change on their own and can't aren't failing or weak; it's just really f-ing hard.


If my health is my responsibility, then shouldn't the treatment that I receive be to the standard that I request?

In 2015, https://pubmed.ncbi.nlm.nih.gov/26551272/ showed that medicating all of the way to normal works out better than medicating down to stage 1 hypertension, then insisting on diet and exercise. And yet my request in 2018 to be medicated down to normal blood pressure was refused, because the professional guidelines followed by the experts were to only medicate down to stage 1 hypertension, then get the patient to engage with diet and exercise. The expert standard of care was literally the opposite of what research had shown they should do.

I agree that experts should not be accountable for my laziness. But can you agree that experts should be accountable for following standard of care guidelines that are in direct conflict with medical research? And (as in my case) refusing the patient's request to be treated in a way that is consistent with what medical research says is optimal?


Maybe 80-90% of people should take doctors at face value, but it is easy and only getting easier to at least access the knowledge to better advocate for your own healthcare (thanks to LLMs), with better outcomes. Of course, this requires doctors that respect your ability to provide useful inputs, which in your case did not happen.

My advice would be to "shop around" for doctors, establish a relationship where you demonstrate openness to what they say, try not to step on their toes unnecessarily, but also provide your own data and arguments. Some of the most "life-changing" interventions in terms of my own healthcare have been due to my own initiative and stubbornness, but I have doctors who humor me and respect my inputs. Credentials/vibes help here I think: in my case "the PhD student from the brand name school across the street who shows up with plots and regressions" is probably a soft signal that indicates that I mean business.


> In 2015, https://pubmed.ncbi.nlm.nih.gov/26551272/ showed that medicating all of the way to normal works out better than medicating down to stage 1 hypertension

Thanks for posting this. While I would generally advise a healthy dose of skepticism for any individual study, this one was very large and seems to be both well designed and executed. While there was a (statistically) significant increase in side effects with more intensive treatments, only about 1% more patients had adverse effects versus the standard treatment group, which seems like a very reasonable risk given the improved outcomes.

I've been trying to get my blood pressure under control recently and was thinking getting down to 12x/8x was good enough, but this has me rethinking that.


You should have bought some illegal street diet and exercise or cholesterol meds or whatever.


What if you have an intrinsically lower ability to perform temporal discounting?


Is that really something intrinsic and fixed or can you improve it over time with deliberate effort?


Open to evidence either way. I haven't seen people improve it even with what seems to be terrible negative consequences associated with poor temporal discounting ability, but I'd love to read differing perspectives.


Research on heritability has found that the amount of temporal discounting we do is moderately heritable, with twin studies attributing 30-60% of our natural variability to genes.

This strongly suggests that genetics puts a thumb on the scale, but that ultimately we are still able to impact our personal behavior.

More importantly, research such as https://pubmed.ncbi.nlm.nih.gov/31270766/ shows that there are techniques (such as mindfulness practices) that have been demonstrated to improve our abilities in practice. I have personally seen these have an impact.

Of course if you have a condition such as severe ADHD, you might not be able to reach the same level as is possible for someone with good genetics. But you still have the ability to move the needle. If you have a condition such as traumatic brain injury, even your ability to move the needle may be lacking.

But most of us should be able to make a positive change.


> This strongly suggests that genetics puts a thumb on the scale, but that ultimately we are still able to impact our personal behavior.

If it's 30-60% heritable, that leaves 40-70% to split between personal decisions and environment. It does not guarantee that personal decisions matter much at all...


That is why I said "strongly suggests" instead of "guarantees".

And then further followed up with a link to research showing that it is, in fact, possible to change. With advice on how to change it.


What would be the upper bound of the effect of heritability where responsibility is no longer assumed?


I really don't understand the modern hate towards OOP. From my experience over the last few decades working with large C and C++ codebases, the former turns into a big ball of mud first.


Most hate of OOP comes from the definition that OOP = inheritance. Meanwhile, among people who consider themselves OO programmers, there is often the same revulsion towards inheritance and a preference for encapsulation, while still calling that OOP. Because each language is subtly different, these discussions tend to turn into flame wars.

Which of course people do and why of course you have:

> PSYC 4410: Obsessions of the Programmer Mind


I think that OOP can be good for some things, but that does not mean that all or most programs should use OOP for all or most things. For most things it is not helpful, even though sometimes it is.


I really hate the way The Register use terms like "techie" and "boffin". It always comes across to me as anti-intellectual.


Don't forget, "Fondle-Slab" for phones and tablets. I've got some people here in the US saying it now.


Great minds think alike! Here's my attempt using an ESP-based smart watch that clips onto your clothes at night: https://github.com/a1studmuffin/lilygo-twatch-sleepdevice


Plus, 2) would require additional design, engineering and QA work to implement the code to drain users' batteries. And it would cause plenty of users to uninstall or complain if Messenger were draining huge amounts of battery. It really doesn't make sense considering Messenger has 1.3 billion users; remaining battery percentage would follow roughly the same distribution in any reasonably large sample of users anyway.


Hang on a second, you consider Jan 2014 to be "very, very old"? Commercial software often has dependencies on libraries going back decades.


It can be "very, very old" in its context. If a new release comes out every year, support duration is 5 years and you're 2 years past EOL, I'd agree that the software can be called "very, very old".


And that's where things go off the rails. A release that is only supported for five years is a hobby project.

The kind of software projects that make the world go round continue past the lives of their original authors and can easily span decades. 5 years is just enough for the original shake-out.


I mean, I am sure we would do that if any of the user companies wanted to pay the price for it. None of them do. You'd probably have to multiply the price by at least 10 (I would say add at least 1.5 to the multiplier for every year of support, and that is probably highly conservative).

Come back when they are ready to pay for that.


Are you really accusing ArcGIS of being a "hobby project"?


> A release that is only supported for five years is a hobby project.

Sure, I'll let my Product Lead know that we're selling toys that only script kiddies use, not telco NOCs.


Having worked in a number of Telco NOCs, that's probably not the best example for the point you're trying to make.


Totally agreed, but check upthread for a comment that literally says this tool is unsuitable for work where lives depend on it. And for telcos that's pretty much a given.


Yes, OK, thanks, the Python Software Foundation will immediately spend a lot of time and money to support releases for two decades. How could they not know!? /s

That out of the way, you're right that some niches require much longer cycles, but that's the big big biiiig advantage of FOSS: downstream can maintain it for as long as they wish. As you said, things got shaken out by the community basically for free, and if some serious software is so serious that upgrading and retesting/certifying is somehow more expensive than trying to airgap an EOLed pile of libs (while it still needs support), then the stakeholders can do it themselves.


No sarcasm needed; Red Hat happily invests time and money in supporting it until 2024.

That might have been an easy way to provide upstream releases too, had the Python maintainers not been intent on using deprecations as an instrument to get the community moving.

That strategy doesn't work very well, however, as we saw when TLS 1.2 was held back from Python 2.


> The kind of software projects that make the world go round continue past the lives of their original authors and can easily span decades

The name on the software might remain unchanged for decades. That doesn't mean that the software remains identical for that time.


> A release that is only supported for five years is a hobby project.

and how much are you paying for support?


It looks like probably around $3000/year.

https://www.esri.com/en-us/arcgis/products/arcgis-online/buy


> A release that is only supported for five years is a hobby project.

I believe what you are trying to say is that you chose the wrong tool for the job. It is really condescending to lash out at other projects like that just because they don't share your needs. They owe you nothing. Python is free and open source; just fork the damn language spec and support it yourself.


The discussion is about ArcGIS, not Python.


The discussion is about a specific version of ArcGIS that only supports a version of Python that has reached its end of life, so I beg to differ. My point stands: if you needed tech with really long support, on the order of decades, then choosing a tech based on a dependency with a clear life span of about 5 years was a poor choice. If you still need that, you can support it yourself by forking your own version, but Python owes ArcGIS nothing.


A finished software product that gets regular updates is old by the time it turns 7; I'd agree with that.

Dependencies on libraries are a different story: there are only so many ways you can implement a piece of functionality, and some of those implementations happen to be decades old!


Some people just enjoy their job. I know several engineers who retired, only to return after a hiatus because they missed working with a team on something they were good at and passionate about. Work isn't always about money.


Right, I can understand why someone might wish to work part time even if they didn't really need the money, but I don't see myself working full time beyond the point where I am fully financially independent from needing to.


Hell yes, I use it daily. I'm in AAA gamedev and the codebase I deal with goes back 20+ years. The last 10 years are readily accessible in Perforce and the rest can be found in another version control system. I am forever grateful to past engineers for outlining WHY they made their changes, and not WHAT the changes were per se. With thousands of engineers that have come and gone, this is incredibly useful information in addition to the code itself.

IMHO revision history is just as valuable to a company as the code itself.


> I am forever grateful to past engineers for outlining WHY they made their changes, and not WHAT the changes were per se.

So true. We have this one senior developer who gets mad if someone's algorithm isn't as efficient as it could be (fair enough, I suppose), but we can't get him to write commit messages that are more than 1-3 words simply naming the area of code that was changed. Years later, he also can't remember WHY he made those changes. I'd much rather work with someone who writes inefficient algorithms that are easily improved at any time than with commit comments that are forever useless.

What was changed is easily seen in the commit itself, why needs to be in the commit message.


I wonder why it is so hard for people to write good commit messages. At my company I've tried talking to people in person, and even wrote a doc explaining the benefits and how to do it, pointing people to good resources like this one: https://chris.beams.io/posts/git-commit/

And still, I can't get people to do it. I find it so valuable to look at well-written commit messages that explain the why behind the changes, but I can't get people to see the same value in them as I do. Any tips on that? They would be really appreciated. :)


"What was changed is easily seen in the commit itself, why needs to be in the commit message."

A commit message is like the subject of an email. Isn't it faster to look at the commit message and get an idea of the change than to go through the commit and figure it out?


The first line of the commit message should be the "subject line". The rest of the commit message should contain a summary of the high-level stuff such as the "why".
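A made-up example of that shape (the project and details are hypothetical):

    Fix crash when loading saves older than v2.3

    Older save files lack the inventory checksum field, so the loader
    dereferenced a null pointer. Default the checksum to zero for
    pre-v2.3 saves instead of rejecting them, because players upgrading
    mid-playthrough were losing progress.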


Yes, fair enough... having both in the commit message is ideal. I just mean that the absolute minimum is the why, because at least the what can be reconstructed from the commit itself.


> I am forever grateful to past engineers for outlining WHY they made their changes, and not WHAT the changes were per se.

Oh, so much this! The same applies to in-code comments. If your revision notes (or your code comments) are only telling me what I can plainly read in the code, then they're utterly pointless. Tell me what I can't read in the code: the "why"s, as well as potential consequences and "gotchas" the changes may present.


> I'm in AAA gamedev and the codebase I deal with goes back 20+ years.

I'm curious what parts you work on (engine/tooling?). I've always had the impression that games usually have more throwaway code than other types of applications.


Until the OP mentioned Perforce, I was 100% sure they were a co-worker of mine. I also work on a 20+ year old project, mainly on the core graphics/game engine. It’s mainly used to power a single game franchise, although I’d say that any commits from more than 2 major versions ago aren’t all that useful anymore. Too many things keep changing especially around the area that I work on.


Well, the Unreal Engine is 22 years old. I doubt the current incarnation is free of legacy code.


The current version was developed from scratch.


Could easily be a sports game. That was my guess; then I remembered that the Quake engine powered Half-Life 1 forever. Could they possibly also be working with the id Tech engine?


How's the AAA industry nowadays? I got out a while ago (though HoN never really counted as AAA), but it was a fun ride at the time.

Is work-life balance a bit better now, or does everyone still push themselves pretty hard?


It's both better and worse than it's ever been.

There's awareness of and discussion about "sustainable" development practices, but a large portion of our workforce had to leave for stress reasons last year, on the very project that says "sustainable development" the loudest. So while it feels like lip service, at least there's awareness at some level.

(Also, gamers are more entitled than ever, so we're always running, which causes our games to be buggy as hell, which slows us down later... horrible and completely unsustainable.)


I’m making a game of my own right now and am curious about the larger industry

- is it common for AAA companies to claim ownership over all IP you create, even outside work? (My last job did this.)

- How would one find part time or short term contract work in the games industry?


> - is it common for AAA companies to claim ownership over all IP you create, even outside work?

Yes. This is super common. Depending on the company you can make some kind of agreement. Most agreements are based on income, so if you make a lot of income you need to renegotiate (the threshold is something like 10% of your yearly salary or so).

> - How would one find part time or short term contract work in the games industry?

If you're an artist, I guess this is easier, because those are contracted-out positions. But for others it's unlikely the company will hire you for part-time work. They seem to want everyone giving 111%; a part-timer might be useful but could cause blocking issues.


But that sounds like you value documentation, not necessarily the code history.


Code history is documentation. There are lots of different kinds of documentation: code API level, module level, system level, tutorials, even books in some cases. Revision history is just another one of those levels, and I believe it is the best at capturing the "why"s of systems rather than just the "what"s.


I agree code history can be used as a form of documentation, but in cases like this looking through years of code to find the decisions/reasons leading to a particular design seems like inefficient communication. It seems like "real" documentation with a few sentences explaining directly would be more suitable.


I disagree. In some cases, code history can be much more efficient. You really need a mix of both.

There will be things that are much better captured as part of a revision/commit, especially if your commits are well-designed, grouped into logical chunks, and include messages themselves (and maybe are linked to a project management tool).

You will need information like "this code was added as by X as part of work they were doing on Y, and they also made changes in other parts of the code as part of that". That context is really valuable.

You can think of it like event sourcing, which captures a lot more information than traditional mutation of data, and as a pattern, is a lot more rock solid... except event sourcing for your data is (usually) much more difficult to implement in practice, and code revisions are already an almost completely solved problem.


Capturing most of your data in your commits/revisions seems to suffer from a lot of the unsolved event sourcing issues:

- How do I quickly find the info that I want? How do I "query" the commit log? (A git sketch at the end of this comment shows the built-in options.) Often we want a "view" of the history that tells us specific info. If I need to scan through half of the commits just to get a good understanding of the architecture of the code, then that's more wasteful than just having a design doc. If I'm troubleshooting a production bug, then the granularity the commit log offers becomes important enough to offset its slow "query" speed, so I'd want enough "why" commits in the commit log and outside of people's heads.

- People write to the commit log without a well-defined "schema". If you use something like tags, how do you handle changes to the tags ("schema evolution")?

This is my train of thought for why I lean towards "why" comments near the code or in a design doc over commit messages, which I allow to be more sloppy.

A higher-level thought: The attractiveness of the event sourcing analogy often comes from assuming that the commit log should be a strongly consistent source of truth. However, it's good to remember that a huge amount of info about the code is stored in the team members' heads. In particular, the code writer knows a huge amount that can't be easily documented. So an alternative analogy would be to think of each member as a VM attached to block storage. If a VM fails (the person gets sick) or leaves the cluster (they leave the job), then you lose all of the data in block storage. So, the team wants to facilitate just enough overhead/admin work to transfer important data from individual team members to shared but slower storage (like the commit log, design doc, comments, etc.)
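(To be fair to my own "query" bullet above: git's built-in searches don't replace a design doc, but they narrow things down a lot. A minimal sketch in Python, assuming the standard git CLI is on the PATH and run inside a repo; the search terms are made up:)

    import subprocess

    def git_log(*args: str) -> str:
        """Run `git log --oneline` with extra arguments and return the output."""
        result = subprocess.run(
            ["git", "log", "--oneline", *args],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    # Commits whose messages mention a phrase (searches the "why").
    print(git_log("--grep", "retry logic"))

    # Commits that added or removed a string anywhere in the diff ("pickaxe").
    print(git_log("-S", "MAX_RETRIES"))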


I recommend checking out Peter Naur's essay "Programming as Theory Building" [0], as it touches on how a program is more than the code plus documentation: it lives in its designers' and developers' heads, their intents, visions, etc.

[0] http://pages.cs.wisc.edu/~remzi/Naur.pdf


I personally think of version control history as a sedimentary layering of documentation that "updates" itself in the process of doing the work -- like the desk that looks messy, but where, because papers keep getting picked up and used, the most important stuff stays on top. "Real" documentation can be clearer, but it must be maintained manually and cleared out regularly. VCS kinda handles this with less process weight.

The right tactic is def a mix of both though, so I think I'm in agreement with you :)


It's because "your commit needs to link to something in the bug tracker" is pretty easy for code review to enforce, but I have never seen an org manage to indefinitely keep an accurate as-built design doc beyond the code itself. You can convince people to write new aspirational design docs for intended major changes, but after approval those never get updated to reflect what really got built, and lots of small bugfixes don't get one at all.
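A minimal sketch of that kind of check as a local commit-msg hook, assuming JIRA-style ticket IDs like ABC-123 (save as .git/hooks/commit-msg and make it executable; review tooling can enforce the same rule server-side):

    #!/usr/bin/env python3
    """Reject commits whose message doesn't reference a ticket like ABC-123."""
    import re
    import sys

    # Git invokes this hook with the path to the commit message file as argv[1].
    with open(sys.argv[1], encoding="utf-8") as f:
        message = f.read()

    if not re.search(r"\b[A-Z][A-Z0-9]+-\d+\b", message):
        sys.stderr.write("Rejected: message must reference a ticket (e.g. ABC-123)\n")
        sys.exit(1)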


You might find some weird hack in the codebase where it isn't obvious at first glance what it does. That isn't the kind of thing people put in official documentation, but even finding the JIRA ticket linked to that specific commit can help tremendously.


This is a great comment, except for the last line, which is extreme hyperbole.


Parent clearly stated it as opinion.


Isn't it a tautology? Revision history includes the code itself.

