Hacker News | foresto's comments

> It only needs to be "an app" if it is using hardware to do its main job.

Even then, there's a good chance that a web API exists for the required hardware, so it still doesn't need to be an app.


> I will click the green "Play" button, it will change to a blue "Stop" button, as if the application was running, then shortly after silently switches back to the green Play button again, without any visible error and without actually starting the game.

You might want to enable Proton logging and have a look at what it says is going on.

https://github.com/ValveSoftware/Proton/?tab=readme-ov-file#...
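For example (launch-option syntax per the Proton README; replace <appid> with the game's numeric Steam app ID):

```shell
# In Steam: right-click the game -> Properties -> Launch Options, then set:
PROTON_LOG=1 %command%

# After the failed launch attempt, read the log from your home directory:
less ~/steam-<appid>.log

# Lines tagged "err:" are usually the interesting ones.
```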


Debian Stable gamer here.

> Honestly, don't use debian for gaming, as it is too far behind. Gaming stuff needs a bit more bleeding edge packages.

Please stop spreading this misconception. There are only a tiny handful of packages that a Debian gamer might need to update, and those are generally available in Debian Backports. It's not what I would call a beginner distro for any purpose, but gaming on it is perfectly viable.
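For anyone unfamiliar, enabling backports takes a couple of commands. A sketch for Debian 12 "bookworm" (adjust the release name, and treat the package names as examples):

```shell
# Add the backports repository (Debian 12 "bookworm" shown):
echo 'deb http://deb.debian.org/debian bookworm-backports main contrib non-free-firmware' |
    sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update

# Pull newer gaming-relevant packages from backports, e.g. kernel and Mesa:
sudo apt install -t bookworm-backports linux-image-amd64 mesa-vulkan-drivers
```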

I'm having a good time in games, still getting other computing tasks done, and enjoying Debian's low-maintenance respect for my time. AMA.


This is true, but you may be missing out on performance and compatibility improvements from recent ("bleeding edge") drivers. You need recent hardware for this to be relevant.

Generally speaking, you don't need rock-solid stability on a gaming rig or even a "workstation," since uptime isn't really a consideration. I run Debian on my home server, but my other machines, including a backup laptop, all run Arch. A good Arch setup is incredibly solid.


> This is true, but you may be missing out on performance and compatibility improvements from recent ("bleeding edge") drivers.

No, not missing out. Just waiting a few weeks longer than I would on a rolling distro, until the improvements arrive in Debian Backports. (If I'm really impatient, I can install something manually or make my own backport, but I'm assuming most people won't do that.) I have experienced cases like you describe, such as when I bought an RDNA3 GPU shortly after the platform was released, but they have been infrequent in my experience, and never so urgent that I couldn't wait a few weeks.

> you don't need rock-solid stability on a gaming rig or even a "workstation," since uptime isn't really a consideration.

System uptime is a consideration whenever I need my computer for something immediately, but my choice of Debian is not only about that. It's also about my time. Debian generally requires attention less often than other distros. Less time spent troubleshooting when things break. Less time re-learning things or adjusting workflows when new software versions change their behavior or interface. Fewer annoying interruptions. A low-maintenance system leaves me more time to get work done, or play games.

Also worth noting: These days, a lot of the components that games use are provided by the likes of Steam or Flatpak, which means they will be at exactly the same version and updated exactly as often on every Linux distro.


> System uptime is a consideration whenever I need my computer for something immediately, but my choice of Debian is not only about that.

Maybe you should try Arch on one of your machines. I have a lot of experience with both Debian and Arch, having used both extensively on all kinds of hardware over long periods of time, and have found Arch to be ideal on desktop. Having access to the latest software and drivers is a huge plus with recent hardware. I have never encountered breaking changes.


Your computer's bluetooth module could be the source of the trouble. Some people have found that using a different dongle fixed their wireless controller problems.

That makes sense. I've had the most issues on my old Dell XPS, and a dongle would be annoying on such a small laptop.

Are you sure? Bluetooth dongles these days can be smaller than most low-profile USB drives, barely protruding from a laptop's exterior. It might be worth browsing the available models.

> The "Nvidia on Linux compatibility" issues are something I wonder if I have side-stepped somehow either by lucky choice of GPUs, or lucky choice of Linux distros.

It could also be a lucky consequence of what games you play and what else you do with your computer.

I was a long-time Nvidia user, and had plenty of problems with their drivers. They ranged from minor annoyances when switching between virtual consoles (which some people never do) to total system freezes when playing a particular game (which some people never play). It would have been easy for someone else to never encounter these problems.

Since switching to AMD a couple years ago, I have been much happier.


Neat.

I wish the sample text included _underscores_, since I have occasionally found that they disappear with certain combinations of font + size + renderer.

And a run of all the numeric digits 0123456789, to show how their heights align.

And [square brackets], to show how easily they are distinguished from certain other glyphs.

And the vertical | bar, for the same reason.
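Something like this single line would cover all of those cases, plus a few classic look-alikes:

```
_underscores_ 0123456789 [brackets] |bar| Il1 O0 {} ()
```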

...

Adobe Source Code Pro and Ubuntu Mono were my finalists. I think my preference would come down to window and font size, since Ubuntu Mono seemed to be narrower and leave more space between lines.

(Also, I kind of rushed the first few comparisons, so it's possible that I prematurely eliminated a typeface that I would have liked more.)


You can modify the sample text.

I think I like the idea, but I can't help wondering if it would have unforeseen consequences.

Could this approach undermine the protections afforded by open-source licenses? (IANAL.)


> I think I like the idea, but I can't help wondering if it would have unforeseen consequences.

As I said in a sibling comment, quickie comments on HN should be taken more as mental stimulation and kickoff points for further discussion as opposed to "final bill that has been revised in committee and is going to the floor for a full vote". The details of implementation are certainly critical, and not trivial either! I'm fully in support of thinking through various use cases. But part of why I'm interested in alternate approaches is that they might give us finer grained tools.

> Could this approach undermine the protections afforded by open-source licenses? (IANAL.)

I have actually considered that as well, but didn't add it to a quickie comment. If we take the second path of approaches I listed there, then thinking about it, all open source software would fall under a special, even more permissive class of the tier 3, in that it already has "fair, reasonable and non-discriminatory" licensing for all, right? Except that it's also free. The motivation here is the "advancement of the useful arts & sciences" and the public good, so it makes sense to be explicit that "if you're releasing under an open source license and thus giving up your standard first, second, and part of your third period of IP rights and monopoly, you're excluded from needing to pay a license fee because you've already enabled the public to make derivative works for free for decades when they wouldn't otherwise anyway."

All that said, I'll also ask, fwiw, whether it'd even be that big a deal given the pace of development. I do think it'd be both ideal and justified if OSS had a longer free period; that's still a square deal to the public IMO. But even if an OSS work went out of protection after 10 years (and keep in mind that a motivated community that could raise even a few thousand dollars would be able to just pay for an extra decade no problem; the cost doesn't really ramp up for a while [which might itself be considered a flaw?]), how much would it matter that proprietary works could now be built on 2016 era OSS (and no changes since remember, it's a constantly rolling window), weighed against 10 year old proprietary software all getting pushed into the public domain far faster? That's worth some contemplation. Maybe requiring that source/assets be provided to the Library of Congress or something, and released at the same time the work loses copyright, would be a good balance; having all that available down the road would be a huge win vs what we've seen up until now.

Anyway, it's all food for thought.


> quickie comments on HN should be taken more as mental stimulation and kickoff points for further discussion

Agreed, and my comment was aimed at exactly that. :)

An example of my concern: What would happen to GPL-licensed software if the copyright expired quickly? Would that allow someone to include it in a proprietary product and (after the short copyright term ended) deny users the freedoms that the GPL is supposed to guarantee? I think those freedoms remain important for much longer than 10 years.

> (and no changes since remember, it's a constantly rolling window)

Do you mean that the copyright term countdown would reset whenever the author makes changes to their work? (I'm not sure if this is the case today.) If so, couldn't someone simply use an earlier version in their proprietary product in order to escape GPL obligations early?

> "if you're releasing under an open source license and thus giving up your standard first, second, and part of your third period of IP rights and monopoly, you're excluded from needing to pay a license fee because you've already enabled the public to make derivative works for free for decades when they wouldn't otherwise anyway."

Yes, I think this makes sense. Thanks for sharing your thoughts.


> quickie comments on HN should be taken more as mental stimulation and kickoff points for further discussion

Indeed.

Setting aside variable details like time frames and cost structures, which can be debated separately, what I found interesting about your suggestion is that it's a mechanism to create an escalating incentive for copyright holders to relinquish copyrights even sooner than the standard copyright period. Currently, no matter what the term length, it costs nothing to sit on a copyright until it expires - so everyone does - even if they never do anything with the copyright. And the copyright exists even if the company goes bankrupt or the copyright holder dies. Thus we end up with zombie copyrights which keep lurking in the dark for works that are almost certainly abandon-ware or orphan-ware, simply because our current system defaults to one-and-done granting of "life of the inventor + 70 years" for everything.

Obviously, we should dramatically shorten the standard copyright length, but no matter what we shorten it to (10, 15, 20 yrs, etc.) we should consider requiring some recurring renewal before expiration as a separate idea. Even if it's just paying a small processing fee and sending in a simple DIY form, it sets the do-nothing default to "auto-expire" for things the inventor doesn't care about (and may even have forgotten about). That's a net benefit to society we should evaluate separately from debates about term lengths.

I see your suggestion about automatically escalating the cost of recurring renewal as another separate layer worth considering on its own merits. My guess would be just requiring any recurring renewal would cause around half of all copyrights to auto-expire before reaching their full term - even if the renewal stayed $10. The idea of having recurring renewal costs escalate, regardless of when the escalation kicks in, or how much it escalates, is a mechanism which could achieve even more net positive societal benefits by increasing the incentive to relinquish copyrights sooner.


Only for 10+ year old versions. You'd be able to relicense ancient stuff, but it would be so far behind it wouldn't be all that relevant.

Most people? What mainstream Linux distros ship without fsync or esync support?

Well, I can tell you that if it didn't make it upstream, Fedora didn't ship it.

It looks like there was a COPR for a custom kernel-fsync, and projects like Bazzite or Nobara are adding patches.

From my understanding the fsync patches were never upstreamed.


The common gaming-focused Wine/Proton builds can also use esync (eventfd-based synchronization). IIRC, it doesn't need a patched kernel.

The point being that these massive speed gains will probably not be seen by most people as you suggest, because most Linux gamers already have access to either esync or fsync.


Maybe you are right about esync, but I would gather a lot of people don't have that either. At least personally, I don't bother with custom Proton builds or whatever, so if Valve didn't enable it in their build, then I don't have it.

> if Valve didn’t enable that on their build then I don’t have it.

The Proton build is Valve's build. It supports both fsync and esync, the latter of which does not require a kernel patch. If you're gaming on Linux with Steam, you're probably already using it.

https://github.com/ValveSoftware/Proton/?tab=readme-ov-file#...


I thought you meant Proton-GE or other such patched builds of Proton.

I would assume most of them? I'd be surprised if distros like Debian, Ubuntu, Fedora, etc. would ship non-mainline kernel features like that.

Sure, gaming-focused distros, or distros like Arch or Gentoo might (optionally or otherwise), but mainstream? Probably not.

Of course, esync doesn't require kernel patches, so I imagine that was more broadly out there. But it sounds like fsync got you performance pretty close to what ntsync can do, but esync was quite a bit behind both? With vanilla being quite a bit behind esync?

(Also, jeez, fsync, what a terrible name. fsync is a syscall that has to do with filesystem data. So confusing.)


> I would assume most of them? I'd be surprised if distros like Debian, Ubuntu, Fedora, etc. would ship non-mainline kernel features like that.

It's best not to assume with these things. With my stock Debian Stable kernel, Proton says this:

  fsync: up and running.

And when I disable fsync, it says this:

  esync: up and running.
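(For anyone wanting to reproduce this, the toggles are plain environment variables documented in the Proton README, set via Steam launch options:)

```shell
# Default: fsync is used when the kernel supports it.
# Force Proton to fall back to esync:
PROTON_NO_FSYNC=1 %command%

# Disable both, to compare against plain wineserver synchronization:
PROTON_NO_FSYNC=1 PROTON_NO_ESYNC=1 %command%
```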

> But it sounds like fsync got you performance pretty close to what ntsync can do, but esync was quite a bit behind both?

No, esync and fsync trade blows in performance. Here are some measurements taken by Kron4ek, who maintains somewhat widely used Wine/Proton builds:

https://web.archive.org/web/20250315200334/https://flightles...

https://web.archive.org/web/20250315200424/https://flightles...

https://web.archive.org/web/20250315200419/https://flightles...

> With vanilla being quite a bit behind esync?

Yes, vanilla Wine has historically fallen behind all of them, of course.

> Also, jeez, fsync, what a terrible name. fsync is a syscall that has to do with filesystem data. So confusing.

We can agree on this. :)


Last I checked, every distro of note had its own patchset that included stuff outside the vanilla kernel tree. Did that change? I admit I haven't looked at any of that in... oh, 15 years or so.

Depends on the distro.

Fedora looks like it carries a whopping 2 patches on top of upstream.


esync and fsync both use mainline kernel features.

xcancel.com seems to work at least as well as any other still-maintained nitter instance. Here's a list:

https://github.com/zedeus/nitter/wiki/Instances


