Hacker News | c0ffe's comments

I used to like D too.

At the time there was an alternative standard library called Tango that was more object oriented like Java or .NET, combined with a desktop UI library (desktop apps were my focus at the time) that was similar to .NET's Windows.Form (I think it was called DFL, and surprisingly the site seems to be still online [0]), plus the native speed & simple approach of D1 I really thought it was going to stay relevant for years.

Sadly, then D2 appeared, the community fragmented over time, then Tango was disbanded, and at some point I think I moved on to web development with Flash.

[0] http://www.dprogramming.com/dfl.php


This leaves me a bit concerned about how reliable storage encryption on consumer hardware really is.

As far as I know, recent Android devices and iPhones have full-disk encryption by default, but do they protect the keys against random bit flips?

Also, I guess that authentication devices (like YubiKey) can't be both safe and easy to use at the same time: the private key inside can be damaged/modified by a cosmic ray, and it's not possible (by design) to make duplicates. So it's necessary to keep multiple of them to compensate, which lowers their practicality in the end.

Edit: from the software side, I understand that there are techniques to ensure some level of data safety (checksumming, redundancy, etc.), but I thought it was OK to have some random bit flips on hard disks (where I found them more frequently), since they could be corrected in software. Now I realize that if the encryption key is randomly changed in RAM, the data can become permanently irrecoverable.
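To see why a flipped key bit is so much worse than a flipped data bit, here's a toy Python sketch (the hash-based keystream is purely illustrative, not a real cipher): one bit of key damage scrambles the entire decryption, whereas a flipped data bit would only corrupt one byte.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: repeatedly hash the key. Illustration only; a real
    # cipher (AES-GCM etc.) behaves analogously for this purpose.
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"sixteen byte key"
msg = b"important secret data"
ct = xor_crypt(key, msg)

# The correct key recovers the plaintext.
assert xor_crypt(key, ct) == msg

# A single flipped bit in the key yields a completely different keystream,
# so the data becomes irrecoverable garbage.
flipped = bytes([key[0] ^ 0x01]) + key[1:]
assert xor_crypt(flipped, ct) != msg
```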


Disk is unreliable anyway, so they already have to use software error correction. I suspect that protects against these kinds of errors.
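A minimal sketch of that idea (hypothetical Python, in the style of RAID-5 XOR parity): any single lost or corrupted block can be rebuilt from the parity block and the surviving data blocks.

```python
def parity(blocks):
    # XOR all blocks together byte-by-byte (assumes equal lengths).
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)  # parity block stored alongside the data

# Lose data[1]; XOR of the parity with the survivors rebuilds it,
# because each byte XORs to zero except the missing block's contribution.
rebuilt = parity([p, data[0], data[2]])
assert rebuilt == data[1]
```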


I understand that Fuchsia is currently targeted at IoT or consumer devices, but can it run "containerized apps" in server environments with its current design (compared to Linux namespaces)?

Maybe Kubernetes can be extended later to support Fuchsia nodes. It sounds really interesting to have a new target OS with different network and virtualization stacks.


Considering all resources are accessed via scoped, capability-based handles, I suspect so. Not sure I want to see more Kubernetes in more places, though... I'd rather just write software on a secure OS in the first place.


Compiling every dependency into a single WASM binary (including the database engine, language runtime, etc.) and just deploying and scaling it on a serverless platform.

No more containers to develop or to deploy, and eventually, no UNIX filesystem around the runtime.


This, plus WASM, some virtual OS API, and edge deployment (5G), even on user devices. Also, some energy-efficient blockchain mechanisms and smart contracts may bring us closer to trusted computing. Although that won't happen quickly enough to be the next thing right after K8s.


I'm thinking of this as well. Given how frontend frameworks like Svelte are moving towards a "framework as compiler" approach, and how newer runtimes like Deno now support compiling to a self-contained standalone binary, my guess is that WASM will become the main target for cross-platform development.


The last time I had to work on a Gatsby project, it had build times of approximately nine hours.

The last thing I heard from them is that, when faced with the need to update news on the site more quickly (not in real time, but "sooner"), the development team had to build a separate mechanism to query the news from the backend while the site was running in the browser, instead of doing it at build time using the built-in GraphQL database. This defeats the central idea of Gatsby: that everything comes from a central data source.


Check out PulseEffects; it has a UI that lets you enable and configure effects for both input and output. Last I checked, it was in the default repositories of many distros.

https://github.com/wwmm/pulseeffects


I agree: plain text is, after all, easier to manage.

Personally, I used Joplin [0] some months ago, and I'm using QOwnNotes [1] now. Both are open source and cross-platform, and both can synchronize with Nextcloud. QOwnNotes may feel a bit faster since it's written in Qt/C++ (Joplin is an Electron app).

However, I still find it hard to write tables in Markdown manually.

[0]: https://joplinapp.org

[1]: https://www.qownnotes.org
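For what it's worth, aligning Markdown tables by hand is the annoying part, and it's easy to script. A small hypothetical Python helper (the function name and layout are my own, not from either app):

```python
def md_table(headers, rows):
    # Pad every column to the width of its widest cell so the pipes line up.
    cols = list(range(len(headers)))
    widths = [max(len(str(c)) for c in [headers[i]] + [r[i] for r in rows])
              for i in cols]
    def line(cells):
        return "| " + " | ".join(str(c).ljust(w) for c, w in zip(cells, widths)) + " |"
    sep = "| " + " | ".join("-" * w for w in widths) + " |"
    return "\n".join([line(headers), sep] + [line(r) for r in rows])

print(md_table(["App", "Stack"],
               [["Joplin", "Electron"], ["QOwnNotes", "Qt/C++"]]))
```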


I would like to add Bootstrap to the list, including other CSS libraries/frameworks.

In Bootstrap's case, it included a grid and other UI components (like a dropdown menu and a modal dialog), but most importantly, it ensured that pages worked the same in all major browsers of the time.

It helped me to prototype various projects over the years.


I have a small Nextcloud instance at home that uses BTRFS (on an HDD, with the noatime option) for file storage and XFS (on an SSD) for the database.

I started it just for testing, but it has been running for about two years now with no problems so far.


I would like to know if anybody has good experiences to share about AMD video cards on Linux (with the latest AMDGPU driver).

I bought a Ryzen (Zen 2) for my workstation, where I need to run a few VMs, a local k8s cluster, builds, some browser tabs, and Slack. Everything runs smoothly on top of a Linux 5 kernel, and so far I'm pleased with the results.

But I kept an older NVIDIA card, and its drivers have always had a bit of trouble with desktop Linux support (Wayland, the Plymouth boot splash, etc.).


I ran a Radeon 380 between 2015 and this year, it worked flawlessly.

I bought a 5700 XT in July; it was not usable out of the box, but all the pieces are at least upstreamed now. Desktop stability is great, gaming performance is great, and all the basic stuff (Wayland, Plymouth) is solid.


I've been using AMD GPUs since they first stabilized the radeonsi driver ~6 years ago.

7870 -> 290 -> 580 -> just got a 5700 XT yesterday.

They are good. It generally takes 6 months after a card is announced for the drivers to work properly, but I'm currently on linux-mainline 5.4-rc6 and mesa-git, and the 5700 XT is working nicely. On 5.3 and Mesa 19.2 / LLVM 9 there were a lot of graphical glitches and crashes, so stable-series support should be in place within a few months.

The other 3 just keep chugging along nicely. The 7870 is too old to get AMDGPU / Vulkan support unless it's turned on manually, but that has worked in light testing.

My only complaint is that hardware video encoding is awful: it hogs enough resources to substantially hamper game performance when used concurrently, enough that it makes more sense to software-encode on a beefier CPU than to use the hardware encoder on the GPU.


On Linux, AMD has a MAJOR advantage over Nvidia simply due to the fact that the driver is FLOSS and built into the kernel itself. This means you get full GPU support out of the box and fixes/improvements are delivered through the same update channel as the kernel.

The userland tools aren't ported to Linux, however, so you don't get access to the fancy social-media-augmented gamer stuff. If you want to overclock, etc., you have to rely either on the /sys filesystem interface (which wasn't stabilized when I tried it, but could very well be now) or on third-party tools of varying quality.

As for the actual experience, I've owned GPUs from multiple architectures (Polaris, Raven Ridge, Vega) and noticed a common pattern: when the hardware is new, it's unstable; a few kernel updates later (typically over a month), it runs flawlessly. To be fair, a lot of the crashes/freezes I've experienced could be traced back to Mesa and LLVM. I would still give new AMD hardware time to mature, though.

Performance is on par with the Windows driver package (probably because they share a lot of code). You get your money's worth. Some of the games I run on DXVK offer near-native performance.

tl;dr: there's never been a better GPU driver on Linux, but it's not quite ready for your grandma yet.


I've got an RX 580; it works almost flawlessly, including for games run via Proton etc. The only significant problem I've had was when I (unwittingly) received a new Mesa installation (I'm on NixOS unstable, so rolling release) and everything I'd already played stopped working. It took me a while to figure out I had to delete the shaders that had been cached for the older version of Mesa. I imagine most non-rolling distros wouldn't have that problem.


That shouldn't happen on rolling-release distros either. It sounds like a NixOS packaging issue, where the Mesa/LLVM version (or git commit hash) is missing from the cache key, or the Nix package applied patches without changing the version in the cache key.
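A minimal sketch of that idea (hypothetical Python, not Mesa's actual cache code): if the cache key hashes the driver build identifier along with the shader source, upgrading the driver automatically misses all stale entries instead of serving incompatible binaries.

```python
import hashlib

def shader_cache_key(shader_source: str, driver_build: str) -> str:
    # Mix the driver build id into the key so entries compiled by an
    # older driver can never collide with the new driver's lookups.
    h = hashlib.sha256()
    h.update(driver_build.encode())
    h.update(b"\x00")  # separator to avoid ambiguous concatenation
    h.update(shader_source.encode())
    return h.hexdigest()

src = "void main() {}"
old = shader_cache_key(src, "mesa-21.0.0-abc123")
new = shader_cache_key(src, "mesa-21.1.0-def456")

# Same shader, different driver build: distinct keys, so no stale hit.
assert old != new
# Same shader and build: deterministic key, so the cache still works.
assert shader_cache_key(src, "mesa-21.0.0-abc123") == old
```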


Bought an RX 570; it worked great. Undervolted it for more performance. Would recommend.

