Hacker News | abound's comments

That's a bummer, especially since folks normally use the ".rs" TLD for Rust projects, so the (perhaps accidental) implication from the domain is that this is a Rust project with the source available somewhere.

It is a Rust project, just not source-available.

Obligatory shout-out to Berkeley Mono [1], which understandably isn't on this site because it's a paid font. I really enjoy the customizer that comes with it, and I use the font in all my terminal/IDE environments as well as on my blog.

(FWIW, I just did the codingfont bracket and got Source Code Pro, which I've used in the past, along with Iosevka and Commit Mono)

[1] https://usgraphics.com/products/berkeley-mono


Unironically, I think 9% uptime would be "one-tenth of a nine".

Are you saying 9.999% isn’t four nines?

Can’t tell if this is intended as humor, but I LOL’ed.

It unarguably is.

Reliability where N is the number of nines:

1 - 10 ^ -N (multiply by 100 for percent)

So 9% is 0.09 for the calc

1 - 10 ^ -N = 0.09

So

10 ^ -N = 0.91

So

N = -log10 0.91

So 0.09 (9%) reliability is 0.0409586077 of a nine.

And running it through the other way: a tenth of a nine (N = 0.1) gives 1 - 10^-0.1 ≈ 0.2056717653, or about 20.57% reliability.
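The derivation above can be sketched as a quick script (a minimal check of the arithmetic, not anyone's SLO tooling):

```python
import math

def nines(reliability):
    """Convert a reliability fraction (e.g. 0.999) to its 'number of nines'."""
    return -math.log10(1 - reliability)

def reliability(n):
    """Convert a number of nines back to a reliability fraction."""
    return 1 - 10 ** -n

print(nines(0.09))       # 9% reliability ~= 0.041 nines
print(reliability(0.1))  # a tenth of a nine ~= 0.2057, i.e. ~20.57% reliability
```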


A bit later though:

> LLMs are late additions to Palantir’s ecosystem; they were added in late 2024, years after the core system was operational, “AIP” was added as a natural language layer that summarizes documents or constructs and answers queries.

So LLMs are now a part of these systems, even if it sounds like they aren't directly involved in targeting yet.


I don't think that's a charitable take on the article. To many programmers, it wouldn't be obvious that some of these footguns (autoboxing, string concatenation, etc.) are "bad", or what the "good" alternatives are (primitives, StringBuilder, etc.).

That said, the article does have the "LLM stank" on it, which is always offputting, but the content itself seems solid.


Of course it's not. I don't see any reason to post an article that repeats the very same basics of Java, or of literally any programming language. Simple basics. Sure, if a particular programmer is not aware of these, that programmer would be very surprised that their operations aren't an imaginary O(0) (if they'd even care).

Looking at the patch itself (linked in the article), the description has this:

> We now support configuring bandwidth up to ~1 Tbps (overflow in m2sm at m > 2^40).

So I think that's it: 2^40 is ~1.0995 trillion, hence the ~1 Tbps cap.
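A quick check of that arithmetic (assuming the overflow threshold is expressed in bits per second):

```python
# 2^40 is just over 1.0995 trillion, so a value that overflows at
# m > 2^40 bits/s caps out at roughly 1.1 Tbps.
threshold = 2 ** 40
print(threshold)          # 1099511627776
print(threshold / 1e12)   # ~1.0995 (terabits per second)
```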


It makes sense that each "stick" needs to be higher density, since you can fit far fewer of them flat on a motherboard (especially true for servers; I'm curious to see what that'll look like).


There is also SOCAMM (Small Outline CAMM) and it looks like this: https://www.servethehome.com/micron-socamm-memory-powers-nex...


Seems to be much better in terms of footprint and modularity. Guess we won't see these on the desktop though.


I use Jujutsu in mostly the same way at work. I have a `jj review <branch or PR number>` alias that checks out a copy, and then I do the review with three views open: the IDE (for quick navigation and LSP integration), the diff (i.e. `jj diff` with a nice pager), and prr [1] so I can leave comments directly from my editor.

[1] https://github.com/danobi/prr


Thanks for telling me about prr, I've always wanted something like this!


For reference for Kobo in particular: https://github.com/noDRM/DeDRM_tools

I recently threw an LLM at it to turn the Obok plugin into a statically linked CLI in Rust; it works great.


Wouldn't a hash work great for this purpose? I.e.

1. User requests for email alice@example.com to be removed from database

2. Company removes "alice@example.com" from 'emails' table

3. Company adds 00b7d3...eff98f to 'do_not_send' table

Later on, the company buys emails from some other third-party, and Alice's email is on that list. The company can hash all the email addresses they received, and remove the emails with hashes that appear in their 'do_not_send' table.

You'd have to normalize the emails (and salt the hashes), but seems doable?
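A minimal sketch of that scheme (the salt value, normalization rule, and variable names here are illustrative, not anything the thread specified):

```python
import hashlib

# Hypothetical hard-coded salt shared by all suppression-list entries.
SALT = b"example-fixed-salt"

def normalize(email: str) -> str:
    # Illustrative normalization: trim whitespace and lowercase.
    # Real rules (plus-addressing, provider-specific dots, etc.) vary.
    return email.strip().lower()

def suppression_hash(email: str) -> str:
    return hashlib.sha256(SALT + normalize(email).encode("utf-8")).hexdigest()

# Steps 2-3: delete the address itself, keep only its salted hash.
do_not_send = {suppression_hash("alice@example.com")}

# Later: filter a purchased list against the suppression table.
purchased = ["Alice@Example.com ", "bob@example.com"]
deliverable = [e for e in purchased if suppression_hash(e) not in do_not_send]
print(deliverable)  # ['bob@example.com']
```

Note the salted hash only works for matching, which is exactly the point: the company can check membership without retaining the original address.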


No need to salt individual hashes; just one hard-coded salt for all.


So in the end, they have a list of emails that match the hashes in their blacklist? What's the point?


Any entry that matches a hash needs to be deleted. The point is presumably to minimize the retention of PII.

