Hacker News — mananaysiempre's comments

I’ve heard that such things in the US arose from accessibility law requiring that the website (for the general population) work no better than the associated call center (for the people who can’t interact with the website for whatever reason).

On one hand, that seems obviously stupid. On the other, I don’t see how you could phrase a legal requirement of this nature.


That's better than my assumption, which was that it was running off the Visual FoxPro instance on somebody's desktop and that guy had to be logged in for it to work.

> package-lock.json merge=ours

This strikes me as a bad idea. Which side of the merge is “ours” and which one is “theirs” during merges or rebases is something of a crapshoot[1], so this kind of setting only makes sense when merge conflicts are only ever discovered and resolved by automatic tooling (e.g. the git-annex branch[2] in git-annex-enabled repos).

[1] https://stackoverflow.com/questions/25576415/what-is-the-pre...

[2] https://git-annex.branchable.com/internals/#index2h2


The usual pictures of и / п / т / ш ambiguity that you see are exaggerated in that they show forms that are nominally “standard” but basically impossible to reproduce without a fountain (or, even better, dip) pen (think round hand or, as 'cyberax mentions, Spencerian script), yet use a constant stroke width that such an implement wouldn’t produce. For the latter two, people who actually write m and not т will often resolve the ambiguity with ш by adding an overbar or an underbar, respectively (the same ones that Serbian uses even in print[1]). It’s also pretty normal to exaggerate letter joins when they come out looking too similar to parts of other letters, etc. Overall, modern Russian cursive is about as legible as modern French cursive, and I don’t think people complain much about the latter.

I also find the hand-wringing about English accents somewhat surprising. Yes, different accents exist, and yes, English has a much wider variation than (urban) Russian (there are things in the countryside that urban dwellers haven’t heard for a century), but phonemic orthographies are a thing, and though children in e.g. Moscow may perpetually struggle with orthographic distinctions that no longer correspond to anything in their accent, the idea of a spelling competition remains about as laughable as that of a shoelace-tying one. Nobody makes you represent the many mergers of English with a single letter in your new orthography (though it would be funny).

[1] https://commons.wikimedia.org/wiki/File:Cyrillic_alternates...., rightmost column


> Then, when it fails [...], you can either poke it in the right ways or change your program in the right ways so that it works for you again. This is a horrible way to program; it’s all alchemy and guesswork and you need to become deeply specialized about the nuances of a single [...] implementation

In that post, the blanks reference a compiler’s autovectorizer. But you know what they could also reference? An aggressively opaque and undocumented, very complex CPU or GPU microarchitecture. (Cf. https://purplesyringa.moe/blog/why-performance-optimization-....)


They even kept the ability to use / instead of Alt, for those with Lotus 1-2-3 muscle memory. The muscle-memory problem experienced by office-software challengers is old.

I like everything about tldr except for the fact that every way to use it is an Internet-dependent client for some reason. Looking at how it works again, I’m half-tempted to write a converter from their dialect of Markdown to roff and run it against https://github.com/tldr-pages/tldr so I can do `man tldr ffmpeg`.

In theory, for straight-line code only, the If Statement Ladder of Doom is an alternative:

  int ret;
  FILE *fp;
  if ((fp = fopen("hello.txt", "w")) == NULL) {
      perror("fopen");
      ret = -1;
  } else {
      const char message[] = "hello world\n";
      if (fwrite(message, 1, sizeof message - 1, fp) != sizeof message - 1) {
          perror("fwrite");
          ret = -1;
      } else {
          ret = 0;
      }
  
      /* fallible cleanup is unpleasant: */
      if (fclose(fp) < 0) {
          perror("fclose");
          ret = -1;
      }   
  }
  return ret;
It is, in particular, universal in Microsoft documentation (but notably not in actual Microsoft code; e.g. https://github.com/dotnet/runtime has plenty of cleanup gotos).

In practice, well, the “of doom” part applies: two fallible functions on the main path is (I think) about as many as you can use it for and still have the code look reasonable. A well-known unreasonable example is the official usage sample for IFileDialog: https://learn.microsoft.com/en-us/windows/win32/shell/common....


> There are concepts like "k-dimensional equidistribution" etc. etc... where in some ways the requirements of a PRNG are far, far, higher than a cryptographically sound PRNG

Huh? If you can chew through however many gigabytes of the supposed CSPRNG’s output, do some statistics, and with a non-negligible probability tell if the bytes in fact came from the CSPRNG in question or an actual iid random source, then you’ve got a distinguisher and the CSPRNG is broken.


It all comes down to actual specific statistical tests, and how hard they are to break in specific applications.

No CSPRNG is absolutely perfect, no CSPRNG has ever absolutely passed every statistical test thrown at it.

MCMC stresses very different statistical properties than the typical CSPRNG tests do.

Every PRNG is absolutely broken if you want to be absolute about it. MCMC and crypto applications push on different aspects where statistical issues will cause application level failures.

See e.g. this paper https://www.cs.hmc.edu/tr/hmc-cs-2014-0905.pdf

(it's not the end-all, be-all, but it's a good survey of why this stuff matters and why it's different)


> no CSPRNG has ever absolutely passed every statistical test thrown at it

As far as I know (admittedly not a high standard), there is no published statistical test that you could run on, for example, a single AES-256-CTR bitstream set up with a random key and IV, running on a single computer, that would be able to tell you with a meaningful likelihood ratio that you were looking at a pseudorandom rather than truly random input before the computer in question broke down. (I’m assuming related-key attacks are out of scope if we’re talking about an RNG for simulation purposes.)


Cryptographic operations when done correctly result in full chaos within the discrete domain (the so-called avalanche effect). Any bias of any kind gives rise to a distinguisher and the primitive is regarded as broken.

One way to imagine what symmetric cryptography does is a cellular automaton that is completely shuffled every iteration. In the case of Keccak/SHA3, that is almost exactly what happens too.


There has never been a perfect CSPRNG.

There have been a very large number of CSPRNGs for which there does not exist any known practical method to distinguish them from TRNGs.

For all of them there are theoretical methods that can distinguish a sequence generated by them from a random sequence, but all such methods require an impossible amount of work.

For instance, a known distinguisher for the Keccak function that is used inside SHA-3 requires an amount of work over 2^1500 (which was notable because it was an improvement over a naive method that would have required an amount of work of 2^1600).

This is such a ridiculously large number in comparison with the size and age of the known Universe that it is quite certain nobody will ever run such a test and find a positive result.

There are a lot of other such CSPRNGs for which the best known distinguishers require a work of over 2^100, or 2^200, or even 2^500, and for those it is also pretty certain that no practical tests will find statistical defects.

There are a lot of CSPRNGs that could not be distinguished from TRNGs even by using hypothetical quantum computers.

Even many of the pretty bad cryptographic PRNGs, which today are considered broken according to their original definitions, can be made impossible to distinguish from TRNGs by just increasing the number of iterations in their mixing functions. This is not done, because more efficient mixing functions have since been designed, which achieve better mixing with less work.


> There has never been a perfect CSPRNG.

What is a perfect CSPRNG?


"before the computer in question broke down."

A good MCMC simulation might test that! E.g., training a large diffusion model, which takes far more compute time than the mean time to failure of a single computer.

Also, the standards of those tests vs. does it bias the statistical model fitted with MCMC are different.


I am aware of tests vs. ChaCha20 here: https://www.pcg-random.org/index.html; I am not aware of tests vs. AES-256-CTR.

However at some point, 100x faster performance w/o an exploitable attack vector is also relevant! (though sometimes people find ways).

CSPRNGs are mostly worried about very specific attack vectors, and sure, they're likely to be completely unpredictable. But other applications care more about other failure modes, like lack of k-dimensional equiprobability, and those hurt them far more.

The idea that CSPRNGs are the end-all and be-all of RNGs holds CS back.


I am familiar with that site and the PCG PRNGs are based on a sound principle, so they are good for many applications.

However I have never seen a place where the author says anything about finding a statistical defect in ChaCha. She only correctly says that ChaCha is significantly slower than PRNGs like those of the PCG kind, and that it shares the same property as any PRNG with a fixed state size: limited high-dimensional equidistribution. That is also true of any concrete instantiation of the PRNGs recommended by the author. The only difference is that, with PRNGs having a simple definition, you can build the same structure with a bigger state, as big as you want; but once you have chosen a size, you again have a limit. And the PCG PRNGs recommended there, when given states larger than those of cryptographic PRNGs, become slower than those cryptographic PRNGs, owing to slow large-integer multiplications.

In the past, I have seen some claims of statistical tests distinguishing cryptographic PRNGs that were false, due to incorrect methodology. E.g. I have seen a ridiculous paper claiming that an AI method is able to recognize that an AES PRNG is non-random. However, reading the paper has shown that they did not find anything that could distinguish a number sequence produced by AES from a true random sequence. Instead, they could distinguish the AES sequence from numbers read from /dev/random on an unspecified computer, using an unspecified operating system. Therefore, if there were statistical biases, those were likely in whichever was their /dev/random implementation (as many such implementations are bad, and even a good implementation may appear to have statistical abnormalities, depending on the activity done on the computer), not in the AES sequence.


Are they claiming that ChaCha20 deviates measurably from equally distributed in k dimensions in tests, or just that it hasn't been proven to be equally distributed? I can't find any reference for the former, and I'd find that surprising. The latter is not surprising or meaningful, since the same structure that makes cryptanalysis difficult also makes that hard to prove or disprove.

For emphasis, an empirically measurable deviation from k-equidistribution would be a cryptographic weakness (since it means that knowing some members of the k-tuple helps you guess the others). So that would be a strong claim requiring specific support.


By O’Neill’s definition (§2.5.3 in the report) it’s definitely not equidistributed in higher dimensions (does not eventually go through every possible k-tuple for large but still reasonable k) simply because its state is too small for that. Yet this seems completely irrelevant, because you’d need utterly impossible amounts of compute to actually reject the hypothesis that a black-box bitstream generator (that is actually ChaCha20) has this property. (Or I assume you would, as such a test would be an immediate high-profile cryptography paper.)

Contrary to GP’s statement, I can’t find any claims of an actual test anywhere in the PCG materials, just “k-dimensional equidistribution: no”, which I’m guessing means what I’ve just said. This is, at worst, correct but a bit terse and very slightly misleading on O’Neill’s part; how GP could derive any practical consequences from it, however, I haven’t been able to understand.


As you note, a 256-bit CSPRNG is trivially not equidistributed for a tuple of k n-bit integers when k*n > 256. For a block cipher I think it trivially is equidistributed in some cases, like AES-CTR when k*n is an integer submultiple of 256 (since the counter enumerates all the states and AES is a bijection). Maybe more cases could be proven if someone cared, but I don't think anyone does.

Computational feasibility is what matters. That's roughly what I meant by "measurable", though it's better to say it explicitly as you did. I'm also unaware of any computationally feasible way to distinguish a CSPRNG seeded once with true randomness from a stream of all true randomness, and I think that if one existed then the PRNG would no longer be considered CS.


You care when you're trying to generate random vectors which may be of a different size, and if you are biasing your sample.

Is it enough to truly matter? Maybe not, but does it also matter that SHA-1’s nominal 80-bit collision resistance turned out to be only about 61 bits?


Nobody cares even then, because any bias due to theoretical deviation from k-equidistribution is negligible compared to the desired random variance, even if we average trials until the Sun burns out. By analogy, if we're generating an integer between 1 and 3 with an 8-bit PRNG without rejection, then we should worry about bias because 2^8 isn't a multiple of 3; but if we're using a 256-bit PRNG then we should not, even though 2^256 also isn't a multiple.

If you think there's any practical difference between a stream of true randomness and a modern CSPRNG seeded once with 256 bits of true randomness, then you should be able to provide a numerical simulation that detects it. If you (and, again, the world's leading cryptographers) are unable to adversarially create such a situation, then why are you worried that it will happen by accident?

SHA-1 is practically broken, in the sense that a practically relevant chosen-prefix attack can be performed for <$100k. This has no analogy with anything we're discussing here, so I'm not sure why you mentioned it.

You wrote:

> There are concepts like "k-dimensional equidistribution" etc. etc... where in some ways the requirements of a PRNG are far, far, higher than a cryptographically sound PRNG

I believe this claim is unequivocally false. A non-CS PRNG may be better because it's faster or otherwise easier to implement, but it's not better because it's less predictable. You've provided no reference for this claim except that PCG comparison table that I believe you've misunderstood per mananaysiempre's comments. It would be nice if you could either post something to support your claim or correct it.


> You wouldn't expect a Vim user to just abandon their muscle memory and just switch to Emacs.

Expect to just abandon, no. Expect to spend a reasonable time learning and trying out the new interaction paradigm to see if it could (after controlling for unfamiliarity) also suit them, yes. (The ultimate answer may turn out to be negative, like for me in the case of Emacs.)


What symmetric cryptography is there that would be reasonable on a small 8-bitter? This means

- As little code as possible;

- As little constant data as possible;

- Little to no shifts by amounts not divisible by 8, as there may not be a barrel shifter even for bytes;

- No shifts by variable amounts, including as a space-saving technique, for the same reason;

- No multiplies beyond 16×16 bits, and preferably none at all, as there may not be a multiplier.

Speck, mentioned in TFA, fits this very well. None of the things that came out of eSTREAM or the NIST lightweight cryptography competition even qualify, as far as I can tell, as the “lightweight” part is very keen on things that are easy in hardware but hard (slow, space-hungry, or both) in software. Gimli exists but is kind of chonky. So is Speck truly it? Is just no one interested in the problem?


ChaCha20 satisfies your conditions.

The only disadvantage of ChaCha20 vs. Speck is a bigger state, you need 128 bytes for it (64 bytes of state + 64 bytes for the intermediate computations), but that is not likely to be a problem, except in the smallest microcontrollers.

The bigger state of ChaCha20 is determined by higher security requirements. The advantage of ChaCha20 is that it is supported by standard protocols, e.g. TLS 1.3 and SSH.

The standard protocols mentioned above include ChaCha20 precisely for the case of communication with smaller or older CPUs, which do not have hardware AES support.


For some reason (and despite remembering it being called an “add-rotate-XOR design”) I was sure that ChaCha20 used multiplies, even though of course it does not. Thank you for setting me straight on this.

I’m not sure I’m all that optimistic about its code size—the standard C implementation with its eight inlined quarter-rounds seems certain to end up downright bloated compared to Speck—but I guess if I wasn’t picky about performance it could be boiled down to something reasonable. (Same for ASCON of eSTREAM & NIST LWC fame, which I also remembered being worse than it actually is.) Could be worth sitting down with an assembler at some point.

There’s also the question of why you’d bother with an 8-bitter at all (for anything more substantial than a TV remote or a musical postcard) in a world where the CH32 exists.

As for TLS or SSH, I’m not sure how much of a meaningful advantage it is. Talking to just about anything in the outside world likely excludes non-ephemeral TLS-PSK, which means that you’re going to need to implement a key exchange. And the code for that is likely to dwarf everything else, isn’t it?..

