
"The Riemann Hypothesis conjectures that there is a way to know the distribution of primes in a higher dimensional imaginary math. That the unintuitive and difficult system that gives us primes in real number space is somehow contained in a larger imaginary space, with some kind of regular structure, and that they are immediately accessible and calculable once you know the trick."

RH does not imply this (under any reasonable interpretation of what you wrote; e.g. RH would not imply a fast algorithm that, given n, outputs the nth prime).

The prime number theorem says that the number of primes up to x (denoted pi(x)) is "well approximated" by Li(x). The Riemann hypothesis says that this is a REALLY good approximation. In particular, it says that the error incurred in approximating pi(x) by Li(x) is of size about sqrt(x). (More precisely sqrt(x)log(x).)
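To make the approximation concrete, here's a quick pure-Python sanity check (a rough sketch, not an optimized implementation): a basic sieve for pi(x), and a trapezoid-rule estimate of Li(x) = integral from 2 to x of dt/log(t). The error at x = 10,000 comes out well inside the sqrt(x)*log(x) window.

```python
import math

def prime_count(x):
    """pi(x): number of primes <= x, via a basic sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, math.isqrt(x) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, x + 1, p)))
    return sum(sieve)

def Li(x, steps=200_000):
    """Li(x) = integral_2^x dt/log(t), estimated with the trapezoid rule."""
    h = (x - 2) / steps
    f = lambda t: 1.0 / math.log(t)
    return h * (f(2) / 2 + f(x) / 2 + sum(f(2 + i * h) for i in range(1, steps)))

x = 10_000
print(x, prime_count(x), round(Li(x), 1), math.sqrt(x) * math.log(x))
```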



There is actually one interesting connection between RH and integer factorization. It has to do with the idea of "Mobius pseudorandomness." For simplicity, I'll phrase things not in terms of the Mobius function, but in terms of the closely related Liouville function lambda.

The Liouville function comes up naturally if you are interested in whether the "typical" number has an even or odd number of prime factors. More precisely, define the function lambda(n) (the "Liouville function") as follows. lambda(n) = 1 if n has an even number of prime factors, and -1 if n has an odd number of prime factors. If you expected that about half of the positive integers have an even number of prime factors, then you would expect that lambda is 0 on average, i.e. that |(1/x) * sum_(n <= x) lambda(n)| tends to zero as x tends to infinity.
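Here's a quick sketch of that heuristic in Python: lambda(n) computed by trial-division factoring, then averaged up to x (the printed average should be close to 0; the code is illustrative, not an efficient way to do this at scale).

```python
def liouville(n):
    """lambda(n) = (-1)^Omega(n), where Omega(n) counts prime factors
    with multiplicity; computed here by trial-division factoring."""
    sign = 1
    d = 2
    while d * d <= n:
        while n % d == 0:   # strip out each factor of d
            n //= d
            sign = -sign
        d += 1
    if n > 1:               # leftover prime factor > sqrt(original n)
        sign = -sign
    return sign

x = 100_000
avg = sum(liouville(n) for n in range(1, x + 1)) / x
print(avg)  # close to 0, consistent with the "half even, half odd" heuristic
```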

It turns out that this statement is equivalent to the prime number theorem. Further, it turns out that the Riemann Hypothesis is equivalent to the STRONGER statement that the sums |sum_(n<=x) lambda(n)| grow no faster than about sqrt(x). This is exactly what you would expect from the Central Limit Theorem if you thought that lambda(n) behaved like a Bernoulli random variable taking the values +1 and -1.
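You can watch this square-root-scale cancellation numerically. The sketch below sieves smallest prime factors to get lambda(n) for all n <= x at once, then compares the partial sum against sqrt(x); the factor of 2 in the comparison is just an arbitrary safety margin for a finite-x check, not anything the theorem states.

```python
import math

def liouville_up_to(x):
    """lambda(n) for all 1 <= n <= x, via a smallest-prime-factor sieve
    and the recursion lambda(n) = -lambda(n / p) for p = spf(n)."""
    spf = list(range(x + 1))              # spf[n] = smallest prime factor of n
    for p in range(2, math.isqrt(x) + 1):
        if spf[p] == p:                   # p is prime
            for m in range(p * p, x + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0] * (x + 1)
    lam[1] = 1
    for n in range(2, x + 1):
        lam[n] = -lam[n // spf[n]]
    return lam

x = 100_000
lam = liouville_up_to(x)
L = sum(lam[1 : x + 1])
print(L, math.isqrt(x))  # compare |sum_(n<=x) lambda(n)| with sqrt(x)
```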

What does this have to do with factorization? It's easy to compute lambda(n) if you know how to factor n, so computing lambda is certainly no harder than factoring. On the other hand, we don't know of any way to compute the Liouville function without factoring n. This isn't a proof that computing lambda(n) is at least as hard as factoring n, but it certainly seems to be the case.

Now, one rigorization of what it means for a sequence to behave randomly is that it doesn't correlate with any easily computable sequences. (Replace "easily computable" with "computable" and you are not too far from the definition of Martin-Lof randomness.) In particular, it shouldn't be easily computable.

So in other words, if you think that lambda behaves randomly, then it shouldn't be easily computable, which in turn means that factoring is hard!

But as mentioned above, one of the simplest consequences of lambda behaving randomly is (by the Central Limit Theorem) the Riemann hypothesis!



