Re: your point 1 -- waiting on a mutex happens regardless of whether you're using blocking or non-blocking IO.
If your program makes nothing but IO system calls, your point might make sense, but that isn't a realistic real-world assumption.
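A minimal sketch of the point, using Python's threading module as a stand-in for any threaded runtime (the worker names are made up): a thread blocked on a contended mutex is descheduled by the kernel exactly the same way whether the program's IO is blocking or non-blocking, because no IO system call is involved at all.

```python
import threading
import time

lock = threading.Lock()

def worker(name, hold_seconds):
    # Acquiring a contended lock parks this thread in the kernel
    # (a futex wait on Linux) -- this has nothing to do with IO.
    with lock:
        time.sleep(hold_seconds)  # simulate work done under the lock

t1 = threading.Thread(target=worker, args=("t1", 0.2))
t2 = threading.Thread(target=worker, args=("t2", 0.0))
t1.start()
t2.start()          # t2 blocks on the mutex while t1 holds it
t1.join(); t2.join()
```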
Re: your point 3 -- not using the complete timeslice is not a realistic assumption. If you're creating threads, then you're presumably doing at least a little CPU-intensive work, not just copying bytes from one file descriptor to another.
You're making terribly silly assumptions about what kind of work servers actually do, and ignoring the real benefit of async solutions.
The real benefit is being able to control how threads switch yourself, instead of relying on the kernel's built-in 'black box' scheduling algorithms. The problem with the 'black box' is that the kernel might decide to penalize your threads for inscrutable reasons of 'fairness', and then you suddenly get inexplicable latency spikes.
Of course rolling your own scheduling is an engineering boondoggle, and most people just opt for a very primitive round-robin solution. (Which, incidentally, is what you want anyway if you want good latency.)
In which case you might as well create a bunch of threads and schedule them as 'real-time' (SCHED_RR in Linux) and get the same result.
(Seriously, try it -- benchmark an async server vs a SCHED_RR sync server and see for yourself.)
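A sketch of what "schedule them as real-time" means in practice, assuming Linux and Python's `os` scheduling wrappers (SCHED_RR requires CAP_SYS_NICE, i.e. usually root, so this falls back gracefully when unprivileged):

```python
import os

def try_sched_rr(priority=1):
    """Request round-robin real-time scheduling for this process.

    Without CAP_SYS_NICE the kernel refuses, and we stay on the
    default SCHED_OTHER policy.
    """
    try:
        os.sched_setscheduler(0, os.SCHED_RR, os.sched_param(priority))
        return True
    except PermissionError:
        return False

if try_sched_rr():
    print("running under SCHED_RR")
else:
    print("no CAP_SYS_NICE; policy is", os.sched_getscheduler(0))
```

Under SCHED_RR, equal-priority threads round-robin with a fixed timeslice, which is exactly the primitive scheduling most async frameworks end up reimplementing in userspace.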
> Does the CPU interrupt threads running happily on cores even when there are no other threads which want to run or which have affinity that would allow them to run on that core?
Yes. Even if you are careful to run only one process (so: no monitoring, no logging, no VMs, no 'middleware', etc.) and limit the number of threads to exactly the number of processors, you still have background kernel threads that force your process to context switch.
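You can watch this happen from userspace. A small sketch, assuming Linux (it reads the per-process counters the kernel exposes in /proc): involuntary context switches accumulate even in a pure CPU-burning loop with no other runnable userspace threads.

```python
def ctxt_switches():
    """Read this process's context-switch counters from /proc (Linux)."""
    counts = {}
    with open("/proc/self/status") as f:
        for line in f:
            if "ctxt_switches" in line:
                key, _, val = line.partition(":")
                counts[key.strip()] = int(val)
    return counts

before = ctxt_switches()
sum(i * i for i in range(2_000_000))   # burn some CPU, no IO, no locks
after = ctxt_switches()
print({k: after[k] - before[k] for k in after})
```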
Because the "HTTPS everywhere or you're a dinosaur and you don't deserve to live" hysteria forced everyone to put HTTPS even in places where it doesn't belong.
> In 20 years, the $100,000 attack will be a $100 attack (or perhaps a $1 attack)
No. Moore's Law has been dead for years and will never come back. The benefits we saw in recent years came from people figuring out how to compile code for SIMD processors like GPUs, not from faster or cheaper silicon.
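For scale, a quick back-of-envelope (the 18- and 24-month figures are the two commonly cited doubling cadences): the parent's $100,000-to-$100 projection over 20 years is exactly what a steady 24-month halving of cost would deliver, so the disagreement really is about whether that cadence continues.

```python
# Cost reduction implied by different doubling cadences over 20 years
years = 20
for months_per_halving in (18, 24):
    halvings = years * 12 / months_per_halving
    factor = 2 ** halvings
    print(f"{months_per_halving}-month cadence: cost falls {factor:,.0f}x, "
          f"$100,000 -> ${100_000 / factor:,.2f}")
```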
Moore's law still holds, and is expected to hold until at least 2025 (see Wikipedia). I don't think it will be dead by then, but that is just guesswork.
Moore's Law (the 2-year version upwards-revised from the original 18-month version) stopped holding last year when 10nm (Cannonlake) was delayed to this year and Intel introduced a third step in its tick-tock process. The quotes you're looking at about 2025 were from 2012 (the one cited in 2015 has no support for the quote) and all three should be removed from the article or the assertion altered.
People predicted the death of Moore's law in 2005; we all know what happened. Intel's CEO seems to think it still holds true, and will continue to do so for the foreseeable future. (http://fortune.com/2017/01/05/intel-ces-2017-moore-law/) This is probably partly marketing, but I'm sure there is some truth to it as well.
The linked article admits that the two-year doubling ended, which is what Moore's Law has been for most of its history. Moore's Law ending doesn't mean we won't ever have another die shrink; it means the notion that we just have to wait two years to get twice the transistors for the same cost (or die area, depending on who you ask) is no longer true, and therefore projections based on such a cadence should be considered even more silly than they already were. I don't understand why people continue to claim that Moore's Law's death "has been predicted many times" when it has already ended; what happened in 2005 was that raw clock speeds stopped improving, and guess what: they still haven't improved much in the twelve years since.
Moore's law is not about single-thread performance, and attacks like this are easily parallelized anyway. Not to mention that fixing the comparison at 3 GHz is just trying to coerce the conclusion that we have seen little gain in the last 5 years.
> Not saying it's easy, but now it's on the horizon.
Not really. It's not a preimage attack. They spent on the order of a hundred thousand dollars of compute to find two crafted byte strings with the same SHA1 hash. There's still no way to SHA1-collide a specific, pre-existing byte string instead of random junk.
This is exactly what euyyn is saying: create two files with the same SHA1 (by adding bytes of gibberish to an unused section), commit one to the repository, and now you have a collision available.
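One detail worth noting for the git scenario (a sketch using Python's hashlib; the only git-specific fact assumed is how `git hash-object` computes blob IDs): git hashes an object header plus the contents, not the raw bytes. Since the collision is computed from SHA-1's initial state, two files that collide as raw byte strings do not automatically collide as git blobs; an attacker targeting git would have to compute the collision with git's header included.

```python
import hashlib

def git_blob_sha1(data: bytes) -> str:
    # git hashes "blob <size>\0" + contents, so a collision aimed at
    # git must be engineered with this header baked in from the start.
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()

# Matches `git hash-object --stdin` for the same contents:
print(git_blob_sha1(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
```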
> And what do you do when clang isn't an option, due to customer, target hardware, OS or even language extensions?
Then you cannot use Rust and must settle for lack of safety. (A profoundly silly question -- if modern C++ is not an option for whatever reason, then Rust is doubly so.)
You made a bad argument; put your ego in your pocket and admit it instead of writing such nonsense. Both clang and the Rust compiler are based on LLVM, and both can target only the platforms that LLVM targets. If you can't use clang today, then you can't use Rust either. We are not talking about "hypothetical" situations or alternative worlds, because in those situations I could say whatever I want about clang too...
You didn't even reply to what I wrote. We are talking about the Rust and Clang compilers and the platforms they target; that was the context. It started when you tried to belittle Clang because it shows warnings, and then you tried to make an argument about a hypothetical Rust compiler that can target an OS that Clang can't, again trying to show Rust > C++. Even when you can't show the superiority of Rust over C++, you invent hypothetical compilers that you think will work as your argument. Anyone who reads your comments can see you are obviously biased, and your ego just magnifies that effect. Think about it objectively, get some air; there is a world besides HN too.
> These days, I'm more likely to suggest Brave over FF, if someone really requires a Chrome alternative.
Brave is just a reskinned Chrome.
> We [web developers] owe a lot to FF, Firebug, etc, but the writings on the wall for mobile and desktop.
Where I work, developers have mostly switched to Firefox over the last few years. Firefox is just a better browser (faster, less bloated) under the hood. Yes, Firefox will have a difficult time since they don't have their own proprietary walled-garden ecosystem as a distribution channel, but the technical product is solid.
Do you mean Chromium? And it's much more than a skin, lol.
And a faster browsing experience is not what many people actually get, but I'm not going to argue about how anyone quantifies that -- I really don't care what others use.
On desktop, I find the tooling much worse and sluggish. But the thread was about mobile, and there are vanishingly few users of mobile FF.
C++ occupies the same mental space as languages like Haskell, Scala, or OCaml.
Compared to its real competition, C++ is very elegant and a joy to use. These languages are meant for squeezing the maximum out of compile-time type abstractions, not as easy-to-use tools for simple enterprise apps.
http://www.gbresearch.com/axe/