
It's not even HTTP vs SPDY. It's HTTP vs SPDY when all content is coming from the same hostname. This is the best possible environment for SPDY, and the worst possible environment for HTTP.

You are looking at one TCP connection for SPDY, with everything multiplexed. With HTTP, you are looking at, best case, 4 TCP connections to the server, all starting cold, doing 350+ requests in parallel. And that's not even considering SPDY's server push feature.
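A back-of-the-envelope sketch of why this matters (a toy model, not a network simulation; the segment size, initial window, and per-response cost are all assumptions):

```python
# Toy comparison of round trips: 350 ~5KB responses over 4 serialized
# HTTP/1.1 connections vs. one multiplexed SPDY connection with
# idealized slow start. MSS and initcwnd values are assumptions.

MSS = 1460           # bytes per TCP segment (assumed)
INIT_CWND = 10       # initial congestion window in segments (RFC 6928)
N_REQUESTS = 350
RESPONSE_SIZE = 5 * 1024  # ~5KB per image

def http_round_trips(n_connections):
    # No pipelining: each request waits for the previous response to
    # finish, so every response costs at least one full round trip.
    return -(-N_REQUESTS // n_connections)  # ceil division

def spdy_round_trips():
    # All requests go out at once; each round trip delivers up to cwnd
    # bytes, and cwnd doubles every round trip (no packet loss assumed).
    total = N_REQUESTS * RESPONSE_SIZE
    cwnd = INIT_CWND * MSS
    rtts = delivered = 0
    while delivered < total:
        delivered += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

print("HTTP/1.1, 4 connections:", http_round_trips(4), "round trips")
print("SPDY, 1 connection:     ", spdy_round_trips(), "round trips")
```

Under these (generous to HTTP) assumptions the serialized connections still need roughly an order of magnitude more round trips, which is exactly the gap the test page shows off.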

I love SPDY and all, but this is not even close to a real world scenario.



I'd agree with Billy's comments that this isn't really a real world test.

HTTP Archive shows the average number of requests per page is around the 100 mark (I suspect the median is higher) http://httparchive.org/trends.php#bytesTotal&reqTotal

By using many very small downloads that probably never fill the congestion window, the HTTP case imposes a huge latency overhead on the test.

Looking at the connection view for the HTTP case in WebPageTest there's a HTTPS connection that delays the opening of the other parallel TCP connections - http://www.webpagetest.org/result/141202_VM_dc5d5bd74e398960... (for comparison SPDY test is http://www.webpagetest.org/result/141202_6B_a853f2009006b2ec...)

In my testing I've seen SPDY / HTTP/2 be anywhere from 35% faster to 5% slower. In my experience the key factors in getting it to perform are the quality of the TLS config, whether the server supports prioritisation, and the TCP config.


Yes, not technically real-world, but hopefully enough to push decision makers to start encrypting. I also added a param to reduce images loaded: http://www.httpvshttps.com/?images=100

From my testing, SPDY was generally 60% faster, but right now it's running slowly. It could be my computer or the server.


Due to the way slow start works, the throughput of a TCP connection increases with use (assuming no packet loss) as the congestion window grows.

The HTTP test has images that are ~5KB, so after the first round trip for an image the congestion window grows, but none of the subsequent requests grow it further. In a real-world example many of the files would be larger than 5KB, the window would grow further, and the number of round trips would be reduced.

The SPDY example can make use of the ever growing congestion window because the multiplexing will fill it i.e. we'll get the data from more than one image in a single round trip.
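To make the growth concrete, here's the idealized per-round-trip capacity of one connection (the MSS and initial window are assumed values, and real slow start is messier than simple doubling):

```python
# Idealized slow start: bytes deliverable on each successive round trip.
MSS, INIT_CWND = 1460, 10  # assumptions: 1460-byte segments, RFC 6928 initcwnd
window = [INIT_CWND * MSS * 2 ** rtt for rtt in range(5)]
for rtt, bytes_in_flight in enumerate(window, start=1):
    print(f"RTT {rtt}: up to {bytes_in_flight} bytes in flight")
# A single ~5KB image fits comfortably in even the first window, so a
# serialized HTTP connection never exercises this growth; a multiplexed
# connection can pack many images into each successively larger window.
```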

It's not that the test isn't 'technically real-world'; it's that it has (I don't think intentionally) a design that highlights an area where HTTP performs really poorly due to the latency penalty, i.e. many very small requests.

A more real world test case would mirror a typical page construction with varying file sizes - HTTP Archive can give you some clues here.

Have a watch of John Rauser's "TCP and the lower bound of web performance" for more detail on slow start - https://www.youtube.com/watch?v=G6ah2cq4LFY


I don’t know what multiple XORs have to do with SPDY, but yeah, they should have done it with one site, a CDN for images, some javascript loaded from weird other CDNs, etc.

—— [1] Multiple XOR –> Multiplexor –> Multiplexer, an electronic circuit that takes multiple bundles of lines and returns the values of the bundle that has been selected via the control input.
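In code form, the footnote's multiplexer is just selection by a control input (a toy illustration; the function name is made up):

```python
# Toy multiplexer: given several input bundles, return the one picked
# by the control input. A hardware mux does this combinationally.
def mux(bundles, select):
    return bundles[select]

print(mux([[0, 0], [0, 1], [1, 0], [1, 1]], select=2))
```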


Multiplexing matters if you make 300 requests: with HTTP/1, you have to submit a request and wait for each response to transfer completely before making the next one (HTTP pipelining was supposed to solve this but has never been enabled by default because of compatibility problems).

With SPDY, you can request all 300 requests up front and receive replies out of order so the server can send each chunk of data as it's ready.
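The shape of that is easy to sketch with concurrent tasks (a toy model of out-of-order delivery, not a SPDY implementation; the delays just stand in for responses finishing at different times):

```python
# Toy model: issue all requests up front and collect replies in whatever
# order they finish, the way a multiplexed connection delivers streams.
import asyncio
import random

async def fetch(stream_id):
    # Simulate each response becoming ready at a different time.
    await asyncio.sleep(random.uniform(0.01, 0.05))
    return stream_id

async def multiplexed(n):
    tasks = [asyncio.create_task(fetch(i)) for i in range(n)]
    # as_completed yields results in completion order, not request order.
    return [await t for t in asyncio.as_completed(tasks)]

order = asyncio.run(multiplexed(10))
print(order)  # completion order, usually not 0..9
```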


... uhhh... Read this and then come back: http://en.wikipedia.org/wiki/SPDY#Design



