Hacker News | emforce's comments

I learned about git switch through the comments on this article on reddit. It's definitely something I'm actively trying to adopt over my old workflow, but it's so hard to overcome years of muscle memory.
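For anyone else making the same change, a rough mapping from the old checkout-based workflow looks like this (requires git 2.23+; branch and file names are just placeholders):

```shell
# Old habit: git checkout does double duty for branches and files.
git checkout -b feature     # create a branch and switch to it
git checkout main           # switch to an existing branch

# New habit (git >= 2.23): one command per job.
git switch -c feature       # create a branch and switch to it
git switch main             # switch to an existing branch
git restore file.txt        # discard local changes (the other half of checkout)
```

The split means a typo can no longer silently clobber a file when you meant to change branches, which is most of the argument for retraining the muscle memory.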


Comparisons were done with CloudFlare on a static site; it's been a very long time since the site was based on Laravel and PHP.

I had set Browser Cache Expiration to 1 month within my CloudFlare settings as was luckily screenshotted within a previous article: https://medium.com/@elliot_f/my-journey-into-web-speed-optim...

I'm fairly sure that I had some speed optimizations set within my Nginx server block. Unfortunately, I don't have the Nginx config file to hand anymore as I've (somewhat stupidly) deleted the snapshots without taking a backup.


I'm no Cloudflare or Cloudfront expert, but these should be apples-to-apples comparisons, and seeing that huge disparity in latency makes me think it's most likely a configuration issue, as every comparison I've seen claims they perform roughly the same: https://blog.latency.at/2017-09-06-cdn-comparison/

Specifically though we're talking about how long Cloudflare caches your content on their proxy/edge servers. Browser caching is irrelevant for this discussion, as speed tests of this nature should always be performed on a clean request.


I can't claim I am either, unfortunately. I required assistance ensuring the certs were in place and my configuration was correct when doing the migration.

And just to clarify, based on some of the articles I've read, I actually think CloudFlare may be a better choice of CDN, this post just highlights one way of achieving a global-scale website and doing it in such a way that it's resilient and cheap as chips!


There is a substantial difference in SSL handshake speed and site performance between a single central server and multi-edge storage in front of Cloudflare. The closer your edges are to Cloudflare's edges when the handshake happens, the faster things go. The handshake delay is particularly bad internationally with Heroku running under CF, since you can't predict which Heroku IP you are fetching from; even if you pin to one point, latencies are still long at far-away locations.


That setting is what Cloudflare passes on down to your visitors, not how long it caches itself.

I don't actually know Cloudflare personally, but assuming they follow the spec, you need to set "Cache-Control: max-age=300" or "Cache-Control: s-maxage=300" to tell Cloudflare to cache a given response for 5 minutes.


I was previously using Cloudflare in conjunction with a Linode server. The disadvantage, however, is that you need to ensure that one server never goes down; otherwise anyone hitting a cold cache would see the site as down.

Also, why do you need a server at all? If you are doing nothing but serving static files, why incur the overhead of maintaining Let's Encrypt certificates and writing Nginx config files? AWS provides a simple interface in which you can view and manage your static files without having to SSH into a server.

On the point of price: I'm predicting my total hosting cost for a site serving roughly 40-45k users per month will be about $7/month, all while minimizing the administration and monitoring time a more traditional system requires.

Hope this clarifies things; I'd be keen to hear any counter-thoughts!


We use Azure, and we have multiple sites getting that visitor count (or more, for PPC campaigns) for way less than $7/mo using Cloudflare/Azure Edge/Web Apps. This might not work for a single site, but with multiple sites on one service plan we can handle far more visits per site (especially if they're static) for $2-$3/site/mo. We could auto-scale if needed; that would raise the number, but it would not exceed yours. Unless I'm missing something, I don't see how $7 isn't viewed as rather expensive for that visitor count.


Sure! Here are my counter-thoughts: what if your site changes every 5 minutes and is about 100 MB?

That's 12 changes/hour × 24 hours × 100 MB = 28,800 MB, roughly 28.8 GB of fresh content per day.

S3 costs (even if only for the bandwidth) will be greater than a server with a fair-share unlimited connection.

Also, CloudFront's non-free tiers will kill you, again because of the bandwidth.
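Back-of-envelope, under the parent's assumptions (100 MB refreshed every 5 minutes, and an assumed ~$0.085/GB first-tier CloudFront egress rate; check current pricing):

```go
package main

import "fmt"

func main() {
	updatesPerDay := 12.0 * 24.0 // a change every 5 minutes
	gbPerUpdate := 0.1           // ~100 MB site
	gbPerMonth := updatesPerDay * gbPerUpdate * 30
	fmt.Printf("%.1f GB/month, ~$%.2f egress\n", gbPerMonth, gbPerMonth*0.085)
	// prints: 864.0 GB/month, ~$73.44 egress
}
```

That worst case assumes every refresh is fully re-downloaded; real transfer would be lower for a site with mostly unchanged assets, but the point about bandwidth dominating the bill stands.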


> What if your site changes every 5 minutes and is about 100 MB?

This is probably an exceptional rate of change for the _vast_ majority of static sites.


> This is probably an exceptional rate of change for the _vast_ majority of static sites.

I also doubt that many static sites have much "overhead" related to maintaining nginx config files and Let's Encrypt certificates.


Sure, but then there's the whole overhead of maintaining a server.


To clarify: "The difference between zero servers and one server is much larger than the difference between 100 servers and 101 servers"

The person you're talking to is likely thinking of the "100=>101" case, not the "0=>1" case.


Maintaining a server is not complicated nowadays with Ansible. I have a few dozen; some of them (like the Linode boxes) haven't needed any admin for years, while the others run Debian testing and use Ansible to deploy matching configurations and keep up with updates.


I'd be interested in getting more feedback on the article from you. What is it that doesn't feel right, and what could I improve upon? Cheers, Elliot


I think the Unity analogy doesn’t make sense (I use Unity on occasion for hobby stuff) and that the article is too shallow. It lacks concrete use cases and measurable quantities.


I believe Unity has ultimately changed the way game developers work on indie titles, though. It's undoubtedly had a huge effect on indie developers at all levels due to its ease of use and the fact that it handles a hell of a lot of the complexity for you, similar to how Serverless does.

But I appreciate the feedback. It was somewhat hastily written as I was preparing for the New Year celebrations, haha, but I will take this feedback on board for my next articles!


Hi Jacques, I really appreciate your comment!

1. Yeah, there are always going to be situations where low-latency is a must.

2. PaaSes most certainly do. I'm a huge advocate for Cloud Foundry usage within my place of work and help onboard people for just this reason. FaaS will simply provide one extra layer of abstraction so that developers won't necessarily have to deal with larger frameworks in situations where they don't make sense.

3. This is a really interesting point of view and I'm inclined to agree with you; there does need to be extensibility that some of these platforms don't currently offer.

P.s. I'm very much looking forward to working with Pivotal's function service offering once it is made available to us!


If we look at the likes of Unity3D, nothing was perfect straight off the bat, and I feel like the same can be said of Serverless architectures.

The people offering serverless are no doubt aware of the difficulties and I wouldn't be surprised if they said they were developing something that would make this easier.

Cold starts are a valid issue though and I feel that the effects of this can be minimized with the use of fast compiled languages such as Go.

Lock-in is also a worry for most. As we progress, we need to ensure that we write serverless code in such a manner that it can easily be ported across to other serverless providers.


> Cold starts are a valid issue though and I feel that the effects of this can be minimized with the use of fast compiled languages such as Go.

Warmed-up code will always have an edge. Even if Go code can launch very fast, the same binary retained in main memory and heavily represented in L2 and L3 caches is going to absolutely stomp the same bits being loaded from disk over and over.

> Lock-in is also a worry for most. As we progress, we need to ensure that we write serverless code in such a manner that it can easily be ported across to other serverless providers.

This will come down to folks building such platforms. I am working for Pivotal on Project Riff. Oracle released Fn, IBM and others are supporting OpenWhisk. There are really a lot of people working on it.


Re-reading it this morning, I would tend to agree; it's slightly terse and could have done with some more backstory. I'll take this feedback on board for my next article!

I am new to this style so I am still playing around and seeing what sticks. I do really appreciate your comment!


I don't believe it would be a case of building your own cloud on top of Amazon; it would be a case of saying something like:

I want 2 instances of my account service and 2 instances of my comic-book viewer service behind a load balancer. You are specifying how you want your application to look, as opposed to building a fully fledged cloud offering.

I hope this clarifies things!


Ok, looks like I misunderstood. Thank you for clarifying...


Hi All, this has received an incredible number of views and comments since I posted it and left the house!

I appreciate the feedback comments! I'm very much still playing around with different technologies and writing about them as I go. This is more a learning experience that I've documented, so take what I'm saying with a pinch of salt.

I thought I'd also clarify that I'm using the code/docker images that I created in the previous article as the base from which I'm running my tests, as that seems to have been missed by a few peeps!


I definitely missed it. Maybe adding a link to this one as well would be helpful.

Edit: I see the code using the `com.sun.net.httpserver` package, but not the Spring Boot version.


I believe I addressed this in the 4th paragraph of this article. I can't imagine many teams going back and optimizing their memory usage after they've deployed something to production.

Unless you have capacity in your sprint to do some fine-grained optimization of these things, you'll most likely be looking at extending the platform or picking up potential feature requests instead.

But even still, the point stands, if not to the same extreme. You have to actually set this flag to get down to 64 MB, which is still 60+ times the size of a similar Go service and 3-4 times the size of a thin Java/Python application.


What you're forgetting is: you get so many features with Spring Boot, and you can actually use them without increasing the memory footprint.

Yes of course, for a single service with one endpoint that does nothing it is bloat. But which microservice does nothing?

Think about a database backend, actuator endpoints, etc. They are all possible with a memory footprint of ~200 MB. It can be less if you use another language, but I doubt that Go will give you such a major framework for developing a microservice, and that framework will save you a lot more time than the hour it takes to reduce your memory footprint.

And one thing you missed too: using the JVM should be combined with a basic knowledge of how the heap works. If you do not limit it, it will grow a lot larger than required; this should be common knowledge. Compared to Go, this actually seems like the better arrangement: in Go there is, AFAIK, no way to limit the heap, so in the worst case it will grow and grow and grow.


Go comes with pretty much everything you need for microservices in the standard library, and linking those parts in isn't likely to change the memory profile very much.


Not really seeing anything in the Go standard library for https://docs.spring.io/spring-boot/docs/current/reference/ht..., most of which we need for microservices.


> I can't imagine many teams going back and optimizing their memory usage after they've deployed something to production.

After your two years in the industry, I am surprised that you can't imagine someone changing a command-line parameter in a configuration file.


You lack imagination :) What I find hard to imagine is many teams flipping between microservices in Go, Java, and Python because of memory consumption.

Besides, every decent Java dev knows that it's a memory hog compared to most of the platforms out there, but that normally isn't a concern when you choose something JVM-based.


> You have to actually set this flag to get down to 64 MB, which is still 60+ times the size of a similar Go service and 3-4 times the size of a thin Java/Python application.

This is a pointless way to evaluate alternatives, though, because you'll never beat BCHS[1].

[1]: https://learnbchs.org/start.html


Setting the JVM arguments is so simple and easy that I can't imagine it would take much sprint work to apply them.

