In other words, start using the `//url.to/something-here/` shortcut and the world will be a better place.
EDIT: Use `//` instead of `http://` or `https://` and the resource will be fetched with whatever protocol the page itself is using.
EDIT 2: When you use the `//` shortcut, double-check that the website you are linking to supports HTTPS; some still don't, and they don't redirect properly...
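For the curious, the resolution rule can be sketched with Python's standard `urllib.parse`, which follows the same RFC 3986 behaviour browsers do: a network-path reference (`//host/path`) inherits the scheme of the base page (the host names below are just placeholders):

```python
from urllib.parse import urljoin

# A protocol-relative ("network-path") reference inherits the scheme
# of the page that embeds it, per RFC 3986.
print(urljoin("https://example.com/page", "//cdn.example.net/app.js"))
# → https://cdn.example.net/app.js
print(urljoin("http://example.com/page", "//cdn.example.net/app.js"))
# → http://cdn.example.net/app.js
```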
No, don't, unless you want bad things to happen when a whole bunch of corner cases start sprouting up. I did this for a while and ultimately gave up after seeing the error rate spike on my server. There are lots of people out there using stuff which just does not get the protocol-independent // scheme. I went back to doing either the whole thing (http[s]://host/path) or a relative thing (/path) but nothing in-between.
I disagree with your conclusion. The fewer sites this works with, the quicker developers will fix it. Supporting the wrong way is not the right thing to do.
From the standpoint of the individual developer, you handle edge cases like this so you don't break your clients' experience.
From the standpoint of the ecosystem, broken tools are bad, and they cause individual developers to have to handle more edge cases.
If one individual developer doesn't handle these edge cases, their clients will just think they are bad. If all of the individual developers decided not to handle these edge cases in a coordinated fashion, sure, they could trigger a change. But I don't think that's how it happens in the real world.
To look back on the days of IE6/IE7, most developers didn't just stop supporting these browsers and hope that their clients would stop using them; they supported the browsers until larger forces caused their clients to shift to newer browsers.
Getting back to my original point, I think it's possible for each of you to have the viewpoints that you have, but also for rachelbythebay to say, "Sure, if I could coordinate with all other devs to stop handling broken edge cases at the same time, I would do that," and for you to say, "Sure, I can see how you would want to keep your job and handle edge cases until they are fixed upstream."
I half-agree with rachelbythebay's position, but, in my experience, users will definitely say "hey, this site doesn't work with your service", and then the developer investigates and fixes it. It would also be worth just emailing the devs of the service; that would probably get it fixed a significant proportion of the time.
We are running one of the largest sites in the Netherlands with protocol-independent css/js/img: http://www.marktplaats.nl/
We haven't heard any real complaints from users about broken things. Probably anecdotal, but it saves quite a bit of complexity.
But only for uncached resources - see Eric Lawrence's comment:
“Internal to Trident, the download queue has “de-duplication” logic to help ensure that we don’t download a single resource multiple times in parallel.
Until recently, that logic had a bug for certain resources (like CSS) wherein the schema-less URI would not be matched to the schema-specified URI and hence you’d end up with two parallel requests.”
You could use a resource loader if you really cared, but with IE8 under 10% and dropping I'd recommend keeping your site clean and maintainable – anyone using IE8 at this point is used to the web being slow and ugly, so something like this will be the least of their worries.
You really care about that? Well, relative URLs (and maybe a `<base href>` tag) might be a workaround. But seriously, I wouldn't worry about these Microsoft bugs.
You will often need to dynamically generate all URLs depending on the current state of the page. A common example is an ecommerce site where normal pages are HTTP, but things like the shopping cart and checkout are HTTPS. Your template will need to know to load resources as HTTPS when on a secure page. It's not the end of the world, but certainly annoying.
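A minimal sketch of such a template helper (the `asset_url` name, the `cdn.example.com` host, and the `is_secure` flag are all assumptions for illustration, not anyone's actual API):

```python
# Hypothetical template helper: choose the asset scheme based on
# whether the current request arrived over HTTPS.
def asset_url(path: str, is_secure: bool) -> str:
    scheme = "https" if is_secure else "http"
    return f"{scheme}://cdn.example.com/{path.lstrip('/')}"
```

A framework would usually expose `is_secure` on the request object, so the template can call `asset_url("/js/app.js", request.is_secure)`.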
Many CDNs charge a lot for HTTPS (10x for all bandwidth, including non-secure). If you are fronting one of your sites with these, it is not economically viable to switch over.
Hint: local testing/loading of HTML files may then require a local server, since file:// will be used for resources too, which is rather problematic.
Well, if you double-click an HTML file on your computer, the browser will open it using file://. If you use the short `//example.com/foo.bar`, that is quite problematic:
`//fonts.googleapis.com/css` becomes `file://fonts.googleapis.com/css` → doesn't work
Additionally, JavaScript files might not load or work due to browsers' (security) limitations. Not sure about the exact details here.
Using a local server (instead of file:// access) would be a solution to that problem, not another problem.
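For anyone who hasn't done it: a throwaway local server is a few lines with Python's standard library (the port choice here is arbitrary; port 0 just asks the OS for a free one):

```python
# Serve the current directory over HTTP so // references resolve to
# http://localhost:<port> instead of file://.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("localhost", 0), SimpleHTTPRequestHandler)
port = server.server_address[1]  # the port the OS picked
```

Call `server.serve_forever()` and open `http://localhost:<port>/yourfile.html` in a browser.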
That's not great...
A lot of folks get started editing HTML by downloading an existing page, editing it in some small way, and viewing the resulting page to see if it worked.
I agree, and not just for novice people editing HTML pages. Being able to load HTML pages locally, or send them in a zip file, is sometimes important.
When I built Giraffe[1], a front-end for Graphite, one of my aims was that people could launch the dashboard from their desktop, then add dashboards to it by editing one file locally. Most of these people will use a server one day, but forcing them to launch a local server before they even start really decreases the ability to play with it instantly and try it out... the code in Giraffe's index.html needs to work both on a server and locally.
I already experienced strange behaviour with loading JSON/JSONP from a file:// based url, and I know it's an edge case, but it's still a useful use-case in my opinion.
But it would suck to have to launch a server just to view an HTML file locally... (actually, the problem has already been here for some time, thanks to JS requiring protocols to match to fetch data).
Just always use the https version then. There is nothing wrong with embedding HTTPS resources in an HTTP page. `//` is just shorter and more flexible (e.g. you could switch off SSL if you want).
I agree that using protocol-relative URLs is the way to go, but there is one particular situation to watch out for: if the user saves the page to disk and opens it again, then // will become file:// and all such links will be broken.
So on something like an "invoice" page that the user is likely to want to save, you may not want to do that (or you can use a piece of JavaScript to rewrite the links dynamically).
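One server-side variant of that fix is to expand `//` to absolute URLs just before serving save-friendly pages. A sketch (the regex over `src`/`href` attributes is a deliberate simplification; real HTML deserves a parser):

```python
import re

# Sketch: rewrite protocol-relative URLs in src/href attributes to
# absolute ones, so the page still works after being saved to disk.
def absolutize(html: str, scheme: str = "https") -> str:
    return re.sub(r'((?:src|href)=")//', rf'\g<1>{scheme}://', html)
```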
Not using protocol-relative URLs causes a great amount of pain. Unfortunately, when you're building content for third-party pages you need more graceful degradation than focus-stealing dialogues.
To anyone now panicking about user-generated content and non-SSL images, and thinking "What I need is some kind of SSL proxy for user generated images"...
And I whipped one up in PHP for some old PHP site that I worked on if anyone wants to see that. I shoved that behind Nginx so that I also get a file cache for the most requested files.
For my project I purchased an extra SSL domain name ( https://sslcache.se ), as I had some concern about serving user generated content on my primary domain. Concerns which are valid, as github.com recently acknowledged by moving their UGC pages to github.io .
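The heart of a camo-style proxy is small: sign each upstream URL with an HMAC so the proxy only fetches URLs your app vouched for and can't be abused as an open relay. A sketch (the secret, the SHA-1 choice, and the `sslcache.example` endpoint are all illustrative assumptions, not camo's actual deployment):

```python
import hashlib
import hmac

SECRET = b"change-me"  # shared secret between the app and the proxy (placeholder)

def sign(url: str) -> str:
    # The proxy recomputes this digest and refuses any URL whose
    # signature doesn't match, closing the open-proxy hole.
    return hmac.new(SECRET, url.encode(), hashlib.sha1).hexdigest()

def proxied_url(url: str) -> str:
    # Hypothetical proxy endpoint; embed this instead of the raw image URL.
    return f"https://sslcache.example/{sign(url)}?url={url}"
```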
As far as I can tell, this change doesn't apply to images, and probably shouldn't apply to user-generated content (i.e., you shouldn't be letting users write/embed arbitrary CSS, JS, plugins, fonts, frames, etc...)
I had read the link, and whilst it doesn't mention images as being included in the change it also doesn't mention images as being excluded, and does imply that all mixed content is blocked. It was my assumption given those conditions that it included images.
And there are many scenarios in which you do want to allow user generated content to include JS, off the top of my head Google Maps does so to allow user maps to be extensible. The issue is how such content is managed safely, and enabling SSL and putting the content on another domain is a good thing. Google do the right thing and serve such content over SSL and via an iframe on a totally different domain ( http://whois.domaintools.com/googleusercontent.com ).
FWIW I too wrote a camo clone, but in Go[1]. Decently sized project for learning a new language. At $dayjob we have a python version too (considering replacing it with the Go version at some point)...
According to the link, the block only applies to certain types of resources, notably excluding images (presumably because malicious images cannot really take over the page).
If by hotlinking you mean that terrible act of theft of bandwidth, then yes! Down with that✝.
If by hotlinking you mean inline images that are an essential part of hypertext documents, then no! It's a great thing to support.
But the basic thing is that by not hosting, and by being just a proxy, we haven't expressed any ownership or liability over the content that passes through the SSL proxy.
And as a side benefit, we don't have to build out storage for this.
✝ for those who like to externalise their responsibility to determine whether their servers serve a request by just stomping around claiming people 'steal' bandwidth.
Firefox 23 does not have TLS v1.2, but Nightly will very soon.
During March there has been very active development in NSS for TLS v1.2, and AFAIK they are checking it in bit by bit now.
From what I've read, it looks like the server advertises all the TLS versions it supports and the client picks the highest, but then an attacker sends an RST and the client falls back to the next version in the list. Is that accurate? (The only other downgrade attack I saw was on False Start, which has since been disabled in Chrome.)
Could (or should) they support an option in the browser to require only the highest possible version of a protocol? Or is there some other fix required to mitigate the attack?
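That option exists at the library level in some clients: pin a floor on the negotiated version so a forced fallback can't land on an older protocol. A sketch using Python's `ssl` module (an API that postdates this discussion, purely to illustrate the idea):

```python
import ssl

# Refuse to negotiate anything below TLS 1.2, so a downgrade-by-RST
# attack can't push the connection onto an older protocol version.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```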
That means insecure scripts, stylesheets, plug-in contents, inline frames, Web fonts and WebSockets are blocked on secure pages, and a notification is displayed instead.
That seems to me like a complete list and does not include images.
It's odd they have a secure version of nytimes.com at all. They have separate subdomains like myaccount.nytimes.com that are secure, and the links back to the homepage are all explicitly http.
IE took the lead here in favor of security; kudos to them.
It seems like Chrome really forced the ecosystem to move towards auto-updates and sandboxing. Each of those have transition impacts for developers and publishers.
Mixed content though, I've got to imagine that's a hard area for Google to lead on, since its transition challenges primarily affect ad integration.
This follows on the heels of the "disable third party cookies by default" row. I'm wondering if a) Google's business interests will prevent them from being a first mover on security and privacy in browser development, and b) if other browsers will start exploring these issues just to force Chrome to make hard choices.
Chrome has been doing this for a fair while now, and not very well in my opinion either – the option to allow the insecure content is hidden away in a tiny silver shield at the right of the URL bar.
It's fun looking at unintended tack-on effects of decisions like this.
For example, requiring SSL for all assets served on SSL pages is going to make the profits of CloudFlare, and other CDN providers with the same business model, spike. You have to have a paying plan ($20/mo to start) to get SSL CDNing support, which basically means CloudFlare's free plan is now useless to anyone who enforces HSTS.
I would expect most companies serving pages over SSL to already be serving assets over SSL to avoid the mixed content warnings that most browsers currently give when loading non-SSL assets on an SSL page.
A lot of companies make a profit by selling SSL as a pro feature. Just look at all those services where SSL login is only available at a certain cost level. (And using SSL only for the login form is a joke, too.)
It's a shame that they are making money from basic consumer security, especially since SSL is neither expensive nor performance-hungry.
If you care about security enough to use HTTPS, you need to serve assets over SSL anyway. Note that this does not affect all assets – only active content – which means that you can still serve images, audio, video, etc. over HTTP if you're comfortable with the risk of interception, spoofing, etc.
Browsers blocking insecure content has been a challenge for us. Users can add embeds into their pages, unfortunately, no two embeds are alike. There are way too many services that don't offer secure versions of their embeds, and on top of that, several implement secure vs insecure embeds differently.
Yes, the current implementation works similarly to click-to-play plugins. A shield icon shows up in the address bar, as well as a notice that the domain has http:// content (in more user-friendly terms). Examples of the click-to-play UI at https://blog.mozilla.org/security/2012/10/11/click-to-play-p...
It's been a while since I made a Facebook app, but I'm pretty sure that, last time I checked, you could make an app on a non-SSL domain and it would be iframed into the secure page. That will break in FF23, so some apps may not work anymore. Just saying.
If you access the app using https-ed Facebook, i.e. via https://apps.facebook.com/appname, it would use https:// for the iframe as well. So, no problems there (at least no new ones that did not already exist).