If you’ve been following my recent posts, you’ll know that I moved my blog over to AWS - hosted out of an S3 bucket (static HTML) and fronted by the CloudFront CDN. It has made things a lot easier and more secure, especially when coupled with a DevOps-style approach to deployment. One thing I never got around to sorting, though, was the lack of security headers for my site.

An F Rating?!

As you’ll see from the image below, a scan of my site (using securityheaders.com) throws up a less than perfect score…

Security Header Score F

From a practical threat perspective, not all of these are that important or relevant for a static blog page run by little old me. But I like to practice what I preach and have them in place - it sends out a much better message.

If I had control of the web server, this would have been a straightforward thing to do from the outset, but of course, I had to be different and go and host it out of an S3 bucket! I can’t control the web server in this instance, so my options are reduced a little. There isn’t anything I can do on the S3 side of things to remedy this, but I imagine I will have some options within CloudFront (coupled with AWS Lambda) to achieve what I want - something confirmed by this article.

The Basic Setup

The basic idea is that a response served from the S3 bucket to CloudFront will trigger a Lambda function. This function will add the relevant security headers before the response is served back to the end user (and the modified response will also be cached for subsequent requests).

I’m not going to use this post to go into what Lambda is (maybe one for another day) but it is essentially AWS’ “serverless” compute offering that allows you to run code for event-based triggers. My setup will specifically be using “Lambda@Edge”, which takes Lambda functionality and pushes it down to the same AWS edge locations that my CloudFront distributions reside in. Great!

Lambda@Edge currently only supports Node.js 6.10 or 8.10 (I’m a Python man personally!), but for such a simple and clearly documented use case, this won’t be a problem. I’ll park any ideas of using Python for now and just modify the code example as necessary (not a habit you want to get into too often, of course!).

The basic code building block used will be as follows:

'use strict';
exports.handler = (event, context, callback) => {
    
	//Get contents of response
	const response = event.Records[0].cf.response;
	const headers = response.headers;

	//Set new headers 
	<CODE TO SET HEADERS WILL GO HERE>

	//Return modified response
	callback(null, response);
};
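
For context, the “origin response” event handed to this function looks roughly like the sketch below (heavily abridged, with placeholder values). Note that the header names in the headers object are lowercased, which is why the snippets later in this post write to a lowercase key while supplying the properly cased name in the key property:

// Abridged sketch of a Lambda@Edge "origin response" event (placeholder values)
{
    "Records": [{
        "cf": {
            "config": {
                "distributionId": "EDFDVBD6EXAMPLE",
                "eventType": "origin-response"
            },
            "response": {
                "status": "200",
                "statusDescription": "OK",
                "headers": {
                    "content-type": [{ "key": "Content-Type", "value": "text/html" }]
                }
            }
        }
    }]
}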

I’m leaving the rest of the Lambda settings as default for now, but we will need to revisit a few bits in a little while. Let’s step through each of the headers flagged by the scan earlier and have a look at what they do. I will then add the appropriate code and re-test.

Strict-Transport-Security

If you’re running HTTPS on your site (which you hopefully are!) then you want to be using HTTP Strict Transport Security (HSTS) too. HSTS is a way for web servers to tell your browser to always use HTTPS, regardless of what the user has typed (i.e. if they type http:// instead of https://).

HSTS helps guard against downgrade attacks and other hijack methods. Imagine you’re sitting in a coffee shop and browse to “mycoolbank.com” without typing https://. If a malicious attacker spoofed a DNS response (trivial to do), they could point you to a phishing site without HTTPS, or redirect you to a more convincing site with HTTPS (perhaps a typo-squatted domain such as https://myc00lbank.com).

Whichever attack type was used, there would be no big, in-your-face certificate warning presented by the browser - the user typed “http”, not “https”, so the browser simply obliged. HSTS is an easy way to make the browser act as if the user had always typed the https:// prefix, regardless of whether they did or not. This would cause both of the previous attacks to fail, as the initial connection would be forced to https://mycoolbank.com.

The code I am going to add to my Lambda function is as follows:

headers['strict-transport-security'] = [{key: 'Strict-Transport-Security', value: 'max-age=63072000; includeSubdomains; preload'}]; 

This header tells the browser to cache the policy for 2 years (max-age), to apply it to any subdomains (such as staging.mikeguy.co.uk), and to mark the site as eligible for preloading (preloading means having your site added to the hardcoded HSTS lists shipped with browsers - it mitigates the risk of the very first connection being intercepted).
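
As a quick sanity check on that max-age figure:

// 2 years expressed in seconds
const twoYearsInSeconds = 2 * 365 * 24 * 60 * 60; // = 63,072,000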

WORD OF WARNING - before you go implementing HSTS and setting values, please ensure you understand what they do and that you’ve got HTTPS in use everywhere applicable - if you don’t, your traffic stats may drop a little! Scott Helme wrote a good blog on HSTS here.

I’ve saved the Lambda code, and added CloudFront as a trigger with the following settings:

  • Distribution - the ID of my CloudFront distribution
  • Cache Behaviour - left as “*” as I want to apply it to all responses
  • CloudFront Event - set to “origin response” as I want to add headers to the content served up by S3

Click deploy and off we… oh…

Execution Role Error

Remember I said we would need to revisit a few bits? Lambda set up a basic role and policy when we created the function, but it obviously didn’t know what we would be using it for, so we need to amend these. I’ll go back to the main Lambda function and create a new execution role from the “Basic Lambda@Edge permissions (for CloudFront trigger)” policy template.

Lambda Execution Role
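
As I understand it, the key change this template makes is to the role’s trust relationship, so that both the Lambda and Lambda@Edge services are allowed to assume it - something along these lines:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "lambda.amazonaws.com",
          "edgelambda.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}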

Let’s try and deploy it again…

Lambda Edge Deploy Success

Success! Now let’s give it a little test after creating an invalidation to flush CloudFront’s cache…
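
As an aside - I’ll be doing this invalidate-and-retest loop a few more times, so if you’d rather script the invalidation than click through the console each time, a rough Node.js sketch using the AWS SDK looks something like this (the distribution ID below is a placeholder):

// Rough sketch: invalidate all cached paths in a CloudFront distribution (AWS SDK for JavaScript)
const AWS = require('aws-sdk');
const cloudfront = new AWS.CloudFront();

cloudfront.createInvalidation({
    DistributionId: 'E1XXXXXXXXXXXX', // placeholder - your distribution ID goes here
    InvalidationBatch: {
        CallerReference: Date.now().toString(), // must be unique per invalidation request
        Paths: { Quantity: 1, Items: ['/*'] }
    }
}, (err, data) => {
    if (err) console.error(err);
    else console.log('Invalidation created:', data.Invalidation.Id);
});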

Security Header Score D

It’s gone from an F to a D. Progress - I like it. Let’s fix the rest.

X-Frame-Options

The X-Frame-Options header can be used to tell the browser whether or not the page should ever be loaded in an iframe. Why would that be a bad thing? Attackers can load a site in an iframe and use some clever CSS styling to hide it in the background. To the end user it appears as if they are interacting with “website A”, but in fact they are clicking on links in the hidden background site - something known as clickjacking.

Imagine, for instance, you’d been browsing around Amazon (and were still logged in) and they didn’t have this implemented (they do!). You then visit a website offering free stuff (who doesn’t love free stuff?!) and click a link, unknowingly clicking on something in the hidden Amazon page in the background (perhaps placing an order, or something else shady). You get the idea - an attacker can convince you to click on something in a genuine site by layering a deceptive overlay on top. So unless you need your site to be framed somewhere, you should deny this behaviour. If you do need it, be specific about the sources you allow to frame yours.

As I’ve no need for any framing on my site, I am just going to deny it outright. I will update the “$LATEST” version of the Lambda code with the following:

headers['x-frame-options'] = [{key: 'X-Frame-Options', value: 'DENY'}]; 

Again, I will re-deploy the code, invalidate the CloudFront cache and re-run the scan…

Security Header Score D2

Still a “D”, but we are getting there. Two down, four to go!

X-Content-Type-Options

There is only one possible value for this particular option, and that is “nosniff”. It is a directive telling browsers not to try to determine the content type themselves; instead, they should honour what the server has indicated in the “Content-Type” header.

This header can help reduce the likelihood of being compromised by some types of drive-by download, or of being tricked into running carefully crafted file types. I’m not hugely familiar with the ins and outs of this one, or whether content sniffing has any useful real-world scenarios - so if you do know, I’d love to hear from you.

Regardless, I don’t need it! So I’m adding the following code…

headers['x-content-type-options'] = [{key: 'X-Content-Type-Options', value: 'nosniff'}]; 

Edit. Save. Deploy. Test…

Security Header Score C

Halfway there.

Referrer-Policy

Typically, when you click a link on a webpage, a special header known as the “referrer” header is inserted by the browser. This tells the destination site where you originated from, which can be useful for things like analytics and site customisation.

Normally this is a pretty useful thing, but there are times when, for security or privacy reasons, you may not want to share this information with the destination site. Or maybe you just want to restrict the circumstances in which the information is sent - this header can help.

There are a number of different options for this header with a good writeup on them here. For my site I am going to use the “no-referrer-when-downgrade” option. This will prevent the header being sent if the connection gets downgraded to HTTP - I like to encourage TLS where I can.

The code I will add is:

headers['referrer-policy'] = [{key: 'Referrer-Policy', value: 'no-referrer-when-downgrade'}];

Rinse and repeat the testing and…

Security Header Score B

Two to go!

Feature-Policy

Down to the final two. Next one is Feature-Policy. A relatively new header in the grand scheme of things. It allows you to control what browser features your site will use, including any components that have been loaded from third-party sites. For instance, if you’ve no need for microphone, camera or geolocation information - why allow them?

Coupled with a Content-Security-Policy (CSP - we will come to that last) it is a good way of protecting your end-users from potentially compromised third-parties. Imagine you were using some third-party JavaScript and this was maliciously modified to access people’s microphones or cameras. Wouldn’t it be better if you could prevent that from happening in the first place?

There are a lot of options with this directive, and the list seems to be ever-growing. There doesn’t seem to be a way of just declaring a blanket “none of these are needed” (that I’ve spotted), so this will be quite a long line:

headers['feature-policy'] = [{key: 'Feature-Policy', value: "ambient-light-sensor 'none'; autoplay 'none'; accelerometer 'none'; battery 'none'; camera 'none'; display-capture 'none'; document-domain 'none'; encrypted-media 'none'; execution-while-not-rendered 'none'; execution-while-out-of-viewport 'none'; fullscreen 'none'; geolocation 'none'; gyroscope 'none'; magnetometer 'none'; microphone 'none'; midi 'none'; payment 'none'; picture-in-picture 'none'; speaker 'none'; sync-xhr 'none'; usb 'none'; wake-lock 'none'; webauthn 'none'; vr 'none'; xr-spatial-tracking 'none'"}]; 

Security Header Score A

Onto the last piece of the puzzle…

Content-Security-Policy (CSP)

Well kind of the last - I’m sneaking in an extra one after this!

I deliberately left this one till (almost) last as it is a little more complicated, but also massively important. The CSP tells the browser what sources it should load content from for your site, and with what restrictions. Why might this be useful? Modern websites include all sorts of CSS, JavaScript, fonts and images - the list goes on. A lot of the time these are pulled in from various third-party sources, with little or no control over what they are doing. If you just blanket-allow everything, you are at a much higher risk of Cross-Site Scripting (XSS) attacks. By taking a whitelist approach, you can control what should and should not be loaded by the browser.

In addition to this, the prolific use of third-party resources opens you up to further attack. You may have the most secure site in the world, but if you are loading random scripts from third parties whose security isn’t as good as yours, that could be used to compromise your users.

Take the Ticketmaster breach from a few years ago. Ticketmaster themselves were not directly compromised; instead, a third party whose JavaScript ran on their checkout page was. The compromised code silently skimmed card details as users entered them at checkout, while still performing its true function (a support chat tool).

How could CSP have helped with the Ticketmaster breach? Well, for starters, third-party JavaScript probably needs to be kept off your checkout pages! Secondly, CSP provides the option to whitelist based on SHA hashes. So instead of just saying “allow this third party”, you could say “allow this particular SHA hash”. That way, if the code is modified, it will no longer load. This clearly comes with some operational challenges in today’s dynamic world, but it is a great option to have in your back pocket - particularly on very sensitive pages. There is also an option with “nonces” (numbers used once) which alleviates some of these challenges, but we shall park that for now.
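
To make the hash idea a little more concrete, here’s a rough sketch (the script contents and resulting hash are purely illustrative) of generating the SHA-256 value in Node.js and the sort of directive you would end up with:

// Illustrative only: compute a CSP hash for the exact contents of a script
const crypto = require('crypto');

const scriptContents = "console.log('support chat widget');"; // hypothetical script body
const hash = crypto.createHash('sha256').update(scriptContents, 'utf8').digest('base64');

// The policy then allows only that exact code - if it changes, the hash no longer matches:
// Content-Security-Policy: script-src 'sha256-<base64 hash>'
console.log(`script-src 'sha256-${hash}'`);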

So back to my site - just whack in a CSP and be done with it right? No. I’ve used various third-party code in my site and I wouldn’t be able to tell you off the bat what is what. If I went straight into a blocking approach I would likely break my site.

Instead, I am going to use the “report-only” enforcement mode, coupled with a site called report-uri.com to monitor what data sources are being reported as “violations”. These violation reports from end-user browsers will then allow me to tune my CSP before moving it into an enforcement mode. ReportURI is a really handy site and seems to be growing in the number of tools it offers.

So for now my code will simply be as shown below:

headers['content-security-policy-report-only'] = [{key: 'Content-Security-Policy-Report-Only', value: "default-src 'self'; report-uri https://mikeguy.report-uri.com/r/d/csp/reportOnly"}]; 

The code above tells the browser that content should only be loaded directly from my site (the 'self' keyword) and that any violations should be reported to "https://mikeguy.report-uri.com/r/d/csp/reportOnly". As the violations trickle in to ReportURI (without anything breaking), I can learn what sources I actually need, tune the policy and re-deploy.
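
For reference, the violation reports that browsers send to that endpoint look roughly like this (abridged, and the URLs here are just placeholders):

// Abridged example of a CSP violation report body (placeholder values)
{
    "csp-report": {
        "document-uri": "https://mikeguy.co.uk/some-post/",
        "violated-directive": "default-src",
        "effective-directive": "script-src",
        "original-policy": "default-src 'self'; report-uri https://mikeguy.report-uri.com/r/d/csp/reportOnly",
        "blocked-uri": "https://some-third-party.example/widget.js"
    }
}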

For now, let’s deploy the updated Lambda and re-scan the site…

Security Header Score Still A

Hey, what gives? Why is it still showing as red? This is because the policy is still in report-only mode, so it offers no actual protection yet (though visibility is a great first step). If I scroll down a little, I can see that the header is being correctly presented:

CSP Header

After I’ve tuned the policy and put it into enforcement it will show up as green! I’ll update the post once I do.

One More Header…

Ok, I said CSP was “kind of” the last one, but I wanted to touch on one more that wasn’t flagged by securityheaders.com - X-XSS-Protection.

X-XSS-Protection seems to have been superseded in securityheaders.com’s reporting by the CSP, which to be fair makes sense - CSP takes care of a lot of the same problems. That being said, the two don’t do exactly the same thing, and not all browsers support CSP yet (though most up-to-date ones should), whereas X-XSS-Protection (a heuristics-based approach) has been around for a lot longer (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection).

No harm in having both so I am going to add that too…

headers['x-xss-protection'] = [{key: 'X-XSS-Protection', value: '1; mode=block'}]; 
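
Putting it all together, the complete Lambda function now looks something like this (the Feature-Policy value is trimmed here to keep the line readable - use the full list from earlier):

'use strict';
exports.handler = (event, context, callback) => {

    //Get contents of response
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    //Set new headers
    headers['strict-transport-security'] = [{key: 'Strict-Transport-Security', value: 'max-age=63072000; includeSubdomains; preload'}];
    headers['x-frame-options'] = [{key: 'X-Frame-Options', value: 'DENY'}];
    headers['x-content-type-options'] = [{key: 'X-Content-Type-Options', value: 'nosniff'}];
    headers['referrer-policy'] = [{key: 'Referrer-Policy', value: 'no-referrer-when-downgrade'}];
    headers['feature-policy'] = [{key: 'Feature-Policy', value: "camera 'none'; geolocation 'none'; microphone 'none'"}]; // trimmed - use the full list from earlier
    headers['content-security-policy-report-only'] = [{key: 'Content-Security-Policy-Report-Only', value: "default-src 'self'; report-uri https://mikeguy.report-uri.com/r/d/csp/reportOnly"}];
    headers['x-xss-protection'] = [{key: 'X-XSS-Protection', value: '1; mode=block'}];

    //Return modified response
    callback(null, response);
};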

Summary

Hopefully this has given you a flavour of some of the common security-related HTTP headers you may want to consider implementing on your sites. Granted, this was nice and easy to do on a static HTML blog and will definitely be more challenging in a production deployment, but I’d strongly recommend looking at implementing these to help keep your users out of harm’s way as much as you can.

Any questions or if you want more info then feel free to ping me via one of my contact options!