There are lots of cool security products on the market. But security is not achieved solely by deploying products. Security needs to be a layered approach that is intertwined throughout your organisation. It isn’t always easy and it is also about a lot more than technology alone. That being said, the right technical solutions can of course help, but if you’re not getting the basics right you’re doing yourself and your organisation no favours.

The problem

I see all manner of networks in my day job – the good, the bad and the downright ugly! As a project engineer I’m often tasked with installing a new technology into an existing brownfield site. The state of the existing solutions can make a massive difference as to how simple the migration is and how effective the solution will be post deployment. A simple example of this would be a badly configured and/or messy firewall – it makes migrating the configuration and any potential troubleshooting a lot more time consuming and difficult.

If I commence a project and notice such problems I will always flag them and discuss options with the client. Would they like to review and tidy the config before the project begins? Would they like assistance in doing this review work? Unfortunately, with constrained budgets and a lack of resources the answer is often just to migrate on an “as-is” basis and look at reviewing at a later date. Do they look later? Sometimes yes, but not always. As much as I’d love to just start ripping rules out and closing up holes during a project, it is not the time or place to do it. There are already risks in transitioning from one solution to another, and they don’t need to be compounded by significantly adjusting rulesets at the same time.

Prevention is better than cure!

The old adage is true! Continuing with the badly configured firewall example – if that firewall had been more carefully managed in the past, it would have been a lot easier to migrate and would have provided a better overall security solution. Now, I appreciate this isn’t always easy. Small teams, and people being pushed to manage things they don’t fully understand, often lead to the aforementioned results.

So, this article is a plea to all IT and security managers/engineers: don’t forget to make time for the basics! Get them right and you’re going to help yourself in the future and improve your overall security posture. The longer you leave things, the harder they tend to be to fix.

In the rest of this post I want to look at some of what I consider to be the “basics” of good IT security that I frequently see organisations getting wrong. It is by no means a definitive list, but it will hopefully give some managers and engineers some food for thought. It is never too late to start looking at these.

Understand your environment

Sometimes easier said than done – particularly in large or dynamic organisations. But if you don’t understand your systems, how can you secure them? Sometimes when talking with clients and querying what certain things are, I get vague or unsure responses. A lot of this can often be attributed to a lack of cross-team communication – something that is important from a security perspective.

I’m not saying you have to be an expert in all the systems you have, but you should have at least a high-level understanding of what they do and how they are used. You don’t need to know SQL in order to understand that a database server is used by a web server to store related data. Lack of understanding is one of the reasons you typically end up seeing “temporary” rules in firewalls and similar systems – rules that allow far too much access, put in place “temporarily” just to see if things work. The problem is that once things are working, they often end up staying as they are.

If you don’t have an understanding or even visibility of your network and the systems on it, then that is probably a good place to start on your journey of security improvements.
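
If you genuinely have nothing to work from, even a rough sweep of what responds on your network can be an eye-opener. The sketch below is a minimal, hypothetical starting point using only the Python standard library – the subnet and port list are placeholders for your own environment, you should only scan networks you are authorised to scan, and a proper discovery or asset-management tool will give you far richer data:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Hypothetical values -- substitute your own subnet and the ports you care about.
SUBNET = "192.168.1.{}"
PORTS = [22, 80, 443, 445, 3389]

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_host(host: str) -> None:
    open_ports = [p for p in PORTS if port_open(host, p)]
    if open_ports:
        print(f"{host}: listening on {open_ports}")

if __name__ == "__main__":
    hosts = [SUBNET.format(i) for i in range(1, 255)]
    with ThreadPoolExecutor(max_workers=64) as pool:
        list(pool.map(scan_host, hosts))  # force evaluation; pool waits for completion
```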

Understand your systems and tools

In a similar vein to the points above, trying to secure an organisation with tools that you don’t understand is also a very difficult task. A lot of the misconfigurations I see are simply down to a lack of understanding of how something works. No one can be an expert in everything – nor should anyone expect them to be.

So, what is the answer? Time. People need time to play with new products. Time to configure them, break them and then fix them again. This allows people to really understand how things are working under the hood and what results come out of certain changes. If you are skipping this vital learning and testing step in your projects, then you’re much more likely to introduce errors or misconfiguration into the final deployment.

What if time is a luxury you don’t have? Consider finding someone who has already put in the time. This could be a permanent employee who has the skills you need, a contractor, or a company offering professional services. No one is saying you have to outsource something completely, but sometimes getting help in to do the difficult part and get things up and running can be a cost-effective alternative to the trial and error you would otherwise carry out yourself. Let someone else who has done the work before take some of the pain out of the process.

A word of caution on getting help in though. Unless it is a fully outsourced solution, don’t think you can just get someone in and wash your hands of it. Ultimately if the solution is one you will be managing going forward then you still need to understand how it all works. Make sure time for training and handover is factored in.

Periodic review

Modern IT is anything but static. Things change all the time and it is important that you periodically review your systems to ensure they are still configured correctly and still needed. Leaving old systems online, or holes in firewalls that aren’t needed, provides attackers with potential entry points into your network. Old systems that are just left online without patching are a sure-fire way to increase your risk of being breached.

Obsolete firewall rules can also be a problem. Not just from a “keeping things tidy” point of view (which makes managing and monitoring WAY easier) but also for preventing potential future issues. What if you decommission a system but don’t remove and tidy up the rules in a firewall? The IPs may eventually get re-used for a new system with different requirements. Those old rules you’ve left in place could now pose a risk by allowing unintended access to the new system.

The same goes for many other systems. What about users who have left the organisation? Do you have “offboarding” measures in place to disable their accounts? Do you review any exceptions you’ve added to your NAC solution? You get the idea. Setting some time aside periodically to review your systems is a good thing to do. Certain standards, such as PCI DSS, actually require you to do this as well.
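
Even a simple script can take some of the pain out of these reviews. As a purely illustrative sketch (the export file name, column names and 90-day threshold are all hypothetical – substitute whatever your directory or identity system can actually export), something like this can flag enabled accounts that nobody has logged into recently:

```python
import csv
from datetime import datetime, timedelta

# Hypothetical CSV export of user accounts with columns:
#   username,last_login,enabled   (last_login as an ISO date, e.g. 2018-03-01)
EXPORT_FILE = "accounts_export.csv"
STALE_AFTER = timedelta(days=90)   # review window -- pick what suits your policy

def stale_accounts(path: str) -> list:
    """Return enabled accounts that have not logged in within the review window."""
    now = datetime.now()
    flagged = []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["enabled"].strip().lower() != "true":
                continue  # already disabled -- nothing to do
            last_login = datetime.fromisoformat(row["last_login"])
            if now - last_login > STALE_AFTER:
                flagged.append(row["username"])
    return flagged

if __name__ == "__main__":
    for user in stale_accounts(EXPORT_FILE):
        print(f"Review/disable: {user}")
```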

Patching

Probably one of the most important things you can do! This goes for all systems in your environment – PCs, servers, firewalls, load balancers. The lot. Don’t get me wrong – patching can be a pain. Deploying them can be risky, and they can go wrong. But they are hugely important. Software WILL contain vulnerabilities and bugs – you need to keep an eye on these and patch them when necessary.

Now, this isn’t to say that every single minor patch that is ever released has to be installed straight away. You need to take a common-sense approach to patch management and installation. Things that may determine how quickly patches are installed include the following (there is a rough scoring sketch after the list):

  • The severity of bugs or vulnerabilities that the patches are fixing
  • Whether the system is publicly facing, such as a web server
  • How the device is used. Is it a PC/laptop? End users are often the weakest link in the security chain, so endpoint security and patching are hugely important
  • What the impact of applying the patch is – does it cause downtime for multiple users?
  • Plus many more…
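
To make that concrete, here is a rough, hypothetical scoring sketch in Python – the weights are invented purely for illustration and every organisation will rank these factors differently, but writing your own criteria down somewhere beats deciding patch priority ad hoc each time:

```python
from dataclasses import dataclass

@dataclass
class PendingPatch:
    name: str
    cvss: float            # severity of the vulnerability being fixed (0-10)
    public_facing: bool    # is the affected system reachable from the Internet?
    end_user_device: bool  # PCs/laptops -- users are a common initial foothold
    causes_downtime: bool  # does applying it interrupt service for users?

def priority(p: PendingPatch) -> float:
    """Rough, illustrative score: higher means patch sooner."""
    score = p.cvss
    if p.public_facing:
        score += 4.0   # exposed services get probed and attacked first
    if p.end_user_device:
        score += 2.0   # phishing and drive-by downloads target endpoints
    if p.causes_downtime:
        score -= 1.0   # needs a maintenance window, but don't skip it
    return score

patches = [
    PendingPatch("web server TLS fix", cvss=9.8, public_facing=True,
                 end_user_device=False, causes_downtime=False),
    PendingPatch("office suite update", cvss=6.5, public_facing=False,
                 end_user_device=True, causes_downtime=False),
    PendingPatch("internal DB minor fix", cvss=4.3, public_facing=False,
                 end_user_device=False, causes_downtime=True),
]
for p in sorted(patches, key=priority, reverse=True):
    print(f"{priority(p):5.1f}  {p.name}")
```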

Please don’t assume that just because a system is internal it doesn’t need patching. Yes, publicly facing systems are at a much higher level of risk, but if an attacker ever gets into your network they are going to try to move laterally and exploit other weak, unpatched systems. Don’t make their life easier.

Internal segmentation

I added this to my list of “basics” as it is something I think everyone should be doing. But in all honesty, segmentation of a network (particularly a brownfield one) can be incredibly difficult. Get it right though and you can save yourself a whole lot of headaches. Lots of networks that I’ve seen over the years have been pretty flat. Maybe the odd firewall here and there, perhaps protecting data centre resources or at the Internet edge, but overall little internal segmentation. The risk you face here is lateral movement in the event of a breach or malware infection.

Cast your mind back to last year. Remember WannaCry and NotPetya? They had a devastating impact on many organisations worldwide. Why were they so successful at spreading? A combination of unpatched systems and a lack of internal segmentation. Once one system was compromised, they spread like wildfire. Let’s look at just one example from each of these malware outbreaks to get an idea of the impact:

  • The National Health Service in England and Wales was one of the largest organisations to be struck by WannaCry, with an estimated 70,000 devices affected, including computers, MRI scanners and other healthcare equipment. This led to non-critical emergencies being turned away, appointments being cancelled and delays in service across numerous trusts
  • Global shipping giant Maersk was struck badly by NotPetya – it had a massive impact and took down huge swathes of their systems, leaving them pretty much rebuilding their infrastructure from scratch. The cost? Maersk estimated the incident to have cost them 200–300 million USD (though people suspect the actual figure was higher). There is an interesting write-up on this here which is worth a read

The impact of these attacks could have been reduced or prevented entirely. They both relied on the leaked NSA EternalBlue exploit released shortly before the attacks, but by the time the infections took hold, patches for the exploited vulnerability were already available. Similarly, if networks had been better segmented, the spread of the malware through lateral movement would likely have been limited and the affected organisations may have been restoring a handful of machines rather than thousands.
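
One practical habit, once you have decided what your segments should and shouldn’t be able to reach, is to actually test those assumptions from time to time. The sketch below is a hypothetical example (the addresses and ports are placeholders): run from a machine in one segment, it checks that traffic you expect the firewall to block really is blocked – ports like SMB (445), which WannaCry and NotPetya used to spread, are good candidates.

```python
import socket

# Hypothetical policy assertions: from the machine running this script, these
# destination/port pairs SHOULD be unreachable if segmentation is working as intended.
SHOULD_BE_BLOCKED = [
    ("10.0.20.15", 445),   # SMB into the server VLAN from a user subnet
    ("10.0.30.5", 3389),   # RDP into the management VLAN
]

def reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in SHOULD_BE_BLOCKED:
    if reachable(host, port):
        print(f"{host}:{port} is reachable - possible segmentation gap")
    else:
        print(f"{host}:{port} blocked as expected")
```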

Credential management

As humans we are pretty crap at passwords generally speaking. And it is no wonder! How many accounts do you have online now? Checking my password manager I have just under 90 credentials stored at the moment. No one can be expected to remember that many passwords. What do people tend to do? Re-use passwords and/or store them insecurely.

Whilst re-using passwords may be great for memory recall, it is no good for security. If a site is breached and hasn’t been storing your credentials securely, then an attacker will have your email and password. They can then try plugging these into other sites such as Gmail, Facebook and Twitter – something known as credential stuffing. If you’ve re-used your password across multiple sites, then you’ve now gone from one breached account to multiple.

The same goes in corporate environments. If you are re-using credentials or storing them insecurely, then you are opening yourself up to potential wide-scale breaches in the future. Some form of credential management solution is needed. For personal use this could be something like a simple password manager such as 1Password, Dashlane or LastPass. For enterprise environments these solutions don’t really scale well, and you’ll likely be looking at a more fit-for-purpose enterprise credential/identity management solution that also provides additional features such as full access auditing.
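
On the checking side, most password managers can already tell you whether a stored password has appeared in a known breach. For illustration, here is a small Python sketch using the public Pwned Passwords range API from Have I Been Pwned (a real, free endpoint at the time of writing) – it uses k-anonymity, so only the first five characters of the password’s SHA-1 hash ever leave your machine:

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords corpus.

    Only the first five hex characters of the SHA-1 hash are sent to the API;
    the matching suffix is looked up locally in the returned candidate list.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count.strip())
    return 0

if __name__ == "__main__":
    hits = pwned_count("P@ssw0rd")
    print(f"Seen in {hits} breaches" if hits else "Not found in known breaches")
```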

2 Factor authentication

Following on with the “humans are crap at passwords” theme, I want to talk about 2 Factor Authentication. You can provide your employees with the tools they need to securely manage credentials, but that doesn’t mean they will use strong passwords. Forcing complex passwords and periodic rotation is now generally accepted as bad advice that actually leads to weaker passwords. If you are relying on passwords alone (particularly for external facing services), it is only a matter of time before your users’ credentials are compromised. One option for mitigation is 2 Factor Authentication.

What is 2 Factor Authentication? In summary, it is using two of the three factors shown below when implementing authentication:

  • Something you know – e.g. a password
  • Something you have – e.g. a One-Time Password (OTP) generated by some form of hardware or software token
  • Something you are – e.g. biometrics such as fingerprint scanners

The most commonly selected options tend to be a password and some form of one-time password. Nowadays there are multiple methods for obtaining that OTP, such as hardware-based tokens, mobile applications and SMS (though there is some debate over whether SMS can count as 2FA – I’m not getting into that right now!). Using these in addition to passwords means that even if a user’s username/password is compromised, attackers must still have the relevant second factor of authentication – which should be a lot more difficult for them to obtain.
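
For a feel of what is happening under the hood, a time-based OTP (TOTP, as used by most authenticator apps and defined in RFC 6238) is just an HMAC of a shared secret and the current 30-second time step. Here is a minimal sketch using only the Python standard library – the example secret is a common documentation placeholder, not something to reuse:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238) from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second steps since the epoch
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter (RFC 4226)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The server and the user's authenticator app derive the same code from a shared
# secret, so a stolen password alone is no longer enough to log in.
print(totp("JBSWY3DPEHPK3PXP"))
```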

Summary

As I’ve mentioned, security is not about any one solution. If you think that putting the latest flashy solution in your network makes you secure, you need to think again.

Don’t forget the basics. If you do, you could be undermining all these fancy systems you are putting in place.