Security is hard. Anyone who thinks it isn’t probably isn’t implementing it particularly effectively. But with a market saturated with different solutions and buzzwords, where do you begin?

Standards, frameworks and controls

Fortunately, to make life a little easier, there are a number of published and well-recognised frameworks, standards and security controls you can reference. These can help you work out where your gaps are and where to focus your attention. Some of the more common ones you are likely to have heard of include ISO 27001 and the NIST Cyber Security Framework. On top of these you also have industry and regulatory standards such as PCI-DSS and HIPAA, which have specific requirements for certain types of organisation and the way they handle data.

In this article, I want to take a brief look over the Center for Internet Security (CIS) Security Controls (formerly published by SANS). In the words of Wikipedia…

The security controls give no-nonsense, actionable recommendations for cyber security, written in language that’s easily understood by IT personnel.

And that is exactly why I like these controls! They are clear, concise and provide proven recommendations you can act on within your organisation to improve security. The CIS controls are not intended to be a replacement for other frameworks or standards; rather, their recommendations map directly onto aspects of most of them.

Basic, foundational and organisational

At a high level, the twenty security controls are grouped into three main categories.

  • Basic – Key controls that every organisation should implement. As the name suggests, controls within this category are considered the basics of a good security posture
  • Foundational – Controls in this category go a step further and build on the basic ones. Perhaps not all are applicable or relevant to every organisation, but they are definitely good to implement where possible
  • Organisational – Controls in this category are less geared around specific technical solutions and more around the people and processes that are also vital to an effective security posture

In the next few articles we will take a brief look at each of these categories and what they entail at a high level. Please keep in mind that security is not one-size-fits-all! Whilst the techy in me would love to see all organisations implementing these in full, this is not a realistic prospect. Security should be an enabler for organisations, and different organisations have differing requirements and priorities. Security should not be implemented just for the sake of it.

If you start reading through the controls below and think “wow, I don’t do any of these”, don’t panic! It’s never too late to start improving things. Sit back, take stock of where you are and work out what to prioritise based on your organisation and the risks relevant to it.

Basic controls

In this article we are going to start by looking over the basic controls and how they can help organisations enhance their security effectiveness.

1) Inventory and Control of Hardware Assets

This control is all about maintaining an accurate inventory of your organisation’s hardware and ensuring that only authorised devices gain access to the network. Though it references “hardware”, this also applies to keeping track of virtual machines – think of this section more as covering “anything with a network card”. The inventory could be built through active or passive discovery tools, logging and other such methods. The control also discusses the importance of accompanying processes to ensure you correctly onboard and offboard assets.

Without an accurate inventory of the devices in your organisation, it is virtually impossible to ensure you are providing adequate security across the board. What if someone brings a new server online that you are not aware of? If you don’t know it exists, how can you be sure that the relevant software controls are applied to it? This is obviously critically important for Internet-facing systems, but internal systems are equally important. In the event of a breach, vulnerable internal systems can provide pivot points and additional footholds for attackers.
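To make the idea concrete, here is a minimal sketch of reconciling devices seen on the network against an authorised inventory. The MAC addresses and hostnames are entirely made up for illustration; a real deployment would feed this from your discovery tooling.

```python
# Compare devices observed on the network against an authorised inventory.
# All MAC addresses and hostnames here are illustrative, not real data.

AUTHORISED = {
    "00:11:22:33:44:55": "file-server-01",
    "66:77:88:99:aa:bb": "laptop-jsmith",
}

def find_unknown_devices(discovered_macs):
    """Return MACs seen on the network but missing from the inventory."""
    return sorted(mac for mac in discovered_macs if mac not in AUTHORISED)

# Anything not in AUTHORISED is flagged for investigation.
unknown = find_unknown_devices(["00:11:22:33:44:55", "de:ad:be:ef:00:01"])
```

The value is not the few lines of code but the process around it: every flagged device either gets onboarded into the inventory or removed from the network.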

In terms of controlling access to the network, port-level access controls such as 802.1X are recommended to ensure that only authorised devices gain access to resources. This is something I’ve discussed in a previous blog post – The Importance of Network Access Control. Speaking from experience, lots of organisations implement some form of inventory but often lack any kind of wired network access control. It is often when these organisations implement network access control that they find a lot of unknown devices on the network! Keep in mind that these days many of your assets are likely to be mobile and off the network for long periods, which presents challenges in its own right.

2) Inventory and Control of Software Assets

In a similar vein to the above, you need to keep track of the software installed on your devices. If you don’t know what software is on your network, how do you know what to protect? This inventory will likely be tightly integrated with the hardware inventory referenced above. Unpatched software makes endpoints much more susceptible to being breached and makes an attacker’s life easier if they manage to get into the network. As mentioned before, vulnerabilities on internal systems allow attackers to move laterally through the network in the event of a breach.

Correctly implemented, this control will not only allow you to keep track of what needs updating but will also allow you to see and respond to unauthorised software on endpoints. Whitelisting is a suggested approach, whereby only approved applications can be installed or run. This approach brings with it the added benefit of not needing to manually track down and uninstall unauthorised software.
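The whitelisting idea can be sketched as a simple set comparison between what is installed and what is approved. The application names below are illustrative placeholders, not a recommended list.

```python
# Minimal allowlisting check: split installed software into approved
# and unauthorised. Application names are illustrative only.

APPROVED = {"firefox", "libreoffice", "7zip"}

def audit_installed(installed):
    """Return (approved, unauthorised) sets for the installed software."""
    installed = set(installed)
    return installed & APPROVED, installed - APPROVED

ok, flagged = audit_installed(["firefox", "cryptominer", "7zip"])
```

Real allowlisting enforcement (e.g. via OS application-control features) blocks execution outright; a report like this is the auditing half of the picture.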

Also consider that this control applies not only to endpoints but also to the various networking and server infrastructure you will have. I’ve seen plenty of networks running end-of-life and end-of-support appliances for critical functions. If a vulnerability were discovered in one of these, you would likely have no effective way to address it.

3) Continuous Vulnerability Management

Security teams need to keep on top of vulnerabilities to remain secure. But this is a difficult task when there is a constant stream of patches, security advisories and threat bulletins publicly released every day. These announcements are used as much by malicious adversaries as by those trying to protect organisations. When you consider how many different systems you have in your network as well, staying on top of these can feel overwhelming.

This control recommends the use of automated vulnerability scanning and automated patch-management systems to help simplify the process. However, it is also important to prioritise your responses so you focus your attention where it is needed, based on the risk posed. If you were to run a full scan of your network right now, I guarantee you would get a huge number of results back. Even if you are up to date with patching, you are still likely to see hundreds or thousands of alerts. It is important that your solution or internal processes provide a way of focusing remediation on the highest-risk findings.
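One simple way to think about that prioritisation is ranking findings by severity weighted by exposure. This is a toy model with made-up hosts, scores and weights, not a formal risk methodology; real tools typically factor in exploit availability, asset criticality and more.

```python
# Rank scan findings by a naive risk score: CVSS base score weighted
# by how exposed the host is. Data and weights are illustrative.

EXPOSURE_WEIGHT = {"internet": 2.0, "internal": 1.0}

def prioritise(findings):
    """findings: list of (host, cvss, exposure). Highest risk first."""
    return sorted(findings,
                  key=lambda f: f[1] * EXPOSURE_WEIGHT[f[2]],
                  reverse=True)

findings = [("intranet-app", 9.8, "internal"),
            ("web-frontend", 7.5, "internet"),
            ("print-server", 5.0, "internal")]
ranked = prioritise(findings)
```

Note how the Internet-facing host with a lower raw CVSS score can still outrank an internal one – exactly the kind of context a flat scan report doesn’t give you.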

Vulnerability management systems often look at more than just software vulnerabilities and can typically highlight certain types of misconfiguration as well. Often these run as “authenticated” scans, whereby the solution has permission to log in to devices and evaluate them in more depth.

Again, from experience, this is something I often see organisations lacking. Admins may try to keep up to date with patching and monitor emails for vulnerability disclosures, but relying solely on these methods can often result in things being missed.

4) Controlled Use of Administrative Privileges

The concept of “least privilege” is widely known and accepted in IT security, but unfortunately not as often put into practice. Misuse of administrative credentials (both local and domain-based) is a common way for attackers to cause more damage or move laterally in the network in the event of a breach. Reducing the use of administrative credentials and improving security around them are effective ways to help mitigate the problem. Consider the following scenarios where their use can cause problems:

  • End users running with administrative privileges – If they are fooled into opening malicious attachments etc. (which is pretty common) then the attacker has the ability to take over the entire machine, potentially installing keyloggers, screen grabbers or remote access shells
  • IT admins running with administrative privileges – The same goes for IT admins, except these accounts risk having even further access, such as domain admin privileges. Privileged accounts such as domain admin should never be used on a day-to-day basis; instead, separate accounts should be used only when required. Credential hashes can remain in a device’s memory even when the user is no longer connected – for example after disconnecting RDP sessions – and can be abused in “pass the hash” attacks in the event of a breach
  • Password re-use – If passwords are re-used across less critical accounts or across numerous devices, you open yourself up to a much higher risk of an attacker gaining a further foothold. Suppose one machine is breached and you use the same local admin user/password across all machines. If an attacker cracks that one password and you’re not restricting remote connections from that login, they potentially have access to every other machine in your environment

This control makes a number of recommendations, including maintaining an inventory of admin accounts, ensuring admins use dedicated accounts for privileged functions, using unique passwords and implementing multi-factor authentication wherever possible. Equally, it is important that the use of admin accounts is audited and analysed, which ties in with one of the controls below. Careful use of administrative-level credentials can really help to minimise the impact of any breach.
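The password re-use scenario above lends itself to a simple audit: if two machines report the same local admin password hash, the password is shared. A sketch, assuming you already have a (carefully protected) mapping of host to hash – the hostnames and hash values here are invented.

```python
# Flag machines that share the same local admin password hash, a red
# flag for the password re-use scenario. Hosts/hashes are illustrative.
from collections import defaultdict

def shared_hashes(host_hashes):
    """host_hashes: {host: hash}. Return hashes used on >1 host."""
    by_hash = defaultdict(list)
    for host, digest in host_hashes.items():
        by_hash[digest].append(host)
    return {d: sorted(hosts) for d, hosts in by_hash.items()
            if len(hosts) > 1}

reused = shared_hashes({"pc-01": "aaa111", "pc-02": "aaa111",
                        "srv-01": "bbb222"})
```

Tools such as Microsoft LAPS solve the underlying problem by randomising local admin passwords per machine, which is generally preferable to auditing after the fact.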

5) Secure Configuration for Hardware and Software

Most systems are not secure out of the box. Misconfigurations are a common cause of breaches and something I see in environments over and over again. It is really important to ensure your systems have been securely configured and any default settings have been changed. Hardening systems so that unneeded services are disabled also helps reduce your attack surface. This applies across all devices in the network – PCs, laptops, servers, firewalls, load balancers – the list goes on!

The control recommends documenting and maintaining secure configuration baselines for your systems. Remember – these are not static. New software or configuration changes can often introduce unexpected and unwanted behaviours, and where changes have to be made as a result, there should be a process to update the documentation too. When defining secure baselines, there is also no point in trying to re-invent the wheel! Many industry-accepted benchmarks already exist to help with the secure configuration of systems, such as the CIS benchmark program or the NIST National Checklist Program.
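The monitoring side of this can be sketched as a drift check: compare a system’s current settings against the documented baseline. The setting names and values below are invented examples, not taken from any real benchmark.

```python
# Compare a system's running settings against a documented baseline.
# Setting names and values are illustrative examples only.

BASELINE = {"ssh_root_login": "no", "telnet_enabled": "no", "ntp": "on"}

def config_drift(running):
    """Return {setting: (expected, actual)} for anything off-baseline."""
    return {k: (v, running.get(k))
            for k, v in BASELINE.items()
            if running.get(k) != v}

drift = config_drift({"ssh_root_login": "yes",
                      "telnet_enabled": "no", "ntp": "on"})
```

In practice this is the job of configuration-management and compliance tooling, but the principle is the same: the baseline is the source of truth, and deviations are surfaced rather than silently accumulating.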

Maintenance and secure storage of master images is recommended to prevent tampering, and the use of automated tools can help analyse system configuration and monitor for any unauthorised changes.

6) Maintenance, Monitoring and Analysis of Audit Logs

Last, but by absolutely no means least – logging, and analysis of said logs! This is definitely up there in terms of things I don’t see organisations doing. Virtually all systems you deploy will have the ability to log and audit the activities going on. Monitoring these logs can help indicate potential issues or signs of attack or breach. They can also be extremely useful for troubleshooting and post-incident analysis.

Obviously, manual review of all log files would be a very difficult task. When you consider how many devices there are in a network, having one person or even a team manually reviewing these logs would be impractical. Instead, the control recommends the use of Security Information and Event Management (SIEM) and other analytics tools (such as user and/or device behaviour analytics). Such products can ingest logs from numerous sources and automatically alert on potential issues. In addition, they act as a secure central repository for your logging information, so if you need to carry out any forensic-type investigation you have one place to go to search for the information and build a timeline of events.

That being said, a SIEM should not be thought of as a system you can “install and forget”. It should still be regularly reviewed by security professionals who can apply additional intelligence and context, and alerts should be tuned as necessary to prevent “alert fatigue”. Systems that aren’t correctly tuned and generate lots of noise end up being ignored or disabled – both of which can lead to important events being missed. The control also highlights the importance of accurate timestamps and the use of NTP to ensure devices are all logging with accurate time.
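To illustrate both the automation and the tuning point, here is a toy detection: alert when a source exceeds a threshold of failed logins. The events and threshold are invented; in a real SIEM this would be a correlation rule, and the threshold is exactly the kind of knob you tune to keep noise down.

```python
# Toy SIEM-style detection: alert on sources exceeding a tunable
# threshold of failed logins. Event data is illustrative only.
from collections import Counter

def failed_login_alerts(events, threshold=5):
    """events: iterable of (source_ip, outcome). Return noisy sources."""
    fails = Counter(ip for ip, outcome in events if outcome == "fail")
    return sorted(ip for ip, count in fails.items() if count >= threshold)

events = [("10.0.0.5", "fail")] * 6 + [("10.0.0.9", "ok"),
                                       ("10.0.0.9", "fail")]
alerts = failed_login_alerts(events)
```

Set the threshold too low and every mistyped password pages someone; too high and a slow brute-force slips through – which is why rules need ongoing review rather than install-and-forget.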


Hopefully this article has given you some food for thought and highlighted some controls that you were perhaps previously unaware of. If you want to look into them further, please pay a visit to the CIS site, where the controls document can be downloaded free of charge – CIS Controls

In the next article we will have a look at the foundational level controls and what they entail.