Metrics and statistics, whilst subtly different, are often seen as the accountant's yardstick and the pragmatist's whipping stick. The use of metrics in IT has had a long and perhaps uneasy journey. Technicians want to design, implement and fix. Managers and budget owners need to show value, deliver service and ultimately keep the customer, production line or CFO happy. An efficient and sustainable business position is a meeting place between the two, where tangible (and intangible) metrics (not statistics) are important to both parties.
Why Use Metrics?
But what is the driver for security? Well, the main ones are probably compliance requirements, brand damage (especially if customer records are lost) and the clean-up costs of breaches. So the CEO wants their company to be secure. The infosec guys want the company to be secure, so what's the problem?
There are two main ones. Firstly, the non-infosec community within IT will often not have security as their default modus operandi. That's not to say they are security-averse, just not pro-security by default. This can hamper design, policy and implementation. Secondly, how do the ideas and strategies from the CxO level filter down to the infosec implementers? One side is talking budgets and ROI; the other is talking standards, compliance, APTs, firewalls and DLP.
The use of some sort of metric-driven analysis can not only aid implementation, but also help non-technical members of the business understand the reason, rationale and benefit that a secure infrastructure can provide. As a metric is a snapshot in time, it can also provide a useful benchmark for gauging the performance and success of a particular project, policy or component. This can not only help individuals, but also aid with budget realignment and project funding.
What to Measure?
The key to defining what to measure is being able to define a framework that can show progress and performance across all components of the infosec life cycle, whilst being of benefit to the board, IT and infosec teams. To break this down further, it's important to understand what infosec posture the organisation is taking: what security policies have been created, and how are they being implemented? What systems, devices and data are being monitored, controlled or impacted by these policies? In addition, it's important to understand the type and structure of the metrics being used.
Metrics don't always have to be numeric and tangible in structure. They can also be more subjective and intangible, covering things like brand awareness, confidence levels and so on. For example, what is the damage to a large online retailer if it lost 100k customer credit card details? The impact on brand and future custom could be quite difficult to measure tangibly, but that's not to say it can't be measured in some way.
The most obvious low-level areas to cover would be things like antivirus coverage: a basic percentage showing how many devices are protected by AV software, and the percentage with virus definitions older than three days, for example. Others could include the average patch latency. This could be measured for particular servers, desktops or devices, showing the lag between a vendor releasing a security update and that update being rolled out. Other more subtle measures could cover things like the number of password resets a help desk receives, which could indicate whether a password policy is too complex for users to remember their own passwords. A password strength checking metric could also be used to see how successful a password education policy has been.
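As a rough illustration, the patch latency metric could be calculated from a simple list of patch records. The record format, host names and dates below are hypothetical placeholders, not a prescribed schema:

```python
from datetime import date

# Hypothetical patch records: vendor release date vs. the date the
# patch was actually deployed on each server.
patches = [
    {"host": "srv01", "released": date(2024, 3, 1), "deployed": date(2024, 3, 8)},
    {"host": "srv02", "released": date(2024, 3, 1), "deployed": date(2024, 3, 15)},
    {"host": "srv03", "released": date(2024, 4, 2), "deployed": date(2024, 4, 5)},
]

def patch_latency_days(patches):
    """Average lag in days between vendor release and rollout."""
    lags = [(p["deployed"] - p["released"]).days for p in patches]
    return sum(lags) / len(lags)

avg = patch_latency_days(patches)  # lags of 7, 14 and 3 days average to 8.0
```

The same shape of calculation works per server group or per patch severity, which is usually where the policy conversations start.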
The catalogue of metrics should include both technical and non-technical aspects. The underlying aim would be to show the general performance of the security infrastructure of the organisation. Security isn't just about firewalls and access control lists. It is about education, personnel and physical attributes too.
How to Measure?
The initial measurement should be recorded periodically and then used against other business and project data to show efficiency, or at least an attempt at a return on investment. For example, on the basic antivirus approach mentioned earlier, the following could be a good starting point. First, perform an asset inventory of devices that could carry, or become the victim of, a virus or malware attack. Information such as the device/service owner, the business impact if unavailable and perhaps previous downtime statistics would be useful too. Next, apply the coverage metric: identify which devices have some sort of antivirus protection installed. Now isn't the time to question the whys, why-nots and versions; just make a note. Next could be another, more detailed metric analysing whether the antivirus definitions are within a certain threshold. That threshold value should really come from the underlying security posture and policy surrounding antivirus protection. The metrics will ultimately help to shape that policy in the long term.
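A minimal sketch of the coverage and definition-age steps might look like the following. The inventory format and the three-day staleness threshold are assumptions for illustration; in practice the threshold would come from the antivirus policy described above:

```python
from datetime import date

# Hypothetical inventory: each device records whether AV is installed
# and the date of its last definition update (None if no AV at all).
devices = [
    {"name": "mail01", "av_installed": True,  "defs_updated": date(2024, 5, 1)},
    {"name": "web01",  "av_installed": True,  "defs_updated": date(2024, 4, 20)},
    {"name": "db01",   "av_installed": False, "defs_updated": None},
]

def av_metrics(devices, today, stale_days=3):
    """Return (coverage %, % of protected devices with stale definitions)."""
    protected = [d for d in devices if d["av_installed"]]
    coverage = 100.0 * len(protected) / len(devices)
    stale = [d for d in protected
             if (today - d["defs_updated"]).days > stale_days]
    stale_pct = 100.0 * len(stale) / len(protected) if protected else 0.0
    return coverage, stale_pct

coverage, stale_pct = av_metrics(devices, today=date(2024, 5, 2))
```

Recording these two percentages on a periodic schedule gives the trend line that the later reporting stage depends on.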
There's quite a lot of information already in that small metric. Gathering it will undoubtedly require some sort of automation and will probably require assistance from system and network administrators. This can often be a sensitive issue. The task of scripting or dragging off the version and coverage data may require a bit of non-BAU work to be carried out by a team which may not initially see the benefit of getting the data. A discussion around the benefits to the general IT team of being able to measure this type of data is imperative here. Focus on showing that, ultimately, it will draw positive attention to what was perhaps a mundane 'behind the scenes' job and assist with funding, upgrades, overtime and so on, even if in the short term the results may not seem positive.
Reporting the Results
Here, real impact data should be used. Monetary data is often useful, but isn't always easy to obtain. For example, if a mail device is not protected, or is only partially protected with out-of-date definitions, the likelihood of an outbreak increases. The cost to recover from an outbreak could be $100k, split across consultancy, out-of-hours overtime and a percentage for unhappy customers who received spam from the malware that was 'released'. It's the impact that budget or service owners are interested in, and that must always be the underlying theme of how the results are reported: the impact on budget and/or customer happiness, and the delivery of the key components that affect those two factors.
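One common way to frame this kind of outbreak cost is an annualised loss expectancy (ALE) calculation: the single-loss cost multiplied by an expected annual frequency. The sketch below uses the $100k figure from the example; the cost breakdown and outbreak rates are hypothetical assumptions, not real data:

```python
# Hypothetical single-loss cost breakdown for a mail-device outbreak,
# summing to the $100k example figure.
single_loss = {
    "consultancy": 55_000,
    "out_of_hours_overtime": 25_000,
    "customer_goodwill_losses": 20_000,
}

def ale(single_loss_costs, annual_rate):
    """Annualised loss expectancy = single-loss cost x expected yearly frequency."""
    return sum(single_loss_costs.values()) * annual_rate

# Assumed rates for illustration: stale definitions raise the outbreak likelihood.
protected_ale = ale(single_loss, annual_rate=0.1)  # well-maintained AV
stale_ale = ale(single_loss, annual_rate=0.5)      # out-of-date definitions
```

Presenting the gap between the two figures, rather than the raw percentages, is what tends to resonate with budget and service owners.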
Ideally the business should have enough information from reading the report that they themselves can make an informed decision as to whether a particular security posture is being upheld or not.
The reporting process should be periodic, as opposed to an annual audit-style approach. This will give a more regular, ingrained approach to security. Ultimately, a metric-driven approach is only a means to an end. The end is to help ingrain security as part of the overall business and technical aspects of the organisation, where appropriate. This proactive stance will ultimately be more cost- and effort-efficient if a secure posture is required.
A metric-driven approach will help to refine budgets and identify weaknesses, of course, but it should also help show that information security is a proactive and contributory discipline with benefits to the entire business life cycle, as opposed to being a component of reactionary IT, used only when something bad has happened.