Infosec Europe 2012 Review

The end of another week, and the end of Infosec Europe at Earls Court.  Europe's biggest free three-day event seemed as popular as ever, with an estimated 10k visitors over the three days (most seemingly at once at Wednesday lunchtime...).

Whilst there seemed to be a vaguely superhero theme (I certainly saw Robocop, a troupe of Wonder Women and perhaps a Purple Meenie as marketing gimmicks...) there was a selection of great talks, technical demos and water-cooler chat surrounding the main security issues of the day.

The keynotes were split across a range of topics, from general compliance and CISO management through to BYOD and mobile devices.  With the latter, many organisations know there is a potential threat with mobile and personal devices, but many are struggling to find the correct balance between policy, controls and manageability.  Thursday's keynotes were another chance to go over the newer concept of Advanced Evasion Techniques.  Whilst there is a definitive case for separating the APT payload from the AET delivery method as two separate threats that require separate management, many believe it to be a mere marketing campaign and vendor hype.

The Technical Theatre areas were once again pretty much full.  These informal, open-air-style podiums are a great way for sponsors to catch visitors passing by and spark an interest in a topic they wouldn't initially have attended a break-out session for.  It was good to see Insider Threat and Social Engineering featuring in a couple of these sessions, as I think those topics are often overlooked in favour of buzzier areas like cyber and encryption.

The Business Strategy Theatre had pretty much constant queues, which is testament to the quality of the speakers and their content.  Some of the big players were presenting, with the likes of Barclaycard on payment security and Deloitte and Cisco each taking a different angle on cloud.

The Technology Showcase Theatre again used the 'open air' style to promote vendor products ranging from SIEM and perimeter to virtualisation technologies.  Whilst the vendors only get 25 minutes or so, many take a product-management-style approach, showing the product in a perfect light against a backdrop of recent news articles that prove the business case.  The great benefit, though, is simply being able to breeze past and get a glimpse of something new and off the radar.

This year in particular I noticed more academic and educational institutions than in previous years.  I don't have any stats to back this claim up, but the universities of Oxford, Glamorgan and Belfast were present, as well as the omnipresent general bodies like ISC2, ISACA and IISP promoting awareness, best practice and training.

Whilst the weather was more like October and the beer (if you had to pay once the freebies ran low) was ridiculously pricey, the week was another great success and one of the best free events in the IT industry, let alone within security.  In the current economic climate it was good to see so many vendors with new product versions, marketing gimmicks and positivity for the year ahead, even if there were some notable exceptions in the form of Oracle and CA.

Whilst many virtual conferences are appearing throughout the year and give a good top-up on concepts and new products, it's good to see the physical IT conference is still going strong, and Europe can just about keep up with the US events for style and substance.  Except for the weather, of course.

(Simon Moffatt)

Big Security Data to Big Security Intelligence

The concept of 'Big Data' is not new and was generally used to describe the vast amounts of information resulting from processing in areas such as astronomy, weather forecasting and other meteorological calculations.  The resulting processes produced petabytes of data, either in the form of calculated results or via the collection of raw observations.  Approaches to storing, managing and querying the data vary, but many utilise concepts such as distributed and grid computing with vast amounts of physical storage, often directly attached (DAS) and based on solid state or high-speed SATA devices, in order to allow rapid query execution.  Massively Parallel Processing was then applied in front of the data in order to execute rapid relational queries.

In recent years, networking, consumer and social media applications have started to produce vast amounts of log, user and alerting information that needs to be stored and analysed.  For example, Walmart handles more than 1 million customer transactions every hour.  That equates to around 2.5 petabytes of data.  Those transactions not only need to be processed accurately, but they will also need storing for accounting, management reporting and compliance mandates.  Social networking is another area requiring the storage of huge amounts of user data, such as news feeds, photo objects, pointers, tags and user relations such as followings, friendships and associations.

The main issues with storing such vast amounts of data generally revolve around being able to index, query and analyse the underlying data.  End users expect search results in near real time.  Analytics are expected to be contextual, with detailed and flexible trend, history and projection capabilities that can be easily and simply expanded and developed.

Another producer of this big data is security appliances, devices and software.  Intrusion Prevention Systems, firewalls and networking equipment will produce huge amounts of verbose log data that needs to be interpreted and acted upon.  Security Information and Event Management (SIEM) solutions have, over the last 10 years, developed to a level of maturity where log and alerting data is quickly centralised, correlated, normalised and indexed, providing a solid platform where queries can be quickly interpreted and results delivered with context and insight.
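As a minimal sketch of the normalisation step a SIEM performs, heterogeneous log lines can be parsed into a common schema so that correlation across sources becomes trivial.  The log formats, field names and IP addresses below are entirely invented for illustration and don't come from any particular product:

```python
import re

# Two raw log lines from different (hypothetical) sources: a firewall
# and an IDS, each with its own format.
raw_logs = [
    "FW1 2012-05-01T10:02:11 DENY src=10.0.0.5 dst=192.168.1.9",
    "ids[4431]: 2012-05-01T10:02:12 ALERT sig=portscan host=10.0.0.5",
]

def normalise(line):
    """Reduce a raw log line to a common {source, ts, action, attacker} schema."""
    if line.startswith("FW1"):
        m = re.match(r"FW1 (\S+) (\S+) src=(\S+)", line)
        return {"source": "firewall", "ts": m.group(1),
                "action": m.group(2), "attacker": m.group(3)}
    m = re.match(r"ids\[\d+\]: (\S+) (\S+) sig=\S+ host=(\S+)", line)
    return {"source": "ids", "ts": m.group(1),
            "action": m.group(2), "attacker": m.group(3)}

events = [normalise(line) for line in raw_logs]

# Once fields share a schema, correlating across devices is a one-liner:
attackers = {e["attacker"] for e in events}
print(attackers)  # {'10.0.0.5'}
```

The same host appearing in both a firewall deny and an IDS alert is exactly the kind of cross-source correlation that is awkward against raw text but trivial against normalised records.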

But as security data continues to increase, simply being able to execute a query with a rapid response is not enough.  The first assumption is that the query to be run is actually a known query - that is, a signature-based approach.  A set of criteria is known (perhaps a control or threat scenario), which is simply wrapped within a policy engine and compared against the underlying data.
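A signature-based check of this kind can be sketched as a small policy engine: each policy is a known query, expressed as a predicate over event fields.  The policy names, field names and thresholds here are purely illustrative assumptions, not taken from any real product:

```python
# Each 'signature' is a known query: a named predicate over event fields.
# Policies and field names are invented for illustration.
policies = [
    {"name": "failed-login-burst",
     "match": lambda e: e["type"] == "login_failure" and e["count"] >= 5},
    {"name": "off-hours-admin",
     "match": lambda e: e["role"] == "admin" and e["hour"] not in range(8, 18)},
]

def evaluate(event):
    """Return the names of all known policies the event triggers."""
    return [p["name"] for p in policies if p["match"](event)]

event = {"type": "login_failure", "count": 7, "role": "user", "hour": 14}
print(evaluate(event))  # ['failed-login-burst']
```

The limitation is exactly the one described above: the engine can only ever surface what a policy author already thought to encode.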

As security data starts to develop further and include identity, business and threat intelligence data, a known query may not exist.  The concept of the 'unknown unknowns' makes it difficult to traverse vast amounts of data without knowing what trends, threats, exceptions or incidents really need attention or more detailed analysis.  The classic needle-in-a-haystack scenario, but this time the needle is of an unknown colour, size and style.

A simple example is analysing which entitlements a user should or should not have.  If an organisation has 100,000 employees, each with twelve key applications, and each application contains 50 access control entries, the numbers alone require significant processing and interpretation.  If a compliance mandate then quickly requires the reporting and approval of 'who has access to what' within the organisation, a more intelligent approach is required.
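The scale of that example is easy to make concrete.  Treating every (user, application, access control entry) combination as a potential 'who has access to what' question gives 60 million data points:

```python
# Back-of-envelope scale of the entitlement review example above.
employees = 100_000
key_apps_per_employee = 12
entries_per_app = 50

# Every (user, application, entry) combination is a potential
# 'who has access to what' question to answer.
total_combinations = employees * key_apps_per_employee * entries_per_app
print(f"{total_combinations:,} combinations to review")  # 60,000,000 combinations to review
```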

This intelligence takes the form of a more adaptable, context-based approach to analysing large volumes of data.  It simply wouldn't be effective to perform static queries.  A dynamic approach would include being able to automatically analyse just the exceptions held within a large data set, with the ability to 'learn' or adapt to new exceptions and deviations.
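One very simple way to surface 'just the exceptions' without a predefined query is a statistical outlier check: flag any user whose entitlement count sits far from the population norm, rather than querying every user against a fixed rule.  The user IDs, counts and two-standard-deviation threshold below are all illustrative assumptions:

```python
from statistics import mean, stdev

# Per-user entitlement counts (invented data for illustration).
entitlement_counts = {
    "u01": 12, "u02": 11, "u03": 13, "u04": 12, "u05": 12,
    "u06": 11, "u07": 13, "u08": 12, "u09": 11, "u10": 13,
    "u11": 41,  # the outlier an exception-based approach should surface
}

values = list(entitlement_counts.values())
mu, sigma = mean(values), stdev(values)

# Treat anything more than two standard deviations from the mean as an
# exception worth closer analysis - no signature or known query needed.
exceptions = {user: count for user, count in entitlement_counts.items()
              if abs(count - mu) > 2 * sigma}
print(exceptions)  # {'u11': 41}
```

A production system would obviously use richer features (peer groups, roles, history) and adapt its baseline over time, but the principle - analyse deviations rather than enumerate everything - is the same.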

As attack vectors continue to increase, utilising both internal and external avenues, security intelligence will become a key component of the information assurance counter measure toolkit, resulting in a more effective and pinpointed approach.

(Simon Moffatt)

Security Patching for People

The updating of application and operating system software is a common phenomenon, with individuals and organisations keen to reduce the zero-day threat impact that exists while a security vulnerability is still unknown to the software vendor.  Obviously, once the vulnerability has been identified, a new hotfix, patch or service pack is released which can reduce or remove that threat window which may have been exploited during the 'zero-day' phase.

There are countless warning centres for specific operating systems and platforms that aim to identify vulnerabilities in existing versions and in turn provide guidance on how to remove them.  In general, software vendors nearly always recommend environments are patched to the most recent stable release in order to provide the best possible support.  In many scenarios support agreements can quickly become invalid, or support at least withheld, if an environment is not at the most recent patch level.  All fairly straightforward.

However, as the main entry point to most pieces of software is the end user, there seems a disproportionate amount of time spent on patching the environment and not the end user.  Software, in general, is often not used or configured in the way the designer or developer intended.  In some circumstances this results in the end user being dissatisfied with a particular feature or product, as they believe it doesn't perform in the way they expect.  Iterative development and continual open communication regarding usability can overcome this during the early part of a release cycle.

However, the main concern arises when a misinterpreted feature or configuration results in a security vulnerability.  A firewall rule set is incomplete, a default password is being used, a port is left open, a policy is incomplete, access is not set up correctly, and so on.  All of these can leave critical sensitive data open to attack.  This becomes more of an issue with complex middleware products (the security glue that is perhaps managing data transfers, directory linking, backend web access control and so on) which are continually evolving and changing.

From an everyday perspective, the regular employee is a key component of the information data flows and, in truth, the security processes that exist to help protect corporate data.  Whilst what they do within a particular application or feature is important, their overall attitude, awareness and approach to information security is altogether more important.

Here is where training and awareness become vitally important.  Awareness of, and access to, an up-to-date security policy, regular training sessions and workshops are important, as is a basic understanding of physical security (tailgating, for example) and of how to counteract the increasing threat from social engineering.

As threats and attack vectors change each year, so should the training and awareness of employees, helping them understand the key risk areas that will directly affect them and, more importantly, what they can do at an individual level to help counteract those threats.

(Simon Moffatt)

Do We Have a Duty to Run Anti-Virus Software?

If you have children under the age of 11, you are probably already familiar with the continual trips to inoculation clinics for things like Polio, Tetanus, Hepatitis, Measles, Mumps and so on. Whilst not all vaccines are compulsory by law, there is a strong suggestion that, unless your child has a known reaction, they should be inoculated. Whilst there might be a small chance of a side effect, the general goal is to accept the small risk to the individual and focus on the benefit to society as a whole, if a particular disease can either be eradicated in its entirety or managed to such an extent that it no longer becomes mainstream.

The same approach can really be applied to the practice of anti-virus and anti-malware in both the individual and corporate landscape. There's a process of virus identification, then a preventative approach governed by anti-virus and anti-malware software distributed on all exposed devices. The end result is hopefully one where the virus has limited effect. As in the real world, that in turn will cause the virus to 'mutate', or be developed further by an attacker, so that it can become effective again. Many corporate infrastructures will operate anti-virus software at the server and desktop level. This will include automated software roll-out, with remote patch updates pulled directly from the anti-virus vendor and then pushed out to the end points within minutes of release. In addition, many operational event and alerting mechanisms are likely to be in place to identify any machines that have yet to receive an update and could therefore be vulnerable to attack.

At the corporate desktop level, much of that responsibility lies with the organisation and in turn the infrastructure administrators. This reliance, however, starts to break down when we introduce Bring Your Own Device to the landscape, along with more advanced smart phones and even home working.

The responsibility for the protection of these new devices is often left to the individual. The devices will be of different makes, models and operating systems, making it impossible for a corporate policy to cover all angles - and this also assumes the organisation knows that a personal device is being used. So the protection aspect falls to the individual. As with child inoculation, not all personal devices will be protected by anti-virus or anti-malware software, whether due to choice, ignorance, neglect, or simply because various releases and versions provide inconsistent protection.

But does an individual have a duty to protect their machine and in turn prevent the proliferation of a virus? I think in reality yes. If you use one of the many Linux flavours on your laptop for example, there is an argument that says, 'well, Linux environments are less likely to be attacked by viruses, so I don't need anti-virus software at all'. That argument holds true to an extent, if the only machines you ever communicate with are Linux. In reality that is not the case. If you send a single email, the chances are the recipients will use a myriad of operating systems such as Windows XP, Vista, 7, Mac OS X, Ubuntu and so on. A virus which may be harmless to one platform could be hugely damaging to another, and it can easily be spread by a single email.

As nearly all devices, smart phones, laptops and now home gadgets are connected to the internet, they are open to the proliferation of virus based software and even if not directly targeted, can become attack vector proxies for other victims.

The cynical view is that anti-virus software is a multi-billion dollar industry which effectively manages the marketing and impact of the viruses themselves in order to justify its own existence.  The flip side is that being protected simply accelerates the need to develop tougher-to-identify viruses, as they need to overcome ever-increasing protection levels.  I personally am not that cynical.  I'd rather be protected today and have peace of mind than assume what might happen tomorrow.  As software develops, so do counter measures and protection approaches, and the most effective position to be in is surely safe today, which helps the individual and, in turn, the rest of the connected internet.

(Simon Moffatt)

Does a Data Breach Make You More Secure?

A breach.  A data loss incident.  An insider leak.  A media report of client data loss.  All would probably bring about a mild panic attack in most CISOs.  Eventually, and depending on the size of the organisation, that data breach will end up in the public eye, either via official acknowledgement that a breach has occurred - as is required by, say, the UK Information Commissioner's Office - or a simple media response to explain that 'everything is under control'.  Ultimately that public information could damage the brand and future customer base of the organisation.  Depending on the industry and the type of product or service being offered, the damage could be irreparable.

The sources of data breaches and losses are many and varied, with new and complex attack vectors appearing all the time.  If we were to quickly categorize a data breach, we would probably come out with a list something like this:

  • Malicious cyber attack
  • Malware within the corporate network
  • Negligent employee (laptop loss, USB loss)
  • Malicious insider
  • Careless insider (erroneous data copying, emailing of confidential data)
  • Mis-configured software and hardware

Whilst that is only a high-level view, it would cover a multitude of data loss scenarios for many organisations.  In response to a known threat, there are several process and technology counter measures an organisation could implement to reduce or ultimately remove the threat:

  • Malicious cyber attack > Firewall, Intrusion Detection System, Intrusion Prevention Systems
  • Malware within the corporate network > Anti-virus, SIEM logging, abnormal event monitoring
  • Negligent employee (laptop loss, USB loss) > Data Loss Prevention, data & asset management
  • Malicious insider > Event monitoring, access monitoring
  • Careless insider (erroneous data copying, emailing of confidential data) > DLP, event monitoring
  • Mis-configured software and hardware > Baselining, penetration testing, auditing

Each counter measure would be applied using a standard risk framework to identify any vulnerabilities and any threats that could exploit them.  In turn, a structured approach to counter measure selection would be taken in order to provide a decent return on investment with regard to the loss expectancy before and after a counter measure was put in place - less, of course, the cost of the counter measure itself.

This is basically following the standard formula: Annual Loss Expectancy (ALE) = Single Loss Expectancy (SLE) x Annualised Rate of Occurrence (ARO).  Counter measure selection would be based on an implementation cost lower than the ALE.  This assumes that the ARO is accurate (which is often not the case) and that the SLE is accurate (which is also often not the case).
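A quick worked example of that formula makes the counter measure decision concrete.  All of the figures below are invented purely for illustration:

```python
# Worked example of Annual Loss Expectancy (ALE):
#   ALE = Single Loss Expectancy (SLE) x Annualised Rate of Occurrence (ARO)
# All figures are invented for illustration.

sle = 250_000    # estimated cost of a single breach
aro = 0.2        # expected incidents per year (one every five years)

ale = sle * aro  # 50,000 expected loss per year

countermeasure_cost = 30_000  # annual cost of the proposed control

# On this model the control is justifiable if it costs less per year
# than the expected loss it offsets.
print(f"ALE: {ale:,.0f}, control justified: {countermeasure_cost < ale}")
```

The fragility the paragraph above describes is visible immediately: nudge the ARO estimate down to one incident every ten years and the same control no longer pays for itself on paper.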

So, if in one year the ARO was zero, would the perceived return on investment of the counter measure be higher or lower?  Well, if you've never been attacked or had a vulnerability exploited, it can be difficult to quantify the true effect of the existing counter measures.  On one hand it could be argued the counter measures are worth infinitely more than the actual cost of implementation, as the assets they are protecting have never been exposed.  It is potentially the case, however, that an asset is worth infinitely more once lost than when secure, so it would only take one loss to render all protection measures meaningless.

I think in practice, if an organisation has identified a failing in a process, product or scenario that has resulted in a data breach or loss, it becomes politically justifiable to implement further counter measures above and beyond the ALE, due to the intangible effects of such a loss.  Similarly, if the ARO was zero, could a reduction in counter measures be justified for the following year?

Of course, there are probability analytics that could be applied to help formulate a result mathematically, but the costs of brand damage, reputation and future trade loss are often difficult to quantify, which could result in a 'belt and braces' approach from a post-breach organisation.

(Simon Moffatt)