Identity Management: Data or Security?

I was having a discussion this week with a colleague about identity management transformation projects, and how organisations get from the often deep quagmire of complexity, low re-usability and low project success, to something resembling an effective identity and access management (IAM) environment.  Most projects start off with a detailed analysis phase, outlining the current 'as-is' state, before identifying the 'to-be' (or not to be) framework.  The difference is wrapped up in a gap analysis package, with work streams that help to implement fixes to the identified gaps.  Simples, right?

IAM Complexity

IAM is renowned for being complex, costly and effort-consuming from a project implementation perspective.  Why?  The biggest difference from, for example, large IT transformation projects (think enterprise desktop refreshes, operating system roll-outs, network changes and so on) is that IAM tends to have stakeholders from many different parts of the business.  A new desktop refresh will ultimately be decided by technicians.  Business approvers will help govern things like roll-out plans and high-level use cases, but not the low-level implementation decisions.  IAM is somewhat different.  It impacts not only the technical administration of managed resources, but also business processes for things like access requests, new joiners, team changes and so on.

IAM Becomes A Security Issue When It Doesn't Work

IAM is often seen as part of the security architecture framework.  This makes total sense.  The management of subjects and their access to data objects is well understood, with loads of different access control mechanisms to choose from (MAC, ABAC, RBAC etc).  However, IAM should really be seen more as a business enabler.  I always like to pitch IAM as the interface between non-technical business users and the underlying IT systems they need in order to do their jobs.  10-15 years ago, when IAM started to become a major agenda item, it was all about directories (meta, virtual, physical, partial, synced, any more terms..?) and technical integration.  "Developing a new app?  Whack some groups in an LDAP for your authentication and authorization and you're done".  The next step was to develop another layer that could connect multiple directories and databases together and perform multiple account creations (and hopefully removals) simultaneously.  Today IAM is more than just technical integration and provisioning speed.  It's more about aligning with business processes, organisational team requirements, role-based access control, reporting, compliance and attestation.  All of these functional areas have use cases that touch business users more than technical users.  However, if those IAM services fail (access misuse, insider threat, a hacked privileged account), a security incident occurs.

Think of IAM As Building Data Routes

During the continued discussion with my colleague, he brought up the notion that IAM is really just about data management: the movement of data between silos, in order to get it to its destination via the most effective and efficient path.  IAM data could originate from an authoritative source such as an HR database, before ultimately being transformed into a system account within an LDAP directory or database.  The transformation process will require business understanding (what will the account look like, which roles and permissions, what approvals are required etc), but nonetheless a new piece of data will be created, which requires classification, auditing and reporting.  Just the same as a file on a share.  By breaking down the entire IAM elephant into bite-sized chunks of data creation, transformation and output, you can start to make the implementation process a lot more effective, with re-usable chunks of process and project machinery.
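As a minimal sketch of that "data creation, transformation and output" idea, the transformation step could look something like the following.  Every field name, the role map and the `example.com` mail domain are illustrative assumptions rather than a real schema:

```python
from datetime import datetime, timezone

def hr_record_to_account(hr_record, role_map):
    """Transform a raw HR record into a provisionable account entry.

    The input fields and role_map are illustrative assumptions, not a
    real HR or directory schema.
    """
    uid = (hr_record["first_name"][0] + hr_record["last_name"]).lower()
    account = {
        "uid": uid,
        "cn": f'{hr_record["first_name"]} {hr_record["last_name"]}',
        "mail": f"{uid}@example.com",
        # Business logic: department drives the initial role assignment
        "roles": role_map.get(hr_record["department"], ["basic-user"]),
        # A new data object has been created, so classify it at birth
        "classification": "internal",
    }
    # The new piece of data also needs auditing, like any other asset
    audit_event = {
        "action": "account-created",
        "source": "hr-feed",
        "subject": uid,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return account, audit_event

account, audit = hr_record_to_account(
    {"first_name": "Jane", "last_name": "Doe", "department": "finance"},
    {"finance": ["finance-user", "expenses-approver"]},
)
```

The point of the sketch is the shape, not the detail: one authoritative input, one business-rule-driven transformation, and two outputs (the account plus its audit trail) that can be re-used across every connected target system.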

As with any large-scale project, it's often the smallest footprints that make the biggest impact.  In the case of IAM, taking small but smart data-management-style steps could be the most effective approach.


Sony ICO Fine: Damage Was Already Done

This week tech and games giant Sony was hit with a nifty £250k fine from the UK's Information Commissioner's Office (ICO).  This was in response to Sony being hacked back in April 2011, in an incident which exposed millions of customer records - including credit card details - for users of the PlayStation Network (PSN).  The ICO stated that Sony failed to act in accordance with the Data Protection Act, which, as a data controller, it must comply with to certain standards of information protection.

The incident itself proved to be a logistical and PR nightmare, costing Sony an estimated $171m in lost revenue, legal and clean-up costs.  Whilst the ICO's fine is insignificant compared to the actual cost of the damage done nearly two years ago, it acts as a timely reminder that every significant data breach by a data controller will be investigated, with any irregularities identified and appropriate accountability applied.

The ICO has the ability to fine organisations up to half a million pounds for data controller irregularities, which may seem like small change to corporate giants such as Sony.  However, the ICO has a broad range of organisations to keep in check, from public sector, education and healthcare providers right through to start-ups and corporate machines, for whom £500k is far from insignificant.

The use of the ICO as a security enabler in this case obviously did little, as the breach occurred and the aftermath needed thorough investigation.  However, the damage to the Sony brand, customer dissatisfaction and the internal security recovery costs would not have been unknown.  All three could and should have been used as a fundamental driver for implementing the appropriate information security steps, such as patching, auditing and the application of database security best practices.

Whilst information security is often seen as a nice-to-have, it inevitably has budget constraints to work against, with business justification a constant balancing act to manage.  Although areas such as information security metrics and security ROI measures are used to help justify the tangible gains from a succinct information security policy, it is often the intangible damage that can occur from breaches and data loss which proves greater.

Whilst intangible costs such as brand damage, confidence levels and user satisfaction are often hard to quantify, that isn't to say they shouldn't be taken into account when analysing appropriate risk mitigation strategies.

The case with Sony painfully highlights the financial and brand damage costs a significant data breach can have, and should act as a powerful use case for organisations that have so far scaled back or avoided implementing up-to-date and robust information security practices when it comes to personal or credit card information.


Security Analytics: Hype or Huge?

"Big Data" has been around for a while, and many organisations are forging ahead with Hadoop deployments or looking at NoSQL database models such as the open-source MongoDB, to allow for the processing of vast logistical, marketing or consumer-led data sources.  Infosec is no stranger to a big approach to data gathering and analytics.  SIEM (security information and event management) solutions have long been focused on centralizing vast amounts of application and network device log data in order to provide a fast repository against which known signatures can be applied.

Big & Fast

The SIEM vendors' product differentiation approach has often been focused on capacity and speed.  Nitro (McAfee's SIEM product) prides itself on its supremely fast database, written in Ada.  HP's ArcSight product is all about device and platform integration and scalability.  The use of SIEM is symptomatic of the use of IT in general - the focus on automating existing problems via integration and centralization.  The drivers behind these are pretty simple: there is a cost benefit and a tangible return on investment in automating something in the long term (staff can swap out to more complex, value-driven projects, and there's a faster turnaround of existing problems), whereas centralization often provides simpler infrastructures to support, maintain and optimize.

The Knowns, Unknowns and Known Unknowns of Security

I don't want to take too much inspiration from Donald Rumsfeld's confusing path of known unknowns, but there is a valid point that, when it comes to protection in any aspect of life, knowing what you're protecting and, more importantly, who or what you are protecting it from, is incredibly important.  SIEM products are incredibly useful at helping to find known issues - for example, a login attempt failing 3 times on a particular application, or traffic going to a blacklisted IP address.  All characteristics have a known set of values, which help to build up a query.  This can develop into a catalog of known queries (aka signatures) which can be applied to your dataset.  The larger the dataset, the more bad stuff you hope to capture.  This is where the three S's of SIEM come in - the sphere, scope and speed of analysis.  Deployments want huge datasets, connected to numerous differing sources of information, with the ability to very quickly run a known signature against the data in order to find a match.  The focus is on real-time (or near real-time) analysis from a helicopter view.  Can this approach be extended further?  A pure big-data style approach for security?  How can we start to use that vast data set to look for the unknowns?
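A "known signature" catalog is, at its simplest, a set of queries with fixed values.  The sketch below assumes a toy event format (dicts with `type`, `user`, `app`, `src_ip` and `dst_ip` fields - an assumption, not any vendor's schema) and implements the two examples above, a failed-login threshold and a blacklisted destination IP:

```python
from collections import Counter

BLACKLIST = {"203.0.113.9"}   # illustrative blacklisted address (TEST-NET range)
FAILED_LOGIN_THRESHOLD = 3

def apply_signatures(events):
    """Run two 'known' signatures over a list of log events."""
    alerts = []
    failures = Counter()
    for e in events:
        if e["type"] == "login_failure":
            # Signature 1: N failed logins by the same user on one app
            failures[(e["user"], e["app"])] += 1
            if failures[(e["user"], e["app"])] == FAILED_LOGIN_THRESHOLD:
                alerts.append(
                    f"{FAILED_LOGIN_THRESHOLD} failed logins: "
                    f"{e['user']} on {e['app']}"
                )
        elif e["type"] == "net_flow" and e["dst_ip"] in BLACKLIST:
            # Signature 2: traffic to a blacklisted destination IP
            alerts.append(
                f"traffic to blacklisted IP {e['dst_ip']} from {e['src_ip']}"
            )
    return alerts

events = [
    {"type": "login_failure", "user": "jdoe", "app": "portal"},
    {"type": "login_failure", "user": "jdoe", "app": "portal"},
    {"type": "login_failure", "user": "jdoe", "app": "portal"},
    {"type": "net_flow", "src_ip": "10.0.0.5", "dst_ip": "203.0.113.9"},
]
alerts = apply_signatures(events)
```

Note that both signatures only ever find what they were told to look for - which is exactly the limitation the "unknowns" discussion below is about.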

Benefits to Security

The first area which seems to be gaining popularity is the marrying of SIEM activity data to identity and access management (IAM) data.  IAM knows about an individual's identity (who, where and possibly why) as well as that identity's capabilities (who has access to what?), but IAM doesn't know what that user has actually been doing with their access.  SIEM, on the other hand, knows exactly what has been going on (even without any signature analytics) but doesn't necessarily know by whom.  Start to map activity user IDs or IP addresses to real identities stored in an IAM solution and you suddenly have a much wider scope of analysis, and also a lot more context around what you're analyzing.  This can help with attempting to map out the 'unknowns' such as fraud and internal and external malicious attacks.
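A hedged sketch of that mapping, with toy in-memory tables standing in for real IAM and network session data sources (all the names and values here are assumptions for illustration):

```python
# IAM side: identity records keyed by account ID (assumed shape)
iam_identities = {
    "jdoe": {"name": "Jane Doe", "dept": "finance",
             "entitlements": ["payments-admin"]},
}

# Network side: IP-to-user mapping, e.g. derived from DHCP or VPN sessions
ip_to_user = {"10.0.0.5": "jdoe"}

# SIEM side: raw activity events that only carry an IP address
siem_events = [
    {"src_ip": "10.0.0.5", "action": "bulk-export", "system": "payments-db"},
]

def enrich(events):
    """Attach identity context from IAM to raw SIEM activity events."""
    for e in events:
        user = ip_to_user.get(e["src_ip"])
        identity = iam_identities.get(user)
        # The event keeps its original fields, plus who did it and
        # what that person is entitled to
        yield {**e, "user": user, "identity": identity}

enriched = list(enrich(siem_events))
```

The raw event said only "10.0.0.5 bulk-exported from payments-db"; the enriched event says a named finance user with payments-admin rights did it, which is a far more useful starting point for fraud or insider-threat analysis.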

Unknown Use-Cases

Managing the known attacks is probably the easier place to start.  This would involve understanding which metrics or signatures an organisation wants to focus on.  Again, this would be driven by a basic asset classification and risk management process: what do I need to protect, and what scenarios would result in those assets being threatened?  The approach from a security-analytics perspective is not to be focused on technical silos.  Try to see security originating and terminating across a range of business and technical objects.  If a malicious destination IP address is found in a TCP packet picked up via the firewall logs in the SIEM environment, that packet has originated somewhere.  What internal host device maps to the source IP address?  What operating system is the host device running?  What common vulnerabilities does that device have?  Who is using that device?  What is their employee ID, job title or role?  Are they a contractor or a permanent member of staff?  Which systems are they using?  Within those systems, what access do they have, was that access approved, and what data are they exposed to, and so on?  Suddenly the picture can be more complex, but also more insightful, especially when attempting to identify the true root cause.
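That chain of questions is essentially a series of pivots across data sources.  A simplified sketch, with every table and value an illustrative assumption standing in for CMDB, vulnerability scanner, HR and IAM feeds:

```python
# Illustrative lookup tables; in a real deployment these would be fed by
# CMDB, vulnerability scanner, HR and IAM systems (all values assumed).
host_by_ip = {"10.0.0.5": {"hostname": "ws-042", "os": "Windows 10"}}
vulns_by_host = {"ws-042": ["CVE-2099-0001"]}   # placeholder CVE ID
user_by_host = {"ws-042": "jdoe"}
hr_by_user = {"jdoe": {"employee_id": "E1001", "role": "Payments Analyst",
                       "contractor": False}}

def root_cause_context(src_ip):
    """Pivot from a source IP through host, vulnerability and identity data."""
    host = host_by_ip.get(src_ip, {})
    hostname = host.get("hostname")
    user = user_by_host.get(hostname)
    return {
        "host": host,                                   # which device?
        "vulnerabilities": vulns_by_host.get(hostname, []),  # how exposed?
        "user": user,                                   # who was on it?
        "hr": hr_by_user.get(user),                     # role, employment type
    }

ctx = root_cause_context("10.0.0.5")
```

Each pivot on its own is trivial; the insight comes from chaining them, so that a single firewall log line resolves into a device, its weaknesses and a named, role-classified person.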

This complex chain of correlated "security big data" can be used in a number of ways, from post-incident analysis and trend analytics to the mapping of internal data to external threat intelligence.

Big data is here to stay and security analytics just needs to figure out the best way to use it.


Protection Without Detection

I read an article this week by the guys at Securosis that referred to a study on anti-virus testing.  I'm not going to comment on the contents of the article, but I loved the title of the blog, which I've subtly used for inspiration here.  The concept of protection without detection.  Just think on that for a second.  It's a mightily powerful place to be.  It's also a position we generally see applied to the 'real world' too.  Not that information security isn't the real world, of course.

You take prescribed medicine or wash your hands with antibacterial gel without knowing the names, consequences or impact of the bacteria you have killed.  You lock your luggage with a combination lock and are not aware, at the other end of the flight, of who has attempted to tamper with, open and get into your bag.  Your salary gets paid into the bank every month, at which time the bank can invest that cash, lend it to other people and so on.  You aren't really concerned about the details of those transactions, as your salary will always be available for you to withdraw (unless of course the bank defaults...).  Your ISP could well be stopping thousands of cyber attacks a day before you see the handful of attacks on the multi-function router in your front room.  Ok, the last example is slightly off-piste, but the concept of protection without detection, or even protection-by-default, is a nice position to be in.

It removes the concept of security being a managed outcome.  It stops security being a cost, an effort-laden piece of work, a distraction from the 'real world' of living, going on holiday or safely browsing the internet.  Isn't that how security should be?

Often, as consultants, technologists and engineers, we fail to see things through the eyes of the normal subscriber and end user.  When the majority of us buy a car, we are concerned about mpg, reliability, safety and performance.  We don't generally want to speak directly with the mechanic, designer or component builder about the injection system, the carbon mix of the brake pads or the improvements made to the VANOS.  It's the end goal or deliverable that will directly impact our lives that we are really interested in.

Many end users and individuals will see security in this light.  They want to be (or at least feel) secure, without having to worry about implementation, detection and reaction.  They want security as a given proposition, perhaps guaranteed to a certain level.  In exchange, they may be prepared to pay a sum of cash, or put up with a particular change in service or lifestyle, as long as a certain level of security can be guaranteed.

Security itself is a means to an end - the end being a protected lifestyle, a protected identity or a protected piece of data.  Promoting security as a default proposition makes it more attractive for those who may not be prepared to struggle with the inconvenience, or the details of how competing security options deliver a level of safety.