The Blurring of the Business Identity

The concept of a well-defined business identity is blurring, and this is causing a complex reaction in the area of identity and access management.  Internal, enterprise-class identity and access management (IAM) has long been defined as the management of user access through approval workflows, authoritative source integration and well-defined system connectivity.

Historical Business Structures
Historical business identity management has been built on several well-defined structures and assumptions.  The organisational workforce managed by an IAM programme was often permanent, static and assigned to a set business function or department.  This helped define multiple aspects of the IAM approach, from the way access request approvals were developed (with the line manager as the default first line of approval), to how role-based access control implementations were started (using business units or job titles to define functional groupings, for example).  IAM is complex enough, but these assumptions helped to at least create a level of stability and framing.  IAM was seen as an internal process, focused solely within the perimeter of the 'corporate' network.  Corporate in this sense is deliberately quoted, as the boundary between public and private internal networks is becoming increasingly ill-defined.
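As a rough illustration of that starting point, the sketch below derives candidate role groupings from department and job title attributes, in the way many early role-based access control efforts began.  The attribute names and sample records are hypothetical, not taken from any particular product.

```python
from collections import defaultdict

# Hypothetical HR records: department and job title drive the functional grouping.
workforce = [
    {"user": "asmith", "department": "Finance", "job_title": "Accounts Clerk"},
    {"user": "bjones", "department": "Finance", "job_title": "Accounts Clerk"},
    {"user": "cdavis", "department": "Design", "job_title": "UX Designer"},
]

def candidate_roles(records):
    """Group users into candidate roles keyed on (department, job title)."""
    roles = defaultdict(list)
    for rec in records:
        roles[(rec["department"], rec["job_title"])].append(rec["user"])
    return roles

for (dept, title), members in candidate_roles(workforce).items():
    print(f"Role '{dept} - {title}': {members}")
```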

Changing Information Flows
If IAM can be viewed as data and not just a security concern, any change to the data or information flows within an organisation will have a profound impact on the flow of IAM data too.  One of the key assumptions of IAM is that of the underlying business structures.  They are often used for implementation roll-out prioritization, application on-boarding prioritization, workflow approval design, data owner and approver identification, and service accountability.  This works fine if you have highly cohesive and loosely coupled business functions such as 'finance', 'design' and 'component packaging'.  However, many organisations are now facing numerous and rapidly evolving changes to their business information lines.  It's no longer the case that only the 'finance' team owns data relating to customer transactions.  Flows of data are often temporary too, perhaps existing only to fulfill part of a particular process or primary flow.  Organisational structures are littered with 'dotted-line' reports and overarching project teams that require temporary access, or access to outsourced applications and services.
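A minimal sketch of how that temporary, time-boxed access might be modelled, with access tied to a project window rather than a permanent reporting line.  The class and field names are purely illustrative.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical time-boxed grant: access tied to a project rather than a reporting line.
class TemporaryGrant:
    def __init__(self, user, resource, days_valid):
        self.user = user
        self.resource = resource
        self.expires_at = datetime.now(timezone.utc) + timedelta(days=days_valid)

    def is_active(self):
        """A grant is only honoured until its expiry; no manual revocation required."""
        return datetime.now(timezone.utc) < self.expires_at

grant = TemporaryGrant("cdavis", "component-packaging-share", days_valid=30)
print(grant.is_active())  # True until the 30-day project window closes
```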

Technical Challenges
The introduction of a continued raft of outsourced services and applications (Salesforce.com, Dropbox etc.) adds another layer of complexity, not only to information in general, but to IAM information and its implementation.  Accounts need to be created in external directories, with areas such as federation and SSO helping to bring 'cloud' based applications closer to the organisation's core.  However, those technical challenges often give way to larger process and management issues too.  Issues surrounding ownership, process re-design and accountability need to be addressed, and require effective business buy-in and understanding.
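As a sketch of the account creation step, the snippet below provisions a user into an external directory using the SCIM 2.0 standard, which many cloud applications expose for exactly this purpose.  The endpoint URL, token and attribute values are placeholders, not a real service.

```python
import requests

# A minimal sketch of provisioning an account into an external (cloud) directory
# via SCIM 2.0. The endpoint, token and attribute values are placeholders.
SCIM_ENDPOINT = "https://example-cloud-app.com/scim/v2/Users"
API_TOKEN = "replace-with-real-token"

new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "asmith@example.com",
    "name": {"givenName": "Alice", "familyName": "Smith"},
    "active": True,
}

response = requests.post(
    SCIM_ENDPOINT,
    json=new_user,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
response.raise_for_status()
print("Provisioned user id:", response.json().get("id"))
```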

Bring Your Own Device (BYOD) brings another dimension.  The data control issues are widely described, but there is an IAM issue here too.  How do you manage application provisioning on those devices, and the accounts required to either federate into them or natively authenticate and gain authorisation?

The Answer?
Well, like most things, there isn't a quick, technical answer to this evolving area.  IAM has long been about business focus and not just security technology.  Successful IAM is about enabling the business to do the things it does best, namely generate revenue.  Nothing from a technical or operational perspective should interfere with that main aim.  As businesses evolve ever more rapidly to utilize outsourced services, 'cloud' based applications and an increasing reliance on federation and partnerships, IAM must evolve and help to manage the blurring of information flows and structures that underpin the business's main functions.

@SimonMoffatt


Mandiant Lifts The Lid on APT


The claim that China is the root of all evil when it comes to cyber attacks increased a notch yesterday, when security software specialists Mandiant released a damning report claiming that a sophisticated team of hackers, with suspected connections to the People’s Liberation Army (PLA) and the Chinese Communist Party (CCP), had systematically hacked over 140 organisations over a 7 year period.

Why Release The Report?
There have been numerous attempts over the last few years to pin every single cyber attack onto a group or individual originating from a Chinese network.  Some are justified, some less so, but it’s an easy target to pin things against.  Many of the claims, however, have lacked the detailed technical and circumstantial foundation needed to back them up and move towards either active defence or proactive prosecution.  The Mandiant report – and I really recommend reading it in full to appreciate the level of detail that has been generated – really looks to point the finger, but this time with a credible amount of detail.  The obvious outcome of being so detailed is that the attackers now have a point of reference from which they can mobilise further obfuscation techniques.  However, the report provides several powerful assets, such as address and domain information, as well as malware hashes.  This is all useful material in the fight against further attacks.

How Bad Is It?
The detail is eye watering.  141 victims attacked over a 7 year period, with terabytes of data stolen, is not a nice read, whatever the contents.  The startling fact was simply the scale of the operations upholding the attacks.  Not only were the attacks persistent, but the infrastructure required to allow such complex and sustained attacks to take place covered an estimated 1000 servers, with hundreds, if not thousands, of operators and control staff.  The victim data was equally interesting, with several of the top sectors attacked appearing on the industry list for China's 5 year strategic emerging industries plan.  This starts to raise questions surrounding ethics, morality, intellectual property protection and competitive behaviour too.  The data points to a strategic industrial programme to steal and use legal, process, leadership and technical information on a vast scale.

What Happens Now…
The report will no doubt create split opinion in both the infosec community and the surrounding political avenues too.  The report points to industrial theft on a grand scale.  The links to the PLA and CCP are not to be made on a whim, and there will no doubt be a political response.  From an effective defence perspective, where does it leave us?  Well, the report contains practical information that many secops teams can effectively utilise for blacklists and malware identification.  The longer term impact may well be unknown at present.  The team behind APT1 will obviously apply countermeasures, altering their approach and attack vectors.  Mandiant themselves may well be at risk of hacking as a result, if they were not already.
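As a rough sketch of how that indicator data might be put to work, the snippet below checks file hashes and outbound domains against a locally maintained blocklist.  The indicator values and sets shown here are placeholders, not drawn from the report itself.

```python
import hashlib

# Hypothetical indicator sets, e.g. loaded from the appendices of a published report.
BLOCKED_DOMAINS = {"bad-example-c2.com", "update-check.example.net"}
MALWARE_SHA256 = {"0000000000000000000000000000000000000000000000000000000000000000"}

def file_is_known_malware(path):
    """Hash a file and compare it against the known-bad SHA-256 set."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() in MALWARE_SHA256

def domain_is_blocked(domain):
    """Check an outbound DNS query against the domain blocklist."""
    return domain.lower().rstrip(".") in BLOCKED_DOMAINS

print(domain_is_blocked("bad-example-c2.com"))  # True
```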

I think ultimately it goes some way to crystallise the view that effective, long term attacks via the internet are commonplace and sophisticated.  They provide an effective way for industrial secrets to be stolen and used, regardless of the levels of software and process protection organisations deploy.

The Drivers For Identity Intelligence

From the main view of Identity & Access Management 1.0 (I hate the versioning, but I mean the focus on internal enterprise account management, as opposed to the newer brand of directory-based federated identity management commonly being called IAM 2.0...), identities have been modelled within a few basic areas.

The 3 Levels of Compliance
'Compliance by Review' (access certification, or the checking of accounts and the associated permissions within target systems), 'Compliance by Control' (rules, decision points and other 'checking' actions to maintain a status quo of policy control) and 'Compliance by Design' (automatic association of entitlements via roles, based on the context of the user) probably cover most of the identity management technology available today.

I want to discuss some of the changes and uses of the first area, namely access review.  This periodic process is often used to verify that currently assigned, previously approved permissions are still fit for purpose and match either the business function and risk, or audit and compliance requirements.  The two requirements are really the carrot and stick of permissions management.  From an operational perspective, automating the access review process has led to the numerous certification products on the market that allow for the centralized viewing of account data, neatly correlated to HR feeds, to produce business-friendly representations of what needs to be reviewed and by whom.
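A minimal sketch of that correlation step, assuming simplified account and HR feed structures; the field names and sample data are hypothetical.

```python
# Hypothetical feeds: target system accounts correlated to HR records by email,
# producing a reviewer-friendly view of who owns what and who should review it.
hr_feed = {
    "asmith@example.com": {"name": "Alice Smith", "manager": "bjones@example.com"},
    "cdavis@example.com": {"name": "Carl Davis", "manager": "bjones@example.com"},
}

accounts = [
    {"system": "SAP", "account_id": "ASMITH01", "email": "asmith@example.com",
     "permissions": ["AP_INVOICE_POST"]},
    {"system": "Active Directory", "account_id": "cdavis", "email": "cdavis@example.com",
     "permissions": ["GRP_FINANCE_RO"]},
]

def build_review_items(accounts, hr_feed):
    """Attach owner and reviewer (line manager) details to each account."""
    items = []
    for acct in accounts:
        person = hr_feed.get(acct["email"], {})
        items.append({
            "system": acct["system"],
            "account": acct["account_id"],
            "owner": person.get("name", "ORPHANED ACCOUNT"),
            "reviewer": person.get("manager", "UNASSIGNED"),
            "permissions": acct["permissions"],
        })
    return items

for item in build_review_items(accounts, hr_feed):
    print(item)
```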

The Failings of Access Review
The major failing of many access review campaigns is often information overload, or a lack of context surrounding the information presented for review.  For example, asking a non-technical manager to approve complex RACF permissions or Active Directory group names will result in check-box compliance, as the manager will be unsure which permissions should be removed.  Glossary definitions and incremental-style certifications then start to reduce the burden and volume of information presented.  Whilst these are nice features, they're really just emphasizing the weakness in this area.

Use Your Intelligence
A commonly heard head teacher's rebuke is the 'use your brains' or 'use your intelligence' theme, when it comes to managing easily distracted or unthinking pupils.  The intelligence is often present by default, but not naturally used.  The same can be said of access review.  To make the review process effective - and by effective I mean actually delivering business value, not just complying with a policy - we need to think more about the value of doing it.  Instead of focusing on every application, every account and every permission, let's apply some context, meaning and risk to each piece of data.  Do you really need to verify every application, or just the ones that contain highly sensitive financial or client data?  Do you really need to verify every user account, or just the ones associated with users in the team that processes that data?  Do you really need to certify every permission, or just the ones that are high risk, or that vary from the common baseline for that team or role?
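A rough sketch of that risk-based scoping, assuming applications and permissions carry a simple sensitivity rating; the ratings, threshold and sample items are illustrative only.

```python
# Hypothetical review scope filter: only items whose application sensitivity and
# permission risk cross a threshold are put in front of a human reviewer.
review_items = [
    {"app": "Payroll", "sensitivity": "high", "permission": "RUN_PAYMENTS", "risk": 9},
    {"app": "Canteen Menu", "sensitivity": "low", "permission": "EDIT_MENU", "risk": 1},
    {"app": "CRM", "sensitivity": "high", "permission": "EXPORT_CLIENTS", "risk": 8},
]

RISK_THRESHOLD = 7

def in_scope(item):
    """Keep only high-sensitivity applications with high-risk permissions."""
    return item["sensitivity"] == "high" and item["risk"] >= RISK_THRESHOLD

scoped = [item for item in review_items if in_scope(item)]
print(f"{len(scoped)} of {len(review_items)} items require manual review")
```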

Manage Exceptions and Let Average Manage Itself
By focusing on the exceptions, you can instantly remove 80% of the workload, from both an automation and a business activity perspective.  The exceptions are the items that don't map to the underlying pattern of a particular team, or that perhaps have a higher impact or approval requirement.  By focusing in this way, you not only lessen the administrative burden, but also help to distribute accountability into succinct divisions of labour, neatly partitioned and self-contained.  If 80% of user permissions in a particular team are identical, capture those permissions into a role, approve that one single role, then focus the attention on the exceptional entitlements.  Ownership of the role, its contents and its applicability can then be removed from the view of the line manager, in a neat demarcation of accountability, resulting in a more streamlined access review process.
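As a sketch of that idea, the snippet below derives a baseline role from the permissions shared by everyone in a team and surfaces only the exceptions for review; the team data is made up.

```python
# Hypothetical team permission data: the baseline role is the set of permissions
# held by everyone; anything outside it is an exception to put in front of a reviewer.
team_permissions = {
    "asmith": {"GRP_FINANCE_RO", "AP_INVOICE_POST"},
    "bjones": {"GRP_FINANCE_RO", "AP_INVOICE_POST"},
    "cdavis": {"GRP_FINANCE_RO", "AP_INVOICE_POST", "SAP_ADMIN"},  # the exception
}

def baseline_and_exceptions(perms_by_user):
    """Baseline = intersection of everyone's permissions; exceptions = the rest."""
    baseline = set.intersection(*perms_by_user.values())
    exceptions = {
        user: perms - baseline
        for user, perms in perms_by_user.items()
        if perms - baseline
    }
    return baseline, exceptions

baseline, exceptions = baseline_and_exceptions(team_permissions)
print("Role baseline:", baseline)           # reviewed once, by the role owner
print("Exceptions to review:", exceptions)  # e.g. cdavis holding SAP_ADMIN
```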

Whenever I see a process being re-engineered with neat 'features' or add-ons, I think the time has come to start re-evaluating what is actually happening in the entire process.  Improvements in anything are great, but sometimes they are just masking an underlying failure.

@SimonMoffatt


Twitter Hack: What It Taught Us

Last week Twitter announced that it had been the victim of a hack that resulted in 250,000 users having their details compromised.  Pretty big news.  The password details were at least salted and hashed, but a quarter of a million records is a damaging amount of data to lose.  Twitter responded by resetting the passwords of those impacted and revoking session tokens.
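For context, a minimal sketch of salted password hashing, which is what limits the value of a stolen credential dump; the algorithm choice and iteration count here are illustrative, not a description of Twitter's actual scheme.

```python
import hashlib
import hmac
import os

# Illustrative salted password hashing: a per-user random salt means identical
# passwords produce different hashes, so a stolen dump cannot be cracked in bulk
# with a single precomputed table.
def hash_password(password, salt=None, iterations=200_000):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=200_000):
    _, digest = hash_password(password, salt, iterations)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
```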

Not A Case Of If, But When

The attack again highlights that cyber attack activity is omnipresent.  Regardless of how large the organisational defence mechanism is (and you could argue that the larger the beast, the more prized the kill, but more on that later), it is fair to say that you will be hacked at some point.  A remote attacker only needs to be successful once.  Just once, out of the thousands of blocked, tracked and identified attacks that occur every hour.  Certainly if you're a CISO or infosec manager at a 'large' organisation (regardless of whether it's actively a web service company or not), from a risk and expectations management perspective, it will be beneficial for the organisation's long term defence planning to assume an attack will happen, if it already hasn't.  This can help to focus resource on remediation and clean-up activities, to minimize an attack's impact from both a data loss angle and a public relations and brand damage perspective.


Target Definition - If You're Popular, Watch Out

How do you know if you'll be a target?  I've talked extensively over the last few months about cyber attacks from both an organisational and a consumer perspective, and the simple start to that series of articles was that "...any device that connects to the internet is now a potential target..".  Quite a basic statement, but ultimately far-reaching.  The 'success' of many cyber attacks is generally driven by the complexity with which the attack has been developed.  It is no longer good enough to simply identify a bug on an unpatched system.  As good as hackers are, anti-virus, intrusion prevention systems, client and perimeter firewalls, application whitelisting and kernel-level security provide strong resistance to most basic attacks.  Twitter themselves acknowledged that the attack on them "..was not the work of amateurs.." and that they "..do not believe it was an isolated incident..".

The complexity of the Twitter attack would make you think that the 250,000 accounts that were compromised were not targeted directly, and that more would have been lifted had the attack not been stopped.  It seems the main driver is simply the fact that Twitter is a massively popular site, with headline-grabbing strength.  Why are Windows XP and Android malware infections so high?  Regardless of underlying technical flaws, it's simply because they are well used.  A cyber attack will always gravitate to the path of least resistance, or at least greatest exploitability, which will always come from the sheer volume of exposure.  Be that the number of potential machines to infect, or the number of users to expose.


Response & Handling

The underlying technical details of the Twitter attack are yet to be understood, so it's difficult to provide a rational assessment of how well the response was handled.  If you separate out the attack detection component for a second, the response was to reset passwords (thus rendering the captured password data worthless), notify those impacted via email, and revoke user tokens (albeit not for clients using the OAuth protocol).  All pretty standard stuff.  From a PR perspective, the Twitter blog posted the basic details.  I think the public relations aspect is again probably the area that many organisations neglect in times of crisis.  This is fairly understandable, but organisations the size of Twitter must realize that they will make significant waves in the headline news, and this needs to be managed from a technical, community and media relations perspective.
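As a rough illustration of the token revocation step, one common approach is to bump a per-user session version so that all previously issued session identifiers stop validating.  The storage layout and field names here are hypothetical, not how Twitter actually implements it.

```python
# Hypothetical session store: each session records the user's token version at
# issue time; bumping the version invalidates every outstanding session at once.
users = {"asmith": {"token_version": 1}}
sessions = {"sess-abc123": {"user": "asmith", "token_version": 1}}

def revoke_all_sessions(user_id):
    """Invalidate every existing session for a user after a suspected compromise."""
    users[user_id]["token_version"] += 1

def session_is_valid(session_id):
    session = sessions.get(session_id)
    if session is None:
        return False
    return session["token_version"] == users[session["user"]]["token_version"]

print(session_is_valid("sess-abc123"))  # True
revoke_all_sessions("asmith")
print(session_is_valid("sess-abc123"))  # False after revocation
```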

@SimonMoffatt