Happy Christmas (This isn't a Scam)

It really isn't - just a simple note to wish all the Infosec Pro readers a relaxing festive break, for you, your friends and family.

2013 has been an interesting year yet again in the Infosec world.  Connectivity has been the buzz, with topics such as the 'Internet of Things', 'Relationship Management' and 'Social Graphing' all producing great value and enhanced user experiences, but bringing with them some tough challenges with regards to authentication, context-aware security and privacy.

OAuth2 - The Passwordless World of Mobile

Keeping in vogue with the fashion of killing off certain standards, technologies or trends, I think it's easy to say that the life of the desktop PC (and maybe even the laptop...) is coming to an end.
Smartphone sales are in the hundreds of millions per quarter, and each iteration of both the iOS and Android operating systems brags of richer user experiences and more sophisticated storage and app integration.  The omnipresent nature of these powerful mini-computers has many profound benefits and uses.


The Road To Identity Relationship Management

The Problems With Identity & Access Management

I am never a fan of being the bearer of dramatic bad news - "this industry is dead!", "that standard is dead!", "why are you doing it that way, that is so 2001!".  Processes, industries and technologies appear, evolve and sometimes disappear at their own natural pace.  If a particular problem and its numerous solutions are under discussion, it probably means that at some point those solutions seemed viable.  Hindsight is a wonderful thing.  With respect to identity and access management, I have seen the area evolve quite rapidly in the last 10 years, in pretty much the same way as the database market, the antivirus market, the business intelligence market, the GRC market and so on.  They have all changed.  Whether for better or worse is open for debate, but in my opinion that debate is irrelevant, as that is the market which exists today.  You either respond to it, or remove yourself from it.

European Open Identity Summit - Review

This week saw the first European Open Identity Summit, hosted by identity management vendor ForgeRock [1].  Following hot on the heels of the US summit held in Pacific Grove, California in June, the sold-out European event brought together customers, partners, vendors and analysts from the likes of Salesforce, Deloitte, Forrester and KuppingerCole, amongst others.

Whilst the weather was typically October-esque, the venue was typically French chateau, set in panoramic grounds, with great hosting, food and wine to keep everyone in a relaxed mood.

The agenda brought together the key themes of the modern identity era, such as standards adoption (XACML, SAML2, OAuth2, OpenID Connect, SCIM), modern implementation approaches (JSON, API, REST) through to the vision for modern identity enablement for areas such as mobile and adaptive authentication, all whilst allowing customers and partners a chance to collaborate and swap war stories with some great networking.


The Evolution of Identity & Access Management

Identity and access management is going through a renaissance.  Organisations, both public and private, have spent thousands of hours (and dollars) implementing and managing infrastructure that can manage the creation of identity information, as well as the authentication and authorization tasks associated with those identities.  Many organisations do this stuff because they have to.  They're too large to perform these tasks manually, or perhaps have external regulations that require them to have a handle on the users who access their key systems.  But how and why is all this changing?

2-Factor Is Great, But Passwords Still Weak Spot

The last few months have seen a plethora of consumer-focused websites and services adding two-factor authentication systems in order to improve security.  The main focus of these additional authentication steps generally involves a secondary one-time password being sent to the authenticating user, either via a previously registered email address or mobile phone number.  This moves the authentication process away from something the user knows (username and password) to something the user has - access to that email address or mobile phone.  Whilst these additional steps certainly go some way to improving security, and reduce the significance of the account password, they highlight an interesting issue: password-based authentication is still a weak link.
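As a rough illustration of that secondary step, here's a minimal sketch in Python of issuing and verifying a short-lived one-time code of the kind delivered by email or SMS.  The function names and in-memory storage are invented for the example; a real service would add delivery, rate limiting and persistent storage.

```python
import hmac
import secrets
import time

# In-memory store of issued codes, keyed by user: (code, expiry time).
_issued = {}

def issue_otp(user, ttl_seconds=300):
    """Generate a 6-digit one-time code and record when it expires."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _issued[user] = (code, time.time() + ttl_seconds)
    return code  # in practice this would be sent to the registered email or phone

def verify_otp(user, submitted):
    """Check a submitted code in constant time and consume it."""
    code, expires = _issued.pop(user, (None, 0))
    if code is None or time.time() > expires:
        return False
    return hmac.compare_digest(code, submitted)
```

The point of the second factor is visible even in this toy version: if the account password leaks, an attacker still needs the code sent to something only the user has.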

The Rise & Protection of the API Economy

Nearly every decent website and application will have an application programming interface (API) of some sort.  This may simply be another interface into the application's most advanced administrative controls - controls which perhaps are used by only 5% of users and would clutter up even the most clearly designed user interface.  To make those controls open to end users, they have traditionally been exposed in a programmatic manner that only deep technologists would look at or need to use.  In addition, those APIs were probably only ever exposed to private internal networks, where their protection from a security perspective was less of a concern.


Identity & Access Management: Give Me a REST

Give me a REST (or two weeks' stay in a villa in Portugal if you're asking...).  RESTful architectures have been the general buzz of websites for the last few years.  The simplicity, scalability and statelessness of this approach to client-server communication have been adopted by many of the top social sites, such as Twitter and Facebook.  Why?  Well, in their specific cases, developer adoption is a huge priority.  Getting as many Twitter clients or Facebook apps released as possible increases the overall attractiveness of those services, and in a world where website and service competition is as high as ever, that is a key position to sustain.
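To make the statelessness point concrete, here's a minimal sketch of the kind of call a third-party client makes against a REST-style API.  The URL and token are purely illustrative; the property that matters is that every request carries the verb, the resource and the credentials, so the server keeps no session state between calls.

```python
import json
import urllib.request

# Hypothetical endpoint and token, for illustration only.
API_URL = "https://api.example.com/v1/users/42"
ACCESS_TOKEN = "example-bearer-token"

# Each request is self-describing: method, resource URI and credentials
# travel together, so any server instance can handle it.
request = urllib.request.Request(
    API_URL,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
             "Accept": "application/json"},
    method="GET",
)

with urllib.request.urlopen(request) as response:
    user = json.load(response)
    print(user)
```

That self-contained, predictable shape is a large part of what makes these APIs so easy for external developers to adopt.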

BYOID: An Identity Frontier?

[bee-oi]. [b-yoy]. [be-yo-eye]. [bee-oy-ed].  Whichever way you pronounce it, the concept of bringing your own identity to the party is becoming a popular one.  Just this week Amazon jumped on the identity provider bandwagon by introducing its 'Login With Amazon' API.  What's all the fuss?  Isn't that just the same as the likes of Twitter and Facebook exposing their identity repositories, so that third-party application and service developers can leverage their authentication framework without having to store usernames and passwords?
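For readers unfamiliar with the plumbing, the sketch below shows the first leg of an authorization-code style login that a third-party app might perform against an identity provider of this kind.  All endpoint URLs, client identifiers and scopes are invented for the example; each provider documents its own.

```python
import secrets
import urllib.parse

# Illustrative values only; a real provider publishes its own endpoints,
# registration process and scopes.
AUTHORIZE_URL = "https://idp.example.com/oauth2/authorize"
CLIENT_ID = "my-registered-app"
REDIRECT_URI = "https://myapp.example.com/callback"

def build_login_url():
    """Step 1: redirect the user to the identity provider to authenticate."""
    state = secrets.token_urlsafe(16)  # anti-CSRF value, checked when the user returns
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "profile",
        "state": state,
    }
    return f"{AUTHORIZE_URL}?{urllib.parse.urlencode(params)}", state

# Step 2 (not shown): the provider redirects back with an authorization code,
# which the app exchanges for an access token. At no point does the app
# ever see, or need to store, the user's password.
```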


It's Not Unhackable, But Twitter Makes a Start

This week Twitter introduced a new two-factor authentication process to verify account logins.  This comes on the back of some pretty big Twitter account hacks in recent months.  Now, whilst you can argue that it is not Twitter's (or any other service provider's) responsibility to keep your account details secure, they do, to some extent, have a duty to make increased security an option if an end user wants to use it.

A typical end user isn't particularly interested in security.  Yes, they don't want to be hacked; yes, they don't want to have their bank details stolen or their Facebook timeline polluted with nasties; but a typical end user won't actively take extra steps to stop that from happening.

Forget Firewalls, Identity Is The Perimeter

"It is pointless having a bullet proof double-locked front door, if you have no glass in your windows".  I'm not sure who actually said that (if anyone..), but the analogy is pretty accurate.  Many organisations have relied heavily in the past, on perimeter based security.  That could be the network perimeter or the individual PC or server perimeter.  As long as the private network was segregated from the public via a firewall, the information security manager's job was done.  Roll on 15 years and things are somewhat more complex.

"Identity as the perimeter" has been discussed a few times over the last year or so and I'm not claiming it as a strap line - albeit it is a good one at that.  But why is it suddenly becoming more important?


Infosecurity Europe 2013: Round Up

This week saw London bathed in glorious spring-like sunshine, just as the three-day annual Infosecurity Europe conference took place at Earls Court.  Over 330 vendors, 190 press representatives and 12,000 attendees converged to make for an interesting and thought-provoking look at information security in 2013.

The keynote panel discussions focused on best practices as identified by experienced CISOs and security managers, with the general theme of education, awareness and training being top priorities for organisations wishing to develop a sustainable and adaptive security posture.  Budget management is also a tough nut to crack, but it is becoming clear that technical point solutions don't always deliver what is required, and that properly training security practitioners, coupled with cross-department accountability, makes for a more cost-effective approach.

Advanced Persistent Threats, cyber attacks and SCADA-based vulnerabilities were all hot topics of discussion for vendors and attendees alike.



See below for a detailed write up of some of the keynote sessions.

Hall Of Fame Inducts Shlomo Kramer & Mikko Hypponen
Keynote Panel: Smarter Security Spending
Technical Strategy: Defining APT
Keynote Panel: Battling Cyber Crime
Keynote Panel: Embedding Security Into The Business
Technical Strategy: SCADA The Next Threat
Analyst Panel: Future Risks

Infosecurity Europe 2014 will run from April 29th to May 1st 2014

By Simon Moffatt

Infosecurity Europe 2013: Smarter Security Spending

Information security should be focused on "moving from the 'T' in IT to the 'I' in IT", according to panel moderator Martin Kuppinger from KuppingerCole Analysts.  Information security has often been focused on technical controls, with point solutions based on software and hardware being deployed in the hope that a 'silver bullet' style cure is found for all attacks, breaches and internal issues.  This is an unsustainable model, from both a cost and effort perspective, but which areas provide a good return on security investment?  An expert panel in the keynote theatre on day 3 of Infosecurity Europe aimed to find out.

The People, In People, Process & Technology

Michelle Tolmay, from retailer ASOS, commented that the people, in the people, process and technology triad, are increasingly more important than simply installing and configuring technology.  Dragan Pendic, from drinks manufacturer Diageo, also described how building the information security business case requires focus on the 'right people' within the organisation.  As budgets are finite, all spending needs to be fully justified and explained in business language to key business stakeholders.  Dragan articulated that whilst the majority of the security budget is ring-fenced for legal and regulatory compliance, any remaining funds are spent wisely, focused on identifying security stakeholders with the correct roles and responsibilities in order to make existing and new security technology work smarter.

Education, Training & Awareness

Graham McKay, of DC Thomson, described how, whilst risk should be decided by the business, countermeasures should be implemented by the IT and security teams, with a key focus on sustainable education.  He argued that point solutions are nearly always breachable at some point in time, and that employee training and awareness is a much more effective and sustainable way to protect information.

Cal Judge from Oxfam explained that for training to be effective, it needs to take a personable, story-based approach, strongly avoiding dry, theoretical, policy-led content.  Michelle also added that using examples of the security implications employees face in real life helps to articulate what measures need to be implemented in the workplace.

Accountability -v- Commerciality 

In any organisation, there is a clear trade-off between business effectiveness and security implementation.  Graham described that an organisation will never be 100% secure, as commerciality will always take hold.  Whilst technology obviously has a major role to play, learning the full technical limitations, integration steps and implementation paths is key to fully maximising a return on investment, commented Pendic.  Often technology is not implemented to the maximum of its capability, resulting in cheaper alternatives being overlooked or not evaluated.  Cal Judge promoted vulnerability scanning of existing technology as an effective spend, arguing that it can help simulate what an external attacker would look for, from both an internal and an external asset perspective.

Michelle Tolmay added that overly restrictive policies are actually counterproductive and costly, resulting in employees taking shortcuts and workarounds that will ultimately put the business at risk.  She also commented that relationships are the underlying success factor for effective infosec spending.  Relationships between internal employees across departments and external relationships between the organisation, audit teams and external regulators all play a key part in understanding how to fuel infosec project spending.


By Simon Moffatt

Infosecurity Europe 2013: Defining APT

Targeted and complex malicious software has seen a significant increase in infection rates since 2007, according to FireEye's Alex Lanstein.  "Since the US Air Force used the APT label to describe specifically Chinese-origin attacks, multiple variations, from different geographies, are now commonplace".

Malware Occurrence & Complexity On The Rise

The occurrence and complexity of malicious software has led to numerous significant breaches.  Powerful state-sponsored and organised-crime-led groups have developed automated ways of generating sophisticated malware payloads that are hard to identify, track and block.  Many payloads are now masked as basic everyday application files such as PDFs, Word and Excel documents and images, whilst underneath they harbour well-crafted executables that can seamlessly connect to multiple remote command and control servers.  These command and control servers are often accessed through intermediary instruction sets, distributed via well-known domains such as Twitter, Yahoo and WordPress blog sites, that won't look suspicious to an organisation's outbound traffic analysis tools.  The instruction sets are often encrypted, or at least masked as base64, to prevent detection.
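As a toy illustration of why even simple base64 masking defeats naive keyword filtering, the snippet below encodes a fabricated 'instruction' of the sort that might be embedded in an otherwise innocuous blog post or status update.  The instruction text and address are entirely invented.

```python
import base64

# A fabricated command-and-control instruction, as it might be hidden
# inside an apparently harmless page or status update.
masked = base64.b64encode(b"beacon https://203.0.113.7/update every 3600s")

print(masked.decode())           # looks like meaningless text to a keyword filter
print(base64.b64decode(masked))  # the malware's own tooling trivially recovers it
```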

Sophisticated Social Framing

As anti-virus and signature-based scanning tools become more accurate, malware designers are leveraging the human factor as a means of entry into an organisation's network.  By identifying key employees via social media tools such as LinkedIn and Twitter, malware payloads are delivered directly to an individual via spear-phishing techniques.  Basic social framing, such as good-news stories or students looking for work placements or advice, is typical according to Lanstein.

Automation

Many of the payloads being delivered are manufactured using small utilities that help create a 'factory' of malware operators who can quickly craft a malicious document or image in minutes.  These payloads are created specifically for individual organisational targets, with subtle differences and nuances, in order to look realistic.

The Human Element Behind APT

The human element is not to be underestimated in the entire APT food chain.  Whilst the payloads are technical in nature, and command and control centres allow for hundreds if not thousands of remote bots, human decision making, framing and social engineering play a large part in overcoming first-line defences.  As technical protection gets better, the human factor at both the malware operator and target level becomes ever more important, with increased awareness and training a key tool in malware defence.

By Simon Moffatt

Infosecurity Europe 2013: Battling Cyber Crime Keynote

Cybercrime, whether for financial gain or hacktivist tendencies, is on the rise.  The US and UK governments have invested significant sums in the last 12 months in new defence measures and research centres.  The sci-fi talk of 'cyber war' is increasingly becoming a reality, but what are the new attack vectors and what can be done to defend against them?

Changing Priorities, Changing Targets

Arnie Bates from Scotia Gas Networks described how freely available tools are now commonplace and can help a potential cyber attacker initiate distributed denial of service (DDoS) attacks simply and easily, without the complex development skills that would have been required only a few years ago.  The simplicity of attack initiation has led to 'simple' attacks resulting in more sophisticated impact, as highlighted by Misha Glenny, writer and broadcaster, who pointed to the recent attack on the Associated Press' Twitter account.  The attack itself seemed simple, but the resulting impact on the NYSE was tangible.


Hacktivism -v- Financial Reward

DS Charlie McMurdie from the Met Police's cyber crime unit articulated the need to identify the true motive for each cyber crime attack.  The majority of attacks being reported derive from a financial motive.  Whilst hacktivism is still an important protest tool, the greater complexity and rise in attacks is based on monetary reward, either directly through theft or via indirect theft of identity credentials that in turn lead to a cash payoff for a successful attacker.  From a government perspective, Adrian Price from the UK's MoD described how state-level espionage is still a major concern, as it has been for decades, but the attack vectors have simply moved online.  And whilst state-level attacks could ultimately lead to government involvement, and ultimately war and loss of life, national-defence-related attacks still fall under the protest category if a government's political and foreign policy is openly objected to.

Defence Via Shared Intelligence

Whilst DS McMurdie described how there isn't a "single bullet to defend against" when it comes to cyber attacks, there equally isn't a silver bullet that will provide ultimate protection.  Private sector organisations still need to promote cyber awareness and education to generate a more cross-departmental approach to defence.  At the national and critical infrastructure level, shared intelligence initiatives will help provide a more adaptable and responsive defence mechanism.

By Simon Moffatt

Infosecurity Europe 2013: Embedding Security into the Business

A strong keynote panel discussed the best practices for embedding security into the business, and how the changing perceptions of information security are helping to place it as a key enabler to business growth.

Infosec Is The Oil Of The Car

Brian Brackenborough from Channel 4 best described information security as being "the oil in the car engine".  It's an integral part of the car's mobility, but shouldn't be seen as the brakes, which can be construed by the business as restrictive and limiting.  James McKinlay, from Manchester Airports Group, added that information security needs to move away from being just network and infrastructure focused and start to engage other business departments, such as HR, legal and other supply chain operators.

The panel agreed that information security needs to better engage all areas of the non-technical business landscape, in order to be fully effective.

Business Focused Language

Many information security decisions are made on risk management and how best to reduce risk whilst staying profitable and not endangering user experience.  A key area of focus is the use of a common, business-focused language when describing risk, the benefits of reducing it, and the controls involved in the implementation.  According to James, organisations need to "reduce the gap between the business and infosec teams' view of risk, and standardise on the risk management frameworks being used".

Education & Awareness

Geoff Harris from ISSA promoted better security awareness as a major security enabler.  He described how a basic 'stick' model of making offenders against basic infosec controls buy doughnuts for the team worked effectively when used to reduce things like unlocked laptops.  James also pointed to "targeted and adaptive education and training" as being of great importance.  Different departments have different goals, focuses and users, all of which require specific training when it comes to keeping information assets secure.

All in all, the panel agreed that better communication with regards to information security policy implementation, and better gathering of business feedback when it comes to information security policy creation, are essential.

By Simon Moffatt

Infosecurity Europe 2013: SCADA The Next Threat

Physical and industrial control systems are now all around us, in the form of smart grid electrical meters, traffic light control systems and even basic proximity door access control panels.  These basic computer systems can hold a vast array of sensitive data, with fully connected network access, central processing units and execution layers.  Many, however, lack the basic security management expected of such powerful systems.  Many 'don't get a quarter of the security governance an average corporate server' gets, according to Greg Jones of Digital Assurance.

Characteristics and Rise In Use
Micro computers with closed control systems have been in use for a number of years in industrial environments, where they are used to collect processing data or execute measurement or timing instructions.  Their popularity in mainstream use has increased, with the likes of TV set-top boxes and games consoles following a similar design.  These more commercially focused devices, however, often have stronger security, due to their makers wanting to protect revenue streams, says Jones.

Lack of Security Management
Many of the control-type systems in use aren't manufactured or managed with security in mind.  Performance, durability and throughput are often of greater importance, with basic security controls such as secure storage, administrative lockdown and network connectivity all potential weak spots.

Protection Gaps
The main security focus of many smaller control devices is physical protection.  Devices such as traffic light systems or metering boxes are generally well equipped to stave off vandalism and physical breaches, but much less so from a logical and access control perspective.

Data is often stored unencrypted, with limited validation being performed on any data collection and input channels.  This can open up issues with regards to data integrity, especially in the field of electrical meter reading.  This will certainly become of greater significance, as it is forecast that by 2020, 80% of European electricity supplier customers will be using a smart-style meter.

By Simon Moffatt


Infosecurity Europe 2013: Analyst Panel Keynote: Future Risks

The end of day 1 of the Infosec Europe conference, on a wonderfully warm spring afternoon at Earls Court, saw the keynote theatre host an interesting panel discussion focusing on future risks.  Andrew Rose from Forrester, Wendy Nather from the 451 Research group and Bob Tarzey from Quocirca provided some interesting sound bites on what future threats may look like.

Hacktivism versus Financial Reward
All panelists acknowledged that hacktivism has been a major concern for the last few years, with Andrew pointing out that attacks are now becoming more damaging and malicious.  Bob produced a nice soundbite - "terrorists don't build guns, they buy them" - highlighting the fact that hacktivists can easily leverage available tools to perform sophisticated and complex attacks, without necessarily spending time and effort developing bespoke tools.  Wendy pointed out that attacks driven by financial reward have somewhat different attack patterns and targets, with new avenues such as mobile, smart grids and CCTV devices being identified as potential revenue streams for malicious operators.

Financial reward is still a major driver for many attacks, with new approaches likely to include mobile devices, to leverage potential salami style SMS attacks.  Intellectual Property theft is still a major obstacle at both a nation state and organisational level.

Extended Enterprises
Andrew commented on the increasing complexity many organisations now face from a structural perspective.  Increased outsourcing, supply chain distribution and third-party data exchanges make defensive planning difficult.  Bob also pointed out that the complexity of supply chain logistics means smaller organisations, traditionally thought to be more immune to larger-scale attacks, are now more likely to be breached, simply due to the impact a breach may have on their business partners.

Insider Threat and Privileged Account Management
Trusted employees can still be a major headache from a security perspective.  Non-intentional activity, such as losing laptops, responding to malicious links and being the victim of spear-phishing attacks, was highlighted as the result of poor security awareness, or a lack of effective security policy.  Bob argued that privileged account management should be a high priority, with many external attacks utilising root, administrator and service accounts and their escalated permissions.

Data Chemistry and Context Aware Analysis
Whilst there is no 'silver bullet' to help protect against the known knowns and unknown unknowns, the use of security analytics can go some way to helping detect, and ultimately prevent, future attacks.  Wendy used the term 'data chemistry' to emphasise the use of the right data and the right query to help provide greater detail and insight to traditional SIEM and log-gathering technologies.  Bob promoted the use of greater profiling and context-aware analysis of existing log and event data, to further highlight exceptions and their relevance, especially from a network activity perspective.  Andrew also commented that information asset classification, whilst a well-known approach to risk management, is still a key component in developing effective defence policies.

By Simon Moffatt

Infosecurity Europe 2013: Hall of Fame Shlomo Kramer & Mikko Hypponen

London, 23rd April 2013 - For the last 5 years the medal of honour of the information security world has been presented to speakers of high renown with the ‘Hall of Fame’ at Infosecurity Europe. Voted for by fellow industry professionals the recipients of this most prestigious honour stand at the vanguard of the technological age and this year both Shlomo Kramer and Mikko Hypponen will be presented with the honour on Wednesday 24 Apr 2013 at 10:00 - 11:00 in the Keynote Theatre at Infosecurity Europe, Earl’s Court, London.

Microsoft Security Intelligence Report Volume 14

Yesterday, Microsoft released volume 14 of its Security Intelligence Report (SIRv14), which included new threat intelligence from over a billion systems worldwide.  The report focused on the third and fourth quarters of 2012.
One of the most interesting threat trends to surface in the enterprise environment was the decline in network worms and the rise of web-based attacks.  The report found:


Who Has Access -v- Who Has Accessed

The certification and attestation part of identity management is clearly focused on the 'who has access to what?' question.  But access review compliance is really identifying failings further upstream in the identity management architecture.  Reviewing previously created users, or previously created authorization policies, and finding excessive permissions or misaligned policies, shows failings in the access decommissioning process or the business-to-authorization mapping process.


Protect Data Not Devices?

"Protect Data Not Devices", seems quite an intriguing proposition given the increased number of smart phone devices in circulation and the issues that Bring Your Own Device (BYOD) seems to be causing, for heads of security up and down the land.  But here is my thinking.  The term 'devices' now covers a multitude of areas.  Desktop PC's of course (do they still exist?!), laptops and net books, smart phones and not-so-smart phones, are all the tools of the trade, for accessing the services and data you own, or want to consume, either for work or for pleasure.  The flip side of that is the servers, mainframes, SAN's, NAS's and cloud based infrastructures that store and process data.  The consistent factor is obviously the data that is being stored and managed, either in-house or via outsourced services.

Passwords And Why They're Going Nowhere

Passwords have been the bane of security implementers ever since they were introduced, yet they are still present on nearly every app, website and system in use today.  Very few web-based subscription sites use anything resembling two-factor authentication, such as one-time passwords or secure tokens.  Internal systems run by larger organisations implement additional security for things like VPN access and remote working, which generally means a secure token.
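For context on what handling passwords properly at least looks like on the storage side, here's a minimal sketch of salting and stretching a password before it is stored, using only the standard library.  It's illustrative rather than a recommendation; production systems typically use dedicated schemes such as bcrypt or scrypt.

```python
import hashlib
import hmac
import os

def hash_password(password, iterations=200_000):
    """Derive a salted, stretched hash suitable for storage (sketch only)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, stored_digest):
    """Re-derive the hash from the submitted password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored_digest)
```

Even done well, this only limits the damage after a breach; it does nothing about phishing, reuse or keylogging, which is exactly why the password itself remains the weak spot.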

Optimized Role Based Access Control

RBAC.  It's been around a while - pretty much since access control systems were embedded into distributed operating systems.  It often appears in many different forms, especially at an individual system level: groups, role-based services, access rules and so on.  Ultimately, the main focus is the grouping of people and their permissions, in order to accelerate and simplify user account management.
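A toy model makes the grouping idea concrete: roles bundle permissions, users are assigned roles, and an access check simply walks that mapping.  The role and permission names below are invented for the example.

```python
# Roles bundle permissions; users are assigned one or more roles.
ROLES = {
    "finance_clerk":   {"invoice:read", "invoice:create"},
    "finance_manager": {"invoice:read", "invoice:create", "invoice:approve"},
}

USER_ROLES = {
    "alice": {"finance_manager"},
    "bob":   {"finance_clerk"},
}

def permitted(user, permission):
    """A user holds a permission if any of their assigned roles contains it."""
    return any(permission in ROLES[role] for role in USER_ROLES.get(user, set()))

print(permitted("bob", "invoice:approve"))    # False
print(permitted("alice", "invoice:approve"))  # True
```

Moving someone between teams then becomes a single role change rather than dozens of individual permission edits, which is exactly the administrative saving RBAC is after.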

Insurance For Information Security

We can get insurance for virtually anything these days.  Cars obviously (albeit, if that wasn't law, how many would pay for it?).  Ourselves.  Pets.  Eyes.  Teeth.  Holidays.  You name it and The Meerkat can sort it out.  The market for insurance is highly complex, with econometrics playing a large part in determining the potential risk levels of individual insurance consumers.  The insurance underwriters, like any other capitalist organisation, are primarily concerned with making a profit.  They won't provide insurance to those they deem a probable risk, and charge higher premiums to those that are a possible risk.  Insurance for the consumer is to cover loss against an unexpected event.  The risks of that unexpected event occurring will obviously change.  Flying to Spain on holiday increases the risk of being in a plane crash.  Getting old increases the risk of falling and breaking your hip.  But a lot of the time, the unexpected risk is just that: unexpected.

Identity In The Modern Enterprise

I was on a webinar last week with the highly articulate Eve Maler from Forrester, where the discussion was around the future of identity and access management.  Everyone has an opinion on the future of everything, and IAM is certainly no different.  The view of IAM 1.0 (enterprise provisioning) and IAM 2.0 (federated identity, 'cloud' services and so on) is continually evolving, and it's pretty clear that identity management now has a greater role to play for many organisations as they look to embrace things like increased mobility and outsourced, service-driven applications.

Information Security: Time for a Different Approach

These last few weeks have seen, yet again, some pretty significant hacks (namely the Evernote hack).  Large amounts of user data, including passwords, were released into the wild.  The situation could have been worse in the Evernote case; at least the passwords were salted and hashed.  Evernote's response has been to perform a mass password reset on its user base, in a proactive, damage-limitation-style exercise, and no doubt several internal streams of investigation are looking for the who, what and why behind the attack.

Security is Reactive
I've blogged on the topic of reactionary security a few times recently ("Protection Without Detection", "Preventative -v- Detective Security") and it seems the approach is still pretty much the default (or perhaps security is only really questioned and tested after an event?).  There are, of course, lots of 'proactive' components to security.  Hashing a password could be seen as one, for example, but many of these activities are often small tactical steps at the implementation level, not the strategic level.  Audit is obviously detective, with an audit response proactive in some sense, but really only proactive in getting you back to the status quo of reactive.  Big data for security ("Security Analytics: Hype or Huge?", "Big Security Data to Big Security Intelligence") is another example, in my mind, of purely reactive security.  The big data promise is based entirely on scale and speed: scale, obviously (the word big might help there), with regards to aggregating and correlating multiple data sources; and speed, for trying to develop queries and analytic steps to identify root causes, patterns and so on.  Longer term, the results of the big data analytical steps could of course be proactive in nature.

A Different Approach
In my mind a different approach is needed.  I'm not advocating what that approach should be, but the panacea would of course be to get security so embedded that it seamlessly integrates with the revenue-generating, business-focused practices within an organisation.  The gap between security and convenience needs to be minimised to as close to zero as possible.  Security really needs moving up the organisational food chain, away from the bigger, faster, shinier implementation-level approach, which will constantly chase (and lose) an attacker's tail, to being a default stance in all business-related policy decisions.  This is difficult of course, but in the longer term it will help move away from a reactionary standpoint to something resembling security by default.

@SimonMoffatt


The Blurring of the Business Identity

The concept of a well-defined business identity is blurring, and this is causing a complex reaction in the area of identity and access management.  Internal, enterprise-class identity and access management (IAM) has long been defined as the management of user access via approval workflows, authoritative source integration and well-defined system connectivity.

Historical Business Structures
Historical business identity management has been defined by several well-defined structures and assumptions.  An organisational workforce managed by an IAM programme was often permanent, static and assigned to a set business function or department.  This helped define multiple aspects of the IAM approach, from the way access request approvals were developed (defaulting to the line manager as the first line of approval), to how role-based access control implementations were started (using business units or job titles to define functional groupings, for example).  IAM is complex enough, but these assumptions helped to at least create a level of stability and framing.  IAM was seen as an internal process, focused solely within the perimeter of the 'corporate' network.  Corporate in this sense is indeed quoted, as the boundary between public and private internal networks is becoming increasingly ill-defined.

Changing Information Flows
If IAM can be viewed as data, and not just a security concern, any change to the data or information flows within an organisation will have a profound impact on the flow of IAM data too.  One of the key assumptions of IAM is that of the underlying business structures.  They are often used for implementation roll-out prioritization, application on-boarding prioritization, workflow approval design, data owner and approver identification and service accountability.  This works fine if you have highly cohesive and loosely coupled business functions such as 'finance', 'design' and 'component packaging'.  However, many organisations are now facing numerous and rapidly evolving changes to their business information lines.  It's no longer common for just the 'finance' team to own data relating to customer transactions.  Flows of data are often temporary too, perhaps existing only to fulfil part of a particular process or primary flow.  Organisational structures are littered with 'dotted-line' reports and overarching project teams that require temporary access, or access to outsourced applications and services.

Technical Challenges
The introduction of a continued raft of outsourced services and applications (Salesforce.com, Dropbox etc.) adds another layer of complexity, not only to information in general, but to IAM information and its implementation.  Accounts need to be created in external directories, with areas such as federation and SSO helping to bring 'cloud'-based applications closer to the organisation's core.  However, those technical challenges often give way to larger process and management issues too.  Issues surrounding ownership, process re-design and accountability need to be accounted for, and require effective business buy-in and understanding.

Bring Your Own Device (BYOD) brings another dimension.  The data control issues are widely described, but there is an IAM issue here too.  How do you manage application provisioning on those devices, and the accounts required to either federate into them or natively authenticate and gain authorisation?

The Answer?
Well, like most things, there isn't a quick technical answer to this evolving area.  IAM has long been about business focus and not just security technology.  Successful IAM is about enabling the business to do the things they do best, namely make revenue.  Nothing from a technical or operational perspective should interfere with that main aim.  As businesses evolve ever more rapidly to utilise outsourced services and 'cloud'-based applications, with an increasing reliance on federation and partnerships, IAM must evolve and help to manage the blurring of information flows and structures that underpin the business's main functions.

@SimonMoffatt


Mandiant Lifts The Lid on APT


The claim that China is the root of all evil when it comes to cyber attacks went up a notch yesterday, when security software specialists Mandiant released a damning report claiming that a sophisticated team of hackers, with suspected connections to the People's Liberation Army (PLA) and the Chinese Communist Party (CCP), had systematically hacked over 140 organisations over a seven-year period.

Why Release The Report?
There have been numerous attempts over the last few years to pin every single cyber attack onto a group or individual originating from a Chinese network.  Some justified, some not so, but it's an easy target to pin things against.  Many of the claims, however, have lacked the detailed technical and circumstantial foundation to back them up and move towards either active defence or proactive prosecution.  The Mandiant report - and I really recommend reading it in full to appreciate the level of detail that has been generated - really looks to point the finger, but this time with a credible amount of detail.  The obvious outcome of being so detailed is that the attackers now have a point of reference from which they can mobilise further obfuscation techniques.  However, the report provides several powerful assets, such as address and domain information, as well as malware hashes.  This is all useful material in the fight against further attacks.

How Bad Is It?
The detail is eye-watering.  141 victims attacked over a seven-year period, with terabytes of data stolen, is not a nice read, whatever the contents.  The startling fact was simply the scale of the operations upholding the attacks.  Not only were the attacks persistent, but the infrastructure required to allow such complex and sustained attacks to take place covered an estimated 1,000 servers, with hundreds, if not thousands, of operators and control staff.  The victim data was equally interesting, with several of the top sectors attacked being on the industry list for China's five-year strategic emerging industries plan.  This starts to bring in questions surrounding ethics, morality, intellectual property protection and competitive behaviour too.  The data points to a strategic industrial programme to steal and use legal, process, leadership and technical information on a vast scale.

What Happens Now…
The report will no doubt create split opinion, in both the infosec community and the surrounding political avenues too.  The report points to industrial theft on a grand scale.  The links to the PLA and CCP are not to be made on a whim, and there will no doubt be a political response.  From an effective defence perspective, where does it leave us?  Well, the report contains practical information that many secops teams can effectively utilise for blacklists and malware identification.  The longer-term impact may well be unknown at present.  The team behind APT1 will obviously apply countermeasures, altering their approach and attack vectors.  Mandiant themselves may well be at risk of hacking as a result, if they were not already.

I think ultimately it goes some way to crystallise the view that long-term, effective attacks via the internet are commonplace and sophisticated.  They provide an effective way for industrial secrets to be stolen and used, regardless of the levels of software and process protection organisations use.

The Drivers For Identity Intelligence

From the main view of Identity & Access Management 1.0 (I hate the versioning, but I mean the focus on internal enterprise account management, as opposed to the newer brand of directory-based federated identity management commonly being called IAM 2.0...), identities have been modelled within a few basic areas.

The 3 Levels of Compliance
'Compliance by Review' (access certification, or the checking of accounts and the associated permissions within target systems), 'Compliance by Control' (rules, decision points and other 'checking' actions to maintain a status quo of policy control) and 'Compliance by Design' (automatic association of entitlements via roles, based on the context of the user) probably cover most of the identity management technology available today.

I want to discuss some of the changes and uses of the first area, namely access review.  This periodic process is often used to verify that currently assigned, previously approved permissions are still fit for purpose and match either the business function and risk, or audit and compliance requirements.  The two requirements are really the carrot and stick of permissions management.  From an operational perspective, automating the access review process has led to the numerous certification products on the market that allow for the centralised viewing of account data, neatly correlated to HR feeds, to produce business-friendly representations of what needs to be reviewed and by whom.
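That correlation step is conceptually simple, even if the products wrap it in workflow.  Below is a minimal sketch, with invented data, of matching extracted system accounts back to an HR feed and flagging the ones nobody appears to own.

```python
# Illustrative data: current employees from the HR feed, and an account
# dump pulled from a target system.
hr_feed = {"e1001": "Alice Smith", "e1002": "Bob Jones"}

system_accounts = [
    {"account": "asmith",     "employee_id": "e1001"},
    {"account": "bjones",     "employee_id": "e1002"},
    {"account": "svc_legacy", "employee_id": None},     # no recorded owner
    {"account": "cdavis",     "employee_id": "e0999"},  # leaver, not in the feed
]

# Accounts that can't be matched to a current employee are the first
# candidates for the reviewer's attention.
orphans = [a["account"] for a in system_accounts if a["employee_id"] not in hr_feed]
print(orphans)  # ['svc_legacy', 'cdavis']
```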

The Failings of Access Review
The major failing of many access review campaigns is often information overload, or the lack of context surrounding the information presented for review.  For example: asking a non-technical manager to approve complex RACF permissions or Active Directory group names will result in checkbox compliance, as the manager will be unsure which permissions should be removed.  Glossary definitions and incremental-style certifications then start to reduce the burden and volume of information made available.  Whilst these are nice features, they're really just emphasising the weakness in this area.

Use Your Intelligence
A commonly heard head teacher's rebuke is the 'use your brains' or 'use your intelligence' theme when it comes to managing easily distracted or unthinking pupils.  The intelligence is often present by default, but not naturally used.  The same can be said of access review.  To make the review process effective - and by effective I mean actually giving business value, not just complying with a policy - we need to think more about the value of doing it.  Instead of focusing on every application, every account and every permission, let's apply some context, meaning and risk to each piece of data.  Do you really need to review every application, or just the ones that contain highly sensitive financial or client data?  Do you really need to review every user account, or just the ones associated with users in the team that processes that data?  Do you really need to certify every permission, or just the ones that are high risk, or that vary from the common baseline for that team or role?

Manage Exceptions and Let Average Manage Itself
By focusing on the exceptions, you can instantly remove 80% of the workload, from both an automation and a business activity perspective.  The exceptions are the items that don't map to the underlying pattern of a particular team, or perhaps have a higher impact or approval requirement.  By focusing in this way, you not only lessen the administrative burden, but help to distribute the accountability into succinct divisions of labour, neatly partitioned and self-contained.  If 80% of user permissions in a particular team are identical, capture those permissions into a role, approve that one singular role, then focus the attention on the exceptional entitlements.  Ownership of the role, its contents and applicability can then be removed from the view of the line manager in a nice demarcation of accountability, resulting in a more streamlined access review process.
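A minimal sketch of that idea, with invented entitlement data: the shared baseline across a team becomes the role, and only the per-user deltas are surfaced for review.

```python
# Illustrative entitlements for one team.
team_entitlements = {
    "alice": {"crm:read", "crm:write", "reports:view"},
    "bob":   {"crm:read", "crm:write", "reports:view"},
    "carol": {"crm:read", "crm:write", "reports:view", "payments:approve"},
}

# The permissions everyone shares become the candidate role, approved once.
baseline = set.intersection(*team_entitlements.values())

# Only the deltas need individual certification.
exceptions = {user: perms - baseline
              for user, perms in team_entitlements.items()
              if perms - baseline}

print(baseline)    # the three shared permissions -> the role definition
print(exceptions)  # {'carol': {'payments:approve'}}
```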

Whenever I see a process being re-engineered with neat 'features' or add-ons, I think the time has come to start re-evaluating what is actually happening in the entire process.  Improvements in anything are great, but sometimes they are just masking an underlying failure.

@SimonMoffatt


Twitter Hack: What It Taught Us

Last week Twitter announced that it had been the victim of a hack that resulted in 250,000 users having their details compromised.  Pretty big news.  The password details were at least salted, but a quarter of a million records is a damaging amount of data to lose.  Twitter responded by resetting the passwords of those impacted and revoking session tokens.

Not A Case Of If, But When

The attack again goes to highlight that cyber attack activity is omnipresent.  Regardless of how large the organisational defence mechanism is (and you could argue that the larger the beast, the more prized the kill, but more on that later), it is fair to say that you will be hacked at some point.  A remote attacker only needs to be successful once.  Just once, out of the thousands of blocked, tracked and identified attacks that occur every hour.  Certainly if you're a CISO or infosec manager at a 'large' organisation (regardless of whether it's actively a web service company or not), from a risk and expectations management perspective, it will be beneficial for the organisation's long-term defence planning to assume an attack will happen, if it already hasn't.  This can help to focus resource on remediation and clean-up activities, to minimise an attack's impact from both a data loss angle and a public relations and brand damage perspective.


Target Definition - If You're Popular, Watch Out

How do you know if you'll be a target?  I've talked extensively over the last few months about cyber attacks from both an organisational and a consumer perspective, and the simple starting point of that series of articles was that "...any device that connects to the internet is now a potential target..".  Quite a basic statement, but ultimately far-reaching.  The 'success' of many cyber attacks is generally driven by the complexity of how the attack has developed.  It is no longer good enough to simply identify a bug on an un-patched system.  As good as hackers are, anti-virus, intrusion prevention systems, client and perimeter firewalls, application whitelisting and kernel-level security provide strong resistance to most basic attacks.  Twitter themselves acknowledged that the attack on them "..was not the work of amateurs.." and that they "..do not believe it was an isolated incident.."

The complexity of the Twitter attack would make you think that the 250,000 accounts that were compromised were not targeted directly, and that more would have been lifted if the attack had not been stopped.  It seems the main driver is simply the fact that Twitter is a massively popular site, with headline-grabbing strength.  Why are Windows XP and Android malware infections so high?  Regardless of underlying technical flaws, it's simply because they are widely used.  A cyber attack will always gravitate to the path of least resistance, or at least greatest exploitability, which will always come from the sheer volume of exposure - be that the number of potential machines to infect, or the number of users to expose.


Response & Handling

The underlying technical details of the Twitter attack are yet to be understood, so it's difficult to provide a rational assessment of how well the response was handled.  If you separate out the attack detection component for a second, the response was to reset passwords (thus rendering the captured password data worthless), notify those impacted via email and revoke user tokens (albeit not for clients using the OAuth protocol).  All pretty standard stuff.  From a PR perspective, the Twitter blog posted the basic details.  I think the public relations aspect is again probably the area that many organisations seem to neglect in times of crisis.  This is fairly understandable, but organisations the size of Twitter must realise that they will make significant waves in the headline news, and this needs to be managed from a technical, community and media relations perspective.

@SimonMoffatt




Identity Management: Data or Security?

I was having a discussion this week with a colleague regarding identity management transformation projects, and how organisations get from the often deep quagmire of complexity, low re-usability and low project success to something resembling an effective identity and access management (IAM) environment.  Most projects start off with a detailed analysis phase, outlining the current 'as-is' state, before identifying the 'to-be' (or not to be) framework.  The difference is wrapped up in a gap analysis package, with work streams that help to implement fixes for the identified gaps.  Simples, right?

IAM Complexity

IAM is renowned for being complex, costly and effort-consuming from a project implementation perspective.  Why?  The biggest difference compared to, for example, large IT transformation projects (think enterprise desktop refreshes, operating system roll-outs, network changes and so on) is that IAM tends to have stakeholders from many different parts of the business.  A new desktop refresh will ultimately be decided by technicians.  Business approvers will help govern things like roll-out plans and high-level use cases, but not the low-level implementation decisions.  IAM is somewhat different.  It impacts not only the technical administration of managed resources, but also business processes for things like access requests, new joiners, team changes and so on.

IAM Becomes A Security Issue When It Doesn't Work

IAM is often seen as part of the security architecture framework.  This makes total sense.  The management of subjects and their access to data objects is well understood, with loads of different access control mechanisms to choose from (MAC, ABAC, RBAC etc.).  However, IAM should really be seen more as a business enabler.  I always like to pitch IAM as the interface between non-technical business users and the underlying IT systems they need in order to do their jobs.  10-15 years ago, when IAM started to become a major agenda item, it was all about directories (meta, virtual, physical, partial, synced, any more terms..?) and technical integration.  "Developing a new app?  Whack some groups in an LDAP for your authentication and authorization and you're done."  The next step was to develop another layer that could connect multiple directories and databases together and perform multiple account creations (and hopefully removals) simultaneously.  Today IAM is more than just technical integration and provisioning speed.  It's more about aligning with business processes, organisational team requirements, role-based access control, reporting, compliance and attestation.  All of these functional areas have use cases that touch business users more than technical users.  However, if those IAM services fail (access misuse, insider threat, a hacked privileged account), a security incident occurs.

Think of IAM As Building Data Routes

During the continued discussion with my colleague, he brought up the notion that IAM is really just about data management: the movement of data between silos, in order to get it to its destination via the most effective and efficient path.  IAM data could originate from an authoritative source such as an HR database, before ultimately being transformed into a system account within an LDAP directory or database.  The transformation process will require business understanding (what will the account look like, which roles and permissions, what approvals are required, etc.), but nonetheless a new piece of data will be created, which requires classification, auditing and reporting - just the same as a file on a share.  By breaking down the entire IAM elephant into bite-sized chunks of data creation, transformation and output, you can start to make the implementation process a lot more effective, with re-usable chunks of process and project machinery.
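A minimal sketch of that data-transformation view, with an invented mapping: an authoritative HR record goes in, a directory-ready account object comes out, and the rule itself is explicit and reusable.

```python
# Illustrative HR source record and mapping rules.
hr_record = {
    "employee_id": "e1003",
    "given_name": "Dana",
    "surname": "Khan",
    "department": "Finance",
    "status": "active",
}

def to_directory_entry(record):
    """Transform the authoritative HR record into directory account attributes."""
    uid = (record["given_name"][0] + record["surname"]).lower()
    return {
        "dn": f"uid={uid},ou=people,dc=example,dc=com",
        "uid": uid,
        "cn": f"{record['given_name']} {record['surname']}",
        "mail": f"{uid}@example.com",
        "departmentNumber": record["department"],
        "employeeNumber": record["employee_id"],
    }

print(to_directory_entry(hr_record))
```

The business questions (which roles, which approvals) sit around this transformation, but the transformation itself is just repeatable data plumbing.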

Like with any large-scale project, it's often the smallest footprints that make the biggest impact.  In the case of IAM, taking small but smart data-management-style steps could be the most effective approach.

@SimonMoffatt




Sony ICO Fine: Damage Was Already Done

This week tech and games giant Sony was hit with a nifty £250k fine from the UK's Information Commissioner's Office (ICO).  This was in response to Sony being hacked back in April 2011, in an incident which exposed millions of customer records - including credit card details - for users of the PlayStation Network (PSN).  The ICO stated that Sony failed to act in accordance with the Data Protection Act, which, as a data controller, it must do, to certain standards of information protection.

The incident itself proved to be a logistical and PR nightmare, costing Sony an estimated $171m in lost revenue, legal and fix-up costs.  Whilst the ICO fine is insignificant compared to the actual cost of the damage done nearly two years ago, it acts as a timely reminder that every significant data breach by a data controller will be investigated, with any irregularity identified and appropriate accountability applied.

The ICO has the ability to fine organisations up to half a million pounds for data controller irregularities, which may seem like small change to corporate giants such as Sony.  However, the ICO has a broad range of organisations to keep in check, from public sector, education and health care providers right through to start-ups and corporate machines, and for many of them £500k is not insignificant.

The use of the ICO as a security enabler in this case obviously did little, as the breach occurred and the aftermath needed thorough investigation.  However, the damage to the Sony brand, customer dissatisfaction and the internal security recovery costs would not have been unknown.  All three could and should have been used as a bare-metal driver for implementing the appropriate information security steps, such as patching, auditing and management of database security best practices.

Whilst information security is often seen as a nice-to-have, it inevitably has budget constraints to work against, with business justification a constant balancing act to manage.  Whilst areas such as information security metrics and security RoI measures are used to help justify the tangible gains from a succinct information security policy, it is often the intangible damage that can occur from breaches and data loss which is greater.

Whilst intangible costs such as brand damage, confidence levels and user satisfaction are often hard to quantify, that isn't to say they shouldn't be taken into account when analysing appropriate risk mitigation strategies.

The case with Sony painfully highlights the financial and brand damage costs a significant data breach can have, and it should act as a powerful warning to organisations tempted to scale back or avoid implementing up-to-date and robust information security practices when it comes to personal or credit card information.

@SimonMoffatt



Security Analytics: Hype or Huge?

"Big Data" has been around for a while and many organisations are forging ahead with Hadoop deployments or looking at NoSQL database models such as the opensource MongoDB, to allow for the processing of vast logistical, marketing or consumer lead data sources.  Infosec is no stranger to a big approach to data gathering and analytics.  SIEM (security information and event monitoring) solutions have long since been focused on centralizing vast amounts of application and network device log data in order to provide a fast repository where known signatures can applied.

Big & Fast

The SIEM vendor product differentiation approach has often been focused on capacity and speed.  Nitro (McAfee's SIEM product) prides itself on its supremely fast Ada-written database.  HP's ArcSight product is all about device and platform integration and scalability.  The use of SIEM is symptomatic of the use of IT in general - the focus on automation of existing problems, via integration and centralization.  The drivers behind these are pretty simple: there is a cost benefit and tangible Return on Investment in automating something in the long term (staff can swap out to more complex, value-driven projects, and there's a faster turnaround of existing problems), whereas centralization often provides simpler infrastructures to support, maintain and optimize.

The Knowns, Unknowns and Known Unknowns of Security

I don't want to take too much inspiration from Donald Rumsfeld's confusing path of known unknowns, but there is a valid point that, when it comes to protection in any aspect of life, knowing what you're protecting and, more importantly, who or what you are protecting it from, is incredibly important.  SIEM products are incredibly useful at helping to find known issues - for example, a login attempt failing 3 times on a particular application, or traffic heading to a blacklisted IP address.  All characteristics have a known set of values, which help to build up a query.  This can develop into a catalog of known queries (aka signatures) which can be applied to your dataset.  The larger the dataset, the more bad stuff you hope to capture.  This is where the three S's of SIEM come in - the sphere, scope and speed of analysis.  Deployments want huge datasets, connected to numerous differing sources of information, with the ability to very quickly run a known signature against the data in order to find a match.  The focus is on real-time (or near-time) analysis using a helicopter-in approach.  Can this approach be extended further?  A pure big-data style approach for security?  How can we start to use that vast dataset to look for the unknowns?
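As a trivial sketch of what such "known queries" might look like in code, the snippet below flags any user with three or more failed logins and any traffic heading to a blacklisted IP address.  The event format, field names and thresholds are assumptions for illustration only, not any vendor's signature language.

```python
# Minimal sketch of applying 'known' signatures to a set of log events.
# Event shape, field names and thresholds are illustrative assumptions.

from collections import Counter

events = [
    {"user": "jdoe",   "action": "login", "result": "fail", "dest_ip": "10.0.0.5"},
    {"user": "jdoe",   "action": "login", "result": "fail", "dest_ip": "10.0.0.5"},
    {"user": "jdoe",   "action": "login", "result": "fail", "dest_ip": "10.0.0.5"},
    {"user": "asmith", "action": "http",  "result": "ok",   "dest_ip": "203.0.113.9"},
]

BLACKLISTED_IPS = {"203.0.113.9"}   # known-bad destinations
FAILED_LOGIN_THRESHOLD = 3          # known signature: repeated failed logins

# Signature 1: repeated failed logins per user
failed = Counter(e["user"] for e in events
                 if e["action"] == "login" and e["result"] == "fail")
alerts = [f"{user}: {count} failed logins"
          for user, count in failed.items()
          if count >= FAILED_LOGIN_THRESHOLD]

# Signature 2: traffic to a blacklisted destination
alerts += [f"{e['user']}: traffic to blacklisted IP {e['dest_ip']}"
           for e in events if e["dest_ip"] in BLACKLISTED_IPS]

print(alerts)
```

Everything here relies on the values already being known; the interesting question is what happens when they aren't.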


Benefits to Security

The first area which seems to be gaining popularity is the marrying of SIEM activity data to identity and access management (IAM) data.  IAM knows about an individual's identity (who, where and possibly why) as well as that identity's capabilities (who has access to what?), but IAM doesn't know what that user has actually been doing with their access.  SIEM, on the other hand, knows exactly what has been going on (even without any signature analytics) but doesn't necessarily know by whom.  Start to map activity user IDs or IP addresses to real identities stored in an IAM solution and you suddenly have a much wider scope of analysis, and also a lot more context around what you're analyzing.  This can help with attempting to map out the 'unknowns', such as fraud and internal and external malicious attacks.
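A minimal sketch of that mapping, assuming an activity log keyed on an account ID and an IAM store keyed on the same ID, might look like the following.  The record shapes and values are hypothetical.

```python
# Hypothetical sketch: enriching SIEM activity records with IAM context
# by joining on the account ID.  Record shapes and values are assumptions.

activity = [
    {"account": "svc_web01", "src_ip": "192.0.2.10", "action": "db_export"},
    {"account": "jdoe",      "src_ip": "192.0.2.44", "action": "file_delete"},
]

iam_store = {
    "jdoe":      {"name": "Jane Doe", "role": "Contractor",
                  "entitlements": ["crm_read"]},
    "svc_web01": {"name": "Web service account", "role": "Service",
                  "entitlements": ["app_db_rw"]},
}

# Join the raw activity with the identity context it was missing
enriched = [
    {**event, **iam_store.get(event["account"], {"name": "unknown identity"})}
    for event in activity
]

for record in enriched:
    print(record)   # activity plus the who / role / entitlement context
```

Even this toy join shows the point: the activity now carries enough context to ask whether a contractor deleting files, or a service account exporting a database, is expected behaviour.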

Unknown Use-Cases

Managing the known attacks is probably an easier place to start.  This would involve understanding which metrics or signatures an organisation wants to focus on.  Again, this would be driven by a basic asset classification and risk management process: what do I need to protect, and what scenarios would result in those assets being threatened?  The approach from a security-analytics perspective is not to be focused on technical silos.  Try to see security originating and terminating across a range of business and technical objects.  If a malicious destination IP address is found in a TCP packet picked up via the firewall logs in the SIEM environment, that packet has originated somewhere.  What internal host device maps to the source IP address?  What operating system is that host running?  What common vulnerabilities does that device have?  Who is using that device?  What is their employee ID, job title or role?  Are they a contractor or a permanent member of staff?  Which systems are they using?  Within those systems, what access do they have, was that access approved, and what data are they exposed to, and so on?  Suddenly the picture becomes more complex, but also more insightful, especially when attempting to identify the true root cause.
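That chain of questions could, in a simplified and purely hypothetical form, be expressed as a series of lookups hung off the original firewall event.  Every data source, field name and value below is invented for the sketch; real deployments would pull from a CMDB, vulnerability scanner and HR directory.

```python
# Purely illustrative enrichment chain from a firewall event to business context.
# All data sources, field names and values are invented for the sketch.

firewall_event = {"src_ip": "10.1.2.3", "dst_ip": "198.51.100.7"}  # dst is blacklisted

asset_inventory = {   # internal host data (CMDB-style lookup)
    "10.1.2.3": {"hostname": "lon-lap-042", "os": "Windows 7 SP1",
                 "known_vulns": ["unpatched browser plugin"], "owner": "jdoe"},
}

hr_directory = {      # identity and role context
    "jdoe": {"job_title": "Finance Analyst", "employment": "Contractor",
             "systems": ["ERP", "CRM"]},
}

# Walk the chain: packet -> host -> owner -> role and access
host = asset_inventory.get(firewall_event["src_ip"], {})
person = hr_directory.get(host.get("owner", ""), {})

root_cause_picture = {**firewall_event, **host, **person}
print(root_cause_picture)   # one correlated record spanning network, host and identity
```

The output is a single correlated record, which is exactly the kind of "security big data" the next paragraph refers to.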

This complex chain of correlated "security big data" can be used in a number of ways, from post-incident analysis and trend analytics through to the mapping of internal data to external threat intelligence.

Big data is here to stay and security analytics just needs to figure out the best way to use it.

@SimonMoffatt


Protection Without Detection

I read an article this week by the guys at Securosis that referred to a study on anti-virus testing.  I'm not going to comment on the contents of the article, but I loved the title of the blog, which I've subtly used for inspiration here.  The concept of protection without detection - just think on that for a second.  It's a mightily powerful place to be.  It's also a position we generally see applied to the 'real world' too.  Not that information security isn't the real world, of course.

You take prescribed medicine or wash your hands with antibacterial gel without knowing the names, consequences or impact of the bacteria you have killed.  You lock your luggage with a combination lock and are not aware, at the other end of the flight, of who has attempted to tamper with, open or get into your bag.  Your salary gets paid into the bank every month, at which time the bank can invest that cash, lend it to other people and so on; you aren't really concerned about the details of those transactions, as your salary will always be available for you to withdraw (unless of course the bank defaults...).  Your ISP could well be stopping thousands of cyber attacks a day before you see the handful of attacks on the multi-function router in your front room.  Ok, the last example is slightly off-piste, but protection without detection, or even protection-by-default, is a nice position to be in.

It removes the concept of security being a managed outcome.  It stops security being a cost, an effort-laden piece of work, a distraction from the 'real world' of living, going on holiday or safely browsing the internet.  Isn't that how security should be?

As consultants, technologists and engineers, we sometimes fail to see things through the eyes of the normal subscriber and end user.  When most of us buy a car we are concerned about mpg, reliability, safety and performance.  We don't generally want to speak directly with the mechanic, designer or component builder about the injection system, the carbon mix of the brake pads or the improvements made to the VANOS.  It's the end goal or deliverable that will directly impact our lives that we are really interested in.

Many end users and individuals will see security in this light.  They want to be (or at least feel) secure, without having to worry about implementation, detection and reaction.  They want security as a given proposition, perhaps guaranteed to a certain level.  In exchange, they may be prepared to pay a sum of cash or put up with a particular change in service or lifestyle, as long as a certain level of security can be guaranteed.

Security itself is a means to an end - the end being a protected lifestyle, a protected identity or a protected piece of data.  Promoting security as a default proposition makes it more attractive to those who are not prepared to struggle with the inconvenience, or the details of how competing security options deliver a given level of safety.

@SimonMoffatt