Infosec Professional 2012 Review


This is the 2012 Infosec Professional review, containing all the articles and interviews from the past 12 months.  It's been a fascinating year from an information security perspective, with some eye-watering data breaches, great conferences and innovative new products coming to market.  A thank you to all who spent time being interviewed and giving their comments on the industry - some great chats and thought-provoking comments.

Interviews

Conferences

Cyber Security

Management

Technology

General

The Obligatory 2013 Infosec Predictions Post

2012.  Been and gone, pretty much in the blink of an eye.  It lasted about as long as 2011, give or take, but one thing's for sure: information security became more of a big deal.  In my eyes, it always has been.  Security is a default in my opinion, in both my personal and professional life.  I fail safe when it comes to processes or technical changes.  I believe security is essential, not only at an individual team, system, person or organisation level, but also from an industry and society perspective.

The Year That's Been

The biggest takeaway for me was that non-security people started to take security seriously.  Governments got involved with information security in a big way.  The US had several issues with SOPA, the Stop Online Piracy Act, and then turned its attention to cyber war, with several policy discussions and a hardening of attitude towards the likes of China and Iran from a cyber security standpoint.  October saw the release of a damning report against Chinese network component provider Huawei, indicating the organisation posed a significant threat to the US from an intelligence gathering and supply chain disruption perspective.

The UK got involved too, announcing an investment of £650 million, to be spent over four years on cyber security research in partnership with some of the UK's top universities.

'Big Data' again grabbed the headlines at most of the vendor trade shows, with products focusing on data aggregation and advanced intelligence and analytics.  Information-centric security response has become a talking point, with the focus on centralised SIEM and logging solutions being combined with identity and behaviour profiling systems in order to create a more contextual view of potential threats.  The concept is interesting, but again, reactive.  Organisations are generating vast amounts of data across all pillars, not just security, and finding even the smallest crumb of competitive advantage within the data mountain is now seen as the holy grail.

From a consumer perspective, the topic which consistently caught my attention was the rise of mobile malware, especially concerning smartphones on the Android operating system.  The significant rise in Android handsets simply means an attacker has a greater potential revenue pool to tap into if a malware app were successful.  The rise of dialers, texters and spambots landing on Android devices seems to be an expected tidal wave in the coming months.

So What's Ahead?

I'm not one for big predictions at all.  Technology in general evolves so quickly that 12 weeks is an age when it comes to new ideas, iterative development and market changes - and security is no different.  However, the main areas I will personally be following with interest will be BYOD/BYOA, personnel, preemptive security and social intelligence.

BYOD / BYOA

Bring Your Own Device is a bit 2009, but is now starting to infiltrate many organisations' infosec plans, with several on a version 2.0 implementation strategy.  The sheer rise in consumer ownership of the laptops-in-your-hand style of phones makes leveraging their capability a cost effective and beneficial internal marketing strategy for many companies.  As more and more employees shout for the use of iPad-like applications and user interfaces, organisations ultimately have to listen.  The biggest concern is obviously security.  BYOA (bring your own application) is a variation on a theme, and I will be looking to see how organisations implement approaches surrounding personal and business data separation, the development and distribution of internally built apps, and the logistical and legal implications.

Security Personnel Shortages

2012 saw many independent and not-for-profit research papers released on the continual shortage of information security professionals.  The reports indicated that market demand will create at least 2 million more jobs in the infosec industry.  The upward trend is seemingly being driven by more complex architectures such as cloud adoption and BYOD, as well as an increasing focus on compliance.  It will be interesting to see whether there is in fact a shortage of good quality information security professionals, or simply issues within the hiring process, where organisations are unable to articulate and map the skills they require.  The salary trends in both the US and Europe will be interesting reading, as will the number of qualified security professionals, especially covering the defaults such as CISSP, CISM, CISA and CEH.

Preemptive Security

Preemptive security has always been a big interest area for me.  Many products in the market today are focused on the reactive.  Analysis tools, post-incident investigation and even areas that look to stop the bad stuff from happening could be deemed too reactionary.  I have always argued for a longer term shift towards security that is embedded, preemptive and on by default.  Areas such as security-by-default operating systems, as recently announced by Kaspersky, or white-listing, push security to an implicit, default position.  Instead of trying to develop an infinite number of signatures to stop a piece of malware or an insider attack pattern, stop everything unless it's known to be good.  Windows 8, for example, in its attempts at boosting security, includes a boot-loader feature which stops the OS from loading if tampering has been identified through the use of file hashing.
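The white-listing idea can be sketched in a few lines of Python.  This is a toy illustration with a hypothetical allow-list; a real implementation would enforce the check at the OS or boot-loader level rather than in application code:

```python
import hashlib

# Hypothetical allow-list of SHA-256 digests of known-good binaries.
# (This example entry happens to be the digest of the bytes b"hello".)
ALLOWED_HASHES = {
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def is_execution_allowed(path):
    """Default-deny: permit a file only if its hash appears on the allow-list."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in ALLOWED_HASHES
```

The point is the inversion: there is no signature database of bad things to keep current, because anything not explicitly trusted is blocked.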

Social Intelligence & Data Aggregation

Back in September, Google acquired anti-malware start-up VirusTotal.  It didn't seem to set the airwaves fluttering, but it caught my eye for several reasons.  VirusTotal is an aggregation system for file and URL scanning.  It sits in front of several of the top anti-virus providers and provides a free service, either via HTTP or an API, so you can either scan a file natively, or ping over a hash and check whether that file or URL has been involved in any skirmishes.  Not very revolutionary, but the focus on aggregation and as-a-service is a powerful notion.  Price comparison sites use a similar approach (air tickets, electronics, insurance) and the application of this approach to more security related arenas is welcome, especially with a general focus on big-is-better (aka big data) and how processing vast amounts of alerts/vulnerabilities/signatures is key.
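The aggregation idea - one submission fanned out to many engines, with the verdicts combined - can be sketched like this.  This is a toy model with made-up local "engines" and signature sets, not the actual VirusTotal API:

```python
import hashlib

# Toy per-engine databases of known-bad SHA-256 digests (entirely made up).
SAMPLE_BAD = hashlib.sha256(b"not-really-malware").hexdigest()
SCANNER_DBS = {
    "engine_a": {SAMPLE_BAD},
    "engine_b": {SAMPLE_BAD},
    "engine_c": set(),  # this engine has no signature for the sample
}

def aggregate_verdict(payload):
    """Hash the payload once, check every engine, and summarise the hits."""
    digest = hashlib.sha256(payload).hexdigest()
    hits = {name: digest in bad_hashes for name, bad_hashes in SCANNER_DBS.items()}
    return {
        "sha256": digest,
        "positives": sum(hits.values()),
        "total": len(hits),
        "engines": hits,
    }
```

Note that only the hash travels, not the file - which is exactly why the as-a-service lookup model scales so cheaply.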

@SimonMoffatt

Do Better Technical Controls Increase People Focused Attacks?


Technical controls are often the default security response for many organisations.  When I refer to technical controls, there is obviously a people element, from a design and implementation perspective, but ultimately the control is focused on a piece of hardware or software.  For example, cryptographic algorithms have continued to evolve over the last 40 years, to levels which make them computationally secure and usable on a wide scale without major concern.  PKI and other crypto infrastructures are often too focused on the algorithms, hardware security module usage and technical touch points, rather than, for example, the people-related processes and awareness.  It is all very well having an industry standard algorithm, but that becomes less useful if a user doesn't protect the un-encrypted payload when it's at rest, or allows it to be stored in temporary memory, for example.

Think casually of the default security controls for many organisations and most are in fact software or hardware related: antivirus, firewalls, intrusion detection systems, encryption, data loss prevention systems or security information and event monitoring solutions.  The focus is on faster, stronger or cheaper software or hardware technology.

People as an attack vector

People play a critical role in the security landscape of an organisation.  From those working on design and implementation under a chief information security officer or security ops team, right through to non-IT related individuals, all can be seen as a potential attack vector and therefore a threat to an organisation's information assets.
System accounts are created for individuals.  Staff have physical security badges and proximity cards.  Audit trails are linked to real people (or should be).

More than one way to skin a cat

The last 24 months have seen a significant rise in the number of external or cyber related attacks.  Whether these attacks have been advanced persistent threats using advanced evasion techniques, or simple “hacktivist” style approaches, they would undoubtedly have utilised an internal account to gain unauthorised access.  That account is likely to have already existed, have permissions (or enough to start a privilege escalation process) and might also be assigned to a real person, as opposed to a service or system account.
However, to gain access to an initial password, a hacker will always choose the simplest and most cost effective (from a time and money perspective) method of entry.  If a user's complex password or passphrase is hashed using a salt and an algorithm that is computationally secure - resulting in, say, 400 years of brute force protection - why bother attempting to crack it, if you can use more subtle methods?
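As a rough sketch of what "salted and computationally secure" looks like in practice, here is a PBKDF2 example in Python.  The iteration count and parameter choices are illustrative, not a recommendation:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow, to raise the cost of brute force

def hash_password(password, salt=None):
    """Return (salt, digest) for a password using salted PBKDF2-HMAC-SHA256."""
    salt = salt or os.urandom(16)  # a fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected_digest):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected_digest)
```

The per-user salt defeats precomputed rainbow tables and the high iteration count makes each guess expensive - which is precisely why an attacker would rather phone the help desk than fire up a cracking rig.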

Increase in social engineering

People are undoubtedly both the biggest threat and the biggest asset to an organisation's security position.  Social engineering can be seen as a more direct approach to exposing real security assets such as passwords, processes, keys and so on.  Via subtle manipulation, carefully planned framing and scenario attacks, through to friending and spear phishing attacks, people are increasingly becoming the main target, as technology is seen to be becoming more secure and more expensive to crack.

Information Security: Why Bother?


I have heard this sentiment, perhaps not put quite as bluntly as that, on several occasions over the last few years when working with clients and engineers on security related projects.  My role would have been to help embed a particular piece of security software, or introduce a piece of consultancy or business process, which would help improve the organisation's security posture.

The question, often raised as a bargaining tool, is usually focused on: 'well, I understand what you propose and I know it will increase the security of scenario X, but why should I do it?'.  In honesty, it is a good question.  Organisations have finite budgets which must cover all of IT and related services, and it is a fair objective to have to show and prove, via tangible or intangible RoI, that a piece of software or consultancy will have a beneficial impact on the organisation as a whole.

Justification and SRoI

Return on investment (RoI), or security return on investment (SRoI), is clearly a useful tool for proving that a particular security related project will benefit an organisation.  An organisation will want to know that a project will break even quickly, before even starting to look at service and software providers to help implement it.  During the business case and feasibility study phase, a basic high level SRoI can be used to see if initiating the project is actually worthwhile.

The main drivers for many security related initiatives have often been related to external factors. I refer to these factors as external, as I am referring to factors that are generally reactionary or not originating from the overall strategy of the business. These factors could include things like compliance requirements, responses to previous security attacks or data breaches. If these factors didn’t exist, would those security projects and budgets be allocated?

Security as a default

Unfortunately, the answer may be no, hence the thoughts prompted by this article's title.  Security is often not seen as essential to the business strategy, whether from a delivery, efficiency or cost savings perspective.  It is something the organisation often feels it has to do.  “If we don’t sort the access control process out, we’ll get fined”.  “If we get hacked again, and lose more customer records, our reputation will be unrecoverable”.  Sound familiar?

Security as a default option is probably some way off the agenda for many enterprise IT strategists.  The fail-safe option is costly, complex and evolving.  The emergence of the CISO role is a great step forward in bringing security awareness to the overall business strategy.  Whilst currently that role is really focused on completing the 'must-have' security practices, over time it may evolve to allow security to become a default option: default within the software development lifecycle, new business processes, employee attitudes and so on.

The key to making this happen will be a careful balance of showing the tangible and non-tangible benefits of a better security posture, without restricting business or employee agility.

@SimonMoffatt

Preventative -v- Detective Security

There's an Italian proverb which reads 'vivere da malato per morire sano' - living like an invalid to die healthy.  Whilst that is one lifestyle extreme, looking after your body is generally seen as a positive if you want to live a long and healthy life.  Prevention is indeed generally seen as being better than the cure.  The same concept applied to information systems can produce some interesting results.

From a non-security perspective, I would say most management approaches and project budgets are focused on the reactive.  IT has historically not always been seen as an efficiency provider for the business, with budget often only being assigned when it's acknowledged that the business front line would be negatively impacted if a system, project or team were not present.  From a security perspective, I think reactionary policy is still deep in the mindset too.


Reactionary Security

When you casually think of information security tools and products, how many are naturally related to post-incident work or reacting?  Security Information and Event Management (SIEM) and logging tools are generally post-incident: if the event has been logged, it has surely already occurred.  File Integrity Monitoring (FIM) is another post-incident approach.  Anti-virus and anti-malware software could arguably be reactive too, as you are checking signatures for a known attack, so if an alert is triggered the malicious software has already arrived.  The flip side to something like anti-virus is that although something malicious has been spotted, you are preventing the real impact which would occur if the malware were left to spread.  Identity and Access Management (IAM), however, could be deemed purely proactive, as the process attempts to restrict access before an issue can occur, through either malicious or non-malicious means.
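The FIM approach reduces to "record a baseline, re-hash later, report the differences" - which is exactly why it is inherently post-incident: by the time the hashes disagree, the change has happened.  A minimal sketch, with a hypothetical file list and no scheduling or alerting:

```python
import hashlib
import os

def _file_hash(path):
    """SHA-256 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_baseline(paths):
    """Record the current digest of each monitored file."""
    return {path: _file_hash(path) for path in paths}

def detect_changes(baseline):
    """Re-hash each file; report anything modified or missing since the baseline."""
    changed = []
    for path, expected in baseline.items():
        if not os.path.exists(path) or _file_hash(path) != expected:
            changed.append(path)
    return changed
```

A real FIM product would add scheduled scans, tamper-resistant baseline storage and alerting, but the detection core is just this comparison.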

Ethical hacking and penetration testing is another more proactive field but, often, these services are not engaged until after an organisation or application has been attacked and breached.  Budget release, especially for cyber security related technologies, is often easier after an organisation has been attacked.

Moving to Proactive

Security has several issues from a proactive implementation perspective.  Like anything, a detailed return on investment, including both tangible and non-tangible benefits, is required in order to sanction a project which won't necessarily deliver something immediately.  Proactive security is more of a mindset and long term strategy, which can often be undermined if an organisation is then attacked after implementing a more proactive approach.

The implicit embedding of security in all software, projects and processes is often key to shifting to a more proactive standpoint.  This can be difficult at several levels.  Developers operating in the software development life cycle are often more focused on time to delivery and software quality, with approaches such as Agile and eXtreme Programming (XP) not necessarily making security a high priority.  Security can often be seen to slow down the development process and take attention away from use cases the client wants completed.

From a business process perspective, security can often be seen as inhibitive or restrictive.  Again, time is a factor, but also, non-technical personnel are quite rightly more focused on their individual business use cases: delivering products, realising revenue opportunities and keeping customers happy.  Unless security is silently embedded into a process, it too can be seen as time consuming and non-essential.  Until, of course, a breach or attack occurs.

Security awareness is often a key part of the progress towards a more proactive approach.  Awareness is needed not only among everyday non-technical personnel, via regular online training and workshops, but also at board level.  Security metrics can be used to help promote the idea that security up front is often more cost effective and business efficient than spending thousands on post-incident consultancy and investigative products.






Cyber Security Part V - Critical Infrastructure

The final part in the cyber security series will focus on the issues critical infrastructure environments face.  Supervisory Control and Data Acquisition (SCADA) systems and Industrial Control Systems (ICS) are two of the standard environments that can constitute a critical environment.  Whilst many financial services environments can be described as critical, critical infrastructure is more focused on the key assets described by a government as being essential to the standard functioning of society and the economy.  This would include key utilities such as electricity and water supply, public health institutions and national security groups such as policing and the military.

In recent years these environments have been subject to specific and prolonged attacks, exposing long standing vulnerabilities.

Difference of priorities: CIA to AIC

The standard information security triad consists of confidentiality, integrity and availability.  The priorities for many business information systems will follow the CIA approach in that order.  Confidentiality is still the top priority, with things like access management, network perimeter security and data loss prevention strategies still the biggest budget grabbers.  The main driver behind such decisions is often the protection of intellectual property, client records or monetary transactions.  The output of many service related organisations obviously takes on a more intangible nature, placing a greater reliance on the digital management, storage and delivery of the processes and components that make that organisation work.

From a critical infrastructure perspective, I would argue the priorities with regards to the security triad alter, to focus more on availability, with integrity and confidentiality being less important.  An electrical generation plant has one main focus: generate and distribute electricity.  A hospital has one priority: keep people alive and improve their health.  These types of priorities, whilst relying on information systems substantially, are often managed in a way that makes their delivery more important than the component systems involved.

This difference in attitude towards how security policies are implemented can have a significant impact on vulnerability and exploit management.

Vulnerabilities - nature or nurture?

Vulnerability management from a consumer or enterprise perspective is often applied via a mixture of preventative and detective controls.  Preventative control comes in the form of patching and updates, in an attempt to limit the window of opportunity from things like zero-day attacks.  Detective defence comes in the form of anti-virus and log management systems, which help to minimise impact and identify where and when a vulnerability was exploited.  These basic steps, often taken for granted in enterprise protection, are not always available within critical infrastructure environments.

Critical infrastructure is often built on top of legacy systems using outdated operating systems and applications.  These environments often fail to be patched due to the lack of downtime or permitted out-of-hours work.  ICS and energy generation systems generally don't have a 'downtime' period, as they work 24 x 7 x 365.  Outage is for essential maintenance only, and preventative patching won't necessarily qualify as an essential outage.  Given the age and heterogeneity of such systems, a greater focus on patch management would seem natural.  Many critical infrastructure environments are also relatively mature in comparison to modern digital businesses.  Mechanisation of industrial and energy related tasks is well over a century old, with computerisation coming only in the last 35 years.  This maturity has often resulted in cultural and personnel gaps when it comes to information security.

Basic security eroded

Some of the existing security policies implemented in critical infrastructure environments are now starting to erode.  The basic, but quite powerful, preventative measure of using air gapped networks to separate key systems from the administrative side of the organisation is being lost.  The need for greater management information, reporting and analytical systems has led to cross network pollution.  The low level programmable logic controllers (PLCs), used for single purpose automation of electromechanical tasks, are now being exposed to the potential of the public network.  The connection of desktop and laptop devices to previously secured networks has made the risk of infection from internet related malware far higher.


Recent attacks and a change in culture

The two major exploits focused specifically on critical infrastructure related environments in the last couple of years have probably been the Stuxnet and Duqu attacks.  Whilst the motives for these attacks are maybe different to the standard monetary or credibility drivers for malware, they illuminated the potential for mass disruption.  As with any security attack, post-incident awareness and increased focus often result, with several new attempts at securing critical infrastructure now becoming popular.  There are several government-led and not-for-profit organisations that have contributed to security frameworks for critical environments.  Kaspersky Labs also recently announced plans to develop a new built-from-the-ground-up secure operating system, with a focus on critical environments.

Whilst previously focused only on the availability and delivery of key services and products, critical infrastructure environments now have to manage the increasing threat posed by cyber attacks and malware exposure.

@SimonMoffatt

Protect Information Not Data

In an ideal world, should we not be protecting information instead of data?  This is an interesting concept.  We back up data.  We secure data.  We create and manage access control lists that allow a subject access to an object.  The object is generally classified as data.  We talk about 'big data', moving data to the cloud and so on.  But is the data component actually that important?  Obviously certain individual pieces of data are very important.  Certain documents, files and so on have significant importance and exposure levels.  But on the whole, is an organisation run on data or information?

I guess we need to define both of the key terms here.  What is 'data' and what is 'information' and more importantly what are the differences?

What is 'data'?

A basic technical definition would be that data is the low level bits and bytes of an object.  This object on its own comprises basic, raw and unorganised facts.  The word has a Latin equivalent, 'datum', meaning 'that which is given'.  As humans - or managers, analysts and so on - we need to interpret the data for it to become useful.  For example, backing up an email file such as a .pst is pretty useless in providing email reading and writing capabilities without being able to interpret that file via an email client.  The same can be said of data.  Without the interdependence with other data sources and analytical tools and frameworks, data has limited use.  If you were given an exam score of 65, that 65 on its own is pretty useless, without knowing the pass mark, maximum score, comparative scores, averages and so on.

So what is 'information' then?

I'd describe information as data that has been interpreted, organised and given some context.  Once the context has been identified and applied to a singular piece of data, it can then be communicated and reported to others, making it useful information.  That information in turn can be used to develop intelligence over time.  An organisation as a whole, whether a manufacturing or service based company, will really function on information.  Information creation starts with interpreting the raw data, information management takes over via analysis and collaboration, and it ultimately ends with information dissemination, either internally or to clients, with product messages delivered.
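The exam-score example makes the distinction concrete: the raw 65 is data, and only context turns it into information.  A toy sketch (the function and field names are mine, purely for illustration):

```python
def interpret_score(raw_score, pass_mark, max_score, cohort_scores):
    """Add context to a raw datum, turning it into reportable information."""
    cohort_mean = sum(cohort_scores) / len(cohort_scores)
    return {
        "percentage": round(100.0 * raw_score / max_score, 1),
        "passed": raw_score >= pass_mark,
        "above_cohort_average": raw_score > cohort_mean,
    }
```

The same 65 could mean a comfortable pass or a serious failure; nothing in the datum itself tells you which, only the surrounding context does.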

The point of an information management system

The information management system (IMS) is ultimately the mechanics between the raw data and something useful at the end.  An IMS will take an input, perform some processing and deliver an output.  In addition you'll probably have some control and feedback components too.  An IMS will also contain a couple of important ingredients: people and processes.  Whilst many organisations would love to automate as many people related tasks as possible, raw humans still have a pretty important role to play in any information chain.  They can add adaptability and rationality to decision making - as well as the opposite in some cases too.  But human knowledge is still a huge part of an organisation's successful output.

Protecting the entire information chain

This brings me back to the main point.  Don't just protect the individual data component of the information chain.  Without the other ingredients, including people and processes, the data itself can have limited use.  Backup and recovery techniques should really look to cover the people and process related aspects, even if those components are not as easily committed to tape as a database.  From a security perspective, an organisation should be protected at multiple levels, which would also include the processing and output components.  Processing could include collaboration tools and techniques, analysis and reporting too.  Output is an area which is often protected from the outside in - i.e. let's stop people seeing stuff we don't want them to see.  It should also be focused on internally, to make sure information going outbound is sufficiently restricted, managed and recoverable.





Infosec Product Release Review - 16th Nov

An overview of recently released information security products, services, frameworks and policies from the last 7 days:


NETGEAR Debuts More Powerful Version Of Popular VDSL

13 Nov 2012
In addition, like other members of the ProSecure UTM family of security appliances, ... As the second entry in the NETGEAR ProSecure UTM S product line, the UTM25S ... Inc. an Information Technology services company based in New York.

Cloud Security Alliance Releases Security Guidance 1.0

14 Nov 2012
The Cloud Security Alliance (CSA) has released version 1.0 of the "Security ... Additional information about Trend Micro Incorporated and the products and ...


Who Do You Trust?

This is a tough question, whether it's focused on technology or real life.  'Who can you trust?' is often an easier angle to take, but ultimately that is a precursor to the main scene.  Peeling the onion a little, you can focus on bite sized chunks and respond with, 'trust with what?'.  If it's my life then the picture changes substantially.  I might trust Google with my search engine results, but perhaps not with diagnosing a disease.

The context will obviously help to determine the scope of who and what are trusted, but the decision making process will generally take the same route.  We ultimately start off with a blank canvas of pre-decision making, slightly marked by some bias and framing, before ending up with a person, product or service that we then utilise to perform an action we cannot perform ourselves.  Once that third party has been chosen, we often fail to perform the checks again, placing our trust in them implicitly and explicitly.  This is when issues can arise.

Skyfall - Cyber War Becomes Cool

I went to see James Bond's 23rd outing in Skyfall yesterday - for the second time this week, I admit; I do love a bit of Bond.  The film is great - go and see it! - and intertwines the new world action film with all the old world British spy touches that have made Bond the longest running movie franchise of all time.

Gone were the gimmicky gadgets of old, with megalomaniacs trying to run the world, destroy the world or recreate the world, and in came a cyber terrorist with a personal vengeance.  Technology has always played a part in Bond.  The British secret service, Bletchley Park and GCHQ have all had their fair share of computer-related innovations, from encryption through to surveillance, so seeing a control room full of screens 'processing' unintelligible code and instructions is nothing new.  However, this time around it was the concept of cyber war that was more prominent, as opposed to the technology.

Cyber Security Part IV - Consumer Protection

This is the 4th part of the cyber security series I started, and I want to focus on the consumer a little more.  Cyber attacks have been well documented in their ability to damage large organisations, government websites and critical infrastructure.  However, there is still a large volume of non-technical home and mobile users who are ending up as the victims of online attacks and identity theft.

"The use of more portable devices, including smart phones, has increased user convenience, but also opened up a can of worms when it comes to security.  Smartphones are not really phones.  They're computers, that happen to make calls"

Cash, Credit Cards, Convenience and Security

I was recently asked by Microsoft to comment on the concept of 'User Convenience -v- Security' from a software perspective.  Security is often seen as restrictive or inhibitive, so it is generally not the first thing many (non-technical) users think about or implement.  Also, from an SDLC perspective, security is often seen as an add-on and left to the QA and audit teams to implement before an application or piece of software is released into the wild.  In both cases convenience takes hold, reducing security to a post-incident action.

Convenience Wins Out

The same can be applied to many things.  Convenience versus safety is another angle.  How many of us don't bother with the seat belt on a roller coaster, flight or car journey if it's too tight and uncomfortable?  If it's restrictive we avoid it, even though in those examples, our lives could be at stake.  A broader view could look at the market for insurance.  The inconvenience component is the cost up front.  This could restrict us from spending our cash on something more instant and rewarding, instead of the potential for a payout in the future when things don't quite go to plan.

Social Networking Security Management

Like it or loathe it, social networking is omnipresent.  From the youthful party picture posting, to professional networking and virtual discussion boards, your online personality and data sharing can be both powerful and an exploitable vulnerability.

The usefulness of many social networking sites often increases the more of your personal information you make available.  In recent years this has seen criticism of the likes of both Facebook and Google+ over how they manage and make use of your personally identifiable information (PII).  Whilst there can be risks with publishing any personal data online, careful management and protection of such data makes social networking less risky and more powerful than ever before.

Cyber Security Part III - Enterprise Protection

This is the third part of the cyber security series (Part I, Part II), with this week focusing on enterprise protection.  Any device connected to the internet is open to attack, from highly complex botnets right through to an individual port scanning for online FTP or database servers.  Corporate networks are no stranger to being specifically targeted, or infected with malware that is delivered via the public network.
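That kind of opportunistic port scan is trivial to reproduce.  The sketch below (Python, with a placeholder host and ports) simply attempts a TCP connection to each port and reports whether anything answered:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connect; True if something is listening on the port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe a few ports commonly scanned by attackers (FTP, SSH, MySQL).
for port in (21, 22, 3306):
    state = "open" if is_port_open("127.0.0.1", port) else "closed"
    print(f"port {port}: {state}")
```

Defensively, the same check is useful for verifying that a firewall really is dropping traffic to ports you believe are closed.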

Attack Vectors and Entry Points


Firewall & Network Perimeter - Historically, enterprise security was often viewed with an 'us and them' mentality.  Everything on the internal LAN was safe, anything past the DMZ and on the internet was potentially bad.  The main attack vector in was through the corporate firewall and any other perimeter network entry points.  The firewall was seen as the ultimate protection mechanism and as long as desktops had anti-virus software installed, that was as much as many organisations needed to do.

Infosec Product Release Review - 26th Oct

Tech Centre - the weekly review of newly released information security products, services, frameworks and policies.

Software


LANDesk Raises the Bar with Release of Integrated Systems

24 Oct 2012
"Management Suite and Security Suite 9.5 were created in order to help solve the ... "With these products, our current and prospective customers now have the tools they ... This information includes power consumption per device, the health of ...

Objective releases Govt-standard DropBox

24 Oct 2012
While DropBox is a very popular information sharing service among consumers and ... According to product marketing manager Michael Warrilow, the service will be ... Government Information Security Manual up to and including “Protected”...

The Problem With Passwords (again, still)

Passw0rds!  The bane of most users' and sys-admins' lives.  I started talking about passwords earlier in the year, with the theme of 'the password's dead...long live the password'.  Obviously, the password isn't dead and is very much alive.  The story generally unfolds something like this:

  1. The infosec team create a corporate password policy that requires a password to have a minimum length, include a number, an upper-case character and a special character, and perhaps to have a minimum age and be historically unique
  2. A sys-admin or developer creates a function within an app/system/website to check newly created passwords for complexity, in line with the corporate password policy
  3. A user is created within a system / registers on a site
  4. A user is prompted to enter a new password for themselves, which must match the above policy
  5. If the policy is too complex, the user's initial password selection will generally be bounced for being too insecure
  6. The user iterates their password, adding numbers or additional characters until the password is accepted
  7. User convenience and satisfaction is probably reduced due to having to remember a large password
  8. The sys-admin believes the system is now relatively secure from hackers guessing passwords as everyone has a complex password
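The complexity check in step 2 might look something like the sketch below.  The exact rules (minimum length of 10, a digit, an upper-case and a special character, a history check) are illustrative assumptions, not any particular corporate policy:

```python
import re

def check_password(password: str, history: list, min_length: int = 10) -> list:
    """Return a list of policy violations; an empty list means the password passes."""
    problems = []
    if len(password) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if not re.search(r"\d", password):
        problems.append("no digit")
    if not re.search(r"[A-Z]", password):
        problems.append("no upper-case character")
    if not re.search(r"[^A-Za-z0-9]", password):
        problems.append("no special character")
    if password in history:
        problems.append("reused from history")
    return problems

print(check_password("password", []))        # fails several rules
print(check_password("Tr0ub4dor&3xyz", []))  # passes: []
```

Note that nothing here prevents step 6 above - 'Password1!' sails straight through, which is exactly the user-iteration problem the article describes.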

Cyber Security Part II - Botnets, APTs & AETs

This is the second of a five part series focusing on Cyber Security.  This article will examine some of the key terms and components that comprise a cyber attack in 2012.  I'll take a look at the individual 'lone wolf' style attacks, right through to the complex networks of robots, capable of distributing malware on a vast scale.  I'll also quickly examine the components of an Advanced Persistent Threat (APT) and the increasing rise of Advanced Evasion Techniques (AETs) being used by malware to avoid detection.

From Lone Wolf to Botnets

The Lone Wolf

In any walk of life the lone wolf is seen to be independent, agile and potentially unpredictable.  Whilst these characteristics are often difficult to defend against in a cyber security landscape, being an individual can have its limitations.  In the new dawn of the internet era (yes I know, what was that like?) in the early 90's, the individual hacker was often portrayed as glamorous and cool.  The script-kiddy style attacker was generally male, 18-23 years old and a self-badged nerd/geek/social outsider.  Their main motive for attacking online systems was simply prestige and credibility, driving for acceptance of their technical aptitude.

6 Steps to Selling Security to the Business

I spent a little time this week on two Twitter virtual discussions (#secchat, #hpprotect) covering security innovation and the like, where invariably the topic ended up focussing on how to basically promote or sell security into a business. This could be either from a vendor perspective, trying to promote new products or features, ultimately to make license revenue, or for the likes of internal security staff, attempting to justify business cases or budget for infosec related projects.

Kaspersky to Build Secure OS

Kaspersky recently confirmed the rumours that they are creating a new, built from scratch, secure operating system for the Industrial Control Systems (ICS) market.  Kaspersky argue that the well known issues with ICS, such as the Stuxnet, Duqu and Flame infections, have prompted them to re-evaluate the security of critical infrastructure type environments.  The conclusion was that a new, independently developed operating system, built using secure principles is the way forward.

The main issue with ICS environments is the fact they nearly always require 24 x 7 x 365 up time.  Not the usual five-nines availability of critical data storage or web apps.  These environments cannot be stopped.  Think oil pumps, water extraction systems, electricity production plants, gas transportation systems and so on.  It's not really a case of finishing at 6pm, doing a few hours of patching, updating and testing, and being ready for the 8am login rush.  Those sorts of outage windows simply don't exist.  This makes management of such environments difficult and has in the past led to un-patched legacy systems performing critical services, simply because the downtime to replace or improve them is too great.  Introduce the likes of Stuxnet into the equation and you can see the opportunity for malware is too great to ignore.


Cyber Security Part I - (Cyber) War on Terror

This is the first in a five part series covering cyber security.  Each Monday, Infosec Professional will focus on many of the key aspects of cyber security, from government-led strategic defences, right through to individual consumer level protection.  Any device that connects to the internet is now a potential target, with the motives becoming political, as control of the information highway becomes paramount.

US government security expert Richard A. Clarke, in his book Cyber War (May 2010), defines "cyberwarfare", as "actions by a nation-state to penetrate another nation's computers or networks for the purposes of causing damage or disruption".  This initial sentence is paraphrased straight from Wikipedia, but could just as well have come from a sci-fi movie of the mid 1980's.  Cyber war is no longer an imaginary concept, cocooned in the realms of laser gun protection and x-ray vision.  It's an everyday occurrence, impacting governments, corporate enterprise and individuals.

Security as a Service - Infosec the Cloud Way?

Last month Google acquired VirusTotal, an online virus and malware scanning tool.  VirusTotal has been around for about 8 years, and provides a simple and focused virus and URL scanning service.  They basically act as a service wrapper and aggregator for some 60 anti-virus engines and tools.  They then provide the ability for a file or URL to be scanned by the underlying engines, before returning a scan result from the various different partners.  This is a simple, yet powerful concept for several reasons.
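The wrapper-and-aggregator model is easy to sketch.  The engine functions below are entirely hypothetical stand-ins for real anti-virus engines - this is not VirusTotal's actual API, just the aggregation pattern of fanning a sample out to many scanners and reporting a detection ratio:

```python
# Hypothetical per-engine scanners: each takes a file's bytes and
# returns True if it considers the sample malicious.
def engine_a(sample: bytes) -> bool:
    return b"EICAR" in sample            # toy signature match

def engine_b(sample: bytes) -> bool:
    return b"!This program" in sample    # another toy signature

def aggregate_scan(sample: bytes, engines: dict) -> dict:
    """Run every engine over the sample and collate the verdicts."""
    verdicts = {name: fn(sample) for name, fn in engines.items()}
    hits = sum(verdicts.values())
    return {"verdicts": verdicts, "ratio": f"{hits}/{len(engines)}"}

result = aggregate_scan(b"X5O!...EICAR...", {"EngineA": engine_a, "EngineB": engine_b})
print(result["ratio"])  # 1/2 - one engine flagged the sample
```

The value of the aggregated result is that a single engine's miss is covered by the others, which is precisely what makes the service attractive to a search provider checking URLs.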

I'd imagine Google's main interest would be in the ability to scan a particular URL that is returned from a user's Google search, before they go ahead and click through it.  This would help Google to identify any malicious links, trojan destinations and so on, increasing their credibility and the safety of its users.  VirusTotal also provides various internet browser plugins, which would likely become an integral default part of the Chrome browser too.

Security Intelligence - Reactive -v- Proactive

The RSA Conference bandwagon rolled into London this week, which promises to bring some interesting sound bites from the big players in the security sector.  Yesterday's opening keynote speech from RSA's own Arthur Coviello focused on some of the key challenges organisations face from an information security perspective.  The lack of skilled personnel, shrinking security budgets and the difficulties of ever more complex risk management make attacks more difficult to identify and overcome.

Coviello called for more of an 'intelligence-driven' security model to help evolve the traditional security operations centre into something more analytical and proactive.  Whilst it enables an attack's source, flow and impact to be carefully understood and dissected, security intelligence could also be seen as just another level of reaction, albeit a more detailed one.

The Future of Cloud Based Identity?

This week I was fortunate to spend some time with Mike Schwartz, CEO and founder of Gluu, the leading open source and on-demand cloud identity management provider.  Gluu is an Austin based start-up, that leverages open standards such as OpenID Connect, SAML 2.0, Shibboleth, and SCIM to make achieving single sign-on (SSO) secure and easy.

How has the concept of online identity management and federation services changed in the last few years?

Mike: Several fundamental changes are converging to create the perfect storm of online identity: (1) Facebook Connect is bubbling up from the consumer space into the enterprise market, creating demand for instant connectivity based on user controlled decisions; (2) OpenID Connect is positioned to replace a plethora of other standards - SAML, OpenID versions 1 and 2, OAuth versions 1.0 and 1.1, WS-Fed and Information Cards; (3) there has been a proliferation of authentication technologies - username / password is not the only option any more, and in fact we are being presented with many more easy to use, and more secure alternatives; (4) email address has emerged as the definitive identifier for a person, and domain name the definitive identifier for an organization; (5) due to the proliferation of mobile and cloud apps, the use cases for online identity need to address not only the attributes or claims of a person, but of the device or client to which the person is connected.

IPv6 Security

IPv6 is the natural progression for internet addressing.  With IPv4 addresses limited to just over 4 billion, estimates have predicted a public address space shortage in months rather than years.  With over 7 billion people on the planet, it's easy to see why, especially as many in the western world use smartphones and tablets as well as standard laptops, resulting in an individual using more than one address simultaneously.

What is IPv6

Internet Protocol version 6 is seen as a direct replacement for Internet Protocol version 4, operating at the network layer of the OSI model (the internet layer in TCP/IP terms).  There are a few main differences between the two approaches, chiefly that IPv6 has a considerably larger pool of available addresses - around 340 undecillion (lots of zeros).  An IPv6 address is longer too, at 128 bits compared to the shorter 4-byte, 32-bit IPv4 address.  IPv6 can also derive a host identifier from the device's MAC (Media Access Control) address.
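The MAC-based host identifier mentioned above is typically built with the modified EUI-64 scheme: split the 48-bit MAC in half, insert 0xFFFE in the middle, and flip the universal/local bit.  A minimal sketch (the MAC address here is just an example):

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the modified EUI-64 interface identifier from a 48-bit MAC address:
    insert ff:fe in the middle and flip the universal/local bit of the first octet."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                           # flip the U/L bit
    full = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Collapse the 8 octets into 4 hex groups, dropping leading zeros.
    groups = [f"{(full[i] << 8) | full[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

# Example: the link-local address autoconfigured from a sample MAC.
iid = eui64_interface_id("00:1a:2b:3c:4d:5e")
print(f"fe80::{iid}")  # fe80::21a:2bff:fe3c:4d5e
```

Worth noting that many operating systems now prefer randomised 'privacy' interface identifiers, precisely because a MAC-derived one makes a device trackable as it moves between networks.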

Ransomware - Pay Up or Lose Your Files?

Ransomware has been around for years, but has seen a rapid rise into the mainstream in the last couple of months.  Ransomware is generally seen as a type of malware that restricts access to the computer or device it infects, not releasing control until some sort of monetary payment has been extracted.

The malware can generally operate at the boot or pre-OS level, encrypting the underlying files, photos and music that the user deems so important.  This encryption process is managed by the malware, with the contents not being decrypted until either a bank transfer, SMS or premium rate phone call is made to the malware operator.  Other basic ransomware payloads simply restrict access to the main interfaces of the operating system.  So instead of encrypting the contents, access to things like explorer.exe in Windows or the command line shell is prevented, making the machine practically useless.
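One crude defensive heuristic against this kind of bulk encryption is to watch for files whose byte entropy suddenly jumps: plain text sits well below 8 bits per byte, while ciphertext looks statistically random.  A minimal sketch, illustrative only:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: low for plain text, near 8 for ciphertext."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

plain = b"the quick brown fox jumps over the lazy dog " * 50
random_like = os.urandom(2048)   # stand-in for an encrypted file's contents

print(f"plain text : {shannon_entropy(plain):.2f} bits/byte")
print(f"ciphertext : {shannon_entropy(random_like):.2f} bits/byte")
```

A monitoring agent could compare a file's entropy before and after modification and alert when many files cross a high-entropy threshold in a short window, though compressed formats (zip, jpeg) also score high, so this is a heuristic rather than a detector.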

Why Information Security Metrics Are Important

"He uses statistics as a drunken man uses lampposts - for support rather than for illumination" ~ Andrew Lang

Metrics and statistics, whilst subtly different, are often seen as the accountant's yardstick and the pragmatist's whipping stick.  The use of metrics in IT has had a long and perhaps uneasy route.  Technicians want to implement, design and fix.  Managers and budget owners need to show value, deliver service and ultimately keep the customer, production line or CFO happy.  An efficient and sustainable business position is a meeting place between the two, where tangible (and intangible) metrics (not statistics) are important to both parties.

Why Use Metrics?


IT security has often been seen as a cost within the overall component of IT, which until very recently was also seen as a cost to the business.  IT was a necessary component granted, but organisations have historically not seen IT as a strategic part of the overall business delivery cycle.  It was never capable of driving efficiencies, saving money or being proactive in gaining and keeping customers.  That view has changed considerably and information security is now becoming the necessary component within IT.

Iran's Own Internet

The 'summer' break has been and gone and as the winter rains become a thing of unrelenting omnipresence, the main story that caught my eye was that of Iran building its own internal intranet.

The politics and propaganda behind such a move are far beyond the scope of an information security blog, but the idea raises some interesting concepts.

Firstly, there are a few basic drivers behind such a move.  Control and censorship is one.  Regardless of political motives, building a brand new network allows the creator a lot more control over the number and types of devices that are connected, and the information and data those devices share.  In a lot of regions where the internet is freely available, control and censorship is a big agenda item.

Mobile Security - Why You Should Care

Nearly all professional working people in the western world have access to a mobile phone.  These phones are generally not just phones.  They're portable laptops, with processing and storage capabilities greater than a desktop PC of 25 years ago, yet we treat them like toys that can easily be replaced.

With every pay-monthly contract sold (especially in the UK), an equivalent monthly insurance policy is sold too.  We're constantly reminded about the dangers of dropping the phone down the toilet, smashing the screen after inadvertently leaving the phone in your back pocket, or damaging the outer casing by not having the correct protective membrane.  For another £12 a month, you can have 'peace of mind' that you're protected.  Great.

But what about the stuff the phone is actually used for?  Does that get protected too?  What stuff, why should I care about protecting that?

Are Security Qualifications Important?

Over the years I, like many IT professionals, have amassed a fair number of qualifications.  Some vendor specific (MCSA, CNE, CCNA), some process related (PRINCE2, ITIL) and some security related (CISSP, CISA).  But in reality, has it been worthwhile pursuing them and have they made a difference to my career?

Well, there are a few ways to look at this.  Many people start out within IT either straight from college or university with a basic theoretical understanding of information systems or computer science principles.  Whilst this provides a basic understanding of some of the key technical and non-technical aspects of computing, I think it really acts to lay a foundation for how the person can pick up new information going forward, either through professional study or simply via on the job exposure.

When someone junior starts a new role, often their main aim is to get promoted or gain a pay rise.  This can happen in a few ways - either through longevity (simply working in a role for x-number of months/years will result in a pay rise) or through being good at the role you're in.  Now, being 'good' is entirely subjective, unless some decent objectives and attainment targets are placed in a personal development plan.  One thing 'good' can't argue against is an industry qualification that relates to your role.  If an individual is qualified as well as an industry 'outsider', there are two implications for gaining a pay rise.  One is that if the employer doesn't pay you the 'going rate', you are more likely to be able to leave and get employment elsewhere.  Secondly, if you do leave, the employer would find it difficult to replace you with someone with an industry qualification unless the salary increases.  The economics of salaries isn't really what I want to discuss, but ultimately, the chances of a pay rise will obviously go up.

On a practical aspect, does that industry qualification warrant the pay rise?  Will the qualification improve your ability to do the job, either through efficiency or via being able to complete more tasks and give more value?  This part becomes more subjective and depends largely on the qualification and how it relates to the everyday tasks.

From a security aspect there are several high-level umbrella style certifications that many employers ask for, or at least add onto a recommended nice-to-have list.  I am thinking CISSP, CISA, CISM and CEH.  In addition there are numerous domain specific qualifications for things like penetration testing and forensics, but many really assume that a strong basic understanding of all aspects of security has been gained, either from one of those 'big-4' qualifications or from 6+ years of on-the-job experience.

There are numerous self-study approaches to passing each of those 'big-4' qualifications, which in reality may help you pass a physical exam, but probably won't help with the real-world issues those exams are aimed at solving.  Whilst some of the technical aspects can be consumed - things like basic cryptography, risk assessment frameworks, network protocols and so on - the real-world scenarios that require those skills to be implemented are very rarely picked up.  Whilst something like the CISSP is generally described as being an inch deep and a mile wide (referring to the fact the content is extremely broad but not in much detail), this can certainly help with picking up new factual knowledge in a range of areas.

But does being a CISSP, for example, really help with managing information security in an organisation or giving strong industry-level advice to clients?  The answer is obviously subjective, and as many positions require a candidate to have passed the exam, it becomes a vicious circle.

I think the most important part is not necessarily obtaining the exam, but how the exam was obtained.  By this I refer to the training and preparation.  Some of the leading penetration testing exams, like for example the CPPT, have a much more hands-on approach to learning, with real-world simulations, labs and so on, taken over say 3-4 months.  This certainly has a few benefits.  Not only do you approach the learning in a pseudo real-world setting, but you also have several months to cover the vast amount of material, avoiding that dreaded boot camp scenario.

As information security, not to mention IT in general, is a relatively immature profession in comparison to something like law or accountancy, qualifications certainly help to standardise the broad range of approaches and learning that is available.  I think the major concern is how they are administered and viewed.  As with any qualification, without experience they can become a dangerous tool, one that only detailed interviewing and on-the-job monitoring can keep in check.

(Simon Moffatt)