Interview Series - Jo Stewart-Rattray, VP of ISACA

As part of the Infosec Professional interview series, we are lucky enough to have grabbed some time with Jo Stewart-Rattray, Director of Information Security at RSM Bird Cameron and International Vice President of the Information Systems Audit & Control Association.

Ed: Hi Jo and thank you for agreeing to the interview.  How has information security changed in the last 3 years (perceptions, threats, protection, etc.)?
Jo:  Information security is ever evolving; however, the last three years have seen an acceleration in the speed of events. There have been a greater number of attacks, in some cases on iconic brands. The rise of social media has given organisations new internal issues to consider, together with the move to the cloud and the potential jurisdictional issues that come with such a move.

What do you think are the main threats facing organisations in 2012?
Jo: Use of cloud providers, and indeed other providers, without proper due diligence and without appropriate service level agreements being in place. The big questions could be “Where is my data?” and “Who, under law, can access it?”


Are organisations ready to deal with those threats and what can they do to protect themselves?
Jo: Good research into the provider and the due diligence previously mentioned are extremely important. Of course organisations are able to deal with this sort of threat. It’s about an awareness of the risks involved and undertaking the appropriate treatment of such risks. Guidance on this is available at www.isaca.org/cloud.


What do you think are the main threats facing individuals in 2012?
Jo: Unbelievably, scams are still an issue for individuals. They become more and more sophisticated and less easy to identify. Privacy is another issue. How much is out there about you? Can someone recreate your identity? How much should you release to the world via social media and other outlets? Cyber bullying and cyber trashing are both issues as well. People tend to behave very differently online if they perceive there is a degree of anonymity.


Infosec has now become an independent profession, with job titles, budget and certifications. What challenges do infosec professionals face in 2012?
Jo: Some may face budget cuts and, potentially, job layoffs if the economy is affected by the European debt crisis. There are still organisations that see information security as a discretionary spend. Of course, the bad guys don’t stop just because the economy is less than booming.  On a more positive note, information security professionals must keep abreast of trends and ensure that their continuing professional education programme is in place. They should also look to certify if they have not already.


What are the key qualities that organisations look for when using the services of an infosec professional?
Jo: Certifications, experience and background are probably the three most important.

Which credential will be in hot demand for 2012?
Jo: Certified Information Security Manager (CISM) and Certified in Risk and Information Systems Control (CRISC) are certainly both growing. CISM was named a top certification in 2012 by the Information Security Media Group (ISMG) and CRISC has been earned by more than 16,000 professionals in its first two years.

Ed: Thanks to Jo for giving us her insight into the current trends in Information Security for 2012 and beyond.

Truth About Insider Threats

The 'insider' is the dude in the office.  He (or she) probably works in IT and looks and acts like a regular employee.  They are, however, probably a bigger risk to the organisation's corporate information than a hacker on the public internet.

An insider is generally seen as a trusted user of the network.  They have legitimate accounts and access on the corporate LAN to access, copy, modify and delete data without issue.  So why are they a threat?  We can define a threat as the potential exploitation of a vulnerability.  The vulnerability in this case could be the trust, means and motive of an employee to perform a potentially malicious act against the organisation's data and infrastructure.  The motive and intent part is optional to an extent, as a malicious act may not necessarily be intentional, but could simply stem from error or ignorance, for example the opening of a malware link.

Intent
Intent is a complex issue to discuss.  I think in the narrow sense there may be fewer individuals who will actively set out to perform a malicious act against a corporate network.  However, they do exist.  For example, an employee working out a notice period, a disgruntled promotion hopeful or an employee leaving to work for a competitor may all have some limited active motive to perform some sort of information discharge.

The intent, though, could be more subtle: the curiosity of a super user browsing data shares not relating to their line of work; the checking of pay or HR information because 'they can'; or users who don't want to follow desktop policy for things like screen savers, anti-virus or internet browser settings.  All, in a way, create a threat to the trusted network.

Identification
Managing, reducing or removing the threat of an insider attack can only be achieved once a correct understanding of the level and impact of the current threat has been established.  It's important to be able to effectively identify 'who has access to what' within an organisation and correctly certify existing corporate LAN access levels.  This first step is a common approach for many compliance initiatives such as Sarbanes-Oxley, PCI-DSS and components of ISMS frameworks such as ISO 27001.  Once existing access and users have been certified and any access misalignments and redundant accounts removed, it becomes easier to manage the remaining users and associated assets.
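As a minimal sketch of that certification step, the following Python cross-references a directory account export against an HR roster to flag orphan accounts; the file names and column layout are assumptions, not any particular product's format.

    # Sketch: flag accounts with no matching employee record ("orphans"),
    # prime candidates for removal during access certification.
    import csv

    def load_column(path, column):
        with open(path, newline="") as f:
            return {row[column].strip().lower() for row in csv.DictReader(f)}

    employees = load_column("hr_roster.csv", "employee_id")    # hypothetical export
    accounts  = load_column("lan_accounts.csv", "owner_id")    # hypothetical export

    for owner in sorted(accounts - employees):
        print(f"Account owner '{owner}' has no HR record - review or remove")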

Data asset identification is also important here.  Classifying data and assigning data owners is a well documented process and one that is often time consuming and ongoing.  Understanding which data is critical and in turn which transactions access that data is an important step in creating a process to help protect the internal resources.

Mitigation
Mitigation, as opposed to complete remediation, is often the most effective response to insider threat.  Managing the risk involved is often more cost effective than attempting to remove the risk entirely; it can often be a case of spending $1,000 on a padlock for a $100 bike.  Mitigation can be achieved in several ways.

  • Based on the risk identification and access certification process, users should be assigned the 'least privilege' required to do their job
  • Management of high privileged accounts is critical
  • Implement regularly updated Separation of Duty policies across key systems
  • Develop clear and well disseminated security policies and regular employee re-training
  • Implementation of a Data Leak Prevention process with associated tooling
  • Remove shared accounts and implement account-to-employee relationships to help drive auditing and accountability
  • Implementation of a Security Information & Event Management solution for centralised management of system, network and application logs
  • Use of abnormal access identification processes against the SIEM warehouse to help filter false positives and identify true access threats
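As a hedged sketch of that last bullet, a first pass over SIEM-style login events can be as simple as discarding activity that falls inside business hours and originates from known admin subnets; the event layout, subnet and hours below are assumptions.

    # Sketch: rule-based first-pass filter over SIEM login events.
    # Expected activity is dropped; everything else is surfaced for review.
    from datetime import datetime
    from ipaddress import ip_address, ip_network

    ADMIN_SUBNETS = [ip_network("10.1.0.0/24")]    # assumption: the admin VLAN
    BUSINESS_HOURS = range(7, 19)                  # 07:00 to 18:59

    events = [  # would normally be pulled from the SIEM warehouse
        {"user": "svc_backup", "src": "10.1.0.12", "time": "2012-03-01T03:14:00"},
        {"user": "jsmith", "src": "192.168.9.7", "time": "2012-03-01T10:02:00"},
    ]

    for e in events:
        in_hours = datetime.fromisoformat(e["time"]).hour in BUSINESS_HOURS
        known_src = any(ip_address(e["src"]) in net for net in ADMIN_SUBNETS)
        if not (in_hours and known_src):
            print(f"REVIEW: {e['user']} from {e['src']} at {e['time']}")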
One of the more advanced and stringent approaches is to treat content on the LAN as if it were held on a WAN or even the public internet.  With the increased blurring of LAN boundaries due to mobile and cloud computing, together with the transient nature of employee working patterns and behaviour, the LAN is no longer the safe haven it once was.

(Simon Moffatt)


Successful Privileged Account Management

Privileged Account Management is a major concern for large organisations trying to control the ever-growing threat of the insider.  Privileged accounts generally relate to super user accounts such as root on Unix, or Administrator within Windows, as well as service accounts and accounts used for account administration tasks.  These elevated users have greater object access, including the ability to circumvent audit and accounting processes.

The Risks
The biggest risk associated with PAM is that these accounts are often built into the underlying infrastructure, created at installation time.  The only practical option is often to disable as many of the high permission accounts as possible without impacting service.  The second biggest risk with privileged accounts is that many are used programmatically, hard-coded into scripts and code.  This can make finding the password a lot easier for an attacker.  These accounts will often have permissions covering account CRUD activities, audit configuration and other meta-data related tasks.
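To illustrate how exposed those hard-coded credentials are, here's a hedged sketch that sweeps a script directory with a deliberately crude pattern; the directory name is an assumption, and real secret scanners use far richer rules.

    # Sketch: sweep a directory of scripts for likely hard-coded credentials.
    import re
    from pathlib import Path

    PATTERN = re.compile(r"""(password|passwd|pwd)\s*[=:]\s*['"][^'"]+['"]""",
                         re.IGNORECASE)

    for path in Path("scripts").rglob("*"):        # hypothetical script root
        if path.suffix in {".sh", ".py", ".bat", ".pl"}:
            for n, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if PATTERN.search(line):
                    print(f"{path}:{n}: possible hard-coded credential")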

The Users
Privileged account users are generally part of the system or security administration functions reporting into the CISO or CTO.  Whilst the permissions are required for the owners to be able to perform their jobs, the fact that these users work in the teams that configure and manage the auditing and reporting infrastructure makes identifying and managing anomalous access issues time consuming, complex and at times political.

Basic Management
The accounts that are required need to be managed effectively.  That means strict correlation between the account and a tangible user record for accountability.  Firstly, though, you need to be able to identify and analyse the privileged accounts and understand which accounts have access to which systems.  The following is an example of a basic PAM policy:
  • Infrastructure level complex password policies in place
  • Expiring passwords, lockout and restricted time logons
  • Accounts should be disabled when not in use
  • Service accounts should be managed with generated passwords wherever possible, or longer length pass-phrases (see the sketch after this list)
  • Associated entitlements must be documented via access control and subsequent approval
  • Default account names should be changed and clearly described in secure documentation
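As a minimal sketch of the generated-password bullet, the Python standard library's secrets module is enough; the word list path is an assumption and varies by OS.

    # Sketch: generate a random service-account password and a longer
    # pass-phrase using the standard library's cryptographic RNG.
    import secrets
    import string

    def random_password(length=24):
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    def passphrase(words=5, wordlist="/usr/share/dict/words"):  # path varies by OS
        with open(wordlist) as f:
            candidates = [w.strip() for w in f if w.strip().isalpha()]
        return "-".join(secrets.choice(candidates) for _ in range(words))

    print(random_password())
    print(passphrase())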

Anomalous Detection
Privileged accounts are by their nature internal to the corporate network.  Their use is expected, and activity by these accounts is not in itself cause for concern.  However, due to the 'keys to the castle' nature of the permissions associated with these accounts, detecting anomalous and malicious use needs to be done quickly, with an effective response.  Anomalous use doesn't always originate from outside the organisation.  It could also arise from an employee with authorised access to the account, but who uses it to view data, change processes and perform operations at a time or location that could lead to a security breach.  Identifying any potential misuse requires detailed and accurate logging, either via proprietary system accounting or via a centralised Security Information & Event Management (SIEM) solution.  A centralised view is important, but removing the potential for false positives is also key.

Behaviour Profiling
The use of behavioural profilers can assist in identifying how privileged accounts are being used and which activities are deemed to be anomalous or malicious.  Behaviour can include which workstation is using the account, which network segment, the time of day, and against which network device, file, object or process the account is being used.  All of this helps develop a picture of expected account behaviour, which in turn helps to reduce the noise often created by viewing the logs of every account transaction.  Spikes of suspicious use are then easier to spot and can be managed via the appropriate case workflow, notification and escalation processes to quickly track and resolve the potential breach.
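As a hedged sketch of that profiling idea, a baseline can start as nothing more than the set of workstations each account has historically been used from; the data shapes here are assumptions.

    # Sketch: learn each account's usual workstations from past logs,
    # then flag new events that fall outside the baseline.
    from collections import defaultdict

    history = [  # (account, workstation) pairs from a training window
        ("root", "admin-ws-01"), ("root", "admin-ws-02"),
        ("svc_db", "db-host-01"),
    ]

    baseline = defaultdict(set)
    for account, workstation in history:
        baseline[account].add(workstation)

    new_events = [("root", "sales-laptop-17"), ("svc_db", "db-host-01")]
    for account, workstation in new_events:
        if workstation not in baseline[account]:
            print(f"ANOMALY: {account} used from unfamiliar host {workstation}")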

Privileged accounts are here to stay, so better ways of managing and reducing the risk they can pose are imperative if compliance and security efficiencies are to be achieved.

(Simon Moffatt)




Increased Connectivity - The Good, Bad & Ugly

Connectivity is on the rise by all accounts.  Interoperability is where it's at.  Languages, protocols, operating systems, identities, on-line profiles, devices, smart-phones, tablets, you name it, if connectivity isn't a feature it's not getting a look-in.

If you look at pre-internet times (yes, hard I know), device and data interconnectivity was seen as an important use case, but only implementable if deemed absolutely necessary.  As tooling and applications now allow data passage with a few clicks, the network of connected devices has become enormous.

Whilst this brings many end-user benefits, it can also bring with it management issues, data loss prevention concerns and data proliferation where perhaps there should be none.

Increased Connectivity is Great, Right?
The main area of increase recently has been the rise of the smart-phone.  Devices now contain powerful processors and large portable micro-card storage, and run operating systems with the same level of complexity as a desktop machine.  Smart-phones can hop onto a wi-fi network in seconds and communicate over TCP/IP like any other device.  Coupled with the smart-phone's 'always-on' capability comes increased on-line connectivity.  By this I'm referring to the services that the internet provides.  For example, a Google account can link your phone contacts to your calendar and to your social network, and in turn you can import your RSS feeds directly into a blog page and see the book recommendations from your friend feeds.  A document on your laptop can easily be shared, stored and copied to your phone, tablet and work colleague seamlessly.

Why is it a Problem?
The biggest danger with inter-connectedness is data management.  If you use a basic cloud synchronisation service, you could quite easily have 3-4 copies of the same document: a local copy, an on-line archive, a collaborative copy and so on.  Where is the ownership, protection and management of the original data?  No longer is corporate data restricted to the private LAN.  The boundaries of such a network are now blurred.  If corporate data can be downloaded, viewed and edited on a tablet or smartphone using 3G, where does the corporate security policy end?  Data Loss Prevention can provide many answers.  Endpoint device management is a major concern, as is the security of Data-in-Motion.  New technologies focusing on Information Rights Management, which help restrict access to proliferated data by unknown users, are now popular.  Data-at-Rest is quite a well known area of concern: disk encryption for laptops is popular, and remote-wipe is a common feature for smart-phones and tablets.
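To make the proliferation point concrete, here's a hedged sketch that fingerprints files by SHA-256 across several storage locations and reports byte-identical copies; the folder names are assumptions.

    # Sketch: hash every file under a set of locations and report
    # documents that exist as identical copies in more than one place.
    import hashlib
    from collections import defaultdict
    from pathlib import Path

    locations = [Path("local_docs"), Path("cloud_sync"), Path("shared_drive")]
    copies = defaultdict(list)

    for root in locations:
        for path in root.rglob("*"):
            if path.is_file():
                copies[hashlib.sha256(path.read_bytes()).hexdigest()].append(path)

    for digest, paths in copies.items():
        if len(paths) > 1:
            print(f"{len(paths)} copies of the same document: {paths}")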

BYOD or Bring Your Own Device brings with it another complex set of security concerns.  Should organisations realise the potential of individually owned devices to create an inter-connected grid of data exchange?  What about employees with jail-broken phones, or phones with inconsistent patching, applications and so on?  What happens when an employee leaves an organisation?  Who owns the data and can it be legally wiped?


Shifting Boundaries
The expansion of connectivity can create a blurring between private and public networks and in turn cause policy jurisdiction issues.  A concern in recent years has been the increase in the number of SCADA (Supervisory Control & Data Acquisition) system attacks.  Historically these systems would not have been so heavily inter-connected with the corporate network and, in turn, the internet.  SCADA networks were generally separate from existing LAN infrastructures, using faster, lower level protocols.  As inter-connectivity with standard TCP/IP infrastructure increased, SCADA systems became inadvertently accessible via the internet and in turn more open to cyber and malicious software attacks.

It will be interesting to see, as connectivity continues to increase at the corporate, personal and industrial levels, whether security policy and controls management can keep pace, providing governance and support to help reduce data loss, attack and malicious software proliferation.

Virtualisation - More or Less Secure?

"Everything is virtual, nothing is real!"  Sounds like a songwriter's lambast against modern-day society.  It's not.  Virtualisation in a computing sense has been around a while and is here to stay.  From the virtualisation of physical machines, applications and network infrastructure, being virtual seems like an IT manager's idea of heaven: less physical kit = less power = less cash = everyone's happy, right?  Maybe...

Virtualisation at the server level is probably the most popular deployment.  By this I refer to the likes of VMware or Microsoft Hyper-V, which create a hypervisor that sits on the physical tin and basically splices, isolates and distributes the physical components into virtual mini-machines.  These mini-machines can be individual servers running a plethora of different operating systems, all using the same underlying physical machine.  Neat, eh?  Provisioning and de-provisioning a new server takes seconds.  Fault tolerance across applications, servers and infrastructure is simpler and more cost effective.  Virtualised networking allows the virtual machines to talk to each other without touching a physical ethernet cable.  Nice.

But does this concentration of resources increase or decrease security?  From a physical perspective you have less kit to worry about.  A good thing, right?  Well, the concentration factor results in a smaller attack surface for attackers to focus on.  If you previously had 250 physical DL380 servers racked in 4 server rooms, and that now becomes a pair of clustered boxes in 2 server rooms, there are fewer things that need protecting and fewer entry points an attacker could focus on.  However, if an attack were to be successful, the rewards are simply much higher.

If a hacker was able to gain access to the host machine (the physical kit running the hypervisor), would they then have access to all the virtual machines running underneath?  Well, perhaps not directly, as there will be some logical security and isolation of those virtual machine files, but in theory you have more chance of creating a DoS scenario than if you had 250 physical machines, which would require 250 separate attacks.

From a practical perspective this virtual world will need to be managed and administered by someone.  This normally falls to a server support team of some description.  Whilst your pre-virtualised server world might previously have been managed by different specialists (Unix, Solaris, Windows, Database, Web etc), the post-virtualisation world is managed at a meta level by a single team.  So again, the simplest place for an attack would be via the tools and interfaces that manage the virtual environment, as that would give meta-access to the machines underneath: their configuration, their power-on status, network configuration and so on.

Another area this virtualised concentration factor might influence is the physical aspect.  Previously you may have had more distributed physical machines.  Even in a concentrated virtual scenario with some sort of bi-locational clustering, you have fewer physical racks and, more importantly, fewer ethernet cables hosting your environment.  What changes need to be made to gain access to that physical environment?  If you gain access to the physical patching of a host machine, you potentially have access to sniff traffic from multiple machines, which in the past would have taken more effort.

Whilst this is all a simplified view, and there are many logical controls and processes that aid virtualisation security, many areas of weakness still exist, mainly around resource and administration concentration.  A reduced footprint can make administration and protection easier, but that protection will generally come under greater attack as the prize is now much higher.

(Simon Moffatt)

No-Tech Hacking - Identifying Unprotected Assets

When you think of hacking, or start looking at ethical hacking and counter measures, the focus is on the highly technical.  Encryption cracking.  Packet sniffing and session hijacking.  Web site hacking.  SQL injection and so on.  All require a fair bit of basic infrastructure, networking and coding experience.

Whilst there are many off-the-shelf tools, utilities and scripts that make the hacker's (and ethical hacker's) job easier, being non-technical is a huge hindrance.

However, as a security manager or engineer, protecting information and IT assets shouldn't just be about the cool tech.  It should focus on the "no-tech" as well.  By "no-tech", I'm simply referring to areas of information protection that require basic process, training and awareness.

For example, servers should only run the services they are designed for, and each server should have a modular, cohesive function associated with it.  This is pretty standard config management, removing the complexity and support issues of having a device perform several functions.  If a server does one thing and one thing only, it is simple to remove, lock down or disable any ports, services or functions that are not needed.
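As a minimal sketch of that lock-down check, a quick TCP probe can flag anything listening outside the expected set; the host address and expected ports are assumptions for a hypothetical web server.

    # Sketch: check a single-function server for unexpected open TCP ports.
    import socket

    HOST = "10.0.0.5"           # hypothetical web server
    EXPECTED = {80, 443}        # the only services it should offer

    for port in range(1, 1025):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.3)
            if s.connect_ex((HOST, port)) == 0 and port not in EXPECTED:
                print(f"Unexpected service listening on port {port}")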

An obvious one (and often ignored) is the basic requirement of PCI-DSS 2.1, which is to remove default passwords on any servers, services or devices that are installed.  For servers and services this can be quite well managed at times, but it also needs applying to every device on the network.  I'm thinking mainly of routers and switches, often the least well managed parts of the networking infrastructure.  If accessed maliciously, they can be a fountain of knowledge and an avenue for a basic DoS attack.  In addition, check, remove or edit any default SNMP community strings used to manage servers or network devices (especially the read/write strings).
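In the same spirit, here's a hedged sketch that sweeps saved device configuration backups for factory-default SNMP community strings; the backup folder and file extension are assumptions.

    # Sketch: scan exported router/switch configs for default SNMP
    # community strings - a classic and easily fixed exposure.
    from pathlib import Path

    DEFAULTS = {"public", "private"}              # the usual factory strings

    for cfg in Path("device_configs").glob("*.cfg"):   # hypothetical backups
        for n, line in enumerate(cfg.read_text(errors="ignore").splitlines(), 1):
            words = set(line.lower().split())
            if "snmp" in line.lower() and DEFAULTS & words:
                print(f"{cfg.name}:{n}: default SNMP community string in use")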

Another area that is often overlooked is the management of service accounts.  Accounts used for things like printer management, backups, application installation and so on often have admin or near-admin capabilities.  As they're used by scripts, services and apps, the passwords are often simple (sometimes the same as the account name) and not set to expire.  It's a lazy and often overlooked part of account management, as the accounts are being used by the sys admins themselves.  A simple, well documented policy here would close a lot of back door access.

Many organisations now have well developed policies for at least laptops, if maybe not quite for Bring Your Own Device / smartphone style devices.  Laptops often have group policies that prevent social networking or instant messenger products, or the installation of additional software in general.  Local account passwords are often linked to a directory where a complex password policy is in place.

All good stuff, but what happens if the physical device is lost or stolen?  It probably takes 5 minutes to unscrew the back panel of the laptop, take out the disk, add it to an external USB caddy and mount it as a new slave drive.  No CTRL-ALT-DEL password to bypass or network to attach to, just straight into the raw file system.  Unless, of course, it was encrypted!  Basic (and good) encryption software is readily available for at least partition encryption, and full disk encryption (including the MBR) is now becoming standard too with on-board crypto-processors.

Security in depth is key, and basic disk encryption easily defeats this sort of portable storage attack.
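To make the encryption check concrete, here's a hedged sketch that tests a recovered disk image for the LUKS header magic; the image path is an assumption, and BitLocker and other products use different signatures.

    # Sketch: detect a LUKS-encrypted volume by its 6-byte header magic.
    # LUKS (v1 and v2) images begin with 'LUKS' followed by 0xBA 0xBE.
    LUKS_MAGIC = b"LUKS\xba\xbe"

    def looks_luks_encrypted(image_path):
        with open(image_path, "rb") as f:
            return f.read(6) == LUKS_MAGIC

    print(looks_luks_encrypted("recovered_disk.img"))   # hypothetical image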

Other basic "no-tech" protection areas should be focused on social engineering.  ID badge checking by reception.  Zero tolerance of tailgating and doors left open.  Passwords never written down or shared.

If something or someone looks suspicious ask, check and prevent the incident from occurring before it becomes damaging.  It may seem like extra effort in the short term, but it will beat any effort involved in a recovery exercise.

(Simon Moffatt)

The Rise of Social Engineering

Defence in Depth.  Rings of Security.  Multi-layered protection.  All well known terms when it comes to protecting information assets.  Information can generally be accessed in two ways:  via a network or straight from the disk.  Organisations pay great attention to policies and controls that help protect information both in transit and in situ.  Take a basic network example:

  1. Company has a firewall configured separating public and private network traffic
  2. An Intrusion Detection System is also present to detect traffic anomalies
  3. RADIUS access is managed using two-factor authentication, with tokens
  4. Within the private network, VLANs are configured to separate logical business areas
  5. Physical wired patching is managed and restricted using MAC address tables
  6. Wireless Access Points have obfuscated SSIDs and complex passwords, with enhanced 256-bit encryption
  7. Physical machines are managed by group policy with regular patching, local firewalls and anti-virus configured
  8. Access to local machines is via non-root / non-Administrator accounts
All pretty standard stuff, and most large organisations will probably have securely managed DHCP and DNS too.  So that's internet-to-desktop access all sorted, right?  Let's look at data in situ.  This can be slightly more problematic.  Data can be stored in several places: local hard disk, portable hard disk, network storage (server/NAS/SAN) and so on.  So more places equals more risk.  To overcome some of those obstacles, some basic controls would include:
  1. Restricted access to physical network storage location (swipe/pin access to server room)
  2. Locked server cabinets
  3. RAID storage to reduce SPOF
  4. Secure back up
  5. Disk encryption at the data's final resting place
  6. BIOS password to prevent storage changes
  7. Prevention of USB access to machines holding data
Again all pretty standard and many organisations will utilise some or all of these controls.  Individually the controls could be attacked or circumvented, but when combined, the entire chain of security makes an attack less likely to succeed and maybe even less likely to be started due to the formidable steps involved.

The one area that chained security is less effective at preventing is social engineering.  The classic fraudster from American films is not a new concept.  Mandrake the master illusionist or the well-dressed con-artist who was 'such a nice man' both fall into this category of social engineering.

Social engineering can cover a multitude of areas, from tailgating, fake calls from IT support and email phishing right through to elaborate pretexting and scenario-based trapping.  All are designed to extract information such as usernames, passwords or personal information that can lead to data leakage.  The biggest safeguard against such techniques is often a well developed and disseminated security policy which clearly states what, for example, internal teams would ask for during standard business operations.  Many banks now place warnings stating they'd never 'ask for your PIN during a call', for example.

The same approach should be standard practice within the corporate environment, with detailed policy explaining that under no scenario would certain information be asked for or released, even if an employee is placed under social pressure.  Clearly devised processes surrounding information release should be adhered to.  If a request needs to be approved by Mr X, allow it to be approved only by Mr X, and under no circumstances should anyone else approve it.

It's the classic scenario of the corporate directory requiring complex passwords, only for those passwords to be so complex that users cannot remember them, so they write them under their coffee mat.

Defence in depth is only really as strong as the weakest link, and more often than not that link is people.

(Simon Moffatt)

Take Your Head Out of the Sand - (You WILL be Hacked Eventually)

Do it now please.  Stop ignoring the fact.  Stop living with your head in the sand, the 'it won't happen to us' syndrome.  It will.  Sooner rather than later your corporate network, your information assets, your company Intellectual Property, the brand that has taken half a century to create and protect: all of them (or, if you're lucky, only one) will be hacked in the future.  The likelihood of it not happening is actually quite small, so you might as well start preparing for when the attack will happen and develop a plan for an effective response and recovery.

2011 saw the terms cyberwars, APT and malware command and control all become pretty much household terms.  Organisations with any sort of web based presence (how many don't?) continually went through vulnerability scanning exercises, patching roll-outs and IDS testing in an attempt to provide external auditors, board members and shareholders with some sort of assurance that the IS teams were in control of the infrastructure that protects their key information assets.

Many organisations within the financial services industry have separate IS teams that are now responsible for information security and governance, not only from a technical infrastructure perspective, but also from a policy and control perspective.  Whilst the main compliance procedures are focused on insider threats (who has access to what, access control, reporting etc), many are now focusing on web based Advanced Persistent Threats.  APTs are becoming more complex, often being run and directed by large corporate-like groups involved with organised crime.  Where there's a successful APT there is cash.

Whilst most security programmes argue for pro-active measures (patching, account management policies, access provisioning, zero-day attack scanning and so on), it is equally important to have a re-active procedure and mindset when facing new age cyber security.  If a breach does occur, is there a clear escalation plan and procedure to manage the 'who, what, why?'  This could be viewed as a DR plan for cyber attacks.  Whilst the main security programme is focussed on making sure attacks never happen, in the increasingly likely event that they do, an effective and rapid response is imperative in order to lessen the impact and return to BAU operations.  In certain circumstances, based on a detailed risk assessment, the response could be more cost effective than the prevention (don't put a $100 padlock on a $10 bike...).

Whilst doctors will forever argue the virtues of healthy living, immunisation and health & safety legislation, each town and city still has a well funded Accident and Emergency department.  The same could be applied to corporate cyber security programmes.

(Simon Moffatt)