Cryptography - As Strong As Your Weakest Link

Cryptography is as old as communication itself in many respects, with people (and even animals) developing mechanisms to shield messages from those who are not trusted.  One of the most common approaches to have passed the test of time is the Caesar Cipher.  The Caesar Cipher is a basic substitution approach, replacing each letter of the alphabet with the letter n positions away.  So if your shift was 3, A would become D, B would become E and so on.  Pretty simple to use, but obviously simple to reverse too.
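
As a quick illustration (a minimal Python sketch, nothing more), the snippet below shifts each letter three places and then shifts it straight back again:

    import string

    def caesar(text, shift):
        # build a translation table mapping each letter to the one 'shift' places away
        alphabet = string.ascii_lowercase
        shifted = alphabet[shift:] + alphabet[:shift]
        table = str.maketrans(alphabet + alphabet.upper(),
                              shifted + shifted.upper())
        return text.translate(table)

    print(caesar("Attack at dawn", 3))   # Dwwdfn dw gdzq
    print(caesar("Dwwdfn dw gdzq", -3))  # shifting back reverses it just as easily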

Modern day cryptography is generally broken into two areas - symmetric and asymmetric.  Symmetric uses the same key to both encrypt the plain text and decrypt it.  Again, this is nice and simple to implement, but no matter how complex the key is, if the key is stolen, the message can be easily decrypted back into the original plain text.
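
A minimal sketch of the symmetric idea, using the Fernet recipe from the Python 'cryptography' package (exact API details may vary between versions) - note that the single key does both jobs, so whoever steals it can read everything:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()    # the single shared secret - protect it
    cipher = Fernet(key)

    token = cipher.encrypt(b"meet at the usual place")
    print(cipher.decrypt(token))   # b'meet at the usual place'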

Over time, asymmetric encryption has become popular, mainly through the implementation of public key infrastructures.  PKI requires two keys: a public key that is generally used to encrypt messages, and a second, private key that is used to decrypt them.  The private key, as the name suggests, is kept secret, generally password protected and held locally by the decrypting party.  Public keys are made available to whoever wants to encrypt a message to the recipient.
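
A similarly minimal sketch of the asymmetric pattern, again assuming the Python 'cryptography' package: the public key encrypts, and only the matching private key can decrypt:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()   # safe to hand out to anyone

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    ciphertext = public_key.encrypt(b"for the recipient only", oaep)
    print(private_key.decrypt(ciphertext, oaep))   # b'for the recipient only'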

A common mistake is to use the terms encryption and hashing interchangeably.  Hashing is a one way function that takes a variable sized piece of plain text data and creates a fixed size block of data that is unreadable to the human eye.  The hashing function should be complex enough that no two pieces of plain text create the same hash digest.  This is known as collision resistance.  It should be impossible to retrieve the plain text from a well designed hashing function, hence hashing is often used for password storage.  To confirm password equality, an entered password is passed through the hash function and compared to the original hash, as opposed to decrypting the stored value and comparing in plain text.  Encryption can be reversed; hashing, in theory, is irreversible.
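
As a rough sketch of that comparison pattern using Python's hashlib - deliberately simplified, as a real password store would use a salted, slow hash, touched on below:

    import hashlib, hmac

    stored_digest = hashlib.sha256(b"correct horse battery staple").hexdigest()

    def check_password(attempt):
        # re-hash the attempt and compare digests; the stored value is never 'decrypted'
        candidate = hashlib.sha256(attempt).hexdigest()
        return hmac.compare_digest(candidate, stored_digest)   # constant-time compare

    print(check_password(b"correct horse battery staple"))   # True
    print(check_password(b"password123"))                    # False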

Whilst there are attempts at breaking both PKI and hashing infrastructures (rainbow tables are often seen as the most plausible way of breaking an unsalted hash), encryption infrastructures are often only as strong as their weakest link.
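
As a sketch of the salting defence against those rainbow tables, the snippet below uses Python's hashlib.pbkdf2_hmac; the iteration count is purely illustrative:

    import hashlib, hmac, os

    def hash_password(password):
        salt = os.urandom(16)                # random per-password salt
        digest = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
        return salt, digest

    def verify(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password(b"correct horse battery staple")
    print(verify(b"correct horse battery staple", salt, digest))   # True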

There are several other factors involved in a complex encryption or hashing infrastructure beyond just the strength of the algorithms and functions being used.

Human factors play a large role in this infrastructure too.  How are keys being stored?  What happens to decrypted data once it has been read or used?  Are any keys or unencrypted data stored in temporary files anywhere?

If SSL is being applied to a secure website, that level of security can be undermined if the underlying database is not secured, or is accessible via Telnet or FTP, for example.

Whilst the encryption of sensitive data, both at rest and in transit, is a key part of information security, the people, process and technology aspects of such an infrastructure mustn't be ignored or deemed to be less significant.

You are only as strong as the weakest link, which was perfectly exemplified by the breaking of the Enigma machine during World War II, when a huge breakthrough occurred simply due to German operator error.

Don't let that operator exist in your organisation.

(Simon Moffatt)

Private Email - The Key To Your Personal Identity

Identity theft is big business, costing an estimated £2.7bn in 2010 [1] and affecting millions of individuals.  Identity theft can occur via multiple attack vectors, from rubbish sifting for incomplete credit card applications, right through to the more sinister underworld of fraudulent passport applications.

Whilst non-technical avenues such as dumpster diving and social engineering are a major concern, technical methods of stealing the information required to assume a different identity are popular.  The most common is probably that of a fake URL to online banking and financial accounts.  URL phishing is still common and still surprisingly effective.  An email correctly formatted with the appropriate wording, colouration and logos can often navigate through complex spam filters and land in the recipient's inbox.
By arriving in the inbox, the email has instantly generated a level of trust from the potential opener, more so than if it had landed directly in the 'junk' or 'spam' folder.  Once a level of trust has been established, it often takes a high level of observation to notice that something is untoward and not as it seems.

If a victim does follow through with the phished link and enters their credentials, it is quite likely that, whilst their account will then be attacked and potentially drained of hard earned cash, the attack will stop at just that one account.  A quick call to the respective bank and the account is closed with all access revoked.  Whilst that is certainly an unpleasant and inconvenient experience, the impact can be firewalled.

In 'real' life we generally like to avoid situations which result in the old cliché of putting 'all of our eggs in one basket'.  Drop the basket, lose your eggs, no cake for tea.  Whilst you could then have some countermeasures, like the equally old 'baker's dozen' approach, any scenario which keeps all of our treasured items in a single place leaves them open to being lost, stolen, attacked or destroyed.  We know this, but why do we do it online?

Take, for example, a personal email address.  Most people will have at least one.  A lot of people may have more than one, either through signing up to services like Google, Yahoo or MSN, which carry a free email address with them, or simply to separate work and personal life.  Ultimately though, you are likely to have one or two email addresses that you then use to sign up to any online service you use.  That could range from the benign, like Facebook, through to online banking, tax, car insurance and so on.  Pretty major entry points, all reliant on a single email address.  Whilst it's obviously good practice not to use the same password for all those accounts, the one thing that is the same is the email address.

Why would an attacker want to try and guess every password to every single online account when they can simply attack the one that matters - the email password?  If an attacker knows that, they have the 'keys to the castle' and ultimately the key to your (online) personal identity.  The next step is to simply change your email account password and start 'forgotten password' cycles on all other accounts registered with that email address.  In addition, the attacker then has the victim's entire email history, including anything sent (sent items are generally not deleted), as well as any folders, trash and current inbox mails.

It goes without saying that the email password should be more like a passphrase than a password, changed regularly, and that the account should always be accessed over HTTPS/SSL.  Emails should be cleared down regularly, and anything personal, sensitive or financial should be deleted unless keeping it is absolutely necessary.
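
As a light-hearted sketch of the passphrase idea, using Python's secrets module - the word list is just an illustrative stand-in for a proper diceware-style list:

    import secrets

    # an illustrative stand-in for a proper diceware-style word list
    WORDS = ["orbit", "velvet", "granite", "lantern", "monsoon",
             "quiver", "saffron", "timber", "walrus", "zephyr"]

    def passphrase(n_words=5):
        return "-".join(secrets.choice(WORDS) for _ in range(n_words))

    print(passphrase())   # e.g. granite-zephyr-orbit-monsoon-velvet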

It is often the simple things that an attacker will look for, so it pays to think and act just as simply in order to avoid the obvious pain of an attack.

(Simon Moffatt)





Sources
[1] - National Fraud Authority 2010 Report

3rd Party Software Library Security

I'm talking about software libraries of course, generally the 3rd party provided type.  That 3rd party could be an open source community, a commercially purchased library, or even a library from previous employees or internal projects that are no longer active.

The likelihood is that nearly all of the internal and external software projects being run within an organisation will be using libraries not created by the software project owner.  And why not?  Why bother creating yet another CSV parser, or email sender, or PNG generator, when 90% of your use cases can be hit using a library already written?

Well, there are several areas of concern here.  Firstly, if you didn't write the code yourself, you can't testify that it meets the standards required either by your internal organisation or your client without going through the source line by line.  Secondly, a library built to do a specific task will not necessarily be focused on security.  Its aim is very domain specific and the creator(s) will have had just those use cases in mind.

Whilst open source libraries obviously have the benefit of peer review and iterative, quick development cycles, there is the basic fact that the source is open to inspection, with any vulnerabilities visible for all to see.  The flip side, obviously, is that you have more developers performing code review and applying enhancements.

The biggest issue when using 3rd party libraries is that they are more likely not to receive updates once consumed within a development cycle.  The large number of interdependencies can often lead to 3rd party libraries remaining stagnant for a sustained period, whilst the 'home-grown' code adapts and evolves.

Whilst the risk of utilising libraries will not alter their mass consumption, there need to be better ways of identifying and managing the risks associated with embedded library vulnerabilities.  There are several vendors that provide automated and semi-automated code scanning, which can be used either from an assessment standpoint or as an outsourced code review.  This approach will only be useful if the libraries are well known and covered by a comprehensive database of known vulnerabilities.
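
As a rough sketch of what such a check might look like, the snippet below compares a pinned requirements file against a list of known vulnerable versions; both the file format and the KNOWN_VULNERABLE entries are hypothetical placeholders, as in practice this data would come from a maintained vulnerability database or a commercial scanner:

    # the vulnerable-version data below is purely hypothetical - in practice it
    # would come from a maintained vulnerability database or scanner feed
    KNOWN_VULNERABLE = {
        ("examplelib", "1.2.0"): "hypothetical advisory reference",
    }

    def audit(requirements_path):
        with open(requirements_path) as handle:
            for line in handle:
                line = line.strip()
                if not line or line.startswith("#") or "==" not in line:
                    continue                     # skip comments and unpinned entries
                name, version = line.split("==", 1)
                issue = KNOWN_VULNERABLE.get((name.lower(), version))
                if issue:
                    print(f"{name} {version}: {issue}")

    audit("requirements.txt")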

A more pro-active approach is to engage security in the entire software development life cycle.  Whilst this approach has many benefits, it is often seen as too time consuming and is frequently overlooked.  Applying security throughout the entire development life cycle can at least help to identify issues that may exist in the future 3rd party code base.  Utilising security throughout the development cycle is often most successful when the business is fully aware and can apply its own risk appetite to the development process.

(Simon Moffatt)




Does Older Mean More Secure?

It's an interesting thought.  Most operational security plans will promote the constant need for operating system and application software to be running the most recent stable release.  Patching and roll out platforms are big business and take up a significant portion of a system administrator's time.  Keeping mobiles flashed, operating systems patched, router firmware updated and the IPS/NGFW/AV/Blacklist (delete as applicable) at its most recent signature release is a constant cycle of automation and checks.  But is it worthwhile?

A new release of any piece of software would (should) have gone through rigorous QA and UAT before being released and made available for roll out.  The maker of said software will nearly always recommend that the customer roll out the most recent release, as it makes their support processes easier to manage, and getting everyone onto the same (or a similar) version as soon as possible helps with bug management and security issues.  This is a pretty fair position.  Unless you're an application vendor delivering a service via an online site, where everyone uses the same version regardless, managing multiple versions of client software installed and configured on different platforms can be a complex process.

A newly released version or patch fixes known issues, bugs and security loopholes on one hand, but is also open to newer attacks which have yet to be identified and patched by the vendor - the dreaded zero-day scenario.  The other issue with newly released software is that it will generally have fewer users, meaning that bugs and stability issues may still exist.  So whilst the chances of encountering a bug, security vulnerability or stability issue may increase in the short term, the speed and quality of the vendor response will likely increase too.



But what if we look at this process from a few steps back?  What are the most attacked platforms and applications today?  Android?  The Chrome browser?  Flash or JavaScript based vulnerabilities?  Windows 7 bugs and issues?  All are relatively recent developments, but are they attacked due to inherent flaws, or is it more likely because they're popular and well used?  An exploited vulnerability will have a bigger impact when the target user base is large.  So should we therefore all use unpopular applications, browsers and operating systems?

The strong argument for Linux based operating systems is that they are inherently 'more secure'.  Yes, the number of viruses is lower than, say, Windows, and the basic approach to things like non-root running is a strong concept, but are they simply attacked less because they're used less?  How many Cobol and Fortran vulnerabilities are being identified and attacked?  Perhaps that is an unfair comparison, as vulnerabilities for older platforms do exist and, in the appropriate environment, are identified, patched and make the headlines.

Sometimes security isn't about patching and controls though; it can be about doing things differently and counteracting potential attacks in a subtle and diversified manner.

(Simon Moffatt)




The Internet Browser - A Gateway Out or a Vulnerability In?

Every month there is a report on the market share of internet browser tools.  The big 4 (Microsoft's Internet Explorer, Mozilla Firefox, Google's Chrome and Safari on the Mac) are generally seen as taking the majority share of the browser market with regional differences in countries and continents.

As more thin and mobile devices enter the mainstream, the main application being used by the end user will likely become the internet browser.  The subtle adoption of 'cloud' providers for goods and services (think music, books, news, basic storage, photos) is now embedded in the standard home user's approach to computing.  If required, there could be very little actually stored locally on a user's machine, with everything stored, subscribed to and accessed via an internet connection.

This concept has produced one of the first browser based operating systems in the form of Google's Chrome OS.  This is basically a single application operating system aimed solely at accessing the internet, with the assumption that the use of applications, data and services will be done remotely (it seems like an ironic circle of computer development, which has gone from centralised mainframes, to client-server, to the PC, and now back to what are effectively dumb remote machines accessing a powerful central hub, albeit a hub that is now massively distributed...).

The main point, though, is that the internet browser is now a crucial component of the device's list of functions, making it a great attack vector for information disclosure and malicious intent.

The patch release cycle for browsers across all vendors is probably one of the most dynamic and responsive of any application or operating system, mainly due to their popularity of use, but also because an exposed browser vulnerability can have a severe impact with regards to information disclosure (browser history, cookies, online banking, purchases, login credentials...) and the potential for full access to the user's device.

The increased number of automated vulnerability scanners for public facing websites and applications has now spawned many specific scanners at the browser level.  Qualys, amongst others, provides a quick online browser checking tool which analyses versions, patching and comparisons to known vulnerabilities.  Whilst patching and updating of browser technology at the individual or home level can be a quick and simple process, keeping browsers consistent and updated within a corporate landscape is a complex and time consuming process.
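
As a rough sketch of what a corporate minimum-version policy check might look like (the version numbers and inventory below are entirely illustrative; real data would come from an endpoint management tool):

    # minimum versions and inventory are purely illustrative placeholders
    MINIMUM_VERSIONS = {"Firefox": 10, "Chrome": 17, "Internet Explorer": 9, "Safari": 5}

    def needs_update(browser, major_version):
        minimum = MINIMUM_VERSIONS.get(browser)
        if minimum is None:
            return True            # unknown browsers get flagged for review
        return major_version < minimum

    inventory = [("alice-laptop", "Chrome", 16), ("bob-desktop", "Firefox", 10)]
    for host, browser, version in inventory:
        if needs_update(browser, version):
            print(f"{host}: {browser} {version} is below policy - schedule an update")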

The corporate environment also faces issues of training and familiarity as and when new browser releases occur, which often results in deployments lagging behind.

Whilst Google Chrome has taken a significant market share in the last couple of years, it has done so on the back of a simple message of being the 'fast' browser.  Whilst a good marketing initiative, it serves to illustrate that the end user wants speed, features and good looks to access newer HTML5 interactive and media laden content.  The focus on usability, speed and looks has hit all the major browser vendors, with Internet Explorer's next flagship being promoted solely on its looks and features.

It will be interesting to see in the coming year, whether the main marketing focus shifts to security instead of playability.

As many smartphones and tablets are already the digital native's main route to the interweb, the attack vector again has a single and powerful entry point to a full plethora of user information, behaviour profiling and browser history, from devices where patch management and vulnerability scanning is not at its most effective.

If there is one application I would patch to near boredom, it would be the one that accesses the internet, whether from a laptop, netbook or smartphone.  It can, however, often be something that is easily overlooked.

(Simon Moffatt)