InfoSec End of Year Review - 2011 into 2012

The end of the year is coming - the goose is getting fat, the gritters are ready (some maybe even with salt...) and the supermarket 2-for-1 offers on mince pies are overwhelming.  As such, I thought it would be a good time to reflect on the main interest areas of 2011 from an information security perspective, and on what 2012 might bring - the 'new' threats or the possible realisation of the old ones - all before we lose ourselves in the midst of Christmas parties, over-eating and the warmth of a log fire.

  1. Everyone is Aiming for the Sky (or at least the clouds) - Now beyond the hype point of deployment, many organisations are dipping their corporate toes into outsourced on-line provisioning of infrastructure, applications and services.  This emphasis on outsourced components will raise many questions surrounding data privacy in multi-tenanted environments, supplier auditing, 'Chinese wall' considerations and SLA management.  Any organisation considering cloud providers should pay close attention to the legal contracts involved and identify key stakeholder responsibilities, demarcation points and 3rd party employee credential checks.  These factors will continue into 2012 and beyond as many organisations look to reduce their IT cost base by utilising cloud services.
  2. If it moves (it's mobile), attack it! - Mobile phones are no longer just that.  They're 'smart', sometimes super-smart, providing 'laptop in a pocket' capabilities.  Work email, document editing and PDF viewing are all possible on a device that can be easily lost or stolen.  Many organisations also allow a BYOD (Bring Your Own Device) policy, which may appear to save the organisation the initial capital outlay, but which ultimately brings the risk of excessive device management costs in the long run.  A main failure of many organisations at present is the lack of a clear mobile security policy which is documented and distributed.  This is a key area of focus for both corporate and personally owned devices.  On a practical level, anti-virus applications are now more common and should continue to extend their mobile support in 2012, whilst all devices now provide PIN support - which should be enforced with a 6 digit PIN that avoids repeated values.  Remote wipe settings are also now standard via network providers or 3rd party apps and should be considered for corporate use.
  3. Starwars becomes Cyberwars - 25 years ago, 'Star Wars' was the main theme of the US's defence capability, with the weapons shield spanning the globe - including Europe.  Whilst that is still being implemented - much to the consternation of Russia - a new type of warfare is developing - cyberwar.  Online attacks aimed not only at corporate and individual users, but also at governments, public utilities and military installations have all come to public attention in 2011.  APTs (Advanced Persistent Threats) have become the latest buzz, with the likes of Stuxnet in 2010 and Duqu in 2011 proving that cyber-attacks are no longer committed by individual pieces of malware, but by complex, multi-component, well-engineered software that is sufficiently advanced to attack SCADA (Supervisory Control and Data Acquisition) systems as well as public-facing devices.
  4. Governments get Geeky - Cyber-security and digital protection have taken on a significant presence in government defence strategy in the last year.  The UK recently announced a new Cyber Security Strategy, costing £650m over a 4 year period, to help improve governmental protection and business awareness of cyber-threats and attacks.  In 2009, the US announced the new position of Cyber Security Co-ordinator, held initially by Howard Schmidt, placing cyber-awareness right at the heart of the Obama administration.  Both the UK and the US will look to drive home the concept that they are 'safe' to do online business with in 2012 and beyond.
  5. Social Networking becomes Socially Engineered - Facebook announced in 2011 that it had reached 800m members, with over half using the system daily to communicate with friends and family.  Alongside the likes of Twitter, LinkedIn and Google+, organisations now also have an outlet to extend their corporate reach and brand.  Whilst this can bring a new and more direct approach to consumer awareness, it can also bring risk: risk of quick proliferation of brand damage, bad reviews, fake accounts and so on.  The use of social media also needs controlling internally, from both a personal usage perspective and an outbound marketing approach.  Both require well-documented and distributed policies.  Social networking has now become a standard use case for many CRM and middleware software products, and in turn requires adequate control and protection.
  6. Zap the Zero-Day? - "Zero-day attacks" have made the news on a number of occasions in the past 12 months, enough to probably make it a household term, if not a fully understood one.  Zero-day attacks are focused on exploiting vulnerabilities in COTS software that have yet to be patched.  Many home operating systems and installed components (media players, viewers, document and spreadsheet software etc) have auto-update capabilities, but this assumes that vendors can provide patches before a vulnerability has been exploited.  This may not always be the case, but many vendors are developing systems to reduce the zero-day threat, including the likes of McAfee, which recently announced DeepSafe, an anti-rootkit hardware protection system created in combination with Intel, which recently acquired the business.  Instead of focusing on patch identification and deployment time, maybe 2012 could warrant a new approach that removes the vulnerability in the first place.
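The PIN guidance in point 2 can be sketched as a simple policy check.  This is purely illustrative - the exact rules (length, repeated digits, sequential runs) are an assumed policy, and real devices enforce their own:

```python
import re

def is_acceptable_pin(pin: str) -> bool:
    """Illustrative PIN policy: exactly six digits, no repeated
    digits, and no simple ascending run such as 123456."""
    if not re.fullmatch(r"\d{6}", pin):
        return False
    if len(set(pin)) < len(pin):  # reject any repeated digit
        return False
    # reject a straight ascending sequence
    ascending = all(int(b) - int(a) == 1 for a, b in zip(pin, pin[1:]))
    return not ascending

print(is_acceptable_pin("123456"))  # False - sequential run
print(is_acceptable_pin("113479"))  # False - repeated digit
print(is_acceptable_pin("183562"))  # True
```

A mobile security policy document would state rules like these explicitly, so that both corporate and personally owned devices can be configured consistently.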

Securing Information - An Ideology not a Tool

Keeping stuff secure - it's a funny old business.  I've been fortunate to work at several different ends of that process: firstly within industry, working alongside business-as-usual processes and policies; then at vendors making tools to help automate security processes; and finally in implementation at various sized companies requiring business and technical consulting.

At all stages, the main focus was technology.  Configuring a piece of technology so it was more secure: password management, ACL management, encryption standards, service disabling, policy lock-downs and so on.

Whilst working at numerous vendors, the main focus was on selling the idea that a tool could automate many of the manual tasks associated with keeping data secure - access certification processes, creating roles to manage ACLs, creating audit reports and so on.

One of the big missing areas was the human involvement in the security process.  Whilst tooling undoubtedly has a huge part to play in the full life cycle of information security, it is individuals who implement the processes and configure the tools.

A recent project I was working on focused on role-based access control - using business-level functions to map access instead of a more error-prone and inconsistent system-level approach.  Whilst tooling can help with the creation of roles, for example, it was often non-IT focused business users who would be using the new system.  That would often require education, a strategy and internal marketing to drive the initiative.

I think many security projects contain a more human-focused element - be that education, policy change, control creation or reporting - but this area is often neglected or implemented poorly.

As many larger organisations now start to develop separate infosec teams, often driven by a Chief Information Security/Systems Officer, security will start to become a more pro-active component of business planning and not just a reactive, technology-driven cost centre.

A pro-active, business- and people-led security ideology will lead to longer term business efficiency, and in turn cost savings, but also competitive advantage.

The Best Firewall? People

A firewall.  An aggressive connotation.  A wall, made of bricks and cement, literally on fire.  As far as protection goes, that is pretty good.  Firewalls in a computing and network sense have been around a while, gaining popularity in the late 1980s when inter-networked computers and later the full-blown internet came to the fore.

The main crux of the firewall is to prevent network traffic from reaching a destination, based on a set of rules.  Pretty simple.  One side of the firewall is a trusted 'safe' area (normally known as the private network) and the other side is untrusted, or the public network.  Keeping the two separate makes sense and allows for greater control over network traffic and its data.  So firewalls are generally placed at the outermost part of the private network, often with a demilitarized zone (DMZ) in between, which acts like a no-man's land where both public and private traffic can enter.
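That rule-based, first-match behaviour can be sketched in a few lines.  The rule table below is entirely made up (the addresses, ports and default-deny policy are assumptions for illustration), but the evaluation logic is the essence of how packet filters decide:

```python
from ipaddress import ip_address, ip_network

# Hypothetical first-match rule table: (action, source network, port).
RULES = [
    ("allow", ip_network("10.0.0.0/8"), 22),    # SSH from the private network
    ("allow", ip_network("0.0.0.0/0"), 443),    # HTTPS from anywhere
    ("deny",  ip_network("0.0.0.0/0"), None),   # default deny - match any port
]

def evaluate(src: str, port: int) -> str:
    """Return the action of the first rule matching the packet."""
    for action, network, rule_port in RULES:
        if ip_address(src) in network and (rule_port is None or rule_port == port):
            return action
    return "deny"  # deny anything no rule covered

print(evaluate("10.1.2.3", 22))      # allow - private-side SSH
print(evaluate("203.0.113.9", 22))   # deny  - public SSH is blocked
print(evaluate("203.0.113.9", 443))  # allow - public HTTPS
```

Note the rule order matters: the final catch-all 'deny' only applies to traffic no earlier rule has claimed.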

As security has gained focus over the last ten years or so, a few things changed.  Security moved from focusing only on potentially 'evil' public traffic and 'safe' private traffic, to keeping data secure from all sorts of angles.  Insider threats.  Proliferation of viruses and malware within the private network.  Access control levels on data to improve confidentiality.  Encryption to prevent eavesdropping, and more secure storage to upkeep data integrity.  This has led to many different levels or 'rings' of security.

Firewalls on the outermost part of the private network, intrusion detection and prevention systems, localised patch and virus management on individual machines, authentication and authorisation to restrict access, and security policies and procedures to manage networks and data effectively.  All part of a complex ring of security.

The term 'ring of security' was used mainly when discussing operating system processing.  The rings simply provide levels of process authorisation to keep potentially unsafe operations from affecting the core kernel of the OS.  Gates between the rings allow processes to be managed through neatly defined routes.  Today, it can also reference the general security approach of an organisation.

Security can and should cover a multitude of technologies and processes.  I guess this can be exemplified by the number and variation of topics within many security qualifications such as the CISSP or CISA.

One of the key angles of security management I think is often missed is that of physical, and indeed human, security.  It's all very well having 128-bit encryption on your hard disk, but if your laptop is left unattended in a meeting room without swipe card access, it's not particularly secure, is it?

There are many common security pitfalls that are created by people, not tooling:
  • Security badges not being worn / checked.  Why do you have one round your neck?
  • Tail-gating - how many times have you held a proximity swipe door open for someone...
  • Shared desktops - colleague 'just needs to check something' or check their email.
  • Not logging out.  Turning the monitor off isn't quite the same.
  • Everyone knows a story about passwords.  Weak and easy to break.  Under the coffee mat.  Recycled with a new number on the end...
  • USB sticks / netbooks / portable drives, left unaccompanied.
  • Printed documents not cleared down from printers - clear desk policy.
  • Water cooler chat - social engineering for information, processes, passwords, locations is easier than you think.

All of which can easily be managed and avoided, firstly by the definition of some basic security best practice, and secondly through internal awareness and 'marketing'.  Making non-IT users aware of the importance of data security is key - how it affects not only the IT geeks, but the actual profitability, usefulness and general success of business units.  Awareness should also be followed up by readily available security resources - consistent internal documentation is a start; training, websites and regular updates are ideal.

The concept of using humans as firewalls is not new, but technology can only protect data so far.  Human intervention and judgement is now just as important as security moves into the mainstream of effective business management.

You wouldn't trust your car to drive itself, so why let only technology look after your data?  

Emerging Threats

I was at a recent Information Systems Audit and Control Association event which discussed the future of threats to the individual and the enterprise, including concerns such as cyber attacks, advanced phishing, data governance and more.

Whilst working at Oracle as an EMEA consultant, I worked with many large organisations (> 30k employees) within the financial services and telecoms industries, focusing on their approach to identity management - working out who has access to what and why.  This is still a fundamental approach to basic information security management, but now we are seeing information being accessed in a variety of different ways, which in turn creates opportunities for information and data attacks of new and sophisticated kinds.

  • Mobile - The increased use of mobile and hand-held devices, whilst increasing the ability for remote working, also increases the risks to the individual, from the likes of rogue apps, identity theft and viruses.  Whilst many major app stores have some basic verification of the validity of an app's content, not all do, creating an opportunity for badly written or in fact rogue apps to proliferate quickly across mobile devices.  Also, as phones become more adept at handling complex data such as videos, PDFs and even terminal services sessions, what protection does your mobile provide for things like anti-virus detection, personal firewall, buffer overflow protection, kernel safety and the like?

  • Phishing - Whilst not a new concept, phishing has now moved into new areas.  Quick Response, or QR, codes allow smart phone users to scan what is in essence a 2D bar code.  This pictorial data representation is impossible to decipher with the naked eye.  Using a simple replacement technique - sometimes even a basic sticker - a QR code can be swapped for a malicious version, navigating the scanner to a rogue website or worse.

  • Third Party Content Management - Sounds like a convention of some sort, doesn't it?  Third party content is what I would interpret as data, often social media, that is associated with an organisation or individual but didn't derive from the organisation or individual in question.  A simple example could be a review of a hotel.  The review data actually originated from a visitor, but obviously references information about the hotel.  This content can proliferate virally across social networks such as Twitter, YouTube and Facebook if the information is sufficiently interesting or topical.  Why is this a threat?  Well, the information could be of malicious intent, such as slander or competitive FUD, right through to a fraudulent website attempting to resell products which it is not licensed to sell or review.  Due to the increasingly interconnected nature of the social graph, the rate at which this information can spread can become a serious threat to an organisation's brand or a user's identity.

  • Email Break - This is really more of a threat for the individual than an organisation.  Most people will have a personal email address.  This address will be asked for when signing up to any online service, store or social network.  The same will be true for online banking.  This puts the user's email address at the centre of their online identity - a bit like the keys to the castle.  Whilst a hacker would potentially need to know several passwords in order to access the user's many online sites and stores, knowing just their email password gives them access to everything in one easy place: reminder emails, password resets, receipts, not to mention the ability to send emails on the user's behalf.

  • Cyber Security Arms Race? - We're all pretty familiar with the cold-war arms race of nuclear subs, missiles and the rest.  But has this now evolved into something a lot less confrontational and more online and subversive?  There have been several alleged online Denial of Service attacks at a country level in recent years, including the North and South Korea stand-off, which many claim included a cyber aspect.  The US recently created a cyber security special advisory team, with security veteran Howard Schmidt providing direct guidance to President Obama.  The threat of DoS attacks bringing down governmental, military and large corporate websites will only increase as more information is made available online.
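One practical defence against the QR-code swap described in the phishing point above is to check the decoded URL before opening it.  This is a minimal sketch: the allowlist and the HTTPS-only rule are assumptions a deploying organisation would set for itself:

```python
from urllib.parse import urlsplit

# Hypothetical allowlist of hosts the organisation expects its codes to point at.
TRUSTED_HOSTS = {"example.com", "www.example.com"}

def is_trusted(scanned_url: str) -> bool:
    """Validate a decoded QR payload: require HTTPS and an exact
    hostname match, so look-alike domains are rejected."""
    parts = urlsplit(scanned_url)
    return parts.scheme == "https" and parts.hostname in TRUSTED_HOSTS

print(is_trusted("https://example.com/menu"))   # True
print(is_trusted("http://example.com/menu"))    # False - not HTTPS
print(is_trusted("https://examp1e.com/menu"))   # False - look-alike domain
```

The point is simply that the human can't verify a 2D bar code by eye, so the scanning software has to do it instead.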

Whilst threats and attacks evolve constantly over time, the increased reliance on the internet will put increasing focus on the identification and prevention of online malicious activity.  Whilst once road, rail and food were the mainstays of an effective social grouping, the internet has now become the de-facto way of not only accessing data and information, but also ordering services, food, entertainment content, news and more - via personal computers, but more likely via mobile devices not initially built with security in mind.

Facey and The Social Graph

Sounds like a good film, doesn't it?  Well, last week, Mark Zuckerberg et al announced the next phase of Facebook development and focus at their F8 conference.

Whilst his stage presence still leaves a lot to be desired, the rich vein of social networking foresight and the feature list are as thought-leading as ever.  Whilst Facebook can claim its 750m (or whatever the number is this week) signed-up users, those users have generally been focused on social interactions.  The show and tell of life.  Updates, sharing pictures, engaging with lost contacts, far-flung family and the like.  You know how you use Facebook.

Over time those interactions tried to branch into different categories.  Bands and businesses created pages.  Groups evolved.  Apps became pandemic.  Facebook contains a lot of folks and this attracts advertisers, attention seekers and information distributors.

However, the concept of the social graph is taking those interactions to the next level.  The idea being that everyone has interactions in different circles (no Google pun intended there) and, if you can leverage those interactions to aid decision making, your social interactions take on more importance.

For example, take a simple purchase decision.  If you could receive feedback, or a comment, or a like, or some other direction from someone you trust, that will help you decide whether a product is good for you or not.  The key to making those decision points work is the word TRUST.  If the people in your social circles are providing that information, you are more likely to accept it.

Your social circle generally tends to contain people who have similar views, backgrounds, spending habits, favourite bands, political leanings and so on.  Of course, there are bound to be people in there whose views you couldn't care less about, but you can't have everything.

That feedback is now available dynamically through the use of Facebook's social plugins.  These copy-and-paste style pieces of HTML and Facebook mark-up allow web owners to place interaction points on any online content.  The plugin then interacts asynchronously with Facebook and the current user viewing the web content, to provide information such as whether anyone in your friends list has recommended, liked, commented on or interacted in any way with the content.

Neat eh?  Powerful certainly, and with the ease of use of the plug-ins, and the improved developer and platform support Facebook are now providing, they are seemingly moving into the territory of social platform provider.

'The platform is the cloud' was one organisation's strapline a few years ago; I think that can now be updated to 'the social platform is the future of the web'.

Simple Design for Happier Users

How many buttons does Google have?  Yes, exactly (2 is the answer if you can't be bothered checking).  OK, so there are a few hyperlinks to click as well, but as far as buttons associated with a form are concerned, there are just two.  How many on Twitter?  Once logged in there aren't any!  How simple can it get?

One of the many things the product design team at Scholabo have to manage is how to control the amount of information each of the end users will be exposed to.  For those who don't know Scholabo, it's an online communication and content distribution site acting as a conduit between schools and parents - the parents being the consumers of information, and the teachers and schools being the producers.

One of the key aims was always to make the decision making part for the end user as small as possible.  By that, I simply mean taking the convention-over-configuration approach to how a user actually uses the system.  We aimed to implement 80% of the end user use cases automatically.  There would be nothing the end user would need to select, configure, choose or decide over, to get the news and information that was pertinent to them, their school or the teachers they wanted to watch.

Obviously that approach took effort in understanding what those use cases were, and how the parent would like to consume the information available to them.  One of the key issues now regarding social-media-led information flow is picking out the valuable data from all of the noise.  There are many ways to filter out data noise, from user-learning techniques that result in automatic filtering, through to manual filtering based on a criteria checklist.  The end result should allow the end user access to the information they are interested in, quickly and simply.

In Scholabo we tried to make the information flow as simple as possible, with the parent having instant access to the school news and content automatically.  In addition they would have a basic choice of which teachers they would like content from.  This layered approach to information flow keeps the noise to a minimum.

A common theme of social media is to aggregate data.  Once a filtering exercise has been completed, the data that is left is often aggregated into a single view the user can muddle through.  Taking Twitter as a quick example, the timelines in Twitter can become unmanageable for a user following, say, several hundred or thousand users.  Many Twitter clients are available that allow you to group the data into streams or interest areas, letting the end user manage the data in more bite-sized chunks or threads.

In Scholabo data is automatically grouped based on origin and creation time, making it quick and simple to find what is required.
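That grouping pattern - bucket by origin, newest first within each bucket - can be sketched in a few lines.  The item structure below is hypothetical (Scholabo's real data model isn't public), but the technique is generic:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical content items standing in for school/teacher posts.
items = [
    {"origin": "School news", "created": "2011-11-02", "title": "Term dates"},
    {"origin": "Mr Smith",    "created": "2011-11-01", "title": "Homework"},
    {"origin": "School news", "created": "2011-11-01", "title": "Fete"},
    {"origin": "Mr Smith",    "created": "2011-11-03", "title": "Trip letter"},
]

# groupby requires its input to be sorted by the grouping key first.
items.sort(key=itemgetter("origin"))

grouped = {}
for origin, group in groupby(items, key=itemgetter("origin")):
    # Within each origin, show the newest content first.
    grouped[origin] = [p["title"] for p in
                       sorted(group, key=itemgetter("created"), reverse=True)]

print(grouped)
# {'Mr Smith': ['Trip letter', 'Homework'], 'School news': ['Term dates', 'Fete']}
```

The end user then sees a small number of tidy streams rather than one undifferentiated feed.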

From a product perspective, buttons and links are kept to a minimum.  Not only is this quite nice from a UI perspective, but it also makes the decision making process for the end user quick and simple.  The number of duplicate pathways - button and link routes to a certain page - is minimized, giving each link a distinct, modular task and removing ambiguity and confusion.

When developing a SaaS solution, one of the key dilemmas facing a product management team is how to drive the use case and road map list.  A longer 6-12 month strategy can be pretty straightforward, but enhancement requests always arrive from individual end users who want a specific option, change or alteration.  In this case, the idea is to try and baseline the user requests to identify the weakest link.

Which feature can be implemented that covers the bare minimum of all similar requests without damaging the user experience of ANY users?  As with any site, it can be difficult to implement edge or exception cases, as they will potentially impact other users who have no requirement for the new feature.

Simplicity generally results in robustness.  Not just from a pure coding perspective, but from an end user perspective.  If they deem a feature or service robust, they have a clear association in their mind of what the service will offer them - what features, what questions it will answer, how much time it will require and so on.  It's like developing a brand.

If your product is known for one thing and one thing only, it's quick for existing as well as new users to identify with your product and make the best use of its service.

The DNA of Search

The internet.  It's a big old place.  Full of stuff.  Files, stories, movies, music, pictures, news, reviews.  You name it, the internet has a virtual online version of it.  But how do you find what you want?  Via a search engine of course.

The search engine of choice is generally seen to be Google.  Obviously there are local variations to this, with Baidu in China for example, and other more specialised engines such as ChaCha, which focuses more on human analysis of the results instead of pure computational searching.  However, to generally get the most out of the internet you need to search, index and categorise what you want to view.

The basic idea behind a search engine is firstly for it to create an index of available web pages.  This index is created by automated robots or spiders that crawl as many existing public web pages as possible, checking links and identifying the contents of the HTML pages to allow searches to be performed.

A user then enters a list of keywords (sometimes combined with operators such as AND, OR and NOT) to help explain what they are looking for.  The search engine scans its index, trying to perform a basic match.  The result set that the search engine returns is then presented to the user.
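The index-then-match idea can be sketched with a toy inverted index.  The three 'pages' below are made up, and real engines are vastly more sophisticated, but the core structure - a map from word to the set of pages containing it - is the same:

```python
from collections import defaultdict

# Toy corpus standing in for crawled pages.
pages = {
    "a.html": "security news and reviews",
    "b.html": "music reviews and interviews",
    "c.html": "security patch news",
}

# Build the inverted index: word -> set of pages containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

def search(*keywords):
    """AND semantics: intersect the posting sets of every keyword."""
    sets = [index.get(k, set()) for k in keywords]
    return set.intersection(*sets) if sets else set()

print(sorted(search("security", "news")))  # ['a.html', 'c.html']
print(sorted(search("reviews")))           # ['a.html', 'b.html']
```

OR and NOT operators would simply swap the set intersection for a union or a difference.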

Now this result set is the important part.  The result set could be quite small, in which case it's generally pretty easy for the person searching to quickly validate and discard any results which they deem to be inaccurate, inappropriate or just downright bad.  However, in general, the result set will be too large to process by hand.  It could contain several thousand hits or sites that would need to be verified or ranked based on their content.

Can you trust what you're looking for?

Most search engines will attempt to perform some basic ranking process.  The ranking could be based on keywords that other users have searched for over a period of time, or on values assigned to indexed results, such as the number of links within a site and so on.  Each search engine will have a proprietary way of ranking results data, which means different engines produce different results.
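As a toy illustration of one such signal, results can be ordered by a simple popularity count.  The inbound-link figures below are invented, and real engines blend many proprietary signals rather than just one:

```python
# Hypothetical ranking signal: inbound link counts per result.
inbound_links = {"a.html": 12, "b.html": 3, "c.html": 40}

def rank(results):
    """Order a result set by a single popularity signal, highest first.
    Unknown pages score zero and sink to the bottom."""
    return sorted(results, key=lambda url: inbound_links.get(url, 0), reverse=True)

print(rank(["a.html", "b.html", "c.html"]))  # ['c.html', 'a.html', 'b.html']
```

Swap the signal (or combine several) and the same result set comes back in a different order - which is exactly why different engines disagree.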

Many search engines will promote the idea of net neutrality which allows network services, responses and searches to be created unhindered and free from the likes of government, corporate or competitive interference.

But can a search engine be free from bias?  Many search engines utilise advertising to generate a revenue stream - do those advertised links cloud the true search result?  Google will identify a paid-for link by tagging it with the word 'sponsored' to provide some clarity.

One other major form of search bias is based on previous user search history.  The idea behind this is to try and personalise the result set based on what the user has previously searched for and the subsequent websites they have clicked through to.  But this increased personalisation, whilst it may have its benefits, starts to reduce the opportunity for new and random results.  The user becomes increasingly held within their own bubble of navigation and knowledge, not knowing what they don't know.
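The bubble effect is easy to demonstrate with a toy re-ranker.  The click history and page topics below are invented; the point is simply that anything outside the user's history scores zero and sinks:

```python
# Hypothetical click history: topic -> number of past clicks.
history_topics = {"security": 5, "music": 1}

# Hypothetical topic label for each candidate result.
page_topics = {"a.html": "security", "b.html": "music", "c.html": "travel"}

def personalised_rank(results):
    """Boost results matching the user's past clicks.  Note c.html,
    a topic the user has never clicked, always ends up last - the
    'bubble' forming in miniature."""
    return sorted(results,
                  key=lambda url: history_topics.get(page_topics[url], 0),
                  reverse=True)

print(personalised_rank(["b.html", "c.html", "a.html"]))
# ['a.html', 'b.html', 'c.html']
```

The user never sees why the ordering changed, which is precisely the transparency concern raised above.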

The main concern with such an approach, is that the end user has no real knowledge of the results ranking and parsing process, so they become unaware of other potentially valuable search results at their disposal.

It will be interesting to see over the coming years as the internet undoubtedly becomes larger and more diverse, whether search engine theory and the underlying ranking algorithms can become sophisticated enough to produce personalised content, whilst remaining open to the random and new.

Has The Big Dog Had Its Day?

The end of the megalithic software vendor?

Who Are The Big Dogs?
When you think of the big dog, game-changing software companies, you think of Microsoft (PCs), Apple (cool-factor), Oracle or IBM (enterprise), Google (search and now mobile) and, I guess if you stretch it a little, Cisco (yes, I know they are primarily network hardware, but that hardware needs an OS) too.  There are a few others, but you get the idea.  Most of these big dog software vendors are indeed just big dogs, and occupy many positions in the NASDAQ's top 10 for market capitalisation.

But Has the Big Dog Had Its Day?
Is there a point where organisations like these either become too 'large' or simply become less relevant?  15 years ago Windows was seen as the only way for desktop operating systems, certainly within the enterprise, being bundled on the latest HP and Dell hardware without question.  Today, it doesn't take long to find the latest netbook running Ubuntu or another Linux distribution.  Oracle was once seen as the de-facto standard in enterprise databases.  Whilst that is still the case in many parts, the number of greenfield sites has reduced dramatically, resulting in Oracle diversifying massively and, at one point, running an acquisition trail of something like 50 new companies in 5 years.  Why such diversification?  The need to stay ahead.


Microsoft, Cisco and Oracle shares, for example, are pretty much stagnant compared to 10 years ago.  However, I'm not really a fan of using that as a metric for the success or growth of the software industry as a whole, mainly as you could equally argue that shares in Google or Apple have risen significantly, peaking several thousand percent higher on some days.

Big is Good?
Larger companies are generally good at scale.  Massive scale.  They can leverage internal and external buying power, create efficiencies by removing personnel duplication, and invest heavily in R&D and automated processes.  This generally allows larger companies to manage huge client bases and distribution channels and develop incomparable brands.  However, that scale comes at a cost - agility.  A huge client base makes it difficult to take quick product or marketing decisions without affecting (potentially negatively) large percentages of your customer base.  It's often difficult to add new features, release new versions or implement strategic plans without considerable effort from all parts of the organisation.

Those issues are the same for any large organisation.  Why is software any different?  Well, I guess software always has a large amount of innovation.  One of Microsoft's initial goals was to get programming into the home.  Get a compiler in front of the man on the street, give him the tools to create something and see what happens.

The Source Of Innovation?
Today, a novice coder can download the tools and libraries, blogs and videos to be able to create a basic phone app in say 24 hours.  That is pretty incredible.  Put that approach in the hands of skilled developers and innovation proliferates.  But is that a threat to the larger organisation?

The many Linux distributions have always claimed to compete on features with the better known and more costly closed source alternatives, but recently the argument seems to have taken hold, with many of the smaller netbook-style machines switching to the less memory- and processor-intensive distributions.

The many flavours of Linux are generally supported by communities of volunteer developers, adding features, fixing bugs, porting drivers and so on.  Is that individual innovation a threat to a corporate machine?  Surely, if a distribution or feature became popular it would just be acquired, right?  Well, perhaps, but when it comes to open source and free software, the sum of the parts is generally greater than the individuals, and cash can't buy an ethos.

So the larger software machines will fall on their own capitalist sword then?  Well, I'm not saying that either.  The key to their future is to understand the true source of the innovation.  You want a Windows PC and all the associated closed source software running on a free Linux distro?  That can be done in a few hours.  There are Office copies, iTunes copies, Adobe copies - you name it, there's a free open source version of a popular closed source product.  But how many open source products are there which don't have a closed source equivalent?  Surely that would indicate that the innovation originated outside the closed source large machine?

Has The Dog Bitten Its Tail?
By creating a larger ecosystem of users, a large closed source, license-based software vendor can achieve many of its aims, namely to keep its shareholders happy and pay out decent dividends.  However, those aims often appear to come at the expense of user happiness.  I'm talking license cost, long term corporate lock-in, lack of control, lack of response and ultimately lack of choice.

Realistically, many of the larger corporate vendors have the cash and experience to buy themselves out of any potential trouble, through acquisitions and re-branding, but the true source of the innovation that will drive these organisations for the next 15 years is often more difficult to control.

Pizza, Music & Beer - How To Build a Rockstar Team

Building any new team takes considerable effort, thought and direction.  Building a core startup team, capable of work far outreaching the number and talent of the people involved, is the holy grail, but not impossible.

  • Create and publicise an end goal - To get the best out of any team requires direction.  That direction comes from identifying an end goal.  What is the team really there for?  Not the small everyday tasks and job-description duties, but the underlying value the team delivers above and beyond everything else it has to do.  Those goals are generally far reaching and ambitious but still succinct and easy to understand and benchmark.  "It doesn't matter what it takes, but our website must always be up" etc.
  • Identify the path ahead - The path ahead will be rocky.  But it needs identifying for at least the next 6 - 12 months.  The finer points of that journey will (and hopefully should) change.  That's the flexibility of working in a new and generally small team.  The route should include key milestones (either product, personnel or team related) that can be measured.  If you can measure it you can alter your direction to make sure you stay on track.
  • Choose the right tools to help you - This is vital.  Tooling could be as simple as making sure your hosting provider or IDE is consistent and correct for your team, or identifying the best framework for tasks within a product.  Selecting the correct tools allows you to focus on your key value-add skills.  If you're a race car driver you don't want to spend time servicing your car.  Spend time canvassing opinion and making informed decisions.  Involve as many people as possible.  It may take longer, but the end result is more robust and long standing.

  • Make things repeatable - Standard Agile coding practice is to keep your code DRY (Don't Repeat Yourself) with respect to logic, so this may seem contradictory.  What I'm really referring to is simple tasks and operations in the team that can be relayed to others - new team members or even partners, outsourcers and other interfaces to your team.  If a process is repeatable it can be done by more than one team member (reducing risk), and that generally means it can be improved through iteration and pair analysis.
  • Get people with passion - Techies love new stuff.  Many know a lot about a lot of stuff - tools, languages, infrastructures, libraries and so on.  How do you select the best personnel?  Well, ideally that would be through your own personal network.  Knowing people individually, either through direct work relationships or recommendations, is considerably better than the standard interview, show, tell and test method.  Identify people with a passion for something different.  An obscure language.  An unheard-of rock band.  A blog on turtles.  The content doesn't matter.  If someone has passion, identify it and use that passion within the team.
  • Document without being bureaucratic - Documentation is often seen as a hindrance.  It's time consuming.  "We'll do it later".  Creating expressive code that is self-explanatory is a good habit from an Agile / XP standpoint - use of clear variable and method names, for example.  But also document certain processes: backup steps, roll-out process, recovery steps, password hooks and so on.  It doesn't need to be War and Peace - a text file on a shared access point, or even a whiteboard note.  Get people to work as if they're training someone else to replace them, picking up simple processes, making them repeatable and documented.
  • Have structure without restriction - Again this is more focused on simplicity rather than complexity.  Basic contact hours for team members.  What IM channels can you expect people to use at what times?  Set a day each week where everyone is in the same place at the same time.  Set a time for a virtual meeting twice a week if people work in different states or cities.  Also important from an organisational perspective is making people aware of who to go to for certain company or personnel-related issues.  A basic structure allows people to understand their role and what is expected from certain people and scenarios.  This allows them to concentrate on what they're good at.
  • Add pizza, music and beer* - Building a successful team isn't just about the product, company or 'results'.  It's about building individual careers and personalities.  Work is the place you spend the majority of your waking hours.  Make it fun.  Make it personal.  Make it real life.  Share interests and order a few chilli-beef pizzas and a carton of beer for Funday Fridays.  It's only a job after all.

*substitute as necessary.
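The DRY and expressive-naming points above can be sketched in a few lines of code.  This is a minimal, hypothetical illustration - the function names, constants and figures are invented for the example, not taken from any real codebase:

```python
# Repetitive, opaque version: the 0.2 rate is duplicated and the
# names explain nothing to a new team member.
def f1(x):
    return x * 0.2

def f2(x):
    return x * 0.2 + 4.99

# DRY, expressive version: one source of truth for each magic number,
# and names that document themselves.
VAT_RATE = 0.2
SHIPPING_FLAT_FEE = 4.99

def vat_amount(net_price):
    """VAT due on a net price."""
    return net_price * VAT_RATE

def total_with_shipping(net_price):
    """Net price plus VAT plus the flat shipping fee."""
    return net_price + vat_amount(net_price) + SHIPPING_FLAT_FEE
```

If the VAT rate changes, the second version needs one edit; the first needs a hunt through the codebase - which is exactly the repeatability-with-one-source-of-truth idea applied to code.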

Agile Programming and Agile Selling

In web tech, everyone is keen on Agile development.  If it's not agile (or a variant delivering similar results) most folks aren't interested.  The main themes behind Agile development tend to focus on speed, change, transparency and increased satisfaction.  Those themes, in reality, can, and perhaps should, be applied to most areas of customer-facing business.

Everyone wants something yesterday.  Most people want improvements or changes to existing processes, standards or services.  Most people want to know what's going on  - and that is especially so when things go wrong.  So that means most people want transparency too.

One of the themes of Agile development is feedback.  Good quality end-user feedback is like the holy grail.  I don't mean good as in they like your stuff; I mean good as in it's quick, reliable and appropriate, and can accelerate the development process with regards to identifying bugs, incorrect features or processes.

Selling should be the same.  A new market or product will not necessarily take the form of existing channels.  Understanding the customer, their problems, the solution they require and how that should be packaged and communicated is key to any sales execution.

How agile are you?

How is that done in practice?  Replication of existing practices is generally the normal starting point.  "If it's worked before it will work again..."  A decent thought process.  What is key is the ability to learn quickly whether that process works or fails.  That requires a metric to determine what is classified as successful, as well as a method to collate and collect that metric - the feedback theme again.  How is that possible?  Metrics are all around.  Click-through rates, response rates, conversion rates, attendance rates, signup rates, referrals, recommendations, cancellations.  The key is understanding the numbers that are returned.  What is considered a bad enough result to warrant a change in direction?

Once a decision has been made to stop or alter an approach, it is then key to quickly figure out a new approach, iron out the deficiencies of the original method and run again (iterative development?).  This is really just a feedback loop.  The difficulty is that the customer doesn't know they're being canvassed for their opinion.  They're simply being offered a product or service via a marketing, advertising or sales channel and either converting as a purchaser or ignoring.  As the product owner or seller, it's then key to understand either individually why the conversion failed, or strategically whether a failure occurred across a larger group.
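That measure-then-decide loop can be sketched in a few lines.  The 2% threshold and the campaign figures below are purely illustrative assumptions, not industry benchmarks:

```python
# Minimal feedback loop: measure a conversion rate, then decide whether
# to keep the current approach or change direction and iterate.
MIN_CONVERSION_RATE = 0.02  # illustrative threshold, not an industry figure

def conversion_rate(purchases, visitors):
    """Fraction of visitors who converted to purchasers."""
    return purchases / visitors if visitors else 0.0

def next_action(purchases, visitors):
    """Crude decision rule: change direction when conversions fall short."""
    rate = conversion_rate(purchases, visitors)
    return "keep approach" if rate >= MIN_CONVERSION_RATE else "change direction"

# Two iterations of the loop with made-up campaign figures.
print(next_action(purchases=12, visitors=1000))  # 1.2% -> "change direction"
print(next_action(purchases=45, visitors=1000))  # 4.5% -> "keep approach"
```

The hard part in practice isn't the arithmetic, it's choosing a threshold that genuinely warrants a change in direction - which is exactly the question posed above.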

Like in software, you can't create a product or feature in isolation from the people who will use it.  You can create the most beautifully complex piece of code in the world, but if no one uses, needs or likes it, it's not really serving its purpose.  Conversely, you'll often find the most hacked-up, short-term, sticking-plaster of an approach is massively successful, loved and used beyond all expectation.  Both of those circumstances can be normalised if a solid feedback and rescoping process is run.

What's All The Google Plus Fuss?

Unless you've had your netbook / laptop / iPad / iPhone / Android / desktop PC (do they really still exist?) switched off in the last week or so, you would have noticed many people tweeting and blogging about Google's new social networking project.  Google+ (or Google Plus - there isn't a consistent spelling afaik) is the so-called death eater of Facebook, the overnight disease of Twitter, and who knows what for the recently Microsoft-acquired Skype?  If the hype is to be believed, of course.

With features that arguably take the best from the most popular of the existing social networking platforms, it's easy to see why so much hype and attention has been placed upon it.  With any product, though, there are benefits to be realised from having first-mover advantage.  In the case of Google+, you could argue they've let other players iron out the market before coming along with a more succinct approach.  If it wasn't for the Wright brothers we wouldn't be enjoying the A380.  Google's argument is they can provide better privacy through 'Circles' and better connectivity through 'Hangouts'.

So for that argument to hold, it assumes that Google+ is firstly a competitor of the likes of Facebook / Twitter / Skype (let's say FbTwSk to save my poor fingers from typing) and that its features are at worst comparable and at best an improvement.  The features don't need to be new.  Biggest mistake number 1 of many entrepreneurs trying to enter a market: nothing needs to be new, it just needs to be better.

Assuming the features are at least on par, will Google+ be able to take over, or at least leverage, the same user base as FbTwSk?  Well, it would be unlikely that any new user of Google+ has no social networking presence at all.  It's more likely Google+ will be aimed at existing users of socnet sites, offering them an improvement on what they already have.  So this must be where the competition angle starts?

The launch of Google+ was done with restricted signup.  Although a pretty old trick, it was nonetheless pretty cute.  This is just the same as making people queue to get into a boutique shop sale.  It creates a funnel effect which creates a false sense of want amongst those who cannot get in.  By simply restricting access to something, Google instantly created a marque good, reducing supply to increase price via excess demand.

Those who did have access became the minority.  To amplify their minority status, what would be the best way to tell others about being in the minority?  You utilise your existing social network.  So instantly Google leverages its so-called 'competitors' to promote its own product.  Nice.

"Mark Zuckerberg has joined Google+" was a headline I saw tweeted yesterday, and he was the user with the largest number of followers.  A lot of people seemed surprised.  He is probably the biggest name (I don't mean the 14 chars) in social networking, so why wouldn't he have an account on a new social networking site?

I guess there's a few ways of looking at that:
  1. He doesn't join every new social networking site.  There's probably several hundred a day starting up.
  2. It increases brand awareness of both Google and FB.  So mutually beneficial / damaging?
  3. By signing up he keeps his enemies close.  Or at least sees their ammunition.
  4. Does it encourage existing FB users to sign up too?
  5. Leverages an interdependence between the two products.  How many real estate agent shops / car dealers do you see working right beside other real estate agent shops / car dealers? 
What does it all mean?  Well Facebook relies heavily on advertising.  Advertising works when you have a large pool of people to aim your adverts at.  Facebook has got a few users signed up by now.  Google is first and foremost a search engine, which relies heavily on advertising.  Advertising works when you have a large pool of people to aim your adverts at.  Google has got a few users signed up by now.  Hmm. 

Both doing the same thing, but coming from different angles.  From a purely business related viewpoint, what Google has done is not that surprising.  They've looked at their existing customers and provided them with a value added service based on what's best in the industry.  No different to say BMW adding in a free generic MP3 adapter.  But do we need another social networking outlet?  Time will tell but you can only improve something so far before it needs reinventing.

House on the Cloud?

I work in IT.  I know a few things about computers.  So when someone mentions the word 'cloud' I generally know what they're talking about.  And generally glaze over when they start talking about 'the future', or 'amazon', or how they're working on a 'cloud infrastructure'.  So what?  Big deal.  Will it improve the business or end user experience?

In the short term, probably not.  Most organizations will have a cloud project of some sort, even if that project is simply to find out what the cloud project should be.  That's fair enough.  The technology, process, security and personnel of cloud are *relatively* new in comparison to things like client-server computing or thin-client infrastructures.  However, the more subtle uses of cloud-like services have started to appear in my home.  And that I am all for.

Take television, for example.  Last year I upgraded my satellite kit to include a disk-based recording system, so now I can record TV and watch it at the time I want to.  Nothing majorly new there.  However, the service also allows me to watch series and movies 'anytime' I like, with the media either streamed or downloaded locally to my disk-based recording system.  I now no longer need to buy a DVD box set.  I don't even need to 'own' anything; I just pay for the service and enjoyment of watching The Sopranos season 1 from the beginning, as delivered through the satellite and broadband infrastructure of my TV provider.

Are you in the clouds?

I also recently upgraded my mobile - yes I know it seems like I lived in the dark ages, but I was waiting for you guys to rid me of the bugs with your first mover advantage - to a more capable smart phone.
I instantly downloaded the Kindle for Android app - a combination I may add that couldn't be more perfect if they tried.  Within minutes I had access to copyright-free (and thus cost-free...) classics such as The Origin of Species, The Wealth of Nations, The Communist Manifesto and The Life of Buddha.  All 4 will no doubt be great reads, but I would never have bought the hard copies, mainly due to cost, space and the time it takes to read (everyone knows reading a Kindle book via a smart phone takes only 1/100th of the time of reading a 'real' book, the 99% saving coming mainly from coolness....).  So now, my phone, or the service I'm actually subscribed to, contains all my reading material, bookmarked, organized and ready for use 24 x 7 without me having to actually physically own anything.

The same principles can also be applied to music, which was probably first on this bandwagon from an entertainment perspective, with services that allow you to create your own online radio station.  The end result is you listen to music without owning the disk or MP3.

This increased 'as-a-service' approach will soon start to cover other elements of our non-work life, as long as the end product isn't altered or degraded.  Most people want good service at a better price, and this virtual outsourcing approach certainly fits the bill for the time being.  Our homes could become objectless states of service delivery, and all we have to do is cough up the cash for the pleasure.

So in a changing world, spending cash will be one thing that stays the same.

Do Obscure Tools Make Better Products?

If all mechanics used the same set of tools, took the same approach and offered the same service, they'd all cost the same and be highly competitive.  This is true for any homogeneous service or product offering.  Salt is salt no matter where you buy it from, so competition is based on price, as salt from one supplier is deemed to be substitutable with salt from another supplier.  The opposite approach is making your product or service offering highly differentiated, thus creating brand or product loyalty.  Here, competition can't be based on price, as no other offering in the market place is a true replica of your own, so substitution can't occur.  Non-price competition allows you to command a higher price and gain greater competitive advantage.

Not all products or offerings are suitable for differentiation, as a lot depends on the market place and the existing conditions.  However, one way to become differentiated is to alter what tools and processes you use to create your offering.  I'll focus on products here (software, to be exact), but you can apply this to any service offering, in which case your toolbox is simply the approach and process you use to create your offering.

Luxury cars (thinking BMW, Mercedes, Rolls Royce, Jensen here) all have very proprietary build methods.  The electrics system in a Rolls Royce will not fit a Mercedes and vice versa.  To replace the suspension on a BMW you require specialist tools that only a BMW-focused mechanic would own.  Doing things differently allows the products to become specialized, offer different features and create a niche within the market place.

What's in your toolkit?

When creating software there are many choices to make: what functionality is required, what framework to use, the development approach, the development methodology, the release cycle and so on.  There are many, many books on language frameworks and best practices, explaining in great detail why things have to be done a certain way and the rationale behind it.  So, if everyone is doing everything the correct way, using the correct framework, what will the resulting software look like?

Many IT Directors up and down the land will often choose the most popular language for a certain project.  Why?  Well, from their perspective they are seen to be making a risk averse choice - a popular language probably means labour is cheap, support is plentiful, SDKs are stable and time to deliver should be lower.  And to be honest, that is quite a pragmatic approach.

But what about the start ups - the actual software companies offering new ideas, sites and products?  If they choose to use the most common languages and frameworks, yes, they can probably get some hackers for cheap, but won't the end product be the same as everyone else's?  Well, you can probably argue that if a toolkit or approach is mature, programmers will be so well versed they can be original and optimize how that toolkit is used.  Paul Graham's book Hackers & Painters argues that using an obscure language (his example was Lisp) firstly identifies coders with a real passion and enthusiasm for their tool of choice.  Lisp isn't the easiest of languages to take on board, so those that do obviously have a great interest in making it, and the products it creates, succeed.  He also argues that using a toolkit different to your competitors' allows you to develop different features a lot faster, as you're not behaving as if you're in an arms race.

Altering your approach or the tools you use can give you fresh impetus on a personal level and also make you stand out from the crowd on a product or services level.  It just takes a few brave steps to make that move, but you'll probably have a lot more fun in the process.

Is the Internet Too Big?

Well, to be honest I'm not sure what 'too big' actually means.  I guess firstly, you would need to define what the internet is, define a metric, create a yardstick, compare the two, analyze the outcome, create some reasoning for your argument and so on....but that really isn't that interesting.  My thought was really around how do we, as simple human beings, consume, use and manage all the data thrown at us from the internet?  And really, is there too much data out there?

Is the Internet getting too big?
Think of the wave of truly internet-ready sites that have become as common as sliced bread, the car and TV.  I'm thinking Google, Wikipedia, Facebook and more recently Twitter.  There are probably others that most people could not live without, but most people on the planet are likely to have heard of at least one of those 4, even if they'd never used them or indeed owned a computer.  They have become part of our working and personal lives.  We alter our patterns and habits around them, arrange social events, research topics and get our news from them.  But are sites such as Google and Twitter, once a great catch-all for our questions and queries, becoming too cumbersome and less focused?

Wikipedia UK has over 3,521,648 entries alone.  Twitter has over 190 million registered users producing 65 million 140-character posts every day.  Google responds to 34,000 searches a second.  The numbers become meaningless after a while.  If a computer is processing the results, who cares?  Well, what about us as users of these services?  Simple human beings with small (some smaller than others) brain space, unable to sift and filter the data we need.  Being able to search massive amounts of data is great if you're looking for something specific.  If you're not, you'll just waste time sifting.
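To put those figures in some perspective, a couple of lines of arithmetic on the numbers quoted above is enough:

```python
# Rough scale arithmetic on the figures quoted above.
POSTS_PER_DAY = 65_000_000      # tweets per day
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

posts_per_second = POSTS_PER_DAY / SECONDS_PER_DAY
print(round(posts_per_second))  # roughly 752 tweets every second
```

Even at a glance, that's hundreds of new items every second of every day - far beyond anything a human reader could sift unaided, which is rather the point.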

The creation of sites such as StumbleUpon is a basic attempt at adding some logic and value to the potentially meaningless data we are sometimes faced with.  "Stumbling" is a way of being presented with random web pages based on some basic search characteristics you're interested in.  It's more of an entertainment tool, but the idea has vast potential for things like contextually driven news and collaboration.  If the first part of the second boom is social networking, I'd bet at least 2 cents that the second part (of the second part) is actually taking the vast levels of communication, broadcasting and interaction to a level of context and automatic personal filtering, without the need to even think about what data we require.