Facey and The Social Graph

Sounds like a good film, doesn't it?  Well, last week Mark Zuckerberg et al. announced the next phase of Facebook development and focus at their F8 conference.

Whilst his stage presence still leaves a lot to be desired, the rich vein of social networking foresight and the feature list are as thought-leading as ever.  Whilst Facebook can claim its 750m (or whatever the number is this week) signed-up users, those users have generally been focused on social interactions.  The show and tell of life.  Updates, sharing pictures, engaging with lost contacts, far-flung family and the like.  You know how you use Facebook.

Over time those interactions branched into different categories.  Bands and businesses created pages.  Groups evolved.  Apps spread like a pandemic.  Facebook contains a lot of folks, and this attracts advertisers, attention seekers and information distributors.

However, the concept of the social graph takes those interactions to the next level.  The idea is that everyone has interactions in different circles (no Google pun intended there) and, if you can leverage those interactions to aid decision making, your social interactions take on more importance.

For example, take a simple purchase decision.  If you could receive feedback, a comment, a like or some other direction from someone you trust, that would help you decide whether a product is good for you or not.  The key to making those decision points work is the word TRUST.  If the people in your social circles are providing that information, you are more likely to accept it.

Your social circle generally tends to contain people who have similar views, backgrounds, spending habits, favourite bands, political leanings and so on.  Of course, there are bound to be people in there whose views you couldn't care less about, but you can't have everything.

That feedback is now available dynamically through the use of Facebook's social plugins.  These copy-and-paste pieces of HTML and Facebook mark-up allow web owners to place interaction points on any online content.  The plugin then interacts asynchronously with Facebook and the current user viewing the web content, to provide information such as whether anyone in your friends list has recommended, liked, commented or interacted in any way with the content.
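To make that concrete, the embed itself is little more than a placeholder element plus an asynchronous load of Facebook's JavaScript SDK.  The sketch below (in TypeScript, injecting the markup from script) follows the element names and SDK URL Facebook documented at the time, but treat the details as illustrative rather than definitive.

```typescript
// Hypothetical sketch: render a Like-style social plugin on a page by adding
// a placeholder element and loading Facebook's JavaScript SDK asynchronously.
// Element names and the SDK URL reflect Facebook's documentation of the era.

function embedLikePlugin(containerId: string, pageUrl: string): void {
  // Placeholder the SDK converts into the plugin once it has loaded.
  const placeholder = document.createElement("div");
  placeholder.className = "fb-like";
  placeholder.setAttribute("data-href", pageUrl);
  placeholder.setAttribute("data-layout", "standard");
  placeholder.setAttribute("data-show-faces", "true");
  document.getElementById(containerId)?.appendChild(placeholder);

  // Load the SDK asynchronously so it never blocks the page rendering.
  const sdk = document.createElement("script");
  sdk.async = true;
  sdk.src = "https://connect.facebook.net/en_US/all.js#xfbml=1";
  document.body.appendChild(sdk);
}

// Usage: drop the plugin into a container somewhere on the current page.
embedLikePlugin("social-bar", "https://example.com/article/42");
```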

Neat, eh?  Powerful, certainly, and with the ease of use of the plug-ins and the improved developer and platform support Facebook is now providing, the company is seemingly moving into the territory of social platform provider.

"The platform is the cloud" was one organisation's strap line a few years ago; I think that can now be updated to "the social platform is the future of the web".

Simple Design for Happier Users

How many buttons does Google have?  Yes, exactly (2 is the answer if you can't be bothered checking).  OK, so there are a few hyperlinks to click as well, but as far as buttons associated with a form are concerned there are just two.  How many on Twitter?  Once logged in there aren't any!  How simple can it get?

One of the many things the product design team at Scholabo has to manage is how to control the amount of information each end user is exposed to.  For those who don't know Scholabo, it's an online communication and content distribution site acting as a conduit between schools and parents.  The parents are the consumers of information and the teachers and schools are the producers.

One of the key aims was always to make the decision-making part for the end user as small as possible.  By that, I simply mean taking the Convention-over-Configuration approach to how a user actually uses the system.  For 80% of the end-user use cases, we aimed to implement these automatically.  There would be nothing the end user would need to select, configure, choose or decide over to get the news and information that was pertinent to them, their school or the teachers they wanted to watch.
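As a rough sketch of that convention-over-configuration idea (the type and function names here are hypothetical, not Scholabo's actual code), a new parent account can be wired to its school's feed automatically, leaving teacher subscriptions as the only optional choice:

```typescript
// Hypothetical sketch: sensible defaults applied automatically when a parent
// joins, so the common case needs no configuration at all.

interface ParentAccount {
  name: string;
  schoolId: string;
  followedTeacherIds: string[];
}

function createParentAccount(name: string, schoolId: string): ParentAccount {
  // Convention: every parent automatically receives their school's news feed.
  // The only decision left to the user is which teachers (if any) to follow.
  return { name, schoolId, followedTeacherIds: [] };
}

function followTeacher(account: ParentAccount, teacherId: string): ParentAccount {
  // The one optional configuration step: opting in to a teacher's content.
  return { ...account, followedTeacherIds: [...account.followedTeacherIds, teacherId] };
}

// The default account already covers the common case with zero set-up.
const parent = followTeacher(createParentAccount("Sam", "school-42"), "teacher-7");
console.log(parent);
```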

Obviously that approach took effort in understanding what those use cases were, and how the parent would like to consume the information that is available to them.  One of the key issues now regarding social-media-led information flow is picking out the valuable data from all of the noise.  There are many ways to filter out data noise, from user-learning techniques that result in automatic filtering through to manual filtering based on a criteria checklist.  The end result should allow the end user access to the information they are interested in quickly and simply.
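A manual, checklist-style filter is easy to picture: each criterion is simply a predicate, and an item only reaches the user if it passes them all.  The sketch below is generic and uses made-up field names rather than any real implementation.

```typescript
// Minimal sketch of checklist-based filtering: each criterion is a predicate,
// and only items that satisfy every criterion reach the end user.

interface FeedItem {
  source: string;    // e.g. "school" or a teacher id (hypothetical field names)
  topic: string;
  createdAt: Date;
}

type Criterion = (item: FeedItem) => boolean;

function applyChecklist(items: FeedItem[], checklist: Criterion[]): FeedItem[] {
  return items.filter(item => checklist.every(criterion => criterion(item)));
}

// Example checklist: only recent items from sources the user follows.
const followedSources = new Set(["school", "teacher-7"]);
const checklist: Criterion[] = [
  item => followedSources.has(item.source),
  item => Date.now() - item.createdAt.getTime() < 7 * 24 * 60 * 60 * 1000, // last week
];

const items: FeedItem[] = [
  { source: "school", topic: "Sports day", createdAt: new Date() },
  { source: "teacher-99", topic: "Homework", createdAt: new Date() },
];
console.log(applyChecklist(items, checklist)); // only the school item survives
```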

In Scholabo we tried to make the information flow as simple as possible, with the parent having instant access to the school news and content automatically.  In addition they would have a basic choice to decide which teachers they would like content from.  This layered approach to information flow keeps the noise to a minimum.

A common theme of social media is to aggregate data.  Once a filtering exercise has been completed, the data that is left is then aggregated, often into a single view the user can muddle through.  Taking Twitter as a quick example, the timelines in Twitter can become unmanageable for a user following, say, several hundred or thousand users.  Many Twitter clients are available that allow you to group the data into streams or interest areas, so the end user can manage the data in more bite-sized chunks or threads.

In Scholabo data is automatically grouped based on origin and creation time, making it quick and simple to find what is required.
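Grouping by origin and creation time amounts to a single bucketing pass.  Again, this is a generic sketch with hypothetical names rather than the actual Scholabo code:

```typescript
// Sketch: bucket feed items by origin and calendar day so related content
// naturally sits together, with no manual sorting by the user.

interface Item {
  origin: string;   // e.g. "school" or a teacher's name
  createdAt: Date;
  title: string;
}

function groupByOriginAndDay(items: Item[]): Map<string, Item[]> {
  const groups = new Map<string, Item[]>();
  for (const item of items) {
    const day = item.createdAt.toISOString().slice(0, 10); // YYYY-MM-DD
    const key = `${item.origin} | ${day}`;
    const bucket = groups.get(key) ?? [];
    bucket.push(item);
    groups.set(key, bucket);
  }
  return groups;
}

const grouped = groupByOriginAndDay([
  { origin: "school", createdAt: new Date("2011-09-20"), title: "Term dates" },
  { origin: "Mrs Smith", createdAt: new Date("2011-09-20"), title: "Spelling list" },
]);
console.log([...grouped.keys()]); // ["school | 2011-09-20", "Mrs Smith | 2011-09-20"]
```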

From a product perspective, buttons and links are kept to a minimum.  Not only is this quite nice from a UI perspective, but it also makes the decision-making process for the end user quick and simple.  The number of duplicate pathways - button and link routes to a certain page - is minimised, giving each link a distinct modular task and removing ambiguity and confusion.

When developing a SaaS solution, one of the key dilemmas facing a product management team is how to drive the use case and road map list.  A longer 6-12 month strategy can be pretty straightforward, but enhancement requests always arrive from individual end users who want a specific option, change or alteration.  In this case, the idea is to try and baseline the user requests to identify the weakest link.

Which feature can be implemented that covers the bare minimum of all similar requests without damaging the user experience of ANY users?  As with any site, it can be difficult to implement edge or exception cases, as they will potentially impact other users who have no requirement for the new feature.

Simple generally results in robustness.  Not just from a pure coding perspective, but from an end user perspective.  If they deem a feature or service robust, they have a clear association in their mind of what the service will offer them - what features, what questions it will answer, how much time it will require and so on.  It's like developing a brand.

If your product is known for one thing and one thing only, it's quick for existing as well as new users to identify with your product and make the best use of its service.

The DNA of Search

The internet.  It's a big old place.  Full of stuff.  Files, stories, movies, music, pictures, news, reviews.  You name it, the internet has a virtual online version of it.  But how do you find what you want?  Via a search engine of course.

The search engine of choice is generally seen to be Google.  Obviously there are local variations to this, with Baidu in China for example, and other more specialised engines such as ChaCha, which focuses more on human analysis of the results instead of pure computational searching.  However, to get the most out of the internet you generally need to search, index and categorise what you want to view.

The basic idea behind a search engine is firstly for it to create an index of available web pages.  This index is created by automated robots or spiders that crawl as many existing public web pages as possible, checking links and identifying the contents of the HTML pages to allow searches to be performed.
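A toy version of that crawl-and-index step makes the idea clearer.  The sketch below works over a small in-memory "web" instead of fetching real pages, which is obviously a big simplification of what production spiders do:

```typescript
// Toy crawler and inverted index over an in-memory "web" of pages.
// Real spiders fetch over HTTP and respect robots.txt; this just shows the shape.

const pages: Record<string, { links: string[]; html: string }> = {
  "/home": { links: ["/news"], html: "<h1>Welcome to the home page</h1>" },
  "/news": { links: ["/home"], html: "<p>Latest news and reviews</p>" },
};

function crawl(start: string): Map<string, Set<string>> {
  const index = new Map<string, Set<string>>(); // word -> set of URLs
  const queue = [start];
  const seen = new Set<string>();

  while (queue.length > 0) {
    const url = queue.shift()!;
    if (seen.has(url) || !(url in pages)) continue;
    seen.add(url);

    // Strip tags crudely and index each word against this URL.
    const text = pages[url].html.replace(/<[^>]+>/g, " ").toLowerCase();
    for (const word of text.split(/\W+/).filter(Boolean)) {
      if (!index.has(word)) index.set(word, new Set());
      index.get(word)!.add(url);
    }

    // Queue any links found on the page.
    queue.push(...pages[url].links);
  }
  return index;
}

console.log(crawl("/home").get("news")); // Set { "/news" }
```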

A user would then enter a list of keywords (sometimes combined with some operators such as AND, OR and NOT) to help explain what they are looking for.  The search engine scans its index trying to perform a basic match.  The result set that the search engine returns is then presented to the user.
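Evaluating those operators against an inverted index is essentially set arithmetic.  Here's a minimal sketch using a hand-built index of made-up pages:

```typescript
// Sketch of AND / OR / NOT matching against a tiny inverted index
// (word -> set of page URLs). Real engines do far more, but the core is set algebra.

const index = new Map<string, Set<string>>([
  ["guitar", new Set(["/a", "/b"])],
  ["lessons", new Set(["/b", "/c"])],
  ["free", new Set(["/c"])],
]);

const allPages = new Set(["/a", "/b", "/c"]);
const lookup = (word: string) => index.get(word) ?? new Set<string>();

const and = (x: Set<string>, y: Set<string>) => new Set([...x].filter(p => y.has(p)));
const or = (x: Set<string>, y: Set<string>) => new Set([...x, ...y]);
const not = (x: Set<string>) => new Set([...allPages].filter(p => !x.has(p)));

// "guitar AND lessons NOT free" -> guitar lesson pages that aren't tagged free.
const result = and(and(lookup("guitar"), lookup("lessons")), not(lookup("free")));
console.log(result); // Set { "/b" }
```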

Now this result set is the important part.  The result set could be quite small, in which case it's generally pretty easy for the person searching to quickly validate and discard any results which they deem to be inaccurate, inappropriate or just downright bad.  However, in general, the result set will be too large to process by hand.  It could contain several thousand hits or sites that would need to be verified or ranked based on their content.

Can you trust what you're looking for? (via morgueFile.com)


Most search engines will attempt to perform some basic ranking process.  The ranking could be based on keywords that other humans have used over a period of time, or on assigning values to index results, such as the number of links within a site and so on.  Each search engine will have a proprietary way of ranking results data, which means different engines produce different results.
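A crude stand-in for such a ranking function might weight how often the query terms appear alongside a link-based signal.  Since every engine's real weighting is proprietary, the numbers and field names below are purely illustrative:

```typescript
// Illustrative ranking only: combine keyword matches with a link-count signal.
// Real engines use proprietary and far richer scoring.

interface PageStats {
  url: string;
  termCounts: Map<string, number>; // how often each query term appears on the page
  linkCount: number;               // links associated with the site
}

function score(page: PageStats, queryTerms: string[]): number {
  const keywordScore = queryTerms.reduce(
    (sum, term) => sum + (page.termCounts.get(term) ?? 0), 0);
  // Dampen the link signal so popular pages don't drown out relevant ones.
  return keywordScore + Math.log1p(page.linkCount);
}

function rank(pages: PageStats[], queryTerms: string[]): PageStats[] {
  return [...pages].sort((a, b) => score(b, queryTerms) - score(a, queryTerms));
}
```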

Many search engines will promote the idea of net neutrality, which allows network services, responses and searches to be created unhindered and free from the likes of government, corporate or competitive interference.

But can a search engine be free from bias?  Many search engines utilise advertising to generate a revenue stream, and do those advertised links cloud the true search result?  Google will identify a paid-for link by tagging it with the word 'sponsored' to provide some clarity.

One other major form of search bias is based on previous user search history.  The idea behind this is to try and personalise the result set based on what the user has previously searched for and the subsequent websites they have clicked through to.  But this increased personalisation, whilst it may have its benefits, starts to reduce the opportunity for new and random results.  The user becomes increasingly held within their own bubble of navigation and knowledge, not knowing what they don't know.
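Personalisation of this kind can be as blunt as boosting results from domains the user has clicked before, which is exactly how the bubble forms.  The following is a hedged sketch with invented data, not a description of any real engine:

```typescript
// Sketch of history-based personalisation: results from previously clicked
// domains get a boost, so familiar sites keep rising and new ones struggle.

interface Result {
  url: string;
  baseScore: number; // score from the normal ranking process
}

function personalise(results: Result[], clickedDomains: Map<string, number>): Result[] {
  return [...results]
    .map(r => {
      const domain = new URL(r.url).hostname;
      const clicks = clickedDomains.get(domain) ?? 0;
      // Each previous click nudges the result upward -- the "bubble" effect.
      return { ...r, baseScore: r.baseScore + 0.5 * clicks };
    })
    .sort((a, b) => b.baseScore - a.baseScore);
}

const history = new Map([["familiar.example.com", 12]]);
console.log(personalise(
  [{ url: "https://new-site.example.org/page", baseScore: 3.0 },
   { url: "https://familiar.example.com/page", baseScore: 2.0 }],
  history)); // the familiar domain now outranks the genuinely new result
```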

The main concern with such an approach is that the end user has no real knowledge of the results ranking and parsing process, so they become unaware of other potentially valuable search results at their disposal.

It will be interesting to see, over the coming years as the internet undoubtedly becomes larger and more diverse, whether search engine theory and the underlying ranking algorithms can become sophisticated enough to produce personalised content whilst remaining open to the random and new.