“All that glisters is not gold” – some thoughts on dashboards


Yesterday I was tweeting quotes from Poe and blogging lines attributed to Heraclitus. Today I’m moving on to Shakespeare. Kudos to anyone posting a comment pointing out the second quote that appears later in the text.
 
 
Introduction

Dashboards are all the rage at present. The basic idea is that they provide a way to quickly see what is happening, without getting lost in a sea of numbers. There are lots of different technologies out there that can help with dashboards. These range from parts of the product suites of all the main BI vendors, through boutique products dedicated to the area, all the way to simply using Java to write your own.

A lot of effort needs to go into how a dashboard is presented. The information really does need to leap off the screen, and it is important that it looks professional. People are used to seeing well-designed sites on the web and if your corporate dashboard looks like it is only one step removed from Excel charts, you may have a problem. While engaging a design firm to help craft a dashboard might be overkill, it helps to get some graphic design input. I have been lucky enough over the years to have had people on my teams with experience in this area. They have mostly been hobbyists, but they had enough flair and aesthetic taste to make a difference.

However, echoing my comments on BI tools in general, I think an attractive-looking dashboard is really only the icing on the cake. The cake itself has two other main ingredients:

  1. The actual figures that it presents (and how well they have been chosen) and
  2. The Information Architecture that underpins them

I’ll now consider the importance of these two areas.
 
 
Choosing the KPIs

The acronym KPI is bandied about with enormous vigour in the BI community. Sometimes what the ‘K’ stands for can get a bit lost in the cacophony. Stepping back from dashboards for a few minutes, I want to focus on the measures that you have in your general business intelligence applications such as analysis cubes. Things like: sales revenue, units sold, growth, head count, profit and so on.

[Note: If you don’t like BI buzzwords, please feel free to read “figures” or “numbers” wherever you see “measures”. I may attempt to provide my own definitions of some of these terms in the future, as the Wikipedia entries aren’t always that illuminating.]

When you have built a Data Mart for a particular subject area and are looking to develop one or more cubes based on this, you may well have a myriad of measures to select from. In some of the earliest prototype cubes that my teams built, we made the mistake of having too many measures. The same observation applied equally to the number of dimensions (things that you want to slice and dice the measures by, e.g. geography, line of business, product, customer etc.). Having too many measures and dimensions led to a cube that was cumbersome, difficult to navigate and whose business purpose was less than crystal clear. These are all cardinal sins, but the last is the worst, as I have noted elsewhere. The clear objective is to cut down on both the figures and the business attributes that you want to look at them by. We set a rule (which we did break a couple of times for specialist applications) of generally having no more than ten measures and ten dimensions in a cube, and ideally fewer.
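By way of illustration, the sort of rule we applied could be expressed as a simple check on a cube definition. This is only a sketch; the cube, measure and dimension names below are invented for the example rather than taken from any cube we actually built.

```python
# A minimal sketch of the "no more than ten of each" rule discussed above.
# The cube, measure and dimension names are purely illustrative.

MAX_MEASURES = 10
MAX_DIMENSIONS = 10

sales_cube = {
    "name": "sales_summary",
    "measures": ["sales_revenue", "units_sold", "growth_pct", "profit"],
    "dimensions": ["geography", "line_of_business", "product", "customer", "month"],
}

def check_cube_size(cube, max_measures=MAX_MEASURES, max_dimensions=MAX_DIMENSIONS):
    """Return a list of warnings if a cube definition breaks the simplicity rule."""
    problems = []
    if len(cube["measures"]) > max_measures:
        problems.append(f"{cube['name']}: {len(cube['measures'])} measures exceeds {max_measures}")
    if len(cube["dimensions"]) > max_dimensions:
        problems.append(f"{cube['name']}: {len(cube['dimensions'])} dimensions exceeds {max_dimensions}")
    return problems

for warning in check_cube_size(sales_cube):
    print(warning)
```

The point is not the code itself, but that the limit is something you can state explicitly and hold each new cube design against.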

Well, this all sounds great; the problem – and the reason for this diversion away from dashboards – is deciding which measures to keep and which to drop. Here there is no real alternative to lots of discussions with business partners, building multiple prototypes to test out different combinations and, ultimately, accepting that you might make some mis-steps in your first release and need to revisit the area after it has been “shaken down” by real business use. I won’t delve into this particular process any deeper now. Suffice it to say that choosing which measures to include in a cube is both an area that is important to get right and one in which it is all too easy to make mistakes.

So, returning to our main discussion, if picking measures at the level of an analysis cube is hard, just how hard is it to pick KPIs for a dashboard? I recall a conversation with the CEO of a large organisation in which he basically told me to just pick the six most important figures and put them on a dashboard (with the clear implication that sooner would be rather better than later). After I had explained that the view of the CEO in this area was of paramount importance and that his input on which figures to use would be very valuable, we began to talk about what should be in and what should be out. After a period of going round in circles, I at least managed to convey the fact that this was not a trivial decision.

What you want with the KPIs on a dashboard is that they are genuinely key and that you can actually tell something from graphing them. The exercise in determining which figures to use and how to present them was a lengthy one, but very worthwhile. You need to rigorously apply the “so what?” test – what action will people take based on the trends and indicators that are presented to them? In the end we went for simplicity, with a focus on growth.

There was a map showing how each country was doing against plan, colour-coded red, amber and green according to their results. There were graphs comparing revenue to budget by month and the cumulative position, and there was a break-down by business unit. The only two elements of interaction were filters for a region or country and for a business unit or line of business. Any further analysis required pulling up an underlying cube (actually we integrated the cube with the dashboard so that context was maintained moving from one to the other – this was not so easy, as the dashboard and cube tools, while from the same vendor, were on two different major release numbers).
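To make the colour-coding a little more concrete, here is a rough sketch of how a red/amber/green status against plan might be derived. The thresholds, country names and figures are invented for illustration; they are not the ones we actually used.

```python
# Illustrative red/amber/green (RAG) status against plan.
# Thresholds and figures below are made up for the example.

def rag_status(actual, plan, amber_threshold=0.95, green_threshold=1.0):
    """Classify performance against plan as red, amber or green."""
    if plan == 0:
        return "grey"  # nothing to compare against
    ratio = actual / plan
    if ratio >= green_threshold:
        return "green"
    if ratio >= amber_threshold:
        return "amber"
    return "red"

# Hypothetical year-to-date revenue versus budget by country (in millions).
countries = {
    "France": (10.2, 9.8),
    "Germany": (11.5, 12.0),
    "Spain": (6.1, 7.4),
}

for country, (actual, plan) in countries.items():
    print(f"{country}: {rag_status(actual, plan)} ({actual} vs {plan})")
```

In practice the thresholds themselves are a business decision and deserve the same “so what?” scrutiny as the measures they colour.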

There were many iterations of the dashboard, but the one we eventually went live with received general acclaim. I’m not sure what we could have done differently to shorten the process.
 
 
Where does the data come from?

[Figure: A dashboard without an underlying Information Architecture]

The same range of dashboard tools that I mentioned in the introduction is of course mostly capable of sourcing data from pretty much anywhere. If the goal is to build a dashboard, then maybe it is tempting to do this as quickly as possible, based on whatever data sources are to hand (as in the diagram above). This is probably the quickest way to produce a dashboard, but it is unlikely to produce something that is used much, tells people anything useful, or adds any value. Why do I say this?

Well, the problem with this approach is that all you are doing is reflecting what is likely to be a somewhat fragmented (and maybe even chaotic) set of information tools. Out of your sources, is there a unique place to go to get a definitive value for measure A? Do the various different sources hold data in the same way and calculate values using the same formulae? Do sources overlap (either duplicating data, or function) and, if so, which ones do you use? Do different sources get refreshed with the same frequency and do they treat currency the same way? Are customers and products defined consistently everywhere?
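To illustrate the sort of inconsistency these questions are probing for, here is a deliberately simple sketch of the same measure pulled from two hypothetical source systems; the system names and figures are made up.

```python
# Sketch: the same measure fetched from two hypothetical sources rarely agrees
# when there is no single, governed definition sitting behind it.

from math import isclose

# Invented figures: year-to-date revenue for one business unit, in millions.
revenue_from_finance_ledger = 42.7  # converted at month-end exchange rates
revenue_from_sales_system = 44.1    # converted at transaction-date rates

if not isclose(revenue_from_finance_ledger, revenue_from_sales_system, rel_tol=0.01):
    print(
        "Sources disagree on revenue: "
        f"{revenue_from_finance_ledger} vs {revenue_from_sales_system} "
        "- which one should the dashboard show?"
    )
```

A dashboard built directly on such sources simply surfaces the disagreement more prominently.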

[Figure: A dashboard underpinned by a proper Information Architecture]

Leaving issues like these unresolved is a sure way to perpetuate a poor state of information. They are best addressed by establishing a wider information architecture (a simplified diagram of which appears above). I am not going to go into all of the benefits of such an approach; if readers would like more information, then please browse through the rest of this blog and the links to other resources that it contains (maybe this post would be a good place to start). What I will state is that a dashboard will only add value if it is part of an overall consistent approach to information, something that best practice indicates requires an Information Architecture. Anything else is simply going to be a pretty picture, signifying nothing.
 
 
Summary

So my advice to those seeking to build their first dashboard has three parts. First of all, keep it simple and identify a small group of measures and dimensions that are highly pertinent to the core of the business and susceptible to graphical presentation. Second, dashboards are not a short-cut to management information Nirvana; they only really work when they are the final layer in a proper approach to information that spans all areas of the organisation. Finally, and partly driven by the first two observations, if you are in charge of building a dashboard, make sure that the plans you draw up reflect the complexity of the task and that you manage expectations accordingly.
 
