Why Analytical Applications Fail

November 21, 2008

Many analytical applications fail for a simple reason: they assume users know precisely what they need before they’ve begun the analysis. There are cases where this assumption holds and the user has a specific end-point in mind. But more often, users depend on the tool to track down an answer with only a vague idea of where to start. The exploratory analysis that follows can feel like swimming upstream when the application isn’t designed to facilitate the journey.

The source of this mismatch is partly rooted in the technical perspective of database developers. The simplest path to providing data access is to let users fill out a form that defines a SQL query. This is a linear mindset, and it isn’t well suited to ambiguous problems.
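As a rough sketch of what this form-based model looks like in code (the table, columns, and sample data here are invented for illustration): every parameter is collected up front, turned into a single query, and run once. Changing anything means going back to the form.

```python
import sqlite3

def run_report(conn, origin, dest, max_price):
    # One static query per form submission; any change to the
    # parameters means filling out the form and running it again.
    query = """
        SELECT airline, price FROM flights
        WHERE origin = ? AND dest = ? AND price <= ?
        ORDER BY price
    """
    return conn.execute(query, (origin, dest, max_price)).fetchall()

# Demo with a small in-memory database (invented sample data)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flights (airline TEXT, origin TEXT, dest TEXT, price REAL)")
conn.executemany(
    "INSERT INTO flights VALUES (?, ?, ?, ?)",
    [("AA", "SFO", "JFK", 450.0), ("UA", "SFO", "JFK", 390.0), ("DL", "SFO", "BOS", 410.0)],
)
print(run_report(conn, "SFO", "JFK", 400.0))  # only rows matching the exact form input
```

Nothing is wrong with this code as a data-access mechanism; the problem is the workflow it implies, as the examples below show.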

I’d like to offer a couple of examples that illustrate the difference between the common, form-based approach and a more dynamic, interactive approach. Then I’ll explain the implicit assumptions behind the different models and why they matter.


At its heart, Travelocity is a travel analysis tool intended to help you find the best flight (or hotel, car rental, package, etc.) given a complex set of parameters. The relative importance of each of these parameters (departure day/time, return day/time, airports, connections, preferred airlines, price, etc.) is a personal preference… but not one that is explicitly or fully known even to the user. For example, it would be hard for me to say exactly how much more I would pay for a non-stop flight, or to weigh the value of a more convenient airport against a more reliable airline. These preferences are hard to understand prior to seeing specific trade-offs.

Travelocity approaches this complex problem in the way that so many analytical applications do: it asks for all your preferences first, then offers a static list of results for the specified query.

Travelocity Results

A few things to note about this search results page:

  1. On a busy web page, “Change Your Search” is not emphasized.
  2. The “tracker” across the top shows a linear five-step process. The user is expected to flow through this sequence in order.
  3. Getting results for a new search takes more than ten seconds.

I’ve been a loyal Travelocity user for years, and I don’t want to imply that this site is poorly designed or difficult to use. The problem is more subtle than that.

By way of comparison, let’s take a look at a more recent entrant to the online travel business, Kayak. This site is designed with a different usage model in mind. Kayak starts by asking for the same information as Travelocity, but the results page is designed to support further analysis:

Kayak Results

The biggest difference is the prominent filtering functionality on the left side of the page. The filters allow users to narrow down their original search without leaving the results page (it takes less than a second to view refreshed results after changing a filter—no “run report” button required). In addition, Kayak places more emphasis on the start-over option. The designers of this site did not assume your first search would be enough to get you to the perfect flight option. Finally, notice the different “views” of the data that are available for a given result set. The views help support different types of decisions based on the same search parameters.
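The mechanics behind this kind of sub-second refinement can be sketched simply (the flight records and field names below are invented for the example): fetch the result set once, then apply each filter in memory, so tweaking a filter is a quick local pass rather than a new query.

```python
# Invented sample result set, as if returned by the initial search
flights = [
    {"airline": "AA", "stops": 1, "price": 320.0},
    {"airline": "UA", "stops": 0, "price": 410.0},
    {"airline": "DL", "stops": 0, "price": 380.0},
    {"airline": "AA", "stops": 2, "price": 250.0},
]

def apply_filters(results, max_stops=None, max_price=None):
    """Narrow an already-loaded result set. Changing a filter just
    re-runs this in-memory pass -- no round trip, no 'run report' button."""
    out = results
    if max_stops is not None:
        out = [r for r in out if r["stops"] <= max_stops]
    if max_price is not None:
        out = [r for r in out if r["price"] <= max_price]
    return out

# The user tightens filters step by step, seeing results at each stage.
nonstop = apply_filters(flights, max_stops=0)
cheap_nonstop = apply_filters(flights, max_stops=0, max_price=400.0)
print([f["airline"] for f in nonstop])        # non-stop options
print([f["airline"] for f in cheap_nonstop])  # non-stop under $400
```

The design choice this sketch highlights: the cost of asking a follow-up question drops from a full query cycle to an instant refinement, which is what makes exploratory back-and-forth feel natural.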


Analytical applications for business have similar underlying structures and usage models. The analysis process in Omniture SiteCatalyst, the leading web analytics platform for large sites, offers a typical example:

Omniture start page

This application offers lots of functionality, and it feels like featuring functionality is the primary purpose of the start page. If you want to get to useful data rather than view an advertisement for Omniture products and events, you can start by selecting the “Report Builder:”

Omniture form

Now, it is form-filling time. As with Travelocity, users are expected to choose precise parameters before they get to see anything. The resulting report requires a ten-second wait, and the result is static. Any additional filtering requires running a new report.

Now let’s look at how Google Analytics chooses to structure the user experience:

Google Analytics dashboard

In contrast to SiteCatalyst, Google Analytics shows you results immediately—no defining or configuring a report before you can get started. Similar to Kayak, the application offers a bunch of options on the report results page to refine parameters (e.g. date ranges, metrics, comparisons).


Travelocity and Omniture make a few assumptions common to analytical applications:

  • Users can accurately define their need (i.e. they already know what they are looking for).
  • Users can precisely define their need (i.e. they know all the relevant parameters).
  • Users’ workflow will follow a linear sequence of events. Going back to the beginning is a failure of the process or user.

More effective analytical applications like Kayak and Google Analytics make different assumptions:

  • Users have a general question, but do not necessarily know details about what they’re looking for.
  • Users need to see results before they can ask better, more detailed questions. These feedback loops provide critical learning.
  • Users need to get to data as quickly and easily as possible. A screen without data is delayed progress.
  • Different views of the data can provide different insights about results.
  • Users want the application to keep up with their trains of thought. Speed and responsiveness matter. Here’s a framework from Jakob Nielsen’s blog about response time:

0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.

1.0 second is about the limit for the user’s flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.

10 seconds is about the limit for keeping the user’s attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.

In my experience, making the right assumptions about user behavior makes all the difference between an application people enjoy and depend on and an application people dread using.
