Making Magic Happen: Predicting User Behavior in AJAX Applications

This essay is adapted from a presentation I gave at the O'Reilly AJAX Summit. The original PowerPoint for the presentation is also available.
We need better metaphors to explain the value of rich clients to business decision makers. We need proof that richer user experiences make business sense.

Common Ground
Reducing system latency (the time that users are waiting for the system) is an uncontroversial goal. Developers and designers, even if they agree on little else, agree that fast systems are much better than slow ones. In fact, speed is an intuitively valuable attribute of an application. A good way to articulate the value of rich clients is to frame it in terms of making faster applications.
From an engineering perspective, making faster applications is a basic part of the job description: building the most performant system possible, given existing hardware constraints. There are two strategies for squeezing more performance out of a system: optimizing the actual execution of code, and harvesting unused system resources to do work before the user asks for it. AJAX is an example of the latter.
The Power of Metaphor
The metaphor I use when explaining the engineering behind AJAX to clients is valet parking. If you get your car at the same time every morning, a good valet will get the car ready for you so that when you arrive, you don’t have to wait for the car to be fetched. This costs the valet nothing (he has to fetch the car either way), while dramatically improving your user experience (you don’t have to wait for your car). The success of this maneuver depends on your valet observing your behavior, and using these observations to predict your future behavior. A good application should do the same thing: observe your behavior, and over time optimize its data fetching to reduce the amount of time the client has to wait.
From a design perspective, a different metaphor is needed. The metaphor I use to explain AJAX from a design perspective is a magic trick. Magic is all about the control of attention, and every magic trick is fundamentally the same. The magician sets up the artifact, then causes the audience to focus their attention elsewhere while the artifact is manipulated (by the magician or his helpers). The attention of the audience is then diverted back to the artifact. The transformation of the artifact can only be explained by magic. Similarly, in an AJAX application, you control the user’s attention using dynamic elements in the user interface (animations, color transitions, etc.). These are used to focus the user’s attention on one part of the screen while, in the background, data is being fetched. When the user attempts to manipulate the interface, something magical happens: even though the data is thousands of miles away, the application feels as responsive to the user as a desktop application would. The only explanation is magic, and the user is suitably impressed.
Building a Model of User Behavior
In order to pull off the trick that the valet pulled off in the parking garage, your software needs to have a theory, or model, of what data the user will request next. The output of this model is simply a ranked list of data to be prefetched, with an estimate, for each chunk, of the odds that the data will be used.
You can build such a model by simple introspection, but it’s hard to get it right. For example, if you were a developer working on the Google Maps project, how would you estimate whether users were more likely to pan left/right, pan up/down, or zoom in/out? You might assume an equal probability for all of these actions. But preliminary user testing might reveal that users immediately zoom in before doing anything else. This would affect your model of user behavior, and thus which data you choose to prefetch first.
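A model like this can be as simple as a ranked list. Here is a minimal sketch, with made-up probabilities reflecting the "users zoom in first" finding above; every chunk name and number is hypothetical, for illustration only.

```javascript
// A minimal sketch of a prefetch model's output: a list of candidate
// data chunks, each with an estimated probability that the user will
// need it next, sorted so the likeliest chunks come first.
function rankPrefetchCandidates(candidates) {
  // Highest probability first, so idle bandwidth goes to the chunks
  // most likely to be requested.
  return candidates.slice().sort((a, b) => b.probability - a.probability);
}

// If user testing shows people usually zoom in before panning, the
// model ranks the zoomed-in tiles ahead of the panning directions.
const ranked = rankPrefetchCandidates([
  { chunk: "pan-left-right-tiles", probability: 0.15 },
  { chunk: "pan-up-down-tiles", probability: 0.15 },
  { chunk: "zoom-in-tiles", probability: 0.6 },
]);
```

The client would then walk this list from the top during idle time, fetching as far down as its bandwidth budget allows.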
It’s not enough to just get a good “first guess,” though. You need to be able to tweak your model of user behavior over time. In order to do that, you need to instrument your code so that you can track what data is being requested, and what data is being used. The real bummer here is that with AJAX applications, you can’t instrument your server code (where it’s most convenient to have such instrumentation, given the homogeneous run-time environment and proximity to the database). You have to record user actions in your flaky, if-then-else-riddled client-side JavaScript, and ship the data back to the server sometime later. Messy.
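One hypothetical way to structure that client-side instrumentation: buffer events as the user acts, then ship them to the server in batches. Everything here is a sketch; `sendToServer` stands in for whatever XMLHttpRequest POST your application would actually use.

```javascript
// Hypothetical client-side tracker: records (action, data, hit/miss)
// events in a buffer and flushes them back to the server in batches.
function makeTracker(sendToServer, batchSize) {
  let buffer = [];
  return {
    record(action, dataKey, servedFromPrefetch) {
      buffer.push({ action, dataKey, servedFromPrefetch });
      if (buffer.length >= batchSize) this.flush();
    },
    flush() {
      if (buffer.length === 0) return;
      sendToServer(buffer); // in a real client: POST back to the server
      buffer = [];
    },
  };
}

// Usage: the third recorded event triggers an automatic flush.
const batches = [];
const tracker = makeTracker(batch => batches.push(batch), 3);
tracker.record("zoom-in", "zoom-in-tiles", true);
tracker.record("pan-left", "pan-left-right-tiles", false);
tracker.record("pan-left", "pan-left-right-tiles", true);
```

Batching matters here precisely because the reporting runs in the client: you don’t want the instrumentation itself generating a request per user action.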
The payoff for all this work is a set of metrics that you can use to measure the benefit of preloading your user data. In particular, you can track
Responsiveness: % of user requests met by prefetched data
As this number goes up, the application appears faster and runs smoother. Of course, it’s easy to cheat and just download any data that might possibly be requested by the user. This would waste bandwidth, which costs money. That’s why it’s also important to track
Efficiency: % of pre-fetched data that ends up being used
With the data that you get from your instrumented JavaScript, you can tweak your model to drive these two numbers higher over time. These are the benchmarks that managers should look at in measuring the success of a preloading strategy.
Obviously, responsiveness and efficiency trade off against each other: it’s hard to improve one metric without making the other worse. Whether responsiveness or efficiency is more important to you depends on the reason you are using AJAX.
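Both benchmarks fall straight out of the instrumentation logs. A sketch, with hypothetical record shapes and made-up sample data:

```javascript
// Responsiveness: % of user requests met by prefetched data.
function responsiveness(requests) {
  const hits = requests.filter(r => r.servedFromPrefetch).length;
  return hits / requests.length;
}

// Efficiency: % of pre-fetched data that ends up being used.
function efficiency(prefetches) {
  const used = prefetches.filter(p => p.wasUsed).length;
  return used / prefetches.length;
}

const resp = responsiveness([
  { dataKey: "a", servedFromPrefetch: true },
  { dataKey: "b", servedFromPrefetch: true },
  { dataKey: "c", servedFromPrefetch: false },
  { dataKey: "d", servedFromPrefetch: true },
]); // 0.75: three of four requests were served instantly

const eff = efficiency([
  { dataKey: "a", wasUsed: true },
  { dataKey: "b", wasUsed: true },
  { dataKey: "x", wasUsed: false },
  { dataKey: "d", wasUsed: true },
  { dataKey: "y", wasUsed: false },
]); // 0.6: two of the five prefetched chunks were wasted bandwidth
```

The "download everything" cheat shows up immediately in these numbers: responsiveness goes to 100% while efficiency collapses.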
Economic models for the Cost and Value of Preloading
As I originally described in an essay last year, the cost and value of preloading can be defined by the following formulas:
Value = Value of Reduced Latency * Odds data will be needed
Cost = Cost of Download * Odds data won’t be needed
If we plug some numbers into these equations and start playing around with them, we notice a few things right away:
1) As the odds of a piece of data being used decrease, its cost increases while its value decreases. Past a certain point, it no longer makes sense to preload the data.
2) The only variable that is really up for grabs is the Value of Reduced Latency. If that value is high, then it makes sense to preload a lot of data. If that value is low, it only makes sense to preload very high-probability data.
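Both observations can be checked with a little arithmetic. This is a worked sketch of the two formulas above with made-up numbers: value and cost are in arbitrary units, and p is the odds the data will be needed.

```javascript
// Value = Value of Reduced Latency * Odds data will be needed
function preloadValue(valueOfReducedLatency, p) {
  return valueOfReducedLatency * p;
}

// Cost = Cost of Download * Odds data won't be needed
function preloadCost(costOfDownload, p) {
  return costOfDownload * (1 - p);
}

// Setting value equal to cost gives the break-even probability:
//   V * p = C * (1 - p)  =>  p = C / (V + C)
// Below this p, preloading no longer pays off.
function breakEvenProbability(V, C) {
  return C / (V + C);
}

// When reduced latency is worth 9x the download cost, anything with
// better than 10% odds is worth preloading; when the two are equal,
// only data with better than 50% odds qualifies.
const magicApp = breakEvenProbability(9, 1);    // 0.1
const tacticalApp = breakEvenProbability(1, 1); // 0.5
```

The break-even line moving with V is exactly observation 2: the Value of Reduced Latency is the lever that decides how deep into the probability ranking it pays to preload.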
This Flash visualization shows how adjusting the Value of Reduced Latency changes the cost and value of a preload. Just move the slider to see cost and value (and where they intersect) change.
What kind of AJAX application are you trying to build?
What this model implies is that you need to figure out what kind of AJAX application you are trying to build before you can know how much data to preload. If you are making a tactical improvement to an existing site, then the incremental increase in speed may have only marginal value, and it is best to stick to high-probability preloads (or even, as many developers seem to be doing, to 100%-probability preloads!). On the other hand, if you are building something like Google Maps, then your business success depends on that “it’s magic” effect I described earlier. In that case, it makes sense to spend bandwidth even on low-probability preloads, if they will make your application appear seamlessly smooth and fast.
Future Directions
The ROI of AJAX applications is a topic of continued interest to me. Where does the value come from? We developers and designers know that the applications are “better”, but how can we describe the value in terms that business decision-makers will understand?
The value of AJAX is only partially in the increased speed of the experience. It is also in the increased continuity of the experience, and in the ability to protect users from information overload by presenting only the information relevant to them. The business value of this depends (of course) on the specific application being built. It never makes sense to talk about ROI of a particular technology in the abstract.
I’ll be writing about AJAX configurators and shopping carts (a high-value web application if there ever was one) in the coming weeks. Anyone who has worked on AJAX-enabled configurators / shopping carts should feel free to ping me for a chat.