User Mental Models of Persistence in RIAs

First, A Little History
Rich Internet Applications are widely believed to be the new paradigm for application development. This is the most exciting thing since all the desktop applications had to be ported over to the web 5 years ago. THAT was the most exciting thing since all the client server applications were ported to the desktop 10 years ago. The technology industry reverses its opinion about whether computing power should be centralized or distributed every five years or so, seemingly as regular as the tides or the seasons.


Of course, every computing platform has different features and drawbacks, so applications never make it through a transition without changing significantly. For example, when all the corporate business apps were ported to the web, drag and drop disappeared. HTML just didn’t support it, and so a whole interaction paradigm disappeared.
Similarly, when applications were ported to the web, the way that user data was saved changed dramatically, without anybody noticing. When the user navigated from one page to the next, their data went back to the server, and NOT saving it then would have meant holding it somewhere on the server until it could be written to the database later. So web applications typically save a user's work automagically, every time an act of navigation is performed.
This is the exact opposite of how desktop applications operate. In a typical desktop application, user data is saved every time the user selects File->Save (or Control-S). That model has been criticized by usability professionals for years (most famously by Alan Cooper). It relies on the user understanding the difference between RAM and the hard disk, an implementation detail that is irrelevant to most users. This flawed model was fixed (by accident!) in the move to web applications.
RIA Persistence Models
How Rich Internet Applications will behave is up for grabs. This issue has not arisen until now because the early adopters of Flash-based UIs were either game developers (with no user data to save) or transactional sites like brokerages or e-commerce sites (where the data is sent to be saved at a particular time, when the user explicitly initiates a transaction). But now RIAs that allow users to “work on their data” are starting to emerge, raising the question of what their model for saving user data will be. Desktop accounting software would save user data on command. An accounting web application would save data automatically. What would an accounting RIA do?
The user expectations here are almost certainly that an RIA will behave like a web application (since it is running inside their browser). However, history shows that developers will implement the persistence model most convenient to them. As I will show, implementing the web application persistence model in an RIA presents significant technical and design challenges.
The first challenge is managing error handling for automatic saves. The saves would certainly be asynchronous (since avoiding latency during navigation is one central reason people are switching to RIAs in the first place). However, the internet is flaky, and background saves will fail with some regularity when the connection goes down. This is not a problem for web applications, because when the connection goes down, navigation breaks, so the user knows there is a connectivity issue. In an RIA, the user could work for hours without an internet connection, as long as they don't request any resources from the web. But if the system is saving without their knowledge, then it must let them know when the saving mechanism breaks (so that they don't do an hour's worth of work that can't be saved and is lost). The user will probably learn about a connectivity problem some time after the failed save, so communicating what has and what hasn't been saved becomes a real design problem.
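One way to make that design problem tractable is to track the save state of every change individually, so the UI can tell the user exactly which work is at risk. Here is a minimal hypothetical sketch (all names are made up, not from any real framework):

```typescript
type SaveState = "pending" | "saved" | "failed";

interface Change {
  id: number;
  payload: string;
  state: SaveState;
}

// Tracks the save state of each change so the UI can tell the user
// exactly which work is at risk when background saves start failing.
class AutosaveTracker {
  private changes: Change[] = [];
  private nextId = 1;

  // Record a new edit; it is "pending" until the server acknowledges it.
  record(payload: string): Change {
    const change: Change = { id: this.nextId++, payload, state: "pending" };
    this.changes.push(change);
    return change;
  }

  markSaved(id: number): void {
    const c = this.changes.find(ch => ch.id === id);
    if (c) c.state = "saved";
  }

  markFailed(id: number): void {
    const c = this.changes.find(ch => ch.id === id);
    if (c) c.state = "failed";
  }

  // Everything the user would lose if they left the site right now.
  unsaved(): Change[] {
    return this.changes.filter(c => c.state !== "saved");
  }
}
```

With a structure like this, a connectivity failure discovered "some time after" a save can still be reported precisely: the application shows the user their pending and failed changes, rather than a vague "something went wrong."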
The second challenge is technical: saving constantly requires making sure that save requests are committed in the order they were emitted (if change 4 gets committed before change 3, but depends on data that was input in change 3, an error condition will result). Of course, you also have to worry about what happens if change 3 never arrives at all: do you resend? That might be a good idea. Before you know it, you've basically reimplemented a reliable, ordered delivery protocol on top of HTTP. These kinds of programming problems are hard, and frankly might not even be cost-effective to solve within an individual application.
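To see why this amounts to reinventing a delivery protocol, consider a sketch of the minimum machinery required: sequence numbers, strict head-of-queue ordering, and retries. This is a hypothetical illustration (the `SendFn` signature and retry policy are assumptions, not any particular product's API):

```typescript
type SendFn = (seq: number, payload: string) => Promise<void>;

// Changes are committed strictly in the order they were made; a failed
// change is retried before any later change is attempted, so change 4
// can never land on the server before change 3.
class OrderedSaveQueue {
  private queue: { seq: number; payload: string }[] = [];
  private nextSeq = 1;

  constructor(private send: SendFn, private maxRetries = 3) {}

  enqueue(payload: string): void {
    this.queue.push({ seq: this.nextSeq++, payload });
  }

  // Drain the queue in order. Returns the number of changes still
  // unsaved (zero on success), so the UI can warn the user.
  async flush(): Promise<number> {
    while (this.queue.length > 0) {
      const change = this.queue[0]; // always the head: strict ordering
      let attempts = 0;
      for (;;) {
        try {
          await this.send(change.seq, change.payload);
          this.queue.shift(); // acknowledged; only now move to the next change
          break;
        } catch {
          // give up after maxRetries and surface the problem to the user
          if (++attempts >= this.maxRetries) return this.queue.length;
        }
      }
    }
    return 0;
  }
}
```

Even this toy version has to answer the hard questions from the paragraph above: when to retry, when to give up, and how to report partially saved work.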
The Desktop Persistence Model: Most Likely to Succeed?
Given the design and technical issues above, I think it's likely that RIAs will follow the desktop / client-server model of saving on command, and informing the user at save time if there is a problem. Since that is the case, "save" buttons should be prominent, and extra care will need to be taken with labeling and workflow design in RIAs. Users expect RIAs to behave like a web site: not meeting this expectation raises real usability issues that will have to be carefully managed. RIAs should use standard components that are evocative of desktop applications, and will have to explicitly communicate through their design that unsaved work will be lost if the user leaves the site without saving it.
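Concretely, that communication could take two forms: a save button whose label doubles as a status indicator, and a guard on navigating away with unsaved work. A hypothetical sketch (the function names are mine; the `beforeunload` hook only applies in a browser):

```typescript
// A save button label that doubles as a status indicator: it tells the
// user plainly whether there is unsaved work, and how much.
function saveButtonLabel(unsavedCount: number): string {
  if (unsavedCount === 0) return "Saved";
  return `Save (${unsavedCount} unsaved change${unsavedCount === 1 ? "" : "s"})`;
}

// Ask the browser to confirm before the user leaves with unsaved work.
function guardNavigation(hasUnsavedWork: () => boolean): void {
  const win = (globalThis as any).window;
  if (!win) return; // not running in a browser (e.g. under test)
  win.addEventListener("beforeunload", (e: any) => {
    if (hasUnsavedWork()) {
      // Triggers the browser's generic "leave site?" confirmation dialog.
      e.preventDefault();
      e.returnValue = ""; // older browsers require this as well
    }
  });
}
```

Note that browsers deliberately show their own generic confirmation text here; the real communication about what will be lost has to happen in the page itself, via the label and workflow.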

5 thoughts on “User Mental Models of Persistence in RIAs”

  1. Brian Foley June 11, 2004 / 9:44 am

    Hey John,
    You should have ended with, ‘But at least we may get drag and drop back’ 🙂 I totally agree with the last paragraph – and think that’s the way it ‘should’ work. I’m not sure people have a hard time with understanding the difference between RAM and Hard Disk (i.e. MS Word)… and I know most corporate users understand the difference between RAM and the Database server… even if they don’t think about RAM (they think about their screen and the database…and they know it’s not in the database until they ‘save’). I suppose your article is slanted a little more toward the home web-site user though…. and this is a bit different.
    BTW: I read an article last month (not sure what publication) about “the return of the fat client”. I’m curious if an RIA is different than the use of applets? (or is that basically what it is)

  2. jon June 11, 2004 / 11:46 pm

    Ha! “It’s not fat, it’s rich.” “It’s not poor, it’s thin.” Technical marketing at its finest. You can never be too rich or too thin, right?
    In my opinion, Cooper is right on in his critique of the “Control S solution”: People’s work should NEVER be lost, right? Which means, in an ideal design, it should ALWAYS be saved. With very powerful undo functionality, so that you can always back out of any change you have made.
    Of course, if a design is going to cost a gazillion dollars to implement, you’ll get push-back. That’s what just happened with my team last week. Working through the issues involved, we realized that continuous saving would be a bugbear of a technical and design problem.
    Rich clients could certainly be implemented using applets. In my mind, when you’re running in a browser with zero install, and your display logic crosses over to the client, but your persistence is still remote, you’ve got a rich client.
    The problem is that a) the Java download is too big for dial-up, and b) too many tech managers were burned by applet designs that didn’t work out four years ago. At least for the consumer space, other technologies (like Flash or .NET) have a much better chance of success.

  3. Brian Foley June 15, 2004 / 8:37 am

    “Rich” – yes, this is a much better term!
    Sometimes “undo” functionality isn’t enough if many systems use the data. Say we deploy a worldwide Product Introduction system (RIA), and when product managers enter their product data and prices – they don’t want the Quoting Systems to pick it up until they are “done”. I suppose the best solution would be to have “holding” tables for the incoming data…. then let the user hit a button to make the product “live” (which moves the data to the “live” tables).
    In a client/server app, where you don’t have to deal with multiple navigation requests and the stateful/stateless dilemma, this situation above is implemented by simply making the user “save”… then it’s “live”.

  4. jon June 15, 2004 / 11:13 pm

    Sounds like the abstraction that you’re referring to is “publish” rather than “save”, no? Holding tables of some kind would be exactly the way to implement this. I have tons of articles on this site that aren’t live yet. They’re lurking in the database, waiting for me to hit the “publish” button and unleash them on the world. ;->
    What happens in the client / server app if the client crashes (say in a power cut, a phenomenon I’m getting quite familiar with)? Are the user changes being temporarily saved to a local persistence layer, or are they gone for good? If they are saved locally, and then committed to the database when the user presses “save”, then you have implemented holding tables. Otherwise you’ve just punted on the problem.
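    In code, the holding-table idea boils down to something like this (a hypothetical in-memory sketch with made-up names – a real system would of course use database tables):

```typescript
// "save" protects the user's work in a holding (draft) area; only an
// explicit "publish" copies it to the live store that other systems
// (like Brian's Quoting Systems) read from.
class PublishStore<T> {
  private draft = new Map<string, T>();
  private live = new Map<string, T>();

  save(id: string, value: T): void {
    this.draft.set(id, value); // work is safe, but not yet visible
  }

  publish(id: string): void {
    const value = this.draft.get(id);
    if (value !== undefined) this.live.set(id, value); // now it's live
  }

  readLive(id: string): T | undefined {
    return this.live.get(id);
  }
}
```

    The key property is that a crash between save and publish loses nothing: the draft survives, and downstream systems never see half-finished data.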

  5. Brian Foley June 24, 2004 / 9:37 am

    Yes… a “publish” model would be the most user friendly in this situation. In most client/server apps if the power gets cut the user is SOL…there is no local persistence layer.
    For these sorts of design considerations it just comes down to costs. In a document publishing situation, or even in my Product Introduction example above, you can be fairly sure you won’t have to deal with contingency issues from other users. In many client/server apps, however, where “holding” tables could be implemented to enhance usability, “publish” can become a complicated operation. It’s so much cheaper to punt! 🙂
