Thursday 9 August 2012

Representing time: big-endian vs little-endian?

Few people now remember the bitter debates over the storage order for multi-byte values in 8-bit memory architectures. There were advantages to putting the LSB first, and other advantages to the opposite. The gently mocking term "little-endian" compares the debate to a trivial political dispute in Gulliver's Travels over which end an egg should be broken from. A Wikipedia author picks out the key point as follows:

"On Holy Wars and a Plea for Peace" by Danny Cohen ends with: "Swift's point is that the difference between breaking the egg at the little-end and breaking it at the big-end is trivial. Therefore, he suggests, that everyone does it in his own preferred way. We agree that the difference between sending eggs with the little- or the big-end first is trivial, but we insist that everyone must do it in the same way, to avoid anarchy. Since the difference is trivial we may choose either way, but a decision must be made."

In user interface design, we regularly find ourselves in this kind of situation. In the early days of the scroll bar, it was far from clear whether the text should move up when the scroll bar moves up, or the other way round (i.e. the window moves up, so the text moves down). The best solution to these simple choices is sometimes so far from obvious that it can take years to get it right - people are still discovering (and disconcerted by) the decision to reverse the scrollbar drag direction that is used by default on Macintosh trackpads.

As Cohen notes in the case of standards wars, it's sometimes more important to agree on the choice than it is to make the right one. Sadly for the prototype developer, the only person you have to agree with is yourself. So this afternoon, I made the sudden decision to reverse the way in which the Palimpsest layer stack is rendered. I know I spent some time agonising over this about 9 months ago, but have stuck to my decision ever since then.

The problem is - should the stack be rendered in conventional narrative time order (oldest layers appear at the top of the screen, with newer ones appearing lower down), or in geological order (oldest layers at the bottom, with newer ones higher up)? I've just changed to the second of these options, in part because writing the tutorial made me increasingly uncomfortable that I had to refer to the layer "under the current one" when that layer was clearly above the current one on the screen.

It was easier to reverse this than I had feared, although an amusing discovery along the way was the realisation that the mapping of keyboard arrows to layer traversal had always been counter-intuitive. The down arrow moved up the stack, and the up arrow moved down it. Perhaps this should have been a sign that I made the wrong decision 9 months ago. (An interesting observation, harking back to the days when I said I was combining the Photoshop History and Layer palettes: the History palette renders time going down the screen, while the Layer palette has time going up the screen (if you paste, a new layer is created above the previous one). I wonder whether Photoshop users are ever disconcerted by this?)
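Not the real Palimpsest code, but the shape of the fix can be sketched in a few lines of Python (names invented for illustration): once layers are stored oldest-first and drawn bottom-to-top, a higher index is both newer and higher on the screen, so the arrow keys finally mean what they look like they mean.

```python
# Hypothetical sketch: layers stored oldest-first, rendered bottom-to-top
# ("geological" order), so a higher index is higher on the screen.
def traverse(index, key, n_layers):
    if key == "up":      # up arrow now moves up the on-screen stack
        return min(index + 1, n_layers - 1)
    if key == "down":    # down arrow moves toward older layers below
        return max(index - 1, 0)
    return index

print(traverse(0, "up", 3))    # 1
print(traverse(2, "up", 3))    # 2 (already at the top)
print(traverse(0, "down", 3))  # 0 (already at the bottom)
```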


Cute is not (always) clever

Well, this is embarrassing ... several conclusions from the last blog post turn out to be completely wrong. But perhaps for interesting reasons.

After spending a couple of days preparing a brief introduction tutorial, I tried it out on my first independent user (the long-suffering Helen - thank you xx).

As you'd expect, there were a number of faults in both the tutorial and the default behaviour of the system. More on these later. But the most annoying one was that the menu visualisation I created last week was really unhelpful.

In the last blog post I had been pleased with myself, because the tabbed menu had been implemented using pretty much the same elements I'd already created. In particular, the active areas that the user clicks to move between tabs were the same SubMenuCreator buttons that had previously been used to navigate between different menus. The appearance of a tabbed interface was created just by sticking a background rendering of tabs behind these buttons.

The result was both cute and elegant (in my own opinion), with the new tabbed interface immediately inheriting all the good things that came with the button regions.

Unfortunately, elegant uniformity is one of the last priorities for usability, as has been noted by countless people before me. (Remember the days when car dashboards had rows of identical switches? Cheaper to make and tidy to look at, but impossible to use without memorising their position or taking your eyes from the road to squint at the labels.)

So my elegant approach to controlling tabs was just really confusing - in fact, my trial user had not even noticed that they were tabs, but thought they were just more buttons. I should have seen the warning signs when writing that blog post last week. The real appeal of the "cute" and elegant solution was that it saved me coding effort. This ought to make us all hesitate when we use "elegance" as a criterion for a good software solution in a user-centred application.

The replacement, after a half day coding and redrawing, now looks subtly different - with tabs no longer looking like buttons. Let's hope this works!


Friday 3 August 2012

Pretty = what you expect


Spent a day making things look "pretty" (as I was thinking of it at the time - lots of pixel nudging and colour shading). This is really in response to Luke's comment that the next thing needed is some usability improvements. At first, prettiness was just a side effect of adding some more conventional visual effects - in particular, the tabbed menus in the illustration, which replace the previous minimalist (semi-transparent) menu layers. However, as I spent more time getting them right, I realised that "right" actually means that they look like they work.

Interestingly, all of this surface ordinariness was achieved without any compromise on the underlying behaviour - these tabbed menus are still live code, and any of the icons can be dragged elsewhere or incorporated into execution behaviour by the user. Making them look ordinary to start with is just a bit of reassurance for the new user, and perhaps even adds to the surprise and delight :-) when it turns out that you can do things with them beyond the ordinary.

One more picture, just to show that things made with Palimpsest don't often look ordinary. Here's some processing of the blog logo:

Wednesday 1 August 2012

Time to fake the rationale



Not really! (Title taken from a famous paper on faking design rationales). It's actually time to do some rather boring tidying up, removing final bugs, and getting ready for public showing at VL/HCC. Along the way, this has involved returning to things that were already boring - Java persistence, for example, as changes since my last big persistence binge a few months ago have broken it in new ways.

But in presenting to an academic audience, some more explicit rationale will be required. Some of it has been published along the way in this blog, but there are lots of minor decisions, not interesting enough to be included here. A recent example is that the "secondary notation" device, despite being one of the earliest things implemented, had almost no usable function. A change this week has allowed secondary notations to pass on a value from whatever layer they are annotating. This became useful in the context of more complex combinations of functionality, such as the use of multiple event trigger layers at the same time. In classic visual language usability style, it quickly became impossible to tell which of the nearly identical visual objects was which.
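For concreteness, the change can be sketched like this (illustrative only - the class and method names are invented, not Palimpsest's actual API): a secondary notation now passes on the value of the layer it annotates, so nearly identical event-trigger layers can at least be told apart by their labels.

```python
# Illustrative sketch only: invented names, not the real Palimpsest classes.
class Layer:
    def __init__(self, name, value):
        self.name = name
        self.value = value

class SecondaryNotation:
    """An annotation attached to a layer, now able to pass on its value."""
    def __init__(self, layer, note):
        self.layer = layer
        self.note = note

    def value(self):
        # Pass on a value from whatever layer this annotates.
        return self.layer.value

    def label(self):
        # Distinguish otherwise nearly identical visual objects.
        return f"{self.note}: {self.value()}"

trigger = Layer("event trigger", "on mouse-over")
print(SecondaryNotation(trigger, "fires").label())  # fires: on mouse-over
```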