Thursday 9 August 2012

Representing time: big-endian vs little-endian?

Few people now remember the bitter debates over the storage order for multi-byte values in 8-bit memory architectures. There were advantages to putting the LSB first, and other advantages to the opposite. The gently mocking term "little-endian" compares the debate to a trivial political dispute in Gulliver's Travels over which end an egg should be eaten from. A Wikipedia author picks out the key point as follows:

"On Holy Wars and a Plea for Peace" by Danny Cohen ends with: "Swift's point is that the difference between breaking the egg at the little-end and breaking it at the big-end is trivial. Therefore, he suggests, that everyone does it in his own preferred way. We agree that the difference between sending eggs with the little- or the big-end first is trivial, but we insist that everyone must do it in the same way, to avoid anarchy. Since the difference is trivial we may choose either way, but a decision must be made."

In user interface design, we regularly find ourselves in this kind of situation. In the early days of the scroll bar, it was far from clear whether the text should move up when the scroll bar moves up, or the other way round (i.e. the window moves up, so the text moves down). The best solution to these simple choices is sometimes so far from obvious that it can take years to get it right - people are still discovering (and being disconcerted by) the decision to reverse the scrollbar drag direction that is used by default on Macintosh trackpads.

As Cohen notes in the case of standards wars, it's sometimes more important to agree on the choice than it is to make the right one. Sadly for the prototype developer, the only person you have to agree with is yourself. So this afternoon, I made the sudden decision to reverse the way in which the Palimpsest layer stack is rendered. I know I spent some time agonising over this about 9 months ago, but have stuck to my decision ever since then.

The problem is - should the stack be rendered in conventional narrative time order (oldest layers appear at the top of the screen, with newer ones appearing lower down), or in geological order (oldest layers at the bottom, with newer ones higher up)? I've just changed to the second of these options, in part because writing the tutorial made me increasingly uncomfortable that I had to refer to the layer "under the current one" when that layer was clearly above the current one on the screen.

It was easier to reverse this than I had feared, although an amusing discovery along the way was the realisation that the mapping of keyboard arrows to layer traversal had always been counter-intuitive. The down arrow moved up the stack, and the up arrow moved down it. Perhaps this should have been a sign that I made the wrong decision 9 months ago. (An interesting observation, harking back to the days when I said I was combining the Photoshop History and Layer palettes, is that the History palette renders time going down the screen, while the Layer palette has time going up the screen - if you paste, a new layer is created above the previous one. I wonder whether Photoshop users are ever disconcerted by this?)


Cute is not (always) clever

Well, this is embarrassing ... several conclusions from the last blog post turn out to be completely wrong. But perhaps for interesting reasons.

After spending a couple of days preparing a brief introductory tutorial, I tried it out on my first independent user (the long-suffering Helen - thank you xx).

As you'd expect, there were a number of faults in both the tutorial and the default behaviour of the system. More on these later. But the most annoying was that the menu visualisation I created last week was really unhelpful.

In the last blog post I had been pleased with myself, because the tabbed menu had been implemented using pretty much the same elements I'd already created. In particular, the active areas that the user clicks to move between tabs were the same SubMenuCreator buttons that had previously been used to navigate between different menus. The appearance of a tabbed interface was created just by sticking a background rendering of tabs behind these buttons.

The result was both cute and elegant (in my own opinion), with the new tabbed interface immediately inheriting all the good things that came with the button regions.

Unfortunately, elegant uniformity is one of the last priorities for usability, as has been noted by countless people before me. (Remember the days when car dashboards had rows of identical switches? Cheaper to make and tidy to look at, but impossible to use without memorising their position or taking your eyes from the road to squint at the labels.)

So my elegant approach to controlling tabs was just really confusing - in fact, my trial user had not even noticed that they were tabs, but thought they were just more buttons. I should have seen the warning signs when writing that blog post last week. The real appeal of the "cute" and elegant solution was that it saved me coding effort. This ought to make us all hesitate, when we use "elegance" as a criterion for a good software solution in a user-centred application.

The replacement, after a half day coding and redrawing, now looks subtly different - with tabs no longer looking like buttons. Let's hope this works!


Friday 3 August 2012

Pretty = what you expect


Spent a day making things look "pretty" (as I was thinking of it at the time - lots of pixel nudging and colour shading). This is really in response to Luke's comment that the next thing needed is some usability improvements. At first, prettiness was just a side effect of adding some more conventional visual effects - in particular, the tabbed menus in the illustration, which replace the previous minimalist (semi transparent) menu layers. However, as I spent more time getting them right, I realised that "right" actually means that they look like they work.

Interestingly, all of this surface ordinariness was achieved without any compromise on the underlying behaviour - these tabbed menus are still live code, and any of the icons can be dragged elsewhere or incorporated into execution behaviour by the user. Making them look ordinary to start with is just a bit of reassurance for the new user, and perhaps even adds to the surprise and delight :-) when it turns out that you can do things with them beyond the ordinary.

One more picture, just to show that things made with Palimpsest don't often look ordinary. Here's some processing of the blog logo:

Wednesday 1 August 2012

Time to fake the rationale



Not really! (Title taken from Parnas and Clements' famous paper on faking design rationales). It's actually time to do some rather boring tidying up, removing final bugs, and getting ready for public showing at VL/HCC. Along the way, this has involved returning to things that were already boring - Java persistence, for example, as changes since my last big persistence binge a few months ago have broken it in new ways.

But in presenting to an academic audience, some more explicit rationale will be required. Some of it has been published along the way in this blog, but there are lots of minor decisions, not interesting enough to be included here. A recent example is that the "secondary notation" device, despite being one of the earliest things implemented, had almost no usable function. A change this week has allowed secondary notations to pass on a value from whatever layer they are annotating. This became useful in the context of more complex combinations of functionality, such as the use of multiple event trigger layers at the same time. In classic visual language usability style, it quickly became impossible to tell which of the nearly identical visual objects was which.

Wednesday 18 July 2012

Getting the connotations right

Having returned to Cambridge this week, my 6 months as isolated bush-coder are complete, and it's time to show Palimpsest to some real users. The first of these was Melissa Pierce Murray, sculptor. Melissa originally trained as a physicist, so is potentially comfortable with the abstraction inherent in Palimpsest, but by her own claim "doesn't get on with computers". As I've seen in the past with artists considering what they might do with a computer, her first impressions were that she might use this to make a web page, or a powerpoint presentation - computers have never been relevant to her creative practice in the past, and this is answering a question she doesn't have.

Nevertheless, after an hour or so of discussion, some points of connection did emerge - she has been sketching grids of visual elements, which she describes in terms of "matrices" and "boundary conditions". The collection operations in Palimpsest (though sadly crashing when demonstrated, because a minor piece of debugging code introduced while passing through Singapore disabled them) do indeed provide exploratory potential that is relevant to these creative questions.

This discussion focused on the potential of software as an exploratory sketching tool, in much the same way as our Choreographic Language Agent has been used at Random Dance. Our most recent thinking on sketching is set out in work with Claudia Eckert and colleagues (see below). An interesting aspect of that collaboration was our investigation of the importance of connotation in sketching - the fact that a sketch looks like a sketch, and has social functions arising from its appearance.



In the case of Palimpsest, I had just made a change to place a mathematical-looking "graph-paper" grid under the currently selected layer, as part of a more general visual overhaul. This has no real function, other than looking generally technical, and providing a bit of colour/texture to help interpret the transparency in the foreground. But Melissa specifically commented on this as helping her to understand the intention of the system, and what it could do for her. In particular, it helps to distinguish the abstract / symbolic / notational / allographic elements of the Palimpsest display from the pictorial / interpretative / autographic elements. Despite the fact that Palimpsest deliberately plays with the allographic/autographic dividing line (what Luke previously called the "anti-semantic"), I think users need to know which is which.
References

Eckert, C., Blackwell, A.F., Stacey, M., Earl, C. and Church, L. (in press). Sketching across design domains: roles and formalities. To appear in Artificial Intelligence for Engineering Design, Analysis and Manufacturing 26(3), special issue on Sketching and Pen-based Design Interaction.

Wednesday 13 June 2012

The perennial problem of abstraction


In the past few weeks, I've been grappling with some hard questions that were always on their way, but that I've managed to defer until now. After my second attempt at creating collection mapping operations ground to a halt, a third approach is proving more fruitful.



The essence of the problem is that, once operations deal with sets of values rather than single values, it is necessary to describe the structure and properties of those sets - a fundamentally abstract activity. In my case, collections of parameterised layers share some of their bindings, but not all. One approach has been to treat them as curried functions, but the challenge in doing so is specifying the curry bindings, and distinguishing them from the defaults that may provide (user-perceived) desirable layer behaviours without ever having arisen from explicit user choices. Most of my attempts to provide a user interface to this specification have been less than successful - complex, unreliable, and hard to plan in advance.
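
As a rough illustration of the curried view (a minimal sketch - class and method names here are hypothetical, not the actual Palimpsest code), a parameterised layer can be modelled as carrying two binding sets, with explicit user bindings distinguished from defaults:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch of partially-bound ("curried") layer parameters.
    // A parameter is resolved from explicit user bindings first, then from
    // defaults - the two cases that need to be distinguished when mapping.
    class CurriedLayer {
        private final Map<String, Object> userBindings = new HashMap<>();
        private final Map<String, Object> defaults = new HashMap<>();

        void bind(String param, Object value)       { userBindings.put(param, value); }
        void setDefault(String param, Object value) { defaults.put(param, value); }

        // True only for bindings that arose from explicit user choices -
        // the ones that should be preserved across a map.
        boolean isUserBound(String param) { return userBindings.containsKey(param); }

        Object resolve(String param) {
            Object bound = userBindings.get(param);
            return (bound != null) ? bound : defaults.get(param); // null = still unbound
        }
    }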

The latest approach has been to build more explicitly on the fact that parameters appear as graphical elements within the layer. Given the challenge of abstraction in this system is that the user must make a transition from interacting with images to interacting with bindings, I've allowed the bindings themselves to be manipulated as image elements, using the existing mask operations to select those parts of the binding set that should be preserved across a map. As I was building this, I was rather constantly reminded of the related challenges that Ken Kahn faced in ToonTalk, where Dusty the vacuum cleaner is used to specify value types by "sucking off" the value binding to leave the coloured pad. When Elizabeth used ToonTalk as a young child, this was one of the aspects that particularly upset her (along with everything that happened in the robot's thought bubble - the abstract world mode). A constant frustration was that a slip of the hand could easily delete the type, rather than the value. Easily undone, but an example of how error-proneness in the abstract notation carries significant weighted risk for attention investment. I hope that I've avoided this, by visualising the binding choices as a masked overlay that does not modify the original layer instance from which it is derived.



The next stage is to apply the binding mask overlay approach to cases where, rather than mapping a single layer (function) over a value collection, two collections are joined - typically with curried (function) layers in one (these may have resulted from a previous join), and value layers in the other. This seems like the right point to start experimenting with inference at last. In a previous attempt at the bind-then-play execution paradigm, I had created typed collections that could then be applied in contexts similar to those of single layers of the same type. However, the resulting constraints on users, for example that the collection type had to be maintained by constraining future addition of members, made this pretty cumbersome.

The inference approach that I'm about to start on, in contrast, allows users to place anything they like in a collection, while continuing to support aggregate bindings and maps. My intention is that the type of the collection will be determined (and visualised) statistically, based on the type of the largest proportion of its members. Members that are incompatible with this type can simply be passed over - the user may have intended them as secondary notation, or they may be accidental, or simply experiments to see what would happen. A user who is setting out to create an array value in a more systematic fashion can also do so, and the inferred type will be precisely as intended.
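
A minimal sketch of that majority-vote inference, assuming for illustration that member types are just Java classes (the real types would be the Palimpsest value and layer types):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch: infer a collection's type as the type of the
    // largest proportion of its members; incompatible members are passed over.
    class CollectionTypeInference {
        static Class<?> inferType(List<Object> members) {
            Map<Class<?>, Integer> counts = new HashMap<>();
            for (Object m : members) {
                counts.merge(m.getClass(), 1, Integer::sum);
            }
            Class<?> best = null;
            int bestCount = 0;
            for (Map.Entry<Class<?>, Integer> e : counts.entrySet()) {
                if (e.getValue() > bestCount) { best = e.getKey(); bestCount = e.getValue(); }
            }
            return best; // null for an empty collection: no inferred type yet
        }

        // The members that participate in aggregate bindings and maps.
        static List<Object> conforming(List<Object> members) {
            Class<?> inferred = inferType(members);
            List<Object> result = new ArrayList<>();
            for (Object m : members) {
                if (inferred != null && inferred.isInstance(m)) result.add(m);
            }
            return result;
        }
    }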

Wednesday 23 May 2012

More computational toys for artists


Luke had developed the habit of calling his early experiments "computational toys" - image-based interactive systems that exhibit some kind of computational behaviour, thereby providing opportunities for creative experiences of flow and emergence. I think he recently referred to Palimpsest as a computational toy too, though it's rather more complex than anything of the kind we've built in the past. (Though to be honest, Michelle's positive experience was more that of playing with a toy - the further complexities may yet turn out to be of minimal additional advantage).

Other blog readers have had their eye out for projects that they see as being related. All of them exhibit this same kind of behaviour, suggesting that the general category is recognisable, even to people without the precise theoretical fixations that Luke and I had. Two have come up recently:


  • Sam Aaron pointed out the Recursive Drawing system built by Toby Schachman for his project Alternative Programming Interfaces for Alternative Programmers (I created the illustration above using this).
  • Beryl Plimmer noted a presentation at CHI of the Vignette system from Igarashi's Design Interface Project, which uses gestures to replicate and fill with texture elements that can be dynamically created and modified while being used as drawing elements.

Both of these recent projects might be considered within the general class of procedural drawing systems - they are oriented toward artists rather than programmers, though Recursive Drawing is especially computational in its feel - as indicated by the project title. Vignette is sufficiently close to real professional design requirements that similar systems have been created fairly regularly in the past - from Sketchpad, to Paul Richens' Piranesi system for architectural rendering with its similar perspective and texture tools.

The challenging question for Palimpsest is, to what extent can these interactive procedural drawing systems be extended with more sophisticated computational abstractions, before they lose their appeal as computational toys? The answer relies on finding a particular balance between flow and attention investment, as well as sufficient quality of design. Recursive Drawing is rather similar to Palimpsest in its visual layout, including a side bar that resembles the Palimpsest layer stack. However it is far more elegant! Perhaps this is because its computational properties are quite specific and limited, but perhaps also because it is an art school product informed by computing expertise, rather than the other way round.

Wednesday 16 May 2012

From naive physics to naive computation



Having done another couple of demos to interested observers, I've realised that the "translate" operation is not that useful. Translation is currently defined in terms of a vector, but the use of translation is most often to move something to the place you want it - the vector required to get it there is a byproduct, not the main point of interest. In fact, getting an object to a specified place is a real pain at present. The steps are:

  • Create a point representation layer. The default behaviour, when a point representation is created over an image, is to use the centroid of the image as the default value, so this works OK.
  • Create another point representation. The default behaviour, when another point is under this one, is to create a new randomised value. This is also OK, because an arbitrary translation can be used as the basis from which the user explores alternative values.
  • Create a vector calculator. This will make a vector derived from the two points, which is also OK.
  • Create a translation layer, which will use the original image, and the calculated vector, to move the image to the required point.

So there are four layers required to achieve one effect, which is arguably the most natural way to specify a translation anyway. Today I created a simpler variant of translation, which simply takes a point as a parameter and moves the image there.

However, this does seem to point to a more general issue. My geometric operators are all nicely based on properly defined mathematical transformations. As a side-effect, this made them really easy to implement - each one simply corresponds to one of the Java Graphics2D AffineTransform operations. But perhaps we should be suspicious when the abstraction needed to implement a system function is too convenient - it's a sign that we might be imposing the programmer's mental model on the user.
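
To make that concrete, here is roughly what the correspondence looks like for translation (a sketch only - the helper names are hypothetical): the "proper" version takes a vector, while the naive "move it to here" version derives the vector as a byproduct of the centroid and the target point.

    import java.awt.geom.AffineTransform;
    import java.awt.geom.Point2D;

    // Sketch of the two styles of translation discussed above.
    class TranslationSketch {
        // Mathematical version: the translation is specified as a vector.
        static AffineTransform translateByVector(double dx, double dy) {
            return AffineTransform.getTranslateInstance(dx, dy);
        }

        // Naive version: "move it to here" - the vector is a byproduct,
        // computed from the layer's centroid and the desired destination.
        static AffineTransform moveTo(Point2D centroid, Point2D target) {
            return AffineTransform.getTranslateInstance(
                    target.getX() - centroid.getX(),
                    target.getY() - centroid.getY());
        }
    }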

In fact, every one of the geometric transforms has turned out to be not so closely related to the user applications that I've found interesting for those transforms. In a "naive geometry" approach, they could be described as follows:

  • Translate -> "move it to here"
  • Rotate -> "make it spin round" (usually as an animation that doesn't stop)
  • Scale -> "stretch or squash" (not uniformly, but in various ways that drag handles can produce)

I suspect that I should discard the proper mathematical versions, replacing them with a move layer (which is what I had originally before making the more elegant mathematical transforms), a spin layer, and a layer that can reproduce any number of adjustments using the direct manipulation handles. This last will also overcome another problem. Although it was possible to generate a mathematical transform layer initialised according to handle manipulation, only the last manipulation was included - mainly because it would be so surprising to the user to see multiple layers of transform appear in response to a single button press.

Finally, a little reflection on "naive geometry". This takes me back to my MSc thesis, when I formulated a naive physics-style "qualitative trigonometry" that could be used for robust spatial reasoning by robots. Along with the rest of the naive physics / qualitative spatial reasoning movement in the mid 1980s, this could well have represented a user-orientation within the AI community, as we tried to create knowledge representations that were better aligned with common sense. At that time, the motivation was to replicate human problem-solving performance, rather than making computer "reasoning" easier for humans to understand, but the latter was undoubtedly a side effect.

From a more HCI orientation, the Natural Programming project of Brad Myers and his students could be considered as an approach to defining a "naive computation" where program behaviour is described in common sense terms. I should probably have spotted this earlier, because I've been recommending it to Jamie Diprose, a student of Beryl's who is creating a visual programming language for healthcare robotics. I've encouraged Jamie to take an approach derived from John Pane's natural programming work, interviewing healthcare professionals to establish a vocabulary of domain concepts for use in his language. The healthcare robotics domain is sufficiently unlike the general purpose mechanical assembly domain of my own earlier work that I hadn't noticed the analogy to qualitative trigonometry, but now that I've noticed it, I could regard my project as creating a domain-specific language for exploratory image manipulation.

Wednesday 2 May 2012

Some precedents


During James Noble's stay, he made a number of useful observations regarding precedents for some of the features in Palimpsest.

He thinks the overall feel is similar to that of early graphical constraint systems like Borning's ThingLab, or Smith's Alternate Reality Kit. The representation of constraints as "parameters" controlling the appearance of specific layer effects reminded him of the pointer constraints used in Myers' Garnet. Finally, the way that these are assigned default values from the context in which they are created (by type inference / extension from layers further down the stack) is like the implicit parameters in the Scala language.

References

Alan Borning. 1981. The Programming Language Aspects of ThingLab, a Constraint-Oriented Simulation Laboratory. ACM Trans. Program. Lang. Syst. 3, 4 (October 1981), 353-387.

Randall B. Smith. 1987. Experiences with the alternate reality kit: an example of the tension between literalism and magic. In Proceedings of the SIGCHI/GI Conference on Human Factors in Computing Systems and Graphics Interface (CHI '87), John M. Carroll and Peter P. Tanner (Eds.). ACM, New York, NY, USA, 61-67.


Brad Vander Zanden, Brad A. Myers, Dario A. Giuse, and Pedro Szekely. 1994. Integrating pointer variables into one-way constraint models. ACM Trans. Comput.-Hum. Interact. 1, 2 (June 1994), 161-213.


Monday 30 April 2012

Reducing abstraction hunger


It's a bit of a pilgrimage, out here into the forest, and we have few visitors other than family. So those who do make it over the mountains are guaranteed a demonstration of Palimpsest if they show any interest. Yesterday's guest, old friend Michelle Greenwood, was the first hands-on user apart from me and Elizabeth. I couldn't have asked for a more sympathetic third user - as an engineer and musician, Michelle's motivations are about as similar to my own as could be managed. So whereas I advise students to find trial users as different to themselves as possible for critical assessment of their interaction design, I have managed to avoid any critique beyond the blindingly obvious.

Nevertheless, there is clearly some work to be done, even in addressing the blindingly obvious. A first priority is to make a smoother transition between direct manipulation of shapes and geometric transformations of those shapes. At present, it is possible to rotate, translate or scale a shape either by dragging handles (as in a conventional direct manipulation drawing editor), or by specifying a geometric transform (which can then be adjusted by directly manipulating its parameters). However, the more powerful of the two - the transform layer - must be requested explicitly. Only after doing this can the user modify the transform under program control. Use of the direct manipulation handles is transient, changing the state of the object, but with no opportunity to reproduce or adjust that state change.

In CDs (Cognitive Dimensions) terms, this represents a combination of abstraction hunger and premature commitment. The user can either create abstractions routinely (thus allowing them to be adjusted at any time), or create them only when necessary - but the latter involves premature commitment, because the user must know in advance whether the abstract alternative will be necessary. Michelle was (unsurprisingly) unsure about the difference between the options, and confused by the implicit state changes between them. I can deal with this by, whenever a shape is directly manipulated, saving the state changes as potential parameters for a transformation layer, and giving the user the option to create that layer.

This does actually return to one of my first blog discussions - probably going back to last October. That was the point at which I created a "move" layer that was automatically generated in response to the user dragging any content. This turned out to be rather annoying, as move layers quickly accumulated, appearing rather "heavyweight" for minor adjustments (a form of abstraction hunger, even though the abstractions were created automatically, in programming-by-example style). As a result, I created the direct manipulation handles in December. Now I understand that they should always have been combined, allowing transition from direct to abstract.

On the positive side, Michelle liked using Palimpsest, even in its prototype form. She said it was a lot of fun to play with, and immensely superior to her son's current experience as a student of introductory graphics programming, where it has taken him weeks of work to achieve the visual effects that she could explore within seconds. If I had any faith in Likert-scale evaluations of user experience, I could report an early confirmation sample!


Wednesday 25 April 2012

Manipulate, Automate, Compose


Margaret Burnett has been a great supporter of the Attention Investment model of abstraction use, in large part because it provided the motivation that led to her design strategy of Surprise, Explain, Reward, which has proven so valuable in the development of end-user debugging systems. After many weeks wrestling with the problem of where the "language" is in my layer language, I realised that I have unconsciously been relying on my own design strategy, similarly motivated by Attention Investment, but until now not articulated.

We can call this strategy "Manipulate, Automate, Compose", in homage to Margaret's own three-part strategy for user experience design. (If you want to cite this, contact me first - there's a chance I might eventually decide to publish in a slightly revised form).

My hitherto unnamed, but analogous, strategy dates back to the invention of the "Media Cubes" over a decade ago - one of the first applications of Attention Investment. My reasoning at that point was that users would become familiar and comfortable with the operation of the individual cubes, in the course of operating them as simple remote controls. Once those unit operations had become sufficiently familiar in this way (perhaps over a period of months or years), the physical cubes would naturally start to be treated as symbolic surrogates for those direct actions, and used as a reference when automating the action (for example, setting a timer to invoke the relevant action). Once the use of references had become equally familiar, the user might even choose to compose sequences of reference invocations, or other more sophisticated abstract combinations. All of this is consistent with Piagetian education principles, and indeed with Alan Kay's original motivations in applying those principles to the design of the first GUIs.

What we have lost sight of since then is the latter two steps in this process - most GUI users are stuck at the "Manipulate" phase, and are given little encouragement to move on to Automating and Composing - precisely the points at which real computational power becomes available. The various programming by demonstration systems (as in Allen Cypher's seminal collection) aim to move to the Automate step, while programming by example uses additional inference methods that Compose demonstrated actions as a map over different invocation contexts.

Typical approaches to programming language design often proceed in the opposite order - the mathematical principles of language design are fundamentally concerned with composition (for example in functional languages). Once the denotational semantics of the language are established, an operational semantics can be applied, so that the language can be applied to things that the user wants to automate. Finally, a programming environment is provided, in which the user is able to manipulate the notation that represents these semantics. After a language has been in use for a while, live debugging environments might even provide the user with the ability to directly manipulate objects of concern to themselves (rather than the elements of the language / notation, which for the user are a means to an end).

Those viewing the Layer Language up until this point (Beryl Plimmer's workshop in February, and James Noble's observations last week) have commented that I've provided a number of interesting user capabilities, but that they don't see where the "language" is. To be honest, I've had the same concern myself - it looks so unlike a normal language that it has taken some determination to persist along this path (not for the first time - the Media Cubes suffered from the same problem, to the extent that Rob Hague felt obliged to create a textual equivalent in order to prove that it was a real language, even though operating the direct manipulation capabilities of the system by typing in a text window would have seemed slightly ridiculous).

So after James' departure a couple of days ago, I returned to thinking about execution models. Before he arrived, I had implemented a simple event architecture that allows events to be generated from visual contexts (and hence automated), and during his recording session last week, I took the chance to implement a persistence mechanism for collections of layers (hence making composition more convenient). It's pretty clear that once these are working smoothly, they will provide a reasonable execution model, that is consistent with the visual appearance and metaphor of the multiple layers. Furthermore, users will be able to apply these in the same way as with Media Cubes and some of my other design exercises in the past - the system is quite usable for direct manipulation, with those experiences giving users the confidence for the attention investment decisions of automating their actions and composing abstract representations of them.

So this is the design strategy expressed in the Layer Language - the same one as in Media Cubes, and various other systems. The user can achieve useful results, and also become familiar with the operation of the system, through direct Manipulation that provides results of value. The notational devices by which the direct manipulation is expressed can then be used as a mechanism to Automate them, where the machine carries out actions on the user's behalf. Finally, all of the functions that the user interacts with in these ways can be Composed into more abstract combinations, potentially integrated with more powerful computational functions. The same Manipulate, Automate, Compose benefits can be seen in products such as Excel - hence the spreadsheet analogy that I have been making when explaining my intentions for the Layer Language.

Furthermore, I realised that the past 6 months' work represents a meta-strategy for applying attention investment to design. I have intentionally deferred the specification of the language features for Automation and Composition until I had gained extensive experience of the Manipulation features. In part this comes from the hours of "flight-time" in which I've been using those features. But even more, it comes from the fact that I've been implementing, debugging and refining the direct manipulation behaviours as I've gone along. This has meant that the abstract aspects of the language design have been formed from my own reflective experience of the use and implementation of the direct manipulation aspects. A name for this meta-design strategy might be "Build, Reflect, Notate".

I suspect that this may be the way that many language designers go about their work in practice. I had several illuminating discussions with James about the work he and his collaborators are currently doing on the design of their Grace language. James has a great deal of expertise in architecture and usability of object-oriented languages (we had some enjoyable discussions on the beach, comparing my experiences of Java coding over the course of my project so far), so like most language designers, he is creating a language informed by his experiences as a Java "user". The difference between that kind of project and mine, however, is that the user domain in which his design work is grounded is the manipulation of OO programs by his students, in the context of teaching exercises to train them in OO programming. This is perfectly appropriate to his project, since Grace is intended as a teaching language. However, it means that the attention investment consequences arising from his use of this meta-design strategy are very different from mine. Rather than the end-user programming principles of Manipulate, Automate, Compose, his language will support some educational principles related to the acquisition of programming skill (maybe Challenge, Analyse, Formulate). Perhaps Margaret's Surprise, Explain, Reward arose in a similar way from the same meta-design strategy - I look forward to discussing it with her at some point.



References

Surprise, Explain, Reward: Aaron Wilson, Margaret Burnett, Laura Beckwith, Orion Granatir, Ledah Casburn, Curtis Cook, Mike Durham, and Gregg Rothermel. 2003. Harnessing curiosity to increase correctness in end-user programming. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '03).

Programming by Demonstration: Allen Cypher, with Daniel C. Halbert, David Kurlander, Henry Lieberman, David Maulsby, Brad A. Myers, and Alan Turransky. (1993). Watch What I Do: Programming by Demonstration. MIT Press

Media Cubes: Blackwell, A.F. and Hague, R. (2001). AutoHAN: An Architecture for Programming the Home. In Proceedings of the IEEE Symposia on Human-Centric Computing Languages and Environments, pp. 150-157.
Grace: The Grace Programming Language
http://gracelang.org/

Saturday 21 April 2012

Some (routine) "tricky corners" in Java


When Sam and I were trying to develop analogies between jazz improvisation and live coding, Nick Cook told us that preparation for performance might involve rehearsing "tricky corners" that can arise in a particular improvisation genre. Although I'm not coding anything live, there are some kinds of software development task that always leave me feeling a little nervous, no matter what language I'm working in. One of these is management of realtime threads. Another is interacting with a new file system for the first time.

So in the past week, I've taken some deep breaths, and plunged into both of these. Both are routine enough, and every Java programmer (or teacher of Java programming) knows what to do - but as always, newcomers have to learn it themselves.

I'd already created a number of Java animation threads during the project, and had got accustomed to the use of SwingWorkers and timers to ensure that the system still responds to user interaction while doing memory-intensive image shuffling. I'd speeded up some of these animations with better double-buffering of the layer rendering, separating fast animated components from relatively static elements that can be updated less often.

But lately there have been too many animations all running concurrently, now that some state variables are animated (the rate and event value types), and these control other animated layers (paths) that themselves interact with or control other layers. After some encouragement from James Noble (still visiting), I therefore aggregated all the animated updates into an update task list that is reviewed and updated in a single animation thread - James tells me this is how Smalltalk does it, and that's good enough for me! However, several days of thread debugging ensued. James was off at a recording session, and it turns out he could have told me immediately what my problem was - but what I thought was a subtle deadlock problem turned out to be the simple fact that exceptions in background Java threads die silently if the thread body isn't wrapped in a try-catch block. So my problem was simply an invisible null pointer exception. Once I'd learned to catch this and dump a stack trace, it became relatively easy to debug the threaded animation.
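
For anyone who hits the same wall, the fix amounts to making sure no background thread can fail without reporting. A minimal sketch of the two standard remedies (illustrative only, not the actual Palimpsest code):

    // Sketch: two ways to stop background-thread exceptions dying silently.
    public class ThreadDebugSketch {
        public static void main(String[] args) {
            // 1. A last-resort handler for threads we don't otherwise control.
            Thread.setDefaultUncaughtExceptionHandler((thread, ex) -> {
                System.err.println("Uncaught exception in " + thread.getName());
                ex.printStackTrace();
            });

            // 2. Wrap the task body itself, so the stack trace is dumped even
            //    where the framework would otherwise swallow the exception.
            new Thread(() -> {
                try {
                    riskyAnimationUpdate();
                } catch (RuntimeException ex) {
                    ex.printStackTrace(); // the "invisible" NullPointerException shows up here
                    throw ex;
                }
            }).start();
        }

        static void riskyAnimationUpdate() {
            String layer = null;
            layer.length(); // deliberate NullPointerException, for illustration
        }
    }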

The second plunge was into Java persistence, which I've never had any reason to use before. It seems about time that my "programs" (quote marks necessary until I persuade James that this really is a programming language) can be saved by the user - at least, it is now possible to create things sufficiently complex that I might want to recover them in future. The illustration on today's post is a "Mr Watson, come here" - the first layer successfully saved and dragged back into a new Palimpsest session. As with my complex animation thread, there had been good reason for nerves - the order in which persistent objects are written to a stream is not that easy to anticipate, meaning that getting a complex class hierarchy persisted involves several hours of trial and error, with many exceptions along the way.
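
For the record, the machinery involved is standard Java object serialization - the trial and error lies in making every class reachable from a layer Serializable. A minimal sketch (hypothetical class names):

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    // Sketch of saving and restoring a layer with Java serialization.
    class PersistenceSketch {
        static class Layer implements Serializable {
            private static final long serialVersionUID = 1L;
            final String name;
            Layer(String name) { this.name = name; }
        }

        public static void main(String[] args) throws Exception {
            try (ObjectOutputStream out =
                     new ObjectOutputStream(new FileOutputStream("layer.ser"))) {
                out.writeObject(new Layer("first saved layer"));
            }
            try (ObjectInputStream in =
                     new ObjectInputStream(new FileInputStream("layer.ser"))) {
                Layer restored = (Layer) in.readObject();
                System.out.println(restored.name); // dragged back into a new session
            }
        }
    }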

Tuesday 17 April 2012

Visualising parameter bindings


James Noble has been to visit, and given lots of useful advice. He also chastised me for not keeping up a daily diary of my development work. This is almost solely due to the low bandwidth of my modem connection, and the fact that Blogger has to download a Javascript editor every time I make a new post (typically a 10-15 minute load time on the page, at speeds around 300 bps). Lots of things have been happening, but I haven't necessarily written about them.

Nevertheless, a brief update on a piece of recent work - I've changed the visualisation of parameter bindings, so that they look like little inserts within the layer. A rather simple metaphor, but at least visually distinctive. There's a sample in the image above - it is a snapshot of a filter layer that has two parameters, one referring to an image value, and the other to a mask value that has been applied to that image.

Sunday 15 April 2012

Sufficiently complete to surprise myself

I've now got a reasonably complete set of functions for the pictorial spreadsheet behaviour. Sufficiently complete that it's fun to play with, and see what I can create - or what emerges. This isn't the sense of "emergent" that our arts collaborators typically employ, but is probably related - here we have something that "emerges" from the user's own activity, rather than from the behaviour of the system. Rather close, in fact, to what you might describe as creative experience (or at least playful).

One example of things that have emerged from my own play in the past few days is a photograph that appears from within its own colours, as a sampling window travels over the image, controlling a translucent overlay in the colour of the current sample contents. Hard to show this in a static image, but it's sufficiently pleasant to watch that I left it running for half an hour, and could imagine it hung on a wall as a dynamic picture.

A second (illustrated) example was a dynamic paintbrush, that changes its shape and colour according to position on the screen. This one works under mouse control, so not so suitable as a displayed work (unless controlled by viewers using non-contact sensing), but an effect that can be seen more easily in a screenshot.

Both of these are things that I hadn't expected to create, demonstrate intentions that emerged while I was playing, and had results that were pleasingly surprising. The whole lot comes together in a kind of liveness/flow experience. It's possible that Chris already coined a word for this in his PhD thesis, as it's pretty much the same experience that musical composers are looking for. Will have to ask him.

Thursday 29 March 2012

The marvels of dependency graphs

Now that it's pretty clear I really am making a spreadsheet for images (something that I was telling non-technical enquirers back in December, when they asked what this project is all about), I've been thinking about the marvels of the Excel dependency viewer. Spreadsheets are usable without a dependency view, but it can save a lot of time when debugging. That's an example of hidden dependencies that I often use when teaching Cognitive Dimensions, but the Layer Language (Palimpsest - it says so on my prototype's window headers now, even if not in the tagline of this blog :-) is just starting to get complex enough that this is annoying.

So after deferring it for a couple of weeks while doing other things, I sat down in earnest today to create the dependency graph viewer. Preparations included kicking Elizabeth and Helen out (Elizabeth driven to school instead of walking with me, Helen over the mountain to do a week's shopping), and cancelling my routine weekly visit to Beryl's group in Auckland.

After the first hour of work, it was clear that this was a more difficult problem than the typical day's coding - I had to draw a diagram! Perhaps this is a measure of how programmers really regard diagrams. A necessary evil, to be resorted to only when the problem is too tangled to be dealt with by hacking code directly. In the past couple of months, I think this has happened about 3 times (the first time, we had to go out and buy some paper, because there was none in the house - I hadn't needed paper until this point, but it's notable that when it comes to thinking with diagrams, the last thing you want to do at this stage is fire up a proper drawing program).

It's just possible that these two issues are related - the problem I'm trying to solve, and the tool I need when trying to solve it. A typical "tree of trees" episode, that could easily distract an abstraction-lover into a meta-shift, with progress on the original problem greatly deferred. Hopefully, today, I can stick with my piece of paper and finish coding my dependency graph (cycles and all :-)

Friday 23 March 2012

Doing murky arithmetic

More features that resemble (somewhat) conventional programming. I gave a talk in the Auckland CS department on Wednesday, describing the extended audience for end-user programming, beyond those who "think like engineers" (the EUSES agenda) to sloppy thinkers, as I've described them in this blog. Sam Aaron has sent an encouraging email, saying he likes this novel emphasis, so I'd better credit it properly - Thomas and other PPIG folk used to distinguish "neat" and "scruffy" programming styles, but both those styles described practices within the spectrum of professional programming. My recent emphasis comes more from a comment made at a EUSES AGM by Mary Shaw, when she told me that the Computational Thinking education campaign was intended to discourage "murky thinking". However from my perspective (and as I said on Wednesday), many typical artistic practices are unavoidably "murky" - relying on creative ambiguity, social interpretation, emergent behaviour, and other things that you don't really want in standard programming.

So how do you do arithmetic within a framework of murky thinking? I haven't yet found any need for standard (integer/real) number representation, with count and proportion being better suited to the operation parameters I need. Counting is counting, with little need for arithmetic, but it's getting a bit boring having proportions that are either the same as each other or inverse. I therefore set out to make a four function calculator for proportions. This isn't going to work the same way as a standard four function calculator, because all inputs and outputs have to be in the range 0-1 for compatibility with the rest of the system. This means that "multiply" is actually scaling up, and "division" scaling down, relative to a log slider. Addition and subtraction combine and compare proportions. All results are clipped to the floor and ceiling values.
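
A minimal sketch of this clamped proportion arithmetic (hypothetical names; it assumes all values are doubles in the range 0-1, and that the log slider maps its position onto a scaling factor):

    // Sketch of four-function arithmetic over proportions in [0,1].
    final class ProportionCalculator {
        private static final double FLOOR = 0.0, CEILING = 1.0;

        private static double clip(double v) {
            return Math.max(FLOOR, Math.min(CEILING, v));
        }

        // Addition and subtraction combine and compare proportions directly.
        static double add(double a, double b)      { return clip(a + b); }
        static double subtract(double a, double b) { return clip(a - b); }

        // "Multiply" scales up and "divide" scales down: a slider position
        // s in [0,1] maps onto a factor in [1, maxFactor] on a log scale.
        static double multiply(double a, double s, double maxFactor) {
            return clip(a * Math.pow(maxFactor, s));
        }
        static double divide(double a, double s, double maxFactor) {
            return clip(a / Math.pow(maxFactor, s));
        }
    }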


A solution to the problem of how to visualise this non-standard arithmetic model was inspired by a hint from Helen, that she explains multiplication to children in terms of the area of a rectangle. The input value is therefore visualised as a precisely scaled version of the original proportion layer, with the output value visualised as a rectangle whose area can be larger or smaller than the original layer boundary. All four functions are controlled by sliders - vertical add and subtract change the relative height of the rectangle, while horizontal multiply and divide modify the width with log-scaling. Hopefully, direct manipulation of these sliders makes their behaviour sufficiently familiar, that the effects of subsequent parameterisation (by dragging and dropping another value layer onto a slider) can be easily anticipated.

Sunday 18 March 2012

More programmable is less usable

With an execution model in place, it's clear that this is the least usable part of the system so far. It's a bad sign when even I can't construct a syntactically valid example! (Bad news, in the sense that any other user will certainly be unable to). There are now three levels of abstraction in the interaction - direct creation and manipulation of single layers, indirect manipulation of other layers via constraints (spreadsheet-style), and generation/execution of new layers. As they become more abstract, it's unsurprising that they are harder to use - but not ideal!

The final abstraction level, which might be compared to user-defined functions in spreadsheets (i.e. almost no regular users use them) has an execution model that I've called "bind-then-play". It maps one or more operations over a set of arguments, where each operation can have a number of unbound parameters. As soon as an operation receives bindings for its parameters, a new layer instance is created and executed. As I've already commented, implementing this seemed a lot more like regular computer science - type inference for the bindings and so on - but it's unclear yet whether it will turn into anything for end-users. I've also created a more macro-like record and playback facility which is much easier to understand, and at present more fun to use.

Sunday 4 March 2012

Some recognisably computational features

Blog entries are becoming sporadic, due to my slow dial-up connection (and now, intermittent mains power, after our first storm of the year).

Nevertheless, progress is steady (isolation has its benefits!), with a number of more recognisable programming language features now implemented. Many of the basic layers are now parameterised in simple ways, which has meant working out the mechanics of binding and substitution of parameter types. This in turn has required a type mechanism - the basic set of value types are point, vector, proportion, rate, colour and count. These can be derived from each other in various ways, as well as from properties of source images.

Bound parameters are propagated dynamically, and it is possible to explore some interesting interactive effects by binding multiple layer controls to the same values as in the picture here. These are sufficiently interesting that I wanted to watch the results over more extended periods - so I've created a mouse tracker, and a record/replay layer that allow dynamic sequences to be used, with the replay rate also under dynamic rate control (motivated by the importance of dynamics in the Random Dance project). It also turns out to be nice to provide a motion-persistence filter that can be superimposed on these dynamic images, providing more of a visual transition between the static and interactive displays (as used for this image).

I've also made an initial attempt at a layer supporting a second order function, mapping a layer with one or more unbound parameters over a collection of possible parameter values. It turned out to be quite a challenge to define an appropriately "sloppy" approach to this, since the kinds of functional language that usually provide features like this emphasise correct type matching and map cardinality as fundamental control mechanisms. Instead, I've created a parameter binding mechanism that searches a set of available values for one of compatible type, with a new binding instantiated as soon as sufficient values are found. If the set doesn't have enough members for even one binding, partial instantiations can be created. And if the number of resulting instances is insufficient, the user can define a minimum count value. I should probably stop coding to think more carefully about the cognitive dimensions implications of these decisions.
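
A rough sketch of that sloppy binding search (hypothetical names, with Java classes standing in for the Palimpsest value types):

    import java.util.ArrayList;
    import java.util.List;

    // Sketch: search available values for compatible ones, instantiate a
    // binding as soon as enough are found, and allow partial instantiation.
    class SloppyBinder {
        // Returns groups of values, each group enough to instantiate one
        // new layer; a final undersized group is a partial instantiation.
        static <T> List<List<T>> bind(Class<T> paramType, List<Object> available,
                                      int valuesPerInstance) {
            List<T> compatible = new ArrayList<>();
            for (Object v : available) {
                if (paramType.isInstance(v)) compatible.add(paramType.cast(v));
            }
            List<List<T>> instances = new ArrayList<>();
            for (int i = 0; i < compatible.size(); i += valuesPerInstance) {
                int end = Math.min(i + valuesPerInstance, compatible.size());
                instances.add(compatible.subList(i, end)); // last may be partial
            }
            return instances; // caller compares size() against a minimum count
        }
    }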

Another slightly disconcerting result of all this computational work is that my own usage of the system has gravitated toward repeated definition of simple geometric figures - lines, points and circles. This is not at all the kind of aesthetic that I'm wanting to promote, but is so much more natural and convenient when implementing and debugging rather mathematical relationships. In principle, it should be very easy to parameterise the image operations that I created before leaving Cambridge, and also derive values from those images. But I shouldn't delay much longer before getting around to it - this is exactly the kind of thing that I complain to students about, when they get too absorbed by the mathematical requirements of their work, and neglect aspects of the user experience that are less precisely specifiable.

Tuesday 21 February 2012

Bret Victor's "Dynamic Drawing"

Sam Aaron pointed me to a talk by Bret Victor, a designer who has worked at Apple among other places. Victor says that "creators need an immediate connection to what they're creating" - the same principle of liveness that Luke, Chris and I have been promoting for programming languages more generally.

Checking out Victor's website, I see that he has already been playing with the idea of new languages to support this kind of experience when doing graphic design. He calls these "Dynamic Pictures" - an interesting comparison to my original proposal to create "Living Images" some years ago.

Some of the things he says in this little essay could easily be repeated as motivations for my current project:

  • I believe that dynamic pictures will someday be the primary medium for visual art and visual explanations.
  • Dynamic means that the picture changes when you change some input. A dynamic picture looks different in different situations.
  • Dynamic drawing means that the artist creates the picture by directly manipulating the picture itself, instead of working with some indirect representation that doesn't resemble the art-in-progress.
  • With today's tools, dynamic design requires creating pictures by writing text. It is only because we are so accustomed to this situation that we don't recognize how bizarre, even barbaric, it is.
  • A "user interface" is simply one type of dynamic picture. [Apple designers] were dependent on engineers to translate their ideas into lines of text. […] It's fashionable to rationalize this helplessness with talk of "complementary skillsets" and other such bullshit. But the truth is: An author can write a book. A musician can compose a song, a animator can compose a short, a painter can compose a painting. But most dynamic artists cannot realize their own creations, and this breaks my heart.

He offers the following illustrations of the text/picture divide in some typical artist/end-user environments:

Sunday 19 February 2012

Tools for sloppy programmers

Most programming languages have to keep things neat and tidy, but in the past week I've realised at a couple of points that I don't want to place too much responsibility for tidiness on my users - and that sometimes untidiness is likely to be positively welcome.

The first of these was the outcome of a struggle to deal with circular references in dependency chains. Until today, I'd been trying to track these down at the time the circularity was created - every time a new reference is added, I search the dependency tree from that point, to see if there are any cycles. The problem with this was how to resolve the cycle. Initially, my approach was to save the user from themselves, deleting the final link that causes the cycle. However, this wasn't as sensible as it first seemed. I didn't want to delete the new link that had just been created - the cause of the problem was more likely to be an old link that was no longer so interesting to the user. But as it turned out, it was much harder than I thought to guess which dependency is uninteresting, or even "old". As more interactivity has been appearing in the language, dependencies accumulate pretty quickly, and many of them would be bad ones to disconnect (for example, between a value interactor and the value that it manipulates).

The resolution to this was a realisation that circular dependencies are my problem, not the user's. I've decided to allow cycles in the graph, and I simply have to remember not to follow them when making updates. In fact, this is a lot easier than deciding how to fix them.
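
A minimal sketch of what "not following them" means in practice (hypothetical names): propagate updates through the dependency graph, but keep a visited set so a cycle is never traversed twice in one pass.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Sketch: dependency updates that tolerate cycles by refusing to
    // visit any node twice within a single propagation pass.
    class DependencyGraph {
        private final Map<String, List<String>> dependents = new HashMap<>();

        void addDependency(String from, String to) {
            // Note: no cycle check here - cycles are allowed to exist.
            dependents.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
        }

        void propagate(String changed) {
            Deque<String> pending = new ArrayDeque<>();
            Set<String> visited = new HashSet<>();
            pending.push(changed);
            while (!pending.isEmpty()) {
                String node = pending.pop();
                if (!visited.add(node)) continue; // cycle: already updated
                recompute(node);
                for (String d : dependents.getOrDefault(node, new ArrayList<>())) {
                    pending.push(d);
                }
            }
        }

        void recompute(String node) { System.out.println("recompute " + node); }
    }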

The second sloppy feature was discovered partly through boredom, as I was implementing and testing map functions. I was often creating multiple values to map over, and it was necessary to assign or adjust each one before carrying out the map. To save myself time, I decided to initialise each new value to a random number, rather than assume that the user will assign it or derive it from something sensible. As it turns out, this has been a lot of fun - I keep discovering interesting new colours that I wouldn't have thought of creating. Another case of a behaviour that wouldn't really be considered in a "serious" programming language, but for sloppy programmers, getting variables with random initial values seems kind of cool.
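
As a trivial sketch of the idea, specialised to colour values (hypothetical; the real value types include points, vectors and proportions too):

    import java.awt.Color;
    import java.util.Random;

    // Sketch: each new value layer starts at a random value rather than
    // a fixed default - here, a random colour.
    class RandomDefaults {
        private static final Random RNG = new Random();

        static Color newColourValue() {
            return new Color(RNG.nextFloat(), RNG.nextFloat(), RNG.nextFloat());
        }
    }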

Wednesday 15 February 2012

Computational Stuff - I need some types

After Beryl and Robert's challenge last week, I've been implementing some new layer types that are starting to look like more conventional CS. Monday was devoted to creating a decision layer that either ignores or passes its source depending on a comparison of two value parameters. Tuesday was spent playing around with value substitution - at first interactive, then in response to a map treatment that maps a parameterised layer over a collection of alternative values.

All of these experiments have required more sophistication in the dynamic type processing. Until now, type equivalence (for example of value fragments and value layers) has been rather crude, with layer references and "promotion" of fragments simply hard-coded with "instanceof" to retain necessary types. A first attempt provided my own set of type identifiers, with all content elements being able to report their types. However, this is going to involve a lot of logic distributed around the place, determining type equivalence, type conversion, and maintaining type/behaviour relations.

It's starting to look as though I should define value types as Java interfaces to the layers that support them, with type conversions and equivalence supported via multiple interfaces for some kinds of layer. In this case, Java reflection should provide all the dynamic type processing that I need, rather than constructing an independent type mechanism.
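
The shape of that idea, as a sketch (the interface names are hypothetical): each value type becomes a Java interface, a layer implements the interfaces for every type it can supply, and type equivalence and conversion then fall out of instanceof and reflection rather than a hand-rolled type system.

    import java.awt.Color;

    // Sketch: value types as Java interfaces on the layers that support them.
    interface ProportionValue { double proportion(); }
    interface ColourValue     { Color colour(); }

    // A layer that can act as several value types implements several
    // interfaces, giving type conversion via multiple interfaces.
    class IntensityLayer implements ProportionValue, ColourValue {
        private final double intensity; // in [0,1]

        IntensityLayer(double intensity) { this.intensity = intensity; }

        public double proportion() { return intensity; }

        public Color colour() { // the same layer viewed as a (grey) colour
            float g = (float) intensity;
            return new Color(g, g, g);
        }
    }

    class TypeTests {
        // Dynamic type equivalence comes for free from the JVM.
        static boolean compatible(Object layer, Class<?> valueType) {
            return valueType.isInstance(layer);
        }
    }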

An Overview of the Language!

Beryl Plimmer welcomed me to Auckland with a one-day workshop, coinciding with a visit by Gem Stapleton from Brighton. Participants from Beryl's group included new post-doc (previously PhD student) Rachel Blagojevic, with current PhD student Jamie Diprose and summer research assistant Ryan, as well as department lecturer Robert Sheehan. All are interested in notation design, with Robert being more oriented toward end-user programming, and the others toward theoretical and usability principles (especially future plans to use tangibles and multitouch alongside the group's existing expertise in sketch recognition).

This audience provided an excuse for my first attempt at an overview of the whole project - I try to reproduce that summary below.

Motivation

This project is mainly inspired by my work with various collaborators in recent years, creating end-user languages/notations for use by artists working with digital media. In the context of design research, collaboration with artists offers a valuable counterpoint to work with engineers, arguably a necessary element of exploring the full range of requirements for professional design tools. Working with artists also draws attention to particular parts of the attention investment / user experience equation, where end-users engage in construction activity for its intrinsic rewards (including, for example, flow experiences).

Observations of artist collaborators have also drawn attention to two interesting technical requirements. One is that many professional artists find the keyboard an obstacle to creative practice (thinking here of observations of Random Dance, and discussions with Bruce Gernand). This revives the long-standing challenge of the "purely visual" language, of which Dave Smith's Pygmalion, George Furnas's BitPict, Clayton Lewis's NoPumpG, and Wayne Citrin's VIPR have been stimulating if rather impractical examples. Although often mooted, such languages have seldom seemed compelling on machines that do, after all, have keyboards. In the age of tablets and touch interaction, keyboards have suddenly become a real inconvenience and hindrance in routine interaction, so this seems as good a time as any to explore text-free notations. Everything that I have done so far will, in principle, be completely usable on a tablet (assuming I ever manage to port the graphical operations to Android Java).

Features

This language allows users to combine source images (photographs, simple shapes or ink) and treatments of those images.

Sources and treatments are layered on top of one another, in the manner of a palimpsest (a potential name for the language!).

All layers can be broken up into fragments, with these fragments being combined in a visual collage (another potential name).

The behaviour of treatments and fragments can be modified by adjusting values. Values themselves are also represented as fragments or layers - these can define visual parameters such as proportion/intensity, vectors, colours and so on.

Interaction with the system involves creating new layers, and superimposing them on layers already created. The resulting stack of source, values and treatments provides both a visual palimpsest, and a historical record of the process by which it was achieved.

The other element of the visual interface, apart from the main image manipulation area, is a visualisation of the stack of layers. The usability implications of the layer stack can be considered in terms of conjoining two standard features of Photoshop: the layer window and the history window. Some disadvantages of doing this can be understood by analogy to the (notoriously hard to understand) Photoshop history brush. The workshop audience expressed concern about this aspect of the user experience (they could have, but did not, express it in terms of premature commitment and viscosity). However, from my perspective, this mingling of process and product is also characteristic of artistic practice - our attempts to regularise it in the projects with Bruce Gernand and Random Dance provided them with history management mechanisms, but not mechanisms that they integrated fluently into their creative activity. We'll see how this enforced intermingling plays out.

Computational properties

The primary data structure facility at present is a layer whose fragments represent references to other layers. These fragments can be used either to modify or refer to particular image sources, treatments or values from other layers, or to preserve a whole stack as an ordered collection of fragments within a single layer.
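
As a rough sketch (with invented names), the structure is no more than this:

    import java.util.ArrayList;
    import java.util.List;

    // Sketch only: a fragment that refers to another layer, and a layer
    // whose fragments preserve a whole stack as an ordered collection.
    class ReferenceFragment {
        final Layer target;
        ReferenceFragment(Layer target) { this.target = target; }
    }

    class Layer {
        final List<ReferenceFragment> fragments = new ArrayList<ReferenceFragment>();

        // Capture an existing stack, oldest first, inside a single layer.
        static Layer capture(List<Layer> stack) {
            Layer collection = new Layer();
            for (Layer l : stack) {
                collection.fragments.add(new ReferenceFragment(l));
            }
            return collection;
        }
    }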

The workshop audience was sceptical that this data structure has sufficient complexity to be described as a programming language. The behaviour of the system at present is rather reminiscent of Hypercard, but without any of the Hypercard script - from a traditional programming language perspective, it seems as though I have removed the only linguistic element of Hypercard (ignoring the notational analysis perspective, in which the card representations and interaction environment provide a powerful notational system). I made some analogies to early programming languages - fragments as being like dictionary words in Forth, or stack sequences and references as like primitive LISP - with all symbols replaced by images. However, although these might support behavioural interpretation, they are not behaviours in themselves. A final, admittedly poor, analogy is that, as with the lambda calculus, these arrangements of images support a computational interpretation, in principle including sufficient expressive power to implement a Turing machine. (This interpretation won't help much in Auckland - it seems that the lambda calculus is not taught at undergrad level.)

Nevertheless, the advice given by Robert and Beryl was that, if I wished to have this recognised as a programming language, it should have some behaviours that look like program execution. They recommended iteration, or evaluation of conditions. I've taken this on board, and that's where I'm going next ...

Monday 6 February 2012

Back on board, from Karekare NZ

I should have warned readers (if there were any!) of the scheduled 2 months family time through December and January, as we visited with all the various branches of the family around New Zealand. Now back in action, installed deep in the bush at Karekare, and visiting with Beryl Plimmer's group at the University of Auckland.

The first day of coding on return was a little rusty (editor commands forgotten, basic maths skills lost), but I managed to add a couple of incremental functions - an interactive ellipse class to join the rectangle, and an upgrade of the basic line class to allow interactive editing.


After this, time to get stuck in and add support for a fundamental new concept that I dreamt up during a jetlagged night in Singapore. I've decided that the stack view of layers should be a first-class object, interchangeable with collections of layer references. In order to make this obvious to users, I think this equivalence should be visible in a graphical transition from collection to stack. That's relatively straightforward, but there is still a lot of geometric detail to be figured out in the animation. A quarter of yesterday was spent on false starts for representing that geometry sensibly, and another quarter thinking that I had created a subtle reference contention problem in my animation thread (in fact, I had just forgotten to initialise the reference at all).
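
In its simplest form, the animation geometry is just linear interpolation between each fragment's position in the collection and its slot in the stack view - along these lines (a sketch, not the actual animation code):

    import java.awt.geom.Point2D;

    // Sketch only: the collection-to-stack transition as interpolation
    // between two positions, driven by a parameter t in 0.0 .. 1.0.
    class FragmentTransition {
        private final Point2D.Double collagePos; // position in the collection
        private final Point2D.Double stackPos;   // slot in the stack view

        FragmentTransition(Point2D.Double from, Point2D.Double to) {
            this.collagePos = from; // initialise before the animation
            this.stackPos = to;     // thread ever reads these references
        }

        Point2D.Double positionAt(double t) {
            return new Point2D.Double(
                collagePos.x + t * (stackPos.x - collagePos.x),
                collagePos.y + t * (stackPos.y - collagePos.y));
        }
    }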

An interesting consequence of the 2 month break is reluctance to get my hands dirty in refactoring code that I was blithely chucking around at the start of December. Perfectly happy now to make incremental changes or add features, but the core architecture has now acquired substantial viscosity after being "paged out" of my head. Hopefully the fluidity will be recovered before too long.