Margaret Burnett has been a great supporter of the Attention Investment model of abstraction use, in large part because it provided the motivation that led to her design strategy of Surprise, Explain, Reward, which has proven so valuable in the development of end-user debugging systems. After many weeks wrestling with the problem of where the "language" is in my layer language, I realised that I have unconsciously been relying on my own design strategy, similarly motivated by Attention Investment, but until now not articulated.
We can call this strategy "Manipulate, Automate, Compose", in homage to Margaret's own three-part strategy for user experience design. (If you want to cite this, contact me first - there's a chance I might eventually decide to publish in a slightly revised form).
My hitherto unnamed, but analogous, strategy dates back to the invention of the "Media Cubes" over a decade ago - one of the first applications of Attention Investment. My reasoning at that point was that users would become familiar and comfortable with the operation of the individual cubes, in the course of operating them as simple remote controls. Once those unit operations had become sufficiently familiar in this way (perhaps over a period of months or years), the physical cubes would naturally start to be treated as symbolic surrogates for those direct actions, and used as a reference when automating the action (for example, setting a timer to invoke the relevant action). Once the use of references had become equally familiar, the user might even choose to compose sequences of reference invocations, or other more sophisticated abstract combinations. All of this is consistent with Piagetian education principles, and indeed with Alan Kay's original motivations in applying those principles to the design of the first GUIs.
What we have lost sight of since then is the latter two steps in this process - most GUI users are stuck at the "Manipulate" phase, and are given little encouragement to move on to Automating and Composing, precisely the points at which real computational power becomes available. The various programming by demonstration systems (as in Allen Cypher's seminal collection) aim to move to the Automate step, while programming by example adds inference methods that Compose those demonstrations, generalising them across different invocation contexts.
Typical approaches to programming language design proceed in the opposite order - the mathematical principles of language design are fundamentally concerned with composition (for example in functional languages). Once the denotational semantics of the language are established, an operational semantics can be defined, so that the language can be applied to things that the user wants to automate. Finally, a programming environment is provided, in which the user is able to manipulate the notation that represents these semantics. After a language has been in use for a while, live debugging environments might even provide the user with the ability to directly manipulate objects of concern to themselves (rather than the elements of the language / notation, which for the user are a means to an end).
Those viewing the Layer Language up until this point (Beryl Plimmer's workshop in February, and James Noble's observations last week) have commented that I've provided a number of interesting user capabilities, but that they don't see where the "language" is. To be honest, I've had the same concern myself - it looks so unlike a normal language that it has taken some determination to persist along this path (not for the first time - the Media Cubes suffered from the same problem, to the extent that Rob Hague felt obliged to create a textual equivalent in order to prove that it was a real language, even though operating the direct manipulation capabilities of the system by typing in a text window would have seemed slightly ridiculous).
So after James' departure a couple of days ago, I returned to thinking about execution models. Before he arrived, I had implemented a simple event architecture that allows events to be generated from visual contexts (and hence automated), and during his recording session last week, I took the chance to implement a persistence mechanism for collections of layers (hence making composition more convenient). It's pretty clear that once these are working smoothly, they will provide a reasonable execution model that is consistent with the visual appearance and metaphor of the multiple layers. Furthermore, users will be able to apply these in the same way as with Media Cubes and some of my other design exercises in the past - the system is quite usable for direct manipulation, with those experiences giving users the confidence for the attention investment decisions of automating their actions and composing abstract representations of them.
So this is the design strategy expressed in the Layer Language - the same one as in Media Cubes, and various other systems. The user can achieve results of value, and become familiar with the operation of the system, through direct Manipulation. The notational devices by which the direct manipulation is expressed can then be used as a mechanism to Automate those actions, where the machine carries them out on the user's behalf. Finally, all of the functions that the user interacts with in these ways can be Composed into more abstract combinations, potentially integrated with more powerful computational functions. The same Manipulate, Automate, Compose benefits can be seen in products such as Excel - hence the spreadsheet analogy that I have been making when explaining my intentions for the Layer Language.
Furthermore, I realised that the past six months' work represents a meta-strategy for applying attention investment to design. I have intentionally deferred the specification of the language features for Automation and Composition until I had gained extensive experience of the Manipulation features. In part this comes from the hours of "flight-time" in which I've been using those features. But even more, it comes from the fact that I've been implementing, debugging and refining the direct manipulation behaviours as I've gone along. This has meant that the abstract aspects of the language design have been formed from my own reflective experience of the use and implementation of the direct manipulation aspects. A name for this meta-design strategy might be "Build, Reflect, Notate".
I suspect that this may be the way that many language designers go about their work in practice. I had several illuminating discussions with James about the work he and his collaborators are currently doing on the design of their Grace language. James has a great deal of expertise in the architecture and usability of object-oriented languages (we had some enjoyable discussions on the beach, comparing my experiences of Java coding over the course of my project so far), so like most language designers, he is creating a language informed by his experiences as a Java "user". The difference between that kind of project and mine, however, is that the user domain in which his design work is grounded is the manipulation of OO programs by his students, in the context of teaching exercises to train them in OO programming. This is perfectly appropriate to his project, since Grace is intended as a teaching language. However, it means that the attention investment consequences arising from his use of this meta-design strategy are very different from mine. Rather than the end-user programming principles of Manipulate, Automate, Compose, his language will support some educational principles related to the acquisition of programming skill (maybe Challenge, Analyse, Formulate). Perhaps Margaret's Surprise, Explain, Reward arose in a similar way from the same meta-design strategy - I look forward to discussing it with her at some point.
Surprise, Explain, Reward: Aaron Wilson, Margaret Burnett, Laura Beckwith, Orion Granatir, Ledah Casburn, Curtis Cook, Mike Durham, and Gregg Rothermel (2003). Harnessing curiosity to increase correctness in end-user programming. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '03).
Programming by Demonstration: Allen Cypher, with Daniel C. Halbert, David Kurlander, Henry Lieberman, David Maulsby, Brad A. Myers, and Alan Turransky (1993). Watch What I Do: Programming by Demonstration. MIT Press.
Media Cubes: Blackwell, A.F. and Hague, R. (2001). AutoHAN: An Architecture for Programming the Home. In Proceedings of the IEEE Symposia on Human-Centric Computing Languages and Environments, pp. 150-157.