Thursday 1 December 2011

A visibility and juxtaposability problem solved

I've been waiting for the point where my expert knowledge of Cognitive Dimensions would change what I would have done anyway (at least consciously, rather than just as general motivation). So yesterday, I figured out that there was a serious visibility/juxtaposability problem, resulting from references to layers that might have been "collapsed" out of the stack. It's possible (sometimes) to retrieve a layer by expanding the reference again, but this isn't an ideal option if your problem was simply that you'd forgotten what was on it (a trade-off among viscosity, visibility and hard mental operations - the three-way trade-off that Thomas described by analogy to the ideal gas law, back in the 1998 tutorial). In fact, even if the layer is somewhere in the stack, it might not be easy to find it - a fundamental problem with the whole layers idea. References have always had the default behaviour of navigating you to the right place in the stack, but this is annoying too - you lose your previous context, which rules out juxtaposability.

So the outcome today was to implement a function that allows you to hover over any layer reference, and get a preview of that layer's contents, whether or not the layer is currently present in the stack (illustration here - the ink spiral is a layer that has been deleted from the stack, but still has a reference button - I've hovered over that button to see the contents of the original layer).


As so often, the simple concept was harder to implement than I thought. JavaScript apparently supports a 'hover' callback as a standard attribute of UI widgets, but Java components do not. Implementing hover within my mouse interaction model meant that I had to grapple with threads more seriously, to figure out what to do when the mouse isn't moving. On the positive - 'craft' reflection - side, it's an oddly pleasing feeling when you finally have some purpose for the spurious empty implementations of interface methods that Java makes you leave around all over the place. This was the first time that I've had anything I wanted to do with the mouseMoved (as opposed to mouseDragged) method of the MouseMotionListener.
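For anyone wanting to do the same, the shape of the eventual solution is roughly this (a minimal Swing sketch, not my actual code - the delay and the preview callback are placeholders):

```java
import java.awt.Point;
import java.awt.event.MouseEvent;
import java.util.function.Consumer;
import javax.swing.JComponent;
import javax.swing.Timer;
import javax.swing.event.MouseInputAdapter;

// Minimal sketch of hover detection in Swing: restart a timer on every
// mouseMoved event, and treat the timer firing as "the mouse has paused here".
public class HoverDetector extends MouseInputAdapter {
    private final Timer pauseTimer;
    private Point lastPosition;

    public HoverDetector(JComponent component, int delayMillis, Consumer<Point> onHover) {
        pauseTimer = new Timer(delayMillis, e -> onHover.accept(lastPosition));
        pauseTimer.setRepeats(false);
        component.addMouseMotionListener(this);
    }

    @Override
    public void mouseMoved(MouseEvent e) {
        lastPosition = e.getPoint();
        pauseTimer.restart();   // only fires once the mouse stops moving
    }

    @Override
    public void mouseDragged(MouseEvent e) {
        pauseTimer.stop();      // a drag is not a hover
    }
}
```

The onHover callback is where the layer preview gets shown; hiding it again on the next movement is left out of the sketch.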

Tuesday 29 November 2011

Exploded and collapsed layers

Now a range of regions can be "exploded" into new layers in the stack, and stacked layers can be collapsed again into sets of regions. The first random test image of this behaviour came out correctly, but is somewhat disturbing.

Wittgenstein vindicated - types are not a hierarchy

In the days when object-orientation was new to most programmers, we were occasionally surprised by apparently profound failings of metaphysical conception among our colleagues. In one case that was notorious among members of our team for a long time, an early draft of a programming tutorial described inheritance in terms of the parts of a car - the class "car" should inherit the class "wheel", the tutorial explained.

Wittgenstein might have observed a problem in the metaphysics of object-oriented programming, rather than an ontological disconnect in the life of that author (and of every person who had read that draft before I saw it). Is it really the case that we always know what category is a kind of what other category?

Well, I spent most of today resolving a similar conceptual reversal. Until today, I had assumed that "content" was a kind of "layer". Now I've decided that layer is a kind of content. Of course the real problem is that neither is fully a kind of the other. Most development projects involve epistemological compromises, and expert users of the software we create (if the category terms are revealed) manage to subtly change their previous understanding of those words, in that context, to anticipate the actual system behaviour. Unfortunately, when programming in Java, reversing the position of two classes in the inheritance hierarchy is far from a trivial exercise.

Saturday 26 November 2011

Standardising geometry

I resisted the temptation, when I started coding, to create my own basic geometry classes - the Java libraries are full of dependencies on basic classes like Point and Rectangle. But they seem to require far more mundane repetition than you might hope from an object-oriented language (C++ operator overloading and auto-type casting, where are you when I need you?).

I didn't mind at first, because it was kind of meditative, doing all those repeated operations of x + width, y + height and so on. But I finally cracked, and created some new Point and Rectangle classes (well, Location and BoundingBox) that do everyday stuff more conveniently. Of course, now that I have 10,000 lines of code, much of it doing geometry, I wished I'd started a lot earlier. And quite a few things didn't work afterward.
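The flavour of the new classes is roughly this (an illustrative sketch only - the real ones carry rather more methods than this):

```java
// Sketch only - the real Location and BoundingBox do more than this.
class Location {
    final double x, y;
    Location(double x, double y) { this.x = x; this.y = y; }
    Location translate(double dx, double dy) { return new Location(x + dx, y + dy); }
}

class BoundingBox {
    final Location origin;
    final double width, height;
    BoundingBox(Location origin, double width, double height) {
        this.origin = origin; this.width = width; this.height = height;
    }
    // The everyday x + width, y + height arithmetic, spelled out once and for all.
    double right()  { return origin.x + width; }
    double bottom() { return origin.y + height; }
    boolean contains(Location p) {
        return p.x >= origin.x && p.x <= right() && p.y >= origin.y && p.y <= bottom();
    }
    BoundingBox union(BoundingBox other) {
        double x1 = Math.min(origin.x, other.origin.x);
        double y1 = Math.min(origin.y, other.origin.y);
        double x2 = Math.max(right(), other.right());
        double y2 = Math.max(bottom(), other.bottom());
        return new BoundingBox(new Location(x1, y1), x2 - x1, y2 - y1);
    }
}
```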

Never mind. Almost back to where I started, and my ink has bounding boxes again.

Friday 25 November 2011

Defining selection regions

I wanted to be able to define a subset of regions within a layer as an interactively-edited selection. This turned out to provide a nice example of the tension between formalised algorithms and "intuitive" user intention.

The basic idea was that the user should be able to draw a line around the items of interest, or that a previously defined image layer could act as a region selection mask. In either case, an enclosed region clearly signals an intent to select something.

The enclosure could be defined computationally using either a flood fill, or a convex hull. However, neither of those corresponds very well to informal graphical conventions. A partially closed boundary still indicates containment, but couldn't be flood-filled, while a concavity should certainly be respected, if specifically drawn to exclude something.
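(For anyone unfamiliar with the formal baseline, a textbook flood fill looks something like the sketch below - and the way it leaks out through any gap in the boundary is exactly why it fails on hand-drawn enclosures. This is the standard algorithm, not the informal alternative described in the next paragraph.)

```java
import java.awt.image.BufferedImage;
import java.util.ArrayDeque;
import java.util.Deque;

// Standard stack-based flood fill, shown for comparison only. It spreads from
// a seed pixel until it meets boundary-coloured ink, so any gap in the drawn
// boundary lets the fill escape.
void floodFill(BufferedImage mask, int seedX, int seedY, int fillColour, int boundaryColour) {
    int w = mask.getWidth(), h = mask.getHeight();
    Deque<int[]> pending = new ArrayDeque<>();
    pending.push(new int[] { seedX, seedY });
    while (!pending.isEmpty()) {
        int[] p = pending.pop();
        int x = p[0], y = p[1];
        if (x < 0 || y < 0 || x >= w || y >= h) continue;
        int rgb = mask.getRGB(x, y);
        if (rgb == boundaryColour || rgb == fillColour) continue;
        mask.setRGB(x, y, fillColour);
        pending.push(new int[] { x + 1, y });
        pending.push(new int[] { x - 1, y });
        pending.push(new int[] { x, y + 1 });
        pending.push(new int[] { x, y - 1 });
    }
}
```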

As a result, I spent nearly a day inventing a more informal alternative to the well-known formal algorithms. It seems to work as expected in most cases, and while less efficient than a good flood fill, it is faster than a naive convex hull. I doubt there is much future in the field of informal algorithm design, but hopefully this will suffice for the rest of my project.

Wednesday 23 November 2011

Adding a play button

I'm very impressed by the new Lua-based end-user game programming app for the iPad, Codify. Their approach to the need to distinguish between code manipulation and game interaction is to use a simple play button. That's what we expect with macro recorders, of course, though many environments complicate that metaphor. In Scratch, for example, you can still interact with the code while the program is executing, and it can control its own execution state through use of the green flag.

So after a final Cambridge drink with Sam Aaron and David Coyle last night, where Sam pressed me on exactly what kind of execution this language does, I thought I should add my own play button today. Fortunately, it worked as expected pretty much straight away, and created a whole bunch of new layers on the fly.

Tuesday 22 November 2011

Some usable parameterised layers

So it's taken a few days, but now I have operations with value parameters, and a usable subdevice that can be used for viewing and modifying the order of the layer stack.

The result can be used to create basic scripted interactions, with some interesting behaviours resulting from layer dependencies (this image).

From here, the next step is a choice between two options: a) adding a greater variety of parameter and operation types (e.g. parameterising the image thresholding, or adding rotate and scale operations to the current vector translate), or b) extending the computation model (e.g. processing layer regions as sets - they can already include references to other layers, or interactive commands, so this would extend the power of the language pretty substantially).

Wednesday 16 November 2011

Pain and sub-devices (drag and drop)

I've never seen an API for drag-and-drop that I liked. The Java 2 version is truly horrendous. Unfortunately, if you create notational sub-devices and want the window manager to help with them (as per a recent blog entry - it was nice to have the layer stack sub-device in a separate pane, and perhaps make it scrollable in future), then you find yourself in the horrible world of wanting to coordinate graphical behaviour between different window system components.

Hence a morning spent (wasted?) trying to get to grips with the Java DnD API. I really don't recommend it. After giving up on that, I spent an afternoon trying to roll my own. Guess what - if you press the mouse in one component, then drag into another, the new component does not receive any events (not drag events, not motion events). You can tell it has entered, but then no more information. The really crude response (as implemented by yours truly) is to collect the events in the drag source, and pipe them over to the drop target. The astute reader will have noticed that this means recreating the complexity of the DnD architecture that I was trying to avoid in the first place (well not quite, because I have avoided data transfer between applications). Even if I had got Java DnD working, apparently it doesn't work under Windows anyway.
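The core of the roll-your-own piping is only a few lines (a simplified sketch of the idea, not my actual code - the two components are whatever source and target you are trying to coordinate):

```java
import java.awt.Component;
import java.awt.event.MouseEvent;
import javax.swing.SwingUtilities;

// Events keep arriving at the drag source even when the cursor is over
// another component, so re-target them by hand.
void forwardDrag(MouseEvent e, Component dragSource, Component dropTarget) {
    // Translate the event's coordinates from the source's space to the target's.
    MouseEvent retargeted = SwingUtilities.convertMouseEvent(dragSource, e, dropTarget);
    if (dropTarget.contains(retargeted.getPoint())) {
        dropTarget.dispatchEvent(retargeted);   // let the target handle it as its own
    }
}
```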

So as of the end of the day, I can finally drag items off the layer stack and onto another layer. Here's a picture of an interim work product, sometime during today (drop target rendering not quite sorted!).

A reflective note - one of the rants that I give my HCI lecture class is that most GUI applications do not really implement direct manipulation. This is a classic example of why they don't - it's just so difficult for application developers to do it properly. It's in the interests of the API vendors to make every GUI function as much like a disguised command line as possible - because that is easier for them to implement.

Tuesday 15 November 2011

Making mundane sub-devices

As promised in the last entry, I had to make a sub-device to improve the usability of layer manipulation in many significant ways. It's interesting that this threw me back into the world of the conventional GUI, as I had to start making things like split-panes, and then adding buttons to initiate some of those useful functions that had been missing. Unfortunately, early attempts are both boring and annoying (as Luke tells me is true of all programming).

(yes, a boring sub-device)

And this was followed by some tasks that were at least a bit more layer-like, but with more of those tricky rendering decisions ... where a visual cue is needed to indicate the metaphor, but none of the options are quite as attractive as one might have hoped. For example, choosing between these two options of slightly tilted cards to make the ordering of the layer stack clear:

Sunday 13 November 2011

Reaching a usability nadir

With some parameterised operations, a few literals, and layers that return values, it was finally time to make something that resembles a program. The result was unusable to a remarkable degree. Friday ended in a 20-minute struggle with an environment offering absurdly high premature commitment, viscosity (layers can't yet be deleted or re-ordered), and poor visibility. The last is perhaps the only interesting one. All layers are visible, but it's not always easy to tell what order they are in. As Thomas observed early on (in comparison to Photoshop), I need a sub-device.

Thursday 10 November 2011

Collaging parameters

Some nice juxtapositions today. After a couple of days trying to get literal values working properly, I finally sorted out my rusty statistics and trigonometry enough to derive magnitudes and orientations from images. That meant that I could build a first function that is parameterised by those. More to come tomorrow.

But meanwhile, a spurious picture generated at some point during this morning's experimentation with those vectors (they may not have been right at this point - mistakenly using a figure in degrees where radians are expected takes a lot longer to diagnose than the opposite).
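For the record, the trap looks something like this (an illustrative fragment only - the hypothetical helper stands in for whatever was being derived from the image):

```java
// Java's trigonometry works in radians; drawing conventions (and my head) tend to degrees.
// Hypothetical helper: derive magnitude and orientation (in degrees) from a vector.
double[] magnitudeAndOrientation(double dx, double dy) {
    double magnitude = Math.hypot(dx, dy);
    double orientationRadians = Math.atan2(dy, dx);   // what atan2 actually returns
    // Passing a value in degrees where radians are expected doesn't fail loudly -
    // it just rotates everything by a baffling amount, which is why it's slow to diagnose.
    return new double[] { magnitude, Math.toDegrees(orientationRadians) };
}
```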

Tuesday 8 November 2011

Don't forget the secondary notation

We're always having to remind visual language designers that they need some secondary notation. (Otherwise known as "comments", for any non-CDs readers stumbling across this). You might say that, up until this point, almost everything has been secondary notation, because the execution semantics of my representation have been vague-to-non-existent.

However, in a remarkable display of self-discipline, I have created a secondary notation mechanism before starting on the execution semantics. Here's what it looks like at this stage:

Monday 7 November 2011

Getting thresholding right

Eric Saund told me ages ago that good thresholding is the key to working with captured sketch images. So today has been spent putting together a (hopefully) moderately competent adaptive thresholding algorithm for background removal. The basic approach is inspired by the technique that Pierre Wellner created for the Digital Desk, and that has been used for years in the Rainbow Group and elsewhere since then. However, I've added some enhancements based on the histogram method that Alistair Stead used in his adaptive blob detector last year.
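The basic shape of such an algorithm, stripped of the histogram refinements, is something like this (a minimal sketch under my own simplifying assumptions, not the actual implementation):

```java
import java.awt.image.BufferedImage;

// Mean-based adaptive thresholding: a pixel is "ink" if it is appreciably
// darker than the mean of its local neighbourhood. 'grey' holds 0-255
// intensities; radius and offset are tuning knobs.
BufferedImage threshold(int[][] grey, int radius, int offset) {
    int h = grey.length, w = grey[0].length;
    BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_BINARY);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            // Mean intensity of the local neighbourhood (clipped at the image edges).
            long sum = 0; int count = 0;
            for (int j = Math.max(0, y - radius); j <= Math.min(h - 1, y + radius); j++) {
                for (int i = Math.max(0, x - radius); i <= Math.min(w - 1, x + radius); i++) {
                    sum += grey[j][i]; count++;
                }
            }
            int localMean = (int) (sum / count);
            boolean ink = grey[y][x] < localMean - offset;
            out.setRGB(x, y, ink ? 0xFF000000 : 0xFFFFFFFF);
        }
    }
    return out;
}
```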

Here's a lovely image, created from a bunch of images that I happened to have lying around my desktop as I was testing auto-detection and alpha blending of white and black backgrounds.

On the purely mechanical side, a couple of hours wasted trying to get access to the built-in MacBook camera from Java so that I could wave arbitrary bits of paper in front of the screen. Unfortunately, this seems to get you implicated in some kind of religious war between Quicktime and Java Media Framework, such that nobody wants to tell you how to do it.

Friday 4 November 2011

Working with Beryl - is it sketching?

I've spent several days in a fog of nasty bugs, as it becomes increasingly clear that my model for managing selection (as discussed in week one) was wrong. I've implemented an alternative, but it doesn't work yet.

On to more cheerful matters - Beryl has proposed a mini-workshop for my first week in Auckland, since Gem Stapleton is visiting that week too. Beryl summarised the relevant research interests of those involved, and said that I was working on "sketching for visual programming" (with a query whether that was correct).

So that was a nice point for reflection. It's certainly true that I've asked to work with Beryl because the work her group does on sketching will be a valuable input to the layer language. As to whether my stuff is sketching, I don't think anything I've done yet looks like any previous piece of work in the sketching community. But on the other hand, it definitely builds on the ideas of mixed-formality representations described in the paper that Luke, Beryl and I wrote with Dave Gray.

Tuesday 1 November 2011

Premature optimisation ...

... is the root of all evil, according to either Dijkstra, Knuth or Hoare. Whoever it was, I figure that this is good company to be in. (Assuming that the original statement was a product of personal experience).

Unfortunately, premature optimisation is not an easy habit to give up. You might as well say that unnecessarily general class hierarchies are the root of all evil (another of my recent problems).

In a combination of both problems, I spent most of yesterday reversing the rendering order of my basic layer abstraction. The first implementation had rendered each layer in turn, followed by those above it. There is no point in recalculating a layer that hasn't changed, so unchanged regions from layers above are preserved in the local cache. Of course (?), this is completely the wrong way around. It is the layer on top that should be the main priority for rendering, because this is where the user is currently looking/interacting. It is the layers underneath that should be cached, because they are unlikely to change.
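In code terms, the "right way round" looks roughly like the sketch below (Layer and renderTo are stand-ins for my real interfaces, and the cache invalidation is left out):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.List;

// Stand-in for my real layer interface.
interface Layer { void renderTo(Graphics2D g); }

class StackRenderer {
    private BufferedImage backgroundCache;   // layers below the active one, composited once

    // Rebuilt only when a lower layer actually changes - they rarely do.
    void rebuildCache(List<Layer> layers, int activeIndex, int width, int height) {
        backgroundCache = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = backgroundCache.createGraphics();
        for (int i = 0; i < activeIndex; i++) {
            layers.get(i).renderTo(g);
        }
        g.dispose();
    }

    // Called every frame: cached background first, then the layers the user
    // is actually looking at and interacting with.
    void paintStack(Graphics2D screen, List<Layer> layers, int activeIndex) {
        screen.drawImage(backgroundCache, 0, 0, null);
        for (int i = activeIndex; i < layers.size(); i++) {
            layers.get(i).renderTo(screen);
        }
    }
}
```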

So failure to take a user-centric view in the original architecture meant that I had to completely reverse the caching order (not yet finished, in fact). If I hadn't spent so much time implementing the caching in the first place, reversing the rendering stack would have been a one hour job instead of a two day one. A clear case of premature something or other.

Friday 28 October 2011

A first bit of serendipity?

A little reflection on my discovery earlier this week ...

The idea that a "semantic" region of a photo could be indicated by a shadow so that it appears to float above the rest of the photo (blog post on Monday) was an accidental result of the fact that I was working on "treatment layers" and needed a second example of a treatment. The first had been experiments with a "rubbing" layer that would transfer painted parts of the layer under it (credit to Mike Trinder, who was the first I saw using that metaphor, in his PhD supervised by Paul Richens at the Martin Centre).

As a second treatment, I wanted to implement an algorithmic image filter that would have an effect a bit like scumbled paint finishes (interesting because those finishes blur more semantically diagrammatic boundaries in favour of rich textures). This was going to be my first pixel-shifting transformation, so I needed an example to draw on. The only example I had to hand of a context-based image filter was the Gaussian blur algorithm that I'd used to implement shadows a couple of weeks ago. This was pretty complex, so after 15 minutes exploring the code to see if I could make a scumbling version, I just thought I'd see what happens when the original shadow code was applied to my source photo. Nothing, as it turns out, because the shadow was calculated from the alpha channel, and the photographic test image had a uniform alpha. Fortunately, the rubbing layer I'd just implemented didn't have a uniform alpha, so I used that instead. This visually transformed the appearance of rubbing into floating patches of picture. I then played around with superimposing these on various backgrounds in order to see them better. When superimposed on the original picture, this resulted in the appearance of a "semantic" layer that had been specified by the previous user interaction of rubbing (the user naturally rubs areas that are "interesting" for some reason).

Serendipity relies on the observer being prepared to recognise the novel occurrence. In this case, my recognition of the potential application was prompted by one of the images I had created in my own PhD experiments. There, I had made a number of photographs of playground equipment look more "diagrammatic" by rendering some of the picture elements in higher contrast over a reduced contrast background layer of the original photo.


In my discovery on Monday, it had turned out that the shadows were easier to see if a further alpha-blended white layer was rendered between the photo and the shadow. This happened to create a similar contrast-reducing effect as in my PhD experiment, and resulted in the "rubbed" elements seeming clearly diagrammatic.

Monday 24 October 2011

A journey to AbstractLand

Time for tubby bye-bye.

Semantic images

School half-term this week, so I have my resident end-user (Elizabeth) on hand to play with prototypes. This allowed me to make a first attempt at the "semantic layer" that Luke and I long regarded as an oxymoron. 


Friday 21 October 2011

Finding performance limits

As I see it, Moore's law tells us that if a research prototype runs fast enough, then you aren't doing it right! I'm expecting that this thing will not run at the speed of a respectable language, at least not on the development platforms I'm using. I've therefore been waiting for my experiments to run into performance limits. I've finally found one, after realising that you can't have nicely antialiased lines if you are going to do affine transforms - for that to work, the transform has to be interpolated. Bicubic interpolation produced nice smooth results on this image (a first stack made of index cards rather than book pages), but runs slowly enough that the page turn animation was already finished before the pages were rendered. An entertaining (not) half-day ensued, as I played with ImageObservers and MediaTrackers in multiple animation threads. At the end of this, I just gave up - I don't think I understand how they work, and index cards don't need to bend anyway.
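For reference, the slow-but-smooth combination amounts to something like this (a sketch only; the card image, angle and position are placeholders):

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;

// An affine transform drawn with bicubic interpolation: nice smooth edges,
// but too slow to keep up with a page-turn animation on my hardware.
void drawTiltedCard(Graphics2D g, BufferedImage card, double angle, double x, double y) {
    g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                       RenderingHints.VALUE_INTERPOLATION_BICUBIC);
    g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                       RenderingHints.VALUE_ANTIALIAS_ON);
    AffineTransform t = new AffineTransform();
    t.translate(x, y);
    t.rotate(angle, card.getWidth() / 2.0, card.getHeight() / 2.0);
    g.drawImage(card, t, null);
}
```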

Tyranny of realism

After another day improving the rendering of my metaphorical book, I stop to think about the complaints by Alan Kay and Ted Nelson, back in 1985, about the detrimental effects of user interface metaphor. Kay argued that computer interaction should be magical rather than mundane. More extravagantly, Nelson compared over-arching conceptual metaphors to totalitarian regimes, constraining the (otherwise abstract) freedom of the designer. 

So in the course of making this more "compelling" (realistic), I learned quite a lot about animation, gradient fills, double buffering and so on. By the end of a day, I was just starting to implement a nice animation of pages that curl from the corner (building on a nice Java example by Arnoud Philip), when I remembered that I didn't really think a book was the right metaphor anyway. It was surprising how far I had been distracted by the appeal of naive realism. I'm still pretty certain that physical simulation does help users to become immersed in the abstract world (the physical realism of the touchscreen swipe-to-scroll is what really convinced me), so I'll return to this in future.

However, I'm not going to spend any more time on the book. A stack of index cards might be a better starting point.

Thursday 20 October 2011

Making a metaphor

One of my key problems is how to present a concept of the "layer" that doesn't scare people off in the way that Photoshop layers seem to do. This brings us back to metaphor - much as I hate to admit it. So yesterday's task was a first attempt at providing a layer navigation visualisation that is more easily recognised through use of a physical metaphor. The geometry is a little crude here (using only efficient AffineTransforms), but interacting with this for a bit was sufficient to suggest that a book is probably not the right metaphor for the layer language.

Tuesday 18 October 2011

There should be a name for ...

... that feeling you get when you've defined a new superclass, but more and more of the interface of the original class starts to migrate into it, as you realise that the abstraction wasn't sufficiently isolated from the specific properties of the original class. In the pre-object-oriented days, I could have called it "globalisation".

I spent quite a lot of yesterday doing this, having separated the language into "content" layers and "operation" layers. It kept turning out that these are harder to separate than you might think. Perhaps not a problem for me, if the language goes all the way to being a visual Lisp, with lots of run-time type checking to make my life easier (and trip up incautious users).

Sunday 16 October 2011

Making marks on the medium of time

The selection layer is working, and I moved quickly on to a first draft of a layer that implements a move operation. Some more refinement to do on this (at present it's unary with a fixed vector), but already sufficient to make one of my key problems clear. It goes back to a long phone call with Thomas around 1999, subsequently reviewed with students including Darren, Chris and Luke, where my main advice to them was to avoid it!

Warning - the rest of this post descends into metaphysics of Cognitive Dimensions ...

Here is the issue - if every "notation" can be regarded as a transcribed notation-layer that is derived from another notation, then how do we characterise those notation-layers where an information structure is represented by a sequence of user actions? (Note that the idea of a notation layer in CDs has no relationship (I think) to the layers in the layer language). The example we used in 1999 was interaction with a telephone or other device with minimal user controlled display, where the "notation" is a sequence of button presses. Since we were formalising the concept of notation as marks arranged on some medium (building on Ittelson), the suggestion was that this interaction-as-notation could be regarded as marks (i.e. events) arranged on the medium of time.

However, not only is the idea of marks-on-time an unintuitive extension of the concept of notation, this would be a notation with very peculiar properties. It has low visibility and juxtaposability (you can't simultaneously experience events at different times), high viscosity (you can't change things in the past), severe imposed lookahead (time is relentlessly linear) and so on.

So the essence of the layer language is that it is an attempt to reconfigure this situation. I am explicitly representing user actions (so far, only selection and move) as a sequence of temporal layers, but providing users with the ability to view, manipulate and replay those layers. The result could be a new intermediate ground between direct manipulation and program execution.

Hence the two main challenges in my project. The first is technical - providing a sufficiently powerful yet lightweight mechanism to present the whole sequence of user actions, in their full visual context, for explicit interaction. The second is "philosophical" - will the counter-intuitive starting point result in an intuitive experience? We already have one example of how to make this puzzling for users - the History Brush in Photoshop. It would be interesting to interview Photoshop users to see how many people have figured out what it does. (Rather like the long-ago project carried out by Kerry Rodden, looking at mental models of browser history).

Perhaps many people have thought about this before - the Photoshop History Palette is visually similar to the Layers Palette, so others must have wondered whether the two could be integrated. (Much like the proposals Kerry, Natasa, Rachel and I developed to integrate the history pane and back button in Internet Explorer).

Friday 14 October 2011

The Ladder of Abstraction

A beautiful piece of work by Bret Victor: http://worrydream.com/LadderOfAbstraction/


The "Ladder of Abstraction" is of course the day to day problem of programming life - both the user experience I'm trying to create, and also my own experiences of making it. Yesterday's long refactoring session was typical of both.

The key problem: if all user actions can be treated as program actions, then how about selection? As Thomas and I observed in the CDs tutorial, making a selection in a GUI creates a 'transient abstraction'. In the layer language design, I had already decided that it should be possible to refer to this abstraction via a 'selection layer'. So yesterday was the day that I grasped that nettle. Until now, selection has been an attribute of a layer. But yesterday afternoon I set about making it an independent layer. Obvious problem here - what happens when you select something in a selection layer?

My first attempt - that selection layers are 'special' - is breaking my internal type hierarchy. That familiar refactoring experience, where the new abstract class looks more and more like the class you started with, because it turns out almost every aspect of the existing behaviour is essential to the concept. The last thing last night (coming home between dropping off Elizabeth at her piano lesson and collecting her, in order to get another 30 minutes' coding) left me with a newly refactored design that finally compiles, but the user interface consists only of selections. Everything else has become invisible. Metaphysically ironic, n'est-ce pas?

I don't expect that elegant abstractions for my end user will necessarily correspond to elegance in my own code - but I suspect this will be a space worth watching, for the integrity of the whole project.

Tuesday 11 October 2011

Symbolic technologies in social context

A technically dull day, with a morning of struggle, followed by slightly more productive afternoon, all aimed at starting to make the drawing functions work the way Elizabeth expected them to yesterday. (It's all to do with recognising what is a new ink stroke, and what is a continuation of the previous one).

Some reflection on the purpose of all this from Helga Nowotny's comments on symbolic technologies:
To unfold curiosity's potential, the use of cognitive tools - particularly thinking, the capacity for abstraction, and the technical skills needed to produce material tools that change the environment - has to be embedded in cultural practices and anchored in a social structure (Insatiable Curiosity: Innovation in a Fragile Future p.24)
http://www.fuckpdf.com/2010/09/05/nowotny-insatiable-curiosity-pdf.html

Monday 10 October 2011

Elizabeth's first use

Connected my prototype in development to a graphics tablet, prompting Elizabeth to create her first image. She likes the way it works - but the pen must have been the most important part.

Saturday 8 October 2011

Fidgeting - what's it good for?

I've bought a few books to provide conceptual guidance for the project. During breaks yesterday, I found support for my assumptions (:->) in two of them: Steven Connor's Paraphernalia: The Curious Life of Magical Things, and Peter Seibel's Coders at Work: Reflections on the Craft of Programming. For Connor, magical things are fidgetable - you gain a more intimate relationship with them, and they transcend their mundane functional nature. I think there is a close relationship between this experience, and the intrinsic motivation when programmers engage in tinkering, as I called it in work with Laura Beckwith and Margaret Burnett.

There is a delicate balance for me here - the pleasures of fidgeting with code could easily occupy weeks of my time (yesterday, for example, exploring how to create custom pen strokes using the Java2D createStrokedShape). So I want to avoid them, but this is also the experience I want users of my language to have - so I need to make sure I embrace fidgeting from time to time. Seibel's interviewees are eloquent on the day-to-day pleasures of programming, but in terms of my overall goal, Simon Peyton Jones' description of research programming resonates: "thinking about programming in a whole new way [r]ather than just putting one more brick in the wall, we can build a whole new wall." I want to ensure that my fidgeting doesn't get me stuck in conventional ways of thinking. The results may be unpredictable - but as Simon says, academic research involves "professors go off and do sort of loopy things without being asked how it's benefiting the bottom line. Some will do things that turn out to be fantastically important, and some less so: but you can't tell which is which in advance!" (p.251).

Thursday 6 October 2011

Research programming ...

... is a different kind of activity to "regular" programming, I remember after spending much of my time today chasing a tricky recursion bug. The last time that I regularly faced this kind of problem was in my AI research days, 25 years ago. I know that my computer science colleagues do this kind of thing all the time, but in my day-to-day software engineering career, recursion was seldom necessary.

It's fun, in a way. But the community of people who enjoy novel kinds of programming language as a form of entertainment are a specialised audience. Sam Aaron introduced me to the online community at Lambda the Ultimate. I may have to start reading this, especially since James Noble pointed out that they've recently name-checked me.

Wednesday 5 October 2011

Coding and material

First day of really serious coding. At the end of this, a working version of a class that I expect to be central to my final implementation (and hence, likely to be rewritten many times from here on).

In the past, I attempted to persuade architecture students that they should think about code as a building material. It will be interesting to see whether I continue to think this myself after being immersed in it some more. The romantic view of design is certainly that engagement with one's materials is of the essence.

(A recent article on the work of a designer who is letting the material take the lead).

Tuesday 4 October 2011

The thrills of technical progress, and monkeys with beards

A whole day playing with the Java 2D Graphics facilities for image compositing. It seems that Java does everything I need nowadays - in the 10 years or so since I first conceived this language, students have struggled to do this stuff at interactive speeds, and I've published research papers based on less technical progress than I have made today[1]. Won't be able to do exactly the same stuff on Android, but let's cross that bridge when I come to it.
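To give a flavour of how little code this now takes, blending one layer over another at a given opacity is roughly the following (a simplified sketch, not my actual compositing code):

```java
import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Composite 'above' over 'below' with a given opacity, using Java 2D's
// built-in SRC_OVER alpha compositing.
BufferedImage blend(BufferedImage below, BufferedImage above, float opacity) {
    BufferedImage result = new BufferedImage(
            below.getWidth(), below.getHeight(), BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = result.createGraphics();
    g.drawImage(below, 0, 0, null);
    g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, opacity));
    g.drawImage(above, 0, 0, null);
    g.dispose();
    return result;
}
```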

Further good news is that I haven't yet suffered the curse of working at home. When announcing this blog to a few friends, I mentioned my mental image of what happened to Dilbert. Quentin picked this up via Facebook, bringing the question of why monkeys don't grow beards to a new generation of readers.

[1] Blackwell, A.F. and Wallach, H. (2002). Diagrammatic integration of abstract operations into software work contexts. In M. Hegarty, B. Meyer and N.H.Narayanan (Eds.), Diagrammatic Representation and Inference, Springer-Verlag, pp. 191-205.

Monday 3 October 2011

Night falls on Day 1

A solid day coding - somewhat disconcerting for Helen, who despite planning a day's study herself, had not completely appreciated the degree of focus necessary for programming work. Tomorrow, when the temperature finally drops from high 20s to October norm, I'll move into the loft.

All morning was spent on the trivial detail of establishing a compile and execute cycle for the Android tablet code that Luke kindly sent me this morning, prompted by the project start. Two gotchas discovered - one is that IntelliJ finds it difficult to make the transition between versions of the Android SDK. It insisted for some time that my SDK had disappeared when compiling Luke's code. The other is the behaviour of the Xoom emulator. I suffered the new experience of an emulator that runs convincingly like the real hardware (if slowly), but leaves me asking "where is my app?".


The answer? The emulator was randomly rendering only the top part of the screen, showing the icon for the app that was being debugged, but not the label underneath that would have told me what it was. A simple problem once you know, but left me mystified for a good 30 minutes. (Tried Googling "android xoom emulator where is my app"). After realising what was happening, the workaround was to navigate to another app, come back to the desktop, and then see all icons. Eventually I discovered that you can just leave the emulator running, and the current app gets reloaded and run in place, without going via this screen at all.

After all that drama, the afternoon was spent stripping down one of Luke's apps to a very basic graphics app that demonstrates all the facilities I think I need. That seems very promising, and the challenge for tomorrow is to see whether it will be practical to develop for twin platforms - regular Java (with a tablet display) and Android. It seems that the graphics features of the Android Canvas class are rather different from those of (for example) the HTML 5 Canvas class.

Correct me if I'm wrong!

Day 1 begins - what do you say after you say Hello World?

Starting a blog seems like a distraction activity, but several people have asked me to keep a 'reflective diary' of my work on this project - so here you are.

On the first Monday morning in October, I think I've managed to close down all regular collaborative activities around Crucible and the University, and have mothballed my office in the Computer Lab.

First priority is to get immersed in the new tool set I'll be using - and I don't mean Blogger! Target platform is Java graphics, ideally running on an Android tablet. Luke Church has recommended IntelliJ IDEA, rather than the standard Android development platform of Eclipse. The disadvantage of IntelliJ is that many of the Android start-up tutorials assume Eclipse. However, I took the precaution of getting a Hello World application running last week, while I still had access to advice (and a purchasing budget) in the event of a total breakdown in the build chain. By the end of that, I wished that I had started this diary earlier, to document the inevitable ups and downs of the new tool user - but my estimate would be about 5-10 hours of work, spread over the previous 2 weeks, before I had the words Hello World on the screen of a Motorola Xoom.

That was last Wednesday, after which a couple of days panic in the office closing things down, a last concert with the Sampson Orchestra (great soprano - Anna Dennis), and Sunday cleaning out the loft for my temporary office. That gets us to Monday morning - perhaps this will be the last and only diary entry for the 12 months, in which case, thanks for reading, and sorry.