With an execution model in place, it's clear that this is the least usable part of the system so far. It's a bad sign when even I can't construct a syntactically valid example (and if I can't, any other user certainly won't be able to). There are now three levels of abstraction in the interaction: direct creation and manipulation of single layers; indirect manipulation of other layers via constraints (spreadsheet-style); and generation/execution of new layers. It's unsurprising that the more abstract levels are harder to use - but it's not ideal!
The final abstraction level, which might be compared to user-defined functions in spreadsheets (i.e. something almost no regular users touch), has an execution model that I've called "bind-then-play". It maps one or more operations over a set of arguments, where each operation can have a number of unbound parameters. As soon as an operation receives bindings for all of its parameters, a new layer instance is created and executed. As I've already noted, implementing this felt a lot more like regular computer science - type inference for the bindings and so on - but it's not yet clear whether it will turn into anything useful for end-users. I've also created a more macro-like record-and-playback facility, which is much easier to understand and, at present, more fun to use.
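To make "bind-then-play" concrete, here is a minimal sketch of the idea (not the actual implementation - all names here are hypothetical, and a plain callback stands in for creating and executing a new layer instance): an operation declares its unbound parameters, accumulates bindings as they arrive, and the moment every parameter is bound, it plays.

```python
class Operation:
    """A hypothetical operation with named, unbound parameters."""

    def __init__(self, name, params, play):
        self.name = name
        self.unbound = set(params)  # parameter names still awaiting bindings
        self.bindings = {}
        self.play = play            # stands in for layer creation/execution

    def bind(self, **kwargs):
        # Accept bindings for any subset of the remaining parameters.
        for key, value in kwargs.items():
            if key not in self.unbound:
                raise KeyError(f"unknown or already-bound parameter: {key}")
            self.unbound.remove(key)
            self.bindings[key] = value
        # Fully bound: "play" immediately with the collected bindings.
        if not self.unbound:
            self.play(**self.bindings)

played = []
op = Operation("translate", ["dx", "dy"],
               lambda **b: played.append(b))
op.bind(dx=10)   # still waiting on dy - nothing plays yet
op.bind(dy=5)    # last parameter bound - plays immediately
```

Mapping one or more such operations over a set of argument tuples, as described above, then amounts to calling `bind` once per tuple on a fresh copy of each operation.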