So it's taken a few days, but now I have operations with value parameters, and a subdevice for viewing and modifying the order of the layer stack.
The result can be used to create basic scripted interactions, with some interesting behaviours resulting from layer dependencies (this image).
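To make the idea concrete, here's a minimal sketch of what "operations with value parameters" plus stack reordering might look like. All of the names here (`Layer`, `Translate`, `Stack.move`) are hypothetical illustrations, not the project's actual API:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str

@dataclass
class Translate:
    # an operation carrying value parameters (dx, dy)
    dx: float
    dy: float

    def apply(self, layer: Layer) -> str:
        return f"translate {layer.name} by ({self.dx}, {self.dy})"

class Stack:
    def __init__(self, layers):
        self.layers = list(layers)

    def move(self, src: int, dst: int) -> None:
        # reorder the layer stack: pop a layer and reinsert it elsewhere
        self.layers.insert(dst, self.layers.pop(src))

stack = Stack([Layer("bg"), Layer("sketch"), Layer("ink")])
stack.move(2, 0)                      # bring "ink" to the front
op = Translate(dx=10, dy=-5)
print(op.apply(stack.layers[0]))      # translate ink by (10, -5)
```

The point of the sketch is just the separation: operations are data (parameters included), so a script can construct, reorder, and apply them independently of the layers they act on.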
From here, the next step is a choice between two options: a) adding a greater variety of parameter and operation types (e.g. parameterising the image thresholding, or adding rotate and scale operations alongside the current vector translate), or b) extending the computation model (e.g. processing layer regions as sets - since regions can already include references to other layers, or to interactive commands, this would extend the power of the language pretty substantially).
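For option b), one way to picture "regions as sets" is a region that mixes literal elements with references to other layers, resolved recursively. This is only a guess at the semantics, with made-up names (`resolve`, coordinate tuples, string layer references):

```python
def resolve(region, layers):
    # a region is a set of pixel coords and/or layer-name references;
    # expand each reference into the referenced layer's resolved region
    out = set()
    for item in region:
        if isinstance(item, str):       # reference to another layer
            out |= resolve(layers[item], layers)
        else:
            out.add(item)
    return out

layers = {
    "mask":  {(0, 0), (0, 1)},
    "paint": {(1, 1), "mask"},          # includes mask's region by reference
}
result = resolve(layers["paint"], layers)
# result == {(0, 0), (0, 1), (1, 1)}
```

Once regions resolve to plain sets, the usual set algebra (union, intersection, difference) falls out for free, which is where the extra expressive power would come from.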