Since a language must be learned, it is difficult to know ahead of time what will work. Though assumptions must be made, it is important that the software constructed to give it voice be as general as possible so that it can evolve. Schemes for mapping to spatial coordinates or to color can be selected from pull-down menus, and different databases can be selected as starting points for compositions. Unfortunately, flexibility is perpetually at war with expedience and the desire to simply get to a point where enough is in place to generate compositions. The result is code that needs a good spring cleaning, which I always seem to put off in favor of the more fun stuff. Perhaps down the road I'll make it available, but for now a general overview will, I hope, serve to illustrate what underlies the compositions.

Several assumptions were made along the way.

To start with, a primary tree is generated using Φ and its inverse, φ. The tree is stored in a MySQL database as it is created and can be searched using a simple expression grammar, which returns a subtree displayed as a 3D graph. The graph can be zoomed, panned, and rotated, and its nodes are colored according to the coloring scheme in effect. Each node can be played for a chosen duration using a chosen instrument, then selected and represented as a note in a separate composition panel. Once represented on the music staff, a note can be given attributes such as a duration, an instrument, membership in a tuplet or a tie, and an image (a branchgraph) loaded or created using an animation panel. The composition can then be played and output to a MIDI file, a WAV file, or a video file built from the branchgraph associated with each note.
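The text does not spell out how Φ and φ actually drive the tree's growth, so the following is only a hypothetical sketch of the first step: a primary tree in which each node branches into one child scaled by Φ and one scaled by φ. The `Node` class, `build_tree` function, and the choice of a binary branching rule are my assumptions for illustration, not the author's actual scheme (which also persists nodes to MySQL as they are created).

```python
import math

PHI = (1 + math.sqrt(5)) / 2   # the golden ratio, Φ ≈ 1.618
phi = 1 / PHI                   # its inverse, φ ≈ 0.618

class Node:
    """A tree node holding a Φ/φ-derived value (hypothetical structure)."""
    def __init__(self, value, depth=0):
        self.value = value
        self.depth = depth
        self.children = []

def build_tree(root_value=1.0, max_depth=4):
    """Generate a primary tree: each node gets a Φ-scaled child and a
    φ-scaled child, down to max_depth. Purely illustrative; the real
    system would also write each node into a database as it is created."""
    root = Node(root_value)
    frontier = [root]
    while frontier:
        node = frontier.pop()
        if node.depth < max_depth:
            for factor in (PHI, phi):
                child = Node(node.value * factor, node.depth + 1)
                node.children.append(child)
                frontier.append(child)
    return root

def count_nodes(node):
    """Total number of nodes in the subtree rooted at `node`."""
    return 1 + sum(count_nodes(c) for c in node.children)
```

One appealing property of this rule is that a node's two children multiply back to the parent's value, since Φ · φ = 1; sibling values therefore stay tethered to their parent rather than drifting apart.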