Since a language must be learned, it is difficult to know ahead of time what will work. Assumptions must be made, but it is important that the software built to give the language a voice be as general as possible so that it can evolve. Schemes for mapping to spatial coordinates or to color can be selected from pull-down menus, and different databases can be chosen as starting points for compositions. Unfortunately, flexibility is perpetually at war with expedience and the desire to simply reach a point where enough is in place to generate compositions. The result is code that needs a good spring cleaning, which I always seem to put off in favor of the more fun stuff. Perhaps down the road I'll make it available, but for now a general overview will, I hope, serve to illustrate what underlies the compositions.
Some assumptions that were made:
- That each node represents a section with two endpoints (A||B) and therefore contains enough information to map it to a three-dimensional coordinate system based upon the relation G = B - A. This makes it possible to map a node both to a spatial coordinate system and to one that describes color.
- Since the frequency of a note can be described with a single variable, it follows that each node represents three notes, with their frequencies determined by A, B and G. The composition panel therefore uses three staves for the three voices inherent to each node. This does not mean, however, that A, B and G need always be associated with the same voice: each note can be set so that A, B or G determines its frequency and its position on the staff.
- Since the frequency represented by A, B or G often lies outside the range of human hearing, it is assumed that bringing it within audible range by dividing or multiplying by octaves does not change the perceptual identity of a note. At the moment the software only applies this adjustment to the frequency that each note represents; it has not yet been taken into account when mapping the color of each note.
- That the Munsell system is best suited for mapping a node to a color because it is based on the notion of equal perceptual intervals. Using perception as the measure is preferable to one based on the physical properties of light and the response of the human eye. The final goal, after all, is one of perception.
- That traditional musical intervals should not be discarded, and should instead be seen as a necessary part of any attempt to bridge the gap between music and color.
- That the application of form is best served if kept open to interpretation.
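The octave adjustment described above can be sketched in a few lines. This is a minimal illustration, not the software's actual code; the audible range of roughly 20 Hz to 20 kHz is an assumption, as is the handling of non-positive values.

```python
# Sketch of folding a node's A, B and G = B - A values into audible range
# by repeatedly doubling or halving, i.e. shifting by whole octaves.
# The bounds below are assumed, not taken from the software.
AUDIBLE_LOW, AUDIBLE_HIGH = 20.0, 20_000.0


def fold_to_audible(freq: float) -> float:
    """Halve or double a frequency by octaves until it is audible.

    Shifting by octaves (factors of 2) is assumed not to change the
    perceptual identity of the note.
    """
    if freq <= 0:
        raise ValueError("frequency must be positive")
    while freq < AUDIBLE_LOW:
        freq *= 2.0
    while freq > AUDIBLE_HIGH:
        freq /= 2.0
    return freq


def node_frequencies(a: float, b: float) -> tuple[float, float, float]:
    """Return the three audible frequencies for a node's A, B and G = B - A."""
    g = b - a
    return tuple(fold_to_audible(abs(v)) for v in (a, b, g))
```

For example, a node with A = 5 and B = 13 (G = 8) would yield the audible triple (20.0, 26.0, 32.0), each value raised by octaves until it clears the lower bound.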
To start with, a primary tree is generated using Φ and its inverse, φ. The tree is stored in a MySQL database as it is created and can be searched with a simple expression grammar, which returns a subtree displayed as a 3D graph. The graph can be zoomed, moved and rotated, and its nodes colored according to the coloring scheme in effect. Each node can be played for a chosen duration on a chosen instrument and then selected and represented as a note in a separate composition panel. Once on the staff, a note can be given attributes such as a duration, an instrument, membership in a tuplet or a tie, and an image (branchgraph) loaded or created with an animation panel. The composition can then be played and output to a MIDI file, a WAV file and a video file built from the branchgraph associated with each note.
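The text does not spell out how Φ and φ drive the branching, so the following is only a hypothetical sketch: it assumes each node's section (A, B) is split at its golden-section point, producing two child sections whose lengths stand in the ratio φ : φ². The `grow` function and the dictionary layout are inventions for illustration.

```python
import math

PHI = (1 + math.sqrt(5)) / 2   # the golden ratio, Φ
phi = 1 / PHI                  # its inverse, φ


def grow(a: float, b: float, depth: int) -> dict:
    """Recursively build a tree of (A, B) sections.

    Hypothetical branching rule: each section is cut at its golden-section
    point, A + (B - A) * φ, and the two resulting sections become children.
    Each node records A, B and G = B - A, mirroring the mapping above.
    """
    node = {"A": a, "B": b, "G": b - a, "children": []}
    if depth > 0:
        cut = a + (b - a) * phi
        node["children"].append(grow(a, cut, depth - 1))
        node["children"].append(grow(cut, b, depth - 1))
    return node
```

In the real software each node would also be written to the MySQL database as it is created, so that the expression grammar can later pull out subtrees for display.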