Or you can represent the music as instructions to performers or synthesizers (i.e., notation), and you get as many dimensions as you want.
Music is not sound; it's made of sounds. The fact that it gets mixed down to a single waveform when you consume it, either in the studio or when it hits your ear, isn't particularly relevant to how it's made. I suppose the same is true of images, though.
Markov chains applied to MIDI have been able to make "locally similar" stuff since forever. I wonder how this algorithm could be applied to higher-order aspects of music notation.
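For anyone who hasn't played with this: here's a minimal sketch of the kind of first-order Markov chain over MIDI note numbers the parent is describing. The training sequence is a made-up C-major fragment; in practice you'd pull note-on events out of a real MIDI file (e.g. with a library like mido), which I've skipped to keep it self-contained.

```python
import random
from collections import defaultdict

# Made-up training data: MIDI note numbers for a short C-major fragment.
training_notes = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72, 67, 64, 60]

# Count transitions: for each note, collect every note that followed it.
transitions = defaultdict(list)
for prev, nxt in zip(training_notes, training_notes[1:]):
    transitions[prev].append(nxt)

def generate(start, length, seed=None):
    """Walk the chain, sampling each next note from the observed successors."""
    rng = random.Random(seed)
    note = start
    out = [note]
    for _ in range(length - 1):
        successors = transitions.get(note)
        if not successors:            # dead end: restart from the opening note
            note = training_notes[0]
        else:
            note = rng.choice(successors)
        out.append(note)
    return out

print(generate(start=60, length=16, seed=42))
```

The output sounds "locally similar" because every single transition was observed in the training data, but there's no memory beyond one note, so nothing like phrase structure or form emerges. Extending the state to n-grams of notes (or to higher-order notation features like harmony or meter) is exactly the open question above.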
We still call images "two-dimensional" even when they're colored. There is a difference between continuous dimensions like space and time, and discrete dimensions like color channels in an image, or like instrument "tracks" in a song. The latter can have correlations, but they'll be sparse associations rather than structural, formulaic ones.