> I will never play any instrument as well as a sequencer. I believe algorithms will make beautiful Jazz improvisations in real time, in my lifetime. But playing is still beautiful.
Having dug quite deep into algorithmic music generation of all sorts, and having studied machine learning for my master's, I still believe you need an actual artist to make a music-generator program.
A machine simply isn't going to figure out "swing" if you don't tell it to, and swing is one of the easiest things. If you look just at note generation, I think algorithms can go a long way. But the subtleties of timing and timbre they can only imitate from context. That is definitely good enough for many purposes, and I agree with your prediction that algorithms will be able to generate beautiful music, but I also think there will always remain an "edge" for the artist, if only to discover novel things that are also cool, and then work those out in order to fully express the coolness of that new thing.
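To be concrete about what I mean by "swing": it's just a timing rule someone has to put in. A minimal sketch (my own illustration, all names and ratios made up, not any sequencer's actual API) of pushing the off-beat eighth notes later on the grid:

```python
# Minimal sketch of "swing": delay every off-beat eighth note so a
# straight 50/50 grid becomes roughly 67/33. Purely illustrative values.

def apply_swing(onsets_in_beats, swing=0.67, grid=0.5):
    swung = []
    for t in onsets_in_beats:
        step = round(t / grid)
        if step % 2 == 1:                        # off-beat eighth note
            beat_start = (step - 1) * grid       # downbeat this note belongs to
            t = beat_start + swing * (2 * grid)  # push it to ~67% of the beat
        swung.append(t)
    return swung

straight = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
print(apply_swing(straight))  # off-beats land around 0.67, 1.67, 2.67, 3.67
```

The rule itself is trivial once you know it; the point is that someone has to decide it belongs there, and by how much.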
I'm also thinking about all the evolutions the many genres of electronic music have gone through over the past decades. New and novel "sounds" (or moods, or styles, etc.) are still being created/discovered. It is that process where I don't think we're there yet. Yes, an algorithm can probably generate beautiful psytrance or lo-fi hip-hop beats, and if I'm generous probably eventually also really complex (and jazzy!) stuff like Squarepusher.
But what I'm not seeing happening any time soon (barring some breakthrough in general AI) is giving the algorithm a TB-303 for the first time and having it figure out acid house. Yes, you can probably teach it the origins of neurofunk DnB (think Ed Rush & Optical's 1998 album Wormhole) and produce super awesome dance music. But I don't see how it could ever develop what happened to DnB beyond that. Wavetable synthesis didn't really exist in that form back then, and the bass came from more classic synthesis like the Reese bass. Nowadays, what you can do with a wavetable synth VST like Serum almost defines what modern DnB sounds like. That particular sound evolved and was shaped through the genre of drum'n'bass and became part of it: a new style of synthesis, heavily facilitated by the particular UX controls of these synth plugins, which the author of Serum in turn amplified by building his own vision of what that UX should be. It is almost like the birth of a new instrument, with artists having to learn the right way to "play" it. That has settled enough now that it is appearing in other new genres, yet it is also still developing. And that is just one genre I happen to be somewhat familiar with; I'm sure similar examples can be named in many other genres (for instance, I don't know much about the history of dubstep).
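To illustrate the contrast I mean (a rough sketch of my own, not how Serum or any particular track actually does it): the classic Reese is basically two slightly detuned saws beating against each other, whereas a wavetable synth scans through stored single-cycle waveforms.

```python
# Rough sketch of a "Reese"-style bass: two slightly detuned saw waves summed,
# so they slowly phase/beat against each other. Naive, aliasing oscillators;
# purely illustrative, not how Serum or any real plugin works.
import numpy as np

def saw(freq, dur, sr=44100):
    t = np.arange(int(dur * sr)) / sr
    return 2.0 * ((t * freq) % 1.0) - 1.0

def reese(freq=55.0, detune_cents=15.0, dur=2.0, sr=44100):
    ratio = 2.0 ** (detune_cents / 1200.0)  # detune the second oscillator slightly
    return 0.5 * (saw(freq, dur, sr) + saw(freq * ratio, dur, sr))

# A wavetable oscillator instead interpolates through a table of stored
# single-cycle waveforms as it plays, which is roughly where the modern
# "Serum-style" DnB bass sound design lives.
```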
Those evolutionary steps, the invention of truly novel things: for the foreseeable future I don't think AI is there, and artists still have an edge, even if it's a very thin one.