Hartmann Neuron or other similar synths
Posted: Tue Jul 09, 2013 2:35 pm
I occasionally check to see if there are any synths in the Hartmann Neuron vein: ones aiming to blend or morph between machine-generated models, which is a bit different from the FFT/resynthesis paradigm.
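To be clear about the distinction, here's a rough toy sketch of the FFT/resynthesis paradigm I mean: crossfade the magnitude spectra of two sounds frame by frame, then resynthesize. This is just my own illustration, with made-up function names and parameters, not anything any actual product does internally.

Code:
# Toy FFT/resynthesis morph: blend magnitude spectra frame by frame.
# Assumes mono inputs at the same sample rate; all names here are illustrative.
import numpy as np
from scipy.signal import stft, istft

def spectral_morph(a, b, mix, fs=44100, nperseg=2048):
    """mix=0 returns (roughly) a, mix=1 returns (roughly) b."""
    _, _, A = stft(a, fs=fs, nperseg=nperseg)
    _, _, B = stft(b, fs=fs, nperseg=nperseg)
    n = min(A.shape[1], B.shape[1])                    # align frame counts
    A, B = A[:, :n], B[:, :n]
    mag = (1 - mix) * np.abs(A) + mix * np.abs(B)      # linear magnitude blend
    phase = np.angle(A) if mix < 0.5 else np.angle(B)  # borrow phase from the dominant source
    _, y = istft(mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return y

Note that the blend happens in the spectrum itself; the Neuron's pitch is that the blend happens in a learned model's parameters instead.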
But these new, seemingly outlandish synthesis methods always have a similar problem. I think this even includes synths like RayBlaster from Tone2. In a production setting, let's say I've got a sound I need to create. Maybe I can imagine in my head how I'd create it on a modular. Maybe I can't define a signal flow for it, but I know the constituent parts that would make up the sound I want. Then along comes this "new" synthesis method.
With the Neuron, what exactly is going on in the modeling section is unknown. It's some sort of machine-"assisted" thing, where presumably the musician is not supposed to worry about the technical aspects and can just concentrate on music... or composing, or whatever. This is the part I have a problem with. Since when did "technical aspects" become something a musician was not supposed to worry about?
If I know what causes the sound I want, and the particular signal flow needed to make it happen, then of course I'm going to worry about precisely how the sound is generated. It might not be fun to think about, but since the sound is totally dependent on the generation method, and the music is the result of the sounds, I believe the "technical aspects" are very much a concern for the musician or composer (or any advanced composer, at least). Lots of chefs don't spend all their time in the kitchen, but can be seen on farms, making sure the produce is grown exactly to their liking, or in the best way possible, every step of the way. In the end, I think that is what quality comes down to.
Machines filling in and doing the job is not the problem. I think AI is great and should be taken advantage of; the philosophy is the problem. If AI were to assist in generating the sound, it should learn exactly how you like the sound to be generated, from the atomic level up. The musician having to coax the sound he wants out of the AI is completely upside down.
And as can be seen with the Neuron, the outcome is fairly consistent: the supposedly ultra-capable tool becomes useless, because what was meant to assist in creating new and awesome sounds instead creates whatever sounds it most naturally creates, and becomes irrelevant to the context (or to whatever problem it needs to solve).
Still, it does seem like the individual parts are there: machine learning of sounds (whatever method it is based on), a method of manipulating the learned models, and some sort of interface. But perhaps it's the learning part that doesn't tie everything together. Maybe the sound isn't to be learned from a pre-existing recording, but from the way the author creates sounds, in the context of what the other instruments are doing and how they sound. It's an interesting philosophical question to consider: what is a sound generation tool, and what do we need it to do? I think that is the question these approaches don't answer (because apparently morphing between models is not the end game).
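For contrast with the spectral sketch above, here's a toy sketch of what "manipulating learned models" could mean: fit a tiny parametric model to two sounds (just harmonic amplitudes at a known fundamental; the Neuron's actual models are unpublished and surely far richer), then interpolate the parameters instead of the spectra. All names and parameters here are made up for illustration.

Code:
# Toy model-parameter morph: learn a tiny "model" (harmonic amplitudes)
# from each sound, blend the parameters, and resynthesize additively.
# f0 is assumed known; every name here is hypothetical.
import numpy as np

def fit_harmonics(x, f0, fs, n_harm=16):
    """Estimate one amplitude per harmonic by correlating with sinusoids."""
    t = np.arange(len(x)) / fs
    amps = np.empty(n_harm)
    for k in range(1, n_harm + 1):
        c = np.sum(x * np.exp(-2j * np.pi * k * f0 * t))  # project onto k-th harmonic
        amps[k - 1] = 2.0 * np.abs(c) / len(x)
    return amps

def render(amps, f0, fs, dur=1.0):
    """Additive resynthesis from the harmonic-amplitude 'model'."""
    t = np.arange(int(dur * fs)) / fs
    return sum(a * np.sin(2 * np.pi * (k + 1) * f0 * t)
               for k, a in enumerate(amps))

def model_morph(x1, x2, mix, f0, fs):
    p1, p2 = fit_harmonics(x1, f0, fs), fit_harmonics(x2, f0, fs)
    return render((1 - mix) * p1 + mix * p2, f0, fs)  # blend parameters, not spectra

The point of the toy: the blend lives in the model's parameter space, so intermediate settings still behave like a coherent instrument, which is exactly why the philosophy of how the model gets learned matters so much.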
Some of the examples I've read are the most impractical applications I've ever seen.
1. Imagine creating a pizzicato sound out of a strings model. (Why on earth would anyone do that? It's a waste of time.)
2. Morph between a drum loop and a gong. (The demo sounds like garbage.) Facepalm.
3. Find models that have characteristics you like, and make the perfect blend. (Or think a little harder, pinpoint exactly what it is you like, figure out how it's achieved, and recreate it.)
It's all about laziness and ignorance. What it's saying is, "it's okay, you can still achieve spectacular things." I have zero tolerance for such thoughts. Music is complicated and difficult; if it's easy, you're not trying hard enough. (It should be easy for the listener to understand, though.) You can't be lazy and dumb. Actually, you probably shouldn't be lazy and dumb in the majority of anything in real life anyway.