I occasionally check to see if there are synths along the Hartmann Neuron vein: synths aiming to blend or morph between machine-generated models, which is a bit different from the FFT/resynthesis paradigm.
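For contrast, the FFT/resynthesis-style morph is, at its crudest, just interpolation between spectral frames. A minimal sketch of that idea (all function names my own, not any synth's actual engine):

```python
import numpy as np

def spectral_morph(frame_a, frame_b, t):
    """Crude FFT-style morph: interpolate the magnitude spectra of two
    equal-length frames, keeping frame_a's phase. t=0 -> a, t=1 -> b."""
    A, B = np.fft.rfft(frame_a), np.fft.rfft(frame_b)
    mag = (1.0 - t) * np.abs(A) + t * np.abs(B)    # blend magnitudes
    phase = np.angle(A)                            # keep one set of phases
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(frame_a))

# Toy example: morph halfway between a 220 Hz and a 330 Hz sine.
sr = 8000
n = np.arange(sr)
a = np.sin(2 * np.pi * 220 * n / sr)
b = np.sin(2 * np.pi * 330 * n / sr)
mid = spectral_morph(a, b, 0.5)  # spectrum now has peaks at both 220 and 330 Hz
```

The Neuron's model-based approach presumably interpolates in some learned parameter space instead, which is exactly the part that was never documented.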
But these new, seemingly outlandish synthesis methods always have a similar problem. I think this even includes synths like Rayblaster from Tone2. In a production setting, say I've got a sound I need to create. Maybe I can imagine how I'd create it in modular. Or maybe I can't define a signal flow for it, but I know the constituent parts that would make up the sound I want. Then comes this "new" synthesis method.
With Neuron, what exactly is going on in the modeling section is unknown. It's some sort of machine-"assisted" thing, where presumably the musician is not supposed to worry about the technical aspects and can just concentrate on music, or composing, or whatever. This is the part I have a problem with. Since when did "technical aspects" become something a musician was not supposed to worry about?
If I know what causes the sound I want, and the particular signal flow needed to make it happen, then of course I'm going to worry about precisely how the sound is generated. It might not be fun to think about, but since the sound is totally dependent on the generation method, and the music is the result of the sounds, I believe the "technical aspects" are very much a concern for the musician or composer (or any advanced composer, at least). Lots of chefs don't spend all their time in the kitchen, but can be seen on farms, making sure the produce is grown exactly to their liking, or in the best way possible, every step of the way. In the end, I think that is what quality comes down to.
Machines filling in and doing the job is not the problem. I think AI is great and should be taken advantage of; it's the philosophy that is the problem. If AI were to assist in generating the sound, then it should learn exactly how you like the sound to be generated, from the atomic level up. The musician coaxing the sound he wants out of the AI is completely upside down.
And as can be seen with Neuron, the conclusion is fairly consistent. The ultimately capable tool becomes useless, because what was supposed to assist in creating new and awesome sounds instead creates whatever sounds it most naturally creates, and becomes irrelevant to the context (or to whatever problem it needs to solve).
Though it does seem like the individual parts are there: machine learning of sounds (whatever method it is based on), a method of manipulating the learned models, and some sort of interface. But perhaps it's the learning part that doesn't tie everything together. Maybe the sound isn't to be learned from a pre-existing recording, but from the way the author creates sounds in the context of what the other instruments do or sound like. It's an interesting philosophical question to consider: what is a sound generation tool, and what do we need it to do? I think that is the question these approaches never answer. (Because apparently morphing between models is not the end game.)
Some of the examples I've read are the most impractical applications I've ever seen.
1. Imagine creating a pizzicato sound out of a strings model. (Why on earth would anyone do that? It's a waste of time.)
2. Morph between a drum loop and a gong. (The demo sounds like garbage.) Face palm.
3. Find models that have characteristics you like, and make the perfect blend. (Or think a little harder, pinpoint what exactly it is you like, figure out how it's achieved, and recreate it.)
It's all about laziness and ignorance. What it's saying is, "It's okay, you can still achieve spectacular things." I have zero tolerance for such thoughts. Music is complicated and difficult. If it's easy, then you're not trying hard enough. (It should be easy to understand for the listener, though.) You can't be lazy and dumb. Actually, you probably shouldn't be lazy and dumb in the majority of anything in real life anyway.
hartmann neuron or other similar synths
Re: hartmann neuron or other similar synths
I was only 4 paragraphs into my initial reply when I realized I had too many balls in play. Some filtering is in order, imo...
Are you specifically after hardware? Hardware with a keybed? Software that runs realtime? Software in general?
Is this for performance or sound design?
Re: hartmann neuron or other similar synths
kensuguro wrote: With Neuron, what exactly is going on in the modeling section is unknown. It's some sort of machine-"assisted" thing, where presumably the musician is not supposed to worry about the technical aspects and can just concentrate on music, or composing, or whatever. This is the part I have a problem with. Since when did "technical aspects" become something a musician was not supposed to worry about?

Suppose it depends on what you mean by 'technical'. I have a Yamaha PLG150VL which comes with an 'editor' where you can bring up a brass embouchure and tweak its parameters for the attack part of a sound, combine it with a sustain from a violin, and control the 'raspiness' of the bow or some such. However, programming the PLG150AN (a Prophet 5 type synth), I know I can get a trumpet sound by selecting a sawtooth wave and putting a light delay on the attack.
On the VL synth, I'm concerned with real instrument techniques (embouchure and bow rosin) and can ignore waveform techniques, whereas on the AN synth, I focus on the waveform theory of the instrument (delayed attacks make for brass sounds) and it doesn't matter if I'm ignorant of what an 'embouchure' is.
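The "waveform theory" route described above (a sawtooth with a light delay on the attack reads as brass) is easy to sketch. A toy rendering of the idea, not the PLG150AN's actual engine:

```python
import numpy as np

def brassy_saw(freq=440.0, sr=44100, dur=1.0, attack=0.08):
    """Naive sawtooth with a slowed attack: the delayed onset of the
    envelope is what makes the ear read the tone as 'brassy'."""
    t = np.arange(int(sr * dur)) / sr
    saw = 2.0 * (t * freq - np.floor(t * freq + 0.5))  # naive saw in [-1, 1)
    env = np.minimum(t / attack, 1.0)                  # linear attack ramp
    return saw * env

note = brassy_saw()  # 1 second of A4 with an 80 ms attack
```

A naive (aliasing) saw is fine for illustrating the point; a real synth would band-limit it.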
Fortunately, I know a bit about both types of 'technical' here, i.e. the 'instrument' technical and the 'synthesis' technical, so I can make an informed choice about which approach to take.
So it's good that the choice exists, if I want to build a sound rather than call up a preset.
I don't see much room here for any type of AI, other than attaching a 'Siri' microphone to my synth so that saying 'fat brass' into it would have the synth select a fat brass preset. Even then, I'm not sure you would call that AI, unless you were able to say 'less attack and more harmonics in the sustain' into the synth and it was able to follow that as well.
That would be a kind of cool AI: where you couldn't tell if a microchip was following your instructions, or if there was a little miniature dawman expert inside your synth, running around tweaking parameters as you spoke them.
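The text side of that little dawman is basically a command parser over a parameter dictionary. A toy sketch of the idea, assuming speech-to-text happens upstream (all names invented):

```python
def tweak(command, params, step=0.1):
    """Apply each 'more X' / 'less X' clause of a spoken-style command
    to any matching synth parameter, clamped to [0, 1]."""
    for clause in command.lower().split(" and "):
        words = clause.split()
        delta = step if "more" in words else -step if "less" in words else 0.0
        for name in params:
            if delta and name in words:
                params[name] = min(1.0, max(0.0, params[name] + delta))
    return params

params = {"attack": 0.5, "harmonics": 0.5}
tweak("less attack and more harmonics in the sustain", params)
# attack 0.5 -> 0.4, harmonics 0.5 -> 0.6
```

Obviously a real version needs actual language understanding; the point is only that the "follow my instructions" half is the easy part compared to knowing which parameters matter.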


Re: hartmann neuron or other similar synths
There are a lot of great software engineers but mostly with no imagination. I could say the same thing about musicians though.
I've given up on synthesis and concentrate mostly on creating my own samples, usually with very little use of effects. My main tool is a simple one, Battery 4. It takes a lot of work to make sample banks but once you are done, you can make great music so quickly.