Modulate parameters with audio signals

Anything about the Scope modular synths

Tau
Posts: 793
Joined: Mon Jun 26, 2006 4:00 pm
Location: Portugal

Modulate parameters with audio signals

Post by Tau »

Hello!

I'm trying to overcome some of MIDI's limitations, namely that a controller message can carry no more than 128 discrete values. I was wondering if it is possible to control, say, a filter's cutoff frequency by increasing or decreasing the amplitude (volume) of an audio signal, and whether that would behave any differently from controlling it via MIDI (you know, when you sweep frequencies with high resonance values, you get that stepped, zipper-like sound instead of a continuous sweep).

My idea was that, with audio being calculated at 32 bits and at high sample rates, it would provide a finer-resolution control signal for the filter. But do Modular filters accept this kind of modulation, or are they also limited to 128 discrete frequency steps?

I made some experiments, like connecting an OSC through a VCA to the freq mod input of a filter, but this gives me a bipolar signal (so it's not tracking the volume, it follows the wave of the oscillator itself), and even after converting to unipolar and smoothing, I can't get a stable signal; there's always some oscillation.
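
To show what I mean outside Scope, here's a rough Python sketch (illustrative numbers, not Scope code): shifting the bipolar wave into the unipolar range still swings at the oscillator's frequency, while rectifying and then low-passing it settles on the level:

    import math

    sr = 44100
    freq = 100.0    # oscillator frequency
    amp = 0.8       # the "volume" I actually want to track
    osc = [amp * math.sin(2 * math.pi * freq * n / sr) for n in range(sr)]

    # naive bipolar -> unipolar conversion: still follows the wave itself,
    # swinging between 0.1 and 0.9 a hundred times per second
    unipolar = [0.5 * x + 0.5 for x in osc]

    # rectify first, then smooth: the output settles near the signal's level
    y, env = 0.0, []
    for x in osc:
        y = 0.999 * y + 0.001 * abs(x)   # one-pole smoother
        env.append(y)
    print(round(env[-1], 3))  # ~0.51 (amp * 2/pi, the mean of |sin|), and stable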

Mehdi's devices (the S-Filter, for example, and it seems the new MultiSynth as well) can convert audio signals into CCs, or use them directly as modulation, and there's also Wolf's Audio to Control device, but I was wondering if it could be done with standard Modular modules, and whether it has any influence on the response of the filters.

I understand that's the way digital filters are supposed to behave; I'm just trying to understand better what the filter can do and what MIDI makes it do.

Any input appreciated.

Thanks!


T
dawman
Posts: 14368
Joined: Sun Jul 24, 2005 4:00 pm
Location: PROJECT WINDOW

Post by dawman »

I also need to know about that.

I have been messing around with MSF-1 by Hifiboom and using a device I had made called the Bug Knob.

I couldn't get my pedals to respond the way I like, so I had this made in hardware. It's a 2" dial that rotates endlessly; wherever I grab it, it starts from there.

It also turned out to be a pain in the ass. Finer resolution is needed, as with the pitch bend wheel, where extra values seem to be used.
at0m
Posts: 4743
Joined: Sat Jun 30, 2001 4:00 pm
Location: Bubble Metropolis

Post by at0m »

Tau,

For modulation, there are two main types: sync and async. Sync means it's updated on each clock pulse (at audio sample rate); async is updated at a lower frequency. Regular modulation inputs are async. Some, like FM inputs or the whole Flexor range of modules, take sync signals. This means you can send any audio signal to a Flexor filter's cutoff modulation, for example. Doing the same on regular CW filters will not yield the same results; they're not designed to read their inputs that fast. The benefit of slower updates is that they take fewer cycles to calculate, which is more DSP-efficient. Flexor's modulation options take a bit more DSP but are more flexible: for example, you can create formants by modulating a Flexor filter's cutoff at audio rate.
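
A toy picture of the difference in Python (the 64-sample block is an assumption for illustration, not Scope's actual async rate):

    import math

    sr = 44100
    mod_freq = 220.0   # an audio-rate modulator
    block = 64         # assumed async update interval, in samples

    def modulator(n):
        return math.sin(2 * math.pi * mod_freq * n / sr)

    # sync: the filter sees a fresh cutoff value on every single sample
    sync_cutoff = [1000.0 + 800.0 * modulator(n) for n in range(sr)]

    # async: the same modulator, sampled once per block and held in between,
    # i.e. refreshed only ~689 times/s, so a 220 Hz modulator becomes coarse steps
    held, async_cutoff = 0.0, []
    for n in range(sr):
        if n % block == 0:
            held = 1000.0 + 800.0 * modulator(n)
        async_cutoff.append(held)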

For modulating filter cutoff by an audio signal's level, one would insert an envelope follower, which generates lower-frequency signals usable on all modulation inputs (sync and async), although Flexor's HyperFollower can also generate audio-rate modulation. Flexor's envelope follower can work on time frames as short as one sample. On a regular follower this would not make much sense, but you can modulate it to slide from normal following into audio-rate modulation.
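
Something like this sketch (illustrative Python, not Flexor's actual algorithm); with both times set to one sample it passes the rectified input straight through, which is audio-rate modulation, and with longer times it is a normal smooth follower:

    import math

    def follower(signal, sr, attack_s, release_s):
        def coeff(t):   # per-sample smoothing factor for a given time frame
            return 0.0 if t <= 1.0 / sr else math.exp(-1.0 / (t * sr))
        up, down = coeff(attack_s), coeff(release_s)
        y, out = 0.0, []
        for x in signal:
            r = abs(x)                    # rectify
            c = up if r > y else down     # pick attack or release smoothing
            y = c * y + (1.0 - c) * r
            out.append(y)
        return out

    # one-sample time frames degenerate into per-sample (audio-rate) tracking:
    # follower(sig, 44100, 1/44100, 1/44100) returns [abs(x) for x in sig]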

MIDI is slow; it does not make sense to convert audio-rate signals to CC, since many of the CCs will not be read adequately by the controlled parameter. Even a fast envelope follower > CC conversion would lose a lot of the resolution of the follower's original output. This is why Flexor uses audio-rate/sync modulation on most of its mod inputs: it allows for some more synthesis techniques!
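
To put numbers on "slow" (this is the wire-speed ceiling of classic DIN MIDI; Scope's internal MIDI handling may be faster, but the order of magnitude is the point):

    midi_baud = 31250   # classic DIN MIDI, bits per second
    bits_per_byte = 10  # 8 data bits plus start and stop bits
    cc_bytes = 3        # status + controller number + value

    msgs_per_sec = midi_baud / (bits_per_byte * cc_bytes)
    print(round(msgs_per_sec))          # ~1042 CC messages/s at absolute best
    print(round(44100 / msgs_per_sec))  # ~42 audio samples between two updates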

On top of that speed/frequency difference, MIDI indeed uses 7-bit values, while audio modulation has 32-bit resolution.
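
In concrete terms, assuming a typical exponential 20 Hz to 20 kHz cutoff mapping (the exact curve is device-dependent):

    lo, hi = 20.0, 20000.0

    # 7-bit MIDI: only 128 discrete cutoff values across the whole sweep
    midi_steps = [lo * (hi / lo) ** (v / 127.0) for v in range(128)]
    print(round(midi_steps[75] - midi_steps[74], 1))  # ~62.6 Hz jump near 1 kHz

    # a 32-bit float has ~2**23 mantissa steps between any two neighbouring
    # octaves, so its individual increments are far below anything audible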

I hope this explains modulation signals a bit...
more has been done with less
https://soundcloud.com/at0m-studio

Post by dawman »

Thanks at0m,

You are like the big brokerage house on Wall Street called E.F. Hutton.

A room full of traders yelling and screaming, PA announcements, etc... and then E.F. Hutton speaks and everyone freezes just to hear their valuable advice. :wink:


Thanks 4 The Explanation.

I usually have to read your posts a few times and have a diagram close by.

Just tried your Modular / Flexor Compressor trick.

I found a way to use it on Wolf's 16 Channel mixer I think.

I am trying to understand Modular for treatments like that, where control, rather than making synth sounds, is the soup du jour.

Modular synths in Scope are still causing me grief and instability live. A damn shame, as many patches with Flexor III modules have such good character and sound.

Dank opnieuw ("thanks again"),

Post by Tau »

Thanks at0m! I'm beginning to get the picture!

Tonight, I will modulate

:)

T
spacef
Posts: 3250
Joined: Sun Jun 17, 2001 4:00 pm

Post by spacef »

It is the graphics that take CPU, not the MIDI by itself (CCs do, if they are assigned to parameters moving on screen). There is lag in the graphics because Scope and the computer process audio/MIDI first (the info going in and out over the PCI bus).

When there is intensive graphical activity in Scope, you superimpose three info streams: audio + MIDI + graphics. Graphics come last, and that's why they show a lag: the audio + MIDI is processed correctly and in time, but the graphics are processed "when the PC can", and the more CCs you add, the less the PC can keep all the graphics updated at the same time.

Even with an empty Scope project, the PC's CPU load can be EXACTLY the same with Windows software (for example, moving the Windows XP "system monitor" around shows the same CPU load as moving a device around). The cause is a slow computer (I have one: 2.5 GHz is slow, and it gets slower over time as we tend to ask more and more from it; its limits are reached faster nowadays than a few years ago).

This is why you have less CPU load when devices are closed: Scope does not need to update their graphics anymore.

This is also why SpaceF mixers allow you to hide the VU meters, to minimize the graphical work Scope has to do while letting the user continue to edit the device (which is not possible when the device is closed, unless you have assigned MIDI CCs, which is a good method and adds no CPU load when the device panel is closed).

All the best

PS: to modulate with CC/audio, audio converted into CCs works best when re-converted into audio: that way you don't even need a full-resolution modulation source, and even 127 steps become much more than needed in that particular case (in theory you only need a minimum and a maximum, and you can use an envelope follower to generate all the transitional values; of course, the more steps in the MIDI signal, the more control you have over the CURVE of the final audio flow).
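
The principle in a few lines of Python (a sketch of the idea, not the XP Modulator's actual code): a button that only ever sends the minimum and maximum, run through a one-pole slew in the audio domain, yields tens of thousands of distinct in-between values:

    def slew(targets, samples_per_step, coeff=0.9995):
        # hold each CC value for a while and glide towards it sample by sample
        y, out = 0.0, []
        for t in targets:             # t is the held CC value, e.g. 0.0 or 1.0
            for _ in range(samples_per_step):
                y = coeff * y + (1.0 - coeff) * t
                out.append(y)
        return out

    ramp = slew([1.0, 0.0, 1.0], samples_per_step=22050)  # button: max, min, max
    # 'ramp' now holds ~66000 samples of smooth float transitions between 0 and 1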


If you have SpaceF devices: most devices that include the XP Modulator (SpaceF's envelope follower) have an assignable MIDI CC. Assign a button to it to remote-control the minimum and maximum positions, and you can create an LFO with classic or new shapes using only that button: the on/off presses send the minimum and maximum values, and the Xpression Modulator creates the two million values that make up the transitions.
Echo 3 has a button to do this directly (it is just the min/max of the "Manual" control).
A simple pot is better, as you have manual control over all the steps, not only min/max. That's why DXD, AN-OSC, BiOsc, and MultiSynth no longer offer the button, but a more versatile pot (that can be assigned to buttons anyway).

Post by dawman »

I have to take a class just to follow Mehdi's and at0m's replies!!


BTW Mehdi, the MultiSynth is unbelievable. I won't have the time it needs until this weekend, but the options are astounding.

I really love seeing the 2 generations of young Scope 4 live's, but they gotta go!!
hifiboom
Posts: 2057
Joined: Thu Aug 03, 2006 4:00 pm
Location: Germany, Munich

Post by hifiboom »

From my tests, the problem isn't the 128/256 steps at all, but the handling of async information in Scope. If you do further async processing (integer calculations instead of sync calculations) on MIDI information in Scope, the time resolution, also called quantization, is reduced... meaning that where you had values like 128, 111, 102, ... at, say, 64 updates per second, it gets reduced to something like 128, 70 at, say, only 8 updates per second, which leads to discrete steps in the CC signal information.

So it's not a question of resolution/amplitude values, but rather something to do with time-domain handling.
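
A toy example of the effect in Python (made-up rates): the amplitude resolution is untouched, but fewer updates per second turn a smooth ramp into plateaus:

    def quantize_time(values, in_rate, out_rate):
        # keep only every (in_rate/out_rate)-th value and hold it in between
        step = int(in_rate / out_rate)
        return [values[i - (i % step)] for i in range(len(values))]

    sweep = [v / 127.0 for v in range(128)]   # a smooth CC ramp, one value per tick
    at_64hz = quantize_time(sweep, 64, 64)    # every value survives
    at_8hz = quantize_time(sweep, 64, 8)      # only every 8th value survives
    print(len(set(at_64hz)), len(set(at_8hz)))  # 128 levels vs. 16 plateaus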

And yes, an audio-rate modulation input does fix this problem.
Async calculations save DSP power for tasks where the time domain is unimportant: say you want to change the room size on a reverb plug-in; you don't care if changing the size or diffusion takes a quarter second before you hear the difference.
:wink:
fra77x
Posts: 889
Joined: Tue Apr 17, 2001 4:00 pm

Async synchro flag

Post by fra77x »

But async signals have a synchron flag (in the SDK, of course) which almost doubles the accuracy of async signals. Check it out.

cheers,
John