DSP-based computational filtering - any ideas?
Posted: Wed Mar 12, 2008 12:21 pm
I've known for a long time that my Scope cards should be able to do a computational convolution for signal removal. (For exactly how I know this, see the bottom of this post.)
Consider the problem of recording a choir:
I finally got to record 8 channels simultaneously last weekend using VDAT. Fun! (Now I'm looking forward to trying 16!)
A lovely Samoan choir was making a CD for humanitarian/charity reasons. Man, those Samoans can sing! Okay, sometimes they sing off-key (Polynesian singers will routinely flat the 7th of a chord, because their native tonal systems do not use Equal Temperament), but these guys were tanks - they recorded for 4 hours straight, on their feet - usually barefoot - and cranked out 12 songs, most in a single take, without anyone complaining about their feet hurting or anyone's voice giving out!
Anyway, the recording circumstances were less than ideal. Doing something for charity means that you seldom have a budget and you're stuck with whatever room whichever church will let you record in. And that's where my DSP question comes in.
The problem was partially the room itself that we were recording in, and the other part was the electronic "piano" that was accompanying the choir.
So, we've got an electronic "something" (piano is too loose a term) on channels 1 and 2 brought in direct (not from microphone). The signal from the piano was also amplified and monitored over a fairly nice Peavey-esque 2-channel PA system for the choir to sing to. (If only we'd had 50 pairs of headphones so they could have each heard the piano and themselves without a PA system, none of this would have been a problem. In my next life I will own 50 pairs of headphones!)
The entire choir was miked with 6 microphones, 3 on the women's side, 3 on the men's side. So we've got the women on channels 3, 4, and 5, and the men on channels 6, 7, and 8.
So far so good. Okay, there was a little problem in that the center channels had to be adequate for the small "solo" groups that were used on some of the numbers, but because the whole session covered 12 songs in 4 hours, there wasn't a lot of time to mix or set levels on the fly. Anyway, sample-depth overhead should cover for levels, right?
It turns out the SM58 mics used for the center channels - 4 and 7 - were a lot hotter than the SM57 mics on the sides - 3, 5, 6, and 8. Also, an SM58 picks up a lot more off-axis than you'd hope. Bottom line is that we got a LOT of the piano from the PA system into the mics, especially through the center channels.
Okay, but we've got a really nice direct image of the piano on channels 1 and 2, right? Shouldn't we be able to just subtract that out from each of the mic channels?
Grin.
Using a multitrack software package, we could - theoretically - create an inverse of the piano signals (plural, because it's a 2-channel "stereo" piano).
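Just to make that concrete, here's the sort of thing I mean, as an offline sketch in Python (the file names are made up, it only uses one of the two direct piano channels, and it assumes mono WAV exports of the direct feed and one mic channel):

[code]
import numpy as np
import soundfile as sf              # pip install soundfile
from scipy import signal

# Direct piano feed (ch 1) and one bleed-heavy mic channel (ch 4).
piano, sr = sf.read("ch1_piano_left.wav")
mic, _ = sf.read("ch4_women_center.wav")
n = min(len(piano), len(mic))
piano, mic = piano[:n], mic[:n]

# Estimate how late the PA bleed arrives relative to the direct feed.
corr = signal.correlate(mic, piano, mode="full", method="fft")
lags = signal.correlation_lags(n, n, mode="full")
delay = max(int(lags[np.argmax(np.abs(corr))]), 0)

# Delay the direct feed to line up with the bleed, match its level by
# least squares, then subtract - i.e. mix in the inverted copy.
aligned = np.concatenate([np.zeros(delay), piano[:n - delay]])
gain = np.dot(mic, aligned) / np.dot(aligned, aligned)
sf.write("ch4_static_subtract.wav", mic - gain * aligned, sr)
[/code]

One fixed delay and gain only cancels whatever the PA and the room passed through unchanged; it can't follow the EQ, reflections, and smear the room adds, which is why a plain inverted mix never nulls completely.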
But it would be FREAKIN' COOL if we could just tell an algorithm to listen for the piano (since it has a perfect sample on 1 and 2) and drop those signals out of the mic channels!
And then what would be even cooler - if I ever had to do this again - is if I could set up that algorithm in advance so that it could "learn" the piano and subtract its signal live, while we were recording. Well, okay, maybe it is better to do it in post, but it would still be cool to have such an algorithm, and to have a "learn" function to let it figure out how to automatically do this minor miracle of DSP convolution.
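Just to pin down what I mean by "learn": the textbook version of this is an adaptive filter - normalized LMS - that takes the direct piano track as a reference, learns an FIR model of the piano-to-PA-to-room-to-mic path, and subtracts its prediction from the mic channel, leaving whatever the reference can't predict (i.e., the voices). A rough Python sketch, with made-up file names, filter length, and step size (mono, and painfully slow in a plain loop - a real version would run once per reference channel and block-by-block in the frequency domain):

[code]
import numpy as np
import soundfile as sf

def nlms_cancel(ref, mic, taps=4096, mu=0.1, eps=1e-6):
    """Remove whatever part of 'mic' the reference can predict (NLMS)."""
    w = np.zeros(taps)                   # learned FIR model of piano->mic path
    buf = np.zeros(taps)                 # last 'taps' reference samples
    out = np.empty(len(mic))
    for i in range(len(mic)):
        buf = np.roll(buf, 1)            # shift the reference history
        buf[0] = ref[i]
        est = w @ buf                    # predicted piano bleed at this sample
        out[i] = mic[i] - est            # residual = voices + prediction error
        w += (mu / (eps + buf @ buf)) * out[i] * buf   # NLMS weight update
    return out

piano, sr = sf.read("ch1_piano_left.wav")
mic, _ = sf.read("ch4_women_center.wav")
n = min(len(piano), len(mic))
sf.write("ch4_piano_reduced.wav", nlms_cancel(piano[:n], mic[:n]), sr)
[/code]

Since the piano is a two-channel source you'd run it once per direct channel (or use both as references in one longer filter), and 4096 taps at 44.1 kHz is only about 93 ms of room - enough for the direct PA path and early reflections, not the whole reverb tail.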
So, the question is, does anyone here have a recipe for this, or know of a good Scope Platform plug-in? I am pretty sure it was here on Planet Z that I heard of this being done on Scope Platform before. (If you're curious, I did this at Stanford back in the early 90s using a NeXT system in post.)
Thanks in advance,
Dan Wilcken, Transition Films
filmguy24p