any downsides of short latency ?
-
- Posts: 518
- Joined: Tue Jun 23, 2009 10:55 am
any downsides of short latency ?
Since I've been using Scope as a synth environment, I never cared about latency, as long as it was constant.
I used the maximum latency in ULLI.
Now some singers came in, and I'm starting to care about latency.
Are there any disadvantages to setting latency to the lowest level?
Do Scope devices recognize what latency is set?
\\\ *** l 0 v e | X I T E *** ///
Re: any downsides of short latency ?
latency settings do not affect Scope devices, for the most part.
lowered latency means a higher CPU load.
monitor the singer and the DAW playback in the Scope mixer, NOT through the DAW (a good reason NOT to use XTC mode), and the computer's latency will not matter to the singer.
Re: any downsides of short latency ?
6 ms is the time it takes the sound of your drummer's cymbal to reach your ear.
I've been using 6 ms / 256 samples / 44.1k forever.
No advantage to fewer ms, no advantage to a smaller audio buffer, and no need for higher resolution unless it's for a specific recording reason, or for compatibility when connecting with different digital gear @ 48k, which is mostly for film.
- Bud Weiser
- Posts: 2788
- Joined: Tue Sep 14, 2010 5:29 am
- Location: nowhere land
Re: any downsides of short latency ?
Easy rule of thumb for audio: 1ms = 1ft or 30cm
If you really need to be more precise (as in considering significant distances), add about 10% to your distance value for each ms (1ms = 13inches or 33cm).
Think of how that applies to live musicians. 3ms becomes almost nonsensical. 6ms is a very small stage. The percussion or bass sections of an orchestra are more than 20ms away from the conductor, and more than 30ms from each other. Even more for the arena-playing crowd.
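The rule of thumb above (plus its ~10% correction) is just the speed of sound. A throwaway sketch, assuming ~343 m/s at room temperature:

```python
# Rough equivalence between monitoring latency and acoustic distance.
# Assumes speed of sound ~343 m/s, i.e. ~34.3 cm per ms -- close to the
# "1 ms = 30 cm plus about 10%" rule of thumb above.

SPEED_OF_SOUND_M_PER_S = 343.0

def latency_to_distance_cm(latency_ms: float) -> float:
    """Distance a sound wave travels in `latency_ms` milliseconds, in cm."""
    return SPEED_OF_SOUND_M_PER_S * (latency_ms / 1000.0) * 100.0

for ms in (1, 3, 6, 20, 30):
    print(f"{ms:2d} ms  ~= {latency_to_distance_cm(ms):6.1f} cm")
# 6 ms comes out at about 2 m -- the "very small stage" mentioned above.
```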
-
- Posts: 518
- Joined: Tue Jun 23, 2009 10:55 am
Re: any downsides of short latency ?
I tried to connect a second PC with a DAW to my Scope PC.
(I connected via ADAT to an RME card in my DAW PC. (DAW = Samplitude X3))
For a test, I played back a track from my DAW PC to Scope, sent it back to the DAW PC and recorded it.
The interfaces were set to 3 ms (Scope) and 6 ms (RME card). When I compared the tracks in the DAW, the difference was about 30 ms.
Where do these extra 21 ms come from?
Is my setup stupid?
Re: any downsides of short latency ?
the Scope latency isn't even in the equation if you are connecting via ADAT. latency is the processing time for the CPU.
Re: any downsides of short latency ?
If you connect via ADAT or the other Scope hardware ins/outs, the latency of Scope is like the latency of a digital mixer.
It should be around 1 or 2 ms.
The latency of the ULLI settings only refers to the ASIO driver of Scope.
If you set up the ASIO driver of the RME card in Samplitude, you can just adjust the buffer size.
Higher values result in less CPU usage but higher latency.
The result in ms is just an estimate by Samplitude.
To get rid of this problem and get the exact result (for that particular buffer size at the sample rate used), you can measure it and then compensate.
Here is a nice explanation from Ableton:
https://help.ableton.com/hc/en-us/artic ... mpensation
Driver Error Compensation
Live Versions: Live 9.2 and later
Operating System: All
Background
Your audio interface reports a specific latency value to Live. This value is used to offset recording audio and MIDI when the recording track's monitor is set to "Off". However certain audio interfaces may report an inaccurate latency, which will result in recordings which need to be manually aligned in order to sync up correctly. Driver Error Compensation allows Live to compensate automatically for any inaccurate latencies. If a Driver Error Compensation value has been set, then Live will offset the recordings by the specified amount so that they play in sync with the rest of your set.
Important:
Although the "Overall Latency" amount in Live's Audio preferences is recalculated when Driver Error Compensation is adjusted, it does not affect overall latency in Live for playback (only for recording).
Driver Error Compensation is only applied if the monitor on the recording track is set to "off". If monitoring AND recording on a track where the monitor is set to "In" or "Auto", then Driver Error Compensation is not applied.
It's only needed if you have an interface which is not reporting its correct latency to Live.
It's only relevant in situations where you are recording audio or MIDI from an external source.
Check our How to reduce latency article for tips on what to do if you are experiencing high latency in Live.
Which interfaces require a Driver Error Compensation adjustment?
Audio interfaces using their own native Core Audio or ASIO Drivers
Interfaces running in Native mode report accurate latency values, meaning that there should be no need to adjust Driver Error Compensation.
Note: Certain devices offer both Native modes and Class Compliant modes. We recommend using those devices in Native Mode.
Class-compliant audio interfaces
Interfaces running in class compliant mode (which use the built-in driver of the system itself) report latencies inaccurately, therefore Driver Error Compensation should be used.
Built-in Soundcards
Mac and PC built-in soundcards do not report latencies accurately. Not only are they inaccurately reported, but the latency value grows as the buffer size increases, therefore Driver Error Compensation should be used.
How to calculate the correct Driver Error Compensation value
Live has a built-in lesson including a specifically calibrated set which allows you to set Driver Error Compensation. For the lesson you will need a cable and an audio interface with at least one physical input and output. This can be found in the help view:
Help > Help View > Audio I/O > Page 8 of the Lesson, click the link for Driver Error Compensation.
The Driver Error Compensation value can be positive or negative, depending on the specific offset needed. The value is only correct at the buffer size and sample rate used when testing. If either of these change then it needs to be calculated again and adjusted.
Note: If Driver error compensation is set to an extreme amount or used when it doesn't need to be, it can cause issues. If in doubt about whether to adjust Driver Error Compensation or not, it's best to leave it at zero.
Finding out more about Latency
How to reduce Latency
Understanding "Reduced Latency When Monitoring"
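The displayed-latency estimate described above is essentially just buffer size divided by sample rate, which is why measuring and compensating beats trusting the displayed number. A minimal sketch (function name is mine):

```python
# The latency value a DAW displays is usually just buffer_size / sample_rate;
# the real round trip adds converter and driver overhead on top, which is why
# measuring and compensating (as described above) is more reliable.

def buffer_latency_ms(buffer_size: int, sample_rate: int) -> float:
    """Nominal one-way buffer latency in milliseconds."""
    return buffer_size / sample_rate * 1000.0

print(buffer_latency_ms(256, 44100))  # ~5.8 ms, the familiar "6 ms" setting
print(buffer_latency_ms(128, 44100))  # ~2.9 ms, roughly the "3 ms" setting
```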
-/-
Re: any downsides of short latency ?
again, if you monitor the vocal and the playback in the Scope mixer, there is no latency to deal with, even though the DAW has latency. the playback is heard when it is heard and the vocal happens at the same time. when the signal goes to the DAW there is latency, but you don't hear that unless you monitor through the DAW. once the vocal is recorded, it should be in sync with the rest of the playback.
why does this need to be so complicated?
Re: any downsides of short latency ?
Yes! As long as the recorded vocals are compensated correctly by the DAW.
I posted the article to give nebelfürst an answer to where his 21 ms COULD come from.
-/-
Re: any downsides of short latency ?
If I understand the article correctly, both PCI cards report their latency correctly to the DAW.
The ADAT latency is negligible; it's almost exactly half a millisecond for a full roundtrip, 20 samples iirc.
(I measured it once with an ADAT Mixer/Scope combo and got a nice filter effect while monitoring... forgot to switch the source off.)
If you play from the DAW to Scope, you get the ASIO output latency plus the Scope input latency, which is 9 ms if you monitor through Scope.
If you monitor 'behind' the DAW, you have to add the same amount for the return path, giving 18 ms plus X for output conversion. If the DAW resides on the RME PC, add another 6 ms for the ASIO output path, resulting in 24 ms.
For latency-free monitoring, use VDAT in Scope with a copy of the backing track.
Then you're hardware-only, without even ULLI.
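The additive path latencies described above can be sketched as follows (the 6 ms and 3 ms figures are the nominal driver settings from this thread; the "plus X" converter overhead is left out):

```python
# Adding up the path latencies for a DAW -> Scope -> DAW loop, using the
# nominal buffer settings from this thread: 6 ms RME ASIO, 3 ms Scope ULLI.
# Converter and ADAT overhead ("plus X") is not included.

def roundtrip_ms(daw_asio_out: float, scope_in: float,
                 scope_out: float, daw_asio_in: float) -> float:
    """Nominal DAW -> Scope -> DAW roundtrip from the driver latencies."""
    return daw_asio_out + scope_in + scope_out + daw_asio_in

print(roundtrip_ms(6, 3, 3, 6))  # 18 ms nominal
```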
Re: any downsides of short latency ?
it is still latency free if you monitor playback and vocals in Scope.
if you do that, latency does not matter.
if you do that, latency does not matter.
Re: any downsides of short latency ?
I use an RME RayDAT on Mac and I have to adjust the compensation by a few ms. It's not much...
-/-
Re: any downsides of short latency ?
Excellent! A very good trick!
So, from one ear to the other is around 15 cm = 0.5 ms! Brilliant!
Re: any downsides of short latency ?
Hee hee, and this is how your brain can tell which direction a sound is coming from!
-
- Posts: 518
- Joined: Tue Jun 23, 2009 10:55 am
Re: any downsides of short latency ?
Thanks for your help.
After playing around, my final result for the latency of a roundtrip samX3(RME) -> Scope -> samX3(RME) is 16 ms.
That seems close to the minimum possible, given the sum of the ASIO drivers' latencies: 6+3+3+6 ms = 18 ms.
I found the parameter to compensate these 16 ms in samX3, so I can achieve perfectly synchronous playback of synths and the recorded singer.
Gary is also right that live monitoring of the singer is without latency.
In my case, there is just one singer, who has to sing several tracks.
My singer tends to synchronise herself to the track of her recorded voice, not the instruments.
Unfortunately, human singers lack a connector for MIDI clock injection.
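As an aside, one way to arrive at such a measured roundtrip figure, rather than trusting reported values, is to play a click through the loop, record it back, and find the lag by cross-correlation. A minimal sketch with NumPy; the simulated ~16 ms delay stands in for a real loopback recording:

```python
# Measure a loopback offset by cross-correlating the played and recorded audio.
# `playback` and `recording` stand in for the mono float tracks you would
# export from and record back into the DAW; here the delay is simulated.
import numpy as np

def measure_offset_samples(playback: np.ndarray, recording: np.ndarray) -> int:
    """Lag (in samples) of `recording` relative to `playback`."""
    corr = np.correlate(recording, playback, mode="full")
    return int(np.argmax(corr)) - (len(playback) - 1)

sr = 44100
playback = np.zeros(sr // 10)
playback[0] = 1.0                                    # a single click
delay = int(0.016 * sr)                              # simulate a ~16 ms loop
recording = np.concatenate([np.zeros(delay), playback])[:len(playback)]

offset = measure_offset_samples(playback, recording)
print(f"{offset} samples = {offset / sr * 1000:.1f} ms")
```

The measured sample offset (at the buffer size and sample rate used for the test) is exactly the value to enter as recording compensation in the DAW.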
Re: any downsides of short latency ?
This is normal and expected, which is why GaryB would bring this up. In my preferred application for DAW usage, Logic Pro, we have had the ability to input this offset in Audio preferences for many years, so that this 'alignment' between 'what you hear' and what is recorded is automatically compensated for. There are numerous factors with plugins and native usage that can shift this value, so it's always useful to check and ensure that your master/output stack doesn't alter this in some unpredictable way, and so on.
Re: any downsides of short latency ?
if you monitor in Scope, latency does not matter. the recorded sound(or vstis) have latency, but the live performer just hears playback whenever it arrives. the live performer performs, the live performer hears him/herself at the same time as the playback, perfectly in sync. the recording is made and the performance will be right in sync in the new playback.
the problem is the insane idea that just because one can, that one must be constructing a full mix while still constructing the composition. and that is where latency matters, in the mix when going back and forth between platforms and worlds.
for a live performance of say a vocal, a stereo rough mix is sufficient. you can add fx to the monitor, that you can record or not. so, from the DAW, only a stereo track is needed. in nebelfuerst's example, the singer sings, it is recorded. then what was recorded goes out of the DAW and into the Scope mixer. i use the STM2448 for this because it has the right features.
the vocal signal goes to a channel in the STM. it goes out the direct out or a record bus into the DAW (via ASIO or ADAT, or ?),
it also goes to the control room output via the master bus, which feeds my monitors.
it also goes out the monitor output, which feeds the singer's headphones.
if i need fx for the singer, i route a reverb or delay or distortion or pitchshifter, or all of those or more, from aux1 to an extra channel on the STM.
that goes to the monitor via the channel's monitor control.
it can also go to the control room, if i want to hear it, by enabling or disabling the master bus on that channel.
it can also be recorded via one of the record busses or the direct out.
playback goes to a stereo channel.
it can go any of the places that the other signals can go, though i doubt you would want to record it again.
this is the simplest way to run a coherent session. it's the way it's always been done when people are spending a heck of a lot of money for the session.
Re: any downsides of short latency ?
I concur about Scope. Please note again that the issues discussed here are with your DAW, possibly native effects, and the delay as it is *recorded* into the timeline versus *what the performer(s)* hear, which nebelfuerst does seem to have grasped.
GaryB knows his stuff too!
Re: any downsides of short latency ?
no need to have vst fx in the record chain...none.
those go on after.
there are plenty of fx in the real world and in Scope to vibe a vibe. i would record any live fx separately. there are plenty of tracks in a modern DAW.
everyone here knows their stuff.