USB is a specification for a serial protocol (3 signalling speeds currently) and connector type. While it does specify the types of packets and how the packets are defined, midi rides 'on top' of this protocol. Midi runs at a datarate of 31.25 kBaud, significantly lower than any USB speed (hence this topic). Most current motherboard chipsets actually provide dedicated controllers for each usb port (or for pairs of ports on some), so the bandwidth of the ports is not actually shared, meaning you could carry quite a bit more data than current usb<>midi interfaces use. Any 'usb issues' I've experienced over the years were due to the 10-device limit for midi drivers in XP, a ground issue with the midi device or cabling (especially if the connector is grounded), or simply plugging the device into the wrong usb port (usb devices are bound to the port the driver was installed for; plugging into a new port is equivalent to moving a PCI card as far as the computer is concerned).
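For a rough sense of the gap in raw rates (a back-of-the-envelope sketch; the figures ignore protocol overhead entirely):

MIDI_BAUD = 31_250           # bits per second on a DIN MIDI cable
USB_FULL_SPEED = 12_000_000  # bits per second, USB 1.1 "Full Speed"

print(f"USB Full Speed carries ~{USB_FULL_SPEED / MIDI_BAUD:.0f}x the raw bit rate of one MIDI cable")
# -> roughly 384 MIDI cables' worth of bits, before any USB protocol overhead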
The Elektron TM-1 was a response (iirc) to the fact that the Elektron Machinedrum had some serious performance issues with external input in its first iterations. It seemed like there was some processing overhead related to each midi note/etc, and the issues inherent in midi timing in general were cumulative with the MD's own internal latencies (input processing & output), resulting in very noticeable timing issues even compared to most other midi gear (which matters since the MD is meant for percussive material). I kept up with it for a while on their forums because I had vague interests in getting one. I forget the full details now, but more information can be found on their forums. This might have applied to the MD trying to generate or receive sync as well as its responsiveness to control signals.
OSC is a completely independent specification, which can be converted to midi in OSC's software layer. It's a very nice specification imo, and it's unfortunate that it's slow to catch on as it solves several problems.
The current midi spec only uses 3 pins of the 5-pin DIN connector: pins 4 & 5 carry the signal and pin 2 is used for ground/shield. So in theory you could specify a use for pins 1 & 3 in a new specification, one that would perhaps be backwards compatible with gear that can only 'see' the center pins.
Music gear is so advanced, why do we use this?
Re: Music gear is so advanced, why do we use this?
Currently I am using a MOTU Fastlane USB midi interface plus the midi port on my Echo Audio Gina 3G PCI audio card, and the MOTU USB one has a bad lag. I don't think USB is good for anything other than transferring data files, not real-time work.
Re: Music gear is so advanced, why do we use this?
When you say 'bad lag' how do you mean? Give a specific usage or timing issue to clarify...
Re: Music gear is so advanced, why do we use this?
I think what you mean is that usb has no timebase stamp in its protocol? If so then I agree, that's correct, and even timestamping midi events to deliver to the usb interface doesn't guarantee the cpu will handle the pio usb port in a timely fashion. There *is* an interrupt transfer mode that supports timestamping, but it doesn't guarantee that other congestion won't be a factor, and it needs to be supported at both ends (by the host and the usb device; explained in excruciating detail below...)
Watch out, another lengthy post:
In practice though the way the device communicates is entirely up to the driver stack and the software application. There IS a Universal Serial Bus Device Class Definition for MIDI Devices, and it specifies 2 methods for communicating with devices. The first is MIDI MUX, intended for bulk transfers with MIDI interfaces, i.e. grouping data sent to multiple 'channels' or 'ports' on the same midi device into single 32-bit usb messages. To use MIDI MUX the host application must be able to negotiate it.
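To illustrate what one of those 32-bit messages looks like (a sketch based on my reading of the class definition, so take the details as an assumption rather than gospel):

# Sketch: packing a Note On into a 4-byte USB-MIDI event, per my reading of the
# USB Device Class Definition for MIDI Devices.
# Byte 0: cable/port number (high nibble) + code index number (low nibble);
# bytes 1-3: the ordinary MIDI bytes, zero-padded if the message is shorter.

def usb_midi_event(cable: int, midi_bytes: bytes) -> bytes:
    code_index = midi_bytes[0] >> 4        # for channel voice messages this matches the status nibble (0x9 = Note On)
    header = ((cable & 0x0F) << 4) | code_index
    return bytes([header]) + midi_bytes.ljust(3, b"\x00")

# Note On, channel 1, middle C, velocity 100, aimed at 'port' 2 of a multiport interface
print(usb_midi_event(2, bytes([0x90, 0x3C, 0x64])).hex(" "))   # -> 29 90 3c 64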
Instead of the bulk transfer mode (MIDI MUX), the interrupt transfer mode seems to be more widely used, at least from what I recall reading a few years ago. Although it doesn't support high data rates (still plenty for MIDI), it offers better timing control of the data transfer (whereas the bulk transfer mode is specified for data which is 'not time critical'...). This latter mode is the one some interfaces market as 'timestamped midi events', but you'll notice there is always a bullet point specifying that it only works with a certain software package, since the host still needs to be able to negotiate it (meaning the hardware & software are usually from the same maker, i.e. Steinberg interfaces for Cubase/Nuendo, MOTU interfaces require DP, Emagic interfaces would timestamp only with Logic, etc.)
Also note that data transfers over USB can 'steal' system cycles due to the amount of time it takes to service USB interrupts, and since these are PIO-mode devices (a buffer serviced by the cpu) they can also suffer from waiting to be serviced while the system is busy with other things (similar to pci latency issues). On a modern system this should still be relatively negligible, as Intel & AMD have gone to great pains with their implementations to address these issues by providing literally a dozen or more ports each hung off their own controller (instead of sharing the pci bus), and both OSX & Windows have vastly improved their usb support compared to the days of Win98 & the BX chipset.
Yet another consideration, and one that is affected by the USB transmission mode(s) mentioned above, is the fact that these midi events must interleave into the USB datastream.
Consider though that the 'default' signalling rate for USB 1.1 is 1000 Hz for most "Full Speed" (12 Mbit/s) devices. That means sending 1 full packet 1000 times per second, i.e. a delay between messages of 1 ms (latency).
Now each USB packet is preceded by an 8-bit sync sequence and ends with a 2-bit end-of-packet marker, and USB packets come in 3 types: data, handshake and token. Tokens are control signals sent by the host to address the connected usb device and move it between various states. Handshake packets are a single byte and are usually sent in response to data packets. Also, every millisecond a "Full Speed" host transmits a "start of frame" token; this is where the '1 full packet per millisecond' comes in. USB MIDI MUX and timestamped packets are sent in data packets interleaved with these other signals, and data packets *also* include a CRC at the end (which eats additional bandwidth). Finally, usb itself has a certain tolerance (amount of jitter) that is allowed for: clock tolerance is 12.000 Mbit/s ±2500 ppm at "Full Speed" and 1.50 Mbit/s ±15000 ppm at "Low Speed".
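A rough worked example of how the frame interval and per-packet overhead stack up (the overhead byte count below is my own ballpark assumption for illustration, not a spec figure):

# Back-of-the-envelope USB 1.1 Full Speed framing math (approximate figures).

FRAME_INTERVAL_MS = 1.0      # a "start of frame" token every millisecond

# A 3-byte MIDI message becomes a 4-byte USB-MIDI event; on the wire it also drags
# along sync, PID, CRC, handshake etc. The overhead figure below is an assumption,
# since the real number depends on the transfer type negotiated.
payload_bytes = 4
assumed_overhead_bytes = 9

efficiency = payload_bytes / (payload_bytes + assumed_overhead_bytes)
print(f"Worst-case wait for the next frame: {FRAME_INTERVAL_MS} ms")
print(f"Payload efficiency of one tiny packet: ~{efficiency:.0%}")   # ~31% under these assumptions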
So the final timing (I believe) for a single usb midi device message (which may combine several independent midi events into a single packet) is affected by all of the overhead mentioned. In practice that may mean that 3 midi events are clumped together into a single data packet and handed off to the midi device being addressed. But handling the midi output at the other end may introduce additional overhead (interleaving or wait states). Many UARTs (the chips that buffer and handle serial transmission) introduce additional wait states of their own, and may not even run at exactly the full midi spec; some run faster or slower, meaning that the actual midi transmission rate is going to be reduced by the asynchronous timing.
At the output for each midi hardware 'port', you *still* have congestion when multiple events are trying to get 'out the door' at the same time, since midi is serial and you can only transmit/receive 1 event at a time. MIDI runs at 31,250 bits per second (bps), and because there are 10 bits in every MIDI "byte" (1 start bit, 8 data bits, and 1 stop bit), the actual number of bytes which can be transmitted per second is 3,125. That converts to a delay between each byte of (1/3125) .00032 seconds, or .32 ms (milliseconds), assuming a midi output/input device that runs at the exact same rate as the midi port(s) (this is 'synchronous' transmission under ideal conditions; emphasis added because there are many places where data interleaving can cause asynchronous behaviour to reduce the final *actual* datarate, as touched on above). This also assumes that upstream from this (all the blather posted above) nothing has affected the timing of a given message (ie, actual transmit time is *after* any above overhead).
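Working that per-byte arithmetic through (ideal, synchronous conditions):

# DIN MIDI wire timing under ideal conditions.
BITS_PER_SECOND = 31250
BITS_PER_BYTE = 10                      # 1 start + 8 data + 1 stop

bytes_per_second = BITS_PER_SECOND / BITS_PER_BYTE     # 3125
ms_per_byte = 1000 / bytes_per_second                  # 0.32 ms

note_on_ms = 3 * ms_per_byte            # a Note On is status + note + velocity = 3 bytes
chord_8_ms = 8 * note_on_ms             # 8 'simultaneous' notes have to queue up serially

print(f"{bytes_per_second:.0f} bytes/s, {ms_per_byte:.2f} ms per byte")
print(f"One Note On: ~{note_on_ms:.2f} ms; an 8-note chord: ~{chord_8_ms:.2f} ms")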
Now, many sequencers used to use Running Status (allowed by the MIDI spec) to optimize the flow of data based on the current midi resolution (ppq), and allowed note & channel priorities so that important events were handled (output) before less important control data (etc.), but I honestly have no idea how this is affected by sequencers that now use audio rate as their timebase (Cubase 4/5 etc). I know Logic and Digital Performer used to make a great deal of press around this (being able to intelligently 'thin' and prioritize the datastream), but I don't even know where Logic stands on this now... (I'm pretty sure they made a change to using audio rate as the base clock in v7.)
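For anyone unfamiliar with Running Status, a quick sketch of the byte savings (simplified; real sequencers layer much more prioritizing logic on top):

# Running Status: once a status byte (e.g. 0x90 = Note On, channel 1) has been sent,
# later messages of the same type/channel can omit it and send only data bytes.
notes = [(0x3C, 0x64), (0x40, 0x64), (0x43, 0x64)]    # a three-note chord

without_rs = []
for note, vel in notes:
    without_rs += [0x90, note, vel]                   # status byte repeated every time

with_rs = [0x90]
for note, vel in notes:
    with_rs += [note, vel]                            # status byte sent once

print(len(without_rs), "bytes without Running Status,", len(with_rs), "with")
# 9 vs 7 bytes; at 0.32 ms per byte that's ~0.64 ms saved on a single chord.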
Finally, you'll notice that I give no real summary here. Every point is affected by too many variables to give a single overall answer. What USB transmission mode are the usb midi device's driver and your host software using--did they handshake the lower-bandwidth but more timing-accurate interrupt transfer mode, or are they using MIDI MUX? Are the messages in the midi datastream being intelligently optimized by your host sequencer, or just bulk dumped to the output (software) interface? Is the host Operating System's USB handling introducing noticeable delay? (This is probably one of the single biggest factors in timing issues.) Is the USB port waiting to be serviced for longer than normal due to high bus utilization in the host computer or a shared interrupt? Does the output port connected to the actual midi device achieve the full midi datarate, or some lower rate dictated by the UART and device on the other end?
*All* of these effects are cumulative, and the way the data interleaves as it moves downstream affects not only latency but the variance in that latency (ie, midi "jitter"). Now the GOOD NEWS is that you can measure and correct for these effects in several ways, once you understand them. This post is so long already, I'll hold off on more for now.
Also if I'm mistaken on some single point please feel free to clarify...
Re: Music gear is so advanced, why do we use this?
Sorry, maybe that was overkill. Had just woken up & drunk REALLY strong tea. The only reason I've got even a dim idea of the way this cascades across the technologies in a pc is because, as a kid, I spent years trying to mate the PCs my father would hand down to me with gear they were largely incompatible with. Had I only had an Atari... It still wouldn't have removed the requirement to use channel priorities and hand-shift note & control data around to keep the timing of important elements solid. I guess that's the flipside of theorizing about advances in our tech; a good balance for any limitation is knowing ways to work around it or with it.
Anyway I'm still curious what sort of latencies braincell is referring to (assuming he cares to share)?
I've got 3 machines here with midi interfaces attached, plus 3 soundcards. Across the board (usb interfaces and the soundcards' dedicated midi i/o) I notice no more latency than ~7-12 ms (a spread of roughly 5 ms of 'jitter'). This is measured by having individual notes trigger a single event, compared against an audio track which has a quantized transient event as a 'click'; the two are summed and compared. Timings are actually considerably better on my primary DAW, which is new as of last year, but even the worst system (a laptop) is fine under a reasonable cpu load.
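For what it's worth, here's roughly how that kind of test boils down to numbers (the onset times below are hypothetical placeholders; in practice they come from the recorded takes):

# Sketch: compare MIDI-triggered onsets against the quantized audio 'click' track.
import statistics

click_onsets = [0.0, 500.0, 1000.0, 1500.0, 2000.0]   # quantized reference, in ms (hypothetical)
midi_onsets  = [8.2, 511.5, 1007.0, 1509.8, 2006.3]   # what the MIDI-triggered device produced (hypothetical)

offsets = [m - c for m, c in zip(midi_onsets, click_onsets)]
print(f"Average latency: {statistics.mean(offsets):.1f} ms")
print(f"Jitter (spread): {max(offsets) - min(offsets):.1f} ms")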
I certainly wouldn't try to layer multiple drum hits via separate midi notes in the same way that I can in a sampler (triggered off the same note) or software, but otherwise I think latency is fine when usb is involved. It's the resolution & bandwidth of midi that's the real hangup these days, I would think. Having realtime control over everything at audiorate resolution (as per Neutron's idea) in all hardware devices as well... that would be impressive, especially if it allowed for high-bandwidth use without 'shifting things around' the way midi's serial nature requires now.