I think what you mean is that USB has no timebase stamp in its protocol? If so then I agree, that's correct, and even timestamping MIDI events before delivering them to the USB interface doesn't guarantee the CPU will handle the PIO USB port in a timely fashion. There *is* an interrupt transfer mode that supports timestamping, but it doesn't guarantee that other congestion won't be a factor, and it needs to be supported at both ends (by the host and the USB device, explained in excruciating detail following...)
Watch out, another lengthy post:
In practice though, the way the device communicates is entirely up to the driver stack and the software application. There IS a Universal Serial Bus Device Class Definition for MIDI Devices, and it specifies two methods for communicating with devices. The first is MIDI MUX, intended for bulk transfer to MIDI interfaces, i.e. to group data sent to multiple 'channels' or 'ports' on the same MIDI device into single 32-bit USB messages. To use MIDI MUX the host application must be able to negotiate it.
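(If I'm remembering the class spec right, each of those 32-bit event packets carries the 'cable'/port number in the high nibble of the first byte and a 'code index' describing the event type in the low nibble. A quick Python sketch of the packing, just for illustration:)

```python
def usb_midi_packet(cable, midi_bytes):
    """Pack one MIDI event into a 4-byte USB-MIDI event packet.

    Byte 0: cable number (high nibble) + code index number (low nibble).
    Bytes 1-3: the raw MIDI message, zero-padded if shorter than 3 bytes.
    For channel voice messages the code index matches the high nibble
    of the status byte (e.g. 0x9 for Note On).
    """
    cin = midi_bytes[0] >> 4                      # code index for channel messages
    padded = list(midi_bytes) + [0] * (3 - len(midi_bytes))
    return bytes([(cable << 4) | cin] + padded)

# Note On, channel 1, middle C, velocity 100, sent to cable/port 0:
print(usb_midi_packet(0, [0x90, 0x3C, 0x64]).hex())  # -> '09903c64'
```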
Instead of the bulk transfer mode (MIDI MUX), the interrupt transfer mode seems to be more widely used, at least from what I recall reading a few years ago. Although it doesn't support high data rates (still more than sufficient for MIDI), it offers better timing control of the data transfer (whereas the bulk transfer mode is specified for data which is 'not time critical'...). This latter mode is the one some interfaces market as 'timestamped MIDI events', but you'll notice there's always a bullet point specifying that it works only with a certain software package, since the host still needs to be able to negotiate it (meaning the hardware and software are usually from the same maker: i.e. Steinberg interfaces for Cubase/Nuendo, MOTU interfaces with DP, Emagic interfaces would timestamp only with Logic, etc.)
Also note that data transfers over USB can 'steal' system cycles due to the amount of time it takes to service USB interrupts, and since these are PIO-mode devices (a buffer served by the CPU) they can also suffer from waiting to be serviced while the system is busy with other things (similar to PCI latency issues). On a modern system this should still be relatively negligible, as Intel & AMD have gone to great pains to address these issues by providing literally a dozen or more ports, each hung off its own controller (instead of sharing the PCI bus), and both OSX & Windows have vastly improved their USB support compared to the days of Win98 & the BX chipset.
Yet another consideration, and one that is affected by the USB transmission mode(s) mentioned above, is the fact that these MIDI events must interleave into the USB datastream.
Consider though that the 'default' signalling rate for USB 1.1 is 1000 Hz for most "Full Speed" (12 Mbit/s) devices; the host starts a new frame every millisecond. That means an endpoint polled once per frame sends 1 full packet 1000 times per second, for a delay between messages of up to 1 ms (latency).
Now each USB packet is preceded by an 8-bit sync sequence and ends with a 2-bit end-of-packet marker, and USB packets come in 3 types: data, handshake and token. Tokens are control signals sent by the host to control the attached USB device, changing it between various states. Handshake packets are a single byte, usually sent in response to data packets (the acknowledgment). Also, every millisecond the host transmits a "start of frame" token to "Full Speed" devices; this is where the '1 full packet 1000 times per second' comes in. USB MIDI MUX and timestamped packets will be sent in data packets interleaved with these other signals, and data packets *also* include a CRC at the end (which eats additional bandwidth). Finally, USB itself has a certain tolerance (amount of jitter) that is allowed for: clock tolerance is 12.000 Mbit/s ±2500 ppm at "Full Speed" and 1.50 Mbit/s ±15000 ppm at "Low Speed".
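To put rough numbers on that overhead, here's a quick back-of-the-envelope sketch in Python (it ignores bit stuffing and inter-packet gaps, so treat it as a lower bound):

```python
FS_BITRATE = 12_000_000   # "Full Speed" USB 1.1, bits per second

def data_packet_bits(payload_bytes):
    """Approximate on-the-wire size of one Full Speed data packet:
    8-bit sync + 8-bit packet ID + payload + 16-bit CRC + 2-bit EOP.
    (Ignores bit stuffing, which can add a few percent on top.)
    """
    return 8 + 8 + payload_bytes * 8 + 16 + 2

payload = 4                               # one 32-bit USB-MIDI event
bits = data_packet_bits(payload)
print(f"{bits} bits on the wire for {payload} payload bytes")
print(f"transmit time: {bits / FS_BITRATE * 1e6:.1f} microseconds")
# ~5.5 us of actual wire time; the 1 ms frame interval is what
# dominates, so an event that just misses a frame waits up to ~1 ms.
```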
So the final timing (I believe) for a single USB MIDI device message (which may combine several independent MIDI events into a single packet) is affected by all of the overhead mentioned. In practice that may mean 3 MIDI events are clumped together into a single data packet and handed off to the MIDI device. But handling the MIDI output at the other end may introduce additional overhead (interleaving or wait states). Many UARTs (the buffers used in serial data transmission) introduce additional wait states, and may not even run at exactly the full MIDI spec rate; some run faster or slower, meaning the actual MIDI transmission rate is going to be reduced by the asynchronous timing.
At the output for each MIDI hardware 'port', you *still* have congestion when multiple events are trying to get 'out the door' at the same time, since MIDI is serial and you can only transmit/receive 1 event at a time. Since MIDI runs at 31,250 bits per second (bps), and because there are 10 bits in every MIDI "byte" (consisting of 1 start bit, 8 data bits, and 1 stop bit), the actual number of bytes which can be transmitted per second is 3,125. That converts to a delay between each byte of (1/3125) .00032 seconds, or .32 ms (milliseconds), assuming a MIDI output/input device that runs at the exact same rate as the MIDI port(s) (this is 'synchronous' transmission under ideal conditions; emphasis added because there are many places where data interleaving can cause asynchronous behaviour that reduces the final *actual* data rate, as explained above). This also assumes that everything upstream (all the blather posted above) hasn't affected the timing of a given message (i.e., actual transmit time is *after* any of the above overhead).
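If you want to play with those numbers yourself, the arithmetic is simple enough. A quick sketch using the figures above (the 6-note chord is just my own illustration):

```python
MIDI_BITRATE = 31_250     # bits per second, per the MIDI 1.0 spec
BITS_PER_BYTE = 10        # 1 start + 8 data + 1 stop

bytes_per_sec = MIDI_BITRATE / BITS_PER_BYTE     # 3125
ms_per_byte = 1000 / bytes_per_sec               # 0.32 ms

print(f"{ms_per_byte:.2f} ms per MIDI byte on the wire")

# A Note On is 3 bytes (status, note, velocity), so a 6-note chord
# sent without Running Status is 18 bytes back to back:
chord_bytes = 6 * 3
print(f"6-note chord takes ~{chord_bytes * ms_per_byte:.2f} ms to clear the port")
# ~5.76 ms from first byte to last, before any of the USB overhead.
```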
Now, many sequencers used to use Running Status (allowed by the MIDI spec) to optimize the flow of data based on the current MIDI resolution (ppq), and allowed note & channel priorities so that important events were handled (output) before less important control data (etc.), but I honestly have no idea how this is affected by sequencers that now use audio rate as their timebase (Cubase 4/5 etc.). I know Logic and Digital Performer used to make a great deal of press around this (being able to intelligently 'thin' and prioritize the datastream), but I don't even know where Logic stands on this now... (I'm pretty sure they changed to using audio rate as the base clock in v7.)
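For anyone unfamiliar with Running Status: the MIDI spec lets consecutive messages that share the same status byte omit it after the first. A small illustrative sketch (my own toy encoder, not any sequencer's actual implementation):

```python
def running_status_encode(messages):
    """Drop repeated status bytes from consecutive channel messages.

    MIDI's Running Status rule: if a message has the same status byte
    as the previous one, the status byte may be omitted and only the
    data bytes sent.
    """
    out, last_status = [], None
    for status, *data in messages:
        if status != last_status:
            out.append(status)
            last_status = status
        out.extend(data)
    return bytes(out)

# Three Note Ons on channel 1: 9 bytes normally, 7 with Running Status.
notes = [(0x90, 0x3C, 0x64), (0x90, 0x40, 0x64), (0x90, 0x43, 0x64)]
print(len(running_status_encode(notes)), "bytes instead of", 3 * len(notes[0]))
```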
Finally, you'll notice that I give no real summary here. Every point is affected by too many variables to give a single overall answer. What USB transmission mode are the USB MIDI device's driver and your host software using? Did they handshake the lower-bandwidth but more timing-accurate interrupt transfer mode, or are they using MIDI MUX? Are the messages in the MIDI datastream being intelligently optimized by your host sequencer, or just bulk-dumped to the output (software) interface? Is the host operating system's USB handling introducing noticeable delay? (This is probably one of the biggest single factors in timing issues.) Is the USB port waiting to be serviced for longer than normal due to high bus utilization in the host computer or a shared interrupt? Does the output port connected to the actual MIDI device achieve the full MIDI data rate, or some lower rate set by the UART and the device on the other end?
*All* of these effects are cumulative, and the way the data interleaves as it moves downstream cumulatively affects not only latency but the variance in that latency (i.e., MIDI "jitter"). Now the GOOD NEWS is that you can measure and correct for these effects in several ways, once you understand them. This post is so long already that I'll hold off on more for now.
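Just to pin down what I mean by jitter quantitatively, here's a tiny sketch (the arrival times are made-up numbers, purely for illustration):

```python
# Given measured arrival times (ms) of a metronomic MIDI clock that
# *should* tick every 20 ms, jitter is the spread of the deltas.
arrivals = [0.0, 20.3, 40.1, 60.9, 80.2, 100.6]   # made-up measurements

deltas = [b - a for a, b in zip(arrivals, arrivals[1:])]
mean = sum(deltas) / len(deltas)
peak_to_peak = max(deltas) - min(deltas)

print(f"mean interval {mean:.2f} ms, jitter (peak-to-peak) {peak_to_peak:.2f} ms")
```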
Also, if I'm mistaken on any single point, please feel free to clarify...