Or it's simply because 4 cards is pushing the hardware beyond what was feasible when this technology was current. Hence XITE... which of course is also limited in some ways due to the DSP architecture and the hardware ecosystem around it.
It's great that Extenders can be used, imho. However if you are hitting limits with 3 cards, keep in mind that 3 full cards (14/15dsp) and Xite are also worthy upgrade paths that can keep your existing workflows mostly intact.
valis wrote: Mon Sep 16, 2019 4:12 pm
Or it's simply because 4 cards is pushing the hardware beyond what was feasible when this technology was current.
Afaik PCI buses are grouped in '3-slot units'; sometimes 2 buses were fully exposed (a 6-slot design), but later a single external bus was more common, hence the 3-card maximum.
There are actually typically 3 PCI 'bus lines', and oddly enough, 4 interrupt lines. Since PCI is parallel (as were interrupt controllers before PCIe), that's quite a few traces to run on the board without sharing and signaling issues cropping up. Later implementations changed this to be closer to PCIe's method (message signaling instead of level-triggering) and of course different chipsets had ENTIRELY different PCI+interrupt implementations. This is why we have had to road test platform by platform.
It's also why in 2001 I bought my P4-DC6+ dual Xeon system (early P4 era): it specifically connected the southbridge to the northbridge via PCI-X, which provided not only 4 PCI-33 (32-bit) buses but also 2 PCI-64 (64-bit) buses, in addition to using those signaling paths to connect both chipsets. There were some edge cases where a SCSI RAID array on the 64-bit bus could swamp the whole north/south connection (the board had an onboard Adaptec SCSI RAID implementation available), but I never bought the RAID-enabling daughterboard, and once I resolved IRQ conflicts I never saw PCI bus overflows in unexpected situations. Of course that was during the 'peak' of the PCI bus era, and it's only gotten more & more confusing since then. Or, as seems to be the case with these PCIe implementations, perhaps simpler.
valis wrote: Mon Sep 16, 2019 4:12 pm
Or it's simply because 4 cards is pushing the hardware beyond what was feasible when this technology was current. Hence XITE... which of course is also limited in some ways due to the DSP architecture and the hardware ecosystem around it.
It's great that Extenders can be used, imho. However if you are hitting limits with 3 cards, keep in mind that 3 full cards (14/15dsp) and Xite are also worthy upgrade paths that can keep your existing workflows mostly intact.
Yes I know it's worked before, even GaryB has mentioned S|C ran 4 at some point as a proof of concept. The problem again though is that there is no standard for PCI implementation, and of course now that it's relegated to legacy bus support it's even more nonstandard and varies by every chipset that implements it. I don't envy Gary's support role, but he does a great job.
He does. I'm about to hassle my supplier to see if they have a motherboard (after I've trawled here first to see which ones can run XITE) that can support an 8-core Intel i9, 2-3 PCI slots, PCIe slots and a TB3 header. Fancy my chances?
What gen PCIe? This makes a difference in bandwidth.
32-bit 33 MHz PCI (Scope cards) is 1067 Mbit/s, or 133.33 MB/s
A PCIe 3.0 x1 interface limits read/write speed to 985 MB/s (no problem imho)
A PCIe 2.0 x1 interface limits read/write speed to 500 MB/s (3 cards fit within peak bandwidth, but real-world usage may approach that peak)
A PCIe 1.0 x1 interface limits read/write speed to 250 MB/s (2 cards could theoretically saturate it)
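The arithmetic behind those figures can be sketched in a few lines. This is just a back-of-envelope illustration (the names and the per-lane figures are my own summary, not anything from the Scope software): divide each PCIe generation's per-lane throughput by the peak bandwidth of one 32-bit/33 MHz PCI card to see how many cards fit at full tilt.

```python
# Back-of-envelope: how many 32-bit/33 MHz PCI cards fit behind one PCIe lane?
# (Hypothetical helper names; per-lane figures are the usual post-encoding
# throughput numbers: 8b/10b for gens 1-2, 128b/130b for gen 3.)

PCI_CARD_MBPS = 32 * 33.33 / 8  # 32-bit bus at 33.33 MHz ~= 133.3 MB/s peak

PCIE_X1_MBPS = {
    "1.0": 250,  # 2.5 GT/s, 8b/10b encoding
    "2.0": 500,  # 5.0 GT/s, 8b/10b encoding
    "3.0": 985,  # 8.0 GT/s, 128b/130b encoding
}

for gen, mbps in PCIE_X1_MBPS.items():
    cards = int(mbps // PCI_CARD_MBPS)
    print(f"PCIe {gen} x1: {mbps} MB/s -> room for {cards} card(s) at peak")
```

Which matches the list above: 1 card at gen 1 speeds with a second pushing past the limit, 3 at gen 2 with little headroom left, and plenty of slack at gen 3. Real-world throughput over a PCI-to-PCIe bridge will of course be lower than these peak numbers.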