XITE-1 PC CONFIGURATION REFERENCE

The Sonic Core XITE hardware platform for Scope

Moderators: valis, garyb

User avatar
Sounddesigner
Posts: 1087
Joined: Sat Jun 02, 2007 11:06 pm

Re: XITE-1 PC CONFIGURATION REFERENCE

Post by Sounddesigner »

Glad to see Intel put a focus on higher clock speeds with the 15th-generation i7/i5s. I see the newer i5 Core Ultra 245K has a base speed of 4.2 GHz, which is about a 700 MHz increase over last generation for the P-cores. And the E-cores are now at 3.6 GHz, which is faster than last generation's P- and E-cores. The i5 has 14 cores, making it far superior in every way to my older 12th-generation chip, which has only 10 cores with P-cores at 3.7 GHz.

The new i7 Core Ultra 265 has 20 cores, with P-cores at a base speed of 3.9 GHz vs. last generation's 3.4 GHz.

Intel hasn't done so well business-wise recently. Part of the problem, I think, is the reduction of clock speed in the past to focus on more cores. That may have been needed to lay the groundwork for both the core counts and the clock speeds we're now starting to get, but I still think it hurt them a little. Plus they weren't prepared for current and future directions of the market with graphics cards, ARM processors, etc. The Apple M4, I've heard, performs better at single-threaded tasks, which gets back to Intel's problem of raw per-core power at base clock speed. I do believe Intel CPUs are generally more than enough power for now, but a progressive speed increase over time is still needed, as people gravitate toward raw juggernaut power; hence the popularity of the Apple M4. Plus there have been some recent plugin releases that can challenge the best CPUs and prevent large mixes when not managed properly, so developers WILL DEFINITELY use more clock speed if given it.

I hope Intel finally gets us over the 5 GHz mark soon, and gets its business issues sorted out. It looks like the federal government is buying 10% of Intel to help out the struggling company.
pranza
Posts: 161
Joined: Sun Dec 14, 2008 1:22 pm
Location: Vilnius, Lithuania
Contact:

Re: XITE-1 PC CONFIGURATION REFERENCE

Post by pranza »

I think they were at 6 GHz with the 14th gen already, the 14900K...

and the issue now is not the GHz but how to cool it.. that is, even at 5 GHz it's not easy to cool.
as for Core Ultra - they are somewhat inferior due to being chiplet designs - multiple small dies instead of one monolithic die, which means slower communication between them
cortone
Posts: 283
Joined: Thu Jul 29, 2004 4:00 pm
Location: Pacific Northwest

Re: XITE-1 PC CONFIGURATION REFERENCE

Post by cortone »

Heat is just the symptom of Intel's, or AMD's, challenge. Cooling is primarily a mechanical problem, and the solutions are well known and available. The real issue is the expense and space requirements of those solutions, followed by the monthly electrical bills to run them.

The real challenge is the power draw of running high clock speeds (which is also what produces the heat). If you raise the voltage (and therefore the wattage), you can raise the GHz. You just have to pay for the tech and the power bills, and accept a reduced life expectancy for the processor. Very few are willing to pay for that, or to endure the decibel levels of the cooling required (fewer dB costs a lot more).
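As a rough illustration of why that trade-off gets expensive: dynamic power in CMOS scales roughly with capacitance times voltage squared times frequency, so clock gains bought with extra voltage cost disproportionately more watts. A minimal sketch, with the voltage/frequency figures being assumptions for illustration only, not measured values for any real CPU:

```c
#include <stdio.h>

/* Rough dynamic-power model: P ~ C * V^2 * f.
   All voltage/frequency numbers below are assumed, purely for illustration. */
int main(void) {
    double base_v = 1.20, base_f = 4.2;   /* stock: ~1.20 V at 4.2 GHz (assumed) */
    double oc_v   = 1.40, oc_f   = 5.5;   /* pushed: ~1.40 V at 5.5 GHz (assumed) */

    double power_ratio = (oc_v * oc_v * oc_f) / (base_v * base_v * base_f);
    double clock_gain  = (oc_f / base_f - 1.0) * 100.0;

    /* prints roughly 1.8x the dynamic power for about a 31% clock gain */
    printf("Roughly %.1fx the dynamic power for a %.0f%% clock gain\n",
           power_ratio, clock_gain);
    return 0;
}
```

That quadratic voltage term is why the last few hundred MHz cost so much in wattage, cooling and noise.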

The solution is keeping the voltage/wattage low through a constant cadence of smaller manufacturing nodes, which lets the GHz creep up; spreading the processor instructions over more surface area with multiple execution units (lower wattage, easier cooling); and keeping the average clock speed low, with occasional turbo bursts to keep the relative performance high.

Intel and AMD would love to sell US$10k processors and manufacturers would LOVE to sell millions of US$25k systems with advanced cooling solutions. If enough people line up to buy them, they will.
User avatar
Sounddesigner
Posts: 1087
Joined: Sat Jun 02, 2007 11:06 pm

Re: XITE-1 PC CONFIGURATION REFERENCE

Post by Sounddesigner »

@pranza, when I said 5 GHz I was referring to base speed, not Turbo Boost. Yeah, I agree that with Turbo Boost we've been at about 6 GHz for a while. But since the dawn of the modern computer-music era I don't believe base speed has gone beyond 5 GHz. The individual cores have gotten more powerful in many ways, and collectively with hyperthreading etc., but base speed has always been mysteriously lacking. I think the old Pentium 4s got close to 4 GHz, then Intel scrapped that line and released the first quad-cores at 2.4 GHz; they decided to go with more cores instead. Likewise, many years ago the quads got up to 4.2 GHz, then Intel dropped the base speed again and went with mega-core counts of 10/20/24 cores. With the latest 15th generation, Intel has increased the base speed back to 4.2 GHz, which is a good move and makes the mega-core chips a lot better. Intel knows base speed is important and is what many people want, otherwise they would not have increased it, but rather kept adding more cores and more Turbo Boost.


I did not know the new Core Ultras had problems, thanks for the info! That's a real bummer. Hopefully in the future Intel figures out how to give us super-fast computers that are cooled well enough and affordable; there's definitely a growing need for them. I've seen situations where an individual Intel core was not fast enough and Turbo Boost did not seem to kick in properly, so it wasn't effective enough. The Apple M4s just seem to have more raw power per core.
User avatar
valis
Posts: 7719
Joined: Sun Sep 23, 2001 4:00 pm
Location: West Coast USA
Contact:

Re: XITE-1 PC CONFIGURATION REFERENCE

Post by valis »

The M4 has more efficiency per core and runs with less power/heat. That's because x86-64 includes a massive amount of legacy compatibility that Apple shed with their ARM transition, which translates into larger dies and higher power/heat. Power vs. IPC (instructions per cycle) is the game here, and Apple beat them soundly, at least in mobile and small-format desktops. Above that, it's the areas where the unified RAM can shine (they make decent single-user AI machines, because you can load larger models than with Nvidia consumer GPUs on Intel/AMD).
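To make the "power vs. IPC" point concrete: single-thread performance is roughly IPC times clock, and efficiency is that divided by the watts a core burns. The sketch below uses made-up per-core numbers purely to show the relationship; they are not real M4 or Intel figures:

```c
#include <stdio.h>

/* Toy model: performance ~ IPC * clock; efficiency = performance / watts.
   All per-core numbers below are invented for illustration only. */
int main(void) {
    double x86_ipc = 5.0, x86_ghz = 5.5, x86_watts = 30.0;  /* hypothetical desktop core */
    double arm_ipc = 8.0, arm_ghz = 4.4, arm_watts = 6.0;   /* hypothetical efficient core */

    double x86_perf = x86_ipc * x86_ghz;
    double arm_perf = arm_ipc * arm_ghz;

    printf("x86-ish core: %.1f units, %.2f units/W\n", x86_perf, x86_perf / x86_watts);
    printf("ARM-ish core: %.1f units, %.2f units/W\n", arm_perf, arm_perf / arm_watts);
    return 0;
}
```

The point is simply that a core with higher IPC can win on single-thread work even at a lower clock, and win by a wide margin on performance per watt.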

The P4 was hobbled by losing RAMbus. At the time, people were convinced RAMbus was a patent troll, thanks to press releases from the three memory makers in China who wanted to keep us on parallel RAM addressing (DIMMs rather than Rambus RIMMs). Rambus was a better match for the P4 pipeline, and being serial it would have allowed memory to scale to much higher speeds. Instead we got multi-level addressing with DIMMs (that's how we get the fake higher clock cycles, and it's similar to what consumer NAND does to enable more states and more space). We also don't run our current DIMMs at the speeds rated for longevity (JEDEC); almost all machines overclock instead (XMP and similar profiles), which means consumer machines start having memory issues within 4-6 years. That's within the normal lifespan for most home users who game etc., so they don't really notice, or just feel it's time to upgrade when the machine's stability declines with age.

The biggest thing that held us back with Intel was it being the dominant architecture for so long. Apple, MS and Intel all have issues now in the AI era, as the focus in future machines will change. Intel as a company suffered from hubris and "not invented here" and so lost its competitive edge, a common pattern as companies age. Hopefully the fat they shed and the loss of market share wakes them up before they obsolesce entirely. Meanwhile it's not just ARM, with Apple and the server chips, that is making gains; RISC-V is gaining too and is replacing a lot of embedded chips (which were typically ARM or ASICs), because the open design lets anyone contract custom designs to their spec, vet the instruction set fully, and integrate it with anything they choose. That's not here YET, but in low-power machines there are already dev options that should scale with time. The most important aspect of this is that in the long run we will see hardware that is 100% open, which gives the industry a reference machine to test things like AI models and drivers end to end with open hardware and software, something we haven't had for about 20 years. Not as relevant to the studio, but very relevant for keeping software stacks on Linux and foundational ML/AI models vetted for backdoors.
nebelfuerst
Posts: 583
Joined: Tue Jun 23, 2009 10:55 am

Re: XITE-1 PC CONFIGURATION REFERENCE

Post by nebelfuerst »

In the P4/RAMbus era I worked with SPICE simulations. RAMbus had some serious issues in its design. There was no big performance advantage over parallel RAM, and it was more expensive.
Apple's M4 is an efficient CPU, but Apple's policy of changing the CPU architecture from 68xxx to PPC to x86 to ARM really suxx.
I own an Alpha CPU, which was dominant in performance in its time, but the lack of compatible software limited its usability.
So if anyone can demonstrate Scope running on an M4, I will change my mind :)
\\\ *** l 0 v e | X I T E *** ///
User avatar
Bud Weiser
Posts: 2913
Joined: Tue Sep 14, 2010 5:29 am
Location: nowhere land

Re: XITE-1 PC CONFIGURATION REFERENCE

Post by Bud Weiser »

Well, I still have 2 ASUS P4T-E mobos in stock using RAMBUS, and some working RAMBUS sticks too.
These, together w/ an Intel P4 2.8 GHz (the same proc ran in the KORG OASYS!), were the very best performing combo for SCOPE 4.5 (32-bit).
Unfortunately the BIOS chips became unreliable and are hard to replace 'cause they aren't socketed.
I'd need v1008 chips, which are hard to find too.
But if I found some, I'd use one of these boards w/ RAMBUS and SCOPE PCI cards again.

BTW, the Apple M6 is on the doorstep, and it's 2nm for the first time.
New MacBook Pros will come w/ M6 Pro or Max, 14" or 16" OLED (unfortunately "single RGB layer" only) and Thunderbolt 5.
Very hard to beat for Intel/AMD!
These Apple M chips stay (relatively) cool, and the new MacBooks will be thinner than ever.
I don't think the latter is an advantage, but they will be thinner and more lightweight again.
These M chips simply consume much less power than Intel/AMD.

For those who have the funds and are willing to pay, these are real workhorses, and for audio/MIDI work the cheaper "Pro" versions w/ only a 500GB SSD will be a good investment, since Thunderbolt 5 will, for the first time, be fast enough to put all the large streaming libraries on external drives.

I fancy such a machine, possibly replacing my bulky and heavy PC towers and rackmounts, and running SCOPE from the fastest of my outdated Lenovo workstation laptops alongside it.
I don't believe SCOPE will run on a Mac again, at least not in my lifetime.
User avatar
valis
Posts: 7719
Joined: Sun Sep 23, 2001 4:00 pm
Location: West Coast USA
Contact:

Re: XITE-1 PC CONFIGURATION REFERENCE

Post by valis »

RAMbus was intended to scale in the future when the serial ring bus was adapted to the P4. So yes, it was more costly than DDR (because it was not yet manufactured at the same scale; economies of scale applied). And when Intel was forced to release DDR-based versions of the P4 chipset, it hobbled RAMbus scaling with the P4, as many users opted for the cheaper solution, since there was no major performance penalty. I can tell you that even in my 2001-era dual Xeon the scaling of RAMbus was more apparent than in the consumer chipsets, and that was a Prestonia board, very early in the P4 lifecycle.

The Alpha CPU was a completely different arch, and as a RISC design it should in theory have benefited from higher clock speeds and lower latency (it had a deep pipeline and superscalar design like the P4, but came much earlier). BTW, my dad was chief sales manager for Southwest USA government/military sales at DEC during the late '80s and early '90s. I got to play on VAX machines quite a bit in his office, and that was my intro to UNIX. They had demo machines running not just VMS (VAX/VMS) but also SCO/V and OSF-1, both of which lost tremendous market share and wound up being a dead end for the company in the long run (OSF-1 was supposed to replace SCO SysV as an industry-wide collab effort, lol).

In any case, the reason RAMbus was the choice for the P4 is that the internal latency of the P4 was meant to match the serial RAMbus architecture as clock speeds scaled over time, which was your point above. DDR gave the system too much latency. AMD's shorter-pipeline CPUs with DDR remained competitive at the low end of the market early in the P4 era (and cooling was much easier, even if performance was not quite up to Intel's), so by the time we reached the point where the P4 was supposed to scale, it was only Itanium and the higher-end Xeon workstations still supporting (ECC RIMM) Rambus where the design showed itself scaling. But that was priced out of the consumer market, and Intel countered AMD with the Israeli-designed Pentium M (Dothan era, 2003/4), which reverted to much of the P3-era internal core design updated for the current market, and that proved a better match to software and consumer needs. So the P4 was scrapped in favor of a complete overhaul of the design for the desktop, which emerged with Nehalem. The P4 was not scaling because cache misses were already causing problems by then, due to the mismatch between DDR and its pipeline, which was designed for serial RAM addressing (RAMbus)! I recall all of this well, because as a 3D animator the serial bus didn't matter as much as internal core performance: 3D isn't RAM-limited but tends to stay in-cache on the CPU very well, and thus Nehalem was a big boost for me.

However, the serial RAM bus architecture was actually meant for the ring-bus architecture of multi-core CPUs (integrating the dual-CPU nature of Xeons into a single chip). So when multi-core CPUs did finally emerge, there were issues with multi-core Xeons (again, not a consumer issue) that forced Intel to address separate pools of RAM per CPU (NUMA) to avoid the latency penalty of addressing DDR THROUGH the other CPU or through an external northbridge chip altogether. NUMA had all sorts of problems with consumer applications, and software that was compiled NUMA-aware was considerably higher in performance (because it kept threads local to the CPU they started on at the OS scheduler level, avoiding the latency penalty of being out-of-cache and non-local when the memory address lookup occurred). Meaning the pipeline redesign post-P4 didn't start performing well until post-Nehalem for workstation- and server-level machines. All of this hobbled everything Intel had intended to carry over to consumers from the Itanium design, hobbled moving consumers from x86 to IA-64, and allowed AMD to catch up in performance and even release multi-core designs that exceeded Intel's in memory operations, because AMD had managed to purchase Cray IP that applied specifically to serial ring-bus architectures and integrated memory controllers. That led us into the x86-64 era, with AMD releasing the instruction set extensions (the 64-bit memory extensions) that kept everyone on the x86 arch altogether, killing IA-64 in the long run (which was the ultimate destination not just for RAMbus, but also for removing the legacy x86 design issues that limited performance for ALL of us, to the point where Apple's designs are now highly competitive based on the much simpler ARM chips they use).
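For anyone curious what "NUMA-aware" means in practice, the basic idea is just to keep a thread and the memory it touches on the same node. Here's a minimal sketch using Linux's libnuma, with the node choice hard-coded purely for illustration; it's not how any particular DAW or renderer does it:

```c
#include <numa.h>      /* link with -lnuma */
#include <stdio.h>

int main(void) {
    if (numa_available() < 0) {
        puts("No NUMA support on this system");
        return 1;
    }

    int node = 0;                              /* hypothetical: pin the work to node 0 */
    numa_run_on_node(node);                    /* keep this thread on node 0's CPUs */

    size_t len = 64UL << 20;                   /* 64 MB working buffer */
    void *buf = numa_alloc_onnode(len, node);  /* memory physically on node 0 */
    if (!buf) return 1;

    /* ... latency-sensitive work on buf stays node-local, no cross-link hops ... */

    numa_free(buf, len);
    return 0;
}
```

A NUMA-unaware app just lets the scheduler migrate threads anywhere while its allocations stay wherever they were first touched, which is exactly the cross-node lookup penalty described above.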

In short, RAMbus was never actually a "patent troll". If you watched those years, most of that stateside false reporting came right as Tom's Hardware (Tom) fired all of his staff, changed many stories already published on his site to remove attribution to the original authors, and got a major injection of cash and partial ownership, through some umbrella-obscured holdings, from the Chinese DDR consortium. And Rambus actually won the court battles over the market's claims of patent trolling (over a decade later, if I recall), because Rambus was not a manufacturer but only a research IP company (i.e., not a patent "troll"; they only licensed patents because they collaborated with partners to develop future IP!). Basically the equivalent of "fake news" used to keep the market attached to the DDR consortium, which kept the market locked in through economies of scale and sub-market pricing during that era.

Sorry, that's long, but it's directly related to how the serial bus of the P4 was supposed to scale when we went multi-core, and that never happened. Refer back to the first "two chips glued to one substrate" design at the end of the P4 era, which by that time was married ONLY to DDR in the consumer version, for how the deep Intel pipeline was limited by being mismatched with the "shallow but wide" parallel addressing nature of DDR. The irony is that RAMbus is actually more latent by design than DDR, but it's better matched to deep-pipeline CPUs, and so it would have allowed clock speeds to scale by ensuring the RAM and the CPU cache levels (L2, and later L3 when that was added) were perfectly matched in how they mapped to each other.

Personal note: my P4-era Prestonia machine housed WinXP (32-bit) and Scope as a sidecar machine all the way up until about 2019 or 2020, when I finally built the E4-v6 Xeon rig that still houses my PCI cards today. That's almost 20 years of usable service! Kinda neat.
User avatar
valis
Posts: 7719
Joined: Sun Sep 23, 2001 4:00 pm
Location: West Coast USA
Contact:

Re: XITE-1 PC CONFIGURATION REFERENCE

Post by valis »

As for running Scope on the M4, I'm not sure Holger has the funds for developing an entirely new hardware platform for Scope, not to mention recompiling and debugging the UI software on an arch he's not familiar with. That would require an injection of cash, adapting Scope to Thunderbolt/USB4 rather than the current PCIe card, and more. Right now it's looking likely we will need to help him find a way to get us to Scope 8, so that we can address the outstanding issues we have even on our current Windows 10/11 codebase. But more on that when details emerge...
nebelfuerst
Posts: 583
Joined: Tue Jun 23, 2009 10:55 am

Re: XITE-1 PC CONFIGURATION REFERENCE

Post by nebelfuerst »

Very interesting story, valis. I saw RAMBus from an electronics perspective, where the high frequency caused lots of trouble, especially because the RAM connectors and chip housings were prone to resonances.
PCIe and SATA showed that, over time, electronics have evolved. Maybe some aspects of the RAMBus concept simply came too early.
NUMA is still around; my Threadripper 2970 has NUMA. This CPU runs like hell on 3D rendering, as it scales linearly with its 48 threads. But for memory-intensive jobs, it's one of the worst CPUs.
For me, the Xeons with onboard FPGA were a very interesting approach, but they failed on the market.
But I agree that Scope 8 is not required to run on fancy systems, as long as I can run it on any of my (too many) systems. :)
\\\ *** l 0 v e | X I T E *** ///
User avatar
valis
Posts: 7719
Joined: Sun Sep 23, 2001 4:00 pm
Location: West Coast USA
Contact:

Re: XITE-1 PC CONFIGURATION REFERENCE

Post by valis »

Yes, TR uses NUMA, as it allows each chiplet/core set to manage its own pool of RAM. This again carried a performance hit for many applications that weren't NUMA-aware in the workstation/normal-user category. So there's a gaming mode that actually disables one chiplet entirely to favor single-threaded and low-thread-count applications when necessary (games), while the OS can still schedule low-priority tasks on the other chiplet.

High-frequency noise is always an issue when dealing with high speeds in computing. The ECC RIMMs my Xeon board used were different from consumer sticks, so that was the first era of heat spreaders that were entirely metal over the RAM chips. This design continued for later Xeon-compatible RAM in the Core era, except for Apple's Mac Pros, which used much beefier heatsinks instead. I've still got 2 Mac Pros, and an identical 2008-era dual Xeon Supermicro board as well. I loved that architecture, actually.

The FPGA Xeons were a limited vertical-market product that was supposed to serve as a development platform, and an experiment. I don't think it was designed to succeed, as FPGAs are most often used to develop software and firmware together that then gets made into silicon as an ASIC at a much cheaper price point. The only people who keep their designs on FPGAs are in labs where the firmware/operational code never solidifies, which is an even narrower (and higher-priced) market. But cool, certainly.

Scope 8 should simply run on what Scope 7 runs on now, with wider compatibility and more bugfixes, at least we can hope so.
User avatar
Bud Weiser
Posts: 2913
Joined: Tue Sep 14, 2010 5:29 am
Location: nowhere land

Re: XITE-1 PC CONFIGURATION REFERENCE

Post by Bud Weiser »

I'm surprised you guys wrap your heads around SCOPE 8.
SCOPE 7 needs bug fixes and, mainly, a better ASIO driver.
Reaper, also the latest build, runs pretty well w/ SCOPE 7 x64 ASIO, and the VST2 and VST3 plugins loaded into Reaper do too.
BUT, when I use any of these devices running standalone w/ SCOPE ASIO, I get a BSOD and the Win10 machines crash.
I'm talking about devices that run both as plugins (VST2/VST3) and as standalone applications, e.g. IK Multimedia ST-4, Sampletron 2, Syntronik 2, B3-X and others from Arturia etc.
The Cherry Audio ones worked w/ SCOPE 7 64-bit and Windows 7 Pro SP1 64-bit, and I'll install them on the Win10 "SCOPE" laptop just to double-check whether they also work in standalone mode w/ SCOPE 7 ASIO and Win 10 Pro 64-bit.

This scenario is really annoying and I don't understand technically how that can be.
What's the difference between running the same plugin/device inside Reaper or standalone, both w/ SCOPE 7 ASIO? :-?
It's so strange!

:)

Bud
User avatar
garyb
Moderator
Posts: 23398
Joined: Sun Apr 15, 2001 4:00 pm
Location: ghetto by the sea

Re: XITE-1 PC CONFIGURATION REFERENCE

Post by garyb »

the stand-alone device has its own ASIO configuration. that's where the problem lies.
User avatar
valis
Posts: 7719
Joined: Sun Sep 23, 2001 4:00 pm
Location: West Coast USA
Contact:

Re: XITE-1 PC CONFIGURATION REFERENCE

Post by valis »

DAW applications like Ableton Live, Reaper etc. have their own ASIO layer and driver-access model, written by each respective app's development team. So each DAW-style application tends to implement things anew, and yet you'll find they also tend to be more compatible, as they need the DAW to be stable first and foremost. The short of it is that they do NOT try to init every device when polling them, and so have held up much better post-Scope 7 launch for our usage.

For the standalone versions of plugins and other smaller apps, most of these developers focus more on the plugin itself and its features, so the standalone modes are typically included as a nice extra for those who want them. But the devs/companies that focus on plugins don't code their own ASIO software interface to the driver. Instead, they rely on open source libraries (and, in the past, closed source but externally provided libraries they paid to integrate), which are written differently than the DAWs' own layers, since the open source provider tries to make them suitable for every purpose, from the high end to the low end.

In the time since Scope 7 was developed, most of the libraries these plugins rely on in standalone mode (portaudio and the like) have made substantial changes to handle consumer soundcards/chipsets (Realtek/Azalia/etc.), or been replaced by newer alternatives by the individual plugin developers (RtAudio, miniaudio, etc. all came along more recently) because those had better support for the consumers who represent a large portion of sales. They do this because, as open source libraries, they want features that appeal to as many developers as possible and support as many devices as possible.

Almost all of these audio libraries do an init poll of *every* audio/MIDI device enumerated in the machine, and will query every supported mode and sample rate etc., rather than just listing the devices in the dropdown (in the relevant app's settings) and doing the init only when you actually switch to a given device. Many even just assume control over the sample rate now, where in the past that setting would be greyed out with a soundcard that doesn't allow changing it from a software client (like Scope). Now they just implement the open source libraries that assume they CAN control the soundcard; thankfully the apps that try to change Scope's sample rate tend to just crash or freeze when that call doesn't return the expected result (rather than BSOD).
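To illustrate the kind of init-time probing involved, here's a minimal sketch using the PortAudio C API; real standalone plugins wrap this in their own settings code, and many go further and test formats/sample rates per device with Pa_IsFormatSupported():

```c
#include <stdio.h>
#include <portaudio.h>

int main(void) {
    /* Pa_Initialize() alone walks every available host API (WASAPI, ASIO, MME, ...)
       and enumerates every device it finds, which is where a driver that
       dislikes being probed can already get into trouble. */
    if (Pa_Initialize() != paNoError) return 1;

    int count = Pa_GetDeviceCount();
    for (int i = 0; i < count; i++) {
        const PaDeviceInfo *info = Pa_GetDeviceInfo(i);
        printf("%2d: %s (in %d / out %d, default %.0f Hz)\n",
               i, info->name, info->maxInputChannels,
               info->maxOutputChannels, info->defaultSampleRate);
    }

    Pa_Terminate();
    return 0;
}
```

A DAW-style host typically defers that probing until you actually pick a device, which is the behavioral difference described above.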

Long story short, the queries sent to Scope from these software libraries have changed since the Win7/Scope 7 64-bit era debuted, and so the standalone versions of apps (plugins) that use them now cause crashes for Scope. We all agree this should be fixed.

Remember, the general compute and tech industry is almost as bad at obsolescing things as the mobile device market. FireWire changed with almost every Windows and macOS version, so many FireWire devices were dropped when moving from XP to Win7/8, and this repeated with Win10. (For example, there was a change in the FireWire security model in Win10/11 around networking features over FireWire, but it applied to all devices even if they didn't act as network devices, so they all needed new drivers.) That's old hat now, but it was a huge source of complaints every time it occurred, especially with Win10.

------------------------------------------

Mind you, we're all in the same boat here. I use Bidule under Scope, and its developer had to integrate portaudio into the app as it no longer worked as a linked library (.dll), so I'm stuck on the last version of Bidule that kept it external, where I could swap it for a version that doesn't cause Scope to crash on launching Bidule. VCV Rack v1 runs fine, but they switched to a newer layer with VCV Rack 2, so the same issue occurs there. The same with several plugins I own: the older versions work fine with Scope in standalone, but don't actually work as well in modern DAWs (and Bidule), so we're left with the choice of an older version standalone or using a host (or DAW) to run the newer version.