LeDoyen wrote: Apparently, the definition of SoC that we use in the semiconductor industry and the one you understand seem to differ yes
Ummm, you just work in a different semiconductor industry than I do.
In fact, it sounds like a very Intel-only world, ignoring AMD, even when it comes just to x86.
First off, System on a Chip (SoC) ... SoC has been around outside of x86 for a long time. PowerQUICC was a common, popular one for PowerPC in the '00s. Various ARMv7 and earlier options existed as well. I'm speaking from integration experience, because those are the parts I'm most familiar with from last decade, before ARMv8 this decade. I won't touch on 68K and early Power, or MIPS for that matter. Back then I was actually designing memory and system interconnect with synthesis tools. I haven't done much of that since '05, I'll admit, but I've done plenty of embedded integration and development.
But even some 386/486 and, later, i686 SoCs were out there (IDT/Centaur, SGS-Thomson, etc. ... usually carrier-grade and other industries), with everything including all the Legacy PC (LPC) functionality that Intel still puts in the I/O Controller Hub (ICH). I remember this because it came up when the Fedora project considered moving from i486 to i686 as the minimum instruction-set compatibility, always optimizing for the in-order Atom (because Atom sucked if code was optimized for the i686 Pro/II/III). There were still a handful of integrators using 486-compatible SoC options out there.
BTW, probably the first SoC was the 186EM, but that's another debate -- though that's splitting hairs, because the LPC functionality was more minimal back then. But AMD dominated Intel back then, and AMD even had a better 8087 (it even helped Intel design and fab theirs, long story).
Secondly, NUMA and the absence of an FSB have nothing to do with SoC. One can have an FSB on-chip and still be an SoC. I.e., what you call SoC, we call GPU, NUMA and peripheral interconnect, but not peripherals.
SoC actually includes all peripherals in our world, and any additional peripheral interconnect is optional, not required.
E.g., Windows (GNU/Linux is another story -- although it's not really a PC any more at that stage, because it's running non-PC-compatible firmware) won't freak'n work without the Intel ICH, because there are still Legacy PC (LPC) functions in it, no different from the old Southbridge. Furthermore ...
AMD threw away the Front Side Bus (FSB) back in the 32-bit era when it created the 32-bit Athlon based on the 40-bit Alpha 21264 EV6 platform, with a crossbar switch. Eventually AMD just went with a broadcast mesh for 64-bit with up to eight (8) nodes, being 40-bit from day 1, and upping to the 48-bit maximum flat virtual addressing of x86-64 Long Mode and, later, the full 52-bit Physical Address Extension (PAE) addressing that x86-64 Long Mode is capable of.
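Just for scale (my own back-of-the-envelope numbers, not anything from the original exchange), here's roughly what those address widths work out to, as a quick C check:

#include <stdio.h>

int main(void)
{
    /* Address-space sizes for the widths mentioned above. */
    unsigned long long w40 = 1ULL << 40;   /* 40-bit: 1,024 GiB (1 TiB) */
    unsigned long long w48 = 1ULL << 48;   /* 48-bit: 256 TiB, Long Mode's flat virtual space */
    unsigned long long w52 = 1ULL << 52;   /* 52-bit: 4 PiB, the architectural physical ceiling */

    printf("40-bit = %llu GiB\n", w40 >> 30);
    printf("48-bit = %llu TiB\n", w48 >> 40);
    printf("52-bit = %llu PiB\n", w52 >> 50);
    return 0;
}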
x86-64 Long Mode includes an i686-compatible (32-bit/36-bit PAE) sub-mode, so x86-64 OSes can still run segmented i686 binaries/libraries. Without that, Windows x64 wouldn't work, as a lot of it is still 32-bit libraries. Even Canonical (Ubuntu) got a 'rude awakening' when it decided to yank all 32-bit libraries in its next release, only to find out that doing so utterly breaks Steam (not just WINE), etc., because so many games released for Linux depend on 32-bit libraries, just as they do on Windows. GNU/Linux might have been 64-bit 'clean' from day 1 (thanks to my colleague Jon "Mad Dog" Hall and others who got Alphas to Linus in '94), but WinForms/Win32 is freak'n so not.
All Nehalem did was follow what AMD had done almost a decade earlier, finally dumping the FSB, with one exception -- Intel never put the actual I/O MMU and other 'segmentation/protection' in hardware. That's why Intel is f'ing having exploit after exploit: because it still relies on software, unlike AMD. Nehalem also tried to address, unsuccessfully at first, not only the 36-bit PAE limitations of i686 (going to 38-bit), but also the 'unsafe' memory-mapped I/O above 32 bits (4 GiB). But it took a lot of freak'n Linux kernel hacking, and Microsoft didn't even bother with Windows x64 for a while (it just kept I/O under 4 GiB).
E.g., it's why Intel wasn't >32-bit "safe" when Intel's IA-32e (x86-64) processors first appeared, and the Linux/x86-64 kernel had to use 'bounce buffers' for any I/O mapped above 4 GiB. It's a problem that still exists today, because the OS only has 'tricks' to mitigate the performance hits; it's not actually handled in hardware.
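To make the 'bounce buffer' point concrete, here's a minimal sketch of the idea in C. This is NOT the actual Linux swiotlb code, just a toy simulation: the 4 GiB device limit, the low_pool buffer, and the do_dma()/dma_write() names are all assumptions for illustration.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define DMA_LIMIT (1ULL << 32)   /* 4 GiB: highest physical address our assumed device can reach */

static char low_pool[4096];      /* stand-in for a pre-reserved bounce buffer below 4 GiB */

static void do_dma(uint64_t phys, size_t len)   /* stand-in for kicking off the real transfer */
{
    printf("DMA %zu bytes from physical 0x%llx\n", len, (unsigned long long)phys);
}

/* 'phys' is where the caller's buffer sits in physical memory (simulated here).
 * If it's above the device's 4 GiB limit, copy into the low pool and DMA from
 * there -- that extra memcpy is exactly the performance hit the OS has to eat. */
static void dma_write(const void *buf, size_t len, uint64_t phys)
{
    if (phys + len <= DMA_LIMIT) {
        do_dma(phys, len);               /* already reachable: no copy needed */
    } else {
        memcpy(low_pool, buf, len);      /* bounce: copy below 4 GiB first (assumes len fits) */
        do_dma(0x00FFF000ULL, len);      /* pretend low_pool sits at this low physical address */
    }
}

int main(void)
{
    char data[64] = "payload";
    dma_write(data, sizeof data, 0x180000000ULL);   /* a buffer 'mapped' at 6 GiB */
    return 0;
}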
I know because I saw this crap under Intel NDA in late 2007 -- yes, before Nehalem was released -- because I was working on Intel pre-release engineering hardware for high-speed trading at Goldman and Lehman Brothers. I formally left embedded after I left Timesys (where I was previously under NDA on Atom -- I said it would fail to gain any market in embedded, and I was dead-on; the 'netbooks' actually saved it, with Microsoft licensing Windows at 1/5th the price) to join Red Hat in 2007 (and finally get away from the long hours of work), although Red Hat still had me on a few 'custom projects,' like in trading, retail, etc. (the hours went back up -- but at least I got massive stock bonuses at times for doing those) once I joined, given my experience at Timesys, IPC Systems before that, let alone actually doing some layout years before that.
That same, f'n 'design flaw' I saw in 2007 is still there! Intel is still relying on software. When I was working with Theseus Logic (just before they were acquired by Camgian), people I knew at API Networks (fka Alpha Processor, Inc., spun off by Digital, then part of AMD) predicted this would happen. That's why everyone in the x86-oriented embedded world was trying to support HyperTransport -- because unlike Intel, AMD actually designed a freak'n hardware-safe x86-64.
Even ARMv7 added an I/O MMU and out-of-order speculative execution while Intel was still getting Atom up to speed! And I remember when ARM didn't even have an MMU, and uClinux was a fork of the kernel, because the Linux kernel required at least an MMU.
Actual, real, Intel SoC ... Intel first introduced an x86 SoC with the later in-order Atom designs, and then with the out-of-order Atom designs (the latest being the Goldmont series), ignoring really old stuff like the 386EM or even 186EM, of course (although AMD bested them there too). But the Atom SoCs were largely introduced because AMD's Family 14h (first Socket FT1, then 16h/FT3) was kicking its butt, badly, in both cost and performance.
I.e., before then, AMD Geode x86 SoCs were always 'well behind' Intel's leading-edge solutions, so partners and integrators were willing to pay the price/power-consumption 'penalty' of Intel ICH chips, usually fabbed at bigger feature sizes. But Family 14h was basically a full-up K10, just without the L3 cache and with only a single DDR channel, severely cutting down on pinout (BGA-423) and traces. Intel woke up to integrators literally ignoring Atom and going full AMD when they just didn't pick up an i-core (or i3-based Pentium). AMD later brought out FT3 (BGA-769) for more traces and options, which is what finally solidified AMD BGA over Atom BGA options.
Intel still really doesn't have a 'cost-effective,' higher-end SoC solution today, and it's only Intel partnerships that keep it alive with the Atom designs -- which include the Celeron/Pentium-J/N products -- now known as Pentium Silver (while the real i-series parts are Pentium Gold). AMD, on the other hand, has its Jaguar/Kabini SoCs in all sorts of systems, including even the PS4 and Xbox One consoles.
And against ARM, Atom is a joke. I still remember when the Cortex-A17 hit, and my friends at Intel just dropped F-words. Atom was always behind ARMv7 developments, and definitely behind v8 now. Sigh ... remember when 64-bit MIPS was going to be the end-all, be-all of the future for everyone back in '90?
So, again ... I really don't know where you f'ing got the idea that NUMA and removing the FSB is what makes an SoC. I honestly have to question whether you know anything outside of Intel, because x86 SoCs have been around since the 186EM days.
SoC means you need zero support other than voltage regulators, capacitance, memory (and sometimes not even that, especially if some of the SRAM can be used as memory instead of cache) and the traces necessary to interface. Intel purposely doesn't do that and, in fact, hurts itself, because not doing so makes its solutions so costly.
Intel really hasn't designed much this century as far as the platform goes. Again, there was a good reason we didn't use Intel in the financial world during the '00s at all, and why we were wary even this decade.