operating-system, distributed-system, processor, cpu-cores, smp

What is the difference between processor cores and SMP cores?


In figure 3.2 in the book Distributed Systems: For Fun and Profit, it is mentioned:

Performance advantage of a cluster built with high-end server nodes (128 core SMP) over a cluster with the same number of processor cores built with low-end server nodes (four core SMP) for clusters of varying size.

How come 128 cores and four cores are referred to as "the same number of processor cores"?

I tried to Google "SMP cores" but could not understand the statement above. SMP cores enable multiple processors to share a common memory, all belonging to a single OS. That entails that context switching among processors no longer exists, thus making communication efficient.


Solution

  • How come 128 cores and four cores are referred to as "the same number of processor cores"?

    Assume one cluster uses 4 nodes with 128 processor cores per node (a total of 512 processor cores in the cluster), and another cluster uses 128 nodes with 4 processor cores per node (a total of 512 processor cores in the cluster). The second cluster has "the same number of processor cores (in the cluster)" as the first cluster.
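The arithmetic can be sketched as follows (the specific node and core counts are illustrative assumptions, not figures taken from the book):

```python
# Two hypothetical clusters with the same total number of processor cores.
high_end = {"nodes": 4, "cores_per_node": 128}   # few big 128-core SMP nodes
low_end = {"nodes": 128, "cores_per_node": 4}    # many small 4-core SMP nodes

total_high = high_end["nodes"] * high_end["cores_per_node"]
total_low = low_end["nodes"] * low_end["cores_per_node"]

print(total_high, total_low)  # both clusters total 512 processor cores
```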

    What they're saying is that, for the same total number of processor cores in the cluster, having more processor cores per node and fewer nodes is better for performance (very likely because communication between processor cores is faster when they are in the same node, so "more processor cores per node and fewer nodes" means less time lost to network latency between nodes).

  • What is the difference between processor cores and SMP cores?

    For SMP cores (or "symmetric multiprocessor cores"), "symmetric" means they're all the same. In other words, SMP cores are assumed to be identical processor cores (not a mixture of different CPU models, possibly from different manufacturers, and not a mixture of "performance cores + efficiency cores" in the same chip like Intel's Alder Lake) AND to have equal access to the same data/memory (not NUMA, and not "different processor cores need different amounts of networking to access the same data").

    SMP cores enable multiple processors to share a common memory, all belonging to a single OS. That entails that context switching among processors no longer exists, thus making communication efficient.

    I'm "very sure" it's about the extra cost of networking (and not context switches). E.g. sending data to a processor core in the same node is fast (it's all in the same memory and nothing needs to be done), but sending data to a processor core in a different node is slower (e.g. normal Ethernet adds about 1 ms of latency).
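A toy model makes the effect visible. The latency constants below are illustrative assumptions (roughly "shared memory vs. Ethernet"), not measurements, and the model naively assumes each message targets a uniformly random core in the cluster:

```python
INTRA_NODE_LATENCY = 100e-9  # ~100 ns via shared memory within a node (assumed)
INTER_NODE_LATENCY = 1e-3    # ~1 ms over commodity Ethernet between nodes (assumed)

def comm_cost(messages: int, nodes: int, cores_per_node: int) -> float:
    """Expected total latency if each message goes to a uniformly random core."""
    total_cores = nodes * cores_per_node
    p_same_node = cores_per_node / total_cores  # fraction of cores in our own node
    per_message = (p_same_node * INTRA_NODE_LATENCY
                   + (1 - p_same_node) * INTER_NODE_LATENCY)
    return messages * per_message

# Same 512 total cores, different layouts:
big_nodes = comm_cost(10_000, nodes=4, cores_per_node=128)    # few big nodes
small_nodes = comm_cost(10_000, nodes=128, cores_per_node=4)  # many small nodes
print(big_nodes, small_nodes)
```

With these assumed numbers, the cluster built from bigger SMP nodes spends noticeably less total time on communication, because a larger fraction of its messages stay inside a node.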

    Note that for these kinds of systems (and embarrassingly parallel workloads in general), the software often creates one software thread per processor core, so that there's no need for any context switches.