Level: Intermediate
M. Tim Jones ([email protected]), Consultant Engineer, Emulex
14 Mar 2007
As evidenced by major central processing unit (CPU) vendors, multi-core processors are poised to dominate the desktop and embedded space. With multiprocessing comes greater performance but also new problems. This article explores the ideas behind multiprocessing and developing applications for Linux® that exploit SMP.
You can increase the performance of a Linux system in various ways, and one of the most popular methods is increasing the performance of the processor. An obvious solution is to use a processor with a faster clock rate, but for any given technology there exists a physical limit beyond which the clock simply can't go any faster. When you reach that limit, you can use the more-is-better approach and apply multiple processors. Unfortunately, performance doesn't scale linearly with the number of processors you add.
Before discussing the application of multiprocessing in Linux, let's take a quick look back at the history of multiprocessing.
History of multiprocessing
Flynn's classification of multi-CPU architectures
Single Instruction, Single Data (SISD) is the typical uniprocessor architecture. The Multiple Instruction, Multiple Data (MIMD) multiprocessing architecture has separate processors executing independent instruction streams on independent data (control parallelism). Finally, Single Instruction, Multiple Data (SIMD) applies a single instruction stream to multiple data elements at once (data parallelism), as in vector processors.
See the Resources section below for details on Flynn's original paper.
Multiprocessing originated in the mid-1950s at a number of companies, some you know and some you might not remember (IBM, Digital Equipment Corporation, Control Data Corporation). In the early 1960s, Burroughs Corporation introduced a symmetrical MIMD multiprocessor with four CPUs and up to sixteen memory modules connected via a crossbar switch (the first SMP architecture). The popular and successful CDC 6600 was introduced in 1964 and provided a CPU with ten subprocessors (peripheral processing units). In the late 1960s, Honeywell delivered the first Multics system, another symmetrical multiprocessing system of eight CPUs.
While multiprocessing systems were being developed, technologies also advanced the ability to shrink the processors and operate at much higher clock rates. In the 1980s, companies like Cray Research introduced multiprocessor systems and UNIX®-like operating systems that could take advantage of them (CX-OS).
With the popularity of uniprocessor personal computer systems such as the IBM PC, the late 1980s saw a decline in multiprocessing systems. But now, twenty years later, multiprocessing has returned to these same personal computer systems through symmetric multiprocessing.
Amdahl's law
Gene Amdahl, a computer architect and IBM fellow, developed computer architectures at IBM, his namesake venture, Amdahl Corporation, and others. But he is most famous for his law that predicts the maximum expected system improvement when a portion of the system is improved. This is used predominantly to calculate the maximum theoretical performance improvement when using multiple processors (see Figure 1).
Figure 1. Amdahl's law for processor parallelization
Using the equation shown in Figure 1, speedup = 1 / (F + (1 - F) / N), you can calculate the maximum performance improvement of a system using N processors, where the factor F specifies the portion of the work that cannot be parallelized (the portion that is sequential in nature). The result is shown in Figure 2.
Figure 2. Amdahl's law for up to ten CPUs
In Figure 2, the top line shows ideal linear speedup, equal to the number of processors. Ideally, this is what you'd like to see when you add processors to solve a problem. Unfortunately, because not all of the problem can be parallelized and there's overhead in managing the processors, the actual speedup is quite a bit less. At the bottom (purple line) is the case of a problem that is 90% sequential. The best case in this graph, the brown line, shows a problem that's 10% sequential and, therefore, 90% parallelizable. Even in this case, ten processors perform only modestly better than five.
Multiprocessing and the PC
An SMP architecture is simply one in which two or more identical processors connect to one another through a shared memory. Each processor has equal access to the shared memory (the same access latency to the memory space). Contrast this with the Non-Uniform Memory Access (NUMA) architecture, in which, for example, each processor has fast local memory but can also access memory attached to other processors, at a higher latency.
Loosely-coupled multiprocessing
The earliest Linux multiprocessor systems were loosely-coupled: multiple standalone systems connected by a high-speed interconnect (such as 10 Gigabit Ethernet, Fibre Channel, or InfiniBand). This type of architecture is also called a cluster (see Figure 3), for which the Linux Beowulf project remains a popular solution. Linux Beowulf clusters can be built from commodity hardware and a typical networking interconnect such as Ethernet.