A. Types of parallelism
Bit-level parallelism
From the advent of very-large-scale integration (VLSI) computer-chip fabrication technology in the 1970s until about 1986, speed-up in computer architecture was driven by doubling the computer word size, the amount of information the processor can manipulate per cycle. Increasing the word size reduces the number of instructions the processor must execute to perform an operation on variables whose sizes are greater than the length of the word. For example, where an 8-bit processor must add two 16-bit integers, it must first add the 8 lower-order bits from each integer using the standard addition instruction, then add the 8 higher-order bits using an add-with-carry instruction and the carry bit from the lower-order addition; thus, an 8-bit processor requires two instructions to complete a single operation, where a 16-bit processor completes it with a single instruction.
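As a rough sketch of that two-instruction sequence (the C rendering and the function name add16_on_8bit are illustrative, not a description of any particular instruction set), the same computation looks like this:

    #include <stdint.h>

    /* Add two 16-bit integers using only 8-bit arithmetic, as an 8-bit
       processor would: low bytes first with a plain ADD, then high bytes
       with an add-with-carry (ADC) using the carry out of the low bytes. */
    uint16_t add16_on_8bit(uint16_t a, uint16_t b) {
        uint8_t a_lo = a & 0xFF, a_hi = a >> 8;
        uint8_t b_lo = b & 0xFF, b_hi = b >> 8;

        uint8_t sum_lo = a_lo + b_lo;             /* ADD: wraps modulo 256   */
        uint8_t carry  = (sum_lo < a_lo) ? 1 : 0; /* carry out of low byte   */
        uint8_t sum_hi = a_hi + b_hi + carry;     /* ADC: high bytes + carry */

        return (uint16_t)(((uint16_t)sum_hi << 8) | sum_lo);
    }

A 16-bit processor replaces this entire sequence with a single addition instruction.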
Instruction-level parallelism
Figure: a canonical five-stage pipeline in a RISC machine (IF = instruction fetch, ID = instruction decode, EX = execute, MEM = memory access, WB = register write back).
A computer program is, in essence, a stream of instructions executed by a processor. These instructions can be re-ordered and combined into groups that are then executed in parallel without changing the result of the program. This is known as instruction-level parallelism. Advances in instruction-level parallelism dominated computer architecture from the mid-1980s until the mid-1990s.
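To make the dependence structure concrete, here is a small, hypothetical C fragment (the names are ours): the first two statements touch disjoint variables, so the hardware or the compiler's scheduler may overlap or reorder them, while the third must wait for both.

    int ilp_example(int b, int c, int e, int f) {
        int t1 = b + c;  /* independent of the next statement           */
        int t2 = e + f;  /* no dependence on t1: may issue alongside it */
        return t1 * t2;  /* data-dependent: must wait for t1 and t2     */
    }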
Modern processors have multi-stage instruction pipelines. Each stage in the pipeline corresponds to a different action the processor performs on the instruction in that stage; a processor with an N-stage pipeline can have up to N different instructions at different stages of completion. The canonical example of a pipelined processor is a RISC processor, with five stages: instruction fetch (IF), instruction decode (ID), execute (EX), memory access (MEM), and register write back (WB). The Pentium 4 processor had a 35-stage pipeline.
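Under ideal conditions, the overlap in a five-stage pipeline looks roughly like the following timing sketch (one column per clock cycle); from cycle 5 onward, five instructions are in flight at once, one per stage:

    cycle:  1    2    3    4    5    6    7    8    9
    i1      IF   ID   EX   MEM  WB
    i2           IF   ID   EX   MEM  WB
    i3                IF   ID   EX   MEM  WB
    i4                     IF   ID   EX   MEM  WB
    i5                          IF   ID   EX   MEM  WB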
Task parallelism
Task parallelism is the characteristic of a parallel program that "entirely different calculations can be performed on either the same or different sets of data". This contrasts with data parallelism, where the same calculation is performed on the same or different sets of data. Task parallelism does not usually scale with the size of a problem.
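A minimal sketch using C with POSIX threads (the task split and all names are ours): two threads perform entirely different calculations, a sum and a maximum, over the same data set.

    #include <pthread.h>
    #include <stdio.h>

    static int data[] = {4, 8, 15, 16, 23, 42};
    static const int n = sizeof data / sizeof data[0];
    static long sum;
    static int  max;

    static void *compute_sum(void *arg) {  /* task 1: sum the array */
        (void)arg;
        for (int i = 0; i < n; i++) sum += data[i];
        return NULL;
    }

    static void *compute_max(void *arg) {  /* task 2: find the maximum */
        (void)arg;
        max = data[0];
        for (int i = 1; i < n; i++)
            if (data[i] > max) max = data[i];
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, compute_sum, NULL);
        pthread_create(&t2, NULL, compute_max, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("sum = %ld, max = %d\n", sum, max);
        return 0;
    }

Note that making the array larger does not create more tasks; there are still only two, which is why task parallelism does not, by itself, scale with problem size. A data-parallel version would instead split the array across threads that all run the same calculation.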