Automatic parallelization refers to converting sequential code into multi-threaded or vectorized code in order to utilize multiple processors simultaneously in a shared-memory multiprocessor environment.
Though the quality of automatic parallelization has improved over the past few decades, fully automatic parallelization of sequential programs by compilers remains a grand challenge: it requires complex program analysis, and factors such as input data ranges are unknown at compile time.
Gustafson’s law is a law in computer science which states that any sufficiently large problem can be efficiently parallelized. Gustafson’s law addresses a shortcoming of Amdahl’s law, which does not scale the workload with the available computing power as the number of processors increases. It removes the assumption of a fixed problem size, or fixed computation load, on the parallel processors.
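Gustafson’s law is usually written as the scaled speedup S(N) = N − s(N − 1), where N is the number of processors and s is the serial fraction of the workload as measured on the parallel system. A minimal sketch of the calculation (the function name is illustrative, not from any particular library):

```python
def gustafson_speedup(n_procs: int, serial_fraction: float) -> float:
    """Scaled speedup S(N) = N - s*(N - 1), where s is the serial
    fraction of the (scaled) workload on the parallel system."""
    return n_procs - serial_fraction * (n_procs - 1)

# With a 5% serial fraction, the scaled speedup keeps growing
# nearly linearly as processors are added:
for n in (8, 64, 1024):
    print(n, gustafson_speedup(n, 0.05))
```

Because the problem size is allowed to grow with N, the speedup is not bounded by a constant, in contrast with Amdahl’s fixed-size model below.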
Amdahl’s law, also known as Amdahl’s argument, is named after computer architect Gene Amdahl and is used to find the maximum expected improvement to an overall system when only part of the system is improved. In parallel computing, it is often used to predict the theoretical maximum speedup attainable with multiple processors.
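Amdahl’s law gives the speedup as S = 1 / ((1 − p) + p/N), where p is the fraction of the program that can be parallelized and N is the number of processors; as N grows, S is capped at 1/(1 − p). A small sketch (function name is illustrative):

```python
def amdahl_speedup(n_procs: int, parallel_fraction: float) -> float:
    """Theoretical speedup S = 1 / ((1 - p) + p / N) for a program
    whose fraction p of work can be parallelized over N processors."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_procs)

# Even with 95% of the work parallelized, the speedup can never
# exceed 1 / 0.05 = 20, no matter how many processors are used:
for n in (8, 64, 1024):
    print(n, round(amdahl_speedup(n, 0.95), 2))
```

The fixed problem size in this model is exactly what Gustafson’s law relaxes: here the serial fraction dominates as N grows, so adding processors yields diminishing returns.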