Parallel processing is the ability of the brain to process multiple incoming stimuli simultaneously. It is most important in vision, where the brain divides and conquers what it sees: a scene is broken up into four components: color, motion, form, and depth. These are analysed individually and then compared to stored memories, which helps the brain identify what you are viewing. The brain then combines all of them into the single image that you see and comprehend. The advantage of parallel processing is that it allows the brain to identify different stimuli simultaneously, enabling quick and decisive action.
This is one of the reasons the human brain is in some respects more powerful than a computer. Although a computer can switch states millions of times faster than a human neural network, the brain compensates with a vastly larger number of processing units operating in parallel.
Parallel processing is also referred to as parallel computing.
Related terms: parallel computing, symmetric multiprocessing (SMP), and massively parallel processing (MPP).
Parallel processing is the simultaneous use of more than one CPU to execute a program. Ideally, parallel processing makes a program run faster because there are more engines (CPUs) running it. In practice, it is often difficult to divide a program in such a way that separate CPUs can execute different portions without interfering with each other. Most computers have just one CPU, but some models have several; parallel processing can also be spread across multiple networked machines, although that requires very sophisticated software known as distributed processing software.
Parallel computing is a form of computing in which many instructions are carried out simultaneously. It operates on the principle that large problems can almost always be divided into smaller ones, which may be carried out concurrently ("in parallel"). Parallel computing exists in several different forms: bit-level parallelism, instruction-level parallelism, data parallelism, and task parallelism. It has been used for many years, mainly in high-performance computing, but interest in it has grown in recent years due to physical constraints preventing further frequency scaling. Communication and synchronization between the different subtasks are typically among the greatest barriers to good parallel program performance.
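The divide-and-conquer principle above can be sketched in Python with the standard `multiprocessing` module. This is a minimal illustration, not code from the text; the function name `partial_sum` and the choice of four workers are my own.

```python
# Minimal sketch: a large problem (summing a big list) is divided into
# smaller independent sub-problems that run concurrently, then the
# partial results are combined.
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker sums its own slice of the data independently."""
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the problem into four independent sub-problems.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        # Each chunk is summed in parallel; the partial sums are
        # combined at the end.
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as sum(data), computed in parallel
```

Note that the sub-problems here never interact, so no synchronization is needed; real workloads usually need some communication between subtasks, which is exactly the performance barrier mentioned above.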
Types of parallel processors (Flynn's taxonomy):
SISD: Single Instruction stream, Single Data stream
MISD: Multiple Instruction stream, Single Data stream
SIMD: Single Instruction stream, Multiple Data stream
MIMD: Multiple Instruction stream, Multiple Data stream
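The SIMD class can be modelled conceptually in Python: one instruction applied uniformly to a whole stream of data elements. This is only an illustration of the idea; real SIMD hardware performs the whole operation in a single machine instruction, which plain Python cannot do.

```python
# Conceptual sketch of SIMD: a single instruction ("multiply by a
# factor") is applied to multiple data elements at once. The function
# name is illustrative, not from the text above.
def simd_scale(data, factor):
    """Apply one operation uniformly to every element of the stream."""
    return [x * factor for x in data]

print(simd_scale([1, 2, 3, 4], 2))  # [2, 4, 6, 8]
```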
Types of parallelism: bit-level parallelism, instruction-level parallelism, data parallelism, and task parallelism.
Problems with parallel processing: communication and synchronization between subtasks are difficult to get right and are typically the main obstacles to good performance.
Architectures: Symmetric Shared-Memory Multiprocessing (SMP)
Programs are divided into subtasks (threads) that are distributed among all processors (multithreading).
"Because of the small size of the processors and the significant reduction in the requirements for bus bandwidth achieved by large caches, such symmetric multiprocessors are extremely cost-effective, provided that a sufficient amount of memory bandwidth exists".
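The shared-memory multithreading described above can be sketched with Python's `threading` module. This is an illustrative example, with names of my own choosing; it also shows why synchronization is needed when threads share one memory space.

```python
# Sketch of SMP-style multithreading: several threads share one memory
# space (the variable `counter`), so access to it must be synchronized.
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:  # without the lock, concurrent updates would interfere
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: all four threads updated the same shared variable
```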
Massively Parallel Processing (MPP)
Individual memory for each processor
Messaging interface for communication
200+ processors can work on the same application.
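The MPP model above can be sketched with Python processes, since each OS process has its own private memory and must communicate through a messaging interface (here, a `multiprocessing.Queue`). The worker function and its rank-squaring payload are illustrative inventions.

```python
# Sketch of the MPP model: each process has individual (private) memory
# and communicates only by passing messages through a queue.
from multiprocessing import Process, Queue

def worker(rank, q):
    # No shared variables: the result travels back as a message.
    q.put((rank, rank * rank))

if __name__ == "__main__":
    q = Queue()
    procs = [Process(target=worker, args=(r, q)) for r in range(4)]
    for p in procs:
        p.start()
    # Collect one message per process (arrival order is not guaranteed).
    results = dict(q.get() for _ in procs)
    for p in procs:
        p.join()
    print(results)  # each of the 4 ranks reports its square
```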
Classes of parallel computers
Parallel computers can be classified roughly into classes according to the level at which the hardware supports parallelism. This is roughly analogous to the distance between basic computing nodes. Note that these classifications are not mutually exclusive.
Mainstream parallel programming languages remain either explicitly parallel or (at best) partially implicit, with the programmer giving the compiler directives for parallelization. A few fully implicit parallel programming languages exist, such as SISAL, Parallel Haskell, and (for FPGAs) Mitrion-C, but these are niche languages that are not widely used.
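Explicit parallelism, as described above, means the programmer rather than the compiler spells out what runs in parallel. A minimal sketch using Python's standard `concurrent.futures` API (the `cube` function and worker count are illustrative):

```python
# Sketch of explicit parallelism: the programmer explicitly declares
# which calls are independent and may run in parallel.
from concurrent.futures import ProcessPoolExecutor

def cube(x):
    return x ** 3

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as ex:
        # map() hands independent calls to a pool of worker processes.
        print(list(ex.map(cube, [1, 2, 3])))  # [1, 8, 27]
```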
What Are They Used For?
Climate prediction & Weather forecasting.
Advantages and disadvantages:
Advantages: well suited to heavy mathematical calculations and to analyzing large amounts of data.
Main disadvantage: the time delay introduced by communication and synchronization between processors.
A future task is to develop an FNPP-based computing system whose components include an FNPP LSI serving as part of main memory. A number of concurrent programming languages, libraries, APIs, and parallel programming models have been created for programming parallel computers. With new innovations appearing every day, parallel processing has remained in constant use and shows no sign of losing its relevance.