High performance computing, or HPC, is an architecture in which many large computers work in parallel to solve very complex problems. Distributed computing, or distributed processing, is a way of pooling resources from machines located throughout a network. Combining grid computing concepts with supercomputer-class processing, HPC is most often used in scientific and engineering applications.
The computers used in an HPC system typically have multi-core CPUs or special processors, such as graphics processing units (GPUs), designed for high-speed computational or graphical work. By distributing tasks across multiple machines, no single supercomputer is needed to do the work; instead, a network of nodes is used to distribute the problem to be solved.
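As a rough illustration of this divide-and-distribute pattern, the sketch below splits one job across local CPU cores using Python's standard multiprocessing module; a real HPC cluster applies the same pattern across many machines. The prime-counting task and the chunk sizes are invented for the example.

```python
# A minimal sketch: splitting an embarrassingly parallel task across
# local CPU cores. On a cluster, each chunk would go to a separate node.
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately simple)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Divide the range 0..200,000 into 8 equal chunks, one per worker.
    chunks = [(i * 25_000, (i + 1) * 25_000) for i in range(8)]
    with Pool() as pool:
        partial_counts = pool.map(count_primes, chunks)  # runs in parallel
    print(sum(partial_counts))  # combine partial results into one answer
```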
To take advantage of this architecture, applications must be designed (or redesigned) to run on it. Programs have to be divided into discrete units of work, such as threads or processes. As these pieces perform their specific functions, a message-passing system is used to communicate between all of them. Eventually a coordinating process collects the partial results and puts the pieces together into the final answer to the problem posed.
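The sketch below shows this message-passing style using mpi4py, a common Python binding for MPI (it assumes an MPI installation and would be launched with something like `mpiexec -n 4 python script.py`). Each process computes its own slice of an approximation of pi, and the message layer gathers the partial sums onto one coordinating rank.

```python
# A minimal message-passing sketch with mpi4py: every rank works on its
# own slice of the problem, then rank 0 assembles the final result.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the job
size = comm.Get_size()   # total number of processes in the job

# Each rank integrates its share of f(x) = 4 / (1 + x^2) over [0, 1];
# the slices together approximate pi.
n = 1_000_000
local_sum = 0.0
for i in range(rank, n, size):          # interleaved work assignment
    x = (i + 0.5) / n
    local_sum += 4.0 / (1.0 + x * x) / n

# The message system sums every rank's partial result onto rank 0.
pi = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"pi is approximately {pi}")
```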
High performance computing also generates massive amounts of data. Standard file systems cannot manage this volume or deliver the access times the programs require, so HPC systems need parallel file systems that can expand as needed and move large amounts of data around quickly.
While this is an expensive and complicated architecture, HPC is becoming available to other areas, including business. Cloud computing and virtualization are two technologies that can readily adopt high performance distributed computing. As the price of multi-core processors falls and scalable file systems become available to the average user, HPC will make its way into mainstream computing.