High performance computing, or HPC, is an architecture in which many large computers perform parallel processing to solve very complex problems. Distributed computing, or distributed processing, is a way of using resources from machines located throughout a network. Combining grid computing concepts with supercomputer-class processing, HPC is most often used in scientific and engineering applications.
The computers used in an HPC system often have multi-core CPUs or specialized processors, such as graphics processing units (GPUs), designed for high-speed computational or graphical work. By distributing tasks across multiple machines, no single supercomputer is needed to do the work; a network of nodes is used to distribute the problem to be solved.
In order to do this, applications must be designed (or redesigned) to run on this architecture. Programs have to be divided into discrete units of work, often referred to as threads or tasks. As the pieces perform their specific functions, a messaging system is used to communicate between all of them. Eventually a coordinating process and message manager puts all of the pieces together to create the final result: the solution to the problem posed.
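The divide-and-communicate pattern just described can be sketched with Python's standard multiprocessing module, a single-machine stand-in for the message-passing layer a real cluster would use (the function names and chunking scheme here are illustrative, not any particular HPC API):

```python
# Sketch: split a problem into pieces, run them in parallel workers,
# and combine the partial results -- a single-machine analogue of the
# divide / message / combine pattern described above.
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker handles one discrete piece of the problem."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4):
    # Divide the data into one chunk per worker (round-robin slices).
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        # The pool plays the role of the messaging layer: it sends
        # chunks to workers and collects their partial results.
        partials = pool.map(partial_sum, chunks)
    # A final coordinating step assembles the pieces into the answer.
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1000))
    print(parallel_sum_of_squares(data))
```

On a real cluster the same shape appears at a larger scale, with a library such as MPI carrying the messages between nodes instead of a process pool on one machine.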
High performance computing generates massive amounts of data. Standard file architectures cannot manage this volume or deliver the access times the programs require. HPC systems need file systems that can expand as needed and move large amounts of data around quickly.
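One core idea behind such parallel file systems is striping: a large block of data is cut into fixed-size pieces spread across several storage targets, so they can be read and written in parallel. A minimal sketch of the idea (the names stripe and reassemble are invented for illustration, not a real file-system API):

```python
# Sketch of data striping, the core idea behind parallel file systems:
# data is distributed round-robin across storage targets so that
# reads and writes can proceed on all targets in parallel.

def stripe(data: bytes, stripe_size: int, n_targets: int):
    """Distribute data round-robin across n_targets storage targets."""
    targets = [bytearray() for _ in range(n_targets)]
    for i in range(0, len(data), stripe_size):
        targets[(i // stripe_size) % n_targets] += data[i:i + stripe_size]
    return [bytes(t) for t in targets]

def reassemble(targets, stripe_size: int):
    """Interleave the stripes back into the original byte stream."""
    out = bytearray()
    cursors = [0] * len(targets)
    t = 0
    while any(cursors[i] < len(targets[i]) for i in range(len(targets))):
        out += targets[t][cursors[t]:cursors[t] + stripe_size]
        cursors[t] += stripe_size
        t = (t + 1) % len(targets)
    return bytes(out)
```

Real parallel file systems add metadata servers, locking, and fault tolerance on top of this basic layout, but the capacity and bandwidth scaling both come from the same trick: more targets means more parallel I/O paths.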
While this is an expensive and complicated architecture, HPC is becoming available for other areas, including business. Cloud computing and virtualization are two technologies that can easily adopt high performance distributed computing. As the price of multi-core processors goes down and dynamic file systems become available for the average user, HPC will make its way into mainstream computing.
The International Journal of High Performance Computing Applications was created in 1987.
Ganglia is a distributed monitoring system for high performance computing systems such as clusters and grids.
SGI refers to a manufacturer of high performance computing solutions that include both computer software and hardware.
PlanetHPC's motto is 'Setting the R&D Roadmap for High Performance Computing in Europe'.
A content distribution network (CDN) is used to serve content to users with high availability and performance. You can learn more about this on Wikipedia.
Ralf Gruber has written: 'HPC@green IT' -- subject(s): High performance computing, Green IT, Environmental aspects, Grid computing, Resource management, Information technology, Energy efficiency
J. V. Ashby has written: 'Data management tools for high performance computing applications'
The main goal of distributed computing is high availability, provided by fault-tolerant computing systems and networks. Because these may be geographically distributed, they overcome any single point of failure in mission-critical applications.
What is High Performance Computing?
High performance computing -- or HPC -- is the practical application of the mighty "supercomputer" and has been steadily developed since the 1960s to tackle complex and large-scale computations.

The Components Involved in High Performance Computing Technology
A standard desktop computer usually contains a single processor, leaving it stranded in the wake of a high performance computing system: a self-contained network with an entire system of nodes that communicate with one another to calculate a fast and accurate response to a set of intricate algorithms or problems. Broken down into smaller programs or threads, these parts of a high performance computer work together to produce the end result of the entire system. A high performance computing system stores data in its own dedicated threads of work, which can grow to fulfill the space requirements of any given project within the system. The spatial constraints of a single computer are not an issue for a supercomputer, which is continually monitored and adapted to the needs placed upon it.

Grid Computing and Computer Clusters as HPC Systems
Grid computing, or decentralized computing, allows for systems that are intertwined yet at vastly different locations, potentially spread around the globe and even into space or deep under the ocean. A computer cluster, or centralized computing, describes a supercomputer environment in which the computer terminals are in close proximity to one another.

Who Uses This Technology?
HPC is ideal for academic institutions, engineering environments, scientific researchers, and certain government entities, such as the military.
Some specific disciplines wherein HPC is used include biochemistry, physics, and environmental modeling. This powerful type of computing may also be adopted in private business environments where advanced technology is needed and the corporation can afford it.
http://content.dell.com/us/en/enterprise/hpcc
Most simply:
- high performance
- cost effectiveness
- flexibility
- scalability
- efficiency
For more details and case studies I recommend www.Gridipedia.eu
I suggest you start by reading a good site on grid computing. Personally I recommend Gridipedia. As well as a good introduction, you can read up on the business case behind it and see it in action in their case study library: www.gridipedia.com
A mainframe is a high performance computer used for large-scale computing purposes that require greater availability and security than a smaller-scale machine can offer.