Mainframe computer
For other uses, see Mainframe.
An IBM 704 mainframe
Mainframes (often colloquially referred to as Big Iron[1]) are computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, enterprise resource planning, and financial transaction processing.
The term probably originated with the early mainframes, which were housed in enormous, room-sized metal boxes or frames.[2] Later the term was used to distinguish high-end commercial machines from less powerful units.
Today in practice, the term usually refers to computers compatible with the IBM System/360 line, first announced in 1964. (The IBM System z10 is the latest incarnation.) Otherwise, large systems that are not based on the System/360 but are used for similar tasks are usually referred to as servers or even supercomputers. However, "server", "supercomputer" and "mainframe" are not synonymous (see client-server).
Some non-System/360-compatible systems derived from or compatible with older (pre-Web) server technology may also be considered mainframes. These include the Burroughs large systems, the UNIVAC 1100/2200 series systems, and the pre-System/360 IBM 700/7000 series. Most large-scale computer system architectures were firmly established in the 1960s, and most large computers were based on architectures from that era until the advent of Web servers in the 1990s. (Notably, the first Web server running outside Switzerland ran on an IBM mainframe at Stanford University in 1991. See History of the World Wide Web for details.)
Several minicomputer operating systems and architectures arose in the 1970s and 1980s, but minicomputers are generally not considered mainframes. (Unix arose as a minicomputer operating system and has scaled up over the years to acquire some mainframe characteristics.)
Many defining characteristics of "mainframe" were established in the 1960s, but those characteristics continue to expand and evolve to the present day.
Modern mainframe computers have abilities not so much defined by their single-task computational speed (usually measured in MIPS, millions of instructions per second) as by their redundant internal engineering and resulting high reliability and security, extensive input-output facilities, strict backward compatibility with older software, and high utilization rates to support massive throughput. These machines often run for years without interruption, with repairs and hardware upgrades taking place during normal operation.
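As a rough illustration of the MIPS metric mentioned above, the following Python sketch computes a MIPS figure from a hypothetical instruction count and elapsed time; the figures are invented for demonstration and do not describe any particular machine.

    # Illustrative only: MIPS = instructions executed / (elapsed seconds * 1,000,000).
    # The numbers below are invented, not measurements of real hardware.

    def mips(instructions_executed: int, elapsed_seconds: float) -> float:
        """Return millions of instructions per second."""
        return instructions_executed / (elapsed_seconds * 1_000_000)

    # e.g. a hypothetical batch job that retires 9.3 trillion instructions in one hour:
    print(f"{mips(9_300_000_000_000, 3600):,.0f} MIPS")  # roughly 2,583 MIPS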
Software upgrades are non-disruptive only when Parallel Sysplex is in place with true workload sharing, so that one system can take over another's applications while it is being refreshed. As of 2007, several IBM mainframe installations had delivered over a decade of continuous business service, with hardware upgrades not interrupting that service.[citation needed] Mainframes are defined by high availability, one of the main reasons for their longevity, because they are typically used in applications where downtime would be costly or catastrophic. Reliability, availability, and serviceability (RAS) is a defining characteristic of mainframe computers, although proper planning and implementation are required to exploit these features.
In the 1960s, most mainframes had no interactive interface. They accepted sets of punch cards, paper tape, and/or magnetic tape and operated solely in batch mode to support back office functions, such as customer billing. Teletype devices were also common, at least for system operators. By the early 1970s, many mainframes acquired interactive user interfaces and operated as timesharing computers, supporting hundreds or thousands of users simultaneously along with batch processing. Users gained access through specialized terminals or, later, from personal computers equipped with terminal emulation software. Many mainframes supported graphical terminals (and terminal emulation) by the 1980s (if not earlier). Nowadays most mainframes have partially or entirely phased out classic terminal access for end-users in favor of Web user interfaces. Developers and operational staff typically continue to use terminals or terminal emulators.[citation needed]
Historically, mainframes acquired their name in part because of their substantial size, and because of requirements for specialized heating, ventilation, and air conditioning (HVAC), and electrical power. Those requirements ended by the mid-1990s with CMOS mainframe designs replacing the older bipolar technology. In a major reversal, IBM now touts its newer mainframes' ability to reduce data center energy costs for power and cooling, and the reduced physical space requirements compared to server farms.[3]
Characteristics
Nearly all mainframes have the ability to run (or host) multiple operating systems, and thereby operate not as a single computer but as a number of virtual machines. In this role, a single mainframe can replace dozens or even hundreds of smaller servers. While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not to the same degree or level of sophistication.
Mainframes can add or hot-swap system capacity non-disruptively and granularly, again to a level of sophistication not found on most servers. Modern mainframes, notably the IBM zSeries, System z9, and System z10 servers, offer three levels of virtualization: logical partitions (LPARs, via the PR/SM facility), virtual machines (via the z/VM operating system), and operating-system-level facilities (notably z/OS with its key-protected address spaces and sophisticated goal-oriented workload scheduling,[clarification needed] but also Linux, OpenSolaris, and Java). This virtualization is so thorough, so well established, and so reliable that most IBM mainframe customers run no more than two machines[citation needed]: one in their primary data center, and one in their backup data center (fully active, partially active, or on standby) in case a catastrophe affects the first building. All test, development, training, and production workloads for all applications and all databases can run on a single machine, except for extremely large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages.
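The nesting of the three virtualization layers described above can be sketched as plain data structures, as in the minimal Python example below. All machine, partition, and workload names are invented for illustration, and the sketch does not reflect any real PR/SM or z/VM configuration syntax.

    # Illustrative nesting of mainframe virtualization layers; all names are invented.
    machine = {
        "name": "MAINFRAME-A",           # hypothetical physical machine
        "lpars": [                       # level 1: logical partitions via PR/SM
            {
                "name": "LPAR1",
                "hypervisor": "z/VM",    # level 2: virtual machines under z/VM
                "guests": [
                    {"os": "z/OS",  "workloads": ["online transactions", "batch"]},  # level 3: OS-managed workloads
                    {"os": "Linux", "workloads": ["web front end"]},
                ],
            },
            {"name": "LPAR2", "hypervisor": None,
             "guests": [{"os": "z/OS", "workloads": ["test", "development"]}]},
        ],
    }

    def count_guest_systems(m):
        """Count guest operating-system images hosted on one physical machine."""
        return sum(len(lpar["guests"]) for lpar in m["lpars"])

    print(count_guest_systems(machine))  # 3 guest OS images on a single machine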
Mainframes are designed to handle very high volume input and output (I/O) and emphasize throughput computing. Since the mid-1960s, mainframe designs have included several subsidiary computers (called channels or peripheral processors) which manage the I/O devices, leaving the CPU free to deal only with high-speed memory. It is common in mainframe shops to deal with massive databases and files. Giga-record or tera-record files are not unusual.[4] Compared to a typical PC, mainframes commonly have hundreds to thousands of times as much data storage online, and can access it much faster.[citation needed] While some other server families also offload certain I/O processing and emphasize throughput computing, they do not do so to the same degree or level of sophistication.
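The division of labor between the central processor and the I/O channels can be sketched in ordinary Python as a worker thread that services device requests while the main loop keeps computing. This is only an analogy for the channel concept, with an artificial delay standing in for device latency; it is not an emulation of real channel programs.

    # Analogy only: a "channel" worker services I/O requests so the "CPU" loop never blocks on devices.
    import queue
    import threading
    import time

    io_requests = queue.Queue()

    def channel_worker():
        """Stands in for a channel/peripheral processor: handles slow device I/O."""
        while True:
            record = io_requests.get()
            if record is None:          # shutdown signal
                break
            time.sleep(0.01)            # pretend device latency
            io_requests.task_done()

    channel = threading.Thread(target=channel_worker, daemon=True)
    channel.start()

    total = 0
    for record_id in range(1000):       # the "CPU" works only with in-memory data
        io_requests.put(record_id)      # hand the I/O off to the channel...
        total += record_id              # ...and keep computing immediately

    io_requests.join()
    io_requests.put(None)
    print("processed", total)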
Mainframe return on investment (ROI), like that of any other computing platform, depends on the platform's ability to scale, support mixed workloads, reduce labor costs, deliver uninterrupted service for critical business applications, and several other risk-adjusted cost factors. Some argue that the modern mainframe is not cost-effective; Hewlett-Packard and Dell unsurprisingly take that view at least at times, as do some independent analysts. Sun Microsystems also takes that view, although beginning in 2007 it promoted a partnership with IBM that largely focused on IBM support for Solaris on its System x and BladeCenter products (and was therefore unrelated to mainframes) but also included positive comments about porting the company's OpenSolaris operating system to IBM mainframes as part of growing the Solaris community. Other analysts (such as Gartner[citation needed]) claim that the modern mainframe often has unique value and superior cost-effectiveness, especially for large-scale enterprise computing. In fact, Hewlett-Packard continues to manufacture what is arguably its own mainframe, the NonStop system originally created by Tandem. Logical partitioning is now found in many UNIX-based servers, and many vendors are promoting virtualization technologies, in many ways validating the mainframe's design accomplishments while blurring the differences between the various approaches to enterprise computing.
Mainframes also have execution-integrity characteristics for fault-tolerant computing. For example, z900, z990, System z9, and System z10 servers effectively execute result-oriented instructions twice, compare the results, arbitrate any differences (through instruction retry and failure isolation), and then shift workloads "in flight" to functioning processors, including spares, without any impact on operating systems, applications, or users. This hardware-level feature, also found in HP's NonStop systems, is known as lock-stepping, because both processors take their "steps" (i.e., instructions) together. Not all applications absolutely need the assured integrity that these systems provide, but many do, such as financial transaction processing.
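The lock-stepping idea can be sketched as: run the same operation on two units, compare the results, retry on a mismatch, and swap in a spare for a unit that keeps disagreeing. The Python below is a conceptual illustration under those assumptions, not a description of the actual z-series instruction-retry microarchitecture; the "processor" functions are invented stand-ins.

    # Conceptual sketch of lock-stepped execution with retry and spare takeover; not real firmware logic.
    import random

    def execute_lockstepped(operation, units, spare, retries=1):
        """Run `operation` on two units, compare results, retry, then fail over to a spare."""
        for attempt in range(retries + 1):
            a = units[0](operation)
            b = units[1](operation)
            if a == b:
                return a                  # both units agree: result is trusted
        # Persistent disagreement: assume (for this sketch) the second unit is faulty
        # and swap in the spare "in flight", then re-run on the healthy unit.
        units[1] = spare
        return units[0](operation)

    # Hypothetical processor models: one healthy, one that sometimes flips a bit.
    healthy = lambda op: op()
    flaky   = lambda op: op() ^ (1 if random.random() < 0.5 else 0)

    result = execute_lockstepped(lambda: 2 + 2, [healthy, flaky], spare=healthy)
    print(result)  # 4, even if the flaky unit misbehaved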
Market
IBM mainframes dominate the mainframe market at well over 90% market share.[5] Unisys manufactures ClearPath mainframes, based on earlier Sperry and Burroughs product lines. In 2002, Hitachi co-developed the zSeries z800 with IBM to share expenses, but the two companies have not collaborated on subsequent Hitachi models. Hewlett-Packard sells its unique NonStop systems, which it acquired with Tandem Computers and which some analysts classify as mainframes. Groupe Bull's DPS, Fujitsu (formerly Siemens) BS2000, and Fujitsu-ICL VME mainframes are still available in Europe. Fujitsu, Hitachi, and NEC (the "JCMs") still maintain nominal mainframe hardware businesses in their home Japanese market, although they have been slow to introduce new hardware models in recent years.
The amount of vendor investment in mainframe development varies with market share. Unisys, HP, Groupe Bull, Fujitsu, Hitachi, and NEC now rely primarily on commodity Intel CPUs rather than custom processors in order to reduce their development expenses, and they have also cut back their mainframe software development. (However, Unisys still maintains its own CMOS processor design for certain high-end ClearPath models, while contracting chip manufacturing to IBM.) In contrast, IBM continues to pursue a business strategy of mainframe investment and growth. IBM has its own large research and development organization designing new, homegrown CPUs, including mainframe processors such as 2008's 4.4 GHz quad-core z10 mainframe microprocessor. IBM is rapidly expanding its software business, including its mainframe software portfolio, to seek additional revenue and profits.[6][7] IDC and Gartner server market share measurements show IBM System z mainframes continuing their long-running share gains among high-end servers of all types, and IBM continues to report increasing mainframe revenues even while steadily reducing prices.
History
Several manufacturers produced mainframe computers from the late 1950s through the 1970s. The group was first known as "IBM and the Seven Dwarfs": IBM, Burroughs, UNIVAC, NCR, Control Data, Honeywell, General Electric, and RCA. Later, as the group shrank, it was referred to as "IBM and the BUNCH" (Burroughs, UNIVAC, NCR, Control Data, and Honeywell). IBM's dominance grew out of its 700/7000 series and, later, the development of the System/360 mainframes. The latter architecture has continued to evolve into the current zSeries/z9 mainframes which, along with the then-Burroughs and now Unisys MCP-based mainframes, are among the few mainframe architectures still extant that can trace their roots to this early period. That said, while they can still run 24-bit System/360 code, the 64-bit zSeries and System z9 CMOS servers have nothing physically in common with the older systems. Notable manufacturers outside the USA were Siemens and Telefunken in Germany, ICL in the United Kingdom, and Fujitsu, Hitachi, Oki, and NEC in Japan. The Soviet Union and Warsaw Pact countries manufactured close copies of IBM mainframes during the Cold War; the Strela is an example of an independently designed Soviet computer.
Shrinking demand and tough competition caused a shakeout in the market in the early 1970s: RCA sold out to UNIVAC and GE also left the business; Honeywell was later bought out by Bull; UNIVAC became a division of Sperry, which in turn merged with Burroughs to form Unisys Corporation in 1986. In 1991, AT&T briefly owned NCR. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offered local users much greater control over their own systems, given the IT policies and practices of the time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government. In the early 1990s, there was a rough consensus among industry analysts that the mainframe was a dying market as mainframe platforms were increasingly replaced by personal computer networks. InfoWorld's Stewart Alsop famously predicted that the last mainframe would be unplugged in 1996.
That trend started to turn around in the late 1990s as corporations found new uses for their existing mainframes and as the price of data networking collapsed in most parts of the world, encouraging trends toward more centralized computing. The growth of e-business also dramatically increased the number of back-end transactions processed by mainframe software as well as the size and throughput of databases. Batch processing, such as billing, became even more important (and larger) with the growth of e-business, and mainframes are particularly adept at large-scale batch computing. Another factor currently increasing mainframe use is the development of the Linux operating system, which arrived on IBM mainframe systems in 1999 and is typically run in scores or hundreds of virtual machines on a single mainframe. Linux allows users to take advantage of open source software combined with mainframe hardware RAS. Rapid expansion and development in emerging markets, particularly China, is also spurring major mainframe investments to solve exceptionally difficult computing problems, e.g. providing unified, extremely high volume online transaction processing databases for 1 billion consumers across multiple industries (banking, insurance, credit reporting, government services, etc.). All the largest Chinese banks now use IBM mainframes.

In late 2000 IBM introduced 64-bit z/Architecture and reinvigorated its mainframe software organization, developing hundreds of new mainframe software products in subsequent years. IBM also acquired numerous software companies with leadership in specific market segments, such as Cognos, and quickly brought those products to the mainframe. IBM has also been steadily reducing prices, taking advantage of increasing economies of scale and spurring additional demand. IBM's quarterly and annual reports in the 2000s reported increasing mainframe revenues and even faster-increasing mainframe capacity shipments, with only a few brief interruptions prior to new model introductions. According to IDC, IT labor costs continued to rise in the 2000s, putting significant and increasing pressure on corporate budgets and encouraging a shift toward the more labor-efficient centralized computing model, particularly mainframes. (IBM has also focused on labor-saving product improvements.) Ironically, IBM now credibly promotes its mainframes as the most space- and energy-efficient servers, just as many businesses are reaching data center expansion limits.
Differences from supercomputers
The distinction between supercomputers and mainframes is not a hard and fast one: supercomputers are generally used for problems limited by calculation speed, while mainframes are used for problems limited by input/output and reliability, and for solving multiple business problems concurrently (mixed workload).
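The calculation-limited versus I/O-limited distinction can be made concrete with two toy workloads: one whose run time is dominated by arithmetic and one dominated by waiting on data transfers. The Python sketch below is purely illustrative and uses an artificial delay rather than real devices or databases.

    # Toy contrast between a calculation-limited and an I/O-limited workload; delays are artificial.
    import time

    def compute_bound(n=2_000_000):
        """Supercomputer-style workload: run time scales with the arithmetic performed."""
        total = 0.0
        for i in range(1, n):
            total += i ** 0.5
        return total

    def io_bound(transactions=200, device_latency=0.001):
        """Mainframe-style workload: run time is dominated by waiting on storage or network."""
        processed = 0
        for _ in range(transactions):
            time.sleep(device_latency)   # stand-in for a disk or network round trip
            processed += 1               # the computation per transaction is trivial
        return processed

    start = time.perf_counter()
    compute_bound()
    print("compute-bound:", round(time.perf_counter() - start, 3), "s")

    start = time.perf_counter()
    io_bound()
    print("I/O-bound:", round(time.perf_counter() - start, 3), "s")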
There has been some blurring of the term "mainframe," with some PC and server vendors referring to their systems as "mainframes" or "mainframe-like." This is not widely accepted and the market generally recognizes that mainframes are genuinely and demonstrably different.
Statistics
An IBM zSeries 800 (foreground, left).