Monday, October 21, 2019

Supercomputers Essays

A supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation. Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, paralleling the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s supercomputer market crash. Today, supercomputers are typically one-of-a-kind custom designs produced by traditional companies such as Cray, IBM and Hewlett-Packard, which purchased many of the 1980s companies to gain their experience. As of May 2010, the Cray Jaguar was the fastest supercomputer in the world.

The term supercomputer itself is rather fluid: today's supercomputer tends to become tomorrow's ordinary computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. In the early and mid-1980s, machines with a modest number of vector processors working in parallel became the standard, with typical processor counts in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of ordinary CPUs, some being off-the-shelf units and others being custom designs.
Today, parallel designs are based on off-the-shelf server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and coprocessors like NVIDIA Tesla GPGPUs, AMD GPUs, IBM Cell, and FPGAs. Most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects. Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion). A particular class of problems, known as Grand Challenge problems, are problems whose full solution requires effectively unbounded computing resources.

IBM SUPERCOMPUTERS

According to the June 2010 TOP500 List of Supercomputers (link resides outside of ibm.com), IBM continues to lead the list for the twenty-second consecutive time with the most installed aggregate performance. IBM has regained the lead with the most entries on the list, with 196. IBM also leads the Top 10 for the twelfth consecutive time with a total of four systems, including the #3 system, the first system to sustain a petaflop of performance, which IBM built for the Roadrunner project at the Los Alamos National Lab. Other systems in the Top 10 were the #5 IBM Blue Gene/P system at Forschungszentrum Juelich, with over 825 teraflops the most powerful supercomputer in Europe; the IBM Blue Gene/L at the US Department of Energy's Lawrence Livermore National Lab, long-time previous leader, now the #8 system; and the #9 IBM Blue Gene/P at Argonne National Lab.
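The teraflop and petaflop figures above count floating-point operations per second (FLOPS). As a toy illustration of the idea, and emphatically not how the TOP500's Linpack benchmark is actually run, a crude single-core FLOPS estimate can be obtained by timing a fixed number of floating-point operations:

```python
import time

def estimate_flops(n=5_000_000):
    """Crude single-core FLOPS estimate: time n multiply-add steps.

    Illustrative toy only; real rankings use the optimized Linpack
    (HPL) benchmark, which solves a large dense linear system.
    """
    x = 1.0000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc = acc * x + 1.0  # one multiply + one add = 2 flops
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed  # floating-point operations per second

if __name__ == "__main__":
    flops = estimate_flops()
    print(f"~{flops / 1e6:.0f} megaflops (pure-Python loop)")
```

A pure-Python loop is dominated by interpreter overhead, so the result lands in the megaflop range on a modern machine, which makes the petaflop figures quoted above feel appropriately enormous.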
IBM Offers:

Unmatched expertise to help you solve your toughest problems, due to our technology leadership in clustering, chip technology, Linux and support for a broad range of applications for your solution area. Improved flexibility through a full-scope infrastructure worldwide, based on our vast portfolio spanning the widest range of platforms, architectures and operating systems. Faster implementations by applying our in-depth vertical expertise and broad range of deployment experience.

Results that speak for themselves: IBM is the supercomputing leader as provider of 27 of the world's 100 most powerful supercomputers according to the most recent TOP500 Supercomputing Sites ranking (link resides outside of ibm.com). Awarded more patents than any other company for each of the last 17 years, with 4,914 patents in 2009 alone. First to break the petaflop performance barrier. Leading energy efficiency with 17 of the 20 highest megaflops-per-watt systems.

IBM 7030

The IBM 7030, also known as Stretch, was IBM's first transistorized supercomputer. The first one was delivered to Los Alamos in 1961. Originally priced at $13.5 million, its failure to meet its aggressive performance estimates forced the price to be dropped to only $7.78 million, and its withdrawal from sale to customers beyond those who had already negotiated contracts. Even though the 7030 was much slower than expected, it was the fastest computer in the world from 1961 until the first CDC 6600 became operational in 1964.

IBM 7950

The IBM 7950, also known as Harvest, was a one-of-a-kind adjunct to the Stretch computer installed at the US National Security Agency (NSA). Built by IBM, it was delivered in 1962 and operated until 1976, when it was decommissioned. Harvest was designed to be used for cryptanalysis.

ACS-1

The ACS-1 and ACS-360 are two related supercomputers designed by IBM as part of the IBM Advanced Computing Systems project from 1961 to 1969.
Although the designs were never finished and no models ever went into production, the project spawned a number of organizational techniques and architectural innovations that have since been incorporated into nearly all high-performance computers in existence today. Many of the ideas resulting from the project directly influenced the development of the IBM RS/6000 and, more recently, have contributed to the Explicitly Parallel Instruction Computing (EPIC) paradigm used by Intel and HP in high-performance processors.

BLUE GENE

Blue Gene is a computer architecture project designed to produce several supercomputers intended to reach operating speeds in the PFLOPS (petaFLOPS) range, and currently reaching sustained speeds of nearly 500 TFLOPS (teraFLOPS). It is a cooperative project among IBM (particularly IBM Rochester and the Thomas J. Watson Research Center), the Lawrence Livermore National Laboratory, the United States Department of Energy (which is partially funding the project), and academia. There are four Blue Gene projects in development: Blue Gene/L, Blue Gene/C, Blue Gene/P, and Blue Gene/Q. The project was awarded the National Medal of Technology and Innovation by U.S. President Barack Obama on September 18, 2009; the president bestowed the award on October 7, 2009.

BLUE WATERS

Blue Waters is the name of a petascale supercomputer being designed and built as a joint effort between the National Center for Supercomputing Applications, the University of Illinois at Urbana-Champaign, and IBM. On August 8, 2007 the National Science Board approved a resolution authorizing the National Science Foundation to fund the acquisition and deployment of the world's most powerful leadership-class supercomputer. The NSF is awarding $208 million over the next four and a half years for the Blue Waters project.

CYCLOPS64

Cyclops64 (formerly known as Blue Gene/C) is a cellular architecture in development by IBM.
The Cyclops64 project aims to create the first supercomputer on a chip. Cyclops64 exposes much of the underlying hardware to the programmer, allowing the programmer to write very high-performance, finely tuned software. One negative consequence is that efficiently programming Cyclops64 is difficult. The system is expected to support TiNy-Threads (a threading library developed at the University of Delaware) and POSIX Threads.

DEEP BLUE

Deep Blue was a chess-playing computer developed by IBM. On May 11, 1997, the machine won a six-game match by two wins to one, with three draws, against world champion Garry Kasparov. Kasparov accused IBM of cheating and demanded a rematch, but IBM refused and dismantled Deep Blue. Kasparov had beaten a previous version of Deep Blue in 1996.

IBM Kittyhawk

Kittyhawk is a theoretical IBM supercomputer. The announced project entails constructing a global-scale shared supercomputer capable of hosting the entire Internet on one platform as an application; currently the Internet is a collection of interconnected computer networks. In 2010 IBM open-sourced the Linux kernel patches that allow otherwise unmodified Linux distributions to run on Blue Gene/P. This allowed the Kittyhawk system software stack to be run at large scale at Argonne National Lab. The open-source version of Kittyhawk is available on a public website hosted by Boston University.

MAGERIT

Magerit is the supercomputer that reached the second-best Spanish position in the TOP500 list of supercomputers. It is installed in CeSViMa, at the Computer Science Faculty of the Technical University of Madrid. Magerit was installed in 2006, when it ranked 9th fastest in Europe and 34th in the world. It also reached the 275th position in the first Green500 list published. It is the second most powerful supercomputer designated for scientific use in Spain, after the Barcelona Supercomputing Center's MareNostrum.
Magerit is the ancient name of the current city of Madrid. The name comes from a fortress built on the Manzanares River in the 9th century and means "place of abundant water".

IBM Naval Ordnance Research Calculator (NORC)

The IBM Naval Ordnance Research Calculator (NORC) was a one-of-a-kind first-generation (vacuum tube) electronic computer built by IBM for the United States Navy's Bureau of Ordnance. It went into service in December 1954 and was likely the most powerful computer of its time. NORC was built at the Watson Scientific Computing Laboratory under the direction of Wallace Eckert and was presented to the US Navy on December 2, 1954. At the presentation ceremony, it calculated pi to 3,089 digits, a record at the time; the calculation took only 13 minutes. In 1955 NORC was moved to the Naval Proving Ground at Dahlgren, Virginia. It was their main computer until 1958, when more modern computers were acquired, and it continued to be used until 1968. Its design influenced the IBM 701 and subsequent machines in the IBM 700 series of computers.

PERCS

PERCS (Productive, Easy-to-use, Reliable Computing System) is IBM's answer to DARPA's High Productivity Computing Systems (HPCS) initiative. The HPCS program is a three-year research and development effort. IBM was one of three companies, along with Cray and Sun Microsystems, that received the HPCS grant for Phase II. In this phase, IBM collaborated with a consortium of 12 universities and the Los Alamos National Lab to pursue an adaptable computing system, with the goal of commercial viability of new chip technology, new computer architecture, operating systems, compilers and programming environments. IBM was chosen for Phase III in November 2006 and granted $244 million in funds for continuing development of PERCS technology and delivering prototype systems by 2010.
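The pi record mentioned under NORC above is easy to reproduce today. As a sketch (using Machin's formula with Python's standard decimal module; whatever formula NORC itself used is not recorded in this essay), a few thousand digits of pi take well under a second on a modern machine, versus NORC's 13 minutes for 3,089 digits:

```python
from decimal import Decimal, getcontext

def arctan_inv(x, digits):
    """arctan(1/x) for integer x > 1, via its alternating Taylor series."""
    eps = Decimal(10) ** -(digits + 5)
    term = Decimal(1) / x          # k = 0 term
    total = term
    x2 = x * x
    n = 0
    while abs(term) > eps:
        n += 1
        term /= x2                 # magnitude of 1 / x**(2n+1)
        total += term / (2 * n + 1) if n % 2 == 0 else -term / (2 * n + 1)
    return total

def compute_pi(digits):
    """pi to roughly `digits` significant digits using Machin's formula:
    pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = digits + 10   # guard digits against rounding error
    pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    getcontext().prec = digits
    return +pi                        # unary plus rounds to final precision

if __name__ == "__main__":
    print(compute_pi(50))
```

The guard digits keep accumulated rounding error out of the requested precision; the final unary plus re-rounds the result to the target number of significant digits.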
Roadrunner

Roadrunner is a supercomputer built by IBM at the Los Alamos National Laboratory in New Mexico, USA. Currently the world's third fastest computer, the US$133 million Roadrunner is designed for a peak performance of 1.7 petaflops. It achieved 1.026 petaflops on May 25, 2008, becoming the world's first TOP500 system to sustain 1.0 petaflops on the Linpack benchmark. It is a one-of-a-kind supercomputer, built from off-the-shelf parts, with many novel design features. In November 2008 it reached a top performance of 1.456 petaflops, retaining its top spot in the TOP500 list. It is also the fourth most energy-efficient supercomputer in the world on the Supermicro Green500 list, with an operational rate of 444.4 megaflops per watt of power used.

IBM Sequoia

Sequoia is a petascale Blue Gene/Q supercomputer being constructed by IBM for the National Nuclear Security Administration as part of the Advanced Simulation and Computing Program (ASC). It is scheduled to go online in 2011. Its target performance of 20 petaflops is more than the combined performance of the top 500 supercomputers in the world, and about 20 times faster than the then-reigning champion Roadrunner. It will also be twice as fast as Pleiades, a proposed supercomputer built by SGI at the NASA Ames Research Center.

SHAHEEN

Shaheen consists primarily of a 16-rack IBM Blue Gene/P supercomputer owned and operated by King Abdullah University of Science and Technology (KAUST). Built in partnership with IBM, Shaheen is intended to enable KAUST faculty and partners to research both large- and small-scale projects, from inception to realization. Shaheen, named after the peregrine falcon, is the largest and most powerful supercomputer in the Middle East and is intended to grow into a petascale facility by the year 2011. Originally built at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York, Shaheen was moved to KAUST in mid-2009.
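The Roadrunner figures quoted above allow a quick back-of-envelope check: since Green500 efficiency ratings are based on Linpack performance, dividing the sustained Linpack figure by the megaflops-per-watt rating gives the approximate power the machine drew during the benchmark (an estimate from the numbers in this essay, not an official measurement):

```python
# Figures quoted above for Roadrunner
linpack_flops = 1.026e15   # sustained Linpack performance, in flops
efficiency = 444.4e6       # Green500 rating, in flops per watt

# power = performance / efficiency
power_watts = linpack_flops / efficiency
print(f"Implied power draw: {power_watts / 1e6:.2f} megawatts")
```

This works out to roughly 2.3 megawatts, the scale at which "energy-efficient supercomputing" becomes a meaningful distinction.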
The father of Shaheen is Majid Al-Ghaslan, KAUST's founding interim chief information officer and the University's leader in the acquisition, design, and development of the Shaheen supercomputer. Majid was part of the executive founding team for the University and also named the machine.

Supercomputers

Submitted by: Hina Maheshwari, BBM 3rd year, 087518

Uses of Supercomputers

Since its creation in the 1960s, the supercomputer has been used by a variety of large companies and colleges to conduct research that otherwise would not be possible. Because supercomputers can crunch numbers at a far superior rate to humans, as well as work in a multidimensional way, the devices are essential to modern studies and research.

1) Quantum Mechanics: Supercomputers are used heavily in the processing of information on quantum mechanics. They are used to study physical systems at the atomic level.

2) Weather: Large-scale weather forecasting, such as that of global climate change, needs supercomputers in order to take globally changing conditions into account.

3) Modeling: Intensive modeling is conducted using supercomputers. This is useful for molecular studies, polymer research, chemical composition and simulations such as wind tunnel research.

4) Military: Military applications are very elaborate. From organizing war games to studying the effects of nuclear detonations on a large scale, many militaries across the planet use supercomputers.

The role and importance of supercomputers is hidden from no one, yet their use has been limited to only a handful of nations in the world with this expertise. In India, the saga of the supercomputer dates back to the 1980s, when India was denied the Cray supercomputer. Since then India has made several indigenous efforts, which have been highly successful.
The use of supercomputers in the military is an altogether new concept that has unleashed a new era of military supercomputing. With India signing the nuclear deal, it has become an urgent requirement to devise means to test and simulate its weapons, and one such application is nuclear testing. As readers might know, Indian supercomputing efforts are centred around C-DAC, the Centre for Development of Advanced Computing in Pune, which has developed the PARAM series of supercomputers. India has achieved the capability to simulate a nuclear detonation and improve its weapon parameters without the fear of sanctions.

5) Grand Challenge: Unsolved problems (known as Grand Challenge problems) are frequently the subject of supercomputer use. Examples include mathematical problems and protein-folding techniques.
