Supercomputer. History of invention and production


A supercomputer is a computer that significantly exceeds most existing computers in its technical parameters. As a rule, a modern supercomputer is a large number of high-performance server computers connected by a high-speed local interconnect, so that a computational task can be parallelized across them for maximum performance.

In 1996, Doron Swade, curator of the Museum of Computing in the UK, wrote an article with a sensational title: "The Russian series of BESM supercomputers, developed more than 40 years ago, may give the lie to the United States' claims of technological superiority during the Cold War years."

Indeed, the mid-1960s were a high point in the history of Soviet computer technology. Many creative teams were working in the USSR at the time - the institutes of S.A. Lebedev, I.S. Bruk, V.M. Glushkov and others - and many different types of machines were produced, most often incompatible with one another, for the most diverse purposes.

The BESM-6, designed in 1965 and first produced in 1967, was an original Soviet computer, conceived on a par with its Western counterparts. Then there was the famous Elbrus, and a further development of the BESM line (Elbrus-B). V.M. Glushkov created a remarkable machine for engineering calculations, the Mir-2 (a forerunner of the personal computer), which still has no Western analogues.

It was the Elbrus team that first developed a superscalar architecture, building the Elbrus-1 machine on it years before the West. The same team implemented the ideas of a multiprocessor computer a couple of years earlier than Cray, the recognized leader in supercomputer production.

Supercomputer Cray-2

Boris Artashesovich Babayan, scientific leader of the Elbrus group, professor and corresponding member of the Russian Academy of Sciences, believes that the group's most significant achievement is the architecture of the Elbrus-3 supermachine: "The logical speed of this machine is much higher than that of all existing ones; that is, on the same hardware this architecture allows a task to be sped up several times over. We were the first to implement hardware support for secure programming, something not even attempted in the West." The Elbrus-3 was built in 1991. "It was already finished at our institute, and we had begun debugging it. Western firms talked so much about the possibility of creating such an architecture... The fabrication technology was dreadful, but the architecture was so perfect that this machine was twice as fast as the fastest American supercomputer of the time, the Cray Y-MP."

The principles of secure programming are now being implemented in the concept of the Java language, and ideas similar to those of Elbrus formed the basis of Merced, the new-generation processor developed by Intel together with HP. "If you look at Merced, it is practically the same architecture as in Elbrus-3. Perhaps some details of Merced differ, and not for the better."

So, despite the general stagnation, it was still possible to build computers and supercomputers. Unfortunately, the same thing happened to Soviet computing that happened to Russian industry in general. Today, however, a new and at first glance exotic parameter is persistently working its way into the set of traditional macroeconomic indicators (such as GDP and gold and foreign-exchange reserves): the total computing capacity a country possesses. Supercomputers will account for the largest share of this indicator. Fifteen years ago these machines were unique monsters; now their production has been put on a serial footing.

"Initially, the computer was created for complex calculations related to nuclear and rocket research," writes Arkady Volovik in Kompaniya magazine. "Few people know that supercomputers helped maintain the ecological balance on the planet: during the Cold War, computers simulated changes in nuclear weapons, and these experiments eventually allowed the superpowers to abandon real testing of atomic weapons. Thus, IBM's powerful multiprocessor computer Blue Pacific is used precisely to simulate nuclear weapons tests. In fact, computer scientists made no small contribution to the success of negotiations to stop nuclear testing. Compaq Computer Corp. is building Europe's largest supercomputer, based on 2500 Alpha processors; the French nuclear energy commission will use it to improve the safety of the French arsenal without further nuclear tests.

No less large-scale calculations are needed in the design of aircraft. Modeling an aircraft's parameters requires enormous power: to compute the flow over an aircraft's surface, the parameters of the air flow must be calculated at every point of the wing and fuselage, for every square centimeter. In other words, an equation has to be solved for each square centimeter, and the surface area of an aircraft is tens of square meters. Whenever the geometry of the surface changes, everything must be recalculated, and these calculations must be made quickly, or the design process drags out. As for astronautics, it began not with flights but with calculations. Supercomputers have a huge field of application here."
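To get a feel for these numbers, here is a back-of-the-envelope sketch in Python; the wetted area, per-cell cost, and iteration count are illustrative assumptions, not figures from the article:

```python
# Rough scale of a surface-resolved aerodynamic calculation.
wetted_area_m2 = 80      # assumed wing + fuselage surface area, m^2
cells = wetted_area_m2 * 10_000    # one cell per cm^2; 1 m^2 = 10,000 cm^2

flops_per_cell = 1_000   # assumed cost of one flow update at one cell
sweeps = 10_000          # assumed iterations until the flow field converges

total_flops = cells * flops_per_cell * sweeps
print(f"{cells:,} surface cells")             # 800,000 cells
print(f"{total_flops:.1e} flops per design")  # 8.0e+12 -- and every geometry
                                              # change repeats the whole run
```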

The Boeing Corporation deployed a supercluster built by Linux NetworX to simulate the behavior of fuel in the Delta IV rocket, which is designed to launch satellites of various kinds. Of the four cluster architectures considered, Boeing chose the Linux NetworX cluster because it offered an acceptable cost of operation while exceeding the Delta IV project's requirements for computing power. The cluster consists of 96 servers based on 850 MHz AMD Athlon processors, interconnected by high-speed Ethernet.

In 2001, IBM installed a 512-processor Linux cluster with a capacity of 478 billion operations per second for the US Department of Defense at the supercomputing center in Hawaii. Besides the Pentagon, the cluster will also be used by other federal departments and scientific institutions - in particular, to predict the speed and direction of the spread of forest fires. The system consists of 256 IBM eServer x330 thin servers, each containing two Pentium III processors, linked by a clustering mechanism developed by Myricom.

However, the scope of supercomputers is not limited to the military-industrial complex. Today, biotechnology companies are major customers of supercomputers.

Supercomputer IBM BlueGene/L

"As part of the Human Genome program," writes Volovik, "IBM received an order to create a computer with several tens of thousands of processors. But decoding the human genome is not the only use of computers in biology: creating new medicines today is possible only with the use of powerful computers. Pharmaceutical giants are therefore forced to invest heavily in computer technology, forming a market for Hewlett-Packard, Sun and Compaq. Not long ago, creating a new drug took 5-7 years and required significant financial outlays. Today, drugs are modeled on powerful computers, which not only 'build' the drugs but also evaluate their effect on humans. American immunologists have created a drug capable of fighting 160 viruses; it was modeled on a computer in six months. Any other way of creating it would have required several years of work."

And at Los Alamos National Laboratory, the worldwide AIDS epidemic was "rolled back" to its source: data on variants of the AIDS virus were fed into a supercomputer, which made it possible to date the appearance of the very first virus to 1930.

In the mid-1990s, another major market for supercomputers emerged. This market is directly related to the development of the Internet. The amount of information on the Web has reached unprecedented proportions and continues to grow. Moreover, information on the Internet is growing non-linearly. Along with the increase in the volume of data, the form of their presentation is also changing - music, video, and animation have been added to the text and drawings. As a result, two problems arose - where to store the ever-increasing amount of data and how to reduce the time it takes to find the right information.

Supercomputers are also used wherever large volumes of data must be processed - in banking, logistics, tourism and transport, for example. The US Department of Energy recently awarded Compaq a $200 million supercomputer contract.

Hironobu Sakaguchi, president of the game company Square, says: "Today we are preparing a movie based on our games. Square 'calculates' one frame of the movie in 5 hours; on the GCube, the same operation takes 1/30 of a second." The production of media thus reaches a new level: the time spent on the product shrinks, and the cost of a film or game drops significantly.

Intense competition forces vendors to lower supercomputer prices. One way to keep the price down is to build machines from many standard processors, a solution hit upon by several players in the large-computer market at once. As a result, relatively inexpensive serial servers appeared on the market, to the satisfaction of buyers.

Indeed, it is easier to divide a cumbersome calculation into small parts and entrust each part to a separate, inexpensive, mass-produced processor. For example, Intel's ASCI Red, which until recently occupied the first line of the TOP500 table of the world's fastest computers, consists of 9632 ordinary Pentium processors. Another important advantage of this architecture is its scalability: simply adding processors increases the system's performance. True, with some reservations: first, as the number of computing nodes grows, performance rises not in direct proportion but somewhat more slowly, since part of the time is inevitably spent organizing the processors' interaction with one another; second, the complexity of the software increases significantly. But these problems are being solved successfully, and the idea of parallel computing itself has been developing for more than a decade.
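The sub-linear scaling just mentioned is commonly captured by Amdahl's law: if a fraction s of the work is inherently serial (coordination, communication), then p processors give a speedup of at most 1/(s + (1 - s)/p). A minimal sketch in Python, with an assumed serial fraction:

```python
def amdahl_speedup(p: int, serial_fraction: float) -> float:
    """Upper bound on speedup with p processors when a fixed
    fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

# A 5% serial fraction is an illustrative assumption, not a measured value.
for p in (1, 16, 256, 9632):   # 9632 = ASCI Red's processor count
    print(f"{p:>5} processors -> speedup {amdahl_speedup(p, 0.05):6.1f}x")

# The speedup saturates near 1/0.05 = 20x no matter how many processors
# are added: coordination overhead, not processor count, sets the limit.
```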

"At the beginning of the nineties a new idea arose," writes Yuri Revich in Izvestia, "which came to be called meta-computing, or 'distributed computing'. With this organization of the process, the individual computing nodes are no longer structurally combined into one common body but are independent machines in their own right. Initially the idea was to combine computers of different levels into a single computing complex: preliminary data processing could be performed on a user workstation, basic modeling on a vector-pipeline supercomputer, the solution of large systems of linear equations on a massively parallel system, and visualization of the results on a special graphics station.

The individual stations, connected by high-speed communication channels, can also be of equal rank: this is exactly how IBM's ASCI White supercomputer, which has now taken the first line of the TOP500, is built - it consists of 512 separate RS/6000 servers (the same family as the machine that defeated Kasparov). But the idea of 'distribution' acquired its real scope with the spread of the Internet. Although the communication channels between individual nodes in such a network can hardly be called high-speed, the number of nodes can be recruited almost without limit: any computer in any part of the world can be enlisted to work on a task set at the opposite end of the globe."
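A toy illustration of this divide-and-distribute idea, using Python's multiprocessing module in place of real networked nodes (the task, a midpoint-rule estimate of pi, and the chunk sizes are assumptions made for the example):

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """One 'node' integrates its slice of 4/(1+x^2) on [0, 1] with a
    midpoint rule; the slices are completely independent of each other."""
    start, stop, h = bounds
    return sum(4.0 * h / (1.0 + ((i + 0.5) * h) ** 2)
               for i in range(start, stop))

if __name__ == "__main__":
    n, workers = 8_000_000, 8        # assumed problem size and node count
    h = 1.0 / n
    step = n // workers
    chunks = [(k * step, (k + 1) * step, h) for k in range(workers)]
    with Pool(workers) as pool:      # each chunk could just as well run
        pi_estimate = sum(pool.map(partial_sum, chunks))  # on another machine
    print(pi_estimate)               # ~3.14159265...
```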

The general public first started talking about "distributed computing" in connection with the phenomenal success of SETI@Home, the search for extraterrestrial civilizations. The 1.5 million volunteers who spend their own money on overnight electricity for the noble cause of seeking contact with aliens provide a computing power of 8 Tflops, only slightly behind the record holder: the ASCI White supercomputer mentioned above develops a "speed" of 12 Tflops. In the words of project director David Anderson, "a single supercomputer equal in power to our project would cost $100 million, and we created it out of almost nothing."

Colin Percival, a young Canadian mathematics student, gave an effective demonstration of the possibilities of distributed computing. Over 2.5 years, with the help of 1742 volunteers from fifty countries, he set three records at once in a rather specific pursuit: determining individual digits of the number pi at record-distant positions. He had earlier computed the five-trillionth and forty-trillionth binary digits, and most recently determined the digit at the quadrillionth position.
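Digit-at-a-position records of this kind rest on formulas of the Bailey-Borwein-Plouffe (BBP) type, which let each position be computed independently of all the preceding ones - exactly what makes the work easy to distribute. A minimal sketch of classic BBP hex-digit extraction in Python (illustrative only; Percival's PiHex project used a related Bellard-type series, not this code):

```python
def pi_hex_digit(n: int) -> int:
    """Hexadecimal digit of pi at position n after the point (0-indexed),
    computed without knowing any of the earlier digits."""
    def series(j):
        # fractional part of sum over k of 16^(n-k) / (8k + j)
        s = 0.0
        for k in range(n + 1):           # 16^(n-k) >= 1: use modular pow
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, term = n + 1, 1.0
        while term > 1e-17:              # rapidly vanishing tail
            term = 16.0 ** (n - k) / (8 * k + j)
            s = (s + term) % 1.0
            k += 1
        return s

    x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(x * 16)

# pi = 3.243F6A88... in hexadecimal
print([pi_hex_digit(i) for i in range(8)])   # [2, 4, 3, 15, 6, 10, 8, 8]
```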

Supercomputer performance is most often measured and expressed in floating-point operations per second (flops). This is because the numerical-modeling tasks for which supercomputers are built mostly require high-precision calculations on real numbers rather than integers, so the usual measure of speed for conventional computers - millions of instructions per second (MIPS) - is not applicable. For all its ambiguity and approximation, the flops rating makes it easy to compare supercomputer systems with one another against an objective criterion.
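In practice a flops rating comes from timing a floating-point-heavy kernel and dividing its known operation count by the elapsed time (the TOP500 list does this with the LINPACK dense-linear-algebra benchmark). A minimal sketch of the principle in Python with NumPy; the matrix size is an arbitrary assumption:

```python
import time
import numpy as np

n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c = a @ b                          # dense matmul costs about 2*n^3 flops
elapsed = time.perf_counter() - t0

print(f"{2 * n**3 / elapsed / 1e9:.1f} Gflops sustained")
```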

The first supercomputers had a performance of the order of 1 kflops, i.e. 1000 floating-point operations per second. The CDC 6600, with a performance of 1 million flops (1 Mflops), was created in 1964. The 1 billion flops (1 Gflops) mark was passed by the NEC SX-2 supercomputer in 1983, with a score of 1.3 Gflops. The 1 trillion flops (1 Tflops) threshold was reached in 1996 by the ASCI Red supercomputer. The milestone of 1 quadrillion flops (1 petaflops) was taken in 2008 by the IBM Roadrunner supercomputer. Work is under way to build exascale computers capable of 1 quintillion floating-point operations per second by 2016.

Author: Musskiy S.A.
