A supercomputer is a computer with an exceptionally high level of performance compared to a general-purpose computer. The term is often used to describe the most powerful high-performance systems in the field at any given moment. These machines have largely been used for scientific and engineering work that requires extremely fast computation. Common applications include studying mathematical models of complex physical phenomena and designs, such as climate and weather, cosmology, nuclear weapons and reactors, novel chemical compounds (particularly for pharmaceutical applications), and cryptology. The performance of a supercomputer is generally measured in floating-point operations per second (FLOPS).
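Since performance is quoted in FLOPS, it can help to see how such a figure is obtained. The sketch below is a toy micro-benchmark, not a real benchmark like LINPACK: it times a dense matrix multiplication with NumPy and divides the operation count by the elapsed time. The matrix size is arbitrary, chosen only for illustration.

```python
import time
import numpy as np

# Toy micro-benchmark: estimate sustained FLOPS by timing one
# dense matrix multiplication (illustrative, not a real benchmark).
n = 512
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

# A dense n x n matrix multiply performs roughly 2 * n^3
# floating-point operations (n multiplies + n adds per entry).
flops = 2 * n**3 / elapsed
print(f"~{flops / 1e9:.2f} GFLOPS")
```

Real rankings such as the TOP500 use the far more carefully controlled LINPACK benchmark, but the principle is the same: count floating-point operations and divide by wall-clock time.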
The supercomputer is a witness to humanity's toughest problems!
HISTORY: The history of supercomputers begins in the 1960s, when computer use became increasingly widespread. The important achievements since then are listed here, organized by era.
The years 1960-2000: The only computer to seriously challenge the Cray-1's performance in the 1970s was the ILLIAC IV. The CDC 1604, designed by Seymour Cray, was one of the first systems to use transistors instead of vacuum tubes, and it was widely used in scientific facilities. In 1961, IBM replied by developing its own scientific computer, the IBM 7030 (also known as Stretch). Yet IBM found few customers for the machine, regardless of its speed, and eventually exited the supercomputer business after a loss of several million dollars.
Stretch was superseded as the fastest computer on Earth in 1964 by Cray's CDC 6600, which could perform three million floating-point operations per second (FLOPS). Cray struck again with the Cray-2, a four-processor system, in 1985; it was the first machine to surpass one billion FLOPS.
Later, Steve Chen developed the Cray Y-MP as an upgrade to the X-MP. Released in 1988, it had eight vector processors running at 167 MHz, with a peak performance of 333 megaflops per processor. Cray's experiment with gallium arsenide semiconductors in the Cray-3 in the late 1980s failed miserably.
NEC introduced the SX-3/44R in 1989, and a year later it was named the world's fastest four-processor computer. Fujitsu's Numerical Wind Tunnel supercomputer, meanwhile, had 166 vector processors and was ranked first in 1994, with a peak speed of 1.7 gigaflops per CPU. The Hitachi SR2201 used 2,048 processors to attain an outstanding performance of 600 gigaflops in 1996.
In the same era, the Intel Paragon, which could be configured with 1,000 to 4,000 Intel i860 microprocessors, was considered the fastest machine in the world in 1993. It was a MIMD machine that used a high-speed two-dimensional mesh to link its CPUs, allowing processes to run on separate nodes and communicate through the Message Passing Interface (MPI).
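The appeal of a two-dimensional mesh is that message latency grows only with the grid distance between nodes. The toy model below illustrates this with dimension-ordered routing (route along rows, then columns) on a hypothetical 32 × 32 mesh; the mesh size and node numbering are illustrative, not the Paragon's actual configuration.

```python
# Toy model of a two-dimensional mesh interconnect like the Paragon's.
# Nodes are numbered in row-major order on a grid; a message is routed
# first along rows, then along columns ("dimension-ordered" routing),
# so the number of links crossed equals the Manhattan distance.

MESH_COLS = 32  # hypothetical 32 x 32 mesh of 1,024 nodes

def coords(node):
    """Grid position (row, column) of a node numbered in row-major order."""
    return divmod(node, MESH_COLS)

def hops(src, dst):
    """Number of mesh links a message crosses between two nodes."""
    (r1, c1), (r2, c2) = coords(src), coords(dst)
    return abs(r1 - r2) + abs(c1 - c2)

print(hops(0, 33))    # diagonal neighbor: 1 row + 1 column = 2 hops
print(hops(0, 1023))  # opposite corners: 31 + 31 = 62 hops
```

Even between opposite corners, a message crosses only 62 links on a 1,024-node machine, which is why the mesh scaled well for its day.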
The years 2000-2010: In the first decade of the 21st century, substantial progress was made. Supercomputer performance increased steadily, if not dramatically, while power consumption grew with it. In 1991, the Cray C90 consumed 500 kilowatts of power; by 2003, the ASCI Q used 3,000 kW while running 2,000 times faster.
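The figures above are worth unpacking: a 2,000-fold speedup for only a six-fold increase in power is a large gain in work done per watt. A quick back-of-the-envelope calculation, using the numbers as stated:

```python
# Rough performance-per-watt comparison from the figures above
# (Cray C90: 500 kW in 1991; ASCI Q: 3,000 kW and ~2,000x faster in 2003).
c90_power_kw = 500
asci_q_power_kw = 3000
speedup = 2000

power_ratio = asci_q_power_kw / c90_power_kw  # 6x more power drawn
efficiency_gain = speedup / power_ratio       # work per watt improvement
print(round(efficiency_gain))                 # ~333x more work per watt
```

So although absolute power draw ballooned, efficiency improved by roughly two and a half orders of magnitude over the same period.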
At the Japan Agency for Marine-Earth Science and Technology, the Earth Simulator supercomputer, developed by NEC in 2002, reached 35.9 teraflops with 640 nodes, each containing eight proprietary vector processors. Blue Gene took a different approach: it could use over 60,000 processors, connected in a 3-D torus topology with 2,048 processors per rack.
From 2004 to 2011, the Tianhe series advanced rapidly. With a peak computing rate of 2.57 petaFLOPS, an upgraded version of the machine (Tianhe-1A) surpassed ORNL's Jaguar to become the world's fastest supercomputer in October 2010.
The Tianhe-1A was itself surpassed as the world's fastest supercomputer in June 2011 by the K computer. The 8.1-petaflop Japanese K computer packs over 60,000 SPARC64 VIIIfx processors into more than 600 cabinets. The fact that the K computer is more than 60 times faster than the Earth Simulator, and that the Earth Simulator has fallen to 68th in the world after its years at the top, illustrates both the rapid rise in peak performance and the global spread of supercomputing technology.
Fujitsu's K, at the RIKEN facility in Japan, became the undisputed champion of high-performance computing, clocking in at 10 petaflops, four times faster than the Tianhe-1A. K abandoned Blue Gene's low-power approach in favor of 88,128 eight-core SPARC64 processors. Each CPU has 16 GB of local RAM, for a total memory capacity of 1,377 terabytes.
K consumes about 10 megawatts of power, roughly as much as 10,000 suburban homes, yet it is water-cooled (it has 864 cabinets!). K was also the most costly supercomputer ever built, at 100 billion yen ($1.25 billion). The Earth Simulator had dropped out of the top 10 by 2014, and the K computer followed by 2018. Summit, at 200 petaFLOPS, had become the world's most powerful supercomputer by 2018.
In 2020, Japan took the top rank once again with the Fugaku supercomputer, which was capable of 442 petaFLOPS and reached 1.42 exaFLOPS in the mixed-precision HPL-AI benchmark (fp16 computation refined to fp64 accuracy), making it the first supercomputer to achieve 1 exaFLOPS.
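The HPL-AI figure rests on mixed precision: do the expensive factorization work in fast, low-precision arithmetic, then cheaply refine the answer to full fp64 accuracy. Below is a minimal sketch of that idea (iterative refinement for a linear system) in NumPy, with float32 standing in for fp16 because NumPy's solver does not operate on half precision; the matrix and sizes are illustrative.

```python
import numpy as np

# Sketch of mixed-precision iterative refinement, the idea behind
# HPL-AI: solve in low precision, then refine in high precision.
# float32 stands in for fp16 here (illustrative only).
rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned system
b = rng.standard_normal(n)

# Low-precision solve: the fast, approximate step.
x = np.linalg.solve(A.astype(np.float32), b.astype(np.float32)).astype(np.float64)

# High-precision refinement: compute the fp64 residual, solve a small
# correction in low precision, and add it back. Repeat a few times.
for _ in range(3):
    r = b - A @ x  # residual in full fp64
    dx = np.linalg.solve(A.astype(np.float32), r.astype(np.float32))
    x += dx.astype(np.float64)

residual = np.linalg.norm(b - A @ x)
print(residual)  # tiny: the answer is accurate to near fp64 precision
```

Most of the arithmetic happens in the low-precision solves, which is exactly why hardware with fast fp16 units posts much higher HPL-AI numbers than classic fp64 LINPACK scores.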
Fugaku is the world’s fastest supercomputer as of April 2021.
Nevertheless, DARPA has urged researchers to reimagine computing, acknowledging that present silicon technology may not be capable of sustained exaflops. IBM, meanwhile, is developing an exascale supercomputer to analyze the exabytes of astronomical data generated by the Square Kilometre Array, the world's largest radio telescope. The telescope is due to be operational in 2024, giving IBM plenty of time to work out how to multiply the performance of present computers by a factor of 100.