Earth Simulator

The Earth Simulator is a series of high-performance supercomputers deployed at the Japan Agency for Marine–Earth Science and Technology (JAMSTEC), primarily at the Yokohama Institute for Earth Sciences. Established to support advanced global climate modelling and solid earth geophysics research, the programme has evolved through several generations since its inception in the late 1990s. Each phase of development has reflected significant progress in vector computing architecture, energy efficiency, and simulation capability, making the system one of the most influential scientific computing platforms in environmental and geoscientific study.
The initiative began under the Japanese Government’s Earth Simulator Project, which aimed to produce a supercomputing environment capable of conducting holistic, high-resolution modelling of the Earth’s atmosphere, oceans, and solid interior. Since the launch of the first system in 2002, successive generations have expanded computational capacity and optimised performance to address increasingly complex scientific questions concerning climate change, extreme weather events, seismic activity, and oceanic processes.

Background and Development Context

The Earth Simulator Project emerged during the 1990s, a decade marked by rapid supercomputing advancement and growing global concern regarding anthropogenic climate change. Japan sought to establish a computing system that could support long-term environmental forecasting and high-precision modelling beyond the capabilities of contemporaneous American and European systems.
The project united several national scientific and technological institutions, including the National Space Development Agency of Japan (NASDA, later merged into JAXA), the Japan Atomic Energy Research Institute (JAERI), and JAMSTEC. Design and construction responsibilities were entrusted to NEC, whose long-standing expertise in vector architecture played a central role in the system’s engineering approach.
Construction of the facility commenced in October 1999, culminating in the official opening of the Earth Simulator Centre in March 2002. By integrating specialised vector processors, massive parallelism, and extensive storage capacity, the first-generation Earth Simulator sought to revolutionise global environmental modelling.

First Generation Earth Simulator (2002–2009)

The first-generation Earth Simulator was based on the NEC SX-6 vector architecture and represented a landmark achievement in scientific computing. It employed 640 nodes, each containing eight vector processors and 16 GB of memory, creating a total of 5,120 central processing units and 10 terabytes of system memory. The hardware was arranged in cabinets approximately one metre wide, 1.4 metres deep, and two metres tall, with each cabinet consuming around 20 kW of power.
Key technical attributes included:

  • Storage capacity: 700 terabytes of disk storage, divided between system and user allocations, and 1.6 petabytes of tape-based mass storage.
  • Vector processing capability: Designed for highly parallel climate and geophysical workloads.
  • Simulation resolution: Able to perform fully coupled atmosphere–ocean simulations at spatial resolutions down to 10 kilometres, a significant advancement for the early 2000s.
  • Performance: Achieved 35.86 TFLOPS on the LINPACK benchmark, nearly five times faster than the previous TOP500 leader, IBM’s ASCI White (the arithmetic behind the system’s aggregate figures is sketched below).
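
The headline figures above follow directly from the per-node specifications. A minimal sketch of that arithmetic, assuming a peak of 8 GFLOPS per SX-6 vector processor (a published figure, but one not stated in this article), is:

    # Aggregate specifications of the first-generation Earth Simulator,
    # derived from its per-node figures. The 8 GFLOPS per vector processor
    # is an assumption based on published SX-6 specifications.
    nodes = 640
    cpus_per_node = 8
    memory_per_node_gb = 16
    peak_gflops_per_cpu = 8.0                               # assumed SX-6 peak

    total_cpus = nodes * cpus_per_node                      # 5,120 processors
    total_memory_tb = nodes * memory_per_node_gb / 1024     # 10 TB
    peak_tflops = total_cpus * peak_gflops_per_cpu / 1000   # ~40.96 TFLOPS

    linpack_tflops = 35.86
    efficiency = linpack_tflops / peak_tflops               # ~0.875

    print(f"{total_cpus} CPUs, {total_memory_tb:.0f} TB memory, "
          f"{peak_tflops:.2f} TFLOPS peak, {efficiency:.0%} LINPACK efficiency")

Under these assumptions the LINPACK efficiency works out to roughly 87 per cent, an unusually high figure for the period and a reflection of how well dense linear algebra maps onto long vector pipelines.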

From June 2002 to June 2004, the system occupied the top position on the TOP500 list of the world’s fastest supercomputers. It remained a reference point for vector supercomputing and environmental modelling until its LINPACK record was surpassed by IBM’s Blue Gene/L prototype in September 2004.

Second Generation Earth Simulator 2 (2009–2015)

In March 2009, the first system was succeeded by Earth Simulator 2 (ES2), based on the NEC SX-9 architecture. ES2 significantly improved performance while reducing the number of nodes, demonstrating substantial efficiency gains in vector processing.
Notable features of ES2 included:

  • A quarter of the number of nodes found in the first generation, each providing 12.8 times the performance of a first-generation node (see the arithmetic sketched after this list).
  • A 3.2-fold increase in clock speed and a fourfold enhancement of processing resources per node.
  • Peak performance of 131 TFLOPS, with a delivered LINPACK score of 122.4 TFLOPS.
  • Recognition as the world’s most energy-efficient supercomputer at the time of its introduction.
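
The per-node and peak figures quoted above can be reconstructed from the two multipliers. A minimal sketch, assuming a first-generation per-node peak of 64 GFLOPS (eight processors at an assumed 8 GFLOPS each, a baseline not stated in this article), is:

    # Reconstructing ES2's headline numbers from the stated multipliers.
    # The 64 GFLOPS per first-generation node is an assumed baseline
    # (eight vector processors at an assumed 8 GFLOPS each).
    es1_nodes = 640
    es1_node_peak_gflops = 64.0                              # assumed baseline

    clock_speedup = 3.2
    resource_factor = 4
    per_node_speedup = clock_speedup * resource_factor       # 12.8x per node

    es2_nodes = es1_nodes // 4                               # 160 nodes
    es2_node_peak_gflops = es1_node_peak_gflops * per_node_speedup   # 819.2
    es2_peak_tflops = es2_nodes * es2_node_peak_gflops / 1000        # ~131

    print(f"{es2_nodes} nodes x {es2_node_peak_gflops:.1f} GFLOPS/node "
          f"= {es2_peak_tflops:.1f} TFLOPS peak")

The delivered LINPACK score of 122.4 TFLOPS corresponds to roughly 93 per cent of this reconstructed peak.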

In 2010, ES2 achieved a major milestone by topping the Global FFT benchmark of the HPC Challenge suite, registering 11.876 TFLOPS. This solidified its reputation as one of the most capable systems for scientific vector computing during its operational period.
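
The Global FFT test measures how fast a system computes one very large one-dimensional double-precision complex FFT distributed across all of its nodes. The fragment below is a deliberately single-process illustration of the operation being measured, using NumPy and the conventional 5·N·log2(N) flop count for an FFT; it is not the HPC Challenge benchmark code itself, and the problem size is illustrative.

    # Single-process illustration of the operation measured by the HPC
    # Challenge Global FFT benchmark: one large 1-D complex FFT. The real
    # benchmark distributes the transform across all nodes via MPI.
    import time
    import numpy as np

    n = 2**22                                    # illustrative problem size
    x = np.random.random(n) + 1j * np.random.random(n)

    t0 = time.perf_counter()
    y = np.fft.fft(x)
    elapsed = time.perf_counter() - t0

    flops = 5 * n * np.log2(n)                   # conventional FFT flop count
    print(f"{flops / elapsed / 1e9:.2f} GFLOPS (single process)")

Sustaining a high rate on this benchmark is difficult because a distributed FFT is dominated by all-to-all communication and strided memory access rather than dense arithmetic, which is part of why the 11.876 TFLOPS result stood out.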

Third Generation Earth Simulator 3 (2015–2021)

Earth Simulator 3 (ES3) became operational in March 2015, incorporating the NEC SX-ACE vector architecture. The system introduced more advanced node designs and increased the total node count to 5,120, thirty-two times the 160 nodes of ES2.
Key characteristics of ES3 included:

  • Performance reaching 1.3 PFLOPS, reflecting a shift from teraflop- to petaflop-scale capability.
  • Optimised architecture for large-scale geophysical and atmospheric simulations.
  • Joint operation with Gyoukou, an immersion-cooled supercomputer capable of up to 19 PFLOPS, between 2017 and 2018. This collaborative arrangement enhanced the computational flexibility of the research centre.

ES3 supported a broad spectrum of Earth system modelling applications, addressing issues such as climate variability, ocean circulation, and seismic wave propagation.
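
Many of these applications, seismic wave propagation in particular, are dominated by stencil updates over large regular grids, precisely the loop structure that vector processors execute efficiently. The fragment below is a deliberately small, single-core sketch of explicit finite-difference time stepping for the 2-D scalar wave equation; production codes on the Earth Simulator are three-dimensional, vectorised, and distributed across nodes, and none of the parameter values here are drawn from them.

    # Tiny illustration of the stencil pattern behind seismic wave modelling:
    # explicit leapfrog time stepping of the 2-D scalar wave equation
    # u_tt = c^2 (u_xx + u_yy) on a regular grid. All values are illustrative.
    import numpy as np

    nx = ny = 200                      # grid points per dimension
    c, dx, dt = 1.0, 1.0, 0.5          # wave speed, spacing, time step (CFL-stable)
    r2 = (c * dt / dx) ** 2

    u_prev = np.zeros((ny, nx))
    u = np.zeros((ny, nx))
    u[ny // 2, nx // 2] = 1.0          # point source at the grid centre

    for _ in range(500):
        lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
               - 4.0 * u[1:-1, 1:-1])
        u_next = u.copy()
        u_next[1:-1, 1:-1] = 2.0 * u[1:-1, 1:-1] - u_prev[1:-1, 1:-1] + r2 * lap
        u_prev, u = u, u_next

    print(f"wavefield energy proxy: {(u**2).sum():.3e}")

Each update touches only neighbouring grid points, so the inner loops stream long, regular arrays through memory, which is exactly the access pattern that vector pipelines and high-bandwidth memory systems are designed for.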

Fourth Generation Earth Simulator 4 (2021–present)

The fourth-generation Earth Simulator, ES4, entered service in March 2021 and incorporates modern heterogeneous computing technologies combining scalar and vector processing paradigms. Its architecture is designed to accommodate converged workloads that span artificial intelligence, high-precision simulation, and large-scale data analysis; the division of labour this implies is illustrated conceptually after the list below.
Core architectural elements include:

  • AMD EPYC processors as primary scalar compute units.
  • NEC SX-Aurora TSUBASA Vector Engines, used to accelerate complex vector-based scientific workloads.
  • NVIDIA A100 (Ampere-architecture) GPUs, providing additional performance for machine-learning-enhanced climate and environmental modelling.
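
On such a machine, regular, bandwidth-bound model physics is the natural target for the vector engines, while dense tensor arithmetic, such as a learned correction applied to a model field, maps well to the GPUs. The fragment below is purely conceptual: it runs everything on the CPU with NumPy and merely annotates which part of a hybrid time step would typically be routed to which processor type. It does not use NEC's or NVIDIA's offloading interfaces and does not describe the actual Earth Simulator 4 software stack.

    # Conceptual sketch of a hybrid time step on a heterogeneous system.
    # Everything here runs on the CPU via NumPy; the comments only indicate
    # which kind of processor such work would typically target. This is
    # illustrative and not ES4's actual software stack.
    import numpy as np

    rng = np.random.default_rng(0)
    field = rng.standard_normal((512, 512))             # a model field
    weights = rng.standard_normal((512, 512)) * 0.01    # stand-in for a learned correction

    def physics_step(f):
        # Regular, bandwidth-bound stencil work: a natural fit for vector engines.
        smoothed = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                           + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        return 0.9 * f + 0.1 * smoothed

    def learned_correction(f, w):
        # Dense matrix arithmetic: the kind of work that maps well to GPUs.
        return f + 0.01 * np.tanh(f @ w)

    for _ in range(10):                                  # a few hybrid steps
        field = physics_step(field)                      # "vector engine" portion
        field = learned_correction(field, weights)       # "GPU" portion

    print(f"field mean after hybrid stepping: {field.mean():.4f}")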