NASA’s High-End Atmospheric Model for Climate and Weather Predictions

Shian-Jiann Lin

Scientists routinely use supercomputers and satellite data to make weather and climate-change predictions. The stakes in, and influence of, these computer-generated predictions of atmospheric evolution are high, because they are often used to guide decisions such as the evacuation of thousands of people in the path of a hurricane or limits on the growth of greenhouse-gas-producing industries. To improve forecast accuracy and to reduce uncertainties in climate-change predictions, our goal at NASA is to develop and apply a high-end atmospheric model, run at the highest possible resolution, to those predictions.

What is Atmospheric Modeling?
To make predictions, data from conventional and remote-sensing (satellite) platforms are used in a complex computer code that atmospheric scientists call a model. Quite simply, a model of the atmosphere is a numerical (digital) approximation of the fundamental physical laws that govern the dynamics, physics, and chemistry of the atmosphere, such as the conservation of momentum, mass, and total energy. Because such a model is only an approximation of the true state of the atmosphere, various types of data collected from conventional platforms and satellites are needed to calibrate and validate the model. More importantly, these observed data are optimally combined with model-generated data to construct the best possible estimate of a true initial state, which is then used as a starting point for making predictions.
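As a minimal illustration of what “optimally combined” means, consider the textbook scalar case in which a model value and an observation of the same quantity are blended with weights inverse to their error variances. This sketch (in Python, with made-up numbers) is not the actual assimilation algorithm used at the Data Assimilation Office, only the underlying principle.

# Minimal illustration of "optimal" blending of a model value and an
# observation for a single quantity (e.g., temperature at one location).
# The weights are the classic inverse-error-variance weights; a real
# assimilation system works with millions of values and full error
# covariances, so this is only a sketch of the principle.

def optimal_estimate(background, observation, var_background, var_observation):
    """Variance-weighted average of a model background and an observation."""
    weight_obs = var_background / (var_background + var_observation)
    return background + weight_obs * (observation - background)

# Hypothetical numbers: model first guess 288.0 K with 1.5 K error std dev,
# satellite retrieval 286.5 K with 1.0 K error std dev.
analysis = optimal_estimate(288.0, 286.5, 1.5 ** 2, 1.0 ** 2)
print(f"analysis temperature: {analysis:.2f} K")  # falls between the two, closer to the more accurate value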
The accuracy of weather predictions depends strongly on the correctness of the initial condition given as input to the model. Climate change predictions, on the other hand, are dominated by external forcings, such as changes in sea surface temperature (SST), concentrations of greenhouse gases (such as carbon dioxide and methane) and aerosols, and land use (such as deforestation and urban development). The models used in these two applications are usually distinctly different.
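Why the initial condition matters so much for weather forecasts can be illustrated with the classic Lorenz (1963) three-variable system, a standard toy caricature of atmospheric convection that is not part of the NASCAR model: two nearly identical starting states diverge until the initial difference dominates the result.

import numpy as np

# Lorenz (1963) toy system, used here only to illustrate sensitivity to
# initial conditions; it is not part of the NASCAR model.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])          # "true" initial state
b = a + np.array([1e-6, 0.0, 0.0])     # almost identical initial state

for step in range(2500):               # integrate both trajectories
    a = lorenz_step(a)
    b = lorenz_step(b)
    if step % 500 == 499:
        print(f"t = {0.01 * (step + 1):5.1f}  separation = {np.linalg.norm(a - b):.3e}")

# The separation grows by many orders of magnitude: a tiny initial-condition
# error eventually dominates the forecast, which is why constructing an
# accurate initial state matters so much for weather prediction.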

The NASCAR Model
Development of a comprehensive global atmospheric model that can be used for both climate and weather predictions takes decades of team effort by scientists and software engineers. The Data Assimilation Office at the NASA Goddard Space Flight Center (GSFC) and the National Center for Atmospheric Research (NCAR) are combining their science and software engineering strengths to develop such a model. This joint model, the NASA finite-volume General Circulation Model (fvGCM, also known as the NASCAR model), is being used for climate simulations, weather predictions, and global data assimilation. Data assimilation is a procedure that optimally combines observed data, which are sparse in time and space, with model predictions to construct the best possible estimate of the true state of the atmosphere.
NCAR contributes the physical parameterizations to the NASCAR model. Physical parameterizations are numerical approximations to physical processes in the atmosphere, such as clouds, solar heating, infrared cooling, and the evaporation and condensation of water vapor. NASA develops the numerical approximation to the fluid dynamics of the atmosphere, commonly referred to as the dynamical core. This dynamical core approximates the motions of the earth’s atmosphere with finite control volumes covering the entire earth, from the earth’s surface to the top of the mesosphere (roughly 80 km above sea level). Each individual volume contains such information as temperature, humidity, wind (speed and direction), pressure, and the concentrations of chemical constituents.
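The flavor of a finite-volume discretization can be conveyed with a one-dimensional sketch, far simpler than the real dynamical core: each control volume carries a cell-averaged quantity, and that quantity changes only through fluxes across the volume’s faces, so the global total is conserved by construction. The example below assumes a uniform periodic grid, a constant wind, and a first-order upwind flux.

import numpy as np

# One-dimensional, first-order flux-form (finite-volume) advection of a
# tracer q by a constant wind u on a periodic domain.  This is only a
# sketch of the finite-volume idea; the real dynamical core uses
# higher-order flux reconstructions on the full 3-D atmosphere.
def advect(q, u, dx, dt, nsteps):
    for _ in range(nsteps):
        # Upwind flux through the left face of each cell (u > 0 assumed).
        flux = u * np.roll(q, 1)
        # Cell-average update: what flows in minus what flows out.
        q = q + dt / dx * (flux - np.roll(flux, -1))
    return q

nx, dx, u, dt = 100, 1.0, 1.0, 0.5                            # CFL number = u*dt/dx = 0.5
q0 = np.exp(-0.5 * ((np.arange(nx) - 20.0) / 5.0) ** 2)       # a smooth tracer blob
q1 = advect(q0.copy(), u, dx, dt, 80)                         # advect it 40 grid lengths

print(f"total tracer before: {q0.sum():.6f}")
print(f"total tracer after : {q1.sum():.6f}")                 # identical: finite volumes conserve mass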

Computing Power
Approximating the atmosphere by a finite-volume model is analogous to digital photography, with the finite volumes being the digitized pixels. Imagine taking a picture of the entire atmosphere with a digital camera: the higher the total number of pixels in the picture, the better and clearer the picture would be. In an atmospheric model, the total number of finite volumes required to digitize the whole atmosphere ranges from 50,000 at the very low end to over 100 million at the high end, depending on the desired resolution and the model’s intended applications. The Japanese Earth Simulator hardware, for example, is at the very high end, capable of at least an order of magnitude higher resolution than the most powerful supercomputer available to atmospheric scientists in the U.S.
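Under the pixel analogy, a back-of-the-envelope count of the “pixels” at a given resolution follows from the earth’s surface area, an assumed horizontal cell size, and an assumed number of vertical layers. The layer counts below are illustrative choices rather than the model’s actual grids, but they reproduce the range quoted above.

import math

EARTH_RADIUS_KM = 6371.0
SURFACE_AREA_KM2 = 4.0 * math.pi * EARTH_RADIUS_KM ** 2   # about 5.1e8 km^2

def approximate_cell_count(horizontal_res_km, vertical_layers):
    """Rough number of finite volumes needed to tile the global atmosphere."""
    columns = SURFACE_AREA_KM2 / horizontal_res_km ** 2
    return columns * vertical_layers

# Assumed (illustrative) layer counts for low-end, high-end, and
# cloud-resolving configurations.
for res_km, layers in [(500, 20), (55, 32), (5, 50)]:
    print(f"{res_km:4d} km, {layers:2d} layers: ~{approximate_cell_count(res_km, layers):,.0f} cells")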
The NASCAR model is constructed to be efficient on a variety of computers, from a single laptop or desktop PC (the low end) to massively parallel supercomputers (the high end). However, the accuracy of the resulting simulation or prediction depends strongly on the resolution of the model. A coarse-grained representation of the global atmosphere requires roughly 500 km horizontal resolution with 10 to 20 vertical layers. A typical desktop PC costing $2,000 could perform weather prediction and short-term climate simulation at the low-end 500 km resolution, but with questionable accuracy and without the needed regional detail. The computing cost increases at least quadratically with each doubling of the horizontal resolution, because doubling the resolution quadruples the number of columns and, in practice, also forces a shorter time step. The high-end predictions and simulations carried out at NASA are routinely performed at roughly 55 km horizontal resolution, which requires memory and computing power equivalent to at least hundreds of desktop PCs working in parallel efficiently. The key phrase is working efficiently, which requires expensive hardware for CPU-to-CPU communication and years of software engineering investment to optimize the parallel efficiency of the model. Together, improvements in parallel computing efficiency and advances in scientific algorithms can contribute as much to overall computational efficiency as the hardware improvement predicted by Moore’s Law, which states that computing power doubles roughly every 18 months.
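To make the cost growth concrete, the sketch below assumes that cost scales with the number of columns (quadratic in the inverse resolution) times the number of time steps (roughly linear in the inverse resolution). This is an idealization that ignores vertical resolution and parallel overhead, but it shows why the jump from 500 km to 55 km already demands hundreds of PC-equivalents.

def relative_cost(res_km, baseline_km=500.0):
    """Relative computing cost vs. a 500 km baseline, assuming cost grows as
    (number of columns) x (number of time steps), i.e. roughly cubically
    in the inverse of the horizontal grid spacing."""
    return (baseline_km / res_km) ** 3

for res in [500, 250, 110, 55, 5]:
    print(f"{res:4d} km resolution: ~{relative_cost(res):10,.0f}x the 500 km cost")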

Real-World Applications
For the data assimilation application, a high-end atmospheric model not only can fill the voids left by satellite observations but can also provide higher-resolution information that is not available from the observations. However, satellite data, even if retrieved at coarse time and spatial resolution, are still critically important to help correct and remove basic model biases.
We present here two other types of applications of the NASCAR model: climate simulation of the past and hurricane prediction. The model-simulated zonal-mean winter (DJF) and summer (JJA) temperatures were compared to the best estimate of the observed temperatures for the same period (1980 through 1994). Many of the observations in the stratosphere and above are obtained from satellites. The vertical axis of the model plots is in pressure units (mb), and the horizontal axis is latitude. Although the model showed clear successes, particularly in the tropics and midlatitudes, it also had significant biases, particularly near the poles and the model’s top. Comparisons between model simulations of past climate and available observations give scientists a measure of confidence in model predictions of the future (Figure 1).
The NASCAR model has also been used for routine weather forecasts, including hurricane predictions. Comparing the track of Hurricane Lili (September 25 through 30, 2002) as predicted by NASCAR with the operational prediction from the National Centers for Environmental Prediction shows that the NASCAR-predicted track is significantly closer to the hurricane’s observed track (Figure 2).
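Track forecasts such as these are typically scored by the great-circle distance between the forecast and observed storm centers at matching times. The sketch below shows that comparison with hypothetical positions standing in for the actual Lili track data, which are not reproduced here.

import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two (latitude, longitude) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Hypothetical storm-center positions (degrees N, degrees E) at successive
# forecast times; these are NOT the actual Lili best track or model output.
observed = [(16.0, -81.0), (18.5, -84.0), (21.5, -86.5)]
forecast = [(16.2, -81.3), (19.0, -84.8), (22.4, -87.6)]

errors = [great_circle_km(o[0], o[1], f[0], f[1]) for o, f in zip(observed, forecast)]
print("track errors (km):", [round(e) for e in errors])
print(f"mean track error : {sum(errors) / len(errors):.0f} km")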
Independent validation of the model forecast using total moisture data from the Special Sensor Microwave Imager (SSM/I) satellite instrument is also possible. Compared to the raw satellite data, the high-resolution NASCAR model actually provides a sharper view of the moisture spiral band associated with Hurricane Lili. The model’s predictions of the water vapor structures elsewhere on the globe were also in excellent agreement with the SSM/I data (Figure 3).

A Look Into the Future
The high-end atmospheric model developed at NASA can be viewed as a prototype of a more ambitious scientific and engineering project to construct a “Virtual Planet” on a future high-end supercomputing system. It is well accepted that most of the uncertainty in modeling the earth’s climate stems from the inadequacy of the so-called “cumulus parameterization” used to represent the effects of clouds, which simply cannot be resolved by today’s highest-resolution global models. The ultimate goal of the Virtual Planet (to be run on the massively parallel Planet Simulator) would be to resolve the clouds explicitly, thus bypassing the cumulus parameterization and pushing the uncertainty further down the scale. To this end, a horizontal resolution of 5 km or finer would be required. Using today’s most advanced U.S.-made microprocessors, it is estimated that, assuming scalability, the total number of processors needed to construct such a massively parallel “Planet Simulator” would be on the order of 50,000 or more. A project of this scale would likely require a coordinated national effort involving several agencies (such as NASA, the National Oceanic and Atmospheric Administration, and the Department of Energy) and research institutions.
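The quoted processor count can be sanity-checked with the same pixel arithmetic used earlier: at 5 km resolution the globe contains on the order of 20 million atmospheric columns, so a machine of roughly 50,000 processors would assign a few hundred columns to each one. The division of labor below is purely illustrative; the real number would depend on the domain decomposition and per-processor throughput.

import math

EARTH_SURFACE_KM2 = 4.0 * math.pi * 6371.0 ** 2

# Order-of-magnitude check of the "Planet Simulator" sizing quoted above:
# columns at 5 km resolution, divided among an assumed 50,000 processors.
columns_5km = EARTH_SURFACE_KM2 / 5.0 ** 2
processors = 50000
print(f"columns at 5 km       : ~{columns_5km:,.0f}")
print(f"columns per processor : ~{columns_5km / processors:,.0f}")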

Acknowledgments
This work is funded by the NASA ESE Science Division through the Global Modeling and Analysis Program.

About the Author
Dr. Shian-Jiann Lin (known to colleagues as SJ) received his M.S. degree in Aerospace and Mechanical Engineering from the University of Oklahoma in 1985 and his Ph.D. in Atmospheric Sciences from Princeton University in 1989.
Dr. Lin is currently the head of the model development group at the NASA Goddard Space Flight Center’s Data Assimilation Office. He is the original developer of the NASA finite-volume dynamical core, which is also being used in the NCAR Community Atmosphere Model (CAM). His research interests include the development of the “Virtual Planet,” which is a high-resolution modeling system for the earth as well as other planets. Dr. Lin can be reached via e-mail at [email protected].
