
NSF Org: AGS Division of Atmospheric and Geospace Sciences
Recipient:
Initial Amendment Date: July 24, 2012
Latest Amendment Date: July 24, 2012
Award Number: 1237613
Award Instrument: Standard Grant
Program Manager: Eric DeWeaver, edeweave@nsf.gov, (703) 292-8527, AGS Division of Atmospheric and Geospace Sciences, GEO Directorate for Geosciences
Start Date: August 1, 2012
End Date: July 31, 2016 (Estimated)
Total Intended Award Amount: $363,677.00
Total Awarded Amount to Date: $363,677.00
Funds Obligated to Date:
History of Investigator:
Recipient Sponsored Research Office: 400 HARVEY MITCHELL PKY S STE 300, COLLEGE STATION, TX, US 77845-4375, (979) 862-6777
Sponsor Congressional District:
Primary Place of Performance: TAMU 3150, College Station, TX, US 77843-3150
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): Climate & Large-Scale Dynamics
Primary Program Source:
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.050
ABSTRACT
This project examines the impact of uncertainty and error in model formulation on errors in weather forecasts produced by ensemble prediction systems (EPSs). In an EPS, forecasts are produced by running an ensemble of forecast models, each starting from a slightly different initial condition (the models can also differ in their formulation). The resulting ensemble of forecasts is analyzed statistically to produce an optimal forecast, an estimate of the error in the forecast, and (in combination with real-world observations) a set of initial conditions for the next forecast cycle. The work is based on the hypothesis that errors in model formulation (principally errors in parameterization and truncation) introduce errors into the model integrations at the scale of the parameterized processes, presumably at or near the truncation limit of the model, and that these errors are propagated upscale by the resolved model dynamics until they produce forecast uncertainty at synoptic scales. Because upscale propagation determines the forecast impacts at substantial lead times (day three, for instance), forecast errors due to model errors do not have any particular characteristics that would distinguish them from forecast errors due to initialization errors (which would likely undergo the same upscale propagation before affecting the forecast). Based on this reasoning, the PIs conjecture that the effect of model errors could be accounted for, at least approximately, by modulating the magnitude of the different error patterns in the low-dimensional vector space that contains most of the forecast uncertainty from all sources. The research has a three-part agenda. The first part will test the hypothesis on forecasts archived in the THORPEX Interactive Grand Global Ensemble (TIGGE) data set.
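The ensemble idea described above can be sketched on a toy chaotic system. The following is a minimal, hypothetical illustration (not the project's actual forecast system), using the classic Lorenz-63 equations as a stand-in for atmospheric dynamics; all function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz (1963) system: a standard chaotic toy model of the atmosphere."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt=0.01):
    """One fourth-order Runge-Kutta integration step."""
    k1 = lorenz63(state)
    k2 = lorenz63(state + 0.5 * dt * k1)
    k3 = lorenz63(state + 0.5 * dt * k2)
    k4 = lorenz63(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def run_ensemble(x0, n_members=20, ic_err=1e-3, n_steps=500, seed=0):
    """Integrate an ensemble of slightly perturbed initial conditions and
    return the ensemble-mean forecast plus the ensemble spread over time
    (the spread serves as a proxy for forecast uncertainty)."""
    rng = np.random.default_rng(seed)
    members = x0 + ic_err * rng.standard_normal((n_members, 3))
    spread = []
    for _ in range(n_steps):
        members = np.array([rk4_step(m) for m in members])
        spread.append(members.std(axis=0).mean())
    return members.mean(axis=0), np.array(spread)

# Spin up onto the attractor so the forecast starts from a realistic state.
x0 = np.array([1.0, 1.0, 1.0])
for _ in range(1000):
    x0 = rk4_step(x0)

mean_forecast, spread = run_ensemble(x0)
# Tiny initial-condition differences amplify, so the spread grows with lead time.
```

The growing spread is the toy analogue of the forecast-uncertainty estimate an operational EPS produces at each lead time.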
The TIGGE archive contains forecasts produced by a variety of ensemble prediction systems using a variety of techniques to account for errors in model formulation and initial conditions, thus allowing numerous tests of the hypothesis. The second part consists of a suite of "perfect model" experiments, in which the "true" state of the atmosphere is taken from the same model used in the ensemble forecast system. The perfect-model configuration enables experiments in which there is no model error, since the "true" system can have exactly the same physics and truncation as the forecast model; such experiments are useful for isolating other sources of forecast error. The third part consists of forecast experiments that use a state-of-the-art data assimilation system to assimilate real-world observations, in which the PIs will attempt to predict specific challenging forecast cases, such as cyclogenesis produced by a warm-core tropical cyclone.
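The perfect-model idea can likewise be sketched in a toy setting: when the "truth" is generated by the very model used for the forecast, a forecast started from the exact true state stays error-free, so any error must come from the initial condition; perturbing a model parameter then mimics model error. This is a hypothetical illustration under assumed parameter values, not the project's experimental design.

```python
import numpy as np

def lorenz63(state, rho=28.0, sigma=10.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt=0.01, rho=28.0):
    k1 = lorenz63(state, rho)
    k2 = lorenz63(state + 0.5 * dt * k1, rho)
    k3 = lorenz63(state + 0.5 * dt * k2, rho)
    k4 = lorenz63(state + dt * k3, rho)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def forecast_error(ic_error, rho_forecast=28.0, n_steps=400):
    """'Truth' is generated by the reference model (rho = 28); the forecast
    model may use a perturbed rho to mimic model error."""
    truth = np.array([1.0, 2.0, 20.0])
    forecast = truth + ic_error
    for _ in range(n_steps):
        truth = rk4_step(truth)                      # the "true" system
        forecast = rk4_step(forecast, rho=rho_forecast)
    return np.linalg.norm(forecast - truth)

perfect = forecast_error(np.zeros(3))                 # no IC error, no model error
ic_only = forecast_error(np.full(3, 1e-6))            # initial-condition error only
model_only = forecast_error(np.zeros(3), rho_forecast=28.28)  # model error only
```

With identical physics and an exact initial state the forecast reproduces the truth exactly; either a tiny initial-condition perturbation or a 1% parameter error alone produces a growing forecast error, which is what makes the two sources hard to distinguish downstream.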
In addition to its scientific merit, the work will have societal benefit by developing a strategy to improve the quality of weather forecasts issued to the general public. The work also seeks to improve understanding of the uncertainty inherent in weather forecasts, so that information regarding the likely accuracy of forecasts can be included in forecast guidance. The work may also have applicability to climate and earth system models used to produce climate projections and long-range forecasts, and to understanding and predicting the behavior of other complex systems. In addition, the project provides support and training to a graduate student, thereby developing the workforce in this research area.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
The ultimate goal of this project was to develop new knowledge leading to improved quantitative prediction of the uncertainty of model-based numerical weather forecasts. Because the main obstacle to achieving such improvements has been limited understanding of the exact mechanisms by which model errors contribute to forecast uncertainty, the project focused on the effect of model errors and model uncertainty on the forecast uncertainty. It was found that the existing state-of-the-art techniques for predicting the forecast uncertainty were highly effective up to 10 forecast days, especially in the range between 3 and 7 forecast days. It was also found, however, that the uncertainty predictions broke down beyond 10 days. This result is problematic because, while the model forecasts typically have some skill beyond 10 days, the generally high level of error at those long forecast times means that skill has little value for forecast users without an accurate quantification of the forecast uncertainty. The diagnostic techniques developed in the course of the project are expected to support the development of new modeling techniques that improve both the forecasts and the quantification of the forecast uncertainty beyond 10 forecast days.
Last Modified: 11/13/2016
Modified by: Istvan Szunyogh