Page History
...
Note: HRES and Ensemble Control Forecast (ex-HRES) are scientifically, structurally and computationally identical. With effect from Cy49r1, Ensemble Control Forecast (ex-HRES) output is equivalent to HRES output shown in the diagrams. At the time of the diagrams, HRES had a resolution of 9 km and ensemble members a resolution of 18 km.
Jumpiness
Customers dislike jumpy forecasts. The role of the forecaster is to deduce the most likely outcome and assign a general probability to it, taking into account all possible outcomes. The forecaster should avoid sudden changes in their forecasts, as the next NWP model run may or may not revert to earlier solutions. Adding fine detail, particularly at longer forecast lead-times, is inappropriate; nevertheless the risk of severe or exceptional weather should be captured, even if its probability is low.
...
Ensemble of Data Assimilations (EDA), Ocean coupling from Day0, and future enhancements to stochastic physics and land surface perturbations are designed to improve the quality of the ensemble and should continue to reduce, though not eradicate, jumpiness.
Jumps, Trends and Flip-flops
Some terms have evolved to describe the run-to-run changes that may be seen within a series of forecast output:
...
- at short lead-times a small but significant proportion appear better (~15% at Day2),
- at longer lead-times a larger proportion appear better (~40% at Day6) (Fig7.2-5).
Persson and Strauss (1995) and Zsótér et al. (2009) found:
...
...
Fig7.2-6: The correlation of 24-hour forecast jumpiness and forecast error for 2m temperature against forecast lead-time for Heathrow at 12 UTC, October 2006 - March 2007. At short lead-times the correlation between jumpiness and error is low, but it increases with forecast range and asymptotically approaches 0.50. Note that even at a correlation of 0.5 the variance explained is still only 25%.
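The arithmetic behind the caption is simply that variance explained is the square of the correlation coefficient, so r = 0.5 implies r² = 0.25. A minimal sketch of the computation, using invented temperature data (not ECMWF output; the error magnitudes are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2 m temperature data: a "truth" series plus two successive
# forecast runs verifying at the same times (values are invented).
truth = rng.normal(10.0, 3.0, size=500)
run_old = truth + rng.normal(0.0, 2.0, size=500)   # earlier run, larger error
run_new = truth + rng.normal(0.0, 1.5, size=500)   # later, more skilful run

jumpiness = run_new - run_old      # run-to-run change ("jump")
error = run_new - truth            # error of the latest run

r = np.corrcoef(jumpiness, error)[0, 1]
variance_explained = r ** 2        # a correlation of 0.5 would explain only 25%
print(round(r, 2), round(variance_explained, 2))
```

With independent errors as above, the jumpiness-error correlation is moderate even though the jump itself contains no direct information about the sign of the next change, which is the point the figure makes.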
Unreliability of Trends
Trends in the development of individual synoptic systems over successive forecasts do not provide any indication of their future development. If over its last few runs the NWP model has systematically changed the position and/or intensity of a synoptic feature, the behaviour of the next forecast cannot be deduced by simple extrapolation of previous forecasts. However, the order in which the jumpiness occurs can provide additional insight. Table7.2.1 shows occasions when rainfall occurred and was forecast on at least one of three consecutive forecasts verifying at the same time. According to Table7.2.1, the likelihood that precipitation occurs is broadly similar whether the last two forecasts are consistent (R, R, -) or the last three are “flip-flopping” (R, -, R). The last two forecasts are, on average, more skilful than the first forecast - but they are also, on average, more correlated. The earlier forecast may lack skill, but this is compensated by its being less correlated with the most recent forecast. Agreement between two, on average, less correlated forecasts carries more weight than agreement between two, on average, more correlated forecasts.
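The last point can be illustrated with a minimal Monte Carlo sketch (synthetic errors, not real forecast data). For two equally skilful forecasts whose errors have standard deviation σ and correlation ρ, the error variance of their average is (σ²/2)(1 + ρ): agreement between less correlated forecasts therefore pins the outcome down more tightly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
sigma = 1.0

def mean_error_variance(rho):
    # Draw pairs of forecast errors with the given correlation and return
    # the variance of the error of the two-forecast average.
    cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
    e = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return e.mean(axis=1).var()

v_low = mean_error_variance(0.2)    # two weakly correlated runs
v_high = mean_error_variance(0.8)   # two strongly correlated runs

# Theory: Var = (sigma^2 / 2) * (1 + rho), i.e. 0.6 vs 0.9 here
print(round(v_low, 2), round(v_high, 2))
```

The weakly correlated pair yields the smaller combined error variance, matching the argument that consensus between less correlated forecasts is the more informative signal.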
...
Table 7.2-1: The percentage of cases when >2mm/24hr has been observed when up to three consecutive ECMWF runs (T+84hr, T+96hr and T+108hr) have forecast >2mm/24hr for Volkel, Netherlands, October 2007 - September 2010. R indicates where such rain has been forecast and has occurred. Similar results are found for other west and north European locations and for other NWP medium-range models.
Weakness of an Intuitive Approach towards likely outcomes
It is undoubtedly difficult to choose the more likely outcome when a series of forecasts shows large variations, trends and/or flip-flops. A simple exercise in recent ECMWF training courses illustrates the problem. Students were asked to interpret the expected temperature from a series of sequential NWP model forecasts verifying on the same day, and to provide a single deterministic forecast for that day based on that information. The students used several, largely intuitive, “forecasting techniques” (see Table 7.2.2), but in the end none of them can be deemed particularly efficient (though any one of them could have captured the correct result in a given situation). The spread of the students' forecasts gives some idea of the confidence inherent in the pattern of the forecast information provided:
...
Fig7.2-7: The graphs show sample schematic forecasts of 12UTC temperature over four successive NWP model runs: Jumpy (top) and Trend (bottom). The histograms show the forecasts made by the students using their own techniques. Spread was low in the jumpy case, since the oscillations remained fairly steady throughout and the next forecast could be higher or lower without changing the range of the oscillation much. Spread was high in the trend case, illustrating the point that the next forecast may well be higher than the one before, breaking the trend, or lower, continuing it.
Dealing with Jumpiness
The forecaster can try to minimise the effect of these variations by not taking the latest forecasts as necessarily being the best (although on average they are). Techniques which may be of use in cases of jumpiness are to:
...
Special considerations - Jumpiness at short lead-times
In ‘finely balanced’ situations (those with dynamical sensitivity), the ensemble spread can be quite high even at quite short lead-times (about one or two days); slight differences and jumpiness among ensemble members or the control can have a large impact on the NWP model evolution (e.g. the precise phasing of upper and lower levels needed for explosive cyclogenesis; high precipitation intensities can turn rain into (surprise) snow through cooling by melting). Severe weather situations are often associated with these sorts of uncertainties, and a probabilistic approach rather than a definitive forecast is generally more effective and useful.
Customer considerations
To minimise error when measured over an extended period, one should always follow the latest forecast. The reason for not doing this is to avoid negative customer perceptions that can arise when jumpy forecasts are issued. It is important that the forecaster understands the requirements of the customer (e.g. what their thresholds are for taking weather-related precautions), but the forecaster does not have the responsibility to make such decisions for customers - it is for the customer to decide what action to take. Customers have to make decisions based upon the forecasts that are issued, and jumpy forecasts can cause sudden or frequent changes in customer actions - in some cases the precautions that they take are expensive or cannot be easily reversed. So it is important to maintain the confidence of the customer and their belief that forecasters are contributing positively to a best estimate of future weather. It is usually equally important for the customer to know about the uncertainty in a forecast as about the actual forecast value (e.g. what else could happen, or what is the worst possibility). By making full use of the ensemble results the forecaster can give a more effective service. Probability forecasts convey more information than simple deterministic statements. However, weather forecasters may, paradoxically, sometimes aid their end-users more by not issuing a very uncertain forecast.
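As a minimal sketch of the probabilistic approach, a threshold probability can be read straight off the ensemble by counting the members beyond a customer's decision threshold (the member values and threshold below are invented for illustration, not real ensemble output):

```python
# Hypothetical 2 m temperature ensemble (°C) for one site and lead time.
members = [1.8, 0.4, -0.6, 2.3, 0.1, -1.2, 0.9, 1.5, -0.3, 0.7]
threshold = 0.0  # e.g. the customer takes frost precautions below 0 °C

# Fraction of members below the threshold gives a first-guess probability.
prob_below = sum(1 for m in members if m < threshold) / len(members)
print(f"P(T < {threshold} °C) = {prob_below:.0%}")  # → P(T < 0.0 °C) = 30%
```

A statement of this form ("30% chance of frost") tells the customer both the best estimate and its uncertainty, and lets them weigh the probability against the cost of their precautions.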
By not following all jumpiness the forecaster accepts a trade-off whereby, over many cases and in net terms, accuracy will be reduced but customer perceptions and actions will be improved. This is psychology, not statistics. But, most importantly, taking this psychology into account can help to reduce customer displeasure and mistrust of forecasters' output, and so increase the chance that the customer will in the end take the right action - this, after all, is the bottom line. Over an extended period, minimisation of jumpiness by forecasters may also improve forecaster output as verified against an NWP model; without full understanding of this trade-off, however, it could be detrimental to the perception of forecaster performance.
Additional Sources of Information
(Note: In older material there may be references to issues that have subsequently been addressed)
Nil currently.
(FUG Associated with Cy49r1)