The history and future of numerical weather prediction in the Met Office

Brian Golding and Kenneth Mylne
Met Office, Exeter

Peter Clark
Met Office, JCMM, Reading

The Met Office is recognised as a world leader in numerical weather prediction (NWP). In this article we trace the milestones in its development, with recollections of some of those involved, and identify key objectives for the future.

Beginnings (1904–53)

It was 100 years ago that Vilhelm Bjerknes expounded the hydrodynamical basis for weather forecasting that has come to dominate meteorology through its application in NWP (Bjerknes 1904). Its first practical champion was L. F. Richardson, who joined the Met Office from the National Physical Laboratory in 1913 with practical experience in the solution of hydrodynamical problems using finite-difference methods (Ashford 1985), and already with a vision of predicting the state of the atmosphere by similar means. His monumental effort to achieve this by manual computation, while serving as an ambulance driver in World War I, was published by Richardson (1922). Meanwhile, developments in theoretical understanding, especially in the Bergen school, coupled with technical developments in observing the 3-dimensional atmospheric structure, were preparing the ground for achievement of Richardson’s vision once high-speed computing became available following World War II. In the USA, it was through Von Neumann’s Electronic Computer Project that this was first achieved (Platzman 1979), culminating in the ENIAC experiment using Charney’s barotropic equation set (Charney et al. 1950).

In 1948, the recently formed Meteorological Research Committee advised that application of computational methods to solving the atmospheric equations on “an electric desk calculator” should be pursued. The result was the despatch of Fred Bushby (see also Mason and Flood 2004) on a course in using the EDSAC computer at Cambridge, access to the LEO computer of Lyons Co., and the formulation of a set of equations by Sawyer and Bushby which were first integrated in 1952 using a 12 × 8 grid with a grid spacing of 260 km, a 1-hour time-step, and requiring 4 hours’ computing time for a 24-hour forecast (Bushby and Hinds 1954). Unlike Charney’s model, this was a baroclinic model with a vertically uniform thermal wind. With the simplifications introduced by the geostrophic approximation, and a parametrization of the local pressure tendency, the equations reduced to three: for the time tendencies of the 1000–200 mbar thickness and its Laplacian, and the time tendency of the 600 mbar height (Sawyer and Bushby 1953). The equations were integrated using a leapfrog predictor step followed by an implicit corrector.

In the Met Office centenary issue of the Meteorological Magazine in 1954, it was noted that, whilst tremendous advances had been made in observing the weather, improvements in forecasting had been very slight in the previous decade, but that the experiments in numerical forecasting showed great promise (Peters 1955). It was anticipated that NWP would become a valuable aid in preparing the 24-hour forecast chart, but that deduction of the associated weather would remain a manual task. In spite of this, Sutton (1954) wrote of his concern that the chaotic nature of the atmosphere could make the hydrodynamical equations unpredictable as early as 24 hours ahead, due to the exponential growth of unobservable perturbations in the initial conditions. Today, we are struck both by his pessimism and by his prescience as we seek to use ensembles to identify those rare occasions when this may be true.
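The leapfrog (centred-in-time) step mentioned above remains a textbook scheme. The sketch below, in Python, shows the bare three-time-level leapfrog applied to 1-D linear advection; it is an illustration of the time-stepping idea only, standing in for (not reproducing) the Sawyer–Bushby tendency equations and their implicit corrector.

```python
import numpy as np

def leapfrog_advection(u0, c, dx, dt, nsteps):
    """Leapfrog (centred-in-time) integration of du/dt = -c du/dx on a
    periodic 1-D grid. Illustrates the three-time-level scheme only; the
    1952 system stepped thickness and height tendencies instead."""
    # Start-up: one forward (Euler) step to obtain the second time level
    dudx = (np.roll(u0, -1) - np.roll(u0, 1)) / (2.0 * dx)
    u_prev, u_curr = u0, u0 - c * dt * dudx
    for _ in range(nsteps - 1):
        dudx = (np.roll(u_curr, -1) - np.roll(u_curr, 1)) / (2.0 * dx)
        u_next = u_prev - 2.0 * c * dt * dudx   # centred in time and space
        u_prev, u_curr = u_curr, u_next
    return u_curr

# A 12-point row with 260 km spacing and a 1-hour step, echoing the 1952 set-up
dx, dt = 260.0e3, 3600.0
x = dx * np.arange(12)
u24 = leapfrog_advection(np.sin(2 * np.pi * x / (12 * dx)), c=10.0,
                         dx=dx, dt=dt, nsteps=24)
```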

The first operational system (1954–66)

Following the initial experiments, work moved to the Ferranti Mk I computer at Manchester University Department of Electrical Engineering. Hinds (1981) has given us a wonderful description of these experiments: “Since we needed the computer for several hours at a stretch, most of our usage was at night and for some years we used the machine for two nights each alternate week. We stayed at a nearby commercial hotel made up of several elderly terraced houses, now happily demolished. Readily available treats were the sight of sunrise over Manchester from the roof near the computer room or the exhilaration of coping with an old-fashioned Manchester smog in which the buses were led by a man on foot holding a flare. It was sometimes necessary to have one member of the party with sufficient athletic prowess to scale the wrought-iron University gate in order to gain access to the computer building . . .”

At this time, scientists were experimenting with a wide range of possibilities for equation sets, horizontal and vertical resolution, discretization templates, etc. It was generally agreed that the barotropic set was inadequate and that at least three levels were required in the vertical, but the Met Office focus on this more expensive approach probably delayed its move to operational use. In late 1954, the Meteorological Research Committee recommended procurement of a computer, and a Ferranti Mercury, known as ‘Meteor’, was installed in January 1959 (Fig. 1). By this time several research staff had been trained, and the remaining components of an operational suite had been assembled, including observation decoding and quality control, and objective analysis using a local quadric-fitting technique (Bushby and Huckle 1957). The new computer was one of the largest and fastest made in England (Knighting 1959), boasting 5000 valves and 3500 diodes, with 1024 floating-point stores, a line printer, and a computation speed of about 300 flop (floating-point operations per second). A trial operational suite was constructed in early 1960, using the improved Bushby–Whitelam 3-level model (Bushby and Whitelam 1961) covering western Europe and the North Atlantic, and ran for 18 months.


Fig. 1 The Ferranti Mercury computer, ‘Meteor’, installed at Dunstable in 1959 (© Crown copyright)

It consisted of a 6-hour forecast from an 0000 UTC analysis, followed by an 0600 UTC re-analysis and a 24-hour forecast. This was planned to be available by 0930 UTC, but that was achieved on only about 35% of occasions (Knighting et al. 1962). It was concluded that a more powerful and reliable computer was required. This was an English Electric KDF9, known as ‘Comet’, installed in 1965 and costing £500 000 (Fig. 2). It used transistors, had a speed of 60 kflop and a memory of 12 kword, and could output charts in zebra form on a line printer and, later, on a pen plotter. Sir John Mason, who became Director-General of the Met Office that year, recalls (Mason, personal communication):

“Soon after I arrived on 1 October, I became impressed by the fact that the experimental forecasts were systematically more accurate than the traditional forecasts based on extrapolation of time sequences of hand-drawn charts. Accordingly, I decided, against the advice of some senior colleagues who favoured a longer trial period, that the model forecasts would be issued twice a day from Monday 2 November 1965. The media were invited to witness this landmark in the history of the Meteorological Office [see Fig. 3] and gave it unprecedented coverage both in the press and on TV. Fortunately the first forecast was excellent.”

Fig. 2 The English Electric KDF9 computer, ‘Comet’, installed at Bracknell in 1965 (© Crown copyright)

Copies of this first forecast were given to all delegates as a souvenir (Meteorological Office 1966). The model (Bull 1966) was directly descended from the Sawyer–Bushby model, with two layers, from 1000 to 600 mbar and from 600 to 200 mbar, the thermal wind being constant in direction and linearly varying in speed in each. The vertical velocity was zero at the top and specified from the topography gradient and surface drag at the bottom. Lateral boundaries were fixed. It used a 300 km horizontal grid length at 60°N, on a polar stereographic projection. The model equations were derived from conservation of vorticity and mass, assuming geostrophy in each layer and at 1000 mbar, and computed the rate of change of thickness and coefficients of the vertical velocity specification in each layer, together with the stream function at 600 mbar. The 600 mbar height was diagnosed using the inverse balance equation. A leapfrog scheme with a time-step of 45 minutes was used (sometimes reduced to 30 minutes), and a 5 × 5 spatial filter was applied every 6 hours to control noise. The only diabatic input was surface heating over warmer seas, using climatological sea surface temperatures. Most of the suite was coded in KDF9 Usercode.

Forecasts were run to T+72 twice a day from an 0430 UTC data cut-off, with an update run from an 1130 UTC data cut-off. Programs were stored on 8-hole paper tape and read in through a photoelectric reader. Whilst this was a convenient storage medium – unlike cards, the order of the instructions could not be ‘shuffled’ – there were risks associated with rewinding rolls of hundreds of metres of paper tape after use. Without care, a twist in the tape would run into the winder, become folded and then almost inevitably torn. Mending a torn tape was almost impossible, so regular copies had to be made against this eventuality. Also, the edge of the tape was a very effective finger slicer at high speed!

Output was initially focused on support to forecasters preparing the 24-hour surface pressure forecast, and on direct model output for aviation flight planning. From autumn 1966, the system operated 24 hours a day, seven days a week. Upper-air forecasters were soon removed from Heathrow Airport and other offices as computer forecasts replaced the manual ones, and Heathrow was designated a European Area Forecast Centre. Forecasts were also transmitted to western European National Meteorological Services (NMSs) and to RAF stations. Special forecasts were prepared for flights of the Concorde prototype in 1968, output being fed directly into the BOAC (British Overseas Airways Corporation) flight-planning computer.
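The 5 × 5 noise-control filter mentioned above lends itself to a brief illustration. The sketch below applies a moving-average smoother of the same footprint to a 2-D field; the actual filter weights used in 1965 are not given here, so the uniform weighting is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def apply_noise_filter(field, size=5):
    """Apply a size x size moving-average smoother to a 2-D field.
    Illustrative stand-in: only the 5 x 5 footprint and the 6-hourly
    application are documented, not the weights."""
    return uniform_filter(field, size=size, mode="nearest")

# e.g. smooth a noisy field every eight 45-minute steps (= 6 hours)
psi = np.random.default_rng(0).standard_normal((40, 40))
psi_smoothed = apply_noise_filter(psi)
```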

The 10-level model (1967–78)

By this time, the Met Office was aiming at the prediction of frontal precipitation. The 10-level model, based on a scheme formulated by J. S. Sawyer, solved the Navier–Stokes equations of fluid motion, and the thermodynamic, heat-transfer and continuity equations, in their primitive forms, obviating the need for the geostrophic assumption. This permitted computation of the vertical motions and the large-scale condensation and evaporation processes responsible for clouds and precipitation. It was to give the Met Office a world lead in NWP that has never been relinquished. Bushby and Timpson (1967) described experiments in mesoscale prediction using a 40 km grid of 95 × 63 points and 10 equally spaced levels from 1000 to 100 mbar, with the express aim of reproducing frontal structures. During the next 3 years it was developed to include the effects of topography, surface friction, convective overturning of the atmosphere, and large-scale diffusion. A radiation balance was added later. Initial experiments were carried out on the Manchester University ICL Atlas computer, and then on the Science Research Council Atlas computer at Chilton, where it took 8 hours to run a 24-hour forecast.

In early 1970, work turned to operational implementation. Initial development work at Croydon enabled subroutines to be checked by manual computation, but most of the development work was carried out at Poughkeepsie in upstate New York, where staff used the weekly RAF flight from Brize Norton to Washington, taking punched cards in special cases and returning with line-printer paper in large boxes sealed with Sellotape.

Margaret Bushby (Atkins at the time) recalls her first visit to Poughkeepsie in February 1971 with Fred Bushby and Mavis Hinds (Bushby, personal communication): “There was a blizzard while we were there and it was interesting to see that all traffic stopped moving until the snow stopped and snow ploughs could clear the roads. We had time booked on the computer that night. On enquiry we found that it would be effectively closed with no engineers available to provide support. However, Fred managed to get agreement that if we could get there we could use the machine as long as it continued to work. So with true British grit and devotion to duty we drove gingerly over deserted snow-covered roads to get to the IBM site. When we arrived at the car park there were enormous snow-clearing vehicles, more like earth-moving equipment than snow ploughs, racing backwards and forwards clearing the snow. In the dark they appeared large enough to pick up the car and we were a bit concerned about parking it safely! There was only a security man on duty and, when we got inside, one programmer using the 195 to test compilations. He was somewhat annoyed when we turfed him off! I don’t suppose he expected to see anyone else that night.”

The IBM 360/195 computer, which used integrated circuits, was installed in 1971 (Howkins 1973), with a speed of about 4 Mflop and 250 kword of storage. It used punched cards for input, while output was available on line printer and Calcomp microfilm, and IBM 2250 visual display units provided a very limited interactive monitoring capability.


Fig. 3 Dr B. J. Mason, Director-General, inspecting output from the Met Office’s first operational NWP forecast, 2 November 1965 (© Telegraph Group Limited (1965))

The model was coded in IBM Assembler and constructed very carefully to exploit the capabilities of the 360/195. Model fields were stored in rows and transferred to and from the fixed-head disk using in-house data input and output (I/O) routines which completely overlapped with the arithmetic on the central processing unit (CPU), so the machine never had to wait for data. When first run at Poughkeepsie, the CPU loading amazed the IBM engineers.

An ‘octagon’ model (300 km grid length at 60°N) was implemented twice daily for 3-day Northern Hemisphere forecasts in March 1972, while a 100 km grid length European and North Atlantic version (the ‘rectangle’) became fully operational in December 1973 for rainfall prediction up to 36 hours ahead. During its life, the model used three integration schemes: initially a Lax–Wendroff scheme, then a split semi-implicit scheme (Burridge 1975) and finally a split-explicit form (Gadd 1978). The latter reduced the computing time to 12 minutes for a 36-hour forecast, and was used in subsequent models until 2003. By 1976, hemispheric forecasts were extended to 6 days and, although accuracy declined rather rapidly beyond 3 days, valuable warnings of severe weather 5 days in advance were achieved for some notable events.

Observations were received in the synoptic data bank, and relevant parts extracted and decoded to the basic analysis datasets, from where they were used in the analysis. The model initially used an objective analysis similar to that of the 3-level model (Atkins 1970), followed by a diagnostic initialisation using the nonlinear balance and omega equations (Benwell et al. 1971). However, this was later replaced by an orthogonal polynomial analysis. The analysis scheme began life as a sophisticated procedure for fitting observations directly to a global surface (Dixon and Spackman 1970). This proved impractical, and it was modified to compute a local weighted adjustment to the first-guess forecast at each grid point, which was then smoothed using pre-computed 1-dimensional orthogonal polynomials. Through clever use of symmetries in the problem, this procedure was highly efficient (Flood 1977).
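The ‘local weighted adjustment to the first guess’ can be illustrated with a generic successive-correction pass of the Cressman type. This is a sketch of the general approach only; the operational weights and the orthogonal-polynomial smoothing of Flood (1977) differed and are not shown.

```python
import numpy as np

def first_guess_correction(grid_xy, background, obs_xy, obs_value,
                           obs_background, radius):
    """Single pass of a Cressman-style successive correction: spread
    observation-minus-first-guess increments to nearby grid points.
    Generic illustration, not the Met Office scheme."""
    innovations = obs_value - obs_background   # observation increments
    analysis = background.copy()
    for i, point in enumerate(grid_xy):
        d2 = np.sum((obs_xy - point) ** 2, axis=1)   # squared distances
        weights = np.maximum((radius**2 - d2) / (radius**2 + d2), 0.0)
        if weights.sum() > 0.0:
            analysis[i] += np.sum(weights * innovations) / weights.sum()
    return analysis
```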


Global exchange of weather data for NWP grew enormously during the 1970s, and the Met Office installed a pair of Marconi Myriad computers dedicated to message routeing. Maintenance of the operational suite became more formal as reliance on it increased. Developers handed over changes for testing and implementation, rather than following them into the computer room (though emergency changes continued to be made on the floor of the computer room until the mid-1970s at least). The operational suite manager became a person of considerable power (Dave Lowther and Janet Portnall were early incumbents). Another innovation was the inclusion of comments in programs – notably absent from early versions of the 10-level model, despite the complexity of the Assembler programming. With the capability to predict sensible weather elements such as surface temperature and rain, experiments in automated product generation started in earnest, including first attempts at worded forecasts (Wickham 1975) – a capability which remains elusive.

Mesoscale and global (1976–92)

In order to remove limitations on the representation of vertical circulations associated with convection and complex topography, a new higher-resolution model was developed from the early 1970s using a non-hydrostatic, compressible equation set with a semi-implicit treatment of sound waves (Tapp and White 1976). During the 1980s, it was developed into a comprehensive short-range prediction model for the UK (Golding 1990), with a high-order turbulence parametrization throughout the depth of the model atmosphere, three-phase cloud microphysics, a sub-model parametrization of cumulus convection, radiation schemes focused on the radiative interactions of clouds and fog, and a surface-exchange scheme incorporating soil moisture from the off-line MORECS land surface model. It also predicted boundary-layer aerosol to support fog prediction (Ballard et al. 1990). An initialisation scheme was developed based on downscaling of large-scale model fields, with fine-scale information superimposed from surface observations, radar and satellite imagery. The Interactive Mesoscale Initialisation (Wright and Golding 1990) was developed to allow forecasters to impose their interpretation of the observations. Operational trials of the model on a 15 km grid with 16 levels, running up to 18 hours ahead, started in October 1984, and it was gradually brought into operational use over the following 5 years. The dynamical core was updated in 1991 with a semi-implicit treatment of gravity and sound waves, and semi-Lagrangian advection (Golding 1992).

Meanwhile, a new 15-level model was developed to replace the 10-level model in 1982 (Gadd 1985) for use in global aviation and medium-range forecasting. The dynamical core continued to use the split-explicit approach, cast in terrain-following coordinates, while the parametrizations were largely based on those developed in the 5-level global circulation model for the First GARP Global Experiment (FGGE), a unique world-wide intensive observing programme in 1979.

To initialise the model in the tropics, where atmospheric dynamics are not constrained by geostrophy and weather is dominated by convective cloud systems, a new 4-dimensional data assimilation scheme was developed based on multivariate optimum interpolation and nudging. This was developed over several years and tested with tropical data during the GARP Atlantic Tropical Experiment (GATE) in 1974 (Rowntree and Cattle 1983) and later, with great success, during FGGE (Lyne et al. 1982). The analysis procedure started with a first-guess forecast made 6 hours earlier. Observation weights were calculated in terms of the expected error of the first guess, the error of each observing technique, and error correlations between each observation and its neighbours, so as to minimise the statistically expected analysis error. Weighted observation corrections were then nudged into the model over a 6-hour period. In addition to conventional radiosonde and aircraft observations, the system used satellite sounding data (SIRS) at 500 km spacing and cloud-track winds (SATOBs) from geostationary satellite images, both received from the USA, and surface pressure PAOBs derived from the Australian Bureau of Meteorology manual analysis. Without these data, predictions in the data-sparse Southern Hemisphere and tropics would have been impossible.
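In the simplest scalar case – a single observation $y$ co-located with a model value $x$, with uncorrelated errors – the optimum-interpolation weight and the nudging update reduce to

$$ w = \frac{\sigma_b^2}{\sigma_b^2 + \sigma_o^2}, \qquad \frac{\partial x}{\partial t} = (\text{model terms}) + \frac{w\,(y - x)}{\tau}, $$

where $\sigma_b$ and $\sigma_o$ are the expected first-guess and observation errors, and $\tau$ is the relaxation period (6 hours above). The operational scheme was multivariate, with spatial error correlations entering the weights; this scalar form is shown only to make the error-weighting explicit.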

Early versions of the new model and data assimilation system were tested on the Cray 1 supercomputer at the European Centre for Medium-Range Weather Forecasts (ECMWF), and it was implemented with a 1.5° × 1.875° grid (~150 km) and 15 levels on the Cyber 203E (later 205) vector computer, with 1 Mword of memory and 200 Mflop capacity, installed in 1981. In order to achieve maximum speed from this computer, the code was an amalgam of Assembler and Fortran, with each arithmetical vector instruction coded as a subroutine call to the corresponding Assembler instruction. The model was designed with whole fields in memory to optimise the efficiency of the vector calculations. Initially, this required a model boundary at 30°S, though later developments removed this restriction. Since space was required for observations, the assimilation model stored only a few rows in main memory at a time and was written as a global model.

While trials were still in progress, the Falklands War started. Margaret Bushby (personal communication) recalls that: “Over a weekend, Met O 2b staff modified the 15-level model suite to use the assimilation model in forecast mode to provide routine global forecasts. The computer itself was still unstable, with relatively frequent machine faults. Every week or so, I would arrive at work to be told that the model had ‘fallen over’ during the night, either because of a machine fault or because of some unforeseen data problem. We only got through all this because of the commitment and dedication of the Met O 2b staff who voluntarily agreed to be, and frequently were, called in outside office hours, particularly at weekends and on bank holidays.”

Following full operational introduction, the model was run twice daily on the global grid to 6 days and on a regional ~75 km grid to 36 hours. Forecaster intervention became a major activity: checking observations on interactive map and vertical-profile displays, and inputting bogus observations both to support outlier observations and to fit the manual analysis. Normally carried out for the northern midlatitudes, an additional Southern Hemisphere roster was put in place for the Falklands War. Later developments in bogussing led to Met Office tropical cyclone forecasts becoming the best in the world (Heming and Radford 1998). Product generation continued to grow, with large numbers of charts being sent to outstations by fax, and increasing use of model products by customers. The implementation of World Meteorological Organization GRIB code in 1985 enabled rationalisation of exchanges between NMSs and creation of an enhanced backup arrangement following the establishment of Bracknell and Washington as World Area Forecast Centres (Groves, this issue).

Experiments in using models to predict on the monthly time-scale began during this period, initially using the 5-level general circulation model, and then the global forecast model on a 2° × 2.8° grid for operational use. A lagged ensemble approach was taken, using nine forecasts run from 6-hourly initialisations (Murphy and Palmer 1986). While the results were of limited skill, diverging from reality more than from each other, the forecasts for the first 15 days soon came to displace the traditional statistical approach.

Ocean impacts (1975–2004)

An important impact of the weather is through the response of the oceans, so ocean and sea-state models were early additions to NWP. The Met Office involvement in wave forecasting dates from 1974, when provision of weather services to the developing North Sea oil industry needed support in the prediction of long-period swell. A first-generation wave model was quickly formulated and implemented in 1976. A unique feature of this model was the inclusion of shallow-water effects, including refraction (Golding 1978). The model ran on a 100 km grid and predicted 48 components of the wave spectrum, in six frequencies and eight directions. A unique formulation of a hybrid model was subsequently developed in the Met Office, implemented in 1978 (Golding 1983) and later extended to the globe and to incorporate data assimilation.

Also in 1978, a surge model was implemented to support the Storm Tide Warning Service (now renamed the Storm Tide Forecasting Service; see Groves, this issue), set up following the 1953 North Sea disaster (Francis 1985). This model was developed at Bidston by the Institute of Oceanographic Science, now the Proudman Oceanographic Laboratory, initiating a close and fruitful relationship which continues to this day. It was formulated using the depth-integrated shallow-water equations and, by making two runs, with and without meteorological forcing, predicts water-depth anomalies which are then added to the local astronomical tide. The Met Office now runs operational surge models for several parts of the coast at different resolutions. A 3-dimensional shelf model has also been implemented from this collaboration for the more realistic prediction of depth-varying currents on the UK continental shelf.

Prediction of the open ocean has developed more recently, with experiments in the early 1990s leading to implementation of the global Forecasting Ocean Assimilation Model (FOAM) in 1997, using a 1° grid and 20 levels down to 5192 m, and run daily to 6 days ahead. Data assimilation is a critical feature of this model, using satellite information on sea surface temperature and elevation (Bell et al. 2000). The primary model products are currents, temperature and salinity.
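The two-run surge calculation can be written compactly (notation ours): if $\eta$ denotes the modelled sea-surface elevation, then

$$ \eta_{\text{surge}}(t) = \eta_{\text{met+tide}}(t) - \eta_{\text{tide}}(t), \qquad h_{\text{total}}(t) = h_{\text{astro}}(t) + \eta_{\text{surge}}(t), $$

so that errors in the model’s internal tide largely cancel in the difference, and the anomaly is added to the accurately known local astronomical tide $h_{\text{astro}}$.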


Unification (1990–2004)

The period since 1990 has been marked by consolidation of the NWP model and major advances in data assimilation. At its start, the Met Office had two forecast models and a climate model, and the overheads associated with maintaining them were becoming evident. The initial implementation of a unified climate–forecast model was achieved in 1990 (Cullen 1993), extended to include a hydrostatic mesoscale configuration in 1993, and upgraded with a completely new dynamical core in 2003, incorporating a fully compressible, non-hydrostatic, mesoscale capability. The Unified Model (UM) is configured on a latitude–longitude grid, rotated for limited-area integrations, and initially used 19 levels in the global version and 30 in the UK version. A comprehensive suite of parametrizations was included, largely based on the climate model, but with some innovations adopted from the non-hydrostatic mesoscale model. It also includes coupled ocean models and a sophisticated land surface model. Almost immediately, the new flexible capability provided by the UM was called on at short notice to support forecasters in the first Gulf War with mesoscale model forecasts. This capability has since been used regularly in support of international crises.

A major advance in data assimilation was achieved in 1999 with the implementation of a three-dimensional variational (3D-Var) scheme (Lorenc et al. 2000). A 4-dimensional variational scheme is, at the time of writing, in the final stages of operational trials prior to planned implementation in late 2004. These schemes use the statistics of the model and observation errors directly to generate a model state that closely fits both the observations and the model equations. A key step forward is that the fit to the observations is computed in terms of the observed variable, rather than converting it to a model variable first. This has had a significantly positive effect on the benefit obtained from satellite radiance profiles, allowing forecasts in the data-sparse Southern Hemisphere to reach similar quality to those in the north.
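These schemes are conveniently summarised by the cost function they minimise; in the standard notation of the field (not necessarily that of the cited papers),

$$ J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathrm T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \tfrac{1}{2}\big(\mathbf{y}-H(\mathbf{x})\big)^{\mathrm T}\mathbf{R}^{-1}\big(\mathbf{y}-H(\mathbf{x})\big), $$

where $\mathbf{x}_b$ is the background (first-guess) state with error covariance $\mathbf{B}$, $\mathbf{y}$ the observations with error covariance $\mathbf{R}$, and $H$ the observation operator. The fit ‘in terms of the observed variable’ enters through $H$, which can map the model state directly to, say, a satellite radiance, so no inversion to model variables is needed before assimilation.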

Following the failure of the ETA10 computer, installed in 1988, these developments were supported by a series of Cray computers: the YMP, installed in 1990; the C90, a 6 Gflop vector machine, from 1994; and the T3E from 1997 (Burton 1996). The latter marked a change of architecture from vector to massively parallel processing, requiring substantial reprogramming to benefit fully from its 80 Gflop speed, which allowed reduction of the global grid length to 60 km (at 60°N) and of the UK model grid length to about 12 km.

With the rise of the target culture, performance monitoring has become an essential part of the NWP system. All Met Office model forecasts are verified against both observations and analyses of the true atmospheric state. The longest available series records the root-mean-square error of surface pressure over an area surrounding the British Isles since 1965 (Fig. 4). It shows the dramatic improvements in forecast quality that have been achieved since then. More recently, composite indices of model performance have been created from baskets of verification scores, normalised with persistence forecasts to minimise seasonal trends. The global index includes surface pressure and upper-air scores relevant to aviation and large-scale forecasting, while the UK index includes precipitation, cloud, visibility, near-surface wind and temperature. Both have improved steadily since their creation, with a notable increase in global model skill following the implementation of 3D-Var in 1999.

Fig. 4 Root-mean-square error of NWP model predictions of mean sea-level pressure against analyses for an area surrounding the UK, 1967–2003. Persistence shows the error of a 72-hour forecast that was the same as its initial state, providing a useful yardstick against which to measure model performance.
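A persistence-normalised score of the kind that underlies these indices can be sketched in a few lines of Python. The weightings and the exact basket of scores are not given here, so this is an illustrative definition only:

```python
import numpy as np

def skill_vs_persistence(forecast, persistence, truth):
    """Root-mean-square-error skill relative to a persistence forecast:
    1 for a perfect forecast, 0 for one no better than persistence.
    Illustrative normalisation; the operational indices combine many
    such scores into weighted baskets."""
    rmse_fcst = np.sqrt(np.mean((np.asarray(forecast) - np.asarray(truth)) ** 2))
    rmse_pers = np.sqrt(np.mean((np.asarray(persistence) - np.asarray(truth)) ** 2))
    return 1.0 - rmse_fcst / rmse_pers
```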

The problem of forecasting the detailed weather in the first few hours was tackled by developing nowcasting schemes based on radar and satellite data – first the FRONTIERS interactive precipitation nowcasting scheme (Conway and Browning 1988), and then the Nimrod automated scheme dealing with precipitation, fog and low cloud (Golding 1998). In each case, extrapolation of the detailed initial observations provides skill for the first few hours, which is extended to 6 hours by blending with NWP. A nested single-column configuration of the UM was also developed to provide interpretive forecasts for specific locations, taking account of vegetation, buildings and hills in the immediate upwind fetch. Together with the raw model products, these facilities prompted a further shift from manual to automated forecasting, with forecasters increasingly focusing on quality control of ‘first-guess’ products. It also permitted development of new, fully automated services, such as the ‘time and place’ service, which delivers 6-hour nowcasts for any location in the UK as a text message direct to a mobile phone.

The end of the period was marked by successful completion, on schedule and without a break, of one of the largest IT relocations ever attempted, when the Met Office moved its operational headquarters from Bracknell to Exeter. Delivery of a new NEC SX6 supercomputer, with a speed of almost 800 Gflop, followed immediately, and immediate upgrade plans include finer-resolution model grids as well as implementation of 4D-Var.


Future developments

Turning to the future, continued urbanisation, growth in mobility and an increasingly health-conscious society will place increasing demands on forecasts, especially on prediction of high-impact events such as damaging windstorms, floods, storm surges, air-pollution episodes and extreme temperatures. At the same time, technological developments, especially in earth observation and mobile communications, can be expected to play an important role. Met Office strategy for NWP development in the next decade builds on continued steady improvement of the UM and 4D-Var, in two important new areas. The first addresses the higher level of uncertainty in predictions of extreme events, especially windstorms, by implementing a short-range ensemble. The second will focus on the small scale of many severe weather events, especially thunderstorms, through implementation of a convective-scale model with a grid length of about 1 km. Ultimately, the two approaches will need to be combined at the convective scale, but that will take more computer power than can be hoped for in the near future.

Ensemble forecasting

The rationale for ensemble forecasting is that:

(i) the initial state of the atmosphere can never be known perfectly; and

(ii) NWP models are nonlinear, with exponentially growing modes that can amplify small perturbations into major forecast differences.

By running several forecasts from equally likely initial conditions, an ensemble allows estimation of the distribution of possible future states of the atmosphere and hence the uncertainty in the forecast. Figure 5 gives an example of two almost indistinguishable analyses leading to very different forecasts 4 days ahead. Where ensemble members are in close agreement, confidence can be relatively high, but where members diverge, confidence and predictability are correspondingly lower.

Fig. 5 Example of mean sea-level pressure charts from two members (A, left, and B, right) of the ECMWF ensemble at analysis time (upper) and at forecast day 4 (lower)

The Met Office first introduced an operational 30-day long-range ensemble in 1988 (Milton 1990), which has since been extended to the seasonal time-scale using the coupled ocean–atmosphere capability of the UM. Since its introduction, the principle of ensemble prediction has been adapted to shorter time ranges. ECMWF introduced 3–10 day ensembles in 1992, and the Met Office has played a leading role in their application (Legg et al. 2002). Ensemble output is post-processed to provide calibrated probability forecasts for specific locations (Fig. 6) and to estimate probabilities of severe weather, allowing early warnings to be issued 2–3 days earlier than previously (Legg and Mylne 2004). It now forms the basis of medium-range forecasting, allowing identification of the most probable outcome, its uncertainty, and alternative scenarios (Young and Carroll 2002).

Fig. 6 Example meteogram of minimum and maximum surface temperatures at a specific location up to 10 days ahead, generated from calibrated ECMWF ensemble forecasts. In each symbol the central box represents the 25–75 percentile range, the vertical lines delineate the 95% confidence range, and the horizontal bar represents the median temperature. Both a downward trend in the forecast and decreasing confidence at longer range are readily apparent.
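The post-processing of ensemble output into probabilities rests on simple counting across members. The Python sketch below shows the raw (uncalibrated) version; operational products are calibrated against past performance, which is not attempted here:

```python
import numpy as np

def exceedance_probability(members, threshold):
    """Fraction of ensemble members exceeding a threshold - the raw
    probability behind products such as early warnings of severe weather.
    Illustrative only: operational products are calibrated before issue."""
    return np.mean(np.asarray(members) > threshold, axis=0)

def ensemble_spread(members):
    """Standard deviation across members, a simple measure of confidence."""
    return np.std(np.asarray(members), axis=0, ddof=1)

# e.g. 51 members of 10 m wind speed on a small grid
rng = np.random.default_rng(1)
winds = 15.0 + 4.0 * rng.standard_normal((51, 20, 30))
p_gale = exceedance_probability(winds, threshold=17.2)  # gale force, m/s
```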

For short-range forecasting over 1–3 days, predictability is higher, and a deterministic forecast is usually adequate for the synoptic-scale flow. However, as feared by Sutton (1954), the uncertainty sometimes grows much more rapidly, especially in the vicinity of explosively developing cyclonic storms such as those that struck north-west Europe in December 1999. Also, even when the broad-scale flow is predictable, the finer details of temperature and precipitation, for instance, may remain uncertain. To address these issues, the Met Office is developing a short-range ensemble using a 24 km grid regional model covering the Atlantic and Europe, nested within a global ensemble, to allow good resolution of uncertainties in Atlantic weather systems. As well as varying the initial conditions, perturbations are applied to the model parametrizations to account for uncertainty in their approximations. If it performs well and resources are available, the plan is to implement such an ensemble in 2006. Use of different models can further improve the representation of forecast uncertainty, and it is hoped to achieve this by exchanging short-range ensemble forecasts with other NWP centres.

Convective-scale NWP

Practical atmospheric models, such as the UM, restrict the scales that are represented by averaging on to a grid. The spatial ‘resolution’ is often taken to be the distance between grid points, but in practice the shortest wavelength that can be represented accurately is four or five grid lengths (e.g. Lean and Clark 2003). Higher resolution can be obtained either through nesting of limited-area models or through variable resolution. The effects of unresolved scales must be approximated using a ‘parametrization’. Probably the most important unresolved processes are moist convection and interaction with orography. The nature of moist convection means that parametrization is only feasible by averaging the properties of a collection of clouds. When only a few clouds are present in the resolved wavelength, parametrized convection either fails or, at best, provides only statistical samples from a stochastic process (e.g. Cohen and Craig 2003).

Operational limited-area models currently have a grid length of around 10 km, and so represent flow on roughly the 50 km scale; it is impossible to predict the behaviour of an individual thunderstorm with such a model. Reducing the grid length to 3–4 km can improve the prediction of flow over orography, but convection remains a major problem. Individual clouds cannot be represented, yet the structure of the cloud field (e.g. inter-cloud spacing) may be close to the resolved scale, so ‘average’ parametrization schemes cannot work. The Met Office plans to avoid this by reducing the grid length yet further, to about 1 km, at which the interaction between storm cells is well represented. Realistic results have been obtained, though the detailed structure of a storm cell still cannot be properly represented, so there remains considerable uncertainty due to model error at small scales. Figure 7 shows an experimental forecast using the UM with a 1 km grid, compared with radar measurements from the Chilbolton radar.

If the high-resolution model is used as a direct replacement for lower resolution, the detailed cloud field has little predictive skill, but its statistical properties are an improvement over those available from parametrization. However, to predict the development and location of specific thunderstorms, at least over a few hours, it will be necessary to assimilate initial data at very high resolution, using high temporal and/or spatial resolution observations from ground-based or satellite-borne instruments. Assimilation of data from weather radar is particularly important, since both precipitation and wind can be deduced from the reflected radar signals at high space and time resolution. Some success has been demonstrated, though work in this area has also highlighted the difficulties to be overcome before an operational capability can be achieved. It is anticipated that sufficient computer power to carry out short operational predictions over the more populous parts of the UK could be available by the end of the decade. If the realistic storm structures shown in Fig. 7 can be aligned with the true storms observed by radar, a major step forward will be achieved in the prediction of severe weather associated with thunderstorms.


Fig. 7 360° PPI (Plan Position Indicator) scans of reflectivity (dBZ) from (a) the Chilbolton 3 GHz radar at 1° elevation with ground clutter removed and interpolated to a 1 km grid, and (b) a 7-hour forecast using the Met Office Unified Model at 1 km resolution. Note that the model time is 45 minutes later than the radar time.

Conclusion

In summary, the Met Office has achieved its pre-eminent position in NWP through the steady application of state-of-the-art scientific research, together with innovative exploitation of new opportunities by many determined individuals, sometimes beyond their scientific justification. As a result, Met Office models have not only achieved competitive performance, but have also often moved well ahead of other centres in specific capabilities.

Acknowledgements

We are indebted to many colleagues for their input to this paper. In particular, the memories of Mavis Hinds, taken from her article “Computer story”, and unpublished recollections from Margaret Bushby and Sir John Mason are gratefully acknowledged.

References


Ashford, O. M. (1985) Prophet or professor? Adam Hilger Ltd, Bristol

Atkins, M. J. (1970) Objective analysis of upper air height and humidity data on a fine mesh. Meteorol. Mag., 99, pp. 98–109

Ballard, S. P., Golding, B. W. and Smith, R. N. B. (1990) Mesoscale model experimental forecasts of the haar of north east Scotland. Mon. Wea. Rev., 119, pp. 2107–2123

Bell, M. J., Forbes, R. M. and Hines, A. (2000) Assessment of the FOAM global data assimilation system for real-time operational ocean forecasting. J. Mar. Sci., 25, pp. 1–22

Benwell, G. R. R., Gadd, A. J., Keers, J. F., Timpson, M. S. and White, P. W. (1971) The Bushby–Timpson 10-level model on a fine mesh. Sci. Pap. No. 32, Meteorological Office, HMSO, London

Bjerknes, V. (1904) Das Problem der Wettervorhersage, betrachtet vom Standpunkte der Mechanik und der Physik. Meteorol. Z., 21, pp. 1–7

Bull, G. A. (1966) Three parameter atmospheric model used for numerical weather prediction. Forecasting Tech. Branch Mem. No. 9 (unpublished, copy available from National Meteorological Library)

Burridge, D. M. (1975) A split semi-implicit reformulation of the Bushby–Timpson 10-level model. Q. J. R. Meteorol. Soc., 101, pp. 777–792

Burton, P. (1996) From Meteor to T3E. Met Office NWP Gazette, 3, pp. 1–5 (copy available from National Meteorological Library)

Bushby, F. H. and Hinds, M. K. (1954) The computation of forecast charts by application of the Sawyer–Bushby two-parameter model. Q. J. R. Meteorol. Soc., 80, pp. 165–173

Bushby, F. H. and Huckle, V. M. (1957) Objective analysis in numerical forecasting. Q. J. R. Meteorol. Soc., 83, pp. 232–247

Bushby, F. H. and Timpson, M. S. (1967) A 10-level atmospheric model and frontal rain. Q. J. R. Meteorol. Soc., 93, pp. 1–17

Bushby, F. H. and Whitelam, C. J. (1961) A three-parameter model of the atmosphere suitable for numerical integration. Q. J. R. Meteorol. Soc., 87, pp. 374–392

Charney, J. G., Fjortoft, R. and Von Neumann, J. (1950) Numerical integration of the barotropic vorticity equation. Tellus, 2, pp. 237–254

Cohen, B. G. and Craig, G. C. (2003) Fluctuations in an equilibrium convective ensemble. Parts I and II. Theoretical formulation and experimental validation. JCMM Internal Report No. 137 (unpublished, copy available from National Meteorological Library)

Conway, B. J. and Browning, K. A. (1988) Weather forecasting by interactive analysis of radar and satellite imagery. Philos. Trans. R. Soc. London, A324, pp. 299–315


Cullen, M. J. P. (1993) The unified forecast/climate model. Meteorol. Mag., 112, pp. 81–94

Dixon, R. and Spackman, E. (1970) The three-dimensional analysis of meteorological data using orthogonal polynomial base functions. Sci. Pap. No. 31, Meteorological Office, HMSO, London

Flood, C. R. (1977) The U.K. operational objective analysis scheme. Met O 2b Tech. Note No. 41 (unpublished, copy available from National Meteorological Library)

Francis, P. E. (1985) Sea surface wave and storm surge models. Meteorol. Mag., 114, pp. 234–241

Gadd, A. J. (1978) A split explicit integration scheme for numerical weather prediction. Q. J. R. Meteorol. Soc., 104, pp. 569–582

—— (1985) The 15-level weather prediction model. Meteorol. Mag., 114, pp. 222–225

Golding, B. W. (1978) A depth dependent wave model for operational forecasting. In: Favre, A. and Hasselmann, K. (Eds.) Turbulent fluxes through the sea surface, wave dynamics and prediction, Plenum Press, New York

—— (1983) A wave prediction system for real-time sea state forecasting. Q. J. R. Meteorol. Soc., 109, pp. 393–416

—— (1990) Met Office mesoscale model. Meteorol. Mag., 119, pp. 81–96

—— (1992) An efficient non-hydrostatic forecast model. Meteorol. Atmos. Phys., 50, pp. 89–103

—— (1998) Nimrod: A system for generating automated very short range forecasts. Meteorol. Appl., 5, pp. 1–16

Heming, J. T. and Radford, A. M. (1998) The performance of the United Kingdom Meteorological Office Global Model in predicting the tracks of Atlantic tropical cyclones in 1995. Mon. Wea. Rev., 126, pp. 1323–1331

Hinds, M. K. (1981) Computer story. Meteorol. Mag., 110, pp. 69–81

Howkins, G. A. (1973) The Meteorological Office 360/195 computing system. Meteorol. Mag., 102, pp. 5–14

Knighting, E. (1959) Meteor. Meteorol. Mag., 88, pp. 266–269

Knighting, E., Corby, G. A. and Rowntree, P. R. (1962) An experiment in operational numerical weather prediction. Sci. Pap. No. 16, Meteorological Office, HMSO, London

Lean, H. W. and Clark, P. A. (2003) The effects of changing resolution on the mesoscale modelling of line convection and slantwise circulations in FASTEX IOP16. Q. J. R. Meteorol. Soc., 129, pp. 2255–2278

Legg, T. P. and Mylne, K. R. (2004) Early warnings of severe weather from ensemble forecast information. Wea. Forecasting, 19, pp. 891–906

Legg, T. P., Mylne, K. R. and Woolcock, C. (2002) Use of medium-range ensembles at the Met Office I: PREVIN – a system for the production of probabilistic forecast information from the ECMWF EPS. Meteorol. Appl., 9, pp. 255–271

Lorenc, A. C., Ballard, S. P., Bell, R. S., Ingleby, N. B., Andrews, P. L. F., Barker, D. M., Bray, J. R., Clayton, A. M., Dalby, T., Li, D., Payne, T. J. and Saunders, F. W. (2000) The Met. Office global three-dimensional variational data assimilation scheme. Q. J. R. Meteorol. Soc., 126, pp. 2991–3012

Lyne, W. H., Swinbank, R. and Birch, N. T. (1982) A data assimilation experiment and the global circulation during the FGGE special observing periods. Q. J. R. Meteorol. Soc., 108, pp. 575–594

Mason, J. and Flood, C. (2004) Obituary: Fred Bushby. Weather, 59, p. 231

Meteorological Office (1966) Press conference. Meteorol. Mag., 95, pp. 28–30

Milton, S. F. (1990) Practical extended-range forecasting using dynamical models. Meteorol. Mag., 119, pp. 221–233

Murphy, J. M. and Palmer, T. N. (1986) Experimental monthly long-range forecasts for the United Kingdom, Pt II. A real time long range forecast by an ensemble of numerical integrations. Meteorol. Mag., 115, pp. 337–349

Peters, S. P. (1955) The Met Office faces the future: Forecasting and public services. Meteorol. Mag., 84, pp. 192–196

Platzman, G. W. (1979) The ENIAC computations of 1950 – gateway to numerical weather prediction. Bull. Am. Meteorol. Soc., 60, pp. 302–312

Richardson, L. F. (1922) Weather prediction by numerical process. Cambridge University Press

Rowntree, P. R. and Cattle, H. (1983) The Meteorological Office GATE modelling experiment. Sci. Pap. No. 40, Meteorological Office, HMSO, London

Sawyer, J. S. and Bushby, F. H. (1953) A baroclinic model atmosphere suitable for numerical integration. J. Meteorol., 10, pp. 54–59

Sutton, O. G. (1954) The development of meteorology as an exact science. Q. J. R. Meteorol. Soc., 80, pp. 328–338

Tapp, M. C. and White, P. W. (1976) A non-hydrostatic mesoscale model. Q. J. R. Meteorol. Soc., 102, pp. 277–296

Wickham, P. G. (1975) Automatic worded weather forecasts. Meteorol. Res. Comm. Pap. No. 381 (unpublished, copy available from National Meteorological Library)

Wright, B. J. and Golding, B. W. (1990) The interactive mesoscale initialization. Meteorol. Mag., 119, pp. 234–244

Young, M. V. and Carroll, E. B. (2002) Use of medium-range ensembles at the Met Office 2: Applications for medium-range forecasting. Meteorol. Appl., 9, pp. 273–288

Correspondence to: Dr B. Golding, Met Office, FitzRoy Road, Exeter EX1 3PB. e-mail: [email protected]
© Crown copyright, 2004. doi: 10.1256/wea.113.04