15 November 2012

Forecasting Hurricane Sandy

Right off the bat, I want to say that I'm biased here. I've been to the National Hurricane Center, have met some of the forecasters there, and even knew some of them from graduate school at Colorado State University. These are some of the best forecasters in the world, and it only adds to their credibility that they do their jobs under the intense pressure of administrative budgets and the knowledge that their warnings and predictions save lives. After watching a hybrid storm like Hurricane ("Superstorm") Sandy make landfall in the most densely populated region of North America, blaming the messenger is not really the best way to figure out what might have gone wrong. The NHC did their jobs, and did them exceptionally well. The rest falls to the planning and preparations of politicians, emergency managers, and residents. No one is really to blame when people die in a natural disaster, least of all the victims, but people everywhere can often do just a little bit more to improve their own resilience to events like this that remain beyond their control. When there's such a good forecast to tell people when, and in what manner, such a disaster may strike, well, isn't that a red-letter day for science, instead of an opportunity to find minor weaknesses and place blame? Sure, it can get better, but only if the politicians recognize that better requires $$$.

This video from YouTube shows the progression of NHC forecasts and advisories over the entire lifespan of Hurricane Sandy. There are numerous elements here that can be pointed out, but the take-away message is this: "Damn! Those forecasts were good!"


This short YouTube video from NOAA shows a satellite view of Sandy along with the 5-day forecast track leading up to landfall, to emphasize just how useful and accurate that forecast was.


Weather Underground founder and chief scientist Dr. Jeff Masters blogged a couple of figures on the forecast track errors. To interpret these, one needs to recognize that errors in a forecast hurricane track have two components, because every forecast position also has a time attached to it. One way to get an error is if the line of the track center is off by a number of miles to the left or right; that cross-track error is pretty easy to calculate. The other way to get an error is if the storm moves faster or slower than expected, pushing the actual storm position ahead of, or drawing it back behind, the forecast position along that track. The location of anticipated landfall can be perfect, but not so helpful if the storm arrives 12 hours before it is expected; by the expected time, the storm can actually be dozens of miles inland, so that along-track (timing) error is easy to calculate too. At the 5-day forecast horizon, which is as far out as the NHC provides in public advisories, the forecasts looked like this:
From Dr. Jeff Masters' Weather Underground blog entry for 2 Nov 2012, Figure 3.
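
For readers who like to see the arithmetic, here is a minimal sketch of that two-part error decomposition in Python. It is not the NHC's verification code, and every position in it is made up purely for illustration; it simply projects the forecast and observed positions onto a local flat plane and splits the error vector into an along-track (timing) piece and a cross-track (left/right) piece.

import math

R_NM = 3440.065  # Earth radius in nautical miles

def to_xy(lat, lon, ref_lat, ref_lon):
    # Project a lat/lon point onto a local flat plane (x east, y north, in nm)
    # centered on a reference point; good enough for a few hundred miles.
    x = math.radians(lon - ref_lon) * math.cos(math.radians(ref_lat)) * R_NM
    y = math.radians(lat - ref_lat) * R_NM
    return x, y

def track_error_components(prev_fcst, fcst, actual):
    # prev_fcst, fcst: consecutive forecast positions (lat, lon) that define
    #                  the forecast track direction at the valid time.
    # actual:          observed position (lat, lon) at that same valid time.
    # Returns (along_track_nm, cross_track_nm): positive along-track means the
    # storm ran ahead of schedule; positive cross-track means right of track.
    fx, fy = to_xy(*fcst, *prev_fcst)      # forecast displacement vector
    ax, ay = to_xy(*actual, *prev_fcst)    # actual displacement vector
    norm = math.hypot(fx, fy)
    ux, uy = fx / norm, fy / norm          # unit vector along the forecast track
    ex, ey = ax - fx, ay - fy              # error vector at the valid time
    along = ex * ux + ey * uy              # timing (along-track) component
    cross = ex * uy - ey * ux              # left/right (cross-track) component
    return along, cross

# Entirely hypothetical positions, for illustration only:
along, cross = track_error_components(
    prev_fcst=(35.0, -72.0),   # forecast position 12 hours earlier
    fcst=(38.0, -74.0),        # forecast position at the valid time
    actual=(38.3, -74.4),      # where the storm actually was at that time
)
print(f"along-track error: {along:.0f} nm, cross-track error: {cross:.0f} nm")

The sign conventions here are arbitrary; the point is simply that a single miss distance hides two very different kinds of error, one in space and one in time.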

Taking into account all of the forecasts that these models made over Sandy's lifetime, the statistics shape up like this:
From Dr. Jeff Masters' Weather Underground blog entry for 2 Nov 2012, Figure 2.

One thing you'll note here is that several models are used in making a track forecast. The European Centre for Medium-Range Weather Forecasts (ECMWF) has been a global leader in weather and climate modeling for some time, and is generally considered the principal rival of the US National Centers for Environmental Prediction (NCEP). The other models shown there are the Hurricane Weather Research and Forecasting (HWRF) model, the Geophysical Fluid Dynamics Laboratory (GFDL) model, and the Global Forecast System (GFS), which handles much of NCEP's routine forecasting duties.

At a 3-day forecast horizon, we can see that the models were all pretty much equivalent in accuracy. The issue that has been raised in the media, especially by USA Today, was that the ECMWF forecast was so much more accurate at the 5-day horizon than the other, largely US-based models. Those who know how this all works are not worrying over whether the Europeans are way ahead of the US in forecasting, as USA Today suggested. This is not a competition, one country's scientists against another, as depicted there, but a collaboration. Is this a failing of American science? Not by a long shot! I'll tell you why.

First, the NHC uses an ensemble (collection) of models because no single model gets it right all the time, and each model has its own pedigree with different strategies and histories of development. Ensemble forecasting is a staple of numerical weather prediction. Tomorrow's forecasts for your own city's low and high temperatures, the probability and amount of precipitation, even the type of precipitation, all come from a collection of model results that are all just slightly different in their output. The key is an assessment of how different those results are: the more variability among the possible outcomes, the less confidence can be placed in the final forecast. For the forecasts of Sandy's track, we can see that the models all had errors of around 60-100 nm at the 3-day horizon but then diverged significantly by the 5-day horizon. That demonstrates that our current predictive models are generally good out to a certain point, but that the inherent chaos of the atmosphere takes over after that. That's a basic tenet of atmospheric modeling.
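
To make that last point concrete, here is a small, purely illustrative Python sketch of how ensemble spread works as a confidence measure. The model positions are invented for the example (they are not actual Sandy forecasts); the idea is simply that when the members cluster tightly the consensus deserves more trust, and when they fan out it deserves less.

import math

R_NM = 3440.065  # Earth radius in nautical miles

def distance_nm(p, q):
    # Great-circle (haversine) distance between two (lat, lon) points, in nm.
    lat1, lon1, lat2, lon2 = map(math.radians, p + q)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R_NM * math.asin(math.sqrt(a))

def ensemble_spread(members):
    # Consensus (mean) position of the ensemble, plus the average distance of
    # the members from that consensus -- the "spread".
    mean = (sum(lat for lat, _ in members) / len(members),
            sum(lon for _, lon in members) / len(members))
    spread = sum(distance_nm(m, mean) for m in members) / len(members)
    return mean, spread

# Invented 3-day positions from four models, clustered fairly tightly:
day3 = [(36.1, -71.8), (36.4, -71.5), (36.0, -72.2), (36.6, -71.3)]
# Invented 5-day positions from the same models, fanned out much more widely:
day5 = [(39.4, -74.4), (38.9, -73.2), (37.5, -70.0), (40.2, -69.5)]

for label, members in (("3-day", day3), ("5-day", day5)):
    mean, spread = ensemble_spread(members)
    print(f"{label} consensus: {mean[0]:.1f}N {abs(mean[1]):.1f}W, "
          f"spread ~{spread:.0f} nm")

A forecaster looking at numbers like these would treat the tight 3-day cluster as actionable and the wide 5-day fan as a reason to hedge, which is just the pattern the figures above show for Sandy.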

Second, the forecasters use the models as guidance, not as the final determination of the forecast to be issued. Experts, with their own knowledge of these models' strengths and weaknesses, made the determination that the two models that had Sandy on a collision course with New Jersey were more likely correct than the other two that had Sandy staying out over the Atlantic Ocean. A final forecast is only as good as the forecaster's judgment when it comes to trusting the model output, or applying some local (often tacit) knowledge that the model is not always capable of carrying. The model is just numbers; the forecaster's job is to know what all those numbers mean in local context.

Third, and most important, comes the adage "better safe than sorry." If the forecasters had decided to give more weight to the models that kept Sandy's forecast out over the ocean, preparations and evacuations in the NJ/NY area might have been delayed for a couple of days. Those days would have put lives at risk in what would eventually become the disaster zone around Sandy's landfall. As it is, we still build on shifting coastal zones and in vulnerable floodplains, we still have great variability in building codes from place to place according to the expected hazards, and we still have human agency and choice in the decision to prepare. On top of all that variability, the hurricanes themselves bring a wide variety of threats to a landfall location. So many measurable things were forecast with high skill: track location, timing of landfall, intensity of the storm, height of storm surge, rainfall amounts, and locations that were vulnerable. Local and federal emergencies were declared even before Sandy made landfall, assessment and recovery efforts were started even before the storm was done, and disaster declarations were made almost immediately after the clouds cleared. There are also many unmeasurable variables that require attention from the media, especially public education and warning. The choice to issue warnings to the coastal regions in Sandy's projected path, and the choice of community leaders to advise residents on the local preparations and potential response needs, were actually all highly successful in this storm. Compare the cost of hurricane evacuation, quoted as recently as 2005 at US$1M per mile of coastline affected (and likely more now), with the value of a single human life. There's no contest: better safe than sorry. One can point at the forecasters and the community leaders, and the choices that they all made professionally in the effort to protect citizens and their property, but the ensuing disaster was not of their making.

So don't see it as Americans against Europeans in the race to better forecasts, with winners and losers. If anything, it's a friendly competition, with much trading of ideas and concepts and answers. I've had the pleasure of meeting forecasters and modelers in Europe too, and if there's one thing that scientists everywhere are good at, it's arguing over approaches and methods, and giving each other ideas on how to make things work better. The community literature is enough to convince anyone of that: multi-national, even global teams work on all of these models together, trying to make them all better at their representations of reality and predictions of the future. As to whether this storm was a "wake-up call" to better preparation, greater resilience, and the decisions that still need to be made on so many topics... that remains to be seen.
