14 December 2012

Dec 2012 List of Reports on Water and Related Issues

This is a regular, mid-month update to the never-ending list of released reports on water and related issues that I come across through e-mail, twitter, and any number of outlets. You can check out the previous list, posted in mid-November, as well as prior lists at irregular intervals for even more informative reading. If you know of a report that I have not listed, please e-mail me with a title and link!

From various sources:
From the Pacific Institute:
From the World Bank:
From the European Environment Agency:
From the US National Research Council:
From the US Congressional Research Service, via the Federation of American Scientists:

10 December 2012

Monday Infographics: Urban Impacts on Streams

The US Geological Survey conducts the National Water-Quality Assessment (NAWQA) Program with a project on the "Effects of Urbanization on Stream Ecosystems." Specifically, the project has examined "the response of a stream's biological communities, hydrology, habitat, and stream chemistry to urban development, and how these responses vary across the country." The study included ten urban metropolitan units across the continental United States, covering much of the range of climate and urban density in our largest cities. Scientists at USGS/NAWQA recently produced an interesting and detailed graphic product on "Stream Ecosystems Change With Urban Development." It is presented here in two parts to provide a readable resolution, but you can also download the full-size pdf version to see it pieced together.

USGS NAWQA General Information Product 143, "Stream Ecosystems Change With Urban Development," left side. Download the complete, full-size pdf here.
USGS NAWQA General Information Product 143, "Stream Ecosystems Change With Urban Development," right side. Download the complete, full-size pdf here.

08 December 2012

Dissertation Proposal Excerpt, part 2

A few more paragraphs excerpted from my Ph.D. Dissertation Proposal, currently in preparation. This is a continuation of the narrative that I began posting yesterday.
There is an ever-present call for greater accuracy, confidence, and added value in weather forecasts and climate predictions, to achieve applicability in decision-making for societal resilience and for economic and ecosystem sustainability. Climate modelers are being asked to provide predictions of future temperature and precipitation under climate change scenarios not just at continental scales, but for individual states and cities that may be at risk for overall warming, diminished water supply, and extreme events such as heat waves and floods. Weather forecasters issue timely warnings on severe weather using models that rely on a growing awareness of the interactions between the land surface and the atmosphere over short time scales, such as the growth of a thunderstorm, and over longer periods such as the development of a seasonal drought. Hydrologists apply knowledge of weather and the land surface to the understanding and forecasting of numerous events with similarly broad societal impacts, from urban flash floods to regional river floods, from rainfall deficits on local farms to seasonal droughts affecting entire countries, and from the effects of a forest fire on a city’s water supply to the long-term impacts of climate change on global water resources. Ecologists and numerous specialized communities are concerned about the potential impacts of climate change and extreme events on the health and spatial distributions of forests, animal species, agricultural lands, and the human-built environment.

From their individual perspectives, each of these communities of practice has developed sophisticated tools and methods, both for their own understanding of the system under study and to deliver their results for application and use by decision-makers and the public. In some cases additional value and accuracy may be achieved by re-evaluation of both overt and implicit assumptions that accompany these modeling and forecast efforts. In many areas of the modeling and forecast endeavor, those methods originally devised for the parameterization of observed phenomena can now be supplemented or replaced with more recent empirical and analytical datasets. With the growth of specialized understanding in many of these subjects, some parameterized forecast system components can now be abstracted entirely to employ physical representations of observed processes. The analyses to support that reformulation can now be provided by aligned communities such as remote sensing specialists, foresters, ecologists, etc. The development of a comprehensive and physically accurate understanding of the natural world based on both structure and function of the ecosystem requires contributions from numerous specialized fields. This multi-disciplinary approach is recognized as the most viable path to deeper understanding and greater accuracy when attempting to gauge impacts both of natural systems on human decision-making, and of humans on their natural environment.

Historically, many such modeling systems have augmented a detailed treatment of the central problem with coarser representations of “external” processes, those aspects of the physical system that are essentially outside the modelers’ focus. However, an examination across multiple fields of inquiry into these various processes shows that one model’s “externalities” have been addressed as another’s central focus, and vice versa. The long-term development of climate and weather forecast models is one example of the improvement in accuracy that may be obtained, not just in forecast skill but also in the accessibility of physical representation and process understanding, by the combination of process models from different communities and approaches. In that development, atmospheric models were originally developed with a coarse representation of surface conditions, while land surface models originally treated the atmosphere as an external source of “forcing” conditions. The coupled land–atmosphere model is now a staple of forecast centers around the world, demonstrating accuracy in the representation of the natural system, and predictive skill, that far exceed earlier separate and uncoupled modeling efforts. Coupled atmosphere–ocean general circulation models (AOGCMs) are employed for the prediction of climate change and its impacts, and are expected to become more accurate and even more useful to decision-makers with improvements to the component representation of land surface processes.

It is a persistent challenge for modelers to reduce a problem to a tractable scope and scale, while also allowing for the emergence of detailed response patterns, using present computational methods for fully nonlinear systems. Earth’s atmosphere and land surface are tied in dynamic mutual feedback processes over multiple spatial and temporal scales, the full scope and detail of which remain difficult for us to formulate. Land–atmosphere models typically represent only a fraction of the complexity that is observed in the real system. Modeling methods attempt to address this issue from a conceptual and computational viewpoint with simplifying assumptions and parameterizations. The spatiotemporal scales of interest remain important in helping to shape the model dynamics: climate models are oriented on long-term simulations of conditions, while surface-based models must consider the rapid changes that come with the persistent fine-scale redistribution of water and the actions of humans on their environment. Some of the highest-resolution global climate models employ simulation grid cells at a scale of 10-100 kilometers; the entire domain of a land surface process model, formulated for an area that might be of interest to natural resource planners and policy-makers, could fit into a single climate model grid cell several times over. These spatial and temporal scales of interest overlap at the domain size and scope of weather modeling over land areas for both forecasting and system understanding.
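
To put that scale mismatch in rough numbers, here is a back-of-envelope sketch in Python; the 10-100 km grid spacing comes from the paragraph above, while the watershed domain size is an assumed, illustrative value:

    # Back-of-envelope scale comparison. The 10-100 km grid spacing comes
    # from the text above; the watershed domain size is an assumed,
    # illustrative value for an area of planning interest.
    climate_cell_km = 50.0     # mid-range climate-model grid spacing, km
    watershed_km2 = 250.0      # assumed land-surface-model domain, km^2

    cell_area_km2 = climate_cell_km ** 2
    print(f"one {climate_cell_km:.0f} km cell covers {cell_area_km2:.0f} km^2,")
    print(f"about {cell_area_km2 / watershed_km2:.0f} domains of {watershed_km2:.0f} km^2 each")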

Accordingly, the development of such coupled process models is ongoing. Among the goals of continued work in this area is the application of land–atmosphere models to predictions of extreme hydrologic events such as droughts and floods. At the same time, advances in ecological process modeling and remote sensing can provide valuable additional information to these efforts where knowledge gaps exist. Data assimilation efforts in weather forecasting already employ remote sensing and other observational products at various levels of processing and accuracy, demonstrating one way for models to maintain fidelity with natural systems. Another avenue for continued improvement, especially by the incorporation of methods from different specializations, is found in the way the land surface is represented in these climate- and weather-oriented modeling systems. Decades of remote sensing technology and datasets now support landscape-scale ecological models that describe the states and dynamics of land cover and human land use, as well as disturbances to those from both natural and anthropogenic sources. All of these aspects of the land surface provide feedback to the atmosphere through various parameters and processes, with a range of impacts on the surface radiative balance, land–atmosphere heat exchange, and the local and regional hydrologic cycle.
Again, feedback is welcome!

07 December 2012

Dissertation Proposal Excerpt

The paragraphs below are excerpted from my Ph.D. Dissertation Proposal, currently in preparation. I'll be using parts of the dissertation proposal for some upcoming fellowship proposals too...
The disturbance of forest areas, large portions of which have been cleared by human means around the world, has become one of the more challenging and pertinent aspects of global environmental change. The condition of a forested watershed has a direct impact on the quantity and quality of water that is available for human and ecosystem uses. Various aspects of the forest ecosystem affect the surface energy balance and the partitioning of precipitation into runoff, stream flow, and groundwater recharge. The operating hypothesis of this work holds that the wide variety of forest disturbances (e.g. drought, defoliation, windthrow, fire, thinning or partial harvest, and clear-cutting) produces a spectrum of such impacts on the land-based hydrologic cycle. Each of these disturbance types is tied closely, by numerous and complex pathways, to climate and weather conditions and anthropogenic influences. The overall hydrologic impacts of these disturbances arise from similarly varied signatures in the feedback of the land surface to the atmosphere, responses that may initially be observed as subtle changes in the local energy balance and the exchange of heat and moisture within the disturbed forest area. The goal of this work is to provide, using existing and new tools in novel combinations, some physical explanation for the observed spectrum of impacts on the local and regional hydrologic cycle due to various observed forest disturbances.

Scientists, researchers, and natural resource managers seek to understand the hydro-ecological impacts of anthropogenic disturbance and climate variability while the economic needs of the community, the spread of invasive species, and the threat of catastrophic forest fires persist. Significant attention has been given to the loss of tropical forest cover in the past several decades, but concern is also rising for the health and fate of temperate and boreal forests. Expanding urban areas, agricultural land use, resource extraction, land and timber management, natural ecological cycles, and climate change have all shaped forest health for far longer than we have been paying attention to such influences. Valuable previous work has considered these problems from varied and often disjoint perspectives, but we now have tools and methods that, in combination, can provide a successful approach to such issues. Specifically, I propose to demonstrate the hydrologic impacts of forest disturbances by combining several tools that are now available: remote sensing products and analytical methods for the estimation of forest disturbance severity and vegetation health, ecological models of forest disturbance and succession, and coupled land–atmosphere models.

We can witness these changes as they occur using many methods, from local hydrological and climatological observations to space-based remote sensing platforms, but we often wonder about the ultimate source and mechanism of the change that has been found. We are similarly curious about the ways that projected future climate conditions will become apparent at the local scale, but we must recognize in our approach to this problem that the states and processes of the atmosphere and the land surface are necessarily and closely intertwined. Changes at the land surface, as broad as shifts in land cover and land use and as narrow as the treatment of a fire-scarred hillside above a drinking water reservoir, are finally being recognized as significant management decisions. These are efforts that require the consideration of detailed and far-reaching impacts on both the natural and built environments, the resilience of ecological and human systems, and the sustainability of biodiversity, ecosystem services, and human health. It is vital that we incorporate the best available science in decision-making efforts in order to ensure progress on these goals.
Feedback welcome, and there's more to come!

03 December 2012

Monday Infographics: World Bank Climate Report

This weekend marked the middle of the current gathering of delegates to the UN Framework Convention on Climate Change (UNFCCC), which is being held this year in Doha, Qatar. This is the 18th Conference of the Parties to the UNFCCC (COP-18) and the 8th Meeting of the Parties to the Kyoto Protocol (MOP-8), along with several other key meetings in the same place. Existing cooperation under the Kyoto Protocol will expire at the end of this year unless some extension, or an unlikely new agreement, is negotiated. This may be the last chance for developed and developing nations alike to come to some agreement on emissions abatement, cost-sharing, funds transfers and "carbon taxes," and any effort to curb our destruction of the very Earth systems (oceans, forests, rivers) that may save our own and innumerable other species.

I've posted previously on the UNFCCC COP process:
My goodness, was it really that long ago that I posted those? Time flies...

While much of the UNFCCC negotiation remains focused on the notion that we want to avoid 2°C of warming (actually, I suppose the meetings remain focused on even more fundamentally political issues, such as responsibility), the World Bank recently published a report "Turn Down the Heat" indicating that we are actually on a path to 4°C warming by 2100. They produced an infographic to accompany the report's release:

Companion infographic to the World Bank report "Turn Down the Heat."

The science of climate change tells us that setting our time horizon at 2100 is actually a highly arbitrary choice. We could set a goal at 2100, work to meet that (and I mean actual actions, not just more negotiating over who and what and where and how much $$$), and still see additional warming beyond that date. There is a certain amount of thermal inertia in the Earth system that ensures we are not seeing all of the potential warming immediately; even if we stopped all carbon emissions right now, the climate would still warm for centuries to come. A large part of that inertial effect comes from storage of heat and gases in the oceans, something that climate scientists know well and are still working to get the models to represent as accurately as possible. Another large part comes from feedback effects in the Earth–atmosphere system that many scientists (myself included), far more than just those who focus specifically on climate dynamics, are still working to understand and quantify for inclusion in those models. Some feedback effects have been under study for some time, such as aerosols in the atmosphere and the thawing of permafrost in high latitudes, but there are many others on which we are just beginning to gather data and make hypotheses for testing and modeling.
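
As a toy illustration of that thermal inertia, here is a zero-dimensional energy-balance model in Python; this is a minimal sketch with assumed parameter values, not any published climate model, but it shows warming continuing for decades after the forcing stops growing:

    # Toy zero-dimensional energy-balance model: dT/dt = (F - lam*T) / C.
    # All parameter values below are assumed, for illustration only.
    C = 30.0    # effective ocean heat capacity, W yr m^-2 K^-1 (assumed)
    lam = 1.2   # climate feedback parameter, W m^-2 K^-1 (assumed)
    F = 3.7     # radiative forcing held constant from year 0, W m^-2

    dt = 0.1    # time step, years
    T = 0.0     # temperature anomaly, K
    for step in range(3001):    # integrate 300 years
        if step % 500 == 0:     # report every 50 years
            print(f"year {step * dt:3.0f}: warming {T:.2f} K "
                  f"(equilibrium {F / lam:.2f} K)")
        T += dt * (F - lam * T) / C   # warming continues toward F/lam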

29 November 2012

Very Cool Great Lakes Surface Currents Visualization

My colleague and internet friend Dave Gunderson posted a link to this story on some pretty sweet new visualizations of Great Lakes surface currents by a group of artists and technologists who had previously done the same with wind observations across the US. This is just a snapshot; their lab hosts the dynamic visualization, which you can also zoom for better detail. The artists, Fernanda Viégas and Martin Wattenberg, worked with the NOAA Great Lakes Environmental Research Laboratory to build these new visualizations using their existing wind-oriented code.

Great Lakes surface currents around the afternoon of 3 October 2012, from Fernanda Viégas and Martin Wattenberg via OurAmazingPlanet.

The earlier version of that visualization has been very cool to look at during major weather events, such as landfalling hurricanes. They've stored dynamic snapshots of some of those events in a gallery for visitors.

Visualized wind currents over the continental US on the evening of 29 October 2012, around the time Hurricane Sandy made landfall,
from Fernanda Viégas and Martin Wattenberg.

Check these out! I think this would be awesome for small-scale visualization of currents in a river, or using a DEM to calculate the flow of water from rainfall to runoff to streams through a watershed. I could sit and stare at these for hours...
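
On the DEM idea: the usual first step from elevation data toward routing rainfall and runoff is a D8 flow-direction pass. Here is a minimal sketch on a made-up toy grid (this is not the artists' visualization code, just an illustration of the technique):

    # Minimal D8 flow-direction sketch on a made-up toy DEM: each cell
    # drains toward its steepest-downhill neighbor, the usual first step
    # from elevation data toward runoff and stream routing.
    import numpy as np

    dem = np.array([[9.0, 8.0, 7.0],
                    [8.0, 5.0, 4.0],
                    [7.0, 4.0, 1.0]])   # toy elevations

    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]

    def d8_direction(dem, r, c):
        """Return the (dr, dc) offset of steepest descent, or None for a pit."""
        best, max_slope = None, 0.0
        for dr, dc in offsets:
            rr, cc = r + dr, c + dc
            if 0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]:
                dist = (dr * dr + dc * dc) ** 0.5   # diagonals are farther
                slope = (dem[r, c] - dem[rr, cc]) / dist
                if slope > max_slope:
                    best, max_slope = (dr, dc), slope
        return best

    for r in range(dem.shape[0]):
        for c in range(dem.shape[1]):
            print((r, c), "->", d8_direction(dem, r, c))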

27 November 2012

The Costs of Natural Disasters

I've posted previously on the contributions of FEMA to natural disaster response in the US. In general, there are four recognized stages of emergency management that also translate well to an understanding of disaster-related spending:
  1. Preparedness, involving immediate planning for anticipated disasters because of something in the forecast or because it's a slow-building event. Communities may order evacuations, residents may be asked to stock up on food and seek shelter, and supplies and personnel may be positioned to help in the later response process. 
  2. Response, with attention to the affected population's immediate and short-term needs such as food, water, shelter, and transportation. This is usually a FEMA activity, bringing the most obvious and direct influence of federal preparation and infusion of federal funding.
  3. Recovery, the long-term process of rebuilding to the pre-disaster levels of economic activity. This is usually a post-FEMA activity, when communities and residents have had the chance to evaluate their losses and make insurance claims, business owners have the chance to re-open their doors, and opportunistic decisions are made to move or close or even expand enterprises.
  4. Mitigation, the effort to restore protective measures to pre-disaster levels, and possibly to upgrade that protection in advance of the next event, with a goal of reducing or preventing damage in future events, sometimes under the term "disaster risk reduction." This may actually be the most complex component of disaster-related spending, taking great effort to keep decision-makers' attention on such priorities at times outside of the disaster itself.
The cycle comes back around to #1 when a disaster seems imminent. Some accounts of the cycle like to start with #4, as if people are willing to work on long-term mitigation of disasters that they've never seen or experienced before. However, that's not what seems to happen in reality, based on what I've seen, read, and experienced in my own lifetime. People who move to a new city or home seem to need first-hand experience of at least one disaster of moderate proportions before they get educated on the dangers of their new location and make an effort to plan for the future. I don't have any research to back this up, but I would suggest that people who survive one moderate disaster are more likely to survive subsequent, even larger events because of what they learned from that initial experience. Of course, living in a community that is tightly-knit overall, and well-coordinated when the time for disaster preparedness comes around, will certainly help lessen the impact of that first disaster experience.

Different types of disasters have characteristics that provide communities the opportunity to seek different levels of protection. For example, climatic records tell us the largest storms that affect a particular area, so there is guidance for engineers to develop structural protection measures such as levees and seawalls. Climatic records also provide a history of dry seasons and drought, informing planners and farmers of the need to prepare water supplies for those periods. Historical records of the residents in a location can help communities recognize the likely impact of a disaster event, and that information can guide their preparation and their anticipation of the response needs. It would be great if every community had the cash to prepare for and protect against every type of disaster, but that doesn't happen, so there is a consistent need for response and recovery spending after disaster events. Hopefully, there is also a chance for learning and re-evaluation of priorities, but that rarely happens in the response phase itself, so we can only hope that the memory of a disaster lingers just long enough to teach its lessons. Unfortunately, that memory usually comes at the cost of lives.
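
That design guidance usually boils down to return-period arithmetic. Here is a minimal sketch (assuming independent years, the standard textbook simplification) of the chance of facing at least one "T-year" event over a planning horizon:

    # Chance of at least one T-year event during an n-year horizon,
    # assuming independent years (the standard textbook simplification).
    def exceedance_risk(T, n):
        return 1.0 - (1.0 - 1.0 / T) ** n

    for T, n in [(100, 30), (100, 100), (500, 50)]:
        print(f"{T}-year event over {n} years: {exceedance_risk(T, n):.0%} risk")

A "100-year" flood, for example, has about a 26% chance of occurring at least once during a 30-year mortgage, which is why the return-period label alone can mislead.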

Some events cannot be forecast yet: earthquakes strike suddenly, but there are some advance signals that scientists are studying constantly, and communities like those in Japan and California have worked with engineers to develop "quake-proof" building codes and practices. That level of preparation, based on experience and experiment, is aimed at minimizing the loss of life in future events and reducing the cost of response and recovery after the event has passed. These are the types of events, with little or no warning, that take the largest toll in terms of human lives. In general, the level of warning translates directly to the number of lives spared. Tornado warnings with just a few minutes' notice in many parts of the US, through National Weather Service broadcasts and local alarms, have saved countless lives over the past several decades.

But many disaster events can be forecast to some degree. Storms that produce those tornadoes, among other wind- and flood-related impacts, can be tracked on weather radar and forecast with some skill. Hurricanes move onshore after some time over the ocean where they are watched and analyzed from numerous perspectives. We know that certain communities will be most affected, and the weather forecasts help us determine which communities are under threat and then which will likely need the most attention afterward. Presidential emergency declarations prior to or during a storm event, and then disaster declarations after the event, help prioritize the immediate preparations and then post-event response and recovery efforts, and channel funds in the directions of greatest apparent need. Even with the best available science and engineering, we humans continue to build and live in vulnerable areas, so disasters still exact huge costs in the affected communities.

Several sources have attempted to demonstrate and draw attention to the rising costs of natural disasters, both in the US and around the world, and especially in recent decades as reporting of such impacts has improved over time. The ultimate cost, in terms of human lives, was tallied by the US National Weather Service as shown by the Washington Post Wonkblog:

From the Washington Post Wonkblog, posted 30 October 2012 at this location.

Using a report from Munich Re (a global reinsurance company), the journal Nature counted the overall number of disasters reported around the world over the past three decades:

From Nature, posted 10 January 2012 at this location.

The Economist has also attempted to tackle the story of rising disaster costs on at least two occasions recently, both relying on Munich Re reports:

From The Economist, posted 21 March 2011 at this location.
From The Economist, posted 14 January 2012 at this location.

That latter Economist article is an excellent review on the macro-economics of natural disasters, and it was quite satisfying to see that level of discussion on these issues in such a highly-regarded international news magazine. It should be noted, however, that these numbers address insured losses, and that additional uninsured losses are some unknown amount greater in almost all cases.

Back to the US and even more up-to-date, Bloomberg Businessweek recently produced this interesting graphic showing the comparative costs of disasters in the US over the past two decades, including an early estimate of response and recovery costs following Hurricane Sandy:

From Bloomberg Businessweek, posted 1 November 2012 at this location.

In a number of sources, it is clear that some researchers and reporters are attempting to make apparent any potential influence of climate change on the rising numbers and costs of natural disasters. However, as linked in the Nature article that I referenced above, the careful science of such attribution is still in the early stages of development. It is already easy and plain to say that some of these events would not have been disasters if not for humans pressing on their natural environments, building levees and towns where the river wants to run, or cities in coastal flood zones with inadequate seawall protection, but that doesn't mean that human activity affected the power of the storm itself. I do believe that someday we will be able to say with full certainty "yes, anthropogenic climate change made this storm/drought/flood worse, and it would not have been so costly if we had been more responsible with our greenhouse gas emissions." However, the method of proving that is very strict, and it takes time to compile the evidence.

I often think that neither scientists nor journalists nor the public really understand that the language scientists use to convey uncertainty, words like "possibly" and "likely," means different things to different people. The IPCC has its definitions spelled out clearly in various reports and a Glossary (pdf) (see the definitions for "confidence," "likelihood," and "uncertainty"), and that's the usage I prefer. Until everyone using and reading those terms comes to a common understanding of their definitions, however, they need to be defined unambiguously wherever they are used. Yes, the trends of climate change and natural disasters are tracking upward together, suggesting a correlation that calls for further investigation, but the day that we get to "very high confidence" or "very likely" (both meaning >90% probability) is still some time away. The basic uncertainty in making such a claim has many components still to be resolved, and there's still that 10% of leeway in the probability that gives some people justification for their continuing doubt.
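
For quick reference, here is that likelihood scale as a simple lookup; this is my own summary of the AR4 uncertainty guidance, so check the linked Glossary for the authoritative wording:

    # IPCC AR4 likelihood terms and their probability ranges, as a lookup.
    # My own summary; double-check against the linked IPCC Glossary.
    IPCC_LIKELIHOOD = {
        "virtually certain":      ">99%",
        "extremely likely":       ">95%",
        "very likely":            ">90%",
        "likely":                 ">66%",
        "more likely than not":   ">50%",
        "about as likely as not": "33-66%",
        "unlikely":               "<33%",
        "very unlikely":          "<10%",
        "extremely unlikely":     "<5%",
        "exceptionally unlikely": "<1%",
    }
    print(IPCC_LIKELIHOOD["very likely"])   # -> '>90%'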

Climate scientist James Lawrence Powell just posted a meta-analysis with graphics showing that, in the period 1991-2012, almost 14,000 professional, peer-reviewed journal articles on climate, with nearly 34,000 individual authors, were published supporting a scientific consensus on anthropogenic climate change. He could find only 24 peer-reviewed, published articles that dissented, and those were poorly-cited articles at that. Yet we know from media reports and journalistic language ("Scientists say ...") that there are still large parts of the US and global populations that deny climate change even exists, let alone that human activity might be causing at least some of that change! The process of science in consensus building is amazing, but it takes time and a willingness to test and re-test hypotheses and to accept observations as evidence.

The IPCC 5th Assessment Report (AR5), due for release in 2013, will likely set a new standard regarding the actual certainty of anthropogenic climate change, and every scientific study published thereafter will raise or (unlikely) lower that certainty by fractions of a percentage point. My own sense, from the literature and evidence, is that human activity is "very likely" (>90% probability) causing observed climate change, and that reaching the 2°C mark before the end of the current century, or likely even earlier, is "virtually certain" (>99% probability). However, I'm not specifically trained as a climate scientist, so I'll leave those determinations to the experts. I do attempt to specialize in the water-related impacts of such changes that show up in various parts of an exceedingly complex system, both now and in the future, and the fundamental science of climate change suggests that we have already done (and continue to do) dangerous and degrading things to the planet.

Society doesn't need to wait on scientific journal articles and subsequent news stories to plan its response to events that are already occurring, to become better prepared for similar or bigger events in the future, and to reform the ways that communities consistently build and re-build in harm's way. The goal of community planning, and especially the mitigation phase of the disaster cycle, should be resilience in the event of disaster. That is a term and idea on which I have wanted to write for some time now. It is part of the discussion now, near the end of the response phase and at the beginning of the recovery/rebuilding phase in the northeastern US following Hurricane Sandy, so this is as good a time as any to talk about it here.

Any search on Google Images for "natural disaster infographic" (and variants on those terms) produces hundreds of results, so these are just some examples of the wealth of information out there. However, as with any search for infographics based on real data sources, it's important to check for references. The best infographics provide, right on the image, their references (and sometimes web links) for the data used in their graphical design. I have attempted to use only infographics that provide such information, or at least link to their sources from a web page that is obviously connected to the infographic. Where I have used figures from other blogs and media and journal articles, I have attempted to be sure to include all reference and link information. As always, if you find that a source is missing or a link from this blog is broken, let me know so that I can resolve the issue.

26 November 2012

Monday Infographics: COP-18 and the Kyoto Protocol

The 18th Conference of the Parties (COP-18) to the UN Framework Convention on Climate Change (UNFCCC) started today in Doha, Qatar. There is much to hope for, but expectations range widely regarding the actual outcomes from these next two weeks of politics-heavy (and, alas, science-light) negotiations. I've been critical of the UNFCCC process and its results, and I've also offered suggestions (part 1; part 2) on topics that deserve coverage at these meetings. While the politics remain wrapped up in carbon accounting and stuck on the idea of mitigation, the science has shown that some warming is already "locked in" and will happen no matter how quickly those elusive emissions targets are met. The rest of the world, working away from the conference table, has already moved on to efforts at adaptation. These next couple of weeks may well show the utility of the annual COP negotiations.

About a week ago, the global news network Al Jazeera published an infographic on the present status of a number of issues to be discussed in Doha, foremost of which for many government negotiators is the Kyoto Protocol that may expire at the end of this year:

The original image for this infographic was posted by Al Jazeera on 18 November 2012 at this location.

19 November 2012

Monday Infographics: Xylem's "Value of Water" Report

This infographic pretty much speaks for itself. It accompanies a new, interactive "Value of Water" report by Xylem, a US-based global water infrastructure and technology provider, that was based on a telephone poll of more than 1,000 Americans of voting age. This is a nice complement to the anticipated 2013 ASCE Report Card for America's Infrastructure, for which the previous edition was issued in 2009.

From media materials accompanying the Xylem "Value of Water" report.

16 November 2012

Nov 2012 List of Reports on Water and Related Issues

Since it has been about a month since my last post of reports on water and related issues, it's about time for another. Either the stream of new reports has slowed a bit, or I've caught up on the mass of reports that I had bookmarked and collected previously for eventual posting here. Either way, I think I'll make this a monthly thing, so check in here again around mid-December for the next list.

As always, if you catch a report that I have not listed, please send me a link!

From various sources:
From the US Government Accountability Office:
From the US Congressional Research Service, via the Federation of American Scientists:
From the US National Research Council:
From the UK Department for International Development:

15 November 2012

Forecasting Hurricane Sandy

Right off the bat, I want to say that I'm biased here. I've been to the National Hurricane Center, have met some of the forecasters there, and even knew some of them from graduate school at Colorado State University. These are some of the best forecasters in the world, and it only adds to their credibility that they do their jobs under the intense pressure of administrative budgets and the knowledge that their warnings and predictions save lives. After watching a hybrid storm like Hurricane ("Superstorm") Sandy make landfall in the most densely populated area of North America, blaming the messenger is not really the best way to figure out what might have gone wrong. The NHC did their jobs, and did exceptionally well at that. The rest falls to the planning and preparations of politicians, emergency managers, and residents. No one is really to blame when people die in a natural disaster, least of all the victims, but people everywhere can often do just a little bit more to improve their own resilience to events like this that remain beyond their control. When there's a forecast good enough to tell people when, and in what manner, such a disaster may strike, isn't that a red-letter day for science rather than an opportunity to find minor weaknesses and place blame? Sure, the forecasts can get better, but only if the politicians recognize that better requires $$$.

This video from YouTube shows the progression of NHC forecasts and advisories over the entire lifespan of Hurricane Sandy. There are numerous elements here that can be pointed out, but the take-away message is this: "Damn! Those forecasts were good!"


This short YouTube video from NOAA shows a satellite view of Sandy along with the 5-day forecast track leading up to landfall, to emphasize just how useful and accurate that forecast was.


Weather Underground founder and chief scientist Dr. Jeff Masters blogged a couple of figures on the forecast track errors. To interpret these, one needs to recognize that errors in a forecast hurricane track have two components, because each forecast position also has a time attached to it. One way to get an error is if the line of the forecast track center is off by a number of miles left or right; that cross-track error is pretty easy to calculate. The other way is if the storm moves faster or slower than expected, pushing the actual storm position ahead of or behind the forecast position along the track. The location of anticipated landfall can be perfect, but not so helpful if the storm arrives 12 hours before it is expected; by the expected time, the storm can actually be dozens of miles inland, so that along-track error is easy to calculate too. At the 5-day forecast horizon, which is as far out as the NHC provides in public advisories, the forecasts looked like this:
From Dr. Jeff Masters' Weather Underground blog entry for 2 Nov 2012, Figure 3.
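
That two-part decomposition is easy to sketch in code. Below is a minimal flat-earth version with made-up positions (not actual Sandy advisories); operational verification uses great-circle geometry, but the idea is the same:

    # Split a track error into along-track and cross-track components,
    # using a flat-earth approximation good enough over a few hundred miles.
    # All positions below are hypothetical, not actual NHC advisories.
    import math

    def error_components(fcst_lat, fcst_lon, heading_deg, obs_lat, obs_lon):
        """Split total position error (n mi) into along- and cross-track parts.

        heading_deg is the forecast storm motion, degrees clockwise from north.
        """
        dy = (obs_lat - fcst_lat) * 60.0   # 1 deg latitude ~ 60 n mi
        dx = (obs_lon - fcst_lon) * 60.0 * math.cos(math.radians(fcst_lat))
        h = math.radians(heading_deg)
        ux, uy = math.sin(h), math.cos(h)   # unit vector along storm motion
        along = dx * ux + dy * uy           # + means storm is ahead of forecast
        cross = dx * uy - dy * ux           # + means storm is right of track
        return along, cross

    # Hypothetical example: forecast 38.0N 74.0W moving toward 315 degrees
    # (northwest), storm verified at 38.5N 74.5W.
    a, c = error_components(38.0, -74.0, 315.0, 38.5, -74.5)
    print(f"along-track {a:+.0f} n mi, cross-track {c:+.0f} n mi")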

Taking into account all of the forecasts that these NHC models made over Sandy's lifetime, the statistics shape up like this:
From Dr. Jeff Masters' Weather Underground blog entry for 2 Nov 2012, Figure 2.

One thing you'll note here is that several models are used in making a track forecast. The European Centre for Medium-Range Weather Forecasts (ECMWF) has been a global leader in weather and climate modeling for some time, and is generally considered the principal rival of the US National Centers for Environmental Prediction (NCEP). The other models shown are the Hurricane Weather Research and Forecasting (HWRF) model, the Geophysical Fluid Dynamics Laboratory (GFDL) model, and the Global Forecast System (GFS) that is used for much of the routine NCEP forecasting duties.

At a 3-day forecast horizon, we can see that the models were all pretty much equivalent in accuracy. The issue that has been raised in the media, especially by USA Today, was that the ECMWF forecast was so much more accurate than the other, largely US-based models at the 5-day horizon. Those who know how this all works are not worrying over whether the Europeans are way ahead of the US in forecasting, as USA Today suggested. This is not a competition, one country's scientists against another, as depicted there, but a collaboration. Is this a failing of American science? Not by a long shot! I'll tell you why.

First, the NHC uses an ensemble (collection) of models because no single model gets it right all the time, and each model has its own pedigree with different strategies and histories of development. Ensemble forecasting is a staple of numerical weather prediction. Tomorrow's forecasts for your own city's low and high temperatures, the probability and amount of precipitation, even the type of precipitation, all come from a collection of model results that are each just slightly different in their output. The key is an assessment of how different those results are: the more variability among the possible outcomes, the less confidence we can place in any single forecast. For the forecasts of Sandy's track, we can see that the models all had errors of around 60-100 nm at the 3-day horizon but then diverged significantly by the 5-day horizon. That demonstrates that our current predictive models are generally good to a certain point, but that the inherent chaos of the atmosphere takes over after that. That's a basic tenet of atmospheric modeling.
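
As a minimal sketch of how that spread can be measured, consider some made-up 5-day positions for the four models named above (all numbers hypothetical):

    # Ensemble mean and spread of forecast positions; the spread itself
    # is forecast information. All positions below are hypothetical.
    import statistics

    tracks_5day = {   # model -> (lat, lon) at day 5
        "ECMWF": (39.4, -74.4),
        "GFS":   (38.2, -71.0),
        "GFDL":  (39.0, -73.8),
        "HWRF":  (38.8, -73.2),
    }

    lats = [lat for lat, lon in tracks_5day.values()]
    lons = [lon for lat, lon in tracks_5day.values()]
    print(f"ensemble mean: {statistics.mean(lats):.1f}N "
          f"{abs(statistics.mean(lons)):.1f}W")
    print(f"spread (std dev): {statistics.stdev(lats):.2f} deg lat, "
          f"{statistics.stdev(lons):.2f} deg lon")
    # Large spread means low confidence in any single solution; small
    # spread means the models agree and the consensus deserves more weight.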

Second, the forecasters use the models as guidance, not as the final determination of the forecast to be issued. Experts, with their own knowledge of these models' strengths and weaknesses, made the determination that the two models that had Sandy on a collision course with New Jersey were more likely correct than the other two that had Sandy staying out over the Atlantic Ocean. A final forecast is only as good as the forecaster's judgment when it comes to trusting the model output, or applying some local (often tacit) knowledge that the model is not always capable of carrying. The model is just numbers; the forecaster's job is to know what all those numbers mean in local context.

Third, and most important, comes the adage better safe than sorry. If the forecasters had decided to give more weight to the models that kept Sandy's forecast out over the ocean, preparations and evacuations in the NJ/NY area might have been delayed for a couple of days. Those days would have put lives at risk in what would eventually become the disaster zone around Sandy's landfall. As it is, we still build on shifting coastal zones and in vulnerable floodplains, we still have great variability in building codes from place to place according to the expected hazards, and we still have human agency and choice in the decision to prepare. On top of all that variability, the hurricanes themselves bring a wide variety of threats to a landfall location. So many measurable things were forecast with high skill: track location, timing of landfall, intensity of the storm, height of storm surge, rainfall amounts, and the locations that were vulnerable. Local and federal emergencies were declared even before Sandy made landfall, assessment and recovery efforts started even before the storm was done, and disaster declarations were made almost immediately after the clouds cleared. There are also many unmeasurable variables that require attention from the media, especially public education and warning. The choice to issue warnings to the coastal regions in Sandy's projected path, and the choice of community leaders to advise residents on the local preparations and potential response needs, were all highly successful in this storm. Compare the cost of hurricane evacuation, quoted as recently as 2005 at US$1M per mile of coastline affected (and likely more now), with the value of a single human life. There's no contest: better safe than sorry. One can point at the forecasters and the community leaders, and the choices that they all made professionally in the effort to protect citizens and their property, but the ensuing disaster was not of their making.

So don't see it as Americans against Europeans in the race to better forecasts, with winners and losers. If anything, it's a friendly competition, with much trading of ideas and concepts and answers. I've had the pleasure of meeting forecasters and modelers in Europe too, and if there's one thing that scientists everywhere are good at, it's arguing over approaches and methods, and giving each other ideas on how to make things work better. The community literature is enough to convince anyone of that: multi-national, even global teams work on all of these models together, trying to make them all better at their representations of reality and predictions of the future. As to whether this storm was a "wake-up call" to better preparation, greater resilience, and the decisions that still need to be made on so many topics... that remains to be seen.

12 November 2012

Monday Infographics: Storm Politics

Some believe that Hurricane ("Superstorm") Sandy and the early recovery efforts may have influenced the US presidential election. After all, Sandy made landfall in New Jersey late in the day on 29 October 2012, affecting much of the mid-Atlantic and New England seaboard, and Election Day was just a week later on 6 November. Some have decried an apparent poor performance of forecasts and warnings from our own NOAA NWS National Hurricane Center. Infrastructure in the area around New York City will take months to recover from an unanticipated storm surge.

But that's all incorrect. It will take a while for people to recover, and there are still things to be done in an area that remains vulnerable, and that is the recovery process following a natural disaster. But in the meantime, I feel some responsibility to help put a few of these other ideas to rest.

Let's start with the election. The New York Times kept running tallies of results using some great interactive map features that are still available. One of those, which I had not seen in previous elections, shows the swing of a county's overall vote from one election to the next, using a vector to indicate the shift from Democrat to Republican or vice versa. This is a screenshot of the map; you can go to the New York Times site for the fully-interactive version.


If I am reading this correctly, the bulk of counties in the vicinity of Hurricane Sandy's landfall and in the disaster zone (much of New Jersey and parts of New York and Connecticut) actually shifted in the Republican direction. President Obama still won those states, but at the level of individual counties he did not come out ahead by as wide a margin as in the 2008 election. In fact, both Independent and Republican candidates gained ground, at the President's expense. I suppose it's possible that voters were holding the President accountable for a response and recovery effort that was just getting started, one that had already made FEMA assistance and recovery funds available in the affected areas, but that's only fair. In fact, FEMA recovery funding made available to affected individuals is just about to cross the $500 million mark.

I grew up in one of those affected NJ counties, went to college in another, traveled all over that region, and have seen first-hand what a hurricane can do to the New Jersey coast and the New York metro area. If the voters wanted even more from the President within just the first week after the storm, they would be hard-pressed to find any candidate more willing, eager, and proactive in disaster response than President Obama was this year. Given Mr. Romney's earlier statements on the utility of FEMA, I seriously doubt he could or would have done any better in this (or any other) disaster situation. According to him, disaster relief is "immoral," so that just might have left the Northeast on its own to work through this recovery period. Instead, New Jersey Governor Christie praised the President's efforts, and New York City Mayor Bloomberg endorsed President Obama just prior to the election with a statement that was largely focused on the need to address climate change, even though that subject was never broached in the three Presidential debates this fall. Nevertheless, the Northeast US moved largely toward the Republican side with their votes. To add irony to such an insult, the very FEMA director who was responsible for the poor response and recovery following Hurricane Katrina in 2005 actually had the stones to criticize President Obama's rapid response efforts.

So maybe it wasn't just the Presidential election, but other races were affected? No. In almost every other race for the US House of Representatives or the US Senate, the incumbent was re-elected throughout the northeastern states. The races for both houses went almost entirely as predicted well before Hurricane Sandy was a (potential) factor. Even that fundamentalist in Georgia whom Republican leadership assigned to the US House Science Committee got re-elected! And of all things, he ran unopposed! If we're going to raise the level of discourse in this country on the rational subjects necessary to our future, and bring the politicians back to task instead of rewarding oppositional strategies and consensus-killing tactics, we cannot let things like that keep happening. If Americans truly and honestly wanted to see some kind of change in the way the politicians are doing things in Washington DC, we sure didn't use our votes to show it.

As a result, the Democrats retained the majority in the Senate, and the Republicans retained the majority in the House. The legislative branch remains split just as before, the President has not changed, and so there's little left to guess about how much positive and progressive legislation we can expect from the next two years of Congress. We had a chance to make things better, to make better things, and to right some of the wrongs where the last Congress failed to act, but I just don't see that the American people took that opportunity to give the President or Congress a mandate on rational planning for the country's future.

This may seem like a rant on the outcome of the election, but let me assure you, I'm glad that it all turned out the way that it did. It's almost as if we didn't need an election day at all. Things in the White House and Congress were not very different on 7 November from where they were on 5 November. And all of that has convinced me of this: government is not going to get done what we need to see happening in this country to prepare for the future. I needed this kind of election, at this time in my own life, to help me recognize that the faith I placed in various groups, my faith in our elected representatives to address reality, my faith in government administrators to get it right, and my faith in the electorate to educate themselves and make reasoned judgments on Election Day, has been misplaced. It also helped me recognize that I've not relied enough on the power of my own voice. Now, I'm not (and have no desire to become) a politician, and it's not my place to tell Americans how to vote. I realize fully that my blog is not influential on that subject, and in this particular election it's too late for that anyway. But I can see and talk about what's wrong with the way things are now, and I can talk about the ideas that are already out there for fixing the problems, and I can make a better effort at educating the electorate so that better decisions are made.

Some have said, through the recovery effort, that Hurricane Sandy was a wake-up call for a number of things that have been neglected in our own country: better planning for disaster events, better protection of our infrastructure, greater resilience of our urban centers of commerce, and a national and participatory conversation on climate change. Perhaps. But even more specifically, a number of discussions and debates were mooted by these election results: Interior Department policies, EPA effectiveness, infrastructure renewal and protection, a national water policy, energy policy and subsidies, recognition of and action on climate change... To be sure, the need for reform in all of these areas is still there, and was not cut down by this Election Day's results. However, whatever discussion has taken place on these topics so far is mere prologue to the opportunity we now face, which is to get some real work done.

Given the way our country just voted to maintain the status quo, I don't expect the government to do all (or, really, any) of those things on behalf of American citizens. And this is not a case of simply waiting for the next election in 2014, for a shift in the balance of the US House, for the President's administration to wake up to some of the issues that have gone untreated. Those of us with a voice need to raise the level of discussion and debate on our own. We call those in Washington DC our leaders, but they're not. They're our representatives, and they're supposed to be taking their marching orders from us, not the other way around. I'm starting to think that maybe this is why I have a blog. Discussion is welcome.

05 November 2012

Monday Infographics: FEMA

The US Federal Emergency Management Agency (FEMA) constitutes about 1% of the budget of the US Department of Homeland Security (DHS), which is itself just a fraction of the overall US federal budget. Recent years have seen consistent attempts to reduce federal expenses, usually by enacting budget bills that come in just under the President's request levels. But in the case of emergency management, a certain amount of awareness and readiness is necessary, or the response to an event will require even greater resources. FEMA develops three overall budgets every year: Administration Request, Enacted Appropriation, and Emergency Supplemental. The actual "year-end" FEMA spending bears some relation to the enacted "year-beginning" budget, but the two are rarely the same.

Originally published September 2011 by Veronique de Rugy, Senior Research Fellow, Mercatus Center, George Mason University.

It is difficult enough to predict major weather and other events, such as earthquakes and forest fires. But those events are not disasters until placed in a human perspective, and so it is even more difficult to predict the eventual cost of a disaster. We cannot gauge, with accuracy, several aspects of a disaster:
  • Human and infrastructure preparedness and resilience: Is that power transformer going to fail? Are people in the vicinity ready for a power outage? Do people have enough water and food to last through the recovery? Do they have a roof any longer?
  • People's reactions: Did residents in designated zones prepare as suggested? Did residents evacuate when ordered?
  • Administrative and technical responses: Are power companies prepared to respond to multiple service interruptions? Are local and regional officials actively managing a coordinated response to the event?
These cannot be forecast, and so the overall cost of a disaster event cannot be budgeted beforehand. What does FEMA do for the country? That's an easy question to answer, especially right after there's a need for that response.

Originally published September 2011 by DigitalSurgeons.com.

29 October 2012

Monday Infographics: Hurricane Sandy

As I write this, Hurricane Sandy is a few hours away from landfall on the coast of New Jersey. My hopes are that residents along the Mid-Atlantic and New England seaboard have heeded warnings, prepared their homes and families, and evacuated from those areas where directed. For those in the path of this massive storm, the greatest danger is not the winds or rains or even the storm surge; it's poor preparation for those hazards, and any sense of hubris. If you need help, I don't know why you would be reading this; go to Google's Crisis Map or check out Ushahidi's great round-up of relevant maps and sites.

The Weather Underground web site, long a favorite for those of us who love diving into the data behind all the fancy graphics that The Weather Channel delivers so consistently, was recently bought by that very company. I readily admit that the merger of expertise between these two, WU in meteorology and TWC in presentation, has been fantastic. One significant positive was WU's selection in 2008 of a Google Maps interface, allowing the user's selection from a vast array of data sources. That orientation, with the expectation that users can get as little or as much information as they want or need, is a superb choice on the part of both TWC and WU. The retention of WU's blogs and experts keeps us weather-fascinated types geeking out over all the great stuff that is going on.

So, very quickly: this is a screen capture of WU's "WunderMap" from approximately 2 pm EDT, showing the coastal Atlantic region with the track of Hurricane Sandy and the precipitation radar overlaid on the satellite base map. There are several things to note here:
  1. Sandy's eye has just entered the offshore extent of coastal radar coverage. It's rare (to put it mildly) that a storm spans the full extent from Cape Hatteras to Cape Cod.
  2. The last advisory position for Sandy's center is just SE of the eye, and the next forecast position is just W of the eye, indicating a track forecast that is currently very accurate.
  3. I did not plot here the pressure or wind fields around Sandy, or the forecast track uncertainty, or the storm surge heights and tide forecasts, or the individual stream gauges with their observations and flood stage forecasts throughout the region... but those are all available in the data sources on the right side of the interface.
Screen capture around 2 pm EDT on 29 October 2012 from Weather Underground's WunderMap.

For the most up-to-the-moment version of this particular map and data combination, click here.

17 October 2012

Some More Reports on Water and Related Issues

Maybe I should collect all of these, from this and previous posts, into an attached page? Hmmm...
From the US Government Accountability Office
From the US Geological Survey
From the National Research Council

15 October 2012

Monday Infographics: The Clean Water Act at 40

For this week's infographic I was looking around for something great on the Clean Water Act, which was passed into law by the US Congress (over President Nixon's veto) on 18 October 1972. Alas, I was not able to find something clear, colorful, and informative about all that the Clean Water Act has done for our country and our environmental and personal health over these past four decades. I did, however, find a very insightful interactive graphic feature that was originally published in 2009 by the New York Times.

I mentioned previously on this blog one of the reporters who was responsible for this interactive feature, Charles Duhigg, who was honored with a Science Journalism Award from the Kavli Foundation and the American Association for the Advancement of Science (AAAS) in 2009 for his work on the New York Times feature series "Toxic Waters." Credit goes as well to Mr. Duhigg's research and reporting partner Karl Russell at the New York Times. The Flash-animated interactive feature is shown here merely as a pointer to their excellent and detailed work drawing all of the data together and publishing it online. I have not copied their data or work, or the code of the interactive feature itself, merely a couple of image captures and a link to the original feature from this space. For more information on all of the stories and features related to Mr. Duhigg's (and everyone else's) great work on the "Toxic Waters" series, check out their New York Times pages.

There is certainly some commentary that could accompany this, related particularly to the data embedded in the feature, but I'll leave that for another post. Perhaps on Thursday, the 40th birthday of the CWA...


from the New York Times

Clean Water Act Violations: The Enforcement Record

Interactive graphic feature originally published on 13 September 2009 in the online edition of the New York Times by Karl Russell and Charles Duhigg.
Circles representing the number of facilities in each state that are regulated under the Clean Water Act. For examples to provide scale, Wisconsin had 653 registered facilities, and California had 2,161 registered facilities.

Circles representing the number of regulated facilities in each state (outer gray) that were found to have violated provisions of the Clean Water Act (inner, darker gray). For examples to provide scale, Wisconsin had 313 facilities (~39% of the total registered in that state) that violated the CWA in the 2004-2007 period, and California had 597 facilities (~27% of the total registered in that state) in violation in that period.

Circles representing the number of violating facilities in each state (outer gray) that were prosecuted under the Clean Water Act (inner, darkest gray). For examples to provide scale, Wisconsin saw 83 enforcement actions against violating facilities (~33% of the total violating facilities in that state) under the CWA in the 2004-2007 period, and California saw 142 enforcement actions (~25% of the total violating facilities in that state) under the CWA in that period.

The following original caption accompanied the interactive feature:
Figures were compiled by asking states to verify data initially provided by the federal Environmental Protection Agency. Any time officials disputed the data, they were asked to provide alternative figures, which were substituted. New Mexico, New Hampshire, Massachusetts, Idaho and the District of Columbia were not delegated enforcement of the Clean Water Act. Figures for those states are from the E.P.A. Georgia, Kentucky, Pennsylvania, Tennessee and Mississippi disputed the E.P.A. figures but did not provide alternative information. States' responses are available here.