Chapter 4 of this book was pretty interesting, as it covered weather predictions from various sources. It presented data on how accurate each source's forecasts were: essentially, the graphs plotted the prediction (i.e. “20% chance of rain”) against the frequency with which rain actually occurred after that prediction. They found that the National Weather Service is the most accurate, then the Weather Channel, then local TV stations.
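That kind of chart is a calibration plot, and the underlying check is simple enough to sketch in a few lines of Python. This is a hypothetical example with synthetic data, not the book's numbers: the simulated forecaster is deliberately a bit overconfident (actual rain chance is 90% of the stated probability), roughly in the spirit of the local TV stations discussed below.

```python
import random

random.seed(0)

# Synthetic forecast/outcome pairs -- NOT the book's data. A well-calibrated
# forecaster's "20% chance of rain" should be followed by rain about 20% of
# the time; we check that by bucketing forecasts by stated probability and
# comparing each bucket's forecast to the observed rain frequency.
forecasts = [random.choice([0.1, 0.2, 0.5, 0.8, 1.0]) for _ in range(10_000)]

# Simulate a slightly overconfident forecaster: rain actually occurs with
# probability 0.9 * (stated probability).
outcomes = [random.random() < 0.9 * p for p in forecasts]

# Group outcomes by the stated forecast probability.
buckets = {}
for p, rained in zip(forecasts, outcomes):
    buckets.setdefault(p, []).append(rained)

# Print stated probability vs. observed frequency for each bucket.
for p in sorted(buckets):
    observed = sum(buckets[p]) / len(buckets[p])
    print(f"forecast {p:.0%} -> observed {observed:.1%}")
```

Run against a real forecast archive instead of the simulation, each printed line would be one point on the book's graphs: a perfectly calibrated source lands on the diagonal, while an exaggerating one prints observed frequencies below the stated ones.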

While that was interesting in and of itself, what really intrigued me was the discussion of whether an accurate forecast was actually a good forecast. People watching the local news for their weather are almost invariably going to make decisions based on that forecast, so meteorologists actually have a lot of incentive to exaggerate bad weather a bit. After all, people are much less annoyed when they bring an umbrella and don’t need it than when they get soaked by a storm they didn’t expect. The National Weather Service, on the other hand, is taxpayer funded to be as accurate as possible, and may end up seeing its track record put in front of Congress at some point. Different incentives mean different choices.

[Chart: forecast accuracy comparison from The Signal and the Noise, Chapter 4]

To give you an idea of the comparison: when the National Weather Service says the chance of rain is 100%, it actually rains about 98% of the time. When the Weather Channel says it, it’s about 92%. When a local station says it, it’s about 68%. When Aaron Justus says it… well, this happens:
