December 7

The Signal and the Noise

Nate Silver of political forecasting blog 538 fame has a new book out – The Signal and the Noise:  Why So Many Predictions Fail – but Some Don't.  I started reading it on Monday and am halfway through it.  I'm reading slowly because my traffic engineering brain keeps thinking of how this applies to our industry.  It is a must read for anyone who prepares or uses traffic forecasts.

The reviewing traffic engineer on a hospital campus master plan study we recently submitted asked "do your recommendations change if 65% of the site traffic goes down to Co Rd 42 instead of just the 40% you assumed in your trip distribution?"  We homed in on the critical design period and the intersections that would be affected.  Then we ran sensitivity analyses with different distribution patterns and a range of trip generation assumptions.  This additional analysis confirmed our initial recommendations, but the city is now more comfortable with them.
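Here's roughly what that kind of sensitivity check boils down to, as a quick sketch.  All of the trip, volume, and capacity numbers below are hypothetical placeholders, not the study's actual figures:

```python
# Sensitivity sweep: vary the share of site traffic headed to Co Rd 42
# and check whether the critical approach stays within capacity.
SITE_TRIPS = 450    # hypothetical peak-hour site trips
BACKGROUND = 600    # hypothetical background volume on the approach (veh/hr)
CAPACITY = 1200     # hypothetical approach capacity (veh/hr)

for share in (0.40, 0.50, 0.65):
    approach_volume = BACKGROUND + SITE_TRIPS * share
    vc_ratio = approach_volume / CAPACITY
    flag = "OK" if vc_ratio < 0.90 else "REVIEW"
    print(f"share={share:.0%}  volume={approach_volume:.0f}  v/c={vc_ratio:.2f}  {flag}")
```

If the recommendations hold across the whole sweep, as they did for us, the single-number assumption wasn't hiding anything.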

This recent experience, along with Silver's book, is making me question the traffic forecasts the industry prepares in our impact studies.  We typically use a single background growth rate, a single trip distribution pattern, and a single trip generation calculation.  Silver's belief is that we shouldn't predict single numbers; instead, we should predict a range of numbers and define the probability of landing within that range.
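As a thought experiment, here's a minimal sketch of what that could look like: Monte Carlo sampling over the three inputs we usually fix as single numbers, producing a forecast range with probabilities attached.  The distributions and volumes are hypothetical placeholders:

```python
import random

random.seed(42)  # reproducible draws
N = 10_000
volumes = []
for _ in range(N):
    growth = random.uniform(0.005, 0.02)         # annual background growth rate
    trips = random.triangular(350, 550, 450)     # peak-hour site trip generation
    share = random.triangular(0.40, 0.65, 0.50)  # share distributed to this approach
    background = 600 * (1 + growth) ** 10        # hypothetical 10-year horizon
    volumes.append(background + trips * share)

volumes.sort()
lo, mid, hi = (volumes[int(N * p)] for p in (0.05, 0.50, 0.95))
print(f"90% forecast range: {lo:.0f} to {hi:.0f} veh/hr (median {mid:.0f})")
```

Instead of one design volume, the reviewing agency would see a band and the probability of landing inside it.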

With new programs such as Vistro and increased computing power, I think we're on the verge of being able to prepare this type of analysis.  But there will always be the need for human refinement and interpretation of the computer results.  

Silver agrees with that.  Writing about weather forecasting at the National Weather Service, which keeps track of both the computer forecasts and the tweaks its meteorologists make based on the data combined with their own interpretation, he reports: "humans improve the accuracy of precipitation forecasts by about 25 percent over the computer guidance alone, and temperature forecasts by about 10 percent.  Moreover, according to Hoke, these ratios have been relatively constant over time:  as much progress as the computers have made, his forecasters continue to add value on top of it."

The feedback loop is a huge missing piece, though.  Weather forecasters benchmark their forecasts against the actual weather.  How many times do we check the accuracy of our forecasts by collecting traffic counts after the development is built and then making the comparison?
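Closing that loop doesn't have to be elaborate.  Here's a minimal sketch, with made-up numbers, of logging forecasts and scoring them once the post-development counts come in:

```python
# Benchmark past forecasts against post-development traffic counts.
# All entries below are hypothetical placeholders.
studies = [
    {"site": "Hospital campus", "forecast": 1050, "actual": 980},
    {"site": "Retail center",   "forecast": 720,  "actual": 845},
]

for s in studies:
    error = (s["forecast"] - s["actual"]) / s["actual"]
    print(f"{s['site']}: forecast {s['forecast']}, actual {s['actual']}, error {error:+.1%}")
```

Even a simple running log like this would tell us, over a few years, whether our single-number methods are systematically high or low.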

 

  • I guess I’ve assumed traffic projections are based on *some* kind of real world experience. Are they built on models created in the 1970s or something and never updated since then?

  • Traffic forecasts are regularly updated – that’s not the concern. Forecasting traffic is like forecasting the economy, though: there are a lot of input variables that can be affected by shifting market demand and government policies. Weather forecasting, by contrast, has a better feedback loop, because forecasts can be benchmarked against the actual daily weather. And weather follows laws of nature, whereas cars are driven by humans who can exercise free will.

  • {"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
    Mike Spack

    My mission is to help traffic engineers, transportation planners, and other transportation professionals improve our world.

    Get these blog posts sent to your email! Sign up below.  

    >