Guest Post by Bryant Ficek, PE, PTOE, Vice-President at Spack Consulting
Earlier this year, I detailed how our standard process for a Traffic Impact Study relies on several assumptions at best, or outright guesses at worst. This post continues that discussion. The original post, “Top 6 Ways to Pick Apart a Traffic Study,” is available here. More posts on this subject are planned.
Originally, I had intended to move right down the list of six items discussed in that first post, from traffic counts through background growth. I have ideas on how we as an industry could improve how we handle each of those items in a study. Granted, there may be better approaches for some of them, which is why I am opening up the debate and hoping for feedback from colleagues. But before I finished a write-up on the next assumption, we had a detailed discussion about the end result of these potential new processes – would multiple scenarios, and thus multiple results, actually help us?
Before we try to answer that, let me provide a little background. I like to think of our current traffic impact study as a linear process. We start with counts, move on to existing analysis, then forecasting and future analysis, and finish by determining mitigation and recommendations. Graphically, I would say our process looks like this:
We start at one end and move through each of the points until we reach a conclusion. Each point represents a single input or output. Sometimes we discuss the points before the project begins; sometimes the discussion happens during or after, and we make adjustments. But in all cases, we end up with one data point for each subsection.
With the new process we are implementing, detailed in Part 1 here, we are expanding that initial point in hopes of getting better base data. We obtain counts for two days, apply a seasonal factor, and end up with an adjusted count as the starting point. Using the previous graphic, the adjustment makes our process look like this:
That’s a step in the right direction, but in my opinion there is still room for improvement. So what if we expanded more of those points? Our analysis software has improved enough that we can quickly test multiple scenarios, particularly when changing only one input at a time. So imagine if our study process instead considered:
- two background growth rates, reflecting high-growth and low-growth conditions
- three trip generation results, reflecting different levels of new traffic based on varying trip generation rates, different pass-by/multi-use/internal capture rates, and higher or lower levels of transit/bicycle/pedestrian traffic
- two trip distribution patterns – one based on actual counts and one adjusted to reflect a more balanced distribution
That scenario would produce 12 data points of future analysis (2 × 3 × 2) and look something like this:
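The combinatorial explosion above is easy to sketch in code. This is a minimal illustration of the scenario matrix, assuming the inputs vary independently; the option labels are hypothetical placeholders, not values from an actual study.

```python
from itertools import product

# Illustrative option labels for each expanded input -- placeholders,
# not values from the article's study.
background_growths = ["low growth", "high growth"]
trip_generations = ["lower trips", "expected trips", "higher trips"]
distributions = ["count-based", "balanced"]

# Every combination of the three inputs becomes one future-analysis run.
scenarios = list(product(background_growths, trip_generations, distributions))

for i, (growth, trips, dist) in enumerate(scenarios, start=1):
    print(f"Scenario {i}: {growth} / {trips} / {dist}")

print(len(scenarios))  # 2 x 3 x 2 = 12
```

Each tuple in `scenarios` would feed one pass through the analysis software, which is why changing one input at a time keeps the extra effort manageable.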
Don’t get lost just yet in the debate about how many points to analyze or how to determine what each point should be. Instead, focus on the end product and the original question – does having multiple results help us? Or, to put it another way, how do we go from 12 future analyses to one recommendation?
To further illustrate the example, I evaluated one intersection assuming a standard office development. Two background growth rates were determined, along with three sets of trip generation and two trip distribution patterns. The unsignalized intersection was thus analyzed 12 times with varying levels of future traffic. The graph below shows the future analysis results in terms of overall average intersection delay (and Level of Service), with the existing results for comparison. The future result that our standard process would have produced is also highlighted.
As shown, the standard process would have resulted in Level of Service (LOS) D, and our subsequent recommendation would likely not have included mitigation. However, the other results show five more at LOS D, one at the LOS D/E border, four at LOS E, and one edging close to the LOS E/F border. If this were your study, how would you interpret this data? And what would you base your recommendation on?
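For readers less familiar with the delay-to-LOS conversion behind that graph, here is a small sketch using the Highway Capacity Manual control-delay thresholds for unsignalized intersections (LOS A at 10 seconds or less, up through LOS F above 50 seconds). The sample delay values are hypothetical, not the study's actual results.

```python
def los_unsignalized(delay_s: float) -> str:
    """Map average control delay (seconds per vehicle) to Level of Service,
    using the HCM thresholds for unsignalized intersections."""
    thresholds = [(10, "A"), (15, "B"), (25, "C"), (35, "D"), (50, "E")]
    for upper_limit, los in thresholds:
        if delay_s <= upper_limit:
            return los
    return "F"  # anything over 50 s/veh

# Hypothetical delays from two of the twelve scenarios:
print(los_unsignalized(32.0))  # prints D
print(los_unsignalized(42.0))  # prints E
```

A scenario near 35 seconds of delay sits right at the D/E border, which is why small input changes can flip the letter grade even when the underlying delay shifts only slightly.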
I think a reasonable conclusion would be to state that the intersection is sufficient, but that a mitigation plan should be developed to cover 10 of the 12 scenarios – roughly the 85th percentile. A slightly more conservative conclusion would be to build mitigation covering eight of the 12 scenarios and plan for the worst case. But that’s the engineer taking a measured look at the data.
Once it leaves our hands and is reviewed by others, the graph is likely to turn into something of a Rorschach Test. A developer is likely to look at the data and conclude nothing is needed since half the results reveal no difficulties. A government official could look at the data and conclude that the worst case needs to be built to protect against any adverse impacts. Opinions on each extreme will be easy to find and likely complicate the already often tense discussions of what should be built as the result of a development.
That leaves us with the conclusion that more analysis, and more data points, are a good thing from an engineering standpoint. They can provide more confidence in our opinions and ultimate conclusions. But once the study leaves our hands, it is not likely to be as well received and may make the ultimate solution or compromise more difficult to reach. In that regard, is it worth even beginning down this path and, to bring it back full circle, does having multiple results really help us? I welcome any of your thoughts on the matter.
Did you miss the other installments of the Traffic Impact Study Improvements series? Here are the links to the other articles:
“Theoretically” it is very sound to consider more information and more correct information. However, of course the issue of cost arises. Although each incremental analysis may not cost that much, in the world where I practice, the norm is to do the least amount of analysis possible and keep the cost as low as possible. I am sure I have lost more than one project due to perceived scope-of-work and cost issues. So I think you have to balance that against what the agency requires as well. Almost all developers I have encountered want to keep the cost of the study as low as possible. However, there can be cases where additional study and review rule out the need for unnecessary improvements and eventually save the developer money. So it is a juggling act at times as well. On bigger projects you just need to phase them and adjust as you go forward, if you can get that sort of agreement with the client. And sometimes you just have to walk away, unfortunately.
I agree with Roger: not only does the developer want to keep the cost of the traffic engineering down, the developer also wants to keep the cost of off-site improvements down. The additional analysis seems only to show a condition that would require additional mitigation and off-site construction costs, and that condition may or may not materialize. In my opinion, having practiced traffic engineering for 20 years with probably thousands of Traffic Impact Studies (TIS) as a basis for experience, caution should be used in making infrastructure improvements based on the findings of a TIS, since the traffic we estimated may never materialize, especially in highly congested urbanized areas. Significant infrastructure improvements should come from analyses performed during PD&E studies using travel demand modeling and 20-year horizons, not from a TIS and its short-term project implementation.
Mike, your points are well taken; but most projects, including medium to mega projects, are linear in nature yet have several elements handled by different experts. Projects that need simple and clear analysis, such as a TIS, should be done the same way, though they could be presented better, with readable figures, tables, and conclusions. As you well know, given the number of default values we have to use in transportation-related software, many errors or assumptions are already incorporated. Is there a solution? Maybe it depends on the person in charge of the study presenting the real picture rather than using elements that benefit their report and client. Many developers have avoided doing traffic studies by building under the radar, then kept building after each phase, loading the nearby roads. These are my thoughts.
I would assume a rough “percentage chance” could be assigned to each option, which could then be multiplied out so that each result has a percentage chance of occurring. The results that are unlikely to occur can then be discarded – or not even fully analysed in the first place. Just a thought to add to the mix.
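Andrew's suggestion can be sketched briefly: if each input option is given a probability and the inputs are assumed independent, a scenario's chance is the product of its option probabilities, and low-probability scenarios can be dropped before analysis. All the probabilities and the cutoff below are hypothetical illustrations, not proposed values.

```python
from itertools import product

# Hypothetical probabilities for each input option -- each group sums to 1.
growth = {"low": 0.5, "high": 0.5}
trips = {"lower": 0.25, "expected": 0.5, "higher": 0.25}
distribution = {"count-based": 0.6, "balanced": 0.4}

# Assuming independence, multiply the option probabilities for each scenario.
scenario_prob = {
    (g, t, d): growth[g] * trips[t] * distribution[d]
    for g, t, d in product(growth, trips, distribution)
}

# Discard (or never fully analyse) scenarios below an agreed cutoff.
CUTOFF = 0.10  # illustrative threshold
likely = {s: p for s, p in scenario_prob.items() if p >= CUTOFF}

for scenario, p in sorted(likely.items(), key=lambda kv: -kv[1]):
    print(f"{scenario}: {p:.1%}")
```

As the author notes in his reply, where to draw that cutoff is itself a negotiation: a developer would push it high, a reviewing agency would push it low.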
Thanks for the comments Richard and Roger. With the current system, cost is obviously a factor in any proposal. One way around that is for the government agencies to start requiring this or a similar process so everyone is bidding on the same or very similar scopes.
Putting cost aside for the moment, we are still left with the question of whether this process provides better information upon which to make a decision, and whether it (or a similar process) is even worth a government agency making part of its study requirements.
Narasimha – thanks. Your comment about defaults is exactly why we started down this road of trying to determine a better method.
Thanks Andrew – putting a percentage on each option could definitely be a next step. I would see arguments over where the line is drawn to include or exclude an option. The developer would likely want a high percentage before signing off on the improvement while a government would likely want a low percentage to reduce any chance of future congestion.
Given that base disagreement, I’m still left with the thought of whether more data has helped us.
Just a layperson reading up on traffic studies. From my perspective, if the developer’s cost concerns drive the scope of the project, then there will be less data, not more, perhaps therefore less accuracy? The cost of getting projections wrong is borne by residents, not by developers. I like the idea of the city’s study requirements setting minimum standards for more data.