31 |
Utilizing A Real Life Data Warehouse To Develop Freeway Travel Time Reliability Stochastic Models. Emam, Emam, 01 January 2006
During the 20th century, transportation programs focused on developing the basic infrastructure of transportation networks. In the 21st century, the focus has shifted to the management and operations of these networks. A transportation network reliability measure plays an important role in judging the performance of the transportation system and in evaluating the impact of new Intelligent Transportation Systems (ITS) deployments. Measuring transportation network travel time reliability is imperative for providing travelers with accurate route guidance information. It can be applied to generate the shortest path (or alternative paths) connecting origins and destinations, especially under conditions of varying demand and limited capacity. Measuring transportation network reliability is a complex issue because it involves both the infrastructure and the behavioral responses of the users. The subject is also challenging because there is no single agreed-upon reliability measure. This dissertation developed a new method for estimating the effect of travel demand variation and link capacity degradation on the reliability of a roadway network. The method was applied to a hypothetical roadway network, and the results show that travel time reliability and capacity reliability are both consistent measures of road network reliability, though each may have a different use. The capacity reliability measure is of special interest to transportation network planners and engineers because it addresses whether the available network capacity is sufficient relative to present or forecast demand, whereas travel time reliability is of particular interest to network users. The new travel time reliability method is sensitive to the users' perspective since it reflects that an increase in segment travel time should always result in lower travel time reliability.
It is also an indicator of the operational consistency of a facility over an extended period of time. This initial theoretical effort and basic research was followed by applying the new method to the I-4 corridor in Orlando, Florida. This dissertation utilized a real life transportation data warehouse to estimate the travel time reliability of the I-4 corridor. Four stochastic travel time models were tested: Weibull, exponential, lognormal, and normal. The lognormal was the best-fit model. Unlike mechanical equipment, no freeway segment can realistically be traversed in zero seconds, no matter how fast the vehicles travel. An adjustment of the location parameter of the best-fit statistical model (lognormal) was therefore needed to estimate travel time reliability accurately. The adjusted model can be used to compute and predict the travel time reliability of freeway corridors and to report this information in real time to the public through traffic management centers. Compared to the existing Florida Method and the California Buffer Time Method, the new reliability method showed higher sensitivity to geographical location, reflecting the level of congestion and bottlenecks. The major advantages of this new method for practitioners and researchers over existing methods are its ability to estimate travel time reliability as a function of departure time, and its treatment of travel time as a continuous variable that captures the variability experienced by individual travelers over an extended period of time. As such, the new method developed in this dissertation could be utilized in transportation planning and freeway operations for estimating travel time reliability as an important measure of performance. The segment length impacts on travel time reliability calculations were then investigated utilizing the wealth of data available in the I-4 data warehouse.
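The shifted lognormal computation described above can be sketched as follows. This is a minimal illustration with made-up parameters, not values fitted to the I-4 data: the location parameter is the minimum physically possible traversal time of the segment, and reliability is the probability of traversing the segment within a travel time threshold.

```python
import math

def lognormal_cdf(t, mu, sigma, loc=0.0):
    """CDF of a shifted (three-parameter) lognormal: P(T <= t).

    `loc` is the location parameter: the minimum physically possible
    travel time of the segment (no vehicle can traverse it faster).
    """
    if t <= loc:
        return 0.0
    z = (math.log(t - loc) - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

def travel_time_reliability(threshold, mu, sigma, loc=0.0):
    """Reliability = probability the segment is traversed within `threshold`."""
    return lognormal_cdf(threshold, mu, sigma, loc)

# Illustrative (not I-4) parameters: a segment with a 4-minute free-flow
# floor whose excess travel time is lognormal with mu=ln(2), sigma=0.5.
r = travel_time_reliability(8.0, math.log(2.0), 0.5, loc=4.0)
```

Without the location-parameter adjustment (loc=0), the model would assign nonzero probability to travel times below the free-flow floor, which is exactly the unrealistic behavior the dissertation corrects for.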
The developed travel time reliability models showed significant evidence of a relationship between segment length and the accuracy of the results: the longer the segment, the less accurate the travel time reliability estimates. Accordingly, long segments (e.g., 25 miles) are more appropriate for planning purposes as a macroscopic performance measure of the freeway corridor, while short segments (e.g., 5 miles) are more appropriate for the evaluation of freeway operations as a microscopic performance measure. Further, this dissertation explored the impact of relaxing an important assumption in reliability analysis: link independence. In real life, assuming that link failures on a road network are statistically independent is dubious. The failure of a link in one particular area does not necessarily result in the complete failure of a neighboring link, but may lead to deterioration of its performance. The Cause-Based Multimode Model (CBMM) has been used to address link dependency in communication networks. However, the transferability of this model to transportation networks had not been tested, and this approach had not been considered before in the calculation of transportation network reliability. This dissertation presented the CBMM and applied it to predict the travel time reliability with which an origin demand can reach a specified destination under dependent, multimode link failure conditions. The new model studied the multi-state system reliability analysis of transportation networks, for which one cannot formulate an "all or nothing" type of failure criterion and in which dependent link failures are considered. The results demonstrated that the newly developed method has true potential and can easily be extended to large-scale networks as long as the data are available.
More specifically, the analysis of a hypothetical network showed that the dependency assumption is very important for obtaining reasonable travel time reliability estimates of links, paths, and the entire network. The results showed a large discrepancy between the dependency and independency analysis scenarios. Realistic scenarios that considered the dependency assumption were on the safe side, which is important for transportation network decision makers; this could also aid travelers in making better choices. In contrast, deceptive information caused by the independency assumption could add to the travelers' anxiety associated with the unknown length of delay, which normally reflects negatively on highway agencies and on the management of taxpayers' resources.
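A minimal common-cause sketch illustrates why the independence assumption can be deceptive. This is a hypothetical two-route example with made-up probabilities, not the CBMM formulation itself: an origin and destination are connected by two parallel routes whose failures share a single cause (say, a regional weather event), and treating the routes as independent overstates the origin-destination reliability.

```python
def od_reliability_dependent(q, p_good, p_bad):
    """O-D reliability over two parallel routes whose failures share a
    common cause occurring with probability q. p_good / p_bad are the
    per-route success probabilities without / with the cause present."""
    fail_both = (1 - q) * (1 - p_good) ** 2 + q * (1 - p_bad) ** 2
    return 1 - fail_both

def od_reliability_independent(q, p_good, p_bad):
    """Same routes, but (incorrectly) treating failures as independent:
    multiply the marginal per-route reliabilities."""
    p = (1 - q) * p_good + q * p_bad  # marginal route reliability
    return 1 - (1 - p) ** 2

dep = od_reliability_dependent(0.1, 0.95, 0.50)    # conservative estimate
ind = od_reliability_independent(0.1, 0.95, 0.50)  # overstated estimate
```

Here the dependent (realistic) estimate is lower than the independent one, matching the "safe side" behavior reported above: the common cause removes the redundancy benefit that the independence assumption silently credits to the second route.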
|
32 |
Examining Dynamic Variable Speed Limit Strategies For The Reduction Of Real-time Crash Risk On Freeways. Cunningham, Ryan, 01 January 2007
Recent research at the University of Central Florida involving crashes on Interstate-4 in Orlando, Florida has led to the creation of new statistical models capable of determining the crash risk on the freeway (Abdel-Aty et al., 2004, 2005; Pande and Abdel-Aty, 2006). These models are able to calculate the rear-end and lane-change crash risks along the freeway in real time using static information at various locations along the freeway as well as real-time traffic data obtained from loop detectors. Since these models use real-time traffic data, they can calculate rear-end and lane-change crash risk values as traffic flow conditions change on the freeway. The objective of this study is to examine the potential benefits of variable speed limit implementation techniques for reducing crash risk along the freeway. Variable speed limits (VSL) are an ITS strategy typically used upstream of a queue to reduce the effects of congestion: by lowering the speeds of vehicles approaching a queue, more time is given for the queue to dissipate from the front before it continues to grow from the back. This study uses variable speed limit strategies in a corridor-wide attempt to reduce rear-end and lane-change crash risks where speed differences between upstream and downstream vehicles are high. The idea of homogeneous speed zones was also introduced in this study to determine the distance over which variable speed limits should be implemented from a station of interest. This is unique since it is the first time a dynamic distance has been considered for variable speed limit implementation. Several VSL strategies were found to successfully reduce the rear-end and lane-change crash risks under low-volume traffic conditions (60% and 80% loading). In every case, the most successful treatments involved lowering upstream speed limits by 5 mph and raising downstream speed limits by 5 mph.
In the free-flow condition (60% loading), the best treatments involved the more liberal threshold for defining homogeneous speed zones (5 mph) and the more liberal implementation distance (the entire speed zone), as well as a minimum time period of 10 minutes. This treatment was shown to significantly reduce network travel time, by 0.8%. It was also shown that this particular implementation strategy (lowering upstream, raising downstream) is wholly resistant to the effects of crash migration in the 60% loading scenario. In the condition approaching congestion (80% loading), the best treatment again involved the more liberal threshold for homogeneous speed zones (5 mph), yet the more conservative implementation distance (half the speed zone), along with a minimum time period of 5 minutes. This treatment emerged as the best due to its unique capability to resist the increasing effects of crash migration in the 80% loading scenario. Treatments implemented over half the speed zone were shown to be more robust against crash migration than other treatments. The best treatment exhibited the greatest benefit in the sections with reduced crash risk and the greatest resistance to crash migration in other sections. In the 80% loading scenario, the best treatment increased network travel time by less than 0.4%, which is deemed acceptable. No treatment was found to successfully reduce the rear-end and lane-change crash risks in the congested traffic condition (90% loading). This is attributed to the fact that, in the congested state, the speed of vehicles is governed by the surrounding traffic conditions rather than by the posted speed limit; therefore, changing the posted speed limit does not affect vehicle speeds in the desired manner. These conclusions agree with Dilmore (2005).
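The zone-grouping and treatment logic described above can be sketched as follows. The station speeds, the 5 mph homogeneity threshold, and the speed limits are illustrative assumptions; the study itself evaluated these strategies in micro-simulation rather than with this simplified grouping rule.

```python
def homogeneous_speed_zones(station_speeds, threshold=5.0):
    """Group consecutive detector stations into homogeneous speed zones:
    a station joins the current zone while its average speed stays within
    `threshold` mph of the zone's first station."""
    zones, current = [], [0]
    for i in range(1, len(station_speeds)):
        if abs(station_speeds[i] - station_speeds[current[0]]) <= threshold:
            current.append(i)
        else:
            zones.append(current)
            current = [i]
    zones.append(current)
    return zones

def vsl_treatment(upstream_limit, downstream_limit, step=5):
    """The study's most successful treatment shape: lower the upstream
    speed limit and raise the downstream speed limit by `step` mph."""
    return upstream_limit - step, downstream_limit + step

# Three zones emerge: free flow, a slow pocket, then free flow again.
zones = homogeneous_speed_zones([62, 60, 58, 45, 44, 43, 61, 63])
limits = vsl_treatment(60, 55)
```

The grouping step is what makes the implementation distance dynamic: the treatment is applied over the detected zone (or half of it, in the 80% loading case) rather than over a fixed number of stations.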
|
33 |
Evaluating Ramp Meter Wait Time in Utah. Daines, Tanner Jeffrey, 19 April 2022
The purpose of this research was to develop an algorithm that could predict ramp meter wait time at metered freeway on-ramps throughout the state of Utah using the existing loop detector systems on the ramps. The loop detectors provide data in 60-second increments, including volume, occupancy, and the metering rate. Using these data sources, several ramp meter queue length algorithms were applied; the predicted queue lengths were then converted into wait times using the metering rate provided by the detector data. A conservation model and several variations of a Kalman filter model generated predicted queue lengths and wait times that were compared to observed values. The Vigos model, the model that yielded the best results, provided wait time estimates that were generally within approximately 45 seconds of the observed wait time. This model is simple to implement and can be automated by the Utah Department of Transportation (UDOT) to provide wait time estimates at any metered on-ramp throughout the state.
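A minimal sketch of one 60-second update in the spirit of the Kalman-corrected conservation models described above. The gain, the occupancy-based queue measurement, and all numeric inputs are illustrative assumptions, not the Vigos model's calibrated parameters or UDOT values.

```python
def update_queue(q_prev, inflow, outflow, occ_queue, gain=0.22):
    """One 60-second update of an on-ramp queue estimate (vehicles).

    Conservation step: previous queue plus arrivals minus departures.
    Kalman-style correction: blend the prediction toward an
    occupancy-derived queue measurement (occ_queue) with a fixed gain.
    """
    predicted = max(q_prev + inflow - outflow, 0.0)
    return predicted + gain * (occ_queue - predicted)

def wait_time_seconds(queue_veh, metering_rate_vph):
    """Expected wait: vehicles in queue served at the metering rate."""
    if metering_rate_vph <= 0:
        return float("inf")
    return queue_veh / metering_rate_vph * 3600.0

# One interval: 12 arrivals, 9 released, occupancy suggests 14 queued.
q = update_queue(q_prev=10.0, inflow=12, outflow=9, occ_queue=14.0)
w = wait_time_seconds(q, metering_rate_vph=600)
```

The correction step is what keeps a pure conservation count from drifting as detector counting errors accumulate, which is the practical reason the Kalman-style variants outperform the plain conservation model.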
|
34 |
Performance Evaluation of the McMaster Incident Detection Algorithm. Lyall, Bradley Benjamin, 04 1900
The McMaster incident detection algorithm is being tested on-line within the Burlington freeway traffic management system (FTMS) as an alternative to the existing California-type algorithm currently in place. This paper represents the most recent and comprehensive evaluation of the McMaster algorithm's performance to date. In the past, the algorithm was tested using single-lane detectors for the northbound lanes only; this evaluation uses data from lanes 1 and 2 at each of the 13 northbound and 13 southbound detector stations. The data were collected during a 60-day period beginning on November 15, 1990 and ending January 13, 1991. Detection rate, mean time-lag to detection, and false alarm rate are used to evaluate the performance of the algorithm. Factors that influenced the performance of the algorithm, such as winter precipitation, are also examined. To improve the algorithm's detection rate and lower its false alarm rate, it is recommended that the persistence check used to declare an incident be increased by 30 seconds, from 2 to 3 periods. / Thesis / Candidate in Philosophy
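The recommended persistence check can be sketched as follows. This is a simplified stand-in for the McMaster algorithm's flagging logic: the per-period congestion flags are assumed inputs, and only the persistence rule (an incident is declared after the flag holds for N consecutive 30-second periods) is shown.

```python
def declare_incident(flags, persistence=3):
    """Return the index of the first 30-second period at which an incident
    is declared: the congestion flag must hold for `persistence`
    consecutive periods. Returns None if no incident is declared."""
    run = 0
    for i, flagged in enumerate(flags):
        run = run + 1 if flagged else 0  # reset on any clear period
        if run >= persistence:
            return i
    return None

# A two-period blip is ignored under the recommended 3-period check,
# while a sustained flag still triggers a declaration.
blip = declare_incident([True, True, False, False], persistence=3)
sustained = declare_incident([False, True, True, True], persistence=3)
```

Raising the check from 2 to 3 periods trades 30 seconds of detection time-lag for fewer false alarms, which is exactly the detection-rate versus false-alarm-rate balance the evaluation measures.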
|
35 |
Improved Vehicle Length Measurement and Classification from Freeway Dual-Loop Detectors in Congested Traffic. Wu, Lan, 21 May 2014
No description available.
|
36 |
Toronto: Linking the Lake - Solutions for an Urban Infrastructural Disconnect. De Wet, Andres MG, 12 September 2017
No description available.
|
37 |
Pull Over: Promoting a Pedestrian Urban Experience by Providing a Vehicular Urban Experience. Dewald, Nick Kyle, 21 August 2008
No description available.
|
38 |
Freeway On-Ramp Bottleneck Activation, Capacity, and the Fundamental Relationship. Kim, Seoungbum, 04 September 2013
No description available.
|
39 |
Modeling the Interarrival Times for Non-Signalized Freeway Entrance Ramps. Suravaram, Kiran R., 29 September 2007
No description available.
|
40 |
An Evaluation of Entrance Ramp Metering for Freeway Work Zones using Digital Simulation. Oner, Erdinc, 24 April 2009
No description available.
|