1. Statistical analysis of TxCAP and its subsystems
Qazi, Abdus Shakur, 29 September 2011
The Texas Department of Transportation (TxDOT) uses the Texas Condition Assessment Program (TxCAP) to measure and compare overall road maintenance conditions across its 25 districts. TxCAP combines data from three existing subsystems: the Pavement Management Information System (PMIS), which scores pavement condition; the Texas Maintenance Assessment Program (TxMAP), which evaluates roadside conditions; and the Texas Traffic Assessment Program (TxTAP), which evaluates the condition of signs, work zones, railroad crossings, and other traffic elements. Together these subsystems give an overall picture of the condition of state roads, so TxCAP provides a more comprehensive assessment of interstate and non-interstate highways. However, the subsystem scores are based on data with different sample sizes, accuracy, and levels of variation, making it difficult to decide whether the difference between two TxCAP scores is a true difference or measurement error. Whether TxCAP is an effective and consistent means of measuring TxDOT roadway maintenance conditions therefore needs to be evaluated. To achieve this objective, statistical analyses of the system were conducted in two ways: 1) determining whether sufficient samples are collected for each subsystem, and 2) determining whether the scores are statistically different from each other. A case study was conducted with a dataset covering the whole state from 2008 to 2010. The case study results show that the difference in scores between two districts is statistically significant for some pairs of districts and insignificant for others. It is therefore recommended that TxDOT either compare the 25 districts in groups/tiers or increase the sample size of the data being collected so that districts can be compared individually.
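A significance test of this kind can be sketched in a few lines. The district scores below are hypothetical illustrations, not TxDOT data, and Welch's t-test is one reasonable choice (not necessarily the thesis's exact procedure) when two districts' scores have unequal variances:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples of condition scores."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    # Standard error of the difference in means, allowing unequal variances
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    return (mean_a - mean_b) / se

# Hypothetical condition scores for two districts (illustration only)
district_a = [82.1, 79.5, 84.0, 80.2, 83.3, 78.9, 81.7, 82.8]
district_b = [77.4, 80.1, 76.8, 79.0, 75.9, 78.2, 77.7, 79.5]

t = welch_t(district_a, district_b)
# A |t| well above ~2 suggests the score gap is unlikely to be measurement noise
print(round(t, 2))
```

With small per-district samples, the same score gap can fall below the significance threshold, which is exactly the sample-size concern the analysis raises.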
2. EVALUATE PROBE SPEED DATA QUALITY TO IMPROVE TRANSPORTATION MODELING
Rahman, Fahmida, 1 January 2019
Probe speed data are widely used to calculate performance measures that quantify statewide traffic conditions. Estimating accurate performance measures requires adequate speed data observations; however, probe vehicles reporting speed data may not be available at all times on every road segment. Agencies need a good understanding of the adequacy of these reported data before using them in transportation applications. This study systematically assesses the quality of probe data by proposing a method that determines the minimum sample rate for checking data adequacy. The minimum sample rate is defined as the minimum share of speed observations required for a segment to keep the speed estimates within a defined error range. The proposed method adopts a bootstrapping approach to determine the minimum sample rate at a pre-defined acceptance level. Applying the method to the speed data yields a minimum sample rate of 10% for Kentucky's roads; this cut-off identifies the segments where data availability exceeds the minimum sample rate. The study also presents two applications of the bootstrapped minimum sample rates. First, the results are used to identify the geometric and operational factors that contribute to the minimum sample rate of a facility. Using a random forest regression model, functional class, section length, and speed limit are found to be significant variables for uninterrupted facilities; for interrupted facilities, signal density, section length, speed limit, and intersection density are significant. Second, the speed data associated with the segments are applied to improve free-flow speed estimation relative to the traditional model.
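The bootstrapping idea can be sketched as follows: resample the segment's speed observations at a candidate rate, measure how wide the resulting interval of mean-speed estimates is, and take the smallest rate whose interval stays within tolerance. The speeds, tolerance, and 95% acceptance level below are illustrative assumptions, not the thesis's actual parameters:

```python
import random
import statistics

def bootstrap_ci_width(speeds, rate, n_boot=1000, seed=42):
    """Width of the 95% bootstrap interval of the mean speed at a given sample rate."""
    rng = random.Random(seed)
    k = max(2, int(rate * len(speeds)))
    # Resample with replacement, keeping only `rate` of the observations each time
    means = sorted(statistics.mean(rng.choices(speeds, k=k)) for _ in range(n_boot))
    return means[int(0.975 * n_boot)] - means[int(0.025 * n_boot)]

def minimum_sample_rate(speeds, tolerance_mph=2.0):
    """Smallest rate whose bootstrap interval stays within +/- tolerance of the mean."""
    for rate in (r / 100 for r in range(5, 101, 5)):
        if bootstrap_ci_width(speeds, rate) <= 2 * tolerance_mph:
            return rate
    return 1.0

# Hypothetical per-epoch probe speeds (mph) for one segment
gen = random.Random(7)
speeds = [gen.gauss(55, 6) for _ in range(200)]
print(minimum_sample_rate(speeds))
```

Segments with noisier speeds need a higher minimum rate, which is why geometric and operational factors end up predicting it.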
3. Robust Experimental Design for Speech Analysis Applications
January 2020
In many biological research studies, including speech analysis, clinical research, and prediction studies, the validity of the study depends on how effectively the training data set represents the target population. In speech analysis, for example, if one is performing emotion classification from speech, the performance of the classifier depends mainly on the size and quality of the training data set. With small sample sizes and unbalanced data, classifiers developed in this context may focus on incidental differences in the training set, such as gender, age, and dialect, rather than on emotion.
This thesis evaluates several sampling methods and a non-parametric approach to the sample sizes required to minimize the effect of these nuisance variables on classification performance. The work focuses on speech analysis applications and therefore uses speech features such as Mel-Frequency Cepstral Coefficients (MFCC) and Filter Bank Cepstral Coefficients (FBCC). A non-parametric divergence measure (the D_p divergence) was used to study the differences between sampling schemes (stratified and multistage sampling) and the changes due to sentence type in the sampled set. (Masters Thesis, Electrical Engineering, 2020)
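One common non-parametric estimator of the D_p divergence is based on the Friedman-Rafsky statistic: build a minimum spanning tree over the pooled samples and count the edges that join points from the two different sets; few cross-set edges means well-separated distributions. The sketch below uses hypothetical 2-D Gaussian features rather than real MFCC/FBCC vectors, and is an illustration of the estimator, not the thesis's implementation:

```python
import math
import random

def mst_edges(points):
    """Prim's algorithm: edge list (as index pairs) of the Euclidean MST."""
    dist = lambda a, b: math.dist(points[a], points[b])
    # best[i] = (distance to tree, tree vertex it attaches to)
    best = {i: (dist(0, i), 0) for i in range(1, len(points))}
    edges = []
    while best:
        j = min(best, key=lambda i: best[i][0])
        _, parent = best.pop(j)
        edges.append((parent, j))
        for i in best:  # relax remaining vertices against the new tree vertex j
            d_ij = dist(j, i)
            if d_ij < best[i][0]:
                best[i] = (d_ij, j)
    return edges

def dp_divergence(x, y):
    """MST-based (Friedman-Rafsky) estimate of the D_p divergence of samples x, y."""
    points = list(x) + list(y)
    labels = [0] * len(x) + [1] * len(y)
    cross = sum(labels[a] != labels[b] for a, b in mst_edges(points))
    n, m = len(x), len(y)
    # Near 0 when the samples interleave freely, near 1 when they separate
    return max(0.0, 1 - cross * (n + m) / (2 * n * m))

rng = random.Random(0)
# Hypothetical 2-D feature vectors from two sampling schemes (illustration only)
x = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(60)]
y = [(rng.gauss(2, 1), rng.gauss(2, 1)) for _ in range(60)]
print(round(dp_divergence(x, y), 2))
```

Comparing this divergence across sampling schemes indicates which scheme draws training sets that look most like the target population.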