131 |
The Use of Building Information Modeling in Generating Hard Bid Estimates. Jensen, Peter Samuel. 16 December 2010.
Building Information Modeling has provided many benefits to the architectural, engineering, construction, and facilities management communities. Many studies have been done to validate the asserted benefits, including benefits to the field of estimating. The studies on estimating are currently limited to the realm of conceptual estimates and have treated detailed estimates only in the abstract. The purpose of this study was to determine how BIM was being used by companies in a hard bid, or detailed estimate, scenario. The research team used the Deseret Towers housing project at Brigham Young University as the basis for the research. A building information model (BIM) was provided to all bidders on the project, and at the conclusion of the bidding process a survey was used to determine how the contractors used the model in their bidding processes. The findings showed that a few of the contractors did use the model for quantity takeoff, and one actually used the quantities as the basis for its submitted bid. Additionally, the survey attempted to determine the prevailing attitudes of the estimators toward BIM as an estimating tool and their opinions of the future of BIM in estimating.
|
132 |
Safety Analyses at Signalized Intersections Considering Spatial, Temporal, and Site Correlation. Wang, Xuesong. 01 January 2006.
Statistics show that signalized intersections are among the most dangerous locations in a roadway network. Different approaches, including crash frequency and severity models, have been used to establish the relationship between crash occurrence and intersection characteristics. In order to model crash occurrence at signalized intersections more efficiently, and eventually to better identify the significant factors contributing to crashes, this dissertation investigated the temporal, spatial, and site correlations for total, rear-end, right-angle, and left-turn crashes. Using a basic regression model for correlated crash data leads to invalid statistical inference, due to incorrect test statistics and standard errors based on the misspecified variance. In this dissertation, Generalized Estimating Equations (GEEs) were applied, which extend generalized linear models to the analysis of longitudinal or clustered data.

A series of frequency models is presented using the GEE with a Negative Binomial link function. The GEE models for the crash frequency per year (using four correlation structures) were fitted for longitudinal data; the GEE models for the crash frequency per intersection (using three correlation structures) were fitted for the signalized intersections along corridors; and the GEE models were applied to the rear-end crash data with temporal or spatial correlation separately. For right-angle crash frequency, models at the intersection, roadway, and approach levels were fitted, and the roadway- and approach-level models were estimated using the GEE to account for the "site correlation". For left-turn crashes, the approach-level crash frequencies were modeled using the GEE with a Negative Binomial link function for most patterns, and with a binomial logit link function for the pattern having a higher proportion of zeros and ones in its crash frequencies. All intersection geometry design features, traffic control and operational features, traffic flows, and crash records were obtained for the selected intersections, which required extensive data collection.

The autoregressive structure was found to be the most appropriate correlation structure for both the temporal and spatial analyses, indicating that the correlation between multiple observations of a given intersection decreases as the time gap increases, and that for spatially correlated signalized intersections along corridors the correlation between intersections decreases as spacing increases. The unstructured correlation structure was applied for roadway- and approach-level right-angle crashes and also for different patterns of left-turn crashes at the approach level; usually, two approaches on the same roadway have a higher correlation. At signalized intersections, traffic volumes, site geometry, signal operations, and safety performance differ across the various approaches. Therefore, modeling the total number of left-turn crashes at intersections may obscure the real relationship between crash causes and their effects, and the dissertation accordingly modeled crashes at different levels. In particular, intersection-, roadway-, and approach-level models were compared for right-angle crashes, and different crash assignment criteria of "at-fault driver" or "near-side" were applied for the disaggregated models. For the roadway- and approach-level models, the "near-side" models outperformed the "at-fault driver" models.
Variables in traffic characteristics, geometric design features, traffic control and operational features, corridor-level factors, and location type were identified as significant in crash occurrence. Specifically, the relationship between crash occurrence and traffic volume was investigated extensively in different analyses. The logarithm of traffic volume per lane for the entire intersection was found to be the best functional form for total crashes in both the temporal and spatial analyses. The studies of right-angle and left-turn crashes confirm the assumption that the frequency of collisions is related to the traffic flows to which the colliding vehicles belong, and not to the sum of the entering flows; the logarithm of the product of conflicting flows is usually the most significant functional form in the model. This study found that left-turn protection on the minor roadway increases rear-end crash occurrence, while left-turn protection on the major roadway reduces rear-end crashes. In addition, left-turn protection specifically reduces Pattern 5 left-turn crashes (left-turning traffic collides with on-coming through traffic), but it increases Pattern 8 left-turn crashes (left-turning traffic collides with near-side crossing through traffic), and it has no significant effect on other patterns of left-turn crashes. This dissertation also investigated some other factors that had not been considered before. The safety effectiveness of many variables identified in this dissertation is consistent with previous studies; some variables have unexpected signs, and a justification is provided. Injury severity was also studied for Pattern 5 left-turn crashes. Crashes were assigned to the approach with the left-turning vehicles, and the "site correlation" among crashes occurring at the same approach was considered, since these crashes may share a similar propensity in severity. Many methodologies and applications were attempted in this dissertation; the study therefore makes both theoretical and practical contributions to safety analysis at signalized intersections.
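As a rough illustration of the GEE crash-frequency modeling described above, the sketch below fits a Negative Binomial GEE with an autoregressive working correlation using Python's statsmodels. The data are simulated, and every column name (intersection, year, log_vpl, left_turn_protect, crashes) is a hypothetical stand-in, not the dissertation's actual data or variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulate a small longitudinal panel: 50 intersections observed over 6 years.
rng = np.random.default_rng(42)
n_int, n_yr = 50, 6
df = pd.DataFrame({
    "intersection": np.repeat(np.arange(n_int), n_yr),
    "year": np.tile(np.arange(n_yr), n_int),
})
df["log_vpl"] = rng.normal(6.5, 0.4, len(df))          # log traffic volume per lane
df["left_turn_protect"] = rng.integers(0, 2, len(df))  # 1 = protected left-turn phase
mu = np.exp(-8 + 1.2 * df["log_vpl"] - 0.3 * df["left_turn_protect"])
df["crashes"] = rng.poisson(mu)                        # stand-in yearly crash counts

# GEE with a Negative Binomial mean/variance family and an AR(1)-type
# working correlation across years within each intersection.
model = smf.gee(
    "crashes ~ log_vpl + left_turn_protect",
    groups="intersection",
    data=df,
    time=df["year"].values,
    family=sm.families.NegativeBinomial(alpha=1.0),
    cov_struct=sm.cov_struct.Autoregressive(),
)
result = model.fit()
print(result.summary())
```

The working correlation can be swapped for sm.cov_struct.Exchangeable() or sm.cov_struct.Unstructured() to mirror the alternative structures compared in the dissertation.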
|
133 |
Statistical Methods for Nonlinear Dynamic Models with Measurement Error Using the Ricker Model. Resendes, David Joseph. 01 September 2011.
In ecological population management, years of animal counts are fit to nonlinear dynamic models (e.g., the Ricker model) because the values of the parameters are of interest. The yearly counts are subject to measurement error, which, if ignored, inevitably leads to biased estimates and adversely affects inference. In the literature, convenient distributional assumptions are often imposed, readily available estimated measurement error variances are not utilized, or the measurement error is ignored entirely. In this thesis, ways to estimate the parameters of the Ricker model and perform inference while accounting for measurement error are investigated, minimizing distributional assumptions and utilizing estimated measurement error variances. To these ends, SIMEX and modified estimating equations (MEE), rather than likelihood methods, are investigated for data on the abundance and log-abundance scales, and inference via the parametric bootstrap and via estimated standard errors from the modified estimating equations is demonstrated. Subsequently, simulation studies are performed on the log-abundance scale under varying parameter values to learn how the level of measurement error variance (ranging from a realistically low value of 0.0025 to an unrealistically high value of 0.025) affects the estimators and inference when measurement error is ignored, and how the methods perform when accounting for it. It was found that the bias induced by measurement error depends on the true value of the parameter. Furthermore, the performance of SIMEX and MEE is associated with the true value of a and the level of measurement error variance. In particular, both methods perform best for a > 1 and low to moderate levels of measurement error variance, with the MEE estimators having high standard errors and often poorer performance than those from SIMEX. It was also found that the MEE estimating equations contain singularities, which contribute to the estimators' low precision and erratic behavior. These methods were then applied to actual moose count data with a sample size more than double that of the simulations. Both the SIMEX and MEE estimators performed well, suggesting that the small sample size contributed to the earlier poor behavior.
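To make the SIMEX idea concrete, here is a minimal self-contained sketch on the log-abundance scale: a naive estimator is refit on data with progressively inflated measurement error, and the coefficient path is extrapolated back to zero error. The parameter values, noise levels, and quadratic extrapolant are illustrative assumptions, not the thesis's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate log-abundances from the Ricker model,
# x_{t+1} = x_t + a - b*exp(x_t) + process noise, observed with measurement error.
a, b, T = 1.5, 0.01, 200
x = np.empty(T)
x[0] = np.log(50.0)
for t in range(T - 1):
    x[t + 1] = x[t] + a - b * np.exp(x[t]) + rng.normal(0, 0.1)
sigma_u2 = 0.0025                                  # assumed known error variance
w = x + rng.normal(0, np.sqrt(sigma_u2), T)        # observed log-abundances

def naive_fit(w):
    """Least-squares fit of the Ricker growth equation, ignoring error."""
    y = w[1:] - w[:-1]                             # log growth rates
    X = np.column_stack([np.ones(len(y)), np.exp(w[:-1])])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta                                     # [a_hat, -b_hat]

# SIMEX: refit with progressively inflated error, then extrapolate to lambda = -1.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 200
ests = np.empty((len(lambdas), 2))
for i, lam in enumerate(lambdas):
    sims = [naive_fit(w + rng.normal(0, np.sqrt(lam * sigma_u2), T)) for _ in range(B)]
    ests[i] = np.mean(sims, axis=0)

# Quadratic extrapolation of each coefficient back to lambda = -1 (zero error).
coef = np.polyfit(lambdas, ests, deg=2)
a_simex = np.polyval(coef[:, 0], -1.0)
negb_simex = np.polyval(coef[:, 1], -1.0)
print(f"naive a = {ests[0, 0]:.3f}, SIMEX a = {a_simex:.3f} (true {a})")
print(f"naive b = {-ests[0, 1]:.4f}, SIMEX b = {-negb_simex:.4f} (true {b})")
```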
|
134 |
Predicting RNA Mutation Using 3D Structure. Dinda, Stephen B. 14 November 2011.
No description available.
|
135 |
Postnatal depression (PND) and neighborhood effects for women enrolled in a home visitation program. Jones, David. 03 June 2016.
No description available.
|
136 |
Evaluation of the Impact of Product Detail on the Accuracy of Cost Estimates. Divelbiss, David L. January 2005.
No description available.
|
137 |
Multiple imputation for marginal and mixed models in longitudinal data with informative missingness. Deng, Wei. 07 October 2005.
No description available.
|
138 |
Predicting the occurrence of major adverse cardiac events within 30 days after a patient's vascular surgery: An individual patient-data meta-analysis. Vanniyasingam, Thuvaraha. 04 1900.
Background: Major adverse cardiac events (MACE), a composite endpoint of cardiac death and nonfatal myocardial infarction (MI), are severe harmful outcomes that commonly arise after elective vascular surgeries. Because current pre-operative risk prediction models are not sufficiently effective in predicting post-operative outcomes, this thesis discusses the key results of an individual patient data meta-analysis based on data from six cohort studies of patients undergoing vascular surgery.

Objectives: The purpose of this thesis is to determine optimal thresholds of continuous covariates and create a prediction model for MACE within 30 days after a vascular surgery. The goals include exploring the minimum p-value method to dichotomize continuous variables; employing logistic regression analysis to determine a prediction model for MACE; evaluating its validity against other samples; and assessing its sensitivity to clustering effects. The secondary objectives are to determine individual models for predicting all-cause mortality, cardiac death, and nonfatal MI within 30 days of a vascular surgery, using the final covariates assessed for MACE.

Methods: Both B-type natriuretic peptide (BNP) and its N-terminal fragment (NTproBNP) are independently associated with cardiovascular complications after noncardiac surgeries, and particularly after noncardiac vascular surgeries. In a previous study, these covariates were dichotomized using the receiver operating characteristic (ROC) curve approach, and a simple logistic regression (SLR) model was created for MACE [1]. The first part of this thesis applies the minimum p-value method to determine a threshold for each natriuretic peptide (NP), BNP and NTproBNP. SLR is then used to model the prediction of MACE within 30 days after a patient's vascular surgery. Comparisons were made with the ROC curve approach to determine the optimal thresholds and create a prediction model. The validity of this model was tested using bootstrap samples, and its robustness was assessed using a mixed effects logistic regression (MELR) model and a generalized estimating equation (GEE). Finally, MELR was performed on each of the secondary outcomes.

Results: A variable, ROC_thrshld, was created to represent the cutpoints of Rodseth's ROC curve approach, which identified 116 pg/mL and 277.5 pg/mL as the optimal thresholds for BNP and NTproBNP, respectively [1]. The minimum p-value method dichotomized these NP concentrations at 115.57 pg/mL for BNP and 241.7 pg/mL for NTproBNP.

Discussion: One key limitation of this thesis is the small sample size received for NTproBNP. Also, determining only one cutpoint for each NP concentration may not be sufficient, since dichotomizing continuous factors can lead to loss of information along with other issues. Further research should explore other possible cutpoints and perform reclassification to observe improvements in risk stratification. After validating our final model against other samples, we can conclude that MINP_thrshld, the type of surgery, and diabetes are significant covariates for the prediction of MACE. Since only a blood test is needed to measure NP concentration levels and the status of the other two factors is easily obtained, minimal effort is needed to calculate the points and risk estimates for each patient. Further research should also be performed on the secondary outcomes to examine other factors that may be useful in prediction.

Conclusions: The minimum p-value method produced similar results to the ROC curve method in dichotomizing the NP concentration levels. The cutpoints for BNP and NTproBNP were 115.57 pg/mL and 241.7 pg/mL, respectively. Further research needs to be performed to determine the optimality of the final prediction model of MACE, with covariates MINP_thrshld, type of surgery, and diabetes mellitus. / Master of Science (MSc)
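As a rough sketch of the minimum p-value dichotomization described above (hypothetical code and variable names, not the thesis's implementation), one scans candidate cutpoints and keeps the one minimizing a chi-square test p-value; note the resulting minimum p-value is optimistically biased unless corrected for the multiple looks.

```python
import numpy as np
from scipy import stats

def min_p_cutpoint(biomarker, outcome, n_grid=81):
    """Scan candidate cutpoints over inner quantiles of the biomarker and
    return the cutpoint whose 2x2 table against the binary outcome yields
    the smallest chi-square p-value (uncorrected for multiple testing)."""
    grid = np.unique(np.quantile(biomarker, np.linspace(0.10, 0.90, n_grid)))
    best_cut, best_p = None, 1.0
    for c in grid:
        high = biomarker > c
        table = np.array([
            [np.sum(high & (outcome == 1)), np.sum(high & (outcome == 0))],
            [np.sum(~high & (outcome == 1)), np.sum(~high & (outcome == 0))],
        ])
        if table.min() == 0:            # skip degenerate splits
            continue
        _, p, _, _ = stats.chi2_contingency(table)
        if p < best_p:
            best_cut, best_p = c, p
    return best_cut, best_p

# Toy usage with simulated BNP-like values (hypothetical data)
rng = np.random.default_rng(1)
bnp = rng.lognormal(mean=4.5, sigma=0.8, size=400)
mace = rng.binomial(1, p=1 / (1 + np.exp(-(np.log(bnp) - 4.75))))
print(min_p_cutpoint(bnp, mace))
```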
|
139 |
Continuation Ratio and Generalized Estimating Equation Analysis of a Longitudinal Asthma Study / Statistical Analysis of a Longitudinal Asthma Study. Capan, Dragos. 04 1900.
Two randomized controlled trials were conducted to find out whether a new treatment for asthma has a significant effect on patients. These were multi-center trials with a parallel design, the control arm receiving a placebo. The data were collected over a period of about 20 days before the intervention was administered and for almost 80 days afterward; thus, each patient has many recorded observations, making the data longitudinal. The data are first summarized using descriptive statistics and graphical displays. Then a continuation ratio model, with a lagged covariate to account for the longitudinal aspect, is used to model the data. Finally, Generalized Estimating Equations methods, which have gained popularity in recent years for handling longitudinal correlation structures, are used. To apply the continuation ratio model, the data have to be appropriately restructured, as sketched below; logistic regression is then used to model the symptoms. The results of this procedure show that the treatment effect is statistically significant. However, the goodness-of-fit tests show that the model is inadequate; this issue is explored in the last subsection of Chapter 3. Using Generalized Estimating Equations to analyze the number of times rescue medication was used, we concluded that there is no statistically significant difference between the Active and Control groups. However, we noticed that the use of rescue medication decreased with time from the start of treatment. / Thesis / Master of Science (MS)
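To illustrate the restructuring step that the continuation ratio model requires, here is a minimal sketch assuming an ordinal symptom score in {0, ..., K} and hypothetical column names; it is not the thesis's actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def expand_continuation_ratio(df, y="symptom", K=3):
    """Turn each ordinal response y in {0..K} into binary 'stop' records:
    one row per level j < K that the subject reached (y >= j), with
    stop = 1 if the subject stopped exactly at level j."""
    rows = []
    for _, r in df.iterrows():
        for j in range(min(int(r[y]), K - 1) + 1):
            rec = r.drop(labels=[y]).to_dict()
            rec.update(level=j, stop=int(r[y] == j))
            rows.append(rec)
    return pd.DataFrame(rows)

# Toy usage: 40 patients, half on the active treatment (hypothetical data)
rng = np.random.default_rng(7)
df = pd.DataFrame({
    "patient": np.arange(40),
    "active": np.repeat([1, 0], 20),
    "symptom": rng.integers(0, 4, 40),   # ordinal score 0..3
})
long = expand_continuation_ratio(df)

# Each conditional probability P(Y = j | Y >= j) is then fit by ordinary
# logistic regression on the expanded data, e.g.:
fit = smf.logit("stop ~ level + active", data=long).fit(disp=0)
print(fit.params)
```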
|
140 |
A method of evaluating the impact of economic change on the services of local governments. Kambhampaty, S. Murthy. 12 March 2009.
This study seeks a method of evaluating the local impact of changing economic conditions (employment, per capita income, et cetera). Specifically, measures of the impact of changing economic conditions on the services of local governments are formulated. These measures provide a means for comparing the impact of a project (or of alternative projects) to conditions that would prevail in its (their) absence.
The data used are drawn from the Virginia Impact Projection (VIP) Model in the form of statistically derived equations representing the relationships for eleven different categories of public services. The analytical framework was developed from theories of public decision-making, public finance, and local government expenditures and services. This framework was used to analyze the per capita expenditure relationships (functions) to obtain the information required to formulate the impact measures.
For individual categories of services, a SERVICE VALUE INDEX was devised to indicate changes (including shifts) in the demand for the services and the level of their provision. The NET PUBLIC SERVICE BENEFIT is a measure of the overall increase (or decline) in public services provision, and balances the changes in levels of service in the individual categories against each other. This measure reflects the net benefits derived by the consumer-voter following her/his reallocation of expenditures given the new cost conditions, budget constraints, et cetera, as a consequence of the project. / Master of Urban Affairs
|