1

Investigating the Utility of Age-Dependent Cranial Vault Thickness as an Aging Method for Juvenile Skeletal Remains on Dry Bone, Radiographic and Computed Tomography Scans

Kamnikar, Kelly R 07 May 2016 (has links)
Age estimation, a component of the biological profile, contributes significantly to the creation of a post-mortem profile of an unknown set of human remains. The goal of this study is to: (1) refine the juvenile age estimation method based on cranial vault thickness (CVT) through multivariate adaptive regression splines (MARS) modeling, (2) test the method on samples of known age, and (3) compare CVT age estimates with those from dental development. Data for this study come from computed tomography (CT) scans, radiographic images, and dry bone. CVT was measured at seven cranial landmarks (nasion, glabella, bregma, vertex, vertex radius, lambda, and opisthocranion). Results indicate that the CVT models vary in their predictive ability; vertex and lambda produce the best results. Predicted values and prediction intervals for CVT are wider and less accurate than those based on dental development. Aging by CVT could benefit from a larger known-age sample that includes individuals older than 6 years.
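A minimal sketch of the kind of MARS-style age model the abstract describes: age is regressed on hinge functions of the thickness at each landmark. The landmark names come from the abstract; the synthetic data, the fixed knots, and every variable name are hypothetical stand-ins (a real MARS fit selects knots and terms adaptively).

```python
import numpy as np

# Hypothetical illustration of MARS-style age prediction from cranial
# vault thickness (CVT). A real MARS fit would choose knots adaptively;
# here they are fixed at the sample medians purely for illustration.

rng = np.random.default_rng(0)
landmarks = ["nasion", "glabella", "bregma", "vertex",
             "vertex_radius", "lambda", "opisthocranion"]

n = 200                                   # synthetic sample, ages 0-6 years
age = rng.uniform(0, 6, n)
# Fake CVT in mm: thickness grows with age, plus noise (illustrative only).
cvt = 2.0 + 0.8 * age[:, None] + rng.normal(0, 0.5, (n, len(landmarks)))

def hinge_basis(X, knots):
    """Intercept plus paired hinges max(0, x-t), max(0, t-x) per column."""
    cols = [np.ones(len(X))]
    for j, t in enumerate(knots):
        cols.append(np.maximum(0.0, X[:, j] - t))
        cols.append(np.maximum(0.0, t - X[:, j]))
    return np.column_stack(cols)

knots = np.median(cvt, axis=0)
B = hinge_basis(cvt, knots)
coef, *_ = np.linalg.lstsq(B, age, rcond=None)

pred = B @ coef
resid_sd = np.std(age - pred, ddof=B.shape[1])
print(f"RMSE: {np.sqrt(np.mean((age - pred) ** 2)):.2f} years")
print(f"Approx. 95% prediction interval half-width: {1.96 * resid_sd:.2f} years")
```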
2

An Efficient Robust Concept Exploration Method and Sequential Exploratory Experimental Design

Lin, Yao 31 August 2004 (has links)
Experimentation and approximation are essential for efficiency and effectiveness in concurrent engineering analyses of large-scale complex systems. The approximation-based design strategy is not fully utilized in industrial applications, in which designers must deal with multi-disciplinary, multi-variable, multi-response, and multi-objective analyses using complicated and expensive-to-run computer analysis codes or physical experiments. With current experimental design and metamodeling techniques, it is difficult for engineers to develop acceptable metamodels for irregular responses and to achieve good design solutions in large design spaces at low cost. To circumvent this problem, engineers tend either to adopt low-fidelity simulations or models, with which important response properties may be lost, or to restrict the study to very small design spaces. Information from expensive physical or computer experiments is often used for validation in late design stages rather than as an analysis tool in early-stage design. This increases the likelihood of expensive re-design and lengthens time-to-market. In this dissertation, two methods, the Sequential Exploratory Experimental Design (SEED) and the Efficient Robust Concept Exploration Method (E-RCEM), are developed to address these problems. SEED and E-RCEM help develop acceptable metamodels for irregular responses with expensive experiments and achieve satisficing design solutions in large design spaces with limited computational or monetary resources. It is verified that more accurate metamodels are developed, and better design solutions achieved, with SEED and E-RCEM than with traditional approximation-based design methods. SEED and E-RCEM facilitate the full use of the simulation-and-approximation-based design strategy in engineering and scientific applications. Several preliminary approaches for metamodel validation with additional validation points are proposed, after verifying that the widely used method of leave-one-out cross-validation is theoretically inappropriate for testing the accuracy of metamodels. The performance of kriging and MARS metamodels is compared, and a sequential metamodeling approach is then proposed to utilize different types of metamodels along the design timeline. Several single-variable or two-variable examples and two engineering examples, the design of pressure vessels and the design of unit cells for linear cellular alloys, are used to facilitate these studies.
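A rough illustration of the sequential-experimentation idea (a generic uncertainty-driven loop, not the dissertation's SEED algorithm itself): refit a metamodel after each new run and place the next experiment where the metamodel is least certain. The expensive_experiment function, the Gaussian-process surrogate, and the candidate grid are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical stand-in for an expensive simulation or physical experiment;
# the jump makes the response "irregular" to stress the metamodel.
def expensive_experiment(x):
    return np.sin(3 * x) + 0.3 * np.sign(x - 0.5)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (4, 1))             # small initial design
y = expensive_experiment(X).ravel()
candidates = np.linspace(0, 1, 201).reshape(-1, 1)

for _ in range(10):                       # sequential stages
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                  normalize_y=True).fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)]    # run where uncertainty is largest
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_experiment(x_new))

print(f"Final design size: {len(X)} runs")
```

Sampling at the maximum predictive standard deviation is one simple acquisition rule; criteria that also weight predicted performance would steer the design toward promising regions rather than pure exploration.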
3

Predicting bid prices in construction projects using non-parametric statistical models

Pawar, Roshan 15 May 2009 (has links)
Bidding is a very competitive process in the construction industry; each competitor's business depends on winning or losing these bids. Contractors would like to predict the bids their competitors may submit, which would help them obtain contracts and grow their business. Unit prices estimated for each quantity differ from contractor to contractor. These unit costs depend on factors such as the historical data used for estimating unit costs, vendor quotes, market surveys, the amount of material estimated, the number of projects the contractor is working on, equipment rental costs, the amount of equipment owned by the contractor, and the risk averseness of the estimator. These factors are much the same when estimators estimate the costs of similar projects. Thus, there is a relationship between the projects a particular contractor has bid on in previous years and the cost the contractor is likely to quote for future projects, and this relationship can be used to predict the contractor's future bids. For example, a contractor may use historical data from a certain year when bidding on a certain type of project; the unit prices may be adjusted for size, time, and location, but the basis for bidding on projects of similar type remains the same. Statistical tools can model the underlying relationship between the final cost a contractor quotes for a project and the quantities of materials or amount of work performed. There are many statistical modeling techniques, but a model used for predicting costs should be flexible enough to capture any underlying pattern. Data such as the amount of work to be performed for a certain line item, a material cost index, a labor cost index, and a unique identifier for each participating contractor are used to predict the bids a contractor might quote for a given project. The analysis uses artificial neural networks and multivariate adaptive regression splines. Comparing the results of the two techniques shows that multivariate adaptive regression splines predict the cost better than artificial neural networks.
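A hedged sketch of the comparison described above, on synthetic bid data. scikit-learn's MLPRegressor stands in for the artificial neural network; since the study's MARS software is not named, a fixed-knot hinge-function regression stands in for MARS. The predictors mirror those listed in the abstract, but the data-generating process and all names are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 500
# Hypothetical predictors: line-item quantity, material and labor cost
# indices, and a contractor identifier (as in the abstract).
qty = rng.uniform(100, 1000, n)
mat_idx = rng.uniform(0.9, 1.2, n)
lab_idx = rng.uniform(0.9, 1.3, n)
contractor = rng.integers(0, 5, n).astype(float)
bid = (50 * qty * mat_idx + 20 * qty * lab_idx
       + 1000 * contractor + rng.normal(0, 5000, n))

X = np.column_stack([qty, mat_idx, lab_idx, contractor])
X_tr, X_te, y_tr, y_te = train_test_split(X, bid, random_state=0)

# ANN baseline (scaling inputs helps the network converge).
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32),
                                 max_iter=5000, random_state=0)).fit(X_tr, y_tr)

# MARS-like model: least squares on hinge functions with knots fixed at
# the training-column quartiles (real MARS selects knots adaptively).
def hinges(X, knots):
    cols = [np.ones(len(X))]
    for j in range(X.shape[1]):
        for t in knots[j]:
            cols.append(np.maximum(0.0, X[:, j] - t))
            cols.append(np.maximum(0.0, t - X[:, j]))
    return np.column_stack(cols)

knots = [np.quantile(X_tr[:, j], [0.25, 0.5, 0.75]) for j in range(X.shape[1])]
coef, *_ = np.linalg.lstsq(hinges(X_tr, knots), y_tr, rcond=None)

for name, pred in [("ANN", ann.predict(X_te)),
                   ("hinge (MARS-like)", hinges(X_te, knots) @ coef)]:
    print(name, "MAE:", round(float(np.mean(np.abs(y_te - pred))), 1))
```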
4

A Computational Approach to Nonparametric Regression: Bootstrapping CMARS Method

Yazici, Ceyda 01 September 2011 (has links) (PDF)
Bootstrapping is a resampling technique that treats the original data set as a population and draws samples from it with replacement. The technique is widely used, especially in mathematically intractable problems. In this study, it is used to obtain the empirical distributions of the parameters, and thereby to determine whether they are statistically significant, in a special case of nonparametric regression, Conic Multivariate Adaptive Regression Splines (CMARS). CMARS, which uses conic quadratic optimization, is a modified version of the well-known nonparametric regression model Multivariate Adaptive Regression Splines (MARS). Although it performs better with respect to several criteria, the CMARS model is more complex than the MARS model. To overcome this problem, and to further improve CMARS performance, three different bootstrapping regression methods, namely Random-X, Fixed-X, and Wild bootstrap, are applied to four data sets of different sizes and scales. The performances of the models are then compared using various criteria, including accuracy, precision, complexity, stability, robustness, and efficiency. Random-X yields more precise, accurate, and less complex models, particularly for medium-size, medium-scale data, even though it is the least efficient method.
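The distinction between the Random-X and Fixed-X bootstraps can be sketched on a toy linear model (the Wild bootstrap, which perturbs residuals with random multipliers, is omitted for brevity). This is a generic illustration under synthetic data, not the study's CMARS code; a coefficient is judged significant when its bootstrap interval excludes zero.

```python
import numpy as np

rng = np.random.default_rng(3)
n, B = 100, 2000
x = rng.uniform(0, 10, n)
y = 1.5 + 0.8 * x + rng.normal(0, 2.0, n)     # true slope 0.8

def ols(x, y):
    """Return (intercept, slope) by least squares."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b0, b1 = ols(x, y)
resid = y - (b0 + b1 * x)

slopes_rx, slopes_fx = [], []
for _ in range(B):
    # Random-X: resample (x, y) pairs with replacement.
    i = rng.integers(0, n, n)
    slopes_rx.append(ols(x[i], y[i])[1])
    # Fixed-X: keep the design fixed, resample residuals only.
    e = resid[rng.integers(0, n, n)]
    slopes_fx.append(ols(x, b0 + b1 * x + e)[1])

for name, s in [("Random-X", slopes_rx), ("Fixed-X", slopes_fx)]:
    lo, hi = np.percentile(s, [2.5, 97.5])
    print(f"{name}: 95% bootstrap CI for slope = ({lo:.3f}, {hi:.3f})")
    # The slope is statistically significant if the CI excludes 0.
```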
5

Bayesian Uncertainty Quantification for Large Scale Spatial Inverse Problems

Mondal, Anirban August 2011 (has links)
We consider a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a high-dimensional spatial field. The Bayesian approach contains a natural mechanism for regularization in the form of prior information, can incorporate information from heterogeneous sources, and provides a quantitative assessment of uncertainty in the inverse solution. The Bayesian setting casts the inverse solution as a posterior probability distribution over the model parameters. The Karhunen-Loève expansion and the discrete cosine transform (DCT) were used for dimension reduction of the random spatial field. Furthermore, we used a hierarchical Bayes model to inject multiscale data into the modeling framework. In this Bayesian framework, we have shown that the inverse problem is well-posed by proving that the posterior measure is Lipschitz continuous with respect to the data in the total variation norm. The need for multiple evaluations of the forward model on a high-dimensional spatial field (e.g., in the context of MCMC), together with the high dimensionality of the posterior, results in many computational challenges. We developed a two-stage reversible-jump MCMC method that screens out bad proposals in an inexpensive first stage. Channelized spatial fields were represented by facies boundaries and variogram-based spatial fields within each facies. Using a level-set based approach, the shape of the channel boundaries was updated with dynamic data using a Bayesian hierarchical model in which the number of points representing the channel boundaries is assumed unknown. Statistical emulators on a large-scale spatial field were introduced to avoid the expensive likelihood calculation, which contains the forward simulator, at each iteration of the MCMC step. To build the emulator, the original spatial field was represented by a low-dimensional parameterization using the DCT, and the Bayesian approach to multivariate adaptive regression splines (BMARS) was used to emulate the simulator. Various numerical results are presented for both simulated and real data.
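A minimal sketch of the dimension-reduction-plus-MCMC idea: a 1-D spatial field is parameterized by a few DCT coefficients and sampled with random-walk Metropolis against noisy observations. The forward model here is the identity (observing the field directly), a hypothetical stand-in for an expensive simulator; the two-stage screening, reversible jumps, and BMARS emulation are omitted.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(4)
m, k = 64, 8                  # field resolution, retained DCT modes

def field_from_coeffs(c):
    """Reconstruct the spatial field from the k leading DCT coefficients."""
    full = np.zeros(m)
    full[:k] = c
    return idct(full, norm="ortho")

# Synthetic "truth" and noisy observations (identity forward model).
c_true = rng.normal(0, 1, k) / (1 + np.arange(k))   # decaying spectrum
data = field_from_coeffs(c_true) + rng.normal(0, 0.1, m)

def log_post(c, sigma=0.1):
    """Gaussian likelihood plus a standard normal prior on coefficients."""
    misfit = data - field_from_coeffs(c)
    return -0.5 * np.sum(misfit**2) / sigma**2 - 0.5 * np.sum(c**2)

# Random-walk Metropolis on the k coefficients.
c, lp = np.zeros(k), log_post(np.zeros(k))
samples = []
for it in range(5000):
    prop = c + 0.05 * rng.normal(size=k)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject step
        c, lp = prop, lp_prop
    samples.append(c.copy())

post_mean = field_from_coeffs(np.mean(samples[1000:], axis=0))
err = np.linalg.norm(post_mean - field_from_coeffs(c_true))
print(f"Posterior-mean field error: {err:.3f}")
```

In the two-stage variant described above, a cheap surrogate for log_post would filter proposals before the full forward model is ever run, which is where most of the savings come from.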
6

An osteometric evaluation of age and sex differences in the long bones of South African children from the Western Cape

Stull, Kyra Elizabeth January 2013 (has links)
The main goal of a forensic anthropological analysis of unidentified human remains is to establish an accurate biological profile. The largest obstacle in the creation or validation of techniques specific to subadults is the lack of large, modern samples. Techniques created for subadults were mainly derived from antiquated North American or European samples and are thus inapplicable to a modern South African population, as they lack diversity and ignore secular trends in modern children. This research provides accurate and reliable methods to estimate the age and sex of South African subadults aged birth to 12 years from long bone lengths and breadths, as no appropriate techniques exist. Standard postcraniometric variables (n = 18) were collected from six long bones on 1380 (males = 804, females = 506) Lodox Statscan-generated radiographic images housed at the Forensic Pathology Service, Salt River, and the Red Cross War Memorial Children's Hospital in Cape Town, South Africa. Measurement definitions were derived from and/or follow studies in fetal and subadult osteology and longitudinal growth studies. The radiographic images were generated between 2007 and 2012; the majority of children (70%) were born after 2000 and thus reflect the modern population. Because basis splines and multivariate adaptive regression splines (MARS) are nonparametric, the 95% prediction intervals associated with each age-at-death model were calculated with cross-validation. Numerous classification methods were employed, namely linear, quadratic, and flexible discriminant analysis, logistic regression, naïve Bayes, and random forests, to identify the method that consistently yielded the lowest error rates. Because some of the multivariate subsets had small sample sizes, the classification accuracies were bootstrapped to validate the results. Both univariate and multivariate models were employed in the age and sex estimation analyses. Standard errors for the age estimation models were smaller in most of the multivariate models, with the exception of the univariate humerus, femur, and tibia diaphyseal lengths. Univariate models provide narrower age estimates at the younger ages, but multivariate models provide narrower age estimates at the older ages. Diaphyseal lengths did not demonstrate significant sex differences at any age, but diaphyseal breadths demonstrated significant sex differences throughout the majority of the ages. Classification methods utilizing multivariate subsets achieved the highest accuracies (81% to 90%), which offers practical applicability in forensic anthropology. Whereas logistic regression yielded the highest classification accuracies for univariate models, flexible discriminant analysis yielded the highest accuracies for multivariate models. This study is the first to successfully estimate subadult age and sex using an extensive number of measurements, univariate and multivariate models, and robust statistical analyses. Its success is directly related to the large, modern sample, which captured a wider range of human variation than previously collected for subadult diaphyseal dimensions. / Thesis (PhD)--University of Pretoria, 2013.
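A sketch of the classifier-comparison step on synthetic long-bone measurements. Only the methods named in the abstract that have direct scikit-learn counterparts are compared (flexible discriminant analysis has none); the data-generating process, with a small sex difference in breadths and none in lengths to mirror the abstract's findings, is hypothetical.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 600
sex = rng.integers(0, 2, n)               # 0 = female, 1 = male
# Hypothetical measurements in mm: diaphyseal length carries no sex
# signal, breadths carry a small one.
length = rng.normal(150, 15, n)
breadth1 = rng.normal(11 + 0.6 * sex, 1.0, n)
breadth2 = rng.normal(14 + 0.5 * sex, 1.2, n)
X = np.column_stack([length, breadth1, breadth2])

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "naive Bayes": GaussianNB(),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, sex, cv=10)   # 10-fold CV accuracy
    print(f"{name}: {acc.mean():.2%} ± {acc.std():.2%}")
```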
