21

A Predictive Modeling Approach for Assessing Seismic Soil Liquefaction Potential Using CPT Data

Schmidt, Jonathan Paul 01 June 2019 (has links)
Soil liquefaction, or loss of strength due to excess pore water pressures generated during dynamic loading, is a main cause of damage during earthquakes. When a soil liquefies (referred to as triggering), it may lose its ability to support overlying structures, deform vertically or laterally, or cause buoyant uplift of buried utilities. Empirical liquefaction models, used to predict liquefaction potential based upon in-situ soil index property measurements and the anticipated level of seismic loading, are the standard of practice for assessing liquefaction triggering. However, many current models do not incorporate predictor variable uncertainty or do so in a limited fashion. Additionally, past model creation and validation lacks the same rigor found in predictive modeling in other fields. This study examines the details of creating and validating an empirical liquefaction model, using the existing worldwide cone penetration test liquefaction database. Our study implements a logistic regression within a Bayesian measurement error framework to incorporate uncertainty in predictor variables and allow for a probabilistic interpretation of model parameters. Our model is built using a hierarchical approach to account for intra-event correlation in loading variables and differences in event sample sizes, mirroring the random/mixed effects models used in ground motion prediction equation development. The model is tested using an independent set of case histories from recent New Zealand earthquakes, and performance metrics are reported. We found that a Bayesian measurement error model considering two predictor variables, qc,1 and CSR, decreases model uncertainty while maintaining predictive utility for new data. Two forms of model uncertainty were considered – the spread of probabilities predicted by mean values of regression coefficients (apparent uncertainty) and the standard deviations of the predictive distributions from fully probabilistic inference.
Additionally, we found models considering friction ratio as a predictor variable performed worse than the two variable case and will require more data or informative priors to be adequately estimated.
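The contrast the abstract draws between "apparent" uncertainty and fully probabilistic prediction can be illustrated with a deliberately small sketch. This is a hypothetical example, not the dissertation's model: it fits a two-variable logistic triggering model on synthetic case histories with a random-walk Metropolis sampler (omitting the measurement-error hierarchy and event-level random effects), then compares the probability computed from posterior-mean coefficients with the mean of the full posterior predictive probability. All coefficient values and data ranges are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic case histories: tip resistance qc1 and cyclic stress ratio CSR
# (the "true" coefficients below are invented for this sketch).
n = 200
qc1 = rng.uniform(50, 200, n)
csr = rng.uniform(0.05, 0.5, n)
true_b = np.array([1.0, -0.03, 8.0])                 # intercept, qc1, CSR
logit = true_b[0] + true_b[1] * qc1 + true_b[2] * csr
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), qc1, csr])

def log_post(b):
    """Bernoulli log-likelihood plus a weak N(0, 10^2) prior on each coefficient."""
    z = X @ b
    return np.sum(y * z - np.logaddexp(0.0, z)) - 0.5 * np.sum(b ** 2) / 100.0

# Random-walk Metropolis over the three coefficients.
b, lp, samples = np.zeros(3), log_post(np.zeros(3)), []
for it in range(20000):
    prop = b + rng.normal(0.0, [0.3, 0.003, 0.8])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        b, lp = prop, lp_prop
    if it >= 5000 and it % 10 == 0:                  # burn-in, then thin
        samples.append(b.copy())
samples = np.array(samples)

# Probability of triggering for a new sounding: posterior-mean coefficients
# ("apparent" spread only) vs averaging over the whole posterior.
x_new = np.array([1.0, 100.0, 0.25])
p_apparent = 1.0 / (1.0 + np.exp(-(samples.mean(axis=0) @ x_new)))
p_full = np.mean(1.0 / (1.0 + np.exp(-(samples @ x_new))))
print(p_apparent, p_full)
```

The two printed probabilities are typically close in the middle of the data but can diverge in the tails, which is the practical difference between the two forms of uncertainty the abstract describes.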
22

Predictive Modeling and Analysis of Student Academic Performance in an Engineering Dynamics Course

Huang, Shaobo 01 December 2011 (has links)
Engineering dynamics is a fundamental sophomore-level course that is required for nearly all engineering students. It is also one of the most challenging undergraduate courses: because it requires students to have not only solid mathematical skills but also a good understanding of fundamental concepts and principles in the field, many students perform poorly or even fail. A valid model for predicting student academic performance in engineering dynamics is helpful in designing and implementing pedagogical and instructional interventions to enhance teaching and learning in this critical course. The goal of this study was to develop a validated set of mathematical models to predict student academic performance in engineering dynamics. Data were collected from a total of 323 students enrolled in ENGR 2030 Engineering Dynamics at Utah State University over a period of four semesters. Six combinations of predictor variables representing students’ prior achievement, prior domain knowledge, and learning progression were employed in the modeling efforts. The predictor variables include X1 (cumulative GPA), X2~X5 (scores of three prerequisite courses), and X6~X8 (scores of three dynamics mid-term exams). Four mathematical modeling techniques, including multiple linear regression (MLR), multilayer perceptron (MLP) network, radial basis function (RBF) network, and support vector machine (SVM), were employed to develop 24 predictive models. The average prediction accuracy and the percentage of accurate predictions were employed as two criteria to evaluate and compare the prediction accuracy of the 24 models. The results from this study show that, regardless of the modeling technique used, the models using X1~X6, X1~X7, and X1~X8 as predictor variables are always ranked as the top three best-performing models.
However, the models using X1~X6 as predictor variables are the most useful because they not only yield accurate predictions but also leave sufficient time for the instructor to implement educational interventions. The results from this study also show that RBF network models and support vector machine models have better generalizability than MLR models and MLP network models. The implications of the research findings, the limitations of this research, and future work are discussed at the end of this dissertation.
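The two evaluation criteria named in the abstract fit in a few lines of code. The definitions below are one plausible reading, not necessarily the thesis's exact formulas: average prediction accuracy as the mean of 1 minus the relative error, and percentage of accurate predictions as the share of predictions within a relative tolerance of the actual score.

```python
import numpy as np

def evaluate(predicted, actual, tol=0.10):
    """Two illustrative criteria (definitions assumed, not taken from the thesis):
    average prediction accuracy = mean of 1 - |error| / actual;
    percentage of accurate predictions = share within +/- tol of actual."""
    predicted = np.asarray(predicted, float)
    actual = np.asarray(actual, float)
    rel_err = np.abs(predicted - actual) / actual
    avg_accuracy = np.mean(1.0 - rel_err)
    pct_accurate = np.mean(rel_err <= tol)
    return avg_accuracy, pct_accurate

# Hypothetical final-exam scores and model predictions.
actual = [80, 70, 90, 60]
pred = [76, 77, 88, 50]
avg_acc, pct = evaluate(pred, actual)
print(round(avg_acc, 3), pct)   # → 0.915 0.75
```

With both criteria in hand, ranking the 24 models reduces to sorting on one criterion and breaking ties on the other.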
23

The Effects of Previous Concussions on the Physiological Complexity of Motor Output During a Continuous Isometric Visual-Motor Tracking Task

Raikes, Adam C. 01 May 2017 (has links)
The majority of clinical impairments following a concussion resolve within 7-10 days. However, there is limited clarity as to the long-term impact of this injury on neurocognitive function, motor control, and particularly the integration of these domains. While repetitive head trauma is associated with numerous neurological disorders, the link is not well described. Visual-motor tracking tasks have been used to identify differences in visual processing, error detection, and fine motor control in aging and numerous pathologies. Examining the complexity of motor output from visual-motor tracking provides insight into multiple cognitive and motor function domains, and into the fine motor control used for daily living, work, and sport. The purpose of this dissertation was, therefore, to: (1) use multiple regression to determine the extent to which concussion history and symptoms (loss of consciousness and amnesia) influence the multiscale complexity of visual-motor task performance, and (2) determine whether task performance complexity can distinguish, through logistic regression and prediction, between individuals with and without a history of concussion. In study 1, individuals with (n = 35) and without (n = 15) a history of concussion performed a visual-motor tracking task. Men and women exhibited linear decreases in task performance complexity, as well as in mid- and high-frequency task performance components, with increasing numbers of concussions. However, men and women exhibited differing patterns, as did those with and without a history of concussion-related loss of consciousness. Finally, trial-to-trial complexity variability increased with increasing numbers of concussions. Findings indicate that (1) there is a cumulative reduction in the way in which previously concussed individuals process and integrate visual information to guide behavior and (2) gender is an important consideration in concussion-related visual-motor outcomes.
In Study 2, individuals with (n = 85) and without (n = 42) a history of concussion performed a visual-motor tracking task. Linear and nonlinear measures of task performance were used to build gender-specific logistic classification models using 10-fold cross-validation. When ensuring 80% sensitivity, the best models were 75-80% accurate in predicting a history of concussion. Such discrimination has clinical value in identifying individuals who merit further evaluation and observation over time for conditions related to repetitive head traumas.
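The "ensuring 80% sensitivity" step above amounts to choosing a decision threshold from classifier scores before reading off accuracy. A minimal sketch, on hypothetical scores rather than the study's data:

```python
import numpy as np

def threshold_for_sensitivity(scores, labels, target_sens=0.80):
    """Largest decision threshold whose sensitivity (recall on the
    concussion-history class, label 1) is at least target_sens."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos = np.sort(scores[labels == 1])[::-1]     # positive scores, descending
    k = int(np.ceil(target_sens * len(pos)))     # need >= k positives above threshold
    return pos[k - 1]

def accuracy_at(scores, labels, thr):
    pred = np.asarray(scores, float) >= thr
    return np.mean(pred == np.asarray(labels, bool))

# Hypothetical logistic-model scores for 10 participants (1 = concussion history).
scores = [0.9, 0.8, 0.75, 0.7, 0.6, 0.55, 0.4, 0.35, 0.2, 0.1]
labels = [1,   1,   1,    0,   1,   0,    1,   0,    0,   0]
thr = threshold_for_sensitivity(scores, labels, 0.80)
print(thr, accuracy_at(scores, labels, thr))   # → 0.6 0.8
```

In practice the threshold would be fixed on the training folds of the 10-fold cross-validation and the accuracy reported on the held-out fold.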
24

Methods for Engineers to Understand, Predict, and Influence the Social Impacts of Engineered Products

Stevenson, Phillip Douglas 07 December 2022 (has links)
Engineered products can impact the day-to-day life of their users and other stakeholders. These impacts are often referred to as the product's social impacts. Products have been known to impact the people who use them, design them, manufacture them, and distribute them, and the communities where they exist. Currently, there are few methods that can help an engineer identify, quantify, predict, or improve a product's social impact. Some companies and organizations have tried to identify their impacts and, for example, set goals for achieving more sustainable business practices. However, engineers, in large part, do not have methods that can help improve the sustainability and social impacts of their products. Without new methods to help engineers make better product decisions, products will continue to have unanticipated negative impacts and will likely not reach their true social impact potential. Engineers working in the field of Engineering for Global Development (EGD) are especially in need of methods that can help improve the social impacts of their products. One of the purposes of creating products in EGD is to help solve problems that lead to improved quality of life for people and communities in developing countries. The research in this dissertation presents new methods developed to help engineers understand, predict, and improve the social impact of their products. Chapter 2 introduces the Product Impact Metric, a simple metric engineers can use to quantify their product's impact on improving the quality of life of impoverished individuals in developing countries. Chapter 3 introduces a method that engineers can use to create product-specific social impact metrics and models. These models are used to predict the social impacts of an expanded US-Mexico border wall on immigrants, border patrol officers, and local communities. Chapter 4 shows a method that allows engineers to create social impact models for individuals within a population.
Using data available through online databanks and census reports, the author predicts the social impact of a new semi-automated cassava peeler on farmers in the Brazilian Amazon. In Chapter 5, the author presents a method for engineers to optimize a product according to its social impact on multiple stakeholders. Inspired by existing literature on multi-stakeholder decision making, eight different optimization problem formulations are presented and demonstrated in an example with the cassava peeler. Chapter 6 presents the author's experience in co-designing a semi-automated cassava peeler with the Itacoatiara Rural Farming Cooperative. The peeler was designed and built by the author and is used as the example in Chapters 4 and 5. Finally, Chapter 7 presents the conclusions drawn from this research. Comments are made on the difficulties encountered (specifically data quality and validation), and the author suggests possible future work.
25

Utilizing blood-based biomarkers to characterize pathogenesis and predict mortality in viral hemorrhagic fevers

Strampe, Jamie 21 March 2024 (has links)
Hemorrhagic fever viruses are a major public health threat in Sub-Saharan Africa. These viruses cause symptoms ranging from non-specific fevers and body aches to severe, life-threatening bleeding, shock, and multi-organ failure. Previously discovered hemorrhagic fever viruses can cause recurrent or seasonal outbreaks, but new ones continue to emerge. In order to combat these viruses, we need to better understand the aspects of pathogenesis that lead to mortality or survival. I present an analysis of the host immune response to two hemorrhagic fever viruses, Lassa virus and Bundibugyo virus, and how the host response can be used to predict mortality in these diseases. Lassa virus (LASV) was identified over 50 years ago, but it remains understudied and has hence been denoted a “Neglected Tropical Disease”. Clinical studies and experiments were run by our collaborators in Nigeria and Germany. In all, longitudinal blood samples were collected from over two hundred Nigerian Lassa Fever patients, and the concentrations of over 60 proteins were analyzed. I processed the datasets, performed statistical testing, and created logistic regression models for each protein. This modeling allowed me to determine which proteins could be used as predictive biomarkers of mortality and the level of each protein that best stratified patients who died from those who survived. I then compared protein levels for the best biomarkers, and for other markers in the same biological pathways, with those of healthy controls and controls with other febrile illnesses (non-Lassa Fever). I examined the best biomarkers over time for their utility at later timepoints in hospitalization. Finally, I produced an application using RShiny that incorporated these and other exploratory analyses of the data, which allows users to visualize all the data we had in addition to the plots that were published.
The filovirus Bundibugyo ebolavirus (BDBV), a relative of the better-known Ebola virus (EBOV), first caused an outbreak in people fifteen years ago. Animal models are still being developed and characterized for this virus. Our collaborators in Texas experimentally infected cynomolgus macaques with BDBV and gave them post-exposure treatment with a VSV-based vaccine. These collaborators performed RNA-Seq on longitudinal samples from the infected macaques and sent me these data for analysis. I wrote pipelines to perform RNA-Seq and differential expression analyses on over 600 samples, of which I will focus on a subset here. I found differentially expressed genes for different subsets of the data, and I examined these gene lists using gene set enrichment analysis. I then generated logistic regression models to find differentially expressed genes that could predict mortality or survival. Many of these genes could accurately predict outcome at either late or early timepoints. I then used the top genes found by logistic regression to generate random forest models that could predict mortality over the entire course of disease. / 2025-03-20T00:00:00Z
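The per-marker screening described in both halves of the abstract — fit a univariate logistic model per protein or gene, then rank the markers by how well they separate fatal from non-fatal cases — can be sketched on synthetic data. Everything below (the panel, the informative marker at index 2, the rank-based AUC used for ranking) is an illustrative assumption, not the dissertation's pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic_1d(x, y, lr=0.1, steps=2000):
    """Univariate logistic regression (intercept + slope) by gradient ascent."""
    x = (x - x.mean()) / x.std()          # standardize for stable steps
    b0, b1 = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        b0 += lr * np.mean(y - p)
        b1 += lr * np.mean((y - p) * x)
    return b0, b1, x

# Synthetic biomarker panel: 5 markers, one of which (index 2) tracks mortality.
n = 120
y = rng.random(n) < 0.4                   # 1 = died, 0 = survived (synthetic)
panel = rng.normal(0, 1, (5, n))
panel[2] += 1.5 * y                       # the informative marker

aucs = []
for x in panel:
    b0, b1, xs = fit_logistic_1d(x, y.astype(float))
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * xs)))
    # Rank-based AUC: probability a random death scores above a random survivor.
    pos, neg = p[y], p[~y]
    aucs.append(np.mean(pos[:, None] > neg[None, :]))

best = int(np.argmax(aucs))
print(best)                               # → 2 (the planted marker)
```

Ranking by a separation statistic like this is one simple way to shortlist markers before examining cutoff levels or feeding the top features into a downstream model such as a random forest.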
26

A Machine Learning Approach for Data Unification and Its Application in Asset Performance Management

He, Bin 28 March 2016 (has links)
The amount of data is growing fast with the advance of data capturing and management technologies. However, data from different sources are often isolated and not ready to be analyzed together as one data set. The effort of connecting pieces of isolated data into a unified data set is time consuming and often costly in terms of cognitive load and programming time. To address this problem, we propose an approach that uses machine learning to augment human intelligence in the data unification process, especially the unification of complex categorical data values. Many aspects of useful information are extracted from supervised machine learning models and then used to amplify the intelligence of human experts in various aspects of the data unification process. An empirical study applies the proposed methodology to the field of Asset Performance Management, focusing specifically on the performance of equipment assets. The experiments show that machine learning helps experts in unification standard generation, unified value suggestion, and batch data unification. We conclude that machine learning models contain valuable information that can facilitate the data unification process. / Master of Science
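To make "unified value suggestion" concrete, here is a deliberately simple stand-in for the thesis's supervised-model-driven approach: a frequency-plus-string-similarity heuristic that proposes a canonical spelling for each messy categorical value. The equipment labels and the threshold are invented for illustration.

```python
from collections import Counter
from difflib import SequenceMatcher

def suggest_unification(values, threshold=0.75):
    """Group messy categorical values: the most frequent spelling in each
    similarity cluster becomes the suggested canonical value."""
    counts = Counter(values)
    canon = {}                     # raw value -> suggested canonical value
    anchors = []                   # canonical representatives, most frequent first
    for val, _ in counts.most_common():
        for a in anchors:
            if SequenceMatcher(None, val.lower(), a.lower()).ratio() >= threshold:
                canon[val] = a     # close enough to an existing representative
                break
        else:
            anchors.append(val)    # new cluster; this spelling is canonical
            canon[val] = val
    return canon

# Messy equipment-type labels from two hypothetical source systems.
raw = ["Centrifugal Pump", "centrifugal pump", "Centrifgal Pump",
       "Gas Turbine", "gas turbine", "Centrifugal Pump"]
mapping = suggest_unification(raw)
print(mapping["Centrifgal Pump"])   # → Centrifugal Pump
```

A human expert would review the suggested mapping rather than apply it blindly, which matches the abstract's framing of machine output amplifying, not replacing, expert judgment.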
27

Enhancing genetic programming for predictive modeling

König, Rikard January 2014 (has links)
<p>Dissertation for the degree of Doctor of Technology in Computer Science, to be publicly defended on Tuesday, 11 March 2014 at 13:15 in room M404, Högskolan i Borås (University of Borås). Opponent: Docent Niklas Lavesson, Blekinge Tekniska Högskola, Karlskrona.</p>
28

Simultaneous partitioning and modeling : a framework for learning from complex data

Deodhar, Meghana 11 October 2010 (has links)
While a single learned model is adequate for simple prediction problems, it may not be sufficient to represent heterogeneous populations that difficult classification or regression problems often involve. In such scenarios, practitioners often adopt a "divide and conquer" strategy that segments the data into relatively homogeneous groups and then builds a model for each group. This two-step procedure usually results in simpler, more interpretable and actionable models without any loss in accuracy. We consider prediction problems on bi-modal or dyadic data with covariates, e.g., predicting customer behavior across products, where the independent variables can be naturally partitioned along the modes. A pivoting operation can now result in the target variable showing up as entries in a "customer by product" data matrix. We present a model-based co-clustering framework that interleaves partitioning (clustering) along each mode and construction of prediction models to iteratively improve both cluster assignment and fit of the models. This Simultaneous CO-clustering And Learning (SCOAL) framework generalizes co-clustering and collaborative filtering to model-based co-clustering, and is shown to be better than independently clustering the data first and then building models. Our framework applies to a wide range of bi-modal and multi-modal data, and can be easily specialized to address classification and regression problems in domains like recommender systems, fraud detection and marketing. Further, we note that in several datasets not all the data is useful for the learning problem and ignoring outliers and non-informative values may lead to better models. We explore extensions of SCOAL to automatically identify and discard irrelevant data points and features while modeling, in order to improve prediction accuracy. 
Next, we leverage the multiple models provided by the SCOAL technique to address two prediction problems on dyadic data, (i) ranking predictions based on their reliability, and (ii) active learning. We also extend SCOAL to predictive modeling of multi-modal data, where one of the modes is implicitly ordered, e.g., time series data. Finally, we illustrate our implementation of a parallel version of SCOAL based on the Google Map-Reduce framework and developed on the open source Hadoop platform. We demonstrate the effectiveness of specific instances of the SCOAL framework on prediction problems through experimentation on real and synthetic data. / text
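A toy version of the alternating SCOAL loop can be sketched in a few dozen lines. This is a deliberate simplification, not the dissertation's implementation: per-block means stand in for the per-block prediction models, the data is a small synthetic "customer by product" matrix with a planted 2x2 block structure, and a few random restarts guard against poor initializations.

```python
import numpy as np

rng = np.random.default_rng(2)

def scoal_means(Z, k_rows, k_cols, iters=20, restarts=5):
    """Simplified SCOAL-style loop: alternate fitting block 'models' (means)
    and reassigning rows/columns to their best-fitting co-clusters."""
    m, n = Z.shape
    best = None
    for _ in range(restarts):
        r = rng.integers(0, k_rows, m)
        c = rng.integers(0, k_cols, n)
        for _ in range(iters):
            # Fit the block models (means) for the current co-clustering.
            B = np.zeros((k_rows, k_cols))
            for i in range(k_rows):
                for j in range(k_cols):
                    blk = Z[r == i][:, c == j]
                    B[i, j] = blk.mean() if blk.size else 0.0
            # Reassign each row, then each column, to its best-fitting cluster.
            r = np.array([np.argmin([((Z[a] - B[i, c]) ** 2).sum()
                                     for i in range(k_rows)]) for a in range(m)])
            c = np.array([np.argmin([((Z[:, b] - B[r, j]) ** 2).sum()
                                     for j in range(k_cols)]) for b in range(n)])
        err = ((Z - B[r][:, c]) ** 2).mean()
        if best is None or err < best[0]:
            best = (err, r, c, B)
    return best

# Synthetic matrix: 20 "customers" x 16 "products", planted 2x2 blocks + noise.
blocks = np.array([[1.0, 5.0], [6.0, 2.0]])
r_true = np.repeat([0, 1], 10)
c_true = np.repeat([0, 1], 8)
Z = blocks[np.ix_(r_true, c_true)] + rng.normal(0, 0.3, (20, 16))
err, r, c, B = scoal_means(Z, 2, 2)
print(round(float(err), 3))   # near the noise variance (~0.09) if the blocks were recovered
```

Replacing the block means with per-block regressions on covariates recovers the flavor of the full framework, where each co-cluster carries its own prediction model.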
29

DEVELOPING A MODEL OF CLIENT SATISFACTION WITH A REHABILITATION CONTINUUM OF CARE

Custer, Melba G. 01 January 2012 (has links)
Client satisfaction is an important outcome indicator because it measures multiple domains of the quality of healthcare and rehabilitation service delivery. It is especially important in occupational therapy because it is also client-centered. There are multiple domains of satisfaction and findings described in previous research; however, there is no single standard for measuring client satisfaction or any single working model describing the relationships among the variables influencing satisfaction. This research was designed to apply a measure of satisfaction in rehabilitation and to develop a working model of satisfaction. This study was an exploratory and predictive study using a large existing dataset to test a working logic model of client satisfaction, determine the best predictors of satisfaction, and then revise the model for future research. After developing the Satisfaction with a Continuum of Care (SCC) in a pilot study, the SCC was completed by 1104 clients from a large Midwest rehabilitation hospital. The SCC results were paired with administrative data on client demographics, functional status, and measures of the rehabilitation process. Six research questions on the predictors of satisfaction with client-centeredness and clinical quality were answered using logistic regression. Significant predictors of satisfaction were having a neurological disorder, total rehabilitation hours, and admission to rehabilitation within 15 days of onset. The most robust and consistent predictors of satisfaction in this study were aspects of functional status as measured by the Functional Independence Measure, especially improvement in overall and self-care functioning. The results of the study were consistent with some previous research and inconsistent with other findings.
The finding that improvements in functional status were highly predictive of satisfaction supports the worth that clients place on rehabilitation results including the self-care improvements focused on by occupational therapy. This study was a partnership involving occupational therapy and a rehabilitation hospital. The finding that changes in self-care function were predictive of satisfaction was intended to isolate the effects of OT. There is a need to demonstrate outcomes and link these to occupational therapy and other rehabilitation disciplines to continue to identify best practices and contribute to the rehabilitation literature.
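When logistic regression is used this way, each predictor's coefficient is usually reported as an odds ratio with a confidence interval. A minimal sketch of that conversion, using an invented coefficient and standard error (not values from this study):

```python
import math

def odds_ratio(beta, se, z=1.96):
    """Convert a logistic regression coefficient and its standard error
    into an odds ratio with a 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for "admitted within 15 days of onset".
or_, lo, hi = odds_ratio(0.62, 0.21)
print(round(or_, 2), round(lo, 2), round(hi, 2))   # → 1.86 1.23 2.81
```

An odds ratio above 1 with a confidence interval excluding 1 is what makes a predictor like early admission read as a significant predictor of satisfaction.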
30

Run-time Predictive Modeling of Power and Performance via Time-Series in High Performance Computing

Zamani, Reza 13 November 2012 (has links)
Pressing demands for lower processor power consumption at higher performance levels have placed extra attention on system efficiency. Efficient management of resources in current computing systems, given their increasing number of entities and complexity, requires accurate predictive models that can easily adapt to system and application changes. Through performance monitoring counter (PMC) events in modern processors, a vast amount of information can be obtained from the system. This thesis provides a methodology to efficiently choose such events for power modeling purposes. In addition, exploiting the time-dependence of the data measured through PMCs and multi-meters, we build predictive multivariate time-series models that estimate the run-time power consumption of a system. In particular, we find an autoregressive moving average with exogenous inputs (ARMAX) model combined with a recursive least squares (RLS) algorithm to be a good candidate for such purposes. Many of the available estimation or prediction models avoid using metrics that are affected by changes in processor frequency. This thesis proposes a method to mitigate the impact of frequency scaling on power and PMC metrics in a run-time model. This method is based on a practical Gaussian approximation: different segments of the trend of a metric, associated with different frequencies, are scaled and offset into a zero-mean, unit-variance signal, in an attempt to transform the variable-frequency trend into a weakly stationary time-series. Using this approach, we have shown that power estimation of a system using PMCs can be done in a variable-frequency environment. We extend the ARMAX-RLS model to predict the near-future power consumption and PMCs of different applications in a variable-frequency environment. The proposed method is adaptive and independent of the system and applications.
We have shown that a run-time per core or aggregate system PMC event prediction, multiple-steps ahead of time, is feasible using an ARMAX-RLS model. This is crucial for progressing from the reactive power and performance management methods to more proactive algorithms. / Thesis (Ph.D, Electrical & Computer Engineering) -- Queen's University, 2012-11-12 12:21:00.152
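The RLS half of the ARMAX-RLS combination is compact enough to sketch. The example below is an illustration, not the thesis's model: a bare ARX(1) power trace with a single synthetic PMC-like input, tracked online by recursive least squares with a forgetting factor.

```python
import numpy as np

rng = np.random.default_rng(3)

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least squares step with forgetting factor lam."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)           # gain vector
    theta = theta + k * (y - phi @ theta)   # correct by the prediction error
    P = (P - np.outer(k, Pphi)) / lam       # covariance update
    return theta, P

# Synthetic ARX(1) power trace: power depends on its own lag and one input
# standing in for a PMC signal (e.g. an activity counter; values invented).
T = 2000
u = rng.normal(0, 1, T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.7 * y[t - 1] + 0.5 * u[t] + rng.normal(0, 0.05)

theta = np.zeros(2)                         # estimates of [a1, b0]
P = np.eye(2) * 1000.0
for t in range(1, T):
    phi = np.array([y[t - 1], u[t]])
    theta, P = rls_update(theta, P, phi, y[t])

print(np.round(theta, 2))                   # close to the true [0.7, 0.5]
```

The forgetting factor is what makes the estimator adaptive: older samples are down-weighted geometrically, so the coefficients can track system and application changes at run time, and the same update naturally supports multi-step-ahead prediction by feeding estimates back through the model.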
