21

Process Monitoring and Control of Advanced Manufacturing based on Physics-Assisted Machine Learning

Chung, Jihoon 05 July 2023 (has links)
With the advancement of equipment and the development of technology, manufacturing processes are becoming increasingly advanced, drawing on innovative technologies such as robotics, artificial intelligence, and autonomous systems. Additive manufacturing (AM), also known as 3D printing, is a representative advanced manufacturing technology that creates 3D geometries layer by layer from various types of materials. As these processes advance, however, the demands placed on quality assurance grow as well. The objective of this dissertation is therefore to propose innovative methodologies for process monitoring and control that achieve quality assurance in advanced manufacturing. Advances in sensor technologies and computational power provide abundant process data, offering the opportunity to achieve effective quality assurance through machine learning; exploring the connections between sensor data and process quality with machine learning methodologies is therefore attractive. Although this direction is promising, constraints and complex process dynamics in the actual process prevent existing machine learning methods from achieving quality assurance. To address these challenges, this dissertation proposes several machine learning approaches assisted by physics knowledge obtained from the process. The approaches are validated on multiple manufacturing processes, including AM and multistage assembly. Specifically, three new methodologies are proposed and developed:
- To detect process anomalies from imbalanced process data, caused by differing rates of occurrence of the process states, a new Generative Adversarial Network (GAN)-based method is proposed. The method jointly optimizes the GAN and a classifier to augment realistic, state-distinguishable images and thereby balance the data. Specifically, it uses the knowledge and features of normal process data to generate effective abnormal process data. The benefits of the approach are confirmed on both polymer AM and metal AM processes.
- To diagnose process faults with a limited number of sensors, a consequence of physical constraints in the multistage assembly process, a novel sparse Bayesian learning method is proposed. The method rests on the practical assumption that only a few process faults are likely to occur at once (sparsity). In addition, the temporal correlation of process faults and prior knowledge about them are incorporated through the Bayesian framework. With the proposed method, process faults can be accurately identified from limited sensors.
- To mitigate new defects that arise online during printing, owing to the complex process dynamics of the AM process, a novel Reinforcement Learning (RL)-based algorithm is proposed. The method learns machine-parameter adjustments that mitigate new defects during printing, transferring knowledge learned from various sources in the AM process to RL. With a theoretical guarantee, it therefore learns the mitigation strategy from fewer training samples than traditional RL.
By overcoming these process challenges, the proposed methodologies successfully achieve quality assurance in advanced manufacturing. Furthermore, because the methods are not tailored to any particular process, they can readily be applied to other domains, such as healthcare systems. / Doctor of Philosophy / The development of equipment and technologies has led to advanced manufacturing processes, and with it, quality assurance in manufacturing has become a critical issue. The objective of this dissertation is therefore to accomplish quality assurance by developing advanced machine learning approaches. Several machine learning methodologies that incorporate physics knowledge from the process are proposed; they overcome the constraints and complex process dynamics of the actual process that degrade the performance of existing machine learning methods. To validate their effectiveness, various advanced manufacturing processes are used, including additive manufacturing and multistage assembly. The proposed methodologies deliver superior results for achieving quality assurance across a range of scenarios compared with existing state-of-the-art machine learning methods. The applications of this work are not limited to manufacturing, so the proposed machine learning approaches can be further extended to other areas, such as healthcare systems.
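Fault diagnosis with limited sensors, as in the second methodology above, is commonly posed as a sparse estimation problem: sensor readings y relate to station-level fault magnitudes x through a linear fault-propagation matrix, and only a few entries of x are expected to be nonzero. The sketch below illustrates that generic formulation with a plain iterative soft-thresholding (lasso-style) solver rather than the dissertation's sparse Bayesian formulation; the matrix, noise level, and regularization weight are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative fault-propagation model: y = Phi @ x + noise,
# with fewer sensors (rows) than potential fault sources (columns).
n_sensors, n_faults = 8, 20
Phi = rng.standard_normal((n_sensors, n_faults))
x_true = np.zeros(n_faults)
x_true[[3, 11]] = [1.5, -2.0]              # only two active faults (sparse)
y = Phi @ x_true + 0.05 * rng.standard_normal(n_sensors)

# Iterative soft-thresholding (ISTA) for the lasso objective
# 0.5 * ||y - Phi x||^2 + lam * ||x||_1
lam = 0.1
step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # 1 / Lipschitz constant of the gradient
x = np.zeros(n_faults)
for _ in range(500):
    grad = Phi.T @ (Phi @ x - y)
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("estimated fault indices:", np.nonzero(np.abs(x) > 0.1)[0])
```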
22

Essays on the Term Structure of Interest Rates and Long Run Variance of Stock Returns

Wu, Ting 15 September 2010 (has links)
No description available.
23

High-Dimensional Generative Models for 3D Perception

Chen, Cong 21 June 2021 (has links)
Modern robotics and automation systems require high-level reasoning capability to represent, identify, and interpret three-dimensional data of the real world. Understanding the world's geometric structure from visual data is known as 3D perception. The need to analyze irregular and complex 3D data has driven the development of high-dimensional frameworks for data learning. Here, we design several sparse learning-based approaches for high-dimensional data that effectively tackle multiple perception problems, including data filtering, data recovery, and data retrieval. The frameworks offer generative solutions for analyzing complex and irregular data structures without prior knowledge of the data. The first part of the dissertation proposes a novel method that simultaneously filters point cloud noise and outliers and completes missing data, using a unified framework consisting of a novel tensor data representation, an adaptive feature encoder, and a generative Bayesian network. The next part proposes a novel multi-level generative chaotic Recurrent Neural Network (RNN) with a sparse tensor structure for image restoration. The last part of the dissertation addresses detection followed by localization, where we extract features from sparse tensors for data retrieval. / Doctor of Philosophy / The development of automation systems and robotics has brought the modern world unrivaled affluence and convenience. However, current automated tasks are mainly simple repetitive motions; tasks that require more advanced visual cognition remain an unsolved problem for automation. Many high-level cognition-based tasks require accurate visual perception of the environment and of dynamic objects from the data received by the optical sensor. The capability to represent, identify, and interpret complex visual data in order to understand the geometric structure of the world is 3D perception. To better tackle existing 3D perception challenges, this dissertation proposes a set of generative learning-based frameworks on sparse tensor data for various high-dimensional robotics perception applications: underwater point cloud filtering, image restoration, deformation detection, and localization. Underwater point cloud data is relevant for many applications, such as environmental monitoring or geological exploration, but the data collected with sonar sensors are subject to several kinds of defects, including holes, noisy measurements, and outliers. In the first chapter, we propose a generative model for point cloud data recovery using Variational Bayesian (VB) sparse tensor factorization methods to tackle these three defects simultaneously. In the second part of the dissertation, we propose an image restoration technique to handle missing data, which is essential for many perception applications; an efficient generative chaotic RNN framework is introduced for recovering the sparse tensor from a single corrupted image for various types of missing data. In the last chapter, a multi-level CNN for high-dimensional tensor feature extraction for underwater vehicle localization is proposed.
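As a greatly simplified 2D analogue of the data-recovery idea above (not the dissertation's VB sparse tensor factorization), the sketch below fills missing entries of a low-rank matrix by repeatedly replacing them with values from a truncated-SVD reconstruction; the rank, matrix size, and missing fraction are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic low-rank "data" with a random missing-entry mask.
m, n, rank = 40, 30, 3
X_true = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n))
mask = rng.random((m, n)) < 0.7            # True where entries are observed

# Iterative hard-impute: alternate a truncated-SVD reconstruction
# with re-insertion of the observed entries.
X = np.where(mask, X_true, 0.0)
for _ in range(100):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    X = np.where(mask, X_true, X_low)      # keep observed values, fill missing ones

err = np.linalg.norm((X - X_true)[~mask]) / np.linalg.norm(X_true[~mask])
print(f"relative error on missing entries: {err:.3e}")
```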
24

Resource Allocation Decision-Making in Sequential Adaptive Clinical Trials

Rojas Cordova, Alba Claudia 19 June 2017 (has links)
Adaptive clinical trials for new drugs or treatment options promise substantial benefits to both the pharmaceutical industry and patients, but complicate resource allocation decisions. In this dissertation, we focus on sequential adaptive clinical trials with binary response, which allow for early termination of drug testing for benefit or futility at interim analysis points. The option to stop the trial early enables the trial sponsor to mitigate investment risks on ineffective drugs and to shorten the development timeline of effective drugs, hence reducing expenditures and expediting patient access to these new therapies. In this setting, decision makers need to determine a testing schedule, or the number of patients to recruit at each interim analysis point, and stopping criteria that inform their decision to continue or stop the trial, considering performance measures that include drug misclassification risk, time-to-market, and expected profit. In the first manuscript, we model current practices of sequential adaptive trials so as to quantify the magnitude of drug misclassification risk. Towards this end, we build a simulation model to realistically represent the current decision-making process, including the utilization of the triangular test, a widely implemented sequential methodology. We find that current practices lead to a high risk of incorrectly terminating the development of an effective drug, and thus to unrecoverable expenses for the sponsor and unfulfilled patient needs. In the second manuscript, we study the sequential resource allocation decision, in terms of a testing schedule and stopping criteria, so as to quantify the impact of interim analyses on the aforementioned performance measures. Towards this end, we build a stochastic dynamic programming model, integrated with a Bayesian learning framework for updating the drug’s estimated efficacy. The resource allocation decision is characterized by endogenous uncertainty and a trade-off between the incentive to establish that the drug is effective early on (exploitation), due to a time-decreasing market revenue, and the benefit of collecting some information on the drug’s efficacy prior to committing a large budget (exploration). We derive important structural properties of an optimal resource allocation strategy, perform a numerical study based on realistic data, and show that sequential adaptive trials with interim analyses substantially outperform traditional trials. Finally, the third manuscript integrates the first two models and studies the benefits of an optimal resource allocation decision over current practices. Our findings indicate that our optimal testing schedules outperform different types of fixed testing schedules under both perfect and imperfect information. / Ph. D.
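For a binary-response trial, Bayesian updating of the drug's estimated efficacy at each interim analysis typically reduces to conjugate Beta-Binomial updating. The sketch below illustrates that update together with simple posterior-probability stopping thresholds; the prior, cohort sizes, target response rate, and thresholds are illustrative assumptions, not values from the dissertation.

```python
from scipy.stats import beta

# Beta(a, b) prior on the drug's response probability.
a, b = 1.0, 1.0                            # uniform prior (assumed)
target = 0.30                              # minimally interesting response rate (assumed)
stop_benefit, stop_futility = 0.95, 0.05   # posterior-probability thresholds (assumed)

# Hypothetical interim cohorts: (patients enrolled, responders observed).
cohorts = [(20, 8), (20, 7), (20, 9)]

for i, (n, r) in enumerate(cohorts, start=1):
    a, b = a + r, b + (n - r)              # conjugate posterior update
    p_above = 1.0 - beta.cdf(target, a, b) # P(response rate > target | data so far)
    print(f"interim {i}: posterior Beta({a:.0f},{b:.0f}), "
          f"P(p > {target}) = {p_above:.3f}")
    if p_above > stop_benefit:
        print("stop early for benefit"); break
    if p_above < stop_futility:
        print("stop early for futility"); break
```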
25

Named Entity Recognition In Turkish With Bayesian Learning And Hybrid Approaches

Yavuz, Sermet Reha 01 December 2011 (has links) (PDF)
Information Extraction (IE) is the process of extracting structured and important pieces of information from a set of unstructured natural-language text documents. The final goal of structured information extraction is to populate a database and provide effective access to the data. Our study focuses on named entity recognition (NER), an important subtask of IE. NER deals with the extraction of named entities such as person, location, and organization names, temporal expressions (dates and times), and numerical expressions (money and percentages). NER research on Turkish is known to be rare. There are rule-based, learning-based, and hybrid systems for NER on Turkish texts; learning approaches used for NER in Turkish include conditional random fields (CRF), rote learning, and rule extraction and generalization. In this thesis, we propose a learning-based named entity recognizer for Turkish texts that employs a modified version of Bayesian learning as the learning scheme. To the best of our knowledge, this is the first learning-based system that uses a Bayesian approach for NER in Turkish. Several features (such as token length, capitalization, and lexical meaning) are used in the system to examine the effects of different features on the NER process. We also propose a hybrid system in which the Bayesian learning-based system is combined with a rule-based recognition system. There are two versions of the hybrid system, in which the output of the rule-based recognizer is utilized in different phases. We observed an increase in F-measure for both hybrid versions: with partial scoring active, the hybrid system reached an F-measure of 91.44%, compared with 87.43% for the rule-based system and 88.41% for the learning-based system. The hybrid system could be improved in the future by utilizing the rule-based and learning-based components differently, by using different learning approaches and combining them with the existing hybrid system, or by forming the hybrid system with a completely new approach.
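To illustrate the Bayesian scoring idea behind such token-level NER, here is a toy Naive Bayes classifier over simple features such as capitalization and a token-length bucket; this is a minimal sketch with invented training data, not the thesis's actual feature set, smoothing scheme, or modified Bayesian learning algorithm.

```python
from collections import defaultdict
import math

# Toy labeled tokens: (capitalization, length-bucket) -> entity / other.
train = [(("cap", "long"), "entity"), (("cap", "short"), "entity"),
         (("low", "long"), "other"),  (("low", "short"), "other"),
         (("cap", "long"), "entity"), (("low", "long"), "other")]

# Count class priors and per-feature likelihoods with add-one smoothing.
class_counts = defaultdict(int)
feat_counts = defaultdict(lambda: defaultdict(int))
for feats, label in train:
    class_counts[label] += 1
    for f in feats:
        feat_counts[label][f] += 1

def classify(feats):
    total = sum(class_counts.values())
    scores = {}
    for label, c in class_counts.items():
        logp = math.log(c / total)                         # class prior
        for f in feats:
            # add-one smoothing; each feature slot has two possible values
            logp += math.log((feat_counts[label][f] + 1) / (c + 2))
        scores[label] = logp
    return max(scores, key=scores.get)

print(classify(("cap", "long")))   # -> "entity" on this toy data
```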
26

Aortic valve analysis and area prediction using Bayesian modeling

Ghotikar, Miheer S 01 June 2005 (has links)
Aortic valve stenosis affects approximately 5 out of every 10,000 people in the United States. [3] This disorder decreases the aortic valve opening area, increasing resistance to blood flow. Detecting early stages of valve malfunction is an important area of research, enabling new treatments and strategies to delay degenerative progression. Analysis of the relationship between valve properties and hemodynamic factors is critical to develop and validate these strategies. Porcine aortic valves are anatomically analogous to human aortic valves, and fixation agents modify the valves in a manner that mimics the increased leaflet stiffness of early degeneration. In this study, porcine valves treated with glutaraldehyde, a cross-linking agent, and ethanol, a dehydrating agent, were used to alter leaflet material properties. The hydraulic performance of ethanol- and glutaraldehyde-treated valves was compared with that of fresh valves using a programmable pulse duplicator that could simulate physiological conditions. Hydraulic conditions in the pulse duplicator were modified by varying mean flow rate and mean arterial pressure. Pressure drops across the aortic valve, flow rate, and back pressure (mean arterial pressure) were recorded at successive instants of time. Corresponding values of the pressure gradient were measured, while the aortic valve opening area was obtained from photographic data. The effects of glutaraldehyde cross-linking and ethanol dehydration on the aortic valve area were analyzed for different hydraulic conditions emulating hemodynamic physiological conditions, and it was observed that glutaraldehyde and ethanol fixation change aortic valve opening and closing patterns. Next, the relations between material properties, experimental conditions, and hydraulic measures of valve performance were studied using a Bayesian model approach. The primary hypothesis tested in this study was that a Bayesian network could be used to predict dynamic changes in the aortic valve area given the hemodynamic conditions. A Bayesian network encodes probabilistic relationships among variables of interest and can also represent causal relationships between temporal antecedents and outcomes. A learning Bayesian network was constructed; directed acyclic graphs were drawn in GeNIe 2.0 using an information-theoretic dependency algorithm. Mutual information was calculated between every pair of parameters, and conditional probability tables and cut-sets were obtained from the data using Matlab. A Bayesian model was built to predict dynamic values of the opening and closing area for fresh, ethanol-fixed, and glutaraldehyde-fixed aortic valves under a set of hemodynamic conditions, with separate models for the opening and closing cycles. The models predicted the aortic valve area for fresh, ethanol-fixed, and glutaraldehyde-fixed valves. From the results obtained, it can be concluded that the Bayesian network successfully models the performance of porcine valves in a pulse duplicator. Further work would include building the Bayesian network with additional parameters and patient data to predict the aortic valve area of patients with progressive stenosis, with the important feature of predicting valve degeneration based on valve opening or closing patterns.
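The dependency measure used to draw edges in such information-theoretic structure learning is the mutual information between (discretized) variables. A minimal sketch of that computation follows; the variable names and simulated samples are invented for illustration and are not the study's measurements.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Estimate I(X;Y) in nats from paired samples via a 2D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)        # marginal of X
    py = pxy.sum(axis=0, keepdims=True)        # marginal of Y
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
pressure_drop = rng.normal(size=2000)
valve_area = 0.8 * pressure_drop + 0.6 * rng.normal(size=2000)   # correlated variable
unrelated = rng.normal(size=2000)                                # independent variable

print("MI(pressure_drop, valve_area):", round(mutual_information(pressure_drop, valve_area), 3))
print("MI(pressure_drop, unrelated):  ", round(mutual_information(pressure_drop, unrelated), 3))
```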
27

Particle filters and Markov chains for learning of dynamical systems

Lindsten, Fredrik January 2013 (has links)
Sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) methods provide computational tools for systematic inference and learning in complex dynamical systems, such as nonlinear and non-Gaussian state-space models. This thesis builds upon several methodological advances within these classes of Monte Carlo methods. Particular emphasis is placed on the combination of SMC and MCMC in so-called particle MCMC algorithms. These algorithms rely on SMC for generating samples of the often highly autocorrelated state trajectory. A specific particle MCMC algorithm, referred to as particle Gibbs with ancestor sampling (PGAS), is suggested. By making use of backward sampling ideas, albeit implemented in a forward-only fashion, PGAS enjoys good mixing even when using seemingly few particles in the underlying SMC sampler. This results in a computationally competitive particle MCMC algorithm. As illustrated in this thesis, PGAS is a useful tool for both Bayesian and frequentist parameter inference as well as for state smoothing. The PGAS sampler is successfully applied to the classical problem of Wiener system identification, and it is also used for inference in the challenging class of non-Markovian latent variable models. Many nonlinear models encountered in practice contain some tractable substructure. As a second problem considered in this thesis, we develop Monte Carlo methods capable of exploiting such substructures to obtain more accurate estimators than would otherwise be possible. For the filtering problem, this can be done using the well-known Rao-Blackwellized particle filter (RBPF). The RBPF is analysed in terms of asymptotic variance, resulting in an expression for the performance gain offered by Rao-Blackwellization. Furthermore, a Rao-Blackwellized particle smoother is derived, capable of addressing the smoothing problem in so-called mixed linear/nonlinear state-space models. The idea of Rao-Blackwellization is also used to develop an online algorithm for Bayesian parameter inference in nonlinear state-space models with affine parameter dependencies. / CNDM / CADICS
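For background on the SMC building block that these particle MCMC methods rest on, the following is a minimal bootstrap particle filter for a classic nonlinear benchmark state-space model. This is the textbook algorithm, not the PGAS sampler described above, and the model and noise parameters are the usual illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Classic nonlinear benchmark state-space model.
def f(x, t):   # state transition mean
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * t)

def h(x):      # observation mean
    return x**2 / 20.0

T, q, r = 100, np.sqrt(10.0), 1.0    # horizon, process / measurement noise std

# Simulate a trajectory and observations from the model.
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = f(x[t-1], t) + q * rng.standard_normal()
    y[t] = h(x[t]) + r * rng.standard_normal()

# Bootstrap particle filter: propagate, weight, estimate, resample.
N = 500
particles = rng.standard_normal(N)
means = np.zeros(T)
for t in range(1, T):
    particles = f(particles, t) + q * rng.standard_normal(N)   # propagate
    logw = -0.5 * ((y[t] - h(particles)) / r) ** 2             # Gaussian log-likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()                # normalized weights
    means[t] = np.sum(w * particles)                           # filtered mean estimate
    particles = particles[rng.choice(N, size=N, p=w)]          # multinomial resampling

print("RMSE of filtered mean:", np.sqrt(np.mean((means[1:] - x[1:]) ** 2)))
```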
28

Hierarchical Bayesian Learning Approaches for Different Labeling Cases

Manandhar, Achut January 2015 (has links)
The goal of a machine learning problem is to learn useful patterns from observations so that appropriate inference can be made from new observations as they become available. Based on whether labels are available for training data, the vast majority of machine learning approaches can be broadly categorized into supervised or unsupervised learning. In supervised learning, when observations are available as labeled feature vectors, the learning process is a well-understood problem. For many applications, however, standard supervised learning becomes complicated because the labels for observations are unavailable as labeled feature vectors. For example, in a ground penetrating radar (GPR) based landmine detection problem, the alarm locations are known only as 2D coordinates on the earth's surface, not for individual target depths. Typically, in order to apply computer vision techniques to the GPR data, it is convenient to represent the GPR data as a 2D image. Since a large portion of the image does not contain useful information pertaining to the target, the image is typically subdivided further into subimages along depth. The subimages at a particular alarm location can be considered a set of observations, where the label is available only for the entire set, not for individual observations along depth. In the absence of individual observation labels, for the purposes of training standard supervised learning approaches, observations both above and below the target are labeled as targets despite substantial differences in their characteristics. As a result, the label uncertainty with depth complicates parameter inference in standard supervised learning approaches, potentially degrading their performance. In this work, we develop learning algorithms for three such specific scenarios, where: (1) labels are available only for sets of independent and identically distributed (i.i.d.) observations, (2) labels are available only for sets of sequential observations, and (3) continuous correlated multiple labels are available for spatio-temporal observations. For each of these scenarios, we propose a modification of a traditional learning approach to improve its predictive accuracy. The first two algorithms are based on a set-based framework known as multiple instance learning (MIL), whereas the third is based on a structured output-associative regression (SOAR) framework. The MIL approaches are motivated by the landmine detection problem using GPR data, where the training data are typically available as labeled sets of observations or sets of sequences. The SOAR learning approach is instead motivated by the multi-dimensional human emotion label prediction problem using audio-visual data, where the training data are available in the form of multiple continuous correlated labels representing complex human emotions. In both of these applications, the unavailability of training data as labeled feature vectors motivates the development of new learning approaches that are more appropriate for modeling the data.
A large majority of the existing MIL approaches require computationally expensive parameter optimization, do not generalize well to time-series data, and are incapable of online learning. To overcome these limitations, for sets of observations, this work develops a nonparametric Bayesian approach to learning in MIL scenarios based on Dirichlet process mixture models. The nonparametric nature of the model and the use of non-informative priors remove the need for cross-validation-based optimization, while variational Bayesian inference allows for rapid parameter learning. The resulting approach is highly generalizable and also capable of online learning. For sets of sequences, this work integrates hidden Markov models (HMMs) into an MIL framework and develops a new approach called the multiple instance hidden Markov model. The model parameters are inferred using variational Bayes, making the model tractable and computationally efficient; the resulting approach is highly generalizable and also capable of online learning. Similarly, most of the existing approaches for modeling multiple continuous correlated emotion labels do not model the spatio-temporal correlation among the labels, and the few approaches that do model the correlation fail to predict the multiple emotion labels simultaneously, introducing latency during testing and potentially compromising the effectiveness of the approach in real-time scenarios. This work integrates the output-associative relevance vector machine (OARVM) approach with the multivariate relevance vector machine (MVRVM) approach to predict multiple emotion labels simultaneously. The resulting approach performs competitively with existing approaches while reducing prediction time during testing, and the sparse Bayesian inference allows for rapid parameter learning. Experimental results on several synthetic datasets, benchmark datasets, GPR-based landmine detection datasets, and human emotion recognition datasets show that our proposed approaches perform comparably to or better than existing approaches. / Dissertation
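The core assumption behind such set-based (MIL) models is that a bag of observations is positive if at least one of its instances is positive, so bag-level scores are obtained by aggregating instance-level scores, for example with a max or noisy-OR rule. A minimal sketch of that aggregation follows; the per-depth instance scores are arbitrary placeholders, not output of the Dirichlet-process or HMM models proposed in the dissertation.

```python
import numpy as np

def bag_probability(instance_probs, rule="noisy-or"):
    """Aggregate per-instance positive probabilities into a bag-level probability."""
    p = np.asarray(instance_probs)
    if rule == "max":
        return float(p.max())
    # noisy-OR: the bag is positive unless every instance is negative
    return float(1.0 - np.prod(1.0 - p))

# Hypothetical depth-wise instance scores at two alarm locations.
target_bag = [0.05, 0.10, 0.85, 0.20]    # one depth bin looks like a target
clutter_bag = [0.05, 0.08, 0.12, 0.10]   # no depth bin looks like a target

for name, bag in [("target", target_bag), ("clutter", clutter_bag)]:
    print(name, round(bag_probability(bag), 3), round(bag_probability(bag, "max"), 2))
```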
29

Bayesian Probabilistic Reasoning Applied to Mathematical Epidemiology for Predictive Spatiotemporal Analysis of Infectious Diseases

Abbas, Kaja Moinudeen 05 1900 (has links)
Probabilistic reasoning under uncertainty is well suited to the analysis of disease dynamics. The stochastic nature of disease progression is modeled by applying the principles of Bayesian learning. Bayesian learning predicts disease progression, including prevalence and incidence, for a given geographic region and demographic composition. Public health resources, prioritized by the risk levels of the population, can then efficiently minimize disease spread and curtail the epidemic at the earliest opportunity. A Bayesian network representing an outbreak of influenza and pneumonia in one geographic region is ported to a new region with a different demographic composition; upon analysis, the corresponding prevalence of influenza and pneumonia among the different demographic subgroups is inferred for the new region. Bayesian reasoning coupled with a disease timeline is used to reverse-engineer an influenza outbreak for a given geographic and demographic setting, and the temporal flow of the epidemic among the different sections of the population is analyzed to identify the corresponding risk levels. Compared with spread vaccination, prioritizing the limited vaccination resources to the higher-risk groups results in relatively lower influenza prevalence. HIV incidence in Texas from 1989 to 2002 is analyzed using demographic-based epidemic curves. Dynamic Bayesian networks are integrated with probability distributions of HIV surveillance data, coupled with census population data, to estimate the proportion of HIV incidence among the different demographic subgroups; this demographic-based risk analysis reveals a varied spectrum of HIV risk among the subgroups. A methodology using hidden Markov models is introduced that enables investigation of the impact of social behavioral interactions on the incidence and prevalence of infectious diseases. The methodology is presented in the context of simulated influenza outbreak data. Probabilistic reasoning analysis enhances the understanding of disease progression and identifies the critical points for surveillance, control, and prevention. Public health resources, prioritized by the risk levels of the population, can thus efficiently minimize disease spread and curtail the epidemic at the earliest opportunity.
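As a minimal sketch of the kind of demographic-conditioned inference such a Bayesian network performs, here is a toy two-node network (AgeGroup -> Influenza) queried by enumeration; the demographic split and conditional probabilities are invented for illustration and are not surveillance figures.

```python
# Toy network: AgeGroup -> Influenza, queried by enumeration.
p_age = {"child": 0.25, "adult": 0.60, "senior": 0.15}           # P(AgeGroup), assumed
p_flu_given_age = {"child": 0.12, "adult": 0.06, "senior": 0.10}  # P(Flu | AgeGroup), assumed

# Marginal prevalence: P(Flu) = sum_a P(Flu | a) * P(a)
p_flu = sum(p_flu_given_age[a] * p_age[a] for a in p_age)

# Posterior over age group given an influenza case: P(a | Flu)
posterior = {a: p_flu_given_age[a] * p_age[a] / p_flu for a in p_age}

print(f"P(Flu) = {p_flu:.4f}")
for a, p in posterior.items():
    print(f"P({a} | Flu) = {p:.3f}")
```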
30

False Alarm Reduction in Maritime Surveillance

Bergenholtz, Erik January 2016 (has links)
Context. A large portion of all transportation in the world consists of voyages over the sea. Systems such as the Automatic Identification System (AIS) have been developed to aid in the surveillance of maritime traffic and help keep the number of accidents and illegal activities down. In recent years, considerable effort has gone into automated surveillance of maritime traffic with the purpose of finding and reporting behaviour that deviates from what is considered normal. An issue with many of the present approaches is inaccuracy and the number of false positives that follow from it. Objectives. This study continues the work presented by Woxberg and Grahn in 2015. In their work they used quadtrees to improve upon the existing tool STRAND, created by Osekowska et al. STRAND utilizes potential fields to build a model of normal behaviour from received AIS data, which can then be used to detect anomalies in the traffic. The goal of this study is to further improve the system by adding statistical analysis to reduce the number of false positives detected by Grahn and Woxberg's implementation. Method. The method for reducing false positives proposed in this thesis uses the charge in overlapping potential fields to approximate a normal distribution of the charge in the area. If a charge is too similar to that of the overlapping potential fields, the detection is dismissed as a false positive. A series of experiments was run to find out which of the methods proposed in the thesis is best suited for this application. Results. The tested methods for estimating the normal distribution of a cell in the potential field, i.e. the unbiased formula for estimating the standard deviation and a version using Kalman filtering, both find as many of the confirmed anomalies as the base implementation, i.e. 9 of 12. Furthermore, both suggested methods reduce the number of false positives by 11.5% compared with the base implementation, bringing the false positive rate down to 17.7%. There are, however, indications that the unbiased method holds more promise. Conclusion. The two proposed methods both work as intended and perform equally well. There are indications that the unbiased method may be better despite the test results, but a new, extended set of training data is needed to confirm or deny this. The two methods only work if the examined overlapping potential fields are independent of each other, which means they cannot be applied to anomalies of the positional variety. Constructing a filter for these anomalies is left for future study.
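A minimal sketch of the statistical filter described above: estimate the mean and unbiased standard deviation of the charges in the overlapping potential-field cells, and dismiss a detection whose charge is not sufficiently far from that distribution. The charge values and z-score threshold below are invented for illustration and are not values from the thesis.

```python
import math

def is_false_positive(detected_charge, overlapping_charges, z_threshold=2.0):
    """Dismiss a detection if its charge is too similar to the overlapping cells' charges."""
    n = len(overlapping_charges)
    mean = sum(overlapping_charges) / n
    # Unbiased sample standard deviation (Bessel's correction, n - 1).
    var = sum((c - mean) ** 2 for c in overlapping_charges) / (n - 1)
    std = math.sqrt(var)
    if std == 0.0:
        return detected_charge == mean
    z = abs(detected_charge - mean) / std
    return z < z_threshold    # within the normal range -> likely a false positive

overlapping = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]    # hypothetical cell charges
print(is_false_positive(4.05, overlapping))      # True  -> dismissed as normal traffic
print(is_false_positive(9.00, overlapping))      # False -> kept as an anomaly
```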
