361

Graph-based Inference with Constraints for Object Detection and Segmentation

Ma, Tianyang January 2013 (has links)
For many fundamental problems of computer vision, adopting a graph-based framework can be straightforward and very effective. In this thesis, I propose several graph-based inference methods tailored to different computer vision applications. The thesis starts by studying contour-based object detection methods. In particular, we propose a novel framework for contour-based object detection that replaces the Hough-voting framework with dense-subgraph inference. Compared to previous work, we propose a novel shape-matching scheme suitable for partial matching of edge fragments. The shape descriptor has the same geometric units as shape context, but our shape representation is not histogram based. The key contribution is that we formulate the grouping of partial matching hypotheses into object detection hypotheses as maximum clique inference on a weighted graph. Consequently, each detection result not only identifies the location of the target object in the image, but also provides a precise location of its contours, since we transform a complete model contour to the image. Our very competitive results on the ETHZ dataset, obtained in a pure shape-based framework, demonstrate that our method achieves not only accurate object detection but also precise contour localization on cluttered backgrounds. As with the grouping of partial matches in the contour-based method, in many computer vision problems we would like to discover a certain pattern within a large amount of data. For instance, in unsupervised video object segmentation, we need to automatically identify the primary object and segment it out in every frame. We propose a novel formulation that selects object region candidates simultaneously in all frames by finding a maximum weight clique in a weighted region graph. The selected regions are expected to have high objectness scores (unary potential) as well as share similar appearance (binary potential).
Since both unary and binary potentials are unreliable, we introduce two types of mutex (mutual exclusion) constraints on regions in the same clique: intra-frame and inter-frame constraints. Both types of constraints are expressed in a single quadratic form. An efficient algorithm is applied to compute the maximum weight cliques that satisfy the constraints. We apply our method to challenging benchmark videos and obtain very competitive results that outperform state-of-the-art methods. We also show that the same maximum weight subgraph with mutex constraints formulation can be used to solve various computer vision problems, such as point matching, solving image jigsaw puzzles, and detecting objects using 3D contours. / Computer and Information Science
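The core combinatorial step above, selecting a maximum-weight set of mutually compatible nodes subject to mutex constraints, can be sketched in miniature. Below is a brute-force toy version; the thesis uses an efficient solver, and the weights, similarities, and mutex pairs here are invented for illustration:

```python
import itertools

def max_weight_clique_with_mutex(weights, sim, mutex):
    """Brute-force search for the subset of nodes maximizing total unary
    weight plus pairwise similarity, skipping any subset that contains a
    mutex (mutually exclusive) pair. Exponential in the node count, so
    only suitable as an illustration on tiny graphs."""
    n = len(weights)
    best_score, best_set = float("-inf"), ()
    for r in range(1, n + 1):
        for subset in itertools.combinations(range(n), r):
            if any((i, j) in mutex or (j, i) in mutex
                   for i, j in itertools.combinations(subset, 2)):
                continue  # violates a mutual-exclusion constraint
            score = sum(weights[i] for i in subset)       # unary potential
            score += sum(sim.get((i, j), sim.get((j, i), 0.0))
                         for i, j in itertools.combinations(subset, 2))  # binary potential
            if score > best_score:
                best_score, best_set = score, subset
    return best_set, best_score
```

With three candidate regions where regions 0 and 1 are mutually exclusive (say, overlapping candidates in the same frame), the search picks the best compatible pair instead of the highest-weight incompatible one.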
362

NONPARAMETRIC EMPIRICAL BAYES SIMULTANEOUS ESTIMATION FOR MULTIPLE VARIANCES

KWON, YEIL January 2018 (has links)
Shrinkage estimation has proven to be very useful when dealing with a large number of mean parameters. In this dissertation, we consider the problem of simultaneous estimation of multiple variances and construct a shrinkage-type, non-parametric estimator. We take the non-parametric empirical Bayes approach, starting with an arbitrary prior on the variances. Under an invariant loss function, the resultant Bayes estimator relies on the marginal cumulative distribution function of the sample variances. Replacing the marginal cdf with the empirical distribution function, we obtain a Non-parametric Empirical Bayes estimator for multiple Variances (NEBV). The proposed estimator converges to the corresponding Bayes version uniformly over a large set. Consequently, the NEBV works well in a post-selection setting. We then apply the NEBV to construct confidence intervals for mean parameters in a post-selection setting. It is shown that the intervals based on the NEBV are the shortest among all intervals which guarantee a desired coverage probability. Through real data analysis, we further show that the NEBV-based intervals lead to the smallest number of discordances, a desirable property when we are faced with the current "replication crisis". / Statistics
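To make the shrinkage idea concrete, here is a deliberately simpler parametric empirical Bayes sketch, not the NEBV estimator of the abstract: each raw sample variance is pulled toward a data-driven pooled value, with more shrinkage when the per-unit degrees of freedom are small. The prior degrees of freedom `d0` and the moment estimate of the prior variance are illustrative assumptions:

```python
import numpy as np

def shrink_variances(s2, nu, d0=4.0):
    """Parametric empirical Bayes shrinkage of per-unit sample variances
    toward a pooled prior variance. This is NOT the nonparametric NEBV
    estimator described above; it only illustrates the common structure
    of variance shrinkage: a precision-weighted compromise between each
    raw variance (nu observations' worth) and a prior variance estimated
    from all units (d0 pseudo-observations' worth)."""
    s2 = np.asarray(s2, dtype=float)
    s0_sq = s2.mean()  # crude moment estimate of the prior variance
    return (nu * s2 + d0 * s0_sq) / (nu + d0)
```

Extreme sample variances are pulled toward the center of the ensemble, which is the behavior that makes shrinkage estimators outperform the raw variances when many are estimated at once.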
363

Property Inference for Maple: An Application of Abstract Interpretation

Forrest, Stephen A. 24 September 2017 (has links)
We present a system for the inference of various static properties from source code written in the Maple programming language. We make use of an abstract interpretation framework in the design of these properties and define languages of constraints, specific to our abstract domains, which capture the desired static properties of the code. Finally, we discuss the automated generation and solution of these constraints, describe a tool for doing so, and present some results from applying this tool to several nontrivial test inputs. / Thesis / Master of Science (MSc)
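As a minimal illustration of the abstract interpretation idea, the sketch below evaluates arithmetic expressions over the classic sign domain instead of concrete values. The expression encoding and the tiny domain are invented for this example; the thesis' Maple-specific domains and constraint languages are far richer:

```python
# Abstract sign domain: negative, zero, positive, or unknown (top).
NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"

def abs_const(c):
    """Abstract a concrete constant to its sign."""
    return ZERO if c == 0 else (POS if c > 0 else NEG)

def abs_mul(a, b):
    """Abstract multiplication: sign of a product."""
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def abs_add(a, b):
    """Abstract addition: pos + neg is unknown, hence TOP."""
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    return a if a == b else TOP

def sign_of(expr, env):
    """Abstractly interpret a nested-tuple expression under an
    environment mapping variable names to abstract signs."""
    op = expr[0]
    if op == "const":
        return abs_const(expr[1])
    if op == "var":
        return env[expr[1]]
    left, right = sign_of(expr[1], env), sign_of(expr[2], env)
    return abs_add(left, right) if op == "add" else abs_mul(left, right)
```

For example, `x * x` is provably non-negative-or-positive whenever `x` has a definite sign, while `1 + y` with negative `y` is soundly reported as unknown.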
364

Machine Learning and Field Inversion approaches to Data-Driven Turbulence Modeling

Michelen Strofer, Carlos Alejandro 27 April 2021 (has links)
There is still a practical need for improved closure models for the Reynolds-averaged Navier-Stokes (RANS) equations. This dissertation explores two different approaches for using experimental data to provide improved closure for the Reynolds stress tensor field. The first approach uses machine learning to learn a general closure model from data. A novel framework is developed to train deep neural networks using experimental velocity and pressure measurements. The sensitivity of the RANS equations to the Reynolds stress, required for gradient-based training, is obtained by means of both variational and ensemble methods. The second approach is to infer the Reynolds stress field for a flow of interest from limited velocity or pressure measurements of the same flow. Here, this field inversion is done using a Monte Carlo Bayesian procedure and the focus is on improving the inference by enforcing known physical constraints on the inferred Reynolds stress field. To this end, a method for enforcing boundary conditions on the inferred field is presented. The two data-driven approaches explored and improved upon here demonstrate the potential for improved practical RANS predictions. / Doctor of Philosophy / The Reynolds-averaged Navier-Stokes (RANS) equations are widely used to simulate fluid flows in engineering applications despite their known inaccuracy in many flows of practical interest. The uncertainty in the RANS equations is known to stem from the Reynolds stress tensor, for which no universally applicable turbulence model exists. The computational cost of more accurate methods for fluid flow simulation, however, means RANS simulations will likely continue to be a major tool in engineering applications, and there is still a need for improved RANS turbulence modeling. This dissertation explores two different approaches to use available experimental data to improve RANS predictions by improving the uncertain Reynolds stress tensor field.
The first approach is using machine learning to learn a data-driven turbulence model from a set of training data. This model can then be applied to predict new flows in place of traditional turbulence models. To this end, this dissertation presents a novel framework for training deep neural networks using experimental measurements of velocity and pressure. When using velocity and pressure data, gradient-based training of the neural network requires the sensitivity of the RANS equations to the learned Reynolds stress. Two different methods, the continuous adjoint and ensemble approximation, are used to obtain the required sensitivity. The second approach explored in this dissertation is field inversion, whereby available data for a flow of interest is used to infer a Reynolds stress field that leads to improved RANS solutions for that same flow. Here, the field inversion is done via the ensemble Kalman inversion (EKI), a Monte Carlo Bayesian procedure, and the focus is on improving the inference by enforcing known physical constraints on the inferred Reynolds stress field. To this end, a method for enforcing boundary conditions on the inferred field is presented. While further development is needed, the two data-driven approaches explored and improved upon here demonstrate the potential for improved practical RANS predictions.
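The ensemble Kalman inversion (EKI) machinery named above can be sketched compactly. Below is a minimal perturbed-observation EKI step for a generic forward model; the forward map, data vector, and noise covariance in the usage are toy stand-ins, not the dissertation's RANS setup:

```python
import numpy as np

def eki_update(theta, forward, y, gamma, rng):
    """One ensemble Kalman inversion step: nudge each ensemble member
    toward parameters whose forward-model output matches the data y
    (noise covariance gamma). theta has shape (ensemble size, n_params);
    forward maps one parameter vector to one output vector."""
    g = np.array([forward(t) for t in theta])        # forward-model outputs
    dtheta = theta - theta.mean(axis=0)
    dg = g - g.mean(axis=0)
    c_tg = dtheta.T @ dg / (len(theta) - 1)          # parameter-output cross-covariance
    c_gg = dg.T @ dg / (len(theta) - 1)              # output covariance
    # Perturb the observations so the updated ensemble keeps posterior spread.
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), gamma, size=len(theta))
    gain = c_tg @ np.linalg.inv(c_gg + gamma)        # Kalman-style gain
    return theta + (y_pert - g) @ gain.T
```

On a linear toy problem the ensemble mean converges to the data-consistent parameter in a few iterations, which is the derivative-free behavior that makes ensemble methods attractive when adjoint sensitivities are unavailable.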
365

On the Value of Online Learning for Cognitive Radar Waveform Selection

Thornton III, Charles Ethridge 16 May 2023 (has links)
Modern radar systems must operate in a wide variety of time-varying conditions. These include various types of interference from neighboring systems, self-interference or clutter, and targets with fluctuating responses. It has been well-established that the quality and nature of radar measurements depend heavily on the choice of signal transmitted by the radar. In this dissertation, we discuss techniques which may be used to adapt the radar's waveform on-the-fly while making very few a priori assumptions about the physical environment. By employing tools from reinforcement learning and online learning, we present a variety of algorithms which handle practical issues of the waveform selection problem that have been left open by previous works. In general, we focus on two key challenges inherent to the waveform selection problem: sample efficiency and universality. Sample efficiency corresponds to the number of experiences a learning algorithm requires to achieve desirable performance. Universality refers to the learning algorithm's ability to achieve desirable performance across a wide range of physical environments. Specifically, we develop a contextual bandit-based approach to vastly improve the sample efficiency of learning compared to previous works. We then improve the generalization performance of this model by developing a Bayesian meta-learning technique. To handle the problem of universality, we develop a learning algorithm which is asymptotically optimal in any Markov environment having finite memory length. Finally, we compare the performance of learning-based waveform selection to fixed rule-based waveform selection strategies for the scenarios of dynamic spectrum access and multiple-target tracking. We draw conclusions as to when learning-based approaches are expected to significantly outperform rule-based strategies, as well as the converse. / Doctor of Philosophy / Modern radar systems must operate in a wide variety of time-varying conditions.
These include various types of interference from neighboring systems, self-interference or clutter, and targets with fluctuating responses. It has been well-established that the quality and nature of radar measurements depend heavily on the choice of signal transmitted by the radar. In this dissertation, we discuss techniques which may be used to adapt the radar's waveform on-the-fly while making very few explicit assumptions about the physical environment. By employing tools from reinforcement learning and online learning, we present a variety of algorithms which handle practical and theoretical issues of the waveform selection problem that have been left open by previous works. We begin by asking the questions "What is cognitive radar?" and "When should cognitive radar be used?" in order to develop a broad mathematical framework for the signal selection problem. The latter chapters then deal with the role of intelligent real-time decision-making algorithms which select favorable signals for target tracking and interference mitigation. We conclude by discussing the possible roles of cognitive radar within future wireless networks and larger autonomous systems.
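The contextual-bandit idea discussed above can be illustrated with a deliberately small sketch: an epsilon-greedy learner that picks one of K candidate waveforms given a discrete interference context and maintains a running mean reward per (context, waveform) pair. The contexts, rewards, and epsilon value are invented for illustration; the dissertation's algorithms are considerably more sophisticated:

```python
import random
from collections import defaultdict

class EpsilonGreedyWaveformBandit:
    """Minimal contextual bandit for choosing among n_arms candidate
    waveforms given a discrete context (e.g. an observed interference
    state). A toy sketch, not the thesis' actual algorithm or radar model."""
    def __init__(self, n_arms, eps=0.1, seed=0):
        self.n_arms, self.eps = n_arms, eps
        self.rng = random.Random(seed)
        self.counts = defaultdict(int)    # (context, arm) -> number of pulls
        self.values = defaultdict(float)  # (context, arm) -> running mean reward

    def select(self, context):
        if self.rng.random() < self.eps:
            return self.rng.randrange(self.n_arms)  # explore uniformly
        return max(range(self.n_arms),
                   key=lambda a: self.values[(context, a)])  # exploit best mean

    def update(self, context, arm, reward):
        key = (context, arm)
        self.counts[key] += 1
        # Incremental running-mean update of the estimated reward.
        self.values[key] += (reward - self.values[key]) / self.counts[key]
```

After enough interactions in a given context, the learner settles on the waveform with the highest observed reward there, while occasional exploration keeps it responsive if conditions change.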
366

Bayesian Methods for Mineral Processing Operations

Koermer, Scott Carl 07 June 2022 (has links)
Increases in demand have driven the development of complex processing technology for separating mineral resources from exceedingly low-grade multi-component resources. Low mineral concentrations and variable feedstocks can make separating signal from noise difficult, while high process complexity and the multi-component nature of a feedstock can make testwork, optimization, and process simulation difficult or infeasible. A prime example of such a scenario is the recovery and separation of rare earth elements (REEs) and other critical minerals from acid mine drainage (AMD) using a solvent extraction (SX) process. In this process the REE concentration found in an AMD source can vary site to site and season to season. SX processes take a non-trivial amount of time to reach steady state. The separation of numerous individual elements from gangue metals is a high-dimensional problem, and SX simulators can have a prohibitive computation time. Bayesian statistical methods intrinsically quantify the uncertainty of model parameters and predictions, given a set of data and prior distributions on the model parameters. The uncertainty quantification possible with Bayesian methods lends itself well to statistical simulation, model selection, and sensitivity analysis. Moreover, Bayesian models utilizing Gaussian process priors can be used for active learning tasks which allow for prediction, optimization, and simulator calibration while reducing data requirements. However, literature on Bayesian methods applied to separations engineering is sparse. The goal of this dissertation is to investigate, illustrate, and test the use of a handful of Bayesian methods applied to process engineering problems. First, further details on the background and motivation are provided in the introduction. The literature review provides further information regarding critical minerals, solvent extraction, Bayesian inference, data reconciliation for separations, and Gaussian process modeling.
The body of work contains four chapters presenting a mixture of novel applications of Bayesian methods and a novel statistical method derived for use with the motivating problem. Chapter topics include Bayesian data reconciliation for processes, Bayesian inference for a model intended to aid engineers in deciding whether a process has reached steady state, Bayesian optimization of a process with unknown dynamics, and a novel active learning criterion for reducing the computation time required for the Bayesian calibration of simulations to real data. In closing, the utility of a handful of Bayesian methods is demonstrated. However, the work presented is not intended to be complete, and suggestions for further improvements to the application of Bayesian methods to separations are provided. / Doctor of Philosophy / Rare earth elements (REEs) are a set of elements used in the manufacture of supplies used in green technologies and defense. Demand for REEs has prompted the development of technology for recovering REEs from unconventional resources. One unconventional resource for REEs under investigation is acid mine drainage (AMD) produced from the exposure of certain geologic strata as part of coal mining. REE concentrations found in AMD are significant, although low compared to REE ore, and can vary from site to site and season to season. Solvent extraction (SX) processes are commonly utilized to concentrate and separate REEs from contaminants using the differing solubilities of specific elements in water- and oil-based liquid solutions. The complexity and variability of the processes used to concentrate REEs from AMD with SX motivate the use of modern statistical and machine learning based approaches for filtering noise, uncertainty quantification, and design of experiments for testwork, in order to recover the underlying truth and make accurate process performance comparisons. Bayesian statistical methods intrinsically quantify uncertainty.
Bayesian methods can be used to quantify uncertainty for predictions as well as to select which model better explains a data set. The uncertainty quantification available with Bayesian models can be used for decision making. As a particular example, the uncertainty quantification provided by Gaussian process regression lends itself well to deciding which experiments to conduct, given an already obtained data set, to improve prediction accuracy or to find an optimum. However, literature is sparse for Bayesian statistical methods applied to separation processes. The goal of this dissertation is to investigate, illustrate, and test the use of a handful of Bayesian methods applied to process engineering problems. First, further details on the background and motivation are provided in the introduction. The literature review provides further information regarding critical minerals, solvent extraction, Bayesian inference, data reconciliation for separations, and Gaussian process modeling. The body of work contains four chapters presenting a mixture of novel applications of Bayesian methods and a novel statistical method derived for use with the motivating problem. Chapter topics include Bayesian data reconciliation for processes, Bayesian inference for a model intended to aid engineers in deciding whether a process has reached steady state, Bayesian optimization of a process with unknown dynamics, and a novel active learning criterion for reducing the computation time required for the Bayesian calibration of simulations to real data. In closing, the utility of a handful of Bayesian methods is demonstrated. However, the work presented is not intended to be complete, and suggestions for further improvements to the application of Bayesian methods to separations are provided.
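The experiment-selection idea described above, using a Gaussian process posterior to decide where to sample next, can be sketched with a small numpy-only Bayesian optimization loop. The RBF kernel, grid, and objective below are invented for illustration; the dissertation's models and acquisition criteria are more elaborate:

```python
import math
import numpy as np

# Standard normal CDF, vectorized over numpy arrays.
_norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """GP posterior mean and standard deviation at x_test (1-D inputs,
    zero prior mean, unit prior variance, small jitter for stability)."""
    k = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf(x_test, x_train)
    k_inv = np.linalg.inv(k)
    mu = k_star @ k_inv @ y_train
    var = 1.0 - np.sum(k_star @ k_inv * k_star, axis=1)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    """Expected improvement over the best observed value (maximization)."""
    z = (mu - best) / sigma
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (mu - best) * _norm_cdf(z) + sigma * pdf
```

Iterating "fit GP, evaluate the point with highest expected improvement" concentrates expensive experiments near the optimum while still probing uncertain regions, which is exactly the data-efficiency argument made in the abstract.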
367

Emerging Readers and Inferential Comprehension with Wordless Narrative Picturebooks: An intervention study

Kambach, Anna Elizabeth 26 May 2023 (has links)
Inference generation is a process that is key to successful reading (e.g., Bowyer-Crane and Snowling, 2005; Oakhill and Cain, 2012) and that begins to develop early in the reading acquisition process, through listening comprehension (e.g., Kendeou et al., 2009). Despite children being able to generate inferences, such as cause and effect, as early as four years old (Lynch and van den Broek, 2007), inference generation is a skill not explicitly taught to many emergent readers. This study looked at wordless picturebooks and how they could be used with linguistic prompting to develop inferential thinking in young readers, building on the work of Grolig et al. (2020). The study used a quasi-experimental design with one between-subjects factor (wordless vs. worded picturebooks) and one within-subjects factor (pre- vs. post-assessment) to examine the impact of a reading intervention on emergent readers' inferential narrative comprehension. One group's intervention utilized wordless picturebooks, while the second group used a worded picturebook. The gains from pre- to post-assessment suggested that wordless picturebooks, alongside the planned prompts, had an impact on the students' inferential narrative comprehension (t(35) = 4.99, d = 1.63, p < .001) and that the intervention as a whole positively impacted members of both groups. / Doctor of Philosophy / As teachers, we want the children in our care to become strong readers. A part of this challenging task involves helping our students understand what they read. Wordless picturebooks, in combination with prompts for reading them, may be just the tool to help build comprehension through building inference-making skills. This study looked at the impact of a wordless picturebook intervention on the inference generation abilities of young readers and found that wordless picturebooks, along with intentionally planned prompts to support readers, positively impact a child's ability to make inferences.
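For readers unfamiliar with the reported statistics (t(35) = 4.99, d = 1.63), the sketch below computes a paired t statistic and a Cohen's d-style effect size from pre/post scores. The scores in the usage example are made up, not the study's data:

```python
import math

def paired_t_and_d(pre, post):
    """Paired t statistic and a standardized effect size (mean difference
    divided by the SD of the differences) for pre/post scores, the kind
    of summary the study reports. Illustrative helper, not the study's
    analysis code."""
    n = len(pre)
    diffs = [b - a for a, b in zip(pre, post)]
    mean_d = sum(diffs) / n
    # Sample standard deviation of the paired differences (n - 1 denominator).
    sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in diffs) / (n - 1))
    t = mean_d / (sd_d / math.sqrt(n))
    return t, mean_d / sd_d
```

The t statistic scales the mean gain by its standard error (so it grows with sample size), while the effect size expresses the same gain in standard-deviation units.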
368

Energy And Power Systems Simulated Attack Algorithm For Defense Testbed And Analysis

Ruttle, Zachary Andrew 31 May 2023 (has links)
The power grid has evolved over the course of many decades with the usage of cyber systems and communications such as Supervisory Control And Data Acquisition (SCADA); however, due to their connectivity to the internet, the cyber-power system can be infiltrated by malicious attackers, and encryption alone is not a complete solution. Currently, there are several cyber security measures in development, including those based on artificial intelligence. However, there is a need for a varied but consistent attack algorithm to serve as a testbed on which these AI-based and other defenses can be trained and tested. This is important because, in the event of a real attacker, it is not possible to know exactly where they will attack and in what order. Therefore, the proposed method in this thesis uses criminology concepts and fuzzy logic inference to create such an algorithm and to determine its effectiveness in making decisions on a cyber-physical system model. The method takes various characteristics of the attacker as input, builds their ideal target node, and then compares the nodes to the high-impact target and chooses one as the goal. Based on that target and their knowledge, the attacker attacks nodes as long as resources remain. The results show that the proposed method can be used to create a variety of attacks with varying damaging effects, and another set of tests shows the possibility of multiple attack types, such as denial of service and false data injection. The proposed method has been validated using an extended cyber-physical IEEE 13-node distribution system and sensitivity tests to ensure that the created ruleset responds appropriately to each of the inputs. / Master of Science / For the last decades, information and communications technology has become more commonplace for electric power and energy systems around the world.
As a result, it has attracted hackers who take advantage of cyber vulnerabilities to attack critical systems and cause damage, e.g., to the critical infrastructure for electric energy. The power grid is a wide-area, distributed infrastructure with numerous power plants, substations, transmission and distribution lines, as well as customer facilities. For operation and control, the power grid needs to acquire measurements from substations and send control commands from the control center to substations. The cyber-physical system has vulnerabilities that can be exploited by hackers to launch falsified measurements or commands. Much research is concerned with how to detect and mitigate cyber threats. These methods are used to determine if an attack is occurring and, if so, what to do about it. However, for these techniques to work properly, there must be a way to test how the defense will understand the purpose and target of an actual attack, which is where the proposed modeling and simulation method for an attacker comes in. Using a set of values for the attacker's resources, motivation, and other characteristics, the proposed algorithm determines what the attacker's best target would be, and then finds the closest point on the power grid that they can attack. While resources remain from the initial value, the attacker keeps choosing places and then executes the attack. The results show that these input characteristic values affect the decisions the attacker makes and that the damage to the system reflects those values as well, which was tested by examining the outcomes at the high-impact nodes for each input value. This shows that it is possible to model an attacker for testing purposes in a simulation.
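The fuzzy-logic inference step can be illustrated in miniature: map two attacker characteristics on [0, 1] through triangular membership functions, fire a tiny Mamdani-style rule base, and defuzzify by a weighted average. The memberships and rules below are invented for illustration; the thesis' ruleset over criminology-derived inputs is far more detailed:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def attack_intensity(resources, motivation):
    """Toy fuzzy inference mapping attacker resources and motivation
    (both on [0, 1]) to an attack-intensity score. Rule base:
      high resources AND high motivation -> intense (1.0)
      one high, one low                  -> moderate (0.5)
      low resources AND low motivation   -> mild (0.1)
    Firing strengths use min for AND, max for OR; the output is a
    weighted average of the rule consequents (a simple defuzzification)."""
    low_r = tri(resources, -0.5, 0.0, 1.0)
    high_r = tri(resources, 0.0, 1.0, 1.5)
    low_m = tri(motivation, -0.5, 0.0, 1.0)
    high_m = tri(motivation, 0.0, 1.0, 1.5)
    rules = [
        (min(high_r, high_m), 1.0),
        (max(min(high_r, low_m), min(low_r, high_m)), 0.5),
        (min(low_r, low_m), 0.1),
    ]
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Sweeping the inputs shows the smooth, interpretable grading between mild and intense behavior that makes fuzzy inference attractive for modeling attacker characteristics.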
369

NOISE AWARE BAYESIAN PARAMETER ESTIMATION IN BIOPROCESSES: USING NEURAL NETWORK SURROGATE MODELS WITH NON-UNIFORM DATA SAMPLING / NOISE AWARE BAYESIAN PARAMETER ESTIMATION IN BIOPROCESSES

Weir, Lauren January 2024 (has links)
This thesis demonstrates a parameter estimation technique for bioprocesses that utilizes measurement noise in experimental data to determine credible intervals on parameter estimates, with this information of potential use in prediction, robust control, and optimization. To determine these estimates, the work implements Bayesian inference using nested sampling, presenting an approach to develop neural network (NN) based surrogate models. To address challenges associated with non-uniform sampling of experimental measurements, an NN structure is proposed. The resultant surrogate model is utilized within a nested sampling algorithm that samples possible parameter values from the parameter space and uses the NN to calculate model output for use in the likelihood function, based on the joint probability distribution of the noise of the output variables. This method is illustrated first with simulated data and then with experimental data from a Sartorius fed-batch bioprocess. Results demonstrate the feasibility of the proposed technique to enable rapid parameter estimation for bioprocesses. / Thesis / Master of Applied Science (MASc) / Bioprocesses require models that can be developed quickly for rapid production of desired pharmaceuticals. Parameter estimation is necessary for these models, especially first-principles models. Generating parameter estimates with confidence intervals is important for model-based control. Challenges with parameter estimation that must be addressed are the presence of non-uniform sampling and measurement noise in experimental data. This thesis demonstrates a method of parameter estimation that generates parameter estimates with credible intervals by incorporating measurement noise in experimental data, while also employing a dynamic neural network surrogate model that can process non-uniformly sampled data.
The proposed technique implements Bayesian inference using nested sampling and was tested against both simulated and real experimental fed-batch data.
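Nested sampling itself can be shown in bare-bones form. The toy below estimates the log-evidence of a 1-D Gaussian likelihood under a uniform prior, replacing the worst live point by naive rejection sampling above its likelihood contour (workable only for cheap toy problems); the thesis instead pairs the sampler with an NN surrogate and a measurement-noise-based likelihood:

```python
import math
import random

def logaddexp(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == -math.inf:
        return b
    hi, lo = (a, b) if a >= b else (b, a)
    return hi + math.log1p(math.exp(lo - hi))

def nested_sampling(loglike, prior_draw, n_live=100, n_iter=600, seed=0):
    """Bare-bones nested sampling: at each step, the lowest-likelihood
    live point is assigned the prior mass of its shell (prior volume
    shrinks by a factor exp(-1/n_live) per step) and is replaced by a
    fresh prior draw above its likelihood. Returns a log-evidence
    estimate, including the leftover mass of the final live points."""
    rng = random.Random(seed)
    live = [prior_draw(rng) for _ in range(n_live)]
    logl = [loglike(x) for x in live]
    logz = -math.inf
    log_shell = math.log(1.0 - math.exp(-1.0 / n_live))
    for i in range(n_iter):
        worst = min(range(n_live), key=lambda j: logl[j])
        logw = -i / n_live + log_shell          # log prior mass of this shell
        logz = logaddexp(logz, logl[worst] + logw)
        while True:                             # naive rejection above the contour
            x = prior_draw(rng)
            if loglike(x) > logl[worst]:
                live[worst], logl[worst] = x, loglike(x)
                break
    log_x = -n_iter / n_live                    # remaining prior volume
    for l in logl:
        logz = logaddexp(logz, l + log_x - math.log(n_live))
    return logz
```

For a standard normal likelihood under a Uniform(-5, 5) prior, the evidence is close to 0.1 (the prior density times the nearly complete normal mass), which the sampler recovers.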
370

Incremental Learning approaches to Biomedical decision problems

Tortajada Velert, Salvador 21 September 2012 (has links)
During the last decade, a new trend in medicine has been transforming the nature of healthcare from reactive to proactive. This new paradigm is shifting toward a personalized medicine in which the prevention, diagnosis, and treatment of disease are focused on individual patients. This paradigm is known as P4 medicine. Among other key benefits, P4 medicine aspires to detect diseases at an early stage, to introduce diagnostics that stratify patients and diseases, to select the optimal therapy based on individual observations, and to take patient outcomes into account so as to empower the physician, the patient, and their communication. This paradigm transformation relies on the availability of complex multi-level biomedical data that are increasingly accurate, since it is possible to find exactly the needed information, but also exponentially noisy, since access to that information is more and more challenging. In order to take advantage of this information, an important effort has been made in recent decades to digitize medical records and to develop new mathematical and computational methods for extracting maximum knowledge from patient records, building dynamic and disease-predictive models from massive amounts of integrated clinical and biomedical data. This requirement enables the use of computer-assisted Clinical Decision Support Systems for the management of individual patients. Clinical Decision Support Systems (CDSS) are computational systems that provide precise and specific knowledge for the medical decisions to be adopted in the diagnosis, prognosis, treatment, and management of patients. CDSS are closely related to the concept of evidence-based medicine, since they infer medical knowledge from the biomedical databases and acquisition protocols used in developing the systems, provide evidence-based computational support for clinical practice, and evaluate the performance and added value of the solution for each specific medical problem.
/ Tortajada Velert, S. (2012). Incremental Learning approaches to Biomedical decision problems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17195
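The incremental learning setting of the title, updating a clinical decision model as patient records arrive one at a time rather than retraining from scratch, can be sketched with an online logistic-regression classifier. The features, labels, and learning rate below are illustrative and not tied to any of the thesis' biomedical datasets:

```python
import math
import random

class OnlineLogReg:
    """Incremental (online) logistic-regression classifier updated one
    labelled record at a time via stochastic gradient steps, sketching
    the kind of incremental learning the thesis studies for clinical
    decision support. A toy model, not the thesis' methods."""
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        """Probability of the positive class for feature vector x."""
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def partial_fit(self, x, y):
        """Single gradient step on one example (y in {0.0, 1.0});
        no stored dataset is needed, so the model can keep learning
        as new records arrive."""
        err = self.predict_proba(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err
```

Because each update touches only the current record, the model's memory footprint is constant regardless of how many patient records have been seen, which is the core practical appeal of incremental approaches.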
