341

Clinical dose feature extraction for prediction of dose mimicking parameters / Extrahering av features från kliniska dosbilder för prediktion av doshärmande parametrar

Finnson, Anton January 2021 (has links)
Treating cancer with radiotherapy requires precise planning. Several planning pipelines rely on reference dose mimicking, where one tries to find the machine parameters that best mimic a given reference dose. Dose mimicking relies on having a function that quantifies dose similarity well, necessitating methods for feature extraction from dose images. In this thesis we investigate ways of extracting features from clinical dose images, and propose a few proof-of-concept dose mimicking functions using the extracted features. We extend current techniques and lay the foundation for new techniques for feature extraction, using mathematical frameworks developed in entirely different areas. In particular we give an introduction to wavelet theory, which provides signal decomposition techniques suitable for analysing local structure, and propose two different dose mimicking functions using wavelets. Furthermore, we extend ROI-based mimicking functions to use artificial ROIs, and we investigate variational autoencoders and their application to the clinical dose feature extraction problem. We conclude that the proposed functions have the potential to address certain shortcomings of current dose mimicking functions. The four methods all seem to approximately capture some notion of dose similarity. Used in combination with the current framework, they have the potential to improve dose mimicking results. However, the numerical tests supporting this are brief, and more thorough numerical investigations are necessary to properly evaluate the usefulness of the new dose mimicking functions. / Treating cancer with radiotherapy requires precise planning. Several planning frameworks rely on dose mimicking, which means finding the machine parameters that best mimic a given reference dose. Dose mimicking requires a function that quantifies the similarity between two doses, which in turn requires a way to extract characteristic features from dose images. In this thesis we investigate different mathematical methods for extracting features from clinical dose images, and present a few prototype dose mimicking functions constructed from the extracted features. We extend current techniques and lay the foundation for new ones by using mathematical frameworks developed for entirely different purposes. In particular, we give an introduction to wavelet theory, which provides mathematical tools for analysing the local behaviour of signals such as images. We propose two different dose mimicking functions that make use of wavelets, and extend ROI-based dose mimicking by introducing artificial ROIs. Furthermore, we investigate so-called variational autoencoders and the possibility of using them to extract features from dose images. We conclude that the proposed functions have the potential to address certain limitations of the dose mimicking functions in use today. All four methods appear to approximately quantify the notion of dose similarity. Using these new methods in combination with the current dose mimicking framework has the potential to improve dose mimicking results. However, the numerical investigations supporting these conclusions are not very thorough, so more careful numerical tests are needed before any definitive answers can be given about the practical usefulness of the presented dose mimicking functions.
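The wavelet-based feature extraction the abstract describes can be illustrated with a small sketch. The following is not the thesis's actual feature extraction or dose mimicking function; it only shows, on a synthetic dose-like array and with assumed names, how per-subband energies of a 2D wavelet decomposition (here via the PyWavelets package) could serve as a simple feature vector whose distance acts as a toy dose-similarity score.

```python
# Illustrative sketch (not the thesis's method): simple wavelet-based features
# from a 2D dose-like array using PyWavelets.
import numpy as np
import pywt  # assumes PyWavelets is installed

def wavelet_energy_features(dose: np.ndarray, wavelet: str = "db2", level: int = 3):
    """Return per-subband energies of a 2D wavelet decomposition.

    The detail-subband energies at each level give a coarse description of
    local dose structure at different scales.
    """
    coeffs = pywt.wavedec2(dose, wavelet=wavelet, level=level)
    features = [np.sum(coeffs[0] ** 2)]              # approximation energy
    for (cH, cV, cD) in coeffs[1:]:                  # detail subbands per level
        features.extend([np.sum(cH ** 2), np.sum(cV ** 2), np.sum(cD ** 2)])
    return np.asarray(features)

def feature_distance(dose_a: np.ndarray, dose_b: np.ndarray) -> float:
    """A toy dose-similarity score: distance between feature vectors."""
    return float(np.linalg.norm(wavelet_energy_features(dose_a)
                                - wavelet_energy_features(dose_b)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.random((64, 64))                 # stand-in for a reference dose
    candidate = reference + 0.05 * rng.standard_normal((64, 64))
    print(feature_distance(reference, candidate))
```

In practice one would replace the synthetic arrays with clinical dose images and feed a feature distance of this kind into a dose mimicking objective.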
342

Bayesian Sparse Regression with Application to Data-driven Understanding of Climate

Das, Debasish January 2015 (has links)
Sparse regressions based on constraining the L1-norm of the coefficients became popular because of their ability to handle high-dimensional data, unlike regular regressions, which suffer from overfitting and model identifiability issues, especially when the sample size is small. They are often the method of choice in many fields of science and engineering for simultaneously selecting covariates and fitting parsimonious linear models that generalize better and are easily interpretable. However, significant challenges may be posed by the need to accommodate extremes and other domain constraints, such as dynamical relations among variables, spatial and temporal constraints, the need to provide uncertainty estimates, and feature correlations. We adopted a hierarchical Bayesian version of the sparse regression framework and exploited its inherent flexibility to accommodate the constraints. We applied sparse regression to the feature selection problem of statistical downscaling of climate variables, with a particular focus on their extremes. This is important for many impact studies where climate change information is required at a spatial scale much finer than that provided by global or regional climate models. Characterizing the dependence of extremes on covariates can help identify plausible causal drivers and inform the downscaling of extremes. We propose a general-purpose sparse Bayesian framework for covariate discovery that accommodates the non-Gaussian distribution of extremes within a hierarchical Bayesian sparse regression model. We obtain posteriors over regression coefficients, which indicate dependence of extremes on the corresponding covariates and provide uncertainty estimates, using a variational Bayes approximation. The method is applied to select informative atmospheric covariates at multiple spatial scales, as well as indices of large-scale circulation and global warming, related to the frequency of precipitation extremes over the continental United States. Our results confirm the dependence relations that may be expected from known precipitation physics and generate novel insights that can inform physical understanding. We plan to extend our model to discover covariates for extreme intensity in the future. We further extend our framework to handle the dynamic relationship among the climate variables using a nonparametric Bayesian mixture of sparse regression models based on the Dirichlet process (DP). The extended model can achieve simultaneous clustering and discovery of covariates within each cluster. Moreover, a priori knowledge about associations between pairs of data points is incorporated into the model through must-link constraints on a Markov random field (MRF) prior. A scalable and efficient variational Bayes approach is developed to infer posteriors on regression coefficients and cluster variables. / Computer and Information Science
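As a rough illustration of the L1-constrained sparse regression the abstract starts from, here is a minimal sketch using scikit-learn's Lasso, whose solution is the MAP estimate under a Laplace prior. It is a stand-in only: the thesis's hierarchical Bayesian model, its treatment of extremes, and the variational Bayes posteriors are not reproduced, and all sizes and names below are made up.

```python
# Minimal sketch of L1-penalised sparse regression on synthetic data with
# many more covariates than samples; only a handful of covariates matter.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
n_samples, n_features = 100, 500              # small n, large p regime
X = rng.standard_normal((n_samples, n_features))
true_coef = np.zeros(n_features)
true_coef[:5] = [2.0, -1.5, 1.0, 0.5, -0.5]   # only 5 informative covariates
y = X @ true_coef + 0.1 * rng.standard_normal(n_samples)

model = Lasso(alpha=0.1)                      # alpha controls the L1 penalty
model.fit(X, y)

selected = np.flatnonzero(model.coef_)        # covariates with nonzero weight
print("selected covariates:", selected)
print("their coefficients:", np.round(model.coef_[selected], 3))
```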
343

Recycling Krylov Subspaces and Preconditioners

Ahuja, Kapil 15 November 2011 (has links)
Science and engineering problems frequently require solving a sequence of single linear systems or a sequence of dual linear systems. We develop algorithms that recycle Krylov subspaces and preconditioners from one system (or pair of systems) in the sequence to the next, leading to efficient solutions. Besides the benefit of only having to store a few Lanczos vectors, using BiConjugate Gradients (BiCG) to solve dual linear systems may have application-specific advantages. For example, using BiCG to solve the dual linear systems arising in interpolatory model reduction provides a backward error formulation in the model reduction framework. Using BiCG to evaluate bilinear forms -- for example, in the variational Monte Carlo (VMC) algorithm for electronic structure calculations -- leads to a quadratic error bound. Since one of our focus areas is sequences of dual linear systems, we introduce recycling BiCG, a BiCG method that recycles two Krylov subspaces from one pair of dual linear systems to the next pair. The derivation of recycling BiCG also builds the foundation for developing recycling variants of other bi-Lanczos-based methods such as CGS, BiCGSTAB, BiCGSTAB2, BiCGSTAB(l), QMR, and TFQMR. We develop a generalized bi-Lanczos algorithm, where the two matrices of the bi-Lanczos procedure are not each other's conjugate transpose but satisfy this relation over the generated Krylov subspaces. This is sufficient for a short-term recurrence. Next, we derive an augmented bi-Lanczos algorithm with recycling and show that this algorithm is a special case of generalized bi-Lanczos. The Petrov-Galerkin approximation that includes recycling in the iteration leads to modified two-term recurrences for the solution and residual updates. We generalize and extend the framework of our recycling BiCG to CGS, BiCGSTAB, and BiCGSTAB2. We perform extensive numerical experiments and analyze the generated recycle space. We test all of our recycling algorithms on a discretized partial differential equation (PDE) of convection-diffusion type. This PDE problem provides well-known test cases that are easy to analyze further. We use recycling BiCG in the Iterative Rational Krylov Algorithm (IRKA) for interpolatory model reduction and in the VMC algorithm. For a model reduction problem, we show up to 70% savings in iterations, and we also demonstrate that solving the problem without recycling leads to (about) a 50% increase in runtime. Experiments with recycling BiCG for VMC give promising results. We also present an algorithm that recycles preconditioners, leading to a dramatic reduction in the cost of VMC for large(r) systems. The main cost of the VMC method is in constructing a sequence of Slater matrices and computing the ratios of determinants for successive Slater matrices. Recent work has improved the scaling of constructing Slater matrices for insulators, so that the cost of constructing Slater matrices in these systems is now linear in the number of particles. However, the cost of computing determinant ratios remains cubic in the number of particles. With the long-term aim of simulating much larger systems, we improve the scaling of computing determinant ratios in the VMC method for simulating insulators by using preconditioned iterative solvers. The main contribution here is the development of a method to efficiently compute, for the Slater matrices, a sequence of preconditioners that make the iterative solver converge rapidly.
This involves cheap preconditioner updates, an effective reordering strategy, and a cheap method to monitor instability of ILUTP preconditioners. Using the resulting preconditioned iterative solvers to compute determinant ratios of consecutive Slater matrices reduces the scaling of the VMC algorithm from O(n^3) per sweep to roughly O(n^2), where n is the number of particles, and a sweep is a sequence of n steps, each attempting to move a distinct particle. We demonstrate experimentally that we can achieve the improved scaling without increasing statistical errors. / Ph. D.
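A minimal sketch of one idea in this abstract, reusing a preconditioner across a sequence of slowly varying linear systems, is given below using SciPy. It illustrates the general setting only; the recycling BiCG method, the cheap preconditioner updates, and the Slater-matrix application are not shown, and the test matrix is an assumed toy convection-diffusion-like operator.

```python
# Sketch of reusing a preconditioner across a sequence of slowly varying
# sparse linear systems.  The thesis develops recycling BiCG and cheap
# preconditioner *updates*; here the ILU factorization is simply reused.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# A simple convection-diffusion-like tridiagonal test matrix.
A0 = sp.diags([-1.0, 2.2, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csc")

ilu = spla.spilu(A0, drop_tol=1e-4)                  # factor once
M = spla.LinearOperator((n, n), matvec=ilu.solve)    # reuse as preconditioner

rng = np.random.default_rng(1)
x = np.zeros(n)
for k in range(5):
    # Each system differs slightly from the previous one in the sequence.
    A_k = A0 + sp.diags(0.01 * k * np.ones(n))
    b_k = rng.standard_normal(n)
    x, info = spla.bicgstab(A_k, b_k, x0=x, M=M)     # warm start + fixed preconditioner
    print(f"system {k}: " + ("converged" if info == 0 else f"info = {info}"))
```

The previous solution is also passed in as the initial guess, which is the simplest way to carry information along the sequence when no subspace recycling is available.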
344

Numerical Analysis for Data-Driven Reduced Order Model Closures

Koc, Birgul 05 May 2021 (has links)
This dissertation contains work that addresses both theoretical and numerical aspects of reduced order models (ROMs). In an under-resolved regime, the classical Galerkin reduced order model (G-ROM) fails to yield accurate approximations. Thus, we propose a new ROM, the data-driven variational multiscale ROM (DD-VMS-ROM), built by adding a closure term to the G-ROM with the aim of increasing the numerical accuracy of the ROM approximation without decreasing the computational efficiency. The closure term is constructed based on the variational multiscale framework. To model the closure term, we use data-driven modeling. In other words, by using the available data, we find ROM operators that approximate the closure term. To illustrate the closure term's effect on the ROMs, we numerically compare the DD-VMS-ROM with other standard ROMs. In numerical experiments, we show that the DD-VMS-ROM is significantly more accurate than the standard ROMs. Furthermore, to understand the closure term's physical role, we present a theoretical and numerical investigation of the closure term's role in long-time integration. We theoretically prove and numerically show that, in a long-time average, the closure terms exchange energy from the most energetic modes to the least energetic modes. One of the promising contributions of this dissertation is providing the numerical analysis of the data-driven closure model, which has not been studied before. At both the theoretical and the numerical levels, we investigate what conditions guarantee that a small difference between the data-driven closure model and the full order model (FOM) closure term implies that the approximate solution is close to the FOM solution. In other words, we perform theoretical and numerical investigations to show that the data-driven model is verifiable. Apart from studying the ROM closure problem, we also investigate the setting in which the G-ROM converges optimally. We explore the ROM error bounds' optimality by considering the difference quotients (DQs). We theoretically prove and numerically illustrate that both the ROM projection error and the ROM error are suboptimal without the DQs, and optimal if the DQs are used. / Doctor of Philosophy / In many realistic applications, obtaining an accurate approximation to a given problem can require a tremendous number of degrees of freedom. Solving these large systems of equations can take days or even weeks on standard computational platforms. Thus, lower-dimensional models, i.e., reduced order models (ROMs), are often used instead. The ROMs are computationally efficient and accurate when the underlying system has dominant and recurrent spatial structures. Our contribution to reduced order modeling is adding a data-driven correction term, which carries important information and yields better ROM approximations. This dissertation's theoretical and numerical results show that the new ROM equipped with a closure term yields more accurate approximations than the standard ROM.
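The Galerkin ROM that the DD-VMS-ROM builds on can be sketched in a few lines: a POD basis from snapshot data and projection of a linear full-order operator onto it. The sketch below uses synthetic snapshots and a made-up linear operator, and it omits the data-driven closure term that is the dissertation's actual contribution.

```python
# Sketch of the standard Galerkin ROM building block: POD basis via SVD of
# snapshots, Galerkin projection of a linear operator, and a reduced time march.
import numpy as np

rng = np.random.default_rng(0)
n, n_snapshots, r = 400, 60, 8                 # FOM dimension, snapshots, ROM dimension

# Synthetic full-order model du/dt = A u with a stable (made-up) operator.
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
snapshots = rng.standard_normal((n, n_snapshots))   # stand-in for FOM solution data

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :r]                                  # n x r POD basis

# Galerkin projection: reduced operator A_r = Phi^T A Phi (r x r).
A_r = Phi.T @ A @ Phi

# March the ROM with explicit Euler and lift back to the full space.
dt, steps = 1e-3, 100
a = Phi.T @ snapshots[:, 0]                     # initial reduced coordinates
for _ in range(steps):
    a = a + dt * (A_r @ a)
u_rom = Phi @ a                                 # approximate full-order state
print("ROM state norm:", np.linalg.norm(u_rom))
```

A closure model of the kind studied in the dissertation would add a correction term to the reduced right-hand side, learned from the difference between projected FOM data and the G-ROM dynamics.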
345

Are Particle-Based Methods the Future of Sampling in Joint Energy Models? A Deep Dive into SVGD and SGLD

Shah, Vedant Rajiv 19 August 2024 (has links)
This thesis investigates the integration of Stein Variational Gradient Descent (SVGD) with Joint Energy Models (JEMs), comparing its performance to Stochastic Gradient Langevin Dynamics (SGLD). We incorporated a generative loss term with an entropy component to enhance diversity and a smoothing factor to mitigate numerical instability issues commonly associated with the energy function in energy-based models. Experiments on the CIFAR-10 dataset demonstrate that SGLD, particularly with Sharpness-Aware Minimization (SAM), outperforms SVGD in classification accuracy. However, SVGD without SAM, despite its lower classification accuracy, exhibits lower calibration error, underscoring its potential for developing the well-calibrated classifiers required in safety-critical applications. Our results emphasize the importance of adaptive tuning of the SVGD smoothing factor (α) to balance generative and classification objectives. This thesis highlights the trade-offs between computational cost and performance, with SVGD demanding significant resources. Our findings stress the need for adaptive scaling and robust optimization techniques to enhance the stability and efficacy of JEMs. This thesis lays the groundwork for exploring more efficient and robust sampling techniques within the JEM framework, offering insights into the integration of SVGD with JEMs. / Master of Science / This thesis explores advanced techniques for improving machine learning models with a focus on developing well-calibrated and robust classifiers. We concentrated on two methods, Stein Variational Gradient Descent (SVGD) and Stochastic Gradient Langevin Dynamics (SGLD), to evaluate their effectiveness in enhancing classification accuracy and reliability. Our research introduced a new mathematical approach to improve the stability and performance of Joint Energy Models (JEMs). By leveraging the generative capabilities of SVGD, the model is guided to learn better data representations, which are crucial for robust classification. Using the CIFAR-10 image dataset, we confirmed prior research indicating that SGLD, particularly when combined with an optimization method called Sharpness-Aware Minimization (SAM), delivered the best results in terms of accuracy and stability. Notably, SVGD without SAM, despite yielding slightly lower classification accuracy, exhibited significantly lower calibration error, making it particularly valuable for safety-critical applications. However, SVGD required careful tuning of hyperparameters and substantial computational resources. This study lays the groundwork for future efforts to enhance the efficiency and reliability of these advanced sampling techniques, with the overarching goal of improving classifier calibration and robustness with JEMs.
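For readers unfamiliar with SVGD, a minimal sketch of the plain update (RBF kernel with a median-heuristic bandwidth) on a toy Gaussian target is given below. This is not the thesis's JEM training setup: the smoothing factor (α), the generative and classification losses, and SAM are not modeled, and the target, particle count, and step size are arbitrary choices (practical implementations typically use an adaptive step size such as AdaGrad).

```python
# Minimal SVGD sketch on a toy 2D Gaussian target.  Particles follow the
# kernelized gradient flow of the log-density.
import numpy as np

def svgd_step(particles, grad_logp, step_size=0.5):
    n = particles.shape[0]
    diffs = particles[:, None, :] - particles[None, :, :]    # diffs[i, j] = x_i - x_j
    sq_dists = np.sum(diffs ** 2, axis=-1)
    h = np.median(sq_dists) / np.log(n + 1) + 1e-8            # median-heuristic bandwidth
    K = np.exp(-sq_dists / h)                                 # RBF kernel matrix
    grads = grad_logp(particles)                              # (n, d) score evaluations
    attraction = K @ grads                                    # pulls particles to high density
    repulsion = (2.0 / h) * np.sum(diffs * K[..., None], axis=1)  # keeps particles spread out
    return particles + step_size * (attraction + repulsion) / n

# Toy target: isotropic Gaussian centred at mu, so grad log p(x) = -(x - mu).
mu = np.array([2.0, -1.0])
grad_logp = lambda x: -(x - mu)

rng = np.random.default_rng(0)
particles = rng.standard_normal((50, 2))
for _ in range(1000):
    particles = svgd_step(particles, grad_logp)
print("particle mean:", particles.mean(axis=0))   # should end up near mu
```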
346

Autoencoder-based anomaly detection in time series : Application to active medical devices

Gietzelt, Marie January 2024 (has links)
The aim of this thesis is to derive an unsupervised method for detecting anomalies in time series. Autoencoder-based approaches are widely used for the task of detecting anomalies, where a model learns to reconstruct the pattern of the given data. The main idea is that the model will be good at reconstructing data that does not contain anomalous behavior. If the model fails to reconstruct an observation, it will be marked as anomalous. In this thesis, the derived method is applied to data from active medical devices manufactured by B. Braun. The given data consist of 6,000 length-varying time series, where the average length is greater than 14,000. Hence, the given sample size is small compared to their lengths. Subsequences of the same pattern where anomalies are expected to appear can be extracted from the time series, taking expert knowledge about the data into account. Considering the subsequences for the model training, the problem can be translated into a problem with a large dataset of short time series. It is shown that a common autoencoder is able to reconstruct anomalies well and is therefore not useful for solving the task. It is demonstrated that a variational autoencoder works better, as there are large differences between the given anomalous observations and their reconstructions. Furthermore, several thresholds for these differences are compared. The relative numbers of detected anomalies in the two given datasets are 3.12% and 5.03%.
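The reconstruction-error idea at the core of the thesis can be sketched as follows, using synthetic fixed-length subsequences and a small bottlenecked MLP in place of the actual autoencoder. The B. Braun data are not available, the window length and threshold choice are made up, and the variational autoencoder the thesis ultimately prefers is not shown.

```python
# Hedged sketch of reconstruction-error anomaly detection on fixed-length
# subsequences.  An MLP trained to reproduce its input through a narrow
# bottleneck acts as a simple (non-variational) autoencoder.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
window = 50

def make_subsequence(anomalous=False):
    t = np.linspace(0, 2 * np.pi, window)
    x = np.sin(t) + 0.05 * rng.standard_normal(window)
    if anomalous:
        x[20:25] += 2.0                           # inject a short spike
    return x

train = np.stack([make_subsequence() for _ in range(500)])          # clean windows only
test = np.stack([make_subsequence(anomalous=(i % 10 == 0)) for i in range(100)])

ae = MLPRegressor(hidden_layer_sizes=(32, 4, 32), max_iter=2000, random_state=0)
ae.fit(train, train)                              # learn to reconstruct normal patterns

errors = np.mean((ae.predict(test) - test) ** 2, axis=1)            # per-window MSE
threshold = np.percentile(np.mean((ae.predict(train) - train) ** 2, axis=1), 99)
print("flagged anomalous windows:", np.flatnonzero(errors > threshold))
```

The threshold here is simply a high percentile of the training reconstruction error; the thesis compares several such threshold choices.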
347

A Bayesian Inference/Maximum Entropy Approach for Optimization and Validation of Empirical Molecular Models

Raddi, Robert, 0000-0001-7139-5028 05 1900 (has links)
Accurate modeling of structural ensembles is essential for understanding molecular function, predicting molecular interactions, refining molecular potentials, protein engineering, drug discovery, and more. Here, we enhance molecular modeling through Bayesian Inference of Conformational Populations (BICePs), a highly versatile algorithm for reweighting simulated ensembles with experimental data. By incorporating replica averaging and improved likelihood functions that better address systematic errors, and by adopting variational optimization schemes, this algorithm offers unmatched utility in the refinement and validation of both structural ensembles and empirical models. Utilizing a set of diverse experimental measurements, including NOE distances, chemical shifts, and vicinal J-coupling constants, we evaluated nine force fields for simulating the mini-protein chignolin, highlighting BICePs’ capability to correctly identify folded conformations and perform objective model selection. Additionally, we demonstrate how BICePs automates the parameterization of molecular potentials and forward models—computational frameworks that generate observable quantities—while properly accounting for all sources of random and systematic error. By reconciling prior knowledge of structural ensembles with solution-based experimental observations, BICePs not only offers a robust approach for evaluating the predictive accuracy of molecular models but also shows significant promise for future applications in computational chemistry and biophysics. / Chemistry
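A heavily simplified sketch of the ensemble-reweighting step that BICePs generalizes is given below: maximum-entropy reweighting of simulated conformations so that one predicted observable matches an experimental average. The observable, target value, and prior weights are made up, and the sketch omits BICePs' replica averaging, multiple observables, and explicit error models.

```python
# Hedged sketch of maximum-entropy ensemble reweighting toward a single
# experimental observable (illustrative only, not the BICePs machinery).
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
n_conformations = 1000
prior_weights = np.full(n_conformations, 1.0 / n_conformations)

# Predicted observable (e.g. a NOE-like distance) for each simulated conformation.
predicted = rng.gamma(shape=4.0, scale=1.0, size=n_conformations)
experimental_value = 5.0                       # target ensemble average (made up)

def reweighted_mean(lam):
    """Ensemble average under tilted weights w_i proportional to p_i * exp(-lam * f_i)."""
    w = prior_weights * np.exp(-lam * (predicted - predicted.mean()))
    w /= w.sum()
    return np.sum(w * predicted)

# Find the Lagrange multiplier that matches the experimental average.
lam_star = brentq(lambda lam: reweighted_mean(lam) - experimental_value, -0.5, 0.5)
weights = prior_weights * np.exp(-lam_star * (predicted - predicted.mean()))
weights /= weights.sum()
print("reweighted mean:", np.sum(weights * predicted), "target:", experimental_value)
```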
348

Incremental sheet forming process : control and modelling

Wang, Hao January 2014 (has links)
Incremental Sheet Forming (ISF) is a progressive metal forming process, where the deformation occurs locally around the point of contact between a tool and the metal sheet. The final work-piece is formed cumulatively by the movements of the tool, which is usually attached to a CNC milling machine. The ISF process is dieless in nature and capable of producing parts of different geometries with a universal tool. The tooling cost of ISF can be as low as 5–10% of that of conventional sheet metal forming processes. On the laboratory scale, the accuracy of the parts created by ISF is between ±1.5 mm and ±3 mm. However, in order for ISF to be competitive with a stamping process, an accuracy below ±1.0 mm, and more realistically below ±0.2 mm, would be needed. In this work, we first studied the ISF deformation process with a simplified phenomenological linear model and employed a predictive controller to obtain an optimised tool trajectory, in the sense of minimising the geometrical deviations between the target shape and the shape made by the ISF process. The algorithm is implemented on a rig at Cambridge University, and the experimental results demonstrate the capability of the model predictive control (MPC) strategy. We achieve deviation errors of around ±0.2 mm for a number of simple geometrical shapes with our controller. The limitations of the underlying linear model for a highly nonlinear problem lead us to study the ISF process with a physics-based model. We use an elastoplastic constitutive relation to model the material law and contact mechanics with Signorini-type boundary conditions to model the process, resulting in an infinite-dimensional system described by a partial differential equation. We further developed the computational method to solve the proposed mathematical model by using an augmented Lagrangian method in function space and discretising by the finite element method. The preliminary results demonstrate the possibility of using this model for optimal controller design.
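The trajectory-correction loop behind the MPC approach can be sketched with an assumed linear deviation model: each pass picks the tool-path adjustment that minimizes the predicted deviation plus a control penalty. The sensitivity matrix, penalty weight, and shapes below are illustrative, not the Cambridge rig's identified model or controller.

```python
# Hedged sketch of MPC-style trajectory correction for a forming process:
# a linear model maps tool-path adjustments to changes in geometric deviation,
# and each pass solves a regularized least-squares problem for the adjustment.
import numpy as np

rng = np.random.default_rng(0)
n = 40                                    # points along one contour of the part

# Assumed linear sensitivity: deviation_next = deviation + B @ adjustment.
# A banded matrix mimics the local character of ISF deformation.
B = np.eye(n)
for offset in (1, -1):
    B += 0.4 * np.eye(n, k=offset)

target_shape = np.sin(np.linspace(0, np.pi, n))
formed_shape = target_shape + 0.15 * rng.standard_normal(n)   # initial forming error

lam = 0.1                                 # penalty on large tool-path corrections
for step in range(5):
    deviation = formed_shape - target_shape
    # Minimize ||deviation + B u||^2 + lam ||u||^2 (regularized least squares).
    u = np.linalg.solve(B.T @ B + lam * np.eye(n), -B.T @ deviation)
    formed_shape = formed_shape + B @ u + 0.01 * rng.standard_normal(n)  # process noise
    print(f"pass {step}: max |deviation| = {np.max(np.abs(formed_shape - target_shape)):.4f}")
```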
349

Graphical Models for Robust Speech Recognition in Adverse Environments

Rennie, Steven J. 01 August 2008 (has links)
Robust speech recognition in acoustic environments that contain multiple speech sources and/or complex non-stationary noise is a difficult problem, but one of great practical interest. The formalism of probabilistic graphical models constitutes a relatively new and very powerful tool for better understanding and extending existing models, learning, and inference algorithms, and a bedrock for the creative, quasi-systematic development of new ones. In this thesis, a collection of new graphical models and inference algorithms for robust speech recognition is presented. The problem of speech separation using multiple microphones is first treated. A family of variational algorithms for tractably combining multiple acoustic models of speech with observed sensor likelihoods is presented. The algorithms recover high-quality estimates of the speech sources even when there are more sources than microphones, and have improved upon the state of the art in terms of SNR gain by over 10 dB. Next, the problem of background compensation in non-stationary acoustic environments is treated. A new dynamic noise adaptation (DNA) algorithm for robust noise compensation is presented, and shown to outperform several existing state-of-the-art front-end denoising systems on the new DNA + Aurora II and Aurora II-M extensions of the Aurora II task. Finally, the problem of recognizing speech in the presence of competing speech using a single microphone is treated. The Iroquois system for multi-talker speech separation and recognition is presented. The system won the 2006 Pascal International Speech Separation Challenge, and, amazingly, achieved super-human recognition performance on a majority of test cases in the task. The result marks a significant first in automatic speech recognition, and a milestone in computing.
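As a toy stand-in for the probabilistic model combination this abstract describes, the sketch below computes the posterior-mean (Wiener-style) estimate of a clean signal when both the speech prior and the noise are Gaussian with assumed per-bin variances. The real systems in the thesis use far richer acoustic models and variational inference; none of their details appear here.

```python
# Toy sketch of probabilistic model combination for denoising: with Gaussian
# priors for speech and noise, the posterior mean of the clean signal given a
# noisy observation is a per-bin Wiener-style weighting of the observation.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_bins = 200, 64

speech_var = np.linspace(2.0, 0.2, n_bins)       # assumed per-bin speech prior variance
noise_var = 0.5 * np.ones(n_bins)                # assumed per-bin noise variance

speech = rng.standard_normal((n_frames, n_bins)) * np.sqrt(speech_var)
noise = rng.standard_normal((n_frames, n_bins)) * np.sqrt(noise_var)
observed = speech + noise                        # y = s + n

# Posterior mean E[s | y] under the Gaussian model: gain = var_s / (var_s + var_n).
gain = speech_var / (speech_var + noise_var)
estimate = gain * observed

mse_raw = np.mean((observed - speech) ** 2)
mse_est = np.mean((estimate - speech) ** 2)
print(f"MSE of noisy observation: {mse_raw:.3f}, MSE of posterior mean: {mse_est:.3f}")
```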
350

Unconventional Phases in Two-Dimensional Hubbard and Kondo-Lattice Models by Variational Cluster Approaches

Lenz, Benjamin 16 December 2016 (has links)
No description available.
