311

Reconstructing Historical Earthquake-Induced Tsunamis: Case Study of 1820 Event Near South Sulawesi, Indonesia

Paskett, Taylor Jole 13 July 2022 (has links) (PDF)
We build on the method introduced by Ringer et al., applying it to an 1820 event that occurred near South Sulawesi, Indonesia. We utilize additional statistical models to aid our Metropolis-Hastings sampler, including a Gaussian process that informs the prior. We apply the method to multiple candidate fault zones to determine which fault is the most likely source of the earthquake and tsunami. After collecting nearly 80,000 samples, we find that between the two most likely fault zones, the Walanae fault zone matches the anecdotal accounts much better than the Flores fault zone. However, to support the anecdotal data, both samplers tend toward powerful earthquakes that may not be supported by the faults in question. This indicates that further research is warranted, and may suggest that some other type of event took place, such as a multiple-fault rupture or a landslide-generated tsunami.
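The sampler described above follows the standard random-walk Metropolis-Hastings pattern. The sketch below illustrates that pattern on a deliberately simplified stand-in problem: the forward model, the observation sites, the two-parameter source description, and the Gaussian prior are all hypothetical placeholders, not the thesis's tsunami simulation or its Gaussian-process-informed prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical anecdotal observations: wave heights (m) at historical sites.
observed = np.array([4.0, 2.5, 1.0])
noise_sd = 0.5

def forward_model(theta):
    """Stand-in for a tsunami simulation: maps earthquake parameters
    (magnitude, depth in km) to predicted wave heights at the sites."""
    magnitude, depth = theta
    return magnitude * np.array([0.55, 0.35, 0.15]) * np.exp(-depth / 30.0)

def log_prior(theta):
    """Gaussian prior on (magnitude, depth); in the thesis the prior is
    informed by a Gaussian process rather than fixed by hand."""
    mean = np.array([8.0, 20.0])
    sd = np.array([0.5, 10.0])
    return -0.5 * np.sum(((theta - mean) / sd) ** 2)

def log_likelihood(theta):
    resid = observed - forward_model(theta)
    return -0.5 * np.sum((resid / noise_sd) ** 2)

def log_posterior(theta):
    return log_prior(theta) + log_likelihood(theta)

# Random-walk Metropolis-Hastings.
theta = np.array([8.0, 20.0])
lp = log_posterior(theta)
samples = []
for _ in range(80_000):                      # matches the sample count in the abstract
    proposal = theta + rng.normal(0, [0.05, 1.0])
    lp_new = log_posterior(proposal)
    if np.log(rng.uniform()) < lp_new - lp:  # accept with prob min(1, ratio)
        theta, lp = proposal, lp_new
    samples.append(theta)

samples = np.array(samples)
print("posterior mean magnitude:", samples[:, 0].mean())
```

Working in log space keeps the accept/reject step numerically stable even when the likelihood values themselves underflow.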
312

Solving Partial Differential Equations With Neural Networks

Karlsson Faronius, Håkan January 2023 (has links)
In this thesis, three different approaches for solving partial differential equations with neural networks will be explored: Physics-Informed Neural Networks, Fourier Neural Operators, and the Deep Ritz method. Physics-Informed Neural Networks and the Deep Ritz method are unsupervised machine learning methods, while the Fourier Neural Operator is a supervised method. The Physics-Informed Neural Network is implemented on Burgers' equation, while the Fourier Neural Operator is implemented on Poisson's equation and Darcy's law, and the Deep Ritz method is applied to several variational problems. The Physics-Informed Neural Network is also used for the inverse problem: given some data on a solution, the neural network is trained to determine the underlying partial differential equation whose solution is given by the data. Apart from this, importance sampling is also implemented to accelerate the training of physics-informed neural networks. The contributions of this thesis are to implement a slightly different form of importance sampling on the physics-informed neural network, to show that the Deep Ritz method can be used for a larger class of variational problems than the original publication suggests, and to apply the Fourier Neural Operator to an application in geophysics involving Darcy's law where the coefficient factor is given by exponentiated two-dimensional pink noise.
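As a concrete illustration of the first approach, here is a minimal physics-informed neural network for the viscous Burgers' equation u_t + u u_x = nu u_xx with initial condition u(x, 0) = -sin(pi x) and zero boundary conditions. The network width, collocation-point counts, optimizer, and equal loss weighting are illustrative choices, not the settings used in the thesis.

```python
import torch

torch.manual_seed(0)
nu = 0.01 / torch.pi  # viscosity in u_t + u*u_x = nu*u_xx

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(xt):
    """PDE residual at collocation points xt = (x, t) via autograd."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x),
                               create_graph=True)[0][:, :1]
    return u_t + u * u_x - nu * u_xx

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    # Interior collocation points on (x, t) in [-1, 1] x [0, 1].
    xt = torch.rand(256, 2) * torch.tensor([2.0, 1.0]) - torch.tensor([1.0, 0.0])
    # Initial condition u(x, 0) = -sin(pi x) and boundary u(+/-1, t) = 0.
    x0 = torch.rand(64, 1) * 2 - 1
    ic_in = torch.cat([x0, torch.zeros_like(x0)], dim=1)
    bc_in = torch.cat([torch.sign(torch.rand(64, 1) - 0.5),
                       torch.rand(64, 1)], dim=1)

    loss = (pde_residual(xt).pow(2).mean()                      # physics loss
            + (net(ic_in) + torch.sin(torch.pi * x0)).pow(2).mean()
            + net(bc_in).pow(2).mean())
    opt.zero_grad(); loss.backward(); opt.step()

print("final loss:", loss.item())
```

The importance-sampling variant studied in the thesis would replace the uniform collocation draw with one biased toward points of large residual; the uniform draw above is the plain baseline.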
313

TIME-OF-FLIGHT NEUTRON CT FOR ISOTOPE DENSITY RECONSTRUCTION AND CONE-BEAM CT SEPARABLE MODELS

Thilo Balke (15348532) 26 April 2023 (has links)
There is a great need for accurate image reconstruction in the context of non-destructive evaluation. Major challenges include the ever-increasing necessity for high-resolution reconstruction with limited scan and reconstruction time, and thus fewer and noisier measurements. In this thesis, we leverage advanced Bayesian modeling of the physical measurement process and probabilistic prior information on the image distribution in order to yield higher image quality despite limited measurement time. We demonstrate efficient computational performance in several ways: through more efficient memory access, optimized parametrization of the system model, and multi-pixel parallelization. We demonstrate that by building high-fidelity forward models we can generate quantitatively reliable reconstructions despite very limited measurement data.

In the first chapter, we introduce an algorithm for estimating isotopic densities from neutron time-of-flight imaging data. Energy-resolved neutron imaging (ERNI) is an advanced neutron radiography technique capable of non-destructively extracting spatial isotopic information within a given material. Energy-dependent radiography image sequences can be created by utilizing neutron time-of-flight techniques. In combination with uniquely characteristic isotopic neutron cross-section spectra, isotopic areal densities can be determined on a per-pixel basis, resulting in a set of areal density images for each isotope present in the sample. By performing ERNI measurements over several rotational views, an isotope-decomposed 3D computed tomography is possible. We demonstrate a method involving robust and automated background estimation based on a linear programming formulation. The extremely high noise due to low-count measurements is overcome using a sparse coding approach. It allows for a significant computation-time improvement, from weeks to a few hours compared to existing neutron evaluation tools, enabling at the present stage a semi-quantitative, user-friendly routine application.

In the second chapter, we introduce the TRINIDI algorithm, a more refined algorithm for the same problem. Accurate reconstruction of 2D and 3D isotope densities is a desired capability with great potential impact in applications such as evaluation and development of next-generation nuclear fuels. Neutron time-of-flight (TOF) resonance imaging offers a potential approach by exploiting the characteristic neutron absorption spectra of each isotope. However, it is a major challenge to compute quantitatively accurate images due to a variety of confounding effects such as severe Poisson noise, background scatter, beam non-uniformity, absorption non-linearity, and extended source pulse duration. We present the TRINIDI algorithm, which is based on a two-step process in which we first estimate the neutron flux and background counts, and then reconstruct the areal densities of each isotope and pixel. Both components are based on the inversion of a forward model that accounts for the highly non-linear absorption, energy-dependent emission profile, and Poisson noise, while also modeling the substantial spatio-temporal variation of the background and flux. To do this, we formulate the non-linear inverse problem as two optimization problems that are solved in sequence. We demonstrate on both synthetic and measured data that TRINIDI can reconstruct quantitatively accurate 2D views of isotopic areal density that can then be reconstructed into quantitatively accurate 3D volumes of isotopic volumetric density.

In the third chapter, we introduce a separable forward model for cone-beam computed tomography (CT) that enables efficient computation of a Bayesian model-based reconstruction. Cone-beam CT is an attractive tool for many kinds of non-destructive evaluation (NDE). Model-based iterative reconstruction (MBIR) has been shown to improve reconstruction quality and reduce scan time. However, the computational burden and storage of the system matrix are challenging. We present a separable representation of the system matrix that can be completely stored in memory and accessed cache-efficiently. This is done by quantizing the voxel position for one of the separable subproblems. A parallelized algorithm, which we refer to as the zipline update, is presented that speeds up the computation of the solution by about 50 to 100 times on 20 cores by updating groups of voxels together. The quality of the reconstruction and algorithmic scalability are demonstrated on real cone-beam CT data from an NDE application. We show that the reconstruction can be done from a sparse set of projection views while reducing artifacts visible in the conventional filtered back projection (FBP) reconstruction. We present qualitative results using a Markov Random Field (MRF) prior and a Plug-and-Play denoiser.
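The per-pixel density estimation that both chapters build on can be illustrated with a toy transmission model. The sketch below inverts a Beer-Lambert forward model for one pixel, using two made-up cross-section spectra and a linearized nonnegative least-squares fit; TRINIDI's actual forward model additionally estimates flux and background and handles the Poisson statistics directly.

```python
import numpy as np
from scipy.optimize import nnls

# Toy energy grid and made-up neutron cross-section spectra for two isotopes;
# real spectra have sharp resonances that make the isotopes separable.
E = np.linspace(1.0, 100.0, 200)
sigma = np.stack([
    1.0 + 5.0 * np.exp(-0.5 * ((E - 30) / 2.0) ** 2),   # isotope 1 resonance
    0.5 + 4.0 * np.exp(-0.5 * ((E - 60) / 3.0) ** 2),   # isotope 2 resonance
])  # shape (2, n_energies)

true_density = np.array([0.8, 0.3])   # areal densities for one pixel

# Beer-Lambert transmission: T(E) = exp(-sum_i sigma_i(E) * rho_i),
# observed through Poisson counting statistics.
flux = 1e4
counts = np.random.default_rng(1).poisson(flux * np.exp(-sigma.T @ true_density))

# Linearize: -log(counts/flux) = sigma.T @ rho, then solve with a
# nonnegativity constraint; a drastic simplification of TRINIDI's
# two-step flux/background estimation plus density reconstruction.
attenuation = -np.log(np.clip(counts / flux, 1e-8, None))
rho_hat, _ = nnls(sigma.T, attenuation)
print("estimated areal densities:", rho_hat)   # close to [0.8, 0.3]
```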
314

Innovative Method for Rapid Determination of Shelf-Life in Packaged Food and Beverages

Anbuhkani Muniandy (5930762) 01 December 2022 (has links)
Temperature is the common accelerant used for shelf-life determination of shelf-stable food because it is easy to use and models such as Q10 and Arrhenius are available for shelf-life prediction. The accelerated shelf-life test (ASLT) still requires months of analysis time, as it uses only temperature as the accelerant. Oxygen pressure as an accelerant has not been given much attention, even though many studies have shown the negative impact of oxygen on the shelf-life of food. An effective analysis method with multiple accelerants has the potential to enable rapid shelf-life determination. Hence, this research focused on the invention of a rapid method, named the Ultra-Accelerated Shelf-Life Test (UASLT), that combines oxygen pressure and temperature as accelerants, and on the development of shelf-life prediction models. The study hypothesized that the application of elevated oxygen pressure and elevated temperature (40 °C) increases the amount of oxygen diffusing into packaged food, which leads to rapid degradation of nutrients and further reduces the overall shelf-life analysis time compared to the ASLT method. A custom-made high-pressure chamber with a 100% oxygen environment at 40 °C was designed and developed as part of the UASLT method. The impact of applied oxygen pressure on oxygen diffusivity in polymeric food packaging materials was investigated on three packages with different oxygen permeability properties. The application of oxygen pressure significantly increased the rate of oxygen transfer and the oxygen diffusivity values for all packaging materials compared to counterparts not exposed to the pressure. A shelf-stable model food fortified with vitamins A, B1, C, and D3 was developed to investigate the effectiveness of the UASLT method in degrading the quality indicators in the model food in a polyethylene terephthalate (PET) container. PET was chosen as it was the most permeable to oxygen. Model food was also subjected to ASLT conditions at the same temperature without additional pressure, and to room temperature (control). Degradations of 27.1 ± 1.9%, 13.9 ± 2.1%, 35.8 ± 1.0%, and 35.4 ± 0.7% were seen in vitamins A, B1, C, and D3, respectively, in just 50 days. Slower degradation was observed in samples kept under the ASLT conditions for 105 days, reaching 24.0 ± 2.0%, 4.9 ± 6.1%, 32.0 ± 3.1%, and 25.1 ± 1.5% for vitamins A, B1, C, and D3, respectively. The control samples, studied for 210 days, showed 14.9 ± 5.0%, 2.0 ± 2.2%, 13.8 ± 2.2%, and 10.6 ± 0.8% degradation in vitamins A, B1, C, and D3, respectively. The increases in ΔE values due to browning in samples kept at the UASLT, ASLT, and control conditions were 11.67 ± 0.09, 7.49 ± 0.19, and 2.51 ± 0.11, respectively. The degradation of vitamins A, C, and D3 was analyzed using first-order kinetics, and the rate constant k (day⁻¹) was used to develop four prediction models. Vitamin B1 values were omitted from the kinetic analysis due to insufficient degradation. Two temperature-oxygen diffusion models were developed by correlating oxygen diffusivity and k. Comparisons were made with the temperature-based Q10 and Arrhenius models. The predicted k values across the models were in the range of 0.051-0.054 day⁻¹, 0.080-0.088 day⁻¹, and 0.048-0.051 day⁻¹ for vitamins A, C, and D3, respectively. The Q10 values estimated for vitamins A, C, and D3 were 2.16, 2.63, and 2.62, respectively. The predicted shelf-life for vitamins A, C, and D3 to undergo a 25% reduction was in the range of 404-551, 321-353, and 529-583 days across all models, respectively. The shelf-life predicted from the temperature-oxygen diffusion models was close to that from the temperature models, indicating the potential to be paired with the UASLT method. Experimental verification is needed to analyze the errors in the prediction. The addition of oxygen pressure further reduced the shelf-life analysis time by 50% compared to ASLT. Elevated external oxygen pressure can be used as an accelerant along with elevated temperature (40 °C) for rapid shelf-life testing of packaged foods. This novel approach has potential application in the food industry for faster shelf-life analysis of food.
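The prediction step rests on first-order kinetics combined with a Q10 temperature correction. The sketch below shows that machinery with illustrative numbers only; the rate constant, Q10 value, and temperatures are placeholders, not the thesis's fitted values.

```python
import numpy as np

def shelf_life_days(k_per_day, retained=0.75):
    """First-order kinetics, C(t) = C0 * exp(-k t): time until 25% loss."""
    return -np.log(retained) / k_per_day

def q10_scale(k_ref, q10, t_ref_c, t_target_c):
    """Q10 model: the rate constant changes by a factor of Q10 per 10 degC."""
    return k_ref * q10 ** ((t_target_c - t_ref_c) / 10.0)

# Illustrative numbers (not the thesis's): a rate constant measured under
# accelerated storage at 40 degC, extrapolated to room temperature (22 degC).
k_accelerated = 0.02      # day^-1 at 40 degC (illustrative)
q10 = 2.2                 # a typical magnitude for vitamin degradation
k_room = q10_scale(k_accelerated, q10, t_ref_c=40.0, t_target_c=22.0)

print(f"k at 22 degC: {k_room:.4f} per day")
print(f"predicted shelf life (25% loss): {shelf_life_days(k_room):.0f} days")
```

The UASLT idea is to estimate k_accelerated faster by adding oxygen pressure as a second accelerant; the extrapolation step itself is unchanged.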
315

Arnoldi-type Methods for the Solution of Linear Discrete Ill-posed Problems

Onisk, Lucas William 11 October 2022 (has links)
No description available.
316

ADVANCES IN REAL-TIME QUANTITATIVE NEAR-FIELD MICROWAVE IMAGING FOR BREAST CANCER DETECTION / QUANTITATIVE MICROWAVE IMAGING FOR BREAST CANCER DETECTION

Daniel, Tajik January 2022 (has links)
Microwave imaging finds numerous applications involving optically obscured targets. One particular area is breast cancer detection, since microwave technology promises fast, low-cost image reconstruction without the harmful radiation typical of X-ray mammography. However, the success of microwave imaging is hindered by a critical issue: the complex nature of near-field electromagnetic scattering in tissue. To overcome this, specialized image reconstruction algorithms alongside sensitive measurement hardware are required. In this work, the real-time near-field microwave imaging algorithms known as quantitative microwave holography and scattered power mapping are explored. They are experimentally demonstrated to identify potential tumor regions in tissue phantoms. Alongside this development, quality control techniques for evaluating microwave hardware are also described. Two new methods for improving image reconstruction quality are also presented. First, a novel technique that combines two commonly used mathematical approximations of scattering (the Born and Rytov approximations) is demonstrated, yielding improved image reconstructions due to the complementary nature of the approximations. Second, a range migration algorithm is introduced that enables near-field refocusing of a point-spread function (PSF), which is critical for algorithms that rely on measured PSFs to perform image reconstruction. / Thesis / Doctor of Philosophy (PhD) / Breast cancer remains one of the leading causes of cancer-related death among women in Canada. Though X-ray mammography remains the gold standard for regular breast cancer screening, its use of harmful radiation, painful breast compression, and radiologist-dependent evaluation remain detracting factors. Over the past 40 years, researchers have been exploring the use of microwave technology in place of X-ray mammography. Microwave radiation, used at power levels similar to that of a cellphone, has been demonstrated successfully in simulations of breast scans. However, in experimental evaluations with breast phantoms, the complex scattering path of the radiation through tissue complicates image reconstruction. In this thesis, methods of improving the accuracy of microwave algorithms are explored, alongside new breast phantom structures that closely replicate the electrical properties of tissue. The results of this work demonstrate the flexibility of microwave imaging, and the challenges that must still be overcome before it can see clinical use.
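The Born-Rytov combination mentioned above builds on a standard relationship between the two linearizations of scattering. The sketch below shows how scattered-field data can be cast in Born and Rytov form and blended with a simple fixed weight; the actual combination strategy in the thesis may differ, and the field values here are synthetic.

```python
import numpy as np

# Given incident and total fields at a receiver, the Born datum is the
# scattered field u_tot - u_inc, while the Rytov datum is
# u_inc * ln(u_tot / u_inc). Born is accurate for weak scattering, Rytov
# for smooth phase accumulation; a weighted blend is one simple way to
# exploit their complementary validity regimes. The fixed weight below is
# illustrative only.
def born_datum(u_inc, u_tot):
    return u_tot - u_inc

def rytov_datum(u_inc, u_tot):
    return u_inc * np.log(u_tot / u_inc)   # complex log: amplitude + phase

def blended_datum(u_inc, u_tot, w=0.5):
    return w * born_datum(u_inc, u_tot) + (1 - w) * rytov_datum(u_inc, u_tot)

# Example with synthetic complex field samples at three receivers.
u_inc = np.array([1.0 + 0.0j, 0.8 - 0.2j, 0.6 + 0.4j])
u_tot = u_inc * np.exp(1j * 0.2) * 0.95    # small phase delay + attenuation
print(blended_datum(u_inc, u_tot))
```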
317

Bregman Operator Splitting with Variable Stepsize for Total Generalized Variation Based Multi-Channel MRI Reconstruction

Cowen, Benjamin E. 02 September 2015 (has links)
No description available.
318

Inversion of Markowitz Portfolio Optimization to Evaluate Risk

Persson, Axel, Li, Ran January 2021 (has links)
This project investigates the applicability of the original version of Markowitz's mean-variance model for portfolio optimization to real-world, modern, actively managed portfolios. The method measures the mean-variance model's capability to accurately capture the riskiness of given portfolios by inverting the mathematical formulation of the model. The inversion of the model is carried out both for fabricated data and for real-world data, and shows that in the case of real-world data the model lacks a certain accuracy for estimating risk averseness. The method has certain errors which originate both from the proposed estimation methods for the input variables and from invalid assumptions about investors. / Bachelor's thesis in electrical engineering 2021, KTH, Stockholm
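A minimal version of this inversion can be written down for the unconstrained mean-variance model, where the optimal weights satisfy w* = (1/λ) Σ⁻¹ μ: given observed weights, the implied risk aversion λ follows from a one-parameter least-squares fit. The data below are fabricated and the estimation choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated inputs: expected returns mu and covariance Sigma for 4 assets.
mu = np.array([0.08, 0.05, 0.10, 0.03])
A = rng.normal(size=(4, 4))
Sigma = A @ A.T / 10 + np.eye(4) * 0.01    # symmetric positive definite

# Forward model (unconstrained mean-variance): w* = (1/lam) * Sigma^-1 mu.
lam_true = 3.0
w_obs = np.linalg.solve(Sigma, mu) / lam_true

# Inversion: the model predicts Sigma @ w = mu / lam, so lam solves a
# one-parameter least-squares problem in c = 1/lam.
y = Sigma @ w_obs
lam_hat = (mu @ mu) / (mu @ y)    # c = (mu.y)/(mu.mu), lam = 1/c
print("implied risk aversion:", lam_hat)   # recovers 3.0 on fabricated data
```

On real portfolios the fit is no longer exact, and the size of the residual in this least-squares problem is one way to quantify how poorly the model captures the portfolio's riskiness.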
319

Early stopping for iterative estimation procedures

Stankewitz, Bernhard 07 June 2024 (has links)
This dissertation contributes to the growing literature on early stopping in modern statistics and machine learning. We consider early stopping from the perspective of both implicit regularization and adaptive estimation. From the former perspective, analogous to an explicit regularization method, halting an iterative estimation procedure reduces the stochastic error/variance of the final estimator at the cost of some bias. In this area, we present a novel analysis of gradient descent learning for convex loss functions in an abstract Hilbert space setting, which combines techniques from inexact optimization and concentration of measure. From the perspective of adaptive estimation, iterative estimation procedures have to be combined with a data-driven choice m of the effectively selected iteration in order to avoid under- as well as over-fitting. In this area, we present two contributions: For truncated SVD estimation in statistical inverse problems, we examine under what circumstances optimal adaptation can be achieved by early stopping at the first iteration at which the smoothed residuals are smaller than a critical value. For L2-boosting via orthogonal matching pursuit (OMP) in high-dimensional linear models, we prove that sequential early stopping rules can preserve statistical optimality in terms of a general oracle inequality for the empirical risk and recently established optimal convergence rates for the population risk. The proofs involve a subtle pointwise analysis of a stochastic bias-variance decomposition induced by the greedy algorithm underlying OMP. Simulation studies show that, at substantially reduced computational cost, sequential methods can match the performance of standard algorithms such as the cross-validated Lasso or non-sequential model selection via a high-dimensional Akaike criterion.
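A minimal sketch of a residual-based sequential stopping rule, in the spirit of the first contribution: Landweber iteration (gradient descent on the least-squares objective) is halted the first time the residual norm falls below a critical value proportional to the noise level. The test problem, critical-value constant, and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-posed toy problem: y = A x + noise, singular values decaying like 1/j.
n = 50
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
j = np.arange(1, n + 1)
A = U @ np.diag(1.0 / j) @ V.T
x_true = V @ (1.0 / j)                  # smooth truth in the singular basis
sigma = 1e-2
y = A @ x_true + sigma * rng.normal(size=n)

# Landweber iteration (gradient descent on 0.5*||y - A x||^2), stopped the
# first time the residual norm drops below a critical value kappa set
# proportional to the expected noise norm (a discrepancy-type rule).
step = 1.0 / np.linalg.norm(A, 2) ** 2
kappa = 1.25 * np.sqrt(n) * sigma
x = np.zeros(n)
for m in range(50_000):
    r = y - A @ x
    if np.linalg.norm(r) < kappa:       # sequential early stopping
        break
    x += step * A.T @ r

print(f"stopped at iteration {m}: error {np.linalg.norm(x - x_true):.3f}")
```

Running the iteration far past the stopping time would drive the residual toward zero but inflate the variance of the estimate, which is exactly the over-fitting the rule is designed to avoid.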
320

Computational Advancements for Solving Large-scale Inverse Problems

Cho, Taewon 10 June 2021 (has links)
For many scientific applications, inverse problems have played a key role in solving important problems by enabling researchers to estimate the desired parameters of a system from observed measurements. For example, large-scale inverse problems arise in many global problems and medical imaging problems such as greenhouse gas tracking and computed tomography reconstruction. This dissertation describes advancements in computational tools for solving large-scale inverse problems and for uncertainty quantification. Oftentimes, inverse problems are ill-posed and large-scale. Iterative projection methods have dramatically reduced the computational costs of solving large-scale inverse problems, and regularization methods have been critical in obtaining stable estimations by applying prior information on the unknowns via Bayesian inference. By combining iterative projection methods and variational regularization methods, hybrid projection approaches, in particular generalized hybrid methods, create a powerful framework that can maximize the benefits of each method. In this dissertation, we describe various advancements and extensions of hybrid projection methods that we developed to address three recent open problems. First, we develop hybrid projection methods that incorporate mixed Gaussian priors, where we seek more sophisticated estimations in which the unknowns can be treated as random variables from a mixture of distributions. Second, we describe hybrid projection methods for mean estimation in a hierarchical Bayesian approach. By including more than one prior covariance matrix (e.g., mixed Gaussian priors) or estimating unknowns and hyper-parameters simultaneously (e.g., hierarchical Gaussian priors), we show that better estimations can be obtained. Third, we develop computational tools for a respirometry system that incorporate various regularization methods for both linear and nonlinear respirometry inversions. For the nonlinear systems, blind deconvolution methods are developed, and prior knowledge of the nonlinear parameters is used to reduce the dimension of the nonlinear systems. Simulated and real-data experiments on the respirometry problems are provided. This dissertation provides advanced tools for computational inversion and uncertainty quantification. / Doctor of Philosophy / For many scientific applications, inverse problems have played a key role in solving important problems by enabling researchers to estimate the desired parameters of a system from observed measurements. For example, large-scale inverse problems arise in global problems such as greenhouse gas tracking, where the goal of estimating the amount of greenhouse gas added to or removed from the atmosphere becomes increasingly difficult. The number of observations has increased with improvements in measurement technologies (e.g., satellites). Therefore, the inverse problems become large-scale and computationally hard to solve. Another example of an inverse problem arises in tomography, where the goal is to examine materials deep underground (e.g., to look for gas or oil) or to reconstruct an image of the interior of the human body from exterior measurements (e.g., to look for tumors). For tomography applications, there are typically fewer measurements than unknowns, which results in non-unique solutions. In this dissertation, we treat the unknowns as random variables with prior probability distributions in order to compensate for the deficiency in measurements. We consider various additional assumptions on the prior distribution and develop efficient and robust numerical methods for solving inverse problems and for performing uncertainty quantification. We apply our developed methods to many numerical applications such as greenhouse gas tracking, seismic tomography, spherical tomography problems, and the estimation of CO2 production in living organisms.
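A minimal sketch of a hybrid projection method of the kind described: k steps of Golub-Kahan bidiagonalization project the problem onto a small Krylov subspace, and Tikhonov regularization is then applied to the projected problem, where it is cheap. The test problem, subspace size, and fixed regularization parameter are illustrative; the generalized hybrid methods in the dissertation select parameters adaptively and handle more general priors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed problem: Gaussian blurring matrix, smooth signal, noisy data.
n = 200
t = np.linspace(0, 1, n)
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.01 ** 2))
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(2 * np.pi * t) + t
b = A @ x_true + 1e-3 * rng.normal(size=n)

def hybrid_gk(A, b, k, lam):
    """k steps of Golub-Kahan bidiagonalization, then Tikhonov on the
    projected (k+1) x k problem: min ||B y - beta e1||^2 + lam^2 ||y||^2."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k))
    alphas = np.zeros(k); betas = np.zeros(k + 1)
    betas[0] = np.linalg.norm(b)
    U[:, 0] = b / betas[0]
    for j in range(k):
        v = A.T @ U[:, j] - (betas[j] * V[:, j - 1] if j > 0 else 0)
        v -= V[:, :j] @ (V[:, :j].T @ v)          # reorthogonalization
        alphas[j] = np.linalg.norm(v); V[:, j] = v / alphas[j]
        u = A @ V[:, j] - alphas[j] * U[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)  # reorthogonalization
        betas[j + 1] = np.linalg.norm(u); U[:, j + 1] = u / betas[j + 1]
    B = np.zeros((k + 1, k))
    B[np.arange(k), np.arange(k)] = alphas
    B[np.arange(1, k + 1), np.arange(k)] = betas[1:]
    rhs = np.zeros(k + 1); rhs[0] = betas[0]
    y = np.linalg.solve(B.T @ B + lam ** 2 * np.eye(k), B.T @ rhs)
    return V @ y                                  # lift back to full space

x_hat = hybrid_gk(A, b, k=20, lam=1e-2)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The appeal of the hybrid framework is visible even in this sketch: regularizing the small projected problem decouples the choice of lam from the expensive large-scale iteration, so the parameter can be tuned (or selected adaptively) at negligible cost.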
