  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Row-Action Methods for Massive Inverse Problems

Slagel, Joseph Tanner 19 June 2019 (has links)
Numerous scientific applications have seen the rise of massive inverse problems, where there is too much data to implement an all-at-once strategy to compute a solution. Additionally, tools for regularizing ill-posed inverse problems become infeasible when the problem is too large. This thesis focuses on the development of row-action methods, which can be used to iteratively solve inverse problems when it is not possible to access the entire data set or forward model simultaneously. We investigate these techniques for linear inverse problems and for separable, nonlinear inverse problems, where the objective function is nonlinear in one set of parameters and linear in another. For the linear problem, we perform a convergence analysis of these methods, which shows favorable asymptotic and initial convergence properties, as well as a step-size-dependent trade-off between convergence rate and precision of iterates. These row-action methods can be interpreted as stochastic Newton and stochastic quasi-Newton approaches on a reformulation of the least squares problem, and they can be analyzed as limited-memory variants of the recursive least squares algorithm. For ill-posed problems, we introduce sampled regularization parameter selection techniques, which include sampled variants of the discrepancy principle, the unbiased predictive risk estimator, and generalized cross-validation. We demonstrate the effectiveness of these methods using examples from super-resolution imaging, tomography reconstruction, and image classification. / Doctor of Philosophy / Numerous scientific problems have seen the rise of massive data sets. One example is super-resolution, where many low-resolution images are used to construct a high-resolution image; another is 3-D medical imaging, where a 3-D image of an object with hundreds of millions of voxels is reconstructed from x-rays moving through that object.
This work focuses on row-action methods that numerically solve these problems by repeatedly using smaller samples of the data, avoiding the computational burden of using the entire data set at once. When data sets contain measurement errors, the solution can become contaminated with noise. While methods exist to handle this issue, they are no longer feasible once the data set becomes massive. This dissertation develops techniques to keep the solution from becoming contaminated with noise even when the data set is immense. The methods developed in this work are applied to numerous scientific applications including super-resolution imaging, tomography, and image classification.
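As a minimal illustration of the row-action idea described above, here is a hedged numpy sketch of a randomized Kaczmarz iteration for a linear least-squares problem. The toy matrix, step size, and iteration count are illustrative assumptions, not the thesis's actual sampled methods.

```python
import numpy as np

def row_action_lsq(A, b, n_iter=5000, step=1.0, seed=0):
    """Randomized row-action (Kaczmarz-type) iteration for min ||Ax - b||^2.

    Each step touches a single sampled row, so the full matrix never has
    to be accessed at once, the defining trait of row-action methods.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        i = rng.integers(m)              # sample one equation (one data point)
        a = A[i]
        r = b[i] - a @ x                 # residual of the sampled equation
        x += step * r * a / (a @ a)      # project toward its hyperplane
    return x

# Consistent toy system: the iterates converge to the least-squares solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
x = row_action_lsq(A, b)
```

For a consistent system the iterates converge to the exact solution; with noisy data the step size trades convergence speed against precision of the iterates, mirroring the trade-off discussed in the abstract.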
62

Contributions to regularization theory and practice of certain nonlinear inverse problems

Hofmann, Christopher 23 December 2020 (has links)
The present thesis addresses both theoretical and numerical aspects of the treatment of nonlinear inverse problems. The first part considers Tikhonov regularization for nonlinear ill-posed operator equations in Hilbert scales with oversmoothing penalties. Sufficient as well as necessary conditions for establishing convergence are introduced, and convergence rate results are given for various parameter choice rules under a two-sided nonlinearity constraint. Ultimately, both a posteriori and certain a priori parameter choice rules lead to identical convergence rates. The theoretical results are supported and augmented by extensive numerical case studies. In particular, it is shown that the localization of the above-mentioned nonlinearity constraint is not trivial: incorrect localization will prevent convergence of the regularized solutions to the exact solution. The second part of the thesis considers two open problems in inverse option pricing and electrical impedance tomography. While regularization through discretization is sufficient to overcome the ill-posedness of the latter, the former requires a more sophisticated approach. It is shown that the recovery of time-dependent volatility and interest rate functions from observed option prices is everywhere locally ill-posed. This motivates Tikhonov-type or variational regularization with two parameters and penalty terms to recover these functions simultaneously. Two parameter choice rules, one using the L-hypersurface and one combining the L-curve with quasi-optimality, are introduced. The results are again supported by extensive numerical case studies.
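The quasi-optimality rule mentioned above can be sketched for the simpler single-parameter Tikhonov case: pick the parameter on a geometric grid where consecutive regularized solutions change least. The Gaussian-blur test problem, noise level, and grid below are illustrative assumptions, not the thesis's setting.

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Tikhonov-regularized solution of Ax = b via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def quasi_optimality(A, b, alphas):
    """Choose the alpha on a geometric grid that minimizes the change
    between consecutive regularized solutions."""
    xs = [tikhonov(A, b, a) for a in alphas]
    diffs = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    k = int(np.argmin(diffs))
    return alphas[k], xs[k]

# Ill-posed toy problem: a discretized Gaussian blur plus additive noise.
rng = np.random.default_rng(0)
n = 50
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(2 * np.pi * t / n)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

alphas = np.logspace(-8, 0, 30)
alpha_qo, x_reg = quasi_optimality(A, b, alphas)
```

The appeal of such heuristic rules is that they need no knowledge of the noise level; the L-curve and L-hypersurface rules of the thesis serve the same purpose for one and two parameters, respectively.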
63

Maximum entropy regularization for calibrating a time-dependent volatility function

Hofmann, Bernd, Krämer, Romy 26 August 2004 (has links)
We investigate the applicability of the method of maximum entropy regularization (MER), including convergence and convergence rates of regularized solutions, to the specific inverse problem (SIP) of calibrating a purely time-dependent volatility function. In this context, we extend the results of [16] and [17] in some detail. Due to the explicit structure of the forward operator, based on a generalized Black-Scholes formula, the ill-posed character of the nonlinear inverse problem (SIP) can be verified. Numerical case studies illustrate the opportunities and limitations of MER versus Tikhonov regularization (TR) for smooth solutions and solutions with a sharp peak.
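To make the entropy penalty concrete, here is a hedged numpy sketch of minimizing an entropy-regularized least-squares functional by projected gradient descent. The toy linear operator, positive prior, and step size are illustrative assumptions and stand in for the option-calibration setting of the paper; the entropy term's built-in positivity is what distinguishes MER from TR.

```python
import numpy as np

def mer_objective(A, b, x, x0, alpha):
    """Least-squares misfit plus the entropy distance of x to the prior x0."""
    entropy = np.sum(x * np.log(x / x0) - x + x0)
    return float(np.linalg.norm(A @ x - b) ** 2 + alpha * entropy)

def mer_descent(A, b, x0, alpha=0.1, step=2e-3, n_iter=2000):
    """Projected gradient descent on the MER functional; the clip keeps
    the iterates strictly positive, as the entropy term requires."""
    x = x0.copy()
    for _ in range(n_iter):
        grad = 2 * A.T @ (A @ x - b) + alpha * np.log(x / x0)
        x = np.clip(x - step * grad, 1e-10, None)
    return x

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, (30, 10))
x_true = rng.uniform(0.5, 1.5, 10)   # positive, volatility-like parameters
b = A @ x_true                        # noise-free synthetic data
x0 = np.ones(10)                      # prior guess
x = mer_descent(A, b, x0)
```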
64

Discriminative object categorization with external semantic knowledge

Hwang, Sung Ju 25 September 2013 (has links)
Visual object category recognition is one of the most challenging problems in computer vision. Even assuming that we can obtain a near-perfect instance-level representation with the advances in visual input devices and low-level vision techniques, object categorization still remains a difficult problem because it requires drawing boundaries between instances in a continuous world, where the boundaries are solely defined by human conceptualization. Object categorization is essentially a perceptual process that takes place in a human-defined semantic space. In this semantic space, the categories reside not in isolation, but in relation to others. Some categories are similar, grouped, or co-occur, and some are not. However, despite this semantic nature of object categorization, most of today's automatic visual category recognition systems rely only on the category labels for training discriminative recognition with statistical machine learning techniques. In many cases, this can result in the recognition model being misled into learning incorrect associations between visual features and the semantic labels, essentially overfitting to training-set biases. This limits the model's prediction power when new test instances are given. Using semantic knowledge has great potential to benefit object category recognition. First, semantic knowledge can guide the training model to learn correct associations between visual features and the categories. Second, semantics provide much richer information beyond the membership information given by the labels, in the form of inter-category and category-attribute distances, relations, and structures. Finally, semantic knowledge scales well, as the number of inter-category relations grows with the number of categories.
My goal in this thesis is to learn discriminative models for categorization that leverage semantic knowledge for object recognition, with a special focus on the semantic relationships among different categories and concepts. To this end, I explore three semantic sources, namely attributes, taxonomies, and analogies, and I show how to incorporate them into the original discriminative model as a form of structural regularization. In particular, for each form of semantic knowledge I present a feature learning approach that defines a semantic embedding to support the object categorization task. The regularization penalizes models that deviate from the known structures according to the semantic knowledge provided. The first semantic source I explore is attributes, which are human-describable semantic characteristics of an instance. While existing work treated them as mid-level features that introduce no new information, I focus on their potential as a means to better guide the learning of object categories, by requiring the object category classifiers to share features with attribute classifiers in a multitask feature learning framework. This approach essentially discovers the common low-dimensional features that support predictions in both semantic spaces. Then, I move on to the semantic taxonomy, another valuable source of semantic knowledge. The merging and splitting criteria for the categories in a taxonomy are human-defined, and I aim to exploit this implicit semantic knowledge. Specifically, I propose a tree of metrics (ToM) that learns metrics capturing granularity-specific similarities at different nodes of a given semantic taxonomy, and uses a regularizer to isolate granularity-specific disjoint features. This approach captures the intuition that the features used for discriminating the parent class should differ from the features used for the children classes.
Such learned metrics can be used for hierarchical classification. The use of a single taxonomy can be limiting in that its structure is not optimal for hierarchical classification, and there may exist no single optimal semantic taxonomy that perfectly aligns with visual distributions. Thus, I next propose a way to overcome this limitation by leveraging multiple taxonomies as semantic sources, combining the complementary information acquired across multiple semantic views and granularities. This allows us, for example, to synthesize semantics from both 'Biological'- and 'Appearance'-based taxonomies when learning the visual features. Finally, moving beyond the pairwise similarities used in the previous two models, I exploit analogies, which encode the relational similarities between two related pairs of categories. Specifically, I use analogies to regularize a discriminatively learned semantic embedding space for categorization, such that the displacement between the two category embeddings is constrained to be the same for both category pairs of the analogy. Such a constraint allows a more confusable pair of categories to benefit from the clear separation in the matched pair of categories that shares the same relation. All of these methods are evaluated on challenging public datasets and are shown to effectively improve recognition accuracy over purely discriminative models, while also guiding the recognition to be more consistent with human semantic perception. Further, the applications of the proposed methods are not limited to visual object categorization in computer vision; they can be applied to any classification problem where there exists domain knowledge about the relationships or structures between the classes.
Possible applications of my methods outside the visual recognition domain include document classification in natural language processing, and gene-based animal or protein classification in computational biology.
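The analogy-based structural regularization described above can be sketched in a few lines of numpy: the penalty ties the displacement vector of one category pair to that of its analogous pair, and its gradient can be added to any discriminative loss. The embedding size and analogy list here are invented for illustration.

```python
import numpy as np

def analogy_penalty(E, analogies):
    """Sum of squared mismatches between displacement vectors of analogous
    category pairs; (a, b, c, d) encodes "a is to b as c is to d"."""
    return sum(
        float(np.linalg.norm((E[b] - E[a]) - (E[d] - E[c])) ** 2)
        for a, b, c, d in analogies
    )

def analogy_grad(E, analogies):
    """Gradient of the penalty with respect to the embedding matrix E."""
    G = np.zeros_like(E)
    for a, b, c, d in analogies:
        r = (E[b] - E[a]) - (E[d] - E[c])   # residual displacement
        G[b] += 2 * r
        G[a] -= 2 * r
        G[c] += 2 * r
        G[d] -= 2 * r
    return G

# Gradient steps on the penalty alone align the pair displacements; in the
# full model this term would be added to the discriminative training loss.
rng = np.random.default_rng(0)
E = rng.standard_normal((6, 4))             # 6 categories, 4-dim embeddings
analogies = [(0, 1, 2, 3), (2, 3, 4, 5)]
p0 = analogy_penalty(E, analogies)
for _ in range(300):
    E -= 0.05 * analogy_grad(E, analogies)
p1 = analogy_penalty(E, analogies)
```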
65

Beyond-mean-field corrections and effective interactions in the nuclear many-body problem

Moghrabi, Kassem 12 September 2013 (has links) (PDF)
Mean-field approaches successfully reproduce nuclear bulk properties like masses and radii within the Energy Density Functional (EDF) framework. However, complex correlations are missing in mean-field models and several observables related to single-particle and collective nuclear properties cannot be predicted accurately. The necessity to provide a precise description of the available data as well as reliable predictions in the exotic regions of the nuclear chart motivates the use of more sophisticated beyond-mean-field models. Correlations and higher-order corrections (beyond the leading mean-field order) are introduced. A crucial aspect in these calculations is the choice of the effective interaction to be used when one goes beyond the leading order (available effective interactions are commonly adjusted at the mean-field level). In the first part, we deal with the equation of state of nuclear matter evaluated up to the second order with the phenomenological Skyrme force. We analyze the ultraviolet divergence that is related to the zero range of the interaction and we introduce Skyrme-type regularized interactions that can be used at second order for matter. Cutoff regularization and dimensional regularization techniques are explored and applied. In the latter case, connections are naturally established between the EDF framework and some techniques employed in Effective Field Theories. In the second part, we check whether the regularized interactions introduced for nuclear matter can be employed also for finite nuclei. As an illustration, this analysis is performed within the particle-vibration model that represents an example of beyond-mean-field models where an ultraviolet divergence appears if zero-range forces are used. These first applications suggest several directions to be explored to finally provide regularized interactions that are specially tailored for beyond-mean-field calculations for finite nuclei.
Conclusions and perspectives are finally illustrated.
66

Numerical Study Of Regularization Methods For Elliptic Cauchy Problems

Gupta, Hari Shanker 05 1900 (has links) (PDF)
Cauchy problems for elliptic partial differential equations arise in many important applications, such as cardiography, nondestructive testing, heat transfer, and the sonic boom produced by a maneuvering aerofoil. Elliptic Cauchy problems are typically ill-posed, i.e., there may not be a solution for some Cauchy data, and even if a solution exists uniquely, it may not depend continuously on the Cauchy data. The ill-posedness causes numerical instability and makes classical numerical methods inappropriate for such problems. For Cauchy problems, research on uniqueness, stability, and efficient numerical methods is of significant interest to mathematicians. The main focus of this thesis is to develop numerical techniques for elliptic Cauchy problems. Elliptic Cauchy problems can be approached as data completion problems, i.e., from over-specified Cauchy data on an accessible part of the boundary, one can try to recover the missing data on the inaccessible part of the boundary. The Cauchy problem can then be solved by finding a solution to a well-posed boundary value problem for which the recovered data constitute a boundary condition on the inaccessible part of the boundary. In this thesis, we use a natural linearization approach to transform the linear Cauchy problem into a problem of solving a linear operator equation. We consider this operator in a weaker image space H−1, which differs from previous works, where the image space of the operator is usually taken to be L2. The lower smoothness of the image space makes the problem somewhat more ill-posed, but under such settings we can prove the compactness of the considered operator, and it also allows a relaxation of the assumption concerning noise. The numerical methods that can cope with these ill-posed operator equations are the so-called regularization methods. One prominent example of such regularization methods is Tikhonov regularization, which is frequently used in practice.
Tikhonov regularization can be considered as least-squares tracking of the data with a regularization term. In this thesis we discuss the possibility of improving the reconstruction accuracy of Tikhonov regularization by using an iterative modification, iterated Tikhonov regularization, in which the effect of the penalty term fades away as the iterations go on. In applying iterated Tikhonov regularization, we find that for severely ill-posed problems such as elliptic Cauchy problems, discretization has such a powerful influence on the accuracy of the regularized solution that desirable accuracy can be achieved only with a suitable discretization level. Thus, regularization by projection, commonly known as self-regularization, is also considered in this thesis. With this method, regularization is achieved only by discretization, along with an appropriate choice of the discretization level. For all regularization methods, the choice of an appropriate regularization parameter is a crucial issue. For this purpose, we propose the balancing principle, a recently introduced and powerful technique for choosing the regularization parameter. In applying this principle, a balance has to be struck between the components related to the convergence rate and to the stability in the accuracy estimates. The main advantage of the balancing principle is that it works adaptively to obtain an appropriate value of the regularization parameter, without any quantitative knowledge of the convergence rate or stability. The accuracy provided by this adaptive strategy is worse only by a constant factor than what one could achieve with known stability and convergence rates. We apply the balancing principle in both the iterated Tikhonov regularization and self-regularization methods to choose the proper regularization parameters.
In the thesis, we also investigate numerical techniques based on iterated Tikhonov regularization for nonlinear elliptic Cauchy problems. We consider two types of problems. In the first kind, the nonlinear problem can be transformed into a linear problem, while in the second kind, linearization of the nonlinear problem is not possible; for this we propose a special iterative method, which differs from methods such as Landweber iteration and Newton-type methods that are usually based on the calculation of the Fréchet derivative or the adjoint of the equation. Abundant examples are presented in the thesis, illustrating the performance of the proposed regularization methods as well as the balancing principle. At the same time, these examples can be viewed as support for the theoretical results achieved in this thesis. At the end of the thesis, we describe the sonic boom problem, where we first encountered the ill-posed nonlinear Cauchy problem. This is a very difficult problem, and hence we use it to motivate the model problems. These model problems are discussed one by one in the thesis in increasing order of difficulty, ending with the nonlinear problems in Chapter 5. The main results of the dissertation are communicated in the article [35].
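The iterated Tikhonov scheme discussed above admits a compact sketch: each step solves a Tikhonov problem penalizing the distance to the previous iterate, so the penalty's pull fades as iterations proceed. The toy full-rank least-squares problem and parameter values below are illustrative assumptions, not the elliptic Cauchy setting of the thesis.

```python
import numpy as np

def iterated_tikhonov(A, b, alpha, n_iter, x0=None):
    """Iterated Tikhonov regularization: each step solves
    min ||A x - b||^2 + alpha * ||x - x_prev||^2,
    i.e. x_next = (A^T A + alpha I)^{-1} (A^T b + alpha * x_prev)."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    K = A.T @ A + alpha * np.eye(n)
    for _ in range(n_iter):
        x = np.linalg.solve(K, A.T @ b + alpha * x)
    return x

# Noise-free toy problem: more iterations move the iterate closer to x_true,
# showing the fading influence of the penalty term.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 15))
x_true = rng.standard_normal(15)
b = A @ x_true
x1 = iterated_tikhonov(A, b, alpha=1.0, n_iter=1)
x20 = iterated_tikhonov(A, b, alpha=1.0, n_iter=20)
```

With noisy data the iteration count itself acts as a regularization parameter, which is where a choice rule such as the balancing principle would enter.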
67

Convergence rates for variational regularization of statistical inverse problems

Sprung, Benjamin 04 October 2019 (has links)
No description available.
68

Tikhonov regularization with oversmoothing penalties

Gerth, Daniel 21 December 2016 (has links)
In the last decade, l1-regularization became a powerful and popular tool for the regularization of inverse problems. While in the early years sparse solutions were the focus of research, more recently the case in which the coefficients of the exact solution decay sufficiently fast has also come under consideration. In this paper we seek to show that l1-regularization is applicable and leads to optimal convergence rates even when the exact solution does not belong to l1 but only to l2. This is a particular example of oversmoothing regularization, i.e., the penalty implies smoothness properties that the exact solution does not fulfill. We also make some statements on convergence in this general context.
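As a concrete toy instance of l1-regularization, here is a hedged numpy sketch of iterative soft-thresholding (ISTA) for the l1-penalized least-squares problem. The problem sizes, sparsity pattern, and penalty weight are illustrative assumptions; this is the standard sparse-recovery setting, not the oversmoothing l2-only case analyzed in the abstract.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t * ||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=1000):
    """Iterative soft-thresholding for min ||Ax - b||^2 + lam * ||x||_1."""
    L = 2 * np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = 2 * A.T @ (A @ x - b)           # gradient of the data-fit term
        x = soft_threshold(x - step * g, step * lam)
    return x

# Noise-free sparse-recovery toy problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [3.0, -2.0, 1.5]
b = A @ x_true
x = ista(A, b, lam=0.1)
```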
69

Practical Implementations Of The Active Set Method For Support Vector Machine Training With Semi-definite Kernels

Sentelle, Christopher 01 January 2014 (has links)
The Support Vector Machine (SVM) is a popular binary classification model due to its superior generalization performance, relative ease of use, and applicability of kernel methods. SVM training entails solving an associated quadratic program (QP) that presents significant challenges in terms of speed and memory constraints for very large datasets; therefore, research on numerical optimization techniques tailored to SVM training is vast. Slow training times are especially of concern when one considers that re-training is often necessary at several values of the model's regularization parameter, C, as well as associated kernel parameters. The active set method is suitable for solving the SVM problem and is in general ideal when the Hessian is dense and the solution is sparse, which is the case for the l1-loss SVM formulation. There has recently been renewed interest in the active set method as a technique for exploring the entire SVM regularization path; it has been shown to compute the SVM solution at all points along the regularization path (all values of C) in not much more time than it takes, on average, to perform training at a single value of C with traditional methods. Unfortunately, the majority of active set implementations used for SVM training require positive definite kernels, and those implementations that do allow semi-definite kernels tend to be complex and can exhibit instability and, worse, lack of convergence. This severely limits applicability, since it precludes the use of the linear kernel, can be an issue when duplicate data points exist, and doesn't allow the use of low-rank kernel approximations to improve tractability for large datasets. The difficulty, in the case of a semi-definite kernel, arises when a particular active set results in a singular KKT matrix (or the equality-constrained problem formed using the active set is semi-definite). Typically this is handled by explicitly detecting the rank of the KKT matrix.
Unfortunately, this adds significant complexity to the implementation; and, if care is not taken, numerical instability or, worse, failure to converge can result. This research shows that the singular KKT system can be avoided altogether with simple modifications to the active set method. The result is a practical, easy-to-implement active set method that does not need to explicitly detect the rank of the KKT matrix nor modify factorization or solution methods based upon the rank. Methods are given for both conventional SVM training and for computing the regularization path that are simple and numerically stable. First, an efficient revised simplex method is implemented for SVM training (SVM-RSQP) with semi-definite kernels and shown to outperform competing active set implementations in terms of training time, while performing on par with state-of-the-art SVM training algorithms such as SMO and SVMLight. Next, a new regularization path-following algorithm for semi-definite kernels (Simple SVMPath) is shown to be orders of magnitude faster, more accurate, and significantly less complex than competing methods, and does not require the use of external solvers. Theoretical analysis reveals new insights into the nature of the path-following algorithms. Finally, a method is given for computing the approximate regularization path and approximate kernel path using the warm-start capability of the proposed revised simplex method (SVM-RSQP), providing significant, orders-of-magnitude speed-ups relative to the traditional grid search where re-training is performed at each parameter value. Surprisingly, it is also shown that even when the solution for the entire path is not desired, computing the approximate path can serve as a speed-up mechanism for obtaining the solution at a single value.
New insights are given concerning the limiting behaviors of the regularization and kernel path as well as the use of low-rank kernel approximations.
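The warm-start idea behind the approximate regularization path can be sketched with a simple dual coordinate descent solver for the linear l1-loss SVM. This is the liblinear-style coordinate algorithm, not the thesis's revised simplex method, and the data and C grid are invented for illustration; the point is only that the dual variables from one value of C initialize the solve at the next.

```python
import numpy as np

def svm_dual_cd(X, y, C, alpha=None, n_epochs=50):
    """Dual coordinate descent for the linear l1-loss SVM (hinge loss, no bias).

    Passing in `alpha` from a previous solve warm-starts the solver, which
    is the mechanism exploited by approximate path following."""
    m, n = X.shape
    alpha = np.zeros(m) if alpha is None else np.clip(alpha, 0.0, C)
    w = X.T @ (alpha * y)                  # primal weights implied by alpha
    Q = np.einsum("ij,ij->i", X, X)        # diagonal of the Gram matrix
    for _ in range(n_epochs):
        for i in range(m):
            G = y[i] * (X[i] @ w) - 1.0    # partial gradient of the dual
            a_new = min(max(alpha[i] - G / Q[i], 0.0), C)
            w += (a_new - alpha[i]) * y[i] * X[i]
            alpha[i] = a_new
    return w, alpha

# Warm-started sweep over a C grid: a crude approximate regularization path.
rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((30, 2)) + 2.0,
               rng.standard_normal((30, 2)) - 2.0])
y = np.hstack([np.ones(30), -np.ones(30)])
alpha = None
for C in [0.01, 0.1, 1.0]:
    w, alpha = svm_dual_cd(X, y, C, alpha=alpha)
acc = float(np.mean(np.sign(X @ w) == y))
```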
70

Deep Learning-based Regularizers for Cone Beam Computed Tomography Reconstruction / Djupinlärningsbaserade regulariserare för rekonstruktion inom volymtomografi

Syed, Sabina, Stenberg, Josefin January 2023 (has links)
Cone Beam Computed Tomography is a technology to visualize the 3D interior anatomy of a patient. It is important for image-guided radiation therapy in cancer treatment. During a scan, iterative methods are often used for the image reconstruction step. A key challenge is the ill-posedness of the resulting inversion problem, which causes the images to become noisy. To combat this, regularizers can be introduced, which help stabilize the problem. This thesis focuses on Adversarial Convex Regularization, which uses deep learning to regularize the scans toward a target image quality. It can be interpreted in a Bayesian setting by letting the regularizer be the prior, approximating the likelihood with the measurement error, and obtaining the patient image through the maximum-a-posteriori estimate. Adversarial Convex Regularization has previously shown promising results in conventional Computed Tomography, and this study aims to investigate its potential in Cone Beam Computed Tomography. Three different learned regularization methods have been developed, all based on Convolutional Neural Network architectures. One model is based on three-dimensional convolutional layers, while the remaining two rely on 2D layers. These two are later adapted for 3D reconstruction, either by stacking a 2D model or by averaging 2D models trained in three orthogonal planes. All neural networks are trained on simulated male pelvis data provided by Elekta. The 3D convolutional neural network model has proven to be heavily memory-consuming, while not performing better than current reconstruction methods with respect to image quality. The two architectures based on merging multiple 2D neural network gradients for 3D reconstruction are novel contributions that avoid memory issues.
These two models outperform current methods in terms of multiple image quality metrics, such as Peak Signal-to-Noise Ratio and Structural Similarity Index Measure, and they also generalize well for real Cone Beam Computed Tomography data. Additionally, the architecture based on a weighted average of 2D neural networks is able to capture spatial interactions to a larger extent and is adjustable to favor the plane that best shows the field of interest, a possibly desirable feature in medical practice.
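The maximum-a-posteriori view described in the abstract can be sketched in numpy. Here the learned convex regularizer is replaced by a fixed hand-crafted convex surrogate (a smoothed total-variation penalty), purely as an illustrative stand-in, and a small random 1D forward operator is invented for the example; the MAP estimate is the minimizer of the data fit plus the regularizer acting as the negative log-prior.

```python
import numpy as np

DELTA = 0.05  # smoothing width of the Huber-type penalty

def huber_tv_grad(x):
    """Gradient of a convex smoothed total-variation penalty, a hand-crafted
    stand-in for the gradient of a learned convex regularizer."""
    d = np.diff(x)
    g = np.where(np.abs(d) <= DELTA, d / DELTA, np.sign(d))
    out = np.zeros_like(x)
    out[:-1] -= g
    out[1:] += g
    return out

def map_reconstruct(A, y, lam=0.05, n_iter=3000):
    """Gradient descent on ||A x - y||^2 + lam * R(x), i.e. the
    maximum-a-posteriori estimate under the regularizer-as-prior view."""
    L = 2 * np.linalg.norm(A, 2) ** 2 + 4 * lam / DELTA  # Lipschitz bound
    step = 0.9 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x -= step * (2 * A.T @ (A @ x - y) + lam * huber_tv_grad(x))
    return x

# Piecewise-constant 1D "image" observed through a random forward operator.
rng = np.random.default_rng(0)
n = 40
A = rng.standard_normal((60, n)) / np.sqrt(60)
x_true = np.concatenate([np.zeros(20), np.ones(20)])
y = A @ x_true + 0.02 * rng.standard_normal(60)
x = map_reconstruct(A, y)
```

In the thesis's setting the gradient of the learned convex network would replace `huber_tv_grad`, and the forward operator would be the cone-beam projection.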
