21

An Inverse Source Problem for a One-dimensional Wave Equation: An Observer-Based Approach

Asiri, Sharefa M. 25 May 2013 (has links)
Observers are well known in the theory of dynamical systems. They are used to estimate the states of a system from some measurements. Recently, however, observers have also been developed to estimate unknowns in systems governed by partial differential equations. Our aim is to design an observer to solve an inverse source problem for a one-dimensional wave equation. First, the problem is discretized in both space and time; then an adaptive observer based on partial field measurements (i.e., measurements taken from the solution of the wave equation) is applied to estimate both the states and the source. We examine the effectiveness of this observer in both the noise-free and noisy cases, providing numerical simulations for each. Finally, we compare the performance of the observer approach with the Tikhonov regularization approach.
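The scheme described in this abstract lends itself to a compact numerical sketch. Below is a minimal illustration of an observer with output injection and an adaptive source-update law for a semi-discretized 1D wave equation; the gains, the measured region, the Gaussian source, and the symplectic-Euler stepping are all our own assumptions, not the thesis' exact formulation.

```python
import numpy as np

# Semi-discretized 1D wave equation u_tt = u_xx + f(x) on (0, 1) with zero
# Dirichlet BCs. An observer copies the dynamics, injects the output error
# on the measured nodes, and adapts a source estimate there. Gains, source
# location, and stepping scheme are illustrative assumptions.

N = 100
dx = 1.0 / (N + 1)
dt = 0.4 * dx                                  # CFL-safe step for c = 1
x = np.linspace(dx, 1.0 - dx, N)

f_true = np.exp(-200.0 * (x - 0.7) ** 2)       # unknown source (in measured half)
meas = slice(N // 2, N)                        # partial field measurements

def lap(u):                                    # Dirichlet Laplacian
    up = np.pad(u, 1)                          # zero padding = boundary values
    return (up[2:] + up[:-2] - 2.0 * u) / dx**2

gain, gamma = 50.0, 200.0                      # injection / adaptation gains

u, v = np.sin(np.pi * x), np.zeros(N)          # true state
uh, vh, fh = np.zeros(N), np.zeros(N), np.zeros(N)  # observer state + source

for _ in range(20000):
    v += dt * (lap(u) + f_true)                # true system, symplectic Euler
    u += dt * v
    err = u[meas] - uh[meas]                   # output error on measured nodes
    inj = np.zeros(N)
    inj[meas] = gain * err
    vh += dt * (lap(uh) + fh + inj)            # observer with output injection
    uh += dt * vh
    fh[meas] += dt * gamma * err               # gradient-type source update

print("relative source error (measured half):",
      np.linalg.norm(fh[meas] - f_true[meas]) / np.linalg.norm(f_true[meas]))
```

Whether the estimate actually converges depends on the gains and on the observability of the measured region; the thesis' adaptive observer comes with the analysis this sketch omits.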
22

Regularization, Uncertainty Estimation and Out of Distribution Detection in Convolutional Neural Networks

Krothapalli, Ujwal K. 11 September 2020 (has links)
Classification is an important task in the field of machine learning, and when classifiers are trained on images, a variety of problems can surface during inference. 1) The recent trend of using convolutional neural networks (CNNs) for various machine learning tasks has borne many successes, and CNNs are surprisingly expressive in their learning ability due to their large number of parameters and numerous stacked layers. This increased model complexity also increases the risk of overfitting to the training data. Increasing the size of the training data by synthetic or artificial means (data augmentation) helps CNNs learn better by reducing overfitting and producing a regularization effect that improves generalization of the learned model. 2) CNNs have proven to be very good classifiers and generally localize objects well; however, the loss functions typically used to train classification CNNs do not penalize inability to localize an object, nor do they take into account an object's relative size in the given image when producing confidence measures. 3) CNNs always output in the space of the learnt classes with high confidence when predicting the class of a given image, regardless of what the image contains. For example, an ImageNet-1K-trained CNN cannot say that a given image contains none of the objects it was trained on when it is provided with an image of a dinosaur (not an ImageNet category) or an image with the main object cut out of it (context only). We approach these three problems using bounding box information and by learning to produce high-entropy predictions on out-of-distribution classes. To address the first problem, we propose a novel regularization method called CopyPaste. The idea behind our approach is that images from the same class share similar context and can be 'mixed' together without affecting the labels. We use bounding box annotations that are available for a subset of ImageNet images. We consistently outperform the standard baseline and also explore combining our approach with other recent regularization methods. We show consistent performance gains on the PASCAL VOC07, MS-COCO and ImageNet datasets. For the second problem, we employ objectness measures to learn meaningful CNN predictions. Objectness is a measure of the likelihood that an object from any class is present in a given image. We present a novel approach to object localization that combines the ideas of objectness and label smoothing during training. Unlike previous methods, we compute a smoothing factor that adapts to the relative object size within an image. We present extensive results on ImageNet and OpenImages to demonstrate that CNNs trained using adaptive label smoothing are much less likely to be overconfident in their predictions than CNNs trained using hard targets. We train CNNs using objectness computed from the bounding box annotations available for the ImageNet and OpenImages datasets. We perform extensive experiments aimed at improving the ability of a classification CNN to learn better localizable features, and we show object detection performance improvements as well as calibration and classification performance on standard datasets. We also show qualitative results using class activation maps to illustrate the improvements.
Lastly, we extend the second approach to train CNNs with out-of-distribution and context-only images, using a uniform probability distribution over the set of target classes for such images. This is a novel use of uniform smooth labels, as it allows the model to learn better confidence bounds. We sample 1000 classes (mutually exclusive with the 1000 classes in ImageNet-1K) from the larger ImageNet dataset comprising about 22K classes. We compare our approach with standard baselines and provide entropy and confidence plots for in-distribution and out-of-distribution validation sets. / Doctor of Philosophy / Categorization is an important task in everyday life. Humans can classify objects in pictures effortlessly. Machines can also be trained to classify objects in images, and with the tremendous growth of artificial intelligence, machines have surpassed human performance on some tasks. However, plenty of challenges remain for artificial neural networks. Convolutional neural networks (CNNs) are one type of artificial neural network. 1) Sometimes CNNs simply memorize the samples provided during training and fail to work well with images that differ slightly from the training samples. 2) CNNs have proven to be very good classifiers and generally localize objects well; however, the objective functions typically used to train classification CNNs do not penalize inability to localize an object, nor do they take into account an object's relative size in the given image. 3) CNNs always produce an output in the space of the learnt classes with high confidence when predicting the class of a given image, regardless of what the image contains. For example, a CNN trained on ImageNet-1K (a popular dataset) cannot say that a given image contains none of the objects it was trained on when it is provided with an image of a dinosaur (not an ImageNet category) or an image with the main object cut out (background only). We approach these three problems using object position information and by learning to produce low-confidence predictions on out-of-distribution classes. To address the first problem, we propose a novel regularization method called CopyPaste. The idea behind our approach is that images from the same class share similar context and can be 'mixed' together without affecting the labels. We use bounding box annotations that are available for a subset of ImageNet images. We consistently outperform the standard baseline and also explore combining our approach with other recent regularization methods. We show consistent performance gains on the PASCAL VOC07, MS-COCO and ImageNet datasets. For the second problem, we employ objectness measures to learn meaningful CNN predictions. Objectness is a measure of the likelihood that an object from any class is present in a given image. We present a novel approach to object localization that combines the ideas of objectness and label smoothing during training. Unlike previous methods, we compute a smoothing factor that adapts to the relative object size within an image. We present extensive results on ImageNet and OpenImages to demonstrate that CNNs trained using adaptive label smoothing are much less likely to be overconfident in their predictions than CNNs trained using hard targets. We train CNNs using objectness computed from the bounding box annotations available for the ImageNet and OpenImages datasets.
We perform extensive experiments aimed at improving the ability of a classification CNN to learn better localizable features, and we show object detection performance improvements as well as calibration and classification performance on standard datasets. We also show qualitative results to illustrate the improvements. Lastly, we extend the second approach to train CNNs with out-of-distribution and context-only images, using a uniform probability distribution over the set of target classes for such images. This is a novel use of uniform smooth labels, as it allows the model to learn better confidence bounds. We sample 1000 classes (mutually exclusive with the 1000 classes in ImageNet-1K) from the larger ImageNet dataset comprising about 22K classes. We compare our approach with standard baselines on in-distribution and out-of-distribution validation sets.
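A rough sketch of how the two label-construction ideas in this abstract could look in code follows; the function names, the linear dependence of the smoothing mass on relative box size, and the `max_smooth` cap are our own assumptions, not the author's implementation.

```python
import numpy as np

# Hypothetical sketch of size-adaptive label smoothing (our reading of the
# abstract): the smoothing mass grows as the object occupies less of the
# image, so small objects yield softer, lower-confidence targets.

def adaptive_smooth_target(label, box_area, image_area, num_classes,
                           max_smooth=0.5):
    """Return a soft target vector for one image.

    label      : ground-truth class index
    box_area   : area of the object's bounding box (pixels)
    image_area : area of the whole image (pixels)
    max_smooth : cap on the smoothing mass (assumed hyperparameter)
    """
    objectness = box_area / image_area        # fraction of the image covered
    alpha = max_smooth * (1.0 - objectness)   # smaller object -> more smoothing
    target = np.full(num_classes, alpha / num_classes)
    target[label] += 1.0 - alpha              # remaining mass on the true class
    return target

# Out-of-distribution / context-only images get a uniform target, pushing
# the network toward maximum-entropy (low-confidence) output on them.
def uniform_target(num_classes):
    return np.full(num_classes, 1.0 / num_classes)
```

Both constructions produce valid probability vectors (they sum to one), so they drop into a standard cross-entropy loss without further changes.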
23

Optimal Control for an Impedance Boundary Value Problem

Bondarenko, Oleksandr 10 January 2011 (has links)
We consider the analysis of a scattering problem: an incoming time-harmonic wave is scattered by the surface of an impenetrable obstacle, and the reflected wave is determined by the surface impedance of the obstacle. In this work we investigate the problem of choosing the surface impedance so that a desired scattering amplitude is achieved. We formulate this control problem within the framework of the minimization of a Tikhonov functional. In particular, questions of the existence of an optimal solution and the derivation of optimality conditions are addressed. / Master of Science
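As a toy illustration of the Tikhonov framework described above, the sketch below replaces the true impedance-to-far-field scattering operator (which is nonlinear in practice) with a random linear surrogate `F`; all names and values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Minimize ||F(lam) - u_desired||^2 + alpha*||lam||^2 over the impedance
# lam, with a linear surrogate standing in for the scattering operator.

rng = np.random.default_rng(0)
n, m = 20, 40
F = rng.standard_normal((m, n))              # surrogate forward operator
lam_true = np.sin(np.linspace(0, np.pi, n))  # "true" impedance profile
u_desired = F @ lam_true                     # desired scattering amplitude

alpha = 1e-2                                 # regularization weight

def tikhonov(lam):
    misfit = F @ lam - u_desired
    return misfit @ misfit + alpha * (lam @ lam)

res = minimize(tikhonov, x0=np.zeros(n), method="L-BFGS-B")
print("relative error:",
      np.linalg.norm(res.x - lam_true) / np.linalg.norm(lam_true))
```

The regularization term keeps the problem well-posed at the cost of a small bias; the thesis' analysis concerns exactly this trade-off (existence of minimizers and optimality conditions) for the genuine scattering operator.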
24

Regularization for MRI Diffusion Inverse Problem

Almabruk, Tahani 17 June 2008 (has links)
In this thesis, we introduce a novel method for reconstructing fibre directions from diffusion images. By modelling the principal diffusion direction (PDD), i.e., the fibre direction, directly, we are able to apply regularization to the fibre direction explicitly, which was not possible before. Diffusion tensor imaging (DTI) is a technique that extracts information from multiple magnetic resonance images about the amount and orientation of diffusion within the body. It is commonly used for brain connectivity studies, providing information about white matter structure. Many methods have been presented in the literature for estimating diffusion tensors with and without regularization. Previous regularization methods were applied to the source images or to the diffusion tensors, so the process of extracting PDDs required two or three numerical procedures, in which regularization (including filtering) is applied in earlier steps before the PDD is extracted. Such methods require and/or impose smoothness on all components of the signal, which is inherently less efficient than using regularizing terms that penalize non-smoothness of the principal diffusion direction directly. Our model can be interpreted as a restriction of the diffusion tensor model in which the principal eigenvalue of the diffusion tensor is a model variable and not a derived quantity. We test the model using a numerical phantom designed to test many fibre orientations in parallel, and we process a set of thigh muscle diffusion-weighted images. / Thesis / Master of Science (MSc)
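For contrast, here is a minimal sketch of the standard two-step DTI pipeline that the thesis improves on: fit the diffusion tensor voxel-wise by log-linear least squares, then extract the PDD as the principal eigenvector. Variable names and the single-voxel scope are our own simplifications.

```python
import numpy as np

# Standard two-step pipeline: S_i = S0 * exp(-b * g_i^T D g_i), so
# -log(S_i/S0)/b is linear in the six unique entries of the tensor D.
# (The thesis instead models the PDD directly so it can be regularized.)

def fit_tensor(signals, s0, bvals, bvecs):
    """Log-linear least-squares fit of one voxel's diffusion tensor.
    signals: (k,) DWI intensities; bvals: (k,); bvecs: (k, 3) unit vectors."""
    g = bvecs
    # design matrix for [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
    A = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                         2*g[:, 0]*g[:, 1], 2*g[:, 0]*g[:, 2],
                         2*g[:, 1]*g[:, 2]])
    y = -np.log(signals / s0) / bvals
    d, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])

def principal_diffusion_direction(D):
    w, V = np.linalg.eigh(D)   # eigenvalues in ascending order
    return V[:, -1]            # eigenvector of the largest eigenvalue
```

Because the PDD here is a derived quantity (an eigenvector of a fitted tensor), any smoothing has to act on the images or tensors upstream, which is exactly the inefficiency the thesis' direct PDD model avoids.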
25

Maximum entropy regularization for calibrating a time-dependent volatility function

Hofmann, Bernd, Krämer, Romy 26 August 2004 (has links) (PDF)
We investigate the applicability of the method of maximum entropy regularization (MER), including convergence and convergence rates of regularized solutions, to the specific inverse problem (SIP) of calibrating a purely time-dependent volatility function. In this context, we extend the results of [16] and [17] in some detail. Due to the explicit structure of the forward operator, based on a generalized Black-Scholes formula, the ill-posedness of the nonlinear inverse problem (SIP) can be verified. Numerical case studies illustrate the possibilities and limitations of MER versus Tikhonov regularization (TR) for smooth solutions and for solutions with a sharp peak.
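A hedged sketch of MER on a generic linear ill-posed problem follows; the smoothing operator, the prior level `x_ref`, and the regularization weight are stand-ins, not the paper's Black-Scholes forward operator.

```python
import numpy as np
from scipy.optimize import minimize

# Maximum entropy regularization: the entropy term penalizes deviation
# from a positive prior x_ref and enforces positivity of the solution.
# min ||F x - y||^2 + alpha * sum(x*log(x/x_ref) - x + x_ref)

rng = np.random.default_rng(1)
n = 30
F = np.tril(np.ones((n, n))) / n          # smoothing (ill-posed) operator
x_true = 0.2 + 0.1 * np.exp(-50.0 * (np.linspace(0, 1, n) - 0.5) ** 2)
y = F @ x_true + 1e-3 * rng.standard_normal(n)

x_ref = np.full(n, 0.25)                  # prior guess of the level
alpha = 1e-4

def mer(x):
    misfit = F @ x - y
    entropy = np.sum(x * np.log(x / x_ref) - x + x_ref)  # >= 0, zero at x_ref
    return misfit @ misfit + alpha * entropy

res = minimize(mer, x0=x_ref, bounds=[(1e-8, None)] * n, method="L-BFGS-B")
```

The built-in positivity is what makes MER natural for volatility calibration, where the unknown function must stay positive; this is the qualitative contrast with TR that the case studies in the paper explore.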
26

Tikhonov regularization with oversmoothing penalties

Gerth, Daniel 21 December 2016 (has links) (PDF)
In the last decade, l1-regularization became a powerful and popular tool for the regularization of inverse problems. While in the early years sparse solutions were the focus of research, more recently the case in which the coefficients of the exact solution merely decay sufficiently fast has also been considered. In this paper we seek to show that l1-regularization is applicable and leads to optimal convergence rates even when the exact solution does not belong to l1 but only to l2. This is a particular example of oversmoothing regularization, i.e., the penalty implies smoothness properties that the exact solution does not fulfill. We also make some statements on convergence in this general context.
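For readers unfamiliar with l1-regularization in this setting, a minimal ISTA sketch for min (1/2)||Ax - y||^2 + alpha*||x||_1 follows; the paper's contribution is the convergence-rate analysis when the exact solution lies only in l2, not this algorithm, and the code is purely illustrative.

```python
import numpy as np

# Iterative shrinkage-thresholding (ISTA): a gradient step on the data
# misfit followed by soft-thresholding, the proximal map of the l1 term.

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, alpha, iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, alpha / L)
    return x

# toy usage: recover a sparse vector from noiseless data
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_sparse = np.zeros(20)
x_sparse[[3, 11]] = 1.0
x_hat = ista(A, A @ x_sparse, alpha=0.1)
```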
27

Predictor Selection in Linear Regression: L1 regularization of a subset of parameters and Comparison of L1 regularization and stepwise selection

Hu, Qing 11 May 2007 (has links)
Background: Feature selection, also known as variable selection, is a technique that selects a subset from a large collection of possible predictors to improve the prediction accuracy of a regression model. The first objective of this project is to investigate for which data structures LASSO outperforms the forward stepwise method. The second objective is to develop a feature selection method, Feature Selection by L1 Regularization of a Subset of Parameters (LRSP), which selects the model by combining prior knowledge of the inclusion of some covariates, if any, with the information collected from the data. Mathematically, LRSP minimizes the residual sum of squares subject to the sum of the absolute values of a subset of the coefficients being less than a constant. In this project, LRSP is compared with LASSO, forward selection, and ordinary least squares to investigate their relative performance for different data structures. Results: Simulation results indicate that for a moderate number of small-sized effects, forward selection outperforms LASSO in both prediction accuracy and variable selection performance when the variance of the model error term is smaller, regardless of the correlations among the covariates; forward selection also performs better at variable selection when the variance of the error term is larger but the correlations among the covariates are smaller. LRSP was shown to be an efficient method for problems where prior knowledge about the inclusion of covariates is available, and it can also be applied to problems with nuisance parameters, such as linear discriminant analysis.
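A sketch of LRSP as we read the abstract: proximal gradient descent in which only a chosen subset of the coefficients receives the l1 penalty, so prior-included covariates stay unpenalized. The function names and step-size rule are our own.

```python
import numpy as np

# Penalized form of LRSP: min (1/2n)||X beta - y||^2 + lam * sum_{j in S} |beta_j|,
# where S is the penalized subset (the Lagrangian of the constrained
# formulation in the abstract).

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lrsp(X, y, penalized, lam, iters=2000):
    """penalized: boolean mask; True = coefficient gets the l1 penalty."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n      # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(iters):
        beta = beta - X.T @ (X @ beta - y) / (n * L)
        beta[penalized] = soft_threshold(beta[penalized], lam / L)
    return beta

# usage: keep the first two covariates (prior knowledge), penalize the rest
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(100)
mask = np.ones(10, dtype=bool)
mask[:2] = False
beta_hat = lrsp(X, y, penalized=mask, lam=0.05)
```

With `lam` large enough, the penalized coefficients shrink to exactly zero while the protected ones are fit by plain least squares, which is the behaviour LRSP is designed to combine.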
28

Bifurcations of discontinuous vector fields

Maciel, Anderson Luiz 14 August 2009 (has links)
Let M be a compact and connected subset of the plane that is the union of the connected subsets N and S. Let Z_L = (X_L, Y_L) be a one-parameter family of discontinuous vector fields, where X_L is defined on N and Y_L on S. Both fields X_L and Y_L, as well as their dependence on L, are smooth, i.e., of class C^\infty; the discontinuity occurs on the common boundary of N and S. The aim of this work is to study the bifurcations that occur in certain families of discontinuous vector fields following Filippov's conventions. Applying the regularization method, introduced by Sotomayor and Teixeira and later developed further by Sotomayor and Machado, to the family of discontinuous vector fields Z_L, we obtain a family of smooth vector fields that is close to the original discontinuous family. We use this regularization technique to study, by comparison with the classical results of the smooth theory, the bifurcations that occur in families of discontinuous vector fields. The literature contains a list of codimension-one bifurcations, in the Filippov sense, presented most completely in Yu. A. Kuznetsov, A. Gragnani and S. Rinaldi, One-Parameter Bifurcations in Planar Filippov Systems, Int. Journal of Bifurcation and Chaos, vol. 13, no. 8: 2157-2188 (2003). Some cases on this list were already known to Kozlova, Filippov and Machado. In this work we study the bifurcations of some of the cases presented in the article of Kuznetsov et al. through the regularization of these families. In this thesis we substantiate mathematically the following conclusion: the bifurcations of the discontinuous families analyzed become completely known through the bifurcations exhibited by the corresponding regularized families, using the resources of the classical smooth theory.
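A minimal sketch of the Sotomayor-Teixeira regularization on a toy planar Filippov system with switching line y = 0 follows; the piecewise-linear transition function stands in for the smooth one used in the theory, and the two constant fields are our own example, not one of the thesis' families.

```python
import numpy as np

# Sotomayor-Teixeira regularization: inside a band of width 2*eps around
# the discontinuity, blend the two fields with a monotone transition
# function phi; outside the band, the regularized field equals X or Y.

def phi(s):
    """Piecewise-linear stand-in for a smooth transition function:
    -1 for s <= -1, +1 for s >= 1, linear in between."""
    return np.clip(s, -1.0, 1.0)

def X(x, y):                  # field on N = {y > 0}
    return np.array([1.0, -1.0])

def Y(x, y):                  # field on S = {y < 0}
    return np.array([1.0, 1.0])

def Z_eps(x, y, eps=0.05):
    """Regularized field: equals X for y >= eps and Y for y <= -eps."""
    w = 0.5 * (1.0 + phi(y / eps))        # blending weight in [0, 1]
    return w * X(x, y) + (1.0 - w) * Y(x, y)
```

In this example both fields point toward y = 0, so the Filippov convention prescribes sliding along the switching line; inside the band, `Z_eps` develops a slow invariant region that mimics that sliding motion, which is the mechanism that lets smooth bifurcation theory be applied to the regularized family.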
30

Regularized perturbative expansion for the Kondo effect

Lima, Neemias Alves de 01 April 1998 (has links)
In the last two decades the theory of correlated electronic systems has made enormous progress, which has sustained the parallel development of experimental research on heavy-fermion systems. Given the complexity imposed by the strong correlations, several complementary calculation techniques were developed in this period. The present work explores an extension of one of the oldest, the numerical renormalization group (NRG), treating the Kondo model for a magnetic impurity in a metallic host perturbatively. It is well known that the perturbative expansion of physical properties, such as the susceptibility, in terms of the exchange coupling diverges logarithmically near the Kondo temperature. The NRG approach to this considers the discrete transformation T[H_N] = H_{N+1}, where {H_N} is a sequence of Hamiltonians. In this work, to regularize the expansion of the susceptibility, we use an alternative procedure based on the analogous continuous transformation T_{δz}[H_N(z)] = H_N(z + δz), where z is an arbitrary parameter that generalizes the logarithmic discretization of the NRG. Unlike Wilson's procedure, we expect this new procedure to be more easily applicable to more complex Hamiltonians, complementing numerical diagonalization.
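A small illustration of the z-shifted logarithmic discretization alluded to in the abstract, under our own reading: shifting the exponent by the continuous parameter z slides the whole energy mesh, generalizing the fixed mesh of Wilson's NRG. Parameter names and values are assumptions.

```python
import numpy as np

# Logarithmic discretization of the conduction band (0, 1]: energies fall
# off as Lambda**(-(n + z)). A fixed z gives one NRG-style mesh; varying z
# continuously interpolates between meshes, which is the handle the
# continuous transformation T_{dz}[H_N(z)] = H_N(z + dz) exploits.

def discretization_points(Lambda=2.0, z=1.0, n_max=10):
    n = np.arange(n_max)
    return Lambda ** (-(n + z))

for z in (1.0, 0.5, 0.25):
    print(f"z = {z}:", np.round(discretization_points(n_max=5), 4))
```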
