41. Minimum I-divergence Methods for Inverse Problems. Choi, Kerkil, 23 November 2005.
Problems of estimating nonnegative functions from nonnegative data induced by nonnegative mappings are ubiquitous in science and engineering. We address such problems by minimizing an information-theoretic discrepancy measure, namely Csiszar's I-divergence, between the collected data and hypothetical data induced by an estimate.
Our applications can be summarized along the following three lines:
1) Deautocorrelation: recovering a function from its autocorrelation. Deautocorrelation can be interpreted as phase retrieval, since recovering a function from its autocorrelation is equivalent to retrieving the Fourier phases from the corresponding Fourier magnitudes alone.
Schulz and Snyder invented a minimum I-divergence algorithm for phase retrieval. We perform a numerical study concerning the convergence of their algorithm to local minima.
X-ray crystallography is a method for finding the interatomic structure of a crystallized molecule. X-ray crystallography problems can be viewed as deautocorrelation problems from aliased autocorrelations, due to the periodicity of the crystal structure. We derive a modified version of the Schulz-Snyder algorithm for application to crystallography. Furthermore, we prove that the modified algorithm preserves the special symmorphic group symmetries that some crystals possess.
We quantify the impact of noise via several error metrics as the signal-to-noise ratio changes.
Furthermore, we propose penalty methods using Good's roughness and total variation for alleviating roughness in estimates caused by
noise.
2) Deautoconvolution: recovering a function from its autoconvolution. We derive an iterative algorithm that attempts this recovery by minimizing the I-divergence, and we establish various theoretical properties of the algorithm.
3) Linear inverse problems: Various linear inverse
problems can be described by the Fredholm integral equation of the first kind. We address two such problems via minimum I-divergence
methods, namely the inverse blackbody radiation problem, and the problem of estimating an input distribution to a communication channel
(particularly Rician channels) that would create a desired output.
Penalty methods are proposed for dealing with the ill-posedness of the inverse blackbody problem.
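For concreteness, here is a minimal Python sketch (not taken from the thesis) of the multiplicative update that minimizing Csiszar's I-divergence yields for a generic nonnegative linear model g ≈ Hf; the matrix H, the initialization, and the iteration count are illustrative assumptions, and the thesis's deautocorrelation and deautoconvolution algorithms replace the linear map by quadratic (autocorrelation or autoconvolution) mappings.

```python
import numpy as np

def min_idivergence(H, g, n_iter=200):
    """Multiplicative updates that decrease Csiszar's I-divergence between
    nonnegative data g and the hypothetical data H @ f induced by the
    estimate f (the classical EM / Richardson-Lucy iteration for a linear
    nonnegative model)."""
    f = np.ones(H.shape[1])                     # nonnegative starting estimate
    col_sums = np.maximum(H.sum(axis=0), 1e-12)
    for _ in range(n_iter):
        pred = H @ f                            # hypothetical data induced by f
        ratio = np.divide(g, pred, out=np.zeros_like(pred), where=pred > 0)
        f *= (H.T @ ratio) / col_sums           # multiplicative, so f stays nonnegative
    return f
```

The multiplicative form is what keeps the iterates nonnegative without any explicit projection, which is one reason I-divergence methods are natural for nonnegative inverse problems.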
42. BOSONIZATION VS. SUPERSYMMETRY. Morales, Herbert, 01 January 2006.
We study the conjectured equivalence between the O(3) Gross-Neveu model and the supersymmetric sine-Gordon model under a naive application of the bosonization rules. We start with a review of the equivalence between the sine-Gordon model and the massive Thirring model. We study the models by perturbation theory and then determine the equivalence. We find that the dependence of the identifications on the couplings can change according to the definition of the vector current. With the operator identifications of the special case corresponding to a free fermionic theory, known as the bosonization rules, we describe the equivalence between the massless Thirring model and the model of a compactified free boson field. For the massless Thirring model, or equivalently the O(2) Gross-Neveu model, we study the conservation laws for the vector current and the axial current by employing a generalized point-splitting method which allows a one-parameter family of definitions of the vector current. With this parameter, we can make contact with different approaches found in the literature; these approaches differ mainly in the specific definition of the current that was used. We also find the Sugawara form of the stress-energy tensor and its commutation relations. Further, we rewrite the identifications between the sine-Gordon and Thirring models in our generalized framework. For the O(3) Gross-Neveu model, we extend our point-splitting method to determine the exact expression for the supercurrent. Using this current, we compute the superalgebra, which determines three quantum components of the stress-energy tensor. With an Ansatz for the undetermined component, we find the trace anomaly and the first beta-function coefficient. The central charge, which can be computed without using our point-splitting method, is independent of the coupling constant; in fact, it is always zero. For the supersymmetric sine-Gordon model, we review its supersymmetry in the context of models derived from a scalar multiplet in two dimensions. We then obtain the central charge and discover an extra term that was missing in the original derivation. We also analyze how normal ordering modifies the central charge. Finally, we discuss the conjectured equivalence of the O(3) Gross-Neveu model and the supersymmetric sine-Gordon model under the naive application of the bosonization rules. Comparing our results for the central charges and the supercurrents of these models, we find that they disagree; consequently, the models should be generically inequivalent. We also conclude that the naive application of the bosonization rules at the Lagrangian level does not always lead to an equivalent theory.
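For orientation (notation mine, and normalizations are convention dependent; the one-parameter family of current definitions studied above modifies precisely these identifications), the textbook Coleman correspondence between the sine-Gordon coupling β and the Thirring coupling g, together with the bosonization identifications, reads

\[
\frac{\beta^{2}}{4\pi} \;=\; \frac{1}{1 + g/\pi}, \qquad
\bar\psi\,\gamma^{\mu}\psi \;\longleftrightarrow\; -\frac{\beta}{2\pi}\,\epsilon^{\mu\nu}\partial_{\nu}\varphi, \qquad
\bar\psi\psi \;\longleftrightarrow\; c\,\cos(\beta\varphi),
\]

with the free-fermion point at \(\beta^{2} = 4\pi\) (g = 0) and c a cutoff- and normal-ordering-dependent constant.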
43. Singular perturbations of elliptic operators. Dyachenko, Evgueniya; Tarkhanov, Nikolai, January 2014.
We develop a new approach to the analysis of pseudodifferential operators with small parameter 'epsilon' in (0,1] on a compact smooth manifold X. The standard approach assumes that the operators act in Sobolev spaces whose norms depend on 'epsilon'. Instead we consider the cylinder [0,1] x X over X and study pseudodifferential operators on the cylinder which, by their very nature, act on functions depending on 'epsilon' as well. The action in 'epsilon' reduces to multiplication by functions of this variable and does not include any differentiation. As but one result we mention the asymptotics of solutions to singular perturbation problems for small values of 'epsilon'.
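For orientation (this example is not from the abstract), the simplest model of such a family is

\[
A_{\varepsilon}u \;=\; \varepsilon^{2}\,\Delta u \;-\; u, \qquad \varepsilon \in (0,1],
\]

for which solutions stay bounded in L² as ε → 0 but the usual elliptic gain of two derivatives cannot hold uniformly in ε; this is what motivates either ε-dependent Sobolev norms or, as here, a calculus on the cylinder [0,1] x X that treats ε as an additional variable.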
44. Multi-Task Learning via Structured Regularization: Formulations, Algorithms, and Applications. January 2011.
Multi-task learning (MTL) aims to improve the generalization performance (of the resulting classifiers) by learning multiple related tasks simultaneously. Specifically, MTL exploits the intrinsic task relatedness, based on which the informative domain knowledge from each task can be shared across multiple tasks and thus facilitate individual task learning. It is particularly desirable to share the domain knowledge (among the tasks) when there are a number of related tasks but only limited training data is available for each task. Modeling the relationship of multiple tasks is critical to the generalization performance of MTL algorithms. In this dissertation, I propose a series of MTL approaches which assume that multiple tasks are intrinsically related via a shared low-dimensional feature space. The proposed MTL approaches are developed to deal with different scenarios and settings; they are respectively formulated as mathematical optimization problems of minimizing the empirical loss regularized by different structures. For all proposed MTL formulations, I develop the associated optimization algorithms to find their globally optimal solution efficiently. I also conduct theoretical analysis for certain MTL approaches by deriving the globally optimal solution recovery condition and the performance bound. To demonstrate the practical performance, I apply the proposed MTL approaches to different real-world applications: (1) automated annotation of Drosophila gene expression pattern images; (2) categorization of Yahoo web pages. Our experimental results demonstrate the efficiency and effectiveness of the proposed algorithms. / Dissertation/Thesis / Ph.D., Computer Science, 2011
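As one representative member of this family (notation mine; the dissertation develops several formulations with different structured regularizers), the shared low-dimensional subspace assumption is often encoded by a trace-norm penalty:

\[
\min_{W=[w_{1},\dots,w_{m}]}\;\sum_{t=1}^{m}\frac{1}{n_{t}}\sum_{i=1}^{n_{t}}
\ell\!\left(w_{t}^{\top}x_{t,i},\,y_{t,i}\right)\;+\;\lambda\,\|W\|_{*},
\]

where W collects the task-specific weight vectors w_t, ℓ is the empirical loss on the n_t examples of task t, λ trades off data fit against shared structure, and the nuclear (trace) norm ||W||_* is the convex surrogate that encourages the tasks to share a low-dimensional feature subspace.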
45. Sobre a estabilidade estrutural e regularizações de campos de vetores descontínuos / About the structural stability and regularizations of discontinuous vector fields. Jorge, Ronan Felipe [UNESP], 18 March 2016.
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). / The concept of structural stability was introduced into the study of continuous dynamical systems by Andronov and Pontryagin (1937). In 1988, after investigating oscillators with Coulomb friction and presenting differential equations with discontinuous right-hand sides, Filippov developed a nomenclature for the study of discontinuous dynamical systems. Since then, researchers in the area have been trying to analyze the structural stability of discontinuous dynamical systems by different methods. One of these methods is to transform, without altering the structure of the vector fields, these discontinuous systems into continuous systems, for which the study of structural stability is already well established. This transformation, also known as regularization, can be carried out in several ways.
This work presents a regularization method that uses a transition function for vector fields Z of the plane with a discontinuity set S, based on the regularization method introduced by Sotomayor and Teixeira (1996), and carries out a brief study of structural stability for vector fields regularized by this method.
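For reference, the construction of Sotomayor and Teixeira (1996) referred to above can be stated as follows (notation mine): write Z = (X, Y) for the pair of smooth fields separated by the discontinuity set S = h^{-1}(0), and let φ be a smooth transition function with φ(t) = -1 for t ≤ -1, φ(t) = 1 for t ≥ 1, and φ' > 0 on (-1, 1). The regularized family is

\[
Z_{\varepsilon}(p) \;=\; \frac{1+\varphi\!\left(h(p)/\varepsilon\right)}{2}\,X(p)
\;+\; \frac{1-\varphi\!\left(h(p)/\varepsilon\right)}{2}\,Y(p),
\]

which is smooth for every ε > 0, equals X where h(p) ≥ ε, and equals Y where h(p) ≤ -ε.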
46. Solving ill-posed problems with mollification and an application in biometrics. Lindgren, Emma, January 2018.
This thesis is about how mollification can be used as a regularization method to reduce noise in ill-posed problems in order to make them well-posed. Ill-posed problems are problems where noise gets magnified during the solution process. An example of this is how measurement errors are amplified by differentiation. To correct this we use mollification. Mollification is a regularization method that uses integration, or a weighted average, to even out a noisy function. The different types of error that occur when mollifying are the truncation error and the propagated data error. We calculate these errors and examine what affects them. Another thing worth investigating is the ability to differentiate a mollified function even if the function itself cannot be differentiated. An application of mollification is a blood vessel problem in biometrics, where the goal is to calculate the elasticity of the blood vessel's wall. To do this, measurements from the blood and the blood vessel are required, as well as equations for the calculations. The model used for the calculations is ill-posed with regard to specific variables, which is why we want to apply mollification. Here we also examine how the noise level and the mollification radius affect the final result.
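To illustrate the idea (a minimal sketch, not code from the thesis; the Gaussian kernel, the radius, and the test signal are assumptions), mollification of noisy samples followed by numerical differentiation can look like this in Python:

```python
import numpy as np

def mollified_derivative(t, y, radius):
    """Sketch: convolve the noisy samples y (on the uniform grid t) with a
    Gaussian mollifier of the given radius, then differentiate the smoothed
    curve by finite differences. The radius balances the truncation error
    (over-smoothing) against the propagated data error (amplified noise)."""
    dt = t[1] - t[0]
    half_width = int(4 * radius / dt)
    k = np.arange(-half_width, half_width + 1) * dt
    kernel = np.exp(-(k / radius) ** 2)
    kernel /= kernel.sum()                      # unit mass, like a mollifier
    y_smooth = np.convolve(y, kernel, mode="same")
    return np.gradient(y_smooth, dt)

# usage sketch: the mollified derivative of noisy sin(t) samples tracks cos(t)
t = np.linspace(0.0, 2.0 * np.pi, 400)
y = np.sin(t) + 0.05 * np.random.randn(t.size)
dy = mollified_derivative(t, y, radius=0.2)
```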
47. Expansão perturbativa regularizada para o efeito Kondo / Regularized perturbative expansion for the Kondo effect. Neemias Alves de Lima, 01 April 1998.
In the last two decades the theory of correlated electronic systems has made enormous progress, which has sustained the parallel development of experimental research on heavy-fermion systems. Given the complexity imposed by the strong correlations, several complementary calculational techniques were developed in this period. The present work explores an extension of one of the oldest, the numerical renormalization group (NRG), treating perturbatively the Kondo model for a magnetic impurity in a metallic host. It is well known that the perturbative expansion of physical properties, such as the susceptibility, in terms of the exchange coupling diverges logarithmically near the Kondo temperature. The NRG approach to this considers the discrete transformation T[HN] = HN+1, where {HN} is a sequence of Hamiltonians. In this work, to regularize the expansion of the susceptibility, we use an alternative procedure based on the analogous continuous transformation Tδz[HN(z)] = HN(z+δz), where z is an arbitrary parameter that generalizes the logarithmic discretization of the NRG. Unlike Wilson's procedure, we expect this new one to be more easily applicable to more complex Hamiltonians, complementing the numerical diagonalization.
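For context (notation mine, and conventions for the treatment of the band edges differ between papers), the logarithmic discretization behind this construction replaces the conduction band [-D, D] by the points

\[
x_{n}^{(z)} \;=\; \pm\,D\,\Lambda^{-(n+z)}, \qquad n = 0,1,2,\dots,\quad \Lambda > 1,\ z \in (0,1],
\]

so that varying z slides the whole logarithmic mesh continuously; each fixed z gives a Wilson-type discretization, and the transformation Tδz above connects Hamiltonians at neighboring values of z.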
48. An efficient method for an ill-posed problem: band-limited extrapolation by regularization. Chen, Weidong, January 1900.
Doctor of Philosophy / Department of Mathematics / Robert B. Burckel / In this paper a regularized spectral estimation formula and a regularized iterative algorithm
for band-limited extrapolation are presented. The ill-posedness of the problem is taken into account. First
a Fredholm equation is regularized. Then it is transformed into a differential equation in the case where the time interval is R. A fast algorithm to solve the differential equation by finite differences is given, and a regularized spectral estimation formula is obtained. Then a regularized iterative extrapolation algorithm is introduced and compared with the Papoulis-Gerchberg algorithm. A time-frequency regularized extrapolation algorithm
is presented in the two-dimensional case. The Gibbs phenomenon is analyzed.
Then the time-frequency regularized extrapolation algorithm is applied to image restoration
and compared with other algorithms.
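As a point of reference for the comparison mentioned above, here is a minimal Python sketch of the classical Papoulis-Gerchberg iteration (the baseline, not the thesis's regularized algorithm); the mask conventions and the iteration count are illustrative assumptions.

```python
import numpy as np

def papoulis_gerchberg(samples, known_mask, band_mask, n_iter=500):
    """Classical Papoulis-Gerchberg extrapolation: alternately enforce
    band-limitation in the frequency domain and the observed samples in
    the time domain.
    samples:    signal values (only trusted where known_mask is True)
    known_mask: boolean array marking the observed samples
    band_mask:  boolean array marking the passband among the DFT bins"""
    x = np.where(known_mask, samples, 0.0).astype(complex)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[~band_mask] = 0.0                    # project onto band-limited signals
        x = np.fft.ifft(X)
        x[known_mask] = samples[known_mask]    # reimpose the observed data
    return x.real
```

Without regularization this iteration is known to be sensitive to noise in the observed samples, which is exactly the ill-posedness the thesis addresses.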
49. Learning to Rank with Contextual Information. Han, Peng, 15 November 2021.
Learning to rank is used in many scenarios, such as disease-gene association, information retrieval, and recommender systems. Improving the prediction accuracy of the ranking model is the main target of existing works. Contextual information has a significant influence on the ranking problem and has proved effective in increasing the prediction performance of ranking models. We therefore construct similarities for different types of entities that exploit contextual information uniformly and in an extensible way.
Once we have the similarities constructed from contextual information, the task is how to utilize them in different types of ranking models. In this thesis, we propose four algorithms for learning to rank with contextual information. To refine the matrix factorization framework, we propose an area under the ROC curve (AUC) loss to overcome the sparsity problem. Clustering and sampling methods are used to exploit the contextual information from a global perspective, and an objective function with an optimal solution is proposed to exploit the contextual information from a local perspective. Then, for the deep learning framework, we apply a graph convolutional network (GCN) to the ranking problem in combination with matrix factorization. Contextual information is used to generate the input embeddings and graph kernels for the GCN. The third method in this thesis directly exploits the contextual information for ranking. A Laplacian loss is used to solve the ranking problem, which optimizes the ranking matrix directly. With this loss, entities with similar contextual information have similar ranking results. Finally, we propose a two-step method for the ranking problem on sequential data. The first step is to generate embeddings for all entities with a new sampling strategy. A graph neural network (GNN) and long short-term memory (LSTM) are combined to generate the representation of the sequential data. Once we have this representation, we solve the ranking problem with a pairwise loss and a sampling strategy.
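To make the matrix-factorization-plus-pairwise-loss ingredient concrete, here is a minimal Python sketch of one stochastic update of a generic pairwise (BPR-style) ranking loss. It is illustrative only; the thesis's AUC loss, Laplacian loss, and GCN/LSTM models are more elaborate, and the contextual similarities would enter as additional regularization terms. All sizes and names below are assumptions.

```python
import numpy as np

def bpr_step(U, V, u, i, j, lr=0.01, reg=0.01):
    """One stochastic update of a pairwise ranking loss -log sigmoid(x_uij),
    with x_uij = U[u]·(V[i] - V[j]): entity u is known to prefer item i over
    the sampled negative item j, and scores come from the embeddings U, V."""
    u_f, i_f, j_f = U[u].copy(), V[i].copy(), V[j].copy()
    x_uij = u_f @ (i_f - j_f)
    g = 1.0 / (1.0 + np.exp(x_uij))        # equals 1 - sigmoid(x_uij)
    U[u] += lr * (g * (i_f - j_f) - reg * u_f)
    V[i] += lr * (g * u_f - reg * i_f)
    V[j] += lr * (-g * u_f - reg * j_f)

# usage sketch on random embeddings (hypothetical sizes)
rng = np.random.default_rng(0)
U = 0.1 * rng.standard_normal((100, 16))   # 100 entities, rank-16 embeddings
V = 0.1 * rng.standard_normal((500, 16))   # 500 items
bpr_step(U, V, u=3, i=42, j=7)
```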
50. Regularization Methods for Detecting Differential Item Functioning. Jiang, Jing, January 2019.
Thesis advisor: Zhushan Mandy Li / Differential item functioning (DIF) occurs when examinees of equal ability from different groups have different probabilities of correctly responding to certain items. DIF analysis aims to identify potentially biased items to ensure the fairness and equity of instruments, and has become a routine procedure in developing and improving assessments. This study proposed a DIF detection method using regularization techniques, which allows for simultaneous investigation of all items on a test for both uniform and nonuniform DIF. In order to evaluate the performance of the proposed DIF detection models and understand the factors that influence the performance, comprehensive simulation studies and empirical data analyses were conducted. Under various conditions including test length, sample size, sample size ratio, percentage of DIF items, DIF type, and DIF magnitude, the operating characteristics of three kinds of regularized logistic regression models (lasso, elastic net, and adaptive lasso), each characterized by its penalty function, were examined and compared. Selection of the optimal tuning parameter was investigated using two well-known information criteria, AIC and BIC, as well as cross-validation. The results revealed that BIC outperformed the other model selection criteria: it not only flagged high-impact DIF items precisely, but also prevented over-identification of DIF items, with few false alarms. Among the regularization models, the adaptive lasso model achieved superior performance to the other two models in most conditions. The performance of the regularized DIF detection model using the adaptive lasso was then compared to two commonly used DIF detection approaches, the logistic regression method and the likelihood ratio test. The proposed model was applied to empirical datasets to demonstrate the applicability of the method in real settings. / Thesis (PhD), Boston College, 2019. / Submitted to: Boston College. Lynch School of Education. / Discipline: Educational Research, Measurement and Evaluation.
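As a simplified illustration of the idea (not the thesis's joint model), the sketch below fits one L1-penalized logistic regression per item with group and ability-by-group terms; the thesis penalizes the DIF parameters of all items simultaneously, leaves the matching variable unpenalized, and selects the tuning parameter by AIC, BIC, or cross-validation, so this per-item version is only indicative. Function and variable names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def screen_item_for_dif(responses, ability, group, C=1.0):
    """Per-item sketch of regularized DIF screening.
    responses: 0/1 answers to one item
    ability:   matching proxy, e.g. total test score
    group:     0/1 group membership
    A nonzero penalized coefficient on group suggests uniform DIF; a nonzero
    coefficient on ability*group suggests nonuniform DIF."""
    X = np.column_stack([ability, group, ability * group])
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    model.fit(X, responses)
    b_ability, b_group, b_interaction = model.coef_[0]
    return {
        "uniform_dif": b_group != 0.0,          # group main effect survived the penalty
        "nonuniform_dif": b_interaction != 0.0  # ability-by-group effect survived
    }
```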