31 |
Kernel Methods for Collaborative Filtering. Sun, Xinyuan, 25 January 2016 (has links)
The goal of this thesis is to extend kernel methods to matrix factorization (MF) for collaborative filtering (CF). In the current literature, MF methods usually assume that the correlated data lie on a linear hyperplane, which is not always the case. The best-known kernel method is the support vector machine (SVM), which handles linearly non-separable data. In this thesis, we apply kernel methods to MF, embedding the data into a possibly higher-dimensional space and conducting factorization in that space. To improve kernelized matrix factorization, we apply multi-kernel learning methods to select optimal kernel functions from the candidates and introduce L2-norm regularization in the weight-learning process. In our empirical study, we conduct experiments on three real-world datasets. The results suggest that the proposed method can improve prediction accuracy, surpassing state-of-the-art CF methods.
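The abstract gives no implementation details, so the following is only a rough sketch of the core idea under our own assumptions, not the thesis's method: each observed rating is modeled as an RBF kernel evaluated between a user and an item latent vector (so the factorization lives in the kernel-induced space), fitted by gradient descent with L2 regularization. The RBF choice, all hyperparameters, and the rescaling of ratings to (0, 1] are ours.

```python
import numpy as np

def kernelized_mf(R, mask, rank=8, sigma=1.0, lam=0.1, lr=0.05, iters=500, seed=0):
    """Sketch of kernelized matrix factorization: model each observed rating
    (assumed rescaled to (0, 1]) as an RBF kernel between latent vectors,
    R_ij ~ exp(-||u_i - v_j||^2 / (2 sigma^2)),
    and fit U, V by gradient descent with L2 (weight-decay) regularization."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, rank))
    V = rng.normal(scale=0.1, size=(n_items, rank))
    rows, cols = np.nonzero(mask)                 # indices of observed ratings
    for _ in range(iters):
        diff = U[rows] - V[cols]                  # (n_obs, rank)
        k = np.exp(-np.sum(diff**2, axis=1) / (2 * sigma**2))  # predictions
        err = k - R[rows, cols]                   # residuals on observed entries
        g = (err * k)[:, None] * diff / sigma**2  # chain rule through the kernel
        gU = np.zeros_like(U)
        gV = np.zeros_like(V)
        np.add.at(gU, rows, -g)                   # gradient w.r.t. each user vector
        np.add.at(gV, cols, g)                    # gradient w.r.t. each item vector
        U -= lr * (gU + lam * U)
        V -= lr * (gV + lam * V)
    return U, V

# toy usage: two users, three items, zeros mark unobserved entries
R = np.array([[0.9, 0.2, 0.0],
              [0.8, 0.0, 0.3]])
U, V = kernelized_mf(R, mask=R > 0, rank=2)
pred = np.exp(-((U[:, None, :] - V[None, :, :])**2).sum(-1) / 2.0)  # sigma = 1
```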
|
32 |
Studies on applications of neural networks in modeling sparse datasets and in the analysis of dynamics of CA3 in hippocampus. Keshavarz Hedayati, Babak, 23 April 2019 (has links)
Neural networks are an important tool in data science as well as in the study of the very structures that inspired them, i.e., the human nervous system. In this dissertation, we studied the application of neural networks in data modeling as well as their role in studying the properties of various structures in the nervous system. The dissertation has two foci: one is developing methods that improve generalization in data models; the other is studying the possible effects of structure on function.
For the first focus, we proposed a set of heuristics that improve the generalization capability of neural network models in regression and classification problems. To do so, we explored applying a priori information in the form of regularization of the models' behavior, using smoothness and self-consistency as the two attributes enforced on the behavior of the neural networks. We used our proposed heuristics to improve the performance of neural network ensembles in regression problems (more specifically, in quantitative structure–activity relationship (QSAR) modeling problems).
We demonstrated that these heuristics result in significant improvements in the performance of the models we used.
In addition, we developed an anomaly detection method to identify and exclude outliers among the unknown cases presented to the model, ensuring that the data model only made predictions for unknown cases within its domain of applicability. This filtering resulted in further improvement of the model's performance in our experiments.
Furthermore, through some modifications, we extended our proposed heuristics to classification problems. We evaluated the resulting classification models on several datasets and demonstrated that the regularizations employed in our heuristics had a positive effect on the performance of the data model across various classification problems as well.
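The dissertation does not reproduce its penalty terms in this abstract; as a generic illustration of a smoothness regularizer of this kind (our simplification, not the author's exact heuristic), the PyTorch sketch below penalizes how much a network's output changes under small random input perturbations, added to an ordinary regression loss.

```python
import torch
import torch.nn as nn

def smoothness_penalty(model, x, eps=1e-2, n_dirs=4):
    """Finite-difference smoothness regularizer: penalize the change in the
    network output under small random input perturbations (a stand-in for
    the smoothness heuristic, not the dissertation's exact formulation)."""
    y = model(x)
    pen = 0.0
    for _ in range(n_dirs):
        d = torch.randn_like(x)
        d = eps * d / (d.norm(dim=-1, keepdim=True) + 1e-12)  # random small step
        pen = pen + ((model(x + d) - y) ** 2).mean()
    return pen / n_dirs

model = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, t = torch.randn(64, 10), torch.randn(64, 1)    # toy regression batch
for _ in range(100):
    opt.zero_grad()
    loss = ((model(x) - t) ** 2).mean() + 0.1 * smoothness_penalty(model, x)
    loss.backward()
    opt.step()
```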
In the second part of this dissertation, we focused on the relationship between structure and functionality in the nervous system, more specifically, whether structure implies functionality. To study these possible effects, we elected to study CA3b in the hippocampus and used the current literature to derive a physiologically plausible model of it. To make our proposed model as close as possible to its counterpart in the nervous system, we used large-scale neural simulations, in excess of 45,000 neurons, in our experiments.
We used the collective firings of all the neurons in our proposed structure to produce a time-series signal, which approximates the overall output of the structure as an EEG probe would record it, and took this signal as the structure's output. In our simulations, the structure produced and maintained a low-frequency rhythm, which we believe is similar to the theta rhythm that occurs naturally in CA3b. We used the fundamental frequency of this rhythm to quantify the effects of modifications to the structure: we modified various properties of our CA3b model and measured the changes in the fundamental frequency of the signal.
We conducted various experiments on the structural properties of the simulated CA3b structure (the length of the neurons' axons, the density of connections around the neurons, etc.). Our results showed that the structure was very resilient to such modifications.
Finally, we studied the effects of lesions in such a resilient structure. For these experiments, we introduced two types of lesions: many lesions of small radius and a few lesions of large radius. We then increased the severity of these lesions by increasing the number of lesions in the case of the former and the radius of the lesions in the case of the latter. Our results showed that many small lesions have a more pronounced effect on the fundamental frequency than a few large lesions. / Graduate
|
33 |
Two component semiparametric density mixture models with a known component. Zhou Shen (5930258), 17 January 2019 (has links)
Finite mixture models have been successfully used in many applications, such as classification, clustering, and many others. As opposed to classical parametric mixture models, nonparametric and semiparametric mixture models often provide more flexible approaches to the description of inhomogeneous populations. As an example, in the last decade a particular two-component semiparametric density mixture model with a known component has attracted substantial research interest. Our thesis provides an innovative way of estimating this model based on the minimization of a smoothed objective functional, conceptually similar to the log-likelihood. The minimization is performed with the help of an EM-like algorithm. We show that the algorithm is convergent and that the minimizers of the objective functional, viewed as estimators of the model parameters, are consistent.

More specifically, a semiparametric mixture of two density functions is considered where one of them is known while the weight and the other function are unknown. In the first part, a new sufficient identifiability condition for this model is derived, and a specific class of distributions describing the unknown component is given for which this condition is mostly satisfied. A novel approach to the estimation of this model is then developed, based on the idea of using a smoothed likelihood-like functional as the objective in order to avoid the ill-posedness of the original problem. Minimization of this functional is performed using an iterative Majorization-Minimization (MM) algorithm that estimates all of the unknown parts of the model. The algorithm possesses a descent property with respect to the objective functional; moreover, we show that the algorithm converges even when the unknown density is not defined on a compact interval. We then study the properties of the minimizers of this functional viewed as estimators of the mixture model parameters. Their convergence to the true solution with respect to a bandwidth parameter is justified by recasting the problem in the framework of Tikhonov-type functionals, and they also turn out to be large-sample consistent, which is justified using an empirical minimization approach. The third part of the thesis contains a series of simulation studies, a comparison with another method, and a real-data example, all of which show the good performance of the proposed algorithm in recovering unknown components from data.
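As a rough illustration of this family of estimators (not the thesis's exact smoothed MM scheme), the sketch below iterates an EM-like update for a mixture p*f + (1-p)*f0 with f0 known: posterior weights in the E-step, then an update of the mixing weight and a weighted kernel density estimate of the unknown component. The Gaussian kernel, the bandwidth h, and the starting values are our assumptions.

```python
import numpy as np
from scipy.stats import norm

def sp_em(x, f0, p=0.5, h=0.3, iters=100):
    """EM-like sketch for the mixture p*f + (1-p)*f0 with f0 known and the
    density f of the other component estimated by a weighted KDE."""
    n = len(x)
    w = np.full(n, 0.5)                                # posterior prob. of unknown comp.
    K = norm.pdf((x[:, None] - x[None, :]) / h) / h    # Gaussian kernel matrix
    for _ in range(iters):
        f = K @ w / w.sum()                            # weighted KDE at the data points
        num = p * f
        w = num / (num + (1 - p) * f0(x))              # E-step: posterior weights
        p = w.mean()                                   # M-step: mixing weight
    return p, w

# toy data: 30% unknown N(3,1) mixed with 70% known N(0,1)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(3, 1, 300), rng.normal(0, 1, 700)])
p_hat, _ = sp_em(x, f0=norm.pdf)
print(p_hat)   # expect a value near 0.3
```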
|
34 |
Políticas para o campo no Amazonas: o papel político do Instituto de Terras do Amazonas (ITEAM). Oneti, Maglúcia Izabel de Assis, 18 May 2010 (has links)
The land question in Amazonas has historically been a target of state intervention, and the state has not recognized the cultural diversity of the social groups settled on the land, caring only about the territory itself. By encouraging settlement and colonization in the Amazon, more massively from the twentieth century onward, public policies opened the door to land conflicts and to the emergence of new agents who structured the social, political, economic and cultural relations of the state. For small farmers, riverine dwellers and other rural agents, the possibility of a new life emerged in the city, attracted by the establishment of the Manaus Free Trade Zone (MFZ), an industrial pole of electronic products that tore up the green areas of Manaus, generating disorderly growth of the capital as a consequence of the rural exodus. All these problems represent the expansion of capitalism in the countryside of Amazonas and of Brazil, with the monopoly of large landholdings, exploitation of the land, bitter conflicts between social classes, migration and exodus (IANNI, 1986). In the current period, the institute responsible for the land question, the Land Institute of Amazonas (Instituto de Terras do Amazonas, ITEAM), linked to the state government, intends to use its stock of public lands, around 39.0% of the state, to implement agrarian reform policies through land-tenure regularization or rural settlements. Such policies would be consistent with the actions the Lula government intends to carry out for the countryside: an agrarian reform "of quality", with models of forest settlements grounded in the mainstreaming of public policies (PASQUIS et al., 2005). ITEAM's proposal is thus to implement settlements in accordance with the reality of local social agents in five municipalities of Amazonas, with the participation both of the staff of the institutions that work on land issues and of those who live and dwell on the land. In examining this problem, that of policies for the countryside in Amazonas, we concentrate on the analysis of the State/society relationship, with ITEAM's proposed agrarian reform policy as the backdrop, without focusing directly on the content of the policies, plans and projects. In this way, we aim to understand this historical relationship in the context of the land problem in Brazil, the consequences of the policies for the countryside in Amazonas and, in the current context, the changes within the state that can influence today's land policies.
|
35 |
Ill-Posed Problems in Early Vision. Bertero, Mario, Poggio, Tomaso, Torre, Vincent, 01 May 1987 (has links)
The first processing stage in computational vision, also called early vision, consists in decoding 2D images in terms of properties of 3D surfaces. Early vision includes problems such as the recovery of motion and optical flow, shape from shading, surface interpolation, and edge detection. These are inverse problems, which are often ill-posed or ill-conditioned. We review here the relevant mathematical results on ill-posed and ill-conditioned problems and introduce the formal aspects of regularization theory in the linear and non-linear case. More general stochastic regularization methods are also introduced. Specific topics in early vision and their regularization are then analyzed rigorously, characterizing existence, uniqueness, and stability of solutions.
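For reference, the prototypical scheme this review builds on can be stated compactly; the display below is our addition, with A the imaging operator, P a stabilizing operator (e.g., a derivative enforcing smoothness) and lambda > 0 the regularization parameter:

```latex
% Tikhonov regularization of the ill-posed inverse problem A z = y:
% trade fidelity to the data against a smoothness-enforcing stabilizer.
\hat{z} \;=\; \arg\min_{z}\; \|A z - y\|^{2} \;+\; \lambda\,\|P z\|^{2}
```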
|
36 |
Learning a Color Algorithm from Examples. Hurlbert, Anya, Poggio, Tomaso, 01 June 1987 (has links)
We show that a color algorithm capable of separating illumination from reflectance in a Mondrian world can be learned from a set of examples. The learned algorithm is equivalent to filtering the image data, in which reflectance and illumination are mixed, through a center-surround receptive field in individual chromatic channels. The operation resembles the "retinex" algorithm recently proposed by Edwin Land. This result is a specific instance of our earlier result that a standard regularization algorithm can be learned from examples. It illustrates that the natural constraints needed to solve a problem in inverse optics can be extracted directly from a sufficient set of input data and the corresponding solutions. The learning procedure has been implemented as a parallel algorithm on the Connection Machine System.
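As a loose illustration of the kind of operation described, a center-surround filter in a single chromatic channel, the sketch below applies a difference-of-Gaussians filter to the logarithm of a toy Mondrian-times-illumination image. The filter widths and the synthetic image are our assumptions, not the paper's learned operator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(channel, sigma_c=1.0, sigma_s=3.0):
    """Difference-of-Gaussians center-surround filter for one chromatic
    channel (illustrative sigmas, not the paper's fitted filter)."""
    center = gaussian_filter(channel, sigma_c)    # narrow excitatory center
    surround = gaussian_filter(channel, sigma_s)  # broad inhibitory surround
    return center - surround

# toy Mondrian: piecewise-constant reflectance under a smooth illumination gradient
rng = np.random.default_rng(1)
reflectance = np.kron(rng.uniform(0.2, 1.0, (4, 4)), np.ones((16, 16)))
illumination = 0.5 + 0.5 * np.linspace(0, 1, 64)[None, :]
image = reflectance * illumination
# in log space the product becomes a sum, and the smooth illumination term
# is largely removed by the surround subtraction
filtered = center_surround(np.log(image))
```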
|
37 |
Priors, Stabilizers and Basis Functions: From Regularization to Radial, Tensor and Additive Splines. Girosi, Federico, Jones, Michael, Poggio, Tomaso, 01 June 1993 (has links)
We had previously shown that regularization principles lead to approximation schemes, such as Radial Basis Functions, which are equivalent to networks with one layer of hidden units, called Regularization Networks. In this paper we show that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models, Breiman's hinge functions and some forms of Projection Pursuit Regression. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In the final part of the paper, we also show a relation between activation functions of the Gaussian and sigmoidal type.
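A minimal sketch of a regularization network of the kind discussed, assuming Gaussian radial basis functions centered at the data points; the coefficients solve the standard regularized linear system (G + lambda I) c = y. The bandwidth and lambda are illustrative choices, not the paper's.

```python
import numpy as np

def fit_rbf_network(X, y, sigma=0.5, lam=1e-3):
    """Regularization network with one Gaussian basis function per data
    point; coefficients come from the regularized system (G + lam*I) c = y."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2 * sigma**2))                  # Gram matrix of Gaussians
    c = np.linalg.solve(G + lam * np.eye(len(X)), y)  # regularized coefficients
    def predict(Xq):
        d2q = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2q / (2 * sigma**2)) @ c
    return predict

# 1-D toy problem: noisy sine
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, (40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
f = fit_rbf_network(X, y)
print(f(np.array([[np.pi / 2]])))   # expect a value near 1
```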
|
38 |
Multiwavelet analysis on fractals. Brodin, Andreas, January 2007 (has links)
This thesis consists of an introduction and a summary, followed by two papers, both of them on the topic of function spaces on fractals. Paper I: Andreas Brodin, Pointwise Convergence of Haar type Wavelets on Self-Similar Sets, Manuscript. Paper II: Andreas Brodin, Regularization of Wavelet Expansion characterizes Besov Spaces on Fractals, Manuscript. Properties of wavelets, originally constructed by A. Jonsson, are studied in both papers. The wavelets are piecewise polynomial functions on self-similar fractal sets. In Paper I, pointwise convergence of partial sums of the wavelet expansion is investigated. On a specific fractal set, the Sierpinski gasket, pointwise convergence of the partial sums is shown by calculating the kernel explicitly, when the wavelets are piecewise constant functions. For more general self-similar fractals, pointwise convergence of the partial sums, and of their derivatives in case the expanded function has higher regularity, is shown using a different technique based on Markov's inequality. A. Jonsson has shown that on a class of totally disconnected self-similar sets it is possible to characterize Besov spaces by means of the magnitude of the coefficients in the wavelet expansion of a function. M. Bodin has extended these results to a class of graph-directed self-similar sets introduced by Mauldin and Williams. Unfortunately, these results only hold for fractals such that the sets in the first generation of the fractal are disjoint. In Paper II we are able to characterize Besov spaces on a class of fractals that do not necessarily satisfy this condition by making the wavelet expansion smooth. We create continuous regularizations of the partial sums of the wavelet expansion and show that properties of these regularizations can be used to characterize Besov spaces.
|
39 |
Regularization for Sparseness and Smoothness: Applications in System Identification and Signal Processing. Ohlsson, Henrik, January 2010 (has links)
In system identification, the Akaike Information Criterion (AIC) is a well-known method to balance model fit against model complexity; regularization here acts as a price on model complexity. In statistics and machine learning, regularization has gained popularity due to modeling methods such as Support Vector Machines (SVM), ridge regression and the lasso. Also when using a Bayesian approach to modeling, regularization often shows up implicitly and can be associated with the prior knowledge. Regularization has also had a great impact on many applications, very much so in clinical imaging. In breast cancer imaging, for example, the number of sensors is physically restricted, which leads to long scan times; regularization and sparsity can be used to reduce them. In Magnetic Resonance Imaging (MRI), the number of scans is physically limited, and to obtain high-resolution images, regularization plays an important role. Regularization shows up in a variety of situations and is a well-known technique for handling ill-posed problems and controlling overfitting. We focus on the use of regularization to obtain sparseness and smoothness and discuss novel developments relevant to system identification and signal processing. In regularization for sparsity, a quantity is forced to contain elements equal to zero, or to be sparse. The quantity could, for example, be the regression parameter vector of a linear regression model, and regularization would then result in a tool for variable selection. Sparsity has had a huge impact on neighboring disciplines, such as machine learning and signal processing, but a rather limited effect on system identification. One of the major contributions of this thesis is therefore the new developments in system identification using sparsity. In particular, a novel method for the estimation of segmented ARX models using regularization for sparsity is presented. A technique for piecewise-affine system identification is also elaborated on, as well as several novel applications in signal processing. Another property that regularization can be used to impose is smoothness. Requiring the relation between regressors and predictions to be a smooth function is a way to control overfitting. We are here particularly interested in regression problems with regressors constrained to limited regions of the regressor space, e.g., a manifold. For this type of system we develop a new regression technique, Weight Determination by Manifold Regularization (WDMR). WDMR is inspired by applications in biology and developments in manifold learning, and uses regularization for smoothness to obtain smooth estimates. The use of regularization for smoothness in linear system identification is also discussed. The thesis also presents a real-time functional Magnetic Resonance Imaging (fMRI) bio-feedback setup. The setup has served as a proof of concept and been the foundation for several real-time fMRI studies.
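As a concrete instance of regularization for sparsity of the kind the thesis builds on (the l1 penalty behind the lasso), here is a minimal iterative soft-thresholding (ISTA) sketch; it is a generic illustration, not the thesis's segmented-ARX method.

```python
import numpy as np

def lasso_ista(A, y, lam=0.1, iters=500):
    """l1-regularized least squares via ISTA:
    minimize 0.5*||A x - y||^2 + lam*||x||_1.
    Soft-thresholding drives coefficients exactly to zero, which is the
    sparseness used, e.g., for variable selection."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)              # gradient of the smooth part
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# toy sparse regression: 3 active regressors out of 20
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))
x_true = np.zeros(20)
x_true[[2, 7, 13]] = [1.5, -2.0, 1.0]
y = A @ x_true + 0.05 * rng.normal(size=100)
print(np.nonzero(lasso_ista(A, y, lam=1.0))[0])   # expect the support [2 7 13]
```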
|
40 |
Comparison Of Five Regularization Methods For The Solution Of Inverse Electrocardiography Problem. Guclu, Alperen, 01 February 2013 (has links) (PDF)
Understanding the heart's electrical activity is very important because coronary problems, such as heart attacks, arrhythmia and stroke, are the leading cause of death in the world. The forward and inverse problems of electrocardiography (ECG) are methods that provide detailed information about the electrical activity of the heart. The forward problem of electrocardiography is the estimation of body surface potentials from equivalent cardiac sources. The inverse problem of electrocardiography can be described as the estimation of the electrical sources in the heart using potential measurements obtained from the body surface. Due to the spatial smoothing and attenuation that occur within the thorax, the inverse ECG problem is ill-posed and the transfer matrix is ill-conditioned; thus, regularization is needed to find a stable and accurate solution. In this thesis, epicardial potentials are used as equivalent cardiac sources to represent the electrical activity of the heart, and the performances of five different regularization methods are compared. These regularization methods are Tikhonov regularization, truncated singular value decomposition, least squares QR factorization, truncated total least squares, and Lanczos truncated total least squares. Results are assessed quantitatively using correlation coefficient (CC) and relative difference measurement star (RDMS) measures; in addition, real and reconstructed surface potential distributions are compared qualitatively. Body surface potential measurements are simulated with different levels of measurement noise. Geometric errors are also included by changing the size and the location of the heart in the mathematical torso model. According to our test results, the performance of the regularization methods in solving the inverse ECG problem depends on the form and amount of the noise.
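A minimal sketch of two of the compared families, zero-order Tikhonov regularization and truncated SVD, on a synthetic ill-conditioned system standing in for the torso transfer matrix; the toy matrix, noise level and parameter choices are ours, not the thesis's torso model.

```python
import numpy as np

def tikhonov(A, y, lam):
    """Zero-order Tikhonov: x = argmin ||A x - y||^2 + lam^2 ||x||^2."""
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ y)

def tsvd(A, y, k):
    """Truncated SVD: keep only the k largest singular components."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

def cc(x, x_hat):
    """Correlation coefficient (CC) between true and reconstructed sources."""
    return np.corrcoef(x, x_hat)[0, 1]

# toy ill-conditioned transfer matrix with a rapidly decaying spectrum
rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.normal(size=(60, 60)))
Q2, _ = np.linalg.qr(rng.normal(size=(60, 60)))
A = Q1 @ np.diag(np.logspace(0, -8, 60)) @ Q2.T
x_true = np.sin(np.linspace(0, 3 * np.pi, 60))    # stand-in epicardial source
y = A @ x_true + 1e-6 * rng.normal(size=60)       # noisy body-surface data
print(cc(x_true, tikhonov(A, y, lam=1e-4)), cc(x_true, tsvd(A, y, k=25)))
```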
|