71

Recent Techniques for Regularization in Partial Differential Equations and Imaging

January 2018 (has links)
abstract: Inverse problems model real-world phenomena from data, where the data are often noisy and the models contain errors. This leads to instabilities, multiple solution vectors and thus ill-posedness. To solve ill-posed inverse problems, regularization is typically used as a penalty function to induce stability and to allow the incorporation of a priori information about the desired solution. In this thesis, high-order regularization techniques are developed for image and function reconstruction from noisy or misleading data. Specifically, the incorporation of the Polynomial Annihilation operator allows for the accurate exploitation of the sparse representation of each function in the edge domain. This dissertation tackles three main problems through the development of novel reconstruction techniques: (i) reconstructing one- and two-dimensional functions from multiple measurement vectors using variance-based joint sparsity when a subset of the measurements contains false and/or misleading information, (ii) approximating discontinuous solutions to hyperbolic partial differential equations by enhancing typical solvers with l1 regularization, and (iii) reducing model assumptions in synthetic aperture radar image formation, specifically for the purpose of speckle reduction and phase error correction. While the common thread tying these problems together is the use of high-order regularization, the defining characteristics of each problem create unique challenges. Fast and robust numerical algorithms are also developed so that these problems can be solved efficiently without requiring fine-tuning of parameters. Indeed, the numerical experiments presented in this dissertation strongly suggest that the new methodology provides more accurate and robust solutions to a variety of ill-posed inverse problems. / Dissertation/Thesis / Doctoral Dissertation Mathematics 2018
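The l1 regularization invoked in (ii) can be illustrated with a minimal iterative soft-thresholding (ISTA) sketch. This is a generic textbook solver, not the dissertation's method: it assumes sparsity of the unknown itself, whereas the thesis exploits sparsity under the Polynomial Annihilation transform, and all problem sizes and parameters below are invented for illustration.

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - b)        # gradient step on the data-fit term
        x = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)  # soft-threshold
    return x

# Toy ill-posed problem: 50 noisy measurements of a 200-dimensional sparse signal
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[[10, 70, 150]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, b, lam=0.05)
```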
72

Sublinear-Time Learning and Inference for High-Dimensional Models

Yan, Enxu 01 May 2018 (has links)
Across domains, the scale of data and the complexity of models have both increased greatly in recent years. For many models of interest, tractable learning and inference without access to expensive computational resources have become challenging. In this thesis, we approach efficient learning and inference by leveraging sparse structures inherent in the learning objective, which allows us to develop algorithms sublinear in the size of the parameters without compromising model accuracy. In particular, we address the following three questions for each problem of interest: (a) how to formulate model estimation as an optimization problem with tractable sparse structure, (b) how to efficiently, i.e. in sublinear time, search, maintain, and utilize the sparse structures during training and inference, and (c) how to guarantee fast convergence of the optimization algorithm despite its greedy nature. By answering these questions, we develop state-of-the-art algorithms in several domains. Specifically, in the extreme classification domain, we utilize primal and dual sparse structures to develop greedy algorithms with complexity sublinear in the number of classes, which obtain state-of-the-art accuracies on several benchmark data sets with a speedup of one to two orders of magnitude over existing algorithms. We also apply the primal-dual-sparse theory to develop a state-of-the-art trimming algorithm for Deep Neural Networks, which sparsifies the neuron connections of a DNN with a task-dependent theoretical guarantee, yielding models with smaller storage cost and faster inference. For structured prediction problems (i.e. graphical models) with inter-dependent outputs, we propose decomposition methods that exploit sparse messages to decompose a structured learning problem over large output domains into factor-wise learning modules amenable to sublinear-time optimization methods, leading to alternatives that are much faster in practice than existing learning algorithms. The decomposition technique is especially effective when combined with search data structures, such as those for Maximum Inner-Product Search (MIPS), to jointly improve the learning efficiency. Last but not least, we design novel convex estimators for a latent-variable model by reparameterizing it as a solution with sparse support in an exponentially high-dimensional space, and approximate it with a greedy algorithm, which yields the first polynomial-time approximation method for Latent-Feature Models and Generalized Mixed Regression without restrictive data assumptions.
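As a toy illustration of the inference-side sparsity (not the thesis's MIPS-based algorithms, which avoid computing all class scores in the first place): when only the top few classes matter, a partial selection replaces a full sort over the class dimension. All sizes below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
C, d, k = 100_000, 256, 5            # number of classes, feature dim, labels to return
W = rng.standard_normal((C, d))      # class weight matrix
x = rng.standard_normal(d)           # input representation

scores = W @ x                       # O(Cd) -- the step MIPS data structures sidestep
topk = np.argpartition(scores, -k)[-k:]       # O(C) partial selection, no full sort
topk = topk[np.argsort(scores[topk])[::-1]]   # order only the k winners
print(topk, scores[topk])
```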
73

Esparsidade estruturada em reconstrução de fontes de EEG / Structured Sparsity in EEG Source Reconstruction

André Biasin Segalla Francisco 27 March 2018 (has links)
Functional neuroimaging is an area of neuroscience that aims at developing techniques to map the activity of the nervous system, and it has been under constant development over the last decades owing to its importance for clinical applications and research. Commonly applied techniques such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) have excellent spatial resolution (~ mm) but limited temporal resolution (~ s), which poses a great challenge to our understanding of the dynamics of higher cognitive functions, whose oscillations can occur on much finer temporal scales (~ ms). This limitation arises because these techniques measure slow biological responses that are only indirectly correlated with the actual electrical activity of the brain. The two major candidates that overcome this shortcoming are electro- and magnetoencephalography (EEG/MEG), non-invasive techniques that measure, respectively, the electric and magnetic fields on the scalp generated by the electrical brain sources. Both have millisecond temporal resolution but typically low spatial resolution (~ cm), due to the highly ill-posed nature of the electromagnetic inverse problem. A huge effort has been made over the last decades to improve their spatial resolution by incorporating relevant information from other imaging modalities and/or biologically inspired constraints, allied with the development of sophisticated mathematical methods and algorithms. In this work we focus on EEG, although all techniques presented here can be equally applied to MEG because of their identical mathematical form. In particular, we explore sparsity as a useful mathematical constraint within a Bayesian framework called Sparse Bayesian Learning (SBL), which enables meaningful unique solutions to the source reconstruction problem. Moreover, we investigate how to incorporate different structures as degrees of freedom into this framework, an application of structured sparsity, and show that it is a promising way to improve the source reconstruction accuracy of electromagnetic imaging methods.
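A minimal sketch of the SBL iteration in its classic EM form may help fix ideas; this is the generic algorithm under simplifying assumptions (known noise variance, a given lead-field matrix Phi), not the structured variants developed in this work.

```python
import numpy as np

def sbl(Phi, y, sigma2=1e-2, n_iter=100):
    """Sparse Bayesian Learning: EM updates of per-source variances gamma.
    Sources whose gamma shrinks toward zero are effectively pruned."""
    m = Phi.shape[1]
    gamma = np.ones(m)
    for _ in range(n_iter):
        gamma = np.maximum(gamma, 1e-12)     # numerical floor
        # Posterior covariance and mean of the source amplitudes
        Sigma = np.linalg.inv(np.diag(1.0 / gamma) + Phi.T @ Phi / sigma2)
        mu = Sigma @ Phi.T @ y / sigma2
        gamma = mu**2 + np.diag(Sigma)       # EM update of the hyperparameters
    return mu, gamma
```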
74

Modélisation de contextes pour l'annotation sémantique de vidéos / Context based modeling for video semantic annotation

Ballas, Nicolas 12 November 2013 (has links)
Recent years have witnessed an explosion in the amount of multimedia content available. In 2010 the video-sharing website YouTube announced that 35 hours of video were uploaded to its site every minute, whereas in 2008 users were "only" uploading 12 hours of video per minute. Given such growth in data volumes, human analysis of each video is no longer feasible; automated video analysis systems need to be developed. This thesis proposes a solution to automatically annotate video content with a textual description. The core novelty of the thesis is the use of multiple sources of contextual information to perform the annotation. With the constant expansion of online visual collections, automatic video annotation has become a major problem in computer vision. It consists in detecting various objects (human, car...), dynamic actions (running, driving...) and scene characteristics (indoor, outdoor...) in unconstrained videos. Progress in this domain would impact a wide range of applications, including video search, intelligent video surveillance and human-computer interaction. Although some improvements have been achieved in concept annotation, it remains an unsolved problem, notably because of the semantic gap. The semantic gap is defined as the lack of correspondence between video features and high-level human understanding. This gap is principally due to intra-concept variability caused by photometry changes, object deformation, object motion, camera motion or viewpoint changes. To tackle the semantic gap, we enrich the description of a video with multiple sources of contextual information. Context is defined as "the set of circumstances in which an event occurs". Video appearance, motion or space-time distribution can be considered as contextual clues associated with a concept. We argue that a single context is not informative enough to discriminate a concept in a video; however, by considering several contexts at the same time, we can bridge the semantic gap.
75

Greedy algorithms for multi-channel sparse recovery

Determe, Jean-François 16 January 2018 (has links)
During the last decade, research has shown compressive sensing (CS) to be a promising theoretical framework for reconstructing high-dimensional sparse signals. Leveraging a sparsity hypothesis, algorithms based on CS reconstruct signals from a limited set of (often random) measurements. Such algorithms require fewer measurements than conventional techniques to fully reconstruct a sparse signal, thereby saving time and hardware resources. This thesis addresses several challenges. The first is to theoretically understand how certain parameters, such as the noise variance, affect the performance of simultaneous orthogonal matching pursuit (SOMP), a greedy support recovery algorithm tailored to multiple measurement vector signal models. Chapters 4 and 5 detail novel improvements in understanding the performance of SOMP. Chapter 4 presents analyses of SOMP for noiseless measurements; using those analyses, Chapter 5 extensively studies the performance of SOMP in the noisy case. A second challenge consists in optimally weighting the impact of each measurement vector on the decisions of SOMP. If measurement vectors feature unequal signal-to-noise ratios, properly weighting their impact improves the performance of SOMP. Chapter 6 introduces a novel weighting strategy from which SOMP benefits: the chapter describes the strategy, derives theoretically optimal weights for it, and presents both theoretical and numerical evidence that it improves the performance of SOMP. Finally, Chapter 7 deals with the tendency of support recovery algorithms to pick support indices solely to fit a particular noise realization. To ensure that such algorithms pick all the correct support indices, researchers often make them pick more support indices than strictly required. Chapter 7 presents a support reduction technique, that is, a technique that removes from a support the supernumerary indices that merely fit the noise. The advantage of the technique, which relies on cross-validation, is its universality: it makes no assumption about the support recovery algorithm that generated the support. Theoretical results demonstrate that the technique is reliable, and numerical evidence shows that it performs similarly to orthogonal matching pursuit with cross-validation (OMP-CV), a state-of-the-art algorithm for support reduction. / Doctorat en Sciences de l'ingénieur et technologie / info:eu-repo/semantics/nonPublished
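A minimal textbook implementation of SOMP (unweighted, with the sparsity level k assumed known) conveys the greedy loop that the thesis analyzes; the weighted variants of Chapter 6 and the cross-validated support reduction of Chapter 7 build on this skeleton.

```python
import numpy as np

def somp(A, Y, k):
    """Simultaneous OMP: recover a common support of size k shared by the
    columns of Y, where Y ~= A @ X and X is row-sparse."""
    R = Y.copy()                                 # residual matrix
    support = []
    for _ in range(k):
        # Score each atom by its total correlation with all residual columns
        scores = np.abs(A.T @ R).sum(axis=1)
        scores[support] = -np.inf                # never pick an atom twice
        support.append(int(np.argmax(scores)))
        # Re-project Y onto the span of the atoms selected so far
        A_s = A[:, support]
        X_s, *_ = np.linalg.lstsq(A_s, Y, rcond=None)
        R = Y - A_s @ X_s
    return support, X_s                          # indices in selection order
```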
76

Cosparse regularization of physics-driven inverse problems / Régularisation co-parcimonieuse de problèmes inverse guidée par la physique

Kitic, Srdan 26 November 2015 (has links)
Inverse problems related to physical processes are of great importance in practically every field related to signal processing, such as tomography, acoustics, wireless communications, medical and radar imaging, to name only a few. At the same time, many of these problems are quite challenging due to their ill-posed nature. On the other hand, signals originating from physical phenomena are often governed by laws expressible through linear Partial Differential Equations (PDE), or equivalently, integral equations and the associated Green's functions. In addition, these phenomena are usually induced by sparse singularities, appearing as sources or sinks of a vector field. In this thesis we primarily investigate the coupling of such physical laws with a prior assumption on the sparse origin of a physical process. This gives rise to a "dual" regularization concept, formulated either as sparse analysis (cosparse) regularization, yielded by a PDE representation, or as an equivalent sparse synthesis regularization, if the Green's functions are used instead. We devote a significant part of the thesis to the comparison of these two approaches. We argue that, despite nominal equivalence, their computational properties are very different. Indeed, due to the inherited sparsity of the discretized PDE (embodied in the analysis operator), the analysis approach scales much more favorably than the equivalent problem regularized by the synthesis approach. Our findings are demonstrated on two applications: acoustic source localization and epileptic source localization in electroencephalography. In both cases, we verify that the cosparse approach exhibits superior scalability, even allowing for full (time-domain) wavefield interpolation in three spatial dimensions. Moreover, in the acoustic setting, the analysis-based optimization benefits from an increased amount of observation data, resulting in processing times that are orders of magnitude faster than with the synthesis approach. Numerical simulations show that the developed methods in both applications are competitive with state-of-the-art localization algorithms in their corresponding areas. Finally, we present two sparse analysis methods for blind estimation of the speed of sound and acoustic impedance, simultaneously with wavefield interpolation. This is an important step toward practical implementation, where most physical parameters are unknown beforehand. The versatility of the approach is demonstrated on the "hearing behind walls" scenario, in which traditional localization methods necessarily fail. Additionally, by means of a novel algorithmic framework, we tackle the audio declipping problem regularized by sparsity or cosparsity. Our method is highly competitive against the state of the art and, in the cosparse setting, allows for an efficient (even real-time) implementation.
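The scalability argument can be made concrete with a toy 1D sketch (an illustration of the general point, not the thesis's acoustic or EEG models): the discretized PDE operator is extremely sparse, while the equivalent synthesis dictionary of discrete Green's functions is dense.

```python
import numpy as np
from scipy.sparse import diags

n = 500
# Analysis operator: discretized 1D Laplacian (tridiagonal, about 3n nonzeros)
L = diags([1.0, -2.0, 1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
print(f"analysis operator nonzeros: {L.nnz} of {n * n}")

# Synthesis dictionary: the columns of L^-1 are the discrete Green's functions (dense)
G = np.linalg.inv(L.toarray())
print(f"synthesis dictionary nonzeros: {np.count_nonzero(np.abs(G) > 1e-12)} of {n * n}")
```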
77

Cooperative Wideband Spectrum Sensing Based on Joint Sparsity

Jowkar, Ghazaleh 01 January 2017 (has links)
In this thesis, the problem of wideband spectrum sensing in cognitive radio (CR) networks using sub-Nyquist sampling and sparse signal processing techniques is investigated. To mitigate multi-path fading, it is assumed that a group of spatially dispersed secondary users (SUs) collaborate for wideband spectrum sensing, to determine whether or not a channel is occupied by a primary user (PU). Due to the underutilization of the spectrum by the PUs, the spectrum matrix has only a small number of non-zero rows. In existing state-of-the-art approaches, the spectrum sensing problem was solved using the low-rank matrix completion technique involving matrix nuclear-norm minimization. Motivated by the fact that the spectrum matrix is not only low-rank but also sparse, a spectrum sensing approach is proposed based on minimizing a mixed norm of the spectrum matrix instead of low-rank matrix completion, to promote joint sparsity among the column vectors of the spectrum matrix. Simulation results are obtained which demonstrate that the proposed mixed-norm minimization approach outperforms the low-rank matrix completion based approach in terms of PU detection performance. Furthermore, we apply the mixed-norm minimization model to multi-time-frame detection. Simulation results show that increasing the number of time frames improves the detection performance at first; beyond a certain number of frames, however, the performance decreases dramatically.
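A minimal sketch of joint-sparse recovery via mixed-norm (l2,1) minimization, solved by proximal gradient descent with row-wise soft-thresholding; this is a generic formulation with invented parameters, not the exact estimator used in the thesis.

```python
import numpy as np

def prox_l21(X, t):
    """Prox of t*||X||_{2,1}: row-wise soft-thresholding, which zeroes
    entire rows at once and thereby promotes joint sparsity."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

def joint_sparse_recovery(A, Y, lam, n_iter=300):
    """Solve min_X 0.5*||A X - Y||_F^2 + lam*||X||_{2,1} by ISTA."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L for the smooth term
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        X = prox_l21(X - step * A.T @ (A @ X - Y), lam * step)
    return X
```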
78

Machine Learning Methods for Visual Object Detection / Apprentissage machine pour la détection des objets

Hussain, Sabit ul 07 December 2011 (has links)
The goal of this thesis is to develop better practical methods for detecting common object classes in real-world images. We present a family of object detectors that combine Histogram of Oriented Gradients (HOG), Local Binary Pattern (LBP) and Local Ternary Pattern (LTP) features with efficient Latent SVM classifiers and effective dimensionality reduction and sparsification schemes to give state-of-the-art performance on several important datasets, including PASCAL VOC2006 and VOC2007, INRIA Person and ETHZ. The three main contributions are as follows. Firstly, we pioneer the use of Local Ternary Pattern features for object detection, showing that LTP gives better overall performance than HOG and LBP because it captures both rich local texture and object shape information while being resistant to variations in lighting conditions. It thus works well both for classes that are recognized mainly by their structure and for ones that are recognized mainly by their textures. We also show that HOG, LBP and LTP complement one another, so that an extended feature set that incorporates all three of them gives further improvements in performance. Secondly, in order to tackle the speed and memory usage problems associated with high-dimensional modern feature sets, we propose two effective dimensionality reduction techniques. The first, feature projection using Partial Least Squares, allows detectors to be trained more rapidly with negligible loss of accuracy and no loss of run-time speed for linear detectors. The second, feature selection using SVM weight truncation, allows active feature sets to be reduced in size by almost an order of magnitude with little or no loss, and often a small gain, in detector accuracy. Despite its simplicity, this feature selection scheme outperforms all of the other sparsity-enforcing methods that we have tested. Lastly, we describe work in progress on Local Quantized Patterns (LQP), a generalized form of local pattern features that uses lookup-table-based vector quantization to provide local-pattern-style pixel neighbourhood codings that have the speed of LBP/LTP and some of the flexibility and power of traditional visual word representations. Our experiments show that LQP outperforms all of the other feature sets tested, including HOG, LBP and LTP.
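A compact sketch of the basic 8-neighbour LBP coding that such detectors build on (an illustration, not the thesis code); LTP extends this by comparing each neighbour against the centre value with a tolerance band of width ±t and splitting the resulting ternary pattern into two binary ones.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour Local Binary Pattern: each interior pixel gets an
    8-bit code of sign comparisons with its neighbours."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:],   img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(neighbours):
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

img = np.random.default_rng(0).integers(0, 256, size=(8, 8), dtype=np.uint8)
print(lbp_codes(img))     # 6x6 array of LBP codes for the interior pixels
```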
79

Técnicas computacionais para a implementação eficiente e estável de métodos tipo simplex / Computational techniques for an efficient and stable implementation of simplex-type methods

Pedro Augusto Munari Junior 06 March 2009 (has links)
Simplex-type methods are the basis of the main linear optimization solvers. A straightforward implementation of these methods, as they are presented in theory, yields unexpected results when solving real-life large-scale problems. Hence, it is essential to use suitable computational techniques for an efficient and stable implementation. In this thesis, we address the main techniques, focusing on those which aim at the numerical stability of the method: the use of tolerances, a stable ratio test, scaling, and the representation of the basis matrix. For the latter topic, we present two techniques, the Product Form of the Inverse and the LU decomposition. The Netlib problems are solved using the approaches addressed, and the results are analyzed.
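The role of the basis-matrix representation can be sketched as follows (a toy illustration, not the thesis's implementation): rather than maintaining an explicit inverse, stable simplex codes factorize the basis B once and reuse the factors for the FTRAN and BTRAN solves of each iteration.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Toy basis matrix and right-hand sides from a hypothetical simplex iteration
B = np.array([[4.0, 2.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
e = np.array([0.0, 0.0, 1.0])

lu, piv = lu_factor(B)                    # factorize once per refactorization cycle
x_B = lu_solve((lu, piv), b)              # FTRAN: solve B x = b
y = lu_solve((lu, piv), e, trans=1)       # BTRAN: solve B^T y = e
```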
80

Regularized multivariate stochastic regression

Chen, Kun 01 July 2011 (has links)
In many high-dimensional problems, the dependence structure among the variables can be quite complex. An appropriate use of regularization techniques, coupled with other classical statistical methods, can often improve estimation and prediction accuracy and facilitate model interpretation by seeking a parsimonious model representation that involves only the subset of relevant variables. We propose two regularized stochastic regression approaches for efficiently estimating certain sparse dependence structures in the data. We first consider a multivariate regression setting, in which the large numbers of responses and predictors may be associated through only a few channels/pathways and each of these associations may involve only a few responses and predictors. We propose a regularized reduced-rank regression approach, in which model estimation and rank determination are conducted simultaneously and the resulting regularized estimator of the coefficient matrix admits a sparse singular value decomposition (SVD). Secondly, we consider model selection for subset autoregressive moving-average (ARMA) modelling, to which automatic selection methods do not directly apply because the innovation process is latent. We propose to identify the optimal subset ARMA model by fitting a penalized regression, e.g. the adaptive Lasso, of the time series on its own lags and on the lags of the residuals from a long autoregression fitted to the time-series data, where the residuals serve as proxies for the innovations. Computational algorithms and regularization parameter selection methods for both proposed approaches are developed, and their properties are explored both theoretically and by simulation. Under mild regularity conditions, the proposed methods are shown to be selection-consistent and asymptotically normal, and to enjoy the oracle properties. We apply the proposed approaches to several applications across disciplines, including cancer genetics, ecology and macroeconomics.
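The adaptive Lasso step can be sketched with the standard column-rescaling trick (a generic illustration under simplifying assumptions, using a plain OLS pilot fit; the thesis applies it to a design built from lagged series and lagged long-autoregression residuals).

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    """Adaptive Lasso: weight each coefficient's penalty by 1/|beta_pilot|^gamma,
    absorbed into the design matrix by rescaling its columns."""
    beta_pilot = LinearRegression().fit(X, y).coef_       # pilot OLS estimate
    w = 1.0 / np.maximum(np.abs(beta_pilot), 1e-8) ** gamma
    fit = Lasso(alpha=alpha).fit(X / w, y)                # lasso on rescaled columns
    return fit.coef_ / w                                  # map back to original scale
```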
