About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

O processo de Poisson estendido e aplicações / The extended Poisson process and applications

Salasar, Luis Ernesto Bueno 14 June 2007 (has links)
Made available in DSpace on 2016-06-02 (previous issue date: 2007-06-14). / Financiadora de Estudos e Projetos / In this dissertation we study how the extended Poisson process can be applied to construct discrete probabilistic models. An extended Poisson process is a continuous-time stochastic process whose state space is the natural numbers, obtained as a generalization of the homogeneous Poisson process in which the transition rates depend on the current state of the process. From its transition rates and the Chapman-Kolmogorov differential equations, we can determine the probability distribution of the process at any fixed time. Conversely, given any probability distribution on the natural numbers, it is possible to uniquely determine a sequence of transition rates of an extended Poisson process such that, at some instant, the one-dimensional distribution of the process coincides with the given distribution. The extended Poisson process is therefore a very flexible framework for the analysis of discrete data, since it generalizes all discrete probabilistic models. We present transition rates of the extended Poisson process that generate the Poisson, Binomial, and Negative Binomial distributions, and we derive maximum likelihood estimators, confidence intervals, and hypothesis tests for the parameters of the proposed models. We also perform a Bayesian analysis of these models with informative and noninformative priors, presenting posterior summaries and comparing these results with those obtained by classical inference.
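To make the construction concrete, here is a minimal numerical sketch (not from the dissertation): it integrates the Chapman-Kolmogorov forward equations of a pure-birth process with state-dependent rates, truncating the state space at K, and checks that constant rates recover the Poisson distribution.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import poisson

# Forward (Chapman-Kolmogorov) equations of a pure-birth process with
# state-dependent rates lam[k]:  dp_k/dt = lam[k-1] p_{k-1} - lam[k] p_k.
# Truncating the state space at K is an assumption of this sketch.
K = 60
lam = np.full(K + 1, 2.0)            # constant rates -> homogeneous Poisson

def forward(t, p):
    dp = -lam * p
    dp[1:] += lam[:-1] * p[:-1]
    return dp

p0 = np.zeros(K + 1)
p0[0] = 1.0                          # process starts in state 0
sol = solve_ivp(forward, (0.0, 3.0), p0, rtol=1e-8, atol=1e-10)
p_t = sol.y[:, -1]                   # distribution at time t = 3

# With constant rate 2.0, p_t should match Poisson(2.0 * 3.0).
print(np.allclose(p_t[:20], poisson.pmf(np.arange(20), 6.0), atol=1e-6))
```

Choosing other rate sequences lam[k] yields other one-dimensional distributions, which is the flexibility the dissertation exploits.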
142

Novas propostas em filtragem de projeções tomográficas sob ruído Poisson / New proposals for the filtering of tomographic projections under Poisson noise

Ribeiro, Eduardo da Silva 24 May 2010 (has links)
Made available in DSpace on 2016-06-02 (previous issue date: 2010-05-24). / Financiadora de Estudos e Projetos / In this dissertation we present techniques for filtering tomographic projections corrupted by Poisson noise, using variations of three filtering techniques: Bayesian estimation, Wiener filtering, and thresholding in the wavelet domain. We used ten MAP estimators, each with a different probability density as prior information; adaptive windowing was used to compute the local estimates, and a hypothesis test was used to select the probability density best suited to each projection. We used the pointwise Wiener filter and the FIR Wiener filter, in both cases with an adaptive filtering scheme. For thresholding in the wavelet domain, we evaluated the performance of four families of wavelet basis functions and four techniques for obtaining thresholds. The experiments were done with the Shepp-Logan phantom and five sets of phantom projections captured by a CT scanner developed at CNPDIA-EMBRAPA. Image reconstruction was performed with the parallel POCS algorithm, and the filtering was evaluated after reconstruction with the following error criteria: ISNR, PSNR, SSIM, and IDIV.
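As an illustration of the wavelet-domain branch of these techniques, the sketch below soft-thresholds a 1-D projection; the Anscombe transform and the universal threshold are standard choices assumed here, not necessarily the variants evaluated in the dissertation (PyWavelets is assumed installed).

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def denoise_projection(proj, wavelet="db4", level=3):
    """Soft-threshold a 1-D projection in the wavelet domain."""
    v = 2.0 * np.sqrt(proj + 3.0 / 8.0)           # Anscombe: ~unit-variance noise
    coeffs = pywt.wavedec(v, wavelet, level=level)
    thr = np.sqrt(2.0 * np.log(v.size))           # universal threshold (sigma = 1)
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    v_hat = pywt.waverec(coeffs, wavelet)[: v.size]
    return np.maximum((v_hat / 2.0) ** 2 - 3.0 / 8.0, 0.0)  # inverse Anscombe

rng = np.random.default_rng(0)
ideal = 50.0 * np.exp(-np.linspace(-2, 2, 256) ** 2)  # toy projection profile
noisy = rng.poisson(ideal).astype(float)
clean = denoise_projection(noisy)
```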
143

Não vício assintótico, consistência forte e uniformemente forte de estimadores do tipo núcleo para dados direcionais sobre uma esfera unitária k-dimensional / Asymptotic unbiasedness, strong and uniformly strong consistency of kernel-type estimators for directional data on a k-dimensional unit sphere

Santos, Marconio Silva dos 28 June 2010 (has links)
Made available in DSpace on 2014-12-17 (previous issue date: 2010-06-28). / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In this work we study the asymptotic unbiasedness and the strong and uniformly strong consistency of a class of kernel estimators fn, built as usual from n i.i.d. observations X1, ..., Xn of X, as estimators of the density function f of a random vector X taking values on a k-dimensional unit sphere.
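A hedged sketch of an estimator in this class, here on the unit sphere S² with a von Mises-Fisher kernel whose concentration κ plays the role of an inverse bandwidth; this is the standard directional-KDE construction, shown only for illustration.

```python
import numpy as np

def vmf_kde(x, data, kappa):
    # von Mises-Fisher kernel density estimate on the unit sphere S^2;
    # the vMF normalizing constant on S^2 is kappa / (4*pi*sinh(kappa)).
    c = kappa / (4.0 * np.pi * np.sinh(kappa))
    return c * np.mean(np.exp(kappa * data @ x))

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 3))
data /= np.linalg.norm(data, axis=1, keepdims=True)  # n points on the sphere
x = np.array([0.0, 0.0, 1.0])                        # evaluation direction
print(vmf_kde(x, data, kappa=20.0))                  # density estimate at x
```

For uniformly scattered data the estimate should approach the uniform density 1/(4π) ≈ 0.0796 as n grows, the kind of convergence the thesis makes precise.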
144

Reconstrução de energia em calorímetros operando em alta luminosidade usando estimadores de máxima verossimilhança / Energy reconstruction in calorimeters operating in high-luminosity environments using maximum likelihood estimators

Paschoalin, Thiago Campos 15 March 2016 (has links)
Made available in DSpace on 2017-02-03 (previous issue date: 2016-03-15). / This dissertation presents signal processing techniques for signal detection and energy estimation in high-energy calorimetry. CERN, one of the most important particle physics research centers, operates the LHC accelerator, which houses the ATLAS experiment. The TileCal, an important calorimeter within ATLAS, has many readout channels operating in parallel at high event rates. The energy of the particles interacting with this calorimeter is reconstructed by estimating the amplitude of the signal generated in its channels, so accurate noise modeling is important for developing efficient estimation techniques. As the luminosity (the number of particles incident on the detector per unit time) of the TileCal increases, the noise model changes, which degrades the performance of the estimation techniques used previously. Modeling this new noise as a lognormal distribution makes it possible to develop a new estimation technique based on maximum likelihood estimators (MLE), improving the parameter estimates and leading to a more accurate reconstruction of the signal energy. A new way of assessing estimation quality is also presented and proves effective and useful in high-luminosity environments. A comparison between the method used at CERN and the new method shows that the proposed solution performs better and is well suited to the high-luminosity scenario to which the TileCal will be subject from 2018.
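A toy sketch of the idea, with an invented pulse shape and noise parameters (nothing here is TileCal's actual model): the amplitude is recovered by maximizing a lognormal likelihood over the residuals.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import lognorm

rng = np.random.default_rng(2)
pulse = np.array([0.0, 0.3, 1.0, 0.7, 0.3, 0.1, 0.0])   # invented pulse shape
sigma, scale = 0.5, 5.0                                  # invented noise params
y = 30.0 * pulse + lognorm.rvs(sigma, scale=scale, size=pulse.size,
                               random_state=rng)

def neg_log_lik(a):
    resid = y - a * pulse
    if np.any(resid <= 0):           # lognormal noise lives on (0, inf)
        return 1e12                  # large penalty outside the support
    return -lognorm.logpdf(resid, sigma, scale=scale).sum()

res = minimize_scalar(neg_log_lik, bounds=(0.0, 100.0), method="bounded")
print(f"estimated amplitude: {res.x:.2f} (true 30.0)")
```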
145

Sommes et extrêmes en physique statistique et traitement du signal : ruptures de convergences, effets de taille finie et représentation matricielle / Sums and extremes in statistical physics and signal processing : Convergence breakdowns, finite size effects and matrix representations

Angeletti, Florian 06 December 2012 (has links)
This thesis has grown at the interface between statistical physics and signal processing, combining the perspectives of both disciplines to study sums and maxima of random variables. Three main axes, venturing beyond the classical i.i.d. conditions, have been explored: the importance of rare events, the coupling between the behavior of the individual random variables and the size of the system, and correlation. Together, these three axes lead to situations where the classical convergence theorems are no longer valid. To better understand the impact of the coupling with the system size, we study the behavior of the sum and the maximum of independent random variables raised to a power that depends on the size of the signal. In the case of the maximum, we bring to light non-standard limit laws. In the case of the sum, we study the link between the linearization effect and the glass transition in statistical physics. Following this link, we define a critical moment order and show that, for a multifractal process, this critical order does not depend on the signal resolution. An estimator of this critical order is also constructed and studied, theoretically and numerically, for a class of independent random variables. To gain some intuition on the impact of correlation on the maximum and the sum of random variables, following insights from statistical physics, we construct a class of random variables whose joint probability distribution can be expressed as a matrix product. After a detailed study of its statistical properties, showing that these variables can exhibit long-range correlations, we recast this model into the framework of hidden Markov chain models, enabling us to design a synthesis procedure. Finally, we conclude with an in-depth study of the limit behavior of the sum and maximum of these random variables.
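The hidden-Markov-chain reformulation lends itself to a simple synthesis sketch; the two-state chain and Gaussian emissions below are invented for illustration, producing correlated samples whose sum and maximum can then be studied empirically.

```python
import numpy as np

rng = np.random.default_rng(3)
P = np.array([[0.95, 0.05],          # invented transition matrix
              [0.10, 0.90]])
means = np.array([0.0, 3.0])         # invented emission mean per hidden state

def sample_path(n):
    states = np.empty(n, dtype=int)
    states[0] = 0
    for t in range(1, n):
        states[t] = rng.choice(2, p=P[states[t - 1]])
    return means[states] + rng.normal(size=n)  # correlated through the chain

x = sample_path(10_000)
print("sum:", x.sum(), " max:", x.max())  # the quantities whose limits are studied
```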
146

Melhoramentos inferenciais no modelo Beta-Skew-t-EGARCH / Inferential improvements in the Beta-Skew-t-EGARCH model

Muller, Fernanda Maria 25 February 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The Beta-Skew-t-EGARCH model was recently proposed in the literature to model the volatility of financial returns. Inference on the model parameters is based on the maximum likelihood method. The maximum likelihood estimators have good asymptotic properties; however, in finite samples they can be considerably biased. Monte Carlo simulations were used to evaluate the finite-sample performance of the point estimators, and the numerical results indicated that the maximum likelihood estimators of some parameters are biased in sample sizes smaller than 3,000. Thus, bootstrap bias-correction procedures were considered to obtain more accurate estimators in small samples, and better forecasts were observed when the model with bias-corrected estimators was used. In addition, we propose a likelihood ratio test to assist in choosing between the Beta-Skew-t-EGARCH model with one volatility component and the model with two. The numerical evaluation of the two-component test showed distorted null rejection rates in sample sizes smaller than or equal to 1,000. To improve the performance of the proposed test in small samples, the bootstrap-based likelihood ratio test and the bootstrap Bartlett correction were considered; the bootstrap-based test exhibited null rejection rates closest to the nominal values, and the evaluation results showed the practical usefulness of the two-component tests. Finally, the proposed methods were applied to the log-returns of the German stock index.
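The parametric-bootstrap bias-correction scheme can be sketched generically; for brevity the example below corrects the (biased) maximum likelihood estimator of a normal variance rather than a Beta-Skew-t-EGARCH fit.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
x = rng.normal(0.0, 2.0, size=30)                  # small sample, true var = 4

def mle_var(sample):
    return np.mean((sample - sample.mean()) ** 2)  # biased MLE of the variance

theta_hat = mle_var(x)
B = 2000
# Resample from the fitted model, re-estimate, and estimate the bias.
boot = np.array([
    mle_var(norm.rvs(x.mean(), np.sqrt(theta_hat), size=x.size, random_state=rng))
    for _ in range(B)
])
theta_bc = theta_hat - (boot.mean() - theta_hat)   # bias-corrected estimate
print(f"MLE: {theta_hat:.3f}  bias-corrected: {theta_bc:.3f}")
```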
147

Estimation de fonctions de régression : sélection d'estimateurs ridge, étude de la procédure PLS1 et applications à la modélisation de la signature génique du cancer du poumon / Estimation of regression functions: ridge estimator selection, study of the PLS1 procedure, and applications to modelling the genetic signature of lung cancer

Binard, Carole 04 May 2016 (has links)
This thesis deals with the estimation of a regression function providing the best relationship between variables for which we have a number of observations. The first part is a simulation study of two automatic methods for selecting the parameter of the ridge estimation procedure. From a more theoretical point of view, we then present and compare two methods for selecting a multiparameter used in a procedure for estimating a regression function on the interval [0,1]. In the second part, we study the quality of the PLS1 estimator from a theoretical point of view, through its quadratic risk and, more precisely, the variance term in the bias/variance decomposition of this risk. Finally, in the third part, a statistical study on real data is carried out in order to better understand the genetic signature of cancer cells from the genetic signatures of the cellular subtypes that compose the associated tumor stroma.
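One automatic ridge-parameter selection rule of the kind compared in the first part is generalized cross-validation (GCV); a minimal sketch on simulated data (the rule shown is a standard one, not necessarily either of the two studied in the thesis):

```python
import numpy as np

def ridge_gcv(X, y, lambdas):
    """Pick the ridge parameter minimizing generalized cross-validation."""
    n = X.shape[0]
    best_gcv, best_lam = np.inf, None
    for lam in lambdas:
        # Hat matrix of ridge regression: H = X (X'X + lam I)^{-1} X'
        H = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
        resid = y - H @ y
        gcv = n * (resid @ resid) / (n - np.trace(H)) ** 2
        if gcv < best_gcv:
            best_gcv, best_lam = gcv, lam
    return best_lam

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 10))
y = X @ rng.normal(size=10) + rng.normal(size=100)
print("chosen lambda:", ridge_gcv(X, y, np.logspace(-3, 3, 25)))
```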
148

Statistical properties of parasite density estimators in malaria and field applications / Propriétés statistiques des estimateurs de la densité parasitaire dans les études portant sur le paludisme et applications opérationnelles

Hammami, Imen 24 June 2013 (has links)
Malaria is a devastating global health problem that affected 219 million people and caused 660,000 deaths in 2010. Inaccurate estimation of the level of infection may have adverse clinical and therapeutic implications for patients, and for epidemiological endpoint measurements. The level of infection, expressed as the parasite density (PD), is classically defined as the number of asexual parasites per microliter of blood. Microscopy of Giemsa-stained thick blood smears (TBSs) is the gold standard for parasite enumeration: parasites are counted in a predetermined number of high-power fields (HPFs) or against a fixed number of leukocytes. PD estimation methods usually involve threshold values, either the number of leukocytes counted or the number of HPFs read. Most of these methods assume that (1) the distribution of the thickness of the TBS, and hence the distribution of parasites and leukocytes within the TBS, is homogeneous; and that (2) parasites and leukocytes are evenly distributed in TBSs, and thus can be modeled through a Poisson distribution. The violation of these assumptions commonly results in overdispersion. Firstly, we studied the statistical properties (mean error, coefficient of variation, false-negative rates) of the PD estimators of commonly used threshold-based counting techniques and assessed the influence of the thresholds on the cost-effectiveness of these methods. Secondly, we constituted and published the first dataset on parasite and leukocyte counts per HPF. Two sources of overdispersion in the data were investigated: latent heterogeneity and spatial dependence. We accounted for unobserved heterogeneity by considering more flexible models that allow for overdispersion, in particular the negative binomial (NB) model and mixture models; the dependence structure in the data was modeled with hidden Markov models (HMMs). We found evidence that assumptions (1) and (2) are inconsistent with the observed parasite and leukocyte distributions, and that the NB-HMM is the closest model to the unknown distribution that generates the data. Finally, we devised a reduced reading procedure for the PD that aims at better operational optimization and a practical assessment of the heterogeneity in the distribution of parasites and leukocytes in TBSs. A patent application process has been launched and a prototype of the counter is under development.
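A small sketch of the overdispersion check underlying assumption (2): fit both a Poisson and a moment-matched negative binomial to per-HPF counts and compare log-likelihoods (the counts below are simulated, not the published dataset).

```python
import numpy as np
from scipy.stats import poisson, nbinom

rng = np.random.default_rng(6)
counts = nbinom.rvs(3, 0.3, size=400, random_state=rng)  # overdispersed counts

m, v = counts.mean(), counts.var()
print(f"mean={m:.2f} var={v:.2f}  (var > mean signals overdispersion)")

# Moment-matched NB: var = m + m^2/r  =>  r = m^2 / (var - m),  p = m / var
r = m**2 / (v - m)
p = m / v
print("logL Poisson:", poisson.logpmf(counts, m).sum())
print("logL NB:     ", nbinom.logpmf(counts, r, p).sum())
```

A markedly higher NB log-likelihood is the empirical signature of the overdispersion that the thesis then models jointly with spatial dependence via NB-HMMs.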
149

Estimation d'une matrice d'échelle / Scale matrix estimation

Haddouche, Mohamed Anis 31 October 2019 (has links)
Numerous results on the estimation of a scale matrix in multivariate analysis are obtained under the Gaussian assumption (under which it is the covariance matrix). However, in areas such as portfolio management in finance, this assumption is not well adapted, and the family of elliptically symmetric distributions, which contains the Gaussian distribution, is an interesting alternative. In this thesis, we consider the problem of estimating the scale matrix $\Sigma$ of the additive model $Y_{p \times m} = M + E$ from a decision-theoretic point of view. Here, $p$ is the number of variables, $m$ the number of observations, $M$ a matrix of unknown parameters with rank $q < p$, and $E$ a random noise whose distribution is elliptically symmetric with covariance matrix proportional to $I_m \otimes \Sigma$. It is more convenient to work with the canonical form of this model, where $Y$ is decomposed into two matrices: $Z_{q \times p}$, which summarizes the information contained in $M$, and $U_{n \times p}$, where $n = m - q$, which summarizes the information sufficient to estimate $\Sigma$. As the natural estimators of the form $\hat{\Sigma}_a = aS$ (where $S = U^T U$ and $a$ is a positive constant) perform poorly when the number of variables $p$ and the ratio $p/n$ are large, we propose estimators of the form $\hat{\Sigma}_{a,G} = a\,(S + S S^{+} G(Z, S))$, where $S^{+}$ is the Moore-Penrose inverse of $S$ (which coincides with $S^{-1}$ when $S$ is invertible). We provide conditions on the correction matrix $S S^{+} G(Z, S)$ such that $\hat{\Sigma}_{a,G}$ improves on $\hat{\Sigma}_a$ under the quadratic loss $L(\Sigma, \hat{\Sigma}) = \mathrm{tr}\big((\hat{\Sigma}\Sigma^{-1} - I_p)^2\big)$ and under the data-based loss $L_S(\Sigma, \hat{\Sigma}) = \mathrm{tr}\big(S^{+}\Sigma(\hat{\Sigma}\Sigma^{-1} - I_p)^2\big)$. We adopt a unified approach to the two cases where $S$ is invertible and where $S$ is non-invertible. To this end, a new Stein-Haff-type identity and new calculus on the eigenstructure of $S$ are developed. Our theory is illustrated with the large class of orthogonally invariant estimators and with simulations.
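A Monte Carlo sketch of the quadratic loss for the natural estimator $aS$ (with $a = 1/n$), using Gaussian noise for simplicity even though the thesis covers the wider elliptically symmetric family; the true $\Sigma$ below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
p, n = 10, 40
Sigma = np.diag(np.linspace(1.0, 5.0, p))        # invented true scale matrix
Sigma_inv = np.linalg.inv(Sigma)

def quad_loss(S_hat):
    # L(Sigma, S_hat) = tr((S_hat Sigma^{-1} - I_p)^2)
    D = S_hat @ Sigma_inv - np.eye(p)
    return np.trace(D @ D)

losses = []
for _ in range(500):
    U = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    S = U.T @ U                                  # S = U^T U as in the abstract
    losses.append(quad_loss(S / n))              # natural estimator with a = 1/n
print("mean quadratic loss of S/n:", np.mean(losses))
```

The improved estimators of the thesis add the correction term $S S^{+} G(Z, S)$ to reduce this loss, especially when $p/n$ is large.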
150

Invariance and Sliding Modes. Application to coordination of multi-agent systems, bioprocesses estimation, and control in living cells

Vignoni, Alejandro 26 May 2014 (has links)
The present thesis employs ideas of set invariance and sliding modes to deal with several relevant problems in the control of nonlinear systems. It first reviews set-invariance techniques as well as the most relevant results on sliding mode control, and then presents the main methodologies used: sliding mode reference conditioning, second-order sliding modes, and continuous approximation of sliding modes. Finally, the methodologies are applied to different problems in control theory and to a variety of biologically inspired applications. The contributions of the thesis are: the development of a method to coordinate dynamical systems with different dynamic properties by means of a sliding mode auxiliary loop shaping the references given to the systems as a function of the local and global goals, the achievable performance of each system, and the information available from each system; design methods for second-order sliding mode algorithms, which decouple the problem of stability analysis from that of finite-time convergence of the super-twisting sliding mode algorithm, using a nonlinear change of coordinates and a time-scaling to provide simple yet flexible design methods and stability proofs; the application of the method to the design of finite-time-convergence estimators of bioprocess kinetic rates and of the specific biomass growth rate from biomass measurements, with the estimators validated on experimental data; and a strategy to reduce the variability of a cell-to-cell communication signal in synthetic genetic circuits, which uses set invariance and sliding mode ideas applied to gene expression networks to reduce the variance of the communication signal, together with a description of the experimental approaches available to modify the characteristics of the gene regulation function. / Vignoni, A. (2014). Invariance and Sliding Modes. Application to coordination of multi-agent systems, bioprocesses estimation, and control in living cells [Tesis doctoral]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37743
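A minimal simulation sketch of the super-twisting (second-order sliding mode) algorithm named among the methodologies; the gains and disturbance below are illustrative choices, not those produced by the thesis's design method.

```python
import numpy as np

k1, k2, dt = 2.0, 2.0, 1e-3          # illustrative gains and Euler step size
s, v = 1.0, 0.0                      # sliding variable and integral term
for i in range(int(5.0 / dt)):
    d = 0.5 * np.sin(2.0 * i * dt)   # bounded matched disturbance
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v   # continuous control law
    v += -k2 * np.sign(s) * dt                   # second-order (integral) term
    s += (u + d) * dt                # plant: ds/dt = u + d
print(f"|s| after 5 s: {abs(s):.2e}")  # driven near zero despite the disturbance
```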
