51 |
Free entropies, free Fisher information, free stochastic differential equations, with applications to von Neumann algebras. Dabrowski, Yoann, 01 December 2010 (has links)
This work extends our knowledge of free entropies, free Fisher information and free stochastic differential equations in three directions. First, we prove that a $W^{*}$-probability space generated by at least two self-adjoint elements with finite non-microstates free Fisher information does not have property $\Gamma$ of Murray and von Neumann (in particular, it is not amenable). This is an analogue of a well-known result of Voiculescu for microstates free entropy. We also prove factoriality under finite non-microstates free entropy. Second, we study general free stochastic differential equations with unbounded operator coefficients (free "stochastic PDEs", so to speak) and prove stationarity of solutions in well-chosen cases. This leads to a computation of the microstates free entropy dimension in the case of a Lipschitz conjugate variable. Finally, we introduce a non-commutative path-space approach to solving general stationary free stochastic differential equations. By defining tracial states on a non-commutative analogue of a path space, we construct Markov dilations for a class of conservative completely Markovian semigroups on finite von Neumann algebras; this class includes all symmetric semigroups. For well-chosen semigroups (for instance, whenever the generator can be written in divergence form for a derivation valued in the coarse correspondence), those dilations give rise to stationary solutions of certain free SDEs. Among other applications, we prove a non-commutative Talagrand inequality for non-microstates free entropy (relative to a subalgebra $B$ and a completely positive map $\eta: B \to B$). We also use these new deformations in conjunction with Popa's deformation/rigidity techniques to obtain absence-of-Cartan-subalgebra results.
|
52 |
A personalized approach to the item selection process in Computerized Adaptive Testing. Victor Miranda Gonçalves Jatobá, 08 October 2018 (has links)
Computerized Adaptive Testing (CAT) based on Item Response Theory allows more accurate assessments with fewer questions than the classic paper-and-pencil test. Nonetheless, building a CAT involves some key decisions that, when made properly, can further improve the accuracy and efficiency of the ability estimates. One of the main decisions is the choice of the Item Selection Rule (ISR). The classic CAT makes exclusive use of a single ISR. However, these rules perform differently depending on the examinee's ability level and on the stage of the test. Thus, the objective of this work is to reduce the length of dichotomous tests (which consider only whether each answer is correct or incorrect) administered in a classic single-ISR CAT, without significant loss of accuracy in the ability estimates. For this purpose, we create the ALICAT approach, which personalizes the item selection process in a CAT by combining more than one ISR. To apply this approach, we first analyze the performance of different ISRs. A case study on the Mathematics and its Technologies test of the 2012 ENEM shows that the Kullback-Leibler information with a posterior distribution (KLP) rule estimates the examinees' abilities better than the Fisher information (F), Kullback-Leibler information (KL), Maximum Likelihood Weighted Information (MLWI), and Maximum Posterior Weighted Information (MPWI) rules. Previous results in the literature show that a CAT using KLP was able to reduce this test by 46.6% from the full length of 45 items with no significant loss of accuracy in the ability estimates. In this work, we observe that the F and MLWI rules performed better in the early CAT stages for estimating examinees with extreme negative and positive ability levels, respectively. Using these selection rules together, the ALICAT approach reduced the same test by 53.3%.
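The Fisher information (F) rule is the classic selection criterion that ALICAT combines with others. The following is a minimal Python sketch of how a maximum-information rule picks the next item under the two-parameter logistic (2PL) IRT model; the item bank, parameter ranges, and function names are illustrative assumptions, not the ALICAT implementation.

    import numpy as np

    def p_2pl(theta, a, b):
        # Probability of a correct response under the 2PL IRT model.
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def fisher_info(theta, a, b):
        # Item information I(theta) = a^2 * P * (1 - P) for the 2PL model.
        p = p_2pl(theta, a, b)
        return a ** 2 * p * (1.0 - p)

    def select_next_item(theta_hat, a, b, administered):
        # Maximum-information rule: pick the unused item maximizing I(theta_hat).
        info = fisher_info(theta_hat, a, b)
        info[list(administered)] = -np.inf  # mask items already given
        return int(np.argmax(info))

    # Hypothetical item bank: 100 items with random discrimination/difficulty.
    rng = np.random.default_rng(0)
    a = rng.uniform(0.5, 2.0, size=100)
    b = rng.normal(0.0, 1.0, size=100)
    print("next item:", select_next_item(theta_hat=0.3, a=a, b=b, administered={5, 17}))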
|
54 |
The D-optimal sequential design for linear logistic regression model. Lan, Shiuh Jay (藍旭傑), Unknown Date (has links)
For a binary response curve following the simple linear logistic regression model, and an even total sample size, the D-optimal design is a two-point design: one half of the design points are allocated at the 17.6th percentile of the response curve and the other half at the 82.4th percentile. Unfortunately, the locations of these two points depend on the unknown parameters and therefore cannot be determined in advance, so a sequential experimental design is necessary in practice. The sequential designs discussed here enjoy good asymptotic D-optimality properties under large sample sizes; importantly, these properties do not disappear when the initial-stage allocation is less than ideal, which affects only the speed of convergence. In practical applications, however, these large-sample properties are not the main concern: under small sample sizes, the speed of convergence of the sequential procedure is decisively influenced by the initial stage. Based on this consideration, this study proposes three initial-stage designs and compares them through simulations written in C++.
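As a numerical illustration of the percentile characterization above, the sketch below (with arbitrarily assumed true parameters alpha and beta) computes the two D-optimal design points, at which the response probabilities are 0.176 and 0.824, and checks that the determinant of the 2x2 Fisher information matrix drops when the points are perturbed.

    import numpy as np

    def info_det(xs, alpha, beta):
        # Determinant of the 2x2 Fisher information for an equally weighted design.
        M = np.zeros((2, 2))
        for x in xs:
            p = 1.0 / (1.0 + np.exp(-(alpha + beta * x)))
            w = p * (1.0 - p)              # logistic information weight at x
            f = np.array([1.0, x])
            M += 0.5 * w * np.outer(f, f)  # each point carries weight 1/2
        return np.linalg.det(M)

    alpha, beta = -1.0, 2.0                # assumed "true" parameters
    q = np.log(0.824 / 0.176)              # logit of 0.824, about 1.5434
    x_lo = (-q - alpha) / beta             # response probability 0.176 here
    x_hi = (q - alpha) / beta              # response probability 0.824 here
    print("design points:", x_lo, x_hi)
    print("det at optimum:", info_det([x_lo, x_hi], alpha, beta))
    print("det, perturbed:", info_det([x_lo - 0.3, x_hi + 0.3], alpha, beta))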
|
55 |
Aspects of Interface between Information Theory and Signal Processing with Applications to Wireless Communications. Park, Sang Woo, 14 March 2013 (has links)
This dissertation studies several aspects of the interface between information theory and signal processing. Several new and existing results in information theory are examined from the perspective of signal processing. Similarly, some fundamental results in signal processing and statistics are studied from the information-theoretic viewpoint.
The first part of this dissertation focuses on illustrating the equivalence between Stein's identity and De Bruijn's identity, and on providing two extensions of De Bruijn's identity. First, it is shown that Stein's identity is equivalent to De Bruijn's identity in additive noise channels under specific conditions. Second, for arbitrary but fixed input and noise distributions in an additive noise channel model, the first derivative of the differential entropy is expressed as a function of the posterior mean, and the second derivative of the differential entropy is expressed in terms of a function of the Fisher information. Several applications across a number of fields, such as statistical estimation theory, signal processing and information theory, are presented to support the usefulness of these results.
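De Bruijn's identity states that for $Y_t = X + \sqrt{t} Z$, with $Z$ standard Gaussian and independent of $X$, the derivative of the differential entropy satisfies $\frac{d}{dt} h(Y_t) = \frac{1}{2} J(Y_t)$, where $J$ is the Fisher information. The following sketch checks this numerically for an arbitrarily chosen non-Gaussian input, X uniform on [-1, 1]; the input choice and integration limits are assumptions for illustration.

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import norm

    # Y_t = X + sqrt(t) * Z with X ~ Uniform[-1, 1] and Z ~ N(0, 1) independent.
    def f(y, t):
        # Density of Y_t: the uniform density smoothed by Gaussian noise.
        s = np.sqrt(t)
        return (norm.cdf((y + 1) / s) - norm.cdf((y - 1) / s)) / 2.0

    def fprime(y, t):
        s = np.sqrt(t)
        return (norm.pdf((y + 1) / s) - norm.pdf((y - 1) / s)) / (2.0 * s)

    def h(t):
        # Differential entropy of Y_t by numerical integration.
        return quad(lambda y: -f(y, t) * np.log(f(y, t)) if f(y, t) > 0 else 0.0,
                    -12, 12, limit=200)[0]

    def J(t):
        # Fisher information of Y_t: the integral of f'^2 / f.
        return quad(lambda y: fprime(y, t) ** 2 / f(y, t) if f(y, t) > 0 else 0.0,
                    -12, 12, limit=200)[0]

    t, eps = 0.5, 1e-4
    lhs = (h(t + eps) - h(t - eps)) / (2 * eps)  # d/dt h(Y_t)
    rhs = 0.5 * J(t)                             # De Bruijn: the two should agree
    print(lhs, rhs)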
The second part of this dissertation makes three contributions. First, a connection is illustrated between a result proposed by Stoica and Babu and two recent information-theoretic results, the worst additive noise lemma and the isoperimetric inequality for entropies. Second, information-theoretic and estimation-theoretic justifications are presented for the fact that the Gaussian assumption leads to the largest Cramer-Rao lower bound (CRLB). Third, a slight extension of this result to the more general framework of correlated observations is shown.
The third part of this dissertation concentrates on deriving an alternative proof for an extremal entropy inequality (EEI) originally proposed by Liu and Viswanath. Compared with the proofs presented by Liu and Viswanath, the proposed alternative proof is simpler, more direct, and more information-theoretic. An additional application of the extremal inequality is also provided. Moreover, this section illustrates not only the usefulness of the EEI but also a novel method to approach applications such as the capacity of the vector Gaussian broadcast channel, the lower bound on the achievable rate for distributed source coding with a single quadratic distortion constraint, and the secrecy capacity of the Gaussian wire-tap channel.
Finally, a novel, unifying variational approach for proving fundamental information-theoretic inequalities is proposed. Fundamental results such as the maximization of differential entropy, the minimization of Fisher information (the Cramer-Rao inequality), the worst additive noise lemma, the entropy power inequality (EPI), and the EEI are interpreted as functional problems and proved within the framework of the calculus of variations. Several extensions and applications of the proposed results are briefly mentioned.
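As a concrete instance of one of these functional inequalities, the entropy power inequality N(X+Y) >= N(X) + N(Y), with N(X) = exp(2h(X))/(2*pi*e) and h in nats, can be verified numerically; the sketch below assumes two independent Uniform[0,1] inputs, whose sum has a triangular density.

    import numpy as np
    from scipy.integrate import quad

    def entropy_power(h):
        # N(X) = exp(2 h(X)) / (2 pi e), with h in nats.
        return np.exp(2 * h) / (2 * np.pi * np.e)

    h_X = 0.0                            # h of Uniform[0,1] is log(1) = 0 nats
    tri = lambda z: 1.0 - abs(z - 1.0)   # density of the sum of two Uniform[0,1]
    h_sum = quad(lambda z: -tri(z) * np.log(tri(z)) if tri(z) > 0 else 0.0, 0, 2)[0]

    print("N(X) + N(Y) =", 2 * entropy_power(h_X))
    print("N(X + Y)    =", entropy_power(h_sum))  # EPI: this one should be larger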
|
56 |
Unit Root Problems in Time Series Analysis. Purutcuoglu, Vilda, 01 February 2004 (has links) (PDF)
In time series models, autoregressive processes are among the most popular stochastic processes, and they are stationary under certain conditions. In this study we consider nonstationary autoregressive models of order one with iid random errors. One of the most important nonstationary time series models is the unit root process in AR(1), which implies that a shock to the system has a permanent effect through time. Therefore, testing for a unit root is a very important problem.
However, under nonstationarity, no estimator of the autoregressive coefficient has a known exact distribution, and the usual t-statistic is not accurate even if the sample size is very large. Hence, the Wiener process is invoked to obtain the asymptotic distribution of the LSE under normality. The first four moments of the LSE under normality have been worked out for large n.
In 1998, Tiku and Wong proposed new test statistics whose type I error and power values are calculated by using three-moment chi-square or four-moment F approximations. The test statistics are based on the modified maximum likelihood estimators and the least squares estimators, respectively. They evaluated the type I errors and the power of these tests for a family of symmetric distributions (scaled Student's t). In this thesis, we have extended this work to skewed distributions, namely, the gamma and the generalized logistic.
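The nonstandard behavior of the t-statistic under a unit root is easy to reproduce by Monte Carlo. In the sketch below (sample size and replication count are arbitrary choices), random walks are simulated and the OLS t-statistic for rho = 1 is collected; its empirical 5% quantile lands near the Dickey-Fuller critical value of about -1.95 rather than the normal -1.64.

    import numpy as np

    rng = np.random.default_rng(42)

    def dickey_fuller_t(n):
        # OLS t-statistic for rho = 1 in y_t = rho * y_{t-1} + e_t (no intercept).
        e = rng.standard_normal(n)
        y = np.cumsum(e)                     # random walk: true rho = 1
        ylag, ycur = y[:-1], y[1:]
        rho = (ylag @ ycur) / (ylag @ ylag)  # least squares estimator of rho
        resid = ycur - rho * ylag
        se = np.sqrt(resid @ resid / (n - 2) / (ylag @ ylag))
        return (rho - 1.0) / se

    stats = np.array([dickey_fuller_t(500) for _ in range(5000)])
    # Under a unit root the t-statistic is *not* asymptotically N(0, 1):
    # its distribution is shifted to the left of the standard normal.
    print("empirical 5% quantile:", np.quantile(stats, 0.05))  # near -1.95
    print("empirical mean:", stats.mean())                     # clearly below 0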
|
57 |
Exploring minimum-information paths in graphs for supervised classification problems. Hiraga, Alan Kazuo, 05 May 2014 (has links)
Classification is a very important step in pattern recognition, as it aims to categorize objects from a set of inherent features by assigning labels to them. This process can be supervised, when there is a set of labeled training samples that satisfactorily represents the classes; semi-supervised, when the number of labeled samples is limited or nearly nonexistent; or unsupervised, when there are no labeled samples. This project explores minimum-information paths in graphs for classification problems by defining a supervised, non-parametric, graph-based classification method with a contextual approach. The method constructs a graph from the set of training samples, in which samples are represented by vertices and edges link samples belonging to a neighborhood system. From this graph, the method computes the local observed Fisher information, a measure based on the Potts model, for every vertex, identifying the amount of information each sample carries. Generally, vertices of different classes connected by an edge have a high information level (they sit on class borders). Next, the edges are weighted by a function that penalizes connections between high-information vertices. During this process, highly informative vertices can be identified and selected as prototype vertices, namely, the vertices that define the boundary regions. Once the edges are weighted and the prototypes defined, each prototype conquers the remaining samples by offering the shortest path, in terms of information, to itself; when a sample is conquered, it receives the label of the winning prototype, producing the classification, as sketched below. To evaluate the method, statistical procedures for estimating error rates, such as hold-out, k-fold, and leave-one-out cross-validation, are used. The obtained results indicate that the method can be a viable alternative to existing classification techniques.
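Below is a minimal sketch of the conquest step just described: a multi-source Dijkstra search in which each labeled prototype offers minimum-cost paths and every vertex takes the label of the cheapest prototype. The adjacency list and weights are made up for illustration; in the actual method the weights come from the function penalizing edges between vertices with high local observed Fisher information.

    import heapq

    def conquer(adj, prototypes):
        # Multi-source Dijkstra: each prototype offers minimum-cost paths, and a
        # vertex takes the label of the prototype with the cheapest path to it.
        dist, label = {}, {}
        heap = [(0.0, p, lab) for p, lab in prototypes.items()]
        heapq.heapify(heap)
        while heap:
            d, u, lab = heapq.heappop(heap)
            if u in dist:
                continue                      # already conquered at lower cost
            dist[u], label[u] = d, lab
            for v, w in adj[u]:
                if v not in dist:
                    heapq.heappush(heap, (d + w, v, lab))
        return label

    # Toy adjacency list, node -> [(neighbor, edge weight), ...]; low weights
    # stand for edges between low-information (same-class) vertices.
    adj = {
        0: [(1, 0.1), (2, 1.0)],
        1: [(0, 0.1), (2, 0.9)],
        2: [(0, 1.0), (1, 0.9), (3, 0.2)],
        3: [(2, 0.2)],
    }
    print(conquer(adj, {0: "A", 3: "B"}))  # vertices 0, 1 get "A"; 2, 3 get "B"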
|
58 |
Scale Invariant Equations and Their Modified EM Algorithm for Estimating a Two-Component Mixture Model. Ukenazor, Ifeanyichukwu Valentine, 07 1900 (has links)
In this work, we propose a novel two-component mixture model: the first component is the three-parameter generalized Gaussian distribution (GGD), and the second is a new three-parameter family of positive densities on the real line. The novelty of our mixture model is that the two components are allowed to come from totally different parametric families, giving a mixture density with asymmetric tails. We extend the scale invariant variable fractional moments (SIVFM) method, proposed by Song for the GGD, to the parameter estimation of our mixture model. We show that the SIVFM population and sample equations for the second component share the desirable global properties, such as convexity and unique global roots, established for the GGD in earlier research. The two-component mixing of these properties makes the SIVFM mixture population and estimating equations well-behaved, resulting in easy-to-compute estimators without the usual difficulties with starting values. Asymptotic results, such as the consistency and limiting distribution of the estimators, are presented. Furthermore, the SIVFM estimators can serve as consistent initial estimators for the EM algorithm, improving its accuracy. These algorithms are applied to the analysis of the average precipitation (rainfall) for each of 70 United States (and Puerto Rican) cities, clearly demonstrating the bimodality of the estimated mixture density.
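Since the abstract does not specify the new positive family, the sketch below substitutes a gamma density as a stand-in second component and runs a plain EM loop with numerically maximized weighted likelihoods; it illustrates the general GGD-plus-positive-density mixture setup, not the SIVFM equations or the modified EM algorithm themselves, and all parameter choices are assumptions.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gamma as gammafn
    from scipy.stats import gamma

    def ggd_pdf(x, mu, alpha, beta):
        # Three-parameter generalized Gaussian density (the first component).
        alpha, beta = abs(alpha) + 1e-12, abs(beta) + 1e-12
        return (beta / (2 * alpha * gammafn(1 / beta))
                * np.exp(-(np.abs(x - mu) / alpha) ** beta))

    def em_two_component(x, n_iter=30):
        # Plain EM for a GGD + positive-density mixture (gamma as a stand-in).
        pi = 0.5
        th1 = np.array([np.median(x), x.std(), 2.0])  # GGD: mu, alpha, beta
        th2 = np.array([2.0, x.std()])                # gamma: shape, scale
        pos = x > 0                                   # gamma support is (0, inf)
        xp = np.maximum(x, 1e-12)
        for _ in range(n_iter):
            f1 = pi * ggd_pdf(x, *th1)
            f2 = (1 - pi) * gamma.pdf(xp, th2[0], scale=th2[1]) * pos
            r = f1 / (f1 + f2 + 1e-300)               # E-step responsibilities
            pi = r.mean()
            # M-step: weighted maximum likelihood, done numerically here.
            th1 = minimize(lambda t: -np.sum(r * np.log(ggd_pdf(x, *t) + 1e-300)),
                           th1, method="Nelder-Mead").x
            th2 = minimize(lambda t: -np.sum((1 - r) * pos *
                                             gamma.logpdf(xp, abs(t[0]) + 1e-12,
                                                          scale=abs(t[1]) + 1e-12)),
                           th2, method="Nelder-Mead").x
        return pi, th1, th2

    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(-1.0, 1.0, 400),  # a GGD with beta = 2
                        rng.gamma(4.0, 1.5, 400)])   # a positive component
    print(em_two_component(x)[0])                    # mixing weight, near 0.5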
|
59 |
Univariate and Multivariate Symmetry: Statistical Inference and Distributional Aspects. Ley, Christophe C., 26 November 2010 (has links)
This thesis deals with several statistical and probabilistic aspects of symmetry and asymmetry, in both univariate and multivariate contexts, and is divided into three distinct parts.
The first part, composed of Chapters 1, 2 and 3 of the thesis, solves two conjectures associated with multivariate skew-symmetric distributions. Since the introduction in 1985 by Adelchi Azzalini of the most famous representative of that class of distributions, namely the skew-normal distribution, it has been well known that, in the vicinity of symmetry, the Fisher information matrix is singular and the profile log-likelihood function for skewness admits a stationary point, whatever the sample under consideration. Since then, researchers have tried to determine the subclasses of skew-symmetric distributions that suffer from each of those problems, which has led to the aforementioned two conjectures. This thesis completely solves both problems.
The second part of the thesis, namely Chapters 4 and 5, aims at applying and constructing extremely general skewing mechanisms. In Chapter 4, we make use of the univariate mechanism of Ferreira and Steel (2006) to build optimal (in the Le Cam sense) tests for univariate symmetry that are very flexible: since their mechanism can turn a given symmetric distribution into any asymmetric distribution, the alternatives to the null hypothesis of symmetry can take any possible shape. These univariate mechanisms, besides that surjectivity property, enjoy numerous good properties, but cannot be extended to higher dimensions in a satisfactory way. For this reason, we propose in Chapter 5 different general mechanisms, sharing all the nice properties of their competitors in Ferreira and Steel (2006), which moreover can be extended to any dimension. We formally prove that the surjectivity property holds in dimensions k>1 and we study the principal characteristics of these new multivariate mechanisms.
Finally, the third part of this thesis, composed of Chapter 6, proposes a test for multivariate central symmetry that has recourse to the concepts of statistical depth and runs. This test extends the celebrated univariate runs test of McWilliams (1990) to higher dimensions. We analyze its asymptotic behavior (especially in dimension k=2) under the null hypothesis, as well as its invariance and robustness properties. We conclude with an overview of possible modifications of these new tests.
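The singularity at symmetry mentioned above can be checked numerically for the scalar skew-normal density f(x) = (2/sigma) phi(z) Phi(alpha z), with z = (x-mu)/sigma: at alpha = 0 the scores for mu and alpha are proportional, so the 3x3 Fisher information matrix is singular. The sketch below uses numerical scores; the step size and integration limits are illustrative assumptions.

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import norm

    def logf(x, mu, sigma, alpha):
        # Log-density of the skew-normal: (2/sigma) phi(z) Phi(alpha z).
        z = (x - mu) / sigma
        return np.log(2) - np.log(sigma) + norm.logpdf(z) + norm.logcdf(alpha * z)

    def score(x, eps=1e-5):
        # Numerical score for (mu, sigma, alpha) at the symmetric point (0, 1, 0).
        th = np.array([0.0, 1.0, 0.0])
        g = np.zeros(3)
        for i in range(3):
            tp, tm = th.copy(), th.copy()
            tp[i] += eps
            tm[i] -= eps
            g[i] = (logf(x, *tp) - logf(x, *tm)) / (2 * eps)
        return g

    # Fisher information I = E[score score^T] under f(.; 0, 1, 0) = N(0, 1).
    I = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            I[i, j] = quad(lambda x: score(x)[i] * score(x)[j] * norm.pdf(x),
                           -8, 8)[0]
    print(np.round(I, 3))
    print("det:", np.linalg.det(I))  # approximately 0: singular at alpha = 0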
|