41

Challenges and perspectives of the implementation of multidimensional adaptive tests for educational assessment

Jean Piton Gonçalves 17 December 2012 (has links)
Educational tests provide measures and indicators that enable evaluations and guide the definition of educational goals, besides supporting selection processes and the formulation of public policies. The evaluation of examinees' performance may consider one or multiple skills and abilities. As an alternative to paper-and-pencil tests, a Computer-Based Test (CBT) can compose, administer, and score tests, as well as automatically produce individual and collective statistics on examinees' performance. When the examinee has several abilities, the Computerized Adaptive Test based on Multidimensional Item Response Theory (MCAT) maintains the same accuracy as a traditional test, building on the knowledge of the examinee inferred from the track record of previously answered items. Item selection through the Kullback-Leibler divergence between subsequent posteriors (K^p) avoids selecting a difficult item for a low-ability examinee, suggesting that K^p is a criterion applicable to educational tests. The literature review evidenced: (i) the scarcity of studies on the K^p criterion; (ii) the scarcity of studies on operational MCATs in educational contexts with real users; (iii) the shortage of studies and proposals for initial and stopping criteria for MCATs when the number of administered items is variable; and (iv) the lack of Brazilian studies in the area of MCATs. To bridge these gaps, this doctoral thesis addresses the following research question: What approach enables the use of the K^p criterion in operational MCATs for educational contexts, while ensuring that the implemented system passes the functionality, reliability, efficiency, maintainability, and portability criteria of ISO-9126, the basis for evaluating computerized tests? The specific objectives of this research were to: (i) implement and validate the K^p selection criterion, comparing it to the usual Bayesian criterion; (ii) propose improvements and measure the computational processing time of item selection through K^p; (iii) propose initial criteria consistent with the reality and needs of educational assessment; (iv) validate the novel stopping criterion KPIC, aimed at MCATs that administer a variable number of items to examinees; (v) develop an architecture that enables Web-based administration of MCATs to real users; (vi) discuss theoretical and methodological aspects of the new CBMAT approach via a proof of concept, implementing the MADEPT system, which evaluates examinees from the perspective of diagnostic assessment; and (vii) evaluate MADEPT according to the ISO-9126 international software product standard, pointing out the feasibility, viability, difficulties, advantages, and limitations of developing CBMATs for the Web environment. The methodology used to answer the research question was to: (i) organize and select the theories, methods, models, and results inherent to MCATs; (ii) expand the K^p equation; (iii) implement the MCAT with the K^p selection criterion and the Bayesian methodology for estimation and item selection; (iv) validate K^p and KPIC statistically; (v) implement the CBMAT, with the MCAT as a subsystem; and (vi) evaluate the CBMAT via ISO-9126.
This work yields several results: (i) a broad literature review of the theories, methods, and criteria needed for the computational implementation of MCATs; (ii) a reformulation of the equation expressing selection by K^p for implementation in a scientific programming language; (iii) simulation studies of the MCAT showing that K^p item selection combined with the KPIC stopping criterion is adequate and indicated when the goal is a test with a small and variable number of administered items, keeping acceptable bias and high accuracy in ability estimation; (iv) novel algorithms for the initial criteria; (v) validation of a new architecture that enables Web-based administration of MCATs to real users; and (vi) the implementation and ISO-9126 evaluation of the MADEPT Web system. We conclude that it is possible to develop an architecture that enables Web-based administration of MCATs to real users, using the K^p selection criterion and initial criteria consistent with educational assessment. When the aim is to apply MCATs in real scenarios, item selection by K^p combined with the KPIC stopping criterion provides a shorter and more accurate test than those using the usual Bayesian methodology, with a computational processing time consistent with the characteristics of the multidimensional approach.
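To make the selection rule concrete, here is a minimal unidimensional sketch of item selection by expected Kullback-Leibler divergence between subsequent posteriors. It is not the thesis's MADEPT implementation; the 2PL response model, the discretized ability grid, and the item parameters are assumptions made for illustration only.

```python
import numpy as np

def irt_prob(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def kl(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence KL(p || q)."""
    p, q = p + eps, q + eps
    return np.sum(p * np.log(p / q))

def select_item(posterior, theta_grid, items):
    """Pick the item maximizing the expected KL divergence between
    the posterior after the response and the current posterior."""
    best, best_gain = None, -np.inf
    for idx, (a, b) in enumerate(items):
        p_correct = irt_prob(theta_grid, a, b)
        m = np.sum(posterior * p_correct)          # predictive P(correct)
        post_right = posterior * p_correct         # posterior given correct
        post_right /= post_right.sum()
        post_wrong = posterior * (1 - p_correct)   # posterior given incorrect
        post_wrong /= post_wrong.sum()
        gain = m * kl(post_right, posterior) + (1 - m) * kl(post_wrong, posterior)
        if gain > best_gain:
            best, best_gain = idx, gain
    return best

theta_grid = np.linspace(-4, 4, 161)
posterior = np.ones_like(theta_grid) / theta_grid.size  # flat prior
items = [(1.2, -0.5), (0.8, 0.0), (1.5, 1.0)]           # (discrimination, difficulty)
print(select_item(posterior, theta_grid, items))
```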
42

Approximate Nearest Neighbor Search for the Kullback-Leibler Divergence

19 March 2018 (has links)
In a number of applications, data points can be represented as probability distributions. For instance, documents can be represented as topic models, images as histograms, and even music as a probability distribution. In this work, we address the approximate nearest neighbor problem where the points are probability distributions and the distance function is the Kullback-Leibler (KL) divergence. We show how to accelerate existing data structures such as the Bregman Ball Tree, in theory, by posing the KL divergence as an inner product embedding. On the practical side, we investigate the use of two very popular indexing techniques: the Inverted Index and Locality Sensitive Hashing (LSH). Experiments performed on six real-world data sets show that the Inverted Index performs better than LSH and the Bregman Ball Tree in terms of queries per second and precision.
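For orientation, a brute-force KL nearest-neighbor baseline — the exact search that the indexing structures above are designed to accelerate. This is an illustrative sketch, not the thesis's code; the Dirichlet-generated corpus stands in for topic-model vectors.

```python
import numpy as np

def kl(q, p, eps=1e-12):
    """KL divergence KL(q || p) between discrete distributions."""
    q, p = q + eps, p + eps
    return float(np.sum(q * np.log(q / p)))

def kl_nearest(query, corpus):
    """Exact (brute-force) nearest neighbor under the KL divergence."""
    divs = [kl(query, p) for p in corpus]
    return int(np.argmin(divs)), min(divs)

rng = np.random.default_rng(0)
corpus = rng.dirichlet(np.ones(50), size=1000)  # stand-in for topic vectors
query = rng.dirichlet(np.ones(50))
print(kl_nearest(query, corpus))
```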
43

Estimation by model selection in heteroscedastic regression

Gendre, Xavier 15 June 2009 (has links) (PDF)
This thesis falls within the fields of non-asymptotic statistics and the statistical theory of model selection. Its purpose is the construction of estimation procedures for parameters in heteroscedastic regression, a setting that has received growing interest in recent years across many fields of application. The results presented rely mainly on concentration inequalities and are illustrated with applications to simulated data.

The first part of the thesis studies the problem of estimating the mean and the variance of a Gaussian vector with independent coordinates. We propose a model selection method based on a penalized likelihood criterion. We validate this approach theoretically from a non-asymptotic point of view by proving oracle-type upper bounds on the Kullback risk of our estimators, together with uniform convergence rates over Hölder balls.

The second problem we address is the estimation of the regression function in a heteroscedastic framework with known dependencies. We develop model selection procedures under both Gaussian assumptions and moment conditions. Non-asymptotic oracle inequalities are given for our estimators, along with adaptivity properties. In particular, we apply these results to the estimation of a component in an additive regression model.
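The flavor of penalized-criterion model selection can be sketched in a few lines. The example below selects a polynomial degree for heteroscedastic data using an AIC-style penalty; it is only a crude stand-in for the calibrated penalties and oracle guarantees developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.linspace(0, 1, n)
# Heteroscedastic noise: the standard deviation grows with x.
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3 * (1 + x), size=n)

def rss(deg):
    """Residual sum of squares of a degree-`deg` polynomial fit."""
    X = np.vander(x, deg + 1)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

# Penalized criterion: log-likelihood term plus a penalty proportional
# to the model dimension (an AIC-style stand-in for a calibrated penalty).
criterion = lambda deg: n * np.log(rss(deg) / n) + 2 * (deg + 1)
best = min(range(1, 15), key=criterion)
print("selected degree:", best)
```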
44

Information-Based Sensor Management for Static Target Detection Using Real and Simulated Data

Kolba, Mark Philip January 2009 (has links)
In the modern sensing environment, large numbers of sensor tasking decisions must be made using an increasingly diverse and powerful suite of sensors in order to best fulfill mission objectives in the presence of situationally-varying resource constraints. Sensor management algorithms allow the automation of some or all of the sensor tasking process, meaning that sensor management approaches can either assist or replace a human operator as well as ensure the safety of the operator by removing that operator from a dangerous operational environment. Sensor managers also provide improved system performance over unmanaged sensing approaches through the intelligent control of the available sensors. In particular, information-theoretic sensor management approaches have shown promise for providing robust and effective sensor manager performance.

This work develops information-theoretic sensor managers for a general static target detection problem. Two types of sensor managers are developed. The first considers a set of discrete objects, such as anomalies identified by an anomaly detector or grid cells in a gridded region of interest. The second considers a continuous spatial region in which targets may be located at any point in continuous space. In both types of sensor managers, the sensor manager uses a Bayesian, probabilistic framework to model the environment and tasks the sensor suite to make new observations that maximize the expected information gain for the system. The sensor managers are compared to unmanaged sensing approaches using simulated data and using real data from landmine detection and unexploded ordnance (UXO) discrimination applications, and it is demonstrated that the sensor managers consistently outperform the unmanaged approaches, enabling targets to be detected more quickly using the sensor managers. The performance improvement represented by the rapid detection of targets is of crucial importance in many static target detection applications, resulting in higher rates of advance and reduced costs and resource consumption in both military and civilian applications.
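The core tasking step — choose the observation with the largest expected information gain — can be sketched for a gridded region with binary target presence. The detection and false-alarm probabilities below are assumed values for illustration, not the sensor models used in the dissertation.

```python
import numpy as np

def entropy(p):
    """Binary entropy in nats."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def expected_info_gain(prior, pd=0.9, pfa=0.1):
    """Expected reduction in target-presence entropy from observing a cell
    with detection probability pd and false-alarm probability pfa."""
    p_pos = prior * pd + (1 - prior) * pfa        # predictive P(alarm)
    post_pos = prior * pd / p_pos                 # P(target | alarm)
    post_neg = prior * (1 - pd) / (1 - p_pos)     # P(target | no alarm)
    exp_post_entropy = p_pos * entropy(post_pos) + (1 - p_pos) * entropy(post_neg)
    return entropy(prior) - exp_post_entropy

priors = np.array([0.02, 0.5, 0.9, 0.3])          # target beliefs per grid cell
gains = expected_info_gain(priors)
print("task cell:", int(np.argmax(gains)))        # most informative observation
```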
45

Performance and Implementation Aspects of Nonlinear Filtering

Hendeby, Gustaf January 2008 (has links)
Nonlinear filtering is an important standard tool for information and sensor fusion applications, e.g., localization, navigation, and tracking. It is an essential component in surveillance systems and of increasing importance for standard consumer products, such as cellular phones with localization, car navigation systems, and augmented reality. This thesis addresses several issues related to nonlinear filtering, including performance analysis of filtering and detection, algorithm analysis, and various implementation details. The most commonly used measure of filtering performance is the root mean square error (RMSE), which is bounded from below by the Cramér-Rao lower bound (CRLB). This thesis presents a methodology to determine the effect different noise distributions have on the CRLB. This leads up to an analysis of the intrinsic accuracy (IA), the informativeness of a noise distribution. For linear systems the resulting expressions are direct and can be used to determine whether a problem is feasible or not, and to indicate the efficacy of nonlinear methods such as the particle filter (PF). A similar analysis is used for change detection performance analysis, which once again shows the importance of IA. A problem with RMSE evaluation is that it captures only one aspect of the resulting estimate, while the distributions of the estimates can differ substantially. To address this, the Kullback divergence has been evaluated, demonstrating the shortcomings of pure RMSE evaluation. Two estimation algorithms have been analyzed in more detail: the Rao-Blackwellized particle filter (RBPF), by some authors referred to as the marginalized particle filter (MPF), and the unscented Kalman filter (UKF). The RBPF analysis leads to a new way of presenting the algorithm, thereby making it easier to implement. In addition, the presentation can give new intuition for the RBPF as a stochastic Kalman filter bank. In the analysis of the UKF, the focus is on the unscented transform (UT). The results include several simulation studies and a comparison with the Gauss approximations of first and second order in the limit case.

This thesis also presents an implementation of a parallelized PF and outlines an object-oriented framework for filtering. The PF has been implemented on a graphics processing unit (GPU), i.e., a graphics card. The GPU is an inexpensive parallel computational resource available in most modern computers and is rarely used to its full potential. Being able to run the PF in parallel makes new applications, where speed and good performance are important, possible. The object-oriented filtering framework provides the flexibility and performance needed for large-scale Monte Carlo simulations using modern software design methodology. It can also help to efficiently turn a prototype into a finished product.
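The point about RMSE versus the Kullback divergence can be made with a toy Gaussian example: two estimators with identical RMSE whose estimate distributions are nevertheless far apart. The closed-form Gaussian KL divergence below is standard; the bias and spread numbers are chosen purely for illustration.

```python
import numpy as np

def kl_gauss(m0, s0, m1, s1):
    """KL( N(m0, s0^2) || N(m1, s1^2) ) in closed form (s0, s1 are std devs)."""
    return np.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

# Two estimators of a state with true value 0: both have RMSE = 1,
# but one is unbiased with large spread, the other biased with small spread.
rmse_a = np.sqrt(0.0**2 + 1.0**2)   # bias 0,   std 1
rmse_b = np.sqrt(0.8**2 + 0.6**2)   # bias 0.8, std 0.6
assert np.isclose(rmse_a, rmse_b)

# The symmetrized Kullback divergence between the estimate distributions
# shows how different they really are despite the equal RMSE.
j = kl_gauss(0, 1, 0.8, 0.6) + kl_gauss(0.8, 0.6, 0, 1)
print("equal RMSE:", rmse_a, rmse_b, "  Kullback divergence:", j)
```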
46

Improved facies modelling with multivariate spatial statistics

Li, Yupeng Unknown Date
No description available.
47

Stochastic routing models in sensor networks

Keeler, Holger Paul January 2010 (has links)
Sensor networks are an evolving technology that promise numerous applications. The random and dynamic structure of sensor networks has motivated the suggestion of greedy data-routing algorithms.

In this thesis stochastic models are developed to study the advancement of messages under greedy routing in sensor networks. A model framework based on homogeneous spatial Poisson processes is formulated and examined to give a better understanding of the stochastic dependencies arising in the system. The effects of the model assumptions and the inherent dependencies are discussed and analyzed. A simple power-saving sleep scheme is included, and its effects on the local node density are addressed to reveal that it reduces one of the dependencies in the model.

Single-hop expressions describing the advancement of messages are derived, and asymptotic expressions for the hop length moments are obtained. Expressions for the distribution of the multihop advancement of messages are derived. These expressions involve high-dimensional integrals, which are evaluated with quasi-Monte Carlo integration methods. An importance sampling function is derived to speed up the quasi-Monte Carlo methods. The subsequent results agree extremely well with those obtained via routing simulations. A renewal process model is proposed to model multihop advancements, and is justified under certain assumptions.

The model framework is extended by incorporating a spatially dependent density, which is inversely proportional to the sink distance. The aim of this extension is to demonstrate that an inhomogeneous Poisson process can be used to model a sensor network with spatially dependent node density. Elliptic integrals and asymptotic approximations are used to describe the random behaviour of hops. The final model extension entails including random transmission radii, the effects of which are discussed and analyzed. The thesis is concluded by giving future research tasks and directions.
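A minimal simulation in the spirit of this framework — a homogeneous Poisson number of uniform nodes in the unit square and greedy hops toward a sink at the origin — illustrates the quantities studied (hop counts, routing failures). The density and radius values are arbitrary choices for the sketch, not parameters from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def greedy_route(density=200.0, radius=0.15):
    """Simulate one greedy route on a homogeneous Poisson process in the
    unit square: each hop goes to the in-range neighbor closest to the
    sink at the origin."""
    n = rng.poisson(density)            # Poisson number of nodes
    nodes = rng.random((n, 2))          # uniform positions given n
    pos = np.array([1.0, 1.0])          # source at the far corner
    hops = 0
    while np.linalg.norm(pos) > radius:
        d_sink = np.linalg.norm(nodes, axis=1)
        in_range = np.linalg.norm(nodes - pos, axis=1) <= radius
        forward = in_range & (d_sink < np.linalg.norm(pos))
        if not forward.any():
            return None                 # routing void: message stuck
        pos = nodes[forward][np.argmin(d_sink[forward])]
        hops += 1
    return hops

runs = [greedy_route() for _ in range(200)]
ok = [h for h in runs if h is not None]
print("mean hops:", np.mean(ok), " failure rate:", 1 - len(ok) / len(runs))
```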
49

Cross-entropy based combination of discrete probability distributions for distributed decision making

Sečkárová, Vladimíra January 2015 (has links)
Title: Cross-entropy based combination of discrete probability distributions for distributed decision making
Author: Vladimíra Sečkárová (seckarov@karlin.mff.cuni.cz), Department of Probability and Mathematical Statistics, Faculty of Mathematics and Physics, Charles University in Prague
Supervisor: Ing. Miroslav Kárný, DrSc., The Institute of Information Theory and Automation of the Czech Academy of Sciences (school@utia.cas.cz)
Abstract: In this work we propose a systematic way to combine discrete probability distributions based on decision-making theory and information theory, namely the cross-entropy (also known as the Kullback-Leibler (KL) divergence). The optimal combination is a probability mass function minimizing the conditional expected KL divergence. The expectation is taken with respect to a probability density function that also minimizes the KL divergence under problem-reflecting constraints. Although the combination is derived for the case when the sources provide probabilistic information on a common support, it can be applied to other types of given information via the proposed transformation and/or extension. The discussion regarding the proposed combining and sequential processing of available data, duplicate data, influence...
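For context, the two classical KL-optimal pooling rules are easy to state: the arithmetic (linear) pool minimizes the weighted sum of KL(p_i || q) over q, and the normalized geometric (logarithmic) pool minimizes the weighted sum of KL(q || p_i). The sketch below shows both; the thesis's combination, built on a conditional expected KL divergence under constraints, is a distinct construction.

```python
import numpy as np

def linear_pool(pmfs, w):
    """Arithmetic pool: minimizes sum_i w_i * KL(p_i || q) over q."""
    return np.average(pmfs, axis=0, weights=w)

def log_pool(pmfs, w, eps=1e-12):
    """Geometric (logarithmic) pool: minimizes sum_i w_i * KL(q || p_i) over q."""
    q = np.exp(np.average(np.log(np.asarray(pmfs) + eps), axis=0, weights=w))
    return q / q.sum()

# Two sources reporting pmfs on a common 4-point support.
pmfs = [np.array([0.7, 0.2, 0.05, 0.05]),
        np.array([0.4, 0.4, 0.1, 0.1])]
w = [0.5, 0.5]
print("linear pool:", linear_pool(pmfs, w))
print("log pool:   ", log_pool(pmfs, w))
```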
50

Estimation of a predictive density with additional information

Sadeghkhani, Abdolnasser January 2017 (has links)
In the context of Bayesian theory and decision theory, the estimation of a predictive density of a random variable represents an important and challenging problem. Typically, in a parametric framework, there exists additional information that can be interpreted as constraints. This thesis deals with strategies and improvements that take this additional information into account in order to obtain effective, and sometimes better performing, predictive densities than others in the literature. The results apply to normal models with a known or unknown variance. We describe Bayesian predictive densities for Kullback-Leibler, Hellinger, reverse Kullback-Leibler, and α-divergence losses, and establish links with skew-normal densities. We obtain dominance results using several techniques, including variance expansion, dual loss functions in point estimation, restricted parameter space estimation, and Stein estimation. Finally, we obtain a general result for the Bayesian estimation of a ratio of two exponential family densities.
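A standard special case illustrates why Bayes predictive densities can dominate plug-in densities under Kullback-Leibler loss: for normal data with known variance and a flat prior, the Bayes predictive N(x̄, σ²(1 + 1/n)) beats the plug-in N(x̄, σ²). The Monte Carlo check below verifies this known baseline; it does not reproduce the thesis's constrained-information estimators.

```python
import numpy as np

rng = np.random.default_rng(3)

def kl_gauss(m0, v0, m1, v1):
    """KL( N(m0, v0) || N(m1, v1) ) for variances v0, v1."""
    return 0.5 * (np.log(v1 / v0) + (v0 + (m0 - m1) ** 2) / v1 - 1)

theta, sigma2, n = 0.0, 1.0, 5
trials = 20000
xbar = rng.normal(theta, np.sqrt(sigma2 / n), size=trials)  # sampling dist. of the mean

# Plug-in predictive N(xbar, sigma2) vs Bayes predictive N(xbar, sigma2*(1 + 1/n)).
risk_plugin = kl_gauss(theta, sigma2, xbar, sigma2).mean()
risk_bayes = kl_gauss(theta, sigma2, xbar, sigma2 * (1 + 1 / n)).mean()
print(f"KL risk, plug-in: {risk_plugin:.4f}   Bayes predictive: {risk_bayes:.4f}")
```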
