11

Evaluating credal set theory as a belief framework in high-level information fusion for automated decision-making

Karlsson, Alexander January 2010 (has links)
High-level information fusion is a research field in which methods for achieving an overall understanding of the current situation in an environment of interest are studied. The ultimate goal of these methods is to provide effective decision support for human or automated decision-making. One of the main proposed ways of achieving this is to reduce the uncertainty coupled with the decision by utilizing multiple sources of information. Handling uncertainty in high-level information fusion is performed through a belief framework, and one of the most commonly used such frameworks is based on Bayesian theory. However, Bayesian theory has often been criticized for utilizing a representation of belief and evidence that does not sufficiently express some types of uncertainty. For this reason, a generalization of Bayesian theory has been proposed, denoted as credal set theory, which allows one to represent belief and evidence imprecisely. In this thesis, we explore whether credal set theory yields measurable advantages, compared to Bayesian theory, when used as a belief framework in high-level information fusion for automated decision-making, i.e., when decisions are made by some pre-determined algorithm. We characterize the Bayesian and credal operators for belief updating and evidence combination and perform three experiments where the Bayesian and credal frameworks are evaluated with respect to automated decision-making. The decision performance of the frameworks is measured both by enforcing a single decision and by allowing a set of decisions, based on the frameworks’ belief and evidence structures. We construct anomaly detectors based on the frameworks and evaluate these detectors with respect to maritime surveillance. The main conclusion of the thesis is that although the credal framework uses considerably more expressive structures to represent belief and evidence, compared to the Bayesian framework, the performance of the credal framework can be significantly worse, on average, than that of the Bayesian framework, irrespective of the amount of imprecision. / Högnivåfusion är ett forskningsområde där man studerar metoder för att uppnå en övergripande situationsförståelse för någon miljö av intresse. Syftet med högnivåfusion är att tillhandahålla ett effektivt beslutsstöd för mänskligt eller automatiskt beslutsfattande. För att åstadkomma detta har det föreslagits att man ska reducera osäkerhet kring beslutet genom att använda flera olika källor av information. Det främsta verktyget för att hantera osäkerhet inom högnivåfusion är ett ramverk för att hantera evidensbaserad trolighet och evidenser kring en given tillståndsrymd. Ett av de vanligaste ramverken som används inom högnivåfusion för detta syfte är baserat på Bayesiansk teori. Denna teori har dock ofta blivit kritiserad för att den använder en representation av evidensbaserad trolighet och evidenser som inte är tillräckligt uttrycksfull för att representera vissa typer av osäkerheter. På grund av detta har en generalisering av Bayesiansk teori föreslagits, kallad "credal set theory", där man kan representera evidensbaserad trolighet och evidenser oprecist. I denna avhandling undersöker vi om "credal set theory" medför mätbara fördelar, jämfört med Bayesiansk teori, då det används som ett ramverk i högnivåfusion för automatiskt beslutsfattande, dvs. när ett beslut fattas av en algoritm.
Vi karaktäriserar Bayesianska och "credal" operatorer för uppdatering av evidensbaserad trolighet och kombination av evidenser och vi presenterar tre experiment där vi utvärderar ramverken med avseende på automatiskt beslutsfattande. Utvärderingen genomförs med avseende på ett enskilt beslut och för en mängd beslut baserade på ramverkens strukturer för evidensbaserad trolighet och evidens. Vi konstruerar anomalidetektorer baserat på de två ramverken som vi sedan utvärderar med avseende på maritim övervakning. Den främsta slutsatsen av denna avhandling är att även om "credal set theory" har betydligt mer uttrycksfulla strukturer för att representera evidensbaserad trolighet och evidenser kring ett tillståndsrum, jämfört med det Bayesianska ramverket, så kan "credal set theory" prestera signifikant sämre i genomsnitt än det Bayesianska ramverket, oberoende av mängden oprecision. / Examining Committee: Arnborg, Stefan, Professor (KTH Royal Institute of Technology), Kjellström, Hedvig, Associate Professor (Docent) (KTH Royal Institute of Technology), Saffiotti, Alessandro, Professor (Örebro University)
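The contrast the abstract draws between a precise Bayesian belief and an imprecise credal one can be illustrated with a small sketch. The two-state example, the prior interval, and the likelihood values below are invented for illustration; they are not the operators or data used in the thesis.

```python
# Illustrative sketch (not the thesis's actual operators): Bayesian updating of a
# single prior versus "credal" updating, where a set of extreme-point priors is
# updated element-wise and decisions are read off the resulting probability intervals.

def bayes_update(prior, likelihood):
    """Posterior over two states given a prior and a likelihood vector."""
    joint = [p * l for p, l in zip(prior, likelihood)]
    z = sum(joint)
    return [j / z for j in joint]

likelihood = [0.7, 0.3]   # P(evidence | state) for states A and B (assumed numbers)

# Bayesian framework: one precise prior -> one precise posterior.
bayes_posterior = bayes_update([0.5, 0.5], likelihood)

# Credal framework: an imprecise prior, represented by its extreme points,
# -> an interval of posterior probabilities for each state.
credal_prior_extremes = [[0.4, 0.6], [0.6, 0.4]]
credal_posteriors = [bayes_update(p, likelihood) for p in credal_prior_extremes]
lower_A = min(post[0] for post in credal_posteriors)
upper_A = max(post[0] for post in credal_posteriors)

print("Bayesian P(A | evidence) =", round(bayes_posterior[0], 3))
print("Credal   P(A | evidence) in [%.3f, %.3f]" % (lower_A, upper_A))
# Enforcing a single decision requires collapsing the interval to one value,
# whereas allowing a set of decisions can return every state that remains admissible.
```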
12

Fotbollsmålvakters antecipation under straffläggning : Betydelsen av explicit kontextuell information / Goalkeepers' anticipation in penalty kicks : The significance of explicit contextual prior information

Abrahamsson, Hampus January 2024 (has links)
Syftet med studien var att studera effekten av explicit kontextuell förhandsinformation på olika reliabilitetsnivåer hos fotbollsmålvakters förmåga att rädda straffar. Hypotesen som sattes utgick från att det skulle finnas en effekt av kontextuell förhandsinformation under förhållandet med stark handlingstendens, samtidigt skulle det inte finnas någon effekt av kontextuell förhandsinformation på målvakters förmåga vid förhållandet av svag handlingstendens. Syftet studerades genom en faktoriell experimentell design med de två förhållandena av svag handlingstendens (60/40) och stark handlingstendens (80/20). Manipuleringen av kontextuell förhandsinformation applicerades i båda förhållanden av handlingstendens i form av manipuleringen med kontextuell förhandsinformation och utan kontextuell förhandsinformation, på så vis utgjordes experimentet av fyra förhållanden. Deltagarna i studien bestod av 8 fotbollsmålvakter varav 7 män och 1 kvinna som alla hade 8 års erfarenhet av målvaktsspel. Det teoretiska ramverket som användes för analys var den bayesianska modellen som är en kognitiv modell vilken förklarar hur människan tar in och använder information genom att använda explicit kontextuell förhandsinformation och kinematisk information. Resultatet visade på att det inte fanns en statistisk signifikant effekt av kontextuell förhandsinformation oberoende av stark eller svag handlingstendens. Med resultatet menas att det inte finns några betydande effekter av kontextuell förhandsinformation under förhållande av stark handlingstendens eller svag handlingstendens. Resultaten för studien ligger inte i linje med tidigare forskningsbild av fenomenet, detta i form av att resultatet indikerar att det inte finns en effekt på målvakternas förmåga till antecipering. / The purpose of the study was to examine the effect of explicit contextual prior information, at different reliability levels, on soccer goalkeepers' ability to save penalties. The hypothesis was that there would be an effect of contextual prior information under the high-action-tendency condition, while there would be no effect of contextual prior information on goalkeepers' ability under the low-action-tendency condition. This was studied through a factorial experimental design with two action-tendency ratios: a low action tendency (60/40) and a high action tendency (80/20). Contextual prior information was either provided or withheld in both action-tendency conditions, so the experiment consisted of four conditions. The participants were 8 soccer goalkeepers (7 men and 1 woman), all of whom had 8 years of goalkeeping experience. The theoretical framework used for the analysis was the Bayesian model, a cognitive model that explains how humans take in and use information by combining explicit contextual prior information with kinematic information. The results showed no statistically significant effect of contextual prior information in either the high-action-tendency or the low-action-tendency condition. The results are not in line with previous research, in that highly probable contextual information did not appear to affect the goalkeepers' ability to anticipate.
/ The thesis was developed in collaboration with Pontus Hahrgren: the method design, the data collection, and the analysis of the results were carried out jointly, but separate theses were subsequently written up to allow for a legally secure examination.
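The Bayesian model referred to in the abstract combines an explicit contextual prior with a kinematic cue. The sketch below illustrates that combination with Bayes' rule; the prior values mirror the 80/20 and 60/40 ratios in the design, while the cue likelihoods are invented for illustration and are not the study's data.

```python
# Hedged sketch of Bayesian cue combination: an explicit contextual prior
# (e.g., "the kicker shoots left 80% of the time") is combined with a noisy
# kinematic cue that here weakly favours the opposite side.

def combine(prior_left, cue_given_left, cue_given_right):
    """Posterior probability that the kick goes left, given the prior and the cue."""
    num = prior_left * cue_given_left
    den = num + (1.0 - prior_left) * cue_given_right
    return num / den

# Assumed kinematic cue model: P(cue | left) = 0.4, P(cue | right) = 0.6.
cue_left, cue_right = 0.4, 0.6

for label, prior in [("high action tendency (80/20)", 0.8),
                     ("low action tendency (60/40)", 0.6),
                     ("no contextual information", 0.5)]:
    print(label, "-> P(left | cue) =", round(combine(prior, cue_left, cue_right), 3))
# With the strong prior the posterior stays on the contextually favoured side despite
# the conflicting cue; with a weaker or flat prior the cue pulls the posterior back
# to, or past, indifference.
```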
13

Improving specimen identification: Informative DNA using a statistical Bayesian method

Lou, Melanie 04 1900 (has links)
This work investigates the assignment of unknown sequences to their species of origin. In particular, I examine four questions: Is existing (GenBank) data reliable for accurate species identification? Does a segregating sites algorithm make accurate species identifications, and how does it compare to another Bayesian method? Does broad sampling of reference species improve the information content of reference data? And does an extended model (of the theory of segregating sites) describe the genetic variation in a set of sequences (of a species or population) better? Though we did not find unusually similar between-species sequences in GenBank, there was evidence of unusually divergent within-species sequences, suggesting that caution should be exercised and a firm understanding of GenBank species acquired before utilizing GenBank data. To address challenging identifications resulting from an overlap between within- and between-species variation, we introduced a Bayesian treeless statistical assignment method that makes use of segregating sites. Assignments with simulated and Drosophila (fruit fly) sequences show that this method can provide fast, high-probability assignments for recently diverged species. To address reference sequences with low information content, we show that the addition of even one broadly sampled reference sequence can increase the number of correct assignments. Finally, an extended theory of segregating sites generates more realistic probability estimates of the genetic variability of a set of sequences. Species are dynamic entities, and this work highlights ideas and methods to address dynamic genetic patterns in species. / Doctor of Philosophy (PhD)
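The assignment method above is built on segregating sites. A minimal sketch of the idea follows; the toy sequences and the simple "fewest added segregating sites" rule are illustrative stand-ins for the Bayesian treatment described in the thesis, not its actual algorithm.

```python
# A site is "segregating" if more than one nucleotide occurs there across the
# aligned sequences. The toy rule below assigns a query to the species whose
# reference set gains the fewest new segregating sites when the query is added.

def segregating_sites(aligned_seqs):
    """Count alignment positions at which the sequences are not all identical."""
    return sum(len(set(column)) > 1 for column in zip(*aligned_seqs))

references = {                      # made-up aligned reference sequences
    "species_A": ["ACGTACGT", "ACGTACGA", "ACGTACGT"],
    "species_B": ["ACTTACCT", "ACTTACCT", "ATTTACCT"],
}
query = "ACGTACGA"

scores = {}
for species, seqs in references.items():
    added = segregating_sites(seqs + [query]) - segregating_sites(seqs)
    scores[species] = added
    print(species, "segregating sites added by query:", added)

print("toy assignment:", min(scores, key=scores.get))
```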
14

Modélisation de la contamination par Listeria monocytogenes pour l'amélioration de la surveillance dans les industries agro-alimentaires / Contamination modeling of Listeria monocytogenes to improve surveillance in food industry

Commeau, Natalie 04 June 2012 (has links)
Les industriels du secteur agro-alimentaire sont responsables de la qualité des produits mis sur le marché. Un moyen de vérifier cette qualité consiste à déterminer la distribution de la contamination. Dans cette thèse, nous avons utilisé des données portant sur L. monocytogenes durant le procédé de fabrication de lardons et du saumon fumé. Nous avons ensuite élaboré des modèles hiérarchiques pour décrire la concentration en prenant ou non en compte diverses variabilités, nous avons estimé les paramètres par inférence bayésienne, puis comparé leur capacité à simuler des données proches des observations. Nous avons également comparé l'estimation de paramètres par inférence fréquentiste sur deux modèles en utilisant les données brutes issues des analyses microbiologiques et ces mêmes données converties en concentration. Par ailleurs, nous avons amélioré un modèle décrivant le devenir de L. monocytogenes au cours de la fabrication des lardons. Le plan d'échantillonnage permettant d'estimer la qualité des produits, nous avons appliqué la théorie de la décision aux couples L. monocytogenes/lardons et L. monocytogenes/saumon fumé en sortie usine pour déterminer la taille optimale de l'échantillon à prélever par lot de manière à minimiser les coûts moyens supportés par le fabricant. Enfin, nous avons comparé plusieurs plans d'échantillonnage de mesure de la température d'un plat en sauce fabriqué dans une cuisine centrale et placé dans une cellule de refroidissement rapide. L'objectif était de sélectionner le meilleur plan d'échantillonnage en fonction du risque admissible pour le gestionnaire quant à la croissance de C. perfringens. / Food business operators are responsible for the quality of the products they sell. A way to assess the safety of food is to determine the contamination distribution. In this thesis, we used data on L. monocytogenes collected during the production of diced bacon and of cold-smoked salmon. We then constructed several hierarchical models to describe the contamination, taking into account or not several kinds of variability, such as between-batch variability. We compared the capacity of each model to simulate data close to the observed ones. We also compared parameter estimation by frequentist inference on two models, using the raw data (the results of the microbiological analyses) and the same data converted into concentrations. In addition to the models describing the contamination at one step of the process, we improved an existing model describing the fate of L. monocytogenes throughout the diced-bacon process. A tool to assess the quality of a product is the sampling plan. We applied Bayesian decision theory to the pairs L. monocytogenes/diced bacon and L. monocytogenes/cold-smoked salmon at the end of the process to determine the optimal size of the sample analysed per batch, so that the average cost for the manufacturer is as low as possible. We also compared several sampling plans for measuring the temperature of a meal cooked in an institutional food service facility and placed in a blast chiller just after cooking. The aim was to select the best sampling plan given the risk of C. perfringens growth that the manager is willing to accept.
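The decision-theoretic step described above chooses a per-batch sample size by trading sampling cost against the expected cost of releasing a contaminated batch. The sketch below shows that trade-off with a naive binomial detection model; the costs, prevalence, and detection probability are invented placeholders, not the values estimated in the thesis.

```python
# Pick the per-batch sample size n that minimises expected cost:
# testing cost grows linearly in n, while the chance of missing a contaminated
# batch shrinks as (1 - p_detect_per_unit) ** n.

test_cost = 15.0          # assumed cost of analysing one sample unit
release_cost = 20000.0    # assumed cost if a contaminated batch is released
p_contaminated = 0.05     # assumed prior probability that a batch is contaminated
p_detect_per_unit = 0.20  # assumed probability that one sampled unit from a
                          # contaminated batch tests positive

def expected_cost(n):
    p_miss = (1.0 - p_detect_per_unit) ** n      # all n sampled units test negative
    return n * test_cost + p_contaminated * p_miss * release_cost

best_n = min(range(0, 101), key=expected_cost)
print("optimal sample size:", best_n,
      "expected cost:", round(expected_cost(best_n), 2))
```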
15

Estimação de parâmetros genéticos de produção de leite e de gordura da raça Pardo-suíça, utilizando metodologias freqüentista e bayesiana / Estimation of genetic parameters of milk and fat yield of Brown-Swiss cows using frequentist and bayesian methodologies

Yamaki, Marcos 31 July 2006 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / First-lactation data from 6,262 Brown-Swiss cows in 311 herds, daughters of 803 sires with calvings between 1980 and 2003, were used to estimate genetic parameters for milk and fat yield. The variance components were estimated by restricted maximum likelihood (REML) and Bayesian methods under an animal model, using single- and two-trait analyses. The REML estimates were obtained with the software MTDFREML (BOLDMAN et al., 1995), testing single-trait models with different effects for the covariables and considering contemporary group and season as fixed effects. The best fits obtained in the single-trait analyses were used in the two-trait analysis. The estimate of the additive variance was reduced when lactation length was included in the model, suggesting that the animals were being adjusted to a common base with respect to their capacity to transmit a longer or shorter lactation length to the progeny; adjustment for this covariable is therefore not recommended. Age at calving, on the other hand, linearly influenced milk and fat yield. The heritability estimates were 0.26 and 0.25 for milk and fat yield, respectively, with a genetic correlation of 0.95. The high correlation between these traits suggests that part of the genes that act on milk yield also affect fat yield, so that selection for milk yield indirectly results in an increase in fat yield. The Bayesian estimation was performed with the software MTGSAM (VAN TASSELL and VAN VLECK, 1995). Several chain lengths were tested to obtain the marginal posterior densities in the single-trait analyses, and the best combination of chain length, burn-in, and sampling interval was used in the two-trait analysis. The burn-in periods were tested with the software GIBANAL (VAN KAAM, 1998), whose analyses provide a sampling interval for each burn-in tested; the sampling interval was chosen according to the serial correlation resulting from the burn-in and sampling process. The heritability estimates were 0.33 ± 0.05 for both traits, with a genetic correlation of 0.95. Similar results have been obtained in studies using the same methodology on first-lactation records. The stationary phase was adequately reached with a chain length of 500,000 iterations and a burn-in of 30,000 iterations. / Dados de primeira lactação de 6.262 vacas distribuídas em 311 rebanhos, filhas de 803 touros com partos entre os anos de 1980 e 2003 foram utilizados para estimar de componentes de variância para as características de produção de leite e gordura com informações de primeira lactação, em animais da raça Pardo-Suíça. Os componentes de variância foram estimados pelo método da máxima verossimilhança restrita (REML) e Bayesiano, sob modelo animal, por meio de análises uni e bicaracterística. A estimação realizada via REML foi obtida com o programa MTDFREML (BOLDMAN et al. 1995) testando modelos unicaracterística com diferentes efeitos para as covariáveis e considerados grupo contemporâneo e estação como efeitos fixos. Os melhores ajustes obtidos nas analises unicaracterística foram utilizados na análise bicaracterística.
A duração da lactação reduziu a estimativa da variância aditiva quando era utilizada no modelo sugerindo que os animais estariam sendo corrigidos para uma mesma base quanto à capacidade de imprimir duração da lactação mais longa ou mais curta à progênie sendo, portanto, não recomendado o ajuste para esta covariável. Já a idade da vaca ao parto, influenciou linearmente a produção de leite e gordura. As herdabilidades estimadas foram 0,26 e 0,25 para produção de leite e gordura respectivamente com correlação genética de 0,95. A alta correlação entre a produção de leite e gordura obtida sugere que parte dos genes que atuam na produção de leite também responde pela produção de gordura, de tal forma que a seleção para a produção de leite resulta, indiretamente, em aumentos na produção de gordura. A estimação via inferência Bayesiana foi realizada com o programa MTGSAM (VAN TASSELL E VAN VLECK, 1995). Foram testados diversos tamanhos de cadeia para a obtenção das densidades marginais a posteriori das análises unicaracterística, a melhor proposta para o tamanho de cadeia, burn-in e amostragem foi utilizada para a análise bicaracterística. Os períodos de burn-in foram testados pelo programa GIBANAL (VAN KAAM, 1998) cujas análises fornecem um intervalo de amostragem para cada burn-in testado, o critério de escolha do intervalo de amostragem foi feito de acordo com a correlação serial, resultante do burn-in e do processo de amostragem. As estimativas de herdabilidade obtidas foram 0,33 ± 0,05 para ambas as características com correlação de 0,95. Resultados similares foram obtidos em estudos utilizando a mesma metodologia em informações de primeira lactação. A fase estacionária foi adequadamente atingida com uma cadeia de 500.000 iterações e descarte inicial de 30.000 iterações.
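The quantities reported above (heritabilities of roughly 0.25-0.33 and a genetic correlation of 0.95) follow directly from variance components. The sketch below shows the formulas; the component values are invented so that the resulting ratios land near the reported magnitudes, and they are not the thesis's REML or Gibbs estimates.

```python
# Heritability is the ratio of additive genetic variance to total phenotypic
# variance; the genetic correlation is the additive covariance scaled by the
# additive standard deviations of the two traits.

import math

var_a_milk, var_e_milk = 260.0, 740.0   # invented additive/residual variances, milk
var_a_fat,  var_e_fat  = 25.0,  75.0    # invented additive/residual variances, fat
cov_a_milk_fat = 76.6                   # invented additive genetic covariance

h2_milk = var_a_milk / (var_a_milk + var_e_milk)
h2_fat  = var_a_fat  / (var_a_fat  + var_e_fat)
r_g = cov_a_milk_fat / math.sqrt(var_a_milk * var_a_fat)

print("h2 milk = %.2f, h2 fat = %.2f, genetic correlation = %.2f"
      % (h2_milk, h2_fat, r_g))
```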
16

Path reconstruction in diffusion tensor magnetic resonance imaging / Reconstitution de la trajectoire dans le diffusion tenseur magnétique résonance imagerie

Song, Xin 13 July 2011 (has links)
L'environnement sous-marin compliqué et la pauvre vision sous-marine font le robot câblé sous-marin super-mini-à peine pour être contrôlés. Traditionnellement, la méthode de contrôle manuelle par les opérateurs est adoptée par cette sorte de robots. Malheureusement, les robots peuvent à peine travailler normalement dans ces circonstances pratiques. Donc, pour surmonter ces manques et améliorer les capacités de ces robots câblés sous-marins, ce papier propose plusieurs améliorations, en incluant le design de système, le design de contrôleur de mouvement, la reconnaissance d'obstacle en trois dimensions et les technologies de reconstruction de sentier en trois dimensions etc. (1) Super-mini sous-marins de conception système de robot: plusieurs programmes d'amélioration et d'idées de conception importants sont étudiés pour la super-mini robot sous-marin. (2) La conception du contrôleur de mouvement du robot sous-marin dans des circonstances compliquées est étudiée. Un nouveau réseau de neurones adaptatif coulissantes contrôleur de mode avec le contrôleur paramètre équilibrée est proposé. Basé sur la théorie de la gestion adaptative floue contrôleur de mode coulissant, un algorithme amélioré est également proposé et appliqué au robot sous-marin. (3) Recherche de reconstructions d'environnement sous-marines en trois dimensions : les algorithmes et les expériences de reconstructions d'environnement sous-marines sont enquêtés. L'algorithme de traitement d'image de DT-MRI et la théorie de reconstructions d'obstacle en trois dimensions sont adoptés et améliorés pour l'application du robot sous-marin. / The complicated underwater environment and poor underwater vision make super-mini underwater cable robots hard to control. Traditionally, manual control by operators has been adopted for this kind of robot. Unfortunately, such robots can hardly work normally under these practical circumstances. Therefore, to overcome these shortcomings and improve the capabilities of these underwater cable robots, this thesis proposes several improvements, including the system design, the motion controller design, three-dimensional obstacle recognition, and three-dimensional path reconstruction technologies. The details are as follows: (1) Super-mini underwater robot system design: several improvement schemes and important design ideas are investigated for the super-mini underwater robot. (2) Super-mini robot motion controller design: the design of a motion controller for an underwater robot in complicated circumstances is investigated. A new adaptive neural network sliding mode controller with a balanced parameter controller (ANNSMB) is proposed. Based on the theory of the adaptive fuzzy sliding mode controller (AFSMC), an improved algorithm is also proposed and applied to the underwater robot. (3) Research on three-dimensional underwater environment reconstruction: the algorithms and experiments of underwater environment reconstruction are investigated. A DT-MRI image processing algorithm and the theory of three-dimensional obstacle reconstruction are adopted and improved for application to the underwater robot. (4) The super-mini underwater robot path planning algorithms are investigated.
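The motion controller mentioned in item (2) is a sliding mode design. The snippet below is a bare-bones sliding mode regulator for a double integrator, intended only to illustrate the sliding surface and switching law; it is not the thesis's adaptive neural network (ANNSMB) or fuzzy (AFSMC) controller, and the plant, gains, and time step are arbitrary illustrative choices.

```python
# Sliding mode control sketch: drive the sliding surface s = c*e + de/dt to zero
# with a switching law; a saturation function replaces sign() to reduce chattering.

def sat(x, width=0.05):
    """Saturated sign function (boundary layer of the given width)."""
    return max(-1.0, min(1.0, x / width))

c, k, dt = 2.0, 5.0, 0.01
x, v = 1.0, 0.0               # double-integrator state: position and velocity
x_ref = 0.0                   # regulation target

for step in range(500):
    e, de = x - x_ref, v
    s = c * e + de            # sliding surface
    u = -c * de - k * sat(s)  # equivalent control + switching term
    v += u * dt               # plant dynamics: x'' = u (Euler integration)
    x += v * dt

print("final position error: %.4f" % abs(x - x_ref))
```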
17

Caractérisations des familles exponentielles naturelles cubiques : étude des lois Beta généralisées et de certaines lois de Kummer / Characterizations of the cubic natural exponential families : Study of generalized beta distributions and some Kummer’s distributions

Hamza, Marwa 18 May 2015 (has links)
Cette thèse contient deux parties différentes. Dans la première partie, nous nous sommes intéressés aux familles exponentielles naturelles cubiques dont la fonction variance est un polynôme de degré inférieur ou égal à 3. Nous donnons trois caractérisations de ces familles en se basant sur une approche Bayesienne. L’une de ces caractérisations repose sur le fait que la fonction cumulante vérifie une équation différentielle. La deuxième partie de notre travail est consacrée aux conséquences de la propriété d’indépendance de type « Matsumoto-Yor » qui a été développée par Koudou et Vallois. Cette propriété fait intervenir la famille de lois de Kummer de type 2 et les lois Beta généralisées. En se basant sur la méthode de conditionnement et sur la méthode de rejet, nous donnons des réalisations presque sûres de ces distributions de probabilités. D’autre part, nous caractérisons la famille de lois de Kummer de type 2 (resp. les lois Beta généralisées) par une équation algébrique impliquant des lois gamma (resp. les lois Beta). / This thesis consists of two different parts. In the first part, we are interested in the real cubic natural exponential families, i.e., those whose variance function is a polynomial of degree less than or equal to 3. We give three characterizations of such families using a Bayesian approach. One of these characterizations is based on a differential equation satisfied by the cumulant function. In the second part, we study in depth the Matsumoto-Yor type independence property developed by Koudou and Vallois. This property involves the Kummer distribution of type 2 and the generalized beta distributions. Using conditioning and the rejection method, we give almost sure realizations of these distributions. We characterize the family of Kummer distributions of type 2 by an algebraic equation involving gamma distributions, and we proceed similarly with the generalized beta distributions.
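The rejection method mentioned above can be sketched for the Kummer distribution of type 2. The snippet assumes the commonly used density form f(x) proportional to x^(a-1) (1+x)^(-(a+b)) e^(-cx) on x > 0 (an assumption for illustration, not a formula quoted from the thesis); under that form, a Gamma(a, c) proposal makes the acceptance probability simply (1+x)^(-(a+b)) whenever a+b > 0.

```python
# Illustrative rejection sampler with a gamma proposal: the ratio of the assumed
# Kummer type-2 density to the Gamma(a, scale=1/c) proposal density reduces to
# (1+x)**(-(a+b)), which is at most 1 and serves as the acceptance probability.

import random

def sample_kummer_type2(a, b, c, rng=random):
    """Draw one value by rejection from a Gamma(a, scale=1/c) proposal."""
    while True:
        x = rng.gammavariate(a, 1.0 / c)            # proposal draw
        if rng.random() < (1.0 + x) ** (-(a + b)):  # accept with prob (1+x)^-(a+b)
            return x

random.seed(0)
draws = [sample_kummer_type2(a=2.0, b=1.5, c=1.0) for _ in range(10000)]
print("empirical mean of 10,000 draws:", round(sum(draws) / len(draws), 3))
```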
18

A physics-based muon trajectory estimation algorithm for muon tomographic applications

Reshma Sanjay Ughade (16625865) 04 August 2023 (has links)
Recently, the use of cosmic ray muons in critical national security applications, e.g., nuclear nonproliferation and safeguards verification, has gained attention due to unique muon properties such as high energy and low attenuation even in very dense materials. Applications where muon tomography has been demonstrated include cargo screening for detection of special nuclear materials smuggling, source localization, material identification, determination of nuclear fuel debris location in nuclear reactors, etc. However, muon image reconstruction techniques are still limited in resolution, mostly due to multiple Coulomb scattering (MCS) within the target object. Improving and expanding muon tomography would require the development of efficient and flexible physics-based algorithms to model the MCS process and accurately estimate the most probable trajectory of a muon as it traverses the target object. The present study introduces a novel algorithmic approach that utilizes Bayesian probability theory and a Gaussian approximation of MCS to estimate the most probable path of cosmic ray muons as they traverse uniform media.

Using GEANT4, an investigation was conducted involving the trajectories of 10,000 muons emitted from a point source parallel to the x-axis. The proposed algorithm was assessed through four types of simulations. In the first type, muons with energies of 1 GeV, 3 GeV, 10 GeV, and 100 GeV were used to evaluate the algorithm's performance and accuracy. The second type of simulation involved target cubes composed of different materials, including aluminum, iron, lead, and uranium; these simulations specifically focused on muons with an energy of 3 GeV. The third type of simulation employed target cubes with varying lengths of 10 cm, 20 cm, 40 cm, and 80 cm, using muons with an energy of 3 GeV and a uranium target. Lastly, all the previous simulations were revised to accommodate a source of poly-energetic muons. This revision was undertaken to create a more realistic source scenario that aligns with the distribution of muon energies encountered in real-world situations.

The results demonstrate significant improvements in precision and muon flux utilization when comparing different algorithms. The Generalized Muon Trajectory Estimation (GMTE) algorithm shows around a 50% improvement in precision compared to the currently used Straight Line Path (SLP) algorithm across all test scenarios. Additionally, the GMTE algorithm exhibits around a 38% improvement in precision compared to the extensively used Point of Closest Approach (PoCA) algorithm. Similarly, for both mono- and poly-energetic muon sources, the GMTE algorithm shows a 10%-35% increase in muon flux utilization for high-Z materials and a 10%-15% increase for medium-Z materials compared to the PoCA algorithm, and a 6%-9% increase in muon flux utilization for both medium- and high-Z materials compared to the SLP algorithm across all test scenarios. These results highlight the enhanced performance and efficiency of the GMTE algorithm in comparison to the SLP and PoCA algorithms.

Through these extensive simulations, our objective was to comprehensively evaluate the performance and effectiveness of the proposed algorithm across a range of variables, including energy levels, materials, and target geometries. The findings of our study demonstrate that this algorithm enables improved resolution and reduced measurement time for cosmic ray muons when compared with the current SLP and PoCA algorithms.
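The PoCA baseline referred to above reduces each muon event to the point where the straight incoming and outgoing tracks pass closest to each other. A self-contained geometric sketch of that reconstruction follows; the track endpoints and directions are made-up numbers, and this is not the thesis's GMTE algorithm.

```python
# Point of Closest Approach (PoCA): treat the incoming and outgoing tracks as
# straight 3D lines and approximate the scattering vertex by the midpoint of the
# segment joining their closest points.

def poca(p1, d1, p2, d2):
    """Midpoint of closest approach between lines p1 + t*d1 and p2 + s*d2."""
    def dot(u, v): return sum(a * b for a, b in zip(u, v))
    def along(p, d, t): return [a + t * b for a, b in zip(p, d)]

    r = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # (nearly) parallel tracks: no unique PoCA
        return None
    t = (b * e - c * d) / denom     # parameter of closest point on incoming track
    s = (a * e - b * d) / denom     # parameter of closest point on outgoing track
    q1, q2 = along(p1, d1, t), along(p2, d2, s)
    return [(u + v) / 2.0 for u, v in zip(q1, q2)]

incoming = ([0.0, 0.0, -50.0], [0.0, 0.05, 1.0])   # entry point and direction
outgoing = ([1.0, 8.0, 50.0], [0.02, 0.10, 1.0])   # exit point and direction
print("estimated scattering point:", poca(*incoming, *outgoing))
```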
