About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Análise de agrupamento de semeadoras manuais quanto à distribuição do número de sementes / Cluster analysis of manual planters according to the distribution of the number of seeds

Patricia Peres Araripe 10 December 2015
The manual planter is a tool that still plays an important role today in many countries where family and conservation agriculture is practiced. Its use is important because it minimizes soil disturbance and field labor requirements and supports more sustainable productivity, among other factors. Several studies have evaluated and/or compared the manual planters available on the market, but only through measures of position and dispersion. This work uses an alternative methodology for comparing the performance of manual planters. The probabilities associated with each response category were estimated, and the hypothesis that these probabilities do not vary between planters compared in pairs was tested using the likelihood ratio test and the Bayes factor, in the classical and Bayesian paradigms respectively. Finally, the planters were grouped by cluster analysis using the J-divergence as the distance measure. As an illustration of the methodology, data from fifteen manual planters of different manufacturers, analyzed by Molin, Menegatti and Gimenez (2001), with the planters adjusted to deposit exactly two seeds per hit, were considered. Initially, in the classical approach, the planters with no zero counts in any response category were compared, and planters 3, 8 and 14 showed the best behavior. Subsequently, all planters were compared in pairs, either grouping categories or adding the constant 0.5 or 1 to each response category. When categories were grouped, it was difficult to draw conclusions from the likelihood ratio test beyond the fact that planter 15 differed from the others. Adding 0.5 or 1 to each category did not appear to produce distinct groups; since planter 1 differed from the others by the test and deposited two seeds per hit most frequently, as required by the agronomic experiment, it was the planter recommended in this work. In the Bayesian approach, the Bayes factor was used to compare the planters in pairs, and the conclusions were similar to those obtained in the classical approach. Finally, the cluster analysis gave a clearer view of the groups of mutually similar planters under both approaches, confirming the earlier results.
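The pairwise comparison and grouping step can be sketched in code. The following is a minimal illustration, assuming invented per-planter category probabilities and a plain single-linkage grouping rule rather than the thesis's full cluster analysis:

```python
import math
from itertools import combinations

def j_divergence(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler (J) divergence between two
    category-probability vectors."""
    return sum((pi - qi) * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))

# Illustrative per-planter probabilities of depositing
# 0, 1, 2, or 3+ seeds per hit (hypothetical values, not the thesis data).
planters = {
    "P1": [0.05, 0.15, 0.70, 0.10],
    "P2": [0.06, 0.14, 0.68, 0.12],
    "P3": [0.30, 0.30, 0.25, 0.15],
}

# Pairwise distance matrix used as input to the grouping step.
dist = {(a, b): j_divergence(planters[a], planters[b])
        for a, b in combinations(planters, 2)}

# Single-linkage grouping at a chosen threshold (a simplified
# stand-in for a full hierarchical cluster analysis).
threshold = 0.1
groups = []
for name in planters:
    for g in groups:
        if any(dist.get((name, m), dist.get((m, name))) < threshold
               for m in g):
            g.add(name)
            break
    else:
        groups.append({name})
print(groups)
```

With these invented probabilities, P1 and P2 (similar distributions) end up in one group and P3 in another, mirroring how planters with similar category probabilities cluster together.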
182

Reconhecimento automático de defeitos de fabricação em painéis TFT-LCD através de inspeção de imagem / Automatic recognition of manufacturing defects in TFT-LCD panels through image inspection

SILVA, Antonio Carlos de Castro da 15 January 2016
The early detection of defects in the parts used in manufacturing assembly lines is crucial for assuring the quality of the final product. On this premise, this work presents a platform developed to automatically detect manufacturing defects in TFT-LCD (Thin Film Transistor-Liquid Crystal Display) panels by image inspection. The platform is camera-based: the panel under inspection is positioned in a closed chamber to avoid interference from ambient light. The inspection steps comprise image acquisition by the cameras, definition of the region of interest (frame detection), feature extraction, image analysis, defect classification, and the decision to approve or reject the panel. Features are extracted from both the standard RGB and grayscale images. For each RGB component the pixel-intensity variance is calculated, and a panel is rejected if it deviates by more than 5% from the reference values. Classification is performed with the Naive Bayes algorithm. The results show an accuracy of 94.23% in defect detection. Samsung (Manaus) is considering incorporating the platform described here into its mass production line.
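The 5% variance-deviation rule lends itself to a short sketch. The function name, the channel variance values, and the tolerance handling below are assumptions for illustration, not the platform's actual code:

```python
def panel_passes(measured, reference, tolerance=0.05):
    """Reject a panel if any per-channel pixel-intensity variance
    deviates from its reference value by more than `tolerance`
    (5% by default), mirroring the rule described above.
    `measured` and `reference` map channel name -> variance."""
    for channel, ref_var in reference.items():
        deviation = abs(measured[channel] - ref_var) / ref_var
        if deviation > tolerance:
            return False
    return True

# Hypothetical variance values for the R, G, B channels.
reference = {"R": 410.0, "G": 395.0, "B": 388.0}
good = {"R": 402.0, "G": 400.0, "B": 390.0}   # within 5% everywhere
bad  = {"R": 402.0, "G": 450.0, "B": 390.0}   # G deviates by ~14%

print(panel_passes(good, reference))  # → True
print(panel_passes(bad, reference))   # → False
```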
183

Análise espaço-temporal dos casos de aids no Estado de São Paulo - 1990 a 2004 / Space-time analysis of AIDS cases in the State of São Paulo, 1990 to 2004

Rogério Ruscitto do Prado 11 July 2008
Introduction: The State of São Paulo accounts for approximately 40% of the AIDS cases reported in Brazil and therefore offers a favorable setting for a space-time analysis aimed at a better understanding of the spread of HIV/AIDS. Objective: To evaluate the adequacy of a space-time model for analyzing the dynamics of AIDS spread across geographic areas. Methods: Cases of AIDS reported to the Sistema de Informação de Agravos de Notificação (SINAN, National Disease Reporting System, Ministry of Health) from 1990 to 2004 among people aged 15 years or older were selected. Sex-specific relative risks of AIDS for 3-year periods were estimated using full Bayesian models assuming local and global geographic dissemination. Results: The fitted models clearly showed the growth of AIDS in the interior of the State of São Paulo: among the 50 municipalities with the highest relative risks in the last study period, the majority were in the interior. Estimated growth rates of AIDS were mostly between 200% and 300% for women and between 100% and 200% for men. Conclusion: The Bayesian model with global dissemination was more adequate for explaining the AIDS epidemic in the State of São Paulo, since no spatial spreading of AIDS across the state was found, but rather local growth of the disease. The models corroborate the processes of feminization and interiorization extensively described in the literature, indicating their adequacy.
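As a toy illustration of Bayesian relative-risk estimation, the conjugate Poisson-Gamma model below is a simplified, non-spatial stand-in for the full spatio-temporal models used in the study; all counts are invented:

```python
def posterior_relative_risk(y, e, a=1.0, b=1.0):
    """Posterior mean and variance of a relative risk under the
    conjugate Poisson-Gamma model: y ~ Poisson(RR * e) with
    RR ~ Gamma(a, b) implies RR | y ~ Gamma(a + y, b + e).
    A simplified, non-spatial stand-in for the full Bayesian
    spatio-temporal models used in the thesis."""
    shape, rate = a + y, b + e
    return shape / rate, shape / rate**2

# Hypothetical counts: observed AIDS cases y and expected cases e
# for one municipality in an early and a late period.
mean_early, _ = posterior_relative_risk(y=12, e=10.0)
mean_late, _ = posterior_relative_risk(y=30, e=10.0)
growth = (mean_late - mean_early) / mean_early
print(round(mean_early, 3), round(mean_late, 3), round(growth * 100, 1))
```

The growth rate computed this way (roughly 138% for these invented counts) is the kind of period-over-period increase the abstract reports in the 100%-300% range.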
184

Automated invoice handling with machine learning and OCR / Automatiserad fakturahantering med maskininlärning och OCR

Larsson, Andreas, Segerås, Tony January 2016
Companies often process invoices manually, so automation could reduce manual labor. The aim of this thesis is to evaluate which OCR engine, Tesseract or OCRopus, performs best at interpreting invoices, and whether machine learning can be used to process invoices automatically based on previously stored data. Interpreting invoices with the OCR engines yields output text with few spelling errors, but the invoice structure is lost, making it impossible to tell which fields belong together. With Naive Bayes as the machine-learning algorithm, the prototype could correctly classify recurring invoice lines after a set of data had been processed. The conclusion is that neither OCR engine can interpret the invoices into plain text that remains usable, while machine learning with Naive Bayes works on invoices if enough previously processed data is available. The thesis concludes that machine learning and OCR can be utilized to automate manual labor.
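The invoice-line classification step can be illustrated with a minimal multinomial Naive Bayes over word tokens. This is a from-scratch sketch with invented training lines and labels, not the prototype's implementation:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesLines:
    """Minimal multinomial Naive Bayes with Laplace smoothing,
    in the spirit of the invoice-line classification described
    above. Training lines and labels are invented examples."""

    def fit(self, lines, labels):
        self.classes = set(labels)
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter(labels)
        self.vocab = set()
        for line, label in zip(lines, labels):
            tokens = line.lower().split()
            self.word_counts[label].update(tokens)
            self.vocab.update(tokens)
        return self

    def predict(self, line):
        tokens = line.lower().split()
        best, best_lp = None, float("-inf")
        v = len(self.vocab)
        n = sum(self.class_counts.values())
        for c in self.classes:
            total = sum(self.word_counts[c].values())
            lp = math.log(self.class_counts[c] / n)  # class prior
            for t in tokens:  # Laplace smoothing avoids zero probabilities
                lp += math.log((self.word_counts[c][t] + 1) / (total + v))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

model = NaiveBayesLines().fit(
    ["monthly hosting fee", "hosting fee december",
     "office chairs", "desk and chairs"],
    ["services", "services", "goods", "goods"],
)
print(model.predict("hosting fee january"))  # → services
```

After enough processed lines, recurring line types are classified correctly, which matches the prototype's behavior described in the abstract.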
185

Factorisation du rendu de Monte-Carlo fondée sur les échantillons et le débruitage bayésien / Factorization of Monte Carlo rendering based on samples and Bayesian denoising

Boughida, Malik 23 March 2017
Monte Carlo ray tracing has long been known as a class of algorithms of choice for photorealistic rendering. However, its fundamentally random nature introduces characteristic noise in the generated images. In this thesis, we develop new algorithms based on Monte Carlo samples and Bayesian inference in order to factorize rendering computations, by sharing information across neighboring pixels or by caching previously computed data. In the context of offline rendering, we build upon a recent denoising technique from the image-processing community, called Non-local Bayes, to develop a new patch-based collaborative denoising algorithm, named Bayesian Collaborative Denoising. It is designed for the specificities of Monte Carlo noise and uses the additional input data that can be obtained from per-pixel sample statistics. In a second step, to factorize the computations of interactive Monte Carlo rendering of dynamic scenes, we propose a complete rendering algorithm based on path tracing, called Dynamic Bayesian Caching. A partition of the pixels enables a smart grouping of many samples, so that meaningful statistics can be computed on them. These statistics are compared with those stored in a cache to decide whether they should replace or enrich the existing data. Finally, a Bayesian denoising inspired by the work of the first part is applied to enhance image quality.
186

Calibration of the Highway Safety Manual Safety Performance Function and Development of Jurisdiction-Specific Models for Rural Two-Lane Two-Way Roads in Utah

Brimley, Bradford Keith 17 March 2011
This thesis documents the calibration of the Highway Safety Manual (HSM) safety performance function (SPF) for rural two-lane two-way roadway segments in Utah and the development of new SPFs using negative binomial and hierarchical Bayesian modeling techniques. SPFs estimate the safety of a roadway entity, such as a segment or intersection, in terms of the number of crashes. The new SPFs were developed for comparison with the calibrated HSM SPF. This research was performed for the Utah Department of Transportation (UDOT), and the study area was the state of Utah. Crash data from 2005-2007 on 157 selected study segments provided a 3-year observed crash frequency used to obtain a calibration factor for the HSM SPF and to develop the new SPFs. The calibration factor for the HSM SPF for rural two-lane two-way roads in Utah is 1.16, indicating that the HSM underpredicts the number of crashes on these roads in Utah by sixteen percent. The new SPFs were developed from the same data collected for the HSM calibration, with the addition of new variables hypothesized to have a significant effect on crash frequencies. Negative binomial regression was used to develop four new SPFs, and one additional SPF was developed using hierarchical (or full) Bayesian techniques. The empirical Bayes (EB) method can be applied with each negative binomial SPF because those models include the overdispersion parameter used by the EB method. The hierarchical Bayesian technique is a newer, more mathematically intensive method that accounts for the high levels of uncertainty often present in crash modeling. Because the hierarchical Bayesian SPF produces a density function for the predicted crash frequency, comparing this density function with an observed crash frequency can help identify segments with significant safety concerns. Each SPF has its own strengths and weaknesses, including its data requirements and predictive capability. This thesis recommends that UDOT use Equation 5-11 (a new negative binomial SPF) for predicting crashes, because it predicts crashes with reasonable accuracy while requiring much less data than the other models. The hierarchical Bayesian process should be used to evaluate observed crash frequencies and identify segments that may benefit from roadway safety improvements.
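The HSM calibration step reduces to a ratio of totals, C = (sum of observed crashes) / (sum of HSM-predicted crashes) over the study segments. A minimal sketch with hypothetical segment counts (chosen here so that C happens to come out at the thesis's value of 1.16):

```python
def hsm_calibration_factor(observed, predicted):
    """Calibration factor C = total observed crashes divided by
    total predicted crashes over the study segments. The segment
    values below are hypothetical, not the thesis data."""
    return sum(observed) / sum(predicted)

# Hypothetical 3-year crash counts on a few study segments.
observed  = [4, 0, 2, 7, 3]
predicted = [3.1, 0.8, 2.4, 5.0, 2.5]

c = hsm_calibration_factor(observed, predicted)
print(round(c, 2))  # → 1.16
```

A factor above 1 means the base SPF underpredicts crashes for the local jurisdiction, which is exactly the sixteen-percent underprediction reported above.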
187

Wissensintegration von generischem und fallbasiertem Wissen, uniforme Repräsentation, Verwendung relationaler Datenbanksysteme sowie Problemlösen mit Concept Based und Case Based Reasoning sowie Bayesschen Netzen in medizinischen wissensbasierten Systemen / Knowledge integration of generic and case-based knowledge, uniform representation, use of relational database systems, and problem solving with concept-based and case-based reasoning and Bayesian networks in medical knowledge-based systems

Zimmer, Sandra 27 June 2023
A knowledge-based system is designed to support medical professionals in the diagnostic process by providing relevant information: a reliable diagnosis and the associated medical measures are to be derived from complex symptom constellations. The basis is knowledge that is adequately represented in the system and processed efficiently. In the medical domain this knowledge is very heterogeneous and often poorly structured. In this work, a methodology is developed that enables the conceptual capture and structuring of the application domain via concepts, concept hierarchies, multiaxial composition of concepts, and concept declarations. Complex concepts can thus be mapped completely, unambiguously, and with practical relevance. Furthermore, building on this representation, dialogue systems, case-based and generic problem-solving methods, and their interaction with relational databases are presented in one system. This is particularly important in the medical domain, since problem solving requires both generic knowledge (textbook knowledge) and experiential knowledge (treated cases). Both bodies of knowledge can be stored uniformly in relational databases. To process the available knowledge efficiently, a method for semantic indexing is presented and its application to knowledge representation is described. The starting point of semantic indexing is the knowledge represented by concept hierarchies. The goal is to assign keys to the nodes (concepts) that are hierarchically ordered and syntactically and semantically correct. The indexing algorithm computes the keys so that concepts are unifiable with their more specific concepts and only semantically correct concepts may be added to the knowledge base. The correctness and completeness of the indexing algorithm are proven. For knowledge processing, an integrative approach combining the problem-solving methods of concept-based and case-based reasoning is presented. Concept-based reasoning can be used for diagnosis, therapy, and medication recommendation and evaluation over generic knowledge, while case-based reasoning processes the experiential knowledge of patient cases. Furthermore, two new similarity measures (compromise sets for similarity measures and multiaxial similarity) that adequately consider the semantic context are developed for the retrieval of similar patient cases. Purely deterministic concept-based reasoning has its limits in the medical domain, so Bayesian networks are investigated for diagnostic inference under uncertainty, vagueness, and incompleteness; the valid general concepts can then be output according to their probability. To this end, various inference mechanisms are introduced and subsequently evaluated within a developed prototype, and tests are used to assess the network's classification of diagnoses. Contents: 1 Introduction; 2 Medical knowledge-based systems; 3 Medical treatment workflow and extended knowledge-based agent; 4 Methods of knowledge representation; 5 Uniform representation with concept hierarchies, concepts, and generic and case-based reasoning; 6 Semantic indexing; 7 A medical system as example application; 8 Similarity measures, compromise sets, multiaxial similarity; 9 Inference with Bayesian networks; 10 Summary and outlook; appendices on selected medical decision-support systems from the literature, realization with software tools, and causal statistical modeling and calculation of distribution functions of classification features.
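The semantic-indexing idea (hierarchically ordered keys under which a general concept subsumes its specializations) can be sketched as follows. The tree-walking scheme and the medical concepts are illustrative assumptions, not the algorithm whose correctness is proven in the thesis:

```python
def index_hierarchy(tree, prefix=(), keys=None):
    """Assign hierarchical keys to the nodes of a concept
    hierarchy so that every descendant's key extends its
    ancestor's key. `tree` maps a concept to a dict of its
    children. The concepts below are invented examples."""
    if keys is None:
        keys = {}
    for i, (concept, children) in enumerate(sorted(tree.items()), start=1):
        key = prefix + (i,)
        keys[concept] = key
        index_hierarchy(children, key, keys)
    return keys

concepts = {
    "finding": {
        "pain": {"chest pain": {}, "headache": {}},
        "fever": {},
    }
}
keys = index_hierarchy(concepts)

def subsumes(general, specific):
    """A more general concept's key is a prefix of the more
    specific concept's key, so subsumption (the 'unifiable'
    property mentioned above) becomes a cheap prefix test."""
    return keys[specific][:len(keys[general])] == keys[general]

print(keys["chest pain"], subsumes("pain", "chest pain"))  # → (1, 2, 1) True
```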
188

Frequentist-Bayesian Hybrid Tests in Semi-parametric and Non-parametric Models with Low/High-Dimensional Covariate

Xu, Yangyi 03 December 2014
This dissertation provides a frequentist-Bayesian hybrid test statistic for two testing problems: designing a test for significant differences between non-parametric functions, and designing a test that allows any departure from constancy of the predictors in a high-dimensional X. The construction of the proposed test statistics is given for both problems. For the first problem, the statistical difference among massive outcomes or signals is of interest in many diverse fields, including neurophysiology, imaging, and engineering. Such data, however, often come from nonlinear systems with row/column patterns, non-normal distributions, and other hard-to-identify internal relationships, which make testing the significance of differences difficult because of both the unknown relationships and the high dimensionality. This dissertation proposes an Adaptive Bayes Sum Test capable of testing the significance of the difference between two nonlinear systems based on universal non-parametric mathematical decomposition/smoothing components. The approach adapts the Bayes sum test statistic of Hart (2009); internal patterns are treated through the Fourier transform, and resampling techniques are applied to construct the empirical distribution of the test statistic to reduce the effect of non-normality. A simulation study suggests that the approach performs better than the alternative method, the Adaptive Neyman Test of Fan and Lin (1998). Its usefulness is demonstrated with an application to the identification of electronic chips and an application to testing changes in precipitation patterns. For the second problem, numerous statistical methods have been developed for analyzing high-dimensional data, but they focus mainly on variable selection, are of limited use for testing, and often require explicit derivatives of likelihood functions. This dissertation proposes a "Hybrid Omnibus Test" for high-dimensional testing with far fewer requirements. It is developed in a semi-parametric framework in which a likelihood function is no longer necessary: a frequentist-Bayesian hybrid score-type test for a functional generalized partial linear single-index model, whose link is a functional of the predictors through a generalized partially linear single index. An efficient score based on estimating equations circumvents the mathematical difficulty of likelihood derivation, and the Hybrid Omnibus Test is constructed from it. The approach is compared with an empirical likelihood ratio test and with Bayesian inference based on the Bayes factor in a simulation study in terms of false positive rate and true positive rate. The simulation results suggest that the approach outperforms the alternatives in false positive rate, true positive rate, and computational cost in both the high-dimensional and the low-dimensional case. The advantage of the approach is also demonstrated on published biological results with an application to a genetic pathway data set for type II diabetes. / Ph. D.
189

Erhöhung der Qualität und Verfügbarkeit von satellitengestützter Referenzsensorik durch Smoothing im Postprocessing / Increasing the quality and availability of satellite-based reference sensing through smoothing in postprocessing

Bauer, Stefan 02 February 2013
This work investigates postprocessing methods for increasing the accuracy and availability of satellite-based positioning methods that do not rely on inertial sensors. The goal is to produce, even under difficult reception conditions such as those in urban areas, a trajectory whose accuracy qualifies it as a reference for other methods. Two approaches are pursued: the use of IGS data, and smoothing that incorporates sensors from the vehicle odometry. It is shown that using IGS data can reduce the error by 50% to 70%. Furthermore, the smoothing methods demonstrated that they can always achieve decimeter-level accuracy, even under poor reception conditions.
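As a rough illustration of why postprocessing smoothing helps when later epochs are also available, the two-pass exponential smoother below is a lightweight stand-in for the fixed-interval smoothing methods investigated here; all data are synthetic:

```python
def smooth(xs, alpha=0.3):
    """Forward-backward exponential smoothing: in postprocessing,
    future epochs are available, so a backward pass can refine
    what a purely causal (real-time) filter would estimate."""
    def one_pass(seq):
        out, s = [], seq[0]
        for x in seq:
            s = alpha * x + (1 - alpha) * s
            out.append(s)
        return out
    forward = one_pass(xs)
    backward = one_pass(xs[::-1])[::-1]
    return [(f + b) / 2 for f, b in zip(forward, backward)]

# Synthetic 1-D positions of a static receiver (true position 0.0)
# with alternating measurement noise of +/-0.5 m.
raw = [0.5 if i % 2 == 0 else -0.5 for i in range(40)]
smoothed = smooth(raw)

mse_raw = sum(x * x for x in raw) / len(raw)
mse_smooth = sum(x * x for x in smoothed) / len(smoothed)
print(mse_smooth < mse_raw)  # → True: smoothing reduces the error
```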
190

Procedimentos sequenciais Bayesianos aplicados ao processo de captura-recaptura / Bayesian sequential procedures applied to the capture-recapture process

Santos, Hugo Henrique Kegler dos 30 May 2014
In this work, we study the Bayes sequential decision procedure applied to the capture-recapture process with fixed sample sizes, in order to estimate the size of a finite, closed population. We present the statistical model, review Bayesian decision theory, presenting the pure decision problem, the statistical decision problem, and the sequential decision procedure, and illustrate the theoretical methods with simulated data. (Funded by Financiadora de Estudos e Projetos.)
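For a single (non-sequential) capture-recapture experiment, the Bayesian estimation of a closed population size can be sketched with a hypergeometric likelihood and a uniform prior. This is a minimal sketch, not the sequential decision procedure itself; the counts and the grid bound are invented:

```python
from math import comb

def posterior_population(n1, n2, m, n_max=500):
    """Posterior over the closed-population size N: n1 animals
    are marked in the first sample, and a second sample of n2
    contains m recaptures. The likelihood of m given N is
    hypergeometric, and the prior is uniform on the grid
    {n1 + n2 - m, ..., n_max}."""
    n_min = n1 + n2 - m  # at least this many distinct animals were seen
    unnorm = {}
    for n in range(n_min, n_max + 1):
        like = comb(n1, m) * comb(n - n1, n2 - m) / comb(n, n2)
        unnorm[n] = like
    z = sum(unnorm.values())
    return {n: p / z for n, p in unnorm.items()}

# Invented counts: 30 marked, second sample of 25 with 10 recaptures.
post = posterior_population(n1=30, n2=25, m=10)
mean = sum(n * p for n, p in post.items())
mode = max(post, key=post.get)
print(mode, round(mean, 1))
```

The posterior concentrates near the classical Lincoln-Petersen estimate n1*n2/m (75 for these counts); a sequential procedure would update such a posterior after each capture occasion and decide whether to stop sampling.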
