  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Automated Defect Recognition in Digital Radiography

Xiao, Xinhua 19 October 2015 (has links)
No description available.
12

Pseudoatsitiktinių skaičių statistinių savybių tikrinimas / Testing statistical properties of pseudorandom numbers

Smaliukas, Robertas 23 July 2014 (has links)
This work analyzes the statistical properties of ten different pseudorandom number generators. Pseudorandom numbers are used in many fields, so it is important that they exhibit high-quality randomness. Each of the 15 tests analyzed in this work checks the hypothesis that the members of a sequence are genuinely random. It is recommended that each tested sequence contain at least 1,000,000 bits; to obtain meaningful results, 50,000,000 bits divided into ten sequences are used for each generator. A sequence passes a test when the resulting p-value is 0.01 or greater; otherwise it fails. If at least eight of the ten sequences pass a test, the generator's output is considered random with respect to that test.
The investigation showed that only five of the ten generators consistently pass all of the tests; the generators are ranked by quality according to their test results. A newly proposed pseudorandom generator consistently passes 14 of the 15 tests and its output is considered random, although some of the other generators analyzed in this work outperform it.
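The pass criterion described above can be sketched in code. Below is a minimal illustration using the frequency (monobit) test in the style of NIST SP 800-22; the generator under test (Python's Mersenne Twister here) and the 100,000-bit sequence length are stand-ins, not the thesis' setup, which recommends at least 1,000,000 bits per sequence.

```python
import math
import random

def monobit_p_value(bits):
    """Frequency (monobit) test, NIST SP 800-22 style: p-value for the
    hypothesis that ones and zeros are equally likely in the sequence."""
    n = len(bits)
    s = abs(sum(1 if b else -1 for b in bits))
    return math.erfc(s / math.sqrt(2 * n))

def generator_passes(sequences, alpha=0.01, required=8):
    """A generator passes a test when at least `required` of its sequences
    yield p-values of `alpha` (0.01) or greater, as in the thesis."""
    passing = sum(1 for seq in sequences if monobit_p_value(seq) >= alpha)
    return passing >= required

# Ten sequences from the generator under test (100,000 bits each here;
# the thesis recommends 1,000,000 bits per sequence).
rng = random.Random(0)
seqs = [[rng.getrandbits(1) for _ in range(100_000)] for _ in range(10)]
print(generator_passes(seqs))
```

The same pass/fail rule applies unchanged to the other 14 tests; only `monobit_p_value` would be swapped for the corresponding statistic.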
13

Classes Alléliques d’Haplotypes et Sélection Positive dans le Génome Humain / Haplotype Allelic Classes and Positive Selection in the Human Genome

Hussin, Julie 12 1900 (has links)
Identifying genomic regions targeted by positive natural selection deepens our understanding of our evolutionary past and helps locate functionally important genetic variants. Because the frequency of a selected allele rises in the population, selection leaves traces in DNA sequences, and these footprints are detected when the genetic variability of a region differs from that expected under selective neutrality. We introduce a new way of looking at genomic single nucleotide polymorphisms: the haplotype allelic classes (HAC), which assess overall haplotype diversity through allelic composition. The model combines segregating sites and haplotype information to reveal useful characteristics of the data, providing an identifiable signature of recent positive selection that can be detected by comparison with the background distribution. To determine whether a site is under recent positive selection, we compare the partition of the HAC distribution between the haplotypes carrying the selected allele and the remaining ones.
Coalescent simulations are used to study these distributions under standard population models assuming neutrality, demographic scenarios, and selection models. To test the performance of HAC and the derived statistic in capturing deviations from neutrality due to selection in practice, we analyzed genetic variation at the lactase persistence locus in the three HapMap populations.
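The HAC idea can be made concrete with a toy example. In the sketch below a haplotype's class is taken to be its Hamming distance to a reference (e.g. majority) haplotype; this is an illustrative reading of the approach, not necessarily the thesis' exact definition, and the sample data are invented.

```python
from collections import Counter

def hac_distribution(haplotypes, reference):
    """Distribution of haplotype allelic classes (HAC); here a haplotype's
    class is its Hamming distance to a reference haplotype (an assumption
    made for illustration)."""
    def hac(h):
        return sum(a != b for a, b in zip(h, reference))
    return Counter(hac(h) for h in haplotypes)

def split_by_focal_site(haplotypes, site):
    """Partition haplotypes by their allele (0/1) at the focal site, so the
    two HAC distributions can be compared for a signature of selection."""
    carriers = [h for h in haplotypes if h[site] == 1]
    others = [h for h in haplotypes if h[site] == 0]
    return carriers, others

# Toy sample: rows are haplotypes, columns are segregating sites.
haps = [(0, 0, 0, 0), (0, 0, 0, 1), (1, 1, 0, 1), (1, 1, 0, 1), (1, 1, 1, 1)]
reference = (0, 0, 0, 0)
carriers, others = split_by_focal_site(haps, site=0)
print(hac_distribution(carriers, reference))  # Counter({3: 2, 4: 1})
print(hac_distribution(others, reference))    # Counter({0: 1, 1: 1})
```

A marked difference between the two distributions, as between the carriers and non-carriers above, is the kind of deviation the method flags for recent positive selection.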
14

Modeling the Power Evolution of Classical Double Radio Galaxies over Cosmological Scales

Barai, Paramita 03 August 2006 (has links)
During the quasar era (redshifts between 1 and 3), radio galaxies (RGs) have been claimed to have substantially influenced the growth and evolution of large-scale structures in the universe. In this dissertation I test the robustness of these claims. Probing the impacts in more detail requires good theoretical models for such RG systems. With this motivation, I seek to develop an essentially analytical model for the evolution of Fanaroff-Riley Class II radio galaxies, both as they age individually and as their numbers vary with cosmological epoch. To do so, I first compare three sophisticated semi-analytical models for the dynamical and radio lobe power evolution of FR II galaxies: those of Kaiser, Dennett-Thorpe & Alexander (1997, KDA), Blundell, Rawlings & Willott (1999, BRW) and Manolakou & Kirk (2002, MK). I perform multi-dimensional Monte Carlo simulations leading to virtual radio surveys. The predictions of each model for redshift, radio power (at 151 MHz), linear size and spectral index are then compared with data. The observational samples are the low-frequency radio surveys 3CRR, 6CE and 7CRS, which are flux-limited and redshift complete. I next perform extensive statistical tests to compare the distributions of model radio source parameters with those of the observational samples. The statistics used are the 1-dimensional and 2-dimensional Kolmogorov-Smirnov (K-S) tests and the 4-variable Spearman partial rank correlation coefficient. I search for and describe the "best" parameters for each model. I then produce modifications to each of the three original models and extensively compare the original and modified models' performance in fitting the data.
The key result of my dissertation is that, using the Radio Luminosity Function of Willott et al. (2001) as the redshift birth function of radio sources, the KDA and MK models perform better than the BRW model in fitting the 3CRR, 6CE and 7CRS survey data under K-S based statistical tests, and the KDA model provides the best fits to the correlation coefficients. However, no pre-existing or modified model can provide adequate fits for the spectral indices. I also calculate the volume fraction of the relevant universe filled by generations of radio galaxies over the quasar era. This volume filling factor is not as large as estimated earlier; nonetheless, the allowed ranges of various model parameters produce a rather wide range of astrophysically interesting volume fraction values. I conclude that the expanding RGs born during the quasar era may still play significant roles in the cosmological history of the universe.
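The model-versus-survey comparison above rests on the two-sample K-S statistic: the largest vertical gap between the empirical CDFs of a simulated parameter and the observed one. A small self-contained version is sketched below (the dissertation also uses 2-D K-S tests and Spearman partial rank correlations, which are not sketched here); the sample values are hypothetical.

```python
def ecdf(sample, x):
    """Empirical CDF of a sample evaluated at x."""
    return sum(v <= x for v in sample) / len(sample)

def ks_2sample_statistic(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs, checked at every
    observed value."""
    points = sorted(set(xs) | set(ys))
    return max(abs(ecdf(xs, p) - ecdf(ys, p)) for p in points)

# Hypothetical comparison: simulated vs. surveyed radio powers
# (log10 of P_151, invented values for illustration).
simulated = [25.1, 25.9, 26.3, 27.0, 27.8]
observed = [25.4, 26.0, 26.5, 27.2, 28.1]
print(ks_2sample_statistic(simulated, observed))  # 0.2
```

In the virtual-survey setting, a small statistic (equivalently a large K-S p-value) across redshift, power, size, and spectral index is what qualifies a model's parameter set as a good fit.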
16

Multivariate non-parametric statistical tests to reuse classifiers in recurring concept drifting environments

Gonçalves Júnior, Paulo Mauricio 23 April 2013 (has links)
Data streams are a recent processing model in which data arrive continuously, in large quantities and at high speed, so they must be processed on-line. In addition, several private and public institutions store large amounts of data that must also be processed. Traditional batch classifiers are not well suited to handling huge amounts of data for essentially two reasons. First, they usually read the available data several times until convergence, which is impractical in this scenario. Second, they assume that the context represented by the data is stable in time, which may not be true. In fact, context change is a common situation in data streams and is named concept drift. This thesis presents RCD, a framework that offers an alternative approach to handling data streams that suffer from recurring concept drifts. It creates a new classifier for each context found and stores a sample of the data used to build it. When a new concept drift occurs, RCD compares the new context to old ones using a non-parametric multivariate statistical test to verify whether both contexts come from the same distribution. If so, the corresponding classifier is reused; if not, a new classifier is generated and stored.
Three kinds of tests were performed. The first compares the RCD framework with several adaptive algorithms (both single and ensemble approaches) on artificial and real data sets, among the most used in the concept drift research area, with abrupt and gradual drifts. We observe the ability of the classifiers to represent each context, how they handle concept drift, and the training and testing times needed to evaluate the data sets. Results indicate that RCD had statistical results similar to or better than the other classifiers; on the real-world data sets, RCD presented accuracies close to the best classifier in each data set. The second test compares two statistical tests (kNN and Cramér) in their ability to represent and identify contexts. These tests were performed using adaptive and batch classifiers as base learners of RCD, on artificial and real-world data sets, with several rates of change. Results indicate that, on average, kNN had better results than the Cramér test and was also faster; independently of the test used, RCD had higher accuracy than the respective base learners. An improvement to the RCD framework is also presented in which the statistical tests are performed in parallel through a thread pool. Tests were performed on three processors with different numbers of cores. Better results were obtained when there was a high number of detected concept drifts, the buffer used to represent each data distribution was large, and the test frequency was high; even when none of these conditions apply, parallel and sequential execution still perform very similarly. Finally, six different drift detection methods were compared on predictive accuracy, evaluation time, and drift handling, including false alarm and miss detection rates, as well as the average distance to the drift point and its standard deviation.
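The reuse-or-retrain decision at the core of this framework can be sketched as follows. The two-sample statistic below (a difference of sample means with a permutation p-value) is a deliberately simple stand-in for the kNN- or Cramér-based multivariate tests the thesis actually uses; the context names and data are invented.

```python
import random

def mean_gap_statistic(xs, ys):
    """Toy stand-in for a two-sample test statistic: the absolute
    difference of the two sample means. Illustrative only."""
    return abs(sum(xs) / len(xs) - sum(ys) / len(ys))

def permutation_p_value(xs, ys, stat, n_perm=200, seed=0):
    """Permutation p-value: how often a random relabeling of the pooled
    samples produces a statistic at least as large as the observed one."""
    rng = random.Random(seed)
    observed = stat(xs, ys)
    pooled = list(xs) + list(ys)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if stat(pooled[:len(xs)], pooled[len(xs):]) >= observed:
            hits += 1
    return hits / n_perm

def on_drift(new_sample, stored_contexts, alpha=0.05):
    """RCD-style decision: reuse the classifier of the first stored context
    that is statistically indistinguishable from the new one; otherwise
    signal that a new classifier must be trained and stored."""
    for name, old_sample in stored_contexts.items():
        if permutation_p_value(old_sample, new_sample, mean_gap_statistic) >= alpha:
            return name   # reuse this context's classifier
    return None           # train and store a new classifier

rng = random.Random(1)
stored = {"ctx0": [rng.gauss(0, 1) for _ in range(100)]}
shifted = [rng.gauss(5, 1) for _ in range(100)]  # a clearly drifted context
print(on_drift(shifted, stored))  # None: a new classifier is needed
```

Because each stored context keeps only a sample of its data, the comparison cost stays bounded, which is also what makes the thread-pool parallelization of these tests worthwhile.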
18

Statistické testy pro VaR a CVaR / Statistical tests for VaR and CVaR

Mirtes, Lukáš January 2016 (has links)
The thesis presents test statistics for Value-at-Risk and Conditional Value-at-Risk. The reader is introduced to basic nonparametric estimators and their asymptotic distributions. Tests of the accuracy of Value-at-Risk are explained, and an asymptotic test for Conditional Value-at-Risk is derived. The thesis concludes with the process of backtesting a Value-at-Risk model on real data and computing the statistical power and the probability of Type I error for the selected tests.
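As an illustration of the backtesting step, a widely used accuracy test for VaR is Kupiec's proportion-of-failures test, sketched below; this is a standard example, not necessarily the exact test derived in the thesis, and the counts used are hypothetical.

```python
import math

def kupiec_pof(n, x, p):
    """Kupiec proportion-of-failures likelihood ratio for backtesting VaR:
    n observations, x exceedances, coverage level p (e.g. p = 0.01 for a
    99% VaR).  LR is asymptotically chi-square with 1 degree of freedom.
    Assumes 0 <= x < n."""
    if x == 0:
        return -2 * n * math.log(1 - p)
    pi = x / n  # observed exceedance rate
    log_null = (n - x) * math.log(1 - p) + x * math.log(p)
    log_alt = (n - x) * math.log(1 - pi) + x * math.log(pi)
    return -2 * (log_null - log_alt)

# Hypothetical backtest: 250 trading days, 99% VaR, 3 exceedances observed.
lr = kupiec_pof(250, 3, 0.01)
print(lr < 3.841)  # True: do not reject (3.841 = chi-square(1) 5% critical value)
```

Too few exceedances also reject the model: with zero exceedances in 250 days at the 1% level, the statistic exceeds the critical value, signaling an overly conservative VaR.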
19

Etude de la pertinence des paramètres stochastiques sur des modèles de Markov cachés / Study of the relevance of stochastic parameters on hidden Markov models

Robles, Bernard 18 December 2013 (has links)
The starting point of this work is Pascal Vrignat's thesis on modeling the degradation levels of a dynamic system using Hidden Markov Models (HMM), for an application in industrial maintenance. Four levels were defined: S1 for a production stoppage and S2 to S4 for gradual degradation. Drawing on field observations collected in various companies in the region, we built a synthetic HMM-based model to simulate the different degradation levels of a real system. First, we identify the relevance of the different observations, or symbols, used in modeling an industrial process, introducing an entropy filter. Then, to improve the model, we address the questions: which sampling is most relevant, and how many symbols are needed to best evaluate the model? We next study the characteristics of several possible models of an industrial process in order to derive the best architecture, using test criteria such as Shannon entropy, the Akaike criterion, and statistical tests. Finally, we compare the results of the synthetic model with those from industrial applications and propose a readjustment of the model to bring it closer to field reality. / As part of preventive maintenance, many companies are trying to improve the decision support of their experts. This thesis aims to assist our industrial partners in improving their maintenance operations (production of pastries, an aluminum smelter, and a glass manufacturing plant). To model industrial processes, different topologies of Hidden Markov Models have been used, with a view to finding the best topology by studying the relevance of the model outputs (also called signatures).
This thesis should make it possible to select a model framework (a framework includes a topology, a learning and decoding algorithm, and a distribution) by assessing the signatures given by different synthetic models. To evaluate these signatures, the following widely used criteria have been applied: Shannon entropy, maximum likelihood, the Akaike information criterion, the Bayesian information criterion, and statistical tests.
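The selection criteria named above are easy to state in code. The sketch below shows Shannon entropy of a model signature plus the AIC and BIC penalties; the signature over the four degradation levels S1 to S4 is a hypothetical example, and the log-likelihood values would in practice come from the trained HMM.

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy (in bits) of a model's output signature, one of
    the criteria used to compare HMM topologies."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def aic(log_likelihood, k):
    """Akaike information criterion; k = number of free parameters."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian information criterion for n observations."""
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical signature over the four degradation levels S1..S4.
signature = ["S4"] * 50 + ["S3"] * 30 + ["S2"] * 15 + ["S1"] * 5
print(round(shannon_entropy(signature), 3))  # 1.648
```

Lower AIC/BIC and a signature entropy consistent with the field data are the kind of evidence used to prefer one topology over another.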
20

Hodnocení porezity u odlitků gravitačně litých z Al slitin / Evaluation of porosity in gravity - cast Al - alloy castings

Staňková, Markéta January 2008 (has links)
This diploma thesis evaluates porosity and its effect on the mechanical properties of castings made from different Al alloys. The castings were produced by gravity casting into an iron mould or into sand. The measurements (mechanical properties, porosity, DAS (dendrite arm spacing), shape factors, and sphericity) were statistically analysed, and the dependencies detected were presented in graphs.
