11

Apports de nouveaux outils de traitement d'images et de programmation pour le relevé automatique de dégradations sur chaussées / Contribution of new image processing and programming tools for automatic pavement degradation detection

Kaddah, Wissam 19 December 2018 (has links)
Le réseau routier subit des dégradations sous l’effet du trafic et des conditions climatiques. Le relevé dans les images de différents types de défauts de surface permet d’évaluer l’état du réseau et de programmer des opérations de maintenance nécessaires. Le but de cette thèse est ainsi de développer des méthodes non-supervisées dédiées à l'analyse des images 2D et 3D. Nous nous focalisons sur la détection de dégradations du marquage routier et la détection des fissures sur la chaussée. Dans le cadre de la signalisation horizontale, notre objectif est de réaliser un algorithme capable de détecter, reconnaitre, géolocaliser et quantifier l’état du marquage routier à l’aide d’un système d’imagerie panoramique. Le traitement d’images effectué utilise une méthode de segmentation couleur pour faciliter la phase d’extraction des zones de marquages routiers. Ensuite, une technique de perspective inverse est appliquée pour faciliter l’identification des objets détectés. L’état du marquage est établi à partir des variations des caractéristiques géométriques (longueur, largeur, etc.) et colorimétriques (niveau de couleur blanche) des objets identifiés dans l’image. Dans le cadre de la détection des fissures, notre aspiration consiste à extraire automatiquement les fissures en surface de chaussée, en supposant que celles-ci sont des structures fines et sombres dans l’image. Parmi les nombreuses méthodes existantes, nos approches retenues suivent un schéma classique composé de trois phases principales, à savoir une phase de pré-traitement pour réduire la quantité d’information à traiter, une phase de traitement pour extraire les points ayant une forte vraisemblance d’appartenir à une fissure et une phase de post-traitement pour estimer la gravité du matériel. Les performances de nos algorithmes sont évaluées sur des images réelles 2D et 3D issues de 3 capteurs différents (VIAPIX®, LCMS et Aigle-RN). / The road network is subject to degradation due to traffic and weather conditions. Detecting the different types of surface defects in images makes it possible to assess the condition of the network and to schedule the necessary maintenance operations. The goal of this thesis is therefore to develop unsupervised methods for the analysis of 2D and 3D pavement images acquired by imaging systems used in road engineering. We focus on the detection of road marking damage and of cracks on the pavement surface. For road markings, our objective is to design an algorithm able to detect, recognize, geolocate and quantify the condition of the markings using a panoramic imaging system. The image processing chain uses a color segmentation method to ease the extraction of road marking regions; an inverse perspective technique is then applied to simplify the identification of the detected objects. The wearing condition of the markings is established from variations in the geometric (length, width, etc.) and colorimetric (level of white) characteristics of the objects identified in the image. For crack detection, our aim is to automatically extract cracks from the pavement surface, assuming that they appear as thin, dark structures in the image.
Among the many existing methods, the approaches we retained follow a classical scheme composed of three main phases: a pre-processing phase to reduce the amount of information to be processed, a processing phase to extract the points with a high likelihood of belonging to a crack, and a post-processing phase to estimate the severity of the pavement damage. The performance of our algorithms is evaluated on real 2D and 3D images coming from three different sensors used in road engineering (VIAPIX®, LCMS and Aigle-RN).
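
As a rough illustration of the three-phase crack-extraction scheme described above (pre-processing, extraction of likely crack points, post-processing), the following Python sketch flags thin, dark structures in a grayscale pavement image; the window size, the factor k and the size filter are illustrative parameters, not values from the thesis.

```python
import numpy as np
from scipy import ndimage

def extract_crack_candidates(gray, window=31, k=2.0, min_size=50):
    """Flag pixels much darker than their neighbourhood and keep only
    connected groups large enough to look like crack segments."""
    img = gray.astype(np.float64)

    # Pre-processing: local statistics over a sliding window.
    local_mean = ndimage.uniform_filter(img, size=window)
    local_sq_mean = ndimage.uniform_filter(img ** 2, size=window)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))

    # Processing: a pixel is a crack candidate if it is clearly darker
    # than its local background (cracks are thin and dark).
    candidates = img < (local_mean - k * local_std)

    # Post-processing: drop tiny isolated blobs (likely texture noise),
    # using 8-connectivity so diagonal crack pixels stay connected.
    labels, n = ndimage.label(candidates, structure=np.ones((3, 3)))
    sizes = ndimage.sum(candidates, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))
    return keep

# Example on a synthetic image with a dark diagonal "crack".
demo = np.full((200, 200), 180.0) + np.random.normal(0, 5, (200, 200))
rr = np.arange(200)
demo[rr, rr] -= 80          # dark thin line
mask = extract_crack_candidates(demo)
print("crack pixels found:", int(mask.sum()))
```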
12

Σχεδίαση και υλοποίηση εργαλείου ανίχνευσης ρυθμών και κυμάτων σε ηλεκτροεγκεφαλογράφημα / Design and implementation of an EEG rhythm and wave detection tool

Αλεξόπουλος, Άγγελος 10 August 2011 (has links)
Ο ύπνος αποτελεί ένα από τα πιο μυστήρια φαινόμενα της ανθρώπινης ζωής. Η επεξεργασία και ανάλυση του εγκεφαλογραφήματος με τη χρήση υπολογιστικών μεθόδων και αλγορίθμων μπορεί να δώσει μεγάλη ώθηση στην διερεύνηση της εγκεφαλικής δραστηριότητας. Στην παρούσα εργασία υλοποιήθηκε ένα γραφικό εργαλείο για την ανίχνευση ρυθμών και κυμάτων που εμφανίζονται στο εγκεφαλογράφημα ύπνου. Το εργαλείο συνδέεται με το πρόγραμμα καταγραφής Neuroscan του εργαστηρίου Νευροφυσιολογίας της Ιατρικής Σχολής του Πανεπιστημίου Πατρών. Το περιβάλλον περιλαμβάνει αλγορίθμους για την αυτόματη ανάλυση του σήματος και την ανίχνευση επιλεγμένων κυμάτων και ρυθμών. Σκοπός του εργαλείου είναι η αποστολή ακουστικού ερεθισμού στην περίπτωση ανίχνευσης του επιλεγμένου κύματος ή ρυθμού. Το εργαλείο περιλαμβάνει γραφικό περιβάλλον για την εύκολη χρήση και παραμετροποίηση των διαθέσιμων επιλογών. Το πρόγραμμα αναπτύχθηκε εξ ολοκλήρου πρωτότυπα με γνώμονα την ταχύτητα ανίχνευσης και επεξεργασίας του ΗΕΓ. Τελικός στόχος του προγράμματος είναι η χρήση του σε πειράματα διερεύνησης της απαντητικότητας του εγκεφάλου σε ερεθισμούς που συμβαίνουν σε συγκεκριμένες χρονικές στιγμές μετά από την στιγμή ανίχνευσης επιλεγμένου κύματος ή ρυθμού. Με αυτό τον τρόπο μπορεί να εξερευνηθεί ο ρόλος διαφόρων καταστάσεων του εγκεφάλου (π.χ. αφυπνιστικός ή υπναγωγικός κατά τον ύπνο) χαρακτηριζόμενων από τα επιλεγόμενα ΗΕΓ κύματα και ρυθμούς. / One of the greatest mysteries of human life is the phenomenon of sleep. The use of computational methods and algorithms in the processing and analysis of the electroencephalogram can greatly boost research on brain activity. The present work describes a graphical tool that was developed and used at the Neurophysiology Unit of the University of Patras’ Medical School to support EEG studies. The program detects specific rhythms and waves in the sleep EEG online. The tool connects to the Neuroscan recording system that the laboratory uses for its sleep experiments, and it includes algorithms for automatic signal analysis and for the detection of the selected rhythms and waves. The purpose of the tool is to send an auditory stimulus whenever the selected rhythm or wave is detected. A user-friendly graphical interface exposes all the parameters needed for the experiments. The program was developed entirely from scratch, with the speed of EEG detection and processing as the main design criterion. The final goal of the program is its use in experiments probing the brain's responsiveness to stimuli delivered at precise times after the detection of a selected wave or rhythm, so that the role of different brain states (e.g. arousing or hypnagogic states during sleep), characterized by the selected EEG waves and rhythms, can be explored.
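
A minimal offline illustration of the kind of detection the tool performs: monitor a sliding window of EEG samples, flag when power in a chosen frequency band crosses a threshold, and fire a stimulus callback. The band limits, window length and threshold below are placeholder values, and trigger_stimulus stands in for whatever hardware call the real system makes.

```python
import numpy as np

FS = 250             # sampling rate in Hz (assumed)
WINDOW = FS          # one-second analysis window
BAND = (11.0, 16.0)  # e.g. a sleep-spindle band, illustrative only
THRESHOLD = 1000.0   # band-power threshold, illustrative only

def band_power(window, fs, band):
    """Mean spectral power of the window inside [band[0], band[1]] Hz."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window)))) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

def trigger_stimulus():
    # Placeholder for the auditory-stimulus output of the real tool.
    print("stimulus!")

def run_detector(samples):
    """Slide a one-second window over the signal and trigger on detection."""
    for start in range(0, len(samples) - WINDOW, WINDOW // 4):
        win = samples[start:start + WINDOW]
        if band_power(win, FS, BAND) > THRESHOLD:
            trigger_stimulus()

# Synthetic demo: background noise with a burst of 13 Hz activity at 5 s.
t = np.arange(0, 10, 1.0 / FS)
eeg = np.random.normal(0, 1, t.size)
eeg[5 * FS:6 * FS] += 5 * np.sin(2 * np.pi * 13 * t[5 * FS:6 * FS])
run_detector(eeg)
```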
13

Reconhecimento automático de defeitos de fabricação em painéis TFT-LCD através de inspeção de imagem / Automatic recognition of manufacturing defects in TFT-LCD panels through image inspection

SILVA, Antonio Carlos de Castro da 15 January 2016 (has links)
A detecção prematura de defeitos nos componentes de linhas de montagem de fabricação é determinante para a obtenção de produtos finais de boa qualidade. Partindo desse pressuposto, o presente trabalho apresenta uma plataforma desenvolvida para detecção automática dos defeitos de fabricação em painéis TFT-LCD (Thin Film Transistor-Liquid Crystal Displays) através da realização de inspeção de imagem. A plataforma desenvolvida é baseada em câmeras, sendo o painel inspecionado posicionado em uma câmara fechada para não sofrer interferência da luminosidade do ambiente. As etapas da inspeção consistem em aquisição das imagens pelas câmeras, definição da região de interesse (detecção do quadro), extração das características, análise das imagens, classificação dos defeitos e tomada de decisão de aprovação ou rejeição do painel. A extração das características das imagens é realizada tomando tanto o padrão RGB como imagens em escala de cinza. Para cada componente RGB a intensidade de pixels é analisada e a variância é calculada; se um painel apresentar variação de 5% em relação aos valores de referência, o painel é rejeitado. A classificação é realizada por meio do algoritmo de Naive Bayes. Os resultados obtidos mostram um índice de 94,23% de acurácia na detecção dos defeitos. Está sendo estudada a incorporação da plataforma aqui descrita à linha de produção em massa da Samsung em Manaus. / The early detection of defects in the parts used in manufacturing assembly lines is crucial for assuring the quality of the final product. This work therefore presents a platform developed for automatically detecting manufacturing defects in TFT-LCD (Thin Film Transistor-Liquid Crystal Display) panels by image inspection. The platform is camera-based, and the panel under inspection is placed in a closed chamber so that it is not affected by ambient light. The inspection steps comprise image acquisition by the cameras, definition of the region of interest (frame detection), feature extraction, image analysis, classification of defects, and the decision to approve or reject the panel. Features are extracted from both the RGB channels and grayscale versions of the acquired images. For each RGB component the pixel intensities are analyzed and their variance is computed; a panel is rejected if the measured values deviate by 5% from the reference values. Classification is performed with the Naive Bayes algorithm. The results obtained show an accuracy of 94.23% in defect detection. Samsung (Manaus) is considering incorporating the platform described here into its mass production line.
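
The acceptance test described above can be pictured as a per-channel variance comparison against reference values. The sketch below is a guess at how such a check might look in Python; the 5% tolerance comes from the abstract, while the reference values and function names are invented for illustration.

```python
import numpy as np

def channel_variances(image_rgb):
    """Variance of pixel intensities for each of the R, G, B channels."""
    return np.array([image_rgb[..., c].var() for c in range(3)])

def passes_inspection(image_rgb, reference_variances, tolerance=0.05):
    """Reject the panel if any channel variance deviates from its
    reference value by more than the given relative tolerance (5%)."""
    measured = channel_variances(image_rgb)
    deviation = np.abs(measured - reference_variances) / reference_variances
    return bool(np.all(deviation <= tolerance))

# Illustrative use with a synthetic "good" panel capture.
reference = np.array([120.0, 118.0, 125.0])       # hypothetical reference variances
panel = np.random.normal(128, 11, (480, 640, 3))  # synthetic image
print("approve panel:", passes_inspection(panel, reference))
```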
14

Detecção automática de massas em mamografias digitais usando Quality Threshold clustering e MVS / Automatic mass detection on digital mammography using Quality Threshold clustering and MVS

SILVA, Joberth de Nazaré 20 February 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Breast cancer is the most common form of cancer affecting women worldwide, striking, at some point in their lives, roughly one in every nine to one in every thirteen women who reach the age of ninety in the Western world (LAURENCE, 2006). Breast cancer arises from the frequent reproduction of cells in various parts of the human body: at certain times, and for reasons still unknown, some cells begin to reproduce at a higher speed, causing the onset of cellular masses called neoplasias, or tumors, which are newly formed tissue of pathological origin. This work proposes a method for the automatic detection of masses in digital mammograms using Quality Threshold (QT) clustering and the Support Vector Machine (MVS, from the Portuguese Máquina de Vetores de Suporte). The image processing steps are as follows: first, a pre-processing phase removes the image background, smooths the image with a low-pass filter, increases the contrast and then enhances it through the Wavelet Transform (WT) by modifying its coefficients with a linear function. After pre-processing comes segmentation with QT, which divides the image into clusters of pre-defined diameter. Post-processing then selects the best mass candidates through MVS analysis of shape descriptors. For the texture feature extraction phase, Haralick descriptors and the correlogram function were used. In the classification stage, the MVS was used again for training, validation of the MVS model and the final test. The results achieved were: sensitivity of 92.31%, specificity of 82.2%, accuracy of 83.53%, a false-positive rate of 1.12 per image and an area under the FROC curve of 0.8033. / O câncer de mama é, mundialmente, a forma mais comum de câncer em mulheres afetando, em algum momento de suas vidas, aproximadamente uma em cada nove a uma em cada treze mulheres que atingem os noventa anos no mundo ocidental (LAURANCE, 2006). O câncer de mama é ocasionado pela reprodução frequente de células de diversas partes do corpo humano. Em certos momentos e por motivos ainda desconhecidos algumas células começam a se reproduzir com uma velocidade maior, ocasionando o surgimento de massas celulares denominadas de neoplasias ou tumores, que são tecidos de formação nova, mas de origem patológica. Neste trabalho foi proposto um método de detecção automática de massas em mamografias digitais usando o Quality Threshold (QT) e a Máquina de Vetores de Suporte (MVS). As etapas de processamento das imagens foram as seguintes: primeiramente veio a fase de pré-processamento, que consiste em retirar o fundo da imagem, suavizá-la com um filtro passa-baixa, aumentar a escala de contraste e, na sequência, realizar um realce com a Transformada de Wavelet (WT) através da alteração dos seus coeficientes com uma função linear. Após a fase de pré-processamento vem a segmentação utilizando o QT, que segmenta a imagem em clusters com diâmetros pré-definidos. Em seguida, vem o pós-processamento com a seleção dos melhores candidatos à massa feita através da análise dos descritores de forma pela MVS. Para a fase de extração de características de textura foram utilizados os descritores de Haralick e a função correlograma. Já na fase de classificação a MVS novamente foi utilizada para o treinamento, validação do modelo MVS e teste final. Os resultados alcançados foram: sensibilidade de 92,31%, especificidade de 82,2%, acurácia de 83,53%, uma taxa de falsos positivos por imagem de 1,12 e uma área sob a curva FROC de 0,8033.
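
Quality Threshold clustering, as used above to split the image into clusters of bounded diameter, can be sketched roughly as follows: grow a candidate cluster around every point while the diameter stays under the threshold, keep the largest candidate, remove its points and repeat. This is a generic QT sketch over 2-D points, not the thesis implementation; the diameter value and the greedy growth rule are illustrative simplifications.

```python
import numpy as np

def qt_clustering(points, max_diameter):
    """Quality Threshold clustering: repeatedly extract the largest cluster
    whose diameter does not exceed max_diameter."""
    remaining = list(range(len(points)))
    clusters = []
    while remaining:
        best = None
        for seed in remaining:
            # Greedily grow a candidate cluster around this seed.
            candidate = [seed]
            for j in remaining:
                if j == seed:
                    continue
                trial = candidate + [j]
                pts = points[trial]
                diameter = max(
                    np.linalg.norm(a - b) for a in pts for b in pts
                )
                if diameter <= max_diameter:
                    candidate = trial
            if best is None or len(candidate) > len(best):
                best = candidate
        clusters.append(best)
        remaining = [i for i in remaining if i not in best]
    return clusters

# Tiny demo: two blobs of points, clustered with a diameter limit of 2.0.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(5, 0.3, (10, 2))])
print([len(c) for c in qt_clustering(pts, max_diameter=2.0)])
```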
15

Two complementary approaches to detecting vulnerabilities in C programs / Deux approches complémentaires pour la détection de vulnérabilités dans les programmes C

Jimenez, Willy 04 October 2013 (has links)
De manière générale, en informatique, les vulnérabilités logicielles sont définies comme des cas particuliers de fonctionnements non attendus du système menant à la dégradation des propriétés de sécurité ou à la violation de la politique de sécurité. Ces vulnérabilités peuvent être exploitées par des utilisateurs malveillants comme brèches de sécurité. Comme la documentation sur les vulnérabilités n'est pas toujours disponible pour les développeurs et que les outils qu'ils utilisent ne leur permettent pas de les détecter et les éviter, l'industrie du logiciel continue à être paralysée par des failles de sécurité. Nos travaux de recherche s'inscrivent dans le cadre du projet Européen SHIELDS et portent sur les techniques de modélisation et de détection formelles de vulnérabilités. Dans ce domaine, les approches existantes sont peu nombreuses et ne se basent pas toujours sur une modélisation formelle précise des vulnérabilités qu'elles traitent. De plus, les outils de détection sous-jacents produisent un nombre conséquent de faux positifs/négatifs. Notons également qu'il est assez difficile pour un développeur de savoir quelles vulnérabilités sont détectées par chaque outil vu que ces derniers sont très peu documentés. En résumé, les contributions réalisées dans le cadre de cette thèse sont les suivantes: Définition d'un formalisme tabulaire de description de vulnérabilités appelé template. Définition d'un langage formel, appelé Condition de Détection de Vulnérabilité (VDC). Une approche de génération de VDCs à partir des templates. Définition d'une approche de détection de vulnérabilités combinant le model checking et l'injection de fautes. Évaluation des deux approches / In general, software vulnerabilities are defined as special cases in which an unexpected behavior of the system leads to the degradation of security properties or the violation of the security policy. These vulnerabilities can be exploited by malicious users or systems, impacting the security and/or the operation of the attacked system. Since documentation on vulnerabilities is not always available to developers, and since the tools they use do not allow them to detect and avoid such flaws, the software industry continues to be affected by security breaches. The detection of vulnerabilities in software has therefore become a major concern and research area. Our research was carried out within the scope of the SHIELDS European project and focuses on modeling techniques and the formal detection of vulnerabilities. In this area, existing approaches are few and do not always rely on a precise formal model of the vulnerabilities they target; in addition, the underlying detection tools produce a significant number of false positives/negatives. It is also quite difficult for a developer to know which vulnerabilities are detected by each tool, because these tools are poorly documented. In summary, the contributions of this thesis are: the definition of a tabular formalism for describing vulnerabilities, called a template; the definition of a formal language, called Vulnerability Detection Condition (VDC), which can accurately model the occurrence of a vulnerability, together with an approach for generating VDCs from templates; the definition of a second detection approach that combines model checking and fault injection; and the evaluation of both approaches.
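
To make the general idea of a detection condition concrete, here is a toy Python illustration, not the VDC formalism defined in the thesis: a "condition" pairs a dangerous action with a predicate that must not hold when the action occurs, and it is checked against a recorded trace of program events. All names, fields and the trace format are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# One recorded program event: an action name plus its observed context.
Event = Dict[str, object]

@dataclass
class DetectionCondition:
    """Toy stand-in for a vulnerability detection condition: an action is
    reported when it happens under a forbidden context."""
    name: str
    action: str
    forbidden: Callable[[Event], bool]

def check_trace(trace: List[Event], conditions: List[DetectionCondition]):
    """Report every event that matches a condition's action and context."""
    hits = []
    for event in trace:
        for cond in conditions:
            if event["action"] == cond.action and cond.forbidden(event):
                hits.append((cond.name, event))
    return hits

# Hypothetical condition: copying into a buffer without a prior bounds check.
unchecked_copy = DetectionCondition(
    name="unchecked buffer copy",
    action="strcpy",
    forbidden=lambda e: not e.get("bounds_checked", False),
)

trace = [
    {"action": "malloc", "size": 16},
    {"action": "strcpy", "dest": "buf", "bounds_checked": False},
]
print(check_trace(trace, [unchecked_copy]))
```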
16

Κατασκευή συσκευής αυτόματης ανίχνευσης βήχα με μικροελεγκτή τεχνολογίας 32 bit / Construction of an automatic cough detection device based on a 32-bit microcontroller

Τσουραπούλη, Γραμματούλα 07 June 2013 (has links)
Ο βήχας είναι ένα κοινό σύμπτωμα σε πολλές ασθένειες του αναπνευστικού συστήματος. Αν και λειτουργεί ως προστατευτικός μηχανισμός απομάκρυνσης εκκρίσεων από την αναπνευστική οδό, η αυξημένη συχνότητα και έντασή του μπορεί να έχουν επίδραση στην ποιότητα ζωής του ασθενούς. Είναι το βασικότερο σύμπτωμα για το οποίο κάποιος επισκέπτεται τον γιατρό. Η σωστή εκτίμησή του είναι απαραίτητη τόσο για τον προσδιορισμό της αποτελεσματικότητας της θεραπείας αλλά και για την δοκιμή νέων θεραπειών και τη μελέτη των μηχανισμών του. Μέχρι στιγμής η διάγνωσή του βασίζεται σε υποκειμενικές καταγραφές, απλώς ζητώντας από τον ασθενή την εκτίμησή του για την ένταση, τη διάρκεια και τη συχνότητά του. Ένα σύστημα αυτόματης ανίχνευσης του σήματος του βήχα θα επέτρεπε την επικύρωση της παρουσίας και της συχνότητας του βήχα καθώς και την αποτελεσματικότητα της αγωγής. Τα συστήματα καταγραφής του βήχα δεν είναι καινούρια διαδικασία. Η πρώτη καταγραφή έγινε τη δεκαετία του '60 σε νοσηλευόμενους ασθενείς με τη χρήση μαγνητοφώνων και με χειροκίνητη καταγραφή των γεγονότων του βήχα. Στη συνέχεια με την εξέλιξη της τεχνολογίας κατασκευάστηκαν φορητές συσκευές που βασίστηκαν στην ταυτόχρονη καταγραφή ήχου και ηλεκτρομυογραφήματος (EMG σήματα, ανίχνευση κίνησης του θώρακα) για να ανιχνευθούν τα γεγονότα του βήχα όπου ακόμα τα σήματα έπρεπε να καταγραφούν και να μετρηθούν χειροκίνητα. Με τις παραπέρα ανακαλύψεις στις τεχνικές ψηφιακής καταγραφής, συμπίεσης και αποθήκευσης η διαδικασία αναγνώρισης και καταγραφής του βήχα μπορεί να αυτοματοποιηθεί με τη χρήση κατάλληλων αλγορίθμων. Στην εργασία αυτή κατασκευάζεται ένα ενσωματωμένο σύστημα καταγραφής, αποθήκευσης και επεξεργασίας του σήματος του βήχα. Για την επεξεργασία του αναπτύσσεται μια βασική μέθοδος βασισμένη την ενέργειά του. Στο πρώτο κεφάλαιο, γίνεται αναφορά στα χαρακτηριστικά του ηχητικού σήματος και παρατίθενται τα βασικά στάδια της ανάλυσής του. Στο δεύτερο, δίνεται ο ορισμός του βήχα, οι αιτίες που τον προκαλούν και η φυσιολογία του. Στη συνέχεια, παρουσιάζονται μέθοδοι για την ανίχνευσή του. Στο τρίτο κεφάλαιο, αναφέρονται οι βασικές έννοιες των μικροελεγκτών και παρατίθενται τα βασικά χαρακτηριστικά του μικροεπεξεργαστή ARM7TDMI, του μικροελεγκτή ADuC 7026 και της αναπτυξιακής πλατφόρμας μVision της Keil που χρησιμοποιήσαμε για την ανάπτυξη της εφαρμογής μας. Στο τελευταίο κεφάλαιο, παρουσιάζονται κάποιες λειτουργίες προγραμματισμού και δυνατότητες του μικροελεγκτή που χρησιμοποιούνται στην παρούσα εργασία. Στη συνέχεια αναπτύσσεται η εφαρμογή για την ανίχνευση του σήματος του βήχα. Στα παραρτήματα, παρατίθενται παραδείγματα για τον βασικό προγραμματισμό του ADuC 7026 και των περιφερειακών του. / Cough is a common symptom in many diseases of the respiratory system. Although it acts as a protective mechanism that clears secretions from the respiratory tract, its increased frequency and intensity may affect the patient's quality of life; it is also the main symptom for which people visit a doctor. Its accurate assessment is necessary both for determining the effectiveness of a treatment and for testing new treatments and studying the mechanisms of cough. So far, its diagnosis has relied on subjective reports, simply asking the patient to estimate its intensity, duration and frequency. An automatic cough-signal detection system would make it possible to confirm the presence and frequency of cough and to evaluate the effectiveness of the treatment. Recording cough is not a new idea: the first recordings were made in the 1960s on hospitalized patients using tape recorders, with cough events logged manually. As technology evolved, portable devices were built that combined simultaneous recording of sound and electromyography (EMG signals, detection of chest movement) to detect cough events, although the signals still had to be recorded and counted manually. With further advances in digital recording, compression and storage, the recognition and logging of cough can be automated using suitable algorithms. In this work, an embedded system is built to record, store and process the cough signal, and a basic detection method based on the signal's energy is developed. The first chapter reviews the characteristics of the sound signal and the key stages of its analysis. The second chapter defines cough, its causes and its physiology, and then presents methods for its detection. The third chapter introduces the basic concepts of microcontrollers and the main characteristics of the ARM7TDMI microprocessor, the ADuC 7026 microcontroller and the Keil μVision development platform used for developing the application. The final chapter presents the programming functions and microcontroller features used in this work and then develops the application for detecting the cough signal. The appendices give examples of basic programming of the ADuC 7026 and its peripherals.
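
A rough offline Python analogue of the energy-based detection mentioned above (the real system runs on an ADuC 7026 / ARM7TDMI): split the audio into short frames, compute each frame's energy and flag frames whose energy exceeds a threshold derived from the background level. Frame length and threshold factor are illustrative choices, not values from the thesis.

```python
import numpy as np

def detect_cough_frames(audio, fs, frame_ms=50, k=8.0):
    """Return indices of frames whose short-time energy exceeds k times
    the median frame energy (a crude stand-in for a background estimate)."""
    frame_len = int(fs * frame_ms / 1000)
    n_frames = len(audio) // frame_len
    frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames.astype(np.float64) ** 2).sum(axis=1)
    threshold = k * np.median(energy)
    return np.flatnonzero(energy > threshold)

# Synthetic demo: quiet background with one loud burst ("cough") at 2 s.
fs = 8000
t = np.arange(0, 4, 1.0 / fs)
audio = np.random.normal(0, 0.01, t.size)
burst = slice(2 * fs, 2 * fs + fs // 4)
audio[burst] += np.random.normal(0, 0.5, fs // 4)
print("frames flagged:", detect_cough_frames(audio, fs))
```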
17

Detecção de Estilos de Aprendizagem em Ambientes Virtuais de Aprendizagem utilizando Redes Bayesianas / Detection of Learning Styles in Virtual Learning Environments using Bayesian Networks

Salazar, Luiz Filipe Carreiro 07 November 2017 (has links)
Área de concentração: Educação e Tecnologias aplicadas em Instituições Educacionais. / O avanço da tecnologia possibilitou o surgimento de ferramentas para o acesso a conhecimento e experiências individuais e coletivas. As Tecnologias da Informação e Comunicação e a internet criaram o conceito chamado Ciberespaço, um local virtual onde o somatório de todas as experiências, saberes e culturas de todos os povos forma a Inteligência Coletiva. Tal fenômeno contribuiu para o desenvolvimento da Educação à Distância e os Sistemas Inteligentes para Educação. Um dos maiores problemas em EaD é a ausência de adaptatividade do ensino ao Estilo de Aprendizagem dos estudantes, que consiste nas preferências que cada aluno tem em receber um determinado conteúdo. Dessa forma, o trabalho aborda uma técnica de Redes Bayesianas para detectar automaticamente os Estilos de Aprendizagem dos estudantes para proporcionar uma oferta de material de ensino adaptado às preferências de aprendizagem nos Ambientes Virtuais de Aprendizagem. O trabalho se baseia em conceitos e técnicas de Inteligência Artificial e Aprendizado de Máquina para compor um modelo computacional e probabilístico de uma Rede Bayesiana para inferir e detectar qual a melhor combinação de Estilos de Aprendizagem. Para estruturar os métodos de detecção dos Estilos de Aprendizagem, a pesquisa utiliza o Modelo de Estilo de Aprendizagem Felder-Silverman. Para representar o comportamento do estudante no Ambiente Virtual de Aprendizagem, o trabalho utiliza um sistema para simular o desempenho do estudante em um Sistema de Tutoria Inteligente. Os métodos utilizados resultam na construção de um algoritmo de detecção automática de Estilos de Aprendizagem em Ambientes Virtuais de Aprendizagem. Os resultados do algoritmo de Rede Bayesiana foram comparados aos resultados de outro algoritmo de detecção de Estilos de Aprendizagem na literatura. Nos testes, o algoritmo de Rede Bayesiana se mostrou mais eficiente comparado ao da literatura, diminuindo consideravelmente o número de iterações do sistema que no final converge ao Estilo de Aprendizagem do estudante, diminuindo o tempo de execução e aumentando a precisão dos resultados. O trabalho abre discussão quanto à robustez, eficiência e precisão da aplicação de Redes Bayesianas para detecção de Estilos de Aprendizagem. / Dissertação (Mestrado Profissional) - Programa de Pós-Graduação em Educação, Universidade Federal dos Vales do Jequitinhonha e Mucuri, 2017. / The advancement of technology has enabled the emergence of tools for accessing knowledge and individual and collective experiences.
Information and Communication Technologies and the Internet have created the concept of Cyberspace, a virtual place where the sum of all the experiences, knowledge and cultures of all peoples forms the Collective Intelligence. This phenomenon has contributed to the development of Distance Education and of Intelligent Systems for Education. One of the major problems in distance education (EaD) is the lack of adaptivity of teaching to the students' Learning Style, that is, the preferences each student has for receiving a given content. This work therefore applies a Bayesian Network technique to automatically detect students' Learning Styles, so that teaching material adapted to their learning preferences can be offered in Virtual Learning Environments. The work draws on concepts and techniques from Artificial Intelligence and Machine Learning to build a computational, probabilistic Bayesian Network model that infers the most likely combination of Learning Styles. The detection methods are structured around the Felder-Silverman Learning Style Model. To represent student behavior in the Virtual Learning Environment, the work uses a system that simulates student performance in an Intelligent Tutoring System. These methods result in an algorithm for the automatic detection of Learning Styles in Virtual Learning Environments. The results of the Bayesian Network algorithm were compared with those of another Learning Style detection algorithm from the literature. In the tests, the Bayesian Network algorithm proved more efficient than the one from the literature, considerably reducing the number of system iterations needed to converge to the student's Learning Style, reducing execution time and increasing the accuracy of the results. The work opens a discussion on the robustness, efficiency and accuracy of applying Bayesian Networks to the detection of Learning Styles.
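
As an illustration of how a probabilistic model can converge to a learner's style from observed behavior, the sketch below performs a simple Bayes update over one Felder-Silverman dimension (visual vs. verbal). The likelihood table and event names are invented for the example and are not taken from the thesis; a full Bayesian network would couple several dimensions and many more evidence variables.

```python
import numpy as np

STYLES = ["visual", "verbal"]

# Hypothetical likelihoods P(observed action | style).
LIKELIHOOD = {
    "opened_video":      {"visual": 0.7, "verbal": 0.3},
    "opened_text_notes": {"visual": 0.3, "verbal": 0.7},
    "used_forum":        {"visual": 0.4, "verbal": 0.6},
}

def update_posterior(prior, action):
    """One Bayes step: posterior(style) is proportional to
    P(action | style) * prior(style), then renormalized."""
    unnormalized = np.array(
        [LIKELIHOOD[action][s] * prior[i] for i, s in enumerate(STYLES)]
    )
    return unnormalized / unnormalized.sum()

# Start from a uniform prior and feed a stream of logged LMS actions.
posterior = np.array([0.5, 0.5])
for action in ["opened_video", "opened_video", "used_forum", "opened_video"]:
    posterior = update_posterior(posterior, action)
    print(action, dict(zip(STYLES, posterior.round(3))))
```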
18

Decomposição de potenciais evocados auditivos do tronco encefálico por meio de classificador probabilístico adaptativo / Decomposition of auditory brainstem evoked potentials using an adaptive probabilistic classifier

Naves, Kheline Fernandes Peres 18 January 2013 (has links)
Auditory Brainstem Response (ABR) signals result from the combination of neural responses to sound stimuli, recorded over the cortex and characterized by peaks and valleys that are named with Roman numerals (I, II, III, IV, V, VI and VII). The classical identification of these peaks is a manual process based on visual inspection of the signal generated by the sum of its components, in which the morphological characteristics of the signal and the relevant temporal features of the Jewett waves are identified. In this visual process, however, difficulties arise in recognizing the patterns present, which may vary with the site, the individual, the equipment and the settings of the selected protocol, making ABR analysis subject to many variables and a constant source of doubt about reliability and inter-examiner agreement. To improve the evaluation of ABR, this work develops a system for the automatic detection of these peaks, with a learning capability that takes into account the marking profile of the examiners. Peak detection uses the Continuous Wavelet Transform, combined with a probabilistic classifier based on histograms built from markings made by the professionals. When evaluated against manual marking, the system achieved an accuracy ranging from 74.3% to 99.7%, depending on the wave analyzed. The proposed technique therefore proves accurate, particularly in the presence of the noise characteristic of biological signals and especially for the ABR, which is a low-amplitude signal. / Os PEATE são sinais resultantes da combinação de respostas de atividades neurais a estímulos sonoros, detectados sobre o córtex, que se caracterizam por vales e picos, sendo nomeados por algarismos romanos (I, II, III, IV, V, VI e VII). O processo clássico de identificação desses picos é baseado na visualização do sinal gerado pela somatória de cada uma de suas componentes. Nele são identificadas as características morfológicas do sinal e os aspectos temporais relevantes constituídos pelas ondas de Jewett. No entanto, neste processo de identificação visual surgem dificuldades que tornam a análise visual dos PEATE uma fonte constante de dúvidas em relação à fidedignidade e concordância entre os examinadores. Com o objetivo de melhorar o processo de avaliação dos PEATE, foi desenvolvido um sistema de detecção automática para os picos, com capacidade de aprendizado que leva em consideração o perfil de marcação realizado por examinadores. Para a detecção de picos foi utilizada a Transformada Wavelet Contínua; associado à mesma, foi desenvolvido um classificador probabilístico baseado nos histogramas gerados a partir de marcações realizadas pelos profissionais. Na avaliação do sistema proposto, com base na taxa de acerto entre o sistema e a marcação manual, o mesmo apresentou uma acurácia variando de 74,3% a 99,7%, dependendo do tipo de onda analisada. Assim a técnica proposta se revela precisa, principalmente na presença de ruído característico de sinais biológicos, especialmente no PEATE, que é um sinal de amplitude baixa. / Doutor em Ciências
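
A small illustration of wavelet-based peak picking on a noisy evoked-potential-like trace, using SciPy's continuous-wavelet peak finder; the latencies, sampling rate and width range are invented for the demo, and the histogram-based classification step of the thesis is not reproduced here.

```python
import numpy as np
from scipy.signal import find_peaks_cwt

fs = 20000                           # assumed sampling rate, Hz
t = np.arange(0, 0.010, 1.0 / fs)    # 10 ms window, typical ABR time scale

# Synthetic trace: three Gaussian bumps ("waves") plus noise.
def bump(center_ms, width_ms, amp):
    return amp * np.exp(-0.5 * ((t * 1000 - center_ms) / width_ms) ** 2)

signal = bump(1.7, 0.2, 1.0) + bump(3.8, 0.25, 0.8) + bump(5.6, 0.3, 1.2)
signal += np.random.normal(0, 0.05, t.size)

# Continuous-wavelet-based peak detection over a range of scales.
widths = np.arange(2, 30)
peak_idx = find_peaks_cwt(signal, widths)
print("candidate peak latencies (ms):", np.round(t[peak_idx] * 1000, 2))
```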
19

Automatic Detection and Classification of Permanent and Non-Permanent Skin Marks / Automatisk detektering och klassificering av permanenta och icke permanenta hudmärken

Moulis, Armand January 2017 (has links)
When forensic examiners try to identify the perpetrator of a felony, they use individual facial marks when comparing the suspect with the perpetrator. Facial marks are often used for identification and they are nowadays found manually. To speed up this process, it is desired to detect interesting facial marks automatically. This master thesis describes a method to automatically detect and separate permanent and non-permanent marks. It uses a fast radial symmetry algorithm as a core element in the mark detector. After candidate skin mark extraction, the false detections are removed depending on their size, shape and number of hair pixels. The classification of the skin marks is done with a support vector machine and the different features are examined. The results show that the facial mark detector has a good recall while the precision is poor. The elimination methods of false detection were analysed as well as the different features for the classifier. One can conclude that the color of facial marks is more relevant than the structure when classifying them into permanent and non-permanent marks. / När forensiker försöker identifiera förövaren till ett brott använder de individuella ansiktsmärken när de jämför den misstänkta med förövaren. Dessa ansiktsmärken identifieras och lokaliseras oftast manuellt idag. För att effektivisera denna process, är det önskvärt att detektera ansiktsmärken automatiskt. I rapporten beskrivs en framtagen metod som möjliggör automatiskt detektion och separation av permanenta och icke-permanenta ansiktsmärken. Metoden som är framtagen använder en snabb radial symmetri algoritm som en huvuddel i detektorn. När kandidater av ansiktsmärken har tagits, elimineras alla falska detektioner utifrån deras storlek, form och hårinnehåll. Utifrån studiens resultat visar sig detektorn ha en god känslighet men dålig precision. Eliminationsmetoderna av falska detektioner analyserades och olika attribut användes till klassificeraren. I rapporten kan det fastställas att färgskiftningar på ansiktsmärkena har en större inverkan än formen när det gäller att sortera dem i permanenta och icke-permanenta märken.
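
The finding above, that color matters more than structure for separating permanent from non-permanent marks, can be illustrated with a small SVM trained on color features. The feature choice (mean RGB of a mark patch) and the synthetic data are purely illustrative and are not the features or data used in the thesis.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic mean-RGB features: "permanent" marks drawn darker/browner,
# "non-permanent" marks drawn redder. Labels: 1 = permanent.
permanent = rng.normal([90, 60, 50], 12, (200, 3))
temporary = rng.normal([170, 90, 90], 12, (200, 3))
X = np.vstack([permanent, temporary])
y = np.array([1] * 200 + [0] * 200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```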
20

Towards an automated framework for coronary lesions detection and quantification in cardiac CT angiography / Vers un système automatisé pour la détection et la quantification des lésions coronaires dans des angiographies CT cardiaques

Melki, Imen 22 June 2015 (has links)
Les maladies coronariennes constituent l'ensemble des troubles affectant les artères coronaires. Elles sont la première cause mondiale de mortalité. Par conséquent, la détection précoce de ces maladies en utilisant des techniques peu invasives fournit un meilleur résultat thérapeutique, et permet de réduire les coûts et les risques liés à une approche interventionniste. Des études récentes ont montré que la tomodensitométrie peut être utilisée comme une alternative non invasive et fiable pour localiser et quantifier ces lésions. Cependant, l'analyse de ces examens, basée sur l'inspection des sections du vaisseau, reste une tâche longue et fastidieuse. Une haute précision est nécessaire, et donc seulement les cliniciens hautement expérimentés sont en mesure d'analyser et d'interpréter de telles données pour établir un diagnostic. Les outils informatiques sont essentiels pour réduire les temps de traitement et assurer la qualité du diagnostic. L'objectif de cette thèse est de fournir des outils automatisés de traitement d'angiographie CT, pour la visualisation et l'analyse des artères coronaires d'une manière non invasive. Ces outils permettent aux pathologistes de diagnostiquer et évaluer efficacement les risques associés aux maladies cardio-vasculaires tout en améliorant la qualité de l'évaluation d'un niveau purement qualitatif à un niveau quantitatif. Le premier objectif de ce travail est de concevoir, analyser et valider un ensemble d'algorithmes automatisés utiles pour la détection et la quantification de sténoses des artères coronaires. Nous proposons un nombre de techniques couvrant les différentes étapes de la chaîne de traitement vers une analyse entièrement automatisée des artères coronaires. Premièrement, nous présentons un algorithme dédié à l'extraction du cœur. L'approche extrait le cœur comme un seul objet, qui peut être utilisé comme un masque d'entrée pour l'extraction automatisée des coronaires. Ce travail élimine l'étape longue et fastidieuse de la segmentation manuelle du cœur et offre rapidement une vue claire des coronaires. Cette approche utilise un modèle géométrique du cœur ajusté aux données de l'image. La validation de l'approche sur un ensemble de 133 examens montre l'efficacité et la précision de cette approche. Deuxièmement, nous nous sommes intéressés au problème de la segmentation des coronaires. Dans ce contexte, nous avons conçu une nouvelle approche pour l'extraction de ces vaisseaux, qui combine ouvertures par chemin robustes et filtrage sur l'arbre des composantes connexes. L'approche a montré des résultats prometteurs sur un ensemble de 11 examens CT. Pour une détection et quantification robuste de la sténose, une segmentation précise de la lumière du vaisseau est cruciale. Par conséquent, nous avons consacré une partie de notre travail à l'amélioration de l'étape de segmentation de la lumière, basée sur des statistiques propres au vaisseau. La validation avec l'outil d'évaluation en ligne du challenge de Rotterdam sur la segmentation des coronaires, a montré que cette approche présente les mêmes performances que les techniques de l'état de l'art. Enfin, le cœur de cette thèse est consacré à la problématique de la détection et la quantification des sténoses. Deux approches sont conçues et évaluées en utilisant l'outil d'évaluation en ligne de l'équipe de Rotterdam. La première approche se base sur l'utilisation de la segmentation de la lumière avec des caractéristiques géométriques et d'intensité pour extraire les sténoses coronaires. 
La seconde utilise une approche basée sur l'apprentissage. Durant cette thèse, un prototype pour l'analyse automatisée des artères coronaires et la détection et quantification des sténoses a été développé. L'évaluation qualitative et quantitative sur différentes bases d'examens cardiaques montre qu'il atteint le niveau de performances requis pour une utilisation clinique / Coronary heart diseases are the group of disorders that affect the coronary arteries and are the world's leading cause of mortality. Early detection of these diseases using minimally invasive techniques therefore provides a better therapeutic outcome and reduces costs and risks compared with an interventional approach. Recent studies have shown that X-ray computed tomography (CT) may be used as a non-invasive alternative to accurately locate and grade coronary lesions. However, the analysis of cardiac CT exams for coronary lesion inspection remains a tedious and time-consuming task, as it is based on the manual analysis of vessel cross-sections. High accuracy is required, so only highly experienced clinicians are able to analyze and interpret the data for diagnosis. Computerized tools are therefore critical to reduce processing time and ensure the quality of the diagnosis. The goal of this thesis is to provide automated coronary-analysis tools to support non-invasive CT angiography examination. Such tools allow pathologists to efficiently diagnose and evaluate the risks associated with cardiovascular diseases, and raise the quality of the assessment from a purely qualitative to a quantitative level. The first objective of our work is to design, analyze and validate a set of automated algorithms for coronary artery analysis, with the final purpose of automated stenosis detection and quantification. We propose several algorithms covering the different processing steps towards a fully automated analysis of the coronary arteries; our contribution covers the three major blocks of the processing chain and involves different fields of image processing. First, we present an algorithm dedicated to heart volume extraction. The approach extracts the heart as a single object that can be used as an input mask for automated coronary artery segmentation, eliminating the tedious and time-consuming step of manually removing the obscuring structures around the heart (lungs, ribs, sternum, liver, etc.) and quickly providing a clear, well-defined view of the coronaries. This approach uses a geometric model of the heart that is fitted and adapted to the image data; quantitative and qualitative analysis of the results obtained on a 114-exam database shows the efficiency and accuracy of the approach. Second, we were interested in the problem of coronary artery enhancement and segmentation. In this context, we first designed a novel approach for coronary enhancement that combines robust path openings and component-tree filtering; it showed promising results on a set of 11 CT exams compared with a Hessian-based approach. Since a precise and accurate lumen segmentation is crucial for robust stenosis detection and quantification, we dedicated part of our work to improving the lumen segmentation step based on vessel statistics; validation on the Rotterdam Coronary Challenge showed that this approach provides state-of-the-art performance. Finally, the core of this thesis is dedicated to the issue of stenosis detection and quantification.
Two different approaches are designed and evaluated using the Rotterdam online evaluation framework. The first makes use of the lumen segmentation together with geometric and intensity features to extract coronary stenoses; the second uses a learning-based approach for stenosis detection and quantification and outperforms some state-of-the-art works on several metrics. This thesis results in a prototype for automated coronary artery analysis and stenosis detection and quantification that meets the level of performance required for clinical use. The prototype was qualitatively and quantitatively validated on different sets of cardiac CT exams.
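
Once a lumen segmentation is available, the degree of a stenosis along a vessel is commonly quantified from the diameter profile along the centerline, e.g. as 1 minus the ratio of the minimum local diameter to a healthy reference diameter estimated from the neighboring segments. The sketch below shows this generic computation on a synthetic diameter profile; it is not the detection method of the thesis, and the percentile used for the reference is arbitrary.

```python
import numpy as np

def stenosis_degree(diameters, reference_percentile=75):
    """Percent diameter reduction along a vessel segment: compare the
    narrowest point with a 'healthy' reference diameter taken as a high
    percentile of the profile (a crude proxy for the normal lumen)."""
    d_min = float(np.min(diameters))
    d_ref = float(np.percentile(diameters, reference_percentile))
    return 100.0 * (1.0 - d_min / d_ref), int(np.argmin(diameters))

# Synthetic diameter profile (mm) along a centerline with a focal narrowing.
x = np.linspace(0, 40, 200)                              # position along vessel, mm
profile = 3.2 - 0.01 * x                                 # gentle physiological taper
profile -= 1.6 * np.exp(-0.5 * ((x - 22) / 1.5) ** 2)    # the stenosis

degree, where = stenosis_degree(profile)
print(f"stenosis of {degree:.1f}% at {x[where]:.1f} mm along the segment")
```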
