21

Metody pro zpracování termovizních snímků s detekcí stanovené oblasti obličeje / Methods for infrared thermography with detection of specific facial areas

Kolářová, Dana January 2014 (has links)
This thesis deals with non-contact measurement of temperature on the human face. The principle of infrared radiation measurement and the construction of the thermal imager are described in a literature review. The main part of the thesis is the design of an algorithm for automatic processing and detection of regions of interest in thermal images; a theoretical description of the methods used is also included. The aim is to design and implement a program for the automatic evaluation of temperature changes in a human face across a sequence of thermal images taken with a short time delay. The thesis also describes the implementation of the designed algorithm in the MATLAB programming environment and the user interface. The program was tested on experimental data samples, and the obtained results and possible limitations are discussed.
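The thesis implements its processing in MATLAB, which is not reproduced here. As a rough, language-agnostic illustration of the general idea (an assumption, not the author's algorithm), the Python sketch below thresholds a calibrated thermal frame to the skin-temperature range, keeps the largest warm blob as the face candidate, and averages the temperature over a crude upper-face region; the function name and thresholds are hypothetical.

```python
import numpy as np
import cv2  # OpenCV, used here for connected-component analysis

def face_roi_temperature(thermal_frame, t_min=30.0, t_max=40.0):
    """Locate the warmest large blob (assumed to be the face) in a calibrated
    thermal frame (degrees Celsius) and return the mean temperature of a
    rough upper-face region of that blob."""
    # Threshold to pixels in the expected skin-temperature range.
    mask = ((thermal_frame >= t_min) & (thermal_frame <= t_max)).astype(np.uint8)
    # Keep the largest connected component as the face candidate.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None
    face_label = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    x, y, w, h = stats[face_label, :4]
    # Region of interest: upper third of the face bounding box.
    roi = thermal_frame[y:y + h // 3, x:x + w]
    roi_mask = labels[y:y + h // 3, x:x + w] == face_label
    if not roi_mask.any():
        return None
    return float(roi[roi_mask].mean())
```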
22

Rozpoznávání markantních rysů na nábojnicích / Recognition of Unique Features on Weapon Cartridges

Siblík, Jan January 2010 (has links)
The subject of this thesis is the design and implementation of an algorithm capable of comparing two images of weapon cartridge casings based on their distinctive features. The first section covers firearms, with special emphasis on ballistic traces. The following parts present the design and implementation of a scanning unit for acquiring such images, their processing, and the design and execution of the comparison algorithm. The conclusion evaluates the goals achieved and the possibilities for further development.
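The thesis does not reproduce its comparison algorithm in this abstract. As a generic illustration of distinctive-feature image comparison (not the author's method), a Python/OpenCV sketch using ORB keypoints and a ratio-test match count might look like the following; the function name, feature count and ratio threshold are assumptions.

```python
import cv2

def casing_similarity(path_a, path_b, ratio=0.75):
    """Score the similarity of two cartridge-casing images by counting good
    ORB keypoint matches (Lowe's ratio test), normalised to [0, 1]."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des_a, des_b, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    # Normalise by the smaller keypoint count so the score stays in [0, 1].
    return len(good) / max(1, min(len(kp_a), len(kp_b)))
```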
23

Evaluating machine learning methods for detecting sleep arousal / Evaluering av maskininlärningsmetoder för detektion av sömnstörningar

Ivarsson, Anton, Stachowicz, Jacob January 2019 (has links)
Sleep arousal is a phenomenon that affects the sleep of a large number of people. The process of predicting and classifying arousal events is done manually with the aid of certified technologists, although some research has been done on automation using Artificial Neural Networks (ANN). This study explored how a Support Vector Machine (SVM) performed compared to an ANN on this task. Polysomnography (PSG) is a type of sleep study that produces the data used in classifying sleep disorders. The PSG data used in this thesis consists of 13 waveforms sampled at, or resampled to, 200 Hz. There were samples from 994 patients, totalling approximately 6.98 × 10^10 data points; processing this amount of data is time-consuming and presents a challenge. 2000 points of each signal were used in the construction of the data set used for the models. Extracted features included: median, max, min, skewness, kurtosis, power of the EEG band frequencies and more. Recursive feature elimination was used in order to select the best number of extracted features. The extracted data set was used to train two "out of the box" classifiers, and due to memory issues the testing had to be split into four batches. Taking the mean of the four tests, the SVM scored an ROC AUC of 0.575 and the ANN 0.569, respectively. As the difference between the two results was very modest, it was not possible to conclude that either model was better suited for the task at hand. It could, however, be concluded that an SVM can perform as well as an ANN on PSG data. More work has to be done on feature extraction, feature selection and the tuning of the models for PSG data to conclude anything else. Future thesis work could include research questions such as "Which features perform best for an SVM in the prediction of sleep arousals on PSG data" or "What feature selection technique performs best for an SVM in the prediction of sleep arousals on PSG data", etc. / Sleep disorders are a group of health conditions that affect the sleep quality of a large number of people. One example of a sleep disorder is sleep apnea. Detecting these events is today a manual task performed by certified technologists, but recent studies have shown that Artificial Neural Networks (ANN) can detect the events with high accuracy. This study examines how well a Support Vector Machine (SVM) can detect these events compared with an ANN. The data used to classify sleep disorders comes from a type of sleep study called polysomnography (PSG). The PSG data used in this thesis consists of 13 waveforms, of which 12 were recorded at 200 Hz and one was resampled to 200 Hz. The data contains recordings from 994 patients, giving a total of approximately 6.98 × 10^10 data points; processing such a large amount of data was a challenge. 2000 points from each waveform were used when constructing the dataset for the models. The extracted features included, among others: median, max, min, skewness, kurtosis, and the amplitude of the EEG band frequencies. Recursive Feature Elimination was used to select the optimal number of the best features. The extracted dataset was then used to train two default-configured models, an SVM and an ANN. Due to limited working memory, the training and testing had to be split into four segments. The mean of the four tests was an ROC AUC of 0.575 for the SVM and 0.569 for the ANN, respectively. As the difference between the two results was very marginal, we could not conclude that either model was better suited for the task at hand. We can, however, conclude that an SVM can perform as well as an ANN on PSG data without tuning. More work is needed on feature extraction, feature selection and model tuning. Future theses could address questions such as "Which features perform best for an SVM in the prediction of sleep arousals on PSG data" or "Which feature selection technique performs best for an SVM in the prediction of sleep arousals on PSG data", and so on.
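As an illustration of the pipeline the abstract describes (per-window summary features, recursive feature elimination, and an out-of-the-box SVM scored by ROC AUC), a minimal Python sketch using scikit-learn could look like this. It is an assumption about the workflow, not the authors' code; the feature set is reduced to a handful of statistics, and the split, kernel, and number of selected features are arbitrary.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def window_features(window):
    """Summary statistics of one signal window, similar in spirit to the
    features listed in the abstract."""
    return [np.median(window), window.max(), window.min(),
            skew(window), kurtosis(window)]

def evaluate_svm(windows, labels):
    """Train an 'out of the box' SVM on per-window features, with recursive
    feature elimination, and report ROC AUC on a held-out split."""
    X = np.array([window_features(w) for w in windows])
    y = np.array(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    # Keep the 3 features ranked best by a linear SVM.
    selector = RFE(SVC(kernel="linear"), n_features_to_select=3).fit(X_tr, y_tr)
    clf = SVC(kernel="linear", probability=True).fit(X_tr[:, selector.support_], y_tr)
    scores = clf.predict_proba(X_te[:, selector.support_])[:, 1]
    return roc_auc_score(y_te, scores)
```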
24

Diskussioner om våld i sociala medier - En metod för att mäta förekomsten av diskussioner om våld på olika digitala plattformar / Discussions of violence in social media - A method for measuring the occurrence of discussions about violence on different digital platforms

Bisell, Evelina, Rosenqvist, Kim January 2023 (has links)
The possibility for individuals to express themselves on the internet has been good for free speech, which is a democratic cornerstone of our society. The other side of this coin is that more and more violent radicalisation is taking place on digital platforms. In digital environments, violence-endorsing propaganda is being spread in which perpetrators of violence are hailed as heroes, violence against the enemy is justified, and instructions on how attacks can be carried out are shared. For socially vulnerable individuals who anchor their sense of reality in these violence-endorsing digital environments, their worldview can change to the point where they eventually choose to commit serious acts of violence themselves. Being able to identify digital platforms where discussions about violence are more common can therefore give a first indication of which sites risk potentially promoting the use of violence. Previous research on threat assessment through text analysis has mainly focused on identifying individuals who pose a threat. Less attention has been devoted to developing methods that can instead identify communities that pose a threat to the individual. Today there is a lack of applicable and validated methods that can measure discussions about violence on digital forums through automated text analysis. The research goal of this work was to create a method able to measure the occurrence of violence in discussions on digital platforms, applying the design science research framework. Using both qualitative and quantitative methods, a word list of violent expressions used on several contemporary social media platforms was compiled. Program code was developed to automatically count the number of violent expressions occurring in a given text collection. The method was tested and evaluated using previously available data from several different forums. The results show that violent expressions were up to 100 times more common on some of the better-known right-wing extremist forums compared with more general discussion forums. This spectrum is in line with what could initially be expected from the character of these different forums and thus indicates that the method delivers realistic results. A deeper qualitative analysis of the posts would be necessary to identify how large a share of the identified violent expressions appear in discussions with a positive attitude toward violence. / Individuals being able to express themselves on the internet has been a boon to free speech, a democratic pillar of our society. The downside is that an increasing amount of violent radicalization is happening all over social media. At this moment, propaganda that praises violence is being spread on digital platforms, where violent perpetrators are praised as heroes, violence against "the enemy" is justified, and instructions on how to perform violent attacks are shared. Socially vulnerable individuals repeatedly exposed to violent social communities can have their worldview changed so drastically that they end up committing violent crimes. That is why identifying digital platforms where discussions about violence occur more commonly could give a first indication of which sites pose a higher risk of promoting violent attacks. Previous research on threat assessment through text analysis has mainly focused on detecting warning behaviours in radicalised individuals. Less room within this area of research has been given to developing methods that instead identify communities that can pose a risk to the individual. Therefore, there is a lack of applicable and validated methods that can measure discussions of violence on social media through automated text analysis. The research goal of this thesis was to create a method that can measure the occurrence of violent discussions on digital platforms, using the Design Science method framework. A list of violent words and expressions commonly used on social media was created with qualitative and quantitative methods. Code was then developed to automatically count the number of occurrences of these expressions in a given text. Testing and evaluation of the method were carried out with data previously made available from several forums. The results show that violent expressions occurred up to 100 times more often on some of the better-known right-wing extremist forums than on more general discussion forums. The resulting spectrum aligns with what one could initially expect given the character of these forums and therefore indicates that the method delivers realistic results. A deeper qualitative analysis of the posts would be necessary to identify how many of the identified violent expressions appeared in discussions with a positive attitude toward violence.
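The abstract describes program code that counts occurrences of violent expressions in a text collection so that forums can be compared. A minimal Python sketch of that idea (with a made-up, deliberately tiny lexicon; the thesis builds its word list through qualitative and quantitative methods not reproduced here) might look like this; normalising to hits per 10,000 tokens is an added assumption so that forums of different sizes are comparable.

```python
import re
from collections import Counter

# Hypothetical miniature lexicon standing in for the thesis's word list.
VIOLENT_TERMS = ["kill", "shoot", "bomb", "attack", "execute"]

def violence_rate(posts, terms=VIOLENT_TERMS):
    """Count occurrences of violent expressions in a collection of posts and
    return (hits per 10,000 tokens, per-term counts)."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, terms)) + r")\b",
                         re.IGNORECASE)
    hits, tokens = 0, 0
    counts = Counter()
    for post in posts:
        tokens += len(post.split())
        found = pattern.findall(post)
        hits += len(found)
        counts.update(w.lower() for w in found)
    return 10_000 * hits / max(1, tokens), counts
```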
25

Mise à jour de la Base de Données Topographiques du Québec à l'aide d'images à très haute résolution spatiale et du progiciel Sigma0 : le cas des voies de communication / Updating the Base de Données Topographiques du Québec using very high spatial resolution imagery and the Sigma0 software package: the case of transportation routes

Bélanger, Jean 12 1900 (has links)
The Ministère des Ressources Naturelles et de la Faune (MRNF) commissioned the Montreal geomatics company SYNETIX inc. and the remote sensing laboratory of the Université de Montréal to develop an application dedicated to the automatic detection and updating of the road network on 1:20,000 topographic maps from high spatial resolution optical imagery. To this end, the contractors undertook the adaptation of the SIGMA0 software package, which they had jointly developed for map updating from satellite images with a resolution of about 5 metres. The product derived from SIGMA0 was a module named SIGMA-ROUTES, whose road detection principle relies on sweeping a filter along the road vectors of the existing map. The filter responses on very high resolution colour images of great radiometric complexity (aerial photographs) lead to the assignment of the labels intact, suspect, lost or new to the detected road segments. The general objective of this project is to evaluate the correctness of the status assignment by quantifying performance on the basis of the total detected distances in agreement with the reference, and by carrying out a spatial analysis of the inconsistencies. The sequence of tests first targets the effect of resolution on the agreement rate and, in a second step, the gains expected from a series of enhancement treatments intended to make these images more suitable for road network extraction. The overall approach first involves the characterisation of a test site in the Sherbrooke area containing 40 km of roads of various categories, from forest trails to wide collectors, over an area of 2.8 km2. A ground truth map of the transportation routes allowed us to establish reference data, derived from a visual detection, against which the SIGMA-ROUTES detection results are compared. Our results confirm that the radiometric complexity of high resolution images in urban areas benefits from pre-processing such as segmentation and histogram compensation, which make road surfaces more uniform. We also observe that performance is highly sensitive to variations in resolution: moving between our three resolutions (84, 168 and 210 cm) alters the detection rate by nearly 15% of the total distances in agreement with the reference, and spatially splits long intact vectors into several portions alternating between the intact, suspect and lost statuses. Detection of existing roads in agreement with the reference reached 78% with our most effective combination of resolution and image pre-processing. Chronic detection problems were identified, including the presence of several segments with no assignment that were ignored by the process. There is also an overestimation of false detections assigned as suspect when they should be identified as intact. Based on the linear measurements and the spatial analyses of the detections, we estimate that the assignment of the intact status should reach 90% agreement with the reference after various adjustments to the algorithm. The detection of new roads was a failure, regardless of resolution or image enhancement. The search for new segments, which relies on locating potential starting points of new roads connected to existing roads, generates a runaway of false detections wandering among non-road entities. In connection with these inconsistencies, we isolated numerous false detections of new roads generated parallel to roads previously assigned as intact. Finally, we suggest a procedure that takes advantage of certain enhanced images while integrating human intervention at a few pivotal stages of the process. / In order to optimize and reduce the cost of road map updating, the Ministry of Natural Resources and Wildlife is considering exploiting high definition color aerial photography within a global automatic detection process. In that regard, Montreal-based SYNETIX inc. teamed with the University of Montreal Remote Sensing Laboratory (UMRSL) to develop an application intended for the automatic detection of road networks on radiometrically complex high definition imagery. This application, named SIGMA-ROUTES, is a module derived from SIGMA0, a software package earlier developed by the UMRSL for optical and radar imagery of 5 to 10 metre resolution. SIGMA-ROUTES road detection relies on a map-guided filtering process in which the filter is driven along previously known road vectors, tagging them as intact, suspect or lost depending on the filtering responses. As for updating new segments, the process first detects potential starting points for new roads within the filtering corridor of previously known roads to which they should be connected. In that respect, emulating the human visual filtering process and distinguishing potential starting points of new roads on radiometrically complex high definition imagery is a very challenging task. In this research, we evaluate the application's efficiency in terms of total linear distances of detected roads, as well as the spatial location of inconsistencies, on a 2.8 km2 test site containing 40 km of roads of various categories in a semi-urban environment. As specific objectives, we first establish the impact of different resolutions of the input imagery, and secondly the potential gains of enhanced images (segmented and others) in a pre-emptive approach of better matching the image properties with the detection parameters. The results were compared to a ground truth reference obtained by a conventional visual detection process on the basis of total linear distances and spatial location of detections. The best results, with the most efficient combination of resolution and pre-processing, show 78% intact detection in agreement with the ground truth reference when applied to a segmented, resampled image. The impact of image resolution is clearly noted, as a change from 84 cm to 210 cm resolution altered the total detected distances of intact roads by around 15%. We also found many road segments ignored by the process and left without a detection status although they were directly linked to intact neighbours. By revising the algorithm and optimizing the image pre-processing, we estimate that a 90% intact detection performance can be reached. The new segment detection is inconclusive, as it generates an uncontrolled network of false detections across other entities in the images. Related to these false detections of new roads, we identified numerous cases of new road detections parallel to previously assigned intact road segments. We conclude with a proposed procedure that uses enhanced images as input combined with human intervention at critical stages in order to optimize the final product.
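SIGMA-ROUTES itself is proprietary and its filter is not described here in enough detail to reproduce. Purely as an illustration of the map-guided idea (sample the image along a known road vector and label the segment from the filter response), a toy Python sketch might look like the following; the radiometric test, thresholds and status cut-offs are all assumptions and do not reflect the actual module.

```python
import numpy as np

def tag_road_segment(image, samples, road_value=0.6, tol=0.15, min_frac=0.7):
    """Very rough illustration of map-guided verification: sample the image
    along a known road vector and tag the segment as 'intact', 'suspect' or
    'lost' from the fraction of samples that look like road surface.
    `samples` is an (N, 2) array of (row, col) pixel coordinates along the
    existing road vector."""
    rows, cols = samples[:, 0], samples[:, 1]
    values = image[rows, cols]
    road_like = np.abs(values - road_value) < tol  # crude radiometric test
    frac = road_like.mean()
    if frac >= min_frac:
        return "intact"
    if frac >= 0.3:
        return "suspect"
    return "lost"
```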
27

Two complementary approaches to detecting vulnerabilities in C programs

Jimenez, Willy 04 October 2013 (has links) (PDF)
In general, computer software vulnerabilities are defined as special cases where an unexpected behavior of the system leads to the degradation of security properties or the violation of security policies. These vulnerabilities can be exploited by malicious users or systems, impacting the security and/or operation of the attacked system. Since the literature on vulnerabilities is not always available to developers, and the tools they use do not allow them to detect and avoid vulnerabilities, the software industry continues to be affected by security breaches. Therefore, the detection of vulnerabilities in software has become a major concern and research area. Our research was done under the scope of the SHIELDS European project and focuses specifically on modeling techniques and formal detection of vulnerabilities. In this area, existing approaches are limited and do not always rely on a precise formal modeling of the vulnerabilities they target. Additionally, detection tools produce a significant number of false positives/negatives. Note also that it is quite difficult for a developer to know what vulnerabilities are detected by each tool because they are not well documented. In this context, the contributions made in this thesis are: the definition of a formalism called templates; the definition of a formal language, called Vulnerability Detection Condition (VDC), which can accurately model the occurrence of a vulnerability, together with a method to generate VDCs from templates; the definition of a second approach for detecting vulnerabilities, which combines model checking and fault injection techniques; and experiments on both approaches.
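The VDC formalism itself is not given in the abstract, so the sketch below is only a loose stand-in: a Python scanner that flags C source lines matching simple syntactic "templates" for two well-known vulnerability classes. The template patterns, names and the regex-matching approach are assumptions for illustration; the thesis's templates and VDCs are formal models, not regular expressions.

```python
import re
import sys

# Hypothetical, simplified "templates": each maps a vulnerability class to a
# syntactic condition. The thesis's VDC formalism is far richer than this.
TEMPLATES = {
    "CWE-120 buffer copy without size check":
        re.compile(r"\b(strcpy|strcat|gets|sprintf)\s*\("),
    "CWE-134 format string taken from a variable":
        re.compile(r"\bprintf\s*\(\s*[a-zA-Z_]\w*\s*[,)]"),
}

def scan_c_source(path):
    """Flag source lines matching any template; a crude stand-in for the
    template/VDC based detection described in the abstract."""
    findings = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            for name, pattern in TEMPLATES.items():
                if pattern.search(line):
                    findings.append((lineno, name, line.strip()))
    return findings

if __name__ == "__main__":
    for lineno, name, line in scan_c_source(sys.argv[1]):
        print(f"{lineno}: {name}: {line}")
```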
28

Newborn EEG seizure detection using adaptive time-frequency signal processing

Rankine, Luke January 2006 (has links)
Dysfunction in the central nervous system of the neonate is often first identified through seizures. The difficulty in detecting clinical seizures, which involves the observation of physical manifestations characteristic of newborn seizure, has placed greater emphasis on the detection of newborn electroencephalographic (EEG) seizure. The high incidence of newborn seizure has resulted in considerable mortality and morbidity rates in the neonate. Accurate and rapid diagnosis of neonatal seizure is essential for proper treatment and therapy. This has impelled researchers to investigate possible methods for the automatic detection of newborn EEG seizure. This thesis is focused on the development of algorithms for the automatic detection of newborn EEG seizure using adaptive time-frequency signal processing. The assessment of newborn EEG seizure detection algorithms requires large datasets of nonseizure and seizure EEG which are not always readily available and often hard to acquire. This has led to the proposition of realistic models of newborn EEG which can be used to create large datasets for the evaluation and comparison of newborn EEG seizure detection algorithms. In this thesis, we develop two simulation methods which produce synthetic newborn EEG background and seizure. The simulation methods use nonlinear and time-frequency signal processing techniques to allow for the demonstrated nonlinear and nonstationary characteristics of the newborn EEG. Atomic decomposition techniques incorporating redundant time-frequency dictionaries are exciting new signal processing methods which deliver adaptive signal representations or approximations. In this thesis we have investigated two prominent atomic decomposition techniques, matching pursuit (MP) and basis pursuit, for their possible use in an automatic seizure detection algorithm. In our investigation, it was shown that matching pursuit generally provided the sparsest (i.e. most compact) approximation for various real and synthetic signals over a wide range of signal approximation levels. For this reason, we chose MP as our preferred atomic decomposition technique for this thesis. A new measure, referred to as structural complexity, which quantifies the level or degree of correlation between signal structures and the decomposition dictionary, was proposed. Using the change in structural complexity, a generic method of detecting changes in signal structure was proposed. This detection methodology was then applied to the newborn EEG for the detection of state transitions (i.e. nonseizure to seizure state) in the EEG signal. To optimize the seizure detection process, we developed a time-frequency dictionary that is coherent with the newborn EEG seizure state, based on the time-frequency analysis of the newborn EEG seizure. It was shown that using the new coherent time-frequency dictionary and the change in structural complexity, we can detect the transition from nonseizure to seizure states in synthetic and real newborn EEG. Repetitive spiking in the EEG is a classic feature of newborn EEG seizure. Therefore, the automatic detection of spikes can be fundamental in the detection of newborn EEG seizure. The capacity of two adaptive time-frequency signal processing techniques to detect spikes was investigated. It was shown that a relationship between the EEG epoch length and the number of repetitive spikes governs the ability of both matching pursuit and the adaptive spectrogram to detect repetitive spikes. However, it was demonstrated that this relationship was less restrictive for the adaptive spectrogram, which was shown to outperform matching pursuit in detecting repetitive spikes. The method of adapting the window length associated with the adaptive spectrogram used in this thesis was the maximum correlation criterion. It was observed that for the time instants where signal spikes occurred, the optimal window lengths selected by the maximum correlation criterion were small. Therefore, spike detection directly from the adaptive window optimization method was demonstrated and also shown to outperform matching pursuit. An automatic newborn EEG seizure detection algorithm was proposed based on the detection of repetitive spikes using the adaptive window optimization method. The algorithm shows excellent performance with real EEG data. A comparison of the proposed algorithm with four well documented newborn EEG seizure detection algorithms is provided. The results of the comparison show that the proposed algorithm has significantly better performance than the existing algorithms: it achieved a good detection rate (GDR) of 94% and a false detection rate (FDR) of 2.3%, compared with the leading existing algorithm, which produced a GDR of 62% and an FDR of 16%. In summary, the novel contribution of this thesis to the fields of time-frequency signal processing and biomedical engineering is the successful development and application of sophisticated algorithms based on adaptive time-frequency signal processing techniques to the problem of automatic newborn EEG seizure detection.
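As a sketch of the atomic decomposition machinery the abstract relies on, the following Python code implements plain matching pursuit over a redundant dictionary of unit-norm atoms, plus one plausible reading of the "structural complexity" idea (the number of atoms needed to capture a fixed fraction of the signal energy). The second function is an interpretation, not the thesis's exact definition, and all parameter values are assumptions.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=20):
    """Greedy matching pursuit: repeatedly pick the dictionary atom with the
    largest inner product with the residual. `dictionary` is a (K, N) array
    of unit-norm atoms. Returns chosen indices, coefficients and residual."""
    residual = signal.astype(float).copy()
    indices, coeffs = [], []
    for _ in range(n_atoms):
        correlations = dictionary @ residual
        k = int(np.argmax(np.abs(correlations)))
        c = correlations[k]
        residual = residual - c * dictionary[k]
        indices.append(k)
        coeffs.append(c)
    return np.array(indices), np.array(coeffs), residual

def structural_complexity(signal, dictionary, energy_fraction=0.95, max_atoms=200):
    """One plausible reading of 'structural complexity': the number of atoms
    needed to capture a given fraction of the signal energy; fewer atoms
    means the signal is more coherent with the dictionary."""
    residual = signal.astype(float).copy()
    target = (1.0 - energy_fraction) * np.sum(residual ** 2)
    for n in range(1, max_atoms + 1):
        correlations = dictionary @ residual
        k = int(np.argmax(np.abs(correlations)))
        residual = residual - correlations[k] * dictionary[k]
        if np.sum(residual ** 2) <= target:
            return n
    return max_atoms
```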
29

Compressão Seletiva de Imagens Coloridas com Detecção Automática de Regiões de Interesse / Selective Compression of Color Images with Automatic Detection of Regions of Interest

Gomes, Diego de Miranda 05 January 2006 (has links)
There has been an increasing tendency toward the use of selective image compression, since several applications make use of digital images and, in some cases, loss of information in certain regions is not allowed. However, there are applications in which these images are captured and stored automatically, making it impossible for the user to select the regions of interest to be compressed in a lossless manner. A possible solution for this matter would be the automatic selection of these regions, a very difficult problem to solve in the general case. Nevertheless, it is possible to use intelligent techniques to detect these regions in specific cases. This work proposes a selective color image compression method in which regions of interest, previously chosen, are compressed in a lossless manner. The method uses the wavelet transform to decorrelate the pixels of the image, a competitive neural network to perform vector quantization, mathematical morphology, and adaptive Huffman coding. There are two options for automatic detection in addition to the manual one: a texture segmentation method, in which the highest-frequency texture is selected as the region of interest, and a new face detection method, in which the face region is compressed losslessly. The results show that both can be successfully used with the compression method, given the map of the region of interest as an input. / Selective image compression tends to be used more and more, since many applications make use of digital images that, in some cases, do not allow loss of information in certain regions. However, there are applications in which these images are captured and stored automatically, making it impossible for a user to indicate the regions of the image that must be compressed without loss. One solution to this problem would be the automatic detection of the regions of interest, a very difficult problem to solve in the general case. In certain cases, however, intelligent techniques can be used to detect these regions. This dissertation presents a selective compressor for color images in which the regions of interest, provided in advance, are compressed entirely without loss. The method uses the wavelet transform to decorrelate the pixels of the image, a competitive neural network to perform vector quantization, mathematical morphology, and adaptive Huffman coding. Besides the option of manually selecting the regions of interest, there are two automatic detection options: a texture segmentation method, in which the texture with the highest frequency is selected as the region of interest, and a new face detection method in which the face region is compressed without loss. The results show that both methods can be used with the compression algorithm by providing it with the region of interest map.
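The full pipeline (wavelet transform, competitive-network vector quantization, mathematical morphology and adaptive Huffman coding) cannot be reproduced from the abstract. As a toy illustration of the region-of-interest idea only, the Python sketch below applies a one-level 2-D wavelet transform with PyWavelets and quantises detail coefficients coarsely everywhere except inside a given ROI mask; the wavelet, quantisation step and mask handling are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def selective_compress(image, roi_mask, wavelet="haar", step=16):
    """Toy region-of-interest coding: one-level 2-D wavelet transform, coarse
    quantisation of detail coefficients outside the (downsampled) ROI mask,
    exact coefficients inside it, then reconstruction. A real codec would add
    vector quantisation and entropy coding on top of this."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
    # The subbands are half-size, so downsample the ROI mask to match.
    small_mask = roi_mask[::2, ::2].astype(bool)

    def quantise(band):
        lossy = np.round(band / step) * step          # coarse outside the ROI
        return np.where(small_mask, band, lossy)      # exact inside the ROI

    rec = pywt.idwt2((cA, (quantise(cH), quantise(cV), quantise(cD))), wavelet)
    return rec[:image.shape[0], :image.shape[1]]
```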
30

DESENVOLVIMENTO E APLICAÇÃO DE UM MÉTODO PARA DETECÇÃO DE INDÍCIOS DE PLÁGIO / Development and Application of a Method for Detecting Signs of Plagiarism

Pertile, Solange de Lurdes 11 March 2011 (has links)
The distribution of and access to information by a much larger number of people on the Internet has grown exponentially, which has made it harder to control the originality of such information and easier for plagiarising users to make inappropriate use of it. It is in this context, where evaluating the texts produced in graduate and undergraduate courses, both classroom-based and distance-based, becomes important, that this dissertation proposes a new method for detecting signs of plagiarism in academic work, which searches for fragments similar to web documents. The method analyzes mosaic plagiarism, where the author copies parts of a work, changing only a few words without giving credit to the original author, and bilingual plagiarism, where the content of a document in English is translated into Portuguese without reference to the original work. In addition, the method was implemented in a tool integrated with the assignment submission module of the Moodle platform, accessible from desktop and mobile devices, so that teachers can benefit from it as soon as work is posted in the virtual learning environment (AVA), and also as a desktop application so that users can access it outside AVA Moodle. The results show that the developed method achieved satisfactory results compared to other techniques found in the literature, obtaining, over a collection of 14 documents, similarity indices ranging from 30.07% to 40% and a precision in the returned results between 71.42% and 96.15%. The experiment with a document translated from English into Portuguese had 100% precision in the returned results. / The distribution of and access to information by a much larger number of people on the internet has grown exponentially, which has been making it harder to control the originality of such information and easier for plagiarising users who make inappropriate use of it. It is in this context that the importance of evaluating the texts produced in graduate and undergraduate courses, in both distance and classroom modalities, stands out, and in which this dissertation proposes a new method for detecting signs of plagiarism in academic work, which searches for fragments similar to documents on the web. The developed method analyses mosaic plagiarism, in which the author copies parts of a work, changing only a few words, without crediting the author of the original work, and bilingual plagiarism, in which the content of a document in English is translated into Portuguese without referencing the original work. In addition, the method was implemented in a tool integrated with the assignment submission module of the Moodle platform, for access via desktop and mobile devices, so as to give teachers the benefits of its use as soon as work is posted in the virtual learning environment (AVA); it was also implemented as a desktop application to allow users to access it outside AVA Moodle. The results obtained show that the developed method achieved satisfactory results in comparison with other techniques found in the literature, obtaining, over a collection of 14 documents, similarity indices ranging from 30.07% to 40% and a precision in the returned results between 71.42% and 96.15%. The experiment with a document translated from English into Portuguese had a precision of 100% in the returned results.
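The thesis searches the web for similar fragments and also handles bilingual (English-to-Portuguese) plagiarism, neither of which is reproduced here. As a minimal sketch of the underlying fragment-similarity idea only, the Python code below shingles two documents into word n-grams and reports what fraction of the suspect document's n-grams also occur in a candidate source; the shingle length and the containment measure are assumptions, not the thesis's exact procedure.

```python
import re

def shingles(text, n=5):
    """Lower-cased word n-grams ('shingles') of a text."""
    words = re.findall(r"\w+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def containment(suspect_text, source_text, n=5):
    """Fraction of the suspect document's n-grams that also appear in the
    source document; values near 1.0 suggest copied or lightly reworded
    passages (mosaic plagiarism)."""
    s = shingles(suspect_text, n)
    t = shingles(source_text, n)
    if not s:
        return 0.0
    return len(s & t) / len(s)
```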
