
A Multiclassifier Approach to Motor Unit Potential Classification for EMG Signal Decomposition

Rasheed, Sarbast (January 2006)
EMG signal decomposition is the process of resolving a composite EMG signal into its constituent motor unit potential trains (classes), and it can be configured as a classification problem. An EMG signal detected by the tip of an inserted needle electrode is the superposition of the individual electrical contributions of the different motor units that are active during a muscle contraction, plus background interference.

This thesis addresses EMG signal decomposition by developing an interactive classification system that uses multiple classifier fusion techniques to achieve improved classification performance. The developed system combines heterogeneous sets of base classifier ensembles of different kinds and employs either a one-level classifier fusion scheme or a hybrid classifier fusion approach.

The hybrid approach is a two-stage combination process built around a new aggregator module consisting of two combiners: the first at the abstract level of classifier fusion and the second at the measurement level, used in a complementary manner. Both combiners may be data independent, or the first may be data independent and the second data dependent. For experimentation, we used the majority voting scheme as the first combiner, while the second combiner was either one of the fixed combination rules, behaving as a data-independent combiner, or the fuzzy integral with the lambda-fuzzy measure, an implicitly data-dependent combiner.

Once the set of motor unit potential trains is generated by the classifier fusion system, firing pattern consistency statistics are calculated for each train to detect classification errors in an adaptive fashion. This firing pattern analysis allows the algorithm to modify, individually for each train, the assertion threshold required to assign a motor unit potential, based on an expectation of erroneous assignments.

The classifier ensembles consist of a set of different versions of the Certainty classifier; a set of classifiers based on the nearest-neighbour decision rule, namely the fuzzy k-NN and adaptive fuzzy k-NN classifiers; and a set of classifiers that use a correlation measure to estimate the degree of similarity between a pattern and a class template, namely the matched template filter classifiers and their adaptive counterparts. The base classifiers, besides being of different kinds, utilize different types of features, and their performance was investigated using both real and simulated EMG signals of different complexities. The extracted feature sets include time-domain data, first- and second-order discrete derivative data, and wavelet-domain data.

Following the so-called overproduce-and-choose strategy for classifier ensemble combination, the developed system constructs a large pool of candidate base classifiers and then chooses, from that pool, subsets of a specified number of classifiers to form candidate ensembles. The system then selects the classifier ensemble having the maximum degree of agreement, exploiting a diversity measure for designing classifier teams. The kappa statistic is used as the diversity measure to estimate the level of agreement between the base classifier outputs, i.e., to measure the degree of decision similarity between the base classifiers. This mechanism of choosing the team's classifiers by assessing classifier agreement across all the trains and the unassigned category is applied in the one-level classifier fusion scheme and for the first combiner in the hybrid approach. For the second combiner in the hybrid approach, team classifiers are also chosen using kappa statistics, but agreement is assessed only across the unassigned category, and the base classifiers having the minimum agreement are selected.

Performance of the developed classifier fusion system, in both of its variants, i.e., the one-level scheme and the hybrid approach, was evaluated using synthetic signals of known properties as well as real signals, and compared with the performance of the constituent base classifiers. Across the EMG signal data sets used, the hybrid approach had better average classification performance overall, especially in terms of reducing the number of classification errors.
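Two of the building blocks this abstract describes, abstract-level majority voting and the pairwise kappa (agreement) statistic, might be sketched as follows. This is a minimal illustration under simplifying assumptions (tie-breaking, two-classifier kappa), not the thesis's implementation:

```python
from collections import Counter

def majority_vote(labels):
    """Abstract-level fusion: return the label most base classifiers agree on
    (ties are broken by Counter's ordering, a simplifying assumption)."""
    return Counter(labels).most_common(1)[0][0]

def kappa(y1, y2, classes):
    """Cohen's kappa between the label outputs of two classifiers:
    chance-corrected agreement, usable as a diversity/agreement measure."""
    n = len(y1)
    p_obs = sum(a == b for a, b in zip(y1, y2)) / n          # observed agreement
    p_exp = sum((y1.count(c) / n) * (y2.count(c) / n)        # agreement by chance
                for c in classes)
    return 1.0 if p_exp == 1 else (p_obs - p_exp) / (1 - p_exp)
```

Team selection along the lines described above would then score every candidate ensemble by its mean pairwise kappa and keep the most agreeing subset (or, for the second combiner, the least agreeing one).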

Multiple Classifier Strategies for Dynamic Physiological and Biomechanical Signals

Nikjoo Soukhtabandani, Mohammad (30 August 2012)
Access technologies often deal with the classification of several physiological and biomechanical signals. In most previous studies involving access technologies, a single classifier has been trained. Despite the reported success of these single classifiers, classification accuracies are often below clinically viable levels. One approach to improving their performance is to utilize state-of-the-art multiple classifier systems (MCSs). Because MCSs invoke more than one classifier, more information can be exploited from the signals, potentially leading to higher classification performance than is achievable with single classifiers. Moreover, by decreasing the feature-space dimensionality of each classifier, the speed of the system can be increased. MCSs may combine classifiers at three levels: the abstract, rank, or measurement level. Among them, abstract-level MCSs have been the most widely applied in the literature, given the flexibility of the abstract-level output: class labels may be derived from any type of classifier, and outputs from multiple classifiers, each designed within a different context, can be easily combined. In this thesis, we develop two new abstract-level MCSs based on "reputation" values of individual classifiers: the static reputation-based algorithm (SRB) and the dynamic reputation-based algorithm (DRB). In SRB, each individual classifier is applied to a validation set, disjoint from the training and test sets, to estimate its reputation value. Each classifier is then assigned a weight proportional to its reputation value. Finally, the overall decision of the classification system is computed using Bayes rule. We applied this method to the problem of dysphagia detection in adults with neurogenic swallowing difficulties, where the aim was to discriminate between safe and unsafe swallows. The weighted classification accuracy exceeded 85% and, because of its high sensitivity, the SRB approach was deemed suitable for screening purposes. In the next step of this dissertation, I analyzed the SRB algorithm mathematically and examined its asymptotic behavior. Specifically, I contrasted SRB's performance against that of majority voting, the benchmark abstract-level MCS, in the presence of different types of noise. In the second phase of this thesis, I exploited the idea of the Dirichlet reputation system to develop a new MCS method, the dynamic reputation-based algorithm, which is suitable for the classification of non-stationary signals. In this method, the reputation of each classifier is updated dynamically whenever a new sample is classified. At any point in time, a classifier's reputation reflects its performance on both the validation and the test sets. Therefore, the effect of occasional high performance by weak classifiers is appropriately moderated; likewise, the effect of a poorly performing individual classifier is mitigated, as its reputation value, and hence its overall influence on the final decision, is diminished. We applied DRB to the challenging problem of discerning physiological responses from nonverbal youth with severe disabilities. The promising experimental results encourage further development of reputation-based multi-classifier systems in the domain of access technology research.
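The static reputation idea, weighting each classifier by a reputation value estimated on a disjoint validation set and then combining abstract-level outputs, might be sketched as below. Validation accuracy is used as a stand-in for the thesis's reputation value, and the weighted per-class vote is a simplified proxy for the Bayes-rule combination, both illustrative assumptions:

```python
def reputation_weights(clf_preds, y_val):
    """Static reputation sketch: each classifier's weight is proportional to
    its accuracy on a held-out validation set (disjoint from train/test)."""
    accs = [sum(p == y for p, y in zip(preds, y_val)) / len(y_val)
            for preds in clf_preds]
    total = sum(accs)
    return [a / total for a in accs]

def weighted_decision(test_labels, weights, classes):
    """Combine abstract-level outputs by summing classifier weights per class
    and returning the class with the largest total."""
    score = {c: 0.0 for c in classes}
    for lbl, w in zip(test_labels, weights):
        score[lbl] += w
    return max(score, key=score.get)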

Off-line signature verification using classifier ensembles and flexible grid features

Swanepoel, Jacques Philip (2009)
Thesis (MSc, Mathematical Sciences), Stellenbosch University, 2009. Presented in partial fulfilment of the requirements for the degree of Master of Science in Applied Mathematics.

In this study we investigate the feasibility of combining an ensemble of eight continuous base classifiers for the purpose of off-line signature verification. This work is mainly inspired by the process of cheque authentication within the banking environment. Each base classifier is constructed by utilising a specific local feature in conjunction with a specific writer-dependent signature modelling technique. The local features considered are pixel density, gravity centre distance, orientation, and predominant slant. The modelling techniques considered are dynamic time warping and discrete observation hidden Markov models. In this work we focus on the detection of high-quality (skilled) forgeries.

Feature extraction is achieved by superimposing a grid with predefined resolution onto a signature image, after which a single local feature is extracted from each signature sub-image corresponding to a specific grid cell. After the signature image is encoded into a matrix of local features, each column of said matrix represents a feature vector (observation) within a feature set (observation sequence). In this work we propose a novel flexible grid-based feature extraction technique and show that it outperforms existing rigid grid-based techniques.

The performance of each continuous classifier is depicted by a receiver operating characteristic (ROC) curve, where each point in ROC space represents the true positive rate and false positive rate of a threshold-specific discrete classifier. The objective is therefore to develop a combined classifier for which the area under the curve (AUC) is maximised, or for which the equal error rate (EER) is minimised.

Two disjoint data sets, in conjunction with a cross-validation protocol, are used for model optimisation and model evaluation. This protocol avoids possible model overfitting and also scrutinises the generalisation potential of each classifier. During the first optimisation stage, the grid configuration that maximises proficiency is determined for each base classifier. During the second optimisation stage, the most proficient ensemble of optimised base classifiers is determined for several classifier fusion strategies. During both optimisation stages only the optimisation data set is utilised. During evaluation, each optimal classifier ensemble is combined using a specific fusion strategy, then retrained and tested on the separate evaluation data set. We show that the performance of the optimal combined classifiers is significantly better than that of the optimal individual base classifiers. Both score-based and decision-based fusion strategies are investigated, including a novel extension to an existing decision-based fusion strategy based on ROC statistics of the base classifiers and maximum likelihood estimation. We show that the proposed elitist maximum attainable ROC-based strategy outperforms the existing one.
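The EER objective mentioned above can be approximated from a finite set of classifier scores by sweeping thresholds and finding the operating point where the false positive and false negative rates are closest. The sketch below is a generic discrete approximation, not the study's evaluation code; it assumes higher scores indicate genuine signatures (label 1) and that both classes are present:

```python
def equal_error_rate(scores, labels):
    """Discrete EER approximation: for each candidate threshold, compute the
    false positive rate (forgeries accepted) and false negative rate
    (genuine signatures rejected), and return the smallest worst-case of
    the two across all thresholds."""
    pos = sum(labels)
    neg = len(labels) - pos
    best = 1.0
    # Candidate thresholds: every distinct score, plus one above the maximum
    # (the "reject everything" operating point).
    for t in sorted(set(scores)) + [max(scores) + 1]:
        fp = sum(1 for s, l in zip(scores, labels) if s >= t and l == 0)
        fn = sum(1 for s, l in zip(scores, labels) if s < t and l == 1)
        best = min(best, max(fp / neg, fn / pos))
    return best
```

On a perfectly separable score set the approximation returns 0; overlapping score distributions push it toward 0.5.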

Classification of affect using novel voice and visual features

Kim, Jonathan Chongkang (07 January 2016)
Emotion adds an important element to the discussion of how information is conveyed and processed by humans; indeed, it plays an important role in the contextual understanding of messages. This research centers on investigating relevant features for affect classification, along with modeling the multimodal and multitemporal nature of emotion. The use of formant-based features for affect classification is explored. Since linear predictive coding (LPC) based formant estimators often have trouble modeling speech elements such as nasalized phonemes, and give inconsistent results for bandwidth estimation, a robust formant-tracking algorithm was introduced to better model the formant and spectral properties of speech. The algorithm utilizes Gaussian mixtures to estimate spectral parameters and refines the estimates using maximum a posteriori (MAP) adaptation. When the method was used for feature extraction in emotion classification, the results indicated that an improved formant-tracking method also provides improved emotion classification accuracy. Spectral features contain rich information about expressivity and emotion. However, most recent work in affective computing has not progressed beyond analyzing the mel-frequency cepstral coefficients (MFCCs) and their derivatives. A novel method for characterizing spectral peaks was introduced, based on multi-resolution sinusoidal transform coding (MRSTC). Because of MRSTC's high precision in representing spectral features, including the preservation of high-frequency content absent from the MFCCs, additional resolving power was demonstrated. Facial expressions were analyzed using 53 motion capture (MoCap) markers. Statistical and regression measures of these markers were used for emotion classification alongside the voice features. Since different modalities use different sampling frequencies and analysis window lengths, a novel classifier fusion algorithm was introduced. This algorithm integrates classifiers trained at various analysis lengths, as well as those obtained from other modalities. Classification accuracy was statistically significantly improved using a multimodal-multitemporal approach with the introduced classifier fusion method. A practical application of the techniques for emotion classification was explored using social dyadic play between a child and an adult. The Multimodal Dyadic Behavior (MMDB) dataset was used to automatically predict young children's levels of engagement from linguistic and non-linguistic vocal cues along with visual cues, such as the direction of a child's gaze or a child's gestures. Although this and similar research is limited by inconsistent subjective boundaries and differing theoretical definitions of emotion, a significant step toward successful emotion classification has been demonstrated; key to this progress have been the novel voice and visual features and the newly developed multimodal-multitemporal approach.
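One way a multitemporal fusion of this kind could work, purely as an illustration (the dissertation's actual algorithm is not reproduced here), is to average each classifier's per-class scores over the whole segment before combining across streams, so that classifiers with different frame rates and window lengths each contribute one vote:

```python
def fuse_multitemporal(score_streams, class_names):
    """Illustrative fusion of classifiers analysed at different window
    lengths: average each stream's per-class scores over the segment, then
    average across streams, so every classifier contributes equally
    regardless of how many frames it produced."""
    fused = {c: 0.0 for c in class_names}
    for stream in score_streams:            # one stream per classifier/modality
        n = len(stream)
        for c in class_names:
            fused[c] += sum(frame[c] for frame in stream) / n
    for c in class_names:
        fused[c] /= len(score_streams)
    return max(fused, key=fused.get)
```

Here each stream is a list of per-frame score dictionaries; streams of different lengths (e.g. a 25 ms voice analysis vs. a 100 ms MoCap analysis) are reduced to the same per-segment scale before fusion.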

Diabetic retinopathy image quality assessment, detection, screening and referral

Pires, Ramon (2013)
Advisor: Anderson de Rezende Rocha. Master's thesis, Universidade Estadual de Campinas, Instituto de Computação, 2013.

Diabetic Retinopathy (DR), a common complication of diabetes, manifests through different lesions, each with its own particularities. These particularities are exploited in the literature as representation strategies, providing satisfactory discrimination between healthy and diseased retinas. However, because such strategies are strongly tied to the visual characteristics of each anomaly, the detection of distinct lesions requires distinct approaches. In this work, we present a general framework whose objective is to automate eye-fundus image analysis. The work comprises four steps: image quality assessment, DR-related lesion detection, screening, and referral. In the first step, we apply characterization techniques to assess image quality by two criteria: field definition and blur detection. In the second step, we extend previous work of our group that explored a unified method for detecting distinct lesions in eye-fundus images. In our approach to the detection of any lesion, we explore several alternatives for low-level (dense and sparse extraction) and mid-level (coding/pooling techniques for bags of visual words) representations, aiming at the development of an effective set of individual DR-related lesion detectors. The scores derived from each individual lesion detector, taken for each image, constitute a high-level description, the foundation for the third and fourth steps. Given a dataset described at this high level (scores from the individual detectors), we propose, in the third step, the use of machine-learning fusion techniques to develop a multi-lesion detection method. The high-level description is also exploited in the fourth step to develop an effective method for deciding whether a patient needs referral to an ophthalmologist within one year, avoiding overloading medical specialists with simple cases and giving priority to patients in an urgent state.
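The mid-level coding/pooling step mentioned above can be illustrated with its simplest variant: hard-assignment coding (each local descriptor maps to its nearest visual word) followed by sum pooling into a normalised histogram. The codebook and descriptors below are placeholders, and the thesis explores richer coding/pooling choices than this one:

```python
import numpy as np

def bovw_hard_pooling(descriptors, codebook):
    """Bag-of-visual-words sketch: assign each local descriptor (row) to its
    nearest codebook entry by squared Euclidean distance, then sum-pool the
    assignments into an L1-normalised histogram over visual words."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assignments = d2.argmin(axis=1)
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

The resulting fixed-length histogram is what a per-lesion detector would consume, regardless of how many local descriptors the image produced.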

Design and development of forensic techniques for synthetic image identification

Tokuda, Eric Keiji (2012)
Advisors: Hélio Pedrini, Anderson de Rezende Rocha / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / Abstract: The development of powerful and low-cost hardware devices, allied with great advances in content editing and authoring tools, has pushed the creation of computer generated images (CGI) to a degree of unrivaled realism. Differentiating a photorealistic computer generated image from a real photograph can be a difficult task for the naked eye. Digital forensics techniques can play a significant role in this task, and important research has been done by our community in this regard. Current approaches focus on single image features, aiming at spotting differences between real and computer generated images. However, given the current state of Computer Graphics technology, no single image characterization completely solves this problem. In this work, we present a comparative study of several current CGI-vs.-photograph approaches; create a large and heterogeneous dataset to serve as a training and validation database; implement thirteen representative methods from the literature; design and implement four data-fusion approaches; and compare the results of the individual methods with those of their combinations. Approximately 5,000 photographs and 5,000 CGIs with a large diversity of content and quality were collected, and all methods were compared in the same validation environment. The methods used in isolation achieved accuracies of up to 93%. The same methods, when combined through the proposed fusion schemes, achieved an accuracy of 97% (a 57% reduction of the error over the best result alone) / Master's / Computer Science / Master in Computer Science
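The fusion idea summarized above — combining the scores of several single-feature detectors so that their errors partially cancel — can be sketched as simple score-level late fusion. This is a hypothetical illustration, not the thesis's actual fusion scheme; the function names, weights, and threshold are assumptions:

```python
def fuse_scores(method_scores, weights=None):
    """Late fusion: weighted average of per-method probabilities
    that an image is computer generated (CGI)."""
    if weights is None:
        weights = [1.0] * len(method_scores)
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, method_scores)) / total


def classify(method_scores, threshold=0.5):
    """Label an image as CGI if the fused score exceeds the threshold."""
    return "CGI" if fuse_scores(method_scores) > threshold else "photo"


# Three hypothetical single-feature detectors disagree on one image;
# the fused score settles the decision.
print(classify([0.9, 0.8, 0.4]))  # fused score 0.7 -> "CGI"
print(classify([0.2, 0.3, 0.4]))  # fused score 0.3 -> "photo"
```

A weighted average is only the simplest combiner; a trainable combiner (e.g. a meta-classifier fit on the individual scores) can further reduce the error, which is consistent with the gain from 93% to 97% reported above.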
9

EMG Signal Decomposition Using Motor Unit Potential Train Validity

Parsaei, Hossein 09 1900 (has links)
Electromyographic (EMG) signal decomposition is the process of resolving an EMG signal into its component motor unit potential trains (MUPTs). The extracted MUPTs can aid in the diagnosis of neuromuscular disorders and the study of the neural control of movement, but only if they are valid trains. Before the decomposition results and the motor unit potential (MUP) shape and motor unit (MU) firing pattern information related to each active MU can be used for either clinical or research purposes, the validity of the extracted MUPTs must be confirmed. Existing MUPT validation methods are either time consuming or dependent on operator experience and skill. More importantly, they cannot be executed during automatic decomposition of EMG signals to assist in improving decomposition results. To overcome these issues, this thesis explores the possibility of developing automatic MUPT validation algorithms. Several methods were developed based on a combination of feature extraction techniques, cluster validation methods, supervised classification algorithms, and multiple classifier fusion techniques. The developed methods, in general, use the MU firing pattern consistency or the MUP-shape consistency of a MUPT, or both, to estimate its overall validity. The performance of the developed systems was evaluated using a variety of MUPTs obtained from the decomposition of several simulated and real intramuscular EMG signals. The methods that use only shape or only firing pattern information had higher generalization error than the systems that use both types of information. For the classifiers that use the MU firing pattern information of a MUPT to determine its validity, the accuracy for invalid trains decreases as the number of missed-classification errors in a train increases.
Likewise, for the methods that use the MUP-shape information of a MUPT to determine its validity, the classification accuracy for invalid trains decreases as the within-train similarity of the invalid trains increases. Of the systems that use both shape and firing pattern information, those that separately estimate MU firing pattern validity and MUP-shape validity and then fuse these two indices into an overall train validity using trainable fusion methods performed better than the scheme that estimates MUPT validity with a single classifier, especially for the real data used. Overall, the multi-classifier constructed using trainable logistic regression to aggregate base classifier outputs had the best performance, with overall accuracies of 99.4% and 98.8% for simulated and real data, respectively. The possibility of formulating an algorithm for automatically editing MUPTs contaminated with a high number of false-classification errors (FCEs) during decomposition was also investigated, and ultimately a robust method was developed for this purpose. Using a supervised classifier and the MU firing pattern information provided by each MUPT, the developed algorithm first determines whether a given train is contaminated by a high number of FCEs and needs to be edited. For contaminated MUPTs, the method uses both MU firing pattern and MUP shape information to detect MUPs that were erroneously assigned to the train. Evaluation based on simulated and real MU firing patterns shows that contaminated MUPTs could be detected with 84% and 81% accuracy for simulated and real data, respectively. For a given contaminated MUPT, the algorithm on average correctly classified around 92.1% of its MUPs. The effectiveness of using the developed MUPT validation systems and the MUPT editing method during EMG signal decomposition was investigated by integrating these algorithms into a certainty-based EMG signal decomposition algorithm.
Overall, the decomposition accuracy for 32 simulated and 30 real EMG signals was improved by 7.5% (from 86.7% to 94.2%) and 3.4% (from 95.7% to 99.1%), respectively. A significant improvement was also achieved in correctly estimating the number of MUPTs represented in a set of detected MUPs. The simulated and real EMG signals used were composed of 3–11 and 3–15 MUPTs, respectively.
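The best-performing scheme described above fuses two validity indices — MU firing-pattern validity and MUP-shape validity — with a trained logistic-regression combiner. A minimal sketch of that idea follows; the weights, bias, and threshold here are illustrative assumptions, not the trained values from the thesis:

```python
import math


def mupt_validity(firing_validity, shape_validity, w=(4.0, 4.0), b=-5.0):
    """Logistic-regression fusion of the two validity indices, each in [0, 1].
    The weights ``w`` and bias ``b`` are hypothetical; in the thesis they
    would be learned from labeled valid/invalid trains."""
    z = w[0] * firing_validity + w[1] * shape_validity + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> overall validity in (0, 1)


def is_valid_train(firing_validity, shape_validity, threshold=0.5):
    """Accept a MUPT when its fused validity clears the threshold."""
    return mupt_validity(firing_validity, shape_validity) >= threshold


# A train consistent in both firing pattern and MUP shape is accepted;
# a train with an erratic firing pattern is flagged as invalid.
print(is_valid_train(0.9, 0.9))  # True
print(is_valid_train(0.2, 0.9))  # False
```

Because the combiner outputs a continuous validity score rather than a hard label, a decomposition algorithm can also use it during processing, e.g. to decide which trains need the FCE-editing step described above.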
