
On Enhancement and Quality Assessment of Audio and Video in Communication Systems

Rossholm, Andreas, January 2014
The use of audio and video communication has increased exponentially over the last decade, going from speech over GSM to HD-resolution video conferencing between continents on mobile devices. As use becomes more widespread, the interest in delivering high-quality media increases, even on devices with limited resources. This includes development and enhancement of the communication chain, but also objective measurement of perceived quality. The focus of this thesis has been to perform enhancement within speech encoding and video decoding, to measure influence factors of audio and video performance, and to build methods to predict perceived video quality.

The audio enhancement part of this thesis addresses the well-known problem in the GSM system of an interfering signal generated by the switching nature of TDMA cellular telephony. Two solutions are given to suppress such interference internally in the mobile handset: the first uses subtractive noise cancellation employing correlators, the second a structure of IIR notch filters. Both solutions use control algorithms based on the state of the communication between the mobile handset and the base station.

The video enhancement part presents two post-filters, designed to improve the visual quality of highly compressed video streams from standard block-based video codecs by combating both blocking and ringing artifacts; the second post-filter also performs sharpening.

The third part addresses the problem of measuring audio and video delay, as well as the skew between them, also known as synchronization. The method is a black-box technique, which allows it to be applied to any audiovisual application, proprietary as well as open-standard, and to run on any platform and over any network connectivity.
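The second suppression approach mentioned above is a structure of IIR notch filters. As a rough illustration only (not the thesis's actual filter design or control algorithm), the sketch below applies a single second-order IIR notch tuned to the roughly 217 Hz GSM TDMA frame rate, the usual frequency of the switching "buzz"; the signal frequencies and amplitudes are made up for the demo:

```python
import numpy as np

def iir_notch(x, f0, fs, r=0.95):
    """Second-order IIR notch: zeros on the unit circle at +/-f0,
    poles at radius r just inside, giving a narrow rejection band."""
    w0 = 2 * np.pi * f0 / fs
    b = np.array([1.0, -2.0 * np.cos(w0), 1.0])        # feed-forward coeffs
    a = np.array([1.0, -2.0 * r * np.cos(w0), r * r])  # feedback coeffs
    y = np.zeros_like(x)
    for n in range(len(x)):               # direct-form difference equation
        y[n] = (b[0] * x[n]
                + (b[1] * x[n - 1] if n >= 1 else 0.0)
                + (b[2] * x[n - 2] if n >= 2 else 0.0)
                - (a[1] * y[n - 1] if n >= 1 else 0.0)
                - (a[2] * y[n - 2] if n >= 2 else 0.0))
    return y

def tone_amp(sig, f, fs):
    """Amplitude of a single tone in `sig`, by complex projection."""
    n = np.arange(len(sig))
    return 2.0 * abs(sig @ np.exp(-2j * np.pi * f * n / fs)) / len(sig)

fs = 8000.0
t = np.arange(fs) / fs                          # one second at 8 kHz
speech = np.sin(2 * np.pi * 440.0 * t)          # stand-in for the wanted signal
buzz = 0.5 * np.sin(2 * np.pi * 217.0 * t)      # TDMA frame-rate interference
y = iir_notch(speech + buzz, f0=217.0, fs=fs)
```

After the transient settles, the 217 Hz component is almost entirely removed while the 440 Hz component passes with near-unity gain; a real handset design would switch such filters in and out based on the communication state, as the abstract describes.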
The last part addresses no-reference (NR) bitstream video quality prediction using features extracted from the coded video stream. Several methods have been used and evaluated: Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Least Square Support Vector Machines (LS-SVM), showing high correlation with both MOS and objective video assessment methods as PSNR and PEVQ. The impact from temporal, spatial and quantization variations on perceptual video quality has also been addressed, together with the trade off between these, and for this purpose a set of locally conducted subjective experiments were performed.
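Of the listed predictors, MLR is the simplest to sketch: fit a linear map from bitstream features to a quality score by ordinary least squares. The feature names and the synthetic "MOS" relation below are invented for illustration and are not the thesis's actual feature set:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical bitstream features per clip: mean QP, bits per frame,
# mean motion-vector magnitude (ranges are made up).
X = rng.uniform([20, 50, 0], [45, 400, 8], size=(n, 3))
# Synthetic "MOS": drops with QP and motion, rises with bitrate, plus noise.
mos = 4.5 - 0.06 * X[:, 0] + 0.004 * X[:, 1] - 0.08 * X[:, 2] \
      + rng.normal(0, 0.1, n)

A = np.column_stack([np.ones(n), X])          # prepend intercept column
w, *_ = np.linalg.lstsq(A, mos, rcond=None)   # ordinary least squares
pred = A @ w
rmse = np.sqrt(np.mean((pred - mos) ** 2))
corr = np.corrcoef(pred, mos)[0, 1]
```

On such linearly generated data the MLR fit recovers the relation almost exactly; the appeal of ANN and LS-SVM in the thesis is precisely that real quality/feature relations are not this linear.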

Odor coding and memory traces in the antennal lobe of honeybee

Galan, Roberto Fernandez, 17 December 2003
Two major novel results are reported in this work. The first concerns olfactory coding and the second sensory memory; both phenomena are investigated in the brain of the honeybee as a model system. Concerning olfactory coding, I demonstrate that the neural dynamics in the antennal lobe describe odor-specific trajectories during stimulation that converge to odor-specific attractors. The time interval to reach these attractors is approximately 800 ms, regardless of odor identity and concentration. I show that support vector machines and, in particular, perceptrons provide a realistic and biological model of the interaction between the antennal lobe (the coding network) and the mushroom body (the decoding network). This model can also account for reaction times of about 300 ms and for concentration invariance of odor perception. Regarding sensory memory, I show that a single stimulation without reward induces changes of pairwise correlation between glomeruli in a Hebbian-like manner. I demonstrate that those changes of correlation suffice to retrieve the last stimulus presented in 2/3 of the bees studied; successful retrieval decays to 1/3 of the bees within the second minute after stimulation. In addition, a principal-component analysis of the spontaneous activity reveals that the dominant pattern of the network during the spontaneous activity after, but not before, stimulation reproduces the odor-induced activity pattern in 2/3 of the bees studied. One can therefore consider the odor-induced (changes of) correlation as traces of a short-term memory or as Hebbian reverberations.
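The perceptron reading of the mushroom body can be sketched in a few lines: a linear threshold unit trained on noisy glomerular activity patterns around two odor "attractors". Everything numeric here (20 glomeruli, noise level, two odors) is an invented toy setup, not the thesis's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_glom = 20                                   # hypothetical glomeruli count
proto_a = rng.uniform(0, 1, n_glom)           # attractor pattern, odor A
proto_b = rng.uniform(0, 1, n_glom)           # attractor pattern, odor B

def trials(proto, n, noise=0.2):
    """Noisy single-trial responses scattered around a prototype."""
    return proto + rng.normal(0, noise, (n, len(proto)))

X = np.vstack([trials(proto_a, 50), trials(proto_b, 50)])
y = np.array([1] * 50 + [-1] * 50)

# Classic perceptron rule: update weights only on misclassified trials.
w = np.zeros(n_glom)
b = 0.0
for _ in range(100):
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:
            w += yi * xi
            b += yi

acc = np.mean(np.sign(X @ w + b) == y)
```

Because the two prototypes are well separated relative to the trial noise, the perceptron converges and classifies the training trials essentially perfectly, which is the sense in which such a readout can be fast and concentration-invariant.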

Sensory input encoding and readout methods for in vitro living neuronal networks

Ortman, Robert L., 06 July 2012
Establishing and maintaining successful communication is a critical prerequisite for inducing and studying advanced computation in small-scale living neuronal networks. This work establishes a novel and effective method for communicating arbitrary "sensory" input information to cultures of living neurons, living neuronal networks (LNNs), consisting of approximately 20,000 rat cortical neurons plated on microelectrode arrays (MEAs) containing 60 electrodes. The sensory coding algorithm determines a set of effective codes (symbols), comprising different spatio-temporal patterns of electrical stimulation, to which the LNN consistently produces a unique response for each individual symbol. The algorithm evaluates random sequences of candidate electrical stimulation patterns for evoked-response separability and reliability via a support vector machine (SVM)-based method; employing the separability results as a fitness metric, a genetic algorithm then constructs subsets of highly separable symbols (input patterns). Sustainable input/output (I/O) bit rates of 16-20 bits per second with a 10% symbol error rate resulted for time periods ranging from approximately ten minutes to over ten hours. To further evaluate the resulting code sets' performance, I used the system to encode approximately ten hours of sinusoidal input into stimulation patterns selected by the algorithm and was able to recover the original signal with a normalized root-mean-square error of 20-30% using only the recorded LNN responses and trained SVM classifiers. Response variations observed over the course of several hours in the sine-wave I/O experiment suggest that the LNNs may retain some short-term memory of the previous input sample and undergo neuroplastic changes under repeated stimulation with the sensory coding patterns identified by the algorithm.
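The "separability as fitness" idea can be sketched with a deliberately simplified stand-in: instead of SVM cross-validation accuracy, the toy fitness below scores a candidate symbol subset by the smallest pairwise distance between its mean evoked responses, and a minimal elitist evolutionary loop searches over subsets. All sizes and the response vectors are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical mean evoked responses: 30 candidate stimulation patterns,
# each summarized by a 10-dimensional response feature vector.
responses = rng.normal(0, 1, (30, 10))
K = 5  # number of symbols wanted in the code set

def fitness(subset):
    """Separability proxy: smallest pairwise distance among chosen responses."""
    pts = responses[list(subset)]
    d = [np.linalg.norm(a - b) for i, a in enumerate(pts) for b in pts[i + 1:]]
    return min(d)

def mutate(subset):
    """Swap one member for a random candidate, keeping the subset size K."""
    s = set(subset)
    s.discard(rng.choice(list(s)))
    s.add(int(rng.integers(0, len(responses))))
    while len(s) < K:
        s.add(int(rng.integers(0, len(responses))))
    return tuple(sorted(s))

pop = [tuple(sorted(rng.choice(30, K, replace=False))) for _ in range(20)]
start = max(fitness(p) for p in pop)
for _ in range(50):                    # keep the best half, mutate to refill
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(p) for p in pop[:10]]
best = max(fitness(p) for p in pop)
```

Elitism guarantees the best fitness never decreases; in the thesis the same search structure is driven by a far more meaningful fitness, the SVM-measured evoked-response separability.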

Geotechnical Site Characterization and Liquefaction Evaluation Using Intelligent Models

Samui, Pijush
Site characterization is an important task in geotechnical engineering. In situ tests based on the standard penetration test (SPT), the cone penetration test (CPT), and shear wave velocity surveys are popular among geotechnical engineers. Characterizing a site from a finite number of in-situ test data is an imperative task in probabilistic site characterization. These methods have been used to design future soil sampling programs for a site and to specify the soil stratification. It is never possible to know the geotechnical properties at every location beneath an actual site because, to do so, one would need to sample and/or test the entire subsurface profile. Therefore, the main objective of site characterization models is to predict subsurface soil properties with minimal in-situ test data. The prediction of soil properties is a difficult task due to uncertainties: spatial variability, measurement 'noise', measurement and model bias, and statistical error due to limited measurements are all sources of uncertainty.

Liquefaction of soil is another major problem in geotechnical earthquake engineering. It is defined as the transformation of a granular material from a solid to a liquefied state as a consequence of increased pore-water pressure and reduced effective stress. The generation of excess pore pressure under undrained loading conditions is a hallmark of all liquefaction phenomena. The phenomenon was brought to the attention of engineers especially after the Niigata (1964) and Alaska (1964) earthquakes. Liquefaction can cause building settlement or tipping, sand boils, ground cracks, landslides, dam instability, highway embankment failures, and other hazards. Such damage is of great concern to public safety and is economically significant. Site-specific evaluation of the liquefaction susceptibility of sandy and silty soils is the first step in liquefaction hazard assessment.
Many methods (intelligent models as well as simple methods such as that of Seed and Idriss, 1971) have been suggested to evaluate liquefaction susceptibility based on large datasets from sites where soil has or has not liquefied. The rapid advance in information processing systems in recent decades has directed engineering research towards the development of intelligent models that can model natural phenomena automatically. In an intelligent model, a process of training is used to build up a model of the particular system, from which one hopes to deduce responses of the system for situations that have yet to be observed. Intelligent models learn the input-output relationship from the data itself; the quantity and quality of the data govern the performance of an intelligent model. The objective of this study is to develop intelligent models [geostatistics, artificial neural networks (ANN), and support vector machines (SVM)] to estimate the corrected standard penetration test value, Nc, in the three-dimensional (3D) subsurface of Bangalore. The database consists of 766 boreholes spread over a 220 sq km area, with several SPT N values (uncorrected blow counts) in each, for a total of 3015 N values. To obtain the corrected blow counts, Nc, various corrections, such as for overburden stress, borehole size, sampler type, hammer energy, and connecting rod length, have been applied to the raw N values. Using this large database of Nc values, three geostatistical models (simple kriging, ordinary kriging, and disjunctive kriging) have been developed. Simple and ordinary kriging produce linear estimators, whereas disjunctive kriging produces a nonlinear estimator. Knowledge of the semivariogram of the Nc data is used in kriging theory to estimate values at points in the subsurface of Bangalore where field measurements are not available.
The capability of disjunctive kriging as a nonlinear estimator and as an estimator of conditional probability is explored. A cross-validation (Q1 and Q2) analysis is also done for the developed simple, ordinary, and disjunctive kriging models; the results indicate that the disjunctive kriging model performs better than the simple and ordinary kriging models. This study also describes two ANN modelling techniques applied to predict Nc at any point in the 3D subsurface of Bangalore. The first technique uses a four-layer feed-forward backpropagation (BP) model to approximate the function Nc = f(x, y, z), where x, y, z are the coordinates in the 3D subsurface of Bangalore. The second technique uses a generalized regression neural network (GRNN), trained with suitable spread(s), to approximate the same function. In the BP model, the transfer functions used in the first and second hidden layers are tansig and logsig, respectively, with logsig in the output layer; the maximum number of epochs is set to 30000, and a Levenberg-Marquardt algorithm is used for training. The performance of the models obtained using both techniques is assessed in terms of prediction accuracy: the BP ANN model outperforms the GRNN model and all kriging models. An SVM model, firmly grounded in statistical learning theory and using regression with an ε-insensitive loss function, has also been adopted to predict Nc at any point in the 3D subsurface of Bangalore. The SVM implements the structural risk minimization principle (SRMP), which has been shown to be superior to the more traditional empirical risk minimization principle (ERMP) employed by many other modelling techniques. The present study also highlights the advantage of SVM over the developed geostatistical models (simple, ordinary, and disjunctive kriging) and ANN models.
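The ordinary-kriging step described above reduces to solving one small linear system per prediction point, with the semivariogram supplying the coefficients. The sketch below uses an exponential semivariogram with made-up sill/range values and a handful of invented borehole coordinates and Nc values, purely to show the mechanics:

```python
import numpy as np

def gamma(h, sill=1.0, rng_=50.0):
    """Exponential semivariogram (zero nugget); parameters are assumed."""
    return sill * (1.0 - np.exp(-h / rng_))

def ordinary_kriging(coords, values, target):
    """Ordinary kriging estimate at `target` from sampled (coords, values)."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))       # kriging system with the unbiasedness
    A[:n, :n] = gamma(d)              # constraint (weights sum to one)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(A, b)[:n]     # kriging weights
    return w @ values

# Hypothetical corrected SPT blow counts at 2-D borehole locations (m).
coords = np.array([[0., 0.], [30., 5.], [10., 40.], [50., 50.], [25., 25.]])
nc = np.array([12.0, 18.0, 15.0, 22.0, 16.0])
pred = ordinary_kriging(coords, nc, np.array([20.0, 20.0]))
```

A useful sanity check is kriging's exactness: with zero nugget, predicting at a sampled borehole returns exactly the observed value.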
Further in this thesis, liquefaction susceptibility is evaluated from SPT, CPT, and Vs data using BP-ANN and SVM. Intelligent models (based on ANN and SVM) are developed for prediction of liquefaction susceptibility using SPT data from the 1999 Chi-Chi earthquake, Taiwan, taken from the work of Hwang and Yang (2001). Two models (MODEL I and MODEL II) are developed. In MODEL I, the cyclic stress ratio (CSR) and corrected SPT values (N1)60 are used to predict liquefaction susceptibility; in MODEL II, only the peak ground acceleration (PGA) and (N1)60 are used. The generalization capability of MODEL II is further examined using different case histories available globally (global SPT data) from the work of Goh (1994). This study also examines the capabilities of ANN and SVM to predict the liquefaction susceptibility of soils from CPT data obtained from the 1999 Chi-Chi earthquake; for this determination, both ANN and SVM use the classification technique, and the CPT data are taken from the work of Ku et al. (2004). In MODEL I, cone tip resistance (qc) and CSR values are used for prediction (with both ANN and SVM); in MODEL II, only PGA and qc. The developed MODEL II is again applied to different case histories available globally (global CPT data) from the work of Goh (1996). Intelligent models (ANN and SVM) are also adopted for liquefaction susceptibility prediction based on shear wave velocity (Vs), with data collected from the work of Andrus and Stokoe (1997) and the same procedures as for SPT and CPT. SVM outperforms the ANN model for all three datasets (SPT, CPT, and Vs), and the CPT method gives better results than SPT and Vs for both ANN and SVM models.
For CPT and SPT, two input parameters {PGA and qc or (N1)60} are sufficient to determine liquefaction susceptibility using the SVM model. In this study, an attempt has also been made to evaluate geotechnical site characterization by carrying out in situ tests using different techniques: CPT, SPT, and multichannel analysis of surface waves (MASW). For this purpose a typical site was selected containing both a man-made homogeneous embankment and natural ground. At this site, in situ tests (SPT, CPT, and MASW) were carried out in different ground conditions and the results compared: three continuous CPT profiles, fifty-four SPT tests, and nine MASW profiles with depth, covering both the embankment and the natural ground. Relationships have been developed between the Vs, (N1)60, and qc values for this specific site; from the limited test results, a good correlation between qc and Vs was found. Liquefaction susceptibility is evaluated using the in situ (N1)60, qc, and Vs data with the ANN and SVM models, and is shown to compare well with the Idriss and Boulanger (2004) approach based on SPT data. An SVM model has also been adopted to determine the overconsolidation ratio (OCR) from piezocone data; a sensitivity analysis was performed to investigate the relative importance of each input parameter, and the SVM model outperforms the available methods for OCR prediction.

Geometric modeling and characterization of the circle of Willis

Bogunovic, Hrvoje, 28 September 2012
Stroke is among the leading causes of morbidity and mortality in the developed countries. This has motivated a search for configurations of the vasculature that are assumed to be associated with the development of vascular disease. In the first contribution, we improve a vascular segmentation method to achieve robustness in segmenting images coming from different imaging modalities and clinical centers, and we provide exhaustive segmentation validation. Once the vasculature is successfully segmented, in the second contribution we propose a methodology to extensively characterize the geometry of the internal carotid artery (ICA). This includes the development of a method to automatically identify the ICA from the segmented vascular tree. Finally, in the third contribution, this automatic identification is generalized to a collection of vessels, including their connectivity and topological relationships. Identifying the corresponding vessels across a population enables comparison of their geometry using the methodology introduced for the characterization of the ICA.

Analysis and classification of skin lesion images by color, shape, and texture attributes using support vector machines

Soares, Heliana Bezerra, 22 February 2008
Skin cancer is the most common of all cancers, and the increase in its incidence is due, in part, to people's behavior with respect to sun exposure. In Brazil, non-melanoma skin cancer is the most incident in the majority of regions. Dermatoscopy and videodermatoscopy are the main types of examination for the diagnosis of dermatological diseases of the skin. The field involving the use of computational tools to support or follow up medical diagnosis of dermatological lesions is still very recent, and several methods have been proposed for the automatic classification of skin pathology using images. The present work presents a new intelligent methodology for the analysis and classification of skin cancer images, based on digital image processing techniques for the extraction of color, shape, and texture characteristics, using the wavelet packet transform (WPT) and the machine learning technique called the support vector machine (SVM). The wavelet packet transform is applied for the extraction of texture characteristics from the images; it consists of a set of basis functions that represent the image in different frequency bands, each with a distinct resolution corresponding to each scale. In addition, the color characteristics of the lesion, which depend on a visual context influenced by the surrounding colors, and the shape attributes, through Fourier descriptors, are also computed. The support vector machine, based on the principle of structural risk minimization from statistical learning theory, is used for the classification task. The SVM constructs optimal hyperplanes that maximize the margin of separation between classes; the generated hyperplane is determined by a subset of the class points, called support vectors. For the database used in this work, the results showed good performance, with a global accuracy of 92.73% for melanoma and 86% for non-melanoma and benign lesions. The extracted descriptors, allied to the SVM classifier, yield a method capable of recognizing and classifying the analyzed skin lesions.
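The building block of a wavelet packet tree is a single analysis stage that splits an image into four subbands; the packet transform then recursively re-splits every subband. As a hedged illustration (one Haar level only, not the thesis's full WPT or its actual feature set), subband energies already behave like texture descriptors, separating smooth from rough regions:

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar decomposition into LL, LH, HL, HH subbands,
    the step a wavelet packet tree applies recursively to every subband."""
    a = (img[0::2] + img[1::2]) / 2.0          # vertical average
    d = (img[0::2] - img[1::2]) / 2.0          # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def texture_features(img):
    """Mean energy of each subband, a common texture descriptor."""
    return [float(np.mean(band ** 2)) for band in haar2d(img)]

rng = np.random.default_rng(3)
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # smooth ramp
rough = rng.uniform(0, 1, (64, 64))                               # noisy texture

f_smooth = texture_features(smooth)
f_rough = texture_features(rough)
```

The detail-band energies (LH, HL, HH) are orders of magnitude larger for the rough patch; vectors of such energies, computed over all packet subbands, are the kind of texture input an SVM can classify.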

Classification of kinetic data from gait initiation using artificial neural networks and support vector machines

Takáo, Thales Baliero, 01 July 2015
The aim of this work was to assess the performance of computational methods in classifying ground reaction force (GRF) data to identify the surface on which gait initiation was performed. Twenty-five subjects were evaluated while performing the gait initiation task in two experimental conditions: barefoot on a hard surface and barefoot on a soft surface (foam). Center of pressure (COP) variables were calculated from the GRF, and principal component analysis was used to retain the main features of the medial-lateral, anterior-posterior, and vertical force components; the principal components representing each force component were retained using the broken-stick test. Support vector machines and multilayer neural networks were then trained, the latter with the backpropagation and Levenberg-Marquardt algorithms, to classify the GRF. The classifier models were evaluated on accuracy and area under the ROC curve, estimated by bootstrap cross-validation with 500 resampled databases. The support vector machine with a linear kernel and margin parameter equal to 100 produced the best result, using the medial-lateral force as input: area under the ROC curve of 0.7712 and accuracy of 0.7974. These results differed significantly from the models using the principal components of the vertical and anterior-posterior forces, so the choice of GRF component, as well as the classifier model, directly influences classification performance.
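The PCA-plus-broken-stick feature reduction used above is easy to sketch: keep the leading components whose variance proportion exceeds the expected share of the corresponding piece of a randomly broken stick. The force-curve data below is synthetic (two underlying waveforms plus noise), invented so that the criterion should retain exactly two components:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 50)
# Two waveforms stand in for the dominant force patterns (assumed shapes).
basis = np.vstack([np.sin(2 * np.pi * t), np.cos(4 * np.pi * t)])
scores = rng.normal(0.0, 1.0, (100, 2)) * np.array([3.0, 1.5])
X = scores @ basis + rng.normal(0.0, 0.2, (100, 50))   # trials x samples

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
eig = np.linalg.svd(Xc, compute_uv=False) ** 2 / (len(X) - 1)
prop = eig / eig.sum()                       # variance proportions, descending

# Broken-stick criterion: expected proportion of the k-th largest piece.
p = len(prop)
bs = np.array([sum(1.0 / i for i in range(k, p + 1)) / p
               for k in range(1, p + 1)])
retained = int(np.argmax(prop < bs))         # leading components above the curve
```

Here the two signal components stand well above the broken-stick curve while the noise components fall below it, so `retained` picks out exactly the planted structure.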

Detection of masses in mammographic images using cellular neural networks, geostatistical functions, and support vector machines

Sampaio, Wener Borges de, 31 August 2009
Made available in DSpace on 2016-08-17T14:53:04Z (GMT). No. of bitstreams: 1 Werner Borges de Sampaio.pdf: 2991418 bytes, checksum: 1c3fd03c2e6ffea37ed00740d75d2ffd (MD5) Previous issue date: 2009-08-31 / Breast cancer presents high occurrence frequency among the world population and its psychological effects alter the perception of the patient s sexuality and the own personal image. Mammography is an x-ray of the mamma that allows the precocious detection of cancer, since it is capable to showing lesions in their initial stages, typically very small lesions in the order of millimeters. The processing of mammographic images has been contributing to the detection and the diagnosis of mammary nodules, being an important tool, because it reduces the degree of uncertainty of the diagnosis, providing a supplementary source of information to the specialist. This work presents a computational methodology that aids the specialist in the detection of breast masses. The first step of the methodology aims at improvement the mammographic image, which consists of removal of unwanted objects, reduction of noise and enhancement of the breast internal structures. Then, Cellular Neural Networks are used to segment areas suspected of containing masses. These regions have their shapes analyzed by geometry descriptors (eccentricity, circularity, compactness, circular disproportion and circular density) and their textures are analyzed using geostatistical functions (Ripley's K function, Moran's and Geary's indices). Support Vector Machine were trained and used to classify the candidate regions in one of the classes, masses or no-mass, with sensibility of 80.00%, specificity of 85.68%, acuracy of 84.62%, a rate of 0.84 false positive for image and 0.20 false negative for image and an area under the curve ROC of 0.827. 
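The texture analysis in the abstract above relies on spatial autocorrelation statistics such as Moran's index. As a rough, self-contained sketch (not code from the thesis; the function name and the 4-neighbour binary weighting are our illustrative choices), Moran's I over a 2-D image patch could be computed as:

```python
def morans_i(patch):
    """Moran's I for a 2-D grid (list of lists) with 4-neighbour binary weights.

    I = (N / W) * sum_ij w_ij (x_i - mean)(x_j - mean) / sum_i (x_i - mean)^2
    Values near +1 indicate smooth patches, values near -1 checkerboard-like texture.
    """
    rows, cols = len(patch), len(patch[0])
    n = rows * cols
    mean = sum(sum(row) for row in patch) / n
    dev = [[v - mean for v in row] for row in patch]
    denom = sum(d * d for row in dev for d in row)
    if denom == 0:
        return 0.0  # constant patch: autocorrelation is undefined, report 0
    num, w = 0.0, 0
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    # each in-bounds neighbour pair contributes with weight 1
                    num += dev[i][j] * dev[ni][nj]
                    w += 1
    return (n / w) * (num / denom)
```

A perfect 0/1 checkerboard yields I = -1, while a smooth intensity gradient yields a strongly positive value, which is the kind of contrast that makes the statistic useful as a texture descriptor for candidate regions.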
459

Feature selection based segmentation of multi-source images : application to brain tumor segmentation in multi-sequence MRI / Segmentation des images multi-sources basée sur la sélection des attributs : application à la segmentation des tumeurs cérébrales en IRM

Zhang, Nan 12 September 2011 (has links)
Multi-spectral images have the advantage of providing complementary information that helps resolve ambiguities. The challenge, however, is how to exploit this multi-spectral information effectively. In this thesis, our study focuses on the fusion of multi-spectral images by extracting the most useful features so as to obtain the best segmentation at the lowest computational cost. A Support Vector Machine (SVM) classification integrated with feature selection in a kernel space is proposed.
The selection criterion is defined by the kernel class separability. Based on this SVM classification, a framework to follow up brain tumor evolution is proposed, which consists of the following steps: learn the brain tumors and select the features from the patient's first magnetic resonance imaging (MRI) examination; automatically segment the tumor in new data using a multi-kernel SVM based classification; refine the tumor contour with a region-growing technique; and possibly carry out an adaptive training. The proposed system was tested on 13 patients with 24 examinations, comprising 72 MRI sequences and 1728 images. Compared with the doctors' manual traces as ground truth, the average classification accuracy reaches 98.9%. The system uses several novel feature selection methods to test the integration of feature selection and SVM classifiers. Compared also with the traditional SVM, Fuzzy C-means, a neural network and an improved level-set method, the segmentation results and their quantitative analysis demonstrate the effectiveness of the proposed system.
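The abstract describes ranking features by a class separability criterion before SVM classification. The thesis measures separability in the kernel space; as a simpler illustrative stand-in (function names and the plain Fisher criterion are our assumptions, not the thesis method), a per-feature separability score and top-k selection could look like:

```python
def fisher_scores(X, y):
    """Per-feature Fisher criterion (mu0 - mu1)^2 / (var0 + var1) for two classes.

    X is a list of samples (each a list of feature values); y holds labels 0 or 1.
    Higher scores mean the feature separates the two classes better.
    """
    idx0 = [i for i, c in enumerate(y) if c == 0]
    idx1 = [i for i, c in enumerate(y) if c == 1]
    scores = []
    for f in range(len(X[0])):
        a = [X[i][f] for i in idx0]
        b = [X[i][f] for i in idx1]
        mu_a, mu_b = sum(a) / len(a), sum(b) / len(b)
        var_a = sum((v - mu_a) ** 2 for v in a) / len(a)
        var_b = sum((v - mu_b) ** 2 for v in b) / len(b)
        # small epsilon guards against zero variance in both classes
        scores.append((mu_a - mu_b) ** 2 / (var_a + var_b + 1e-12))
    return scores

def select_top_k(scores, k):
    """Indices of the k highest-scoring features, best first."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
```

Only the selected feature columns would then be fed to the downstream SVM, which is the cost-saving idea the abstract emphasizes.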
460

Visual interpretation of hand postures for human-machine interaction / Interprétation visuelle de gestes pour l'interaction homme-machine

Nguyen, Van Toi 15 December 2015 (has links)
Nowadays, people want to interact with machines more naturally. One of the most natural communication channels for humans is the hand gesture. Among the approaches found in the literature, the vision-based one has attracted many researchers because it does not require the user to wear any extra device. One of the key problems to solve is hand posture recognition in RGB images, since it can be used directly or integrated into a multi-cue hand gesture recognition system. The main challenges of this problem are illumination differences, cluttered backgrounds, background changes, high intra-class variation, and high inter-class similarity. This thesis proposes a hand posture recognition system consisting of two phases: hand detection and hand posture recognition. In the hand detection step, we employ a Viola-Jones detector with the proposed internal Haar-like features and an AdaBoost cascade classifier. The proposed detector works in real time on frames captured in real, complex environments, avoids unwanted background effects, and significantly outperforms the original Viola-Jones detector based on traditional Haar-like features. In the hand posture recognition step, we propose a new hand representation based on the kernel descriptor (KDES), a generic descriptor that is very effective for object classification. However, this descriptor is not robust to scale change and is not invariant to orientation. We therefore propose three improvements to overcome these problems: adaptive patch generation, normalization of gradient orientation within patches, and a spatial construction specific to the structure of the hand. These improvements make KDES robust to scale change, make the patch-level features invariant to rotation, and make the final representation fit the hand structure.
Based on these improvements, the proposed method obtains better results than the original KDES and existing descriptors. The integration of these two methods into an application demonstrates, in real conditions, the effectiveness, usefulness and feasibility of deploying such a system for hand-gesture-based human-robot interaction.
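The Viola-Jones detection step mentioned above evaluates Haar-like rectangle features in constant time using an integral image. As a minimal sketch of that underlying mechanism (the function names are ours, and this shows only a plain two-rectangle feature, not the thesis's internal Haar-like variant):

```python
def integral_image(img):
    """Integral image with a zero top row and left column for easy lookups:
    ii[i][j] = sum of img over rows < i and cols < j."""
    rows, cols = len(img), len(img[0])
    ii = [[0] * (cols + 1) for _ in range(rows + 1)]
    for i in range(rows):
        for j in range(cols):
            ii[i + 1][j + 1] = img[i][j] + ii[i][j + 1] + ii[i + 1][j] - ii[i][j]
    return ii

def rect_sum(ii, top, left, h, w):
    """Sum of the h-by-w rectangle at (top, left) in four lookups."""
    return (ii[top + h][left + w] - ii[top][left + w]
            - ii[top + h][left] + ii[top][left])

def haar_two_rect(ii, top, left, h, w):
    """Horizontal two-rectangle Haar-like feature (w must be even):
    sum of the left half minus sum of the right half."""
    half = w // 2
    return rect_sum(ii, top, left, h, half) - rect_sum(ii, top, left + half, h, half)
```

Because every feature reduces to a handful of table lookups regardless of rectangle size, a cascade can reject most non-hand windows cheaply, which is what makes real-time detection on full frames feasible.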
