51

Métodos de agrupamento na análise de dados de expressão gênica / Clustering methods in the analysis of gene expression data

Rodrigues, Fabiene Silva 16 February 2009 (has links)
Clustering techniques are frequently used in the literature to analyze data in many fields of application, and the main objective of this work is to study such techniques. Among the large number of clustering techniques in the literature, this work concentrates on the Self-Organizing Map (SOM), k-means, k-medoids and Expectation-Maximization (EM) algorithms, applied to gene expression data. Gene expression analysis identifies, among other possibilities, which genes are differentially expressed in the synthesis of proteins associated with normal and diseased tissues. The purpose is to compare these methods with respect to their efficiency in identifying groups of similar elements, highlighting the advantages and disadvantages of each. The methods were tested by simulation and then applied to a real data set.
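As a concrete illustration of the comparison this abstract describes, here is a minimal sketch, not the thesis's own code, that runs k-means and EM (as a Gaussian mixture) on simulated expression profiles; the data shape, cluster count and parameters are assumptions made for the example.

```python
# Minimal sketch: k-means vs. EM clustering on simulated gene expression data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
# Simulate 300 genes x 20 samples drawn from 3 expression profiles (assumed setup).
centers = rng.normal(0, 3, size=(3, 20))
labels = rng.integers(0, 3, size=300)
X = centers[labels] + rng.normal(0, 1, size=(300, 20))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
em = GaussianMixture(n_components=3, random_state=0).fit(X)

print("k-means ARI:", adjusted_rand_score(labels, km.labels_))
print("EM ARI:     ", adjusted_rand_score(labels, em.predict(X)))
```

The adjusted Rand index compares the recovered clusters against the simulated ground truth, mirroring the simulation-based testing mentioned in the abstract.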
52

Detecção e diagnóstico de falhas em robôs manipuladores via redes neurais artificiais. / Fault detection and diagnosis in robotic manipulators via artificial neural networks.

Renato Tinós 11 February 1999 (has links)
In this work, a new approach for fault detection and diagnosis (FDD) in robotic manipulators is presented. A faulty robot can cause serious damage and put at risk the people in its work environment. Researchers have usually proposed fault detection and diagnosis schemes based on the mathematical model of the system; however, modeling errors can obscure the effects of faults and can be a source of false alarms. Here, two artificial neural networks are employed in a fault detection and diagnosis system for robotic manipulators. A multilayer perceptron trained with the backpropagation algorithm is used to reproduce the dynamic behavior of the manipulator. The perceptron outputs are compared with the measured variables, generating the residual vector. A radial basis function network is then used to classify the residual vector, providing fault isolation. Four different algorithms are employed to train this network. The first uses regularization to reduce the flexibility of the model. The second also employs regularization, but instead of a single penalty term, each radial unit has an individual penalty term. The third employs subset selection to choose the radial units from the training patterns. The fourth employs Kohonen's self-organizing map to fix the radial unit centers near the cluster centers of the patterns. Simulations using a two-link manipulator and a Puma 560 manipulator are presented, demonstrating that the system can correctly detect and isolate faults that occur in non-trained pattern sets.
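The residual-generation scheme in this abstract lends itself to a short sketch. The toy version below, under assumed dynamics and fault models, trains an MLP to mimic the manipulator and classifies the residuals; scikit-learn has no radial basis function network, so a second MLP stands in for the RBF classifier.

```python
# Hedged sketch of residual-based fault detection (toy data, not the thesis's code).
import numpy as np
from sklearn.neural_network import MLPRegressor, MLPClassifier

rng = np.random.default_rng(1)
# Toy "dynamics": next joint state from current state + torque (assumed model).
X_dyn = rng.normal(size=(2000, 4))              # [q1, q2, tau1, tau2]
y_dyn = X_dyn @ rng.normal(size=(4, 2)) * 0.1   # next [q1, q2]

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=0).fit(X_dyn, y_dyn)

# Residuals: measured minus predicted; faults shift them systematically.
residuals = y_dyn - model.predict(X_dyn)
fault_offsets = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5]])  # assumed fault effects
fault_labels = rng.integers(0, 3, size=len(residuals))
residuals_faulty = residuals + fault_offsets[fault_labels]

# An MLP stands in for the thesis's RBF network, purely to show the
# residual-classification (fault isolation) step.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(residuals_faulty, fault_labels)
print("isolation accuracy:", clf.score(residuals_faulty, fault_labels))
```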
53

Intelligent information processing in building monitoring systems and applications

Skön, J.-P. (Jukka-Pekka) 10 November 2015 (has links)
Abstract: Global warming has set in motion a trend for cutting energy costs to reduce the carbon footprint. Reducing energy consumption, cutting greenhouse gas emissions and eliminating energy wastage are among the main goals of the European Union (EU). The buildings sector is the largest user of energy and CO2 emitter in the EU, accounting for an estimated 40% of total consumption. According to the Intergovernmental Panel on Climate Change, 30% of the energy used in buildings could be saved with net economic benefits by 2030. At the same time, indoor air quality is increasingly recognized as a distinct health hazard. Because of these two factors, energy efficiency and healthy housing have become active topics in international research. The main aims of this thesis were to study and develop a wireless building monitoring and control system that produces valuable information and services for end-users by means of computational methods. The technology developed here relies heavily on building automation systems (BAS) and on parts of the concept termed the "Internet of Things" (IoT). The data refining process used, knowledge discovery from data (KDD), comprises data acquisition, pre-processing, modeling, visualization, interpretation of the results, and sharing of the new information with end-users. The thesis builds on five publications that describe the developed monitoring system, demonstrate the data refining process in different settings (detached houses, apartment buildings and school buildings) and give examples of computational methods suited to each stage of the process; four examples of data analysis and knowledge deployment are presented. The results of the case studies show that innovative use of computational methods provides a good basis for researching and developing new information services, and that the data mining methods used, such as regression and clustering combined with efficient data pre-processing, have great potential for processing large amounts of multivariate data effectively. The innovative and effective use of digital information is a key element in the creation of new information services. The service business in the building sector is significant, and plenty of new possibilities await capable and advanced companies and organizations. End-users, such as building maintenance personnel and residents, should be involved at an early stage of the data refining process; further advantages can be gained through co-operation between companies and organizations, by utilizing computational methods for data processing to produce valuable information, and by using the latest technologies in the research and development of new innovations.
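To make the KDD chain described above concrete, here is a hedged sketch with simulated sensor data; all column names, thresholds and model choices are assumptions, not details of the thesis's system.

```python
# Sketch of a KDD-style pipeline: pre-process indoor-air measurements,
# cluster operating conditions, fit a regression model (assumed data).
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
# Simulated hourly measurements (columns are illustrative assumptions).
df = pd.DataFrame({
    "temp_C": rng.normal(21, 2, 500),
    "co2_ppm": rng.normal(600, 150, 500),
    "humidity_pct": rng.normal(35, 8, 500),
})
df["energy_kwh"] = 0.4 * df["temp_C"] + 0.01 * df["co2_ppm"] \
    + rng.normal(0, 1, 500)

# Pre-processing: drop obvious sensor glitches, then standardize.
df = df[(df["co2_ppm"] > 0) & (df["humidity_pct"].between(0, 100))]
X = StandardScaler().fit_transform(df[["temp_C", "co2_ppm", "humidity_pct"]])

# Clustering groups similar operating conditions; regression predicts energy use.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
reg = LinearRegression().fit(X, df["energy_kwh"])
print("cluster sizes:", np.bincount(clusters))
print("R^2:", reg.score(X, df["energy_kwh"]))
```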
54

CLUSTERING AND VISUALIZATION OF GENOMIC DATA

Sutharzan, Sreeskandarajan 26 July 2019 (has links)
No description available.
55

Využití neuronových sítí v klasifikaci srdečních onemocnění / Use of neural networks in classification of heart diseases

Skřížala, Martin January 2008 (has links)
This thesis discusses the design and use of artificial neural networks as ECG classifiers and detectors of heart disease in the ECG signal, especially myocardial ischaemia. Changes in the ST-T complexes are an important indicator of ischaemia in the ECG signal; different types of ischaemia manifest particularly as depression or elevation of the ST segments and changes of the T wave. The first part of the thesis covers the theoretical background and describes the changes in the ECG signal arising with different types of ischaemia. The second part deals with ECG signal pre-processing for classification by a neural network: filtration, QRS detection, ST-T detection and principal component analysis. The last part describes the design of a myocardial ischaemia detector based on artificial neural networks, using two types of network (back-propagation and self-organizing map), and reports the results of the algorithms used. The appendix contains a detailed description of each neural network, a description of the program for classification of ECG signals by ANN, and a description of the program's functions. The program was developed in Matlab R2007b.
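A brief sketch of the pre-processing-then-classify chain the abstract outlines. The thesis's implementation is in Matlab; this Python version with simulated ST-T segments is only illustrative, and the ischaemia model is a crude assumption.

```python
# Sketch: PCA-reduced ST-T segments classified as normal vs. ischaemic by an MLP.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Toy "ST-T segments": 400 beats x 80 samples; ischaemic beats get an
# ST-depression-like offset (a crude assumption for illustration).
beats = rng.normal(0, 0.1, size=(400, 80))
labels = rng.integers(0, 2, size=400)
beats[labels == 1, 20:50] -= 0.15   # simulated ST depression

X = PCA(n_components=8).fit_transform(beats)  # compact ST-T representation
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```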
56

Combining Multivariate Statistical Methods and Spatial Analysis to Characterize Water Quality Conditions in the White River Basin, Indiana, U.S.A.

Gamble, Andrew Stephan 25 February 2011 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / This research performs a comparative study of techniques for combining spatial data and multivariate statistical methods to characterize water quality conditions in a river basin. The study was performed on the White River basin in central Indiana and uses sixteen physical and chemical water quality parameters collected from 44 monitoring sites, along with various spatial data related to land use and land cover, soil characteristics, terrain characteristics, eco-regions, etc. Parameters related to the spatial data were analyzed using ArcHydro tools and included in the multivariate analysis methods to create classification equations relating spatial and spatio-temporal attributes of the watershed to water quality data at monitoring stations. The study compares the use of various statistical estimates (mean, geometric mean, trimmed mean, and median) of monitored water quality variables to represent annual and seasonal water quality conditions. The relationship between these estimates and the spatial data is then modeled via linear and non-linear multivariate methods. The linear multivariate method uses a combination of principal component analysis, cluster analysis, and discriminant analysis, whereas the non-linear method uses a combination of Kohonen Self-Organizing Maps, cluster analysis, and Support Vector Machines. The final models were tested with recent and independent data collected from stations in the Eagle Creek watershed, within the White River basin. In 6 of 20 models the Support Vector Machine classified the Eagle Creek stations more accurately, and in 2 of 20 models the Linear Discriminant Analysis model achieved better results; neither the linear nor the non-linear models had an apparent advantage in the remaining 12 models. This research provides insight into the variability and uncertainty in the interpretation of the various statistical estimates and statistical models when water quality monitoring data are combined with spatial data to characterize general spatial and spatio-temporal trends.
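The linear-versus-non-linear comparison can be sketched compactly. The toy example below contrasts a PCA + LDA pipeline with an RBF-kernel SVM on fabricated station data; the thesis additionally uses cluster analysis and Kohonen Self-Organizing Maps, which are omitted here for brevity.

```python
# Sketch: linear (PCA + LDA) vs. non-linear (RBF SVM) station classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
# 44 stations x 16 water-quality parameters, 3 assumed condition classes.
X = rng.normal(size=(44, 16))
y = rng.integers(0, 3, size=44)
X += y[:, None] * 0.8   # separate the classes a little

linear = make_pipeline(StandardScaler(), PCA(n_components=5),
                       LinearDiscriminantAnalysis())
nonlinear = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

print("PCA+LDA CV accuracy:", cross_val_score(linear, X, y, cv=4).mean())
print("SVM CV accuracy:    ", cross_val_score(nonlinear, X, y, cv=4).mean())
```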
57

適用於財務舞弊偵測之決策支援系統的對偶方法 / A dual approach for decision support in financial fraud detection

黃馨瑩, Huang, Shin Ying Unknown Date (has links)
The Growing Hierarchical Self-Organizing Map (GHSOM) is an extension of the Self-Organizing Map (SOM). Its unsupervised learning nature, with adaptive group sizes and a hierarchical structure, makes it well suited to discovering statistically salient features in clustered groups, and it can be used to build a classifier that distinguishes abnormal data from regular data based on the spatial relationships between them. This study exploits these advantages and proposes a novel dual approach (a decision support system (DSS) architecture) with two GHSOMs, which starts by identifying counterpart groups among the clusters, that is, matching each fraud group with its non-fraud counterpart. Classification rules are then formed based on a spatial hypothesis, and a feature extraction mechanism is applied to the fraud clusters. The dominant classification rule is adopted to identify suspected samples, and the results of the feature extraction mechanism are used to pinpoint the relevant input variables and potential fraud activities of those samples for further decision aid. Specifically, for the financial fraud detection (FFD) domain, a non-fraud (fraud) GHSOM tree is constructed by clustering the non-fraud (fraud) samples, and a non-fraud-central (fraud-central) rule is then tuned by inputting all training samples to determine the optimal discrimination boundary within each leaf node of the non-fraud (fraud) GHSOM tree. The optimization yields an adjustable and effective rule for classifying fraud and non-fraud samples, and decision makers can objectively set their weightings of type I and type II errors; the classification rule that dominates the other is adopted for analyzing samples. Dominance of the non-fraud-central rule implies that most fraud samples cluster around their non-fraud counterparts, while dominance of the fraud-central rule implies that most non-fraud samples cluster around their fraud counterparts. In addition, a feature extraction mechanism is developed to uncover regularities in the input variables and fraud categories based on the training samples of each leaf node of a fraud GHSOM tree. The mechanism extracts variable features and fraud patterns to explore the characteristics of the fraud samples within the same leaf node, which can help decision makers such as capital providers evaluate the integrity of the investigated samples and facilitate further analysis toward prudent credit decisions. Experimental results on detecting fraudulent financial reporting (FFR), a sub-field of FFD, confirm the spatial relationship among fraud and non-fraud samples. The implemented DSS architecture achieves better classification performance than the SVM, SOM+LDA, GHSOM+LDA, SOM, BPNN and DT methods, showing its applicability for evaluating the reliability of financial-numbers-based decisions. Following SOM theory, the relevant input variables and fraud categories extracted from the GHSOM apply to all samples classified into the same leaf node; this principle allows the extracted pre-warning signals to be used to assess the reliability of investigated samples and to build a knowledge base for further analysis toward prudent decisions. The DSS architecture based on the proposed dual approach could be applied to other FFD scenarios that rely on financial numbers as a basis for decision making.
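The non-fraud-central rule can be caricatured in a few lines. The sketch below is a heavy simplification: k-means over non-fraud samples stands in for a non-fraud GHSOM tree, and a distance threshold stands in for the optimized discrimination boundary; data and parameters are invented for illustration.

```python
# Simplified stand-in for the non-fraud-central rule: flag samples that lie
# far from every cluster center learned on non-fraud data only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
non_fraud = rng.normal(0, 1, size=(500, 6))   # toy financial ratios
fraud = rng.normal(2.5, 1, size=(40, 6))      # shifted fraud profiles

centers = KMeans(n_clusters=8, n_init=10, random_state=0) \
    .fit(non_fraud).cluster_centers_

def distance_to_nearest_center(samples, centers):
    """Distance from each sample to its closest non-fraud center."""
    d = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
    return d.min(axis=1)

# The 95th percentile of non-fraud distances stands in for the optimized
# discrimination boundary tuned on training data.
threshold = np.quantile(distance_to_nearest_center(non_fraud, centers), 0.95)
flags = distance_to_nearest_center(fraud, centers) > threshold
print("fraction of fraud samples flagged:", flags.mean())
```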
58

Precise Mapping for Retinal Photocoagulation in SLIM (Slit-Lamp Image Mosaicing) / Cartographie précise pour la photocoagulation rétinienne dans SLIM (Mosaïque de l’image de la lampe à fente)

Prokopetc, Kristina 10 November 2017 (has links)
This thesis arises from an agreement Convention Industrielle de Formation par la REcherche (CIFRE) between the Endoscopy and Computer Vision (EnCoV) research group at Université Clermont Auvergne and the company Quantel Medical (www.quantel-medical.fr), which specializes in the development of innovative ultrasound and laser products in ophthalmology. It presents research directed at computer-aided diagnosis and treatment of retinal diseases using the TrackScan industrial prototype developed at Quantel Medical. More specifically, it contributes to the problem of precise Slit-Lamp Image Mosaicing (SLIM) and automatic multi-modal registration of SLIM with Fluorescein Angiography (FA) to assist navigated pan-retinal photocoagulation. We address three different problems. The first is the problem of accumulated registration errors in SLIM, namely mosaicing drift. A common approach to image mosaicing is to compute transformations only between temporally consecutive images in a sequence and then combine them to obtain the transformation between non-consecutive views. Many existing algorithms follow this approach; despite its low computational cost and simplicity, because of its 'chaining' nature alignment errors accumulate, causing images to drift in the mosaic. We propose to use recent advances in key-frame Bundle Adjustment methods and present a drift-reduction framework specifically designed for SLIM, together with a new local refinement procedure. Secondly, we tackle various types of light-related imaging artifacts common in SLIM, which significantly degrade the geometric and photometric quality of the mosaic. Existing solutions deal with strong glares that corrupt the retinal content entirely, while leaving aside the correction of semi-transparent specular highlights and lens flare; this introduces ghosting and information loss, and related generic methods do not produce satisfactory results in SLIM. We therefore propose a better alternative: a method based on a fast single-image technique to remove glares, combined with the notion of semi-transparent specular highlights and motion cues for intelligent correction of lens flare. Finally, we solve the problem of automatic multi-modal registration of FA and SLIM. There is a substantial body of work on multi-modal registration of various retinal image modalities, but most existing methods require detecting feature points in both image modalities, a very difficult task for SLIM and FA, and they do not account for accurate registration in the macula area, the priority landmark. Moreover, no fully automatic solution for SLIM and FA had been developed. In this thesis, we propose the first method able to register these two modalities without manual input, by detecting retinal features on only one image, while ensuring accurate registration in the macula area. Extensive experiments demonstrate the effectiveness of each of the proposed methods. Our results show that (i) our new local refinement procedure significantly improves drift reduction, achieving better precision than the current solution employed in the TrackScan; (ii) the proposed methodology for correcting light-related artifacts is efficient and significantly outperforms related work in SLIM; and (iii) although our multi-modal registration solution builds on existing methods, with the specific modifications made it is fully automatic, effective, and improves on the baseline registration method currently used in the TrackScan.
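The drift problem behind (i) is easy to demonstrate. The toy sketch below, which is not EnCoV's code, composes noisy pairwise homography estimates with OpenCV and shows how small per-pair errors compound over a sequence; the point set, noise level and motion model are assumptions.

```python
# Toy demonstration of mosaicing drift from 'chained' pairwise homographies.
import numpy as np
import cv2

rng = np.random.default_rng(6)
pts = rng.uniform(0, 500, size=(60, 2)).astype(np.float32)

# Ground truth: each frame is shifted 5 px right relative to the previous one.
true_shift = np.array([5.0, 0.0], dtype=np.float32)

H_chain = np.eye(3)
n_frames = 50
for _ in range(n_frames):
    src = pts + rng.normal(0, 0.5, pts.shape).astype(np.float32)  # detection noise
    dst = pts + true_shift + rng.normal(0, 0.5, pts.shape).astype(np.float32)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    H_chain = H @ H_chain   # compose pairwise estimates ('chaining')

# After 50 frames the true translation is 250 px; the chained estimate drifts.
print("estimated tx, ty:", H_chain[0, 2], H_chain[1, 2])
print("true tx, ty:     ", true_shift * n_frames)
```

Key-frame bundle adjustment attacks exactly this compounding: instead of trusting the chain, transformations are re-estimated jointly against shared key frames.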
