51

Data Mining Methods For Clustering Power Quality Data Collected Via Monitoring Systems Installed On The Electricity Network

Guder, Mennan 01 September 2009 (has links) (PDF)
Increasing power demand and the widespread use of high-technology power electronic devices create a need for power quality monitoring. The quality of electric power in both transmission and distribution systems should be analyzed in order to sustain power system reliability and continuity, and this analysis is possible by examining the data collected by power quality monitoring systems. Defining the characteristics of the power system and revealing the relations between power quality events requires processing a huge amount of data. In this thesis, clustering methods for power quality events are developed using exclusive and overlapping clustering models. The methods are designed to cluster the huge amount of power quality data obtained from online monitoring of the Turkish Electricity Transmission System. The main issues considered in the design of the clustering methods are the amount of data, the efficiency of the designed algorithms, and the queries that should be supplied to the domain experts. This research work is fully supported by the Public Research Grant Committee (KAMAG) of TUBITAK within the scope of the National Power Quality Project (105G129).
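The abstract contrasts exclusive and overlapping clustering models. As an illustration of the overlapping case only, here is a minimal fuzzy c-means loop in plain NumPy run on a placeholder matrix of event features; the feature dimensions and data are assumptions, not the thesis's monitoring data.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Basic fuzzy c-means: returns cluster centres and a membership matrix U
    where U[i, j] is the degree to which sample i belongs to cluster j."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)           # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        new_U = 1.0 / (dist ** (2 / (m - 1)))
        new_U /= new_U.sum(axis=1, keepdims=True)
        if np.abs(new_U - U).max() < tol:
            U = new_U
            break
        U = new_U
    return centres, U

# Hypothetical event features, e.g. sag depth, duration, harmonic distortion.
X = np.random.default_rng(1).random((500, 3))
centres, U = fuzzy_c_means(X, c=4)
hard_labels = U.argmax(axis=1)                   # exclusive assignment, if needed
```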
52

Brain State Classification in Epilepsy and Anaesthesia

Lee, Angela 07 January 2011 (has links)
Transitions between normal and pathological brain states are manifested differently in the electroencephalogram (EEG). Traditional discrimination of these states is often subject to bias and strict definitions. A fuzzy logic-based analysis can permit the classification and tracking of brain states in a non-subjective and unsupervised manner. In this thesis, the combination of fuzzy c-means (FCM) clustering, wavelet analysis, and information theory has revealed notable frequency features in epilepsy and anaesthetic-induced unconsciousness. It was shown that entropy changes in membership functions correlate with specific epileptiform activity and with changes in anaesthetic dosages. Seizure episodes appeared in the 31-39 Hz band, suggesting changes in cortical functional organization. The induction of anaesthetics appeared in the 64-72 Hz band, while the return to consciousness appeared in the 32-40 Hz band. Changes in FCM activity were associated with the concentration of anaesthetics. These results can help with the treatment of epilepsy and the safe administration of anaesthesia.
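The entropy of the membership functions is the quantity the abstract ties to epileptiform activity and anaesthetic dosage. A short sketch of just that step, assuming a membership matrix of the kind an FCM run produces; the state labels are placeholders, not the thesis's bands or classes.

```python
import numpy as np

def membership_entropy(U, eps=1e-12):
    """Shannon entropy of each sample's membership vector.
    Low entropy: the epoch belongs firmly to one brain state;
    high entropy: the classification is ambiguous (e.g. during a transition)."""
    U = np.clip(U, eps, 1.0)
    return -(U * np.log(U)).sum(axis=1)

# U could come from FCM applied to wavelet band powers of EEG epochs,
# with columns standing for assumed states such as baseline / seizure-like / anaesthetised.
U = np.array([[0.95, 0.03, 0.02],   # confidently one state
              [0.40, 0.35, 0.25]])  # ambiguous epoch
print(membership_entropy(U))
```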
54

Machinery fault diagnostics based on fuzzy measure and fuzzy integral data fusion techniques

Liu, Xiaofeng January 2007 (has links)
With growing demands for reliability, availability, safety and cost efficiency in modern machinery, accurate fault diagnosis is becoming of paramount importance so that potential failures can be better managed. Although various methods have been applied to machinery condition monitoring and fault diagnosis, the diagnostic accuracy that can be attained is far from satisfactory. As most machinery faults lead to increases in vibration levels, vibration monitoring has become one of the most basic and widely used methods to detect machinery faults. However, current vibration monitoring methods largely depend on signal processing techniques. This study is based on the recognition that a multi-parameter data fusion approach to diagnostics can produce more accurate results. Fuzzy measures and fuzzy integral data fusion theory can represent the importance of each criterion and express certain interactions among them. This research developed a novel, systematic and effective fuzzy measure and fuzzy integral data fusion approach for machinery fault diagnosis, comprising a feature set selection schema, a feature-level data fusion schema, and a decision-level data fusion schema. Different feature selection and fault diagnostic models were derived from these schemas. Two fuzzy measures, the 2-additive fuzzy measure and the fuzzy measure, were employed together with two fuzzy integrals, the Choquet fuzzy integral and the Sugeno fuzzy integral. The models were validated using rolling element bearing and electrical motor experiments. Different features extracted from vibration signals were used to validate the rolling element bearing feature set selection and fault diagnostic models, while features obtained from both vibration and current signals were employed to assess the electrical motor fault diagnostic models. The results show that the proposed schemas and models perform very well in selecting feature sets and can improve accuracy in diagnosing both rolling element bearing and electrical motor faults.
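The fusion step the abstract describes rests on the discrete Choquet integral of criterion scores with respect to a fuzzy measure. A minimal sketch of that computation; the vibration criteria and measure values below are illustrative assumptions, not the thesis's fitted measures.

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of `scores` (criterion -> value in [0, 1])
    with respect to a fuzzy measure `mu` (frozenset of criteria -> weight)."""
    items = sorted(scores.items(), key=lambda kv: kv[1])   # ascending by score
    total, prev = 0.0, 0.0
    for i, (crit, value) in enumerate(items):
        # A_i: the criteria whose score is at least the current one
        A = frozenset(c for c, _ in items[i:])
        total += (value - prev) * mu[A]
        prev = value
    return total

# Illustrative example with three vibration-derived criteria.
criteria = ["rms", "kurtosis", "crest"]
scores = {"rms": 0.7, "kurtosis": 0.4, "crest": 0.9}
mu = {frozenset(): 0.0,
      frozenset({"rms"}): 0.4, frozenset({"kurtosis"}): 0.3, frozenset({"crest"}): 0.35,
      frozenset({"rms", "kurtosis"}): 0.65, frozenset({"rms", "crest"}): 0.7,
      frozenset({"kurtosis", "crest"}): 0.6,
      frozenset(criteria): 1.0}
print(choquet_integral(scores, mu))
```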
55

Analysis of Quality of Experience by applying Fuzzy logic : A study on response time

Ataeian, Seyed Mohsen, Darbandi, Mehrnaz Jaberi January 2011 (has links)
To be successful in today's competitive market, service providers should look at user satisfaction as a critical key. In order to gain a better understanding of customers' expectations, a proper evaluation that considers the intrinsic characteristics of perceived quality of service is needed. Due to the subjective nature of quality, the vagueness of human judgment, and the uncertainty about the degree of users' linguistic satisfaction, fuzziness is associated with quality of experience. Considering the capability of fuzzy logic in dealing with imprecision and qualitative knowledge, it would be wise to apply it as a powerful mathematical tool for analyzing the quality of experience (QoE). This thesis proposes a fuzzy procedure to evaluate the quality of experience. In the proposed methodology, we provide a fuzzy relationship between QoE and Quality of Service (QoS) parameters. To identify this fuzzy relationship, a new term called the Fuzzified Opinion Score (FOS), representing a fuzzy quality scale, is introduced. A fuzzy data mining method is applied to construct the required number of fuzzy sets. Then, the appropriate membership functions describing the fuzzy sets are modeled and compared with each other. The proposed methodology will assist service providers in better decision-making and resource management.
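One way to picture the fuzzy relationship between a QoS parameter and user satisfaction is through membership functions over response time. The sketch below uses simple triangular membership functions; the breakpoints and the three-level scale are assumptions for illustration, not the thesis's fitted FOS sets.

```python
def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_response_time(ms):
    """Map a response time in milliseconds to membership degrees on an
    assumed three-level opinion scale (breakpoints are illustrative only)."""
    return {
        "good":       triangular(ms, -1, 0, 400),
        "acceptable": triangular(ms, 200, 500, 900),
        "poor":       triangular(ms, 700, 1200, 10_000),
    }

print(fuzzify_response_time(300))  # partly "good", partly "acceptable"
```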
56

Text mining se zaměřením na shlukovací a fuzzy shlukovací metody / Text mining focused on clustering and fuzzy clustering methods

Zubková, Kateřina January 2018 (has links)
This thesis is focused on cluster analysis in the field of text mining and its application to real data. The aim of the thesis is to find suitable categories (clusters) in the transcribed calls recorded in the contact center of Česká pojišťovna a.s. by transferring these textual documents into the vector space using basic text mining methods and the implemented clustering algorithms. From the formal point of view, the thesis contains a description of preprocessing and representation of textual data, a description of several common clustering methods, cluster validation, and the application itself.
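The vector-space step the abstract mentions, turning transcribed calls into features and clustering them, can be sketched briefly with scikit-learn; the example documents and cluster count are placeholders, not the Česká pojišťovna data, and plain k-means stands in for the fuzzy variants.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Placeholder transcripts; the real input would be the transcribed calls.
docs = [
    "I want to report a car accident and claim damages",
    "How do I change the address on my policy",
    "My car was hit in a parking lot, what do I do",
    "Please update my contact details and billing address",
]

# TF-IDF representation of the documents in vector space.
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(docs)

# Hard clustering; a fuzzy variant would instead return membership degrees.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(list(zip(labels, docs)))
```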
57

Data Mining the Effects of Storage Conditions, Testing Conditions, and Specimen Properties on Brain Biomechanics

Crawford, Folly Martha Dzan 10 August 2018 (has links)
Traumatic brain injury is highly prevalent in the United States, yet there is little understanding of how the brain responds during injurious loading. A confounding problem is that testing conditions vary between assessment methods, so brain biomechanics cannot be fully understood. Data mining techniques were applied to discover how changes in testing conditions affect the mechanical response of the brain. Data were gathered from literature sources, and self-organizing maps were used to conduct a sensitivity analysis ranking the considered parameters by importance. Fuzzy c-means clustering was applied to find patterns in the data. The rankings and clustering for each data set varied, indicating that the strain rate and type of deformation influence the role of these parameters. Multivariate linear regression was applied to develop a model that can predict the mechanical response under different experimental conditions. Prediction of the response depended primarily on strain rate, frequency, brain matter composition, and anatomical region.
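The regression step, predicting a mechanical response from experimental conditions, is straightforward to illustrate; the predictor names follow the abstract, but the data below are synthetic placeholders rather than the literature data set.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for the predictors named in the abstract:
# strain rate, test frequency, matter composition fraction, anatomical region (coded).
X = np.column_stack([
    rng.uniform(0.01, 100, 200),   # strain rate (1/s)
    rng.uniform(0.1, 10, 200),     # frequency (Hz)
    rng.uniform(0.0, 1.0, 200),    # matter composition fraction
    rng.integers(0, 4, 200),       # region code
])
y = 0.5 * np.log1p(X[:, 0]) + 0.2 * X[:, 1] + rng.normal(0, 0.1, 200)  # fake response

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_, model.score(X, y))
```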
58

An evolutionary Pentagon Support Vector finder method

Mousavi, S.M.H., Vincent, Charles, Gherman, T. 02 March 2020 (has links)
In dealing with big data, we need effective algorithms; effectiveness that depends, among other things, on the ability to remove outliers from the data set, especially when dealing with classification problems. To this aim, support vector finder algorithms have been created to keep just the most important data in the data pool. Nevertheless, existing classification algorithms, such as Fuzzy C-Means (FCM), suffer from the drawback of setting the initial cluster centers imprecisely. In this paper, we avoid existing shortcomings and aim to find and remove unnecessary data in order to speed up the final classification task without losing vital samples and without harming final accuracy; in this sense, we present a unique approach for finding support vectors, named the evolutionary Pentagon Support Vector (PSV) finder method. The originality of the current research lies in using geometrical computations and evolutionary algorithms to make a more effective system, which has the advantage of higher accuracy on some data sets. The proposed method is subsequently tested with seven benchmark data sets, and the results are compared to those obtained from performing classification on the original data (classification before and after PSV) under the same conditions. The testing returned promising results.
59

Técnicas de computação natural para segmentação de imagens médicas / Natural computing techniques for medical image segmentation

Souza, Jackson Gomes de 28 September 2009 (has links)
Image segmentation is one of the image processing problems that deserves special attention from the scientific community. This work studies unsupervised clustering and pattern recognition methods applicable to medical image segmentation. Methods based on natural computing have proved very attractive for such tasks and are studied here as a way to verify their applicability to medical image segmentation. This work implements the following methods: GKA (Genetic K-means Algorithm), GFCMA (Genetic FCM Algorithm), PSOKA (PSO and K-means based Clustering Algorithm) and PSOFCM (PSO and FCM based Clustering Algorithm). In addition, as a way to evaluate the results given by the algorithms, clustering validity indexes are used as a quantitative measure. Visual and qualitative evaluations are also performed, mainly using data from the BrainWeb brain simulator as ground truth.
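The quantitative evaluation the abstract refers to, scoring a segmentation by a clustering validity index, can be sketched with a generic index; this is not the thesis's GKA/PSO code, and plain k-means on placeholder pixel features stands in to show the validation step only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Placeholder "pixel" feature vectors (e.g. intensity plus local statistics).
X = rng.random((1000, 3))

# Score several cluster counts with a validity index and keep the best.
best = None
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if best is None or score > best[1]:
        best = (k, score)
print("best k by silhouette:", best)
```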
60

Software quality studies using analytical metric analysis

Rodríguez Martínez, Cecilia January 2013 (has links)
Today engineering companies expend a large amount of resources on the detection and correction of bugs (defects) in their software. These bugs are usually due to errors and mistakes made by programmers while writing the code or the specifications. No tool is able to detect all of these bugs, and some remain undetected despite testing of the code. For these reasons, many researchers have tried to find indicators in a program's source code that can be used to predict the presence of bugs. Every bug in the source code is a potential failure of the program to perform as expected. Therefore, programs are tested with many different cases in an attempt to cover all the possible paths through the program and detect all of these bugs. Early prediction of bugs informs programmers about the likely location of bugs in the code, so they can test the more error-prone files more carefully and save time by not testing error-free files. This thesis project created a tool that is able to predict error-prone source code written in C++. In order to achieve this, we have utilized one predictor which has been extremely well studied: software metrics. Many studies have demonstrated that there is a relationship between software metrics and the presence of bugs. In this project, a neuro-fuzzy hybrid model based on fuzzy c-means and a radial basis function neural network has been used. The efficiency of the model has been tested on a software project at Ericsson. Testing of this model showed that the program does not achieve high accuracy, due to the lack of independent samples in the data set; however, the experiments did show that classification models provide better predictions than regression models. The thesis concludes by suggesting future work that could improve the performance of this program.
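A compact sketch of the kind of hybrid the abstract describes, with cluster centres used as radial basis function centres feeding a simple classifier; k-means stands in for fuzzy c-means here, and the metric vectors and fault labels are synthetic, not the Ericsson data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic software-metric vectors (e.g. LOC, cyclomatic complexity, fan-out)
# and a fault/no-fault label per file.
X = rng.random((300, 3))
y = (X[:, 1] + 0.3 * rng.standard_normal(300) > 0.5).astype(int)

# Step 1: cluster the metric space; the centres become RBF centres.
centres = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X).cluster_centers_

# Step 2: radial basis expansion of each sample around the centres.
def rbf_features(X, centres, gamma=5.0):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# Step 3: a linear classifier on the RBF features predicts fault-proneness.
clf = LogisticRegression(max_iter=1000).fit(rbf_features(X, centres), y)
print("training accuracy:", clf.score(rbf_features(X, centres), y))
```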
