61 |
Técnicas de computação natural para segmentação de imagens médicas (Natural computing techniques for medical image segmentation). Souza, Jackson Gomes de, 28 September 2009 (has links)
Image segmentation is one of the image processing problems that deserves special attention from the scientific community. This work studies unsupervised clustering and pattern recognition methods applicable to medical image segmentation. Natural Computing based methods have proven very attractive in such tasks and are studied here as a way to verify their applicability to medical image segmentation. This work implements the following methods: GKA (Genetic K-means Algorithm), GFCMA (Genetic FCM Algorithm), PSOKA (PSO and K-means based Clustering Algorithm) and PSOFCM (PSO and FCM based Clustering Algorithm). In addition, clustering validity indexes are used as quantitative measures to evaluate the results given by the algorithms. Visual and qualitative evaluations are also carried out, mainly using data from the BrainWeb brain simulator as ground truth.
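To make the method concrete, here is a minimal sketch of the plain FCM step that all four hybrids build on, assuming the scikit-fuzzy package; the genetic and swarm variants wrap this clustering in a global search, and the image below is synthetic rather than BrainWeb data.

```python
# Plain fuzzy c-means segmentation of a grayscale "scan" -- the core
# step that GKA/GFCMA/PSOKA/PSOFCM optimize globally. Sketch only.
import numpy as np
import skfuzzy as fuzz

def fcm_segment(image, n_tissues=3, m=2.0):
    """Cluster pixel intensities into n_tissues classes; return a
    label map, the cluster centers, and the fuzzy partition
    coefficient (one of the validity indexes mentioned above)."""
    data = image.reshape(1, -1).astype(float)       # (features, samples)
    cntr, u, _, _, _, _, fpc = fuzz.cluster.cmeans(
        data, c=n_tissues, m=m, error=1e-5, maxiter=300, seed=0)
    labels = np.argmax(u, axis=0)                   # hard assignment
    return labels.reshape(image.shape), cntr.ravel(), fpc

# Synthetic image with three intensity populations (not BrainWeb data).
img = np.random.default_rng(0).normal(
    loc=np.repeat([50.0, 120.0, 200.0], 100), scale=10.0).reshape(10, 30)
seg, centers, fpc = fcm_segment(img)
print("centers:", np.sort(centers).round(1), "FPC:", round(fpc, 3))
```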
|
62 |
Software quality studies using analytical metric analysis. Rodríguez Martínez, Cecilia, January 2013 (has links)
Today engineering companies expend a large amount of resources on the detection and correction of bugs (defects) in their software. These bugs are usually due to errors and mistakes made by programmers while writing the code or the specifications. No tool is able to detect all of these bugs, and some of them remain undetected despite testing of the code. For these reasons, many researchers have tried to find indicators in the software's source code that can be used to predict the presence of bugs. Every bug in the source code is a potential failure of the program to perform as expected. Therefore, programs are tested with many different cases in an attempt to cover all the possible paths through the program and detect all of these bugs. Early prediction of bugs informs programmers about the location of the bugs in the code, so they can test the more error-prone files more carefully and save a lot of time by not testing error-free files. This thesis project created a tool that is able to predict error-prone source code written in C++. In order to achieve this, we have utilized one predictor which has been extremely well studied: software metrics. Many studies have demonstrated that there is a relationship between software metrics and the presence of bugs. In this project a Neuro-Fuzzy hybrid model based on Fuzzy c-means and a Radial Basis Neural Network has been used. The efficiency of the model has been tested on a software project at Ericsson. Testing of this model showed that the program does not achieve high accuracy due to the lack of independent samples in the data set. However, experiments did show that classification models provide better predictions than regression models. The thesis concludes by suggesting future work that could improve the performance of this program.
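A compact sketch of that Neuro-Fuzzy idea, assuming NumPy and scikit-fuzzy: FCM places the radial basis centers in metric space, and a least-squares read-out classifies files as error-prone or not. The four metrics and the "buggy" labelling rule are invented for illustration, not Ericsson's data.

```python
# FCM picks RBF centers from software metrics; a Radial Basis layer
# plus linear read-out predicts a defect label. Illustrative sketch.
import numpy as np
import skfuzzy as fuzz

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                    # 200 files x 4 metrics
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # toy "buggy" flag

# 1) FCM on the metric vectors gives the RBF centers.
cntr, u, *_ = fuzz.cluster.cmeans(X.T, c=8, m=2.0,
                                  error=1e-5, maxiter=200, seed=1)

# 2) Gaussian RBF layer, then a linear read-out fit by least squares.
dist = np.linalg.norm(X[:, None, :] - cntr[None], axis=2)   # (200, 8)
width = dist.mean()
H = np.exp(-dist ** 2 / (2 * width ** 2))
w, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ w > 0.5                               # error-prone or not
print("training accuracy:", (pred == y.astype(bool)).mean())
```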
|
63 |
Apport de la morphologie tridimensionnelle de la colonne vertébrale et du bassin à la scoliose idiopathique de l’adolescence (Contribution of the three-dimensional morphology of the spine and pelvis to adolescent idiopathic scoliosis). Dubé, Evelyne, 08 1900 (has links)
Adolescent idiopathic scoliosis (AIS) is a deformation of the spine of unknown origin. Although it is still measured and classified using two-dimensional (2D) radiographs, it is widely reported in the literature that scoliosis is a deformity in the three planes of space. The pelvis also seems to be involved in the scoliotic deformity, since it is the foundation of the spine and its orientation influences postural balance. Thus, the asymmetry of the pelvis and its postural attitude could be compensatory mechanisms of idiopathic scoliosis, or the agents that trigger the spinal deformity. The objective of this work is to determine the relationship between the three-dimensional (3D) morphology of the spine and pelvis and scoliotic deformities grouped according to Lenke's classification, and to identify the links between these morphological parameters and the Cobb angle.
Eighty girls with AIS participated in the study: 32 subjects had thoracic scoliosis, 23 thoracolumbar and 25 lumbar scoliosis. Simultaneous radiographs in the posterior-anterior and lateral planes in standing position were taken with the EOS system. Fifteen anatomical landmarks on each vertebra between T1 and L5 and twenty-one on the pelvis were identified on the pairs of radiographs, and the three-dimensional reconstruction of the spine and pelvis was made from these landmarks. A total of five parameters on the spine and three on the pelvis were calculated to characterize the morphology of the thoracic, thoracolumbar and lumbar deformities. The unsupervised fuzzy c-means (FCM) clustering algorithm was used to classify the subjects. Classifications with two and three classes were made with non-normalized and normalized data, i.e., with and without abstracting the level of the scoliotic curve. One-way analyses of variance (ANOVA) with post-hoc tests were conducted on the two-group and two-class classifications, and multivariate analyses (MANOVA) with post-hoc tests on the three-group and three-class classifications.
The Cobb angle of the main thoracic segment was significantly different for the three types of scoliosis; however, these differences might be attributed to the analyzed segments and the severity of the curve. With the normalized data, the thoracic (L1) subjects group together in the two-class classification and divide into two in the three-class classification. These classes are divided according to the kyphosis (p = 0.000), lordosis (p = 0.000) and orientation of the plane of maximum curvature (PMC) (p = 0.000) parameters. Regardless of the classification, the L5 and L6 subjects usually gather in the same class. Pearson correlations with the Cobb angle were also computed in order to detect relationships between the types of deformity and the morphological parameters. The pelvis had no impact on the classifications, but it is correlated with the scoliotic deviation of the lumbar subjects: pelvic tilt is inversely correlated (r = -0.433; p = 0.031), while sacral slope is directly correlated (r = 0.419; p = 0.037).
In conclusion, the results of this study indicate that the 3D morphology of the spine and pelvis contributes clinically relevant information to thoracic (L1), thoracolumbar (L5) and lumbar (L6) scoliosis. Our fuzzy c-means results for the thoracic subjects support those in the literature, according to which these subjects are not all hypokyphotic. New parameters, such as lordosis and the orientation of the PMC, reinforce the idea that there are subgroups within thoracic scoliosis. Finally, although the thoracolumbar and lumbar subjects appear to differ from a visual standpoint, these subjects are inseparable according to the three-dimensional morphology parameters.
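A hedged sketch of the classification step, assuming scikit-fuzzy and SciPy: subjects are clustered on normalized morphological parameters, and a one-way ANOVA then checks whether kyphosis separates the resulting classes. The parameter values are simulated, not the study's measurements.

```python
# FCM on per-subject morphological parameters, then a one-way ANOVA
# on kyphosis across the resulting classes. Simulated data only.
import numpy as np
import skfuzzy as fuzz
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
# columns: Cobb angle, kyphosis, lordosis, PMC orientation (degrees)
params = rng.normal([45, 25, 50, 70], [12, 8, 10, 15], size=(80, 4))
z = (params - params.mean(0)) / params.std(0)     # normalized data

_, u, *_ = fuzz.cluster.cmeans(z.T, c=3, m=2.0,
                               error=1e-5, maxiter=200, seed=2)
cls = np.argmax(u, axis=0)

groups = [params[cls == k, 1] for k in range(3)]  # kyphosis per class
res = f_oneway(*groups)
print(f"ANOVA on kyphosis: F={res.statistic:.2f}, p={res.pvalue:.4f}")
```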
|
64 |
Driving data pattern recognition for intelligent energy management of plug-in hybrid electric vehicles. Munthikodu, Sreejith, 19 August 2019 (has links)
This work focuses on the development and testing of new driving data pattern recognition intelligent system techniques to support driver-adaptive, real-time optimal power control and energy management of hybrid electric vehicles (HEVs) and plug-in hybrid electric vehicles (PHEVs). A novel intelligent energy management approach has been introduced that combines vehicle operation data acquisition, driving data clustering and pattern recognition, cluster prototype based power control and energy optimization, and real-time driving pattern recognition and optimal energy management. The method integrates advanced machine learning techniques and global optimization methods to form the driver-adaptive optimal power control and energy management. The Fuzzy C-Means clustering algorithm is used to identify representative vehicle operation patterns from collected driving data. Dynamic Programming (DP) based off-line optimization is conducted to obtain the optimal control parameters for each of the identified driving patterns. Artificial Neural Networks (ANN) are trained to associate each of the identified operation patterns with the optimal energy management plan to support real-time optimal control. Implementation and advantages of the new method are demonstrated using the 2012 California household travel survey data and driver-specific data collected from the city of Victoria, BC, Canada.
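A skeleton of that pipeline, assuming scikit-fuzzy, with the Dynamic Programming stage stubbed out and a maximum-membership classifier standing in for the trained ANN; all features, parameter names and numbers are illustrative.

```python
# FCM clusters driving-feature windows into patterns; each pattern is
# mapped off-line to control parameters (DP stage stubbed out); at run
# time the closest pattern's parameters are applied. Sketch only.
import numpy as np
import skfuzzy as fuzz

rng = np.random.default_rng(3)
# per window: mean speed (m/s), speed std, mean acceleration
windows = rng.uniform([5, 0, -1], [30, 10, 1], size=(500, 3))

cntr, *_ = fuzz.cluster.cmeans(windows.T, c=4, m=2.0,
                               error=1e-5, maxiter=200, seed=3)

def offline_dp(prototype):
    """Stand-in for DP optimization: fake optimal control parameters."""
    return {"eng_on_kw": 10 + prototype[0] / 3, "soc_rate": 0.1}

control_table = [offline_dp(c) for c in cntr]   # one plan per pattern

def controller(window_features):
    # classify the live window against the learned pattern prototypes
    u, *_ = fuzz.cluster.cmeans_predict(
        window_features.reshape(-1, 1), cntr,
        m=2.0, error=1e-5, maxiter=100)
    return control_table[int(np.argmax(u))]

print(controller(np.array([22.0, 4.0, 0.2])))
```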
|
65 |
Algoritmo de agrupamento Fuzzy C-Means para aprendizado e tomada de decisão em redes ópticas de próxima geração / Fuzzy C-Means algorithm for learning and decision making in next generation optical network. Tronco, Tania Regina, 31 August 2015 (has links)
Optical networks have evolved continuously, increasing transmission rates and link lengths due to growing bandwidth consumption. Moreover, several proposals are currently under development to make the next generation optical network more dynamic and flexible. The term "flexible" refers to the ability to dynamically adjust the parameters of the optical network, such as modulation format, transmission rate and optical bandwidth, according to the quality of transmission of each lightpath. In this scenario, the Software Defined Optical Network (SDON) emerges as a new optical network paradigm, where the control plane is decoupled from the data plane, enabling remote controllers to configure network equipment from different hardware vendors and bringing a degree of software programmability to the network. In an SDON, the control plane needs to include functionalities to operate autonomously, i.e., with minimal human intervention. The use of computational intelligence techniques in such a control plane enables autonomous operation and learning based on past events, in order to optimize network performance. This architecture represents a new paradigm in the evolution of optical networks, resulting in so-called Cognitive Optical Networks. The choice of a computational intelligence technique for learning and decision-making in such optical networks is essential to gain advantages from the use of cognition. This technique should minimize computational complexity, since the configuration of the network parameters must occur in real time. In this context, this thesis investigates the use of the Fuzzy C-Means (FCM) clustering algorithm for learning and decision-making in the software defined optical network context. FCM enables the automatic generation of rules based on the experience gained during network operation; these rules are then used by the control plane to take decisions about the lightpaths' configuration. A comparison of performance between the FCM and CBR (Case-Based Reasoning) algorithms is presented; CBR was chosen because it has been successfully used in cognitive optical networks. Finally, a concept for a cognitive optical network is proposed.
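A hedged sketch of how FCM-derived rules could drive such a decision, assuming scikit-fuzzy: past (link length, OSNR) observations are clustered, each cluster is labelled with a modulation format, and a new lightpath is configured by maximum membership. The data and the format mapping are invented for illustration.

```python
# FCM turns past lightpath observations into decision rules; a new
# lightpath is assigned the format of its highest-membership cluster.
import numpy as np
import skfuzzy as fuzz

rng = np.random.default_rng(4)
hist = np.column_stack([rng.uniform(50, 2000, 300),   # link length, km
                        rng.uniform(12, 30, 300)])    # OSNR, dB
cntr, u, *_ = fuzz.cluster.cmeans(hist.T, c=3, m=2.0,
                                  error=1e-5, maxiter=200, seed=4)

# rule: rank clusters by centre OSNR, higher OSNR -> denser format
order = np.argsort(cntr[:, 1])
rule = {order[0]: "QPSK", order[1]: "8QAM", order[2]: "16QAM"}

new_path = np.array([[800.0], [22.0]])                # (features, 1)
u_new, *_ = fuzz.cluster.cmeans_predict(new_path, cntr,
                                        m=2.0, error=1e-5, maxiter=100)
print("modulation for new lightpath:", rule[int(np.argmax(u_new))])
```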
|
66 |
Frequency Analysis of Droughts Using Stochastic and Soft Computing Techniques. Sadri, Sara, January 2010 (has links)
In the Canadian Prairies, recurring droughts are one of the realities which can have significant economic, environmental, and social impacts. For example, droughts in 1997 and 2001 cost over $100 million across different sectors. Drought frequency analysis is a technique for analyzing how frequently a drought event of a given magnitude may be expected to occur. In this study the state of the science related to frequency analysis of droughts is reviewed and studied. The main contributions of this thesis include the development of a model in Matlab which uses the qualities of Fuzzy C-Means (FCM) clustering and corrects the formed regions to meet the criteria of effective hydrological regions. In FCM each site has a degree of membership in each of the clusters. The algorithm developed is flexible: it takes the number of regions and the return period as inputs and shows the final corrected clusters as output for most case scenarios. While drought is considered a bivariate phenomenon, with the two statistical variables of duration and severity to be analyzed simultaneously, an important step in this study is increasing the complexity of the initial Matlab model to correct regions based on L-comoment statistics (as opposed to L-moments). Implementing a reasonably straightforward approach for bivariate drought frequency analysis using bivariate L-comoments and copulas is another contribution of this study. Quantile estimation at ungauged sites for return periods of interest is studied by introducing two new classes of neural network and machine learning: Radial Basis Function (RBF) networks and Support Vector Machine Regression (SVM-R). These two techniques are selected based on their good reviews in the literature on function estimation and nonparametric regression. The functionalities of RBF and SVM-R are compared with the traditional nonlinear regression (NLR) method. As well, a nonlinear regression with regionalization method, in which catchments are first regionalized using FCM, is applied and its results are compared with the other three models. Drought data from 36 natural catchments in the Canadian Prairies are used in this study. This study provides a methodology for bivariate drought frequency analysis that can be practiced in any part of the world.
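As a small illustration of the ungauged-site step, the sketch below regresses a drought quantile on catchment attributes with support vector regression (scikit-learn's SVR), one of the two techniques introduced; the attributes and quantiles are synthetic stand-ins, not the Prairie data.

```python
# SVM regression of a drought quantile on catchment attributes,
# then prediction at held-out "ungauged" sites. Synthetic data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(5)
# per catchment: drainage area (km2), annual precip (mm), wetland frac
attrs = rng.uniform([100, 300, 0.0], [5000, 600, 1.0], size=(36, 3))
q50 = (2 + 0.001 * attrs[:, 0] - 0.002 * attrs[:, 1]
       + rng.normal(0, 0.1, 36))              # toy 50-yr severity

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(attrs[:30], q50[:30])               # 30 gauged catchments
print("predicted severity at 6 ungauged sites:",
      model.predict(attrs[30:]).round(2))
```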
|
67 |
Decision Making System Algorithm On Menopause Data Set. Bacak, Hikmet Ozge, 01 September 2007 (has links)
A multiple-centered clustering method and a decision making system algorithm on a menopause data set depending on multiple-centered clustering are described in this study. This method consists of two stages. At the first stage, the fuzzy C-means (FCM) clustering algorithm is applied on the data set under consideration with a high number of cluster centers. As the output of FCM, the cluster centers and the membership function values for each data member are calculated. At the second stage, the original cluster centers obtained in the first stage are merged until the desired number of clusters is reached. The merging process relies upon a "similarity measure" between clusters defined in the thesis. During the merging process, the cluster center coordinates do not change, but the data members in these clusters are merged into a new cluster. As the output of this method, therefore, one obtains clusters which include many cluster centers.
In the final part of this study, as an application of the clustering algorithms, including the multiple-centered clustering method, a decision making system is constructed using a special data set on menopause treatment. The decisions are based on the clusterings created by the algorithms already discussed in the previous chapters of the thesis. A verification of the decision making / decision aid system was done by a team of experts from the Department of Obstetrics and Gynecology of Hacettepe University under the guidance of Prof. Sinan Beksaç.
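A rough sketch of the two-stage idea, assuming scikit-fuzzy: stage one runs FCM with many centers, stage two greedily merges the closest pair of clusters until the target count is reached. The thesis defines its own membership-based similarity measure; plain center distance is used here only as a proxy, and the data are simulated.

```python
# Stage 1: FCM with many centres. Stage 2: merge clusters until the
# target count; centre coordinates never change, members follow their
# centre, so final clusters can contain several centres. Sketch only.
import numpy as np
import skfuzzy as fuzz

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 5))                 # records x features

cntr, u, *_ = fuzz.cluster.cmeans(X.T, c=12, m=2.0,
                                  error=1e-5, maxiter=200, seed=6)
group = list(range(12))                       # each centre its own cluster

while len(set(group)) > 4:                    # merge down to 4 clusters
    ids = sorted(set(group))
    best, pair = np.inf, None
    for a in ids:
        for b in ids:
            if a < b:
                d = np.linalg.norm(cntr[a] - cntr[b])  # proxy similarity
                if d < best:
                    best, pair = d, (a, b)
    group = [pair[0] if g == pair[1] else g for g in group]

labels = [group[k] for k in np.argmax(u, axis=0)]
print("final clusters (of centre ids):", sorted(set(group)))
```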
|
70 |
Regionalization Of Hydrometeorological Variables In India Using Cluster Analysis. Bharath, R, 09 1900 (has links)
Regionalization of hydrometeorological variables such as rainfall and temperature is necessary for various applications related to water resources planning and management. Sampling variability and randomness associated with the variables, as well as the non-availability and paucity of data, pose a challenge in modelling the variables. This challenge can be addressed by using stochastic models that utilize information from hydrometeorologically similar locations for modelling the variables. A set of locations that are hydrometeorologically similar is referred to as a homogeneous region or pooling group, and the process of identifying a homogeneous region is referred to as regionalization. The thesis concerns the development of new approaches to regionalization of (i) extreme rainfall, (ii) maximum and minimum temperatures, and (iii) rainfall together with maximum and minimum temperatures.
Regionalization of extreme rainfall and frequency analysis based on the resulting regions yields quantile estimates that find use in the design of water control (e.g., barrages, dams, levees) and conveyance structures (e.g., culverts, storm sewers, spillways) to mitigate damages that are likely due to floods triggered by extreme rainfall, and in land-use planning and management. Regionalization based on both rainfall and temperature yields regions that could be used to address a wide spectrum of problems such as meteorological drought analysis, agricultural planning to cope with water shortages during droughts, and downscaling of precipitation and temperature.
Conventional approaches to regionalization of extreme rainfall are based extensively on statistics derived from extreme rainfall, so the delineated regions are susceptible to sampling variability and randomness associated with extreme rainfall records, which is undesirable. To address this, the idea of forming regions by considering the attributes for regionalization to be seasonality measures and site location indicators (which can be determined even for ungauged locations) is explored. For regionalization, a Global Fuzzy c-means (GFCM) cluster analysis based methodology is developed in the L-moment framework. The methodology is used to arrive at a set of 25 homogeneous extreme rainfall regions over India considering gridded rainfall records at daily scale, as there is a dearth of regionalization studies on extreme rainfall in India. Results are compared with those based on the commonly used region of influence (ROI) approach, which forms site-specific regions for quantile estimation but lacks the ability to delineate a geographical area into a reasonable number of homogeneous regions. Gridded data constitute spatially averaged rainfall that might originate from a different process (more synoptic) than point rainfall (more convective). Therefore, to investigate the utility of the developed GFCM methodology in arriving at meaningful regions when applied to point rainfall data, the methodology is applied to daily rainfall records available for 1032 gauges in the Karnataka state of India. The application yielded 22 homogeneous extreme rainfall regions. Experiments carried out to examine the utility of GFCM and ROI based regions in arriving at quantile estimates for ungauged sites in the study area reveal that the performance of the GFCM methodology is fairly close to that of the ROI approach. Errors were marginally lower for the GFCM approach in the analysis with observed point rainfall data over Karnataka, while the converse was noted in the analysis with gridded rainfall data over India. Neither of the approaches (CA, ROI) was found to be consistent in yielding the least error in quantile estimates over all the sites.
The existing approaches to regionalization of temperature are based on temperature time series or their related statistics, rather than the attributes affecting temperature in the study area, so independent validation of the delineated regions for homogeneity in temperature is not possible. Another drawback of the existing approaches is that they require an adequate number of sites with contemporaneous temperature records for regionalization, because the delineated regions are susceptible to sampling variability and randomness associated with temperature records that are often (i) short in length, (ii) limited to a contemporaneous time period and (iii) spatially sparse. To address these issues, a two-stage clustering approach is developed to arrive at regions that are homogeneous in terms of both monthly maximum and minimum temperatures (Tmax and Tmin). The first stage of the approach involves (i) identifying a common set of possible predictors (LSAVs) influencing Tmax and Tmin over the entire study area, and (ii) using correlations of those predictors with Tmax and Tmin, along with location indicators (latitude, longitude and altitude), as the basis to delineate sites in the study area into hard clusters through the global k-means clustering algorithm. The second stage involves (i) identifying appropriate LSAVs corresponding to each of the first-stage clusters, which could be considered as potential predictors, and (ii) using the potential predictors along with location indicators as the basis to partition each of the first-stage clusters into homogeneous temperature regions through the global fuzzy c-means clustering algorithm. A set of 28 homogeneous temperature regions was delineated over India using the proposed approach. Those regions are shown to be effective when compared to an existing set of 6 temperature regions over India, for which inter-site cross-correlations were found to be weak and negative for several months, which is undesirable. The utility of the proposed Tmax and Tmin homogeneous temperature regions in arriving at potential evapotranspiration (PET) estimates for ungauged locations within the study area was demonstrated; the estimates were found to be better than those based on the existing regions.
The existing approaches to regionalization of hydrometeorological variables are based on principal components (PCs) or statistics/indices determined from time series of those variables at monthly and seasonal scales. An issue with the use of PCs for regionalization is that they have to be extracted from contemporaneous records of the hydrometeorological variables, so the delineated regions may not be effective when the available records are limited to a contemporaneous time period. A drawback associated with the use of statistics/indices is that they (i) may not be meaningful when the data exhibit nonstationarity and (ii) do not encompass the complete information in the original time series, so the resulting regions may not be effective for the desired purpose. To address these issues, a new approach is proposed. It considers information extracted from wavelet transformations of the observed multivariate hydrometeorological time series as the basis for regionalization by the global fuzzy c-means clustering procedure. The approach can account for dynamic variability in the time series and its nonstationarity (if any). The effectiveness of the proposed approach in forming homogeneous hydrometeorological regions is demonstrated by application to India, as there are no prior attempts to form such regions over the country. The investigations resulted in the identification of 29 regions over India, which are found to be effective and meaningful. Drought Severity-Area-Frequency (SAF) curves are developed for each of the newly formed regions, considering the drought index to be the Standardized Precipitation Evapotranspiration Index (SPEI).
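A hedged sketch of the two-stage temperature regionalization described above, assuming scikit-learn and scikit-fuzzy; the standard k-means below stands in for the global k-means algorithm used in the thesis, and the site attributes are synthetic.

```python
# Stage 1: hard k-means on predictor correlations plus location.
# Stage 2: fuzzy c-means inside each hard cluster gives the final
# homogeneous temperature regions. Illustrative attributes only.
import numpy as np
import skfuzzy as fuzz
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# per site: corr(LSAV, Tmax), corr(LSAV, Tmin), lat, lon, altitude
sites = np.column_stack([rng.uniform(-1, 1, (400, 2)),
                         rng.uniform([8, 68, 0], [35, 97, 3000], (400, 3))])
z = (sites - sites.mean(0)) / sites.std(0)

stage1 = KMeans(n_clusters=5, n_init=10, random_state=7).fit_predict(z)

regions, offset = np.empty(400, dtype=int), 0
for k in range(5):                # FCM within each first-stage cluster
    part = z[stage1 == k]
    _, u, *_ = fuzz.cluster.cmeans(part.T, c=3, m=2.0,
                                   error=1e-5, maxiter=200, seed=7)
    regions[stage1 == k] = offset + np.argmax(u, axis=0)
    offset += 3
print("temperature regions formed:", len(np.unique(regions)))
```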
|