  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
421

Évaluation de la parole dysarthrique : Apport du traitement automatique de la parole face à l’expertise humaine / Evaluation of deviant zones in pathological speech : contribution of the automatic speech processing against the Human expertise

Laaridh, Imed 17 February 2017 (has links)
Dysarthria is a speech disorder resulting from neurological impairments of speech motor control, caused by lesions of the central or peripheral nervous system. It can be associated with different pathologies — Parkinson's disease, Amyotrophic Lateral Sclerosis (ALS), stroke, etc. — and affects different levels of speech production (respiratory, laryngeal, and supra-laryngeal). Most research on dysarthric speech relies on perceptual analyses. The best-known study, by F. L. Darley in 1969, organized dysarthria into 6 classes (completed with 2 additional classes in 2005). Nowadays, perceptual (auditory) evaluation is still the standard method in clinical practice for the diagnosis and therapeutic monitoring of patients. However, it is known to be subjective, non-reproducible, and time-consuming. These limitations make it inadequate for evaluating large corpora (in phonetic studies, for example) or for the longitudinal follow-up of dysarthric patients. To overcome them, professionals have long expressed the need for objective methods of evaluating disordered speech, and Automatic Speech Processing (ASP) tools were quickly identified as a potential answer to this demand.
The work presented in this document falls within this framework and studies the contribution these tools can make to the evaluation of dysarthric, and more generally pathological, speech. An automatic approach for detecting abnormal phones in dysarthric speech is proposed, and its behavior is analyzed on several speech corpora covering different pathologies, dysarthria classes, severity levels, and speech styles (read and spontaneous). Unlike most automatic methods in the literature, which provide a global evaluation of speech on items such as dysarthria severity or intelligibility, the proposed method focuses on the phone level, aiming to characterize the effects of dysarthria more precisely and to give accurate, useful feedback to potential users (clinicians, phoneticians, patients). The method consists of two essential phases: (1) an automatic phone-level alignment of the speech, and (2) an automatic classification of the resulting phones into two classes, normal and abnormal.
Compared to an annotation of phone anomalies provided by a human expert, taken as the "gold standard", the approach showed very encouraging results and proved able to detect anomalies at the phone level. It was also able to capture the evolution of dysarthria severity, suggesting its potential relevance to the longitudinal follow-up of dysarthric patients or to the automatic prediction of their intelligibility or dysarthria severity. The precision of the automatic phone alignment was found to depend on the severity, the pathology, the dysarthria class, and the phonetic category of each phone. Furthermore, speech style (read vs. spontaneous) had a marked effect on the behavior of both the automatic phone alignment and the anomaly detection. Finally, the results of an evaluation campaign, in which a jury of experts assessed the annotations produced by the proposed approach, are presented and discussed in order to outline the strengths and limitations of the system.
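The two-phase pipeline described above (forced alignment, then per-phone classification) can be illustrated with a minimal sketch. This is not the thesis's classifier: the reference statistics, phone labels, and the duration-only z-score rule are illustrative assumptions standing in for the real acoustic features.

```python
# Hypothetical reference statistics per phone: (mean, stdev) of duration
# in seconds, as might be estimated from typical speech. Illustrative values.
REFERENCE = {
    "a": (0.085, 0.020),
    "t": (0.050, 0.012),
    "s": (0.110, 0.025),
}

def classify_phones(aligned_phones, z_threshold=2.5):
    """Phase 2 of the pipeline: label each aligned phone normal/abnormal.

    `aligned_phones` is the output of phase 1 (automatic alignment):
    a list of (phone_label, duration_in_seconds) pairs. A phone is
    flagged abnormal when its duration deviates from the reference
    mean by more than `z_threshold` standard deviations.
    """
    results = []
    for label, duration in aligned_phones:
        mu, sigma = REFERENCE[label]
        z = abs(duration - mu) / sigma
        results.append((label, "abnormal" if z > z_threshold else "normal"))
    return results
```

A real system would replace the duration z-score with a trained classifier over acoustic features, but the phone-level granularity of the feedback is the same.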
422

探索類神經網路於網路流量異常偵測中的時效性需求 / Exploring the timeliness requirement of artificial neural networks in network traffic anomaly detection

連茂棋, Lian, Mao-Ci Unknown Date (has links)
The prosperity of the cloud means that people do almost everything through the Internet, but there are also people with bad intentions who use malicious programs to launch attacks or steal information over network connections. To prevent such cyber-attacks, network traffic must be checked continuously. In the current cloud environment, however, network data is so large and complex that checking all of it is both time-consuming and inefficient. This study uses TensorFlow with multiple Graphics Processing Units (GPUs) to implement an Artificial Neural Network (ANN) mechanism that analyzes network traffic data and derives detection rules capable of distinguishing normal from malicious traffic; we call this Network Traffic Anomaly Detection (NTAD). Experiments were also designed to verify the timeliness and effectiveness of the derived ANN mechanism. During the experiments, we found that using more GPUs reduces training time, and that using three GPUs for the computation meets the timeliness requirement of NTAD. The preliminary results show that the proposed mechanism outperforms an ANN trained with the standard back-propagation algorithm.
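The observation that more GPUs shorten training comes from data-parallel training: each device computes gradients on its shard of the batch, and the shard gradients are averaged. A pure-Python sketch (not the study's TensorFlow code; the linear model and learning rate are illustrative assumptions) shows why the averaged update equals the single-device full-batch update:

```python
def gradient(w, batch):
    # d/dw of mean squared error for the 1-D linear model y = w * x
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def data_parallel_step(w, batch, n_workers, lr=0.01):
    """One training step with the batch split across `n_workers`.

    Each worker computes the gradient on its shard (on a real system,
    on its own GPU, in parallel); the shard gradients are then averaged
    and applied once. With equal shard sizes the result is identical to
    a single-worker step, only the wall-clock time shrinks.
    """
    shard = len(batch) // n_workers
    grads = [gradient(w, batch[i * shard:(i + 1) * shard])
             for i in range(n_workers)]
    return w - lr * sum(grads) / n_workers
```

Because the averaged shard gradients reproduce the full-batch gradient, adding workers changes only how long the step takes, which is exactly the timeliness question the experiment measures.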
423

[en] A MOBILE AND ONLINE OUTLIER DETECTION OVER MULTIPLE DATA STREAMS: A COMPLEX EVENT PROCESSING APPROACH FOR DRIVING BEHAVIOR DETECTION / [pt] DETECÇÃO MÓVEL E ONLINE DE ANOMALIA EM MÚLTIPLOS FLUXOS DE DADOS: UMA ABORDAGEM BASEADA EM PROCESSAMENTO DE EVENTOS COMPLEXOS PARA DETECÇÃO DE COMPORTAMENTO DE CONDUÇÃO

IGOR OLIVEIRA VASCONCELOS 24 July 2017 (has links)
[en] Driving is a daily task that allows individuals to travel faster and more comfortably; however, more than half of fatal crashes are related to reckless driving. Reckless maneuvers can be detected accurately by analyzing data on driver-vehicle interactions — abrupt turns, acceleration, and deceleration, for instance. Although algorithms for online anomaly detection exist, they are usually designed to run on computers with high computational power, and they typically scale through parallel, grid, or cloud computing. This thesis presents an online anomaly detection approach based on complex event processing to enable driving behavior classification. In addition, we investigate whether mobile devices with limited computational power, such as smartphones, can be used for online detection of driving behavior. To do so, we first model and evaluate three online anomaly detection algorithms in the data stream processing paradigm, which take data from the smartphone and the in-vehicle embedded sensors as input. The advantage of stream processing is that it reduces both the amount of data transmitted from the mobile device to servers or the cloud and the energy/battery usage due to sensor data transmission, while allowing operation even when the mobile device is disconnected.
To classify drivers, a statistical mechanism used in document mining to evaluate the importance of a word in a collection of documents, the inverse document frequency, was adapted to identify the importance of an anomaly in a data stream and thus to evaluate quantitatively how cautious or reckless drivers' maneuvers are. Finally, the approach (using the algorithm that achieved the best result in the first step) was evaluated through a case study of the driving behavior of 25 drivers in a real-world scenario. The results show a classification accuracy of 84 percent and an average processing time of 100 milliseconds.
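The inverse-document-frequency adaptation can be sketched by treating each driver's stream as a "document" and each anomaly type as a "term". The driver names and anomaly labels below are hypothetical; this is only the weighting idea, not the thesis's implementation:

```python
import math

def anomaly_idf(anomaly_streams):
    """IDF-style weight for each anomaly type.

    `anomaly_streams` maps driver -> list of detected anomaly types.
    Anomaly types that appear in fewer drivers' streams get a higher
    weight, mirroring how IDF boosts rare words in document mining.
    """
    n = len(anomaly_streams)
    types = {a for stream in anomaly_streams.values() for a in stream}
    return {a: math.log(n / sum(1 for s in anomaly_streams.values() if a in s))
            for a in types}

def recklessness(stream, idf):
    # tf-idf style score: a driver who triggers rare anomaly types often
    # scores higher than one who only triggers common ones
    return sum(stream.count(a) * w for a, w in idf.items())
```

An anomaly type every driver exhibits (routine braking, say) gets weight zero and contributes nothing, so the score concentrates on genuinely unusual maneuvers.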
424

Abordagem semi-supervisionada para detecção de módulos de software defeituosos / A semi-supervised approach for the detection of defective software modules

OLIVEIRA, Paulo César de 31 August 2015 (has links)
With ever-increasing market competition, high-quality applications are required to automate services. To guarantee software quality, testing it to find failures early is essential in the development life cycle. The purpose of software testing is to find failures that can be fixed and, consequently, to increase the quality of the software under development. As software grows, more tests are needed to prevent or find defects; but the more tests are designed and executed, the more human and infrastructure resources are required. Moreover, the time available for testing activities is usually insufficient, allowing defects to escape. Companies are constantly looking for cheaper and more effective ways to detect software defects, and in recent years many researchers have sought mechanisms to predict software defects automatically. Machine learning techniques have become a research target as a way of finding defects in software modules. Many supervised approaches have been used for this purpose, but labeling software modules as defective or non-defective for training a classifier is very costly and can make the use of machine learning impractical. In this context, this work analyzes and compares unsupervised and semi-supervised approaches for detecting defective software modules. Unsupervised methods (based on anomaly detection) and semi-supervised methods built on the AutoMLP and Naive Bayes classifiers were used. To evaluate and compare these methods, NASA datasets available in the PROMISE Software Engineering Repository were used.
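Semi-supervised self-training of the kind compared above can be sketched with a tiny one-feature Gaussian Naive Bayes: train on the few labeled modules, pseudo-label the unlabeled ones, and retrain on the union. The single "complexity metric" feature and the toy data are assumptions; the real study uses full NASA metric sets and AutoMLP as well.

```python
import math

def fit_gnb(samples):
    """Fit a one-feature Gaussian Naive Bayes: per-class (mean, var, prior)."""
    model = {}
    for label in {y for _, y in samples}:
        xs = [x for x, y in samples if y == label]
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs) or 1e-9
        model[label] = (mu, var, len(xs) / len(samples))
    return model

def predict(model, x):
    def log_post(mu, var, prior):
        return (math.log(prior) - 0.5 * math.log(2 * math.pi * var)
                - (x - mu) ** 2 / (2 * var))
    return max(model, key=lambda c: log_post(*model[c]))

def self_train(labeled, unlabeled, rounds=3):
    """Self-training: repeatedly pseudo-label the unlabeled modules with
    the current classifier and refit on labeled + pseudo-labeled data."""
    data = list(labeled)
    for _ in range(rounds):
        model = fit_gnb(data)
        data = list(labeled) + [(x, predict(model, x)) for x in unlabeled]
    return fit_gnb(data)
```

The appeal in the defect-prediction setting is exactly the one the abstract names: only a handful of modules need the expensive manual defective/non-defective label.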
425

Učení se automatů pro rychlou detekci anomálií v síťovém provozu / Automata Learning for Fast Detection of Anomalies in Network Traffic

Hošták, Viliam Samuel January 2021 (has links)
The focus of this thesis is fast network anomaly detection based on automata learning. It describes and compares several chosen automata learning algorithms, including their adaptation to learning network characteristics. Various network anomaly detection methods based on learned automata are proposed, able to detect both sequential and statistical anomalies in the target communication. For this purpose, they exploit the automata's mechanisms, transformations of the automata, and statistical analysis. The proposed detection methods were implemented and evaluated on network traffic of the IEC 60870-5-104 protocol, which is commonly used in industrial control systems.
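The core idea — learn an automaton from known-good traffic, then flag sequences the automaton cannot replay — can be sketched as follows. The transition-set "automaton" and the IEC 104-style message names are simplifying assumptions; the thesis compares far richer learning algorithms.

```python
def learn_automaton(traces):
    """Learn a simple automaton from known-good message sequences:
    the set of observed (state, symbol) transitions, where the state is
    just the previous symbol (a crude stand-in for real automata learning)."""
    transitions = set()
    for trace in traces:
        state = "START"
        for symbol in trace:
            transitions.add((state, symbol))
            state = symbol
    return transitions

def is_anomalous(automaton, trace):
    """Flag a sequence as anomalous if it uses a transition never seen
    in the training traffic (a sequential anomaly)."""
    state = "START"
    for symbol in trace:
        if (state, symbol) not in automaton:
            return True
        state = symbol
    return False
```

Statistical anomalies (e.g. a legal transition taken far too often) would additionally require counting transition frequencies, which the thesis's methods cover and this sketch omits.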
426

ARTIFICIAL INTELLIGENCE EMPOWERED AUGMENTED REALITY APPLICATION FOR ELECTRICAL ENGINEERING LAB EDUCATION

John Luis Estrada (11836646) 20 December 2021 (has links)
With the rising popularity of online and hybrid learning, this study explores an innovative method to improve students' learning experience with Electrical and Computer Engineering lab equipment by employing cutting-edge technologies in augmented reality (AR) and artificial intelligence (AI). An automatic object detection component, integrated with the AR application, is developed to recognize equipment including the multimeter, oscilloscope, wave generator, and power supply. The deep neural network model MobileNet SSD v2, built with the object detection API of the TensorFlow (TF) framework, is used for equipment recognition. When a piece of equipment is detected, the corresponding AR-based tutorial is displayed on the screen. In this study, a tutorial for the multimeter is implemented. To give users an intuitive and easy-to-follow tutorial, virtual models are superimposed on the real multimeter, and images and web links are added to the tutorial for a better learning experience. The Unity3D game engine is used as the primary development tool to merge the two frameworks and build immersive scenarios in the tutorial.
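The detection-to-tutorial flow can be sketched as a small routing step. Everything here is hypothetical — the asset paths, labels, and threshold are illustrative, and in the actual application this logic would live in Unity3D rather than Python:

```python
# Hypothetical tutorial registry: detected class label -> tutorial asset.
TUTORIALS = {
    "multimeter": "tutorials/multimeter_ar.scene",
    "oscilloscope": "tutorials/oscilloscope_ar.scene",
}

def route_detection(detections, score_threshold=0.6):
    """Pick the tutorial for the highest-confidence detection.

    `detections` mimics the (label, score) pairs an SSD-style detector
    emits per frame; low-confidence detections are ignored so the AR
    overlay does not flicker between tutorials.
    """
    confident = [(score, label) for label, score in detections
                 if score >= score_threshold and label in TUTORIALS]
    if not confident:
        return None
    return TUTORIALS[max(confident)[1]]
```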
427

Dolování neobvyklého chování v datech trajektorií / Mining Anomalous Behaviour in Trajectory Data

Koňárek, Petr January 2017 (has links)
The goal of this work is to provide an overview of approaches for mining anomalous behavior in trajectory data. The next part proposes a mining task for outlier detection in trajectories and selects appropriate methods for it. The selected methods are implemented as an application for detecting outlier trajectories.
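A minimal example of the kind of task the work addresses is distance-based trajectory outlier detection: a trajectory is an outlier if even its nearest neighbor is far away. The mean point-to-point distance below assumes trajectories sampled at the same timestamps; real methods use measures such as Hausdorff or DTW distance.

```python
def traj_distance(t1, t2):
    """Mean point-to-point Euclidean distance between two trajectories
    sampled at the same timestamps (a deliberate simplification)."""
    return sum(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(t1, t2)) / len(t1)

def outlier_trajectories(trajectories, threshold):
    """Return indices of trajectories whose distance to their nearest
    neighbor exceeds `threshold` — a basic distance-based outlier rule."""
    outliers = []
    for i, t in enumerate(trajectories):
        nearest = min(traj_distance(t, u)
                      for j, u in enumerate(trajectories) if j != i)
        if nearest > threshold:
            outliers.append(i)
    return outliers
```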
428

Analýza provozních dat a detekce anomálií při běhu úloh na superpočítači / Analysis of Operational Data and Detection od Anomalies during Supercomputer Job Execution

Stehlík, Petr January 2018 (has links)
In recent years supercomputers have grown ever larger and more complex, which raises the problem of exploiting the full potential of the system. The problem is amplified by the lack of monitoring tools tailored to the users of these systems. The goal of this thesis is to create a tool, called Examon Web, for analyzing and visualizing the operational data of a supercomputer, and to perform an in-depth analysis of these data using neural networks. The networks determine whether a given job ran correctly or showed signs of suspicious and undesirable behavior, such as unaligned access to main memory or low utilization of the allocated resources; the user is informed of these findings through the GUI. Examon Web is built on the Examon framework, which collects and processes metric data from the supercomputer and stores them in a KairosDB database. The implementation spans disciplines from GUI design and implementation, through data analysis, data mining, and neural networks, to the server-side interface. Examon Web targets primarily users but can also be used by administrators. The GUI is built with the Angular framework together with the Dygraphs and Bootstrap libraries. Users can thus analyze time series of various metrics of their jobs and, like administrators, inspect the current state of the supercomputer. This state is shown either as several globally aggregated metrics over the last 30 minutes or as a 3D (or 2D) model of the supercomputer fed with data from the individual nodes via the MQTT protocol. Continuous data acquisition uses a WebSocket interface with a custom subscribe/unsubscribe mechanism for the specific metrics displayed in the model. When analyzing a submitted job, the user has three different views of it. The first offers an overall summary of the job: the resources used, the run time, and the load on the part of the supercomputer the job occupied, together with the neural networks' verdict on whether the job is suspicious. The other two views show metrics from the performance and energy perspectives.
To train the neural networks, a new dataset had to be created from the Galileo supercomputer. It contains over 1100 jobs monitored on that machine, of which 500 were manually annotated and then used for training. The neural networks use a back-propagation model suitable for annotating fixed-length time series. In total, 12 networks were created for metrics covering CPU and memory utilization and other components, including, for example, the share of total CPU time spent in the C6 power-saving state. These networks are mutually independent, and after experiments their final configuration of 80-20-4-3-1 (80 input neurons down to 1 output neuron) gave the best results. A final network (with configuration 12-4-3-1) annotated the outputs of the previous networks. The overall accuracy of the two-class classification system is 84 %, which is very good for the model used. This thesis delivers two products. The first is the user interface and its server side, Examon Web, which, as a layer extending the Examon system, will help spread that system to more users or directly to other supercomputing centers. The second is a partially annotated dataset that may help others in their research; it is the result of a collaboration between VUT, UNIBO, and CINECA. Both products will be released as open source. Examon Web was presented at the 1st Users' Conference in Ostrava organized by IT4Innovations. Future extensions may include annotating the rest of the dataset and adding decision trees to Examon Web to pinpoint the exact reason for a job's bad behavior.
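The 80-20-4-3-1 topology mentioned above can be sketched as a plain feed-forward pass: 80 samples of one job metric in, one suspiciousness score out. The weights below are random and untrained (the real networks are trained by back-propagation on the annotated jobs), so only the shape of the computation is meaningful.

```python
import math
import random

LAYERS = [80, 20, 4, 3, 1]  # the per-metric topology reported above

def init_network(layers, seed=0):
    """Random weights: one (n_in + 1)-wide row per output neuron,
    with the last entry of each row serving as the bias."""
    rng = random.Random(seed)
    return [[[rng.uniform(-1, 1) for _ in range(n_in + 1)]
             for _ in range(n_out)]
            for n_in, n_out in zip(layers, layers[1:])]

def forward(net, inputs):
    """Feed-forward pass with sigmoid activations: a fixed-length
    time series (80 metric samples) is squashed to a score in (0, 1)."""
    activations = inputs
    for layer in net:
        activations = [
            1 / (1 + math.exp(-(w[-1] + sum(a * wi for a, wi in zip(activations, w)))))
            for w in layer
        ]
    return activations[0]
```

Twelve such networks, one per metric, plus a 12-4-3-1 combiner over their outputs, reproduce the ensemble structure the abstract describes.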
429

Detekce Útoků v Síťovém Provozu / Intrusion Detection in Network Traffic

Homoliak, Ivan Unknown Date (has links)
This thesis deals with anomaly-based detection of network attacks using machine learning techniques. First, state-of-the-art datasets intended for verifying the functionality of intrusion detection systems are presented, together with related work applying statistical analysis and machine learning techniques to finding network attacks. The next part presents the design of the author's own collection of metrics, called Advanced Security Network Metrics (ASNM), which is part of a conceptual automated intrusion detection system (AIPS). Two different approaches to obfuscation — tunneling and modification of network characteristics — are then proposed and discussed as means of altering how attacks are executed. Experiments show that the obfuscations are able to evade attack detection by a classifier based on ASNM features. On the other hand, including these obfuscations in the classifier's training process can improve its detection capabilities. The thesis also presents an alternative view of the obfuscation techniques that modify network characteristics, demonstrating their use as an approximation of a network traffic normalizer based on suitable training data.
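The idea of hardening a classifier by training on obfuscated attack variants can be sketched as data augmentation. The timing-stretch transform and the toy "mean inter-packet gap" feature below are illustrative stand-ins; the actual ASNM feature set and obfuscation tooling are far richer.

```python
def obfuscate(flow, delay_factor):
    """Timing obfuscation: stretch inter-packet gaps by `delay_factor`,
    altering the flow's network characteristics (one of the two
    obfuscation families discussed above, greatly simplified)."""
    times = flow["pkt_times"]
    start = times[0]
    return {**flow, "pkt_times": [start + (t - start) * delay_factor
                                  for t in times]}

def mean_gap(flow):
    # a toy ASNM-like metric: mean inter-packet gap in seconds
    t = flow["pkt_times"]
    return (t[-1] - t[0]) / (len(t) - 1)

def augment(attacks, factors=(1.0, 2.0, 5.0)):
    """Include obfuscated variants of each attack in the training set —
    the step reported to improve the classifier's robustness."""
    return [obfuscate(a, f) for a in attacks for f in factors]
```

A classifier trained only on factor-1.0 flows may key on the original timing; training on the augmented set forces it to rely on characteristics the obfuscation does not change.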
430

Performance problem diagnosis in cloud infrastructures

Ibidunmoye, Olumuyiwa January 2016 (has links)
Cloud datacenters comprise hundreds or thousands of disparate application services, each with stringent performance and availability requirements, sharing a finite set of heterogeneous hardware and software resources. The implication of such a complex environment is that performance problems, such as slow application response and unplanned downtime, have become the norm rather than the exception, resulting in decreased revenue, damaged reputation, and a huge human effort in diagnosis. Though the causes can be as varied as application issues (e.g. bugs), machine-level failures (e.g. a faulty server), and operator errors (e.g. misconfigurations), recent studies have identified capacity-related issues, such as resource shortage and contention, as the cause of most performance problems on the Internet today. As cloud datacenters become increasingly autonomous, there is a need for automated performance diagnosis systems that can adapt their operation to the changing workload and topology of the infrastructure. In particular, such systems should be able to detect anomalous performance events, uncover manifestations of capacity bottlenecks, localize the actual root cause(s), and possibly suggest or actuate corrections. This thesis investigates approaches for diagnosing performance problems in cloud infrastructures. We present the outcome of an extensive survey of existing research contributions addressing performance diagnosis in diverse systems domains. We also present models and algorithms for detecting anomalies in real-time application performance and for identifying anomalous datacenter resources based on operational metrics and spatial dependencies across datacenter components. Empirical evaluations show how our approaches can be used to improve end-user experience, service assurance, and support for root-cause analysis. / Cloud Control (C0590801)
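Detecting anomalous performance events in a stream of operational metrics can be illustrated with an exponentially weighted moving average and deviation band. This is a minimal stand-in for the thesis's detection models, with an assumed response-time metric, smoothing factor, and band width:

```python
def ewma_detector(series, alpha=0.2, k=3.0, warmup=3):
    """Online anomaly detection over a metric stream (e.g. response
    times in ms): flag points outside mean ± k * mean-absolute-deviation,
    where both estimates are EWMA-smoothed. The first `warmup` points
    only update the estimates, since the band is not yet trustworthy.
    """
    mean = series[0]
    dev = 0.0
    anomalies = []
    for i, x in enumerate(series[1:], start=1):
        if i > warmup and dev > 0 and abs(x - mean) > k * dev:
            anomalies.append(i)
        # update the running estimates after the check (online setting)
        dev = (1 - alpha) * dev + alpha * abs(x - mean)
        mean = (1 - alpha) * mean + alpha * x
    return anomalies
```

Because both estimates adapt, the detector tracks gradual workload shifts and flags only abrupt deviations — the "anomalous performance events" step that precedes bottleneck identification and root-cause localization.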
