191

Visual analytics for maritime anomaly detection

Riveiro, María José January 2011 (has links)
The surveillance of large sea areas typically involves the analysis of huge quantities of heterogeneous data. In order to support the operator while monitoring maritime traffic, the identification of anomalous behavior or situations that might need further investigation may reduce the operator's cognitive load. While existing mining applications support the identification of anomalies, autonomous anomaly detection systems are rarely used for maritime surveillance. Anomaly detection is normally a complex task that can hardly be solved by purely visual or purely computational methods. This thesis suggests and investigates the adoption of visual analytics principles to support the detection of anomalous vessel behavior in maritime traffic data. This adoption involves studying the analytical reasoning process that needs to be supported, using combined automatic and visualization approaches to support that process, and evaluating the resulting integration. The analysis of data gathered during interviews and participant observations at various maritime control centers, together with the inspection of video recordings of real anomalous incidents, led to a characterization of the analytical reasoning process that operators go through when monitoring traffic. These results are complemented with a literature review of anomaly detection techniques applied to sea traffic. A particular statistics-based technique is implemented, tested, and embedded in a proof-of-concept prototype that allows user involvement in the detection process. The quantitative evaluation carried out with the prototype reveals that participants who used the visualization of normal behavioral models outperformed the group without such aid. 
The qualitative assessment shows that domain experts are positive towards automatic support and the visualization of normal behavioral models, since these aids may reduce reaction time as well as increase trust in, and comprehension of, the system. Based on the lessons learned, this thesis provides recommendations for designers and developers of maritime control and anomaly detection systems, as well as guidelines for carrying out evaluations of visual analytics environments. / Maria Riveiro is also affiliated to Informatics Research Centre, Högskolan i Skövde / Information Fusion Research Program, Högskolan i Skövde
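A statistics-based normalcy model of the kind the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not Riveiro's actual prototype: it learns a Gaussian model of vessel speed from historical observations and flags observations far from the learned mean.

```python
import statistics

class NormalModel:
    """Gaussian model of vessel speed; observations far from the learned
    mean are flagged as anomalous. Illustrative sketch only: the class
    name, the speed feature, and the 3-sigma rule are assumptions."""

    def __init__(self, speeds):
        self.mean = statistics.mean(speeds)
        # Guard against a zero standard deviation for constant histories.
        self.sd = statistics.pstdev(speeds) or 1.0

    def is_anomalous(self, speed, k=3.0):
        """True if the observation deviates more than k standard deviations."""
        return abs(speed - self.mean) > k * self.sd
```

A model like this would be trained per sea region; visualizing its mean and spread is what lets an operator judge why a vessel was flagged.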
192

An intrusion detection system for supervisory control and data acquisition systems

Hansen, Sinclair D. January 2008 (has links)
Despite increased awareness of threats against Critical Infrastructure (CI), the securing of Supervisory Control and Data Acquisition (SCADA) systems remains incomplete. The majority of research focuses on preventative measures such as improving communication protocols and implementing security policies. New attempts are being made to use commercial Intrusion Detection System (IDS) software to protect SCADA systems. These have limited effectiveness because the ability to detect specific threats requires the context of the SCADA system. SCADA context is defined as any information that can be used to characterise the current status and function of the SCADA system. In this thesis the standard IDS model is used with varying SCADA data sources to provide SCADA context to a signature and anomaly detection engine. A novel addition that enhances the IDS model is the use of the SCADA data sources to simulate the remote SCADA site. The data resulting from the simulation is used by the IDS to make a behavioural comparison between the real and simulated SCADA sites. To evaluate the enhanced IDS model, the specific context of a water and wastewater system is used to develop a prototype. Using this context it was found that the inflow between sites has diurnal characteristics similar to network traffic. This introduced the idea of using inflow data to detect abnormal behaviour at a remote wastewater site. Several experiments are proposed to validate the prototype using data from a real SCADA site. Initial results show good promise for detecting abnormal behaviour and specific threats against water and wastewater SCADA systems.
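The behavioural comparison between the real and simulated sites can be sketched as a simple deviation score over paired inflow readings. This is an illustrative stand-in, not Hansen's prototype; the function name and the mean-absolute-difference metric are assumptions.

```python
def behavioural_deviation(real, simulated):
    """Mean absolute difference between observed inflow readings and the
    simulated site's expected readings. A large score suggests the remote
    site is behaving abnormally relative to the simulation.
    Illustrative sketch only."""
    if not real or len(real) != len(simulated):
        raise ValueError("series must be non-empty and the same length")
    return sum(abs(r - s) for r, s in zip(real, simulated)) / len(real)
```

An IDS built this way would raise an alert when the score crosses a threshold tuned on known-good periods.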
193

Finding early signals of emerging trends in text through topic modeling and anomaly detection

Redyuk, Sergey January 2018 (has links)
Trend prediction has become an extremely popular practice in many industrial sectors and academia. It is beneficial for strategic planning and decision making, and facilitates exploring new research directions that are not yet mature. To anticipate future trends in an academic environment, a researcher needs to analyze an extensive amount of literature and scientific publications, and gain expertise in the particular research domain. This approach is time-consuming and extremely complicated due to the abundance and diversity of the data. Modern machine learning tools, on the other hand, are capable of processing tremendous volumes of data, reaching real-time human-level performance for various applications. Achieving high performance in unsupervised prediction of emerging trends in text can indicate promising directions for future research and potentially lead to breakthrough discoveries in any field of science. This thesis addresses the problem of emerging trend prediction in text in three main steps: it utilizes an HDP topic model to represent the latent topic space of a given temporal collection of documents, applies the DBSCAN clustering algorithm to detect groups with high-density regions in the document space potentially leading to emerging trends, and applies KL divergence in order to capture deviating text which might indicate the birth of a new, not-yet-seen phenomenon. In order to empirically evaluate the effectiveness of the proposed framework and estimate its predictive capability, both synthetically generated corpora and real-world text collections from arXiv.org, an open-access electronic archive of scientific publications (category: Computer Science), and NIPS publications are used. For synthetic data, a text generator is designed which provides ground truth to evaluate the performance of the anomaly detection algorithms. This work contributes to the body of knowledge in the area of emerging trend prediction in several ways. 
First, the method of combining topic modeling and anomaly detection algorithms for emerging trend prediction is a novel approach and highlights new perspectives in the subject area. Second, the three-level word-document-topic topology of anomalies is formalized in order to detect anomalies in temporal text collections which might lead to emerging trends. Finally, a framework for unsupervised detection of early signals of emerging trends in text is designed. The framework captures new vocabulary, documents with deviating word/topic distributions, and drifts in the latent topic space as three main indicators of a novel phenomenon, in accordance with the three-level topology of anomalies. The framework is not limited to particular sources of data and can be applied to any temporal text collection in combination with any online method for soft clustering.
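The KL-divergence step of such a pipeline can be sketched in isolation. This is a minimal illustration only: the thesis compares HDP topic distributions, which are replaced here by plain probability vectors, and the function names and threshold are assumptions.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(P || Q) between two discrete
    distributions given as equal-length lists of probabilities.
    The small eps avoids log(0) for zero-probability entries."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def flag_deviating(docs, background, threshold=0.5):
    """Return indices of documents whose topic distribution deviates from
    the background distribution by more than `threshold` nats; such
    documents are candidate early signals of a new phenomenon."""
    return [i for i, d in enumerate(docs) if kl_divergence(d, background) > threshold]
```

In a full system the background distribution would be re-estimated per time slice, so that drift in the latent topic space also shows up as divergence.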
194

Atlantic : a framework for anomaly traffic detection, classification, and mitigation in SDN

Silva, Anderson Santos da January 2015 (has links)
Software-Defined Networking (SDN) aims to alleviate the limitations imposed by traditional IP networks by decoupling the network tasks performed on each device into particular planes. This approach offers several benefits, such as standard communication protocols, centralized network functions, and specific network elements, such as controller devices. Despite these benefits, there is still a lack of adequate support for performing tasks related to traffic classification, because (i) the native flow features available in OpenFlow, such as packet and byte counts, do not convey sufficient information to accurately distinguish between some types of flows; (ii) there is a lack of support to determine the optimal set of flow features to characterize different types of traffic profiles; (iii) there is a need for a flexible way of composing different mechanisms to detect, classify and mitigate network anomalies using software abstractions; (iv) there is a need for online traffic monitoring using lightweight/low-cost techniques; (v) there is no framework capable of managing anomaly detection, classification and mitigation in a coordinated manner while considering all these demands. 
Additionally, it is well known that anomaly traffic detection and classification mechanisms need to be flexible and easy to manage in order to detect the ever-growing spectrum of anomalies. Detection and classification are difficult tasks for several reasons, including the need to obtain an accurate and comprehensive view of the network, the ability to detect the occurrence of new attack types, and the need to deal with misclassification. In this dissertation, we argue that Software-Defined Networking (SDN) forms a propitious environment for the design and implementation of more robust and extensible anomaly classification schemes. Different from other approaches in the literature, which individually tackle either anomaly detection or classification or mitigation, we present a management framework to perform these tasks jointly. Our proposed framework is called ATLANTIC and it combines the use of lightweight techniques for traffic monitoring with heavyweight, but accurate, techniques to classify traffic flows. As a result, ATLANTIC is a flexible framework capable of categorizing traffic anomalies and using the information collected to handle each traffic profile in a specific manner, e.g., blocking malicious flows.
195

Modelling of epidemiological surveillance data from wildlife for the detection of unusual health events

Petit, Eva 09 February 2011 (has links)
Recent studies have shown that amongst emerging infectious disease events in humans, about 40% were zoonoses linked to wildlife. Disease surveillance of wildlife should help to improve the health protection of these animals and also of the domestic animals and humans that are exposed to these pathogenic agents. Our aim was to develop tools capable of detecting unusual disease events in free-ranging wildlife, by adopting a syndromic approach, as used in human health surveillance, with pathological profiles as early, unspecific health indicators. We used the information registered by SAGIR, a national network monitoring causes of death in wildlife in France since 1986. More than 50,000 cases of mortality in wildlife were recorded up to 2007, representing 244 species of terrestrial mammals and birds, and were attributed to 220 different causes of death. The network was first evaluated for its capacity to detect unusual events early. Syndromic classes were then defined by a statistical typology of the lesions observed on the carcasses. Syndrome time series were analyzed using two complementary detection methods: a robust detection algorithm developed by Farrington and a generalized linear model with periodic terms. Historical trends in the occurrence of these syndromes and greater-than-expected counts (signals) were identified. Reports of unusual mortality events in the network bulletin were used to interpret these signals. The study analyses the relevance of applying syndromic surveillance to this type of data and gives elements for future improvements.
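The "greater-than-expected counts" idea can be sketched with a much-simplified exceedance threshold. This is an illustrative stand-in for the Farrington algorithm named in the abstract (the real method uses an overdispersed quasi-Poisson regression with reweighting of past outbreaks); the function name and z-score rule are assumptions.

```python
import statistics

def exceedance_signal(history, current, z=2.0):
    """Flag `current` as a signal if it exceeds the mean of the
    historical baseline counts by more than z standard deviations.
    A deliberately simplified stand-in for the Farrington threshold."""
    mean = statistics.mean(history)
    sd = statistics.pstdev(history) or 1.0  # avoid division by zero
    return (current - mean) / sd > z
```

Applied per syndrome time series, signals like this are then interpreted against external reports, as the study does with the network bulletin.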
197

Explanation Methods for Bayesian Networks

Helldin, Tove January 2009 (has links)
The international maritime industry is growing fast due to an increasing amount of transport by sea. In pace with this development, maritime surveillance capacity must be expanded as well, in order to handle the increasing number of hazardous cargo transports, attacks, piracy, etc. In order to detect such events, anomaly detection methods and techniques can be used. Moreover, since surveillance systems process huge amounts of sensor data, anomaly detection techniques can be used to filter out or highlight interesting objects or situations for an operator. Making decisions upon large amounts of sensor data can be a challenging and demanding activity for the operator, not only due to the quantity of the data; factors such as time pressure, high stress and uncertain information further aggravate the task. Bayesian networks can be used to detect anomalies in data and have, in contrast to many other opaque machine learning techniques, some important advantages. One of these advantages is that a user can understand and interpret the model, due to its graphical nature. This thesis aims to investigate how the output from a Bayesian network can be explained to a user, first by reviewing and presenting the methods that exist and second by carrying out experiments. The experiments investigate whether two explanation methods can be used to explain the inferences made by a Bayesian network, in order to support the operator's situation awareness and decision-making process when deployed on an anomaly detection problem in the maritime domain.
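The basic inference step that a Bayesian network chains together, and that explanation methods must account for, is Bayes' rule at a single node. A minimal sketch for a two-state "anomaly" node (the function name and numbers are assumptions, not from the thesis):

```python
def posterior_anomaly(prior, likelihood_given_anomaly, likelihood_given_normal):
    """Posterior P(anomaly | evidence) from Bayes' rule for a binary node:
    P(A|E) = P(E|A)P(A) / (P(E|A)P(A) + P(E|~A)P(~A)).
    Illustrative sketch of the elementary step a Bayesian network composes."""
    num = prior * likelihood_given_anomaly
    den = num + (1 - prior) * likelihood_given_normal
    return num / den
```

Explanation methods of the kind the thesis evaluates aim to communicate, for instance, which pieces of evidence moved this posterior the most, so the operator can judge whether to trust an alert.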
198

Applying the two-dimensional wavelet transform for the detection of web attacks

Mozzaquatro, Bruno Augusti 27 February 2012 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / With the increase of web traffic comes a variety of threats to the security of web applications. The threats arise from vulnerabilities inherent in web systems, with malicious code or content injection being the most exploited vulnerability in web attacks. The injection vulnerability allows an attacker to insert information or a program in improper places, causing damage to customers and organizations. Its signature is a change in the character frequency distribution of some requests within a set of web requests. Anomaly-based intrusion detection systems have been used to counter these types of attacks, due to the diversity and complexity found in web attacks. In this context, this work proposes a new anomaly-based detection algorithm that applies the two-dimensional wavelet transform to the detection of web attacks. The algorithm eliminates the need for a training phase (which requires reliable data, which is hard to obtain) and searches for character frequency anomalies in a set of web requests through analysis in multiple directions and resolutions. The experimental results demonstrate the feasibility of the technique for detecting web attacks. After adjustments of different parameters, the algorithm obtained detection rates of up to 100% while eliminating the occurrence of false positives.
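The character-frequency representation the abstract describes, and one level of a row-wise Haar transform over it, can be sketched as follows. This is a hypothetical illustration, not the thesis algorithm: the real work applies a full two-dimensional wavelet analysis in multiple directions and resolutions.

```python
def char_frequency_matrix(requests, alphabet):
    """One row per web request, one column per character: the relative
    frequency of that character in the request. Injection attacks tend
    to distort these distributions relative to normal requests."""
    mat = []
    for req in requests:
        counts = [req.count(c) for c in alphabet]
        total = sum(counts) or 1
        mat.append([c / total for c in counts])
    return mat

def haar_rows(mat):
    """One level of the Haar transform along each row: pairwise averages
    in the first half of the output, pairwise differences (details) in
    the second half. Large detail coefficients expose abrupt changes."""
    out = []
    for row in mat:
        avg = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row) - 1, 2)]
        det = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row) - 1, 2)]
        out.append(avg + det)
    return out
```

Applying the transform along columns as well (across requests) is what makes the analysis two-dimensional: anomalies then stand out as large detail coefficients in either direction.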
199

Performance anomaly detection and resolution for autonomous clouds

Ibidunmoye, Olumuyiwa January 2017 (has links)
Fundamental properties of cloud computing such as resource sharing and on-demand self-service are driving a growing adoption of the cloud for hosting both legacy and new application services. A consequence of this growth is that the increasing scale and complexity of the underlying cloud infrastructure, as well as fluctuating service workloads, are inducing performance incidents at a higher frequency than ever before, with far-reaching impact on revenue, reliability, and reputation. Hence, effectively managing performance incidents, with emphasis on timely detection, diagnosis and resolution, has become a necessity rather than a luxury. While other aspects of cloud management such as monitoring and resource management are experiencing greater automation, automated management of performance incidents remains a major concern. Given the volume of operational data produced by cloud datacenters and services, this thesis focuses on how data analytics techniques can be used for cloud performance management. In particular, this work investigates techniques and models for automated performance anomaly detection and prevention in cloud environments. To familiarize with developments in the research area, we present the outcome of an extensive survey of existing research contributions addressing various aspects of performance problem management in diverse systems domains. We discuss the design and evaluation of analytics models and algorithms for detecting performance anomalies in the real-time behaviour of cloud datacenter resources and hosted services at different resolutions. We also discuss the design of a semi-supervised machine learning approach for mitigating performance degradation by actively driving quality of service from undesirable states to a desired target state via incremental capacity optimization. 
The research methods used in this thesis include experiments on real virtualized testbeds to evaluate aspects of the proposed techniques, while other aspects are evaluated using performance traces from real-world datacenters. Insights and outcomes from this thesis can be used by both cloud and service operators to enhance the automation of performance problem detection, diagnosis and resolution. They also have the potential to spur further research in the area, while being applicable in related domains such as the Internet of Things (IoT) and industrial sensors, as well as in edge and mobile clouds. / Cloud Control / eSSENCE
200

Modeling of static or moving complex backgrounds : application to rare event detection in image sequences

Davy, Axel 22 November 2019 (has links)
The first part of this thesis is dedicated to the modeling of image or video backgrounds, applied to anomaly detection. In the case of anomaly detection on a single image, our analysis of the literature leads us to identify five different families of structural assumptions on the background. We propose new algorithms for single-image anomaly detection, small target detection on moving backgrounds, change detection on satellite SAR (Synthetic Aperture Radar) images, and cloud detection on sequences of optical satellite images. In the second part, we study two further applications of background modeling. To perform video denoising we search, for every video patch, similar patches in the video sequence, and feed their central pixels to a convolutional neural network (CNN). The background model in this case is hidden in the CNN weights. In our experiments, the proposed method is the best performing of the compared CNN-based methods. We also study exemplar-based texture synthesis. In this problem texture samples have to be generated based on only one reference sample. Our survey classifies the families of algorithms for this task according to their model assumptions. In addition, we propose improvements to fix the border behavior issues that we pointed out in several deep-learning-based methods. In the third part, we propose real-time GPU implementations of B-spline interpolation and of several image and video denoising algorithms: NL-means, BM3D and VBM3D. The speed of the proposed implementations enables their use in real-time scenarios, and they are currently being transferred to industry.
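The NL-means algorithm mentioned in the abstract can be sketched in its simplest form on a 1-D signal. This toy version is an assumption-laden illustration of the published algorithm, not the thesis's GPU implementation, which targets images and video.

```python
import math

def nl_means_1d(signal, patch=1, h=0.5):
    """Non-local means on a 1-D signal: each sample is replaced by a
    weighted average of all samples, with weights decaying exponentially
    in the squared distance between the surrounding patches. Edges are
    handled by clamping indices to the signal boundaries."""
    n = len(signal)
    out = []
    for i in range(n):
        wsum = vsum = 0.0
        for j in range(n):
            # Squared distance between the patches centered at i and j.
            d = 0.0
            for k in range(-patch, patch + 1):
                a = signal[min(max(i + k, 0), n - 1)]
                b = signal[min(max(j + k, 0), n - 1)]
                d += (a - b) ** 2
            w = math.exp(-d / (h * h))
            wsum += w
            vsum += w * signal[j]
        out.append(vsum / wsum)
    return out
```

The quadratic cost of this exhaustive patch comparison is precisely why real-time use requires the kind of GPU implementation the thesis contributes.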
