
Characterization of Laminated Magnetoelectric Vector Magnetometers to Assess Feasibility for Multi-Axis Gradiometer Configurations

Berry, David 29 December 2010 (has links)
A wide array of applications exists for sensing systems capable of magnetic field detection. A broad range of sensors is already used in this capacity, but future sensors need to increase sensitivity while remaining economical. A promising sensor system to meet these requirements is the magnetoelectric (ME) laminate. ME sensors produce an electric field when a magnetic field is applied. While this ME effect exists to a limited degree in single-phase materials, it is more easily achieved by laminating a magnetostrictive material, which deforms when exposed to a magnetic field, to a piezoelectric material. The transfer of strain from the magnetostrictive material to the piezoelectric material results in an electric field proportional to the applied magnetic field. Other fabrication techniques may impart the directionality needed to classify the ME sensor as a vector magnetometer. ME laminate sensors are more affordable to fabricate than competing vector magnetometers and, with recent increases in sensitivity, have potential for use in arrays and gradiometer configurations. However, little is known about their total-field detection, the effects of multiple sensors in close proximity, and the signal processing needed for target localization. The goal of this project is to closely examine the single-axis ME sensor response in different orientations with a moving magnetic dipole to assess its field detection capabilities. Multiple sensors were tested together to determine whether the response characteristics are altered by the DC magnetic bias of ME sensors in close proximity. Finally, the ME sensor characteristics were compared to those of alternative vector magnetometers. / Master of Science
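The moving-dipole test source described in this abstract can be illustrated with the standard point-dipole field formula. The sketch below is not from the thesis; the dipole moment and geometry are arbitrary stand-ins for whatever source and sensor placement the experiments used.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(m, r):
    """Magnetic flux density B (tesla) of a point dipole with moment m
    (A*m^2), evaluated at displacement r (m) from the dipole."""
    m = np.asarray(m, dtype=float)
    r = np.asarray(r, dtype=float)
    rn = np.linalg.norm(r)
    rhat = r / rn
    # B = mu0/(4*pi) * (3*rhat*(m . rhat) - m) / |r|^3
    return MU0 / (4 * np.pi) * (3 * rhat * np.dot(m, rhat) - m) / rn**3

# Field on the dipole axis, 1 m above a 1 A*m^2 moment along z:
B = dipole_field([0, 0, 1.0], [0, 0, 1.0])
# On-axis magnitude is mu0*m / (2*pi*r^3) = 2e-7 T
```

A single-axis vector magnetometer samples one component of this vector as the dipole moves, which is what makes the orientation study in the abstract meaningful.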

Program Anomaly Detection Against Data-Oriented Attacks

Cheng, Long 29 August 2018 (has links)
Memory-corruption vulnerabilities are among the most common attack vectors used to compromise computer systems. Such vulnerabilities can lead to serious security problems and are likely to remain an unsolved problem for a long time. Existing memory-corruption attacks fall broadly into two categories: i) control-flow attacks and ii) data-oriented attacks. Though data-oriented attacks have been known for a long time, the threat has not been adequately addressed, because most previous defense mechanisms focus on preventing control-flow exploits. As launching a control-flow attack becomes increasingly difficult thanks to the many deployed defenses against control-flow hijacking, data-oriented attacks have become an appealing technique for compromising systems, including emerging embedded control systems. To counter data-oriented attacks, mitigation techniques such as memory-safety enforcement and data randomization can be applied at different stages over the course of an attack. However, attacks remain possible because currently deployed defenses can be bypassed. This dissertation explores the possibility of defeating data-oriented attacks through external monitoring using program anomaly detection techniques. I start with a systematization of current knowledge about the exploitation techniques of data-oriented attacks and the applicable defense mechanisms. Then, I address three research problems in program anomaly detection against data-oriented attacks. First, I address the problem of securing control programs in Cyber-Physical Systems (CPS) against data-oriented attacks. I describe a new security methodology that leverages the event-driven nature of CPS control programs in characterizing their behaviors. By enforcing runtime cyber-physical execution semantics, our method detects data-oriented exploits when physical events are inconsistent with the runtime program behavior.
Second, I present a statistical program-behavior modeling framework for frequency anomaly detection, where a frequency anomaly is the direct consequence of many non-control-data attacks. Specifically, I describe two statistical program-behavior models, sFSA and sCFT, at different granularities. Our method combines the local and long-range models to improve robustness against data-oriented attacks and significantly increase the difficulty of bypassing the anomaly detection system. Third, I focus on defending against data-oriented programming (DOP) attacks using Intel Processor Trace (PT). DOP is a recently proposed advanced technique for constructing expressive non-control-data exploits. I first demystify the DOP exploitation technique and show its complexity and rich expressiveness. Then, I design and implement the DeDOP anomaly detection system and demonstrate its detection capability against the real-world ProFTPd DOP attack. / Ph. D. / Memory-corruption vulnerabilities are among the most common attack vectors used to compromise computer systems. Such vulnerabilities can lead to serious security problems and are likely to remain an unsolved problem for a long time. This is because low-level memory-unsafe languages (e.g., C/C++) are still in use today for interoperability and performance reasons, and remain common sources of security vulnerabilities. Existing memory-corruption attacks fall broadly into two categories: i) control-flow attacks, which corrupt control data (e.g., a return address or code pointer) in the memory space to divert the program's control flow; and ii) data-oriented attacks, which manipulate non-control data to alter a program's benign behavior without violating its control-flow integrity. Though data-oriented attacks have been known for a long time, the threat has not been adequately addressed, because most previous defense mechanisms focus on preventing control-flow exploits.
As launching a control-flow attack becomes increasingly difficult thanks to the many deployed defenses against control-flow hijacking, data-oriented attacks have become an appealing technique for compromising systems, including emerging embedded control systems. To counter data-oriented attacks, mitigation techniques such as memory-safety enforcement and data randomization can be applied at different stages over the course of an attack. However, attacks remain possible because currently deployed defenses can be bypassed. This dissertation explores the possibility of defeating data-oriented attacks through external monitoring using program anomaly detection techniques. I start with a systematization of current knowledge about the exploitation techniques of data-oriented attacks and the applicable defense mechanisms. Then, I address three research problems in program anomaly detection against data-oriented attacks. First, I address the problem of securing control programs in Cyber-Physical Systems (CPS) against data-oriented attacks. The key idea is to detect subtle data-oriented exploits in CPS when physical events are inconsistent with the runtime program behavior. Second, I present a statistical program-behavior modeling framework for frequency anomaly detection, where a frequency anomaly is often the direct consequence of many non-control-data attacks. Our method combines the local and long-range models to improve robustness against data-oriented attacks and significantly increase the difficulty of bypassing the anomaly detection system. Third, I focus on defending against data-oriented programming (DOP) attacks using Intel Processor Trace (PT). I design and implement the DeDOP anomaly detection system and demonstrate its detection capability against a real-world DOP attack.
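The frequency-anomaly idea can be illustrated with a minimal bigram model over event traces. This is only a sketch of the general principle, not the dissertation's sFSA/sCFT models: transitions whose trained frequency is effectively zero mark a trace as anomalous.

```python
from collections import Counter

def train_bigram_model(traces):
    """Estimate transition (bigram) frequencies from normal event traces."""
    counts = Counter()
    total = 0
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
            total += 1
    return {k: v / total for k, v in counts.items()}

def is_anomalous(model, trace, floor=1e-4):
    """Flag a trace containing a transition never (or almost never) seen
    during training -- the signature of many non-control-data attacks."""
    return any(model.get((a, b), 0.0) < floor for a, b in zip(trace, trace[1:]))

normal = [["open", "read", "write", "close"]] * 50
model = train_bigram_model(normal)
print(is_anomalous(model, ["open", "read", "write", "close"]))  # False
print(is_anomalous(model, ["open", "exec", "write", "close"]))  # True
```

A real detector would combine such local models with long-range context, as the abstract describes, to raise the bar for mimicry attacks.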

Extensions to Radio Frequency Fingerprinting

Andrews, Seth Dixon 05 December 2019 (has links)
Radio frequency fingerprinting, a type of physical-layer identification, allows identifying wireless transmitters based on their unique hardware. Every wireless transmitter has slight manufacturing variations and differences due to the layout of components, and these are manifested as differences in the signal emitted by the device. A variety of techniques have been proposed for identifying transmitters at the physical layer based on these differences, and this has been successfully demonstrated on a large variety of transmitters and other devices. However, some situations still pose challenges. Some types of fingerprinting features are heavily dependent on the modulated signal, especially features based on the frequency content of a signal. This means that changes in transmitter configuration, such as bandwidth or modulation, will prevent wireless fingerprinting. Such changes may occur frequently with cognitive radios and in dynamic spectrum access networks. A method is proposed to transform features to be invariant with respect to changes in transmitter configuration. With the transformed features it is possible to re-identify devices with a high degree of certainty. Next, improving performance with limited data by identifying devices using observations crowdsourced from multiple receivers is examined. Combinations of three types of observations are defined: fingerprinter output, features extracted from multiple signals, and raw observations of multiple signals. Performance is demonstrated, although the best method depends on the feature set. Other practical factors are also considered, including processing power and the amount of data needed. Finally, drift in fingerprinting features caused by changes in temperature is examined. Drift results from gradual changes in the physical-layer behavior of transmitters and can have a substantial negative impact on fingerprinting.
Even small changes in temperature are found to cause drift, with the oscillator as the primary source of this drift (and other variation) in the fingerprints used. Various methods are tested to compensate for these changes. It is shown that frequency-based features not dependent on the carrier are unaffected by drift but are unable to distinguish between devices. Several models are examined which can improve performance when drift is present. / Doctor of Philosophy / Radio frequency fingerprinting allows uniquely identifying a transmitter based on characteristics of the signal it emits. In this dissertation several extensions to current fingerprinting techniques are given. Together, these allow identification of transmitters that have changed the signal sent, identification using different measurement types, and compensation for variation in a transmitter's behavior due to changes in temperature.
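One classic fingerprint feature, closely tied to the oscillator drift the abstract discusses, is the carrier frequency offset (CFO). The sketch below is a toy illustration, not the dissertation's feature set: the sample rate, offsets, and nearest-centroid enrollment are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 1e6  # sample rate in Hz (assumed)

def simulate_burst(cfo_hz, n=1024, snr_db=30):
    """Complex baseband burst from a transmitter with a fixed carrier
    frequency offset -- a stand-in for an oscillator fingerprint."""
    t = np.arange(n) / FS
    x = np.exp(2j * np.pi * cfo_hz * t)
    noise = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    return x + noise * 10 ** (-snr_db / 20) / np.sqrt(2)

def cfo_feature(x):
    """Estimate the frequency offset from the mean phase increment."""
    return np.angle(np.sum(x[1:] * np.conj(x[:-1]))) * FS / (2 * np.pi)

# Enroll three transmitters by their mean feature, then identify new bursts
# by nearest centroid.
offsets = {"tx_a": 120.0, "tx_b": 480.0, "tx_c": -350.0}
centroids = {tx: np.mean([cfo_feature(simulate_burst(f)) for _ in range(10)])
             for tx, f in offsets.items()}

def identify(x):
    f = cfo_feature(x)
    return min(centroids, key=lambda tx: abs(centroids[tx] - f))
```

Because this feature rides on the carrier, it is exactly the kind that drifts with oscillator temperature, motivating the compensation models the abstract describes.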

Spectrum Awareness: Deep Learning and Isolation Forest Approaches for Open-set Identification of Signals

Fredieu, Christian January 2022 (has links)
Over the next decade, 5G networks will become more and more prevalent in everyday life. This will address current limitations by allowing access to bands previously unavailable to civilian communication networks. However, it also creates new challenges, primarily for military operations. Radar has traditionally operated in the sub-6 GHz region, and in the past these bands were off limits to civilian communications. That changed when they were opened up in the 2010s. With these bands now forced to co-exist with commercial users, military operators need systems to identify the signals within a spectrum environment. In this thesis, we extend current research in signal identification by building on previous work to construct a deep learning-based classifier that can classify a signal as either a communication waveform (Single-Carrier (SC), Single-Carrier Frequency Division Multiple Access (SC-FDMA), Orthogonal Frequency Division Multiplexing (OFDM), Amplitude Modulation (AM), Frequency Modulation (FM)) or a radar waveform (Linear Frequency Modulation (LFM) or phase-coded). The downside of this method, however, is that the classifier assumes that all possible signals within the spectrum environment are represented in the training dataset. To account for this, we propose a novel classifier design for the detection of unknown signals outside the training dataset. This two-classifier system forms an open-set recognition (OSR) system that provides more situational awareness for operators. / M.S. / Over the next decade, next-generation communications will become prevalent in everyday life, providing solutions to limitations experienced by older networks. However, this also brings new challenges. Bands in the electromagnetic spectrum that were reserved for military use are now being opened up to commercial users.
This means that military and civilian networks now face a co-existence challenge that must be addressed. One way to address it is to be aware of what signals are operating in the bands, whether communication signals, radar signals, or both. In this thesis, we develop a system that can identify a signal as one of five communication waveforms or two radar waveforms using machine learning techniques. We also develop a new technique for identifying unknown signals that might be operating within these bands, to further help military and civilian operators monitor the spectrum.
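The open-set step named in the title can be sketched with scikit-learn's IsolationForest. This is a sketch under assumed synthetic features, not the thesis's trained deep-learning features: samples that the forest isolates quickly (far from the known-class training distribution) are flagged as unknown signals.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Stand-in feature vectors for known waveform classes (e.g., spectral stats).
known = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
# An unknown signal type occupies a different region of feature space.
unknown = rng.normal(loc=6.0, scale=1.0, size=(100, 4))

# Fit on known-class features only; contamination sets the inlier threshold.
detector = IsolationForest(contamination=0.05, random_state=0).fit(known)

# +1 = resembles the known classes, -1 = flag as an unknown signal
pred_known = detector.predict(known)
pred_unknown = detector.predict(unknown)
```

In the two-classifier OSR design the abstract describes, samples flagged -1 would bypass the closed-set waveform classifier and be reported as out-of-library signals.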

Extensions of Weighted Multidimensional Scaling with Statistics for Data Visualization and Process Monitoring

Kodali, Lata 04 September 2020 (has links)
This dissertation is the compilation of two major innovations that rely on a common technique known as multidimensional scaling (MDS). MDS is a dimension-reduction method that takes high-dimensional data and creates low-dimensional versions. Project 1: Visualizations are useful when learning from high-dimensional data. However, visualizations, just as any data summary, can be misleading when they do not incorporate measures of uncertainty, e.g., uncertainty from the data or from the dimension-reduction algorithm used to create the visual display. We incorporate uncertainty into visualizations created by a weighted version of MDS called WMDS. Uncertainty exists in these visualizations in the variable weights, the coordinates of the display, and the fit of WMDS. We quantify these uncertainties using Bayesian models in a method we call Informative Probabilistic WMDS (IP-WMDS). Visually, we display estimated uncertainty in the form of color and ellipses, and practically, these uncertainties reflect trust in WMDS. Our results show that these displays of uncertainty highlight different aspects of the visualization, which can help inform analysts. Project 2: Analysis of network data has emerged as an active research area in statistics. Much of the focus of ongoing research has been on static networks that represent a single snapshot or aggregated historical data unchanging over time. However, most networks result from temporally-evolving systems that exhibit intrinsic dynamic behavior. Monitoring such temporally-varying networks to detect anomalous changes has applications in both the social and physical sciences. In this work, we simulate data from models that rely on MDS, and we perform an evaluation study of the use of summary statistics for anomaly detection by incorporating principles from statistical process monitoring. In contrast to most previous studies, we deliberately incorporate temporal autocorrelation in our study.
Other considerations in our comprehensive assessment include the type and duration of the anomaly, the model type, and sparsity in temporally-evolving networks. We conclude that summary statistics can be valuable tools for network monitoring and often perform better than more involved techniques. / Doctor of Philosophy / In this work, two main ideas in data visualization and anomaly detection in dynamic networks are further explored. For both ideas, the connecting theme is extensions of a method called multidimensional scaling (MDS). MDS is a dimension-reduction method that takes high-dimensional data (all p dimensions) and creates a low-dimensional projection of the data. That is, relationships in a dataset with a presumably large number of dimensions or variables can be summarized into a lower number of dimensions, e.g., two. For a given dataset, an analyst could use a scatterplot to observe the relationship between two variables initially. Then, by coloring points, changing their size, or using different shapes, perhaps another three to four variables (around seven in total) may be shown in the scatterplot. An advantage of MDS (or any dimension-reduction technique) is that relationships in the data can be viewed easily in a scatterplot regardless of the number of variables. The interpretation of any MDS plot is that observations that are close together are relatively more similar than observations that are farther apart, i.e., proximity in the scatterplot indicates relative similarity. In the first project, we use a weighted version of MDS called Weighted Multidimensional Scaling (WMDS), where weights, which indicate a sense of importance, are placed on the variables of the data. The problem with any WMDS plot is that inaccuracies of the method are not included in the plot. For example, is an observation that appears to be an outlier really an outlier? An analyst cannot confirm this without further context.
Thus, we created a model to calculate, visualize, and interpret such inaccuracy or uncertainty in WMDS plots. Such modeling efforts help analysts conduct exploratory data analysis. In the second project, the theme of MDS is extended to an application with dynamic networks. Dynamic networks are multiple snapshots of pairwise interactions (represented as edges) among a set of nodes (observations). Over time, changes may appear in some of the snapshots. We aim to detect such changes using a process monitoring approach on dynamic networks. Statistical monitoring approaches determine thresholds for in-control or expected behavior from data with no signal; the in-control thresholds are then used to monitor newly collected data. We applied this approach to dynamic network data, and we used a detailed simulation study to better understand the performance of such monitoring. For the simulation study, data are generated from dynamic network models that use MDS. We found that monitoring summary statistics of the network was quite effective on data generated from these models. Thus, simple tools may be used as a first step toward anomaly detection in dynamic networks.
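The Phase I / Phase II monitoring scheme described above can be sketched on a single network summary statistic. This is an illustrative toy, not the dissertation's MDS-based network models: here the statistic is a synthetic edge count with an injected mean shift.

```python
import numpy as np

rng = np.random.default_rng(2)

# Edge counts of a dynamic network: 100 in-control snapshots, then a
# 10-snapshot anomaly window where activity jumps.
in_control = rng.normal(200, 5, size=100)
anomaly = rng.normal(240, 5, size=10)
series = np.concatenate([in_control, anomaly])

# Phase I: estimate control limits from data with no signal.
mu, sigma = in_control.mean(), in_control.std(ddof=1)
upper, lower = mu + 3 * sigma, mu - 3 * sigma

# Phase II: monitor each new snapshot against the limits.
alarms = (series > upper) | (series < lower)
```

Richer statistics (degree distributions, MDS coordinates of successive snapshots) can be monitored the same way; the abstract's finding is that such simple summaries often suffice.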

Root Cause Prediction from Log Data using Large Language Models

Mandakath Gopinath, Aswath January 2024 (has links)
In manufacturing, uptime and system reliability are paramount, placing high demands on automation technologies such as robotic systems. Failures in these systems cause considerable disruption and incur significant costs. Traditional troubleshooting methods require extensive manual analysis by experts of log files, system data, application information, and problem descriptions. This process is labor-intensive and time-consuming, often resulting in prolonged downtime and increased customer dissatisfaction, leading to heavy financial losses for companies. This research explores the application of Large Language Models (LLMs) such as MistralLite and Mixtral-8x7B to automate root cause prediction from log data. We employed various fine-tuning methods, including full fine-tuning, Low-Rank Adaptation (LoRA), and Quantized Low-Rank Adaptation (QLoRA), on these decoder-only models. Beyond using perplexity as an evaluation metric, the study incorporates GPT-4 as a judge to assess model performance. Additionally, the research uses complex prompting techniques to aid the extraction of root causes from problem descriptions using GPT-4, and utilizes vector embeddings to analyze the importance of features in root cause prediction. The findings demonstrate that fine-tuned LLMs can assist in identifying root causes from log data, with the smaller MistralLite model showing superior performance compared to the larger Mixtral model, challenging the notion that larger models are inherently better. The results also indicate that different training adaptations yield varied effectiveness, with QLoRA performing best for MistralLite and full fine-tuning proving most effective for Mixtral. This suggests that a tailored approach to model adaptation is necessary for optimal performance. Additionally, GPT-4 with Chain-of-Thought (CoT) prompting demonstrated the capability to extract reasonable root causes from solved issues.
The analysis of feature vector embeddings provides insights into the significant features, enhancing our understanding of the underlying patterns and relationships in the data.
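The perplexity metric used above has a compact definition: the exponential of the mean negative log-likelihood per token. A minimal sketch (independent of any particular model or tokenizer):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# A model that assigns every token probability 1/4 has perplexity 4.
lp = [math.log(0.25)] * 10
print(perplexity(lp))  # ~4.0
```

Lower perplexity means the fine-tuned model finds the held-out log-and-root-cause text less surprising, which is why it serves as a first-pass evaluation before the GPT-4-as-judge comparison.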

Detection and localization of link-level network anomalies using end-to-end path monitoring

Salhi, Emna 13 February 2013 (has links) (PDF)
The aim of this thesis is to develop cost-efficient, accurate, and fast schemes for link-level network anomaly detection and localization. It has been established that detecting all potential link-level anomalies requires monitoring a set of paths that covers all links of the network, whereas localizing all potential link-level anomalies requires monitoring a set of paths that can distinguish between all links of the network pairwise. Both end-nodes of each monitored path must be equipped with a monitoring device. Most existing link-level anomaly detection and localization schemes proceed in two steps. The first step selects a minimal set of monitor locations that can detect or localize any link-level anomaly. The second step selects a minimal set of monitoring paths between the selected monitor locations such that all links of the network are covered or pairwise distinguishable. However, such stepwise schemes do not consider the interplay between the conflicting optimization objectives of the two steps, which results in suboptimal consumption of network resources and biased monitoring measurements. One objective of this thesis is to evaluate and reduce this interplay. To this end, one-step anomaly detection and localization schemes that select monitor locations and monitoring paths jointly are proposed. Furthermore, we demonstrate that the established condition for anomaly localization is sufficient but not necessary; a necessary and sufficient condition that drastically reduces the localization cost is established. The problems are shown to be NP-hard, and scalable, near-optimal heuristic algorithms are proposed.
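The two classical conditions stated in this abstract (coverage for detection, pairwise distinguishability for localization) can be checked directly on a path set. This sketch implements the classical sufficient condition only; the thesis's point is precisely that a weaker necessary-and-sufficient condition exists. Network, path names, and links are made up for illustration.

```python
def covers_all_links(paths, links):
    """Detection condition: every link lies on at least one monitored path."""
    monitored = set().union(*paths.values())
    return set(links) <= monitored

def distinguishes_all_links(paths, links):
    """Classical localization condition: every pair of links is traversed
    by a different set of monitored paths (distinct 'path signatures')."""
    signature = {l: frozenset(p for p, ls in paths.items() if l in ls)
                 for l in links}
    sigs = list(signature.values())
    return len(sigs) == len(set(sigs))

links = ["a", "b", "c"]
paths = {"p1": {"a", "b"}, "p2": {"b", "c"}}
print(covers_all_links(paths, links))         # True
print(distinguishes_all_links(paths, links))  # True: {p1}, {p1,p2}, {p2}
```

With a single path covering all three links, coverage holds but every link has the same signature, so an anomaly can be detected but not localized.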

The Protracted Magmatism and Hydrothermal Activity Associated with the Gibraltar Porphyry Cu-Mo Deposit, South Central British Columbia, Canada.

Kobylinski, Christopher 01 August 2019 (has links)
The Gibraltar porphyry Cu deposit is a large open-pit porphyry Cu mine in Canada with a geological tonnage (production and reserves) of 3.2 Mt Cu. The deposit is hosted by the Granite Mountain Batholith (GMB), a tonalitic batholith with a surface exposure of over 150 km². All rocks within the batholith are tonalites with minor quartz diorites. The batholith intrudes mafic volcaniclastic rocks of the Nicola Group in the Quesnel terrane of the Canadian Cordillera. The Cu mineralization at Gibraltar is confined to a small 4.5 km² area in the central part of the batholith and occurs primarily as disseminated chalcopyrite. New U-Pb dating on zircon shows protracted Late Triassic magmatism spanning ~25 m.y. for the formation of the GMB. Early magmatism is dated at 229.2±4.4 Ma in unmineralized tonalites. Later, at least three magmatic episodes formed the Cu mineralization during a period spanning from 218.9±3.1 Ma to 205.8±2.1 Ma. These fertile magmas, formed in a more mature arc setting, superseded the early barren magmatic activity that built the bulk of the GMB in a more juvenile arc setting. Epidote in the GMB shows compositional zoning with Fe-poor cores and Fe-rich rims; the zoning in the mineralized intrusions likely reflects a change in the hydrothermal fluid from S-rich to S-poor. The data from the Gibraltar deposit show that an economic porphyry Cu deposit may be found in igneous rocks with low bulk-rock Sr/Y and low Eu/Eu* in zircon. In the Gibraltar deposit, Ce anomalies in zircon reflect oxidation conditions and correlate with the Cu resource associated with each intrusion.

ONTO-Analyst: An Extensible Method for the Identification and Visualization of Anomalies in Ontologies

Orlando, João Paulo 21 August 2017 (has links)
The Semantic Web is an extension of the World Wide Web in which information has explicit meaning, allowing computers and people to work in cooperation. To define meaning explicitly, ontologies are used to structure information. As more scientific fields adopt Semantic Web technologies, more complex ontologies are needed. Moreover, the quality assurance and management of ontologies are undermined as they increase in size and complexity. One of the causes of these difficulties is the existence of problems, also called anomalies, in the structure of ontologies. These anomalies range from subtle problems, such as poorly designed concepts, to more serious ones, such as inconsistencies. Identifying and eliminating anomalies can reduce an ontology's size and make it easier to understand.
However, methods for identifying anomalies found in the literature do not provide anomaly visualizations, and many do not work on OWL ontologies or are not user-extensible. For these reasons, a new method for anomaly identification and visualization, ONTO-Analyst, was created. It allows ontology developers to automatically identify anomalies, using SPARQL queries, and visualize them as graph images. The method uses a proposed ontology, the METAdata description For Ontologies/Rules (MetaFOR), to describe the structure of other ontologies, and SPARQL queries to identify anomalies in this description. Once identified, the anomalies can be presented as graph images. A system prototype, ONTO-Analyst, was created to validate the method and was tested on a representative set of ontologies through the verification of representative anomalies. The prototype tested 18 types of anomalies, taken from the scientific literature, on a set of 608 OWL ontologies from 4 major public repositories and two articles. The system detected 4.4 million anomaly occurrences in the 608 ontologies: 3.5 million occurrences of a single type and 900 thousand distributed across 11 other types. These anomalies occurred in various parts of the ontologies, such as classes, object properties, and data properties. In a second test, a case study was performed on the visualizations that the ONTO-Analyst prototype generated for the anomalies found in the first test. Visualizations of 11 different types of anomalies were automatically generated. Each visualization presented the elements involved in the anomaly, and at least one possible solution could be deduced from the visualization. These results demonstrate that the method can efficiently find anomaly occurrences in a representative set of OWL ontologies, and that the visualizations aid in understanding and correcting the anomalies found. To extend the types of detectable anomalies, users can write new SPARQL queries.
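One anomaly pattern of the kind ONTO-Analyst queries for (classes lacking a human-readable label) can be sketched without any triple-store dependency by treating an ontology as a set of (subject, predicate, object) triples. The class names below are hypothetical, and ONTO-Analyst itself would express this as a SPARQL query over its MetaFOR description rather than plain Python.

```python
# Triples as (subject, predicate, object); a toy stand-in for an OWL graph.
TYPE, SUBCLASS, LABEL = "rdf:type", "rdfs:subClassOf", "rdfs:label"

triples = {
    ("ex:Car", TYPE, "owl:Class"),
    ("ex:Car", LABEL, "Car"),
    ("ex:Vehicle", TYPE, "owl:Class"),   # anomaly: no rdfs:label
    ("ex:Car", SUBCLASS, "ex:Vehicle"),
}

def classes_without_label(triples):
    """One anomaly pattern: declared classes lacking a human-readable label.
    (Roughly: SELECT ?c WHERE { ?c a owl:Class .
                                FILTER NOT EXISTS { ?c rdfs:label ?l } })"""
    classes = {s for s, p, o in triples if p == TYPE and o == "owl:Class"}
    labeled = {s for s, p, o in triples if p == LABEL}
    return classes - labeled

print(classes_without_label(triples))  # {'ex:Vehicle'}
```

Because each anomaly type reduces to a query over the structural description, adding a new detectable anomaly is just adding a new query, which is the extensibility claim in the title.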

Security Verification of Protocol Implementation

Fu, Yulong 14 March 2014 (has links)
With the development of computer technologies, computer systems have become deeply embedded in our daily life and form the foundation of our modern information society. Some of them are responsible for essential and sensitive tasks (e.g., medical treatment systems, e-commerce, aircraft systems, spacecraft systems, etc.). When these systems fail or are compromised, the economic losses can be unacceptable. To avoid such situations, the security of these systems needs to be verified before they are deployed. Since most systems are implemented from protocol specifications, the problem of verifying the security of a concrete system can be reduced to verifying the security of its protocol implementation. In this thesis, we focus on security verification methods for protocol implementations, and we are interested in two main types of network attacks: Denial-of-Service (DoS) attacks and protocol authentication attacks. We investigate the features of these attacks and the existing formal verification methods, and propose extended IOLTS models with the corresponding algorithms to generate security verification test cases automatically. To avoid possible state explosions, we also formalize the security experience of the tester as a Security Objective to control test generation on the fly. In addition, a model-based analysis method for Anomaly Intrusion Detection Systems (IDS) is proposed, which can enhance the detection capabilities of anomaly IDS. The proposed verification methods are demonstrated with a case study of the RADIUS protocol, and an integrated GUI tool is provided to simplify test generation.
