461

Extended simulation and verification platform for kernel P systems

Bakir, M.E., Ipate, F., Konur, Savas, Mierla, L.M., Niculescu, I.M. January 2014 (has links)
No / Kernel P systems integrate in a coherent and elegant manner many of the features of different P system variants, successfully used for modelling various applications. In this paper, we present our initial attempt to extend the software framework developed to support kernel P systems with two new components: a formal verification tool based on the NuSMV model checker and a large-scale simulation environment based on FLAME. The use of these two tools for modelling and analysis of biological systems is illustrated with a synthetic biology example.
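As an illustration of the kind of machinery such a simulator rests on, the sketch below shows plain multiset rewriting in Python, the core operation of P system simulation. The rules, objects and the sequential application loop are invented for this illustration and are not the framework referred to in the abstract; real kernel P systems add membranes, guards and genuinely parallel execution strategies.

```python
# Toy multiset rewriting, the mechanism underlying P system simulation. The
# rules and objects are invented and the loop is sequential; the framework in
# the abstract adds membranes, guards and richer execution strategies, none of
# which are modelled here.
from collections import Counter

RULES = [
    (Counter({"a": 2}), Counter({"b": 1})),          # a a -> b
    (Counter({"b": 1}), Counter({"c": 1, "a": 1})),  # b -> c a
]

def step(contents: Counter) -> Counter:
    """Apply every rule as often as possible; products only become available
    at the next step, mirroring the usual P system semantics."""
    remaining, produced = contents.copy(), Counter()
    for lhs, rhs in RULES:
        while all(remaining[obj] >= n for obj, n in lhs.items()):
            remaining.subtract(lhs)
            produced.update(rhs)
    return remaining + produced   # '+' drops objects whose count fell to zero

state = Counter({"a": 4})
for _ in range(3):
    state = step(state)
    print(dict(state))
```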
462

Automatic Selection of Verification Tools for Efficient Analysis of Biochemical Models

Bakir, M.E., Konur, Savas, Gheorghe, Marian, Krasnogor, N., Stannett, M. 24 April 2018 (has links)
Yes / Motivation: Formal verification is a computational approach that checks system correctness (in relation to a desired functionality). It has been widely used in engineering applications to verify that systems work correctly. Model checking, an algorithmic approach to verification, looks at whether a system model satisfies its requirements specification. This approach has been applied to a large number of models in systems and synthetic biology as well as in systems medicine. Model checking is, however, computationally very expensive, and is not scalable to large models and systems. Consequently, statistical model checking (SMC), which relaxes some of the constraints of model checking, has been introduced to address this drawback. Several SMC tools have been developed; however, the performance of each tool significantly varies according to the system model in question and the type of requirements being verified. This makes it hard to know, a priori, which one to use for a given model and requirement, as choosing the most efficient tool for any biological application requires a significant degree of computational expertise, not usually available in biology labs. The objective of this paper is to introduce a method and provide a tool leading to the automatic selection of the most appropriate model checker for the system of interest. Results: We provide a system that can automatically predict the fastest model checking tool for a given biological model. Our results show that one can make predictions of high confidence, with over 90% accuracy. This implies significant performance gain in verification time and substantially reduces the “usability barrier” enabling biologists to have access to this powerful computational technology. / EPSRC, Innovate UK
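To make the selection idea concrete, here is a minimal sketch of how such a predictor could look: a classifier trained on static model features that predicts the fastest verification tool for a new model. The feature names, tool labels and training rows are invented for this illustration; the paper's actual feature set, tools and learning pipeline may differ.

```python
# Illustrative sketch of the tool-selection idea described above: train a
# classifier on static model features to predict the fastest verification tool.
# Feature names, tool labels and training rows are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Each row: [num_species, num_reactions, max_stoichiometry, property_depth]
X_train = [
    [12, 20, 2, 3],
    [150, 400, 4, 5],
    [8, 10, 1, 2],
    [300, 900, 6, 4],
]
y_train = ["smc_tool_A", "smc_tool_B", "smc_tool_A", "smc_tool_C"]  # fastest tool observed per model

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

new_model = [[90, 250, 3, 4]]
print("Predicted fastest checker:", clf.predict(new_model)[0])
```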
463

A property-driven methodology for formal analysis of synthetic biology systems

Konur, Savas, Gheorghe, Marian 03 1900 (has links)
Yes / This paper proposes a formal methodology to analyse bio-systems, in particular synthetic biology systems. An integrative analysis perspective combining different model checking approaches based on different property categories is provided. The methodology is applied to the synthetic pulse generator system and several verification experiments are carried out to demonstrate the use of our approach to formally analyse various aspects of synthetic biology systems. / EPSRC
464

Enhancing Software Security through Code Diversification Verification, Control-flow Restriction, and Automatic Compartmentalization

Jang, Jae-Won 26 July 2024 (has links)
In today's digital age, computer systems are prime targets for adversaries due to the vast amounts of sensitive information stored digitally. This ongoing cat-and-mouse game between programmers and adversaries forces security researchers to continually develop novel security measures. Widely adopted schemes like NX bits have safeguarded systems against traditional memory exploits such as buffer overflows, but new threats like code-reuse attacks quickly bypass these defenses. Code-reuse attacks exploit existing code sequences, known as gadgets, without injecting new malicious code, making them challenging to counter. Additionally, input-based vulnerabilities pose significant risks by exploiting external inputs to trigger malicious paths. Languages like C and C++ are often considered unsafe due to their tendency to cause issues like buffer overflows and use-after-free errors. Addressing these complex vulnerabilities requires extensive research and a holistic approach. This dissertation initially introduces a methodology for verifying the functional equivalence between an original binary and its diversified version. The Verification of Diversified Binary (VDB) algorithm is employed to determine whether the two binaries—the original and the diversified—maintain functional equivalence. Code diversification techniques modify the binary compilation process to produce functionally equivalent yet different binaries from the same source code. Most code diversification techniques focus on analyzing non-functional properties, such as whether the technique improves security. The objective of this contribution is to enable the use of untrusted diversification techniques in essential applications. Our evaluation demonstrates that the VDB algorithm can verify the functional equivalence of 85,315 functions within binaries from the GNU Coreutils 8.31 benchmark suite. Next, this dissertation proposes a binary-level tool that modifies binaries to protect against control-flow hijacking attacks. Traditional approaches to guard against ROP attacks either introduce significant overhead, require hardware support, or need intimate knowledge of the binary, such as source code. In contrast, this contribution relies on neither source code nor the latest hardware technology (e.g., Intel Control-flow Enforcement Technology). Instead, we show that we can precisely prevent control-flow transfers to unintended paths even without these features. To that end, this contribution proposes a novel control-flow integrity policy based on a deny list, called Control-flow Restriction (CFR). CFR determines which control-flow transfers are allowed in the binary without requiring source code. Our implementation and evaluation of CFR show that it achieves this goal with an average runtime performance overhead for commercial off-the-shelf (COTS) binaries in the range of 5.5% to 14.3%. In contrast, a state-of-the-art binary-level solution such as BinCFI has an average overhead of 61.5%. Additionally, this dissertation explores leveraging the latest hardware security primitives to compartmentalize sensitive data. Specifically, we use a tagged memory architecture introduced by ARM called the Memory Tagging Extension (MTE), which associates a metadata tag with a memory location and with the pointers referencing that location. Although promising, ARM MTE suffers from predictable tag allocation on stack data, vulnerable plain-text metadata tags, and a lack of fine-grained memory access control.
Therefore, this contribution introduces Shroud, which enhances data security through compartmentalization using MTE and protects MTE's tagged pointers through encryption. Evaluation of Shroud demonstrates its security effectiveness against non-control-data attacks like Heartbleed and Data-Oriented Programming, with performance evaluations showing an average overhead of 4.2% on lighttpd and 2% on UnixBench; on the NPB benchmarks, Shroud showed an average runtime overhead of 2.57%. The vulnerabilities highlighted by exploits like Heartbleed capitalize on external inputs, underscoring the need for enhanced input-driven security measures. Therefore, this dissertation describes a method to improve upon the limitations of traditional compartmentalization techniques. This contribution introduces an Input-Based Compartmentalization System (IBCS), a comprehensive toolchain that uses user input to automatically identify data for memory protection. Based on user inputs, IBCS employs hybrid taint analysis to generate sensitive code paths and further analyzes the tainted data using novel assembly analyses to identify and enforce selective targets. Evaluations of IBCS demonstrate its security effectiveness through adversarial analysis and report an average overhead of 3% on Nginx. Finally, this dissertation concludes by revisiting the problem of implementing a classical technique known as Software Fault Isolation (SFI) on the x86-64 architecture. Prior works attempting to implement SFI on x86-64 have suffered from supporting only a limited number of sandboxes, high context-switch overhead, and requiring extensive modifications to the toolchain, jeopardizing maintainability and introducing compatibility issues due to the need for specific hardware. This dissertation describes x86-based Fault Isolation (XFI), an efficient SFI scheme implemented on the x86-64 architecture with minimal modifications to the toolchain, while reducing the complexity of enforcing SFI policies, with low performance (22.48% on average) and binary-size (2.65% on average) overheads. XFI initializes the sandbox environment for the rewritten binary and, depending on the instructions, enforces data-access and control-flow policies to ensure safe execution. XFI provides the security benefits of a classical SFI scheme and offers additional protection against several classes of side-channel attacks, and it can be further extended to enhance its protection capabilities. / Doctor of Philosophy / In today's digital age, cyber attackers frequently target computer systems due to the vast amounts of sensitive information they store. As a result, security researchers must constantly develop new protective measures. Traditional defenses like NX bits have been effective against memory exploits, but new threats like code-reuse attacks, which leverage existing code without introducing new malicious code, present new challenges. Additionally, vulnerabilities in languages like C and C++ further complicate security efforts. Addressing these issues requires extensive research and a comprehensive approach. This dissertation introduces several innovative techniques to enhance computer security. First, it presents a method to verify that a diversified program is functionally equivalent to its original version, ensuring that security modifications do not alter its intended functions. Next, it proposes a technique to prevent control-flow hijacking attacks without requiring source code or advanced hardware.
Then, the dissertation explores leveraging advanced hardware, such as ARM's Memory Tagging Extension, to protect sensitive data, demonstrating robust security against attacks like Heartbleed. Recognizing that adversaries often use external inputs to exploit vulnerabilities, this dissertation introduces Input-Based Compartmentalization to automatically protect memory based on user input. Finally, an efficient implementation of a well-known security technique called Software Fault Isolation on x86-64 architecture ensures safe execution with low overhead. These advancements collectively enhance the robustness of computer systems against modern cyber threats.
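To give a flavour of the deny-list control-flow policy (CFR) summarised above, the toy sketch below checks simulated indirect transfers against a deny list. The addresses, the deny list and the runtime hook are invented; the actual system described in the abstract works by rewriting COTS binaries at the machine-code level rather than by a Python check.

```python
# Schematic illustration of a deny-list control-flow policy in the spirit of the
# CFR idea above. Addresses and the deny list are made up for this sketch.
DENY_LIST = {0x401080, 0x4011F4}   # hypothetical gadget addresses to forbid

def allowed_transfer(target: int) -> bool:
    """Allow an indirect control-flow transfer unless its target is denied."""
    return target not in DENY_LIST

# Simulated indirect transfers observed at runtime: (source, target) pairs.
observed = [(0x401000, 0x401050), (0x401020, 0x4011F4)]
for src, dst in observed:
    verdict = "ok" if allowed_transfer(dst) else "blocked (CFR violation)"
    print(f"{hex(src)} -> {hex(dst)}: {verdict}")
```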
465

High Quality Test Generation at the Register Transfer Level

Gent, Kelson Andrew 01 December 2016 (has links)
Integrated circuits, from general purpose microprocessors to application specific designs (ASICs), have become ubiquitous in modern technology. As our applications have become more complex, so too have the circuits used to drive them. Moore's law predicts that the number of transistors on a chip doubles every 18-24 months. This explosion in circuit size has also led to significant growth in the testing effort required to verify the design. In order to cope with the required effort, the testing problem must be approached from several different design levels. In particular, exploiting the Register Transfer Level for test generation allows for the use of relational information unavailable at the structural level. This dissertation demonstrates several novel methods for generating tests applicable for both structural and functional tests. These testing methods allow for significantly faster test generation for functional tests as well as providing high levels of fault coverage during structural test, typically outperforming previous state-of-the-art methods. First, a semi-formal method for functional verification is presented. The approach utilizes an SMT-based bounded model checker in combination with an ant colony optimization based search engine to generate tests with high branch coverage. Additionally, the method is utilized to identify unreachable code paths within the RTL. Compared to previous methods, the experimental results show increased levels of coverage and improved performance. Then, an ant colony optimization algorithm is used to generate high quality tests for fault coverage. By utilizing co-simulation at the RTL and gate level, tests are generated for both levels simultaneously. This method is shown to reach previously unseen levels of fault coverage with significantly lower computational effort. Additionally, the engine was also shown to be effective for behavioral level test generation. Next, an abstraction method for functional test generation is presented utilizing program slicing and data mining. The abstraction allows us to generate high quality test vectors that navigate extremely narrow paths in the state space. The method reaches previously unseen levels of coverage and is able to justify very difficult to reach control states within the circuit. Then, a new method of fault grading test vectors is introduced based on the concept of operator coverage. Operator coverage measures the behavioral coverage in each synthesizable statement in the RTL by creating a set of coverage points for each arithmetic and logical operator. The metric shows a strong relationship with fault coverage for coverage forecasting and vector comparison. Additionally, it provides significant reductions in computation time compared to other vector grading methods. Finally, the prior metric is utilized for creating a framework of automatic test pattern generation for defect coverage at the RTL. This framework provides the unique ability to automatically generate high quality test vectors for functional and defect level testing at the RTL without the need for synthesis. In summary, we present a set of tools for the analysis and test of circuits at the RTL. By leveraging information available at the HDL level, we can generate tests to exercise particular properties that are extremely difficult to extract at the gate level. / Ph. D. / Digital circuits and modern microprocessors are pervasive in modern life.
The complexity and scope of these devices have dramatically increased to meet new demands and applications, from entertainment devices to advanced automotive applications. Rising complexity makes design errors and manufacturing defects more difficult to detect and increases testing costs. To cope with rising test costs, significant effort has been directed towards automating test generation early in development, when defects are less expensive to correct. Modern digital circuits are designed using Hardware Description Languages (HDLs) to describe their behavior at a high logical level. Then, the behavioral description is translated to a chip-level implementation. Most automated test tools use the implementation description since it is a more direct representation of the manufactured circuit. This dissertation demonstrates several methods to utilize the logical information available in behavioral descriptions for generating tests early in development that maintain applicability throughout the design process. The proposed algorithms utilize a biologically inspired search, ant colony optimization, abstracting test generation as an ant colony hunting for food. In this abstraction, a sequence of inputs to a circuit is represented by the walked path of an individual ant, and untested portions of the circuit description are modelled as food sources. The final test is a collection of paths that efficiently reach the most food sources. Each algorithm also explores different software analysis techniques, which have been adapted to handle the unique constraints of HDLs, to learn about the target circuits. The ant colony optimization uses this analysis to help guide and direct the search, yielding more efficient execution than prior techniques and reducing the time required for test generation. Additionally, the described methods can automatically generate tests in cases that previously required manual generation, improving overall test quality.
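The ant-colony abstraction described above can be illustrated with a toy search over a small graph: ants take pheromone-biased random walks, and walks that reach uncovered target states reinforce their edges. The graph, parameters and reward scheme below are invented; the dissertation's engine targets RTL designs and gate-level co-simulation, not this toy.

```python
# Toy ant-colony search over a small control-flow-like graph, illustrating the
# "ants hunting for food" abstraction. Graph, parameters and rewards are made up.
import random

GRAPH = {                      # state -> successor states
    "s0": ["s1", "s2"],
    "s1": ["s3", "s0"],
    "s2": ["s4", "s0"],
    "s3": [], "s4": [],
}
TARGETS = {"s3", "s4"}         # hard-to-reach coverage goals ("food")
pheromone = {(u, v): 1.0 for u, succs in GRAPH.items() for v in succs}

def walk(max_len=6):
    """One ant: a pheromone-biased random walk starting at the reset state."""
    path, state = ["s0"], "s0"
    for _ in range(max_len):
        succs = GRAPH[state]
        if not succs:
            break
        state = random.choices(succs, weights=[pheromone[(state, s)] for s in succs])[0]
        path.append(state)
    return path

random.seed(0)
for _ in range(20):                               # generations
    paths = [walk() for _ in range(10)]           # ants per generation
    for p in paths:
        reward = len(set(p) & TARGETS)            # reinforce walks that reach targets
        for edge in zip(p, p[1:]):
            pheromone[edge] = 0.9 * pheromone[edge] + reward

best = max(paths, key=lambda p: len(set(p) & TARGETS))
print("Best final-generation path:", best, "reaches", sorted(set(best) & TARGETS))
```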
466

Caractérisation des images à Rayon-X de la main par des modèles mathématiques : application à la biométrie / Characterization of X-ray images of the hand by mathematical models: application to biometrics

Kabbara, Yeihya 09 March 2015 (has links)
In its specific context, the term "biometrics" is often associated with the study of individuals' physical and behavioral characteristics in order to identify or verify them. The work developed in this thesis has led us to propose a robust identification algorithm based on the intrinsic characteristics of the hand phalanges. Considered a hidden biometric, this new approach can be of particular interest when a high level of security is required, robust to the various attacks that a biometric system must withstand. The proposed techniques rely on three phases, namely: (1) segmentation of the phalanges, (2) extraction of their characteristics by generating an imprint, called the "Phalange-Code", and (3) identification based on the 1-nearest-neighbor method or verification based on a similarity metric. These algorithms operate on hierarchical levels, allowing the extraction of parameters that are invariant to geometric transformations such as image orientation and translation. Furthermore, the considered techniques are robust to noise and can operate at different image resolutions. More precisely, we developed three biometric recognition approaches: the first uses the spectral information of the contours of the hand phalanges as an individual signature, whereas the second relies on the geometric and morphological characteristics of the phalanges (i.e. surface, perimeter, length, width, capacity). Finally, the third approach generates a new likelihood ratio between the phalanges, using geometric probability theory. In addition, the construction of a database with the lowest possible radiation dose was one of the main challenges of our study. We therefore collected 403 X-ray images of the hand, acquired using the Apollo EZ X-Ray machine. These images come from 115 non-pathological adult volunteers (men and women), with a mean age of 27.2 years and a standard deviation of 8.5. The resulting database includes images of the right and left hands, acquired in different positions and at different resolutions and radiation doses (i.e. reduced by up to 98% of the standard dose of 1 µSv recommended by radiologists). Our experiments show that individuals can be distinguished by the characteristics of their phalanges, whether those of the right hand or the left hand. This distinction also holds for the gender of individuals (male/female). The study showed that the approach using the spectral information of the phalanx contours allows identification from only three phalanges, with an EER (Equal Error Rate) below 0.24%. Furthermore, it was found, surprisingly, that the technique based on the likelihood ratios between phalanges reaches an identification rate of 100% and an EER of 0.37% with a single phalanx. Beyond the identification/authentication aspect, our study also addressed the optimization of the radiation dose allowing safe identification of individuals: it was shown that more than 12,500 radiographic hand images per year could be acquired without exceeding the administrative limit of 0.25 mSv.
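As a minimal sketch of the identification step (1-nearest neighbour over phalanx descriptors with a similarity threshold), consider the toy code below. The feature vectors and the threshold are made-up stand-ins for the "Phalange-Code" descriptors derived in the thesis from segmented phalanx contours and geometry.

```python
# Minimal sketch of 1-nearest-neighbour identification with a distance
# threshold. Feature vectors and the threshold are invented for illustration.
import math

ENROLLED = {                    # subject id -> phalanx feature vector
    "subject_01": [0.82, 1.10, 0.33],
    "subject_02": [0.64, 0.95, 0.41],
    "subject_03": [0.90, 1.25, 0.29],
}

def identify(query, threshold=0.15):
    """Return the closest enrolled subject, or None if no one is similar enough."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best_id, best_d = min(((sid, dist(query, feat)) for sid, feat in ENROLLED.items()),
                          key=lambda t: t[1])
    return best_id if best_d <= threshold else None

print(identify([0.80, 1.12, 0.34]))   # -> subject_01
```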
467

Program analysis with interpolants

Weissenbacher, Georg January 2010 (has links)
This dissertation discusses novel techniques for interpolation-based software model checking, an approximate method which uses Craig interpolation to compute invariants of programs. Our work addresses two aspects of program analyses based on model checking: verification (the construction of correctness proofs for programs) and falsification (the detection of counterexamples that violate the specification). In Hoare's calculus, a proof of correctness comprises assertions which establish that a program adheres to its specification. The principal challenge is to derive appropriate assertions and loop invariants. Contemporary software verification tools use Craig interpolation (as opposed to traditional predicate transformers such as the weakest precondition) to derive approximate assertions. The performance of the model checker is contingent on the Craig interpolants computed. We present novel interpolation techniques which provide the following advantages over existing methods. Firstly, the resulting interpolants are sound with respect to the bit-level semantics of programs, which is an improvement over interpolation systems that use linear arithmetic over the reals to approximate bit-vector arithmetic and/or do not support bit-level operations. Secondly, our interpolation systems afford us a choice of interpolants and enable us to fine-tune their logical strength and structure. In contrast, existing procedures are limited to a single ad-hoc choice of an interpolant. Interpolation-based verification tools are typically forced to refine an initial approximation repeatedly in order to achieve the accuracy required to establish or refute the correctness of a program. The detection of a counterexample containing a repetitive construct may necessitate one refinement step (involving the computation of additional interpolants) for each iteration of the loop. We present a heuristic that aims to avoid the repeated and computationally expensive construction of interpolants, thus enabling the detection of deeply buried defects such as buffer overflows. Finally, we present an implementation of our techniques and evaluate them on a set of standardised device driver and buffer overflow benchmarks.
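For readers unfamiliar with the central notion, a Craig interpolant for an unsatisfiable pair (A, B) is a formula over the shared variables that is implied by A and inconsistent with B; a tiny propositional example, not taken from the dissertation, is worked out below.

```latex
% Illustrative example only (not from the dissertation). For an unsatisfiable
% pair (A, B), a Craig interpolant I satisfies
%   A \models I,   I \land B \models \bot,   vars(I) \subseteq vars(A) \cap vars(B).
A = p \land q, \qquad B = \lnot p \land r, \qquad A \land B \models \bot .
\text{Take } I = p :\qquad
(p \land q) \models p, \qquad
p \land (\lnot p \land r) \models \bot, \qquad
\mathrm{vars}(I) = \{p\} = \mathrm{vars}(A) \cap \mathrm{vars}(B).
```

In this tiny example the interpolant over the shared variable p is essentially unique; the freedom to fine-tune interpolant strength and structure that the abstract mentions only shows up in richer theories and larger formulas.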
468

Formal methods for distributed real-time systems / Méthodes formelles pour les systèmes distribués temps-réel

Dellabani, Mahieddine 31 October 2018 (has links)
Nowadays, real-time systems are ubiquitous in several application domains. Such an emergence has led to an increasing need for performance (resources, availability, concurrency, etc.) and initiated a shift from single-processor hardware platforms to large sets of interconnected and distributed computing nodes. This trend gave birth to a new family of systems that are intrinsically distributed, namely Networked Embedded Systems. Such an evolution stems from the growing complexity of real-time software embedded on such platforms (e.g. electronic control in the avionics and automotive domains), and from the need to integrate formerly isolated systems so that they can cooperate and share resources, thus improving functionality and reducing costs. Undoubtedly, the design, implementation and verification of such systems are acknowledged to be very hard tasks, since they are subject to different kinds of factors, such as communication delays, CPU speed or even hardware imprecisions, which considerably increases the complexity of coordinating parallel activities. In this thesis, we propose a rigorous design flow intended for building distributed real-time applications. We investigate timed-automata-based models, with formally defined semantics, in order to study the behavior of a given system with imposed timing constraints when deployed in a distributed environment. In particular, we study (i) the impact of communication delays, by introducing a minimum latency between the execution of actions and the effective date at which those executions were decided, and (ii) the effect of hardware imperfections, more precisely clock imprecisions, on system execution, by breaking the perfect-clocks hypothesis often adopted during the modeling phase. Nevertheless, the timed automata formalism is intended to describe a high-level abstraction of the behavior of a given application. Therefore, we use an intermediate representation of the initial application that, besides having "equivalent" behavior, explicitly expresses implementation mechanisms, and thus reduces the gap between the model and the concrete implementation. Additionally, we contribute to the construction of such systems by (iii) proposing a knowledge-based optimization method that aims to eliminate unnecessary computation time and exchanges of messages during execution. We compare the behavior of each proposed model to the initial high-level model and study the relationships between the two. We then identify and formally characterize the potential problems resulting from these additional constraints. Furthermore, we propose execution strategies that preserve some desired properties and reach a "similar" execution scenario, faithful to the original specifications.
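A toy numeric illustration of why the perfect-clocks hypothesis matters follows: an action decided just before its deadline can still be on time under ideal assumptions, yet miss the deadline once a minimum communication latency and a drifting local clock are taken into account. All constants are arbitrary and not taken from the thesis.

```python
# Toy numbers only: an action decided at (real) time 9.5 meets a deadline of
# 10.0 if executed immediately on an ideal clock, but misses it once a minimum
# communication latency and a drifting local clock are taken into account.
DEADLINE = 10.0          # the action must execute before local time 10.0
NETWORK_LATENCY = 0.4    # minimum delay between deciding and executing an action
CLOCK_DRIFT = 0.05       # the executing node's clock runs 5% fast

def local_clock(real_time: float) -> float:
    return real_time * (1.0 + CLOCK_DRIFT)

decision_time = 9.5
execution_time = decision_time + NETWORK_LATENCY
print("ideal model        :", "meets deadline" if decision_time <= DEADLINE else "misses deadline")
print("with latency + drift:", "meets deadline" if local_clock(execution_time) <= DEADLINE else "misses deadline")
```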
470

Méthodes de vérification de spécifications comportementales : étude et mise en œuvre / Verification methods for behavioral specifications: study and implementation

Mounier, Laurent 31 January 1992 (has links) (PDF)
We first recall the principle of the classical decision procedures, which rely on partition-refinement algorithms. This approach requires building beforehand the transition relations of the two systems to be compared, which is a limitation in practice. We therefore propose an original algorithm, based on a depth-first traversal of the synchronous product of the two systems, which performs the comparison "on the fly", without ever constructing or explicitly storing the two transition relations. The on-the-fly comparison algorithm has been implemented in the Aldebaran verification tool for several relations: strong bisimulation, observational equivalence, tau*a-bisimulation, delay bisimulation and branching bisimulation, as well as the safety preorder and safety equivalence. Its application to the verification of several Lotos programs of realistic size confirmed the practical interest of our approach compared with classical methods. Finally, we also address the generation of a diagnostic when the two labelled transition systems being compared are not equivalent: in that case, the decision procedures implemented in Aldebaran provide a set of discriminating execution sequences, minimal with respect to a given order relation.
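A much-simplified sketch of the on-the-fly idea follows: for the deterministic toy transition systems used here, strong bisimilarity coincides with trace equivalence, so a depth-first traversal of the synchronous product can search for a discriminating action sequence without ever building or storing the full transition relations. Aldebaran's actual decision procedures also handle nondeterminism and the weaker equivalences listed above; the two transition systems below are invented.

```python
# Much-simplified "on-the-fly" comparison by depth-first traversal of the
# synchronous product of two labelled transition systems. Deterministic toy
# LTSs only; the real algorithms also handle nondeterminism and weak relations.
LTS1 = {"p0": {"a": "p1"}, "p1": {"b": "p0"}}
LTS2 = {"q0": {"a": "q1"}, "q1": {"c": "q0"}}

def compare(s1, s2):
    """Return a discriminating action sequence if one exists, else None."""
    stack, visited = [((s1, s2), [])], set()
    while stack:
        (u, v), trace = stack.pop()
        if (u, v) in visited:
            continue
        visited.add((u, v))
        acts1, acts2 = set(LTS1[u]), set(LTS2[v])
        if acts1 != acts2:                       # one side offers an action the other cannot match
            return trace + [sorted(acts1 ^ acts2)[0]]
        for a in acts1:                          # explore the product on the fly
            stack.append(((LTS1[u][a], LTS2[v][a]), trace + [a]))
    return None

print(compare("p0", "q0"))   # -> ['a', 'b']: after 'a', only LTS1 can perform 'b'
```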
