81

Analyses de terminaison des calculs flottants / Termination Analysis of Floating-Point Computations

Maurica Andrianampoizinimaro, Fonenantsoa 08 December 2017 (has links)
The infamous Blue Screen of Death of Windows aptly introduces the problem at hand. This bug is often caused by a non-terminating device driver: the program runs forever, blocking in the process all the resources it allocated for its computations. This thesis develops techniques for deciding, before runtime, the termination of a given program for every possible value of its inputs. In particular, we are interested in programs that manipulate floating-point numbers. These numbers are ubiquitous in current processors and are used by nearly all software developers. Yet they are often misunderstood and, hence, a source of bugs. Indeed, floating-point computations are tainted with errors, inherent in the fact that they are performed within a finite amount of memory. For example, although true in the reals, the equality 0.1 + 0.2 = 0.3 is false in the floats. Not handled properly, these errors can lead to catastrophic events, such as the Patriot missile incident that killed 28 people. The theories we develop are illustrated, and put to the test, by code snippets taken from widely used programs. Notably, we were able to exhibit termination bugs due to incorrect floating-point computations in some packages of the Ubuntu distribution.
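A minimal C sketch (illustrative only, not taken from the thesis) of the kind of floating-point termination bug described: a loop whose exit test uses exact equality never fires, because 0.1 has no exact binary representation and the accumulator steps over 1.0 without ever equalling it.

```c
#include <stdio.h>

/* The loop is meant to stop at 1.0, but after ten additions x is
 * 0.99999999999999989, and the next step lands on 1.0999999999999999.
 * The bound check exists only so this demonstration itself terminates. */
int main(void) {
    double x = 0.0;
    while (x != 1.0) {          /* intended exit condition: never satisfied */
        x += 0.1;
        if (x > 2.0) {          /* safety net for the demo                  */
            printf("skipped past 1.0: x = %.17g\n", x);
            return 1;
        }
    }
    return 0;
}
```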
82

Un système de types pragmatique pour la vérification déductive des programmes / A Pragmatic Type System for Deductive Software Verification

Gondelman, Léon 13 December 2016 (has links)
This thesis is set in the context of deductive program verification and aims to formalize a number of concepts implemented in the verification tool Why3. The general idea is to explore the solutions that a type-system-based approach can bring to verification. We first focus on the notion of ghost code, a technique implemented in many modern verification tools, which consists in giving elements of the specification the appearance of operational code. Using ghost code correctly requires care, since it must never interfere with the rest of the code. The first chapter is devoted to a formalization of ghost code, illustrating how a type system with effects allows a use of it that is both correct and expressive.
The second chapter addresses the verification of programs that manipulate pointers. In the presence of aliasing, that is, when several pointers handled by a program denote the same memory cell, specification and verification become non-trivial. Rather than turning to existing approaches that tackle the aliasing problem in all its complexity but leave the framework of Hoare logic, we present a type system with effects and singleton regions that resolves aliasing issues by performing a static control of aliases even before proof obligations are generated. Although this type system limits us to pointers whose identity can be known statically, our observation is that it suits the great majority of the programs one wishes to verify.
Finally, we address the verification of programs designed modularly. Concretely, we are interested in the situation where an abstraction barrier separates the user's code from the libraries it depends on: the libraries provide the user with a set of functions and data structures without revealing the details of their implementation, and the user's code can exploit these data only through the functions provided. In such a situation, verification can itself be modular. On the user's side, verification must rely only on the type invariants and function contracts exposed by the libraries; on the libraries' side, verification must guarantee that the concrete representation correctly refines the exposed entities, that is, preserves the type invariants and function contracts. The third chapter explores how a type system enabling static control of aliases can be adapted to modular verification and the refinement of data structures.
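A loose sketch of the ghost-code discipline the abstract describes, written in C for illustration (Why3 uses its own WhyML syntax, not shown here; a type system with effects would check the non-interference below statically rather than at run time). Ghost state may observe the operational computation but must never flow back into it: erasing every "ghost" line leaves the result unchanged.

```c
#include <assert.h>

int sum_array(const int *a, int n) {
    int s = 0;
    int ghost_steps = 0;            /* ghost: exists only for the spec      */
    for (int i = 0; i < n; i++) {
        s += a[i];                  /* operational code                     */
        ghost_steps++;              /* ghost update; s never reads it       */
    }
    assert(ghost_steps == n);       /* spec-level check on the ghost state  */
    return s;
}

int main(void) {
    int a[] = { 1, 2, 3 };
    return sum_array(a, 3) == 6 ? 0 : 1;
}
```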
83

Simulation of Attitude and Orbit Control for APEX CubeSat

de Graaf, Niels January 2020 (has links)
CubeSats are becoming a game changer in the space industry. Having first appeared in university missions, their popularity is increasing for commercial use and for deep-space missions such as the HERA mission, which will orbit an asteroid from 2026 as part of a planetary defence mission. Standardisation and industrial collaboration are key to fast development, assuring product quality and lowering development expenditures. This study focuses on elaborating a low-cost demonstrator platform for developing and testing onboard software on physical hardware: a hardware-software testing facility. The purpose of such a platform is to create an interactive and accessible environment for developing onboard software. The application chosen to be elaborated on this platform is a module of the attitude and orbit control subsystem of a satellite orbiting an asteroid. To create this platform, a simulation of the CubeSat's asteroid environment was built using open-source software libraries, and the performance of these open-source libraries was compared to commercial alternatives. In developing the simulation, different orbit perturbations were studied by modelling the asteroid as a cube or a spheroid, together with the effects of a third perturbing body and of radiation pressure. As part of this project, two microcontrollers were set up to communicate over a communication bus using protocols employed in space applications, simulating how attitude and orbit control is commanded inside the CubeSat.
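By way of illustration, a simplified C sketch of the per-step acceleration such a perturbed-orbit propagator evaluates: point-mass gravity of the asteroid plus a constant solar-radiation-pressure term. The structure and the parameter values are assumptions for this sketch, not the study's code; the cube/spheroid gravity models and the third-body term studied in the thesis would replace or extend the gravity term.

```c
#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } vec3;

static vec3 accel(vec3 r, double mu, double srp, vec3 sun_dir) {
    double d = sqrt(r.x * r.x + r.y * r.y + r.z * r.z);
    double g = -mu / (d * d * d);            /* gravity: -mu * r / |r|^3    */
    vec3 a = { g * r.x + srp * sun_dir.x,    /* srp: radiation-pressure     */
               g * r.y + srp * sun_dir.y,    /* acceleration along the unit */
               g * r.z + srp * sun_dir.z };  /* sun direction               */
    return a;
}

int main(void) {
    vec3 r   = { 1000.0, 0.0, 0.0 };          /* 1 km from asteroid centre  */
    vec3 sun = { 1.0, 0.0, 0.0 };
    vec3 a = accel(r, 35.0, 1.0e-9, sun);     /* mu and SRP: assumed values */
    printf("a = (%g, %g, %g) m/s^2\n", a.x, a.y, a.z);
    return 0;
}
```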
84

Program Verification of FreeRTOS using Microsoft Dafny

Matias, Matthew John 28 May 2014 (has links)
No description available.
85

A comprehensive process for Automotive Model-Based Control

Gurusubramanian, Sabarish 27 September 2013 (has links)
No description available.
86

A model-driven development and verification approach for medical devices

Jedryszek, Jakub January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / John Hatcliff / Medical devices are safety-critical systems whose failure may put human life in danger. They are becoming more advanced and thus more complex, which leads to bigger and more complicated code bases that are hard to maintain and verify. Model-driven development provides a high-level, abstract description of the system in the form of models that omit details not relevant during the design phase. This allows certain types of verification and hazard analysis to be performed on the models, which can then be translated into code. However, errors that do not exist in the models may be introduced during the implementation phase; automated translation from verified models to code can prevent this to some extent. This thesis proposes an approach for the model-driven development and verification of medical devices. Models are created in AADL (Architecture Analysis & Design Language), a language for software and hardware architecture modeling. The AADL models are translated to SPARK Ada, a contract-based programming language suitable for software verification. The generated code base is further extended by developers to implement the internals of specific devices, and the resulting programs can be verified using the SPARK tools. A PCA (Patient-Controlled Analgesia) pump medical device is used to illustrate the primary artifacts and process steps. The foundation for this work is the "Integrated Clinical Environment Patient-Controlled Analgesia Infusion Pump System Requirements" document and the AADL models created by Brian Larson. In addition to the proposed model-driven development approach, a PCA pump prototype was created using the BeagleBoard-xM device as a platform; some components of the prototype were verified with the SPARK tools and Bakar Kiasan.
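A rough C approximation (not the thesis's SPARK Ada) of the contract style described: the pre- and postconditions below are what SPARK would express as Pre/Post aspects and discharge statically with a prover, rather than check at run time. All names and limits here are made up for illustration.

```c
#include <assert.h>

#define MAX_RATE_ML_PER_H 500   /* hypothetical limit, illustration only */

static int clamp_infusion_rate(int requested, int hard_limit) {
    assert(0 <= requested  && requested  <= MAX_RATE_ML_PER_H);  /* Pre  */
    assert(0 <= hard_limit && hard_limit <= MAX_RATE_ML_PER_H);  /* Pre  */
    int rate = requested < hard_limit ? requested : hard_limit;
    assert(rate <= hard_limit);                                  /* Post */
    return rate;
}

int main(void) {
    /* requested rate above the prescriber's hard limit gets clamped */
    return clamp_infusion_rate(120, 100) == 100 ? 0 : 1;
}
```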
87

Verification and validation of computer simulations with the purpose of licensing a pebble bed modular reactor

Bollen, Rob 12 1900 (has links)
Thesis (MBA)--Stellenbosch University, 2002. / ENGLISH ABSTRACT: The Pebble Bed Modular Reactor, developed by PBMR (Pty) Ltd, is a new and inherently safe concept for a nuclear power generation plant. In order to obtain the necessary licences to build and operate this reactor, numerous design and safety analyses need to be performed. The results of these analyses must be supported with substantial proof in order to give the nuclear authorities a sufficient level of confidence in the results to grant the required licences. Besides the obvious need for confidence in the safety analyses, the analyses concerned with investment protection also need to be reliable from the investors' point of view. The process followed to build confidence in these analyses is verification and validation. It is aimed at presenting reliable material against which to compare the results of the simulations. This comparison material consists of a combination of results from experimental data, extracts from actual plant data, analytical solutions, and independently developed solutions for simulating the event to be analysed. Besides comparison with these alternative sources of information, confidence in the results is also built by providing validated statements on the accuracy of the results and on the boundary conditions with which the simulations must comply. Numerous standards exist that address the verification and validation of computer software, for instance from organisations such as the American Society of Mechanical Engineers (ASME) and the Institute of Electrical and Electronics Engineers (IEEE). However, these standards do not entirely cover the focal points of the verification and validation of the design and safety analyses performed on typical PBMR modes and states, nor the requirements imposed by the local and overseas nuclear regulators. For this reason, PBMR developed a systematic and disciplined approach for preparing the Verification and Validation Plan, aimed at capturing the essence of the analyses. This approach makes a definite division between software development and the development of technical analyses, while still using similar processes for verification and validation. The reasoning is that technical analyses are performed by engineers and scientists who should be responsible only for the verification and validation of the models and data they use, not for the software they depend on; software engineers should be concerned with delivering qualified software to be used in the technical analyses. The PBMR verification and validation process applies to both hand calculations and computer-aided analyses, addressing specific requirements in clearly defined stages of the software and technical-analysis life cycles. The verification and validation effort of the technical-analysis activity is divided into the verification and validation of models and data, the review of calculational tasks, and the verification and validation of software, with the applicable information to be validated captured in registers or databases. The resulting processes are as simple as possible, concise and practical. Effective use of resources is ensured, and internationally accepted standards have been incorporated, fostering confidence in the process among all stakeholders, including investors, nuclear regulators and the public.
88

A model checker for the LF system

Gerber, Erick D. B. 03 1900 (has links)
Thesis (MSc)--University of Stellenbosch, 2007. / ENGLISH ABSTRACT: Computer-aided verification techniques, such as model checking, can be used to improve the reliability of software. Model checking is an algorithmic approach to establishing the correctness of temporal logic specifications against a formal description of a hardware or software system. In contrast to traditional testing tools, model checking relies on an exhaustive search of all the possible configurations that these systems may exhibit. Traditionally, model checking is applied to abstract or high-level designs of software; however, interpreting or translating these abstract designs into implementations often introduces subtle errors. In recent years one trend in model checking has been to apply the model checking algorithm directly to the implementations instead. This thesis is concerned with building an efficient model checker for a small concurrent language developed at the University of Stellenbosch. This special-purpose language, LF, is aimed at the development of small embedded systems. The design of the language was carefully considered to promote safe programming practices, and the language and its runtime support system were designed to allow LF programs to be model checked directly. To achieve this, the model checker extends the existing runtime support infrastructure to generate the state space of an executing LF program.
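A toy C sketch (hypothetical transition system, not the LF checker) of the exhaustive exploration the abstract describes: depth-first search over a finite state space, marking visited states and checking a safety property in each one. The real checker derives successor states by driving the LF runtime rather than a hard-coded successor function.

```c
#include <stdbool.h>
#include <stdio.h>

#define NSTATES 64

static bool visited[NSTATES];

/* Made-up transition relation: each state has two successors. */
static int  succ(int s, int i) { return i ? (s + 3) % NSTATES : (s * 2) % NSTATES; }
static bool safe(int s)        { return s != 13; }  /* property: never reach 13 */

static bool explore(int s) {
    if (visited[s]) return true;        /* already checked this state       */
    visited[s] = true;
    if (!safe(s)) { printf("violation in state %d\n", s); return false; }
    return explore(succ(s, 0)) && explore(succ(s, 1));  /* stop on violation */
}

int main(void) { return explore(1) ? 0 : 1; }
```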
89

Modular Detection of Feature Interactions Through Theorem Proving: A Case Study

Roberts, Brian Glenn 21 August 2003 (has links)
"Feature-oriented programming is a way of designing a program around the features it performs, rather than the objects or files it manipulates. This should lead to an extensible and flexible "product-line" architecture that allows custom systems to be assembled with particular features included or excluded as needed. Composing these features together modularly, while leading to flexibility in the feature-set of the finished product, can also lead to unexpected interactions that occur between features. Robert Hall presented a manual methodology for locating these interactions and has used it to search for feature interactions in email. Li et al. performed automatic verification of Hall's system using model-checking verifications tools. Model-checking verification is state-based, and is not well-suited for verifying recursive data structures, an area where theorem-proving verification tools excel. In this thesis, we propose a methodology for using formal theorem-proving tools for modularly verifying feature-oriented systems. The methodology presented captures the essential steps for using modular techniques for modeling and verifying a system. This enables verification of individual modules, without examining the source code of the other modules in the system. We have used Hall's email system as a test case for validating the methodology."
90

Efficient Online Path Profiling

Vaswani, Kapil 10 1900 (has links)
Most dynamic program analysis techniques, such as profile-driven compiler optimizations, software testing and runtime property checking, infer program properties by profiling one or more executions of a program. Unfortunately, program profiling does not come for free: even the most efficient techniques for profiling acyclic, intra-procedural paths can slow down program execution by a factor of 2. In this thesis, we propose techniques that significantly lower the overheads of profiling paths, enabling the use of path-based dynamic analyses in cost-sensitive environments.

Preferential path profiling (PPP) is a novel software-only path profiling scheme that efficiently profiles given subsets of paths, which we refer to as interesting paths. The algorithm is based on the observation that most consumers of path profiles are only interested in profiling a small set of paths known a priori. Our algorithm can be viewed as a generalization of the Ball-Larus path profiling algorithm. Whereas the Ball-Larus algorithm assigns weights to the edges of a given CFG such that the sum of the weights of the edges along each path through the CFG is unique, our algorithm assigns weights to the edges such that the sum of the weights along the edges of interesting paths is unique. Furthermore, our algorithm attempts to achieve a minimal and compact encoding of the interesting paths; such an encoding significantly reduces the overheads of path profiling by eliminating expensive hash operations during profiling. Interestingly, we find that both the Ball-Larus algorithm and PPP are essentially a form of arithmetic coding. We use this connection to prove that the numbering produced by PPP is optimal.

We also propose a programmable, non-intrusive hardware path profiler (HPP). The hardware profiler consists of a path detector that detects paths by monitoring the stream of retiring branch instructions emanating from the processor pipeline. The path detector can be programmed to detect various types of paths and track architectural events that occur along paths. The second component of the hardware profiling infrastructure is a Hot Path Table (HPT) that collects accurate hot path profiles.

Our experimental evaluation shows that PPP reduces the overheads of profiling paths to 15% on average (with a maximum of 26%). The algorithm can be easily extended to profile inter-procedural paths at minimal additional overhead (26% on average). We modeled HPP using a cycle-accurate superscalar processor simulator and find that HPP generates accurate path profiles at extremely low overheads (0.6% on average) with a moderate hardware budget. We also evaluated the use of PPP and HPP in realistic profiling scenarios: the profiles generated by HPP can effectively replace expensive profiles used in profile-driven optimizations, and even well-tested programs tend to exercise a large number of untested paths in the field, emphasizing the need for efficient profiling schemes that can be deployed in production environments.
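A small C sketch of the Ball-Larus numbering the abstract builds on (the CFG here is made up). Edge weights are assigned in reverse topological order so that the sum of weights along each entry-to-exit path is a distinct index in [0, numpaths[0]); PPP generalizes this step so that only a chosen subset of paths receives compact unique sums.

```c
#include <stdio.h>

#define N 4   /* hypothetical DAG: node 0 = entry, node N-1 = exit,
                 nodes already in topological order                    */

static int adj[N][N] = { {0,1,1,0}, {0,0,0,1}, {0,0,0,1}, {0,0,0,0} };
static int numpaths[N];   /* number of paths from each node to the exit */
static int weight[N][N];  /* Ball-Larus edge weights                    */

int main(void) {
    numpaths[N - 1] = 1;                      /* exit: a single empty path */
    for (int u = N - 2; u >= 0; u--) {
        int total = 0;
        for (int v = 0; v < N; v++)
            if (adj[u][v]) {
                weight[u][v] = total;         /* offset = paths seen so far */
                total += numpaths[v];
            }
        numpaths[u] = total;
    }
    /* here: path 0->1->3 sums to 0, path 0->2->3 sums to 1 */
    printf("distinct entry-to-exit paths: %d\n", numpaths[0]);
    return 0;
}
```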
