131

Os teoremas de Stewart e de Heron e o cálculo da área de um triângulo em função dos lados / The theorems of Stewart and Heron and the calculation of the area of a triangle from its sides

Oliveira, Carlos Alberto Maziozeki de 07 March 2014 (has links)
CAPES / This dissertation grew out of the observation that theorems of mathematicians such as Matthew Stewart and Heron are seldom used in mathematics classes in middle and high school. The contributions of these mathematicians to the calculation of cevians and of the area of triangles can simplify the solution of many problems. As a starting point, an extracurricular activity containing four questions centered on Stewart's theorem was prepared and administered to three high school classes at CPM-PR. The analysis of the results of this activity then guided the bibliographic research, the structure of the text, and the organization of a collection of applied problems.
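
For reference, the two classical results the dissertation builds on can be stated as follows (standard statements, not quoted from the dissertation): for a triangle with sides a, b, c, let a cevian of length d meet side a, dividing it into segments m (adjacent to side c) and n (adjacent to side b), and let s be the semiperimeter.

```latex
% Stewart's theorem: relates the cevian d to the sides it spans
b^2 m + c^2 n = a\,(d^2 + mn)

% Heron's formula: the area from the three sides alone
A = \sqrt{s(s-a)(s-b)(s-c)}, \qquad s = \frac{a+b+c}{2}
```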
132

Secure System Virtualization : End-to-End Verification of Memory Isolation

Nemati, Hamed January 2017 (has links)
In recent years, security kernels have played a promising role in reshaping the landscape of platform security on embedded devices. Security kernels, such as separation kernels, enable constructing high-assurance mixed-criticality execution platforms on top of a small TCB that enforces isolation between components. The reduced TCB minimizes the system attack surface and facilitates the use of formal methods to ensure the kernel's functional correctness and security. In this thesis, we explore various aspects of building a provably secure separation kernel using virtualization technology. We show how the memory management subsystem can be virtualized to enforce isolation of system components. Virtualization is done using direct paging, which enables a guest software to manage its own memory configuration. We demonstrate the soundness of our approach by verifying that the high-level model of the system fulfills the desired security properties. Through refinement, we then propagate these properties (semi-)automatically to the machine code of the virtualization mechanism. Further, we show how a runtime monitor can be securely deployed alongside a Linux guest on a hypervisor to prevent code injection attacks targeting Linux. The monitor takes advantage of the provided separation to protect itself and to retain a complete view of the guest. Separating components using low-level software cannot by itself guarantee system security. Indeed, current processor architectures include features that can be exploited to violate the isolation of components. We present a new low-noise attack vector, constructed by measuring cache effects, which is capable of breaching the isolation of components and invalidates the verification of software that was verified on a memory-coherent model. To restore isolation, we provide several countermeasures and propose a methodology to repair the verification by including data caches in the statement of the top-level security properties of the system. / QC 20170831 / PROSPER / HASPOC
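
A minimal sketch of the timing primitive that cache-based attacks of this kind rely on is shown below. It is illustrative only: the thesis's actual attack vector additionally needs eviction sets, a calibrated hit/miss threshold, and a victim partition to observe, and real attacks prefer cycle-accurate counters (e.g. rdtsc) over the portable clock used here, whose resolution may be too coarse to separate a single hit from a miss.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint64_t now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000u + (uint64_t)ts.tv_nsec;
}

/* Time a single load: a short latency suggests the cache line was
 * already resident, i.e. something touched it recently. */
static uint64_t probe(volatile const uint8_t *addr) {
    uint64_t t0 = now_ns();
    (void)*addr;                      /* the measured memory access */
    return now_ns() - t0;
}

int main(void) {
    static uint8_t line[64];          /* one cache line's worth of data */
    printf("cold access: %llu ns\n", (unsigned long long)probe(line));
    printf("warm access: %llu ns\n", (unsigned long long)probe(line));
    return 0;
}
```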
133

Účetnictví jako důkaz / The Accounting As An Evidence

Smolíková, Tereza January 2015 (has links)
The diploma thesis is focused on accounting as evidence, more specifically as a tool for procuring proof of the facts claimed by the taxpayer. First, the financial and tax terms relevant to the topic are described. Then the rights and obligations of the tax administrator and the taxpayer during the tax procedure are described. The penultimate chapter deals with the burden of proof and its transfer between the tax administrator and the taxpayer. In conclusion, the kinds of evidence that can be used in proving the facts are discussed.
134

Une couverture combinant tests et preuves pour la vérification formelle / A coverage combining tests and proofs for formal verification

Le, Viet Hoang 11 July 2019 (has links)
Currently, industrial-strength software development usually relies on unit testing or unit proof to rigorously ensure high-level requirements. Combining these techniques has already been demonstrated to be more effective than using either of them alone. Verification engineers nevertheless remain reluctant to use the two techniques together, for lack of a notion of coverage common to testing and proving. Defining such a notion is the main objective of this thesis. We introduce a new notion of coverage, called "label-mutant coverage". It subsumes most of the usual structural coverage criteria for unit testing, including statement coverage, branch coverage, and MC/DC coverage, while allowing one to decide whether the chosen criterion is satisfied using a formal verification technique: testing, proving, or a combination of both. It also subsumes functional coverage criteria. We further introduce a method relying on automatic testing and proving tools to reduce the verification effort while satisfying the chosen coverage criterion. This method is implemented within Frama-C, a framework for the analysis and verification of C code, thus giving the engineer an operational means to control and carry out the intended verification.
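
The labels themselves are not shown in the abstract; the sketch below illustrates the underlying idea of encoding a structural criterion (here, branch coverage) as labels. A coverage objective is a predicate attached to a program point: a test covers it by reaching the point with the predicate true, and a prover discharges it by showing the predicate infeasible there. pc_label() is an assumed stand-in defined locally, not a claim about Frama-C's actual annotation API.

```c
#include <stdio.h>

static int covered[2];
#define pc_label(cond, id) do { if (cond) covered[id] = 1; } while (0)

/* Branch coverage of abs_val encoded as two labels: objective 0 is the
 * then-branch, objective 1 the else-branch. */
int abs_val(int x) {
    pc_label(x < 0, 0);
    pc_label(x >= 0, 1);
    return x < 0 ? -x : x;
}

int main(void) {
    abs_val(-5);                      /* reaches objective 0 */
    abs_val(3);                       /* reaches objective 1 */
    for (int i = 0; i < 2; i++)
        printf("objective %d: %s\n", i, covered[i] ? "covered" : "uncovered");
    return 0;
}
```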
135

Development of Correct-by-Construction Software using Product Lines / Développement de logiciels corrects par construction à partir de lignes de produits

Pham, Thi-Kim-Dung 16 November 2017 (has links)
We began the thesis by surveying the literature on SPLE and CbyC approaches in the state of the art. Based on the overview and the insights obtained, we analyzed the existing problems and suggested ways to solve them toward our main goal. In Chapter 2 we propose a methodology for developing product lines such that the generated products are correct by construction. Our main intention is that a user does not need to know the product generation process but can receive a correct final product simply by selecting a configuration of features. Using the methodology, the final products are generated automatically and their correctness is guaranteed. Following this proposal, Chapter 3 defines the FFML language, which is used for writing modules. The reuse and modification mechanism, defined for the language and applied to all kinds of artifacts (specification, code, and correctness proof), reduces the programming effort. In Chapter 4, we focus on defining the composition mechanisms for composing FFML modules and embed them in the FFML Product Generator tool. The evaluation of our methodology is carried out through the development of two software product lines, the Bank Account SPL and the Poker SPL, the latter being somewhat more complex than the former. In the evaluation, we highlight the advantages and the limitation of our methodology.
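
As a deliberately crude stand-in (not FFML, whose syntax the abstract does not show), the sketch below encodes the variability of a bank-account product line with a C preprocessor flag; FEATURE_OVERDRAFT is an assumed feature name. Selecting a configuration here means choosing compile-time flags, whereas an FFML configuration also composes each feature's specification and correctness proof along with its code.

```c
#include <stdio.h>

typedef struct { long balance; } account_t;

int withdraw(account_t *a, long amount) {
#ifdef FEATURE_OVERDRAFT
    const long floor = -500;          /* overdraft feature relaxes the floor */
#else
    const long floor = 0;             /* base product: no negative balance */
#endif
    if (a->balance - amount < floor)
        return -1;                    /* refused: would violate the invariant */
    a->balance -= amount;
    return 0;
}

int main(void) {
    account_t a = { 100 };
    /* succeeds only in products built with -DFEATURE_OVERDRAFT */
    printf("withdraw 300 -> %d (balance %ld)\n", withdraw(&a, 300), a.balance);
    return 0;
}
```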
136

HOLCF '11: A Definitional Domain Theory for Verifying Functional Programs

Huffman, Brian Charles 01 January 2011 (has links)
HOLCF is an interactive theorem proving system that uses the mathematics of domain theory to reason about programs written in functional programming languages. This thesis introduces HOLCF '11, a thoroughly revised and extended version of HOLCF that advances the state of the art in program verification: HOLCF '11 can reason about many program definitions that are beyond the scope of other formal proof tools, while providing a high degree of proof automation. The soundness of the system is ensured by adhering to a definitional approach: New constants and types are defined in terms of previous concepts, without introducing new axioms. Major features of HOLCF '11 include two high-level definition packages: the Fixrec package for defining recursive functions, and the Domain package for defining recursive datatypes. Each of these uses the domain-theoretic concept of least fixed points to translate user-supplied recursive specifications into safe low-level definitions. Together, these tools make it easy for users to translate a wide variety of functional programs into the formalism of HOLCF. Theorems generated by the tools also make it easy for users to reason about their programs, with a very high level of confidence in the soundness of the results. As a case study, we present a fully mechanized verification of a model of concurrency based on powerdomains. The formalization depends on many features unique to HOLCF '11, and is the first verification of such a model in a formal proof tool.
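
The domain-theoretic backbone both packages rely on is standard: for a continuous function f on a pointed cpo, Kleene's fixed-point theorem gives the least fixed point as the limit of finite unrollings, and the equations Fixrec derives for a recursive specification are instances of the unrolling rule. In standard notation (not HOLCF syntax):

```latex
\mathrm{fix}\,f \;=\; \bigsqcup_{n \in \mathbb{N}} f^{\,n}(\bot)
\qquad\text{with unrolling rule}\qquad
\mathrm{fix}\,f \;=\; f\,(\mathrm{fix}\,f)
```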
137

Prokazování nároku na odpočet daně z přidané hodnoty / Proving the title to value added tax deduction

Rásocha, Filip January 2021 (has links)
The Master's thesis "Proving the title to value added tax deduction" focuses on the rules and characteristics of a specific area of proving in the administration of taxes: proving the title to a value added tax (VAT) deduction. It also identifies and proposes solutions to disputed issues in this area. First, VAT and its deductions are treated in general, to the extent necessary for the issues under analysis. The thesis then analyzes in detail the conditions of the title to a VAT deduction, since compliance with these conditions is what is chiefly ascertained in the process of proving. With the necessary starting points thus defined, the title to the deduction is comprehensively and critically analyzed, the greatest attention being paid to the distribution of the burden of proof between the taxable entity and the tax administrator, and to the content and scope of that burden. In this context, many problems faced by decision-making practice are also pointed out, together with possible ways of solving them. One of them relates to proving the supplier of the taxable supply, and a fully independent part is devoted to this issue as the...
138

Formal Verification of Hardware Peripheral with Security Property / Formell verifikation av extern hårdvara med säkerhetskrav

Yao Håkansson, Jonathan, Rosencrantz, Niklas January 2017 (has links)
One problem with computers is that the operating system automatically trusts any externally connected peripheral. This can result in abuse, since a trusted peripheral can technically violate the security model precisely because it is trusted. Security is therefore an important issue to examine. The aim of our project is to determine in which cases hardware peripherals can be trusted. We built a model of a universal asynchronous receiver/transmitter (UART), a model of the main memory (RAM), and a model of a DMA controller, and we analysed the interaction between hardware peripherals, user processes, and the main memory. One of our results is that connections with hardware peripherals are secure if the hardware is properly configured. A threat scenario could be an eavesdropper or a man-in-the-middle trying to steal data or change a cryptographic key. We consider the use cases of DMA and of protecting a cryptographic key, and we prove that the algorithm is well-behaved. Some error traces resulted from incorrect modelling and were resolved by adjusting the models. Benchmarks were run for different memory sizes. The result is that a peripheral can be trusted provided it is properly configured. Our models consist of finite state machines and their corresponding SMV modules, representing computer hardware with DMA. We verified the SMV models using the model checkers NuSMV and nuXmv. / The goal of our project is to verify various specifications of external devices connected to the computer. We perform formal verification of such hardware and of virtual memory, with verification carried out in temporal logic (LTL). Specifically, we verify 4 different use cases and 9 formulas covering serial data communication, DMA, and virtual memory. The conclusion is that connecting external hardware is secure if it is properly configured. We compare different memory sizes and measure the time needed to verify the systems: verification time grows worse than linearly, and even relatively small systems take a relatively long time to verify.
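
The nine verified formulas are not reproduced in the abstract. As a hedged illustration, LTL properties of the kind described (the atomic proposition names are assumed here, not taken from the thesis) might state that DMA never writes to the key region, and that every UART request is eventually acknowledged:

```latex
\mathbf{G}\,\bigl(\mathit{dma\_active} \rightarrow \lnot\,\mathit{write\_key\_region}\bigr)
\qquad
\mathbf{G}\,\bigl(\mathit{uart\_req} \rightarrow \mathbf{F}\,\mathit{uart\_ack}\bigr)
```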
139

Low-Level Static Analysis for Memory Usage and Control Flow Recovery

Bockenek, Joshua Alexander 07 March 2023 (has links)
Formal characterization of the memory used by a program is an important basis for security analyses, compositional verification, and identification of noninterference. However, soundly proving memory usage requires operating on the assembly level due to the semantic gap between high-level languages and the code that processors actually execute. Automated methods, such as model checking, would not be able to handle many interesting functions due to the undecidability of memory usage. Fully-interactive methods do not scale well either. Sound control flow recovery (CFR) is also important for binary decompilation, verification, patching, and security analysis. It lifts raw unstructured data into a form that allows reasoning over behavior and semantics. However, doing so requires interpreting the behavior of the program when indirect or dynamic control flow exists, creating a recursive dependency. This dissertation tackles the first property with two contributions that perform proof generation combined with interactive theorem proving in a semi-automated manner: an untrusted tool extracts as much information as it can from the functions under test and then generates all the necessary proofs to be completed in a theorem prover. The first, Floyd-style approach still requires significant manual effort but provides good flexibility and ensures no paths are analyzed more than once. In contrast, the second, Hoare-style approach sacrifices some flexibility and avoidance of repeated path evaluation in order to achieve much greater automation. However, neither approach can handle the dynamic control flow caused by indirect branching. The second property is handled by the second set of contributions of this dissertation. These two contributions provide fully-automated methods of recovering control flow from binaries even in the presence of indirect branching. When such dynamic control flow cannot be overapproximatively resolved, it is clearly noted in the resultant output. In the first approach to control flow recovery, a structured memory representation allows for general analysis of control flow in the presence of indirection, gaining scalability by utilizing context-free function analysis. It supports various aliasing conditions via the usage of nondeterminism, with multiple output states potentially being produced from a given input state. The second approach adds function context and abstract interpretation-inspired modeling of the C++ exception handling (EH) application binary interface (ABI), allowing for the discovery of previously-unknown paths while maintaining or increasing automation. / Doctor of Philosophy / Modern computer programs are so complicated that individual humans cannot manually check all but the smallest programs to make sure they are correct and secure. This is even worse if you want to reduce the trusted computing base (TCB), the stuff that you have to assume is working right in order to say a program will execute correctly. The TCB includes your computer itself, but also whatever tools were used to take the programs written by programmers and transform them into a form suitable for running on a computer. Such tools are often called compilers. One method of reducing the TCB is to examine the lowest-level representation of that program, the assembly or even machine code that is actually run by your computer. This poses unique challenges, because operating on such a low level means you do not have a lot of the structure that a more abstract, higher-level representation provides. 
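
Schematically, and in notation assumed here rather than drawn from the dissertation, the memory-usage property at issue has the shape of a Hoare triple whose postcondition bounds a function's footprint by overapproximated region sets R and W:

```latex
\{\,P\,\}\; f \;\{\, \mathit{reads}(f) \subseteq \mathcal{R}
   \;\wedge\; \mathit{writes}(f) \subseteq \mathcal{W} \,\}
```

Overapproximation is what keeps such a judgment sound despite undecidability: R and W may include regions that are never touched, but must never omit one that is.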
Also, sometimes you want to formally state things about a program's behavior; that is, say things about what it does with a high degree of confidence based on mathematical principles. You may also want to verify that one or more of those statements are true. If you want to be detailed about that behavior, you may need to know all of the chunks, or regions, in random-access memory (RAM) that are used by that program. RAM, henceforth referred to as just "memory", is your computer's first place of storage for the information used by running programs. This is distinct from long-term storage devices like hard disk drives (HDDs) or solid-state drives (SSDs), which programs do not normally have direct access to. Unfortunately, there is no one single approach that can automatically determine with absolute certainty for all cases the exact regions of memory that are read or written. This is called undecidability, and means that you need to approximate those memory regions a lot of the time if you want to have a significant degree of automation. An underapproximation, an approach that only gives you some of the regions, is not useful for formal statements as it might miss out on some behavior; it is unsound. This means that you need an overapproximation, an approach that is guaranteed to give you at least the regions read or written. Therefore, the first contribution of this dissertation is a preliminary approach to such an overapproximation. This approach is based on the work of Robert L. Floyd, focusing on the direct control flow (where the steps of a program go) in an individual function (structured program component). It still requires a lot of user effort, including having to manually specify the regions in memory that were possibly used and do a lot of work to prove that those regions are (overapproximatively) correct, so our tests were limited in scope. The second contribution automated a lot of the manual work done for the first approach. It is based on the work of Charles Antony Richard Hoare, who developed a verification approach focusing on the syntax (the textual form) of programs. This contribution produces what we call formal memory usage certificates (FMUCs), which are formal statements that the regions of memory they describe are the only ones possibly affected by the functions under test. These statements also come with proofs, which for our work are like scripts used to verify that the things the FMUCs assert about the corresponding functions can be shown to be true given the assumptions our FMUCs have. Sometimes those proofs are incomplete, though, such as when there is a loop (repeated bit of code) in a function under test or one function calls (executes) another. In those cases, a user has to finish the proof, in the first case by weakening (removing information from) the FMUC's statements about the loop and in the second by composing, or combining, the FMUCs of the two functions. Additionally, this second approach cannot handle dynamic control flow. Such control flow occurs when the low-level instructions a program uses to move to another place in that program do not have a pre-stored location to go to. Instead, that location is supplied as the program is running. This is opposed to direct control flow, where the place to go to is hard-coded into the program when it is compiled.
The tool also cannot deal with aliasing, which is when different state parts (value-holding components) of a program contain the same value and that value is used as the numeric address or identifier of a location in memory. Specifically, it cannot deal with potential aliasing, when there is not enough information available to determine if the state parts alias or not. Because of that, we had to add extra assumptions to the FMUCs that limited them to those cases where ambiguous memory-referencing state parts referred to separate memory locations. Finally, it specifically requires assembly as input; you cannot directly supply a binary to it. This is also true of the first contribution. Because of this, we were able to test on more functions than before, but not a lot more. Not being able to deal with dynamic control flow is a big problem, as almost all programs use it. For example, when a function reaches its end, it has to figure out where to return to based on the current state of the program (in the previous contribution, this was done manually). This means that control flow recovery (CFR) is very important for many applications, including decompilation (converting a program back into a higher-level form), patching (updating a program in place without modifying the original code and recompiling it), and low-level analysis or verification in general. However, as you may have noticed from earlier in this paragraph, in order to deal with such dynamic control flow you need to figure out what the possible destinations are for the individual control flow transfers. That can require knowing where you came from in the program, which means that analysis of dynamic control flow requires context (here, information previously obtained in the program). Even worse, it is another undecidable problem that requires overapproximation. To soundly recover control flow, we developed Hoare graphs (HGs), the third contribution of this dissertation. HGs use memory models that take the form of forests, or collections of tree data structures. A single tree represents a region in memory that may have multiple symbolic references, or abstract representations of a value. The children of the tree represent regions used in the program that are enclosed within their parent tree elements. Now, instead of assuming that all ambiguous memory regions are separate, we can use them under various aliasing conditions. We have also implemented support for some forms of dynamic control flow. Those that are not supported are clearly marked in the resultant HG. No user interaction is required even when loops are present thanks to a methodology that automatically reduces the amount of information present at a re-executed instruction until the information stabilizes. Function composition is also automatic now thanks to a method that treats each function as its own context in a safe and automated way, reducing memory consumption of our tool and allowing larger programs to be examined. In the process we did lose the ability to deal with recursion (functions that call themselves or call other functions that call back to the original), though. Lastly, we provided the ability to directly load binaries into the tool, no external disassembly (converting machine code into human-readable instructions) needed. This all allowed much greater testing than before, with applications to multiple programs and program libraries.
The fourth and final contribution of this dissertation iterates on the HG work by narrowing focus to the concept of exceptional control flow. Specifically, it models the kind of exception handling used by C++ programs. This is important as, if you want to explore a program's behavior, you need to know all the places it goes to. If you use a tool that does not model exception handling, you may end up missing paths of execution caused by unwinding. This is when an exception is thrown and propagates up through the program's current stack of function calls, potentially reaching programmer-supplied handling for that exception. Despite this, commonplace tools for static, low-level program analysis do not model such unwinding. The control flow graphs (CFGs) produced by our exception-aware tool are called exceptional interprocedural control flow graphs (EICFGs). These provide information about the exceptions being thrown and what paths they take in the program when they are thrown. Additional improvements are a better methodology for handling dynamic control flow as well as restored support for recursion. All told, this allowed us to explore even more programs than ever before.
140

Validating reasoning heuristics using next generation theorem provers

Steyn, Paul Stephanes 31 January 2009 (has links)
The specification of enterprise information systems using formal specification languages enables the formal verification of these systems. Reasoning about the properties of a formal specification is a tedious task that can be greatly facilitated through the use of an automated reasoner. However, set theory is a cornerstone of many formal specification languages and poses demanding challenges to automated reasoners. To this end, a number of heuristics have been developed to aid the Otter theorem prover in finding short proofs for set-theoretic problems. This dissertation investigates the applicability of these heuristics to next generation theorem provers. / Computing / M.Sc. (Computer Science)
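
As a hedged illustration (not one of the dissertation's benchmark problems), a set-theoretic goal of the kind such heuristics target, which a resolution prover like Otter must tackle element-wise after clausification:

```latex
(A \cup B) \setminus C \;=\; (A \setminus C) \cup (B \setminus C)
% element-wise: x \in (A \cup B) \setminus C
%   \iff (x \in A \lor x \in B) \land x \notin C
```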
