201

Defending Against GPS Spoofing by Analyzing Visual Cues

Xu, Chao 21 May 2020 (has links)
GPS navigation services are used by billions of people in their daily lives, and GPS spoofing remains a challenging problem. Existing anti-GPS-spoofing systems rely primarily on expensive equipment and complicated algorithms, which makes them impractical for most users to deploy. In this thesis, we explore the feasibility of a simple text-based system for anti-GPS spoofing. The goal is to lower the cost while keeping the system effective and robust for general spoofing-attack detection. Our key idea is to use only textual information from the physical world and to build a real-time system that detects GPS spoofing. To demonstrate feasibility, we first design image-processing modules that collect sufficient textual information from panoramic images. We then simulate real-world spoofing attacks in two cities to build our training and testing datasets. We use an LSTM to build a binary classifier, which is the core of our anti-GPS-spoofing system. Finally, we evaluate system performance with simulated driving tests. We show that our system achieves more than 98% detection accuracy when more than 50% of the points in a driving route are attacked. The system performs well against general spoofing-attack strategies and demonstrates the feasibility of using textual information for spoofing-attack detection. / Master of Science / People rely on GPS navigation services in their daily lives. However, GPS signals can easily be spoofed, and false GPS information can mislead victims to an unknown place. Existing methods can defend against GPS spoofing attacks, but all of them have significant shortcomings. Our goal is to design a novel system that is cheap, effective, and robust, and that detects general GPS spoofing attacks in real time. In this thesis, we propose a complete system design and evaluate its performance. Our system uses only textual information from the real physical world and virtual maps. To obtain more accurate textual information, we use state-of-the-art techniques for image processing and text recognition. We also use a neural network to aid detection. By testing with datasets from two cities, we confirm the promising performance of our system against general GPS spoofing attack strategies. We believe that textual information can be developed further in anti-GPS-spoofing systems.
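The abstract leaves the classifier's shape unspecified; the following minimal PyTorch sketch assumes each GPS point of a route is summarised as a small feature vector (for example, match scores between street text recognised in imagery and the map at the reported position), with the LSTM classifying the whole route as spoofed or genuine. The feature and hidden dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpoofDetector(nn.Module):
    """Binary classifier over a sequence of per-location feature vectors."""
    def __init__(self, feat_dim=8, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):           # x: (batch, seq_len, feat_dim)
        _, (h_n, _) = self.lstm(x)  # h_n: (1, batch, hidden_dim)
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)

# Toy usage: 4 routes, 20 GPS points each, 8 text-match features per point.
model = SpoofDetector()
routes = torch.randn(4, 20, 8)
p_spoofed = model(routes)           # probability each route is under attack
print(p_spoofed)
```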
202

Quantitative verification of real-time properties with application to medical devices

Diciolla, Marco January 2014 (has links)
Probabilistic model checking is a powerful technique used to ensure the correct functioning of systems that exhibit real-time and stochastic behaviours. Many such systems are embedded and used in safety-critical situations, implantable medical devices being a prime example. This thesis aims to develop a formal model-based framework that is tailored for the analysis and verification of cardiac pacemakers. The contributions are novel approaches for the automatic verification and validation of real-time properties over continuous-time models, which are applicable to software embedded in medical devices. First, we address the problem of model checking continuous-time Markov chain (CTMC) models against real-time specifications given in the form of temporal logic, namely, metric temporal logic (MTL) and linear duration properties (LDP), or as timed automata (TA). The main question that we address is “given a continuous-time Markov chain, what is the probability of the set of timed paths that satisfy the real-time property under consideration?”. We provide novel algorithms to approximate the probability through generating systems of linear inequalities over variables that represent the waiting times in system states, and then solving multidimensional integrals over this set. Second, we present a model-based framework to support the design and verification of pacemakers against real-time properties. The pacemaker is modelled as a network of timed automata, whereas the human heart is modelled either as a network of timed automata or as a network of hybrid automata. Our framework can be instantiated with personalised heart models whose parameters can be learnt from patient data, and we have done so to validate our approach. We introduce property patterns and the counting metric temporal logic (CMTL) in order to specify the properties of interest. We provide new verification algorithms for networks of timed or hybrid automata against property patterns and CMTL. Finally, we pose and solve the parameter synthesis problem, i.e., given a network of timed automata containing model parameters, an objective function and a CMTL formula, find the set of parameter valuations, if any exist, that satisfy the CMTL formula and maximise the objective function. The framework has been implemented using Simulink, Matlab and Python code. Extensive experiments on pacemaker models have been carried out and the results discussed in detail. The techniques developed in this thesis can assist in the design and verification of software embedded in medical devices.
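The thesis computes such probabilities exactly by integrating over the region that the linear inequalities on waiting times carve out; the toy sketch below conveys the idea by Monte Carlo instead, estimating a time-bounded reachability probability on an invented three-state chain where the timed paths satisfying the property are exactly those with t0 + t1 <= T.

```python
import random

# Toy CTMC: states 0 -> 1 -> 2 with exponential exit rates. Property: reach
# state 2 within T time units, i.e. the linear inequality t0 + t1 <= T over
# the waiting times t0, t1.
RATES = {0: 2.0, 1: 1.5}
T = 2.0

def sample_satisfies(T):
    t0 = random.expovariate(RATES[0])  # sojourn time in state 0
    t1 = random.expovariate(RATES[1])  # sojourn time in state 1
    return t0 + t1 <= T

n = 100_000
prob = sum(sample_satisfies(T) for _ in range(n)) / n
print(f"P(reach state 2 within {T}) ~= {prob:.4f}")
```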
203

Automated Architecture-Based Verification of Safety-Critical Systems

Jaradat, Omar Tawffeeq Saleem January 2011 (has links)
Safety-critical systems require high levels of quality and dependability, with system correctness and safety as major requirements for avoiding severe outcomes. Time and cost are additional challenges imposed on the development process. Describing the behavior of a system at a high level provides a realistic view of the system and an anticipation of how it will work. This presents a valuable opportunity to verify the system before committing the resources intended for its development. Architecture Description Languages (ADLs) provide the ability to represent the system-level details of components, interactions and configuration. The Architecture Analysis and Design Language (AADL), a member of the ADL family, has proved its effectiveness in designing software-intensive systems. In this report, we present a case study to validate “An Architecture-Based Verification Technique for AADL Specifications”. The technique combines model checking and model-based testing approaches adapted to an architectural perspective. The objectives of the verification process are (1) to ensure completeness and consistency of an AADL specification, and (2) to ensure conformance of an implementation with respect to its AADL specification. The technique had previously been applied only to small examples, and the goal of this thesis work is to validate it against a safety-critical system developed by a major vehicle manufacturer. Validation of the technique begins by investigating the system and specifying it in AADL. The defined verification criteria are subsequently applied to the AADL specification, which drives the verification process. The case study presents interesting results for the model checking step (the completeness and consistency checking). Conformance testing, on the other hand, could not be performed on the implemented system, but it is an interesting topic for future work.
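Neither the AADL specification nor the checking algorithms are reproduced in the abstract; the sketch below, with invented component and port names, shows the flavour of the two objectives: consistency (connections join existing ports of matching type) and completeness (every input port is driven).

```python
# Toy architecture model: components with typed ports, plus connections.
components = {
    "Sensor":     {"out": {"speed": "float"}},
    "Controller": {"in": {"speed": "float"}, "out": {"brake_cmd": "bool"}},
    "Actuator":   {"in": {"brake_cmd": "bool"}},
}
connections = [("Sensor.speed", "Controller.speed"),
               ("Controller.brake_cmd", "Actuator.brake_cmd")]

def port_type(ref, direction):
    comp, port = ref.split(".")
    return components[comp].get(direction, {}).get(port)

# Consistency: every connection links an out-port to an in-port of equal type.
for src, dst in connections:
    s, d = port_type(src, "out"), port_type(dst, "in")
    assert s is not None and s == d, f"inconsistent connection {src} -> {dst}"

# Completeness: every in-port is driven by some connection.
driven = {dst for _, dst in connections}
for comp, ports in components.items():
    for p in ports.get("in", {}):
        assert f"{comp}.{p}" in driven, f"undriven in-port {comp}.{p}"
print("specification passes both toy checks")
```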
204

Higher-order model checking with traversals

Neatherway, Robin Philip January 2014 (has links)
Higher-order recursion schemes are a powerful model of functional computation that grew out of traditional recursive program schemes and generalisations of grammars. It is common to view recursion schemes as generators of possibly-infinite trees, which Ong showed to have a decidable monadic second order theory and opened the door to applications in verification. Kobayashi later presented an intersection type characterisation of the model checking problem, on which most subsequent applied work is based. In recent work, recursion schemes have been considered to play a role similar to Boolean programs in verification of first-order imperative programs: a natural target for abstraction of programs with very large or infinite data domains. In this thesis we focus on the development of model checking algorithms for variants of recursion schemes. We start our contributions with a model checking algorithm inspired by the fully abstract game semantics of recursion schemes, but specified as a goal-directed approach to intersection type inference, that offers a unification of the views of Ong and Kobayashi. We build on this largely theoretical contribution with two orthogonal extensions and practical implementations. First, we develop a new extension of recursion schemes: higher-order recursion schemes with cases, which add non-determinism and a case construct operating over a finite data domain. These additions provide us with a more natural and succinct target for abstraction from functional programs: encoding data using functions inevitably results in an increase in the order and arity of the scheme, which have a direct impact on the worst-case complexity of the problem. We characterise the model checking problem using a novel intersection and union type system and give a practical algorithm for type inference in this system. We have carried out an empirical evaluation of the implementation, the tool TravMC, using a variety of problem instances from the literature and a new suite of problem instances derived via an abstraction-refinement procedure from functional programs. Second, we extend our approach from safety properties to all properties expressible in monadic second order logic using alternating parity tree automata as our specification language. We again provide an implementation and an empirical evaluation, which shows that despite the challenges accompanying liveness properties our tool scales beyond the current state of the art.
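As a toy illustration (not the model checking algorithm itself, which works via intersection types rather than explicit unfolding), the sketch below unfolds an invented order-1 recursion scheme into a finite prefix of the possibly-infinite tree it generates:

```python
# Toy recursion scheme (order 1):  S = F c,  F x = br x (F (a x)).
# Unfold the generated (infinite) tree to a finite depth.
def F(x, depth):
    if depth == 0:
        return "..."                    # truncate the infinite unfolding
    return ("br", x, F(("a", x), depth - 1))

tree = F("c", 3)
print(tree)
# ('br', 'c', ('br', ('a', 'c'), ('br', ('a', ('a', 'c')), '...')))
```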
205

Game semantics based equivalence checking of higher-order programs

Hopkins, David G. B. January 2012 (has links)
This thesis examines the use of game semantics for the automatic equivalence checking of higher-order programs. Game semantics has proved to be a powerful method for constructing fully abstract models of logics and programming languages. Furthermore, the concrete nature of the semantics lends itself to algorithmic analysis. The game-semantic model can be used to identify fragments of languages which have a decidable observational equivalence problem. We investigate decidability results for different languages as well as the efficiency of these algorithms in practice. First we consider the call-by-value higher-order language with state, RML. This can be viewed as a canonical restriction of Standard ML to ground-type references. The O-strict fragment of RML is the largest set of type sequents for which, in the game-semantic denotation, justification pointers from O-moves are always uniquely reconstructible from the underlying move sequence. The O-strict fragment is surprisingly expressive, including higher-order types and difficult examples from the literature. By representing strategies as Visibly Pushdown Automata (VPA) we show that observational equivalence of O-strict terms is decidable (and in fact is ExpTime-complete). We then consider extensions of the O-strict fragment. Adding general recursion or using most non-O-strict types leads to undecidability. However, a limited form of recursion can be added while still preserving decidability (although the full power of DPDA is required). Next we examine languages with non-local control. This involves adding call/cc to our language and is known to correspond to dropping the game-semantic bracketing condition. In the call-by-name game-semantic model of Idealized Algol (IA), in which answers cannot justify questions, the visibility condition still implies a form of weak bracketing. By making bracketing violations explicit we show that we can still model the entire third-order fragment using VPA. We have also implemented tools based on these algorithms. Our model checkers Homer and Hector perform equivalence checking for third-order IA and O-strict RML respectively. Homer uses a naive explicit state method whereas Hector takes advantage of on-the-fly model checking. Our tools perform well on small yet challenging examples. On negative instances, the on-the-fly approach allows Hector to outperform Homer. To improve their performance, we also consider using ideas from symbolic execution. We propose a representation for finite automata using transitions labelled with formulas and guards which aims to take advantage of the symmetries of the game-semantic model so that strategies can be represented compactly. We refer to this representation as Symbolically Executed Automata (SEA). Using SEA allows much larger data types to be handled but is not as effective on larger examples with small data types.
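As a flavour of why visibly pushdown automata give decidability here: each input symbol is classified in advance as a push (call), pop (return), or internal action, so the stack behaviour is determined by the word itself. A toy acceptor, with an invented alphabet standing in for question and answer moves:

```python
# Minimal visibly pushdown acceptor: the alphabet is partitioned into calls
# (push) and returns (pop); internal symbols would leave the stack alone.
CALLS, RETURNS = {"q"}, {"a"}   # e.g. a question pushes, an answer pops

def vpa_accepts(word):
    stack = []
    for sym in word:
        if sym in CALLS:
            stack.append(sym)
        elif sym in RETURNS:
            if not stack:
                return False    # unmatched return
            stack.pop()
    return not stack            # accept well-matched plays only

print(vpa_accepts("qqaa"))  # True
print(vpa_accepts("qaa"))   # False
```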
206

What is the value of a Health Verified Program?

Schumacher, Kash Tucker January 1900 (has links)
Master of Agribusiness / Department of Agricultural Economics / Ted C. Schroeder / The beef cattle industry is one of the last industries in production agriculture that is not heavily integrated; therefore, each segment of the industry is constantly looking for opportunities to increase the value of its cattle. In recent years, one such opportunity available to cow-calf producers has been the verification of certain production practices (i.e., Age and Source, Natural, and Non-Hormone Treated). The value flows from the consumer to the cow-calf producer: packers need verified cattle to fill export contracts and are therefore willing to pay a premium for them. The objective of this thesis was to determine the value of a Health Verified Program (HPV) to feedlot operators. HPV is not required to export beef, unlike the other verified programs, but it does verify the procedures that a group of calves received from the previous owner. Since the feedlot is a deciding factor in the value of HPV, feedlot managers from across the United States were asked not only what value they place on HPV but also other questions that could benefit others involved in the beef cattle industry. Regression models were used along with a correlation analysis to determine value. There is value to a health-verified program, along with other procedures available to cow-calf producers. Individual producers need to determine which verifications and procedures are economical and efficient for their individual operations with all factors considered.
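The abstract names regression models without specifying them; a hypothetical sketch of the premium-estimation idea, with all numbers fabricated for illustration, is:

```python
import numpy as np

# Hypothetical data: sale price ($/cwt) regressed on an HPV indicator and
# lot weight, mimicking the premium-estimation models described above.
price  = np.array([142.0, 147.5, 139.0, 151.0, 144.5, 149.0])
hpv    = np.array([0, 1, 0, 1, 0, 1])        # 1 = health-verified lot
weight = np.array([550, 560, 545, 575, 552, 570])

# Ordinary least squares: price = b0 + b1*hpv + b2*weight + error.
X = np.column_stack([np.ones_like(hpv), hpv, weight])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
print(f"estimated HPV premium: {beta[1]:.2f} $/cwt")
```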
207

Ferramenta CAD para extração de modelo de cobertura de saída por itens em verificação funcional. / CAD tool for output coverage model extraction in functional verification.

Muñoz Quispe, Joel Iván 25 October 2011 (has links)
In current integrated system development environments, the requirements imposed on systems have grown steadily with the number of functionalities incorporated, producing highly complex designs and, consequently, more iterations in the design flow. One solution has been the use of IP cores to speed up development. However, to guarantee a high degree of reliability for these components, the verification process must show that all properties of the circuit are satisfied. A mainstream technique used in industry for this is functional verification by simulation, which aims to explore, through the injection of test vectors, the largest possible portion of the circuit's state space. The larger the number of possible states, the larger the number of test vectors that must be injected; the number of test vectors must therefore be kept as low as possible. Because of this, metrics for determining the completeness of the verification process, defined as coverage models, are needed. Coverage metrics are established according to the observation strategy for the design under verification (DUV); the black-box approach, which stimulates the inputs and observes the output events of the DUV, is very common in industry. In this case, to determine whether the system meets its specification, the verification engineer must define the output events considered relevant and the metrics determining how many times they must be observed. This type of modeling is known as item coverage. The set of items and the events to be observed can be defined from the expert knowledge of the verification engineers or, to simplify this task, a uniform distribution is adopted. Since these forms of modeling do not abstract all the properties of the circuit, the distribution profile of the chosen events (parameters) is generally uncorrelated with the real profile observed while the testbenches run, which increases simulation time. To address this problem, this work develops a methodology for obtaining an output coverage model whose distribution profile resembles the real one, thereby assisting the verification engineer in selecting the output points or intervals of interest, in addition to the decisions derived from expert knowledge. The methodology finds the equation(s) that define the output(s) of the circuit under verification and, from these, the probability distribution per observable event. At the core of the methodology is the PrOCov (Probabilistic Output Coverage) tool, designed with these goals. The methodology and the tool were tested on several example circuits: high-level models of a FIR filter, an FFT processor, and an Elliptic filter, all described in SystemC. In all three cases, PrOCov found the respective output profiles satisfactorily. These were compared with the profiles obtained by simulation, showing that excellent precision can be achieved; only small variations were found, due to approximation errors. Variations in precision and simulation time as functions of the resolution of the output parameters (events) were also analyzed in this dissertation.
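As a rough illustration of the output-coverage idea (PrOCov itself derives the distribution analytically from the output equations), the sketch below estimates the output profile of a toy FIR filter by simulation and converts it into expected hit ratios per coverage bin; the taps, input profile, and bin count are assumptions.

```python
import numpy as np

# Estimate the output value distribution of a small FIR filter under a
# uniform input profile, then bin it into coverage items.
taps = np.array([0.25, 0.5, 0.25])
x = np.random.uniform(-1.0, 1.0, size=100_000)
y = np.convolve(x, taps, mode="valid")

hist, edges = np.histogram(y, bins=8, range=(-1, 1), density=True)
width = edges[1] - edges[0]
for lo, hi, p in zip(edges, edges[1:], hist * width):
    print(f"output in [{lo:+.2f}, {hi:+.2f}): expected hit ratio {p:.3f}")
```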
208

Formal Methods For Verification Based Software Inspection

Powell, Daniel, n/a January 2003 (has links)
Useful processes that are independently repeatable are utilised in all branches of science and traditional engineering disciplines, but seldom in software engineering. This is particularly so with processes used for detection and correction of defects in software systems. Code inspection, as introduced by Michael Fagan at IBM in the mid-1970s, is widely recognised as an effective technique for finding defects in software. Despite its reputation, code inspection, as it is currently practised, is not a strictly repeatable process. This is due to the problems faced by inspectors when they attempt to paraphrase the complicated semantics of a unit of computer code. Verification-based software inspection, as advocated by the cleanroom software engineering community, requires that arguments of correctness be formulated with the code and its specification. These arguments rely on the reader being able to extract the semantics from the code. This thesis addresses the requirement for an independently repeatable, scalable and substantially automated method for yielding semantics from computer code in a complete, unambiguous and consistent manner, in order to facilitate, and make repeatable, verification-based code inspection. Current literature regarding the use of code inspection for verification of software is surveyed. Empirical studies are referenced, comparing inspection to software testing and program proof. Current uses of formal methods in software engineering are discussed, with particular reference to formal method applications in verification. Forming the basis of the presented method is a systematic, and hence repeatable, approach to the derivation of program semantics. The theories and techniques proposed for deriving semantics from program code extend current algorithmic and heuristic techniques for deriving invariants. Additionally, the techniques introduced yield weaker forms of invariant information, which are also useful for verification, defect detection and correction. Methods for using these weaker invariant forms, and tools to support these methods, are introduced. Algorithmic and heuristic techniques for investigating loop progress and termination are also presented. Some of these techniques have been automated in supporting tools, and hence the corresponding defects can be repeatably identified. Throughout this thesis a strong emphasis is placed on describing implementable algorithms to realise the derivation techniques discussed. A number of these algorithms are implemented in a tool to support the application of the verification methods presented. The techniques and tools presented in this thesis are well suited to, but not limited to, supporting rigorous methods of defect detection as well as formal and semi-formal reasoning about correctness. The automation of these techniques in tools to support practical, formal code reading and correctness argumentation will assist in addressing the needs of trusted component technologies and the general requirement for quality in software.
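The thesis derives invariants algorithmically and heuristically from the code itself; as a deliberately simpler, dynamic stand-in that illustrates what candidate invariants and their falsification look like, the sketch below evaluates candidate loop invariants at every loop head and discards any that fail (the program and candidates are invented):

```python
# Check candidate loop invariants dynamically: evaluate each candidate at
# every loop head and discard those ever falsified (Daikon-style filtering,
# not the static derivation technique of the thesis).
candidates = {
    "s == i * (i - 1) // 2": lambda s, i, n: s == i * (i - 1) // 2,
    "i <= n + 1":            lambda s, i, n: i <= n + 1,
    "s < n":                 lambda s, i, n: s < n,    # will be falsified
}

def summation(n):
    s, i = 0, 1
    while i <= n:
        for name, pred in list(candidates.items()):
            if not pred(s, i, n):
                del candidates[name]   # falsified: not an invariant
        s, i = s + i, i + 1
    return s

summation(10)
print("surviving candidate invariants:", list(candidates))
```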
209

Architecture-Based Verification of Software-Intensive Systems

Johnsen, Andreas January 2010 (has links)
Development of software-intensive systems such as embedded systems for telecommunications, avionics and automotive applications occurs under severe quality, schedule and budget constraints. As the size and complexity of software-intensive systems increase dramatically, the problems originating from the design and specification of the system architecture become increasingly significant. Architecture-based development approaches promise to improve the efficiency of software-intensive system development processes by reducing costs and time while increasing quality. This promise is partially explained by the fact that the system architecture abstracts away unnecessary details, so that developers can concentrate both on the system as a whole and on its individual pieces, whether the components, the components' interfaces, or the connections among components. The use of architecture description languages (ADLs) provides an important basis for verification, since an architecture description specifies how the system should behave, at a high level and in a form from which automated tests can be generated. Analysis and testing based on architecture specifications allow detection of problems and faults early in the development process, even before the implementation phase, thereby significantly reducing cost and time. Furthermore, tests derived from the architecture specification can later be applied to the implementation to check its conformance with respect to the specification. This thesis extends the knowledge base in the area of architecture-based verification. In this thesis report, an airplane control system is specified using the Architecture Analysis and Design Language (AADL). This specification serves as the starting point of a system development process in which the architecture-based verification algorithms developed are applied.
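One concrete way architecture specifications drive verification, sketched with an invented flow graph: each source-to-sink flow through the architecture yields an abstract conformance test to run against the implementation.

```python
# Derive conformance-test sequences from an architectural flow graph:
# every path from a source to a sink component becomes one abstract test.
flows = {"Sensor": ["Controller"],
         "Controller": ["Actuator", "Logger"],
         "Actuator": [], "Logger": []}

def paths(node, prefix=()):
    prefix += (node,)
    if not flows[node]:
        yield prefix
    for nxt in flows[node]:
        yield from paths(nxt, prefix)

for tid, p in enumerate(paths("Sensor"), 1):
    print(f"test {tid}: stimulate {p[0]}, observe {' -> '.join(p[1:])}")
```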
210

Algorithmic Verification Techniques for Mobile Code

Aktug, Irem January 2008 (has links)
Modern computing platforms strive to support mobile code without putting system security at stake. These platforms can be viewed as open systems, as the mobile code adds new components to the running system. Establishing that such platforms function correctly can be divided into two steps. First, it is shown that the system functions correctly regardless of the mobile components that join it, provided that they satisfy certain assumptions. These assumptions can, for instance, restrict the behavior of the component to ensure that the security policy of the platform is not violated. Second, the mobile component is checked to satisfy its assumptions before it is allowed to join the system. This thesis presents algorithmic verification techniques to support this methodology. In the first two parts, we present techniques for the verification of open systems relative to the given component assumptions. In the third part, a technique for the quick certification of mobile code is presented for the case where a particular type of program rewriting is used as a means of enforcing the component assumptions. In the first part of this study, we present a framework for the verification of open systems based on explicit state space representation. We propose Extended Modal Transition Systems (EMTS) as a suitable structure for representing the state space of open systems when assumptions on components are written in the modal μ-calculus. EMTSs are based on the Modal Transition Systems (MTS) of Larsen and provide a formalism for graphical specification, facilitating a thorough understanding of the system through visualization. In interactive verification, this state space representation enables proof reuse and aids the user in guiding the verification process. We present a construction of state space representations from process-algebraic open system descriptions based on a maximal model construction for the modal μ-calculus. The construction is sound and complete for systems with a single unknown component and sound for those without dynamic process creation. We also suggest a tableau-based proof system for establishing temporal properties of open systems represented as EMTSs. The proof system is sound in general and complete for prime formulae. The problem of open system correctness also arises in compositional verification, where the problem of showing a global property of a system is reduced to showing local properties of components. In the second part, we extend an existing compositional verification framework for Java bytecode programs. The framework employs control flow graphs with procedures to model component implementations and open systems for the purpose of checking control-flow properties. We generalize these models to capture exceptional and multi-threaded behavior. The resulting control flow graphs are specifically tailored to support the compositional verification principle; however, they are sufficiently intuitive and standard to be useful on their own. We describe how the models can be extracted from program code and give preliminary experimental results for our implementation of the extraction of control flow graphs with exceptions. We also discuss further tool support and practical applications of the method. In the third part of the thesis, we develop a technique for the certification of safe mobile code by adapting the proof-carrying code scheme of Necula to the case of security policies expressed as security automata. In particular, we describe how proofs of policy compliance can be automatically generated for programs that include a monitor for the desired policy. A monitor is an entity that observes the execution of a program and terminates the program if a violation of the property is about to occur. One way to implement such a monitor is by rewriting the program to make it self-monitoring. Given a property, we characterize self-monitoring of Java bytecode programs for this property by an annotation scheme with annotations in the style of Floyd-Hoare logics. The annotations generated by this scheme can be extended in a straightforward way to form a correctness proof in the sense of axiomatic semantics of programs. The proof generated in this manner essentially establishes that the program satisfies the property because it contains a monitor for it. The annotations that comprise the proofs are simple and efficiently checkable, thus facilitating certification of mobile code on devices with restricted computing power such as mobile phones. / QC 20100628
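A security automaton and its execution monitor can be sketched directly; here the absence of a transition marks a violation, and the monitor terminates the program just before the violation would occur. The policy, states, and actions are invented for illustration, and inlining this logic into the program is what makes it self-monitoring:

```python
# Security automaton for the policy "no send after reading a secret".
TRANSITIONS = {
    ("clean",   "read_secret"): "tainted",
    ("clean",   "send"):        "clean",
    ("tainted", "read_secret"): "tainted",
    # ("tainted", "send") is deliberately absent: sending after reading
    # a secret violates the policy.
}

class Monitor:
    def __init__(self):
        self.state = "clean"

    def step(self, action):
        key = (self.state, action)
        if key not in TRANSITIONS:
            raise SystemExit(f"policy violation: '{action}' in state {self.state}")
        self.state = TRANSITIONS[key]

# A self-monitoring program interleaves monitor steps with its own actions.
m = Monitor()
for action in ["send", "read_secret", "send"]:
    m.step(action)          # terminates the program on the third action
    print(f"{action}: ok")
```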
