271

Synthesis of correct-by-design schedulers for hybrid systems / Synthèse d'ordonnanceurs corrects par conception pour les systèmes hybrides

Soulat, Romain 18 February 2014 (has links)
Dans cette thèse, nous nous intéressons au calcul d'ordonnanceurs pour les systèmes hybrides. En fait, nous considérons deux sous-classes des systèmes hybrides, les systèmes temps-réels où des tâches doivent se partager l'accès à une ressource commune, et les systèmes à commutations où un choix doit être fait sur les dynamiques à choisir en fonction d'objectifs à atteindre. Dans la première partie de cette thèse, nous nous intéressons aux problèmes d'ordonnancement et prenons comme étude de cas l'ordonnancement de tâches périodiques sur des architectures multiprocesseurs. Nous nous intéressons plus particulièrement à déterminer si l'on peut modifier certaines valeurs des paramètres du système tout en respectant les contraintes temporelles sans changer d'ordonnanceur. La méthode inverse permet de prouver de manière formelle la robustesse des systèmes temporisés paramétriques. Nous introduisons une méthode de réduction du nombre d'états nécessaire à la vérification. Cette réduction nous permet de traiter des études de cas intéressantes telles que celle proposée par Astrium EADS pour le lanceur Ariane 6. Nous montrons également comment la Cartographie Comportementale, une extension de la méthode inverse, permet de trouver la zone de l'espace des paramètres où l'on a l'existence d'un ordonnancement satisfaisant les contraintes temporelles. Nous comparons cette approche avec une méthode analytique pour montrer l'intérêt de notre approche. Dans la seconde partie de cette thèse, nous nous intéressons au contrôle de systèmes affines à commutation. Ces systèmes sont gouvernés par une famille d'équations différentielles linéaires et le contrôleur peut choisir laquelle va gouverner le système pendant le prochain pas de temps. Dans ce cadre, le contrôle peut être vu comme l'ordonnancement des dynamiques que le système va prendre. Le choix de la dynamique peut se faire pour des objectifs de stabilité ou d'accessibilité. Nous proposons une nouvelle méthode qui calcule un contrôleur dont la stratégie est la même pour des ensembles denses de points. Notre méthode utilise le calcul en avant, souvent préférable au calcul à rebours pour les systèmes contractants. Nous montrons que, sous certaines conditions, le système contrôlé évolue vers un comportement limite. Nous appliquons notre méthode sur plusieurs études de cas issues de la littérature ainsi qu'un exemple réel, un prototype de convertisseur de tension multiniveaux. Enfin, nous montrons que notre méthode s'étend aux systèmes comportant des perturbations ainsi qu'aux systèmes non linéaires. / In this thesis, we are interested in designing schedulers for hybrid systems. We consider two specific subclasses of hybrid systems: real-time systems, where tasks compete for access to common resources, and sampled switched systems, where a choice has to be made on the dynamics of the system in order to reach given goals. Scheduling consists in defining the order in which the tasks will be run on the processors in order to complete all the tasks before a given deadline. In the first part of this thesis, we are interested in the scheduling of periodic tasks on multiprocessor architectures. We are especially interested in the robustness of schedulers, i.e., in proving that some values of the system parameters can be modified, and up to what values, while preserving the scheduling order and meeting the deadlines. The Inverse Method can be used to prove the robustness of parametric timed systems. In this thesis, we introduce a state space reduction technique which allows us to treat challenging case studies such as one provided by Astrium EADS for the Ariane 6 launcher. We also present how an extension of the Inverse Method, the Behavioral Cartography, can solve the schedulability problem, i.e., find the area of the parameter space in which there exists a scheduler that satisfies all the deadlines. We compare this approach to an analytic method to illustrate its interest. In the second part of this thesis, we are interested in the control of affine switched systems. These systems are governed by a finite family of affine differential equations. At each time step, a controller can choose which dynamics will govern the system for the next time step. Control in this sense can be seen as scheduling the order of the dynamics the system will use. The objective for the controller can be to make the system stay in a given area of the state space (stability) or to reach a given region of the state space (reachability). In this thesis, we propose a novel approach that computes a scheduler whose strategy is uniform over dense subsets of the state space. Moreover, our approach only uses forward computation, which is better suited than backward computation for contractive systems. We show that, under our designed controllers, systems evolve to a limit cyclic behavior. We apply our method to several case studies from the literature and to a real-life prototype of a multilevel voltage converter. Moreover, we show that our approach can be extended to systems with perturbations and non-linear dynamics.
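
To make the sampled switched setting concrete, here is a minimal sketch of a state-dependent switching rule chosen by one-step forward simulation. The modes, sampling period, target box, and Euler discretization are assumptions invented for this illustration; it is not the thesis's synthesis procedure, which computes a uniform strategy over dense sets of states.

```python
# Illustrative sketch only: a sampled switched system x' = A_i x + b_i where,
# at each sampling step, a controller picks the mode whose one-step forward
# simulation keeps the state closest to a target box. All dynamics, the target
# box, and the Euler discretization are assumptions for this example.
import numpy as np

TAU = 0.05  # sampling period (assumed)

# Two assumed affine modes: x' = A_i x + b_i
MODES = [
    (np.array([[0.0, 1.0], [-1.0, -0.5]]), np.array([0.0, 1.0])),
    (np.array([[0.0, 1.0], [-1.0, -0.5]]), np.array([0.0, -1.0])),
]

TARGET_LO = np.array([-0.2, -0.2])  # assumed target box
TARGET_HI = np.array([0.2, 0.2])

def step(x, mode):
    """Advance one sampling period under a mode's dynamics (explicit Euler)."""
    A, b = mode
    return x + TAU * (A @ x + b)

def distance_to_box(x):
    """Euclidean distance from x to the target box (0 if x is inside)."""
    return np.linalg.norm(np.maximum(0.0, np.maximum(TARGET_LO - x, x - TARGET_HI)))

def choose_mode(x):
    """Forward-simulate every mode one step and pick the best outcome."""
    return min(range(len(MODES)), key=lambda i: distance_to_box(step(x, MODES[i])))

x = np.array([1.0, 0.0])
for _ in range(200):
    x = step(x, MODES[choose_mode(x)])
print("final state:", x)
```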
272

Ambiente integrado para verificação e teste da coordenação de componentes tolerantes a falhas / An integrated environment for verification and test of fault-tolerant components coordination

Simone Hanazumi 01 September 2010 (has links)
Hoje, diante das contínuas mudanças e do mercado competitivo, as empresas e organizações têm sempre a necessidade de adaptar suas práticas de negócios para atender às diferentes exigências de seus clientes e manter-se em vantagem com relação às suas concorrentes. Para ajudá-las a atingir esta meta, uma proposta promissora é o Desenvolvimento Baseado em Componentes (DBC), cuja ideia básica é a de que um novo software possa ser construído rapidamente a partir de componentes pré-existentes. Entretanto, a montagem de sistemas corporativos mais confiáveis e tolerantes a falhas a partir da integração de componentes tem-se mostrado uma tarefa relativamente complexa. E a necessidade de garantir que tal integração não falhe tornou-se algo imprescindível, sobretudo porque as consequências de uma falha podem ser extremamente graves. Para que haja uma certa garantia de que o software seja tolerante a falhas, devem ser realizadas atividades de testes e verificação formal de programas. Isto porque ambas, em conjunto, procuram garantir ao desenvolvedor que o sistema resultante da integração é, de fato, confiável. Mas a viabilidade prática de execução destas atividades depende de ferramentas que auxiliem sua realização, uma vez que a execução de ambas constitui um alto custo para o desenvolvimento do software. Tendo em vista esta necessidade de facilitar a realização de testes e verificação nos sistemas baseados em componentes (DBC), este trabalho de Mestrado se propõe a desenvolver um ambiente integrado para a verificação e teste de protocolos para a coordenação do comportamento excepcional de componentes. / Nowadays, because of continuous change and a competitive market, companies and organizations need to adapt their business practices in order to satisfy the different requirements of their customers and thus keep an advantage over their competitors. To help them reach this goal, a promising approach is Component-Based Development (CBD), whose basic idea is that new software can be built quickly from preexisting components. However, assembling more reliable and fault-tolerant corporate systems from component integration is a relatively complex task, and the need to assure that such integration does not fail has become essential, especially because the consequences of a failure can be extremely serious. To obtain some guarantee that the software is fault-tolerant, testing activities and formal verification of programs should be carried out, because both, together, help assure the developer that the system resulting from the integration is, in fact, reliable. But the practical feasibility of these activities depends on supporting tools, since both impose a high cost on software development. Given this need to make testing and verification easier in component-based systems, this Master's work proposes the development of an integrated environment for the verification and test of protocols for the coordination of components' exceptional behaviour.
273

Planning proofs of correctness of CCS systems

Monroy-Borja, Raul January 1997 (has links)
The specification and verification of communicating systems has attracted increasing interest in the last decades. CCS, a Calculus of Communicating Systems [Milner 89a], was especially designed to help this enterprise; it is widely used in both industry and academia. Most efforts to automate the use of CCS for verification have centred around the explicit construction of a bisimulation [Park 81]. This approach, however, presents severe limitations when dealing with systems that contain infinitely many states (e.g. systems with evolving structure [Milner 89a]) or that comprise a finite but arbitrary number of components (e.g. systems with inductive structure [Milner 89a]). There is an alternative approach to verification, based on equational reasoning, which does not exhibit such limitations. This formulation, however, introduces significant proof search control issues, and, hence, has remained far less explored. This thesis investigates the use of explicit proof plans [Bundy 88] for problems of automatic verification in the context of CCS. We have conducted the verification task using equational reasoning, and centred on infinite state systems and parameterised systems. A parameterised system, e.g. a system with inductive structure, circumscribes a family of CCS systems which have fixed structure and finitely many states. To reason about these systems, we have adopted Robin Milner's approach [Milner 89a], which advocates the use of induction to exploit the structure and/or the behaviour of a system during its verification. To automate this reasoning, we have used the proof plans for induction [Bundy 88] built within CLAM [Bundy et al 90b], and extended them with special CCS proof plans. We have implemented a verification planner by adding these special proof plans to CLAM. The system handles the search control problems prompted by CCS verification satisfactorily, though it is not complete. Moreover, the system is capable of dealing with the verification of finite state systems, infinite state systems, and parameterised systems, hence providing a uniform method to analyse CCS systems regardless of their state space. Our results are encouraging: the verification planner has been successfully tested on a number of examples drawn from the literature. We have planned proofs of conjectures that are outside the domain of existing verification methods. Furthermore, the verification planning is fully automated. Because of this, even though the verification planner still has plenty of room for improvement, we can state that proof planning can handle the equational verification of CCS systems, and we therefore advocate its use within this interesting field.
274

Evaluating the Usability of Two-Factor Authentication

Reese, Kendall Ray 01 June 2018 (has links)
Passwords are the dominant form of authentication on the web today. However, many users choose weak passwords and reuse the same password on multiple sites, thus increasing their vulnerability to having their credentials leaked or stolen. Two-factor authentication strengthens existing password authentication schemes against impersonation attacks and makes it more difficult for attackers to reuse stolen credentials on other websites. Despite the added security benefits of two-factor authentication, there are still many open questions about its usability. Many two-factor authentication systems in widespread use today have not yet been subjected to adequate usability testing. Previous comparative studies have demonstrated significant differences in usability between various single-factor authentication systems. The main contributions of this work are as follows. First, we developed a novel user behavior model that describes four phases of interaction between a user and an authentication system. This model is designed to inform the design of future usability studies and will enable researchers and those implementing authentication systems to have a more nuanced understanding of authentication system usability. Second, we conducted a comparative usability study of some of the most common two-factor authentication systems. In contrast to previous authentication usability studies, we had participants use the systems for a period of two weeks and collected both timing data and SUS metrics on the systems under test. From these studies, we draw several conclusions about the state of usability and acceptance of two-factor authentication, finding that many users want more security for their sensitive online accounts and are open to using multiple forms of two-factor authentication. We also suggest that security researchers draw upon risk communication theory to better help users make informed security decisions.
275

Verification of Task Parallel Programs Using Predictive Analysis

Nakade, Radha Vi 01 October 2016 (has links)
Task parallel programming languages provide a way to create asynchronous tasks that can run concurrently. The advantage of using task parallelism is that the programmer can write code that is independent of the underlying hardware. The runtime determines the number of processor cores that are available and the most efficient way to execute the tasks. When two or more concurrently executing tasks access a shared memory location and at least one of the accesses is a write, a data race occurs in the program. Data races can introduce non-determinism in the program output, making it important to have data race detection tools. To detect data races in task parallel programs, a new sound and complete technique based on computation graphs is presented in this work. The data race detection algorithm runs in O(N^2) time, where N is the number of nodes in the graph. A computation graph is a directed acyclic graph that represents the execution of the program. For detecting data races, the computation graph stores the shared heap locations accessed by the tasks. An algorithm for creating computation graphs augmented with the memory locations accessed by the tasks is also described here. This algorithm runs in O(N) time, where N is the number of operations performed in the tasks. This work also presents an implementation of this technique for the Java implementation of the Habanero programming model. The results of this data race detector are compared to Java Pathfinder's precise race detector extension and its permission-regions-based race detector extension. The results show a significant reduction in the time required for data race detection using this technique.
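
The core check behind computation-graph-based race detection can be sketched as follows: two nodes of the DAG race if they touch a common location, at least one access is a write, and neither node reaches the other. This is a hedged illustration of that idea, not the thesis's implementation; for simplicity it recomputes reachability for every pair, so it is slower than the O(N^2) algorithm described above.

```python
# Sketch of the general idea only: nodes of a computation graph carry read and
# write sets; a pair races if they conflict on a location and neither node can
# reach the other (i.e., they may run concurrently).
from itertools import combinations

class Node:
    def __init__(self, name, reads=(), writes=()):
        self.name = name
        self.reads = set(reads)
        self.writes = set(writes)
        self.succs = []

def reaches(a, b):
    """Depth-first search: is there a directed path from a to b?"""
    stack, seen = [a], set()
    while stack:
        n = stack.pop()
        if n is b:
            return True
        if id(n) in seen:
            continue
        seen.add(id(n))
        stack.extend(n.succs)
    return False

def races(nodes):
    """Check every pair of nodes for a conflicting, unordered access."""
    found = []
    for a, b in combinations(nodes, 2):
        conflict = (a.writes & (b.reads | b.writes)) | (b.writes & a.reads)
        if conflict and not reaches(a, b) and not reaches(b, a):
            found.append((a.name, b.name, conflict))
    return found

# Example: a fork whose two children both write x -> reported as a race on x.
root = Node("root")
t1 = Node("t1", writes=["x"])
t2 = Node("t2", writes=["x"])
join = Node("join", reads=["x"])
root.succs = [t1, t2]
t1.succs = [join]
t2.succs = [join]
print(races([root, t1, t2, join]))  # [('t1', 't2', {'x'})]
```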
276

Improvement in Computational Fluid Dynamics Through Boundary Verification and Preconditioning

Folkner, David 01 May 2013 (has links)
This thesis provides improvements to computational fluid dynamics accuracy and efficiency through two main methods: a new boundary condition verification procedure and preconditioning techniques. First, a new verification approach that addresses boundary conditions was developed. In order to apply the verification approach to a large range of arbitrary boundary conditions, it was necessary to develop a unifying mathematical formulation. A framework was developed that allows for the application of Dirichlet, Neumann, and extrapolation boundary conditions, or in some cases the equations of motion directly. Verification of the boundary condition techniques was performed using exact solutions from canonical fluid dynamics test cases. Second, to reduce computation time and improve accuracy, preconditioning algorithms were applied via artificial dissipation schemes. A new convective upwind and split pressure (CUSP) scheme was devised and was shown to be more effective than traditional preconditioning schemes in certain scenarios. The new scheme was compared with traditional schemes for unsteady flows in which both convective and acoustic effects dominate. Both the boundary conditions and the preconditioning algorithms were implemented in the context of a "strand grid" solver. While not the focus of this thesis, strand grids provide automatic viscous-quality meshing and are suitable for moving-mesh overset problems.
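
As a rough illustration of the three boundary condition types mentioned above, the sketch below imposes Dirichlet, Neumann, and extrapolation conditions with a single ghost cell on a 1-D finite-volume grid. It is a generic textbook construction under assumed data, not the unified formulation developed in the thesis.

```python
# Generic ghost-cell boundary conditions on a 1-D finite-volume grid
# (illustration only; grid spacing and data are assumed).
import numpy as np

def apply_bc(u, dx, kind, value=0.0):
    """Return u with one ghost cell added at the left boundary.

    kind = "dirichlet":     face value fixed at `value`, ghost = 2*value - u[0]
    kind = "neumann":       face gradient fixed at `value`, ghost = u[0] - dx*value
    kind = "extrapolation": zeroth-order extrapolation, ghost = u[0]
    """
    if kind == "dirichlet":
        ghost = 2.0 * value - u[0]
    elif kind == "neumann":
        ghost = u[0] - dx * value
    elif kind == "extrapolation":
        ghost = u[0]
    else:
        raise ValueError(kind)
    return np.concatenate(([ghost], u))

u = np.linspace(1.0, 2.0, 5)  # assumed interior cell averages
print(apply_bc(u, dx=0.1, kind="dirichlet", value=0.0))
```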
277

VEHICLE INFORMATION SYSTEM USING BLOCKCHAIN

Zulkanthiwar, Amey 01 June 2019 (has links)
The main purpose of this project is to create a transparent and reliable vehicle information system, built on a blockchain, that helps consumers when buying a vehicle. The blockchain creates a time-sequenced chain of events for each vehicle, starting from the original sale, and includes insurance, vehicle repair, and vehicle resale records. The project is divided into three parts. Part one is used by the administrator, who creates the blockchain and grants different organizations the authority to add to it. Part two is used by those organizations to create blocks in the blockchain. Part three is used by customers who want to get information about a vehicle.
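
A minimal sketch of the underlying idea, a per-vehicle hash-chained sequence of events, is shown below. The field names and the chain structure are assumptions made for illustration; this is not the project's actual implementation or its permission model.

```python
# Illustration only: each block's hash covers the previous block's hash, so a
# vehicle's recorded history (sale, insurance, repair, resale) is tamper-evident.
import hashlib, json, time

def make_block(prev_hash, vin, event, recorded_by):
    """Create one block whose hash links it to the previous block."""
    body = {
        "prev_hash": prev_hash,
        "vin": vin,                  # vehicle identification number
        "event": event,              # e.g. "original sale", "repair", "resale"
        "recorded_by": recorded_by,  # authorized organization (assumed field)
        "timestamp": time.time(),
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(chain):
    """Check that every block links to the previous block's hash."""
    return all(chain[i]["prev_hash"] == chain[i - 1]["hash"] for i in range(1, len(chain)))

genesis = make_block("0" * 64, "1HGBH41JXMN109186", "original sale", "dealer")
chain = [genesis]
chain.append(make_block(chain[-1]["hash"], "1HGBH41JXMN109186", "repair", "garage"))
print(verify_chain(chain))  # True
```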
278

Space Vehicle Testing

Belsick, Charlotte Ann 01 December 2012 (has links)
Requirement verification and validation is a critical component of building and delivering space vehicles, with testing as the preferred method. This Master's Project presents the space vehicle test process from planning through test design and execution. It starts with an overview of requirements, validation, and verification. The four different verification methods are explained, including examples of what can go wrong if the verification is done incorrectly. Since the focus of this project is on test, test verification is emphasized. The philosophy behind testing, including the "why" and the methods, is presented. The different levels of testing, the test objectives, and the typical tests are discussed in detail. Descriptions of the different types of tests are provided, including configurations and test challenges. While most individuals focus on hardware only, software is an integral part of any space product. As such, software testing, including mistakes and examples, is also presented. Since testing is often not performed flawlessly the first time, sections on anomalies, including determining root cause, corrective action, and retest, are included. A brief discussion of defect detection in test is presented. The full project is presented in the Appendix as a PowerPoint document.
279

Evolving model evolution

Fuchs, Alexander 01 December 2009 (has links)
Automated theorem proving is a method to establish or disprove logical theorems. While these can be theorems in the classical mathematical sense, we are more concerned with logical encodings of properties of algorithms, hardware, and software. Especially in the area of hardware verification, propositional logic is used widely in industry. Satisfiability Modulo Theories (SMT) is a set of logics which extend propositional logic with theories relevant for specific application domains. In particular, software verification has received much attention, and efficient algorithms have been devised for reasoning over arithmetic and data types. Built-in support for theories by decision procedures is often significantly more efficient than reductions to propositional logic (SAT). Most efficient SAT solvers are based on the DPLL architecture, which is also the basis for most efficient SMT solvers. The main shortcoming of both kinds of logics is the weak support for non-ground reasoning, which noticeably limits the applicability to real-world systems. The Model Evolution Calculus (ME) was devised as a lifting of the DPLL architecture from the propositional setting to full first-order logic. In previous work, we created the solver Darwin as an implementation of ME, and showed how to adapt improvements from the DPLL setting. The first half of this thesis is concerned with ME and Darwin. First, we lift a further crucial ingredient of SAT and SMT solvers, lemma-learning, to Darwin and evaluate its benefits. Then, we show how to use Darwin for finite model finding, and how this application benefits from lemma-learning. In the second half of the thesis we present Model Evolution with Linear Integer Arithmetic (MELIA), a calculus combining function-free first-order logic with linear integer arithmetic (LIA). MELIA is based on ME and supports similar inference rules and redundancy criteria. We prove the correctness of the calculus, and show how to obtain complete proof procedures and decision procedures for some interesting classes of MELIA's logic. Finally, we explain in detail how MELIA can be implemented efficiently based on the techniques employed in SMT solvers and Darwin.
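
For readers unfamiliar with the DPLL architecture the abstract builds on, here is a compact propositional DPLL sketch (unit propagation plus branching with backtracking). It illustrates only the classical procedure, not the Model Evolution calculus, Darwin, or MELIA.

```python
# Minimal propositional DPLL sketch. Clauses are sets of integer literals
# (a negative integer denotes a negated variable). Illustration only.
def dpll(clauses, assignment=()):
    clauses = [set(c) for c in clauses]
    assignment = list(assignment)
    # Unit propagation: repeatedly assign literals forced by unit clauses.
    while True:
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment.append(lit)
        new = []
        for c in clauses:
            if lit in c:
                continue          # clause satisfied, drop it
            c = c - {-lit}        # remove the falsified literal
            if not c:
                return None       # conflict: empty clause
            new.append(c)
        clauses = new
    if not clauses:
        return assignment         # all clauses satisfied
    # Decision: branch on some literal of a remaining clause, backtrack on failure.
    lit = next(iter(clauses[0]))
    for choice in (lit, -lit):
        result = dpll(clauses + [{choice}], assignment)
        if result is not None:
            return result
    return None

print(dpll([{1, 2}, {-1, 3}, {-3}]))  # [-3, -1, 2], a satisfying assignment
```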
280

Verification and Validation of the Spalart-Allmaras Turbulence Model for Strand Grids

Tong, Oisin 01 May 2013 (has links)
The strand-Cartesian grid approach is a unique method of generating and computing fluid dynamic simulations. The strand-Cartesian approach provides the highly desirable qualities of fully-automatic grid generation and high-order accuracy. This thesis focuses on the implementation of the Spalart-Allmaras turbulence model in the strand-Cartesian grid framework. Verification and validation are required to ensure correct implementation of the turbulence model. Mathematical code verification is used to ensure correct implementation of new algorithms within the code framework. The Spalart-Allmaras model is verified with the Method of Manufactured Solutions (MMS). MMS shows second-order convergence, which implies that the new algorithms are correctly implemented. Validation of the strand-Cartesian solver is completed by simulating certain cases for comparison against the results of two independent compressible codes: CFL3D and FUN3D. The NASA Langley turbulence resource provided the inputs and conditions required to run the cases, as well as the case results for these two codes. The strand solver showed excellent agreement with both NASA resource codes for a zero-pressure-gradient flat plate and a bump-in-channel. The treatment of the sharp corner on a NACA 0012 airfoil is investigated, resulting in an optimal external sharp corner configuration of strand vector smoothing with a base Cartesian grid and telescoping Cartesian refinement around the trailing edge. Results from the case agree well with those from CFL3D and FUN3D. Additionally, a NACA 4412 airfoil case is examined and shows good agreement with CFL3D and FUN3D, resulting in validation for this case.
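
The phrase "second-order convergence" in an MMS study is usually established by computing an observed order of accuracy from discretization errors on successively refined grids. The sketch below shows that calculation with made-up error norms; the numbers are not results from the thesis.

```python
# Hedged illustration: errors on refined grids are fitted to E ~ C*h^p, giving
# an observed order p = log(E_coarse / E_fine) / log(r) for refinement ratio r.
# The error values below are assumptions for the example, not thesis data.
import math

def observed_order(e_coarse, e_fine, r=2.0):
    """Observed order of accuracy between two grid levels with refinement ratio r."""
    return math.log(e_coarse / e_fine) / math.log(r)

errors = [4.1e-3, 1.05e-3, 2.6e-4]  # assumed L2 error norms on h, h/2, h/4 grids
for e_c, e_f in zip(errors, errors[1:]):
    print(f"observed order: {observed_order(e_c, e_f):.2f}")  # about 2 for a 2nd-order scheme
```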
