271

Ambiente integrado para verificação e teste da coordenação de componentes tolerantes a falhas / An integrated environment for verification and test of fault-tolerant components coordination

Simone Hanazumi 01 September 2010
Today, faced with continuous change and a competitive market, companies and organizations constantly need to adapt their business practices to meet the varying requirements of their customers and stay ahead of their competitors. A promising approach to this goal is Component-Based Development (CBD), whose basic idea is that new software can be built quickly from pre-existing components. However, assembling reliable, fault-tolerant corporate systems by integrating components has proved to be a relatively complex task, and guaranteeing that such integration does not fail has become essential, above all because the consequences of a failure can be extremely serious. To gain some assurance that the software is fault-tolerant, testing and formal program verification must both be carried out; together, they give the developer confidence that the system resulting from the integration is in fact reliable. The practical feasibility of these activities, however, depends on supporting tools, since performing both imposes a high cost on software development. Given this need to make testing and verification of component-based systems easier, this Master's project develops an integrated environment for the verification and testing of protocols that coordinate the exceptional behaviour of components.
272

Planning proofs of correctness of CCS systems

Monroy-Borja, Raul January 1997
The specification and verification of communicating systems has captured increasing interest in the last decades. CCS, a Calculus of Communicating Systems [Milner 89a], was especially designed to help this enterprise; it is widely used in both industry and academia. Most efforts to automate the use of CCS for verification have centred around the explicit construction of a bisimulation [Park 81]. This approach, however, presents severe limitations when dealing with systems that contain infinite states (e.g. systems with evolving structure [Milner 89a]) or that comprise a finite but arbitrary number of components (e.g. systems with inductive structure [Milner 89a]). There is an alternative approach to verification, based on equational reasoning, which does not exhibit such limitations. This formulation, however, introduces significant proof search control issues and hence has remained far less explored. This thesis investigates the use of explicit proof plans [Bundy 88] for problems of automatic verification in the context of CCS. We have conducted the verification task using equational reasoning, and centred on infinite state systems and parameterised systems. A parameterised system, e.g. a system with inductive structure, circumscribes a family of CCS systems which have fixed structure and finitely many states. To reason about these systems, we have adopted Robin Milner's approach [Milner 89a], which advocates the use of induction to exploit the structure and/or the behaviour of a system during its verification. To automate this reasoning, we have used proof plans for induction [Bundy 88] built within CLAM [Bundy et al 90b], and extended CLAM with special CCS proof plans. We have implemented a verification planner by adding these special proof plans to CLAM. The system handles the search control problems prompted by CCS verification satisfactorily, though it is not complete. Moreover, the system is capable of dealing with the verification of finite state systems, infinite state systems, and parameterised systems, hence providing a uniform method to analyse CCS systems regardless of their state space. Our results are encouraging: the verification planner has been successfully tested on a number of examples drawn from the literature. We have planned proofs of conjectures that are outside the domain of existing verification methods. Furthermore, the verification planning is fully automated. Because of this, even though the verification planner still has plenty of room for improvement, we can state that proof planning can handle the equational verification of CCS systems, and we therefore advocate its use within this interesting field.
273

Evaluating the Usability of Two-Factor Authentication

Reese, Kendall Ray 01 June 2018
Passwords are the dominant form of authentication on the web today. However, many users choose weak passwords and reuse the same password on multiple sites, increasing their vulnerability to having their credentials leaked or stolen. Two-factor authentication strengthens existing password authentication schemes against impersonation attacks and makes it more difficult for attackers to reuse stolen credentials on other websites. Despite the added security benefits of two-factor authentication, there are still many open questions about its usability. Many two-factor authentication systems in widespread use today have not yet been subjected to adequate usability testing, and previous comparative studies have demonstrated significant differences in usability between various single-factor authentication systems. The main contributions of this work are as follows. First, we developed a novel user behavior model that describes four phases of interaction between a user and an authentication system. This model is designed to inform the design of future usability studies and will give researchers and implementers of authentication systems a more nuanced understanding of authentication system usability. Second, we conducted a comparative usability study of some of the most common two-factor authentication systems. In contrast to previous authentication usability studies, we had participants use the systems for a period of two weeks and collected both timing data and System Usability Scale (SUS) metrics on the systems under test. From these studies, we draw several conclusions about the state of usability and acceptance of two-factor authentication, finding that many users want more security for their sensitive online accounts and are open to using multiple forms of two-factor authentication. We also suggest that security researchers draw upon risk communication theory to better help users make informed security decisions.
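Since the study reports SUS metrics, it may help to recall how a SUS score is computed from a participant's ten Likert-scale item responses. The sketch below uses the standard published scoring rule; the example responses are invented.

```python
# Standard System Usability Scale (SUS) scoring. Odd-numbered items are
# positively worded (contribution = response - 1), even-numbered items are
# negatively worded (contribution = 5 - response); the summed contributions
# are scaled by 2.5 to yield a 0-100 score.

def sus_score(responses):
    """responses: list of ten 1-5 Likert ratings, item 1 through item 10."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: a fairly positive set of responses scores 80 out of 100.
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # 80.0
```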
274

Verification of Task Parallel Programs Using Predictive Analysis

Nakade, Radha Vi 01 October 2016
Task parallel programming languages provide a way to create asynchronous tasks that can run concurrently. The advantage of task parallelism is that the programmer can write code that is independent of the underlying hardware: the runtime determines the number of available processor cores and the most efficient way to execute the tasks. When two or more concurrently executing tasks access a shared memory location, and at least one of the accesses is a write, a data race occurs in the program. Data races can introduce non-determinism in the program output, making data race detection tools important. To detect data races in task parallel programs, a new sound and complete technique based on computation graphs is presented in this work. The data race detection algorithm runs in O(N²) time, where N is the number of nodes in the graph. A computation graph is a directed acyclic graph that represents the execution of the program; for detecting data races, it stores the shared heap locations accessed by the tasks. An algorithm for creating computation graphs augmented with the memory locations accessed by the tasks is also described here; it runs in O(N) time, where N is the number of operations performed in the tasks. This work also presents an implementation of this technique for the Java implementation of the Habanero programming model. The results of this data race detector are compared to Java Pathfinder's precise race detector extension and its permission-regions-based race detector extension. The results show a significant reduction in the time required for data race detection using this technique.
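To make the idea concrete, here is a minimal sketch (assumed for illustration, not the thesis's implementation) of race detection over a computation graph: two nodes race when neither reaches the other in the DAG and they touch a common location with at least one write. The graph, read/write sets, and task names are invented.

```python
# Pairwise race check over a computation graph: a DAG whose nodes record the
# shared locations each task reads and writes.
from itertools import combinations

def reachable(graph, src):
    """All nodes reachable from src via directed edges (graph: node -> successors)."""
    seen, stack = set(), [src]
    while stack:
        for m in graph.get(stack.pop(), ()):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen

def find_races(graph, reads, writes):
    """reads/writes: node -> set of shared locations accessed by that task."""
    reach = {n: reachable(graph, n) for n in graph}
    races = []
    for a, b in combinations(graph, 2):
        ordered = b in reach[a] or a in reach[b]   # happens-before in the DAG
        conflict = (writes[a] & (reads[b] | writes[b])) or (writes[b] & reads[a])
        if not ordered and conflict:
            races.append((a, b))
    return races

# Two tasks forked from 'main' touch location 'x' with no ordering edge:
graph  = {"main": ["t1", "t2"], "t1": [], "t2": []}
reads  = {"main": set(), "t1": set(), "t2": {"x"}}
writes = {"main": set(), "t1": {"x"}, "t2": set()}
print(find_races(graph, reads, writes))  # [('t1', 't2')]
```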
275

Improvement in Computational Fluid Dynamics Through Boundary Verification and Preconditioning

Folkner, David 01 May 2013
This thesis provides improvements to computational fluid dynamics accuracy and efficiency through two main methods: a new boundary condition verification procedure and preconditioning techniques. First, a new verification approach that addresses boundary conditions was developed. In order to apply the verification approach to a large range of arbitrary boundary conditions, it was necessary to develop a unifying mathematical formulation. A framework was developed that allows for the application of Dirichlet, Neumann, and extrapolation boundary conditions, or in some cases the equations of motion directly. Verification of the boundary condition techniques was performed using exact solutions from canonical fluid dynamic test cases. Second, to reduce computation time and improve accuracy, preconditioning algorithms were applied via artificial dissipation schemes. A new convective upwind and split pressure (CUSP) scheme was devised and was shown to be more effective than traditional preconditioning schemes in certain scenarios. The new scheme was compared with traditional schemes for unsteady flows in which both convective and acoustic effects dominate. Both the boundary conditions and the preconditioning algorithms were implemented in the context of a "strand grid" solver. While not the focus of this thesis, strand grids provide automatic viscous-quality meshing and are suitable for moving-mesh overset problems.
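As an illustration of how one routine can cover several boundary condition types, the following sketch imposes Dirichlet, Neumann, and extrapolation conditions through ghost-cell values on a 1-D finite-volume grid. This is an assumption about the generic ghost-cell approach, not code or formulation from the thesis.

```python
# Unifying idea: set the ghost cell so the desired face condition holds
# between the ghost and the first interior cell.
import numpy as np

def apply_left_boundary(u, kind, value, dx):
    """Set ghost cell u[0] from interior cell u[1] for the chosen condition."""
    if kind == "dirichlet":        # face value: (u[0] + u[1]) / 2 == value
        u[0] = 2.0 * value - u[1]
    elif kind == "neumann":        # face gradient: (u[1] - u[0]) / dx == value
        u[0] = u[1] - dx * value
    elif kind == "extrapolation":  # zeroth-order extrapolation from interior
        u[0] = u[1]
    else:
        raise ValueError(f"unknown boundary condition: {kind}")
    return u

u = np.array([0.0, 1.0, 2.0, 3.0])  # u[0] is the ghost cell
print(apply_left_boundary(u.copy(), "dirichlet", 1.5, dx=0.1))  # ghost -> 2.0
print(apply_left_boundary(u.copy(), "neumann", 5.0, dx=0.1))    # ghost -> 0.5
```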
276

VEHICLE INFORMATION SYSTEM USING BLOCKCHAIN

Zulkanthiwar, Amey 01 June 2019
The main purpose of a vehicle information system using blockchain is to create a transparent and reliable source of information that helps consumers buy a vehicle. The blockchain creates a time-sequenced chain of events for each vehicle, starting from the original sale, and includes insurance, vehicle repair, and vehicle resale records. The project is divided into three parts. Part one is used by the administrator, who creates the blockchain and grants different organizations the authorization to add to it. Part two is used by those organizations to create blocks in the blockchain. Part three is used by customers who want to get information about a vehicle.
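A minimal sketch of the hash-chained record structure such a system implies is shown below: each block stores one vehicle event plus the hash of the previous block, so tampering with history is detectable. The field names and the example VIN are illustrative, not taken from the project.

```python
import hashlib, json, time

def make_block(prev_hash, vin, event):
    """Create a block linking a vehicle event to the previous block's hash."""
    body = {"prev": prev_hash, "vin": vin, "event": event, "ts": time.time()}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def chain_valid(chain):
    """Verify hash links and recompute each block's digest."""
    for prev, cur in zip(chain, chain[1:]):
        body = {k: cur[k] for k in ("prev", "vin", "event", "ts")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if cur["prev"] != prev["hash"] or cur["hash"] != recomputed:
            return False
    return True

genesis = make_block("0" * 64, "1HGCM82633A004352", {"type": "original_sale"})
chain = [genesis, make_block(genesis["hash"], "1HGCM82633A004352",
                             {"type": "repair", "shop": "ACME Auto"})]
print(chain_valid(chain))                  # True
chain[1]["event"]["shop"] = "tampered"
print(chain_valid(chain))                  # False: digest no longer matches
```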
277

Space Vehicle Testing

Belsick, Charlotte Ann 01 December 2012
Requirement verification and validation is a critical component of building and delivering space vehicles, with testing as the preferred method. This Master's project presents the space vehicle test process from planning through test design and execution. It starts with an overview of requirements, validation, and verification. The four different verification methods are explained, including examples of what can go wrong when verification is done incorrectly. Since the focus of this project is on test, test verification is emphasized. The philosophy behind testing, including the "why" and the methods, is presented. The different levels of testing, the test objectives, and the typical tests are discussed in detail. Descriptions of the different types of tests are provided, including configurations and test challenges. While most individuals focus on hardware only, software is an integral part of any space product; as such, software testing, including mistakes and examples, is also presented. Since testing is often not performed flawlessly the first time, sections on anomalies, covering root cause determination, corrective action, and retest, are included, along with a brief discussion of defect detection in test. The project itself is presented in full in the Appendix as a PowerPoint document.
278

Evolving model evolution

Fuchs, Alexander 01 December 2009
Automated theorem proving is a method to establish or disprove logical theorems. While these can be theorems in the classical mathematical sense, we are more concerned with logical encodings of properties of algorithms, hardware, and software. Especially in the area of hardware verification, propositional logic is used widely in industry. Satisfiability Modulo Theories (SMT) is a set of logics which extend propositional logic with theories relevant for specific application domains. In particular, software verification has received much attention, and efficient algorithms have been devised for reasoning over arithmetic and data types. Built-in support for theories by decision procedures is often significantly more efficient than reductions to propositional logic (SAT). Most efficient SAT solvers are based on the DPLL architecture, which is also the basis for most efficient SMT solvers. The main shortcoming of both kinds of logic is the weak support for non-ground reasoning, which noticeably limits their applicability to real-world systems. The Model Evolution Calculus (ME) was devised as a lifting of the DPLL architecture from the propositional setting to full first-order logic. In previous work, we created the solver Darwin as an implementation of ME, and showed how to adapt improvements from the DPLL setting. The first half of this thesis is concerned with ME and Darwin. First, we lift a further crucial ingredient of SAT and SMT solvers, lemma learning, to Darwin and evaluate its benefits. Then, we show how to use Darwin for finite model finding, and how this application benefits from lemma learning. In the second half of the thesis we present Model Evolution with Linear Integer Arithmetic (MELIA), a calculus combining function-free first-order logic with linear integer arithmetic (LIA). MELIA is based on ME and supports similar inference rules and redundancy criteria. We prove the correctness of the calculus, and show how to obtain complete proof procedures and decision procedures for some interesting classes of MELIA's logic. Finally, we explain in detail how MELIA can be implemented efficiently, based on the techniques employed in SMT solvers and Darwin.
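For readers unfamiliar with the DPLL architecture the abstract builds on, here is a minimal propositional DPLL sketch. It is for illustration only; ME and Darwin lift this scheme to first-order logic rather than using anything like this code.

```python
# Minimal DPLL over CNF clauses represented as lists of nonzero integer
# literals (negative integer = negated variable).

def dpll(clauses, assignment=()):
    # Simplify the clause set under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                      # clause already satisfied
        reduced = [l for l in clause if -l not in assignment]
        if not reduced:
            return None                   # empty clause: conflict
        simplified.append(reduced)
    if not simplified:
        return assignment                 # all clauses satisfied: a model
    # Unit propagation: a one-literal clause forces that literal.
    for clause in simplified:
        if len(clause) == 1:
            return dpll(simplified, assignment + (clause[0],))
    # Split on the first unassigned literal, trying both polarities.
    lit = simplified[0][0]
    return dpll(simplified, assignment + (lit,)) or \
           dpll(simplified, assignment + (-lit,))

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(dpll([[1, 2], [-1, 2], [-2, 3]]))  # a model, e.g. (1, 2, 3)
```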
279

Verification and Validation of the Spalart-Allmaras Turbulence Model for Strand Grids

Tong, Oisin 01 May 2013
The strand-Cartesian grid approach is a unique method of generating and computing fluid dynamic simulations. The strand-Cartesian approach provides the highly desirable qualities of fully-automatic grid generation and high-order accuracy. This thesis focuses on the implementation of the Spalart-Allmaras turbulence model in the strand-Cartesian grid framework. Verification and validation are required to ensure correct implementation of the turbulence model. Mathematical code verification is used to ensure correct implementation of new algorithms within the code framework. The Spalart-Allmaras model is verified with the Method of Manufactured Solutions (MMS). MMS shows second-order convergence, which implies that the new algorithms are correctly implemented. Validation of the strand-Cartesian solver is completed by simulating certain cases for comparison against the results of two independent compressible codes: CFL3D and FUN3D. The NASA Langley turbulence resource provided the inputs and conditions required to run the cases, as well as the case results for these two codes. The strand solver showed excellent agreement with both NASA resource codes for a zero-pressure-gradient flat plate and a bump-in-channel. The treatment of the sharp corner on a NACA 0012 airfoil is investigated, resulting in an optimal external sharp-corner configuration of strand vector smoothing with a base Cartesian grid and telescoping Cartesian refinement around the trailing edge. Results from this case agree well with those from CFL3D and FUN3D. Additionally, a NACA 4412 airfoil case is examined and shows good agreement with CFL3D and FUN3D, resulting in validation for this case.
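The second-order convergence claim from MMS is typically checked by computing the observed order of accuracy from error norms on successively refined grids. A minimal sketch of that calculation follows; the error values are invented for illustration.

```python
# Grid-convergence check behind MMS verification: with a manufactured exact
# solution, the discretization error should shrink as E ~ C * h^p, and the
# observed order p should approach the scheme's formal order (2 here).
import math

def observed_order(e_coarse, e_fine, refinement_ratio=2.0):
    """Observed order of accuracy from error norms on two grids."""
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

# Illustrative error norms from three successively halved grid spacings:
errors = [4.0e-3, 1.01e-3, 2.52e-4]
for ec, ef in zip(errors, errors[1:]):
    print(f"observed order: {observed_order(ec, ef):.2f}")  # ~2: second order
```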
280

FPGA-based Implementation of Concatenative Speech Synthesis Algorithm

Bamini, Praveen Kumar 29 October 2003
The main aim of a text-to-speech synthesis system is to convert ordinary text into an acoustic signal that is indistinguishable from human speech. This thesis presents an architecture, targeted to FPGAs, that implements a concatenative speech synthesis algorithm. Many current text-to-speech systems are based on the concatenation of acoustic units of recorded speech; it is the easiest method of producing synthetic speech, joining prerecorded acoustic elements into a continuous speech signal. Current concatenative speech synthesizers are capable of producing highly intelligible speech, although the quality often suffers from discontinuities between the acoustic units due to contextual differences. The software implementation of the algorithm is written in C, whereas the hardware implementation is done in structural VHDL. A database of acoustic elements is formed first by recording sounds for the different phones. The architecture is designed to concatenate the acoustic elements corresponding to the phones that form the target word, i.e., the word to be synthesized. The architecture does not attempt to remove the discontinuities between the acoustic elements, as its ultimate goal is the synthesis of speech. The hardware implementation is verified on a Virtex (v800hq240-4) FPGA device.
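A minimal sketch of the concatenative idea the architecture implements in hardware: look up a prerecorded waveform for each phone of the target word and join them into one continuous signal. The phone labels and database contents below are illustrative placeholders, not the thesis's recorded acoustic units.

```python
import numpy as np

SAMPLE_RATE = 8000

def fake_phone(freq, dur=0.15):
    """Stand-in for a recorded acoustic unit: a short sine tone."""
    t = np.arange(int(SAMPLE_RATE * dur)) / SAMPLE_RATE
    return 0.5 * np.sin(2 * np.pi * freq * t)

# Database of acoustic elements, one per phone (placeholder content).
database = {"HH": fake_phone(300), "EH": fake_phone(500),
            "L": fake_phone(400), "OW": fake_phone(600)}

def synthesize(phones):
    """Concatenate the acoustic elements for the target word's phones."""
    return np.concatenate([database[p] for p in phones])

signal = synthesize(["HH", "EH", "L", "OW"])  # target word: "hello"
print(signal.shape)  # (4800,) -- four 0.15 s units at 8 kHz
```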
