161

Development of a near-zone computer model for investigation of feasibility of ground checking the capture-effect glide slope

d'Estaintot, Thierry Langlois January 1984 (has links)
No description available.
162

In pursuit of a hidden evader

Bohn, Christopher A. 29 September 2004 (has links)
No description available.
163

Exploring Hybrid Dynamic and Static Techniques for Software Verification

Cheng, Xueqi 10 March 2010 (has links)
With the growing importance of software on which human lives increasingly depend, the correctness of the underlying software becomes especially critical. However, the increasing complexity and size of modern software systems pose serious challenges to both the effectiveness and the efficiency of software verification. Two major obstacles are the quality of test generation, in terms of error detection, in software testing, and the state-space explosion problem in software formal verification (model checking). In this dissertation, we investigate several hybrid techniques that explore dynamic (with program execution) and static (without program execution) methods, as well as the synergies of multiple approaches, in software verification from the perspectives of testing and model checking. For software testing, a new simulation-based internal variable range coverage metric is proposed with the goal of enhancing the error-detection capability of the generated test data when applied as the target metric. For software model checking, we utilize various dynamic analysis methods, such as data mining and swarm intelligence (ant colony optimization), to extract useful high-level information from program execution data. Despite being incomplete, dynamic program execution can still help to uncover important program structure features and variable correlations. The extracted knowledge, such as invariants in different forms and promising control flows, is then used to facilitate code-level program abstraction (under-approximation/over-approximation) and/or state-space partitioning, which in turn improves the performance of property verification. To validate the effectiveness of the proposed hybrid approaches, a wide range of experiments on academic and real-world programs were designed and conducted, with results compared against the original as well as the relevant verification methods. The experimental results demonstrate the effectiveness of our methods in improving both the quality and the performance of software verification. For software testing, the newly proposed coverage metric, constructed from dynamic program execution data, improves the quality of the generated test cases in terms of mutation killing, a widely applied measure of error detection. For software model checking, the proposed hybrid techniques take full advantage of the complementary benefits of dynamic and static approaches: the lightweight dynamic techniques provide flexibility in extracting valuable high-level information that can guide the scope and direction of the static reasoning process, yielding significant performance improvements in software model checking, while the static techniques guarantee the completeness of the verification results, compensating for the weaknesses of the dynamic methods. / Ph. D.
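
A minimal illustration of the dynamic side of such a hybrid approach is trace-based invariant mining: run the program, record variable snapshots, and keep only the candidate facts that hold in every observation. The sketch below is hypothetical Python with an invented trace format and only two candidate forms; it shows the general flavor, not the dissertation's actual method.

```python
# Illustrative sketch of mining candidate invariants from execution traces.
# The trace format and candidate forms (v == c, a <= b) are hypothetical.
from itertools import combinations

def snapshot_facts(state):
    """Facts that hold in one snapshot: constants and pairwise orderings."""
    facts = {("eq_const", v, val) for v, val in state.items()}
    for a, b in combinations(sorted(state), 2):
        if state[a] <= state[b]:
            facts.add(("le", a, b))
        if state[b] <= state[a]:
            facts.add(("le", b, a))
    return facts

def mine_invariants(traces):
    """Keep only the candidates that hold in every snapshot of every run."""
    candidates = None
    for run in traces:
        for state in run:
            facts = snapshot_facts(state)
            candidates = facts if candidates is None else candidates & facts
    return candidates or set()

# Two toy runs of a loop: 'n' stays constant and 'i' never exceeds 'n'.
runs = [
    [{"i": 0, "n": 3}, {"i": 1, "n": 3}, {"i": 3, "n": 3}],
    [{"i": 0, "n": 3}, {"i": 2, "n": 3}],
]
print(sorted(mine_invariants(runs)))  # ('eq_const', 'n', 3) and ('le', 'i', 'n')
```

Surviving candidates such as i <= n are exactly the kind of high-level facts that can then seed abstraction or state-space partitioning in a static model checker.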
164

Design Verification for Sequential Systems at Various Abstraction Levels

Zhang, Liang 31 January 2005 (has links)
With the ever-increasing complexity of digital systems, functional verification has become a daunting task for circuit designers. Functional verification alone often accounts for more than 70% of the total development cost, and the situation is projected to worsen. The most critical limitations of existing techniques are the capacity issue and the run-time issue. This dissertation addresses the functional verification problem using a unified approach that utilizes different core algorithms at various abstraction levels. At the logic level, we focus on incorporating a set of novel ideas into existing formal verification approaches. First, we present a number of powerful optimizations to improve the performance and capacity of a typical SAT-based bounded model checking framework. Second, we present a novel method for performing dynamic abstraction within a framework for abstraction-refinement based model checking. Experiments on a wide range of industrial designs have shown that the proposed optimizations consistently provide one to two orders of magnitude speedup and can be extremely useful in enhancing the efficacy of existing formal verification algorithms. At the register transfer level, where formal verification is less likely to succeed, we developed an efficient ATPG-based validation framework that leverages high-level circuit information and an improved observability-enhanced coverage metric to generate high-quality validation sequences. Experiments show that our approach generates high-quality validation vectors that achieve both high tag coverage and high bug coverage at extremely low computational cost. / Ph. D.
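
For context, SAT-based bounded model checking unrolls a design's transition relation for k steps and asks a SAT solver whether the property can fail within the bound. Below is a toy sketch, assuming the python-sat package is available; the 2-bit counter, the variable numbering, and the property are all illustrative, not from the dissertation.

```python
# Sketch of SAT-based bounded model checking on a toy 2-bit counter,
# assuming the python-sat package (pip install python-sat).
from pysat.solvers import Glucose3

def var(bit, t):
    """SAT variable id for counter bit `bit` (0 or 1) at time step t."""
    return 2 * t + bit + 1

solver = Glucose3()
# Initial state: counter == 0, i.e. both bits false.
solver.add_clause([-var(0, 0)])
solver.add_clause([-var(1, 0)])

MAX_DEPTH = 8
for t in range(MAX_DEPTH):
    b0, b1 = var(0, t), var(1, t)
    n0, n1 = var(0, t + 1), var(1, t + 1)
    # Transition (increment mod 4): n0 = !b0
    solver.add_clause([n0, b0]); solver.add_clause([-n0, -b0])
    # n1 = b1 XOR b0
    solver.add_clause([-n1, b1, b0]);  solver.add_clause([-n1, -b1, -b0])
    solver.add_clause([n1, -b1, b0]);  solver.add_clause([n1, b1, -b0])
    # Check the property "counter != 3" at step t+1 via assumptions.
    if solver.solve(assumptions=[n0, n1]):
        print(f"property violated at depth {t + 1}")  # expect depth 3
        break
```

Using solver assumptions for the bad-state check keeps the unrolled transition clauses across iterations, which is the incremental-solving pattern that many of the BMC optimizations in work like this build on.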
165

Exploring Abstraction Techniques for Scalable Bit-Precise Verification of Embedded Software

He, Nannan 01 June 2009 (has links)
Conventional testing has become inadequate to satisfy the rigorous reliability requirements of embedded software, which plays an increasingly important role in many safety-critical applications. Automatic formal verification is a viable avenue for ensuring the reliability of such software. Recently, more and more formal verification techniques have begun modeling a non-Boolean data variable as a bit-vector of bounded width (e.g., 32 or 64 bits) to implement bit-precise verification. One major challenge in scaling such bit-precise verification to real-world embedded software is that the state space for verification can be intractably large. In this dissertation, several abstraction techniques are explored to address this scalability challenge in the bit-precise verification of embedded software. First, we propose a tight integration of program slicing, an important static program analysis technique, with bounded model checking (BMC). While many software verification tools apply program slicing as a separate preprocessing step, we integrate slicing operations into our model construction and reduction process and enhance them with compiler optimization techniques to compute accurate program slices. We also apply a proof-based abstraction-refinement framework to further remove program segments irrelevant to the property being verified. Next, we present a method of using symbolic simulation for scalable formal verification. The simulation introduces symbolic X values to abstract the concrete values of variables, and it embeds this symbolic simulation in a counterexample-guided abstraction-refinement framework to automatically construct and verify an abstract model whose state space is smaller than that of the original concrete program. This dissertation also presents our efforts on using two common testability metrics, the controllability metric (CM) and the observability metric (OM), as high-level structural guidance for scalable bit-precise verification. A new abstraction approach is proposed, based on the concept of under- and over-approximation, to efficiently solve bit-vector formulas generated from embedded software verification instances. These instances include both complicated arithmetic computations and intensive control structures. Our approach applies CM and OM to assist the abstraction-refinement procedure in two ways: (1) it uses CM and OM to guide the construction of a simple under-approximate model, which includes only a subset of the execution paths in a verification instance, so that a counterexample refuting the instance can be obtained with reduced effort, and (2) to reduce the cost of using proof-based refinement alone, it uses OM heuristics to guide the restoration of additional verification-relevant formula constraints at low computational cost for refinement. Experiments show a significant reduction in solving time compared to state-of-the-art solvers for bit-vector arithmetic. This dissertation finally proposes an efficient algorithm to discover non-uniform encoding widths for individual variables in the verification model, which may be smaller than their original modeling widths but sufficient for the verification. Our algorithm distinguishes itself from existing approaches in that it is path-oriented: it takes advantage of CM and OM values to guide the computation of the initial non-uniform encoding widths and the effective adjustment of these widths along different paths, until the property is verified. It can also exclude paths that are deemed less favorable or that have been searched in previous steps, thus simplifying the problem. Experiments demonstrate that our algorithm can significantly speed up verification, especially when searching for a counterexample that violates the property under verification. / Ph. D.
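
The core idea of width-based under-approximation can be sketched in a few lines: constrain each bit-vector variable to an effective width smaller than its declared width, and widen only when the constrained formula is unsatisfiable. The sketch below assumes the Z3 Python bindings; the formula and the widening schedule are illustrative stand-ins, not the dissertation's algorithm.

```python
# Sketch of width-based under-approximation for bit-vector solving,
# assuming the Z3 Python bindings (pip install z3-solver).
from z3 import BitVec, BitVecVal, Extract, Solver, sat

x = BitVec("x", 32)
y = BitVec("y", 32)
formula = [x * y == BitVecVal(200, 32), x > 9, y > 9]

def solve_with_effective_width(constraints, vars_, full_width=32):
    for w in (4, 8, 16, full_width):        # widen until SAT or exhausted
        s = Solver()
        for c in constraints:
            s.add(c)
        if w < full_width:
            # Under-approximation: force the bits above position w-1 to
            # zero, so each variable effectively ranges over only w bits.
            for v in vars_:
                s.add(Extract(full_width - 1, w, v) == 0)
        if s.check() == sat:
            return w, s.model()   # any model also satisfies the full formula
    return None, None

width, model = solve_with_effective_width(formula, [x, y])
print(f"satisfied with effective width {width}: {model}")  # e.g. x=10, y=20
```

Because an under-approximate model only removes behaviors, a satisfying assignment found at a small width is a genuine solution of the original formula; only an unsatisfiable answer forces widening.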
166

Search-space Aware Learning Techniques for Unbounded Model Checking and Path Delay Testing

Chandrasekar, Kameshwar 24 April 2006 (has links)
The increasing complexity of VLSI designs in recent years poses serious challenges to ensuring the correctness of large designs for functionality and timing. In this dissertation, we target two related problems in design verification and testing, unbounded model checking and path delay fault testing, that commonly suffer from extremely large memory requirements. We propose efficient representations and intelligent learning techniques that reason on the problem structure and take advantage of the repeated search space, thereby reducing the memory required and the time taken to solve these problems. In this dissertation, we exploit Automatic Test Pattern Generation (ATPG) for unbounded model checking (UMC). To perform unbounded model checking, we need core image/preimage computation engines that perform forward/backward reachability analysis. First, we develop an ATPG engine, with search-space-aware learning, that computes "all solutions" for a given target objective and stores them as a decision diagram. We propose efficient decision selection heuristics and derive a suitable cut-set metric to quickly obtain a compact solution set. The solution set obtained with the initial state set as the objective represents the one-cycle preimage. To use the preimage state set as the objective in subsequent iterations, we propose efficient techniques to convert a decision diagram into clauses or a circuit. We propose a node-based conversion scheme that derives the functionality of each node in the decision diagram. The proposed scheme keeps the size of the state set in check and helps to iteratively compute the preimage over many cycles until a fixed point or a desired state is reached. Further, we gear the ATPG engine to directly compute circuit cofactors rather than individual solutions. The circuit cofactors contain a large number of solutions and hence capture a larger solution space. We also propose efficient learning techniques to prune the cofactor space and accelerate preimage computation. Then, we develop an exclusive image computation procedure that branches on the combinational inputs of the circuit and projects the values on the next-state flip-flops as the image. We perform learning on the input solution space and incrementally store the image obtained as a decision diagram. Our experimental results consistently show that our techniques are better than existing techniques in terms of both performance and capacity. In the case of delay testing, we consider test generation for the path delay fault (PDF) model, which is the most accurate in characterizing the cumulative effect of distributed delays along each path in a circuit. The main bottleneck in ATPG for PDFs is the exponential number of paths in a circuit. In this work, we use circuit information to analyze the common segments shared by different paths. Based on common sensitization constraints, we identify the "untestable core of segments" that cannot be sensitized together. We use these segments to identify, a priori, the conflict search space for a huge number of untestable path delay faults, and prune them on the fly during test generation. Experimental results show that a huge number of untestable path delay faults are identified, which helps accelerate test generation. / Ph. D.
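
The preimage iteration at the heart of such backward reachability is easy to state explicitly, even though the dissertation computes it symbolically with ATPG and decision diagrams. Here is a small explicit-state sketch in Python; the 4-state transition relation is made up for illustration.

```python
# Explicit-state preimage computation and backward reachability to a fixed
# point. The toy transition relation below is hypothetical.

def preimage(transitions, target):
    """States with at least one successor inside `target`.
    transitions: set of (src, dst) pairs."""
    return {s for (s, d) in transitions if d in target}

# Toy transition relation over states {0, 1, 2, 3}.
T = {(0, 1), (1, 2), (2, 3), (3, 3), (1, 0)}

# Backward reachability from the "bad" state 3: iterate one-cycle
# preimages, accumulating states, until a fixed point is reached.
reached = {3}
while True:
    new = preimage(T, reached) | reached
    if new == reached:          # fixed point: no new predecessors
        break
    reached = new
print(reached)  # every state that can eventually reach state 3
```

The symbolic version replaces the explicit state sets with decision diagrams and the one-line preimage with an all-solutions ATPG run, but the fixed-point loop is the same.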
167

Intervening to Increase the ID-Checking Behavior of Cashiers: Cashier-Focused vs. Customer-Focused Approaches

Downing, Christopher O'Brien Jr. 11 June 2015 (has links)
The present four field studies explored the effectiveness of multiple prevention techniques designed to increase the frequency of cashiers' identification (ID)-checking behaviors, using both customer-focused and cashier-focused approaches. Studies 1 and 2 examined customer-focused approaches, Study 3 examined a cashier-focused approach, and Study 4 examined a combination of the two. From the customer approach, Study 1 investigated the use of four prompts (a no-prompt control, an antecedent only, an antecedent with a positive consequence, and an antecedent with a negative consequence) to encourage cashiers to ask customers for their ID during a credit purchase. Research assistants (RAs) visited various stores and made credit purchases while displaying one of the four prompts, covering the card's signature line, to the cashier during check-out. The results showed RAs were checked for ID most often when using the prompts containing both an antecedent and a consequence, which elicited significantly more ID checks than the no-prompt control. Study 2 (also a customer approach) attempted to replicate Study 1 in a non-college community. Using a methodology similar to Study 1, the results showed RAs were checked for ID most often when using the prompt with the antecedent and positive consequence, which elicited significantly more ID checks than the no-prompt control. From the cashier approach, Study 3 investigated a goal-setting and prompt intervention, led by the restaurant manager, to increase the frequency of cashiers' ID-checking behavior. Using an A-B-A (Baseline-Intervention-Withdrawal) reversal design at one of two restaurants, the results showed that the intervention restaurant's percentage of ID-checked purchases increased from Baseline to the Intervention phase. It decreased only slightly during the Withdrawal phase, showing functional control along with some maintenance of the target behavior. The percentage of ID-checked purchases at the control restaurant was almost nonexistent throughout the study. Study 4 investigated the impact of using two intervention approaches (i.e., the customer and cashier approaches) as opposed to one (i.e., the customer approach) to increase the frequency of cashiers' ID-checking behavior. While the A-B-A phases were underway in the restaurants used in Study 3, RAs entered the restaurants and displayed an antecedent-plus-positive-consequence prompt to the cashiers during a credit purchase. The results of Study 4 partially supported the hypothesis: cashiers in the intervention restaurant checked significantly more RAs for ID when the two intervention approaches were combined than when only one approach was used during Baseline, but not during the Withdrawal phase. / Ph. D.
168

Constraint Solving for Diagnosing Concurrency Bugs

Khoshnood, Sepideh 28 May 2015 (has links)
Programmers often have to spend a significant amount of time inspecting software code and execution traces to identify the root cause of a software bug. For a multithreaded program, debugging is even more challenging due to the subtle interactions between concurrent threads and the often astronomical number of possible interleavings. In this work, we propose a logical constraint-based symbolic analysis method to aid in the diagnosis of concurrency bugs and find their root causes, which can later be used to recommend repairs. In our method, the diagnosis process is formulated as a set of constraint-solving problems. By leveraging the power of satisfiability (SAT) solvers and a bounded model checker, we perform a semantic analysis of the sequential computation as well as the thread interactions. The analysis is ideally suited for handling software with small to medium code size but complex concurrency control, such as device drivers, synchronization protocols, and concurrent data structures. We have implemented our method in a software tool and demonstrated its effectiveness in diagnosing subtle concurrency bugs in multithreaded C programs. / Master of Science
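
To see how a schedule can be posed as a constraint problem, consider encoding event orderings as integer timestamps and asking a solver whether a buggy interleaving exists. The sketch below assumes the Z3 Python bindings; the two-thread lost-update scenario and all names are illustrative, not the thesis's encoding.

```python
# Sketch: diagnosing an atomicity violation with an SMT solver, assuming
# the Z3 Python bindings. Two threads each do read(x); write(x), and we ask
# whether thread B's write can land between A's read and write (lost update).
from z3 import Ints, Solver, Or, And, Distinct, sat

a_r, a_w, b_r, b_w = Ints("a_r a_w b_r b_w")   # event timestamps

s = Solver()
s.add(Distinct(a_r, a_w, b_r, b_w))
s.add(a_r < a_w, b_r < b_w)                    # program order in each thread

# Bug condition: B's write lands inside A's read-modify-write window.
s.add(And(a_r < b_w, b_w < a_w))

if s.check() == sat:
    print("buggy interleaving exists:", s.model())   # witness schedule

# Candidate repair: make each read-modify-write atomic (e.g., a lock), so
# the two critical sections cannot overlap. Under this constraint the bug
# condition becomes unsatisfiable.
s.add(Or(a_w < b_r, b_w < a_r))
print("after repair:", s.check())                    # unsat
```

A satisfying model is a concrete witness schedule for the bug; an unsatisfiable result after adding a candidate synchronization constraint is evidence that the repair removes the root cause.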
169

Novel RTD-Based Threshold Logic Design and Verification

Zheng, Yexin 06 May 2008 (has links)
Innovative nano-scale devices have been developed to enhance future circuit design and overcome the physical barriers hindering complementary metal-oxide semiconductor (CMOS) technology. Among the emerging nanodevices, resonant tunneling diodes (RTDs) have demonstrated promising electronic features due to their high-speed switching capability and functional versatility. Rich circuit functionality can be achieved by integrating heterostructure field-effect transistors (HFETs) with RTDs to modulate effective negative differential resistance (NDR). However, RTDs are intrinsically suited to implementing threshold logic rather than the Boolean logic that has dominated CMOS technology in the past. To take full advantage of this emerging nanotechnology, efficient design methodologies and design automation tools for threshold logic are therefore essential. In this thesis, we first propose novel programmable logic elements (PLEs) implemented with threshold gates (TGs) and multi-threshold threshold gates (MTTGs) by exploiting RTD/HFET monostable-bistable transition logic element (MOBILE) principles. Our three-input PLE can be configured through five control bits to realize all three-variable logic functions, which is, to the best of our knowledge, the first single RTD-based structure that provides complete logic implementation. It is also a more efficient reconfigurable circuit element than a general look-up table, which requires eight configuration bits for three-variable functions. We further extend the design concept to construct a more versatile four-input PLE. A comprehensive comparison of the three- and four-input PLEs provides an insightful view of the design tradeoffs between performance and area. We present a mathematical proof of the PLE's logic completeness based on Shannon expansion, as well as HSPICE simulation results for the programmable and primitive RTD/HFET gates we have designed. An efficient control-bit generation algorithm, using a special encoding scheme, is developed to implement any given logic function. In addition, we propose novel techniques for formulating a given threshold logic network in conjunctive normal form (CNF), which facilitates efficient SAT-based equivalence checking of threshold logic networks. Three different strategies for generating CNF from threshold logic representations are implemented, and experimental results based on MCNC benchmarks are presented as a complete comparison. Our hybrid algorithm, which takes into account input symmetry as well as the input weight order of threshold gates, efficiently generates CNF formulas in terms of both SAT solving time and CNF generation time. / Master of Science
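
For readers unfamiliar with threshold logic, a gate outputs 1 exactly when the weighted sum of its inputs reaches a threshold, and any such gate can be translated to CNF by brute-force enumeration of its input assignments. The Python sketch below is that naive baseline, for illustration only; it is not one of the three CNF-generation strategies from the thesis.

```python
# Illustrative sketch: evaluate a threshold gate and translate it to CNF by
# enumerating input assignments (exponential, but correct by construction).
from itertools import product

def threshold_gate(weights, threshold, inputs):
    """Output 1 iff the weighted sum of the inputs reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def gate_to_cnf(weights, threshold, in_vars, out_var):
    """Clauses (DIMACS-style ints) asserting out_var <-> gate(in_vars)."""
    clauses = []
    for assignment in product((0, 1), repeat=len(in_vars)):
        out = threshold_gate(weights, threshold, assignment)
        # One clause per assignment, blocking the wrong output value for it:
        # the clause is falsified only by (assignment, NOT out).
        clause = [v if bit == 0 else -v for v, bit in zip(in_vars, assignment)]
        clause.append(out_var if out == 1 else -out_var)
        clauses.append(clause)
    return clauses

# A 2-of-3 majority gate: unit weights, threshold 2, inputs 1..3, output 4.
for clause in gate_to_cnf([1, 1, 1], 2, [1, 2, 3], 4):
    print(clause)
```

Smarter strategies, like the symmetry- and weight-order-aware ones the abstract describes, aim to produce much more compact CNF than this exponential enumeration while preserving equivalence.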
170

Formal Verification Techniques for Reversible Circuits

Limaye, Chinmay Avinash 27 June 2011 (has links)
As the number of transistors per unit chip area increases, the power dissipation of the chip becomes a bottleneck. New nano-technology materials have been proposed as viable alternatives to CMOS to tackle area and power issues. Power consumption can be minimized by the use of reversible logic instead of conventional combinational circuits. Theoretically, reversible circuits consume no power (or minimal power) when performing computations, which is achieved by avoiding information loss across the circuit. However, using reversible circuits to implement digital logic requires the development of new Electronic Design Automation techniques. Several approaches have been proposed, each with its own pros and cons, which often results in multiple designs for the same function. Consequently, this demands research into efficient equivalence-checking techniques for reversible circuits. This thesis explores the optimization and equivalence checking of reversible circuits. Most existing synthesis techniques work in two steps: generate an original, often sub-optimal, implementation of the circuit, followed by optimization of that design. This work proposes the use of Binary Decision Diagrams (BDDs) for the optimization of reversible circuits. The proposed technique identifies repeated-gate (trivial) as well as non-contiguous redundancies in a reversible circuit. Constructing a BDD for a sub-circuit, obtained by sliding a window of fixed size over the circuit, identifies redundant gates based on the redundant variables in the BDD. This method was unsuccessful in identifying any additional redundancies in benchmark circuits; however, hidden non-contiguous redundancies were consistently identified for a family of randomly generated reversible circuits. At present, several research groups focus on the efficient synthesis of reversible circuits, but little work has been done on identifying redundant gates in existing designs, and the proposed peephole optimization method stands among the few known techniques. The method fails to identify redundancies in a few cases, indicating the complexity of the problem and the need for further research in this area. Even for simple logic functions, multiple circuit representations exist, exhibiting large variation in total gate count and circuit structure. It may be advantageous to have multiple implementations to provide flexibility in the choice of implementation process, but it is necessary to validate the functional equivalence of each such design. Equivalence checking for reversible circuits has been researched to some extent, and a few pre-processing techniques were proposed prior to this work. One such technique involves the use of reversible miter circuits followed by SAT solvers to ascertain equivalence. The second half of this work focuses on applying the proposed reduction technique to reversible miter circuits as a pre-processing step, to improve the efficiency of the subsequent SAT-based equivalence checking. / Master of Science
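
The simplest class of redundancy mentioned above, the repeated-gate kind, can be sketched directly: Toffoli-style gates are self-inverse, so two identical adjacent gates cancel. The Python sketch below illustrates only this trivial case on a made-up circuit; the BDD-based detection of non-contiguous redundancies from the thesis is not shown.

```python
# Illustrative sketch of repeated-gate (trivial) redundancy removal for a
# reversible circuit given as a list of Toffoli-style gates (controls, target).

def remove_trivial_redundancies(circuit):
    """Cancel identical adjacent gates; each such gate is self-inverse."""
    stack = []
    for gate in circuit:
        if stack and stack[-1] == gate:
            stack.pop()          # gate meets its own inverse: both vanish
        else:
            stack.append(gate)
    return stack

def apply_gate(bits, gate):
    controls, target = gate
    if all(bits[c] for c in controls):
        bits[target] ^= 1        # flip the target iff all controls are 1
    return bits

def run(circuit, bits):
    for gate in circuit:
        bits = apply_gate(bits, gate)
    return bits

# CNOT(0 -> 1), a doubled Toffoli({0,1} -> 2) that should cancel, and NOT(0).
circuit = [((0,), 1), ((0, 1), 2), ((0, 1), 2), ((), 0)]
reduced = remove_trivial_redundancies(circuit)
# Both circuits compute the same permutation on all 3-bit inputs.
assert all(
    run(circuit, [(x >> i) & 1 for i in range(3)])
    == run(reduced, [(x >> i) & 1 for i in range(3)])
    for x in range(8)
)
print(reduced)  # [((0,), 1), ((), 0)]: two gates fewer, same function
```

Applied to a reversible miter, where an equivalent pair of circuits composes to the identity, this kind of cancellation shrinks the circuit before the SAT-based equivalence check runs.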
