131

Receiver Performance Simulation: System Verification for GSM Receiver

Yuan, Shuai, Haddad, Marc Antony January 2007 (has links)
The purpose of this thesis is to build and then optimize a simulation environment for the GSM / EDGE / WCDMA receiver in the RF ASICs. The system consists of two blocks: a system core controlled by Agilent Advanced Design System (ADS), and the Simulation Environment System for Verification and Design (SEVED). The signal is generated by SEVED and directed into the system core, where the receiver under test is located. The signal output of the receiver is then directed back into SEVED for bit error rate calculations, so that the performance of the receiver can be evaluated.
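The bit error rate computation at the end of this loop is simple to illustrate. The following is a minimal sketch, not the thesis's SEVED tooling; the BPSK mapping, AWGN channel, and threshold detector are illustrative assumptions.

```python
import numpy as np

def bit_error_rate(tx_bits: np.ndarray, rx_bits: np.ndarray) -> float:
    """Fraction of received bits that differ from the transmitted bits."""
    assert tx_bits.shape == rx_bits.shape
    return float(np.mean(tx_bits != rx_bits))

# Toy loopback: BPSK symbols through an AWGN channel, then hard decision.
rng = np.random.default_rng(0)
tx_bits = rng.integers(0, 2, size=100_000)
symbols = 2 * tx_bits - 1                      # map {0,1} -> {-1,+1}
noisy = symbols + rng.normal(scale=0.5, size=symbols.shape)
rx_bits = (noisy > 0).astype(int)              # threshold detector
print(f"BER = {bit_error_rate(tx_bits, rx_bits):.4f}")
```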
132

Abstraction for Verification and Refutation in Model Checking

Wei, Ou 13 April 2010 (has links)
Model checking is an automated technique for deciding whether a computer program satisfies a temporal property. Abstraction is the key to scaling model checking to industrial-sized problems: it approximates a large (or infinite) program by a smaller abstract model and lifts the model-checking result over the abstract model back to the original program. In this thesis, we study abstraction in model checking based on exact-approximation, which allows for verification and refutation of temporal properties within the same abstraction framework. Our work is driven by problems from both practical and theoretical aspects of exact-approximation. We first address the challenges of effectively applying symmetry reduction to virtually symmetric programs. Symmetry reduction can be seen as a strong exact-approximation technique, where a property holds on the original program if and only if it holds on the abstract model. We develop an efficient procedure for identifying virtual symmetry in programs, and we also explore techniques for combining virtual symmetry with symbolic model checking. Our second study investigates model checking of recursive programs. Previously, we developed a software model checker for non-recursive programs based on exact-approximating predicate abstraction. In this thesis, we extend it to reachability and non-termination analysis of recursive programs. We propose a new program semantics that effectively removes call stacks while preserving reachability and non-termination; this reduces recursive analysis to a non-recursive one, allowing us to reuse the existing abstract analysis in our software model checker to handle recursive programs. A variety of partial transition systems have been proposed for constructing abstract models in exact-approximation. Our third study conducts a systematic analysis of them from both semantic and logical points of view: we analyze the connection between semantic and logical consistency of partial transition systems, compare the expressive power of different families of these formalisms, and discuss the precision of model checking over them. Abstraction based on exact-approximation uses a uniform framework to prove correctness and detect errors of computer programs. Our results provide a better understanding of this approach and extend its applicability in practice.
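The defining feature of exact-approximation — that both definite answers on the abstract model transfer to the original program — can be sketched with the Kleene three-valued logic commonly used over partial models. This is a schematic illustration of that result-lifting step, not the thesis's framework; all names are hypothetical.

```python
from enum import Enum

class V(Enum):
    """Kleene three-valued truth values, ordered F < U < T."""
    F = 0
    U = 1
    T = 2

def v_not(a: V) -> V:
    return V(2 - a.value)

def v_and(a: V, b: V) -> V:
    return V(min(a.value, b.value))

def v_or(a: V, b: V) -> V:
    return V(max(a.value, b.value))

def lift(abstract_result: V) -> str:
    """Exact-approximation: both definite answers transfer to the program."""
    if abstract_result is V.T:
        return "property verified on the original program"
    if abstract_result is V.F:
        return "property refuted on the original program"
    return "unknown: refine the abstraction and re-check"

print(lift(v_and(V.T, V.U)))   # -> unknown: refine the abstraction and re-check
```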
133

Parallel Run-Time Verification

Berkovich, Shay January 2013 (has links)
Run-time verification is a technique for reasoning about program correctness. Given a set of desirable properties and a program trace from the inspected program as input, the monitor module verifies that the properties hold on the trace. Because this process takes place at run time, one of the major drawbacks of run-time verification is the execution overhead caused by the monitoring activity. In this thesis, we aim to minimize this overhead by presenting a collection of parallel verification algorithms. The algorithms verify the correctness of properties in parallel, decreasing the verification time by dispersing computationally intensive calculations over multiple cores (the first level of parallelism). We designed the algorithms to exploit data-level parallelism, making them specifically suitable for Graphics Processing Units (GPUs), although they can be utilized on multi-core platforms as well. Running the inspected program and the monitor module on separate platforms (the second level of parallelism) yields several advantages: minimal interference between the monitor and the program, faster processing for non-trivial computations, and even a significant reduction in power consumption (when the monitor runs on a GPU). This work also aims to provide a solution for automated run-time verification of C programs by implementing the aforementioned algorithms in a monitoring tool called the GPU-based online and offline Monitoring Framework (GooMF). The ultimate goal of GooMF is to supply developers with an easy-to-use and flexible verification API that requires minimal knowledge of formal languages and techniques.
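For a per-event safety invariant, the first level of parallelism reduces to scanning trace slices independently. A minimal sketch, assuming a simple buffer-occupancy invariant and hypothetical field names (GooMF itself targets GPUs and richer temporal properties whose state crosses slice boundaries):

```python
from multiprocessing import Pool

def violates(event: dict) -> bool:
    """Safety invariant: buffer occupancy must never exceed its capacity."""
    return event["occupancy"] > event["capacity"]

def check_chunk(chunk: list) -> list:
    """Return indices (within the chunk) of events violating the invariant."""
    return [i for i, e in enumerate(chunk) if violates(e)]

def parallel_monitor(trace: list, workers: int = 4) -> bool:
    """Data-parallel check: each worker scans one slice of the trace."""
    size = max(1, len(trace) // workers)
    chunks = [trace[i:i + size] for i in range(0, len(trace), size)]
    with Pool(workers) as pool:
        results = pool.map(check_chunk, chunks)
    return all(not r for r in results)   # True iff no violations anywhere

if __name__ == "__main__":
    trace = [{"occupancy": i % 8, "capacity": 10} for i in range(100_000)]
    print("trace satisfies invariant:", parallel_monitor(trace))
```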
134

Runtime Verification with Controllable Time Predictability and Memory Utilization

Kumar, Deepak 20 September 2013 (has links)
The goal of runtime verification is to inspect the well-being of a system by employing a monitor during its execution. Such monitoring imposes a cost in terms of resource utilization. Memory usage and the predictability of monitor invocations are the key indicators of the quality of a monitoring solution, especially in the context of embedded systems. In this work, we propose a novel control-theoretic approach for coordinating time predictability and memory utilization in runtime monitoring of real-time embedded systems. In particular, we design a PID controller and four fuzzy controllers with different optimization control objectives. Our approach controls the frequency of monitor invocations by incorporating a bounded memory buffer that stores the events to be monitored. The controllers attempt to improve time predictability and maximize memory utilization while ensuring the soundness of the monitor. Unlike existing approaches based on static analysis, our approach is scalable and well-suited for reactive systems that must react to stimuli from the environment in a timely fashion. Our experiments on two case studies (a laser beam stabilizer for aircraft tracking, and a Bluetooth mobile payment system) demonstrate the advantages of using controllers to achieve low variation in the frequency of monitor invocations while maintaining maximum memory utilization in highly non-linear environments. In addition, the thesis presents a brief overview of our preceding work on runtime verification.
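A textbook discrete PID loop illustrates how buffer utilization can drive the monitor-invocation period. The gains, setpoint, and update rule below are illustrative assumptions, not the controllers designed in the thesis:

```python
class PID:
    """Textbook discrete PID controller (illustrative tuning only)."""
    def __init__(self, kp: float, ki: float, kd: float, setpoint: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured: float, dt: float) -> float:
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive the monitor-invocation period from buffer utilization: if the
# event buffer fills beyond the setpoint, the controller output turns
# negative and we shorten the period (invoke the monitor sooner).
pid = PID(kp=0.5, ki=0.1, kd=0.05, setpoint=0.9)   # target 90% utilization
period_ms = 10.0
utilization = 0.6                                   # measured each cycle
adjustment = pid.update(utilization, dt=0.01)
period_ms = max(1.0, period_ms + adjustment)
print(f"next monitor invocation in {period_ms:.2f} ms")
```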
135

Formal Verification of Instruction Dependencies in Microprocessors

Shehata, Hazem January 2011 (has links)
In microprocessors, efficient utilization of the execution units is a key factor in improving performance. However, maintaining an uninterrupted flow of instructions is a challenge due to the data and control dependencies between instructions of a program. Modern microprocessors employ aggressive optimizations to keep their execution units busy without violating inter-instruction dependencies. Such complex optimizations may cause subtle implementation flaws that are hard to detect using conventional simulation-based verification techniques. Formal verification is known for its ability to discover design flaws that go undetected by conventional verification techniques. However, formal verification brings two major challenges. First, the correctness of the implementation needs to be defined formally. Second, formal verification is often hard to apply at the scale of realistic implementations. In this thesis, we present a formal verification strategy to guarantee that a microprocessor implementation preserves both data and control dependencies among instructions. Throughout our strategy, we address the two major challenges associated with formal verification: correctness and scalability. We address the correctness challenge by specifying our correctness criteria in the context of generic pipelines. Unlike conventional pipeline hazard rules, we make no distinction between the data and control aspects. Instead, we describe the relationship between a producer instruction and a consumer instruction such that both instructions can speculatively read their source operands, speculatively write their results, and go out of program order during execution. In addition to supporting branch and value prediction, our correctness criteria allow the implementation to discard (squash) or replay instructions while they are being executed. We address the scalability challenge in three ways: abstraction, decomposition, and induction. First, we state our inter-instruction dependency correctness criteria in terms of read and write operations without referring to data values; consequently, our criteria can be verified for implementations with abstract datapaths. Second, we decompose our correctness criteria into a set of smaller obligations that are easier to verify. All of these obligations can be expressed as properties within the Syntactically-Safe fragment of Linear Temporal Logic (SSLTL). Third, we introduce a technique to verify SSLTL properties by induction, and prove its soundness and completeness. To demonstrate our overall strategy, we verified a term-level model of an out-of-order speculative processor. The processor model implements register renaming using a P6-style reorder buffer and branch prediction with a hybrid (discard-replay) recovery mechanism. The verification obligations (expressed in SSLTL) are checked using a tool implementing our inductive technique. Our tool, named Tahrir, is built on top of a generic interface to SMT solvers and can be used more generally for verifying SSLTL properties of infinite-state systems.
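The inductive idea behind discharging such obligations can be sketched on a finite-state toy: prove the base case on initial states, then show the property is preserved by every transition. Tahrir does this for SSLTL over infinite-state systems via SMT solvers; the sketch below is a drastically simplified finite-state analogue with hypothetical names:

```python
def holds_by_induction(states, init, trans, prop) -> bool:
    """Simple (depth-1) induction for a state invariant.

    base: every initial state satisfies prop
    step: prop is preserved across every transition
    Sound but incomplete: False only means induction at depth 1 failed,
    not that the property is actually violated.
    """
    base = all(prop(s) for s in init)
    step = all(prop(t) for s in states if prop(s) for t in trans(s))
    return base and step

# Toy example: a mod-4 counter never reaches the value 5.
states = range(4)
init = [0]
trans = lambda s: [(s + 1) % 4]
print(holds_by_induction(states, init, trans, lambda s: s != 5))  # True
```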
136

Biometrics in practice: The security technology of tomorrow's airports

Salavati, Sadaf January 2006 (has links)
Biometric technology is an authentication method that has been in use for several centuries. It offers several different techniques in which unique human characteristics are used to identify and verify individuals. Biometrics is at a stage of rapid development, and many who know the field well believe it is the technology that will take over from the security systems used today. Since the terror attacks against the USA in 2001, the USA has demanded that the 45 countries whose citizens currently do not need a visa to enter the United States introduce passports containing biometric information by the end of 2006. The UN's civil aviation body, for its part, holds that all countries in the world should use passports with biometric data. The biometric data in the passports will be stored on a chip and consists primarily of an image of the holder's face in an encrypted JPEG format, possibly complemented with fingerprints or even signature recognition. Sweden currently issues passports that contain biometric data, but so far no machines that can read these passports have been purchased. Ulf Hägglund at Precise Biometrics AB believes that once biometric passports come into real use, biometric techniques will be used more extensively at airports. Even though several Swedish airports consider today's security technology sufficient, biometrics can increase security while simplifying many security processes. Forgery can be reduced while one can be sure that the passenger who checked in is the same passenger who boards the airplane, and employee security checks can be fully automated. Generally it can be said that "biometrics is a decent way to increase security in different areas".
137

Blind Signature Scheme with Anonymous Verification

Huang, Ren-Shang 01 September 2010 (has links)
In an anonymous credential system, when a credential is shown for verification, no one can identify the owner of the credential or link any two credentials to each other. This unlinkability is the crucial feature of any anonymous credential system. In 2002, Jan Camenisch and Anna Lysyanskaya proposed a signature scheme (the CL signature) that allows users to demonstrate their credentials without revealing their identity information. However, the CL signature is built from many zero-knowledge proof techniques, which makes it inefficient. Such heavy computation requirements may limit the scope to which CL signatures can be applied. In this thesis, we propose a new blind signature scheme based on ElGamal signatures and design an anonymous verification procedure that is more efficient than the CL signature scheme. Finally, we also implement our proposed protocols.
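To illustrate the blinding idea itself — though not the thesis's ElGamal-based construction, which differs — here is the classic Chaum RSA blind signature, in which the signer signs a blinded value without ever learning the message. Key sizes are toy values for readability:

```python
import hashlib
import secrets
from math import gcd

# Tiny demo RSA key (insecure sizes; real keys are 2048+ bits).
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# Requester blinds the message hash with a random r coprime to n.
m = h(b"credential request")
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# Signer signs the blinded value without learning m.
blind_sig = pow(blinded, d, n)

# Requester unblinds: (m^d * r) * r^-1 = m^d, a valid signature on m.
sig = (blind_sig * pow(r, -1, n)) % n
print("signature verifies:", pow(sig, e, n) == m)
```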
138

SoC Integration and Verification of a 3D Graphics SoC

Huang, Tzu-Ming 26 July 2011 (has links)
As consumer demand for electronic equipment grows and system-integration capabilities mature, the complexity of chip designs has increased significantly. An accompanying issue is how to verify such large-scale chips efficiently and accurately. In this thesis, we take a 3D graphics SoC as a case study and investigate its various aspects: architecture design, system integration, verification methods, and the verification platform. The thesis proposes a verification methodology with unified test patterns from the system-modeling level down to the test-chip level; by raising the abstraction level of the test patterns, it avoids generating them manually. This not only eliminates manual editing effort and reduces the possibility of error, but also allows developers to focus on algorithm design and functional verification. In addition, pre-described test scenarios (test benches) automate the verification and comparison methodology, increasing the efficiency of regression tests and making it easier to meet time-to-market constraints. To demonstrate our chip on a new prototyping board, we not only modified the channel of the 3DG chip but also developed a high-performance bus bridge to keep data exchange efficient between the two system buses, one on the platform board and one in our SoC. We also shortened the longest path of the overall system, raising the system clock rate from 82.6 MHz to 120.4 MHz.
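The unified-test-pattern idea — one set of stimuli driving both the system model and the lower-level implementation, with automated comparison — can be sketched as follows. The two models and the patterns are stand-ins, not the 3D graphics SoC's actual flow:

```python
# Sketch of unified-test-pattern regression: the same stimuli drive a
# golden (system-model) reference and a lower-level implementation model,
# and their outputs are compared automatically. Names are hypothetical.

def golden_model(x: int) -> int:
    return (x * 3 + 1) & 0xFFFF          # stand-in for the system model

def impl_model(x: int) -> int:
    return ((x << 1) + x + 1) & 0xFFFF   # stand-in for the RTL/test chip

def regression(patterns) -> bool:
    mismatches = [(x, golden_model(x), impl_model(x))
                  for x in patterns if golden_model(x) != impl_model(x)]
    print(f"{len(list(patterns))} patterns, {len(mismatches)} mismatches")
    return not mismatches

regression(range(1000))
```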
139

P2VSim: A Simulation and Visualization Tool for the P2V Compiler

Almeida, Oscar May 2009 (has links)
The Property Specification Language (PSL) is an IEEE standard which allows developers to specify precise behavioral properties of hardware designs. PSL assertions can be embedded within code written in hardware description languages (HDLs) such as Verilog to monitor signals of interest. Debugging simulations at the register transfer level (RTL) is often required to verify the functionality of a design before synthesis. Traditional methods of RTL debugging can help locate failures, but do not necessarily help in discovering the reasons for those failures. The P2VSim tool can combine multiple Verilog signals not only instantaneously, but also across multiple clock cycles, producing a graphical display of the state of active PSL assertions in a given RTL simulation. When using the P2VSim tool, users write PSL assertions directly into their Verilog source files. After the tool searches for and loads the embedded assertions, execution trace monitors for the relevant Verilog signals are dynamically generated and written back into the Verilog source code. P2VSim then invokes an RTL simulator, ModelSim, to generate a simulation execution trace, requiring that the designer already has a hardware or software testbench in place. Next, the input PSL assertions are parsed into time intervals that have logical and temporal properties. These intervals are displayed graphically when PSL property checking is performed. Finally, the user can step through the simulation one cycle at a time, while the tool applies the simulation execution trace to the instantiated time intervals, performing PSL property checking at each clock cycle. From this, the user can witness the exact clock cycles when PSL assertions are satisfied or violated, along with the causes of such results.
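The cycle-stepping interval check can be illustrated for a single PSL-like obligation such as "whenever req rises, ack must follow within two cycles". This sketch is a toy analogue of P2VSim's checking loop; the signal names and window are assumptions:

```python
def check(trace, window: int = 2) -> bool:
    """Step through the trace one cycle at a time, tracking open intervals."""
    pending = []        # cycles whose req is still awaiting an ack
    ok = True
    for cycle, signals in enumerate(trace):
        if signals["ack"]:
            pending.clear()            # all open obligations discharged
        for c in [c for c in pending if cycle - c >= window]:
            print(f"cycle {cycle}: VIOLATED (req at cycle {c} never acked)")
            ok = False
        pending = [c for c in pending if cycle - c < window]
        if signals["req"]:
            pending.append(cycle)      # a new obligation opens this cycle
    if pending:
        print(f"trace ended with {len(pending)} pending obligation(s)")
    return ok and not pending

trace = [
    {"req": 1, "ack": 0},   # cycle 0: obligation opens
    {"req": 0, "ack": 0},   # cycle 1: still within the window
    {"req": 0, "ack": 1},   # cycle 2: discharged in time
]
print("assertion holds:", check(trace))
```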
140

Cost-effective Microarchitecture Optimization for ARM7TDMI Microprocessor

Hung, Yu-Liang 24 August 2000 (has links)
In this paper, we present how we optimized our ARM7TDMI instruction-set-compatible microprocessor. The ARM7TDMI is a 32-bit microprocessor developed by ARM Ltd., used in embedded applications such as mobile phones, pagers, and PDAs. The ARM7 family owes its success to its combination of low power, low cost, and high performance. However, as applications become more complex and integrate more and more functionality, the processor is required to provide ever more performance. We tune the hardware in simple ways, without adding complex hardware, to optimize performance. We use a synthesis tool to synthesize our RTL design and analyze its timing to find the critical path of the microprocessor, and we describe how to optimize this critical path to increase performance.
