121 |
Hierarchical fingerprint verification / Yager, Neil Gordon, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2006
Fingerprints have been an invaluable tool for law enforcement and forensics for over a century, motivating research into automated fingerprint-based identification in the early 1960s. More recently, fingerprints have found an application in the emerging industry of biometric systems. Biometrics is the automatic identification of an individual based on physiological or behavioral characteristics. Due to its security-related applications and the current world political climate, biometrics is presently the subject of intense research by private and academic institutions. Fingerprints are emerging as the most common and trusted biometric for personal identification. However, despite decades of intense research, there are still significant challenges for the developers of automated fingerprint verification systems. This thesis includes an examination of all major stages of the fingerprint verification process, with contributions made at each step. The primary focus is on fingerprint registration: the challenging problem of aligning two prints in order to compare their corresponding features for verification. A hierarchical approach is proposed consisting of three stages, each of which employs novel features and techniques for alignment. Experimental results show that the hierarchical approach is robust and outperforms competing state-of-the-art registration methods from the literature. However, despite its power, it has limitations, as most algorithms do. Therefore, a novel method of information fusion at the registration level has been developed. The technique dynamically selects registration parameters from a set of competing algorithms using a statistical framework, allowing the relative advantages of different approaches to be exploited. The results show a significant improvement in alignment accuracy for a wide variety of fingerprint databases. Given a robust alignment of two fingerprints, it still remains to be verified whether or not they originate from the same finger. This is a non-trivial problem, and a close examination of the fingerprint features available for this task is conducted, with extensive experimental results.
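As a rough illustration of registration-level fusion, the sketch below (a hypothetical Python example, not the thesis's statistical framework) scores competing rigid-alignment hypotheses by how many minutiae they bring into correspondence and keeps the best one; the minutiae representation, tolerance, and scoring rule are all assumptions made for the example.

```python
import math

def apply_transform(points, dx, dy, theta):
    """Rigidly transform (x, y) minutiae by rotation theta and translation (dx, dy)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in points]

def match_score(query, template, tol=8.0):
    """Count query minutiae landing within `tol` pixels of some template minutia."""
    return sum(
        any(math.hypot(qx - tx, qy - ty) <= tol for tx, ty in template)
        for qx, qy in query
    )

def fuse_registrations(query, template, candidates):
    """Select the alignment (dx, dy, theta) that best explains the data, where
    `candidates` would be produced by several competing registration algorithms."""
    return max(candidates, key=lambda p: match_score(apply_transform(query, *p), template))
```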
|
122 |
Comparison of Two Planning Methods for Heterogeneity Correction in Planning Total Body Irradiation / Flower, Emily Elizabeth, not supplied, January 2006
Total body irradiation (TBI) is often used as part of the conditioning process prior to bone marrow transplants for diseases such as leukemia. By delivering radiation to the entire body, together with chemotherapy, tumour cells are killed and the patient is also immunosuppressed. This reduces the risk of disease relapse and increases the chances of a successful implant, respectively. TBI requires a large flat field of radiation to cover the entire body with a uniform dose. However, dose uniformity is a major challenge in TBI (AAPM Report 17). ICRU Report 50 recommends that the dose within the target volume remain within -5% to +7% of the prescribed dose. Whilst it is generally accepted that this is not achievable for TBI, a whole-body dose within ±10% of the prescription is normally considered clinically acceptable, unless critical structures are being shielded. TBI involves complex dosimetry due to the large source-to-treatment-axis distance (SAD), the dose uniformity and flatness required over the large field, bolus requirements, extra scatter from the bunker walls and floor, and large field overshoot. There is also a lack of specialised treatment planning systems for TBI planning at extended SAD.
TBI doses at Westmead Hospital are prescribed to midline. Corrections are made for variations in body contour and for tissue density heterogeneity in the lungs using bolus material to increase dose uniformity along the midline. Computed tomography (CT) data are imported into a treatment planning system. The CT gives information regarding tissue heterogeneity and patient contour, which the treatment planning system uses to determine the dose distribution. Using the dose ratio between plans calculated with and without heterogeneity correction, the effective chest width can be calculated. The effective chest width is then used for calculating the treatment monitor units and bolus requirements. In this project the tissue heterogeneity corrections from two different treatment planning systems are compared for calculating the effective chest width. The treatment planning systems used were Pinnacle™, a 3D system that uses a convolution method to correct for tissue heterogeneity and calculate dose, and Radplan™, a 2D system that corrects for tissue heterogeneity using a modified Batho method and calculates dose using the Bentley-Milan algorithm. Other possible differences between the treatment planning systems are also discussed.
An anthropomorphic phantom was modified during this project. The chest slices were replaced with Perspex™ slices that had different-sized cork and Perspex™ inserts to simulate different lung sizes. This allowed the effects of different lung sizes on the heterogeneity correction to be analysed. The phantom was CT scanned and the information used for the treatment plans. For each treatment planning system and each phantom, plans were made with and without heterogeneity corrections. For each phantom, the ratio between the plans from each system was used to calculate the effective chest width. The effective chest width was then used to calculate the number of monitor units to be delivered. The calculated dose per monitor unit at the extended TBI distance for the effective chest width from each planning system was then verified using thermoluminescent dosimeters (TLDs) in the unmodified phantom. The original phantom was used for the verification measurements as it had special slots for TLDs.
The isodose distributions produced by each planning system were then verified using measurements from Kodak EDR2 radiographic film in the anthropomorphic phantom at the isocentre, with further film measurements made at the extended TBI treatment SAD. It was found that only the width of the lungs made any significant difference to the heterogeneity correction for each treatment planning system. The height and depth of the lungs affect the dose at the calculation point through changes to the scattered radiation within the volume; however, since the dose from scattered radiation is only a fraction of that from the primary beam, the change in dose was not found to be significant. Because the calculation point was positioned in the middle of the lungs, the height and depth of the lungs did not affect the primary dose at the calculation point. The dose per monitor unit calculated using the heterogeneity correction from each treatment planning system varied by less than the accuracy of the TLD measurements. The isodose distributions measured by film showed reasonable agreement with those calculated by both treatment planning systems at the isocentre, and a more uniform distribution at the extended TBI treatment distance. The verification measurements showed that either treatment planning system could be used to calculate the heterogeneity correction, and hence the effective chest width, for TBI treatment planning.
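For intuition only, the following sketch shows one simple way an effective chest width could be derived from the dose ratio between heterogeneity-corrected and uncorrected plans, assuming parallel-opposed beams prescribed to midline and a single effective exponential attenuation coefficient; the coefficient value and the model itself are illustrative assumptions, not the method or data used in this work.

```python
import math

def effective_chest_width(physical_width_cm, dose_ratio, mu_per_cm=0.046):
    """Illustrative effective chest width for parallel-opposed beams prescribed to midline.

    dose_ratio = midline dose with heterogeneity correction / dose without it.
    Assumes each beam is attenuated exponentially over half the separation with an
    effective coefficient mu_per_cm (the value here is a placeholder, not clinical data).
    """
    # D_hetero / D_homo ~= exp(mu * (w_phys - w_eff) / 2)  =>  solve for w_eff
    return physical_width_cm - 2.0 * math.log(dose_ratio) / mu_per_cm

# Example: a 10% midline dose increase from low-density lung shrinks the effective width.
print(round(effective_chest_width(30.0, 1.10), 1))
```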
|
123 |
Advances in space and time efficient model checking of finite state systems / Parashkevov, Atanas Nikolaev, January 2002
Bibliography: leaves 211-220 / xviii, 220 leaves : charts ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / This thesis examines automated formal verification techniques and their associated space and time implementation complexity when applied to finite state concurrent systems. The focus is on concurrent systems expressed in the Communicating Sequential Processes (CSP) framework. An approach to the compilation of CSP system descriptions into boolean formulae in the form of Ordered Binary Decision Diagrams (OBDDs) is presented, further utilised by a basic algorithm that checks a refinement or equivalence relation between a pair of processes in any of the three CSP semantic models. The performance bottlenecks of the basic refinement checking algorithm are identified and addressed with the introduction of a number of novel techniques and algorithms. The algorithms described in this thesis are implemented in the Adelaide Refinement Checking Tool. / Thesis (Ph.D.)--University of Adelaide, Dept. of Computer Science, 2002
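As background, a traces-refinement check asks whether every trace of an implementation is also a trace of its specification. The explicit-state sketch below is a hypothetical example, not the OBDD-based symbolic algorithm developed in the thesis; it illustrates the idea with processes given as simple transition dictionaries.

```python
def traces_refines(spec, impl, spec_init, impl_init):
    """Check that impl traces-refines spec: every trace of impl is a trace of spec.
    Processes are dicts: state -> {event: next_state}.
    Returns a counterexample trace, or None if refinement holds."""
    # Explore impl states paired with the set of spec states reachable on the
    # same trace (a simple subset construction over the product).
    start = (impl_init, frozenset([spec_init]))
    seen, stack = {start}, [(start, [])]
    while stack:
        (i_state, s_set), trace = stack.pop()
        for event, i_next in impl[i_state].items():
            s_next = frozenset(spec[s][event] for s in s_set if event in spec[s])
            if not s_next:                        # the spec cannot perform this event
                return trace + [event]
            nxt = (i_next, s_next)
            if nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, trace + [event]))
    return None
```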
|
124 |
Model checking for ambients: from process algebras to semi-structured data [Model-checking pour les ambients : des algèbres de processus aux données semi-structurées] / Talbot, Jean-Marc, 9 December 2005
.
|
125 |
Receiver Performance Simulation: System Verification for GSM Receiver / Yuan, Shuai; Haddad, Marc Antony, January 2007
The purpose of this thesis is to build and then optimize a simulation environment for the GSM / EDGE / WCDMA receiver in the RF ASICs.
The system generally consists of two blocks: an Agilent Advanced Design System (ADS) controlled system core and the Simulation Environment System for Verification and Design (SEVED). The signal is generated by SEVED and directed into the system core, where the receiver under test is located. The signal output of the receiver is then directed back into SEVED for bit-error-rate calculations, so that the performance of the receiver can be evaluated.
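The bit-error-rate figure at the heart of this evaluation is simply the fraction of received bits that disagree with the transmitted reference. The snippet below is a minimal sketch of that calculation; the function name and interface are illustrative and not part of SEVED or ADS.

```python
def bit_error_rate(tx_bits, rx_bits):
    """Fraction of received bits that differ from the transmitted reference."""
    if len(tx_bits) != len(rx_bits):
        raise ValueError("bit streams must be aligned and of equal length")
    errors = sum(t != r for t, r in zip(tx_bits, rx_bits))
    return errors / len(tx_bits)

# Example: 2 errors in 8 bits -> BER of 0.25
print(bit_error_rate([0, 1, 1, 0, 1, 0, 0, 1], [0, 1, 0, 0, 1, 1, 0, 1]))
```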
|
126 |
Dynamically vs. empirically downscaled medium-range precipitation forecasts / Bürger, Gerd, January 2009
For three small, mountainous catchments in Germany, two medium-range forecast systems that predict precipitation up to five days in advance are compared. One system is composed of the global German Weather Service (DWD) model, GME, which is dynamically downscaled using the COSMO-EU regional model. The other system is an empirical (expanded) downscaling of the ECMWF model IFS. Forecasts are verified against multi-year daily observations by applying standard skill scores to events of specified intensity. All event classes are skillfully predicted by the empirical system for lead times of up to five days. For the available prediction range of one to two days, it is superior to the dynamical system.
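As an example of the kind of standard skill score applied to events of specified intensity, the sketch below computes the equitable threat score for threshold-exceedance events from paired forecast and observation series; the choice of this particular score and the interface are assumptions for illustration, not necessarily those used in the study.

```python
def equitable_threat_score(forecast, observed, threshold):
    """Equitable threat score (Gilbert skill score) for events 'precipitation >= threshold'.

    forecast, observed: equal-length sequences of daily precipitation amounts.
    """
    f = [v >= threshold for v in forecast]
    o = [v >= threshold for v in observed]
    hits = sum(fi and oi for fi, oi in zip(f, o))
    misses = sum((not fi) and oi for fi, oi in zip(f, o))
    false_alarms = sum(fi and (not oi) for fi, oi in zip(f, o))
    n = len(f)
    hits_random = (hits + misses) * (hits + false_alarms) / n   # hits expected by chance
    denom = hits + misses + false_alarms - hits_random
    return (hits - hits_random) / denom if denom else 0.0
```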
|
127 |
SAT-based Verification for Analog and Mixed-signal Circuits / Deng, Yue, May 2012
The wide application of analog and mixed-signal (AMS) designs makes the verification of AMS circuits an important task. However, verification of AMS circuits remains a significant challenge, even though verification techniques for digital circuit design have been successfully applied in the semiconductor industry.
In this thesis, we propose two techniques for AMS verification, targeting DC and transient verification respectively. The proposed techniques leverage a combination of circuit modeling, satisfiability (SAT) solving, and circuit simulation.
For DC verification, we first build bounded device models for transistors. The bounded models are conservative approximations to the accurate BSIM3/4 models. Then we formulate a circuit verification problem by gathering the circuit's KCL/KVL equations and the I-V characteristics constrained by the bounded models. A nonlinear SAT solver is then recursively applied to the problem formula to locate a candidate region that is guaranteed to enclose the actual DC equilibrium of the original circuit. Finally, a refinement technique is applied to reduce the size of the candidate region to a desired resolution. To demonstrate the application of the proposed DC verification technique, we apply it to locate the DC equilibrium points of a set of ring oscillators. The experimental results show that the proposed DC verification technique is efficient in terms of runtime.
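A minimal sketch of the locate-and-refine idea is given below: a conservative feasibility oracle, standing in for the nonlinear SAT query over the KCL/KVL constraints and bounded device models, prunes boxes that cannot contain a solution, and the remaining candidate region is bisected until it reaches the desired resolution. The interface and the bisection strategy are assumptions made for the example, not the thesis's algorithm.

```python
def refine_candidate_box(box, may_contain_solution, resolution):
    """Recursively bisect an n-dimensional box until every remaining candidate box
    is narrower than `resolution` in its widest dimension.

    box: list of (lo, hi) intervals, one per node voltage.
    may_contain_solution(box) -> bool: a conservative check (it may return True for
    boxes without a solution, but never False for a box that contains one).
    """
    if not may_contain_solution(box):
        return []
    widths = [hi - lo for lo, hi in box]
    widest = max(range(len(box)), key=lambda i: widths[i])
    if widths[widest] <= resolution:
        return [box]                     # small enough: report as a candidate region
    lo, hi = box[widest]
    mid = (lo + hi) / 2.0
    left = box[:widest] + [(lo, mid)] + box[widest + 1:]
    right = box[:widest] + [(mid, hi)] + box[widest + 1:]
    return (refine_candidate_box(left, may_contain_solution, resolution)
            + refine_candidate_box(right, may_contain_solution, resolution))
```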
For transient verification, we perform reachability analysis to verify the dynamic properties of a circuit. Our method combines circuit simulation and SAT to take advantage of the efficiency of simulation and the soundness of SAT. The novelty of the proposed transient verification lies in the fact that a significant part of the reachable state space is discovered via fast simulation, while full coverage of the reachable state space is guaranteed by invoking a few SAT runs. Furthermore, a box-merging algorithm is presented to efficiently represent the reachable state space using grid boxes. The proposed technique is used to verify the start-up condition of a tunnel diode oscillator and the phase locking of a phase-locked loop (PLL). The experimental results demonstrate that the proposed transient verification technique can perform reachability analysis for reasonably complex circuits over a large number of time steps.
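The following sketch illustrates one simple way grid boxes covering the reachable state space could be merged: axis-aligned boxes that coincide in all dimensions but one, and are adjacent in that dimension, are greedily combined. The greedy pairwise strategy is an assumption for illustration and not necessarily the box-merging algorithm proposed here.

```python
def merge_grid_boxes(boxes):
    """Greedily merge axis-aligned grid boxes that share a full face.

    Each box is a tuple of (lo, hi) intervals, one per dimension. Two boxes merge
    if they are identical in all dimensions but one and adjacent in that one.
    """
    boxes = [tuple(b) for b in boxes]
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                a, b = boxes[i], boxes[j]
                diff = [d for d in range(len(a)) if a[d] != b[d]]
                if len(diff) == 1:
                    d = diff[0]
                    if a[d][1] == b[d][0] or b[d][1] == a[d][0]:
                        lo = min(a[d][0], b[d][0])
                        hi = max(a[d][1], b[d][1])
                        boxes[i] = a[:d] + ((lo, hi),) + a[d + 1:]
                        del boxes[j]
                        merged = True
                        break
            if merged:
                break
    return boxes

# Example: two unit boxes sharing a face merge into one 2x1 box.
print(merge_grid_boxes([((0, 1), (0, 1)), ((1, 2), (0, 1))]))
```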
|
128 |
Büchi Automata as Specifications for Reactive Systems / Fogarty, Seth, 5 June 2013
Computation is employed to incredible success in a massive variety of applications, and yet it is difficult to formally state what our computations are. Finding a way to model computations is not only valuable to understanding them, but central to automatic manipulations and formal verification. Often the most interesting computations are not functions with inputs and outputs, but ongoing systems that continuously react to user input. In the automata-theoretic approach, computations are modeled as words, a sequence of letters representing a trace of a computation. Each automaton accepts a set of words, called its language. To model reactive computation, we use Büchi automata: automata that operate over infinite words. Although the computations we are modeling are not infinite, they are unbounded, and we are interested in their ongoing properties. For thirty years, Büchi automata have been recognized as the right model for reactive computations.
In order to formally verify computations, however, we must also be able to create specifications that embody the properties we want to prove these systems possess. To date, challenging algorithmic problems have prevented Büchi automata from being used as specifications. I address two challenges to the use of Büchi automata as specifications in formal verification. The first, complementation, is required to check program adherence to a specification. The second, determination, is used in domains such as synthesis, probabilistic verification, and module checking. I present both an empirical analysis of existing complementation constructions and a new theoretical contribution that provides more deterministic complementation and a full determination construction.
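For context on how such specifications are used: once a specification automaton is complemented, checking program adherence reduces to a Büchi emptiness test on the product of the program with the complemented specification. The sketch below is a simple (quadratic) emptiness check that looks for a reachable accepting state lying on a cycle; it is illustrative background only, not the complementation or determination constructions contributed by the thesis.

```python
def buchi_nonempty(initial, successors, accepting):
    """Does the Büchi automaton accept some infinite word?  Equivalently: is there
    a reachable accepting state that lies on a cycle (an accepting lasso)?

    successors(state) -> iterable of next states; accepting(state) -> bool.
    """
    # Pass 1: collect all states reachable from the initial state.
    reachable, stack = set(), [initial]
    while stack:
        s = stack.pop()
        if s in reachable:
            continue
        reachable.add(s)
        stack.extend(successors(s))

    def on_cycle(target):
        """Depth-first search for a non-empty path from `target` back to itself."""
        visited, stack = set(), [target]
        while stack:
            s = stack.pop()
            for nxt in successors(s):
                if nxt == target:
                    return True
                if nxt not in visited:
                    visited.add(nxt)
                    stack.append(nxt)
        return False

    # Pass 2: look for an accepting state on a cycle.
    return any(accepting(s) and on_cycle(s) for s in reachable)
```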
|
130 |
Abstraction for Verification and Refutation in Model Checking / Wei, Ou, 13 April 2010
Model checking is an automated technique for deciding whether a computer program satisfies a temporal property. Abstraction, which approximates a large (or infinite) program by a smaller abstract model and lifts the model-checking result over the abstract model back to the original program, is the key to scaling model checking to industrial-sized problems. In this thesis, we study abstraction in model checking based on exact-approximation, which allows for verification and refutation of temporal properties within the same abstraction framework. Our work in this thesis is driven by problems from both practical and theoretical aspects of exact-approximation.
We first address challenges of effectively applying symmetry reduction to virtually symmetric programs. Symmetry reduction can be seen as a strong exact-approximation technique, where a property holds on the original program if and only if it holds on the abstract model. In this thesis, we develop an efficient procedure for identifying virtual symmetry in programs. We also explore techniques for combining virtual symmetry with symbolic model checking.
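To illustrate what symmetry reduction buys, the sketch below explores a state space up to full symmetry among identical processes by mapping every state to a canonical orbit representative (sorting the tuple of local states). Full symmetry is a simpler setting than the virtual symmetry identified in this work, and the toy successor relation is an assumption for the example.

```python
def canonical(state):
    """Canonical representative of a state's orbit under full process symmetry:
    local states of identical processes are interchangeable, so sort them."""
    return tuple(sorted(state))

def reachable_symmetric(initial, successors):
    """Explore the quotient state space: states are stored up to symmetry."""
    seen = {canonical(initial)}
    frontier = [canonical(initial)]
    while frontier:
        state = frontier.pop()
        for nxt in successors(state):
            rep = canonical(nxt)
            if rep not in seen:
                seen.add(rep)
                frontier.append(rep)
    return seen

# Example: 3 identical processes, each with local state 0/1; one may flip per step.
def succ(state):
    for i, v in enumerate(state):
        yield state[:i] + (1 - v,) + state[i + 1:]

print(len(reachable_symmetric((0, 0, 0), succ)))   # 4 orbit representatives, not 8 states
```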
Our second study investigates model checking of \emph{recursive} programs.
Previously, we developed a software model checker for non-recursive programs based on exact-approximating predicate abstraction. In this thesis, we extend it to reachability and non-termination analysis of recursive programs. We propose a new program semantics that effectively removes call stacks while preserving reachability and non-termination. By doing this, we reduce recursive analysis to a non-recursive one, which allows us to reuse the existing
abstract analysis in our software model checker to handle recursive programs.
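One standard way to analyze recursive programs without tracking call stacks is to compute procedure summaries, i.e., entry/exit state pairs, to a fixpoint so that recursive calls can reuse the summaries computed so far. The sketch below shows that general idea with a deliberately toy interface; it is an assumption-laden illustration, not the stack-free semantics or the abstract analysis proposed in the thesis.

```python
def compute_summaries(procedures, entry_states):
    """Compute, for each procedure, the set of (entry, exit) state pairs it can
    realize, iterating to a fixpoint so that recursive calls reuse summaries.

    procedures: name -> body(state, call) returning an iterable of exit states,
    where call(callee, state) yields exits of the callee using current summaries.
    """
    summaries = {name: set() for name in procedures}

    def call(callee, state):
        return [out for (entry, out) in summaries[callee] if entry == state]

    changed = True
    while changed:
        changed = False
        for name, body in procedures.items():
            for entry in entry_states[name]:
                for out in body(entry, call):
                    if (entry, out) not in summaries[name]:
                        summaries[name].add((entry, out))
                        changed = True
    return summaries

# Toy example: a procedure that recurses until the state value reaches 3.
def p(state, call):
    return [state] if state >= 3 else call("p", state + 1)

print(compute_summaries({"p": p}, {"p": {0, 1, 2, 3}}))
```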
A variety of partial transition systems have been proposed for construction of abstract models in exact-approximation. Our third study conducts a systematic analysis of them from both semantic and logical points of view. We analyze the connection between semantic and logical consistency of partial transition systems, compare the expressive power of different families of these formalisms, and discuss the precision of model checking over them.
Abstraction based on exact-approximation uses a uniform framework to prove the correctness of computer programs and to detect errors in them. Our results in this thesis provide a better understanding of this approach and extend its applicability in practice.
|