1.
METHODS TO MINIMIZE LINEAR DEPENDENCIES IN TWO-DIMENSIONAL SCAN DESIGNS. Kakade, Jayawant Shridhar, 01 January 2008 (PDF)
Two-dimensional scan design is an effective BIST architecture that uses multiple scan chains in parallel to test the Circuit Under Test (CUT). Linear Finite State Machines (LFSMs) are often used as on-board Pseudo Random Pattern Generators (PRPGs) in two-dimensional scan designs. However, linear dependencies present in the LFSM-generated test-bit sequences adversely affect the resultant fault coverage. In this work, we present methods that improve the resultant fault coverage in two-dimensional scan designs by minimizing these linear dependencies. Currently, the channel-separation metric and the matrix-based metric are used to estimate linear dependencies in a CUT. When the underlying sub-circuit (cone) structure of a CUT is available, the matrix-based metric can be used more effectively. First, we present two methods that use the matrix-based metric and minimize the overall linear dependencies in a CUT by explicitly minimizing linear dependencies in the largest possible number of its underlying cones. The first method minimizes linear dependencies in a CUT through the selection of an appropriate LFSM structure. The second method synthesizes a phase shifter for a specified LFSM structure such that the overall linear dependencies in the CUT are minimized. However, the underlying structure of a CUT is not always available, and in such cases the channel-separation metric can be used more effectively. Channel separation is an empirical measure of linear dependencies, and a large channel separation is imposed, in an ad hoc manner, between successive scan chains of a two-dimensional scan design in order to minimize them. Present techniques use LFSMs with additional phase shifters (LFSM/PS) as PRPGs to obtain the desired levels of channel separation. We demonstrate that Generalized LFSRs (GLFSRs) are a better choice of PRPG than LFSM/PS, achieving the desired levels of channel separation at a lower hardware cost. Experimental results corroborate the effectiveness of the proposed methods through increased resultant fault coverage in two-dimensional scan designs.
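To make the channel-separation idea concrete, the following is a minimal illustrative sketch, not taken from the thesis, of a Fibonacci LFSR acting as a PRPG whose single bit stream is distributed across parallel scan chains with unused guard bits between successive chains. The tap positions, seed, chain count, and separation value are hypothetical; selecting the LFSM structure and the separation is precisely the kind of design choice the abstract describes.

```python
# A minimal illustrative sketch (hypothetical parameters): a Fibonacci LFSR
# used as a PRPG, with its bit stream split across parallel scan chains.

def lfsr_bits(state, taps, n_bits):
    """Generate n_bits from a Fibonacci LFSR; state is a list of 0/1 values."""
    bits = []
    for _ in range(n_bits):
        bits.append(state[-1])            # output the last stage
        feedback = 0
        for t in taps:                    # XOR of the tapped stages
            feedback ^= state[t]
        state = [feedback] + state[:-1]   # shift right, insert feedback bit
    return bits

def fill_scan_chains(seed, taps, num_chains, chain_len, separation):
    """Load num_chains scan chains from one LFSM-generated stream.
    Successive chains take disjoint segments with `separation` unused guard
    bits between them, a crude stand-in for the channel-separation heuristic."""
    total = num_chains * chain_len + (num_chains - 1) * separation
    stream = lfsr_bits(list(seed), taps, total)
    step = chain_len + separation
    return [stream[c * step: c * step + chain_len] for c in range(num_chains)]

# Example: an 8-stage LFSR, 4 scan chains of length 16, separation of 8 bits.
chains = fill_scan_chains(seed=[1, 0, 0, 1, 0, 1, 1, 0],
                          taps=[0, 2, 3, 7],
                          num_chains=4, chain_len=16, separation=8)
for i, chain in enumerate(chains):
    print(f"chain {i}: {''.join(map(str, chain))}")
```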
2.
Fundamental numerical schemes for parameter estimation in computer vision. Scoleri, Tony, January 2008
An important research area in computer vision is parameter estimation. Given a mathematical model and a sample of image measurement data, key parameters are sought that encapsulate geometric properties of a relevant entity. An optimisation problem is often formulated in order to find these parameters. This thesis presents an elaboration of fundamental numerical algorithms for estimating parameters of multi-objective models of importance in computer vision applications. The work examines ways to solve unconstrained and constrained minimisation problems from the viewpoints of theory, computational methods, and numerical performance. The research starts by considering a particular form of multi-equation constraint function that characterises a wide class of unconstrained optimisation tasks. Increasingly sophisticated cost functions are developed within a consistent framework, ultimately resulting in the creation of a new iterative estimation method. The scheme operates in a maximum likelihood setting and yields near-optimal estimates of the parameters. Salient features of the method are its simple update rules and fast convergence. Then, to accommodate models with functional dependencies, two variants of this initial algorithm are proposed. These methods are improved further by reshaping the objective function in a way that presents the original estimation problem in a reduced form. This procedure leads to a novel algorithm with enhanced stability and convergence properties. To extend the capacity of these schemes to deal with constrained optimisation problems, several a posteriori correction techniques are proposed to impose the so-called ancillary constraints. The work culminates in two methods which can tackle ill-conditioned constraint functions. The combination of the previous unconstrained methods with these post-hoc correction schemes provides an array of powerful constrained algorithms. The practicality and performance of the methods are evaluated on two specific applications: planar homography matrix computation and trifocal tensor estimation. In the case of fitting a homography to image data, only the unconstrained algorithms are necessary. For the problem of estimating a trifocal tensor, significant work is first done on expressing sets of usable constraints, especially the ancillary constraints that are critical to ensure that the computed object conforms to the underlying geometry. Here, the post-correction schemes must be incorporated in the computational mechanism. For both of these example problems, the performance of the unconstrained and constrained algorithms is compared to existing methods. Experiments reveal that the new methods match a state-of-the-art technique in accuracy but surpass it in execution speed. / Thesis (Ph.D.) - University of Adelaide, School of Mathematical Sciences, Discipline of Pure Mathematics, 2008
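As a point of reference for the homography application, the following is a minimal sketch of the standard unconstrained Direct Linear Transform (DLT) estimator, the classic baseline that iterative maximum-likelihood schemes of the kind described above refine. This is an assumed illustration, not the thesis's algorithm, and the point correspondences in the usage example are synthetic.

```python
# A minimal sketch of planar homography estimation via the DLT (not the
# thesis's method): solve A h = 0 for the stacked homography vector h.
import numpy as np

def estimate_homography_dlt(src, dst):
    """Estimate H such that dst ~ H @ src for 2D point correspondences.
    src, dst: (N, 2) arrays with N >= 4 non-degenerate correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic usage: points related by a known homography are recovered exactly.
H_true = np.array([[1.0, 0.1, 5.0], [0.05, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 80], [0, 80], [50, 40]], float)
src_h = np.hstack([src, np.ones((len(src), 1))])
dst_h = (H_true @ src_h.T).T
dst = dst_h[:, :2] / dst_h[:, 2:3]
print(np.round(estimate_homography_dlt(src, dst), 4))
```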