261. Stereo and Eye Movement. Geiger, Davi; Yuille, Alan. 01 January 1988.
We describe a method to solve the stereo correspondence problem using controlled eye (or camera) movements. These eye movements essentially supply additional image frames which can be used to constrain the stereo matching. Because the eye movements are small, traditional multiple-frame stereo methods will not work. We develop an alternative approach, using a systematic analysis to define a probability distribution for the errors. Our matching strategy then matches the most probable points first, thereby reducing the ambiguity for the remaining matches. We demonstrate the algorithm with several examples.
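
A minimal Python sketch of the most-probable-first matching strategy described above; the probability matrix, threshold, and function name are illustrative assumptions, since the thesis derives its match probabilities from the extra eye-movement frames.

import numpy as np

def match_most_probable_first(prob, threshold=0.5):
    # Greedy matching sketch: resolve the most probable left/right
    # pair first, then exclude both features so the remaining
    # matches face less ambiguity (hypothetical scores, not the
    # thesis's actual error distribution).
    prob = prob.copy()
    matches = []
    while prob.max() > threshold:
        i, j = np.unravel_index(np.argmax(prob), prob.shape)
        matches.append((i, j))
        prob[i, :] = 0.0  # left feature i is now taken
        prob[:, j] = 0.0  # right feature j is now taken
    return matches

# Example with three left and three right features:
p = np.array([[0.9, 0.2, 0.1],
              [0.3, 0.8, 0.4],
              [0.1, 0.3, 0.7]])
print(match_most_probable_first(p))  # [(0, 0), (1, 1), (2, 2)]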

262. Symbolic Error Analysis and Robot Planning. Brooks, Rodney A. 01 September 1982.
A program to control a robot manipulator for industrial assembly operations must take into account possible errors in parts placement and tolerances of the parts themselves. Previous approaches to this problem have been to (1) engineer the situation so that the errors are small or (2) let the programmer analyze the errors and take explicit account of them. This paper gives the mathematical underpinnings for building programs (plan checkers) to carry out approach (2) automatically. The plan checker uses a geometric CAD-type database to infer the effects of actions and the propagation of errors. It does this symbolically rather than numerically, so that computations can be reversed and desired resultant tolerances can be used to infer required initial tolerances or the necessity for sensing. The checker modifies plans to include sensing and adds constraints to the plan which ensure that it will succeed. An implemented system is described and results of its execution are presented. The plan checker could be used as part of an automatic planning system or as an aid to a human robot programmer.
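
The symbolic, reversible tolerance computation can be illustrated with a small interval-style sketch in Python; the class, numbers, and reverse rule here are illustrative assumptions, not the thesis's actual machinery.

class Toleranced:
    # A dimension as a nominal value plus a symmetric tolerance.
    def __init__(self, nominal, tol):
        self.nominal, self.tol = nominal, tol

    def __add__(self, other):
        # Stacking two toleranced parts: tolerances accumulate.
        return Toleranced(self.nominal + other.nominal,
                          self.tol + other.tol)

    def __repr__(self):
        return f"{self.nominal} +/- {self.tol}"

def required_input_tol(desired_output_tol, other_tol):
    # Reversed computation: how tight one part must be for the
    # stack to meet a desired resultant tolerance. A non-positive
    # result would signal that sensing is necessary.
    return desired_output_tol - other_tol

base = Toleranced(10.0, 0.05)
block = Toleranced(25.0, 0.10)
print(base + block)                    # 35.0 +/- 0.15 (forward)
print(required_input_tol(0.12, 0.05))  # 0.07 (backward)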

263. The harmless error rule. Boyle, Germain P. 1955.
Thesis (LL.M.), Judge Advocate General's School, 1955. "May 1955." Typescript. Includes bibliographical references. Also issued in microfiche.

264. Bayesian Methods for On-Line Gross Error Detection and Compensation. Gonzalez, Ruben.
Data reconciliation and gross error detection are traditional methods for detecting mass balance inconsistency in process instrument data. These methods use a static approach to statistical evaluation. This thesis is concerned with using an alternative statistical approach (Bayesian statistics) to detect mass balance inconsistency in real time.
The proposed dynamic Bayesian solution makes use of a state space process model which incorporates mass balance relationships, so that a governing set of mass balance variables can be estimated using a Kalman filter. Because the mass balances are incorporated, many model parameters are defined by first principles. Some parameters, however, namely the observation and state covariance matrices, must be estimated from process data before the dynamic Bayesian methods can be applied. This thesis makes use of Bayesian machine learning techniques to estimate these parameters, separating process disturbances from instrument measurement noise.
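
A compact sketch of the Kalman-filter machinery such a dynamic approach rests on; the model matrices and the chi-square flag below are illustrative assumptions, with the thesis's A and C coming from mass-balance first principles and Q, R learned from data.

import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    # Predict with the state-space model x' = A x + w, y = C x + v.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the new measurement y.
    S = C @ P_pred @ C.T + R             # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    innov = y - C @ x_pred
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new, innov, S

def gross_error_flag(innov, S, chi2_bound=9.21):
    # One plausible on-line test: flag a gross error when the
    # normalized innovation exceeds a chi-square bound
    # (9.21 is roughly the 99% point for 2 degrees of freedom;
    # the threshold choice is illustrative).
    return float(innov @ np.linalg.solve(S, innov)) > chi2_bound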

265. Quantum Convolutional Stabilizer Codes. Chinthamani, Neelima. 30 September 2004.
Quantum error correction codes were introduced as a means to protect quantum information from decoherence and operational errors. Based on their approach to error control, error correcting codes can be divided into two classes: block codes and convolutional codes. There has been significant progress in finding quantum block codes since they were first discovered in 1995; in contrast, quantum convolutional codes have remained largely uninvestigated. In this thesis, we develop the stabilizer formalism for quantum convolutional codes. We define distance properties of these codes and give a general method for constructing an encoding circuit from a set of generators of the stabilizer of a quantum convolutional stabilizer code. The resulting encoding circuit enables online encoding of the qubits, i.e., the encoder does not have to wait for the input transmission to end before starting the encoding process. We also develop the quantum analogue of the Viterbi algorithm. The quantum Viterbi algorithm (QVA) is a maximum likelihood error estimation algorithm whose complexity grows linearly with the number of encoded qubits. A variation, the Windowed QVA, is also discussed: using it, we can estimate the most likely error without waiting for the entire received sequence.
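
Since the QVA is the quantum analogue of the classical Viterbi dynamic program, a classical sketch conveys the core idea; the score arrays and shapes here are assumptions, with the quantum version running over error syndromes rather than channel observations.

import numpy as np

def viterbi(log_emit, log_trans, log_init):
    # Most likely state path given per-step log-likelihoods:
    #   log_emit:  (T, S) log-likelihood of each state at each step
    #   log_trans: (S, S) log transition scores
    #   log_init:  (S,)   log initial scores
    # The dynamic program costs O(T * S^2), i.e. linear in the
    # sequence length, matching the QVA's linear scaling.
    T, S = log_emit.shape
    score = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans    # rows: prev state
        back[t] = np.argmax(cand, axis=0)
        score = np.max(cand, axis=0) + log_emit[t]
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# A windowed variant (cf. the Windowed QVA) would backtrack after a
# fixed lag instead of waiting for the entire received sequence.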

266. Adaptive Algorithms for Deterministic and Stochastic Differential Equations. Moon, Kyoung-Sook. January 2003.
No description available.

267. Implementation of Pipeline Floating-Point CORDIC Processor and its Error Analysis and Applications. Yang, Chih-yu. 19 August 2007.
In this thesis, the traditional fixed-point CORDIC algorithm is extended to a floating-point version in order to calculate transcendental functions (such as sine/cosine, logarithms, and powering functions) with high accuracy over a large range. Based on different algorithm derivations, two different floating-point high-throughput pipelined CORDIC architectures are proposed. The first architecture adopts barrel shifters to implement the shift operations in each pipeline stage; the second uses a purely hardwired method for the shift operations. Another key contribution of this thesis is an analysis of the execution errors in the floating-point CORDIC architectures and a comparison with the results of pure software implementations. Finally, the thesis applies the floating-point CORDIC to realize the rotation-related operations required in 3D graphics applications.
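
For reference, a plain rotation-mode CORDIC iteration in Python shows the shift-and-add structure that each pipeline stage implements in hardware; this is the textbook fixed-point-style recurrence, not the thesis's floating-point datapath.

import math

def cordic_sin_cos(theta, iterations=32):
    # Rotation-mode CORDIC for |theta| <= pi/2. Each iteration
    # rotates by +/- arctan(2^-i) using only shifts and adds;
    # the accumulated gain is undone by the constant K at the end.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y * K, x * K   # (sin(theta), cos(theta))

s, c = cordic_sin_cos(math.pi / 6)
print(round(s, 6), round(c, 6))  # ~0.5, ~0.866025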

268. An Error Analysis Model for Adaptive Deformation Simulation. Kocak, Umut; Lundin Palmerius, Karljohan; Cooper, Matthew. January 2012.
With the widespread use of deformation simulations in medical applications, the realism of the force feedback has become an important issue. In order to reach real-time performance with sufficient realism, the approach of adaptivity, solving different parts of the system with different resolutions and refresh rates, has been commonly employed. The change in accuracy resulting from the use of adaptivity, however, has received scant attention in the deformation simulation field: presentation of error metrics is rare, with more focus given to real-time stability. We propose an abstract pipeline for performing error analysis for different types of deformation techniques which can take different simulation parameters into account. A case study is performed using the pipeline, and the various uses of the error estimation are discussed.
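
As a concrete instance of the kind of metric such a pipeline can produce, here is a hedged sketch comparing an adaptive run against a fully refined reference run; the metric choice and all names are assumptions, since the paper's pipeline is deliberately more general.

import numpy as np

def relative_deformation_error(u_adaptive, u_reference):
    # Relative L2 difference between nodal displacement fields
    # sampled at the same nodes; one of many possible metrics.
    diff = np.linalg.norm(u_adaptive - u_reference)
    return diff / max(np.linalg.norm(u_reference), 1e-12)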

269. Automated Error Assessment in Spherical Near-Field Antenna Measurements. Pelland, Patrick. 27 May 2011.
This thesis focuses on spherical near-field (SNF) antenna measurements and the methods developed or modified for this work to estimate the uncertainty in a particular far-field radiation pattern. We discuss the need for error assessment in SNF antenna measurements and propose a procedure that, in an automated fashion, determines the overall uncertainty in the measured far-field radiation pattern of a particular antenna. This overall uncertainty results from combining several known sources of error common to SNF measurements. The procedure consists of several standard SNF measurements, some newly developed tests, and several stages of post-processing of the measured data. The automated procedure is tested on four antennas of various operating frequencies and directivities to verify its functionality. Finally, total uncertainty data are presented in several formats.
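
One conventional way to combine independent SNF error terms into an overall figure is a root-sum-square of error-to-signal ratios; the sketch below assumes that approach and uses illustrative dB values, since the thesis's contribution is automating the measurement of the individual terms rather than this arithmetic.

import math

def combined_uncertainty_db(term_uncertainties_db):
    # Convert each per-term uncertainty (dB relative to a pattern
    # level) to a linear error/signal ratio, root-sum-square the
    # independent terms, and convert back to dB.
    ratios = [10.0 ** (u / 20.0) - 1.0 for u in term_uncertainties_db]
    total_ratio = math.sqrt(sum(r * r for r in ratios))
    return 20.0 * math.log10(1.0 + total_ratio)

print(combined_uncertainty_db([0.10, 0.05, 0.20]))  # ~0.23 dB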

270. Decentralized Coding in Unreliable Communication Networks. Lin, Yunfeng. 30 August 2010.
Many modern communication networks suffer significantly from the unreliability of their nodes and links. To deal with failures, centralized erasure codes have traditionally been used extensively to improve reliability by introducing data redundancy. In this thesis, we address several issues in implementing erasure codes in a decentralized way, such that coding operations are spread across multiple nodes. Our solutions are based on fountain codes and randomized network coding, whose simplicity and randomization properties make them amenable to decentralized implementation.
Our contributions consist of four parts. First, we propose a novel decentralized implementation of fountain codes utilizing random walks; our solution does not require node location information and needs only a small local routing table whose size is proportional to the number of neighbors. Second, we introduce priority random linear codes, which achieve partial data recovery by partitioning and encoding data into non-overlapping or overlapping subsets. Third, we present geometric random linear codes, which significantly decrease the communication cost of decoding by introducing modest data redundancy in a hierarchical fashion. Finally, we study the application of network coding in disruption tolerant networks. We show that network coding achieves shorter data transmission time than replication, especially when data buffers are limited. We also propose an efficient network-coding-based protocol which attains similar transmission delay to a protocol based on epidemic routing, but with much lower transmission costs.
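
A toy random-linear-coding sketch over GF(2) illustrates the decentralized flavor of these codes: any node can generate coded packets from random coefficient vectors without coordination, and a receiver decodes by Gaussian elimination once it holds a full-rank set of coefficient vectors. Packet contents and parameters below are illustrative, not the thesis's constructions.

import random

def rlnc_encode(packets, num_coded, seed=None):
    # Each coded packet is the XOR of a random subset of the
    # source packets; the coefficient vector travels with the
    # payload so downstream nodes can recombine or decode.
    rng = random.Random(seed)
    n = len(packets)
    length = len(packets[0])        # packets must be equal length
    coded = []
    for _ in range(num_coded):
        coeffs = [rng.randint(0, 1) for _ in range(n)]
        if not any(coeffs):          # skip the useless all-zero row
            coeffs[rng.randrange(n)] = 1
        payload = bytearray(length)
        for i, c in enumerate(coeffs):
            if c:
                for k in range(length):
                    payload[k] ^= packets[i][k]
        coded.append((coeffs, bytes(payload)))
    return coded

blocks = [b"alpha---", b"bravo---", b"charlie-"]
for coeffs, data in rlnc_encode(blocks, 4, seed=1):
    print(coeffs, data)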