131 |
Design techniques for low noise and high speed A/D converters
Gupta, Amit Kumar, 15 May 2009
Analog-to-digital (A/D) conversion is a process that bridges the real analog world to digital
signal processing. It takes a continuous-time, continuous-amplitude signal as its input and
outputs a discrete-time, discrete-amplitude signal. The resolution and sampling rate of an
A/D converter vary depending on the application. Recently, there has been a growing
demand for broadband (>1 MHz), high-resolution (>14 bits) A/D converters. Applications
that demand such converters include asymmetric digital subscriber line (ADSL) modems,
cellular systems, high accuracy instrumentation, and medical imaging systems. This thesis
suggests some design techniques for such high resolution and high sampling rate A/D
converters.
As A/D converter performance keeps increasing, it becomes increasingly difficult for the input driver to settle to the required accuracy within the sampling time. This is because of the larger sampling capacitor (for increased resolution) and the shorter sampling time (for higher speed). There is therefore an increasing trend to integrate the driver on-chip along with the A/D converter. The first contribution of this thesis is a new precharge scheme which enables integrating the input buffer with the A/D converter in a standard CMOS process. The buffer also uses a novel multi-path common-mode feedback scheme to stabilize the common-mode loop at high speeds.
Another major problem in achieving a very high Signal to Noise and Distortion Ratio (SNDR) is the capacitor mismatch in the Digital to Analog Converters (DACs) inherent in these A/D converters. The mismatch between the capacitors causes harmonic distortion, which may not be acceptable. An analysis of the Dynamic Element Matching (DEM) technique as applicable to broadband data converters is presented, and a novel second-order notch-DEM is introduced. In this thesis we present a method to calibrate the DAC. We also show that a combination of digital error correction and dynamic element matching is optimal in terms of test time or calibration time.
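For a concrete flavor of dynamic element matching, here is a minimal Python sketch of first-order DEM (data-weighted averaging) applied to a unit-capacitor DAC; the 8-element array, the ~1% mismatch, and the rotation-pointer selection are illustrative assumptions, not the second-order notch-DEM proposed in the thesis.

```python
import numpy as np

# Hypothetical 8-element unit-capacitor DAC with random mismatch
# (values are illustrative; not taken from the thesis).
rng = np.random.default_rng(0)
n_elements = 8
elements = 1.0 + 0.01 * rng.standard_normal(n_elements)  # ~1% mismatch

def dwa_dac(codes):
    """Convert integer codes (0..8) using data-weighted averaging:
    elements are selected in rotation so each mismatch error is used
    equally often, first-order shaping the mismatch noise."""
    pointer = 0
    out = []
    for code in codes:
        idx = (pointer + np.arange(code)) % n_elements
        out.append(elements[idx].sum())
        pointer = (pointer + code) % n_elements
    return np.array(out)

codes = rng.integers(0, n_elements + 1, size=4096)
error = dwa_dac(codes) - codes * elements.mean()
print("rms mismatch error:", error.std())
```

Because every element is used equally often on average, the mismatch error is pushed toward high frequencies rather than appearing as in-band harmonic distortion.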
Even when dynamic element matching techniques are used, it is still critical to obtain the best matching of unit elements possible in a given technology. The matching obtained may be limited either by random variations in the unit capacitor or by gradient effects. In this thesis we present layout techniques for capacitor arrays, along with matching results measured from a test chip.
Thus, this thesis presents various design techniques for high speed and low noise A/D converters. The techniques described are quite general and can be applied to most types of A/D converters.
|
132 |
Polarization analysis of elliptical fibers by the analytic mode matching method
Fu, Li-ping, 08 July 2005
Dielectric waveguides are important passive devices in optical communication systems. Circular-core fibers with slight ellipticity may lead to polarization-mode dispersion. A clear understanding of the propagation characteristics of the elliptical fibers thus becomes important for theoretical as well as practical purposes.
Although mesh-dependent methods such as the finite-element method or the finite-difference method can be used to study such a complex structure, their computational cost is very high. Strictly speaking, a mesh-based solution does not satisfy the Helmholtz equation exactly, and it typically provides only four to five significant digits. On the other hand, highly accurate solutions based on solving the Helmholtz equation in the elliptical coordinate system spend most of their computational resources on computing the values and the zeros of the modified Mathieu functions of the first kind.
Our method is based on a linear combination of the exact mode-field solutions of the dielectric optical fiber. We apply the analytical continuity principle to obtain the simultaneous equations for the expansion coefficient vector. Since each basis solution satisfies the Helmholtz equation exactly, the overall solutions are very accurate and provide more than six significant digits for fibers with small elliptical eccentricity. In addition, only Bessel functions are needed in our computation. Using cylindrical coordinates and symmetry, together with the analytical continuity method (ACM) principle, we simplify the problem of modal analysis of dielectric elliptical waveguides. This method can also be applied to some regular polygonal dielectric waveguides, such as the large-area VCSEL.
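As a hedged illustration of why needing only Bessel functions keeps the computation cheap, the following Python sketch solves the weakly guiding (LP01) characteristic equation of a circular step-index fiber; the fiber parameters and the scalar LP approximation are assumptions chosen for the example, not the full vector ACM formulation.

```python
import numpy as np
from scipy.special import jv, kv
from scipy.optimize import brentq

# Hypothetical step-index fiber (parameters are illustrative).
a, ncore, nclad, lam = 4e-6, 1.45, 1.444, 1.55e-6
k0 = 2 * np.pi / lam
V = k0 * a * np.sqrt(ncore**2 - nclad**2)  # normalized frequency

def lp01_eq(u):
    """LP01 dispersion relation: u J1(u)/J0(u) = w K1(w)/K0(w)."""
    w = np.sqrt(V**2 - u**2)
    return u * jv(1, u) / jv(0, u) - w * kv(1, w) / kv(0, w)

# Bracket the root below both V and the first zero of J0 (~2.405).
u = brentq(lp01_eq, 1e-6, min(V, 2.404) - 1e-6)
neff = np.sqrt(ncore**2 - (u / (k0 * a))**2)
print(f"V = {V:.3f}, u = {u:.4f}, effective index = {neff:.6f}")
```

Only `jv` and `kv` (ordinary and modified Bessel functions) are evaluated, whereas the elliptical-coordinate formulation would require the far more expensive modified Mathieu functions.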
|
133 |
none
Huang, Shih-ting, 28 June 2007
This paper extends Gale-Shapley's model and Balinski-Sonmez's model, respectively, to analyze the college admission problem and the student placement problem in the case of Taiwan. Under the assumption that time is not a critical dimension of this issue, it is argued that Taiwan's admission mechanism is in accordance with the criterion of the student-optimal stable mechanism with number restriction. Likewise, the outcome of Taiwan's admission mechanism exhibits features similar to those of the student-optimal stable matching with number restriction. With regard to Taiwan's student placement mechanism, however, it is demonstrated that inefficiency may prevail.
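For readers unfamiliar with the underlying mechanism, here is a minimal Python sketch of the student-proposing deferred acceptance algorithm of Gale and Shapley, which yields the student-optimal stable matching; the toy preferences and quotas are invented, and the "number restriction" feature analyzed in the paper is not modeled.

```python
# Student-proposing deferred acceptance (Gale-Shapley). The toy
# preference lists and quotas below are invented for illustration.
students = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["B", "A"]}
colleges = {"A": ["s1", "s3", "s2"], "B": ["s2", "s1", "s3"]}
quota = {"A": 1, "B": 2}

def deferred_acceptance(students, colleges, quota):
    rank = {c: {s: i for i, s in enumerate(p)} for c, p in colleges.items()}
    next_choice = {s: 0 for s in students}   # next college each student tries
    held = {c: [] for c in colleges}         # tentative admits per college
    free = list(students)
    while free:
        s = free.pop()
        if next_choice[s] >= len(students[s]):
            continue                         # s has exhausted its list
        c = students[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda x: rank[c][x])   # college's preference order
        if len(held[c]) > quota[c]:
            free.append(held[c].pop())           # reject least preferred
    return held

print(deferred_acceptance(students, colleges, quota))
```

The resulting matching is stable, and among all stable matchings it is the one every student weakly prefers, which is the sense in which the mechanism is "student optimal."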
|
134 |
Evaluating the Quality Payment Program in Taiwan for Treating Tuberculosis
Hsieh, Yu-Ting, 22 July 2007
none
|
135 |
Tax Competition, Spillovers, and Subsidies
Ogawa, Hikaru, 09 1900
No description available.
|
136 |
Rapid assessment of redevelopment potential in marginal oil fields, application to the Cut Bank field
Chavez Ballesteros, Luis Eladio, 17 February 2005
Quantifying infill potential in marginal oil fields often involves several challenges. These include highly heterogeneous reservoir quality both horizontally and vertically, incomplete reservoir databases, large amounts of data from numerous wells, and different production and completion practices. The most accurate way to estimate infill potential is to conduct a detailed integrated reservoir study, which is often time-consuming and expensive for operators of marginal oil fields. Hence, there is a need for less-demanding methods that characterize and predict heterogeneity and production variability. As an alternative approach, various authors have used empirical or statistical analyses to model variable well performance. Many of these methods are based solely on the analysis of well location, production, and time data.
My objective is to develop an enhanced method for rapid assessment of infill-drilling potential that combines the increased accuracy of simulation-based methods with the times and costs associated with statistical methods. My proposed solution is to use reservoir simulation combined with automatic history matching to regress on production data and determine the permeability distribution. Instead of matching on individual cell values of reservoir properties, I match on constant values of permeability within regions around each well. I then use the permeability distribution and an array of automated simulation predictions to determine infill drilling potential throughout the reservoir.
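A minimal Python sketch of the regression step is given below, with a toy decline-curve model standing in for the reservoir simulator; the three-region setup, the exponential decline model, and all parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for a reservoir simulator: production rate of each well
# as a function of the permeability assigned to its region. The
# exponential-decline forward model and all numbers are invented;
# a real study would call a reservoir simulator here.
t = np.linspace(0.1, 5.0, 20)                  # years
true_perm = np.array([50.0, 120.0, 80.0])      # mD, one value per region

def forward(perm):
    # rate ~ permeability-scaled decline curve, one well per region
    return np.concatenate([k * np.exp(-t / (0.02 * k)) for k in perm])

noise = 0.02 * np.random.default_rng(1).standard_normal(60)
observed = forward(true_perm) * (1 + noise)

def residuals(log_perm):
    # regress in log space so permeabilities stay positive
    return forward(np.exp(log_perm)) - observed

fit = least_squares(residuals, x0=np.log([100.0, 100.0, 100.0]))
print("estimated region permeabilities (mD):", np.exp(fit.x).round(1))
```

Matching a handful of per-region constants instead of every grid cell is what keeps the history-matching loop fast enough for rapid screening.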
Infill predictions on a single-phase synthetic case showed greater accuracy than results from statistical techniques. The methodology successfully identified infill well locations on a synthetic case derived from Cut Bank field, a water-flooded oil reservoir. Analysis of the actual production and injection data from Cut Bank field was unsuccessful, mainly because of an incomplete production database and limitations in the commercial regression software I used.
In addition to providing more accurate results than previous empirical and statistical methods, the proposed method can also incorporate other types of data, such as geological data and fluid properties. The method can be applied in multiphase fluid situations and, since it is simulation based, it provides a platform for easy transition to more detailed analysis. Thus, the method can serve as a valuable reservoir management tool for operators of stripper oil fields.
|
137 |
An efficient Bayesian formulation for production data integration into reservoir models
Leonardo, Vega Velasquez, 17 February 2005
Current techniques for production data integration into reservoir models can be broadly grouped into two categories: deterministic and Bayesian. The deterministic approach relies on imposing parameter smoothness constraints using spatial derivatives to ensure large-scale changes consistent with the low resolution of the production data. The Bayesian approach is based on prior estimates of model statistics, such as parameter covariance and data errors, and attempts to generate posterior models consistent with the static and dynamic data. Both approaches have been successful for field-scale applications, although the computational costs associated with the two methods can vary widely. This is particularly the case for the Bayesian approach, which utilizes a prior covariance matrix that can be large and full. To date, no systematic study has been carried out to examine the scaling properties and relative merits of the methods.

The main purpose of this work is twofold. First, we systematically investigate the scaling of the computational costs for the deterministic and the Bayesian approaches for realistic field-scale applications. Our results indicate that the deterministic approach exhibits a linear increase in CPU time with model size, compared to a quadratic increase for the Bayesian approach. Second, we propose a fast and robust adaptation of the Bayesian formulation that preserves the statistical foundation of the Bayesian method while having a scaling property similar to that of the deterministic approach. This can lead to orders-of-magnitude savings in computation time for model sizes greater than 100,000 grid blocks.

We demonstrate the power and utility of our proposed method using synthetic examples and a field example from the Goldsmith field, a carbonate reservoir in west Texas. The use of the new efficient Bayesian formulation along with the Randomized Maximum Likelihood method allows straightforward assessment of uncertainty: the former provides computational efficiency and the latter avoids rejection of expensive conditioned realizations.
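To make the formulation concrete, here is a hedged linear-Gaussian toy in Python of the Bayesian objective and the Randomized Maximum Likelihood sampling step; the forward operator, covariances, and problem size are invented, and a real application would replace the linear model with a reservoir simulator.

```python
import numpy as np

# Toy linear-Gaussian analogue of the Bayesian formulation: minimize
# J(m) = (Gm - d)' Cd^-1 (Gm - d) + (m - m0)' Cm^-1 (m - m0).
# The 1-D problem, G, and covariances are invented for illustration;
# in the thesis, m would be a grid of reservoir properties.
rng = np.random.default_rng(0)
n, nd = 50, 20
G = rng.standard_normal((nd, n)) / np.sqrt(n)   # forward operator
m_true = np.sin(np.linspace(0, 3, n))
Cd = 0.01 * np.eye(nd)                          # data-error covariance
Cm = np.exp(-np.abs(np.subtract.outer(range(n), range(n))) / 5.0)
d = G @ m_true + rng.multivariate_normal(np.zeros(nd), Cd)

def rml_sample():
    """One Randomized Maximum Likelihood sample: perturb the data and
    the prior mean, then solve the resulting MAP problem exactly
    (possible here only because the toy forward model is linear)."""
    d_p = d + rng.multivariate_normal(np.zeros(nd), Cd)
    m_p = rng.multivariate_normal(np.zeros(n), Cm)
    A = G.T @ np.linalg.solve(Cd, G) + np.linalg.inv(Cm)
    b = G.T @ np.linalg.solve(Cd, d_p) + np.linalg.solve(Cm, m_p)
    return np.linalg.solve(A, b)

samples = np.array([rml_sample() for _ in range(100)])
print("posterior std (first 5 cells):", samples.std(axis=0)[:5].round(3))
```

The dense prior covariance `Cm` is exactly the object whose size drives the quadratic cost the abstract mentions, which is why the proposed efficient formulation matters at 100,000+ grid blocks.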
|
138 |
Resource allocation in DS-CDMA systems with side information at the transmitter
Peiris, Bemini Hennadige Janath, 25 April 2007
In a multiuser DS-CDMA system with frequency selectivity, each user's spreading sequence is transmitted through a different channel, and the autocorrelation and cross-correlation properties of the received sequences will not be the same as those of the transmitted sequences. The best way of designing spreading sequences for frequency selective channels is to design them at the receiver, exploiting the users' channel characteristics. By doing so, we can show that the designed sequences outperform single-user AWGN performance.
In existing sequence design algorithms for frequency selective channels, the design
is done in the time domain and the connection to frequency domain properties
is not established. We approach the design of spreading sequences based on their
frequency domain characteristics. Based on the frequency domain characteristics of
the spreading sequences with unconstrained amplitudes and phases, we propose a
reduced-rank sequence design algorithm that reduces the computational complexity and feedback bandwidth and improves the performance of some existing sequence design algorithms proposed for frequency selective channels.
We propose several different approaches to design the spreading sequences with constrained amplitudes and phases for frequency selective channels. First, we use the
frequency domain characteristics of the unconstrained spreading sequences to find a
set of constrained amplitude sequences for a given set of channels. This is done either
by carefully assigning an already existing set of sequences for a given set of users or by
mapping unconstrained sequences onto a unit circle. Secondly, we use an information-theoretic approach to design the spreading sequences by matching the spectrum of each user's sequence to the water-filling spectrum of the user's channel.
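The water-filling computation itself can be sketched briefly; in the Python below, the channel gains, noise powers, and bisection-on-water-level method are illustrative choices, not taken from the dissertation.

```python
import numpy as np

# Classic water-filling power allocation: pour total power P over
# sub-channels so that p_k = max(0, mu - N_k / |H_k|^2). The channel
# gains and noise powers below are invented for illustration.
H2 = np.array([1.0, 0.6, 0.3, 0.1])   # |H_k|^2, channel power gains
N = np.array([0.1, 0.1, 0.1, 0.1])    # noise power per sub-channel
P = 1.0                               # total power budget

floor = N / H2                        # "ground level" per sub-channel
lo, hi = floor.min(), floor.max() + P
for _ in range(100):                  # bisect on the water level mu
    mu = 0.5 * (lo + hi)
    used = np.maximum(0.0, mu - floor).sum()
    lo, hi = (mu, hi) if used < P else (lo, mu)

p = np.maximum(0.0, mu - floor)
print("allocation:", p.round(4), "capacity:",
      np.log2(1 + p * H2 / N).sum().round(3))
```

Matching a sequence's spectrum to this allocation concentrates transmit energy where the channel is strongest, which is the information-theoretic rationale the abstract invokes.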
Finally, the design of inner shaping codes for single-head and multi-head magnetic recording channels is discussed. The shaping sequences are designed by considering them as short spreading codes matched to the recording channels. The outer channel code is matched to the inner shaping code using extrinsic information transfer chart analysis.
In this dissertation we introduce a new frequency domain approach to design
spreading sequences for frequency selective channels. We also extend this proposed
technique to design inner shaping codes for partial response channels.
|
139 |
A prototype system for ontology matching using polygons
Herrero, Ana, January 2006
When two distributed parties want to share information stored in ontologies, they have to make sure that they refer to the same concepts. This is done by matching the ontologies.

This thesis shows the implementation of a method for automatic ontology matching based on the representation of polygons. The method is used to compare two ontologies and determine the degree of similarity between them.

The first of the ontologies is taken as the standard, while the other is compared to it by analyzing the elements in both. According to the degrees of similarity obtained from the comparison of elements, one set of polygons is constructed for the standard ontology and another for the second ontology.

By comparing the polygons we obtain the final measure of similarity between the ontologies. With that result it is possible to determine whether two ontologies handle information referring to the same concept.
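One plausible reading of the polygon comparison, sketched below in Python, places each element-level similarity score as the radius of a vertex of a radar-style polygon and scores the ontologies by the ratio of polygon areas; this construction and the invented scores are my assumptions for illustration, not necessarily the thesis's exact method.

```python
import math

def polygon(scores):
    """Place each element's similarity score (0..1) as a radius at
    equally spaced angles, yielding a radar-style polygon."""
    n = len(scores)
    return [(r * math.cos(2 * math.pi * i / n),
             r * math.sin(2 * math.pi * i / n)) for i, r in enumerate(scores)]

def area(poly):
    """Shoelace formula for the area of a simple polygon."""
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1] -
            poly[(i + 1) % n][0] * poly[i][1] for i in range(n))
    return abs(s) / 2.0

# Invented element-level similarity scores for two ontologies.
standard = polygon([1.0, 1.0, 1.0, 1.0, 1.0])   # reference: perfect match
candidate = polygon([0.9, 0.7, 1.0, 0.8, 0.6])
print("similarity:", round(area(candidate) / area(standard), 3))
```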
|
140 |
Efficient muscle representation for human walking
Iyer, Rahul R., 22 February 2013
Research in robotics has recently broadened its traditional focus on industrial applications to include natural, human-like systems. The human musculoskeletal system has over 600 muscles and 200 joint degrees-of-freedom that provide extraordinary flexibility in tailoring its overall configuration and dynamics to the demands of different tasks. The importance of understanding human movement has spurred efforts to build systems with similar capabilities and has led to the construction of actuators, such as pneumatic artificial muscles, that have properties similar to those of human muscles. However, muscles are far more complex than these robotic actuators and will require new control perspectives.
Specifying how to encode high degree-of-freedom muscle functions in order to recreate such movements in anthropomorphic robotic systems is an imposing challenge. This dissertation attempts to advance our understanding by modeling the workings of human muscles in a way that explains how the low temporal bandwidth control of the human brain could direct the high temporal bandwidth requirements of the human movement system. We extend the motor primitives model, a popular strategy for human motor control, by coding a fixed library of movements such that their temporal codes are pre-computed and can be looked up and combined on demand. In this dissertation we develop primitives that lead to various smooth, natural human movements and obtain a sparse-code representation for muscle fiber length changes by applying Matching Pursuit on a parameterized representation of such movements. We employ accurate three-dimensional musculoskeletal models to simulate the lower body muscle fiber length changes for multiple repeatable movements captured from human subjects. We recreate the length changes and show that the signal can be economically encoded in terms of discrete movement elements. Each movement can thus be visualized as a sequence of coefficients for temporally displaced motor primitives.
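As a hedged sketch of the sparse-coding step, the Python below runs matching pursuit over a dictionary of temporally displaced Gaussian bumps standing in for motor primitives; the dictionary, the synthetic "fiber length" signal, and the five-atom budget are invented for illustration, not the thesis's musculoskeletal data.

```python
import numpy as np

# Matching pursuit: greedily decompose a signal into a sparse set of
# dictionary atoms. Atoms here are temporally displaced Gaussian bumps,
# a stand-in for motor primitives; signal and widths are invented.
t = np.linspace(0, 1, 200)
atoms = np.stack([np.exp(-((t - c) ** 2) / (2 * 0.03 ** 2))
                  for c in np.linspace(0, 1, 40)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

rng = np.random.default_rng(0)
signal = 1.5 * atoms[8] - 0.7 * atoms[25] + 0.01 * rng.standard_normal(200)

residual, code = signal.copy(), []
for _ in range(5):                       # pick at most 5 atoms
    corr = atoms @ residual              # inner product with every atom
    k = np.argmax(np.abs(corr))
    code.append((k, corr[k]))
    residual -= corr[k] * atoms[k]       # subtract the chosen component

print("sparse code (atom index, coefficient):",
      [(k, round(c, 3)) for k, c in code])
print("residual energy:", round(np.linalg.norm(residual) ** 2, 4))
```

The list of (index, coefficient) pairs is exactly the kind of compact code the abstract describes: a movement becomes a short sequence of coefficients for temporally displaced primitives.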
The primary research contribution of describing movements as a compact code develops a clear hierarchy between the spinal cord and higher brain areas. The code has several other advantages. First, it provides an overview of how the elaborate computations in abstract motor control could be ‘parcellated’ into the brain’s primary subsystems. Second, its parametric description could be used in the extension of learned movements to similar movements with different goals. Third, the sensitivity of the parameters can allow the differentiation of very subtle variations in movement. This research lays the groundwork for understanding and developing further human motor control strategies and provides a mathematical framework for experimental research.
|