About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

Digital Gain Error Correction Technique for 8-bit Pipeline ADC

Javeed, Khalid January 2010 (has links)
An analog-to-digital converter (ADC) is a link between the analog and digital domains and plays a vital role in modern mixed-signal processing systems. There are several architectures, for example flash ADCs, pipeline ADCs, sigma-delta ADCs, successive approximation (SAR) ADCs and time-interleaved ADCs. Among these, the pipeline ADC offers a favorable trade-off between speed, power consumption, resolution, and design effort. Common applications of pipeline ADCs include high-quality video systems, radio base stations, Ethernet, cable modems and high-performance digital communication systems. Unfortunately, static errors such as comparator offset errors, capacitor mismatch errors and gain errors degrade the performance of the pipeline ADC. Hence, there is a need for accuracy enhancement techniques. The conventional way to overcome these errors is to calibrate the pipeline ADC after fabrication, using so-called post-fabrication calibration techniques. But environmental changes such as temperature, together with device aging, necessitate recalibration at regular intervals, resulting in a loss of time and money. A lot of effort can be saved if the digital outputs of the pipeline ADC can be used for the estimation and correction of these errors; such techniques are further classified as foreground and background techniques. In this thesis work, an algorithm is proposed that can estimate 10% inter-stage gain errors in a pipeline ADC without any need for a special calibration signal. The efficiency of the proposed algorithm is investigated on an 8-bit pipeline ADC architecture. The first seven stages are implemented using the 1.5-bit/stage architecture while the last stage is a one-bit flash ADC. The ADC and error correction algorithms are simulated in Matlab and the signal-to-noise-and-distortion ratio (SNDR) is calculated to evaluate their efficiency.
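To illustrate the architecture the abstract describes, the following is a minimal behavioural sketch (in Python, not the thesis code) of a 1.5-bit/stage pipeline ADC with an adjustable inter-stage gain error, followed by an FFT-based SNDR estimate. The stage count, signal amplitude and the 10% gain error are assumptions chosen for demonstration only.

```python
import numpy as np

def pipeline_adc(vin, n_stages=7, vref=1.0, gain_error=0.0):
    """Behavioural 1.5-bit/stage pipeline ADC followed by a 1-bit flash stage.

    Each 1.5-bit stage resolves a digit d in {-1, 0, +1} and passes the
    residue 2*(1+gain_error)*v - d*vref to the next stage.
    Returns the reconstructed (normalised) output sample.
    """
    v = np.clip(vin, -vref, vref)
    code = 0.0
    for i in range(n_stages):
        if v > vref / 4:
            d = 1
        elif v < -vref / 4:
            d = -1
        else:
            d = 0
        code += d * 2.0 ** (-(i + 1))                 # ideal digital reconstruction
        v = 2.0 * (1.0 + gain_error) * v - d * vref   # residue with inter-stage gain error
    d = 1 if v > 0 else -1                            # final 1-bit flash stage
    code += d * 2.0 ** (-(n_stages + 1))
    return code

# simulate a near-full-scale sine wave and estimate SNDR from the FFT
N = 4096
fin = 101.0 / N                                       # coherent sampling (101 is prime)
t = np.arange(N)
x = 0.95 * np.sin(2 * np.pi * fin * t)
y = np.array([pipeline_adc(v, gain_error=0.10) for v in x])   # 10% gain error

spec = np.abs(np.fft.rfft(y * np.hanning(N))) ** 2
bin_sig = int(round(fin * N))
sig = spec[bin_sig - 2:bin_sig + 3].sum()             # signal power (window-spread bins)
noise = spec[1:].sum() - sig                          # everything else except DC
sndr = 10 * np.log10(sig / noise)
print(f"SNDR with 10% inter-stage gain error: {sndr:.1f} dB")
```

Running the same model with gain_error=0.0 gives the ideal reference SNDR, which is how the benefit of a correction algorithm would be quantified.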
192

Alzheimer's Disease Classification using K-OPLS and MRI

Falahati Asrami, Farshad January 2012 (has links)
In this thesis, we have used the kernel-based orthogonal projection to latent structures (K-OPLS) method to discriminate between Alzheimer's Disease patients (AD) and healthy control subjects (CTL), and to predict conversion from mild cognitive impairment (MCI) to AD. Three cohorts were used to create two different datasets: a small dataset including 63 subjects based on the Alzheimer’s Research Trust (ART) cohort, and a large dataset including 1074 subjects combining the AddNeuroMed (ANM) and the Alzheimer’s Disease Neuroimaging Initiative (ADNI) cohorts. In the ART dataset, 34 regional cortical thickness measures and 21 volumetric measures from MRI, in addition to 3 metabolite ratios from MRS (58 variables in total), were obtained for 28 AD and 35 CTL subjects. Three different K-OPLS models were created based on the MRI measures, the MRS measures and their combination. Combining the MRI and the MRS measures significantly improved the discriminant power, resulting in a sensitivity of 96.4% and a specificity of 97.1%. In the combined dataset (ADNI and AddNeuroMed), the Freesurfer pipeline was utilized to extract 34 regional cortical thickness measures and 23 volumetric measures from MRI scans of 295 AD, 335 CTL and 444 MCI subjects. The classification of AD and CTL subjects using the K-OPLS model resulted in a high sensitivity of 85.8% and a specificity of 91.3%. Subsequently, the K-OPLS model was used to prospectively predict conversion from MCI to AD, according to the one-year follow-up diagnosis. As a result, 78.3% of the MCI converters were classified as AD-like and 57.5% of the MCI non-converters were classified as control-like. Furthermore, an age correction method was proposed to remove the effect of age as a confounding factor. The age correction method successfully removed the age-related changes in the data and slightly improved the performance with regard to classification and prediction, so that 82.1% of the MCI converters were correctly classified. All analyses were performed using 7-fold cross validation. The K-OPLS method shows strong potential for classification of AD and CTL, and for prediction of MCI conversion.
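The age correction and cross-validation steps described above can be sketched as follows. K-OPLS itself is not available in common Python libraries, so this illustration uses an RBF-kernel classifier as a stand-in; the data, labels and sizes are synthetic assumptions, and only the general idea (fit the age trend on controls, remove it from all subjects, then evaluate with 7-fold cross validation) follows the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_subj, n_feat = 200, 58                      # illustrative sizes, not the real cohorts
age = rng.uniform(55, 90, n_subj)
y = rng.integers(0, 2, n_subj)                # 0 = CTL, 1 = AD (synthetic labels)
X = rng.normal(size=(n_subj, n_feat)) - 0.01 * age[:, None]  # built-in age trend

def age_correct(X, age, ctl_mask):
    """Remove the linear age trend estimated on healthy controls only."""
    Xc = X.copy()
    ref_age = age[ctl_mask].mean()
    for j in range(X.shape[1]):
        slope, _ = np.polyfit(age[ctl_mask], X[ctl_mask, j], 1)
        Xc[:, j] -= slope * (age - ref_age)   # subtract the fitted age effect
    return Xc

X_corr = age_correct(X, age, ctl_mask=(y == 0))

# stand-in kernel classifier evaluated with 7-fold cross validation
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
cv = StratifiedKFold(n_splits=7, shuffle=True, random_state=0)
scores = cross_val_score(clf, X_corr, y, cv=cv)
print(f"7-fold CV accuracy (synthetic data): {scores.mean():.2f}")
```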
193

Comparative advantage, Exports and Economic Growth

Riaz, Bushra January 2011 (has links)
This study investigates the causal relationship among comparative advantage, exports and economic growth using annual time series data for the period 1980-2009 on 13 developing countries. The purpose is to develop an understanding of this causal relationship and to explore the differences or similarities among different sectors of several developing countries that are at different stages of development, and how their comparative advantage influences exports, which in turn affect the economic growth of the country. Co-integration and vector error correction techniques are used to explore the causal relationship among the three variables. The results suggest a bi-directional (mutual) long-run relationship between comparative advantage, exports and economic growth in most of the developing countries. The overall long-run results of the study favour the export-led growth hypothesis that exports precede growth in the case of all countries except Malaysia, Pakistan and Sri Lanka. In the short run, a mutual relationship mostly exists among the three variables, except between Malaysia's exports and growth, between its comparative advantage and GDP, and between Singapore's exports and growth; there, the short-run causality runs from exports to gross domestic product (GDP). So overall, the short-run results favour export-led growth in all cases except Malaysia, Nepal and Sri Lanka.
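As a hedged illustration of the co-integration and vector error correction techniques named above (not the author's estimation code), the sketch below runs a Johansen co-integration test and fits a VECM on synthetic annual series standing in for the three variables; the data, lag order and co-integrating rank are assumptions for demonstration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# synthetic annual data (30 observations) sharing one stochastic trend
rng = np.random.default_rng(1)
n = 30
trend = np.cumsum(rng.normal(0.03, 0.02, n))
data = pd.DataFrame({
    "comparative_advantage": trend + rng.normal(0, 0.01, n),
    "exports":               trend + rng.normal(0, 0.01, n),
    "gdp":                   trend + rng.normal(0, 0.01, n),
})

# Johansen test for the number of cointegrating relationships
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:      ", jres.lr1)
print("95% critical values:   ", jres.cvt[:, 1])

# vector error correction model with one cointegrating relation
vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print("loading (alpha) coefficients:\n", vecm.alpha)   # speed-of-adjustment terms
print("cointegrating vector (beta):\n", vecm.beta)
```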
194

Linearization of Voltage-Controlled Oscillators in Phase-Locked Loops

Eklund, Robert January 2005 (has links)
This is a thesis report done as part of the Master of Science in Electronics Design Engineering programme given at Linköping University, Campus Norrköping. The thesis work was done at Ericsson AB in the spring of 2005. The thesis describes a method of removing variations in the tuning sensitivity of voltage-controlled crystal oscillators caused by differences in manufacturing processes. These variations result in unwanted variations in the modulation bandwidth of the phase-locked loop in which the oscillator is used. Through examination of the theory of phase-locked loops it is found that the bandwidth of the loop depends on the tuning sensitivity of the oscillator. A method of correcting the oscillator sensitivity by amplifying or attenuating the control voltage of the oscillator is developed. The size of the correction depends on the difference between the oscillator sensitivity and that of an ideal oscillator. This error is measured and the correct correction constant calculated. To facilitate the measurements and the correction, extra circuits are developed and inserted in the loop. The circuits are both analog and digital; the analog circuits are mounted on an extra circuit board and the digital circuits are implemented in VHDL in an external FPGA. Tests and theoretical calculations show that the method is valid and able to correct both positive and negative variations in oscillator sensitivity of up to a factor of ±2.5. The bandwidth of the loop can be adjusted between 2 and 15 Hz (up to ±8 dB relative to an unmodified loop).
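A minimal numeric sketch of the correction principle described above (not the thesis implementation): because the loop bandwidth is proportional to the oscillator tuning sensitivity, scaling the control voltage by the ratio of ideal to measured sensitivity restores the designed behaviour. All numerical values below are illustrative assumptions.

```python
# Correction principle: loop bandwidth scales with the VCO tuning sensitivity Kv
# (Hz/V), so applying a gain of Kv_ideal / Kv_measured to the control voltage
# restores the designed effective sensitivity.  Illustrative values only.

def correction_gain(kv_measured_hz_per_v: float, kv_ideal_hz_per_v: float) -> float:
    """Gain applied to the VCO control voltage so the effective sensitivity is ideal."""
    return kv_ideal_hz_per_v / kv_measured_hz_per_v

kv_ideal = 10e3                         # assumed ideal tuning sensitivity, 10 kHz/V
for kv_meas in (4e3, 10e3, 25e3):       # spread of up to a factor ±2.5, as in the abstract
    g = correction_gain(kv_meas, kv_ideal)
    effective_kv = g * kv_meas
    print(f"Kv = {kv_meas/1e3:4.1f} kHz/V -> correction gain {g:4.2f}, "
          f"effective Kv = {effective_kv/1e3:.1f} kHz/V")
```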
195

On Constructing Low-Density Parity-Check Codes

Ma, Xudong January 2007 (has links)
This thesis focuses on designing Low-Density Parity-Check (LDPC) codes for forward error correction. The target application is real-time multimedia communications over packet networks. We investigate two code design issues which are important in the target application scenarios: designing LDPC codes with low decoding latency, and constructing capacity-approaching LDPC codes with very low error probabilities. On designing LDPC codes with low decoding latency, we present a framework for optimizing the code parameters so that decoding can be completed after only a small number of decoding iterations. The brute-force approach to such optimization is numerically intractable, because it involves a difficult discrete optimization problem. In this thesis, we show an asymptotic approximation to the number of decoding iterations. Based on this asymptotic approximation, we propose an approximate optimization framework for finding near-optimal code parameters so that the number of decoding iterations is minimized. The approximate optimization approach is numerically tractable. Numerical results confirm that the proposed optimization approach has excellent numerical properties, and that codes with excellent performance in terms of the number of decoding iterations can be obtained. Our results show that the number of decoding iterations of the codes obtained by the proposed design approach can be as small as one-fifth of the number of decoding iterations of some previously well-known codes. The numerical results also show that the proposed asymptotic approximation is generally tight, even for cases far from the asymptotic limit. On constructing capacity-approaching LDPC codes with very low error probabilities, we propose a new LDPC code construction scheme based on 2-lifts. Based on stopping set distribution analysis, we propose design criteria for the resulting codes to have very low error floors. High error floors are the main problem of previously constructed capacity-approaching codes and prevent them from achieving very low error probabilities. Numerical results confirm that codes with very low error floors can be obtained by the proposed code construction scheme and design criteria. Compared with the codes obtained by previous standard construction schemes, which have error floors at the levels of 10^-3 to 10^-4, the codes obtained by the proposed approach show no observable error floors at levels above 10^-7. The error floors of the codes obtained by the proposed approach are also significantly lower than those of codes obtained by previous approaches to constructing codes with low error floors.
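To make the 2-lift idea concrete, the sketch below (an illustration under assumed parameters, not the construction or design criteria from the thesis) lifts a toy base parity-check matrix by a factor of two: each 1-entry becomes a 2x2 permutation block, either the identity or the swap, chosen at random, which preserves the row and column weights of the base code.

```python
import numpy as np

def two_lift(base_H: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Lift a binary base parity-check matrix by a factor of 2.

    Every 1-entry becomes a 2x2 permutation block (identity or swap) and
    every 0-entry becomes a 2x2 zero block, doubling both dimensions.
    """
    m, n = base_H.shape
    I2 = np.eye(2, dtype=int)
    SWAP = np.array([[0, 1], [1, 0]], dtype=int)
    H = np.zeros((2 * m, 2 * n), dtype=int)
    for i in range(m):
        for j in range(n):
            if base_H[i, j]:
                block = I2 if rng.random() < 0.5 else SWAP
                H[2 * i:2 * i + 2, 2 * j:2 * j + 2] = block
    return H

rng = np.random.default_rng(7)
# arbitrary toy base matrix; a real design would start from a carefully chosen base graph
base_H = np.array([[1, 1, 1, 0, 1, 0],
                   [1, 0, 1, 1, 0, 1],
                   [0, 1, 0, 1, 1, 1]])
H = two_lift(base_H, rng)
print("lifted parity-check matrix:\n", H)
print("column weights preserved:", H.sum(axis=0))
```

In the thesis the choice of permutation per edge is guided by the stopping set analysis rather than chosen uniformly at random as in this sketch.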
196

On single-crystal solid-state NMR based quantum information processing

Moussa, Osama January 2010 (has links)
Quantum information processing devices promise to solve some problems more efficiently than their classical counterparts. The source of the speedup is the structure of quantum theory itself. In that sense, the physical units that are the building blocks of such devices are the source of their power. The quest then is to find or manufacture a system that behaves according to quantum theory, and yet is controllable in such a way that the desired algorithms can be implemented. Candidate systems are benchmarked against general criteria to evaluate their success. In this thesis, I advance a particular system and present the progress made towards each of these criteria. The system is a three-qubit 13C solid-state nuclear magnetic resonance (NMR) based quantum processor. I report results concerning system characterization and control, pseudopure state preparation, and quantum error correction. I also report on using the system to test a central question in the foundations of quantum mechanics.
197

Error correction model estimation of the Canada-US real exchange rate

Ye, Dongmei 18 January 2008 (has links)
Using an error correction model, we link the long-run behavior of the Canada-US real exchange rate to its short-run dynamics. The equilibrium real exchange rate is determined by energy and non-energy commodity prices over the period 1973Q1-1992Q1. However, such a single long-run relationship does not hold when the sample period is extended to 2004Q4. This breakdown can be explained by the break point which we find at 1993Q3. At the break point, the effect of energy price shocks on Canada's real exchange rate turns from negative to positive, while the effect of non-energy commodity price shocks is constantly positive. We find that after one year 40.03% of the gap between the actual and equilibrium real exchange rate is closed. The Canada-US interest rate differential affects the real exchange rate temporarily: Canada's real exchange rate depreciates immediately after a decrease in Canada's interest rate and appreciates the next quarter, but not by as much as it has depreciated.
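A minimal sketch of the single-equation error correction approach described above (using synthetic quarterly series, not the thesis data): the long-run relation is estimated by an OLS regression in levels, and the lagged residual then enters a short-run regression in first differences, where its coefficient gives the speed at which the gap to equilibrium closes.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(2)
T = 120                                           # quarterly observations (synthetic)
energy = np.cumsum(rng.normal(0, 1, T))           # I(1) commodity-price proxies
nonenergy = np.cumsum(rng.normal(0, 1, T))
rer = 0.3 * energy + 0.5 * nonenergy + rng.normal(0, 0.5, T)  # cointegrated by design

# step 1: long-run (cointegrating) regression in levels
X_lr = np.column_stack([np.ones(T), energy, nonenergy])
beta_lr = ols(X_lr, rer)
ect = rer - X_lr @ beta_lr                        # error correction term (equilibrium gap)

# step 2: short-run regression in first differences with the lagged gap
d_rer = np.diff(rer)
X_sr = np.column_stack([np.ones(T - 1),
                        ect[:-1],                 # lagged error correction term
                        np.diff(energy),
                        np.diff(nonenergy)])
beta_sr = ols(X_sr, d_rer)
speed = -beta_sr[1]                               # fraction of the gap closed per quarter
print(f"share of the gap closed per quarter: {speed:.2%}")
print(f"approximate share closed after one year: {1 - (1 - speed) ** 4:.2%}")
```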
198

Design of Buck LED Driver Circuits with Single-stage Power Factor Correction

Wu, Wen-yuan 02 August 2010 (has links)
In this thesis, LED driver circuits with constant output current and power factor correction, intended for low-power LED lighting, are presented. A non-isolated buck converter is used for the LED drivers. Depending on the operating mode of the inductor current, power factor correction is realized either by the voltage follower approach control method under discontinuous conduction mode or by the nonlinear carrier control (NLC) method under continuous conduction mode. NLC does not need the multiplier used in traditional power factor correction, so NLC can reduce the system cost. The designed circuits are verified by simulation in the IsSpice software and by practical experiments. Simulation and experimental results show that the proposed approaches achieve the goal of a high power factor and a constant output current.
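The power factor that the abstract refers to can be estimated from simulated waveforms as real power divided by apparent power. The short sketch below is an illustration of that metric only (not the thesis circuit model), using an assumed line voltage and a slightly distorted, phase-shifted input current for comparison.

```python
import numpy as np

def power_factor(v, i):
    """Power factor = real power / apparent power, from sampled waveforms."""
    real_power = np.mean(v * i)
    apparent_power = np.sqrt(np.mean(v ** 2)) * np.sqrt(np.mean(i ** 2))
    return real_power / apparent_power

# one 60 Hz line cycle, illustrative waveforms only
fs, f_line = 100e3, 60.0
t = np.arange(0, 1 / f_line, 1 / fs)
v_in = 155.0 * np.sin(2 * np.pi * f_line * t)                  # ~110 Vrms line voltage
i_pfc = 0.50 * np.sin(2 * np.pi * f_line * t)                  # PFC keeps the current sinusoidal
i_raw = (0.50 * np.sin(2 * np.pi * f_line * t - 0.5)
         + 0.15 * np.sin(2 * np.pi * 3 * f_line * t))          # phase shift plus 3rd harmonic

print(f"power factor with PFC:    {power_factor(v_in, i_pfc):.3f}")
print(f"power factor without PFC: {power_factor(v_in, i_raw):.3f}")
```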
199

A Direct Digital Frequency Synthesizer based on Linear Interpolation with Correction Block

Chen, Shi-wei 01 August 2011 (has links)
In this thesis, a linear interpolation direct digital frequency synthesizer (DDFS) with an improved structure that simplifies the hardware complexity through a correction block is proposed. The correction block is mainly used to compensate for the error curve of the linear interpolation DDFS. Analysis shows that the error curves have similar shapes; after selecting one error curve, the others can be derived from it by multiplying by a fixed scale factor. Simulation results show that a correction block using this method can improve the spurious-free dynamic range (SFDR) by about 12 dB. The goal of the DDFS designed in this thesis is to achieve 80 dB SFDR, and the minimum required number of bits for each block in the proposed DDFS is carefully selected by simulation. In general, a DDFS with piecewise linear interpolation theoretically needs 32 segments to achieve 84 dB SFDR; in this thesis, 16 segments of piecewise linear interpolation with the correction block can achieve the target SFDR. The chip simulation is implemented in a TSMC standard 0.13 um 1P8M CMOS process with a core area of 78.11 x 77.49 um2.
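To illustrate the piecewise linear interpolation idea above (a sketch with assumed parameters, not the thesis hardware and without the correction block), the following approximates one quadrant of a sine wave with a chosen number of linear segments, exploits quarter-wave symmetry as a DDFS phase-to-amplitude block would, and estimates the resulting SFDR from the spectrum of a coherently sampled output; the segments-per-quadrant convention is an assumption.

```python
import numpy as np

def pwl_sine(phase, segments=16):
    """Piecewise-linear approximation of sin(2*pi*phase) for phase in [0, 1).

    The first quadrant is split into `segments` linear pieces; the other
    quadrants are obtained by symmetry.
    """
    phase = np.mod(phase, 1.0)
    quad = np.floor(phase * 4).astype(int)
    frac = phase * 4 - quad                       # position inside the quadrant, [0, 1)
    frac = np.where(quad % 2 == 1, 1.0 - frac, frac)
    idx = np.minimum((frac * segments).astype(int), segments - 1)
    y0 = np.sin(0.5 * np.pi * idx / segments)     # segment endpoints from the ideal sine
    y1 = np.sin(0.5 * np.pi * (idx + 1) / segments)
    val = y0 + (y1 - y0) * (frac * segments - idx)
    return np.where(quad >= 2, -val, val)

def sfdr_db(signal, sig_bin):
    spec = np.abs(np.fft.rfft(signal))
    spurs = spec.copy()
    spurs[sig_bin - 1:sig_bin + 2] = 0.0          # blank the carrier bins
    spurs[0] = 0.0                                # ignore DC
    return 20 * np.log10(spec[sig_bin] / spurs.max())

N, k = 4096, 33                                   # coherent sampling, 33 cycles per record
phase = np.arange(N) * k / N
for segs in (16, 32):
    y = pwl_sine(phase, segments=segs)
    print(f"{segs:2d} segments per quadrant: SFDR ~ {sfdr_db(y, k):.1f} dB (no correction block)")
```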
200

Sensor Alignment Correction for Ultra Short Baseline Positioning

Du, Kung-wen 27 April 2006 (has links)
The performance of an ultra-short baseline (USBL) positioning system is limited by noise and errors from the physical environment and other sources. One of the major errors in USBL positioning is neglecting the sensor misalignment, which produces static yaw, pitch, and roll offsets. In this study, a circular survey observation scheme is first proposed to study the positioning errors of a USBL system with a fixed seabed transponder. The center of the circular survey is assumed to be located directly above the transponder. Mathematical equations of the transponder position with yaw, pitch, and roll offsets are derived, respectively. Based on the characteristics of the positioning errors arising from yaw, pitch, and roll offsets, an iterative procedure for sensor misalignment correction is established, which first obtains the roll offset, then computes the yaw offset, and finally obtains the pitch offset. Simulation results indicate that the iterative procedure can effectively obtain all offsets with high accuracy, and that the computation converges rapidly to the desired error tolerance in a few iterations. However, the center of a circular vessel survey is almost never located exactly above the transponder. In such a case, the horizontal positioning error resulting from a pitch or roll offset is no longer a circular function, and the above iterative procedure fails to evaluate the angle offsets unless the deviation between the real and estimated horizontal transponder positions is extremely small compared with the transponder depth. Therefore, in addition to the circular survey scheme, this study proposes a straight survey scheme to study the patterns of positioning error resulting from yaw, pitch, and roll offsets. Following the same philosophy as the procedure described above, an iterative procedure is established that first obtains the pitch offset, then computes the roll offset, and finally obtains the yaw offset. Again, simulation results show that the iterative procedure can find all offsets with high accuracy and converges quickly. Moreover, the iterative procedure still obtains the correct angle offsets even when there is a constant heading deviation from the direction of the straight vessel track during the survey.
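A small sketch of how a static misalignment corrupts the estimated transponder position (an illustration with assumed angles and geometry, not the derivation used in the study): the true vessel-to-transponder vector is rotated by the unknown yaw, pitch and roll offsets before it is reported, so circling above the transponder traces out the characteristic error pattern that the iterative procedure exploits.

```python
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Rotation matrix for yaw (z), pitch (y), roll (x) offsets, angles in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

# assumed misalignment offsets and survey geometry (illustrative values)
offsets = np.deg2rad([2.0, 1.0, -1.5])            # yaw, pitch, roll
R_err = rot_zyx(*offsets)
depth, radius = 100.0, 50.0                       # transponder depth and survey radius (m)

for heading_deg in (0, 90, 180, 270):             # four points of the circular survey
    h = np.deg2rad(heading_deg)
    vessel = np.array([radius * np.cos(h), radius * np.sin(h), 0.0])
    true_rel = np.array([0.0, 0.0, -depth]) - vessel       # transponder relative to vessel
    measured_rel = R_err @ true_rel                         # what the misaligned sensor reports
    err = measured_rel - true_rel
    print(f"heading {heading_deg:3d} deg: horizontal error "
          f"({err[0]:6.2f}, {err[1]:6.2f}) m, vertical error {err[2]:6.2f} m")
```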
