  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Architectures for fault-tolerant quantum computation

O'Gorman, Joe January 2017 (has links)
Quantum computing has enormous potential, but this can only be realised if quantum errors can be controlled sufficiently to allow quantum algorithms to be completed reliably. However, quantum-error-corrected logical quantum bits (qubits) that achieve meaningful error suppression have not yet been demonstrated. This thesis reports research on several topics related to the challenge of designing fault-tolerant quantum computers. The first topic is a proposal for achieving large-scale error correction with the surface code in a silicon-donor-based quantum computing architecture. This proposal relaxes the stringent requirements on donor-placement precision set by previous schemes, from the single-atom level to the order of 10 nm in some regimes, as shown by numerical simulation of the surface code threshold. The second topic is the development of a method for benchmarking and assessing the performance of small error-correcting codes in few-qubit systems, introducing a metric called 'integrity', closely linked to the trace distance, together with a proposal for experiments to demonstrate various stepping stones on the way to 'strictly superior' quantum error correction. Most quantum error-correcting codes, including the surface code, do not allow fault-tolerant universal computation without additional gadgets. One method of achieving universality is to distil and then consume high-quality 'magic states'. This process adds overhead to quantum computation over and above that incurred by the base-level quantum error correction. The latter parts of this thesis investigate how many physical qubits are needed in a 'magic state factory' within a surface code quantum computer and introduce a number of techniques to reduce the overhead of leading magic state schemes.
It is found that universal quantum computing is achievable with ~16 million qubits if error rates across a device are kept below 10^-4. In addition, the thesis introduces improved methods of magic state distillation for unconventional magic states that allow logical small-angle rotations, and shows that this can be more efficient than synthesising these operations from the gates provided by traditional magic states.
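To make the distillation overhead concrete, the standard 15-to-1 protocol (Bravyi and Kitaev) suppresses the magic-state error rate roughly as p_out ≈ 35 p_in³ per round, at the cost of 15 input states per output. The sketch below uses these textbook figures for illustration only; it does not reproduce the improved protocols or the factory sizing developed in the thesis.

```python
# Illustrative arithmetic for 15-to-1 magic state distillation:
# p_out ~ 35 * p_in**3 per round, 15 input states consumed per output.
# Textbook figures, not the thesis's improved protocols.

def distill_15_to_1(p_in: float) -> float:
    """Approximate output error rate after one 15-to-1 distillation round."""
    return 35.0 * p_in ** 3

def rounds_needed(p_in: float, target: float) -> int:
    """Number of distillation rounds to push the error rate below `target`."""
    rounds, p = 0, p_in
    while p > target:
        p = distill_15_to_1(p)
        rounds += 1
    return rounds

p = 1e-4                          # physical-level error rate, as in the abstract
print(distill_15_to_1(p))         # one round: 35e-12 = 3.5e-11
print(rounds_needed(p, 1e-15))    # rounds needed for a typical algorithmic target
```

Two rounds suffice here, at a raw cost of 15² = 225 input states per distilled output, which is why factory qubit counts dominate overall resource estimates.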
52

The effective error-correction/feedback in ESL children's written work in terms of fluency and accuracy : a case study with two Korean ESL children

Ko, Bo-Ai, n/a January 1999 (has links)
This case study explored effective error-correction/feedback methods for two Korean ESL children's writing (a recounting task) in terms of accuracy, fluency and attitudes. Three different error-correction methods were applied over a period of seven months: written comments focusing on meaning by the researcher (Case 1), direct and global error correction focusing on form by the researcher (Case 2), and self-directed error correction using checklists by the subjects (Case 3). Thirty pieces of recount writing per subject were collected (10 pieces per case) and analysed against structured criteria of fluency and accuracy. Through participant observation, the subjects' changing attitudes were recorded in notes and on video tape. The analysis showed that for Subject B, who was 7 years old and a more advanced writer of English than Subject A, self-directed error correction using checklists (Case 3) was the most effective method for both fluency and accuracy as well as attitude. For Subject A, who was 5 years old and an early beginner in her writing, Case 1 appeared more effective in terms of fluency and attitude, while Case 3 appeared more effective in terms of accuracy. The discussion highlights the method of error correction/feedback, the issue of ownership in children's writing (including error correction) and the necessity of process writing in the light of the whole context of the case study.
53

DIGITAL GAIN ERROR CORRECTION TECHNIQUE FOR 8-BIT PIPELINE ADC

Javeed, Khalid January 2010 (has links)
An analog-to-digital converter (ADC) is a link between the analog and digital domains and plays a vital role in modern mixed-signal processing systems. There are several architectures, for example flash ADCs, pipeline ADCs, sigma-delta ADCs, successive approximation (SAR) ADCs and time-interleaved ADCs. Among these, the pipeline ADC offers a favorable trade-off between speed, power consumption, resolution and design effort. Common applications of pipeline ADCs include high-quality video systems, radio base stations, Ethernet, cable modems and high-performance digital communication systems. Unfortunately, static errors such as comparator offset errors, capacitor mismatch errors and gain errors degrade the performance of the pipeline ADC, so accuracy enhancement techniques are needed. The conventional way to overcome these errors is to calibrate the pipeline ADC after fabrication, using so-called post-fabrication calibration techniques. However, environmental changes such as temperature and device aging necessitate recalibration at regular intervals, resulting in a loss of time and money. Much effort can be saved if the digital outputs of the pipeline ADC can be used for the estimation and correction of these errors; such techniques are further classified as foreground and background techniques. In this thesis, an algorithm is proposed that can estimate 10% inter-stage gain errors in a pipeline ADC without any need for a special calibration signal. The efficiency of the proposed algorithm is investigated on an 8-bit pipeline ADC architecture. The first seven stages are implemented using the 1.5-bit/stage architecture while the last stage is a one-bit flash ADC. The ADC and error correction algorithms are simulated in Matlab and the signal-to-noise-and-distortion ratio (SNDR) is calculated to evaluate efficiency.
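The 1.5-bit/stage pipeline described above can be sketched in a few lines. The structure below is the textbook architecture (comparator thresholds at ±Vref/4, residue gain of 2) and the 2% gain error is an illustrative figure; this is not the thesis's Matlab model or its blind estimation algorithm, which needs no knowledge of the error.

```python
# Minimal sketch of an 8-bit pipeline ADC: seven 1.5-bit stages plus a final
# 1-bit flash, with digital correction of a *known* inter-stage gain error.
# Textbook architecture for illustration only.

def stage_1p5bit(v, gain, vref=1.0):
    """One 1.5-bit stage: comparators at +/- vref/4, residue = gain*v - d*vref."""
    if v > vref / 4:
        d = 1
    elif v < -vref / 4:
        d = -1
    else:
        d = 0
    return d, gain * v - d * vref

def convert(vin, gain=2.0, stages=7):
    """Run the pipeline; return the per-stage digits plus the final flash bit."""
    digits, v = [], vin
    for _ in range(stages):
        d, v = stage_1p5bit(v, gain)
        digits.append(d)
    digits.append(1 if v > 0 else -1)   # final 1-bit flash
    return digits

def reconstruct(digits, gain=2.0):
    """Digital reconstruction: vin ~= sum of d_i / gain**(i+1)."""
    est = sum(d / gain ** (i + 1) for i, d in enumerate(digits[:-1]))
    return est + digits[-1] / (2 * gain ** (len(digits) - 1))

g_err = 2.0 * 1.02                      # 2% inter-stage gain error
vals = [i / 100 for i in range(-75, 76)]
naive = max(abs(reconstruct(convert(v, g_err), 2.0) - v) for v in vals)
fixed = max(abs(reconstruct(convert(v, g_err), g_err) - v) for v in vals)
print(naive, fixed)                     # correcting with the true gain shrinks the error
```

Reconstructing with the actual stage gain instead of the nominal 2 is the essence of digital gain error correction; the hard part, which the thesis addresses, is estimating that gain from the output codes alone.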
54

Housing Investment in Germany : an Empirical Test

Holm, Hanna January 2006 (has links)
In this thesis I study the German housing market and specifically the level of housing investment. First, a theoretical background to housing market dynamics is presented; then I test whether there is a relationship between housing investment and GDP, the size of the population, Tobin's Q and construction costs. An error correction model is estimated, and the result is that the equilibrium level of housing investment is restored less than two quarters after a change in one of the explanatory variables. The estimation indicates that GDP, the size of the population and construction costs affect the level of construction in the short run. In the long run, however, the only significant effect comes from changes in construction costs.
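The error-correction mechanism behind such an estimate can be sketched with synthetic data: the dependent variable adjusts toward a long-run target b·x, and the adjustment speed is recovered by regressing its change on the lagged disequilibrium. All parameter values below are illustrative, not the thesis's German housing data or results, and the long-run slope b is treated as known (in practice it comes from a first-stage cointegrating regression).

```python
# Hedged sketch of an error correction model on synthetic data.
import random

random.seed(7)
b, alpha, n = 1.5, -0.6, 2000            # long-run slope, adjustment speed, sample
x, y = [0.0], [0.0]
for _ in range(n - 1):
    x.append(x[-1] + random.gauss(0, 1))  # I(1) driver (e.g. GDP)
    ecm = y[-1] - b * x[-2]               # lagged disequilibrium
    y.append(y[-1] + alpha * ecm + random.gauss(0, 0.1))

# Regress dy_t on the lagged disequilibrium to recover the adjustment speed.
u = [y[t - 1] - b * x[t - 1] for t in range(1, n)]
v = [y[t] - y[t - 1] for t in range(1, n)]
alpha_hat = sum(a * c for a, c in zip(u, v)) / sum(a * a for a in u)
print(round(alpha_hat, 2))                # close to the true alpha of -0.6

# With |alpha| = 0.6 per quarter, (1 + alpha)**2 = 0.16 of a shock remains
# after two quarters, i.e. 84% of the gap is closed - the kind of fast
# adjustment the abstract describes.
```

A strongly negative adjustment coefficient is exactly what "equilibrium restored in under two quarters" corresponds to in this framework.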
55

Currency Substitution: Empirical Investigation of Taiwan

Yeh, Hui-Chuan 01 August 2007 (has links)
If there is currency substitution, the central bank loses independence in monetary policy even if a flexible exchange rate system is adopted. In this paper, we investigate the existence of currency substitution between Taiwan and the United States in an open economy during the period of the managed floating exchange rate system, and examine the factors influencing monetary policy and a domestic money demand function derived from a small-country portfolio balance approach. To take account of currency substitution, we use quarterly data on the demand for money over the period 1981-2005 and include the real exchange rate in addition to real income, the domestic nominal interest rate and the foreign nominal interest rate. The methodology is based on the Johansen and Juselius (1990) cointegration technique, and an error correction model is used to examine the short-run dynamic adjustment of these variables. The augmented Dickey-Fuller and Phillips-Perron tests reveal that all variables are integrated of order one. The results from Johansen's maximum likelihood method reveal only one cointegrating vector among the variables, implying a long-run equilibrium relationship among them. There is clear evidence that the demand for money is affected not only by changes in domestic variables such as real income and the domestic nominal interest rate, but also by fluctuations in the foreign nominal interest rate and the real exchange rate. The coefficient of the real exchange rate is negative and statistically significant, meaning that currency substitution is a significant factor in the domestic money demand equation and indeed exists in Taiwan. Therefore, to conduct an effective monetary policy, the monetary authorities should take international factors into account.
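A textbook small-country money-demand specification of the kind described above can be written as a cointegrating relation plus an error-correction equation. The symbols below are the standard ones (m money, p prices, y real income, i and i* domestic and foreign interest rates, q the real exchange rate); the thesis's exact variable definitions and lag structure may differ, and under currency substitution the coefficient on q is expected to be negative, as the abstract reports.

```latex
% Long-run (cointegrating) money-demand relation and short-run ECM:
\begin{align}
  m_t - p_t &= \beta_0 + \beta_1 y_t - \beta_2 i_t + \beta_3 i^{*}_t
               + \beta_4 q_t + u_t, \qquad \beta_4 < 0 \\
  \Delta(m_t - p_t) &= \alpha\, u_{t-1}
               + \textstyle\sum_j \gamma_j \Delta z_{t-j} + \varepsilon_t
\end{align}
```

Here u_{t-1} is the lagged deviation from the long-run relation and α < 0 measures the speed at which money holdings return to equilibrium.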
57

Error correction model estimation of the Canada-US real exchange rate

Ye, Dongmei 18 January 2008
Using an error correction model, we link the long-run behavior of the Canada-US real exchange rate to its short-run dynamics. The equilibrium real exchange rate is determined by energy and non-energy commodity prices over the period 1973Q1-1992Q1. However, such a single long-run relationship does not hold when the sample period is extended to 2004Q4. This breakdown can be explained by the break point we find at 1993Q3: at the break, the effect of energy price shocks on Canada's real exchange rate turns from negative to positive, while the effect of non-energy commodity price shocks remains positive. We find that after one year 40.03% of the gap between the actual and equilibrium real exchange rate is closed. The Canada-US interest rate differential affects the real exchange rate temporarily: Canada's real exchange rate depreciates immediately after a decrease in Canada's interest rate and appreciates the next quarter, but not by as much as it depreciated.
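The reported speed of adjustment can be translated into a per-quarter error-correction coefficient. This is back-of-the-envelope arithmetic assuming geometric quarterly adjustment, not a figure taken from the thesis.

```python
# If 40.03% of the gap to equilibrium closes within one year and the gap
# decays geometrically each quarter, the implied quarterly adjustment speed
# lam satisfies 1 - (1 - lam)**4 = 0.4003.

closed_per_year = 0.4003
lam = 1 - (1 - closed_per_year) ** 0.25   # implied quarterly adjustment
print(round(lam, 4))                      # roughly 0.12 per quarter

remaining, half_life = 1.0, 0
while remaining > 0.5:                    # quarters until half the gap is closed
    remaining *= (1 - lam)
    half_life += 1
print(half_life)
```

So closing 40% of the gap per year corresponds to an error-correction coefficient of about 0.12 per quarter, with a half-life of roughly a year and a half.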
58

On Constructing Low-Density Parity-Check Codes

Ma, Xudong January 2007 (has links)
This thesis focuses on designing low-density parity-check (LDPC) codes for forward error correction. The target application is real-time multimedia communication over packet networks. We investigate two code design issues that are important in these scenarios: designing LDPC codes with low decoding latency, and constructing capacity-approaching LDPC codes with very low error probabilities. On designing LDPC codes with low decoding latency, we present a framework for optimizing the code parameters so that decoding can be completed after only a small number of iterative decoding iterations. The brute-force approach to such optimization is numerically intractable, because it involves a difficult discrete optimization program. In this thesis, we show an asymptotic approximation to the number of decoding iterations. Based on this approximation, we propose an approximate optimization framework for finding near-optimal code parameters that minimize the number of decoding iterations. This approximate approach is numerically tractable. Numerical results confirm that it has excellent numerical properties and yields codes with excellent performance in terms of the number of decoding iterations: the numbers of decoding iterations of codes designed this way can be as small as one fifth of those of some previously well-known codes. The results also show that the proposed asymptotic approximation is generally tight even for cases far from the asymptotic limit. On constructing capacity-approaching LDPC codes with very low error probabilities, we propose a new LDPC code construction scheme based on 2-lifts. Based on stopping-set distribution analysis, we propose design criteria for the resulting codes to have very low error floors.
High error floors are the main problem of previously constructed capacity-approaching codes and prevent them from achieving very low error probabilities. Numerical results confirm that codes with very low error floors can be obtained by the proposed construction scheme and design criteria. Compared with codes from previous standard construction schemes, which have error floors at the levels of 10^-3 to 10^-4, the codes from the proposed approach show no observable error floors at levels above 10^-7. Their error floors are also significantly lower than those of codes from previous approaches to constructing low-error-floor codes.
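The 2-lift construction mentioned above has a simple mechanical core: every 1 in the base parity-check matrix becomes a 2x2 permutation block (identity or swap, chosen by a per-edge sign), and every 0 becomes a zero block, doubling both dimensions while preserving row and column weights. The sketch below shows only this lifting step on a toy matrix; the thesis's contribution, choosing the signs via stopping-set analysis to lower the error floor, is not reproduced here.

```python
# Sketch of a 2-lift of a binary parity-check matrix H: each 1 entry is
# replaced by a 2x2 permutation block selected by a sign, each 0 by a zero
# block. Illustrative construction; the sign-selection criteria of the
# thesis are not implemented.

def two_lift(H, signs):
    """Lift an m x n binary matrix H to 2m x 2n using per-edge signs."""
    m, n = len(H), len(H[0])
    lifted = [[0] * (2 * n) for _ in range(2 * m)]
    for i in range(m):
        for j in range(n):
            if H[i][j]:
                if signs.get((i, j), 0) == 0:            # identity block
                    lifted[2*i][2*j] = lifted[2*i+1][2*j+1] = 1
                else:                                     # swap block
                    lifted[2*i][2*j+1] = lifted[2*i+1][2*j] = 1
    return lifted

# Toy base matrix and an arbitrary sign assignment (not a real code design).
H = [[1, 1, 0, 1],
     [0, 1, 1, 1]]
signs = {(0, 1): 1, (1, 2): 1, (1, 3): 1}
H2 = two_lift(H, signs)
for row in H2:
    print(row)
```

Because each block is a permutation, the lifted Tanner graph keeps the base graph's degree distribution, which is why lifting is attractive for scaling up a good small design.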
60

Comparative advantage, Exports and Economic Growth

Riaz, Bushra January 2011 (has links)
This study investigates the causal relationship among comparative advantage, exports and economic growth using annual time-series data for the period 1980-2009 on 13 developing countries. The purpose is to develop an understanding of the causal relationships and to explore the differences and similarities among sectors of several developing countries at different stages of development, and how their comparative advantage influences exports, which in turn affect the economic growth of the country. Cointegration and vector error correction techniques are used to explore the causal relationship among the three variables. The results suggest a bi-directional, or mutual, long-run relationship between comparative advantage, exports and economic growth in most of the developing countries. The overall long-run results favour the export-led growth hypothesis, that exports precede growth, for all countries except Malaysia, Pakistan and Sri Lanka. A short-run mutual relationship exists among the three variables in most cases, the exceptions being between Malaysia's exports and growth, between its comparative advantage and GDP, and between Singapore's exports and growth. The short-run causality runs from exports to gross domestic product (GDP), so overall the short-run results favour export-led growth in all cases except Malaysia, Nepal and Sri Lanka.
