121

The contribution of Kant to the problem of error

Lion, Aline January 1930
No description available.
122

Error analysis of correlation estimates

Foiles, Carl Luther, 1935- January 1960
No description available.
123

Synchronization of cyclic codes.

Lewis, David John Head January 1969
No description available.
124

“People believes on ghosts” - an Error Analysis of Swedish Junior and Senior High School students’ written compositions

Strömblad, Lucas January 2013
This paper investigates errors in compositions written by junior and senior high school students. Two types of errors are specifically targeted: grammatical errors in subject-verb agreement and errors in the use of prepositions. The aim of the study is to focus on aspects that are particularly difficult for the students and to reveal underlying psycholinguistic mechanisms that affect their acquisition process, such as a potential influence of the mother tongue. The study is also expected to yield information about differences in error frequency and language construction across proficiency levels. The study is cross-sectional and comprises fifty-six samples in total, collected from the 7th and 9th grades at junior high school and from Years 1 and 3 at senior high school. Each group produced fourteen free-writing compositions, each of approximately 200-300 words. The topic of the writing task was related to the supernatural, and the title was set as Do you believe in ghosts? A set of taxonomies (James, 1998) and a method referred to as Error Analysis (Ellis, 1994), both deriving from Second Language Acquisition research, are used to categorize, describe, and explain the frequencies of certain error types. The results show that error frequency generally decreases from one expected proficiency level to the next: the highest number of errors was found in the 7th grade students' writing and the lowest in the Year 3 students' writing. Regardless of proficiency level, the most troublesome aspect of subject-verb agreement for the students is mastering the third-person -s inflection. Prepositions, which account for fewer errors in the compositions than subject-verb agreement, tend to be used erroneously when students are unsure which preposition a specific contextual meaning in English requires.
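The categorize-and-count step of an Error Analysis like the one described above can be illustrated with a minimal sketch: tagged errors are tallied per proficiency level and category, then normalised so groups of different sizes can be compared. The sample data, category names, and per-100-words normalisation here are illustrative assumptions, not taken from the thesis.

```python
from collections import Counter

# (level, error_category) pairs, as a human annotator might tag them
tagged_errors = [
    ("grade 7", "subject-verb agreement"),
    ("grade 7", "subject-verb agreement"),
    ("grade 7", "preposition"),
    ("year 3", "preposition"),
]

# assumed corpus sizes: ~14 texts of ~200 words per group
words_written = {"grade 7": 2800, "year 3": 2800}

counts = Counter(tagged_errors)
for (level, category), n in sorted(counts.items()):
    per_100 = 100 * n / words_written[level]
    print(f"{level:8s} {category:25s} {n:3d} errors ({per_100:.2f} per 100 words)")
```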
125

The design and implementation of trellis-based soft decision decoders for block codes

Luna, Amjad A. 05 1900
No description available.
126

Evaluation of finite element error estimation techniques

Kalyanpur, Rohan 12 1900
No description available.
127

Nonresponse and ratio estimation problems in sample surveys

Oshungade, I. O. January 1988
No description available.
128

Resource optimization for fault-tolerant quantum computing

Paetznick, Adam 13 December 2013
Quantum computing offers the potential for efficiently solving otherwise classically difficult problems, with applications in material and drug design, cryptography, theoretical physics, number theory and more. However, quantum systems are notoriously fragile; interaction with the surrounding environment and lack of precise control constitute noise, which makes construction of a reliable quantum computer extremely challenging. Threshold theorems show that by adding enough redundancy, reliable and arbitrarily long quantum computation is possible so long as the amount of noise is relatively low, below a "threshold" value. The amount of redundancy required is reasonable in the asymptotic sense, but in absolute terms the resource overhead of existing protocols is enormous when compared to current experimental capabilities. In this thesis we examine a variety of techniques for reducing the resources required for fault-tolerant quantum computation. First, we show how to simplify universal encoded computation by using only transversal gates and standard error correction procedures, circumventing existing no-go theorems. The cost of certain error correction procedures is dominated by preparation of special ancillary states. We show how to simplify ancilla preparation, reducing the cost of error correction by more than a factor of four. Using this optimized ancilla preparation, we then develop improved techniques for proving rigorous lower bounds on the noise threshold. The techniques are specifically intended for analysis of relatively large codes such as the 23-qubit Golay code, for which we compute a lower bound on the threshold error rate of 0.132 percent per gate for depolarizing noise. This bound is the best known for any scheme. Additional overhead can be incurred because quantum algorithms must be translated into sequences of gates that are actually available in the quantum computer. In particular, arbitrary single-qubit rotations must be decomposed into a discrete set of fault-tolerant gates. We find that by using a special class of non-deterministic circuits, the cost of decomposition can be reduced by as much as a factor of four over state-of-the-art techniques, which typically use deterministic circuits. Finally, we examine global optimization of fault-tolerant quantum circuits. Physical connectivity constraints require that qubits be moved close together before they can interact, but such movement can cause data to lie idle, wasting time and space. We adapt techniques from VLSI design in order to minimize time and space usage for computations in the surface code, and we develop a software prototype to demonstrate the potential savings.
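The threshold-theorem scaling the abstract invokes can be sketched numerically. For a distance-3 code under concatenation, the logical error rate after L levels behaves roughly as p_th * (p / p_th)^(2^L) once the physical rate p is below the threshold p_th. The 0.132 percent figure is the Golay-code bound quoted in the abstract; the physical rate and target rate below are illustrative assumptions, and this is a toy model of the scaling, not the thesis's analysis.

```python
# Doubly exponential error suppression under concatenation.
# Requires p < p_th, otherwise the loop never terminates.
p_th = 1.32e-3   # threshold lower bound quoted in the abstract (per gate)
p = 1.0e-3       # assumed physical error rate, below threshold
target = 1e-15   # assumed target logical error rate for a long computation

level, p_logical = 0, p
while p_logical > target:
    level += 1
    p_logical = p_th * (p / p_th) ** (2 ** level)
    print(f"level {level}: logical error rate ~ {p_logical:.3e}")
```

Because p here sits close to p_th, several levels of concatenation are needed; the further below threshold the hardware operates, the faster the suppression, which is why raising the proven threshold matters for resource overhead.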
129

On turbo codes and other concatenated schemes in communication systems

Ambroze, Marcel Adrian January 2000
The advent of turbo codes in 1993 represented a significant step towards realising the ultimate capacity limit of a communication channel, breaking the link that bound very good performance to exponential decoder complexity. Turbo codes are parallel concatenated convolutional codes, decoded with a suboptimal iterative algorithm. The complexity of the iterative algorithm increases only linearly with block length, bringing previously unprecedented performance within practical limits. This work is a further investigation of turbo codes and of other concatenated schemes such as multiple parallel concatenation and serial concatenation. The analysis of these schemes has two important aspects: their performance under optimal decoding, and the convergence of their iterative, suboptimal decoding algorithm. The connection between iterative decoding performance and optimal decoding performance is analysed, with the help of computer simulation, by studying iterative decoding error events. Methods for designing interleavers and codes with good performance are presented and analysed in the same way. The optimal decoding performance is further investigated using a novel method that determines the weight spectra of turbo codes from the turbo code tree representation, and the results are compared with those of the iterative decoder. The method can also be used for the analysis of multiple parallel concatenated codes, but is impractical for serial concatenated codes. Non-optimal, non-iterative decoding algorithms are presented and compared with the iterative algorithm. The convergence of the iterative algorithm is investigated using the Cauchy criterion. Some insight into the performance of the concatenated schemes under iterative decoding is gained by separating error events into convergent and non-convergent components. The sensitivity of convergence to the Eb/N0 operating point is also explored.
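The Cauchy criterion mentioned in the abstract declares the iterative decoder convergent when successive iterates stop changing. A minimal sketch of that stopping rule follows, with a toy contraction standing in for the real extrinsic-information exchange between component decoders; the update rule, tolerance, and initial soft values are illustrative assumptions, not the thesis's decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
llr = rng.normal(size=8)          # initial soft outputs (toy log-likelihood ratios)
eps, max_iters = 1e-6, 100        # assumed tolerance and iteration cap

for k in range(1, max_iters + 1):
    # Stand-in update; a real turbo decoder would run the component
    # (e.g. BCJR) decoders and exchange extrinsic information here.
    new_llr = 0.5 * llr + 1.0
    delta = np.max(np.abs(new_llr - llr))
    llr = new_llr
    if delta < eps:               # Cauchy criterion: successive iterates agree
        print(f"converged after {k} iterations (max change {delta:.2e})")
        break
else:
    print(f"declared non-convergent after {max_iters} iterations")
```

Splitting decoding outcomes by whether this criterion fires is what allows error events to be separated into convergent and non-convergent components, as the abstract describes.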
130

Quantum computation

Gourlay, Iain January 2000
No description available.
