71

An investigation into the scale effects on cavitation inception and noise in marine propellers

Korkut, Emin January 1999 (has links)
This thesis presents an investigation into the phenomena of scale effects on cavitation inception and noise of marine propellers. The overall aim is to extend the understanding of these phenomena and improve prediction methods. The investigations, which are largely experimental in nature, are restricted to the tip vortex and sheet types of cavitation. Chapter 1 includes a state-of-the-art review of scale effect studies based on published papers, forming the basis for the main objectives and structure of the thesis. The objectives require systematic tests in a cavitation tunnel to explore the viscous scale effects contributing to the phenomena, particularly the effect of free-stream turbulence, and to include this effect in extrapolation procedures. Chapter 2 is concerned with the background flow measurements in the cavitation tunnel under systematically varying levels of free-stream turbulence generated using wire meshes. This background information is obtained using Laser Doppler Anemometry; the measurements provide a systematic basis on which the analyses of the cavitation inception and noise experiments can be performed. In Chapter 3, a set of cavitation inception tests is described with a NACA66 rectangular foil whose cross-section represents a typical blade section of a marine propeller. The inception measurements for systematically varying levels of free-stream turbulence and leading-edge roughness are presented for different angles of attack and the results are discussed. Chapter 4 includes another set of cavitation inception experiments with a 5-bladed model propeller of the Meridian Series. The measurements are taken for varying levels of free-stream turbulence, blade roughness and dissolved gas content. The results are analysed and discussed with specific emphasis on the similarities between the effects of free-stream turbulence and blade roughness.
Chapter 5 presents a set of systematic noise measurements, with the same test propeller under similar effects of free-stream turbulence, blade roughness and dissolved gas content, using a single external hydrophone. The analyses of these measurements, in terms of the tunnel background noise and net propeller noise, are presented and discussed for two operating conditions representing typical non-cavitating and cavitating noise spectra. In Chapter 6, a semi-empirical tool is developed to predict the inception of cavitation, including the effect of free-stream turbulence, based on Lighthill's Leading Edge Correction factor (Lighthill, 1951). This tool is correlated with the inception test results of the model propeller and its potential use as an extrapolator for full-scale prediction is discussed. An attempt is made to establish a correspondence between the level of free-stream turbulence and that of blade roughness, and its impact on current test procedures is discussed. This chapter also includes an analysis of the similarity criteria for incorporating the effect of free-stream turbulence in the inception of cavitation, using dimensional analysis. In Chapter 7, a general review of the study together with the main conclusions of the thesis are presented and some recommendations for future work are made.
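As a quick illustration of the quantity at stake in such inception tests (not taken from the thesis; the tunnel values below are hypothetical), the cavitation number that governs inception can be computed directly:

```python
def cavitation_number(p_inf, p_vapor, rho, v):
    """Cavitation number sigma = (p_inf - p_v) / (0.5 * rho * v**2).
    Inception is expected as sigma falls to the inception index sigma_i,
    which scale effects such as free-stream turbulence shift."""
    return (p_inf - p_vapor) / (0.5 * rho * v ** 2)

# Hypothetical tunnel condition: water at 20 C, 10 m/s free stream.
sigma = cavitation_number(p_inf=101_325.0, p_vapor=2_339.0, rho=998.0, v=10.0)
print(round(sigma, 3))  # 1.984
```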
72

Practical Advances in Quantum Error Correction & Communication

Criger, Daniel Benjamin January 2013 (has links)
Quantum computing exists at the intersection of mathematics, physics, chemistry, and engineering; the main goal of quantum computing is the creation of devices and algorithms which use the properties of quantum mechanics to store, manipulate and measure information. There exist many families of algorithms which, using non-classical logical operations, can outperform traditional, classical algorithms in terms of memory and processing requirements. In addition, quantum computing devices are fundamentally smaller than classical processors and memory elements, since the physical models governing their performance are applicable on all scales, unlike classical logic elements, whose underlying principles rely on the macroscopic nature of the device in question. Quantum algorithms, for the most part, are predicated on a theory of resources. It is often assumed that quantum computers can be placed in a precise fiducial state prior to computation, and that logical operations are perfect, inducing no error on the systems they affect. These assumptions greatly simplify algorithmic design, but are fundamentally unrealistic. In order to justify their use, it is necessary to develop a framework for using a large number of imperfect devices to simulate the action of a perfect device, with some acceptable probability of failure. This is the study of fault-tolerant quantum computing. To pursue this study effectively, it is necessary to understand the fundamental nature of generic quantum states and operations, as well as the means by which one can correct quantum errors. Additionally, it is important to attempt to minimize the use of computational resources in achieving error reduction and fault-tolerant computing. This thesis is concerned with three projects related to the use of error-prone quantum systems to transmit and manipulate information. The first of these is concerned with the use of imperfectly-prepared states in error-correction routines.
Using optimal quantum error correction, we are able to deduce a method of partially protecting encoded quantum information against preparation errors prior to encoding, using no additional qubits. The second project details the search for entangled states which can be used to transmit classical information over quantum channels at a rate superior to classical states. The third project concerns the transcoding of data from one quantum code into another using few ancillary resources. The descriptions of these projects are preceded by a brief introduction to representations of quantum states and channels, for completeness. Three techniques of general interest are presented in appendices. The first is an introduction to, and a minor advance in, the development of optimal error-correction codes. The second is a more efficient means of calculating the action of a quantum channel on a given state, given that the channel acts non-trivially only on a subsystem rather than on the entire system. Finally, we include documentation on a software package developed to aid the search for quantum transcoding operations.
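The subsystem-channel technique of the second appendix can be sketched generically: instead of lifting each Kraus operator to the full space with a Kronecker product, reshape the density matrix and contract only the subsystem indices. This reshape-and-contract sketch and all names are illustrative, not the thesis's implementation:

```python
import numpy as np

def apply_channel_full(kraus, rho, d_env):
    """Naive approach: lift each Kraus operator to the full space with kron."""
    I = np.eye(d_env)
    return sum(np.kron(K, I) @ rho @ np.kron(K, I).conj().T for K in kraus)

def apply_channel_subsystem(kraus, rho, d_sys, d_env):
    """Cheaper approach: reshape rho into (sys, env, sys, env) indices and
    contract the Kraus operators into the first subsystem only."""
    r = rho.reshape(d_sys, d_env, d_sys, d_env)
    out = np.zeros_like(r)
    for K in kraus:
        # out[i,b,l,c] = sum_{j,k} K[i,j] * r[j,b,k,c] * conj(K)[l,k]
        out += np.einsum('ij,jbkc,lk->iblc', K, r, K.conj())
    return out.reshape(d_sys * d_env, d_sys * d_env)

# Amplitude damping (gamma = 0.3) on the first qubit of a random 2-qubit state.
g = 0.3
kraus = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - g)]]),
         np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])]
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
rho = M @ M.conj().T
rho /= np.trace(rho)
print(np.allclose(apply_channel_full(kraus, rho, 2),
                  apply_channel_subsystem(kraus, rho, 2, 2)))  # True
```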
73

A study of the robustness of magic state distillation against Clifford gate faults

Jochym-O'Connor, Tomas Raphael January 2012 (has links)
Quantum error correction and fault-tolerance are at the heart of any scalable quantum computation architecture. Developing a set of tools that satisfy the requirements of fault-tolerant schemes is thus of prime importance for future quantum information processing implementations. The Clifford gate set has the desired fault-tolerant properties, preventing bad propagation of errors within encoded qubits, for many quantum error correcting codes, yet does not provide full universal quantum computation. Preparation of magic states can enable universal quantum computation in conjunction with Clifford operations; however, magic states prepared experimentally will be imperfect due to implementation errors. Thankfully, there exists a scheme to distill pure magic states from prepared noisy magic states using only operations from the Clifford group and measurement in the Z-basis; such a scheme is called magic state distillation [1]. This work investigates the robustness of magic state distillation to faults in state preparation and in the application of the Clifford gates in the protocol. We establish that the distillation scheme is robust to perturbations in the initial state preparation and characterize the set of states in the Bloch sphere that converge to the T-type magic state in different fidelity regimes. Additionally, we show that magic state distillation is robust to low levels of gate noise and that performing the distillation scheme using noisy Clifford gates is more efficient than using encoded fault-tolerant gates, due to the large overhead in fault-tolerant quantum computing architectures.
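As a toy illustration (not from the thesis, which analyses the noisy-gate case), the idealized 15-to-1 T-type protocol of Bravyi and Kitaev suppresses the input error as p → 35p³ to leading order, which can be iterated directly:

```python
def distill(p_in, rounds):
    """Iterate the leading-order output-error map p -> 35 * p**3 of the
    15-to-1 T-type distillation protocol.  This toy model assumes perfect
    Clifford gates, exactly the assumption the thesis relaxes."""
    ps = [p_in]
    for _ in range(rounds):
        ps.append(35 * ps[-1] ** 3)
    return ps

# Below the map's fixed point 1/sqrt(35) ~ 0.169 the error is suppressed
# doubly exponentially with the number of rounds.
print(distill(0.10, 3))
```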
74

Correctness preserving program refinements: proof theory and applications

Back, R. J. R. January 1980 (has links)
Revision of thesis (doctoral), University of Helsinki, 1978. Includes indexes and bibliographical references (pp. 109-112).
75

Attenuation correction techniques for tissue fluorescence

Bradley, Robert Stephen January 2006 (has links)
Fluorescence intensity measurements have the potential to facilitate the diagnoses of many pathological conditions. However, accurate interpretation of the measurements is complicated by the distorting effects of tissue scattering and absorption. Consequently, different techniques have been developed to attempt to compensate for these effects.
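As an illustrative aside, the simplest such compensation is an absorption-only Beer-Lambert model, in which the measured signal is divided by the attenuation of the excitation and emission paths. Real tissue correction must also handle scattering, and all names and values below are hypothetical:

```python
import math

def correct_fluorescence(f_measured, mu_ex, mu_em, depth):
    """Undo Beer-Lambert attenuation of the excitation (inward) and emission
    (outward) light for a fluorophore at a known depth.  Absorption-only toy
    model: mu_* are absorption coefficients (1/mm), depth in mm."""
    return f_measured * math.exp((mu_ex + mu_em) * depth)

# Simulate a signal attenuated on the way in and out, then correct it.
f_true = 100.0
f_meas = f_true * math.exp(-(0.8 + 0.5) * 0.2)
print(round(correct_fluorescence(f_meas, 0.8, 0.5, 0.2), 6))  # 100.0
```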
76

QUANTUM ERROR CORRECTION AND LEAKAGE ELIMINATION FOR QUANTUM DOTS

Pegahan, Saeed 01 August 2015 (has links)
The development of a quantum computer presents one of the greatest challenges in science and engineering to date. The promise of more efficient computing based on entangled quantum states and the superposition principle has led to a worldwide explosion of interest in the fields of quantum information and computation. Decoherence is one of the main problems that gives rise to different errors in a quantum system. However, the discovery of quantum error correction and the establishment of the accuracy threshold theorem provide comprehensive tools to build a quantum computer. This thesis contributes to this effort by investigating a particular class of quantum error correcting codes, called decoherence-free subsystems. The passive approach to error correction taken by these encodings provides an efficient means of protection for symmetrically coupled system-bath interactions. Here I present methods for determining the subsystem-preserving evolutions for noiseless subsystem encodings and, more importantly, for implementing universal quantum computation over three quantum dots.
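As a minimal illustration of the decoherence-free idea (a two-qubit toy model, not the three-quantum-dot encoding of the thesis): under collective dephasing generated by Z⊗I + I⊗Z, the subspace spanned by |01⟩ and |10⟩ lies in the eigenvalue-0 sector and is left untouched:

```python
import numpy as np

# Collective dephasing on two qubits: the generator Z⊗I + I⊗Z is diagonal
# with eigenvalues (2, 0, 0, -2) on |00>, |01>, |10>, |11>.
def collective_dephasing(theta):
    return np.diag(np.exp(-0.5j * theta * np.array([2.0, 0.0, 0.0, -2.0])))

# A logical qubit encoded in span{|01>, |10>} picks up no relative phase,
# whatever the (unknown) collective dephasing angle theta is.
psi = np.array([0.0, 0.6, 0.8, 0.0], dtype=complex)
print(all(np.allclose(collective_dephasing(t) @ psi, psi)
          for t in (0.3, 1.7, 2.9)))  # True
```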
77

The efficacy of community supervision

Seimiya, Takashi 01 January 2008 (has links)
This study examines the efficacy of treatment and likelihood of success while on probation among a sample of 193 people who were recently under community supervision in Illinois. Less than one-third of the probationers participated in any form of treatment or services. However, treatment was more likely to be given to drug offenders and to probationers who were either married or educated at high school or higher levels. In contrast to the low level of treatment overall, the subjects in this study achieved a high level of success while serving sentences of community corrections: whether by absconding from the jurisdiction, incurring a technical violation of supervision, or being arrested for a new crime, only one-fifth of the subjects failed. Failure was more apt to occur among those with a prior arrest history, those with less serious offenses, and unmarried probationers. This study also examined the effectiveness of an assessment instrument for classifying risk of reoffending and needs for treatment. The instrument was of low utility, its greatest use being the ability to distinguish failure between low- and medium-risk groups of offenders.
78

Numerical Simulations of Installation Effects Caused by Upstream Elbows on Single-Path Transit-Time Ultrasonic Flare Flow Meters

MARTINS, R. S. 23 May 2012 (has links)
The oil and gas industry requires flow measurements with low uncertainty, since these are mandated by law. However, elbows and other obstacles are commonly found in this setting and can degrade the quality of flow measurement through flow disturbances such as swirl and asymmetries in the velocity profile. Single-path ultrasonic flow meters are frequently used in flare gas installations, despite being sensitive to such disturbances. The present work uses commercial CFD codes to obtain the flow downstream of single-elbow and double-elbow installations, in order to investigate the magnitude and behaviour of these effects on flow measurement. Numerical integration is applied for several acoustic-path arrangements, simulating single-path ultrasonic flow meters under various conditions, to evaluate the deviation of the correction factor in disturbed flows. Reynolds numbers from 1 x 10^4 to 2 x 10^6 are considered, transducer mounting angles from 0° to 180° are tested, and axial positions up to 80D downstream of the obstacle are evaluated. The results indicate that ultrasonic flow meters are sensitive to these installation effects: the deviation of the correction factor is, in general, considerably greater than 2% at distances shorter than the 20D recommended by some manufacturers and by regulations. Nevertheless, such deviations can be as low as 0.01% in some specific configurations, which suggests that ultrasonic flow measurement can be improved by rearranging the apparatus to a favourable angular position and, above all, by implementing dedicated correction-factor functions for disturbed conditions.
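As an aside, the correction (meter) factor that such a single-path meter applies can be illustrated with a textbook power-law profile rather than CFD fields. The sketch below (all names illustrative) integrates the profile numerically and recovers the known analytic value:

```python
import numpy as np

def trap(y, x):
    """Composite trapezoid rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def meter_factor(n, samples=200_001):
    """k = (area-averaged u) / (diametral path-averaged u) for a power-law
    pipe profile u(r) = (1 - r/R)**(1/n), with R = 1.  A single diametral
    acoustic path reads the path average, so the meter uses Q = k * A * v_path."""
    r = np.linspace(0.0, 1.0, samples)
    u = (1.0 - r) ** (1.0 / n)
    v_path = trap(u, r)             # mean along the radius (= diameter, by symmetry)
    v_area = 2.0 * trap(u * r, r)   # mean over the circular cross-section
    return v_area / v_path

# Analytic value for this profile is 2n/(2n+1); n = 7 gives 14/15 ~ 0.9333.
print(round(meter_factor(7), 4))  # 0.9333
```

Disturbed flows downstream of elbows distort the profile away from this form, which is exactly why the deviation of k studied above matters.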
79

Low complexity bit-level soft-decision decoding for Reed-Solomon codes

Oh, Min-seok January 1999 (has links)
Reed-Solomon codes (RS codes) are an important method for achieving error-correction in communication and storage systems. However, it has proved difficult to find a soft-decision decoding method which has low complexity. Moreover, in some previous soft-decision decoding approaches, bit-level soft-decision information could not be employed fully. Even though RS codes have powerful error correction capability, this is a critical shortcoming. This thesis presents bit-level soft-decision decoding schemes for RS codes. The aim is to design a low complexity sequential decoding method based on bit-level soft-decision information approaching maximum likelihood performance. Firstly, a trellis decoding scheme which gives easy implementation is introduced, since the soft-decision information can be used directly. In order to allow bit-level soft-decision, a binary equivalent code is introduced and Wolf's method is used to construct the binary trellis from a systematic parity check matrix. Secondly, the Fano sequential decoding method is chosen, which is sub-optimal and adaptable to channel conditions. This method does not need a large amount of storage to perform an efficient trellis search. The Fano algorithm is then modified to improve the error correcting performance. Finally, further methods of complexity reduction are presented without loss of decoding performance, based on reliability-first search decoding using permutation groups for RS codes. Compared with the decoder without permutation, those schemes give a large complexity reduction and a performance improvement approaching near maximum likelihood performance. In this thesis, three types of permutation, cyclic, squaring and hybrid permutation, are presented and the decoding methods using them are implemented.
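As a sketch of the trellis idea: Wolf's construction labels trellis states by partial syndromes of the parity-check matrix, so only paths returning to the all-zero syndrome are codewords, and a soft-metric search over the trellis yields maximum-likelihood decoding. The example below runs a Viterbi search on the syndrome trellis of a small binary Hamming code, standing in for the binary image of an RS code and the Fano search used in the thesis; all names are illustrative:

```python
import numpy as np

# Parity-check matrix of the binary [7,4] Hamming code (a small stand-in
# for the binary equivalent of an RS code).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def viterbi_soft_decode(r, H):
    """ML soft-decision decoding on the syndrome (Wolf) trellis.
    States are partial syndromes (as integers); r holds received BPSK soft
    values (+1 leans towards bit 0, -1 towards bit 1)."""
    m, n = H.shape
    survivors = {0: (0.0, [])}            # state -> (correlation metric, path)
    for i in range(n):
        col = int(''.join(str(b) for b in H[:, i]), 2)
        nxt = {}
        for state, (metric, path) in survivors.items():
            for bit in (0, 1):
                ns = state ^ col if bit else state
                nm = metric + (1 - 2 * bit) * r[i]   # BPSK correlation metric
                if ns not in nxt or nm > nxt[ns][0]:
                    nxt[ns] = (nm, path + [bit])
        survivors = nxt
    return survivors[0][1]                # best path ending in zero syndrome

# One weakly received bit (position 3) is resolved using its low reliability.
r = np.array([0.9, 1.1, 0.8, -0.2, 1.0, 0.7, 1.2])
print(viterbi_soft_decode(r, H))  # [0, 0, 0, 0, 0, 0, 0]
```

A hard-decision decoder sees bit 3 as a 1; the soft metric recognises it as the least reliable bit and recovers the all-zero codeword.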
80

Development of a novel three-dimensional deformable mirror with removable influence functions for high precision wavefront correction in adaptive optics system

Huang, Lei, Zhou, Chenlu, Gong, Mali, Ma, Xingkun, Bian, Qi 27 July 2016 (has links)
The deformable mirror (DM) is a widely used wavefront corrector in adaptive optics systems, especially in astronomical, imaging and laser optics. A new DM structure, the 3D DM, is proposed, which has removable actuators and can correct different aberrations with different actuator arrangements. A 3D DM consists of several reflection mirrors; every mirror has a single actuator and is independent of the others. Two kinds of actuator arrangement algorithm are compared: a random disturbance algorithm (RDA) and a global arrangement algorithm (GAA). The correction effects of the two algorithms are analysed and compared through numerical simulation. The simulation results show that a 3D DM with removable actuators can clearly improve the correction effect.
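As an illustrative sketch of how actuator commands are fitted in such simulations: given an influence matrix (the wavefront response of each actuator), the commands that best correct a target aberration follow from a least-squares solve. This is a generic Gaussian-influence-function model of a conventional DM, not the paper's 3D DM with independent segments; all names and values are hypothetical:

```python
import numpy as np

def influence_matrix(act_xy, grid_xy, sigma=0.35):
    """Gaussian influence functions: response of each actuator at each
    wavefront sample point (a common conventional-DM model)."""
    d2 = ((grid_xy[:, None, :] - act_xy[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Wavefront sampled on a 21x21 grid over [-1, 1]^2; 5x5 actuator array.
g = np.linspace(-1.0, 1.0, 21)
grid = np.array([(x, y) for x in g for y in g])
a = np.linspace(-1.0, 1.0, 5)
acts = np.array([(x, y) for x in a for y in a])

A = influence_matrix(acts, grid)
target = grid[:, 0] ** 2 + grid[:, 1] ** 2           # defocus-like aberration
cmd, *_ = np.linalg.lstsq(A, target, rcond=None)     # least-squares actuator fit
residual = target - A @ cmd
print(f"wavefront rms {np.std(target):.3f} -> residual rms {np.std(residual):.3f}")
```

Changing the rows of `acts` is the discrete analogue of the paper's removable-actuator rearrangement: each arrangement yields a different influence matrix and hence a different achievable residual.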
