61

Étude des artefacts en tomodensitométrie par simulation Monte Carlo (Study of computed tomography artifacts using Monte Carlo simulation)

Bedwani, Stéphane 08 1900
Computed tomography (CT) is widely used in radiotherapy to acquire the patient-specific anatomical data needed for accurate dose calculation during treatment planning. To account for the heterogeneous composition of tissues, calculation techniques such as the Monte Carlo method are needed to compute an exact dose distribution. To use CT images in such a calculation, every voxel value, expressed in Hounsfield units (HU), must be converted into a relevant physical quantity such as the electron density (ED). This conversion is typically performed with an HU-ED calibration curve. Any discrepancy, or artifact, that appears in the reconstructed CT image before calibration can therefore assign the wrong tissue to a voxel, and such misassignments can critically reduce the reliability of the dose calculation. The aim of this work is to assign accurate physical values to CT image voxels so that dose calculations in radiotherapy treatment planning remain reliable. To achieve this, the origins of CT artifacts are first studied by reproducing them in Monte Carlo simulations; because these simulations are computationally expensive, they were parallelized to run efficiently on a supercomputer. A sensitivity study of HU values in the presence of artifacts is then performed through a statistical analysis of the image histograms. Beam hardening, which underlies several of these artifacts, is examined in more detail: a review of the state of the art in beam-hardening correction is presented, followed by an explicit demonstration of an empirical correction.
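As a rough illustration of the HU-to-ED conversion step described in this abstract, the sketch below applies a piecewise-linear calibration curve to CT voxel values. The calibration points are invented for illustration; in practice they are measured per scanner with a tissue-characterization phantom, and the thesis itself studies how artifacts corrupt the HU values feeding this step.

```python
import numpy as np

# Assumed (illustrative) calibration points of an HU-ED curve; real curves are
# measured on the specific scanner with a density phantom.
HU_POINTS = np.array([-1000.0, 0.0, 1000.0, 3000.0])   # Hounsfield units
ED_POINTS = np.array([0.001, 1.000, 1.520, 2.560])     # relative electron density

def hu_to_ed(ct_voxels):
    """Convert CT voxel values in HU to relative electron density by
    linear interpolation between the calibration points."""
    hu = np.clip(ct_voxels, HU_POINTS[0], HU_POINTS[-1])
    return np.interp(hu, HU_POINTS, ED_POINTS)

# An artifact that shifts HU values (e.g. a streak) shifts the ED, and hence
# the computed dose, which is what motivates the artifact study above.
voxels = np.array([-1000, -50, 0, 60, 1200])   # approx. air, fat, water, soft tissue, bone
print(hu_to_ed(voxels))
```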
62

Practical Advances in Quantum Error Correction & Communication

Criger, Daniel Benjamin January 2013
Quantum computing exists at the intersection of mathematics, physics, chemistry, and engineering; the main goal of quantum computing is the creation of devices and algorithms which use the properties of quantum mechanics to store, manipulate and measure information. There exist many families of algorithms which, using non-classical logical operations, can outperform traditional, classical algorithms in terms of memory and processing requirements. In addition, quantum computing devices are fundamentally smaller than classical processors and memory elements, since the physical models governing their performance are applicable on all scales, as opposed to classical logic elements, whose underlying principles rely on the macroscopic nature of the device in question. Quantum algorithms, for the most part, are predicated on a theory of resources. It is often assumed that quantum computers can be placed in a precise fiducial state prior to computation, and that logical operations are perfect, inducing no error on the systems they affect. These assumptions greatly simplify algorithmic design, but are fundamentally unrealistic. In order to justify their use, it is necessary to develop a framework for using a large number of imperfect devices to simulate the action of a perfect device, with some acceptable probability of failure. This is the study of fault-tolerant quantum computing. In order to pursue this study effectively, it is necessary to understand the fundamental nature of generic quantum states and operations, as well as the means by which quantum errors can be corrected. It is also important to minimize the computational resources used in achieving error reduction and fault-tolerant computing. This thesis is concerned with three projects related to the use of error-prone quantum systems to transmit and manipulate information. The first is concerned with the use of imperfectly prepared states in error-correction routines. Using optimal quantum error correction, we are able to deduce a method of partially protecting encoded quantum information against preparation errors prior to encoding, using no additional qubits. The second project details the search for entangled states which can be used to transmit classical information over quantum channels at a rate superior to classical states. The third concerns the transcoding of data from one quantum code into another using few ancillary resources. The descriptions of these projects are preceded by a brief introduction to representations of quantum states and channels, for completeness. Three techniques of general interest are presented in appendices. The first is an introduction to, and a minor advance in, the development of optimal error-correction codes. The second is a more efficient means of calculating the action of a quantum channel on a given state when the channel acts non-trivially only on a subsystem rather than on the entire system. Finally, we include documentation on a software package developed to aid the search for quantum transcoding operations.
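The idea of correcting quantum errors mentioned in this abstract can be made concrete with the smallest textbook example, the three-qubit bit-flip code. The sketch below (plain NumPy) is only that standard illustration, not the optimal or imperfect-preparation schemes studied in the thesis: it encodes a logical qubit, applies a single bit flip, reads the stabilizer syndrome, and undoes the error.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I = np.eye(2)

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Encode a|0> + b|1> (real amplitudes for simplicity) as a|000> + b|111>.
a, b = 0.6, 0.8
encoded = np.zeros(8)
encoded[0], encoded[7] = a, b

# A single bit-flip error on the middle qubit.
corrupted = kron(I, X, I) @ encoded

# Measure the stabilizers Z0Z1 and Z1Z2; for a code state with at most one
# X error these expectation values are exactly +/-1 and identify the error.
s1 = corrupted @ kron(Z, Z, I) @ corrupted
s2 = corrupted @ kron(I, Z, Z) @ corrupted
syndrome = (int(round(s1)), int(round(s2)))

# Map the syndrome to the flipped qubit and apply the correction.
flipped = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}[syndrome]
ops = [X if q == flipped else I for q in range(3)]
recovered = corrupted if flipped is None else kron(*ops) @ corrupted

print(np.allclose(recovered, encoded))   # True: the logical state is restored
```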
63

Motion Detection and Correction in Magnetic Resonance Imaging

Maclaren, Julian Roscoe January 2007
Magnetic resonance imaging (MRI) is a non-invasive technique used to produce high-quality images of the interior of the human body. Compared to other imaging modalities, however, MRI requires a relatively long data acquisition time to form an image. Patients often have difficulty staying still during this period, which is problematic because motion produces artifacts in the image. This thesis explores methods for imaging a moving object using MRI. Testing is performed using simulations, a moving phantom, and human subjects. Several strategies developed to avoid motion artifact problems are presented, with emphasis on techniques that provide motion correction without penalty in terms of acquisition time. The most significant contribution presented is the development and assessment of the 'TRELLIS' pulse sequence and reconstruction algorithm. TRELLIS is a unique approach to motion correction in MRI: orthogonal overlapping strips fill k-space, and the phase-encode and frequency-encode directions are alternated such that the frequency-encode direction always runs lengthwise along each strip. The overlap between pairs of orthogonal strips is used for signal averaging and to produce a system of equations that, when solved, quantifies the rotational and translational motion of the object. Acquired data is then corrected using this motion estimate. The advantage of TRELLIS over existing techniques is that k-space is sampled uniformly and all collected data is used for both motion detection and image reconstruction. This thesis presents a number of other contributions: a proposed means of motion correction using parallel imaging; an extension to the phase-correlation method for determining the displacement between two objects; a metric to quantify the level of motion artifacts; a moving phantom; a physical version of the ubiquitous Shepp-Logan head phantom; a motion-resistant data acquisition technique; and a means of correcting for T2 blurring artifacts.
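One of the contributions listed here extends the phase-correlation method for estimating the displacement between two images. A minimal NumPy sketch of the classical version (integer-pixel, translation-only) is given below; the thesis's extension, and the TRELLIS strip-overlap motion estimation itself, are more involved.

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the circular (row, col) shift d such that `moved` ~ `ref` rolled by d."""
    cross = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12                 # keep only the phase information
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Convert the peak location into signed shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Quick self-check on a synthetic image shifted circularly by (5, -3) pixels.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
print(phase_correlation_shift(img, shifted))       # expected: (5, -3)
```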
64

Price uncertainty, investment and consumption

Ercolani, Marco G. January 1999
No description available.
65

Iterative decoding of concatenated codes

Fagervik, Kjetil January 1998
No description available.
66

Combination of Reliability-based Automatic Repeat ReQuest with Error Potential-based Error Correction for Improving P300 Speller Performance

Furuhashi, Takeshi, Yoshikawa, Tomohiro, Takahashi, Hiromu January 2010
Session ID: SA-B1-3 / SCIS & ISIS 2010, Joint 5th International Conference on Soft Computing and Intelligent Systems and 11th International Symposium on Advanced Intelligent Systems. December 8-12, 2010, Okayama Convention Center, Okayama, Japan
67

An improved error correction algorithm for multicasting over LTE networks

Cornelius, Johannes Mattheus January 2014
Multicasting in Long-Term Evolution (LTE) environments poses several challenges if it is to be reliably implemented. Neither retransmission schemes nor Forward Error Correction (FEC), the traditional error correction approaches, can be readily applied to this system of communication if bandwidth and resources are to be used efficiently. A large number of network parameters and topology variables can influence the cost of telecommunication in such a system. These need to be considered when selecting an appropriate error correction technique for a certain LTE multicast deployment. This dissertation develops a cost model to investigate the costs associated with over-the-air LTE multicasting when different error correction techniques are applied. The benefit of this simplified model is an easily implementable and fast method to evaluate the communications costs of different LTE multicast deployments with the application of error correction techniques. / MIng (Computer and Electronic Engineering), North-West University, Potchefstroom Campus, 2014
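As a toy illustration of the trade-off such a cost model quantifies, the sketch below compares the expected number of over-the-air transmissions for an idealized multicast ARQ scheme (retransmit each lost packet until every receiver has it) against an idealized rateless FEC code. All assumptions here are invented for illustration (independent per-receiver packet loss, a fixed block of K packets, any K received coded packets reconstruct the block); this is not the dissertation's cost model.

```python
import numpy as np

def arq_cost(num_receivers, loss_p, num_packets, t_max=10_000):
    """Expected transmissions when every lost packet is retransmitted until all
    receivers have it (idealized multicast ARQ, independent losses)."""
    t = np.arange(1, t_max + 1)
    # P(a given packet still needs a t-th transmission) = 1 - (1 - p^(t-1))^N
    expected_per_packet = np.sum(1.0 - (1.0 - loss_p ** (t - 1)) ** num_receivers)
    return num_packets * expected_per_packet

def fec_cost(num_receivers, loss_p, num_packets, trials=2_000, seed=0):
    """Monte Carlo estimate for an idealized rateless code: keep sending coded
    packets until every receiver has collected K of them."""
    rng = np.random.default_rng(seed)
    # Packets each receiver must see before collecting its K successes.
    failures = rng.negative_binomial(num_packets, 1.0 - loss_p,
                                     size=(trials, num_receivers))
    needed = num_packets + failures
    return needed.max(axis=1).mean()

# Transmission counts for a 1000-packet block, 5% loss, growing receiver groups.
for n in (1, 10, 100, 1000):
    print(n, round(arq_cost(n, 0.05, 1000), 1), round(fec_cost(n, 0.05, 1000), 1))
```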
68

Honest Approximations to Realistic Fault Models and Their Applications to Efficient Simulation of Quantum Error Correction

Puzzuoli, Daniel January 2014
Understanding the performance of realistic noisy encoded circuits is an important task for the development of large-scale practical quantum computers. Specifically, proposals for quantum computation must be well informed both by the qualities of the low-level physical system of choice and by the properties of the high-level quantum error correction and fault-tolerance schemes. Gaining insight into how a particular computation will play out on a physical system is in general a difficult problem, as the classical simulation of arbitrary noisy quantum circuits is inefficient. Nevertheless, important classes of noisy circuits can be simulated efficiently, and such simulations have led to numerical estimates of threshold error rates and resource requirements in topological codes subject to efficiently simulable error models. This thesis describes and analyzes a method that my collaborators and I have introduced for leveraging efficient simulation techniques to understand the performance of large quantum processors that are subject to errors lying outside of the efficient simulation algorithm's applicability. The idea is to approximate an arbitrary gate error with an error from the efficiently simulable set in a way that 'honestly' represents the original error's ability to preserve or distort quantum information. After introducing and analyzing the individual gate approximation method, its utility as a means for estimating circuit performance is studied. In particular, the method is tested within the use case for which it was originally conceived: understanding the performance of a hypothetical physical implementation of a quantum error-correction protocol. It is found that the method performs exactly as desired in all cases; that is, the circuits composed of the approximated error models honestly represent the circuits composed of the errors derived from the physical models.
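A simpler relative of the idea described here is the Pauli twirl, which maps an arbitrary single-qubit error channel onto an efficiently simulable Pauli channel. The sketch below computes the twirled Pauli probabilities from a channel's Kraus operators; it only illustrates projecting a general error into the simulable set and is not the 'honest' approximation (which additionally constrains how faithfully the approximation reflects information loss) constructed in the thesis.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I, X, Y, Z]

def apply_channel(kraus_ops, rho):
    """Apply a channel given by Kraus operators to a density/operator matrix."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

def ptm_diagonal(kraus_ops):
    """Diagonal of the Pauli transfer matrix: R_PP = Tr[P E(P)] / 2."""
    return np.array([np.trace(P @ apply_channel(kraus_ops, P)).real / 2 for P in PAULIS])

def pauli_twirl_probabilities(kraus_ops):
    """Probabilities (p_I, p_X, p_Y, p_Z) of the Pauli-twirled channel."""
    d = ptm_diagonal(kraus_ops)
    # Invert the linear relation between Pauli probabilities and PTM diagonal entries.
    transform = 0.25 * np.array([[1,  1,  1,  1],
                                 [1,  1, -1, -1],
                                 [1, -1,  1, -1],
                                 [1, -1, -1,  1]])
    return transform @ d

# Example: a small coherent over-rotation about X, a typical non-Pauli gate error.
theta = 0.1
U = np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * X
print(pauli_twirl_probabilities([U]))   # ~ [cos^2(theta/2), sin^2(theta/2), 0, 0]
```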
69

An FPT Algorithm for STRING-TO-STRING CORRECTION

Lee-Cultura, Serena Glyn 24 August 2011
Parameterized string correction decision problems investigate the possibility of transforming a given string X into a target string Y using a fixed number of edit operations, k. There are four possible edit operations: swap, delete, insert and substitute. In this work we consider the NP-complete STRING-TO-STRING CORRECTION problem restricted to deletes and swaps and parameterized by the number of allowed operations. Specifically, the problem asks whether there exists a transformation from X into Y consisting of at most k deletes or swaps. We present a fixed-parameter algorithm that runs in O(2^k(k + m)) time, where m is the length of the destination string. Further, we present an implementation of an extended version of the algorithm that constructs the transformation sequence of length at most k, given its existence. This thesis concludes with a discussion comparing the practical run times obtained from our implementation with the proposed theoretical results. Efficient string correction algorithms have applications in several areas, for example computational linguistics, error detection and correction, and computational biology. / Graduate
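For reference, the decision problem itself can be stated as a short brute-force search over sequences of at most k deletions and adjacent swaps. The sketch below is only practical for very small k and is not the O(2^k(k + m)) fixed-parameter algorithm developed in the thesis; it is the kind of baseline one might use to sanity-check a faster implementation.

```python
def correctable(x: str, y: str, k: int) -> bool:
    """Can x be turned into y using at most k deletions or adjacent swaps?"""
    seen = set()

    def search(s: str, budget: int) -> bool:
        if s == y:
            return True
        if budget == 0 or (s, budget) in seen:
            return False
        seen.add((s, budget))
        # Without insertions, a string shorter than y can never reach it.
        if len(s) < len(y):
            return False
        for i in range(len(s)):                                  # try deleting s[i]
            if search(s[:i] + s[i + 1:], budget - 1):
                return True
        for i in range(len(s) - 1):                              # try swapping s[i], s[i+1]
            swapped = s[:i] + s[i + 1] + s[i] + s[i + 2:]
            if search(swapped, budget - 1):
                return True
        return False

    return search(x, k)

print(correctable("bca", "abc", 2))   # True: two adjacent swaps suffice
print(correctable("bca", "abc", 1))   # False
```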
70

Diagnostic testing and teaching of oral communication in English as a foreign language

Chen, Grace Show-ying January 1995
No description available.
