41 |
Magellan Recorder Data Recovery Algorithms - Scott, Chuck; Nussbaum, Howard; Shaffer, Scott 10 1900 (has links)
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada / This paper describes algorithms implemented by the Magellan High Rate Processor to recover radar data corrupted by the failure of an onboard tape recorder that dropped bits. For data with error correction coding, an algorithm was developed that decodes data in the presence of bit errors and missing bits. For the SAR data, the algorithm takes advantage of properties of SAR data to locate corrupted bits and reduce their effects on downstream processing. The algorithms rely on communications approaches, including an efficient tree search and the Viterbi algorithm, to maintain the required throughput rate.
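For readers unfamiliar with the decoding machinery mentioned above, the following is a generic, illustrative hard-decision Viterbi decoder for a small rate-1/2 convolutional code (generators 7 and 5 in octal). It is a textbook sketch, not the Magellan High Rate Processor's implementation, which additionally had to cope with missing bits.

```python
# Illustrative hard-decision Viterbi decoder for a rate-1/2, constraint-length-3
# convolutional code (generator polynomials 7 and 5 in octal).

def encode(bits, g=(0b111, 0b101)):
    """Encode a bit list with the rate-1/2 code; returns one output pair per input bit."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                      # 3-bit register: [newest, prev, prev-1]
        out.append((bin(reg & g[0]).count("1") % 2,
                    bin(reg & g[1]).count("1") % 2))
        state = reg >> 1                            # keep the two most recent input bits
    return out

def viterbi_decode(symbols, g=(0b111, 0b101)):
    """Minimum Hamming-distance (maximum-likelihood) decoding over the 4-state trellis."""
    INF, n_states = float("inf"), 4
    metric = [0.0] + [INF] * (n_states - 1)         # encoder starts in the all-zero state
    paths = [[] for _ in range(n_states)]
    for r0, r1 in symbols:
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for state in range(n_states):
            if metric[state] == INF:
                continue
            for b in (0, 1):                        # hypothesize the next input bit
                reg = (b << 2) | state
                c0 = bin(reg & g[0]).count("1") % 2
                c1 = bin(reg & g[1]).count("1") % 2
                nxt = reg >> 1
                m = metric[state] + (c0 != r0) + (c1 != r1)
                if m < new_metric[nxt]:             # keep the survivor path into each state
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg + [0, 0])                        # two tail bits flush the encoder
coded[3] = (1 - coded[3][0], coded[3][1])           # corrupt one channel bit
print(viterbi_decode(coded)[:len(msg)] == msg)      # True: the single bit error is corrected
```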
|
42 |
A Precision Angular Correlation Table and Calculation of Geometrical Correction Factors - Rowton, Larry James 01 1900 (has links)
In recent years γ-γ angular correlations have been very useful in confirming the spins of excited nuclear states. Angular correlation techniques have also been employed to study the electric and magnetic character of excited nuclear states. With these applications in mind, it was decided to design, construct, and test a precision angular correlation table.
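For context (standard angular-correlation practice, not detail taken from this thesis): the measured correlation is usually written W(θ) = 1 + A22 Q2 P2(cos θ) + A44 Q4 P4(cos θ), where the Qk are the geometrical (finite solid-angle) correction factors the title refers to. A minimal numerical sketch, assuming a circular detector face of a given half-angle with uniform efficiency across it:

```python
# Hedged illustration: solid-angle correction factors Q_k for a circular detector
# of half-angle gamma_max, assuming uniform efficiency over the detector face.
# Q_k = J_k / J_0 with J_k = integral of P_k(cos b) sin b db from 0 to gamma_max.
import numpy as np
from numpy.polynomial import legendre
from scipy.integrate import quad

def q_factor(k, half_angle_deg):
    gmax = np.radians(half_angle_deg)
    coeffs = [0] * k + [1]                          # Legendre series picking out P_k
    pk = lambda b: legendre.legval(np.cos(b), coeffs)
    jk, _ = quad(lambda b: pk(b) * np.sin(b), 0.0, gmax)
    j0, _ = quad(np.sin, 0.0, gmax)
    return jk / j0

# Attenuation of the P_2 and P_4 terms for a detector subtending a 15 degree half-angle.
print(q_factor(2, 15.0), q_factor(4, 15.0))
```

For a two-detector coincidence measurement, each Pk term is attenuated by the product of the Qk factors of the two detectors.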
|
43 |
Étude des artefacts en tomodensitométrie par simulation Monte Carlo / Study of artifacts in computed tomography using Monte Carlo simulation - Bedwani, Stéphane 08 1900 (has links)
En radiothérapie, la tomodensitométrie (CT) fournit l’information anatomique du patient utile au calcul de dose durant la planification de traitement. Afin de considérer la composition hétérogène des tissus, des techniques de calcul telles que la méthode Monte Carlo sont nécessaires pour calculer la dose de manière exacte. L’importation des images CT dans un tel calcul exige que chaque voxel exprimé en unité Hounsfield (HU) soit converti en une valeur physique telle que la densité électronique (ED). Cette conversion est habituellement effectuée à l’aide d’une courbe d’étalonnage HU-ED. Une anomalie ou artefact qui apparaît dans une image CT avant l’étalonnage est
susceptible d’assigner un mauvais tissu à un voxel. Ces erreurs peuvent causer une perte cruciale de fiabilité du calcul de dose.
Ce travail vise à attribuer une valeur exacte aux voxels d’images CT afin d’assurer la fiabilité des calculs de dose durant la planification de traitement en radiothérapie. Pour y parvenir, une étude est réalisée sur les artefacts qui sont reproduits par simulation Monte Carlo. Pour réduire le temps de calcul, les simulations sont parallélisées et transposées sur un superordinateur. Une étude de sensibilité des nombres HU en présence d’artefacts est ensuite réalisée par une analyse statistique des histogrammes. À l’origine de nombreux artefacts, le durcissement de faisceau est étudié davantage. Une revue sur l’état de l’art en matière de correction du durcissement de faisceau est présentée, suivie d’une démonstration explicite d’une correction empirique. / Computed tomography (CT) is widely used in radiotherapy to acquire patient-specific data for an accurate dose calculation in treatment planning. To account for the composition of heterogeneous tissues, calculation techniques such as the Monte Carlo method are needed to compute an exact dose distribution. To use CT images with dose calculation algorithms, all voxel values, expressed in Hounsfield units (HU), must be converted into relevant physical parameters such as the electron density (ED). This conversion is typically accomplished by means of an HU-ED calibration curve. Any discrepancy (or artifact) that appears in the reconstructed CT image prior to calibration is likely to yield wrongly assigned tissues. Such tissue misassignment may crucially decrease the reliability of the dose calculation.
The aim of this work is to assign exact physical values to CT image voxels to ensure the reliability of dose calculation in radiotherapy treatment planning. To achieve this, the origins of CT artifacts are first studied using Monte Carlo simulations. Such simulations require considerable computational time and were parallelized to run efficiently on a supercomputer. A sensitivity study of HU uncertainties due to CT artifacts is then performed using statistical analysis of the image histograms. The beam hardening effect appears to be the origin of several artifacts and is specifically addressed. Finally, a review of the state of the art in beam hardening correction is presented and an empirical correction is described in detail.
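To make the conversion step concrete, here is a minimal sketch of a piecewise-linear HU-to-ED lookup; the calibration points are illustrative placeholder values, not the curve used in this work, which would be measured with a tissue-substitute phantom.

```python
import numpy as np

# Illustrative HU -> relative electron density (ED) calibration points (placeholders).
hu_points = np.array([-1000.0, -100.0, 0.0, 100.0, 1000.0, 3000.0])
ed_points = np.array([0.001, 0.93, 1.00, 1.07, 1.55, 2.50])

def hu_to_ed(hu_image):
    """Convert a CT image in Hounsfield units to relative electron density
    by piecewise-linear interpolation of the calibration curve."""
    return np.interp(hu_image, hu_points, ed_points)

ct_slice = np.array([[-1000.0, -50.0], [40.0, 900.0]])   # toy 2x2 CT slice
print(hu_to_ed(ct_slice))
```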
|
44 |
Body Deformation Correction for SPECT Imaging - Gu, Songxiang 09 July 2009 (has links)
"Single Photon Emission Computed Tomography (SPECT) is a medical imaging modality that allows us to visualize functional information about a patient's specific organ or body systems. During 20 minute scan, patients may move. Such motion will cause misalignment in the reconstruction, degrade the quality of 3D images and potentially lead to errors in diagnosis. Body bend and twist are types of patient motion that may occur during SPECT imaging and which has been generally ignored in SPECT motion correction strategies. To correct for these types of motion we propose a deformation model and its inclusion within an iterative reconstruction algorithm. One simulation and three experiments were conducted to investigate the applicability of our model. The simulation employed simulated projections of the MCAT phantom formed using an analytical projector which includes attenuation and distance-dependent resolution to investigate applications of our model in reconstruction. We demonstrate in the simulation studies that twist and bend can significantly degrade SPECT image quality visually. Our correction strategy is shown to be able to greatly diminish the degradation seen in the slices, provided the parameters are estimated accurately. To verify the correctness of our deformation model, we design the first experiment. In this experiment, the return of the post-motion-compensation locations of markers on the body-surface of a volunteer to approximate their original coordinates is used to examine our method of estimating the parameters of our model and the parameters' use in undoing deformation. Then, we design an MRI based experiment to validate our deformation model without any reconstruction. We use the surface marker motion to alter an MRI body volume to compensate the deformation the volunteer undergoes during data acquisition, and compare the motion-compensated volume with the motionless volume. Finally, an experiment with SPECT acquisitions and modified MLEM algorithm is designed to show the contribution of our deformation correction for clinical SPECT imaging. We view this work as a first step towards being able to estimate and correct patient deformation based on information obtained from marker tracking data."
|
45 |
Automatic analysis and repair of exception bugs for Java programs / Analyse et réparation automatique des bugs liés aux exceptions en Java - Cornu, Benoit 26 November 2015 (has links)
Le monde est de plus en plus informatisé. Il y a de plus en plus de logiciels en cours d'exécution partout, depuis les ordinateurs personnels jusqu'aux serveurs de données, et à l'intérieur de la plupart des nouvelles inventions connectées telles que les montres ou les machines à laver intelligentes. Toutes ces technologies utilisent des applications logicielles pour effectuer les tâches pour lesquelles elles sont conçues. Malheureusement, le nombre d'erreurs de logiciels croît avec le nombre d'applications logicielles. Dans cette thèse, nous ciblons spécifiquement deux problèmes. Problème n°1 : il y a un manque d'informations de débogage pour les bugs liés à des exceptions. Cela entrave le processus de correction de bogues. Pour rendre la correction des bugs liés aux exceptions plus facile, nous proposons des techniques pour enrichir les informations de débogage. Ces techniques sont entièrement automatisées et fournissent des informations sur la cause et les possibilités de gestion des exceptions. Problème n°2 : il y a des exceptions inattendues lors de l'exécution pour lesquelles il n'y a pas de code pour gérer l'erreur. En d'autres termes, les mécanismes de résilience actuels contre les exceptions ne sont pas suffisamment efficaces. Nous proposons de nouvelles capacités de résilience qui gèrent correctement les exceptions qui n'ont jamais été rencontrées auparavant. Nous présentons quatre contributions pour résoudre les deux problèmes présentés. / The world is day by day more computerized. There is more and more software running everywhere, from personal computers to data servers, and inside most of the newly popularized inventions such as connected watches or intelligent washing machines. All of those technologies use software applications to perform the services they are designed for. Unfortunately, the number of software errors grows with the number of software applications. In isolation, software errors are often annoyances, perhaps costing one person a few hours of work when their accounting application crashes. Multiply this loss across millions of people and consider that even scientific progress is delayed or derailed by software error: in aggregate, these errors are now costly to society as a whole. We specifically target two problems. Problem #1: there is a lack of debug information for bugs related to exceptions. This hinders the bug-fixing process. To make the fixing of exception bugs easier, we propose techniques to enrich the debug information. Those techniques are fully automated and provide information about the cause and the handling possibilities of exceptions. Problem #2: there are unexpected exceptions at runtime for which there is no error-handling code. In other words, the resilience mechanisms against exceptions in currently existing (and running) applications are insufficient. We propose resilience capabilities which correctly handle exceptions that were never foreseen at specification time nor encountered during development or testing. In this thesis, we aim at finding solutions to those problems. We present four contributions to address the two presented problems.
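As a minimal illustration of the kind of resilience capability described above (a generic Python sketch, not the thesis's Java tooling): wrap a fragile call so that an exception with no dedicated handler is logged with its stack trace, which serves as debug information, and a fallback strategy is applied instead of crashing.

```python
import logging
import traceback

logging.basicConfig(level=logging.INFO)

def with_fallback(primary, fallback, *args, **kwargs):
    """Run `primary`; if it raises an exception nothing else will handle, log the
    full stack trace and apply `fallback` instead of crashing. A generic sketch of
    an exception-resilience wrapper, unrelated to the thesis's own tools."""
    try:
        return primary(*args, **kwargs)
    except Exception as exc:
        trace = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
        logging.warning("unhandled %s: %s\n%s", type(exc).__name__, exc, trace)
        return fallback(*args, **kwargs)

def parse_port(text):
    return int(text)        # raises ValueError on malformed input

def default_port(_text):
    return 8080             # degraded but safe behaviour

print(with_fallback(parse_port, default_port, "80"))        # 80
print(with_fallback(parse_port, default_port, "eighty"))    # 8080, after logging the failure
```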
|
46 |
Managing Dynamic Written Corrective Feedback: Perceptions of Experienced Teachers - Messenger, Rachel A. 01 March 2017 (has links)
Error correction for English language learners' (ELL) writing has long been debated in the field of teaching English to speakers of other languages (TESOL). Some researchers say that written corrective feedback (WCF) is beneficial, while others contest this. This study examines the manageability of the innovative strategy Dynamic Written Corrective Feedback (DWCF) and asks what factors influence the manageability of the strategy (including how long marking sessions take on average) and what suggestions experienced teachers of DWCF have. The strategy has been shown to be highly effective in previous studies, but its manageability has recently been called into question. A qualitative analysis of the manageability of DWCF was conducted via interviews with experienced teachers who have used DWCF, together with the author's own experience and reflections on using the strategy. The results indicate that this strategy can be manageable with some possible adaptations and while avoiding some common pitfalls.
|
47 |
Quantum Corrections for (Anti)-Evaporating Black Hole - Maja Burić, Voja Radovanović, rvoja@rudjer.ff.bg.ac.yu 25 July 2000 (has links)
No description available.
|
48 |
Quantum convolutional stabilizer codes - Chinthamani, Neelima 30 September 2004 (has links)
Quantum error correction codes were introduced as a means to protect quantum information from decoherence and operational errors. Based on their approach to error control, error correcting codes can be divided into two classes: block codes and convolutional codes. There has been significant development towards finding quantum block codes since they were first discovered in 1995. In contrast, quantum convolutional codes have remained largely uninvestigated. In this thesis, we develop the stabilizer formalism for quantum convolutional codes. We define distance properties of these codes and give a general method for constructing an encoding circuit from a set of generators of the stabilizer of a quantum convolutional stabilizer code. The resulting encoding circuit enables online encoding of the qubits, i.e., the encoder does not have to wait for the input transmission to end before starting the encoding process. We also develop the quantum analogue of the Viterbi algorithm. The quantum Viterbi algorithm (QVA) is a maximum-likelihood error estimation algorithm whose complexity grows linearly with the number of encoded qubits. A variation of the quantum Viterbi algorithm, the windowed QVA, is also discussed. Using the windowed QVA, we can estimate the most likely error without waiting for the entire received sequence.
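For background (an illustrative block-code example, not the convolutional construction developed in the thesis): in the stabilizer formalism a Pauli error is represented by a binary (x|z) vector, and its syndrome is the list of symplectic inner products with the stabilizer generators.

```python
import numpy as np

def pauli_to_symplectic(pauli):
    """Encode an n-qubit Pauli string (e.g. 'XZI') as a binary (x|z) vector."""
    x = [1 if p in "XY" else 0 for p in pauli]
    z = [1 if p in "ZY" else 0 for p in pauli]
    return np.array(x + z, dtype=int)

def syndrome(error, stabilizers):
    """Syndrome bit i is 0 if the error commutes with stabilizer i, 1 otherwise,
    computed with the binary symplectic inner product."""
    n = len(error)
    e = pauli_to_symplectic(error)
    bits = []
    for s in stabilizers:
        g = pauli_to_symplectic(s)
        bits.append(int((e[:n] @ g[n:] + e[n:] @ g[:n]) % 2))
    return bits

# Three-qubit bit-flip code: stabilizer generators ZZI and IZZ.
stabs = ["ZZI", "IZZ"]
print(syndrome("XII", stabs))   # [1, 0] -> X error on the first qubit
print(syndrome("IXI", stabs))   # [1, 1] -> X error on the middle qubit
print(syndrome("IIX", stabs))   # [0, 1] -> X error on the last qubit
print(syndrome("III", stabs))   # [0, 0] -> no error
```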
|
49 |
On 'nicht...sondern...' (contrastive 'not...but...') - Kasimir, Elke January 2006 (has links)
This article presents an analysis of German nicht...sondern... (contrastive not...but...) which departs from the commonly held view that this construction should be explained by appeal to its alleged corrective function. It will be demonstrated that in nicht A sondern B (not A but B), A and B just behave like stand-alone unmarked answers to a common question Q, and that this property of sondern is presuppositional in character. It is shown that from this general
observation many interesting properties of nicht...sondern... follow, among them distributional differences between German 'sondern' and German 'aber' (contrastive but, concessive but), intonational requirements and exhaustivity effects. sondern's presupposition is furthermore argued to be the result of the conventionalization of conversational implicatures.
|
50 |
A study of the robustness of magic state distillation against Clifford gate faults - Jochym-O'Connor, Tomas Raphael January 2012 (has links)
Quantum error correction and fault tolerance are at the heart of any scalable quantum computation architecture. Developing a set of tools that satisfies the requirements of fault-tolerant schemes is thus of prime importance for future quantum information processing implementations. The Clifford gate set has the desired fault-tolerant properties for many quantum error correcting codes, preventing bad propagation of errors within encoded qubits, yet it does not provide full universal quantum computation. Preparation of magic states can enable universal quantum computation in conjunction with Clifford operations; however, magic states prepared experimentally will be imperfect due to implementation errors. Thankfully, there exists a scheme to distill pure magic states from prepared noisy magic states using only operations from the Clifford group and measurement in the Z-basis; such a scheme is called magic state distillation [1]. This work investigates the robustness of magic state distillation to faults in state preparation and in the application of the Clifford gates in the protocol. We establish that the distillation scheme is robust to perturbations in the initial state preparation and characterize the set of states in the Bloch sphere that converge to the T-type magic state in different fidelity regimes. Additionally, we show that magic state distillation is robust to low levels of gate noise and that performing the distillation scheme using noisy Clifford gates is more efficient than using encoded fault-tolerant gates, due to the large overhead of fault-tolerant quantum computing architectures.
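To give a rough sense of why distillation tolerates imperfect inputs (an illustration only: the cubic map below is the leading-order error suppression of the standard 15-to-1 protocol, not of the five-qubit T-type protocol analyzed in the thesis), iterating the output-error map from below its nonzero fixed point drives the error toward zero, while starting above it makes distillation fail.

```python
# Illustrative error-suppression map for magic state distillation. The map
# eps -> 35 * eps**3 is the leading-order behaviour of the 15-to-1 protocol and is
# used here only to show threshold/convergence behaviour.
def distill(eps, rounds=5):
    history = [eps]
    for _ in range(rounds):
        eps = 35.0 * eps ** 3
        history.append(eps)
    return history

threshold = 35.0 ** -0.5        # nonzero fixed point of the map, about 0.169
print(distill(0.10))            # below threshold: error shrinks rapidly toward zero
print(distill(0.20))            # above threshold: error grows (the leading-order map breaks down)
```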
|