361

Diagnostique optimal d'erreurs pour architecture de qubits à mesure faible et continue

Denhez, Gabrielle January 2011 (has links)
One of the main obstacles to building a quantum computer is decoherence, which severely limits both the time available for a computation and the size of the system. To fight decoherence in a quantum system, error-correction protocols have been proposed and appear to offer a good solution to this problem. These protocols confine the information carried by the qubits to a subspace called the code space. After some evolution time, the error that has affected the system is diagnosed by performing measurements that indicate whether the system is still in the code space or has evolved into another subspace. For such protocols to be effective, the measurements must in principle be fast and projective. However, in several existing qubit architectures, measurements are weak and continuous; moreover, they can themselves introduce errors into the system. These measurement characteristics make the traditional error diagnosis difficult, and since the measurements can introduce errors, it is not even certain that traditional error-diagnosis protocols are useful. In this work, we study the usefulness of a weak, continuous measurement in an error-correction process. The study has two parts. First, we present an error-correction protocol adapted to qubit architectures in which the measurement is weak and continuous, and we show that this protocol makes it possible to determine under which conditions a measurement with these characteristics can help correct errors. Second, we test this correction protocol in the particular case of superconducting qubits: we establish under which conditions the measurement on these qubits can help diagnose errors, and we study the effect of various experimental parameters in this context.
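
To make the diagnosis step concrete, here is a minimal Python sketch of the textbook projective syndrome measurement that the abstract contrasts with weak, continuous measurement: the three-qubit repetition code against bit flips, where two parity checks reveal whether the state has left the code space and which flip occurred. This is only the classical shadow of the standard protocol, not the weak-measurement scheme developed in the thesis.

```python
import random

# Three-qubit repetition code against bit flips: logical 0 -> 000, logical 1 -> 111.
# The two parity checks (Z1Z2 and Z2Z3 in the quantum picture) report whether the
# state is still in the code space and, if not, which single flip occurred.

SYNDROME_TABLE = {
    (0, 0): None,  # still in the code space: no correction needed
    (1, 0): 0,     # flip on qubit 0
    (1, 1): 1,     # flip on qubit 1
    (0, 1): 2,     # flip on qubit 2
}

def encode(bit):
    return [bit, bit, bit]

def measure_syndrome(qubits):
    """Ideal projective parity measurements; the weak-measurement setting studied in
    the thesis would instead yield noisy, continuous-time versions of these parities."""
    return (qubits[0] ^ qubits[1], qubits[1] ^ qubits[2])

def correct(qubits):
    flip = SYNDROME_TABLE[measure_syndrome(qubits)]
    if flip is not None:
        qubits[flip] ^= 1
    return qubits

state = encode(1)
state[random.randrange(3)] ^= 1        # inject a single bit-flip error
assert correct(state) == [1, 1, 1]     # diagnosis plus correction recovers the codeword
```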
362

Décodage Viterbi Dégénéré

Pelchat, Émilie January 2013 (has links)
Error correction will be an integral part of any quantum information technology, and various quantum error-correction methods have been developed to cope with the unavoidable errors that arise from manipulating qubits and from their inevitable interaction with the environment. Among the families of error-correcting codes are convolutional codes, whose envisioned use is mainly to protect quantum information sent through a noisy communication channel. Convolutional codes are decoded with error-estimation algorithms based on a maximum-likelihood principle. The Viterbi algorithm computes this maximum likelihood efficiently: it finds the most probable codeword using tools such as trellis decoding, with a complexity linear in the number of encoded qubits. This thesis studies the effect of degeneracy on Viterbi decoding. Degeneracy is a property of quantum mechanics with no classical analogue, whereby distinct corrections can fix a given error. Previous quantum decoders based on the Viterbi algorithm were suboptimal because they did not take degeneracy into account. The main contribution of this thesis is the design of a Viterbi decoding algorithm that accounts for degenerate errors, which we test through numerical simulations. These simulations show that there is indeed an advantage to using the degeneracy-aware Viterbi decoder.
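
As an illustration of the trellis-based maximum-likelihood search the abstract refers to, here is a sketch of a generic classical (non-degenerate) Viterbi pass. The state names, transition probabilities, and channel error rate in the usage example are arbitrary placeholders; a degeneracy-aware decoder such as the one developed in the thesis would instead sum the probabilities of equivalent corrections rather than keep only the single best path.

```python
import math

def viterbi(states, start_logp, trans_logp, branch_logp, observations):
    """Generic Viterbi pass over a trellis: returns the maximum-likelihood state
    sequence, with complexity linear in the number of trellis steps.
    trans_logp[s][t]  : log-probability of the transition from state s to state t
    branch_logp(t, o) : log-likelihood of observing o on a branch entering state t
    """
    # best[s] = (log-probability of the best path ending in state s, that path)
    best = {s: (start_logp[s], [s]) for s in states}
    for obs in observations:
        new_best = {}
        for t in states:
            score, path = max(
                (best[s][0] + trans_logp[s][t] + branch_logp(t, obs), best[s][1])
                for s in states
            )
            new_best[t] = (score, path + [t])
        best = new_best
    return max(best.values())[1]

# Toy usage on a two-state trellis with a symmetric-channel branch metric
# (all numbers are illustrative assumptions, not parameters from the thesis).
states = ("0", "1")
start = {"0": math.log(0.5), "1": math.log(0.5)}
trans = {"0": {"0": math.log(0.7), "1": math.log(0.3)},
         "1": {"0": math.log(0.3), "1": math.log(0.7)}}
p_flip = 0.1
branch = lambda state, obs: math.log(1 - p_flip) if state == obs else math.log(p_flip)
print(viterbi(states, start, trans, branch, list("01100")))
```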
363

Segmentation d'images de transmission pour la correction de l'atténué en tomographie d'émission par positrons

Nguiffo Podie, Yves January 2009 (has links)
Photon attenuation directly and profoundly degrades the quality and the quantitative information obtained from a positron emission tomography (PET) image. It produces severe artifacts that complicate visual interpretation, as well as large accuracy errors in the quantitative evaluation of PET images, biasing the verification of the correlation between true and measured concentrations. Attenuation is due to the photoelectric and Compton effects for the transmission image (30 keV to 140 keV) and mostly to the Compton effect for the emission image (511 keV). The nuclear-medicine community widely agrees that attenuation correction is a crucial step for obtaining artifact-free, quantitatively accurate images. To correct PET emission images for attenuation, the proposed approach consists in segmenting a transmission image with one of four segmentation algorithms: K-means (KM), fuzzy C-means (FCM), expectation-maximization (EM), and EM after a wavelet transform (OEM). KM is an unsupervised algorithm that partitions the image pixels into clusters, each cluster of the partition being defined by its members and its centroid. FCM is an unsupervised classification algorithm that introduces fuzzy sets into the definition of the clusters: every pixel belongs to every cluster with a certain degree of membership, and each cluster is characterized by its center of gravity. The EM algorithm is an estimation method that determines the maximum-likelihood parameters of a mixture of distributions, the model parameters to estimate being the mean, the covariance, and the mixture weight of each cluster. Wavelets decompose the signal into a sequence of so-called approximation signals of decreasing resolution, followed by a sequence of corrections called details; the wavelet-transformed image is then segmented with EM. Attenuation correction requires converting the intensities of the segmented transmission images into attenuation coefficients at 511 keV. Attenuation correction factors (ACF), representing the ratio of emitted to transmitted photons, are then obtained for each line of response. The sinogram of ACFs, formed by the set of lines of response, is multiplied by the sinogram of the emission image to obtain the attenuation-corrected sinogram, which is then reconstructed to generate the corrected PET emission image. We demonstrated the usefulness of the proposed methods for medical image segmentation by applying them to images of the human brain, thorax, and abdomen. Of the four segmentation processes, Haar wavelet decomposition followed by expectation-maximization (OEM) appears to give the best results in terms of contrast and resolution. The segmentations clearly reduced the propagation of transmission-image noise into the emission images, improving lesion detection and nuclear-medicine diagnosis.
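
The following Python sketch illustrates the segmentation-based attenuation-correction pipeline described above, using K-means as the example segmenter. The 511 keV attenuation coefficients, pixel size, and the horizontal-line geometry are illustrative assumptions, not values taken from the thesis; a real scanner would forward-project the attenuation map over the full sinogram geometry.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical linear attenuation coefficients at 511 keV (cm^-1) for air, lung,
# soft tissue, and bone; the values actually used in the thesis are not stated here.
MU_511 = [0.000, 0.030, 0.096, 0.170]

def segment_to_mu_map(transmission):
    """Segment a reconstructed transmission image with K-means (one of the four methods
    compared) and map each class, ordered by mean intensity, to a 511 keV coefficient.
    Assumes higher transmission-image intensity means more attenuating tissue."""
    km = KMeans(n_clusters=len(MU_511), n_init=10, random_state=0)
    labels = km.fit_predict(transmission.reshape(-1, 1)).reshape(transmission.shape)
    order = np.argsort(km.cluster_centers_.ravel())   # lowest ~ air, highest ~ bone
    mu_map = np.zeros(transmission.shape, dtype=float)
    for rank, cls in enumerate(order):
        mu_map[labels == cls] = MU_511[rank]
    return mu_map

def attenuation_correction_factors(mu_map, pixel_cm=0.4):
    """Beer-Lambert ACF = exp(integral of mu dl) along each (here: horizontal) line
    of response, i.e. the ratio of emitted to transmitted photons."""
    return np.exp(mu_map.sum(axis=1) * pixel_cm)

# The attenuation-corrected sinogram is then emission_sinogram * ACF, line by line.
```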
364

625 MBIT/SEC BIT ERROR LOCATION ANALYSIS FOR INSTRUMENTATION RECORDING APPLICATIONS

Waschura, Thomas E. 10 1900 (has links)
International Telemetering Conference Proceedings / October 26-29, 1998 / Town & Country Resort Hotel and Convention Center, San Diego, California / This paper describes techniques for error location analysis used in the design and testing of high-speed instrumentation data recording and communications applications. It focuses on the differences between common bit error rate testing and new error location analysis. Examples of techniques presented include separating bit and burst error components, studying probability of burst occurrences, looking at error free interval occurrence rates as well as auto-correlating error position. Each technique contributes to a better understanding of the underlying error phenomenon and enables higher-quality digital recording and communication. Specific applications in error correction coding emulation, magnetic media error mapping and systematic error interference are discussed.
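
A minimal Python sketch of the kind of error-location statistics described here: separating bit and burst error components, tabulating error-free intervals, and autocorrelating error positions to expose systematic interference. The 8-bit burst guard band is an arbitrary illustrative assumption, not a parameter from the paper.

```python
import numpy as np

def error_location_stats(error_positions, burst_gap=8):
    """Group recorded bit-error positions into bursts and report error-free intervals.
    Errors closer together than `burst_gap` bits are counted as one burst."""
    pos = np.asarray(sorted(error_positions))
    if pos.size == 0:
        return {"bit_errors": 0, "bursts": 0, "burst_lengths": [], "error_free_intervals": []}
    gaps = np.diff(pos)                          # error-free intervals between errors
    burst_breaks = np.where(gaps >= burst_gap)[0]
    bursts = np.split(pos, burst_breaks + 1)     # groups of errors forming bursts
    return {
        "bit_errors": int(pos.size),
        "bursts": len(bursts),
        "burst_lengths": [int(b[-1] - b[0] + 1) for b in bursts],
        "error_free_intervals": gaps.tolist(),
    }

def error_position_autocorrelation(error_positions, record_len, max_lag=64):
    """Autocorrelation of the 0/1 error-indicator sequence; peaks at nonzero lags
    suggest periodic (systematic) error interference."""
    e = np.zeros(record_len)
    e[list(error_positions)] = 1.0
    e -= e.mean()
    return np.array([np.dot(e[:record_len - k], e[k:]) for k in range(1, max_lag + 1)])
```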
365

Adaptation of a Loral ADS 100 as a Remote Ocean Buoy Maintenance System

Sharp, Kirk, Thompson, Lorraine Masi 11 1900 (has links)
International Telemetering Conference Proceedings / October 30-November 02, 1989 / Town & Country Hotel & Convention Center, San Diego, California / The Naval Ocean Research and Development Activity (NORDA) has adapted the Loral Instrumentation Advanced Decommutation system (ADS 100) as a portable maintenance system for one of its remotely deployable buoy systems. This particular buoy system sends up to 128 channels of amplified sensor data to a centralized A/D for formatting and storage on a high density digital recorder. The resulting tapes contain serial PCM data in a format consistent with IRIG Standard 106-87. Predictable and correctable perturbations exist within the data due to the quadrature multiplexed telemetry system. The ADS 100 corrects for the perturbations of the telemetry system and provides the user with diagnostic tools to examine the stored data stream and determine the operational status of the buoy system prior to deployment.
367

Impact of attenuation correction on clinical [18F]FDG brain PET in combined PET/MRI

Werner, Peter, Rullmann, Michael, Bresch, Anke, Tiepolt, Solveig, Lobsien, Donald, Schröter, Matthias, Sabri, Osama, Barthel, Henryk 20 June 2016 (has links) (PDF)
Background: In PET/MRI, linear photon attenuation coefficients for attenuation correction (AC) cannot be directly derived, and cortical bone is, so far, usually not considered. This results in an underestimation of the average PET signal in PET/MRI. Recently introduced MR-AC methods predicting bone information from anatomic MRI or proton density weighted zero-time imaging may solve this problem in the future. However, there is an ongoing debate about whether the current error is acceptable for clinical use and/or research. Methods: We examined this issue for [18F] fluorodeoxyglucose (FDG) brain PET in 13 patients with clinical signs of dementia or movement disorders who subsequently underwent PET/CT and PET/MRI on the same day. Multiple MR-AC approaches, including a CT-derived AC, were applied. Results: The resulting PET data were compared to the CT-derived standard regarding the quantification error and its clinical impact. On a quantitative level, deviations of −11.9 % to +2 % from the CT-AC standard were found. These deviations, however, did not translate into a systematic diagnostic error, as the overall patterns of hypometabolism, which are decisive for clinical diagnosis, remained largely unchanged. Conclusions: Despite the quantitative error caused by the omission of bone in MR-AC, the clinical quality of brain [18F]FDG PET is not relevantly affected. Thus, brain [18F]FDG PET can, even now with suboptimal MR-AC, already be used for clinical routine purposes, even though MR-AC warrants improvement.
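
As a sketch of the kind of region-wise comparison behind the quoted −11.9 % to +2 % range, the hypothetical Python function below computes the percent deviation of MR-AC PET from the CT-AC reference per region of interest; it is not the authors' analysis pipeline, and the background-label convention is an assumption.

```python
import numpy as np

def regional_deviation(pet_mrac, pet_ctac, roi_labels):
    """Percent deviation of MR-AC PET from the CT-AC reference, per region of interest.
    pet_mrac, pet_ctac : co-registered PET volumes (same shape)
    roi_labels         : integer label volume, with label 0 assumed to be background
    """
    results = {}
    for roi in np.unique(roi_labels):
        if roi == 0:
            continue
        mask = roi_labels == roi
        ref = pet_ctac[mask].mean()
        results[int(roi)] = 100.0 * (pet_mrac[mask].mean() - ref) / ref
    return results
```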
368

Novel methods for scatter correction and dual energy imaging in cone-beam CT

Dong, Xue 22 May 2014 (has links)
Excessive imaging doses from repeated scans and poor image quality mainly due to scatter contamination are the two bottlenecks of cone-beam CT (CBCT) imaging. This study investigates a method that combines measurement-based scatter correction and a compressed sensing (CS)-based iterative reconstruction algorithm to generate scatter-free images from low-dose data. Scatter distribution is estimated by interpolating/extrapolating measured scatter samples inside blocked areas. CS-based iterative reconstruction is finally carried out on the under-sampled data to obtain scatter-free and low-dose CBCT images. In the tabletop phantom studies, with only 25% dose of a conventional CBCT scan, our method reduces the overall CT number error from over 220 HU to less than 25 HU, and increases the image contrast by a factor of 2.1 in the selected ROIs. Dual-energy CT (DECT) is another important application of CBCT. DECT shows promise in differentiating materials that are indistinguishable in single-energy CT and facilitates accurate diagnosis. A general problem of DECT is that decomposition is sensitive to noise in the two sets of projection data, resulting in severely degraded qualities of decomposed images. The first study of DECT is focused on the linear decomposition method. In this study, a combined method of iterative reconstruction and decomposition is proposed. The noise on the two initial CT images from separate scans becomes well correlated, which avoids noise accumulation during the decomposition process. To fully explore the benefits of DECT on beam-hardening correction and to reduce the computation cost, the second study is focused on an iterative decomposition method with a non-linear decomposition model for noise suppression in DECT. Phantom results show that our methods achieve superior performance on DECT imaging, with respect to noise reduction and spatial resolution.
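
For context, here is a sketch of the conventional image-domain linear two-material decomposition whose noise sensitivity the thesis addresses; the function name is hypothetical and the basis coefficients are calibration inputs, so this is the noise-prone baseline method, not the combined iterative reconstruction-decomposition proposed in the work. Because the decomposition applies the inverse of the mixing matrix pixel by pixel, uncorrelated noise in the two input images is amplified, which is exactly what correlating the noise between the two reconstructions is meant to avoid.

```python
import numpy as np

def dual_energy_decompose(ct_low, ct_high, basis_low, basis_high):
    """Pixel-wise linear two-material decomposition.
    ct_low, ct_high       : CT images (attenuation values) at the low and high energy
    basis_low, basis_high : length-2 vectors of the two basis materials' attenuation
                            values at the low and high energy (from calibration)
    Solves [ct_low, ct_high]^T = A [f1, f2]^T per pixel, with A = [basis_low; basis_high].
    """
    A = np.array([basis_low, basis_high], dtype=float)   # 2x2 mixing matrix
    A_inv = np.linalg.inv(A)                             # noise in ct_low/ct_high is amplified here
    stacked = np.stack([ct_low, ct_high], axis=-1)       # shape (..., 2)
    fractions = stacked @ A_inv.T                        # shape (..., 2) material images
    return fractions[..., 0], fractions[..., 1]
```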
369

Genetic Correction of Duchenne Muscular Dystrophy using Engineered Nucleases

Ousterout, David Gerard January 2014 (has links)
Duchenne muscular dystrophy (DMD) is a severe hereditary disorder caused by a loss of dystrophin, an essential musculoskeletal protein. Decades of promising research have yielded only modest gains in survival and quality of life for these patients and there have been no approved gene therapies for DMD to date. There are two significant hurdles to creating effective gene therapies for DMD; it is difficult to deliver a replacement dystrophin gene due to its large size and current strategies to restore the native dystrophin gene likely require life-long administration of a gene-modifying drug. This thesis presents a novel method to address these challenges through restoring dystrophin expression by genetically correcting the native dystrophin gene using engineered nucleases that target one or more exons in a mutational hotspot in exons 45-55 of the dystrophin gene. Importantly, this hotspot mutational region collectively represents approximately 62% of all DMD mutations. In this work, we utilize various engineered nuclease platforms to create genetic modifications that can correct a variety of DMD patient mutations.

Initially, we demonstrate that genome editing can efficiently correct the dystrophin reading frame and restore protein expression by introducing micro-frameshifts in exon 51, which is adjacent to a hotspot mutational region in the dystrophin gene. Transcription activator-like effector nucleases (TALENs) were engineered to mediate highly efficient gene editing after introducing a single TALEN pair targeted to exon 51 of the dystrophin gene. This led to restoration of dystrophin protein expression in cells from DMD patients, including skeletal myoblasts and dermal fibroblasts that were reprogrammed to the myogenic lineage by MyoD. We show that our engineered TALENs have minimal cytotoxicity and exome sequencing of cells with targeted modifications of the dystrophin locus showed no TALEN-mediated off-target changes to the protein coding regions of the genome, as predicted by in silico target site analysis.

In an alternative approach, we capitalized on the recent advances in genome editing to generate permanent exclusion of exons by using zinc-finger nucleases (ZFNs) to selectively remove sequences important in specific exon recognition. This strategy has the advantage of creating predictable frame restoration and protein expression, although it relies on simultaneous nuclease activity to generate genomic deletions. ZFNs were designed to remove essential splicing sequences in exon 51 of the dystrophin gene and thereby exclude exon 51 from the resulting dystrophin transcript, a method that can potentially restore the dystrophin reading frame in up to 13% of DMD patients. Nucleases were assembled by extended modular assembly and context-dependent assembly methods and screened for activity in human cells. Selected ZFNs had moderate observable cytotoxicity and one ZFN showed off-target activity at two chromosomal loci. Two active ZFN pairs flanking the exon 51 splice acceptor site were transfected into DMD patient cells and a clonal population was isolated with this region deleted from the genome. Deletion of the genomic sequence containing the splice acceptor resulted in the loss of exon 51 from the dystrophin mRNA transcript and restoration of dystrophin expression in vitro. Furthermore, transplantation of corrected cells into the hind limb of immunodeficient mice resulted in efficient human dystrophin expression localized to the sarcolemma.

Finally, we exploited the increased versatility, efficiency, and multiplexing capabilities of the CRISPR/Cas9 system to enable a variety of otherwise challenging gene correction strategies for DMD. Single or multiplexed sgRNAs were designed to restore the dystrophin reading frame by targeting the mutational hotspot at exons 45-55 and introducing either intraexonic small insertions and deletions, or large deletions of one or more exons. Significantly, we generated a large deletion of 336 kb across the entire exon 45-55 region that is applicable to correction of approximately 62% of DMD patient mutations. We show that, for selected sgRNAs, CRISPR/Cas9 gene editing displays minimal cytotoxicity and limited aberrant mutagenesis at off-target chromosomal loci. Following treatment with Cas9 nuclease and one or more sgRNAs, dystrophin expression was restored in Duchenne patient muscle cells in vitro. Human dystrophin was detected in vivo following transplantation of genetically corrected patient cells into immunodeficient mice.

In summary, the objective of this work was to develop methods to genetically correct the native dystrophin as a potential therapy for DMD. These studies integrate the rapid advances in gene editing technologies to create targeted frameshifts that restore the dystrophin gene around patient mutations in non-essential coding regions. Collectively, this thesis presents several gene editing methods that can correct patient mutations by modification of specific exons or by deletion of one or more exons that results in restoration of the dystrophin reading frame. Importantly, the gene correction methods described here are compatible with leading cell-based therapies and in vivo gene delivery strategies for DMD, providing an avenue towards a cure for this devastating disease. / Dissertation
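
A small illustrative Python check of the reading-frame rule that underlies these exon-deletion strategies: a deletion leaves the transcript in frame when the total number of removed coding nucleotides is a multiple of three. The exon lengths below are placeholders chosen only to make the example self-consistent, not values taken from the thesis.

```python
# Hypothetical exon lengths (nucleotides) for the exon 45-55 hotspot; real dystrophin
# exon lengths would have to come from an annotation database.
EXON_LENGTH = {45: 176, 46: 148, 47: 150, 48: 186, 49: 102, 50: 109, 51: 233,
               52: 118, 53: 212, 54: 155, 55: 190}

def frame_restored(deleted_exons):
    """A combined deletion (patient mutation plus engineered excision) leaves the
    transcript in frame when the removed coding length is divisible by 3."""
    return sum(EXON_LENGTH[e] for e in deleted_exons) % 3 == 0

# With these placeholder lengths, losing exons 48-50 removes 397 nt (out of frame),
# while additionally excising exon 51 removes 630 nt (back in frame), and deleting
# the whole 45-55 region is also in frame.
print(frame_restored([48, 49, 50]))        # False
print(frame_restored([48, 49, 50, 51]))    # True
print(frame_restored(range(45, 56)))       # True
```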
370

AN INEXPENSIVE DATA ACQUISITION SYSTEM FOR MEASURING TELEMETRY SIGNALS ON TEST RANGES TO ESTIMATE CHANNEL CHARACTERISTICS

Horne, Lyman D., Dye, Ricky G. 11 1900 (has links)
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada / In an effort to determine a more accurate characterization of the multipath fading effects on telemetry signals, the BYU telemetering group is implementing an inexpensive data acquisition system to measure these effects. It is designed to measure important signals in a diversity combining system. The received RF envelope, AGC signal, and the weighting signal for each beam, as well as the IRIG B time stamp will be sampled and stored. This system is based on an 80x86 platform for simplicity, compactness, and ease of use. The design is robust and portable to accommodate measurements in a variety of locations including aircraft, ground, and mobile environments.
