401

The Effects of Preferred Orientation on the Three-Dimensional Reconstruction of T = 3 Virus Particles

Chen, Chun-Hong 20 June 2008 (has links)
Cryo-EM and three-dimensional reconstruction have become important research tools for studying virus structure. These techniques have the advantages of being fast and of keeping samples in their native conformation. Simulated preferred-orientation images of DGNNV, PaV and TBSV were reconstructed with SPIDER or PurdueEM using projection matching. SPIDER favors 3-fold and 5-fold view fields, while PurdueEM favors 2-fold view fields. SPIDER can successfully reconstruct images with more noise than PurdueEM can handle. Reconstruction of the RNA cages is related to the symmetry of the capsid protein. Preferred orientation, noise and RNA cages are all factors that can affect reconstruction.
402

Compactly supported radial basis functions: multidimensional reconstruction and applications

Gelas, Arnaud; Prost, Rémy January 2007 (has links)
Doctoral thesis: Images and Systems: Villeurbanne, INSA: 2006. / Thesis written in English. Title taken from the title screen. Bibliography pp. 161-172.
403

Palaeoenvironmental reconstruction of catchment processes in sediments from Bolgoda Lake, Sri Lanka

Eriksson, Frida, Olsson, Daniel January 2015 (has links)
Bottom sediment is an archive of the historical changes in a lake and its catchment. This thesis is a palaeoenvironmental reconstruction of catchment processes in Bolgoda Lake, situated in western Sri Lanka. We studied a sediment core retrieved from this lake. In our study, we focus on multiple physical and chemical proxies: grain size, loss-on-ignition (LOI), total organic carbon (TOC) content, C:N ratio, and δ13C stored in the organic matter. The aim of this study is to contribute to a better understanding of the palaeoenvironmental conditions in the region and to allow a comparison between this site and others.

In the deepest part of the core, we see an overall high sand content, which indicates a period of higher discharge into the lake compared to what the other core parts indicate. This is probably a result of higher precipitation. This is followed in the second part by a decline in the C:N ratio and a rise in TOC, which indicates an increase in primary production in the lake. In the third part we again see a shift in the C:N ratio, indicating a source change back to more terrestrial runoff. The increase in TOC and LOI values, together with a decrease in the C:N ratio and a steady increase in δ13C, indicates an increase in lacustrine productivity in the upper part of the core.

By reconstructing the palaeoenvironmental history of Bolgoda Lake, we can conclude that some factor other than diagenetic change probably affects the lake. Our results indicate that these changes are most likely due to wetter periods and anthropogenic activity, mainly through land-use changes.
404

Design, development and implementation of a parallel algorithm for computed tomography using algebraic reconstruction technique

Melvin, Cameron 05 October 2007 (has links)
This project implements a parallel algorithm for Computed Tomography based on the Algebraic Reconstruction Technique (ART) algorithm. This technique for reconstructing pictures from projections is useful for applications such as Computed Tomography (CT or CAT). The algorithm requires fewer views, and hence less radiation, to produce an image of comparable or better quality. However, the approach is not widely used because of its computationally intensive nature in comparison with rival technologies. A faster ART algorithm could reduce the amount of radiation needed for CT imaging by producing a better image with fewer projections. A reconstruction from projections version of the ART algorithm for two dimensions was implemented in parallel using the Message Passing Interface (MPI) and OpenMP extensions for C. The message passing implementation did not result in faster reconstructions due to prohibitively long and variant communication latency. The shared memory implementation produced positive results, showing a clear computational advantage for multiple processors and measured efficiency ranging from 60-95%. Consistent with the literature, image quality proved to be significantly better compared to the industry standard Filtered Backprojection algorithm especially when reconstructing from fewer projection angles.
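The row-action correction at the heart of ART (the Kaczmarz method) can be sketched as follows. This is a generic, serial illustration of the update rule on a tiny toy system, not the parallel MPI/OpenMP implementation described in the thesis; the system matrix `A` and phantom `x_true` are made-up example data:

```python
import numpy as np

def art_reconstruct(A, b, n_iters=10, relax=1.0):
    """ART (Kaczmarz): sweep over projection rows, nudging the image x
    so that each measured ray-sum equation A[i] @ x = b[i] is satisfied."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)          # ||A[i]||^2 for each ray
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            if row_norms[i] == 0.0:
                continue
            residual = b[i] - A[i] @ x       # mismatch along ray i
            x = x + relax * (residual / row_norms[i]) * A[i]
    return x

# Toy 2-pixel "image" observed through 3 rays (a consistent system).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
b = A @ x_true
x = art_reconstruct(A, b, n_iters=50)
```

With few, noisy projections, the relaxation factor `relax` is typically lowered below 1 to damp the influence of inconsistent rays.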
406

An investigation into the use of scattered photons to improve 2D Positron Emission Tomography (PET) functional imaging quality

Sun, Hongyan January 2012 (has links)
Positron emission tomography (PET) is a powerful metabolic imaging modality, which is designed to detect two anti-parallel 511 keV photons originating from a positron-electron annihilation. However, it is possible that one or both of the annihilation photons undergo a Compton scattering in the object. This is more serious for a scanner operated in 3D mode or with large patients, where the scatter fraction can be as high as 40-60%. When one or both photons are scattered, the line of response (LOR) defined by connecting the two relevant detectors no longer passes through the annihilation position. Thus, scattered coincidences degrade image contrast and compromise quantitative accuracy. Various scatter correction methods have been proposed, but most of them are based on estimating and subtracting the scatter from the measured data or incorporating it into an iterative reconstruction algorithm. By accurately measuring the scattered photon energy and taking advantage of the kinematics of Compton scattering, two circular arcs (TCA) in 2D can be identified, which describe the locus of all the possible scattering positions and encompass the point of annihilation. In the limiting case where the scattering angle approaches zero, the TCA approach the LOR for true coincidences. Based on this knowledge, a Generalized Scatter (GS) reconstruction algorithm has been developed in this thesis, which can use both true and scattered coincidences to extract the activity distribution in a consistent way. The annihilation position within the TCA can be further confined by adding a patient outline as a constraint in the GS algorithm. An attenuation correction method for the scattered coincidences was also developed in order to remove the imaging artifacts. A geometrical model that characterizes the different probabilities of the annihilation positions within the TCA was also proposed. This can speed up image convergence and improve reconstructed image quality.
Finally, the GS algorithm has been adapted to deal with non-ideal energy resolutions. In summary, an algorithm that implicitly incorporates scattered coincidences into the image reconstruction has been developed. Our results demonstrate that this eliminates the need for scatter correction and can improve system sensitivity and image quality. / February 2016
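The Compton kinematics underlying the TCA construction can be sketched numerically. These are the standard Compton relations for a 511 keV annihilation photon, not code from the thesis: measuring the scattered photon's energy fixes the scattering angle, which in turn defines the arcs of candidate scattering positions:

```python
import math

M_E_C2 = 511.0  # electron rest energy in keV = annihilation photon energy

def scattered_energy(theta):
    """Energy (keV) of a 511 keV photon after Compton scattering by theta (rad):
    E' = E / (1 + (E / m_e c^2) * (1 - cos theta)), which reduces to
    511 / (2 - cos theta) when E = m_e c^2."""
    return M_E_C2 / (2.0 - math.cos(theta))

def scattering_angle(e_prime):
    """Invert the relation: recover the scattering angle (rad) from the
    measured scattered-photon energy in keV (valid for 511/3 < e_prime <= 511)."""
    return math.acos(2.0 - M_E_C2 / e_prime)
```

As the measured energy approaches 511 keV, the recovered angle goes to zero, matching the limiting case in the abstract where the TCA collapse onto the true-coincidence LOR.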
407

Robust, refined and selective matching for accurate camera pose estimation / Sélection et raffinement de mises en correspondance robustes pour l'estimation de pose précise de caméras

Liu, Zhe 13 April 2015 (has links)
Grâce aux progrès récents en photogrammétrie, il est désormais possible de reconstruire automatiquement un modèle d'une scène 3D à partir de photographies ou d'une vidéo. La reconstruction est réalisée en plusieurs étapes. Tout d'abord, on détecte des traits saillants (features) dans chaque image, souvent des points mais plus généralement des régions. Puis on cherche à les mettre en correspondance entre images. On utilise ensuite les traits communs à deux images pour déterminer la pose (positions et orientations) relative des images. Puis les poses sont mises dans un même repère global et la position des traits saillants dans l'espace est reconstruite (structure from motion). Enfin, un modèle 3D dense de la scène peut être estimé. La détection de traits saillants, leur appariement, ainsi que l'estimation de la position des caméras, jouent des rôles primordiaux dans la chaîne de reconstruction 3D. Des imprécisions ou des erreurs dans ces étapes ont un impact majeur sur la précision et la robustesse de la reconstruction de la scène entière. Dans cette thèse, nous nous intéressons à l'amélioration des méthodes pour établir la correspondance entre régions caractéristiques et pour les sélectionner lors de l'estimation des poses de caméras, afin de rendre les résultats de reconstruction plus robustes et plus précis. Nous introduisons tout d'abord une contrainte photométrique pour une paire de correspondances (VLD) au sein d'une même image, qui est plus fiable que les contraintes purement géométriques. Puis, nous proposons une méthode semi-locale (K-VLD) pour la mise en correspondance, basée sur cette contrainte photométrique. Nous démontrons que notre méthode est très robuste pour des scènes rigides, mais aussi non-rigides ou répétitives, et qu'elle permet d'améliorer la robustesse et la précision de méthodes d'estimation de poses, notamment basées sur RANSAC. 
Puis, pour améliorer l'estimation de la position des caméras, nous analysons la précision des reconstructions et des estimations de pose en fonction du nombre et de la qualité des correspondances. Nous en dérivons une formule expérimentale caractérisant la relation ``qualité contre quantité''. Sur cette base, nous proposons une méthode pour sélectionner un sous-ensemble des correspondances de meilleure qualité de façon à obtenir une très haute précision en estimation de poses. Nous cherchons aussi à raffiner la précision de localisation des points en correspondance. Pour cela, nous développons une extension de la méthode de mise en correspondance aux moindres carrés (LSM) en introduisant un échantillonnage irrégulier et une exploration des échelles d'images. Nous montrons que le raffinement et la sélection de correspondances agissent indépendamment pour améliorer la reconstruction. Combinées, les deux méthodes produisent des résultats encore meilleurs. / With the recent progress in photogrammetry, it is now possible to automatically reconstruct a model of a 3D scene from pictures or videos. The model is reconstructed in several stages. First, salient features (often points, but more generally regions) are detected in each image. Second, features that are common to image pairs are matched. Third, the matched features are used to estimate the relative pose (position and orientation) of the images. The global poses are then computed, as well as the 3D locations of these features (structure from motion). Finally, a dense 3D model can be estimated. The detection of salient features, their matching, and the estimation of camera poses play a crucial role in the reconstruction process. Inaccuracies or errors in these stages have a major impact on the accuracy and robustness of the reconstruction of the entire scene. 
In this thesis, we propose better methods for feature matching and feature selection, which improve the robustness and accuracy of existing methods for camera pose estimation. We first introduce a photometric pairwise constraint for feature matches (VLD), which is more reliable than purely geometric constraints. We then propose a semi-local matching approach (K-VLD) based on this photometric constraint. We show that our method is very robust, not only for rigid scenes but also for non-rigid and repetitive scenes, and that it improves the robustness and accuracy of pose estimation methods, such as those based on RANSAC. To improve the accuracy of camera position estimation, we study the accuracy of reconstruction and pose estimation as a function of the number and quality of matches, and experimentally derive a "quality vs. quantity" relation. Using this relation, we propose a method to select a subset of high-quality matches that produces highly accurate pose estimates. We also aim to refine match positions. For this, we propose an improvement of least-squares matching (LSM) using an irregular sampling grid and image scale exploration. We show that match refinement and match selection independently improve the reconstruction results, and that when combined, the results improve further.
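As a rough illustration of the RANSAC-style robust estimation that such match selection feeds into, here is a minimal 2D line-fitting sketch. It is a toy stand-in for pose estimation (the real problem fits an essential or fundamental matrix from matches); the data, threshold, and iteration count are made-up assumptions:

```python
import random

def ransac_line(points, n_iters=200, tol=0.1, seed=0):
    """Fit y = a*x + b robustly: repeatedly fit a minimal 2-point sample
    and keep the model that explains the most points within tol."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                      # degenerate minimal sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points
                   if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Ten points on y = 2x + 1, contaminated by two gross outliers.
points = [(float(x), 2.0 * x + 1.0) for x in range(10)] + [(3.0, 20.0), (7.0, -5.0)]
model, inliers = ransac_line(points)
```

Selecting fewer but better matches, as the thesis proposes, directly raises the inlier ratio and hence the probability that a minimal sample is outlier-free.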
408

Méthodes itératives pour la reconstruction tomographique régularisée / Iterative Methods in regularized tomographic reconstruction

Paleo, Pierre 13 November 2017 (has links)
Au cours des dernières années, les techniques d'imagerie par tomographie se sont diversifiées pour de nombreuses applications. Cependant, des contraintes expérimentales conduisent souvent à une acquisition de données limitées, par exemple les scans rapides ou l'imagerie médicale pour laquelle la dose de rayonnement est une préoccupation majeure. L'insuffisance de données peut prendre la forme d'un faible rapport signal à bruit, de peu de vues, ou d'une gamme angulaire manquante. D'autre part, les artefacts nuisent à la qualité de reconstruction. Dans ces contextes, les techniques standard montrent leurs limitations. Dans ce travail, nous explorons comment les méthodes de reconstruction régularisée peuvent répondre à ces défis. Ces méthodes traitent la reconstruction comme un problème inverse, et la solution est généralement calculée par une procédure d'optimisation. L'implémentation de méthodes de reconstruction régularisée implique à la fois de concevoir une régularisation appropriée, et de choisir le meilleur algorithme d'optimisation pour le problème résultant. Du point de vue de la modélisation, nous considérons trois types de régularisations dans un cadre mathématique unifié, ainsi que leur implémentation efficace : la variation totale, les ondelettes et la reconstruction basée sur un dictionnaire. Du point de vue algorithmique, nous étudions quels algorithmes d'optimisation de l'état de l'art sont les mieux adaptés pour le problème et l'architecture parallèle cible (GPU), et nous proposons un nouvel algorithme d'optimisation avec une vitesse de convergence accrue. Nous montrons ensuite comment les modèles régularisés de reconstruction peuvent être étendus pour prendre en compte les artefacts usuels : les artefacts en anneau et les artefacts de tomographie locale. Nous proposons notamment un nouvel algorithme quasi-exact de reconstruction en tomographie locale. 
/ In recent years, tomographic imaging techniques have diversified across many applications. However, experimental constraints often lead to limited data, for example in fast scans, or in medical imaging where the radiation dose is a primary concern. The data limitation may take the form of a low signal-to-noise ratio, scarce views, or a missing angular wedge. On the other hand, artefacts are detrimental to reconstruction quality. In these contexts, the standard techniques show their limitations. In this work, we explore how regularized tomographic reconstruction methods can handle these challenges. These methods treat reconstruction as an inverse problem, and the solution is generally found by means of an optimization procedure. Implementing regularized reconstruction methods entails both designing an appropriate regularization and choosing the best optimization algorithm for the resulting problem. On the modelling side, we focus on three types of regularizers in a unified mathematical framework, along with their efficient implementation: total variation, wavelets, and dictionary-based reconstruction. On the algorithmic side, we study which state-of-the-art convex optimization algorithms are best suited to the problem and to parallel architectures (GPU), and propose a new algorithm with increased convergence speed. We then show how the standard regularization models can be extended to take the usual artefacts into account, namely rings and local tomography artefacts. Notably, a novel quasi-exact local tomography reconstruction method is proposed.
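The optimization machinery this abstract refers to can be illustrated with a minimal ISTA sketch for an L1-regularized inverse problem, a simple stand-in for the wavelet and dictionary penalties mentioned above. This is a generic textbook example, not the GPU algorithms developed in the thesis:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrink each coefficient toward 0."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, step, n_iters=200):
    """ISTA for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1:
    alternate a gradient step on the data term with the L1 prox."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)          # gradient of the quadratic data term
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

Swapping `soft_threshold` for the proximal operator of total variation (which has no closed form and is computed by an inner solver) yields a TV-regularized variant; accelerated schemes such as FISTA add a momentum step for faster convergence.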
409

Bezprostřední a odložené rekonstrukce prsu / Delayed and Immediate Breast Reconstruction

Kydlíček, Tomáš January 2014 (has links)
OBJECTIVES: This work studies the indications, methods, results, satisfaction and partner relationships in immediate (IBR) and delayed (DBR) breast reconstruction, in order to objectively assess the benefits and indications of IBR. METHOD: Between 1/2002 and 12/2012, 51 women (33.33%; ages 29-58, mean 41.5, median 40.5) underwent IBR and 102 (66.67%; ages 31-64, mean 47.5, median 47) underwent DBR. Data were obtained from medical records, questionnaires and interviews, and processed by statistical analysis. RESULTS: Indications for IBR: ≤ pT2N0M0, low-grade tumor; for DBR: ≥ 1 year of remission. Age at IBR was lower than at DBR (p = 0.0004). No statistical differences between IBR and DBR were observed in modes of life after reconstruction (p = 0.1935-0.9659); full and prevailing contentment predominates. IBR does not burden patients (55 to 160 min; means 91.1 and 139.3 min, medians 75 and 135 min); there are no statistically significant differences between unilateral and bilateral operations (p = 0.1065). Complications prolonging healing were rare (IBR 5, 8.33%; DBR 6, 5.8%), and mortality and generalization were low (IBR 1, 1.96%; DBR 1 and 2, 0.98% and 1.96%). Satisfaction was reported by 84.09% with IBR and 86.11% with DBR. A 4 times greater risk of losing a partner or relationship was found with DBR. SUMMARY:...
410

Débruitage, alignement et reconstruction 3D automatisés en tomographie électronique : applications en sciences des matériaux / Automatic denoising, alignment and reconstruction in electron tomography : materials science applications

Printemps, Tony 24 November 2016 (has links)
La tomographie électronique est une technique de nano-caractérisation 3D non destructive. C’est une technique de choix dans le domaine des nanotechnologies pour caractériser des structures tridimensionnelles complexes pour lesquelles l’imagerie 2D en microscopie électronique en transmission seule n’est pas suffisante. Toutes les étapes nécessaires à la réalisation d’une reconstruction 3D en tomographie électronique sont investiguées dans cette thèse, de la préparation d’échantillon aux algorithmes de reconstruction, en passant par l’acquisition des données et l’alignement. Les travaux entrepris visent en particulier (i) à développer une algorithmie complète incluant débruitage, alignement et reconstruction automatisés afin de rendre la technique plus robuste et donc utilisable en routine (ii) à étendre la tomographie électronique à des échantillons plus épais ou ayant subi une déformation en cours d’acquisition et enfin (iii) à améliorer la tomographie électronique chimique en essayant d’exploiter au maximum toutes les informations disponibles. Toutes ces avancées ont pu être réalisées en s’intéressant particulièrement aux échantillons permettant une acquisition sur une étendue angulaire idéale de 180°. Un logiciel a également été développé au cours de cette thèse synthétisant la majeure partie de ces avancées pour permettre de réaliser simplement toutes les étapes de tomographie électronique post-acquisition. / Electron tomography is a 3D non-destructive nano-characterization technique. It is an essential technique in the field of nanotechnologies for characterizing complex structures, particularly when 2D projections in a transmission electron microscope (TEM) are insufficient for understanding the 3D sample morphology. In this thesis, each of the necessary steps of electron tomography has been studied: sample preparation, TEM acquisition, projection alignment, and inversion algorithms. 
The main contributions of this thesis are (i) the development of a new complete procedure for automatic denoising, alignment and reconstruction, enabling routine use of electron tomography, (ii) the extension of the technique to thicker specimens and to specimens damaged during acquisition, and finally (iii) the improvement of chemical tomography reconstructions by using as much of the available information as possible. All these contributions were made possible by the use of needle-shaped samples, which allow projections to be acquired over an ideal tilt range of 180°. Software was also developed during this thesis that allows users to easily apply most of the contributions proposed in this work.
