  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Geometrische und stochastische Modelle zur Optimierung der Leistungsfähigkeit des Strömungsmessverfahrens 3D-PTV

Putze, Torsten 02 December 2008 (has links)
Die 3D Particle Tracking Velocimetry (3D PTV) ist eine Methode zur bildbasierten Bestimmung von Geschwindigkeitsfeldern in Gas- oder Flüssigkeitsströmungen. Dazu wird die Strömung mit Partikeln markiert und durch ein Mehrkamerasystem beobachtet. Das Ergebnis der Datenauswertung sind 3D Trajektorien einer großen Anzahl von Partikeln, die zur statistischen Analyse der Strömung genutzt werden können. In der vorliegenden Arbeit werden verschiedene neu entwickelte Modelle gezeigt, die das Einsatzspektrum vergrößern und die Leistungsfähigkeit der 3D PTV erhöhen. Wesentliche Neuerungen sind der Einsatz eines Spiegelsystems zur Generierung eines virtuellen Kamerasystems, die Modellierung von komplex parametrisierten Trennflächen der Mehrmedienphotogrammetrie, eine wahrscheinlichkeitsbasierte Trackingmethode sowie eine neuartige Methode zur tomographischen Rekonstruktion von Rastervolumendaten. Die neuen Modelle sind an drei realen Experimentieranlagen und mit synthetischen Daten getestet worden. Durch den Einsatz eines Strahlteilers vor dem Objektiv einer einzelnen Kamera und vier Umlenkspiegeln, positioniert im weiteren Strahlengang, werden vier virtuelle Kameras generiert. Diese Methode zeichnet sich vor allem durch die Wirtschaftlichkeit als auch durch die nicht notwendige Synchronisation aus. Vor allem für die Anwendung im Hochgeschwindigkeitsbereich sind diese beiden Faktoren entscheidend. Bei der Beobachtung von Phänomenen in Wasser kommt es an den Trennflächen verschiedener Medien zur optischen Brechung. Diese muss für die weitere Auswertung zwingend modelliert werden. Für komplexe Trennflächen sind einfache Ansätze über zusätzliche Korrekturterme nicht praktikabel. Der entwickelte Ansatz basiert auf der mehrfachen Brechung jedes einzelnen Bildstrahls. Dazu müssen die Trennflächenparameter und die Kameraorientierungen im selben Koordinatensystem bekannt sein. Zumeist wird die Mehrbildzuordnung von Partikeln durch die Verwendung von Kernlinien realisiert. 
Auf Grund von instabilen Kameraorientierungen oder bei einer sehr hohen Partikeldichte sind diese geometrischen Eigenschaften nicht mehr ausreichend, um die Mehrbildzuordnung zu lösen. Unter der Ausnutzung weiterer geometrischer, radiometrischer und physikalischer Eigenschaften kann die Bestimmung der 3D Trajektorien dennoch durchgeführt werden. Dabei werden durch die Analyse verschiedener Merkmale diejenigen ausgewählt, welche sich für die spatio-temporale Zuordnung eignen. Die 3D PTV beruht auf der Diskretisierung der Partikelabbildungen im Bildraum und der anschließenden Objektkoordinatenbestimmung. Eine rasterbasierte Betrachtungsweise stellt die tomographische Rekonstruktion des Volumens dar. Hierbei wird die Intensitätsverteilung im Volumen rekonstruiert. Die Bewegungsinformationen werden im Anschluss aus den Veränderungen aufeinander folgender 3D-Bilder bestimmt. Durch dieses Verfahren können Strömungen mit einer höheren Partikeldichte im Volumen analysiert werden. Das entwickelte Verfahren basiert auf der schichtweisen Entzerrung und Zusammensetzung der Kamerabilder. Die entwickelten Modelle und Ansätze sind an verschiedenen Versuchsanlagen erprobt worden. Diese unterschieden sich stark in der Größe (0,5 dm³ – 20 dm³ – 130 m³) und den vorherrschenden Strömungsgeschwindigkeiten (0,3 m/s – 7 m/s – 0,5 m/s). / 3D Particle Tracking Velocimetry (3D PTV) is an image-based method for flow field determination. It is based on seeding the flow with tracer particles and recording it with a multi-camera system. The result is the 3D trajectories of a large number of particles, which can be used for a statistical analysis of the flow. The thesis presents several novel models that broaden the range of applications and improve the performance of 3D PTV.
Central aspects are the use of a mirror system to generate a virtual multi-camera system, the modelling of complex optical interfaces in multimedia photogrammetry, a probability-based tracking method, and a novel method for the tomographic reconstruction of volume raster data. The new models were tested at three real facilities and with synthetic data. Using a beam splitter in front of the camera lens and deflecting mirrors arranged in the optical path, a four-headed virtual camera system can be generated. This method is characterised by its economic efficiency and by the fact that no synchronisation is necessary; both factors are decisive especially when using high-speed cameras. When observing phenomena in water, refraction occurs at the interfaces between the different media and has to be modelled for each application. Approaches based on correction terms are not suitable for complex optical interfaces. The developed approach is based on multiple-refraction ray tracing with known interface parameters and camera orientations. Usually, the multi-image matching of particles is performed using epipolar geometry. Owing to unstable camera orientations or a very high particle density, these geometric constraints alone are not sufficient to resolve the ambiguities. Using additional geometric, radiometric, and physical properties of the particles, the 3D trajectories can still be determined: after analysing different properties, those suitable for spatio-temporal matching are selected. 3D PTV is based on the discretisation of particle images in image space and the subsequent determination of object coordinates. A raster-based alternative is the tomographic reconstruction of the volume, in which the light intensity distribution in the volume is reconstructed. The flow information is then determined from the differences between successive 3D images.
Using tomographic reconstruction techniques, a higher particle density in the volume can be analysed. The developed approach is based on a slice-by-slice rectification of the camera images and a subsequent assembly of the volume. The developed models and approaches were tested at different facilities, which differ strongly in size (0.5 dm³ – 20 dm³ – 130 m³) and in the prevailing flow velocities (0.3 m/s – 7 m/s – 0.5 m/s).
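As a rough, hypothetical illustration of the kind of probability-based matching described above (not the thesis's actual implementation), a geometric cue (distance to the epipolar line) and a radiometric cue (particle brightness similarity) can be fused into a single score for a candidate particle pair across two views; the fundamental matrix `F`, brightness values `i1`, `i2`, and the Gaussian widths are all assumptions:

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    """Distance of point x2 (image 2) from the epipolar line of x1 (image 1).

    F is the 3x3 fundamental matrix; x1, x2 are 2D pixel coordinates.
    """
    l = F @ np.array([x1[0], x1[1], 1.0])          # epipolar line a*x + b*y + c = 0
    return abs(l @ np.array([x2[0], x2[1], 1.0])) / np.hypot(l[0], l[1])

def match_score(F, x1, x2, i1, i2, sigma_geo=1.0, sigma_rad=0.2):
    """Combine geometric and radiometric cues into one pseudo-probability."""
    d = epipolar_distance(F, x1, x2)
    p_geo = np.exp(-0.5 * (d / sigma_geo) ** 2)            # geometric likelihood
    p_rad = np.exp(-0.5 * ((i1 - i2) / sigma_rad) ** 2)    # brightness similarity
    return p_geo * p_rad
```

In a full 3D PTV pipeline such scores would additionally be combined over whole trajectories and with physical constraints (velocity continuity); `sigma_geo` (pixels) and `sigma_rad` are illustrative placeholders.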
102

Towards Battery-free Radio Tomographic Imaging: Battery-free Boundary Crossing Detection

Hylamia, Abdullah January 2018 (has links)
Radio tomographic imaging (RTI) is a novel device-free localization technique which utilizes the changes in radio signals caused by obstruction to enable various sensing applications. However, the deployment of these applications is hindered by the energy-expensive radio sensing techniques employed in these systems. In this thesis, we tackle this issue by introducing a novel way to realize a battery-free RTI sensor. We go through the design process and produce and evaluate a working prototype that operates on minuscule amounts of energy. Our design reduces power consumption by orders of magnitude compared to traditional RTI sensors by eliminating the energy-expensive components used in current RTI systems, enabling battery-free operation of RTI sensors. We demonstrate the efficiency and accuracy of our system in a boundary crossing scenario. We discuss its limitations and address some of the security threats associated with the deployment of such a system. / Radiotomografisk avbildning (RTA) är en ny, anordningsfri lokaliseringsteknik som utnyttjar förändringarna i radiosignaler orsakade av obstruktioner för att möjliggöra olika avkänningsapplikationer. Utvecklingen av dessa applikationer hindras emellertid av de energiineffektiva radioavkännande tekniker som används i dessa system. I denna avhandling behandlar vi problemet genom att introducera en ny metod för att skapa en batterifri RTA-sensor. Vi går igenom konstruktionsprocessen och producerar och utvärderar en arbetsprototyp som kräver minimala mängder energi. Vår design minskar energiförbrukningen signifikant jämfört med traditionella RTA-sensorer, genom att eliminera de energiineffektiva komponenterna som används i dagens RTA-system, vilket möjliggör batterifri drift av RTA-sensorer. Vi demonstrerar effektiviteten och noggrannheten hos vårt system i ett gränsöverskridande scenario. Vi diskuterar begränsningarna och tar itu med några av de säkerhetshot som är korrelerade med utplaceringen av ett sådant system.
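As a toy sketch of the sensing principle used above (a body crossing a radio link attenuates it), and emphatically not the battery-free hardware design itself, a boundary-crossing detector over a single link's RSSI stream might look like this; the window length and threshold are assumptions:

```python
from collections import deque

class CrossingDetector:
    """Toy boundary-crossing detector for a single RTI link.

    Keeps a moving baseline of RSSI readings (dBm) and flags a crossing
    when the current reading drops more than `threshold_db` below the
    baseline. Window/threshold values are illustrative, not from the thesis.
    """
    def __init__(self, window=20, threshold_db=5.0):
        self.history = deque(maxlen=window)
        self.threshold_db = threshold_db

    def update(self, rssi_dbm):
        """Feed one RSSI sample; return True if a crossing is detected."""
        if len(self.history) >= 5:                 # need a few samples for a baseline
            baseline = sum(self.history) / len(self.history)
            crossing = (baseline - rssi_dbm) > self.threshold_db
        else:
            crossing = False                       # warm-up phase
        self.history.append(rssi_dbm)
        return crossing
```

A real RTI system would run one such detector per link and fuse decisions across the link mesh; here a steady -50 dBm link followed by a -60 dBm reading triggers a detection.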
103

Respiratory Motion Correction in PET Imaging: Comparative Analysis of External Device and Data-driven Gating Approaches / Respiratorisk rörelsekorrigering inom PET-avbildning: En jämförande analys av extern enhetsbaserad och datadriven gating-strategi

Lindström Söraas, Nina January 2023 (has links)
Positron Emission Tomography (PET) is pivotal in medical imaging but is prone to artifacts from physiological movements, notably respiration. These motion artifacts both degrade image quality and compromise precise attenuation correction. To counteract this, gating strategies partition PET data in synchronization with respiratory cycles, ensuring each gate nearly represents a static phase. Additionally, a 3D deep learning image registration model can be used for inter-gate motion correction, maximizing the use of the full acquired data. This study aimed to implement and evaluate two gating strategies, an external device-based approach and a data-driven centroid-of-distribution (COD) trace algorithm, and to assess their impact on the performance of the registration model. Analysis of clinical data from four subjects indicated that the external device approach outperformed its data-driven counterpart, which faced challenges in real-patient settings. After motion compensation, both methods achieved results comparable to state-of-the-art reconstructions, suggesting the deep learning model addressed some limitations of the data-driven method. However, the motion-corrected outputs did not exhibit significant improvements in image quality over state-of-the-art standards. / Positronemissionstomografi (PET) är fundamentalt inom medicinsk avbildning men påverkas av artefakter orsakade av fysiologiska rörelser, framför allt andning. Dessa artefakter påverkar bildkvaliteten negativt och försvårar korrekt attenueringskorrigering. För att motverka detta kan tekniker för rörelsekorrigering tillämpas. Dessa innefattar gating-tekniker där PET-data först synkroniseras med andningscykeln för att därefter segmenteras i olika så kallade gater som representerar en specifik respiratorisk fas. Vidare kan en 3D-djupinlärningsmodell användas för att korrigera för rörelserna mellan gaterna, vilket optimerar användningen av all insamlad data.
Denna studie implementerade och undersökte två gating-tekniker: en extern enhetsbaserad metod och en datadriven ”centroid-of-distribution (COD)” spår-algoritm, samt analyserade hur dessa tekniker påverkar prestandan av bildregistreringsmodellen. Utifrån analysen av kliniska data från fyra patienter visade sig metoden med den externa enheten vara överlägsen den datadrivna metoden, som hade svårigheter i verkliga patientsituationer. Trots detta visade bildregistreringsmodellen potential att delvis kompensera för den datadrivna metodens begränsningar, då resultaten från båda strategierna var jämförbara med befintlig klinisk bildrekonstruktion. Dock kunde ingen markant förbättring i bildkvalitet urskiljas hos de rörelsekorrigerade bilderna jämfört med nuvarande toppstandard.
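A minimal sketch of the data-driven COD idea, assuming list-mode-like event data with timestamps and an axial coordinate; the exact trace definition, frame length and gate assignment used in the thesis may differ, and the sketch assumes every time frame contains events:

```python
import numpy as np

def cod_trace(event_t, event_z, frame_s=0.5):
    """Centroid-of-distribution respiratory trace from list-mode-like events.

    event_t: event timestamps (s); event_z: axial coordinate of each event.
    Returns frame start times and the mean axial position per time frame.
    """
    t_edges = np.arange(0.0, event_t.max() + frame_s, frame_s)
    idx = np.digitize(event_t, t_edges) - 1
    cod = np.array([event_z[idx == i].mean() for i in range(len(t_edges) - 1)])
    return t_edges[:-1], cod

def amplitude_gates(cod, n_gates=4):
    """Assign each frame to a gate by amplitude quantiles of the COD trace."""
    edges = np.quantile(cod, np.linspace(0.0, 1.0, n_gates + 1))
    return np.clip(np.digitize(cod, edges[1:-1]), 0, n_gates - 1)
```

Amplitude-based gating (rather than phase-based) keeps each gate's range of respiratory displacement narrow even when the breathing pattern is irregular, which is one common motivation for quantile edges.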
104

Comparative Analysis of ISAR and Tomographic Radar Imaging at W-Band Frequencies

Hopkins, Nicholas Christian 24 May 2017 (has links)
No description available.
105

Análise tomográfica quantitativa linear de espessuras ósseas alveolares com vistas ao diagnóstico em ortodontia - Proposta de método / A quantitative linear tomographic analysis of alveolar bone thicknesses and its implications for diagnosis in Orthodontics - A method proposal

Silva, Siddhartha Uhrigshardt 01 June 2012 (has links)
O objetivo principal desta pesquisa foi justificar a proposta de utilização de um novo método tomográfico (cone beam) de avaliação das espessuras ósseas alveolares, maxilares e mandibulares, por meio de testes objetivos das Condições de Repetitividade e de Precisão Intermediária associadas à variação intra e interoperadores, e conforme a utilização de programa computacional independente (AutoCAD®) para a realização das medições, aplicadas à sequência do Procedimento Operacional Padrão (POP) definido para este experimento. A Fase I da pesquisa registrou os critérios de obtenção da qualidade final das imagens tomográficas definitivas, a partir de equipamento iCAT® (Imaging Sciences International, Hatfield, Pa), com parâmetros de aquisição de 120KVp, 37,7mA e 26,9s, e considerando FOV cilíndrico de 13cm e matriz de 512x512 pixels. A resolução do voxel foi de 0,25mm; A Fase II registrou os critérios exploratórios relativos às condições operacionais do software de visualização, registro (inspeção e identificação) e medição das grandezas selecionadas. A Fase III registrou a realização dos testes de repetitividade e de reprodutibilidade das medidas. Um total de 72 grandezas lineares foram previamente definidas e metodologicamente testadas em sua qualidade de inspeção, identificação e medição, a partir da avaliação de sete (7) operadores independentes, cinco dos quais eram especialistas e com Mestrado Acadêmico em Ortodontia pela FOUSP e, o outro, especialista em Radiologia Odontológica e Doutor em Diagnóstico Bucal (FOUSP). Os examinadores foram previamente instruídos, calibrados e treinados considerando os requerimentos necessários à execução dos testes propostos. O protocolo de pesquisa foi aprovado pelo Comitê de Ética em Pesquisa da Faculdade de Odontologia da Universidade de São Paulo (Parecer CAAE 0120.0.017.000-11). 
A análise estatística dependeu da utilização de Modelo de Componentes de Variância (hierárquico), e foram consideradas como fontes de variação: as medidas, efetuadas por um mesmo operador ou por diferentes operadores; a face considerada, vestibular ou lingual/palatina; os locais (três níveis de espessura óssea alveolar) em cada uma das faces e, ainda, os diferentes dentes. Esta análise foi realizada de forma separada para mandíbula e maxila. Valores de p<0,05 indicaram significância estatística. Os resultados indicaram significativa confiabilidade geral no método proposto, considerando a Condição de Repetitividade, com apenas 0,24% da variabilidade maxilar total atribuível a um único operador, e mandibular de 0,53%; e com valores expressivos relativos às incertezas de medida maxilares (0,156mm) e mandibulares (0,091mm), desse modo atestando significativa consistência interna (repetibilidade) do método. Os testes da Condição de Precisão Intermediária também indicaram significativa confiabilidade geral no método proposto, com apenas 1,52% da variabilidade total mandibular atribuível à participação de diversos operadores, e maxilar de 0,25%; e com valores também expressivos relativos às incertezas de medida mandibulares (0,149mm) e maxilares (0,158mm), desse modo atestando significativa condição final de reprodutibilidade. Conclui-se que a utilização de imagens provenientes de tomógrafo iCAT®, conforme indicação de resolução de imagem com voxel de 0,25mm, em humanos vivos e a partir de cortes trans-axiais sistematicamente operacionalizados com auxílio de Software AutoCAD®, propicia a geração de condições metodológicas suficientemente favoráveis à obtenção de mapeamento quantitativo linear de espessuras ósseas alveolares, vestibulares e palatinas/linguais, tanto para a maxila quanto para a mandíbula. 
/ This research aimed to justify the proposed use of a new tomographic method (cone beam) for the clinical assessment of alveolar, maxillary and mandibular bone thickness, through objective tests of the Repeatability and Intermediate Precision Conditions associated with intra- and inter-operator variation, using an independent computer program (AutoCAD®) to perform the measurements according to the Standard Operating Procedure (SOP) sequence defined for this experiment. Phase I of the research recorded the criteria for obtaining the final quality of the tomographic images, using iCAT® (Imaging Sciences International, Hatfield, Pa, USA) equipment with acquisition parameters of 120 kVp, 37.7 mA and 26.9 s, a cylindrical field of view (FOV) of 13 cm and a 512x512 pixel matrix. The voxel resolution was 0.25 mm. Phase II recorded the exploratory criteria relating to the operational conditions of the software for visualization, registration (visual inspection and landmark identification) and measurement of the selected quantities. Phase III recorded the tests of repeatability and reproducibility of the measurements. A total of 72 linear quantities were previously defined and methodologically tested for their quality of inspection, identification and measurement, based on assessment by seven (7) independent operators, five of whom were specialists with master's degrees in Orthodontics from FOUSP, and another a specialist in Dental Radiology and Doctor of Oral Diagnosis (FOUSP). The examiners were previously instructed, calibrated and trained according to the requirements for performing the proposed tests. The research protocol was approved by the Committee for Ethics in Research of the Faculty of Dentistry at the University of São Paulo (Protocol # 102/11-CAAE 0120.0.017.000-11).
Statistical analysis used a hierarchical variance-components model, with the following sources of variation: the measurements, made by the same operator or by different operators; the face considered, whether vestibular or lingual/palatal; the locations (three levels of alveolar bone thickness) on each face; and the different teeth. This analysis was carried out separately for the mandible and the maxilla. Values of p<0.05 indicated statistical significance. The results indicated significant overall reliability of the proposed method under the Repeatability Condition, with only 0.24% of total maxillary, and 0.53% of mandibular, variability attributable to a single operator, and with small measurement uncertainties for the maxilla (0.156 mm) and the mandible (0.091 mm), thereby attesting to significant internal consistency (repeatability) of the method. Tests under the Intermediate Precision Condition also indicated significant overall reliability of the proposed method, with only 1.52% of total mandibular, and 0.25% of maxillary, variability attributable to the participation of the various operators, and with measurement uncertainties of 0.149 mm (mandible) and 0.158 mm (maxilla), thereby attesting to a significant final condition of reproducibility. It is concluded that the use of images from an iCAT® tomograph, at an image resolution with 0.25 mm voxels, in living humans and from trans-axial slices systematically processed with the help of AutoCAD® software, provides methodological conditions sufficiently favorable for obtaining a linear quantitative mapping of alveolar bone thicknesses, vestibular and palatal/lingual, for both the maxilla and the mandible.
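The operator-attributable share of variance reported above can be illustrated with a simplified one-way random-effects ANOVA; this is a stand-in for the study's full hierarchical variance-components model, and the data layout (operators x replicates of one quantity) is an assumption:

```python
import numpy as np

def operator_variance_share(data):
    """One-way random-effects ANOVA: share of total variance due to operator.

    data: 2D array, rows = operators (k >= 2), columns = repeated measurements
    (n >= 2) of the same quantity. Returns the fraction of total variance
    attributed to the operator effect.
    """
    data = np.asarray(data, dtype=float)
    k, n = data.shape                        # k operators, n replicates each
    grand = data.mean()
    ms_between = n * ((data.mean(axis=1) - grand) ** 2).sum() / (k - 1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
    var_operator = max((ms_between - ms_within) / n, 0.0)   # method-of-moments estimate
    return var_operator / (var_operator + ms_within)
```

A share near 0 (as in the 0.24% and 1.52% figures quoted in the abstract) indicates that almost all variability comes from measurement noise rather than from who performed the measurement.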
106

Joint super-resolution/segmentation approaches for the tomographic images analysis of the bone micro-structure / Approches de super-résolution/segmentation pour l'analyse d'images tomographiques de la microstructure osseuse

Li, Yufei 20 December 2018 (has links)
L'ostéoporose est une maladie caractérisée par la perte de la masse osseuse et la dégradation de la micro-architecture osseuse. Bien que l'ostéoporose ne soit pas une maladie mortelle, les fractures qu'elle provoque peuvent entraîner de graves complications (lésions des vaisseaux et des nerfs, infections, raideur), parfois accompagnées de menaces de mort. La micro-architecture osseuse joue un rôle important dans le diagnostic de l'ostéoporose. Deux appareils de tomodensitométrie courants pour scanner la micro-architecture osseuse sont la tomodensitométrie quantitative périphérique à haute résolution et la tomodensitométrie microscopique. Le premier dispositif donne accès à l'investigation in vivo, mais sa résolution spatiale est inférieure. Le micro tomodensitomètre donne une meilleure résolution spatiale, mais il est contraint à une mesure ex vivo. Dans cette thèse, notre but est d'améliorer la résolution spatiale des images de tomodensitométrie périphérique à haute résolution afin que l'analyse quantitative des images résolues soit proche de celle donnée par les images de tomodensitométrie Micro. Nous sommes partis de la régularisation de la variation totale, à une combinaison de la variation totale et du potentiel de double puits pour améliorer le contraste des résultats. Ensuite, nous envisageons d'utiliser la méthode d'apprentissage par dictionnaire pour récupérer plus de détails sur la structure. Par la suite, une méthode d'apprentissage approfondi a été proposée pour résoudre un problème de super résolution et de segmentation joint. Les résultats montrent que la méthode d'apprentissage profond est très prometteuse pour les applications futures. / Osteoporosis is a disease characterized by loss of bone mass and degradation of bone microarchitecture. Although osteoporosis is not a fatal disease, the fractures it causes can lead to serious complications (damage to vessels and nerves, infections, stiffness), sometimes accompanied with risk of death. 
The bone micro-architecture plays an important role in the diagnosis of osteoporosis. Two common CT devices for scanning bone micro-architecture are high-resolution peripheral quantitative CT and micro-CT. The former gives access to in vivo investigation, but its spatial resolution is lower; micro-CT gives better spatial resolution, but is constrained to ex vivo measurement. In this thesis, we attempt to improve the spatial resolution of high-resolution peripheral CT images so that the quantitative analysis of the resolved images comes close to that given by micro-CT images. We started from total variation regularization and moved to a combination of total variation and a double-well potential to enhance the contrast of the results. We then considered a dictionary-learning method to recover more structural detail. Finally, a deep learning method was proposed to solve a joint super-resolution and segmentation problem. The results show that the deep learning method is very promising for future applications.
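A 1D toy version may clarify how a double-well term complements total variation: TV smooths while preserving edges, and the double-well potential pulls intensities toward the two phases 0 (pore) and 1 (bone), sharpening contrast. The energy below, its weights and step size are illustrative assumptions, not the thesis's formulation:

```python
import numpy as np

def denoise_tv_double_well(y, lam_tv=0.1, lam_dw=0.5, step=0.05, iters=500, eps=1e-3):
    """Gradient descent on a 1D toy energy:

        E(x) = 0.5*||x - y||^2 + lam_tv * sum_i sqrt((x[i+1]-x[i])^2 + eps^2)
               + lam_dw * sum_i x[i]^2 * (1 - x[i])^2

    The square root smooths the TV term so it is differentiable; the last
    term is the double well with minima at 0 and 1.
    """
    x = y.astype(float).copy()
    for _ in range(iters):
        dx = np.diff(x)
        w = dx / np.sqrt(dx * dx + eps * eps)              # smoothed TV derivative
        grad_tv = np.concatenate([[-w[0]], w[:-1] - w[1:], [w[-1]]])
        grad_dw = 2.0 * x * (1.0 - x) * (1.0 - 2.0 * x)    # d/dx of x^2 (1-x)^2
        x -= step * ((x - y) + lam_tv * grad_tv + lam_dw * grad_dw)
    return x
```

On a noiseless two-level step signal the double-well term pushes the low plateau below its input value and the high plateau above it, i.e. it increases contrast between the two phases.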
107

3D analysis of bone ultrastructure from phase nano-CT imaging / Analyse 3D de l'ultrastructure osseuse par nano-CT de phase

Yu, Boliang 13 March 2019 (has links)
L'objectif de cette thèse était de quantifier le réseau lacuno-canaliculaire du tissu osseux à partir d'images 3D acquises en nano CT synchrotron de phase. Ceci a nécessité d'optimiser les processus d'acquisition et de reconstruction de phase, ainsi que de développer des méthodes efficaces de traitement d'images pour la segmentation et l'analyse 3D. Dans un premier temps, nous avons étudié et évalué différents algorithmes de reconstruction de phase. Nous avons étendu la méthode de Paganin pour plusieurs distances de propagation et l'avons évaluée et comparée à d'autres méthodes, théoriquement puis sur nos données expérimentales. Nous avons développé une chaîne d'analyse, incluant la segmentation des images et prenant en compte les gros volumes de données à traiter. Pour la segmentation des lacunes, nous avons choisi des méthodes telles que le filtre médian, le seuillage par hystérésis et l'analyse par composantes connexes. La segmentation des canalicules repose sur une méthode de croissance de région après rehaussement des structures tubulaires. Nous avons calculé des paramètres de porosité, des descripteurs morphologiques des lacunes ainsi que des nombres de canalicules par lacune. Par ailleurs, nous avons introduit des notions de paramètres locaux calculés dans le voisinage des lacunes. Nous avons obtenu des résultats sur des images acquises à différentes tailles de voxel (120 nm, 50 nm, 30 nm) et avons également pu étudier l'impact de la taille de voxel sur les résultats. Finalement ces méthodes ont été utilisées pour analyser un ensemble de 27 échantillons acquis à 100 nm dans le cadre du projet ANR MULTIPS. Nous avons pu réaliser une analyse statistique pour étudier les différences liées au sexe et à l'âge. Nos travaux apportent de nouvelles données quantitatives sur le tissu osseux qui devraient contribuer à la recherche sur les mécanismes de fragilité osseuse en relation avec des maladies comme l'ostéoporose.
/ Osteoporosis is a bone fragility disease resulting in abnormalities in bone mass and density. In order to prevent osteoporotic fractures, it is important to have a better understanding of the processes involved in fracture at various scales. As the most abundant bone cells, osteocytes may act as orchestrators of bone remodeling, regulating the activities of both osteoclasts and osteoblasts. The osteocyte system is deeply embedded inside the bone matrix and is also called the lacuno-canalicular network (LCN). Although several imaging techniques have recently been proposed, the 3D observation and analysis of the LCN at high spatial resolution remains challenging. The aim of this work was to investigate and analyze the LCN in human cortical bone in three dimensions at an isotropic spatial resolution using magnified X-ray phase nano-CT. We performed image acquisition at voxel sizes of 120 nm, 100 nm, 50 nm and 30 nm at the ID16A and ID16B beamlines of the European Synchrotron Radiation Facility (ESRF, Grenoble). Our first study concerned phase retrieval, the first step of data processing, which consists in solving a non-linear inverse problem. We proposed an extension of Paganin's method suited to multi-distance acquisitions, which was used to retrieve phase maps in our experiments. The method was compared theoretically and experimentally to the contrast transfer function (CTF) approach for homogeneous objects. The analysis of the 3D reconstructed images first requires segmenting the LCN, i.e. both the lacunae and the canaliculi. We developed a workflow based on a median filter, hysteresis thresholding and morphological filters to segment the lacunae. For the segmentation of the canaliculi, we made use of vesselness enhancement to improve the visibility of line structures, variational region growing to extract the canaliculi, and connected-components analysis to remove residual noise.
For the quantitative assessment of the LCN, we calculated morphological descriptors based on an automatic and efficient 3D analysis method developed in our group. For the lacunae, we calculated parameters such as the number of lacunae, the bone volume, the total volume of all lacunae, the lacunar volume density, and the average volume, surface, length, width and depth of the lacunae. For the canaliculi, we first computed the total volume of all canaliculi and the canalicular volume density. Moreover, we counted the number of canaliculi at different distances from the surface of each lacuna with an automatic method, which can be used to evaluate the ramification of the canaliculi. We report the statistical results obtained on the different groups and at different spatial resolutions, providing unique information about the three-dimensional organization of the LCN in human bone.
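The lacuna segmentation chain described above (median filter, hysteresis thresholding, connected components with a size filter) can be sketched with `scipy.ndimage` as a stand-in for the group's actual implementation; the thresholds and minimum size are hypothetical:

```python
import numpy as np
from scipy import ndimage as ndi

def segment_lacunae(volume, t_low, t_high, min_voxels=4):
    """Toy lacuna segmentation: denoise, hysteresis-threshold, label, size-filter.

    Keeps weak voxels (> t_low) only if they connect to a strong seed (> t_high),
    then removes connected components smaller than `min_voxels`. Returns the
    label volume and the voxel counts of the retained components.
    """
    smoothed = ndi.median_filter(volume, size=3)         # suppress isolated noise
    strong = smoothed > t_high
    weak = smoothed > t_low
    mask = ndi.binary_propagation(strong, mask=weak)     # hysteresis step
    labels, n = ndi.label(mask)                          # connected components
    sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))
    for lab, size in enumerate(sizes, start=1):          # drop tiny blobs
        if size < min_voxels:
            labels[labels == lab] = 0
    return labels, sizes[sizes >= min_voxels]
```

From the resulting label volume, per-lacuna descriptors (volume, surface, principal axis lengths) would follow from standard region-property measurements; note that a 3x3x3 median filter also erodes the edges and corners of small blobs, which is visible in the voxel counts.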
108

Avaliação comparativa entre enxertos alógenos e autógenos 'onlay'. Estudo histológico, imunohistoquímico e tomográfico em coelhos / Comparative evaluation of onlay allografts and autogenous grafts: a histological, immunohistochemical and tomographic study in rabbits

Hawthorne, Ana Carolina 20 December 2010 (has links)
A reconstrução dos maxilares em implantodontia através de métodos de enxertia óssea constitui o procedimento cirúrgico mais utilizado frente à perda fisiológica ou traumática a que estes ossos estão sujeitos. Os enxertos autógenos mostram vantagens em relação às demais técnicas de reconstrução no que se refere ao potencial regenerador ósseo, entretanto, a sua remoção implica obrigatoriamente na necessidade de áreas doadoras. Nas últimas décadas tem ocorrido um grande interesse pelos enxertos alógenos de banco de tecidos músculo-esquelético (BTME) como alternativa às enxertias autógenas, como forma de evitar morbidade do sítio doador e redução de tempo e custos da cirurgia. O propósito do estudo foi comparar o comportamento dos enxertos alógenos com autógenos avaliados por métodos imunohistoquímicos, histológicos e tomográficos. Trinta e seis coelhos da linhagem New Zealand White foram submetidos a cirurgias para enxertia "onlay" de osso autógeno (grupo controle) e osso alógeno em lados diferentes da mandíbula de forma aleatória. Seis animais de cada grupo foram sacrificados aos 03, 05, 07, 10, 20 e 60 dias após as cirurgias. Cortes histológicos foram corados com Tricrômio de Mallory para as análises histológicas. As imuno marcações foram realizadas com osteoprotegerina (OPG); receptor activator of nuclear factor-κB ligand (RANKL); fosfatase alcalina (ALP); osteopontina (OPN); vascular endothelial growth factor (VEGF); tartrate-resistant acid phosphatase (TRAP); colágeno tipo I (Col I) e osteocalcina (OC). A manutenção do volume e densidade dos enxertos foi avaliada por meio de tomografias obtidas após as cirurgias e após os sacrifícios.
Os enxertos autógenos e alógenos exibiram padrões de preservação de volume e densidade similares; os dados histológicos mostram que a remodelação óssea no grupo alógeno ocorreu de modo mais intenso que no grupo autógeno; a avaliação por microscopia de luz mostra que a incorporação do osso autógeno ao leito receptor foi mais eficiente que no grupo alógeno; no grupo alógeno os resultados de imunohistoquímica demonstraram um quadro típico de intensa remodelação dos enxertos. / The reconstruction of the jaws in implantology using bone grafting is the most common surgical procedure to address the physiological or traumatic bone loss to which these bones are subject. Autogenous grafts show advantages over other reconstruction techniques because of their bone-regenerating potential; however, their harvest necessarily requires donor areas. In recent decades there has been great interest in allografts from musculoskeletal tissue banks (BTME) as an alternative to autogenous grafting, as a way to avoid donor-site morbidity and to reduce surgical time and costs. The purpose of this study was to compare the behavior of allografts and autogenous grafts using immunohistochemical, histological and tomographic methods. Thirty-six New Zealand White rabbits underwent surgery for onlay grafting with autogenous bone (control group) and allogeneic bone randomly assigned to opposite sides of the mandible. Six animals of each group were sacrificed at 3, 5, 7, 10, 20 and 60 days after surgery. Paraffin sections were stained with Mallory's trichrome for histological analysis. Immunolabeling was performed with osteoprotegerin (OPG); receptor activator of nuclear factor-κB ligand (RANKL); alkaline phosphatase (ALP); osteopontin (OPN); vascular endothelial growth factor (VEGF); tartrate-resistant acid phosphatase (TRAP); collagen type I (COL I) and osteocalcin (OC).
The maintenance of graft volume and density was evaluated on tomograms obtained after the surgeries and after the sacrifices. The autogenous grafts and allografts exhibited similar patterns of volume and density preservation; the histological data show that bone remodeling in the allograft group was more intense than in the autogenous group; evaluation by light microscopy shows that incorporation of the autogenous bone into the recipient bed was more efficient than in the allograft group; and in the allograft group the immunohistochemical results demonstrated a typical picture of intense graft remodeling.
109

Conception, reconstruction et évaluation d'une géométrie de collimation multi-focale en tomographie d'émission monophotonique préclinique / Design, reconstruction and evaluation of multi-focal collimation in single photon emission computed tomography for small-animal imaging

Benoit, Didier 05 December 2013 (has links)
Small-animal single photon emission computed tomography (SPECT) is a nuclear imaging technique that plays an important role in molecular imaging. Using pinhole or multi-pinhole collimators, SPECT systems can achieve submillimetric spatial resolution and high sensitivity over a small field of view, which is particularly attractive for imaging mice. This work studied an original collimation geometry proposed within a project called SIGAHRS, led by the Biospace company. In this collimator, the focal lengths vary spatially in the transaxial plane and are fixed in the axial plane. The design aims for high spatial resolution in the center of the field of view, together with a large field of view and high sensitivity. Using Monte Carlo simulations, in which all parameters can be controlled, we studied this original collimator and compared it against a parallel-hole collimator and a single-focal converging collimator. To generate data efficiently, we developed a multi-CPU/GPU module that traces rays through the collimator; it achieved an acceleration factor of ~60 while preserving ~90 % of the signal for the isotope ⁹⁹ᵐTc (140.5 keV emission), compared with a classical Monte Carlo simulation. The ~10 % difference arises because this approach neglects septal penetration and scatter in the collimator. The simulated data were then reconstructed with the OSEM algorithm. We developed four forward projectors: a simple ray-traced projection (S-RT), a projection accounting for the intersection volume (S-RT-IV), a projection modeling the solid angle of the projection tube (S-RT-SA), and a projection additionally modeling the depth of interaction (S-RT-SA-D). We also modeled an anisotropic, non-stationary point spread function (PSF) in the image domain, following the existing literature.
We studied the conditioning of the system matrix for each projector and each collimator, and compared the reconstructed images for every combination. The proposed collimator turned out to be the most ill-conditioned of the systems studied. We also showed that modeling the PSF in the image domain and the depth of interaction improves the quality of the reconstructed images and the contrast recovery, although these methods introduce edge artifacts. Compared with existing systems, this new collimator has a large field of view (~70 mm in the transaxial plane), with a resolution of 1.0 mm in the best case, but a relatively low sensitivity (1.32×10⁻² %).
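The OSEM reconstruction referred to above can be illustrated with a minimal sketch. The following is a generic ordered-subsets expectation-maximization update for a small dense system matrix; the names (`osem`, `A`, `y`) and the dense-matrix formulation are illustrative assumptions, not the thesis's implementation, which defines the forward projection on the fly via the S-RT family of projectors:

```python
import numpy as np

def osem(y, A, n_iters=10, n_subsets=4, eps=1e-12):
    """Ordered-subsets EM for an emission model y ≈ A @ x with x >= 0.

    y : measured projection counts, shape (n_bins,)
    A : system matrix, shape (n_bins, n_voxels)
    """
    x = np.ones(A.shape[1])                        # uniform initial image
    subsets = np.array_split(np.arange(len(y)), n_subsets)
    for _ in range(n_iters):
        for s in subsets:                          # one EM update per subset
            As = A[s]
            ratio = y[s] / np.maximum(As @ x, eps)   # measured / estimated
            # back-project the ratio and normalize by the subset sensitivity
            x *= (As.T @ ratio) / np.maximum(As.T.sum(axis=1), eps)
    return x
```

The multiplicative update keeps the image non-negative by construction, which is one reason EM-type algorithms are standard in emission tomography; the subset loop is what distinguishes OSEM from plain MLEM and gives its characteristic speed-up.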
110

Multiscale Active Contour Methods in Computer Vision with Applications in Tomography

Alvino, Christopher Vincent 10 April 2005 (has links)
Most applications in computer vision suffer from two major difficulties. The first is that they are notoriously ridden with sub-optimal local minima. The second is that solving them robustly typically incurs high computational cost. The reason for both drawbacks is that most problems in computer vision, even when well-defined, require finding a solution in a very large, high-dimensional space. It is for these two reasons that multiscale methods are particularly well-suited to problems in computer vision. By examining the coarse-scale structure of a problem before considering its fine-scale details, multiscale methods can often avoid sub-optimal local minima and reach a more globally optimal solution; in addition, they typically enjoy reduced computational cost. This thesis applies novel multiscale active contour methods to several problems in computer vision, especially the simultaneous segmentation and reconstruction of tomography images. Novel multiscale methods are also applied to contour registration using minimal surfaces and to the computation of non-linear, rotationally invariant optical flow. Finally, a methodology for fast, robust image segmentation is presented that relies on a lower-dimensional image basis derived from an image scale space. The specific advantages of multiscale methods in each of these problems are highlighted in the simulations throughout the thesis, particularly their ability to avoid sub-optimal local minima and to solve the problems at lower overall computational cost.
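The coarse-to-fine principle described in this abstract can be shown with a toy one-dimensional continuation scheme: minimize a heavily smoothed version of a cost to locate the correct basin, then refine locally at progressively finer scales. This is only an illustration of the principle under assumed names (`gaussian_smooth`, `coarse_to_fine_argmin`); the thesis itself works with active contours and scale-space image bases, not sampled 1-D costs:

```python
import numpy as np

def gaussian_smooth(c, sigma):
    """Smooth a sampled cost with a truncated, normalized Gaussian kernel."""
    if sigma <= 0:
        return c
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    return np.convolve(c, k, mode="same")

def coarse_to_fine_argmin(cost, sigmas=(20.0, 5.0, 0.0), window=10):
    """Continuation: minimize progressively sharper versions of the cost,
    tracking the minimizer from coarse scale down to the original data."""
    # Global search is cheap and reliable on the heavily smoothed cost.
    idx = int(np.argmin(gaussian_smooth(cost, sigmas[0])))
    for s in sigmas[1:]:
        c = gaussian_smooth(cost, s)
        lo, hi = max(0, idx - window), min(len(c), idx + window + 1)
        idx = lo + int(np.argmin(c[lo:hi]))  # local refinement near current estimate
    return idx
```

Because the coarse scale suppresses the high-frequency wiggles that create spurious local minima, the fine-scale refinement only has to search a small neighborhood, which mirrors both claimed benefits of multiscale methods: better minima and lower cost.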
