211

Detecção e redução de speckle em imagem médica por ultra-som / Speckle detection and reduction in medical ultrasound imaging

Dantas, Ricardo Grossi, 1974- 14 May 2004 (has links)
Advisor: Eduardo Tavares Costa / Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: Medical ultrasound imaging has been widely used and has become a reference standard in some clinical examinations. B-mode imaging is one of its most popular modalities, providing two-dimensional anatomical images corresponding to tomographic slices of the tissue under investigation. Despite all the technological advances of the past decades, B-mode images still present low quality compared with magnetic resonance or X-ray computed tomography images. One of the chief problems is the occurrence of speckle, an artifact that introduces a grainy, worm-like pattern pervading the image, corrupting real structures and substantially compromising medical diagnosis. Speckle results from destructive interference between overlapping returning echoes and is characteristic of systems that use coherent sources, such as ultrasound, laser and radar. The objective of this thesis was the development of mathematical methods for speckle reduction and the consequent quality improvement of B-mode ultrasound images, reducing the subjectivity of ultrasound-based medical diagnosis and supporting automatic image-analysis methods. Two approaches to speckle reduction were introduced: an original method called Phase Diversity (Split Phase Processing, SPP), and innovations to the method known as Zero Adjustment Processing (ZAP). Side contributions include a speckle detection technique based on mathematical morphology and the concepts of effective scatterers and the speckle-free image, the latter used as the reference standard in the simulations. The methods were applied to simulated images as well as to images acquired with commercial ultrasound equipment in real clinical situations. Results were evaluated qualitatively, by visual inspection, and quantitatively, through statistical analysis of the improvements in contrast and signal-to-noise ratio and of the ability to detect structures in the images. Across the test images, speckle reduction levels of 50 to 90% were achieved on real clinical images, depending on the quality metric used / Doctorate in Electrical Engineering
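The speckle mechanism described in this abstract — coherent returns interfering within a resolution cell — can be illustrated with the classical random phasor sum model. The sketch below is a generic textbook simulation, not code from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fully developed speckle: each resolution cell sums many echoes with
# independent random amplitudes and uniformly distributed phases.
n_scatterers = 50
n_cells = 100_000

phases = rng.uniform(0, 2 * np.pi, size=(n_cells, n_scatterers))
amplitudes = rng.rayleigh(1.0, size=(n_cells, n_scatterers))
field = (amplitudes * np.exp(1j * phases)).sum(axis=1)

envelope = np.abs(field)  # detected amplitude before log compression

# For fully developed speckle the envelope is Rayleigh distributed,
# giving the characteristic point SNR (mean/std) of about 1.91.
print(envelope.mean() / envelope.std())
```

The low, fixed point SNR of roughly 1.91 is what makes speckle behave like multiplicative noise, and it is the baseline that reduction methods such as SPP and ZAP aim to improve.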
212

Non-parametric edge detection in speckled imagery

Giovanny Giron Amaya, Edwin 31 January 2008 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This work proposes a non-parametric technique for edge detection in speckled imagery. SAR (synthetic aperture radar), sonar, B-mode ultrasound and laser images are corrupted by a non-additive noise called speckle. Several statistical models have been proposed to describe this noise, leading to the development of specialized techniques for image enhancement and analysis. The G0 distribution is a statistical model that can describe a wide range of targets, such as, in SAR data, pastures (smooth), forests (rough) and urban areas (very rough). The aim of this work is to study alternative techniques for edge detection in speckled images, taking Gambini et al. (2006, 2008) as the starting point. A new edge detector based on the Kruskal-Wallis test is proposed. Our numerical results show that this detector is an attractive alternative to Gambini's detector, which is based on the likelihood function. We provide evidence that Gambini's technique can be successfully replaced by the Kruskal-Wallis method; the gain is an algorithm roughly 1000 times faster, without compromising the quality of the results
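A minimal sketch of the Kruskal-Wallis idea for edge detection (my illustration, not the authors' implementation): slide a candidate edge position along a strip of pixels crossing the suspected boundary and keep the split that maximizes the Kruskal-Wallis H statistic between the two sides. Being rank-based, the test needs no assumption about the speckle distribution:

```python
import numpy as np
from scipy.stats import kruskal

def kw_edge(strip, min_side=10):
    """Index in a 1-D pixel strip that maximizes the Kruskal-Wallis
    H statistic between the two samples it separates."""
    best_idx, best_h = None, -np.inf
    for i in range(min_side, len(strip) - min_side):
        h, _ = kruskal(strip[:i], strip[i:])
        if h > best_h:
            best_h, best_idx = h, i
    return best_idx, best_h

# Toy strip: two gamma-distributed regions, mimicking different SAR textures.
rng = np.random.default_rng(1)
strip = np.concatenate([rng.gamma(1.0, 1.0, 100),   # rough, darker region
                        rng.gamma(4.0, 1.0, 100)])  # smoother, brighter region
print(kw_edge(strip))  # split index found near the true edge at 100
```

Each candidate split costs only a rank test rather than a full likelihood evaluation, which is consistent with the large speed-up over the likelihood-based detector reported above.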
213

Modeling synthetic aperture radar image data

Matthew Pianto, Donald 31 January 2008 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In this thesis we study maximum likelihood (ML) estimation of the roughness parameter of the G0A distribution for speckled image data (Frery et al., 1997). We find that, when a certain condition on the sample moments is satisfied, the likelihood function is monotone and the ML estimates are infinite, implying a flat region. We implement four bias-correction estimators in an attempt to obtain finite ML estimates. Three of the estimators come from the literature on monotone likelihood (Firth, 1993; Jeffreys, 1946) and one, based on resampling, is proposed by the author. We run Monte Carlo experiments to compare the four estimators and find no clear favorite, except when a parameter (given prior to estimation) takes a specific value. We also apply the estimators to real synthetic aperture radar data. This analysis shows that the estimators must be compared by their ability to correctly classify regions as rough, flat, or intermediate, rather than by their biases and mean squared errors
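To make the estimation problem concrete, here is a sketch of numerical ML estimation of the roughness parameter α with γ and the number of looks L held fixed, using my rendering of the G0A amplitude density from Frery et al. (1997); the bias-correction schemes studied in the thesis are not reproduced:

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize_scalar

def g0a_negloglik(alpha, z, gamma, L):
    """Negative log-likelihood of the G0A amplitude model (alpha < 0):
    f(z) = 2 L^L Gamma(L-a) z^(2L-1) / (g^a Gamma(L) Gamma(-a) (g + L z^2)^(L-a))."""
    n = z.size
    ll = (n * (np.log(2.0) + L * np.log(L) + gammaln(L - alpha)
               - alpha * np.log(gamma) - gammaln(L) - gammaln(-alpha))
          + (2.0 * L - 1.0) * np.log(z).sum()
          - (L - alpha) * np.log(gamma + L * z ** 2).sum())
    return -ll

def fit_alpha(z, gamma=1.0, L=1.0):
    res = minimize_scalar(g0a_negloglik, bounds=(-20.0, -0.1),
                          args=(z, gamma, L), method="bounded")
    return res.x
```

When a sample falls in the monotone-likelihood region described above, the minimizer runs to the lower bound instead of converging to an interior point — the numerical symptom of the infinite ML estimates the thesis addresses.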
214

A novel fuzzy digital image correlation algorithm for non-contact measurement of the strain during tensile tests / Développement et validation d'un algorithme de corrélation d'images numériques utilisant la logique floue pour mesurer la déformation pendant les tests de traction

Zhang, Juan January 2016 (has links)
Abstract: The present thesis focuses on non-contact, efficient strain measurement using the Digital Image Correlation (DIC) method, which tracks a random speckle pattern to measure displacements accurately on the surface of an object undergoing deformation. Specifically, a more efficient DIC algorithm was developed, implemented, and validated. The thesis consists of five parts related to the novel DIC algorithm: (a) the development and implementation, (b) the numerical verification, (c) the experimental validation, for tensile loading, by comparison with deformation measurements from the strain gauge technique, (d) the investigation of a novel atomization process to reproducibly generate the speckle pattern for accurate tracking, and (e) the analysis of the error sources in the DIC measurements. As an example application, the DIC algorithm was used to examine the mechanical properties of polymethyl methacrylate (PMMA) used in skeletal reconstruction. In the DIC algorithm, images of an object are captured as it deforms. Nonlinear optimization techniques are then used to correlate the speckle pattern on the surface of the object before and after the displacement. This optimization process requires a choice of suitable initial displacement values: the more accurate the estimate of these initial values, the more likely and the more efficient the convergence of the optimization. The thesis introduces a novel fuzzy-logic-based processing technique that approximates the initial displacement values used to initialize the iterative optimization, rendering displacements and deformations more accurately and efficiently. The mathematical formulation of the novel algorithm was developed and implemented in the MATLAB programming language. Algorithmic verification was performed using computer-generated images simulating rigid-body displacements and uniform tensile deformations: (1) translations of 0.1-1 pixel, (2) rotation angles of 0.5-5°, and (3) large tensile deformations of 5000-300000 µε. The verification showed that the accuracy of the novel DIC algorithm exceeded 99% for the simulated displacement types and levels. Experimental validation examined the effectiveness of the technique under realistic testing conditions. Normalized PMMA specimens, in accordance with ASTM F3087, were produced, inspected and subjected to tensile loading until failure. The deformation of the specimen surface was measured using (a) the novel DIC technique and (b) strain gauge rosettes. The mean maximum force and ultimate strength of four specimens were 882.2±108.3 N and 49.3±6.2 MPa, respectively. The mean ultimate deformation from the gauge and DIC groups was 15746±2567 µε and 19887±3790 µε, respectively. Such large deformations are common in polymeric materials, and the DIC technique had not previously been investigated for deformations of this magnitude. The relative mean error of the DIC measurement, with reference to the strain gauge technique, was found to be up to 26.0±7.1%. The mean Young's modulus and Poisson's ratio were 3.78±0.07 GPa and 0.374±0.02 from the strain gauge measurements, and 3.16±0.61 GPa and 0.373±0.08 from the DIC measurements. The increasing difference between the DIC strain measurements and those of the strain gauge technique is likely related to the gradual distortion of the speckle pattern on the surface of the tensile specimen. A correction factor (CF) of 1.27 was therefore introduced to correct the systematic error in the DIC deformation measurements. The corrected ultimate deformation of the DIC measurements became 15712±357 µε, with a relative mean error of -0.5±7.1% compared with the strain gauge measurements; correspondingly, the corrected mean Young's modulus and Poisson's ratio became 3.8±0.4 GPa and 0.368±0.025, respectively. Using an atomization process, paint speckles were reproducibly generated on the surface of an object. A factorial design of experiments was used to investigate the influence of the speckle pattern (grey-value distribution and gradient) on DIC measurement accuracy. Nine different speckle patterns were generated with the atomization process and tested for rigid-body translation and rotation; the relative mean errors among them varied from 1.1±0.3% to -6.5±3.6%. The preferred pattern, characterized by a wide range of sharp speckles and grey values, produced a mean error of 1.1±0.3%. An analysis of errors and their sources in the DIC measurement was conducted. Three categories of sources were investigated and discussed: the algorithm itself, processing parameters (subset size, number of pixels computed) and the physical environment (specimen uniformity, speckle pattern, self-heating of the CCD camera, lens distortion, and non-linearity error in the strain gauge circuit). Solutions were provided to help reduce the systematic and random errors in each of these categories. In conclusion, a novel DIC algorithm providing a more accurate approximation of the initial guess, and hence efficient and accurate convergence of the optimization, was successfully formulated, implemented and verified for relatively large deformations. The experimental validation revealed an unexpected systematic error of the DIC measurements relative to the strain gauge technique: the larger the deformation applied to the specimen, the larger the error became. The gradual distortion of the speckles on the surface of the object was likely the underlying cause; being systematic, the error was corrected. The atomization process allowed reproducible speckle generation on the surface of an object. Using DIC measurements, the mechanical behavior of polymers undergoing large deformations, such as the polymethyl methacrylate used in skeletal reconstruction, can be investigated and, once understood, the knowledge gained can help develop more effective materials.
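The role of the initial displacement guess in DIC can be seen in the classical integer-pixel search by zero-normalized cross-correlation, sketched below; the thesis replaces this seeding step with its fuzzy-logic approximation, which is not reproduced here:

```python
import numpy as np

def zncc_initial_guess(ref, cur, y, x, subset=21, search=15):
    """Integer-pixel displacement of a subset centred at (y, x), found by
    exhaustive zero-normalized cross-correlation between two images."""
    h = subset // 2
    f = ref[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    f = (f - f.mean()) / f.std()
    best, best_uv = -np.inf, (0, 0)
    for v in range(-search, search + 1):
        for u in range(-search, search + 1):
            g = cur[y + v - h:y + v + h + 1,
                    x + u - h:x + u + h + 1].astype(float)
            g = (g - g.mean()) / g.std()
            score = (f * g).mean()
            if score > best:
                best, best_uv = score, (u, v)
    return best_uv  # (dx, dy) seed for the sub-pixel nonlinear optimization
```

The closer this seed is to the true displacement, the faster and more reliably the subsequent sub-pixel nonlinear optimization converges — precisely the motivation for a better initial-guess scheme.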
215

Apport des technologies d'imagerie non invasives dans l'évaluation du pronostic des pathologies cardiovasculaires. / Utility of non-invasive imaging techniques in evaluating the prognosis of cardiovascular disease

Chopard dit Jean, Romain 17 June 2014 (has links)
For this doctoral thesis, we carried out five original studies using three different non-invasive cardiovascular imaging techniques:
- In an ex vivo study of human coronary arteries, we show that 64-slice computed tomography (CT) cannot precisely characterize the different components of plaques; in particular, it is impossible to differentiate between fibrous and lipid plaques. Our study also showed that intravascular ultrasound (IVUS) should not be used as the reference method in studies of plaque composition, since this technique also suffers from numerous limitations.
- Our study of thrombo-aspiration showed a significant benefit of effective thrombus extraction during thrombo-aspiration at the acute phase of ST-elevation myocardial infarction (STEMI), notably a reduction of the extent of no-reflow and of infarct size as evaluated by magnetic resonance imaging (MRI). Productive thrombo-aspiration was shown in our study to be an independent predictor of final infarct size. Effective extraction of thrombotic material could be considered in the cathlab as a criterion for evaluating the success of the thrombo-aspiration procedure.
- Our study of acute coronary syndromes with angiographically normal coronary arteries confirmed the utility of MRI in establishing the etiology of this clinical presentation, making an etiological diagnosis possible in two-thirds of patients. We also observed excellent outcomes in the third of patients in whom MRI did not find any myocardial anomalies. Larger studies are warranted to confirm these findings.
- Based on cardiac MRI performed in patients presenting a first episode of STEMI, we established a threshold value of troponin that predicts the occurrence of no-reflow.
- Lastly, using speckle-tracking analysis, we demonstrated impaired right ventricular systolic function, evidenced by altered right ventricular longitudinal strain values, in patients with intermediate- to high-risk pulmonary embolism (PE), compared with a group of patients with low-risk PE.
216

Hypertrophie ventriculaire gauche physiologique ou pathologique : Intérêt d’une approche multiparamétrique / Physiological or pathological left ventricular hypertrophy : interest of a multi-parametric approach

Schnell, Frédéric 17 November 2015 (has links)
Introduction: The diagnosis of hypertrophic cardiomyopathy (HCM) in athletes is difficult. Intense sports practice induces electrical and morphological physiological remodeling that can be hard to differentiate from the changes induced by pathology. Yet it is essential to diagnose cardiomyopathy in an athlete with certainty: an athlete with an unrecognized cardiomyopathy is at risk of sudden cardiac death, while over-diagnosis carries heavy professional and social consequences. Methods: (1) We sought to improve the ECG criteria that differentiate exercise-induced ECG changes from those of an underlying cardiomyopathy, using a multicenter cohort of athletes and HCM patients. (2) We sought to define the best investigation algorithm for abnormal ECG findings through longitudinal follow-up of athletes with T-wave inversion. (3) We sought to better characterize the phenotype of athletes with HCM compared with sedentary HCM patients in a multicenter cohort. (4) We investigated whether new myocardial deformation imaging techniques, i.e. speckle tracking, improve diagnostic accuracy and prognostic evaluation in HCM, in a cohort of HCM patients and athletes from Rennes (see the sketch after this abstract). Results: We proposed a new ECG classification that reduces the rate of false-positive ECGs in athletes without decreasing diagnostic accuracy for HCM. In athletes with pathological T-wave inversion (PTWI), we demonstrated that cardiac MRI is mandatory, as echocardiography missed the diagnosis of pathology in about 35% of cases. Nevertheless, the current diagnostic criteria for HCM can fall short: athletes with HCM have a phenotype distinct from sedentary HCM patients, with better systolic (notably longitudinal) and diastolic function. Assessment of longitudinal function during exercise and of mechanical dispersion are promising diagnostic parameters, as impaired longitudinal strain appears to be related to myocardial fibrosis. Exercise echocardiography, in particular exercise-induced mitral regurgitation, appears to be an important prognostic factor in HCM. Conclusions: This work developed tools to better differentiate pathological from physiological left ventricular hypertrophy (LVH), to better characterize LVH, and to determine the prognosis of HCM with greater precision.
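For context, the longitudinal strain that speckle tracking reports is simply the relative length change of a myocardial segment between end-diastole and end-systole. A sketch of the definition with illustrative numbers (not data from the thesis):

```python
def longitudinal_strain(length_end_diastole_mm, length_end_systole_mm):
    """Lagrangian strain in percent: negative for normal systolic
    shortening (healthy global values are typically near -20%)."""
    return ((length_end_systole_mm - length_end_diastole_mm)
            / length_end_diastole_mm * 100.0)

print(longitudinal_strain(90.0, 72.0))  # -20.0 (%)
```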
217

Velocity Variations of the Kaskawulsh Glacier, Yukon Territory, 2009-2011

Darling, Samantha January 2012 (has links)
Laser altimetry and satellite gravity surveys indicate that the St Elias Icefields are currently losing mass and are among the largest non-polar contributors to sea level rise in the world. However, a poor understanding of glacier dynamics in the region is a major hurdle in evaluating regional variations in ice motion and the relationship between changing surface conditions and ice flux. This study combines in-situ dGPS measurements and advanced Radarsat-2 (RS-2) processing techniques to determine daily and seasonal ice velocities for the Kaskawulsh Glacier from summer 2009 to summer 2011. Three permanent dGPS stations were installed along the centreline of the glacier in 2009, with an additional permanent station on the South Arm in 2010. The Precise Point Positioning (PPP) method is used to process the dGPS data using high-accuracy orbital reconstruction. RS-2 imagery was acquired on a 24-day cycle from January to March 2010, and from October 2010 to March 2011, in a combination of ultra-fine and fine beam modes. Seasonal velocity regimes are readily identifiable in the dGPS results, with distinct variations in both horizontal velocity and vertical motion. The Spring Regime consists of an annual peak in horizontal velocity that corresponds closely with the onset of the melt season and progresses up-glacier, following the onset of melt at each station. The Summer Regime sees variable horizontal velocity and vertical uplift, superimposed on a long-term decline in motion. The Fall Regime sees a gradual slowing at all stations with little variation in horizontal velocity or vertical position. Rapid but short accelerations lasting up to 10 days were seen in the Winter Regimes of both 2010 and 2011, occurring at various times throughout each regime. These events initiated at the Upper Station and progressed down-glacier at propagation speeds of up to 16,380 m day⁻¹, accompanied by vertical uplift lasting for similar periods. Three velocity maps produced from speckle tracking, one from the winter of 2010 and two from the fall/winter of 2011, were validated by comparison with dGPS velocities, surface flow direction, and bedrock areas of zero motion, with an average velocity error of 2.0% and an average difference in orientation of 4.3°.
218

Regional Assessment of Glacier Motion in Kluane National Park, Yukon Territory

Waechter, Alexandra January 2013 (has links)
This project presents regional velocity measurements for the eastern portion of the St. Elias Mountains, including the entire glaciated area of Kluane National Park, derived from speckle tracking of Radarsat-2 imagery acquired in winter 2011 and 2012. This technique uses a cross-correlation approach to determine the displacement of the ‘speckle’ pattern of radar phase returns between two repeat-pass images. Further reconstruction of past velocities is performed on a selection of key glaciers using feature tracking of Landsat-5 imagery, allowing for the investigation of variability in glacier motion on interannual and decadal time scales. The results of the analysis showed that there is a strong velocity gradient across the region reflecting high accumulation rates on the Pacific-facing slope of the mountain range. These glaciers may have velocities an order of magnitude greater than glaciers of a similar size on the landward slope. Interannual variability was high, both in relation to surge events, of which a number were identified, and variation of other unknown controls on glacier motion. A long-term trend of velocity decrease was observed on the Kaskawulsh Glacier when comparing the results of this analysis to work carried out in the 1960s, the pattern of which is broadly congruent to measurements of surface elevation change over a similar period.
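A minimal sketch of the speckle-tracking measurement itself, assuming a 24-day Radarsat-2 repeat cycle and a known pixel spacing (the operational processing chain, with oversampling and outlier filtering, is considerably more involved):

```python
import numpy as np

def patch_velocity(chip1, chip2, pixel_size_m, dt_days=24.0):
    """Integer-pixel patch displacement by phase correlation of two
    repeat-pass image chips, converted to a speed in metres per day."""
    f1, f2 = np.fft.fft2(chip1), np.fft.fft2(chip2)
    cross = f1 * np.conj(f2)
    corr = np.fft.ifft2(cross / np.abs(cross)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates to signed offsets.
    dy, dx = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return np.hypot(dy, dx) * pixel_size_m / dt_days
```

Repeating this over a grid of chips yields velocity maps like those described above, with bedrock areas of nominally zero motion serving as a check on the error floor.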
219

Laser Speckle Patterns with Digital Image Correlation

Newberry, Shawn 01 September 2021 (has links)
Digital Laser Speckle Image Correlation (DiLSIC) is a technique that pairs a laser-generated speckle pattern with Digital Image Correlation (DIC). This technology eliminates the need to apply an artificial speckle pattern to the surface of the material of interest, and produces a finer speckle pattern, resulting in a more sensitive analysis. This investigation explores the parameters affecting laser speckle patterns for DIC and studies DiLSIC as a tool to measure surface strain and detect subsurface defects on pressure vessels. In this study a 632.8 nm, 30 mW helium-neon laser generated the speckle pattern by passing through the objective end of an objective lens. All experiments took place in a lab setting on a high-performance laminar-flow-stabilized optical table.

This investigation began with a deeper look at the camera settings that affect the effectiveness of using laser speckles with DIC. The first studies concentrated on the aperture size (f-stop), shutter speed, and gain (ISO) of the camera. Through a series of zero-correlation studies, translation tests, and settings studies, it was discovered that, much like white-light DIC, an increased gain allowed for more noise and less reliable measurements when using DiLSIC. It was shown that the appropriate aperture size and shutter speed largely depend on the surface composition of the material, and that these factors should be investigated for each new sample with a different surface finish.

To determine the feasibility of using DiLSIC on pressure vessels, two samples were acquired. The first was a standard ASTM filament-wound composite pressure vessel (CPV) with an upper load limit of 40 psi. The second was a plastic vessel to which internal subsurface defects had been added with an air pencil grinder. Both vessels were put under a pressure load with a modified air compressor that allowed multiple loading cycles through the use of a pressure relief valve. The CPV was mapped out in 10-degree increments between the 90° and 180° markings on the pressure vessel, in three areas, each one inch apart. The CPV had pressure loads applied at 10, 20, 30, and 40 psi. DiLSIC was able to measure increasing displacement with increased loading on the surface of the CPV; however, with a load limit of 40 psi, no strains were detected. The plastic vessel had known subsurface defects, and these areas were the focus of the investigation. The plastic vessel was loaded at 5, 10, 12, 15, 17, and 20 psi. The 5 psi image was used as the reference image for the correlation, and decorrelation consistently occurred at 20 psi. This investigation demonstrated that DiLSIC can detect and locate subsurface defects through strain measurement. The results were verified with traditional white-light DIC, which also showed that the subsurface defects on pressure vessels were detectable. The DIC and DiLSIC results did not agree on the maximum strain measurement, with DiLSIC predicting much larger strains than traditional DIC; this is due to the larger effect of out-of-plane displacement on DiLSIC. The median measured hoop strain was in agreement between DiLSIC, DIC, and the predicted hoop strain for a wall thickness of 0.1 inches. However, DiLSIC also produced unreliable maximum strain measurements. This technique shows potential for future applications, but more investigation will be needed before it can be implemented for industrial use. A full investigation into the parameters surrounding this technique, and into the factors that contribute most to added noise and unreliability, should be conducted. This technology is being developed by multiple entities and shows promising results; once further matured, it could be a useful tool for rapid surface strain measurement and subsurface defect detection in nondestructive evaluation applications. Continued investigation into this technology and its applications is therefore recommended.
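The "predicted hoop strain" mentioned above follows from thin-walled pressure vessel theory. A sketch with placeholder inputs — the 0.1 in wall thickness is from the text, but the radius and elastic constants below are illustrative assumptions, not values from the thesis:

```python
def predicted_hoop_strain(p_psi, radius_in, thickness_in, E_psi, nu):
    """Thin-wall biaxial hoop strain: eps = (s_hoop - nu * s_axial) / E,
    with s_hoop = p*r/t and s_axial = p*r/(2t)."""
    s_hoop = p_psi * radius_in / thickness_in
    s_axial = s_hoop / 2.0
    return (s_hoop - nu * s_axial) / E_psi

# Placeholder numbers: 20 psi load, 2 in radius, 0.1 in wall,
# a generic plastic modulus of 300 ksi, Poisson's ratio 0.35.
print(predicted_hoop_strain(20.0, 2.0, 0.1, 300e3, 0.35))  # ~1.1e-3 strain
```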
220

Active Stereo Vision for Precise Autonomous Vehicle Hitching

Michael Clark Feller (8071319) 03 December 2019 (has links)
This thesis describes the development of a low-cost, low-power, accurate sensor designed for precise feedback control of an autonomous vehicle to a hitch. Few studies have addressed the hitching problem, yet it is an important challenge for vehicles in the agricultural and transportation industries. Existing sensor solutions are high-cost, high-power, and require modification of the hitch in order to work. Other potential sensor solutions such as LiDAR and digital fringe projection suffer from these same fundamental problems.

The solution that has been developed uses an active stereo vision system, combining classical stereo vision with a laser speckle projection system, which solves the correspondence problem experienced by classic stereo vision sensors. A third camera is added to the sensor for texture mapping. As a whole, the system costs $188 and draws 2.3 W.

To test the system, a model test of the hitching problem was developed using an RC car and a target representing a hitch. In the application, both the stereo system and the texture camera are used for measurement of the hitch, and a control system is implemented to precisely control the vehicle to the hitch. The system can successfully control the vehicle from within 35° of perpendicular to the hitch, to a final position with an overall standard deviation of 3.0 mm of lateral error and 1.5° of angular error. Ultimately, this is believed to be the first low-power, low-cost hitching system that does not require modification of the hitch in order to sense it.
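The depth measurement underlying such an active stereo sensor reduces to triangulation from disparity in a rectified image pair. A minimal sketch — the focal length and baseline below are illustrative, not the sensor's calibration:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Rectified pinhole stereo: depth Z = f * B / d, so range resolution
    degrades as disparity shrinks with distance."""
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 60 mm baseline, 35 px disparity.
print(stereo_depth(35.0, 700.0, 0.060))  # 1.2 m
```

The laser speckle projection supplies dense, high-contrast texture so that the correspondence search producing the disparity map finds unambiguous matches even on featureless surfaces.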
