  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
641

Contribution à l'extraction et à l'exploitation d'attributs géométriques du maillage 3D de fragments archéologiques

Laugerotte, Cédric 03 March 2006 (has links)
This thesis concerns the extraction of geometric attributes present on the 3D models that result from the digital acquisition of archaeological fragments. These attributes are then exploited for classification, reconstruction, and reassembly purposes. / Doctorat en sciences appliquées / info:eu-repo/semantics/nonPublished
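The kind of per-face geometric attribute this abstract refers to can be illustrated with a short sketch. This is a generic example, not the thesis's actual pipeline: the mesh representation (vertex and face index arrays) and the choice of attributes (face normals and areas) are assumptions for illustration.

```python
import numpy as np

def face_attributes(vertices, faces):
    """Per-face unit normals and areas of a triangle mesh.

    vertices: (V, 3) float array of 3D points.
    faces:    (F, 3) int array of vertex indices per triangle.
    (Illustrative attribute extraction only.)
    """
    tri = vertices[faces]                 # (F, 3, 3): the 3 corners of each face
    e1 = tri[:, 1] - tri[:, 0]            # first edge vector
    e2 = tri[:, 2] - tri[:, 0]            # second edge vector
    cross = np.cross(e1, e2)              # normal direction, length = 2 * area
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    normals = cross / np.linalg.norm(cross, axis=1, keepdims=True)
    return normals, areas
```

Attributes like these (normals, areas, curvature estimates) are the typical raw material for matching fracture surfaces when classifying and reassembling fragments.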
642

Sources of contrast and acquisition methods in functional MRI of the human brain

Denolin, Vincent 08 October 2002 (has links)
Functional Magnetic Resonance Imaging (fMRI) has developed considerably since its discovery in the early 1990s. Most often based on the BOLD (Blood Oxygenation Level Dependent) effect, this technique provides entirely non-invasive maps of brain activation, with better spatial and temporal resolution than pre-existing methods such as positron emission tomography (PET). Readily performed with the NMR scanners available in hospitals, it has led to numerous applications in neuroscience and in the study of brain pathologies.

It is now well established that the BOLD effect stems from an increase in venous blood oxygenation in brain regions where neuronal activation occurs. This reduces the magnetic susceptibility difference between blood and the surrounding tissue (deoxyhemoglobin being paramagnetic and oxyhemoglobin diamagnetic), and therefore increases the signal if the acquisition method is sensitive to magnetic field inhomogeneities. Many unknowns remain, however, concerning the mechanisms linking changes in oxygenation, blood flow, and blood volume to the observed signal increase, and concerning how the phenomenon depends on parameters such as field strength, spatial resolution, and the type of NMR sequence used. The first part of the thesis is therefore devoted to the BOLD effect, in the particular case of contributions from draining veins in gradient-echo sequences made motion-sensitive by additional field gradients. The model developed shows that, contrary to the behavior suggested by earlier publications, the effect of these gradients is not a monotonic decrease of the signal difference as gradient strength increases. Large oscillations are produced by the phase effect due to the displacement of blood spins in the additional gradients, and by the change in this phase caused by the increase in blood flow. Experimental validation of the model is carried out with the PRESTO sequence (Principles of Echo-Shifting combined with a Train of Observations), i.e., a gradient-echo sequence in which additional gradients increase sensitivity to field inhomogeneities, and hence to the BOLD effect. Qualitative agreement with theory is established by showing that the observed signal change can grow as the additional gradients are intensified.

Another ongoing debate in fMRI concerns the optimization of acquisition methods, notably with respect to their sensitivity to the BOLD effect, their spatial and temporal resolution, their susceptibility to artifacts such as signal loss in regions with large-scale field inhomogeneities, and the contamination of activation maps by contributions from large veins, which can be distant from the actual site of activation. Spin-echo sequences are known to be less sensitive to the last two problems, which is why the second part of the thesis is devoted to a new technique that gives images a T2 rather than T2* weighting. The basic principle of the method is not new, since it is "T2 preparation" (T2prep), which attenuates the longitudinal magnetization differently according to the T2 relaxation time, but it had never been applied to fMRI. Its advantages over other hybrid T2/T2* methods are mainly the gains in temporal resolution and in electromagnetic energy deposition in tissue. The contrast generated by these sequences is studied using stationary solutions of the Bloch equations. Predictions of the BOLD contrast are made from these stationary solutions and from a simplified description of the BOLD effect in terms of T2 and T2* changes. A method is proposed to keep the signal constant across the pulse train by varying the flip angle from one pulse to the next, which reduces blurring in the images. In vitro experiments show excellent quantitative agreement with the theoretical predictions for the measured signal intensities, both for the constant flip angle and for the variable-angle series. Visual cortex activation experiments demonstrate the feasibility of fMRI with T2prep sequences and confirm the theoretical predictions of the activation-induced signal change.

The third part of the thesis follows logically from the first two, since it is devoted to extending the echo-shifting principle to steady-state spin-echo sequences, which yields strong T2 and T2* weighting while keeping a short repetition time, and hence a good temporal resolution. A thorough theoretical analysis of signal formation in such sequences is presented. It is based partly on the Bloch-equation solution technique used in the second part, which consists in computing the steady-state magnetization as a function of the precession angles in the transverse plane, then integrating over the isochromats to obtain the resulting signal from a voxel (volume element). The problem is also approached in terms of "coherence pathways", i.e., the subdivision of the signal into components that are more or less dephased by the combined effect of the RF pulses, the applied gradients, and the inhomogeneities of the main magnetic field. This approach allows the signal intensity in echo-shifted sequences to be interpreted as the result of destructive interference between several physically interpretable components, and explains how varying the phase of the excitation pulse (RF spoiling) eliminates this interference. In vitro experiments show excellent quantitative agreement with the theoretical calculations, and the feasibility of the method in vivo is established. It is not yet possible to conclude on the applicability of the new method to fMRI, but the proposed theoretical approach has in any case led to a thorough re-examination of the signal formation mechanisms for all echo-shifted methods, since the gradient-echo case turns out to be entirely similar to the spin-echo case.

The thesis thus progresses from modeling the BOLD effect to sequence design, thereby addressing two fundamental aspects of fMRI physics. / Doctorat en sciences appliquées / info:eu-repo/semantics/nonPublished
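The isochromat integration this abstract mentions — summing transverse magnetization over the spread of precession frequencies inside a voxel — can be sketched numerically. A minimal illustration, assuming a uniform off-resonance distribution and arbitrary parameter values; it is not the thesis's actual signal model:

```python
import numpy as np

def voxel_signal(te, df_max, t2=0.1, n_iso=20001):
    """Gradient-echo voxel signal magnitude at echo time te (s).

    Sums transverse magnetization over isochromats whose off-resonance
    frequencies are spread uniformly over [-df_max, df_max] Hz; the
    intra-voxel dephasing adds apparent (T2*) decay on top of T2 decay.
    """
    freqs = np.linspace(-df_max, df_max, n_iso)   # isochromat frequencies (Hz)
    phases = np.exp(2j * np.pi * freqs * te)      # phase accumulated at TE
    return np.exp(-te / t2) * abs(phases.mean())  # T2 decay * dephasing factor
```

The same summation viewpoint underlies the coherence-pathway picture: the voxel signal is the interference of many component magnetizations with different accumulated phases.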
643

High-resolution computer imaging in 2D and 3D for recording and interpreting archaeological excavations =: Le rôle de l'image numérique bidimensionelle et tridimensionelle de haute résolution dans l'enregistrement et l'interprétation des données archéologiques

Avern, Geoffrey J. January 2000 (has links)
Doctorat en philosophie et lettres / info:eu-repo/semantics/nonPublished
644

Ferramentas interativas de auxílio a diagnóstico por neuro-imagens 3D / Interactive tools for volumetric neuroimage based diagnosis

Yauri Vidalón, José Elías 22 August 2018 (has links)
Advisor: Wu Shin-Ting / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Previous issue date: 2012 / Abstract: Because of their high spatial and spectral resolution, magnetic resonance images are increasingly used both in the study of human organs and in the diagnosis of structural and functional abnormalities, as well as in surgical planning and training. Along with the rapid evolution of medical image processing algorithms, computer-aided diagnosis systems specialized in mammography, angiography, and computed tomography and magnetic resonance of the thorax have emerged in the last decade. The structural complexity of the brain and the individual anatomical shape of the skull remain, however, challenges for developing a diagnostic system specialized in neuroimaging. Expert intervention is still essential both in the identification and in the interpretation of radiological findings. In this dissertation, we propose three techniques to aid medical experts in the interactive search for subtle radiological findings. We present two interaction widgets, a movable lens and a volumetric probe, that continuously update the volume data in focus while they are manipulated. In this way, it is possible to investigate brain regions of interest while preserving their context. And, to facilitate the visual perception of subtle functional or structural changes, we propose a 1D transfer function editor to enhance or increase the contrast between adjacent voxels. The tools were assessed by a group of neuroimaging experts from the Laboratory of Neuro-Images of the Faculty of Medical Sciences of Unicamp. / Mestrado / Engenharia de Computação / Mestre em Engenharia Elétrica
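The 1D transfer-function idea — remapping voxel intensities to make subtle differences between adjacent voxels visible — can be sketched as a piecewise-linear lookup. The control points below are arbitrary illustrative values, and a real volume-rendering transfer function also assigns opacity and color, not just remapped intensity:

```python
import numpy as np

def apply_transfer_function(volume, ctrl_in, ctrl_out):
    """Remap voxel intensities through a piecewise-linear 1D transfer
    function defined by control points (ctrl_in, ctrl_out), e.g. to
    stretch the contrast of a narrow band containing a subtle lesion."""
    return np.interp(volume, ctrl_in, ctrl_out)

# Example control points: stretch the 0.4-0.6 intensity band (where
# adjacent voxels differ only slightly) over most of the output range.
ctrl_in = [0.0, 0.4, 0.6, 1.0]
ctrl_out = [0.0, 0.1, 0.9, 1.0]
```

Dragging the control points in an editor widget amounts to re-running this remapping interactively over the voxels in focus.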
645

A Java image editor and enhancer

Darbhamulla, Lalitha 01 January 2004 (has links)
The purpose of this project is to develop a Java applet that provides all the tools needed for creating image fantasies. It lets the user pick a template and an image and combine them together. The user can then apply image processing techniques such as rotation, zooming, and blurring according to his/her requirements.
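Two of the operations listed, rotation and blurring, can be sketched in a few lines. This is a generic illustration of the techniques, not the applet's actual Java code:

```python
import numpy as np

def rotate90(img, times=1):
    """Rotate an image array counterclockwise by multiples of 90 degrees."""
    return np.rot90(img, times)

def box_blur(img, k=3):
    """Blur by averaging each pixel with its k*k neighborhood
    (border pixels are replicated so edges stay defined)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape)
    for dy in range(k):           # accumulate each window offset
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)          # normalize to the window size
```

A box blur is the simplest smoothing kernel; editors typically offer Gaussian blur as a higher-quality variant of the same convolution idea.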
646

Fantomy pro oftalmologický ultrazvukový systém / Phantoms for ultrasound system in ophthalmology

Fabík, Vojtěch January 2013 (has links)
In this work we studied ultrasonic imaging systems and their use in ophthalmology, in particular with the Nidek 4000 device, and described ophthalmological examination methods. Using the simulation program Field II, we simulated an eye phantom and produced its B-scan and biometry, comparing how different ultrasonic probe center frequencies and different speeds of sound affect the resulting values. We also created phantoms from agarose gel and materials of different properties. On these phantoms we studied the effect of ultrasound velocity on the measurement results and the effect of agarose gel concentration on the speed of sound, and we built phantoms simulating the human eye. A measurement protocol was created for use in teaching.
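The speed-of-sound effect studied on such phantoms follows directly from the pulse-echo depth equation d = c·t/2: if the scanner converts echo times with an assumed sound speed that differs from the phantom's true one, measured distances scale by the ratio of the two speeds. The specific numbers below are illustrative:

```python
def apparent_depth(true_depth_m, c_true, c_assumed):
    """Depth the scanner reports when the echo's round-trip time is set
    by the true speed of sound c_true (m/s) but converted to distance
    with an assumed speed c_assumed, via d = c * t / 2."""
    t = 2.0 * true_depth_m / c_true   # actual round-trip time (s)
    return c_assumed * t / 2.0        # depth computed with the wrong speed
```

This is exactly why ocular biometry values shift when a tissue preset (about 1540 m/s) is applied to a water-like or agarose phantom with a different sound speed.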
647

Multiresolution variance-based image fusion

Ragozzino, Matthew 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Multiresolution image fusion is an emerging area of research for use in military and commercial applications. While many methods for image fusion have been developed, improvements can still be made. In many cases, image fusion methods are tailored to specific applications and are limited as a result. In order to make improvements to general image fusion, novel methods have been developed based on the wavelet transform and empirical variance. One particular novelty is the use of directional filtering in conjunction with wavelet transforms. Instead of treating the vertical, horizontal, and diagonal sub-bands of a wavelet transform the same, each sub-band is handled independently by applying custom filter windows. Results of the new methods exhibit better performance across a wide range of images highlighting different situations.
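One level of the wavelet-domain fusion idea can be sketched with a Haar transform. The coefficient-selection rule below (keep the coefficient with the larger magnitude in each detail sub-band, average the approximation band) is a common simple rule standing in for the thesis's variance-based selection with per-sub-band directional filter windows:

```python
import numpy as np

def haar2(a):
    """One-level 2D Haar transform: approximation LL plus LH/HL/HH details."""
    lo = (a[:, ::2] + a[:, 1::2]) / 2; hi = (a[:, ::2] - a[:, 1::2]) / 2
    ll = (lo[::2] + lo[1::2]) / 2;     lh = (lo[::2] - lo[1::2]) / 2
    hl = (hi[::2] + hi[1::2]) / 2;     hh = (hi[::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2 (perfect reconstruction)."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    lo[::2], lo[1::2] = ll + lh, ll - lh
    hi = np.empty_like(lo)
    hi[::2], hi[1::2] = hl + hh, hl - hh
    out = np.empty((lo.shape[0], lo.shape[1] * 2))
    out[:, ::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def fuse(img_a, img_b):
    """Fuse two registered images: average the approximation band and,
    per detail sub-band, keep the larger-magnitude coefficient (each of
    LH, HL, HH is handled independently, as the sub-bands could also be
    given their own directional selection windows)."""
    a_bands, b_bands = haar2(img_a), haar2(img_b)
    ll = (a_bands[0] + b_bands[0]) / 2
    details = [np.where(abs(da) >= abs(db), da, db)
               for da, db in zip(a_bands[1:], b_bands[1:])]
    return ihaar2(ll, *details)
```

Handling LH, HL, and HH with separate rules, rather than uniformly, is the point the abstract highlights as a novelty.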
648

Machine Vision Assisted In Situ Ichthyoplankton Imaging System

Iyer, Neeraj 12 July 2013 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Recently there has been considerable effort in developing systems for sampling and automatically classifying plankton from the oceans. Existing methods assume the specimens have already been precisely segmented, or aim at analyzing images containing a single specimen (extraction of their features and/or recognition of specimens as single in-focus targets in small images). The resolution of existing systems is limiting. Our goal is to develop automated, very high resolution image sensing of critically important, yet under-sampled, components of the planktonic community by addressing both the physical sensing system (e.g. camera, lighting, depth of field) and the crucial image extraction and recognition routines. The objective of this thesis is to develop a framework that aims at (i) detecting and segmenting all organisms of interest automatically, directly from the raw data, while filtering out noise and out-of-focus instances, (ii) extracting the best features from the images, and (iii) identifying and classifying the plankton species. Our approach focuses on utilizing the full computational power of a multicore system by implementing a parallel programming approach that can process large volumes of high resolution plankton images obtained from our newly designed imaging system (the In Situ Ichthyoplankton Imaging System, ISIIS). We compare some of the widely used segmentation methods, with emphasis on accuracy and speed, to find the one that works best on our data. We design a robust, scalable, fully automated system for high-throughput processing of the ISIIS imagery.
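The detect-and-segment step described above — thresholding raw imagery, then keeping specimen blobs and discarding small noise — can be sketched with Otsu thresholding and connected-component labeling. Real ISIIS processing is far more involved (and parallelized); the minimum blob size below is an arbitrary illustrative choice:

```python
import numpy as np

def otsu_threshold(img):
    """Threshold maximizing between-class variance (intensities in 0..255)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total, sum_all = hist.sum(), np.dot(np.arange(256), hist)
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        var = w0 * w1 * (sum0 / w0 - (sum_all - sum0) / w1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def label_blobs(mask, min_size=4):
    """4-connected components of a boolean mask; blobs smaller than
    min_size are dropped (a crude noise / out-of-focus filter)."""
    labels = np.zeros(mask.shape, dtype=int)
    sizes, cur = [], 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                cur += 1
                stack, size = [(i, j)], 0
                labels[i, j] = cur
                while stack:                      # flood fill one component
                    y, x = stack.pop()
                    size += 1
                    for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] \
                                and labels[ny, nx] == 0:
                            labels[ny, nx] = cur
                            stack.append((ny, nx))
                sizes.append(size)
    keep = [k + 1 for k, s in enumerate(sizes) if s >= min_size]
    return labels, keep
```

Each surviving blob would then be cropped and passed to feature extraction and species classification.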
649

Deep Learning Strategies for Pandemic Preparedness and Post-Infection Management

Lee, Sang Won January 2024 (has links)
The global transmission of Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2) has resulted in over 677 million infections and 6.88 million tragic deaths worldwide as of March 10th, 2023. During the pandemic, the ability to effectively combat SARS-CoV-2 had been hindered by the lack of rapid, reliable, and cost-effective testing platforms for readily screening patients, discerning incubation stages, and accounting for variants. The limited knowledge of the viral pathogenesis further hindered rapid diagnosis and long-term clinical management of this complex disease. While effective in the short term, measures such as social distancing and lockdowns have resulted in devastating economic loss, in addition to material and psychological hardships. Therefore, successfully reopening society during a pandemic depends on frequent, reliable testing, which can result in the timely isolation of highly infectious cases before they spread or become contagious. Viral loads, and consequently an individual's infectiousness, change throughout the progression of the illness. These dynamics necessitate frequent testing to identify when an infected individual can safely interact with non-infected individuals. Thus, scalable, accurate, and rapid serial testing is a cornerstone of an effective pandemic response, a prerequisite for safely reopening society, and invaluable for early containment of epidemics. Given the significant challenges posed by the pandemic, the power of artificial intelligence (AI) can be harnessed to create new diagnostic methods and be used in conjunction with serial tests. With increasing utilization of at-home lateral flow immunoassay (LFIA) tests, the National Institutes of Health (NIH) and Centers for Disease Control and Prevention (CDC) have consistently raised concerns about a potential underreporting of actual SARS-CoV-2-positive cases. 
When AI is paired with serial tests, it could instantly notify, automatically quantify, aid in real-time contact tracing, and assist in isolating infected individuals. Moreover, the computer vision-assisted methodology can help objectively diagnose conditions, especially in cases where subjective LFIA tests are employed. Recent advances in the interdisciplinary scientific fields of machine learning and biomedical engineering support a unique opportunity to design AI-based strategies for pandemic preparation and response. Deep learning algorithms are transforming the interpretation and analysis of image data when used in conjunction with biomedical imaging modalities such as MRI, X-ray, CT scans, confocal microscopes, etc. These advances have enabled researchers to carry out real-time viral infection diagnostics that were previously thought to be impossible. The objective of this thesis is to use SARS-CoV-2 as a model virus and investigate the potential of applying multi-class instance segmentation deep learning and other machine learning strategies to build pandemic preparedness for rapid, in-depth, and longitudinal diagnostic platforms. This thesis encompasses three research tasks: 1) computer vision-assisted rapid serial testing, 2) infected cell phenotyping, and 3) diagnosing the long-term consequences of infection (i.e., long-term COVID). The objective of Task 1 is to leverage the power of AI, in conjunction with smartphones, to rapidly and simultaneously diagnose COVID-19 infections for millions of people across the globe. AI not only makes it possible for rapid and simultaneous screenings of millions but can also aid in the identification and contact tracing of individuals who may be carriers of the virus.
The technology could be used, for example, in university settings to manage the entry of students into university buildings, ensuring that only students who test negative for the virus are allowed within campus premises, while students who test positive are placed in quarantine until they recover. The technology could also be used in settings where strict adherence to COVID-19 prevention protocols is compromised, for example, in an Emergency Room. This technology could also help with CDC’s concern on growing incidences of underreporting positive COVID-19 cases with growing utilization of at-home LFIA tests. AI can address issues that arise from relying solely on the visual interpretation of LFIA tests to make accurate diagnoses. One problem is that LFIA test results may be subjective or ambiguous, especially when the test line of the LFIA displays faint color, indicating a low analyte abundance. Therefore, reaching a decisive conclusion regarding the patient's diagnosis becomes challenging. Additionally, the inclusion of a secondary source for verifying the test results could potentially increase the test's cost, as it may require the purchase of complementary electronic gadgets. To address these issues, our innovation would be accurately calibrated with appropriate sensitivity markers, ensuring increased accuracy of the diagnostic test and rapid acquisition of test results from the simultaneous classification of millions of LFIA tests as either positive or negative. Furthermore, the designed network architecture can be utilized to detect other LFIA-based tests, such as early pregnancy detection, HIV LFIA detection, and LFIA-based detection of other viruses. Such minute advances in machine learning and artificial intelligence can be leveraged on many different scales and at various levels to revolutionize the health sector. 
The motivating purpose of Task 2 is to design a highly accurate instance segmentation network architecture not only for the analysis of SARS-CoV-2 infected cells but also one that yields the highest possible segmentation accuracy for all applications in biomedical sciences. For example, the designed network architecture can be utilized to analyze macrophages, stem cells, and other types of cells. Task 3 focuses on conducting studies that were previously considered computationally impossible. The invention will assist medical researchers and dentists in automatically calculating alveolar crest height (ACH) in teeth using over 500 dental X-rays. This will help determine if patients diagnosed with COVID-19 by a positive PCR test exhibited more alveolar bone loss and had greater bone loss in the two years preceding their COVID-positive test when compared to a control group without a positive COVID-19 test. The contraction of periodontal disease results in higher levels of transmembrane serine protease 2 (TMPRSS2) within the buccal cavity, which is instrumental in enabling the entry of SARS-CoV-2. Gum inflammation, a symptom of periodontal disease, can lead to alterations in the ACH of teeth within the oral mucosa. Through this innovation, we can calculate ACHs of various teeth and, therefore, determine the correlation between ACH and the risk of contracting SARS-CoV-2 infection. Without the invention, extensive manpower and time would be required to make such calculations and gather data for further research into the effects of SARS-CoV-2 infection, as well as other related biological phenomena within the human body. Furthermore, the novel network framework can be modified and used to calculate dental caries and other periodontal diseases of interest.
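The faint-test-line ambiguity the abstract raises can be stated quantitatively. The sketch below is a deliberately simple intensity-ratio baseline, not the thesis's deep-learning classifier: it compares the darkness of a grayscale LFIA strip's test-line band to its control-line band, with the band positions and decision threshold as assumed parameters.

```python
import numpy as np

def band_signal(strip, rows):
    """Signal of a band = background brightness minus the band's mean
    intensity (darker band -> more captured analyte)."""
    background = np.median(strip)
    return background - strip[rows[0]:rows[1]].mean()

def classify(strip, ctrl_rows, test_rows, thresh=0.1):
    """'positive' if the test band carries at least `thresh` of the
    control band's signal; 'invalid' if the control band is absent."""
    ctrl = band_signal(strip, ctrl_rows)
    if ctrl <= 0:
        return 'invalid'
    return 'positive' if band_signal(strip, test_rows) / ctrl >= thresh \
        else 'negative'
```

A learned classifier replaces the fixed threshold and hand-picked band locations with features calibrated against sensitivity markers, which is where the consistency gain over visual reading comes from.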
650

Detecting informal buildings from high resolution quickbird satellite image, an application for insitu [sic.] upgrading of informal setellement [sic.] for Manzese area - Dar es Salaam, Tanzania.

Ezekia, Ibrahim S. K. January 2005 (has links)
Documentation and formalization of informal settlements ("in situ", i.e. while people continue to live in the settlement) needs an appropriate mapping and registration system for real property that can finally lead to integrating an informal city into the formal city. For many years, extraction of geospatial data for informal settlement upgrading has relied on conventional mapping, which included manual plotting from aerial photographs and the use of classical surveying methods that have proved slow because of manual operation, very expensive, and demanding of well-trained personnel. The use of high-resolution satellite imagery such as QuickBird together with GIS tools has recently been gaining popularity in various aspects of urban mapping and planning, thereby opening up new opportunities for efficient management of the rapidly changing environment of informal settlements. This study was based on the Manzese informal area in the city of Dar es Salaam, Tanzania, for which the Ministry of Lands and Human Settlement Development is committed to developing strategic information and decision-making tools for upgrading informal areas using a digital database, orthophotos, and QuickBird satellite imagery. A simple prototype approach developed in this study, namely automatic detection and extraction of informal buildings and other urban features, is envisaged to simplify and speed up the process of land cover mapping that can be used by various governmental and private segments of our society. The proposed method first tests the utility of high-resolution QuickBird satellite imagery to classify the detailed 11 classes of informal buildings and other urban features using different image classification methods (the box, maximum-likelihood, and minimum-distance classifiers), followed by segmentation and finally editing of feature outlines. The overall mapping accuracy achieved for the detailed classification of urban land cover was 83%.
The output demonstrates the potential application of the proposed approach for urban feature extraction and updating. The study's constraints and recommendations for future work are also discussed. / Thesis (M.Env.Dev.)-University of KwaZulu-Natal, Pietermaritzburg, 2005.
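Of the classifiers the study compares, the minimum-distance rule is the simplest to sketch: assign each pixel's spectral feature vector to the class whose training mean is nearest. The two-class feature data below are synthetic, and real land-cover work would use the multispectral bands of the QuickBird scene:

```python
import numpy as np

def fit_means(X, y):
    """Per-class mean feature vectors from labeled training pixels.

    X: (N, D) feature array (e.g. spectral bands), y: (N,) class labels.
    """
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def min_distance_classify(X, classes, means):
    """Assign each sample to the class with the nearest mean
    (Euclidean distance in feature space)."""
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]
```

The maximum-likelihood classifier generalizes this by also modeling each class's covariance, which is why it usually outperforms the minimum-distance rule on spectrally overlapping urban classes.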
