241

Suivi longitudinal des endoprothèses coronaires par analyse de séquences d'images de tomographie par cohérence optique. / Longitudinal follow-up of coronary stents by optical coherence tomography image sequence analysis.

Menguy, Pierre-Yves 19 December 2016 (has links)
This thesis deals with the segmentation and characterization of coronary arteries and stents in Optical Coherence Tomography (OCT) imaging. OCT is a very high-resolution modality that can resolve fine structures such as the intimal layer of the vascular wall and the stent wires (struts). The objective of this thesis is to propose software tools for the automatic analysis of an examination, with a runtime compatible with intraoperative use. This work follows Dubuisson's thesis on OCT, which proposed a first formalism for lumen segmentation and strut detection in metallic stents. We revisited the processing pipeline for these two problems and proposed a preliminary method for detecting bioresorbable polymer stents. A surface model of the stent made it possible to estimate a series of clinical indices from the diameters, areas and volumes measured on each slice or over the entire examination. Stent apposition against the vessel wall is also measured and visualized in 3D with an intuitive color scale. The arterial lumen is delineated using a Fast Marching shortest-path algorithm, whose originality is to exploit the image in the native helical form of the acquisition. For the detection of metallic stents, local intensity maxima followed by a shadow zone are detected and characterized by a vector of attributes computed in their neighborhood (relative value of the maximum, gray-level slope, symmetry, ...). Peaks corresponding to struts are discriminated from the surrounding speckle by a logistic regression step trained on a ground truth built by an expert. A probability that a peak belongs to a strut is derived from the resulting combination of attributes. The originality of the method lies in fusing the probabilities of nearby elements before applying a decision criterion, itself also determined from the ground truth. The method was evaluated on a database of 14 complete examinations, both at the pixel level and at the level of detected struts. We also exhaustively validated a non-rigid registration method for OCT images, using landmarks paired by an expert on the source and target examinations. The goal of this registration is to compare examinations slice by slice, and indices computed at the same positions, at different acquisition times. The reliability of the deformation model was evaluated on a corpus of forty-four pairs of OCT examinations using leave-one-out cross-validation.
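The peak-classification and probability-fusion steps described above can be sketched in a few lines. The attribute weights, the neighbourhood radius and the mean-fusion rule below are illustrative assumptions, not the trained values or decision criterion from the thesis:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def peak_probability(attributes, weights, bias=0.0):
    # Logistic-regression probability that a peak is a strut, from its
    # attribute vector (relative maximum, gray-level slope, symmetry, ...).
    # Real weights would come from training on the expert ground truth;
    # here they are free parameters.
    z = bias + sum(w * a for w, a in zip(weights, attributes))
    return sigmoid(z)

def fuse_and_decide(peaks, radius, threshold):
    # `peaks` is a list of (position, probability) pairs. Each peak's
    # probability is fused (simple mean, an assumed rule) with those of
    # peaks within `radius`, then thresholded.
    decisions = []
    for pos, _ in peaks:
        neighbours = [p for qpos, p in peaks if abs(qpos - pos) <= radius]
        fused = sum(neighbours) / len(neighbours)
        decisions.append((pos, fused >= threshold))
    return decisions
```

Fusing before thresholding lets an isolated noisy peak be rejected even when its own probability is borderline, while clustered struts reinforce each other.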
242

Modélisation de la croissance de tumeurs cérébrales : application à la radiothérapie / Brain tumor growth modeling : application to radiotherapy

Lê, Matthieu 23 June 2016 (has links)
Glioblastoma is among the most common and aggressive primary brain tumors. It is usually treated with a combination of surgical resection followed by concurrent chemotherapy and radiotherapy. However, the infiltrative nature of the tumor makes its control particularly challenging. Personalizing biophysical models makes it possible to automatically define patient-specific therapy plans that maximize survival. In this thesis, we developed tools to personalize radiotherapy planning for glioblastoma. First, we studied the impact of taking the vasogenic edema into account in the planning; our retrospective study is based on a database of patients treated with an anti-angiogenic drug, which reveals the extent of the edema a posteriori. Second, we studied the relationship between the uncertainty in the tumor segmentation and the dose distribution; for this, we devised a method to efficiently sample multiple plausible segmentations from a single clinical one. Third, we personalized a tumor growth model to the MR images of seven patients, using a Bayesian approach that also estimates the uncertainty of the personalized parameters. Finally, we showed how this personalization can automatically define the dose to prescribe to the patient, by combining the tumor growth model with a model of response to the delivered dose. These promising results open new perspectives for personalized radiotherapy of brain tumors.
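The abstract does not give the growth model's equations; reaction-diffusion models of the Fisher-KPP type are the standard choice for glioma growth modeling, so a minimal 1-D sketch of one explicit time step might look like the following (the explicit Euler scheme and the zero-flux boundary handling are assumptions for illustration):

```python
def fisher_kpp_step(u, D, rho, dx, dt):
    # One explicit Euler step of du/dt = D * u'' + rho * u * (1 - u)
    # on a 1-D grid with zero-flux (Neumann) boundaries. `u` is the
    # normalized tumor cell density; D is diffusivity, rho the
    # proliferation rate. Personalization would fit D and rho to a
    # patient's MR images.
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i]
        right = u[i + 1] if i < n - 1 else u[i]
        lap = (left - 2.0 * u[i] + right) / dx ** 2
        new[i] = u[i] + dt * (D * lap + rho * u[i] * (1.0 - u[i]))
    return new
```

In a Bayesian personalization, parameters such as D and rho would be sampled and the simulated density compared against the observed tumor extent.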
243

Development of Dose Verification Detectors Towards Improving Proton Therapy Outcomes

January 2019 (has links)
abstract: The challenge of radiation therapy is to maximize the dose to the tumor while simultaneously minimizing the dose elsewhere. Proton therapy is well suited to this challenge because of the way protons slow down in matter: as a proton slows, its rate of energy loss per unit path length continuously increases, producing a sharp dose peak near the end of its range. Unlike conventional radiation therapy, protons stop inside the patient, sparing tissue beyond the tumor. Proton therapy should therefore be superior to existing modalities; however, because protons stop inside the patient, there is uncertainty in the range. This "range uncertainty" causes doctors to take a conservative approach in treatment planning, counteracting the advantages offered by proton therapy and preventing it from reaching its full potential. A new method of delivering protons, pencil-beam scanning (PBS), has become the standard of treatment over the past few years. PBS uses magnets to raster-scan a thin proton beam across the tumor at discrete locations, using many discrete pulses of typically 10 ms duration each; the depth is controlled by changing the beam energy. The discretization in time of the proton delivery allows for new methods of dose verification, yet few devices have been developed that can meet the bandwidth demands of PBS. In this work, two devices have been developed to perform dose verification and monitoring with an emphasis on fast response times. Measurements were performed at the Mayo Clinic. One detector addresses range uncertainty by measuring prompt gamma rays emitted during treatment; it is able to measure the proton range in vivo to within 1.1 mm at depths up to 11 cm in less than 500 ms, and up to 7.5 cm in less than 200 ms. A beam fluence detector presented in this work is able to measure the position and shape of each beam spot.
It is hoped that this work may lead to a further maturation of detection techniques in proton therapy, helping the treatment to reach its full potential to improve the outcomes in patients. / Dissertation/Thesis / Doctoral Dissertation Physics 2019
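The energy-range relationship underlying the "range uncertainty" problem is often approximated by the empirical Bragg-Kleeman rule. A sketch follows; the fit constants are the commonly cited values for protons in water, an assumption for illustration and not numbers from this thesis:

```python
def proton_range_cm(energy_mev, alpha=0.0022, p=1.77):
    # Bragg-Kleeman rule R = alpha * E^p for the mean proton range in
    # water (E in MeV, R in cm). alpha and p are empirical fit
    # constants; the defaults are a widely used parameterization.
    return alpha * energy_mev ** p
```

Deeper targets need higher beam energies (PBS selects depth by changing the energy), which is why range verification at clinically relevant depths of roughly 7-11 cm matters.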
244

Applying a Novel Integrated Persistent Feature to Understand Topographical Network Connectivity in Older Adults with Autism Spectrum Disorder

January 2019 (has links)
abstract: Autism spectrum disorder (ASD) is a developmental neuropsychiatric condition with early childhood onset, so most research has focused on characterizing brain function in young individuals. Little is understood about brain function differences in middle-aged and older adults with ASD, despite evidence of persistent and worsening cognitive symptoms. Functional magnetic resonance imaging (fMRI) studies in younger persons with ASD demonstrate that large-scale brain networks containing the prefrontal cortex are affected. A novel, threshold-selection-free graph theory metric is proposed as a more robust and sensitive method for tracking brain aging in ASD and is compared against five well-accepted graph theoretical analysis methods in older men with ASD and matched neurotypical (NT) participants. Participants were 27 men with ASD (52 ± 8.4 years) and 21 NT men (49.7 ± 6.5 years). Resting-state functional MRI (rs-fMRI) scans were collected for six minutes (repetition time = 3 s) with eyes closed. Data were preprocessed in SPM12, and the Data Processing Assistant for Resting-State fMRI (DPARSF) was used to extract 116 regions of interest defined by the automated anatomical labeling (AAL) atlas. AAL regions were separated into six large-scale brain networks. The proposed metric is the slope of a monotonically decreasing convergence function (Integrated Persistent Feature, IPF; Slope of the IPF, SIP). Results were analyzed in SPSS using ANCOVA, with IQ as a covariate. A reduced SIP was found in older men with ASD, compared to NT men, in the Default Mode Network [F(1,47) = 6.48; p = 0.02; η² = 0.13] and the Executive Network [F(1,47) = 4.40; p = 0.04; η² = 0.09], with a trend in the Fronto-Parietal Network [F(1,47) = 3.36; p = 0.07; η² = 0.07]. There were no differences in the non-prefrontal networks (sensorimotor, auditory, and medial visual networks). The only other graph theory metric to reach significance was network diameter in the Default Mode Network [F(1,47) = 4.31; p = 0.04; η² = 0.09]; however, the effect size for the SIP was stronger. Modularity, Betti number, characteristic path length, and eigenvector centrality were all non-significant. These results provide empirical evidence of decreased functional network integration in prefrontal networks of older adults with ASD and propose a useful biomarker for tracking the prognosis of aging adults with ASD, to enable more informed treatment, support, and care for this growing population. / Dissertation/Thesis / Masters Thesis Biomedical Engineering 2019
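Two of the conventional graph metrics compared above, characteristic path length and network diameter, can be computed for an unweighted graph with plain breadth-first search. A minimal sketch (a binary, connected graph is assumed here, unlike the thresholded functional connectivity matrices in the study):

```python
from collections import deque

def shortest_path_lengths(adj, source):
    # Breadth-first search over an unweighted graph given as a
    # dict {node: set_of_neighbours}; returns hop counts from source.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def characteristic_path_length_and_diameter(adj):
    # Characteristic path length: mean shortest-path length over all
    # ordered node pairs. Diameter: the longest shortest path.
    # Assumes the graph is connected.
    lengths = []
    for s in adj:
        d = shortest_path_lengths(adj, s)
        lengths.extend(d[t] for t in adj if t != s)
    return sum(lengths) / len(lengths), max(lengths)
```

Both metrics depend on how the connectivity matrix was thresholded, which is precisely the sensitivity the threshold-selection-free SIP metric is designed to avoid.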
245

Reducering av stråldos vid angiografi/intervention / Reduction of radiation dose in angiography/intervention

Hermansson, Adriana, Hjelm, Elvira January 2018 (has links)
Background: The development of fluoroscopy-guided interventions in recent decades has been successful in treating patients. However, the introduction of fluoroscopy carries risks for both patients and staff, because radiation doses have increased in line with a greater demand for interventions and the development of more advanced procedures. Since the radiation may cause long-term biological effects, radiation protection is an important and recurring subject, as it affects healthcare staff, patients and society. Lack of radiation protection and negligence among operators increase the risk of radiation-induced damage. As the number of interventions constantly grows, healthcare staff also need greater awareness of the risks radiation may entail and how they can be avoided. Aim: To describe which measures can be taken to reduce the radiation dose to staff and patients during angiography/intervention. Method: A systematic literature review. Results: The compiled results present several radiation protection measures for reducing the dose during intervention/angiography. Limiting fluoroscopy time, modern technology, updated device settings and lead shielding have all been shown to reduce the radiation dose to patients and staff markedly. Education and feedback are also an important part of radiation safety work, since they form the basis for compliance with radiation protection guidelines. Conclusion: Alternative radiation protection measures do exist for interventional radiology examinations and procedures. A combination of these measures would clearly be optimal for increased safety of both staff and patients exposed to radiation.
246

Generative Adversarial Networks for Lupus Diagnostics

Pradeep Periasamy (7242737) 16 October 2019 (has links)
The recent boom of machine learning network architectures such as Generative Adversarial Networks (GAN), Deep Convolutional Generative Adversarial Networks (DCGAN), Self-Attention Generative Adversarial Networks (SAGAN) and Context-Conditional Generative Adversarial Networks (CCGAN), together with the development of high-performance computing for big data analysis, has the potential to be highly beneficial in many domains, and fittingly in the early detection of chronic diseases. The clinical heterogeneity of one such chronic autoimmune disease, Systemic Lupus Erythematosus (SLE), also known as Lupus, makes medical diagnosis difficult. One major concern is the limited dataset available for diagnostics. In this research, we demonstrate the application of Generative Adversarial Networks for data augmentation and for improving the error rates of Convolutional Neural Networks (CNN). A limited Lupus dataset of 30 typical 'butterfly rash' images is used as a model to decrease the error rates of a widely accepted CNN architecture, LeNet. For the Lupus dataset, a 73.22% decrease in the error rate of LeNet is observed, so such an approach can be extended to more recent neural network classifiers like ResNet. Additionally, a human perceptual study reveals that 45 Amazon MTurk participants, identified as 'healthcare professionals' on the platform, judged the artificial images generated by CCGAN to resemble real Lupus images more closely than those generated by SAGAN and DCGAN. This research aims to help reduce the time to detection and treatment of Lupus, which usually takes 6 to 9 months from its onset.
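The reported 73.22% figure is a relative decrease in error rate. With illustrative, made-up numbers (the abstract does not state the underlying error rates), the computation is simply:

```python
def relative_error_reduction(baseline_error, augmented_error):
    # Percentage decrease in classification error relative to the
    # baseline, e.g. a CNN trained without vs. with GAN-augmented
    # data. Error rates are fractions in [0, 1].
    return 100.0 * (baseline_error - augmented_error) / baseline_error
```

For example, a baseline error of 0.5 falling to about 0.134 after augmentation corresponds to roughly the reported reduction.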
247

Medical Data Management on the cloud / Gestion de données médicales sur le cloud

Mohamad, Baraa 23 June 2015 (has links)
Medical data management has become a real challenge due to the emergence of new imaging technologies providing high image resolutions. This thesis focuses in particular on the management of DICOM files. DICOM is one of the most important medical standards. DICOM files have a special data format in which one file may contain regular data, multimedia data and services. These files are extremely heterogeneous (the schema of a file cannot be predicted) and large. The characteristics of DICOM files, added to the requirements of medical data management in general in terms of availability and accessibility, led us to formulate our research question as follows: is it possible to build a system that (1) is highly available, (2) supports any medical images (different specialties, modalities and physicians' practices), (3) can store extremely large and ever-increasing volumes of data, (4) provides expressive access and (5) is cost-effective? To answer this question we built a hybrid (row-column) cloud-enabled storage system. The idea of this solution is to disperse DICOM attributes thoughtfully, depending on their characteristics, over both data layouts, in a way that combines the best of the row-oriented and column-oriented storage models in one system, while exploiting features of the cloud that let us ensure the availability and portability of medical data. Storing data in such a hybrid layout opens the door to a second research question: how to process queries efficiently over this hybrid storage while enabling new and more efficient query plans. The originality of our proposal comes from the fact that there is currently no system that stores data in such a hybrid fashion (i.e., where an attribute resides either in a row-oriented database or in a column-oriented one, and a given query may interrogate both storage models at the same time) and studies query processing over it. The experimental prototypes implemented in this thesis show interesting results and open the door to multiple optimizations and further research questions.
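The dispersal idea — some DICOM attributes stored in a column layout, the rest in rows — can be caricatured in a few lines. The routing rule below (a fixed attribute list) is a placeholder for the thesis's actual characteristics-based dispersal criteria:

```python
class HybridStore:
    # Toy sketch of a hybrid row/column store. Attributes listed in
    # `column_attrs` go to a column-oriented layout (one list per
    # attribute, good for analytics over one field across many files);
    # everything else stays row-oriented (one dict per file, good for
    # retrieving a whole record).
    def __init__(self, column_attrs):
        self.column_attrs = set(column_attrs)
        self.rows = []       # row store: list of per-file dicts
        self.columns = {}    # column store: attribute -> list of values

    def insert(self, dicom_attrs):
        row = {}
        for name, value in dicom_attrs.items():
            if name in self.column_attrs:
                self.columns.setdefault(name, []).append(value)
            else:
                row[name] = value
        self.rows.append(row)

    def column(self, name):
        return self.columns.get(name, [])
```

A query planner over such a store could scan `columns` for selective predicates and fall back to `rows` for full-record retrieval, which is the kind of mixed plan the thesis studies.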
248

Development and verification of medical image analysis tools within the 3D Slicer environment

Forbes, Jessica LeeAnn 01 May 2016 (has links)
Rapid development of domain specialized medical imaging tools is essential for deploying medical imaging technologies to advance clinical research and clinical practice. This work describes the development process, deployment method, and evaluation of modules constructed within the 3D Slicer environment. These tools address critical problems encountered in four different clinical domains: quality control review of large repositories of medical images, rule-based automated label map cleaning, quantification of calcification in the heart using low-dose radiation scanning, and waist circumference measurement from abdominal scans. Each of these modules enables and accelerates clinical research by incorporating medical imaging technologies that minimize manual human effort. They are distributed within the multi-platform 3D Slicer Extension Manager environment for use in the computational environment most convenient to the clinician scientist.
249

The development of a DICOM import software and Modality Calculators for Radiology Protocols

Chen, Jiawen 01 August 2015 (has links)
Medical imaging can involve different modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasound. Each examination generated by these modalities has a set of unique instructions, the imaging protocol: the imaging parameters that determine the signal and contrast for creating a particular medical image. Properly managing imaging protocols is like systematically maintaining an instruction handbook; it guarantees the quality of scans by providing radiologists and technicians with appropriate procedures for a given indication, and it ensures patients' safety by reducing repeated scans to acquire the desired image information. Radiology Protocols (RP) is a company that provides an online medical protocol database to improve protocol management. It recently developed RP Import, an imaging protocol import tool, to automatically collect elements from medical imaging files in the Digital Imaging and Communications in Medicine (DICOM) format and map them to the specified protocol database. With the help of RP Import, protocol creation is much faster and eliminates manual definition of the parameters; this tool thus lays the foundation for further development of imaging protocol management. Radiology Protocols also developed a series of Modality Calculators to assist radiologists and technicians in building or modifying imaging protocols as needed. These calculators cover most of the essential parameter calculations associated with the different modalities, making it more convenient to compute specific parameters while editing protocols and more precise to determine the use and amount of certain medical parameters. In summary, RP Import and the Modality Calculators are two meaningful tools in protocol management, and they play important roles in regulating procedure and dosage in medical imaging practice.
250

Boundary-constrained inverse consistent image registration and its applications

Kumar, Dinesh 01 May 2011 (has links)
This dissertation presents a new inverse consistent image registration (ICIR) method called boundary-constrained inverse consistent image registration (BICIR). ICIR algorithms jointly estimate the forward and reverse transformations between two images while minimizing the inverse consistency error (ICE). The ICE at a point is defined as the distance between the starting and ending location of a point mapped through the forward transformation and then the reverse transformation. The novelty of the BICIR method is that a region of interest (ROI) in one image is registered with its corresponding ROI. This is accomplished by first registering the boundaries of the ROIs and then matching the interiors of the ROIs using intensity registration. The advantages of this approach include providing better registration at the boundary of the ROI, eliminating registration errors caused by registering regions outside the ROI, and theoretically minimizing computation time since only the ROIs are registered. The first step of the BICIR algorithm is to inverse consistently register the boundaries of the ROIs. The resulting forward and reverse boundary transformations are extended to the entire ROI domains using the Element Free Galerkin Method (EFGM). The transformations produced by the EFGM are then made inverse consistent by iteratively minimizing the ICE. These transformations are used as initial conditions for inverse-consistent intensity-based registration of the ROI interiors. Weighted extended B-splines (WEB-splines) are used to parameterize the transformations. WEB-splines are used instead of B-splines since WEB-splines can be defined over an arbitrarily shaped ROI. Results are presented showing that the BICIR method provides better registration of 2D and 3D anatomical images than the small-deformation, inverse-consistent, linear-elastic (SICLE) image registration algorithm which registers entire images. 
Specifically, the BICIR method produced registration results with lower similarity cost, reduced boundary matching error, increased ROI relative overlap, and lower inverse consistency error than the SICLE algorithm.
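The inverse consistency error that the method above minimizes is easy to state in code. A 1-D sketch follows (real registrations use 2-D/3-D displacement fields, and the thesis reduces this quantity iteratively rather than just measuring it):

```python
def inverse_consistency_error(forward, reverse, points):
    # Mean distance between each point and its image under the forward
    # transformation followed by the reverse transformation. The ICE
    # is zero when the two transformations are exact inverses of each
    # other, which is what inverse-consistent registration enforces.
    errors = [abs(reverse(forward(x)) - x) for x in points]
    return sum(errors) / len(errors)
```

For example, a forward shift of +1 paired with a reverse shift of -1 has zero ICE, while any mismatch between the two maps shows up directly as a nonzero mean error.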
