1. Biomedical image computing: the development and application of mathematical and computational models. Graham, James, January 2016.
Biomedical images contain a great deal of information that is useful and a great deal that is not. Computational analysis and interpretation of biomedical images involves extracting some or all of the useful information; the useless information takes the form of unwanted clutter or noise that can obscure the useful content or inhibit interpretation. Various mathematical and computational processes may be applied to reduce the effects of noise and distracting content. The most successful approaches use mathematical or computational models that express the properties of the required information, and interpretation then consists of finding objects or structures in the image that match the properties of the model. This dissertation describes the development and application of different models required for the interpretation of a variety of image types arising from clinical medicine and biomedical research. These include:
* neural network models,
* Point Distribution Models and the associated Active Shape Models, which have become part of the research toolkit of many academic and commercial organisations,
* models of the appearance of nerve fibres in noisy confocal microscope images,
* models of pose changes in carpal bones during wrist motion.
A number of different application problems are described, in which variants of these methods have been developed and used:
* cytogenetics,
* proteomics,
* assessing bone quality,
* segmentation of magnetic resonance images,
* measuring nerve fibres,
* inferring 3D motion from 2D cinefluoroscopy sequences.
The methods and applications represented here encompass the progression of biomedical image analysis from early developments, when computational power became adequate to the challenges posed by biomedical image data, to recent, highly computationally-intensive methods.
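As a concrete illustration of the shape-model idea behind Point Distribution Models, the following is a minimal sketch that builds a PDM from a set of already Procrustes-aligned 2D landmark shapes via principal component analysis; the array layout, retained-variance threshold, and toy data are illustrative assumptions, not details taken from the dissertation.

```python
import numpy as np

def build_point_distribution_model(shapes, variance_kept=0.98):
    """Build a PDM from aligned landmark shapes.

    shapes : array of shape (n_shapes, n_landmarks, 2), already Procrustes-aligned.
    Returns the mean shape, the principal modes of variation, and their variances.
    """
    n_shapes, n_landmarks, _ = shapes.shape
    X = shapes.reshape(n_shapes, -1)          # flatten each shape to a vector
    mean_shape = X.mean(axis=0)
    Xc = X - mean_shape                       # centre the data
    # Eigen-decomposition of the covariance via SVD of the centred data matrix
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    variances = (s ** 2) / (n_shapes - 1)
    # Keep enough modes to explain the requested fraction of total variance
    cum = np.cumsum(variances) / variances.sum()
    t = int(np.searchsorted(cum, variance_kept)) + 1
    return mean_shape, Vt[:t], variances[:t]

def synthesize_shape(mean_shape, modes, b):
    """Generate a plausible shape x = mean + P b from mode weights b."""
    return (mean_shape + b @ modes).reshape(-1, 2)

# Toy usage: 20 noisy variants of a 10-landmark "circle" shape
angles = np.linspace(0, 2 * np.pi, 10, endpoint=False)
base = np.stack([np.cos(angles), np.sin(angles)], axis=1)
shapes = base + np.random.default_rng(0).normal(0, 0.05, size=(20, 10, 2))
mean_shape, modes, variances = build_point_distribution_model(shapes)
print(modes.shape, variances.round(4))
```

A fitted Active Shape Model would additionally iterate between an image search at each landmark and projection of the candidate shape onto these modes.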
2. Extraction de caractéristiques et apprentissage statistique pour l'imagerie biomédicale cellulaire et tissulaire / Feature extraction and machine learning for cell and tissue biomedical imaging. Zubiolo, Alexis, 11 December 2015.
The purpose of this Ph.D. thesis is to study the classification of cells and tissues in biomedical images based on morphological features. The goal is to help medical doctors and biologists better understand the laws governing certain biological phenomena. The work is divided into three main parts, corresponding to the three typical biomedical imaging problems tackled. The first part analyzes endomicroscopic videos of the colon, in which the pathological class of the observed polyps has to be determined automatically. This task is performed using a supervised multiclass machine learning algorithm combining support vector machines with graph theory tools. The second part concerns the study of the morphology of mouse neurons imaged by fluorescence confocal microscopy. In order to obtain rich information, the neurons are observed at two magnifications: a higher one in which the cell bodies appear in detail, and a lower one showing the whole cortex, including the apical dendrites in their entirety. From these images, morphological descriptors of the neurons are extracted automatically with a view to classification. The last part concerns the multi-scale processing of digital histology images in the context of kidney cancer. The vascular network is extracted and modeled as a graph to establish a link between the vascular architecture of the tumor and its pathological class.
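As a rough sketch of the supervised multiclass classification step described above, the snippet below trains a multiclass support vector machine on precomputed morphological descriptors using scikit-learn; the feature values and class labels are synthetic placeholders, and the graph-theoretic component of the thesis's method is not reproduced here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative morphological descriptors (e.g. area, perimeter, eccentricity)
# extracted beforehand for each cell or polyp; labels are pathological classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))          # 120 objects, 6 morphological features
y = rng.integers(0, 3, size=120)       # 3 illustrative pathological classes

# One-vs-one multiclass SVM with an RBF kernel, features standardised first
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```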
3. A Hierarchical Image Processing Approach for Diagnostic Analysis of Microcirculation Videos. Mirshahi, Nazanin, 08 December 2011.
Knowledge of the microcirculatory system has added significant value to the analysis of tissue oxygenation and perfusion. While developments in videomicroscopy technology have enabled medical researchers and physicians to observe the microvascular system, the available software tools are limited in their ability to determine quantitative features of microcirculation, either automatically or accurately. In particular, microvessel density has been a critical diagnostic measure in evaluating disease progression and a prognostic indicator in various clinical conditions. As a result, automated analysis of the microcirculatory system can be substantially beneficial in various real-time and off-line therapeutic medical applications, such as optimization of resuscitation. This study focuses on the development of an algorithm to automatically segment microvessels, calculate the density of capillaries in microcirculatory videos, and determine the distribution of blood circulation. The proposed technique is divided into four major steps: video stabilization, video enhancement, segmentation, and post-processing. The stabilization step estimates motion and corrects for motion artifacts using an appropriate motion model. Video enhancement improves the visual quality of video frames through preprocessing, vessel enhancement, and edge enhancement. The enhanced frames are combined through an adjusted weighted median filter, and the combined frame is then thresholded using an entropic thresholding technique. Finally, a region growing technique is used to correct for discontinuities in blood vessels. From the final binary results, the most commonly used measure for the assessment of microcirculation, Functional Capillary Density (FCD), is calculated. The technique is applied to video recordings of healthy and diseased human and animal samples obtained with a MicroScan device based on the Sidestream Dark Field (SDF) imaging modality. To validate the final results, the calculated FCD values are compared with the results obtained by blind, detailed inspection by three medical experts using the AVA (Automated Vascular Analysis) semi-automated microcirculation analysis software. Since there is neither a fully automated, accurate microcirculation analysis program nor a publicly available annotated database of microcirculation videos, the results acquired by the experts are considered the gold standard. Bland-Altman plots show good agreement between the results of the algorithm and the gold standard. In summary, the main objectives of this study are to eliminate the need for human interaction to edit or correct results, to improve the accuracy of stabilization and segmentation, and to reduce the overall computation time. The proposed methodology impacts the field of computer science through the development of image processing techniques to discover knowledge in grayscale video frames. The broader impact of this work is to assist physicians, medical researchers, and caregivers in making diagnostic and therapeutic decisions for microcirculatory abnormalities and in studying the human microcirculation.
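As a sketch of how a density measure such as FCD can be computed from the final binary segmentation, the snippet below approximates it as vessel centreline length per unit image area using a morphological skeleton; the pixel-size parameter, the length approximation, and the toy mask are assumptions rather than the exact definition used in this study.

```python
import numpy as np
from skimage.morphology import skeletonize

def functional_capillary_density(binary_vessels, pixel_size_um=1.0):
    """Approximate FCD as total vessel centreline length per image area.

    binary_vessels : 2D boolean array, True where a perfused vessel was segmented.
    pixel_size_um  : physical size of one pixel edge in micrometres (assumed).
    Returns capillary length per area in um / um^2 (i.e. 1/um).
    """
    skeleton = skeletonize(binary_vessels)
    # Approximate centreline length by the number of skeleton pixels
    vessel_length_um = skeleton.sum() * pixel_size_um
    image_area_um2 = binary_vessels.size * pixel_size_um ** 2
    return vessel_length_um / image_area_um2

# Usage with a toy binary mask containing one horizontal "capillary"
mask = np.zeros((64, 64), dtype=bool)
mask[30:34, 5:60] = True
print(f"FCD ~ {functional_capillary_density(mask, pixel_size_um=1.5):.4f} 1/um")
```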
4. Biomedical Image Segmentation and Object Detection Using Deep Convolutional Neural Networks. Liming Wu, 11 June 2019.
Quick and accurate segmentation and object detection in biomedical images is the starting point of most disease analysis and of understanding biological processes in medical research. It can enhance drug development and advance medical treatment, especially for cancer-related diseases. However, identifying and labeling the objects in CT or MRI images usually takes time, even for an experienced person, and there is currently no automatic detection technique for nucleus identification, pneumonia detection, or fetal brain segmentation. Fortunately, with the successful application of artificial intelligence (AI) to image processing, many challenging tasks can be solved with deep convolutional neural networks. In light of this, this thesis implements deep learning based object detection and segmentation methods to perform nucleus segmentation, lung segmentation, pneumonia detection, and fetal brain segmentation. Semantic segmentation is achieved by a customized U-Net model, and instance localization is achieved by Faster R-CNN. U-Net was chosen because such a network can be trained end-to-end: its architecture is simple, straightforward, and fast to train. In addition, the availability of data for this project is limited, which makes U-Net a more suitable choice. Faster R-CNN was implemented to achieve object localization. Finally, the performance of the two models is evaluated and their respective pros and cons are compared. The preliminary results show that the deep learning based techniques outperform existing traditional segmentation algorithms.
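To illustrate the kind of encoder-decoder architecture referred to above, the following is a minimal two-level U-Net-style network in PyTorch with a single skip connection; the depth, channel counts, and input size are illustrative and not the customized configuration used in the thesis.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """A two-level U-Net: one downsampling step, one upsampling step, one skip."""
    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        self.enc = conv_block(in_ch, base)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base, base * 2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = conv_block(base * 2, base)      # skip concatenation doubles channels
        self.head = nn.Conv2d(base, out_ch, 1)     # per-pixel class logits

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))
        return self.head(d)

# One forward pass on a dummy grayscale image
logits = TinyUNet()(torch.randn(1, 1, 64, 64))
print(logits.shape)   # torch.Size([1, 1, 64, 64])
```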
5. Quantitative Phenotyping in Tissue Microenvironments. Singh, Shantanu, 29 July 2011.
No description available.
6. Microscopy Image Registration, Synthesis and Segmentation. Chichen Fu, 10 June 2019.
Fluorescence microscopy has emerged as a powerful tool for studying cell biology because it enables the acquisition of 3D image volumes deeper into tissue and the imaging of complex subcellular structures. Fluorescence microscopy images are frequently distorted by motion resulting from animal respiration and heartbeat, which complicates the quantitative analysis of biological structures needed to characterize the structure and constituency of tissue volumes. This thesis describes a two-pronged approach to quantitative analysis consisting of non-rigid registration and deep convolutional neural network segmentation. The proposed image registration method is capable of correcting motion artifacts in three-dimensional fluorescence microscopy images collected over time. In particular, our method uses 3D B-spline based non-rigid registration with a coarse-to-fine strategy to register stacks of images collected at different time intervals, and 4D rigid registration to register 3D volumes over time. The results show that the proposed method can correct global motion artifacts of sample tissues in four-dimensional space, thereby revealing the motility of individual cells in the tissue.

We also describe in this thesis nuclei segmentation methods using deep convolutional neural networks, data augmentation to generate training images of different shapes and contrasts, a refinement process combining segmentation results from the horizontal, frontal, and sagittal planes of a volume, and a watershed technique to enumerate the nuclei. Our results indicate that, compared to 3D ground truth data, our method can successfully segment and count 3D nuclei. Furthermore, a microscopy image synthesis method based on spatially constrained cycle-consistent adversarial networks is used to efficiently generate training data. A 3D modified U-Net is trained with a combination of Dice loss and binary cross-entropy to achieve accurate nuclei segmentation, and a multi-task U-Net is utilized to resolve overlapping nuclei. This method achieves high accuracy in both object-based and voxel-based evaluations.
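As an illustration of the combined loss mentioned above, the snippet below sketches a Dice plus binary cross-entropy objective in PyTorch; the smoothing term and mixing weight are assumed values, not those used in the thesis.

```python
import torch
import torch.nn.functional as F

def dice_bce_loss(logits, targets, smooth=1.0, bce_weight=0.5):
    """Combined Dice + binary cross-entropy loss for voxel-wise segmentation.

    logits  : raw network outputs of any shape.
    targets : binary ground-truth labels of the same shape (as floats).
    bce_weight is an assumed mixing coefficient, not the thesis's value.
    """
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum()
    dice = (2.0 * intersection + smooth) / (probs.sum() + targets.sum() + smooth)
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    return bce_weight * bce + (1.0 - bce_weight) * (1.0 - dice)

# Toy usage on a random 3D volume
logits = torch.randn(1, 1, 8, 32, 32, requires_grad=True)
targets = (torch.rand(1, 1, 8, 32, 32) > 0.5).float()
loss = dice_bce_loss(logits, targets)
loss.backward()
print(float(loss))
```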
7. Machine learning for blob detection in high-resolution 3D microscopy images. Ter Haak, Martin, January 2018.
The aim of blob detection is to find regions in a digital image that differ from their surroundings with respect to properties like intensity or shape. Bio-image analysis is a common application, where blobs can denote regions of interest that have been stained with a fluorescent dye. In image-based in situ sequencing of ribonucleic acid (RNA), for example, the blobs are local intensity maxima (i.e. bright spots) corresponding to the locations of specific RNA nucleobases in cells. Traditional methods of blob detection rely on simple image processing steps that must be guided by the user. The problem is that the user must seek the optimal parameters for each step, which are often specific to that image and cannot be generalised to other images. Moreover, some of the existing tools are not suitable for the scale of the microscopy images, which are often in very high resolution and 3D. Machine learning (ML) is a collection of techniques that give computers the ability to "learn" from data. To eliminate the dependence on user parameters, the idea is to apply ML to learn the definition of a blob from labelled images. The research question is therefore how ML can be used effectively to perform blob detection. A blob detector is proposed that first extracts a set of relevant and non-redundant image features, then classifies pixels as blobs, and finally uses a clustering algorithm to split up connected blobs. The detector works out-of-core, meaning it can process images that do not fit in memory, by dividing the images into chunks. The results prove the feasibility of this blob detector and show that it can compete with other popular software for blob detection. Unlike other tools, however, the proposed blob detector does not require parameter tuning, making it easier to use and more reliable.
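A minimal, self-contained sketch of the proposed three-stage pipeline (feature extraction, pixel classification, clustering) is given below on a synthetic image; the multi-scale Laplacian-of-Gaussian features, random forest classifier, DBSCAN parameters, and intensity-threshold stand-in for manual labels are all illustrative assumptions, not the detector's actual components.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic image with a few bright Gaussian blobs on a noisy background
img = rng.normal(0.0, 0.05, size=(128, 128))
yy, xx = np.mgrid[0:128, 0:128]
for cy, cx in [(30, 40), (80, 90), (100, 30)]:
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 3.0 ** 2))

# 1) Feature extraction: multi-scale Laplacian-of-Gaussian responses per pixel
scales = [1.0, 2.0, 4.0]
features = np.stack([-gaussian_laplace(img, s) * s ** 2 for s in scales], axis=-1)
X = features.reshape(-1, len(scales))

# 2) Pixel classification: train on (stand-in) labels; in practice these come
#    from manually annotated images rather than a simple intensity threshold
labels = (img > 0.5).astype(int).ravel()
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
blob_mask = clf.predict(X).reshape(img.shape).astype(bool)

# 3) Clustering: split the positive pixels into individual blobs
coords = np.argwhere(blob_mask)
clusters = DBSCAN(eps=2.0, min_samples=4).fit_predict(coords)
n_blobs = len(set(clusters)) - (1 if -1 in clusters else 0)
print(f"detected {n_blobs} blobs")
```

Out-of-core operation would wrap the same three stages around a chunked reader so that only one chunk of the image is held in memory at a time.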
8. Trénovatelné metody pro automatické zpracování biomedicínských obrazů / Trainable Methods for Automatic Biomedical Image Processing. Uher, Václav, January 2018.
This thesis deals with the possibilities of automatic segmentation of biomedical images. A deep learning method is proposed for 3D image segmentation. The work addresses the problems of network design, a memory optimization method, and the subsequent composition of the resulting image. The uniqueness of the method lies in processing 3D images on a GPU, in combination with augmentation of the training data and preservation of the original image size at the output. This is achieved by dividing the image into smaller overlapping parts and then folding the results back to the original size. The functionality of the method is verified on the segmentation of human brain tissue in magnetic resonance images, where it exceeds human accuracy in a specialist-versus-specialist comparison, and on cell segmentation in electron microscope slices of the Drosophila brain, where it surpasses the results published in an impacted journal paper.
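The overlap-and-fold-back strategy described above can be sketched as follows for a 3D volume; the tile size, overlap width, padding mode, and the stand-in thresholding "network" are illustrative assumptions rather than the thesis's implementation.

```python
import numpy as np

def segment_in_tiles(volume, segment_fn, tile=(64, 64, 64), overlap=8):
    """Apply `segment_fn` to overlapping 3D tiles and fold the results back.

    The volume is padded, cut into tiles that carry `overlap` voxels of extra
    context on every side, segmented tile by tile, and only each tile's central
    region is written back, so the output matches the input size.
    """
    pad = overlap
    padded = np.pad(volume, pad, mode="reflect")
    out = np.zeros_like(volume, dtype=np.float32)
    steps = [range(0, s, t) for s, t in zip(volume.shape, tile)]
    for z in steps[0]:
        for y in steps[1]:
            for x in steps[2]:
                # Tile with `overlap` voxels of context on every side
                chunk = padded[z:z + tile[0] + 2 * pad,
                               y:y + tile[1] + 2 * pad,
                               x:x + tile[2] + 2 * pad]
                pred = segment_fn(chunk)
                # Keep only the central, non-overlapping part of the prediction
                core = pred[pad:pad + tile[0], pad:pad + tile[1], pad:pad + tile[2]]
                out[z:z + tile[0], y:y + tile[1], x:x + tile[2]] = \
                    core[:out.shape[0] - z, :out.shape[1] - y, :out.shape[2] - x]
    return out

# Usage with a dummy "network" that simply thresholds intensity
vol = np.random.rand(100, 90, 80).astype(np.float32)
seg = segment_in_tiles(vol, lambda c: (c > 0.5).astype(np.float32))
print(seg.shape)  # (100, 90, 80), same as the input
```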
9. Continual Learning and Biomedical Image Data: Attempting to sequentially learn medical imaging datasets using continual learning approaches / Kontinuerligt lärande och Biomedicinsk bilddata: Försöker att sekventiellt lära sig medicinska bilddata genom att använda metoder för kontinuerligt lärande. Soselia, Davit, January 2022.
While deep learning has proved useful in a large variety of tasks, a limitation remains in supervised problems: all classes and samples need to be present at the training stage. This is a major issue in the field of biomedical imaging, since consistently keeping samples in the training sets is often a liability. Furthermore, this issue prevents older models from being simply updated with only the new data when it is introduced, and it hinders collaboration between companies. In this work, we examine an array of continual learning (CL) approaches to try to improve upon the baseline of the naive fine-tuning approach when retraining on new tasks, and to achieve accuracy levels similar to those seen when all the data is available at the same time. The continual learning approaches with which we attempt to mitigate the problem are EWC, UCB, EWC Online, SI, MAS, and CN-DPM. We explore some complex scenarios with varied classes included in the tasks, as well as close-to-ideal scenarios where the sample sizes are balanced among the tasks. Overall, we focus on X-ray images, since they encompass a large variety of diseases, with new diseases requiring retraining. In the preferred setting, where classes are relatively balanced, we obtain an accuracy of 63.30 versus a baseline of 53.92 and a target score of 66.83. For continued training on the same classes, we obtain an accuracy of 35.52 versus a baseline of 27.73. We also examine whether learning rate adjustments at the task level improve accuracy, with some improvements for EWC Online. The preliminary results indicate that CL approaches such as EWC Online and SI could be integrated into radiography data learning pipelines to reduce catastrophic forgetting in situations where some level of sequential training ability justifies the significant computational overhead.
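As a sketch of the EWC idea referenced above, the snippet below adds the quadratic Elastic Weight Consolidation penalty, weighted by a diagonal Fisher information estimate, to the loss of a new task; the uniform Fisher diagonal and the lambda value are placeholders, since in practice the Fisher terms are estimated from the previous task's data.

```python
import torch
import torch.nn as nn

def ewc_penalty(model, fisher_diag, old_params, lam=1000.0):
    """Elastic Weight Consolidation regulariser: lam/2 * sum_i F_i (theta_i - theta_i*)^2.

    Penalises changes to parameters that carried high (diagonal) Fisher
    information on previously learned tasks. `lam` is an assumed value.
    """
    penalty = 0.0
    for name, param in model.named_parameters():
        if name in fisher_diag:
            penalty = penalty + (fisher_diag[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# Toy usage: snapshot parameters and a (here, uniform) Fisher diagonal after task A,
# then add the penalty to the loss while training on task B.
model = nn.Linear(10, 2)
old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher_diag = {n: torch.ones_like(p) for n, p in model.named_parameters()}

x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
task_loss = nn.functional.cross_entropy(model(x), y)
total_loss = task_loss + ewc_penalty(model, fisher_diag, old_params)
total_loss.backward()
print(float(total_loss))
```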
10. Graph-based registration for biomedical images / Recalage basé graphe pour les images médicales. Pham, Hong Nhung, 11 February 2019.
The context of this thesis is image registration for endomicroscopic images. A multiphoton microendoscope provides different scanning trajectories, which are considered in this work. First, we propose a non-rigid registration method whose motion estimation is cast as a feature matching problem under the Log-Demons framework using graph wavelets. We investigate Spectral Graph Wavelets (SGWs) to capture the shape features of the images; representing data on graphs is better suited to data with complex structures. Our experiments on endomicroscopic images show that this method outperforms existing non-rigid image registration techniques. We then propose a novel image registration strategy for endomicroscopic images acquired on irregular grids. The graph wavelet transform is flexible and can be applied to different types of data, regardless of point density and structural complexity. We also show how the Log-Demons framework can be adapted to optimize the objective function defined for images with irregular sampling.
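To make the spectral graph wavelet machinery concrete, the snippet below computes SGW coefficients of a signal on a toy graph by exact eigendecomposition of the combinatorial Laplacian; the band-pass kernel, the scales, and the path graph are illustrative choices, and practical pipelines typically rely on Chebyshev polynomial approximations rather than full eigendecomposition.

```python
import numpy as np

def spectral_graph_wavelet(adjacency, signal, scales, kernel=lambda x: x * np.exp(-x)):
    """Spectral graph wavelet coefficients of a signal on a small graph.

    Uses the full eigendecomposition of the combinatorial Laplacian L = D - A,
    so it only suits small graphs. The band-pass kernel g(x) = x * exp(-x) is
    one common illustrative choice.
    """
    degrees = adjacency.sum(axis=1)
    laplacian = np.diag(degrees) - adjacency
    eigval, eigvec = np.linalg.eigh(laplacian)
    coeffs = []
    for t in scales:
        # Wavelet operator at scale t: U g(t * Lambda) U^T
        filt = eigvec @ np.diag(kernel(t * eigval)) @ eigvec.T
        coeffs.append(filt @ signal)
    return np.stack(coeffs)            # shape: (n_scales, n_nodes)

# Toy usage: a 5-node path graph with an impulse signal at the middle node
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
f = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
print(spectral_graph_wavelet(A, f, scales=[0.5, 2.0, 8.0]).round(3))
```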