  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Automatisering av skjuvvågselastografidata för kärldiagnostisk applikation. / Automatization of Shear Wave Elastography Data for Arterial Application

Boltshauser, Rasmus, Zheng, Jimmy January 2018 (has links)
Cardiovascular disease is the leading cause of death worldwide. One of the most common cardiovascular diseases is atherosclerosis. The disease is characterised by hardening of and plaque build-up in the vessels, and it contributes to stroke and myocardial infarction. Information about the stiffness of the vessel wall can play an important role in the diagnosis of, among other conditions, atherosclerosis. Shear wave elastography (SWE) is a non-invasive ultrasound-based method used today to measure the elasticity and stiffness of larger soft tissues such as liver and breast tissue. However, the method is not used in vascular applications, as few thorough studies of SWE for vessels have been carried out. The aim of this project is to automate the quantification of the shear wave speed for SWE and to investigate how the capabilities and limitations of the automation depend on the automation settings. Using tools provided by CBH (the School of Engineering Sciences in Chemistry, Biotechnology and Health), a MATLAB program with this capability was created. The program was applied to two phantom models. The automation settings affected the automation of these models differently, which meant that no generally optimal settings could be found; the optimal settings depend on what the automation is intended to examine. / Medical imaging
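The thesis automates this quantification in MATLAB; as a rough illustration only (not the authors' code), a common way to estimate shear wave speed is a time-of-flight fit, tracking when the wave front arrives at each lateral position and fitting position against arrival time. The synthetic displacement map, positions, and timing below are all assumptions for demonstration:

```python
import numpy as np

def shear_wave_speed(displacement, lateral_positions_m, time_s):
    """Estimate shear wave speed from a space-time displacement map.

    displacement: 2D array, rows = lateral positions, cols = time samples.
    Finds the arrival time (peak displacement) at each lateral position and
    fits a line position = speed * time + offset; the slope is the group
    speed in m/s. An illustrative sketch, not the thesis's algorithm.
    """
    arrival_times = time_s[np.argmax(displacement, axis=1)]
    speed, _ = np.polyfit(arrival_times, lateral_positions_m, 1)
    return speed

# Synthetic example: a Gaussian pulse travelling laterally at 3 m/s
positions = np.linspace(0, 0.02, 40)   # 0-20 mm lateral range
times = np.linspace(0, 0.01, 500)      # 10 ms acquisition
true_speed = 3.0
disp = np.exp(-((times[None, :] - positions[:, None] / true_speed) / 5e-4) ** 2)
print(shear_wave_speed(disp, positions, times))  # ≈ 3.0
```

Real SWE data would add noise and dispersion, so robust variants use cross-correlation between traces rather than a single peak.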
22

Multi-organ segmentation med användning av djup inlärning / Multi-Organ Segmentation Using Deep Learning

Karlsson, Albin, Olmo, Daniel January 2020 (has links)
Medical image analysis is both time-consuming and requires expertise. In this report, a 2.5D version of the U-Net convolutional network adapted for automated kidney segmentation is further developed. Convolutional neural networks have previously shown expert-level performance in image segmentation. Training data for the network was created by manually segmenting MRI images of kidneys. The 2.5D U-Net network was trained with 64 kidney segmentations from previous work. Volume analysis of the network's kidney segmentation proposals for 38,000 patients showed that the proportion of segmented voxels that were not part of the kidneys was 0.35%. After the addition of 56 of our segmentations, it decreased to 0.11%, a reduction of about 68%. This is a major improvement of the network and an important step towards the development of practical applications of automated segmentation.
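The quality metric quoted above (0.35% before retraining, 0.11% after) is the fraction of predicted voxels falling outside the reference kidney mask. A minimal sketch of that metric, with toy volumes standing in for the report's data:

```python
import numpy as np

def outside_kidney_fraction(predicted_mask, kidney_mask):
    """Fraction of predicted voxels that lie outside the reference kidney mask.

    Both inputs are boolean 3D volumes of equal shape. This mirrors the kind
    of volume-analysis metric the abstract reports; the function itself is an
    illustrative guess, not code from the report.
    """
    predicted = predicted_mask.astype(bool)
    outside = predicted & ~kidney_mask.astype(bool)
    return outside.sum() / predicted.sum()

# Toy volumes: 1000 reference voxels, prediction adds 4 stray voxels
ref = np.zeros((12, 12, 12), dtype=bool)
ref[1:11, 1:11, 1:11] = True          # 10x10x10 reference kidney
pred = ref.copy()
pred[0, 0, :4] = True                 # 4 false-positive voxels
print(outside_kidney_fraction(pred, ref))  # 4 / 1004 ≈ 0.00398
```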
23

Developing an Image Morphing Approach for Visualization of Digital Twin Liver Fat Reduction

Gustafsson, Peter January 2022 (has links)
Nonalcoholic liver steatosis (NALS) is a condition where fat infiltrates the tissue of the liver and accumulates in droplets. While not a dangerous condition on its own, if left for long enough it can develop into conditions which could cause serious and potentially permanent damage to the liver. One of the primary approaches for preventing NALS from progressing is through changes in diet and lifestyle. However, explaining to a patient the impact of such a change can be difficult, which hampers motivation in many instances. Digital twin technology can provide simulations of what will happen to the body after a lifestyle change, but the output data is very abstract and can thus be a challenge to convey properly to a patient. In this project I investigate a digital data visualization approach where a photo of a liver sample is morphed to showcase liver fat droplets shrinking as a result of a changed lifestyle, as simulated by the digital twin. The approach uses a simple image morphing algorithm that pulls pixel intensity values from regions designated by a morph field and composites a new image from the updated values. By selectively choosing regions of interest to pull pixels towards or away from, with a ramping cutoff in morph field strength, it is possible to designate certain regions in the image to be morphed. The program is capable of generating time series of increasingly morphed images in both greyscale and truecolour, and it can save the time series as an animated .GIF file, with linear interpolation between the morphed images in the time series.
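The morph-field compositing described above can be sketched as a backward warp: each output pixel pulls its intensity from a displaced source location, with the displacement ramping to zero at a cutoff radius. This is an assumed simplification of the thesis's approach, shrinking one circular droplet:

```python
import numpy as np

def shrink_region(image, center, radius, strength):
    """Backward-warp pixels toward `center` inside `radius`, shrinking a droplet.

    Each output pixel samples the input at a location pushed further from the
    center, scaled by a morph-field strength that ramps linearly to zero at
    `radius`, so the rest of the image is untouched. Illustrative sketch only.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dy, dx = ys - center[0], xs - center[1]
    r = np.hypot(dy, dx)
    ramp = np.clip(1.0 - r / radius, 0.0, 1.0)   # 1 at center, 0 at cutoff
    scale = 1.0 + strength * ramp                # sample from further out
    src_y = np.clip(center[0] + dy * scale, 0, h - 1).round().astype(int)
    src_x = np.clip(center[1] + dx * scale, 0, w - 1).round().astype(int)
    return image[src_y, src_x]

# A bright droplet on a dark background shrinks as strength increases
img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 100] = 1.0   # droplet of radius ~10
morphed = shrink_region(img, (32, 32), radius=25, strength=0.5)
```

Generating the time series then amounts to calling this with increasing `strength` and saving the frames (e.g. as an animated GIF).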
24

Deep learning on large neuroimaging datasets

Jönemo, Johan January 2024 (has links)
Magnetic resonance imaging (MRI) is a medical imaging method that has become increasingly important during the last four decades. This is partly because it allows us to acquire a 3D representation of a part of the body without exposing patients to ionizing radiation. Furthermore, it typically gives better contrast between soft tissues than x-ray based techniques such as CT. The image acquisition procedure of MRI is also much more flexible: one can vary the signal sequence not only to change how different types of tissue map to different intensities, but also to measure flow, diffusion, or even brain activity over time. Machine learning has gained great impetus over the last decade and a half. This is probably partly because of the work done on the mathematical foundations of machine learning at the end of the last century, in conjunction with the availability of specialized massively parallel processors, originally developed as graphical processing units (GPUs), which are ideal for training or running machine learning models. The work presented in this thesis combines MRI and machine learning in order to leverage the large amounts of MRI data available in open data sets to address questions of clinical relevance about the brain. The thesis comprises three studies. The first investigated which augmentation methods are useful for classifying autism. The second study is about predicting brain age; in particular, it aims to construct lightweight models using the MRI volumes in a condensed form, so that the model can be trained in a short time and still reach good accuracy. The third study is a development of the second that investigates other ways of condensing the brain volumes. / Funding: This research was supported by the Swedish research council (2017-04889), the ITEA/VINNOVA project ASSIST (Automation, Surgery Support and Intuitive 3D visualization to optimize workflow in IGT SysTems, 2021-01954), and the Åke Wiberg foundation (M20-0031, M21-0119, M22-0088).
25

Compact Representations for Fast Nonrigid Registration of Medical Images

Timoner, Samson 04 July 2003 (has links)
We develop efficient techniques for the non-rigid registration of medical images by using representations that adapt to the anatomy found in such images. Images of anatomical structures typically have uniform intensity interiors and smooth boundaries. We create methods to represent such regions compactly using tetrahedra. Unlike voxel-based representations, tetrahedra can accurately describe the expected smooth surfaces of medical objects. Furthermore, the interior of such objects can be represented using a small number of tetrahedra. Rather than describing a medical object using tens of thousands of voxels, our representations generally contain only a few thousand elements. Tetrahedra facilitate the creation of efficient non-rigid registration algorithms based on finite element methods (FEM). We create a fast, FEM-based method to non-rigidly register segmented anatomical structures from two subjects. Using our compact tetrahedral representations, this method generally requires less than one minute of processing time on a desktop PC. We also create a novel method for the non-rigid registration of gray scale images. To facilitate a fast method, we create a tetrahedral representation of a displacement field that automatically adapts to both the anatomy in an image and to the displacement field. The resulting algorithm has a computational cost that is dominated by the number of nodes in the mesh (about 10,000), rather than the number of voxels in an image (nearly 10,000,000). For many non-rigid registration problems, we can find a transformation from one image to another in five minutes. This speed is important as it allows use of the algorithm during surgery. We apply our algorithms to find correlations between the shape of anatomical structures and the presence of schizophrenia. We show that a study based on our representations outperforms studies based on other representations. 
We also use the results of our non-rigid registration algorithm as the basis of a segmentation algorithm. That algorithm also outperforms other methods in our tests, producing smoother segmentations and more accurately reproducing manual segmentations.
26

Image analysis techniques for classification of pulmonary disease in cattle

Miller, C. Denise 13 September 2007 (has links)
Histologic analysis of tissue samples is often a critical step in the diagnosis of disease. However, this type of assessment is inherently subjective, and consequently a high degree of variability may occur between results produced by different pathologists. Histologic analysis is also a very time-consuming task for pathologists. Computer-based quantitative analysis of tissue samples shows promise both for reducing the subjectivity of traditional manual tissue assessments and for potentially reducing the time required to analyze each sample.

The objective of this thesis project was to investigate image processing techniques and to develop software which could be used as a diagnostic aid in pathology assessments of cattle lung tissue samples. The software examines digital images of tissue samples, identifying and highlighting the presence of a set of features that indicate disease and that can be used to distinguish various pulmonary diseases from one another. The output of the software is a series of segmented images with relevant disease indicators highlighted, and measurements quantifying the occurrence of these features within the tissue samples. Results of the software analysis of a set of 50 cattle lung tissue samples were compared to the detailed manual analysis of these samples by a pathology expert.

The combination of image analysis techniques implemented in the thesis software shows potential. Detection of each of the disease indicators is successful to some extent, and in some cases the analysis results are extremely good. There is a large difference in accuracy rates for identification of the set of disease indicators, however, with sensitivity values ranging from a high of 94.8% to a low of 22.6%. This wide variation in result scores is partially due to limitations of the methodology used to determine accuracy.
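The sensitivity figures quoted above (94.8% down to 22.6%) are the true-positive rate against the pathologist's reference. As a minimal illustration (not the thesis's evaluation code), with per-pixel indicator labels:

```python
def sensitivity(predicted, actual):
    """True-positive rate: the fraction of actual disease-indicator pixels
    that the software also marked. Illustrative toy metric, assuming binary
    per-pixel labels; not code from the thesis.
    """
    true_pos = sum(p and a for p, a in zip(predicted, actual))
    false_neg = sum((not p) and a for p, a in zip(predicted, actual))
    return true_pos / (true_pos + false_neg)

# Toy labels: 4 true indicator pixels, 3 of them detected
actual    = [1, 1, 1, 1, 0, 0]
predicted = [1, 1, 1, 0, 1, 0]
print(sensitivity(predicted, actual))  # 3 / 4 = 0.75
```

Note that sensitivity ignores false positives, which is one reason a single accuracy number can be misleading for rare indicators.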
27

Evaluation of Lung Perfusion Using Pre and Post Contrast-Enhanced CT Images - Pulmonary Embolism

Weng, Ming-hsu 15 July 2005 (has links)
In recent years, computed tomography (CT) has become an increasingly important tool in clinical diagnosis, mainly because of the advent of fast scanning techniques and the high spatial resolution of the imaging hardware. In addition to detailed morphological information, functional CT also gives physiologic information, such as perfusion, which can help doctors make better decisions. Our goal in this paper is to evaluate lung perfusion by comparing pre and post contrast-enhanced CT images. After the contrast agent is injected, it flows with the blood stream and causes temporal changes in CT values. Therefore, we can quantify perfusion values from the changes in CT values between pre and post contrast-enhanced CT images. Then, guided by color-coded maps, a quantitative analysis for the assessment of lung perfusion can be performed. As a result, it is easier for the observer to determine the lung perfusion distribution. Moreover, we can use color-coded images to visualize pulmonary embolism and monitor therapeutic efficacy.
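The core quantification step, comparing pre and post contrast-enhanced scans, can be sketched as a per-voxel subtraction within a lung mask. This is a deliberately simplified stand-in for the paper's method; the HU values below are assumptions, and real perfusion analysis also requires registering the two scans:

```python
import numpy as np

def perfusion_map(pre_ct, post_ct, lung_mask):
    """Per-voxel enhancement (post minus pre, in Hounsfield units) within the
    lung mask. Illustrative sketch: real perfusion quantification would also
    register the scans and normalise by the arterial input function.
    """
    return np.where(lung_mask, post_ct - pre_ct, 0.0)

# Toy slices: an embolised region shows no contrast enhancement
pre = np.full((8, 8), -800.0)          # aerated lung, roughly -800 HU
post = pre + 60.0                      # contrast raises attenuation by 60 HU
post[2:4, 2:4] = pre[2:4, 2:4]         # perfusion defect: no enhancement
mask = np.ones((8, 8), dtype=bool)
pmap = perfusion_map(pre, post, mask)
print(pmap.min(), pmap.max())          # 0.0 in the defect, 60.0 elsewhere
```

Color-coding this map (e.g. with a diverging colormap) then makes perfusion defects such as emboli visually apparent.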
28

Using an MR anatomically simulated normal image to reveal SPECT finite resolution effects

Wilson, Timothy Lyle 12 1900 (has links)
No description available.
29

Hierarchical segmentation of mammograms based on pixel intensity

Masek, Martin. January 2004 (has links)
Thesis (Ph.D.)--University of Western Australia, 2004.
30

Medical Image Segmentation by Transferring Ground Truth Segmentation

Vyas, Aseem January 2015 (has links)
The segmentation of medical images is a difficult task due to the inhomogeneous intensity variations that occur during digital image acquisition, the complicated shape of the object, and the medical expert's lack of semantic knowledge. Automated segmentation algorithms work well for some medical images, but no algorithm has been general enough to work for all medical images. In practice, the segmentation results are most often corrected by experts before actual use. In this work, we are motivated to determine how to make use of manually segmented data in automatic segmentation. The key idea is to transfer the ground truth segmentation from a database of training images to a given test image. The ground truth segmentation of MR images is done by experts. The process includes a hierarchical image decomposition approach that performs shape matching of test images at several levels, starting with the image as a whole (i.e. level 0) and then going through a pyramid decomposition (i.e. level 1, level 2, etc.) with the database of training images and the given test image. The goal of the pyramid decomposition is to find the section of a training image that best matches a section of the test image at each level. After that, a re-composition approach is taken to place the best-matched sections of the training image into the original test image space. Finally, the ground truth segmentation is transferred from the best training images to their corresponding locations in the test image. We have tested our method on a hip joint MR image database, and the experiment shows successful results on level 0, level 1 and level 2 re-compositions. Results improve with deeper level decompositions, which supports our hypotheses.
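The hierarchical matching idea can be sketched as follows: at level L, split each axis into 2**L blocks and find, for every test block, the training block with the smallest sum of squared differences. This toy version (my assumption of the scheme; the thesis's matching criterion and re-composition steps are omitted) uses aligned, equally sized blocks only:

```python
import numpy as np

def best_matches(test_img, train_img, level):
    """For each block of the test image at a pyramid level, find the
    best-matching equally sized block of the training image by SSD.

    Level 0 compares whole images; level L splits each axis into 2**L
    blocks. Returns a dict mapping test-block index to train-block index.
    """
    n = 2 ** level
    h, w = test_img.shape
    bh, bw = h // n, w // n
    matches = {}
    for i in range(n):
        for j in range(n):
            block = test_img[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            best, best_ssd = None, np.inf
            for u in range(n):
                for v in range(n):
                    cand = train_img[u*bh:(u+1)*bh, v*bw:(v+1)*bw]
                    ssd = ((block - cand) ** 2).sum()
                    if ssd < best_ssd:
                        best, best_ssd = (u, v), ssd
            matches[(i, j)] = best
    return matches

# Sanity check: matching an image against itself maps each block to itself
img = np.arange(64.0).reshape(8, 8)
print(best_matches(img, img, level=1))
```

Transferring the ground truth then amounts to copying each matched training block's labels into the corresponding test-block location.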
