451

Detecting illegal gold mining sites in the Amazon forest : Using Deep Learning to Classify Satellite Images

Labbe, Nathan January 2021
Illegal gold mining in the Amazon forest has increased dramatically since 2005 with the rise in the price of gold. The use of chemicals such as mercury, which facilitate gold extraction, increases the toxicity of the soil; these chemicals can enter the food chain, leading to health problems for the inhabitants and causing the environmental scourge we know today. In addition, the massive increase in these activities drives deforestation and affects protected areas such as indigenous territories and natural reserves. Organisations and governments in Peru, Brazil and French Guiana in particular are trying to regulate these activities, but because the area to cover is very large, illegal exploitation is often detected too late to react. The idea of this thesis is to evaluate whether it is possible to automate the detection of these illegal gold mines using open satellite images and deep learning. To answer this question, this report covers the creation of new datasets as well as the evaluation of two techniques: object detection using RetinaNet and semantic segmentation using U-Net. The influence of image spectral bands is also studied. The numerous trained models are all evaluated using the Dice coefficient and Intersection over Union (IoU) metrics, and each comparison is supported by the statistical sign test. The report shows the superiority of the segmentation model for the binary classification of illegal mines. However, it is suggested to first use RetinaNet to determine more precisely whether a mine is legal or illegal, and then to use U-Net on illegal mines to produce a more precise segmentation. The report also shows and illustrates the importance of choosing the right image spectral bands, which greatly increases the accuracy of the models.
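The Dice coefficient and Intersection over Union used throughout these evaluations are standard overlap metrics for binary masks; a minimal NumPy sketch (not the author's code) is:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray) -> tuple[float, float]:
    """Compute the Dice coefficient and IoU of two binary masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * intersection / (pred.sum() + target.sum() + 1e-8)
    iou = intersection / (union + 1e-8)
    return float(dice), float(iou)
```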
452

Detecting Slag Formation with Deep Learning Methods : An experimental study of different deep learning image segmentation models

von Koch, Christian, Anzén, William January 2021
Image segmentation through neural networks and deep learning has, over the past decade, become a successful tool for automated decision-making. For Luossavaara-Kiirunavaara Aktiebolag (LKAB), this means identifying the amount of slag inside a furnace through computer vision. There are many prominent convolutional neural network architectures in the literature, and this thesis explores two: a modified U-Net and the PSPNet. The architectures were combined with three loss functions and three class-weighting schemes, resulting in 18 model configurations that were evaluated and compared. This thesis also explores transfer learning techniques for neural networks tasked with identifying slag in images from inside a furnace. The benefit of transfer learning is that the network can learn to find features from already labeled data from another context. Finally, the thesis explores how temporal information can be utilised by adding an LSTM layer to a model taking pairs of images as input instead of one. The results show (1) that the PSPNet outperformed the U-Net for all tested configurations in all relevant metrics, (2) that the model is able to find more complex features while converging quicker by using transfer learning, and (3) that utilising temporal information reduced the variance of the predictions, and that the modified PSPNet using an LSTM layer showed promise in handling images with outlying characteristics.
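The 18 configurations are the cross product of the two architectures, three loss functions, and three class-weighting schemes; a hedged sketch of enumerating such a grid (the loss and weighting names here are illustrative assumptions, not necessarily those used in the thesis):

```python
from itertools import product

architectures = ["modified_unet", "pspnet"]
losses = ["cross_entropy", "dice_loss", "focal_loss"]           # illustrative names
weightings = ["none", "inverse_frequency", "median_frequency"]  # illustrative names

# 2 architectures x 3 losses x 3 weighting schemes = 18 configurations
for arch, loss, weighting in product(architectures, losses, weightings):
    print(f"train_and_evaluate({arch}, {loss}, {weighting})")
```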
453

An evaluation of using a U-Net CNN with a random forest pre-screener : On a dataset of hand-drawn maps provided by länsstyrelsen i Jönköping

Hellgren, Robin, Axelsson, Martin January 2021
Much research has been done on using machine learning to extract features such as buildings and lakes from satellite imagery, and while such data are valuable for many use cases, they are limited to the period in which satellites have been in use. Historical maps cover a much greater range of time periods, but the viability of using machine learning to extract data from them has not been investigated to any great extent. This case study uses a real-world use case to show the efficacy of using a U-Net convolutional neural network to extract features drawn on hand-drawn maps. A random forest was implemented as a pre-screener to the U-Net, with the goal of filtering out noise that could lead to false positives and thereby increasing the accuracy of the U-Net. The pre-screener in this study did not perform well on the dataset and did not improve the performance of the U-Net. The U-Net's ability to extrapolate the location of features not explicitly drawn on the map was not clearly established. The results of this study show that the U-Net CNN could be an invaluable tool for quickly extracting data from this typically cumbersome data source, allowing easier access to a wealth of data. The fields of archaeology and climate science would find this especially useful.
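A minimal sketch of the pre-screener idea, assuming map tiles are first classified by a random forest on cheap per-tile statistics so that only promising tiles reach the U-Net (the feature choice here is an illustrative assumption, not the thesis' actual features):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def tile_features(tile: np.ndarray) -> np.ndarray:
    """Cheap colour statistics for one (H, W, C) map tile."""
    return np.concatenate([tile.mean(axis=(0, 1)), tile.std(axis=(0, 1))])

def prescreen(train_tiles, train_labels, candidate_tiles):
    """Train the random forest, then keep only tiles flagged as containing features."""
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(np.stack([tile_features(t) for t in train_tiles]), train_labels)
    flags = rf.predict(np.stack([tile_features(t) for t in candidate_tiles]))
    return [t for t, keep in zip(candidate_tiles, flags) if keep == 1]
```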
454

Digital Image Processing of Cross-section Samples

Beneš, Miroslav January 2014
The thesis is aimed at the digital analysis and processing of microscopic image data, with a focus on cross-section samples from artworks in the cultural heritage domain. It contributes to the solution of two different problems of image processing: image segmentation and image retrieval. A performance evaluation of different image segmentation methods on a dataset of cross-section images is carried out in order to study the behavior of individual approaches and to propose guidelines on how to choose a suitable method for segmenting microscopic images. Moreover, the benefit of a segmentation-combination approach is studied and several distinct combination schemes are proposed. The evaluation is backed by a large number of experiments in which image segmentation algorithms are assessed by several segmentation quality measures. The applicability of the achieved results is shown on image data of different origin. In the second part, content-based image retrieval of cross-section samples is addressed and a functional solution is presented. Its implementation is included in the Nephele system, an expert system for processing and archiving material research reports with image processing features, designed and implemented for the cultural heritage application area.
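One simple instance of the kind of segmentation-combination scheme evaluated here is per-pixel majority voting over several label maps; a minimal sketch (the thesis proposes several distinct and more elaborate schemes):

```python
import numpy as np

def majority_vote(label_maps: list[np.ndarray]) -> np.ndarray:
    """Fuse integer label maps of equal shape by per-pixel majority vote."""
    stack = np.stack(label_maps, axis=0)                 # (n_maps, H, W)
    n_labels = int(stack.max()) + 1
    # Count the votes each label receives at every pixel, then take the winner.
    votes = np.stack([(stack == lbl).sum(axis=0) for lbl in range(n_labels)])
    return votes.argmax(axis=0)
```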
455

Fully Unsupervised Image Denoising, Diversity Denoising and Image Segmentation with Limited Annotations

Prakash, Mangal 06 April 2022 (has links)
Understanding the processes of cellular development and the interplay of cell shape changes, division and migration requires investigation of developmental processes at the spatial resolution of a single cell. Biomedical imaging experiments enable the study of dynamic processes as they occur in living organisms. While biomedical imaging is essential, a key component of exposing unknown biological phenomena is quantitative image analysis. Biomedical images, especially microscopy images, are usually noisy owing to practical limitations such as the available photon budget and sample sensitivity. Additionally, microscopy images often contain artefacts due to optical aberrations in microscopes or imperfections in the camera sensor and internal electronics. The noisy nature of the images, as well as the artefacts, prohibits accurate downstream analysis such as cell segmentation. Although countless approaches have been proposed for image denoising, artefact removal and segmentation, supervised Deep Learning (DL) based content-aware algorithms are currently the best performing for all these tasks. Supervised DL based methods are, however, plagued by many practical limitations. Supervised denoising and artefact removal algorithms require paired corrupted and high-quality images for training. Obtaining such image pairs can be very hard and virtually impossible in most biomedical imaging applications, owing to photosensitivity and the dynamic nature of the samples being imaged. Similarly, supervised DL based segmentation methods need copious amounts of annotated data for training, which is often very expensive to obtain. Owing to these restrictions, it is imperative to look beyond supervised methods. The objective of this thesis is to develop novel unsupervised alternatives for image denoising and artefact removal, as well as semi-supervised approaches for image segmentation. The first part of this thesis deals with unsupervised image denoising and artefact removal. For the unsupervised denoising task, this thesis first introduces a probabilistic approach for training DL based methods using parametric models of imaging noise. Next, a novel unsupervised diversity denoising framework is presented which addresses the fundamentally non-unique inverse nature of image denoising by generating multiple plausible denoised solutions for any given noisy image. Finally, interesting properties of the diversity denoising methods are presented which make them suitable for unsupervised spatial artefact removal in microscopy and medical imaging applications. In the second part of this thesis, the problem of cell/nucleus segmentation is addressed, with a focus on practical scenarios where ground truth annotations for training DL based segmentation methods are scarce. Unsupervised denoising is used as an aid to improve segmentation performance in the presence of limited annotations. Several training strategies are presented to leverage the representations learned by unsupervised denoising networks and enable better cell/nucleus segmentation in microscopy data. Apart from DL based segmentation methods, a proof-of-concept is introduced which views cell/nucleus segmentation as a label fusion problem. This method, through limited human interaction, learns to choose the best possible segmentation for each cell/nucleus using only a pool of diverse (and possibly faulty) segmentation hypotheses as input. In summary, this thesis introduces new unsupervised denoising and artefact removal methods, as well as semi-supervised segmentation methods, that can be easily deployed to directly benefit biomedical practitioners in their research.
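The parametric models of imaging noise mentioned above can be illustrated with the widely used Poisson-Gaussian model of microscopy noise; a minimal sketch for simulating corrupted images from clean signal, with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_gaussian(clean: np.ndarray, gain: float = 2.0, read_sigma: float = 5.0):
    """Simulate photon shot noise (Poisson) plus Gaussian read-out noise."""
    shot = rng.poisson(np.clip(clean, 0, None) / gain) * gain    # signal-dependent part
    return shot + rng.normal(0.0, read_sigma, size=clean.shape)  # sensor read noise
```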
456

Delaunay-based Vector Segmentation of Volumetric Medical Images

Španěl, Michal January 2011
Image segmentation plays an important role in medical image analysis, and many segmentation algorithms exist. Most of them, however, produce data that are poorly suited to further surface extraction and anatomical modelling of human tissues. In this thesis, a novel segmentation technique based on 3D Delaunay triangulation is proposed. A modified variational tetrahedral meshing approach is used to adapt a tetrahedral mesh to the underlying CT volumetric data, so that image edges are well approximated in the mesh. In order to classify tetrahedra into regions/tissues with similar characteristics, three different clustering schemes are presented. Finally, several methods for improving the quality of the mesh and its adaptation to the image structure are also discussed.
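The core pipeline (tetrahedralise points in the volume, then group tetrahedra by image statistics) can be sketched with off-the-shelf tools; the plain k-means on mean intensity below is an illustrative stand-in for the three clustering schemes in the thesis:

```python
import numpy as np
from scipy.spatial import Delaunay
from sklearn.cluster import KMeans

def segment_volume(points: np.ndarray, volume: np.ndarray, n_regions: int = 3):
    """Tetrahedralise 3D points, then label each tetrahedron by clustering the
    image intensity sampled at its centroid."""
    tetra = Delaunay(points)                          # tetra.simplices: (n_tets, 4)
    centroids = points[tetra.simplices].mean(axis=1)  # one centroid per tetrahedron
    idx = np.clip(np.round(centroids).astype(int), 0, np.array(volume.shape) - 1)
    intensities = volume[idx[:, 0], idx[:, 1], idx[:, 2]].reshape(-1, 1)
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(intensities)
    return tetra, labels                              # per-tetrahedron region label
```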
457

Applications of Graph Cutting for Probabilistic Characterization of Microstructures in Ferrous Alloys

Brust, Alexander Frederick 29 August 2019
No description available.
458

Uncertainty Estimation in Volumetric Image Segmentation

Park, Donggyun January 2023
The performance of deep neural networks, and the estimation of their robustness, have developed rapidly. In contrast, despite the broad use of deep convolutional neural networks (CNNs) [1] for medical image segmentation, far less research has been conducted on their uncertainty estimation. Deep learning tools do not by nature capture model uncertainty, so the output of deep neural networks needs to be critically analysed with quantitative measurements, especially for applications in the medical domain. In this work, epistemic uncertainty, one of the two main types of uncertainty (epistemic and aleatoric), is analysed and measured for volumetric medical image segmentation tasks (and possibly more diverse methods for 2D images) at pixel level and structure level. The deep neural network employed as a baseline is the 3D U-Net architecture [2], which shares its essential structural concept with the U-Net architecture [3]. Various techniques are applied to quantify the uncertainty and obtain statistically meaningful results, including test-time data augmentation and deep ensembles. The distribution of the pixel-wise predictions is estimated by Monte Carlo simulation, and the entropy is computed to quantify and visualise how uncertain (or certain) the prediction for each pixel is. Given the long network training times in volumetric image segmentation, training an ensemble of networks is extremely time-consuming, so the focus is on test-time data augmentation and test-time dropout. The desired outcome is to reduce the computational cost of measuring the uncertainty of the model predictions while maintaining the same level of estimation performance, and to increase the reliability of the uncertainty estimation map compared to conventional methods. The proposed techniques are evaluated on a publicly available volumetric image dataset, Combined Healthy Abdominal Organ Segmentation (CHAOS, a set of 3D in-vivo images) from Grand Challenge (https://chaos.grand-challenge.org/). Experiments with the liver segmentation task in 3D Computed Tomography (CT) show the relationship between prediction accuracy and the uncertainty map obtained by the proposed techniques.
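The per-pixel entropy computed from Monte Carlo samples (whether from test-time dropout, test-time augmentation, or an ensemble) can be written compactly; a minimal sketch assuming a stack of softmax probability maps:

```python
import numpy as np

def predictive_entropy(mc_probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """mc_probs: (n_samples, n_classes, D, H, W) softmax outputs from repeated
    stochastic forward passes. Returns the per-voxel entropy of the mean prediction."""
    mean_probs = mc_probs.mean(axis=0)                       # average over MC samples
    return -(mean_probs * np.log(mean_probs + eps)).sum(axis=0)
```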
459

Volumetric Image Segmentation of Lizard Brains

Dragunova, Yulia January 2023
Accurate measurement of brain region volumes is important in studying brain plasticity, which brings insight into the fundamental mechanisms studied in animal memory, cognition, and behavior research. The traditional methods of brain volume measurement are ellipsoid approximation or histology. In this study, micro-computed tomography (micro-CT) was used to achieve more accurate results. However, manual segmentation of micro-CT images is time-consuming, hard to reproduce, and carries the risk of human error. Automatic image segmentation is a faster method for obtaining the segmentations and has the potential to provide efficiency, reliability, repeatability, and scalability. Different methods are tested and compared in this thesis. In this project, 29 micro-CT scans of lizard heads were used, and measurements of the volumes of 6 different brain regions were of interest. The lizard heads were semi-manually segmented into 6 regions, and three open-source segmentation algorithms were compared: one atlas-based algorithm and two deep-learning-based algorithms. Different amounts of training data were quantitatively compared for the deep-learning methods in all three orientations (sagittal, horizontal, and coronal). Data augmentation was tested and compared as well. The comparison shows that the deep-learning algorithms provided more accurate results than the atlas-based algorithm. The results also demonstrated that, in the sagittal plane, 5 manually segmented images for training are enough to produce predictions with high accuracy (Dice score 0.948). Image augmentation was shown to improve the accuracy of the segmentations, but a unique dataset still plays an important role. In conclusion, the results show that manual segmentation work can be reduced drastically by using deep learning for image segmentation.
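The augmentation referred to above typically consists of simple geometric transforms applied identically to image and label; a hedged sketch of the kind of flip/rotation augmentation that might be used (the thesis does not spell out its exact transforms):

```python
import numpy as np

rng = np.random.default_rng(42)

def augment_pair(image: np.ndarray, mask: np.ndarray):
    """Apply the same random flip and 90-degree rotation to a 2D slice and its mask."""
    if rng.random() < 0.5:                       # random horizontal flip
        image, mask = np.flip(image, axis=1), np.flip(mask, axis=1)
    k = int(rng.integers(0, 4))                  # 0/90/180/270 degree rotation
    return np.rot90(image, k), np.rot90(mask, k)
```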
460

GlacierNet Variant for Large Scale Glacier Mapping

Xie, Zhiyuan 13 July 2022
No description available.
