31

Detection of Fat-Water Inversions in MRI Data With Deep Learning Methods

Hellgren, Lisa, Asketun, Fanny January 2021 (has links)
Magnetic resonance imaging (MRI) is a widely used medical imaging technique for examinations of the body. However, artifacts are a common problem that must be handled to obtain reliable diagnoses and to avoid drawing inaccurate conclusions. Magnetic resonance (MR) images acquired with a Dixon sequence provide two channels with separate fat and water content. Fat-water inversions, also called swaps, are a common artifact of this method, in which voxels from the two channels are swapped, producing incorrect data. This thesis investigates the possibility of using deep learning methods for automatic detection of swaps in MR volumes. The data used in this thesis are MR volumes from UK Biobank, processed by AMRA Medical. Segmentation masks of complicated swaps are created by operators who manually annotate the swap, but only if the regions affect subsequent measurements. The segmentation masks are therefore not fully reliable, and additional synthesized swaps were created. Two different deep learning approaches were investigated: a reconstruction-based method and a segmentation-based method. The reconstruction-based networks were trained to reconstruct a volume as similar as possible to the input volume but without any swaps; when such a network is tested on a volume with a swap, the location of the swap can be estimated from the reconstructed volume with post-processing methods. Autoencoders are an example of a reconstruction-based network. The segmentation-based models were trained to segment a swap directly from the input volume, thus using volumes with swaps both during training and testing. The segmentation-based networks were inspired by U-Net. The performance of the models from both approaches was evaluated on data with real and synthetic swaps using the Dice coefficient, precision, and recall. The results show that the reconstruction-based models are not suitable for swap detection: difficulties in finding the right architecture led to poor reconstructions and unreliable predictions. Further investigation of different post-processing methods, architectures, and hyperparameters might improve their swap detection. The segmentation-based models are robust, with reliable detections independent of the size of the swaps, despite being trained on data with synthesized swaps. Their results look very promising, and they could probably be used as an automated method for swap detection after some further fine-tuning of the parameters.
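For reference, a minimal sketch of the three evaluation metrics named above (Dice coefficient, precision, recall) for binary swap masks; the function name and epsilon smoothing are illustrative and not taken from the thesis:

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8):
    """Dice coefficient, precision and recall for two binary masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()        # true positives
    fp = np.logical_and(pred, ~target).sum()       # false positives
    fn = np.logical_and(~pred, target).sum()       # false negatives
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return dice, precision, recall
```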
32

Automatic classification of fish and bubbles at pixel-level precision in multi-frequency acoustic echograms using U-Net convolutional neural networks

Slonimer, Alex 05 April 2022 (has links)
Multi-frequency backscatter acoustic profilers (echosounders) are used to measure biological and physical phenomena in the ocean in ways that are not possible with optical methods. Echosounders are commonly used on ocean observatories and by commercial fisheries but require significant manual effort to classify species of interest within the collected echograms. The work presented in this thesis tackles the challenging task of automating the identification of fish and other phenomena in echosounder data, with specific application to aggregations of juvenile salmon, schools of herring, and bubbles of air that have been mixed into the water. U-Net convolutional neural networks (CNNs) are used to accomplish this task by identifying classes at the pixel level. The data considered here were collected in Okisollo Channel on the coast of British Columbia, Canada, using an Acoustic Zooplankton and Fish Profiler at four frequencies (67.5, 125, 200, and 455 kHz). The entrainment of air bubbles and the behaviour of fish are both governed by the surrounding physical environment. To improve the classification, simulated channels for water depth and solar elevation angle (a proxy for sunlight) are used to encode the CNNs with information related to the environment, providing spatial and temporal context. The manual annotation of echograms at the pixel level is a challenging process, and a custom application was developed to aid in it. A relatively small set of annotations was created and used to train the CNNs. During training, the echogram data are divided into randomly spaced square tiles to encode the models with robust features, and into overlapping tiles for added redundancy during classification. This is done without removing noise from the data, thus ensuring broad applicability. The approach proves highly successful, as evidenced by the best-performing U-Net model producing F1 scores of 93.0%, 87.3%, and 86.5% for the herring, salmon, and bubble classes, respectively. These models also achieve promising results when applied to echogram data with coarser resolution. One goal in fisheries acoustics is to detect distinct schools of fish. Following the initial pixel-level classification, the results from the best-performing U-Net model are fed through a heuristic module, inspired by traditional fisheries methods, that links connected components of identified fish (school candidates) into distinct school objects. The results are compared to the outputs from a recent study that relied on a Mask R-CNN architecture to apply instance segmentation for classifying fish schools. It is demonstrated that the U-Net/heuristic hybrid technique improves on the Mask R-CNN approach by a small amount for the classification of herring schools, and by a large amount for aggregations of juvenile salmon (improvement in mean average precision from 24.7% to 56.1%). / Graduate
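A minimal NumPy sketch of the input-preparation idea described above: the four frequency channels are stacked with the two simulated context channels and cut into overlapping square tiles. The tile size, stride, and function name are illustrative assumptions, not the thesis configuration:

```python
import numpy as np

def build_input_tiles(echograms, depth_map, solar_map, tile=64, stride=32):
    """Stack 4 frequency channels with simulated depth and solar-elevation
    channels, then cut overlapping square tiles for classification.
    Assumed shapes: echograms (4, H, W); depth_map, solar_map (H, W)."""
    volume = np.concatenate(
        [echograms, depth_map[None, ...], solar_map[None, ...]], axis=0)  # (6, H, W)
    _, H, W = volume.shape
    tiles = []
    for r in range(0, H - tile + 1, stride):
        for c in range(0, W - tile + 1, stride):
            tiles.append(volume[:, r:r + tile, c:c + tile])
    return np.stack(tiles)  # (num_tiles, 6, tile, tile)
```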
33

U-Net ship detection in satellite optical imagery

Smith, Benjamin 05 October 2020 (has links)
Deep learning ship detection in satellite optical imagery suffers from false positives caused by clouds, landmasses, and man-made objects that interfere with correctly classifying ships. A custom U-Net is implemented to address this issue and aims to capture more features in order to provide higher class accuracy. The model is trained with two different system architectures: a single-node architecture and a parameter-server variant whose workers act as a boosting mechanism. To extend this effort, a refinement method based on offline hard example mining aims to improve the accuracy of the trained models on both the validation and target datasets; however, it results in overcorrection and a decrease in accuracy. The single-node architecture achieves 92% class accuracy on the validation dataset and 68% on the target dataset, exceeding class accuracy scores in related works, which reached up to 88%. The parameter-server variant achieves class accuracy of 86% on the validation set and 73% on the target dataset. The custom U-Net is able to achieve acceptable, high class accuracy on a subset of the training data, keeping training time and cost low in cloud-based solutions. / Graduate
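The offline hard example mining step can be pictured as selecting the highest-loss training samples for an extra refinement pass. The sketch below is a generic illustration of that idea only; the fraction and function name are assumptions, not the thesis settings:

```python
import numpy as np

def select_hard_examples(per_sample_loss: np.ndarray, fraction: float = 0.25) -> np.ndarray:
    """Return indices of the highest-loss samples so they can be revisited
    in a refinement pass (generic offline hard example mining)."""
    k = max(1, int(len(per_sample_loss) * fraction))
    return np.argsort(per_sample_loss)[-k:]
```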
34

Image Segmentation Using Deep Learning Regulated by Shape Context / Bildsegmentering med djupt lärande reglerat med formkontext

Wang, Wei January 2018 (has links)
In recent years, image segmentation using deep neural networks has made great progress. However, reaching a good result when training with a small amount of data remains a challenge. To find a good way to improve segmentation accuracy with limited datasets, we implemented a new automatic chest radiograph segmentation experiment, based on preliminary work by Chunliang, using a deep neural network combined with shape context information. In this pipeline, the datasets were first put through the original U-Net; the segmented images were then refined through a second network that incorporates shape context information. For this experiment, we created a new network structure by rebuilding the U-Net into a two-input structure and refined the processing pipeline, so that the images and shape context were trained together through the new network model iteratively. The proposed method was evaluated on 247 posterior-anterior chest radiographs from public datasets using n-fold cross-validation. The outcome shows that, compared to the original U-Net, the proposed pipeline reaches higher accuracy when trained with limited datasets, where "limited" refers to 1-20 images in the medical imaging field. A better outcome with higher accuracy could be reached if the second structure is further refined and the shape context generator's parameters are fine-tuned in the future. / Under de senaste åren har bildsegmentering med hjälp av djupa neurala nätverk gjort stora framsteg. Att nå ett bra resultat med träning med en liten mängd data kvarstår emellertid som en utmaning. För att hitta ett bra sätt att förbättra noggrannheten i segmenteringen med begränsade datamängder så implementerade vi en ny segmentering för automatiska röntgenbilder av bröstkorgsdiagram baserat på tidigare forskning av Chunliang. Detta tillvägagångssätt använder djupt lärande neurala nätverk kombinerat med "shape context" information. I detta experiment skapade vi en ny nätverkstruktur genom omkonfiguration av U-nätverket till en 2-inputstruktur och förfinade pipeline processeringssteget där bilden och "shape contexten" var tränade tillsammans genom den nya nätverksmodellen genom iteration. Den föreslagna metoden utvärderades på dataset med 247 bröströntgenfotografier, och n-faldig korsvalidering användes för utvärdering. Resultatet visar att den föreslagna pipelinen jämfört med ursprungs U-nätverket når högre noggrannhet när de tränas med begränsade datamängder. De "begränsade" dataseten här hänvisar till 1-20 bilder inom det medicinska fältet. Ett bättre resultat med högre noggrannhet kan nås om den andra strukturen förfinas ytterligare och "shape context-generatorns" parameter finjusteras.
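A minimal PyTorch sketch of the two-input idea described in the English abstract above: the radiograph and the shape context map are encoded separately and fused before a shared U-Net body. The layer sizes and class name are illustrative, not the thesis architecture:

```python
import torch
import torch.nn as nn

class TwoInputStem(nn.Module):
    """Hypothetical fusion stem: encode image and shape-context map separately,
    concatenate the features, and project them for a downstream U-Net body."""
    def __init__(self, features: int = 16):
        super().__init__()
        self.image_enc = nn.Sequential(nn.Conv2d(1, features, 3, padding=1), nn.ReLU())
        self.shape_enc = nn.Sequential(nn.Conv2d(1, features, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(2 * features, features, kernel_size=1)

    def forward(self, image: torch.Tensor, shape_context: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.image_enc(image), self.shape_enc(shape_context)], dim=1)
        return self.fuse(fused)  # feed this into a standard U-Net encoder/decoder

# Usage sketch: two single-channel inputs of the same spatial size.
stem = TwoInputStem()
out = stem(torch.randn(1, 1, 256, 256), torch.randn(1, 1, 256, 256))
```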
35

CellsDeepNet: A Novel Deep Learning-Based Web Application for the Automated Morphometric Analysis of Corneal Endothelial Cells

Al-Waisy, A.S., Alruban, A., Al-Fahdawi, S., Qahwaji, Rami S.R., Ponirakis, G., Malik, R.A., Mohammed, M.A., Kadry, S. 15 March 2022 (has links)
The quantification of corneal endothelial cell (CEC) morphology using manual and semi-automatic software enables an objective assessment of corneal endothelial pathology. However, the procedure is tedious, subjective, and not widely applied in clinical practice. We have developed the CellsDeepNet system to automatically segment and analyse CEC morphology. The CellsDeepNet system uses Contrast-Limited Adaptive Histogram Equalization (CLAHE) to improve the contrast of the CEC images and reduce the effects of non-uniform illumination, the 2D Double-Density Dual-Tree Complex Wavelet Transform (2DDD-TCWT) to reduce noise, a Butterworth bandpass filter to enhance the CEC edges, and a moving average filter to adjust the brightness level. An improved version of U-Net was used to detect the boundaries of the CECs, regardless of CEC size. CEC morphology was measured as mean cell density (MCD, cells/mm²), mean cell area (MCA, µm²), mean cell perimeter (MCP, µm), polymegathism (coefficient of CEC size variation), and pleomorphism (percentage hexagonality coefficient). The CellsDeepNet system correlated highly significantly with the manual estimations for MCD (r = 0.94), MCA (r = 0.99), MCP (r = 0.99), polymegathism (r = 0.92), and pleomorphism (r = 0.86), with p
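Two of the preprocessing steps (CLAHE and the moving-average brightness adjustment) can be sketched with OpenCV and SciPy as below; the clip limit, tile grid, and window size are illustrative assumptions rather than the published settings:

```python
import cv2
import numpy as np
from scipy.ndimage import uniform_filter

def preprocess_cec_image(image_uint8: np.ndarray) -> np.ndarray:
    """Sketch of two CellsDeepNet-style preprocessing steps: contrast-limited
    adaptive histogram equalization, then a moving-average brightness filter."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(image_uint8)                              # CLAHE on an 8-bit image
    smoothed = uniform_filter(equalized.astype(np.float32), size=5)   # moving average
    return smoothed
```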
36

Finding Corresponding Regions In Different Mammography Projections Using Convolutional Neural Networks / Prediktion av Motsvarande Regioner i Olika Mammografiprojektioner med Faltningsnätverk

Eriksson, Emil January 2022 (has links)
Mammography screenings are performed regularly on women in order to detect early signs of breast cancer, which is the most common form of cancer. During an exam, X-ray images (called mammograms) are taken from two different angles and reviewed by a radiologist. If a suspicious lesion is found in one of the views, it is confirmed by finding the corresponding region in the other view. Finding the corresponding region is a non-trivial task, due to the different image projections of the breast and the different angles of compression used during the exam. This thesis explores the possibility of using deep learning, a data-driven approach, to solve the corresponding-regions problem. Specifically, a convolutional neural network (CNN) called U-Net is developed and trained on scanned mammograms, and evaluated on both scanned and digital mammograms. A model-based method called the arc model is developed for comparison. Results show that the best U-Net produced better results than the arc model on all evaluated metrics and succeeded in finding the corresponding area 83.9% of the time, compared to 72.6%. Generalization to digital images was excellent, achieving an even higher score of 87.6%, compared to 83.5% for the arc model.
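For context, a rough sketch of one common formulation of an arc-model baseline, in which the corresponding region is assumed to lie at roughly the same nipple-to-lesion distance in the other view. The geometry, band width, and function name below are assumptions for illustration, not necessarily the thesis's exact implementation:

```python
import numpy as np

def arc_band_mask(view_b_shape, nipple_a, lesion_a, nipple_b, band=20):
    """Hypothetical arc-model baseline: measure the nipple-to-lesion distance
    in view A and mark an annulus of that radius (±band pixels) around the
    nipple in view B as the candidate corresponding region."""
    radius = np.linalg.norm(np.asarray(lesion_a, float) - np.asarray(nipple_a, float))
    ys, xs = np.indices(view_b_shape)                     # pixel grid of view B
    dist = np.hypot(xs - nipple_b[0], ys - nipple_b[1])   # distance to nipple in view B
    return np.abs(dist - radius) <= band                  # boolean candidate mask
```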
37

Determination of Biomass in Shrimp-Farm using Computer Vision

Tammineni, Gowtham Chowdary 30 October 2023 (has links)
Automation in aquaculture is proving increasingly effective. The economic losses that farmers incur from high shrimp mortality can be reduced by ensuring the welfare of the animals. Shrimp are sensitive to even the smallest changes in farm conditions; such changes increase stress, which in turn degrades health and raises mortality. Human interference during feeding also stresses the animals and thereby affects mortality, so feeding is automated to ensure optimal farm efficiency. Both underfeeding and overfeeding affect shrimp growth, and biomass is a very helpful parameter for determining the right amount of feed. The primary focus of this project is the use of artificial intelligence (AI) to estimate the farm's biomass. The model uses cameras mounted above the tank in densely populated areas; these cameras monitor the farm, and the model estimates the biomass, from which the amount of feed to distribute in that particular area can be determined. The biomass is calculated from the number of detected shrimp and their average length. Reducing human interference in the biomass estimation improves animal health, making the process more sustainable and economical.
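A minimal sketch of how a biomass estimate could follow from the two detected quantities (count and average length), assuming a standard allometric length-weight relation w = a·L^b. The coefficients are placeholders and this exact formula is an assumption, not one stated in the abstract:

```python
def estimate_biomass_g(shrimp_count: int, mean_length_cm: float,
                       a: float = 0.01, b: float = 3.0) -> float:
    """Hypothetical biomass estimate: an assumed allometric length-weight
    relation gives the mean weight per shrimp, scaled by the detected count.
    The coefficients a and b must be fitted for the species and farm."""
    mean_weight_g = a * mean_length_cm ** b
    return shrimp_count * mean_weight_g
```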
38

Deep Learning for Point Detection in Images

Runow, Björn January 2020 (has links)
The main result of this thesis is a deep learning model named BearNet, which can be trained to detect an arbitrary number of objects as a set of points. The model is trained using the Weighted Hausdorff distance as loss function. BearNet has been applied and tested on two problems from industry: from an intensity image, detect the two pocket points of an EU-pallet, which an autonomous forklift could use when determining where to insert its forks; and from a depth image, detect the start, bend, and end points of a straw attached to a juice package, in order to help determine whether the straw has been attached correctly. In the development process of BearNet I took inspiration from the designs of U-Net, UNet++, and a high-resolution network named HRNet. Further, I used a dataset containing RGB images from a surveillance camera located inside a mall, on which the aim was to detect the head positions of all pedestrians. In an attempt to reproduce a result from another study, I found that the mall dataset suffers from training-set contamination when a model is trained, validated, and tested on it with random sampling. Hence, I propose that the mall dataset be evaluated with a sequential data-split strategy to limit the problem. I found that the BearNet architecture is well suited for both the EU-pallet and straw datasets, and that it can be successfully used on RGB, intensity, or depth images. On the EU-pallet and straw datasets, BearNet consistently produces point estimates within five and six pixels of ground truth, respectively. I also show that the straw dataset only constitutes a small subset of all the challenges that exist in the problem domain related to the attachment of a straw to a juice package, and that one therefore cannot train a robust deep learning model on it. As an example of this, models trained on the straw dataset cannot correctly handle samples in which no straw is visible.
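For orientation, a simplified (unweighted) averaged Hausdorff distance between predicted and ground-truth point sets in PyTorch; the thesis loss is the weighted variant, which operates on the network's output probability map rather than a discrete predicted point set, so this sketch only illustrates the underlying distance:

```python
import torch

def averaged_hausdorff(pred_pts: torch.Tensor, gt_pts: torch.Tensor) -> torch.Tensor:
    """Unweighted averaged Hausdorff distance between point sets of shape
    (N, 2) and (M, 2): mean nearest-neighbour distance in both directions."""
    d = torch.cdist(pred_pts.float(), gt_pts.float())   # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```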
39

Segmentace buněk pomocí konvolučních neuronových sítí / Cell segmentation using convolutional neural networks

Hrdličková, Alžběta January 2021 (has links)
This work examines the use of convolutional neural networks with a focus on semantic and instance segmentation of cells from microscopic images. The theoretical part contains a description of deep neural networks and a summary of widely used convolutional architectures for image segmentation. The practical part of the work is devoted to the creation of a convolutional neural network model based on the U-Net architecture. It also presents cell segmentation of the predicted images using three methods: thresholding, watershed, and random walker.
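A minimal sketch of the thresholding-plus-watershed post-processing step using scikit-image; the threshold and peak-distance parameters are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def instances_from_probability(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarise a U-Net probability map, distance-transform the foreground,
    and split touching cells with marker-based watershed (labelled output)."""
    mask = prob_map > threshold
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, labels=mask, min_distance=5)   # one marker per cell
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)
```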
40

U-net based deep learning architectures for object segmentation in biomedical images

Nahian Siddique (11219427) 04 August 2021 (has links)
U-Net is an image segmentation technique developed primarily for medical image analysis that can precisely segment images using a scarce amount of training data. These traits give U-Net high utility within the medical imaging community and have resulted in its extensive adoption as the primary tool for segmentation tasks in medical imaging. The success of U-Net is evident in its widespread use in nearly all major image modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-Net is largely a segmentation tool, there have been instances of its use in other applications. Given that U-Net's potential is still increasing, this review examines the numerous developments and breakthroughs in the U-Net architecture and provides observations on recent trends. We also discuss the many innovations that have advanced deep learning and how these tools facilitate U-Net. In addition, we review the different image modalities and application areas that have been enhanced by U-Net.

In recent years, deep learning for health care has been rapidly infiltrating and transforming medical fields thanks to advances in computing power, data availability, and algorithm development. In particular, U-Net, a deep learning technique, has achieved remarkable success in medical image segmentation and has become one of the premier tools in this area. While the accomplishments of U-Net and other deep learning algorithms are evident, many challenges remain in medical image processing before human-like performance is achieved. In this thesis, we propose a U-Net architecture that integrates residual skip connections and recurrent feedback with EfficientNet as a pretrained encoder. Residual connections help feature propagation in deep neural networks and significantly improve performance over networks with a similar number of parameters, while recurrent connections ameliorate gradient learning. We also propose a second model that utilizes densely connected layers to aid deeper neural networks, and a third model that incorporates fractal expansions to bypass diminishing gradients. EfficientNet is a family of powerful pretrained encoders that streamline neural network design. The use of EfficientNet as an encoder provides the network with robust feature extraction that can be used by the U-Net decoder to create highly accurate segmentation maps. The proposed networks are evaluated against state-of-the-art deep learning based segmentation techniques to demonstrate their superior performance.
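As a rough illustration of pairing a U-Net decoder with a pretrained EfficientNet encoder, a minimal sketch assuming the segmentation_models_pytorch package; the thesis models additionally add residual, recurrent, dense, or fractal blocks, which are not shown here:

```python
import torch
import segmentation_models_pytorch as smp

# Off-the-shelf U-Net decoder with a pretrained EfficientNet-B0 encoder
# (illustrative configuration; not the thesis's exact models).
model = smp.Unet(
    encoder_name="efficientnet-b0",   # pretrained EfficientNet backbone
    encoder_weights="imagenet",
    in_channels=1,                    # single-channel medical images
    classes=1,                        # binary segmentation map
)
model.eval()

x = torch.randn(1, 1, 256, 256)
with torch.no_grad():
    mask_logits = model(x)            # shape (1, 1, 256, 256)
```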
