31. Rekonstrukce řídce vzorkovaného obrazu pomocí hlubokého učení / Reconstruction of Sparse Sampled Images with Deep Learning. Le, Hoang Anh, January 2021.
The main goal of this thesis was to improve the reconstruction quality of sparsely sampled microscopic images using neural networks. The thesis covers various approaches to image reconstruction and describes the implementations that were used. The implementations are evaluated on the quality of the reconstruction, but also on segmentation, which could be their main application.
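As a hedged illustration of how reconstruction quality is often scored in this setting (the abstract does not name its metrics, so PSNR and SSIM here are assumptions, not the thesis' own evaluation), a minimal sketch with scikit-image might look like this:

```python
# Minimal sketch of reconstruction-quality scoring, assuming PSNR and SSIM
# are the metrics of interest; the thesis itself may use different ones.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score_reconstruction(reference, reconstructed):
    """Compare a fully sampled reference image with a network output."""
    reference = reference.astype(np.float64)
    reconstructed = reconstructed.astype(np.float64)
    data_range = reference.max() - reference.min()
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=data_range)
    ssim = structural_similarity(reference, reconstructed, data_range=data_range)
    return psnr, ssim
```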
32. Advanced UNet for 3D Lung Segmentation and Applications. Kadia, Dhaval Dilip, 18 May 2021.
No description available.
33. Detection of Fat-Water Inversions in MRI Data With Deep Learning Methods. Hellgren, Lisa; Asketun, Fanny, January 2021.
Magnetic resonance imaging (MRI) is a widely used medical imaging technique for examinations of the body. However, artifacts are a common problem that must be handled to allow reliable diagnoses and to avoid drawing inaccurate conclusions from the data. Magnetic resonance (MR) images acquired with a Dixon sequence provide two channels with separate fat and water content. Fat-water inversions, also called swaps, are a common artifact of this method in which voxels from the two channels are swapped, producing incorrect data. This thesis investigates the possibility of using deep learning methods for automatic detection of swaps in MR volumes. The data used in this thesis are MR volumes from UK Biobank, processed by AMRA Medical. Segmentation masks of complicated swaps are created by operators who manually annotate the swap, but only if the affected regions influence subsequent measurements. The segmentation masks are therefore not fully reliable, and additional synthesized swaps were created. Two different deep learning approaches were investigated: a reconstruction-based method and a segmentation-based method. The reconstruction-based networks were trained to reconstruct a volume as similar as possible to the input volume but without any swaps. When such a network is tested on a volume with a swap, the location of the swap can be estimated from the reconstructed volume with post-processing methods; autoencoders are an example of a reconstruction-based network. The segmentation-based models were trained to segment a swap directly from the input volume, thus using volumes with swaps both during training and testing. The segmentation-based networks were inspired by the U-Net. The performance of the models from both approaches was evaluated on data with real and synthetic swaps using the Dice coefficient, precision, and recall as metrics. The results show that the reconstruction-based models are not suitable for swap detection: difficulties in finding the right architecture resulted in poor reconstructions and unreliable predictions. Further investigation of different post-processing methods, architectures, and hyperparameters might improve swap detection. The segmentation-based models are robust, with reliable detections independent of the size of the swaps, despite being trained on data with synthesized swaps. Their results look very promising and could probably serve as an automated method for swap detection after some further fine-tuning of the parameters.
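A minimal sketch of the three evaluation metrics named above (Dice coefficient, precision, recall), computed on binary masks; the array names and shapes are assumptions, not AMRA's actual pipeline:

```python
# Hedged sketch: overlap metrics between a predicted swap mask and the
# reference mask. Works for 2D or 3D boolean arrays of the same shape.
import numpy as np

def swap_detection_metrics(pred_mask, true_mask, eps=1e-8):
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    tp = np.logical_and(pred, true).sum()    # correctly detected swap voxels
    fp = np.logical_and(pred, ~true).sum()   # false alarms
    fn = np.logical_and(~pred, true).sum()   # missed swap voxels
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return dice, precision, recall
```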
34. Automatic classification of fish and bubbles at pixel-level precision in multi-frequency acoustic echograms using U-Net convolutional neural networks. Slonimer, Alex, 05 April 2022.
Multi-frequency backscatter acoustic profilers (echosounders) are used to measure biological and physical phenomena in the ocean in ways that are not possible with optical methods. Echosounders are commonly used on ocean observatories and by commercial fisheries but require significant manual effort to classify species of interest within the collected echograms. The work presented in this thesis tackles the challenging task of automating the identification of fish and other phenomena in echosounder data, with specific application to aggregations of juvenile salmon, schools of herring, and bubbles of air that have been mixed into the water.
U-Net convolutional neural networks (CNNs) are used to accomplish this task by identifying classes at the pixel level. The data considered here were collected in Okisollo Channel on the coast of British Columbia, Canada, using an Acoustic Zooplankton and Fish Profiler at four frequencies (67.5, 125, 200, and 455 kHz). The entrainment of air bubbles and the behaviour of fish are both governed by the surrounding physical environment. To improve the classification, simulated channels for water depth and solar elevation angle (a proxy for sunlight) are used to encode the CNNs with information about the environment, providing spatial and temporal context. Manual annotation of echograms at the pixel level is a challenging process, and a custom application was developed to aid it. A relatively small set of annotations was created and used to train the CNNs. During training, the echogram data are divided into randomly spaced square tiles to encode the models with robust features, and into overlapping tiles for added redundancy during classification. This is done without removing noise from the data, ensuring broad applicability. This approach proves highly successful, as evidenced by the best-performing U-Net model producing F1 scores of 93.0%, 87.3% and 86.5% for the herring, salmon, and bubble classes, respectively. These models also achieve promising results when applied to echogram data with coarser resolution.
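A hedged sketch of the general idea of appending simulated environment channels and cutting the echogram into square tiles; the array shapes, names, and tile size are illustrative assumptions, not the thesis code:

```python
# Hedged sketch: stack four echosounder frequencies with simulated depth
# and solar-elevation channels, then sample square training tiles.
import numpy as np

def build_input(echograms, depth_map, solar_angle_map):
    """echograms: (4, H, W) backscatter at 67.5/125/200/455 kHz;
    depth_map, solar_angle_map: (H, W) simulated context channels."""
    context = np.stack([depth_map, solar_angle_map])       # (2, H, W)
    return np.concatenate([echograms, context], axis=0)    # (6, H, W)

def random_tiles(volume, labels, tile=128, n_tiles=64, seed=None):
    """Yield randomly placed square tiles from a (C, H, W) input."""
    rng = np.random.default_rng(seed)
    _, h, w = volume.shape
    for _ in range(n_tiles):
        r = rng.integers(0, h - tile + 1)
        c = rng.integers(0, w - tile + 1)
        yield volume[:, r:r + tile, c:c + tile], labels[r:r + tile, c:c + tile]
```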
One goal in fisheries acoustics is to detect distinct schools of fish. Following the initial pixel level classification, the results from the best performing U-Net model are fed through a heuristic module, inspired by traditional fisheries methods, that links connected components of identified fish (school candidates) into distinct school objects. The results are compared to the outputs from a recent study that relied on a Mask R-CNN architecture to apply instance segmentation for classifying fish schools. It is demonstrated that the U-Net/heuristic hybrid technique improves on the Mask R-CNN approach by a small amount for the classification of herring schools, and by a large amount for aggregations of juvenile salmon (improvement in mean average precision from 24.7% to 56.1%).
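A hedged sketch of the connected-component step of such a heuristic: group pixels classified as fish into candidate school objects and keep those above a minimum size. The 8-connectivity and the size threshold are assumptions, not the thesis' actual module:

```python
# Hedged sketch: link connected components of fish pixels into school
# candidates using scipy's labelling.
import numpy as np
from scipy import ndimage as ndi

def school_candidates(fish_mask, min_pixels=50):
    """fish_mask: boolean (H, W) map of pixels classified as fish."""
    structure = np.ones((3, 3), dtype=bool)          # 8-connected neighbourhood
    labeled, n = ndi.label(fish_mask, structure=structure)
    schools = []
    for i in range(1, n + 1):
        component = labeled == i
        if component.sum() >= min_pixels:            # drop tiny speckles
            schools.append(component)
    return schools
```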
35. U-Net ship detection in satellite optical imagery. Smith, Benjamin, 05 October 2020.
Deep learning ship detection in satellite optical imagery suffers from false positives caused by clouds, landmasses, and man-made objects that interfere with correctly classifying ships. A custom U-Net is implemented to address this issue, aiming to capture more features and thereby improve class accuracy. The model is trained with two different system architectures: a single-node architecture and a parameter-server variant whose workers act as a boosting mechanism. To extend this effort, a refinement method based on offline hard example mining aims to improve the accuracy of the trained models on both the validation and target datasets; however, it results in over-correction and a decrease in accuracy. The single-node architecture reaches 92% class accuracy on the validation dataset and 68% on the target dataset, exceeding the class accuracy reported in related works, which reached up to 88%. The parameter-server variant reaches a class accuracy of 86% on the validation set and 73% on the target dataset. The custom U-Net is able to achieve acceptable and high class accuracy on a subset of the training data, keeping training time and cost low in cloud-based solutions.
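A hedged sketch of offline hard example mining as described above: score each training sample with the current model's loss, then oversample the hardest fraction in the next training round. The loss array, the 25% fraction, and the boost factor are illustrative assumptions:

```python
# Hedged sketch of offline hard example mining on precomputed per-sample
# losses; framework-agnostic, operating only on the loss values.
import numpy as np

def hard_example_indices(per_sample_losses, hard_fraction=0.25):
    """Return indices of the hardest fraction of training samples."""
    losses = np.asarray(per_sample_losses)
    k = max(1, int(len(losses) * hard_fraction))
    return np.argsort(losses)[::-1][:k]

def boosted_sampling_weights(per_sample_losses, hard_fraction=0.25, boost=3.0):
    """Give hard examples extra sampling weight instead of discarding easy ones."""
    weights = np.ones(len(per_sample_losses))
    weights[hard_example_indices(per_sample_losses, hard_fraction)] = boost
    return weights / weights.sum()
```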
36. CellsDeepNet: A Novel Deep Learning-Based Web Application for the Automated Morphometric Analysis of Corneal Endothelial Cells. Al-Waisy, A.S.; Alruban, A.; Al-Fahdawi, S.; Qahwaji, Rami S.R.; Ponirakis, G.; Malik, R.A.; Mohammed, M.A.; Kadry, S., 15 March 2022.
The quantification of corneal endothelial cell (CEC) morphology using manual and semi-automatic software enables an objective assessment of corneal endothelial pathology. However, the procedure is tedious, subjective, and not widely applied in clinical practice. We have developed the CellsDeepNet system to automatically segment and analyse CEC morphology. The CellsDeepNet system uses Contrast-Limited Adaptive Histogram Equalization (CLAHE) to improve the contrast of the CEC images and reduce the effects of non-uniform image illumination, a 2D Double-Density Dual-Tree Complex Wavelet Transform (2DDD-TCWT) to reduce noise, a Butterworth bandpass filter to enhance the CEC edges, and a moving-average filter to adjust the brightness level. An improved version of U-Net was used to detect the boundaries of the CECs, regardless of CEC size. CEC morphology was measured as mean cell density (MCD, cells/mm²), mean cell area (MCA, µm²), mean cell perimeter (MCP, µm), polymegathism (coefficient of CEC size variation), and pleomorphism (percentage hexagonality coefficient). The CellsDeepNet system correlated highly significantly with the manual estimations for MCD (r = 0.94), MCA (r = 0.99), MCP (r = 0.99), polymegathism (r = 0.92), and pleomorphism (r = 0.86), with p
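A hedged sketch of how such morphometric measures can be derived from a labelled cell mask (one integer label per segmented cell). The pixel size handling is an assumption, and pleomorphism is omitted because it requires cell-neighbour counting:

```python
# Hedged sketch: MCD, MCA, MCP and polymegathism from a labelled CEC mask.
import numpy as np
from skimage.measure import regionprops

def cec_morphometry(label_image, um_per_pixel):
    """label_image: 2D integer array of cell labels; um_per_pixel: pixel pitch in µm."""
    props = regionprops(label_image)
    areas = np.array([p.area for p in props]) * um_per_pixel ** 2        # µm² per cell
    perimeters = np.array([p.perimeter for p in props]) * um_per_pixel   # µm per cell
    field_area_mm2 = label_image.size * (um_per_pixel / 1000.0) ** 2     # imaged area
    mcd = len(props) / field_area_mm2           # mean cell density, cells/mm²
    mca = areas.mean()                          # mean cell area, µm²
    mcp = perimeters.mean()                     # mean cell perimeter, µm
    polymegathism = areas.std() / areas.mean()  # coefficient of cell-size variation
    return mcd, mca, mcp, polymegathism
```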
37. Finding Corresponding Regions In Different Mammography Projections Using Convolutional Neural Networks / Prediktion av Motsvarande Regioner i Olika Mammografiprojektioner med Faltningsnätverk. Eriksson, Emil, January 2022.
Mammography screenings are performed regularly on women in order to detect early signs of breast cancer, the most common form of cancer. During an exam, X-ray images (called mammograms) are taken from two different angles and reviewed by a radiologist. If the radiologist finds a suspicious lesion in one of the views, they confirm it by finding the corresponding region in the other view. Finding the corresponding region is a non-trivial task, due to the different image projections of the breast and the different angles of compression used during the exam. This thesis explores the possibility of using deep learning, a data-driven approach, to solve the corresponding-regions problem. Specifically, a convolutional neural network (CNN) called U-Net is developed and trained on scanned mammograms, and evaluated on both scanned and digital mammograms. A model-based method called the arc model is developed for comparison. Results show that the best U-Net produced better results than the arc model on all evaluated metrics and succeeded in finding the corresponding area 83.9% of the time, compared with 72.6%. Generalization to digital images was excellent, achieving an even higher score of 87.6%, compared with 83.5% for the arc model.
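One way a "found the corresponding area" rate like the figures above might be computed is to check whether the predicted region in the other view covers the annotated lesion centre. The sketch below is an assumed metric for illustration, not the thesis' actual evaluation code:

```python
# Hedged sketch of a hit-rate metric: a prediction counts as a success if
# the predicted region mask contains the ground-truth lesion centre.
import numpy as np

def hit_rate(predicted_masks, lesion_centres):
    """predicted_masks: list of boolean (H, W) arrays, one per case;
    lesion_centres: list of (row, col) annotated centres in the other view."""
    hits = 0
    for mask, (r, c) in zip(predicted_masks, lesion_centres):
        if mask[int(r), int(c)]:
            hits += 1
    return hits / len(predicted_masks)
```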
38. Determination of Biomass in Shrimp-Farm using Computer Vision. Tammineni, Gowtham Chowdary, 30 October 2023.
Automation in aquaculture is proving increasingly effective. The economic losses that aquaculture farmers suffer from high shrimp mortality can be reduced by ensuring the welfare of the animals. Shrimp health can decline with even the slightest change in farm conditions: because shrimps are highly sensitive, small changes increase stress, which degrades health and raises mortality. Human interference during feeding likewise induces stress and thereby affects mortality, so to ensure optimal farm efficiency the feeding of the shrimps is automated. Underfeeding and overfeeding also affect shrimp growth, and biomass is a very useful parameter for determining the right amount of feed.
The primary focus of this project is the use of artificial intelligence (AI) to estimate the biomass in the farm. The model uses cameras mounted above the tank in densely populated areas; these cameras monitor the farm, and the model estimates the biomass, from which the amount of feed to distribute in that particular area can be derived. The biomass of the shrimps is calculated from the number of detected shrimps and their average length. With reduced human interference in calculating the biomass, the health of the animals improves, making the process more sustainable and economical.
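A hedged sketch of turning detections into a biomass estimate via a standard length-weight relationship W = a · L^b, a common approach in aquaculture; the coefficients a and b below are placeholders, not species-specific values from the thesis:

```python
# Hedged sketch: total biomass from detected shrimp lengths using a
# length-weight relationship with placeholder coefficients.
def estimate_biomass(shrimp_lengths_cm, a=0.01, b=3.0):
    """shrimp_lengths_cm: lengths (cm) of all shrimps detected in the frame."""
    weights_g = [a * length ** b for length in shrimp_lengths_cm]
    return sum(weights_g)   # total biomass in grams for the imaged area

# Example: 120 detected shrimps averaging 7 cm
# estimate_biomass([7.0] * 120) gives the estimate under these placeholders.
```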
39. Deep Learning for Point Detection in Images. Runow, Björn, January 2020.
The main result of this thesis is a deep learning model named BearNet, which can be trained to detect an arbitrary number of objects as a set of points. The model is trained using the Weighted Hausdorff distance as the loss function. BearNet has been applied to and tested on two problems from industry: from an intensity image, detect the two pocket points of an EU-pallet, which an autonomous forklift could use when determining where to insert its forks; and from a depth image, detect the start, bend and end points of a straw attached to a juice package, in order to help determine whether the straw has been attached correctly. In the development of BearNet I took inspiration from the designs of U-Net, UNet++ and a high-resolution network named HRNet. Further, I used a dataset containing RGB images from a surveillance camera located inside a mall, on which the aim was to detect the head positions of all pedestrians. In an attempt to reproduce a result from another study, I found that the mall dataset suffers from training-set contamination when a model is trained, validated, and tested on it with random sampling. Hence, I propose that the mall dataset be evaluated with a sequential data-split strategy to limit the problem. I found that the BearNet architecture is well suited for both the EU-pallet and straw datasets, and that it can be successfully used on RGB, intensity or depth images. On the EU-pallet and straw datasets, BearNet consistently produces point estimates within five and six pixels of ground truth, respectively. I also show that the straw dataset covers only a small subset of the challenges that exist in the problem domain of attaching a straw to a juice package, and that one therefore cannot train a robust deep learning model on it alone. As an example, models trained on the straw dataset cannot correctly handle samples in which no straw is visible.
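A hedged sketch of how a point-detection error in pixels (as in the "within five and six pixels" figures) can be measured: match predicted points to ground-truth points with the Hungarian algorithm and average the matched distances. This is an assumed evaluation, not BearNet's own code:

```python
# Hedged sketch: mean pixel distance between matched predicted and
# ground-truth points via optimal assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def mean_point_error(pred_points, gt_points):
    """pred_points, gt_points: (N, 2) arrays of (row, col) coordinates."""
    cost = cdist(np.asarray(pred_points, float), np.asarray(gt_points, float))
    rows, cols = linear_sum_assignment(cost)   # one-to-one matching
    return cost[rows, cols].mean()             # mean distance over matched pairs
```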
40. Segmentace buněk pomocí konvolučních neuronových sítí / Cell segmentation using convolutional neural networks. Hrdličková, Alžběta, January 2021.
This work examines the use of convolutional neural networks with a focus on semantic and instance segmentation of cells in microscopic images. The theoretical part contains a description of deep neural networks and a summary of convolutional architectures widely used for image segmentation. The practical part of the work is devoted to the creation of a convolutional neural network model based on the U-Net architecture. It also covers the segmentation of cells in the predicted images using three methods, namely thresholding, the watershed, and the random walker algorithm.
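A hedged sketch of one of the three post-processing routes mentioned (the watershed): split the thresholded U-Net probability map into individual cell instances. The threshold and seed spacing are assumptions, not the thesis' settings:

```python
# Hedged sketch: instance segmentation of a predicted cell probability map
# with a marker-based watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def cells_from_probability(prob_map, threshold=0.5, min_distance=5):
    mask = prob_map > threshold                           # binarize the prediction
    distance = ndi.distance_transform_edt(mask)           # peaks near cell centres
    coords = peak_local_max(distance, min_distance=min_distance, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, markers, mask=mask)       # labelled cell instances
```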