21

Microarray image processing : a novel neural network framework

Zineddin, Bachar January 2011 (has links)
Due to the vast success of bioengineering techniques, a series of large-scale analysis tools has been developed to discover the functional organization of cells. Among them, cDNA microarray has emerged as a powerful technology that enables biologists to study thousands of genes simultaneously within an entire organism, and thus obtain a better understanding of the gene interaction and regulation mechanisms involved. Although microarray technology has been developed to offer high tolerances, there is considerable signal irregularity across the surface of the microarray image. Imperfections in the microarray image generation process introduce noise of many types, which contaminates the resulting image. These errors and noise propagate through, and can significantly affect, all subsequent processing and analysis. Therefore, to realize the potential of this technology it is crucial to obtain high-quality image data that indeed reflect the underlying biology in the samples. One of the key steps in extracting information from a microarray image is segmentation: identifying which pixels within the image represent which gene. This area of spotted microarray image analysis has received relatively little attention compared with the advances in subsequent analysis stages, yet the lack of advanced image analysis, including segmentation, results in sub-optimal data being passed to all downstream analysis methods. Although much recent research has addressed microarray image analysis and many methods have been proposed, some produce better results than others, and in general the most effective approaches require considerable processing time to handle an entire image. Furthermore, there has been little progress on developing sufficiently fast yet effective algorithms for microarray image segmentation using a highly sophisticated framework such as Cellular Neural Networks (CNNs). It is, therefore, the aim of this thesis to investigate and develop novel methods for processing microarray images. The goal is to produce results that outperform the currently available approaches in terms of PSNR, k-means and ICC measurements.
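
PSNR, one of the evaluation measures named above, has a standard closed form; a minimal NumPy sketch of how it is typically computed between a reference and a processed image (the random arrays below are stand-ins for real microarray data):

```python
import numpy as np

def psnr(reference: np.ndarray, processed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(np.float64) - processed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Illustrative usage with random stand-ins for a clean and a noisy spot image.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(clean + rng.normal(0, 5, size=clean.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(clean, noisy):.2f} dB")
```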
22

Adaptive biological image-guided radiation therapy in pharyngo-laryngeal squamous cell carcinoma

Geets, Xavier 28 April 2008 (has links)
In recent years, the impressive progress in imaging, computational and technological fields has made possible the emergence of image-guided radiation therapy (IGRT) and adaptive radiation therapy (ART). The accuracy in radiation dose delivery reached by intensity-modulated radiation therapy (IMRT) offers the possibility to increase locoregional dose intensity, potentially overcoming the poor tumor control achieved by standard approaches. However, before implementing such a technique in clinical routine, particular attention has to be paid to the definition and delineation of target volumes (TVs) to avoid inadequate dosage to TVs and organs at risk (OARs). In head and neck squamous cell carcinoma (HNSCC), the gross tumor volume (GTV) is typically defined on CT acquired prior to treatment. However, by providing functional information about the tumor, FDG-PET might advantageously complement the classical CT scan to better define the TVs. Similarly, re-imaging the tumor with the optimal imaging modality might account for the constantly changing anatomy and tumor shape occurring during the course of fractionated radiotherapy. Integrating this information into the treatment planning might ultimately lead to a much tighter dose distribution. From a methodological point of view, the delineation of TVs on anatomical or functional images is not a trivial task. Firstly, the poor soft-tissue contrast provided by CT results in large interobserver variability in GTV delineation. In this regard, we showed that the use of consistent delineation guidelines significantly improved consistency between observers, with both CT and MRI. Secondly, the intrinsic characteristics of PET images, including the blur effect and the high level of noise, make the detection of the tumor edges arduous. In this context, we developed specific image restoration tools, i.e. edge-preserving filters for denoising and deconvolution algorithms for deblurring. This procedure restores the image quality, allowing the use of gradient-based segmentation techniques. The method was validated on phantom and patient images, and proved to be more accurate and reliable than threshold-based methods. Using these segmentation methods, we showed that GTVs shrank significantly during radiotherapy in patients with HNSCC, whatever the imaging modality used (MRI, CT, FDG-PET). No clinically significant difference was found between CT and MRI, while FDG-PET provided significantly smaller volumes than those based on anatomical imaging. Refining the target volume delineation by means of functional and sequential imaging ultimately led to a more optimal dose distribution to TVs, with subsequent soft-tissue sparing. In conclusion, we demonstrated that multi-modality-based adaptive planning is feasible in HN tumors and potentially opens new avenues for dose-escalation strategies. As a high level of accuracy is required by such an approach, however, the delineation of TVs requires special care.
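
The restoration tools developed in the thesis are not reproduced here, but a minimal sketch of the general pipeline it describes (edge-preserving denoising, then gradient-based segmentation) can be assembled with scikit-image; the filter choice, seed thresholds and the stand-in image below are illustrative assumptions, not the authors' algorithms:

```python
import numpy as np
from skimage import data, filters, restoration, segmentation

image = data.coins().astype(float) / 255.0   # stand-in for a PET slice

# 1) Edge-preserving denoising (illustrative choice: total-variation filter).
denoised = restoration.denoise_tv_chambolle(image, weight=0.05)

# 2) The gradient magnitude highlights the restored object edges.
gradient = filters.sobel(denoised)

# 3) Watershed on the gradient image, seeded with background/object markers,
#    is one standard gradient-based segmentation.
markers = np.zeros_like(image, dtype=int)
markers[denoised < 0.3] = 1   # background seeds (threshold is illustrative)
markers[denoised > 0.6] = 2   # object seeds
labels = segmentation.watershed(gradient, markers)
print("object pixels:", int((labels == 2).sum()))
```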
23

A Probabilistic Approach to Image Feature Extraction, Segmentation and Interpretation

Pal, Chris January 2000 (has links)
This thesis describes a probabilistic approach to image segmentation and interpretation. The focus of the investigation is the development of a systematic way of combining color, brightness, texture and geometric features extracted from an image to arrive at a consistent interpretation for each pixel in the image. The contribution of this thesis is thus the presentation of a novel framework for the fusion of extracted image features, producing a segmentation of an image into relevant regions. Further, a solution to the sub-pixel mixing problem is presented, based on solving a probabilistic linear program. This work is specifically aimed at interpreting and digitizing multi-spectral aerial imagery of the Earth's surface. The features of interest for extraction are those of relevance to environmental management, monitoring and protection. The presented algorithms are suitable for use within a larger interpretive system. Some results are presented and contrasted with other techniques. The integration of these algorithms into a larger system is based firmly on a probabilistic methodology and the use of statistical decision theory to accomplish uncertain inference within the visual formalism of a graphical probability model.
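
The probabilistic linear program itself is not given in the abstract, but sub-pixel mixing is commonly posed as linear unmixing: find non-negative class fractions, summing to one, that best explain a pixel's spectrum as a mixture of class signatures. A minimal L1-residual sketch under those assumptions (SciPy; the signatures and spectrum are made up):

```python
import numpy as np
from scipy.optimize import linprog

# Columns of E are per-class spectral signatures (bands x classes); y is one pixel.
E = np.array([[0.10, 0.70, 0.30],
              [0.20, 0.60, 0.50],
              [0.80, 0.10, 0.40],
              [0.70, 0.20, 0.60]])
y = 0.5 * E[:, 0] + 0.3 * E[:, 1] + 0.2 * E[:, 2]  # known mixture, for checking

m, k = E.shape
# Variables: [f (k fractions), t (m residual bounds)]; minimize sum(t)
# subject to  -t <= E f - y <= t,  sum(f) = 1,  f >= 0,  t >= 0.
c = np.concatenate([np.zeros(k), np.ones(m)])
A_ub = np.block([[E, -np.eye(m)], [-E, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
A_eq = np.concatenate([np.ones(k), np.zeros(m)])[None, :]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (k + m))
print("estimated fractions:", np.round(res.x[:k], 3))  # ~ [0.5, 0.3, 0.2]
```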
24

Segmentace obrazů listů dřevin / Segmentation of images with leaves of woody species

Valchová, Ivana January 2016 (has links)
The thesis focuses on segmentation of images with leaves of woody species. The main aim was to investigate existing image segmentation methods, choose a method suitable for the given data and implement it. Inputs are scanned leaves and photographs of varying quality. The thesis summarizes the general methods of image segmentation and describes the algorithm that gave the best results. Based on the histogram, the algorithm decides whether the input is of sufficient quality to be segmented by the Otsu algorithm, or whether it should instead be segmented using the GrowCut algorithm. Next, the mask is improved by morphological closing and hole filling. Finally, only the largest object is kept. Results are illustrated using generated output images.
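
A minimal sketch of the Otsu branch of the pipeline just summarized (thresholding, morphological closing, hole filling, keeping the largest object) with scikit-image; the GrowCut fallback and the histogram quality test are omitted, and the assumption that the leaf is darker than the background is illustrative:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import color, filters, io, measure, morphology

image = io.imread("leaf.png")            # illustrative input path
gray = color.rgb2gray(image)

# Otsu thresholding (assuming the leaf is darker than the scanner background).
mask = gray < filters.threshold_otsu(gray)

# Clean up: morphological closing, then fill interior holes.
mask = morphology.binary_closing(mask, morphology.disk(3))
mask = ndi.binary_fill_holes(mask)

# Keep only the largest connected component (the leaf).
labels = measure.label(mask)
if labels.max() > 0:
    largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1
    mask = labels == largest
io.imsave("leaf_mask.png", (mask * 255).astype(np.uint8))
```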
25

Fast segmentation of the LV myocardium in real-time 3D echocardiography

Verhoek, Michael January 2011 (has links)
Heart disease is a major cause of death in western countries. In order to diagnose and monitor heart disease, 3D echocardiography is an important tool, as it provides a fast, relatively low-cost, portable and harmless way of imaging the moving heart. Segmentation of cardiac walls is an indispensable method of obtaining quantitative measures of heart function. However, segmentation of ultrasound images has its challenges: image quality is often relatively low, and current segmentation methods are often not fast. It is desirable to make the segmentation technique as fast as possible, making quantitative heart function measures available at the time of recording. In this thesis, we test two state-of-the-art fast segmentation techniques to address this issue; furthermore, we develop a novel technique for finding the best segmentation propagation strategy between points in time in a cardiac image sequence. The first fast method is Graph Cuts (GC), an energy minimisation technique that represents the image as a graph. We test this method on static 3D echocardiography to segment the myocardium, varying the importance of the regulariser function. We look at edge measures, position constraints and tissue characterisation, and find that GC is relatively fast and accurate. The second fast method is Random Forests (RFos), a discriminative classifier using binary decision trees, used in machine learning. To our knowledge, we are the first to test this method for myocardial segmentation on 2D and 3D static echocardiography. We investigate the number of trees, the image features used and some internal parameters, and compare with intensity thresholding. We conclude that RFos are very fast and more accurate than GC segmentation. The static RFo method is subsequently applied to all time frames. We describe a novel optical-flow-based propagation technique that improves the static results by propagating the results from well-performing time frames to less well-performing frames, and a learning algorithm that learns for each frame which propagation strategy is best. Furthermore, we look at the influence of the number of images and of the training data available per tree, and we compare against other methods that use motion information. Finally, we apply the same propagation learning method to the static GC results, concluding that the propagation method improves the static results in this case as well. We compare the dynamic GC results with the dynamic RFo results and find that RFos are more accurate and faster than GC.
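
A minimal sketch of per-pixel Random Forest segmentation in the spirit of the RFo experiments above (scikit-learn); the synthetic image and the simple intensity/gradient features are stand-ins for the thesis's echocardiographic data and features:

```python
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
image = ndi.gaussian_filter(rng.random((64, 64)), 3)   # synthetic "frame"
truth = image > image.mean()                           # synthetic labels

def pixel_features(img):
    """Stack simple per-pixel features: intensity, smoothed, gradient magnitude."""
    feats = [img,
             ndi.gaussian_filter(img, 1),
             ndi.gaussian_gradient_magnitude(img, 1)]
    return np.stack([f.ravel() for f in feats], axis=1)

X, y = pixel_features(image), truth.ravel()
# Train on a random subset of pixels, then predict the full frame.
idx = rng.choice(len(y), size=1000, replace=False)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[idx], y[idx])
pred = clf.predict(X).reshape(image.shape)
print("pixel accuracy:", (pred == truth).mean())
```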
26

Segmentation-based Retinal Image Analysis

Wu, Qian January 2019 (has links)
Context. Diabetic retinopathy is the most common cause of new cases of legal blindness in people of working age. Early diagnosis is the key to slowing the progression of the disease, thus preventing blindness. The retinal fundus image is an important basis for judging these retinal diseases, and with the development of technology, computer-aided diagnosis is widely used. Objectives. This thesis investigates whether there exist specific regions that could assist in better prediction of retinopathy, i.e., it aims to find the region of the fundus image that works best for retinopathy classification using computer vision and machine learning techniques. Methods. Experimentation was used as the research method. With image segmentation techniques, the fundus image was divided into regions to obtain an optic disc dataset, a blood vessel dataset, and an "other regions" (regions other than blood vessels and optic disc) dataset. These datasets, together with the original fundus image dataset, were tested on Random Forest (RF), Support Vector Machine (SVM) and Convolutional Neural Network (CNN) models. Results. The results on different models are inconsistent. Compared with the original fundus image, the blood vessel region exhibits the best performance on the SVM model, the other regions perform best on the RF model, while the original fundus image has higher prediction accuracy on the CNN model. Conclusions. The other-regions dataset has more predictive power than the original fundus image dataset on the RF and SVM models. On the CNN model, extracting regions from the fundus image does not significantly improve predictive performance compared with the entire fundus image.
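
A minimal sketch of the comparison protocol described in Methods (train RF and SVM on feature vectors from each region dataset and compare accuracies); the random feature matrices are placeholders for the real datasets, and the CNN branch is omitted:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder feature matrices and labels for each segmented-region dataset.
datasets = {name: (rng.random((200, 32)), rng.integers(0, 2, 200))
            for name in ["full_image", "optic_disc", "blood_vessel", "other_regions"]}

models = {"RF": RandomForestClassifier(n_estimators=100, random_state=0),
          "SVM": SVC(kernel="rbf")}
for region, (X, y) in datasets.items():
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5).mean()  # 5-fold CV accuracy
        print(f"{region:>14} | {name}: {acc:.3f}")
```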
27

Skin lesion segmentation and classification using deep learning

Unknown Date (has links)
Melanoma, a severe and life-threatening skin cancer, is commonly misdiagnosed or left undiagnosed. Advances in artificial intelligence, particularly deep learning, have enabled the design and implementation of intelligent solutions for skin lesion detection and classification from visible-light images, which are capable of performing early and accurate diagnosis of melanoma and other types of skin disease. This work presents solutions to the problems of skin lesion segmentation and classification. The proposed classification approach leverages convolutional neural networks and transfer learning. Additionally, the impact of segmentation (i.e., isolating the lesion from the rest of the image) on the performance of the classifier is investigated, leading to the conclusion that there is an optimal region between “dermatologist segmented” and “not segmented” that produces the best results, suggesting that the context around a lesion is helpful as the model is trained and built. Generative adversarial networks are also explored, in the context of extending limited datasets by creating synthetic samples of skin lesions. Finally, the robustness and security of skin lesion classifiers using convolutional neural networks are examined and stress-tested with adversarial examples. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
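
A minimal sketch of the transfer-learning setup described above (a pretrained convolutional backbone, frozen, with a new classification head); torchvision's ResNet-18 is an illustrative stand-in, as the abstract does not name the backbone:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and replace its final layer with a
# head for the lesion classes (binary here: melanoma vs. benign).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                 # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)   # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch of 224x224 RGB crops.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", float(loss))
```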
28

vU-net: edge detection in time-lapse fluorescence live cell images based on convolutional neural networks

Zhang, Xitong 23 April 2018 (has links)
Time-lapse fluorescence live-cell imaging has been widely used to study various dynamic processes in cell biology. As the initial step of image analysis, it is important to localize and segment cell edges with high accuracy. However, fluorescence live-cell images usually suffer from low contrast, noise and uneven illumination in comparison to immunofluorescence images. Deep convolutional neural networks, which learn features directly from training images, have been successfully applied to natural image analysis problems. However, the limited amount of training samples prevents their routine application in fluorescence live-cell image analysis. In this thesis, by exploiting the temporal coherence in time-lapse movies together with the VGG-16 [1] pre-trained model, we demonstrate that we can train a deep neural network using a limited number of image frames to segment entire time-lapse movies. We propose a novel framework, vU-net, which combines the advantages of VGG-16 [1] in feature extraction and U-net [2] in feature reconstruction. Moreover, we design an auxiliary convolutional block at the end of the architecture to enhance edge detection. We evaluate our framework using the dice coefficient and the distance between the predicted edge and the ground truth, on high-resolution image datasets of an adhesion marker, paxillin, acquired by a Total Internal Reflection Fluorescence (TIRF) microscope. Our results demonstrate that, on difficult datasets: (i) the testing dice coefficient of vU-net is 3.2% higher than that of U-net with the same amount of training images; (ii) vU-net can achieve the best prediction results of U-net with one third of the training images needed by U-net; and (iii) vU-net produces more robust predictions than U-net. Therefore, vU-net can be applied more practically to challenging live-cell movies than U-net, since it requires smaller training sets while achieving accurate segmentation.
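
The dice coefficient used for evaluation has a short closed form; a minimal NumPy sketch (the two masks are illustrative):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(f"dice: {dice_coefficient(a, b):.4f}")  # 2*9 / (16+16) -> 0.5625
```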
29

Segmentação de imagens coloridas baseada na mistura de cores e redes neurais / Segmentation of color images based on color mixture and neural networks

Diego Rafael Moraes 26 March 2018 (has links)
The Color Mixture is a technique for color image segmentation that creates an "Artificial Retina" based on the mixture of colors and quantizes the image by projecting all colors onto 256 planes in the RGB cube. It then traverses all of those planes with a Gaussian classifier, aiming to reach the image segmentation. However, the current approach has a limitation: the classifier solves exclusively binary problems. Inspired by this "Artificial Retina" of the Color Mixture, this thesis defines a new "Artificial Retina" and proposes replacing the current classifier with an artificial neural network for each of the 256 planes, with the goal of improving the current performance and extending the application to multiclass and multiscale problems. This new approach is called the Neural Color Mixture. To validate the proposal, statistical analyses were carried out in two areas of application. First, for human skin segmentation, its results were compared with eight known methods using four datasets of different sizes; the segmentation accuracy of the approach proposed in this thesis surpassed that of all the compared methods. The second practical evaluation of the proposed model was carried out on satellite images, given their wide applicability to urban and rural areas. For this purpose, a database of satellite images was created and made available, extracted from Google Earth, covering ten different regions of the planet at four zoom scales (500 m, 1000 m, 1500 m and 2000 m) and containing at least four classes of interest: tree, soil, street and water. The proposal was compared with a multilayer perceptron neural network (ANN-MLP) and a Support Vector Machine (SVM) in four experiments, and again the proposal was superior. We conclude that the new proposal can be used for multiclass and multiscale color image segmentation problems, and that its use can possibly be extended to any application, as it involves a training phase in which it adapts to the problem.
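
A minimal sketch of the per-plane training idea, under one plausible reading of the quantization (indexing each pixel by an 8-bit value, here the rounded mean of R, G and B — an assumption, not necessarily the thesis's exact projection), with a small scikit-learn network per occupied plane as the abstract proposes:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=(5000, 3))        # RGB samples
labels = (pixels[:, 0] > pixels[:, 2]).astype(int)   # placeholder classes

# Assumed quantization: each pixel is routed to one of 256 planes, indexed
# by the rounded mean of (R, G, B). The thesis's actual projection may differ.
planes = np.rint(pixels.mean(axis=1)).astype(int)

classifiers = {}
for p in np.unique(planes):
    X, y = pixels[planes == p], labels[planes == p]
    if len(np.unique(y)) < 2:
        continue  # skip planes holding a single class
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=300)
    classifiers[p] = clf.fit(X, y)

# Classify a new pixel by routing it to its plane's network.
px = np.array([[120, 60, 30]])
plane = int(np.rint(px.mean()))
if plane in classifiers:
    print("class:", classifiers[plane].predict(px)[0])
```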
30

ScatterNet hybrid frameworks for deep learning

Singh, Amarjot January 2019 (has links)
Image understanding is the task of interpreting images by effectively solving the individual tasks of object recognition and semantic image segmentation. An image understanding system must have the capacity to distinguish between similar-looking image regions while being invariant in its response to regions that have been altered by appearance-altering transformations. The fundamental challenge for any such system lies in this simultaneous requirement for both invariance and specificity. Many image understanding systems have been proposed that capture geometric properties such as shapes, textures, motion and 3D perspective projections using filtering, non-linear modulus and pooling operations. Deep learning networks instead ignore these geometric considerations and compute descriptors with suitable invariance and stability to geometric transformations using (end-to-end) learned multi-layered network filters. In recent years these deep learning networks have come to dominate the previously separate fields of research in machine learning, computer vision, natural language understanding and speech recognition. Despite the success of these deep networks, there remains a fundamental lack of understanding in their design and optimization, which makes them difficult to develop. Also, training these networks requires large labeled datasets, which in numerous applications may not be available. In this dissertation, we propose the ScatterNet Hybrid Framework for Deep Learning, inspired by the circuitry of the visual cortex. The framework uses a hand-crafted front-end, an unsupervised-learning-based middle section, and a supervised back-end to rapidly learn hierarchical features from unlabelled data. Each layer in the proposed framework is automatically optimized to produce the desired computationally efficient architecture. The term `Hybrid' is coined because the framework uses both unsupervised and supervised learning. We propose two hand-crafted front-ends that can extract locally invariant features from the input signals. Next, two ScatterNet Hybrid Deep Learning (SHDL) networks (one generative and one deterministic) are introduced by combining the proposed front-ends with two unsupervised learning modules that learn hierarchical features. These hierarchical features are finally used by a supervised learning module to solve the task of either object recognition or semantic image segmentation. The proposed front-ends have also been shown to improve the performance and learning of current deep supervised learning networks (VGG, NIN, ResNet) with reduced computing overhead.
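
A minimal sketch of the hybrid idea (a fixed, hand-crafted filter-bank front-end with modulus and pooling, feeding a supervised back-end); the Gabor bank and logistic-regression back-end are illustrative stand-ins, not the proposed SHDL networks:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gabor_kernel
from sklearn.linear_model import LogisticRegression

def front_end(image, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Hand-crafted front-end: fixed Gabor bank -> complex modulus -> pooling."""
    feats = []
    for theta in thetas:
        kernel = gabor_kernel(frequency=0.25, theta=theta)
        real = ndi.convolve(image, np.real(kernel), mode="reflect")
        imag = ndi.convolve(image, np.imag(kernel), mode="reflect")
        feats.append(np.hypot(real, imag).mean())  # modulus + average pooling
    return np.array(feats)

# Toy two-class problem: vertical vs. horizontal stripes.
rng = np.random.default_rng(0)
def stripes(vertical):
    img = np.zeros((32, 32))
    if vertical:
        img[:, ::4] = 1.0
    else:
        img[::4, :] = 1.0
    return img + 0.1 * rng.standard_normal((32, 32))

X = np.array([front_end(stripes(v)) for v in ([True] * 20 + [False] * 20)])
y = np.array([1] * 20 + [0] * 20)
clf = LogisticRegression().fit(X, y)  # supervised back-end
print("training accuracy:", clf.score(X, y))
```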
