31

Atlas-based Segmentation of Temporal Bone Anatomy

Liang, Tong 28 July 2017 (has links)
No description available.
32

Microarray image processing : a novel neural network framework

Zineddin, Bachar January 2011 (has links)
Due to the vast success of bioengineering techniques, a series of large-scale analysis tools has been developed to discover the functional organization of cells. Among them, cDNA microarray has emerged as a powerful technology that enables biologists to study thousands of genes simultaneously within an entire organism, and thus obtain a better understanding of the gene interaction and regulation mechanisms involved. Although microarray technology has been developed to offer high tolerances, considerable signal irregularity remains across the surface of the microarray image. Imperfections in the image generation process introduce noise of many types, which contaminates the resulting image. These errors and noise propagate through, and can significantly affect, all subsequent processing and analysis. To realize the potential of the technology, it is therefore crucial to obtain high-quality image data that truly reflects the underlying biology in the samples. One of the key steps in extracting information from a microarray image is segmentation: identifying which pixels within the image represent which gene. This area of spotted microarray image analysis has received relatively little attention compared with the advances in subsequent analysis stages, yet the lack of advanced image analysis, including segmentation, means that sub-optimal data feed all downstream analysis methods. Although much recent research has addressed microarray image analysis and many methods have been proposed, some methods produce better results than others, and the most effective approaches generally require considerable run time to process an entire image. Furthermore, there has been little progress on sufficiently fast yet effective algorithms for segmenting microarray images using a highly sophisticated framework such as Cellular Neural Networks (CNNs). The aim of this thesis is therefore to investigate and develop novel methods for processing microarray images, with the goal of producing results that outperform the currently available approaches in terms of PSNR, k-means and ICC measurements.
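For readers unfamiliar with the framework named in this abstract, the sketch below shows a toy Chua-Yang cellular neural network applied to a small binary image. It is not the thesis's framework: the templates shown implement the standard CNN edge-detection task (feedback template A = 0, a Laplacian-like control template B, bias z = -1), and the Euler step size and test image are illustrative assumptions.

```python
# Toy Chua-Yang cellular neural network (CNN) doing edge detection.
# Templates are the standard EDGE templates; not the thesis's framework.
import numpy as np
from scipy.ndimage import convolve

def cnn_edge_detect(u, steps=100, dt=0.1):
    """u: input image with values in [-1, 1] (white = -1, black = +1)."""
    A = np.zeros((3, 3))                                  # feedback template (none)
    B = np.array([[-1., -1., -1.],
                  [-1.,  8., -1.],
                  [-1., -1., -1.]])                       # control template
    z = -1.0                                              # bias
    x = np.zeros_like(u, dtype=float)                     # cell states
    Bu = convolve(u, B, mode="constant", cval=-1.0) + z   # fixed input contribution
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))         # piecewise-linear output
        # Euler step of the CNN state equation dx/dt = -x + A*y + B*u + z
        x = x + dt * (-x + convolve(y, A, mode="constant") + Bu)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

if __name__ == "__main__":
    img = -np.ones((16, 16))                 # white background
    img[4:12, 4:12] = 1.0                    # black square
    edges = cnn_edge_detect(img)
    print(int((edges > 0).sum()), "cells marked as edges")  # border of the square
```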
33

Adaptive biological image-guided radiation therapy in pharyngo-laryngeal squamous cell carcinoma

Geets, Xavier 28 April 2008 (has links)
In recent years, the impressive progress made in imaging, computational and technological fields has made possible the emergence of image-guided radiation therapy (IGRT) and adaptive radiation therapy (ART). The accuracy in radiation dose delivery reached by IMRT offers the possibility to increase locoregional dose intensity, potentially overcoming the poor tumor control achieved by standard approaches. However, before implementing such a technique in clinical routine, particular attention has to be paid to the target volume definition and delineation procedures to avoid inadequate dosage to TVs/OARs. In head and neck squamous cell carcinoma (HNSCC), the GTV is typically defined on CT acquired prior to treatment. However, by providing functional information about the tumor, FDG-PET might advantageously complement the classical CT scan to better define the TVs. Similarly, re-imaging the tumor with the optimal imaging modality might account for the constantly changing anatomy and tumor shape occurring during the course of fractionated radiotherapy. Integrating this information into the treatment planning might ultimately lead to a much tighter dose distribution. From a methodological point of view, the delineation of TVs on anatomical or functional images is not a trivial task. Firstly, the poor soft-tissue contrast provided by CT results in large interobserver variability in GTV delineation. In this regard, we showed that the use of consistent delineation guidelines significantly improved consistency between observers, both with CT and with MRI. Secondly, the intrinsic characteristics of PET images, including the blur effect and the high level of noise, make the detection of tumor edges arduous. In this context, we developed specific image restoration tools, i.e. edge-preserving filters for denoising and deconvolution algorithms for deblurring. This procedure restores the image quality, allowing the use of gradient-based segmentation techniques. The method was validated on phantom and patient images, and proved to be more accurate and reliable than threshold-based methods. Using these segmentation methods, we showed that GTVs shrank significantly during radiotherapy in patients with HNSCC, whatever the imaging modality used (MRI, CT, FDG-PET). No clinically significant difference was found between CT and MRI, while FDG-PET provided significantly smaller volumes than those based on anatomical imaging. Refining the target volume delineation by means of functional and sequential imaging ultimately led to a more optimal dose distribution to TVs, with subsequent soft-tissue sparing. In conclusion, we demonstrated that multi-modality-based adaptive planning is feasible in HN tumors and potentially opens new avenues for dose-escalation strategies. As a high level of accuracy is required by such an approach, the delineation of TVs nevertheless requires special care.
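As an illustration of the kind of pipeline described here (edge-preserving denoising, deblurring, then gradient-based segmentation), the following is a minimal sketch and not the thesis implementation: the Gaussian point-spread function, bilateral filter settings and marker thresholds are assumptions chosen for a toy 2D slice.

```python
# Sketch of a denoise -> deblur -> gradient watershed pipeline for a PET slice.
import numpy as np
from skimage import restoration, filters, segmentation

def segment_pet_slice(pet_slice, psf_sigma=2.0):
    """Segment a 2D PET slice given as a float array scaled to [0, 1]."""
    # 1. Edge-preserving denoising (bilateral filter keeps tumour edges sharp).
    denoised = restoration.denoise_bilateral(pet_slice, sigma_color=0.05,
                                             sigma_spatial=2)
    # 2. Deblurring with an assumed Gaussian point-spread function.
    coords = np.arange(-6, 7)
    g = np.exp(-coords**2 / (2 * psf_sigma**2))
    psf = np.outer(g, g)
    psf /= psf.sum()
    deblurred = restoration.richardson_lucy(denoised, psf, 30)
    # 3. Gradient magnitude highlights the restored tumour boundary.
    gradient = filters.sobel(deblurred)
    # 4. Watershed with background/foreground markers from simple thresholds.
    markers = np.zeros_like(deblurred, dtype=np.int32)
    markers[deblurred < 0.2] = 1      # background
    markers[deblurred > 0.6] = 2      # high-uptake core
    labels = segmentation.watershed(gradient, markers)
    return labels == 2                # boolean tumour mask

if __name__ == "__main__":
    demo = np.random.rand(64, 64).astype(float)
    demo[20:40, 20:40] += 1.0         # synthetic "hot" lesion
    demo /= demo.max()
    print("segmented pixels:", int(segment_pet_slice(demo).sum()))
```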
34

A Probabilistic Approach to Image Feature Extraction, Segmentation and Interpretation

Pal, Chris January 2000 (has links)
This thesis describes a probabilistic approach to image segmentation and interpretation. The focus of the investigation is the development of a systematic way of combining color, brightness, texture and geometric features extracted from an image to arrive at a consistent interpretation for each pixel in the image. The contribution of this thesis is thus the presentation of a novel framework for the fusion of extracted image features producing a segmentation of an image into relevant regions. Further, a solution to the sub-pixel mixing problem is presented based on solving a probabilistic linear program. This work is specifically aimed at interpreting and digitizing multi-spectral aerial imagery of the Earth's surface. The features of interest for extraction are those of relevance to environmental management, monitoring and protection. The presented algorithms are suitable for use within a larger interpretive system. Some results are presented and contrasted with other techniques. The integration of these algorithms into a larger system is based firmly on a probabilistic methodology and the use of statistical decision theory to accomplish uncertain inference within the visual formalism of a graphical probability model.
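The sub-pixel mixing problem mentioned above can be illustrated with a small linear program. The sketch below is an assumed formulation, not the thesis's model: it estimates the fraction of each land-cover class inside one multi-spectral pixel by minimising the L1 residual against assumed class "endmember" spectra, subject to non-negative fractions that sum to one.

```python
# Sub-pixel unmixing posed as a linear program (assumed formulation).
import numpy as np
from scipy.optimize import linprog

def unmix_pixel(pixel, endmembers):
    """pixel: (n_bands,) spectrum; endmembers: (n_bands, n_classes) matrix."""
    n_bands, n_classes = endmembers.shape
    # Variables: [fractions (n_classes), residual bounds t (n_bands)].
    c = np.concatenate([np.zeros(n_classes), np.ones(n_bands)])  # minimise sum(t)
    I = np.eye(n_bands)
    #  E f - t <= y   and   -E f - t <= -y   together encode |y - E f| <= t.
    A_ub = np.block([[endmembers, -I], [-endmembers, -I]])
    b_ub = np.concatenate([pixel, -pixel])
    # Fractions must sum to one (linprog's default bounds keep them non-negative).
    A_eq = np.concatenate([np.ones(n_classes), np.zeros(n_bands)])[None, :]
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
    return res.x[:n_classes]

if __name__ == "__main__":
    # Two hypothetical classes (water, vegetation) over four spectral bands.
    E = np.array([[0.1, 0.6], [0.2, 0.5], [0.1, 0.4], [0.05, 0.7]])
    mixed = 0.3 * E[:, 0] + 0.7 * E[:, 1]        # 30% water, 70% vegetation
    print(np.round(unmix_pixel(mixed, E), 2))    # recovers roughly [0.3, 0.7]
```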
35

Image Segmentation and Analysis for Automated Classification of Traumatic Pelvic Injuries

Vasilache, Simina 26 April 2010 (has links)
In the past decades, technical advances have allowed for the collection and storage of more types and larger quantities of medical data. The increase in the volume of existing medical data has increased the need for processing and analyzing such data. Medical data holds information that is invaluable for diagnostic as well as treatment planning purposes. Presently, a large portion of the data is not optimally used towards medical decisions because information contained in the data is inaccessible through simple human inspection or traditional computational methods. In the field of trauma medicine, where caregivers are frequently confronted with situations where they need to make rapid decisions based on large amounts of information, the need for reliable, fast and automated computational methods for decision support systems is pressing. Such methods could process and analyze, in a timely fashion, all available medical data and provide caregivers with recommendations/predictions for both patient diagnosis and treatment planning. Presently, however, even extracting features that are known to be useful for diagnosis, like the presence and location of hemorrhage and fracture, is not easily achievable in an automatic manner. Trauma is the main cause of death among Americans age 40 and younger; hence, it has become a national priority. A computer-aided decision making system capable of rapidly analyzing all data available for a patient and forming reliable recommendations for physicians can greatly impact the quality of care provided to patients. Such a system would also reduce the overall costs involved in patient care as it helps in optimizing the decisions, avoiding unnecessary procedures, and customizing treatments for individual patients. Among different types of trauma with a high impact on the lives of Americans, traumatic pelvic injuries, which often occur in motor vehicle accidents and in falls, have had a tremendous toll on both human lives and healthcare costs in the United States. The present project has developed automated computational methods and algorithms to analyze pelvic CT images and extract significant features describing the severity of injuries. Such a step is of great importance as every CT scan consists of tens of slices that need to be closely examined. This method can automatically extract information hidden in CT images and therefore reduce the time of the examination. The method identifies and signals areas of potential abnormality and allows the user to decide upon the action to be taken (e.g. further examination of the image and/or area and neighboring images in the scan). The project also initiates the design of a system that combines the features extracted from biomedical signals and images with information such as injury scores, injury mechanism and demographic information in order to detect the presence and the severity of traumatic pelvic injuries and to provide recommendations for diagnosis and treatment. The recommendations are provided in the form of grammatical rules, allowing physicians to explore the reasoning behind these assessments.
36

Segmentace obrazů listů dřevin / Segmentation of images with leaves of woody species

Valchová, Ivana January 2016 (has links)
The thesis focuses on segmentation of images of leaves of woody species. The main aim was to investigate existing image segmentation methods, choose a suitable method for the given data and implement it. The inputs are scanned leaves and photographs of varying quality. The thesis summarizes general image segmentation methods and describes the algorithm that gave the best results. Based on the histogram, the algorithm decides whether the input is of sufficient quality and can be segmented with the Otsu algorithm, or whether it should instead be segmented with the GrowCut algorithm. Next, the image is improved by morphological closing and hole filling. Finally, only the largest object is kept. The results are illustrated using generated output images.
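A minimal sketch of the described pipeline follows, under two assumptions: the "sufficient quality" test is a simplified bimodality check, and scikit-image's random walker stands in for GrowCut, which has no standard library implementation. The thresholds are illustrative, not the thesis's values.

```python
# Sketch of the leaf segmentation pipeline: quality check, Otsu or seeded
# growing, morphological cleanup, keep the largest connected component.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, morphology, segmentation, measure

def segment_leaf(gray):
    """gray: 2D float image in [0, 1]; returns a boolean mask of the leaf."""
    t = filters.threshold_otsu(gray)
    # Crude quality check: how well does the Otsu threshold separate the modes?
    separation = abs(gray[gray > t].mean() - gray[gray <= t].mean())
    if separation > 0.1:                       # "good" scan: plain Otsu
        mask = gray < t                        # leaf assumed darker than paper
    else:                                      # "poor" photo: seeded growing
        seeds = np.zeros(gray.shape, dtype=np.int32)
        seeds[gray < np.quantile(gray, 0.05)] = 1   # certain leaf
        seeds[gray > np.quantile(gray, 0.95)] = 2   # certain background
        mask = segmentation.random_walker(gray, seeds) == 1
    # Morphological closing and hole filling clean up the boundary.
    mask = morphology.binary_closing(mask, morphology.disk(3))
    mask = ndi.binary_fill_holes(mask)
    # Keep only the largest connected component (the leaf itself).
    labels = measure.label(mask)
    if labels.max() == 0:
        return mask
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                               # ignore the background label
    return labels == sizes.argmax()
```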
37

Fast segmentation of the LV myocardium in real-time 3D echocardiography

Verhoek, Michael January 2011 (has links)
Heart disease is a major cause of death in western countries. In order to diagnose and monitor heart disease, 3D echocardiography is an important tool, as it provides a fast, relatively low-cost, portable and harmless way of imaging the moving heart. Segmentation of cardiac walls is an indispensable method of obtaining quantitative measures of heart function. However, segmentation of ultrasound images has its challenges: image quality is often relatively low and current segmentation methods are often not fast. It is desirable to make the segmentation technique as fast as possible, making quantitative heart function measures available at the time of recording. In this thesis, we test two state-of-the-art fast segmentation techniques to address this issue; furthermore, we develop a novel technique for finding the best segmentation propagation strategy between points of time in a cardiac image sequence. The first fast method is Graph Cuts (GC), an energy minimisation technique that represents the image as a graph. We test this method on static 3D echocardiography to segment the myocardium, varying the importance of the regulariser function. We look at edge measures, position constraints and tissue characterisation and find that GC is relatively fast and accurate. The second fast method is Random Forests (RFos), a discriminative classifier using binary decision trees, used in machine learning. To our knowledge, we are the first to test this method for myocardial segmentation on 2D and 3D static echocardiography. We investigate the number of trees, image features used, some internal parameters, and compare with intensity thresholding. We conclude that RFos are very fast and more accurate than GC segmentation. The static RFo method is subsequently applied to all time frames. We describe a novel optical flow based propagation technique that improves the static results by propagating the results from well-performing time frames to less-performing frames. We describe a learning algorithm that learns for each frame which propagation strategy is best. Furthermore, we look at the influence of the number of images and of the training set available per tree, and we compare against other methods that use motion information. Finally, we perform the same propagation learning method on the static GC results, concluding that the propagation method improves the static results in this case as well. We compare the dynamic GC results with the dynamic RFo results and find that RFos are more accurate and faster than GC.
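In the spirit of the Random Forest approach described above, the following is a minimal sketch (not the thesis implementation) of a per-voxel classifier: simple features (intensity, Gaussian-smoothed intensity, gradient magnitude) feed a forest that labels each voxel as myocardium or background. The feature set and forest size are assumptions.

```python
# Sketch of a Random Forest voxel classifier for 3D echo volumes.
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

def voxel_features(volume):
    """Stack simple per-voxel features for a 3D volume into an (N, 3) matrix."""
    smooth = ndi.gaussian_filter(volume, sigma=2)
    grad = ndi.gaussian_gradient_magnitude(volume, sigma=2)
    return np.stack([volume.ravel(), smooth.ravel(), grad.ravel()], axis=1)

def train_forest(volume, label_mask, n_trees=50):
    """label_mask: boolean myocardium mask aligned with `volume`."""
    forest = RandomForestClassifier(n_estimators=n_trees, max_depth=12, n_jobs=-1)
    forest.fit(voxel_features(volume), label_mask.ravel())
    return forest

def segment(volume, forest):
    return forest.predict(voxel_features(volume)).reshape(volume.shape)

if __name__ == "__main__":
    # Tiny synthetic volume: a bright block stands in for the myocardium.
    vol = np.random.rand(32, 32, 32) * 0.2
    vol[10:22, 10:22, 10:22] += 0.7
    truth = vol > 0.5
    model = train_forest(vol, truth)
    print("voxels labelled myocardium:", int(segment(vol, model).sum()))
```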
38

Segmentation-based Retinal Image Analysis

Wu, Qian January 2019 (has links)
Context. Diabetic retinopathy is the most common cause of new cases of legal blindness in people of working age. Early diagnosis is the key to slowing the progression of the disease and thus preventing blindness. The retinal fundus image is an important basis for judging these retinal diseases. With the development of technology, computer-aided diagnosis is widely used. Objectives. The thesis investigates whether there exist specific regions that could assist in better prediction of retinopathy; that is, it aims to find the region of the fundus image that works best for retinopathy classification using computer vision and machine learning techniques. Methods. An experiment was used as the research method. With image segmentation techniques, the fundus image is divided into regions to obtain an optic disc dataset, a blood vessel dataset, and an "other regions" (regions other than the blood vessels and optic disc) dataset. These datasets and the original fundus image dataset were tested on Random Forest (RF), Support Vector Machine (SVM) and Convolutional Neural Network (CNN) models, respectively. Results. The results on the different models are inconsistent. Compared to the original fundus image, the blood vessel region exhibits the best performance on the SVM model and the other regions perform best on the RF model, while the original fundus image has higher prediction accuracy on the CNN model. Conclusions. The other-regions dataset has more predictive power than the original fundus image dataset on the RF and SVM models. On the CNN model, extracting regions from the fundus image does not significantly improve predictive performance compared with using the entire fundus image.
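The comparison protocol can be illustrated with the assumed sketch below (not the thesis code): the same classifier families are trained on features extracted from each candidate region dataset and their cross-validated accuracies are compared. The colour-histogram feature extractor is a stand-in; the thesis's actual features, datasets and CNN model are not shown.

```python
# Sketch: compare RF and SVM accuracy across region datasets of fundus images.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def colour_histogram(image, bins=16):
    """Flatten an RGB image in [0, 1] into a per-channel histogram feature vector."""
    return np.concatenate([np.histogram(image[..., c], bins=bins,
                                        range=(0, 1), density=True)[0]
                           for c in range(3)])

def compare_regions(region_datasets, labels):
    """region_datasets: dict mapping region name -> list of RGB images."""
    models = {"RF": RandomForestClassifier(n_estimators=200),
              "SVM": SVC(kernel="rbf", C=1.0)}
    for region_name, images in region_datasets.items():
        X = np.array([colour_histogram(img) for img in images])
        for model_name, model in models.items():
            acc = cross_val_score(model, X, labels, cv=5).mean()
            print(f"{region_name:>14s} | {model_name}: {acc:.3f}")
```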
39

Skin lesion segmentation and classification using deep learning

Unknown Date (has links)
Melanoma, a severe and life-threatening skin cancer, is commonly misdiagnosed or left undiagnosed. Advances in artificial intelligence, particularly deep learning, have enabled the design and implementation of intelligent solutions to skin lesion detection and classification from visible light images, which are capable of performing early and accurate diagnosis of melanoma and other types of skin diseases. This work presents solutions to the problems of skin lesion segmentation and classification. The proposed classification approach leverages convolutional neural networks and transfer learning. Additionally, the impact of segmentation (i.e., isolating the lesion from the rest of the image) on the performance of the classifier is investigated, leading to the conclusion that there is an optimal region between “dermatologist segmented” and “not segmented” that produces the best results, suggesting that the context around a lesion is helpful as the model is trained and built. Generative adversarial networks, in the context of extending limited datasets by creating synthetic samples of skin lesions, are also explored. The robustness and security of skin lesion classifiers using convolutional neural networks are examined and stress-tested by implementing adversarial examples. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
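The transfer-learning setup mentioned above can be sketched as follows; this is an assumed illustration, not the thesis code. A ResNet-18 pretrained on ImageNet is reused as a fixed feature extractor and only a new final layer is trained to separate melanoma from benign lesions; the class count, learning rate and backbone choice are assumptions.

```python
# Sketch of transfer learning for lesion classification with a frozen backbone.
import torch
import torch.nn as nn
from torchvision import models

def build_lesion_classifier(num_classes=2, freeze_backbone=True):
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False                   # keep ImageNet features fixed
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
    return model

model = build_lesion_classifier()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB crops.
images = torch.randn(4, 3, 224, 224)
targets = torch.tensor([0, 1, 0, 1])
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
print(float(loss))
```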
40

Using Deep Learning Semantic Segmentation to Estimate Visual Odometry

Unknown Date (has links)
In this research, image segmentation and visual odometry estimation in real time are addressed, and two main contributions were made to this field. First, a new image segmentation and classification algorithm named DilatedU-NET is introduced. This deep learning based algorithm is able to process seven frames per second and achieves over 84% accuracy on the Cityscapes dataset. Secondly, a new method to estimate visual odometry is introduced. Using the KITTI benchmark dataset as a baseline, the visual odometry error was more significant than could be accurately measured. However, the robust frame rate made up for this, as the method was able to process 15 frames per second. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
