21

Correlation between PET/MRI image features and pathological subtypes for localized prostate cancer / Korrelation mellan PET-/MR-bildegenskaper och patologiska undertyper för lokal prostatacancer

Lindahl, Jens January 2021 (has links)
Prostate cancer is the most common cancer in Sweden. Patients with the condition generally have a good prognosis and most cases can be treated. Localized prostate cancer is primarily treated with surgery or radiation therapy and is diagnosed with the help of different imaging modalities, such as magnetic resonance imaging (MRI) and positron emission tomography (PET). The diagnosis is confirmed, and the aggressiveness of the cancer determined, through biopsies: samples from a small part of the prostate are extracted and examined. Regions of higher aggressiveness may therefore be missed, which in turn could lead to under-treatment of the cancer. The aggressiveness of a lesion can be described by the Gleason Score (GS), which is determined by a visual assessment of the shape, size and arrangement of the cells. The aim of this study was to correlate GS with in-vivo images from MRI and PET. This was accomplished by investigating image data from PSMA PET, Acetate PET, Ktrans MRI and T2-weighted MRI in a cohort of 26 prostate cancer patients with 74 lesions. Regions of interest (ROIs) were created and applied to all images, and statistics such as the median and maximum value were extracted from each ROI. The statistics were combined to obtain a wide range of descriptive variables for each imaging modality, either normalised against a reference zone of the prostate or used as absolute values. The results indicated that PSMA PET, Acetate PET and Ktrans MRI correlated with GS, while T2-weighted MRI did not. The data also indicated that PSMA PET, Acetate PET and Ktrans MRI provide complementary information, which suggests that a combination of the modalities would predict GS better. These findings could affect both the diagnostics and the treatment of prostate cancer.
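A minimal sketch (not the thesis code) of the per-lesion feature extraction and correlation step described above: simple statistics are computed inside each ROI and rank-correlated with the Gleason Score. All array names and the example data below are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def roi_statistics(image, roi_mask):
    """Return simple descriptive statistics of the voxels inside a binary ROI."""
    voxels = image[roi_mask > 0]
    return {
        "median": np.median(voxels),
        "max": np.max(voxels),
        "mean": np.mean(voxels),
    }

# Hypothetical inputs: one image volume per lesion with its ROI mask,
# plus the Gleason Score assigned to that lesion at pathology.
lesion_images = [np.random.rand(32, 32, 16) for _ in range(5)]
lesion_masks = [np.random.rand(32, 32, 16) > 0.8 for _ in range(5)]
gleason_scores = [6, 7, 7, 8, 9]

medians = [roi_statistics(img, m)["median"] for img, m in zip(lesion_images, lesion_masks)]

# Rank correlation is a common choice for an ordinal target such as Gleason Score.
rho, p_value = stats.spearmanr(medians, gleason_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```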
22

Obrazová analýza v tribotechnické diagnostice / Image analysis in tribodiagnostics

Machalík, Stanislav January 2011 (has links)
Image analysis of wear particles is a useful support tool for detailed analysis of engine, gear, hydraulic and industrial oils. It makes it possible to obtain not only the basic parameters of wear particles but also data that would be very difficult to obtain with classical evaluation methods. By analysing the morphological or image characteristics of particles, the progress of wear of machine parts can be followed and, as a result, a possible breakdown of the engine can be prevented or the optimum oil-change interval determined. The aim of this work is to explore the possibilities of combining image analysis with analytical ferrography and to propose a tool for automated particle classification. Current methods of wear particle analysis rely on an evaluation that does not give an exact picture of the processes taking place between the friction surfaces of the engine system. The work is based on analytical ferrography, which makes it possible to assess the condition of the machine. The benefit of the classifiers defined in this work is the possibility of automated evaluation of analytical ferrography outputs; their use eliminates the crucial disadvantage of ferrographic analysis, namely its dependence on the subjective judgement of the expert performing the analysis. The classifiers are built with machine learning methods: based on an extensive database of particles created in the first part of the work, they were trained to evaluate ferrographically separated wear particles from oils taken from lubricated systems. In the next stage, experiments were carried out and the optimum classifier settings were determined from their results.
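As a rough illustration of the automated classification step described above, the sketch below trains a generic classifier on hypothetical morphological descriptors of separated particles; the feature set, particle classes and learning method are assumptions, not the ones defined in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical training data: one row per ferrographically separated particle,
# columns are morphological/image descriptors (e.g. area, elongation, texture).
X = rng.random((200, 4))
# Hypothetical particle classes, e.g. 0 = rubbing wear, 1 = cutting wear, 2 = fatigue wear.
y = rng.integers(0, 3, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```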
23

Digital Image Processing via Combination of Low-Level and High-Level Approaches.

Wang, Dong January 2011 (has links)
With the growth of computer power, digital image processing plays an increasingly important role in the modern world, including in industry, medicine, communications and spaceflight technology. There is no clear definition of how to divide digital image processing, but it normally comprises three main steps: low-level, mid-level and high-level processing. Low-level processing involves primitive operations, such as image pre-processing to reduce noise, contrast enhancement and image sharpening. Mid-level processing involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. Finally, high-level processing involves "making sense" of an ensemble of recognised objects, as in image analysis. Based on this theory, the thesis is organised in the following parts: Colour Edge and Face Detection; Hand Motion Detection; Hand Gesture Detection; and Medical Image Processing. In colour edge detection, two new images, the G-image and the R-image, are built through a colour space transform, after which the edges extracted from the G-image and R-image respectively are combined to obtain the final edge map. In face detection, a skin model is built first, and its boundary conditions are extracted to cover almost all skin pixels. After skin detection, knowledge about size, size ratio, and the locations of the ears and mouth is used to recognise faces within the skin regions. In hand motion detection, the frame difference is compared with an automatically chosen threshold in order to identify the moving object; for special situations with slow or smooth object motion, background modelling and frame differencing are combined to improve performance. In hand gesture recognition, three features of every test image are input to a Gaussian Mixture Model (GMM), and the Expectation-Maximization (EM) algorithm is used to compare the GMMs from test images with those from training images in order to classify the result. In medical image processing (mammograms), an Artificial Neural Network (ANN) and a clustering rule are applied to select the features. Two classifiers, an ANN and a Support Vector Machine (SVM), are applied to classify the results; in this processing, balanced learning and an optimised decision scheme are developed and applied to improve performance.
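The sketch below illustrates the GMM-based classification idea described for hand gesture recognition: one mixture model is fitted per gesture class (EM runs inside the fitting routine) and a test feature vector is assigned to the class with the highest likelihood. The feature dimensionality, class names and data are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical 3-dimensional feature vectors for two gesture classes.
train = {
    "open_hand": rng.normal(loc=[0.0, 0.0, 0.0], scale=0.3, size=(100, 3)),
    "fist": rng.normal(loc=[1.0, 1.0, 0.5], scale=0.3, size=(100, 3)),
}

# Fit one GMM per class; the EM algorithm runs inside GaussianMixture.fit().
models = {name: GaussianMixture(n_components=2, random_state=0).fit(X)
          for name, X in train.items()}

def classify(feature_vector):
    """Assign the class whose model gives the highest log-likelihood."""
    scores = {name: m.score_samples(feature_vector.reshape(1, -1))[0]
              for name, m in models.items()}
    return max(scores, key=scores.get)

print(classify(np.array([0.9, 1.1, 0.4])))   # expected: "fist"
```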
24

Ground Plane Feature Detection in Mobile Vision-Aided Inertial Navigation

Panahandeh, Ghazaleh, Mohammadiha, Nasser, Jansson, Magnus January 2012 (has links)
In this paper, a method for determining ground plane features in a sequence of images captured by a mobile camera is presented. The hardware of the mobile system consists of a monocular camera that is mounted on an inertial measurement unit (IMU). An image processing procedure is proposed, first to extract image features and match them across consecutive image frames, and second to detect the ground plane features using a two-step algorithm. In the first step, the planar homography of the ground plane is constructed using an IMU-camera motion estimation approach. The obtained homography constraints are used to detect the most likely ground features in the sequence of images. To reject the remaining outliers, as the second step, a new plane normal vector computation approach is proposed. To obtain the normal vector of the ground plane, only three pairs of corresponding features are used for a general camera transformation. The normal-based computation approach generalizes the existing methods that are developed for specific camera transformations. Experimental results on real data validate the reliability of the proposed method.
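A minimal sketch of the first step described above, assuming a known camera calibration and an IMU-derived motion estimate: a plane-induced homography is built from the relative pose and the ground-plane parameters, and a feature correspondence is kept as a ground candidate if its transfer error is small. All numerical values are illustrative assumptions, and the second (normal-vector) step is not shown.

```python
import numpy as np

# Assumed quantities (illustrative values, not estimates from real data):
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])      # camera intrinsics
R = np.eye(3)                        # rotation from frame 1 to frame 2 (X2 = R X1 + t)
t = np.array([[0.1], [0.0], [0.0]])  # translation between the frames (metres)
n = np.array([[0.0], [0.0], [1.0]])  # ground-plane normal in frame 1 (n^T X = d on the plane)
d = 1.5                              # distance from camera 1 to the ground plane (metres)

# Plane-induced homography mapping ground-plane pixels from frame 1 to frame 2.
H = K @ (R + (t @ n.T) / d) @ np.linalg.inv(K)

def is_ground_candidate(p1, p2, threshold=2.0):
    """True if feature p2 (pixels, frame 2) lies within `threshold` pixels of H applied to p1."""
    x1 = np.array([p1[0], p1[1], 1.0])
    x2_pred = H @ x1
    x2_pred = x2_pred[:2] / x2_pred[2]
    return float(np.linalg.norm(x2_pred - np.asarray(p2))) < threshold

# Example correspondence consistent with the assumed motion and plane.
print(is_ground_candidate((320.0, 400.0), (353.3, 400.0)))
```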
25

Novel image processing algorithms and methods for improving their robustness and operational performance

Romanenko, Ilya January 2014 (has links)
Image processing algorithms have developed rapidly in recent years. Imaging functions are becoming more common in electronic devices, demanding better image quality and more robust image capture in challenging conditions. Increasingly complicated algorithms are being developed to achieve better signal-to-noise characteristics, more accurate colours and wider dynamic range, in order to approach the performance of the human visual system.
26

Vyhledávání obrazu na základě podobnosti / Image search using similarity measures

Harvánek, Martin January 2014 (has links)
Four methods based on low-level image features are implemented: circular sectors, color moments, color coherence vectors and Gabor filters. These methods are evaluated after their optimal parameters have been found; the optimal parameters are determined by measuring the classification accuracy of learning operators with cross-validation in RapidMiner. The implemented methods are evaluated on the image categories ancient, beach, bus, dinosaur, elephant, flower, food, horse, mountain and natives, using total average precision. A modification of the original circular-sectors method (HSB color space combined with the median statistic) increases classification accuracy by 8 %. A weighted combination of color moments, circular sectors and Gabor filters gives the best total average precision, 70.48 %, and is the best of all implemented methods.
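As an illustration of one of the low-level features mentioned above, the sketch below computes the first three color moments (mean, standard deviation and a signed skewness-like term) per channel on a synthetic image; the circular-sector partitioning and Gabor features are not reproduced here.

```python
import numpy as np

def color_moments(image):
    """image: H x W x 3 array; returns a 9-dimensional feature vector (3 moments per channel)."""
    features = []
    for c in range(image.shape[2]):
        channel = image[:, :, c].astype(np.float64).ravel()
        mean = channel.mean()
        std = channel.std()
        # Signed cube root of the third central moment (skewness-like term).
        third = ((channel - mean) ** 3).mean()
        skew = np.sign(third) * np.abs(third) ** (1.0 / 3.0)
        features.extend([mean, std, skew])
    return np.array(features)

image = np.random.randint(0, 256, size=(64, 64, 3))
print(color_moments(image).round(2))
```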
27

Évaluation de la qualité des documents anciens numérisés / Quality evaluation of digitized historical documents

Rabeux, Vincent 06 March 2013 (has links)
Les travaux de recherche présentés dans ce manuscrit décrivent plusieurs apports au thème de l’évaluation de la qualité d’images de documents numérisés. Pour cela nous proposons de nouveaux descripteurs permettant de quantifier les dégradations les plus couramment rencontrées sur les images de documents numérisés. Nous proposons également une méthodologie s’appuyant sur le calcul de ces descripteurs et permettant de prédire les performances d’algorithmes de traitement et d’analyse d’images de documents. Les descripteurs sont définis en analysant l’influence des dégradations sur les performances de différents algorithmes, puis utilisés pour créer des modèles de prédiction à l’aide de régresseurs statistiques. La pertinence des descripteurs proposés et de la méthodologie de prédiction est validée de plusieurs façons. Premièrement, par la prédiction des performances de onze algorithmes de binarisation. Deuxièmement, par la création d’un processus automatique de sélection de l’algorithme de binarisation le plus performant pour chaque image. Puis pour finir, par la prédiction des performances de deux OCR en fonction de l’importance du défaut de transparence (diffusion de l’encre du recto sur le verso d’un document). Ce travail sur la prédiction des performances d’algorithmes est aussi l’occasion d’aborder les problèmes scientifiques liés à la création de vérités-terrains et à l’évaluation de performances. / This PhD thesis deals with quality evaluation of digitized document images. In order to measure the quality of a document image, we propose new features dedicated to the characterization of the most common degradations. We also propose to use these features to create prediction models able to predict the performance of different types of document analysis algorithms. The features are defined by analyzing the impact of a specific degradation on the results of an algorithm and are then used to create statistical regressors. The relevance of the proposed features and prediction models is analyzed in several experiments. The first one aims to predict the performance of different binarization methods. The second experiment aims to create an automatic procedure able to select the best binarization method for each image. Finally, the third experiment aims to create a prediction model for two commonly used OCRs. This work on performance prediction is also an opportunity to discuss the scientific problems of creating ground truths for performance evaluation.
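A minimal sketch of the prediction methodology, under assumed descriptor names and synthetic data: degradation descriptors computed per document image are used to fit a statistical regressor that predicts the score of a binarization algorithm.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical degradation descriptors per image: e.g. ink bleed-through level,
# background uniformity, mean stroke contrast.
X = rng.random((120, 3))
# Hypothetical target: F-measure obtained by one binarization method on each image.
y = 0.9 - 0.4 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.02, size=120)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print(f"R^2 on held-out images: {model.score(X_test, y_test):.2f}")
```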
28

Machine Learning Techniques with Specific Application to the Early Olfactory System

Auffarth, Benjamin January 2012 (has links)
This thesis deals with machine learning techniques for the extraction of structure and the analysis of the vertebrate olfactory pathway based on related methods. Some of its main contributions are summarized below. We have performed a systematic investigation of classification in biomedical images with the goal of recognizing a material in these images by its texture. This investigation included (i) different measures for evaluating the importance of image descriptors (features), (ii) methods to select a feature set based on these evaluations, and (iii) classification algorithms. Image features were evaluated according to their estimated relevance for the classification task and their redundancy with other features. For this purpose, we proposed a framework for relevance and redundancy measures and, within this framework, two new measures: the value difference metric and the fit criterion. Both measures performed well in comparison with other previously used feature evaluation measures. We also proposed a Hopfield network as a method for feature selection, which in experiments gave one of the best results relative to other previously used approaches. We proposed a genetic algorithm for clustering and tested it on several real-world datasets. This genetic algorithm was novel in several ways, including (i) the use of intra-cluster distance as an additional optimization criterion, (ii) an annealing procedure, and (iii) adaptation of mutation rates. As opposed to many conventional clustering algorithms, our optimization framework allowed us to use different cluster validation measures, including those which do not rely on cluster centroids. We demonstrated the use of the clustering algorithm experimentally with several cluster validity measures as optimization criteria. We compared the performance of our clustering algorithm to that of the often-used fuzzy c-means algorithm on several standard machine learning datasets from the University of California, Irvine (UCI) and obtained good results. The organization of representations in the brain has been observed, at several stages of processing, to spatially decompose input from the environment into features that are somehow relevant from a behavioral or perceptual standpoint. For the perception of smells, however, the analysis of such an organization is not as straightforward because a corresponding metric is missing. Some studies report spatial clusters for several combinations of physico-chemical properties in the olfactory bulb at the level of the glomeruli. We performed a systematic study of representations based on a dataset of activity-related images comprising more than 350 odorants and covering the whole spatial array of the first synaptic level in the olfactory system. We found clustered representations for several physico-chemical properties. We compared the relevance of these properties to activations and estimated the size of the coding zones. The results confirmed and extended previous studies on olfactory coding for physico-chemical properties. Of particular interest was the spatial progression by carbon chain that we found. We discussed our estimates of relevance and coding size in the context of processing strategies. We think that the results obtained in this study could guide the search for olfactory coding primitives and the understanding of the stimulus space. In a second study on representations in the olfactory bulb, we grouped odorants together by perceptual categories, such as floral and fruity.
By applying the same statistical methods as in the previous study, we found clustered zones for these categories. Furthermore, we found that distances between spatial representations were related to perceptual differences in humans as reported in the literature. This was possibly the first time that such an analysis had been done. Apart from pointing towards a spatial decomposition by perceptual dimensions, the results indicate that distance relationships between representations could be perceptually meaningful. In a third study, we modeled axon convergence from olfactory receptor neurons to the olfactory bulb. Sensory neurons were stimulated by a set of biologically relevant odors, which were described by a set of physico-chemical properties that covaried with the neural and glomerular population activity in the olfactory bulb. Convergence was mediated by the covariance between olfactory neurons. In our model, we could replicate the formation of glomeruli and concentration coding as reported in the literature, and further, we found that the spatial relationships between representational zones resulting from our model correlated with reported perceptual differences between odor categories. This shows that natural statistics, including similarity of physico-chemical structure of odorants, can give rise to an ordered arrangement of representations at the olfactory bulb level where the distances between representations are perceptually relevant.
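The sketch below illustrates relevance/redundancy-based feature selection in the spirit of the framework described above: features are picked greedily by relevance to the labels minus average redundancy with the already selected set. Plain correlation stands in for the thesis' own measures (value difference metric, fit criterion), and the data are synthetic.

```python
import numpy as np

def greedy_select(X, y, n_select):
    """Greedily select features by relevance to y minus average redundancy with selected features."""
    n_features = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        best_j, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

rng = np.random.default_rng(1)
X = rng.random((100, 8))
y = (X[:, 0] + 0.5 * X[:, 3] > 1.0).astype(float)   # labels driven by features 0 and 3
print("Selected feature indices:", greedy_select(X, y, 3))
```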
29

Vyhledávání graffiti tagů podle podobnosti / Graffiti Tag Retrieval

Grünseisen, Vojtěch January 2013 (has links)
This work focuses on the possibility of using current computer vision algorithms and methods for automatic similarity matching of so-called graffiti tags, i.e. graffiti used as a fast and simple signature of their authors. The design and implementation of a CBIR system created for this task is described. Local features, most notably self-similarity features, are used to determine image similarity.
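As a rough illustration of similarity scoring from local features, the sketch below counts nearest-neighbour ratio-test matches between two descriptor sets; the descriptors here are synthetic stand-ins, not the self-similarity features used in the work.

```python
import numpy as np

def similarity_score(desc_a, desc_b, ratio=0.8):
    """Count descriptors in desc_a whose nearest neighbour in desc_b passes the ratio test."""
    matches = 0
    for d in desc_a:
        dists = np.sort(np.linalg.norm(desc_b - d, axis=1))
        if dists[0] < ratio * dists[1]:
            matches += 1
    return matches

rng = np.random.default_rng(2)
tag_a = rng.random((40, 30))                                   # hypothetical descriptors of one tag
tag_a_again = tag_a + rng.normal(0.0, 0.01, size=tag_a.shape)  # the same tag, slightly perturbed
tag_b = rng.random((40, 30))                                   # descriptors of a different tag

print("same tag     :", similarity_score(tag_a, tag_a_again))
print("different tag:", similarity_score(tag_a, tag_b))
```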
30

Automatické třídění fotografií podle obsahu / Automatic Photography Categorization

Gajová, Veronika January 2012 (has links)
The purpose of this thesis is to design and implement a tool for automatic categorization of photos. The proposed tool is based on the Bag of Words classification method and is realized as a plug-in for the XnView image viewer. The plug-in can classify a selected group of photos into predefined image categories, and the resulting categories are written directly into the picture's IPTC metadata as keywords.
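A minimal sketch of a Bag of Words pipeline of the kind the tool is based on: local descriptors are clustered into a visual vocabulary, each photo becomes a histogram of visual words, and a classifier assigns a category. The descriptors, categories and vocabulary size below are assumptions; the XnView plug-in interface and IPTC writing are not shown.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical local descriptors: one (n_i x 16) array per photo.
photos = [rng.random((rng.integers(40, 80), 16)) for _ in range(30)]
labels = rng.integers(0, 3, size=30)   # hypothetical categories, e.g. 0=city, 1=nature, 2=people

# Build the visual vocabulary by clustering all descriptors.
vocabulary = KMeans(n_clusters=20, n_init=10, random_state=0).fit(np.vstack(photos))

def bow_histogram(descriptors):
    """Normalized histogram of visual-word assignments for one photo."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=20).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(d) for d in photos])
clf = LinearSVC().fit(X, labels)
print("Predicted category of the first photo:", clf.predict(X[:1])[0])
```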
