  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Near Images: A Tolerance Based Approach to Image Similarity and its Robustness to Noise and Lightening

Shahfar, Shabnam 27 September 2011 (has links)
This thesis presents a tolerance near set approach to detecting similarity between digital images. Two images are considered as sets of perceptual objects, and a tolerance relation defines the nearness between objects. Two perceptual objects resemble each other if the difference between their descriptions is smaller than a tolerable level of error. Existing tolerance near set approaches to image similarity place both images in a single tolerance space and compare the sizes of the tolerance classes; this approach is shown to be sensitive to noise and distortions. In this thesis, a new tolerance-based method is proposed that considers each image in a separate tolerance space and defines similarity through differences between histograms of tolerance-class sizes. The main advantage of the proposed method is its lower sensitivity to distortions such as added noise, darkening, or brightening, as demonstrated through a set of experiments.
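The histogram-of-class-sizes idea can be sketched in a few lines, assuming (purely for illustration) that each perceptual object is described by a single grayscale feature value and that tolerance classes are formed greedily; none of the names below come from the thesis itself:

```python
# Illustrative sketch, not the author's implementation: each image is a
# set of feature values, a tolerance relation groups values that differ
# by at most eps, and similarity compares histograms of class sizes.

def tolerance_classes(values, eps):
    """Greedily group sorted values so members of a class differ by <= eps."""
    classes = []
    for v in sorted(values):
        if classes and v - classes[-1][0] <= eps:
            classes[-1].append(v)
        else:
            classes.append([v])
    return classes

def size_histogram(classes, max_size):
    """Histogram over tolerance-class sizes (sizes above max_size pooled)."""
    hist = [0] * (max_size + 1)
    for c in classes:
        hist[min(len(c), max_size)] += 1
    return hist

def similarity(img_a, img_b, eps=10, max_size=8):
    """1.0 for identical size histograms, 0.0 for disjoint ones."""
    ha = size_histogram(tolerance_classes(img_a, eps), max_size)
    hb = size_histogram(tolerance_classes(img_b, eps), max_size)
    diff = sum(abs(a - b) for a, b in zip(ha, hb))
    total = sum(ha) + sum(hb)
    return 1.0 - diff / total if total else 1.0
```

Note that a uniform brightness shift adds the same constant to every value, so the greedy class structure, and hence the histogram, is unchanged in this sketch, which mirrors the robustness claim in the abstract.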
3

Možnosti srovnávání obrázků v mobiních aplikacích / Possibilities of image comparison in mobile applications

Jírů, Michaela January 2015 (has links)
This thesis deals with methods of image comparison. The goal is to create a mobile app that allows the user to compare images in real time. The first part lays out the theoretical basis, in particular image similarity algorithms. The practical part covers the app's implementation, including use case analysis, user interface design, and functional requirements, followed by source code samples and a description of the frameworks used. The last part tests the implemented algorithms with regard to speed and precision.
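The abstract does not name the specific algorithms implemented, but a typical real-time-friendly baseline for such an app is a perceptual average hash, sketched here with an illustrative image representation:

```python
# Average-hash sketch (an assumed baseline, not necessarily one of the
# thesis's algorithms): threshold each pixel against the image mean,
# then compare images by counting differing bits.

def average_hash(pixels):
    """pixels: 2D list of grayscale values; returns a flat list of 0/1 bits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits; 0 means the hashes match exactly."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

img = [[10, 200], [220, 30]]
same = hamming_distance(average_hash(img), average_hash(img))  # 0
```

Hashing is cheap enough for the real-time constraint: each comparison after hashing is a bit-count rather than a full pixel-by-pixel pass.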
4

Podobnost obrazu na základě barvy / Image similarity based on colour

Hampl, Filip January 2015 (has links)
This diploma thesis deals with image similarity based on colour. It first covers the necessary theoretical background: the colour models implemented in the work, the principle of building a histogram, and how histograms are compared. The next chapter summarises recent progress in the field of image comparison and surveys the most widely used methods. The practical part introduces the training image database used to measure the success rate of each implemented method; the methods are described individually, including their principles and the results achieved. Finally, the user interface is described, which presents the results for a chosen method in a transparent way.
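The histogram creation and comparison described above can be sketched as follows; the per-channel binning and the use of histogram intersection as the similarity measure are illustrative assumptions, not details taken from the thesis:

```python
# Colour-histogram comparison sketch (binning and measure are assumed).

def colour_histogram(pixels, bins=4):
    """pixels: list of (r, g, b) tuples in 0..255; concatenated per-channel bins."""
    hist = [0] * (3 * bins)
    step = 256 // bins
    for r, g, b in pixels:
        hist[r // step] += 1
        hist[bins + g // step] += 1
        hist[2 * bins + b // step] += 1
    return hist

def intersection(h1, h2):
    """Normalised histogram intersection: 1.0 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2)) / sum(h1)

pix = [(255, 0, 0), (250, 10, 5)]
score = intersection(colour_histogram(pix), colour_histogram(pix))  # 1.0
```

Histogram intersection is a common choice here because it is insensitive to where in the image a colour occurs, only to how much of it there is.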
5

Similarity models for atlas-based segmentation of whole-body MRI volumes

Axberg, Elin, Klerstad, Ida January 2020 (has links)
In order to analyse body composition of MRI (Magnetic Resonance Imaging) volumes, atlas-based segmentation is often used to retrieve information from specific organs or anatomical regions. The method behind this technique is to use an already segmented image volume, an atlas, to segment a target image volume by registering the volumes to each other. During this registration a deformation field will be calculated, which is applied to a segmented part of the atlas, resulting in the same anatomical segmentation in the target. The drawback with this method is that the quality of the segmentation is highly dependent on the similarity between the target and the atlas, which means that many atlases are needed to obtain good segmentation results in large sets of MRI volumes. One potential solution to overcome this problem is to create the deformation field between a target and an atlas as a sequence of small deformations between more similar bodies.  In this master thesis a new method for atlas-based segmentation has been developed, with the anticipation of obtaining good segmentation results regardless of the level of similarity between the target and the atlas. In order to do so, 4000 MRI volumes were used to create a manifold of human bodies, which represented a large variety of different body types. These MRI volumes were compared to each other and the calculated similarities were saved in matrices called similarity models. Three different similarity measures were used to create the models which resulted in three different versions of the model. In order to test the hypothesis of achieving good segmentation results when the deformation field was constructed as a sequence of small deformations, the similarity models were used to find the shortest path (the path with the least dissimilarity) between a target and an atlas in the manifold.  
In order to evaluate the constructed similarity models, three MRI volumes were chosen as atlases and 100 MRI volumes were randomly picked to be used as targets. The shortest paths between these volumes were used to create the deformation fields as a sequence of small deformations. The created fields were then used to segment the anatomical regions ASAT (abdominal subcutaneous adipose tissue), LPT (left posterior thigh) and VAT (visceral adipose tissue). The segmentation performance was measured with Dice Index, where segmentations constructed at AMRA Medical AB were used as ground truth. In order to put the results in relation to another segmentation method, direct deformation fields between the targets and the atlases were also created and the segmentation results were compared to the ground truth with the Dice Index. Two different types of transformation methods, one non-parametric and one affine transformation, were used to create the deformation fields in this master thesis. The evaluation showed that good segmentation results can be achieved for the segmentation of VAT for one of the constructed similarity models. These results were obtained when a non-parametric registration method was used to create the deformation fields. In order to achieve similar results for an affine registration and to improve the segmentation of other anatomical regions, further investigations are needed.
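The Dice index used to score the segmentations is simple to state; below is a sketch for binary masks represented as sets of voxel coordinates (the representation is an illustrative choice, not the one used at AMRA Medical AB):

```python
# Dice index sketch: 2|A ∩ B| / (|A| + |B|), where A is the predicted
# mask and B the ground-truth mask; 1.0 is a perfect overlap.

def dice(pred, truth):
    if not pred and not truth:
        return 1.0  # two empty masks agree by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))

truth = {(0, 0), (0, 1), (1, 0), (1, 1)}
pred = {(0, 0), (0, 1), (1, 0)}
score = dice(pred, truth)  # 2*3 / (3+4), roughly 0.857
```

The index rewards overlap relative to the combined mask sizes, so it penalises both missed voxels and spurious ones, which is why it is a standard choice for comparing a chained-deformation segmentation against a direct one.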
6

Selecting stimuli parameters for video quality studies based on perceptual similarity distances

Kumcu, A., Platisa, L., Chen, H., Gislason-Lee, Amber J., Davies, A.G., Schelkens, P., Taeymans, Y., Philips, W. 16 March 2015 (has links)
This work presents a methodology to optimize the selection of multiple parameter levels of an image acquisition, degradation, or post-processing process applied to stimuli intended to be used in a subjective image or video quality assessment (QA) study. It is known that processing parameters (e.g. compression bit-rate) or technical quality measures (e.g. peak signal-to-noise ratio, PSNR) are often non-linearly related to human quality judgment, and the model of either relationship may not be known in advance. Using these approaches to select parameter levels may lead to an inaccurate estimate of the relationship between the parameter and subjective quality judgments, i.e. the system's quality model. To overcome this, we propose a method for modeling the relationship between parameter levels and perceived quality distances using a paired comparison parameter selection procedure in which subjects judge the perceived similarity in quality. Our goal is to enable the selection of evenly sampled parameter levels within the considered quality range for use in a subjective QA study. This approach is tested on two applications: (1) selection of compression levels for a laparoscopic surgery video QA study, and (2) selection of dose levels for an interventional X-ray QA study. Subjective scores, obtained from the follow-up single stimulus QA experiments conducted with expert subjects who evaluated the selected bit-rates and dose levels, were roughly equidistant in the perceptual quality space, as intended. These results suggest that a similarity judgment task can help select parameter values corresponding to desired subjective quality levels.
Parts of this work were performed within the Telesurgery project (co-funded by iMinds, a digital research institute founded by the Flemish Government; project partners are Unilabs Teleradiology, SDNsquare and Barco, with project support from IWT) and the PANORAMA project (co-funded by grants from Belgium, Italy, France, the Netherlands, the United Kingdom, and the ENIAC Joint Undertaking).
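The selection step can be sketched as follows: given cumulative perceived-quality distances measured at candidate parameter levels via the paired comparisons, linearly interpolate the quality scale to pick levels at equal perceptual steps. All numbers and names below are illustrative, not values from the paper:

```python
# Sketch: choose n parameter levels whose perceived quality is evenly
# spaced, given cumulative perceptual distances at candidate levels.

def evenly_spaced_levels(params, cum_distance, n):
    """params and cum_distance are parallel, cum_distance non-decreasing."""
    total = cum_distance[-1]
    targets = [total * i / (n - 1) for i in range(n)]
    out = []
    for t in targets:
        # find the bracketing segment and linearly interpolate within it
        for i in range(1, len(params)):
            if cum_distance[i] >= t:
                d0, d1 = cum_distance[i - 1], cum_distance[i]
                frac = 0.0 if d1 == d0 else (t - d0) / (d1 - d0)
                out.append(params[i - 1] + frac * (params[i] - params[i - 1]))
                break
    return out

# e.g. bit-rates where perceptual distance accumulates non-linearly:
levels = evenly_spaced_levels([1, 2, 4, 8], [0.0, 1.0, 1.5, 2.0], 3)
```

In this toy example most of the perceived quality change happens between the first two bit-rates, so the evenly spaced selection skips the middle candidates rather than sampling the parameter axis uniformly.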
7

Fuzzy Tolerance Neighborhood Approach to Image Similarity in Content-based Image Retrieval

Meghdadi, Amir Hossein 22 June 2012 (has links)
The main contribution of this thesis is to define similarity measures between two images, with the main focus on content-based image retrieval (CBIR). Each image is considered as a set of visual elements that can be described with a set of visual descriptions (features). The similarity between images is then defined as the nearness between sets of elements based on a tolerance relation and a fuzzy tolerance relation. A tolerance relation is used to describe the approximate nature of visual perception; a fuzzy tolerance relation is adopted to eliminate the need for a sharp threshold and hence to model the gradual changes in the perception of similarity. Three real-valued similarity measures as well as a fuzzy-valued similarity measure are proposed. All of the methods are then used in two CBIR experiments, and the results are compared with classical measures of distance (namely Kantorovich, Hausdorff, and Mahalanobis) and with other published research. An important advantage of the proposed methods is shown to be their effectiveness in an unsupervised setting with no prior information. Eighteen different features (based on color, texture, and edge) are used in all the experiments, and a feature selection algorithm is used to train the system in choosing a suboptimal set of visual features.
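The move from a crisp to a fuzzy tolerance relation can be sketched for one-dimensional descriptions; the linear membership function below is an illustrative choice, not necessarily the one used in the thesis:

```python
# Crisp vs. fuzzy tolerance sketch (membership function is assumed).

def crisp_tolerance(x, y, eps):
    """Classic tolerance relation: 1 iff descriptions are within eps."""
    return 1.0 if abs(x - y) <= eps else 0.0

def fuzzy_tolerance(x, y, eps):
    """Graded version: resemblance decays linearly with the difference,
    reaching 0 at 2*eps, so no sharp threshold is needed."""
    d = abs(x - y)
    return max(0.0, 1.0 - d / (2 * eps))

# a pair just past the crisp threshold is no longer all-or-nothing
a, b, eps = 0.0, 1.1, 1.0
crisp = crisp_tolerance(a, b, eps)   # 0.0
fuzzy = fuzzy_tolerance(a, b, eps)   # roughly 0.45
```

This illustrates the abstract's point: under the crisp relation a tiny change in feature values can flip a pair from "similar" to "dissimilar", while the fuzzy degree changes gradually.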
9

Určování podobnosti objektů na základě obrazové informace / Determination of Objects Similarity Based on Image Information

Rajnoha, Martin January 2021 (has links)
Monitoring of public areas and automatic real-time processing of the recordings have become increasingly significant due to the changing security situation in the world. A persistent problem, however, is the analysis of low-quality records, where even state-of-the-art methods fail in some cases. This work investigates an important area of image similarity: biometric identification based on face images. It deals primarily with face super-resolution from a sequence of low-resolution images and compares this approach with single-frame methods, which are still considered the most accurate. A new dataset was created for this purpose, designed directly for multi-frame face super-resolution from a low-resolution input sequence and comparable in size with the leading world datasets. The results were evaluated both by a survey of human perception and by defined objective metrics, and the comparison confirmed the hypothesis that multi-frame methods achieve better results than single-frame methods. The architectures, source code, and dataset were released, creating a basis for future research in this field.
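The multi-frame intuition the thesis builds on can be shown in miniature: averaging several aligned noisy low-resolution frames suppresses noise before any upscaling. This naive baseline is purely illustrative and is not the thesis's architecture:

```python
# Naive multi-frame baseline sketch: pixel-wise mean of aligned frames.
# Independent noise averages out, which is the extra information a
# multi-frame method has over any single-frame method.

def average_frames(frames):
    """frames: list of 2D lists with identical shape, already aligned."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[i][j] for f in frames) / len(frames) for j in range(w)]
            for i in range(h)]

frames = [[[100, 110], [90, 105]],
          [[104, 106], [94, 101]],
          [[96, 114], [86, 109]]]
avg = average_frames(frames)  # each pixel is the mean across frames
```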
10

Deep learning for identification of figurative elements in trademark images using Vienna codes

Uzairi, Arjeton January 2021 (has links)
Labeling trademark images with Vienna codes from the Vienna classification is a manual process carried out by domain experts; it enables searching trademark image databases with keywords that describe the semantic meaning of the figurative elements. In this research, we investigate how supervised learning algorithms can improve and automate the manual labeling of new, unlabeled trademark images. The success of deep learning in computer-vision classification tasks motivated us to investigate which supervised learning algorithms perform trademark image classification best. More specifically, to identify the figurative elements in new, unlabeled images, we used a multi-class image classification approach based on deep learning and classical machine learning. To address this problem, we generated a unique benchmarking dataset of 14,500 unique logos extracted from the European Union Intellectual Property Office Open Data Portal. The results of a set of controlled experiments on this dataset indicate that the deep learning models outperform the machine learning models overall. In particular, CNN models reach better accuracy and precision, and significantly higher recall and F1 score, with shorter training times than recurrent neural networks such as LSTMs and GRUs. Among the machine learning models, Support Vector Machines achieve higher accuracy and better overall running time than Decision Trees, Random Forests, and Naïve Bayes. This study shows that deep learning models can solve the labeling of trademark images with Vienna codes and could be applied by Intellectual Property Offices to automate a classification task currently carried out manually by domain experts.
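The metrics the study reports (accuracy aside: precision, recall, F1) can be sketched per class in a few lines; the labels below are hypothetical stand-ins, not actual Vienna codes:

```python
# Per-class precision/recall/F1 sketch, as used to compare the models.

def f1_metrics(y_true, y_pred, positive):
    """Precision, recall and F1 for one class treated as 'positive'."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# hypothetical figurative-element labels, not real Vienna codes
truth = ["star", "star", "animal", "plant"]
pred  = ["star", "animal", "animal", "plant"]
p, r, f = f1_metrics(truth, pred, "star")  # p=1.0, r=0.5, f is about 0.667
```

F1 balances the two error types, which matters here because Vienna-code classes are highly imbalanced: a model can reach high accuracy while missing most instances of a rare figurative element.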
