About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

An improved incremental/decremental Delaunay mesh-generation strategy for image representation

EL Marzouki, Badr Eddine 16 December 2016 (has links)
Two highly effective content-adaptive methods for generating Delaunay mesh models of images, known as IID1 and IID2, are proposed. The methods repeatedly alternate between mesh simplification and refinement, based on the incremental/decremental mesh-generation framework of Adams, which has several free parameters. The effect of different choices of the framework's free parameters is studied, and the results are used to derive two mesh-generation methods that differ in computational complexity. The higher-complexity IID2 method generates mesh models of superior reconstruction quality, while the lower-complexity IID1 method trades mesh quality in return for a decrease in computational cost. Some of the contributions of our work include the recommendation of a better choice for the growth-schedule parameter of the framework, as well as the use of Floyd-Steinberg error diffusion for the initial-mesh selection. As part of our work, we evaluated the performance of the proposed methods using a data set of 50 images varying in type (e.g., photographic, computer generated, and medical), size, and bit depth, with multiple target mesh densities ranging from 0.125% to 4%. The experimental results show that our proposed methods perform extremely well, yielding high-quality image approximations in terms of peak signal-to-noise ratio (PSNR) and subjective visual quality, at an equivalent or lower computational cost compared to other well-known approaches such as the ID1, ID2, and IDDT methods of Adams, and the greedy point removal (GPR) scheme of Demaret and Iske. More specifically, the IID2 method outperforms the GPR scheme in terms of mesh quality by 0.2-1.0 dB with a 62-93% decrease in computational cost. Furthermore, the IID2 method yields meshes of similar quality to the ID2 method at a computational cost that is lower by 9-41%. The IID1 method provides improvements in mesh quality in 93% of the test cases by margins of 0.04-1.31 dB compared to the IDDT scheme, while having a similar complexity. Moreover, reductions in execution time of 4-59% are achieved compared to the ID1 method in 86% of the test cases. / Graduate / 0544, 0984, 0537 / marzouki@uvic.ca
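The Floyd-Steinberg error diffusion mentioned above for initial-mesh selection can be sketched as follows. This is an illustrative reading rather than the authors' implementation: the choice of detail map (Laplacian magnitude) and the `density` parameter are assumptions.

```python
import numpy as np
from scipy import ndimage

def select_initial_points(img, density=0.02):
    """Pick mesh sample points by Floyd-Steinberg error diffusion (a sketch).

    A detail map (here, Laplacian magnitude -- an assumption) is scaled so
    its total mass equals the target point count, then binarized by error
    diffusion so selected points concentrate where detail is high.
    """
    detail = np.abs(ndimage.laplace(img.astype(float)))
    target = density * img.size
    err = detail * (target / max(detail.sum(), 1e-12))
    h, w = err.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            old = err[y, x]
            out[y, x] = old >= 0.5           # quantize to {0, 1}
            e = old - float(out[y, x])       # quantization error
            if x + 1 < w:
                err[y, x + 1] += e * 7 / 16  # diffuse error to neighbours
            if y + 1 < h:
                if x > 0:
                    err[y + 1, x - 1] += e * 3 / 16
                err[y + 1, x] += e * 5 / 16
                if x + 1 < w:
                    err[y + 1, x + 1] += e * 1 / 16
    return np.argwhere(out)  # (row, col) coordinates of the chosen points
```

Diffusing the binarization error forces the selected points to track local image detail, which is what makes the resulting initial mesh content-adaptive.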
12

The Architectural City Images In Cinema: The Representation Of City In Renaissance As A Case Study

Akcay, Aysegul 01 June 2008 (has links) (PDF)
The aim of this study is to understand the limits of the spatial transformations of architectural images in cinema. Architectural city images are analyzed with reference to a case study, by reading the representation of space and the city in the film Renaissance, in which the city itself becomes the central notion. The interaction between architecture and cinema is discussed using concepts such as space, time, perception, framing, editing, and continuity, in addition to their relations with future cities and the spatial designs of these worlds.
13

Depth-adaptive methodologies for 3D image categorization

Kounalakis, Tsampikos January 2015 (has links)
Image classification is an active topic of computer vision research. It deals with learning patterns in order to allow efficient classification of visual information. However, most research efforts have focused on 2D image classification. In recent years, advances in 3D imaging have enabled the development of new applications and provided new research directions. In this thesis, we present methodologies and techniques for image classification using 3D image data. We conducted our research focusing on the attributes and limitations of depth information with regard to its possible uses. This research led us to the development of depth feature extraction methodologies that contribute to the representation of images, thus enhancing recognition efficiency. We propose a new classification algorithm that adapts to the needs of image representations by implementing a scale-based decision that exploits discriminant parts of representations. Learning from the design of image representation methods, we introduce our own, which describes each image by its depicted content, providing a more discriminative image representation. We also propose a dictionary learning method that exploits the relation of training features by assessing the similarity of features originating from similar context regions. Finally, we present our research on deep learning algorithms combined with data and techniques used in 3D imaging. Our novel methods provide state-of-the-art results, thus contributing to the research of 3D image classification.
14

Mesh models of images, their generation, and their application in image scaling

Mostafavian, Ali 22 January 2019 (has links)
Triangle-mesh modeling, as one of the approaches for representing images based on nonuniform sampling, has become quite popular and beneficial in many applications. In this thesis, image representation using triangle-mesh models and its application in image scaling are studied. Consequently, two new methods, namely, the SEMMG and MIS methods, are proposed, where each solves a different problem. In particular, the SEMMG method is proposed to address the problem of image representation by producing effective mesh models for representing grayscale images through the minimization of squared error. The MIS method is proposed to address the image-scaling problem for grayscale images that are approximately piecewise-smooth, using triangle-mesh models. The SEMMG method, which is proposed for addressing the mesh-generation problem, is developed based on an earlier work, which uses a greedy-point-insertion (GPI) approach to generate a mesh model with explicit representation of discontinuities (ERD). After in-depth analyses of two existing methods for generating the ERD models, several weaknesses are identified and specifically addressed to improve the quality of the generated models, leading to the proposal of the SEMMG method. The performance of the SEMMG method is then evaluated by comparing the quality of the meshes it produces with those obtained by eight other competing methods, namely, the error-diffusion (ED) method of Yang, the modified Garland-Heckbert (MGH) method, the ERDED and ERDGPI methods of Tu and Adams, the Garcia-Vintimilla-Sappa (GVS) method, the hybrid wavelet triangulation (HWT) method of Phichet, the binary space partition (BSP) method of Sarkis, and the adaptive triangular meshes (ATM) method of Liu. For this evaluation, the error between the original and reconstructed images, obtained from each method under comparison, is measured in terms of the PSNR. Moreover, in the case of the competing methods whose implementations are available, the subjective quality is compared in addition to the PSNR. Evaluation results show that the reconstructed images obtained from the SEMMG method are better than those obtained by the competing methods in terms of both PSNR and subjective quality. More specifically, in the case of the methods with implementations, the results collected from 350 test cases show that the SEMMG method outperforms the ED, MGH, ERDED, and ERDGPI schemes in approximately 100%, 89%, 99%, and 85% of cases, respectively. Moreover, in the case of the methods without implementations, we show that the PSNR of the reconstructed images produced by the SEMMG method is on average 3.85, 0.75, 2, and 1.10 dB higher than that obtained by the GVS, HWT, BSP, and ATM methods, respectively. Furthermore, for a given PSNR, the SEMMG method is shown to produce much smaller meshes than the GVS and BSP methods, with approximately 65% to 80% fewer vertices and 10% to 60% fewer triangles, respectively. Therefore, the SEMMG method is shown to be capable of producing triangular meshes of higher quality and smaller sizes (i.e., numbers of vertices or triangles) which can be effectively used for image representation. Besides the superior image approximations achieved with the SEMMG method, this work also makes contributions by addressing the problem of image scaling. For this purpose, the application of triangle-mesh models in image scaling is studied.
Some of the mesh-based image-scaling approaches proposed to date employ mesh models that are associated with an approximating function that is continuous everywhere, which inevitably yields edge blurring in the process of image scaling. Moreover, other mesh-based image-scaling approaches that employ approximating functions with discontinuities are often based on mesh simplification, where the method starts with an extremely large initial mesh, leading to very slow mesh generation with a high memory cost. In this thesis, however, we propose a new mesh-based image-scaling (MIS) method which, firstly, employs an approximating function with selected discontinuities to better maintain the sharpness of edges. Secondly, unlike most of the other discontinuity-preserving mesh-based methods, the proposed MIS method is not based on mesh simplification. Instead, our MIS method employs a mesh-refinement scheme, where it starts from a very simple mesh and iteratively refines the mesh to reach a desirable size. For developing the MIS method, the performance of our SEMMG method, which is proposed for image representation, is examined in the application of image scaling. Although the SEMMG method is not designed for solving the problem of image scaling, examining its performance in this application helps to better understand potential shortcomings of using a mesh generator in image scaling. Through this examination, several shortcomings are found, and different techniques are devised to address them. By applying these techniques, a new effective mesh-generation method called MISMG is developed that can be used for image scaling. The MISMG method is then combined with a scaling transformation and a subdivision-based model-rasterization algorithm, yielding the proposed MIS method for scaling grayscale images that are approximately piecewise-smooth. The performance of our MIS method is then evaluated by comparing the quality of the scaled images it produces with those obtained from five well-known raster-based methods, namely, bilinear interpolation, the bicubic interpolation of Keys, the directional cubic convolution interpolation (DCCI) method of Zhou et al., the new edge-directed image interpolation (NEDI) method of Li and Orchard, and the recent method of super-resolution using convolutional neural networks (SRCNN) by Dong et al. Since our main goal is to produce scaled images of higher subjective quality with the least amount of edge blurring, the quality of the scaled images is first compared through a subjective evaluation, followed by some objective evaluations. The results of the subjective evaluation show that the proposed MIS method was ranked best overall in almost 67% of the cases, with the best average rank of 2 out of 6, among 380 collected rankings from 20 images and 19 participants. Moreover, visual inspection of the scaled images obtained with different methods shows that the proposed MIS method produces scaled images of better quality, with more accurate and sharper edges. Furthermore, in the case of the mesh-based image-scaling methods for which no implementation is available, the MIS method is conceptually compared, using theoretical analysis, to two mesh-based methods, namely, the subdivision-based image-representation (SBIR) method of Liao et al. and the curvilinear feature driven image-representation (CFDIR) method of Zhou et al. / Graduate
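As a rough sketch of the mesh-refinement strategy described above (start from a trivial mesh and grow it), the following greedy point-insertion loop inserts the worst-approximated pixel at each step. It omits the discontinuity handling that distinguishes SEMMG/MISMG, and the corner-based initial mesh and absolute-error criterion are simplifying assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def refine_mesh(img, n_points):
    """Grow a mesh by greedy point insertion (an illustrative sketch).

    Starts from the four image corners and repeatedly inserts the pixel
    with the largest reconstruction error under piecewise-linear
    interpolation. Discontinuity handling and the actual SEMMG/MISMG
    insertion criteria are omitted.
    """
    img = img.astype(float)
    h, w = img.shape
    pts = [(0, 0), (0, w - 1), (h - 1, 0), (h - 1, w - 1)]
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.column_stack([ys.ravel(), xs.ravel()])
    while len(pts) < n_points:
        p = np.array(pts)
        interp = LinearNDInterpolator(p, img[p[:, 0], p[:, 1]])
        recon = np.nan_to_num(interp(grid).reshape(h, w))
        err = np.abs(img - recon)
        err[p[:, 0], p[:, 1]] = 0.0          # never reinsert a vertex
        pts.append(np.unravel_index(np.argmax(err), err.shape))
    return np.array(pts), Delaunay(np.array(pts))
```

A real implementation updates the triangulation and the error map incrementally around each inserted vertex; the full rebuild of the interpolant on every pass here is purely for clarity.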
15

Contributions to generic visual object categorization

Fu, Huanzhang 14 December 2010 (has links) (PDF)
This thesis is dedicated to the active research topic of generic Visual Object Categorization (VOC), which can be widely used in many applications such as video indexing and retrieval, video monitoring, security access control, and automobile driving support. Due to many practical difficulties, it is still considered to be one of the most challenging problems in computer vision and pattern recognition. In this context, we have proposed in this thesis several contributions, especially concerning the two main components of methods addressing VOC problems, namely feature selection and image representation.

Firstly, an Embedded Sequential Forward feature Selection algorithm (ESFS) has been proposed for VOC. Its aim is to select the most discriminant features in order to obtain good categorization performance. It is mainly based on the commonly used sub-optimal search method Sequential Forward Selection (SFS), which relies on the simple principle of incrementally adding the most relevant features. However, ESFS not only adds the most relevant features at each step but also merges them in an embedded way, thanks to the concept of combined mass functions from evidence theory, which also offers the benefit of a computational cost much lower than that of the original SFS.

Secondly, we have proposed novel image representations to model the visual content of an image, namely Polynomial Modeling based Image Representation (PMIR) and Statistical Measures based Image Representation (SMIR). They overcome the main drawback of the popular "bag of features" method, which is the difficulty of fixing the optimal size of the visual vocabulary. They have been tested along with our proposed region-based features and SIFT. Two different fusion strategies, early and late, have also been considered to merge information from the different "channels" represented by the different types of features.

Thirdly, we have proposed two approaches for VOC relying on sparse representation, including a reconstructive method (R_SROC) as well as a reconstructive and discriminative one (RD_SROC). Indeed, the sparse representation model was originally used in signal processing as a powerful tool for acquiring, representing, and compressing high-dimensional signals; we have therefore proposed to adapt these principles to the VOC problem. R_SROC relies on the intuitive assumption that an image can be represented by a linear combination of training images from the same category. Therefore, the sparse representations of images are first computed by solving the ℓ1-norm minimization problem and then used as new feature vectors for images to be classified by traditional classifiers such as SVM. To improve the discrimination ability of the sparse representation and better fit the classification problem, we have also proposed RD_SROC, which adds a discrimination term, such as the Fisher discrimination measure or the output of an SVM classifier, to the standard sparse representation objective function in order to learn a reconstructive and discriminative dictionary. Moreover, we have also proposed to combine the reconstructive and discriminative dictionary with the adapted purely reconstructive dictionary for a given category so that the discrimination power can be further increased.

The efficiency of all the methods proposed in this thesis has been evaluated on popular image datasets including SIMPLIcity, Caltech101, and Pascal2007.
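The reconstructive classification idea behind R_SROC can be sketched as follows. One hedge: the exact ℓ1-norm minimization is approximated here with an off-the-shelf Lasso (ℓ1-regularized least-squares) solver, and classification by per-class residual follows the generic sparse-representation recipe rather than the thesis's exact formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, y, alpha=0.01):
    """Classify y by sparse reconstruction from training images (a sketch).

    D is a (feature_dim, n_train) matrix with one training image per
    column; labels holds the class of each column. The l1-norm
    minimization is approximated with a Lasso solver, and y is assigned
    to the class whose atoms reconstruct it with the smallest residual.
    """
    lasso = Lasso(alpha=alpha, max_iter=5000)
    lasso.fit(D, y)                 # solves y ~ D @ x with an l1 penalty on x
    x = lasso.coef_
    best_class, best_residual = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)   # keep only class-c coefficients
        residual = np.linalg.norm(y - D @ xc)
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class
```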
16

A flexible mesh-generation strategy for image representation based on data-dependent triangulation

Li, Ping 15 May 2012 (has links)
Data-dependent triangulation (DDT) based mesh-generation schemes for image representation are studied. A flexible mesh-generation framework and a highly effective mesh-generation method that employs this framework are proposed. The proposed framework is derived from the frameworks proposed by Rippa and by Garland and Heckbert by making a number of key modifications to facilitate the development of much more effective mesh-generation methods. As the proposed framework has several free parameters, the effects of different choices of these parameters on mesh quality (both in terms of squared error and subjectively) are studied, leading to the recommendation of a particular set of choices for these parameters. A new mesh-generation method is then introduced that employs the proposed framework with these best parameter choices. Experimental results show that our proposed mesh-generation method outperforms several competing approaches, namely, the DDT-based incremental scheme proposed by Garland and Heckbert, the COMPRESS scheme proposed by Rippa, and the adaptive thinning scheme proposed by Demaret and Iske. More specifically, in terms of PSNR, our proposed method was found to outperform these three schemes by median margins of 4.1 dB, 10.76 dB, and 0.83 dB, respectively. The subjective qualities of the reconstructed images were also found to be correspondingly better. In terms of computational cost, our proposed method was found to be comparable to the schemes proposed by Garland and Heckbert and by Rippa. Moreover, our proposed method requires only about 5 to 10% of the time of the scheme proposed by Demaret and Iske. In terms of memory cost, our proposed method was shown to require essentially the same amount of memory as the schemes proposed by Garland and Heckbert and by Rippa, and orders of magnitude (33 to 800 times) less memory than the scheme proposed by Demaret and Iske. / Graduate
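Most entries in this list report quality as PSNR; for reference, a minimal implementation of the measure (assuming 8-bit images by default):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio; higher is better.

    PSNR = 10 * log10(peak^2 / MSE), where `peak` is the maximum
    possible pixel value (255 for 8-bit images).
    """
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```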
17

Image Representation using Attribute-Graphs

Prabhu, Nikita January 2016 (has links) (PDF)
In a digital world of Flickr, Picasa and Google Images, developing a semantic image representation has become a vital problem. Image processing and computer vision researchers to date have used several different representations for images. They vary from low-level features such as SIFT, HOG, GIST etc. to high-level concepts such as objects and people. When asked to describe an object or a scene, people usually resort to mid-level features such as size, appearance, feel, use, behaviour etc. Such descriptions are commonly referred to as the attributes of the object or scene. These human-understandable, machine-detectable attributes have recently become a popular feature category for image representation for various vision tasks. In addition to image and object characteristics, object interactions, background/context information, and the actions taking place in the scene form an important part of an image description. It is therefore essential to develop an image representation which can effectively describe various image components and their interactions. Towards this end, we propose a novel image representation, termed Attribute-Graph. An Attribute-Graph is an undirected graph, incorporating both local and global image characteristics. The graph nodes characterise objects as well as the overall scene context using mid-level semantic attributes, while the edges capture the object topology and the actions being performed. We demonstrate the effectiveness of Attribute-Graphs by applying them to the problem of image ranking. Since an image retrieval system should rank images in a way which is compatible with visual similarity as perceived by humans, it is intuitive that we work in a human-understandable feature space. Most content-based image retrieval algorithms treat images as a set of low-level features or try to define them in terms of the associated text. Such a representation fails to capture the semantics of the image. This, more often than not, results in retrieved images which are semantically dissimilar to the query. Ranking using the proposed attribute-graph representation alleviates this problem. We benchmark the performance of our ranking algorithm on the rPascal and rImageNet datasets, which we have created in order to evaluate the ranking performance on complex queries containing multiple objects. Our experimental evaluation shows that modelling images as Attribute-Graphs results in improved ranking performance over existing techniques.
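A minimal sketch of assembling such an Attribute-Graph with networkx is given below. The node and edge attribute names, the single global scene node, and the fully connected object topology are illustrative assumptions and may differ from the thesis's exact construction.

```python
import networkx as nx

def build_attribute_graph(objects, scene_attributes):
    """Assemble an Attribute-Graph in the spirit described above (a sketch).

    `objects` is assumed to be a list of dicts, each with an "id", a
    mid-level "attributes" vector, and an image-plane "position"; the
    global scene context is carried by a dedicated scene node.
    """
    g = nx.Graph()
    for obj in objects:
        g.add_node(obj["id"], attributes=obj["attributes"],
                   position=obj["position"])
    g.add_node("scene", attributes=scene_attributes)  # global context node
    ids = [obj["id"] for obj in objects]
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            g.add_edge(a, b, relation="spatial")      # object topology
        g.add_edge(a, "scene", relation="context")
    return g

# e.g. two detected objects plus the scene context:
g = build_attribute_graph(
    [{"id": "person", "attributes": [0.9, 0.1], "position": (40, 80)},
     {"id": "horse", "attributes": [0.2, 0.8], "position": (120, 90)}],
    scene_attributes=[0.7, 0.3])
```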
18

Perceived features and similarity of images: An investigation into their relationships and a test of Tversky's contrast model.

Rorissa, Abebe 05 1900 (has links)
The creation, storage, manipulation, and transmission of images have become less costly and more efficient. Consequently, the numbers of images and their users are growing rapidly. This poses challenges to those who organize and provide access to them. One of these challenges is similarity matching. Most current content-based image retrieval (CBIR) systems, which can extract only low-level visual features such as color, shape, and texture, use similarity measures based on geometric models of similarity. However, most human similarity judgment data violate the metric axioms of these models. Tversky's (1977) contrast model, which defines similarity as a feature contrast task and equates the degree of similarity of two stimuli to a linear combination of their common and distinctive features, explains human similarity judgments much better than the geometric models. This study tested the contrast model as a conceptual framework to investigate the nature of the relationships between features and similarity of images as perceived by human judges. Data were collected from 150 participants who performed two tasks: an image description task and a similarity judgment task. Qualitative (content analysis) and quantitative (correlational) methods were used to seek answers to four research questions related to the relationships between common and distinctive features and similarity judgments of images, as well as measures of their common and distinctive features. Structural equation modeling, correlation analysis, and regression analysis confirmed the relationships between perceived features and similarity of objects hypothesized by Tversky (1977). Tversky's (1977) contrast model, based upon a combination of two methods for measuring common and distinctive features and two methods for measuring similarity, produced statistically significant structural coefficients between the independent latent variables (common and distinctive features) and the dependent latent variable (similarity). This model fit the data well for a sample of 30 images (435 pairs) and 150 participants (χ² = 16.97, df = 10, p = .07508, RMSEA = .040, SRMR = .0205, GFI = .990, AGFI = .965). The goodness-of-fit indices showed the model did not significantly deviate from the actual sample data. This study is the first to test the contrast model in the context of information representation and retrieval. It is hoped that the results will provide the foundations for future research that will further test the contrast model and assist designers of image organization and retrieval systems by pointing toward alternative document representations and similarity measures that more closely match human similarity judgments.
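The contrast model the study tests can be stated compactly in code. Here the salience function f is taken as simple set cardinality and the weights are free parameters, a simplifying assumption, since the study estimates these relationships statistically rather than fixing them.

```python
def tversky_similarity(a, b, theta=1.0, alpha=0.5, beta=0.5):
    """Tversky's (1977) contrast model over sets of perceived features.

    s(a, b) = theta*f(A & B) - alpha*f(A - B) - beta*f(B - A), taking the
    salience function f as set cardinality (a simplifying assumption).
    """
    a, b = set(a), set(b)
    return theta * len(a & b) - alpha * len(a - b) - beta * len(b - a)

# e.g. feature sets elicited from two image descriptions:
img1 = {"outdoor", "people", "trees", "daylight"}
img2 = {"outdoor", "people", "buildings", "night"}
print(tversky_similarity(img1, img2))  # common features raise the score,
                                       # distinctive features lower it
```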
19

Saliency-weighted graphs for efficient visual content description and their applications in real-time image retrieval systems

Ahmad, J., Sajjad, M., Mehmood, Irfan, Rho, S., Baik, S.W. 18 July 2019 (has links)
The exponential growth in the volume of digital image databases is making it increasingly difficult to retrieve relevant information from them. Efficient retrieval systems require distinctive features extracted from visually rich contents, represented semantically in a human perception-oriented manner. This paper presents an efficient framework to model image contents as an undirected attributed relational graph, exploiting color, texture, layout, and saliency information. The proposed method encodes salient features into this rich representative model without requiring any segmentation or clustering procedures, reducing the computational complexity. In addition, an efficient graph-matching procedure implemented on specialized hardware makes it more suitable for real-time retrieval applications. The proposed framework has been tested on three publicly available datasets, and the results prove its superiority in terms of both effectiveness and efficiency in comparison with other state-of-the-art schemes. / Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2013R1A1A2012904).
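The paper's graph-matching procedure itself is not reproduced here. As a generic illustration of comparing two saliency-weighted attributed graphs, one can score an optimal one-to-one node assignment; the cosine node similarity and Hungarian assignment below are assumptions for illustration, not the paper's method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def graph_similarity(feats_a, sal_a, feats_b, sal_b):
    """Score two saliency-weighted attributed graphs (a generic sketch).

    Node feature vectors (one per row) are compared by cosine similarity,
    weighted by the saliency of each node pair, and an optimal one-to-one
    node assignment is found with the Hungarian algorithm. This stands in
    for, but is not, the paper's hardware-accelerated matching procedure.
    """
    a = feats_a / (np.linalg.norm(feats_a, axis=1, keepdims=True) + 1e-12)
    b = feats_b / (np.linalg.norm(feats_b, axis=1, keepdims=True) + 1e-12)
    weighted = (a @ b.T) * np.outer(sal_a, sal_b)  # emphasize salient pairs
    rows, cols = linear_sum_assignment(-weighted)  # maximize total similarity
    return float(weighted[rows, cols].sum())
```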
