11

Content based image retrieval for bio-medical images

Nahar, Vikas, January 2010 (has links) (PDF)
Thesis (M.S.)--Missouri University of Science and Technology, 2010. / Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed Dec. 23, 2009). Includes bibliographical references (p. 82-83).
12

Three new methods for color and texture based image matching in Content-Based Image Retrieval

HE, DAAN 22 April 2010 (has links)
Image matching is an important and necessary process in Content-Based Image Retrieval (CBIR). We propose three new methods for image matching: the first is based on Local Triplet Pattern (LTP) histograms; the second is based on Gaussian Mixture Models (GMMs) estimated with the Extended Mass-constraint (EMass) algorithm; and the third is the DCT2KL algorithm.

First, LTP histograms are proposed to capture spatial relationships between the color levels of neighboring pixels. An LTP level is extracted from each 3x3 pixel block: a unique number describing the color-level relationship between a pixel and its neighboring pixels.

Second, we consider how to represent and compare multi-dimensional color features using GMMs. GMMs are an alternative to histograms for representing data distributions, and they avoid the inefficiency from which histograms usually suffer in high dimensions. To avoid the local-maxima problems of most GMM estimation algorithms, we apply the deterministic annealing method to estimate the GMMs.

Third, motivated by image compression algorithms, the DCT2KL method handles high-dimensional data by using the Discrete Cosine Transform (DCT) coefficients in the YCbCr color space. The DCT coefficients are recovered by partially decoding the JPEG images. We assume that each DCT coefficient sequence is emitted by a memoryless source and that these sources are independent of one another. For each target image we form the hypothesis that its DCT coefficient sequences are emitted by the same sources as the corresponding sequences in the query image. Testing these hypotheses by measuring log-likelihoods leads to a simple yet efficient scheme that ranks each target image by the Kullback-Leibler (KL) divergence between the empirical distributions of the DCT coefficient sequences in the query image and in the target image.

Finally, we present a scheme that combines the different features and methods to boost retrieval performance. Experimental results on several image data sets show that the three proposed methods outperform related work in the literature, and that the combination scheme further improves retrieval performance.
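The KL-based ranking described above can be sketched roughly as follows. The binning scheme, function names, and smoothing constant are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def empirical_dist(coeffs, bins, eps=1e-9):
    """Histogram a DCT coefficient sequence into a smoothed empirical distribution."""
    hist, _ = np.histogram(coeffs, bins=bins)
    p = hist.astype(float) + eps          # smoothing avoids log(0)
    return p / p.sum()

def kl_divergence(p, q):
    """D(p || q) = sum_i p_i * log(p_i / q_i)."""
    return float(np.sum(p * np.log(p / q)))

def rank_by_kl(query_coeffs, targets, bins=np.linspace(-50, 50, 33)):
    """Rank target images by ascending KL divergence from the query's
    DCT coefficient distribution (smaller divergence = better match)."""
    p = empirical_dist(query_coeffs, bins)
    scores = {name: kl_divergence(p, empirical_dist(c, bins))
              for name, c in targets.items()}
    return sorted(scores, key=scores.get)
```

In practice one such divergence would be computed per coefficient sequence and the per-sequence scores summed, following the independence assumption above.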
13

Texture Descriptors For Content-based Image Retrieval

Carkacioglu, Abdurrahman 01 January 2003 (has links) (PDF)
Content Based Image Retrieval (CBIR) systems represent images in the database by color, texture, and shape information. In this thesis, we concentrate on texture features and introduce a new generic texture descriptor, namely, Statistical Analysis of Structural Information (SASI). Moreover, in order to increase the retrieval rates of a CBIR system, we propose a new method that can adapt an image retrieval system into a configurable one without changing the underlying feature extraction mechanism or the similarity function. SASI is based on statistics of clique autocorrelation coefficients calculated over structuring windows. SASI defines a set of clique windows to extract and measure various structural properties of texture using a spatial multi-resolution method. Experimental results, obtained on various image databases, indicate that SASI is more successful than the Gabor Filter descriptors in capturing small granularities and discontinuities such as sharp corners and abrupt changes. Due to the flexibility in designing the clique windows, SASI reaches higher average retrieval rates than Gabor Filter descriptors. However, the price of this performance is increased computational complexity. Since retrieving images similar to a given query image is a subjective task, it is desirable that the retrieval mechanism be configurable by the user. In the proposed method, the original feature space of a content-based retrieval system is nonlinearly transformed into a new space, where the distance between feature vectors is adjusted by learning. The transformation is realized by an Artificial Neural Network architecture. A cost function is defined for learning and optimized by the simulated annealing method. Experiments are performed on a texture image retrieval system that uses the SASI and Gabor Filter features. The results indicate that the configured image retrieval system is significantly better than the original system.
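The idea of collecting statistics of neighbour-pair autocorrelation coefficients over structuring windows can be loosely sketched as below. This is a simplified stand-in, not the published SASI definition; the offsets, window size, and normalization are assumptions:

```python
import numpy as np

def pair_views(img, dy, dx):
    """Two aligned views of img such that the second is the (dy, dx)-shifted
    neighbour of the first (one 'clique' direction)."""
    h, w = img.shape
    a = img[max(-dy, 0):h - max(dy, 0), max(-dx, 0):w - max(dx, 0)]
    b = img[max(dy, 0):h - max(-dy, 0), max(dx, 0):w - max(-dx, 0)]
    return a, b

def texture_descriptor(img, offsets=((0, 1), (1, 0), (1, 1), (1, -1)), win=8):
    """Mean and std of windowed, normalized neighbour-pair correlations,
    two numbers per clique offset."""
    img = img.astype(float)
    feats = []
    for dy, dx in offsets:
        a, b = pair_views(img, dy, dx)
        prod = a * b
        # non-overlapping win x win structuring windows
        H = (prod.shape[0] // win) * win
        W = (prod.shape[1] // win) * win
        blocks = prod[:H, :W].reshape(H // win, win, W // win, win)
        local = blocks.mean(axis=(1, 3))          # per-window autocorrelation
        norm = (img ** 2).mean() or 1.0           # avoid division by zero
        local = local / norm
        feats += [local.mean(), local.std()]
    return np.array(feats)
```

Directional textures give distinct signatures: vertical stripes correlate strongly along the vertical offset but not the horizontal one, and vice versa.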
14

Multi-modal Video Summarization Using Hidden Markov Models For Content-based Multimedia Indexing

Yasaroglu, Yagiz 01 January 2003 (has links) (PDF)
This thesis deals with scene-level summarization of story-based videos. Two different approaches to story-based video summarization are investigated. The first approach probabilistically models the input video and identifies scene boundaries using the same model. The second approach models scenes and classifies scene types by evaluating the likelihood values of these models. In both approaches, hidden Markov models are used as the probabilistic modeling tools. The first approach also exploits the relationship between video summarization and video production, which is briefly explained, by means of content types. Two content types are defined, dialog-driven and action-driven content, and the need to define such content types is demonstrated by simulations. Different content types use different hidden Markov models and features. The selected model segments the input video as a whole. The second approach models scene types. Two types, dialog scene and action scene, are defined with different features and models. The system classifies fixed-size partitions of the video as either of the two scene types, and segments the partitions separately according to their scene types. The performance of these two systems is compared against a deterministic video summarization method employing clustering based on visual properties and video-structure-related rules. Hidden Markov model based video summarization using content types achieves the highest performance.
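The scene-type classification step, picking the model with the highest likelihood, can be sketched with the standard forward algorithm for discrete HMMs. The emission symbols and model parameters here are illustrative assumptions:

```python
import numpy as np

def log_forward(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm with per-step normalization for numerical stability)."""
    alpha = pi * B[:, obs[0]]
    s = alpha.sum()
    ll = np.log(s)
    alpha = alpha / s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        ll += np.log(s)
        alpha = alpha / s
    return ll

def classify_scene(obs, models):
    """Pick the scene type whose HMM assigns the highest likelihood."""
    return max(models, key=lambda k: log_forward(obs, *models[k]))
```

With one model trained on dialog scenes and one on action scenes, each fixed-size partition would be labeled by whichever model explains its feature sequence better.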
15

Intelligent content-based image retrieval framework based on semi-automated learning and historic profiles

Chung, Kien-Ping January 2007 (has links)
Over the last decade, storage of non text-based data in databases has become an increasingly important trend in information management. Images in particular have been gaining popularity as an alternative, and sometimes more viable, option for information storage. While this presents a wealth of information, it also creates a great problem in retrieving appropriate and relevant information during searching. This has resulted in an enormous growth of interest, and much active research, into the extraction of relevant information from non text-based databases. In particular, content-based image retrieval (CBIR) systems have been one of the most active areas of research. The retrieval principle of CBIR systems is based on visual features such as colour, texture, and shape, or on the semantic meaning of the images. To enhance the retrieval speed, most CBIR systems pre-process the images stored in the database. This is because feature extraction algorithms are often computationally expensive. If images are to be retrieved from the World-Wide-Web (WWW), the raw images have to be downloaded and processed in real time. In this case, the feature extraction speed becomes crucial. Ideally, systems should only use those feature extraction algorithms that are most suited to analysing the visual features that capture the common relationship between the images at hand. In this thesis, a statistical discriminant analysis based feature selection framework is proposed. Such a framework is able to select the most appropriate visual feature extraction algorithms by using relevance feedback only on the user-labelled samples. The idea is that a smaller image sample group is used to analyse the appropriateness of each visual feature, and only the selected features are used for image comparison and ranking. As the number of features is smaller, an improvement in the speed of retrieval is achieved.
From experimental results, it is found that the retrieval accuracy for small sample data has also improved. Intelligent E-Business has been used as a case study in this thesis to demonstrate the potential of the framework in image retrieval applications. In addition, an inter-query framework has been proposed in this thesis. This framework is also based on the statistical discriminant analysis technique. A common approach to inter-query learning in a CBIR system is the term-document approach, which treats each image's name or address as a term, and the query session as a document. However, scalability becomes an issue with this technique as the number of stored queries increases. Moreover, this approach is not appropriate for a dynamic image database environment. In this thesis, the proposed inter-query framework uses a cluster approach to capture the visual properties common to the previously stored queries. Thus, it is not necessary to "memorise" the name or address of the images. In order to manage the size of the user's profile, the proposed framework also introduces a merging approach to combine clusters that are close by and similar in their characteristics. Experiments have shown that the proposed framework has outperformed the short-term learning approach. It also has the advantage of eliminating the burden of the complex database maintenance strategies required by the term-document approach commonly used for inter-query learning. Lastly, the proposed inter-query learning framework has been further extended by the incorporation of a new semantic structure. The semantic structure is used to connect the previous queries both visually and semantically. This structure provides the system with the ability to retrieve images that are semantically similar and yet visually different. To do this, an active learning strategy has been incorporated for exploring the structure.
Experiments have again shown that the proposed new framework has outperformed the previous framework.
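The profile-size control described above, merging clusters that are close by and similar, can be sketched as follows. The distance threshold and count-weighted merging rule are assumptions for illustration, not the thesis's exact criteria:

```python
import numpy as np

def merge_close_clusters(centroids, counts, thresh):
    """Repeatedly merge any pair of cluster centroids closer than `thresh`,
    replacing them with their count-weighted mean."""
    cents = [np.asarray(c, float) for c in centroids]
    cnts = list(counts)
    merged = True
    while merged:
        merged = False
        for i in range(len(cents)):
            for j in range(i + 1, len(cents)):
                if np.linalg.norm(cents[i] - cents[j]) < thresh:
                    w = cnts[i] + cnts[j]
                    cents[i] = (cnts[i] * cents[i] + cnts[j] * cents[j]) / w
                    cnts[i] = w
                    del cents[j], cnts[j]
                    merged = True
                    break
            if merged:
                break
    return cents, cnts
```

Keeping counts alongside centroids lets older queries retain proportional influence as the profile is compacted.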
16

Efficient content-based retrieval of images using triangle-inequality-based algorithms /

Berman, Andrew P. January 1999 (has links)
Thesis (Ph. D.)--University of Washington, 1999. / Vita. Includes bibliographical references (p. [95]-100).
17

Efficient Image Matching with Distributions of Local Invariant Features

Grauman, Kristen, Darrell, Trevor 22 November 2004 (has links)
Sets of local features that are invariant to common image transformations are an effective representation to use when comparing images; current methods typically judge feature sets' similarity via a voting scheme (which ignores co-occurrence statistics) or by comparing histograms over a set of prototypes (which must be found by clustering). We present a method for efficiently comparing images based on their discrete distributions (bags) of distinctive local invariant features, without clustering descriptors. Similarity between images is measured with an approximation of the Earth Mover's Distance (EMD), which quickly computes the minimal-cost correspondence between two bags of features. Each image's feature distribution is mapped into a normed space with a low-distortion embedding of EMD. Examples most similar to a novel query image are retrieved in time sublinear in the number of examples via approximate nearest neighbor search in the embedded space. We also show how the feature representation may be extended to encode the distribution of geometric constraints between the invariant features appearing in each image. We evaluate our technique with scene recognition and texture classification tasks.
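The core idea of scoring image similarity by a minimal-cost correspondence between bags of features can be illustrated with a greedy approximation. This is a toy stand-in, not the paper's low-distortion EMD embedding; the greedy matching and Euclidean ground distance are assumptions:

```python
import numpy as np

def match_cost(bag_a, bag_b):
    """Greedy approximation of the minimal-cost one-to-one correspondence
    between two equal-size bags of feature vectors (an EMD-like score:
    lower cost means more similar images)."""
    cost = np.linalg.norm(bag_a[:, None, :] - bag_b[None, :, :], axis=2)
    total, used = 0.0, set()
    for i in np.argsort(cost.min(axis=1)):       # easiest rows first
        j = min((j for j in range(len(bag_b)) if j not in used),
                key=lambda j: cost[i, j])
        used.add(j)
        total += cost[i, j]
    return total
```

An exact minimal-cost matching (e.g. via the Hungarian algorithm) gives the true correspondence cost; the paper's contribution is avoiding that cost with an embedding that supports sublinear-time search.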
18

Techniques for content-based image characterization in wavelets domain

Voulgaris, Georgios January 2008 (has links)
This thesis documents the research that has led to the design of a number of techniques aiming to improve the performance of content-based image retrieval (CBIR) systems in the wavelet domain using texture analysis. Attention was focused on CBIR in the transform domain, and on wavelets in particular, because of their excellent characteristics for compression and texture extraction applications and their wide adoption in both the research community and industry. The issue of performance is addressed in terms of accuracy and speed. The rationale for this research builds upon the conclusion that CBIR has not yet reached a good balance of accuracy, efficiency and speed for wide adoption in practical applications. The issue of bridging the sensory gap, defined as "[the difference] between the object in the real world and the information in a (computational) description derived from a recording of that scene", has yet to be resolved. Furthermore, speed improvement remains uncharted territory, as does feature extraction directly from the bitstream of compressed images. To address the above requirements, the first part of this work introduces three techniques designed to jointly address the accuracy and processing cost of texture characterization in the wavelet domain. The second part introduces a new model for mapping the wavelet coefficients of an orthogonal wavelet transformation to a circular locus. The model is applied to design a novel rotation-invariant texture descriptor. All of the aforementioned techniques are also designed to bridge the gap between texture-based image retrieval and image compression by using appropriate compatible design parameters. The final part introduces three techniques for improving the speed of a CBIR query through more efficient calculation of the L1-distance when it is used as an image similarity metric.
The contributions conclude with a novel technique which, in conjunction with a widely adopted wavelet-based compression algorithm, extracts texture information directly from the compressed bit-stream, yielding savings in speed and storage requirements. The experimental findings indicate that the proposed techniques form a solid groundwork which can be extended to practical applications.
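One generic way to speed up L1-distance queries, early abandoning once the running sum exceeds the best distance found so far, can be sketched as below. This is a common optimization offered for illustration; it is not claimed to be the thesis's specific technique:

```python
def l1_early_abandon(q, x, best_so_far):
    """L1 distance that stops summing as soon as the running total can no
    longer beat the best distance found so far."""
    total = 0.0
    for a, b in zip(q, x):
        total += abs(a - b)
        if total >= best_so_far:
            return best_so_far          # pruned: cannot improve on current best
    return total

def nearest(query, database):
    """Linear scan over {name: feature_vector} using early abandoning."""
    best, best_d = None, float("inf")
    for name, feat in database.items():
        d = l1_early_abandon(query, feat, best_d)
        if d < best_d:
            best, best_d = name, d
    return best
```

The pruning never changes the result, because any abandoned candidate is provably farther than the current best.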
19

Document image retrieval with improvements in database quality

Kauniskangas, H. (Hannu) 23 June 1999 (has links)
Modern technology has made it possible to produce, process, transmit and store digital images efficiently. Consequently, the amount of visual information is increasing at an accelerating rate in many diverse application areas. To fully exploit this, new content-based image retrieval techniques are required. Document image retrieval systems can be utilized in many organizations which use document image databases extensively. This thesis presents document image retrieval techniques and new approaches to improve database content. The goal of the thesis is to develop a functional retrieval system and to demonstrate that better retrieval results can be achieved with the proposed database generation methods. Retrieval system architecture, a document data model, and tools for querying document image databases are introduced. The retrieval framework presented allows users to interactively define, construct and combine queries using document or image properties: physical (structural), semantic, textual and visual image content. A technique for combining primitive features like color, shape and texture into composite features is presented. A novel search base reduction technique which uses structural and content properties of documents is proposed for speeding up the query process. A new model for database generation within the image retrieval system is presented. An approach for automated document image defect detection and management is presented to build high-quality and retrievable database objects. In image database population, image feature profiles and their attributes are manipulated automatically to better match the query requirements determined by the available query methods, the application environment and the user. Experiments were performed with multiple image databases containing over one thousand images. They comprised a range of document and scene images of different categories, properties and conditions.
The results show that better recall and accuracy for retrieval are achieved with the proposed optimization techniques. The search base reduction technique results in a considerable speed-up in overall query processing. The constructed document image retrieval system performs well in different retrieval scenarios and provides a consistent basis for algorithm development. The proposed modular system structure and interfaces facilitate its usage in a wide variety of document image retrieval applications.
20

Problematika obsahového webu / The Issue of Content-Based Website

Sova, Martin January 2012 (has links)
The theme of the present thesis is the content-based website. The paper defines the concept using a layered model of how a content-based website functions, and analyses that functioning from the perspective of systems theory, on the basis of the major transformation functions identified in the operation of web content. The reader is acquainted with a model of content distribution on the website and with the possibilities for financing its operation. The formulated hypotheses test whether the investment can be recouped through specific advertising options; the validity of these hypotheses is then tested on data collected during the operation of specific content sites. The processes involved in creating web content are then analysed further. A practical example covers the selection and implementation of an information system built to support the creation of content on a particular website: by analysing the operating processes, it describes how appropriate resources are selected and deployed. The goal of this thesis is to help answer whether the operation of a content-based website can be financed by the placement of advertising elements, to identify which processes operate in content creation, and to show how to select and implement an information system to support them.
