  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Ανάπτυξη διαδικτυακού συστήματος βάσης δεδομένων με λειτουργικότητα ανάκτησης ιατρικών εικόνων / Web Based Database System Development with Functionality of Medical Image Retrieval

Χατζή, Διονυσία Γεωργία 24 January 2014 (has links)
Στην εργασία μας με τίτλο «Ανάπτυξη διαδικτυακού συστήματος βάσης δεδομένων με λειτουργικότητα ανάκτησης ιατρικών εικόνων» αρχικά παραθέσαμε τις τεχνικές που έχουν αναπτυχθεί από τις αρχές της δημιουργίας του τομέα της ανάκτησης εικόνας μέχρι σήμερα. Παρότι έχουν γίνει πολλές προσπάθειες για την ανάπτυξη μεθόδων οι οποίες θα βασίζονται αποκλειστικά στο περιεχόμενο τους, έως σήμερα οι περισσότερες μηχανές αναζήτησης βασίζονται ακόμη στην ομοιότητα των εικόνων βάσει των μεταδεδομένων που τις περιγράφουν. Στη συνέχεια κάναμε μια μικρή αναφορά σε ιατρικά συστήματα ανάκτησης ιατρικών εικόνων που έχουν δημιουργηθεί μέχρι σήμερα, όπως επίσης και στα αποτελέσματα του διαγωνισμού imageCLEF, ο οποίος διεξάγεται κάθε χρόνο από το 2003. Ο διαγωνισμός έχει δύο σκέλη, την ανάκτηση βάσει περιεχομένου και την ανάκτηση βάσει κειμένου, γι' αυτό και συμμετέχουν πολλές ομάδες που ασχολούνται με την επεξεργασία φυσικής γλώσσας. Κάθε χρόνο η δυσκολία του διαγωνισμού αυξάνεται θέτοντας νέες προκλήσεις στις συμμετέχουσες ομάδες. Σύμφωνα με τα αποτελέσματα του διαγωνισμού τα καλύτερα αποτελέσματα προκύπτουν από το συνδυασμό μεθόδων και από τις δύο κατηγορίες ανάκτησης. Το σύστημα που αναπτύξαμε χρησιμοποιεί και τις δυο παραπάνω τεχνικές. Η ανάκτηση βάσει κειμένου πραγματοποιείται χρησιμοποιώντας λέξεις κλειδιά που υπάρχουν ήδη στη βάση, ενώ για την ανάκτηση βάσει περιεχομένου εξάγουμε δύο χαρακτηριστικά, το ιστόγραμμα χρώματος και το autocorrelogram, τα οποία τα αποθηκεύουμε ως διανύσματα στη βάση και όταν θέλουμε να κάνουμε ένα ερώτημα εξάγουμε τα ίδια χαρακτηριστικά από την εικόνα ερώτημα. Η σύγκριση των δυο διανυσμάτων γίνεται υπολογίζοντας την Ευκλείδεια απόσταση μεταξύ του διανύσματος της εικόνας ερωτήματος και όλων των άλλων εικόνων της βάσης. / In our thesis, titled "Web based database system development with functionality of medical image retrieval", we present the retrieval techniques that have been developed to date.
Although many efforts have been made to develop methods that rely purely on image content, to date most search engines (e.g. Google, Yahoo!) return relevant results using text-based image retrieval. We then cite some medical image retrieval systems developed to date, as well as the results of the imageCLEF contest, which has been carried out every year since 2003. The contest has two parts, text-based image retrieval and content-based image retrieval, which is why many groups working on natural language processing participate. Every year the difficulty increases and new challenges are posed to the participants. According to the contest results, the best systems come from a combination of the two image retrieval categories. The system we developed uses both of the techniques mentioned above. Text-based image retrieval is implemented using keywords that already exist in the database, while for content-based image retrieval we extract two features, the colour histogram and the autocorrelogram, which are saved as vectors in the database; when a query is made, we extract the same features from the query image. To compare images we compute the Euclidean distance between the query-image vector and all the other image vectors in the database. The above methods are incorporated into SIDB, an online database management system. The system has been developed using PHP and PostgreSQL, and the images used are medical examinations of different parts of the human body. The largest part of them comes from the IRMA database, which was created at Aachen University and used for many years in the ImageCLEF competition.
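The histogram-plus-Euclidean-distance pipeline this abstract describes can be sketched in a few lines of Python. This is a minimal illustration, not the SIDB code; the pixel format, bin count, and function names are our own assumptions:

```python
import math

def color_histogram(pixels, bins_per_channel=4):
    """Quantize RGB pixels (values 0-255) into a flat, normalized histogram."""
    hist = [0.0] * (bins_per_channel ** 3)
    step = 256 // bins_per_channel
    for r, g, b in pixels:
        idx = ((r // step) * bins_per_channel ** 2
               + (g // step) * bins_per_channel
               + (b // step))
        hist[idx] += 1
    n = len(pixels)
    return [h / n for h in hist]

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def retrieve(query_vec, db, k=3):
    """Rank stored feature vectors by distance to the query vector."""
    ranked = sorted(db.items(), key=lambda kv: euclidean(query_vec, kv[1]))
    return [name for name, _ in ranked[:k]]
```

In the thesis the feature vectors are precomputed and stored in the database, so only the distance computation happens at query time.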
12

Hledání obrázků k textům / Matching Images to Texts

Hajič, Jan January 2014 (has links)
We build a joint multimodal model of text and images for automatically assigning illustrative images to journalistic articles. We approach the task as an unsupervised representation learning problem of finding a common representation that abstracts from the individual modalities, inspired by the multimodal Deep Boltzmann Machine of Srivastava and Salakhutdinov. We use state-of-the-art image content classification features obtained from the Convolutional Neural Network of Krizhevsky et al. as input "images" and entire documents instead of keywords as input texts. A deep learning and experiment management library, Safire, has been developed. We have not been able to create a successful retrieval system because of difficulties with training neural networks on the very sparse word observations. However, we have gained substantial understanding of the nature of these difficulties and are thus confident that we will be able to improve in future work.
13

Image Retrieval using Automatic Region Tagging

Awg Iskandar, Dayang Nurfatimah, dnfaiz@fit.unimas.my January 2008 (has links)
The task of tagging, annotating or labelling image content automatically with semantic keywords is a challenging problem. To automatically tag images semantically based on the objects that they contain is essential for image retrieval. In addressing these problems, we explore the techniques developed to combine textual description of images with visual features, automatic region tagging and region-based ontology image retrieval. To evaluate the techniques, we use three corpora comprising: Lonely Planet travel guide articles with images, Wikipedia articles with images and Goats comic strips. In searching for similar images or textual information specified in a query, we explore the unification of textual descriptions and visual features (such as colour and texture) of the images. We compare the effectiveness of using different retrieval similarity measures for the textual component. We also analyse the effectiveness of different visual features extracted from the images. We then investigate the best weight combination of using textual and visual features. Using the queries from the Multimedia Track of INEX 2005 and 2006, we found that the best weight combination significantly improves the effectiveness of the retrieval system. Our findings suggest that image regions are better in capturing the semantics, since we can identify specific regions of interest in an image. In this context, we develop a technique to tag image regions with high-level semantics. This is done by combining several shape feature descriptors and colour, using an equal-weight linear combination. We experimentally compare this technique with more complex machine-learning algorithms, and show that the equal-weight linear combination of shape features is simpler and at least as effective as using a machine learning algorithm. We focus on the synergy between ontology and image annotations with the aim of reducing the gap between image features and high-level semantics. 
Ontologies ease information retrieval. They are used to mine, interpret, and organise knowledge. An ontology may be seen as a knowledge base that can be used to improve the image retrieval process, and conversely keywords obtained from automatic tagging of image regions may be useful for creating an ontology. We engineer an ontology that surrogates concepts derived from image feature descriptors. We test the usability of the constructed ontology by querying the ontology via the Visual Ontology Query Interface, which has a formally specified grammar known as the Visual Ontology Query Language. We show that synergy between ontology and image annotations is possible and this method can reduce the gap between image features and high-level semantics by providing the relationships between objects in the image. In this thesis, we conclude that suitable techniques for image retrieval include fusing text accompanying the images with visual features, automatic region tagging and using an ontology to enrich the semantic meaning of the tagged image regions.
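The weighted text-plus-visual combination investigated above can be illustrated generically. This is a sketch of score-level fusion under assumed conventions (score dictionaries keyed by image id, min-max normalization), not the thesis implementation:

```python
def fuse_scores(text_scores, visual_scores, w_text=0.5, w_visual=0.5):
    """Combine per-image text and visual similarity scores with a weighted sum.
    Scores are min-max normalized first so the two modalities are comparable;
    equal weights give the equal-weight linear combination discussed above."""
    def normalize(scores):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {k: (v - lo) / span for k, v in scores.items()}
    t, v = normalize(text_scores), normalize(visual_scores)
    return {k: w_text * t[k] + w_visual * v[k] for k in t}
```

Sweeping `w_text` against `w_visual` on a validation query set is one simple way to find the "best weight combination" the abstract refers to.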
14

Fast Contour Matching Using Approximate Earth Mover's Distance

Grauman, Kristen, Darrell, Trevor 05 December 2003 (has links)
Weighted graph matching is a good way to align a pair of shapes represented by a set of descriptive local features; the set of correspondences produced by the minimum cost of matching features from one shape to the features of the other often reveals how similar the two shapes are. However, due to the complexity of computing the exact minimum cost matching, previous algorithms could only run efficiently when using a limited number of features per shape, and could not scale to perform retrievals from large databases. We present a contour matching algorithm that quickly computes the minimum weight matching between sets of descriptive local features using a recently introduced low-distortion embedding of the Earth Mover's Distance (EMD) into a normed space. Given a novel embedded contour, the nearest neighbors in a database of embedded contours are retrieved in sublinear time via approximate nearest neighbors search. We demonstrate our shape matching method on databases of 10,000 images of human figures and 60,000 images of handwritten digits.
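The Earth Mover's Distance at the heart of this method has a simple closed form in one dimension, which also gives the intuition behind embedding EMD into a normed (L1) space. This is an illustrative sketch only; the actual algorithm embeds sets of multi-dimensional local features:

```python
def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D histograms of equal total mass.
    In one dimension, EMD equals the L1 distance between the cumulative sums:
    the running imbalance is exactly the mass that must cross each bin boundary."""
    total, cum = 0.0, 0.0
    for a, b in zip(p, q):
        cum += a - b
        total += abs(cum)
    return total
```

Once distances reduce to an L1 norm like this, standard approximate nearest-neighbor structures apply, which is what enables the sublinear-time retrieval described above.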
15

Multi-Technique Fusion for Shape-Based Image Retrieval

El-Ghazal, Akrem January 2009 (has links)
Content-based image retrieval (CBIR) is still in its early stages, although several attempts have been made to solve or minimize challenges associated with it. CBIR techniques use such visual contents as color, texture, and shape to represent and index images. Of these, shapes contain richer information than color or texture. However, retrieval based on shape contents remains more difficult than that based on color or texture due to the diversity of shapes and the natural occurrence of shape transformations such as deformation, scaling and orientation. This thesis presents an approach for fusing several shape-based image retrieval techniques for the purpose of achieving reliable and accurate retrieval performance. An extensive investigation of notable existing shape descriptors is reported. Two new shape descriptors have been proposed as means to overcome limitations of current shape descriptors. The first descriptor is based on a novel shape signature that includes corner information in order to enhance the performance of shape retrieval techniques that use Fourier descriptors. The second descriptor is based on the curvature of the shape contour. This invariant descriptor takes an unconventional view of the curvature-scale-space map of a contour by treating it as a 2-D binary image. The descriptor is then derived from the 2-D Fourier transform of the 2-D binary image. This technique allows the descriptor to capture the detailed dynamics of the curvature of the shape and enhances the efficiency of the shape-matching process. Several experiments have been conducted in order to compare the proposed descriptors with several notable descriptors. The new descriptors not only speed up the online matching process, but also lead to improved retrieval accuracy. The complexity and variety of the content of real images make it impossible for a particular choice of descriptor to be effective for all types of images. 
Therefore, a data-fusion formulation based on a team consensus approach is proposed as a means of achieving high-accuracy performance. In this approach, a select set of retrieval techniques forms a team. Members of the team exchange information so as to complement each other’s assessment of a database image candidate as a match to the query images. Several experiments have been conducted on the MPEG-7 contour-shape databases; the results demonstrate that the performance of the proposed fusion scheme is superior to that achieved by any technique individually.
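The Fourier-descriptor component discussed above can be sketched as follows. This is the textbook construction, not the thesis's corner-augmented signature; the contour format and coefficient count are assumptions:

```python
import cmath

def fourier_descriptors(contour, n_coeffs=8):
    """Translation/scale/rotation-invariant Fourier descriptors of a closed contour.
    contour: ordered list of (x, y) boundary points."""
    z = [complex(x, y) for x, y in contour]
    n = len(z)
    # Plain O(n^2) DFT; fine for short contours
    coeffs = [sum(z[k] * cmath.exp(-2j * cmath.pi * u * k / n) for k in range(n))
              for u in range(n)]
    mags = [abs(c) for c in coeffs]
    # Drop F0 (translation), divide by |F1| (scale),
    # keep magnitudes only (rotation / starting point)
    scale = mags[1] or 1.0
    return [m / scale for m in mags[2:2 + n_coeffs]]
```

The invariances are exactly the shape transformations (deformation aside) that the abstract names as the core difficulty of shape-based retrieval.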
17

Semantic Assisted, Multiresolution Image Retrieval in 3D Brain MR Volumes

Quddus, Azhar January 2010 (has links)
Content Based Image Retrieval (CBIR) is an important research area in the field of multimedia information retrieval. The application of CBIR in the medical domain has been attempted before, however the use of CBIR in medical diagnostics is a daunting task. The goal of diagnostic medical image retrieval is to provide diagnostic support by displaying relevant past cases, along with proven pathologies as ground truths. Moreover, medical image retrieval can be extremely useful as a training tool for medical students and residents, follow-up studies, and for research purposes. Despite the presence of an impressive amount of research in the area of CBIR, its acceptance for mainstream and practical applications is quite limited. The research in CBIR has mostly been conducted as an academic pursuit, rather than for providing the solution to a need. For example, many researchers proposed CBIR systems where the image database consists of images belonging to a heterogeneous mixture of man-made objects and natural scenes while ignoring the practical uses of such systems. Furthermore, the intended use of CBIR systems is important in addressing the problem of "Semantic Gap". Indeed, the requirements for the semantics in an image retrieval system for pathological applications are quite different from those intended for training and education. Moreover, many researchers have underestimated the level of accuracy required for a useful and practical image retrieval system. The human eye is extremely dexterous and efficient in visual information processing; consequently, CBIR systems should be highly precise in image retrieval so as to be useful to human users. Unsurprisingly, due to these and other reasons, most of the proposed systems have not found useful real world applications. In this dissertation, an attempt is made to address the challenging problem of developing a retrieval system for medical diagnostics applications. 
More specifically, a system for semantic retrieval of Magnetic Resonance (MR) images in 3D brain volumes is proposed. The proposed retrieval system has the potential to be useful for clinical experts where the human eye may fail. Previously proposed systems used imprecise segmentation and feature extraction techniques, which are not suitable for the precise matching requirements of image retrieval in this application domain. This dissertation uses a multiscale representation for image retrieval, which is robust against noise and MR inhomogeneity. In order to achieve a higher degree of accuracy in the presence of misalignments, an image registration based retrieval framework is developed. Additionally, to speed up the retrieval system, a fast discrete wavelet based feature space is proposed. Further improvement in speed is achieved by semantically classifying the human brain into various "Semantic Regions", using an SVM-based machine learning approach. A novel and fast identification system is proposed for identifying a 3D volume given a 2D image slice. To this end, we use SVM output probabilities for ranking and identification of patient volumes. The proposed retrieval systems are tested not only under noise conditions but also on healthy and abnormal cases, resulting in promising retrieval performance with respect to multi-modality, accuracy, speed and robustness. This dissertation furnishes medical practitioners with a valuable set of tools for semantic retrieval of 2D images where the human eye may fail. Specifically, the proposed retrieval algorithms provide medical practitioners with the ability to retrieve 2D MR brain images accurately and to monitor disease progression in various lobes of the human brain, with the capability to monitor disease progression in multiple patients simultaneously.
Additionally, the proposed semantic classification scheme can be extremely useful for semantic based categorization, clustering and annotation of images in MR brain databases. This research framework may evolve in a natural progression towards developing more powerful and robust retrieval systems. It also provides a foundation to researchers in semantic based retrieval systems on how to expand existing toolsets for solving retrieval problems.
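The probability-based volume identification step can be sketched generically. The dissertation uses SVM output probabilities; the minimal stand-in below assumes only that some probabilistic classifier supplies, for each candidate volume, per-slice probabilities that the query 2D slice belongs to it:

```python
def rank_volumes(slice_probs):
    """Rank candidate 3-D volumes for a query 2-D slice by mean classifier
    probability. slice_probs maps volume id -> list of per-slice probabilities."""
    scored = {vol: sum(p) / len(p) for vol, p in slice_probs.items()}
    return sorted(scored, key=scored.get, reverse=True)
```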
18

Multiple-Instance Learning Image Database Retrieval employing Orthogonal Fractal Bases

Wang, Ya-ling 08 August 2004 (has links)
The objective of the present work is to propose a novel method to extract a stable feature set representative of image content. Each image is represented by a linear combination of fractal orthonormal basis vectors. The coefficients of an image projected onto each orthonormal basis vector constitute the feature vector. The set of orthonormal basis vectors is generated by utilizing a fractal iterative function through target and domain block mapping. The distance measure remains consistent, i.e., the embedding is isometric, between any image pair before and after the projection onto the orthonormal axes. Not only do similar images generate points close to each other in the feature space, but dissimilar ones also produce feature points far apart. Equivalently, distant feature points are guaranteed to map to images with dissimilar contents, while close feature points correspond to similar images. We adapt the Multiple-Instance Learning paradigm, using the Diverse Density algorithm, as a way of modeling the ambiguity in images in order to learn concepts used to classify them. A user labels an image as positive if the image contains the concepts, and as negative if the image is far from the concepts. Each example image is a bag of blocks where only the bag is labeled. The user selects positive and negative image examples to train the concepts in feature space. From a small collection of positive and negative examples, the system learns the concepts and uses them to retrieve images that contain the concepts from the database. The similar blocks belonging to each concept form a group within each image. The similarity between positive examples and database images is then computed according to the groups' location distribution, variation, and spatial relations.
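The isometry claim above, that distances are preserved when projecting onto an orthonormal basis, can be checked directly. This is a generic linear-algebra sketch, not the fractal basis construction itself:

```python
import math

def project(x, basis):
    """Feature vector of x: its coefficients in an orthonormal basis
    (each row of `basis` is one basis vector)."""
    return [sum(b_i * x_i for b_i, x_i in zip(b, x)) for b in basis]

def dist(u, v):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
```

Because the basis is orthonormal, `dist(x, y)` equals `dist(project(x, B), project(y, B))` for any pair of vectors, which is precisely why close (or distant) feature points correspond to similar (or dissimilar) images.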
19

Color Image Retrieval Using Wavelet Transform and Texture Features

Tsao, Yu-Jen 14 August 2005 (has links)
As digital technology advances with each passing day and the internet evolves quickly, the demand for digital images keeps increasing, and more information in our daily life is shown in the form of digital patterns or images. Besides retrieving image data from a given image database by context, we can alternatively do so by prescribed image features. This method is called content-based image retrieval (CBIR). The wavelet transform provides multi-resolution analysis for digital images. Its bands are mutually independent, so good results can often be obtained from partial analyses. Although the wavelet transform is usually used for image compression and texture analysis, it also has many recent applications in image retrieval. In this research, we propose the use of some new image roughness features to represent the variation of image textures. After an image is wavelet-transformed, we collect the roughness features as well as the wavelet energy features from each band. These features are then used to sort out the desired images. We show that the features used in this work can be extracted even when the images are altered by rotation, partial magnification, or viewpoint changes.
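The per-band wavelet energy features can be sketched with a single Haar level. This is an illustrative stand-in on a 1-D signal; the thesis's specific wavelet and its roughness features are not reproduced here:

```python
def haar_1d(signal):
    """One level of the Haar wavelet transform:
    returns (approximation, detail) bands from adjacent-pair averages/differences."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def band_energy(band):
    """Wavelet energy feature of a band: mean squared coefficient."""
    return sum(c * c for c in band) / len(band)
```

A smooth region yields near-zero detail energy while a rough, textured region yields high detail energy, which is why per-band energies discriminate textures.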
20

Content based image retrieval for bio-medical images

Nahar, Vikas, January 2010 (has links) (PDF)
Thesis (M.S.)--Missouri University of Science and Technology, 2010. / Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed Dec. 23, 2009). Includes bibliographical references (p. 82-83).
