Current Web image search engines, such as Google or Bing Images, adopt a hybrid search approach in which a text-based query (e.g.
"apple") is used to retrieve a set of relevant images, which are then refined by the user (e.g. by re-ranking the retrieved images based on similarity to a selected example). This approach makes it possible to use both text information (e.g. the initial query) and image features (e.g. as part of the refinement stage) to identify images which are relevant to the user. One limitation of these current systems is that text and image features are treated as independent components and are often used in a decoupled manner.
This work proposes to develop an integrated hybrid search method which leverages the synergies between text and image features.
Recently, there has been tremendous progress in the computer vision community in learning models of visual concepts from collections of example images. While impressive performance has been achieved on standardized data sets, scaling these methods to web scale remains a significant challenge. This work will develop visual modeling approaches that scale to the task of retrieving billions of images on the Web.
Specifically, we propose to address two research issues related to integrated text- and image-based retrieval. First, we will explore whether models of visual concepts learned from collections of web images can be used to improve the image ranking associated with a text-based query. Second, we will investigate the hypothesis that the click patterns associated with standard web image search engines can be used to learn query-specific image similarity measures that support improved query refinement. We will evaluate our research by constructing a prototype integrated hybrid retrieval system based on data from 300K real-world image queries. We will conduct user studies to evaluate the effectiveness of our learned similarity measures and quantify the benefit of our method in real-world search tasks such as target search.
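To make the second hypothesis concrete, the sketch below shows one simple way click patterns could induce a query-specific similarity measure: images co-clicked for the same query are treated as similar pairs, and each feature dimension is weighted inversely to its average squared difference within those pairs, so dimensions on which co-clicked images agree dominate the distance. The weighting scheme, feature vectors, and click log are all illustrative assumptions, not the learning algorithm proposed in this work.

```python
import numpy as np

def learn_diagonal_weights(features: dict[str, np.ndarray],
                           coclick_pairs: list[tuple[str, str]]) -> np.ndarray:
    """Learn per-dimension weights for a query-specific distance.

    Dimensions on which co-clicked (presumed-similar) images agree receive
    large weights; dimensions on which they differ are down-weighted.
    """
    sq_diffs = np.array([(features[a] - features[b]) ** 2 for a, b in coclick_pairs])
    return 1.0 / (sq_diffs.mean(axis=0) + 1e-6)

def weighted_distance(a: np.ndarray, b: np.ndarray, w: np.ndarray) -> float:
    """Diagonally weighted Euclidean distance under the learned weights."""
    return float(np.sqrt((w * (a - b) ** 2).sum()))

# Hypothetical click log for the query "apple": img1 and img2 were
# clicked in the same sessions, so they form a co-click pair.
feats = {
    "img1": np.array([0.9, 0.2, 0.5]),
    "img2": np.array([0.8, 0.3, 0.1]),
    "img3": np.array([0.1, 0.9, 0.5]),
}
w = learn_diagonal_weights(feats, [("img1", "img2")])
print(weighted_distance(feats["img1"], feats["img3"], w))
```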
Identifier | oai:union.ndltd.org:GATECH/oai:smartech.gatech.edu:1853/43746
Date | 06 January 2012
Creators | Jing, Yushi
Publisher | Georgia Institute of Technology
Source Sets | Georgia Tech Electronic Thesis and Dissertation Archive
Detected Language | English
Type | Dissertation