1

The design and control of visual routines for the computation of simple geometric properties and relations

Romanycia, Marc Hector Joseph, January 1987
The present work is based on the Visual Routine theory of Shimon Ullman. This theory holds that efficient visual perception is managed by first applying spatially parallel methods to an initial input image in order to construct the basic representations: maps of features within the image. This phase is followed by the application of serial methods, visual routines, to the most salient items in these and other subsequently created maps. Recent work in the visual routine tradition is reviewed, as well as relevant psychological work on preattentive and attentive vision. The problem of devising a visual routine language for computing geometric properties and relations is analysed, and the most useful basic representations to compute directly from a world of 2-D geometric shapes are determined. It is argued that an experimental program is required to establish which basic operations, and which methods for controlling them, lead to the efficient computation of geometric properties and relations. An implemented computer system is described which can correctly compute, in images of simple 2-D geometric shapes, the properties vertical, horizontal, closed, and convex, and the relations inside, outside, touching, centred-in, connected, parallel, and being-part-of. The visual routines which compute these, the basic operations out of which the visual routines are composed, and the logic which controls the goal-directed application of the routines to the image are all described in detail. The entire system is embedded in a question-and-answer system capable of answering questions about an image, such as "Find all the squares inside triangles" or "Find all the vertical bars outside of closed convex shapes." By asking many such questions about various test images, the effectiveness of the visual routines and their controlling logic is demonstrated. / Science, Faculty of / Computer Science, Department of / Graduate
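The abstract does not reproduce the routine language itself, but the flavour of a serial visual routine over 2-D shapes can be sketched. The Python below is a hypothetical illustration, not the thesis's implemented system: the polygon shape representation, the ray-casting `inside` test, and the `find_pairs` driver are all assumptions made for this sketch of how a goal-directed query like the ones quoted above might be answered.

```python
def inside(point, polygon):
    """Ray-casting test: is `point` inside the closed polygon `polygon`?

    `polygon` is a list of (x, y) vertices in order; a horizontal ray is cast
    rightward from `point` and boundary crossings are counted (odd = inside).
    """
    x, y = point
    crossings = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                           # crossing lies to the right
                crossings += 1
    return crossings % 2 == 1

def find_pairs(shapes, relation):
    """Goal-directed driver: apply a serial routine to each ordered pair of shapes."""
    return [(a["name"], b["name"]) for a in shapes for b in shapes
            if a is not b and relation(a["centre"], b["vertices"])]

# A query in the spirit of "Find all the squares inside triangles":
shapes = [
    {"name": "square", "centre": (2.0, 3.0),
     "vertices": [(1.5, 2.5), (2.5, 2.5), (2.5, 3.5), (1.5, 3.5)]},
    {"name": "triangle", "centre": (2.0, 5.0 / 3.0),
     "vertices": [(0.0, 0.0), (4.0, 0.0), (2.0, 5.0)]},
]
print(find_pairs(shapes, inside))   # -> [('square', 'triangle')]
```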
2

Content-based photo quality assessment.

January 2012
(Abstract in Chinese, translated:) Automatic assessment of photo quality from an aesthetic standpoint has attracted widespread attention in computer vision in recent years. In this thesis we propose content-based photo quality assessment using both regional and global features. First, the subject area of an image, the region that most attracts a viewer's attention, is extracted; regional features from this area are combined with global features to assess quality. Professional photographers adopt different techniques and aesthetic criteria for photos of different content, so we propose extracting subject areas and features in different ways according to photo content. We divide the data into seven categories by visual content and design subject-area extraction methods and features for each, demonstrating the effectiveness of the proposed framework with extensive experiments. We also propose building an adaptive classifier from image content features, so that quality can be assessed automatically without knowing an image's category in advance, with satisfactory results. / Automatically assessing photo quality from the perspective of visual aesthetics is of great interest in high-level vision research and has drawn much attention in recent years. In this paper, we propose content-based photo quality assessment using both regional and global features. Under this framework, subject areas, which draw the most attention of human eyes, are first extracted. Then regional features extracted both from subject areas and background regions are combined with global features to assess photo quality. Since professional photographers adopt different photographic techniques and have different aesthetic criteria in mind when taking different types of photos (e.g. landscape versus portrait), we propose to segment subject areas and extract visual features in different ways according to the variety of photo content. We divide the photos into seven categories based on their visual content and develop a set of new subject area extraction methods and new visual features specially designed for different categories. / This argument is supported by extensive experimental comparisons of existing photo quality assessment approaches as well as our new features over different categories of photos. In addition, we propose an approach of online training an adaptive classifier to combine the proposed features according to the visual content of a test photo without knowing its category. Another contribution of this work is to construct a large and diversified benchmark database for the research of photo quality assessment. It includes 17,613 photos with manually labeled ground truth. This new benchmark database will be released to the research community. / Luo, Wei. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 47-52). / Abstracts also in Chinese.
Table of contents:
Abstract --- p.i
Acknowledgement --- p.iv
1 Introduction --- p.1
1.1 Photo Quality Assessment by Professionals --- p.2
1.2 Automatic Quality Assessment --- p.6
1.3 Our Approach --- p.8
2 Related Work --- p.12
3 Content-based Quality Assessment --- p.15
3.1 Global Features --- p.15
3.1.1 Hue Composition Feature --- p.15
3.1.2 Scene Composition Feature --- p.19
3.2 Subject Area Extraction Methods --- p.21
3.2.1 Clarity-Based Subject Area Extraction --- p.22
3.2.2 Layout-Based Subject Area Extraction --- p.25
3.2.3 Human-Based Subject Area Extraction --- p.25
3.3 Regional Features --- p.25
3.3.1 Dark Channel Feature --- p.27
3.3.2 Clarity Contrast Feature --- p.28
3.3.3 Lighting Contrast Feature --- p.30
3.3.4 Composition Geometry Feature --- p.30
3.3.5 Complexity Features --- p.31
3.3.6 Human-Based Features --- p.31
3.4 Quality Assessment without the Information of Photo Categories --- p.33
4 Experimental Results --- p.37
4.1 Database Description --- p.37
4.2 Experimental Settings --- p.40
4.3 Result Analysis --- p.41
4.4 Conclusions and Discussions --- p.44
Bibliography --- p.47
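A minimal sketch of the regional-plus-global, per-category pipeline described in the abstract might look as follows. This is not the thesis's actual method: the feature contents, the choice of an SVM, and the `Photo` and `train_per_category` names are assumptions for this sketch; the real features (hue composition, dark channel, clarity and lighting contrast, etc.) are far richer, and the adaptive classifier for photos of unknown category is not reproduced here.

```python
from dataclasses import dataclass
import numpy as np
from sklearn.svm import SVC

@dataclass
class Photo:
    category: str               # one of the seven content categories, assumed known here
    global_feats: np.ndarray    # e.g. global composition descriptors
    regional_feats: np.ndarray  # e.g. contrast features on the extracted subject area
    label: int                  # 1 = high quality, 0 = low quality (ground truth)

def feature_vector(p: Photo) -> np.ndarray:
    # In this sketch, regional features from the subject area are simply
    # concatenated with the global features.
    return np.concatenate([p.global_feats, p.regional_feats])

def train_per_category(photos: list[Photo]) -> dict[str, SVC]:
    """Train one quality classifier per content category."""
    models = {}
    for cat in {p.category for p in photos}:
        subset = [p for p in photos if p.category == cat]
        X = np.stack([feature_vector(p) for p in subset])
        y = np.array([p.label for p in subset])
        models[cat] = SVC(probability=True).fit(X, y)
    return models

def assess(models: dict[str, SVC], p: Photo) -> float:
    """Probability that photo `p` is high quality, using its category's model."""
    return models[p.category].predict_proba(feature_vector(p)[None, :])[0, 1]
```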
3

High-level, part-based features for fine-grained visual categorization

Berg, Thomas, January 2017
Object recognition--"What is in this image?"--is one of the basic problems of computer vision. Most work in this area has been on finding basic-level object categories such as plant, car, and bird, but recently there has been an increasing amount of work in fine-grained visual categorization, in which the task is to recognize subcategories of a basic-level category, such as blue jay and bluebird. Experimental psychology has found that while basic-level categories are distinguished by the presence or absence of parts (a bird has a beak but a car does not), subcategories are more often distinguished by the characteristics of their parts (a starling has a narrow, yellow beak while a cardinal has a wide, red beak). In this thesis we tackle fine-grained visual categorization, guided by this observation. We develop alignment procedures that let us compare corresponding parts, build classifiers tailored to finding the interclass differences at each part, and then combine the per-part classifiers to build subcategory classifiers. Using this approach, we outperform previous work in several fine-grained categorization settings: bird species identification, face recognition, and face attribute classification. In addition, the construction of subcategory classifiers from part classifiers allows us to automatically determine which parts are most relevant when distinguishing between any two subcategories. We can use this to generate illustrations of the differences between subcategories. To demonstrate this, we have built a digital field guide to North American birds which includes automatically generated images highlighting the key differences between visually similar species. This guide, "Birdsnap," also identifies bird species in users' uploaded photos using our subcategory classifiers. We have released Birdsnap as a web site and iPhone application.
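The part-based construction described in this abstract can be outlined schematically. The sketch below is illustrative only and is not Berg's actual method: part detection and alignment are assumed to have been done upstream, and the per-part logistic regressions, the score combiner, and the `PartBasedClassifier` name are all assumptions of this sketch. The `most_discriminative_part` helper mirrors the idea of finding which part best separates two subcategories for generating illustrations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class PartBasedClassifier:
    """One linear classifier per part (e.g. beak, eye, wing), each trained on
    features from a patch aligned at that part, combined into a single
    binary subcategory classifier."""

    def __init__(self, part_names):
        self.part_names = list(part_names)
        self.part_models = {name: LogisticRegression(max_iter=1000)
                            for name in self.part_names}
        self.combiner = LogisticRegression(max_iter=1000)

    def fit(self, part_features, labels):
        """part_features: {part_name: (n_samples, d) array of aligned features}."""
        scores = []
        for name in self.part_names:
            self.part_models[name].fit(part_features[name], labels)
            scores.append(self.part_models[name].decision_function(part_features[name]))
        # Combine per-part scores into the final subcategory decision.
        self.combiner.fit(np.stack(scores, axis=1), labels)
        return self

    def predict(self, part_features):
        scores = np.stack(
            [self.part_models[name].decision_function(part_features[name])
             for name in self.part_names], axis=1)
        return self.combiner.predict(scores)

    def most_discriminative_part(self):
        """Part whose score the combiner weights most heavily, i.e. the part
        that best distinguishes the two subcategories."""
        weights = np.abs(self.combiner.coef_[0])
        return self.part_names[int(np.argmax(weights))]
```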
