  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

A system for counting people using image processing /

Ng, Chi-kin. January 2001 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2002. / Includes bibliographical references.
132

An online interactive multiple-perspectives map system

Huo, Jing. January 2002 (has links)
Thesis (M.S.)--University of Florida, 2002. / Title from title page of source document. Includes vita. Includes bibliographical references.
133

Human activity tracking for wide-area surveillance

O'Malley, Patrick D. January 2002 (has links)
Thesis (M.S.)--University of Florida, 2002. / Title from title page of source document. Document formatted into pages; contains vi, 46 p.; also contains graphics. Includes vita. Includes bibliographical references.
134

Image databases using perceptual organization, color and texture for retrieval in digital libraries /

Iqbal, Qasim. January 2002 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2002. / Vita. Includes bibliographical references. Available also from UMI Company.
135

Scene categorization based on multiple-feature reinforced contextual visual words

Qin, Jianzhao., 覃剑钊. January 2011 (has links)
published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
136

Using semantic sub-scenes to facilitate scene categorization and understanding

Zhu, Shanshan, 朱珊珊 January 2014 (has links)
This thesis proposes to learn a cognitive element absent from conventional scene categorization methods, the sub-scene, and to use it to better categorize and understand scenes. In scene categorization, ambiguity arises when a scene is treated as a whole: similar sets of sub-scenes may be arranged differently to compose different scenes, or a single scene may genuinely contain several categories. Such ambiguities can be resolved with knowledge of the sub-scenes, so it is worthwhile to study sub-scenes and use them to better understand a scene.

The research first develops an unsupervised method to segment sub-scenes, emphasizing integral regions rather than the over-segmented regions usually produced by conventional segmentation methods. Several properties of sub-scenes grounded in psychological principles, such as proximity grouping, area of influence, similarity and harmony, are formulated into constraints used directly in the proposed framework, and a self-determined approach produces the final segmentation based on the characteristics of each image in an unsupervised manner. On the Berkeley segmentation dataset the method performs competitively against other state-of-the-art unsupervised segmentation methods, with an F-measure of 0.55, covering of 0.51 and VoI of 1.93; on the Stanford background dataset it achieves an overlapping score of 0.566, compared with 0.499 for the comparison method.

Second, to segment and label sub-scenes simultaneously, a supervised semantic segmentation approach is proposed, built on a Hierarchical Conditional Random Field classification framework. The model integrates two forms of contextual information: global consistency, which generalizes the scene by scene type, and spatial context, which takes spatial relationships between classes into account. By promoting more logical class combinations, the method improves semantic segmentation and achieves the best results on the MSRC-21 dataset, with 87% global accuracy and 81% average accuracy, outperforming all other state-of-the-art methods by 4% on each measure. On the Stanford background dataset it achieves 80.5% global accuracy and 71.8% average accuracy, also outperforming other methods by about 2%.

Finally, the research incorporates sub-scenes into the scene categorization framework to improve categorization performance, especially in ambiguous cases. Sub-scenes are encoded so that their spatial information is preserved: the sub-scene descriptor complements the global descriptor of a scene by evaluating local features together with specific geometric attributes. The method obtains an average categorization accuracy of 92.26% on the 8 Scene Category dataset, more than 2% above all other published methods, and evaluates ambiguous cases more accurately by discerning which parts exemplify a scene category and how those categories are organized. / published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
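As an illustration of the sub-scene encoding idea described in the abstract above, the following is a minimal sketch in Python/NumPy: local features are average-pooled per sub-scene region, paired with simple geometric attributes (relative area and normalized centroid), and concatenated with a global scene descriptor. The function names, the pooling choice and the particular geometric attributes are assumptions made for illustration, not the thesis's actual formulation.

    import numpy as np

    def subscene_descriptor(local_feats, region_labels, positions, image_size, n_regions):
        # local_feats:   (N, D) local feature vectors extracted from the image
        # region_labels: (N,)   sub-scene index assigned to each local feature
        # positions:     (N, 2) pixel coordinates (x, y) of each local feature
        # image_size:    (width, height) of the image
        # n_regions:     number of sub-scenes to encode
        w, h = image_size
        d = local_feats.shape[1]
        parts = []
        for r in range(n_regions):
            mask = region_labels == r
            if mask.any():
                pooled = local_feats[mask].mean(axis=0)   # average-pool features inside the sub-scene
                area = mask.mean()                        # relative area: fraction of features in this region
                cx, cy = positions[mask].mean(axis=0) / np.array([w, h], dtype=float)  # normalized centroid
            else:
                pooled, area, cx, cy = np.zeros(d), 0.0, 0.0, 0.0
            parts.append(np.concatenate([pooled, [area, cx, cy]]))
        return np.concatenate(parts)

    def scene_descriptor(global_feat, local_feats, region_labels, positions, image_size, n_regions):
        # The sub-scene part is appended to, not substituted for, the global descriptor,
        # so sub-scene evidence complements the whole-scene representation.
        sub = subscene_descriptor(local_feats, region_labels, positions, image_size, n_regions)
        return np.concatenate([global_feat, sub])

Appending the per-region parts to the global descriptor is one simple way to let local, spatially anchored evidence complement the whole-scene representation, in the spirit of the descriptor described above.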
137

Maximum likelihood techniques for joint segmentation-classification of multi-spectral chromosome images

Schwartzkopf, Wade Carl 28 August 2008 (has links)
Not available / text
138

Temporal spatio-velocity transform and its applications

Sato, Koichi 28 August 2008 (has links)
Not available / text
139

Image sampling and multiplexing with two-dimensional phase gratings

Scott, Paul Walter January 1978 (has links)
No description available.
140

Data analytics and crawl from hidden web databases

Yan, Hui January 2015 (has links)
University of Macau / Faculty of Science and Technology / Department of Computer and Information Science
