1

The detection of contours and their visual motion

Spacek, L. A. January 1985 (has links)
No description available.
2

The extraction and recognition of text from multimedia document images

Smith, R. W. January 1987 (has links)
No description available.
3

Multisubband structures and their application to image processing

Tufan, Emir January 1996 (has links)
No description available.
4

50,000 Tiny Videos: A Large Dataset for Non-parametric Content-based Retrieval and Recognition

Karpenko, Alexandre 22 September 2009 (has links)
This work extends the tiny image data-mining techniques developed by Torralba et al. to videos. A large dataset of over 50,000 videos was collected from YouTube. This is the largest user-labeled research database of videos available to date. We demonstrate that a large dataset of tiny videos achieves high classification precision in a variety of content-based retrieval and recognition tasks using very simple similarity metrics. Content-based copy detection (CBCD) is evaluated on a standardized dataset, and the results are applied to related video retrieval within tiny videos. We use our similarity metrics to improve text-only video retrieval results. Finally, we apply our large labeled video dataset to various classification tasks. We show that tiny videos are better suited for classifying activities than tiny images. Furthermore, we demonstrate that classification can be improved by combining the tiny images and tiny videos datasets.
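For illustration, a minimal sketch of the kind of retrieval this abstract describes: nearest-neighbour lookup over fixed-size "tiny" videos with a very simple similarity metric. The sum-of-squared-differences distance, the array shapes, and the random data below are assumptions for the sketch, not the thesis's exact metrics or dataset.

    # Illustrative nearest-neighbour retrieval over "tiny" videos.
    # Assumes each video is downsampled to a fixed shape, e.g.
    # (n_frames, height, width) grayscale; the SSD metric is a stand-in
    # for the simple similarity metrics mentioned in the abstract.
    import numpy as np

    def ssd_distance(a, b):
        """Sum of squared differences between two equally shaped tiny videos."""
        diff = a.astype(np.float32) - b.astype(np.float32)
        return float(np.sum(diff * diff))

    def nearest_neighbours(query, database, k=5):
        """Return indices of the k most similar tiny videos in the database."""
        dists = np.array([ssd_distance(query, v) for v in database])
        return np.argsort(dists)[:k]

    # Hypothetical usage: 1000 tiny videos of 10 frames at 32x32 pixels.
    db = np.random.randint(0, 256, size=(1000, 10, 32, 32), dtype=np.uint8)
    print(nearest_neighbours(db[0], db, k=3))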
5

Multiview active shape models with SIFT descriptors

Milborrow, Stephen January 2016 (has links)
This thesis presents techniques for locating landmarks in images of human faces. A modified Active Shape Model (ASM [21]) is introduced that uses a form of SIFT descriptors [68]. Multivariate Adaptive Regression Splines (MARS [40]) are used to efficiently match descriptors around landmarks. This modified ASM is fast and performs well on frontal faces. The model is then extended to also handle non-frontal faces. This is done by first estimating the face's pose, rotating the face upright, then applying one of three ASM submodels specialized for frontal, left, or right three-quarter views. The multiview model is shown to be effective on a variety of datasets.
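For illustration, a minimal sketch of the descriptor-matching step described above: around the current landmark estimate, each candidate offset is scored with a descriptor and a fitted model, and the best-scoring offset is kept. The gradient-histogram descriptor and the linear scorer below are simplified stand-ins for the thesis's SIFT-style descriptors and MARS models; all names and parameters are illustrative.

    # Illustrative landmark search in the spirit of a descriptor-based ASM:
    # score a descriptor at each candidate offset around the current landmark
    # estimate and keep the best-scoring offset.
    import numpy as np

    def patch_descriptor(image, x, y, half=8, bins=8):
        """Crude gradient-orientation histogram of the patch centred on (x, y)."""
        patch = image[y - half:y + half, x - half:x + half].astype(np.float32)
        gy, gx = np.gradient(patch)
        mag = np.hypot(gx, gy)
        ang = np.arctan2(gy, gx)                           # -pi .. pi
        idx = ((ang + np.pi) / (2 * np.pi) * bins).astype(int) % bins
        hist = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)
        return hist / (np.linalg.norm(hist) + 1e-8)

    def best_offset(image, x, y, weights, search=4):
        """Try each offset in a small window and return the best-scoring one."""
        best, best_score = (0, 0), -np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                score = weights @ patch_descriptor(image, x + dx, y + dy)
                if score > best_score:
                    best, best_score = (dx, dy), score
        return best

    # Hypothetical usage with random weights standing in for a trained model.
    img = (np.random.rand(200, 200) * 255).astype(np.uint8)
    print(best_offset(img, 100, 100, weights=np.random.rand(8)))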
6

Segmentation and clustering in neural networks for image recognition

Jan, Ying-Wei January 1994 (has links)
No description available.
7

The Smart Phone as a Mouse

Qin, Yinghao January 2006 (has links)
With the development of hardware, the mobile phone has become a feature-rich handheld device. Built-in cameras and Bluetooth are supported by most current mobile phones. A real-time image processing experiment was conducted with a SonyEricsson P910i smartphone and a desktop computer. This thesis describes the design and implementation of a system that uses a mobile phone as a PC mouse. The movement of the phone is detected by analyzing the images captured by its onboard camera, and the mouse cursor on the PC is controlled by that movement.
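The abstract does not give the motion-estimation method, so the sketch below only illustrates one plausible approach: estimate the shift between consecutive camera frames with phase correlation (cv2.phaseCorrelate) and scale it into cursor movement. The gain, the use of a local webcam as a stand-in for the phone camera, and the move_cursor_by helper are assumptions for the sketch.

    # Illustrative cursor control from camera motion: estimate the shift
    # between consecutive frames with phase correlation and scale it into
    # mouse movement. Not the implementation described in the thesis.
    import cv2
    import numpy as np

    GAIN = 5.0  # cursor pixels per pixel of image shift (assumed)

    def move_cursor_by(dx, dy):
        # Hypothetical stand-in for the PC-side cursor update.
        print(f"move cursor by ({dx:+.1f}, {dy:+.1f})")

    def to_gray32(frame):
        return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

    cap = cv2.VideoCapture(0)            # stand-in for the phone camera stream
    ok, frame = cap.read()
    prev = to_gray32(frame)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cur = to_gray32(frame)
        (dx, dy), _ = cv2.phaseCorrelate(prev, cur)   # estimated inter-frame shift
        move_cursor_by(GAIN * dx, GAIN * dy)
        prev = cur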
8

Exploration Of Image Recognition On Specific Patterns and Research Of Sub-pixel Algorithm

Yang, Jeng-Ho 10 July 2002 (has links)
Image processing technologies are broadly applied in modern machine vision and industrial inspection, but there is usually a trade-off between inspection accuracy and speed. We address this in two steps. First, we develop the major image processing methods, such as boundary detection, noise removal, and pattern matching. Second, we focus on sub-pixel algorithms and boundary research to improve image accuracy and processing time in software, under limited hardware. A pixel is the most basic element of an image, but it can be divided mathematically into several smaller parts, so the achievable accuracy can be improved beyond one pixel. We apply such an algorithm in a continuous manner to realize this goal, and study the image recognition flow to find the best workflow for given image properties.
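For illustration, a standard sub-pixel technique consistent with the idea of dividing a pixel mathematically: locate the strongest gradient along a 1-D intensity profile and refine its position by fitting a parabola through the three neighbouring gradient values. This is a common method, not necessarily the algorithm developed in the thesis.

    # Illustrative sub-pixel edge localisation along a 1-D intensity profile.
    import numpy as np

    def subpixel_edge(profile):
        g = np.abs(np.gradient(profile.astype(np.float32)))
        i = int(np.argmax(g[1:-1])) + 1       # strongest gradient, away from the ends
        gm, g0, gp = g[i - 1], g[i], g[i + 1]
        denom = gm - 2.0 * g0 + gp
        delta = 0.0 if denom == 0 else 0.5 * (gm - gp) / denom
        return i + delta                      # edge position in fractional pixels

    # Example: a smooth step edge centred between pixels 9 and 10.
    x = np.arange(20, dtype=np.float32)
    profile = 1.0 / (1.0 + np.exp(-(x - 9.5)))
    print(subpixel_edge(profile))             # ~9.5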
9

Image understanding for automatic human and machine separation

Romero Macias, Cristina January 2013 (has links)
The research presented in this thesis aims to extend the capabilities of human interaction proofs in order to improve security in web applications and services. The research focuses on developing a more robust and efficient Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) to increase the gap between human recognition and machine recognition. Two main novel approaches are presented, each targeting a different area of human and machine recognition: a character recognition test and an image recognition test. Along with the novel approaches, a categorisation of the available CAPTCHA methods is also introduced. The character recognition CAPTCHA is based on the creation of depth perception by using shadows to represent characters. The characters are created by the imaginary shadows produced by a light source, using as a basis the gestalt principle that human beings can perceive whole forms instead of just a collection of simple lines and curves. This approach was developed in two stages: firstly, two-dimensional characters, and secondly, three-dimensional character models. The image recognition CAPTCHA is based on the creation of cartoons out of faces. The faces used belong to people in the entertainment business, politicians, and sportsmen. The principal basis of this approach is that face perception is a cognitive process that humans perform easily and with a high rate of success. The process involves the use of face morphing techniques to distort the faces into cartoons, making the resulting images more robust against machine recognition. Exhaustive tests on both approaches using OCR software, SIFT image recognition, and face recognition software show an improvement in the human recognition rate, whilst preventing robots from breaking through the tests.
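For illustration, a minimal sketch of the kind of distortion the image-recognition test relies on: a smooth random warp applied to a face image so that it remains easy for humans to recognise but harder for automatic recognisers. This generic warp, and the file names and parameters in it, are assumptions; it is not the thesis's morphing and cartooning pipeline.

    # Illustrative face distortion: perturb the image with a smooth random
    # displacement field and resample it with cv2.remap.
    import cv2
    import numpy as np

    def smooth_warp(image, amplitude=8.0, sigma=25.0, seed=0):
        h, w = image.shape[:2]
        rng = np.random.default_rng(seed)
        # Random displacements, blurred so neighbouring pixels move together.
        dx = cv2.GaussianBlur(rng.standard_normal((h, w)).astype(np.float32), (0, 0), sigma)
        dy = cv2.GaussianBlur(rng.standard_normal((h, w)).astype(np.float32), (0, 0), sigma)
        dx *= amplitude / (np.abs(dx).max() + 1e-8)
        dy *= amplitude / (np.abs(dy).max() + 1e-8)
        xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
        return cv2.remap(image, xs + dx, ys + dy, interpolation=cv2.INTER_LINEAR)

    face = cv2.imread("face.jpg")                  # assumed input image
    cv2.imwrite("face_distorted.jpg", smooth_warp(face))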
