241

Personalised information filtering using event causality

Dolbear, Catherine January 2004 (has links)
Previous research on multimedia information filtering has mainly concentrated on key frame identification and video skim generation for browsing purposes; however, applications requiring the generation of summaries as the final product for user consumption are of equal scientific and commercial interest. Recent advances in computer vision have enabled the extraction of semantic events from an audio-visual signal, so it can be assumed for our purposes that such semantic labels are already available for use. We concentrate instead on developing methods to prioritise these semantic elements for inclusion in a summary which can be personalised to meet a particular user's needs. Our work differentiates itself from that in the literature as it is driven by the results of a knowledge elicitation study with expert summarisers. The experts in our study believe that summaries structured as a narrative are better able to convey the content of the original data to a user. Motivated by the information filtering problem, the primary contribution of this thesis is the design and implementation of a system to summarise sequences of events by automatic modelling of the causal relationships between them. We show, by comparison against summaries generated by experts and with the introduction of a new coherence metric, that modelling the causal relationships between events increases the coherence and accuracy of summaries. We suggest that this claim is valid, not only in the domain of soccer highlights generation, in which we carry out the bulk of our experiments, but also in any other domain in which causal relationships can be identified between events. This proposal is tested by applying our summarisation system to another, significantly different domain, that of business meeting summarisation, using the soccer training set and a manually generated ontology mapping.
We introduce the concept of a context-group of causally related events as a first step towards modelling narrative episodes and present a comparison between a case-based reasoning and a two-stage Markov model approach to summarisation. For both methods we show that by including entire context-groups in the summary, rather than single events in isolation, more accurate summaries can be generated. Our approach to personalisation biases a summary according to particular narrative plotlines using different subsets of the training data. Results show that the number of instances of certain event classes can be increased by biasing the training set appropriately. This method gives very similar results to a standard weighting method, while avoiding the need to tailor the weights to a particular application domain.
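The context-group idea lends itself to a small illustration. The sketch below is not the thesis's implementation: the event names, priority scores, and causal links are invented, and summary generation is reduced to a greedy pass over connected components of the causal relation, so that whole context-groups (rather than isolated events) enter the summary.

```python
# Hypothetical sketch of summarising by context-groups of causally related
# events. All events, links, and priorities below are invented for illustration.
from collections import defaultdict

def context_groups(events, causal_links):
    """Partition events into connected components under the causal relation."""
    parent = {e: e for e in events}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in causal_links:
        parent[find(a)] = find(b)
    groups = defaultdict(list)
    for e in events:
        groups[find(e)].append(e)
    return list(groups.values())

def summarise(events, causal_links, priority, budget):
    """Greedily add whole context-groups, highest mean priority first."""
    groups = context_groups(events, causal_links)
    groups.sort(key=lambda g: -sum(priority[e] for e in g) / len(g))
    summary = []
    for g in groups:
        if len(summary) + len(g) <= budget:     # all-or-nothing per group
            summary.extend(sorted(g, key=events.index))  # keep temporal order
    return summary

events = ["kickoff", "foul", "free_kick", "goal", "substitution"]
links = [("foul", "free_kick"), ("free_kick", "goal")]
priority = {"kickoff": 0.2, "foul": 0.5, "free_kick": 0.6,
            "goal": 1.0, "substitution": 0.1}
print(summarise(events, links, priority, budget=4))
```

The point of the all-or-nothing rule is that a high-priority event (the goal) drags its causal antecedents (the foul and the free kick) into the summary with it, which is what the thesis argues makes summaries more coherent.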
242

Evaluation of Computer Vision Algorithms Optimized for Embedded GPUs

Nilsson, Mattias January 2014 (has links)
The interest in using GPUs as general-purpose processing units for heavy computations (GPGPU) has increased in the last couple of years. Manufacturers such as Nvidia and AMD make GPUs powerful enough to outperform CPUs by an order of magnitude for suitable algorithms. For embedded systems, GPUs are not as popular yet. The embedded GPUs available on the market have often not been able to justify hardware changes from the current systems (CPUs and FPGAs) to systems using embedded GPUs: they have been too hard to obtain, too energy-consuming, and not suitable for some algorithms. At SICK IVP, advanced computer vision algorithms run on FPGAs. This master thesis optimizes two such algorithms for embedded GPUs and evaluates the result. It also evaluates the status of the embedded GPUs on the market today. The results indicate that embedded GPUs perform well enough to run the evaluated algorithms as fast as needed. The implementations are also easier to understand than those for FPGAs, the competing hardware.
243

Recognition and position estimation of 3D objects from range images using algebraic and moment invariants

Umasuthan, M. January 1995 (has links)
No description available.
244

Geometric methods for video sequence analysis and applications

Isgro, Francesco January 2001 (has links)
No description available.
245

Hough transform methods for curve detection and parameter estimation

Princen, John January 1990 (has links)
No description available.
246

Computer vision methods for guitarist left-hand fingering recognition

Burns, Anne-Marie. January 2006 (has links)
This thesis presents a method to visually detect and recognize fingering gestures of the left hand of a guitarist. The choice of computer vision to perform that task is motivated by the absence of a satisfying method for realtime guitarist fingering detection. The development of this computer vision method follows preliminary manual and automated analyses of video recordings of a guitarist. These first analyses led to some important findings about the design methodology of such a system, namely the focus on the effective gesture, the consideration of the action of each individual finger, and a recognition system not relying on comparison against a knowledge-base of previously learned fingering positions. Motivated by these results, studies on three important aspects of a complete fingering system were conducted. One study was on realtime finger-localization, another on string and fret detection, and the last on movement segmentation. Finally, these concepts were integrated into a prototype and a system for left-hand fingering detection was developed. Such a data acquisition system for fingering retrieval has uses in music theory, music education, automatic music and accompaniment generation and physical modeling.
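One of the sub-problems the abstract names, string and fret detection, reduces to mapping a detected fingertip position onto a (string, fret) pair once the string lines and fret wires have been located in the image. The sketch below is a hypothetical illustration of that mapping step only, not the thesis's method; all coordinates are invented.

```python
# Hypothetical mapping from an image-space fingertip position to a
# (string, fret) pair, given already-detected string and fret coordinates.
import bisect

def to_string_fret(x, y, fret_xs, string_ys):
    """Nearest string by y; fret is the interval of fret wires containing x."""
    string = min(range(len(string_ys)), key=lambda i: abs(string_ys[i] - y)) + 1
    fret = bisect.bisect_right(fret_xs, x)  # 0 means behind the nut (open)
    return string, fret

fret_xs = [100, 180, 255, 325]        # x of fret wires, nut at x=100 (invented)
string_ys = [20, 35, 50, 65, 80, 95]  # y of the six strings (invented)
print(to_string_fret(200, 52, fret_xs, string_ys))
```

A real system would first rectify the fretboard region so that strings run horizontally, which is why a simple nearest-line and interval lookup suffices here.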
247

Learning Patch-based Structural Element Models with Hierarchical Palettes

Chua, Jeroen 21 November 2012 (has links)
Image patches can be factorized into ‘shapelets’ that describe segmentation patterns, and palettes that describe how to paint the segments. This allows a flexible factorization of local shape (segmentation patterns) and appearance (palettes), which we argue is useful for tasks such as object and scene recognition. Here, we introduce the ‘shapelet’ model: a framework that is able to learn a library of ‘shapelet’ segmentation patterns to capture local shape, and hierarchical palettes of colors to capture appearance. Using a learned shapelet library, image patches can be analyzed using a variational technique to produce descriptors that separately describe local shape and local appearance. These descriptors can be used for high-level vision tasks, such as object and scene recognition. We show that the shapelet model is competitive with SIFT-based methods and structure element (stel) model variants on the object recognition datasets Caltech28 and Caltech101, and the scene recognition dataset All-I-Have-Seen.
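The shape/appearance factorization can be illustrated in miniature. The sketch below is not the thesis's model (which is learned variationally with hierarchical palettes): it simply scores each candidate segmentation pattern by fitting the best flat palette (one mean colour per segment) and picking the pattern with the lowest reconstruction error. The patch and library are invented.

```python
# Toy illustration of explaining a patch as "segmentation pattern + palette".
# Patches are flattened lists of grey levels; masks assign a segment id per pixel.

def fit_palette(patch, mask):
    """Best palette for this mask under L2 error: mean value per segment."""
    sums, counts = {}, {}
    for value, seg in zip(patch, mask):
        sums[seg] = sums.get(seg, 0.0) + value
        counts[seg] = counts.get(seg, 0) + 1
    return {seg: sums[seg] / counts[seg] for seg in sums}

def reconstruction_error(patch, mask, palette):
    return sum((v - palette[s]) ** 2 for v, s in zip(patch, mask))

def best_shapelet(patch, library):
    """Pick the segmentation pattern whose best palette explains the patch."""
    scored = []
    for mask in library:
        palette = fit_palette(patch, mask)
        scored.append((reconstruction_error(patch, mask, palette), mask, palette))
    return min(scored, key=lambda t: t[0])

# A 4-pixel "patch" and two candidate segmentation patterns (invented).
patch = [0.9, 0.8, 0.1, 0.2]
library = [[0, 0, 1, 1],   # left/right split
           [0, 1, 0, 1]]   # interleaved split
err, mask, palette = best_shapelet(patch, library)
print(mask, palette)
```

The descriptor idea then falls out of the factorization: the chosen mask index describes local shape, and the fitted palette describes local appearance, independently of one another.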
249

Vision-based Human-computer Interaction Using Laser Pointer

Erdem, Ibrahim Aykut 01 January 2003 (has links) (PDF)
With the availability of today's inexpensive powerful hardware, it becomes possible to design real-time computer vision systems even on personal computers. Therefore, computer vision becomes a powerful tool for human-computer interaction (HCI). In this study, three different vision-based HCI systems are described. As in all vision-based HCI systems, the developed systems require a camera (a webcam) to monitor the actions of the users. For pointing tasks, a laser pointer is used as the pointing device. The first system is the Vision-Based Keyboard System. In this system, the keyboard is a passive device, so it can be made of any material carrying a keyboard layout image. The web camera is placed so that it sees the entire keyboard image and captures the movement of the laser beam. The user enters a character by covering the corresponding character region in the keyboard layout image with the laser pointer. Additionally, this keyboard system can easily be adapted for disabled people who have little or no control of their hands: the user can attach a laser pointer to an eyeglass and control the beam by moving only his/her head. For the same class of disabled people, a Vision-Based Mouse System is also developed. Using the same setup as the keyboard system, it lets users control the mouse cursor and mouse actions. The last system is the Vision-Based Continuous Graffiti-like Text Entry System. The user sketches characters in a Graffiti™-like alphabet in a continuous manner on a flat surface using a laser pointer. The beam is tracked across the image sequence captured by the camera, and the written word is recognized from the extracted trace of the laser beam.
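All three systems depend on one core step: locating the laser dot in each camera frame. A minimal sketch of that step, assuming the dot saturates the sensor and is therefore the brightest region, is to take the centroid of the pixels above an intensity threshold. The threshold and the toy frame below are invented; a real system would work on webcam images and filter by colour as well as brightness.

```python
# Minimal laser-dot localization: centroid of the brightest pixels in a frame.
# Frames are lists of rows of grey levels (0-255); threshold is an assumption.

def find_laser_dot(frame, threshold=200):
    """Return the (row, col) centroid of pixels >= threshold, or None."""
    hits = [(r, c) for r, row in enumerate(frame)
                   for c, v in enumerate(row) if v >= threshold]
    if not hits:
        return None
    mean_row = sum(r for r, _ in hits) / len(hits)
    mean_col = sum(c for _, c in hits) / len(hits)
    return mean_row, mean_col

# 4x4 greyscale frame with a bright 2-pixel "dot" in the top-right corner.
frame = [[10, 12, 230, 240],
         [ 9, 11,  10,  12],
         [10, 10,  11,  10],
         [12, 10,  10,  11]]
print(find_laser_dot(frame))
```

Feeding this per-frame position into a character recognizer over time is what the third system's continuous text entry amounts to; the keyboard and mouse systems instead map the position onto layout regions or cursor coordinates.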
250

Object extraction from infrared images

Garg, Anupam. Unknown Date (has links)
Thesis (MEng (ElectronSys))--University of South Australia, 1996
