151

Airborne Infrared Target Tracking with the Nintendo Wii Remote Sensor

Beckett, Andrew 1984- 14 March 2013 (has links)
Intelligence, surveillance, and reconnaissance unmanned aircraft systems (UAS) are the most common variety of UAS in use today and provide invaluable capabilities to both the military and civil services. Keeping the sensors centered on a point of interest for an extended period of time is a demanding task requiring the full attention and cooperation of the UAS pilot and sensor operator. There is great interest in developing technologies which allow an operator to designate a target and allow the aircraft to automatically maneuver and track the designated target without operator intervention. Presently, the barriers to entry for developing these technologies are high: expertise in aircraft dynamics and control as well as in real-time motion video analysis is required, and the cost of the systems required to flight test these technologies is prohibitive. However, if the research intent is purely to develop a vehicle maneuvering controller then it is possible to obviate the video analysis problem entirely. This research presents a solution to the target tracking problem which reliably provides automatic target detection and tracking with low expense and computational overhead by making use of the infrared sensor from a Nintendo Wii Remote Controller.
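
As a rough illustration of the idea, the sketch below turns blob reports from the Wii Remote's infrared camera (which reports target positions on a 1024x768 grid) into simple proportional steering commands that keep a designated target centered. This is not the controller developed in the thesis; the gains, sign conventions, and the Bluetooth read-out of blob positions are illustrative assumptions.

```python
# Illustrative sketch: convert Wii Remote IR blob reports into proportional
# steering commands that keep the tracked target centred in the IR image.
from dataclasses import dataclass
from typing import List, Optional, Tuple

IR_WIDTH, IR_HEIGHT = 1024, 768          # native IR camera coordinate space

@dataclass
class SteeringCommand:
    yaw_rate: float    # positive = turn right (illustrative convention)
    pitch_rate: float  # positive = pitch up

def track_target(blobs: List[Tuple[int, int]],
                 k_yaw: float = 0.002,
                 k_pitch: float = 0.002) -> Optional[SteeringCommand]:
    """Return a proportional command that drives the first IR blob toward
    the image centre, or None if no target is visible."""
    if not blobs:
        return None
    x, y = blobs[0]                        # track the first reported blob
    err_x = x - IR_WIDTH / 2               # horizontal pixel error
    err_y = IR_HEIGHT / 2 - y              # vertical pixel error (y grows downward)
    return SteeringCommand(yaw_rate=k_yaw * err_x, pitch_rate=k_pitch * err_y)

# Example: a blob to the right of and below centre yields a right / pitch-down command.
print(track_target([(700, 500)]))
```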
152

Development of a vision-based local positioning system for weed detection

Fontaine, Veronique 18 May 2004 (has links)
Herbicide applications could be reduced if they were targeted. Targeting the applications requires prior identification and quantification of the weed population, a task that could be performed by a weed scout robot. The ability to position a camera over the inter-row space of densely seeded crops will help to simplify the task of automatically quantifying weed infestations. As part of the development of an autonomous weed scout, a vision-based local positioning system for weed detection has been developed and tested in a laboratory setting. Four line-detection algorithms were tested, and a robotic positioning device, or XYZtheta-table, was developed and tested.

The line-detection algorithms were based respectively on a stripe analysis, a blob analysis, a linear regression and the Hough Transform; the last two also included an edge-detection step. Images of parallel line patterns representing crop rows were collected at different angles, with and without weed-simulating noise, and processed by the four programs. The ability of the programs to determine the angle of the rows and the location of an inter-row space centreline was evaluated in a laboratory setting. All algorithms behaved approximately the same when determining the row angle in the noise-free images, with a mean error of 0.5°. In the same situation, all algorithms could find the centreline of an inter-row space within 2.7 mm. Generally, the mean errors increased when noise was added to the images, up to 1.1° and 8.5 mm for the linear-regression algorithm. Specific dispersions of the weeds were identified as possible causes of the increased error in noisy images. Because of its insensitivity to noise, the stripe-analysis algorithm was considered the best overall. The fastest program was the blob-analysis algorithm, with a mean processing time of 0.35 s per image. Future work involves evaluation of the line-detection algorithms with field images.

The XYZtheta-table consisted of rails allowing movement of a camera in the three orthogonal directions and of a rotational table that could rotate the camera about a vertical axis. The ability of the XYZtheta-table to accurately move the camera within the XY-space and rotate it to a desired angle was evaluated in a laboratory setting. The XYZtheta-table was able to move the camera to within 7 mm of a target and to rotate it with a mean error of 0.07°. The positioning accuracy could be improved by simple mechanical modifications to the XYZtheta-table.
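
As a rough illustration of the Hough-Transform variant described above, the sketch below estimates the dominant row angle from a binary top-view image using OpenCV's Canny edge detector and standard Hough line transform. The parameter values, the synthetic test image, and the use of the median orientation are illustrative assumptions, not the thesis's implementation.

```python
# Sketch: estimate the crop-row angle from an overhead image via edge
# detection followed by the standard Hough line transform.
import cv2
import numpy as np

def row_angle_hough(image_gray: np.ndarray) -> float:
    """Return the dominant row angle (degrees) from a grayscale image."""
    edges = cv2.Canny(image_gray, 50, 150)                  # edge-detection step
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)       # (rho, theta) pairs
    if lines is None:
        raise ValueError("no lines detected")
    # Take the median orientation over all detected lines as the row angle.
    thetas = np.array([theta for (rho, theta) in lines[:, 0]])
    return float(np.degrees(np.median(thetas)))

# Synthetic test image: two vertical "rows" drawn as white stripes.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.line(img, (60, 0), (60, 199), 255, 3)
cv2.line(img, (140, 0), (140, 199), 255, 3)
print(f"estimated row angle: {row_angle_hough(img):.1f} deg")
```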
153

Evolutionary Design for Computational Visual Attention

Bruce, Neil January 2003 (has links)
A new framework for simulating the visual attention system in primates is introduced. The proposed architecture is an abstraction of existing approaches, influenced by the work of Koch and Ullman, and Tompa. Each stage of the attentional hierarchy is chosen with consideration for both psychophysics and mathematical optimality. A set of attentional operators is derived that acts on basic image channels of intensity, hue and orientation to produce maps representing the perceptual importance of each image pixel. The development of these operators is realized within the context of a genetic optimization. The model includes the notion of an information domain, in which feature maps are transformed to a domain that more closely corresponds to the human visual system. A careful analysis of various issues, including feature extraction, density estimation and data fusion, is presented within the context of the visual attention problem.
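
A minimal sketch of the kind of pipeline described above: intensity, hue and orientation feature maps are computed, normalised and fused into a single perceptual-importance map. The plain averaging fusion and the Gabor filter bank parameters below are placeholders; the thesis derives its attentional operators through genetic optimisation rather than hand-tuning.

```python
# Sketch: fuse intensity, hue and orientation feature maps into a saliency map.
import cv2
import numpy as np

def normalise(m: np.ndarray) -> np.ndarray:
    m = m.astype(np.float32)
    return (m - m.min()) / (m.max() - m.min() + 1e-9)

def saliency_map(bgr: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    intensity = hsv[:, :, 2].astype(np.float32)
    hue = hsv[:, :, 0].astype(np.float32)

    # Orientation channel: maximum response over four Gabor orientations.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    responses = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        kernel = cv2.getGaborKernel((9, 9), 2.0, theta, 6.0, 0.5)
        responses.append(np.abs(cv2.filter2D(gray, -1, kernel)))
    orientation = np.max(responses, axis=0)

    # Fuse the normalised channels into one perceptual-importance map.
    return (normalise(intensity) + normalise(hue) + normalise(orientation)) / 3.0

img = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
print(saliency_map(img).shape)  # (64, 64), values in [0, 1]
```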
155

Non-destructive Testing Of Textured Foods By Machine Vision

Beriat, Pelin 01 February 2009 (has links) (PDF)
In this thesis, two different approaches are used to extract relevant features for classifying aflatoxin-contaminated and uncontaminated scaled chili pepper samples: a statistical approach and a Local Discriminant Bases (LDB) approach. In the statistical approach, First Order Statistical (FOS) features and Gray Level Co-occurrence Matrix (GLCM) features are extracted. In the LDB approach, the original LDB algorithm is modified to perform 2D searches to extract the most discriminative features from the hyperspectral images, by removing irrelevant features and/or combining features that do not provide sufficient discriminative information on their own. The classification is performed using a Linear Discriminant Analysis (LDA) classifier. Hyperspectral images of scaled chili peppers purchased from various locations in Turkey are used in this study. A correct classification accuracy of about 80% is obtained using the extracted features.
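
The statistical branch of the approach can be sketched with off-the-shelf tools: GLCM texture features per image patch fed to an LDA classifier. The example below uses scikit-image and scikit-learn with random stand-in patches; the distances, angles and chosen GLCM properties are illustrative assumptions rather than the thesis's exact settings.

```python
# Sketch: GLCM texture features per grey-level patch, classified with LDA.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def glcm_features(patch: np.ndarray) -> np.ndarray:
    """Contrast, homogeneity, energy and correlation from an 8-bit patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Toy dataset: 40 random 32x32 patches, binary labels (contaminated / clean).
rng = np.random.default_rng(0)
X = np.array([glcm_features(rng.integers(0, 256, (32, 32), dtype=np.uint8))
              for _ in range(40)])
y = rng.integers(0, 2, 40)

clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```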
156

Hand Gesture Recognition System

Gingir, Emrah 01 September 2010 (has links) (PDF)
This thesis presents a hand gesture recognition system, which replaces input devices like the keyboard and mouse with static and dynamic hand gestures, for interactive computer applications. Despite the increasing attention paid to such systems, certain limitations remain in the literature. Most applications impose constraints such as specific lighting conditions, use of a particular camera, requiring the user to wear a multi-colored glove, or the need for large amounts of training data. The system presented in this study removes all of these restrictions and provides an adaptive, effort-free environment for the user. The study starts with an analysis of the performance of different color spaces for skin color extraction. This analysis is independent of the working system and is performed only to obtain useful information about the color spaces. The working system is based on two steps: hand detection and hand gesture recognition. In the hand detection process, a skin locus in the normalized RGB color space is used to threshold the coarse skin pixels in the image. Then an adaptive skin locus, whose varying boundaries are estimated from the coarse skin region pixels, segments the distinct skin color in the image for the current conditions. Since the face has a distinct shape, it is detected among the connected groups of skin pixels using shape analysis. Connected groups of skin pixels that are not the face are identified as hands. The hand gesture is recognized by an improved centroidal profile method applied around the detected hand. A 3D flight war game, a boxing game and a media player, all controlled remotely using only static and dynamic hand gestures, were developed as human-machine interface applications based on the theoretical background of this study. In the experiments, recorded videos were used to measure the performance of the system, and a correct recognition rate of approximately 90% was achieved with near-real-time computation.
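
The coarse skin-detection step can be sketched as a fixed threshold on normalised r-g chromaticity. The locus bounds below are illustrative placeholders; the thesis refines the segmentation with an adaptive locus estimated from the coarse skin pixels.

```python
# Sketch: coarse skin segmentation by thresholding in normalised RGB space.
import numpy as np

def coarse_skin_mask(bgr: np.ndarray,
                     r_range=(0.36, 0.46),
                     g_range=(0.28, 0.36)) -> np.ndarray:
    """Return a boolean mask of candidate skin pixels in normalised RGB."""
    img = bgr.astype(np.float32)
    s = img.sum(axis=2) + 1e-6
    r = img[:, :, 2] / s          # normalised red   (OpenCV-style BGR order)
    g = img[:, :, 1] / s          # normalised green
    return ((r >= r_range[0]) & (r <= r_range[1]) &
            (g >= g_range[0]) & (g <= g_range[1]))

frame = np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8)
mask = coarse_skin_mask(frame)
print("candidate skin pixels:", int(mask.sum()))
```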
157

Hyperspectral Imaging And Machine Learning Of Texture Foods For Classification

Atas, Musa 01 October 2011 (has links) (PDF)
In this thesis, the main objective is to design a machine vision system that classifies aflatoxin-contaminated chili peppers from uncontaminated ones in a rapid and non-destructive manner via hyperspectral imaging and machine learning techniques. Hyperspectral image series of chili pepper samples collected from different regions of Turkey have been acquired under halogen and UV illumination. A novel feature set based on the quantized absolute difference of consecutive spectral band features is proposed. Spectral band energies, along with the absolute difference energies of consecutive spectral bands, are utilized as features and compared with other feature extraction methods such as the Teager energy operator and 2D wavelet Linear Discriminant Bases (2D-LDB). For feature selection, Fisher discrimination power, the information-theoretic Minimum Redundancy Maximum Relevance (mRMR) method and a proposed Multi Layer Perceptron (MLP) based feature selection scheme are utilized. Finally, a Linear Discriminant Classifier (LDC), Support Vector Machines (SVM) and an MLP are used as classifiers. It is observed that the MLP outperforms the other learning models in terms of predictor performance. We verified the performance and robustness of the proposed methods on different real-world datasets. It is suggested that, to achieve high classification accuracy and predictor robustness, a machine vision system with halogen excitation and quantized absolute difference of consecutive spectral band features should be utilized.
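
A hedged sketch of the proposed feature set and classifier: per-band energies plus quantised absolute differences of consecutive band energies, classified with an MLP. The band count, quantisation levels, network size and toy data below are illustrative assumptions, not the thesis's configuration.

```python
# Sketch: band-energy + quantised consecutive-band-difference features, MLP classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

def band_difference_features(cube: np.ndarray, n_levels: int = 16) -> np.ndarray:
    """cube: (height, width, bands) hyperspectral image of one sample."""
    energies = (cube.astype(np.float64) ** 2).mean(axis=(0, 1))   # per-band energy
    diffs = np.abs(np.diff(energies))                             # consecutive-band differences
    q = np.floor(n_levels * diffs / (diffs.max() + 1e-12))        # quantise the differences
    return np.hstack([energies, q])

rng = np.random.default_rng(1)
X = np.array([band_difference_features(rng.random((8, 8, 32))) for _ in range(60)])
y = rng.integers(0, 2, 60)   # contaminated vs. uncontaminated labels (toy)

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
print("training accuracy:", mlp.score(X, y))
```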
158

Shape and Pose Recovery of Novel Objects Using Three Images from a Monocular Camera in an Eye-In-Hand Configuration

Colbert, Steven C. 06 April 2010 (has links)
Knowing the shape and pose of objects of interest is critical information when planning robotic grasping and manipulation maneuvers. The ability to recover this information from objects for which the system has no prior knowledge is a valuable behavior for an autonomous or semi-autonomous robot. This work develops and presents an algorithm for the shape and pose recovery of unknown objects using no a priori information. Using a monocular camera in an eye-in-hand configuration, three images of the object of interest are captured from three disparate viewing directions. Machine vision techniques are employed to process these images into silhouettes. The silhouettes are used to generate an approximation of the surface of the object in the form of a three-dimensional point cloud. The accuracy of this approximation is improved by fitting an eleven-parameter geometric shape to the points such that the fitted shape ignores disturbances from noise and perspective projection effects. The parametrized shape represents the model of the unknown object and can be utilized for planning robot grasping maneuvers or other object classification tasks. This work is implemented and tested in simulation and hardware. A simulator is developed to test the algorithm for various three-dimensional shapes and any possible imaging positions. Several shapes and viewing configurations are tested, and the accuracy of the recoveries is reported and analyzed. After thorough testing of the algorithm in simulation, it is implemented on a six-axis industrial manipulator and tested on a range of real-world objects, both geometric and amorphous. It is shown that the hardware implementation performs exceedingly well and approaches the accuracy of the simulator, despite the additional sources of error and uncertainty present.
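
The silhouette-to-point-cloud step can be illustrated with a simplified visual-hull carving: a voxel grid is carved by three orthographic silhouettes, keeping only voxels whose projections lie inside all three. The thesis uses three calibrated perspective views and then fits an eleven-parameter shape to the resulting points; the orthographic carving below only conveys the basic idea.

```python
# Sketch: carve a voxel occupancy grid from three orthographic silhouettes.
import numpy as np

def carve(sil_xy: np.ndarray, sil_xz: np.ndarray, sil_yz: np.ndarray) -> np.ndarray:
    """Silhouettes are boolean (N, N) masks; returns an (N, N, N) occupancy grid."""
    n = sil_xy.shape[0]
    occ = np.ones((n, n, n), dtype=bool)
    occ &= sil_xy[:, :, None]    # view along z: constrains (x, y)
    occ &= sil_xz[:, None, :]    # view along y: constrains (x, z)
    occ &= sil_yz[None, :, :]    # view along x: constrains (y, z)
    return occ

# Toy example: three circular silhouettes carve an approximate sphere.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
disc = (xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 3) ** 2
occupancy = carve(disc, disc, disc)
points = np.argwhere(occupancy)          # occupied voxel indices, i.e. a coarse point cloud
print("occupied voxels:", len(points))
```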
159

Usability Analysis in Locomotion Interface for Human Computer Interaction System Design

Farhadi-Niaki, Farzin 09 January 2019 (has links)
Over the past decade, more than at any time before, new technologies have been broadly applied in various fields of interaction between human and machine. Despite many functionality studies, how such technologies should be evaluated within the context of human-computer interaction research remains unclear. This research aims to propose a mechanism to evaluate and predict the design of user interfaces together with their interacting components. At the first level of analysis, an original concept extracts the usability results of the components, such as effectiveness, efficiency, adjusted satisfaction, and overall acceptability, for comparison in the fields of interest. At the second level of analysis, another original concept defines new metrics based on the level of complexity in interactions between the input modality and the feedback of performing a task, in the field of classical solid mechanics. Given these results, a set of hypotheses is provided to test whether some common satisfaction criteria can be predicted from their correlations with the components of performance, complexity, and overall acceptability. In the context of this research, three multimodal applications are implemented and experimentally tested to study the quality of interactions through the proposed hypotheses: a) full-body gestures vs. mouse/keyboard, in a Box game; b) arm/hand gestures vs. a three-dimensional haptic controller, in a Slingshot game; and c) hand/finger gestures vs. mouse/keyboard, in a Race game. Their graphical user interfaces are designed to cover a range of static/dynamic gestures, pulse/continuous touch-based controls, and discrete/analog measured tasks. These are quantified based on a new metric termed the index of complexity, which represents a concept of effort in the domain of locomotion interaction. Single and compound devices are also defined and studied to evaluate the effect of the user's attention in multi-tasking interactions. The proposed method of investigating usability is meant to assist human-computer interface developers in reaching proper overall acceptability, performance, and effort-based analyses prior to their final user interface design.
160

Návrh uživatelsky přizpůsobitelného automatizovaného systému vizuální kontroly kvality pro montážní linky / Design of customizable automated visual quality control system for assembly lines

Martini, Silvano January 2017 (has links)
The diploma thesis deals with the topic of machine vision. The research part is devoted to a description of the hardware and software components of a machine vision system. The practical part describes the design and implementation of software that allows several machine vision tasks to be performed at the same time.
