  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
281

Automatic defect detection in industrial radioscopic and ultrasonic images

Lawson, Shaun W. January 1996 (has links)
This thesis describes a number of approaches to the problems of automatic defect detection in ultrasonic Time of Flight Diffraction (TOFD) and X-ray radioscopic images of butt welds in steel plate. A number of novel image segmentation techniques are developed, two of which feature the use of backpropagation artificial neural networks. Two new methods for defect detection in ultrasonic TOFD images are described: the first uses thresholding of individual one-dimensional A-scans, and the second uses a neural network to classify pixels using two-dimensional local-area statistics. In addition, three new methods for defect detection in radioscopic images are described: the first is based on the use of two conventional spatial filters, the second uses grey-level morphology to replace the 'blurring' stage of conventional 'blur and subtract' procedures, and the third uses a neural network to classify pixels using raw grey-level data at the input layer. All five methods show novelty in their methodology, design and implementation, most specifically in that the literature reports (1) no previous methods for automatic defect detection in TOFD images, (2) very few successful implementations of grey-level data processing by neural networks, and (3) few examples of local-area segmentation of 'real' textured images for automatic inspection. The methods developed were tested against data interpreted by skilled NDT inspectors. In the case of the ultrasonic TOFD image processing, both automatic methods performed exceptionally well, producing results comparable to those of a human inspector. In the case of the radioscopic image processing, the ANN method also produced results comparable to those achieved by a human inspector, and gave comparable or consistently better results than a number of existing techniques.
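The 'blur and subtract' family of methods mentioned in the abstract is essentially a morphological top-hat: estimate a smooth background, subtract it, and threshold the residual. The thesis' own implementation is not reproduced here; the following is a minimal sketch of the grey-level-morphology variant, with window size and threshold chosen purely for illustration:

```python
import numpy as np

def grey_erode(img, k=3):
    """Grey-level erosion: each pixel becomes the minimum over a k x k window."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.full_like(img, np.inf)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, padded[dy:dy + h, dx:dx + w])
    return out

def grey_dilate(img, k=3):
    """Grey-level dilation: each pixel becomes the maximum over a k x k window."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.full_like(img, -np.inf)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    return out

def tophat_defects(img, k=3, threshold=50.0):
    """Morphological 'blur and subtract': the grey-level opening (erosion then
    dilation) acts as the background estimate; bright features smaller than the
    window survive the subtraction and are flagged as candidate defects."""
    background = grey_dilate(grey_erode(img, k), k)
    residual = np.asarray(img, dtype=float) - background
    return residual > threshold
```

Swapping the opening for a Gaussian blur recovers the conventional 'blur and subtract' scheme the abstract contrasts it with.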
282

Deformable contour methods for shape extraction from binary edge-point images

Gilson, Stuart J. January 1999 (has links)
No description available.
283

Multi sensor data fusion applied to a class of autonomous land vehicles

Walker, Richard James January 1993 (has links)
Many applications exist for unmanned vehicles: factory maintenance, planetary exploration, in-reactor inspection and so on. Robotic systems will inhabit a world containing obstacles, and these obstacles will threaten their pursuit of a successful goal. In all but the most simple and benign environments these obstacles will be in motion, and the presence or location of an obstacle will not be known a priori. Therefore, in order to build practical, useful robots, a means of sensing the environment to determine traversable and non-traversable space needs to be developed. In addition, to prevent them from becoming lost, practical robots will be required to estimate where they are in the world in relation to known features; this capability is referred to as localisation. Clearly the primary sense for determining traversable space is sight. However, current research into machine vision has produced systems that are either too slow, too specific (i.e. related to a particular problem domain rather than a general one) or too unreliable. These factors have led to the development of an active sensor, the motion structured light sensor. This sensor addresses the ill-posed nature of passive vision and the problem of large data rates by illuminating the world with a laser sheet and determining 3D topography from the image of the intersection of this sheet with the world. The sensor has been developed to detect and track moving obstacles over time, and has also been used as a means of vehicle localisation with respect to an a priori map. Although vision, and in particular structured light, is a useful source of topographic information, other sensors, such as ultrasonic sensors and laser rangefinders, offer the ability to determine the presence of geometric features in a scene. Motivated by the desire to generate richer descriptions of world state from disparate information sources, the research area of Multi Sensor Data Fusion (MSDF) is addressed.
A mechanism for combining information based on the first- and second-order statistics available from the Kalman filter is presented. The MSDF system is applied (i) in simulation to a second-order plant and (ii) to a laboratory-based robot. This approach yields more accurate state estimates, which in turn makes the system more robust, including with respect to sensor failure and sensor error. This thesis therefore presents a method of generating more accurate estimates of state by using multiple sources of information. This enables systems to be built that are more robust, not only because state estimates are more accurate but also because such systems possess multiple redundancy through the use of multiple sensors. It is shown that the use of multiple sensors also makes the system more robust with respect to the poor choice of noise models required by the Kalman filter.
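As a rough illustration (not taken from the thesis) of why fusing two noisy sensors tightens a state estimate, here is the scalar Kalman measurement update applied sequentially; the prior and the sensor noise variances are invented for the example:

```python
def kalman_fuse(prior_mean, prior_var, measurements):
    """Sequentially apply scalar Kalman measurement updates.

    measurements: iterable of (z, r) pairs, where z is a sensor reading and
    r is that sensor's noise variance. Each update pulls the estimate toward
    the reading in proportion to the Kalman gain and shrinks the variance.
    """
    mean, var = float(prior_mean), float(prior_var)
    for z, r in measurements:
        k = var / (var + r)           # Kalman gain
        mean = mean + k * (z - mean)  # innovation-weighted correction
        var = (1.0 - k) * var         # fused variance never exceeds either input
    return mean, var
```

With a near-flat prior and two equally noisy sensors reading 10 and 12, the fused estimate lands at their mean with half the single-sensor variance — the robustness-through-redundancy effect the abstract points to.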
284

Colour constancy and its applications in machine vision

Forsyth, D. A. January 1988 (has links)
No description available.
285

Coverage Planning for Robotic Vision Applications in Complex 3D Environment

Jing, Wei 01 July 2017 (has links)
Using robots to perform vision-based coverage tasks such as inspection, shape reconstruction and surveillance has attracted increasing attention from both academia and industry in the past few years. These tasks require the robot to carry vision sensors such as RGB cameras, laser scanners or thermal cameras to take surface measurements of the desired target objects under a required surface-coverage constraint. Due to the high demand and repetitive nature of these tasks, automatically generating efficient robotic motion plans could significantly reduce cost and improve productivity, which is highly desirable. Several planning approaches have been proposed for vision-based coverage planning problems with robots in the past. However, these planning methods either focused only on coverage problems in 2D environments, produced suboptimal results, or were specific to limited scenarios. In this thesis, we propose novel planning algorithms for vision-based coverage planning problems with industrial manipulators and Unmanned Aerial Vehicles (UAVs) in complex 3D environments. Different sampling and optimization methods are used in the proposed planning algorithms to achieve better planning results. The first and most important step of these coverage planning tasks is to identify a suitable viewpoint set that satisfies the application requirements; this is known as a view planning problem. The second step is to plan collision-free paths, as well as the visiting sequence of the viewpoints; this can be formulated as a sequential path planning problem, or path planning problem for short. In this thesis, we developed view planning methods that generate candidate viewpoints using randomized sampling-based and Medial Object-based methods. The view planning methods led to better results, with fewer required viewpoints and higher coverage ratios.
Moreover, the proposed view planning methods were also applied to a practical application in which a detailed 3D building model needs to be reconstructed when only 2D public map data is available. In addition to the proposed view planning algorithms, we also combined the view planning and path planning problems into a single coverage planning problem and solved it in a single optimization process to achieve better results. The proposed planning method was applied to an industrial shape-inspection application with robotic manipulators. Additionally, we extended the planning method to an industrial robotic inspection system with kinematic redundancy to enlarge the workspace and reduce the required inspection time. Moreover, a learning-based robotic calibration method was also developed in order to accurately position vision sensors at the desired viewpoints in these industrial-manipulator settings.
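The view planning step described above is, at its core, a set-cover problem: choose few viewpoints whose visible surface patches jointly cover the target. The thesis' own sampling and optimization methods are not reproduced here; the following is a generic greedy baseline over hypothetical visibility sets:

```python
def greedy_view_plan(visibility, n_patches):
    """Greedy set-cover baseline for view planning.

    visibility: dict mapping viewpoint id -> set of surface-patch ids seen
    from that viewpoint (in practice obtained e.g. by ray casting against
    the 3D model). Repeatedly picks the viewpoint that covers the most
    still-uncovered patches. Returns (chosen viewpoints, patches that no
    candidate viewpoint can see).
    """
    uncovered = set(range(n_patches))
    chosen = []
    while uncovered:
        best = max(visibility, key=lambda v: len(visibility[v] & uncovered))
        gain = visibility[best] & uncovered
        if not gain:
            break  # remaining patches are invisible from every candidate
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered
```

The subsequent sequencing step — visiting the chosen viewpoints along collision-free paths — is then a travelling-salesman-style problem over this reduced viewpoint set.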
286

Deep neural networks for video classification in ecology

Conway, Alexander January 2020 (has links)
Analyzing large volumes of video data is a challenging and time-consuming task. Automating this process would be very valuable, especially in ecological research, where massive amounts of video can unlock new avenues of research into the behaviour of animals in their environments. Deep Neural Networks, particularly Deep Convolutional Neural Networks (CNNs), are a powerful class of models for computer vision. When combined with Recurrent Neural Networks (RNNs), deep convolutional models can be applied to video for frame-level video classification. This research studies two datasets: penguins and seals. The purpose of the research is to compare the performance of image-only CNNs, which treat each frame of a video independently, against a combined CNN-RNN approach, and to assess whether incorporating the motion information in the temporal aspect of video improves classification accuracy on these two datasets. Video and image-only models offer similar out-of-sample performance on the simpler seals dataset, but the video model led to moderate performance improvements on the more complex penguin action-recognition dataset.
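The image-only versus video comparison can be caricatured without any actual networks: take per-frame class scores as given, and contrast independent per-frame decisions with decisions made after temporal smoothing — a crude stand-in for the RNN's temporal context, not the thesis' models. The scores below are invented for illustration:

```python
import numpy as np

def per_frame_labels(scores):
    """Image-only behaviour: classify each frame independently.

    scores: array of shape (frames, classes) holding class scores."""
    return scores.argmax(axis=1)

def smoothed_labels(scores, window=3):
    """Video-style behaviour: average class scores over a sliding temporal
    window before deciding, so an isolated noisy frame is outvoted by its
    neighbours."""
    kernel = np.ones(window) / window
    smoothed = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, scores)
    return smoothed.argmax(axis=1)
```

Even this trivial smoothing corrects isolated misclassified frames, which hints at why temporal context helps most on the harder, motion-dependent penguin task.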
287

Computer vision for automatic opening of fuming slag-furnace

Burman, Hannes January 2021 (has links)
This thesis covers the implementation of visual algorithms for a robot that is to operate at a smelter furnace. The goal is for the robot to replace a human in the opening, closing and flow-regulation process, as danger can arise when 1300°C slag flows out of the furnace. A thermal lance is used to open the furnace, which means the robot also has to determine whether the lance is burning or not. A heat camera with temperature intervals 0-660°C and 300-2000°C was used to record the furnace during these critical moments, and the recordings were used to test different vision and tracking algorithms, such as mean shift and continuously adaptive mean shift. The heat images were filtered to extract only the relevant slag-flow region, which was then used to detect whether slag was flowing and to estimate how large the flow was. Opening of the furnace could be identified for both temperature intervals. Closing of the furnace was also successfully detected for both intervals, although the lower interval required a different algorithm to succeed. A relative slag-flow measure has been identified which looks promising for further real-life studies. The lance-ignition result is inconclusive, as the recorded data was not suited to analysing this case, though a few observations indicate that a thermal camera may be unsuitable for tracking the state of the thermal lance.
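The slag-flow part of such a pipeline reduces to band-thresholding the thermal frames and measuring how much of the image lies in the slag temperature band. A minimal sketch, with entirely assumed temperature bounds and opening threshold (the thesis' actual filtering is not reproduced):

```python
import numpy as np

def relative_flow(frame, t_lo=900.0, t_hi=1400.0):
    """Fraction of pixels whose temperature (degrees C) falls in the assumed
    slag band; serves as a relative slag-flow measure between frames."""
    frame = np.asarray(frame, dtype=float)
    mask = (frame >= t_lo) & (frame <= t_hi)
    return mask.mean()

def furnace_open(frame, min_fraction=0.01, **band):
    """Declare the tap hole open once enough hot pixels appear in the band."""
    return relative_flow(frame, **band) >= min_fraction
```

Comparing `relative_flow` across consecutive frames gives a rough flow trend; the mean-shift trackers mentioned above would then localise where in the image that hot region sits.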
288

Weighted Plane Features for Simultaneous Localization and Mapping

Leyder, Nicholas January 2021 (has links)
No description available.
289

Computer vision methods for guitarist left-hand fingering recognition

Burns, Anne-Marie. January 2006 (has links)
No description available.
290

Cell Phenotype Analyzer: Automated Techniques for Cell Phenotyping using Contactless Dielectrophoresis

Bala, Divya Chandrakant 23 June 2016 (has links)
Cancer is among the leading causes of death worldwide. In 2012, there were 14 million new cases and 8.2 million cancer-related deaths worldwide, and the number of new cancer cases is expected to rise to 22 million within the next two decades. Most chronic cancers cannot be cured. However, if the precise cancer cell type is diagnosed at an earlier, less aggressive stage, the chance of curing the disease increases with accurate drug delivery. This work is a humble contribution to the advancement of cancer research. It delves into biological cell phenotyping under a dielectrophoresis setup using computer vision. Dielectrophoresis is a well-known phenomenon in which dielectric particles are subjected to a non-homogeneous electric field. This work is the analytical part of a larger proposed system integrating hardware, software and microfluidics to achieve cancer cell characterization, separation and enrichment using contactless dielectrophoresis. To analyze cell morphology, various detection and tracking algorithms were implemented and tested on a diverse dataset comprising cell-separation video sequences. Related applications such as cell counting and cell-proximity detection were also implemented. Performance was evaluated against ground truth using metrics such as precision, recall and RMS cell-count error. A detection approach using difference of Gaussians and a superpixel algorithm gave the highest average F-measure of 0.745. A nearest-neighbor tracker and a Kalman tracking method gave the best overall tracking performance, with an average F-measure of 0.95. This combination of detection and tracking methods proved best suited to this dataset. A graphical user interface to automate the experimentation process of the proposed system was also designed. / Master of Science
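Of the two trackers named in the abstract, the nearest-neighbor one is simple enough to sketch from scratch; the gating distance and the greedy (rather than globally optimal) assignment below are illustrative choices, not the thesis' exact implementation:

```python
import math

def nn_track(tracks, detections, max_dist=20.0):
    """Greedy nearest-neighbor assignment of detections to existing tracks.

    tracks: dict of track id -> (x, y) last known cell position.
    detections: list of (x, y) cell centroids in the current frame.
    Each track claims its closest unclaimed detection within max_dist;
    leftover detections start new tracks. Returns the updated tracks dict.
    """
    updated = {}
    unmatched = list(range(len(detections)))
    for tid, pos in tracks.items():
        if not unmatched:
            break
        j = min(unmatched, key=lambda i: math.dist(pos, detections[i]))
        if math.dist(pos, detections[j]) <= max_dist:
            updated[tid] = detections[j]
            unmatched.remove(j)
    next_id = max(tracks, default=-1) + 1
    for i in unmatched:
        updated[next_id] = detections[i]
        next_id += 1
    return updated
```

A Kalman tracker would replace the raw last position with a motion-model prediction before the same association step, which is roughly how the two methods compared in the evaluation differ.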
