1. Hypothesis verification using iconic matching. Brisdon, Kay (January 1990).
A new technique for iconic hypothesis verification in model-based vision systems has been developed, addressing the problem of recognising three-dimensional objects in two-dimensional scenes. This thesis investigates an iconic feature-matching approach to verification, in which two-dimensional image features are predicted from a specific view of a three-dimensional geometric model and matched directly against the unprocessed image data, solving the crucial image-to-model registration problem. The iconic approach avoids two major disadvantages of the usual symbolic matching method, in which symbolic image constructs are compared with symbolic model data: the symbolic description of image features is not robust, and detailed matches cannot be made because much of the original data has been lost. The investigation of iconic verification is split into two parts. First, individual features are matched; second, the results are aggregated into a model match score. For the first stage, four iconic evaluators have been developed and compared. These predictive evaluators are designed to assess the "edge-ness" of a small patch of an image, and the advantage of one of these techniques over its equivalent data-driven approach is shown. The complete verification procedure aggregates the image-specific iconic feature evaluation scores. The iconic matching technique has been tested in the domain of car recognition in outdoor scene images, and its performance in images containing a great deal of distracting noise has been very encouraging. There are, however, many further application areas for this research: iconic matching can be used to track both individual features and entire objects, for example across successive frames of an image sequence over time.
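The two-stage scheme described in this abstract, scoring the "edge-ness" of raw image data along predicted model edges and then aggregating per-feature scores into a model match score, can be sketched roughly as below. This is an illustrative sketch only, not Brisdon's actual evaluators: the function names, the gradient-projection score, and the simple mean aggregation are assumptions.

```python
import numpy as np

def edge_ness(image, points, normals):
    """Score how edge-like the raw image is along a predicted model edge.

    image:   2D float array of intensities (no prior edge detection).
    points:  (N, 2) integer array of (row, col) pixel positions predicted
             from a specific view of the 3D geometric model.
    normals: (N, 2) unit vectors giving the expected edge normal per point.
    Returns the mean image gradient projected onto the expected normals.
    """
    gy, gx = np.gradient(image.astype(float))  # gradients of the raw data
    rows, cols = points[:, 0], points[:, 1]
    g = np.stack([gy[rows, cols], gx[rows, cols]], axis=1)
    # Project each gradient onto the predicted normal: a strong, correctly
    # oriented intensity step scores high; flat or misaligned patches do not.
    return float(np.abs((g * normals).sum(axis=1)).mean())

def verify(image, model_edges):
    """Aggregate per-feature iconic scores into a model match score
    (here a plain mean; the thesis's aggregation rule may differ)."""
    return float(np.mean([edge_ness(image, pts, nrm)
                          for pts, nrm in model_edges]))
```

Because the score is computed directly from unprocessed intensities, no information is discarded by an intermediate symbolic description, which is the point of the iconic approach.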
2. Model driven image understanding: a frame-based approach. Rosin, P. (January 1988).
No description available.
|
3 |
An architecture for high performance image processing and its application for edge detection algorithmsWang, Han January 1989 (has links)
No description available.
4. Edge labelling and depth reconstruction by fusion of range and intensity data. Zhang, Guanghua (1992).
No description available.
|
5 |
Three-dimensional object recognition using vector encoded scene dataTolman, J. D. January 1988 (has links)
No description available.
6. Volume measurement of biological materials in livestock or vehicular settings using computer vision. Rogers, Matthew B. (28 July 2022).
A Velodyne Puck VLP-16 LiDAR and a Carnegie Robotics Multisense S21 stereo camera were placed in an environmental testing chamber to investigate the effects of dust and lighting on depth returns. The chamber was designed and built to provide varied lighting conditions with corn dust plumes forming the atmosphere, and software employing ROS, Python, and OpenCV was written for point cloud streaming and publishing. The chamber results showed that while dust effects were present in the point clouds produced by both instruments, the stereo camera was able to "see" the far wall of the chamber and did not image the dust plume, unlike the LiDAR sensor. The stereo camera was also set up to measure the volume of total mixed ration (TMR) and shelled grain in various volume scenarios with mixed surface terrains. Calculations relating actual pixel area to depth were combined with a volume formula exploiting the depth capability of the stereo camera. The resulting accuracy was good for a target of 8 liters of shelled corn, with final values between 6.8 and 8.3 liters across three varied surface scenarios. Lessons learned from the chamber and volume measurements were applied to loading large grain vessels filled from a 750-bushel grain cart, calculating the volume of corn grain and tracking the location of the vessel in near real time. Segmentation, masking, and template matching were the primary software tools used within ROS, OpenCV, and Python, with the S21 as the central hardware piece. The resulting video and images show some lag between depth and color images, dust blocking depth pixels, and template matching misses; the results were nonetheless sufficient as a proof of concept of tracking and volume estimation.
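The abstract's core calculation, finding the actual area each pixel covers from its depth and integrating that into a volume, follows from the pinhole camera model: a pixel's footprint on the surface grows with the square of depth. The sketch below is an illustrative reconstruction under assumed conditions (a downward-looking camera, a flat reference surface, known focal lengths in pixels), not the thesis's actual implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def grain_volume(depth, ref_depth, fx, fy):
    """Estimate pile volume from a stereo depth map.

    depth:     2D array of per-pixel depths in metres (camera looking down).
    ref_depth: depth of the empty, flat reference surface in metres.
    fx, fy:    pinhole focal lengths in pixels.
    Returns the estimated volume in cubic metres.
    """
    # Pile height above the reference surface; clip noise below the floor.
    height = np.clip(ref_depth - depth, 0.0, None)
    # Pinhole model: a pixel's footprint on the surface is ~ (Z/fx) * (Z/fy),
    # so pixel area scales with depth squared.
    pixel_area = (depth / fx) * (depth / fy)
    return float((height * pixel_area).sum())
```

In practice the depth map would come from the MultiSense stereo pipeline, and dust-blocked or invalid depth pixels would need to be masked out before summing, consistent with the dust effects the abstract reports.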