1

The Design and Implementation of an Effective Vision-Based Leader-Follower Tracking Algorithm Using PI Camera

Li, Songwei 08 1900
This thesis implements a vision-based leader-follower tracking algorithm on a ground-robot system. A single camera, mounted on the follower, is the only sensor in the leader-follower system, and a single sphere mounted on the leader is the only tracking feature. The camera identifies the sphere using the OpenCV library and calculates the relative position between the follower and the leader from the area and position of the sphere in the camera frame. P controllers are built for the follower motion and for the camera heading. The vision-based leader-follower tracking algorithm is verified in both simulation and implementation.
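The range-from-apparent-size and P-control steps described above can be sketched as follows. This is a minimal illustration under a pinhole-camera assumption; the focal length, sphere radius, image centre, and controller gains are invented placeholder values, not taken from the thesis.

```python
import math

# Assumed camera/feature constants (placeholders, not from the thesis).
FOCAL_PX = 600.0        # focal length in pixels
SPHERE_RADIUS_M = 0.05  # physical radius of the sphere on the leader

def relative_position(cx_px, r_px, image_cx=320.0):
    """Estimate range and bearing of the leader from the detected sphere.

    Pinhole model: apparent radius r_px = FOCAL_PX * R / Z, so Z = f*R/r_px.
    Bearing comes from the sphere's horizontal offset from the image centre.
    """
    distance = FOCAL_PX * SPHERE_RADIUS_M / r_px
    bearing = math.atan2(cx_px - image_cx, FOCAL_PX)
    return distance, bearing

def p_control(distance, bearing, target_dist=0.5, kv=1.0, kw=2.0):
    """P controllers: forward speed tracks the standoff distance,
    heading rate drives the bearing error to zero."""
    v = kv * (distance - target_dist)  # follower speed command
    w = kw * bearing                   # camera-heading rate command
    return v, w
```

With these assumed constants, a sphere of 60 px apparent radius at the image centre is estimated at 0.5 m range and zero bearing, so both commands are zero at the 0.5 m standoff.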
2

Development of Test Equipment for Analysis of Camera Vision Systems Used in Car Industry : Printed Circuit Board Design and Power Distribution Network Stability

Johansson, Jimmy, Odén, Martin January 2015
The main purpose of this thesis was to develop a printed circuit board for Autoliv Electronics AB. This circuit board is to be placed in their test equipment to support some of their camera vision systems used in cars. The main task was to combine the existing hardware into one module. To achieve this, the most important factors in designing a printed circuit board were considered, of which a stable power distribution network is the most crucial. This was accomplished by using decoupling capacitors to achieve sufficiently low impedance for all circuits. Calculations and simulations were carried out for all integrated circuits to find the correct size and number of capacitors. The impedance of the circuit board was measured with a network analyzer, which confirmed that it was low enough. System functionality was never tested completely, due to delivery problems with some external equipment.
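The capacitor sizing described above can be approximated with a standard target-impedance calculation. This is a simplified sketch: the rail voltage, ripple budget, transient current, and capacitor values are illustrative assumptions, and equivalent series inductance is ignored.

```python
import math

def target_impedance(v_supply, ripple_frac, i_transient):
    """Maximum PDN impedance that keeps supply ripple within bounds."""
    return v_supply * ripple_frac / i_transient

def caps_needed(c_farads, f_hz, z_target, esr_ohms=0.0):
    """Number of identical decoupling capacitors in parallel needed to
    bring the PDN impedance below z_target at frequency f_hz.

    Only capacitive reactance and ESR are modelled (ESL is ignored),
    so this is a first-pass estimate, not a full PDN simulation.
    """
    z_one = math.hypot(esr_ohms, 1.0 / (2.0 * math.pi * f_hz * c_farads))
    return math.ceil(z_one / z_target)

# Example: a 3.3 V rail, 5 % allowed ripple, 2 A transient draw,
# decoupled with 100 nF capacitors evaluated at 1 MHz.
z_t = target_impedance(3.3, 0.05, 2.0)   # about 0.0825 ohm
n = caps_needed(100e-9, 1e6, z_t)
```

In practice the count is checked across the whole frequency band of interest, which is why the thesis validates the board with a network analyzer rather than relying on this kind of single-frequency estimate.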
3

Integration of Local Positioning System & Strapdown Inertial Navigation System for Hand-Held Tool Tracking

Parnian, Neda 24 September 2008
This research concerns the development of a smart sensory system for tracking a hand-held moving device to millimeter accuracy, for slow or nearly static applications over extended periods of time. Since different operators in different applications may use the system, the proposed design should provide accurate position, orientation, and velocity of the object without relying on knowledge of its operation or environment, based purely on the motion that the object experiences. This thesis proposes the integration of a low-cost Local Positioning System (LPS) and a low-cost StrapDown Inertial Navigation System (SDINS), combined with a modified Extended Kalman Filter (EKF), to determine the 3D position and 3D orientation of a hand-held tool within the required accuracy. A hybrid LPS/SDINS combines and complements the best features of two different navigation systems, providing a unique solution for tracking and localizing a moving object more precisely. SDINS provides continuous estimates of all components of a motion, but loses accuracy over time because of inertial sensor drift and inherent noise. LPS has the advantage that it can obtain absolute position and velocity independent of operation time; however, it is not highly robust, is computationally expensive, and has a low measurement rate. This research consists of three major parts: developing a multi-camera vision system as a reliable and cost-effective LPS, developing an SDINS for a hand-held tool, and developing a Kalman filter for sensor fusion. Developing the multi-camera vision system includes mounting the cameras around the workspace, calibrating the cameras, capturing images, applying image-processing and feature-extraction algorithms to every frame from each camera, and estimating 3D position from the 2D images. In this research, a specific configuration for setting up the multi-camera vision system is proposed to reduce the loss of line of sight as much as possible.
The number of cameras, their positions with respect to each other, and their positions and orientations with respect to the center of the world coordinate system are the crucial characteristics of this configuration. The proposed multi-camera vision system is implemented with four CCD cameras fixed in the navigation frame, with their lenses placed on a semicircle. All cameras are connected to a PC through a frame grabber that includes four parallel video channels and can capture images from all four cameras simultaneously. This arrangement yields a wide circular field of view with less loss of line of sight; however, calibration is more difficult than for a monocular or stereo vision system. Calibration of the multi-camera vision system includes precise camera modeling, single-camera calibration for each camera, stereo calibration for each pair of neighboring cameras, defining a unique world coordinate system, and finding the transformation from each camera frame to the world coordinate system. Aside from the calibration procedure, digital image processing must be applied to the images captured by all four cameras in order to localize the tool tip. In this research, the digital image processing includes image enhancement, edge detection, boundary detection, and morphological operations. After the tool tip is detected in each camera's image, a triangulation procedure and an optimization algorithm are applied to find its 3D position with respect to the known navigation frame. In the SDINS, inertial sensors are mounted rigidly and directly on the body of the tracked object, and the inertial measurements are transformed computationally to the known navigation frame. Usually, three gyros and three accelerometers, or a three-axis gyro and a three-axis accelerometer, are used to implement an SDINS.
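The triangulation step can be illustrated with the classic midpoint method for two back-projected rays. This is a self-contained sketch under ideal pinhole assumptions: the camera positions and ray directions below are invented for illustration, and the thesis's own optimization over all four cameras is not reproduced here.

```python
def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def triangulate_midpoint(o1, d1, o2, d2):
    """Return the 3D point closest to two rays o_i + t*d_i, i.e. the
    midpoint of the shortest segment between them."""
    w0 = (o1[0]-o2[0], o1[1]-o2[1], o1[2]-o2[2])
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a*c - b*b            # zero only for parallel rays
    s = (b*e - c*d) / denom      # parameter along ray 1
    t = (a*e - b*d) / denom      # parameter along ray 2
    p1 = [o1[i] + s*d1[i] for i in range(3)]
    p2 = [o2[i] + t*d2[i] for i in range(3)]
    return tuple((p1[i] + p2[i]) / 2.0 for i in range(3))

# Two cameras one unit apart, both observing the point (1, 1, 5):
# each ray direction points from a camera centre toward that point.
point = triangulate_midpoint((0.0, 0.0, 0.0), (1.0, 1.0, 5.0),
                             (1.0, 0.0, 0.0), (0.0, 1.0, 5.0))
```

With noisy real detections the two rays no longer intersect, which is exactly why the midpoint (or a least-squares variant over all four cameras) is used rather than a direct intersection.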
The inertial sensors are typically integrated in an inertial measurement unit (IMU). IMUs commonly suffer from bias drift, scale-factor error owing to non-linearity and temperature changes, and misalignment caused by minor manufacturing defects. Since all of these errors lead to SDINS drift in position and orientation, a precise calibration procedure is required to compensate for them. The precision of the SDINS depends not only on the accuracy of the calibration parameters but also on the common motion-dependent errors: those caused by vibration, coning motion, sculling, and rotational motion. Since the inertial sensors provide the full range of heading changes, turn rates, and applied forces that the object experiences along its movement, accurate 3D kinematics equations are developed to compensate for these motion-dependent errors. Obtaining complete knowledge of the motion and orientation of the tool tip therefore involves significant computational complexity and challenges relating to the resolution of specific forces, attitude computation, gravity compensation, and corrections for motion-dependent errors. The Kalman filter is a powerful technique for improving the output estimation and reducing the effect of sensor drift. In this research, a modified EKF is proposed to reduce the position-estimation error. Data from the proposed multi-camera vision system, in cooperation with the modified EKF, assists the SDINS in dealing with the drift problem. This configuration enables real-time tracking of the position and orientation of the instrument. With the proposed Kalman filter, the effect of the gravitational force is removed from the state-space model, eliminating the error that results from an inaccurate gravitational-force estimate. In addition, the resulting position is smooth and ripple-free.
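The role of the filter can be shown with a deliberately simplified scalar Kalman filter that fuses drift-prone dead-reckoned displacements with occasional absolute vision fixes. This is a toy 1D stand-in for the thesis's modified EKF, not its actual formulation; the noise values are arbitrary.

```python
class KalmanFuser1D:
    """Scalar Kalman filter: inertial dead reckoning predicts position,
    vision measurements correct it and bound the accumulated drift."""

    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.001):
        self.x = x0  # position estimate
        self.p = p0  # estimate variance
        self.q = q   # process noise per inertial step (drift growth)
        self.r = r   # vision measurement noise

    def predict(self, dx):
        """Propagate with a displacement integrated from the SDINS."""
        self.x += dx
        self.p += self.q      # uncertainty grows every inertial step
        return self.x

    def update(self, z):
        """Correct with an absolute position fix from the vision system."""
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)              # uncertainty shrinks after a fix
        return self.x
```

Between vision frames the variance `p` grows with every `predict`, and each `update` pulls the estimate back toward the absolute fix — the LPS/SDINS complementarity described above, reduced to one dimension.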
The experimental results of the hybrid vision/SDINS design show that the position error of the tool tip in all directions is about one millimeter RMS. If the sampling rate of the vision system decreases from 20 fps to 5 fps, the errors are still acceptable for many applications.
4

SORTED: Serial manipulator with Object Recognition Through Edge Detection

Bodén, Rikard, Pernow, Jonathan January 2019
Today, there is an increasing demand for smart robots that can make decisions on their own and cooperate with humans in changing environments. The application areas for robotic arms with camera vision are likely to grow in a future of artificial intelligence, as algorithms become more adaptable and intelligent than ever. The purpose of this bachelor's thesis is to develop a robotic arm that recognises arbitrarily placed objects with camera vision and can pick and place them when they appear in unpredictable positions. The robotic arm has three degrees of freedom, and the construction is modularised and 3D-printed with maintenance in mind, but also to be adaptable to new applications. The camera vision sensor is integrated in an external camera tripod with its field of view over the workspace. It recognises objects through colour filtering and uses an edge-detection algorithm to return measurements of the detected objects. These measurements are then used as input to the inverse kinematics, which calculates the rotation of each stepper motor. Moreover, three angular potentiometers, one integrated in each axis, regulate the rotation of each stepper motor. The results in this thesis show that the robotic arm is able to pick up to 90% of the detected objects when barrel-distortion correction is used in the algorithm. A key finding is that the barrel distortion introduced by the camera lens significantly impacts the precision of the robotic arm and thus the results. It can also be stated that the method for barrel-distortion correction is affected by the geometry of the detected objects and by differences in illumination over the workspace. Another conclusion is that correct illumination is needed for the vision sensor to differentiate objects of different hue and saturation.
/ Today, the demand for smart robots that can make their own decisions and cooperate with humans in changing environments is increasing. The application areas for robots with camera sensors are likely to grow in a future of artificial intelligence, with algorithms becoming more intelligent and adaptable than before. The purpose of this bachelor's thesis is to develop a robotic arm that, with the aid of a camera sensor, can pick up and sort arbitrary objects when they appear in unpredictable positions. The robotic arm has three degrees of freedom, and the whole construction is 3D-printed and modularised to be easy to maintain, but also adaptable to new application areas. The camera sensor is integrated in an external camera tripod with its field of view over the robotic arm's workspace. The camera sensor detects objects using a colour-filtering algorithm and then returns the size, position, and signature of the objects using an edge-detection algorithm. The size of the objects is used to calibrate the camera and compensate for the radial distortion of the lens. The relative position of the objects is then fed to the inverse kinematics to calculate how much each stepper motor must rotate to obtain the desired angle on each axis so that the gripper can reach the detected object. The robotic arm also has three potentiometers, one integrated in each axis, to regulate the rotation of each stepper motor. The results in this report show that the robotic arm can detect and pick up to 90% of the objects when camera calibration is used in the algorithm. The conclusion of the report is that the distortion from the camera lens has the greatest impact on the precision of the robotic arm and thus on the results. It can also be stated that the method used to correct the camera distortion is affected by the geometry and orientation of the objects to be detected, but above all by variations in lighting and shadows over the workspace. Another conclusion is that the lighting over the workspace is decisive for whether the camera sensor can distinguish objects of different saturation and hue.
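The distortion-compensation and inverse-kinematics steps described in this record can be sketched for a planar two-link arm. This is a simplified illustration: the link lengths, the distortion coefficient, and the first-order radial correction model are assumptions, and the real arm's third axis is omitted.

```python
import math

def undistort(xd, yd, k1=-0.1):
    """First-order radial (barrel) correction of normalised image
    coordinates; k1 is an assumed lens coefficient, and this linearised
    correction is only valid for mild distortion."""
    r2 = xd*xd + yd*yd
    s = 1.0 + k1 * r2
    return xd * s, yd * s

def ik_two_link(x, y, l1=1.0, l2=1.0):
    """Inverse kinematics of a planar two-link arm via the law of cosines.

    Returns joint angles (theta1, theta2) for one of the two solutions
    reaching the target (x, y); raises if the target is out of reach.
    """
    d = (x*x + y*y - l1*l1 - l2*l2) / (2.0 * l1 * l2)
    if abs(d) > 1.0:
        raise ValueError("target out of reach")
    theta2 = math.atan2(math.sqrt(1.0 - d*d), d)   # elbow angle
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

In the thesis's pipeline the corrected image coordinates would first be mapped to workspace coordinates; here, a fully stretched target at (2, 0) gives both angles zero, and (1, 1) gives a right-angle elbow, which is easy to verify by forward kinematics.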
