  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Energy Efficient and Programmable Architecture for Wireless Vision Sensor Node

Imran, Muhammad January 2013 (has links)
Wireless Vision Sensor Networks (WVSNs) are an emerging field that has attracted a number of potential applications because of low per-node cost, ease of deployment, scalability, and low-power stand-alone operation. A WVSN consists of a number of wireless Vision Sensor Nodes (VSNs). Each VSN has limited resources, such as its embedded processing platform, power supply, wireless radio, and memory. With these limited resources, a VSN is expected to perform complex vision tasks for a long duration without battery replacement or recharging. Currently, reducing processing and communication energy consumption is a major challenge for battery-operated VSNs. Another challenge is to propose generic solutions for a VSN so that they suit a number of applications. To meet these challenges, this thesis focuses on an energy-efficient and programmable VSN architecture for machine vision systems that classify objects based on binary data. To facilitate generic solutions, a taxonomy has been developed together with a complexity model that can be used to classify and compare systems without the need for actual implementation. The proposed VSN architecture is based on task partitioning between a VSN and a server, as well as task partitioning locally on the node between software and hardware platforms. The effects of task partitioning on processing and communication energy consumption, design complexity, and lifetime have been investigated. The investigation shows that the strategy in which front-end tasks up to segmentation, together with bi-level coding, are implemented on a Field Programmable Gate Array (FPGA) with low sleep power offers a generic, low-complexity, and energy-efficient VSN architecture. Implementing the data-intensive front-end tasks on the hardware-reconfigurable platform reduces processing energy; however, there is still scope for reducing the communication energy related to output data.
This thesis also explores data reduction techniques, including image coding, region-of-interest coding, and change coding, which reduce the output data significantly. As a proof of concept, the VSN architecture, together with task partitioning, bi-level video coding, duty cycling, and a low-complexity background subtraction technique, has been implemented on real hardware, and its functionality has been verified for four applications: a particle detection system, remote meter reading, bird detection, and people counting. The results, based on measured energy values, show that, depending on the application, energy consumption can be reduced by a factor of approximately 1.5 up to 376 compared with currently published VSNs. Based on measured energy values, for a sample period of 5 minutes the VSN can achieve a lifetime of 3.2 years on a battery with 37.44 kJ of energy. In addition, the proposed VSN offers a generic architecture with lower design complexity on a hardware-reconfigurable platform and is easily adapted to a number of applications compared with published systems.
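The change-coding idea above can be illustrated with a minimal sketch (function name and encoding are hypothetical, not the thesis implementation): XOR-ing consecutive binary frames leaves only the changed pixels, and run-length encoding the sparse result shrinks the data that must be transmitted.

```python
def change_code(prev, curr):
    """Change coding of two binary frames (flat lists of 0/1 pixels):
    XOR consecutive frames so only changed pixels remain, then
    run-length encode the sparse result as [value, count] pairs."""
    diff = [p ^ c for p, c in zip(prev, curr)]
    runs = []
    for bit in diff:
        if runs and runs[-1][0] == bit:
            runs[-1][1] += 1      # extend the current run
        else:
            runs.append([bit, 1])  # start a new run
    return runs
```

For a mostly static scene the XOR image is almost all zeros, so the run list stays short, which is what reduces the communication energy spent on output data.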
2

Tracking and Planning for Surveillance Applications

Skoglar, Per January 2012 (has links)
Vision and infrared sensors are very common in surveillance and security applications, and there are numerous examples where a critical infrastructure, e.g. a harbor, an airport, or a military camp, is monitored by video surveillance systems. There is a need for automatic processing of sensor data and intelligent control of the sensor in order to obtain efficient and high-performance solutions that can support a human operator. This thesis considers two subparts of the complex sensor fusion system, namely target tracking and sensor control. The multiple target tracking problem is studied using particle filtering. In particular, applications where road-constrained targets are tracked with an airborne video or infrared camera are considered. By utilizing information from the road network map, it is possible to enhance target tracking and prediction performance. A dynamic model suitable for on-road target tracking with a camera is proposed, and the computational load of the particle filter is reduced by Rao-Blackwellization. Moreover, a pedestrian tracking framework is developed and evaluated in a real-world experiment. The exploitation of contextual information, such as road network information, is highly desirable not only to enhance tracking performance, but also for track analysis, anomaly detection, and efficient sensor management. Planning for surveillance and reconnaissance is a broad field with numerous problem definitions and applications. Two types of surveillance and reconnaissance problems are considered in this thesis. The first is a multi-target search and tracking problem, where the task is to control the trajectory of an aerial sensor platform and the pointing direction of its camera so as to keep track of discovered targets while searching for new ones.
The key to successful planning is a measure that makes it possible to compare different tracking and searching tasks in a unified framework, and this thesis suggests one such measure. An algorithm based on this measure is developed, and simulation results for a multi-target search and tracking scenario in an urban area are given. The second problem is aerial information exploration for single-target estimation and area surveillance. In the single-target case the problem is to control the trajectory of a sensor platform carrying a vision or infrared camera such that the estimation performance for the target is maximized. The problem is treated both from an information filtering and from a particle filtering point of view. In area exploration the task is to gather useful image data of the area of interest by controlling the trajectory of the sensor platform and the pointing direction of the camera. Good exploration of a point of interest is characterized by several images from different viewpoints. A method based on multiple information filters is developed, and simulation results from area and road exploration scenarios are presented.
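The road-constrained tracking described above can be sketched, in heavily simplified form, as a bootstrap particle filter over a one-dimensional road coordinate. The thesis uses a Rao-Blackwellized filter and a richer dynamic model; everything here, including `pf_step` and the noise parameters, is an illustrative assumption.

```python
import math
import random

def pf_step(particles, z, dt=1.0, sigma_v=0.5, sigma_z=2.0):
    """One predict-update-resample cycle of a 1-D on-road particle filter.
    Each particle is (s, v): arc-length position along the road and speed.
    z is a noisy position measurement projected onto the road."""
    # Predict: constant-velocity motion with process noise on the speed
    pred = [(s + v * dt, v + random.gauss(0.0, sigma_v)) for s, v in particles]
    # Update: Gaussian likelihood of the measurement given each particle
    w = [math.exp(-0.5 * ((s - z) / sigma_z) ** 2) for s, _ in pred]
    total = sum(w) or 1.0
    w = [wi / total for wi in w]
    # Resample (multinomial) to concentrate particles on likely states
    return random.choices(pred, weights=w, k=len(pred))
```

Constraining the state to the road coordinate is what lets the map information sharpen the predictions: off-road hypotheses are simply never generated.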
3

Automatic Configuration of Vision Sensor

Ollesson, Niklas January 2013 (has links)
In factory automation, cameras and image processing algorithms can be used to inspect objects. This can decrease the number of faulty objects that leave the factory and reduce the manual labour needed. A vision sensor is a system in which the camera and image processing are delivered together and which only needs to be configured for its intended application, so the customer requires no programming knowledge. In this Master's thesis, a way to make the configuration of a vision sensor even easier is developed and evaluated. The idea is that customers know their own product much better than they know image processing. The customer takes images of positive and negative samples of the object that is to be inspected, and the algorithm should then, given these images, configure the vision sensor automatically. The algorithm developed to solve this problem is described step by step, with examples illustrating the problems that needed to be solved. Much of the focus is on how to compare two configurations to each other in order to find the best one. The configuration produced by the algorithm is then evaluated with respect to types of applications, computation time, and representativeness of the input images.
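One plausible way to compare two configurations against labelled sample images, as the abstract describes, is to score each candidate by its classification accuracy on the positive and negative samples. The sketch below is an assumption, not the thesis algorithm: `score_config`, `best_config`, and the brightness-threshold candidates are all hypothetical.

```python
def score_config(config, positives, negatives):
    """Score a candidate configuration (a callable accept/reject decision)
    by its accuracy on labelled sample images."""
    hits = sum(1 for img in positives if config(img))
    hits += sum(1 for img in negatives if not config(img))
    return hits / (len(positives) + len(negatives))

def best_config(candidates, positives, negatives):
    """Return the candidate configuration with the highest accuracy."""
    return max(candidates, key=lambda c: score_config(c, positives, negatives))
```

A toy candidate family could threshold mean pixel brightness; the search then reduces to picking the threshold that best separates the customer's positive and negative images.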
4

Indoor triangulation system using vision sensors

Nicander, Torun January 2020 (has links)
This thesis investigates a triangulation system for indoor positioning in two dimensions (2D). The system was implemented using three Pixy2 vision sensors placed on a straight baseline. A Pixy2 consists of a camera lens, an image sensor (Aptina MT9M114), a microcontroller (NXP LPC4330), and other components. It can track one or multiple colours, or a combination of colours. To position an object using triangulation, one needs to determine the angles (α) to the object from a pair of known observing points (in this project, any pair of the three Pixy2s placed in fixed positions on the baseline). This is done from the Pixy2s' images. Using the pinhole camera model, the tangent of the angle, tan(α), is found to have a linear relation with the displacement Δx in the image plane (in pixels), namely tan(α) = kΔx, where k is a constant depending on the specific Pixy2. A wooden test board, with distance marks in two dimensions and a Pixy2 affixed at the origin, was made specially to determine k for each Pixy2. By placing a coloured object at three different sets of spatial sampling points (marks), the constant k for each Pixy2 was determined with an error variance below 5%. Position estimates from the triangulation system were obtained using all three pairs formed from the three Pixy2s, with the object placed at different positions in the 2D plane on the board. A combination of the estimates from all three pairs, yielding a more accurate estimate, was also evaluated. The results show a positioning accuracy ranging from 0.03678 cm to 2.064 cm for the z-coordinate and from 0.02133 cm to 0.9785 cm for the x-coordinate, which is very satisfactory. The vision sensors were quite sensitive to the light environment when finely tuned to track one object, which has a significant effect on the performance of vision-sensor-based triangulation.
An extension of the system to more than three Pixy2s was investigated and shown to be feasible. A method for auto-calibrating the Pixy2s' positions on the baseline was suggested and implemented. After auto-calibration, the system still produced satisfactory position estimates.
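The pinhole relation tan(α) = kΔx and the pair-wise triangulation can be written out directly from the geometry stated in the abstract. Function names are mine; angles are measured from each camera's optical axis, which is perpendicular to the baseline, so tan(α_i) = (x − x_i)/z for a camera at baseline position x_i.

```python
def tan_from_pixels(k, dx):
    """Pinhole model used in the thesis: tan(a) = k * dx, where dx is the
    object's pixel displacement from the image centre and k is a
    per-camera constant found by calibration."""
    return k * dx

def triangulate(x1, x2, tan_a1, tan_a2):
    """Recover the 2-D position (x, z) of an object from two cameras at
    baseline positions x1 and x2 (both at z = 0).
    From tan(a_i) = (x - x_i)/z it follows that
    tan(a1) - tan(a2) = (x2 - x1)/z, giving z, and then x."""
    z = (x2 - x1) / (tan_a1 - tan_a2)
    x = x1 + z * tan_a1
    return x, z
```

With three cameras, the three pair-wise estimates can be averaged, which is one way to realise the combined, more accurate estimate the abstract mentions.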
5

Optical Flow for Event Detection Camera

Almatrafi, Mohammed Mutlaq January 2019 (has links)
No description available.
6

Toward a Sustainable Human-Robot Collaborative Production Environment

Alhusin Alkhdur, Abdullah January 2017 (has links)
This PhD study addresses the sustainability of robotic systems from the environmental and social perspectives. During the research, three approaches were developed. The first is an online, programming-free, model-driven system that uses a web-based distributed human-robot collaboration architecture to perform distant assembly operations. It uses a robot-mounted camera to capture silhouettes of the components from different angles; the system then analyses these silhouettes and constructs the corresponding 3D models. Using the 3D models together with a model of the robotic assembly cell, the system guides a distant human operator in assembling the real components in the actual robotic cell. To address the safety aspect of human-robot collaboration, a second approach was developed for effective online collision avoidance in an augmented environment, where virtual three-dimensional (3D) models of robots and real images of human operators from depth cameras are used for monitoring and collision detection. A prototype system was developed and linked to industrial robot controllers for adaptive robot control, without the need for programming by the operators. The collision detection results support four safety strategies: the system can alert an operator, stop a robot, move the robot away, or modify the robot's trajectory away from an approaching operator. These strategies are activated based on the operator's location with respect to the robot. The case study further discusses the possibility of implementing the developed method in realistic applications, for example collaboration between robots and humans on an assembly line. To tackle the energy aspect of sustainability in the human-robot production environment, a third approach was developed that aims to minimise the robot's energy consumption during assembly.
Given a trajectory, and based on the inverse kinematics and dynamics of the robot, a set of attainable configurations can be determined by calculating the corresponding forces and torques on the robot's joints and links. The energy consumption is then calculated for each configuration along the assigned trajectory, and the configuration with the lowest energy consumption is selected.
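The configuration-selection step can be sketched by approximating each configuration's energy as the summed |torque × joint displacement| along the trajectory and taking the minimum. This is a simplified stand-in for the full inverse-dynamics computation, and all names are hypothetical.

```python
def config_energy(torques, joint_deltas):
    """Approximate mechanical energy for one configuration as the sum of
    |torque * joint displacement| over all joints and trajectory steps.
    torques and joint_deltas are lists of per-step, per-joint values."""
    return sum(abs(t * d)
               for step_t, step_d in zip(torques, joint_deltas)
               for t, d in zip(step_t, step_d))

def min_energy_config(configs):
    """configs: list of (name, torques, joint_deltas) for the attainable
    configurations; return the name of the lowest-energy one."""
    return min(configs, key=lambda c: config_energy(c[1], c[2]))[0]
```

In practice the torques would come from the robot's inverse dynamics for each candidate inverse-kinematics solution, but the selection logic is exactly this argmin.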
7

A REDUNDANT MONITORING SYSTEM FOR HUMAN WELDER OPERATION USING IMU AND VISION SENSORS

Yu, Rui 01 January 2018 (has links)
In manual control, the welding gun's travel speed can significantly influence the welding results, and critical welding operations usually require welders to concentrate continuously in order to react rapidly and accurately. However, human welders often have habitual actions that can subtly influence the welding process, and it takes countless hours to train an experienced human welder. Using vision and IMU sensors, a system can be set up that gives a worker the kind of accurate visual feedback an experienced welder relies on. The problem is that monitoring and measuring the control process is not always easy in a complex working environment such as welding. In this thesis, a new method is developed in which two different sensing approaches compensate for each other to obtain accurate monitoring results. Both a vision sensor and an IMU sensor are used to obtain accurate data from the control process in real time without interfering with each other. Although the vision and IMU sensors each have their own limitations, each also has advantages that contribute to the measuring system.
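One simple way two noisy sensors can compensate for each other, in the spirit of the abstract, is inverse-variance weighting of their speed estimates: the noisier sensor gets less weight, and the fused estimate is never noisier than either input. This sketch is an assumption, not the thesis's fusion method.

```python
def fuse(v_vision, var_vision, v_imu, var_imu):
    """Inverse-variance weighted fusion of two gun-speed estimates.
    Returns the fused speed and its (reduced) variance."""
    w_v = 1.0 / var_vision
    w_i = 1.0 / var_imu
    v = (w_v * v_vision + w_i * v_imu) / (w_v + w_i)
    var = 1.0 / (w_v + w_i)
    return v, var
```

When the arc light blinds the vision sensor its variance rises and the IMU dominates; when IMU drift accumulates, the vision measurement pulls the estimate back, which matches the complementary roles described above.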
8

A Perception Payload for Small-UAS Navigation in Structured Environments

Bharadwaj, Akshay S. 26 September 2018 (has links)
No description available.
9

PeCro : A Pedestrian Crossing Robot / PeCro - the robot that helps people with visual impairment in traffic

HEDBERG, EBBA, SUNDIN, LINNEA January 2020 (has links)
For people with visual impairment, getting around in traffic can be a great struggle. The robot PeCro, short for Pedestrian Crossing, was created as an aid for these people at pedestrian crossings equipped with traffic lights. The prototype was designed in Solid Edge ST9 as a three-wheeled mobile robot consisting of several components. The microcontroller, an Arduino Uno, was programmed in the Arduino IDE. The vision sensor was a Pixy2 camera, which can detect and track selected colour codes. A differential-drive steering model is used, controlled through magnetic encoders mounted on the two motor shafts. PeCro scans the environment looking for a green light. If one is detected, PeCro searches for the blue box on the traffic light pillar on the opposite side of the street; when that is detected, it crosses the street and turns 180 degrees so that it can cross again. The study examined the performance of a vision sensor in different light environments, the effectiveness of magnetic encoders for measuring travelled distance and regulating steering, and linear interpolation as a distance calculation method. The results show that PeCro's detection performance is affected by the light environment, and that the maximum distances at which the two colour codes were detected were 163 cm and 150 cm, respectively. When measuring distance with magnetic encoders, a constant deviation from the desired distance occurs; this method is nevertheless preferable to linear interpolation. In conclusion, further development is required before PeCro can be implemented and used in real-life situations.
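The encoder-based distance measurement and differential-drive steering described above can be sketched with standard odometry formulas. The parameter values and names below are hypothetical, not PeCro's actual dimensions.

```python
import math

def odometry(ticks_l, ticks_r, ticks_per_rev, wheel_diameter, wheel_base):
    """Distance travelled and heading change of a differential-drive robot
    from encoder ticks on the left and right motor shafts."""
    circ = math.pi * wheel_diameter
    d_l = ticks_l / ticks_per_rev * circ   # left wheel arc length
    d_r = ticks_r / ticks_per_rev * circ   # right wheel arc length
    distance = (d_l + d_r) / 2.0           # forward distance of robot centre
    dtheta = (d_r - d_l) / wheel_base      # heading change in radians
    return distance, dtheta
```

The constant deviation reported in the results would appear here as a systematic error in `ticks_per_rev` or `wheel_diameter`, which is why it can be calibrated away once measured.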
10

Real time evaluation of weld quality in narrow groove pipe welding

Marmelo, Patricia C. January 2012 (has links)
With the growth in pipeline installations all over the world, there is great demand for highly productive and robust welding systems. Mechanised pipe welding has been developed over the last 50 years, and the present focus is on the development of automated pipeline welding systems. Pipeline welding automation aims at reducing costs and improving installation quality. To attain fully automated pipe welding systems, there is a need to rely on sensor and control systems that mimic human capabilities, such as visual inspection, in real time. The key aim of this work is to develop and evaluate methods for automatic assessment of weld bead shape and quality during narrow-gap GMAW of transmission pipelines. This implies that the measured bead profile is assessed to determine whether the bead shape will cause defects when the subsequent pass is deposited. Different approaches have been used to tackle the challenge of emulating human reasoning, each with different objectives in mind. Despite an extensive literature review, very little information was found concerning real-time determination and assessment of bead shape quality, and none of it was reported as successfully applied in the pipeline industry. Alongside the continuous development of laboratory laser vision systems, commercial systems have been on the market for decades, some developed specifically for welding applications. Laser vision sensor systems provide surface profile information and are the only sensors that can satisfactorily measure the bead profile in a narrow groove. To use them to automatically assess weld bead shape and quality, a deep understanding of their characteristics and limitations is needed. Once that knowledge was attained, it was applied to determine the best sensor configuration for this purpose. Human-like judgment algorithms were then developed to accomplish the stated aim.
Empirical rules regarding the acceptability of bead shapes were obtained from an experienced welder and applied in the developed system with good results. To evaluate and determine the rules scientifically, further experiments would be required. The developed system produced very accurate, reliable, and consistent results that agreed with the external measurements and comparisons performed. The system has numerous applications in the pipeline industry and could easily be implemented on commercial systems.
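A rule-based acceptability check of a measured bead profile, in the spirit of the welder-derived rules, might look like the toy sketch below. The two rules and their thresholds are purely illustrative, not the rules elicited in this work.

```python
def assess_bead(profile, max_concavity=0.5, max_tilt=0.3):
    """Toy rule-based check of a measured bead profile (surface heights in
    mm across the groove, e.g. from a laser vision sensor).
    Rule 1 (illustrative): the centre dip below the edges must not be too
    deep, or the next pass may trap a lack-of-fusion defect.
    Rule 2 (illustrative): the two edges must not differ too much."""
    left, right = profile[0], profile[-1]
    centre = min(profile)
    concavity = (left + right) / 2.0 - centre
    tilt = abs(left - right)
    defects = []
    if concavity > max_concavity:
        defects.append("excessive concavity")
    if tilt > max_tilt:
        defects.append("asymmetric bead")
    return (len(defects) == 0, defects)
```

A real system would derive such thresholds from the welder's judgments and run the check on every profile scanned ahead of the next pass.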
