11. Distributed Control for Vision-based Convoying
Goi, Hien, 19 January 2010
This thesis describes the design of a vision-based vehicle-following system that uses only on-board sensors to enable a convoy of follower vehicles to autonomously track the trajectory of a manually-driven lead vehicle. The tracking is done using the novel concept of a constant time delay, where a follower tracks the delayed trajectory of its leader. Two separate controllers, one linearized about a point ahead and the other linearized about a constant-velocity trajectory, were designed and tested in simulations and experiments. The experiments were conducted with full-sized military vehicles on a 1.3 km test track. Successful field trials with one follower for 10 laps and with two followers for 13.5 laps are presented.
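To make the constant-time-delay idea concrete, the sketch below buffers the leader's poses and feeds the pose from TAU seconds ago to a simple proportional unicycle controller. The delay value, gains, and vehicle model are illustrative assumptions, not the two controllers designed in the thesis.

```python
# A minimal sketch, assuming a unicycle follower model: the reference fed to
# the controller at time t is the leader pose logged at time t - TAU.
# TAU, the gains, and the model are illustrative, not the thesis design.
import math
from collections import deque

TAU = 2.0   # assumed constant time delay [s]

leader_log = deque()  # (t, x, y, theta) samples of the leader trajectory

def log_leader(t, x, y, theta):
    leader_log.append((t, x, y, theta))

def delayed_reference(t):
    """Latest leader pose recorded no later than t - TAU (None if too early)."""
    ref = None
    for sample in leader_log:
        if sample[0] <= t - TAU:
            ref = sample
        else:
            break
    return ref

def follower_cmd(pose, ref, k_v=0.8, k_w=1.5):
    """Proportional heading/speed command toward the delayed reference pose."""
    x, y, theta = pose
    _, xr, yr, _ = ref
    dx, dy = xr - x, yr - y
    err = math.atan2(dy, dx) - theta
    err = math.atan2(math.sin(err), math.cos(err))  # wrap to [-pi, pi]
    return k_v * math.hypot(dx, dy), k_w * err      # (v, omega)
```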
12. Dynamic HVAC Operations Based on Occupancy Patterns With Real-Time Vision-Based System
Lu, Siliang, 01 May 2017
An integrated heating, ventilation and air-conditioning (HVAC) system is one of the most important factors determining the energy consumption of an entire building. For commercial buildings, particularly office buildings and schools, heating and cooling loads depend largely on occupant behavioral patterns such as occupancy rates and activities. If HVAC systems can respond to dynamic occupancy profiles, there is therefore large potential to reduce energy consumption. Currently, however, most existing HVAC systems operate without the ability to adjust the supply air rate in response to dynamic occupant profiles. Because of this inefficiency, much of the HVAC energy use is wasted, particularly when conditioned spaces are unoccupied or under-occupied (fewer occupants than the intended design). The solution is to control the HVAC system based on dynamic occupant profiles. Motivated by this, the research provides a real-time vision-based occupant pattern recognition system for occupancy counting as well as activity-level classification. The proposed system is integrated into an existing HVAC simulation model of a U.S. office building to investigate the energy savings and thermal comfort improvements achievable relative to a traditional HVAC control system. The research is divided into two parts. The first part uses an open-source neural-network library for real-time occupant counting and a background subtraction method for activity-level classification, both with a common static RGB camera. The second part uses a DOE reference office building model with customized dynamic occupancy schedules, including occupant-count, activity, and clothing-insulation schedules, to quantify the potential energy savings compared with a conventional HVAC control system. The results revealed that vision-based systems can detect occupants and classify activity level in real time with accuracy around 90% when occlusions are limited, and that the dynamic occupant schedules do indeed yield energy savings. Details of the vision-based system, methodology, simulation configurations, and results are presented, along with potential opportunities for use across multiple types of commercial buildings, with a focus on offices and educational institutions.
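As a rough illustration of the activity-level half of the pipeline, the sketch below applies OpenCV background subtraction to frames from a static RGB camera and maps the foreground-pixel ratio to a coarse activity class. The ratio thresholds and class labels are assumptions; the neural-network occupant counting described in the abstract is not reproduced here.

```python
# A rough sketch of the activity-level classifier only, assuming an OpenCV
# MOG2 background subtractor and made-up ratio thresholds; the occupant
# counting described in the thesis (a neural-network library) is not shown.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

def activity_level(frame):
    """Map the foreground-pixel ratio of one frame to a coarse activity class."""
    mask = subtractor.apply(frame)
    ratio = (mask == 255).mean()  # 255 = foreground; 127 (shadow) ignored
    if ratio < 0.02:
        return "sedentary"
    if ratio < 0.10:
        return "light"
    return "active"

cap = cv2.VideoCapture(0)  # any common static RGB camera
ok, frame = cap.read()
if ok:
    print(activity_level(frame))
cap.release()
```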
13. Indoor navigation of mobile robots based on visual memory and image-based visual servoing
Bista, Suman Raj, 20 December 2016
This thesis presents a method for appearance-based navigation from an image memory by Image-Based Visual Servoing (IBVS). The entire navigation process is based on 2D image information without using any 3D information at all. The environment is represented by a set of reference images with overlapping landmarks, which are selected automatically during a prior learning phase. These reference images define the path to follow during navigation. The switching of reference images during navigation is done by comparing the current acquired image with nearby reference images. Based on the current image and the two succeeding key images, the rotational velocity of a mobile robot is computed under an IBVS control law. First, we used the entire image as a feature, where mutual information between the reference images and the current view is exploited. Then, we used line segments for indoor navigation, showing that line segments are better features for structured indoor environments. Finally, we combined line segments with point-based features to extend the method to a wide range of indoor scenarios with smooth motion. Real-time navigation with a Pioneer 3DX equipped with an on-board perspective camera has been performed in indoor environments. The obtained results confirm the viability of our approach and verify that accurate mapping and localization are not mandatory for useful indoor navigation.
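The key-image switching step can be sketched as follows. For simplicity this uses ORB feature matching as the image-similarity measure, whereas the thesis itself uses mutual information, line segments, and points; treat it as an assumed stand-in.

```python
# An assumed stand-in for the key-image switching test, using ORB matching
# as the similarity measure (the thesis uses mutual information, line
# segments, and points rather than ORB).
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_score(img_a, img_b):
    """Number of cross-checked ORB matches between two images."""
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    return len(matcher.match(des_a, des_b))

def should_switch(current, key_img, next_key_img):
    """Advance once the next key image explains the current view at least as well."""
    return match_score(current, next_key_img) >= match_score(current, key_img)
```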
14. Vision-Based Fall Detection Using Confidence Prediction and Motion Analysis
Ros, Dara, 25 May 2022
No description available.
15. Real-time Evaluation of Vision-based Navigation for Autonomous Landing of a Rotorcraft Unmanned Aerial Vehicle in a Non-cooperative Environment
Rowley, Dale D., 02 March 2005
Landing a rotorcraft unmanned aerial vehicle (RUAV) without human supervision is a capability that would significantly broaden the usefulness of UAVs. The benefits are even greater if the functionality is expanded to landing sites with unknown terrain and no GPS or other positioning aids. Examples of these non-cooperative environments range from remote mountainous regions to an urban building rooftop or a cluttered parking lot. The research of this thesis builds upon an approach initiated at NASA Ames Research Center to advance technology in the landing phase of RUAV operations. The approach applies JPL's binocular stereo ranging algorithm to identify a landing site free of hazardous terrain. JPL's monocular feature tracking algorithm is then applied to keep track of the chosen landing point in subsequent camera images. Finally, a position-estimation routine uses the tracking output to estimate the rotorcraft's position relative to the landing point. These position estimates make it possible to guide the rotorcraft toward, and land at, the safe landing site. This methodology is implemented in simulation within the context of a fully-autonomous RUAV mission. Performance metrics are defined and tests are carried out in simulation to independently evaluate the performance of each algorithm. The stereo ranging algorithm is shown to successfully identify a safe landing point 70%-90% of the time on average in a cluttered parking lot scenario. The tracking algorithm is demonstrated to be robust under extreme operating conditions, leading to a position-estimation error of less than 1 meter during a 2-minute hover at 12 meters above the ground. Preliminary tests with actual flight hardware confirm the validity of these results and prepare for demonstrations and testing in flight.
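The hazard-screening idea (pick the candidate patch with the least depth variation in the stereo range map) can be sketched as below. The SGBM disparity parameters and the variance-based flatness criterion are assumptions; this is not JPL's algorithm.

```python
# An illustrative flatness screen over a stereo disparity map: the window
# with the least depth variation is proposed as the landing patch. The SGBM
# parameters and variance criterion are assumptions, not JPL's algorithm.
import cv2
import numpy as np

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)

def safest_patch(left_gray, right_gray, win=64):
    """Return the top-left corner of the flattest win x win disparity window."""
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    best, best_var = None, np.inf
    h, w = disp.shape
    for r in range(0, h - win + 1, win):
        for c in range(0, w - win + 1, win):
            patch = disp[r:r + win, c:c + win]
            valid = patch[patch > 0]          # SGBM marks invalid pixels <= 0
            if valid.size and valid.var() < best_var:
                best, best_var = (r, c), valid.var()
    return best
```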
16. Vision-based Indoor Positioning: Using Graph Topology and Metaheuristics Optimization
Elashry, Abdelgwad, January 2022
No description available.
17. The aeroplane spin motion and an investigation into factors affecting the aeroplane spin
Hoff, Rein, January 2014
A review of aeroplane spin literature is presented, including early spin research history and lessons learned from spinning trials. Despite many years of experience in spinning evaluation, spin characteristics remain difficult to predict; problems have been encountered and several prototype aeroplanes have been lost. No currently published method will reliably predict an aeroplane's spin recovery characteristics. Quantitative data is required to study the spin motion of the aeroplane in adequate detail. An alternative method, Vision Based State Estimation, has been used to capture the spin motion. This method has produced unique illustrations of the spinning research aeroplane and yielded data that would be very challenging to obtain using traditional methods. To investigate the aerodynamic flow over a spinning aeroplane, flights have been flown with wool tufts on the wing, aft fuselage, and empennage for flow visualization. To complement the tuft observations, the differential pressure between the upper and lower surfaces of the horizontal tail and wing has been measured at selected points. Tufts indicate that a large-scale Upper Surface Vortex (USV) forms on the outside wing; this USV has also been visualized using a smoke source. The flow structures on top of both wings, and on top of the horizontal tail surfaces, have also been studied on another aeroplane model. The development of these rotational flow effects has been related to the spin motion. It is hypothesized that the flow structure of the turbulent boundary layer on the outside upper wing surface is due to additional accelerations induced by the rotational motion of the aeroplane. The dynamic effects have been discussed and their importance for the development of the spin considered. In addition, it is suggested that a further dynamic effect may arise from the additional acceleration imparted to the turbulent boundary layer by the rotational motion of the aeroplane. It is recommended that future spin recovery prediction methods account for dynamic effects, in addition to aerodynamic control effectiveness and aeroplane inertia, since the spin entry phase is important for the subsequent development of the spin. Finally, suggestions for future research are given.
18. Autonomous Goal-Based Mapping and Navigation Using a Ground Robot
Ferrin, Jeffrey L., 01 December 2016
Ground robotic vehicles are used in many different applications, many of which involve tele-operation of the robot. Tele-operation allows the robot to be deployed in locations that are too difficult or unsafe for human access. The ability of a ground robot to autonomously navigate to a desired location without a priori map information and without using GPS would allow robotic vehicles to be used in many of these situations and would free the operator to focus on other, more important tasks. The purpose of this research is to develop algorithms that enable a ground robot to autonomously navigate to a user-selected location. The goal is selected from a video feed from the robot, and the robot drives to the goal location while avoiding obstacles. The method uses a monocular camera for measuring the locations of the goal and landmarks, and is validated in simulation and through experiments on an iRobot Packbot platform. A novel goal-based robocentric mapping algorithm is derived in Chapter 3. The map is created using an extended Kalman filter (EKF) by tracking the position of the goal, along with other available landmarks surrounding the robot, as it drives towards the goal. The mapping is robocentric, meaning that the map is a local map created in the robot-body frame. A unique state definition of the goal states and additional landmarks is presented that improves the estimate of the goal location. An improved 3D model is derived and used to allow the robot to drive on non-flat terrain while calculating the position of the goal and other landmarks. The observability and consistency of the proposed method are shown in Chapter 4. The visual tracking algorithm is explained in Chapter 5. This tracker is used with the EKF to improve tracking performance and to allow objects to be tracked even after leaving the camera field of view for significant periods of time. This problem presents a difficult challenge for visual tracking because of the drastic change in size of the goal object as the robot approaches it. The tracking method is validated through experiments in real-world scenarios. The method of planning and control is derived in Chapter 6. A Model Predictive Control (MPC) formulation is designed that explicitly handles the sensor constraints of a monocular camera rigidly mounted to the vehicle. The MPC uses an observability-based cost function to drive the robot along a path that minimizes the position error of the goal in the robot-body frame, while avoiding obstacles on the way to the goal. Conditions are given that guarantee the robot will arrive within a specified distance of the goal. The entire system is implemented on an iRobot Packbot, and experiments are conducted and presented in Chapter 7. The methods described in this work are shown to work on actual hardware, allowing the robot to arrive at a user-selected goal in real-world scenarios.
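A bare-bones sketch of the robocentric idea follows: the goal position is stored in the robot-body frame, so each motion step rotates and translates the state before a bearing-only camera update. The state definition and noise values here are illustrative and do not reproduce the thesis's novel parameterization.

```python
# A minimal robocentric EKF for a single goal point, assuming a planar robot
# and a bearing-only monocular measurement. All noise values are guesses.
import numpy as np

class GoalEKF:
    """Tracks the goal position in the robot-body frame (robocentric map)."""
    def __init__(self, x0, y0):
        self.g = np.array([x0, y0], dtype=float)  # goal [x, y] in body frame
        self.P = np.eye(2)                        # its covariance

    def predict(self, v, w, dt, q=0.01):
        # As the robot moves (v forward, w yaw rate), world-fixed points
        # rotate by -w*dt in the body frame and shift back by the travel.
        c, s = np.cos(-w * dt), np.sin(-w * dt)
        R = np.array([[c, -s], [s, c]])
        self.g = R @ (self.g - np.array([v * dt, 0.0]))
        self.P = R @ self.P @ R.T + q * dt * np.eye(2)

    def update(self, z_bearing, r=0.01):
        # Fuse one bearing measurement of the goal from the camera.
        gx, gy = self.g
        h = np.arctan2(gy, gx)
        d2 = gx * gx + gy * gy
        H = np.array([[-gy / d2, gx / d2]])       # Jacobian of the bearing
        S = (H @ self.P @ H.T).item() + r
        K = (self.P @ H.T) / S                    # 2x1 Kalman gain
        self.g = self.g + (K * (z_bearing - h)).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
```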
19. Vision-Based Control of a Full-Size Car by Lane Detection
Kunz, N. Chase, 01 May 2017
Autonomous driving is an area of increasing investment for researchers and auto manufacturers. Integration has already begun for self-driving cars in urban environments. An essential aspect of navigation in these areas is the ability to sense and follow lane markers. This thesis focuses on the development of a vision-based control platform using lane detection to control a full-sized electric vehicle with only a monocular camera. An open-source, integrated solution is presented for automation of a stock vehicle. Aspects of reverse engineering, system identification, and low-level control of the vehicle are discussed. This work also details methods for lane detection and the design of a non-linear vision-based control strategy.
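A classic lane-detection front end of the kind such a platform might use can be sketched with Canny edges and a probabilistic Hough transform; the thesis's actual detector and non-linear control law may differ. The output is a normalized lateral offset suitable as input to a steering controller.

```python
# A hedged sketch of a Canny + Hough lane front end; thresholds are guesses
# and this does not reproduce the thesis's detector or controller.
import cv2
import numpy as np

def lane_offset(frame):
    """Normalized lateral offset of the detected lane center, in [-1, 1]."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    h, w = edges.shape
    edges[: h // 2, :] = 0  # crude region of interest: keep the lower half
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return None  # no markings found; caller should hold or slow down
    centers = [(l[0][0] + l[0][2]) / 2.0 for l in lines]
    return (float(np.mean(centers)) - w / 2) / (w / 2)
```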
20. A single-chip real-time range finder
Chen, Sicheng, 30 September 2004
Range finders are widely used in various industrial applications, such as machine vision, collision avoidance, and robotics. Presently, most range finders rely either on active transmitters or on sophisticated mechanical controllers and powerful processors to extract range information, which makes them costly, bulky, or slow, and limits their applications. This dissertation is a detailed description of a real-time vision-based range sensing technique and its single-chip CMOS implementation. To the best of our knowledge, this system is the first single-chip vision-based range finder that needs no mechanical position adjustment, memory, or digital processor. The entire signal processing on the chip is purely analog and occurs in parallel. The chip captures the image of an object and extracts depth and range information from a single picture. On-chip, continuous-time, logarithmic photoreceptor circuits couple spatial image signals into the range-extracting processing network. The photoreceptor pixels can adjust their operating regions, simultaneously achieving high sensitivity and wide dynamic range. The image sharpness processor and Winner-Take-All circuits are characterized and analyzed carefully for their temporal bandwidth and detection performance. Mathematical and optical models of the system are built and carefully verified. A prototype based on this technique has been fabricated and tested. The experimental results prove that the range finder achieves acceptable range-sensing precision with low cost and excellent speed over short-to-medium range coverage. It is therefore particularly useful for collision avoidance.
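A behavioral (software) model of the two stages named in the abstract, an image-sharpness measure followed by a Winner-Take-All selection, might look like the sketch below. The specific focus metric and band partitioning are assumptions; the real chip performs these steps in parallel analog circuitry.

```python
# A behavioral software model of the abstract's two named stages: a
# sharpness measure per image band, then a Winner-Take-All pick. The focus
# metric and band layout are assumptions; the chip does this in analog.
import numpy as np

def sharpness(patch):
    """Focus measure: variance of a discrete Laplacian over the patch."""
    lap = (-4.0 * patch[1:-1, 1:-1] + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float(lap.var())

def winner_take_all(image, n_bands=8):
    """Index of the vertical image band with the highest sharpness score."""
    bands = np.array_split(image.astype(np.float32), n_bands, axis=1)
    return int(np.argmax([sharpness(b) for b in bands]))
```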