1 |
Robot control using joint and end-effector sensing. Wijesoma, Wijerupage Sardha, January 1990 (has links)
No description available.
|
2 |
Estimation algorithm for autonomous aerial refueling using a vision based relative navigation system. Bowers, Roshawn Elizabeth, 01 November 2005 (has links)
A new impetus to develop autonomous aerial refueling has arisen out of the growing
demand to expand the capabilities of unmanned aerial vehicles (UAVs). With
autonomous aerial refueling, UAVs can retain the advantages of being small, inexpensive,
and expendable, while offering superior range and loiter-time capabilities.
VisNav, a vision based sensor, offers the accuracy and reliability needed in order to
provide relative navigation information for autonomous probe and drogue aerial refueling
for UAVs. This thesis develops a Kalman filter to be used in combination with
the VisNav sensor to improve the quality of the relative navigation solution during
autonomous probe and drogue refueling. The performance of the Kalman filter is examined
in a closed-loop autonomous aerial refueling simulation which includes models
of the receiver aircraft, VisNav sensor, Reference Observer-based Tracking Controller
(ROTC), and atmospheric turbulence. The Kalman filter is tuned and evaluated
for four aerial refueling scenarios which simulate docking behavior in the absence of
turbulence, and with light, moderate, and severe turbulence intensity. The docking
scenarios demonstrate that, for a sample rate of 100 Hz, the tuning and performance
of the filter do not depend on the intensity of the turbulence, and the Kalman filter
improves the relative navigation solution from VisNav by as much as 50% during
the early stages of the docking maneuver. For the aerial refueling scenarios modeled in this thesis, the addition of the Kalman filter to the VisNav/ROTC structure resulted
in a small improvement in the docking accuracy and precision. The Kalman
filter did not, however, significantly improve the probability of a successful docking
in turbulence for the simulated aerial refueling scenarios.
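The estimator described above can be sketched as a standard linear Kalman filter with a constant-velocity model, here reduced to a single relative-position axis (an illustrative sketch only; the model, noise values, and closing speed below are assumptions, not the thesis's actual tuning):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict with the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement z
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# One relative-position axis at a 100 Hz sample rate, as in the simulation
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
H = np.array([[1.0, 0.0]])              # sensor measures position only
Q = np.diag([1e-6, 1e-4])               # process noise (a tuning knob)
R = np.array([[0.05 ** 2]])             # assumed 5 cm measurement noise

x = np.array([10.0, 0.0])               # [relative position m, velocity m/s]
P = np.eye(2)
rng = np.random.default_rng(0)
for k in range(500):
    true_pos = 10.0 - 0.5 * dt * k      # receiver closing at an assumed 0.5 m/s
    z = np.array([true_pos + rng.normal(0.0, 0.05)])
    x, P = kalman_step(x, P, z, F, H, Q, R)
```

With the process noise kept small, the filter trusts the constant-velocity model and smooths the simulated sensor noise; retuning Q and R against measurement quality is the kind of trade-off evaluated across turbulence levels in the thesis.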
|
3 |
Lifelong Visual Localization for Automated Vehicles. Mühlfellner, Peter, January 2015 (has links)
Automated driving can help solve the current and future problems of individual transportation. Automated valet parking is a possible approach to help with overcrowded parking areas in cities and make electric vehicles more appealing. In an automated valet system, drivers are able to drop off their vehicle close to a parking area. The vehicle drives to a free parking spot on its own, while the driver is free to perform other tasks, such as switching the mode of transportation. Such a system requires the automated car to navigate unstructured, possibly three-dimensional areas. This goes beyond the scope of the tasks performed in the state of the art for automated driving.

This thesis describes a visual localization system that provides accurate metric pose estimates. As sensors, the described system uses multiple monocular cameras and wheel-tick odometry, a sensor set-up that is close to what can be found in current production cars. Metric pose estimates with errors on the order of tens of centimeters enable maneuvers such as parking in tight parking spots. This system forms the basis for automated navigation in the EU-funded V-Charge project.

Furthermore, we present an approach to the challenging problem of lifelong mapping and localization. Over long time spans, the visual appearance of the world is subject to change due to natural and man-made phenomena. The effective long-term usage of visual maps requires the ability to adapt to these changes. We describe a multi-session mapping system that fuses datasets into a single, unambiguous, metric representation. This enables automated navigation in the presence of environmental change. To handle the growing complexity of such a system we propose the concept of Summary Maps, which contain a reduced set of landmarks selected through a combination of scoring and sampling criteria. We show that a Summary Map with bounded complexity can achieve accurate localization under a wide variety of conditions.
Finally, as a foundation for lifelong mapping, we propose a relational database system. This system is based on use-cases that are concerned not only with solving the basic mapping problem, but also with giving users a better understanding of the long-term processes that make up a map. We demonstrate that we can pose interesting queries to the database that help us gain a better intuition about the correctness and robustness of the created maps. This is accomplished by answering questions about the appearance and distribution of the visual landmarks used during mapping.

This thesis takes on one of the major unsolved challenges in vision-based localization and mapping: long-term operation in a changing environment. We approach this problem through extensive real-world experimentation, as well as in-depth evaluation and analysis of recorded data. We demonstrate that accurate metric localization is feasible both during short-term changes, as exemplified by the transition between day and night, and during longer-term changes, such as those due to seasonal variation.
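The Summary Map idea, selecting a bounded landmark subset through combined scoring and sampling, might be sketched as follows (a hypothetical API with made-up scores; the thesis's actual criteria are more involved):

```python
import random

def build_summary_map(landmarks, budget, seed=0):
    """Select a bounded landmark subset by score-weighted sampling.

    `landmarks`: list of (landmark_id, score) pairs, where the score
    stands in for the thesis's combined scoring criteria (e.g. how
    often a landmark was observed across mapping sessions).
    """
    if len(landmarks) <= budget:
        return sorted(lid for lid, _ in landmarks)
    rng = random.Random(seed)
    ids = [lid for lid, _ in landmarks]
    weights = [score for _, score in landmarks]
    # Weighted sampling without replacement down to the budget
    chosen = set()
    while len(chosen) < budget:
        chosen.add(rng.choices(ids, weights=weights, k=1)[0])
    return sorted(chosen)

landmarks = [(i, i + 1) for i in range(100)]   # higher score = more useful
summary = build_summary_map(landmarks, budget=10)
```

Bounding the map size this way keeps localization cost constant as more sessions are fused, while the weighting biases the summary toward landmarks that have proved useful.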
|
4 |
Automated Spacecraft Docking Using a Vision-Based Relative Navigation Sensor. Morris, Jeffery C., 14 January 2010 (has links)
Automated spacecraft docking is a concept of operations with several important
potential applications. One application that has received a great deal of attention
recently is that of an automated docking capable unmanned re-supply spacecraft. In
addition to being useful for re-supplying orbiting space stations, automated shuttles
would also greatly facilitate the manned exploration of nearby space objects, including
the Moon, near-Earth asteroids, or Mars. These vehicles would allow for longer
duration human missions than otherwise possible and could even accelerate human
colonization of other worlds. This thesis develops an optimal docking controller for an
automated docking capable spacecraft. An innovative vision-based relative navigation
system called VisNav is used to provide real-time relative position and orientation
estimates, while a Kalman post-filter generates relative velocity and angular rate estimates
from the VisNav output. The controller's performance robustness is evaluated
in a closed-loop automated spacecraft docking simulation of a scenario in circular
lunar orbit. The simulation uses realistic dynamical models of the two vehicles, both
based on the European Automated Transfer Vehicle. A high-fidelity model of the
VisNav sensor adds realism to the simulated relative navigation measurements. The
docking controller's performance is evaluated in the presence of measurement noise,
with the cases of sensor noise only, vehicle mass errors plus sensor noise, errors in
vehicle moments of inertia plus sensor noise, initial starting position errors plus sensor noise, and initial relative attitude errors plus sensor noise each being considered.
It was found that for the chosen cases and docking scenario, the final controller was
robust to both types of mass property modeling errors, as well as both types of initial
condition modeling errors, even in the presence of sensor noise. The VisNav
system was found to perform satisfactorily in all test cases, with excellent estimate
error convergence characteristics for the scenario considered. These results demonstrate
preliminary feasibility of the presented docking system, including VisNav, for
space-based automated docking applications.
|
5 |
Embedded vision system for intra-row weeding. Oberndorfer, Thomas, January 2006 (has links)
Weed control is nowadays a high-tech discipline. Inter-row weed control is very sophisticated, whereas intra-row weed control still lags behind. The aim of this project is to implement an embedded system for an autonomous vision-based intra-row weeding robot. Weeds and crops can be distinguished by several attributes such as colour, shape, and context features. Using an embedded system has several advantages: the system is specialized for video processing and is designed to withstand the demands of outdoor use. This embedded system is already able to distinguish between weeds and crops. The performance of the hardware is very good, whereas the software still needs some optimization.
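A minimal sketch of the colour cue mentioned above, using the common excess-green index to separate vegetation from soil (illustrative threshold and pixels; not the project's actual classifier, which also uses shape and context features):

```python
import numpy as np

def excess_green_mask(rgb, threshold=20):
    """Vegetation mask from the excess-green index ExG = 2G - R - B,
    a common colour cue for separating plants from soil background."""
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    return (2 * g - r - b) > threshold

# Toy image: one green "plant" pixel next to one brownish "soil" pixel
img = np.array([[[40, 180, 50],
                 [120, 100, 80]]], dtype=np.uint8)
mask = excess_green_mask(img)
```

A cheap per-pixel test like this is well suited to an embedded video-processing pipeline; shape and context cues would then refine the weed/crop decision.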
|
7 |
An Authoring Tool of Vision-based Somatosensory Action (ATVISA). Chiang, Chia-Chi, 29 August 2012 (has links)
Human-Computer Interaction (HCI) has traditionally been narrowly defined as the communication of information between humans and machines. Because of limits on the speed and naturalness of this interaction, intermediate forms such as symbolic instructions and buttons are needed to express human intent. In recent years, HCI development has increasingly centered on the human, driven by steady progress in sensing, computing, and display technologies. Somatosensory (motion-sensing) equipment breaks through both the limits of traditional HCI and the interaction modes of conventional input devices: it captures images through an infrared projector or visible-light camera, recognizes human motions and actions, and makes the interaction more natural and intuitive. Unfortunately, most such systems are limited to a single application in a specific domain and can detect only fixed sequences of actions. Whenever the application's interactions change, users must rewrite the action-sequence recognition program to meet the new somatosensory requirements. Because action sequences cannot be defined flexibly according to the application's needs, the production process is complex and the scope of application is narrow.
This thesis presents an Authoring Tool of Vision-based Somatosensory Action (ATVISA) to address these drawbacks. Users can define human action sequences through a graphical interface, quickly customize the visual detection, and recognize the corresponding somatosensory actions. When the somatosensory equipment detects a defined action sequence, it triggers the corresponding event and handles the event request. The thesis applies ATVISA to general action sequences and to three rehabilitation projects, demonstrating its flexibility and versatility. Users with professional expertise can also compose action sequences for applications in education, games, rehabilitation, and other areas.
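The core idea, matching a user-defined action sequence and triggering an event on completion, can be sketched as a small state machine (a hypothetical API and action names, not ATVISA's actual interface):

```python
class ActionSequenceRecognizer:
    """Match a user-defined sequence of detected actions and fire a
    callback when the whole sequence has been observed in order."""

    def __init__(self, sequence, on_complete):
        self.sequence = sequence
        self.on_complete = on_complete
        self.index = 0          # how much of the sequence is matched so far

    def feed(self, detected_action):
        """Advance the matcher with one detected action per video frame."""
        if detected_action == self.sequence[self.index]:
            self.index += 1
            if self.index == len(self.sequence):
                self.index = 0
                self.on_complete()
        elif detected_action == self.sequence[0]:
            self.index = 1      # restart matching from the first action
        else:
            self.index = 0

events = []
rec = ActionSequenceRecognizer(["raise_left", "raise_right"],
                               lambda: events.append("exercise_done"))
for action in ["idle", "raise_left", "raise_right", "raise_left"]:
    rec.feed(action)
```

Because the sequence is plain data, a graphical authoring tool can construct it without the user rewriting any recognition code, which is the flexibility the thesis argues for.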
|
8 |
Vision-based Navigation for Mobile Robots on Ill-structured Roads. Lee, Hyun Nam, 16 January 2010 (has links)
Autonomous robots can replace humans to explore hostile areas, such as Mars and
other inhospitable regions. A fundamental task for the autonomous robot is navigation.
Due to the inherent difficulty of understanding natural objects and changing environments, navigation in unstructured environments, such as natural terrain, remains largely unsolved. Navigation in ill-structured environments [1], where roads do not disappear completely, offers a setting in which these difficulties can be better understood.
We develop algorithms for robot navigation on ill-structured roads with monocular
vision based on two elements: the appearance information and the geometric information.
The fundamental problem in appearance-based navigation is road representation. We propose a new type of road description, the vision vector space (V2-Space), which
is a set of local collision-free directions in image space. We report how the V2-Space is
constructed and how the V2-Space can be used to incorporate vehicle kinematic, dynamic,
and time-delay constraints in motion planning. Failures occur due to the limitations of the
appearance information-based navigation, such as a lack of geometric information. We
expand the research to include consideration of geometric information.
We present the vision-based navigation system using the geometric information. To
compute depth with monocular vision, we use images obtained from different camera perspectives
during robot navigation. For any given image pair, the depth error in regions close to the camera baseline can be excessively large. This degenerate region is termed the untrusted area, and relying on it could lead to collisions. We analyze how the untrusted areas are distributed on the road plane and predict them before the robot makes its move.
We propose an algorithm to assist the robot in avoiding the untrusted area by selecting optimal
locations to take frames while navigating. Experiments show that the algorithm can
significantly reduce the depth error and hence reduce the risk of collisions. Although this
approach is developed for monocular vision, it can be applied to multiple cameras to control
the depth error. The concept of an untrusted area can be applied to 3D reconstruction
with a two-view approach.
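The untrusted-area reasoning rests on the standard two-view triangulation error model: depth grows as disparity shrinks, and the first-order depth error grows with the square of depth. A sketch under assumed camera parameters (illustrative numbers and error bound, not the thesis's configuration):

```python
def depth_and_error(f_px, baseline_m, disparity_px, disp_noise_px=0.5):
    """Triangulated depth z = f*b/d and its first-order error
    dz ~= (z^2 / (f*b)) * dd: the error blows up for small disparities,
    i.e. for points whose two viewing rays are nearly parallel."""
    z = f_px * baseline_m / disparity_px
    dz = (z ** 2) / (f_px * baseline_m) * disp_noise_px
    return z, dz

def is_untrusted(f_px, baseline_m, disparity_px, max_error_m=0.5):
    """Flag a region as untrusted when its predicted depth error
    exceeds a bound the motion planner is willing to tolerate."""
    _, dz = depth_and_error(f_px, baseline_m, disparity_px)
    return dz > max_error_m

# Assumed camera: 700 px focal length, 0.3 m baseline between two frames
z_near, e_near = depth_and_error(700, 0.3, disparity_px=40)
z_far, e_far = depth_and_error(700, 0.3, disparity_px=2)
```

Since the baseline here comes from the robot's own motion between frames, choosing where to take the next frame changes the baseline, and hence which road regions fall below the error bound, which is the lever the proposed frame-selection algorithm exploits.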
|
9 |
Virtual Mouse: Vision-Based Gesture Recognition. Chen, Chih-Yu, 01 July 2003 (has links)
The thesis describes a method for human-computer interaction through vision-based gesture recognition and hand tracking, which consists of five phases: image grabbing, image segmentation, feature extraction, gesture recognition, and system mouse control. Unlike most previous work, our method recognizes the hand with just one camera and requires no color markers or mechanical gloves. The primary contribution of the thesis is improving the accuracy and speed of the gesture recognition. Further, gesture commands are used to replace the mouse interface on a standard personal computer, controlling application software in a more intuitive manner.
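The segmentation phase of such a markerless pipeline is often done with a colour cue; a minimal sketch using normalized-rg skin thresholds (the threshold values are illustrative assumptions, not necessarily the thesis's method):

```python
import numpy as np

def skin_mask(rgb):
    """Crude skin-colour segmentation in normalized rg space;
    the threshold window is illustrative, not a calibrated model."""
    rgb_f = rgb.astype(np.float64) + 1e-6   # avoid division by zero
    s = rgb_f.sum(axis=-1)
    r = rgb_f[..., 0] / s
    g = rgb_f[..., 1] / s
    return (r > 0.36) & (r < 0.47) & (g > 0.28) & (g < 0.36)

# Toy image: a skin-toned pixel next to a blue background pixel
img = np.array([[[200, 140, 120],
                 [20, 30, 200]]], dtype=np.uint8)
mask = skin_mask(img)
```

Normalizing out intensity makes the cue somewhat robust to lighting changes, which matters when the only sensor is a single uncalibrated camera.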
|
10 |
Distributed Control for Vision-based Convoying. Goi, Hien, 19 January 2010 (has links)
This thesis describes the design of a vision-based vehicle-following system that uses only on-board sensors to enable a convoy of follower vehicles to autonomously track the trajectory of a manually-driven lead vehicle. The tracking is done using the novel concept of a constant time delay, where a follower tracks the delayed trajectory of its leader. Two separate controllers, one linearized about a point ahead and the other linearized about a constant-velocity trajectory, were designed and tested in simulations and experiments. The experiments were conducted with full-sized military vehicles on a 1.3 km test track. Successful field trials with one follower for 10 laps and with two followers for 13.5 laps are presented.
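The constant time delay concept, where a follower tracks where its leader was tau seconds earlier, can be sketched as a bounded buffer of leader poses (an illustrative sketch; the thesis's two controllers are linearized about this delayed reference):

```python
from collections import deque

class DelayedTrajectoryTracker:
    """Generate the follower's reference pose for constant-time-delay
    convoying: track where the leader was `tau` seconds ago."""

    def __init__(self, tau, dt):
        # Buffer just long enough to span the delay at the sample time dt
        self.buffer = deque(maxlen=int(round(tau / dt)) + 1)

    def update(self, leader_pose):
        """Record the latest leader pose; return the delayed reference
        once the buffer spans the full delay, else None."""
        self.buffer.append(leader_pose)
        if len(self.buffer) == self.buffer.maxlen:
            return self.buffer[0]
        return None

# Leader drives along x at 1 m/s, sampled at 10 Hz, delay tau = 1 s
tracker = DelayedTrajectoryTracker(tau=1.0, dt=0.1)
refs = [tracker.update((t * 0.1, 0.0)) for t in range(15)]
```

Tracking a time-delayed pose rather than a fixed following distance means each follower retraces the leader's actual path through curves, which is what makes the scheme suitable for convoying on a winding track.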
|