81

A DSP embedded optical navigation system

Gunnam, Kiran Kumar 30 September 2004 (has links)
Spacecraft missions such as docking and formation flying require high-precision relative position and attitude data. Although the Global Positioning System can provide this capability near the Earth, deep-space missions require the use of alternative technologies. One such technology is the vision-based navigation (VISNAV) sensor system developed at Texas A&M University. VISNAV comprises an electro-optical sensor combined with light sources, or beacons. This patented sensor has an analog detector in the focal plane with a rise time of a few microseconds. Accuracies better than one part in 2000 of the field of view have been obtained. This research presents a new approach involving simultaneous activation of beacons with frequency-division multiplexing as part of the VISNAV sensor system. In addition, it discusses the synchronous demodulation process using digital heterodyning and decimating filter banks on a low-power fixed-point DSP, which improves the accuracy of the sensor measurements and the reliability of the system. This research also presents an optimal and computationally efficient six-degree-of-freedom estimation algorithm using a new measurement model based on the Modified Rodrigues Parameters attitude representation.
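To make the frequency-division multiplexing idea concrete, here is a minimal Python sketch of synchronous demodulation by digital heterodyning followed by a decimating filter. A boxcar decimator stands in for the thesis's decimating filter banks, and the sample rate, beacon frequencies, and decimation factor are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch: recovering per-beacon amplitudes from a composite detector
# signal in which several beacons are active simultaneously, each on its own
# modulation frequency. All numeric parameters below are assumptions.
import numpy as np

FS = 100_000                                   # sample rate in Hz (assumed)
BEACON_FREQS = [8_000, 9_000, 10_000, 11_000]  # beacon carrier tones (assumed)
DECIM = 1000                                   # decimation factor (assumed)

def demodulate(signal, f_beacon, fs=FS, decim=DECIM):
    """Recover one beacon's amplitude envelope from the composite signal."""
    n = np.arange(len(signal))
    # Digital heterodyne: mix the signal down to DC with quadrature tones.
    i = signal * np.cos(2 * np.pi * f_beacon / fs * n)
    q = signal * -np.sin(2 * np.pi * f_beacon / fs * n)
    # Decimating boxcar filter (stand-in for the thesis's filter banks):
    # averaging each block of `decim` samples low-passes and downsamples.
    m = len(signal) // decim
    i_lp = i[:m * decim].reshape(m, decim).mean(axis=1)
    q_lp = q[:m * decim].reshape(m, decim).mean(axis=1)
    return 2 * np.hypot(i_lp, q_lp)            # amplitude per output sample

# Example: two simultaneously active beacons separated only by frequency.
t = np.arange(FS) / FS
composite = 0.7 * np.sin(2 * np.pi * 8_000 * t) + 0.3 * np.sin(2 * np.pi * 9_000 * t)
print([round(float(demodulate(composite, f).mean()), 3) for f in BEACON_FREQS[:2]])
# -> approximately [0.7, 0.3]: each beacon is separated in the digital domain.
```

Because each beacon occupies its own carrier frequency, all beacons can remain on at once and still be told apart after mixing and filtering, which is what permits the simultaneous activation described in the abstract.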
83

Robot navigation in sensor space

Keeratipranon, Narongdech January 2009 (has links)
This thesis investigates the problem of robot navigation using only landmark bearings. The proposed system allows a robot to move to a ground target location specified by the sensor values observed at this ground target position. The control actions are computed based on the difference between the current landmark bearings and the target landmark bearings. No Cartesian coordinates with respect to the ground are computed by the control system; the robot navigates using solely information from the bearing sensor space.

Most existing robot navigation systems require a ground frame (a 2D Cartesian coordinate system) in order to navigate from a ground point A to a ground point B. Commonly used sensors such as laser range scanners, sonar, infrared, and vision do not directly provide the 2D ground coordinates of the robot. Existing systems use the sensor measurements to localise the robot with respect to a map, a set of 2D coordinates of the objects of interest. It is more natural to navigate between the points in the sensor space corresponding to A and B without requiring the Cartesian map and the localisation process.

Research on animals has revealed how insects are able to exploit very limited computational and memory resources to successfully navigate to a desired destination without computing Cartesian positions. For example, a honeybee balances the left and right optical flows to navigate in a narrow corridor. Unlike many other ants, Cataglyphis bicolor does not secrete pheromone trails in order to find its way home but instead uses the sun as a compass to keep track of its home direction vector. The home vector can be inaccurate, so the ant also uses landmark recognition. More precisely, it takes snapshots and compass headings of some landmarks. To return home, the ant tries to line up the landmarks exactly as they were before it started wandering.

This thesis introduces a navigation method based on reflex actions in sensor space. The sensor vector is made of the bearings of some landmarks, and the reflex action is a gradient descent with respect to the distance in sensor space between the current sensor vector and the target sensor vector. Our theoretical analysis shows that, except for some fully characterised pathological cases, any point is reachable from any other point by reflex action in the bearing sensor space, provided the environment contains three landmarks and is free of obstacles.

The trajectories of a robot using reflex navigation, like those of other image-based visual control strategies, do not necessarily correspond to the shortest paths on the ground, because it is the sensor error that is minimised, not the distance travelled on the ground. However, we show that the use of a sequence of waypoints in sensor space can address this problem. In order to identify relevant waypoints, we train a Self-Organising Map (SOM) from a set of observations uniformly distributed with respect to the ground. This SOM provides a sense of location to the robot and allows a form of path planning in sensor space. The proposed navigation system is analysed theoretically and evaluated both in simulation and in experiments on a real robot.
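The reflex action described above lends itself to a short illustration. The Python sketch below descends the gradient of the sensor-space distance between the current and target landmark bearings; the landmark layout, gain, step cap, and stopping threshold are invented for the example and do not come from the thesis.

```python
# Hedged sketch of reflex navigation in bearing sensor space: no ground-frame
# localisation, only a gradient descent on the bearing mismatch.
import numpy as np

LANDMARKS = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])  # assumed layout

def bearings(pos):
    """Sensor vector: the bearing angle to each landmark seen from `pos`."""
    d = LANDMARKS - pos
    return np.arctan2(d[:, 1], d[:, 0])

def sensor_error(pos, target):
    """Squared sensor-space distance, wrapping angle differences to (-pi, pi]."""
    diff = np.angle(np.exp(1j * (bearings(pos) - target)))
    return float(np.dot(diff, diff))

def reflex_step(pos, target, gain=2.0, max_move=0.1, eps=1e-5):
    """One reflex action: a numerical gradient-descent step on the sensor error."""
    ex, ey = np.array([eps, 0.0]), np.array([0.0, eps])
    grad = np.array([
        (sensor_error(pos + ex, target) - sensor_error(pos - ex, target)) / (2 * eps),
        (sensor_error(pos + ey, target) - sensor_error(pos - ey, target)) / (2 * eps),
    ])
    move = gain * grad
    norm = np.linalg.norm(move)
    if norm > max_move:                 # cap the step size far from the target
        move *= max_move / norm
    return pos - move

pos, target = np.array([1.0, 1.0]), bearings(np.array([7.0, 3.0]))
for _ in range(1000):
    if sensor_error(pos, target) < 1e-6:
        break
    pos = reflex_step(pos, target)
print(pos)   # ends near (7, 3), reached using bearing information alone
```

Note that the target is specified purely as a bearing vector, mirroring the thesis's point that the controller never computes ground coordinates; the (7, 3) position appears in the script only to generate that target sensor vector.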
84

Design of a GNC Solution based on Bio-Inspired Optic Flow Sensors adapted to low speed measurement for an Autonomous Soft Lunar Landing

Sabiron, Guillaume 18 November 2014 (has links)
In this PhD thesis, the challenge of autonomous lunar landing is addressed and an innovative method is developed that provides an alternative to the classical sensor suites based on RADAR, LIDAR, and cameras, which tend to be bulky, energy-consuming, and expensive.

The first part is devoted to the development of a sensor inspired by the fly's visual sensitivity to optic flow (OF). The OF is an index giving the relative angular velocity of the environment sensed by the retina of a moving insect or robot. In a fixed environment (where there is no external motion), the self-motion of an airborne vehicle generates an OF containing information about its own velocity and attitude and the distance to obstacles. Based on the "Time of Travel" principle, we present the results obtained for two versions of an optic flow sensor built from five local motion sensors (LMSs). The first version measures the OF accurately in two opposite directions; it was tested in the laboratory and gave satisfactory results. The second version, adapted to the low velocities liable to occur during lunar landing, was then developed. After these sensors were built, their performance was characterised both indoors and outdoors, and lastly they were tested onboard an 80-kg helicopter flying in an outdoor environment.

The Guidance, Navigation and Control (GNC) system was designed in the second part on the basis of several algorithms, using tools such as optimal control, nonlinear control design, and observation theory. This is a particularly innovative approach, since it makes a soft landing possible on the basis of OF measurements while relying as little as possible on inertial sensors. The final constraints imposed by our industrial partners were met by mounting several non-gimbaled sensors, oriented in different gaze directions, on the lander's structure. Information about the lander's self-motion present in the OF measurements is extracted by navigation algorithms, which yield estimates of the ventral OF, the expansion OF, and the pitch angle. It was also established that the planetary lander can be brought gently to the ground by tracking a pre-computed reference trajectory that is optimal in terms of fuel consumption. Software-in-the-loop simulations, in which the sensor firmware was taken into account and virtual images of the lunar surface were used to improve realism, were carried out to assess the potential of the proposed GNC approach.
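As a rough illustration of the "Time of Travel" principle, the Python sketch below estimates optic flow from the delay between a contrast edge crossing two neighbouring photoreceptors. The inter-receptor angle, sampling rate, threshold, and signal model are assumptions made for the example, not parameters of the sensors described here.

```python
# Hedged sketch of the "Time of Travel" optic-flow principle: a contrast edge
# is seen first by one photoreceptor, then by its neighbour an inter-receptor
# angle away; optic flow is that angle divided by the measured travel time.
import numpy as np

DELTA_PHI = np.radians(4.0)   # inter-receptor angle (assumed)
FS = 2000.0                   # photoreceptor sampling rate in Hz (assumed)

def first_crossing(signal, threshold=0.5):
    """Index of the first sample at which the signal crosses the threshold."""
    above = np.nonzero(signal >= threshold)[0]
    return above[0] if above.size else None

def time_of_travel_of(ph1, ph2, fs=FS, delta_phi=DELTA_PHI):
    """Optic flow (rad/s) from two adjacent photoreceptor signals."""
    t1, t2 = first_crossing(ph1), first_crossing(ph2)
    if t1 is None or t2 is None or t2 <= t1:
        return None                      # no measurable contrast transit
    dt = (t2 - t1) / fs                  # travel time of the edge in seconds
    return delta_phi / dt                # omega = delta_phi / delta_t

# Simulated edge passing receptor 1 at t = 50 ms and receptor 2 at t = 75 ms.
t = np.arange(0.0, 0.2, 1 / FS)
edge = lambda t0: 1 / (1 + np.exp(-(t - t0) * 200))   # smoothed contrast step
omega = time_of_travel_of(edge(0.05), edge(0.075))
print(np.degrees(omega), "deg/s")        # ~ 4 deg / 0.025 s = 160 deg/s
```

Low optic flow corresponds to long travel times between the two receptors, which is why a sensor adapted to lunar-landing speeds, as described above, is a distinct design problem from a fast-flight sensor.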
85

AUTOMATING BIG VISUAL DATA COLLECTION AND ANALYTICS TOWARD LIFECYCLE MANAGEMENT OF ENGINEERING SYSTEMS

Jongseong Choi (9011111) 09 September 2022 (has links)
Images have become a ubiquitous and efficient data form for recording information. The use of images for data capture has grown rapidly due to the widespread availability of image sensors and sensor platforms (e.g., smartphones and drones), the simplicity of this approach for broad groups of users, and pervasive access to the internet, itself a class of infrastructure. Such data contain abundant visual information that can be exploited to automate the asset assessment and management tasks that are traditionally conducted manually for engineering systems. Automating the collection, extraction, and analysis of these data is, however, key to using them for decision-making. Despite recent advances in computer vision and machine learning techniques for extracting information from images, automation of these real-world tasks has so far been limited, partly due to the variety of the data and the fundamental challenges associated with each domain. Because of the societal demand for access to, and steady operation of, our infrastructure systems, this class of systems represents an ideal application where automation can have high impact. At present, extensive human involvement is required to perform everyday procedures such as organizing, filtering, and ranking the data before executing analysis techniques, which discourages engineers from collecting large volumes of data in the first place. To break down these barriers, methods must be developed and validated to speed up the analysis and management of data over the lifecycle of infrastructure systems. In this dissertation, big visual data collection and analysis methods are developed with the goal of reducing the burden of manual procedures. The automated capabilities developed herein focus on lifecycle visual assessment and are intended to exploit large volumes of data collected periodically over time. Various classes of infrastructure commonly located in our communities are chosen to validate this work because they (i) provide commodities and services essential to enabling, sustaining, or enhancing our lives, and (ii) make lifecycle structural assessment a high priority. To validate these capabilities, infrastructure-assessment applications are developed that draw on several big-visual-data techniques, such as region-of-interest extraction, orthophoto generation, image localization, object detection, and image organization using convolutional neural networks (CNNs), depending on the lifecycle-assessment needs of the target infrastructure. The research can, however, be adapted to many other applications where monitoring and maintenance are required over a system's lifecycle.
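As a loose illustration of one step in such a pipeline, the Python sketch below uses an off-the-shelf pretrained detector to extract candidate regions of interest and filter an image collection. The COCO-pretrained model, score threshold, and file names are stand-ins, not the dissertation's own models or data.

```python
# Hedged sketch: CNN-based object detection as an automated filter over a
# large image collection, keeping only frames that contain a confidently
# detected object. Uses a generic pretrained torchvision detector as a
# placeholder for a domain-specific model.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # generic COCO detector

def detect_rois(image_path, score_threshold=0.8):
    """Return bounding boxes and labels of confidently detected objects."""
    img = convert_image_dtype(read_image(image_path), torch.float)
    with torch.no_grad():
        out = model([img])[0]
    keep = out["scores"] >= score_threshold
    return out["boxes"][keep], out["labels"][keep]

# Keep only images in which something was confidently detected -- a simple
# form of the automated organizing/filtering discussed above.
# paths = ["survey_0001.jpg", "survey_0002.jpg"]   # hypothetical filenames
# kept = [p for p in paths if len(detect_rois(p)[0]) > 0]
```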
