About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Fusion of Laser Range-Finding and Computer Vision Data for Traffic Detection by Autonomous Vehicles

Cacciola, Stephen J. 21 January 2008 (has links)
The DARPA Challenges were created in response to a Congressional and Department of Defense (DoD) mandate that one-third of US operational ground combat vehicles be unmanned by the year 2015. The Urban Challenge is the latest competition that tasks industry, academia, and inventors with designing an autonomous vehicle that can safely operate in an urban environment. A basic and important capability needed in a successful competition vehicle is the ability to detect and classify objects. The most important objects to classify are other vehicles on the road. Navigating traffic, which includes other autonomous vehicles, is critical in the obstacle-avoidance and decision-making processes. This thesis provides an overview of the algorithms and software designed to detect and locate these vehicles. By combining the individual strengths of laser range-finding and vision processing, the two sensors are able to detect and locate vehicles more accurately than either sensor acting alone. The range-finding module uses the built-in object detection capabilities of IBEO Alasca laser rangefinders to detect the location, size, and velocity of nearby objects. The Alasca units are designed for automotive use, and so they alone are able to identify nearby obstacles as vehicles with a high level of certainty. After some basic filtering, an object detected by the Alasca scanner is given an initial classification based on its location, size, and velocity. The vision module uses the location of these objects, as determined by the range finder, to extract regions of interest from large images through perspective transformation. These regions of the image are then examined for distinct characteristics common to all vehicles, such as tail lights and tires. Checking multiple characteristics helps reduce the number of false-negative detections. Since the entire image is never processed, the image size and resolution can be maximized to ensure the characteristics are as clear as possible. The existence of these characteristics is then used to modify the certainty level from the IBEO and determine whether a given object is a vehicle. / Master of Science
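A rough sketch of the fused pipeline described in this abstract appears below: a rangefinder object position is projected into the camera image, a region of interest is cropped around it, and vision cues adjust the classification certainty. The pinhole-projection setup, function names, and weighting constants are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def project_to_image(obj_xyz, K, R, t):
    """Project a 3D object position (laser frame) to pixel coordinates
    with a pinhole camera model: K intrinsics, (R, t) laser-to-camera."""
    p_cam = R @ obj_xyz + t          # transform into the camera frame
    uvw = K @ p_cam                  # perspective projection
    return uvw[:2] / uvw[2], p_cam[2]

def extract_roi(image, obj_xyz, obj_width_m, K, R, t):
    """Crop a region of interest around the projected object centre.
    The crop width scales with focal length over range, so the ROI
    covers roughly the physical width reported by the rangefinder."""
    (u, v), depth = project_to_image(obj_xyz, K, R, t)
    half_w = int(K[0, 0] * obj_width_m / (2.0 * depth))
    half_h = int(0.75 * half_w)      # assumed height/width aspect ratio
    r0, c0 = max(0, int(v) - half_h), max(0, int(u) - half_w)
    return image[r0:int(v) + half_h, c0:int(u) + half_w]

def update_certainty(prior, tail_lights_found, tires_found):
    """Nudge the rangefinder's prior vehicle certainty with vision cues.
    The +/- weights are placeholders, not the thesis's tuned values."""
    score = prior
    score += 0.2 if tail_lights_found else -0.1
    score += 0.2 if tires_found else -0.1
    return float(np.clip(score, 0.0, 1.0))
```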
42

Perception and Planning of Connected and Automated Vehicles

Mangette, Clayton John 09 June 2020 (has links)
Connected and Automated Vehicles (CAVs) represent a growing area of study in robotics and automotive research. Their potential benefits of increased traffic flow, reduced on-road accidents, and improved fuel economy make them an attractive option. While some autonomous features such as Adaptive Cruise Control and Lane Keep Assist are already integrated into consumer vehicles, they are limited in scope, and innovation is required to realize fully autonomous vehicles. This work addresses the design problems of perception and planning in CAVs. A decentralized sensor fusion system is designed using multi-target tracking to identify targets within a vehicle's field of view, label each target with the lane it occupies, and highlight the most important object (MIO) for Adaptive Cruise Control. Its performance is tested using the Optimal Sub-Pattern Assignment (OSPA) metric and the correct assignment rate of the MIO. The system assigns the MIO with an average accuracy of 98%. The rest of this work considers the coordination of multiple CAVs from a multi-agent motion planning perspective. A centralized planning algorithm is applied to a space similar to a traffic intersection and is demonstrated empirically to be twice as fast as existing multi-agent planners, making it suitable for real-time planning environments. / Master of Science / Connected and Automated Vehicles are an emerging area of research that involves integrating computational components to enable autonomous driving. This work considers two of the major challenges in this area of research. The first half of this thesis considers how to design a perception system in the vehicle that can correctly track other vehicles and assess their relative importance in the environment. A sensor fusion system is designed which incorporates information from different sensor types to form a list of relevant target objects. The rest of this work considers the high-level problem of coordination between autonomous vehicles. A planning algorithm which plans the paths of multiple autonomous vehicles, is guaranteed to prevent collisions, and is empirically faster than existing planning methods is demonstrated.
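The OSPA metric used above for evaluation has a standard definition; a small sketch follows, using an optimal assignment between estimated tracks and ground-truth targets. The cutoff c and order p in the example are arbitrary choices.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=10.0, p=2):
    """Optimal Sub-Pattern Assignment distance between two sets of
    target-state vectors (rows). c is the cutoff, p the order."""
    m, n = len(X), len(Y)
    if m > n:                        # convention: X is the smaller set
        X, Y, m, n = Y, X, n, m
    if n == 0:
        return 0.0
    # Pairwise distances, clipped at the cutoff c.
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    D = np.minimum(D, c)
    # Optimal assignment of the m estimates to m of the n truths.
    row, col = linear_sum_assignment(D ** p)
    loc_err = (D[row, col] ** p).sum()
    card_err = (c ** p) * (n - m)    # penalty for cardinality mismatch
    return ((loc_err + card_err) / n) ** (1.0 / p)

# Example: two estimated tracks vs. three ground-truth targets.
est = np.array([[0.0, 0.0], [5.0, 1.0]])
truth = np.array([[0.2, 0.1], [5.0, 0.8], [20.0, 3.0]])
print(ospa(est, truth))   # ~5.8 with c=10: dominated by the missed target
```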
43

Multi-Sensor, Fused Airspace Monitoring Systems for Automated Collision Avoidance between UAS and Crewed Aircraft

Post, Alberto Martin 07 January 2022 (has links)
The autonomous operation of Uncrewed Aircraft Systems (UAS) beyond the pilot in command's visual line of sight is currently restricted due to a lack of cost-effective surveillance sensors robust enough to operate in low-level airspace. The sensors currently available either locate targets with high accuracy but at too short a range to be usable, or have long ranges but gaps in coverage due to varying terrain. Sensor fusion is one possible method of combining the strengths of different sensors to increase the overall quality of airspace surveillance and allow for robust detect and avoid (DAA) capabilities, enabling beyond-visual-line-of-sight operations. This thesis explores some of the current techniques and challenges in using sensor fusion for collision avoidance between crewed aircraft and UAS. It demonstrates an example method of sensor fusion using data from two radars and an ADS-B receiver. In this thesis, a test bed for ground-based airspace monitoring is proposed as a low-cost method of long-term sensor evaluation. Lastly, a potential method of heterogeneous, score-based sensor fusion is presented and simulated. / Master of Science / Long-range operations of Uncrewed Aircraft Systems (UAS) are currently restricted due to a lack of cost-effective surveillance sensors that work well enough near the ground in the presence of changing terrain. The sensors currently available either locate targets with high accuracy but at too short a range to be usable, or have long ranges but gaps in coverage due to varying terrain. Sensor fusion addresses this problem by combining the strengths of different sensors to allow for better collision avoidance capabilities, enabling these long-range operations. This thesis explores some of the current techniques and challenges in using sensor fusion for collision avoidance between crewed aircraft and UAS. It demonstrates an example method of sensor fusion using data from two radars and an ADS-B receiver. In this thesis, a test bed for ground-based airspace monitoring is proposed for long-term sensor testing. Lastly, a potential method of sensor fusion using different types of sensors is presented and simulated.
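The score-based fusion method is only summarized in the abstract, so the sketch below is a speculative illustration of one way score-weighted fusion of heterogeneous reports (two radars plus an ADS-B receiver) could work; the sensor scores, confirmation threshold, and function names are all invented for illustration.

```python
import numpy as np

# Illustrative per-sensor trust scores (not the thesis's calibrated values):
# ADS-B reports are cooperative and precise; radar scores vary with quality.
SENSOR_SCORE = {"adsb": 0.95, "radar_a": 0.6, "radar_b": 0.5}

def fuse_reports(reports):
    """Score-weighted fusion of position reports believed to originate
    from the same aircraft. Each report is (sensor_name, xyz_position)."""
    weights = np.array([SENSOR_SCORE[name] for name, _ in reports])
    positions = np.array([pos for _, pos in reports])
    fused = (weights[:, None] * positions).sum(axis=0) / weights.sum()
    # An accumulated score can gate track confirmation: a target seen
    # only by one low-score radar stays tentative until corroborated.
    confirmed = weights.sum() > 1.0   # placeholder threshold
    return fused, confirmed

reports = [("radar_a", np.array([1200.0, 300.0, 150.0])),
           ("adsb",    np.array([1180.0, 310.0, 155.0]))]
print(fuse_reports(reports))
```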
44

Sensor fusion to detect scale and direction of gravity in monocular SLAM systems

Tucker, Seth C. January 2017 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Monocular simultaneous localization and mapping (SLAM) is an important technique that enables very inexpensive environment mapping and pose estimation in small systems such as smart phones and unmanned aerial vehicles. However, the information generated by monocular SLAM is at an arbitrary and unobservable scale, leading to drift and making it difficult to use with other sources of odometry for control or navigation. To correct this, the odometry needs to be aligned with metric-scale odometry from another device, or else scale must be recovered from known features in the environment. Typically, known environmental features are not available, and for systems such as cellphones or unmanned aerial vehicles (UAVs), which may experience sustained, small-scale, irregular motion, an inertial measurement unit (IMU) is often the only practical option. Because accelerometers measure acceleration and gravity, an IMU must filter out gravity and track orientation with complex algorithms in order to provide a linear acceleration measurement that can be used to recover SLAM scale. In this thesis, an alternative method is proposed, which detects and removes gravity from the accelerometer measurement by using the unscaled direction of acceleration derived from the SLAM odometry.
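One common batch formulation of this scale-recovery idea solves for the unknown scale and gravity jointly: given unscaled SLAM accelerations and world-frame accelerometer readings, find s and g minimizing ||a_imu - s·a_slam - g||. The sketch below shows that simplified least-squares formulation on synthetic data, not the thesis's actual method.

```python
import numpy as np

def recover_scale_and_gravity(a_slam, a_imu):
    """Solve a_imu ≈ s * a_slam + g in the least-squares sense for the
    unknown SLAM scale s (scalar) and gravity vector g (3-vector).
    Both inputs are N x 3 arrays expressed in a common world frame."""
    N = len(a_slam)
    A = np.zeros((3 * N, 4))
    A[:, 0] = a_slam.reshape(-1)            # column multiplying s
    A[:, 1:] = np.tile(np.eye(3), (N, 1))   # columns multiplying g
    b = a_imu.reshape(-1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[0], x[1:]                      # scale, gravity

# Synthetic check: true scale 2.5, gravity along -z, noisy measurements.
rng = np.random.default_rng(0)
a_true = rng.normal(size=(200, 3))
g_true = np.array([0.0, 0.0, -9.81])
a_imu = 2.5 * a_true + g_true + rng.normal(scale=0.05, size=(200, 3))
print(recover_scale_and_gravity(a_true, a_imu))  # ≈ (2.5, [0, 0, -9.81])
```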
45

Exploration of Deep Learning Applications on an Autonomous Embedded Platform (Bluebox 2.0)

Katare, Dewant 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / An autonomous vehicle depends on a combination of the latest technologies, or ADAS safety features, such as Adaptive Cruise Control (ACC), Autonomous Emergency Braking (AEB), Automatic Parking, Blind Spot Monitoring, Forward Collision Warning or Avoidance (FCW or FCA), and Lane Departure Warning. The current trend is to implement these features using artificial or deep neural networks in place of traditionally used algorithms. Recent research in deep learning and the development of capable processors for autonomous or self-driving cars has shown great promise, but hardware deployment involves many complexities because of limited resources such as memory, computational power, and energy. Deploying the several ADAS safety features mentioned above with multiple sensors and individual processors increases integration complexity and results in a distributed system, which is pivotal for autonomous vehicles. This thesis tackles two important ADAS safety features, using machine learning and deep neural networks and deploying them on an autonomous embedded platform: 1. A machine learning based approach to forward collision warning in an autonomous vehicle. 2. 3D object detection using LiDAR and camera, based primarily on LiDAR point clouds. The proposed forward collision warning model uses a forward-facing automotive radar to provide sensed input values such as acceleration, velocity, and separation distance to a classifier algorithm that, on the basis of a supervised learning model, alerts the driver of a possible collision. Decision Trees, Linear Regression, Support Vector Machines, Stochastic Gradient Descent, and a Fully Connected Neural Network are used for prediction. The second proposed method uses an object detection architecture that combines 2D object detectors with contemporary 3D deep learning techniques. In this approach, a 2D object detector is used first to propose a 2D bounding box on the images or video frames. A 3D object detection technique is then used in which the point clouds are instance-segmented and, based on raw point-cloud density, a 3D bounding box is predicted around each previously segmented object.
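As a hedged illustration of the first contribution, the sketch below trains one of the listed classifiers on radar-style features (relative velocity, relative acceleration, separation distance) to issue a warning. The synthetic data and the time-to-collision labelling rule are placeholders, not the thesis's dataset or thresholds.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic radar samples: [relative velocity (m/s), relative
# acceleration (m/s^2), separation distance (m)] -> warning label.
rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(-30, 5, 5000),   # negative = closing
                     rng.uniform(-5, 5, 5000),
                     rng.uniform(1, 150, 5000)])

# Illustrative labelling rule: warn when time-to-collision < 2.5 s.
closing = -X[:, 0]
ttc = np.where(closing > 0, X[:, 2] / np.maximum(closing, 1e-9), np.inf)
y = (ttc < 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=6).fit(X_tr, y_tr)
print(f"FCW accuracy: {clf.score(X_te, y_te):.3f}")
```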
46

Radar and LiDAR Fusion for Scaled Vehicle Sensing

Beale, Gregory Thomas 02 April 2021 (has links)
Scaled test-beds (STBs) are popular tools to develop and physically test algorithms for advanced driving systems, but often lack automotive-grade radars in their sensor suites. To overcome resolution issues when using a radar at small scale, a high-level sensor fusion approach between the radar and an automotive-grade LiDAR was proposed. The sensor fusion approach was expected to leverage the higher spatial resolution of the LiDAR effectively. First, multi-object radar tracking software (RTS) was developed to track a maneuvering full-scale vehicle using an extended Kalman filter (EKF) and joint probabilistic data association (JPDA). Second, a 1/5th-scale vehicle performed the same maneuvers, scaled to approximately 1/5th the distance and speed. When taking the scaling factor into consideration, the RTS's positional error at small scale was, on average, over 5 times higher than in the full-scale trials. Third, LiDAR object sensor tracks were generated for the small-scale trials using a Velodyne PUCK LiDAR, a simplified point-cloud clustering algorithm, and a second EKF implementation. Lastly, the radar sensor tracks and LiDAR sensor tracks served as inputs to a high-level track-to-track fuser for the small-scale trials. The fusion software used a third EKF implementation to track fused objects between both sensors and demonstrated a 30% increase in positional accuracy for the majority of the small-scale trials when compared to using just the radar or just the LiDAR to track the vehicle. The proposed track fuser could be used to increase the accuracy of RTS algorithms when operating at small scale and allow STBs to better incorporate automotive radars into their sensor suites. / Master of Science / Research and development platforms, often supported by robust prototypes, are essential for the development, testing, and validation of automated driving functions. Thousands of hours of safety and performance benchmarks must be met before any advanced driver assistance system (ADAS) is considered production-ready. However, full-scale testbeds are expensive to build, labor-intensive to design, and present inherent safety risks while testing. Scaled prototypes, developed to model system design and vehicle behavior in targeted driving scenarios, can minimize these risks and expenses. Scaled testbeds, more specifically, can improve the ease of safety testing future ADAS systems and help visualize test results and system limitations better than software simulations can, for audiences with varying technical backgrounds. However, these testbeds are not without limitation. Although small-scale vehicles may accommodate on-board systems similar to those of their full-scale counterparts, as the vehicle scales down, the resolution from perception sensors decreases, especially from on-board radars. With many automated driving functions relying on radar object detection, the scaled vehicle must host radar sensors that function appropriately at scale to support accurate vehicle and system behavior. However, traditional radar technology is known to have limitations when operating in small-scale environments. Sensor fusion, which is the process of merging data from multiple sensors, may offer a potential solution to this issue. Consequently, a sensor fusion approach is presented that augments the angular resolution of radar data in a scaled environment with a commercially available Light Detection and Ranging (LiDAR) system.
With this approach, object tracking software designed to operate in full-scaled vehicles with radars can operate more accurately when used in a scaled environment. Using this improvement, small-scale system tests could confidently and quickly be used to identify safety concerns in ADAS functions, leading to a faster and safer product development cycle.
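A minimal sketch of the high-level track-to-track fusion step described above: two state estimates of the same object are combined by inverse-covariance weighting, neglecting cross-correlation between the trackers for brevity. The states, covariances, and the assumption that the LiDAR track is trusted more are illustrative, not the thesis's tuned values.

```python
import numpy as np

def fuse_tracks(x_radar, P_radar, x_lidar, P_lidar):
    """Track-to-track fusion of two state estimates of the same object,
    weighting each by the inverse of its covariance (cross-correlation
    between the two trackers is neglected here for brevity)."""
    P_r_inv = np.linalg.inv(P_radar)
    P_l_inv = np.linalg.inv(P_lidar)
    P_fused = np.linalg.inv(P_r_inv + P_l_inv)
    x_fused = P_fused @ (P_r_inv @ x_radar + P_l_inv @ x_lidar)
    return x_fused, P_fused

# Illustrative 2D position tracks: LiDAR has the smaller covariance.
x_r, P_r = np.array([2.1, 0.9]), np.diag([0.50, 0.50])
x_l, P_l = np.array([2.0, 1.0]), np.diag([0.05, 0.05])
x_f, P_f = fuse_tracks(x_r, P_r, x_l, P_l)
print(x_f)   # pulled strongly toward the LiDAR estimate
```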
47

Detection and Localization of Elephants using a Geophone Network

Wahledow, Erik, Sjövik, Philip January 2022 (has links)
Elephants can cause people harm and destroy property in communities living close to national parks. An automated system that can detect elephants and warn the people of these communities is of utmost importance for human-elephant coexistence. Elephants' heavy profiles and damped footsteps induce low-frequency ground waves that can be picked up by geophones. This thesis investigates two main problems: detecting whether the geophone measurements contain an elephant footstep, and calculating the direction of the elephant footstep. A real-time system was built containing a sensor array of three geophones. By analyzing the frequency content of the geophone measurements, elephant footsteps could be detected; the system is capable of detecting elephants situated up to 40 meters away from the geophones. Utilizing the sensor array, a direction to the elephant was estimated using triangulation, and two methods of triangulation were investigated. At 15 meters away, the estimate deviated by only a few degrees. At 40 meters away, the estimate was still good and consistent enough to give a general idea of where the elephant was coming from.
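A minimal sketch of the direction-finding step, under a plane-wave assumption: the time-differences-of-arrival across the three geophones determine a slowness vector whose opposite direction points back toward the source. The array geometry and soil wave speed below are assumed values, not the thesis's setup.

```python
import numpy as np

# Assumed geophone positions (metres), relative to sensor 0.
POS = np.array([[0.0, 0.0],
                [2.0, 0.0],
                [0.0, 2.0]])

def direction_to_source(delays, pos=POS):
    """delays[i]: arrival time at sensor i minus sensor 0 (seconds).
    For a plane wave, tau_i = r_i . s with slowness s = u/v, where u is
    the propagation direction; the source lies opposite u."""
    s, *_ = np.linalg.lstsq(pos[1:], np.asarray(delays)[1:], rcond=None)
    u = s / np.linalg.norm(s)
    return np.degrees(np.arctan2(-u[1], -u[0]))   # bearing toward source

# Example: elephant due east, so the wave travels west across the array.
v = 200.0                              # assumed soil wave speed (m/s)
u = np.array([-1.0, 0.0])              # propagation direction (westward)
delays = (POS @ u) / v                 # plane-wave arrival model
print(direction_to_source(delays))     # ~0 deg: source due east
```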
48

A Deep-learning based Approach for Foot Placement Prediction

Lee, Sung-Wook 24 May 2023 (has links)
Foot placement prediction can be important for exoskeleton and prosthesis controllers, human-robot interaction, or body-worn systems to prevent slips or trips. Previous studies investigating foot placement prediction have been limited to predicting foot placement during the swing phase, and do not fully consider contextual information such as the preceding step or the stance phase before push-off. In this study, a deep learning-based foot placement prediction approach was proposed, in which the deep learning models were designed to sequentially process data from three IMU sensors mounted on the pelvis and feet. The raw sensor data are pre-processed to generate multi-variable time-series data for training two deep learning models, where the first model estimates the gait progression and the second model subsequently predicts the next foot placement. The ground truth gait phase data and foot placement data were acquired from a motion capture system. Ten healthy subjects were invited to walk naturally at different speeds on a treadmill. In cross-subject learning, the trained models had a mean distance error of 5.93 cm for foot placement prediction. In single-subject learning, the prediction accuracy improved with additional training data, and a mean distance error of 2.60 cm was achieved by fine-tuning the cross-subject validated models with the target subject data. Even for predictions made at 25-81% of the gait cycle, mean distance errors were only 6.99 cm and 3.22 cm for cross-subject learning and single-subject learning, respectively. / Master of Science / This study proposes a new approach for predicting where a person's foot will land during walking, which could be useful in controlling robots and wearable devices that work with humans to prevent events such as slips and falls and allow for smoother human-robot interaction. Although foot placement prediction has great potential in various domains, current work in this area is limited in terms of practicality and accuracy. The proposed approach uses data from inertial sensors attached to the pelvis and feet, and two deep learning models are trained to estimate the person's walking pattern and predict their next foot placement. The approach was tested on ten healthy individuals walking at different speeds on a treadmill, and achieved state-of-the-art results. The results suggest that this approach could be a promising method when sufficient data from multiple people are available.
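A hedged PyTorch sketch of the two-stage pipeline described above: one recurrent model estimates gait progression from a window of IMU data, and a second predicts the next foot placement from the same window plus the estimated phase. Layer sizes, channel counts, and class names are assumptions, not the thesis's architecture.

```python
import torch
import torch.nn as nn

class GaitPhaseNet(nn.Module):
    """Stage 1: estimate gait progression (0-1) from a window of IMU
    channels (assumed 3 sensors x 6 axes = 18 features per time step)."""
    def __init__(self, n_features=18, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x):                 # x: (batch, time, features)
        _, h = self.gru(x)
        return self.head(h[-1])           # gait phase in [0, 1]

class FootPlacementNet(nn.Module):
    """Stage 2: predict the next foot placement (x, y in metres,
    relative to the stance foot) from the IMU window plus phase."""
    def __init__(self, n_features=18, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden + 1, 2)

    def forward(self, x, phase):
        _, h = self.gru(x)
        return self.head(torch.cat([h[-1], phase], dim=1))

# One forward pass on a dummy 2-second window sampled at 100 Hz.
imu = torch.randn(8, 200, 18)
phase = GaitPhaseNet()(imu)
placement = FootPlacementNet()(imu, phase)
print(placement.shape)                    # torch.Size([8, 2])
```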
49

Synesthetic Sensor Fusion via a Cross-Wired Artificial Neural Network.

Seneker, Stephen Samuel 04 May 2002 (has links) (PDF)
The purpose of this interdisciplinary study was to examine the behavior of two artificial neural networks cross-wired according to the synesthesia cross-wiring hypothesis. Motivation for the study was derived from the study of psychology, robotics, and artificial neural networks, with foreseeable application in the domain of mobile autonomous robotics, where sensor fusion is a current research topic. This model of synesthetic sensor fusion does not exhibit synesthetic responses. However, it was observed that cross-wiring two independent networks does not change the functionality of the individual networks, but allows the inputs to one network to partially determine the outputs of the other network in some cases. Specifically, there are measurable influences of network A on network B, and yet network B retains its ability to respond independently.
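A toy sketch of the cross-wiring idea: two small independent feedforward networks, with network A's hidden activations additionally wired into network B's output layer, so A's inputs partially determine B's outputs while B keeps its own pathway. All sizes and weights are arbitrary, not the thesis's configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class CrossWiredPair:
    """Two independent feedforward networks, with a scaled copy of
    network A's hidden activations also wired into network B's output
    layer (the 'cross-wiring')."""
    def __init__(self, n_in=4, n_hid=3, n_out=2, cross_strength=0.3):
        self.Wa1 = rng.normal(size=(n_hid, n_in))
        self.Wa2 = rng.normal(size=(n_out, n_hid))
        self.Wb1 = rng.normal(size=(n_hid, n_in))
        self.Wb2 = rng.normal(size=(n_out, n_hid))
        # Cross weights: A's hidden layer feeding B's output layer.
        self.Wx = cross_strength * rng.normal(size=(n_out, n_hid))

    def forward(self, xa, xb):
        ha = sigmoid(self.Wa1 @ xa)
        hb = sigmoid(self.Wb1 @ xb)
        ya = sigmoid(self.Wa2 @ ha)                  # A unaffected by B
        yb = sigmoid(self.Wb2 @ hb + self.Wx @ ha)   # B partly driven by A
        return ya, yb

net = CrossWiredPair()
xa, xb = rng.normal(size=4), rng.normal(size=4)
print(net.forward(xa, xb))
```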
50

Transfer learning in laser-based additive manufacturing: Fusion, calibration, and compensation

Francis, Jack 25 November 2020 (has links)
The objective of this dissertation is to provide key methodological advancements towards the use of transfer learning in Laser-Based Additive Manufacturing (LBAM), to assist practitioners in producing high-quality repeatable parts. Currently, in LBAM processes, there is an urgent need to improve the quality and repeatability of the manufacturing process. Fabricating parts using LBAM is often expensive, due to the high cost of materials, the skilled machine operators needed for operation, and the long build times needed to fabricate parts. Additionally, monitoring the LBAM process is expensive, due to the highly specialized infrared sensors needed to monitor the thermal evolution of the part. These factors lead to a key challenge in improving the quality of additively manufactured parts, because additional experiments and/or sensors are expensive. We propose to use transfer learning, which is a statistical technique for transferring knowledge from one domain to a similar, yet distinct, domain, to leverage previous non-identical experiments to assist practitioners in expediting part certification. By using transfer learning, previous experiments completed in similar, but non-identical, domains can be used to provide insight towards the fabrication of high-quality parts. In this dissertation, transfer learning is applied to four key domains within LBAM. First, transfer learning is used for sensor fusion, specifically to calibrate the infrared camera with true temperature measurements from the pyrometer. Second, a Bayesian transfer learning approach is developed to transfer knowledge across different material systems, by modelling material differences as a lurking variable. Third, a Bayesian transfer learning approach for predicting distortion is developed to transfer knowledge from a baseline machine system to a new machine system, by modelling machine differences as a lurking variable. Finally, compensation plans are developed from the transfer learning models to assist practitioners in improving the quality of parts using previous experiments. The work of this dissertation provides current practitioners with methods for sensor fusion, material/machine calibration, and efficient learning of compensation plans with few samples.
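One simple way to realize this style of transfer, shown purely as an illustration rather than the dissertation's Bayesian models: fit a model on abundant source-domain data, then model the few target-domain residuals (the lurking-variable effect) with a Bayesian regression whose prior shrinks the correction toward zero when target data are scarce.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, BayesianRidge

# Source domain: plentiful data from the baseline machine/material.
rng = np.random.default_rng(7)
X_src = rng.uniform(0, 1, (500, 3))          # process parameters
y_src = X_src @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.05, 500)

# Target domain: few samples; the lurking variable (new machine or
# material) shifts the response by an unknown systematic offset.
X_tgt = rng.uniform(0, 1, (15, 3))
y_tgt = X_tgt @ np.array([2.0, -1.0, 0.5]) + 0.8 + rng.normal(0, 0.05, 15)

source_model = LinearRegression().fit(X_src, y_src)

# Transfer step: model the target residuals with a Bayesian regression
# so the prior shrinks the correction when target data are scarce.
residuals = y_tgt - source_model.predict(X_tgt)
residual_model = BayesianRidge().fit(X_tgt, residuals)

def predict_target(X):
    """Source-domain prediction plus the learned lurking-variable offset."""
    return source_model.predict(X) + residual_model.predict(X)

X_new = rng.uniform(0, 1, (3, 3))
print(predict_target(X_new))
```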
