41. Exploration of Deep Learning Applications on an Autonomous Embedded Platform (Bluebox 2.0). Katare, Dewant. 12 1900.
Indiana University-Purdue University Indianapolis (IUPUI) / An autonomous vehicle depends on a combination of the latest technologies, namely the ADAS safety features: Adaptive Cruise Control (ACC), Autonomous Emergency Braking (AEB), Automatic Parking, Blind Spot Monitoring, Forward Collision Warning or Avoidance (FCW or FCA), and Lane Departure Warning. The current trend is to implement these features using artificial or deep neural networks in place of the traditionally used algorithms. Recent research in deep learning, and the development of capable processors for autonomous or self-driving cars, has shown great promise, but hardware deployment faces many difficulties because of limited resources such as memory, computational power, and energy. Deploying the several ADAS safety features mentioned above with multiple sensors and individual processors increases integration complexity and also distributes the system, which is a pivotal concern for autonomous vehicles.
This thesis tackles two important ADAS safety features, Forward Collision Warning and object detection, using machine learning and deep neural networks, and addresses their deployment on the autonomous embedded platform:
1. A machine-learning-based approach for the forward collision warning system in an autonomous vehicle.
2. 3D object detection using LiDAR and camera, based primarily on LiDAR point clouds.
The proposed forward collision warning model is based on a forward-facing automotive radar that provides sensed inputs such as acceleration, velocity, and separation distance to a classifier, which, on the basis of a supervised learning model, alerts the driver of a possible collision. Decision Trees, Linear Regression, Support Vector Machines, Stochastic Gradient Descent, and a fully connected neural network are used for prediction.
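
As a rough illustration of this setup, the sketch below trains the classifier families named above on synthetic radar features. The labeling rule (a time-to-collision threshold), the feature ranges, and the use of a logistic model as the classification counterpart of linear regression are assumptions for the example, not the thesis's actual data or code.

```python
# A minimal sketch (not the author's code) of the supervised FCW setup:
# radar features -> binary collision-warning label, using the classifier
# families named in the abstract. All data here is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 2000
# Features: relative acceleration (m/s^2), closing velocity (m/s), separation (m)
X = np.column_stack([rng.normal(0, 2, n), rng.uniform(0, 30, n), rng.uniform(1, 120, n)])
# Assumed labeling rule: warn when time-to-collision = separation / velocity < 2 s
ttc = X[:, 2] / np.maximum(X[:, 1], 1e-6)
y = (ttc < 2.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "decision tree": DecisionTreeClassifier(max_depth=5),
    "logistic (linear) model": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "SGD": SGDClassifier(),
    "fully connected NN": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {model.score(X_te, y_te):.3f}")
```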
The second proposed method uses an object detection architecture that combines 2D object detectors with contemporary 3D deep learning techniques. In this approach, a 2D object detector is applied first, proposing 2D bounding boxes on the images or video frames. A 3D object detection technique is then used in which the point clouds are instance segmented and, based on raw point-cloud density, a 3D bounding box is predicted around each previously segmented object.
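
A simplified sketch of this 2D-to-3D handoff, assuming a known camera projection matrix: LiDAR points falling inside the frustum of a 2D box are selected, a crude median-distance cluster stands in for the density-based instance segmentation, and an axis-aligned 3D box is fit. This illustrates the idea only, not the thesis's actual pipeline.

```python
# Frustum selection sketch: keep only LiDAR points whose image projection
# falls inside a 2D detection box, then fit a 3D box to a crude cluster.
# The calibration matrix P and the clustering rule are assumptions.
import numpy as np

def frustum_3d_box(points_xyz, P, box2d, radius=2.5, min_pts=20):
    """points_xyz: (N,3) LiDAR points; P: (3,4) camera projection; box2d: (x1,y1,x2,y2)."""
    N = points_xyz.shape[0]
    homog = np.hstack([points_xyz, np.ones((N, 1))])
    uvw = homog @ P.T                       # project into the image plane
    in_front = uvw[:, 2] > 0
    uv = uvw[:, :2] / np.maximum(uvw[:, 2:3], 1e-9)
    x1, y1, x2, y2 = box2d
    mask = in_front & (uv[:, 0] >= x1) & (uv[:, 0] <= x2) \
                    & (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    frustum = points_xyz[mask]
    if len(frustum) < min_pts:
        return None
    # Crude instance segmentation: keep points near the median point, as a
    # stand-in for the density-based segmentation described in the abstract.
    med = np.median(frustum, axis=0)
    cluster = frustum[np.linalg.norm(frustum - med, axis=1) < radius]
    lo, hi = cluster.min(axis=0), cluster.max(axis=0)
    return lo, hi                           # axis-aligned 3D box corners
```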

42. Detection and Localization of Elephants using a Geophone Network. Wahledow, Erik; Sjövik, Philip. January 2022.
Elephants can cause people harm and destroy property in communities living close to national parks, so an automated system that can detect elephants and warn these communities is of utmost importance for human-elephant coexistence. Elephants' heavy profiles and damped footsteps induce low-frequency ground waves that can be picked up by geophones. The thesis investigates two main problems: detecting whether geophone measurements contain an elephant footstep, and calculating the direction of the footstep. A real-time system is built around a sensor array of three geophones. By analyzing the frequency content of the geophone measurements, elephant footsteps could be detected; the system is capable of detecting elephants up to 40 meters away from the geophones. Utilizing the sensor array, a direction to the elephant was estimated using triangulation, and two methods of triangulation were investigated. At 15 meters, the estimate deviated by only a few degrees; at 40 meters, it was still good and consistent enough to give a general idea of where the elephant was coming from.
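
One standard way to get a direction from a small array like this is from time differences of arrival (TDOA) under a plane-wave (far-field) model; the sketch below shows that calculation. The array geometry, ground-wave speed, and the way arrival times are picked are assumptions for illustration, not the thesis's method.

```python
# Plane-wave bearing estimation from TDOAs across three geophones.
import numpy as np

def bearing_from_tdoa(sensor_xy, arrival_times):
    """sensor_xy: (3,2) geophone positions [m]; arrival_times: (3,) picks [s].
    Returns bearing to the source in degrees (far-field model)."""
    # For a plane wave with slowness vector s (direction/speed), the TDOA
    # between sensors i and j satisfies: t_i - t_j = (p_i - p_j) @ s
    pairs = [(0, 1), (0, 2), (1, 2)]
    A = np.array([sensor_xy[i] - sensor_xy[j] for i, j in pairs])
    b = np.array([arrival_times[i] - arrival_times[j] for i, j in pairs])
    s, *_ = np.linalg.lstsq(A, b, rcond=None)     # slowness vector
    # The wave travels away from the elephant, so the source lies at -s.
    return np.degrees(np.arctan2(-s[1], -s[0]))

# Example: equilateral array, 5 m sides, wave arriving from due north at 200 m/s
sensors = np.array([[0.0, 0.0], [5.0, 0.0], [2.5, 4.33]])
s_true = np.array([0.0, -1.0]) / 200.0            # propagating southward
times = sensors @ s_true
print(f"estimated bearing: {bearing_from_tdoa(sensors, times):.1f} deg")  # ~90 (north)
```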

43. Synesthetic Sensor Fusion via a Cross-Wired Artificial Neural Network. Seneker, Stephen Samuel. 04 May 2002 (PDF).
The purpose of this interdisciplinary study was to examine the behavior of two artificial neural networks cross-wired based on the synesthesia cross-wiring hypothesis. Motivation for the study was derived from the study of psychology, robotics, and artificial neural networks, with perceivable application in the domain of mobile autonomous robotics where sensor fusion is a current research topic. This model of synesthetic sensor fusion does not exhibit synesthetic responses. However, it was observed that cross-wiring two independent networks does not change the functionality of the individual networks, but allows the inputs to one network to partially determine the outputs of the other network in some cases. Specifically, there are measurable influences of network A on network B, and yet network B retains its ability to respond independently.
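
A toy sketch of the cross-wiring idea described here, assuming arbitrary layer sizes and random weights (not the thesis's architecture): each network keeps its own input-to-output path, while weak cross connections let one network's hidden activations partially drive the other's output.

```python
# Two independent two-layer networks, with extra "cross" weights letting A's
# hidden layer partially drive B's output (and vice versa). Sizes and weights
# are arbitrary assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Independent networks A and B (input -> hidden -> output)
W1_a, W2_a = rng.normal(size=(8, 4)), rng.normal(size=(3, 8))
W1_b, W2_b = rng.normal(size=(8, 4)), rng.normal(size=(3, 8))
# Cross-wiring: weak connections between A's hidden layer and B's output, and back
C_ab = 0.2 * rng.normal(size=(3, 8))
C_ba = 0.2 * rng.normal(size=(3, 8))

def forward(x_a, x_b):
    h_a = sigmoid(W1_a @ x_a)
    h_b = sigmoid(W1_b @ x_b)
    out_a = sigmoid(W2_a @ h_a + C_ba @ h_b)   # B's hidden state leaks into A
    out_b = sigmoid(W2_b @ h_b + C_ab @ h_a)   # A's hidden state leaks into B
    return out_a, out_b

# B still responds on its own, but its output shifts with A's input:
x_b = rng.normal(size=4)
_, out_b1 = forward(rng.normal(size=4), x_b)
_, out_b2 = forward(rng.normal(size=4), x_b)
print(np.abs(out_b1 - out_b2))   # nonzero: A's input partially determines B's output
```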

44. Parameter Estimation Using Sensor Fusion And Model Updating. Francoforte, Kevin. 01 January 2007.
Engineers and infrastructure owners have to manage an aging civil infrastructure in the US. Engineers have the opportunity to analyze structures using finite element models (FEM), and they often base their engineering decisions on the outcome of the results. Ultimately, the success of these decisions is directly related to the accuracy of the finite element model in representing the real-life structure. Improper assumptions in the model, such as member properties or connections, can lead to inaccurate results. A major source of modeling error in many finite element models of existing structures is improper representation of the boundary conditions. This study aims to integrate experimental and analytical concepts by means of parameter estimation, whereby the boundary condition parameters of a structure in question are determined. FEM updating is a commonly used method to determine the "as-is" condition of an existing structure. Experimental testing of the structure using static and/or dynamic measurements can be utilized to update the unknown parameters. Optimization programs update the unknown parameters by minimizing the error between the analytical and experimental measurements. Through parameter estimation, unknown parameters of the structure such as stiffness, mass, or support conditions can be estimated, or more appropriately, "updated", so that the updated model better represents the actual conditions of the system.

In this study, a densely instrumented laboratory test beam was used to carry out both analytical and experimental analyses of multiple boundary condition setups. The test beam was instrumented with an array of displacement transducers, tiltmeters, and accelerometers. Linear vertical springs represented the unknown boundary stiffness parameters in the numerical model of the beam. Nine different load cases were performed; static measurements were used to update the spring stiffnesses, while dynamic measurements and additional load cases were used to verify the updated parameters. Two different optimization programs were used to update the unknown parameters, and their results were compared. One optimization tool, Spreadsheet Parameter Estimation (SPE), was developed by the author and utilizes the Solver function found in the widely available Microsoft Excel software. The other, the comprehensive MATLAB-based PARameter Identification System (PARIS) software, was developed at Tufts University.

Optimization results from the two programs are presented and discussed for different boundary condition setups in this thesis. For this purpose, finite element models were updated using the static data and then checked against dynamic measurements for model validation. Model parameter updating provides excellent insight into the behavior of different boundary conditions and their effect on the overall structural behavior of the system. Updated FEMs using estimated parameters from both optimization programs generally show promising results when compared to the experimental data sets. Although the use of SPE is simple and generally straightforward, its limitations become apparent when dealing with complex, non-linear support conditions. Due to the inherent error associated with experimental measurements and FEM modeling assumptions, PARIS serves as the better-suited tool for parameter estimation.
Results from SPE can be used for quick analysis of structures and can serve as initial inputs for the more in-depth PARIS models. A number of different sensor types and spatial resolutions were also investigated to find the minimum instrumentation that gives an acceptable model representation in terms of model and experimental data correlation.
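
The updating loop both tools implement boils down to a small least-squares problem. The sketch below, a stand-in for SPE/PARIS rather than either program, fits two unknown end-spring stiffnesses so that a toy rigid-beam model reproduces "measured" static deflections; the beam length, sensor positions, and load cases are invented for illustration.

```python
# Boundary-condition updating as least squares: fit spring stiffnesses so
# model-predicted deflections match measurements. A rigid beam on two end
# springs stands in for the laboratory beam's FEM.
import numpy as np
from scipy.optimize import least_squares

L = 4.0                                   # beam length [m] (assumed)
sensors_x = np.array([0.5, 2.0, 3.5])     # displacement transducer positions [m]
load_cases = [(1000.0, 1.0), (1000.0, 2.0), (1000.0, 3.0)]  # (P [N], a [m])

def predicted_deflections(k):
    k1, k2 = k
    rows = []
    for P, a in load_cases:
        R1, R2 = P * (L - a) / L, P * a / L          # static support reactions
        d1, d2 = R1 / k1, R2 / k2                    # spring compressions
        rows.append(d1 + (d2 - d1) * sensors_x / L)  # rigid-body interpolation
    return np.concatenate(rows)

k_true = np.array([2.0e6, 3.5e6])         # "actual" boundary stiffness [N/m]
measured = predicted_deflections(k_true) + np.random.default_rng(2).normal(0, 1e-6, 9)

fit = least_squares(lambda k: predicted_deflections(k) - measured,
                    x0=[1.0e6, 1.0e6], bounds=(1e4, 1e8))
print("updated stiffnesses:", fit.x)      # should approach k_true
```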

45. Tracking in Distributed Networks Using Harmonic Mean Density. Sharma, Nikhil. January 2024.
Sensors are getting smaller, cheaper, and more sophisticated, and are increasingly available. Compared to 25 years ago, an object tracking system can now easily achieve twice the accuracy, much larger coverage, and fault tolerance, without any significant change in overall cost. This is possible by simply employing more than one sensor and processing measurements from the individual sensors sequentially (or even in batch form).
This is the centralized scheme of multi-sensor target tracking, wherein the sensors send their individual detections to a central facility where tracking-related tasks such as data association, filtering, and track management are performed. It is perhaps the simplest multi-sensor solution, and it is optimal in the minimum mean square error (MMSE) sense among multi-sensor schemes.
In sophisticated sensors, the number of detections can reach thousands in a single frame. The communication and computation load of gathering all such detections at the fusion center hampers the system's performance while also leaving it vulnerable to faults. A better solution is a distributed architecture wherein the individual sensors are equipped with processing capabilities, so that they can detect measurements, filter out clutter, form tracks, and transmit them to the fusion center. The fusion center then fuses tracks instead of measurements, which is why this scheme is commonly termed track-level fusion.
In addition to sub-optimality, track-level fusion suffers from a deeper problem, which arises from correlations between the tracks to be fused. Often, in realistic scenarios, these cross-correlations are unknown, with no means to calculate them. Thus, fusion cannot be performed using traditional methods unless extra information is transmitted to the fusion center.
This thesis proposes a novel and generalized method of fusing any two probability density functions (pdfs) between which a positive cross-correlation exists. In modern tracking systems, the tracks are essentially pdfs and not necessarily Gaussian. We propose harmonic mean density (HMD) based fusion and prove that it obeys all the necessary requirements of a viable fusion mechanism. We show that fusion in this case is a classical example of agreement between the fused and participating densities based on average $\chi^2$ divergence. Compared to other such fusion techniques in the literature, the HMD performs exceptionally well.
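
As a concrete illustration, the sketch below fuses two 1-D track densities on a grid via their weighted harmonic mean. The equal weights and Gaussian participants are assumptions for the example, not the thesis's general construction.

```python
# Grid-based harmonic mean density (HMD) fusion of two 1-D track posteriors.
import numpy as np
from scipy.stats import norm

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
p1 = norm.pdf(x, loc=0.0, scale=1.5)      # track pdf from sensor node 1
p2 = norm.pdf(x, loc=1.0, scale=2.0)      # track pdf from sensor node 2

w1 = w2 = 0.5                              # assumed equal fusion weights
hmd = 1.0 / (w1 / np.maximum(p1, 1e-300) + w2 / np.maximum(p2, 1e-300))
hmd /= hmd.sum() * dx                      # renormalize to a proper pdf

# Unlike the product rule (optimal only for independent tracks), the harmonic
# mean stays conservative when the two tracks are positively correlated.
print("fused mean:", (x * hmd).sum() * dx)
```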
Transmitting covariance matrices in a distributed architecture is not always possible, e.g., in tactical and automotive systems. Fusion of tracks without knowledge of their uncertainty is another problem discussed in the thesis. We propose a novel technique for local covariance reconstruction at the fusion center given only the estimates and a vector of the times at which updates occurred at the local sensor node. It is shown in a realistic scenario that the reconstructed covariance converges to the actual covariance in the sense of the Frobenius norm, making fusion without covariance possible. / Thesis / Doctor of Philosophy (PhD)

46. Assisted GNSS Positioning using State Space Corrections. Philipsson, Oskar. January 2023.
Classical GNSS-based positioning has accuracy limitations due to many sources of error, ranging from clock and orbit errors to errors due to variations in atmospheric propagation delays. One way to improve GNSS positioning is to generate real-time corrections using a GNSS reference network. The corrections can then be distributed through the mobile network and delivered in real time to the device that should position itself. This thesis aims to develop a positioning engine utilizing state space representation (SSR) corrections. The thesis also has the goal of developing methods for combining pseudorange measurements with carrier-phase measurements when SSR corrections are used. The static and dynamic performance of the positioning engine is evaluated, the SSR correction format itself is evaluated, and different levels of SSR corrections are compared. The proposed combined positioning engine uses SSR corrections and single-difference measurements. Through this, all major error sources on the satellite side, the device side, and in the atmosphere are removed, except for an integer ambiguity in the carrier-phase measurement. This ambiguity is handled by tracking the GNSS receiver's position along with the integer ambiguities in an extended Kalman filter (EKF). Experiments show that using real-time SSR corrections leads to a significant improvement in global absolute positioning for simple GNSS receivers that use only a single measurement frequency and only pseudorange measurements. For a more advanced receiver capable of carrier-phase measurements, experiments together with simulation results show that the proposed combined positioning engine improves positioning performance even further.
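
A toy sketch of the ambiguity-in-the-state idea, collapsed to one dimension: once the corrections remove the large error terms, a Kalman filter (linear in this reduced setting) jointly estimates a range and a float carrier-phase ambiguity from pseudorange and phase measurements. The noise levels and static single-satellite geometry are simplifying assumptions; the actual engine is a full EKF over 3D position with many satellites.

```python
# 1-D Kalman filter with state [range r, float ambiguity N], observed via
# a noisy pseudorange (rho = r) and a precise carrier phase (phi = r + lambda*N).
import numpy as np

wavelength = 0.19                 # L1 carrier wavelength [m]
x = np.array([0.0, 0.0])          # state: [range r, ambiguity N]
P = np.diag([100.0, 1e4])         # large initial uncertainty
R = np.diag([1.0**2, 0.005**2])   # pseudorange vs. carrier-phase noise [m^2]
H = np.array([[1.0, 0.0],         # pseudorange: rho = r
              [1.0, wavelength]]) # carrier phase: phi = r + lambda * N

rng = np.random.default_rng(3)
r_true, N_true = 12.34, 7.0
for _ in range(50):               # static receiver: no process noise
    z = np.array([r_true + rng.normal(0, 1.0),
                  r_true + wavelength * N_true + rng.normal(0, 0.005)])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(f"range: {x[0]:.3f} m (true {r_true}), ambiguity: {x[1]:.2f} (true {N_true})")
```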

47. Drone Detection and Classification using Machine Learning. Shafiq, Khurram. 26 September 2023.
UAVs (Unmanned Aerial Vehicles) are a source of entertainment and a pleasurable experience, attracting many young people to flying as a hobby. With the potential increase in the number of UAVs, the risk of their use for malicious purposes also increases. In addition, birds and UAVs exhibit very similar maneuvers during flight, and UAVs can carry significant payloads, which can have unintended consequences. Detecting UAVs near red-zone areas is therefore an important problem. Moreover, small UAVs can record video from large distances without being spotted by the naked eye. An appropriate network of sensors may be needed to foresee the arrival of such entities from a safe distance, before they pose any danger to the surrounding areas.
Despite the growing interest in UAV detection, limited research has been conducted in this area due to a lack of available data for model training. This thesis proposes a novel approach to address this challenge by leveraging experimental data collected in real time using high-sensitivity sensors, instead of relying solely on simulations. This approach allows for improved model accuracy and a better representation of the complex and dynamic environments in which UAVs operate, which are difficult to simulate accurately. The thesis further explores the application of machine learning and sensor fusion algorithms to detect UAVs and distinguish them from other objects, such as birds, in real time. Specifically, the thesis utilizes YOLOv3 with Deep SORT and sensor fusion algorithms to achieve accurate UAV detection.
In this study, we employed YOLOv3, a deep learning model known for its high efficiency, to facilitate real-time drone-versus-bird detection. To further enhance the reliability of the system, we incorporated sensor fusion, leading to a more stable and accurate real-time system and mitigating the incidence of false detections. Our study indicates that the YOLOv3 model outperformed state-of-the-art models in terms of both speed and robustness, achieving confidence scores above 95%. Moreover, the YOLOv3 model demonstrated a promising capability in real-time drone-versus-bird detection, which suggests its potential for practical applications.
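
The core step in a tracker like Deep SORT is associating each frame's detections with existing tracks. The sketch below shows a stripped-down, library-free version of that association using greedy IoU matching; the real tracker adds a Kalman motion model and appearance embeddings, and the threshold here is an assumption.

```python
# Greedy IoU matching of detections to tracks, the association step that
# trackers such as Deep SORT build on (motion and appearance cues omitted).
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks: List[Box], detections: List[Box], thresh: float = 0.3):
    """Returns (track_idx, det_idx) pairs, best IoU first."""
    candidates = sorted(((iou(t, d), ti, di)
                         for ti, t in enumerate(tracks)
                         for di, d in enumerate(detections)), reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in candidates:
        if score < thresh:
            break
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti); used_d.add(di)
    return matches

print(associate([(0, 0, 10, 10)], [(1, 1, 11, 11), (50, 50, 60, 60)]))  # [(0, 0)]
```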

48. Transfer learning in laser-based additive manufacturing: Fusion, calibration, and compensation. Francis, Jack. 25 November 2020.
The objective of this dissertation is to provide key methodological advancements towards the use of transfer learning in Laser-Based Additive Manufacturing (LBAM), to assist practitioners in producing high-quality repeatable parts. Currently, in LBAM processes, there is an urgent need to improve the quality and repeatability of the manufacturing process. Fabricating parts using LBAM is often expensive, due to the high cost of materials, the skilled machine operators needed for operation, and the long build times needed to fabricate parts. Additionally, monitoring the LBAM process is expensive, due to the highly specialized infrared sensors needed to monitor the thermal evolution of the part. These factors lead to a key challenge in improving the quality of additively manufactured parts, because additional experiments and/or sensors are expensive. We propose to use transfer learning, a statistical technique for transferring knowledge from one domain to a similar, yet distinct, domain, to leverage previous non-identical experiments and assist practitioners in expediting part certification. By using transfer learning, previous experiments completed in similar, but non-identical, domains can be used to provide insight towards the fabrication of high-quality parts. In this dissertation, transfer learning is applied to four key domains within LBAM. First, transfer learning is used for sensor fusion, specifically to calibrate the infrared camera with true temperature measurements from the pyrometer. Second, a Bayesian transfer learning approach is developed to transfer knowledge across different material systems, by modelling material differences as a lurking variable. Third, a Bayesian transfer learning approach for predicting distortion is developed to transfer knowledge from a baseline machine system to a new machine system, by modelling machine differences as a lurking variable. Finally, compensation plans are developed from the transfer learning models to assist practitioners in improving the quality of parts using previous experiments. The work of this dissertation provides current practitioners with methods for sensor fusion, material/machine calibration, and efficient learning of compensation plans with few samples.
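
A minimal sketch of the lurking-variable idea: pool abundant data from a baseline material/machine with a few runs from the new one, and let an indicator column absorb the systematic domain difference. BayesianRidge and the synthetic numbers below stand in for the dissertation's full Bayesian transfer models.

```python
# Transfer via a lurking variable: the indicator column's coefficient models
# the material/machine difference, so the large source dataset still informs
# predictions in the sparsely sampled target domain.
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(4)
# Source domain: plenty of (process setting -> distortion) data
x_src = rng.uniform(0, 1, 200)
y_src = 2.0 * x_src + 0.5 + rng.normal(0, 0.05, 200)
# Target domain: only a handful of runs, shifted by an unknown lurking effect
x_tgt = rng.uniform(0, 1, 8)
y_tgt = 2.0 * x_tgt + 0.5 + 0.3 + rng.normal(0, 0.05, 8)

# Design matrix: [setting, domain indicator]
X = np.column_stack([np.concatenate([x_src, x_tgt]),
                     np.concatenate([np.zeros(200), np.ones(8)])])
y = np.concatenate([y_src, y_tgt])

model = BayesianRidge().fit(X, y)
print("estimated lurking shift:", model.coef_[1])        # ~0.3
# Prediction (with uncertainty) for a new target-domain setting:
mean, std = model.predict([[0.5, 1.0]], return_std=True)
print(f"target prediction at x=0.5: {mean[0]:.3f} +/- {std[0]:.3f}")
```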

49. Radar and LiDAR Fusion for Scaled Vehicle Sensing. Beale, Gregory Thomas. 02 April 2021.
Scaled test-beds (STBs) are popular tools to develop and physically test algorithms for advanced driving systems, but often lack automotive-grade radars in their sensor suites. To overcome resolution issues when using a radar at small scale, a high-level sensor fusion approach between the radar and an automotive-grade LiDAR was proposed. The sensor fusion approach was expected to leverage the higher spatial resolution of the LiDAR effectively. First, multi-object radar tracking software (RTS) was developed to track a maneuvering full-scale vehicle using an extended Kalman filter (EKF) and joint probabilistic data association (JPDA). Second, a 1/5th-scaled vehicle performed the same vehicle maneuvers but scaled to approximately 1/5th the distance and speed. When taking the scaling factor into consideration, the RTS's positional error at small scale was, on average, over 5 times higher than in the full-scale trials. Third, LiDAR object sensor tracks were generated for the small-scale trials using a Velodyne PUCK LiDAR, a simplified point cloud clustering algorithm, and a second EKF implementation. Lastly, the radar sensor tracks and LiDAR sensor tracks served as inputs to a high-level track-to-track fuser for the small-scale trials. The fusion software used a third EKF implementation to track fused objects between both sensors and demonstrated a 30% increase in positional accuracy for a majority of the small-scale trials when compared to using just the radar or just the LiDAR to track the vehicle. The proposed track fuser could be used to increase the accuracy of RTS algorithms when operating at small scale and allow STBs to better incorporate automotive radars into their sensor suites. / Master of Science / Research and development platforms, often supported by robust prototypes, are essential for the development, testing, and validation of automated driving functions. Thousands of hours of safety and performance benchmarks must be met before any advanced driver assistance system (ADAS) is considered production-ready. However, full-scale testbeds are expensive to build, labor-intensive to design, and present inherent safety risks while testing. Scaled prototypes, developed to model system design and vehicle behavior in targeted driving scenarios, can minimize these risks and expenses. Scaled testbeds, more specifically, can improve the ease of safety testing future ADAS systems and help visualize test results and system limitations, better than software simulations, to audiences with varying technical backgrounds. However, these testbeds are not without limitation. Although small-scale vehicles may accommodate on-board systems similar to their full-scale counterparts, as the vehicle scales down, the resolution of its perception sensors decreases, especially that of on-board radars. With many automated driving functions relying on radar object detection, the scaled vehicle must host radar sensors that function appropriately at scale to support accurate vehicle and system behavior. However, traditional radar technology is known to have limitations when operating in small-scale environments. Sensor fusion, which is the process of merging data from multiple sensors, may offer a potential solution to this issue. Consequently, a sensor fusion approach is presented that augments the angular resolution of radar data in a scaled environment with a commercially available Light Detection and Ranging (LiDAR) system.
With this approach, object tracking software designed to operate in full-scaled vehicles with radars can operate more accurately when used in a scaled environment. Using this improvement, small-scale system tests could confidently and quickly be used to identify safety concerns in ADAS functions, leading to a faster and safer product development cycle.
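
The thesis's fuser is an EKF-based track-to-track design; a common, hedged stand-in for fusing two track estimates whose cross-correlation is unknown is covariance intersection (CI), sketched below with toy numbers. CI is shown purely to illustrate the track-to-track fusion step, not as the thesis's method.

```python
# Covariance intersection: fuse two (mean, covariance) track estimates with
# unknown correlation, e.g. a radar track and a LiDAR track of the same vehicle.
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x1, P1, x2, P2):
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    # Choose the mixing weight that minimizes the fused covariance determinant
    obj = lambda w: np.linalg.det(np.linalg.inv(w * P1i + (1 - w) * P2i))
    w = minimize_scalar(obj, bounds=(0.0, 1.0), method="bounded").x
    P = np.linalg.inv(w * P1i + (1 - w) * P2i)
    x = P @ (w * P1i @ x1 + (1 - w) * P2i @ x2)
    return x, P

# Radar track vs. LiDAR track for the same scaled vehicle (toy numbers):
x_radar, P_radar = np.array([2.1, 0.9]), np.diag([0.25, 0.25])
x_lidar, P_lidar = np.array([2.0, 1.0]), np.diag([0.04, 0.04])
x_f, P_f = covariance_intersection(x_radar, P_radar, x_lidar, P_lidar)
print("fused position:", x_f, "\nfused covariance:\n", P_f)
```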

50. A Deep-learning based Approach for Foot Placement Prediction. Lee, Sung-Wook. 24 May 2023.
Foot placement prediction can be important for exoskeleton and prosthesis controllers, human-robot interaction, or body-worn systems to prevent slips or trips. Previous studies investigating foot placement prediction have been limited to predicting foot placement during the swing phase, and do not fully consider contextual information such as the preceding step or the stance phase before push-off. In this study, a deep learning-based foot placement prediction approach is proposed, in which the deep learning models are designed to sequentially process data from three IMU sensors mounted on the pelvis and feet. The raw sensor data are pre-processed to generate multi-variable time-series data for training two deep learning models: the first model estimates the gait progression, and the second subsequently predicts the next foot placement. Ground-truth gait phase and foot placement data were acquired from a motion capture system. Ten healthy subjects were invited to walk naturally at different speeds on a treadmill. In cross-subject learning, the trained models had a mean distance error of 5.93 cm for foot placement prediction. In single-subject learning, prediction accuracy improved with additional training data, and a mean distance error of 2.60 cm was achieved by fine-tuning the cross-subject validated models with the target subject's data. Even at 25-81% of the gait cycle, mean distance errors were only 6.99 cm and 3.22 cm for cross-subject learning and single-subject learning, respectively. / Master of Science / This study proposes a new approach for predicting where a person's foot will land during walking, which could be useful in controlling robots and wearable devices that work with humans to prevent events such as slips and falls and allow smoother human-robot interactions. Although foot placement prediction has great potential in various domains, current work in this area is limited in terms of practicality and accuracy. The proposed approach uses data from inertial sensors attached to the pelvis and feet, and two deep learning models are trained to estimate the person's walking pattern and predict the next foot placement. The approach was tested on ten healthy individuals walking at different speeds on a treadmill, and achieved state-of-the-art results. The results suggest that this approach could be a promising method when sufficient data from multiple people are available.
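
A minimal PyTorch sketch of the two-stage design described above: model 1 maps an IMU window to gait progression, whose output is appended to the features for model 2, which predicts the next foot placement. The layer sizes, window length, and channel count (3 IMUs x 6 axes) are assumptions, not the thesis's architecture.

```python
# Two sequential models: IMU window -> gait phase -> next foot placement.
import torch
import torch.nn as nn

WINDOW, CHANNELS = 100, 18           # 100 samples from 3 IMUs (acc + gyro), assumed

gait_model = nn.Sequential(          # stage 1: gait-progression estimator
    nn.Flatten(), nn.Linear(WINDOW * CHANNELS, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid())  # phase as fraction of gait cycle, 0..1

placement_model = nn.Sequential(     # stage 2: foot-placement predictor
    nn.Linear(WINDOW * CHANNELS + 1, 64), nn.ReLU(),
    nn.Linear(64, 2))                # (x, y) of next foot placement [m]

imu_window = torch.randn(1, WINDOW, CHANNELS)   # one pre-processed window
phase = gait_model(imu_window)                  # stage 1 output
features = torch.cat([imu_window.flatten(1), phase], dim=1)
xy = placement_model(features)                  # stage 2 prediction
print("estimated phase:", phase.item(), "predicted placement:", xy.detach().numpy())
```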