111

Head impact detection with sensor fusion and machine learning

Strandberg, Aron January 2022 (has links)
Head injury is common in many different sports and elsewhere, and is often associated with different difficulties. One major problem is to identify the injury and assess its severity. Sometimes there is no sign of head injury, but a serious neck distortion has occurred, causing symptoms similar to head injuries, e.g. concussion or mild TBI (traumatic brain injury). This study investigated whether direct and indirect measurements of head kinematics, combined with machine learning and 3D visualization, can be used to identify head injury and assess its severity. Injury statistics show that many severe head injuries are caused by oblique impacts. An oblique impact gives rise to both linear and rotational kinematics. Since the human brain is very sensitive to rotational kinematics, violent rotations of the head can result in large shear strains in the brain. This is when white matter and white matter connections in the brain are disrupted by acceleration and deceleration, or by rotational acceleration, which in turn causes traumatic brain injuries such as diffuse axonal injury (DAI). Lately there have been many studies in this field using different types of new technologies, but the most prevalent trend is the rise of wearable sensors that have become smaller, faster and more energy efficient, integrated into mouthguards and into inertial measurement units (IMUs) the size of a SIM card that measure and report a body's specific force. It has been shown that a 6-axis IMU (3-axis rotational and 3-axis acceleration measurements) may improve head injury prediction, but more data is needed for confirmation against existing head injury criteria, and new criteria that consider directional sensitivity need to be developed. Today, IMUs are typically used in self-driving cars, aircraft, spacecraft, satellites, etc. More and more studies evaluating and utilizing IMUs in new, uncharted fields have shown promise, especially in sports and in the neuroscience and medical fields. This study proposed a method to visualize head kinematics in 3D during the event of a possible head injury, so that medical professionals can indirectly identify the injury and assess its severity, as well as a direct method to identify head injury and assess its severity with machine learning. An error-prone data collection process of reconstructed head impacts and non-head impacts was carried out using an open-source 9-axis IMU sensor and a proprietary 6-axis IMU sensor. To assess head injury severity, existing head injury criteria such as the Abbreviated Injury Scale (AIS), Head Injury Criterion (HIC), Head Impact Power (HIP), Severity Index (SI) and Generalized Acceleration Model for Brain Injury Threshold (GAMBIT) have been introduced. To detect head impacts, including their severity, and non-head impacts, a Random Forests (RF) classifier and Support Vector Machine (SVM) classifiers with linear and radial basis function kernels have been proposed; the prediction results have been promising.
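For orientation, the sketch below shows how one of the criteria named above, the Head Injury Criterion (HIC), can be computed from a resultant linear-acceleration trace. It is a minimal Python illustration: the 15 ms window is the common HIC15 convention, and the sampling and synthetic half-sine pulse are assumptions for the example, not data or code from the thesis.

```python
import numpy as np

def hic(t, a_g, max_window=0.015):
    """Head Injury Criterion from acceleration a_g (in g) sampled at times t (s).

    HIC = max over windows [t1, t2] with t2 - t1 <= max_window of
          (t2 - t1) * ( (1 / (t2 - t1)) * integral of a dt ) ** 2.5
    """
    # Cumulative trapezoidal integral of acceleration, for fast window sums.
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (a_g[1:] + a_g[:-1]) * np.diff(t))))
    best = 0.0
    for i in range(len(t) - 1):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            avg_a = max((cum[j] - cum[i]) / dt, 0.0)
            best = max(best, dt * avg_a ** 2.5)
    return best

if __name__ == "__main__":
    # Synthetic half-sine impact pulse: 80 g peak, 10 ms duration (illustrative only).
    t = np.linspace(0.0, 0.02, 2001)
    a = np.where(t <= 0.010, 80.0 * np.sin(np.pi * t / 0.010), 0.0)
    print(f"HIC15 = {hic(t, a):.1f}")
```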
112

Grid-Based Multi-Sensor Fusion for On-Road Obstacle Detection: Application to Autonomous Driving / Rutnätsbaserad multisensorfusion för detektering av hinder på vägen: tillämpning på självkörande bilar

Gálvez del Postigo Fernández, Carlos January 2015 (has links)
Self-driving cars have recently become a challenging research topic, with the aim of making transportation safer and more efficient. Current advanced driving assistance systems (ADAS) allow cars to drive autonomously by following lane markings, identifying road signs and detecting pedestrians and other vehicles. In this thesis work we improve the robustness of autonomous cars by designing an on-road obstacle detection system. The proposed solution consists of the low-level fusion of radar and lidar through the occupancy grid framework. Two inference theories are implemented and evaluated: Bayesian probability theory and Dempster-Shafer theory of evidence. Obstacle detection is performed through image processing of the occupancy grid. Finally, the additional features of Dempster-Shafer theory are leveraged by proposing a sensor performance estimation module and performing advanced conflict management. The work has been carried out at Volvo Car Corporation, where real experiments on a test vehicle have been performed under different environmental conditions and with different types of objects. The system has been evaluated according to the quality of the resulting occupancy grids, the detection rate, and the information content in terms of entropy. The results show a significant improvement of the detection rate over single-sensor approaches. Furthermore, the Dempster-Shafer implementation may slightly outperform the Bayesian one when there is conflicting information, although the high computational cost limits its practical application. Finally, we demonstrate that the proposed solution is easily scalable to include additional sensors.
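As a minimal sketch of the Bayesian branch of such an occupancy grid framework, the code below keeps a log-odds occupancy value per cell and fuses independent radar and lidar evidence by addition. The grid size, inverse-sensor probabilities and cell lists are illustrative assumptions, not the thesis' implementation.

```python
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

class OccupancyGrid:
    """2D occupancy grid fused in log-odds form (Bayesian branch only)."""

    def __init__(self, shape=(100, 100)):
        self.L = np.zeros(shape)          # log-odds; 0 means p = 0.5 (unknown)

    def update(self, cells, p_hit):
        """Fuse one sensor's inverse-model probability for the given cells.
        Under the cell-independence assumption, fusion is additive in log-odds."""
        for (i, j) in cells:
            self.L[i, j] += logodds(p_hit)

    def probability(self):
        return 1.0 - 1.0 / (1.0 + np.exp(self.L))

grid = OccupancyGrid()
# Hypothetical detections: radar and lidar both report an obstacle near cell (50, 42).
grid.update([(50, 42), (50, 43)], p_hit=0.7)   # radar: less certain
grid.update([(50, 42)], p_hit=0.9)             # lidar: more certain
print(grid.probability()[50, 40:45].round(2))
```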
113

NOVEL ENTROPY FUNCTION BASED MULTI-SENSOR FUSION IN SPACE AND TIME DOMAIN: APPLICATION IN AUTONOMOUS AGRICULTURAL ROBOT

Md Nazmuzzaman Khan (10581479) 07 May 2021 (has links)
How can we transform an agricultural vehicle into an autonomous weeding robot? A robot that can run autonomously through a vegetable field, classify multiple types of weeds from a real-time video feed and then spray specific herbicides based on the previously classified weeds. In this research, we answer some of the theoretical and practical challenges regarding the transformation of an agricultural vehicle into an autonomous weeding robot.

First, we propose a solution for real-time crop row detection during autonomous navigation of an agricultural vehicle, using domain knowledge and an unsupervised machine learning based approach. We apply a projective transformation to map the camera image plane to an image plane directly above the crop rows, so that parallel crop rows remain parallel. Then we use color-based segmentation to differentiate crop and weed pixels from the background. We use the hierarchical density-based spatial clustering of applications with noise (HDBSCAN) algorithm to differentiate between the crop row clusters and weed clusters.

Finally, we use random sample consensus (RANSAC) for robust line fitting through the detected crop row clusters. We test our algorithm against four well-established crop row detection methods in terms of processing time and accuracy. Our proposed method, Clustering Algorithm based RObust LIne Fitting (CAROLIF), shows significantly better accuracy than three of the other methods, with an average intersection over union (IoU) value of 73%. We also test our algorithm on a video taken from an agricultural vehicle in a corn field in Indiana. CAROLIF shows promising results under lighting variation, vibration and unusual crop-weed growth.

Then we propose a robust weed classification system based on a convolutional neural network (CNN) and a novel decision-level, evidence-based multi-sensor fusion algorithm. We create a small dataset of three different weeds (Giant ragweed, Pigweed and Cocklebur) commonly found in corn fields. We train three different CNN architectures on our dataset. Based on classification accuracy and inference time, we choose the VGG16 architecture with transfer learning for real-time weed classification.

To create a robust and stable weed classification pipeline, a multi-sensor fusion algorithm based on Dempster-Shafer (DS) evidence theory with a novel entropy function is proposed. The proposed entropy function is inspired by Shannon and Deng entropy but captures uncertainty better in certain scenarios under the DS framework. Our proposed algorithm has two advantages over other sensor fusion algorithms. First, it can be applied in both the space and time domains to fuse results from multiple sensors and produce more robust results. Second, it can detect which sensor in the sensor array is faulty and compensate for it by assigning it a lower weight in real time. Our proposed algorithm calculates the evidence distance between sensors and determines whether one sensor agrees or disagrees with another. It then rewards the sensors that agree with one another according to their information quality, which is calculated using our novel entropy function. The proposed algorithm can combine highly conflicting evidence from multiple sensors and overcomes the limitations of the original DS combination rule. Tested on real and simulated data, it shows a better convergence rate, anti-disturbance ability and transition property than other methods in the open literature.

Finally, we present a fuzzy-logic based approach to measure the confidence of a detected object's bounding-box (BB) position from a CNN detector. The CNN detector gives us the position of the BB, with a percentage accuracy for the object inside the BB, on each image plane. But how do we know that the position of the BB is correct? When we detect an object using multiple cameras, the position of the BB on each camera image plane may appear in different places depending on the detection accuracy and the positions of the cameras, yet in 3D space the object is at the exact same position for both cameras. We use this relation between the camera image planes to create a fuzzy-fusion system that calculates a confidence value for the detection. Based on the fuzzy rules and the accuracy of the BB position, this system gives us confidence values at three levels ('Low', 'OK' and 'High'). The proposed system succeeds in giving correct confidence scores for scenarios where objects are correctly detected, partially detected and incorrectly detected.
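For reference, the core operation such evidence-based fusion builds on is Dempster's rule of combination. The sketch below is a generic implementation with made-up camera mass functions over the three weed classes; it does not reproduce the dissertation's entropy-weighted algorithm.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions over frozenset focal elements (Dempster's rule).
    Mass assigned to empty intersections (conflict) is discarded and the rest
    renormalized."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("Total conflict: Dempster's rule is undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical evidence from two cameras over {ragweed, pigweed, cocklebur}.
m_cam1 = {frozenset({"ragweed"}): 0.7,
          frozenset({"ragweed", "pigweed"}): 0.2,
          frozenset({"ragweed", "pigweed", "cocklebur"}): 0.1}
m_cam2 = {frozenset({"ragweed"}): 0.6,
          frozenset({"pigweed"}): 0.3,
          frozenset({"ragweed", "pigweed", "cocklebur"}): 0.1}
print(dempster_combine(m_cam1, m_cam2))
```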
114

Smart shoe gait analysis and diagnosis: designing and prototyping of hardware and software

Peddinti, Seshasai Vamsi Krishna January 2018 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Gait analysis plays a major role in the treatment of osteoarthritis, knee or hip replacements, and musculoskeletal diseases. It is extensively used for injury rehabilitation and physical therapy for issues like hemiplegia and diplegia. It also provides the information needed to detect various improper gaits, such as Parkinsonian, hemiplegic and diplegic gaits. Though there are many wearable and non-wearable methods to detect improper gait performance, they are usually not user friendly and have restrictions. Most existing devices and systems can detect the gait but are very limited in diagnosing it. The proposed method uses two A201 force sensing resistors, an accelerometer, and a gyroscope to detect the gait and send diagnostic information on the possibility of the specified improper gaits via a Bluetooth wireless communication system to the user's hand-held device or desktop. The data received from the sensors was analyzed by the custom-made micro-controller and sent to the desktop or mobile device via the Bluetooth module. The peak pressure values during a gait cycle were recorded and used to indicate whether a person's walk cycle is normal or has any abnormality. Future work: a magnetometer can be added to get more accurate results; more improper gaits can be detected by using two PCBs, one under each foot; and data can be sent to the cloud and saved for future comparisons.
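As a rough illustration of the peak-pressure logic described above, the sketch below extracts per-step peaks from a force-sensing-resistor trace and flags left/right asymmetry. The threshold, units and asymmetry rule are assumptions for the example, not the prototype's firmware.

```python
import numpy as np

def step_peaks(fsr, threshold=200.0):
    """Return the peak reading of each stance phase in a force-sensing-resistor
    trace, where a stance phase is a contiguous run above `threshold` (ADC counts)."""
    peaks, current = [], None
    for value in fsr:
        if value > threshold:
            current = value if current is None else max(current, value)
        elif current is not None:
            peaks.append(current)
            current = None
    if current is not None:
        peaks.append(current)
    return peaks

def asymmetry_flag(left_peaks, right_peaks, tolerance=0.25):
    """Flag a possible gait abnormality if mean peak pressure differs between feet
    by more than `tolerance` (hypothetical rule for illustration only)."""
    l, r = np.mean(left_peaks), np.mean(right_peaks)
    return abs(l - r) / max(l, r) > tolerance

left = [0, 50, 320, 510, 480, 60, 0, 40, 300, 530, 20]
right = [0, 40, 210, 260, 240, 30, 0, 30, 220, 250, 10]
print(step_peaks(left), step_peaks(right))
print("Possible abnormality:", asymmetry_flag(step_peaks(left), step_peaks(right)))
```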
115

Data Acquisition and Processing Pipeline for E-Scooter Tracking Using 3d Lidar and Multi-Camera Setup

Betrabet, Siddhant S. 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Analyzing the behavior of objects on the road is a complex task that requires data from various sensors and their fusion to recreate the movement of objects with a high degree of accuracy. A data collection and processing system is thus needed to track the objects accurately in order to build an accurate and clear map of object trajectories relative to the coordinate frames of interest in the map. Detection and tracking of moving objects (DATMO) and simultaneous localization and mapping (SLAM) are tasks that need to be solved in conjunction to create a clear map of the road comprising the moving and static objects. These computational problems are commonly solved and used to aid scenario reconstruction for the objects of interest. The tracking of objects can be done in various ways, utilizing sensors such as monocular or stereo cameras, Light Detection and Ranging (LIDAR) sensors, and inertial navigation systems (INS). One relatively common method for solving DATMO and SLAM utilizes a 3D LIDAR and multiple monocular cameras in conjunction with an inertial measurement unit (IMU), which provides redundancy to maintain object classification and tracking with the help of sensor fusion when sensor-specific traditional algorithms prove ineffectual because either sensor falls short due to its limitations. The use of the IMU and sensor fusion methods largely eliminates the need for an expensive INS rig. Fusing these sensors allows for more effective tracking that utilizes the maximum potential of each sensor while increasing perceptual accuracy. The focus of this thesis is the dock-less e-scooter, and the primary goal is to track its movements effectively and accurately with respect to cars on the road and the world. Since it is relatively more common to observe a car on the road than an e-scooter, we propose a data collection system that can be built on top of an e-scooter, together with an offline processing pipeline, to collect data in order to understand the behavior of e-scooters themselves. In this thesis, we explore a data collection system involving a 3D LIDAR sensor, multiple monocular cameras and an IMU mounted on an e-scooter, as well as an offline method for processing the collected data to aid scenario reconstruction.
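A central step in fusing a 3D LIDAR with monocular cameras is projecting LIDAR points into a camera image so detections can be associated across sensors. The sketch below shows that projection under an assumed pinhole model; the extrinsic transform and intrinsic matrix are placeholders, not the thesis' calibration.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project Nx3 LIDAR points (meters) into pixel coordinates of a pinhole camera.

    T_cam_lidar : 4x4 homogeneous transform taking LIDAR-frame points to camera frame.
    K           : 3x3 camera intrinsic matrix.
    Returns (pixels, mask) where mask selects points in front of the camera.
    """
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    mask = pts_cam[:, 2] > 0.1                       # keep points in front of the camera
    uvw = (K @ pts_cam[mask].T).T
    pixels = uvw[:, :2] / uvw[:, 2:3]
    return pixels, mask

# Placeholder calibration: camera slightly above and behind the LIDAR origin.
T = np.eye(4)
T[:3, 3] = [0.0, -1.2, -0.3]
K = np.array([[900.0, 0.0, 640.0],
              [0.0, 900.0, 360.0],
              [0.0, 0.0, 1.0]])
points = np.array([[2.0, 0.0, 5.0], [-1.0, 0.5, 8.0], [0.0, 0.0, -2.0]])
px, valid = project_lidar_to_image(points, T, K)
print(px.round(1), valid)
```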
116

Novel Capacitive Sensors for Chemical and Physical Monitoring in Microfluidic Devices

Rajan, Parthiban 12 June 2019 (has links)
No description available.
117

Automotive sensor fusion systems for traffic aware adaptive cruise control

Gandy, Jonah T. 13 May 2022 (has links) (PDF)
The autonomous driving (AD) industry is advancing at a rapid pace. New sensing technologies for tracking vehicles, controlling vehicle behavior, and communicating with infrastructure are being added to commercial vehicles. These new automotive technologies reduce on-road fatalities, improve ride quality, and improve vehicle fuel economy. This research explores two types of automotive sensor fusion systems: a novel radar/camera sensor fusion system using a long short-term memory (LSTM) neural network (NN) to perform data fusion and improve tracking capabilities in a simulated environment, and a traditional radar/camera sensor fusion system deployed in Mississippi State's entry in the EcoCAR Mobility Challenge (a 2019 Chevrolet Blazer) for an adaptive cruise control (ACC) system that functions in on-road applications. Along with tracking vehicles, pedestrians, and cyclists, the sensor fusion system deployed in the 2019 Chevrolet Blazer uses vehicle-to-everything (V2X) communication to communicate with infrastructure such as traffic lights to optimize and autonomously control vehicle acceleration through a connected corridor.
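As a minimal sketch of what an LSTM-based radar/camera fusion network can look like, the PyTorch model below concatenates per-frame radar and camera track features and regresses a fused object state. The feature dimensions, sequence length and output state are assumptions for illustration; this is not the EcoCAR system's network.

```python
import torch
import torch.nn as nn

class RadarCameraLSTMFusion(nn.Module):
    """Fuse per-frame radar and camera track features with an LSTM and regress a
    fused object state (e.g., range and bearing)."""

    def __init__(self, radar_dim=4, camera_dim=6, hidden=64, state_dim=2):
        super().__init__()
        self.lstm = nn.LSTM(radar_dim + camera_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, radar_seq, camera_seq):
        # radar_seq: (batch, time, radar_dim), camera_seq: (batch, time, camera_dim)
        fused_in = torch.cat([radar_seq, camera_seq], dim=-1)
        out, _ = self.lstm(fused_in)
        return self.head(out[:, -1])          # state estimate at the last time step

model = RadarCameraLSTMFusion()
radar = torch.randn(8, 20, 4)      # 8 tracks, 20 time steps of radar features
camera = torch.randn(8, 20, 6)     # matching camera detection features
print(model(radar, camera).shape)  # -> torch.Size([8, 2])
```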
118

Quadcopter stabilization based on IMU and Monocamera Fusion

Pérez Rodríguez, Arturo January 2023 (has links)
Unmanned aerial vehicles (UAVs), commonly known as drones, have revolutionized numerous fields ranging from aerial photography to surveillance and logistics. Achieving stable flight is essential for their successful operation, ensuring accurate data acquisition, reliable manoeuvring, and safe operation. This thesis explores the feasibility of employing a frontal mono camera and sensor fusion techniques to enhance drone stability during flight. The objective of this research is to investigate whether a frontal mono camera, combined with sensor fusion algorithms, can be used to effectively stabilize a drone in various flight scenarios. By leveraging machine vision techniques and integrating data from onboard gyroscopes, the proposed approach aims to provide real-time feedback for controlling the drone. The methodology for this study involves the Crazyflie 2.1 drone platform equipped with a frontal camera and an Inertial Measurement Unit (IMU). The drone’s flight data, including position, orientation, and velocity, is continuously monitored and analyzed using Kalman Filter (KF). This algorithm processes the data from the camera and the IMU to estimate the drone’s state accurately. Based on these estimates, corrective commands are generated and sent to the drone’s control system to maintain stability. To evaluate the effectiveness of the proposed system, a series of flight tests are conducted under different environmental conditions and flight manoeuvres. Performance metrics such as drift, level of oscillations, and overall flight stability are analyzed and compared against baseline experiments with conventional stabilization methods. Additional simulated tests are carried out to study the effect of the communication delay. The expected outcomes of this research will contribute to the advancement of drone stability systems. If successful, the implementation of a frontal camera and sensor fusion can provide a cost-effective and lightweight solution for stabilizing drones.
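A minimal constant-velocity Kalman filter illustrating the kind of camera/IMU fusion step described above: IMU acceleration drives the prediction at a high rate, and camera position fixes arrive at a lower rate. The state layout, noise levels and update rates are illustrative assumptions, not the thesis' Crazyflie implementation.

```python
import numpy as np

class SimpleKF:
    """1D constant-velocity Kalman filter: state x = [position, velocity].
    IMU acceleration drives the prediction; the camera provides position fixes."""

    def __init__(self, dt=0.01):
        self.dt = dt
        self.x = np.zeros(2)
        self.P = np.eye(2)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])
        self.B = np.array([0.5 * dt * dt, dt])
        self.Q = np.diag([1e-4, 1e-3])         # process noise (assumed)
        self.H = np.array([[1.0, 0.0]])
        self.R = np.array([[0.02 ** 2]])       # camera position noise (assumed)

    def predict(self, accel):
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, cam_position):
        y = cam_position - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P

kf = SimpleKF()
for k in range(200):                          # 2 s: IMU prediction at 100 Hz
    kf.predict(accel=0.1)                     # hypothetical IMU acceleration
    if k % 10 == 0:                           # camera position fix at 10 Hz
        true_pos = 0.5 * 0.1 * ((k + 1) * kf.dt) ** 2
        kf.update(np.array([true_pos]))
print(kf.x.round(3))                          # fused [position, velocity] estimate
```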
119

A Vision-Based Relative Navigation Approach for Autonomous Multirotor Aircraft

Leishman, Robert C. 29 April 2013 (has links) (PDF)
Autonomous flight in unstructured, confined, and unknown GPS-denied environments is a challenging problem. Solutions could be tremendously beneficial for scenarios that require information about areas that are difficult to access and that present a great amount of risk. The goal of this research is to develop a new framework that enables improved solutions to this problem and to validate the approach with experiments using a hardware prototype. In Chapter 2 we examine the consequences and practical aspects of using an improved dynamic model for multirotor state estimation, using only IMU measurements. The improved model correctly explains the measurements available from the accelerometers on a multirotor. We provide hardware results demonstrating the improved attitude, velocity and even position estimates that can be achieved through the use of this model. We propose a new architecture to simplify some of the challenges that constrain GPS-denied aerial flight in Chapter 3. At the core, the approach combines visual graph-SLAM with a multiplicative extended Kalman filter (MEKF). More importantly, we depart from the common practice of estimating global states and instead keep the position and yaw states of the MEKF relative to the current node in the map. This relative navigation approach provides a tremendous benefit compared to maintaining estimates with respect to a single global coordinate frame. We discuss the architecture of this new system and provide important details for each component. We verify the approach with goal-directed autonomous flight-test results. The MEKF is the basis of the new relative navigation approach and is detailed in Chapter 4. We derive the relative filter and show how the states must be augmented and marginalized each time a new node is declared. The relative estimation approach is verified using hardware flight test results accompanied by comparisons to motion capture truth. Additionally, flight results with estimates in the control loop are provided. We believe that the relative, vision-based framework described in this work is an important step in furthering the capabilities of indoor aerial navigation in confined, unknown environments. Current approaches incur challenging problems by requiring globally referenced states. Utilizing a relative approach allows more flexibility as the critical, real-time processes of localization and control do not depend on computationally-demanding optimization and loop-closure processes.
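The key idea of keeping states relative to the current map node can be illustrated with a small sketch: the real-time filter only ever works with the latest relative pose, and a global pose is recovered offline by composing the stored node-to-node edges. The planar poses and edge values below are made-up examples, not the thesis' data or filter.

```python
import numpy as np

def compose(pose_a, pose_b):
    """Compose two planar relative poses (x, y, yaw): express pose_b through pose_a."""
    xa, ya, tha = pose_a
    xb, yb, thb = pose_b
    c, s = np.cos(tha), np.sin(tha)
    return (xa + c * xb - s * yb, ya + s * xb + c * yb, tha + thb)

# Hypothetical relative edges between consecutive map nodes (meters, radians),
# e.g. produced each time the filter declares a new node and resets its states.
edges = [(2.0, 0.0, 0.1), (1.5, 0.2, 0.3), (1.0, -0.1, 0.0)]

# A global pose is only needed offline or for visualization: fold the edges together.
global_pose = (0.0, 0.0, 0.0)
for edge in edges:
    global_pose = compose(global_pose, edge)
print(tuple(round(float(v), 3) for v in global_pose))
```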
120

Visual-Inertial Odometry for Autonomous Ground Vehicles

Burusa, Akshay Kumar January 2017 (has links)
Monocular cameras are prominently used for estimating the motion of unmanned aerial vehicles. With growing interest in autonomous vehicle technology, the use of monocular cameras in ground vehicles is on the rise. This is especially favorable for localization in situations where the Global Navigation Satellite System (GNSS) is unreliable, such as open-pit mining environments. However, most monocular camera based approaches suffer from ambiguous scale information. Ground vehicles impose a greater difficulty due to high speeds and fast movements. This thesis aims to estimate the scale of monocular vision data by using an inertial sensor in addition to the camera. It is shown that the simultaneous estimation of pose and scale in autonomous ground vehicles is possible through the fusion of visual and inertial sensors in an Extended Kalman Filter (EKF) framework. However, the convergence of the scale is sensitive to several factors, including the initialization error. An accurate estimate of the scale allows an accurate estimate of the pose. This facilitates the localization of ground vehicles in the absence of GNSS, providing a reliable fall-back option.
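A minimal EKF sketch of the scale-estimation idea: the state carries an explicit visual-scale variable, the IMU drives a metric prediction, and the monocular measurement is modeled as the true position multiplied by the unknown scale. The dimensions, noise levels and simulated trajectory are assumptions for illustration, not the thesis' setup.

```python
import numpy as np

# State x = [position p, velocity v, visual scale s]; the camera measures s * p.
dt = 0.05
x = np.array([0.0, 0.0, 0.5])                 # scale deliberately initialized wrongly
P = np.diag([1e-4, 1e-4, 1.0])
Q = np.diag([1e-6, 1e-5, 1e-8])
R = np.array([[1e-3]])

true_p, true_v, true_s = 0.0, 0.0, 1.7        # ground truth used only to simulate data
rng = np.random.default_rng(0)

for k in range(400):
    a = 1.0 if k < 200 else 0.0               # metric IMU acceleration (assumed noise-free)
    # Propagate the true trajectory and the filter with the same IMU input.
    true_p += true_v * dt + 0.5 * a * dt ** 2
    true_v += a * dt
    F = np.array([[1.0, dt, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    x = F @ x + np.array([0.5 * a * dt ** 2, a * dt, 0.0])
    P = F @ P @ F.T + Q

    # Camera update: measurement model h(x) = s * p, Jacobian H = [s, 0, p].
    z = np.array([true_s * true_p + rng.normal(0.0, 0.03)])
    H = np.array([[x[2], 0.0, x[0]]])
    y = z - np.array([x[2] * x[0]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(3) - K @ H) @ P

print("estimated scale:", round(float(x[2]), 3), "true scale:", true_s)
```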
