About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Wireless Sensing and Fusion using Deep Neural Networks

Yu, Jianyuan 20 September 2022 (has links)
Deep Neural Networks (DNNs) have been proposed to solve many difficult problems within the context of wireless sensing. Indoor localization and human activity recognition (HAR) are two major applications of wireless sensing. However, current fingerprint-based localization methods require massive amounts of labeled data and suffer severe performance degradation in non-line-of-sight (NLOS) environments. To address this challenge, we first apply DNNs to multi-modal wireless signals, including Wi-Fi, an inertial measurement unit (IMU), and ultra-wideband (UWB). By formulating localization as a multi-modal sequence regression problem, a multi-stream recurrent fusion method is developed to combine the current hidden state of each modality. This is done in the context of recurrent neural networks while accounting for the modality uncertainty directly learned from its immediate past states. The proposed method was evaluated on a large-scale open dataset and compared with a wide range of baseline methods. It is shown that the proposed approach has an average error below 20 centimeters, which is nearly three times better than classic methods. Second, in the context of activity recognition, we propose a multi-band WiFi fusion framework that hierarchically combines the features of sub-6 GHz channel state information (CSI) and the beam signal-to-noise ratio (SNR) at 60 GHz at different granularity levels. Specifically, we introduce three fusion methods: simple input fusion, feature fusion, and a more customized feature permutation that accounts for the granularity correspondence between the CSI and beam SNR measurements for task-specific sensing. To mitigate the problem of limited labeled training data, we further propose an autoencoder-based unsupervised fusion network consisting of separate encoders and decoders for the CSI and beam SNR.
The effectiveness of the framework is thoroughly validated using an in-house experimental platform which includes indoor localization, pose recognition, and occupancy sensing. Finally, in the context of array processing, we solve the model order estimation (MOE) problem, a prerequisite for Direction of Arrival (DoA) estimation in the presence of correlated multipath, a well-known difficult problem. Due to the limits imposed by array geometry, it is not possible to estimate spatial parameters for an arbitrary number of sources; an estimate of the signal model is required. While classic methods fail at MOE in the presence of correlated multipath interference, we show that data-driven supervised learning models can meet this challenge. In particular, we propose the application of Residual Neural Networks (ResNets) with grouped symmetric kernel filters to provide an accuracy over 95%, and a weighted loss function to eliminate the underestimation error of the model order. The improved MOE is shown to improve subsequent array processing tasks, such as reducing the overhead needed for temporal smoothing, reducing the search space for signal association, and improving DoA estimation. / Doctor of Philosophy / Radio Frequency (RF) signals are used not only for wireless communication (their most well-known application), but are also commonly used to sense the environment. One specific application, localization and navigation, can require accuracy of 0.5 meters or below, which is a significant challenge indoors. To address this problem, we apply deep learning (a technique that has gained significant attention in recent years) to fuse multiple types of RF signals, including signals and devices commonly used in smartphones (e.g., UWB, WiFi and IMUs). The result is a technique that can achieve 20 cm accuracy in indoor localization applications. In addition to localization, commercial WiFi signals can also be used to sense/determine human activity.
The received signals from a WiFi transmitter contain sensing information about the environment, including geometric information (angles, distance and velocity) about objects. We specifically show that our proposed approach can successfully recognize human pose, whether or not a specific seat is occupied, and a person's location. Moreover, we show that this can be done with relatively little labelled data using a technique known as transfer learning. Finally, we apply another neural network structure to solve a particular problem in multi-antenna processing: model order estimation in the presence of coherent multipath. The resulting system can deliver 95% accuracy in complex environments, greatly improving overall array processing.
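The uncertainty-weighted combination of per-modality hidden states described above can be sketched in a few lines. This is a simplified numpy illustration, not the thesis's trained recurrent network: each modality's hidden state is weighted by the inverse variance of its own recent past states, so a modality whose recent states fluctuate more (higher uncertainty) contributes less to the fused state. The function name and array shapes are illustrative assumptions.

```python
import numpy as np

def uncertainty_weighted_fusion(h_wifi, h_imu, past_wifi, past_imu):
    """Fuse two modality hidden states, weighting each by the inverse
    variance of its recent past states (a proxy for modality uncertainty)."""
    var_wifi = np.var(past_wifi, axis=0).mean() + 1e-6  # avoid divide-by-zero
    var_imu = np.var(past_imu, axis=0).mean() + 1e-6
    w_wifi, w_imu = 1.0 / var_wifi, 1.0 / var_imu       # inverse-variance weights
    return (w_wifi * h_wifi + w_imu * h_imu) / (w_wifi + w_imu)
```

With a stable Wi-Fi stream and a noisy IMU stream, the fused state follows the Wi-Fi hidden state almost exactly; a learned fusion would replace the variance proxy with uncertainty estimated by the network itself.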
2

Improvement of Speckle-Tracked Freehand 3-D Ultrasound Through the Use of Sensor Fusion

Lang, Andrew 20 October 2009 (has links)
Freehand 3-D ultrasound (US) using a 2-D US probe has the advantage over conventional 3-D probes of being able to collect arbitrary 3-D volumes at a lower cost. Traditionally, generating a volume requires external tracking to record the US probe position. An alternative means of tracking the US probe position is through speckle tracking. Ultrasound imaging has the advantage that the speckle inherent in all images contains relative position information due to the decorrelation of speckle over distance. However, tracking the position of US images using speckle information alone suffers from drifts caused by tissue inconsistencies and overall lack of accuracy. This thesis presents two novel methods of improving the accuracy of speckle-tracked 3-D US through the use of sensor fusion. The first method fuses the speckle-tracked US positions with those measured by an electromagnetic (EM) tracker. Measurements are combined using an unscented Kalman filter (UKF). The fusion is able to reduce drift errors as well as to eliminate high-frequency jitter noise from the EM tracker positions. Such fusion produces a smooth and accurate 3-D reconstruction superior to those using the EM tracker alone. The second method involves the registration of speckle-tracked 3-D US volumes to preoperative CT volumes. We regard registration combined with speckle tracking as a form of sensor fusion. In this case, speckle tracking is used in the registration to generate an initial position for each US image. To improve the accuracy of the US-to-CT registration, the US volume is registered to the CT volume by creating individual US "sub-volumes", each consisting of a small section of the entire US volume. The registration proceeds from the beginning of the US volume to the end, registering every sub-volume. The work is validated through spine phantoms created from clinical patient CT data as well as an animal study using a lamb cadaver. 
Using this technique, we are able to successfully register a speckle-tracked US volume to a CT volume with excellent accuracy. As a by-product of accurate registration, any drift from the speckle tracking is eliminated and the freehand 3-D US volume is improved. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2009-10-19 00:10:25.717
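The core fusion idea of the first method, correcting drift-prone speckle-tracked displacements with jittery but drift-free absolute positions from the EM tracker, can be sketched with a one-dimensional Kalman filter. The thesis uses an unscented Kalman filter on the full probe pose; this scalar version with illustrative noise parameters only shows the predict-with-speckle, correct-with-EM structure.

```python
import numpy as np

def fuse_speckle_em(speckle_steps, em_positions, q=0.01, r=0.25):
    """1-D Kalman fusion: speckle-tracked displacements drive the predict
    step (precise locally, but drifts); EM tracker positions drive the
    update step (drift-free, but jittery with variance r)."""
    x, p = em_positions[0], r
    out = [x]
    for dx, z in zip(speckle_steps, em_positions[1:]):
        x, p = x + dx, p + q                  # predict using speckle displacement
        k = p / (p + r)                       # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p   # correct with EM position
        out.append(x)
    return out
```

In the test below the speckle displacements carry a 5% drift and the EM positions a ±0.5 jitter; the fused track stays close to the true trajectory while pure speckle integration drifts away.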
3

Estimation for Sensor Fusion and Sparse Signal Processing

Zachariah, Dave January 2013 (has links)
Progressive developments in computing and sensor technologies during the past decades have enabled the formulation of increasingly advanced problems in statistical inference and signal processing. The thesis is concerned with statistical estimation methods, and is divided into three parts with focus on two different areas: sensor fusion and sparse signal processing. The first part introduces the well-established Bayesian, Fisherian and least-squares estimation frameworks, and derives new estimators. Specifically, the Bayesian framework is applied in two different classes of estimation problems: scenarios in which (i) the signal covariances themselves are subject to uncertainties, and (ii) distance bounds are used as side information. Applications include localization, tracking and channel estimation. The second part is concerned with the extraction of useful information from multiple sensors by exploiting their joint properties. Two sensor configurations are considered here: (i) a monocular camera and an inertial measurement unit, and (ii) an array of passive receivers. New estimators are developed with applications that include inertial navigation, source localization and multiple waveform estimation. The third part is concerned with signals that have sparse representations. Two problems are considered: (i) spectral estimation of signals with power concentrated in a small number of frequencies, and (ii) estimation of sparse signals that are observed through few samples, including scenarios in which the measurements are linearly underdetermined. New estimators are developed with applications that include spectral analysis, magnetic resonance imaging and array processing. / QC 20130426
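A concrete instance of the third part's theme, recovering a sparse signal from a linearly underdetermined system, is iterative soft-thresholding (ISTA). This is a standard textbook method shown only to make the problem setting tangible, not one of the thesis's new estimators; the dimensions and regularization weight are illustrative.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=300):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz const of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)        # gradient step on the smooth term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x
```

With a 15x30 system (twice as many unknowns as measurements) and a 2-sparse true signal, the l1 penalty still recovers a sparse solution with a small residual, which an unregularized least-squares fit cannot do uniquely.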
4

Sensor Fusion for Heavy Duty Vehicle Platooning / Sensorfusion för tunga fordon i fordonståg

Nilsson, Sanna January 2012 (has links)
The aim of platooning is to enable several Heavy Duty Vehicles (HDVs) to drive in a convoy and act as one unit to decrease the fuel consumption. By introducing wireless communication and tight control, the distance between the HDVs can be decreased significantly. This implies a reduction of the air drag and consequently the fuel consumption for all the HDVs in the platoon. The challenge in platooning is to keep the HDVs as close as possible to each other without endangering safety. Therefore, sensor fusion is necessary to get an accurate estimate of the relative distance and velocity, which is a prerequisite for the controller. This master's thesis aims at developing a sensor fusion framework from on-board sensor information as well as other vehicles' sensor information communicated over a WiFi link. The most important sensors are GPS, which gives a rough position of each HDV, and radar, which provides the relative distance for each pair of HDVs in the platoon. A distributed solution is developed, where an Extended Kalman Filter (EKF) estimates the state of the whole platoon. The state vector includes position, velocity and length of each HDV, which is used in a Model Predictive Control (MPC) scheme. Furthermore, a method is discussed on how to handle vehicles outside the platoon and how various road surfaces can be managed. This master's thesis is part of a project consisting of three parallel master's theses. The other two theses investigate and implement rough pre-processing of data, time synchronization and MPC associated with platooning. It was found that the three implemented systems could reduce the average fuel consumption by 11.1 %.
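The platoon-state estimation idea can be sketched with a plain linear Kalman filter for a two-vehicle platoon: the state holds each HDV's position and velocity, and the filter fuses noisy per-vehicle GPS positions with an accurate radar measurement of the inter-vehicle gap. The thesis's solution is a distributed EKF over the whole platoon including vehicle lengths; the two-vehicle linear setup and all noise levels here are illustrative assumptions.

```python
import numpy as np

def platoon_kf(gps1, gps2, radar_gap, dt=0.1):
    """Linear KF over state [p1, v1, p2, v2], fusing per-vehicle GPS
    positions (noisy) with the lead-follower radar gap (accurate).
    Returns the estimated gap p1 - p2 at each step."""
    F = np.array([[1, dt, 0, 0], [0, 1, 0, 0], [0, 0, 1, dt], [0, 0, 0, 1.0]])
    H = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [1, 0, -1, 0.0]])  # gps1, gps2, gap
    Q = 0.01 * np.eye(4)                       # process noise
    R = np.diag([9.0, 9.0, 0.01])              # GPS std 3 m, radar std 0.1 m
    x = np.array([gps1[0], 0.0, gps2[0], 0.0])
    P = 10.0 * np.eye(4)
    gaps = []
    for z in zip(gps1, gps2, radar_gap):
        x, P = F @ x, F @ P @ F.T + Q          # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        x = x + K @ (np.array(z) - H @ x)      # update with all three measurements
        P = (np.eye(4) - K @ H) @ P
        gaps.append(x[0] - x[2])
    return gaps
```

Because the radar channel is far less noisy than GPS, the fused gap estimate inherits radar-level accuracy while GPS keeps the absolute positions anchored, which is exactly what the tight gap controller needs.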
5

Remote monitoring and fault diagnosis of an industrial machine through sensor fusion

Lang, Haoxiang 05 1900 (has links)
Fault detection and diagnosis is quite important in engineering systems, and deserves further attention in view of the increasing complexity of modern machinery. Traditional single-sensor methods of fault monitoring and diagnosis may find it difficult to meet modern industrial requirements because there is usually no direct way to measure and accurately correlate a machine fault to a single sensor output. Fusion of information from multiple sensors can overcome this shortcoming. In this thesis, a neural-fuzzy approach of multi-sensor fusion is developed for a network-enabled remote fault diagnosis system. The approach is validated by applying it to an industrial machine called the Iron Butcher, which is a machine used in the fish processing industry for the removal of the head in fish prior to further processing for canning. An important characteristic of the fault diagnosis approach developed in this thesis is to make an accurate decision of the machine condition by fusing information from different sensors. First, sound, vibration and vision signals are acquired from the machine using a microphone, an accelerometer and a digital CCD camera, respectively. Second, the sound and vibration signals are transformed into the frequency domain using fast Fourier transformation (FFT). A feature vector from the FFT frequency spectra is defined and extracted from the acquired information. Also, a feature-based vision tracking approach—the Scale Invariant Feature Transform (SIFT)—is applied to the vision data to track the object of interest (fish) in a robust manner. Third, sound, vibration and vision feature vectors are provided as inputs to a neuro-fuzzy network for fault detection and diagnosis. A four-layer neural network including a fuzzy hidden layer is developed in the thesis to analyze and diagnose existing faults.
By training the neural network with sample data for typical faults, faults of five crucial components in the fish cutting machine are detected with high reliability and robustness. Alarms to warn about impending faults may be generated as well during the machine operation. A network-based remote monitoring architecture is also developed in the thesis, which will enable engineers to monitor the machine condition in a more flexible manner from a remote site. The developed multi-sensor approaches are validated using computer simulations and physical experimentation with the industrial machine, and compared with a single-sensor approach.
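The FFT feature-extraction step described above can be sketched as follows: the magnitude spectrum of a sound or vibration signal is summarized as a vector of normalized band energies, which then serves as one input to the diagnosis network. This is a generic illustration of spectral feature vectors, not the thesis's exact feature definition; the band count is an assumption.

```python
import numpy as np

def band_energy_features(signal, n_bands=8):
    """Feature vector from an FFT magnitude spectrum: total energy in
    n_bands equal-width frequency bands, normalized to sum to 1."""
    spec = np.abs(np.fft.rfft(signal)) ** 2          # one-sided power spectrum
    bands = np.array_split(spec, n_bands)            # equal-width frequency bands
    e = np.array([b.sum() for b in bands])
    return e / e.sum()
```

A fault that shifts machine vibration energy into a different frequency band changes the feature vector in an obvious way, which is what makes such features easy for a classifier to separate.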
7

Multimodal Movement Sensing using Motion Capture and Inertial Sensors for Mixed-Reality Rehabilitation

January 2010 (has links)
This thesis presents a multi-modal motion tracking system for stroke patient rehabilitation. This system deploys two sensor modules: a marker-based motion capture system and an inertial measurement unit (IMU). The integrated system provides real-time measurement of the right arm and trunk movement, even in the presence of marker occlusion. The information from the two sensors is fused through quaternion-based recursive filters to ensure robust detection of torso compensation (undesired body motion). Since this algorithm allows flexible sensor configurations, it presents a framework for fusing the IMU data and vision data that can adapt to various sensor selection scenarios. The proposed system consequently has the potential to improve both the robustness and flexibility of the sensing process. Through comparison among the complementary filter, the extended Kalman filter (EKF), the unscented Kalman filter (UKF) and the particle filter (PF), the experimental part evaluated the performance of the quaternion-based complementary filter for 10 sensor combination scenarios. Experimental results demonstrate the favorable performance of the proposed system in case of occlusion. Such investigation also provides valuable information for filtering algorithm and strategy selection in specific sensor applications. / Dissertation/Thesis / M.S. Electrical Engineering 2010
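The gyro/marker blending idea can be sketched with a scalar complementary filter. The thesis works with quaternions and recursive filters over full body segments; this one-angle version with an illustrative blend gain only shows the key behaviors: the gyro (accurate short-term, but drifting) is integrated every step, the marker-based angle (drift-free, but sometimes occluded) pulls the estimate back, and an occlusion is handled by falling back to pure gyro integration.

```python
def complementary_filter(gyro_rates, marker_angles, dt=0.01, alpha=0.98):
    """Scalar complementary filter: integrate the gyro rate and blend in
    the marker-based angle. A marker value of None signals occlusion, in
    which case the filter coasts on the gyro alone."""
    theta = marker_angles[0]
    out = [theta]
    for w, m in zip(gyro_rates[1:], marker_angles[1:]):
        pred = theta + w * dt                       # gyro integration step
        theta = pred if m is None else alpha * pred + (1 - alpha) * m
        out.append(theta)
    return out
```

With a biased gyro the raw integral drifts without bound, while the filtered angle stays bounded; during an occlusion window the error grows, then recovers once markers reappear, mirroring the occlusion robustness the abstract reports.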
9

Sensor Fusion Algorithm for Airborne Autonomous Vehicle Collision Avoidance Applications

Doe, Julien Albert 01 December 2018 (has links)
A critical ability of any aircraft is to be able to detect potential collisions with other airborne objects, and maneuver to avoid these collisions. This can be done by utilizing sensors on the aircraft to monitor the sky for collision threats. However, several problems face a system which aims to use multiple sensors for target tracking. The data collected from sensors needs to be clustered, fused, and otherwise processed such that the flight control system can make accurate decisions based on it. Raw sensor data, while filled with useful information, is tainted with inaccuracies due to limitations and imperfections of the sensor. Combined use of different sensors presents further issues in how to handle disagreements between sensor data. This thesis project tackles the problem of processing data from multiple sensors (in this application, a radar and an infrared sensor) on an airborne platform in order to allow the aircraft to make flight corrections to avoid collisions.
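One of the first processing steps the abstract names, clustering and reconciling detections from the radar and the infrared sensor, can be sketched as greedy nearest-neighbor association with a validation gate. This is a common baseline for multi-sensor target tracking; the abstract does not specify the actual association method, and the gate value and 2-D positions here are illustrative.

```python
import numpy as np

def associate(radar_tracks, ir_detections, gate=5.0):
    """Greedy nearest-neighbor association of IR detections to radar
    tracks. Pairs farther apart than `gate` are rejected, which discards
    spurious detections the two sensors disagree on."""
    pairs, used = [], set()
    for i, t in enumerate(radar_tracks):
        d = [np.linalg.norm(np.asarray(t) - np.asarray(p)) for p in ir_detections]
        for j in np.argsort(d):                  # try closest detections first
            if int(j) not in used and d[j] <= gate:
                pairs.append((i, int(j)))        # (track index, detection index)
                used.add(int(j))
                break
    return pairs
```

Detections left unpaired after gating are candidates for new tracks or clutter; a fielded system would typically replace the Euclidean distance with a covariance-weighted (Mahalanobis) distance.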
10

Multi-rate Sensor Fusion for GPS Navigation Using Kalman Filtering

Mayhew, David McNeil 08 July 1999 (has links)
With the advent of the Global Positioning System (GPS), we now have the ability to determine absolute position anywhere on the globe. Although GPS systems work well in open environments with no overhead obstructions, they are subject to large unavoidable errors when the reception from some of the satellites is blocked. This occurs frequently in urban environments, such as downtown New York City. GPS receivers require at least four visible satellites to maintain a good position 'fix'. Tall buildings and tunnels often block several, if not all, of the satellites. Additionally, due to Selective Availability (SA), where small amounts of error are intentionally introduced, GPS errors can typically range up to 100 ft or more. This thesis proposes several methods for improving the position estimation capabilities of a system by incorporating other sensor and data technologies, including Kalman filtered inertial navigation systems, rule-based and fuzzy-based sensor fusion techniques, and a unique map-matching algorithm. / Master of Science
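A minimal sketch of the Kalman-filtered GPS/INS idea: the inertial velocity drives the prediction every epoch, and a GPS fix corrects the absolute position whenever enough satellites are visible; `None` models an urban-canyon outage such as the downtown scenario described above. This is a one-dimensional illustration with assumed noise values, not the thesis's multi-rate filter design.

```python
def gps_ins_fusion(ins_velocity, gps_fixes, dt=1.0, q=0.1, r=9.0):
    """1-D Kalman filter: INS velocity drives the predict step; a GPS fix
    (or None when satellites are blocked) corrects absolute position."""
    x, p = 0.0, r
    est = []
    for v, z in zip(ins_velocity, gps_fixes):
        x, p = x + v * dt, p + q           # dead-reckoning prediction
        if z is not None:                  # GPS available this epoch
            k = p / (p + r)                # Kalman gain
            x, p = x + k * (z - x), (1 - k) * p
        est.append(x)
    return est
```

During an outage the filter coasts on the INS and the error grows only with the inertial drift; once fixes resume, the position snaps back, which is the behavior that makes the fused system usable where GPS alone fails.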
