11

An Opportunistic Relaying Scheme for Optimal Communications and Source Localization

Perez-Ramirez, Javier 10 1900 (has links)
ITC/USA 2012 Conference Proceedings / The Forty-Eighth Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2012 / Town and Country Resort & Convention Center, San Diego, California / The selection of relay nodes (RNs) for optimal communication and source location estimation is studied. The RNs are randomly placed at fixed and known locations over a geographical area. A mobile source senses and collects data at various locations over the area and transmits the data to a destination node with the help of the RNs. The destination node needs to collect not only the sensed data but also the location of the source where the data was collected. Hence, both high-quality data collection and the correct location of the source are needed. Using the measured distances between the relays and the source, the destination estimates the location of the source. The selected RNs must be optimal for joint communication and source location estimation. We show in this paper how this joint optimization can be achieved. For practical decentralized selection, an opportunistic RN selection algorithm is used. Bit error rate performance as well as mean squared error in location estimation are presented and compared to the optimal relay selection results.
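The abstract above describes choosing relays that are jointly good for communication and for range-based localization. As a rough illustration of that trade-off (not the paper's actual scheme: the scoring metric, the toy path-loss model, and the exhaustive subset search below are all assumptions), one can score candidate relay subsets by combining a bottleneck-link SNR with the determinant of a range-only Fisher information matrix:

```python
import itertools
import math

def link_snr(a, b, p0=1.0, alpha=2.0):
    # toy path-loss model: received SNR proportional to 1/d^alpha
    return p0 / (math.dist(a, b) ** alpha)

def fim_det(source, relays):
    # 2x2 Fisher information for range-only localization (unit noise):
    # J = sum_i u_i u_i^T with u_i the unit vector from source to relay i
    jxx = jxy = jyy = 0.0
    for rx, ry in relays:
        dx, dy = rx - source[0], ry - source[1]
        d = math.hypot(dx, dy)
        ux, uy = dx / d, dy / d
        jxx += ux * ux
        jxy += ux * uy
        jyy += uy * uy
    return jxx * jyy - jxy * jxy

def select_relays(source, dest, relays, k, w=0.5):
    # score a subset by bottleneck link SNR (communication quality) and by
    # the FIM determinant (localization geometry), combined on a log scale;
    # exhaustive search stands in for the paper's decentralized timers
    def score(subset):
        comm = min(min(link_snr(source, r), link_snr(r, dest)) for r in subset)
        return (w * math.log(comm + 1e-12)
                + (1 - w) * math.log(fim_det(source, subset) + 1e-12))
    return list(max(itertools.combinations(relays, k), key=score))
```

A subset of relays spread around the source scores well on both terms, while collinear relays drive the FIM determinant (and the score) toward its floor.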
12

Computational Acoustic Beamforming of Noise Source on Wind Turbine Airfoil

Li, Chi Shing January 2014 (has links)
A new method, Computational Acoustic Beamforming, is proposed in this thesis. This novel numerical sound source localization methodology combines the advantages of Computational Fluid Dynamics (CFD) simulation and experimental acoustic beamforming, which enables the method to take the directivity of sound source emission into account while maintaining a relatively low cost. It can also aid the optimization of beamforming algorithms and microphone array designs. In addition, it makes sound source prediction of large structures in the low frequency range possible. Three modules, CFD, Computational Aeroacoustics (CAA) and acoustic beamforming, are incorporated in the proposed method. This thesis adopts the open-source software OpenFOAM for the flow field simulation with the Improved Delayed Detached Eddy Simulation (IDDES) turbulence model. The CAA calculation is conducted by an in-house code using the impermeable Ffowcs-Williams and Hawkings (FW-H) equation for static sound sources. The acoustic beamforming is performed by an in-house Delay and Sum (DAS) beamformer code with several different microphone array designs. Each module has been validated against currently available experimental data and numerical results. Flow over a NACA 0012 airfoil was chosen as a demonstration case for the new method. The aerodynamic and aeroacoustic results are shown and compared with the experimental measurements. Relatively good agreement has been achieved, which gives confidence in using this newly proposed method in sound source localization applications.
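The DAS (Delay and Sum) beamformer named above is a standard technique. A minimal time-domain sketch, assuming free-field propagation at speed c and a grid of candidate source points (the thesis's in-house code is certainly more elaborate), looks like:

```python
import numpy as np

def das_map(mic_pos, signals, fs, grid, c=343.0):
    """Time-domain delay-and-sum power map over candidate source points.

    mic_pos: (M, 2) microphone coordinates, signals: (M, N) time series
    sampled at fs, grid: (G, 2) candidate points. Returns power per point.
    """
    M, N = signals.shape
    power = np.zeros(len(grid))
    for g, p in enumerate(grid):
        # propagation delay from the hypothesized point to each microphone
        delays = np.linalg.norm(mic_pos - p, axis=1) / c
        shifts = np.round((delays - delays.min()) * fs).astype(int)
        # advance each channel so a source at p would align across channels
        L = N - shifts.max()
        acc = np.zeros(L)
        for m in range(M):
            acc += signals[m, shifts[m]:shifts[m] + L]
        power[g] = np.mean((acc / M) ** 2)
    return power
```

When the hypothesized point matches the true source, the channels add coherently and the map peaks there; elsewhere the misaligned signals partially cancel.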
13

Bayesian M/EEG source localization with possible joint skull conductivity estimation

Costa, Facundo Hernan 02 March 2017 (has links)
M/EEG techniques allow determining changes in brain activity, which is useful in diagnosing brain disorders such as epilepsy. They consist of measuring the electric potential at the scalp and the magnetic field around the head.
The measurements are related to the underlying brain activity by a linear model that depends on the lead-field matrix. Localizing the sources, or dipoles, of M/EEG measurements consists of inverting this linear model. However, the non-uniqueness of the solution (due to the fundamental laws of physics) and the low number of dipoles make the inverse problem ill-posed. Solving such a problem requires some sort of regularization to reduce the search space. The literature abounds with methods and techniques to solve this problem, especially variational approaches. This thesis develops Bayesian methods to solve ill-posed inverse problems, with application to M/EEG. The main idea underlying this work is to constrain the sources to be sparse. This hypothesis is valid in many applications, such as certain types of epilepsy. We develop different hierarchical models to account for the sparsity of the sources. Theoretically, enforcing sparsity is equivalent to minimizing a cost function penalized by the l0 pseudo-norm of the solution. However, since l0 regularization leads to NP-hard problems, the l1 approximation is usually preferred. Our first contribution consists of combining the two norms in a Bayesian framework, using a Bernoulli-Laplace prior. A Markov chain Monte Carlo (MCMC) algorithm is used to estimate the parameters of the model jointly with the source locations and intensities. Comparing the results, in several scenarios, with those obtained with sLoreta and weighted l1-norm regularization shows interesting performance, at the price of a higher computational complexity. Our Bernoulli-Laplace model solves the source localization problem at one instant of time. However, it is biophysically well known that brain activity follows spatiotemporal patterns. Exploiting the temporal dimension is therefore interesting to further constrain the problem. Our second contribution consists of formulating a structured sparsity model to exploit this biophysical phenomenon. 
Precisely, a multivariate Bernoulli-Laplacian distribution is proposed as a prior distribution for the dipole locations. A latent variable is introduced to handle the resulting complex posterior, and an original Metropolis-Hastings sampling algorithm is developed. The results show that the proposed sampling technique significantly improves the convergence. A comparative analysis of the results is performed between the proposed model, an l21 mixed-norm regularization and the Multiple Sparse Priors (MSP) algorithm. Various experiments are conducted with synthetic and real data. Results show that our model has several advantages, including a better recovery of the dipole locations. The previous two algorithms consider a fully known lead-field matrix. However, this is seldom the case in practical applications. Instead, this matrix is the result of approximation methods that lead to significant uncertainties. Our third contribution consists of handling the uncertainty of the lead-field matrix. The proposed method expresses this matrix as a function of the skull conductivity using a polynomial matrix interpolation technique. The conductivity is considered the main source of uncertainty of the lead-field matrix. Our multivariate Bernoulli-Laplacian model is then extended to estimate the skull conductivity jointly with the brain activity. The resulting model is compared to other methods, including the techniques of Vallaghé et al. and Guttierez et al. Our method provides results of better quality without requiring knowledge of the active dipole positions and is not limited to a single dipole activation.
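The l1 approximation mentioned in this abstract corresponds, in a variational setting, to minimizing ½‖y − Hx‖² + λ‖x‖₁. A minimal ISTA (iterative soft-thresholding) sketch illustrates the sparsity-promoting effect of that penalty; this shows the l1 relaxation only, not the thesis's Bayesian sampler:

```python
def soft(v, t):
    # soft-thresholding: proximal operator of t * |.|
    return v - t if v > t else v + t if v < -t else 0.0

def ista(y, H, lam, step, n_iter=200):
    # minimize 0.5 * ||y - H x||^2 + lam * ||x||_1 by proximal gradient;
    # step must be <= 1 / ||H^T H|| for convergence
    m, n = len(H), len(H[0])
    x = [0.0] * n
    for _ in range(n_iter):
        # residual r = H x - y, gradient of the quadratic term g = H^T r
        r = [sum(H[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(H[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [soft(xj - step * gj, step * lam) for xj, gj in zip(x, g)]
    return x
```

With H equal to the identity this reduces to componentwise soft-thresholding of y, which makes the sparsifying behavior easy to see: small entries are set exactly to zero.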
14

Blind Received Signal Strength Difference Based Source Localization with System Parameter Error and Sensor Position Uncertainty

Lohrasbipeydeh, Hannan 27 August 2014 (has links)
Passive source localization in wireless sensor networks (WSNs) is an important field of research with numerous applications in signal processing and wireless communications. One purpose of a WSN is to determine the position of a signal emitted from a source. This position is estimated based on received noisy measurements from sensors (anchor nodes) that are distributed over a geographical area. In most cases, the sensor positions are assumed to be known exactly, which is not always reasonable. Even if the sensor positions are measured initially, they can change over time. Due to the sensitivity of source location estimation accuracy with respect to the a priori sensor position information, the source location estimates obtained can vary significantly regardless of the localization method used. Therefore, the sensor position uncertainty should be considered to obtain accurate estimates. Among the many localization approaches, signal strength based methods have the advantages of low cost and simple implementation. The received signal energy mainly depends on the transmitted power and path loss exponent which are often unknown in practical scenarios. In this dissertation, three received signal strength difference (RSSD) based methods are presented to localize a source with unknown transmit power. A nonlinear RSSD-based model is formulated for systems perturbed by noise. First, an effective low complexity constrained weighted least squares (CWLS) technique in the presence of sensor uncertainty is derived to obtain a least squares initial estimate (LSIE) of the source location. Then, this estimate is improved using a computationally efficient Newton method. The Cramer-Rao lower bound (CRLB) is derived to determine the effect of sensor location uncertainties on the source location estimate. Results are presented which show that the proposed method achieves the CRLB when the signal to noise ratio (SNR) is sufficiently high. 
Least squares (LS) based methods are typically used to obtain the location estimate that minimizes the data vector error instead of directly minimizing the unknown parameter estimation error. This can result in poor performance, particularly in noisy environments, due to bias and variance in the location estimate. Thus, an efficient two-stage estimator is proposed here. First, a minimax optimization problem is developed to minimize the mean square error (MSE) of the proposed RSSD-based model. Then semidefinite relaxation is employed to transform this nonconvex and nonlinear problem into a convex optimization problem. This can be solved efficiently to obtain the optimal solution of the corresponding semidefinite programming (SDP) problem. Performance results are presented which confirm the efficiency of the proposed method, which achieves the CRLB. Finally, an extended total least squares (ETLS) method is developed for blind localization which considers perturbations in the system parameters as well as the constraints imposed by the relation between the observation matrix and the data vector. The corresponding nonlinear and nonconvex RSSD-based localization problem is then transformed into an ETLS problem with fewer constraints, which is in turn transformed into a convex SDP problem using relaxation. The proposed ETLS-SDP method is extended to the case with an unknown path loss exponent. The mean squared error (MSE) and corresponding CRLB are derived as performance benchmarks. Performance results are presented which show that the RSSD-based ETLS-SDP method attains the CRLB for a sufficiently large SNR. / Graduate / 0544 / lohrasbi@uvic.ca
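The key property behind RSSD methods is that differencing received powers (in dB) cancels the unknown transmit power. A toy grid-search sketch under a log-distance path-loss model (an illustration of the measurement model only, not the dissertation's CWLS or SDP estimators) shows the idea:

```python
import math

def rssd_localize(sensors, rss_dbm, alpha, grid):
    """Grid search minimizing RSS-difference residuals.

    Under rss_i = P0 - 10*alpha*log10(d_i), differences cancel P0:
    rss_i - rss_0 = 10*alpha*(log10 d_0 - log10 d_i).
    """
    ref = 0  # use sensor 0 as the reference
    meas = [rss_dbm[i] - rss_dbm[ref] for i in range(1, len(sensors))]
    best, best_cost = None, float("inf")
    for p in grid:
        d = [max(math.dist(p, s), 1e-9) for s in sensors]
        pred = [10 * alpha * (math.log10(d[ref]) - math.log10(d[i]))
                for i in range(1, len(sensors))]
        cost = sum((m - q) ** 2 for m, q in zip(meas, pred))
        if cost < best_cost:
            best, best_cost = p, cost
    return best
```

Because the transmit power P0 never appears in the cost, the search is "blind" in the sense the title uses: no knowledge of the transmitted power is needed.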
15

Geographical localization of radio transmitters: strategies, algorithms and performances

Bosse, Jonathan 24 January 2012 (has links)
This thesis deals with the geographical positioning of multiple radio transmitters using array processing techniques. The position is estimated with multiple-sensor base stations. We aim to design estimators of the source positions and to characterize the fundamental limits of localization strategies in terms of root-mean-square error in a passive signal context (no prior information on the transmitted signals). Traditionally, geographical positioning is achieved by means of a two-step procedure. In the first step, intermediate location parameters (angles of arrival, times of arrival ...) are locally estimated, in a decentralized fashion, on each station. Then, in the second step, the localization is achieved in a central processing unit using all the transmitted parameters. This strategy is inherently suboptimal. 
An optimal solution to the geographical localization problem instead consists of estimating the position of the sources in a centralized manner at the central processing unit, assuming that all base stations are able to transfer all their signals to it. The localization can then be achieved in a one-step procedure. The problem then depends directly on the position of the sources and not on intermediate parameters. This approach appears very interesting, but the characterization of its fundamental limits is still an open question. In this thesis we examine the advantages of the centralized one-step procedure compared to the traditional decentralized two-step procedure. First, we study the case of narrowband signals over the station network, which offers a relevant theoretical framework to compare the performance of centralized and decentralized localization schemes. Then, we propose an alternative to the existing techniques in the more general wideband signal context, based on a spatio-temporal approach. The comparison of the existing and proposed techniques to the optimal performance is also part of the work reported in this thesis. A multistage geographical positioning technique is also provided for the multipath propagation context.
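The decentralized two-step procedure described above can be sketched for the angle-of-arrival case: each station reports a bearing, and the central unit intersects the bearing lines by least squares. This toy fusion step (the first step, bearing estimation at each station, is assumed already done) is:

```python
import math

def triangulate(stations, bearings):
    """Least-squares intersection of bearing lines (two-step AOA fusion).

    Each station at (px, py) reports a bearing th (radians, from the
    x-axis); the line constraint is n . x = n . p with n normal to the
    bearing direction. The 2x2 normal equations are solved in closed form.
    """
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), th in zip(stations, bearings):
        nx, ny = -math.sin(th), math.cos(th)  # unit normal to the bearing line
        c = nx * px + ny * py
        a11 += nx * nx; a12 += nx * ny; a22 += ny * ny
        b1 += nx * c; b2 += ny * c
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

With noiseless bearings the lines meet at the source exactly; with noisy bearings the least-squares point minimizes the sum of squared distances to the lines, which is exactly where the suboptimality discussed in the thesis enters (errors in the first step cannot be undone in the second).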
16

Interpolated Perturbation-Based Decomposition as a Method for EEG Source Localization

Lipof, Gabriel Zelik 01 June 2019 (has links) (PDF)
In this thesis, the perturbation-based decomposition technique developed by Szlavik [1] was used in an attempt to solve the inverse problem in EEG source localization. A set of dipole locations was forward modeled using a 4-layer sphere model of the head at uniformly distributed lead locations to form the vector basis necessary for the method. Both a two-dimensional and a pseudo-three-dimensional version of the model were assessed, with the two-dimensional model yielding decompositions with minimal error and the pseudo-three-dimensional version having unacceptable levels of error. The utility of interpolation as a method of reducing the number of data points needed for the system to remain overdetermined was assessed as well. The approach was effective as long as the number of component functions did not exceed the number of data points and stayed relatively small (less than 77 component functions). This application of the method to a spatially variate system indicates its potential for other systems; with some tweaking of the least squares algorithm used, it could be applied to multivariate systems.
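The decomposition step described above amounts to a least-squares fit of the measurements onto a basis of forward-modeled lead vectors. A generic normal-equations sketch (not Szlavik's specific formulation; the basis here is an arbitrary stand-in) is:

```python
def ls_decompose(basis, y):
    """Least-squares coefficients c for y ~ sum_k c_k * basis[k].

    Forms the Gram system G c = b with G_kl = <b_k, b_l>, b_k = <b_k, y>,
    and solves it by Gaussian elimination with partial pivoting. Fine for
    the small component counts discussed in the abstract.
    """
    n = len(basis)
    G = [[sum(bi * bj for bi, bj in zip(basis[k], basis[l]))
          for l in range(n)] for k in range(n)]
    b = [sum(bk * yk for bk, yk in zip(basis[k], y)) for k in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(G[r][col]))
        G[col], G[piv] = G[piv], G[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = G[r][col] / G[col][col]
            for c in range(col, n):
                G[r][c] -= f * G[col][c]
            b[r] -= f * b[col]
    c = [0.0] * n
    for r in range(n - 1, -1, -1):
        c[r] = (b[r] - sum(G[r][k] * c[k] for k in range(r + 1, n))) / G[r][r]
    return c
```

The overdetermined requirement discussed in the abstract corresponds to having at least as many data points as component functions, so that the Gram matrix stays invertible.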
17

Developing and Testing a Novel De-centralized Cycle-free Game Theoretic Traffic Signal Controller: A Traffic Efficiency and Environmental Perspective

Abdelghaffar, Hossam Mohamed Abdelwahed 30 April 2018 (has links)
Traffic congestion negatively affects traveler mobility and air quality. Stop-and-go vehicular movements associated with traffic jams typically result in higher fuel consumption levels compared to cruising at a constant speed. The first objective of the dissertation is to investigate the spatial relationship between air quality and traffic flow patterns. We developed and applied a recursive Bayesian estimation algorithm to estimate the source location (associated with traffic jams) of an airborne contaminant (aerosol) in a simulation environment. This algorithm was compared to the gradient descent algorithm and an extended Kalman filter algorithm. Results suggest that Bayesian estimation is less sensitive to the choice of the initial state and to the plume dispersion model. Consequently, Bayesian estimation was implemented to identify the location (correlated with traffic flows) of the aerosol (soot) that can be attributed to traffic in the vicinity of the Old Dominion University campus, using data collected from a remote sensing system. Results show that the source locations of soot pollution are at congested intersections, which demonstrates that air quality is correlated with traffic flows and with congestion caused by signalized intersections. Sustainable mobility can help reduce traffic congestion and vehicle emissions, and thus optimizing the performance of available infrastructure via advanced traffic signal controllers has become increasingly appealing. The second objective of the dissertation is to develop a novel de-centralized traffic signal controller, achieved using a Nash bargaining game-theoretic framework, that operates a flexible phasing sequence and a free cycle length to adapt to dynamic changes in traffic demand levels. The developed controller was implemented and tested in the INTEGRATION microscopic traffic assignment and simulation software. 
The proposed controller was compared to the operation of an optimum fixed-time coordinated plan, an actuated controller, a centralized adaptive phase split controller, a decentralized phase split and cycle length controller, and a fully coordinated adaptive phase split, cycle length, and offset optimization controller to evaluate its performance. Testing was initially conducted on an isolated intersection, showing a 77% reduction in queue length, a 17% reduction in vehicle emission levels, and a 64% reduction in total delay. In addition, the developed controller was tested on an arterial network producing statistically significant reductions in total delay ranging between 36% and 67% and vehicle emissions reductions ranging between 6% and 13%. Analysis of variance, Tukey, and pairwise comparison tests were conducted to establish the significance of the proposed controller. Moreover, the controller was tested on a network of 38 intersections producing significant reduction in the travel time by 23.6%, a reduction in the queue length by 37.6%, and a reduction in CO2 emissions by 10.4%. Finally, the controller was tested on the Los Angeles downtown network composed of 457 signalized intersections, producing a 35% reduction in travel time, a 54.7% reduction in queue length, and a 10% reduction in the CO2 emissions. The results demonstrate that the proposed decentralized controller produces major improvements over other state-of-the-art centralized and de-centralized controllers. The proposed controller is capable of alleviating congestion as well as reducing emissions and enhancing air quality. / PHD / Traffic congestion affects traveler mobility and also has an impact on air quality, and consequently, on public health. Stop-and-go driving, which is typically associated with traffic jams, results in increased fuel consumption when compared to cruising at a constant speed. 
This in turn contributes to the amount of vehicle emissions that create air pollution, which contributes to global warming. Consequently, studying the spatial relationships between air quality and traffic flow patterns is directly related to enhancing air quality, as improving these patterns can reduce traffic congestion. The first objective in this dissertation is to investigate the spatial relationship between air quality and traffic flow patterns. We developed and applied a recursive Bayesian estimation algorithm to estimate the source location of an airborne contaminant (aerosol) in a simulation environment. This algorithm was compared to the gradient descent algorithm and the extended Kalman filter. Results suggest that Bayesian estimation is less sensitive to the choice of the initial state and to the plume dispersion model when compared to the other two approaches. Consequently, an experimental investigation using Bayesian estimation was conducted to identify the location (correlated with traffic flows) of the aerosol (soot) that can be attributed to traffic in the vicinity of the Old Dominion University campus, using data collected from a remote sensing system (a compact light detection and ranging [LiDAR] system). The results show that the location of soot pollution in the study area is located at congested intersections, which demonstrates that air quality is correlated with traffic flows and congestion caused by signalized intersections. Sustainable mobility could enhance air quality and alleviate congestion. Accordingly, optimizing the utilization of the available infrastructure using advanced traffic signal controllers has become necessary to mitigate traffic congestion in a world with growing pressure on financial and physical resources. The second objective in the dissertation is to develop a novel de-centralized traffic signal controller that is achieved using a Nash bargaining game-theoretic framework. 
This framework has a flexible phasing sequence and free cycle length, and thus can adapt to dynamic changes in traffic demand. The controller was implemented and evaluated using the INTEGRATION microscopic traffic assignment and simulation software. The proposed controller was tested and compared to state-of-the-art isolated and coordinated traffic signal controllers. The proposed controller was tested on an isolated intersection, producing a reduction in the queue length ranging from 58% to 77%, and a reduction in vehicle emission levels ranging from 6% to 17%. In the case of the arterial testing, the controller was compared to an optimum fixed-time coordinated plan, an actuated controller, a centralized adaptive phase split controller, a decentralized phase split and cycle length controller, and a fully coordinated adaptive phase split, cycle length, and offset optimization controller to evaluate its performance. On the arterial network, the proposed controller produced reductions in the total delay ranging from 36% to 67%, and reductions in vehicle emissions ranging from 6% to 13%. Statistical tests show that the proposed controller produces major improvements over other state-of-the-art centralized and de-centralized controllers. In the domain of large-scale networks, simulations were conducted on the town of Blacksburg, Virginia, composed of 38 signalized intersections. The results show significant reductions on the intersection approaches, with travel time savings of 23.6%, a reduction in the average queue length of 37.6%, a reduction in the average number of vehicle stops of 23.6%, a reduction in CO₂ emissions of 10.4%, a reduction in fuel consumption of 9.8%, and a reduction in NOx emissions of 5.4%. 
In addition, the proposed controller was tested on downtown Los Angeles, California, including the most congested downtown area, which has 457 signalized intersections, and compared to the performance of a decentralized phase split and cycle length controller. The results show significant reductions on the intersection links: an average travel time reduction of 35.1%, a reduction in the average queue length of 54.7%, a reduction in the average number of stops of 44%, a reduction in CO₂ emissions of 10%, a reduction in fuel consumption of 10%, and a reduction in NOx emissions of 11.7%. Furthermore, simulations conducted at lower traffic flow levels also showed significant improvements in network performance, producing a reduction in vehicle average total delay of 36.7%, a reduction in stopped delay of 90.2%, and a reduction in the average number of stops of 35%, over a decentralized phase split and cycle length controller. The results demonstrate that the proposed decentralized controller reduces traffic congestion, fuel consumption and vehicle emission levels, and produces major improvements over other state-of-the-art centralized and de-centralized controllers.
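The recursive Bayesian estimation used for the first objective can be sketched as a grid-based posterior update: each new sensor reading multiplies the prior over candidate source cells by a Gaussian likelihood built from a dispersion model. The isotropic `plume` model below is an assumption for illustration, not the dissertation's dispersion model:

```python
import math

def plume(source, sensor):
    # assumed toy dispersion model: concentration decays with squared distance
    return 1.0 / (1.0 + math.dist(source, sensor) ** 2)

def bayes_update(prior, grid, sensor, reading, model, sigma):
    # one recursive Bayesian step over a grid of candidate source cells:
    # posterior ∝ Gaussian likelihood of the reading × prior
    post = [p * math.exp(-0.5 * ((reading - model(cell, sensor)) / sigma) ** 2)
            for p, cell in zip(prior, grid)]
    z = sum(post)
    return [p / z for p in post]
```

Starting from a uniform prior and folding in readings one at a time, the posterior mass concentrates on the cell whose predicted concentrations best explain the measurements; the insensitivity to the initial state noted in the abstract corresponds to the uniform prior being quickly overwhelmed by the likelihoods.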
18

Approaches to Multiple-source Localization and Signal Classification

Reed, Jesse 10 June 2009 (has links)
Source localization with a wireless sensor network remains an important area of research as the number of applications involving this problem increases. This work considers the problem of source localization by a network of passive wireless sensors. The primary means by which localization is achieved is through direction-finding at each sensor and, in some cases, range estimation as well. Both single- and multiple-target scenarios are considered in this research. In single-source environments, a solution is presented that outperforms the classic least-squared-error estimation technique by combining direction and range estimates to perform localization. In multiple-source environments, two solutions to the complex data association problem are addressed. The first proposed technique offers a less complex solution to the data association problem than a brute-force approach, at the expense of some degradation in performance. For the second technique, the process of signal classification is considered as another approach to the data association problem. Environments in which each signal possesses unique features can be exploited to separate signals at each sensor by their characteristics, which mitigates the complexity of the data association problem and in many cases improves the accuracy of the localization. Two approaches to signal-selective localization are considered in this work. The first is based on the well-known cyclic MUSIC algorithm, and the second combines beamforming and modulation classification. Finally, the implementation of a direction-finding system is discussed. This system includes a uniform circular array as a radio frequency front end and the Universal Software Radio Peripheral as a data processor. / Master of Science
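The cyclic MUSIC approach mentioned above builds on the plain MUSIC algorithm. A minimal narrowband MUSIC pseudo-spectrum for a uniform linear array (the thesis uses a uniform circular array and the signal-selective cyclic variant, so this sketch shows only the underlying subspace idea) is:

```python
import numpy as np

def music_spectrum(R, d, wavelength, angles, n_src):
    # MUSIC pseudo-spectrum: project steering vectors onto the noise
    # subspace of the array covariance R; peaks occur at source bearings
    m = R.shape[0]
    _, V = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = V[:, : m - n_src]            # noise-subspace eigenvectors
    k = 2 * np.pi / wavelength
    spec = np.empty(len(angles))
    for i, th in enumerate(angles):
        # ULA steering vector for bearing th (element spacing d)
        a = np.exp(1j * k * d * np.arange(m) * np.sin(th))
        spec[i] = 1.0 / (np.linalg.norm(En.conj().T @ a) ** 2 + 1e-12)
    return spec
```

At a true bearing the steering vector lies in the signal subspace, the projection onto the noise subspace vanishes, and the pseudo-spectrum peaks sharply; cyclic MUSIC adds signal selectivity by replacing R with a cyclic correlation matrix.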
19

Odor Source Localization Using Swarm Robotics

Thomas, Joseph 12 1900 (has links)
Locating an odor source in a turbulent environment, an instinctive behavior of insects such as moths, is a nontrivial task in robotics. Robots equipped with odor sensors find it difficult to locate the odor source because of the sporadic nature of odor patches in a turbulent environment. In this thesis, we develop a swarm algorithm that acquires information from odor patches and utilizes it to locate the odor source. The algorithm utilizes an intelligent integration of the chemotaxis, anemotaxis, and spiralling approaches, where the chemotactic behavior is implemented by the recently proposed Glowworm Swarm Optimization (GSO) algorithm. Agents switch between chemotactic, anemotactic, and spiralling modes according to the information available from the environment, for optimal performance. The proposed algorithm takes full advantage of communication and collaboration between the robots and is shown to be robust, efficient, and well suited for implementation in olfactory robots. An important feature of the algorithm is the use of the maximum concentration encountered in the recent past for navigation, which is seen to improve performance significantly. The algorithm initially treats agents as point masses; it is later modified for robots and includes a gyroscopic avoidance strategy. A variant of the algorithm that does not require wind information is shown to be capable of locating odor sources even in a no-wind environment. A deterministic GSO algorithm is also proposed and shown to be capable of faster convergence. Another proposed variant, the push-pull GSO algorithm, is shown to be more efficient in the presence of obstacle avoidance. The proposed algorithm is also shown to be capable of locating an odor source under varying wind conditions, of capturing multiple odor sources simultaneously, and of capturing and tracking a mobile odor source.
The proposed approaches are later tested on data obtained from a realistic dye mixing experiment. A gas source localization experiment is also carried out in the lab to demonstrate the validity of the proposed approaches under real world conditions.
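A minimal sketch of the chemotactic GSO step the abstract builds on: each agent's luciferin decays and is reinforced by the local concentration, and the agent then hops toward its brightest in-range neighbour (the deterministic-selection variant). The smooth Gaussian field, parameter values, and agent count below are illustrative assumptions — real odor plumes are patchy, and the thesis's full algorithm adds anemotactic and spiralling modes:

```python
import numpy as np

rng = np.random.default_rng(0)
SOURCE = np.array([5.0, 5.0])

def concentration(x):
    # Stand-in smooth odor field peaking at SOURCE (real plumes are patchy).
    return np.exp(-np.sum((x - SOURCE) ** 2, axis=-1) / 8.0)

def gso_step(pos, luciferin, rho=0.4, gamma=0.6, step=0.1, radius=3.0):
    """One deterministic GSO iteration: luciferin decay plus reinforcement,
    then each agent hops toward its brightest brighter neighbour in range."""
    luciferin = (1 - rho) * luciferin + gamma * concentration(pos)
    new_pos = pos.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        mask = (d > 1e-9) & (d < radius) & (luciferin > luciferin[i])
        if mask.any():
            j = np.flatnonzero(mask)[np.argmax(luciferin[mask])]
            new_pos[i] += step * (pos[j] - pos[i]) / d[j]
    return new_pos, luciferin

pos = rng.uniform(0.0, 10.0, size=(30, 2))
pos0 = pos.copy()
luc = np.full(30, 5.0)
for _ in range(400):
    pos, luc = gso_step(pos, luc)
print(np.round(pos.mean(axis=0), 2))  # swarm centroid after the run
```

The swarm contracts toward the region of highest sensed concentration; the stochastic neighbour selection and adaptive decision range of the full GSO algorithm are omitted here for brevity.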
20

Bioelectric Source Localization in Peripheral Nerves

Zariffa, Jose 23 February 2010 (has links)
Currently there is no type of peripheral nerve interface that adequately combines spatial selectivity, spatial coverage, and low invasiveness. To address this gap, we investigated the application of bioelectric source localization algorithms, adapted from electroencephalography/magnetoencephalography, to recordings from a 56-contact “matrix” nerve cuff electrode. If successful, this strategy would enable us to improve current neuroprostheses and conduct more detailed investigations of neural control systems. Using forward-field similarities, we first developed a method to reduce the number of unnecessary variables in the inverse problem, and in doing so obtained an upper bound on the spatial resolution. Next, a simulation study of the peripheral nerve source localization problem revealed that the method is unlikely to work unless noise is very low and a very accurate model of the nerve is available. Under more realistic conditions, the method had localization errors in the 140-180 μm range, high numbers of spurious pathways, and low resolution. On the other hand, the simulations also showed that imposing physiologically meaningful constraints on the solution can reduce the number of spurious pathways. Both the influence of the constraints and the importance of model accuracy were validated experimentally using recordings from rat sciatic nerves. Unfortunately, neither idealized models nor models based on nerve sample cross-sections were sufficiently accurate to allow reliable identification of the branches stimulated during the experiments. To overcome this problem, an experimental leadfield was constructed using training data, thereby eliminating the dependence on anatomical models. This new strategy was successful in identifying single-branch cases, but not multi-branch ones. Lastly, the information contained in the matrix cuff recordings was examined in comparison to a single-ring configuration of contacts.
The matrix cuff was able to achieve better fascicle discrimination due to its ability to select among the most informative locations around the nerve. These findings suggest that nerve cuff-based neuroprosthetic applications would benefit from implanting devices with a large number of contacts, then performing a contact selection procedure. Conditions that must be met before source localization approaches can be applied in practice to peripheral nerves were also discussed.
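The leadfield-based inverse problem central to this abstract can be sketched with a Tikhonov-regularized minimum-norm estimate, s = Lᵀ(LLᵀ + λI)⁻¹m. The random leadfield, source count, and noise level below are purely illustrative stand-ins for the anatomical or experimentally trained leadfields discussed above:

```python
import numpy as np

rng = np.random.default_rng(1)
n_contacts, n_sources = 56, 8   # 56 cuff contacts (from the abstract); 8 hypothetical pathways
L = rng.normal(size=(n_contacts, n_sources))  # illustrative random leadfield, not a nerve model

def min_norm_estimate(L, m, lam=1e-2):
    """Tikhonov-regularized minimum-norm inverse: s = L^T (L L^T + lam*I)^{-1} m."""
    G = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(G, m)

# Simulate one active pathway and recover it from the cuff measurement m.
s_true = np.zeros(n_sources)
s_true[3] = 1.0
m = L @ s_true + 0.01 * rng.normal(size=n_contacts)
s_hat = min_norm_estimate(L, m)
print(int(np.argmax(np.abs(s_hat))))  # index of the strongest recovered pathway
```

The abstract's observation that accuracy hinges on the leadfield holds here too: with a mismatched L, the strongest entry of s_hat no longer reliably identifies the active pathway.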
