21

Iterative Timing Recovery for Magnetic Recording Channels with Low Signal-to-Noise Ratio

Nayak, Aravind Ratnakar 07 July 2004 (has links)
Digital communication systems invariably employ an underlying analog communication channel. At the transmitter, data is modulated to obtain an analog waveform which is input to the channel. At the receiver, the output of the channel needs to be mapped back into the discrete domain. To this end, the continuous-time received waveform is sampled at instants chosen by the timing recovery block. Therefore, timing recovery is an essential component of digital communication systems. A widely used timing recovery method is based on a phase-locked loop (PLL), which updates its timing estimates based on a decision-directed device. Timing recovery performance is a strong function of the reliability of decisions, and hence, of the channel signal-to-noise ratio (SNR). Iteratively decodable error-control codes (ECCs) like turbo codes and LDPC codes allow operation at SNRs lower than ever before, thus exacerbating the timing recovery problem. We propose iterative timing recovery, where the timing recovery block, the equalizer and the ECC decoder exchange information, giving the timing recovery block access to decisions that are much more reliable than the instantaneous ones. This provides significant SNR gains at a marginal complexity penalty over a conventional turbo equalizer where the equalizer and the ECC decoder exchange information. We also derive the Cramer-Rao bound, which is a lower bound on the estimation error variance of any timing estimator, and propose timing recovery methods that outperform the conventional PLL and achieve the Cramer-Rao bound in some cases. At low SNR, timing recovery suffers from cycle slips, where the receiver drops or adds one or more symbols, and consequently the ECC decoder almost always fails to decode. Iterative timing recovery has the ability to correct cycle slips. To reduce the number of iterations, we propose cycle slip detection and correction methods. With iterative timing recovery, the PLL with cycle slip detection and correction recovers most of the SNR loss of the conventional receiver that separates timing recovery and turbo equalization.
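To make the decision-directed PLL idea concrete, here is a minimal sketch of a first-order timing loop using a Mueller-Muller timing error detector with linear interpolation. The function name, loop gain, and BPSK hard decisions are illustrative assumptions, not details taken from the thesis:

```python
import numpy as np

def pll_timing_recovery(rx, sps, mu=0.01):
    """First-order decision-directed timing loop (illustrative sketch).

    rx  : oversampled received waveform (real-valued, BPSK assumed)
    sps : nominal samples per symbol
    mu  : loop gain -- larger tracks faster but is noisier
    """
    tau = 0.0                      # fractional timing offset estimate
    idx = 0.0                      # nominal sampling instant
    out, prev_x, prev_d = [], 0.0, 0.0
    while idx + tau + 1 < len(rx):
        # linear interpolation at the current timing estimate
        base = int(np.floor(idx + tau))
        frac = (idx + tau) - base
        x = (1 - frac) * rx[base] + frac * rx[base + 1]
        d = np.sign(x)             # hard decision (stand-in for decoder feedback)
        # Mueller-Muller timing error detector: e = d[n-1]*x[n] - d[n]*x[n-1]
        e = prev_d * x - d * prev_x
        tau -= mu * e              # PLL-style update; sign depends on pulse shape
        prev_x, prev_d = x, d
        out.append(x)
        idx += sps
    return np.array(out), tau
```

In iterative timing recovery, the hard decision `d` would be replaced by the far more reliable soft decisions fed back from the ECC decoder, which is what yields the reported SNR gains at low SNR.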
22

Performance and Implementation Aspects of Nonlinear Filtering

Hendeby, Gustaf January 2008 (has links)
In many situations it is important to extract as much and as good information as possible from the available measurements. Extracting information about, for example, the position and velocity of an aircraft is called filtering. In this case the position and velocity are examples of states of the aircraft, which in turn is a system. Typical examples of this kind of problem are various surveillance systems, but the same need is becoming increasingly common in ordinary consumer products such as cellular phones (which report where the phone is), navigation aids in cars, and the placement of experience-enhancing graphics in films and TV shows. A standard tool for extracting the needed information is nonlinear filtering, whose methods are especially common in positioning, navigation, and target tracking applications. This thesis examines in depth several questions related to nonlinear filtering: * How does one evaluate how well a filter or a detector performs? * What distinguishes the different methods, and what do the differences mean for their properties? * How should the computers used to extract the information be programmed? The measure most often used to describe how effectively a filter works is the RMSE (root mean square error), which is essentially a measure of how far from the correct state the obtained estimate can be expected to lie on average. An advantage of using the RMSE as a measure is that it is bounded from below by the Cramér-Rao lower bound (CRLB). The thesis presents methods for determining the effect different noise distributions have on the CRLB. Noise is the disturbances and errors that always occur when measuring or trying to describe a behavior, and a noise distribution is a statistical description of how the noise behaves. The study of the CRLB leads to an analysis of intrinsic accuracy (IA), the inherent accuracy of the noise. For linear systems the results are straightforward and can be used to determine whether stated goals can be achieved or not. The same method can also indicate whether nonlinear methods such as the particle filter can be expected to give better results than linear methods such as the Kalman filter. Corresponding IA-based methods can also be used to evaluate detection algorithms, which are used to discover faults or changes in a system. Using the RMSE to evaluate filtering algorithms captures one aspect of the filtering result, but there are many other properties that may be of interest. Simulations in the thesis show that even if two filtering methods give the same RMSE performance, the state distributions they produce can differ substantially depending on the noise affecting the studied system. These differences can be significant in some cases. As an alternative to the RMSE, the Kullback divergence, a statistical measure of how much two distributions differ, is therefore used here; it clearly exposes the shortcomings of relying on RMSE analyses alone. Two filtering algorithms have been analyzed in more detail: the Rao-Blackwellized particle filter (RBPF) and the unscented Kalman filter (UKF). The analysis of the RBPF leads to a new way of presenting the algorithm that makes it easier to use in a computer program; the new presentation can also give a better understanding of how the algorithm works.
The study of the UKF focuses on the underlying so-called unscented transform, which is used to describe what happens to a noise distribution when it is transformed, for example by a measurement. The results consist of a number of simulation studies showing the behavior of the different methods, together with a comparison between the UT and the first- and second-order Gauss approximation formulas. The thesis also describes a parallel implementation of a particle filter and an object-oriented framework for filtering in the programming language C++. The particle filter has been implemented on a graphics card, an example of inexpensive hardware present in most modern computers and mostly used for computer games, and therefore rarely used to its full potential. A parallel particle filter, that is, a program that runs several parts of the particle filter simultaneously, opens up new applications where speed and good performance are important. The object-oriented filtering framework achieves the flexibility and performance needed for large-scale Monte Carlo simulations by means of modern software design, and can also make it easier to go from a prototype of a signal processing system to a final product. / Nonlinear filtering is an important standard tool for information and sensor fusion applications, e.g., localization, navigation, and tracking. It is an essential component in surveillance systems and of increasing importance for standard consumer products, such as cellular phones with localization, car navigation systems, and augmented reality. This thesis addresses several issues related to nonlinear filtering, including performance analysis of filtering and detection, algorithm analysis, and various implementation details. The most commonly used measure of filtering performance is the root mean square error (RMSE), which is bounded from below by the Cramér-Rao lower bound (CRLB). This thesis presents a methodology to determine the effect different noise distributions have on the CRLB. This leads up to an analysis of the intrinsic accuracy (IA), the informativeness of a noise distribution. For linear systems the resulting expressions are direct and can be used to determine whether a problem is feasible or not, and to indicate the efficacy of nonlinear methods such as the particle filter (PF). A similar analysis is used for change detection performance analysis, which once again shows the importance of IA. A problem with the RMSE evaluation is that it captures only one aspect of the resulting estimate, and the distribution of the estimates can differ substantially. To address this, the Kullback divergence has been evaluated, demonstrating the shortcomings of pure RMSE evaluation. Two estimation algorithms have been analyzed in more detail: the Rao-Blackwellized particle filter (RBPF), by some authors referred to as the marginalized particle filter (MPF), and the unscented Kalman filter (UKF). The RBPF analysis leads to a new way of presenting the algorithm, thereby making it easier to implement. In addition, the presentation can give new intuition for the RBPF as a stochastic Kalman filter bank. In the analysis of the UKF the focus is on the unscented transform (UT). The results include several simulation studies and a comparison with the Gauss approximation of the first and second order in the limit case.
This thesis presents an implementation of a parallelized PF and outlines an object-oriented framework for filtering. The PF has been implemented on a graphics processing unit (GPU), i.e., a graphics card. The GPU is an inexpensive parallel computational resource available in most modern computers and is rarely used to its full potential. Being able to implement the PF in parallel makes new applications, where speed and good performance are important, possible. The object-oriented filtering framework provides the flexibility and performance needed for large-scale Monte Carlo simulations using modern software design methodology. It can also be used to help turn a prototype efficiently into a finished product.
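As an illustration of the unscented transform analyzed in this thesis, here is a minimal sketch that propagates a Gaussian through a nonlinearity via sigma points. The UT parameters (alpha, beta, kappa) and the polar-to-Cartesian example are conventional defaults, assumed here rather than taken from the thesis:

```python
import numpy as np

def unscented_transform(f, mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinearity f
    using the scaled unscented transform (sketch)."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)       # matrix square root
    # 2n+1 sigma points: the mean, plus/minus the columns of S
    sigmas = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    ys = np.array([f(s) for s in sigmas])         # transformed sigma points
    y_mean = wm @ ys
    dy = ys - y_mean
    y_cov = (wc[:, None] * dy).T @ dy             # weighted sample covariance
    return y_mean, y_cov

# e.g., a polar (range, bearing) measurement mapped to Cartesian coordinates
m, P = unscented_transform(
    lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])]),
    np.array([10.0, 0.5]), np.diag([0.1, 0.01]))
```

This is exactly the kind of transformation of a noise distribution (here, through a measurement function) that the thesis compares against first- and second-order Gauss approximations.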
23

Analise de tecnicas de localização em redes de sensores sem fio / Analysis of localization techniques in wireless sensor networks

Moreira, Rafael Barbosa 26 February 2007 (has links)
Advisor: Paulo Cardieri / Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: This dissertation investigates the localization problem in wireless sensor networks. A performance analysis of localization techniques is presented, carried out through simulation and through evaluation of the Cramér-Rao lower bound on the localization error. Both analyses assess the effects of several factors on performance, related to the network topology and to the propagation environment. The simulation analysis considered localization techniques based on received-signal-strength observations, while the Cramér-Rao analysis also covered techniques based on the time of arrival and on the angle of arrival of the received signal. This work also evaluated the effects of the bias of the distance estimates (used in the localization process) on the Cramér-Rao lower bound. This bias is usually neglected in the literature, which may lead to inaccuracies in the computation of the Cramér-Rao bound under certain propagation conditions. A new expression for this bound was derived for a simple estimator, now accounting for the bias. Building on the development of this new expression, a new expression for the Cramér-Rao lower bound was also derived that accounts for the effects of lognormal fading and Nakagami fading in the propagation channel. / Master's / Telecommunications and Telematics / Master in Electrical Engineering
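For intuition on the kind of bound evaluated above, here is a minimal sketch of the classical Cramér-Rao lower bound on 2-D position error for received-signal-strength localization under lognormal shadowing. The Fisher information form is the standard one from the literature; the path-loss exponent, shadowing deviation, and anchor layout are illustrative assumptions, not values from the dissertation:

```python
import numpy as np

def rss_crlb(target, anchors, n_p=3.0, sigma_db=6.0):
    """sqrt-trace of the CRLB for RSS-based 2-D localization (sketch).

    n_p      : path-loss exponent
    sigma_db : lognormal shadowing standard deviation in dB
    """
    b = (10.0 * n_p / (sigma_db * np.log(10.0))) ** 2
    F = np.zeros((2, 2))
    for a in anchors:
        diff = target - a
        d2 = diff @ diff
        F += b * np.outer(diff, diff) / d2**2    # b * (u u^T) / d^2
    crlb = np.linalg.inv(F)                      # 2x2 error-covariance bound
    return np.sqrt(np.trace(crlb))               # RMS position bound (meters)

anchors = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)
print(rss_crlb(np.array([40.0, 60.0]), anchors))
```

Note that this textbook form assumes unbiased distance estimates; the dissertation's contribution is precisely to rederive the bound when that bias is not neglected.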
24

Statistical Methods for Image Change Detection with Uncertainty

Lingg, Andrew James January 2012 (has links)
No description available.
25

Statistical Analysis of Geolocation Fundamentals Using Stochastic Geometry

O'Lone, Christopher Edward 22 January 2021 (has links)
The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, requires a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS. In the literature, benchmarking localization performance in these networks has traditionally been done in a deterministic manner. That is, for a fixed setup of anchors (nodes with known location) and a target (a node with unknown location), a commonly used benchmark for localization error, such as the Cramer-Rao lower bound (CRLB), can be calculated for a given localization strategy, e.g., time-of-arrival (TOA), angle-of-arrival (AOA), etc. While this CRLB calculation provides excellent insight into expected localization performance, its traditional treatment as a deterministic value for a specific setup is limited. Rather than trying to gain insight into a specific setup, network designers are more often interested in aggregate localization error statistics within the network as a whole. Questions such as: "What percentage of the time is localization error less than x meters in the network?" are commonplace. In order to answer these types of questions, network designers often turn to simulations; however, these come with many drawbacks, such as lengthy execution times and the inability to provide fundamental insights due to their inherent "black box" nature. Thus, this dissertation presents the first analytical solution with which to answer these questions. By leveraging tools from stochastic geometry, anchor positions and potential target positions can be modeled by Poisson point processes (PPPs). This allows for the CRLB of position error to be characterized over all setups of anchor positions and potential target positions realizable within the network. This leads to a distribution of the CRLB, which can completely characterize localization error experienced by a target within the network, and can consequently be used to answer questions regarding network-wide localization performance. The particular CRLB distribution derived in this dissertation is for fourth-generation (4G) and fifth-generation (5G) sub-6GHz networks employing a TOA localization strategy. Recognizing the tremendous potential that stochastic geometry has in gaining new insight into localization, this dissertation continues by further exploring the union of these two fields. First, the concept of localizability, which is the probability that a mobile is able to obtain an unambiguous position estimate, is explored in a 5G, millimeter wave (mm-wave) framework.
In this framework, unambiguous single-anchor localization is possible with either a line-of-sight (LOS) path between the anchor and mobile or, if blocked, then via at least two NLOS paths. Thus, for a single anchor-mobile pair in a 5G, mm-wave network, this dissertation derives the mobile's localizability over all environmental realizations this anchor-mobile pair is likely to experience in the network. This is done by: (1) utilizing the Boolean model from stochastic geometry, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment, (2) considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and (3) considering the possibility that reflectors can either facilitate or block reflections. In addition to the derivation of the mobile's localizability, this analysis also reveals that unambiguous localization, via reflected NLOS signals exclusively, is a relatively small contributor to the mobile's overall localizability. Lastly, using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time delay of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. Due to the random nature of the propagation environment, the NLOS bias is a random variable, and as such, its distribution is sought. As before, assuming NLOS propagation is due to first-order reflections, and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor time-of-flight (TOF) range measurements. This distribution is shown to match exceptionally well with commonly assumed gamma and exponential NLOS bias models in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model. In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over the entire ensemble of infrastructure or environmental realizations that a target is likely to experience in a network. / Doctor of Philosophy / The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. 
In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, requires a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS. When speaking in terms of localization, the network infrastructure consists of what are called anchors, which are simply nodes (points) with a known location. These can be base stations, WiFi access points, or designated sensor nodes, depending on the network. In trying to determine the position of a target (i.e., a user, or a mobile), various measurements can be made between this target and the anchor nodes in close proximity. These measurements are typically distance (range) measurements or angle (bearing) measurements. Localization algorithms then process these measurements to obtain an estimate of the target position. The performance of a given localization algorithm (i.e., estimator) is typically evaluated by examining the distance, in meters, between the position estimates it produces vs. the actual (true) target position. This is called the positioning error of the estimator. There are various benchmarks that bound the best (lowest) error that these algorithms can hope to achieve; however, these benchmarks depend on the particular setup of anchors and the target. The benchmark of localization error considered in this dissertation is the Cramer-Rao lower bound (CRLB). To determine how this benchmark of localization error behaves over the entire network, all of the various setups of anchors and the target that would arise in the network must be considered. Thus, this dissertation uses a field of statistics called stochastic geometry to model all of these random placements of anchors and the target, which represent all the setups that can be experienced in the network. Under this model, the probability distribution of this localization error benchmark across the entirety of the network is then derived. This distribution allows network designers to examine localization performance in the network as a whole, rather than just for a specific setup, and allows one to obtain answers to questions such as: "What percentage of the time is localization error less than x meters in the network?" Next, this dissertation examines a concept called localizability, which is the probability that a target can obtain a unique position estimate. Oftentimes localization algorithms can produce position estimates that congregate around different potential target positions, and thus, it is important to know when algorithms will produce estimates that congregate around a unique (single) potential target position; hence the importance of localizability. In fifth generation (5G), millimeter wave (mm-wave) networks, only one anchor is needed to produce a unique target position estimate if the line-of-sight (LOS) path between the anchor and the target is unimpeded. If the LOS path is impeded, then a unique target position can still be obtained if two or more non-line-of-sight (NLOS) paths are available.
Thus, over all possible environmental realizations likely to be experienced in the network by this single anchor-mobile pair, this dissertation derives the mobile's localizability, or in this case, the probability the LOS path or at least two NLOS paths are available. This is done by utilizing another analytical tool from stochastic geometry known as the Boolean model, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment. Under this model, considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and considering the possibility that reflectors can either facilitate or block reflections, the mobile's localizability is derived. This result reveals the roles that the LOS path and the NLOS paths play in obtaining a unique position estimate of the target. Using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time-of-flight (TOF) of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. As before, assuming NLOS propagation is due to first-order reflections and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) (or first-arriving "reflection path") is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor TOF range measurements. This distribution is shown to match exceptionally well with commonly assumed NLOS bias distributions in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution yields the probability that, for a specific angle, the first-arriving reflection path arrives at the mobile at this angle. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model. In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over all of the possible infrastructure or environmental realizations that a target is likely to experience in a network.
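The dissertation derives the CRLB distribution analytically, but its meaning is easy to see with a Monte Carlo sketch: sample anchor layouts from a Poisson point process, compute the TOA CRLB for each realization, and read off network-wide statistics. The density, disc radius, and ranging noise below are illustrative assumptions:

```python
import numpy as np
rng = np.random.default_rng(0)

def toa_crlb(anchors, sigma=1.0):
    """sqrt-trace of the TOA CRLB for a target at the origin,
    given ranging-noise standard deviation sigma (meters)."""
    u = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    F = (u.T @ u) / sigma**2          # Fisher information: sum of u u^T
    return np.sqrt(np.trace(np.linalg.inv(F)))

lam, R, vals = 1e-6, 2000.0, []       # PPP density (per m^2), disc radius (m)
for _ in range(5000):
    n = rng.poisson(lam * np.pi * R**2)
    if n < 3:                          # need enough anchors for a good fix
        continue
    r = R * np.sqrt(rng.uniform(size=n))        # uniform over the disc
    th = rng.uniform(0, 2 * np.pi, n)
    anchors = np.column_stack([r * np.cos(th), r * np.sin(th)])
    vals.append(toa_crlb(anchors, sigma=5.0))
vals = np.array(vals)
# an empirical answer to "what fraction of setups have error bound < 10 m?"
print("P(bound < 10 m) ~", np.mean(vals < 10.0))
```

The empirical distribution of `vals` is exactly the object the dissertation characterizes in closed form.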
26

Array processing adapted to linear models incorporating structured interference. Application to mechanical signals.

Bouleux, Guillaume 04 December 2007 (has links) (PDF)
This work addresses array processing applied to the linear model. In various domains, such as biomedical engineering or RADAR, the number of Directions Of Arrival (DOA) of interest is a reduced subset of all the directions making up the model. We therefore adopt a structured model written as

Observation = Signal of interest + Structured interference + Noise

where the structured interference is composed of a number of known or previously estimated Directions Of Arrival. From this model we propose two types of approaches: (1) we assume knowledge of M-S DOAs out of a total of M, and (2) we wish to estimate M DOAs sequentially.

The literature provides solutions to the problem of estimating S DOAs of interest out of a total of M. The proposed solutions use an orthogonal deflation of the noisy signal subspace. We derive a new Cramér-Rao Bound (CRB), which we call the Prior-CRB, associated with this type of model, and we show under which (very restrictive) conditions this bound is lower than the classical CRB obtained from the linear model composed of M DOAs. To free ourselves from the constraints of the orthogonal-deflation model, we propose to use an oblique deflation in place of the orthogonal one, and we construct new estimators of the DOAs of interest. Simulations show much better performance than the orthogonal-deflation algorithms, and we explain this performance by deriving the theoretical variance of each proposed estimator. Through the analysis of these variances, we show why the oblique projection is more appropriate and we establish an ordering of the variances associated with the algorithms studied.

The sequential estimation of M DOAs is likewise a problem of great interest. However, the solutions proposed in the literature use orthogonal deflation to sequentially cancel the previously estimated directions via a modified MUSIC criterion. We depart from this by proposing an algorithm that weights the MUSIC pseudo-spectrum with a quadratic zero-forcing function. This approach shows much better performance than the orthogonal-deflation methods and clearly overcomes the Rayleigh resolution limit thanks to the control of the weighting function. We further show that this algorithm is efficient and that the propagation error can be cancelled by tuning a parameter of the weighting function. To best characterize the performance of this algorithm we propose a CRB, which we call the Interfering-CRB, derived from a linear model consisting of one DOA of interest and M-1 interfering DOAs (previously estimated or still to be estimated). We show that this bound "reflects" the ZF-MUSIC algorithm well.
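A minimal sketch of the weighted-pseudo-spectrum idea described above: standard MUSIC is computed on a grid and multiplied by a function that is forced to zero at the previously estimated DOAs. The quadratic notch used here and the half-wavelength ULA steering model are stand-in assumptions, not the thesis' exact weighting function:

```python
import numpy as np

def steering(theta, m):
    """ULA steering vector, half-wavelength spacing (illustrative)."""
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

def weighted_music(R, m, n_src, known, grid, width=0.05):
    """MUSIC pseudo-spectrum times a quadratic zero-forcing weight
    around already-estimated DOAs `known` (radians). Sketch only."""
    w, V = np.linalg.eigh(R)               # ascending eigenvalues
    En = V[:, : m - n_src]                 # noise subspace
    spec = []
    for th in grid:
        a = steering(th, m)
        p = 1.0 / np.abs(a.conj() @ En @ En.conj().T @ a)
        # quadratic notch: ~0 at each known DOA, saturating to 1 far away
        g = np.prod([min(1.0, ((th - t0) / width) ** 2) for t0 in known])
        spec.append(p * g)
    return np.array(spec)
```

Sequential estimation then amounts to picking the peak of `weighted_music`, appending it to `known`, and repeating; the `width` parameter plays the role of the tunable weighting-function parameter discussed above.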
27

Target Localization Methods for Frequency-Only MIMO Radar

Kalkan, Yilmaz 01 September 2012 (has links) (PDF)
This dissertation is focused on developing new target localization and target velocity estimation methods for frequency-only multi-input, multi-output (MIMO) radar systems with widely separated antennas. If the frequency resolutions of the transmitted signals are sufficient, the received frequencies and the Doppler shifts alone can be used to find the position of the target. In order to estimate the position and the velocity of a target, most multistatic radars or radar networks use multiple independent measurements from the target, such as time-of-arrival (TOA), angle-of-arrival (AOA) and frequency-of-arrival (FOA). Although frequency-based systems have many advantages, frequency-based target localization methods are very limited in the literature because highly non-linear equations are involved in the solutions. In this thesis, alternative low-complexity target localization and target velocity estimation methods are proposed for frequency-only systems. One of the proposed methods is able to estimate the target position and the target velocity based on measurements of the Doppler frequencies; moreover, the target movement direction can be estimated efficiently. This method is referred to as "Target Localization via Doppler Frequencies - TLDF" and it can be used not only for radar but for all frequency-based localization systems, such as sonar or wireless sensor networks. Besides the TLDF method, two alternative target position estimation methods are proposed as well. These methods are based on the Doppler frequencies, but they require the target velocity vector to be known. They are referred to as "Target Localization via Doppler Frequencies and Target Velocity - TLD&V" methods and can be divided into two sub-methods. The first is based on the derivatives of the Doppler frequencies and is hence called the "Derivated Doppler - TLD&V-DD" method. The second uses the Maximum Likelihood (ML) principle with a grid search and is hence referred to as the "Sub-ML, TLD&V-subML" method. A more realistic signal model for ground-based, widely separated MIMO radar is formed, including Swerling target fluctuations and the Doppler frequencies. The Cramer-Rao Bounds (CRB) are derived for the target position and target velocity estimates under this signal model. After the received signal is constructed, the Doppler frequencies are estimated using a DFT-based periodogram spectral estimator. The estimated Doppler frequencies are then collected in a fusion center to localize the target. Finally, the multiple-target localization problem is investigated for frequency-only MIMO radar and a new data association method is proposed. Using the TLDF method, the validity of the approach is demonstrated through simulation, not only for targets moving linearly but also for maneuvering targets. The proposed methods can localize the target and estimate its velocity with smaller error than the traditional isodoppler-based method, and they are superior to the traditional method with respect to computational complexity. Simulations in MATLAB demonstrate the advantages of the proposed methods over the traditional method.
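To illustrate the grid-based sub-ML idea with a known velocity vector, here is a sketch that predicts bistatic Doppler shifts for candidate positions and picks the least-squares fit. The geometry, carrier frequency, and cost function are illustrative assumptions, not the thesis' exact formulation:

```python
import numpy as np

def bistatic_doppler(p, v, tx, rx, f0=1e9, c=3e8):
    """Doppler shift for a target at position p with velocity v,
    for one transmitter/receiver pair (widely separated antennas)."""
    ut = (p - tx) / np.linalg.norm(p - tx)   # unit vector from tx to target
    ur = (p - rx) / np.linalg.norm(p - rx)   # unit vector from rx to target
    # rate of change of the bistatic range, scaled to frequency
    return -(f0 / c) * (v @ ut + v @ ur)

def ml_grid_search(f_meas, pairs, v, grid):
    """Coarse least-squares (sub-ML) search for the target position,
    assuming the velocity vector v is known. `pairs` is a list of
    (tx, rx) position arrays; `grid` holds candidate positions."""
    best, best_cost = None, np.inf
    for p in grid:
        f_pred = np.array([bistatic_doppler(p, v, tx, rx) for tx, rx in pairs])
        cost = np.sum((f_meas - f_pred) ** 2)
        if cost < best_cost:
            best, best_cost = p, cost
    return best
```

The highly non-linear dependence of `bistatic_doppler` on `p` is precisely why closed-form frequency-only solutions are scarce and a grid search is attractive.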
28

TOA-Based Robust Wireless Geolocation and Cramér-Rao Lower Bound Analysis in Harsh LOS/NLOS Environments

Yin, Feng, Fritsche, Carsten, Gustafsson, Fredrik, Zoubir, Abdelhak M January 2013 (has links)
We consider time-of-arrival based robust geolocation in harsh line-of-sight/non-line-of-sight environments. Herein, we assume the probability density function (PDF) of the measurement error to be completely unknown and develop an iterative algorithm for robust position estimation. The iterative algorithm alternates between a PDF estimation step, which approximates the exact measurement error PDF (albeit unknown) under the current parameter estimate via adaptive kernel density estimation, and a parameter estimation step, which resolves a position estimate from the approximate log-likelihood function via a quasi-Newton method. Unless the convergence condition is satisfied, the resolved position estimate is then used to refine the PDF estimation in the next iteration. We also present the best achievable geolocation accuracy in terms of the Cramér-Rao lower bound. Experiments have been conducted in both real-world and simulated scenarios. When the number of received range measurements is large, the newly proposed position estimator attains the performance of the maximum likelihood estimator (MLE). When the number of range measurements is small, it deviates from the MLE, but still outperforms several salient robust estimators in terms of geolocation accuracy, which comes at the cost of higher computational complexity.
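A compact sketch of the alternating scheme described above, with scipy's Gaussian KDE standing in for the adaptive kernel density estimator and BFGS for the quasi-Newton step (both are substitutions for illustration, not the authors' exact choices):

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import minimize

def robust_toa_position(anchors, ranges, x0, n_iter=10):
    """Alternate between estimating the ranging-error PDF from residuals
    and maximizing the resulting approximate log-likelihood (sketch)."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        # 1) PDF estimation step: residuals under the current estimate
        res = ranges - np.linalg.norm(anchors - x, axis=1)
        pdf = gaussian_kde(res)              # stand-in for adaptive KDE
        # 2) parameter estimation step: quasi-Newton on the neg. log-likelihood
        def nll(p):
            r = ranges - np.linalg.norm(anchors - p, axis=1)
            return -np.sum(np.log(pdf(r) + 1e-12))
        x = minimize(nll, x, method="BFGS").x
    return x
```

Because the error PDF is re-estimated around each new position estimate, heavy-tailed NLOS errors are automatically down-weighted without assuming any parametric noise model.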
29

Modeling of highly non-stationary signals with locally polynomial phase and amplitude.

Jabloun, Meryem 10 July 2007 (has links) (PDF)
This research work is devoted to the development of a new method for estimating and reconstructing highly non-stationary signals that are non-linearly modulated in both amplitude and frequency. Estimating such signals in a very noisy context is a delicate problem, and the existing methods in the literature have several drawbacks in this case. We have shown how a local approach allows the model to adapt better to the nature of the local variations of the instantaneous amplitudes and frequencies; the estimation results are consequently improved. The originality of the proposed method lies in applying well-suited parametric models to short time segments extracted from the studied signal. We propose a segmentation strategy and then a strategy for fusing the estimated segments, allowing reconstruction of the signal over its entire duration. The proposed approach avoids the need for a global signal model requiring a high approximation order. The effectiveness of the estimation was first validated on a short time segment. The model considered locally consists of a polynomial approximation of the frequency and the amplitude, expressed in a discrete orthonormal polynomial basis that we computed. This basis reduces the coupling between the model parameters. We propose and compare two different techniques for estimating them. The first is based on maximizing the likelihood function using the stochastic optimization technique of simulated annealing, while the second takes a Bayesian approach employing MCMC methods simulated by the Metropolis-Hastings algorithm. We show, on simulations and also on real signals, that the proposed approach provides good estimation results in comparison with those of the HAF.
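To make the local model concrete, the per-segment parameterization described above can be written as follows (a sketch; the symbols and the polynomial orders K_A and K_phi are illustrative, not the thesis' exact notation):

```latex
% Local model on one short segment of N samples:
% a polynomial-amplitude, polynomial-phase cisoid plus noise.
\[
  s[n] = A[n]\, e^{\,j\varphi[n]} + e[n], \qquad n = 0,\dots,N-1,
\]
\[
  A[n] = \sum_{k=0}^{K_A} a_k\, g_k[n], \qquad
  \varphi[n] = \sum_{k=0}^{K_\varphi} b_k\, g_k[n],
\]
% where the g_k form a discrete orthonormal polynomial basis (reducing the
% coupling between the coefficients a_k and b_k) and e[n] is additive noise.
% The (a_k, b_k) are estimated per segment by maximum likelihood (via
% simulated annealing) or by MCMC (Metropolis-Hastings), and the estimated
% segments are then fused to reconstruct the full signal.
```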
30

Motion capture by distance measurements in a heterogeneous body area network

Aloui, Saifeddine 05 February 2013 (has links) (PDF)
Ambulatory motion capture is a rapidly growing topic, with applications as diverse as monitoring the elderly, assisting elite athletes, functional rehabilitation, etc. These applications require that the movement not be constrained by an external system, that it can be performed in various situations, including outdoors, that the equipment be lightweight and low-cost, truly ambulatory, and free of any complex calibration procedure. Currently, only systems using an exoskeleton or inertial modules (often combined with magnetic modules) allow ambulatory motion capture. The weight of an exoskeleton is considerable and it imposes constraints on the person's movements, which makes it unusable for certain applications such as monitoring the elderly. Inertial technology is lighter and allows motion capture without constraints on the measurement space or on the movements performed; however, it suffers from gyroscope drift, and the system must be recalibrated. The objective of this thesis is to develop a low-cost, real-time, truly ambulatory motion capture system for articulated chains that requires no specific capture infrastructure, enabling its use in many application domains (rehabilitation, sport, leisure, etc.). We are particularly interested in intra-body measurements: all sensors are placed on the body and no external device is used. Besides a final demonstrator validating the proposed approach, we also develop tools for dimensioning the system in terms of technology, number and position of sensors, and for evaluating different data fusion algorithms. To do this, we use the Cramer-Rao bound. The subject is therefore multidisciplinary. It deals with the modeling and dimensioning of fully ambulatory hybrid systems. It studies estimation algorithms suited to whole-body motion capture, treating state observability issues and taking into account the biomechanical constraints that can be applied. A suitable processing chain thus reconstructs the subject's posture in real time from intra-body measurements, the source also being placed on the body.
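As a scalar illustration of how the Cramer-Rao bound can dimension such a system, here is a sketch of the achievable precision of an elbow-angle estimate from a single shoulder-to-wrist distance measurement. The segment lengths and ranging-noise level are assumed values, not figures from the thesis:

```python
import numpy as np

def elbow_angle_crb(theta, l1=0.30, l2=0.25, sigma=0.01):
    """CRB on the variance of an elbow-angle estimate (rad^2) from one
    noisy shoulder-wrist distance measurement with Gaussian noise of
    standard deviation sigma (meters). Sketch for system dimensioning."""
    # law of cosines: d(theta) = sqrt(l1^2 + l2^2 - 2*l1*l2*cos(theta))
    d = np.sqrt(l1**2 + l2**2 - 2 * l1 * l2 * np.cos(theta))
    dd = l1 * l2 * np.sin(theta) / d     # sensitivity d'(theta)
    # scalar Gaussian measurement: CRB = sigma^2 / (d'(theta))^2
    return sigma**2 / dd**2

# e.g., angle precision achievable at a 90-degree elbow flexion
print(np.sqrt(elbow_angle_crb(np.pi / 2)))   # std bound, in radians
```

Evaluating such bounds over candidate sensor placements and noise levels is exactly how the thesis proposes to compare technologies and sensor counts before building the demonstrator.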
