91 |
Iterative Timing Recovery for Magnetic Recording Channels with Low Signal-to-Noise Ratio
Nayak, Aravind Ratnakar, 07 July 2004
Digital communication systems invariably employ an underlying analog communication channel. At the transmitter, data is modulated to obtain an analog waveform which is input to the channel. At the receiver, the output of the channel needs to be mapped back into the discrete domain. To this end, the continuous-time received waveform is sampled at instants chosen by the timing recovery block. Therefore, timing recovery is an essential component of digital communication systems.
A widely used timing recovery method is based on a phase-locked loop (PLL), which updates its timing estimates in a decision-directed manner. Timing recovery performance is a strong function of the reliability of decisions, and hence, of the channel signal-to-noise ratio (SNR). Iteratively decodable error-control codes (ECCs) like turbo codes and LDPC codes allow operation at SNRs lower than ever before, which makes timing recovery harder.
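To make the decision-directed PLL concrete, here is a minimal sketch of a first-order timing loop built around a Mueller-Muller-style timing error detector. It is illustrative only, not code from the thesis: the waveform model, loop gain mu, and binary slicer decisions are assumptions made for the example.

```python
import numpy as np

def mm_timing_error(x_curr, x_prev, d_curr, d_prev):
    """Mueller-Muller timing error detector (decision-directed).
    x_*: received samples at the current timing instants, d_*: symbol decisions."""
    return d_prev * x_curr - d_curr * x_prev

def pll_timing_recovery(received, mu=0.01, T=1.0, n_symbols=1000):
    """First-order PLL: each update nudges the next sampling instant by a
    fraction (mu) of the timing error. 'received' is a function returning the
    (interpolated) waveform value at time t."""
    tau = 0.0                      # current sampling instant
    x_prev, d_prev = 0.0, 1.0      # previous sample and decision
    decisions, instants = [], []
    for _ in range(n_symbols):
        x = received(tau)
        d = 1.0 if x >= 0 else -1.0        # slicer decision (binary signaling assumed)
        e = mm_timing_error(x, x_prev, d, d_prev)
        tau += T + mu * e                  # advance one symbol period, corrected by the error
        decisions.append(d)
        instants.append(tau)
        x_prev, d_prev = x, d
    return decisions, instants

# Example: a placeholder waveform standing in for the equalized channel output
wave = lambda t: np.cos(np.pi * t)
decs, taus = pll_timing_recovery(wave)
```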
We propose iterative timing recovery, where the timing recovery block, the equalizer and the ECC decoder exchange information, giving the timing recovery block access to decisions that are much more reliable than the instantaneous ones. This provides significant SNR gains at a marginal complexity penalty over a conventional turbo equalizer where the equalizer and the ECC decoder exchange information. We also derive the Cramer-Rao bound, which is a lower bound on the estimation error variance of any timing estimator, and propose timing recovery methods that outperform the conventional PLL and achieve the Cramer-Rao bound in some cases.
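For reference, the Cramér-Rao bound invoked above takes the standard form for any unbiased estimator of a timing offset (a textbook statement, not an expression reproduced from the thesis, which derives the bound for its specific channel and signaling model):

```latex
\operatorname{var}(\hat{\tau}) \;\ge\; \frac{1}{I(\tau)},
\qquad
I(\tau) = \mathbb{E}\!\left[\left(\frac{\partial \ln p(\mathbf{r};\tau)}{\partial \tau}\right)^{2}\right],
```

where p(r; tau) is the likelihood of the received samples r given the timing offset tau.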
At low SNR, timing recovery suffers from cycle slips, where the receiver drops or adds one or more symbols; consequently, the ECC decoder almost always fails to decode. Iterative timing recovery is able to correct cycle slips. To reduce the number of iterations required, we propose cycle slip detection and correction methods. With iterative timing recovery, the PLL with cycle slip detection and correction recovers most of the SNR loss of the conventional receiver that separates timing recovery and turbo equalization.
|
92 |
Performance and Implementation Aspects of Nonlinear Filtering
Hendeby, Gustaf, January 2008
In many cases it is important to be able to extract as much information, and information of as high quality as possible, from the available measurements. Extracting information about, for example, the position and velocity of an aircraft is called filtering. In this case the position and velocity are examples of states of the aircraft, which in turn is a system. A typical example of problems of this kind is various surveillance systems, but the same need is becoming increasingly common in ordinary consumer products such as mobile phones (which tell you where the phone is), navigation aids in cars, and the placement of experience-enhancing graphics in films and TV shows. A standard tool used to extract the needed information is nonlinear filtering. The methods are especially common in positioning, navigation, and target tracking applications. This thesis looks in depth at several questions related to nonlinear filtering: * How does one evaluate how well a filter or a detector performs? * What distinguishes different methods, and what does that mean for their properties? * How should the computers used to extract the information be programmed? The measure most often used to describe how well a filter performs is the RMSE (root mean square error), which is essentially a measure of how far from the correct state the obtained estimate can be expected to lie on average. One advantage of using the RMSE as a measure is that it is bounded from below by the Cramér-Rao lower bound (CRLB). The thesis presents methods for determining what effect different noise distributions have on the CRLB. Noise is the disturbances and errors that always occur when measuring or trying to describe a behavior, and a noise distribution is a statistical description of how the noise behaves. The study of the CRLB leads to an analysis of intrinsic accuracy (IA), the inherent accuracy of the noise. For linear systems the results are straightforward and can be used to determine whether the stated goals can be met or not. The same method can also be used to indicate whether nonlinear methods such as the particle filter can be expected to give better results than linear methods such as the Kalman filter. Corresponding IA-based methods can also be used to evaluate detection algorithms. Such algorithms are used to detect faults or changes in a system. Using the RMSE to evaluate filtering algorithms captures one aspect of the filtering result, but there are many other properties that may be of interest. Simulations in the thesis show that even if two different filtering methods give the same performance in terms of RMSE, the state distributions they produce can differ considerably depending on the noise affecting the studied system. These differences can be significant in some cases. As an alternative to the RMSE, the Kullback divergence is therefore used here, and it clearly exposes the shortcomings of relying solely on RMSE analyses. The Kullback divergence is a statistical measure of how much two distributions differ. Two filtering algorithms have been analyzed in more detail: the Rao-Blackwellized particle filter (RBPF) and the method known as the unscented Kalman filter (UKF). The analysis of the RBPF leads to a new way of presenting the algorithm that makes it easier to implement in a computer program. In addition, the new presentation can provide a better understanding of how the algorithm works.
In the study of the UKF, the focus is on the underlying so-called unscented transform, which is used to describe what happens to a noise distribution when it is transformed, for example by a measurement. The result consists of a number of simulation studies that illustrate the behavior of the different methods. Another result is a comparison between the UT and the Gauss approximation formula of the first and second order. This thesis also describes a parallel implementation of a particle filter and an object-oriented framework for filtering in the C++ programming language. The particle filter has been implemented on a graphics card. A graphics card is an example of inexpensive hardware found in most modern computers, mostly used for computer games, and it is therefore rarely used to its full potential. A parallel particle filter, that is, a program that runs several parts of the particle filter simultaneously, opens up new applications where speed and good performance are important. The object-oriented filtering framework achieves the flexibility and performance needed for large-scale Monte Carlo simulations with the help of modern software design. The framework can also make it easier to go from a prototype of a signal processing system to a finished product. / Nonlinear filtering is an important standard tool for information and sensor fusion applications, e.g., localization, navigation, and tracking. It is an essential component in surveillance systems and of increasing importance for standard consumer products, such as cellular phones with localization, car navigation systems, and augmented reality. This thesis addresses several issues related to nonlinear filtering, including performance analysis of filtering and detection, algorithm analysis, and various implementation details. The most commonly used measure of filtering performance is the root mean square error (RMSE), which is bounded from below by the Cramér-Rao lower bound (CRLB). This thesis presents a methodology to determine the effect different noise distributions have on the CRLB. This leads up to an analysis of the intrinsic accuracy (IA), the informativeness of a noise distribution. For linear systems the resulting expressions are direct and can be used to determine whether a problem is feasible or not, and to indicate the efficacy of nonlinear methods such as the particle filter (PF). A similar analysis is used for change detection performance analysis, which once again shows the importance of IA. A problem with the RMSE evaluation is that it captures only one aspect of the resulting estimate and the distribution of the estimates can differ substantially. To solve this problem, the Kullback divergence has been evaluated, demonstrating the shortcomings of pure RMSE evaluation. Two estimation algorithms have been analyzed in more detail: the Rao-Blackwellized particle filter (RBPF), by some authors referred to as the marginalized particle filter (MPF), and the unscented Kalman filter (UKF). The RBPF analysis leads to a new way of presenting the algorithm, thereby making it easier to implement. In addition, the presentation can possibly give new intuition for the RBPF as being a stochastic Kalman filter bank. In the analysis of the UKF the focus is on the unscented transform (UT). The results include several simulation studies and a comparison with the Gauss approximation of the first and second order in the limit case.
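To make the unscented transform concrete, the following minimal sketch (illustrative code, not taken from the thesis) propagates a Gaussian through a nonlinear function via sigma points; the scaling parameters alpha, beta, kappa and the range-measurement example are assumptions chosen for the demonstration.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=1.0):
    """Propagate a Gaussian (mean, cov) through a nonlinearity f using the
    unscented transform: deterministically chosen sigma points are pushed
    through f and recombined into a transformed mean and covariance."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n          # kappa = 3 - n is a classic choice for n = 2
    S = np.linalg.cholesky((n + lam) * cov)   # matrix square root of the scaled covariance
    sigma = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    y = np.array([f(s) for s in sigma])       # transformed sigma points
    y_mean = wm @ y
    diff = y - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# Example: a range measurement y = ||x|| of a 2D Gaussian state
mu = np.array([10.0, 5.0])
P = np.diag([1.0, 4.0])
m, c = unscented_transform(mu, P, lambda x: np.atleast_1d(np.linalg.norm(x)))
```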
This thesis presents an implementation of a parallelized PF and outlines an object-oriented framework for filtering. The PF has been implemented on a graphics processing unit (GPU), i.e., a graphics card. The GPU is an inexpensive parallel computational resource available in most modern computers and is rarely used to its full potential. Being able to implement the PF in parallel makes new applications, where speed and good performance are important, possible. The object-oriented filtering framework provides the flexibility and performance needed for large-scale Monte Carlo simulations using modern software design methodology. It can also be used to help efficiently turn a prototype into a finished product.
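As a point of reference for the particle filter discussed above, here is a generic bootstrap particle filter in its simplest propagate-weight-resample form (a textbook sketch, not the GPU implementation described in the thesis); the one-dimensional random-walk model, noise levels, and multinomial resampling are illustrative choices.

```python
import numpy as np

def bootstrap_pf(measurements, n_particles=500, proc_std=1.0, meas_std=1.0):
    """Bootstrap particle filter for a 1D random-walk state observed in
    Gaussian noise. Each step: propagate particles through the motion model,
    weight them by the measurement likelihood, estimate, then resample."""
    rng = np.random.default_rng(0)
    particles = rng.normal(0.0, 1.0, n_particles)   # initial particle cloud
    estimates = []
    for y in measurements:
        # 1. Propagate: sample from the motion model p(x_k | x_{k-1})
        particles = particles + rng.normal(0.0, proc_std, n_particles)
        # 2. Weight: evaluate the measurement likelihood p(y_k | x_k)
        w = np.exp(-0.5 * ((y - particles) / meas_std) ** 2)
        w /= w.sum()
        # 3. Estimate: weighted mean of the particle cloud
        estimates.append(np.sum(w * particles))
        # 4. Resample: multinomial resampling to counter weight degeneracy
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)

# Example: track a slowly drifting state from noisy observations
truth = np.cumsum(np.random.default_rng(1).normal(0.0, 1.0, 50))
obs = truth + np.random.default_rng(2).normal(0.0, 1.0, 50)
est = bootstrap_pf(obs)
```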
|
93 |
Analise de tecnicas de localização em redes de sensores sem fio / Analysis of localization techniques in wireless sensor networks
Moreira, Rafael Barbosa, 26 February 2007
Advisor: Paulo Cardieri / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação
Resumo: In this dissertation, the localization problem in wireless sensor networks is investigated. A performance analysis of localization techniques is presented, carried out by means of simulation and by means of the evaluation of the Cramér-Rao bound on the localization error. In both forms of analysis, the effects of several factors on performance were evaluated, related to the network topology and the propagation environment. In the simulation analysis, localization techniques based on received-signal-strength observations were considered, whereas in the analysis using the Cramér-Rao bound, techniques based on the time of arrival and the angle of arrival of the received signal were also analyzed. This work also evaluated the effects of the bias of the distance estimates (used in the localization process) on the Cramér-Rao lower bound. This bias is generally neglected in the literature, which can lead to inaccuracies in the computation of the Cramér-Rao bound under certain propagation conditions. A new expression for this bound was derived for a simple estimator case, now taking the bias into account. Building on the development of this new expression, a new expression for the Cramér-Rao lower bound was also derived considering the effects of lognormal fading and Nakagami fading in the propagation channel / Abstract: This dissertation investigates the localization problem in wireless sensor networks. A performance analysis of localization techniques through simulations and the Cramér-Rao lower bound is presented. The effects of several parameters on the localization performance are investigated, including network topology and propagation environment. The simulation analysis considered localization techniques based on received signal strength observations, while the Cramér-Rao analysis also considered techniques based on the time of arrival and angle of arrival of the received signal. This work also investigated how the Cramér-Rao bound is affected by the observation bias in localization techniques based on the received signal strength. This bias is usually neglected in the literature, which may lead to inaccuracies in the computation of the Cramér-Rao bound under certain propagation conditions. A new expression for this bound was derived for a simple estimator case, now considering the bias. Building on this new expression, a new expression for the Cramér-Rao lower bound was also derived, considering the effects of lognormal fading and Nakagami fading on the propagation channel / Master's / Telecommunications and Telematics / Master in Electrical Engineering
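As a small illustration of the kind of bound analyzed above, the sketch below evaluates a standard Cramér-Rao lower bound for RSS-based 2D positioning under lognormal shadowing (a textbook form of the bound, not an expression taken from the dissertation); the path-loss exponent, shadowing standard deviation, and anchor layout are assumed values.

```python
import numpy as np

def rss_crlb(target, anchors, n_p=3.0, sigma_db=4.0):
    """Cramér-Rao lower bound on RSS-based 2D position error under the
    lognormal-shadowing path-loss model. Builds the Fisher information
    matrix J and returns sqrt(trace(J^-1)), a bound on the RMS error."""
    b = (10.0 * n_p / (sigma_db * np.log(10.0))) ** 2
    J = np.zeros((2, 2))
    for a in anchors:
        diff = target - a
        d2 = diff @ diff                        # squared anchor-target distance
        J += b * np.outer(diff, diff) / d2**2   # per-anchor information contribution
    return np.sqrt(np.trace(np.linalg.inv(J)))

# Example: four corner anchors, target near the center of a 100 m square
anchors = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)
print(rss_crlb(np.array([40.0, 55.0]), anchors))
```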
|
94 |
Fluorescence Molecular Tomography: A New Volume Reconstruction Method
Shamp, Stephen Joseph, 06 July 2010
Medical imaging is critical for the detection and diagnosis of disease, guided biopsies, assessment of therapies, and administration of treatment. While computerized tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultra-sound (US) are the more familiar modalities, interest in yet other modalities continues to grow. Among the motivations are reduction of cost, avoidance of ionizing radiation, and the search for new information, including biochemical and molecular processes. Fluorescence Molecular Tomography (FMT) is one such emerging technique and, like other techniques, has its advantages and limitations. FMT can reconstruct the distribution of fluorescent molecules in vivo using near-infrared radiation or visible band light to illuminate the subject. FMT is very safe since non-ionizing radiation is used, and inexpensive due to the comparatively low cost of the imaging system.
This should make it particularly well suited for small-animal research studies. A broad range of cell activity can be identified by FMT, making it a potentially valuable tool for cancer screening, drug discovery and gene therapy.
Since FMT imaging is scattering dominated, reconstruction of volume images is significantly more computationally intensive than for CT. For instance, to reconstruct a 32x32x32 image, a flattened matrix with approximately 10¹⁰, or 10 billion, elements must be dealt with in the inverse problem, while requiring more than 100 GB of memory. To reduce the error introduced by noisy measurements, significantly more measurements are needed, leading to a proportionally larger matrix. The computational complexity of reconstructing FMT images, along with inaccuracies in photon propagation models, has heretofore limited the resolution and accuracy of FMT.
To surmount the problems stated above, we decompose the forward problem into a Khatri-Rao product. Inversion of this model is shown to lead to a novel reconstruction method that significantly reduces the computational complexity and memory requirements for overdetermined datasets. Compared to the well known SVD approach, this new reconstruction method decreases computation time by a factor of up to 25, while simultaneously reducing the memory requirement by up to three orders of magnitude. Using this method, we have reconstructed images up to 32x32x32. Also outlined is a two step approach which would enable imaging larger volumes. However, it remains a topic for future research.
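To illustrate the kind of savings a Khatri-Rao structure can offer (a generic sketch, not the reconstruction algorithm developed in this thesis), the snippet below forms the column-wise Kronecker product explicitly and also applies it to a vector without ever building the large matrix; the matrix sizes are arbitrary test values.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker (Khatri-Rao) product: column j is kron(B[:, j], C[:, j])."""
    p, n = B.shape
    q, _ = C.shape
    return (B[:, None, :] * C[None, :, :]).reshape(p * q, n)

def kr_matvec(B, C, x):
    """Compute (B kr C) @ x without forming the p*q-by-n matrix: the result
    equals (B diag(x) C^T) flattened in row-major order, so only the two
    small factors ever need to be stored."""
    return ((B * x) @ C.T).ravel()

# Example: the factored matvec matches the explicit Khatri-Rao product
rng = np.random.default_rng(0)
B, C, x = rng.normal(size=(50, 20)), rng.normal(size=(40, 20)), rng.normal(size=20)
assert np.allclose(khatri_rao(B, C) @ x, kr_matvec(B, C, x))
```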
In achieving the above, the author studied the physics of FMT, developed an extensive set of original computer programs, performed COMSOL simulations on photon diffusion, and unavoidably, developed visual displays.
|
95 |
Physics-Guided Machine Learning in Ocean Acoustics Using Fisher Information
Mortenson, Michael Craig, 14 April 2022
Waterborne acoustic signals carry information about the ocean environment. Ocean geoacoustic inversion is the task of estimating environmental parameters from received acoustic signals by matching the measured sound with the predictions of a physics-based model. A lower bound on the uncertainty associated with environmental parameter estimates, the Cramér-Rao bound, can be calculated from the Fisher information, which is dependent on derivatives of a physics-based model. Physics-based preconditioners circumvent the need for variable step sizes when computing numerical derivatives. This work explores the feasibility of using a neural network to perform geoacoustic inversion for environmental parameters and their associated uncertainties from ship noise spectrogram data. To train neural networks, a synthetic dataset is generated and tested for generalizability against 31 measurements taken during the SBCEX2017 study of the New England Mud Patch.
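As a minimal illustration of computing the Cramér-Rao bound from the Fisher information of a forward model (a generic sketch under an additive Gaussian noise assumption, not the ocean-acoustics model or preconditioning used in the thesis; the toy exponential model, step size, and noise level are placeholders):

```python
import numpy as np

def fisher_information(forward, theta, noise_std, step=1e-4):
    """Fisher information matrix for a deterministic forward model observed in
    i.i.d. Gaussian noise: J = (1/sigma^2) * D^T D, where D holds the
    central-difference derivatives of the model output w.r.t. each parameter."""
    theta = np.asarray(theta, float)
    D = []
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = step
        D.append((forward(theta + e) - forward(theta - e)) / (2 * step))
    D = np.stack(D, axis=1)                 # shape: (n_outputs, n_params)
    return (D.T @ D) / noise_std**2

def cramer_rao_bound(forward, theta, noise_std):
    """Per-parameter CRB: diagonal of the inverse Fisher information matrix."""
    J = fisher_information(forward, theta, noise_std)
    return np.diag(np.linalg.inv(J))

# Example with a toy two-parameter model (amplitude and decay of an exponential)
t = np.linspace(0.0, 1.0, 100)
model = lambda th: th[0] * np.exp(-th[1] * t)
print(cramer_rao_bound(model, [2.0, 3.0], noise_std=0.1))
```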
|
96 |
Building an Efficient Occupancy Grid Map Based on Lidar Data Fusion for Autonomous driving Applications
Salem, Marwan, January 2019
The Localization and Map building module is a core building block for designing an autonomous vehicle. It describes the vehicle's ability to create an accurate model of its surroundings and maintain its position in the environment at the same time. In this thesis work, we contribute to the autonomous driving research area by providing a proof-of-concept of integrating SLAM solutions into commercial vehicles, improving the robustness of the Localization and Map building module. The proposed system applies Bayesian inference theory within the occupancy grid mapping framework and utilizes a Rao-Blackwellized Particle Filter for estimating the vehicle trajectory. The work has been done at Scania CV, where a heavy-duty vehicle equipped with a multiple-Lidar sensor architecture was used. Low-level sensor fusion of the different Lidars was performed and a parallelized implementation of the algorithm was achieved using a GPU. When tested on datasets frequently used in the community, the implemented algorithm outperformed the scan-matching technique and showed acceptable performance in comparison to another state-of-the-art RBPF implementation that incorporates some improvements to the algorithm. The performance of the complete system was evaluated under a designed set of real scenarios. The proposed system showed a significant improvement in terms of the estimated trajectory and provided accurate occupancy representations of the vehicle surroundings. The fusion module was found to build more informative occupancy grids than the grids obtained from individual sensors. / The module responsible for both localization and map building is one of the core components of an autonomous driving system. It describes the vehicle's ability to create a model of the surroundings and to maintain its position relative to them. In this thesis work we contribute to autonomous driving research with a proof of concept by integrating SLAM solutions into commercial vehicles, which improves the robustness of the localization and map building module. The proposed system uses Bayesian inference applied within a mapping framework in which the map consists of a grid describing the degree of occupancy. To estimate the trajectory the vehicle will travel, the framework uses an RBPF (Rao-Blackwellized particle filter). The thesis work was carried out at Scania CV, where a heavy-duty vehicle equipped with several lidar sensors was used. Low-level sensor fusion was applied to the different lidar sensors and a parallelized implementation of the algorithm was realized on a GPU. When the algorithm was run on datasets frequently used in the community, the implemented algorithm gave considerably better results than the scan-matching technique and showed acceptable results in comparison with another high-performing RBPF implementation, which adds some improvements to the algorithm. The performance of the complete system was evaluated with a number of purpose-designed realistic scenarios. The proposed system shows a clear improvement in the estimated trajectory and also provides an accurate representation of the surroundings. The sensor fusion provides a better and more informative representation than when relying only on the individual lidar sensors.
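To make the Bayesian occupancy grid bookkeeping concrete, here is a minimal log-odds update sketch (illustrative only, not Scania's fusion module); the inverse sensor model probabilities p_hit and p_miss and the toy scan are assumptions.

```python
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

class OccupancyGrid:
    """Occupancy grid map with the standard Bayesian log-odds update:
    each cell stores log p(occupied)/p(free), and independent sensor
    readings simply add (or subtract) evidence."""
    def __init__(self, shape, p_hit=0.7, p_miss=0.4):
        self.L = np.zeros(shape)        # log-odds, 0 == unknown (p = 0.5)
        self.l_hit = logodds(p_hit)     # evidence added where a beam ends (return registered)
        self.l_miss = logodds(p_miss)   # negative evidence for cells the beam passed through

    def update(self, hit_cells, free_cells):
        for r, c in free_cells:         # cells traversed by the lidar beam
            self.L[r, c] += self.l_miss
        for r, c in hit_cells:          # cells where a return was registered
            self.L[r, c] += self.l_hit

    def probabilities(self):
        return 1.0 / (1.0 + np.exp(-self.L))

# Example: fuse two (hypothetical) scans of the same beam into one grid
grid = OccupancyGrid((100, 100))
grid.update(hit_cells=[(50, 60)], free_cells=[(50, c) for c in range(60)])
grid.update(hit_cells=[(50, 60)], free_cells=[(50, c) for c in range(60)])
```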
|
97 |
Klinische Studie zur anamnestischen, klinischen, endoskopischen, laboranalytischen und bakteriologischen Untersuchung des Respirationstraktes von Pferden mit chronischer Atemwegssymptomatik (Clinical study on the anamnestic, clinical, endoscopic, laboratory, and bacteriological examination of the respiratory tract of horses with chronic respiratory signs)
Kasch, Stefanie, 16 November 2023
Horses with respiratory signs, especially horses with milder symptoms such as occasional coughing, mild nasal discharge, and poor performance, were frequently pretreated with an antibiotic without adequate diagnostics. Overall, horses suffering from IAD were pretreated with an antibiotic more often than horses diagnosed with RAO. However, the results of the microbiological examinations show that bacterial involvement very likely plays a subordinate role in the pathogenesis of equine asthma, and that antibiotic treatment is therefore not indicated in these cases. The results of the clinical, endoscopic, and cytological examinations of this study support the findings of previous studies.
Table of contents:
1. Introduction
2. Literature review
2.1. Equine asthma
2.1.1. Development of the terminology
2.1.2. Recurrent Airway Obstruction
2.1.3. Inflammatory Airway Disease
2.1.4. Summer pasture associated pulmonary disease (SPAOPD)
2.2. Diagnosis of equine asthma
2.2.1. History
2.2.2. Clinical examination
2.2.3. Blood count and blood chemistry
2.2.4. Arterial blood gas analysis
2.2.5. Endoscopy and sample collection
2.2.6. Evaluation of the BAL
2.2.6.1. Leukocytes
2.2.6.1.1. Neutrophil granulocytes
2.2.6.1.2. Eosinophil granulocytes
2.2.6.1.3. Mast cells
2.2.6.1.4. Lymphocytes
2.2.6.2. Macrophages
2.2.6.3. Epithelial cells
2.2.6.4. Curschmann's spirals
2.2.6.5. Bacteria
2.2.6.6. Fungi
2.2.7. Microbiological examinations
2.3. Treatment of equine asthma
2.3.1. Optimization of housing conditions
2.3.2. Medications
2.3.2.1. Bronchospasmolysis
2.3.2.2. Secretolysis
2.3.2.3. Glucocorticoids
3. Materials and methods
3.1. Patients
3.2. History
3.3. Clinical examination
3.4. Laboratory examinations of venous blood
3.5. Arterial blood gas analysis
3.6. Endoscopy (including collection of BAL, TBS, and upper airway wash samples)
3.7. Cytological examination of the bronchoalveolar lavage fluid
3.8. Microbiological examinations
3.9. Group classification
3.10. Statistical analyses
4. Results
4.1. Study population
4.2. Clinical examination
4.3. Blood laboratory diagnostics
4.4. Endoscopy
4.5. BAL cytology
4.6. Microbiology
4.6.1. BAL
4.6.2. TBS
4.6.3. Upper airway wash sample
4.6.4. Relationship between antibiotic pretreatment and the detection of bacteria
5. Discussion
5.1. Limitations of the study
5.2. Study population
5.3. Clinical examination
5.4. Blood laboratory diagnostics
5.5. Endoscopy
5.6. BAL cytology
5.7. Microbiological examination
5.7.1. BAL
5.7.2. Tracheobronchial secretion
5.7.3. Upper airway wash sample
5.7.4. Relationship between antibiotic pretreatment and the detection of bacteria
5.8. Conclusion
6. Zusammenfassung (summary in German)
7. Summary
8. References
9. List of tables
10. List of figures
Acknowledgements
|
98 |
Comprehensive Flow Cytometric Characterization of Bronchoalveolar Lavage Cells Indicates Comparable Phenotypes Between Asthmatic and Healthy Horses But Functional Lymphocyte Differences
Gressler, A. Elisabeth; Lübke, Sabrina; Wagner, Bettina; Arnold, Corinna; Lohmann, Katharina L.; Schnabel, Christiane L., 20 October 2023
Equine asthma (EA) is a highly relevant disease, estimated to affect up to 20% of all horses, and compares to human asthma. The pathogenesis of EA is most likely immune-mediated, yet incompletely understood. To study the immune response in the affected lower airways, mixed leukocytes were acquired through bronchoalveolar lavage (BAL) and the cell populations were analyzed on a single-cell basis by flow cytometry (FC). Samples of 38 horses grouped as respiratory healthy or affected by mild to moderate (mEA) or severe EA (sEA) according to their history, clinical signs, and BAL cytology were analyzed. Using FC, BAL cells and PBMC were comprehensively characterized by cell surface markers ex vivo. An increased percentage of DH24A+ polymorphonuclear cells, and decreased percentages of CD14+ macrophages were detected in BAL from horses with sEA compared to healthy horses or horses with mEA, while lymphocyte proportions were similar between all groups. Independently of EA, macrophages in BAL were CD14+CD16+, which contrasts the majority of CD14+CD16- classical monocytes in PBMC. Percentages of CD16-expressing BAL macrophages were reduced in BAL from horses with sEA compared to healthy horses. While PBMC lymphocytes predominantly contain CD4+ T cells, B cells and few CD8+ T cells, BAL lymphocytes comprised mainly CD8+ T cells, fewer CD4+ T cells and hardly any B cells. These lymphocyte subsets' distributions were similar between all groups. After PMA/ionomycin stimulation in vitro, lymphocyte activation (CD154 and T helper cell cytokine expression) was analyzed in BAL cells of 26 of the horses, and group differences were observed (p=0.01-0.11). Compared to healthy horses' BAL, CD154+ lymphocytes from horses with mEA, and CD4+IL-17A+ lymphocytes from horses with sEA were increased in frequency. Activated CD4+ T helper cells were more frequent in asthmatics' (mEA, sEA) compared to healthy horses' PBMC lymphocytes. In summary, FC analysis of BAL cells identified increased polymorphonuclear cell frequencies in sEA as established, while macrophage percentages were mildly reduced, and lymphocyte populations remained unaffected by EA. Cytokine production differences of BAL lymphocytes from horses with sEA compared to healthy horses' cells point towards a functional difference, namely increased local type 3 responses in sEA.
|
99 |
Statistical Methods for Image Change Detection with Uncertainty
Lingg, Andrew James, January 2012
No description available.
|
100 |
Statistical Analysis of Geolocation Fundamentals Using Stochastic Geometry
O'Lone, Christopher Edward, 22 January 2021
The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, requires a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS.
In the literature, benchmarking localization performance in these networks has traditionally been done in a deterministic manner. That is, for a fixed setup of anchors (nodes with known location) and a target (a node with unknown location), a commonly used benchmark for localization error, such as the Cramer-Rao lower bound (CRLB), can be calculated for a given localization strategy, e.g., time-of-arrival (TOA), angle-of-arrival (AOA), etc. While this CRLB calculation provides excellent insight into expected localization performance, its traditional treatment as a deterministic value for a specific setup is limited.
Rather than trying to gain insight into a specific setup, network designers are more often interested in aggregate localization error statistics within the network as a whole. Questions such as: "What percentage of the time is localization error less than x meters in the network?" are commonplace. In order to answer these types of questions, network designers often turn to simulations; however, these come with many drawbacks, such as lengthy execution times and the inability to provide fundamental insights due to their inherent "black box" nature. Thus, this dissertation presents the first analytical solution with which to answer these questions. By leveraging tools from stochastic geometry, anchor positions and potential target positions can be modeled by Poisson point processes (PPPs). This allows for the CRLB of position error to be characterized over all setups of anchor positions and potential target positions realizable within the network. This leads to a distribution of the CRLB, which can completely characterize localization error experienced by a target within the network, and can consequently be used to answer questions regarding network-wide localization performance. The particular CRLB distribution derived in this dissertation is for fourth-generation (4G) and fifth-generation (5G) sub-6GHz networks employing a TOA localization strategy.
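The closed-form distribution is derived in the dissertation itself; purely as an illustration of the idea, the sketch below approximates such a distribution by brute force: anchor layouts are drawn from a PPP, the TOA position-error bound is computed for each realization, and the empirical distribution is inspected. The PPP intensity, ranging noise level, and window size are assumed values.

```python
import numpy as np

def toa_crlb(target, anchors, range_std=1.0):
    """Position-error bound for TOA ranging: J = (1/sigma^2) * sum u_i u_i^T,
    with u_i the unit vector from target to anchor i; returns sqrt(tr(J^-1))."""
    J = np.zeros((2, 2))
    for a in anchors:
        u = (a - target) / np.linalg.norm(a - target)
        J += np.outer(u, u) / range_std**2
    return np.sqrt(np.trace(np.linalg.inv(J)))

def crlb_distribution(intensity=1e-5, side=1000.0, trials=2000, rng=None):
    """Empirical CRLB distribution over PPP anchor layouts: the anchor count
    is Poisson(intensity * area), positions are uniform in the window, and
    the target sits at the window center."""
    rng = rng or np.random.default_rng(0)
    target, samples = np.array([side / 2, side / 2]), []
    for _ in range(trials):
        n = rng.poisson(intensity * side * side)
        if n < 3:                  # need enough anchors for a well-posed 2D fix
            continue
        anchors = rng.uniform(0.0, side, size=(n, 2))
        samples.append(toa_crlb(target, anchors))
    return np.array(samples)

# "What fraction of anchor layouts bound the error below 1 m?"
s = crlb_distribution()
print(np.mean(s < 1.0))
```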
Recognizing the tremendous potential that stochastic geometry has in gaining new insight into localization, this dissertation continues by further exploring the union of these two fields. First, the concept of localizability, which is the probability that a mobile is able to obtain an unambiguous position estimate, is explored in a 5G, millimeter wave (mm-wave) framework. In this framework, unambiguous single-anchor localization is possible with either a line-of-sight (LOS) path between the anchor and mobile or, if blocked, then via at least two NLOS paths. Thus, for a single anchor-mobile pair in a 5G, mm-wave network, this dissertation derives the mobile's localizability over all environmental realizations this anchor-mobile pair is likely to experience in the network. This is done by: (1) utilizing the Boolean model from stochastic geometry, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment, (2) considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and (3) considering the possibility that reflectors can either facilitate or block reflections. In addition to the derivation of the mobile's localizability, this analysis also reveals that unambiguous localization, via reflected NLOS signals exclusively, is a relatively small contributor to the mobile's overall localizability.
Lastly, using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time delay of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. Due to the random nature of the propagation environment, the NLOS bias is a random variable, and as such, its distribution is sought. As before, assuming NLOS propagation is due to first-order reflections, and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor time-of-flight (TOF) range measurements. This distribution is shown to match exceptionally well with commonly assumed gamma and exponential NLOS bias models in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model.
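To illustrate one ingredient of the Boolean model used above, the sketch below estimates, by Monte Carlo, the probability that the LOS path of a single link is blocked by randomly placed segment-shaped reflectors. This is an illustrative simplification only; it does not reproduce the dissertation's localizability, NLOS-bias, or AOA derivations, and the reflector density, length, and window size are assumptions.

```python
import numpy as np

def segments_intersect(p1, p2, q1, q2):
    """Proper segment intersection via orientation tests (the measure-zero
    collinear cases are ignored, which is fine for random geometry)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def los_blockage_probability(link_length=100.0, density=5e-4, mean_len=15.0,
                             side=400.0, trials=2000, rng=None):
    """Monte Carlo estimate of P(LOS blocked) under a Boolean-type model:
    reflector centers form a PPP, each reflector is a segment with uniform
    orientation and exponentially distributed length."""
    rng = rng or np.random.default_rng(0)
    tx, rx = np.array([-link_length / 2, 0.0]), np.array([link_length / 2, 0.0])
    blocked = 0
    for _ in range(trials):
        n = rng.poisson(density * side * side)
        centers = rng.uniform(-side / 2, side / 2, size=(n, 2))
        angles = rng.uniform(0.0, np.pi, size=n)
        lengths = rng.exponential(mean_len, size=n)
        halves = 0.5 * lengths[:, None] * np.c_[np.cos(angles), np.sin(angles)]
        if any(segments_intersect(tx, rx, c - h, c + h)
               for c, h in zip(centers, halves)):
            blocked += 1
    return blocked / trials

print(los_blockage_probability())
```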
In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over the entire ensemble of infrastructure or environmental realizations that a target is likely to experience in a network. / Doctor of Philosophy / The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, requires a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS.
When speaking in terms of localization, the network infrastructure consists of what are called anchors, which are simply nodes (points) with a known location. These can be base stations, WiFi access points, or designated sensor nodes, depending on the network. In trying to determine the position of a target (i.e., a user, or a mobile), various measurements can be made between this target and the anchor nodes in close proximity. These measurements are typically distance (range) measurements or angle (bearing) measurements. Localization algorithms then process these measurements to obtain an estimate of the target position.
The performance of a given localization algorithm (i.e., estimator) is typically evaluated by examining the distance, in meters, between the position estimates it produces vs. the actual (true) target position. This is called the positioning error of the estimator. There are various benchmarks that bound the best (lowest) error that these algorithms can hope to achieve; however, these benchmarks depend on the particular setup of anchors and the target. The benchmark of localization error considered in this dissertation is the Cramer-Rao lower bound (CRLB). To determine how this benchmark of localization error behaves over the entire network, all of the various setups of anchors and the target that would arise in the network must be considered. Thus, this dissertation uses a field of statistics called stochastic geometry to model all of these random placements of anchors and the target, which represent all the setups that can be experienced in the network. Under this model, the probability distribution of this localization error benchmark across the entirety of the network is then derived. This distribution allows network designers to examine localization performance in the network as a whole, rather than just for a specific setup, and allows one to obtain answers to questions such as: "What percentage of the time is localization error less than x meters in the network?"
Next, this dissertation examines a concept called localizability, which is the probability that a target can obtain a unique position estimate. Oftentimes localization algorithms can produce position estimates that congregate around different potential target positions, and thus, it is important to know when algorithms will produce estimates that congregate around a unique (single) potential target position; hence the importance of localizability. In fifth generation (5G), millimeter wave (mm-wave) networks, only one anchor is needed to produce a unique target position estimate if the line-of-sight (LOS) path between the anchor and the target is unimpeded. If the LOS path is impeded, then a unique target position can still be obtained if two or more non-line-of-sight (NLOS) paths are available. Thus, over all possible environmental realizations likely to be experienced in the network by this single anchor-mobile pair, this dissertation derives the mobile's localizability, or in this case, the probability the LOS path or at least two NLOS paths are available. This is done by utilizing another analytical tool from stochastic geometry known as the Boolean model, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment. Under this model, considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and considering the possibility that reflectors can either facilitate or block reflections, the mobile's localizability is derived. This result reveals the roles that the LOS path and the NLOS paths play in obtaining a unique position estimate of the target.
Using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time-of-flight (TOF) of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. As before, assuming NLOS propagation is due to first-order reflections and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) (or first-arriving "reflection path") is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor TOF range measurements. This distribution is shown to match exceptionally well with commonly assumed NLOS bias distributions in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution yields the probability that, for a specific angle, the first-arriving reflection path arrives at the mobile at this angle. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model.
In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over all of the possible infrastructure or environmental realizations that a target is likely to experience in a network.
|