1.
Statistical Analysis of Geolocation Fundamentals Using Stochastic Geometry
O'Lone, Christopher Edward, 22 January 2021 (has links)
The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, require a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS.
In the literature, benchmarking localization performance in these networks has traditionally been done in a deterministic manner. That is, for a fixed setup of anchors (nodes with known location) and a target (a node with unknown location), a commonly used benchmark for localization error, such as the Cramer-Rao lower bound (CRLB), can be calculated for a given localization strategy, e.g., time-of-arrival (TOA), angle-of-arrival (AOA), etc. While this CRLB calculation provides excellent insight into expected localization performance, its traditional treatment as a deterministic value for a specific setup is limited.
Rather than trying to gain insight into a specific setup, network designers are more often interested in aggregate localization error statistics within the network as a whole. Questions such as: "What percentage of the time is localization error less than x meters in the network?" are commonplace. In order to answer these types of questions, network designers often turn to simulations; however, these come with many drawbacks, such as lengthy execution times and the inability to provide fundamental insights due to their inherent "black box" nature. Thus, this dissertation presents the first analytical solution with which to answer these questions. By leveraging tools from stochastic geometry, anchor positions and potential target positions can be modeled by Poisson point processes (PPPs). This allows for the CRLB of position error to be characterized over all setups of anchor positions and potential target positions realizable within the network. This leads to a distribution of the CRLB, which can completely characterize localization error experienced by a target within the network, and can consequently be used to answer questions regarding network-wide localization performance. The particular CRLB distribution derived in this dissertation is for fourth-generation (4G) and fifth-generation (5G) sub-6 GHz networks employing a TOA localization strategy.
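As a concrete illustration of the kind of network-wide statistic involved, the distribution of the TOA position-error bound can also be estimated by Monte Carlo simulation over PPP anchor realizations. The sketch below is illustrative only, not the dissertation's analytical derivation: the anchor intensity, ranging-error standard deviation, and nearest-anchor selection rule are all assumed parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def toa_crlb(anchors, target, sigma):
    """Position-error bound (m^2) for TOA ranging: trace of the inverse
    2x2 Fisher information matrix built from anchor bearing unit vectors."""
    diff = anchors - target
    d = np.linalg.norm(diff, axis=1)
    u = diff / d[:, None]                       # unit vectors target -> anchor
    fim = (u.T @ u) / sigma**2
    return np.trace(np.linalg.inv(fim))

def crlb_samples(lam=1e-4, radius=1000.0, n_real=500, n_anchors=4, sigma=10.0):
    """Empirical CRLB distribution: anchors drawn from a PPP of intensity
    `lam` (anchors per m^2) in a disk, target fixed at the origin."""
    out = []
    for _ in range(n_real):
        n = rng.poisson(lam * np.pi * radius**2)
        if n < n_anchors:
            continue                            # too few anchors this round
        r = radius * np.sqrt(rng.random(n))     # uniform points in the disk
        th = 2 * np.pi * rng.random(n)
        pts = np.c_[r * np.cos(th), r * np.sin(th)]
        near = np.argsort(np.linalg.norm(pts, axis=1))[:n_anchors]
        out.append(toa_crlb(pts[near], np.zeros(2), sigma))
    return np.array(out)

samples = crlb_samples()
# Fraction of realizations with root-CRLB below 20 m, i.e. an empirical
# answer to "what percentage of the time is the error bound under 20 m?"
p_under_20m = np.mean(np.sqrt(samples) < 20.0)
```

Evaluating the same empirical distribution over a grid of thresholds yields the full CDF of the bound across the network, which is precisely the object the analytical CRLB distribution characterizes in closed form.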
Recognizing the tremendous potential that stochastic geometry has in gaining new insight into localization, this dissertation continues by further exploring the union of these two fields. First, the concept of localizability, which is the probability that a mobile is able to obtain an unambiguous position estimate, is explored in a 5G, millimeter wave (mm-wave) framework. In this framework, unambiguous single-anchor localization is possible with either a line-of-sight (LOS) path between the anchor and mobile or, if blocked, then via at least two non-line-of-sight (NLOS) paths. Thus, for a single anchor-mobile pair in a 5G, mm-wave network, this dissertation derives the mobile's localizability over all environmental realizations this anchor-mobile pair is likely to experience in the network. This is done by: (1) utilizing the Boolean model from stochastic geometry, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment, (2) considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and (3) considering the possibility that reflectors can either facilitate or block reflections. In addition to the derivation of the mobile's localizability, this analysis reveals that unambiguous localization, via reflected NLOS signals exclusively, is a relatively small contributor to the mobile's overall localizability.
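The structure of this localizability probability can be sketched in a simple closed form. Suppose the LOS path is available with probability p_los and, when it is blocked, the number of usable first-order NLOS paths is modeled as Poisson with mean mu; both quantities are modeling assumptions here, whereas the dissertation derives them from the Boolean model rather than positing them.

```python
import math

def localizability(p_los, mu):
    """P(unambiguous fix) = P(LOS available) + P(LOS blocked) * P(at least
    two NLOS paths), with the NLOS path count taken as Poisson(mu)."""
    p_ge2 = 1.0 - math.exp(-mu) * (1.0 + mu)   # P(Poisson(mu) >= 2)
    return p_los + (1.0 - p_los) * p_ge2
```

Under this toy model, even a moderate p_los dominates the result unless mu is large, which is consistent with the observation above that NLOS-only localization is a relatively small contributor.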
Lastly, using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time delay of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. Due to the random nature of the propagation environment, the NLOS bias is a random variable, and as such, its distribution is sought. As before, assuming NLOS propagation is due to first-order reflections, and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor time-of-flight (TOF) range measurements. This distribution is shown to match exceptionally well with commonly assumed gamma and exponential NLOS bias models in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model.
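A toy Monte Carlo version of this construction, with reflectors idealized as points of a PPP (a deliberate simplification of the Boolean model, which also carries reflector sizes, orientations, and blocking), samples the NLOS bias as the excess length of the shortest single-bounce path. All parameter values below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def nlos_bias_samples(d_los=100.0, lam=5e-4, radius=500.0, n_real=2000):
    """Bias of the first-arriving single-bounce path between a Tx and Rx
    separated by d_los meters, with point reflectors drawn from a PPP of
    intensity `lam` in a disk around the link midpoint."""
    tx = np.array([0.0, 0.0])
    rx = np.array([d_los, 0.0])
    mid = np.array([d_los / 2.0, 0.0])
    out = []
    for _ in range(n_real):
        n = rng.poisson(lam * np.pi * radius**2)
        if n == 0:
            continue                            # no reflectors, no NLOS path
        r = radius * np.sqrt(rng.random(n))
        th = 2 * np.pi * rng.random(n)
        p = np.c_[r * np.cos(th), r * np.sin(th)] + mid
        path = np.linalg.norm(p - tx, axis=1) + np.linalg.norm(p - rx, axis=1)
        out.append(path.min() - d_los)          # excess over the (blocked) LOS
    return np.array(out)

biases = nlos_bias_samples()
# An exponential fit by moment matching would use rate 1 / biases.mean()
mean_bias = biases.mean()
```

Comparing the resulting histogram against moment-matched exponential or gamma densities mirrors, in miniature, the comparison the dissertation makes between its derived distribution and the commonly assumed models.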
In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over the entire ensemble of infrastructure or environmental realizations that a target is likely to experience in a network. / Doctor of Philosophy / The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, require a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS.
When speaking in terms of localization, the network infrastructure consists of what are called anchors, which are simply nodes (points) with a known location. These can be base stations, WiFi access points, or designated sensor nodes, depending on the network. In trying to determine the position of a target (i.e., a user or a mobile), various measurements can be made between this target and the anchor nodes in close proximity. These measurements are typically distance (range) measurements or angle (bearing) measurements. Localization algorithms then process these measurements to obtain an estimate of the target position.
The performance of a given localization algorithm (i.e., estimator) is typically evaluated by examining the distance, in meters, between the position estimates it produces vs. the actual (true) target position. This is called the positioning error of the estimator. There are various benchmarks that bound the best (lowest) error that these algorithms can hope to achieve; however, these benchmarks depend on the particular setup of anchors and the target. The benchmark of localization error considered in this dissertation is the Cramer-Rao lower bound (CRLB). To determine how this benchmark of localization error behaves over the entire network, all of the various setups of anchors and the target that would arise in the network must be considered. Thus, this dissertation uses a field of statistics called stochastic geometry to model all of these random placements of anchors and the target, which represent all the setups that can be experienced in the network. Under this model, the probability distribution of this localization error benchmark across the entirety of the network is then derived. This distribution allows network designers to examine localization performance in the network as a whole, rather than just for a specific setup, and allows one to obtain answers to questions such as: "What percentage of the time is localization error less than x meters in the network?"
Next, this dissertation examines a concept called localizability, which is the probability that a target can obtain a unique position estimate. Oftentimes localization algorithms can produce position estimates that congregate around different potential target positions, and thus, it is important to know when algorithms will produce estimates that congregate around a unique (single) potential target position; hence the importance of localizability. In fifth generation (5G), millimeter wave (mm-wave) networks, only one anchor is needed to produce a unique target position estimate if the line-of-sight (LOS) path between the anchor and the target is unimpeded. If the LOS path is impeded, then a unique target position can still be obtained if two or more non-line-of-sight (NLOS) paths are available. Thus, over all possible environmental realizations likely to be experienced in the network by this single anchor-mobile pair, this dissertation derives the mobile's localizability, or in this case, the probability the LOS path or at least two NLOS paths are available. This is done by utilizing another analytical tool from stochastic geometry known as the Boolean model, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment. Under this model, considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and considering the possibility that reflectors can either facilitate or block reflections, the mobile's localizability is derived. This result reveals the roles that the LOS path and the NLOS paths play in obtaining a unique position estimate of the target.
Using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time-of-flight (TOF) of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. As before, assuming NLOS propagation is due to first-order reflections and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) (or first-arriving "reflection path") is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor TOF range measurements. This distribution is shown to match exceptionally well with commonly assumed NLOS bias distributions in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution yields the probability that, for a specific angle, the first-arriving reflection path arrives at the mobile at this angle. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model.
In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over all of the possible infrastructure or environmental realizations that a target is likely to experience in a network.
2.
Radio resource sharing with edge caching for multi-operator in large cellular networks
Sanguanpuak, T. (Tachporn), 04 January 2019 (has links)
Abstract
The aim of this thesis is to devise new paradigms for radio resource sharing, including cache-enabled virtualized large cellular networks for mobile network operators (MNOs). Self-organizing resource allocation for small cell networks is also considered.
In such networks, the MNOs rent radio resources from the infrastructure provider (InP) to support their subscribers. Reducing operational costs while significantly increasing the usage of existing network resources leads to a paradigm in which the MNOs share their infrastructure, i.e., base stations (BSs), antennas, spectrum, and edge caches, among themselves. In this regard, we integrate theoretical insights provided by stochastic geometric approaches to model the spectrum and infrastructure sharing for large cellular networks.
In the first part of the thesis, we study the non-orthogonal multi-MNO spectrum allocation problem for small cell networks with the goal of maximizing the overall network throughput, defined as the expected weighted sum rate of the MNOs. Each MNO is assumed to serve multiple small cell BSs (SBSs). We adopt the many-to-one stable matching game framework to tackle this problem. We also investigate the role of power allocation schemes for SBSs using Q-learning.
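A generic deferred-acceptance routine of the kind used for such many-to-one stable matchings can be sketched as follows. The pairing of SBSs with subchannels, the preference lists, and the quotas are all placeholders, since the actual game in the thesis is built from the MNOs' weighted sum-rate objective.

```python
def deferred_acceptance(prop_prefs, rec_prefs, quota):
    """Many-to-one deferred acceptance: proposers (e.g., SBSs) propose down
    their preference lists to receivers (e.g., subchannels), each of which
    tentatively holds up to quota[r] proposers, rejecting the least preferred."""
    rank = {r: {p: i for i, p in enumerate(pl)} for r, pl in rec_prefs.items()}
    matched = {r: [] for r in rec_prefs}
    nxt = {p: 0 for p in prop_prefs}           # next index each proposer tries
    free = list(prop_prefs)
    while free:
        p = free.pop()
        if nxt[p] >= len(prop_prefs[p]):
            continue                           # p has exhausted its list
        r = prop_prefs[p][nxt[p]]
        nxt[p] += 1
        matched[r].append(p)
        matched[r].sort(key=lambda q: rank[r][q])
        if len(matched[r]) > quota[r]:
            free.append(matched[r].pop())      # reject the least preferred
    return matched
```

With each subchannel ranking SBSs (for instance, by channel gain) and a quota capping co-channel SBSs, the loop terminates in a stable many-to-one matching; stability is what makes the resulting spectrum allocation robust to unilateral deviations by any MNO-subchannel pair.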
In the second part, we model and analyze the infrastructure sharing system considering a single buyer MNO and multiple seller MNOs. The MNOs are assumed to operate over their own licensed spectrum bands while sharing BSs. We assume that multiple seller MNOs compete with each other to sell their infrastructure to a potential buyer MNO. The optimal strategy for the seller MNOs, in terms of the fraction of infrastructure to be shared and the price of the infrastructure, is obtained by computing the equilibrium of a Cournot-Nash oligopoly game.
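For intuition, the symmetric special case of a Cournot-Nash equilibrium has a closed form. The linear inverse-demand model below is a textbook assumption for illustration; the thesis's game is defined over infrastructure fractions and prices rather than this exact form.

```python
def cournot_symmetric(a, b, c, n):
    """Symmetric n-seller Cournot-Nash equilibrium under linear inverse
    demand P(Q) = a - b*Q and common marginal cost c. Solving each seller's
    best response simultaneously gives q* = (a - c) / (b * (n + 1))."""
    q = (a - c) / (b * (n + 1))
    price = a - b * n * q
    return q, price
```

As the number of competing seller MNOs n grows, the equilibrium price falls toward marginal cost, capturing why competition among sellers benefits the buyer MNO.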
Finally, we develop a game-theoretic framework to model and analyze cache-enabled virtualized cellular networks where the network infrastructure, e.g., BSs and cache storage, owned by an InP, is rented and shared among multiple MNOs. We formulate a Stackelberg game model with the InP as the leader and the MNOs as the followers. The InP tries to maximize its profit by optimizing its infrastructure rental fee. Each MNO aims to minimize its infrastructure cost by minimizing the cache intensity under a probabilistic delay constraint of the user equipment (UE). Since the MNOs share their rented infrastructure, we apply a cooperative game concept, namely, the Shapley value, to divide the cost among the MNOs. / Abstract
The aim of this dissertation is to produce new paradigms for radio resource sharing, including cache-enabled virtualized large cellular networks for mobile network operators. In such networks, the operators rent radio resources from an infrastructure provider (InP) for their subscribers' needs. Cutting operational costs while simultaneously and substantially increasing the usage of existing network resources leads to a paradigm in which the operators share their infrastructure with one another. To this end, the work studies theoretical models based on stochastic geometry for sharing spectrum and infrastructure in large cellular networks.
The first part of the work studies the non-orthogonal multi-operator allocation problem in small cell networks, with the goal of maximizing overall network throughput, defined as the expected weighted sum rate of the operators. Each operator is assumed to serve several small cell base stations (SBSs). A many-to-one stable matching game framework is employed, with Q-learning used for SBS power allocation.
The second part models and analyzes infrastructure sharing in the case of one buyer operator and multiple seller operators. The operators are assumed to transmit on their own licensed bands while sharing base stations. The sellers' optimal strategy, in terms of the fraction of infrastructure to sell and its price, is obtained by computing the equilibrium of a Cournot-Nash oligopoly game.
Finally, the work develops a game-theoretic framework for modeling and analyzing virtualized cache-enabled cellular networks, in which the network infrastructure owned by the InP is rented and shared among multiple operators. A Stackelberg game model is formulated with the InP as the leader and the operators as followers. The InP seeks to maximize its profit by optimizing the infrastructure rental price. An operator seeks to minimize its infrastructure cost by minimizing the cache intensity subject to the delay constraints of a random user. Since the operators share the rented infrastructure, a cooperative game concept, namely the Shapley value, is used to divide the costs among the operators.
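The Shapley-value cost division applied above can be computed directly from its definition as an average of marginal cost contributions over all orders in which the MNOs could join the sharing coalition. The two-MNO cost function below is a made-up example, not one from the thesis.

```python
from itertools import permutations

def shapley(players, cost):
    """Shapley value of a coalitional cost game: each player's average
    marginal contribution to total cost over all join orders."""
    vals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            vals[p] += cost(coalition | {p}) - cost(coalition)
            coalition = coalition | {p}
    return {p: v / len(orders) for p, v in vals.items()}

# Illustrative: two MNOs share a fixed rental fee of 8 for the same
# infrastructure; by symmetry the Shapley split is 4 apiece.
rent = lambda S: 8.0 if S else 0.0
split = shapley(['MNO1', 'MNO2'], rent)
```

The permutation-based form is exponential in the number of players, which is acceptable for the handful of MNOs considered here; its appeal is that the resulting division is efficient, symmetric, and rewards each MNO exactly for its average marginal burden.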