11

Resource Allocation and Pricing in Virtual Wireless Networks

Chen, Xin 01 January 2014 (has links) (PDF)
The Internet architecture has proven its success by completely changing people's lives. However, making significant architectural improvements has become extremely difficult, since doing so requires competing Internet Service Providers to jointly agree. Recently, network virtualization has attracted the attention of many researchers as a solution to this ossification problem. A network virtualization environment allows multiple network architectures to coexist on a shared physical resource. However, most previous research has focused on network virtualization in a wired network environment. Wireless networks have become one of the main access technologies, and due to the probabilistic nature of the wireless environment, virtualization becomes more challenging there. This thesis considers virtualization in wireless networks with a focus on the challenges due to randomness. First, I apply mathematical tools from stochastic geometry to the random system model, with transport capacity as the network performance metric. Then I design an algorithm that allows multiple virtual networks, working in a distributed fashion, to find a solution that maximizes the aggregate satisfaction of the whole network. Finally, I propose a new method of charging new users fairly when they ask to enter the system. I measure the cost to the system when a new user with a virtual network request wants to share the resource and demonstrate a simple method for estimating this “price”.
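To make the transport-capacity metric above concrete, the following Python sketch runs a minimal Monte Carlo over a Poisson bipolar layout: transmitters drawn from a PPP, each with a receiver at a fixed distance, and transport capacity taken as the sum of rate-distance products per unit area. The bipolar construction and every parameter value are illustrative assumptions, not details taken from the thesis.

```python
# Illustrative sketch only: Monte Carlo estimate of transport capacity
# (sum of link rate x link distance, per unit area) in a Poisson bipolar
# network. Density, path-loss exponent, link distance, power, and noise
# are assumed values, not parameters from the thesis.
import numpy as np

rng = np.random.default_rng(1)
lam, side = 0.5, 20.0                      # TX density (per m^2), window side (m)
r, alpha, P, noise = 1.0, 4.0, 1.0, 1e-6   # link distance, path loss, TX power, noise

def transport_capacity_once():
    n = rng.poisson(lam * side**2)                  # PPP: Poisson count, uniform points
    tx = rng.uniform(0, side, size=(n, 2))
    ang = rng.uniform(0, 2 * np.pi, n)
    rx = tx + r * np.column_stack([np.cos(ang), np.sin(ang)])  # RX at distance r
    cap = 0.0
    for i in range(n):
        d = np.linalg.norm(tx - rx[i], axis=1)      # distances from every TX to RX i
        g = P * d**(-alpha)                         # received powers (no fading)
        rate = np.log2(1 + g[i] / (noise + g.sum() - g[i]))
        cap += rate * r                             # rate-distance product
    return cap / side**2                            # normalize by window area

est = np.mean([transport_capacity_once() for _ in range(50)])
print(f"estimated transport capacity: {est:.3f} bit-m/s/Hz per m^2")
```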
12

Advances in Stochastic Geometry for Cellular Networks

Saha, Chiranjib 24 August 2020 (has links)
The mathematical modeling and performance analysis of cellular networks have seen a major paradigm shift with the application of stochastic geometry. The main purpose of stochastic geometry is to endow probability distributions on the locations of the base stations (BSs) and users in a network, which, in turn, provides an analytical handle on the performance evaluation of cellular networks. To preserve the tractability of analysis, the common practice is to assume complete spatial randomness of the network topology. In other words, the locations of users and BSs are modeled as independent homogeneous Poisson point processes (PPPs). Despite their usefulness, PPP-based network models fail to capture any spatial coupling between the users and BSs, which is dominant in a multi-tier cellular network (also known as a heterogeneous cellular network, or HetNet) consisting of macro and small cells. For instance, the users tend to form hotspots or clusters at certain locations, and the small cell BSs (SBSs) are deployed at higher densities at these hotspot locations in order to cater to the high data demand. Such user-centric deployments naturally couple the locations of the users and SBSs. On the other hand, these spatial couplings are at the heart of the spatial models used in industry for system-level simulations and standardization purposes. This dissertation proposes fundamentally new spatial models based on stochastic geometry which closely emulate these spatial couplings and are conducive to a more realistic and fine-tuned performance analysis, optimization, and design of cellular networks. First, this dissertation proposes a new class of spatial models for HetNets where the locations of the BSs and users are assumed to be distributed as Poisson cluster processes (PCPs). From the modeling perspective, the proposed models can capture different spatial couplings in a network topology, such as user hotspots and the user-BS coupling occurring due to the user-centric deployment of the SBSs. The PCP-based model is a generalization of the state-of-the-art PPP-based HetNet model: it reduces to the PPP-based model once all spatial couplings in the network are ignored. From the stochastic geometry perspective, we have made contributions in deriving the fundamental distributional properties of the PCP, such as the distance distributions and sum-product functionals, which are instrumental for the performance characterization of HetNets in terms of coverage and rate. The focus on more refined spatial models for small cells and users brings us to the second direction of the dissertation, which is the modeling and analysis of HetNets with millimeter wave (mm-wave) integrated access and backhaul (IAB), an emerging design concept of the fifth generation (5G) cellular networks. While the concept of network densification with small cells emerged in the fourth generation (4G) era, small cells can be realistically deployed with IAB, since it solves the problem of high-capacity wired backhaul of SBSs by replacing the last-mile fibers with mm-wave links. We have proposed new stochastic geometry-based models for the performance analysis of IAB-enabled HetNets. Our analysis reveals some interesting system-design insights: (1) the IAB HetNets can support a maximum number of users beyond which the data rate drops below the rate of a single-tier macro-only network, and (2) there exists a saturation point of SBS density beyond which no rate gain is observed from the addition of more SBSs.
The third and final direction of this dissertation is the combination of machine learning and stochastic geometry to construct a new class of data-driven network models which can be used in the performance optimization and design of a network. As a concrete example, we investigate the classical problem of wireless link scheduling, where the objective is to choose an optimal subset of simultaneously active transmitters (Tx-s) from a ground set of Tx-s that maximizes the network-wide sum-rate. Since the optimization problem is NP-hard, we replace the computationally expensive heuristic solution by training a determinantal point process (DPP) to infer the point patterns of the active Tx-s in the optimal subset. Our investigations demonstrate that the DPP is able to learn the spatial interactions of the Tx-s in the optimal subset and gives a reasonably accurate estimate of the optimal subset for any new ground set of Tx-s. / Doctor of Philosophy / The high-speed global cellular communication network is one of the most important technologies, and it continues to evolve rapidly with every new generation. This evolution greatly depends on observing performance trends of emerging technologies on network models through extensive system-level simulations. Since these simulation models are extremely time-consuming and error-prone, complementary analytical models of cellular networks have been an area of active research for a long time. These analytical models are intended to provide crisp insights into network behavior, such as the dependence of network performance metrics (such as coverage or rate) on key system-level parameters (such as transmission powers and base station (BS) density), which serve as prior knowledge for more fine-tuned simulations. Over the last decade, the analytical modeling of cellular networks has been driven by stochastic geometry. The main purpose of stochastic geometry is to endow the locations of the base stations (BSs) and users with probability distributions and then leverage the properties of these distributions to average out the spatial randomness. This process of spatial averaging allows us to derive analytical expressions for the system-level performance metrics despite the presence of a large number of random variables (such as BS and user locations, and channel gains) under some reasonable assumptions. The simplest stochastic geometry-based model of cellular networks, which is also the most tractable, is the so-called Poisson point process (PPP) based network model. In this model, users and BSs are assumed to be distributed as independent homogeneous PPPs. This is equivalent to saying that the users and BSs are placed independently and uniformly at random over a plane. The PPP-based model turned out to be a reasonably accurate representation of yesteryear's cellular networks, which consisted of a single tier of macro BSs (MBSs) intended to provide a uniform coverage blanket over the region. However, as data-hungry devices like smartphones and tablets, and applications like online gaming, continue to flood the consumer market, the network configuration is rapidly deviating from this baseline setup, with different spatial interactions between BSs and users (also termed spatial coupling) becoming dominant. For instance, the user locations are far from homogeneous, as they are concentrated in specific areas like residential and commercial zones (also known as hotspots).
Further, the network, previously consisting of a single tier of macro BSs (MBSs), is becoming increasingly heterogeneous with the deployment of small cell BSs (SBSs) with small coverage footprints, targeted to serve the user hotspots. It is not difficult to see that a network topology with these spatial couplings is quite far from complete spatial randomness, which is the basis of the PPP-based models. The key contribution of this dissertation is to enrich the stochastic geometry-based mathematical models so that they can capture the fine-grained spatial couplings between the BSs and users. More specifically, this dissertation contributes in the following three research directions. Direction-I: Modeling Spatial Clustering. We model the locations of users and SBSs forming hotspots as Poisson cluster processes (PCPs). A PCP is a collection of offspring points located around parent points that belong to a PPP. The coupling between the locations of users and SBSs (due to the user-centric deployment of the latter) can be introduced by assuming that the user and SBS PCPs share the same parent PPP. The key contribution in this direction is the construction of a general HetNet model with a mixture of PPP- and PCP-distributed BSs and users. Note that the baseline PPP-based HetNet model appears as one of the many configurations supported by this general model. For this general model, we derive analytical expressions for performance metrics like coverage probability, BS load, and rate as functions of the coupling parameters (e.g., BS and user cluster size). Direction-II: Modeling Coupling in Wireless Backhaul Networks. While the deployment of SBSs clearly enhances network performance in terms of coverage, one might wonder: how long can network densification with tens of thousands of SBSs keep meeting the ever-increasing data demand? It turns out that in the current network setting, where the backhaul links (i.e., the links between the BSs and the core network) are still wired, it is not feasible to densify the network beyond some limit. This backhaul bottleneck can be overcome if the backhaul links also become wireless and the backhaul and access links (the links between users and BSs) are jointly managed by an integrated access and backhaul (IAB) network. In this direction, we develop analytical models of IAB-enabled HetNets, where the key challenge is to tackle new types of couplings that exist between the rates on the wireless access and backhaul links. Such couplings exist due to the spatial correlation of the signal qualities of the two links and the number of users served by different BSs. Two fundamental insights obtained from this work are as follows: (1) the IAB HetNets can support a maximum number of users beyond which the network performance drops below that of a single-tier macro-only network, and (2) there exists a saturation point of SBS density beyond which no performance gain is observed from the addition of more SBSs. Direction-III: Modeling Repulsion. In this direction, we focus on modeling another aspect of spatial coupling imposed by intra-point repulsion. Consider a device-to-device (D2D) communication scenario in which some users are transmitting on-demand content locally cached in their devices over a common channel. Any reasonable multiple access scheme will ensure that two nearby users are never simultaneously active, as they would cause severe mutual interference and thereby reduce the network-wide sum rate.
Thus the active users in the network will exhibit some spatial repulsion. The locations of these users can be modeled as determinantal point processes (DPPs). The key property of DPPs is that they form a bridge between stochastic geometry and machine learning, two otherwise non-overlapping paradigms for wireless network modeling and design. The main focus in this direction is to explore the learning framework of DPPs and bring together the advantages of stochastic geometry and machine learning to construct a new class of data-driven analytical network models.
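The user-SBS coupling described above is easy to visualize with a small simulation. The sketch below, a rough illustration rather than the dissertation's model, samples a Thomas-type Poisson cluster process in which users and SBSs share the same parent (hotspot) process; all densities and cluster parameters are invented for illustration.

```python
# Illustrative sketch only: users and SBSs as Thomas-type Poisson cluster
# processes sharing one parent (hotspot) process, emulating the user-SBS
# coupling described above. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(7)
side = 10.0                  # observation window side (km)
lam_p = 0.3                  # parent (hotspot center) density per km^2
m_user, m_sbs = 20, 3        # mean offspring per cluster
s_user, s_sbs = 0.15, 0.10   # Gaussian scatter std dev (km)

def offspring(parents, mean_n, sigma):
    pts = [p + sigma * rng.standard_normal((rng.poisson(mean_n), 2)) for p in parents]
    return np.vstack(pts) if pts else np.empty((0, 2))

parents = rng.uniform(0, side, size=(rng.poisson(lam_p * side**2), 2))
users = offspring(parents, m_user, s_user)   # clustered users (hotspots)
sbss = offspring(parents, m_sbs, s_sbs)      # SBSs deployed at the same hotspots

print(f"{len(parents)} hotspots, {len(users)} users, {len(sbss)} SBSs")
```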
13

Coverage, Secrecy, and Stability Analysis of Energy Harvesting Wireless Networks

Kishk, Mustafa 03 August 2018 (has links)
Including energy harvesting capability in a wireless network is attractive for multiple reasons. First and foremost, powering base stations with renewable resources could significantly reduce their reliance on traditional energy sources, thus helping to curtail the carbon footprint. Second, including this capability in wireless devices may help increase their lifetime, which is especially critical for devices whose batteries are not easy to charge or replace. This will often be the case for a large fraction of the sensors that will form the digital skin of an Internet of Things (IoT) ecosystem. Motivated by these factors, this work studies fundamental performance limitations that appear due to the inherent unreliability of energy harvesting when it is used as a primary or secondary source of energy by different elements of the wireless network, such as mobile users, IoT sensors, and/or base stations. The first step taken towards this objective is studying the joint uplink and downlink coverage of radio-frequency (RF) powered cellular-based IoT. Modeling the locations of the IoT devices and the base stations (BSs) using two independent Poisson point processes (PPPs), the joint uplink/downlink coverage probability is derived. The resulting expressions characterize how different system parameters impact coverage performance. Both mathematical expressions and simulation results show how these system parameters should be tuned in order to achieve the performance of the regularly powered IoT (IoT devices powered by regular batteries). The placement of RF-powered devices close to the RF sources, in order to harvest more energy, raises some concerns about the security of the signals transmitted by these RF sources to their intended receivers. Studying this problem is the second step taken in this dissertation towards a better understanding of energy harvesting wireless networks. While these secrecy concerns have recently been addressed for the point-to-point link, they have received less attention for the more general networks with randomly located transmitters (RF sources) and RF-powered devices, which is the main contribution of the second part of this dissertation. In the last part of this dissertation, we study the stability of solar-powered cellular networks. We use tools from percolation theory to study the percolation probability of energy-drained BSs. We study the effect of two system parameters on that metric, namely the energy arrival rate and the user density. Our results show the existence of a critical value for the ratio of the energy arrival rate to the user density, above which the percolation probability is zero. The next step to further improve the accuracy of the stability analysis is to study the effect of correlation between the battery levels at neighboring BSs. We provide an initial study that captures this correlation. The main insight drawn from our analysis is the existence of an optimal overlapping coverage area for neighboring BSs to serve each other's users when they are energy-drained. / Ph. D. / Renewable energy is a strong potential candidate for powering wireless networks, in order to ensure green, environment-friendly, and self-sustaining wireless networks. In particular, renewable energy gains importance when cellular coverage is required in off-grid areas where there is no stable source of energy. In that case, it makes sense to use solar-powered base stations to provide cellular coverage.
In fact, solar-powered base stations are already deployed in multiple locations around the globe. However, in order to extend this to a large-scale deployment, many fundamental aspects of the performance of such networks need to be studied. One of these aspects is the stability of solar-powered cellular networks. In this dissertation, we study the stability of such networks by applying probabilistic analysis that leads to a set of useful system-level insights. In particular, we show the existence of a critical value for the energy intensity, above which system stability is ensured. Another type of wireless network that will greatly benefit from renewable energy is the Internet of Things (IoT). IoT devices usually require several orders of magnitude lower power compared to base stations. In addition, they are expected to be massively deployed, often in hard-to-reach locations. This makes it impractical, or at least cost-inefficient, to rely on replacing or recharging batteries in these devices. Among the many possible sources of renewable energy, radio frequency (RF) energy harvesting is the strongest candidate for powering IoT devices, due to the ubiquity of RF signals even in hard-to-reach places. However, relying on RF signals as the sole source of energy may affect the overall reliability of the IoT. Hence, rigorous performance analysis of RF-powered IoT networks is required. In this dissertation, we study multiple aspects of the performance of such networks, using tools from probability theory and stochastic geometry. In particular, we provide concrete mathematical expressions that can be used to determine the performance drop resulting from using renewable energy as the sole source of power. One more aspect of the performance of RF-powered IoT is the secrecy of the RF signals used by the IoT devices to harvest energy. The placement of RF-powered devices close to the RF sources, in order to harvest more energy, raises some concerns about the security of the signals transmitted by these RF sources to their intended receivers. We study the effect of secrecy-enhancing techniques used by the RF sources on the amount of energy harvested by the RF-powered devices. We provide a performance comparison of three popular secrecy-enhancing techniques. In particular, we study the scenarios under which each of these techniques outperforms the others in terms of secrecy performance and energy harvesting probability. This material is based upon work supported by the U.S. National Science Foundation (Grant CCF1464293). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the NSF.
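As a rough flavor of the PPP-based coverage computations referenced above, the sketch below estimates the downlink coverage probability of a typical device served by its nearest BS under Rayleigh fading. It is a simplified stand-in for the joint uplink/downlink analysis in the dissertation, and every parameter value is an assumption.

```python
# Illustrative sketch only: downlink coverage probability for a typical
# user at the origin, BSs drawn from a PPP, nearest-BS association,
# Rayleigh fading. Parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(3)
lam, R = 1.0, 15.0            # BS density (per km^2), simulation disc radius (km)
alpha, theta = 4.0, 1.0       # path-loss exponent, SINR threshold (linear, 0 dB)
noise, trials = 1e-9, 4000
results = []

for _ in range(trials):
    n = rng.poisson(lam * np.pi * R**2)
    if n < 2:
        continue                                   # need a server and an interferer
    d = np.sort(R * np.sqrt(rng.uniform(size=n)))  # radii, uniform in the disc
    h = rng.exponential(size=n)                    # Rayleigh fading power gains
    g = h * d**(-alpha)
    sinr = g[0] / (noise + g[1:].sum())            # serve from the nearest BS
    results.append(sinr > theta)

print(f"coverage probability ~ {np.mean(results):.3f}")
```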
14

Average Link Rate Analysis over Finite Time Horizon in a Wireless Network

Bodepudi, Sai Nisanth 30 March 2017 (has links)
Instantaneous and ergodic rates are two of the most commonly used metrics to characterize the throughput of wireless networks. Roughly speaking, the former characterizes the rate achievable in a given time slot, whereas the latter is useful in characterizing the average rate achievable over a long time period. Clearly, the reality often lies somewhere in between these two extremes. Consequently, in this work, we define and characterize a more realistic N-slot average rate (the achievable rate averaged over N time slots). This N-slot average rate metric refines the popular notion of ergodic rate, which is defined under the assumption that a user experiences a complete ensemble of channel and interference conditions in the current session (not always realistic, especially for short-lived sessions). The proposed metric is used to study the performance of typical nodes in both ad hoc and downlink cellular networks. The ad hoc network is modeled as a Poisson bipolar network with a fixed distance between each transmitter and its intended receiver. The cellular network is also modeled using a homogeneous Poisson point process. For both setups, we use tools from stochastic geometry to derive the distribution of the N-slot average rate in the following three cases: (i) the rate across N time slots is completely correlated, (ii) the rate across N time slots is independent and identically distributed, and (iii) the rate across N time slots is partially correlated. While reality is closest to the third case, the exact characterization of the first two extreme cases exposes certain important design insights. / Master of Science
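The gap between cases (i) and (ii) above can be illustrated numerically. The sketch below, under an invented set of parameters, freezes the geometry of a Poisson bipolar network per realization and contrasts the N-slot average rate when fading is frozen across slots (fully correlated) against fading redrawn each slot (i.i.d.); the means agree while the spread shrinks in the i.i.d. case.

```python
# Illustrative sketch only: N-slot average rate in a Poisson bipolar
# network under the two extreme slot-correlation models. Geometry is
# frozen per realization; all parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(5)
lam, R, r0, alpha, N = 0.1, 20.0, 1.0, 4.0, 8
noise, trials = 1e-6, 2000

def n_slot_avg_rate(iid):
    n = max(rng.poisson(lam * np.pi * R**2), 1)
    # index 0: own link at distance r0; rest: interferers uniform in a disc
    d = np.concatenate([[r0], R * np.sqrt(rng.uniform(size=n))])
    h = rng.exponential(size=d.size)              # fading, frozen if correlated
    rates = []
    for _ in range(N):
        if iid:
            h = rng.exponential(size=d.size)      # redraw fading each slot
        g = h * d**(-alpha)
        rates.append(np.log2(1 + g[0] / (noise + g[1:].sum())))
    return np.mean(rates)

for iid, label in [(False, "fully correlated"), (True, "i.i.d. across slots")]:
    s = np.array([n_slot_avg_rate(iid) for _ in range(trials)])
    print(f"{label:20s}: mean {s.mean():.2f} bit/s/Hz, std {s.std():.2f}")
```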
15

Applications des processus de Lévy et processus de branchement à des études motivées par l'informatique et la biologie / Applications of Lévy processes and branching processes to studies motivated by computer science and biology

Bansaye, Vincent 14 November 2008 (has links) (PDF)
In the first part, I study a continuous-time data storage process in which the hard drive is identified with the real line. This model is a continuous version of Knuth's original parking problem. Here, files arrive according to a Poisson process, and each file is stored in the first free spaces to the right of its arrival point, fragmenting if necessary. I first construct the model and give a geometric and analytic characterization of the portion of the drive covered at time t. I then study the asymptotic regimes at the moment the drive saturates. Finally, I describe the time evolution of a typical block of data. The second part consists of the study of branching processes, motivated by questions of cellular infection. I first consider a subcritical branching process in random environment and establish limit theorems as a function of the initial population, as well as properties of the environments, Yaglom limits, and the Q-process. I then use this process to establish results for a model describing the proliferation of a parasite in a dividing cell. I determine the probability of recovery, the asymptotic number of infected cells, and the asymptotic proportions of cells infected by a given number of parasites. These results depend on the regime of the branching process in random environment. Finally, I add random contamination by external parasites.
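As a toy illustration of the branching-process-in-random-environment machinery used in the second part, the following sketch estimates the recovery (extinction) probability of a parasite population when each generation draws a "good" or "bad" environment. The offspring laws and environment probabilities are invented for illustration, not taken from the thesis.

```python
# Illustrative sketch only: subcritical branching process in a random
# environment, echoing the parasite-in-a-dividing-cell model above.
# Offspring means and environment probabilities are invented.
import numpy as np

rng = np.random.default_rng(11)
m_bad, m_good = 0.5, 1.2     # per-parasite offspring means in each environment
p_good = 0.4                 # P(good environment); log-mean < 0 => subcritical
gens, trials = 30, 5000

extinct = 0
for _ in range(trials):
    z = 1                                       # one initial parasite
    for _ in range(gens):
        m = m_good if rng.uniform() < p_good else m_bad
        z = rng.poisson(m * z)                  # sum of z Poisson(m) offspring counts
        if z == 0:
            extinct += 1
            break

print(f"recovery (extinction within {gens} generations): {extinct / trials:.3f}")
```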
16

Malliavin-Stein Method in Stochastic Geometry

Schulte, Matthias 19 March 2013 (has links)
In this thesis, abstract bounds for the normal approximation of Poisson functionals are computed by the Malliavin-Stein method and used to derive central limit theorems for problems from stochastic geometry. A Poisson functional is a random variable depending on a Poisson point process. It is known from stochastic analysis that every square-integrable Poisson functional has a representation as a (possibly infinite) sum of multiple Wiener-Itô integrals. This decomposition is called the Wiener-Itô chaos expansion, and the integrands are called the kernels of the Wiener-Itô chaos expansion. An explicit formula for these kernels is known due to Last and Penrose. Via their Wiener-Itô chaos expansions, the so-called Malliavin operators are defined. By combining Malliavin calculus and Stein's method, a well-known technique for deriving limit theorems in probability theory, bounds for the normal approximation of Poisson functionals in the Wasserstein distance, and of vectors of Poisson functionals in a similar distance, were obtained by Peccati, Sole, Taqqu, and Utzet and by Peccati and Zheng, respectively. An analogous bound for the univariate normal approximation in the Kolmogorov distance is derived. In order to evaluate these bounds, one has to compute the expectation of products of multiple Wiener-Itô integrals, which are complicated sums of deterministic integrals. The bounds for the normal approximation of Poisson functionals therefore reduce to sums of integrals depending on the kernels of the Wiener-Itô chaos expansion. The strategy for deriving central limit theorems for Poisson functionals is to compute the kernels of their Wiener-Itô chaos expansions, put the kernels into the bounds for the normal approximation, and show that the bounds vanish asymptotically. By this approach, central limit theorems for several problems from stochastic geometry are derived. Univariate and multivariate central limit theorems are shown for some functionals of the intersection process of Poisson k-flats and for the number of vertices and the total edge length of a Gilbert graph. These Poisson functionals are so-called Poisson U-statistics, which have a simpler structure since their Wiener-Itô chaos expansions are finite, i.e., they consist of finitely many multiple Wiener-Itô integrals. As examples of Poisson functionals with infinite Wiener-Itô chaos expansions, central limit theorems for the volume of the Poisson-Voronoi approximation of a convex set and for the intrinsic volumes of Boolean models are proven.
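The Gilbert-graph edge count mentioned above is a convenient Poisson U-statistic to experiment with. The following sketch, with invented parameters, samples the edge count repeatedly and reports its standardized skewness and excess kurtosis, which should both be near zero if the central limit behavior proved in the thesis is taking hold.

```python
# Illustrative sketch only: empirical check of approximate normality for
# the edge count of a Gilbert graph on a PPP. Intensity, window, and
# connection radius are assumed values.
import numpy as np

rng = np.random.default_rng(13)
lam, side, r = 50.0, 1.0, 0.1     # intensity, unit square window, connection radius

def edge_count():
    n = rng.poisson(lam * side**2)
    if n < 2:
        return 0
    pts = rng.uniform(0, side, size=(n, 2))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    return int(np.triu(d < r, k=1).sum())          # pairs closer than r

s = np.array([edge_count() for _ in range(3000)], dtype=float)
z = (s - s.mean()) / s.std()
print(f"mean edges {s.mean():.1f}; skewness {(z**3).mean():.3f}; "
      f"excess kurtosis {(z**4).mean() - 3:.3f} (both ~0 under normality)")
```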
17

Statistical Analysis of Geolocation Fundamentals Using Stochastic Geometry

O'Lone, Christopher Edward 22 January 2021 (has links)
The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data for the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, requires a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS. In the literature, benchmarking localization performance in these networks has traditionally been done in a deterministic manner. That is, for a fixed setup of anchors (nodes with known location) and a target (a node with unknown location), a commonly used benchmark for localization error, such as the Cramer-Rao lower bound (CRLB), can be calculated for a given localization strategy, e.g., time-of-arrival (TOA), angle-of-arrival (AOA), etc. While this CRLB calculation provides excellent insight into expected localization performance, its traditional treatment as a deterministic value for a specific setup is limited. Rather than trying to gain insight into a specific setup, network designers are more often interested in aggregate localization error statistics within the network as a whole. Questions such as "What percentage of the time is localization error less than x meters in the network?" are commonplace. In order to answer these types of questions, network designers often turn to simulations; however, these come with many drawbacks, such as lengthy execution times and an inability to provide fundamental insights, due to their inherent "black box" nature. Thus, this dissertation presents the first analytical solution with which to answer these questions. By leveraging tools from stochastic geometry, anchor positions and potential target positions can be modeled by Poisson point processes (PPPs). This allows the CRLB of position error to be characterized over all setups of anchor positions and potential target positions realizable within the network. This leads to a distribution of the CRLB, which can completely characterize the localization error experienced by a target within the network, and can consequently be used to answer questions regarding network-wide localization performance. The particular CRLB distribution derived in this dissertation is for fourth-generation (4G) and fifth-generation (5G) sub-6GHz networks employing a TOA localization strategy. Recognizing the tremendous potential that stochastic geometry has in gaining new insight into localization, this dissertation continues by further exploring the union of these two fields. First, the concept of localizability, which is the probability that a mobile is able to obtain an unambiguous position estimate, is explored in a 5G, millimeter wave (mm-wave) framework.
In this framework, unambiguous single-anchor localization is possible with either a line-of-sight (LOS) path between the anchor and mobile or, if the LOS path is blocked, via at least two non-line-of-sight (NLOS) paths. Thus, for a single anchor-mobile pair in a 5G, mm-wave network, this dissertation derives the mobile's localizability over all environmental realizations this anchor-mobile pair is likely to experience in the network. This is done by: (1) utilizing the Boolean model from stochastic geometry, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment, (2) considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and (3) considering the possibility that reflectors can either facilitate or block reflections. In addition to the derivation of the mobile's localizability, this analysis also reveals that unambiguous localization via reflected NLOS signals exclusively is a relatively small contributor to the mobile's overall localizability. Lastly, using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. The NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time delay of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. Due to the random nature of the propagation environment, the NLOS bias is a random variable, and as such, its distribution is sought. As before, assuming NLOS propagation is due to first-order reflections, and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor time-of-flight (TOF) range measurements. This distribution is shown to match exceptionally well with the commonly assumed gamma and exponential NLOS bias models in the literature, which were previously attained only through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model. In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over the entire ensemble of infrastructure or environmental realizations that a target is likely to experience in a network. / Doctor of Philosophy / The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few.
In these sensor networks in particular, obtaining precise positioning data for the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, requires a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS. When speaking in terms of localization, the network infrastructure consists of what are called anchors, which are simply nodes (points) with a known location. These can be base stations, WiFi access points, or designated sensor nodes, depending on the network. In trying to determine the position of a target (i.e., a user, or a mobile), various measurements can be made between this target and the anchor nodes in close proximity. These measurements are typically distance (range) measurements or angle (bearing) measurements. Localization algorithms then process these measurements to obtain an estimate of the target position. The performance of a given localization algorithm (i.e., estimator) is typically evaluated by examining the distance, in meters, between the position estimates it produces and the actual (true) target position. This is called the positioning error of the estimator. There are various benchmarks that bound the best (lowest) error that these algorithms can hope to achieve; however, these benchmarks depend on the particular setup of anchors and the target. The benchmark of localization error considered in this dissertation is the Cramer-Rao lower bound (CRLB). To determine how this benchmark of localization error behaves over the entire network, all of the various setups of anchors and the target that would arise in the network must be considered. Thus, this dissertation uses a field of statistics called stochastic geometry to model all of these random placements of anchors and the target, which represent all the setups that can be experienced in the network. Under this model, the probability distribution of this localization error benchmark across the entirety of the network is then derived. This distribution allows network designers to examine localization performance in the network as a whole, rather than just for a specific setup, and allows one to obtain answers to questions such as "What percentage of the time is localization error less than x meters in the network?" Next, this dissertation examines a concept called localizability, which is the probability that a target can obtain a unique position estimate. Oftentimes localization algorithms can produce position estimates that congregate around different potential target positions, and thus it is important to know when algorithms will produce estimates that congregate around a unique (single) potential target position; hence the importance of localizability. In fifth-generation (5G), millimeter wave (mm-wave) networks, only one anchor is needed to produce a unique target position estimate if the line-of-sight (LOS) path between the anchor and the target is unimpeded. If the LOS path is impeded, then a unique target position can still be obtained if two or more non-line-of-sight (NLOS) paths are available.
Thus, over all possible environmental realizations likely to be experienced in the network by this single anchor-mobile pair, this dissertation derives the mobile's localizability, or in this case, the probability that the LOS path or at least two NLOS paths are available. This is done by utilizing another analytical tool from stochastic geometry known as the Boolean model, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment. Under this model, considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and considering the possibility that reflectors can either facilitate or block reflections, the mobile's localizability is derived. This result reveals the roles that the LOS path and the NLOS paths play in obtaining a unique position estimate of the target. Using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. The NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time-of-flight (TOF) of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. As before, assuming NLOS propagation is due to first-order reflections and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) (or first-arriving "reflection path") is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor TOF range measurements. This distribution is shown to match exceptionally well with commonly assumed NLOS bias distributions in the literature, which were previously attained only through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution gives, for each angle, the probability that the first-arriving reflection path arrives at the mobile from that angle. It provides novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model. In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over all of the possible infrastructure or environmental realizations that a target is likely to experience in a network.
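To give a feel for the CRLB-distribution idea in the first part, here is a minimal Monte Carlo sketch: anchors are drawn from a PPP around a target, the two-dimensional TOA Fisher information is assembled from the anchor bearing vectors, and the resulting position-error bound is tabulated across realizations. The ranging-error model and all numbers are assumptions for illustration, not the dissertation's setup.

```python
# Illustrative sketch only: empirical distribution of the TOA position-
# error bound sqrt(trace(FIM^-1)) when anchors form a PPP around the
# target. Anchor density and ranging std dev are assumed values.
import numpy as np

rng = np.random.default_rng(17)
lam, R, sigma = 1e-4, 300.0, 3.0   # anchor density (per m^2), disc radius (m), range std (m)
crlbs = []

for _ in range(3000):
    n = rng.poisson(lam * np.pi * R**2)
    if n < 3:
        continue                                 # need >= 3 anchors for unambiguous 2-D TOA
    ang = rng.uniform(0, 2 * np.pi, n)           # bearings from target to anchors
    u = np.column_stack([np.cos(ang), np.sin(ang)])
    fim = (u.T @ u) / sigma**2                   # 2x2 TOA Fisher information matrix
    crlbs.append(np.sqrt(np.trace(np.linalg.inv(fim))))

crlbs = np.array(crlbs)
x = 2.0
print(f"P(position-error bound < {x} m) ~ {(crlbs < x).mean():.3f}")
```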
18

Localização industrial: uma aproximação usando processos pontuais espaciais / Firm location: an approach using spatial point process

Morales, Adriano Barasal 08 June 2018 (has links)
The objective of this research is to show how to take advantage of newly available databases and advances in computational methods to extract statistical information about the spatial location of firms. To this end, we propose an application of spatial statistics methods to model the location patterns of new service firms in the city of São Paulo. In this work, we assume that the spatial locations of these firms were generated by a two-dimensional point process, and we apply two distinct models: one with a non-stochastic intensity, based on the Poisson process, and one with a stochastic intensity, based on the log-Gaussian Cox process (LGCP). The main input is a georeferenced database built from the Central Business Register by the Center for Metropolis Studies (CEM), containing observations of firms in the metropolitan region of São Paulo for the base year 2000. As explanatory location variables, we use information from geographic information systems (GIS), the demographic census, and satellite imagery from the National Oceanic and Atmospheric Administration (NOAA). The results show the usefulness of this methodology in the construction of spatial location models, combining different data sources and introducing new perspectives on the empirical study of urban economics.
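A log-Gaussian Cox process like the one applied above can be simulated in a few lines: draw a Gaussian random field on a grid, exponentiate it to get a random intensity, and sample Poisson counts cell by cell. The sketch below does exactly that with an invented exponential covariance; none of the parameters come from the study.

```python
# Illustrative sketch only: simulating a log-Gaussian Cox process (LGCP)
# on a coarse grid: Gaussian random field, exponentiated to an intensity,
# then Poisson counts per cell. Covariance parameters are invented.
import numpy as np

rng = np.random.default_rng(19)
n, side = 20, 10.0                   # grid cells per axis, window side (km)
mu, var, scale = -1.0, 1.0, 2.0      # field mean, variance, correlation length (km)

xs = (np.arange(n) + 0.5) * side / n
gx, gy = np.meshgrid(xs, xs)
cells = np.column_stack([gx.ravel(), gy.ravel()])
dist = np.linalg.norm(cells[:, None, :] - cells[None, :, :], axis=2)
cov = var * np.exp(-dist / scale)                     # exponential covariance
L = np.linalg.cholesky(cov + 1e-8 * np.eye(n * n))
field = mu + L @ rng.standard_normal(n * n)           # Gaussian random field
intensity = np.exp(field)                             # random LGCP intensity per km^2
counts = rng.poisson(intensity * (side / n) ** 2)     # firm counts per cell

print(f"total simulated firms: {counts.sum()}; max cell intensity {intensity.max():.2f}")
```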
20

Designing MIMO interference alignment networks

Nosrat Makouei, Behrang 25 October 2012 (has links)
Wireless networks are increasingly interference-limited, which motivates the development of sophisticated interference management techniques. One recently discovered approach is interference alignment, which attains the maximum sum-rate scaling (with signal-to-noise ratio) in many network configurations. Interference alignment, however, is not yet well understood from an engineering perspective. Key design considerations include (i) partial rather than complete knowledge of channel state information, (ii) correlated channels, (iii) bursty packet-based network traffic that requires the frequent setup and tear-down of sessions, and (iv) the spatial distribution and interaction of transmit/receive pairs. This dissertation aims to establish the benefits and limitations of interference alignment under these four considerations. The first contribution of this dissertation considers an isolated group of transmitter/receiver pairs (a cluster) cooperating through interference alignment and derives the signal-to-interference-plus-noise ratio (SINR) distribution at each receiver for each stream. This distribution is used to compare interference alignment to beamforming and spatial multiplexing (as examples of common transmission techniques) in terms of sum rate, in order to identify potential switching points between them. This dissertation identifies such switching points and provides design recommendations based on the severity of the correlation or the channel state information uncertainty. The second contribution considers transmitters that are not associated with any cooperating interference alignment group but want to use the channel. The goal is to retain the benefits of interference alignment amid interference from the out-of-cluster transmitters. This dissertation shows that when the out-of-cluster transmitters have enough antennas, they can access the channel without changing the performance of the interference alignment receivers. Furthermore, optimum transmit filters maximizing the sum rate of the out-of-cluster transmit/receive pairs are derived. When insufficient antennas exist at the out-of-cluster transmitters, several transmit filters that trade off complexity and sum-rate performance are presented. The last contribution, in contrast to the first two, takes into account the impact of large-scale fading and the spatial distribution of the transmit/receive pairs on interference alignment by deriving the transmission capacity of a decentralized clustered interference alignment network. Channel state information uncertainty and feedback overhead are considered, and the optimum training period is derived. The transmission capacity of interference alignment is compared to that of spatial multiplexing to highlight the tradeoff between channel estimation accuracy and inter-cluster interference: the closer the nodes are to each other, the higher both the channel estimation accuracy and the inter-cluster interference.
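For readers unfamiliar with how interference alignment solutions are actually computed, the sketch below runs the well-known iterative interference-leakage minimization heuristic (in the spirit of distributed alignment algorithms from the literature, not code or an algorithm from this dissertation) on a 3-user, 2x2 MIMO channel with one stream per user, a configuration where alignment is feasible; the residual leakage should drop toward zero. Channel draws and iteration counts are illustrative.

```python
# Illustrative sketch only: iterative interference-leakage minimization
# for a 3-user 2x2 MIMO interference channel, one stream per user (an
# alignment-feasible setting). Not the dissertation's algorithm or code.
import numpy as np

rng = np.random.default_rng(23)
K, N, iters = 3, 2, 200
# H[k, j] is the N x N channel from transmitter j to receiver k
H = rng.standard_normal((K, K, N, N)) + 1j * rng.standard_normal((K, K, N, N))

def unit(x):
    return x / np.linalg.norm(x)

def min_eigvec(Q):
    w, V = np.linalg.eigh(Q)          # Hermitian eigendecomposition, ascending order
    return V[:, 0]                    # eigenvector of the smallest eigenvalue

def cov(x):
    return np.outer(x, x.conj())      # rank-one covariance x x^H

v = [unit(rng.standard_normal(N) + 1j * rng.standard_normal(N)) for _ in range(K)]
for _ in range(iters):
    # receive filters: minimize interference leakage at each receiver
    u = [min_eigvec(sum(cov(H[k, j] @ v[j]) for j in range(K) if j != k))
         for k in range(K)]
    # reciprocal network: swap TX/RX roles and update the precoders
    v = [min_eigvec(sum(cov(H[j, k].conj().T @ u[j]) for j in range(K) if j != k))
         for k in range(K)]

leak = sum(abs(u[k].conj() @ H[k, j] @ v[j])**2
           for k in range(K) for j in range(K) if j != k)
print(f"residual interference leakage: {leak:.2e}")
```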
