71

NOVEL MODEL-BASED AND DEEP LEARNING APPROACHES TO SEGMENTATION AND OBJECT DETECTION IN 3D MICROSCOPY IMAGES

Camilo G Aguilar Herrera (9226151) 13 August 2020 (has links)
Modeling microscopy images and extracting information from them are important problems in the fields of physics and materials science.
Model-based methods, such as marked point processes (MPPs), and machine learning approaches, such as convolutional neural networks (CNNs), are powerful tools to perform these tasks. Nevertheless, MPPs present limitations when modeling objects with irregular boundaries. Similarly, machine learning techniques show drawbacks when differentiating clustered objects in volumetric datasets.
In this thesis we explore the extension of the MPP framework to detect irregularly shaped objects. In addition, we develop a CNN approach to perform efficient 3D object detection. Finally, we propose a CNN approach together with geometric regularization to provide robustness in object detection across different datasets.
The first part of this thesis explores the addition of boundary energy to the MPP by using active contour energy and level set energy. Our results show this extension allows the MPP framework to detect material porosity in CT microscopy images and to detect red blood cells in DIC microscopy images.
The second part of this thesis proposes a convolutional neural network approach to perform 3D object detection by regressing object voxels into clusters. Comparisons with leading methods demonstrate a significant speed-up in 3D fiber and porosity detection in composite polymers while preserving detection accuracy.
The third part of this thesis explores an improvement in the 3D object detection approach by regressing voxels to their instance centers and using geometric regularization. This improvement demonstrates robustness when comparing 3D fiber detection in several large volumetric datasets.
These methods can contribute to fast and correct structural characterization of large volumetric datasets, which could potentially lead to the development of novel materials.
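A minimal sketch of the voxel-to-center regression idea described in the second and third parts, assuming the CNN outputs a foreground probability volume and per-voxel offsets to instance centers; the DBSCAN clustering step, thresholds and function names below are illustrative assumptions rather than the thesis's implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def voxels_to_instances(fg_prob, offsets, fg_thresh=0.5, eps=2.0, min_size=20):
    """fg_prob: (D, H, W) foreground probabilities from the network.
    offsets: (3, D, H, W) regressed displacement of each voxel to its instance center.
    Returns a (D, H, W) int32 label volume (0 = background)."""
    mask = fg_prob > fg_thresh
    coords = np.argwhere(mask).astype(np.float32)      # (N, 3) foreground voxel coordinates
    votes = coords + offsets[:, mask].T                # each voxel "votes" for its center
    labels = DBSCAN(eps=eps, min_samples=min_size).fit_predict(votes)
    out = np.zeros(fg_prob.shape, dtype=np.int32)
    ijk = coords.astype(int)
    out[ijk[:, 0], ijk[:, 1], ijk[:, 2]] = labels + 1  # DBSCAN noise (-1) maps to background
    return out

# Interface demo on random inputs (zero offsets, so voxels simply vote for themselves):
labels = voxels_to_instances(np.random.rand(16, 16, 16), np.zeros((3, 16, 16, 16)))
```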
72

CURVILINEAR STRUCTURE DETECTION IN IMAGES BY CONNECTED-TUBE MARKED POINT PROCESS AND ANOMALY DETECTION IN TIME SERIES

Tianyu Li (15349048) 26 April 2023 (has links)
Curvilinear structure detection in images has been investigated for decades. In general, the detection of curvilinear structures includes two aspects: binary segmentation of the image and inference of the graph representation of the curvilinear network. In our work, we propose a connected-tube model based on a marked point process (MPP) for addressing these two issues. The proposed tube model is applied to fiber detection in microscopy images by combining connected-tube and ellipse models. Moreover, a tube-based segmentation algorithm is proposed to improve segmentation accuracy. Experiments on fiber-reinforced polymer images, satellite images, and retinal vessel images are presented. Additionally, we extend the 2D tube model to a 3D tube model, with each tube modeled as a cylinder. To investigate supervised curvilinear structure detection methods, we focus on the application of road detection in satellite images and propose a two-stage learning strategy for road segmentation. A probability map is generated in the first stage by a selected neural network; we then attach the probability map to the original RGB image and feed the resulting four-channel image to a U-Net-like network in the second stage to obtain a refined result.
Anomaly detection in time series is a key step in diagnosing abnormal behavior in some systems. Long Short-Term Memory networks (LSTMs) have been demonstrated to be useful for anomaly detection in time series due to their predictive power. However, for a system with thousands of different time sequences, a single LSTM predictor may not perform well for all the sequences. To enhance adaptability, we propose a stacked predictor framework. We also propose a novel dynamic thresholding algorithm based on the prediction errors to extract the potential anomalies. To further improve the accuracy of anomaly detection, we propose a post-detection verification method based on a fast and accurate time series subsequence matching algorithm.
To detect anomalies from multi-channel time series, a bi-directional transformer-based predictor is applied to generate the prediction error sequences, and a statistical model referred to as an anomaly marked point process (Anomaly-MPP) is proposed to extract the anomalies from the error sequences. The effectiveness of our methods is demonstrated by testing on a variety of time series datasets.
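The thesis proposes its own stacked predictors and dynamic thresholding algorithm; as a rough illustration of the general idea of flagging anomalies from prediction errors, the sketch below applies a simple rolling mean-plus-k-sigma rule, which is only an assumed stand-in for the actual method.

```python
import numpy as np

def flag_anomalies(errors, window=200, k=4.0):
    """errors: 1-D array of prediction errors |y_hat - y| from an LSTM predictor.
    Marks points whose error exceeds a rolling mean-plus-k-sigma threshold."""
    errors = np.asarray(errors, dtype=float)
    flags = np.zeros(errors.shape, dtype=bool)
    for t in range(window, len(errors)):               # warm-up window is never flagged
        hist = errors[t - window:t]                    # recent error history
        flags[t] = errors[t] > hist.mean() + k * hist.std()
    return flags

# Example: a spike injected into otherwise small errors is flagged.
e = np.abs(np.random.default_rng(0).normal(0, 0.1, 1000))
e[700] = 3.0
print(np.where(flag_anomalies(e))[0])
```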
73

Statistical Analysis of Geolocation Fundamentals Using Stochastic Geometry

O'Lone, Christopher Edward 22 January 2021 (has links)
The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, require a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS. In the literature, benchmarking localization performance in these networks has traditionally been done in a deterministic manner. That is, for a fixed setup of anchors (nodes with known location) and a target (a node with unknown location), a commonly used benchmark for localization error, such as the Cramer-Rao lower bound (CRLB), can be calculated for a given localization strategy, e.g., time-of-arrival (TOA), angle-of-arrival (AOA), etc. While this CRLB calculation provides excellent insight into expected localization performance, its traditional treatment as a deterministic value for a specific setup is limited. Rather than trying to gain insight into a specific setup, network designers are more often interested in aggregate localization error statistics within the network as a whole. Questions such as: "What percentage of the time is localization error less than x meters in the network?" are commonplace. In order to answer these types of questions, network designers often turn to simulations; however, these come with many drawbacks, such as lengthy execution times and the inability to provide fundamental insights due to their inherent "black box" nature. Thus, this dissertation presents the first analytical solution with which to answer these questions. By leveraging tools from stochastic geometry, anchor positions and potential target positions can be modeled by Poisson point processes (PPPs). This allows for the CRLB of position error to be characterized over all setups of anchor positions and potential target positions realizable within the network. This leads to a distribution of the CRLB, which can completely characterize localization error experienced by a target within the network, and can consequently be used to answer questions regarding network-wide localization performance. The particular CRLB distribution derived in this dissertation is for fourth-generation (4G) and fifth-generation (5G) sub-6GHz networks employing a TOA localization strategy. Recognizing the tremendous potential that stochastic geometry has in gaining new insight into localization, this dissertation continues by further exploring the union of these two fields. First, the concept of localizability, which is the probability that a mobile is able to obtain an unambiguous position estimate, is explored in a 5G, millimeter wave (mm-wave) framework.
In this framework, unambiguous single-anchor localization is possible with either a line-of-sight (LOS) path between the anchor and mobile or, if blocked, then via at least two non-line-of-sight (NLOS) paths. Thus, for a single anchor-mobile pair in a 5G, mm-wave network, this dissertation derives the mobile's localizability over all environmental realizations this anchor-mobile pair is likely to experience in the network. This is done by: (1) utilizing the Boolean model from stochastic geometry, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment, (2) considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and (3) considering the possibility that reflectors can either facilitate or block reflections. In addition to the derivation of the mobile's localizability, this analysis also reveals that unambiguous localization, via reflected NLOS signals exclusively, is a relatively small contributor to the mobile's overall localizability. Lastly, using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time delay of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. Due to the random nature of the propagation environment, the NLOS bias is a random variable, and as such, its distribution is sought. As before, assuming NLOS propagation is due to first-order reflections, and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor time-of-flight (TOF) range measurements. This distribution is shown to match exceptionally well with commonly assumed gamma and exponential NLOS bias models in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model. In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over the entire ensemble of infrastructure or environmental realizations that a target is likely to experience in a network. / Doctor of Philosophy / The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few.
In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, require a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS. When speaking in terms of localization, the network infrastructure consists of what are called anchors, which are simply nodes (points) with a known location. These can be base stations, WiFi access points, or designated sensor nodes, depending on the network. In trying to determine the position of a target (i.e., a user, or a mobile), various measurements can be made between this target and the anchor nodes in close proximity. These measurements are typically distance (range) measurements or angle (bearing) measurements. Localization algorithms then process these measurements to obtain an estimate of the target position. The performance of a given localization algorithm (i.e., estimator) is typically evaluated by examining the distance, in meters, between the position estimates it produces and the actual (true) target position. This is called the positioning error of the estimator. There are various benchmarks that bound the best (lowest) error that these algorithms can hope to achieve; however, these benchmarks depend on the particular setup of anchors and the target. The benchmark of localization error considered in this dissertation is the Cramer-Rao lower bound (CRLB). To determine how this benchmark of localization error behaves over the entire network, all of the various setups of anchors and the target that would arise in the network must be considered. Thus, this dissertation uses a field of statistics called stochastic geometry to model all of these random placements of anchors and the target, which represent all the setups that can be experienced in the network. Under this model, the probability distribution of this localization error benchmark across the entirety of the network is then derived. This distribution allows network designers to examine localization performance in the network as a whole, rather than just for a specific setup, and allows one to obtain answers to questions such as: "What percentage of the time is localization error less than x meters in the network?" Next, this dissertation examines a concept called localizability, which is the probability that a target can obtain a unique position estimate. Oftentimes localization algorithms can produce position estimates that congregate around different potential target positions, and thus, it is important to know when algorithms will produce estimates that congregate around a unique (single) potential target position; hence the importance of localizability. In fifth-generation (5G), millimeter-wave (mm-wave) networks, only one anchor is needed to produce a unique target position estimate if the line-of-sight (LOS) path between the anchor and the target is unimpeded. If the LOS path is impeded, then a unique target position can still be obtained if two or more non-line-of-sight (NLOS) paths are available.
Thus, over all possible environmental realizations likely to be experienced in the network by this single anchor-mobile pair, this dissertation derives the mobile's localizability, or in this case, the probability that the LOS path or at least two NLOS paths are available. This is done by utilizing another analytical tool from stochastic geometry known as the Boolean model, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment. Under this model, considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and considering the possibility that reflectors can either facilitate or block reflections, the mobile's localizability is derived. This result reveals the roles that the LOS path and the NLOS paths play in obtaining a unique position estimate of the target. Using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time-of-flight (TOF) of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. As before, assuming NLOS propagation is due to first-order reflections and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) (or first-arriving "reflection path") is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor TOF range measurements. This distribution is shown to match exceptionally well with commonly assumed NLOS bias distributions in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution yields the probability that, for a specific angle, the first-arriving reflection path arrives at the mobile at this angle. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model. In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over all of the possible infrastructure or environmental realizations that a target is likely to experience in a network.
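As a hedged illustration of the kind of question the dissertation answers analytically, the Monte Carlo sketch below draws anchor layouts from a PPP and builds an empirical distribution of a simplified TOA CRLB for a target at the origin; the intensity, window, ranging-noise level, and the i.i.d.-Gaussian-ranging CRLB expression are illustrative assumptions, not the dissertation's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def toa_crlb(anchors, sigma=10.0):
    """Simplified position-error bound trace(F^-1) for range-based TOA with
    independent Gaussian ranging errors of standard deviation sigma (meters)."""
    d = np.linalg.norm(anchors, axis=1)
    u = anchors / d[:, None]                  # unit vectors from the target to each anchor
    F = (u.T @ u) / sigma**2                  # 2x2 Fisher information matrix
    return np.trace(np.linalg.inv(F))         # CRLB on position MSE (m^2)

def crlb_samples(lam=1e-4, radius=500.0, n_trials=5000, min_anchors=3):
    """Empirical CRLB distribution over PPP anchor layouts (target at the origin)."""
    area = np.pi * radius**2
    samples = []
    while len(samples) < n_trials:
        n = rng.poisson(lam * area)           # PPP: Poisson count, uniform locations in the disc
        if n < min_anchors:
            continue
        r = radius * np.sqrt(rng.uniform(size=n))
        th = rng.uniform(0.0, 2 * np.pi, size=n)
        samples.append(toa_crlb(np.c_[r * np.cos(th), r * np.sin(th)]))
    return np.array(samples)

# Empirical answer to "what fraction of setups has a root-CRLB below 5 m?"
s = crlb_samples()
print("P(sqrt(CRLB) <= 5 m) ~", np.mean(np.sqrt(s) <= 5.0))
```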
74

Valuation, hedging and the risk management of insurance contracts

Barbarin, Jérôme 03 June 2008 (has links)
This thesis aims at contributing to the study of the valuation of insurance liabilities and the management of the assets backing these liabilities. It consists of four parts, each devoted to a specific topic. In the first part, we study the pricing of a classical single-premium life insurance contract with profit, in terms of a guaranteed rate on the premium and a participation rate on the (terminal) financial surplus. We argue that, given the asset allocation of the insurer, these technical parameters should be determined by taking explicitly into account the risk management policy of the insurance company, in terms of a risk measure such as the value-at-risk or the conditional value-at-risk. We then design a methodology that allows us to fix both parameters in such a way that the contract is fairly priced and simultaneously exhibits a risk consistent with the risk management policy. In the second part, we focus on the management of the surrender option embedded in most life insurance contracts. In Chapter 2, we argue that we should model the surrender time as a random time not adapted to the filtration generated by the financial asset prices, instead of assuming that the surrender time is an optimal stopping time, as is usual in the actuarial literature. We then study the valuation of insurance contracts with a surrender option in such a model. We here follow the financial literature on default risk and, in particular, the reduced-form models. In Chapters 3 and 4, we study the hedging strategies of such insurance contracts. In Chapter 3, we study their risk-minimizing strategies and in Chapter 4, we focus on their "locally risk-minimizing" strategies. As a by-product, we study the impact of a progressive enlargement of filtration on the so-called "minimal martingale measure". The third part is devoted to the systematic mortality risk. Due to its systematic nature, this risk cannot be diversified through increasing the size of the portfolio. It is thus also important to study the hedging strategies an insurer should follow to mitigate its exposure to this risk. In Chapter 5, we study the risk-minimizing strategies for a life insurance contract when no mortality-linked financial assets are traded on the financial market. We here extend Dahl and Moller's results and show that the risk-minimizing strategy of a life insurance contract is given by a weighted average of risk-minimizing strategies of purely financial claims, where the weights are given by the (stochastic) survival probabilities. In Chapter 6, we first study the application of the HJM methodology to the modelling of a longevity bond market and describe a coherent theoretical setting in which we can properly define longevity bond prices. Then, we study the risk-minimizing strategies for pure endowment and annuity portfolios when these longevity bonds are traded. Finally, the fourth part deals with the design of asset-liability management (ALM) strategies for a non-life insurance portfolio. In particular, this chapter aims at studying the risk-minimizing strategies for a non-life insurance company when inflation risk and interest rate risk are taken into account. We derive the general form of these strategies when the cumulative payments of the insurer are described by an arbitrary increasing process adapted to the natural filtration of a general marked point process and when the inflation and the term structure of interest rates are simultaneously described by the HJM model of Jarrow and Yildirim.
We then systematically apply this result to four specific models of insurance claims. We first study two "collective" models. We then study two "individual" models where the claims are notified at a random time and settled through time.
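As a very rough sketch of the calibration idea in the first part, the snippet below simulates the terminal asset value under a lognormal model, prices a with-profit payoff with guaranteed rate g and participation rate delta, and searches for pairs that are approximately fairly priced while a value-at-risk constraint on the insurer's shortfall holds; the asset model, the single probability measure used for both pricing and risk, and the grid and budget values are simplifying assumptions, not the thesis's methodology.

```python
import numpy as np

rng = np.random.default_rng(1)
P, T, r, sigma, n = 100.0, 10, 0.03, 0.15, 100_000
Z = rng.standard_normal(n)
A_T = P * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)   # terminal value of the backing assets

def payoff(g, delta):
    G = P * (1 + g) ** T                        # guaranteed maturity benefit
    return G + delta * np.maximum(A_T - G, 0)   # plus participation in the terminal surplus

def fair_and_acceptable(g, delta, var_level=0.995, loss_budget=70.0):
    pay = payoff(g, delta)
    price = np.exp(-r * T) * pay.mean()         # "fair" if the discounted expectation matches the premium P
    shortfall = pay - A_T                       # insurer's loss at maturity
    return abs(price - P) < 0.5 and np.quantile(shortfall, var_level) <= loss_budget

pairs = [(round(g, 3), round(d, 2))
         for g in np.linspace(0.0, 0.03, 16)
         for d in np.linspace(0.0, 1.0, 21)
         if fair_and_acceptable(g, d)]
print(pairs)   # (guaranteed rate, participation rate) pairs passing both criteria
```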
75

Metody modelování a statistické analýzy procesu extremálních hodnot / Methods of modelling and statistical analysis of an extremal value process

Jelenová, Klára January 2012 (has links)
In the present work we deal with the problem of extremal values of time series, especially of maxima. We study the times and values of maxima using a point process approach and we model the distribution of extremal values by statistical methods. We estimate the parameters of the distribution using different methods, namely graphical methods of data analysis, and subsequently we test the estimated distribution with goodness-of-fit tests. We study the stationary case as well as cases with a trend. In connection with the distribution of excesses and exceedances over a threshold we deal with the generalized Pareto distribution.
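A small sketch of the peaks-over-threshold step mentioned above, using synthetic data and an arbitrary 95% threshold as assumptions: fit a generalized Pareto distribution (GPD) to the exceedances and check the fit with a Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
series = rng.standard_t(df=4, size=5000)      # synthetic stand-in for the observed time series

u = np.quantile(series, 0.95)                 # high threshold
excesses = series[series > u] - u             # exceedances over u

shape, loc, scale = stats.genpareto.fit(excesses, floc=0)        # GPD fit, location fixed at 0
ks = stats.kstest(excesses, "genpareto", args=(shape, loc, scale))
print(f"xi = {shape:.3f}, sigma = {scale:.3f}, KS p-value = {ks.pvalue:.3f}")
```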
76

Localização industrial: uma aproximação usando processos pontuais espaciais / Firm location: an approach using spatial point process

Morales, Adriano Barasal 08 June 2018 (has links)
The objective of this research is to show how to take advantage of newly available databases and advances in computational methods to extract statistical information about the spatial location of firms. To this end, we apply spatial statistics methods to model the location pattern of new service firms in the municipality of São Paulo. We assume that the spatial locations of these firms were generated by a two-dimensional point process and apply two distinct models: one with a non-stochastic intensity based on the Poisson process, and a stochastic-intensity model based on the log-Gaussian Cox process (LGCP). The main input is a georeferenced database built from the Central Business Register by the Centro de Estudos da Metrópole (CEM), containing observations of firms in the metropolitan region of São Paulo for the base year 2000. As explanatory location variables we use information from geographic information systems (GIS), the demographic census, and satellite imagery from the National Oceanic and Atmospheric Administration (NOAA). The results show the usefulness of this methodology in the construction of spatial location models, combining different data sources and introducing new perspectives on the empirical study of urban economics.
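A minimal sketch of the non-stochastic-intensity (Poisson) model described above, under illustrative assumptions: discretize the study window into grid cells, count firm locations per cell, and fit a log-linear intensity on a spatial covariate via a Poisson GLM with the cell area as exposure; the LGCP variant would add a Gaussian random field to the log-intensity.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Synthetic stand-ins: firm coordinates in the unit square and one covariate surface.
xy = rng.uniform(size=(800, 2))
nbins = 20
counts, xe, ye = np.histogram2d(xy[:, 0], xy[:, 1], bins=nbins, range=[[0, 1], [0, 1]])
xc, yc = np.meshgrid((xe[:-1] + xe[1:]) / 2, (ye[:-1] + ye[1:]) / 2, indexing="ij")
covariate = np.exp(-((xc - 0.5) ** 2 + (yc - 0.5) ** 2) / 0.1)   # e.g. an accessibility proxy

y = counts.ravel()
X = sm.add_constant(covariate.ravel())
cell_area = (1.0 / nbins) ** 2
fit = sm.GLM(y, X, family=sm.families.Poisson(),
             offset=np.full(y.shape, np.log(cell_area))).fit()
print(fit.params)   # [log baseline intensity, covariate coefficient]
```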
77

Assinaturas dinâmicas de um sistema coerente com aplicações / Dynamic signatures of a coherent system with applications.

Flor, José Alberto Ramos 27 February 2012 (has links)
The main goal of this work is to analyse the signature structure in a general context that considers time dynamics and stochastic dependence, using the martingale theory of point processes.
79

On perfect simulation and EM estimation

Larson, Kajsa January 2010 (has links)
Perfect simulation and the EM algorithm are the main topics in this thesis. In paper I, we present coupling from the past (CFTP) algorithms that generate perfectly distributed samples from the multi-type Widom-Rowlinson (W-R) model and some generalizations of it. The classical W-R model is a point process in the plane or in space consisting of points of several different types. Points of different types are not allowed to be closer than some specified distance, whereas points of the same type can be arbitrarily close. A stick-model and soft-core generalizations are also considered. Further, we generate samples without edge effects, and give a bound on how small the intensities (of the points) need to be for the algorithm to terminate. In paper II, we consider the forestry problem of estimating seedling dispersal distributions and effective plant fecundities from spatial data on adult trees and seedlings, when the origins of the seedlings are unknown. Traditional models for fecundities build on allometric assumptions, where the fecundity is related to some characteristic of the adult tree (e.g., diameter). However, the allometric assumptions are generally too restrictive and lead to unrealistic estimates. Therefore we present a new model, the unrestricted fecundity (UF) model, which uses no allometric assumptions. We propose an EM algorithm to estimate the unknown parameters. Evaluations on real and simulated data indicate better performance for the UF model. In paper III, we propose EM algorithms to estimate the passage time distribution on a graph. Data are obtained by observing a flow only at the nodes -- what happens on the edges is unknown. Therefore the sample of passage times, i.e., the times it takes for the flow to stream between two neighbors, consists of right-censored and uncensored observations, where it is sometimes unknown which is which. For discrete passage time distributions, we show that the maximum likelihood (ML) estimate is strongly consistent under certain weak conditions. We also show that our proposed EM algorithm converges to the ML estimate if the sample size is sufficiently large and the starting value is sufficiently close to the true parameter. In a special case we show that it always converges. In the continuous case, we propose an EM algorithm for fitting phase-type distributions to data.
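As a toy illustration of the continuous-case idea in paper III, the sketch below runs EM for the simplest phase-type family, a two-component hyperexponential (mixture of exponentials), on fully observed synthetic passage times; the thesis handles general phase-type distributions and censored data, which this stripped-down version does not attempt.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.concatenate([rng.exponential(1.0, 700), rng.exponential(5.0, 300)])   # synthetic passage times

p, lam = np.array([0.5, 0.5]), np.array([0.5, 2.0])      # initial mixing probabilities and rates
for _ in range(200):
    # E-step: responsibility of each exponential phase for each observation.
    dens = p * lam * np.exp(-np.outer(t, lam))            # (n, 2) weighted component densities
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing probabilities and rates from expected sufficient statistics.
    p = resp.mean(axis=0)
    lam = resp.sum(axis=0) / (resp * t[:, None]).sum(axis=0)
print("weights:", p.round(3), "rates:", lam.round(3))
```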
80

Fundamentals of Heterogeneous Cellular Networks

Dhillon, Harpreet Singh 24 February 2014 (has links)
The increasing complexity of heterogeneous cellular networks (HetNets) due to the irregular deployment of small cells demands significant rethinking in the way cellular networks are perceived, modeled and analyzed. In addition to threatening the relevance of classical models, this new network paradigm also raises questions regarding the feasibility of the state-of-the-art simulation-based approach to system design. This dissertation proposes a fundamentally new approach based on random spatial models that is not only tractable but also captures current deployment trends fairly accurately. First, this dissertation presents a general baseline model for HetNets consisting of K different types of base stations (BSs) that may differ in terms of transmit power, deployment density and target rate. Modeling the locations of each class of BSs as an independent Poisson point process (PPP) allows the derivation of surprisingly simple expressions for coverage probability and average rate. One interpretation of these results is that adding more BSs or tiers does not necessarily change the coverage probability, which indicates that fears of "interference overload" in HetNets are probably overblown. Second, a flexible notion of BS load is incorporated by introducing a new idea of conditionally thinning the interference field. For this generalized model, the coverage probability is shown to increase when lightly loaded small cells are added to the existing macrocellular networks. This is because, owing to their smaller loads, small cells typically transmit less often than macrocells, thus contributing less to the interference power. The same idea of conditional thinning is also shown to be useful in modeling non-uniform user distributions, especially when the users lie closer to the BSs. Third, the baseline model is extended to study multi-antenna HetNets, where BSs across tiers may additionally differ in terms of the number of transmit antennas, number of users served and the multi-antenna transmission strategy. Using novel tools from stochastic orders, a tractable framework is developed to compare the performance of various multi-antenna transmission strategies for a fairly general spatial model, where the BSs may follow any general stationary distribution. The analysis shows that for a given total number of transmit antennas in the network, it is preferable to spread them across many single-antenna BSs rather than concentrating them at fewer multi-antenna BSs. Fourth, accounting for the load on the serving BS, the downlink rate distribution is derived for a generalized cell selection model, where shadowing, following any general distribution, impacts cell selection while fading does not. This generalizes the baseline model and all its extensions, which either ignore the impact of channel randomness on cell selection or lump all the sources of randomness into a single random variable. As an application of these results, it is shown that in certain regimes, shadowing naturally balances load across various tiers and hence reduces the need for artificial cell selection bias. Fifth and last, a slightly futuristic scenario of self-powered HetNets is considered, where each BS is powered solely by a self-contained energy harvesting module that may differ across tiers in terms of the energy harvesting rate and energy storage capacity. Since a BS may not always have sufficient energy, it may not always be available to serve users.
This leads to a notion of availability region, which characterizes the fraction of time each type of BS can be made available under a variety of strategies. One interpretation of this result is that the self-powered BSs do not suffer performance degradation due to the unreliability associated with energy harvesting if the availability vector corresponding to the optimal system performance lies in the availability region.
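A back-of-the-envelope Monte Carlo check of the kind of claim made above: downlink coverage probability for a typical user when BS locations follow independent PPPs per tier, with Rayleigh fading, power-law path loss and max-SINR association. The densities, transmit powers, path-loss exponent and interference-limited (no-noise) setting are illustrative assumptions, not the dissertation's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

def coverage_prob(densities, powers, alpha=4.0, thresh_db=0.0, radius=2000.0, trials=4000):
    """Fraction of trials in which some BS gives SINR above the threshold (max-SINR association)."""
    thresh, area = 10 ** (thresh_db / 10), np.pi * radius**2
    covered = 0
    for _ in range(trials):
        rx = []                                      # received powers at the typical user (origin)
        for lam, p_tx in zip(densities, powers):
            n = rng.poisson(lam * area)              # PPP: Poisson number of BSs in the disc
            r = radius * np.sqrt(rng.uniform(size=n))
            h = rng.exponential(size=n)              # Rayleigh fading (power)
            rx.append(p_tx * h * r ** (-alpha))
        rx = np.concatenate(rx)
        if rx.size == 0:
            continue
        total = rx.sum()
        sinr = rx / np.maximum(total - rx, 1e-30)    # interference-limited SINR of each BS
        covered += bool(np.any(sinr > thresh))
    return covered / trials

# Macro tier only vs. macro tier plus denser, lower-power small cells:
# coverage stays roughly the same, illustrating the "adding tiers" observation.
print(coverage_prob([1e-6], [40.0]))
print(coverage_prob([1e-6, 5e-6], [40.0, 1.0]))
```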
