About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Dominated coupling from the past and some extensions of the area interaction process

Ambler, Graeme K. January 2002 (has links)
No description available.
2

Polyanalytic Bergman Kernels

Haimi, Antti January 2013 (has links)
The thesis consists of three articles concerning reproducing kernels of weighted spaces of polyanalytic functions on the complex plane. In the first paper, we study spaces of polyanalytic polynomials equipped with a Gaussian weight. In the remaining two papers, more general weight functions are considered. More precisely, we provide two methods to compute asymptotic expansions for the kernels near the diagonal and then apply the techniques to obtain estimates for reproducing kernels of polyanalytic polynomial spaces equipped with rather general weight functions.
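As a reminder for the reader (this is the standard definition from the polyanalytic-function literature, not taken from the abstract itself): a function is polyanalytic of order q when it is annihilated by the q-th power of the Cauchy-Riemann operator,

```latex
% f is q-analytic (polyanalytic of order q) on a domain in C when
\left(\frac{\partial}{\partial \bar z}\right)^{\!q} f = 0 ,
% which is equivalent to the decomposition
f(z) = \sum_{k=0}^{q-1} \bar z^{\,k}\, h_k(z), \qquad h_k \ \text{analytic}.
% For q = 1 this recovers ordinary analytic functions, so weighted
% polyanalytic Bergman spaces contain the classical weighted Bergman
% spaces as the order-one case.
```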
3

Rate Estimators for Non-stationary Point Processes

Anna N Tatara (6629942) 11 June 2019 (has links)
Non-stationary point processes are often used to model systems whose rates vary over time. Estimating the underlying rate functions is important both as input to a discrete-event simulation and for various statistical analyses. We study nonparametric estimators for the marked point process, the infinite-server queueing model, and the transitory queueing model, and conduct statistical inference for these estimators by establishing a number of asymptotic results.

For the marked point process, we consider estimating the offered load to the system over time. With direct observations of the offered load sampled at fixed intervals, we establish asymptotic consistency, rates of convergence, and asymptotic covariance through a Functional Strong Law of Large Numbers, a Functional Central Limit Theorem, and a Law of the Iterated Logarithm. We also show that there exists an asymptotically optimal interval width as the sample size approaches infinity.

The infinite-server queueing model is central to many stochastic models. In particular, the mean number of busy servers can be used as an estimator for the total load faced by a multi-server system with time-varying arrivals, and in many other applications. Through an omniscient estimator based on observing both the arrival times and service requirements for n samples of an infinite-server queue, we show asymptotic consistency and a rate of convergence. We then establish the asymptotics for a nonparametric estimator based on observations of the busy servers at fixed intervals.

The transitory queueing model is crucial when studying a transitory system, which arises when the time horizon or population is finite. We assume we observe arrival counts at fixed intervals. We first consider a natural estimator based on an underlying nonhomogeneous Poisson process. Although this estimator is asymptotically unbiased, a correction term is required to retrieve an accurate asymptotic covariance. Next, we consider a nonparametric estimator that exploits the maximum likelihood estimator of a multinomial distribution, and show that it converges appropriately to a Brownian bridge.
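The busy-server estimator for the infinite-server model can be illustrated with a small simulation: generate time-varying Poisson arrivals by thinning, attach a service time to each, and count jobs in service at a fixed time. The sinusoidal arrival rate and exponential service times below are illustrative choices, not the thesis's setup.

```python
import math
import random

def simulate_infinite_server(rate_fn, rate_max, horizon, service_fn, rng):
    """Simulate an infinite-server queue with time-varying Poisson arrivals.

    Arrivals come from thinning a homogeneous Poisson process of rate
    rate_max (which must dominate rate_fn); each arrival draws an
    independent service time.  Returns (arrival, departure) pairs.
    """
    jobs = []
    t = 0.0
    while True:
        t += rng.expovariate(rate_max)
        if t > horizon:
            break
        if rng.random() < rate_fn(t) / rate_max:   # thinning step
            jobs.append((t, t + service_fn(rng)))
    return jobs

def busy_servers(jobs, t):
    """Number of jobs in service at time t -- the natural offered-load estimator."""
    return sum(1 for a, d in jobs if a <= t < d)

rng = random.Random(7)
# sinusoidal arrival rate, bounded above by 75, exponential services
rate = lambda t: 50.0 * (1.0 + 0.5 * math.sin(t))
jobs = simulate_infinite_server(rate, 75.0, 10.0, lambda r: r.expovariate(1.0), rng)
load_estimate = busy_servers(jobs, 5.0)
```

In the thesis's setting the busy-server count would be observed at fixed intervals rather than at one instant; this sketch only shows the estimator's mechanics.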
4

Advances in Stochastic Geometry for Cellular Networks

Saha, Chiranjib 24 August 2020 (has links)
The mathematical modeling and performance analysis of cellular networks have seen a major paradigm shift with the application of stochastic geometry. The main purpose of stochastic geometry is to endow probability distributions on the locations of the base stations (BSs) and users in a network, which, in turn, provides an analytical handle on the performance evaluation of cellular networks. To preserve the tractability of analysis, the common practice is to assume complete spatial randomness of the network topology. In other words, the locations of users and BSs are modeled as independent homogeneous Poisson point processes (PPPs). Despite their usefulness, PPP-based network models fail to capture any spatial coupling between the users and BSs, which is dominant in a multi-tier cellular network (also known as a heterogeneous cellular network (HetNet)) consisting of macro and small cells. For instance, users tend to form hotspots or clusters at certain locations, and small cell BSs (SBSs) are deployed at higher densities at these hotspot locations to cater to the high data demand. Such user-centric deployments naturally couple the locations of the users and SBSs. Moreover, these spatial couplings are at the heart of the spatial models used in industry for system-level simulations and standardization purposes. This dissertation proposes fundamentally new spatial models based on stochastic geometry which closely emulate these spatial couplings and are conducive to a more realistic and fine-tuned performance analysis, optimization, and design of cellular networks. First, this dissertation proposes a new class of spatial models for HetNets where the locations of the BSs and users are assumed to be distributed as Poisson cluster processes (PCPs).
From the modeling perspective, the proposed models can capture different spatial couplings in a network topology, such as user hotspots and the user-BS coupling that occurs due to the user-centric deployment of the SBSs. The PCP-based model is a generalization of the state-of-the-art PPP-based HetNet model, in that it reduces to the PPP-based model once all spatial couplings in the network are ignored. From the stochastic geometry perspective, we have made contributions in deriving the fundamental distributional properties of the PCP, such as distance distributions and sum-product functionals, which are instrumental for the performance characterization of HetNets in terms of coverage and rate. The focus on more refined spatial models for small cells and users brings us to the second direction of the dissertation: modeling and analysis of HetNets with millimeter-wave (mm-wave) integrated access and backhaul (IAB), an emerging design concept of fifth-generation (5G) cellular networks. While the concept of network densification with small cells emerged in the fourth-generation (4G) era, small cells can be realistically deployed with IAB, since it solves the problem of high-capacity wired backhaul for SBSs by replacing the last-mile fibers with mm-wave links. We have proposed new stochastic geometry-based models for the performance analysis of IAB-enabled HetNets. Our analysis reveals some interesting system-design insights: (1) IAB HetNets can support a maximum number of users, beyond which the data rate drops below the rate of a single-tier macro-only network, and (2) there exists a saturation point of SBS density beyond which no rate gain is observed with the addition of more SBSs. The third and final direction of this dissertation is the combination of machine learning and stochastic geometry to construct a new class of data-driven network models which can be used in the performance optimization and design of a network.
As a concrete example, we investigate the classical problem of wireless link scheduling, where the objective is to choose an optimal subset of simultaneously active transmitters (Tx-s) from a ground set of Tx-s that maximizes the network-wide sum-rate. Since the optimization problem is NP-hard, we replace the computationally expensive heuristic by inferring the point patterns of the active Tx-s in the optimal subset after training a determinantal point process (DPP). Our investigations demonstrate that the DPP is able to learn the spatial interactions of the Tx-s in the optimal subset and gives a reasonably accurate estimate of the optimal subset for any new ground set of Tx-s. / Doctor of Philosophy / The high-speed global cellular communication network is one of the most important technologies, and it continues to evolve rapidly with every new generation. This evolution greatly depends on observing performance trends of emerging technologies on network models through extensive system-level simulations. Since these simulation models are extremely time-consuming and error-prone, complementary analytical models of cellular networks have long been an area of active research. These analytical models are intended to provide crisp insights into network behavior, such as the dependence of network performance metrics (such as coverage or rate) on key system-level parameters (such as transmission powers and base station (BS) density), which serve as prior knowledge for more fine-tuned simulations. Over the last decade, the analytical modeling of cellular networks has been driven by stochastic geometry. The main purpose of stochastic geometry is to endow the locations of the base stations (BSs) and users with probability distributions and then leverage the properties of these distributions to average out the spatial randomness.
This process of spatial averaging allows us to derive analytical expressions for the system-level performance metrics despite the presence of a large number of random variables (such as BS and user locations and channel gains), under some reasonable assumptions. The simplest stochastic geometry-based model of cellular networks, which is also the most tractable, is the so-called Poisson point process (PPP) based network model. In this model, users and BSs are assumed to be distributed as independent homogeneous PPPs. This is equivalent to saying that the users and BSs are placed independently and uniformly at random over the plane. The PPP-based model turned out to be a reasonably accurate representation of yesteryear's cellular networks, which consisted of a single tier of macro BSs (MBSs) intended to provide a uniform coverage blanket over the region. However, as data-hungry devices like smartphones and tablets, and applications like online gaming, continue to flood the consumer market, the network configuration is rapidly deviating from this baseline setup, with different spatial interactions between BSs and users (also termed spatial coupling) becoming dominant. For instance, the user locations are far from homogeneous, as they are concentrated in specific areas like residential and commercial zones (also known as hotspots). Further, the network, previously consisting of a single tier of MBSs, is becoming increasingly heterogeneous with the deployment of small cell BSs (SBSs) with small coverage footprints targeted at serving the user hotspots. It is not difficult to see that a network topology with these spatial couplings is quite far from the complete spatial randomness which is the basis of the PPP-based models. The key contribution of this dissertation is to enrich the stochastic geometry-based mathematical models so that they can capture the fine-grained spatial couplings between the BSs and users.
More specifically, this dissertation contributes in the following three research directions. Direction-I: Modeling Spatial Clustering. We model the locations of users and SBSs forming hotspots as Poisson cluster processes (PCPs). A PCP is a collection of offspring points located around parent points which belong to a PPP. The coupling between the locations of users and SBSs (due to their user-centric deployment) can be introduced by assuming that the user and SBS PCPs share the same parent PPP. The key contribution in this direction is the construction of a general HetNet model with a mixture of PPP- and PCP-distributed BSs and users. Note that the baseline PPP-based HetNet model appears as one of the many configurations supported by this general model. For this general model, we derive analytical expressions for performance metrics like coverage probability, BS load, and rate as functions of the coupling parameters (e.g. BS and user cluster size). Direction-II: Modeling Coupling in Wireless Backhaul Networks. While the deployment of SBSs clearly enhances the network performance in terms of coverage, one might wonder: how long can network densification with tens of thousands of SBSs keep meeting the ever-increasing data demand? It turns out that in the current network setting, where the backhaul links (i.e. the links between the BSs and the core network) are still wired, it is not feasible to densify the network beyond some limit. This backhaul bottleneck can be overcome if the backhaul links also become wireless and the backhaul and access links (the links between users and BSs) are jointly managed by an integrated access and backhaul (IAB) network. In this direction, we develop analytical models of IAB-enabled HetNets, where the key challenge is to tackle the new types of couplings which exist between the rates on the wireless access and backhaul links.
Such couplings exist due to the spatial correlation of the signal qualities of the two links and the number of users served by different BSs. Two fundamental insights obtained from this work are as follows: (1) IAB HetNets can support a maximum number of users, beyond which the network performance drops below that of a single-tier macro-only network, and (2) there exists a saturation point of SBS density beyond which no performance gain is observed with the addition of more SBSs. Direction-III: Modeling Repulsion. In this direction, we focus on modeling another aspect of spatial coupling imposed by intra-point repulsion. Consider a device-to-device (D2D) communication scenario, where some users are transmitting on-demand content locally cached in their devices using a common channel. Any reasonable multiple-access scheme will ensure that two nearby users are never simultaneously active, since they would cause severe mutual interference and thereby reduce the network-wide sum rate. The active users in the network will thus exhibit some spatial repulsion. The locations of these users can be modeled as determinantal point processes (DPPs). The key property of the DPP is that it forms a bridge between stochastic geometry and machine learning, two otherwise non-overlapping paradigms for wireless network modeling and design. The main focus in this direction is to explore the learning framework of DPPs and bring together the advantages of stochastic geometry and machine learning to construct a new class of data-driven analytical network models.
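A minimal sketch of the cluster-process idea described in Direction-I: the Thomas process below is one standard PCP, where Poisson-distributed offspring scatter around PPP parents with Gaussian offsets. All parameter values are illustrative, not taken from the dissertation.

```python
import random

def poisson(mean, rng):
    """Draw a Poisson variate by counting unit-rate exponential interarrivals."""
    n, acc = 0, rng.expovariate(1.0)
    while acc < mean:
        n += 1
        acc += rng.expovariate(1.0)
    return n

def sample_thomas_process(lam_parent, mean_offspring, sigma, width, height, rng):
    """Sample a Thomas cluster process on a [0,width] x [0,height] window.

    Parents form a homogeneous PPP of intensity lam_parent; each parent
    spawns Poisson(mean_offspring) children displaced by isotropic Gaussian
    offsets with standard deviation sigma.  Letting sigma grow (or clusters
    thin out) pushes the pattern toward the PPP baseline, mirroring how the
    PCP-based HetNet model generalizes the PPP-based one.
    """
    n_parents = poisson(lam_parent * width * height, rng)
    points = []
    for _ in range(n_parents):
        px, py = rng.uniform(0, width), rng.uniform(0, height)
        for _ in range(poisson(mean_offspring, rng)):
            points.append((px + rng.gauss(0, sigma), py + rng.gauss(0, sigma)))
    return points

rng = random.Random(3)
pts = sample_thomas_process(0.05, 8.0, 0.5, 20.0, 20.0, rng)
```

Coupling users and SBSs, as in the dissertation, would amount to drawing two offspring sets around the same parent points.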
5

Point process modeling as a framework to dissociate intrinsic and extrinsic components in neural systems

Fiddyment, Grant Michael 03 November 2016 (has links)
Understanding the factors shaping neuronal spiking is a central problem in neuroscience. Neurons may have complicated sensitivity and are often embedded in dynamic networks whose ongoing activity may influence their likelihood of spiking. One approach to characterizing neuronal spiking is the point process generalized linear model (GLM), which decomposes spike probability into explicit factors. This model represents a higher level of abstraction than biophysical models such as Hodgkin-Huxley, but benefits from principled approaches for estimation and validation. Here we address how to infer factors affecting neuronal spiking in different types of neural systems. We first extend the point process GLM, most commonly used to analyze single neurons, to model population-level voltage discharges recorded during human seizures. Both GLMs and descriptive measures reveal rhythmic bursting and directional wave propagation. However, we show that GLM estimates account for covariance between these features in a way that pairwise measures do not; failure to account for this covariance leads to confounded results. We interpret the GLM results to speculate about the mechanisms of seizure and to suggest new therapies. The second chapter highlights the flexibility of the GLM. We use this single framework to analyze enhancement, a statistical phenomenon, in three distinct systems. Here we define the enhancement score, a simple measure of shared information between spike factors in a GLM. We demonstrate how to estimate the score, including confidence intervals, using simulated data. In real data, we find that enhancement occurs prominently during human seizure, while redundancy tends to occur in mouse auditory networks. We discuss implications for physiology, particularly during seizure. In the third part of this thesis, we apply point process modeling to spike trains recorded from single units in vitro under external stimulation.
We re-parameterize models in a low-dimensional and physically interpretable way; namely, we represent their effects in principal component space. We show that this approach successfully separates the neurons observed in vitro into different classes consistent with their gene expression profiles. Taken together, this work contributes a statistical framework for analyzing neuronal spike trains and demonstrates how it can be applied to create new insights into clinical and experimental data sets.
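The decomposition of spike probability into explicit factors can be sketched with a minimal discrete-time Bernoulli GLM: the log-odds of spiking in each time bin are a baseline plus a weighted sum over recent spike history. The baseline and history weights below are illustrative values, not estimates from any data set discussed in the thesis.

```python
import math
import random

def glm_spike_prob(beta0, beta_hist, history):
    """Conditional spike probability in a discrete-time point process GLM.

    history[0] is the most recent bin's spike count; beta_hist pairs with
    it term by term.  The additive log-odds structure is what lets the
    GLM separate intrinsic (self-history) from extrinsic (network) factors.
    """
    eta = beta0 + sum(w * s for w, s in zip(beta_hist, history))
    return 1.0 / (1.0 + math.exp(-eta))

def simulate_glm(beta0, beta_hist, n_bins, rng):
    """Generate a spike train by sampling each bin from the model."""
    spikes = [0] * len(beta_hist)          # zero-padded history window
    for _ in range(n_bins):
        recent = spikes[-len(beta_hist):][::-1]   # most recent bin first
        p = glm_spike_prob(beta0, beta_hist, recent)
        spikes.append(1 if rng.random() < p else 0)
    return spikes[len(beta_hist):]

rng = random.Random(0)
# strongly negative weights on the most recent bins mimic refractoriness
train = simulate_glm(-2.0, [-5.0, -1.0, 0.5], 1000, rng)
```

Fitting, rather than simulating, such a model reduces to a standard logistic regression on lagged spike counts, which is where the principled estimation and validation machinery comes from.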
6

Bayesian point process modeling to quantify excess risk in spatial epidemiology: an analysis of stillbirths with a maternal contextual effect

Zahrieh, David 01 August 2017 (has links)
Motivated by the paucity of high-quality stillbirth surveillance data and of spatial analyses of such data, the current research sets out to quantitatively describe the pattern of stillbirth events in a way that may lead to mechanistic hypotheses. We broaden the appeal of Bayesian Poisson point process modeling to quantify excess risk while accounting for unobserved heterogeneity. We consider a practical data-analysis strategy when fitting the point process model and study the utility of parameterizing the intensity function governing the point process to include a maternal contextual effect, which accounts for variation due to multiple stillbirth events experienced by the same mother in independent pregnancies. Simulation studies suggest that our practical data-analysis strategy is reasonable and that there is a variance-bias trade-off associated with the use of a maternal contextual effect. The methodology is applied to the spatial distribution of stillbirth events in Iowa during the years 2005 through 2011, obtained using an active, statewide public health surveillance approach. Several localized areas of excess risk were identified and mapped based on model components that captured the nuanced and salient features of the data. A conditional formulation of the point process model is then considered, which has two main advantages: the ability to easily incorporate covariate information attached to both stillbirths and live births, and obviating the need to estimate the background intensity. We assess the utility of the conditional approach in the presence of unobserved heterogeneity, compare two Bayesian estimation techniques, and extend the conditional formulation to adequately capture spatio-temporal effects. The motivating study comes from the Iowa Registry for Congenital and Inherited Disorders, which has a committed interest in the surveillance and epidemiology of stillbirth in Iowa and in whether its occurrence might be geographically linked.
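One way to picture a maternal contextual effect is as a shared random effect entering the log-intensity. The exact parameterization in the thesis may differ, so treat the form below as an illustrative sketch only:

```latex
% Intensity for stillbirth events at location s, with spatial covariates
% x(s) and a maternal contextual (random) effect b_m shared by all
% pregnancies of mother m:
\lambda_m(s) = \lambda_0(s)\,\exp\!\bigl(x(s)^{\top}\beta + b_m\bigr),
\qquad b_m \sim \mathcal{N}\!\left(0, \sigma_b^2\right),
% so that repeated events from the same mother are correlated through
% b_m, while the background intensity \lambda_0(s) carries baseline risk.
```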
7

Manipulations of spike trains and their impact on synchrony analysis

Pazienti, Antonio January 2007 (has links)
The interaction between neuronal cells can be identified as the computing mechanism of the brain. Neurons are complex cells that do not operate in isolation but are organized in a highly connected network structure. There is experimental evidence that groups of neurons dynamically synchronize their activity and process brain functions at all levels of complexity. A fundamental step toward proving this hypothesis is to analyze large sets of single neurons recorded in parallel. Techniques to obtain such data are now available, but advancements are needed in the pre-processing of the large volumes of acquired data and in data analysis techniques. Major issues include extracting the signal of single neurons from the noisy recordings (referred to as spike sorting) and assessing the significance of the synchrony. This dissertation addresses these issues with two complementary strategies, both founded on the manipulation of point processes under rigorous analytical control. On the one hand, I modeled the effect of spike-sorting errors on correlated spike trains by corrupting them with realistic failures, and studied the corresponding impact on correlation analysis. The results show that correlations between multiple parallel spike trains are severely affected by spike sorting, especially by erroneously missing spikes. When this happens, sorting strategies characterized by classifying only "good" spikes (conservative strategies) lead to less accurate results than "tolerant" strategies. On the other hand, I investigated the effectiveness of methods for assessing significance that create surrogate data by displacing spikes around their original position (referred to as dithering). I provide analytical expressions for the probability of coincidence detection after dithering. The effectiveness of spike dithering in creating surrogate data strongly depends on the dithering method and on the method of counting coincidences.
Closed-form expressions and bounds are derived for the case where the dither equals the allowed coincidence interval. This work provides new insights into the methodologies of identifying synchrony in large-scale neuronal recordings, and of assessing its significance. / Information processing in the brain is carried out largely through the interaction of nerve cells, so-called neurons, which exhibit complex dynamics in their chemical and electrical properties. There is clear evidence that groups of synchronized neurons may ultimately explain how the brain functions at all levels. To answer the difficult question of how exactly the brain works, it is therefore necessary to measure the activity of many neurons simultaneously. The technical prerequisites for this have been established over recent decades by multi-electrode systems, which are now in wide use and permit simultaneous extracellular recording from up to several hundred channels. A prerequisite for the correlation analysis of many parallel recordings is the correct detection and assignment of the action potentials of individual neurons, a procedure known as spike sorting. A further challenge is the statistically sound assessment of empirically observed correlations. In this dissertation I present a theoretical study devoted to the pre-processing of data by spike sorting and its influence on the accuracy of statistical analysis methods, as well as to the effectiveness of surrogate-data generation for assessing the significance of correlations. I use two complementary strategies, both based on the analytical treatment of point-process manipulations. In a detailed study, I modeled the effect of spike sorting on correlated spike trains corrupted with realistic errors.
To compare the results of two different methods of correlation analysis on the corrupted and on the uncorrupted processes, I derive the corresponding analytical formulas. My results show that coincident activity patterns across multiple spike trains are substantially affected by spike classification. This is the case when spikes are erroneously assigned to neurons although they belong to other neurons or are noise artifacts (false-positive errors). False-negative errors (erroneously unclassified or misclassified spikes), however, have a far greater influence on the significance of the correlations. In a further study, I investigate the effectiveness of a class of surrogate methods known as dithering procedures, which destroy pairwise correlations by displacing coincident spikes from their original positions within a small time window. It turns out that the effectiveness of spike dithering for generating surrogate data depends both on the dithering method and on the method of counting coincidences. I provide analytical formulas for the probability of coincidence detection after dithering. This work offers new insights into the methods of correlation analysis on multivariate point processes, with a close examination of the different statistical influences on significance assessment. For practical applications, guidelines emerge for handling data in synchrony analysis.
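The dithering idea in the second study can be sketched directly: displace each spike of one train uniformly in a window around its original time and compare coincidence counts before and after. The trains, window widths, and coincidence-injection scheme below are illustrative, not the thesis's analytical setup.

```python
import random

def count_coincidences(train_a, train_b, window):
    """Count pairs of spikes from two trains within `window` of each other.
    Naive O(n*m) count; fine for illustration."""
    return sum(1 for a in train_a for b in train_b if abs(a - b) <= window)

def dither(train, dither_width, rng):
    """Displace each spike uniformly within +/- dither_width of its
    original position -- the surrogate-generation step whose effect on
    coincidence detection the thesis analyzes."""
    return sorted(t + rng.uniform(-dither_width, dither_width) for t in train)

rng = random.Random(42)
base = sorted(rng.uniform(0, 100.0) for _ in range(50))
# build a partner train whose first 20 spikes are injected coincidences
injected = [t + rng.uniform(-0.1, 0.1) for t in base[:20]]
partner = sorted(injected + [rng.uniform(0, 100.0) for _ in range(30)])

original = count_coincidences(base, partner, 0.2)          # >= 20 by construction
surrogate = count_coincidences(base, dither(partner, 5.0, rng), 0.2)
```

With the dither width much larger than the coincidence window, the surrogate count falls back toward the chance level, which is exactly the property that makes dithered surrogates usable for significance testing.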
8

Influence modeling in behavioral data

Li, Liangda 21 September 2015 (has links)
Understanding influence in behavioral data has become increasingly important in analyzing the causes and effects of human behaviors under various scenarios. Influence modeling enables us to learn not only how human behaviors drive the diffusion of memes spread across different kinds of networks, but also how chain reactions evolve in the sequential behaviors of people. In this thesis, I investigate appropriate probabilistic models for efficiently and effectively modeling influence, along with applications and extensions of the proposed models to behavioral data in computational sustainability and information search. One fundamental problem in influence modeling is learning the degree of influence between individuals, which we call social infectivity. In the first part of this work, we study how to efficiently and effectively learn social infectivity in diffusion phenomena in social networks and other applications. We replace the pairwise infectivity in multidimensional Hawkes processes with linear combinations of time-varying features, and optimize the associated coefficients with lasso regularization. In the second part of this work, we investigate the modeling of influence between marked events in the application of energy consumption, which tracks the diffusion of mixed daily routines of household members. Specifically, we leverage temporal and energy-consumption information recorded by smart meters in households, through a novel probabilistic model that combines marked point processes with topic models. The learned influence is intended to reveal the sequential appliance-usage patterns of household members, and thereby helps address the problem of energy disaggregation. In the third part of this work, we investigate a complex influence-modeling scenario that requires simultaneous learning of both infectivity and influence existence.
Specifically, we study the modeling of influence in search behaviors, where the influence tracks the diffusion of mixed search intents of search-engine users. We leverage temporal and textual information in query logs for influence modeling, through a novel probabilistic model that combines point processes with topic models. The learned influence is intended to link queries that serve the same information need, and thereby helps address the problem of search-task identification. Modeling influence with the Markov property also helps us understand the chain reaction in the interaction of search-engine users with a query auto-completion (QAC) engine within each query session. The fourth part of this work studies how a user's present interaction with a QAC engine influences his or her interaction in the next step. We propose a novel probabilistic model based on Markov processes, which leverages such influence to predict users' click choices among the suggested queries of QAC engines, and accordingly improves the suggestions to better satisfy users' search intents. In the fifth part of this work, we study the mutual influence between users' behaviors on query auto-completion (QAC) logs and normal click logs across different query sessions. We propose a probabilistic model to explore the correlation between users' behavior patterns on QAC and click logs, aiming to capture the mutual influence between users' behaviors in QAC and click sessions.
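The self-exciting mechanism underlying the Hawkes-process approach can be sketched in its simplest univariate form, simulated by Ogata's thinning algorithm. In the thesis's multidimensional setting the scalar infectivity is replaced by feature-based, lasso-regularized coefficients; the scalar version and all parameter values below are only illustrative.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, rng):
    """Simulate a univariate Hawkes process via Ogata's thinning.

    Intensity: lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)),
    i.e. every past event temporarily raises the rate of future events.
    """
    events, t = [], 0.0
    while True:
        # current intensity upper-bounds lambda until the next event,
        # since the excitation only decays between events
        lam_bar = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)
        if t > horizon:
            break
        lam_t = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() < lam_t / lam_bar:   # accept w.p. lambda(t)/lam_bar
            events.append(t)
    return events

rng = random.Random(11)
# stable regime since alpha / beta < 1; values illustrative
events = simulate_hawkes(0.5, 0.8, 1.0, 50.0, rng)
```

Replacing the constant alpha with a per-pair linear function of time-varying features, as the first part of the thesis does, changes the estimation problem but not this generative mechanism.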
9

Bias correction of bounded location errors in binary data

Walker, Nelson B. January 1900 (has links)
Master of Science / Department of Statistics / Trevor Hefley / Binary regression models for spatial data are commonly used in disciplines such as epidemiology and ecology. Many spatially referenced binary data sets suffer from location error, which occurs when the recorded location of an observation differs from its true location. When location error occurs, the values of the covariates associated with the true spatial locations of the observations cannot be obtained. We show how a change of support (COS) can be applied to regression models for binary data to provide bias-corrected coefficient estimates when the true values of the covariates are unavailable but the unknown locations of the observations are contained within non-overlapping polygons of any geometry. The COS accommodates spatial and non-spatial covariates and preserves the convenient interpretation of methods such as logistic and probit regression. Using a simulation experiment, we compare binary regression models with a COS to naive approaches that ignore location error. We illustrate the flexibility of the COS by modeling individual-level disease risk in a population using a binary data set where the locations of the observations are unknown but contained within administrative units. Our simulation experiment and data illustration corroborate that conventional regression models for binary data that ignore location error are unreliable, and that the COS can be used to eliminate bias while preserving model choice.
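The core of the COS idea can be sketched in a few lines: because the link function is nonlinear, averaging the success probability over candidate locations within the polygon differs from plugging in the polygon-averaged covariate, which is effectively what ignoring location error does. Everything below is an illustrative sketch under a uniform-location assumption, not the paper's estimator.

```python
import math

def cos_success_prob(beta0, beta1, covariate_values):
    """Change-of-support success probability for a binary observation
    whose true location is only known to lie in a polygon: average the
    per-location logistic probabilities over a grid of candidate
    locations, assumed equally likely."""
    probs = [1.0 / (1.0 + math.exp(-(beta0 + beta1 * x))) for x in covariate_values]
    return sum(probs) / len(probs)

def naive_success_prob(beta0, beta1, covariate_values):
    """Naive approach: evaluate the link at the polygon-averaged
    covariate, as if that average were the true covariate value."""
    xbar = sum(covariate_values) / len(covariate_values)
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * xbar)))

# a polygon whose covariate grid takes values 0 and 4 (illustrative)
cos_p = cos_success_prob(0.0, 1.0, [0.0, 4.0])      # ~0.741
naive_p = naive_success_prob(0.0, 1.0, [0.0, 4.0])  # ~0.881
```

The gap between the two probabilities is a Jensen-inequality effect; it is this mismatch that biases coefficient estimates when location error is ignored.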
10

Stochastická rekonstrukce bodových vzorků / Stochastic reconstruction of random point patterns

Koňasová, Kateřina January 2018 (has links)
Point processes serve as stochastic models for locations of objects that are randomly placed in space, e.g. the locations of trees of a given species in a forest stand, earthquake epicenters, or defect positions in industrial materials. Stochastic reconstruction is an algorithmic procedure providing independent replicates of point-process data which may be used for various purposes, e.g. testing statistical hypotheses. The main advantage of this technique is that we do not need to specify any theoretical model for the observed data; only the estimates of selected summary characteristics are employed. The main aim of this work is to discuss the possibility of extending the stochastic reconstruction algorithm to inhomogeneous point patterns.
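The model-free flavor of stochastic reconstruction can be sketched with a deliberately minimal random search: start from a uniform pattern and accept point relocations only if they bring one summary characteristic closer to its target. Real reconstructions match several characteristics (K-, J-, or nearest-neighbour functions) and use annealing schedules; this one-statistic sketch only shows the mechanism, with all values illustrative.

```python
import math
import random

def mean_nn_distance(points):
    """Summary characteristic: mean nearest-neighbour distance."""
    total = 0.0
    for i, p in enumerate(points):
        total += min(math.dist(p, q) for j, q in enumerate(points) if j != i)
    return total / len(points)

def reconstruct(target, n_points, window, n_iter, rng):
    """Greedy reconstruction toward a target summary-statistic value.

    Starts from a binomial (uniform) pattern in [0, window]^2; a proposed
    relocation of one point is kept only if it reduces the discrepancy
    between the pattern's statistic and the target.
    """
    pts = [(rng.uniform(0, window), rng.uniform(0, window)) for _ in range(n_points)]
    err = abs(mean_nn_distance(pts) - target)
    start_err = err
    for _ in range(n_iter):
        i = rng.randrange(n_points)
        old = pts[i]
        pts[i] = (rng.uniform(0, window), rng.uniform(0, window))
        new_err = abs(mean_nn_distance(pts) - target)
        if new_err < err:
            err = new_err          # accept the improving move
        else:
            pts[i] = old           # revert
    return pts, start_err, err

rng = random.Random(5)
pts, start_err, final_err = reconstruct(1.5, 20, 10.0, 300, rng)
```

Extending this to inhomogeneous patterns, the topic of the thesis, essentially requires replacing the stationary summary characteristics with their inhomogeneous counterparts.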
