31

Evolutive two-level population process and large population approximations

Méléard, Sylvie, Roelly, Sylvie January 2013 (has links)
We are interested in modeling the Darwinian evolution of a population described by two levels of biological parameters: individuals, characterized by a heritable phenotypic trait subject to mutation and natural selection, and cells within these individuals, which influence their ability to consume resources and to reproduce. Our models are rooted in the microscopic description of a random (discrete) population of individuals characterized by one or several adaptive traits, and of cells characterized by their type. The population is modeled as a stochastic point process whose generator captures the probabilistic dynamics, in continuous time, of birth, mutation, and death for individuals and of birth and death for cells. The interaction between individuals (resp. between cells) is described by a competition between individual traits (resp. between cell types). We look for tractable large-population approximations. By combining various scalings on population size, birth and death rates, and mutation step, the single microscopic model is shown to lead to contrasting nonlinear macroscopic limits of different natures: deterministic approximations, in the form of ordinary, integro-, or partial differential equations, or probabilistic ones, such as stochastic partial differential equations or superprocesses.
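As a rough illustration of the kind of microscopic model described above, the following is a minimal Gillespie-style sketch of a single-level, trait-structured birth-death process with mutation and logistic-type competition. It is not the authors' code; all rate functions, the Gaussian mutation kernel, and every parameter value are illustrative assumptions.

```python
import random

# Exact (Gillespie) simulation of a toy trait-structured population:
# individuals carry a real-valued trait; births may mutate the trait,
# and the death rate grows with population size (logistic competition).

def birth_rate(x):                # trait-dependent birth rate (assumed form)
    return max(4.0 - x * x, 0.0)

def death_rate(pop):              # natural death + competition pressure
    return 1.0 + 0.01 * (len(pop) - 1)

def simulate(pop, t_max=10.0, p_mut=0.1, sigma=0.05):
    t = 0.0
    while t < t_max and pop:
        rates = [(birth_rate(x), death_rate(pop)) for x in pop]
        total = sum(b + d for b, d in rates)
        t += random.expovariate(total)       # exponential waiting time
        u = random.uniform(0.0, total)       # choose the next event
        for i, (b, d) in enumerate(rates):
            if u < b:                        # birth, possibly mutated
                child = pop[i]
                if random.random() < p_mut:
                    child += random.gauss(0.0, sigma)
                pop.append(child)
                break
            u -= b
            if u < d:                        # death of individual i
                pop.pop(i)
                break
            u -= d
    return pop

final = simulate([0.5] * 50)
print(len(final), "individuals; mean trait",
      sum(final) / len(final) if final else float("nan"))
```

Scaling the population size up while scaling the mutation step down is what produces the deterministic or superprocess limits the abstract refers to.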
32

Asymptotic Analysis of Interference in Cognitive Radio Networks

Yaobin, Wen 05 April 2013 (has links)
The aggregate interference distribution in cognitive radio networks is studied in a rigorous, analytical way using the popular Poisson point process model. While a number of results are available for this model in non-cognitive radio networks, cognitive radio networks present extra difficulties for the analysis, mainly due to the exclusion region around the primary receiver; these are typically addressed via various ad hoc approximations (e.g., based on the interference cumulants) or via large-deviation analysis. Unlike previous studies, we do not use ad hoc approximations but rather obtain the asymptotic interference distribution in a systematic and rigorous way, with a guaranteed level of accuracy at the distribution tail. This is in contrast to large-deviation analysis, which provides only the (exponential) order of scaling but not the outage probability itself, and to cumulant-based analysis, which offers no such accuracy guarantee at the tail. Additionally, our analysis provides a number of novel insights. In particular, we demonstrate that there is a critical transition point below which the outage probability decays only polynomially but above which it decays super-exponentially. This provides a solid analytical foundation for earlier empirical observations in the literature and also reveals the typical ways in which outage events occur in different regimes. The analysis is further extended to include interference cancellation and fading (from a broad class of distributions). The outage probability is shown to scale down exponentially in the number of canceled nearest interferers in the below-critical region and does not change significantly in the above-critical one. The proposed asymptotic expressions are shown to be accurate in non-asymptotic regimes as well.
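A simple way to build intuition for this setting is direct Monte Carlo: drop a Poisson number of interferers on an annulus that excludes a guard region around the receiver, sum their path-loss powers, and read off empirical outage probabilities. The sketch below does exactly that; the density, radii, path-loss exponent, and thresholds are assumed values. Note that the exclusion radius caps the largest single-node contribution, which is the geometric source of the tail transition discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo sketch of aggregate interference at the origin from a
# Poisson field of transmitters outside an exclusion region [0, r_excl).
lam, r_excl, r_max = 0.1, 1.0, 30.0   # density, exclusion and field radii
nu = 4.0                              # path-loss exponent
area = np.pi * (r_max**2 - r_excl**2)

def aggregate_interference():
    n = rng.poisson(lam * area)
    # inverse-CDF sampling: radial density proportional to r on the annulus
    u = rng.uniform(0.0, 1.0, n)
    r = np.sqrt(r_excl**2 + u * (r_max**2 - r_excl**2))
    return np.sum(r ** (-nu))         # unit transmit power, no fading

samples = np.array([aggregate_interference() for _ in range(10000)])
for thresh in (0.5, 1.0, 2.0):
    print(thresh, np.mean(samples > thresh))   # empirical outage probability
```

Because each node contributes at most r_excl**(-nu) = 1 here, thresholds below that level can be exceeded by a single nearby interferer (polynomial-type tail), while larger thresholds require many simultaneous contributions, which is far rarer.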
33

On the separation of preferences among marked point process wager alternatives

Park, Jee Hyuk 15 May 2009 (has links)
A wager is a one-time bet, staking money on one among a collection of alternatives having uncertain reward. Wagers represent a common class of engineering decision, where “bets” are placed on the design, deployment, and/or operation of technology. Often such wagers are characterized by alternatives whose value evolves according to some future cash flow. Here, the values of specific alternatives are derived from a cash flow modeled as a stochastic marked point process. A principal difficulty with these engineering wagers is that the probability laws governing the dynamics of the random cash flow typically are not (completely) available; hence, separating the gambler’s preference among wager alternatives is quite difficult. In this dissertation, we investigate a computational approach for separating preferences among alternatives of a wager whose values evolve according to a marked point process. We are particularly concerned with separating a gambler’s preferences when the probability laws on the available alternatives are not completely specified.
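To make the setup concrete, the sketch below values two wager alternatives whose cash flows are marked point processes (Poisson arrival times carrying random reward marks) by Monte Carlo expected discounted reward. The dissertation addresses the harder case where the governing laws are not completely specified; here a fully known law is assumed purely for illustration, and all rates, mark distributions, and the discount rate are made-up values.

```python
import math, random

# Score each alternative by simulating its marked point process cash flow
# and averaging the discounted reward over many replications.

def discounted_value(rate, draw_mark, horizon=10.0, r=0.05):
    t, total = 0.0, 0.0
    while True:
        t += random.expovariate(rate)        # next cash-flow arrival
        if t > horizon:
            return total
        total += draw_mark() * math.exp(-r * t)

alternatives = {
    "steady": lambda: discounted_value(5.0, lambda: random.gauss(1.0, 0.2)),
    "risky":  lambda: discounted_value(1.0, lambda: random.gauss(5.0, 4.0)),
}
for name, sample in alternatives.items():
    values = [sample() for _ in range(5000)]
    mean = sum(values) / len(values)
    print(f"{name}: estimated expected discounted reward = {mean:.3f}")
```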
35

Attraction and repulsion : modelling interfirm interactions in geographical space

Protsiv, Sergiy January 2012 (has links)
More than three quarters of the world’s economic activity is concentrated in cities. But what drives people and firms to agglomerate in urban areas? Clearly, some places may offer inherent benefits due to the location itself, such as a mild climate or the presence of natural harbours, but that does not tell the whole story. Rather, urban areas also offer spaces for interaction among people and firms as well as proximity to potential partners, customers, and competitors, which can have a significant impact on the appeal of a location for a firm. Using multiple novel methods based on a unique, detailed geographical dataset, this dissertation explores in three studies how a location’s attractiveness is affected by the presence of nearby firms. The first study explores the influence of the density of economic activity on wages at a given location and attempts to disentangle the separate mechanisms that could be at work. The second study is concerned with the locations of foreign-owned firms and, more specifically, whether foreign-owned firms are more influenced by agglomeration benefits than domestic firms. The final study switches from modelling the effects of location to modelling the location patterns themselves using economic theory-based spatial point processes. The results of these studies make significant contributions to empirical research in both economic geography and international business, as a set of theoretical propositions is tested on a very detailed dataset using an advanced methodology. The results could also be of interest for practitioners, as the importance of location decisions is further reinforced, and for policymakers, as the analyses explore not only the benefits but also the detriments of agglomeration. Sergiy Protsiv is a researcher at the Center for Strategy and Competitiveness at the Stockholm School of Economics. He has participated in several projects on clusters and regional development, most notably the European Cluster Observatory. / Diss. Stockholm : Handelshögskolan, 2012
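A standard first diagnostic for attraction versus repulsion in a spatial point pattern of the kind studied here is Ripley's K function: comparing the observed K against the Poisson (complete spatial randomness) benchmark. The sketch below, with synthetic stand-in coordinates and no edge correction, is only a hedged illustration of that idea, not the dissertation's methodology.

```python
import numpy as np

rng = np.random.default_rng(1)

# Naive Ripley's K estimate on the unit square (edge effects ignored):
# K_hat(r) = area * (# ordered pairs closer than r) / n^2.
def ripley_k(points, radii, area=1.0):
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude self-pairs
    lam = n / area                            # estimated intensity
    return np.array([(d < r).sum() / (n * lam) for r in radii])

pts = rng.uniform(0, 1, size=(300, 2))        # stand-in for firm coordinates
radii = np.array([0.02, 0.05, 0.10])
k_hat = ripley_k(pts, radii)
k_csr = np.pi * radii**2                      # Poisson (CSR) benchmark
print(np.round(k_hat, 4), np.round(k_csr, 4))
# K_hat > K_csr suggests attraction (clustering);
# K_hat < K_csr suggests repulsion (inhibition).
```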
36

On the role of non-uniform smoothness parameters and the probabilistic method in applications of the Stein-Chen Method

Weinberg, Graham Victor Unknown Date (has links) (PDF)
The purpose of the research presented here is twofold. The first component explores the probabilistic interpretation of Stein’s method, as introduced in Barbour (1988). This is done in the setting of random variable approximations. This probabilistic method, in which the Stein equation is interpreted in terms of the generator of an underlying birth and death process whose equilibrium distribution equals that of the approximant, provides a natural explanation of why Stein’s method works. An open problem has been to use this generator approach to obtain bounds on the differences of the solution to the Stein equation. Uniform bounds on these differences produce the Stein “magic” factors that control the approximation error. With the choice of unit per capita death rate for the birth and death process, we are able to produce a result giving a new Stein factor bound, which applies to a selection of distributions. The proof is via a probabilistic approach, and we also include a probabilistic proof of a Stein factor bound from Barbour, Holst and Janson (1992). These results generalise the work of Xia (1999), which applies to the Poisson distribution with unit per capita death rate. (For the complete abstract, open the document.)
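For readers unfamiliar with Stein factors, here is a hedged numerical sketch (not from the thesis) of the Poisson case: solve the Stein equation lam*f(j+1) - j*f(j) = 1_A(j) - P(Z in A), Z ~ Pois(lam), via its standard tail-sum solution, and compare the largest increment |f(j+1) - f(j)| against the classical bound (1 - exp(-lam))/lam. The test sets A and the value of lam are arbitrary choices.

```python
import math

lam = 3.0

def pois_pmf(k):
    return math.exp(-lam) * lam**k / math.factorial(k)

def f_at(A, j):
    """Return f(j+1) via the tail-sum form, which is numerically stable:
    f(j+1) = -sum_{k>j} (j! lam^(k-j-1)/k!) (1_A(k) - P(Z in A))."""
    piA = sum(pois_pmf(k) for k in A)
    total, w = 0.0, 1.0 / (j + 1)            # weight of the k = j+1 term
    for k in range(j + 1, j + 200):
        total += w * ((1.0 if k in A else 0.0) - piA)
        w *= lam / (k + 1)                   # recurrence for the weights
    return -total

bound = (1.0 - math.exp(-lam)) / lam         # classical Stein "magic" factor
worst = max(abs(f_at(A, j) - f_at(A, j - 1))
            for A in [{0}, {2}, {5}, {0, 1, 2}]   # illustrative test sets
            for j in range(1, 25))
print(f"max |f(j+1) - f(j)| = {worst:.4f} <= Stein factor {bound:.4f}")
```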
37

Modeling Time Series and Sequences: Learning Representations and Making Predictions

Lian, Wenzhao January 2015 (has links)
The analysis of time series and sequences has been challenging in both the statistics and machine learning communities because of properties including high dimensionality, pattern dynamics, and irregular observations. In this thesis, novel methods are proposed to handle these difficulties, enabling representation learning (dimension reduction and pattern extraction) and prediction (classification and forecasting). This thesis consists of three main parts.

The first part analyzes multivariate time series, which are often non-stationary due to high levels of ambient noise and various interferences. We propose a nonlinear dimensionality reduction framework using diffusion maps on a learned statistical manifold, which gives rise to a low-dimensional representation of the high-dimensional non-stationary time series. We show that diffusion maps, with affinity kernels based on the Kullback-Leibler divergence between the local statistics of samples, allow for efficient approximation of pairwise geodesic distances. To construct the statistical manifold, we estimate time-evolving parametric distributions by designing a family of Bayesian generative models. The proposed framework can be applied to problems in which the time-evolving distributions (of temporally localized data), rather than the samples themselves, are driven by a low-dimensional underlying process. We provide efficient parameter estimation and dimensionality reduction methodology and apply it to two applications: music analysis and epileptic-seizure prediction.

The second part focuses on a time series classification task, where we want to leverage temporal dynamics in the classifier design. In many time series classification problems, including fraud detection, a low false alarm rate is required while the positive detection rate is kept high. We therefore directly optimize the partial area under the curve (PAUC), which maximizes accuracy in low false alarm rate regions. Latent variables are introduced to incorporate the temporal information while keeping the max-margin formulation solvable. An optimization routine is proposed and its properties analyzed; the algorithm is designed to scale to web-scale data. Simulation results demonstrate the effectiveness of optimizing performance in the low false alarm rate regions.

The third part focuses on pattern extraction from correlated point process data, which consist of multiple correlated sequences observed at irregular times. The analysis of correlated point process data has wide applications, ranging from biomedical research to network analysis. We model such data as generated by a latent collection of continuous-time binary semi-Markov processes, corresponding to external events appearing and disappearing. A continuous-time modeling framework is more appropriate for multichannel point process data than a binning approach requiring time discretization, and we show connections between our model and recent ideas from the discrete-time literature. We describe an efficient MCMC algorithm for posterior inference, and apply our ideas to both synthetic data and a real-world biometrics application. / Dissertation
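The diffusion-maps construction in the first part can be sketched in a few lines: compute local window statistics, build an affinity kernel from a symmetrized KL divergence between them, row-normalize into a Markov operator, and embed with its leading non-trivial eigenvectors. The sketch below uses Gaussian window statistics on synthetic data; the signal, window width, and kernel bandwidth eps are illustrative assumptions, not the thesis's models.

```python
import numpy as np

rng = np.random.default_rng(2)

def window_stats(x, width=50):
    w = x[: len(x) // width * width].reshape(-1, width)
    return w.mean(axis=1), w.var(axis=1) + 1e-6

def sym_kl_gauss(m1, v1, m2, v2):
    # symmetrized KL divergence between N(m1, v1) and N(m2, v2)
    kl = lambda ma, va, mb, vb: 0.5 * (va / vb + (mb - ma) ** 2 / vb
                                       - 1 + np.log(vb / va))
    return kl(m1, v1, m2, v2) + kl(m2, v2, m1, v1)

x = np.sin(np.linspace(0, 60, 5000)) + 0.3 * rng.standard_normal(5000)
mu, var = window_stats(x)
n, eps = len(mu), 1.0
D = np.array([[sym_kl_gauss(mu[i], var[i], mu[j], var[j]) for j in range(n)]
              for i in range(n)])
K = np.exp(-D / eps)                    # affinity kernel on local statistics
P = K / K.sum(axis=1, keepdims=True)    # row-stochastic diffusion operator
eigvals, eigvecs = np.linalg.eig(P)
order = np.argsort(-eigvals.real)
embedding = eigvecs.real[:, order[1:3]] * eigvals.real[order[1:3]]
print(embedding.shape)                  # 2-D diffusion coordinates per window
```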
38

Advances in point process filters and their application to sympathetic neural activity

Zaydens, Yevgeniy 12 March 2016 (has links)
This thesis is concerned with the development of techniques for analyzing the sequences of stereotypical electrical impulses within neurons known as spikes. Sequences of spikes, also called spike trains, transmit neural information; decoding them often provides details about the physiological processes generating the neural activity. Here, the statistical theory of event arrivals, called point processes, is applied to human muscle sympathetic spike trains, a peripheral nerve signal responsible for cardiovascular regulation. A novel technique that uses observed spike trains to dynamically derive information about the physiological processes generating them is also introduced. Despite the emerging usage of individual spikes in the analysis of human muscle sympathetic nerve activity, the majority of studies in this field remain focused on bursts of activity at or below cardiac rhythm frequencies. Point process theory applied to multi-neuron spike trains captured both fast and slow spiking rhythms. First, analysis of high-frequency spiking patterns within cardiac cycles was performed and, surprisingly, revealed fibers with no cardiac rhythmicity. Modeling spikes as a function of average firing rates showed that individual nerves contribute substantially to the differences in the sympathetic stressor response across experimental conditions. Subsequent investigation of low-frequency spiking identified two physiologically relevant frequency bands, and modeling spike trains as a function of hemodynamic variables uncovered complex associations between spiking activity and biophysical covariates at these two frequencies. For example, exercise-induced neural activation enhances the relationship of spikes to respiration but does not affect the extremely precise alignment of spikes to diastolic blood pressure. Additionally, a novel method of utilizing point process observations to estimate an internal state process with partially linear dynamics was introduced. Separation of the linear components of the process model and reduction of the sampled space dimensionality improved the computational efficiency of the estimator. The method was tested on an established biophysical model by concurrently computing the dynamic electrical currents of a simulated neuron and estimating its conductance properties. Computational load reduction, improved accuracy, and applicability outside neuroscience establish the new technique as a valuable tool for decoding large dynamical systems with linear substructure and point process observations.
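As background for the filtering theme, the following is a hedged sketch of a discrete-time point-process filter of the general kind used for spike-train decoding: a latent state follows a random walk and modulates a log-linear conditional intensity, and each step performs a Gaussian predict/update from the observed spike counts. All parameter values are illustrative, and this is not the thesis's estimator for partially linear dynamics.

```python
import numpy as np

rng = np.random.default_rng(3)

dt, T = 1e-3, 20000
mu, beta, q = np.log(20.0), 1.0, 1e-5    # 20 Hz baseline, state-noise variance

# simulate a slow latent state and spikes driven by lam = exp(mu + beta*x)
x_true = np.cumsum(rng.normal(0.0, np.sqrt(q), T))
spikes = rng.random(T) < np.exp(mu + beta * x_true) * dt

x_est, v_est = 0.0, 1.0
xs = np.empty(T)
for k in range(T):
    v_pred = v_est + q                               # predict (random walk)
    lam = np.exp(mu + beta * x_est) * dt             # expected count this bin
    v_est = 1.0 / (1.0 / v_pred + beta**2 * lam)     # posterior variance
    x_est = x_est + v_est * beta * (spikes[k] - lam) # posterior mean
    xs[k] = x_est

print("corr(x_true, x_est) =", np.corrcoef(x_true, xs)[0, 1].round(3))
```

The innovation term (spikes[k] - lam) plays the role of the Kalman residual, with the spike count replacing a Gaussian observation.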
39

Modelování náhodných mozaik / Random tessellations modeling

Seitl, Filip January 2018 (has links)
The motivation for this work comes from physics, when dealing with the microstructures of polycrystalline materials. An adequate probabilistic model is a three-dimensional (3D) random tessellation. The author's original contribution is the treatment of Gibbs-Voronoi and Gibbs-Laguerre tessellations in 3D, where the latter model is completely new. The energy function of the underlying Gibbs point process reflects interactions between geometrical characteristics of grains. The aims are simulation, parameter estimation, and goodness-of-fit testing. The mathematical background for the methods is described, and numerical results based on simulated data are presented in the form of tables and graphs. The interpretation of the results confirms that the Gibbs-Laguerre model is promising for further investigation and applications.
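Gibbs point processes of this kind are typically simulated by Metropolis-Hastings birth/death moves. The 2D toy sketch below (the thesis works in 3D with Laguerre geometry and energies on cell characteristics) uses a simple soft-core pair potential as a stand-in for grain interactions; the potential, its range, and the activity z are assumed values.

```python
import random, math

# Birth-death Metropolis-Hastings for a Gibbs point process on the unit
# square with density proportional to z^n * exp(-E(x)) w.r.t. a unit-rate
# Poisson process, where E penalizes pairs closer than r0.
theta, r0, z, area = 2.0, 0.05, 100.0, 1.0

def pair_energy(p, q):
    return theta if math.dist(p, q) < r0 else 0.0    # soft-core penalty

def local_energy(p, pts):
    return sum(pair_energy(p, q) for q in pts if q is not p)

pts = []
for _ in range(50000):
    if not pts or random.random() < 0.5:             # propose a birth
        p = (random.random(), random.random())
        ratio = z * area / (len(pts) + 1) * math.exp(-local_energy(p, pts))
        if random.random() < min(1.0, ratio):
            pts.append(p)
    else:                                            # propose a death
        i = random.randrange(len(pts))
        ratio = len(pts) / (z * area) * math.exp(local_energy(pts[i], pts))
        if random.random() < min(1.0, ratio):
            pts.pop(i)
print(len(pts), "points after MCMC")
```

In the Laguerre setting the accepted points would additionally carry weights (radii), and the energy would be evaluated on the characteristics of the resulting cells rather than on pairwise distances.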
40

Teste para avaliar a propriedade de incrementos independentes em um processo pontual / Test to evaluate the property of independent increments in a point process

Francys Andrews de Souza 26 June 2013 (has links)
In econometrics, a topic that has become essential over the years is ultra-high-frequency analysis, that is, the analysis of trade-by-trade transactions. It has proven fundamental in modeling intraday market microstructure; even so, the theory around this subject remains scarce and is growing only slowly. We develop a hypothesis test to verify whether ultra-high-frequency data exhibit independent and stationary increments, which is of great importance in this setting, since many works are based on that hypothesis. Moreover, Grimshaw et al. (2005)[6] showed that when a continuous probability distribution is used to model economic data, one generally estimates an increasing intensity function, a biased result caused by rounding. We therefore work with discrete distributions to circumvent this problem entailed by the use of continuous distributions.
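A hedged sketch of one simple check in the spirit of this thesis: bin a point process into equal windows and test whether consecutive increment counts are uncorrelated, using a permutation test. A stationary Poisson process has independent increments, so its counts should pass, while a clustered process tends not to. The window width and the synthetic data below are illustrative assumptions, not the dissertation's test.

```python
import numpy as np

rng = np.random.default_rng(4)

def lag1_perm_test(counts, n_perm=2000):
    obs = np.corrcoef(counts[:-1], counts[1:])[0, 1]   # lag-1 correlation
    null = np.empty(n_perm)
    for b in range(n_perm):
        perm = rng.permutation(counts)                 # break any dependence
        null[b] = np.corrcoef(perm[:-1], perm[1:])[0, 1]
    return obs, np.mean(np.abs(null) >= abs(obs))      # two-sided p-value

# Poisson arrivals (independent increments) vs. a clustered process
t_pois = np.cumsum(rng.exponential(1.0, 2000))
t_clus = np.sort(np.concatenate([c + rng.exponential(0.2, 5)
                                 for c in np.cumsum(rng.exponential(5.0, 400))]))
for name, t in [("poisson", t_pois), ("clustered", t_clus)]:
    counts = np.histogram(t, bins=np.arange(0.0, t[-1], 2.0))[0]
    print(name, lag1_perm_test(counts))
```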
