21 |
Radio frequency interference modeling and mitigation in wireless receivers. Gulati, Kapil, 21 October 2011.
In wireless communication systems, receivers have generally been designed under the assumption that the additive noise in the system is Gaussian. Wireless receivers, however, are affected by radio frequency interference (RFI) generated from various sources, such as other wireless users, switching electronics, and computational platforms. RFI is well modeled with non-Gaussian impulsive statistics and can severely degrade the communication performance of wireless receivers designed under the assumption of additive Gaussian noise.
Methods to avoid, cancel, or reduce RFI have been an active area of research over the past three decades. In practice, RFI cannot be completely avoided or canceled at the receiver. This dissertation derives the statistics of the residual RFI and utilizes them to analyze and improve the communication performance of wireless receivers. The primary contributions of this dissertation are to (i) derive instantaneous statistics of co-channel interference in a field of Poisson and Poisson-Poisson clustered interferers, (ii) characterize the throughput, delay, and reliability of decentralized wireless networks with temporal correlation, and (iii) design pre-filters to mitigate RFI in wireless receivers.
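To make the Poisson-field setting concrete, here is a minimal Monte Carlo sketch (not taken from the dissertation) of aggregate co-channel interference at a receiver surrounded by a Poisson field of interferers; the node density, guard radius, path-loss exponent, and Rayleigh fading are illustrative assumptions. The impulsive, non-Gaussian character of the samples shows up as a kurtosis far above the Gaussian value.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_interference(n_trials=20_000, density=1e-4, radius=1_000.0,
                           guard=1.0, pathloss_exp=4.0):
    """In-phase aggregate interference at the origin from a Poisson field of nodes."""
    area = np.pi * (radius**2 - guard**2)
    samples = np.empty(n_trials)
    for t in range(n_trials):
        k = rng.poisson(density * area)                         # number of active interferers
        r = np.sqrt(rng.uniform(guard**2, radius**2, size=k))   # uniform points in the annulus
        fading = rng.rayleigh(scale=1.0, size=k)                # assumed amplitude fading
        phase = rng.uniform(0.0, 2.0 * np.pi, size=k)
        samples[t] = np.sum(fading * np.cos(phase) / r**(pathloss_exp / 2.0))
    return samples

I = aggregate_interference()
kurt = np.mean((I - I.mean())**4) / np.var(I)**2
print(f"sample kurtosis: {kurt:.1f} (a Gaussian would give about 3)")
```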
|
22 |
Monte Carlo Methods for Stochastic Differential Equations and their Applications. Leach, Andrew Bradford, January 2017.
We introduce computationally efficient Monte Carlo methods for studying the statistics of stochastic differential equations in two distinct settings. In the first, we derive importance sampling methods for data assimilation when the noise in the model and observations is small. The methods are formulated in discrete time, where the "posterior" distribution we want to sample from can be analyzed in an accessible small-noise expansion. We show that a "symmetrization" procedure akin to antithetic coupling can improve the order of accuracy of the sampling methods, which is illustrated with numerical examples. In the second setting, we develop "stochastic continuation" methods to estimate level sets of statistics of stochastic differential equations with respect to their parameters. We adapt Keller's pseudo-arclength continuation method to this setting using stochastic approximation and generalized least squares regression. Furthermore, we show that the methods can be improved through the use of coupling to reduce the variance of the derivative estimates involved.
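As a rough illustration of the symmetrization idea, the sketch below estimates E[f(X_T)] for an Euler-Maruyama discretization of a toy SDE, pairing each Brownian path with its sign-flipped copy as in plain antithetic coupling; the Ornstein-Uhlenbeck drift, noise level, and test function are assumptions, and this is not the dissertation's higher-order construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_maruyama(x0, drift, sigma, dt, dW):
    """Terminal value of dX = drift(X) dt + sigma dW for the given increments."""
    x = x0
    for inc in dW:
        x = x + drift(x) * dt + sigma * inc
    return x

def estimate(f, antithetic, n_paths=5_000, x0=1.0, sigma=0.1, T=1.0, n_steps=100):
    """Estimate E[f(X_T)] with plain or antithetic ("symmetrized") sampling."""
    dt = T / n_steps
    drift = lambda x: -x                      # toy Ornstein-Uhlenbeck drift (assumed)
    vals = []
    for _ in range(n_paths):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)
        x_plus = euler_maruyama(x0, drift, sigma, dt, dW)
        if antithetic:
            # Sign-flipped path; the pair average cancels leading odd-order terms.
            x_minus = euler_maruyama(x0, drift, sigma, dt, -dW)
            vals.append(0.5 * (f(x_plus) + f(x_minus)))
        else:
            vals.append(f(x_plus))
    vals = np.asarray(vals)
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n_paths)

for flag in (False, True):
    mean, stderr = estimate(np.exp, antithetic=flag)
    print(f"antithetic={flag}: estimate={mean:.5f}, std. error={stderr:.2e}")
```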
|
23 |
Optimization under Uncertainty with Applications in Data-driven Stochastic Simulation and Rare-event Estimation. Zhang, Xinyu, January 2022.
For many real-world problems, optimization can only be formulated with partial information or subject to uncertainty, due to reasons such as data measurement error, model misspecification, or a formulation that depends on a non-stationary future. One is thus often required to make decisions without knowing the problem's full picture. This dissertation considers the robust optimization framework, a worst-case perspective that characterizes uncertainty as feasible regions and optimizes over the worst possible scenarios. Two applications of this worst-case perspective are discussed: stochastic estimation and rare-event simulation.
Chapters 2 and 3 discuss a min-max framework to enhance existing estimators for simulation problems that involve a bias-variance tradeoff. Biased stochastic estimators, such as finite differences for noisy gradient estimation, often contain parameters that must be chosen properly to balance the impacts of bias and variance. While the optimal order of these parameters in terms of the simulation budget can be readily established, the precise best values depend on model characteristics that are typically unknown in advance. We introduce a framework to construct new classes of estimators, based on judicious combinations of simulation runs on sequences of tuning parameter values, such that the estimators consistently outperform a given tuning parameter choice in the conventional approach, regardless of the unknown model characteristics. We quantify this outperformance via what we call the asymptotic minimax risk ratio, obtained by minimizing the worst-case asymptotic ratio between the mean squared errors of our estimators and the conventional one, where the worst case is over all possible values of the model unknowns. In particular, when the minimax ratio is less than 1, the calibrated estimator is guaranteed to perform better asymptotically. We identify this minimax ratio for general classes of weighted estimators and the regimes where this ratio is less than 1. Moreover, we show that the best weighting scheme is characterized by a sum of two components with distinct decay rates. We explain how this arises from bias-variance balancing that combats the adversarial selection of the model constants, which can be analyzed via a tractable reformulation of a non-convex optimization problem.
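The finite-difference example can be made concrete with a small sketch: a central-difference gradient estimate of a noisy objective has bias that grows with the perturbation size h and variance that blows up as h shrinks. The weighted two-point combination at the end only illustrates the idea of combining runs at several tuning-parameter values; the weights are ad hoc, not the minimax-optimal scheme derived in these chapters, and the objective and noise model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_f(x, noise_sd=0.1):
    """Assumed simulation oracle: smooth objective observed with additive noise."""
    return np.sin(x) + rng.normal(0.0, noise_sd)

def central_diff(x, h):
    """Central finite-difference gradient estimate with perturbation size h."""
    return (noisy_f(x + h) - noisy_f(x - h)) / (2.0 * h)

x0, true_grad, n_rep = 0.7, np.cos(0.7), 2_000

# Bias-variance tradeoff in the tuning parameter h.
for h in (1.0, 0.3, 0.1, 0.01):
    est = np.array([central_diff(x0, h) for _ in range(n_rep)])
    print(f"h={h:5.2f}  bias={est.mean() - true_grad:+.4f}  variance={est.var():.4f}")

# Ad hoc combination of two perturbation sizes (illustration of the idea only).
combo = np.array([0.7 * central_diff(x0, 0.1) + 0.3 * central_diff(x0, 0.3)
                  for _ in range(n_rep)])
print(f"combined  bias={combo.mean() - true_grad:+.4f}  variance={combo.var():.4f}")
```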
Chapters 4 and 5 discuss extreme event estimation using a distributionally robust optimization framework. Conventional methods for extreme event estimation rely on well-chosen parametric models asymptotically justified by extreme value theory (EVT). These methods, while powerful and theoretically grounded, can encounter difficult bias-variance tradeoffs that are exacerbated when the data size is small, deteriorating the reliability of the tail estimation. These chapters study a framework based on the rapidly growing literature on distributionally robust optimization. This approach can be viewed as a nonparametric alternative to conventional EVT: it imposes general shape beliefs on the tail instead of parametric assumptions and uses worst-case optimization to handle the nonparametric uncertainty. We explain how this approach bypasses the bias-variance tradeoff in EVT. On the other hand, we face a conservativeness-variance tradeoff, which we describe how to tackle. We also demonstrate computational tools for the optimization problems involved and compare our performance with conventional EVT across a range of numerical examples.
|
24 |
Model-Free Controller Design based on Simultaneous Perturbation Stochastic Approximation / 同時摂動確率近似に基づくモデルフリー型制御器設計. Mohd, Ashraf bin Ahmad, 23 March 2015.
Kyoto University / Doctor of Informatics (甲第19125号 / 情博第571号) / Graduate School of Informatics, Department of Systems Science, Kyoto University / Examining committee: Professor 杉江 俊治, Professor 石井 信, Professor 加納 学, Associate Professor 東 俊一 / Doctor of Informatics / Kyoto University / DFAM
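Since only the title and degree record are reproduced here, the following is a generic simultaneous perturbation stochastic approximation (SPSA) sketch in the spirit of Spall's algorithm rather than the controller-design procedure developed in the thesis; the stand-in cost function, the "target" gains, and the gain sequences are all assumptions. It shows the defining feature of SPSA that makes model-free controller tuning practical: each iteration estimates the full gradient from only two noisy cost measurements.

```python
import numpy as np

rng = np.random.default_rng(3)

def closed_loop_cost(theta, noise_sd=0.05):
    """Assumed stand-in for an experiment: measured cost of controller gains theta."""
    target = np.array([1.2, 0.4])                # hypothetical 'good' gains
    return np.sum((theta - target) ** 2) + rng.normal(0.0, noise_sd)

def spsa(theta0, n_iter=1_000, a=0.1, c=0.1, alpha=0.602, gamma=0.101, A=50.0):
    """Simultaneous Perturbation Stochastic Approximation with Spall-style gain sequences."""
    theta = np.array(theta0, dtype=float)
    for k in range(n_iter):
        ak = a / (k + 1 + A) ** alpha             # decreasing step size
        ck = c / (k + 1) ** gamma                 # decreasing perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.size)   # Rademacher perturbation
        y_plus = closed_loop_cost(theta + ck * delta)
        y_minus = closed_loop_cost(theta - ck * delta)
        ghat = (y_plus - y_minus) / (2.0 * ck * delta)     # gradient from two measurements
        theta -= ak * ghat
    return theta

print("tuned gains:", spsa([0.0, 0.0]))
```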
|
25 |
A Nonlinear Stochastic Optimization Framework For RED. Patro, Rajesh Kumar, 12 1900.
No description available.
|
26 |
On some problems in the simulation of flow and transport through porous media. Thomas, Sunil George, 20 October 2009.
The dynamic solution of multiphase flow through porous media is of special interest to several fields of science and engineering, such as petroleum, geology and geophysics, biomedical, civil and environmental, and chemical engineering, among many other disciplines. A natural application is the modeling of the flow of two immiscible fluids (phases) in a reservoir. Other, more broadly based applications considered in this work include the hydrodynamic dispersion (as in reactive transport) of a solute or tracer chemical through a fluid phase. Reservoir properties like permeability and porosity greatly influence the flow of these phases. Often, these vary across several orders of magnitude and can be discontinuous functions. Furthermore, they are generally not known to a desired level of accuracy or detail, and special inverse problems need to be solved in order to obtain their estimates. Based on the physics dominating a given sub-region of the porous medium, numerical solutions to such flow problems may require different discretization schemes or different governing equations in adjacent regions. The need to couple solutions across such schemes gives rise to challenging domain decomposition problems. Finally, on an application level, present-day environmental concerns have resulted in a widespread increase in CO₂ capture and storage experiments across the globe. This presents a huge modeling challenge for the future.

This research work is divided into sections that study various inter-connected problems of significance in sub-surface porous media applications. The first section studies an application of mortar (as well as non-mortar, i.e., enhanced velocity) mixed finite element methods (MMFEM and EV-MFEM) to problems in porous media flow. The mortar spaces are first used to develop a multiscale approach for parabolic problems in porous media applications. The implementation of the mortar mixed method is presented for two-phase immiscible flow, and some a priori error estimates are then derived for the case of slightly compressible single-phase Darcy flow. Following this, the problem of modeling flow coupled to reactive transport is studied. Applications of such problems include modeling bio-remediation of oil spills and other subsurface hazardous wastes, angiogenesis in the transition of tumors from a dormant to a malignant state, contaminant transport in groundwater flow, and acid injection around well bores to increase the permeability of the surrounding rock. Several numerical results are presented that demonstrate the efficiency of the method compared to traditional approaches. The section following this examines (non-mortar) enhanced velocity finite element methods for solving multiphase flow coupled to species transport on non-matching multiblock grids. The results from this section indicate that this is the recommended method of choice for such problems.

Next, a mortar finite element method is formulated and implemented that extends the scope of the classical mortar mixed finite element method developed by Arbogast et al. [12] for elliptic problems and Girault et al. [62] for coupling different numerical discretization schemes. Some significant areas of application include the coupling of pore-scale network models with classical continuum models for steady single-phase Darcy flow, as well as the coupling of different numerical methods, such as discontinuous Galerkin and mixed finite element methods, in different sub-domains for the case of single-phase flow [21, 109]. These hold promise for applications where a high level of detail and accuracy is desired in one part of the domain (often associated with very small length scales, as in pore-scale network models) and a much lower level of detail in other parts of the domain (at much larger length scales). Examples include modeling of the flow around well bores or through faulted reservoirs.

The next section presents a parallel stochastic approximation method [68, 76] applied to inverse modeling and gives several promising results that address the problem of uncertainty associated with the parameters governing multiphase flow partial differential equations. For example, medium properties such as absolute permeability and porosity greatly influence the flow behavior, but are rarely known to even a reasonable level of accuracy and are very often upscaled to large areas or volumes based on seismic measurements at discrete points. The results in this section show that, by using a few measurements of the primary unknowns in multiphase flow, such as fluid pressures and concentrations, as well as well-log data, one can define an objective function of the medium properties to be determined, which is then minimized using (as in this case) a stochastic analog of Newton's method.
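As a toy illustration of that inverse-modeling idea (not the dissertation's parallel algorithm), the sketch below defines a least-squares misfit between noisy "simulator" output and synthetic observations and minimizes it with a damped Gauss-Newton iteration whose Jacobian is built from finite differences of the noisy forward model; the quadratic forward model, the noise level, and the averaging are assumptions standing in for a real multiphase-flow simulator and the stochastic Newton analog used in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(4)

def forward_model(theta, noise_sd=0.01):
    """Assumed toy simulator: maps two medium parameters to three noisy 'well' pressures."""
    A = np.array([[1.0, 0.3], [0.2, 1.5], [0.7, 0.4]])
    return A @ theta + 0.1 * np.sum(theta**2) + rng.normal(0.0, noise_sd, size=3)

true_theta = np.array([2.0, -1.0])
observations = forward_model(true_theta, noise_sd=0.0)    # synthetic measurements

def residuals(theta, n_avg=8):
    """Average a few noisy simulator runs, then subtract the observations."""
    runs = [forward_model(theta) for _ in range(n_avg)]
    return np.mean(runs, axis=0) - observations

theta, h, damping = np.zeros(2), 0.1, 1e-2
for _ in range(20):
    r = residuals(theta)
    # Finite-difference Jacobian of the residuals (one column per parameter).
    J = np.column_stack([(residuals(theta + h * e) - r) / h for e in np.eye(2)])
    # Damped Gauss-Newton step on the misfit 0.5 * ||r||^2.
    theta = theta - np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ r)

print("estimated parameters:", np.round(theta, 3), " true:", true_theta)
```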
The last section is devoted to a significant and current application area. It presents a parallel and efficient iteratively coupled implicit pressure, explicit concentration (IMPEC) formulation [52-54] for non-isothermal compositional flow problems. The goal is to perform predictive modeling simulations for CO₂ sequestration experiments.

While the sections presented in this work cover a broad range of topics, they are tied to each other and serve the unifying, ultimate goal of developing a complete and robust reservoir simulator. The major results of this work, particularly in the application of MMFEM and EV-MFEM to multiphysics couplings of multiphase flow and transport, as well as in the modeling of EOS non-isothermal compositional flow applied to CO₂ sequestration, suggest that multiblock/multimodel methods applied in a robust parallel computational framework are invaluable when attempting to solve problems as described in Chapter 7. As an example, one may consider a closed-loop control system for managing oil production or CO₂ sequestration experiments in huge formations (the "instrumented oil field"). Most of the computationally costly activity occurs around a few wells. Thus one has to be able to seamlessly connect the above components while running many forward simulations on parallel clusters in a multiblock and multimodel setting, where most domains employ an isothermal single-phase flow model except a few around well bores that employ, say, a non-isothermal compositional model. Simultaneously, cheap and efficient stochastic methods, as in Chapter 8, may be used to generate history matches of well- and/or sensor-measured solution data, to arrive at better estimates of the medium properties on the fly. This is beyond the scope of the current work but represents the over-arching goal of this research.
|
27 |
Analog Signal Processor for Adaptive Antenna Arrays. Hossu, Mircea, January 2007.
An analog circuit for beamforming in a mobile Ku-band satellite TV antenna array has been implemented. The circuit performs continuous-time gradient descent using simultaneous perturbation gradient estimation. Simulations were performed using the Agilent ADS circuit simulator. Field tests were performed in a realistic scenario using a satellite signal. The results were comparable to the simulation predictions and to results obtained using a digital implementation of a similar stochastic approximation algorithm.
|
29 |
Population SAMC, ChIP-chip Data Analysis and Beyond. Wu, Mingqi, December 2010.
This dissertation research consists of two topics: population stochastic approximation Monte Carlo (Pop-SAMC) for Bayesian model selection problems, and ChIP-chip data analysis. The following two paragraphs give a brief introduction to each topic.
Although reversible jump MCMC (RJMCMC) has the ability to traverse the space of possible models in Bayesian model selection problems, it is prone to becoming trapped in a local mode when the model space is complex. SAMC, proposed by Liang, Liu and Carroll, essentially overcomes this difficulty in dimension-jumping moves by introducing a self-adjusting mechanism. However, this learning mechanism has not yet reached its maximum efficiency. In this dissertation, we propose a Pop-SAMC algorithm; it works on a population of SAMC chains, which provides a more efficient self-adjusting mechanism and makes use of the crossover operator from genetic algorithms to further increase efficiency. Under mild conditions, the convergence of this algorithm is proved. The effectiveness of Pop-SAMC for Bayesian model selection is examined through a change-point identification example and a large-p linear regression variable selection example. The numerical results indicate that Pop-SAMC outperforms both single-chain SAMC and RJMCMC significantly.
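For readers unfamiliar with SAMC's self-adjusting mechanism, here is a compact single-chain sketch on a toy bimodal density (Pop-SAMC's population of chains and crossover moves are not shown); the energy-based partition, the desired visiting frequencies, and the gain sequence are illustrative assumptions. The log-weights theta are updated by a stochastic approximation step after every Metropolis-Hastings move, which is what lets the sampler escape well-separated modes.

```python
import numpy as np

rng = np.random.default_rng(5)

def energy(x):
    """Assumed bimodal target: U(x) = -log of a two-well (unnormalized) density."""
    return -np.log(np.exp(-(x - 4.0)**2) + np.exp(-(x + 4.0)**2))

edges = np.linspace(0.0, 10.0, 11)     # energy levels defining the subregions (assumed)
m = len(edges)                         # the last subregion catches all higher energies
pi = np.full(m, 1.0 / m)               # desired visiting frequencies

def region(x):
    return min(int(np.searchsorted(edges, energy(x))), m - 1)

theta, x = np.zeros(m), 0.0            # self-adjusting log-weights, current state
t0, n_iter = 1_000.0, 100_000
visits = np.zeros(m)
for t in range(1, n_iter + 1):
    # Metropolis-Hastings step targeting the weight-adjusted density exp(-U(x) - theta_J(x)).
    y = x + rng.normal(0.0, 1.0)
    log_ratio = (-energy(y) - theta[region(y)]) - (-energy(x) - theta[region(x)])
    if np.log(rng.uniform()) < log_ratio:
        x = y
    # Stochastic approximation update of the log-weights: the self-adjusting mechanism.
    gain = t0 / max(t0, float(t))
    indicator = np.zeros(m)
    indicator[region(x)] = 1.0
    theta += gain * (indicator - pi)
    visits[region(x)] += 1

print("visit frequencies per subregion:", np.round(visits / n_iter, 3))
```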
In the ChIP-chip data analysis study, we developed two methodologies to identify transcription factor binding sites: a Bayesian latent model and a population-based test. The former models the neighboring dependence of probes by introducing a latent indicator vector; the latter provides a nonparametric method for evaluating test scores in a multiple hypothesis test by making use of population information across samples. Both methods are applied to real and simulated datasets. The numerical results indicate that the Bayesian latent model can outperform existing methods, especially when the data contain outliers, and that the use of population information can significantly improve the power of multiple hypothesis tests.
|
30 |
Stochastic methods in computational stereo. Coffman, Thayne Richard, 16 June 2011.
Computational stereo estimates 3D structure by analyzing visual changes between two or more passive images of a scene that are captured from different viewpoints. It is a key enabler for ubiquitous autonomous systems, large-scale surveying, virtual reality, and improved techniques for compression, tracking, and object recognition. The fact that computational stereo is an under-constrained inverse problem causes many challenges. Its computational and memory requirements are high. Typical heuristics and assumptions, used to constrain solutions or reduce computation, prevent treatment of key realities such as reflection, translucency, ambient lighting changes, or moving objects in the scene. As a result, a general solution is lacking.
Stochastic models are common in computational stereo, but stochastic algorithms are severely under-represented. In this dissertation I present two stochastic algorithms and demonstrate their advantages over deterministic approaches.
I first present the Quality-Efficient Stochastic Sampling (QUESS) approach. QUESS reduces the number of match quality function evaluations needed to estimate dense stereo correspondences. This facilitates the use of complex quality metrics or metrics that take unique values at non-integer disparities. QUESS is shown to outperform two competing approaches, and to have more attractive memory and scaling properties than approaches based on exhaustive sampling.
I then present a second novel approach based on the Hough transform and extend it with distributed ray tracing (DRT). DRT is a stochastic anti-aliasing technique common in computer rendering that had not previously been used in computational stereo. I demonstrate that the DRT-enhanced approach outperforms the unenhanced approach, a competing variation that uses re-accumulation in the Hough domain, and another baseline approach. DRT's advantages are particularly strong at reduced image resolution and/or reduced accumulator matrix resolution. In support of this second approach, I develop two novel variations of the Hough transform that use DRT, and demonstrate that they outperform competing variations on a traditional line segment detection problem.
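A sketch of the anti-aliasing idea alone (the accumulation scheme in the dissertation is more involved): a standard (rho, theta) Hough transform for line detection in which every point casts several fractional votes at theta values jittered uniformly inside each bin, in the spirit of distributed ray tracing's stochastic sampling; the synthetic points, bin sizes, and vote count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def hough_lines(points, n_theta=180, rho_res=1.0, votes_per_bin=4):
    """(rho, theta) accumulator; each vote jitters theta uniformly within its bin."""
    max_rho = np.max(np.abs(points)) * np.sqrt(2.0)
    n_rho = int(2.0 * max_rho / rho_res) + 1
    acc = np.zeros((n_rho, n_theta))
    d_theta = np.pi / n_theta
    for x, y in points:
        for j in range(n_theta):
            for _ in range(votes_per_bin):
                theta = (j + rng.uniform()) * d_theta     # stochastic anti-aliasing
                rho = x * np.cos(theta) + y * np.sin(theta)
                r_idx = int(np.rint((rho + max_rho) / rho_res))
                acc[r_idx, j] += 1.0 / votes_per_bin
    return acc, max_rho, d_theta, rho_res

# Synthetic noisy points on the line y = 0.5 * x + 10.
xs = np.linspace(0.0, 100.0, 60)
pts = np.column_stack([xs, 0.5 * xs + 10.0 + rng.normal(0.0, 0.5, xs.size)])

acc, max_rho, d_theta, rho_res = hough_lines(pts)
r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
print(f"peak: rho ~ {r_idx * rho_res - max_rho:.1f}, "
      f"theta ~ {np.degrees((t_idx + 0.5) * d_theta):.1f} degrees")
```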
I generalize these two examples to draw broader conclusions, suggest future work, and call for a deeper exploration by the community. Both practical and academic gaps in the state of the art can be reduced by a renewed exploration of stochastic computational stereo techniques.
|