11

Contributions to the theory of Gaussian Measures and Processes with Applications

Zachary A Selk (12474759) 28 April 2022 (has links)
This thesis studies infinite dimensional Gaussian measures on Banach spaces. Let $\mu_0$ be a centered Gaussian measure on a Banach space $\mathcal B$, and let $\mu^\ast$ be a measure equivalent to $\mu_0$. We are interested in approximating $\mu^\ast$, in the sense of relative entropy (or KL divergence), by a mean-shift measure $\mu^z$ of $\mu_0$, where $z$ is an element of the so-called "Cameron-Martin" space $\mathcal H_{\mu_0}$. That is, we want to find the information projection

$$\inf_{z\in \mathcal H_{\mu_0}} D_{KL}(\mu^z\,\|\,\mu^\ast)=\inf_{z\in \mathcal H_{\mu_0}} E_{\mu^z} \left(\log \left(\frac{d\mu^z}{d\mu^\ast}\right)\right).$$

We relate this information projection to a mode computation, to an "open loop" control problem, and to a variational formulation leading to an Euler-Lagrange equation. Furthermore, we use this relationship to establish a kind of Feynman-Kac theorem for systems of ordinary differential equations. We demonstrate that the solution to a system of second-order linear ordinary differential equations is the mode of a diffusion, analogous to the result of Feynman-Kac for parabolic partial differential equations.
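As a point of reference (not from the thesis), the information projection can be checked numerically in a finite-dimensional toy analogue where both measures are Gaussians sharing a covariance and differing only in mean; there the KL divergence over mean shifts has a closed form and its minimizer is the mean of $\mu^\ast$:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Finite-dimensional stand-ins: mu_0 = N(0, Sigma), mu_star = N(m, Sigma).
d = 4
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)
Sigma_inv = np.linalg.inv(Sigma)
m = rng.normal(size=d)

def kl_shift(z):
    """KL( N(z, Sigma) || N(m, Sigma) ) = 0.5 (z - m)^T Sigma^{-1} (z - m)."""
    diff = z - m
    return 0.5 * diff @ Sigma_inv @ diff

res = minimize(kl_shift, x0=np.zeros(d))
print(res.x)   # the information projection recovers z* = m
print(m)
```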
12

Modelling of volcanic ashfall : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Mathematics at Massey University, Albany, New Zealand

Lim, Leng Leng January 2006 (has links)
Modelling of volcanic ashfall has been attempted by volcanologists but very little work has been done by mathematicians. In this thesis we show that mathematical models can accurately describe the distribution of particulate materials that fall to the ground following an eruption. We also report on the development and analysis of mathematical models to calculate the ash concentration in the atmosphere during ashfall after eruptions. Some of these models have analytical solutions. The mathematical models reported on in this thesis not only describe the distribution of ashfall on the ground but are also able to take into account the effect of variation of wind direction with elevation. In order to model the complexity of the atmospheric flow, the atmosphere is divided into horizontal layers. Each layer moves steadily and parallel to the ground: the wind velocity components, particle settling speed and dispersion coefficients are assumed constant within each layer but may differ from layer to layer. This allows for elevation-dependent wind and turbulence profiles, as well as changing particle settling speeds, the last allowing the effects of the agglomeration of particles to be taken into account.
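To make the layered-atmosphere idea concrete, the following minimal sketch (not taken from the thesis; the layer heights, winds and settling speeds are hypothetical) advects a single particle to the ground through piecewise-constant wind layers, ignoring the dispersion terms the thesis also models:

```python
import numpy as np

# Each layer: (top_height_m, u_wind_mps, v_wind_mps, settling_speed_mps)
# Hypothetical values for illustration only.
layers = [
    (10000.0, 25.0,  5.0, 1.2),   # uppermost layer
    ( 6000.0, 15.0, -3.0, 1.5),
    ( 3000.0,  8.0,  2.0, 1.8),
    (    0.0,  0.0,  0.0, 0.0),   # ground
]

def landing_point(release_height, x0=0.0, y0=0.0):
    """Advect a single particle down through the layers (no dispersion)."""
    x, y, z = x0, y0, release_height
    for i in range(len(layers) - 1):
        top, u, v, w_s = layers[i]
        bottom = layers[i + 1][0]
        if z <= bottom:          # particle starts below this layer
            continue
        dz = min(z, top) - bottom
        t = dz / w_s             # time spent settling through the layer
        x += u * t               # horizontal drift from the layer wind
        y += v * t
        z = bottom
    return x, y

print(landing_point(9000.0))
```

Each layer contributes a drift equal to its wind velocity multiplied by the time the particle spends settling through it, which is the essence of the elevation-dependent wind profile described above.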
13

A Generalized Framework for Representing Complex Networks

Viplove Arora (8086250) 06 December 2019 (has links)
Complex systems are often characterized by a large collection of components interacting in nontrivial ways. Self-organization among these individual components often leads to emergence of a macroscopic structure that is neither completely regular nor completely random. In order to understand what we observe at a macroscopic scale, conceptual, mathematical, and computational tools are required for modeling and analyzing these interactions. A principled approach to understand these complex systems (and the processes that give rise to them) is to formulate generative models and infer their parameters from given data that is typically stored in the form of networks (or graphs). The increasing availability of network data from a wide variety of sources, such as the Internet, online social networks, collaboration networks, biological networks, etc., has fueled the rapid development of network science.

A variety of generative models have been designed to synthesize networks having specific properties (such as power law degree distributions, small-worldness, etc.), but the structural richness of real-world network data calls for researchers to posit new models that are capable of keeping pace with the empirical observations about the topological properties of real networks. The mechanistic approach to modeling networks aims to identify putative mechanisms that can explain the dependence, diversity, and heterogeneity in the interactions responsible for creating the topology of an observed network. A successful mechanistic model can highlight the principles by which a network is organized and potentially uncover the mechanisms by which it grows and develops. While it is difficult to intuit appropriate mechanisms for network formation, machine learning and evolutionary algorithms can be used to automatically infer appropriate network generation mechanisms from the observed network structure.

Building on these philosophical foundations and a series of (not new) observations based on first principles, we extrapolate an action-based framework that creates a compact probabilistic model for synthesizing real-world networks. Our action-based perspective assumes that the generative process is composed of two main components: (1) a set of actions that expresses link formation potential using different strategies capturing the collective behavior of nodes, and (2) an algorithmic environment that provides opportunities for nodes to create links. Optimization and machine learning methods are used to learn an appropriate low-dimensional action-based representation for an observed network in the form of a row stochastic matrix, which can subsequently be used for simulating the system at various scales. We also show that in addition to being practically relevant, the proposed model is relatively exchangeable up to relabeling of the node-types.

Such a model can facilitate handling many of the challenges of understanding real data, including accounting for noise and missing values, and connecting theory with data by providing interpretable results. To demonstrate the practicality of the action-based model, we decided to utilize the model within domain-specific contexts. We used the model as a centralized approach for designing resilient supply chain networks while incorporating appropriate constraints, a rare feature of most network models. Similarly, a new variant of the action-based model was used for understanding the relationship between the structural organization of human brains and the cognitive ability of subjects. Finally, our analysis of the ability of state-of-the-art network models to replicate the expected topological variations in network populations highlighted the need for rethinking the way we evaluate the goodness-of-fit of new and existing network models, thus exposing significant gaps in the literature.
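As an illustration of the two-component generative process described above (and not the thesis's actual model), the sketch below grows a small network in which every new node receives a fixed number of link-formation opportunities and, at each opportunity, picks a strategy according to one row of a hypothetical row-stochastic action matrix:

```python
import random
import networkx as nx  # assumed available; any graph container would do

# Hypothetical strategies standing in for "actions"; the thesis's own action
# set and learned row-stochastic matrix would replace these.
def random_attachment(G, u):
    candidates = [v for v in G if v != u and not G.has_edge(u, v)]
    return random.choice(candidates) if candidates else None

def preferential_attachment(G, u):
    candidates = [v for v in G if v != u and not G.has_edge(u, v)]
    if not candidates:
        return None
    weights = [G.degree(v) + 1 for v in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

ACTIONS = [random_attachment, preferential_attachment]
ACTION_PROBS = [0.3, 0.7]          # one row of a row-stochastic action matrix

def grow(n_nodes=50, links_per_node=2):
    G = nx.empty_graph(3)
    for u in range(3, n_nodes):
        G.add_node(u)
        for _ in range(links_per_node):       # "opportunities" to create links
            act = random.choices(ACTIONS, weights=ACTION_PROBS, k=1)[0]
            v = act(G, u)
            if v is not None:
                G.add_edge(u, v)
    return G

G = grow()
print(G.number_of_nodes(), G.number_of_edges())
```

In the thesis the action set and the row-stochastic matrix are learned from an observed network rather than fixed by hand, and the rows may differ across node types.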
14

Adaptive Sampling Methods for Stochastic Optimization

Daniel Andres Vasquez Carvajal (10631270) 08 December 2022 (has links)
This dissertation investigates the use of sampling methods for solving stochastic optimization problems using iterative algorithms. Two sampling paradigms are considered: (i) adaptive sampling, where, before each iterate update, the sample size for estimating the objective function and the gradient is adaptively chosen; and (ii) retrospective approximation (RA), where iterate updates are performed using a chosen fixed sample size for as long as progress is deemed statistically significant, at which time the sample size is increased. We investigate adaptive sampling within the context of a trust-region framework for solving stochastic optimization problems in $\mathbb{R}^d$, and retrospective approximation within the broader context of solving stochastic optimization problems on a Hilbert space.

In the first part of the dissertation, we propose Adaptive Sampling Trust-Region Optimization (ASTRO), a class of derivative-based stochastic trust-region (TR) algorithms developed to solve smooth stochastic unconstrained optimization problems in $\mathbb{R}^{d}$ where the objective function and its gradient are observable only through a noisy oracle or using a large dataset. Efficiency in ASTRO stems from two key aspects: (i) adaptive sampling to ensure that the objective function and its gradient are sampled only to the extent needed, so that small sample sizes are chosen when the iterates are far from a critical point and large sample sizes are chosen when iterates are near a critical point; and (ii) quasi-Newton Hessian updates using BFGS. We prove three main results for ASTRO and for general stochastic trust-region methods that estimate function and gradient values adaptively, using sample sizes that are stopping times with respect to the sigma algebra of the generated observations. The first asserts strong consistency when the adaptive sample sizes have a mild logarithmic lower bound, assuming that the oracle errors are light-tailed. The second and third results characterize the iteration and oracle complexities in terms of certain risk functions. Specifically, the second result asserts that the best achievable $\mathcal{O}(\epsilon^{-1})$ iteration complexity (of squared gradient norm) is attained when the total relative risk associated with the adaptive sample size sequence is finite; and the third result characterizes the corresponding oracle complexity in terms of the total generalized risk associated with the adaptive sample size sequence. We report encouraging numerical results in certain settings.

In the second part of this dissertation, we consider the use of RA as an alternate adaptive sampling paradigm to solve smooth stochastic constrained optimization problems in infinite-dimensional Hilbert spaces. RA generates a sequence of subsampled deterministic infinite-dimensional problems that are approximately solved within a dynamic error tolerance. The bottleneck in RA becomes solving this sequence of problems efficiently. To this end, we propose a progressive subspace expansion (PSE) framework to solve smooth deterministic optimization problems in infinite-dimensional Hilbert spaces with a TR Sequential Quadratic Programming (SQP) solver. The infinite-dimensional optimization problem is discretized, and a sequence of finite-dimensional problems is solved in which the problem dimension is progressively increased. Additionally, (i) we solve this sequence of finite-dimensional problems only to the extent necessary, i.e., we spend just enough computational work to solve each problem within a dynamic error tolerance, and (ii) we use the solution of the current optimization problem as the initial guess for the subsequent problem. We prove two main results for PSE. The first asserts convergence to a first-order critical point of a subsequence of iterates generated by the PSE TR-SQP algorithm. The second characterizes the relationship between the error tolerance and the problem dimension, and provides an oracle complexity result for the total amount of computational work incurred by PSE. This amount of computational work is closely connected to three quantities: the convergence rate of the finite-dimensional spaces to the infinite-dimensional space, the rate of increase of the cost of making oracle calls in finite-dimensional spaces, and the convergence rate of the solution method used. We also show encouraging numerical results on an optimal control problem supporting our theoretical findings.
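The following sketch is a generic stand-in for the adaptive-sampling idea (it is not ASTRO's actual test): the gradient sample size is increased until the estimated standard error is a fixed fraction of the gradient-estimate norm, with a logarithmic lower bound in the iteration count, so that small samples are used far from a critical point and large samples near one:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(x, n):
    """Oracle: average n noisy gradient observations of f(x) = 0.5*||x||^2."""
    return x + rng.normal(scale=1.0, size=(n, x.size)).mean(axis=0)

def adaptive_sample_size(x, k, theta=0.5, n0=2):
    """Keep doubling the sample until the estimated standard error is a
    fraction theta of the gradient-estimate norm; enforce a logarithmic
    lower bound in the iteration count k (a stand-in for ASTRO's condition)."""
    n = max(n0, int(np.ceil(np.log(k + 2))))
    while True:
        g = noisy_grad(x, n)
        std_err = 1.0 / np.sqrt(n)          # known oracle noise scale here
        if std_err <= theta * np.linalg.norm(g) or n > 10_000:
            return n, g
        n *= 2

x = np.array([5.0, -3.0])
for k in range(30):
    n, g = adaptive_sample_size(x, k)
    x = x - 0.5 * g                         # plain gradient step (no TR machinery)
print(x, n)   # iterates approach 0 while the sample size n grows
```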
15

Optimal policies in reliability modelling of systems subject to sporadic shocks and continuous healing

DEBOLINA CHATTERJEE (14206820) 03 February 2023 (has links)
Recent years have seen a growth in research on system reliability and maintenance. Various studies in the scientific fields of reliability engineering, quality and productivity analyses, risk assessment, software reliability, and probabilistic machine learning are being undertaken in the present era. The dependency of human life on technology has made it more important to maintain such systems and maximize their potential. In this dissertation, some methodologies are presented that maximize certain measures of system reliability, explain the underlying stochastic behavior of certain systems, and prevent the risk of system failure.

An overview of the dissertation is provided in Chapter 1, where we briefly discuss some useful definitions and concepts in probability theory and stochastic processes and present some mathematical results required in later chapters. Thereafter, we present the motivation and outline of each subsequent chapter.

In Chapter 2, we compute the limiting average availability of a one-unit repairable system subject to repair facilities and spare units. Formulas for finding the limiting average availability of a repairable system exist only for some special cases: (1) either the lifetime or the repair-time is exponential; or (2) there is one spare unit and one repair facility. In contrast, we consider a more general setting involving several spare units and several repair facilities, and we allow arbitrary life- and repair-time distributions. Under periodic monitoring, which essentially discretizes the time variable, we compute the limiting average availability. The discretization approach closely approximates the existing results in the special cases, and demonstrates, as anticipated, that the limiting average availability increases with additional spare units and/or repair facilities.

In Chapter 3, the system experiences two types of sporadic impact: valid shocks that cause damage instantaneously and positive interventions that induce partial healing. Whereas each shock inflicts a fixed magnitude of damage, the accumulated effect of $k$ positive interventions nullifies the damaging effect of one shock. The system is said to be in Stage 1, when it can possibly heal, until the net count of impacts (valid shocks registered minus valid shocks nullified) reaches a threshold $m_1$. The system then enters Stage 2, where no further healing is possible. The system fails when the net count of valid shocks reaches another threshold $m_2 (> m_1)$. The inter-arrival times between successive valid shocks and those between successive positive interventions are independent and follow arbitrary distributions. Thus, we remove the restrictive assumption of an exponential distribution, often found in the literature. We find the distributions of the sojourn time in Stage 1 and the failure time of the system. Finally, we find the optimal values of the choice variables that minimize the expected maintenance cost per unit time for three different maintenance policies.

In Chapter 4, the above-defined Stage 1 is further subdivided into two parts: in the early part, called Stage 1A, healing happens faster than in the later stage, called Stage 1B. The system stays in Stage 1A until the net count of impacts reaches a predetermined threshold $m_A$; then the system enters Stage 1B and stays there until the net count reaches another predetermined threshold $m_1 (>m_A)$. Subsequently, the system enters Stage 2, where it can no longer heal. The system fails when the net count of valid shocks reaches another predetermined higher threshold $m_2 (> m_1)$. All other assumptions are the same as those in Chapter 3. We calculate the percentage improvement in the lifetime of the system due to the subdivision of Stage 1. Finally, we make optimal choices to minimize the expected maintenance cost per unit time for two maintenance policies.

Next, we eliminate the restrictive assumptions that all valid shocks and all positive interventions have equal magnitude and that the boundary threshold is a preset constant value. In Chapter 5, we study a system that experiences damaging external shocks of random magnitude at stochastic intervals, continuous degradation, and self-healing. The system fails if cumulative damage exceeds a time-dependent threshold. We develop a preventive maintenance policy to replace the system such that its lifetime is utilized prudently. Further, we consider three variations on the healing pattern: (1) shocks heal for a fixed finite duration $\tau$; (2) a fixed proportion of shocks are non-healable (that is, $\tau=0$); (3) there are two types of shocks: self-healable shocks, which heal for a finite duration, and non-healable shocks. We implement the proposed preventive maintenance policy and compare the optimal replacement times in these new cases with those in the original case, where all shocks heal indefinitely.

Finally, in Chapter 6, we present a summary of the dissertation with conclusions and future research potential.
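A simple Monte Carlo reading of the Chapter 3 model (simplified, with hypothetical thresholds and inter-arrival distributions; it is not the thesis's analytical derivation) can be used to sanity-check quantities such as the mean failure time:

```python
import numpy as np

rng = np.random.default_rng(1)

def failure_time(m1=5, m2=10, k=3,
                 shock_gap=lambda: rng.gamma(2.0, 0.5),    # hypothetical laws
                 heal_gap=lambda: rng.weibull(1.5) * 2.0):
    """Simulate one lifetime: unit damage per valid shock, k interventions
    undo one shock while in Stage 1, no healing once the net count hits m1,
    failure when the net count reaches m2."""
    t_shock, t_heal = shock_gap(), heal_gap()
    net, pending_heals, stage2 = 0, 0, False
    while True:
        if t_shock <= t_heal:                 # next event is a valid shock
            net += 1
            if not stage2 and net >= m1:
                stage2 = True                 # healing no longer possible
            if net >= m2:
                return t_shock                # system fails
            t_shock += shock_gap()
        else:                                 # next event is an intervention
            if not stage2:
                pending_heals += 1
                if pending_heals >= k and net > 0:
                    net -= 1                  # k interventions undo one shock
                    pending_heals -= k
            t_heal += heal_gap()

samples = [failure_time() for _ in range(10_000)]
print(np.mean(samples), np.quantile(samples, 0.05))
```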
16

MODEL-FREE ALGORITHMS FOR CONSTRAINED REINFORCEMENT LEARNING IN DISCOUNTED AND AVERAGE REWARD SETTINGS

Qinbo Bai (19804362) 07 October 2024 (has links)
Reinforcement learning (RL), which aims to train an agent to maximize its accumulated reward over time, has attracted much attention in recent years. Mathematically, RL is modeled as a Markov Decision Process, where the agent interacts with the environment step by step. In practice, RL has been applied to autonomous driving, robotics, recommendation systems, and financial management. Although RL has been studied extensively in the literature, most proposed algorithms are model-based, which requires estimating the transition kernel. To this end, we study sample-efficient model-free algorithms under different settings.

Firstly, we propose a conservative stochastic primal-dual algorithm in the infinite horizon discounted reward setting. The proposed algorithm converts the original problem from policy space to the occupancy measure space, which makes the non-convex problem linear. Then, we advocate the use of a randomized primal-dual approach to achieve $\mathcal{O}(\epsilon^{-2})$ sample complexity, which matches the lower bound.

However, when it comes to the infinite horizon average reward setting, the problem becomes more challenging since the environment interaction never ends and cannot be reset, so the reward samples are no longer independent. To solve this, we design an epoch-based policy-gradient algorithm. In each epoch, the whole trajectory is divided into multiple sub-trajectories with an interval between each two of them. Such intervals are long enough that the reward samples are asymptotically independent. By controlling the lengths of the sub-trajectories and the intervals, we obtain a good gradient estimator and prove that the proposed algorithm achieves an $\mathcal{O}(T^{3/4})$ regret bound.
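The interval-based trajectory splitting can be illustrated with a few lines of index arithmetic (a sketch only; the algorithm's actual epoch lengths and gradient estimator are specified in the thesis):

```python
import numpy as np

def split_epoch(rewards, sub_len, gap_len):
    """Keep blocks of length sub_len separated by discarded gaps of length
    gap_len, so that samples from different blocks are nearly independent
    when the chain mixes within gap_len steps (illustrative only)."""
    blocks, t = [], 0
    while t + sub_len <= len(rewards):
        blocks.append(rewards[t:t + sub_len])
        t += sub_len + gap_len               # skip the decorrelation interval
    return np.array(blocks)

T = 10_000
rewards = np.random.default_rng(2).normal(size=T)   # stand-in reward stream
blocks = split_epoch(rewards, sub_len=200, gap_len=50)
block_means = blocks.mean(axis=1)            # near-independent reward estimates
print(blocks.shape, block_means.mean())
```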
17

Data analysis and preliminary model development for an odour detection system based on the behaviour of trained wasps

Zhou, Zhongkun January 2008 (has links)
Microplitis croceipes, one of the nectar-feeding parasitoid wasps, has been found to associatively learn chemical cues through feeding. The experiments on M. croceipes were performed and recorded with a Sony camcorder in the USDA-ARS Biological Control Laboratory in Tifton, GA, USA. The experimental videos show that M. croceipes can respond to coffee odour in this study. Their detection capabilities and the behaviour of M. croceipes with different levels of coffee odours were studied. First, the data related to trained M. croceipes behaviour were extracted from the experimental videos and stored in a Microsoft Excel database. The extracted data represent the behaviour of M. croceipes trained on 0.02 g of coffee and then exposed to 0.001 g, 0.005 g, 0.01 g, 0.02 g and 0.04 g of coffee. Secondly, indices were developed to uniquely characterise the behaviour of trained M. croceipes under different coffee concentrations. Thirdly, a preliminary model and its parameters were developed to classify the response of trained wasps when exposed to these five different coffee odours. In summary, the success of this thesis demonstrates the usefulness of data analysis for interpreting experimental data, developing indices, as well as understanding the design principles of a simple model based on trained wasps.
18

DEVELOPMENT OF DROPWISE ADDITIVE MANUFACTURING WITH NON-BROWNIAN SUSPENSIONS: APPLICATIONS OF COMPUTER VISION AND BAYESIAN MODELING TO PROCESS DESIGN, MONITORING AND CONTROL

Andrew J. Radcliffe (9080312) 24 July 2020 (has links)
In the past two decades, the pharmaceutical industry has been engaged in modernization of its drug development and manufacturing strategies, spurred onward by changing market pressures, regulatory encouragement, and technological advancement. Concomitant with these changes has been a shift toward new modalities of manufacturing in support of patient-centric medicine and on-demand production. To achieve these objectives requires manufacturing platforms which are both flexible and scalable, hence the interest in development of small-scale, continuous processes for synthesis, purification and drug product production. Traditionally, the downstream steps begin with a crystalline drug powder – the effluent of the final purification steps – and convert this to tablets or capsules through a series of batch unit operations reliant on powder processing. As an alternative, additive manufacturing technologies provide the means to circumvent difficulties associated with dry powder rheology, while being inherently capable of flexible production.

Through the combination of physical knowledge, experimental work, and data-driven methods, a framework was developed for ink formulation and process operation in drop-on-demand manufacturing with non-Brownian suspensions. Motivated by the challenges at hand, application of novel computational image analysis techniques yielded insight into the effects of non-Brownian particles and fluid properties on rheology. Furthermore, the extraction of modal and statistical information provided insight into the stochastic events which appear to play a notable role in drop formation from such suspensions. These computer vision algorithms can readily be applied by other researchers interested in the physics of drop coalescence and breakup in order to further modeling efforts.

Returning to the realm of process development to deal with challenges of monitoring and quality control initiated by suspension-based manufacturing, these machine vision algorithms were combined with Bayesian modeling to enact a probabilistic control strategy at the level of each dosage unit by utilizing the real-time image data acquired by an online process image sensor. Drawing upon a large historical database which spanned a wide range of conditions, a hierarchical modeling approach was used to incorporate the various sources of uncertainty inherent to the manufacturing process and monitoring technology, therefore providing more reliable predictions for future data at in-sample and out-of-sample conditions.

This thesis thus contributes advances in three closely linked areas: additive manufacturing of solid oral drug products, computer vision methods for event recognition in drop formation, and Bayesian hierarchical modeling to predict the probability that each dosage unit produced is within specifications.
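A minimal sketch of the per-dosage-unit decision idea (the posterior-predictive draws and specification limits below are hypothetical stand-ins for the thesis's hierarchical model and image-sensor data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for posterior-predictive draws of a dosage unit's drug mass (mg),
# which in the thesis would come from the hierarchical Bayesian model fitted
# to the online image-sensor data.
posterior_mass_draws = rng.normal(loc=10.1, scale=0.25, size=5000)

LOWER, UPPER = 9.5, 10.5          # hypothetical specification limits (mg)
p_in_spec = np.mean((posterior_mass_draws >= LOWER) &
                    (posterior_mass_draws <= UPPER))

# A simple per-unit decision rule: accept only if the unit is within
# specification with high posterior probability.
decision = "accept" if p_in_spec >= 0.95 else "divert"
print(f"P(in spec) = {p_in_spec:.3f} -> {decision}")
```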
19

Efficient Spectral-Chaos Methods for Uncertainty Quantification in Long-Time Response of Stochastic Dynamical Systems

Hugo Esquivel (10702248) 06 May 2021 (has links)
Uncertainty quantification techniques based on the spectral approach have been studied extensively in the literature to characterize and quantify, at low computational cost, the impact that uncertainties may have on large-scale engineering problems. One such technique is the generalized polynomial chaos (gPC), which utilizes a time-independent orthogonal basis to expand a stochastic process in the space of random functions. The method uses a specific Askey-chaos system that is concordant with the measure defined on the probability space in order to ensure exponential convergence to the solution. For nearly two decades, this technique has been used widely by researchers in the area of uncertainty quantification to solve stochastic problems using the spectral approach. However, a major drawback of the gPC method is that it cannot be used to solve problems that feature strong nonlinear dependencies over the probability space as time progresses. This downside arises from the time-independent nature of the random basis, which unavoidably loses its optimality as soon as the probability distribution of the system's state starts to evolve dynamically in time.

Another technique is the time-dependent generalized polynomial chaos (TD-gPC), which utilizes a time-dependent orthogonal basis to better represent the stochastic part of the solution space (aka the random function space, or RFS) in time. The development of this technique was motivated by the fact that the probability distribution of the solution changes with time, which in turn requires that the random basis be updated frequently during the simulation to ensure that the mean-square error is kept orthogonal to the discretized RFS. Though this technique works well for problems that feature strong nonlinear dependencies over the probability space, the TD-gPC method has a serious issue: it suffers from the curse of dimensionality at the RFS level. This is because in all gPC-based methods the RFS is constructed using a tensor product of vector spaces, with each of these representing a single RFS over one of the dimensions of the probability space. As a result, the higher the dimensionality of the probability space, the more vector spaces are needed in the construction of a suitable RFS. To reduce the dimensionality of the RFS (and thus its associated computational cost), gPC-based methods require the use of versatile sparse tensor products within their numerical schemes to alleviate, to some extent, the curse of dimensionality at the RFS level. This curse of dimensionality in the TD-gPC method therefore points to the need for a more compelling spectral method that can quantify uncertainties in the long-time response of dynamical systems at much lower computational cost.

In this work, a novel numerical method based on the spectral approach is proposed to resolve the curse-of-dimensionality issue mentioned above. The method is called the flow-driven spectral chaos (FSC) because it uses a novel concept called enriched stochastic flow maps to track the evolution of a finite-dimensional RFS efficiently in time. The enriched stochastic flow map does not only push the system's state forward in time (as would a traditional stochastic flow map) but also its first few time derivatives. The push is performed this way so that the random basis can be constructed using the system's enriched state as a germ during the simulation, guaranteeing exponential convergence to the solution. It is worth noting that this exponential convergence is achieved in the FSC method using only a small number of random basis vectors, even when the dimensionality of the probability space is considerably high. This is for two reasons: (1) the cardinality of the random basis does not depend upon the dimensionality of the probability space, and (2) the cardinality is bounded from above by $M+n+1$, where $M$ is the order of the stochastic flow map and $n$ is the order of the governing stochastic ODE. This boundedness of the random basis from above is what makes the FSC method curse-of-dimensionality free at the RFS level. For instance, for a dynamical system that is governed by a second-order stochastic ODE ($n=2$) and driven by a stochastic flow map of fourth order ($M=4$), the maximum number of random basis vectors to consider within the FSC scheme is just 7, independent of whether the dimensionality of the probability space is as low as 1 or as high as 10,000.

With the aim of reducing the complexity of the presentation, this dissertation includes three levels of abstraction for the FSC method, namely: a specialized version of the FSC method for dealing with structural dynamical systems subjected to uncertainties (Chapter 2), a generalized version for dealing with dynamical systems governed by (nonlinear) stochastic ODEs of arbitrary order (Chapter 3), and a multi-element version for dealing with dynamical systems that exhibit discontinuities over the probability space (Chapter 4). This dissertation also includes an implementation of the FSC method to address the dynamics of large-scale stochastic structural systems more effectively (Chapter 5). The implementation is done via a modal decomposition of the spatial function space as a means to reduce the number of degrees of freedom in the system substantially, and thus save computational runtime.
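The scaling claim can be made concrete with a short calculation: a total-degree gPC basis in $d$ random variables grows combinatorially with $d$, whereas the FSC bound $M+n+1$ quoted above stays fixed (the total-degree truncation is one common choice and is used here only for illustration):

```python
from math import comb

def gpc_basis_size(d, p):
    """Number of terms in a total-degree-p polynomial chaos expansion in d
    random variables (one common truncation choice)."""
    return comb(d + p, p)

M, n = 4, 2                       # stochastic flow map order, ODE order
fsc_bound = M + n + 1             # upper bound on the FSC basis size (from the abstract)

for d in (1, 5, 10, 100, 1000):
    print(f"d = {d:5d}   gPC (p=3): {gpc_basis_size(d, 3):12d}   FSC <= {fsc_bound}")
```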
20

Dissertation_LeiLi

Lei Li (16631262) 26 July 2023 (has links)
In the real world, uncertainty is a common and challenging problem faced by individuals, organizations, and firms. Decision quality is strongly affected by uncertainty because decision makers lack complete information and have to weigh losses and gains across many possible outcomes or scenarios. This study explores dynamic decision making (with known distributions) and decision learning (with unknown distributions but some samples) in not-for-profit operations and supply chain management. We first study dynamic staffing for paid workers and volunteers with uncertain supply in a nonprofit operation where the optimal policy is too complex to compute and implement. Then, we consider dynamic inventory control and pricing under both supply and demand uncertainties where unmet demand is lost, leading to a challenging non-concave dynamic problem. Furthermore, we explore decision learning from limited data on the focal system and available data from related but different systems via transfer learning, cross learning, and co-learning, utilizing the similarities among related systems.
