1

Estimation Techniques for Nonlinear Functions of the Steady-State Mean in Computer Simulation

Chang, Byeong-Yun 08 December 2004
A simulation study consists of several steps, such as data collection, coding and model verification, model validation, experimental design, output data analysis, and implementation. Our research concentrates on output data analysis. In this field, many researchers have studied how to construct confidence intervals for the mean μ of a stationary stochastic process. However, the estimation of the value of a nonlinear function f(μ) has not received much attention in the simulation literature. Toward this goal, Munoz and Glynn (1997) proposed a batch-means-based methodology; their approach, however, did not consider consistent estimators for the variance of the point estimator of f(μ). This thesis considers consistent variance estimation techniques to construct confidence intervals for f(μ). Specifically, we propose methods based on the delta method combined with nonoverlapping batch means (NBM), standardized time series (STS), or a combination of both. Our approaches are tested on moving average, autoregressive, and M/M/1 queueing processes. The results show that the resulting confidence intervals (CIs) often perform better than CIs based on the method of Munoz and Glynn in terms of coverage, the mean of the CI half-width, and the variance of the CI half-width.
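
As a minimal sketch of the delta-method-plus-NBM combination described above (our own illustration in Python; the function and parameter names are hypothetical, and the thesis's actual procedures differ in detail):

    import numpy as np
    from scipy import stats

    def delta_nbm_ci(data, f, f_prime, n_batches=20, alpha=0.05):
        """Confidence interval for f(mu) via the delta method applied
        to nonoverlapping batch means (NBM)."""
        b = len(data) // n_batches                      # batch size
        bm = data[:b * n_batches].reshape(n_batches, b).mean(axis=1)
        xbar = bm.mean()                                # grand mean
        # Delta method: Var[f(Xbar)] ~ f'(mu)^2 * Var[Xbar], with
        # Var[Xbar] estimated from the sample variance of the batch means.
        se = abs(f_prime(xbar)) * np.sqrt(bm.var(ddof=1) / n_batches)
        t = stats.t.ppf(1 - alpha / 2, df=n_batches - 1)
        return f(xbar) - t * se, f(xbar) + t * se

    # Example: a CI for f(mu) = exp(mu) from a stationary output stream.
    rng = np.random.default_rng(0)
    lo, hi = delta_nbm_ci(rng.normal(1.0, 2.0, 100_000), f=np.exp, f_prime=np.exp)

The STS-based variants replace the batch-means variance estimate with a standardized-time-series estimator, but the delta-method step is the same.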
2

Variance Estimation in Steady-State Simulation, Selecting the Best System, and Determining a Set of Feasible Systems via Simulation

Batur, Demet 11 April 2006
In this thesis, we first present a variance estimation technique based on the standardized time series methodology for steady-state simulations. The proposed variance estimator has competitive bias and variance compared to existing estimators in the literature. We also present the technique of rebatching to further reduce the bias and variance of our variance estimator. Second, we present two fully sequential indifference-zone procedures to select the best system from a number of competing simulated systems, where the best system is defined by the maximum or minimum expected performance. These two procedures have parabola-shaped continuation regions rather than the triangular continuation regions employed in several earlier papers. The procedures we present accommodate unequal and unknown variances across systems and the use of common random numbers; however, we assume that the basic observations are independent and identically normally distributed. Finally, we present procedures for finding a set of feasible or near-feasible systems among a finite number of simulated systems in the presence of multiple stochastic constraints, especially when the number of systems or constraints is large.
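
For readers unfamiliar with standardized-time-series variance estimation, a textbook weighted-area estimator can be sketched as follows (a stand-in for, not a reproduction of, the estimator proposed in the thesis; the names are our own):

    import numpy as np

    def sts_area(batch, weight=lambda t: np.full_like(t, np.sqrt(12.0))):
        """Weighted-area STS estimator of the variance parameter
        sigma^2 = lim_n n * Var[sample mean], from a single batch."""
        m = len(batch)
        j = np.arange(1, m + 1)
        ybar_j = np.cumsum(batch) / j                 # running means Ybar_j
        # sigma * T_m(j/m) = j * (Ybar_m - Ybar_j) / sqrt(m)
        sigma_T = j * (ybar_j[-1] - ybar_j) / np.sqrt(m)
        return ((weight(j / m) * sigma_T).sum() / m) ** 2

    def rebatched_sts_area(data, n_batches):
        """Average the estimator over nonoverlapping batches: averaging
        lowers variance, while larger batches (rebatching) lower bias."""
        b = len(data) // n_batches
        return np.mean([sts_area(x)
                        for x in data[:b * n_batches].reshape(n_batches, b)])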
3

Steady-State Analyses: Variance Estimation in Simulations and Dynamic Pricing in Service Systems

Aktaran-Kalayci, Tuba 04 August 2006
In this dissertation, we consider analytic and numeric approaches to the solution of probabilistic steady-state problems, with specific applications in simulation and queueing theory. Our first objective in steady-state simulation is to develop new estimators for the variance parameter of a selected output process that perform better than certain existing variance estimators in the literature. To complete our analysis of these new variance estimators, called linear combinations of overlapping variance estimators, we do the following: establish theoretical asymptotic properties of the new estimators; test the theoretical results on a battery of examples to see how the new estimators perform in practice; and use the estimators for confidence interval estimation for both the mean and the variance parameter. Our theoretical and empirical results indicate the new estimators' potential for improvements in accuracy and computational efficiency. Our second objective in steady-state simulation is to derive the expected values of various competing estimators for the variance parameter. In this research, we do the following: formulate the machinery to calculate the exact expected value of a given estimator for the variance parameter; calculate the exact expected values of various variance estimators in the literature; compute these expected values for certain stochastic processes with complicated covariance functions; and derive expressions for the mean squared error of the estimators studied herein. We find that certain standardized time series estimators outperform their competitors as the sample size becomes large. Our research on queueing theory focuses on the pricing of the service provided to individual customers in a queueing system. We find sensitivity results that enable efficient computational procedures for dynamic pricing decisions to maximize the long-run average reward in a queueing facility with the following properties: there is a fixed number of servers, each with the same constant service rate; the system has a fixed finite capacity; the price charged to a customer entering the system depends on the number of customers in the system; and the customer arrival rate depends on the current price of the service. We show that these sensitivity results significantly reduce the computational requirements for finding optimal pricing policies.
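
The bias-cancellation idea behind linear combinations of overlapping variance estimators can be illustrated with overlapping batch means (a sketch under illustrative assumptions; the weights below are our own example, not the combinations analyzed in the dissertation):

    import numpy as np

    def obm(data, b):
        """Overlapping batch means (OBM) estimator of the variance
        parameter sigma^2 = lim_n n * Var[sample mean]."""
        n = len(data)
        cs = np.concatenate(([0.0], np.cumsum(data)))
        means = (cs[b:] - cs[:-b]) / b    # all n-b+1 overlapping batch means
        k = n - b + 1
        return n * b * np.sum((means - data.mean()) ** 2) / (k * (n - b))

    def lc_obm(data, b=64):
        """Linear combination of OBM estimators at batch sizes b and 2b.
        The leading bias of batch-means-type estimators is of order 1/b,
        so weights (-1, 2), which sum to one, cancel that term."""
        return -obm(data, b) + 2.0 * obm(data, 2 * b)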
4

ON EFFICIENT AUTOMATED METHODS FOR SIMULATION OUTPUT DATA ANALYSIS

Brozovic, Martin January 2014
With the increase in computing power and advances in software engineering in recent years, computer-based stochastic discrete-event simulation has become a very commonly used tool for evaluating the performance of various complex stochastic systems (such as telecommunication networks). It is used when analytical methods are too complex to solve or cannot be applied at all. Stochastic simulation has also become a tool that researchers often use instead of experimentation in order to save money and time. In this thesis, we focus on the statistical correctness of the final estimated results in the context of steady-state simulations performed for the mean analysis of performance measures of stable stochastic processes. Due to various approximations, the final experimental coverage can differ greatly from the assumed theoretical level, with the final confidence intervals covering the theoretical mean at a much lower frequency than expected from the preset theoretical confidence level. We present the results of coverage analysis for the methods of dynamic partially-overlapping batch means, spectral analysis, and mean-squared-error-optimal dynamic partially-overlapping batch means. The results show that the variants of dynamic partially-overlapping batch means that we propose as modifications under Akaroa2 perform acceptably well for the queueing processes but very badly for the autoregressive process. We compare the results of the modified mean-squared-error-optimal dynamic partially-overlapping batch means method to spectral analysis and show that the two methods perform equally well.
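
The coverage experiment itself is simple to state in code (a minimal sketch, not the sequential Akaroa2 implementation; the names below are our own):

    import numpy as np

    def empirical_coverage(ci_method, sampler, true_mean,
                           reps=1_000, n=20_000, alpha=0.05, seed=1):
        """Fraction of independent replications whose (1 - alpha)
        confidence interval covers the known steady-state mean."""
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(reps):
            lo, hi = ci_method(sampler(rng, n), alpha)
            hits += (lo <= true_mean <= hi)
        return hits / reps

    def ar1(phi=0.9):
        """Zero-mean AR(1) test process, a standard coverage benchmark."""
        def sample(rng, n):
            x = np.empty(n)
            x[0] = rng.normal()
            for t in range(1, n):
                x[t] = phi * x[t - 1] + rng.normal()
            return x
        return sample

A coverage well below 1 - alpha signals that the CI procedure underestimates the variance of the sample mean for that process.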
5

A System Architecture for the Monitoring of Continuous Phenomena by Sensor Data Streams

Lorkowski, Peter 15 March 2019
The monitoring of continuous phenomena like temperature, air pollution, precipitation, and soil moisture is of growing importance. Decreasing costs for sensors and the associated infrastructure increase the availability of observational data. These data can only rarely be used directly for analysis, but need to be interpolated to cover a region in space and/or time without gaps. The objective of monitoring in a broader sense is therefore to provide data about the observed phenomenon in such an enhanced form. Notwithstanding the improvements in information and communication technology, monitoring always has to function under limited resources, namely: number of sensors, number of observations, computational capacity, time, data bandwidth, and storage space. To best exploit those limited resources, a monitoring system needs to strive for efficiency in sampling, hardware, algorithms, parameters, and storage formats. In that regard, this work proposes and evaluates solutions for several problems associated with the monitoring of continuous phenomena. Synthetic random fields can serve as reference models on which monitoring can be simulated and exactly evaluated. For this purpose, a generator is introduced that can create such fields with arbitrary dynamism and resolution. For efficient sampling, an estimator for the minimum density of observations is derived from the extent and dynamism of the observed field. In order to adapt the interpolation to the given observations, a generic algorithm for fitting kriging parameters is set out. A sequential model-merging algorithm based on the kriging variance is introduced to mitigate large workloads and also to support subsequent, seamless updates of real-time models by new observations. For efficient storage utilization, a compression method is suggested; it is designed for the specific structure of field observations and supports progressive decompression. The unlimited diversity of possible configurations of the features above calls for an integrated approach to systematic variation and evaluation. A generic tool for organizing and manipulating configurational elements in arbitrarily complex hierarchical structures is proposed. Besides the root mean square error (RMSE) as the crucial quality indicator, the computational workload is also quantified in a manner that allows an analytical estimation of execution time for different parallel environments. In summary, a powerful framework for the monitoring of continuous phenomena is outlined. With its tools for systematic variation and evaluation, it supports continuous efficiency improvement.
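
The interpolation step at the heart of such a monitoring loop can be sketched as a minimal ordinary-kriging solve (our own illustration; the exponential variogram and its sill/range parameters stand in for whatever the generic parameter-fitting algorithm would produce):

    import numpy as np

    def ordinary_kriging(obs_xy, obs_z, query_xy, sill=1.0, rng_=10.0):
        """Estimate the field at query_xy from scattered observations,
        returning both the estimate and the kriging variance that the
        merging algorithm above uses as a confidence measure."""
        def gamma(h):                             # exponential variogram
            return sill * (1.0 - np.exp(-h / rng_))
        d = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
        n = len(obs_z)
        A = np.ones((n + 1, n + 1))               # [Gamma 1; 1^T 0] system
        A[:n, :n] = gamma(d)
        A[n, n] = 0.0
        b = np.append(gamma(np.linalg.norm(obs_xy - query_xy, axis=-1)), 1.0)
        w = np.linalg.solve(A, b)                 # weights + Lagrange multiplier
        return w[:n] @ obs_z, w @ b               # estimate, kriging variance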
