131 |
Effectiveness of using two and three-parameter distributions in place of "best-fit distributions" in discrete event simulation models of production lines
Sharma, Akash 12 December 2003 (has links)
This study presents the results of using common two or three-parameter "default"
distributions in place of "best fit distributions" in simulations of serial production lines
with finite buffers and blocking. The default distributions used instead of the best-fit
distribution are chosen such that they are non-negative, unbounded, and can match either
the first two moments or the first three moments of the collected data. Furthermore, the
selected default distributions must be commonly available in simulation software packages, or easily constructed from distributions that are. The lognormal is used as the two-parameter
distribution to match the first two moments of the data. The two-level hyper-exponential
and three-parameter lognormal are used as three-parameter distributions to
match the first three moments of the data. To test the use of these distributions in
simulations, production lines have been separated into two major classes: automated and
manual. In automated systems the workstations have fixed processing times and random
time between failures, and random repair times. In manual systems, the workstations are
reliable but have random processing times. Results for both classes of lines show that the difference in throughput between simulations using best-fit distributions and those using the two-parameter lognormal is small in some cases and can be reduced in others by matching the first three moments of the data. Scenarios that lead to larger throughput differences when using a two-parameter default distribution are also identified. / Graduation date: 2004
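The two-moment matching the abstract describes has a closed form for the lognormal: given a sample mean and standard deviation, the parameters of the underlying normal follow directly. The sketch below is a minimal illustration of that step (the function name is mine, not the thesis's).

```python
import math

def lognormal_params_from_moments(mean, std):
    """Match a two-parameter lognormal to a sample mean and standard
    deviation. Returns (mu, sigma) of the underlying normal."""
    cv2 = (std / mean) ** 2          # squared coefficient of variation
    sigma2 = math.log(1.0 + cv2)     # Var[ln X]
    mu = math.log(mean) - 0.5 * sigma2
    return mu, math.sqrt(sigma2)
```

Feeding the resulting (mu, sigma) to a simulation package's lognormal generator reproduces the first two moments of the collected data exactly.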
|
132 |
Tools for environmental statistics : creative visualization and estimating variance from complex surveys
Courbois, Jean-Yves Pip 07 January 2000 (has links)
Environmental monitoring poses two challenges to statistical analysis: complex data and complex survey designs. Monitoring for system health involves measuring physical, chemical, and biological properties that have complex relations. Exploring these relations is an integral part of understanding how systems are changing under stress. How does one explore high-dimensional data? Many of our current methods rely on "black-box" mathematical methods. Visualization techniques, on the other hand, are either restricted to low dimensions or hopelessly out of context. The first topic explored in this dissertation suggests a direct search method for use in projection pursuit guided tours.
In Chapter 2 a direct search method for index optimization, the multidirectional pattern search, was explored for use in projection pursuit guided tours. The benefit of this method is that it does not require the projection pursuit index to be continuously differentiable, in contrast to existing methods that do. Computational comparisons with test data revealed the feasibility
and promise of the method. It successfully found hidden structure in 4 of 6 test data sets. The study demonstrates that the direct search method lends itself well to use in guided tours and allows for non-differentiable indices.
Evaluating estimators of the population variance is covered in Chapter 3. Good estimates of the population variance are useful when designing a survey. These estimates may come from a pilot project or survey. Often in environmental sampling simple random sampling is not possible; instead, complex designs are used. In this case there is no clear estimator for the population variance. We propose an estimator that is (1) based on a method of moments approach and (2) extendible to more complex variance component models. Several other estimators have been proposed in the literature. This study compared our method of moments estimator to those estimators. Unfortunately, our estimator did not perform as well as some of the others, implying that these estimators do not perform as similarly as the literature suggests. Two estimators, the sample variance and a ratio estimator based on the Horvitz-Thompson theorem and a consistency argument, proved to be favorable. / Graduation date: 2000
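The kind of design-weighted estimation compared in the chapter can be illustrated with a simple plug-in variance estimator built from Horvitz-Thompson weights. This is a generic sketch under assumed notation, not the dissertation's proposed method-of-moments estimator.

```python
def ht_population_variance(y, pi):
    """Plug-in estimator of the population variance from a sample with
    inclusion probabilities pi_i: a Horvitz-Thompson-weighted sum of
    squared deviations about the Hajek (ratio) estimate of the mean."""
    n_hat = sum(1.0 / p for p in pi)                    # estimated population size
    ybar = sum(yi / p for yi, p in zip(y, pi)) / n_hat  # Hajek mean
    ss = sum((yi - ybar) ** 2 / p for yi, p in zip(y, pi))
    return ss / (n_hat - 1.0)
```

Under equal inclusion probabilities this reduces to an ordinary sample-variance-style calculation; under unequal probabilities the weights 1/pi_i correct for the design.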
|
133 |
Drawing graphs nicely
Palmer, Paul A. 24 April 1995 (has links)
A graph may be drawn in many different ways. We investigate how to draw a graph nicely, in the sense of being visually pleasing. We discuss the history of this field, and look at several algorithms for drawing graphs.
For planar graphs this problem has been algorithmically solved: that is, there is an algorithm which takes an n-vertex planar graph and places the vertices at some of the nodes of an n-2 by 2n-4 array so that each edge of the planar graph can be drawn with a straight line. We describe in detail one particular implementation of this algorithm, give some examples in which this embedding is pleasing, and give a number of examples in which this grid embedding is not as visually pleasing as another drawing of the same graph.
For the more difficult problem of drawing a nonplanar graph, we investigate a spring-based algorithm. We give a number of examples in which
this heuristic produces more pleasing drawings than those produced by the planar embedding and a few cases where it fails to do so. / Graduation date: 1997
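A spring-based heuristic of the kind investigated can be sketched in the style of Fruchterman and Reingold: edges pull their endpoints together, all vertex pairs push apart, and positions settle over many small steps. This is a generic sketch of the idea, not the thesis's implementation; the force constants and step size are my assumptions.

```python
import math
import random

def spring_layout(nodes, edges, iters=200, k=1.0, step=0.05):
    """A minimal spring embedder: repulsive force ~ k^2/d between every
    pair of nodes, attractive force ~ d^2/k along each edge, positions
    nudged a bounded step each iteration."""
    random.seed(0)
    pos = {v: [random.random(), random.random()] for v in nodes}
    for _ in range(iters):
        disp = {v: [0.0, 0.0] for v in nodes}
        for u in nodes:                      # repulsion between every pair
            for v in nodes:
                if u == v:
                    continue
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[u][0] += f * dx / d
                disp[u][1] += f * dy / d
        for u, v in edges:                   # attraction along edges
            dx = pos[u][0] - pos[v][0]
            dy = pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[u][0] -= f * dx / d
            disp[u][1] -= f * dy / d
            disp[v][0] += f * dx / d
            disp[v][1] += f * dy / d
        for v in nodes:                      # move a bounded step per iteration
            dx, dy = disp[v]
            d = math.hypot(dx, dy) or 1e-9
            pos[v][0] += step * dx / d
            pos[v][1] += step * dy / d
    return pos
```

Because the heuristic only seeks a local energy minimum from a random start, different seeds can produce quite different drawings, which is consistent with it sometimes beating the planar grid embedding and sometimes not.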
|
134 |
A computer simulation model of seasonal transpiration in Douglas-fir based on a model of stomatal resistance /
Reed, Kenneth Lee, January 1972 (has links)
Thesis (Ph. D.)--Oregon State University, 1972. / Typescript (photocopy). Includes bibliographical references. Also available on the World Wide Web.
|
135 |
The Generalized DEA Model of Fundamental Analysis of Public Firms, with Application to Portfolio Selection
Zhang, Xin 01 December 2007 (has links)
Fundamental analysis is an approach for evaluating a public firm for its investment-worthiness by looking at its business at the basic or fundamental financial level. The focus of this thesis is on utilizing financial statement data and a new generalization of Data Envelopment Analysis, termed the GDEA model, to determine a relative financial strength (RFS) indicator that represents the underlying business strength of a firm. This approach is based on maximizing a correlation metric between the GDEA-based score of financial strength and stock price performance. The correlation maximization problem is a difficult binary nonlinear optimization problem that requires iterative re-configuration of parameters of financial statements as inputs and outputs. A two-step heuristic algorithm that combines random sampling and local search optimization is developed. Theoretical optimality conditions are also derived for checking solutions of the GDEA model. Statistical tests are developed for validating the utility of the RFS indicator for portfolio selection, and the approach is computationally tested and compared with competing approaches.
The GDEA model is also further extended by incorporating Expert Information on input/output selection. In addition to deriving theoretical properties of the model, a new methodology is developed for testing if such exogenous expert knowledge can be significant in obtaining stronger RFS indicators. Finally, the RFS approach under expert information is applied in a Case Study, involving more than 800 firms covering all sectors of the U.S. stock market, to determine optimized RFS indicators for stock selection. Those selected stocks are then used within portfolio optimization models to demonstrate the superiority of the techniques developed in this thesis.
|
136 |
Data Mining with Multivariate Kernel Regression Using Information Complexity and the Genetic Algorithm
Beal, Dennis Jack 01 December 2009 (has links)
Kernel density estimation is a data smoothing technique that depends heavily on the bandwidth selection. The current literature has focused on optimal selectors for the univariate case that are primarily data driven. Plug-in and cross validation selectors have recently been extended to the general multivariate case.
This dissertation introduces and develops new techniques for data mining with multivariate kernel density regression, using information complexity and the genetic algorithm as a heuristic optimizer to choose the optimal bandwidth and the best predictors in kernel regression models. Simulated and real data are used to cross-validate the optimal bandwidth selectors using information complexity. The genetic algorithm is used in conjunction with information complexity to determine kernel density estimates for variable selection from high-dimensional multivariate data sets.
Kernel regression is also hybridized with the implicit enumeration algorithm to determine the set of independent variables for the global optimal solution using information criteria as the objective function. The results from the genetic algorithm are compared to the optimal solution from the implicit enumeration algorithm and the known global optimal solution from an explicit enumeration of all possible subset models.
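The univariate Nadaraya-Watson estimator gives the flavor of the kernel regression whose bandwidth is being optimized. This is a minimal sketch only: the dissertation works with the multivariate case and chooses the bandwidth by information complexity, whereas here h is simply passed in.

```python
import math

def nadaraya_watson(x_train, y_train, x0, h):
    """Nadaraya-Watson kernel regression estimate at x0: a locally
    weighted average of y_train with Gaussian weights of bandwidth h."""
    w = [math.exp(-0.5 * ((x0 - xi) / h) ** 2) for xi in x_train]
    return sum(wi * yi for wi, yi in zip(w, y_train)) / sum(w)
```

The bandwidth h controls the bias-variance trade-off: small h tracks the data closely but is noisy, large h smooths heavily, which is why bandwidth selection dominates the literature the dissertation builds on.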
|
137 |
Algorithms for Multi-Sample Cluster Analysis
Almutairi, Fahad 01 August 2007 (has links)
In this study, we develop algorithms to solve the Multi-Sample Cluster Analysis (MSCA) problem. This problem arises when we have multiple samples and we need to find the statistical model that best fits the cluster structure of these samples. One important area among others in which our algorithms can be used is international market segmentation. In this area, samples about customers' preferences and characteristics are collected from different regions in the market. The goal in this case is to join the regions with similar customers' characteristics in clusters (segments).
We develop branch and bound algorithms and a genetic algorithm. In these algorithms, any of the available information criteria (AIC, CAIC, SBC, and ICOMP) can be used as the objective function to be optimized. Our algorithms use the Clique Partitioning Problem (CPP) formulation. They are the first algorithms to use information criteria with the CPP formulation.
When the branch and bound algorithms are allowed to run to completion, they converge to the optimal MSCA alternative. These methods also proved to find good solutions when they were stopped short of convergence. In particular, we develop a branching strategy which uses a "look-ahead" technique. We refer to this strategy as the complete adaptive branching strategy. This strategy makes the branch and bound algorithm quickly search for the optimal solution in multiple branches of the enumeration tree before using a depth-first branching strategy. In computational tests, this method's performance was superior to other branching methods as well as to the genetic algorithm.
|
138 |
Approximation Methods for the Standard Deviation of Flow Times in the G/G/s Queue
Zhao, Xiaofeng 01 August 2007 (has links)
We provide approximation methods for the standard deviation of flow time in system for a general multi-server queue with infinite waiting capacity (G/G/s). The approximations require only the mean and standard deviation or the coefficient of variation of the inter-arrival and service time distributions, and the number of servers.
These approximations are simple enough to be implemented in manual or spreadsheet calculations, but in comparisons to Monte Carlo simulations have proven to give good approximations (within ±10%) for cases in which the coefficients of variation for the inter-arrival and service times are between 0 and 1. The approximations also have the desirable property of being exact for the specific case of the Markov queue model M/M/s, as well as some embedded Markov queuing models (Ek/M/1 and M/Eα/1).
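A well-known two-moment approximation in this same spirit is Sakasegawa's formula for mean waiting time in the G/G/s queue, shown here for flavor; it is not the thesis's standard-deviation formula (which the abstract does not reproduce), but it illustrates how such approximations use only means, coefficients of variation, and the number of servers, and it is exact for M/M/1.

```python
import math

def sakasegawa_wq(lam, mean_s, cv_a, cv_s, s):
    """Sakasegawa's approximation for mean waiting time in queue for
    G/G/s: lam = arrival rate, mean_s = mean service time, cv_a/cv_s =
    coefficients of variation of inter-arrival and service times."""
    rho = lam * mean_s / s               # server utilization, must be < 1
    if not 0 < rho < 1:
        raise ValueError("requires 0 < rho < 1 for a stable queue")
    scale = rho ** (math.sqrt(2.0 * (s + 1)) - 1) / (s * (1.0 - rho))
    return mean_s * scale * (cv_a ** 2 + cv_s ** 2) / 2.0
```

For s = 1 and cv_a = cv_s = 1 the exponent collapses and the formula reduces to the exact M/M/1 result, Wq = rho * E[S] / (1 - rho), which is the kind of embedded-exactness property the abstract describes.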
The practical significance of this research is that (1) many real-world queuing problems involve G/G/s queuing systems, and (2) predicting the range of variation of the time in the system (rather than just the average) is needed for decision making. For example, one job shop facility with which the authors have worked guarantees its customers a nine-day turnaround time and must determine the minimum number of machines of each type required to achieve nine days as a “worst case” time in the system. In many systems, the “worst case” value of flow time is very relevant because it represents the lead time that can safely be promised to customers. To estimate this we need both the average and standard deviation of the time in system.
The usefulness of our results stems from the fact that they are computationally simple and thus provide quick approximations without resorting to complex numerical techniques or Monte Carlo simulations. While many accurate approximations for the G/G/s queue have been proposed previously, they often result in algebraically intractable expressions. This hinders attempts to derive closed-form solutions to the decision variables incorporated in optimization models, and inevitably leads to the use of complex numeric methods. Furthermore, actual application of many of these approximations often requires specification of the actual distributions of the inter-arrival time and the service time. Also, these results have tended to focus on delay probabilities and average waiting time, and do not provide a means of estimating the standard deviation of the time in the system.
We also extend the approximations to computing the standard deviation of flow times of each priority class in G/G/s priority queues and compare the results to those obtained via Monte Carlo simulations. These simulation experiments reveal good approximations for all priority classes with the exception of the lowest priority class in queuing systems with high utilization. In addition, we use the approximations to estimate the average and the standard deviation of the total flow time through queuing networks and have validated these results via Monte Carlo simulations.
The primary theoretical contribution of this work is the derivation of an original expression for the coefficient of variation of waiting time in the G/G/s queue, which holds exactly for G/M/s and M/G/1 queues. We also perform error sensitivity analysis of the formula and develop interpolation models to calculate the probability of waiting, since the probability of waiting in the G/G/s queue is needed to calculate the coefficient of variation of waiting time.
We also develop a general queuing system performance predictor, which can be used to estimate a wide range of performance measures for steady-state, infinite-capacity queues, and we intend to make available a user-friendly tool implementing our approximation methods. The advantage of these models is that they make no assumptions about the distributions of inter-arrival time and service time. Our techniques generalize the previously developed approximations and can also be used in queuing networks and priority queues. We hope our approximation methods will be beneficial to practitioners who want simple and quick practical answers for their multi-server queuing systems.
Key words and Phrases: Queuing System, Standard Deviation, Waiting Time, Stochastic Process, Heuristics, G/G/s, Approximation Methods, Priority Queue, and Queuing Networks.
|
139 |
Truncation rules in simulation analysis : effect of batch size, time scale and input distribution on the application of Schriber's rule
Baxter, Lori K. 04 June 1990 (has links)
The objective of many simulations is to study the steady-state behavior of a nonterminating system. The initial conditions of the system are often atypical because of the complexity of the system. Simulators often start the simulation with the system empty and idle, and truncate, or delete, some quantity of the initial observations to reduce the initialization bias.
This paper studies the application of Schriber's truncation rule to a queueing model, and the effects of parameter selection. Schriber's rule requires the simulator to select the parameters of batch size, number of batches, and a measure of precision. In addition, Schriber's rule assumes the output is a time series of discrete observations. Previous studies of Schriber's rule have not considered the effect of variation in the time scale (time between observations).
The performance measures for comparison are the mean squared error and the half-length of the confidence interval. The results indicate that the time scale and batch size are significant parameters, and that the number of batches has little effect on the output. A change in the distribution of service time did not alter the results. In addition, it was determined that multiple replicates should be used in establishing the truncation point instead of a single run, and the simulator should carefully consider the choice of time scale for the output series and the batch size. / Graduation date: 1991
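As commonly described in the simulation literature, Schriber's rule batches the output series and truncates once a window of consecutive batch means has stabilized. The sketch below is one generic reading of that idea; the parameter names and the within-tolerance test are my assumptions, not the paper's exact specification of the rule.

```python
def truncation_point(series, batch_size, n_batches, tol):
    """Return the index at which to truncate the output series: the start
    of the first group of n_batches consecutive batch means that all fall
    within +/- tol of their own average, or None if none stabilizes."""
    means = [sum(series[i:i + batch_size]) / batch_size
             for i in range(0, len(series) - batch_size + 1, batch_size)]
    for j in range(len(means) - n_batches + 1):
        window = means[j:j + n_batches]
        avg = sum(window) / n_batches
        if all(abs(m - avg) <= tol for m in window):
            return j * batch_size        # delete observations before this index
    return None
```

The three tuning parameters here (batch size, number of batches, and the precision tolerance) correspond to the parameters whose selection the paper studies; the time-scale effect enters through how densely `series` samples the underlying process.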
|
140 |
Motivation in the English classroom : A study of how English teachers work with motivating their students
Beckman, Ramona January 2010 (has links)
The aim of this essay is to find out what motivation in a school environment actually means and how practicing teachers work to motivate their students to learn English. Interviews were conducted with three English teachers from different levels and municipalities in Sweden, using qualitative, semi-structured questions. It was found that the teachers have both different and similar views on how to motivate their students. It is suggested that there are different kinds of motivation and that motivation is a vital part of a teacher's work. Furthermore, there are numerous methods to apply in a class, and teachers have to try to develop a feel for which ones to use.
Keywords: Learning English, motivation, methods.
|