  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Sequential estimation in statistics and steady-state simulation

Tang, Peng 22 May 2014 (has links)
At the onset of the "Big Data" age, we are faced with ubiquitous data in various forms and with various characteristics, such as noise, high dimensionality, autocorrelation, and so on. The question of how to obtain accurate and computationally efficient estimates from such data is one that has stoked the interest of many researchers. This dissertation mainly concentrates on two general problem areas: inference for high-dimensional and noisy data, and estimation of the steady-state mean for univariate data generated by computer simulation experiments. We develop and evaluate three separate sequential algorithms for the two topics. One major advantage of sequential algorithms is that they allow for careful experimental adjustments as sampling proceeds. Unlike one-step sampling plans, sequential algorithms adapt to different situations arising from the ongoing sampling; this makes these procedures efficacious as problems become more complicated and more-delicate requirements need to be satisfied. We will elaborate on each research topic in the following discussion. Concerning the first topic, our goal is to develop a robust graphical model for noisy data in a high-dimensional setting. Under a Gaussian distributional assumption, the estimation of undirected Gaussian graphs is equivalent to the estimation of inverse covariance matrices. Particular interest has focused upon estimating a sparse inverse covariance matrix to reveal insight on the data as suggested by the principle of parsimony. For estimation with high-dimensional data, the influence of anomalous observations becomes severe as the dimensionality increases. To address this problem, we propose a robust estimation procedure for the Gaussian graphical model based on the Integrated Squared Error (ISE) criterion. The robustness result is obtained by using ISE as a nonparametric criterion for seeking the largest portion of the data that "matches" the model. 
  • Moreover, an l₁-type regularization is applied to encourage sparse estimation. To address the non-convexity of the objective function, we develop a sequential algorithm in the spirit of a majorization-minimization scheme. We summarize the results of Monte Carlo experiments supporting the conclusion that our estimator of the inverse covariance matrix converges weakly (i.e., in probability) to the true inverse covariance matrix as the sample size grows large. The performance of the proposed method is compared with that of several existing approaches through numerical simulations. We further demonstrate the strength of our method with applications in genetic network inference and financial portfolio optimization. The second topic consists of two parts, and both concern the computation of point and confidence interval (CI) estimators for the mean µ of a stationary discrete-time univariate stochastic process X ≡ {X_i : i = 1, 2, ...} generated by a simulation experiment. The point estimation is relatively easy when the underlying system starts in steady state; but the traditional way of calculating CIs usually fails since the data encountered in simulation output are typically serially correlated. We propose two distinct sequential procedures that each yield a CI for µ with user-specified reliability and absolute or relative precision. The first sequential procedure is based on variance estimators computed from standardized time series applied to nonoverlapping batches of observations, and it is characterized by its simplicity relative to methods based on batch means and its ability to deliver CIs for the variance parameter of the output process (i.e., the sum of covariances at all lags). The second procedure is the first sequential algorithm that uses overlapping variance estimators to construct asymptotically valid CI estimators for the steady-state mean based on standardized time series. 
The advantage of this procedure is that compared with other popular procedures for steady-state simulation analysis, the second procedure yields significant reduction both in the variability of its CI estimator and in the sample size needed to satisfy the precision requirement. The effectiveness of both procedures is evaluated via comparisons with state-of-the-art methods based on batch means under a series of experimental settings: the M/M/1 waiting-time process with 90% traffic intensity; the M/H_2/1 waiting-time process with 80% traffic intensity; the M/M/1/LIFO waiting-time process with 80% traffic intensity; and an AR(1)-to-Pareto (ARTOP) process. We find that the new procedures perform comparatively well in terms of their average required sample sizes as well as the coverage and average half-length of their delivered CIs.
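As a point of comparison for the procedures the abstract describes, a minimal nonoverlapping-batch-means confidence interval (the classical baseline against which the dissertation's standardized-time-series procedures are evaluated) can be sketched as follows. All parameter values and the AR(1) test process are illustrative assumptions, not taken from the dissertation:

```python
import math
import random

def batch_means_ci(data, n_batches=20, z=1.96):
    """Approximate CI for the steady-state mean of a correlated series
    using nonoverlapping batch means. (A classical baseline; the
    dissertation's procedures use standardized time series instead.
    z is a normal approximation to the t quantile.)"""
    b = len(data) // n_batches                       # batch size
    means = [sum(data[i*b:(i+1)*b]) / b for i in range(n_batches)]
    grand = sum(means) / n_batches
    s2 = sum((m - grand) ** 2 for m in means) / (n_batches - 1)
    half = z * math.sqrt(s2 / n_batches)
    return grand - half, grand + half

# Illustrative test process: AR(1) output with mean 0, which is serially
# correlated like typical steady-state simulation output.
random.seed(1)
x, xs = 0.0, []
for _ in range(100_000):
    x = 0.8 * x + random.gauss(0, 1)
    xs.append(x)
lo, hi = batch_means_ci(xs)
```

Batching works because batch means are far less correlated than the raw observations, so the usual i.i.d. CI formula applied to them is approximately valid.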
2

A SEQUENTIAL ALGORITHM TO IDENTIFY THE MIXING ENDPOINTS IN LIQUIDS IN PHARMACEUTICAL APPLICATIONS

Saxena, Akriti 28 July 2009 (has links)
The objective of this thesis is to develop a sequential algorithm that determines, accurately and quickly, at which point in time a product is well mixed, i.e., reaches a steady-state plateau in terms of the Refractive Index (RI). An algorithm using sequential nonlinear model fitting and prediction is proposed. A simulation study representing typical scenarios in a liquid manufacturing process in the pharmaceutical industry was performed to evaluate the proposed algorithm. The simulated data included autocorrelated normal errors and used the Gompertz model. A set of 27 different combinations of the parameters of the Gompertz function was considered. The results of the simulation study suggest that the algorithm is insensitive to the functional form and consistently achieves the goal with the fewest time points.
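The plateau-detection idea can be illustrated with a much simpler sequential stopping rule than the thesis's nonlinear model-fitting procedure: monitor a Gompertz-shaped RI trace and stop once recent readings stop changing. The parameter values and the stopping rule below are illustrative assumptions, not those of the thesis:

```python
import math

def gompertz(t, a=1.5, b=5.0, c=0.8):
    """Gompertz growth curve, the functional form used in the thesis's
    simulation study (parameter values here are illustrative)."""
    return a * math.exp(-b * math.exp(-c * t))

def mixing_endpoint(readings, window=3, tol=1e-3):
    """Sequential stopping rule (a simplified stand-in for the thesis's
    model-fitting approach): stop at the first index where the RI has
    changed by less than `tol` over the last `window` readings."""
    for i in range(window, len(readings)):
        if abs(readings[i] - readings[i - window]) < tol:
            return i
    return None

readings = [gompertz(0.5 * k) for k in range(60)]  # noise-free RI trace
stop = mixing_endpoint(readings)
```

With noisy, autocorrelated readings such a raw-difference rule can stop too early, which is precisely why the thesis fits a model sequentially and predicts the plateau instead.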
3

Trial Division : Improvements and Implementations / Trial Division : Förbättringar och Implementationer

Hedenström, Felix January 2017 (has links)
Trial division is possibly the simplest algorithm for factoring numbers. The problem with trial division is that it is slow and wastes computational time on unnecessary divisibility tests. How can this simple algorithm be sped up while remaining serial? How does the algorithm behave when parallelized? Can a superior serial version and a parallel version be combined into an even more powerful algorithm? To answer these questions, the basics of trial division were researched and improvements were suggested. These improvements were later implemented and tested by measuring the time it took to factorize a given number. A version using a list of primes and multiple threads turned out to be the fastest for numbers larger than 10^10, but was beaten by its serial counterpart when factoring smaller numbers. A re-allocation problem that gave the parallel versions long allocation times was detected late; since the parallel implementations were still faster, it was left unfixed, and it did not hinder them much.
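A plain serial trial-division baseline of the kind the thesis improves upon might look like the following sketch, which uses the common 6k±1 wheel to skip multiples of 2 and 3 (this is a generic illustration, not the thesis's implementation):

```python
def factorize(n):
    """Serial trial division with a 6k±1 wheel: after stripping factors
    of 2 and 3, every remaining prime lies at 6k-1 or 6k+1, so only
    those candidates are tried, and only up to sqrt(n)."""
    factors = []
    for p in (2, 3):
        while n % p == 0:
            factors.append(p)
            n //= p
    d = 5
    while d * d <= n:            # trial-divide only up to sqrt(n)
        for step in (d, d + 2):  # the 6k-1 and 6k+1 candidates
            while n % step == 0:
                factors.append(step)
                n //= step
        d += 6
    if n > 1:
        factors.append(n)        # remaining cofactor is prime
    return factors
```

The thesis's faster variants replace the wheel with a precomputed list of primes (so no composite candidates are ever tried) and split the candidate range across threads.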
4

Caractérisation impérative des algorithmes séquentiels en temps quelconque, primitif récursif ou polynomial / Imperative characterization of sequential algorithms in general, primitive recursive or polynomial time

Marquer, Yoann 09 October 2015 (has links)
The results of Colson and Moschovakis cast doubt on the ability of the primitive recursive model to compute a value by any means possible: the model may be complete for functions, but there is a lack of algorithms. The Church thesis therefore expresses more what can be computed than how the computation is done. We use Gurevich's thesis, which formalizes the intuitive idea of a sequential algorithm by Abstract State Machines (ASMs). We represent imperative programs by Jones' While language, and by a variant LoopC of Meyer and Ritchie's language that allows exiting a loop when some condition is fulfilled. We say that a language characterizes an algorithmic class if the associated models of computation can simulate each other, using a temporal dilation and a bounded number of temporary variables. We prove that ASMs can simulate While and LoopC, that if the space is primitive recursive then LoopC runs in primitive recursive time, and that its restriction LoopC_stat, in which loop bounds cannot be updated, runs in polynomial time. Conversely, one step of an ASM can be translated into a loop-free program, which can be repeated sufficiently many times by inserting it into a program that lies in While for arbitrary complexity, in LoopC for primitive recursive complexity, and in LoopC_stat for polynomial complexity. Thus While characterizes the sequential algorithms in arbitrary time, LoopC those in primitive recursive time and space, and LoopC_stat those in polynomial time.
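The distinction between unbounded (While-style) iteration and bounded (Loop-style) iteration can be illustrated with a toy sketch: when every loop bound is read once at loop entry and cannot be updated inside the body, nesting such loops builds up exactly the primitive recursive functions. This is an informal illustration in ordinary Python, not the thesis's formal languages:

```python
def loop_add(x, y):
    """Loop-language style addition: the bound `y` is fixed before the
    loop is entered and cannot change inside the body (this is the
    restriction that LoopC_stat imposes on all loop bounds)."""
    acc = x
    for _ in range(y):   # bounded iteration
        acc += 1
    return acc

def loop_mul(x, y):
    """Multiplication as a bounded loop over addition."""
    acc = 0
    for _ in range(y):
        acc = loop_add(acc, x)
    return acc

def loop_exp(x, y):
    """Nesting bounded loops climbs the primitive recursive hierarchy:
    exponentiation from multiplication from addition."""
    acc = 1
    for _ in range(y):
        acc = loop_mul(acc, x)
    return acc
```

A While-style language additionally permits loops whose exit depends on a condition computed inside the body, which is what takes it beyond primitive recursive time.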
5

Návrh experimentu pro řešení inverzní úlohy vedení tepla / Design of Experiment for Inverse Heat Transfer Problem

Horák, Aleš January 2011 (has links)
In this thesis a complex inverse heat transfer problem, focused on the optimal design of experiments, is studied. There are many fields and applications in technical practice where inverse tasks are or can be applied. The main attention is focused on industrial metallurgical processes such as the cooling of continuous casting, hydraulic descaling, and hot rolling. Inverse problems are in general used to calculate the boundary conditions of differential equations, and in this field they are used to find the Heat Transfer Coefficient (HTC). Knowledge of a numerical approximation of precise boundary conditions is nowadays essential; it allows, for example, the design of optimized hot rolling mill cooling focused on material properties and final product quality. Beck's sequential approach and an optimization method are used in this work to solve inverse heat transfer problems. A special experimental test bench measuring heat transfer intensity was developed and built to fulfil specific requirements and the required accuracy. Four different types of thermal sensor were applied and studied. These sensors are in use at various experimental test benches in the Heat Transfer and Fluid Flow Laboratory (Heatlab), where each was tailored to a specific metallurgical application. The first type of sensor was designed to simulate cooling during continuous casting. The second sensor is used for experiments simulating hot rolling mill cooling, while the third sensor is designated for experiments with fast-moving hot rolled products. The last sensor is similar to the first, but its thermocouple is located parallel to the cooled surface. The experimental part of this study covers a series of measurements investigating the HTC for various types of coolant, cooling mixtures, and spray parameters. The results obtained in this study were compared with published scientific articles, and widely extend the knowledge of cooling efficiency for commonly used
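The sequential flavor of the inverse computation can be sketched with a lumped-capacitance toy model, where each pair of consecutive temperature readings yields one HTC estimate. This is a deliberately simplified stand-in for Beck's future-time-step method used in the thesis, and all physical values are hypothetical:

```python
import math

# Illustrative lumped-capacitance setup (all values hypothetical):
# a small hot sample cooled by a spray, with an "unknown" HTC h_true.
m_c = 500.0      # mass * specific heat  [J/K]
A = 0.01         # cooled surface area   [m^2]
T_inf = 20.0     # coolant temperature   [deg C]
dt = 0.5         # sampling period       [s]
h_true = 1200.0  # heat transfer coefficient to be recovered [W/m^2K]

# Forward model: generate the temperature record an experiment would log.
# Lumped capacitance: T(t+dt) = T_inf + (T(t) - T_inf) * exp(-hA/(mc)*dt)
tau = h_true * A / m_c
temps = [500.0]
for _ in range(100):
    temps.append(T_inf + (temps[-1] - T_inf) * math.exp(-tau * dt))

# Sequential inverse step: invert the update formula for each pair of
# consecutive readings, giving one HTC estimate per time step, then
# average. (Beck's method instead stabilizes each step with several
# future time steps, which matters once the data are noisy.)
estimates = []
for T0, T1 in zip(temps, temps[1:]):
    h_k = -(m_c / (A * dt)) * math.log((T1 - T_inf) / (T0 - T_inf))
    estimates.append(h_k)
h_hat = sum(estimates) / len(estimates)
```

With noise-free data the per-step inversion recovers the HTC exactly; with real measurements, the regularization provided by Beck's future-time-step averaging becomes essential.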
