About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

New and hybrid methods for simulating biochemical systems

Greenfield, Daniel Leo, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2006
It is a dream of systems biology to efficiently simulate an entire cell on a computer. The potential medical and biological applications of such a tool are immense, and so are the challenges to accomplish it. At the level of a cell, the number of reacting molecules is so low that stochastic effects can be crucial in deciding the system-level behaviour of the cell. Despite the recent development of many new and hybrid stochastic approaches, exact stochastic simulation algorithms are still needed, and are widely employed in most current biochemical simulation packages. Unfortunately, the performance of these algorithms scales badly with the number of reactions. It is shown that this is especially the case for hubs and scale-free networks. This is worrying because hubs are an important component of biochemical systems, and it is widely suspected that biochemical networks are scale-free. It is shown that the scalability issue in these algorithms is due to the high interdependency between reactions. A general method for arbitrarily reducing this interdependency is introduced, and it is shown how it can be used for many classes of simulation processes. This is applied to one of the fastest current algorithms, the Next Reaction Method. The resulting algorithm, the Reactant-Margin Method, is tested on a wide range of hub sizes and shown to be asymptotically faster than the current best algorithms. Hybrid versions of the Reactant-Margin Method and the Next Reaction Method are also compared on a real biological model, the Lambda-Phage virus, and the new algorithm is again shown to perform better. The problems inherent in the hybridization are also shown to be handled more exactly and efficiently in the Reactant-Margin framework than in the Next Reaction Method framework. Finally, a software tool called GeNIV is introduced. This GUI-based biochemical modelling and simulation tool embodies a mechanistic-representation philosophy. It implements the Reactant-Margin and Next Reaction hybrid algorithms, and has a simple representation system for gene-state occupancy and the subsequent biochemical reactions. It is also novel in that it translates the graphical model into Java code, which is compiled and executed for simulation.
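The Reactant-Margin Method itself is specific to this thesis, but the baseline it improves on is standard. As a point of reference, here is a minimal Python sketch of Gillespie's direct method (the exact-SSA family the abstract refers to); the toy reaction and all names are illustrative, not taken from GeNIV. Note the line that recomputes every propensity after each firing: that blanket recomputation is the reaction interdependency cost the thesis targets.

```python
import math
import random

def gillespie_direct(x, reactions, t_end, seed=0):
    """Minimal Gillespie direct method (exact SSA).

    x         -- dict of species counts, e.g. {"A": 100, "B": 0}
    reactions -- list of (propensity_fn, stoichiometry) pairs; the
                 stoichiometry maps each species to the change applied
                 when the reaction fires.
    """
    rng = random.Random(seed)
    t = 0.0
    while True:
        # Recomputing ALL propensities every step is the interdependency
        # cost that dependency graphs and the Reactant-Margin idea reduce.
        props = [f(x) for f, _ in reactions]
        a0 = sum(props)
        if a0 == 0:
            break                                  # nothing can fire
        t += -math.log(1.0 - rng.random()) / a0    # exponential waiting time
        if t > t_end:
            break
        r, acc = rng.random() * a0, 0.0            # pick j with prob a_j / a0
        for (_, stoich), a in zip(reactions, props):
            acc += a
            if r < acc:
                for species, change in stoich.items():
                    x[species] += change
                break
    return x

# Toy isomerisation A -> B at rate 0.1 per A molecule.
print(gillespie_direct({"A": 1000, "B": 0},
                       [(lambda s: 0.1 * s["A"], {"A": -1, "B": +1})],
                       t_end=10.0))
```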
2

The effect of calving season on economic risk and return in cow-calf operations in western Canada

Sirski, Tanis 24 August 2012
Cow-calf producers in western Canada are faced with many decisions throughout the production cycle. The choice of calving time impacts production rate, marketability of calves, income and expenses, and net revenue. The purpose of this study was to determine whether June calving could increase net revenues and be a preferred choice over March calving across different risk aversion levels in western Canada. Data for this study were taken from a study carried out by Iwaasa et al. (2009), who collected information from three sites: Brandon, MB; Lanigan, SK; and Swift Current, SK. Stochastic budgets and a simulation model were used to study the economic impact of calving time. In Brandon and Lanigan, it was found that June calving increased net income and was the dominant alternative across all levels of risk aversion; in Swift Current, June dominated at high levels of risk aversion.
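As an illustration of how "dominance across risk aversion levels" can be checked from stochastic budget output, here is a small Python sketch comparing two hypothetical net-revenue distributions by certainty equivalent under negative-exponential utility. The distributions and every parameter are invented for illustration; the thesis's budgets are built from the Iwaasa et al. (2009) trial data, not these numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical net-revenue distributions ($ per cow), purely illustrative.
march = rng.normal(loc=120.0, scale=60.0, size=n)
june = rng.normal(loc=150.0, scale=80.0, size=n)

def certainty_equivalent(w, r):
    """CE under negative-exponential utility U(w) = -exp(-r*w);
    r is the coefficient of absolute risk aversion (r=0 -> risk neutral)."""
    if r == 0:
        return w.mean()
    return -np.log(np.mean(np.exp(-r * w))) / r

for r in [0.0, 0.005, 0.01, 0.02]:      # increasing risk aversion
    ce_m = certainty_equivalent(march, r)
    ce_j = certainty_equivalent(june, r)
    better = "June" if ce_j > ce_m else "March"
    print(f"r={r:<6} CE(March)={ce_m:7.2f}  CE(June)={ce_j:7.2f}  -> {better}")
```

An alternative that dominates at every risk aversion level in such a sweep is the preferred choice regardless of the producer's attitude to risk, which is the sense of "dominant alternative" used in the abstract.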
3

A Gillespie-Type Algorithm for Particle Based Stochastic Model on Lattice

Liu, Weigang January 2019
In this thesis, I propose a general stochastic simulation algorithm for particle-based lattice models using the concepts of Gillespie's stochastic simulation algorithm, which was originally designed for well-stirred systems. I describe the details of this method and analyze its complexity in comparison with the StochSim algorithm, another simulation algorithm originally proposed for stochastic lattice models. I compare the performance of both algorithms with application to two different examples: the May-Leonard model and the Ziff-Gulari-Barshad model. Comparison between the simulation results from both algorithms validates our claim that the newly proposed algorithm is comparable to StochSim in simulation accuracy. I also compare the efficiency of both algorithms using the CPU cost of each code and conclude that the new algorithm is as efficient as StochSim in most test cases, while performing even better for certain specific cases. / Computer simulation has been in development for almost a century. A stochastic lattice model, which follows the physics concept of a lattice, is a system in which individual entities live on a grid and exhibit random behaviors according to specific rules. It is mainly studied using computer simulations. The most widely used simulation method for stochastic lattice systems is the StochSim algorithm, which randomly picks an entity and then determines its behavior based on a set of specific random rules. Our goal is to develop new simulation methods that make it more convenient to simulate and analyze stochastic lattice systems. In this thesis I propose another type of simulation method for stochastic lattice models using entirely different concepts and procedures. I developed a simulation package and applied it to two different examples using both methods, and then conducted a series of numerical experiments to compare their performance. I conclude that they are roughly equivalent and that the new method performs better than the old one in certain special cases.
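For orientation, the "pick a random entity, apply a random rule" pattern the abstract attributes to StochSim can be sketched in a few lines. The toy model below (particles hopping on a periodic 1-D lattice) is invented for illustration and is neither the May-Leonard nor the Ziff-Gulari-Barshad model; a Gillespie-type alternative would instead draw the next event and its waiting time from the propensities of all possible moves, as in the direct-method sketch shown after entry 1.

```python
import random

def stochsim_style_hop(L=50, n_particles=10, steps=10_000, seed=0):
    """StochSim-flavoured micro-stepping on a toy lattice: each step picks
    one site uniformly at random and, if it holds a particle, tries to move
    it to a randomly chosen neighbour. This is only the update *pattern*
    the thesis compares against, not the thesis's own algorithm."""
    rng = random.Random(seed)
    lattice = [1] * n_particles + [0] * (L - n_particles)
    rng.shuffle(lattice)
    for _ in range(steps):
        i = rng.randrange(L)                      # pick a random site
        if lattice[i]:
            j = (i + rng.choice((-1, 1))) % L     # random neighbour (periodic)
            if not lattice[j]:
                lattice[i], lattice[j] = 0, 1     # hop if the target is empty
    return lattice

print(sum(stochsim_style_hop()))   # particle number is conserved: prints 10
```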
4

Random Vector Generation on Large Discrete Spaces

Shin, Kaeyoung 17 December 2010
This dissertation addresses three important open questions in the context of generating random vectors having discrete support. The first question relates to the "NORmal To Anything" (NORTA) procedure, which is easily the most widely used among methods for general random vector generation. While NORTA enjoys such popularity, there remain issues surrounding its efficient and correct implementation, particularly when generating random vectors having denumerable support. These complications stem primarily from having to safely compute (on a digital computer) certain infinite summations that are inherent to the NORTA procedure. This dissertation addresses the summation issue within NORTA through the construction of easily computable truncation rules that can be applied for a range of discrete random vector generation contexts. The second question tackled in this dissertation relates to developing a customized algorithm for generating multivariate Poisson random vectors. The algorithm developed (TREx) is uniformly fast (about a hundred to a thousand times faster than NORTA) and presents opportunities for straightforward extensions to the case of negative binomial marginal distributions. The third and arguably most important question addressed in the dissertation is that of exact nonparametric random vector generation on finite spaces. Specifically, it is well known that NORTA does not guarantee exact generation in dimensions higher than two. This represents an important gap in the random vector generation literature, especially in view of contexts that stipulate strict adherence to the dependency structure of the requested random vectors. This dissertation fully addresses this gap through the development of Maximum Entropy methods. The methods are exact, very efficient, and work on any finite discrete space with stipulated nonparametric marginal distributions. All code developed as part of the dissertation was written in MATLAB, and is publicly accessible through the Web site https://filebox.vt.edu/users/pasupath/pasupath.htm. / Ph. D.
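A minimal sketch of the NORTA idea, assuming Poisson marginals and using SciPy: sample correlated normals, push them through the normal CDF to obtain coupled uniforms, then invert the target marginal CDFs. The induced output correlation typically lands somewhat below the base normal correlation; matching a stipulated output correlation exactly requires solving for the base correlation, which involves the infinite sums whose truncation the dissertation addresses.

```python
import numpy as np
from scipy.stats import norm, poisson

def norta_poisson(lam, corr_z, n, seed=0):
    """Sketch of NORTA for Poisson marginals.

    lam    -- Poisson means, one per dimension
    corr_z -- correlation matrix of the *base normal* vector (an input,
              not the output correlation the user ultimately wants)
    """
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(lam)), corr_z, size=n)
    u = norm.cdf(z)                          # coupled uniform marginals
    return poisson.ppf(u, lam).astype(int)   # invert target marginal CDFs

x = norta_poisson(lam=[3.0, 10.0],
                  corr_z=[[1.0, 0.7], [0.7, 1.0]],
                  n=100_000)
# Induced correlation is generally below the base 0.7 for discrete marginals.
print(np.corrcoef(x, rowvar=False)[0, 1])
```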
5

Bridging the Gap between Deterministic and Stochastic Modeling with Automatic Scaling and Conversion

Wang, Pengyuan 17 June 2008
During the past decade, many successful deterministic models of macromolecular regulatory networks have been built. Deterministic simulations of these models can show only the average dynamics of the systems. However, stochastic simulations of macromolecular regulatory models can account for behaviors that are introduced by the noisy nature of the systems but not revealed by deterministic simulations. Thus, converting an existing model of value from the most common deterministic formulation to one suitable for stochastic simulation enables further investigation of the regulatory network. Although many different stochastic models can be developed and evolved from deterministic models, a direct conversion is the first step in practice. This conversion process is tedious and error-prone, especially for complex models. Thus, we seek to automate as much of the conversion process as possible. However, deterministic models often omit key information necessary for a stochastic formulation. Specifically, values in the model have to be scaled before a complete conversion, and the scaling factors are typically not given in the deterministic model. Several functionalities that assist with model scaling and conversion are introduced and implemented in the JigCell modeling environment. Our tool makes it easier for the modeler to include complete details as well as to convert the model. Stochastic simulations are known for being computationally intensive, and thus require high-performance computing facilities to be practical. With parallel computation on Virginia Tech's System X supercomputer, we are able to obtain the first stochastic simulation results for realistic cell cycle models. Stochastic simulation results for several mutants, which are thought to be biologically significant, are presented. Successful deployment of the enhanced modeling environment demonstrates the power of our techniques. / Master of Science
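The scaling step can be made concrete. The sketch below shows the standard unit conversions involved in moving from a deterministic (concentration-based) formulation to a stochastic (molecule-count) one; the function names and the example volume are illustrative, not JigCell's actual interface.

```python
# Standard deterministic-to-stochastic scaling for mass-action kinetics,
# assuming concentrations in mol/L and a well-mixed reaction volume V in
# litres. The volume is exactly the scaling factor that deterministic
# models typically omit.
N_A = 6.022e23            # Avogadro's number

def to_stochastic_rate(k_det, order, volume):
    """Deterministic rate constant -> stochastic rate constant c.
    First order: c = k; second order: c = k / (N_A * V); and so on."""
    return k_det / (N_A * volume) ** (order - 1)

def to_molecule_count(conc_molar, volume):
    """Concentration (mol/L) -> integer molecule count."""
    return round(conc_molar * N_A * volume)

V = 1e-15                                        # ~1 femtolitre, a small cell
print(to_molecule_count(1e-6, V))                # 1 uM -> ~602 molecules
print(to_stochastic_rate(1e6, order=2, volume=V))  # bimolecular k -> c
```

The example also shows why conversion needs modeler input: a 1 uM species in a femtolitre volume is only a few hundred molecules, small enough that the stochastic treatment matters, and none of this is computable without the volume.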
6

rstream: Streams of Random Numbers for Stochastic Simulation

L'Ecuyer, Pierre, Leydold, Josef January 2005
The package rstream provides a unified interface to streams of random numbers for the R statistical computing language. Features include:
* independent streams of random numbers
* substreams
* easy handling of streams (initialize, reset)
* antithetic random variates
The paper describes this package and demonstrates the usefulness of the approach with a simple example. / Series: Preprint Series / Department of Applied Statistics and Data Processing
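rstream itself is an R package; as a language-neutral illustration of the same idea (independent, reproducible streams that can be reset), here is the analogous pattern using NumPy's SeedSequence in Python. This is a stand-in for the concept, not a description of rstream's API.

```python
import numpy as np

# One root seed spawns statistically independent child streams, so each
# simulation component (arrivals, service times, ...) gets its own stream.
root = np.random.SeedSequence(20050101)
stream_seeds = root.spawn(3)                      # three independent streams
streams = [np.random.default_rng(s) for s in stream_seeds]

print(streams[0].random(2))   # stream 0, first two variates
print(streams[1].random(2))   # stream 1 is independent of stream 0

# "Reset" a stream by rebuilding its generator from the saved seed,
# e.g. to rerun a scenario with common random numbers:
streams[0] = np.random.default_rng(stream_seeds[0])
print(streams[0].random(2))   # identical to stream 0's first two variates
```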
7

Energy consumption and execution time estimation of embedded system applications

Rau de Almeida Callou, Gustavo 31 January 2009
In recent years, reducing the energy consumption of embedded system applications has received considerable attention from the scientific community: because response time and low energy consumption are conflicting requirements, such studies are highly necessary. In this context, a methodology applied in the early design phases is proposed to support decisions concerning the energy consumption and performance of applications on these embedded devices. In addition, this work proposes timed discrete-event models that are evaluated through a stochastic simulation methodology, with the aim of representing different system scenarios with ease. For each scenario, one must decide the maximum number of simulations and the length of each simulation run, both of which can affect the performance of obtaining the estimates. The methodology also relies on an intermediate model that describes the system's behaviour, and it is through this model that scenarios are analyzed. This intermediate model is based on timed coloured Petri nets, which not only allow analysis of the software but also support a set of well-established methods for verifying properties. In this context, ALUPAS, the software responsible for estimating the energy consumption and execution time of embedded systems, is presented. Finally, a real case study, as well as customized examples, is presented to demonstrate the applicability of this work, in which non-specialist users need not interact directly with the Petri net formalism.
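One concrete piece of the methodology (deciding how many simulation runs are enough for a given scenario) can be sketched with a standard sequential stopping rule: replicate until the confidence-interval half-width falls below a target relative precision. The code below is a generic illustration with a synthetic run function, not ALUPAS itself.

```python
import math
import random

def estimate_mean(run_once, rel_err=0.02, z=1.96, min_n=30, max_n=50_000,
                  seed=0):
    """Replicate run_once(rng) until the 95% CI half-width is within
    rel_err of the running mean, or max_n runs are used.
    Returns (mean, half_width, n)."""
    rng = random.Random(seed)
    total = total_sq = 0.0
    n = 0
    while n < max_n:
        x = run_once(rng)
        n += 1
        total += x
        total_sq += x * x
        if n < min_n:
            continue                       # too few runs to trust the CI
        mean = total / n
        var = (total_sq - n * mean * mean) / (n - 1)
        half = z * math.sqrt(max(var, 0.0) / n)
        if half <= rel_err * abs(mean):
            break                          # precise enough; stop early
    return mean, half, n

# Synthetic stand-in for one simulation run returning energy in joules.
mean, half, n = estimate_mean(lambda r: r.gauss(2.5, 0.4))
print(f"energy ~ {mean:.3f} +/- {half:.3f} J after {n} runs")
```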
8

Estimating the risks in defined benefit pension funds under the constraints of PF117

Mahmood, Ra'ees January 2017
With the issuing of Pension Funds circular PF117 in 2004 in South Africa, regulation required valuation assumptions for defined benefit pension funds to be on a best-estimate basis. Allowance for prudence was to be made through explicit contingency reserves, in order to increase reporting transparency. These reserves for prudence, however, were not permitted to put the fund into deficit (the no-deficit clause). Analysis is conducted to understand the risk that PF117 poses to pension fund sponsors and members under two key measures: contribution rate risk and solvency risk. A stochastic model of a typical South African defined benefit fund is constructed with simulations run to determine the impact of the PF117 requirements. Findings show that a best-estimate funding basis, coupled with the no-deficit clause, results in significant risk under both contribution rate and solvency risk measures, particularly in the short-term. To mitigate these risks, alternative ways of introducing conservatism into the funding basis are required, with possible options including incorporating margins into investment return assumptions or the removal of the no-deficit clause.
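The solvency risk measure can be illustrated with a stripped-down asset-liability simulation: estimate the probability that a fund starting fully funded on a best-estimate basis falls into deficit at some point over the horizon. All parameters below are invented for illustration and do not come from the thesis's model of a South African fund.

```python
import numpy as np

rng = np.random.default_rng(2017)
n_sims, years = 10_000, 10

# Hypothetical fund, purely illustrative figures.
assets0, liab0 = 100.0, 100.0        # fully funded at outset
liab_growth = 0.08                   # annual roll-up of the liability
mu, sigma = 0.09, 0.12               # lognormal asset-return parameters

assets = np.full(n_sims, assets0)
liab = liab0
ever_in_deficit = np.zeros(n_sims, dtype=bool)
for _ in range(years):
    assets *= np.exp(rng.normal(mu - 0.5 * sigma**2, sigma, n_sims))
    liab *= 1 + liab_growth
    ever_in_deficit |= assets < liab   # record any year-end deficit

print(f"P(deficit at some point over {years}y) = {ever_in_deficit.mean():.1%}")
```

Contribution rate risk can be read off the same simulation by recording, in each scenario, the contribution needed to restore full funding and examining its variability.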
9

Stochastic Modeling and Simulation of Gene Networks

Xu, Zhouyi 06 May 2010
Recent research in experimental and computational biology has revealed the necessity of using stochastic modeling and simulation to investigate the functionality and dynamics of gene networks. However, there is a lack of sophisticated stochastic modeling techniques and efficient stochastic simulation algorithms (SSAs) for analyzing and simulating gene networks. Therefore, the objective of this research is to design highly efficient and accurate SSAs, to develop stochastic models for certain real gene networks, and to apply stochastic simulation to investigate such gene networks. To achieve this objective, we developed several novel efficient and accurate SSAs. We also proposed two stochastic models for the circadian system of Drosophila and simulated the dynamics of the system. The K-leap method constrains the total number of reactions in one leap to a properly chosen number, thereby improving simulation accuracy. Since the exact SSA is a special case of the K-leap method when K=1, the K-leap method can naturally change from the exact SSA to an approximate leap method during simulation if necessary. The hybrid tau/K-leap and the modified K-leap methods are particularly suitable for simulating gene networks where certain reactant molecular species have a small number of molecules. Although the existing tau-leap methods can significantly speed up stochastic simulation of certain gene networks, the mean of the number of firings of each reaction channel is not equal to the true mean. Therefore, all existing tau-leap methods produce biased results, which limit simulation accuracy and speed. Our unbiased tau-leap methods remove the bias in simulation results that exists in all current leap SSAs and therefore significantly improve simulation accuracy without sacrificing speed. In order to efficiently estimate the probability of rare events in gene networks, we applied the importance sampling technique to the next reaction method (NRM) of the SSA and developed a weighted NRM (wNRM). We further developed a systematic method for selecting the values of importance sampling parameters. Applying our parameter selection method to the wSSA and the wNRM, we obtain an improved wSSA (iwSSA) and an improved wNRM (iwNRM), which provide substantial improvement over the wSSA in terms of simulation efficiency and accuracy. We also develop a detailed and a reduced stochastic model for circadian rhythm in Drosophila and employ our SSA to simulate circadian oscillations. Our simulations showed that both models could produce sustained oscillations and that the oscillation is robust to noise in the sense that there is very little variability in the oscillation period although there are significant random fluctuations in oscillation peaks. Moreover, although average time delays are essential to the simulation of oscillation, random changes in time delays within a certain range around the fixed average time delay cause little variability in the oscillation period. Our simulation results also showed that both models are robust to parameter variations and that oscillation can be entrained by light/dark cycles.
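For context on the leaping family this abstract builds on, here is a minimal sketch of plain Poisson tau-leaping, the baseline that K-leap and the unbiased variants refine (the K-leap method instead fixes the total number of firings per leap; the bias discussed above is a property of exactly this kind of frozen-propensity Poisson update). The dimerisation example and all parameters are illustrative.

```python
import numpy as np

def tau_leap(x0, stoich, rates, tau, n_steps, seed=0):
    """Plain Poisson tau-leaping: in each leap of length tau, fire each
    reaction channel a Poisson(a_j * tau) number of times with the
    propensities a(x) frozen at the start of the leap.

    x0     -- initial counts, shape (S,)
    stoich -- state-change matrix, shape (R, S)
    rates  -- function mapping state x to propensities a(x), shape (R,)
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        a = rates(x)
        k = rng.poisson(a * tau)            # firings per channel this leap
        x = np.maximum(x + k @ stoich, 0)   # crude clamp against negatives
    return x

# Toy dimerisation: A + A -> B (rate c1), B -> A + A (rate c2).
stoich = np.array([[-2, +1],
                   [+2, -1]])
rates = lambda x: np.array([0.001 * x[0] * (x[0] - 1) / 2, 0.1 * x[1]])
print(tau_leap([1000, 0], stoich, rates, tau=0.05, n_steps=200))
```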
