About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

High-dimensional problems in stochastic modelling of biological processes

Liao, Shuohao January 2017 (has links)
Stochastic modelling of gene regulatory networks provides an indispensable tool for understanding how random events at the molecular level influence cellular functions. A common challenge of stochastic models is to calibrate a large number of model parameters against experimental data. Another difficulty is to study how the behaviour of a stochastic model depends on its parameters, i.e. whether a change in model parameters can lead to a significant qualitative change in model behaviour (a bifurcation). This thesis addresses such computational challenges with a tensor-structured computational framework. After a background introduction in Chapter 1, Chapter 2 derives the order of convergence, in terms of the system volume, between the stationary distributions of the exact chemical master equation (CME) and its continuous approximation, the chemical Fokker-Planck equation (CFPE). It also proposes multi-scale approaches to address the failure of the CFPE to capture the noise-induced multistability of the CME distribution. Chapter 3 studies the numerical solution of the high-dimensional CFPE using the tensor-train (TT) and quantized tensor-train (QTT) data formats. In Chapter 4, the tensor solutions are applied to study parameter estimation, robustness, sensitivity and bifurcation structures of stochastic reaction networks. A Matlab implementation of the proposed methods and algorithms is available at http://www.stobifan.org.
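The tensor-train format mentioned in Chapter 3 can be illustrated with a generic TT-SVD decomposition. The sketch below is a standard textbook construction in Python, not the thesis's Matlab implementation at stobifan.org; the array shape, maximum rank and tolerance are chosen purely for illustration.

```python
import numpy as np

def tt_svd(tensor, max_rank=8, tol=1e-12):
    """Standard TT-SVD: split a d-dimensional array into d three-way TT cores
    by sweeping left to right with truncated SVDs."""
    dims = tensor.shape
    d = len(dims)
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, min(max_rank, int(np.sum(s > tol))))   # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = (np.diag(s[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

# Toy example: a 5-dimensional array (for a CFPE solution each axis would index
# the state of one chemical species).
p = np.random.rand(2, 2, 2, 2, 2)
cores = tt_svd(p)

# Reconstruct by contracting the cores to check the approximation error.
approx = cores[0]
for core in cores[1:]:
    approx = np.tensordot(approx, core, axes=([-1], [0]))
print(np.max(np.abs(approx.reshape(p.shape) - p)))
```

The point for high-dimensional CFPE solutions is that, for fixed TT ranks, storage grows linearly with the number of species (dimensions) rather than exponentially.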
2

Stochastic modelling of reaction-diffusion processes in biology / Stochastické modelování reakčně-difuzních procesů v biologii

Lipková, Jana January 2011 (has links)
Many biological processes can be described in terms of chemical reactions and diffusion. In this thesis, reaction-diffusion mechanisms related to the formation of Turing patterns are studied. Necessary and sufficient conditions under which Turing instability occurs are presented. The behaviour of Turing patterns is investigated using a deterministic approach, a compartment-based stochastic simulation algorithm and a molecular-based stochastic simulation algorithm.
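For reference, the standard textbook form of these conditions (not necessarily the exact formulation used in the thesis), for a two-species system u_t = D_u ∇²u + f(u,v), v_t = D_v ∇²v + g(u,v) linearised about a homogeneous steady state, is:

```latex
% Stable without diffusion ...
f_u + g_v < 0, \qquad f_u g_v - f_v g_u > 0,
% ... but destabilised by unequal diffusion (Turing instability):
D_v f_u + D_u g_v > 0, \qquad
\left(D_v f_u + D_u g_v\right)^2 > 4\, D_u D_v \left(f_u g_v - f_v g_u\right),
```

where the partial derivatives of the reaction kinetics f and g are evaluated at the homogeneous steady state.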
3

Scenario thinking and stochastic modelling for strategic and policy decisions in agriculture

Strauss, P.G. (Petrus Gerhardus) 06 June 2010 (has links)
In 1985, Pierre Wack, arguably the father of modern scenario thinking, wrote the following: “Forecasts often work because the world does not always change. But sooner or later forecasts will fail when they are needed most: in anticipating major shifts…” (Wack, 1985: 73). The truth of this statement has again become apparent, first as the “food price crisis” played out during 2007 and 2008, and secondly as the current financial and economic crisis plays out. Respected market commentators and analysts, both internationally and within South Africa, made all sorts of “informed predictions” on topics ranging from oil prices, interest rates and economic growth rates to input costs and food prices. The problem is that none of these “respected views” and “informed predictions and estimates” came true within the period assigned to them. In fact, just the opposite occurred: the unexpected implosion of the global economy and hence of commodity markets. The result of the experts “getting it so wrong” is that questions are being asked about the reliability of risk and uncertainty analysis. Even though the experts used highly advanced analytical techniques to analyse the risks and uncertainties and to formulate predictions and outlooks, both the “food price crisis” and the economic implosion were entirely unanticipated. The same questions need to be asked of risk and uncertainty analyses in agricultural economics. With agriculture experiencing a period of fundamental change and hence significant uncertainty, risk and uncertainty analyses in agriculture will need to move to the next level to ensure that policies and business strategies are robust enough to withstand these newly arising uncertainties. The proposed solution to this problem, and therefore the hypothesis offered and tested by this thesis, is to work with two techniques in conjunction, without combining them, when developing a view of the future. The two techniques used, namely intuitive scenario thinking and stochastic modelling, are based on two fundamentally different hypotheses: that the future is like the past and present (stochastic modelling), and that the future is not like the past and present but results from a combination of current and unexpectedly new forces or factors (intuitive scenario thinking). The idea behind this stems from the philosophy of Socrates, who postulated that the truth can never be fully known and that, when working with the truth, one therefore needs to work with multiple hypotheses about it until all but one can be discarded. This brings one closer to the truth, but never leads one to know it in full, since the truth cannot be known in full. Applying this idea means using, in conjunction, two techniques based on the two hypotheses about the future. A literature review showed that two such techniques exist, namely stochastic modelling and scenario thinking. Stochastic modelling, by its very nature, is based on the assumption that the future is like the past and present, since historical data, historical inter-relationships, experience and modelling techniques are used to develop the model, apply it and interpret its results. Scenario thinking, on the other hand, and specifically intuitive logics scenario thinking, is based on the notion that the future is not like the past or present, but is rather a combination of existing and new, unknown factors and forces.
At first, the problem with this idea was thought to lie in using the two techniques in combination, since they are fundamentally different because of the fundamentally different assumptions on which they are based. The question, and the challenge, was therefore whether these two techniques could be used in combination, and how. The solution, however, was more elementary than initially thought. Because the two techniques are fundamentally different, they cannot be combined, as their underlying assumptions cannot be combined. What is possible is to use them in conjunction, without adjusting either technique. Rather, each technique is allowed to run its course, which at the same time leads to cross-pollination of ideas and perspectives, where possible and applicable. This cross-pollination creates a process whereby ideas regarding the two basic assumptions about the future are crystallised and refined through learning, resulting in clearer perspectives on both hypotheses: whether the future will be like the past and present, or whether it will be a combination of existing and new but unknown factors and forces. These clearer perspectives provide the decision-maker with a framework in which the two basic hypotheses about the future can be applied simultaneously to develop strategies and policies that are likely to be robust enough to succeed in either case. The framework also allows reality to be interpreted as it unfolds, signalling to the decision-maker which of the two hypotheses is playing out. This assists the decision-maker in better perceiving what is in fact happening, hence what the newly perceived truth is in terms of the future, and therefore what needs to be done in order to survive and grow within this newly developing future, reality or truth. Three case studies are presented to test the hypothesis of this thesis as stated in chapter one, and it is concluded that the hypothesis cannot be rejected. Through the case studies it is found that using scenario thinking in conjunction with stochastic modelling does indeed facilitate a more complete understanding of the risks and uncertainties pertaining to policy and strategic business decisions in agricultural commodity markets, by fostering a more complete learning experience. It therefore facilitates better decision-making in an increasingly turbulent and uncertain environment. / Thesis (PhD)--University of Pretoria, 2010. / Agricultural Economics, Extension and Rural Development / unrestricted
4

THE EFFECTS OF LONG-TERM WATER TABLE MANIPULATIONS ON PEATLAND EVAPOTRANSPIRATION, SOIL PHYSICAL PROPERTIES, AND MOISTURE STRESS

Moore, Paul 24 September 2014 (has links)
Northern boreal peatlands represent a globally significant carbon pool that is at risk of drying through land-use change and projected future climate change. The current ecohydrological conceptualization of peatland response to persistent water table (WT) drawdown is largely based on short-term manipulation experiments, whereas the long-term response may be mediated by vegetation and microtopography dynamics. The objective of this thesis is to examine the changes in peatland evapotranspiration (ET), soil physical properties, and moisture stress in response to a long-term WT manipulation. The energy balance, hydrology, vegetation, and soil properties were examined at three adjacent peatland sites in the southern sub-boreal region which were subjected to WT manipulations on the order of ±10 cm at two treatment sites (WET and DRY) relative to the reference site (INT) as a result of berm construction in the 1950s.

Sites with an increasing depth to WT were found to have greater microtopographic variation and a greater proportion of the surface covered by raised hummocks. While the total abundance of the major plant functional groups was altered, the species composition and dominant vascular and non-vascular species within microforms were unaltered. Changes in vegetation and microtopography led to differences in albedo, surface roughness, and surface moisture variability. However, total ET was significantly different only at the WET site. Transpiration losses accounted for the majority of ET, and leaf area index (LAI) best explained differences in total ET between sites. Surface moisture availability did not appear to limit moss evaporation: lab results showed similar moisture retention capacity between microforms and sites, and low surface bulk density was shown to be a strong controlling factor. Modelling results further suggested that, despite dry surface conditions, surface moisture availability for evaporation was often not limiting under several different parameterizations of peat hydraulic structure with depth. / Doctor of Philosophy (PhD)
5

A Method for Evaluating and Prioritizing Candidate Intersections for Transit Signal Priority Implementation

Abdy, Zeeshan Raza 08 June 2010 (has links)
Transit agencies seeking to improve transit service delivery are increasingly considering the deployment of transit signal priority (TSP). However, the impact of TSP on transit service and on the general traffic stream is a function of many factors, including intersection geometry, signal timings, traffic demands, TSP strategies and parameters, transit vehicle headways, the times at which transit vehicles arrive at the intersection, and so on. Previous studies have shown that, depending on these factors, the net impact of TSP in terms of vehicle or person delay can be positive or negative. Furthermore, due to financial constraints, transit agencies are often able to deploy TSP at only a portion of the candidate intersections. Consequently, there is a need to estimate the impact of TSP prior to implementation in order to help determine at which intersections TSP should be deployed. Currently, the impacts of TSP are often estimated using microscopic simulation models. However, the application of these models is resource intensive and requires specialized expertise that is often not available in-house to transit agencies. In this thesis, an analytical model is proposed for estimating the delay impacts of green extension and early green (red truncation) TSP strategies. The proposed model is validated against an analytical model reported in the literature and against a microscopic simulation model, followed by a model sensitivity analysis. A software module is developed using the proposed model, and its usefulness is illustrated through application to estimating TSP performance. Finally, a prioritization is conducted on sixteen intersections with different geometric and operational traffic characteristics. The overall results indicate that the proposed model is suitable for estimating both pre-deployment and post-deployment TSP performance. The proposed model is suitable for implementation within a spreadsheet and requires considerably less effort, and less technical expertise, to apply than a typical micro-simulation model, and is therefore a more suitable tool for transit agencies to use for prioritising TSP deployment.
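As a rough illustration of the kind of trade-off such a model quantifies (this is a textbook uniform-delay comparison with hypothetical volumes and timings, not the analytical model developed in the thesis), a green extension shifts delay from the transit approach to the cross street:

```python
def uniform_delay(cycle, green, volume, sat_flow=1800.0):
    """HCM-style uniform delay (s/veh) for one lane group; times in s, volume in veh/h."""
    g = green / cycle                          # effective green ratio
    x = min(volume / (sat_flow * g), 1.0)      # degree of saturation, capped at 1
    return 0.5 * cycle * (1.0 - g) ** 2 / (1.0 - x * g)

cycle, ext = 90.0, 6.0                         # hypothetical cycle length and green extension (s)
main_g, side_g = 45.0, 35.0                    # hypothetical green splits (s)
main_v, side_v = 700.0, 400.0                  # hypothetical demands (veh/h)

for label, mg, sg in [("no TSP", main_g, side_g),
                      ("green extension", main_g + ext, side_g - ext)]:
    print(label,
          round(uniform_delay(cycle, mg, main_v), 1), "s/veh on the transit approach,",
          round(uniform_delay(cycle, sg, side_v), 1), "s/veh on the cross street")
```

A net person-delay comparison would then weight these per-vehicle delays by vehicle volumes and occupancies.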
6

Anisotropy-resolving subgrid-scale modelling using explicit algebraic closures for large eddy simulation

Rasam, Amin January 2014 (has links)
The present thesis deals with the development and performance analysis of anisotropy-resolving models for the small, unresolved scales ("subgrid scales", SGS) in large eddy simulation (LES). The models are characterised by a description of anisotropy by use of explicit algebraic models for both the subgrid-scale (SGS) stress tensor (EASSM) and the SGS scalar flux vector (EASSFM). Extensive analysis of the performance of the explicit algebraic SGS stress model (EASSM) has been performed and comparisons made with the conventional isotropic dynamic eddy viscosity model (DEVM). The studies include LES of plane channel flow at relatively high Reynolds numbers and a wide range of resolutions, and LES of separated flow in a channel with streamwise periodic hill-shaped constrictions (periodic hill flow) at coarse resolutions. The former simulations were carried out with a pseudo-spectral Navier–Stokes solver, while the latter simulations were computed with a second-order, finite-volume based solver for unstructured grids. The LESs of channel flow demonstrate that the EASSM gives a good description of the SGS anisotropy, which in turn gives a high degree of resolution independence, contrary to the behaviour of LES predictions using the DEVM. LESs of periodic hill flow showed that the EASSM also for this case gives significantly better flow predictions than the DEVM. In particular, the reattachment point was much better predicted with the EASSM and reasonably well predicted even at very coarse resolutions, where the DEVM is unable to predict a proper flow separation. The explicit algebraic SGS scalar flux model (EASSFM) is developed to improve LES predictions of complex anisotropic flows with turbulent heat or mass transfer, and can be described as a nonlinear tensor eddy diffusivity model. It was tested in combination with the EASSM for the SGS stresses, and its performance was compared to the conventional dynamic eddy diffusivity model (DEDM) in channel flow with and without system rotation in the wall-normal direction. The EASSM and EASSFM gave predictions of high accuracy for mean velocity and mean scalar fields, as well as stresses and scalar flux components. An extension of the EASSM and EASSFM, based on stochastic differential equations of Langevin type, gave further improvements. In contrast to conventional models, these extended models are able to describe intermittent transfer of energy from the small, unresolved scales to the resolved large ones. The present study shows that the EASSM/EASSFM gives a clear improvement of LES of wall-bounded flows in simple, as well as in complex, geometries in comparison with simpler SGS models. This is also shown to hold for a wide range of resolutions and is particularly accentuated for coarse resolutions. The advantages are also demonstrated both for high-order numerical schemes and for solvers using low-order finite volume methods. The models therefore have a clear potential for more applied computational fluid mechanics. / Explicit algebraic sub-grid scale modelling for large-eddy simulations
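For orientation (a generic contrast, not the specific closure coefficients derived in the thesis): an isotropic eddy-viscosity SGS model ties the SGS stress to the resolved strain rate through a scalar viscosity, whereas an explicit algebraic model expands the SGS stress anisotropy in a tensor basis built from the resolved strain- and rotation-rate tensors:

```latex
% Isotropic eddy-viscosity closure (the family the DEVM belongs to):
\tau_{ij} - \tfrac{1}{3}\delta_{ij}\,\tau_{kk} \approx -2\,\nu_{\mathrm{sgs}}\,\widetilde{S}_{ij}

% Anisotropy-resolving explicit algebraic closure (generic tensor-basis form):
\frac{\tau_{ij}}{\tau_{kk}} - \frac{\delta_{ij}}{3}
   = \sum_{n}\beta_n\,T^{(n)}_{ij}\bigl(\widetilde{S},\widetilde{\Omega}\bigr)
```

with the coefficients beta_n given in closed form rather than through a scalar eddy viscosity, which is what allows the model to represent SGS anisotropy.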
7

Dynamic Hedging: CVaR Minimization and Path-Wise Comparison

Smirnov, Ivan Unknown Date
No description available.
8

Stochastic modelling in molecular biology : a probabilistic analysis of protein polymerisation and telomere shortening / Modélisation stochastique en biologie moléculaire : une analyse probabiliste de la polymérisation des protéines et du raccourcissement des télomères

Eugène, Sarah 30 September 2016 (has links)
This PhD dissertation proposes a stochastic analysis of two questions of molecular biology in which randomness is a key feature of the processes involved: protein polymerisation in neurodegenerative diseases on the one hand, and telomere shortening on the other hand.
Self-assembly of proteins into amyloid aggregates is an important biological phenomenon associated with human diseases such as prion diseases, Alzheimer’s, Huntington’s and Parkinson’s disease, amyloidosis and type-2 diabetes. The kinetics of amyloid assembly show an exponential growth phase preceded by a lag phase of variable duration, as seen in bulk experiments and in experiments that mimic the small volume of the cells concerned. After an introduction to protein polymerisation in chapter I, we investigate in chapter II the origins and the properties of the observed variability in the lag phase of amyloid assembly, which is not accounted for by deterministic nucleation-dependent mechanisms. To tackle this issue, a stochastic minimal model is proposed, simple but capable of describing the characteristics of amyloid growth curves. Two populations of chemical components are considered in this model: monomers and polymerised monomers. Initially only monomers are present; from then on, a monomer can polymerise in two ways: either two monomers collide and combine into two polymerised monomers, or a monomer is polymerised upon encountering an already polymerised monomer. Efficient as it is, this simple model does not fully explain the variability observed in the experiments, and in chapter III we extend it to take into account other relevant mechanisms of the polymerisation process that may affect the fluctuations. In both chapters, asymptotic results involving different time scales are obtained for the corresponding Markov processes, and first- and second-order results for the onset of nucleation are derived from these limit theorems. These results rely on a scaling analysis of a population model and on the proof of a stochastic averaging principle for a model related to an Ehrenfest urn model. In the second part, a stochastic model for telomere shortening is proposed. In eukaryotic cells, chromosomes are shortened with each mitosis because the DNA polymerases are unable to replicate the chromosome down to the very end. To prevent a potentially catastrophic loss of genetic information, chromosomes are equipped at both ends with telomeres, repeated sequences that carry no genetic information. After many rounds of replication, however, the telomeres are progressively eroded to the point where the cell can no longer divide, a blocked state called replicative senescence. The aim of this model is to trace back the initial distribution of telomere lengths from measurements of the time of senescence.
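The two-reaction minimal model described above lends itself to a direct Gillespie simulation; the sketch below (with purely hypothetical rate constants and copy numbers, not the parameters or scalings analysed in the thesis) shows how a random lag time emerges from repeated runs:

```python
import random

def gillespie_lag(n_monomers=1000, k1=1e-6, k2=1e-3, threshold=0.1, seed=None):
    """Direct-method SSA for the two-reaction scheme M + M -> 2P, M + P -> 2P.
    Returns the (random) time at which the polymerised fraction first exceeds
    `threshold`. All rates and copy numbers are illustrative only."""
    rng = random.Random(seed)
    m, p, t = n_monomers, 0, 0.0
    while m > 0:
        a1 = k1 * m * (m - 1) / 2.0   # propensity of spontaneous nucleation M + M -> 2P
        a2 = k2 * m * p               # propensity of autocatalytic growth  M + P -> 2P
        a0 = a1 + a2
        if a0 == 0.0:
            break                     # no reaction possible
        t += rng.expovariate(a0)      # exponential waiting time to the next reaction
        if rng.random() * a0 < a1:
            m, p = m - 2, p + 2
        else:
            m, p = m - 1, p + 1
        if p >= threshold * n_monomers:
            return t
    return t

# The lag-time variability discussed in the abstract shows up as spread across runs:
lags = [gillespie_lag(seed=s) for s in range(20)]
print(min(lags), max(lags))
```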
9

Face recognition using Hidden Markov Models

Samaria, Ferdinando Silvestro January 1995 (has links)
This dissertation introduces work on face recognition using a novel technique based on Hidden Markov Models (HMMs). Through the integration of a priori structural knowledge with statistical information, HMMs can be used successfully to encode face features. The results reported are obtained using a database of images of 40 subjects, with 5 training images and 5 test images for each. It is shown how standard one-dimensional HMMs in the shape of top-bottom models can be parameterised, yielding successful recognition rates of up to around 85%. The insights gained from top-bottom models are extended to pseudo two-dimensional HMMs, which offer a better and more flexible model that describes some of the two-dimensional dependencies missed by the standard one-dimensional model. It is shown how pseudo two-dimensional HMMs can be implemented, yielding successful recognition rates of up to around 95%. The performance of the HMMs is compared with the Eigenface approach, and various domain and resolution experiments are also carried out. Finally, the performance of the HMM is evaluated in a fully automated system, where database images are cropped automatically.
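A minimal sketch of the top-bottom idea, assuming the third-party hmmlearn package rather than the dissertation's own implementation: each face image is sliced into horizontal strips read top to bottom, one Gaussian HMM is trained per subject, and a test image is assigned to the subject whose model gives the highest log-likelihood. The strip height, state count and the use of an unconstrained (rather than strictly top-bottom) topology are simplifying assumptions.

```python
import numpy as np
from hmmlearn import hmm  # third-party HMM library, used here only for illustration

def strip_sequence(image, strip_height=8):
    """Turn a 2-D grayscale face image into a top-to-bottom sequence of flattened strips."""
    h, _ = image.shape
    return np.vstack([image[r:r + strip_height].ravel()
                      for r in range(0, h - strip_height + 1, strip_height)])

def train_subject_model(train_images, n_states=5, seed=0):
    """Fit one Gaussian HMM per subject on its concatenated strip sequences."""
    seqs = [strip_sequence(img) for img in train_images]
    X, lengths = np.vstack(seqs), [len(s) for s in seqs]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=seed)
    model.fit(X, lengths)
    return model

def classify(image, models):
    """Return the subject whose HMM assigns the highest log-likelihood to the image."""
    obs = strip_sequence(image)
    return max(models, key=lambda subject: models[subject].score(obs))

# Hypothetical usage: training_set maps subject ids to lists of 2-D numpy arrays.
# models = {sid: train_subject_model(imgs) for sid, imgs in training_set.items()}
# predicted = classify(test_image, models)
```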
