11

Structural and fluid analysis for large scale PEPA models, with applications to content adaptation systems

Ding, Jie January 2010 (has links)
The stochastic process algebra PEPA is a powerful modelling formalism for concurrent systems, which has enjoyed considerable success over the last decade. Such modelling can help designers by allowing aspects of a system which are not readily tested, such as protocol validity and performance, to be analysed before a system is deployed. However, model construction and analysis can be challenged by the size and complexity of large scale systems, which consist of large numbers of components and thus give rise to state-space explosion problems. Both the structural and the quantitative analysis of large scale PEPA models suffer from this problem, which has limited wider application of the PEPA language. This thesis focuses on developing PEPA to overcome the state-space explosion problem and make it suitable for validating and evaluating large scale computer and communications systems, in particular a content adaptation framework proposed by the Mobile VCE. In this thesis, a new representation scheme for PEPA is proposed to numerically capture the structural and timing information in a model. Through this numerical representation, we have found that there is a Place/Transition structure underlying each PEPA model. Based on this structure and the theory developed for Petri nets, some important techniques for the structural analysis of PEPA have been given. These techniques do not suffer from the state-space explosion problem. They include a new method for deriving and storing the state space and an approach to finding invariants which can be used to reason qualitatively about systems. In particular, a novel deadlock-checking algorithm has been proposed to avoid the state-space explosion problem; it can not only carry out deadlock-checking efficiently for a particular system but can also tell when and how the system structure leads to deadlocks. In order to avoid the state-space explosion problem encountered in the quantitative analysis of a large scale PEPA model, a fluid approximation approach has recently been proposed, which results in a set of ordinary differential equations (ODEs) approximating the underlying CTMC. This thesis presents an improved mapping from PEPA to ODEs based on the numerical representation scheme, which extends the class of PEPA models that can be subjected to fluid approximation. Furthermore, we have established the fundamental characteristics of the derived ODEs, such as the existence, uniqueness, boundedness and nonnegativeness of the solution. The convergence of the solution as time tends to infinity has been proved for several classes of PEPA models under some mild conditions. For general PEPA models, convergence is proved under a particular condition, which turns out to relate to some famous constants of Markov chains such as the spectral gap and the Log-Sobolev constant. This thesis has established the consistency between the fluid approximation and the underlying CTMCs for PEPA, i.e. the limit of the solution is consistent with the equilibrium probability distribution corresponding to a family of underlying density dependent CTMCs. These developments and investigations for PEPA have been applied to evaluate, both qualitatively and quantitatively, the large scale content adaptation system proposed by the Mobile VCE. These analyses provide an assessment of the current design and should guide the development of the system and contribute towards efficient working patterns and system optimisation.
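To make the fluid-approximation step concrete, here is a minimal sketch of the general idea for a hypothetical client/server model — not the thesis's improved mapping: component counts become continuous state variables, and a shared action flows at the rate of the slower side of the cooperation. All component names and rates are illustrative assumptions.

```python
from scipy.integrate import solve_ivp

r_think, r_serve, r_log = 1.0, 4.0, 3.0   # hypothetical action rates

def fluid_odes(t, y):
    c_think, c_req, s_idle, s_log = y
    serve = r_serve * min(c_req, s_idle)  # shared action: slower side limits flow
    return [serve - r_think * c_think,    # clients resume thinking after service
            r_think * c_think - serve,    # clients issue requests and wait
            r_log * s_log - serve,        # servers return to idle after logging
            serve - r_log * s_log]        # servers log each completed job

# 1000 clients and 10 servers: an enormous explicit state space, yet only
# four ODEs in the fluid limit -- this is how state-space explosion is avoided.
sol = solve_ivp(fluid_odes, (0.0, 20.0), [1000.0, 0.0, 10.0, 0.0])
print(sol.y[:, -1])                       # approximate populations at t = 20
```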
12

Modelling a Moore-Spiegel Electronic Circuit : the imperfect model scenario

Machete, R. L. January 2007 (has links)
The goal of this thesis is to investigate model imperfection in the context of forecasting. We focus on an electronic circuit built in a laboratory and then enclosed to reduce environmental effects. The non-dimensionalised model equations, obtained by applying Kirchhoff’s current and voltage laws, are the Moore-Spiegel equations [47], but they exhibit a large disparity with the circuit: at the parameter values used in the circuit they yield a periodic trajectory, whilst the circuit exhibits chaotic behaviour. Therefore, alternative models for the circuit are sought. The models we consider are local and global prediction models constructed from data. We acknowledge that all our models have errors and then seek to quantify how these errors are distributed across the circuit attractor. To this end, q-pling times of initial uncertainties are computed for the various models. A q-pling time is the time for an initial uncertainty to increase by a factor of q [67], where q is a real number. Whereas it is expected that different models should have different q-pling time distributions, it is found that the diversity in our models can be increased by constructing them in different coordinate spaces. To forecast the future dynamics of the circuit using any of the models, we make probabilistic forecasts [8]. The question of how to choose the spread of the initial ensemble is addressed by the use of skill scores [8, 9]. Finally, the diversity in our models is exploited by combining probabilistic forecasts from them so as to minimise some skill score. It is found that the skill of individually not-so-good models can be increased by combining them as discussed in this thesis.
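The q-pling diagnostic itself is simple to sketch as a twin experiment: integrate a fiducial and a slightly perturbed trajectory and record when their separation first grows by a factor of q. The Lorenz system below is purely an assumed stand-in for the circuit models considered in the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, y, s=10.0, r=28.0, b=8.0 / 3.0):   # placeholder chaotic system
    x, v, z = y
    return [s * (v - x), x * (r - z) - v, x * v - b * z]

def q_pling_time(x0, q=2.0, eps=1e-8, t_max=50.0):
    rng = np.random.default_rng(0)
    d0 = eps * rng.standard_normal(3)            # tiny random initial uncertainty
    base = solve_ivp(lorenz, (0.0, t_max), x0, max_step=0.01)
    twin = solve_ivp(lorenz, (0.0, t_max), x0 + d0, t_eval=base.t, max_step=0.01)
    sep = np.linalg.norm(twin.y - base.y, axis=0)
    grown = np.nonzero(sep >= q * sep[0])[0]     # first index where error q-ples
    return base.t[grown[0]] if grown.size else np.inf

print(q_pling_time(np.array([1.0, 1.0, 20.0]), q=2.0))  # doubling time here
```

Repeating this over many initial conditions on the attractor yields the q-pling time distributions whose spread is compared across models.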
13

VECTOR QUANTIZATION USING ODE BASED NEURAL NETWORK WITH VARYING VIGILANCE PARAMETER

Khudhair, Ali Dheyaa 01 May 2012 (has links)
The importance of vector quantization has been increasing: it is becoming a vital element in the classification and clustering of many types of information, supporting advances in machine learning and decision making. However, the techniques that implement vector quantization have always fallen short in some respect. Many researchers have turned toward the idea of a vector quantization mechanism that is fast and can classify data as it is rapidly generated from some source, and most such mechanisms depend on a specific style of neural network; this research is one of those attempts. One dilemma this technology faces is the compromise that must be made between the accuracy of the results and the speed of the classification or quantization process. Moreover, the complexity of the suggested algorithms makes them very hard to implement and realize in hardware that can serve as a fast online classifier able to keep up with the speed at which information is presented to the system; examples of such information sources are high speed processors and computer network intrusion detection systems. This research focuses on creating a vector quantizer using neural networks. The neural network used in this study is a novel one with a unique feature: it is based solely on a set of ordinary differential equations. The input data are injected into those equations, and classification is based on finding the equilibrium points of the system in the presence of those input patterns. The elimination of conditional statements in this neural network means that the implementation and execution of the classification process have one single path that can accommodate any value. A single execution path makes the algorithm easier to analyse and opens the possibility of realizing it as a pure analog circuit whose operating speed can match the speed of incoming information and classify the data in real time. The details of this dynamical system are provided in this research, along with the shortcomings we faced and how we overcame them. A drastic change in the way of looking at the speed versus accuracy compromise is also presented, aiming toward a technique that can produce accurate results at high speed.
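The central idea — classification by letting an ODE system relax to an equilibrium selected by the input — can be sketched with a standard winner-take-all system. The Lotka-Volterra dynamics below is an assumed stand-in, not the thesis's actual network; note that the dynamics contain no conditional statements.

```python
import numpy as np
from scipy.integrate import solve_ivp

codebook = np.array([[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]])  # hypothetical codewords

def classify(pattern, t_end=30.0):
    b = codebook @ pattern                  # similarity of input to each codeword
    def wta(t, x):                          # winner-take-all competitive dynamics
        return x * (b - x.sum())
    x0 = np.full(len(b), 0.1)               # small uniform initial activity
    sol = solve_ivp(wta, (0.0, t_end), x0)
    return int(np.argmax(sol.y[:, -1]))     # the surviving unit labels the input

print(classify(np.array([0.9, 0.1])))       # -> 0, the nearest codeword
```

Because the flow is branch-free, the same single execution path handles every input, which is what makes an analog-circuit realization plausible.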
14

Parameter Estimation in Biological Cell Cycle Models Using Deterministic Optimization

Zwolak, Jason W. 28 February 2002 (has links)
Cell cycle models used in biology can be very complex. These models have parameters with initially unknown values. The values of the parameters vastly affect the accuracy of the models in representing real biological cells. Typically, people search for the best parameters for these models using computers only as tools to run simulations. In this thesis, methods and results are described for a computer program that searches for parameters for a series of related models using well tested algorithms. The code for this program uses ODRPACK for parameter estimation and LSODAR to solve the differential equations that comprise the model. / Master of Science
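SciPy ships wrappers for both of these Fortran codes, so the pipeline can be sketched directly: scipy.odr wraps ODRPACK, and scipy.integrate.odeint drives LSODA (LSODAR's ODEPACK sibling, without root-finding). The one-equation decay "model" below is a toy stand-in for the cell cycle equations, with k the parameter being estimated.

```python
import numpy as np
from scipy import odr
from scipy.integrate import odeint

def simulate(k, times):
    # toy one-parameter ODE in place of the full cell cycle model
    return odeint(lambda y, t: -k[0] * y, 1.0, times).ravel()

rng = np.random.default_rng(0)
t_data = np.linspace(0.0, 5.0, 20)
y_data = np.exp(-0.8 * t_data) + 0.02 * rng.standard_normal(20)  # synthetic

model = odr.Model(lambda k, t: simulate(k, t))      # ODRPACK via scipy.odr
job = odr.ODR(odr.Data(t_data, y_data), model, beta0=[0.5])
job.set_job(fit_type=2)   # ordinary least squares: keeps the time grid fixed,
fit = job.run()           # so odeint always sees monotonically increasing times
print(fit.beta)           # estimated k; should land near the true value 0.8
```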
15

A Comparison and Catalog of Intrinsic Tumor Growth Models

Sarapata, Elizabeth A. 01 May 2013 (has links)
Determining the dynamics and parameter values that drive tumor growth is of great interest to mathematical modelers, experimentalists and practitioners alike. We provide a basis on which to estimate the growth dynamics of ten different tumors by fitting growth parameters to at least five sets of published experimental data per type of tumor. These time-series tumor growth data are also used to determine which of the most common tumor growth models (exponential, power law, logistic, Gompertz, or von Bertalanffy) provides the best fit for each type of tumor. In order to compute the best-fit parameters, we implemented a hybrid local-global least squares minimization algorithm based on a combination of Nelder-Mead simplex direct search and Markov chain Monte Carlo methods.
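For reference, the five candidate growth laws are standard ODE forms, and the local stage of such a fit can be sketched with a plain Nelder-Mead least-squares loop. This is a simplified stand-in for the hybrid local-global algorithm (which adds a Markov chain Monte Carlo global stage); the data and starting parameters below are hypothetical.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

growth_laws = {  # dV/dt for tumor volume V; V clipped for numerical safety
    "exponential":     lambda V, t, a, b: a * V,
    "power law":       lambda V, t, a, b: a * np.maximum(V, 1e-12) ** b,
    "logistic":        lambda V, t, a, b: a * V * (1.0 - V / b),
    "gompertz":        lambda V, t, a, b: a * V * np.log(b / np.maximum(V, 1e-12)),
    "von bertalanffy": lambda V, t, a, b: a * np.maximum(V, 1e-12) ** (2 / 3) - b * V,
}

t_data = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])        # hypothetical
V_data = np.array([0.10, 0.35, 0.90, 1.60, 2.10, 2.30])   # tumor volumes

def sse(params, law):
    a, b, V0 = params
    if a <= 0 or b <= 0 or V0 <= 0:       # keep rates/capacities positive
        return 1e9
    V_model = odeint(growth_laws[law], V0, t_data, args=(a, b)).ravel()
    return float(np.sum((V_model - V_data) ** 2))

for law in growth_laws:                   # best-fitting law has lowest SSE
    fit = minimize(sse, x0=[0.5, 3.0, 0.1], args=(law,), method="Nelder-Mead")
    print(f"{law:15s}  SSE = {fit.fun:.4f}")
```

Comparing the minimized sum of squared errors across the five laws, per tumor type, is what singles out the best-fitting model.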
16

A Mathematical Framework on Machine Learning: Theory and Application

Shi, Bin 01 November 2018 (has links)
The dissertation addresses the research topics of machine learning outlined below. We develop the theory of traditional first-order algorithms from convex optimization and provide new insights into nonconvex objective functions from machine learning. Based on this analysis, we design and develop new algorithms to overcome the difficulty of nonconvex objectives and to accelerate convergence to the desired result. In this thesis, we answer two questions: (1) How do we design a step size for gradient descent with random initialization? (2) Can we accelerate the current convex optimization algorithms and extend them to nonconvex objectives? For application, we apply the optimization algorithms to sparse subspace clustering. A new algorithm, CoCoSSC, is proposed to improve the current sample complexity in the presence of noise and missing entries. Gradient-based optimization methods have been increasingly modeled and interpreted by ordinary differential equations (ODEs). Existing ODEs in the literature are, however, inadequate to distinguish between two fundamentally different methods: Nesterov's accelerated gradient method for strongly convex functions (NAG-SC) and Polyak's heavy-ball method. In this paper, we derive high-resolution ODEs as more accurate surrogates for the two methods, as well as for Nesterov's accelerated gradient method for general convex functions (NAG-C). These novel ODEs can be integrated into a general framework that allows for a fine-grained analysis of the discrete optimization algorithms by translating properties of the amenable ODEs into those of their discrete counterparts. As a first application of this framework, we identify the effect of a term referred to as gradient correction, present in NAG-SC but not in the heavy-ball method, shedding deep insight into why the former achieves acceleration while the latter does not. Moreover, in this high-resolution ODE framework, NAG-C is shown to boost the squared gradient norm minimization at the inverse cubic rate, which is the sharpest known rate concerning NAG-C itself. Finally, by modifying the high-resolution ODE of NAG-C, we obtain a family of new optimization methods that are shown to maintain the accelerated convergence rates of NAG-C for minimizing convex functions.
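Written out, the distinction the abstract draws is a single Hessian-driven term. The display below is a sketch based on the high-resolution ODE literature this work builds on (with step size s and f assumed μ-strongly convex); the exact coefficients should be checked against the thesis itself.

```latex
\begin{align*}
\text{heavy-ball:}\quad &\ddot{X}(t) + 2\sqrt{\mu}\,\dot{X}(t)
    + (1+\sqrt{\mu s})\,\nabla f(X(t)) = 0,\\
\text{NAG-SC:}\quad &\ddot{X}(t) + 2\sqrt{\mu}\,\dot{X}(t)
    + \sqrt{s}\,\nabla^{2} f(X(t))\,\dot{X}(t)
    + (1+\sqrt{\mu s})\,\nabla f(X(t)) = 0.
\end{align*}
```

The term \(\sqrt{s}\,\nabla^{2} f(X)\dot{X}\) is the gradient correction: it appears only in NAG-SC, and as s → 0 both equations collapse to the same low-resolution ODE, which is why earlier ODE models could not tell the two methods apart.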
17

Computing with functions in two dimensions

Townsend, Alex January 2014 (has links)
New numerical methods are proposed for computing with smooth scalar and vector valued functions of two variables defined on rectangular domains. Functions are approximated to essentially machine precision by an iterative variant of Gaussian elimination that constructs near-optimal low rank approximations. Operations such as integration, differentiation, and function evaluation are particularly efficient. Explicit convergence rates are shown for the singular values of differentiable and separately analytic functions, and examples are given to demonstrate some paradoxical features of low rank approximation theory. Analogues of QR, LU, and Cholesky factorizations are introduced for matrices that are continuous in one or both directions, deriving a continuous linear algebra. New notions of triangular structures are proposed and the convergence of the infinite series associated with these factorizations is proved under certain smoothness assumptions. A robust numerical bivariate rootfinder is developed for computing the common zeros of two smooth functions via a resultant method. Using several specialized techniques the algorithm can accurately find the simple common zeros of two functions with polynomial approximants of high degree (≥ 1,000). Lastly, low rank ideas are extended to linear partial differential equations (PDEs) with variable coefficients defined on rectangles. When these ideas are used in conjunction with a new one-dimensional spectral method the resulting solver is spectrally accurate and efficient, requiring O(n²) operations for rank 1 partial differential operators, O(n³) for rank 2, and O(n⁴) for rank ≥ 3 to compute an n × n matrix of bivariate Chebyshev expansion coefficients for the PDE solution. The algorithms in this thesis are realized in a software package called Chebfun2, which is an integrated two-dimensional component of Chebfun.
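The constructor's iterative Gaussian elimination admits a compact sketch: repeatedly locate the largest remaining entry of the residual on a sample grid and subtract the rank-1 cross through it. Chebfun2 does this adaptively; the fixed-grid version below, with an assumed example function, only illustrates the pivoting step.

```python
import numpy as np

def low_rank_approx(f, n=64, tol=1e-13, max_rank=50):
    x = np.cos(np.pi * np.arange(n) / (n - 1))       # Chebyshev points on [-1, 1]
    X, Y = np.meshgrid(x, x)
    R = f(X, Y)                                      # residual, initially f itself
    cols, rows = [], []
    for _ in range(max_rank):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        if abs(R[i, j]) < tol:                       # complete pivoting
            break
        cols.append(R[:, j] / R[i, j])               # GE step: rank-1 update
        rows.append(R[i, :].copy())
        R = R - np.outer(cols[-1], rows[-1])         # zeroes pivot row and column
    return np.array(cols).T, np.array(rows)          # grid samples of f ≈ C @ Rw

C, Rw = low_rank_approx(lambda x, y: np.cos(10 * x * y) + y)
print(C.shape[1], "rank-1 terms reach ~machine precision on the grid")
```

Each pass zeroes one row and one column of the residual, so a smooth function is captured by far fewer terms than the grid size — the near-optimality the abstract refers to.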
18

A Mathematical Model of the Effect of Aspirin on Blood Clotting

Johng, Breeana J 01 January 2015 (has links)
In this paper, we provide a mathematical model of the effect of aspirin on blood clotting. The model tracks the enzyme prostaglandin H synthase and an important blood clotting factor, thromboxane A2, in the form of thromboxane B2. Through model analysis, we determine conditions under which the reactions of prostaglandin H synthase are self-sustaining. Lastly, through numerical simulations, we demonstrate that the model accurately captures the steady-state chemical concentrations of interest in blood, both with and without aspirin treatment.
19

Multiple-valued functions in the sense of F. J. Almgren

Goblet, Jordan 19 June 2008 (has links)
A multiple-valued function is a "function" that assumes two or more distinct values in its range for at least one point in its domain. While these "functions" are not functions in the normal sense of being single-valued, the usage is so common that there is no way to dislodge it. This thesis is devoted to a particular class of multiple-valued functions: Q-valued functions. A Q-valued function is essentially a rule assigning Q unordered and not necessarily distinct points of R^n to each element of R^m. This object is one of the key ingredients of Almgren's 1700-page proof that the singular set of an m-dimensional mass minimizing integral current in R^n has dimension at most m-2. We start by developing a decomposition theory and show, for instance, when a continuous Q-valued function can or cannot be seen as Q "glued" continuous classical functions. Then the decomposition theory is used to prove intrinsically a Rademacher-type theorem for Lipschitz Q-valued functions. A couple of Lipschitz extension theorems are also obtained for partially defined Lipschitz Q-valued functions. The second part is devoted to a Peano-type result for a particular class of nonconvex-valued differential inclusions. To the best of the author's knowledge this is the first theorem, in the nonconvex case, where the existence of a continuously differentiable solution is proved under a mere continuity assumption on the corresponding multifunction. An application to a particular class of nonlinear differential equations is included. The third part is devoted to the calculus of variations in the multiple-valued framework. We define two different notions of Dirichlet nearly minimizing Q-valued functions, generalizing the Dirichlet energy minimizers studied by Almgren. Hölder regularity is obtained for these nearly minimizers and we give some examples showing that the branching phenomenon can be much worse in this context.
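A classical example, not drawn from the thesis itself, illustrates why the decomposition question is delicate: the complex square root viewed as a 2-valued function, written here in the usual bracket notation for Q-points.

```latex
f:\mathbb{C}\to\mathcal{A}_2(\mathbb{R}^2),\qquad
f(z)=\llbracket w_1(z)\rrbracket+\llbracket w_2(z)\rrbracket,
\quad\text{where } w_1(z),\ w_2(z) \text{ are the two square roots of } z.
```

This f is continuous as a Q-valued function (indeed Hölder of exponent 1/2 near the origin), yet it admits no decomposition into two continuous single-valued branches on any neighbourhood of 0, because a loop around the origin interchanges the two values.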
20

Convergence rates of adaptive algorithms for deterministic and stochastic differential equations

Moon, Kyoung-Sook January 2001 (has links)
No description available.
