About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

1. Bounds and constructions of perfect hash families
Fazey, Elizabeth Claire, January 2003 (has links)
No description available.

2. Non-centered parameterisations for data augmentation and hierarchical models
Papaspiliopoulos, Omiros, January 2003 (has links)
No description available.

3. Nelder-Mead optimization under linear constraints
Brea, Ebert, January 2004 (has links)
No description available.

4. Time integration algorithms for the steady states of dissipative non-linear dynamic systems
Bornemann, Paul Burkhard, January 2004 (has links)
No description available.

5. Higher-order accuracy in implicit, conservative, single-step time-integration schemes for non-linear structural dynamics
Graham, Edward, January 2004 (has links)
No description available.

6. Aspects of learning mixtures
Wang, Yanzhong, January 2004 (has links)
No description available.

7. Adaptive Krylov subspace methods for model reduction of large scale systems
Frangos, Michalis, January 2008 (has links)
The ultimate goal of every theory, according to Albert Einstein, is as follows: "It is the grand object of all theory to make these irreducible elements (axioms/assumptions) as simple and as few in number as possible, without having to renounce the adequate representation of any empirical content whatever" (Albert Einstein, 1954). The main goal of this dissertation falls within the above definition: it is focused on model reduction of large-scale linear systems, deriving small and accurate linear systems through efficient Krylov subspace projection techniques. The rational Arnoldi algorithm, which belongs to the class of Krylov subspace projection methods, is applied to derive reduced order models that are rational interpolating approximations of the original system. The algorithm is well known in the literature and is used extensively for the approximation of large-scale linear systems because of its numerical stability and efficiency; however, there are some outstanding issues affecting its performance, and these are investigated in this thesis. The first issue is the development of a set of simple equations, the Arnoldi-like equations, that describe the rational Arnoldi algorithm. These equations have the same form as those of the well-known standard Arnoldi algorithm, on which many model reduction techniques are based. The reduced order models produced by the rational Arnoldi algorithm interpolate the original system at multiple interpolation points, whereas the standard Arnoldi algorithm interpolates the original system around infinity. The second issue is the development of adaptive schemes for selecting the interpolation points, which yield significantly improved approximations without a priori knowledge of the characteristics of the system's transfer function. The information about the interpolation points arises from simple error expressions and error approximations derived from the Arnoldi-like equations. The third issue addressed in this work is the development of a simple, easy-to-understand modified version of the rational Arnoldi algorithm that is suitable for adaptive interpolation. A breakdown analysis and an error analysis, essential for the adaptive schemes, are provided. Based on the modified algorithm, an efficient restart technique is also developed to improve the approximation further while the order of the approximation remains fixed; the improvement comes from updating some of the interpolation points of the approximation. A drawback of rational interpolating methods is that they do not guarantee stability of the reduced order models. The fourth issue addressed in the thesis is therefore the parameterisation of a set of interpolating approximations in terms of a free parameter; as a post-processing step of the rational Arnoldi algorithm, any unstable reduced order model can be stabilised by a proper selection of this parameter. Future research directions are given in the conclusions of the thesis.

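For readers unfamiliar with the underlying technique, here is a minimal sketch of a one-sided rational Krylov (rational Arnoldi style) projection for model reduction of a SISO linear system. The interpolation points are hand-picked rather than selected adaptively, and the toy system, shift values, and function names are illustrative assumptions, not material from the thesis.

```python
# A minimal one-sided rational Krylov (rational Arnoldi style) projection for
# model order reduction of a SISO LTI system  x' = A x + b u,  y = c^T x.
# Illustrative sketch only: the shifts are chosen by hand, not adaptively.
import numpy as np
from scipy.linalg import lu_factor, lu_solve, qr

def rational_krylov_basis(A, b, shifts):
    """Orthonormal basis for span{ (s_i I - A)^{-1} b : s_i in shifts }."""
    n = A.shape[0]
    V = np.zeros((n, len(shifts)))
    for j, s in enumerate(shifts):
        lu, piv = lu_factor(s * np.eye(n) - A)   # one linear solve per shift
        V[:, j] = lu_solve((lu, piv), b)
    Q, _ = qr(V, mode='economic')                # orthonormalise the basis
    return Q

def reduce_model(A, b, c, shifts):
    """Galerkin projection onto the rational Krylov subspace."""
    V = rational_krylov_basis(A, b, shifts)
    return V.T @ A @ V, V.T @ b, V.T @ c

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 500
    A = -np.diag(np.linspace(1.0, 100.0, n)) + 0.01 * rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    c = rng.standard_normal(n)
    shifts = [1.0, 10.0, 50.0]                   # hand-picked interpolation points
    Ar, br, cr = reduce_model(A, b, c, shifts)

    # Compare transfer functions H(s) = c^T (sI - A)^{-1} b at a test frequency.
    s = 5.0
    H_full = c @ np.linalg.solve(s * np.eye(n) - A, b)
    H_red = cr @ np.linalg.solve(s * np.eye(len(shifts)) - Ar, br)
    print(H_full, H_red)
```

A one-sided Galerkin projection of this kind interpolates the transfer function at each shift; two-sided projections, restarts, and adaptive updates of the interpolation points, as investigated in the thesis, improve on this basic scheme.
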
8. The Signature in Numerical Algorithms
Litterer, Christian, January 2008 (has links)
Particle methods are widely used because they can provide accurate descriptions of evolving measures. Recently it has become clear that by stepping outside the Monte Carlo paradigm these methods can be of higher order, with effective and transparent error bounds. A weakness of particle methods (particularly in the higher order case) is the tendency for the number of particles to explode if the process is iterated and accuracy is preserved.

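As a toy illustration of the particle-explosion issue described above, the following sketch evolves a weighted particle cloud in which every step multiplies the particle count, and then compresses the cloud back to a fixed size. The branching rule and the plain multinomial resampling used for compression are illustrative stand-ins, not the signature-based techniques studied in the thesis.

```python
# Toy illustration of particle explosion: at each step a weighted particle cloud
# (an empirical measure on R) is pushed through a transition that replaces every
# particle by several weighted children, so the support grows geometrically
# unless the cloud is compressed back to a fixed size.  Multinomial resampling
# is used here only as a simple stand-in for higher-order recombination.
import numpy as np

rng = np.random.default_rng(1)

def branch(points, weights, n_children=3, step=0.1):
    """One evolution step: each particle spawns n_children displaced copies."""
    kids = points[:, None] + step * rng.standard_normal((len(points), n_children))
    kid_w = np.repeat(weights / n_children, n_children)
    return kids.ravel(), kid_w

def resample(points, weights, n_keep):
    """Compress the cloud back to n_keep equally weighted particles."""
    idx = rng.choice(len(points), size=n_keep, p=weights / weights.sum())
    return points[idx], np.full(n_keep, 1.0 / n_keep)

points = np.zeros(100)
weights = np.full(100, 1.0 / 100)
for t in range(5):
    points, weights = branch(points, weights)
    print(f"step {t}: {len(points)} particles before compression")
    points, weights = resample(points, weights, n_keep=100)

# Without the compression step the cloud would hold 100 * 3**5 = 24300 particles
# after five steps; with it the cost per step stays fixed, at the price of extra
# Monte Carlo error that recombination methods aim to avoid.
```
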
9. Overfitting in estimation of distribution algorithms (EDAs)
Wu, Hao, January 2009 (has links)
Estimation of Distribution Algorithms (EDAs) are a class of evolutionary algorithms that use machine learning techniques to solve optimization problems. They generally build probabilistic models from the good solutions found so far and use these models to guide the further search. There is a significant problem within EDAs: when the sample size at each generation is not big enough, EDAs fail to find the global optimum no matter how long they are run. To understand why, we note that one of the most important phenomena in machine learning from data is overfitting, in which the learning algorithm adapts so well to the given data that noise or particularities of the specific sample are also encoded by the learned model. This results in reduced performance when the task is generalisation to unseen data, and it produces an overly complex model that may consume unnecessary learning time and computational resources.

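A minimal univariate EDA (UMDA-style) sketch on the OneMax toy problem may help make the mechanism concrete. The problem, parameter values, and clipping margin are illustrative assumptions, and shrinking pop_size and n_select is one way to explore the small-sample failure mode described above.

```python
# Minimal univariate EDA (UMDA-style) on the OneMax problem: estimate a
# per-bit Bernoulli model from the best solutions of each generation, then
# sample the next population from it.  Illustrative only; with a small
# population the probability vector overfits the selected sample.
import numpy as np

rng = np.random.default_rng(0)

def onemax(x):
    return x.sum(axis=1)                    # number of ones; optimum = n_bits

def umda(n_bits=50, pop_size=200, n_select=100, generations=60):
    p = np.full(n_bits, 0.5)                # initial per-bit probabilities
    for _ in range(generations):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        fitness = onemax(pop)
        elite = pop[np.argsort(fitness)[-n_select:]]   # truncation selection
        p = elite.mean(axis=0)                          # refit the model
        p = np.clip(p, 0.02, 0.98)          # margins keep bits from fixing forever
    return p, fitness.max()

p, best = umda()
print("best fitness:", best)                # close to 50 with a large population
p_small, best_small = umda(pop_size=20, n_select=10)
print("best fitness with a small population:", best_small)
```
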
10. Greedy algorithms for random regular graphs
Beis, Michail, January 2006 (has links)
No description available.
