11

Bayesian Semi-parametric Factor Models

Bhattacharya, Anirban January 2012 (has links)
Identifying a lower-dimensional latent space for representing high-dimensional observations is of significant importance in numerous biomedical and machine learning applications. In many such applications, it is now routine to collect data where the dimensionality of the outcomes is comparable to, or even larger than, the number of available observations. Motivated in particular by the problem of predicting the risk of impending diseases from massive gene expression and single nucleotide polymorphism profiles, this dissertation focuses on building parsimonious models and computational schemes for high-dimensional continuous and unordered categorical data, while also studying theoretical properties of the proposed methods. Sparse factor modeling is fast becoming a standard tool for parsimonious modeling of such massive-dimensional data, and this thesis is specifically directed towards methodological and theoretical developments in Bayesian sparse factor models.

The first three chapters of the thesis study sparse factor models for high-dimensional continuous data. A class of shrinkage priors on factor loadings with attractive computational properties is introduced, and its operating characteristics are explored through a number of simulated and real data examples. In spite of the methodological advances over the past decade, theoretical justifications for high-dimensional factor models are scarce in the Bayesian literature. Part of the dissertation therefore focuses on estimating high-dimensional covariance matrices using a factor model and studying the rate of posterior contraction as both the sample size and the dimensionality increase.

To relax the usual assumption of a linear relationship between the latent and observed variables in a standard factor model, extensions to a non-linear latent factor model are also considered.

Although Gaussian latent factor models are routinely used for modeling dependence in continuous, binary, and ordered categorical data, they lead to challenging computation and complex modeling structures for unordered categorical variables. As an alternative, a novel class of simplex factor models for massive-dimensional and enormously sparse contingency table data is proposed in the second part of the thesis. An efficient MCMC scheme is developed for posterior computation, and the methods are applied to modeling dependence in nucleotide sequences and to prediction from high-dimensional categorical features. Building on a connection between the proposed model and sparse tensor decompositions, new classes of nonparametric Bayesian models are proposed for testing associations between a massive-dimensional vector of genetic markers and a phenotypical outcome. / Dissertation
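As a concrete illustration of the kind of shrinkage prior this literature uses, here is a minimal sketch of the multiplicative gamma process prior on factor loadings (Bhattacharya and Dunson, 2011). Whether this exact prior is the one developed in the dissertation is an assumption, and the hyperparameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 1000, 15                  # observed dimension, number of factors
a1, a2, nu = 2.0, 3.0, 3.0       # illustrative hyperparameters

# Global column-wise shrinkage: precisions tau_h grow multiplicatively.
delta = np.concatenate(([rng.gamma(a1)], rng.gamma(a2, size=k - 1)))
tau = np.cumprod(delta)
# Local element-wise precisions phi_{jh} ~ Ga(nu/2, rate nu/2).
phi = rng.gamma(nu / 2, 2 / nu, size=(p, k))
# Loadings lambda_{jh} ~ N(0, 1 / (phi_{jh} * tau_h)).
Lam = rng.normal(0.0, 1.0 / np.sqrt(phi * tau), size=(p, k))

# Later columns are shrunk increasingly toward zero:
print(np.abs(Lam).mean(axis=0).round(3))
```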
12

Validation and verification of a third degree optimization method

Levin, Anders, Johannesson, Jörgen January 2004 (has links)
This combined master's thesis in Mathematics and Computer Science deals with a method for finding a local minimum of a unimodal function inside a given interval using a fifth-degree polynomial. The polynomial is created by interpolating the function value and the first and second derivative values at the end-points of the interval. The report derives mathematically that the method converges and then proves that it has a convergence rate of three. Finally, the method is tested against two reference methods to assess its usefulness; for this purpose, some software development methods and test strategies are described. The tests are carried out with six different functions and three different implementations of the method. The conclusions are that it is often better to use one of the reference methods instead of the presented method, even though the presented method has a higher convergence rate, and that the method needs to handle the case where the new approximations always fall on the same side of the interval. The tests also showed that none of the methods were good at producing a correct approximation, so more reliable methods are needed. The report therefore suggests searching for other interpolating polynomials to improve the method, and testing against a method with an even higher convergence rate. Doing so requires other ways of representing numerical values, and it would be interesting to see whether that changes the outcome.
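A minimal sketch of one step of the method as the abstract describes it: fit a fifth-degree polynomial to the function value and first and second derivatives at the interval end-points, then take its interior minimizer as the next approximation. The test function and interval are illustrative; this is not the thesis implementation.

```python
import numpy as np

def quintic_step(f, df, ddf, a, b):
    # Fit p(x) = c0 + c1*x + ... + c5*x^5 to value, first and second
    # derivative at both end-points (6 conditions, 6 coefficients).
    rows, rhs = [], []
    for x in (a, b):
        rows.append([x**i for i in range(6)])
        rows.append([i * x**(i - 1) if i >= 1 else 0.0 for i in range(6)])
        rows.append([i * (i - 1) * x**(i - 2) if i >= 2 else 0.0
                     for i in range(6)])
        rhs.extend([f(x), df(x), ddf(x)])
    c = np.linalg.solve(np.array(rows), np.array(rhs))
    p = np.poly1d(c[::-1])                # poly1d wants highest power first
    # Interior critical points of p; return the one with the smallest value.
    crit = [r.real for r in p.deriv().roots
            if abs(r.imag) < 1e-12 and a < r.real < b]
    return min(crit, key=p) if crit else 0.5 * (a + b)

# cosh is unimodal on [-1, 2] with its minimum at 0; the step should land
# near 0.
print(quintic_step(np.cosh, np.sinh, np.cosh, -1.0, 2.0))
```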
13

Maximum entropy regularization for calibrating a time-dependent volatility function

Hofmann, Bernd, Krämer, Romy 26 August 2004 (has links)
We investigate the applicability of the method of maximum entropy regularization (MER), including convergence and convergence rates of regularized solutions, to the specific inverse problem (SIP) of calibrating a purely time-dependent volatility function. In this context, we extend the results of [16] and [17] in some detail. Due to the explicit structure of the forward operator, which is based on a generalized Black-Scholes formula, the ill-posed character of the nonlinear inverse problem (SIP) can be verified. Numerical case studies illustrate the chances and limitations of MER versus Tikhonov regularization (TR) for smooth solutions and for solutions with a sharp peak.
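For orientation, here is a sketch of the two penalties being compared, in generic notation (x* a positive prior guess, F the forward operator, y^δ the noisy data; the notation is ours, not the paper's):

```latex
\min_{x > 0}\; \|F(x) - y^{\delta}\|^2
  + \alpha \int_0^T x(t)\,\ln\frac{x(t)}{x^*(t)}\,dt \quad \text{(MER)}
\qquad \text{vs.} \qquad
\min_{x}\; \|F(x) - y^{\delta}\|^2 + \alpha \,\|x - x^*\|^2 \quad \text{(TR)}
```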
14

Implementing Efficient iterative 3D Deconvolution for Microscopy / Implementering av effektiv iterativ 3D-avfaltning för mikroskopi

Mehadi, Ahmed Shah January 2009 (has links)
Both Gauss-Seidel iterative 3D deconvolution and Richardson-Lucy-like algorithms are used for their stability and high-quality results in noisy microscopic medical image processing. An approach to determining the differences between these two algorithms is presented in this paper. It is shown that the convergence rate and the quality of both algorithms are influenced by the size of the point spread function (PSF): larger PSFs cause faster convergence, but the effect tapers off for the largest sizes. It is furthermore shown that the relaxation factor and the number of iterations influence the convergence rate of the two algorithms. Increasing the relaxation factor and the number of iterations improves convergence and can reduce the error of the deblurred image. Over-relaxation converges faster than under-relaxation for small numbers of iterations, but a smaller final error can be achieved with under-relaxation. The choice of under- or over-relaxation factor is highly problem-specific and differs from one type of image to another. The influence of boundary conditions on the two algorithms in 3D iterative deconvolution is also discussed. Implementation aspects are covered, and it is concluded that cache memory is vital for a fast implementation of iterative 3D deconvolution. A mix of the two algorithms was developed and compared with the aforementioned Gauss-Seidel and Richardson-Lucy-like algorithms. The experiments indicate that, if the value of the relaxation parameter is optimized, the Richardson-Lucy-like algorithm performs best for 3D iterative deconvolution. / The resolution of images taken with a microscope is limited by diffraction. To get around this, the image is enhanced digitally based on a mathematical model of the physical process. This thesis compares two algorithms for solving the equations, Richardson-Lucy and Gauss-Seidel, and studies the effect of parameters such as the extent of the point spread function and the regularization of the equation solver.
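A minimal sketch of a Richardson-Lucy-type iteration with a relaxation factor, in 1D for brevity (the thesis works in 3D). The exponent form of relaxation is one common variant, assumed here for illustration; the toy signal and PSF are not from the thesis.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=50, relax=1.3):
    observed = np.clip(observed, 0.0, None)  # guard against FFT round-off
    psf_flip = psf[::-1]                     # adjoint of convolution
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        correction = fftconvolve(ratio, psf_flip, mode="same")
        estimate *= correction ** relax      # relax > 1: over-relaxation
    return estimate

# Toy example: blur two spikes with a Gaussian PSF, then deblur.
x = np.zeros(64); x[20] = 1.0; x[40] = 0.5
t = np.arange(-8, 9); psf = np.exp(-t**2 / 4.0); psf /= psf.sum()
y = fftconvolve(x, psf, mode="same")
print(np.round(richardson_lucy(y, psf), 2).max())  # peak partially restored
```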
15

Boundary Summation Equation Preconditioning for Ordinary Differential Equations with Constant Coefficients on Locally Refined Meshes

Guzainuer, Maimaitiyiming January 2012 (has links)
This thesis deals with the numerical solution of ordinary differential equations (ODEs) using finite difference (FD) methods. In particular, boundary summation equation (BSE) preconditioning of FD approximations for ODEs with constant coefficients on locally refined meshes is studied. First, the BSE for FD approximations of ODEs with constant coefficients is derived on a locally refined mesh. Second, the resulting linear system of equations is solved by the iterative method GMRES. The arithmetic complexity and convergence rate of the iterative solution of the BSE formulation are then discussed. Finally, numerical experiments are performed to compare the new approach with the FD approach. The results show that the BSE formulation has low arithmetic complexity and that the convergence rate of the iterative solvers is fast and independent of the number of grid points.
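A minimal sketch of the plain FD baseline that the BSE formulation is compared against: a second-order FD discretization of -u'' = f with homogeneous Dirichlet conditions, solved by GMRES. The BSE preconditioner itself is not reproduced here; the problem and mesh are illustrative and uniform.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

n = 100
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
# Standard three-point stencil for -u'' on a uniform mesh.
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
f = np.pi**2 * np.sin(np.pi * x)       # manufactured so u(x) = sin(pi x)

u, info = gmres(A, f, restart=n)       # full GMRES, no preconditioner
print(info, np.abs(u - np.sin(np.pi * x)).max())  # info == 0 on success
```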
16

Convergence rates of stochastic global optimisation algorithms with backtracking : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Statistics at Massey University

Alexander, D.L.J. January 2004 (has links)
A useful measure of quality of a global optimisation algorithm such as simulated annealing is the length of time it must be run to reach a global optimum within a certain accuracy. Such a performance measure assists in choosing and tuning algorithms. This thesis proposes an approach to obtaining such a measure through successive approximation of a generic stochastic global optimisation algorithm with a sequence of stochastic processes culminating in backtracking adaptive search. The overall approach is to approximate the progress of an optimisation algorithm with that of a model process, backtracking adaptive search. The known convergence rate of the model then provides an estimator of the unknown convergence rate of the original algorithm. Parameters specifying this model are chosen based on observation of the optimisation algorithm. The optimisation algorithm may first be approximated with a time-inhomogeneous Markovian process defined on the problem range. The distribution of the number of iterations to convergence for this averaged range process is shown to be identical with that of the original process. This process is itself approximated by a time-homogeneous Markov process in the range, the asymptotic averaged range process. This approximation is defined for all Markovian optimisation algorithms and a weak condition under which its convergence time closely matches that of the original algorithm is developed. The asymptotic averaged range process is of the same form as backtracking adaptive search, the final stage of approximation. Backtracking adaptive search is an optimisation algorithm which generalises pure adaptive search and hesitant adaptive search. In this thesis the distribution of the number of iterations for which the algorithm runs in order to reach a sufficiently extreme objective function level is derived. Several examples of backtracking adaptive search on finite problems are also presented, including special cases that have received attention in the literature. Computational results of the entire approximation framework are reported for several examples. The method can be applied to any optimisation algorithm to obtain an estimate of the time required to obtain solutions of a certain quality. Directions for further work in order to improve the accuracy of such estimates are also indicated.
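To make the object of study concrete, here is a toy simulation of a backtracking-adaptive-search-like process on a finite problem, estimating the expected number of iterations to reach the global optimum. The fixed improve/hesitate/backtrack probabilities below are illustrative assumptions; in the thesis's model these can depend on the current objective level.

```python
import numpy as np

rng = np.random.default_rng(1)
values = np.sort(rng.uniform(size=100))     # objective levels, best first

def iterations_to_optimum(p_improve=0.6, p_stay=0.3):
    i, steps = len(values) - 1, 0           # start at the worst level
    while i > 0:
        u = rng.uniform()
        if u < p_improve:                   # move to a uniformly better level
            i = rng.integers(0, i)
        elif u < p_improve + p_stay:        # hesitate: stay at current level
            pass
        else:                               # backtrack to a worse-or-equal level
            i = rng.integers(i, len(values))
        steps += 1
    return steps

runs = [iterations_to_optimum() for _ in range(2000)]
print(np.mean(runs))   # estimated expected iterations to the global optimum
```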
18

The Effect of Mobility on Wireless Sensor Networks

Hasir, Ibrahim 08 1900 (has links)
Wireless sensor networks (WSNs) have gained attention in recent years with the proliferation of micro-electro-mechanical systems, which has led to the development of smart sensors. Smart sensors have brought WSNs under the spotlight and have created numerous areas of research, such as energy consumption, convergence, network structures, deployment methods, time delay, and communication protocols. This thesis examines the convergence rates associated with information propagation in such networks. Mobility is an expensive process in terms of the associated energy costs: as mobile sensor nodes move from one location to another, closing old connections and creating new ones incurs significant overhead. Despite these drawbacks, mobility helps a sensor network reach agreement more quickly, and adding a few mobile nodes to an otherwise static network significantly improves the network's ability to reach consensus. This thesis shows the effect of mobility on the convergence rate of wireless sensor networks through eigenvalue analysis, modeling, and simulation.
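A minimal sketch of the kind of eigenvalue analysis referred to: for average consensus on a static network, the convergence rate is governed by the second-smallest eigenvalue of the graph Laplacian (the algebraic connectivity). Adding a long-range link, used here as a crude stand-in for a mobile node, raises it. The ring topology is an illustrative assumption, not the thesis's network.

```python
import numpy as np

def algebraic_connectivity(adj):
    lap = np.diag(adj.sum(axis=1)) - adj      # graph Laplacian
    return np.linalg.eigvalsh(lap)[1]         # second-smallest eigenvalue

n = 20
ring = np.zeros((n, n))
for i in range(n):                            # static sensors on a ring
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1.0

mobile = ring.copy()                          # one extra long-range link as
mobile[0, n // 2] = mobile[n // 2, 0] = 1.0   # a crude stand-in for mobility

# The larger lambda_2, the faster the consensus iteration converges.
print(algebraic_connectivity(ring), algebraic_connectivity(mobile))
```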
19

On gamma kernel function in recursive density estimation

Ma, Xiaoxiao 09 August 2019 (has links)
In this thesis we investigate the convergence rate of gamma kernel estimators in recursive density estimation. Unlike traditional kernels, which are symmetric and of fixed shape, the gamma kernel is supported on the nonnegative half-line and changes shape with the point of estimation. Gamma kernels have been used to address the boundary bias problem that occurs when a symmetric kernel is used to estimate a density supported on [0, ∞). Recursive density estimation is useful when additional data arrive on-line from the population density we want to estimate. We use ideas and results from adaptive kernel estimation to show that the L_2 convergence rate of recursive kernel density estimators using gamma kernels is n^(-4/5).
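A minimal sketch of a recursive density estimator built from gamma kernels (a Chen-type kernel with shape x/b + 1 and scale b, supported on [0, ∞)). The bandwidth schedule b_n ~ n^(-2/5) is an illustrative choice consistent with an n^(-4/5) L_2 rate, not necessarily the one used in the thesis.

```python
import numpy as np
from scipy.stats import gamma

def recursive_gamma_kde(samples, grid):
    est = np.zeros_like(grid)
    for n, x_n in enumerate(samples, start=1):
        b = n ** (-2 / 5)                    # shrinking bandwidth
        # Gamma kernel centred (in shape) at each grid point, evaluated at x_n.
        kern = gamma.pdf(x_n, a=grid / b + 1, scale=b)
        est += (kern - est) / n   # est_n = (1 - 1/n) est_{n-1} + kern / n
    return est

rng = np.random.default_rng(2)
data = rng.exponential(size=500)          # true density e^{-x} on [0, inf)
grid = np.linspace(0.0, 4.0, 9)
print(np.round(recursive_gamma_kde(data, grid), 2))
print(np.round(np.exp(-grid), 2))         # compare with the true density
```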
20

Pusgrupių aproksimacijų tikslumo tyrimai / Investigations of the accuracy of approximations of semigroups

Vilkienė, Monika 02 May 2011 (has links)
In this thesis we investigate the convergence of Euler's and Yosida's approximations of operator semigroups. We obtain asymptotic expansions for Euler's approximations of semigroups with optimal bounds for the remainder terms, and provide various explicit formulas for the coefficients of these expansions. For Yosida approximations of semigroups we obtain two optimal error bounds with optimal constants. We also construct asymptotic expansions for Yosida approximations of semigroups and provide optimal bounds for the remainder terms of these expansions.
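A minimal numerical sketch of the Euler approximation studied here: for a generator A, (I - tA/n)^(-n) converges to e^(tA) as n grows, with error of order 1/n. The 2x2 matrix is an illustrative finite-dimensional stand-in for an operator semigroup generator.

```python
import numpy as np
from scipy.linalg import expm, solve

A = np.array([[-1.0, 1.0], [0.0, -2.0]])   # generator of a stable semigroup
t = 1.0
exact = expm(t * A)

for n in (1, 10, 100, 1000):
    step = solve(np.eye(2) - (t / n) * A, np.eye(2))   # (I - tA/n)^(-1)
    approx = np.linalg.matrix_power(step, n)           # (I - tA/n)^(-n)
    print(n, np.linalg.norm(approx - exact))           # error decays like 1/n
```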
