1

Novel approaches to creating robust globally convergent algorithms for numerical optimization

Hewlett, Joel David. Wilamowski, Bogdan M. January 2009 (has links)
Thesis--Auburn University, 2009. / Abstract. Vita. Includes bibliographical references (p. 50-52).
2

Circulant preconditioners for Toeplitz matrices and their applications in solving partial differential equations

金小慶, Jin, Xiao-qing. January 1992 (has links)
Published or final version / Mathematics / Doctoral / Doctor of Philosophy
3

Circulant preconditioners from B-splines and their applications.

January 1997 (has links)
by Tat-Ming Tso. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (p. 43-45). / Contents: Chapter 1, Introduction (p. 1): §1.1 Introduction; §1.2 Preconditioned Conjugate Gradient Method; §1.3 Outline of Thesis. Chapter 2, Circulant and Non-circulant Preconditioners (p. 5): §2.1 Circulant Matrix; §2.2 Circulant Preconditioners; §2.3 Circulant Preconditioners from Kernel Function; §2.4 Non-circulant Band-Toeplitz Preconditioners. Chapter 3, B-splines (p. 11): §3.1 Introduction; §3.2 New Version of B-splines. Chapter 4, Circulant Preconditioners Constructed from B-splines (p. 24). Chapter 5, Numerical Results and Concluding Remarks (p. 28). Chapter 6, Applications to Signal Processing (p. 37): §6.1 Introduction; §6.2 Preconditioned Regularized Least Squares; §6.3 Numerical Example. References (p. 43).
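The chapter list above centres on circulant preconditioners used inside the preconditioned conjugate gradient method. As a loose illustration of that general technique only (not of the B-spline construction developed in this thesis), the sketch below builds T. Chan's optimal circulant preconditioner for a symmetric Toeplitz matrix and passes it to SciPy's conjugate gradient solver; the matrix size, the slowly decaying generating sequence, and all variable names are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, cg

# Illustrative symmetric positive definite Toeplitz system; the generating
# sequence is an assumption, not one taken from the thesis.
n = 256
k = np.arange(n)
t = 1.0 / (1.0 + k) ** 1.1          # first column of T
T = toeplitz(t)
b = np.ones(n)

# T. Chan's "optimal" circulant preconditioner: the circulant matrix C that
# minimises ||C - T||_F; its first column averages the diagonals of T.
c = np.empty(n)
c[0] = t[0]
j = np.arange(1, n)
c[j] = ((n - j) * t[j] + j * t[n - j]) / n

# A circulant matrix is diagonalised by the FFT, so applying C^{-1} costs
# O(n log n) per iteration.
lam = np.fft.fft(c).real            # eigenvalues of C (real: c is symmetric)
M = LinearOperator((n, n),
                   matvec=lambda v: np.real(np.fft.ifft(np.fft.fft(v) / lam)))

def cg_iterations(precond=None):
    its = []
    _, info = cg(T, b, M=precond, callback=lambda xk: its.append(1))
    return len(its), info

print("CG  (iterations, info):", cg_iterations())
print("PCG (iterations, info):", cg_iterations(M))
```

Because the preconditioned spectrum is clustered, the preconditioned run should need noticeably fewer iterations than plain CG on the same system.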
4

Learning gradients and canonical correlation by kernel methods /

Cai, Jia. January 2009 (has links) (PDF)
Thesis (Ph.D.)--City University of Hong Kong, 2009. / "Submitted to Department of Mathematics in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves [52]-58)
5

Circulant preconditioners for Toeplitz matrices and their applications in solving partial differential equations /

Jin, Xiao-qing. January 1992 (has links)
Thesis (Ph. D.)--University of Hong Kong, 1993.
6

A study of gradient based particle swarm optimisers

Barla-Szabo, Daniel 29 November 2010 (has links)
Gradient-based optimisers are a natural way to solve optimisation problems and have long been used for their efficacy in exploiting the search space. Particle swarm optimisers (PSOs), when using reasonable algorithm parameters, are considered to have good exploration characteristics. This thesis proposes a specific way of constructing hybrid gradient PSOs. Heterogeneous, hybrid gradient PSOs are constructed by allowing the gradient algorithm to optimise the local best particles, while the PSO algorithm governs the behaviour of the rest of the swarm. This approach allows the two algorithms to concentrate on the separate tasks of exploitation and exploration. Two new PSOs are introduced: the Gradient Descent PSO (GDPSO), which combines the Gradient Descent and PSO algorithms, and the LeapFrog PSO, which combines the LeapFrog and PSO algorithms. The GDPSO is arguably the simplest possible hybrid gradient PSO, while the LeapFrog PSO incorporates the more sophisticated LFOP1(b) algorithm, which features a heuristic design and a dynamic time-step adjustment mechanism. The strong tendency of these hybrids to converge prematurely is examined, and it is shown that by modifying algorithm parameters and delaying the introduction of gradient information, it is possible to retain the strong exploration capabilities of the original PSO algorithm while still benefiting from the exploitation of the gradient algorithms. / Dissertation (MSc)--University of Pretoria, 2010. / Computer Science / unrestricted
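The hybrid described above lets a gradient method exploit around the best particle while the standard PSO update keeps exploring. The following sketch is a minimal illustration of that division of labour, loosely in the spirit of the Gradient Descent PSO; the test function, swarm size, coefficients, gradient step size, and number of gradient steps are all assumptions, and the LeapFrog (LFOP1(b)) variant and the delayed introduction of gradient information studied in the thesis are not reproduced.

```python
import numpy as np

def sphere(x):                       # assumed test function, not from the thesis
    return np.sum(x ** 2, axis=-1)

def sphere_grad(x):
    return 2.0 * x

def hybrid_gd_pso(f, grad, dim=10, swarm=30, iters=200,
                  w=0.72, c1=1.49, c2=1.49, eta=0.01, gd_steps=5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, size=(swarm, dim))    # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_f = x.copy(), f(x)                  # personal bests
    g = pbest[np.argmin(pbest_f)].copy()             # best position found

    for _ in range(iters):
        # Exploration: standard inertia-weight PSO update for the whole swarm.
        r1, r2 = rng.random((2, swarm, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = f(x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]

        # Exploitation: a few gradient-descent steps applied only to the best
        # position, as in the hybrid scheme described above.
        for _ in range(gd_steps):
            g_new = g - eta * grad(g)
            if f(g_new) < f(g):
                g = g_new
        best_idx = np.argmin(pbest_f)
        if pbest_f[best_idx] < f(g):
            g = pbest[best_idx].copy()
    return g, f(g)

best_x, best_f = hybrid_gd_pso(sphere, sphere_grad)
print("best value found:", best_f)
```

Only the best position receives gradient steps; the stochastic velocity update that drives exploration is left untouched, which is the separation of roles the abstract describes.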
7

Optimum System Modelling Using Recent Gradient Methods

Markettos, Nicholas Denis 04 1900 (has links)
A study of gradient optimization techniques, in particular as applied to system modelling problems, is made. Three efficient techniques are used to derive optimum second-order and third-order models for a seventh-order system. The optimization techniques are the Fletcher-Powell method, a more recent method proposed by Fletcher, and a method based on a more general objective function proposed by Jacobson and Oksman. The approximation is carried out in the time domain. Least-squares and least-pth criteria are used, and almost minimax results are obtained for large values of p. Values of p up to 10^12 are successfully used. The results are compared with other minimax-type algorithms. / Thesis / Master of Engineering (MEngr)
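The abstract above fits reduced-order models by minimising least-squares and least-pth error criteria with gradient methods, where raising p pushes the fit towards the minimax (worst-case) criterion. The sketch below illustrates only that least-pth idea on a stand-in curve-fitting problem (approximating exp(t) by a cubic); the seventh-order system from the thesis is not reproduced, and SciPy's BFGS is used as a generic quasi-Newton stand-in for the Fletcher-Powell, Fletcher, and Jacobson-Oksman routines named in the record.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed stand-in problem: approximate exp(t) on [-1, 1] by a cubic
# polynomial under a least-pth error criterion (p even).
t = np.linspace(-1.0, 1.0, 201)
target = np.exp(t)
V = np.vander(t, 4, increasing=True)      # columns 1, t, t^2, t^3

def objective_and_grad(a, p):
    e = V @ a - target                    # pointwise model error
    obj = np.sum(e ** p)                  # least-pth criterion
    grad = p * V.T @ (e ** (p - 1))       # analytic gradient w.r.t. coefficients
    return obj, grad

a0 = np.zeros(4)
for p in (2, 10):                         # larger p approaches the minimax fit
    res = minimize(objective_and_grad, a0, args=(p,), jac=True, method="BFGS")
    max_err = np.max(np.abs(V @ res.x - target))
    print(f"p = {p:2d}: maximum absolute error = {max_err:.5f}")
    a0 = res.x                            # warm-start the next, larger p
```

Raising p shifts the objective from penalising average error towards penalising the worst-case error, which is why the maximum error should drop between the two runs.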
8

Preconditioned conjugate gradient methods for the Navier-Stokes equations

Ajmani, Kumud 13 October 2005 (has links)
A generalized Conjugate Gradient-like method is used to solve the linear systems of equations formed at each time-integration step of the unsteady, two-dimensional, compressible Navier-Stokes equations of fluid flow. The Navier-Stokes equations are cast in an implicit, upwind, finite-volume, flux-split formulation. Preconditioning techniques are employed with the Conjugate Gradient-like method to enhance the stability and convergence rate of the overall iterative method. The superiority of the new solver is established by comparisons with a conventional Line Gauss-Seidel Relaxation (LGSR) solver. Comparisons are based on the number of iterations required to converge to a steady-state solution and the total CPU time required for convergence. Three test cases representing widely varying flow physics are chosen to investigate the performance of the solvers. Computational test results for very low speed flow (incompressible flow over a backward-facing step at Mach 0.1), transonic flow (trailing-edge flow in a transonic turbine cascade) and hypersonic flow (shock-on-shock interactions on a cylindrical leading edge at Mach 6.0) are presented. For the Mach 0.1 case, speed-up factors of 30 (in terms of iterations) and 20 (in terms of CPU time) are found in favor of the new solver when compared with the LGSR solver. The corresponding speed-up factors for the transonic flow case are 20 and 18, respectively. The hypersonic case shows relatively lower speed-up factors of 5 and 4, respectively. This study reveals that preconditioning can greatly enhance the range of applicability and improve the performance of Conjugate Gradient-like methods. / Ph. D.
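The record above replaces a line-relaxation solver with a preconditioned Conjugate Gradient-like (Krylov) method for the implicit linear systems. As a generic stand-in only (not the dissertation's solver or its flux-split Navier-Stokes discretisation), the sketch below applies GMRES, a Conjugate Gradient-like method for nonsymmetric systems, to a small upwind convection-diffusion matrix with and without an incomplete-LU preconditioner; the grid size, convection coefficient, and ILU parameters are assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# Assumed model problem: a small 2-D upwind convection-diffusion matrix as a
# stand-in for the implicit Navier-Stokes systems described above.
n = 32                                   # grid points per direction (assumed)
h = 1.0 / (n + 1)
I = sp.identity(n, format="csr")
diff = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
conv = sp.diags([-1.0, 1.0], [-1, 0], shape=(n, n)) / h      # first-order upwind
A1 = diff + 10.0 * conv                  # nonsymmetric 1-D operator
A = (sp.kron(I, A1) + sp.kron(A1, I)).tocsc()
b = np.ones(A.shape[0])

def gmres_iterations(M=None):
    its = []
    _, info = gmres(A, b, M=M, restart=50, maxiter=500,
                    callback=lambda res: its.append(1), callback_type="pr_norm")
    return len(its), info

# Incomplete LU factorisation of A used as the preconditioner.
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator(A.shape, matvec=ilu.solve)

print("GMRES, no preconditioner  (iterations, info):", gmres_iterations())
print("GMRES, ILU preconditioner (iterations, info):", gmres_iterations(M))
```

On a problem of this kind the preconditioned run typically needs far fewer Krylov iterations, which is the qualitative effect the record reports for its preconditioned solver.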
9

Some fast algorithms in signal and image processing.

January 1995 (has links)
Kwok-po Ng. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 138-139). / Contents: Abstracts; Summary; Introduction (p. 1); Summary of the papers A-F (p. 2); Paper A (p. 15); Paper B (p. 36); Paper C (p. 63); Paper D (p. 87); Paper E (p. 109); Paper F (p. 122).
10

Image reconstruction with multisensors.

January 1998 (has links)
by Wun-Cheung Tang. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references. / Abstract also in Chinese. / Contents: Abstracts (p. 1); Introduction (p. 3); Toeplitz and Circulant Matrices (p. 3); Conjugate Gradient Method (p. 6); Cosine Transform Preconditioner (p. 7); Regularization (p. 10); Summary (p. 13); Paper A (p. 19); Paper B (p. 36).
