21

Método de Descida para problemas de otimização multiobjetivo / Descent Methods for Multiobjective Optimization Problems

JESUS, Lays Grazielle Cardoso Silva de 30 April 2010 (has links)
In this work, we study descent methods for multiobjective optimization problems, for which we introduce an order relation induced by a closed convex cone. We study how to compute a descent direction and prove that every accumulation point of the sequence generated by the descent method with Armijo line search is weakly efficient.
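The abstract does not spell out the method for a general ordering cone, so the following is a minimal Python sketch of the special case where the cone is the nonnegative orthant (the Pareto order) with two objectives: the descent direction is the negative of the minimum-norm element of the convex hull of the gradients, and the Armijo search demands sufficient decrease in every objective. All function names are illustrative.

```python
import numpy as np

def min_norm_direction(g1, g2):
    """Negative of the minimum-norm element of conv{g1, g2}: the
    multiobjective steepest descent direction for two objectives
    under the Pareto (nonnegative orthant) ordering cone."""
    diff = g1 - g2
    denom = diff @ diff
    t = 0.5 if denom == 0.0 else float(np.clip((g2 @ (g2 - g1)) / denom, 0.0, 1.0))
    return -(t * g1 + (1.0 - t) * g2)

def descent_armijo(fs, grads, x, beta=1e-4, shrink=0.5, tol=1e-8, max_iter=500):
    """Descent method with Armijo search: every objective must decrease
    sufficiently; stops at an approximately Pareto-critical point."""
    for _ in range(max_iter):
        g = [grad(x) for grad in grads]
        d = min_norm_direction(*g)
        if np.linalg.norm(d) < tol:          # Pareto critical: no common descent
            break
        fx = [f(x) for f in fs]
        s = 1.0
        while any(f(x + s * d) > fxi + beta * s * (gi @ d)
                  for f, fxi, gi in zip(fs, fx, g)):
            s *= shrink                      # Armijo backtracking
        x = x + s * d
    return x

# Two convex objectives with different minimizers; accumulation points of
# the generated sequence should be weakly efficient (here: on the segment
# between the two individual minimizers).
f1, g1 = lambda x: np.sum((x - 1.0) ** 2), lambda x: 2.0 * (x - 1.0)
f2, g2 = lambda x: np.sum((x + 1.0) ** 2), lambda x: 2.0 * (x + 1.0)
print(descent_armijo([f1, f2], [g1, g2], np.array([3.0, -2.0])))
```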
22

Preconditioned iterative methods for monotone nonlinear eigenvalue problems

Solov'ëv, Sergey I. 11 April 2006 (has links) (PDF)
This paper proposes new iterative methods for the efficient computation of the smallest eigenvalue of symmetric nonlinear matrix eigenvalue problems of large order with a monotone dependence on the spectral parameter. Monotone nonlinear eigenvalue problems for differential equations have important applications in mechanics and physics. The discretization of these eigenvalue problems leads to ill-conditioned nonlinear eigenvalue problems with very large sparse matrices that depend monotonically on the spectral parameter. To compute the smallest eigenvalue of a large nonlinear matrix eigenvalue problem, we suggest preconditioned iterative methods: the preconditioned simple iteration method, the preconditioned steepest descent method, and the preconditioned conjugate gradient method. These methods use only matrix-vector multiplications, preconditioner-vector multiplications, linear operations with vectors, and inner products of vectors. We investigate the convergence of these methods and derive grid-independent error estimates for the computed eigenvalues. Numerical experiments demonstrate the practical effectiveness of the proposed methods for a class of mechanical problems.
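As a hedged illustration of the preconditioned steepest descent idea, the sketch below treats the linear symmetric eigenproblem (a stand-in for the nonlinear, parameter-dependent problem of the paper): each step minimises the Rayleigh quotient over the span of the current iterate and the preconditioned residual, using only matrix-vector and preconditioner-vector products. The Jacobi preconditioner is a weak but purely illustrative choice.

```python
import numpy as np

def preconditioned_sd(A, apply_Minv, x0, tol=1e-8, max_iter=20000):
    """Preconditioned steepest descent for the smallest eigenvalue of a
    symmetric matrix A: minimise the Rayleigh quotient over span{x, w},
    w = M^{-1}(Ax - lam*x), via a 2x2 Rayleigh-Ritz problem."""
    x = x0 / np.linalg.norm(x0)
    lam = x @ (A @ x)
    for _ in range(max_iter):
        r = A @ x - lam * x                 # eigenvalue residual
        if np.linalg.norm(r) < tol:
            break
        w = apply_Minv(r)                   # preconditioned residual
        V, _ = np.linalg.qr(np.column_stack([x, w]))
        vals, vecs = np.linalg.eigh(V.T @ A @ V)
        x = V @ vecs[:, 0]                  # Ritz vector for smallest value
        lam = vals[0]
    return lam, x

# 1D Laplacian test matrix; its smallest eigenvalue is 4*sin^2(pi/(2(n+1))).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
diag = np.diag(A)
lam, _ = preconditioned_sd(A, lambda r: r / diag, np.ones(n))
print(lam, 4.0 * np.sin(np.pi / (2 * (n + 1))) ** 2)
```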
23

Efficient and Accurate Numerical Techniques for Sparse Electromagnetic Imaging

Sandhu, Ali Imran 04 1900 (has links)
Electromagnetic (EM) imaging schemes are inherently non-linear and ill-posed. Although there exist remedies to these fundamental problems, more efficient solutions are still being sought. To this end, in this thesis, the non-linearity is tackled by incorporating a multitude of techniques (ranging from the Born approximation (linear) and inexact Newton (linearized) to fully nonlinear iterative Landweber schemes) that can account for weak to strong scattering problems. The ill-posedness of the EM inverse scattering problem is circumvented by formulating the above methods as a minimization problem with a sparsity constraint. More specifically, four novel inverse scattering schemes are formulated and implemented. (i) A greedy algorithm is used together with a simple artificial neural network (ANN) for efficient and accurate EM imaging of weak scatterers. The ANN is used to predict the sparsity level of the investigation domain, which is then used as the L0-constraint parameter for the greedy algorithm. (ii) An inexact Newton scheme that enforces the sparsity constraint on the derivative of the unknown material properties (not necessarily sparse) is proposed. The inverse scattering problem is formulated as a nonlinear function of the derivative of the material properties. This approach results in significant sparsification, where any sparsity regularization method can be applied efficiently. (iii) A sparsity-regularized nonlinear contrast source (CS) framework is developed to directly solve the nonlinear minimization problem using Landweber iterations, where convergence is accelerated using a self-adaptive projected accelerated steepest descent algorithm. (iv) A 2.5D finite difference frequency domain (FDFD) based inverse scattering scheme is developed for imaging scatterers embedded in lossy and inhomogeneous media. The FDFD-based inversion algorithm does not require the Green's function of the background medium and appears to be a promising technique for biomedical and subsurface imaging with a reasonable computational time. Numerical experiments, carried out using synthetically generated measurements, show that the images recovered by these sparsity-regularized methods are sharper and more accurate than those produced by existing methods. The methods developed in this work have potential application areas ranging from oil/gas reservoir engineering to biological imaging, where sparse domains naturally exist.
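A minimal sketch of the sparsity-regularized Landweber idea in (iii), reduced to the linearized (Born-type) setting, with a random matrix standing in for the discretized scattering operator; the soft-thresholding step enforces the sparsity constraint. The thesis's self-adaptive acceleration is omitted, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(z, t):
    """Proximal map of t*||.||_1: enforces the sparsity constraint."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_landweber(A, y, lam, n_iter=500):
    """Sparsity-regularized Landweber iteration (ISTA form):
    x <- soft(x + tau*A^T(y - Ax), tau*lam), with tau <= 1/||A||^2."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + tau * (A.T @ (y - A @ x)), tau * lam)
    return x

# Synthetic experiment: a sparse contrast profile observed through a
# random linear operator standing in for the Born-approximated model.
m, n, k = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = sparse_landweber(A, y, lam=0.05)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.01))
```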
24

Nonlinear Boundary Conditions in Sobolev Spaces

Richardson, Walter Brown 12 1900 (has links)
The method of dual steepest descent is used to solve ordinary differential equations with nonlinear boundary conditions. A general boundary condition is B(u) = 0, where B is a continuous functional on the nth-order Sobolev space Hⁿ[0,1]. If F : Hⁿ[0,1] → L²[0,1] represents a differential equation, define φ(u) = ½‖F(u)‖² and β(u) = ½‖B(u)‖². Steepest descent is applied to the functional φ + β. Two special cases are considered. If f : ℝ² → ℝ is C⁽²⁾, a Type I boundary condition is defined by B(u) = f(u(0), u(1)). Given K : [0,1] × ℝ → ℝ and g : [0,1] → ℝ of bounded variation, a Type II boundary condition is B(u) = ∫₀¹ K(x, u(x)) dg(x).
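A toy numerical illustration of this approach, assuming a hypothetical Type I problem: the equation u′ − u = 0 (so F(u) = u′ − u) with boundary condition B(u) = u(0)u(1) − e, discretized on a uniform grid and minimized by fixed-step steepest descent on φ + β. The discretization, step size, and numerical gradient are illustrative choices, not taken from the thesis.

```python
import numpy as np

# Grid and residual for the toy equation F(u) = u' - u = 0 on [0, 1],
# with the hypothetical Type I boundary condition B(u) = u(0)*u(1) - e.
N = 11
xs = np.linspace(0.0, 1.0, N)
h = xs[1] - xs[0]

def residual(u):
    return (u[1:] - u[:-1]) / h - u[:-1]     # forward-difference u' - u

def objective(u):                            # phi(u) + beta(u)
    B = u[0] * u[-1] - np.e
    return 0.5 * h * np.sum(residual(u) ** 2) + 0.5 * B ** 2

def num_gradient(u, eps=1e-7):               # central differences, for clarity
    g = np.empty_like(u)
    for i in range(len(u)):
        up, um = u.copy(), u.copy()
        up[i] += eps
        um[i] -= eps
        g[i] = (objective(up) - objective(um)) / (2 * eps)
    return g

u = np.ones(N)
for _ in range(5000):                        # fixed-step steepest descent
    u -= 0.02 * num_gradient(u)
# Solutions of F(u) = 0 are u = c*exp(x); the boundary condition picks c
# so that u(0)*u(1) = e.  Expect u[0]*u[-1] ~ e, up to O(h) error.
print(u[0], u[-1], u[0] * u[-1])
```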
25

Hybrid Steepest-Descent Methods for Variational Inequalities

Huang, Wei-ling 26 June 2006 (has links)
Assume that F is a nonlinear operator on a real Hilbert space H which is strongly monotone and Lipschitzian on a nonempty closed convex subset C of H. Assume also that C is the intersection of the fixed point sets of a finite number of nonexpansive mappings on H. We make a slight modification of the iterative algorithm in Xu and Kim (Journal of Optimization Theory and Applications, Vol. 119, No. 1, pp. 185-201, 2003), which generates a sequence {xn} from an arbitrary initial point x0 in H. The sequence {xn} is shown to converge in norm to the unique solution u* of the variational inequality, under conditions on the parameters different from Xu and Kim's. Applications to the constrained generalized pseudoinverse are included. The results presented in this paper are complementary to Xu and Kim's theorems (Journal of Optimization Theory and Applications, Vol. 119, No. 1, pp. 185-201, 2003).
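A sketch of the hybrid steepest-descent iteration of Xu-Kim type in the simplest case of a single nonexpansive mapping (N = 1), with step sequence λₙ = 1/n; the thesis's modified parameter conditions are not reproduced here.

```python
import numpy as np

def hybrid_steepest_descent(F, T, x0, mu=0.5, n_iter=5000):
    """Hybrid steepest-descent iteration (Xu-Kim type, N = 1):
        x_{n+1} = T(x_n) - lam_{n+1} * mu * F(T(x_n)),
    with lam_n -> 0 and sum(lam_n) = infinity."""
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        lam = 1.0 / n                        # vanishing, non-summable steps
        Tx = T(x)
        x = Tx - lam * mu * F(Tx)
    return x

# VI(F, C): find u* in C with <F(u*), v - u*> >= 0 for all v in C.
# Here C is the unit ball, T the (nonexpansive) projection onto C with
# Fix(T) = C, and F(x) = x - b is strongly monotone and Lipschitzian.
b = np.array([2.0, 0.0])
proj = lambda x: x / max(1.0, np.linalg.norm(x))
u_star = hybrid_steepest_descent(lambda x: x - b, proj, np.zeros(2))
print(u_star)   # expect approximately [1, 0], the projection of b onto C
```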
26

Influence of rare regions on the critical properties of systems with quenched disorder

Narayanan, Rajesh, January 1999 (has links)
Thesis (Ph. D.)--University of Oregon, 1999. / Typescript. Includes vita and abstract. Includes bibliographical references (leaves 165-166). Also available for download via the World Wide Web; free to University of Oregon users. Address: http://wwwlib.umi.com/cr/uoregon/fullcit?p9948028.
27

Continuous steepest descent path for traversing non-convex regions

Beddiaf, Salah January 2016 (has links)
In this thesis, we investigate methods of finding a local minimum for unconstrained problems of non-convex functions with n variables, by following the solution curve of a system of ordinary differential equations. The motivation for this was the fact that existing methods (e.g. those based on Newton methods with line search) sometimes terminate at a non-stationary point when applied to functions f(x) that do not have a positive-definite Hessian ∇²f(x) for all x. Even when methods terminate at a stationary point, it could be a saddle or maximum rather than a minimum. The only method which makes intuitive sense in non-convex regions is the trust region approach, where we seek a step which minimises a quadratic model subject to a restriction on the two-norm of the step size. This gives a well-defined search direction, but at the expense of a costly evaluation. The algorithms derived in this thesis are gradient-based methods which require systems of equations to be solved at each step but which do not use a line search in the usual sense. Progress along the Continuous Steepest Descent Path (CSDP) is governed both by the decrease in the function value and by measures of accuracy of a local quadratic model. Numerical results on specially constructed test problems and a number of standard test problems from CUTEr [38] show that the approaches we have considered are more promising when compared with routines in the optimization toolbox of MATLAB [46], namely the trust region method and the quasi-Newton method. In particular, they perform well in comparison with the, superficially similar, gradient-flow method proposed by Behrman [7].
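For orientation, the sketch below follows the continuous steepest descent path itself, integrating dx/dt = −∇f(x) with a stiff ODE solver on the (non-convex) Rosenbrock function; this is the solution curve that CSDP-type algorithms approximate, not the thesis's own discretization.

```python
import numpy as np
from scipy.integrate import solve_ivp

def grad_rosenbrock(x):
    """Gradient of f(x) = (1 - x0)^2 + 100*(x1 - x0^2)^2 (non-convex)."""
    return np.array([-2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0] ** 2),
                     200.0 * (x[1] - x[0] ** 2)])

# Continuous steepest descent path: dx/dt = -grad f(x), integrated with a
# stiff (BDF) solver and stopped once the gradient is essentially zero.
flow = lambda t, x: -grad_rosenbrock(x)
stationary = lambda t, x: np.linalg.norm(grad_rosenbrock(x)) - 1e-6
stationary.terminal = True                  # stop at the stationary point

sol = solve_ivp(flow, (0.0, 1e4), np.array([-1.2, 1.0]),
                method="BDF", events=stationary, rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])                          # expect the minimiser [1, 1]
```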
29

Proximity curves for potential-based clustering

Csenki, Attila, Neagu, Daniel, Torgunov, Denis, Micic, Natasha 11 January 2020 (has links)
The concept of a proximity curve and a new algorithm are proposed for obtaining clusters in a finite set of data points in finite-dimensional Euclidean space. Each point is endowed with a potential constructed by means of a multi-dimensional Cauchy density, contributing to an overall anisotropic potential function. Guided by the steepest descent algorithm, the data points are successively visited and removed one by one, and at each stage the overall potential is updated and the magnitude of its local gradient is calculated. The result is a finite sequence of tuples, the proximity curve, whose pattern is analysed to give rise to a deterministic clustering. The finite set of all such proximity curves, in conjunction with a simulation study of their distribution, results in a probabilistic clustering represented by a distribution on the set of dendrograms. A two-dimensional synthetic data set is used to illustrate the proposed potential-based clustering idea. It is shown that the results achieved are plausible, since both the ‘geographic distribution’ of data points and the ‘topographic features’ imposed by the potential function are well reflected in the suggested clustering. Experiments using the Iris data set are conducted for validation on classification and clustering benchmark data. The results are consistent with the proposed theoretical framework and the data properties, and they open new approaches and applications for considering data processing from different perspectives and for interpreting the contribution of data attributes to patterns.
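A loose Python sketch of the proximity-curve idea under stated assumptions: each remaining point contributes a Cauchy-type kernel to the overall potential, points are visited and removed one by one, and the gradient magnitude is recorded at each stage. The kernel form is simplified, and the visiting rule used here (hop toward the remaining point most aligned with the local gradient) is an assumption; the paper's precise rule may differ.

```python
import numpy as np

def potential_gradient(x, pts, gamma=1.0):
    """Gradient at x of the potential sum_p 1/(gamma^2 + ||x - p||^2),
    one Cauchy-type kernel per remaining data point (a simplified
    stand-in for the paper's multi-dimensional Cauchy density)."""
    d = x - pts                                        # (m, dim)
    w = 1.0 / (gamma ** 2 + np.sum(d ** 2, axis=1)) ** 2
    return -2.0 * (w[:, None] * d).sum(axis=0)

def proximity_curve(points, start=0, gamma=1.0):
    """Visit and remove points one by one, recording the gradient magnitude
    of the updated potential at each stage (the proximity curve)."""
    remaining = list(range(len(points)))
    current, curve = start, []
    while len(remaining) > 1:
        remaining.remove(current)
        g = potential_gradient(points[current], points[remaining], gamma)
        curve.append((current, np.linalg.norm(g)))
        # Assumed visiting rule: hop toward the pull of the densest region.
        steps = points[remaining] - points[current]
        scores = steps @ g / (np.linalg.norm(steps, axis=1) + 1e-12)
        current = remaining[int(np.argmax(scores))]
    curve.append((current, 0.0))
    return curve

rng = np.random.default_rng(1)
two_blobs = np.vstack([rng.normal(0.0, 0.3, (20, 2)),   # cluster near 0
                       rng.normal(3.0, 0.3, (20, 2))])  # cluster near 3
for idx, gmag in proximity_curve(two_blobs)[:5]:
    print(idx, round(float(gmag), 3))
```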
30

A feed forward neural network approach for matrix computations

Al-Mudhaf, Ali F. January 2001 (has links)
A new neural network approach for performing matrix computations is presented. The idea of this approach is to construct a feed-forward neural network (FNN) and then train it by matching a desired set of patterns. The solution of the problem is the converged weight of the FNN. Accordingly, unlike conventional FNN research that concentrates on the external properties (mappings) of the networks, this study concentrates on the internal properties (weights) of the network. The present network is linear and its weights are usually strongly constrained; hence, a complicated overlapped network needs to be constructed. It should be noticed, however, that the present approach depends highly on the training algorithm of the FNN. Unfortunately, the available training methods, such as the original back-propagation (BP) algorithm, encounter many deficiencies when applied to matrix algebra problems, e.g., slow convergence due to an improper choice of learning rates (LR). Thus, this study focuses on the development of new efficient and accurate FNN training methods. One improvement suggested to alleviate the problem of LR choice is the use of a line search with the steepest descent method, namely bracketing with the golden section method. This provides an optimal LR as training progresses. Another improvement proposed in this study is the use of conjugate gradient (CG) methods to speed up the training process of the neural network. The computational feasibility of these methods is assessed on two matrix problems: the LU-decomposition of both band and square ill-conditioned unsymmetric matrices, and the inversion of square ill-conditioned unsymmetric matrices. Two performance indexes are considered: learning speed and convergence accuracy. Extensive computer simulations have been carried out using the following training methods: the steepest descent with line search (SDLS) method, the conventional back-propagation (BP) algorithm, and conjugate gradient (CG) methods, specifically the Fletcher-Reeves (CGFR) and Polak-Ribière (CGPR) methods. The performance comparisons between these minimization methods demonstrate that the CG training methods give better convergence accuracy and are by far superior with respect to learning time; they offer speed-ups of anything between 3 and 4 over SDLS, depending on the severity of the error goal chosen and the size of the problem. Furthermore, when Powell's restart criteria are used with the CG methods, the problem of wrong convergence directions usually encountered in pure CG learning methods is alleviated. In general, CG methods with restarts have shown the best performance among all the methods considered for training the FNN for LU-decomposition and matrix inversion. Consequently, it is concluded that CG methods are good candidates for training FNNs for matrix computations, in particular the Polak-Ribière conjugate gradient method with Powell's restart criteria.
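As a hedged illustration of the SDLS ingredient, the sketch below applies steepest descent with a golden-section line search to a linear "network" weight matrix W trained to invert A by minimizing E(W) = ½‖AW − I‖²_F; this is a stand-in for the thesis's FNN formulation, not its actual architecture.

```python
import numpy as np

GOLD = (np.sqrt(5.0) - 1.0) / 2.0

def golden_section(phi, a, b, tol=1e-6):
    """Bracketing/golden-section line search for the learning rate, the
    ingredient that turns steepest descent into the SDLS method."""
    c, d = b - GOLD * (b - a), a + GOLD * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - GOLD * (b - a)
        else:
            a, c = c, d
            d = a + GOLD * (b - a)
    return 0.5 * (a + b)

def sdls_inverse(A, n_iter=200):
    """Steepest descent with golden-section line search on
    E(W) = 0.5*||A W - I||_F^2, whose minimiser is A^{-1}."""
    n = A.shape[0]
    W, I = np.zeros((n, n)), np.eye(n)
    for _ in range(n_iter):
        G = A.T @ (A @ W - I)                 # gradient of E at W
        phi = lambda lr, W=W, G=G: 0.5 * np.linalg.norm(A @ (W - lr * G) - I, "fro") ** 2
        W = W - golden_section(phi, 0.0, 1.0) * G
    return W

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
print(sdls_inverse(A) @ A)                    # expect ~ identity matrix
```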
