About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

A multi-objective programming perspective to statistical learning problems

Yaman, Sibel 17 November 2008 (has links)
It has been increasingly recognized that realistic problems often involve a tradeoff among many conflicting objectives. Traditional methods aim at satisfying multiple objectives by combining them into a global cost function, which in most cases overlooks the underlying tradeoffs between the conflicting objectives. This raises the issue of how different objectives should be combined to yield a final solution. Moreover, such approaches guarantee only that the chosen overall objective function is optimized over the training samples; there is no guarantee on performance in terms of the individual objectives, since these are never considered individually. Motivated by these shortcomings of traditional methods, the objective in this dissertation is to investigate theory, algorithms, and applications for problems with competing objectives, and to understand the behavior of the proposed algorithms in light of some applications.

We develop a multi-objective programming (MOP) framework for finding compromise solutions that are satisfactory for each of multiple competing performance criteria. The fundamental idea of our formulation, which we refer to as iterative constrained optimization (ICO), revolves around improving one objective while allowing the rest to degrade. This is achieved by optimizing individual objectives with proper constraints on the remaining competing objectives. The constraint bounds are adjusted based on the objective function values obtained in the most recent iteration. An aggregated utility function is used to evaluate the acceptability of local changes in the competing criteria, i.e., changes from one iteration to the next.

Conflicting objectives arise in many problems of speech and language technologies, and in this dissertation we consider two applications. The first is language model (LM) adaptation, where a general LM is adapted to a specific application domain so that the adapted LM is as close as possible to both the general model and the application domain data. Language modeling and adaptation are used in many speech and language processing applications such as speech recognition, machine translation, part-of-speech tagging, parsing, and information retrieval. The second is automatic language identification (LID), where the standard detection performance measures, the false-rejection (miss) and false-acceptance (false alarm) rates for a number of languages, are to be minimized simultaneously. LID systems may serve as a pre-processing stage for understanding systems and for human listeners, with applications in, for example, a hotel lobby or an international airport, where one might speak to a multilingual voice-controlled travel information retrieval system.

This dissertation is expected to provide new insights and techniques for achieving significant performance improvements over existing approaches in terms of the individual competing objectives; meanwhile, the designer retains better control over what is achieved for each individual objective. Although many MOP approaches developed so far are formal and extensible to a large number of competing objectives, their capabilities have been examined with only two or three objectives, mainly because practical problems become significantly harder to manage as the number of objectives grows. We, however, illustrate the proposed framework with a larger number of objectives.
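The ICO loop described in the abstract lends itself to a compact sketch. The following is a minimal illustration, assuming two toy objectives and a fixed slack controlling how much the constrained objectives may degrade per iteration; the objective functions, slack value, and sum-based acceptance rule are illustrative placeholders, not the dissertation's actual formulation.

```python
import numpy as np
from scipy.optimize import NonlinearConstraint, minimize

# Two competing objectives over a shared parameter vector x (illustrative).
def f1(x):
    return (x[0] - 1.0) ** 2 + 0.1 * x[1] ** 2

def f2(x):
    return (x[1] - 1.0) ** 2 + 0.1 * x[0] ** 2

def ico(x, objectives, slack=0.05, iters=20):
    """Iterative constrained optimization: at each step, minimize one
    objective subject to the others not exceeding their most recent
    values plus a small slack; the constraint bounds are re-adjusted
    from the previous iteration, as in the abstract above."""
    for it in range(iters):
        i = it % len(objectives)                  # objective to improve this round
        others = [f for j, f in enumerate(objectives) if j != i]
        bounds = [f(x) + slack for f in others]   # allow controlled degradation
        cons = [NonlinearConstraint(f, -np.inf, b)
                for f, b in zip(others, bounds)]
        res = minimize(objectives[i], x, method="SLSQP", constraints=cons)
        # Accept the move only if an aggregated utility improves
        # (a plain sum here; the thesis uses a designer-chosen utility).
        if sum(f(res.x) for f in objectives) <= sum(f(x) for f in objectives):
            x = res.x
    return x

x_star = ico(np.zeros(2), [f1, f2])
print(x_star, f1(x_star), f2(x_star))
```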
192

Funções de interpolação e técnicas de solução para problemas de Poisson usando método de elementos finitos de alta ordem / Interpolation functions and solution techniques for Poisson problems using the high-order finite element method

Santos, Caio Fernando Rodrigues dos, 1986- 17 August 2018 (has links)
Orientador: Marco Lúcio Bittencourt / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica / Previous issue date: 2011 / Resumo: Esse trabalho apresenta uma nova técnica de solução para o problema de Poisson, via problemas de projeção local, baseada na equivalência dos coeficientes para os problemas de Poisson e projeção. Um método de construção de matrizes de massa e rigidez, para triângulos, através do produto de matrizes unidimensionais de massa, mista e rigidez, usando-se coordenadas baricêntricas, é também apresentado. Dois novos conjuntos de funções de interpolação para triângulos, baseados em coordenadas de área, são considerados. Discute-se a propriedade de ortogonalidade dos polinômios de Jacobi, no domínio de integração de um triângulo na direção L2 = (0, 1 - L1), e ponderações ótimas dos polinômios de Jacobi para as matrizes de massa são determinadas / Abstract: This work presents a new solution technique for Poisson problems, via local projection problems, based on the equivalence of the coefficients of the Poisson and projection problems. A method for constructing the mass and stiffness matrices of triangles from products of one-dimensional mass, mixed, and stiffness matrices, using barycentric coordinates, is also proposed. Two new sets of interpolation functions for triangles, based on area coordinates, are considered. The orthogonality property of Jacobi polynomials over the triangle integration domain is discussed for the direction L2 = (0, 1 - L1), and optimal weights of the Jacobi polynomials for the mass matrices are determined / Mestrado / Mecânica dos Sólidos e Projeto Mecânico / Mestre em Engenharia Mecânica
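The tensor-product construction mentioned in the abstract is easiest to see on quadrilateral elements, where the 2D mass matrix is simply the Kronecker product of 1D mass matrices; the dissertation's contribution is an analogous product construction for triangles in barycentric coordinates. A minimal sketch of the quadrilateral case, assuming a nodal Lagrange basis on equispaced points (illustrative only, not the thesis's triangular construction):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def mass_1d(p):
    """1D mass matrix M_ij = integral of L_i(x) L_j(x) over [-1, 1] for a
    nodal Lagrange basis on p+1 equispaced nodes, via Gauss quadrature."""
    nodes = np.linspace(-1.0, 1.0, p + 1)
    xq, wq = leggauss(p + 2)  # exact for polynomials of degree <= 2p+3
    # Lagrange basis functions evaluated at the quadrature points
    V = np.array([[np.prod([(x - nodes[k]) / (nodes[j] - nodes[k])
                            for k in range(p + 1) if k != j])
                   for j in range(p + 1)] for x in xq])
    return (V * wq[:, None]).T @ V

M1 = mass_1d(3)
M2 = np.kron(M1, M1)  # tensor-product 2D mass matrix on a quadrilateral
print(M2.shape)
```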
193

n-Larguras de conjuntos de funções suaves sobre a esfera S^d / n-Widths of sets of smooth functions on the sphere S^d

Stábile, Régis Leandro Braguim, 1985- 03 May 2009 (has links)
Orientadores: Alexander Kushpel, Sergio Antonio Tozoni / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Previous issue date: 2009 / Resumo: O objetivo principal da dissertação é realizar um estudo sobre estimativas de n-larguras de conjuntos de funções suaves sobre a esfera unitária d-dimensional real. Esses conjuntos são gerados por operadores multiplicadores. Outro objetivo é desenvolver um texto em português sobre as n-larguras mais importantes, suas propriedades e suas relações. Este objetivo é realizado no primeiro capítulo. No segundo capítulo é realizado um estudo rápido e com poucas demonstrações sobre Análise Harmônica na esfera d-dimensional real. No terceiro capítulo são estudadas estimativas de médias de Levy para uma classe de normas especiais e em seguida esses resultados são aplicados no estudo de estimativas inferiores para as n-larguras de Kolmogorov e Gel'fand e superiores para a de Kolmogorov, para operadores multiplicadores gerais. No quarto e último capítulo são estudadas estimativas para n-larguras de conjuntos de funções suaves, finitamente e infinitamente diferenciáveis sobre a esfera. Várias dessas estimativas são assintoticamente exatas em termos de ordem e as constantes que determinam a ordem dessas estimativas são determinadas explicitamente. / Abstract: The main purpose of this work is to study estimates of n-widths of sets of smooth functions on the d-dimensional real unit sphere. These sets are generated by multiplier operators. Another aim is to develop a text in Portuguese about the most important n-widths, their properties, and their relations; we do this in the first chapter. In the second chapter, we give a brief study, with few proofs, of Harmonic Analysis on the d-dimensional real unit sphere. In the third chapter, Levy means for a class of special norms are studied and then applied to obtain lower estimates for the Kolmogorov and Gel'fand n-widths, and upper estimates for the Kolmogorov n-width, for general multiplier operators. In the fourth and last chapter, estimates for the n-widths of sets of smooth functions, finitely and infinitely differentiable on the sphere, are studied. Several of these estimates are asymptotically exact in terms of order, and the constants that determine the order of these estimates are given explicitly. / Mestrado / Mestre em Matemática
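For reference, the two widths estimated in the third chapter are, in standard notation, the Kolmogorov and Gel'fand n-widths of a set A in a normed space X:

```latex
d_n(A, X) = \inf_{\dim L_n \le n} \; \sup_{f \in A} \; \inf_{g \in L_n} \|f - g\|_X ,
\qquad
d^n(A, X) = \inf_{\operatorname{codim} L^n \le n} \; \sup_{f \in A \cap L^n} \|f\|_X ,
```

where the first infimum runs over all linear subspaces L_n of X of dimension at most n, and the second over all closed subspaces L^n of codimension at most n.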
194

Estimativas para n-Larguras e números de entropia de conjuntos de funções suaves sobre o toro T^d / Estimates for n-Widths and entropy numbers of sets of smooth functions on the torus T^d

Stábile, Régis Leandro Braguim, 1985- 25 August 2018 (has links)
Orientador: Sergio Antonio Tozoni / Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Previous issue date: 2014 / Resumo: As teorias de n-larguras e de entropia foram introduzidas por Kolmogorov na década de 1930. Desde então, muitos trabalhos têm visado obter estimativas assintóticas para n-larguras e números de entropia de diferentes classes de conjuntos. Neste trabalho, investigamos n-larguras e números de entropia de operadores multiplicadores definidos sobre o toro d-dimensional. Na primeira parte, estabelecemos estimativas inferiores e superiores para n-larguras e números de entropia de operadores multiplicadores gerais. Na segunda parte, aplicamos estes resultados a operadores multiplicadores específicos, associados a conjuntos de funções finitamente e infinitamente diferenciáveis sobre o toro. Em particular, demonstramos que as estimativas obtidas são exatas em termos de ordem em diversas situações / Abstract: The theories of n-widths and entropy were introduced by Kolmogorov in the 1930s. Since then, many works have aimed to find estimates for n-widths and entropy numbers of different classes of sets. In this work, we investigate n-widths and entropy numbers of multiplier operators defined on the d-dimensional torus. In the first part, upper and lower bounds are established for the n-widths and entropy numbers of general multiplier operators. In the second part, we apply these results to specific multiplier operators, associated with sets of finitely and infinitely differentiable functions on the torus. In particular, we prove that the estimates obtained are order-sharp in various situations / Doutorado / Matematica / Doutor em Matemática
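For reference, the k-th entropy number of an operator T : X → Y, with B_X the unit ball of X, is defined in the standard way:

```latex
e_k(T) = \inf \left\{ \varepsilon > 0 \; : \;
T(B_X) \text{ can be covered by } 2^{\,k-1} \text{ balls of radius } \varepsilon \text{ in } Y \right\} ,
```

so that fast decay of e_k(T) quantifies the compactness of the multiplier operator, complementing the n-width estimates above.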
195

The Adaptive Particle Representation (APR) for Simple and Efficient Adaptive Resolution Processing, Storage and Simulations

Cheeseman, Bevan 29 March 2018 (has links) (PDF)
This thesis presents the Adaptive Particle Representation (APR), a novel adaptive data representation that can be used for general data processing, storage, and simulations. The APR is motivated and designed as a replacement representation for pixel images, to address computational and memory bottlenecks in processing pipelines for studying spatiotemporal processes in biology using Light-sheet Fluorescence Microscopy (LSFM) data.

The APR is an adaptive function representation that represents a function in a spatially adaptive way using a set of Particle Cells V and function values stored at particle collocation points P∗. The Particle Cells partition space and implicitly define a piecewise constant Implied Resolution Function R∗(y) and the particle sampling locations. As an adaptive data representation, the APR can provide both computational and memory benefits by aligning the number of Particle Cells and particles with the spatial scales of the function. The APR allows reconstruction of a function value at any location y using any positive weighted combination of particles within a distance of R∗(y). The Particle Cells V are selected such that the error between the reconstruction and the original function, when weighted by a function σ(y), is below a user-set relative error threshold E. We call this the Reconstruction Condition and σ(y) the Local Intensity Scale. σ(y) is motivated by local gain controls in the human visual system, and for LSFM data can be used to account for contrast variations across an image. The APR is formed by satisfying an additional condition on R∗(y), which we call the Resolution Bound. The Resolution Bound relates R∗(y) to a local maximum of the absolute value of the function derivatives within a distance R∗(y) of y. Given restrictions on σ(y), satisfaction of the Resolution Bound also guarantees satisfaction of the Reconstruction Condition.

In this thesis, we present algorithms and approaches that find the optimal Implied Resolution Function for general problems posed in the form of the Resolution Bound, using an algorithm we call the Pulling Scheme. Here, optimal means the largest R∗(y) at each location. The Pulling Scheme has worst-case linear complexity in the number of pixels when used to represent images. The approach is general in that the same algorithm can be used for general (α,m)-Reconstruction Conditions, where α denotes the function derivative and m the minimum order of the reconstruction. Further, it can be combined with anisotropic neighborhoods to provide adaptation in both space and time.

The APR can be used with both noise-free and noisy data. For noisy data, the Reconstruction Condition can no longer be guaranteed, but numerical results show an optimal range of relative error E that provides a maximum increase in PSNR over the noisy input data. Further, if it is assumed that the Implied Resolution Function satisfies the Resolution Bound, then the APR converges to a biased estimate (within a constant factor of E) at the optimal statistical rate.

The APR continues a long tradition of adaptive data representations and represents a unique trade-off between the level of adaptation of the representation and simplicity, both in the APR's structure and in its use for processing. Here, we numerically evaluate the adaptation and processing of the APR for use with LSFM data, using both synthetic and LSFM exemplar data. It is concluded from these results that the APR has the correct properties to provide a replacement for pixel images and to address bottlenecks in processing LSFM data; removal of these bottlenecks is achieved by adapting to spatial, temporal, and intensity-scale variations in the data. Further, we propose that the simple structure of the general APR could provide benefits in areas such as the numerical solution of differential equations, adaptive regression methods, and surface representation for computer graphics.
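In symbols, with f̂ denoting a reconstruction built from the particles, the Reconstruction Condition described in the abstract reads

```latex
\frac{\left| f(y) - \hat{f}(y) \right|}{\sigma(y)} \;\le\; E
\quad \text{for all locations } y,
```

so that E is a relative error threshold measured against the Local Intensity Scale σ(y).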
197

Efficient Knot Optimization for Accurate B-spline-based Data Approximation

Yo-Sing Yeh (9757565) 14 December 2020
Many practical applications benefit from the reconstruction of a smooth multivariate function from discrete data, for purposes such as reducing file size or improving analytic and visualization performance. Among the different reconstruction methods, tensor-product B-splines have a number of advantageous properties over alternative data representations. However, the problem of constructing a best-fit B-spline approximation presents many roadblocks. Among the many free parameters of the B-spline model, the choice of the knot vectors, which defines the separation of the piecewise polynomial patches in a B-spline construction, has a major influence on the resulting reconstruction quality. Yet existing knot placement methods are still ineffective, computationally expensive, or impose limitations on the dataset format or the B-spline order. Moving beyond the 1D case (curves) to higher-dimensional datasets (surfaces, volumes, hypervolumes) introduces additional computational challenges as well. Further complications arise for undersampled data points, where the approximation problem can become ill-posed and existing regularization proves unsatisfactory.

This dissertation is concerned with improving the efficiency and accuracy of constructing a B-spline approximation of discrete data. Specifically, we present a novel B-spline knot placement approach for accurate reconstruction of discretely sampled data, first in 1D and then extended to higher dimensions for both structured and unstructured formats. Our knot placement methods take into account the features and complexity of the input data by estimating its high-order derivatives, so that the resulting approximation is highly accurate with a low number of control points. We demonstrate our method on various 1D to 3D structured and unstructured datasets, including synthetic, simulation, and captured data. We compare our method with state-of-the-art knot placement methods and show that our approach achieves higher accuracy while requiring fewer B-spline control points. We discuss a regression approach to selecting the number of knots for multivariate data given a target error threshold. For the reconstruction of irregularly sampled data, where the linear system often becomes ill-posed, we propose a locally varying regularization scheme to address cases in which straightforward regularization fails to produce a satisfactory reconstruction.
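The following is a minimal 1D sketch of derivative-guided knot placement in the spirit described above; the heuristic (knots distributed by the cumulative magnitude of an estimated derivative) and all names are illustrative, not the dissertation's algorithm, and the example assumes densely sampled data so the Schoenberg-Whitney conditions hold.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def derivative_guided_knots(x, y, n_knots, order=2):
    """Place interior knots where an estimated high-order derivative is
    large: knots follow the cumulative distribution of |d^order y / dx^order|,
    so complex regions receive more knots (illustrative heuristic)."""
    d = y.copy()
    for _ in range(order):
        d = np.gradient(d, x)            # finite-difference derivative estimate
    density = np.abs(d) + 1e-12          # strictly positive -> increasing CDF
    cdf = np.cumsum(density)
    cdf /= cdf[-1]
    # invert the CDF at uniformly spaced levels to get knot locations
    levels = np.linspace(0.0, 1.0, n_knots + 2)[1:-1]
    return np.interp(levels, cdf, x)

x = np.linspace(0.0, 1.0, 400)
y = np.sin(2 * np.pi * x) + 0.2 * np.tanh(50 * (x - 0.6))  # sharp local feature
t = derivative_guided_knots(x, y, n_knots=12)
spline = LSQUnivariateSpline(x, y, t, k=3)  # least-squares cubic B-spline fit
print("max abs error:", np.max(np.abs(spline(x) - y)))
```

With this density-based placement, the sharp transition near x = 0.6 attracts more knots than the smooth sinusoidal region, which is the qualitative behavior the abstract describes for its derivative-based methods.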
198

Numerical singular perturbation approaches based on spline approximation methods for solving problems in computational finance

Khabir, Mohmed Hassan Mohmed January 2011 (has links)
Options are a special type of derivative security, because their value is derived from the value of some underlying security. Most options can be grouped into one of two categories: European options, which can be exercised only on the expiration date, and American options, which can be exercised on or before the expiration date. American options are much harder to deal with than European ones, the reason being that their optimal exercise policy leads to free boundary problems. Ever since the seminal work of Black and Scholes [J. Pol. Econ. 81(3) (1973), 637-659], the differential equation approach to pricing options has attracted many researchers. Recently, numerical singular perturbation techniques have been used extensively for solving many differential equation models in science and engineering. In this thesis, we explore some of those methods, based on spline approximations, to solve option pricing problems. We show a systematic construction and analysis of these methods for some European option problems and then extend the approach to pricing American options as well as some exotic options. The proposed methods are analyzed for stability and convergence, and thorough numerical results are presented and compared with those in the literature.
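The differential equation approach referred to is rooted in the Black-Scholes PDE for the value V(S, t) of a European option on an underlying with price S, volatility σ, and risk-free rate r:

```latex
\frac{\partial V}{\partial t}
+ \frac{1}{2}\,\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
+ r S \frac{\partial V}{\partial S}
- r V = 0 .
```

For American options, the early-exercise feature turns this terminal-value problem into a free boundary problem along the optimal exercise curve, which is what makes them harder to price.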
199

Statistical methods with application to machine learning and artificial intelligence

Lu, Yibiao 11 May 2012 (has links)
This thesis consists of four chapters. Chapter 1 focuses on theoretical results for high-order Laplacian-based regularization in function estimation. We study iterated Laplacian regularization in the context of supervised learning in order to achieve both nice theoretical properties (like thin-plate splines) and good performance over complex regions (like the soap film smoother). In Chapter 2, we propose an innovative static path-planning algorithm called m-A* for environments full of obstacles. Theoretically, we show that m-A* reduces the number of vertices. In a simulation study, our approach outperforms A* armed with the standard L1 heuristic, as well as with stronger ones such as True-Distance Heuristics (TDH), yielding faster query times, adequate memory usage, and reasonable preprocessing time. Chapter 3 proposes the m-LPA* algorithm, which extends m-A* to dynamic path-planning and achieves better performance than the benchmark, Lifelong Planning A* (LPA*), in terms of robustness and worst-case computational complexity. Employing the same beamlet graphical structure as m-A*, m-LPA* encodes the information of the environment in a hierarchical, multiscale fashion, and therefore produces a more robust dynamic path-planning algorithm. Chapter 4 focuses on an approach to predicting spot electricity spikes via a combination of boosting and wavelet analysis. Extensive numerical experiments show that our approach improves prediction accuracy over support vector machines, thanks to the fact that gradient-boosted trees inherit the good properties of decision trees, such as robustness to irrelevant covariates, fast computation, and good interpretability.
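As a point of reference for Chapter 1, one standard template for Laplacian-based regularization penalizes roughness through powers of the Laplacian; the iterated version studied in the thesis is of this general type (the exact functional is the chapter's subject, so this is only the generic form):

```latex
\hat{f} = \arg\min_{f} \; \sum_{i=1}^{n} \bigl( y_i - f(x_i) \bigr)^2
+ \lambda \, \bigl\langle f, \, (-\Delta)^{m} f \bigr\rangle ,
```

where m controls the order of smoothness enforced and λ the strength of the penalty; m = 2 yields a thin-plate-type penalty.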
200

Statistical Modeling of High-Dimensional Nonlinear Systems: A Projection Pursuit Solution

Swinson, Michael D. 28 November 2005 (has links)
Despite recent advances in statistics, artificial neural network theory, and machine learning, nonlinear function estimation in high-dimensional space remains a nontrivial problem. As the response surface becomes more complicated and the dimensions of the input data increase, the dreaded "curse of dimensionality" takes hold, rendering the best of function approximation methods ineffective. This thesis takes a novel approach to solving the high-dimensional function estimation problem. In this work, we propose and develop two distinct parametric projection pursuit learning networks with wide-ranging applicability. Included in this work is a discussion of the choice of basis functions used as well as a description of the optimization schemes utilized to find the parameters that enable each network to best approximate a response surface. The essence of these new modeling methodologies is to approximate functions via the superposition of a series of piecewise one-dimensional models that are fit to specific directions, called projection directions. The key to the effectiveness of each model lies in its ability to find efficient projections for reducing the dimensionality of the input space to best fit an underlying response surface. Moreover, each method is capable of effectively selecting appropriate projections from the input data in the presence of relatively high levels of noise. This is accomplished by rigorously examining the theoretical conditions for approximating each solution space and taking full advantage of the principles of optimization to construct a pair of algorithms, each capable of effectively modeling high-dimensional nonlinear response surfaces to a higher degree of accuracy than previously possible.
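The superposition described above is the classical projection pursuit form: the response surface is approximated by a sum of one-dimensional ridge functions g_k fit along learned projection directions a_k,

```latex
f(\mathbf{x}) \;\approx\; \sum_{k=1}^{K} g_k\!\left( \mathbf{a}_k^{\top} \mathbf{x} \right),
```

so the multivariate estimation problem reduces to finding good directions a_k and fitting one-dimensional models g_k along them, which is how the dimensionality of the input space is tamed.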
