1 |
Measure Fields for Function Approximation. Marroquin, Jose L. 01 June 1993 (has links)
The computation of a piecewise smooth function that approximates a finite set of data points may be decomposed into two decoupled tasks: first, the computation of the locally smooth models, and hence the segmentation of the data into classes consisting of the sets of points best approximated by each model; and second, the computation of the normalized discriminant functions for each induced class. The approximating function may then be computed as the optimal estimator with respect to this measure field. We give an efficient procedure for effecting both computations, and for determining the optimal number of components.
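A minimal sketch of the two-task decomposition, assuming k local linear models on scalar data: alternate segmentation and refitting, then blend the models through normalized (softmax-style) discriminants. The names and the softmax choice are illustrative assumptions, not Marroquin's formulation.

```python
import numpy as np

def fit_measure_field(x, y, k=2, iters=10, beta=50.0):
    """Alternate between fitting k local linear models and reassigning points,
    then blend the models with normalized discriminant functions built from
    the residuals. Illustrative sketch only."""
    # Initialize each model on a contiguous chunk of the domain.
    chunks = np.array_split(np.arange(len(x)), k)
    coefs = np.array([np.polyfit(x[c], y[c], 1) for c in chunks])
    for _ in range(iters):
        resid = np.abs(np.array([np.polyval(c, x) for c in coefs]) - y)
        labels = resid.argmin(axis=0)            # segmentation step
        for j in range(k):
            m = labels == j
            if m.sum() >= 2:                     # need two points for a line
                coefs[j] = np.polyfit(x[m], y[m], 1)
    # Normalized discriminant functions (soft class memberships).
    resid = np.abs(np.array([np.polyval(c, x) for c in coefs]) - y)
    w = np.exp(-beta * resid)
    w /= w.sum(axis=0, keepdims=True)
    return (w * np.array([np.polyval(c, x) for c in coefs])).sum(axis=0)

x = np.linspace(0, 2, 100)
y = np.where(x < 1, x, 2 - x)                    # piecewise-linear "tent"
yhat = fit_measure_field(x, y, k=2)
print(float(np.abs(yhat - y).max()))
```

The softmax weights make the blend smooth across the segment boundary, which is what lets the final estimator remain a single well-defined function.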
|
2 |
Improving the Generalization Capability of the RBF Neural Networks via the Use of Linear Regression Techniques. Lin, Chen-Lia. 27 July 2001 (has links)
Neural networks can be viewed as a kind of instrument that is able to learn. To give the results of neural-network learning practical value, this thesis uses linear regression techniques to strengthen the generalization capability of RBF neural networks.
The thesis studies training methods for RBF neural networks and retains the framework of the OLS (orthogonal least squares) learning rules published by Chen and Billings in 1992. In addition, in view of the characteristics of RBF networks, the thesis proposes improved learning rules for the first and second training phases, and uses early stopping as the criterion for terminating training.
In summary, the thesis mainly applies techniques from statistical linear regression to strengthen the generalization capability of RBF networks, and uses different methods to run computer simulations under different noise conditions.
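The linear-regression step at the heart of RBF training can be sketched as follows: with the Gaussian centres fixed, the output weights are a plain least-squares problem. The centre grid, width, and test function below are illustrative assumptions; the OLS rules of Chen and Billings select centres far more carefully than this.

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian RBF design matrix: one basis function per centre."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)   # noisy training targets
centers = np.linspace(-3, 3, 12)                 # fixed centres on a grid
Phi = rbf_design(x, centers, width=0.8)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # the linear-regression step
yhat = Phi @ w
print(float(np.mean((yhat - np.sin(x)) ** 2)))   # error against the clean signal
```

Because the output layer is linear in the weights, all the machinery of statistical linear regression (subset selection, early stopping against a validation set) applies directly, which is the leverage the thesis exploits.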
|
3 |
Micro-net: the parallel path artificial neuron. Murray, Andrew Gerard William. January 2006 (has links)
A feed forward architecture is suggested that increases the complexity of conventional neural
network components through the implementation of a more complex scheme of interconnection.
This is done with a view to increasing the range of application of the feed forward paradigm.
The uniqueness of this new network design is illustrated by developing an extended taxonomy
of accepted published constructs specific and similar to the higher order, product kernel
approximations achievable using "parallel paths". Network topologies from this taxonomy are
then compared to each other and the architectures containing parallel paths. In attempting this
comparison, the context of the term "network topology" is reconsidered.
The output of a "channel" in these parallel paths is the product of a conventional connection, of the kind that facilitates interconnection between two layers in a multilayered perceptron, and the output of a network processing unit, a "control element", which can assume the identity of a number of pre-existing processing paradigms.
The inherent property of universal approximation is tested by existence proof, and the method is found to be inconclusive. In so doing, an argument is suggested to indicate that the parametric nature of the functions, as determined by conditions upon initialization, may lead only to conditional approximations. The property of universal approximation is neither confirmed nor denied: universal approximation cannot be conclusively determined by application of the Stone-Weierstrass theorem, as adopted from real analysis.
This novel implementation requires modifications to component concepts and the training
algorithm. The inspiration for these modifications is related back to previously published work
that also provides the basis of "proof of concept".
By achieving proof of concept the appropriateness of considering network topology without
assessing the impact of the method of training on this topology is considered and discussed in
some detail.
Results of limited testing are discussed with an emphasis on visualising component
contributions to the global network output.
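A toy reading of one "channel", assuming a sigmoid control element (one of the identities the abstract says a control element can assume). All names and the choice of sigmoid are illustrative assumptions, not the thesis's notation.

```python
import numpy as np

def parallel_path_unit(x, w_conn, w_ctrl, b_ctrl):
    """One channel: the product of a conventional weighted connection and
    the output of a control element. Illustrative sketch only."""
    conventional = w_conn * x                            # ordinary MLP-style connection
    control = 1.0 / (1.0 + np.exp(-(w_ctrl * x + b_ctrl)))  # sigmoid control element
    return conventional * control                        # product-kernel channel output

x = np.linspace(-2, 2, 5)
out = parallel_path_unit(x, w_conn=1.5, w_ctrl=3.0, b_ctrl=0.0)
print(out)
```

The product structure is what makes the channel a higher-order unit: the control element gates or reshapes the conventional connection rather than simply adding to it.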
|
4 |
Sequential Optimal Recovery: A Paradigm for Active Learning. Niyogi, Partha. 12 May 1995 (has links)
In most classical frameworks for learning from examples, it is assumed that examples are randomly drawn and presented to the learner. In this paper, we consider the possibility of a more active learner who is allowed to choose his/her own examples. Our investigations are carried out in a function approximation setting. In particular, using arguments from optimal recovery (Micchelli and Rivlin, 1976), we develop an adaptive sampling strategy (equivalent to adaptive approximation) for arbitrary approximation schemes. We provide a general formulation of the problem and show how it can be regarded as sequential optimal recovery. We demonstrate the application of this general formulation to two special cases of functions on the real line: 1) monotonically increasing functions and 2) functions with bounded derivative. An extensive investigation of the sample complexity of approximating these functions is conducted, yielding both theoretical and empirical results on test functions. Our theoretical results (stated in PAC-style), along with the simulations, demonstrate the superiority of our active scheme over both passive learning and classical optimal recovery. The analysis of active function approximation is conducted in a worst-case setting, in contrast with other Bayesian paradigms obtained from optimal design (Mackay, 1992).
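The adaptive sampling idea for monotone functions can be sketched as follows. For a monotone increasing function, the uncertainty between two samples is bounded by the rectangle width x rise, so the active learner always queries the midpoint of the interval with the largest rectangle. The halving rule and uncertainty measure here are simplifications, not Niyogi's exact derivation.

```python
import numpy as np

def active_sample_monotone(f, n, a=0.0, b=1.0):
    """Adaptive sampling for a monotone increasing f on [a, b]: query the
    midpoint of the interval whose uncertainty rectangle is largest."""
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(n - 2):
        # Uncertainty of each interval: width times rise of the function.
        gaps = [(xs[i + 1] - xs[i]) * (ys[i + 1] - ys[i])
                for i in range(len(xs) - 1)]
        i = int(np.argmax(gaps))
        xm = 0.5 * (xs[i] + xs[i + 1])
        xs.insert(i + 1, xm)
        ys.insert(i + 1, f(xm))
    return np.array(xs), np.array(ys)

# A monotone function with a sharp step: the active learner concentrates
# its queries where the uncertainty actually lives, near the step.
f = lambda x: 1.0 / (1.0 + np.exp(-50 * (x - 0.7)))
xs, ys = active_sample_monotone(f, 20)
print(np.sum((xs > 0.5) & (xs < 0.9)))
```

A passive learner spreads the same budget uniformly, which is why the active scheme wins on functions whose difficulty is localized.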
|
5 |
Design and Analysis of Table-based Arithmetic Units with Memory Reduction. Chen, Kun-Chih. 01 September 2009 (has links)
In many digital signal processing applications, we often need special function units that compute complicated arithmetic functions such as the reciprocal and the logarithm. Conventionally, the table-based design strategy implements these function units with lookup tables. However, the table size grows exponentially with the required precision. In this thesis, we propose two methods to reduce the table size: bottom-up non-uniform segmentation, and an approach that merges uniform piecewise interpolation with the Newton-Raphson method. Experimental results show significant table size reductions in most cases.
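The table-plus-Newton-Raphson idea can be illustrated with a reciprocal unit for inputs in [1, 2): a coarse table seeds one Newton-Raphson step, and since each step roughly doubles the number of correct bits, the table can stay small. The table size and indexing below are illustrative assumptions, not the thesis's exact design.

```python
def reciprocal(d, table_bits=6):
    """Approximate 1/d for d in [1, 2): a small lookup table (2**table_bits
    segment midpoints) seeds one Newton-Raphson refinement
    x' = x * (2 - d * x). Illustrative sketch of the merged approach."""
    entries = 2 ** table_bits
    idx = int((d - 1.0) * entries)                 # top fractional bits of d
    seed = 1.0 / (1.0 + (idx + 0.5) / entries)     # table entry: segment midpoint
    return seed * (2.0 - d * seed)                 # one Newton-Raphson step

# Worst-case error over a sweep of [1, 2): the refined result is far more
# accurate than the 64-entry table alone could be.
err = max(abs(reciprocal(1.0 + i / 1000.0) - 1.0 / (1.0 + i / 1000.0))
          for i in range(1000))
print(err)
```

In hardware the Newton step costs two multiplies and a subtract, which is the trade the thesis makes: a little arithmetic in exchange for an exponentially smaller table.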
|
6 |
A GENETIC ALGORITHM TECHNIQUE FOR APPROXIMATING FUNCTIONS OF MULTIPLE INDEPENDENT VARIABLES. GURUMURTHY, ARAVIND. January 2003 (has links)
No description available.
|
7 |
Meta-učící metody pro analýzu trendů her Go / Meta-learning methods for analyzing Go playing trends. Moudřík, Josef. January 2013 (has links)
This thesis extends the methodology for extracting evaluations of players from samples of Go game records originally presented in (Baudiš and Moudřík, 2012). Firstly, this work adds more features and lays out a methodology for their comparison. Secondly, we develop a robust machine-learning framework that captures dependencies between the evaluations and a general target variable using ensemble meta-learning with a genetic algorithm. We apply this framework to two domains: estimation of strength and of playing style. The results show that inference of the target variables is viable and reasonably precise in both cases. Finally, we present a web application that realizes the methodology, serving as a prototype teaching aid for Go players while gathering more data.
|
8 |
[en] CONSTRUCTIVE REGRESSION ON IMPLICIT MANIFOLDS / [pt] REGRESSÃO CONSTRUTIVA EM VARIEDADES IMPLÍCITAS. MARINA SEQUEIROS DIAS. 27 March 2013 (has links)
[pt] Métodos de aprendizagem de variedades assumem que um conjunto de dados de alta dimensão possui uma representação de baixa dimensionalidade. Tais métodos podem ser empregados para simplificar os dados e obter um melhor entendimento da estrutura da qual os dados fazem parte. Nesta tese, utiliza-se o método de aprendizagem de variedades chamado votação por tensores para obter informação da dimensionalidade intrínseca dos dados, bem como estimativas confiáveis da orientação dos vetores normais e tangentes em cada ponto da variedade. Em seguida, propõe-se um método construtivo para aproximar a variedade implícita e realizar uma regressão. O método é chamado de Regressão Construtiva em Variedades Implícitas (RCVI). Com os resultados obtidos no método de votação por tensores, busca-se uma aproximação da variedade através de uma partição do domínio, controlada pelo erro, baseada em malhas 2n-ádicas (n denota o número de características dos dados de entrada) e em árvores binárias com funções de transição suave. A construção consiste em dividir os dados em vários subconjuntos, de maneira a aproximar cada subconjunto de dados com funções implícitas simples. Neste trabalho empregamos funções polinomiais multivariadas. A forma global pode ser obtida combinando essas estruturas simples. A cada dado de entrada está associada uma saída e, a partir de uma boa aproximação da variedade, utilizando esses dados de entrada, busca-se obter uma boa estimativa da saída. Dessa forma, os critérios de parada da subdivisão do domínio incluem uma precisão, definida pelo usuário, na aproximação da variedade, bem como um critério envolvendo a dispersão das saídas em cada subdomínio. Para avaliar o desempenho do método proposto, realiza-se uma regressão com dados reais, compara-se com métodos de aprendizagem supervisionada e efetua-se ainda uma aplicação na área de dados de poços de petróleo. / [en] Manifold learning methods assume that a high-dimensional data set has a low-dimensional representation. These methods can be employed to simplify data and to obtain a better understanding of the structure to which the data belong. In this thesis, a tensor voting approach is employed as a manifold learning technique to obtain information about the intrinsic dimensionality of the data and reliable estimates of the orientation of the normal and tangent vectors at each data point on the manifold. Next, a constructive method is proposed to approximate an implicit manifold and perform a regression. The method is called Constructive Regression on Implicit Manifolds (RCVI, from its Portuguese name). With the results obtained from tensor voting, a manifold approximation is sought through an error-controlled domain partition based on 2n-adic meshes (n denotes the number of features of the input data) and on binary trees with smooth transition functions. The construction consists in partitioning the data set into several subsets so that each subset can be approximated by a simple implicit function. In this work, multivariate polynomial functions are used. The global shape can then be obtained by combining these simple structures. Each input datum is associated with an output value; from a good approximation of the manifold built on the input data, a good estimate of the output is sought. Accordingly, the stopping criteria for the domain subdivision include a user-defined precision on the manifold approximation, as well as a criterion involving the dispersion of the outputs within each subdomain. To evaluate the performance of the proposed method, a regression on real data is computed and compared with supervised learning methods, and an application to oil-well data is also presented.
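The error-controlled subdivision can be sketched in one dimension: fit a low-degree polynomial on a cell, and split the cell whenever the residual exceeds a user-defined tolerance. This is a deliberately simplified sketch with plain binary splits and no smooth blending, whereas the thesis works on implicit manifolds with 2n-tree partitions and smooth transition functions.

```python
import numpy as np

def fit_adaptive(x, y, tol, depth=0, max_depth=8):
    """Error-controlled subdivision: fit a quadratic on the cell; if the
    residual at the cell's samples exceeds tol, halve and recurse."""
    coef = np.polyfit(x, y, 2)
    if np.abs(np.polyval(coef, x) - y).max() <= tol or depth >= max_depth:
        return [(x[0], x[-1], coef)]               # accepted leaf cell
    mid = len(x) // 2
    return (fit_adaptive(x[:mid + 1], y[:mid + 1], tol, depth + 1, max_depth)
            + fit_adaptive(x[mid:], y[mid:], tol, depth + 1, max_depth))

def eval_adaptive(cells, t):
    """Evaluate the piecewise model at t (first matching cell wins)."""
    for lo, hi, coef in cells:
        if lo <= t <= hi:
            return np.polyval(coef, t)
    raise ValueError("t outside domain")

x = np.linspace(0, 1, 257)
y = np.sin(8 * np.pi * x ** 2)                     # chirp: needs finer cells later on
cells = fit_adaptive(x, y, tol=1e-3)
print(len(cells), eval_adaptive(cells, 0.5))
```

Because the split test is local, the partition refines only where the simple model fails, mirroring how the constructive scheme keeps the global representation cheap.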
|
9 |
Análise do efeito do jitter de fase na operação de malhas de sincronismo de fase. / Analysis of the phase-jitter effect in the operation of phase-locked loops. Takada, Elisa Yoshiko. 12 April 2006 (has links)
O jitter de fase é um fenômeno inerente aos sistemas elétricos. O crescente interesse pelo jitter deve-se à degradação que causa em sistemas de transmissão de alta velocidade. Seus efeitos fazem-se sentir ao afetar o processo de recuperação de dados, causando aumento na taxa de erros por bit. Neste trabalho, o jitter é modelado como uma perturbação periódica e seu efeito na operação de PLLs é analisado. Deduzimos uma fórmula para o cálculo da amplitude do jitter envolvendo somente os parâmetros do PLL e do jitter e identificamos as regiões do espaço de parâmetros com os comportamentos dinâmicos do PLL. / Phase jitter, or timing jitter, is a phenomenon inherent in electrical systems. The growing interest in jitter is due to the degradation it causes in high-speed transmission systems. It affects the data recovery process, causing an increase in the bit error rate. In this work, jitter is modelled as a periodic perturbation and its effect on the operation of a PLL is analysed. We derive a formula for the jitter amplitude involving only the PLL and jitter parameters, and we identify the regions of the parameter space according to the system's dynamical behaviour.
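The qualitative effect of periodic jitter on a loop can be illustrated with a toy first-order PLL simulation: jitter well inside the loop bandwidth is tracked, jitter well outside it passes through as phase error. The model, gain, and frequencies below are illustrative assumptions, not the thesis's derived formula.

```python
import numpy as np

def pll_phase_error(jitter_amp, jitter_freq, gain=30.0, dt=1e-4, t_end=0.5):
    """First-order PLL tracking an input phase with sinusoidal jitter:
    integrate d(theta)/dt = gain * sin(phi_in - theta) with forward Euler
    and return the steady-state phase-error amplitude."""
    n = int(t_end / dt)
    theta = 0.0
    errs = []
    for k in range(n):
        t = k * dt
        phi_in = jitter_amp * np.sin(2 * np.pi * jitter_freq * t)
        theta += dt * gain * np.sin(phi_in - theta)   # loop dynamics
        errs.append(phi_in - theta)
    return float(np.max(np.abs(errs[n // 2:])))       # ignore the transient

# Faster jitter is tracked worse: the residual error grows with frequency.
slow = pll_phase_error(0.2, 2.0)
fast = pll_phase_error(0.2, 40.0)
print(slow, fast)
```

This frequency dependence is exactly why an amplitude formula in terms of PLL and jitter parameters is useful: it separates the parameter regions where the loop still recovers data reliably from those where it does not.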
|
10 |
Konstrukce minimálních DNF reprezentací 2-intervalových funkcí / A construction of minimum DNF representations of 2-interval functions. Dubovský, Jakub. January 2012 (has links)
Title: A construction of minimum DNF representations of 2-interval functions Author: Jakub Dubovský Department: Department of Theoretical Computer Science and Mathematical Logic Supervisor: doc. RNDr. Ondřej Čepek, Ph.D. Abstract: The thesis is devoted to interval Boolean functions. It is focused on the construction of their representation by disjunctive normal forms with a minimum number of terms. A summary of known results in this field for 1-interval functions is presented. It is shown that the method used to prove those results cannot, in general, be used for functions with two or more intervals. The thesis then attempts to extend those results to 2-interval functions. An optimization algorithm for a special subclass of them is constructed, and an exact error estimate for an approximation algorithm is proven. Command-line software for experimenting with interval functions is part of the thesis. Keywords: Boolean function, interval function, representation construction, approximation
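For the 1-interval case, the flavour of DNF construction can be sketched with the threshold function f(x) = 1 iff x >= a on n-bit inputs: one term per zero bit of a, plus one term covering supersets of a's one-bits. This is the standard textbook construction, not the minimum-term algorithm for 2-interval functions developed in the thesis.

```python
def dnf_for_geq(a, n):
    """DNF for f(x) = 1 iff x >= a on n-bit inputs. Terms are (mask, value)
    pairs: x satisfies a term when x & mask == value."""
    terms = [(a, a)]                               # x contains all 1-bits of a
    for i in range(n):
        if not (a >> i) & 1:                       # for each zero bit of a:
            high = ~((1 << (i + 1)) - 1) & ((1 << n) - 1)   # bits above i
            mask = high | (1 << i)
            # x agrees with a above bit i, and bit i of x is 1 => x > a.
            terms.append((mask, (a & high) | (1 << i)))
    return terms

def eval_dnf(terms, x):
    return any((x & m) == v for m, v in terms)

n, a = 6, 0b010110                                 # a = 22, three zero bits
terms = dnf_for_geq(a, n)
ok = all(eval_dnf(terms, x) == (x >= a) for x in range(2 ** n))
print(len(terms), ok)
```

The brute-force check over all 2^n inputs is only feasible for small n, but it makes the correctness of the construction easy to experiment with, in the spirit of the command-line tool the thesis provides.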
|