
Modèles non linéaires et prévision / Non-linear models and forecasting

Madkour, Jaouad 19 April 2013
The interest of non-linear models is twofold: they better capture the non-linearities characterizing macroeconomic and financial series, and they deliver forecasts that are richer in information. In particular, the originality of the intervals (asymmetric and/or discontinuous) and forecast densities (asymmetric and/or multimodal) offered by this form of modelling suggests that an improvement over linear models is possible, and that sufficiently powerful evaluation tests are needed to verify such an improvement. These tests generally amount to checking distributional assumptions on the violation processes and the probability integral transforms associated with each of these forms of forecast. In this thesis, we adapt the GMM framework based on orthonormal polynomials designed by Bontemps and Meddahi (2005, 2012) for testing goodness of fit to certain probability distributions, an approach already adopted by Candelon et al. (2011) in the context of backtesting Value-at-Risk. Besides the simplicity and robustness of the method, the tests we develop have good size and power properties. Using our new approach to compare linear and non-linear models in an empirical analysis confirmed the idea that the former are preferred when the goal is simple point forecasts, while the latter are more appropriate for capturing the uncertainty around them.
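The evaluation logic described above — distributional checks on the probability integral transforms — can be sketched in a few lines. Under a correct density forecast the PITs are i.i.d. uniform, so their normal transforms should make orthonormal Hermite polynomials average to zero. This is a simplified moment check in the spirit of the Bontemps–Meddahi framework, not the thesis's actual test statistic:

```python
import numpy as np
from scipy import stats

def pit_moment_test(y, cdf):
    """PIT + orthonormal (Hermite) moment checks.

    Under a correctly specified forecast density, u = F(y) is i.i.d. U(0,1),
    so z = Phi^{-1}(u) is i.i.d. N(0,1) and the normalized Hermite
    polynomials of z have zero mean. Returns t-statistics ~ N(0,1)
    under the null for orders 1..3.
    """
    u = cdf(y)                       # probability integral transforms
    z = stats.norm.ppf(u)            # should be i.i.d. standard normal
    n = len(z)
    h = {
        1: z,
        2: (z**2 - 1) / np.sqrt(2),      # normalized He_2
        3: (z**3 - 3 * z) / np.sqrt(6),  # normalized He_3
    }
    return {k: np.sqrt(n) * hk.mean() for k, hk in h.items()}

rng = np.random.default_rng(0)
y = rng.standard_normal(5000)
tstats = pit_moment_test(y, stats.norm.cdf)  # correct density: small statistics
```

A misspecified forecast density (for instance one with the wrong variance) drives the second-order statistic far from zero, which is exactly what a size/power study of such tests measures.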

Interpolating refinable function vectors and matrix extension with symmetry

Zhuang, Xiaosheng 11 1900
In Chapters 1 and 2, we introduce the definition of interpolating refinable function vectors in dimension one and in high dimensions, characterize such interpolating refinable function vectors in terms of their masks, and derive their sum-rule structure explicitly. We study biorthogonal refinable function vectors obtained from interpolating refinable function vectors. We also study the symmetry property of an interpolating refinable function vector and characterize a symmetric interpolating refinable function vector in any dimension, with respect to a certain symmetry group, in terms of its mask. Examples of interpolating refinable function vectors with desirable properties, such as orthogonality, symmetry, and compact support, are constructed according to our characterization results. In Chapters 3 and 4, we turn to general matrix extension problems with symmetry for the construction of orthogonal and biorthogonal multiwavelets. We give characterization theorems and develop step-by-step algorithms for matrix extension with symmetry. To illustrate our results, we apply our algorithms to several examples of interpolating refinable function vectors with orthogonality or biorthogonality obtained in Chapter 1. In Chapter 5, we discuss some possible future research topics on matrix extension with symmetry in high dimensions and on frequency-based non-stationary tight wavelet frames with directionality. We demonstrate that one can construct a frequency-based tight wavelet frame with symmetry and show that directional analysis can be easily achieved within the framework of tight wavelet frames. Potential applications and research directions of such tight wavelet frames with directionality are discussed. / Applied Mathematics
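The thesis works with refinable function *vectors* and matrix masks; as a toy scalar stand-in (chosen for exposition, not the vector construction studied above), the hat function is refinable with mask {1/2, 1, 1/2}, satisfies the order-1 sum rules, and is interpolating — its subdivision values hit 1 at the origin and 0 at the other integers:

```python
import numpy as np

# Scalar refinement mask of the hat (linear B-spline) function:
# phi(x) = 0.5*phi(2x+1) + phi(2x) + 0.5*phi(2x-1)
mask = np.array([0.5, 1.0, 0.5])

# Sum rules (order 1): even- and odd-indexed coefficients each sum to 1
assert mask[::2].sum() == 1.0 and mask[1::2].sum() == 1.0

def subdivide(mask, levels):
    """Subdivision scheme: upsample by 2 and convolve with the mask.

    Starting from a delta, the values converge to samples of the
    refinable function phi on the grid 2**-levels * Z.
    """
    c = np.array([1.0])
    for _ in range(levels):
        up = np.zeros(2 * len(c) - 1)
        up[::2] = c
        c = np.convolve(up, mask)
    return c

vals = subdivide(mask, levels=3)  # phi sampled at spacing 1/8 on [-7/8, 7/8]
center = len(vals) // 2           # index of x = 0; vals[center] == 1
```

For the linear B-spline this subdivision is exact at every level, so the interpolation property phi(k) = delta(k) can be read off directly from the samples.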

Decomposition Of Elastic Constant Tensor Into Orthogonal Parts

Dinckal, Cigdem 01 August 2010
All procedures in the literature for decomposing the symmetric second-rank (stress) tensor and the symmetric fourth-rank (elastic constant) tensor, which have many engineering and scientific applications for anisotropic materials, are elaborated and compared. The decomposition methods for symmetric second-rank tensors are the orthonormal tensor basis method, the complex variable representation, and the spectral method. For the symmetric fourth-rank (elastic constant) tensor, there are four main decomposition methods: orthonormal tensor basis, irreducible, harmonic, and spectral decomposition. These are applied to anisotropic materials possessing various symmetry classes: isotropic, cubic, transversely isotropic, tetragonal, trigonal, and orthorhombic. For isotropic materials, an expression for the elastic constant tensor different from the traditionally known form is given, and some misprints found in the literature are corrected. For comparison purposes, numerical examples of each decomposition process are presented for materials possessing different symmetry classes, and some applications of these decomposition methods are given. Besides, norm and norm-ratio concepts are introduced to measure and compare the degree of anisotropy of various materials with the same or different symmetries; for these materials, norms and norm ratios are calculated. It is suggested that the norm of a tensor may be used as a criterion for comparing the overall effect of the properties of anisotropic materials, and that the norm ratios may be used as a criterion for the degree of anisotropy of those properties. Finally, all methods are compared in order to determine the similarities and differences between them. As a result of this comparison, it is proposed that the spectral method is a non-linear decomposition method which yields non-linear orthogonal decomposed parts. For symmetric second-rank and fourth-rank tensors, this is a significant innovation in decomposition procedures in the literature.
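The spectral method mentioned above can be sketched for the simplest (isotropic) symmetry class. Writing the elastic constant tensor as a 6×6 matrix in Kelvin (normalized Voigt) notation, an eigendecomposition splits it into mutually orthogonal parts whose Frobenius norms add in quadrature — the basis of the norm-ratio anisotropy measure. The Lamé constants below are illustrative, not taken from the thesis:

```python
import numpy as np

def isotropic_kelvin(lam, mu):
    """Isotropic elastic tensor as a 6x6 matrix in Kelvin (normalized
    Voigt) notation, whose eigenvalues match those of the fourth-rank
    tensor: 3*lam + 2*mu (bulk mode) and 2*mu (five shear modes)."""
    K = np.zeros((6, 6))
    K[:3, :3] = lam
    K[:3, :3] += 2 * mu * np.eye(3)
    K[3:, 3:] = 2 * mu * np.eye(3)  # Kelvin scaling doubles the shear terms
    return K

def spectral_parts(K):
    """Spectral decomposition K = sum_i lam_i * P_i with mutually
    orthogonal eigenprojectors P_i, grouped by (rounded) eigenvalue."""
    w, V = np.linalg.eigh(K)
    parts = {}
    for lam_i, v in zip(w, V.T):
        key = round(float(lam_i), 6)
        parts[key] = parts.get(key, np.zeros_like(K)) + lam_i * np.outer(v, v)
    return parts

K = isotropic_kelvin(lam=1.0, mu=0.5)   # illustrative Lame constants
parts = spectral_parts(K)
# norm ratios: share of each orthogonal part in the total tensor norm
ratios = {ev: np.linalg.norm(P) / np.linalg.norm(K) for ev, P in parts.items()}
```

Because the parts are orthogonal, the squared norm ratios sum to one, so each ratio directly quantifies how much of the tensor lies in the corresponding eigenspace.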

Data-driven transform optimization for next generation multimedia applications

Sezer, Osman Gokhan 25 August 2011
The objective of this thesis is to formulate a generic dictionary learning method with the guiding principle that efficient representations lead to efficient estimations. The fundamental idea behind using transforms or dictionaries for signal representation is to exploit the regularity within data samples so that the redundancy of the representation is minimized subject to a level of fidelity. This observation translates to rate-distortion cost in the compression literature, where a transform with the lowest rate-distortion cost provides a more efficient representation than the others. In our work, rather than being used as an analysis tool, the rate-distortion cost is utilized to improve the efficiency of transforms. For this, an iterative optimization method is proposed which seeks an orthonormal transform that reduces the expected rate-distortion cost of an ensemble of data. Due to the generic nature of the new optimization method, one can design a set of orthonormal transforms either in the original signal domain or on top of a transform-domain representation. To test this claim, several image codecs are designed which use block-, lapped- and wavelet-transform structures; significant increases in compression performance are observed compared to the original methods. An extension of the proposed optimization method to video coding gave state-of-the-art compression results with separable transforms. Also, using robust statistics, an explanation of the superiority of the new design over other learning-based methods, such as the Karhunen-Loeve transform, is provided. Finally, the new optimization method is shown to be equivalent to the minimization of the "oracle" risk of diagonal estimators in signal estimation. With the design of new diagonal estimators and risk-minimization-based adaptation, a new image denoising algorithm is proposed. While these diagonal estimators denoise local image patches, by formulating the optimal fusion of overlapping local denoised estimates, the new denoising algorithm is scaled to operate on large images. In our experiments, state-of-the-art results for transform-domain denoising are achieved.
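The rate-distortion guiding principle can be sketched with a toy experiment: approximate the rate by the empirical entropy of uniformly quantized coefficients and the distortion by the reconstruction MSE, then compare an orthonormal DCT against the identity transform on correlated blocks. This is a simplified illustration of the cost being optimized, not the thesis's actual codec design:

```python
import numpy as np

def rd_cost(blocks, T, step=0.5, lmbda=1.0):
    """Empirical rate-distortion cost J = R + lambda*D of an orthonormal
    transform T on signal blocks (rows of `blocks`). Rate is approximated
    by the entropy of uniformly quantized coefficients (bits/sample);
    distortion is the mean squared reconstruction error."""
    coeffs = blocks @ T.T                 # each row: T @ block
    q = np.round(coeffs / step)           # uniform quantization
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    rate = -(p * np.log2(p)).sum()
    recon = (q * step) @ T                # inverse of orthonormal T is T^T
    dist = np.mean((blocks - recon) ** 2)
    return rate + lmbda * dist

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    T = np.sqrt(2 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    T[0] /= np.sqrt(2)
    return T

rng = np.random.default_rng(1)
# AR(1)-like correlated blocks: an energy-compacting transform should win
n, rho = 8, 0.95
cov = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
blocks = rng.multivariate_normal(np.zeros(n), cov, size=2000)
j_dct = rd_cost(blocks, dct_matrix(n))   # lower cost: energy compaction
j_id = rd_cost(blocks, np.eye(n))        # higher cost: no decorrelation
```

An iterative design in this spirit would perturb the transform (staying on the orthonormal manifold) and keep updates that reduce the expected cost over the training ensemble.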

Identificação de sistemas não-lineares usando modelos de Volterra baseados em funções ortonormais de Kautz e generalizadas / Identification of nonlinear systems using Volterra models based on Kautz functions and generalized orthonormal functions

Rosa, Alex da 03 December 2009
Advisors: Wagner Caradori do Amaral, Ricardo José Gabrielli Barreto Campello / Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / This work is concerned with the modeling of nonlinear systems using Volterra models with orthonormal basis functions (OBF). Volterra models represent a generalization of the impulse response model for the description of nonlinear systems and, in general, require a large number of terms to represent the Volterra kernels. Such a drawback can be overcome by representing the kernels using a set of orthonormal functions. The resulting model, the so-called OBF-Volterra model, can be truncated into fewer terms if the basis functions are properly designed. The underlying problem is how to select the free-design poles that fully parameterize these functions, particularly the two-parameter Kautz functions and the Generalized Orthonormal Basis Functions (GOBF). One approach adopted to solve this problem involves minimizing an upper bound for the error resulting from the truncation of the kernel expansion. Each multidimensional kernel is decomposed into a set of independent Kautz bases, in which every basis is parameterized by an individual pair of complex conjugate poles intended to represent the dominant dynamics of the kernel along a particular dimension. An analytical solution for one of the Kautz parameters, valid for Volterra models of any order, is derived. Another approach involves the numerical optimization of the orthonormal bases of functions used for the approximation of dynamic systems. This strategy is based on the computation of analytical expressions for the gradients of the output of the orthonormal filters with respect to the basis poles. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis, and can in turn be used as part of an optimization procedure to locate the minimum of a cost function that takes the estimation error of the system output into consideration. The expressions relative to the Kautz basis and to the GOBF are obtained. The proposed methodology relies solely on input-output data measured from the system to be modeled, i.e., no previous information about the Volterra kernels is required. Simulation examples illustrate the application of this approach to the modeling of linear and nonlinear systems, including a real magnetic levitation system with oscillatory behavior. Finally, the representation of uncertain systems based on models having structured uncertainty is studied: the uncertainty of a set of Volterra kernels is mapped onto intervals defining the coefficients of the orthonormal expansion, and additional conditions are proposed to guarantee that all the process kernels are represented by the model, which allows the uncertainty bounds to be estimated. / Doctorate in Electrical Engineering (Automation)
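The orthonormal bases discussed above can be illustrated in their simplest member: the discrete Laguerre basis (a single real pole), which the two-parameter Kautz functions (complex pole pairs) and the GOBF generalize. Each function is obtained from the previous one by cascading an all-pass section; the pole value below is arbitrary:

```python
import numpy as np
from scipy.signal import lfilter

def laguerre_basis(a, n_funcs, n_samples):
    """Impulse responses of the discrete Laguerre orthonormal basis (pole a).

    L_0(z) = sqrt(1-a^2) / (1 - a z^-1); each subsequent function cascades
    the all-pass factor (z^-1 - a)/(1 - a z^-1). Kautz functions follow the
    same cascade pattern with a complex-conjugate pole pair.
    """
    imp = np.zeros(n_samples)
    imp[0] = 1.0
    h = lfilter([np.sqrt(1 - a**2)], [1.0, -a], imp)   # L_0
    basis = [h]
    for _ in range(n_funcs - 1):
        h = lfilter([-a, 1.0], [1.0, -a], h)           # cascade all-pass section
        basis.append(h)
    return np.array(basis)

B = laguerre_basis(a=0.6, n_funcs=4, n_samples=400)
gram = B @ B.T     # ~ identity: the truncated impulse responses are orthonormal
```

A second-order Volterra kernel would then be expanded as h(n1, n2) ≈ Σ_{i,j} c_{ij} B[i, n1] B[j, n2], with the pole(s) chosen — analytically or by gradient search, as in the two approaches above — to match the kernel's dominant dynamics.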

Modelagem e controle preditivo utilizando multimodelos / Modeling and predictive control using multi-models

Machado, Jeremias Barbosa 22 February 2007
Advisors: Wagner Caradori do Amaral, Ricardo José Gabrielli Barreto Campello / Master's thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / The use of advanced control strategies has increased in industry in recent years due to the need for higher product quality. An approach that seems attractive for the control and modeling of nonlinear processes is the use of multiple linear models. In this context, this work presents an alternative approach for modeling and controlling nonlinear processes through nonlinear model-based predictive control using multi-models. The main characteristic of Model Based Predictive Controllers (MBPC) is the use of a model to determine a set of output predictions; the control law is derived from these predictions by minimizing a specified cost function, so the controller's performance is directly related to the quality of the model predictor. Therefore, in this work the process is modeled through Takagi-Sugeno (TS) fuzzy models with orthonormal basis functions (OBF) in the rule consequents. OBF models present several conceptual and structural characteristics of interest in the elaboration of predictor models: the absence of output recursion and of feedback of prediction errors, often leading to superior performance over long-range horizon predictions and natural decoupling between multiple outputs; no need for previous knowledge about the relevant past terms of the system signals; assuredly stable representations of stable systems; tolerance to unmodeled dynamics; and the ability to deal with time delays. The parameters of a TS fuzzy model to be determined are the rule antecedents, with their membership functions, and the functions in the rule consequents; in this work both are obtained automatically, the antecedents through fuzzy clustering of the input and output measurements using the Gustafson-Kessel algorithm. In order to determine the number of clusters that compose the model — and hence the number of rules and local models — clustering validity criteria such as Fuzzy Silhouette, Fuzzy Hypervolume, Average Partition Density, and Average Within-Cluster Distance are used, and a combination of the results of these criteria is proposed. Control is performed so that each local model in the TS-OBF fuzzy model has its own controller; the local control actions are combined according to the activation of the rules of the respective local models, and the resulting global control action is applied to the process. The proposed approach presents structural advantages in the modeling and control of nonlinear processes when compared to other modeling strategies (such as polynomial NARMAX models) and control strategies, since it consists of a simple structure with linear (or affine) local models formed by OBFs. The performance of the proposed strategies is illustrated at the end of this work with applications to the modeling and control of nonlinear processes. / Master's in Electrical Engineering (Automation)
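The TS inference underlying the scheme above can be sketched for a single input: each rule has a Gaussian membership function and an affine local model, and the global output blends the local outputs by normalized rule activation — exactly the mechanism used to combine local control actions. The two rules and all parameter values below are made up for illustration:

```python
import numpy as np

def ts_predict(x, centers, sigmas, local_coefs):
    """Takagi-Sugeno fuzzy inference: blend affine local models by the
    normalized activation of each rule's Gaussian membership function."""
    w = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)  # rule activations
    w = w / w.sum()                                   # normalize
    local_out = np.array([a * x + b for a, b in local_coefs])
    return float(w @ local_out)                       # weighted blend

# Two rules approximating y = |x|: slope -1 left of zero, +1 right of zero
centers = np.array([-2.0, 2.0])
sigmas = np.array([1.5, 1.5])
local_coefs = [(-1.0, 0.0), (1.0, 0.0)]
y = ts_predict(3.0, centers, sigmas, local_coefs)     # close to |3| = 3
```

In the thesis's setting the antecedent parameters (centers, spreads) would come from Gustafson-Kessel clustering and the consequents would be OBF models rather than the plain affine maps used here.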

Sobre separação cega de fontes: proposições e análise de estratégias para processamento multi-usuário / On blind source separation: propositions and analysis of strategies for multiuser processing

Cavalcante, Charles Casimiro 30 April 2004
Advisors: João Marcos Travassos Romano, Francisco Rodrigo Porto Cavalcanti / Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / This thesis is devoted to the study of blind source separation techniques applied to multiuser processing in digital communications. Using probability density function (pdf) estimation strategies, two multiuser processing methods are proposed. They aim to recover the transmitted signals by means of the Kullback-Leibler similarity measure between the pdf of the signals at the output of the separation device and a parametric model that contains the characteristics of the transmitted signals. Besides the similarity measure, different methods are employed to guarantee the decorrelation of the source estimates, ensuring that the recovered signals originate from different sources. The convergence analysis of the methods, as well as their equivalences with classical techniques, is presented, resulting in important relationships between blind and supervised criteria, such as the proposed criterion and the maximum a posteriori one. These new methods combine good information-recovery ability with low computational complexity. The proposal of pdf-estimation-based methods allowed an investigation of the impact of higher-order statistics on adaptive algorithms for blind source separation. Using a pdf orthonormal series expansion, we are able to evaluate, through cumulants, the dynamics of a source separation process. To deal with digital communication signals, a new orthonormal series expansion is proposed, developed around a probability density function given by a Gaussian mixture. This new expansion is used to highlight the differences in real-time performance when more higher-order statistics are retained. Computational simulations are carried out to assess the performance of the proposals against well-known techniques from the literature, in various situations where a signal recovery strategy is required. / Doctorate in Electrical Engineering (Telecommunications and Telematics)
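The pdf-matching criterion described above can be sketched for a BPSK-like signal: model the target pdf as a Gaussian mixture centered on the constellation points, and score an output sequence by a Monte Carlo estimate of the Kullback-Leibler-type cost (its cross-entropy term, which is what adaptation would minimize). All signal parameters below are illustrative:

```python
import numpy as np

def gm_pdf(y, centers, sigma):
    """Equal-weight Gaussian mixture pdf: a parametric model of the
    transmitted constellation (e.g. +/-1 for BPSK)."""
    d = y[:, None] - np.asarray(centers, dtype=float)[None, :]
    comp = np.exp(-0.5 * (d / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return comp.mean(axis=1)

def kl_score(samples, centers, sigma):
    """Monte Carlo estimate of -E[log q(y)], the part of
    KL(output pdf || mixture model) that depends on the separator output."""
    return -np.mean(np.log(gm_pdf(samples, centers, sigma) + 1e-300))

rng = np.random.default_rng(2)
sigma = 0.2
# well-separated output: BPSK symbols plus small residual noise
good = rng.choice([-1.0, 1.0], 4000) + sigma * rng.standard_normal(4000)
# badly separated output: attenuated signal plus residual interference
bad = 0.5 * good + 0.4 * rng.standard_normal(4000)
score_good = kl_score(good, [-1, 1], sigma)
score_bad = kl_score(bad, [-1, 1], sigma)   # much larger: pdf mismatch
```

An adaptive separator would differentiate this score with respect to its coefficients and descend it, driving the output pdf toward the constellation model.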

Revisiting the CAPM and the Fama-French Multi-Factor Models: Modeling Volatility Dynamics in Financial Markets

Michaelides, Michael 25 April 2017
The primary objective of this dissertation is to revisit the CAPM and the Fama-French multi-factor models with a view to evaluating the validity of the probabilistic assumptions imposed (directly or indirectly) on the particular data used. By thoroughly testing the assumptions underlying these models, several departures are found and the original linear regression models are respecified. The respecification results in a family of heterogeneous Student's t models which are shown to account for all the statistical regularities in the data. This family of models provides an appropriate basis for revisiting the empirical adequacy of the CAPM and the Fama-French multi-factor models, as well as other models, such as alternative asset pricing models and risk evaluation models. Along the lines of providing a sound basis for reliable inference, the respecified models can serve as a coherent basis for selecting the relevant factors from the set of possible ones. The latter contributes to the enhancement of the substantive adequacy of the CAPM and the multi-factor models. / Ph. D.
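The respecification idea can be sketched on simulated data: fit a market-model regression by OLS, then compare Normal versus Student's t error distributions on the residuals by maximum likelihood. The variable names and parameter values are hypothetical, and this covers only the distributional piece of the misspecification testing discussed above (not the heterogeneity or dependence checks):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 1000
x = rng.standard_normal(n)              # simulated market excess return
y = 0.1 + 1.2 * x + stats.t.rvs(df=4, scale=0.5, size=n, random_state=rng)

# OLS fit of the CAPM-style regression y = b0 + b1*x + e
X = np.c_[np.ones(n), x]
b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b_hat

# ML fit of the residual distribution: Student's t vs Normal
df_hat, loc_hat, scale_hat = stats.t.fit(resid)
ll_t = stats.t.logpdf(resid, df_hat, loc_hat, scale_hat).sum()
ll_norm = stats.norm.logpdf(resid, resid.mean(), resid.std()).sum()
# heavy tails in the data favour the Student's t specification: ll_t > ll_norm
```

In a full respecification exercise, the t model would be estimated jointly with the regression (and with time-varying second moments), rather than in this convenient two-step residual form.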

Alocação adaptativa de banda e controle de fluxos de tráfego de redes utilizando sistemas Fuzzy e modelagem multifractal / Adaptive bandwidth allocation and traffic flow control using fuzzy systems and multifractal modeling

Cardoso, Alisson Assis 26 June 2014
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / In this work we propose a fuzzy model, called Fuzzy LMS com Autocorrelação Multifractal, whose weights are updated according to information from multifractal traffic modeling. These weights are calculated by incorporating an analytical expression for the autocorrelation function of a multifractal model into the training algorithm of the fuzzy model, which is based on the Wiener-Hopf filter. We evaluate the prediction performance of the proposed network traffic prediction algorithm with respect to other predictors. Further, we propose a bandwidth allocation scheme for network traffic based on the fuzzy prediction algorithm; comparisons with other bandwidth allocation schemes in terms of byte loss rate, link utilization, buffer occupancy, and average queue size verify the efficiency of the proposed scheme. We also propose another adaptive fuzzy algorithm, called Fuzzy-LMS-OBF com alfa adaptivo, for the control of traffic flows described by the βMWM multifractal model. The proposed algorithm uses Orthonormal Basis Functions (OBF) and its training is based on the LMS algorithm. We also present an expression for the optimal traffic source rate derived from the Fuzzy LMS model. We then evaluate the performance of the Fuzzy-LMS-OBF com alfa adaptivo algorithm with respect to other methods. Through simulations, we show that the proposed control and rate allocation schemes benefit from the superior performance of the proposed adaptive fuzzy algorithms; comparisons with other methods in terms of mean and variance of the buffer queue size, link utilization rate, loss rate, and throughput are presented.
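The core of the predictor design above — plugging an analytical autocorrelation function into Wiener-Hopf-style training — can be sketched directly: solve the normal equations R w = r for the one-step predictor weights. The slowly decaying acf below is a generic long-memory-like stand-in for the multifractal model's expression, which is not reproduced here:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def wiener_predictor(acf, order):
    """One-step-ahead linear predictor weights from an autocorrelation
    function via the Wiener-Hopf (normal) equations R w = r.

    In the scheme described above, `acf` would come from the analytical
    autocorrelation of the multifractal traffic model, not from data.
    """
    r0 = acf[:order]           # first column/row of the Toeplitz matrix R
    rhs = acf[1:order + 1]     # correlations with the value being predicted
    return solve_toeplitz((r0, r0), rhs)

# illustrative long-memory-like acf: slow power-law decay, acf[0] = 1
lags = np.arange(64)
acf = (1.0 + lags) ** -0.3
w = wiener_predictor(acf, order=8)
pred_gain = w @ acf[1:9]       # fraction of variance the predictor explains
```

A fuzzy LMS predictor as proposed above would use such weights as its training target (or initialization), letting the adaptive part track deviations of real traffic from the model.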
