  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Escolha otimizada de parâmetros em métodos de pontos interiores para programação linear / Optimized choice of parameters in interior point methods for linear programming

Santos, Luiz Rafael dos, 1981- 25 August 2018 (has links)
Orientadores: Aurelio Ribeiro Leite de Oliveira, Fernando da Rocha Villas-Bôas, Clóvis Perin Filho / Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Previous issue date: 2014 / Abstract: In this work we propose a predictor-corrector interior point method for linear programming in a primal-dual context, where the next iterate is chosen through the minimization of a polynomial merit function of three variables: the first is the step length, the second defines the central path, and the last models the weight that a corrector direction should have. The merit function is minimized subject to constraints defined by a neighborhood of the central path that allows wide steps. In this framework, we combine different directions, such as the predictor, corrector, and centering directions, with the aim of producing a better direction. The proposed method generalizes most predictor-corrector interior point methods, depending on the choice of the three variables described above. A convergence analysis of the method is carried out, considering an initial point that performs well in practice, and yields Q-linear convergence of the iterates with polynomial complexity. Numerical experiments on the Netlib test set show that this approach is competitive when compared to well-established interior point implementations such as PCx. / Doutorado / Matemática Aplicada / Doutor em Matemática Aplicada
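The primal-dual framework the abstract describes builds on the standard interior point iteration: take a Newton step on the perturbed KKT conditions, damped so the iterates stay strictly positive. As a generic illustration only (a textbook scheme with a fixed centering parameter, not the thesis's merit-function method; all names here are ours), a minimal sketch on a tiny LP:

```python
import numpy as np

def _step_length(v, dv, damp=0.9):
    """Largest damped step alpha so that v + alpha*dv stays strictly positive."""
    neg = dv < 0
    if not np.any(neg):
        return 1.0
    return min(1.0, damp * np.min(-v[neg] / dv[neg]))

def solve_lp_ipm(A, b, c, iters=100, sigma=0.1, tol=1e-9):
    """Minimal infeasible primal-dual interior point method for
    min c.x  s.t.  A x = b, x >= 0, with fixed centering parameter sigma."""
    m, n = A.shape
    x, y, z = np.ones(n), np.zeros(m), np.ones(n)
    for _ in range(iters):
        rp = b - A @ x                # primal residual
        rd = c - A.T @ y - z          # dual residual
        mu = (x @ z) / n              # duality measure
        if max(np.linalg.norm(rp), np.linalg.norm(rd), mu) < tol:
            break
        # Newton system for the perturbed KKT conditions, solved directly
        K = np.zeros((2 * n + m, 2 * n + m))
        K[:m, :n] = A                          # A dx = rp
        K[m:m + n, n:n + m] = A.T              # A' dy + dz = rd
        K[m:m + n, n + m:] = np.eye(n)
        K[m + n:, :n] = np.diag(z)             # Z dx + X dz = sigma*mu*e - XZe
        K[m + n:, n + m:] = np.diag(x)
        rhs = np.concatenate([rp, rd, sigma * mu - x * z])
        d = np.linalg.solve(K, rhs)
        dx, dy, dz = d[:n], d[n:n + m], d[n + m:]
        ap, ad = _step_length(x, dx), _step_length(z, dz)
        x, y, z = x + ap * dx, y + ad * dy, z + ad * dz
    return x
```

For example, on min -x1 - 2*x2 subject to x1 + x2 <= 1 (with a slack variable appended), the iterates approach the vertex x = (0, 1) with objective -2.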
12

Contributions to Convergence Analysis of Noisy Optimization Algorithms / Contributions à l'Analyse de Convergence d'Algorithmes d'Optimisation Bruitée

Astete morales, Sandra 05 October 2016 (has links)
This thesis presents contributions to the analysis of algorithms for the optimization of noisy functions. Convergence rates, in terms of Simple Regret and Cumulative Regret, are derived for line-search algorithms as well as for random-search algorithms. We prove that a Hessian-based algorithm can reach the same results as certain optimal algorithms in the literature when its parameters are tuned correctly. We also analyze the convergence order of Evolution Strategies on noisy functions, deducing log-log convergence, and we prove a lower bound on the convergence rate of Evolution Strategies. We extend previous work on re-evaluation mechanisms by applying them to a discrete setting. Finally, we analyze the performance measure itself and prove that the use of an erroneous performance measure can lead to misleading results when different optimization methods are evaluated.
13

Real-time Design Constraints in Implementing Active Vibration Control Algorithms.

Hossain, M. Alamgir, Tokhi, M.O. January 2006 (has links)
Although computer architectures incorporate fast processing hardware, high-performance real-time implementation of a complex control algorithm requires efficient design and software coding so as to exploit special features of the hardware and avoid its architectural shortcomings. This paper presents an investigation into analysis and design mechanisms that reduce execution time when implementing real-time control algorithms. The proposed mechanisms are exemplified by means of one algorithm, demonstrating their applicability to real-time applications. An active vibration control (AVC) algorithm for a flexible beam system, simulated using the finite difference (FD) method, is considered to demonstrate the effectiveness of the proposed methods. A comparative performance evaluation of the proposed design mechanisms is presented and discussed through a set of experiments.
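The paper's theme, cutting execution time through how an algorithm is coded, can be illustrated generically (this is not the authors' beam model) by the same 1-D finite-difference update written as a plain scalar loop and then restructured into whole-array operations that map onto vector hardware:

```python
import numpy as np

def fd_step_loop(u, r):
    """One explicit 1-D finite-difference update, coded as a plain loop."""
    out = u.copy()
    for i in range(1, len(u) - 1):
        out[i] = u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
    return out

def fd_step_vec(u, r):
    """The same update restructured as whole-array (vector) operations,
    which exploits SIMD/vector hardware far better than the scalar loop."""
    out = u.copy()
    out[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return out
```

Both versions compute identical results; only the coding style, and hence the execution time on a given architecture, differs.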
14

Learning to Rank Algorithms and Their Application in Machine Translation

Xia, Tian January 2015 (has links)
No description available.
15

Detection and analysis of megasatellites in the human genome using in silico methods

Benediktsson, Elís Ingi January 2005 (has links)
Megasatellites are polymorphic tandem repetitive sequences with repeat-units longer than or equal to 1000 base pairs. The novel algorithm Megasatfinder predicts megasatellites in the human genome. A structured method of analysing the algorithm is developed and conducted. The analysis method consists of six test scenarios. Scripts are created, which execute the algorithm using various parameter settings. Three nucleotide sequences are applied; a real sequence extracted from the human genome and two random sequences, generated using different base probabilities. Usability and accuracy are investigated, providing the user with confidence in the algorithm and its output. The results indicate that Megasatfinder is an excellent tool for the detection of megasatellites and that the generated results are highly reliable. The results of the complete analysis suggest alterations in the default parameter settings, presented as user guidelines, and state that artificially generated sequences are not applicable as models for real DNA in computational simulations.
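Megasatfinder itself is not reproduced in the abstract; as a hypothetical illustration of the underlying notion (a repeat-unit of at least a given length occurring in immediate tandem), here is a naive quadratic-time scan, shown with a short unit length for brevity rather than the 1000 bp threshold that defines megasatellites:

```python
def find_tandem_repeats(seq, min_unit, min_copies=2):
    """Naive scan for tandem repeats: a unit of length >= min_unit repeated
    back-to-back at least min_copies times. Returns (start, unit_len, copies)
    tuples. A quadratic-time sketch, not the Megasatfinder algorithm."""
    hits = []
    n = len(seq)
    for unit_len in range(min_unit, n // min_copies + 1):
        i = 0
        while i + 2 * unit_len <= n:
            unit = seq[i:i + unit_len]
            copies, j = 1, i + unit_len
            while seq[j:j + unit_len] == unit:
                copies += 1
                j += unit_len
            if copies >= min_copies:
                hits.append((i, unit_len, copies))
                i = j          # skip past the reported repeat
            else:
                i += 1
    return hits
```

On the toy sequence "TTGC" + "ACGTG" * 3 + "CC" with min_unit=5, the scan reports a single repeat of unit length 5 with 3 copies starting at position 4.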
17

On Ways to Improve Adaptive Filter Performance

Sankaran, Sundar G. 22 December 1999 (has links)
Adaptive filtering techniques are used in a wide range of applications, including echo cancellation, adaptive equalization, adaptive noise cancellation, and adaptive beamforming. The performance of an adaptive filtering algorithm is evaluated based on its convergence rate, misadjustment, computational requirements, and numerical robustness. We attempt to improve the performance by developing new adaptation algorithms and by using "unconventional" structures for adaptive filters. Part I of this dissertation presents a new adaptation algorithm, which we have termed the Normalized LMS algorithm with Orthogonal Correction Factors (NLMS-OCF). The NLMS-OCF algorithm updates the adaptive filter coefficients (weights) on the basis of multiple input signal vectors, while NLMS updates the weights on the basis of a single input vector. The well-known Affine Projection Algorithm (APA) is a special case of our NLMS-OCF algorithm. We derive convergence and tracking properties of NLMS-OCF using a simple model for the input vector. Our analysis shows that the convergence rate of NLMS-OCF (and also APA) is exponential and that it improves with an increase in the number of input signal vectors used for adaptation. While we show that, in theory, the misadjustment of the APA class is independent of the number of vectors used for adaptation, simulation results show a weak dependence. For white input the mean squared error drops by 20 dB in about 5N/(M+1) iterations, where N is the number of taps in the adaptive filter and (M+1) is the number of vectors used for adaptation. The dependence of the steady-state error and of the tracking properties on the three user-selectable parameters, namely step size, number of vectors used for adaptation (M+1), and input vector delay D used for adaptation, is discussed. While the lag error depends on all of the above parameters, the fluctuation error depends only on step size. 
Increasing D results in a linear increase in the lag error and hence the total steady-state mean-squared error. The optimum choices for step size and M are derived. Simulation results are provided to corroborate our analytical results. We also derive a fast version of our NLMS-OCF algorithm that has a complexity of O(NM). The fast version of the algorithm performs orthogonalization using a forward-backward prediction lattice. We demonstrate the advantages of using NLMS-OCF in a practical application, namely stereophonic acoustic echo cancellation. We find that NLMS-OCF can provide faster convergence, as well as better echo rejection, than the widely used APA. While the first part of this dissertation attempts to improve adaptive filter performance by refining the adaptation algorithm, the second part of this work looks at improving the convergence rate by using different structures. From an abstract viewpoint, the parameterization we decide to use has no special significance, other than serving as a vehicle to arrive at a good input-output description of the system. However, from a practical viewpoint, the parameterization decides how easy it is to numerically minimize the cost function that the adaptive filter is attempting to minimize. A balanced realization is known to minimize the parameter sensitivity as well as the condition number for Grammians. Furthermore, a balanced realization is useful in model order reduction. These properties of the balanced realization make it an attractive candidate as a structure for adaptive filtering. We propose an adaptive filtering algorithm based on balanced realizations. The third part of this dissertation proposes a unit-norm-constrained equation-error based adaptive IIR filtering algorithm. Minimizing the equation error subject to the unit-norm constraint yields an unbiased estimate for the parameters of a system, if the measurement noise is white. 
The proposed algorithm uses the hyper-spherical transformation to convert this constrained optimization problem into an unconstrained optimization problem. It is shown that the hyper-spherical transformation does not introduce any new minima in the equation error surface. Hence, simple gradient-based algorithms converge to the global minimum. Simulation results indicate that the proposed algorithm provides an unbiased estimate of the system parameters. / Ph. D.
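For reference, the conventional single-vector NLMS update that NLMS-OCF generalizes can be sketched as follows; this is the standard textbook algorithm in a minimal noiseless system-identification demo, not the dissertation's NLMS-OCF:

```python
import numpy as np

def nlms(x, d, num_taps, mu=0.5, eps=1e-8):
    """Standard NLMS: one input vector (the most recent num_taps samples)
    per weight update, with a power-normalized step size."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-M+1]]
        e = d[n] - w @ u                       # a-priori estimation error
        w = w + mu * e * u / (eps + u @ u)     # normalized gradient step
    return w
```

With a white-noise input and a noiseless desired signal generated by an unknown FIR filter, the weights converge to that filter's impulse response.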
18

Visualizing Algorithm Analysis Topics

Farghally, Mohammed Fawzi Seddik 30 November 2016 (has links)
Data Structures and Algorithms (DSA) courses are critical for any computer science curriculum. DSA courses emphasize concepts related to procedural dynamics and Algorithm Analysis (AA). These concepts are hard for students to grasp when conveyed using traditional textbook material relying on text and static images. Algorithm Visualizations (AVs) emerged as a technique for conveying DSA concepts using interactive visual representations. Historically, AVs have dealt with portraying algorithm dynamics, and the AV developer community has decades of successful experience with this. But there exist few visualizations to present algorithm analysis concepts. This content is typically still conveyed using text and static images. We have devised an approach that we term Algorithm Analysis Visualizations (AAVs), capable of conveying AA concepts visually. In AAVs, analysis is presented as a series of slides where each statement of the explanation is connected to visuals that support the sentence. We developed a pool of AAVs targeting the basic concepts of AA. We also developed AAVs for basic sorting algorithms, providing a concrete depiction about how the running time analysis of these algorithms can be calculated. To evaluate AAVs, we conducted a quasi-experiment across two offerings of CS3114 at Virginia Tech. By analyzing OpenDSA student interaction logs, we found that intervention group students spent significantly more time viewing the material as compared to control group students who used traditional textual content. Intervention group students gave positive feedback regarding the usefulness of AAVs to help them understand the AA concepts presented in the course. In addition, intervention group students demonstrated better performance than control group students on the AA part of the final exam. 
The final exam taken by both the control and intervention groups was based on a pilot version of the Algorithm Analysis Concept Inventory (AACI) that was developed to target fundamental AA concepts and probe students' misconceptions about these concepts. The pilot AACI was developed using a Delphi process involving a group of DSA instructors, and was shown to be a valid and reliable instrument to gauge students' understanding of the basic AA topics. / Ph. D. / Data Structures and Algorithms (DSA) courses are critical for any computer science curriculum. DSA courses emphasize concepts related to how an algorithm works and the time and space needed by the algorithm, also known as Algorithm Analysis (AA). These concepts are hard for students to grasp when conveyed using traditional textbook material relying on text and static images. Algorithm Visualizations (AVs) emerged as a technique for conveying DSA concepts using interactive visual representations. Historically, AVs have dealt with portraying how an algorithm works, and the AV developer community has decades of successful experience with this. But there exist few visualizations to present concepts related to algorithm efficiency. This content is typically still conveyed using text and static images. We have devised an approach that we term Algorithm Analysis Visualizations (AAVs), capable of conveying efficiency analysis concepts visually. In AAVs, analysis is presented as a series of slides where each statement of the explanation is connected to visuals that support the sentence. AAVs were tested through a study across two offerings of CS3114 at Virginia Tech. We found that students using AAVs spent significantly more time viewing the material as compared to students who used traditional textual content. Students gave positive feedback regarding the usefulness of AAVs to help them understand the efficiency concepts presented in the course. 
In addition, students using AAVs demonstrated better performance than students using text on the efficiency part of the final exam. The final exam was based on a pilot version of the Algorithm Analysis Concept Inventory (AACI) that was developed to target fundamental efficiency concepts and probe students’ misconceptions about these concepts. The pilot AACI was developed through a decision making technique involving a group of DSA instructors, and was shown to be a valid and reliable instrument to gauge students’ understanding of the basic efficiency topics.
19

Algoritmos eficientes para equalização autodidata de sinais QAM. / Efficient algorithms for blind equalization of QAM signals.

João Mendes Filho 30 November 2011 (has links)
In this work, we propose and analyze efficient blind algorithms for the equalization of communication channels, considering the transmission of QAM (quadrature amplitude modulation) signals. Their error functions are constructed so that the estimation error is zero at the coordinates of the constellation symbols. This characteristic enables the proposed algorithms to perform similarly to a supervised equalization algorithm such as the NLMS (normalized least mean-square), independently of the QAM order. Under certain conditions favorable to equalization, we verify analytically that the coefficient vectors of the proposed algorithms are collinear with the corresponding Wiener solution. Furthermore, using the estimate of the transmitted symbol together with its neighboring symbols, we propose schemes of low computational cost to improve the convergence rate. The divergence of the constant-modulus-based algorithm is avoided by a mechanism that discards inconsistent estimates of the transmitted symbols. Additionally, we present a tracking analysis that yields analytical expressions for the excess mean-square error of the proposed algorithms in stationary and nonstationary environments. From these expressions, we verify that with a fractionally-spaced equalizer in a noiseless stationary environment, the proposed algorithms can achieve perfect equalization, independently of the QAM order. The algorithms are extended to jointly adapt the feedforward and feedback filters of the decision feedback equalizer, taking into account a mechanism that avoids degenerate solutions. Simulation results suggest that the proposed schemes can be advantageously used to recover QAM signals, making switching to the decision-directed mode unnecessary.
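The constant-modulus criterion mentioned in the abstract penalizes the deviation of the squared equalizer output from a fixed dispersion constant. A minimal real-valued CMA sketch follows as a generic baseline; it is not the thesis's proposed algorithms and omits any divergence-avoidance mechanism:

```python
import numpy as np

def cma_equalize(r, num_taps=11, mu=1e-3, R2=1.0):
    """Real-valued Constant Modulus Algorithm: stochastic-gradient descent
    on the dispersion cost E[(y^2 - R2)^2], blind (no training symbols).
    Center-spike initialization; no divergence safeguard included."""
    w = np.zeros(num_taps)
    w[num_taps // 2] = 1.0                    # center-spike initialization
    y = np.zeros(len(r))
    for n in range(num_taps - 1, len(r)):
        u = r[n - num_taps + 1:n + 1][::-1]   # equalizer input vector
        y[n] = w @ u                          # equalizer output
        e = y[n] * (y[n] ** 2 - R2)           # constant-modulus error term
        w = w - mu * e * u                    # stochastic gradient step
    return w, y
```

Run on BPSK symbols through a mild two-tap channel, the dispersion of the equalizer output shrinks as the filter adapts, without the equalizer ever seeing the transmitted symbols.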
