11

Contributions to Convergence Analysis of Noisy Optimization Algorithms / Contributions à l'Analyse de Convergence d'Algorithmes d'Optimisation Bruitée

Astete Morales, Sandra 05 October 2016 (has links)
This thesis presents contributions to the analysis of algorithms for the optimization of noisy functions. Convergence rates, in terms of Simple Regret and Cumulative Regret, are derived for line-search algorithms as well as for random-search algorithms. We prove that a Hessian-based algorithm can reach the same results as some optimal algorithms in the literature when its parameters are tuned correctly. We also analyse the convergence order of Evolution Strategies on noisy functions and deduce log-log convergence, and we prove a lower bound on the convergence rate of Evolution Strategies. We extend previous work on reevaluation mechanisms by applying them to the discrete setting. Finally, we analyse the performance measure itself and prove that using an erroneous performance measure can lead to misleading results when different optimization methods are evaluated.
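A minimal sketch (assuming NumPy) of a (1+1)-Evolution Strategy with a simple averaging-based reevaluation scheme on a noisy sphere function; it is not one of the thesis's algorithms, and all constants are illustrative. Logging the noise-free simple regret against the number of evaluations on log-log axes is how the convergence order can be inspected empirically; with a fixed number of resamples the regret eventually stalls at the residual noise level, which is one reason increasing reevaluation schedules are studied.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sphere(x, noise_std=0.1):
    """f(x) = ||x||^2 plus additive Gaussian noise."""
    return float(x @ x + noise_std * rng.normal())

def one_plus_one_es(dim=10, budget=50_000, k=10):
    """(1+1)-ES with k-fold reevaluation (averaging) per candidate."""
    x = rng.normal(size=dim)                     # parent
    step = 1.0                                   # mutation step size
    fx = np.mean([noisy_sphere(x) for _ in range(k)])
    evals, history = k, []
    while evals < budget:
        y = x + step * rng.normal(size=dim)      # mutate
        fy = np.mean([noisy_sphere(y) for _ in range(k)])
        evals += k
        if fy <= fx:                             # accept, enlarge step
            x, fx = y, fy
            step *= 1.5
        else:                                    # reject, shrink step (~1/5th success rule)
            step *= 1.5 ** (-0.25)
        history.append((evals, float(x @ x)))    # noise-free simple regret
    return x, history

_, hist = one_plus_one_es()
print("final simple regret:", hist[-1][1])
```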
12

Real-time Design Constraints in Implementing Active Vibration Control Algorithms.

Hossain, M. Alamgir, Tokhi, M.O. January 2006 (has links)
Although computer architectures incorporate fast processing hardware resources, high-performance real-time implementation of a complex control algorithm requires efficient design and software coding of the algorithm so as to exploit special features of the hardware and avoid associated architectural shortcomings. This paper presents an investigation into analysis and design mechanisms that lead to a reduction in execution time when implementing real-time control algorithms. The proposed mechanisms are exemplified by means of one algorithm, which demonstrates their applicability to real-time applications. An active vibration control (AVC) algorithm for a flexible beam system, simulated using the finite difference (FD) method, is considered to demonstrate the effectiveness of the proposed methods. A comparative performance evaluation of the proposed design mechanisms is presented and discussed through a set of experiments.
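A minimal sketch of one design mechanism of the kind discussed above: the same simplified finite-difference beam update coded as a plain Python loop and as a vectorized NumPy expression, with the execution time of each measured. The stencil, segment count, and constants are illustrative, boundary conditions are omitted, and this is not the paper's AVC algorithm.

```python
import time
import numpy as np

n_seg, lam2, steps = 2000, 0.2, 500
y_prev = np.zeros(n_seg)
y_curr = np.random.default_rng(1).normal(scale=1e-3, size=n_seg)

def step_loop(y_prev, y_curr):
    """Interior-node FD update, element by element."""
    y_next = y_curr.copy()
    for i in range(2, n_seg - 2):
        fourth = y_curr[i+2] - 4*y_curr[i+1] + 6*y_curr[i] - 4*y_curr[i-1] + y_curr[i-2]
        y_next[i] = 2*y_curr[i] - y_prev[i] - lam2 * fourth
    return y_curr, y_next

def step_vec(y_prev, y_curr):
    """Same update expressed with array slicing (vectorized)."""
    y_next = y_curr.copy()
    fourth = (y_curr[4:] - 4*y_curr[3:-1] + 6*y_curr[2:-2]
              - 4*y_curr[1:-3] + y_curr[:-4])
    y_next[2:-2] = 2*y_curr[2:-2] - y_prev[2:-2] - lam2 * fourth
    return y_curr, y_next

for name, stepper in [("loop", step_loop), ("vectorized", step_vec)]:
    p, c = y_prev.copy(), y_curr.copy()
    t0 = time.perf_counter()
    for _ in range(steps):
        p, c = stepper(p, c)
    print(f"{name}: {time.perf_counter() - t0:.3f} s")
```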
13

Learning to Rank Algorithms and Their Application in Machine Translation

Xia, Tian January 2015 (has links)
No description available.
14

Detection and analysis of megasatellites in the human genome using in silico methods

Benediktsson, Elís Ingi January 2005 (has links)
Megasatellites are polymorphic tandem repetitive sequences with repeat-units longer than or equal to 1000 base pairs. The novel algorithm Megasatfinder predicts megasatellites in the human genome. A structured method of analysing the algorithm is developed and conducted. The analysis method consists of six test scenarios. Scripts are created, which execute the algorithm using various parameter settings. Three nucleotide sequences are applied; a real sequence extracted from the human genome and two random sequences, generated using different base probabilities. Usability and accuracy are investigated, providing the user with confidence in the algorithm and its output. The results indicate that Megasatfinder is an excellent tool for the detection of megasatellites and that the generated results are highly reliable. The results of the complete analysis suggest alterations in the default parameter settings, presented as user guidelines, and state that artificially generated sequences are not applicable as models for real DNA in computational simulations.
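A minimal, brute-force sketch of megasatellite-style detection, assuming only perfect adjacent copies of a repeat unit of at least 1000 base pairs; it is not Megasatfinder, which handles polymorphic repeats far more efficiently. The synthetic test sequence and length bounds are made up for the example.

```python
MIN_UNIT = 1000  # defining lower bound on megasatellite repeat-unit length

def find_perfect_megasatellites(seq, min_unit=MIN_UNIT, max_unit=5000):
    """Yield (start, unit_length, copies) for perfect tandem repeats."""
    n = len(seq)
    for unit in range(min_unit, min(max_unit, n // 2) + 1):
        i = 0
        while i + 2 * unit <= n:
            if seq[i:i + unit] == seq[i + unit:i + 2 * unit]:
                copies = 2
                while i + (copies + 1) * unit <= n and \
                        seq[i:i + unit] == seq[i + copies * unit:i + (copies + 1) * unit]:
                    copies += 1
                yield i, unit, copies
                i += copies * unit      # skip past the reported repeat
            else:
                i += 1

if __name__ == "__main__":
    import random
    random.seed(0)
    unit = "".join(random.choice("ACGT") for _ in range(1200))
    genome = "ACGT" * 100 + unit * 3 + "TTGA" * 100   # synthetic test sequence
    for start, u, copies in find_perfect_megasatellites(genome):
        print(f"repeat of {u} bp unit x{copies} at position {start}")
```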
16

On Ways to Improve Adaptive Filter Performance

Sankaran, Sundar G. 22 December 1999 (has links)
Adaptive filtering techniques are used in a wide range of applications, including echo cancellation, adaptive equalization, adaptive noise cancellation, and adaptive beamforming. The performance of an adaptive filtering algorithm is evaluated based on its convergence rate, misadjustment, computational requirements, and numerical robustness. We attempt to improve the performance by developing new adaptation algorithms and by using "unconventional" structures for adaptive filters. Part I of this dissertation presents a new adaptation algorithm, which we have termed the Normalized LMS algorithm with Orthogonal Correction Factors (NLMS-OCF). The NLMS-OCF algorithm updates the adaptive filter coefficients (weights) on the basis of multiple input signal vectors, while NLMS updates the weights on the basis of a single input vector. The well-known Affine Projection Algorithm (APA) is a special case of our NLMS-OCF algorithm. We derive convergence and tracking properties of NLMS-OCF using a simple model for the input vector. Our analysis shows that the convergence rate of NLMS-OCF (and also APA) is exponential and that it improves with an increase in the number of input signal vectors used for adaptation. While we show that, in theory, the misadjustment of the APA class is independent of the number of vectors used for adaptation, simulation results show a weak dependence. For white input the mean squared error drops by 20 dB in about 5N/(M+1) iterations, where N is the number of taps in the adaptive filter and (M+1) is the number of vectors used for adaptation. The dependence of the steady-state error and of the tracking properties on the three user-selectable parameters, namely step size, number of vectors used for adaptation (M+1), and input vector delay D used for adaptation, is discussed. While the lag error depends on all of the above parameters, the fluctuation error depends only on step size. Increasing D results in a linear increase in the lag error and hence the total steady-state mean-squared error. The optimum choices for step size and M are derived. Simulation results are provided to corroborate our analytical results. We also derive a fast version of our NLMS-OCF algorithm that has a complexity of O(NM). The fast version of the algorithm performs orthogonalization using a forward-backward prediction lattice. We demonstrate the advantages of using NLMS-OCF in a practical application, namely stereophonic acoustic echo cancellation. We find that NLMS-OCF can provide faster convergence, as well as better echo rejection, than the widely used APA. While the first part of this dissertation attempts to improve adaptive filter performance by refining the adaptation algorithm, the second part of this work looks at improving the convergence rate by using different structures. From an abstract viewpoint, the parameterization we decide to use has no special significance, other than serving as a vehicle to arrive at a good input-output description of the system. However, from a practical viewpoint, the parameterization decides how easy it is to numerically minimize the cost function that the adaptive filter is attempting to minimize. A balanced realization is known to minimize the parameter sensitivity as well as the condition number for Grammians. Furthermore, a balanced realization is useful in model order reduction. These properties of the balanced realization make it an attractive candidate as a structure for adaptive filtering. 
We propose an adaptive filtering algorithm based on balanced realizations. The third part of this dissertation proposes a unit-norm-constrained equation-error based adaptive IIR filtering algorithm. Minimizing the equation error subject to the unit-norm constraint yields an unbiased estimate for the parameters of a system, if the measurement noise is white. The proposed algorithm uses the hyper-spherical transformation to convert this constrained optimization problem into an unconstrained optimization problem. It is shown that the hyper-spherical transformation does not introduce any new minima in the equation error surface. Hence, simple gradient-based algorithms converge to the global minimum. Simulation results indicate that the proposed algorithm provides an unbiased estimate of the system parameters. / Ph. D.
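A minimal sketch (assuming NumPy) of the textbook NLMS and affine-projection (APA) recursions that the NLMS-OCF family generalizes; it is not the dissertation's exact orthogonal-correction-factor algorithm, and the 16-tap system, step sizes, and M+1 = 4 are illustrative choices.

```python
import numpy as np

def nlms(x_vec, d, w, mu=0.5, eps=1e-6):
    """Single NLMS step: update weights w from input vector x_vec and desired sample d."""
    e = d - w @ x_vec
    w = w + mu * e * x_vec / (x_vec @ x_vec + eps)
    return w, e

def apa(X, d_vec, w, mu=0.5, delta=1e-6):
    """Affine-projection step. X: (M+1, N) matrix of the last M+1 input vectors,
    d_vec: the corresponding desired samples."""
    E = d_vec - X @ w                                  # (M+1,) error vector
    G = X @ X.T + delta * np.eye(X.shape[0])           # regularized Gram matrix
    w = w + mu * X.T @ np.linalg.solve(G, E)
    return w, E

# Tiny usage example: identify a 16-tap FIR system from white-noise input.
rng = np.random.default_rng(0)
N, M_plus_1, n_samples = 16, 4, 5000
h_true = rng.normal(size=N)
x = rng.normal(size=n_samples)
w = np.zeros(N)
for n in range(N + M_plus_1, n_samples):
    # Rows are the input vectors at times n, n-1, ..., n-M.
    X = np.array([x[n - k - N + 1:n - k + 1][::-1] for k in range(M_plus_1)])
    d_vec = X @ h_true
    w, _ = apa(X, d_vec, w)
print("weight error norm:", np.linalg.norm(w - h_true))
```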
17

Visualizing Algorithm Analysis Topics

Farghally, Mohammed Fawzi Seddik 30 November 2016 (has links)
Data Structures and Algorithms (DSA) courses are critical for any computer science curriculum. DSA courses emphasize concepts related to procedural dynamics and Algorithm Analysis (AA). These concepts are hard for students to grasp when conveyed using traditional textbook material relying on text and static images. Algorithm Visualizations (AVs) emerged as a technique for conveying DSA concepts using interactive visual representations. Historically, AVs have dealt with portraying algorithm dynamics, and the AV developer community has decades of successful experience with this. But there exist few visualizations to present algorithm analysis concepts. This content is typically still conveyed using text and static images. We have devised an approach that we term Algorithm Analysis Visualizations (AAVs), capable of conveying AA concepts visually. In AAVs, analysis is presented as a series of slides where each statement of the explanation is connected to visuals that support the sentence. We developed a pool of AAVs targeting the basic concepts of AA. We also developed AAVs for basic sorting algorithms, providing a concrete depiction about how the running time analysis of these algorithms can be calculated. To evaluate AAVs, we conducted a quasi-experiment across two offerings of CS3114 at Virginia Tech. By analyzing OpenDSA student interaction logs, we found that intervention group students spent significantly more time viewing the material as compared to control group students who used traditional textual content. Intervention group students gave positive feedback regarding the usefulness of AAVs to help them understand the AA concepts presented in the course. In addition, intervention group students demonstrated better performance than control group students on the AA part of the final exam. The final exam taken by both the control and intervention groups was based on a pilot version of the Algorithm Analysis Concept Inventory (AACI) that was developed to target fundamental AA concepts and probe students' misconceptions about these concepts. The pilot AACI was developed using a Delphi process involving a group of DSA instructors, and was shown to be a valid and reliable instrument to gauge students' understanding of the basic AA topics. / Ph. D.
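A minimal, hypothetical sketch of the kind of concrete link between code behaviour and running-time analysis that AAVs aim to convey visually: it counts the comparisons insertion sort performs and sets them beside the analytical worst-case count n(n-1)/2. It is not part of the OpenDSA materials.

```python
import random

def insertion_sort_comparisons(a):
    """Sort a copy of a; return (sorted_list, number_of_key_comparisons)."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1            # each a[j] > key test is one comparison
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

random.seed(0)
for n in (100, 200, 400, 800):
    _, worst = insertion_sort_comparisons(list(range(n, 0, -1)))   # reverse-sorted input
    _, rand = insertion_sort_comparisons([random.random() for _ in range(n)])
    print(f"n={n:4d}  worst={worst:7d}  n(n-1)/2={n*(n-1)//2:7d}  random={rand:7d}")
```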
18

Algoritmos eficientes para equalização autodidata de sinais QAM. / Efficient algorithms for blind equalization of QAM signals.

João Mendes Filho 30 November 2011 (has links)
In this work, we propose and analyse efficient blind algorithms for the equalization of communication channels, considering the transmission of QAM (quadrature amplitude modulation) signals. Their error functions are constructed so that the estimation error is zero at the coordinates of the constellation symbols. This characteristic enables the proposed algorithms to perform similarly to a supervised equalization algorithm such as the NLMS (normalized least mean-square), independently of the QAM order. Under some favorable conditions for equalization, we verify analytically that the coefficient vectors of the proposed algorithms are collinear with the Wiener solution. Furthermore, using the estimate of the transmitted symbol together with its neighboring symbols, we propose schemes of low computational cost to improve the convergence rate. The divergence of the constant-modulus-based algorithm is avoided by a mechanism that discards inconsistent estimates of the transmitted symbols. Additionally, we present a tracking analysis that yields analytical expressions for the excess mean-square error of the proposed algorithms in stationary and nonstationary environments. From these expressions, we verify that with a fractionally spaced equalizer in a noiseless stationary environment, the proposed algorithms can achieve perfect equalization, independently of the QAM order. The algorithms are extended to jointly adapt the feedforward and feedback filters of the decision-feedback equalizer, taking into account a mechanism that avoids degenerate solutions.
Simulation results suggest that the proposed schemes may be advantageously used to recover QAM signals, making switching to the decision-directed mode unnecessary.
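A minimal sketch of the baseline constant-modulus algorithm (CMA) for blind equalization of a 4-QAM signal over a toy FIR channel, for orientation only; the thesis's algorithms refine this family with error functions that vanish at the constellation points and safeguards against divergence. The channel taps, step size, and filter length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sym, n_taps, mu = 20_000, 11, 1e-3
symbols = (rng.integers(0, 2, n_sym) * 2 - 1) + 1j * (rng.integers(0, 2, n_sym) * 2 - 1)
channel = np.array([1.0, 0.4 + 0.3j, 0.2])                  # toy FIR channel
received = np.convolve(symbols, channel)[:n_sym]
received += 0.01 * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))

R2 = np.mean(np.abs(symbols) ** 4) / np.mean(np.abs(symbols) ** 2)   # CMA dispersion constant
w = np.zeros(n_taps, dtype=complex)
w[n_taps // 2] = 1.0                                         # center-spike initialization

for n in range(n_taps, n_sym):
    x = received[n - n_taps:n][::-1]                         # regressor (most recent first)
    y = w @ x                                                # equalizer output
    e = y * (np.abs(y) ** 2 - R2)                            # CMA error term
    w = w - mu * e * np.conj(x)                              # stochastic-gradient update

# After convergence the output should cluster near the 4-QAM constellation
# (up to the phase ambiguity inherent to blind methods).
out = np.array([w @ received[n - n_taps:n][::-1] for n in range(n_sym - 200, n_sym)])
print("mean |y|^2 of last 200 outputs:", np.mean(np.abs(out) ** 2))
```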
20

Odysseýs : sistema para análise de documentos de patentes / Odysseýs : system for analysis of patent documents

Masago, Fábio Kenji, 1984 04 August 2013 (has links)
Advisor: Jacques Wainer / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / A patent is a document on a creation right granted by the state to its authors, preventing third parties from producing, using, commercializing, importing, or exporting the described invention without authorization from the document's owner. A common use of patents in economic studies is to measure the importance or technological impact of an innovative field for a firm or nation: patents act as a gauge of inventive activity, and the citations they contain are a means of measuring the knowledge flows and impacts of a country or firm, as well as of evaluating trends in a technological field. This dissertation presents a computational tool to assist in the analysis of patents, applying the Latent Dirichlet Allocation (LDA) method to patent similarity. The system, called Odysseýs, evaluates the similarity between a patent supplied by the user and a group of documents, ranking them by their degree of similarity to the patent under evaluation. In addition, the software can, in an unsupervised manner, generate patent citation networks by retrieving a set of related patents from the United States Patent and Trademark Office (USPTO) database from a user-specified query, using those patents both for the similarity analysis and for building the knowledge-flow network.
The lack of national (Brazilian) software specifically for patent processing, and the few auxiliary tools available for analysing such documents, were the main motivations for this project. / Master in Computer Science
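A minimal sketch of LDA-based document similarity using scikit-learn, in the spirit of (but not reproducing) the Odysseýs pipeline: documents are mapped to topic mixtures and ranked by cosine similarity to a query patent. The toy texts stand in for patent abstracts; a real pipeline would fetch and preprocess USPTO documents.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "adaptive noise cancellation filter for audio signals",
    "tandem repeat detection in genomic nucleotide sequences",
    "least mean square adaptive filter for echo cancellation",
    "vibration control of flexible beam structures",
]
query_patent = "adaptive filter algorithm for acoustic echo cancellation"

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus + [query_patent])

lda = LatentDirichletAllocation(n_components=3, random_state=0)
topic_dist = lda.fit_transform(X)            # rows: per-document topic mixtures

query_topics = topic_dist[-1:]               # last row is the query patent
scores = cosine_similarity(topic_dist[:-1], query_topics).ravel()
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank}. score={scores[idx]:.3f}  {corpus[idx]}")
```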
