221

Numerical Methods in Reaction Rate Theory

Frankcombe, Terry James. Unknown date.
Numerical methods are often required to solve chemical problems, either to verify theoretical models or to access information that is not readily available experimentally. This thesis deals with both situations, though in differing levels of detail. A major component of this thesis is devoted to developing new methods to determine a full eigendecomposition of the matrices derived from "low temperature" unimolecular master equations. When transient behaviour is of interest, achieving relative accuracy for more than just the eigenvector corresponding to the smallest eigenvalue is of central importance. Three new methods are presented. The first is based on a weighted implementation of subspace projection methods, in this case explored for the well-known Arnoldi method. This weighted inner product subspace projection methodology is demonstrated to […]
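
The abstract stops mid-sentence, but the weighted-inner-product Arnoldi idea it describes can be sketched. Below is a minimal illustration, assuming the weighting enters through a positive vector w defining <x, y>_W = x^T diag(w) y; the actual weighting and the master-equation matrices used in the thesis are not given in the abstract, so the data here are placeholders.

```python
import numpy as np

def arnoldi_weighted(A, v0, m, w):
    """Arnoldi iteration in the weighted inner product <x, y>_W = x^T diag(w) y.

    Returns V with W-orthonormal columns and the (m+1) x m upper Hessenberg
    matrix H satisfying A @ V[:, :m] = V @ H (up to breakdown).
    """
    n = A.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    wip = lambda x, y: np.dot(w * x, y)        # weighted inner product
    wnorm = lambda x: np.sqrt(wip(x, x))

    V[:, 0] = v0 / wnorm(v0)
    for j in range(m):
        t = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt in <., .>_W
            H[i, j] = wip(V[:, i], t)
            t = t - H[i, j] * V[:, i]
        H[j + 1, j] = wnorm(t)
        if H[j + 1, j] < 1e-14:                # happy breakdown
            return V[:, :j + 1], H[:j + 2, :j + 1]
        V[:, j + 1] = t / H[j + 1, j]
    return V, H

# Toy stand-in for a master-equation rate matrix plus a positive weight vector
# (placeholders, not the matrices studied in the thesis).
rng = np.random.default_rng(0)
n = 200
A = -np.diag(rng.uniform(1.0, 2.0, n)) + 0.01 * rng.standard_normal((n, n))
w = rng.uniform(0.5, 2.0, n)

V, H = arnoldi_weighted(A, np.ones(n), m=30, w=w)
k = H.shape[1]
ritz = np.linalg.eigvals(H[:k, :k])            # Ritz values approximate extremal eigenvalues
print(np.sort(ritz.real)[-3:])
```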
225

Enhancement of music signals in a noisy environment

Παπανικολάου, Παναγιώτης, 20 October 2010.
This thesis applies noise-reduction algorithms to music signals and draws conclusions about the performance of each algorithm for each musical genre. The main aims are to clarify the basic problems of sound enhancement and to present the various algorithms developed for solving them. After a brief introduction to the basic concepts of sound enhancement, representative algorithms from each class of noise-reduction techniques are examined and analysed. These algorithms fall into three main classes: spectral subtractive algorithms, statistical-model-based algorithms and subspace algorithms. To evaluate the performance of these algorithms we use objective quality measures, whose results allow the performance of each algorithm to be compared. Using four different objective measures, the experiments yield a set of indicative values that allow comparisons both within and across algorithm classes. From these comparisons, useful conclusions are drawn about the choice of parameters for each algorithm and about the suitability of each algorithm for specific noise conditions and for specific musical genres.
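
As a concrete illustration of the spectral-subtraction class mentioned above, here is a minimal sketch assuming additive stationary noise whose magnitude spectrum is estimated from a leading noise-only segment; the STFT length, oversubtraction factor and spectral floor are illustrative choices, not the settings used in the thesis.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs, noise_seconds=0.5, alpha=2.0, floor=0.02):
    """Basic magnitude spectral subtraction.

    The noise magnitude spectrum is estimated from the first `noise_seconds`
    of the recording (assumed noise-only); `alpha` is an oversubtraction
    factor and `floor` a spectral floor that limits musical noise.
    """
    nperseg = 512
    f, t, Z = stft(noisy, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(Z), np.angle(Z)

    noise_frames = max(1, int(noise_seconds * fs / (nperseg // 2)))
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)

    clean_mag = mag - alpha * noise_mag
    clean_mag = np.maximum(clean_mag, floor * noise_mag)   # apply spectral floor

    _, enhanced = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return enhanced

# Synthetic demo: a tone buried in white noise (stand-in for a music excerpt).
fs = 16000
t = np.arange(0, 3.0, 1.0 / fs)
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
clean[: int(0.5 * fs)] = 0.0                   # leading noise-only segment
noisy = clean + 0.1 * np.random.default_rng(1).standard_normal(t.size)
enhanced = spectral_subtraction(noisy, fs)
print(noisy.shape, enhanced.shape)
```

In an evaluation of the kind described, the enhanced signal would then be scored against the clean reference with objective quality measures.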
226

On induction machine fault detection using advanced parametric signal processing techniques

Trachi, Youness, 22 November 2017.
This Ph.D. thesis aims to develop reliable and cost-effective condition monitoring and fault detection architectures for induction machines, based mainly on advanced parametric signal processing techniques. To analyze and detect faults, a parametric model of the stator current under stationary conditions is considered: the current is assumed to consist of multiple sinusoids with unknown parameters in noise. The model parameters are estimated using parametric techniques such as subspace spectral estimators (MUSIC and ESPRIT) and the maximum likelihood estimator. A fault severity criterion, based on the estimated amplitudes of the stator current frequency components, is also proposed to quantify the failure level of the machine. A new fault detector based on hypothesis testing is proposed as well; it relies on the generalized likelihood ratio test with unknown signal and noise parameters. Finally, the proposed parametric techniques are evaluated using experimental stator current signals from induction machines under two fault types: bearing faults and broken rotor bars. The experimental results clearly show the effectiveness and detection ability of the proposed parametric techniques.
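
A minimal sketch of the subspace (MUSIC) frequency-estimation step on a synthetic stator-current-like signal follows; the supply and sideband frequencies, model order and covariance dimension are illustrative assumptions, not the thesis's experimental setup.

```python
import numpy as np
from scipy.signal import find_peaks

def music_pseudospectrum(x, p, freqs, fs):
    """MUSIC pseudospectrum for a signal modelled as p complex sinusoids in noise."""
    m = 64                                      # covariance-matrix dimension
    n = len(x) - m + 1
    X = np.stack([x[i:i + m] for i in range(n)], axis=1)
    R = X @ X.conj().T / n                      # sample covariance
    eigval, eigvec = np.linalg.eigh(R)          # ascending eigenvalues
    En = eigvec[:, : m - p]                     # noise subspace
    k = np.arange(m)
    spec = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * k / fs)     # steering vector
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)

# Synthetic stator-current-like signal: 50 Hz supply plus two small sidebands
# (illustrative fault signature, not measured data).
fs, T = 1000.0, 2.0
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(2)
x = (np.exp(2j * np.pi * 50 * t)
     + 0.05 * np.exp(2j * np.pi * 44 * t)
     + 0.05 * np.exp(2j * np.pi * 56 * t)
     + 0.01 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size)))

freqs = np.linspace(30, 70, 801)
spec = music_pseudospectrum(x, p=3, freqs=freqs, fs=fs)
peaks, _ = find_peaks(spec)
top = peaks[np.argsort(spec[peaks])[-3:]]
print(np.sort(freqs[top]))                      # expected near 44, 50 and 56 Hz
```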
227

State space computational data modelling and intelligent control

Del Real Tamariz, Annabell, 15 July 2005.
Advisor: Celso Pascoli Bottura. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. / This study presents contributions to state-space computational modelling of multivariable data, with both time-invariant and time-varying discrete linear systems. For deterministic-stochastic modelling of noisy data, the MOESP_AOKI algorithm is proposed. Using multilayer recurrent neural networks, algorithms are proposed for solving the discrete-time algebraic Riccati equation as well as the associated discrete-time algebraic Riccati inequality, via linear matrix inequalities. A gain-scheduling adaptive control scheme based on neural networks is proposed for discrete multivariable time-varying systems identified by the MOESP_VAR algorithm, which is also proposed in this thesis. In synthesis, an intelligent control structure for discrete multivariable time-varying systems, through an approach that can be called ILPV (Intelligent Linear Parameter Varying), is proposed and implemented. An intelligent LPV controller for data computationally modelled by the MOESP_VAR algorithm is structured, implemented and tested with good results. / Doctorate in Electrical Engineering (Automation).
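
The thesis solves the discrete-time algebraic Riccati equation with recurrent neural networks and LMIs; as a point of reference, the sketch below solves the same equation directly with SciPy for an illustrative second-order system (the matrices are placeholders, not the thesis's data). A solution obtained this way could serve to validate a neural solver on small examples.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative discrete-time system (placeholder matrices).
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)            # state weighting
R = np.array([[1.0]])    # input weighting

# Direct solution of the discrete algebraic Riccati equation
#   P = A'PA - A'PB (R + B'PB)^{-1} B'PA + Q
P = solve_discrete_are(A, B, Q, R)

# Resulting LQR feedback gain and closed-loop eigenvalues.
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print("P =\n", P)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```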
228

Algebraic analysis of V-cycle multigrid and aggregation-based two-grid methods

Napov, Artem, 12 February 2010.
This thesis treats two essentially different subjects: V-cycle schemes are considered in Chapters 2-4, whereas aggregation-based coarsening is analysed in Chapters 5-6. Paradoxically, these two multigrid ingredients, when combined, can hardly lead to an optimal algorithm. Indeed, a V-cycle needs more accurate prolongations than the simple piecewise-constant one associated with aggregation-based coarsening. On the other hand, aggregation-based approaches use almost exclusively piecewise-constant prolongations, and therefore need more involved cycling strategies, the K-cycle [Num. Lin. Alg. Appl. vol. 15 (2008), pp. 473-487] being an attractive alternative in this respect.

Chapter 2 considers the well-known V-cycle convergence theories: the approximation-property-based analyses by Hackbusch [Multi-Grid Methods and Applications, 1985, pp. 164-167] and by McCormick [SIAM J. Numer. Anal. vol. 22 (1985), pp. 634-643], and the successive subspace correction theory as presented by Xu [SIAM Review, vol. 34 (1992), pp. 581-613] and by Yserentant [Acta Numerica, vol. 2 (1993), pp. 285-326]. Under the constraint that the resulting upper bound on the convergence rate must be expressed with respect to parameters involving two successive levels at a time, these theories are compared. Unlike [Acta Numerica, vol. 2 (1993), pp. 285-326], where the comparison is performed on the basis of the underlying assumptions in a particular PDE context, we compare the upper bounds directly. We show that these analyses are equivalent from the qualitative point of view. From the quantitative point of view, we show that the bound due to McCormick is always the best one.

When the upper bound on the V-cycle convergence factor involves only two successive levels at a time, it can further be compared with the two-level convergence factor. Such a comparison is performed in Chapter 3, showing that nice two-grid convergence (at every level) leads to an optimal McCormick bound (the best bound from the previous chapter) if and only if the norm of a given projector is bounded on every level.

In Chapter 4 we consider the Fourier analysis setting for scalar PDEs and extend the comparison between two-grid and V-cycle multigrid methods to the smoothing factor. In particular, a two-sided bound involving the smoothing factor is obtained that defines an interval containing both the two-grid and V-cycle convergence rates. This interval is narrow when an additional parameter α is small enough, the latter being a simple function of Fourier components.

Chapter 5 provides a theoretical framework for coarsening by aggregation. An upper bound is presented that relates the two-grid convergence factor to local quantities, each associated with a particular aggregate. The bound is shown to be asymptotically sharp for a large class of elliptic boundary value problems, including problems with anisotropic and discontinuous coefficients.

In Chapter 6 we consider problems resulting from the discretization of the 3D curl-curl equation with edge finite elements, in which the variables are associated with edges. We investigate the performance of the Reitzinger and Schöberl algorithm [Num. Lin. Alg. Appl. vol. 9 (2002), pp. 223-238], which uses aggregation techniques to construct the edge prolongation matrix. More precisely, we perform a Fourier analysis of the method in the two-grid setting, showing its optimality. The analysis is supplemented with some numerical investigations. / Doctorate in Engineering Sciences.
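
To make the aggregation ingredient concrete, here is a minimal sketch of an aggregation-based two-grid cycle for a 1D Poisson model problem, with a piecewise-constant prolongation built from pairwise aggregates and damped-Jacobi smoothing; the model problem and parameters are illustrative and far simpler than the cases analysed in the thesis.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 128                                            # fine-grid unknowns (even)
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Piecewise-constant prolongation: aggregate pairs of neighbouring nodes.
rows = np.arange(n)
cols = rows // 2
P = sp.csr_matrix((np.ones(n), (rows, cols)), shape=(n, n // 2))
Ac = (P.T @ A @ P).tocsc()                         # Galerkin coarse-grid operator

def jacobi(A, x, b, nu=2, omega=0.7):
    """A few sweeps of damped Jacobi smoothing."""
    d = A.diagonal()
    for _ in range(nu):
        x = x + omega * (b - A @ x) / d
    return x

def two_grid(x, b, iters=20):
    for _ in range(iters):
        x = jacobi(A, x, b)                        # pre-smoothing
        r = b - A @ x
        x = x + P @ spla.spsolve(Ac, P.T @ r)      # coarse-grid correction
        x = jacobi(A, x, b)                        # post-smoothing
        print("residual:", np.linalg.norm(b - A @ x))
    return x

x = two_grid(np.zeros(n), b)
```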
229

On numerical resilience in linear algebra

Zounon, Mawussi, 01 April 2015.
As the computational power of high performance computing (HPC) systems continues to increase, through huge numbers of cores or specialized processing units, HPC applications are increasingly prone to faults. This study covers a new class of numerical fault-tolerance algorithms at the application level that require no extra resources, i.e., no additional computational units or computing time, when no fault occurs. Assuming that a separate mechanism ensures fault detection, we propose numerical algorithms to extract relevant information from the data available after a fault. After this extraction, well-chosen parts of the missing data are regenerated through interpolation strategies to constitute meaningful inputs from which to numerically restart the algorithm. We have designed these methods, called interpolation-restart techniques, for numerical linear algebra problems such as the solution of linear systems or eigenproblems, which are the innermost numerical kernels in many scientific and engineering applications and often among the most time-consuming parts. In the framework of Krylov subspace linear solvers, the lost entries of the iterate are interpolated using the entries available on the surviving nodes to define a new initial guess before restarting the Krylov method. In particular, we consider two interpolation policies that preserve key numerical properties of well-known linear solvers, namely the monotonic decrease of the A-norm of the error for the conjugate gradient method and the monotonic decrease of the residual norm for GMRES. We assess the impact of the fault rate and of the amount of lost data on the robustness of the resulting linear solvers. For eigensolvers, we revisit state-of-the-art methods for solving large sparse eigenvalue problems, namely Arnoldi methods, subspace iteration methods and the Jacobi-Davidson method, in the light of interpolation-restart strategies. For each eigensolver considered, we adapt the interpolation-restart strategies to regenerate as much spectral information as possible. Through intensive experiments, we illustrate the qualitative numerical behaviour of the resulting schemes as the number of faults and the amount of lost data are varied, and we demonstrate that they exhibit numerical robustness close to that of fault-free calculations. In order to assess the efficiency of our numerical strategies, we considered an actual fully-featured parallel sparse hybrid (direct/iterative) linear solver, MaPHyS, and we proposed numerical remedies to design a resilient version of the solver. The solver being hybrid, we focus in this study on the iterative solution step, which is often the dominant step in practice. The numerical remedies we propose are twofold. Whenever possible, we exploit the natural data redundancy between processes of the solver to perform an exact recovery through clever copies over processes. Otherwise, data that has been lost and is no longer available on any process is recovered through interpolation-restart strategies. These numerical remedies have been implemented in the MaPHyS parallel solver so that we can assess their efficiency on a large number of processing units (up to 12,288 CPU cores) for solving large-scale real-life problems.
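
A minimal sketch of the interpolation-restart idea for a Krylov solve follows: after a simulated fault wipes a block of the current iterate, the lost entries are regenerated by solving the corresponding local subsystem, and the solver is restarted from the repaired vector. The problem, the fault pattern and the use of SciPy's GMRES are synthetic placeholders, not the MaPHyS setting of the thesis.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(3)
n = 400
A = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = rng.standard_normal(n)

# Run GMRES for a few iterations, then simulate a fault on one "node".
x_partial, _ = spla.gmres(A, b, maxiter=30)
lost = np.arange(100, 150)                       # entries lost in the fault
kept = np.setdiff1d(np.arange(n), lost)
x_faulty = x_partial.copy()
x_faulty[lost] = 0.0                             # lost data is gone

# Interpolation-restart: regenerate the lost entries from the local subsystem
#   A[lost, lost] x_lost = b[lost] - A[lost, kept] x_kept.
A_ll = A[lost][:, lost].tocsc()
rhs = b[lost] - A[lost][:, kept] @ x_faulty[kept]
x_repaired = x_faulty.copy()
x_repaired[lost] = spla.spsolve(A_ll, rhs)

# Restart GMRES from the repaired iterate instead of from scratch.
x_final, info = spla.gmres(A, b, x0=x_repaired)
print("residual after fault   :", np.linalg.norm(b - A @ x_faulty))
print("residual after repair  :", np.linalg.norm(b - A @ x_repaired))
print("residual after restart :", np.linalg.norm(b - A @ x_final))
```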
230

Resource Allocation for Multiple-Input and Multiple-Output Interference Networks

Cao, Pan, 11 March 2015.
To meet the exponentially increasing traffic demand driven by rapidly growing mobile subscriptions, both industry and academia are exploring the potential of a new generation (5G) of wireless technologies. An important 5G goal is to achieve high data rates. Small cells with spectrum sharing and multiple-input multiple-output (MIMO) techniques are among the most promising 5G technologies, since they increase the aggregate data rate by improving the spectral efficiency, node density and transmission bandwidth, respectively. However, the increased interference in the densified networks will in return limit the achievable rate performance if not properly managed. The considered setup can be modeled as a MIMO interference network, which can be classified into the K-user MIMO interference channel (IC) and the K-cell MIMO interfering broadcast channel/multiple access channel (MIMO-IBC/IMAC) according to the number of mobile stations (MSs) simultaneously served by each base station (BS).

The thesis considers two physical layer (PHY) resource allocation problems that deal with the interference in both models: 1) Pareto boundary computation for the achievable rate region in a K-user single-stream MIMO IC, and 2) grouping-based interference alignment (GIA) with optimized IA-Cell assignment in a MIMO-IMAC under limited feedback. In each problem, the thesis seeks to provide a deeper understanding of the system and novel mathematical results, along with supporting numerical examples. Some of the main contributions can be summarized as follows.

It is an open problem to compute the Pareto boundary of the achievable rate region for a K-user single-stream MIMO IC. The K-user single-stream MIMO IC models multiple transmitter-receiver pairs which operate over the same spectrum simultaneously. Each transmitter and each receiver is equipped with multiple antennas, and a single desired data stream is communicated over each transmitter-receiver link. The individual achievable rates of the K users form a K-dimensional achievable rate region. To find efficient operating points in the achievable rate region, the Pareto boundary computation problem, which can be formulated as a multi-objective optimization problem, needs to be solved. The thesis transforms the multi-objective optimization problem into two single-objective optimization problems, a single-constraint rate maximization problem and an alternating rate profile optimization problem, based on the formulations of ε-constraint optimization and weighted Chebyshev optimization, respectively. The thesis proposes two alternating optimization algorithms to solve both single-objective optimization problems. The convergence of both algorithms is guaranteed. Also, a heuristic initialization scheme is provided for each algorithm to achieve a high-quality solution. By varying the weights in each single-objective optimization problem, numerical results show that both algorithms provide an inner bound very close to the Pareto boundary. Furthermore, the thesis also computes some key points exactly on the Pareto boundary in closed form.

A framework for interference alignment (IA) under limited feedback is proposed for a MIMO-IMAC. The MIMO-IMAC matches well the uplink scenario in cellular systems, where multiple cells share their spectrum and operate simultaneously. In each cell, a BS receives the desired signals from multiple MSs within its own cell, and each BS and each MS is equipped with multiple antennas. By allowing inter-cell coordination, the thesis develops a distributed IA framework under limited feedback from three aspects: the GIA, the IA-Cell assignment and dynamic feedback bit allocation (DBA). Firstly, the thesis provides a complete study, along with some new improvements, of the GIA, which enables computation of the exact IA precoders in closed form based on local channel state information at the receiver (CSIR). Secondly, the concept of IA-Cell assignment is introduced and its effect on the achievable rate and degrees of freedom (DoF) performance is analyzed. Two distributed matching approaches and one centralized assignment approach are proposed to find a good IA-Cell assignment in three scenarios with different backhaul overhead. Thirdly, under limited feedback, the thesis derives an upper bound on the residual interference-to-noise ratio (RINR), and formulates and solves a corresponding DBA problem. Finally, numerical results show that the proposed GIA with optimized IA-Cell assignment and DBA greatly outperforms the traditional GIA algorithm.
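
As a toy illustration of the rate-profile (weighted Chebyshev) scalarization described above, the sketch below traces candidate Pareto-boundary points of a two-user single-stream interference channel with transmit beamforming only, maximizing min_k R_k/alpha_k over unit-norm beamformers with a generic derivative-free optimizer; the channels, antenna count and optimizer are illustrative stand-ins for the alternating optimization algorithms developed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
nt, sigma2 = 3, 1.0                                 # transmit antennas, noise power
# H[(k, j)]: channel from transmitter j to receiver k (random placeholder channels).
H = {(k, j): (rng.standard_normal(nt) + 1j * rng.standard_normal(nt)) / np.sqrt(2)
     for k in range(2) for j in range(2)}

def rates(w):
    """Achievable rates of both users for stacked real-valued beamformer variables w."""
    W = [w[i * 2 * nt:(i + 1) * 2 * nt] for i in range(2)]
    W = [v[:nt] + 1j * v[nt:] for v in W]
    W = [v / np.linalg.norm(v) for v in W]          # unit-power beamformers
    r = []
    for k in range(2):
        sig = abs(np.vdot(H[(k, k)], W[k])) ** 2
        interf = abs(np.vdot(H[(k, 1 - k)], W[1 - k])) ** 2
        r.append(np.log2(1 + sig / (sigma2 + interf)))
    return np.array(r)

def rate_profile_point(alpha, n_starts=5):
    """One Pareto-boundary candidate: maximize min_k rates_k / alpha_k."""
    best, best_val = None, -np.inf
    for _ in range(n_starts):                       # random restarts (non-convex problem)
        w0 = rng.standard_normal(4 * nt)
        res = minimize(lambda w: -np.min(rates(w) / alpha), w0, method="Nelder-Mead",
                       options={"maxiter": 4000, "xatol": 1e-6, "fatol": 1e-8})
        if -res.fun > best_val:
            best, best_val = res.x, -res.fun
    return rates(best)

for a in np.linspace(0.05, 0.95, 7):                # sweep the rate profile
    print(np.round(rate_profile_point(np.array([a, 1 - a])), 3))
```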
