1

Oblique decision trees in transformed spaces.

Wickramarachchi, Darshana Chitraka January 2015 (has links)
Decision trees (DTs) play a vital role in statistical modelling. The simplicity and interpretability of the solution structure have made the method popular in a wide range of disciplines. In data classification problems, DTs recursively partition the feature space into disjoint sub-regions until each sub-region becomes homogeneous with respect to a particular class. Axis-parallel splits, the simplest form of splits, partition the feature space parallel to the feature axes. However, for some problem domains, DTs with axis-parallel splits can produce complicated boundary structures. As an alternative, oblique splits are used to partition the feature space, potentially simplifying the boundary structure. Various approaches have been explored to find optimal oblique splits. One approach is based on optimisation techniques and is considered the benchmark approach; however, its major limitation is that the tree induction algorithm is computationally expensive. On the other hand, split-finding approaches based on heuristic arguments have gained popularity and have improved on the benchmark methods. This thesis proposes a methodology to induce oblique decision trees in transformed spaces based on a heuristic argument. As the first goal of the thesis, a new oblique decision tree algorithm, called HHCART (HouseHolder Classification and Regression Tree), is proposed. The proposed algorithm uses a series of Householder matrices to reflect the training data at each non-terminal node during tree construction. The Householder matrices are constructed from the eigenvectors of each class's covariance matrix. Axis-parallel splits in the reflected (or transformed) spaces provide an efficient way of finding oblique splits in the original space. Experimental results show that the accuracy and size of HHCART trees are comparable with some benchmark methods in the literature. The appealing features of HHCART are that it can handle both qualitative and quantitative features in the same oblique split, and that it is conceptually simple and computationally efficient. Data mining applications often come with massive example sets, and inducing oblique DTs for such example sets can consume considerable time. HHCART is a serial, memory-resident algorithm, which may make it ineffective for massive example sets. As the second goal of the thesis, parallel-computing and disk-resident versions of the HHCART algorithm are presented so that HHCART can be used irrespective of the size of the problem. HHCART is a flexible algorithm, and the eigenvectors defining the Householder matrices can be replaced by other vectors deemed effective for oblique split finding. The third endeavour of this thesis explores this aspect of HHCART: it can be used with other vectors in order to improve classification results. For example, a normal vector of the angular bisector, introduced in the Geometric Decision Tree (GDT) algorithm, is used to construct the Householder reflection matrix; the proposed method produces better results than GDT for some problem domains. In a second case, Class Representative Vectors are introduced and used to construct the Householder reflection matrices. The results of this experiment show that these oblique trees produce classification results competitive with those achieved by some benchmark decision trees. DTs are constructed using two approaches, namely top-down and bottom-up. HHCART is a top-down tree, which is the most common approach. As the fourth idea of the thesis, the concept of HHCART is used to induce a new DT, HHBUT, using the bottom-up approach. The bottom-up approach performs cluster analysis prior to tree building to identify the terminal nodes. The use of the Bayesian Information Criterion (BIC) to determine the number of clusters leads to accurate and compact trees compared with cross-validation (CV) based bottom-up trees. We suggest that HHBUT is a good alternative to existing bottom-up trees, especially when the number of examples is much larger than the number of features.
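As a rough illustration of the reflection idea only (not the full HHCART algorithm, whose split search, impurity measure, and handling of qualitative features are omitted), the sketch below builds a Householder matrix from the dominant eigenvector of one class's covariance matrix and reflects the data so that an axis-parallel threshold in the reflected space corresponds to an oblique split in the original space. The function name and toy data are hypothetical.

```python
import numpy as np

def householder_from_vector(d):
    """Householder matrix H = I - 2*u*u^T/(u^T u) that reflects the unit
    vector d onto the first coordinate axis e1 (so H @ d is parallel to e1)."""
    d = d / np.linalg.norm(d)
    e1 = np.zeros_like(d)
    e1[0] = 1.0
    u = d - e1
    if np.allclose(u, 0.0):           # d is already axis-parallel
        return np.eye(d.size)
    return np.eye(d.size) - 2.0 * np.outer(u, u) / (u @ u)

# Toy two-class data (hypothetical). Reflect using the dominant eigenvector
# of class 0's covariance matrix; a threshold on one coordinate of the
# reflected data is an oblique split in the original space.
rng = np.random.default_rng(0)
X0 = rng.normal(size=(50, 2)) @ np.array([[2.0, 1.0], [0.0, 0.5]])
X1 = X0 + np.array([3.0, 3.0])
X = np.vstack([X0, X1])

evals, evecs = np.linalg.eigh(np.cov(X0, rowvar=False))
H = householder_from_vector(evecs[:, np.argmax(evals)])
X_reflected = X @ H.T   # "X_reflected[:, 0] <= c" is the oblique split "H[0, :] @ x <= c"
```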
2

Algoritmos da reconstrução espectral para matrizes de Jacobi / Spectral reconstruction algorithms for Jacobi matrices

Moraes, Ines Ferreira January 1993 (has links)
The reconstruction of Jacobi matrices from spectral data is of great importance in vibration theory. We use three distinct methods for this reconstruction: the orthogonal polynomial approach, the Lanczos iteration, and the Householder reduction. They are applied to a spring-mass system, assuming that the poles and zeros of the frequency response function corresponding to a sinusoidal forcing at an end or at an interior point are known. Numerical results are obtained with MATLAB and FORTRAN codes.
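One of the three ingredients, Householder reduction, can be sketched in isolation: a real symmetric matrix is driven to tridiagonal (Jacobi) form by successive Householder similarity transformations. The sketch below is a generic reduction, not the thesis's reconstruction-from-spectral-data procedure; the function name and test matrix are illustrative.

```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a real symmetric matrix to tridiagonal (Jacobi) form by
    Householder similarity transformations: T = Q^T A Q."""
    T = np.array(A, dtype=float)
    n = T.shape[0]
    for k in range(n - 2):
        x = T[k + 1:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])    # sign choice avoids cancellation
        norm_v = np.linalg.norm(v)
        if norm_v == 0.0:
            continue
        v /= norm_v
        H = np.eye(n)
        H[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)       # H is symmetric and orthogonal
        T = H @ T @ H
    return T

# Quick check on a random symmetric matrix: the eigenvalues are preserved.
rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5))
A = B + B.T
T = householder_tridiagonalize(A)
assert np.allclose(np.linalg.eigvalsh(A), np.linalg.eigvalsh(T))
```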
3

Étude de la complexité de la décomposition orthogonale d'une matrice sur plusieurs modèles d'architectures parallèles / On the complexity of the orthogonal decomposition of a matrix on several models of parallel architectures

Daoudi, El Mostafa 12 May 1989 (has links) (PDF)
Several analyses of the parallel Givens method on a shared-memory architecture are examined, and complexity results and asymptotically optimal algorithms are presented. A second part, devoted to distributed-memory architectures, takes communication costs into account; a macroscopic analysis shows the influence of the architecture on the complexity of the Givens and Householder decompositions executed on different networks of processors communicating by message passing.
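For orientation, the sequential kernel whose parallel scheduling such analyses study, a Givens QR sweep that zeroes the subdiagonal entries one rotation at a time, can be sketched as follows (a minimal serial version; the shared- and distributed-memory aspects are not modelled here).

```python
import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def givens_qr(A):
    """Plain sequential Givens QR: zero the subdiagonal entries column by
    column, bottom up. Parallel variants schedule independent rotations
    (acting on disjoint row pairs) simultaneously."""
    R = np.array(A, dtype=float)
    m, n = R.shape
    for j in range(n):
        for i in range(m - 1, j, -1):
            c, s = givens(R[i - 1, j], R[i, j])
            G = np.array([[c, s], [-s, c]])
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
    return R   # upper triangular

A = np.arange(12.0).reshape(4, 3) + np.eye(4, 3)
R = givens_qr(A)
assert np.allclose(np.tril(R, -1), 0.0)
```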
4

Analysis of Fix‐point Aspects for Wireless Infrastructure Systems

Grill, Andreas, Englund, Robin January 2009 (has links)
A large amount of today's telecommunication consists of mobile and short-distance wireless applications, where the effect of the channel is unknown and changes over time and thus needs to be described statistically. The received signal therefore cannot be accurately predicted and has to be estimated. Since telecom systems are implemented in real time, the hardware in the receiver for estimating the sent signal can, for example, be based on a DSP on which the statistical calculations are performed. A fixed-point DSP, with a limited number of bits and a fixed binary point, causes larger quantization errors than floating-point operations of higher accuracy. The focus of this thesis has been to build a library of functions for handling fixed-point data. A class that can handle the most common arithmetic operations and a least-squares solver for fixed-point data have been implemented in MATLAB code. The MATLAB Fixed-Point Toolbox could have been used to solve this task, but in order to have full control of the algorithms and the fixed-point handling, an independent library was created. The conclusion of the simulations made in this thesis is that the least-squares results depend more on the number of integer bits than on the number of fractional bits. / Keywords: fixed-point, telecommunications, DSP, MATLAB, Fixed-Point Toolbox, least-squares solution, floating point, Householder QR factorization, saturation, quantization noise
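The fixed-point model under discussion can be mimicked in a few lines. The sketch below is not the thesis's MATLAB fixed-point class; it only illustrates quantization with saturation for a chosen split between integer and fractional bits, and one way to compare least-squares solutions computed from quantized data. All parameter values are made up for illustration.

```python
import numpy as np

def to_fixed_point(x, int_bits, frac_bits):
    """Quantize to a signed fixed-point format with int_bits integer bits
    (excluding sign) and frac_bits fractional bits, saturating on overflow."""
    scale = 2.0 ** frac_bits
    lo, hi = -(2.0 ** int_bits), 2.0 ** int_bits - 1.0 / scale
    return np.clip(np.round(x * scale) / scale, lo, hi)

# Hypothetical experiment: quantize A and b with different bit splits and
# compare the least-squares solutions against the floating-point one.
rng = np.random.default_rng(2)
A = rng.normal(scale=4.0, size=(100, 4))
x_true = np.array([1.5, -2.0, 0.25, 3.0])
b = A @ x_true + 0.01 * rng.normal(size=100)

x_float, *_ = np.linalg.lstsq(A, b, rcond=None)
for int_bits, frac_bits in [(3, 12), (5, 12), (5, 4)]:
    Aq, bq = to_fixed_point(A, int_bits, frac_bits), to_fixed_point(b, int_bits, frac_bits)
    x_q, *_ = np.linalg.lstsq(Aq, bq, rcond=None)
    print(int_bits, frac_bits, np.linalg.norm(x_q - x_float))
```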
5

以資料採礦的方法探索影響台灣地區女性戶長的原因 / Exploring the factors affecting female household headship in Taiwan with data mining

李孟謙, LEE, MENG CHIEN Unknown Date (has links)
Data mining is an emerging analytical technique that combines statistical analysis, information engineering, and domain expertise; examples of its use include market analysis in industry, financial analysis in banking, risk management in insurance, disease analysis in biotechnology, and government population statistics, and the number of practitioners applying it across industries keeps growing. However, precisely because data mining is a newly developing field, many questions remain open, such as how to handle data of different types. This thesis examines two contrasting types of data, namely data with many observations and few variables, and data with few observations and many variables. Using the concepts of supervised learning and classification, it investigates the factors affecting female household heads with the observation-rich 2000 Taiwan census data, and interprets serum disease profiles with the variable-rich prostate cancer data, studying suitable processing steps for each type of data. The main conclusions are: (1) When the amount of data is large, sampling can be introduced; data mining can use sampling to reduce the data volume and the processing time, and the classification error rates obtained from the sampled data and from the full data are very close, so sampling is a feasible approach. In the study of female household heads, taking the eastern region (the smallest subset) as a representative case, a 3% sample produced results close to those obtained with the full data without losing classification accuracy, which is economically advantageous. (2) When the amount of data is small, variable reduction can be introduced; seventeen summary statistics of descriptive and inequality measures can replace the full set of variables in the analysis, and with logistic regression the classification error rate remains within an acceptable range while the problem of insufficient degrees of freedom in traditional analysis is resolved. In the prostate cancer study, the serum intensities measured by mass spectrometry were reduced with these variable-reduction techniques to improve analytical efficiency without losing much classification accuracy; moreover, with sufficient degrees of freedom after the reduction, traditional statistical methods can be applied to the prostate cancer data, broadening the choice of analytical tools.
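The sampling idea in conclusion (1) can be illustrated on synthetic data; the sketch below is only a stand-in for the census analysis (the data-generating model, the 3% sampling rate, and the use of scikit-learn are illustrative assumptions).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the census setting: many observations, few
# variables. Compare the test error of a classifier fit on a small random
# sample with one fit on all of the training data.
rng = np.random.default_rng(3)
n = 100_000
X = rng.normal(size=(n, 5))
p = 1.0 / (1.0 + np.exp(-(1.2 * X[:, 0] - 0.8 * X[:, 3] + 0.3)))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

full = LogisticRegression().fit(X_tr, y_tr)
idx = rng.choice(len(X_tr), size=int(0.03 * len(X_tr)), replace=False)  # 3% sample
small = LogisticRegression().fit(X_tr[idx], y_tr[idx])

print("full-data error:", 1 - full.score(X_te, y_te))
print("3%-sample error:", 1 - small.score(X_te, y_te))
```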
6

Méthodes par blocs adaptées aux matrices structurées et au calcul du pseudo-inverse / Block methods adapted to structured matrices and calculation of the pseudo-inverse

Archid, Atika 27 April 2013 (has links)
In this thesis we study some block Krylov subspace methods in the symplectic setting. Unlike classical methods, these methods allow the reduced matrix to preserve the Hamiltonian, skew-Hamiltonian, or symplectic structure of a given matrix. Among them, we focus on the block symplectic Arnoldi method, which we call the block J-Arnoldi algorithm. Our main goal is to study this method, theoretically and numerically, on the structure of ℝ^(2n×2s) viewed as a free module over K = ℝ^(2s×2s), where s ≪ n is the block size. A second aim is to approximate the operator exp(A)V, where A is a real Hamiltonian, skew-symmetric matrix of size 2n × 2n and V is a rectangular ortho-symplectic matrix of size 2n × 2s, on the block Krylov subspace K_m(A, V) = blockspan{V, AV, ..., A^(m-1)V}, while preserving the structure of V. This approximation is required in many applications, for example in solving systems of ordinary differential equations (ODEs) and parameter-dependent or time-dependent partial differential equations (PDEs). We also present a block symplectic, structure-preserving Lanczos method, which we call the block J-Lanczos algorithm; it reduces a structured matrix to block J-tridiagonal form. We propose algorithms based on two normalization methods, the SR factorization and the RjR factorization. In the last part, we propose an algorithm that generalizes Greville's method to compute the Moore-Penrose pseudo-inverse of a rectangular real matrix iteratively, block of rows by block of rows. All the methods are accompanied by numerical examples that show the effectiveness of our approaches.
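For orientation, a plain Euclidean block Krylov approximation of exp(A)V can be sketched as follows; unlike the block J-Arnoldi method studied in the thesis, this sketch does not keep the basis ortho-symplectic or exploit the Hamiltonian structure of A, and the test matrices are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

def block_krylov_expm(A, V, m):
    """Approximate expm(A) @ V on K_m(A, V) = span{V, AV, ..., A^(m-1)V},
    using an ordinary orthonormal basis (no symplectic structure kept)."""
    blocks = [V]
    for _ in range(m - 1):                     # naive monomial block basis; a real
        blocks.append(A @ blocks[-1])          # block Arnoldi orthogonalizes each step
    Q, _ = np.linalg.qr(np.hstack(blocks))     # orthonormal basis of K_m(A, V)
    H = Q.T @ A @ Q                            # small projected matrix
    return Q @ expm(H) @ (Q.T @ V)

rng = np.random.default_rng(4)
n, s, m = 200, 2, 15
A = rng.normal(size=(n, n)) / np.sqrt(n)
V, _ = np.linalg.qr(rng.normal(size=(n, 2 * s)))
approx = block_krylov_expm(A, V, m)
print(np.linalg.norm(approx - expm(A) @ V) / np.linalg.norm(expm(A) @ V))
```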
7

Sur des méthodes préservant les structures d'une classe de matrices structurées / On structure-preserving methods of a class of structured matrices

Ben Kahla, Haithem 14 December 2017 (has links)
Classical linear algebra methods, for computing the eigenvalues and eigenvectors of a matrix, or low-rank approximations of a solution, etc., do not take matrix structures into account; such structures are usually destroyed during the numerical process. Alternative structure-preserving methods are the subject of considerable interest in the community, and this thesis is a contribution to this field. The SR decomposition is usually computed via the symplectic Gram-Schmidt algorithm. As in the classical case, a loss of orthogonality can occur. To remedy this, we propose two algorithms, RSGSi and RMSGSi, in which the re-orthogonalization of the current set of vectors against the previously computed set is performed twice; the loss of J-orthogonality improves very significantly. A direct rounding-error analysis of the symplectic Gram-Schmidt algorithm is very hard to carry out; we manage to get around this difficulty and give bounds on the loss of J-orthogonality and on the factorization error. Another way to compute the SR decomposition is based on symplectic Householder transformations. An optimal choice of the free parameters leads to the SROSH algorithm; however, the latter may be subject to numerical instability. We propose a new modified version, SRMSH, which has the advantage of being as numerically stable as possible, and a detailed study leads to the variants SRMSH and SRMSH2. In order to build an SR algorithm of complexity O(n³), where 2n is the size of the matrix, a suitable reduction of the matrix to a condensed form (upper J-Hessenberg form) via adequate similarities is crucial. This reduction may be performed with the JHESS algorithm. We show that it is possible to reduce a general matrix to upper J-Hessenberg form using only symplectic Householder transformations. The new algorithm, called JHSH, is based on an adaptation of the SRSH algorithm, and we derive two variants that are as stable as possible, JHMSH and JHMSH2; these algorithms behave quite similarly to the JHESS algorithm. An important feature of all these algorithms is that they may encounter a fatal breakdown, or a severe form of near-breakdown, which halts the computation or leads to serious numerical instability, depriving the final result of any meaning. This phenomenon has no equivalent in the Euclidean case. We devise a very efficient strategy for curing fatal breakdowns and treating near-breakdowns; the new algorithms incorporating this strategy are referred to as MJHESS, MJHSH, JHM²SH and JHM²SH2. These strategies are then incorporated into the implicit version of the SR algorithm, allowing it to overcome the difficulties encountered at a fatal breakdown or near-breakdown; recall that, without them, the SR algorithm stops.
Finally, in another framework of structured matrices, we present a robust algorithm, via the FFT and a Hankel matrix, based on computing approximate greatest common divisors (GCDs) of polynomials, for solving the problem of blind image deconvolution. Specifically, we design an algorithm for computing the GCD of two bivariate polynomials. The new approach is based on a fast GCD algorithm for univariate polynomials, of quadratic complexity O(n²) flops. The complexity of our algorithm is O(n² log(n)), where the size of the blurred images is n × n. Experimental results with synthetically blurred images illustrate the effectiveness of our approach.
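The "re-orthogonalize twice" idea behind RSGSi and RMSGSi has a familiar Euclidean analogue, sketched below with ordinary modified Gram-Schmidt; the thesis works instead with the symplectic inner product and J-orthogonality, which this sketch does not model, and the test matrix is illustrative.

```python
import numpy as np

def mgs(A, passes=1):
    """Modified Gram-Schmidt QR; passes=2 re-orthogonalizes each column a
    second time (Euclidean analogue of the 'orthogonalize twice' idea)."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    for j in range(n):
        v = A[:, j].copy()
        for _ in range(passes):
            for i in range(j):
                v -= (Q[:, i] @ v) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

# Ill-conditioned test matrix: orthogonality degrades with one pass and is
# restored to roughly machine precision with two.
rng = np.random.default_rng(5)
n = 50
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
W, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = U @ np.diag(np.logspace(0, -8, n)) @ W.T      # condition number about 1e8
for passes in (1, 2):
    Q = mgs(A, passes)
    print(passes, np.linalg.norm(Q.T @ Q - np.eye(n)))
```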
