51

Investigation of Magnetohydrodynamic Fluctuation Modes in the STOR-M Tokamak

Gamudi Elgriw, Sayf 31 July 2009 (has links)
Magnetohydrodynamic (MHD) instabilities are among the most intriguing topics in tokamak physics, and a feasibility study was conducted in the Saskatchewan Torus-Modified (STOR-M) tokamak to investigate global MHD activity during the normal (L-mode) and improved (H-mode) confinement regimes. The experimental setup consists of 32 discrete Mirnov coils arranged into four poloidal arrays mounted on STOR-M at evenly spaced toroidal locations. The perturbed magnetic field fluctuations during STOR-M discharges were acquired and processed using the Fourier transform (FT), wavelet analysis, and singular value decomposition (SVD) techniques. In L-mode discharges, the poloidal MHD mode numbers varied from 2 to 4, with peak frequencies in the range 20-40 kHz. The dominant toroidal mode numbers were between 1 and 2, oscillating at frequencies of 15-35 kHz. In another experiment, a noticeable suppression of MHD activity was observed during the H-mode-like phase induced by compact torus (CT) injection into STOR-M. However, a burst-like mode called the gong mode was triggered prior to the H-L transition, followed by coherent Mirnov oscillations. Mirnov oscillations with strong amplitude modulations were also observed in STOR-M, and correlations were found between Mirnov signals and soft X-ray (SXR) signals.
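As a rough, hedged illustration of the SVD step this abstract describes, the sketch below applies it to a synthetic single 8-coil poloidal array with an assumed m = 2 mode at 25 kHz; the geometry, sampling rate, and noise level are invented for the example and are not taken from the thesis.

```python
import numpy as np

fs = 1e6                                  # sampling rate in Hz (assumed)
t = np.arange(4096) / fs
theta = 2 * np.pi * np.arange(8) / 8      # angles of an 8-coil poloidal array
m, f = 2, 25e3                            # mode number and frequency to recover

# Each coil sees a phase-shifted oscillation from the rotating mode, plus noise.
rng = np.random.default_rng(0)
signals = np.cos(m * theta[:, None] - 2 * np.pi * f * t[None, :])
signals += 0.2 * rng.standard_normal(signals.shape)

# SVD factors the coil-by-time matrix into spatial (U) and temporal (Vh)
# structure; a rotating mode shows up as a pair of comparable singular values.
U, s, Vh = np.linalg.svd(signals, full_matrices=False)
print("leading singular values:", np.round(s[:4], 1))

# The spatial FFT of the leading singular vector peaks at the poloidal
# mode number m.
spatial_spectrum = np.abs(np.fft.fft(U[:, 0]))
print("estimated poloidal mode number:", int(np.argmax(spatial_spectrum[1:5])) + 1)
```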
53

A Singular Value Decomposition Approach For Recommendation Systems

Osmanli, Osman Nuri 01 July 2010 (has links) (PDF)
Data analysis has become a very important area for both companies and researchers as a consequence of technological developments in recent years. Companies are trying to increase their profit by analyzing existing data about their customers and making decisions for the future according to the results of these analyses. In parallel with companies' needs, researchers are investigating methodologies to analyze data more accurately and with higher performance. Recommender systems are among the most popular and widespread data analysis tools. A recommender system applies knowledge discovery techniques to existing data and makes personalized product recommendations during live customer interaction. However, the huge growth in the numbers of customers and products, especially on the internet, poses challenges for recommender systems: producing high-quality recommendations and performing millions of recommendations per second. To improve the performance of recommender systems, researchers have proposed many different methods. The Singular Value Decomposition (SVD) technique, based on dimensionality reduction, is one such method; it produces high-quality recommendations but requires very expensive matrix calculations. In this thesis, we propose and experimentally validate contributions to the SVD technique based on user and item categorization. In addition, we incorporate tags into the classical 2D (user-item) SVD technique and report experimental results. The results are promising for building more accurate and scalable recommender systems.
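The following is a minimal sketch of the core SVD recommendation idea on a toy user-item matrix; the matrix, the mean-filling of unobserved entries, and the rank k are illustrative assumptions, and the thesis's categorization and tag extensions are not reproduced.

```python
import numpy as np

# Rows = users, columns = items; 0 marks an unobserved rating.
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

# Fill unobserved entries with the mean of observed ratings so the
# factorization is not dominated by the zeros.
mask = R > 0
filled = np.where(mask, R, R[mask].mean())

# Keep the k strongest singular directions; the rank-k reconstruction
# smooths the matrix and yields predictions for the missing cells.
k = 2
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print("predicted rating, user 0 / item 2:", round(R_hat[0, 2], 2))
```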
54

An Acoustically Oriented Vocal-Tract Model

ITAKURA, Fumitada, TAKEDA, Kazuya, YEHIA, Hani C. 20 August 1996 (has links)
No description available.
55

Clustering of high-dimensional data

Τασουλής, Σωτήρης 09 October 2009 (has links)
Cluster analysis groups data objects based only on information found in the data that describes the objects and their relationships. The goal is that the objects within a group be similar (or related) to one another and different from (or unrelated to) the objects in other groups. The greater the similarity (or homogeneity) within a group and the greater the difference between groups, the better or more distinct the clustering. Clustering methods can be broadly divided into three categories: hierarchical, partitioning, and density-based (although other categorisations exist). Hierarchical algorithms provide nested hierarchies of clusters in a top-down (divisive) or bottom-up (agglomerative) fashion. This work focuses on the class of hierarchical divisive clustering algorithms. Within this class, the Principal Direction Divisive Partitioning (PDDP) algorithm is of particular value. PDDP uses the projection of the data onto the principal components of the associated data covariance matrix, which allows its application to high-dimensional data. In this work an improvement of the PDDP algorithm is proposed. The proposed algorithm merges concepts from density estimation and projection-based methods into a fast and efficient clustering algorithm capable of dealing with high-dimensional data. Experimental results show improved partitioning performance compared to other popular methods. Moreover, the problem of automatically determining the number of clusters, which is central to cluster analysis, is also explored.
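As a hedged illustration, the sketch below performs a single PDDP split on synthetic two-cluster data: project the centered data onto the leading principal direction and split at zero. The density-based refinements proposed in the thesis are not reproduced; the split-at-zero rule is the basic PDDP heuristic.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 1, (50, 10)),   # cluster A
               rng.normal(+3, 1, (50, 10))])  # cluster B

centered = X - X.mean(axis=0)
# The leading right singular vector of the centered matrix is the first
# principal direction of the covariance matrix.
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
projection = centered @ Vt[0]

# One divisive step: points on each side of zero form the two children.
left, right = X[projection < 0], X[projection >= 0]
print("split sizes:", len(left), len(right))   # expect roughly 50 / 50
```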
56

Generalized Hebbian Algorithm for Dimensionality Reduction in Natural Language Processing

Gorrell, Genevieve January 2006 (has links)
The current surge of interest in search and comparison tasks in natural language processing has brought with it a focus on vector space approaches and vector space dimensionality reduction techniques. Presenting data as points in hyperspace provides opportunities to use a variety of well-developed tools pertinent to this representation. Dimensionality reduction allows data to be compressed and generalised. Eigen decomposition and related algorithms are one category of approaches to dimensionality reduction, providing a principled way to reduce data dimensionality that has time and again shown itself capable of enabling access to powerful generalisations in the data. Issues with the approach, however, include computational complexity and limitations on the size of dataset that can reasonably be processed in this way. Large datasets are a persistent feature of natural language processing tasks. This thesis focuses on two main questions. Firstly, in what ways can eigen decomposition and related techniques be extended to larger datasets? Secondly, this having been achieved, of what value is the resulting approach to information retrieval and to statistical language modelling at the n-gram level? The applicability of eigen decomposition is shown to be extendable through the use of an extant algorithm, the Generalized Hebbian Algorithm (GHA), and a novel extension of this algorithm to paired data, the Asymmetric Generalized Hebbian Algorithm (AGHA). Several original extensions to these algorithms are also presented, improving their applicability in various domains. The applicability of GHA to Latent Semantic Analysis-style tasks is investigated. Finally, AGHA is used to investigate the value of singular value decomposition, an eigen decomposition variant, to n-gram language modelling. A sizeable perplexity reduction is demonstrated.
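A minimal sketch of the Generalized Hebbian Algorithm (Sanger's rule) follows, estimating the top three eigenvectors of a synthetic covariance one sample at a time; the dimensions, eigenvalue spectrum, learning rate, and iteration count are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 20, 3
# Controlled covariance: orthogonal directions (columns of Q) with a few
# dominant eigenvalues, so the true answer is known by construction.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
eigenvalues = np.array([9.0, 4.0, 2.0] + [0.2] * (d - 3))
cov_factor = Q @ np.diag(np.sqrt(eigenvalues))

W = 0.1 * rng.standard_normal((k, d))     # rows converge to top eigenvectors
eta = 0.005
for _ in range(20000):
    x = cov_factor @ rng.standard_normal(d)   # sample with the chosen covariance
    y = W @ x
    # Sanger's rule: Hebbian term minus a lower-triangular decorrelation term,
    # so row i converges to the i-th principal direction.
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# Compare against the true top-k eigenvectors (the first k columns of Q).
W_unit = W / np.linalg.norm(W, axis=1, keepdims=True)
alignment = np.abs(np.sum(W_unit * Q[:, :k].T, axis=1))
print("|cosine| with true eigenvectors:", np.round(alignment, 2))
```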
57

Acquiring symbolic design optimization problem reformulation knowledge: On computable relationships between design syntax and semantics

Sarkar, Somwrita January 2009 (has links)
Doctor of Philosophy (PhD) / This thesis presents a computational method for the inductive inference of explicit and implicit semantic design knowledge from the symbolic-mathematical syntax of design formulations, using an unsupervised pattern recognition and extraction approach. Existing research shows that AI and machine-learning-based design computation approaches require either high levels of knowledge engineering or large training databases to acquire problem reformulation knowledge. The method presented in this thesis addresses these methodological limitations. The thesis develops, tests, and evaluates ways in which the method may be employed for design problem reformulation. The method is based on the linear-algebraic factorization method Singular Value Decomposition (SVD), dimensionality reduction, and similarity measurement through unsupervised clustering. It calculates linear approximations of the associative patterns of symbol co-occurrences in a design problem representation to infer induced coupling strengths between variables, constraints, and system components. Unsupervised clustering of these approximations is used to identify useful reformulations. These two components of the method automate a range of reformulation tasks that have traditionally required different solution algorithms. Example reformulation tasks it performs include selection of linked design variables, parameters and constraints; design decomposition; modularity and integrative systems analysis; heuristic aid to design “case” identification; topology modeling; and layout planning. The relationship between the syntax of design representation and the encoded semantic meaning is an open question in design theory. Based on the results of the method, the thesis presents a set of theoretical postulates on computable relationships between design syntax and semantics. The postulates relate the performance of the method to empirical findings and theoretical insights from cognitive neuroscience and cognitive science on how the human mind engages in symbol processing, and to the resulting capacities inherent in symbolic representational systems to encode “meaning”. The performance of the method suggests that semantic “meaning” is a higher-order, global phenomenon that lies distributed in the design representation in explicit and implicit ways. A one-to-one local mapping between a design symbol and its meaning, the approach largely prevalent in many AI and learning algorithms, may not be sufficient to capture and represent this meaning. By changing the theoretical standpoint on how a “symbol” is defined in design representations, it was possible to use a simple set of mathematical ideas to perform unsupervised inductive inference of knowledge in a knowledge-lean and training-lean manner, in a domain that traditionally relies on “giving” the system complex design domain and task knowledge to perform the same set of tasks.
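As a loose, hedged illustration of the SVD-plus-clustering idea, the sketch below embeds design variables by their co-occurrence with constraints in a toy incidence matrix and splits them with a crude sign-based spectral bisection; the matrix is invented, and the bisection merely stands in for the unsupervised clustering the thesis employs.

```python
import numpy as np

# Rows = design variables, columns = constraints; 1 means the variable
# appears in that constraint. Two weakly coupled subsystems by design.
M = np.array([[1, 1, 0, 0, 0, 0],
              [1, 1, 1, 0, 0, 0],
              [0, 1, 1, 1, 0, 0],   # variable 2 couples weakly to constraint 3
              [0, 0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1],
              [0, 0, 0, 0, 1, 1]], dtype=float)

U, s, Vt = np.linalg.svd(M, full_matrices=False)
embedding = U[:, :2] * s[:2]          # rank-2 embedding of the variables

# Group variables by the sign of the second coordinate -- spectral
# bisection, a crude proxy for proper unsupervised clustering.
groups = (embedding[:, 1] >= 0).astype(int)
print("variable groups:", groups)      # should separate {0,1,2} from {3,4,5}
```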
59

High resolution time reversal (TR) imaging based on spatio-temporal windows

Odedo, Victor January 2017 (has links)
Through-the-wall imaging (TWI) is crucial for applications such as law enforcement, rescue missions, and defence. TWI methods aim to provide detailed information about spaces that cannot be seen directly. Current state-of-the-art TWI systems utilise ultra-wideband (UWB) signals to simultaneously achieve wall penetration and high resolution. These systems transmit signals and mathematically back-project the received reflections to image the scenario of interest. However, they are diffraction-limited and encounter problems due to multipath signals in the presence of multiple scatterers. Time reversal (TR) methods have become popular for remote sensing because they can exploit multipath signals to achieve super-resolution (resolution that beats the diffraction limit). The Decomposition of the Time-Reversal Operator (DORT, from its French acronym) and MUltiple SIgnal Classification (MUSIC) methods are both TR techniques that involve taking the Singular Value Decomposition (SVD) of the Multistatic Data Matrix (MDM), which contains the signals received from the target(s) to be located. The DORT and MUSIC imaging methods have generated a lot of interest due to their robustness and ability to locate multiple targets. However, these TR-based methods encounter problems when the targets are behind an obstruction, particularly when the properties of the obstruction are unknown, as is often the case in TWI applications. This dissertation introduces a novel total sub-MDM algorithm that uses the MUSIC method to image targets hidden behind an obstruction and achieve super-resolution. The algorithm utilises spatio-temporal windows to divide the full MDM into sub-MDMs; the summation of the images obtained from each sub-MDM gives a clearer image of a scenario than can be obtained using the full MDM. Furthermore, we propose a total sub-differential MDM algorithm that uses the MUSIC method to obtain images of moving targets hidden behind an obstructing material.
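A minimal sketch of SVD/MUSIC localization from a multistatic data matrix follows, for a single point scatterer in free space under the Born approximation; the array geometry, wavelength, target position, and noise level are illustrative assumptions, and the wall model and sub-MDM windowing of the dissertation are not reproduced.

```python
import numpy as np

k = 2 * np.pi / 0.1                    # wavenumber for a 0.1 m wavelength
array_x = np.linspace(-0.5, 0.5, 11)   # 11-element linear array along y = 0
target = np.array([0.12, 1.0])         # true target position (assumed)

def green(px, py):
    """Free-space steering vector from each array element to (px, py)."""
    r = np.hypot(array_x - px, py)
    return np.exp(1j * k * r) / r

# Born-approximation MDM for a single point scatterer: the outer product
# of the steering vector with itself, plus measurement noise.
g_t = green(*target)
rng = np.random.default_rng(3)
K = np.outer(g_t, g_t)
K += 0.01 * (rng.standard_normal(K.shape) + 1j * rng.standard_normal(K.shape))

# Signal subspace = leading singular vectors; one scatterer -> rank one.
U, s, _ = np.linalg.svd(K)
Us = U[:, :1]

# MUSIC pseudospectrum: peaks where the steering vector lies in the
# signal subspace (i.e., is orthogonal to the noise subspace).
xs, ys = np.linspace(-0.5, 0.5, 101), np.linspace(0.5, 1.5, 101)
best, best_pt = -np.inf, None
for px in xs:
    for py in ys:
        g = green(px, py)
        g /= np.linalg.norm(g)
        resid = g - Us @ (Us.conj().T @ g)   # projection onto noise subspace
        val = 1.0 / (np.linalg.norm(resid) ** 2 + 1e-12)
        if val > best:
            best, best_pt = val, (px, py)
print("estimated target position:", np.round(best_pt, 2))
```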
60

New methods for vectorcardiographic signal processing

Karsikas, M. (Mari) 15 November 2011 (has links)
Vectorcardiography (VCG) determines the direction and magnitude of the heart’s electrical forces. Interpretation of digital three-dimensional vectorcardiograms in clinical applications requires robust methods and novel approaches for calculating vectorcardiographic features. This dissertation aimed to develop new methods for vectorcardiographic signal processing. The robustness of selected pre-processing and feature extraction algorithms was improved, novel methods for detecting injured myocardial tissue from the electrocardiogram (ECG) were devised, and the dynamic behaviour of vectorcardiographic features was determined. The main results of the dissertation are: (1) The digitizing process and proper filtering did not produce diagnostically significant distortions in dipolar, Singular Value Decomposition (SVD) based ECG parameters, whereas non-dipolar parameters were very sensitive to these pre-processing operations. (2) A novel method for estimating the severity of myocardial infarction (MI) was developed by combining an action-potential-based computer model with 12-lead ECG patient data. Using the method, it is possible to calculate an approximate estimate of the maximum troponin value and therefore the severity of the MI. In addition, the size and location of the myocardial infarction were found to affect diagnostically significant changes in the total-cosine-R-to-T (TCRT) parameter, both in the simulations and in the patient study. (3) The results also showed that carefully targeted improvements to the basic algorithm of the TCRT parameter can markedly decrease the number of algorithm-based failures and therefore improve the diagnostic value of TCRT in different patient data. (4) Finally, a method for calculating beat-to-beat vectorcardiographic features during exercise was developed. Breathing was observed to affect the beat-to-beat variability of all the QRS/T angle measures, and the trend of the TCRT parameter during exercise was found to be negative. The results also clearly showed that the QRS/T angle measures exhibit a strong correlation with heart rate in individual subjects. These results highlight the importance of robust algorithms in VCG analysis and should be taken into account in further studies, so that vectorcardiography can be utilized more effectively in clinical applications.
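As a loose, hedged illustration in the spirit of the TCRT measure, the sketch below computes the cosine between the dominant QRS and T-wave directions of a synthetic three-lead VCG beat, taking each direction from an SVD of the corresponding time window; the waveform, window boundaries, and use of an absolute cosine are illustrative simplifications, not the thesis's exact algorithm.

```python
import numpy as np

fs = 500                               # sampling rate in Hz (assumed)
t = np.arange(fs) / fs                 # one 1-second beat

# Crude synthetic VCG: a narrow QRS deflection and a broad T wave with
# different spatial directions.
qrs_dir = np.array([1.0, 0.5, 0.2])
t_dir = np.array([0.8, 0.7, 0.1])
qrs = np.exp(-((t - 0.20) / 0.02) ** 2)          # narrow QRS envelope
twave = 0.3 * np.exp(-((t - 0.45) / 0.06) ** 2)  # broad T envelope
vcg = np.outer(qrs_dir, qrs) + np.outer(t_dir, twave)   # 3 x N loop

def dominant_direction(segment):
    """Dominant spatial direction of a 3 x n window: first left singular vector."""
    U, _, _ = np.linalg.svd(segment, full_matrices=False)
    return U[:, 0]

qrs_win = vcg[:, int(0.15 * fs):int(0.25 * fs)]
t_win = vcg[:, int(0.35 * fs):int(0.55 * fs)]
u_qrs, u_t = dominant_direction(qrs_win), dominant_direction(t_win)
tcrt_like = abs(u_qrs @ u_t)           # SVD sign ambiguity -> use |cosine|
print("QRS/T cosine:", round(float(tcrt_like), 3))
```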
