61

Digital video watermarking using singular value decomposition and two-dimensional principal component analysis

Kaufman, Jason R. 14 April 2006 (has links)
No description available.
62

Singular value decomposition and 2D principal component analysis of iris-biometrics for automatic human identification

Brown, Michael J. 05 September 2006 (has links)
No description available.
63

Mining Complex High-Order Datasets

Barnathan, Michael January 2010 (has links)
Selection of an appropriate structure for storage and analysis of complex datasets is a vital but often overlooked decision in the design of data mining and machine learning experiments. Most present techniques impose a matrix structure on the dataset, with rows representing observations and columns representing features. While this assumption is reasonable when features are scalar and do not exhibit co-dependence, the matrix data model becomes inappropriate when dependencies between non-target features must be modeled in parallel, or when features naturally take the form of higher-order multilinear structures. Such datasets particularly abound in functional medical imaging modalities, such as fMRI, where accurate integration of both spatial and temporal information is critical. Although tensor analysis methodologies are built on well-studied mathematical tools and are necessary to take full advantage of the high-order structure of these datasets, they have only recently entered widespread use in the data mining community and remain relatively absent from the biomedical literature. Furthermore, naive tensor approaches suffer from fundamental efficiency problems which limit their practical use in large-scale high-order mining, and they do not capture the local neighborhoods necessary for accurate spatiotemporal analysis. To address these issues, a comprehensive framework based on wavelet analysis, tensor decomposition, and the WaveCluster algorithm is proposed for preprocessing, classification, clustering, compression, feature extraction, and latent concept discovery on large-scale high-order datasets, with a particular emphasis on applications in computer-assisted diagnosis.
Our framework is evaluated on a 9.3 GB fMRI motor task dataset of both high dimensionality and high order, performing favorably against traditional voxelwise and spectral methods of analysis, discovering latent concepts suggestive of subject handedness, and reducing space and time complexities by up to two orders of magnitude. Novel wavelet and tensor tools are derived in the course of this work, including a formulation of an r-dimensional wavelet transform in terms of elementary tensor operations and an enhanced WaveCluster algorithm capable of clustering real-valued as well as binary data. Sparseness-exploiting properties are demonstrated, and variations of the core algorithms for specialized tasks such as image segmentation are presented. / Computer and Information Science
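The abstract above mentions formulating an r-dimensional wavelet transform in terms of elementary tensor operations. A minimal sketch of that idea, under the assumption of a separable single-level orthonormal Haar transform applied mode by mode to a 3-way array (the thesis's actual formulation may differ):

```python
import numpy as np

def haar_matrix(n):
    """Single-level orthonormal Haar analysis matrix for even n."""
    h = np.zeros((n, n))
    for i in range(n // 2):
        h[i, 2 * i:2 * i + 2] = [1, 1]              # averages (approximation)
        h[n // 2 + i, 2 * i:2 * i + 2] = [1, -1]    # differences (detail)
    return h / np.sqrt(2)

def mode_product(tensor, matrix, mode):
    """Multiply a tensor by a matrix along one mode (elementary tensor op)."""
    return np.moveaxis(np.tensordot(matrix, tensor, axes=(1, mode)), 0, mode)

# A separable r-dimensional wavelet transform is just r mode products.
x = np.random.default_rng(6).standard_normal((4, 6, 8))
y = x
for mode, n in enumerate(x.shape):
    y = mode_product(y, haar_matrix(n), mode)

# The Haar matrices are orthonormal, so the transform preserves energy.
energy_in, energy_out = np.sum(x**2), np.sum(y**2)
```

Because each Haar matrix is orthogonal, the composed transform is invertible by applying the transposes in reverse, which is what makes this form convenient inside tensor decomposition pipelines.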
64

Development of a new method for non-linear fluid-structure interaction: concepts and validation

Bosco, Elisa 29 November 2017 (has links)
An innovative method for numerically simulating complex fluid-structure interaction problems, such as non-linear transients, with a good compromise between computation time and precision is presented in this manuscript.
To cut down the simulation time, reduced-order models are used for both the aerodynamic and structural modules, while still working from high-fidelity industrial models. A dynamic condensation technique is employed to reduce the size of the structural finite element model, and Proper Orthogonal Decomposition is applied to a database of aerodynamic pressures built from CFD simulations. Structural non-linearities are reintroduced a posteriori. Classical interpolation techniques, such as spline interpolation and interpolation on a Grassmann manifold, have been compared in depth with more innovative methods from statistical learning. In order to validate the developed methodology, a test campaign was designed to reproduce a simplified interaction mechanism inspired by a flap track fairing on the ground before take-off: a plate, whose stiffness depends on the springs attaching it to the wind tunnel test section floor, is immersed in a mixing layer. In parallel to the test activities, a numerical model of the test rig was developed. The fluid-structure interaction methodology is validated through direct comparison between test data and simulation results, for both full fields and local measurements. The testing activities have also granted a deeper understanding of the vibratory phenomenon that has led to recurrent fatigue problems in these substructures. The methodology is ultimately applied to an industrial problem: the prediction of loads on flap track fairings excited by engine exhaust.
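The Proper Orthogonal Decomposition step described above can be sketched as an SVD of a snapshot matrix. The synthetic "pressure snapshots", the 3-mode structure, and the 99.9% energy threshold below are assumptions for illustration, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 200, 30

# Synthetic snapshots: combinations of 3 spatial modes plus tiny noise,
# standing in for CFD pressure fields stored column-wise.
modes = rng.standard_normal((n_points, 3))
coeffs = rng.standard_normal((3, n_snapshots))
snapshots = modes @ coeffs + 1e-6 * rng.standard_normal((n_points, n_snapshots))

# POD = SVD of the snapshot matrix; left singular vectors are spatial modes.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes capturing 99.9% energy

# Rank-r reconstruction: the reduced-order surrogate of the full field.
approx = U[:, :r] * s[:r] @ Vt[:r]
err = np.linalg.norm(snapshots - approx) / np.linalg.norm(snapshots)
```

In a reduced-order coupling loop, only the r modal amplitudes would be evolved, which is where the computation-time savings come from.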
65

Contribution to the detection and localization of damage by dynamic analysis of structural changes in a beam under tension: application to the monitoring of civil engineering cables

Le, Thi Thu Ha 04 April 2014 (has links)
The objective of this work is to develop methods to detect, localize, quantify, and follow the evolution of damage in short cables, such as the suspenders of suspension bridges, using their vibratory responses. To model these cables, a 1D linear Euler-Bernoulli beam model with tension is used.
This model covers a wide range of structures, from the vibrating string to the beam without tension. For cables, damage is introduced into the vibration equation through local changes in the linear density and the bending stiffness, together with a global change in the tension. To introduce a crack into the vibrating beam equation, the change in rigidity may instead be replaced by a rotational spring at the location of the crack. For both of these damage models, a first-order analytical estimate of the variation of the modal parameters due to these changes is established. Using these analytical estimates of the relative frequency variations as functions of the physical changes, we develop localization methods for two cases: only two tests corresponding to two states (healthy and damaged), and a series of tests (several tests from the healthy state to the damaged state). For the second case, another detection and localization method, based on the SVD, is also proposed. These methods are tested on numerical data and on experimental data taken from the literature or acquired during the PhD.
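The series-of-tests idea above can be sketched with a plain SVD: stack the relative frequency variations measured in many tests into a matrix, and read the dominant right singular vector as the consistent damage signature separated from measurement noise. The synthetic signature, severity ramp, and noise level below are assumptions for illustration, not the thesis's data:

```python
import numpy as np

rng = np.random.default_rng(5)
n_modes, n_tests = 8, 40

# Assumed relative frequency-shift pattern of a localized damage.
signature = np.array([0.0, 0.1, 0.5, 1.0, 0.5, 0.1, 0.0, 0.0])
severity = np.linspace(0, 1, n_tests)          # tests from healthy to damaged
dfreq = (np.outer(severity, signature)
         + 0.02 * rng.standard_normal((n_tests, n_modes)))

# Dominant right singular vector ~ damage signature; noise spreads over
# the remaining singular directions.
U, s, Vt = np.linalg.svd(dfreq, full_matrices=False)
estimated = Vt[0] * np.sign(Vt[0][np.argmax(np.abs(Vt[0]))])  # fix sign
estimated = estimated / np.abs(estimated).max()

cos_sim = (estimated @ signature
           / (np.linalg.norm(estimated) * np.linalg.norm(signature)))
```

Localization then amounts to matching the recovered signature against the analytical first-order sensitivities of each candidate damage position.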
66

State-Space Approaches to Ultra-Wideband Doppler Processing

Holl, Jr., David J. 03 May 2007 (has links)
National security needs dictate the development of new radar systems capable of identifying and tracking exoatmospheric threats to aid our defense. These new radar systems feature reduced noise floors, electronic beam steering, and ultra-wide bandwidths, all of which facilitate threat discrimination. However, in order to identify missile attributes such as RF reflectivity, distance, and velocity, many existing processing algorithms rely upon narrow-bandwidth assumptions that break down with increased signal bandwidth. We present a fresh investigation into these algorithms for removing bandwidth limitations and propose novel state-space and direct-data factoring formulations such as:
* the multidimensional extension to the Eigensystem Realization Algorithm,
* employing state-space models in place of interpolation to obtain a form which admits a separation and isolation of solution components,
* and side-stepping the joint diagonalization of state transition matrices, which commonly plagues methods like multidimensional ESPRIT.
We then benchmark our approaches and relate the outcomes to the Cramér-Rao bound for the case of one and two adjacent reflectors to validate their conceptual design and identify those techniques that compare favorably to or improve upon existing practices.
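The one-dimensional Eigensystem Realization Algorithm that the abstract extends can be sketched in a few lines: build Hankel matrices from impulse-response samples, take an SVD, and realize a state matrix whose eigenvalues are the signal poles. The two-pole test signal, block size, and model order below are assumptions for illustration:

```python
import numpy as np

fs = 100.0
t = np.arange(60) / fs
# Impulse response of two damped sinusoids at 10 Hz and 25 Hz.
y = (np.exp(-2 * t) * np.cos(2 * np.pi * 10 * t)
     + 0.5 * np.exp(-1 * t) * np.cos(2 * np.pi * 25 * t))

m = 30
H0 = np.array([y[i:i + m] for i in range(m)])          # Hankel matrix
H1 = np.array([y[i + 1:i + 1 + m] for i in range(m)])  # shifted Hankel

U, s, Vt = np.linalg.svd(H0)
r = 4  # model order: two conjugate pole pairs
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r]

# ERA realization of the state matrix; its eigenvalues are the z-plane poles.
A = np.diag(sr**-0.5) @ Ur.T @ H1 @ Vr.T @ np.diag(sr**-0.5)
poles = np.linalg.eigvals(A)
freqs = sorted(set(np.round(np.abs(np.angle(poles)) * fs / (2 * np.pi), 1)))
```

The multidimensional formulations in the thesis generalize exactly this Hankel-SVD step across several measurement axes at once.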
67

Rapid Frequency Estimation

Koski, Antti E. 28 March 2006 (has links)
Frequency estimation plays an important role in many digital signal processing applications. Many areas have benefited from the discovery of the Fast Fourier Transform (FFT) and from the advances in modern spectral estimation techniques of the last few decades. As processor and programmable logic technologies advance, unconventional methods for rapid frequency estimation in white Gaussian noise should be considered for real-time applications. In this thesis, a practical hardware implementation that combines two known frequency estimation techniques is presented, implemented, and characterized. The combined implementation, using the well-known FFT and a less well-known modern spectral analysis method, the Direct State Space (DSS) algorithm, demonstrates and promotes the application of modern spectral methods in various real-time applications, including Electronic Counter Measure (ECM) techniques.
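The coarse-FFT-plus-refinement structure described above can be sketched as follows. Here the refinement stage is plain quadratic (parabolic) interpolation of the windowed FFT peak, standing in as an assumption for the thesis's DSS stage:

```python
import numpy as np

def estimate_frequency(x, fs):
    """Refined estimate of the dominant frequency in x (Hz)."""
    n = len(x)
    spectrum = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = int(np.argmax(spectrum[1:-1])) + 1           # coarse FFT peak bin
    # Parabolic interpolation on the log spectrum for sub-bin resolution.
    a, b, c = np.log(spectrum[k - 1:k + 2])
    delta = 0.5 * (a - c) / (a - 2 * b + c)
    return (k + delta) * fs / n

fs, f0 = 1000.0, 123.4
t = np.arange(1024) / fs
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)  # WGN
f_hat = estimate_frequency(x, fs)
```

The coarse stage costs one FFT; the refinement touches only three bins, which is what makes such two-stage estimators attractive for real-time hardware.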
68

An Improved C-Fuzzy Decision Tree and its Application to Vector Quantization

Chiu, Hsin-Wei 27 July 2006 (has links)
Over the last century, mankind has invented many convenient tools in pursuit of a beautiful and comfortable living environment. The computer is one of the most important of these inventions, with a processing ability far beyond our own. Because computers can handle large amounts of data quickly and accurately, this advantage has been used to imitate human thinking, and artificial intelligence has developed extensively. Methods such as neural networks, data mining, and fuzzy logic are applied in many fields (e.g., fingerprint recognition, image compression, antenna design). This thesis investigates prediction techniques based on decision trees and fuzzy clustering. The C-fuzzy decision tree classifies data by fuzzy clustering and then constructs a decision tree for prediction. However, in its distance function the influence of the target space is inversely proportional, which can cause problems on some datasets. In addition, representing the output model of each leaf node by a constant restricts its capability to represent the data distribution within the node. We propose a more reasonable definition of the distance function that considers both input and target differences with a weighting factor. We also extend the output model of each leaf node to a local linear model and estimate the model parameters with a recursive SVD-based least-squares estimator. Experimental results have shown that our improved version produces higher recognition rates and smaller mean square errors for classification and regression problems, respectively.
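The local-linear-leaf idea above can be sketched with an SVD-based least-squares fit. A batch pseudoinverse solve (which numpy computes via the SVD) stands in for the recursive update, and the synthetic leaf data are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(50, 2))          # samples falling in one leaf
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5       # assumed local linear law

A = np.hstack([X, np.ones((X.shape[0], 1))])  # design matrix with bias term
# SVD-based least squares: pinv gives the minimum-norm solution.
w = np.linalg.pinv(A) @ y

constant_pred = np.full_like(y, y.mean())     # constant leaf output model
linear_pred = A @ w                           # local linear leaf output model
mse_const = np.mean((y - constant_pred) ** 2)
mse_linear = np.mean((y - linear_pred) ** 2)
```

Whenever the data in a leaf have a trend, the linear model's error drops well below that of the constant model, which is the motivation for the extension.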
69

Multi-tree Monte Carlo methods for fast, scalable machine learning

Holmes, Michael P. 09 January 2009 (has links)
As modern applications of machine learning and data mining are forced to deal with ever more massive quantities of data, practitioners quickly run into difficulty with the scalability of even the most basic and fundamental methods. We propose to provide scalability through a marriage between classical, empirical-style Monte Carlo approximation and deterministic multi-tree techniques. This union entails a critical compromise: losing determinism in order to gain speed. In the face of large-scale data, such a compromise is arguably often not only the right but the only choice. We refer to this new approximation methodology as Multi-Tree Monte Carlo. In particular, we have developed the following fast approximation methods: 1. Fast training for kernel conditional density estimation, showing speedups as high as 10⁵ on up to 1 million points. 2. Fast training for general kernel estimators (kernel density estimation, kernel regression, etc.), showing speedups as high as 10⁶ on tens of millions of points. 3. Fast singular value decomposition, showing speedups as high as 10⁵ on matrices containing billions of entries. The level of acceleration we have shown represents improvement over the prior state of the art by several orders of magnitude. Such improvement entails a qualitative shift, a commoditization, that opens doors to new applications and methods that were previously invisible, outside the realm of practicality. Further, we show how these particular approximation methods can be unified in a Multi-Tree Monte Carlo meta-algorithm which lends itself as scaffolding to the further development of new fast approximation methods. Thus, our contribution includes not just the particular algorithms we have derived but also the Multi-Tree Monte Carlo methodological framework, which we hope will lead to many more fast algorithms that can provide the kind of scalability we have shown here to other important methods from machine learning and related fields.
70

Methodology for the analysis of airborne time-domain electromagnetic surveys for geological and hydrogeological characterization

Reninger, Pierre-Alexandre 24 October 2012 (has links) (PDF)
This doctoral thesis addresses several methodological aspects of the analysis of airborne time-domain electromagnetic (TDEM) surveys for detailed geological and hydrogeological interpretation. The work is based on a survey carried out in the Courtenay area (north-east of the Centre region), characterized by a karstified chalk plateau (the Trois Fontaines karst) covered by weathering clays and alluvium. First, a method for filtering TDEM data using Singular Value Decomposition (SVD) was developed. Careful adaptation of this technique to TDEM measurements made it possible to separate the noise, which could then be mapped, from the "geological signal", greatly reducing the processing time. The method also proved effective for quickly obtaining preliminary geological information on the area. Next, a cross-analysis between the resistivity model obtained by inverting the filtered data and the available boreholes was performed, improving geological and hydrogeological knowledge of the area. An undulation feature separating two chalk deposits and the subsurface fault network were imaged, providing a geological framework for the Trois Fontaines karst. Finally, a new method combining borehole information with dips derived from the EM resistivity model yielded a model of the top of the chalk of unprecedented accuracy. Together, this work provides a solid framework for future geo-environmental studies using airborne TDEM data, even in anthropized areas.
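The SVD filtering step described above can be sketched by stacking soundings as rows of a matrix and truncating the SVD: the leading components hold the spatially coherent "geological signal", while the residual is treated as noise. The synthetic exponential decays, noise level, and retained rank below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
gates = np.logspace(-5, -3, 20)                  # 20 time gates (seconds)
taus = rng.uniform(1e-4, 5e-4, size=100)         # per-sounding decay constants
clean = np.exp(-gates[None, :] / taus[:, None])  # 100 soundings x 20 gates
noisy = clean + 0.02 * rng.standard_normal(clean.shape)

U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
r = 3                                            # retained "signal" components
filtered = U[:, :r] * s[:r] @ Vt[:r]             # truncated reconstruction

err_noisy = np.linalg.norm(noisy - clean)
err_filtered = np.linalg.norm(filtered - clean)
```

The discarded components `noisy - filtered` can themselves be mapped, which is how the thesis turns the rejected part of the decomposition into a noise map of the survey area.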
