41 |
Improving observability in experimental analysis of rotating systemsDeshpande, Shrirang January 2014 (has links)
No description available.
|
42 |
A Structural Damage Identification Method Based on Unified Matrix Polynomial Approach and Subspace AnalysisZhao, Wancheng January 2008 (has links)
No description available.
|
43 |
Digital video watermarking using singular value decomposition and two-dimensional principal component analysisKaufman, Jason R. 14 April 2006 (has links)
No description available.
|
44 |
SINGULAR VALUE DECOMPOSITION AND 2D PRINCIPAL COMPONENT ANALYSIS OF IRIS-BIOMETRICS FOR AUTOMATIC HUMAN IDENTIFICATIONBrown, Michael J. 05 September 2006 (has links)
No description available.
|
45 |
Weakest Bus Identification Based on Modal Analysis and Singular Value Decomposition TechniquesJalboub, Mohamed K., Rajamani, Haile S., Abd-Alhameed, Raed, Ihbal, Abdel-Baset M.I. 12 February 2010 (has links)
Yes / Voltage instability is an important issue that should be taken into consideration during the planning and operation of modern power system networks. System operators need to know when and where voltage instability can occur, so that suitable action can be taken to avoid unexpected outcomes. In this paper, a study has been conducted to identify the weakest bus in the power system based on multi-variable control, modal analysis, and Singular Value Decomposition (SVD) techniques, for both static and dynamic voltage stability analysis. A typical IEEE 3-machine, 9-bus test power system is used to validate these techniques, and the test results are presented and discussed.
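The SVD-based indicator described above can be sketched in a few lines: the right singular vector paired with the smallest singular value of the reduced power-flow Jacobian points at the bus closest to voltage collapse. The 3-by-3 Jacobian below is a synthetic stand-in, not the IEEE 9-bus system from the paper.

```python
import numpy as np

# Hedged sketch of the SVD weakest-bus idea: a small smallest singular
# value of the reduced Jacobian J_R signals proximity to voltage
# instability, and the matching right singular vector localizes it.
# J_R here is an invented symmetric 3-bus example for illustration only.
J_R = np.array([[10.0, -4.0, -2.0],
                [-4.0,  6.0, -1.0],
                [-2.0, -1.0,  3.5]])

U, s, Vt = np.linalg.svd(J_R)     # s is sorted in descending order
sigma_min = s[-1]                 # proximity to singularity
v_min = Vt[-1]                    # right singular vector for sigma_min
weakest_bus = int(np.argmax(np.abs(v_min)))

print(f"smallest singular value: {sigma_min:.4f}")
print(f"weakest bus index: {weakest_bus}")
```

The largest-magnitude entry of `v_min` marks the bus whose voltage participates most in the near-singular direction, which is the modal-analysis ranking the paper compares against.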
|
46 |
An Implementation-Based Exploration of HAPOD: Hierarchical Approximate Proper Orthogonal DecompositionBeach, Benjamin Josiah 25 January 2018 (has links)
Proper Orthogonal Decomposition (POD), combined with the Method of Snapshots and Galerkin projection, is a popular method for the model order reduction of nonlinear PDEs. The POD requires the left singular vectors from the singular value decomposition (SVD) of an n-by-m "snapshot matrix" S, each column of which represents the computed state of the system at a given time. However, the direct computation of this decomposition can be computationally expensive, particularly for snapshot matrices that are too large to fit in memory. Hierarchical Approximate POD (HAPOD) (Himpe 2016) is a recent method for the approximate truncated SVD that requires only a single pass over S, is easily parallelizable, and can be computationally cheaper than direct SVD, all while guaranteeing the requested accuracy for the resulting basis. This method processes the columns of S in blocks based on a predefined rooted tree of processors, concatenating the outputs from each stage to form the inputs for the next. However, depending on the selected parameter values and the properties of S, the performance of HAPOD may be no better than that of direct SVD. In this work, we numerically explore the parameter values and snapshot matrix properties for which HAPOD is computationally advantageous over the full SVD and compare its performance to that of a parallelized incremental SVD method (Brand 2002, Brand 2003, and Arrighi 2015). In particular, in addition to the two major processor tree structures detailed in the initial publication of HAPOD (Himpe 2016), we explore the viability of a new structure designed with an MPI implementation in mind. / Master of Science / Singular Value Decomposition (SVD) provides a way to represent numeric data that breaks the data up into its most important components, as well as measuring how significant each part is.
This decomposition is widely used to assist in finding patterns in data and making decisions accordingly, or to obtain simple, yet accurate, representations of complex physical processes. Examples of useful data to decompose include the velocity of water flowing past an obstacle in a river, a large collection of images, or user ratings for a large number of movies. However, computing the SVD directly can be computationally expensive, and usually requires repeated access to the entire dataset. As these data sets can be very large, up to hundreds of gigabytes or even several terabytes, storing all of the data in memory at once may be infeasible. Thus, repeated access to the entire dataset requires that the files be read repeatedly from the hard disk, which can make the required computations exceptionally slow. Fortunately, for many applications, only the most important parts of the data are needed, and the rest can be discarded. As a result, several methods have surfaced that can pick out the most important parts of the data while accessing the original data only once, piece by piece, and can be much faster than computing the SVD directly. In addition, the recent bottleneck in individual computer processor speeds has motivated a need for methods that can efficiently run on a large number of processors in parallel. Hierarchical Approximate POD (HAPOD) [1] is a recently-developed method that can efficiently pick out the most important parts of the data while only accessing the original data once, and which is very easy to run in parallel. However, depending on a user-defined algorithm parameter (weight), HAPOD may return more information than is needed to satisfy the requested accuracy, which determines how much data can be discarded. It turns out that the input weights that result in less extra data also result in slower computations and the eventual need for more data to be stored in memory at once. 
This thesis explores how to choose this input weight to best balance the amount of extra information used with the speed of the method, and also explores how the properties of the data, such as the size of the data or the distribution of levels of significance of each part, impact the effectiveness of HAPOD.
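The hierarchical idea can be illustrated with a simplified one-level sketch (not Himpe's exact algorithm or its error bounds): truncate the SVD of each snapshot block, concatenate the scaled left singular vectors, and take one final truncated SVD. The truncation rule and block count below are simplifications for the demo.

```python
import numpy as np

def hapod_like(S, n_blocks=4, tol=1e-10):
    """One-level hierarchical approximate POD sketch. `tol` is a
    simplified relative truncation threshold on singular values, not
    the weighted error bound used by the actual HAPOD method."""
    partial = []
    for block in np.array_split(S, n_blocks, axis=1):
        U, s, _ = np.linalg.svd(block, full_matrices=False)
        k = max(1, int(np.sum(s > tol * s[0])))
        partial.append(U[:, :k] * s[:k])      # keep scaled left vectors
    # combine the per-block outputs and truncate once more
    U, s, _ = np.linalg.svd(np.hstack(partial), full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :k], s[:k]

rng = np.random.default_rng(0)
# synthetic exactly-rank-3 snapshot matrix (50 states, 40 snapshots)
S = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
basis, sv = hapod_like(S, n_blocks=4, tol=1e-8)
print(basis.shape, sv.shape)
```

Because each block is processed independently, the loop parallelizes naturally across processors, which is the property the thesis's MPI-oriented tree structure exploits.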
|
47 |
Singular Value Computation and Subspace ClusteringLiang, Qiao 01 January 2015 (has links)
In this dissertation we discuss two problems. In the first part, we consider the problem of computing a few extreme eigenvalues of a symmetric definite generalized eigenvalue problem, or a few extreme singular values of a large and sparse matrix. The standard methods of choice for computing a few extreme eigenvalues of a large symmetric matrix are the Lanczos method and the implicitly restarted Lanczos method. These methods usually employ a shift-and-invert transformation to accelerate convergence, which is not practical for truly large problems. With this in mind, Golub and Ye proposed an inverse-free preconditioned Krylov subspace method, which uses preconditioning instead of shift-and-invert to accelerate convergence. To compute several eigenvalues, Wielandt deflation is used in a straightforward manner. However, the Wielandt deflation alters the structure of the problem and may cause difficulties in certain applications, such as singular value computation. We therefore first propose a deflation-by-restriction method for the inverse-free Krylov subspace method, and generalize the original convergence theory for the inverse-free preconditioned Krylov subspace method to justify this deflation scheme. We next extend the inverse-free Krylov subspace method with deflation by restriction to the singular value problem, and consider preconditioning based on robust incomplete factorization to accelerate convergence. Numerical examples are provided to demonstrate the efficiency and robustness of the new algorithm.
In the second part of this thesis, we consider the so-called subspace clustering problem, which aims to extract a multi-subspace structure from a collection of points lying in a high-dimensional space. Recently, methods based on the self-expressiveness property (SEP), such as Sparse Subspace Clustering and Low Rank Representation, have been shown to achieve superior performance over other methods. However, methods with SEP may produce representations that are not amenable to clustering through graph partitioning. We propose a method in which the points are expressed in terms of an orthonormal basis, chosen optimally in the sense that the representation of all points is sparsest. Numerical results are given to illustrate the effectiveness and efficiency of this method.
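For context, the baseline that inverse-free methods are measured against can be shown with SciPy's Lanczos-based `svds`; this is not the thesis's preconditioned algorithm, just the standard problem setup of extracting a few extreme singular values from a sparse matrix without densifying it.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# A sparse 300-by-200 test matrix (sizes and density are arbitrary).
A = sparse_random(300, 200, density=0.05, format="csr", random_state=1)

# Three largest singular values, computed iteratively: A is only ever
# applied to vectors, never factored or converted to a dense array.
u, s, vt = svds(A, k=3, which="LM")
print(np.sort(s)[::-1])
```

Shift-and-invert would instead require factoring a shifted matrix at each step, which is exactly the cost that inverse-free preconditioned methods avoid.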
|
48 |
Improving the efficiency and accuracy of nocturnal bird surveys through equipment selection and partial automationLazarevic, Ljubica January 2010 (has links)
Birds are a key environmental asset, and this is recognised through comprehensive legislation and policy ensuring their protection and conservation. Many species are active at night, and surveys are required to understand the implications of proposed developments such as towers and to reduce possible conflicts with these structures. Night vision devices are commonly used in nocturnal surveys, either to scope an area for bird numbers and activity, or to remotely sense an area to determine potential risk. This thesis explores practical and theoretical approaches that can improve the accuracy, confidence and efficiency of nocturnal bird surveillance. As image intensifiers and thermal imagers have operational differences, each device has associated strengths and limitations. Empirical work established that image intensifiers are best used for species identification of birds against the ground or vegetation, while thermal imagers perform best in detection tasks and in monitoring bird airspace usage. The typical approach of viewing remotely sensed bird survey video in its entirety is slow, inaccurate and inefficient. Accuracy can be significantly improved by viewing the survey video at half playback speed, and motion detection efficiency and accuracy can be greatly improved through adaptive background subtraction and cumulative image differencing. An experienced ornithologist uses flight style and wing oscillations to identify bird species. Changes in wing oscillations can be represented in a single inter-frame similarity matrix through area-based differencing. Bird species classification can then be automated using singular value decomposition to reduce the matrices to one-dimensional vectors for training a feed-forward neural network.
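The final feature-extraction step might be sketched as follows; the similarity construction and feature shape are my own guesses at the thesis's area-based differencing, using synthetic constant-intensity frames in place of real bird video.

```python
import numpy as np

def similarity_feature(frames, k=1):
    """Hedged sketch (names invented): build an inter-frame similarity
    matrix by area-based differencing, then compress it with the SVD to
    a one-dimensional vector suitable as neural-network input."""
    n = len(frames)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # area-based difference: mean absolute pixel difference
            sim[i, j] = 1.0 - np.mean(np.abs(frames[i] - frames[j]))
    U, s, _ = np.linalg.svd(sim)
    return (U[:, :k] * s[:k]).ravel()   # 1-D wing-oscillation descriptor

# two synthetic "birds" whose brightness oscillates at different rates,
# standing in for fast and slow wing beats
t = np.linspace(0, 1, 32)
fast = [np.full((8, 8), 0.5 + 0.4 * np.sin(2 * np.pi * 9 * ti)) for ti in t]
slow = [np.full((8, 8), 0.5 + 0.4 * np.sin(2 * np.pi * 3 * ti)) for ti in t]
f_fast, f_slow = similarity_feature(fast), similarity_feature(slow)
print(f_fast.shape, np.linalg.norm(f_fast - f_slow) > 0)
```

Different oscillation rates yield visibly different similarity matrices, so the compressed vectors separate the two classes, which is all a feed-forward classifier needs.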
|
49 |
Rapid Frequency EstimationKoski, Antti E. 28 March 2006 (has links)
Frequency estimation plays an important role in many digital signal processing applications. Many areas have benefited from the discovery of the Fast Fourier Transform (FFT) decades ago and from the more recent advances in modern spectral estimation techniques. As processor and programmable logic technologies advance, unconventional methods for rapid frequency estimation in white Gaussian noise should be considered for real-time applications. In this thesis, a practical hardware implementation that combines two known frequency estimation techniques is presented, implemented, and characterized. The combined implementation, using the well-known FFT and a less well-known modern spectral analysis method called the Direct State Space (DSS) algorithm, is used to demonstrate and promote the application of modern spectral methods in various real-time applications, including Electronic Counter Measure (ECM) techniques.
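The FFT half of such a combined estimator reduces to picking the peak bin of the magnitude spectrum of a noisy tone; the DSS refinement stage is not shown here, and the sample rate and tone frequency are invented for the demo.

```python
import numpy as np

def fft_freq_estimate(x, fs):
    """Coarse frequency estimate: index of the peak magnitude bin of the
    real FFT, converted back to Hz. Resolution is fs / len(x)."""
    spec = np.abs(np.fft.rfft(x))
    return np.argmax(spec) * fs / len(x)

fs, n, f_true = 1000.0, 1024, 123.0          # assumed test parameters
t = np.arange(n) / fs
rng = np.random.default_rng(2)
# unit-amplitude tone in white Gaussian noise
x = np.sin(2 * np.pi * f_true * t) + 0.3 * rng.standard_normal(n)
print(fft_freq_estimate(x, fs))
```

The estimate is quantized to the bin spacing fs/n (about 0.98 Hz here), which is why an FFT coarse search is typically paired with a finer modern spectral method such as DSS.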
|
50 |
Estimation sur des bases orthogonales des propriétés thermiques de matériaux hétérogènes à propriétés constantes par morceauxGodin, Alexandre 25 January 2013 (has links)
This work addresses the thermal characterization of composites with complex microstructures. The goal is to develop estimation methods that identify the thermal properties of the constituent phases, as well as those associated with their interfaces, from infrared thermography measurements. This parametric estimation requires prior knowledge of the geometric structure of the sample. The first objective therefore concerns the identification of the structure of the tested sample by discriminating the different phases and interfaces. Once the sample structure is known, the second objective is the identification of the thermal parameters of the constituents and of their interfaces. Two specific tests using the same experimental device are exploited. Two different mathematical methods were developed and used to process the field measurements from the first test and recover the microstructure of the sample. The first is based on the singular value decomposition of the collected temperature data; it is shown that this method yields very high-quality representations of the microstructure, even from strongly noisy measurements. The second method refines the results obtained with the first: it relies on solving a constrained optimization problem, using the Level-Set technique to identify the boundaries of the different constituents of the sample. The identification of the thermal properties of the constituents and interfaces exploits the field measurements from the second experimental test.
The method developed, SVD-FT, combines singular value decomposition techniques with particular test functions to derive linear estimators of the sought properties. This method limits the effect of measurement noise on the estimation quality and removes the need for data-filtering operations. / This work reports on the thermal characterization of composites with a complex microstructure. It aims at developing mathematical methods to identify the thermal properties of the constituents and those associated with their interfaces. The first step consists in discriminating the microstructure of the sample to be tested. Then, once the sample structure is known, the second step consists in estimating the thermal parameters of the different phases and those at their interfaces. One experimental device has been set up to realize these two steps. Two mathematical methods have been developed and used to discriminate the microstructure based on the images of the sample recorded by an infrared camera. The first method is based on the singular value decomposition of the temperature data. It has been shown that this method gives a very good representation of the microstructure even with very noisy data. The second method refines the results obtained by the first one; it is based on the resolution of a constrained optimization problem and uses a Level-Set technique to identify the boundary of each phase. To estimate the thermal properties of each phase and its interfaces, the infrared images of the second experiment have been used. The SVD-FT method developed in this work combines the singular value decomposition technique with particular test functions to derive linear estimators for the thermal properties. As a result, a significant amplification of the signal-to-noise ratio is achieved.
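The first (SVD-based) step can be illustrated on synthetic data: stack noisy temperature images into a pixels-by-time snapshot matrix and keep only the dominant singular modes, which recovers the spatial phase map far better than the raw frames do. The phase layout, cooling rate, and noise level below are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic two-phase sample: a 32x32 checkerboard of two conductivities,
# cooling exponentially over 60 recorded infrared frames
phase_map = np.kron(np.array([[1.0, 2.0], [2.0, 1.0]]), np.ones((16, 16)))
frames = np.stack([phase_map * np.exp(-0.05 * k) for k in range(60)])
T = frames.reshape(60, -1).T                  # pixels x time snapshot matrix
T_noisy = T + 0.2 * rng.standard_normal(T.shape)

U, s, Vt = np.linalg.svd(T_noisy, full_matrices=False)
k = 1                                         # one spatial mode suffices here
T_denoised = (U[:, :k] * s[:k]) @ Vt[:k]      # low-rank reconstruction

err_raw = np.linalg.norm(T_noisy - T) / np.linalg.norm(T)
err_svd = np.linalg.norm(T_denoised - T) / np.linalg.norm(T)
print(err_svd < err_raw)
```

The retained left singular vector is, up to scale, the phase map itself, which is how the truncated SVD reveals the microstructure even from heavily noisy thermography.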
|