41 |
Improving observability in experimental analysis of rotating systems. Deshpande, Shrirang. January 2014 (has links)
No description available.
|
42 |
A Structural Damage Identification Method Based on Unified Matrix Polynomial Approach and Subspace Analysis. Zhao, Wancheng. January 2008 (has links)
No description available.
|
43 |
Digital video watermarking using singular value decomposition and two-dimensional principal component analysis. Kaufman, Jason R. 14 April 2006 (has links)
No description available.
|
44 |
SINGULAR VALUE DECOMPOSITION AND 2D PRINCIPAL COMPONENT ANALYSIS OF IRIS-BIOMETRICS FOR AUTOMATIC HUMAN IDENTIFICATION. Brown, Michael J. 05 September 2006 (has links)
No description available.
|
45 |
Weakest Bus Identification Based on Modal Analysis and Singular Value Decomposition Techniques. Jalboub, Mohamed K., Rajamani, Haile S., Abd-Alhameed, Raed, Ihbal, Abdel-Baset M.I. 12 February 2010 (has links)
Yes / Voltage instability in power systems is an important issue that should be taken into consideration during the planning and operation stages of modern power networks. System operators need to know when and where voltage stability problems can occur in order to take suitable action and avoid unexpected results. In this paper, a study is conducted to identify the weakest bus in a power system based on multi-variable control, modal analysis, and Singular Value Decomposition (SVD) techniques for both static and dynamic voltage stability analysis. A typical IEEE 3-machine, 9-bus test power system is used to validate these techniques, and the test results are presented and discussed.
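The SVD criterion used in such studies can be sketched numerically: the smallest singular value of the reduced load-flow Jacobian measures proximity to voltage collapse, and the largest entry of the corresponding right singular vector flags the weakest bus. A minimal NumPy illustration on an invented 3-by-3 Jacobian (not the IEEE 9-bus data from the paper):

```python
import numpy as np

# Synthetic reduced load-flow Jacobian for a 3-bus system (values are
# illustrative, not taken from the IEEE 9-bus test case in the paper).
J = np.array([[ 8.0, -2.0, -1.0],
              [-2.0,  6.0, -1.5],
              [-1.0, -1.5,  3.0]])

U, s, Vt = np.linalg.svd(J)

sigma_min = s[-1]                 # small sigma_min => near voltage instability
v_min = Vt[-1]                    # right singular vector of sigma_min
weakest_bus = int(np.argmax(np.abs(v_min)))  # largest entry flags weakest bus

print(f"minimum singular value: {sigma_min:.4f}")
print(f"weakest bus index: {weakest_bus}")
```

Here bus 2, the one most weakly coupled in the synthetic Jacobian, dominates the singular vector of the smallest singular value.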
|
46 |
An Implementation-Based Exploration of HAPOD: Hierarchical Approximate Proper Orthogonal Decomposition. Beach, Benjamin Josiah. 25 January 2018 (has links)
Proper Orthogonal Decomposition (POD), combined with the Method of Snapshots and Galerkin projection, is a popular method for the model order reduction of nonlinear PDEs. The POD requires the left singular vectors from the singular value decomposition (SVD) of an n-by-m "snapshot matrix" S, each column of which represents the computed state of the system at a given time. However, the direct computation of this decomposition can be computationally expensive, particularly for snapshot matrices that are too large to fit in memory. Hierarchical Approximate POD (HAPOD) (Himpe 2016) is a recent method for the approximate truncated SVD that requires only a single pass over S, is easily parallelizable, and can be computationally cheaper than direct SVD, all while guaranteeing the requested accuracy for the resulting basis. This method processes the columns of S in blocks based on a predefined rooted tree of processors, concatenating the outputs from each stage to form the inputs for the next. However, depending on the selected parameter values and the properties of S, the performance of HAPOD may be no better than that of direct SVD. In this work, we numerically explore the parameter values and snapshot matrix properties for which HAPOD is computationally advantageous over the full SVD and compare its performance to that of a parallelized incremental SVD method (Brand 2002, Brand 2003, Arrighi 2015). In particular, in addition to the two major processor tree structures detailed in the initial publication of HAPOD (Himpe 2016), we explore the viability of a new structure designed with an MPI implementation in mind. / Master of Science / Singular Value Decomposition (SVD) provides a way to represent numeric data that breaks the data up into its most important components, as well as measuring how significant each part is.
This decomposition is widely used to assist in finding patterns in data and making decisions accordingly, or to obtain simple, yet accurate, representations of complex physical processes. Examples of useful data to decompose include the velocity of water flowing past an obstacle in a river, a large collection of images, or user ratings for a large number of movies. However, computing the SVD directly can be computationally expensive, and usually requires repeated access to the entire dataset. As these data sets can be very large, up to hundreds of gigabytes or even several terabytes, storing all of the data in memory at once may be infeasible. Thus, repeated access to the entire dataset requires that the files be read repeatedly from the hard disk, which can make the required computations exceptionally slow. Fortunately, for many applications, only the most important parts of the data are needed, and the rest can be discarded. As a result, several methods have surfaced that can pick out the most important parts of the data while accessing the original data only once, piece by piece, and can be much faster than computing the SVD directly. In addition, the recent bottleneck in individual computer processor speeds has motivated a need for methods that can efficiently run on a large number of processors in parallel. Hierarchical Approximate POD (HAPOD) [1] is a recently-developed method that can efficiently pick out the most important parts of the data while only accessing the original data once, and which is very easy to run in parallel. However, depending on a user-defined algorithm parameter (weight), HAPOD may return more information than is needed to satisfy the requested accuracy, which determines how much data can be discarded. It turns out that the input weights that result in less extra data also result in slower computations and the eventual need for more data to be stored in memory at once. 
This thesis explores how to choose this input weight to best balance the amount of extra information used with the speed of the method, and also explores how the properties of the data, such as the size of the data or the distribution of levels of significance of each part, impact the effectiveness of HAPOD.
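The hierarchy described above can be sketched for the simplest case: a two-level tree in which each column block is truncated locally and the singular-value-weighted local modes are combined in one final SVD. This is an illustrative simplification; the simple relative-energy threshold below stands in for the rigorous error bounds of HAPOD (Himpe 2016), and the function name is invented for this sketch.

```python
import numpy as np

def hapod_two_level(S, blocks, tol):
    """Two-level HAPOD sketch: truncated SVD per column block, then an SVD of
    the concatenated, singular-value-weighted local modes. `tol` is a simple
    relative truncation threshold standing in for HAPOD's error bounds."""
    local_modes = []
    for B in np.array_split(S, blocks, axis=1):
        U, s, _ = np.linalg.svd(B, full_matrices=False)
        k = max(1, int(np.sum(s > tol * s[0])))   # keep modes above tolerance
        local_modes.append(U[:, :k] * s[:k])      # weight modes by singular values
    U, s, _ = np.linalg.svd(np.hstack(local_modes), full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :k]

rng = np.random.default_rng(0)
# Exactly rank-3 snapshot matrix: 100 states, 40 snapshots.
S = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 40))
basis = hapod_two_level(S, blocks=4, tol=1e-8)
print(basis.shape)  # basis columns approximate the leading POD modes
```

Because each block is truncated before concatenation, only a few weighted vectors per block reach the final SVD, which is the source of HAPOD's memory and cost savings over a direct SVD of S.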
|
47 |
Research on the Application of the Full Waveform Inversion Method for Estimating Underground Structures from Seismic Waves in Exploration / 地震波動から地下構造を推定する探査における全波形インバージョン手法適用の研究. Li, Jiahang. 23 July 2024 (has links)
Kyoto University / New doctoral course / Doctor of Engineering / 甲第25542号 / 工博第5274号 / 新制||工||2004 (University Library) / Department of Civil and Earth Resources Engineering, Graduate School of Engineering, Kyoto University / Examiners: Professor 福山 英一 (chief), Associate Professor 西藤 潤, Associate Professor 武川 順一, Professor 後藤 浩之 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
|
48 |
Singular Value Computation and Subspace Clustering. Liang, Qiao. 01 January 2015 (has links)
In this dissertation we discuss two problems. In the first part, we consider the problem of computing a few extreme eigenvalues of a symmetric definite generalized eigenvalue problem or a few extreme singular values of a large and sparse matrix. The standard method of choice for computing a few extreme eigenvalues of a large symmetric matrix is the Lanczos method or the implicitly restarted Lanczos method. These methods usually employ a shift-and-invert transformation to accelerate the speed of convergence, which is not practical for truly large problems. With this in mind, Golub and Ye proposed an inverse-free preconditioned Krylov subspace method, which uses preconditioning instead of shift-and-invert to accelerate the convergence. To compute several eigenvalues, Wielandt deflation is used in a straightforward manner. However, the Wielandt deflation alters the structure of the problem and may cause difficulties in certain applications such as the singular value computation. So we first propose a deflation by restriction method for the inverse-free Krylov subspace method. We generalize the original convergence theory for the inverse-free preconditioned Krylov subspace method to justify this deflation scheme. We next extend the inverse-free Krylov subspace method with deflation by restriction to the singular value problem. We consider preconditioning based on robust incomplete factorization to accelerate the convergence. Numerical examples are provided to demonstrate the efficiency and robustness of the new algorithm.
In the second part of this thesis, we consider the so-called subspace clustering problem, which aims to extract a multi-subspace structure from a collection of points lying in a high-dimensional space. Recently, methods based on the self-expressiveness property (SEP), such as Sparse Subspace Clustering and Low Rank Representation, have been shown to achieve superior performance over other methods. However, methods with SEP may produce representations that are not amenable to clustering through graph partitioning. We propose a method in which the points are expressed in terms of an orthonormal basis, chosen optimally in the sense that the representation of all points is sparsest. Numerical results are given to illustrate the effectiveness and efficiency of this method.
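The connection between singular values and a symmetric eigenvalue problem that underlies the first part can be illustrated directly: the smallest singular value of A satisfies σ_min(A)² = λ_min(AᵀA). A small dense NumPy check of this identity (the thesis's inverse-free preconditioned Krylov method targets matrices far too large for this direct approach):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))

# Extreme singular values can be obtained through the symmetric eigenvalue
# problem (A^T A) v = sigma^2 v, solved densely here only for illustration.
evals = np.linalg.eigvalsh(A.T @ A)   # ascending eigenvalues of A^T A
sigma_min = np.sqrt(evals[0])

# Cross-check against the smallest singular value from a dense SVD.
gap = abs(sigma_min - np.linalg.svd(A, compute_uv=False)[-1])
print(f"discrepancy: {gap:.2e}")
```

For large sparse A this eigenvalue formulation is what Krylov methods exploit, since they only need matrix-vector products with A and Aᵀ rather than the full decomposition.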
|
49 |
Improving the efficiency and accuracy of nocturnal bird surveys through equipment selection and partial automation. Lazarevic, Ljubica. January 2010 (has links)
Birds are a key environmental asset, and this is recognised through comprehensive legislation and policy ensuring their protection and conservation. Many species are active at night, and surveys are required to understand the implications of proposed developments such as towers and to reduce possible conflicts with these structures. Night vision devices are commonly used in nocturnal surveys, either to scope an area for bird numbers and activity, or to remotely sense an area to determine potential risk. This thesis explores some practical and theoretical approaches that can improve the accuracy, confidence and efficiency of nocturnal bird surveillance. As image intensifiers and thermal imagers have operational differences, each device has associated strengths and limitations. Empirical work established that image intensifiers are best used for species identification of birds against the ground or vegetation, while thermal imagers perform best in detection tasks and in monitoring bird airspace usage. The typical approach of viewing remotely sensed bird survey video in its entirety is slow, inaccurate and inefficient. Accuracy can be significantly improved by viewing the survey video at half the playback speed. Motion detection efficiency and accuracy can be greatly improved through the use of adaptive background subtraction and cumulative image differencing. An experienced ornithologist uses bird flight style and wing oscillations to identify bird species. Changes in wing oscillations can be represented in a single inter-frame similarity matrix through area-based differencing. Bird species classification can then be automated using singular value decomposition to reduce the matrices to one-dimensional vectors for training a feed-forward neural network.
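The feature-extraction pipeline described in the final sentences can be sketched as follows: area-based differencing between frame pairs builds an inter-frame similarity matrix, and an SVD reduces that matrix to a one-dimensional descriptor for a classifier. The random frames below are a synthetic stand-in for real survey footage, and the neural-network stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-in for a clip: 20 frames of 32x32 grayscale video
# (the thesis uses real nocturnal survey footage; this is illustrative).
frames = rng.random((20, 32, 32))

# Area-based inter-frame similarity matrix: entry (i, j) summarises how much
# frame i differs from frame j, so periodic wing beats leave a banded pattern.
n = len(frames)
sim = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        sim[i, j] = np.sum(np.abs(frames[i] - frames[j]))

# SVD reduces the matrix to a one-dimensional feature vector, as the thesis
# does before training a feed-forward neural network.
U, s, Vt = np.linalg.svd(sim)
feature = U[:, 0] * s[0]          # leading mode as a fixed-length descriptor
print(feature.shape)
```

The descriptor has a fixed length equal to the number of frames, which is what makes it usable as input to a feed-forward network regardless of image resolution.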
|
50 |
Rapid Frequency Estimation. Koski, Antti E. 28 March 2006 (links)
Frequency estimation plays an important role in many digital signal processing applications. Many areas have benefited from the discovery of the Fast Fourier Transform (FFT) decades ago and from more recent advances in modern spectral estimation techniques. As processor and programmable logic technologies advance, unconventional methods for rapid frequency estimation in white Gaussian noise should be considered for real time applications. In this thesis, a practical hardware implementation that combines two known frequency estimation techniques is presented, implemented, and characterized. The combined implementation, using the well known FFT and a less well known modern spectral analysis method known as the Direct State Space (DSS) algorithm, is used to demonstrate and promote the application of modern spectral methods in various real time applications, including Electronic Counter Measure (ECM) techniques.
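The FFT stage of such a combined estimator can be sketched as a coarse magnitude-peak search refined by parabolic interpolation; the DSS stage is a separate state-space method not shown here. All signal parameters below are invented for illustration:

```python
import numpy as np

fs = 1000.0                       # sample rate in Hz (illustrative)
f_true = 123.4                    # tone frequency to estimate (illustrative)
n = 256
t = np.arange(n) / fs
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * f_true * t) + 0.1 * rng.standard_normal(n)

# Coarse estimate: locate the FFT magnitude peak (skipping the DC bin).
X = np.abs(np.fft.rfft(x))
k = int(np.argmax(X[1:])) + 1

# Refine with parabolic interpolation on log magnitudes around the peak,
# recovering a fractional-bin offset from the three bins nearest the peak.
a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
delta = 0.5 * (a - c) / (a - 2 * b + c)
f_est = (k + delta) * fs / n

print(f"estimate: {f_est:.2f} Hz")
```

The raw FFT resolution here is fs/n, roughly 3.9 Hz per bin; the interpolation step recovers a sub-bin estimate, which is the kind of cheap refinement a real-time hardware implementation can afford before handing off to a higher-resolution method.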
|