11

OBJECT RECOGNITION BY GROUND-PENETRATING RADAR IMAGING SYSTEMS WITH TEMPORAL SPECTRAL STATISTICS

Ono, Sashi; Lee, Hua, October 2004 (has links)
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California / This paper describes a new approach to object recognition using ground-penetrating radar (GPR) imaging systems. The recognition procedure utilizes spectral content rather than the object shape used in traditional methods. To produce the identification feature of an object, the most common spectral component is obtained by singular value decomposition (SVD) of the training sets. The identification process is then integrated into the backward-propagation image reconstruction algorithm, which is implemented on FMCW GPR imaging systems.
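A minimal sketch of the spectral-signature idea described above, assuming the training measurements for one object class are stacked column-wise into a matrix; the names training_spectra and classify_spectrum are hypothetical, and the backward-propagation imaging step of the paper is not shown.

```python
import numpy as np

def dominant_spectral_signature(training_spectra):
    """Most common spectral component of a training set.

    training_spectra: (n_freqs, n_measurements) array, one column per
    training measurement of the object class.
    """
    # The leading left singular vector captures the spectral content
    # shared across the training measurements.
    U, s, Vt = np.linalg.svd(training_spectra, full_matrices=False)
    return U[:, 0]

def classify_spectrum(observed, signatures):
    """Pick the class whose signature correlates best with an observed spectrum."""
    observed = observed / np.linalg.norm(observed)
    scores = [abs(np.vdot(sig, observed)) for sig in signatures]
    return int(np.argmax(scores))
```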
12

Probabilistic Robust Design For Dynamic Systems Using Metamodelling

Seecharan, Turuna Saraswati January 2007 (has links)
Designers use simulations to observe the behaviour of a system and to make design decisions to improve dynamic performance. However, for complex dynamic systems, these simulations are often time-consuming and, for robust design purposes, numerous simulations are required as a range of design variables is investigated. Furthermore, the optimum set is desired to meet specifications at particular instances in time. In this thesis, the dynamic response of a system is broken into discrete time instances and recorded into a matrix. Each column of this matrix corresponds to a discrete time instance and each row corresponds to the response at a particular design variable set. Singular Value Decomposition (SVD) is then used to separate this matrix into two matrices: one that consists of information in parameter-space and the other containing information in time-space. Metamodels are then used to efficiently and accurately calculate the response at an arbitrary set of design variables at any time. This efficiency is especially useful in Monte Carlo simulation, where the responses are required at a very large sample of design variable sets. This work is then extended to the case where the normalized sensitivities, along with the first and second moments of the response, are required at specific times. The thesis then shows how the metamodel is evaluated at specific times and how it is used in parameter design or integrated design to find the optimum parameters given specifications at specific time steps. In conclusion, this research shows that SVD and metamodelling can be used to apply probabilistic robust design tools where specifications at certain times are required for the optimum performance of a system.
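A minimal sketch of the matrix split described above, assuming the responses have already been simulated into a designs-by-times matrix R; the quadratic response-surface metamodel used here is an illustrative choice, not necessarily the metamodel form used in the thesis.

```python
import numpy as np

def fit_svd_metamodel(X, R, rank=3):
    """X: (n_designs, n_vars) design-variable sets; R: (n_designs, n_times) responses."""
    # SVD separates R into parameter-space scores and time-space modes.
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    scores = U[:, :rank] * s[:rank]                      # parameter-space information
    basis = np.hstack([np.ones((len(X), 1)), X, X**2])   # simple quadratic basis
    betas = [np.linalg.lstsq(basis, scores[:, j], rcond=None)[0] for j in range(rank)]
    return betas, Vt[:rank]                              # time-space information

def predict_response(x_new, betas, time_modes):
    """Response at a new design-variable set, at every recorded time instance."""
    basis = np.concatenate([[1.0], x_new, x_new**2])
    scores = np.array([basis @ b for b in betas])
    return scores @ time_modes
```

Monte Carlo robust design can then call predict_response at a very large sample of design-variable sets without re-running the full dynamic simulation.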
14

Phylogenetic analysis of multiple genes based on spectral methods

Abeysundera, Melanie 28 October 2011 (has links)
Multiple gene phylogenetic analysis is of interest since single gene analysis often results in poorly resolved trees. Here the use of spectral techniques for analyzing multi-gene data sets is explored. The protein sequences are treated as categorical time series and a measure of similarity between a pair of sequences, the spectral covariance, is used to build trees. Unlike other methods, the spectral covariance method focuses on the relationship between the sites of genetic sequences. We consider two methods with which to combine the dissimilarity or distance matrices of multiple genes. The first method involves properly scaling the dissimilarity measures derived from different genes between a pair of species and using the mean of these scaled dissimilarity measures as a summary statistic to measure the taxonomic distances across multiple genes. We introduce two criteria for computing scale coefficients which can then be used to combine information across genes, namely the minimum variance (MinVar) criterion and the minimum coefficient of variation squared (MinCV) criterion. The scale coefficients obtained with the MinVar and MinCV criteria can then be used to derive a combined-gene tree from the weighted average of the distance or dissimilarity matrices of multiple genes. The second method is based on the singular value decomposition of a matrix made up of the p-vectors of pairwise distances for k genes. By decomposing such a matrix, we extract the common signal present in multiple genes to obtain a single tree representation of the relationship between a given set of taxa. Influence functions for the components of the singular value decomposition are derived to determine which genes are most influential in determining the combined-gene tree.
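A minimal sketch of the second combination method, assuming the p pairwise distances from each of k genes are available as columns of a single matrix; the tree-building step (e.g., neighbour joining on the combined distances) is left out, and the sign convention is an added assumption.

```python
import numpy as np

def combined_gene_distances(distance_vectors):
    """Common signal across genes' pairwise-distance vectors.

    distance_vectors: (p, k) array; column j holds the p pairwise
    distances between taxa computed from gene j.
    """
    D = np.asarray(distance_vectors, dtype=float)
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    combined = s[0] * U[:, 0]      # distance vector of the leading rank-one signal
    if combined.sum() < 0:         # fix the arbitrary SVD sign so distances are positive
        combined = -combined
    return combined
```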
15

IMPROVED DOCUMENT SUMMARIZATION AND TAG CLOUDS VIA SINGULAR VALUE DECOMPOSITION

Provost, James 25 September 2008 (has links)
Automated summarization is a difficult task. World-class summarizers can provide only "best guesses" of which sentences encapsulate the important content from within a set of documents. As automated systems continue to improve, users are still not given the means to observe complex relationships between seemingly independent concepts. In this research we used singular value decompositions to organize concepts and determine the best candidate sentences for an automated summary. The results from this straightforward attempt were comparable to world-class summarizers. We then included a clustered tag cloud, using a singular value decomposition to measure term "interestingness" with respect to the set of documents. The combination of best candidate sentences and tag clouds provided a more inclusive summary than a traditionally-developed summarizer alone. / Thesis (Master, Computing) -- Queen's University, 2008.
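A minimal latent-semantic-style sketch of choosing candidate sentences and scoring terms with one SVD, assuming a term-by-sentence weight matrix has already been built; the specific scoring rules below are common choices and are not claimed to be the thesis's exact criteria.

```python
import numpy as np

def top_sentences(term_sentence, k=2, n_sentences=3):
    """Rank sentences by their weight in the k leading latent topics.

    term_sentence: (n_terms, n_sents) matrix of term counts or tf-idf weights.
    """
    U, s, Vt = np.linalg.svd(term_sentence, full_matrices=False)
    # Each column of Vt[:k] weights one sentence in the top-k latent topics.
    scores = np.sqrt((s[:k, None] ** 2 * Vt[:k] ** 2).sum(axis=0))
    return np.argsort(scores)[::-1][:n_sentences]

def term_interestingness(term_sentence, k=2):
    """Score each term by its weight in the same latent topics (for a tag cloud)."""
    U, s, Vt = np.linalg.svd(term_sentence, full_matrices=False)
    return np.sqrt((s[:k] ** 2 * U[:, :k] ** 2).sum(axis=1))
```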
16

Extension of Kendall's tau Using Rank-Adapted SVD to Identify Correlation and Factions Among Rankers and Equivalence Classes Among Ranked Elements

Campbell, Kathleen January 2014 (has links)
The practice of ranking objects, events, and people to determine relevance, importance, or competitive edge is ancient. Recently, the use of rankings has permeated into daily usage, especially in the fields of business and education. When determining the association among those creating the ranks (herein called sources), the traditional assumption is that all sources compare a list of the same items (herein called elements). In the twenty-first century, it is rare that any two sources choose identical elements to rank. Adding to this difficulty, the number of credible sources creating and releasing rankings is increasing. In the statistical literature, there is no current methodology that adequately assesses the association among multiple sources. We introduce rank-adapted singular value decomposition (R-A SVD), a new method that uses Kendall's tau as the underlying correlation method. We begin with P, a matrix of data ranks. The first step is to factor the covariance matrix K as K = cov(P) = VD^2 V^T. Here, V is an orthonormal basis that is useful in identifying when sources agree on the rank order, and specifically which sources, while D^2 is a diagonal matrix of eigenvalues. By analogy with the singular value decomposition (SVD), we define U^* = PVD^(-1), with the diagonal entries of D arranged in decreasing order. The largest eigenvalue is used to assess the overall association among the sources and gives a conservative, unbiased measure comparable to Kendall's W. Anderson's test (1963) determines whether this association is significant and identifies the number a of significantly large eigenvalues of K. When one or more eigenvalues are significant, there is evidence that the association among the sources is significant, and the a corresponding vectors of V identify specifically which sources agree. When more than one eigenvalue is significant, the a significant vectors of V also provide insight into factions: groups of sources that agree with one another but not necessarily with other groups. Using the a significant vectors of U^* provides different but equally important results. In many cases, the elements being ranked can be subdivided into equivalence classes, defined as subpopulations of ranked elements that are similar to one another but dissimilar from other classes. When these classes exist, U^* provides insight into how many classes there are and which elements belong in each class. In summary, the R-A SVD method gives the user the ability to assess whether there is any underlying association among multiple rank sources. It then identifies when sources agree and allows for more useful and careful interpretation when analyzing rank data. / Statistics
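A minimal sketch of the factorization spelled out in the abstract, assuming a complete matrix of ranks with one column per source; the plain covariance is used as written, while the Kendall's-tau adaptation and Anderson's significance test are not reproduced here.

```python
import numpy as np

def rank_adapted_svd(P):
    """Factor a rank matrix P via K = cov(P) = V D^2 V^T and form U* = P V D^(-1).

    P: (n_elements, n_sources) array; column j holds source j's ranks.
    """
    K = np.cov(P, rowvar=False)                  # source-by-source covariance
    eigvals, V = np.linalg.eigh(K)
    order = np.argsort(eigvals)[::-1]            # eigenvalues in decreasing order
    eigvals, V = eigvals[order], V[:, order]
    D = np.sqrt(np.clip(eigvals, 0.0, None))
    U_star = P @ V / np.where(D > 0, D, 1.0)     # by analogy with the SVD, as in the abstract
    return U_star, D, V

# D[0]**2 (the largest eigenvalue) measures overall association among sources;
# the leading columns of V show which sources agree (and reveal factions), and
# the corresponding columns of U_star group the ranked elements into classes.
```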
17

Improving the Performance of a Hybrid Classification Method Using a Parallel Algorithm and a Novel Data Reduction Technique

Phillips, Rhonda D. 21 August 2007 (has links)
This thesis presents both a shared memory parallel version of the hybrid classification algorithm IGSCR (iterative guided spectral class rejection) and a novel data reduction technique that can be used in conjunction with pIGSCR (parallel IGSCR). The parallel algorithm is motivated by a demonstrated need for more computing power driven by the increasing size of remote sensing datasets due to higher resolution sensors, larger study regions, and the like. Even with a fast algorithm such as pIGSCR, the reduction of dimension in a dataset is desirable in order to decrease the processing time further and possibly improve overall classification accuracy. pIGSCR was developed to produce fast and portable code using Fortran 95, OpenMP, and the Hierarchical Data Format version 5 (HDF5) and accompanying data access library. The applicability of the faster pIGSCR algorithm is demonstrated by classifying Landsat data covering most of Virginia, USA into forest and non-forest classes with approximately 90 percent accuracy. Parallel results are given using the SGI Altix 3300 shared memory computer and the SGI Altix 3700 with as many as 64 processors, reaching speedups of almost 77. This fast algorithm allows an analyst to perform and assess multiple classifications to refine parameters. As an example, pIGSCR was used for a factorial analysis consisting of 42 classifications of a 1.2 gigabyte image to select the number of initial classes (70) and class purity (70%) used for the remaining two images. A feature selection or reduction method may be appropriate for a specific classification method depending on the properties and training required for the classification method, or an alternative band selection method may be derived based on the classification method itself. This thesis introduces a feature reduction method based on the singular value decomposition (SVD). This feature reduction technique was applied to training data from two multitemporal datasets of Landsat TM/ETM+ imagery acquired over a forested area in Virginia, USA and Rondonia, Brazil. Subsequent parallel iterative guided spectral class rejection (pIGSCR) forest/non-forest classifications were performed to determine the quality of the feature reduction. The classifications of the Virginia data were five times faster using SVD-based feature reduction without affecting the classification accuracy. Feature reduction using the SVD was also compared to feature reduction using principal components analysis (PCA). The highest average accuracies for the Virginia dataset (88.34%) and for the Amazon dataset (93.31%) were achieved using the SVD. The results presented here indicate that SVD-based feature reduction can produce statistically significantly better classifications than PCA. / Master of Science
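A minimal sketch of SVD-based feature (band) reduction for training pixels, assuming the samples are stacked into a samples-by-bands matrix; the number of retained components and the decision not to centre the data are illustrative assumptions, not the thesis's exact recipe.

```python
import numpy as np

def svd_feature_reduction(train_pixels, other_pixels, n_components=3):
    """Project band vectors onto the top right singular vectors of the training data.

    train_pixels, other_pixels: (n_samples, n_bands) arrays of spectral values.
    Returns both sets in the reduced n_components-dimensional feature space.
    """
    U, s, Vt = np.linalg.svd(train_pixels, full_matrices=False)
    W = Vt[:n_components].T          # (n_bands, n_components) projection matrix
    return train_pixels @ W, other_pixels @ W
```

The reduced features would then be fed to the classifier (pIGSCR in the thesis) in place of the original bands.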
18

On the Use of Uncalibrated Digital Phased Arrays for Blind Signal Separation for Interference Removal in Congested Spectral Bands

Lusk, Lauren O. 05 May 2023 (has links)
With usable spectrum becoming increasingly more congested, the need for robust, adaptive communications to take advantage of spatially-separated signal sources is apparent. Traditional phased array beamforming techniques used for interference removal rely on perfect calibration between elements and precise knowledge of the array configuration; however, if the exact array configuration is not known (unknown or imperfect assumption of element locations, unknown mutual coupling between elements, etc.), these traditional beamforming techniques are not viable, so a blind beamforming approach is required. A novel blind beamforming approach is proposed to address complex narrow-band interference environments where the precise array configuration is unknown. The received signal is decomposed into orthogonal narrow-band partitions using a polyphase filter-bank channelizer, and a rank-reduced version of the received matrix on each sub-channel is computed through reconstruction by retaining a subset of its singular values. The wideband spectrum is synthesized through a near-perfect polyphase reconstruction filter, and a composite wideband spectrum is obtained from the maximum eigenvector of the resulting covariance matrix. The resulting process is shown to suppress numerous interference sources (in special cases even with more than the degrees of freedom of the array), all without any knowledge of the primary signal of interest. Results are validated with both simulation and wireless laboratory over-the-air experimentation. / M.S. / As the number of devices using wireless communications increases, the amount of usable radio frequency spectrum becomes increasingly congested. As a result, the need for robust, adaptive communications to improve spectral efficiency and ensure reliable communication in the presence of interference is apparent. One solution is using beamforming techniques on digital phased array receivers to maximize the energy in a desired direction and steer nulls to remove interference. However, traditional phased array beamforming techniques used for interference removal rely on perfect calibration between antenna elements and precise knowledge of the array configuration. Consequently, if the exact array configuration is not known (unknown or imperfect assumption of element locations, unknown mutual coupling between elements, etc.), these traditional beamforming techniques are not viable, so a beamforming approach with relaxed requirements (blind beamforming) is required. This thesis proposes a novel blind beamforming approach to address complex narrow-band interference in spectrally congested environments where the precise array configuration is unknown. The resulting process is shown to suppress numerous interference sources, all without any knowledge of the primary signal of interest. Results are validated with both simulation and wireless laboratory experimentation conducted with a two-element array, verifying that the proposed beamforming approach achieves a similar performance to the theoretical performance bound of receiving packets in AWGN with no interference present.
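A minimal sketch of the per-subchannel rank reduction and the final eigenvector step, assuming one channelizer output is available as an elements-by-snapshots matrix; which singular values to retain is the design choice the abstract alludes to, so it is left as an argument, and the polyphase analysis/synthesis filters are omitted.

```python
import numpy as np

def rank_reduce_subchannel(X, keep_indices):
    """Reconstruct a sub-channel's received matrix from a subset of its singular values.

    X: (n_elements, n_snapshots) complex baseband samples for one narrow-band partition.
    keep_indices: indices of the singular values retained in the reconstruction.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    s_kept = np.zeros_like(s)
    s_kept[keep_indices] = s[keep_indices]
    return (U * s_kept) @ Vh

def dominant_combining_vector(Y):
    """Maximum eigenvector of the covariance of the synthesized wideband signal Y."""
    R = Y @ Y.conj().T / Y.shape[1]
    eigvals, eigvecs = np.linalg.eigh(R)     # eigenvalues in ascending order
    return eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
```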
19

Sparse and orthogonal singular value decomposition

Khatavkar, Rohan January 1900 (has links)
Master of Science / Department of Statistics / Kun Chen / The singular value decomposition (SVD) is a commonly used matrix factorization technique in statistics, and it is very effective in revealing many low-dimensional structures in a noisy data matrix or a coefficient matrix of a statistical model. In particular, it is often desirable to obtain a sparse SVD, i.e., only a few singular values are nonzero and their corresponding left and right singular vectors are also sparse. However, in several existing methods for sparse SVD estimation, the exact orthogonality among the singular vectors is often sacrificed due to the difficulty in incorporating the non-convex orthogonality constraint in sparse estimation. Imposing orthogonality in addition to sparsity, albeit difficult, can be critical in restricting and guiding the search of the sparsity pattern and facilitating model interpretation. Combining the ideas of penalized regression and Bregman iterative methods, we propose two methods that strive to achieve the dual goal of sparse and orthogonal SVD estimation, in the general framework of high dimensional multivariate regression. We set up simulation studies to demonstrate the efficacy of the proposed methods.
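For illustration only, a generic sparse rank-one SVD layer computed by alternating soft-thresholding; this is not the penalized-regression/Bregman procedure proposed in the thesis and, as the abstract notes is the hard part, it does not enforce exact orthogonality across successive layers.

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_rank1_svd(X, lam_u=0.1, lam_v=0.1, n_iter=100):
    """Alternating soft-thresholding for one sparse singular triplet of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    u, v = U[:, 0], Vt[0]                    # warm start from the ordinary SVD
    for _ in range(n_iter):
        u = soft_threshold(X @ v, lam_u)
        if np.linalg.norm(u) > 0:
            u /= np.linalg.norm(u)
        v = soft_threshold(X.T @ u, lam_v)
        if np.linalg.norm(v) > 0:
            v /= np.linalg.norm(v)
    d = u @ X @ v
    return d, u, v   # deflate with X - d * np.outer(u, v) to extract further layers
```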
20

Essays on Computational Problems in Insurance

Ha, Hongjun 31 July 2016 (has links)
This dissertation consists of two chapters. The first chapter establishes an algorithm for calculating capital requirements. The calculation of capital requirements for financial institutions usually entails a reevaluation of the company's assets and liabilities at some future point in time for a (large) number of stochastic forecasts of economic and firm-specific variables. The complexity of this nested valuation problem leads many companies to struggle with the implementation. The current chapter proposes and analyzes a novel approach to this computational problem based on least-squares regression and Monte Carlo simulations. Our approach is motivated by a well-known method for pricing non-European derivatives. We study convergence of the algorithm and analyze the resulting estimate for practically important risk measures. Moreover, we address the problem of how to choose the regressors, and show that an optimal choice is given by the left singular functions of the corresponding valuation operator. Our numerical examples demonstrate that the algorithm can produce accurate results at relatively low computational costs, particularly when relying on the optimal basis functions. The second chapter discusses another application of regression-based methods, in the context of pricing variable annuities. Advanced life insurance products with exercise-dependent financial guarantees present challenging problems in view of pricing and risk management. In particular, due to the complexity of the guarantees and since practical valuation frameworks include a variety of stochastic risk factors, conventional methods that are based on the discretization of the underlying (Markov) state space may not be feasible. As a practical alternative, this chapter explores the applicability of Least-Squares Monte Carlo (LSM) methods familiar from American option pricing in this context. Unlike previous literature we consider optionality beyond surrendering the contract, where we focus on popular withdrawal benefits - so-called GMWBs - within Variable Annuities. We introduce different LSM variants, particularly the regression-now and regression-later approaches, and explore their viability and potential pitfalls. We commence our numerical analysis in a basic Black-Scholes framework, where we compare the LSM results to those from a discretization approach. We then extend the model to include various relevant risk factors and compare the results to those from the basic framework.
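A minimal regression-now sketch of the least-squares Monte Carlo idea in a Black-Scholes-style setting, assuming the quantity of interest is the distribution of a time-1 value of a simple maturity payoff; all parameters and the polynomial basis are illustrative, and the optimal left-singular-function basis discussed in the first chapter is not constructed here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, S0, r, sigma, T, K = 100_000, 100.0, 0.02, 0.2, 2.0, 100.0

# Simulate the state at t = 1 and the payoff at maturity T along the same paths.
S1 = S0 * np.exp((r - 0.5 * sigma**2) + sigma * rng.standard_normal(n_paths))
ST = S1 * np.exp((r - 0.5 * sigma**2) * (T - 1)
                 + sigma * np.sqrt(T - 1) * rng.standard_normal(n_paths))
payoff = np.exp(-r * (T - 1)) * np.maximum(ST - K, 0.0)   # discounted back to t = 1

# Regression-now: approximate the time-1 value as a polynomial in the time-1 state.
basis = np.vander(S1, 4)                     # columns: S1^3, S1^2, S1, 1
beta, *_ = np.linalg.lstsq(basis, payoff, rcond=None)
value_t1 = basis @ beta                      # estimated time-1 value per scenario

# A risk measure over the scenario distribution, e.g. a 99.5% quantile.
var_995 = np.quantile(value_t1, 0.995)
print(f"99.5% quantile of the estimated time-1 value: {var_995:.2f}")
```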
