  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world.
171

Spectral factorization of matrices

Gaoseb, Frans Otto 06 1900 (has links)
Abstract in English / The research will analyze and compare the current research on the spectral factorization of non-singular and singular matrices. We show that a non-singular non-scalar matrix A can be written as a product A = BC, where the eigenvalues of B and C are arbitrarily prescribed subject to the condition that the product of the eigenvalues of B and C must be equal to the determinant of A. Further, B and C can be simultaneously triangularised as a lower and an upper triangular matrix respectively. Singular matrices will be factorized in terms of nilpotent matrices, and otherwise over an arbitrary or complex field, in order to present an integrated and detailed report on the current state of research in this area. Applications related to unipotent, positive-definite, commutator, involutory and Hermitian factorization are studied for non-singular matrices, while applications related to positive-semidefinite matrices are investigated for singular matrices. We will consider the theorems found in Sourour [24] and Laffey [17] to show that a non-singular non-scalar matrix can be factorized spectrally. The same two articles will be used to show applications to unipotent, positive-definite and commutator factorization. Applications related to Hermitian factorization will be considered in [26]. Laffey [18] shows that a non-singular matrix A with det A = ±1 is a product of four involutions, with certain conditions on the arbitrary field. To aid with this conclusion, a thorough study is made of Hoffman [13], who shows that an invertible linear transformation T of a finite-dimensional vector space over a field is a product of two involutions if and only if T is similar to T⁻¹. Sourour shows in [24] that if A is an n × n matrix over an arbitrary field containing at least n + 2 elements and if det A = ±1, then A is the product of at most four involutions.
We will review the work of Wu [29] and show that a singular matrix A of order n ≥ 2 over the complex field can be expressed as a product of two nilpotent matrices, where the rank of each of the factors is the same as A, except when A is a 2 × 2 nilpotent matrix of rank one. Nilpotent factorization of singular matrices over an arbitrary field will also be investigated. Laffey [17] shows that the result of Wu, which he established over the complex field, is also valid over an arbitrary field by making use of a special matrix factorization involving similarity to an LU factorization. His proof is based on an application of Fitting's Lemma to express, up to similarity, a singular matrix as a direct sum of a non-singular and nilpotent matrix, and then to write the non-singular component as a product of a lower and upper triangular matrix using a matrix factorization theorem of Sourour [24]. The main theorem by Sourour and Tang [26] will be investigated to highlight the necessary and sufficient conditions for a singular matrix to be written as a product of two matrices with prescribed eigenvalues. This result is used to prove applications related to positive-semidefinite matrices for singular matrices. / National Research Foundation of South Africa / Mathematical Sciences / M Sc. (Mathematics)
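As a hedged illustration of the eigenvalue-product constraint described above (a special case, not the general Sourour/Laffey construction), the following numpy sketch writes a non-singular, non-scalar matrix as a unit lower triangular factor B (unipotent, eigenvalues all 1) times an upper triangular factor C via a Doolittle LU factorization, and checks that the product of the factors' eigenvalues equals det A. The function name and example matrix are illustrative choices:

```python
import numpy as np

def lu_doolittle(A):
    """Doolittle LU factorization without pivoting: A = L @ U,
    with L unit lower triangular and U upper triangular.
    Assumes all leading principal minors of A are nonzero."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(i, n):
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[4.0, 3.0], [6.0, 3.0]])     # non-singular, non-scalar
L, U = lu_doolittle(A)

# Eigenvalues of triangular matrices sit on the diagonal:
# eig(L) = {1, 1} (unipotent), eig(U) = {4, -1.5}.
eig_product = np.prod(np.diag(L)) * np.prod(np.diag(U))
print(np.allclose(L @ U, A))               # factorization holds
print(np.isclose(eig_product, np.linalg.det(A)))
```

Here the eigenvalues of B are "prescribed" to be all ones, so the product condition forces the eigenvalues of C to multiply to det A, which the diagonal of U realizes.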
172

An Examination into the Statistics of the Singular Vectors for the Multi-User MIMO Wireless Channel

Gunyan, Scott Nathan 13 August 2004 (has links) (PDF)
Many capacity and near-capacity achieving methods in multiple-input multiple-output (MIMO) wireless channels make use of the singular value decomposition (SVD) of the channel matrix. For the multi-user case, the SVD of the channel matrix for each user may result in right and left singular vectors that are similar between users. This suggests another descriptive characterization of the multi-user MIMO channel. Closely aligned singular vectors between any two users could reduce the achievable signaling rates of signal processing communication methods, as one user would be more difficult to resolve in space-time from another. An examination into how this alignment can be described in realistic MIMO multipath channels using a two-ring channel model is presented. The effects of correlation between singular vectors on achievable signaling rates are shown for one existing algorithm that approaches the sum capacity, known as block-diagonalization. Actual data collected in several indoor and outdoor experiments, performed using newly constructed measurement hardware that extends the capabilities of an existing MIMO measurement system, are also analyzed.
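A minimal sketch of the alignment statistic between two users' singular vectors, assuming toy i.i.d. Rayleigh channels rather than the two-ring model used in the thesis (function names are illustrative):

```python
import numpy as np

def principal_alignment(H1, H2):
    """Alignment |<u1, u2>| between the principal left singular
    vectors of two users' channel matrices (1 = identical spatial
    signatures, 0 = orthogonal)."""
    u1 = np.linalg.svd(H1)[0][:, 0]
    u2 = np.linalg.svd(H2)[0][:, 0]
    return abs(np.vdot(u1, u2))

rng = np.random.default_rng(1)

def rayleigh_channel(nr, nt):
    # i.i.d. complex Gaussian entries, unit average power per entry
    return (rng.standard_normal((nr, nt))
            + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

H1, H2 = rayleigh_channel(4, 4), rayleigh_channel(4, 4)
rho = principal_alignment(H1, H2)
print(f"principal singular vector alignment: {rho:.3f}")
```

Values near 1 indicate users whose strongest spatial modes nearly coincide, the situation the abstract flags as harmful to block-diagonalization rates.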
173

Sparse and orthogonal singular value decomposition

Khatavkar, Rohan January 1900 (has links)
Master of Science / Department of Statistics / Kun Chen / The singular value decomposition (SVD) is a commonly used matrix factorization technique in statistics, and it is very effective in revealing many low-dimensional structures in a noisy data matrix or a coefficient matrix of a statistical model. In particular, it is often desirable to obtain a sparse SVD, i.e., only a few singular values are nonzero and their corresponding left and right singular vectors are also sparse. However, in several existing methods for sparse SVD estimation, the exact orthogonality among the singular vectors is often sacrificed due to the difficulty of incorporating the non-convex orthogonality constraint in sparse estimation. Imposing orthogonality in addition to sparsity, albeit difficult, can be critical in restricting and guiding the search of the sparsity pattern and facilitating model interpretation. Combining the ideas of penalized regression and Bregman iterative methods, we propose two methods that strive to achieve the dual goal of sparse and orthogonal SVD estimation, in the general framework of high-dimensional multivariate regression. We set up simulation studies to demonstrate the efficacy of the proposed methods.
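As a generic sketch of the sparse-SVD idea (alternating power iteration with soft-thresholding, not the Bregman-based estimators proposed in the thesis), with illustrative parameters:

```python
import numpy as np

def soft(x, lam):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_rank1_svd(X, lam=0.1, n_iter=100):
    """Alternating power iteration with soft-thresholding: estimates a
    sparse rank-one SVD layer. Returns near-unit-norm sparse u, v and
    the associated singular value estimate d = u' X v."""
    u, _, vt = np.linalg.svd(X)
    u, v = u[:, 0], vt[0]                  # dense initialization
    for _ in range(n_iter):
        u = soft(X @ v, lam)
        u /= np.linalg.norm(u) + 1e-12
        v = soft(X.T @ u, lam)
        v /= np.linalg.norm(v) + 1e-12
    return u, v, u @ X @ v

# Synthetic data: sparse rank-one signal plus small Gaussian noise.
rng = np.random.default_rng(0)
u0 = np.zeros(20); u0[:3] = [0.8, 0.5, 0.33]
v0 = np.zeros(30); v0[:4] = 0.5
X = 5.0 * np.outer(u0 / np.linalg.norm(u0), v0) \
    + 0.1 * rng.standard_normal((20, 30))

u, v, d = sparse_rank1_svd(X, lam=0.2)
print("nonzeros in u:", np.sum(u != 0), "| nonzeros in v:", np.sum(v != 0))
```

Note the tension the abstract describes: thresholding each singular vector separately recovers sparsity, but nothing in this simple scheme enforces orthogonality across successively extracted layers.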
174

Essays on Computational Problems in Insurance

Ha, Hongjun 31 July 2016 (has links)
This dissertation consists of two chapters. The first chapter establishes an algorithm for calculating capital requirements. The calculation of capital requirements for financial institutions usually entails a reevaluation of the company's assets and liabilities at some future point in time for a (large) number of stochastic forecasts of economic and firm-specific variables. The complexity of this nested valuation problem leads many companies to struggle with the implementation. The current chapter proposes and analyzes a novel approach to this computational problem based on least-squares regression and Monte Carlo simulations. Our approach is motivated by a well-known method for pricing non-European derivatives. We study convergence of the algorithm and analyze the resulting estimate for practically important risk measures. Moreover, we address the problem of how to choose the regressors, and show that an optimal choice is given by the left singular functions of the corresponding valuation operator. Our numerical examples demonstrate that the algorithm can produce accurate results at relatively low computational costs, particularly when relying on the optimal basis functions. The second chapter discusses another application of regression-based methods, in the context of pricing variable annuities. Advanced life insurance products with exercise-dependent financial guarantees present challenging problems in view of pricing and risk management. In particular, due to the complexity of the guarantees and since practical valuation frameworks include a variety of stochastic risk factors, conventional methods that are based on the discretization of the underlying (Markov) state space may not be feasible. As a practical alternative, this chapter explores the applicability of Least-Squares Monte Carlo (LSM) methods familiar from American option pricing in this context. 
Unlike the previous literature, we consider optionality beyond surrendering the contract, focusing on popular withdrawal benefits (so-called GMWBs) within variable annuities. We introduce different LSM variants, particularly the regression-now and regression-later approaches, and explore their viability and potential pitfalls. We commence our numerical analysis in a basic Black-Scholes framework, where we compare the LSM results to those from a discretization approach. We then extend the model to include various relevant risk factors and compare the results to those from the basic framework.
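The regression-now idea can be sketched with the classic Longstaff-Schwartz benchmark, an American-style put under Black-Scholes; the parameters below are the standard illustrative case from that literature, not figures from the chapter:

```python
import numpy as np

def lsm_american_put(s0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=20000, seed=7):
    """Regression-now Least-Squares Monte Carlo (Longstaff-Schwartz)
    for a Bermudan put approximating the American price."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_inc = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = s0 * np.exp(np.cumsum(log_inc, axis=1))   # paths at t = 1..n_steps
    disc = np.exp(-r * dt)

    V = np.maximum(K - S[:, -1], 0.0)             # payoff at maturity
    for t in range(n_steps - 2, -1, -1):
        V *= disc                                 # roll back one step
        itm = K - S[:, t] > 0                     # regress on ITM paths only
        if itm.sum() > 3:
            coef = np.polyfit(S[itm, t], V[itm], 2)
            cont = np.polyval(coef, S[itm, t])    # continuation value
            exercise = K - S[itm, t]
            Vi = V[itm]
            Vi[exercise > cont] = exercise[exercise > cont]
            V[itm] = Vi
    return disc * V.mean()                        # discount t = dt -> 0

price = lsm_american_put()
# Longstaff-Schwartz report roughly 4.47 for these parameters.
print(f"LSM American put price: {price:.3f}")
```

A GMWB valuation along the lines the chapter describes would replace the put payoff with the guarantee's cashflows and the single stock path with the relevant state variables, but the backward-induction-plus-regression skeleton is the same.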
175

Mental files

Goodsell, Thea January 2013 (has links)
It is often supposed that we can make progress understanding singular thought about objects by claiming that thinkers use ‘mental files’. However, the proposal is rarely subject to sustained critical evaluation. This thesis aims to clarify and critique the claim that thinkers use mental files. In my introductory first chapter, I motivate my subsequent discussion by introducing the claim that thinkers deploy modes of presentation in their thought about objects, and lay out some of my assumptions and terminology. In the second chapter, I introduce mental files, responding to the somewhat fragmented files literature by setting out a core account of files, and outlining different ways of implementing the claim that thinkers use mental files. I highlight pressing questions about the synchronic and diachronic individuation conditions for files. In chapters three and four, I explore whether ‘de jure coreference’ can be used to give synchronic individuation conditions on mental files. I explore existing characterisations of de jure coreference before presenting my own, but conclude that de jure coreference does not give a useful account of the synchronic individuation conditions on files. In chapter five, I consider the proposal that thinkers must sometimes trade on the coreference of their mental representations, and argue that we can give synchronic individuation conditions on files in terms of trading on coreference. In chapter six, I bring together the account of files developed so far, compare it to the most developed theory of mental files published to date, and defend my account from the objection that it is circular. In chapter seven, I explore routes for giving diachronic individuation conditions on mental files. In my concluding chapter, I distinguish the core account of files from the idea that the file metaphor should be taken seriously. 
I suggest that my investigation of the consequences of the core account has shown that the file metaphor is unhelpful, and I outline reasons to exercise caution when using ‘files’ terminology.
176

Change-point detection in dynamical systems using auto-associative neural networks

Bulunga, Meshack Linda 03 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: In this research work, auto-associative neural networks have been used for change-point detection. This is a nonlinear technique that employs artificial neural networks, inspired among others by Frank Rosenblatt's linear perceptron algorithm for classification. An auto-associative neural network was used successfully to detect change-points for various types of time series data. Its performance was compared to that of singular spectrum analysis developed by Moskvina and Zhigljavsky. The Fraction of Explained Variance (FEV) was also used to compare the performance of the two methods. FEV indicators are similar to the eigenvalues of the covariance matrix in principal component analysis. Two types of time series data were used for change-point detection: Gaussian data series and nonlinear reaction data series. The Gaussian data had four series with different types of change-points, namely a change in the mean value of the time series (T1), a change in the variance of the time series (T2), a change in the autocorrelation of the time series (T3), and a change in the cross-correlation of two time series (T4). Both linear and nonlinear methods were able to detect the changes for T1, T2 and T4. None of them could detect the changes in T3. With the Gaussian data series, linear singular spectrum analysis (LSSA) performed as well as the nonlinear variant (NLSSA) for change-point detection. This is because the time series was linear and the nonlinearity of the NLSSA was therefore not important. LSSA did even better than NLSSA when comparing FEV values, since it is not subject to suboptimal solutions, as can sometimes be the case with auto-associative neural networks. The nonlinear data consisted of the Belousov-Zhabotinsky (BZ) reaction time series, autocatalytic reaction time series data and data representing a predator-prey system.
With the NLSSA method, change-points could be detected accurately in all three systems, while LSSA only managed to detect the change-points in the BZ reactions and the predator-prey system. The NLSSA method also fared better than the LSSA method when comparing FEV values for the BZ reactions. The LSSA method was able to model the autocatalytic reactions fairly accurately, explaining 99% of the variance in the data with one component only, whereas NLSSA with two nodes in the bottleneck attained an FEV of 87%. The performance of NLSSA and LSSA was comparable for the predator-prey system, where both could attain FEV values of 92% with a single component. An auto-associative neural network is a good technique for change-point detection in nonlinear time series data. However, it offers no advantage over linear techniques when the time series data are linear.
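A minimal numpy sketch of the linear (LSSA-style) detection statistic: embed the series in lagged windows, fit a low-dimensional subspace on a pre-change reference block, and flag windows whose reconstruction error exceeds a threshold. The nonlinear version replaces the projection with an auto-associative network; all parameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, change = 1000, 500
x = rng.standard_normal(n)
x[change:] += 3.0                     # mean shift (a "T1"-type change)

Lw, k, n_train = 20, 2, 300           # window length, subspace dim, reference size
X = np.lib.stride_tricks.sliding_window_view(x, Lw)   # lagged trajectory vectors

mu = X[:n_train].mean(axis=0)
# Leading k right singular vectors of the reference block span the
# "normal" subspace, as in linear singular spectrum analysis.
_, _, vt = np.linalg.svd(X[:n_train] - mu, full_matrices=False)
P = vt[:k].T @ vt[:k]                 # projector onto the learned subspace

resid = (X - mu) - (X - mu) @ P       # reconstruction error of every window
err = np.linalg.norm(resid, axis=1)
thresh = err[:n_train].mean() + 5 * err[:n_train].std()
detected = int(np.argmax(err > thresh))   # first window exceeding threshold
print("change detected near sample", detected)
```

Replacing the linear projector `P` with an auto-associative (bottleneck) network trained on the reference block gives the nonlinear variant the thesis studies; the detection logic on the reconstruction error is unchanged.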
177

On S₁-strictly singular operators

Teixeira, Ricardo Verotti O. 08 October 2010 (has links)
Let X be a Banach space and denote by SS₁(X) the set of all S₁-strictly singular operators from X to X. We prove that there is a Banach space X such that SS₁(X) is not a closed ideal. More specifically, we construct a space X and operators T₁ and T₂ in SS₁(X) such that T₁+T₂ is not in SS₁(X). We show one example where the space X is reflexive and another where it is c₀-saturated. We also develop some results about S_α-strictly singular operators for α < ω₁.
178

ON A PALEY-WIENER THEOREM FOR THE ZS-AKNS SCATTERING TRANSFORM

Walker, Ryan D. 01 January 2013 (has links)
In this thesis, we establish an analog of the Paley-Wiener Theorem for the ZS-AKNS scattering transform on a set of real potentials. We also demonstrate one application of our techniques to the study of an inverse spectral problem for a half-line Miura potential Schroedinger equation.
179

Modeling, simulation and control of redundantly actuated parallel manipulators

Ganovski, Latchezar 04 December 2007 (has links)
Redundantly actuated manipulators have only recently attracted significant scientific interest. Their advantages in terms of enlarged workspace, higher payload ratio and better manipulability with respect to non-redundantly actuated systems explain the appearance of numerous applications in various fields: high-precision machining, fault-tolerant manipulators, transport and outer-space applications, surgical operation assistance, etc. The present Ph.D. research proposes a unified approach for the modeling and actuation of redundantly actuated parallel manipulators. The approach takes advantage of actuator redundancy principles and thus allows for following trajectories that contain parallel (force) singularities, and for eliminating the negative effect of the latter. As a first step of the approach, parallel manipulator kinematic and dynamic models are generated and treated in such a way that they do not suffer from kinematic loop-closure numeric problems. Using symbolic models based on the multibody formalism and a Newton-Euler recursive computation scheme, faster-than-real-time computer simulations can thus be achieved. Further, an original piecewise actuation strategy is applied to the manipulators in order to eliminate singularity effects during their motion. Depending on the manipulator and the trajectories to be followed, this strategy results in non-redundant or redundant actuation solutions that satisfy actuator performance limits and additional optimality criteria. Finally, a validation of the theoretical results and the redundant actuation benefits is performed on the basis of well-known control algorithms applied to two parallel manipulators of different complexity. This is done both by means of computer simulations and by experimental runs on a prototype designed at the Center for Research in Mechatronics of the UCL.
The advantages of the actuator redundancy of parallel manipulators with respect to the elimination of singularity effects during motion and the actuator load optimization are thus confirmed (virtually and experimentally) and highlighted thanks to the proposed approach for modeling, simulation and control.
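A hedged illustration of the force-singularity issue on a generic planar two-link arm (not one of the thesis's manipulators): the Jacobian loses rank when the links align, so the actuator effort required for some end-effector forces diverges nearby; redundant actuation adds columns to the Jacobian that can keep it full rank along such trajectories:

```python
import numpy as np

def jacobian_2r(theta1, theta2, l1=1.0, l2=0.7):
    """Velocity Jacobian of a planar 2R serial arm: maps joint rates
    to end-effector velocity. det J = l1*l2*sin(theta2), so the arm
    is singular when the links are aligned (theta2 = 0 or pi)."""
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

for th2 in (np.pi / 2, 0.3, 0.05, 1e-3):
    J = jacobian_2r(0.4, th2)
    print(f"theta2={th2:6.3f}  det J={np.linalg.det(J):+.5f}  "
          f"cond J={np.linalg.cond(J):.1e}")
# As theta2 -> 0 the determinant vanishes and the condition number
# blows up; this is the effect that the piecewise redundant actuation
# strategy is designed to remove along singular trajectories.
```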
180

Computational Tools and Methods for Objective Assessment of Image Quality in X-Ray CT and SPECT

Palit, Robin January 2012 (has links)
Computational tools of use in the objective assessment of image quality for tomography systems were developed for computer processing units (CPU) and graphics processing units (GPU) in the image quality lab at the University of Arizona. Fast analytic x-ray projection code called IQCT was created to compute the mean projection image for cone beam multi-slice helical computed tomography (CT) scanners. IQCT was optimized to take advantage of the massively parallel architecture of GPUs. CPU code for computing single photon emission computed tomography (SPECT) projection images was written calling upon previous research in the image quality lab. IQCT and the SPECT modeling code were used to simulate data for multimodality SPECT/CT observer studies. The purpose of these observer studies was to assess the benefit in image quality of using attenuation information from a CT measurement in myocardial SPECT imaging. The observer chosen for these studies was the scanning linear observer. The tasks for the observer were localization of a signal and estimation of the signal radius. For the localization study, area under the localization receiver operating characteristic curve (A(LROC)) was computed as A(LROC)^Meas = 0.89332 ± 0.00474 and A(LROC)^No = 0.89408 ± 0.00475, where "Meas" implies the use of attenuation information from the CT measurement, and "No" indicates the absence of attenuation information. For the estimation study, area under the estimation receiver operating characteristic curve (A(EROC)) was quantified as A(EROC)^Meas = 0.55926 ± 0.00731 and A(EROC)^No = 0.56167 ± 0.00731. Based on these results, it was concluded that the use of CT information did not improve the scanning linear observer's ability to perform the stated myocardial SPECT tasks. 
The risk to the patient of the CT measurement was quantified in terms of excess effective dose as 2.37 mSv for males and 3.38 mSv for females. Another image quality tool generated within this body of work was a singular value decomposition (SVD) algorithm to reduce the dimension of the eigenvalue problem for tomography systems with rotational symmetry. Agreement between the results of this reduced-dimension SVD algorithm and those of a standard SVD algorithm is shown for a toy problem. The use of SVD toward image quality metrics such as the measurement and null spaces is also presented.
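A toy example of the dimension-reduction idea for rotationally symmetric systems: a circulant (cyclically symmetric) operator is diagonalized by the DFT, so its singular values follow from n scalar problems rather than one full n × n SVD (the matrix here is synthetic, not a SPECT system matrix):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8
c = rng.standard_normal(n)        # first column of the system matrix

# Circulant matrix: each column is the first column cyclically shifted,
# a discrete analogue of an imaging operator with rotational symmetry.
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

full_sv = np.sort(np.linalg.svd(C, compute_uv=False))

# The DFT diagonalizes any circulant, so the singular values are just
# the magnitudes of the DFT of the first column.
fast_sv = np.sort(np.abs(np.fft.fft(c)))

print(np.allclose(full_sv, fast_sv))
```

For a system that is symmetric only under a finite rotation group, the same idea block-diagonalizes the eigenvalue problem into one smaller problem per irreducible representation, which is the reduction the SVD algorithm above exploits.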
