  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Recording classical music in Britain : the long 1950s

Curran, Terence William January 2015 (has links)
During the 1950s the experience of recording was transformed by a series of technical innovations including tape recording, editing, the LP record, and stereo sound. Within a decade recording had evolved into an art form in which multiple takes and editing were essential components in the creation of an illusory ideal performance. The British recording industry was at the forefront of development, and the rapid growth in recording activity throughout the 1950s as companies built catalogues of LP records, at first in mono but later in stereo, had a profound impact on the music profession in Britain. Despite this, there are few documented accounts of working practices, or of the experiences of those involved in recording at this time, and the subject has received sparse coverage in academic publications. This thesis studies the development of the recording of classical music in Britain in the long 1950s, the core period under discussion being 1948 to 1964. It begins by considering the current literature on recording, the cultural history of the period in relation to classical music, and the development of recording in the 1950s. Oral history informs the central part of the thesis, based on the analysis of 89 interviews with musicians, producers, engineers and others involved in recording during the 1950s and 1960s. The thesis concludes with five case studies, four of significant recordings - Tristan und Isolde (1952), Peter Grimes (1958), Elektra (1966-67), and Scheherazade (1964) - and one of a television programme, The Anatomy of a Record (1975), examining aspects of the recording process. The thesis reveals the ways in which musicians, producers, and engineers responded to the challenges and opportunities created by advances in technology, changing attitudes towards the aesthetics of performance on record, and the evolving nature of practices and relationships in the studio. 
It also highlights the wider impact of recording on musical practice and its central role in helping to raise standards of musical performance, develop audiences for classical music, and expand the repertoire in concert and on record.
132

EXTRAÇÃO CEGA DE SINAIS COM ESTRUTURAS TEMPORAIS UTILIZANDO ESPAÇOS DE HILBERT REPRODUZIDOS POR KERNEIS / Blind extraction of signals with temporal structures using reproducing kernel Hilbert spaces

Santana Júnior, Ewaldo Éder Carvalho 10 February 2012 (has links)
This dissertation derives and evaluates a nonlinear method for Blind Source Extraction (BSE) in a Reproducing Kernel Hilbert Space (RKHS) framework. To extract the desired signal from a mixture, a priori information about the autocorrelation function of that signal is translated into a linear transformation of the Gram matrix of the data nonlinearly mapped into the Hilbert space. The method proves more robust than previously published BSE methods with respect to ambiguities in the available a priori information about the signal to be extracted. The approach can also be seen as a generalization of Kernel Principal Component Analysis to the analysis of autocorrelation matrices at specific time lags. Since the method is a kernelization of Dependent Component Analysis, it is called Kernel Dependent Component Analysis (KDCA). The dissertation also presents an Information-Theoretic Learning perspective of the analysis, studying the transformations undergone by the probability density function of the extracted signal while linear operations are computed in the RKHS.
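The extraction step described above — a linear transformation of the Gram matrix of the kernel-mapped mixtures, driven by the desired signal's temporal structure — can be sketched in a few lines. This is a minimal illustration, not the thesis's exact KDCA algorithm: the RBF kernel, the symmetric lag matrix `S`, and the ridge term `reg` are all assumptions made for the sketch.

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    # Gram matrix of an RBF kernel over a 1-D sample sequence X (shape (n,))
    d = X[:, None] - X[None, :]
    return np.exp(-d**2 / (2 * sigma**2))

def extract_lagged_component(X, tau=1, sigma=1.0, reg=1e-6):
    """Find expansion coefficients alpha that maximize the lag-tau
    autocorrelation of the extracted signal y = K @ alpha."""
    n = len(X)
    K = rbf_gram(X, sigma)
    # center the Gram matrix in feature space
    H = np.eye(n) - np.ones((n, n)) / n
    K = H @ K @ H
    # symmetric lag-tau coupling: correlate samples t with t + tau
    S = np.zeros((n, n))
    S[np.arange(n - tau), np.arange(tau, n)] = 0.5
    S[np.arange(tau, n), np.arange(n - tau)] = 0.5
    A = K @ S @ K                      # lagged autocorrelation in the RKHS
    B = K @ K + reg * np.eye(n)        # regularized normalization
    w, V = np.linalg.eig(np.linalg.solve(B, A))
    alpha = np.real(V[:, np.argmax(np.real(w))])
    y = K @ alpha
    return y / np.linalg.norm(y)
```

The generalized eigenproblem plays the role of the linear operation on the Gram matrix: the leading eigenvector selects the RKHS direction whose projection has the strongest autocorrelation at the chosen lag.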
133

Generalization bounds for random samples in Hilbert spaces / Estimation statistique dans les espaces de Hilbert

Giulini, Ilaria 24 September 2015 (has links)
This thesis focuses on obtaining generalization bounds for random samples in reproducing kernel Hilbert spaces. The approach consists in first obtaining non-asymptotic, dimension-free bounds in finite-dimensional spaces, using PAC-Bayesian inequalities related to a Gaussian perturbation of the parameter, and then extending them to separable Hilbert spaces. We first investigate the estimation of the Gram operator from an i.i.d. sample by a robust estimator and present uniform bounds that hold under weak moment assumptions. These results characterize principal component analysis independently of the dimension of the ambient space and lead to robust variants of it. We then propose a new spectral clustering algorithm: instead of keeping only the projection onto the leading eigenvectors, it computes an iterated power of the normalized Laplacian. This iteration, justified by an analysis of clustering in terms of Markov chains, acts as a regularized version of the projection onto the leading eigenvectors and yields an algorithm in which the number of clusters is determined automatically. We prove non-asymptotic bounds for the convergence of this spectral clustering algorithm when the points to be clustered form an i.i.d. sample from a distribution with compact support in a Hilbert space; these bounds are deduced from the bounds obtained for the estimation of a Gram operator in a Hilbert space. We conclude with an overview of the interest of spectral clustering in the context of image analysis.
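The iterated normalized Laplacian described above can be sketched as a toy algorithm. This is a simplified stand-in for the thesis's method: the RBF affinity with bandwidth `sigma`, the fixed power `p`, and the cosine-similarity threshold (which here substitutes for the automatic cluster-count rule) are all assumptions of the sketch.

```python
import numpy as np

def spectral_clusters_power(X, sigma=0.5, p=20, thresh=0.9):
    """Cluster the rows of X by iterating the normalized affinity
    operator M = D^{-1/2} W D^{-1/2}.  M^p acts as a smoothed
    projection onto the leading eigenspace, so rows of M^p that
    belong to the same cluster become nearly collinear."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))          # RBF affinity matrix
    D = W.sum(axis=1)
    M = W / np.sqrt(D[:, None] * D[None, :])    # normalized operator
    F = np.linalg.matrix_power(M, p)            # smooth "projection"
    F = F / np.linalg.norm(F, axis=1, keepdims=True)
    sim = F @ F.T                               # cosine similarity of rows
    labels = -np.ones(n, dtype=int)
    k = 0
    for i in range(n):
        if labels[i] < 0:                       # start a new cluster at i
            labels[(labels < 0) & (sim[i] > thresh)] = k
            k += 1
    return labels
```

Because well-separated groups make `M` nearly block-diagonal, powering it drives each block toward its Perron eigenvector, and the number of distinct row directions — hence the number of clusters — emerges on its own rather than being fixed in advance.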
134

Beurling-Lax Representations of Shift-Invariant Spaces, Zero-Pole Data Interpolation, and Dichotomous Transfer Function Realizations: Half-Plane/Continuous-Time Versions

Amaya, Austin J. 30 May 2012 (has links)
Given a full-range simply-invariant shift-invariant subspace <i>M</i> of the vector-valued <i>L<sup>2</sup></i> space on the unit circle, the classical Beurling-Lax-Halmos (BLH) theorem obtains a unitary operator-valued function <i>W</i> so that <i>M</i> may be represented as the image of the Hardy space <i>H<sup>2</sup></i> on the disc under multiplication by <i>W</i>. The work of Ball-Helton later extended this result to find a single function representing a so-called dual shift-invariant pair of subspaces <i>(M,M<sup>×</sup>)</i> which together form a direct-sum decomposition of <i>L<sup>2</sup></i>. In the case where the pair <i>(M,M<sup>×</sup>)</i> are finite-dimensional perturbations of the Hardy space <i>H<sup>2</sup></i> and its orthogonal complement, Ball-Gohberg-Rodman obtained a transfer-function realization for the representing function <i>W</i>; this realization was parameterized in terms of zero-pole data computed from the pair <i>(M,M<sup>×</sup>)</i>. Later work by Ball-Raney extended this analysis to the case of nonrational functions <i>W</i>, where the zero-pole data is taken in an infinite-dimensional operator-theoretic sense. The current work obtains analogues of these various results for arbitrary dual shift-invariant pairs <i>(M,M<sup>×</sup>)</i> of the <i>L<sup>2</sup></i> spaces on the real line; here, shift-invariance refers to invariance under the translation group. These new results rely on recent advances in the understanding of continuous-time infinite-dimensional input-state-output linear systems which have been codified in the book by Staffans. / Ph. D.
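For orientation, the two representations recalled above can be written compactly. This is a schematic restatement under assumed notation (in particular, M<sup>×</sup> for the companion subspace of the dual pair), not a precise statement of the theorems:

```latex
% BLH (disc case): a full-range simply-invariant subspace is a
% multiplied copy of the Hardy space,
\[
  M \;=\; W \cdot H^{2}(\mathbb{D}),
  \qquad W \ \text{unitary-valued a.e.\ on } \mathbb{T},
\]
% while in the Ball--Helton setting a single representer encodes a
% dual shift-invariant pair giving a (not necessarily orthogonal)
% direct-sum decomposition
\[
  L^{2} \;=\; M \,\dot{+}\, M^{\times}.
\]
```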
135

Sampling Inequalities and Applications / Sampling Ungleichungen und Anwendungen

Rieger, Christian 28 March 2008 (has links)
No description available.
136

A Comparison of Models and Methods for Spatial Interpolation in Statistics and Numerical Analysis / Eine Gegenüberstellung von Modellen und Methoden zur räumlichen Interpolation in der Statistik und der Numerischen Analysis

Scheuerer, Michael 28 October 2009 (has links)
No description available.
137

Spatial Interpolation and Prediction of Gaussian and Max-Stable Processes / Räumliche Interpolation und Vorhersage von Gaußschen und max-stabilen Prozessen

Oesting, Marco 03 May 2012 (has links)
No description available.
138

Schemes for Smooth Discretization And Inverse Problems - Case Study on Recovery of Tsunami Source Parameters

Devaraj, G January 2016 (has links) (PDF)
This thesis deals with smooth discretization schemes and inverse problems, the former used in efficient yet accurate numerical solutions to the forward models required in turn to solve inverse problems. The aims of the thesis include: (i) the development of a stabilization technique for a class of forward problems plagued by unphysical oscillations in the response due to the presence of jumps/shocks/high gradients; (ii) the development of a smooth hybrid discretization scheme that combines certain useful features of Finite Element (FE) and Mesh-Free (MF) methods and alleviates certain destabilizing factors encountered in the construction of shape functions using the polynomial reproduction method; and (iii) a first-of-its-kind attempt at the joint inversion of both static and dynamic source parameters of the 2004 Sumatra-Andaman earthquake using tsunami sea level anomaly data. Following the introduction in Chapter 1, which motivates and puts in perspective the work done in later chapters, the main body of the thesis may be viewed as having two parts: the first constituting the development and use of smooth discretization schemes in the possible presence of destabilizing factors (Chapters 2 and 3), and the second involving the solution of the inverse problem of tsunami source recovery (Chapter 4). In the context of stability requirements in numerical solutions of practical forward problems, Chapter 2 develops a new stabilization scheme. It is based on a stochastic representation of the discretized field variables, with a view to reducing or even eliminating unphysical oscillations in MF numerical simulations of systems developing shocks or exhibiting localized bands of extreme plastic deformation in the response.
The origin of the stabilization scheme may be traced to nonlinear stochastic filtering and, consistent with a class of such filters, gain-based additive correction terms are applied to the simulated solution of the system, herein obtained through the Element-Free Galerkin (EFG) method, in order to impose a set of constraints that help arrest the spurious oscillations. The method is numerically illustrated through its application to a gradient plasticity model whose response is often characterized by a developing shear band as the external load is gradually increased. The potential of the method for stabilized yet accurate numerical simulations of such systems involving extreme gradient variations in the response is thus brought forth. Chapter 3 develops the MF-based discretization motif while balancing it against the widespread adoption of the FE method. It thus concentrates on developing a 'hybrid' scheme that aims at ameliorating certain destabilizing algorithmic issues arising from the necessary condition of moment-matrix invertibility en route to the generation of smooth shape functions. It sets forth the hybrid discretization scheme utilizing bivariate simplex splines as kernels in a polynomial reproducing approach adopted over a conventional FE-like domain discretization based on Delaunay triangulation. Careful construction of the simplex spline knotset ensures the success of the polynomial reproduction procedure at all points in the domain of interest, a significant advancement over its precursor, the DMS-FEM. The shape functions in the proposed method inherit the global continuity (C^(p-1)) and local supports of the simplex splines of degree p.
In the proposed scheme, the triangles comprising the domain discretization also serve as background cells for numerical integration; these are near-aligned to the supports of the shape functions (and their intersections), thus considerably ameliorating an oft-cited source of inaccuracy in the numerical integration of MF-based weak forms. Numerical experiments establish that the proposed method can work with lower-order quadrature rules for accurate evaluation of integrals in the Galerkin weak form, a feature desirable in solving nonlinear inverse problems that demand cost-effective solvers for the forward models. Numerical demonstrations of optimal convergence rates for a few test cases are given, and the hybrid method is also implemented to compute crack-tip fields in a gradient-enhanced elasticity model. Chapter 4 attempts the joint inversion of earthquake source parameters for the 2004 Sumatra-Andaman event from the tsunami sea level anomaly signals available from satellite altimetry. The usual inversion for earthquake source parameters incorporates subjective elements, e.g. a priori constraints, posing and parameterization, trial-and-error waveform fitting, etc. Noisy and possibly insufficient data lead to stability and non-uniqueness issues in common deterministic inversions. A rational accounting of both issues favours a stochastic framework, which is employed here, leading naturally to a quantification of the commonly overlooked aspects of uncertainty in the solution. The confluence of several features endows the satellite altimetry for the 2004 Sumatra-Andaman tsunami event with unprecedented value for the inversion of source parameters over the entire rupture duration. A nonlinear joint inversion of the slips, rupture velocities and rise times with minimal a priori constraints is undertaken.
Large and hitherto unreported variances in the parameters despite a persistently good waveform fit suggest large propagation of uncertainties and hence the pressing need for better physical models to account for the defect dynamics and massive sediment piles. Chapter 5 concludes the work with pertinent comments on the results obtained and suggestions for future exploration of some of the schemes developed here.
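The polynomial-reproduction requirement behind the moment-matrix invertibility issue discussed above can be illustrated with a 1-D moving-least-squares (MLS) construction. This is a standard mesh-free building block, not the thesis's simplex-spline scheme; the cubic weight function and the support radius `h` are assumptions of the sketch.

```python
import numpy as np

def mls_shape_functions(x, nodes, h=1.5):
    """Moving-least-squares shape functions reproducing linear
    polynomials in 1-D: sum_i phi_i = 1 and sum_i phi_i * x_i = x."""
    # compactly supported cubic weight with support radius h
    r = np.abs(x - nodes) / h
    w = np.where(r < 1, (1 - r) ** 3, 0.0)
    P = np.vstack([np.ones_like(nodes), nodes])   # linear basis at nodes
    p = np.array([1.0, x])                        # basis at the eval point
    A = (P * w) @ P.T                             # 2x2 moment matrix
    # invertibility of A requires enough nodes inside the support of w —
    # precisely the destabilizing condition the hybrid scheme addresses
    return w * (P.T @ np.linalg.solve(A, p))
```

When the support contains too few nodes, `A` becomes singular and the construction fails; guaranteeing its invertibility everywhere is the role played by the careful knotset construction in the scheme above.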
139

Smooth Finite Element Methods with Polynomial Reproducing Shape Functions

Narayan, Shashi January 2013 (has links) (PDF)
A couple of discretization schemes, based on an FE-like tessellation of the domain and polynomial reproducing, globally smooth shape functions, are considered and numerically explored to a limited extent. The first of these is an existing scheme, the smooth DMS-FEM, that employs Delaunay triangulation or tetrahedralization (as appropriate) to discretize the domain geometry and employs triangular (or tetrahedral) B-splines as kernel functions en route to the construction of polynomial reproducing functional approximations. In order to verify the numerical accuracy of the smooth DMS-FEM vis-à-vis the conventional FEM, a Mindlin-Reissner plate bending problem is numerically solved. Thanks to the higher-order continuity of the functional approximant and the consequent removal of the jump terms in the weak form across inter-triangular boundaries, the numerical accuracy of the DMS-FEM approximation is observed to be higher than that of the conventional FEM. This advantage notwithstanding, evaluations of the DMS-FEM shape functions encounter singularity issues at the triangle vertices as well as over the element edges. This shortcoming is presently overcome through a new proposal that replaces the triangular B-splines by simplex splines, constructed over polygonal domains, as the kernel functions in the polynomial reproduction scheme. Following a detailed presentation of the issues related to its computational implementation, the new method is numerically explored, with the results attesting to a higher attainable numerical accuracy in comparison with the DMS-FEM.
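The partition-of-unity behavior that B-spline kernels contribute to such polynomial-reproducing schemes can be checked directly in 1-D. The quadratic spline below is a simplified stand-in for the triangular B-spline / simplex-spline kernels of the abstract, chosen only for illustration.

```python
import numpy as np

def quadratic_bspline(r):
    """Centered quadratic B-spline with support [-1.5, 1.5]; it is
    C^1-smooth, a 1-D analogue of the smooth kernels discussed above."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    m = r < 0.5
    out[m] = 0.75 - r[m] ** 2
    m = (r >= 0.5) & (r < 1.5)
    out[m] = 0.5 * (1.5 - r[m]) ** 2
    return out

# Uniform integer-spaced translates of a B-spline form a partition of
# unity -- the property underlying polynomial reproduction on regular
# node layouts.
x = np.linspace(0, 1, 101)
centers = np.arange(-2, 4)
vals = sum(quadratic_bspline(x - c) for c in centers)
```

On irregular tessellations this identity no longer holds automatically, which is why the schemes above must enforce reproduction explicitly through the moment-matrix construction.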
140

Contributions au démélange non-supervisé et non-linéaire de données hyperspectrales / Contributions to unsupervised and nonlinear unmixing of hyperspectral data

Ammanouil, Rita 13 October 2016 (has links)
Spectral unmixing has been an active field of research since the earliest days of hyperspectral remote sensing. It is concerned with the case where various materials are found in the spatial extent of a pixel, resulting in a spectrum that is a mixture of the signatures of those materials. Unmixing then reduces to estimating the pure spectral signatures and their corresponding proportions in every pixel. In the hyperspectral unmixing jargon, the pure signatures are known as the endmembers and their proportions as the abundances. This thesis focuses on spectral unmixing of remotely sensed hyperspectral data, and in particular on improving the accuracy of the extraction of compositional information from hyperspectral data. This is done through the development of new unmixing techniques in two main contexts, namely the unsupervised and the nonlinear case. We propose a new technique for blind unmixing, we incorporate spatial information in (linear and nonlinear) unmixing, and we finally propose a new nonlinear mixing model. More precisely, first, an unsupervised unmixing approach based on collaborative sparse regularization is proposed, where the library of endmember candidates is built from the observations themselves. This approach is then extended in order to take into account the presence of noise among the endmember candidates. Second, within the unsupervised unmixing framework, two graph-based regularizations are used in order to incorporate prior local and nonlocal contextual information. Next, within a supervised nonlinear unmixing framework, a new nonlinear mixing model based on vector-valued functions in a reproducing kernel Hilbert space (RKHS) is proposed. This model makes it possible to consider different nonlinear functions at different bands, to regularize the discrepancies between these functions, and to account for neighboring nonlinear contributions. Finally, the vector-valued kernel framework is used to promote spatial smoothness of the nonlinear part in a kernel-based nonlinear mixing model. Simulations on synthetic and real data show the effectiveness of all the proposed techniques.
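As a point of reference for the linear mixing model underlying all of the above, a minimal per-pixel abundance estimator with the usual nonnegativity and sum-to-one constraints can be sketched with projected gradient descent. This is a baseline under assumed notation (`E` for the endmember matrix, `a` for abundances), not the collaborative-sparsity or RKHS algorithms of the thesis.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex {a >= 0, sum a = 1}
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def unmix_pixel(y, E, iters=500, step=None):
    """Abundance estimation for one pixel under the linear mixing model
    y ~ E @ a, with a >= 0 and sum(a) = 1 (projected gradient descent)."""
    m = E.shape[1]
    a = np.full(m, 1.0 / m)                      # start at the barycenter
    if step is None:
        step = 1.0 / np.linalg.norm(E.T @ E, 2)  # 1/L for the smooth part
    for _ in range(iters):
        a = a - step * (E.T @ (E @ a - y))       # gradient step
        a = project_simplex(a)                   # enforce the constraints
    return a
```

The unsupervised and nonlinear methods of the thesis replace pieces of this baseline: the fixed library `E` by candidates drawn from the data with a collaborative sparsity penalty, and the linear map `E @ a` by kernel-based vector-valued functions.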
