81

Concerto for Organ and Chamber Orchestra

Omelchenko, Stas 01 December 2013 (has links)
This composition proposes and implements a way to incorporate the pipe organ into a contemporary instrumental setting. Despite the instrument's wide availability in concert halls and its popularity in contemporary music, timbre-based music has largely avoided it; for one reason or another, there are currently no timbre-based works composed for organ and chamber orchestra. Using spectral analysis, this timbre-based composition demonstrates one possible approach: it investigates timbral similarities and differences between selected ranks of the organ and selected orchestral instruments and maps them into pitch structures.
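As an illustration of the kind of spectral analysis described above (a hypothetical sketch, not the composer's actual workflow), the following Python fragment extracts the strongest partials of a recorded tone and maps them to the nearest equal-tempered pitches; the synthetic "organ" tone and all names are illustrative only.

    import numpy as np

    def prominent_partials(samples, sr, n_partials=8):
        """Return frequencies (Hz) of the strongest spectral peaks of a tone (crude picking)."""
        window = np.hanning(len(samples))
        spectrum = np.abs(np.fft.rfft(samples * window))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sr)
        peaks = []
        for i in np.argsort(spectrum)[::-1]:          # bins sorted by magnitude
            f = freqs[i]
            if f > 20.0 and all(abs(f - p) > 0.03 * p for p in peaks):
                peaks.append(f)                       # skip bins belonging to an already-found peak
            if len(peaks) == n_partials:
                break
        return sorted(peaks)

    def to_midi_pitch(freq):
        """Map a frequency to the nearest equal-tempered MIDI note number."""
        return int(round(69 + 12 * np.log2(freq / 440.0)))

    # hypothetical use: a synthetic organ-like tone at 220 Hz with five partials
    sr = 44100
    t = np.arange(sr) / sr
    tone = sum((1.0 / k) * np.sin(2 * np.pi * 220 * k * t) for k in range(1, 6))
    print([to_midi_pitch(f) for f in prominent_partials(tone, sr)])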
82

Spectral analysis and resolving spatial ambiguities in human sound localization

Jin, Craig January 2001 (has links)
Doctor of Philosophy / This dissertation provides an overview of my research over the last five years into the spectral analysis involved in human sound localization. The work involved conducting psychophysical tests of human auditory localization performance and then applying analytical techniques to analyze and explain the data. It is a fundamental thesis of this work that human auditory localization response directions are primarily driven by the auditory localization cues associated with the acoustic filtering properties of the external auditory periphery, i.e., the head, torso, shoulder, neck, and external ears. This work can be considered as composed of three parts. In the first part, I compared the auditory localization performance of a human subject and a time-delay neural network model under three sound conditions: broadband, high-pass, and low-pass. A "black-box" modeling paradigm was applied. The modeling results indicated that training the network to localize sounds of varying center-frequency and bandwidth could degrade localization performance in a manner showing some similarity to human auditory localization performance. As the data collected during the network modeling showed that humans demonstrate striking localization errors when tested using bandlimited sound stimuli, the second part of this work focused on human sound localization of bandpass filtered noise stimuli. Localization data were collected from 5 subjects for 7 sound conditions: 300 Hz to 5 kHz, 300 Hz to 7 kHz, 300 Hz to 10 kHz, 300 Hz to 14 kHz, 3 to 8 kHz, 4 to 9 kHz, and 7 to 14 kHz. The localization results were analyzed using the method of cue similarity indices developed by Middlebrooks (1992). The data indicated that the energy level in relatively wide frequency bands could be driving the localization response directions, just as in Butler's covert peak area model (see Butler and Musicant, 1993). The question was then raised as to whether the energy levels in the various frequency bands, as described above, are most likely analyzed by the human auditory localization system on a monaural or an interaural basis. In the third part of this work, an experiment was conducted using virtual auditory space sound stimuli in which the monaural spectral cues for auditory localization were disrupted, but the interaural spectral difference cue was preserved. The results from this work showed that the human auditory localization system relies primarily on a monaural analysis of spectral shape information for its discrimination of directions on the cone of confusion. The work described in the three parts leads to the suggestion that a spectral contrast model based on overlapping frequency bands of varying bandwidth, and perhaps multiple frequency scales, can provide a reasonable algorithm for explaining much of the current psychophysical and neurophysiological data related to human auditory localization.
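As a side note, bandpass-filtered noise stimuli of the kind listed above can be generated digitally along the following lines (a minimal sketch assuming a Butterworth design; it is not the stimulus-generation procedure used in the dissertation):

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def bandpass_noise(low_hz, high_hz, duration_s=0.5, sr=48000, order=4, seed=0):
        """White noise band-limited to [low_hz, high_hz] with a zero-phase Butterworth filter."""
        rng = np.random.default_rng(seed)
        noise = rng.standard_normal(int(duration_s * sr))
        sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
        return sosfiltfilt(sos, noise)

    # the seven passbands (Hz) used in the localization experiments
    conditions = [(300, 5000), (300, 7000), (300, 10000), (300, 14000),
                  (3000, 8000), (4000, 9000), (7000, 14000)]
    stimuli = {c: bandpass_noise(*c) for c in conditions}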
83

Nasality in the Malay language: development of an assessment protocol for Malay speaking children with cleft lip and/or palate

Mohd Ibrahim, Hasherah January 2009 (has links)
The need for a standard approach for the diagnosis of speech disorders, in particular resonance disorders associated with cleft lip and/or palate, has been recognised. A reliable and valid measure of nasality is important, because it not only affects clinical decision making but is also essential for the evaluation of treatment outcomes. In order to allow cross-linguistic comparisons of the assessment of resonance, language specific stimuli developed according to a common set of guidelines have been recommended. The aim of this thesis was to contribute to the development of an assessment protocol for use in Malay speaking individuals with clefts of the lip and/or palate, specifically focusing on the detection of nasality. A series of four studies was completed which systematically developed and then validated a set of stimuli in the Malay language using both perceptual and instrumental measures. / In the first study, three stimuli were developed for the assessment of nasality based on both the proportion of nasal phonemes in typical conversation samples in Malay and guidelines from the current international literature. The phonetic content of the stimuli was comparable to similar passages used in English and comprised an Oral Passage, a Nasal Passage and a Set of Sentences. / In the second study, the stimuli constructed were tested in a large number of typically developing (non-cleft) Malay speaking children using both instrumental and perceptual methods of assessment. The results of this study provide the first set of normative nasalance scores for the three newly developed stimuli. The mean nasalance score was 13.86% (SD = 5.11, 95% CI = 13.04–14.68) for the Oral Passage, 60.28% (SD = 6.99, 95% CI = 59.15–61.41) for the Nasal Passage, and 27.72% (SD = 4.74, 95% CI = 26.96–28.49) for the Set of Sentences. These scores were significantly different from each other, suggesting that they can be used to detect the different types of resonance disorder in speech (e.g. hypernasality and/or hyponasality). / In the third study, the stimuli were validated in a sample of Malay speaking children with clefts of the lip and/or palate and compared with a control population. Nasality was measured using perceptual evaluation and nasometry. The results suggested that the Oral Passage and Set of Sentences developed in Malay were valid measures for detecting hypernasality, both for perceptual evaluation of nasality and for nasometry. Due to the small number of participants who were hyponasal, the validity of the Nasal Passage could not be determined. / For nasometry to be clinically relevant, threshold values that indicate abnormal nasality are required. The threshold values for each of the stimuli were first ascertained after obtaining typical nasality levels from a group of healthy Malay speaking children and then tested in a sample of cleft and non-cleft Malay speaking children. In contrast to the nasalance cutoffs obtained from typical Malay speaking children, the cutoffs obtained from the cleft children yielded better outcomes for detecting resonance disorders. The cutoffs were: ≥ 22% for the Oral Passage (sensitivity = 0.91, specificity = 0.93, overall efficiency = 0.92), ≥ 30% for the Set of Sentences (sensitivity = 0.96, specificity = 0.85, overall efficiency = 0.88) and ≤ 39% for the Nasal Passage (sensitivity = 1.00, specificity = 0.99, overall efficiency = 0.99).
/ Finally, the fourth study explored the application of recently developed techniques for assessing nasality using spectral voice analysis and compared these results with nasometry in a sub-sample of Malay speaking children from the third study. The participants were children with cleft lip and/or palate with perceived hypernasality and a group of healthy controls perceived to have normal resonance. The potential of assessing nasality using vowels, which are easier to administer clinically and place minimal demands on language and literacy skills, was investigated. / The findings showed that, of the two spectral methods, only the one-third-octave analysis (and not the VLHR method) could successfully detect hypernasality in the cleft population. Using the one-third-octave analysis, the spectral characteristics of the nasalised vowel /i/ taken from /pit/ and /tip/ showed an increase in amplitude at F1 and between the F1 and F2 regions. The amplitude in the F3 region was lower in the cleft group but, unlike in previous studies, did not differ significantly from the control group. Although the one-third-octave analysis has some potential for detecting hypernasality, its accuracy compared to perceptual ratings of nasality was only moderate, and its diagnostic value for detecting hypernasality was lower than that of nasometry. / The overall findings suggest that, with the exception of the Nasal Passage, the stimuli developed in Malay using this systematic approach (the Oral Passage and the Set of Sentences) were culturally appropriate and valid for the assessment of nasality. Furthermore, by comparing two instrumental methods (nasometry and spectral analysis) with perceptual evaluation in a large number of cleft and typically developing children, the present thesis was able to demonstrate the clinical benefits of two recently proposed methods of spectral voice analysis and compare them to existing methods. Compared to spectral analysis, nasometry remains the superior method for assessing nasality. Threshold values that indicate abnormal nasality levels for the newly developed stimuli in Malay have been recommended.
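For readers unfamiliar with how such cutoffs are evaluated, the sensitivity, specificity and overall efficiency of a nasalance threshold can be computed as in the following sketch (generic code with hypothetical example data, not the analysis used in the thesis):

    import numpy as np

    def cutoff_performance(scores, is_hypernasal, cutoff):
        """Sensitivity, specificity and overall efficiency of a nasalance cutoff.

        scores        : nasalance scores (%) for each child
        is_hypernasal : boolean array, True if the perceptual rating is hypernasal
        cutoff        : a child is flagged hypernasal when score >= cutoff
        """
        scores = np.asarray(scores, dtype=float)
        is_hypernasal = np.asarray(is_hypernasal, dtype=bool)
        flagged = scores >= cutoff
        tp = np.sum(flagged & is_hypernasal)
        tn = np.sum(~flagged & ~is_hypernasal)
        fp = np.sum(flagged & ~is_hypernasal)
        fn = np.sum(~flagged & is_hypernasal)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        efficiency = (tp + tn) / len(scores)
        return sensitivity, specificity, efficiency

    # hypothetical data: Oral Passage scores and perceptual hypernasality ratings
    scores = [12.0, 15.5, 24.3, 31.0, 18.2, 27.9]
    ratings = [False, False, True, True, False, True]
    print(cutoff_performance(scores, ratings, cutoff=22.0))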
84

Methods for improving foot motion measurement using inertial sensors

Charry, Edgar January 2010 (has links)
As a promising alternative to laboratory-constrained video capture systems in studies of human movement, inertial sensors (accelerometers and gyroscopes) have recently been gaining popularity. Secondary quantities such as velocity, displacement and joint angles can be calculated through integration of accelerations and angular velocities. However, it is broadly accepted that this procedure is significantly affected by cumulative integration errors arising from sensor noise, non-linearities, asymmetries, sensitivity variations and bias drifts. In this study, new methods for improving foot motion measurement from inertial sensors are explored and assessed. / Sensor devices have been developed previously, for example, to detect postural changes that identify potential elderly fallers and to monitor a person's gait. Recently, a gait variable known as minimum toe clearance (MTC) has been proposed to describe age-related declines in gait, with better success as a predictor of falls risk. The MTC is the minimum vertical distance between the lowest point on the shoe and the ground during the mid-swing phase of the gait cycle. It is therefore of interest to design a cost-effective but accurate solution for measuring toe clearance data, which can then be used to identify individuals at risk of falling. In this study, hardware, firmware and software features of off-the-shelf inertial sensors and wireless motes are evaluated and their configuration optimized for this application. A strap-down method, which minimizes the integration drift due to cumulative errors, is evaluated off-line. Analysis revealed the necessity of band-pass filtering methods to correct systematic sensor errors that dramatically reduce the accuracy of foot motion estimates. / Cumulative errors were studied in the frequency domain, using the spectral content of inertial-sensor foot motion evaluated against a 'gold standard' video-based device, the Optotrak Certus (NDI). In addition, the effectiveness of applying band-pass filtering to raw inertial sensor data is assessed, under the assumption that sensor drift errors occur in the low-frequency part of the spectrum. The normalized correlation coefficient ρ between the Fast Fourier Transform (FFT) spectra of vertical toe acceleration from the inertial sensors and from the video capture system is compared as a function of the digital band-pass filter parameters. The Root Mean Square Error (RMSE) of the vertical toe displacement is calculated for 5 healthy subjects over a range of 4 walking speeds. The lowest RMSE and highest cross-correlation achieved for the slowest walking speed of 2.5 km/h were 3.06 cm and 0.871 respectively, and 2.96 cm and 0.952 for the fastest speed of 5.5 km/h.
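The band-pass-then-integrate idea described above can be sketched as follows (an illustrative fragment only: the corner frequencies, filter order and function names are assumptions, not the thesis's actual parameters):

    import numpy as np
    from scipy.signal import butter, sosfiltfilt
    from scipy.integrate import cumulative_trapezoid

    def toe_displacement(accel, sr, low_hz=0.1, high_hz=20.0):
        """Band-pass raw vertical acceleration, then integrate twice to displacement.

        The high-pass corner suppresses low-frequency sensor drift, under the
        assumption that drift lives below the band of genuine foot motion.
        """
        sos = butter(2, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
        accel_f = sosfiltfilt(sos, accel)                       # zero-phase filtering
        velocity = cumulative_trapezoid(accel_f, dx=1.0 / sr, initial=0)
        return cumulative_trapezoid(velocity, dx=1.0 / sr, initial=0)

    def rmse(estimate, reference):
        """Root Mean Square Error against a reference (e.g. video-based) trajectory."""
        return float(np.sqrt(np.mean((np.asarray(estimate) - np.asarray(reference)) ** 2)))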
85

EEG based Macro-Sleep-Architecture and Apnea Severity Measures

Vinayak Swarnkar Unknown Date (has links)
Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) is a serious sleep disorder affecting up to 24% of men and 9% of women in the middle-aged population. The current standard for OSAHS diagnosis is Polysomnography (PSG), which refers to the continuous monitoring of multiple physiological variables over the course of a night. The main outcomes of the PSG test are the OSAHS severity measures, such as the Respiratory Disturbance Index (RDI), Arousal Index, latencies and other information needed to determine the macro sleep architecture (MSA), which is defined by the Wake, Rapid-eye-movement (REM) and non-REM states of sleep. The MSA results are essential for computing the diagnostic measures reported in a PSG. The existing methods of MSA analysis require the recording of 5-7 electrophysiological signals, including the Electroencephalogram (EEG), Electrooculogram (EOG) and Electromyogram (EMG). Sleep clinicians have to depend on manual scoring of the overnight data records using the criteria given by Rechtschaffen and Kales (R&K, 1968). The manual analysis of MSA is tedious, subjective and suffers from inter- and intra-scorer variability. Additionally, the RDI and the Apnea-Hypopnea Index (AHI), although used as the primary measures of OSAHS severity, suffer from subjectivity, low reproducibility and poor correlation with the symptoms of OSAHS. Sleep is essentially a neuropsychological phenomenon, and the EEG remains the best technique for functional imaging of the brain during sleep. The EEG is the direct result of the neuronal activity of the brain. However, despite this potential, the wealth of information available in the EEG signal remains virtually untapped in current OSAHS diagnosis. Although the EEG is extensively used in traditional sleep analysis, its usage is mainly limited to staging sleep based on the four-decade-old R&K criteria. This thesis addresses these issues plaguing the PSG. We develop a novel, fully automated algorithm (the Higher-order Estimated Sleep States, or HESS, algorithm) for MSA analysis, which requires only one channel of EEG data. We also develop an objective MSA analysis technique that uses a single one-dimensional slice of the bispectrum of the EEG, representing a nonlinear transformation of a system function that can be considered as the EEG generator. The agreement between the human scorers and the proposed technology was found to be in the range of 70%-87%, similar to the agreement possible between expert human scorers. The ability of the HESS algorithm to compute the MSA parameters reliably and objectively will make a dramatic impact on the diagnosis and treatment of OSAHS and other sleep disorders, such as insomnia. The proposed technology uses low-computation-load bispectrum techniques that are independent of the R&K criteria (1968), making real-time automated analysis a reality. In this thesis we also propose a new index (the IHSI) to characterise the severity of sleep apnea. The new index is based on the hemispherical asymmetry of the brain and is computed from EEG coherence analysis. We achieved a significant (p=0.0001) accuracy of up to 91% in classifying patients into apneic and non-apneic groups. Our statistical analysis results show that the IHSI has the potential to provide a reproducible measure to assist in diagnosing OSAHS.
With the methods proposed in this thesis it may be possible to develop technology that not only screens OSAHS patients but also provides an OSAHS diagnosis with detailed sleep architecture via a home-based test. Such technology would simplify the instrumentation dramatically and make it possible to extend EEG/MSA analysis to portable systems as well.
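The coherence analysis underlying an index such as the IHSI can be illustrated with a generic sketch (this is not the thesis's definition of the IHSI; the frequency band and the example data are assumptions):

    import numpy as np
    from scipy.signal import coherence

    def hemispheric_coherence(eeg_left, eeg_right, sr, band=(0.5, 30.0)):
        """Mean magnitude-squared coherence between two EEG channels in a frequency band.

        A coherence-based asymmetry index (in the spirit of the IHSI, though not
        its exact definition) can be built from such band-wise values.
        """
        f, cxy = coherence(eeg_left, eeg_right, fs=sr, nperseg=int(4 * sr))
        mask = (f >= band[0]) & (f <= band[1])
        return float(np.mean(cxy[mask]))

    # hypothetical use with 30 s of two-channel EEG sampled at 256 Hz
    sr = 256
    rng = np.random.default_rng(1)
    left = rng.standard_normal(30 * sr)
    right = 0.6 * left + 0.8 * rng.standard_normal(30 * sr)   # partially correlated channels
    print(hemispheric_coherence(left, right, sr))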
86

Arbitrary order Hilbert spectral analysis: definition and application to fully developed turbulence and environmental time series

Huang, Yongxiang 23 July 2009 (has links) (PDF)
Empirical Mode Decomposition (EMD), or the Hilbert-Huang Transform (HHT), is a new time-frequency analysis method particularly suited to nonlinear and non-stationary time series. The method was proposed by N. E. Huang more than ten years ago, and over the last decade more than 1000 papers have applied it across a wide range of applications and research fields. In this thesis we apply the method, for the first time, to turbulence time series, and also to environmental time series. We find that EMD acts as a dyadic (or quasi-dyadic) filter bank for fully developed turbulence. To characterize the intermittency properties of a scale-invariant time series, we generalize the classical Hilbert-Huang spectral analysis to arbitrary-order moments $q$, performing what we call "arbitrary-order Hilbert spectral analysis". This provides a new framework for analysing scale invariance directly in an amplitude-frequency space, by estimating a marginal integral of the joint pdf $p(\omega,\mathcal{A})$ of the instantaneous frequency $\omega$ and the amplitude $\mathcal{A}$. We first validate the method by analysing fractional Brownian motion time series and synthetic multifractal time series, as models of monofractal and multifractal processes respectively. Comparing the results of the new method with the classical analysis based on structure functions, we find numerically that the Hilbert-based methodology provides a more accurate estimator of the intermittency parameter. Under a stationarity hypothesis, we propose an analytical model for the autocorrelation function of velocity increments $\Delta u_{\ell}(t)$, where $\Delta u_{\ell}(t)=u(t+\ell)-u(t)$ and $\ell$ is the time increment. Within this model we prove analytically that, if a power law holds for the original series, the location of the minimum of the autocorrelation function of the original variable is exactly equal to the separation time $\ell$ whenever $\ell$ lies in the scaling range. The model predicts a power law for the minimum value, a behaviour verified by a simulation of fractional Brownian motion and by experimental turbulence data. By introducing a cumulative function for the autocorrelation function, the scale contribution is then characterized in Fourier frequency space. We observe that the main contribution to the autocorrelation function comes from the large scales. The same idea is applied to the second-order structure function. We find that it, too, is strongly influenced by the large scales, showing that it is not a good approach for extracting scaling exponents from a time series when the data are characterized by energetic large scales. We then apply this Hilbert-Huang methodology to a database of homogeneous and nearly isotropic turbulence, to characterize the scale-invariant multifractal properties of velocity time series in fully developed turbulence.
We obtain scale-invariant behaviour for the joint pdf $p(\omega,\mathcal{A})$ with an exponent close to the Kolmogorov value, and we estimate the exponents $\zeta(q)$ in an amplitude-frequency space for the first time. The isotropy hypothesis is tested scale by scale in the amplitude-frequency space; we find that the generalized isotropy ratio decreases linearly with the moment order $q$. We also analyse a temperature (passive scalar) time series with a marked ramp-cliff effect. For these data the traditional structure-function approach does not work, but the new method developed in this thesis yields a clear scaling regime up to moment order $q=8$. The exponents $\xi_{\theta}(q)-1$ are very close to the exponents $\zeta(q)$ obtained by the structure-function approach for the longitudinal velocity. We then consider Extended Self-Similarity (ESS) in the Hilbert-Huang framework. For the ESS method, which has become classical in turbulence, we adapt the approach to the Hilbert-Huang case in frequency space and find that the lognormal model, with an appropriate coefficient, provides a very good estimate of the scaling exponents. Finally, we apply the new methodology to environmental data: river flow series, and marine turbulence data from the surf zone. In the latter case, the ESS method allows the wind waves to be separated from the small-scale turbulence.
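The arbitrary-order Hilbert spectral analysis described above can be sketched roughly as follows, assuming the third-party PyEMD package (PyPI name EMD-signal) for the decomposition; the logarithmic binning is a simplified stand-in for the joint-pdf marginal integral used in the thesis, not the author's implementation:

    import numpy as np
    from scipy.signal import hilbert
    from PyEMD import EMD   # pip install EMD-signal

    def hilbert_marginal_moments(x, sr, q=2, nbins=50):
        """Arbitrary-order Hilbert marginal spectrum L_q(omega), simplified sketch.

        Each IMF from EMD is turned into an analytic signal, giving instantaneous
        amplitude A(t) and frequency omega(t); L_q is estimated as the mean of A^q
        within logarithmic frequency bins.
        """
        imfs = EMD().emd(x)                   # intrinsic mode functions
        freqs, amps = [], []
        for imf in imfs[:-1]:                 # skip the residual trend (assumed last)
            analytic = hilbert(imf)
            amp = np.abs(analytic)
            phase = np.unwrap(np.angle(analytic))
            freq = np.abs(np.diff(phase)) * sr / (2 * np.pi)
            freqs.append(freq)
            amps.append(amp[1:])
        freqs, amps = np.concatenate(freqs), np.concatenate(amps)
        edges = np.logspace(np.log10(freqs[freqs > 0].min()), np.log10(freqs.max()), nbins + 1)
        idx = np.digitize(freqs, edges)
        centers = np.sqrt(edges[:-1] * edges[1:])
        Lq = np.array([np.mean(amps[idx == i] ** q) if np.any(idx == i) else np.nan
                       for i in range(1, nbins + 1)])
        return centers, Lq

A log-log regression of $L_q(\omega)$ against $\omega$ over the scaling range would then give the exponents $\xi(q)$ discussed above.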
87

Aspects of two dimensional magnetic Schrödinger operators: quantum Hall systems and magnetic Stark resonances

Ferrari, Christian 06 June 2003 (has links) (PDF)
This doctoral thesis concerns two mathematical problems arising from quantum mechanics. We consider a non-relativistic, spinless quantum particle constrained to move on a two-dimensional surface $\mathcal{S}$, immersed in a homogeneous magnetic field perpendicular to it. In the first problem, $\mathcal{S}=\mathbb{R}\times\mathbb{S}_L^1$, an infinite cylinder of circumference $L$, which corresponds to periodic boundary conditions. In the second case, $\mathcal{S}=\mathbb{R}^2$. Depending on the problem studied, a suitable potential is added, leading to the study of two Schrödinger operators. The first operator analysed generates the dynamics of a particle subject to a random potential of Anderson type together with a non-random potential whose purpose is to confine the particle along the axis of the cylinder, over a length $L$. In this case we localize the spectrum and classify it by the quantum current carried by the corresponding eigenfunctions. We show that there are spectral regions containing only eigenvalues with current of order one with respect to $L$, and spectral regions where eigenvalues with current of order one are mixed with eigenvalues whose current is infinitesimal with respect to $L$. These results are of physical interest in the context of the integer quantum Hall effect. The second Schrödinger operator studied corresponds to the physical situation in which the potential is the sum of a "local" potential and a potential due to a small constant electric field $F$. In this case we show that the resonant states induced by the electric field decay exponentially, with a rate given by the imaginary part of the eigenvalues of a certain non-self-adjoint operator. We further show that this imaginary part admits an upper bound of order $\exp(-1/F^2)$ as $F$ tends to zero. Consequently, the lifetime of the resonant state in question is at least of order $\exp(1/F^2)$.
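Stated as a display (a paraphrase of the bound quoted above, with unspecified constants $C, c > 0$ and the standard resonance-lifetime relation assumed, not the thesis's exact statement):

\[
  |\mathrm{Im}\,E_{\mathrm{res}}(F)| \;\le\; C\,e^{-1/F^{2}}
  \qquad\text{so that}\qquad
  \tau(F) \;=\; \frac{\hbar}{2\,|\mathrm{Im}\,E_{\mathrm{res}}(F)|} \;\ge\; c\,e^{1/F^{2}}
  \quad\text{as } F \to 0.
\]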
88

The colour of climate: A study of raised bogs in south-central Sweden

Borgmark, Anders January 2005 (has links)
This thesis focuses on responses in raised bogs to changes in the effective humidity during the Holocene. Raised bogs are terrestrial deposits that can provide contiguous records of past climate changes. Information on and knowledge about past changes in climate is crucial for our understanding of natural climate variability. Analyses on different spatial and temporal scales have been conducted on a number of raised bogs in south-central Sweden in order to gain more knowledge about Holocene climate variability. / Peatlands are useful as palaeoenvironmental archives because they develop over the course of millennia and provide a multi-faceted contiguous outlook on the past. Peat humification, a proxy for bog surface wetness, has been used to reconstruct palaeoclimate. In addition measurements of carbon and nitrogen on sub-recent peat from two bogs have been performed. The chronologies have been constrained by AMS radiocarbon dates and tephrochronology and by SCPs for the sub-recent peat. / A comparison between a peat humification record from Värmland, south-central Sweden, and a dendrochronological record from Jämtland, north-central Sweden, indicates several synchronous changes between drier and wetter climate. This implies that changes in hydrology operate on a regional scale. / In a high resolution study of two bogs in Uppland, south-central Sweden, C, N and peat humification have been compared to bog water tables inferred from testate amoebae and with meteorological data covering the last 150 years. The results indicate that peat can be subjected to secondary decomposition, resulting in an apparent lead in peat humification and C/N compared to biological proxies and meteorological data. / Several periods of wetter conditions are indicated from the analysis of five peat sequences from three bogs in Värmland. Wetter conditions around especially c. 4500, 3500, 2800 and 1700-1000 cal yr BP can be correlated to several other climate records across the North Atlantic region and Scandinavia, indicating wetter and/or cooler climatic conditions at these times. Frequency analyses of two bogs indicate periodicities between 200 and 400 years that may be caused by cycles in solar activity.
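The frequency analysis mentioned in the last paragraph can be illustrated with a Lomb-Scargle periodogram, which handles the uneven sampling typical of dated peat sequences (a generic sketch, not the author's analysis code; the period range and the simple mean-removal detrending are assumptions):

    import numpy as np
    from scipy.signal import lombscargle

    def humification_periodicities(ages_bp, humification, min_period=100, max_period=1000, n=500):
        """Lomb-Scargle periodogram of a (possibly unevenly dated) peat humification record.

        Returns periods (years) and spectral power; peaks in the 200-400 yr band
        would correspond to the periodicities discussed above.
        """
        periods = np.linspace(min_period, max_period, n)
        ang_freqs = 2 * np.pi / periods                 # lombscargle expects angular frequencies
        y = np.asarray(humification, dtype=float)
        y = y - y.mean()                                # crude detrending
        power = lombscargle(np.asarray(ages_bp, dtype=float), y, ang_freqs)
        return periods, power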
89

Shrinkage methods for multivariate spectral analysis

Böhm, Hilmar 29 January 2008 (has links)
In spectral analysis of high dimensional multivariate time series, it is crucial to obtain an estimate of the spectrum that is both numerically well conditioned and precise. The conventional approach is to construct a nonparametric estimator by smoothing locally over the periodogram matrices at neighboring Fourier frequencies. Despite being consistent and asymptotically unbiased, these estimators are often ill-conditioned. This is because a kernel-smoothed periodogram is a weighted sum, over a local neighborhood, of periodogram matrices that are each of rank one. When treating high dimensional time series, the result is a poor ratio between the smoothing span, which is the effective local sample size of the estimator, and the dimension. In classification, clustering and discrimination, and in the analysis of non-stationary time series, this is a severe problem, because inverting an estimate of the spectrum is unavoidable in these contexts. Areas of application like neuropsychology, seismology and econometrics are affected by this theoretical problem. We propose a new class of nonparametric estimators that have the appealing properties of simultaneously having smaller L2-risk than the smoothed periodogram and being numerically more stable due to a smaller condition number. These estimators are obtained as convex combinations of the averaged periodogram and a shrinkage target. The choice of shrinkage target depends on the availability of prior knowledge about the cross-dimensional structure of the data. In the absence of any information, we show that a multiple of the identity matrix is the best choice. By shrinking towards the identity, we trade the asymptotic unbiasedness of the averaged periodogram for a smaller mean-squared error. Moreover, the eigenvalues of this shrinkage estimator are closer to the eigenvalues of the true spectrum, rendering it numerically more stable and thus more appropriate for use in classification. These results are derived under a rigorous general asymptotic framework that allows the dimension p to grow with the length of the time series T. Under this framework, the averaged periodogram even ceases to be consistent and has, asymptotically almost surely, higher L2-risk than our shrinkage estimator. Moreover, we show that it is possible to incorporate background knowledge about the cross-dimensional structure of the data in the shrinkage targets. We derive an exemplary instance of a custom-tailored shrinkage target in the form of a one-factor model. This offers a new answer to problems of model choice: instead of relying on information criteria such as AIC or BIC for choosing the order of a model, the minimum-order model can be used as a shrinkage target and combined with a nonparametric estimator of the spectrum, in our case the averaged periodogram. The comprehensive Monte Carlo studies we perform show the overwhelming gain, in terms of L2-risk, of our shrinkage estimators, even for very small sample sizes. We also give an overview of regularization techniques that have been designed for iid data, such as ridge regression or sparse PCA, and show the interconnections between them.
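A minimal sketch of the estimator described above, shrinking a kernel-smoothed periodogram matrix towards a scaled identity target (the flat smoothing kernel and the fixed shrinkage weight are simplifications; the thesis derives a data-driven choice of intensity):

    import numpy as np

    def smoothed_periodogram(X, k, span=5):
        """Kernel-smoothed periodogram matrix at Fourier frequency index k.

        X: (T, p) multivariate time series. Averages the rank-one periodogram
        matrices over 2*span+1 neighboring Fourier frequencies (flat kernel).
        """
        T, p = X.shape
        dft = np.fft.fft(X - X.mean(axis=0), axis=0) / np.sqrt(2 * np.pi * T)
        idx = [(k + j) % T for j in range(-span, span + 1)]
        mats = [np.outer(dft[i], dft[i].conj()) for i in idx]   # each of rank one
        return np.mean(mats, axis=0)

    def shrinkage_spectral_estimate(X, k, span=5, weight=0.3):
        """Convex combination of the smoothed periodogram and a scaled identity target."""
        S = smoothed_periodogram(X, k, span)
        p = S.shape[0]
        nu = np.trace(S).real / p            # scale of the identity target
        return (1 - weight) * S + weight * nu * np.eye(p)

    # hypothetical use: a 10-dimensional series of length 512
    rng = np.random.default_rng(0)
    X = rng.standard_normal((512, 10))
    S_shrunk = shrinkage_spectral_estimate(X, k=20)
    # shrinking towards a scaled identity never worsens the condition number
    print(np.linalg.cond(S_shrunk) <= np.linalg.cond(smoothed_periodogram(X, k=20)))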
90

Signal decompositions using trans-dimensional Bayesian methods.

Roodaki, Alireza 14 May 2012 (has links) (PDF)
This thesis addresses the challenges encountered when dealing with signal decomposition problems with an unknown number of components in a Bayesian framework. In particular, we focus on the issue of summarizing the variable-dimensional posterior distributions that typically arise in such problems. Such posterior distributions are defined over a union of subspaces of differing dimensionality, and can be sampled from using modern Monte Carlo techniques, for instance the increasingly popular Reversible-Jump MCMC (RJ-MCMC) sampler. No generic approach is available, however, to summarize the resulting variable-dimensional samples and extract from them component-specific parameters. One of the main challenges that needs to be addressed to this end is the label-switching issue, which is caused by the invariance of the posterior distribution to permutation of the components. We propose a novel approach to this problem, which consists in approximating the complex posterior of interest by a "simple" but still variable-dimensional parametric distribution. We develop stochastic EM-type algorithms, driven by the RJ-MCMC sampler, to estimate the parameters of the model through the minimization of a divergence measure between the two distributions. Two signal decomposition problems are considered to show the capability of the proposed approach both for relabeling and for summarizing variable-dimensional posterior distributions: the classical problem of detecting and estimating sinusoids in white Gaussian noise on the one hand, and a particle counting problem motivated by the Pierre Auger project in astrophysics on the other hand.
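A generic sketch of the "sinusoids in white Gaussian noise" model referred to above (the RJ-MCMC sampler itself, which is the hard part, is not shown; all names and the amplitude/frequency distributions are illustrative only):

    import numpy as np

    def sinusoids_in_noise(K, T=256, sigma=1.0, seed=0):
        """Draw one realization of the variable-dimensional sinusoid model.

        y_t = sum_{k=1}^K a_k cos(w_k t) + b_k sin(w_k t) + e_t,  e_t ~ N(0, sigma^2).
        In the trans-dimensional Bayesian treatment, K, the frequencies and the
        amplitudes are all unknown; an RJ-MCMC sampler explores the joint posterior
        over models of differing dimension.
        """
        rng = np.random.default_rng(seed)
        t = np.arange(T)
        w = rng.uniform(0, np.pi, size=K)            # radian frequencies
        a, b = rng.normal(0, 2, size=(2, K))         # in-phase / quadrature amplitudes
        signal = sum(a[k] * np.cos(w[k] * t) + b[k] * np.sin(w[k] * t) for k in range(K))
        return t, signal + rng.normal(0, sigma, size=T), (w, a, b)

    t, y, truth = sinusoids_in_noise(K=3)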
