1

Data analysis and results of the upgraded CRESST dark matter search

McGowan, Richard January 2010 (has links)
CRESST has an established analysis procedure to evaluate the energy of the events it detects, in an attempt to detect WIMP dark matter. It was shown that unless eight classes of contaminant event were removed prior to this analysis, the output energy spectrum would be significantly biased. For both scientific and practical reasons, the removal process should be blind, and a series of cuts were developed to flag these events automatically, without removing any true events. An event simulation package was developed to optimise these cuts. It was shown that noise fluctuations could also reduce CRESST’s sensitivity, so a noise-dependent acceptance region was introduced to resolve this. The upgraded CRESST experiment included a new electronics system to provide heating and bias currents for 66 detectors. This system was integrated into the CRESST set-up, and it was shown that the electronics contributed no extra noise to the detectors. Data with an exposure of 50 kg days were analysed using the cuts and the noise-dependent acceptance. The cuts were successful, with no contaminant event retained and a live time reduction of just 2.3%. The data were used to set an upper limit on the WIMP-nucleon cross section for elastic scattering with a minimum of 6.3 × 10^(−7) pb at a WIMP mass of 61 GeV. This is a factor of 2.5 better than the previous best CRESST limit.
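A noise-dependent acceptance region of the kind described above can be sketched in a few lines. Everything in the snippet is a hypothetical stand-in (the light_yield and baseline_rms fields, the expected recoil yield, the band half-width k); it is not the acceptance definition used in the CRESST analysis, only an illustration of making the accepted band depend on a per-event noise estimate.

```python
import numpy as np

def accept(events, k=3.0):
    """Keep events whose light yield lies inside a noise-dependent
    acceptance band around a nuclear-recoil expectation.

    events: structured array with hypothetical fields
        'energy'        reconstructed energy (keV)
        'light_yield'   light/phonon signal ratio
        'baseline_rms'  per-event baseline noise estimate
    k: half-width of the acceptance band in units of the noise.
    """
    expected_yield = 0.1                       # placeholder recoil yield
    band = k * events['baseline_rms']          # band widens with noise
    in_band = np.abs(events['light_yield'] - expected_yield) < band
    return events[in_band]

# toy data: 1000 events with random yields and noise levels
rng = np.random.default_rng(0)
events = np.zeros(1000, dtype=[('energy', 'f8'),
                               ('light_yield', 'f8'),
                               ('baseline_rms', 'f8')])
events['energy'] = rng.uniform(10, 40, 1000)
events['light_yield'] = rng.normal(0.1, 0.05, 1000)
events['baseline_rms'] = rng.uniform(0.01, 0.03, 1000)
print(len(accept(events)), "events accepted")
```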
2

Redukční automaty a syntaktické chyby / Reducing Automata and Syntactic Errors

Procházka, Martin January 2012 (has links)
This thesis deals with reducing automata, their normalization, and their application to (robust) analysis by reduction and to the localization of syntactic errors in deterministic context-free languages (DCFL). A reducing automaton is similar to a restarting automaton, with two subtle differences: reduced symbols are marked explicitly (which makes it possible to determine the position of an error accurately), and the lookahead window is moved inside the control unit (which brings reducing automata closer to the devices of classical automata and formal language theory). With reducing automata it is therefore easier to adopt and reuse notions and approaches developed within the classical theory, e.g., prefix correctness or automaton minimization. For any nonempty deterministic context-free language specified by a monotone reducing automaton that is both prefix correct and minimal, we propose a method of robust analysis by reduction which guarantees the localization of formally defined types of (real) errors, of correct subwords, and of subwords causing reduction conflicts (i.e., subwords with ambiguous syntactic structure that can be reduced in different ways depending on the word in which they occur). We implement the proposed method with a new type of device (called a postprefix robust analyzer) and briefly show how to implement this method by a deterministic pushdown...
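The analysis-by-reduction idea can be illustrated on a toy deterministic context-free language. The sketch below, for the language {a^n b^n}, uses a contextual reduction rule and a helper function invented for this illustration; it is not the reducing-automaton formalism or the postprefix robust analyzer of the thesis, only the reduce-until-irreducible loop and the way an irreducible core, kept with original positions, localizes an error.

```python
def analyse_by_reduction(word):
    """Toy analysis by reduction for the language {a^n b^n : n >= 0}.

    One reduction step deletes a factor 'ab' whose left neighbour is 'a'
    (or the start of the word) and whose right neighbour is 'b' (or the
    end of the word).  A word is correct iff it reduces to the empty
    word; otherwise the irreducible core, returned with original
    positions, localizes the error.
    """
    symbols = list(enumerate(word))            # (original position, symbol)

    def reducible_at(i):
        left_ok = i == 0 or symbols[i - 1][1] == 'a'
        right_ok = i + 2 == len(symbols) or symbols[i + 2][1] == 'b'
        pair = (symbols[i][1], symbols[i + 1][1])
        return pair == ('a', 'b') and left_ok and right_ok

    reduced = True
    while reduced:
        reduced = False
        for i in range(len(symbols) - 1):
            if reducible_at(i):
                del symbols[i:i + 2]           # one reduction step
                reduced = True
                break

    return (True, []) if not symbols else (False, symbols)

print(analyse_by_reduction("aaabbb"))   # (True, [])
print(analyse_by_reduction("abab"))     # (False, whole word is irreducible)
print(analyse_by_reduction("aabbb"))    # (False, [(4, 'b')]): the extra 'b'
```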
3

Analyse et détection automatique de disfluences dans la parole spontanée conversationnelle / Disfluency analysis and automatic detection in conversational spontaneous speech

Dutrey, Camille 16 December 2014 (has links)
Extracting information from linguistic data is an increasingly topical subject, given the ever-growing amount of information that must be processed and analysed on a regular basis, and since the 1990s research on speech data has been expanding as well. Speech raises additional problems compared with written text, notably because of phenomena specific to spoken language (hesitations, restarts, corrections), but also because spoken data are processed by an automatic speech recognition system that potentially generates errors. Extracting information from audio data therefore means extracting information while taking into account the "noise" that is intrinsic to speech or generated by the speech recognition system; it cannot be a simple application of methods that have proven themselves on written data. The use of techniques adapted to the processing of spoken-language data, taking into account both their specificities with respect to the speech signal and to its transcription (manual as well as automatic), is a research topic in full development that raises new scientific challenges. These challenges concern the management of variability in speech and of spontaneous modes of expression. In addition, the robust analysis of telephone conversations has been the subject of a number of works, in whose continuity this thesis stands. More specifically, this thesis focuses on the analysis of disfluencies and of their realisation in conversational data from EDF call centres, based on the speech signal and on its manual and automatic transcriptions. The work draws on several domains, from the robust analysis of speech data to the analysis and handling of aspects related to spoken expression. The aim of the thesis is to propose methods adapted to these data that improve the text-mining analyses carried out on the transcriptions (treatment of disfluencies). To address these issues, we finely analysed the behaviour of phenomena characteristic of spontaneous speech (disfluencies) in conversational speech data from EDF call centres, and we developed an automatic method for their detection using linguistic, acoustic-prosodic, discursive and para-linguistic features. The contributions of this thesis are organised along three lines of research. First, we propose a characterisation of call-centre conversations from the point of view of spontaneous speech and of the phenomena that characterise it. Second, we developed (i) an enrichment and processing chain for spoken data, effective on several levels of analysis (linguistic, prosodic, discursive, para-linguistic); (ii) an automatic detection system for edit disfluencies, suited to conversational speech data and using the signal and the transcriptions (manual or automatic). Third, from a "resource" point of view, we produced a corpus of automatic transcriptions of call-centre conversations annotated with edit disfluencies (by a semi-automatic method).
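A minimal sketch of feature-based token classification for disfluency detection follows. The filler lexicon, the pause values, the toy utterance and its labels are all invented for illustration, and scikit-learn is assumed only as a convenient classifier; the system developed in the thesis relies on a much richer set of linguistic, acoustic-prosodic, discursive and para-linguistic features extracted from real annotated call-centre data.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

FILLERS = {"euh", "ben", "uh", "um"}    # hypothetical filler lexicon

def token_features(tokens, pauses):
    """Very simplified per-token features: lexical cues plus one
    pseudo-prosodic cue (pause after the token, in seconds)."""
    feats = []
    for i, tok in enumerate(tokens):
        feats.append({
            "word": tok.lower(),
            "is_filler": tok.lower() in FILLERS,
            "repeated_by_next": i + 1 < len(tokens)
                                and tok.lower() == tokens[i + 1].lower(),
            "pause_after": pauses[i],
        })
    return feats

# toy annotated utterance: 1 marks a token inside an edit disfluency
tokens = ["je", "je", "voudrais", "euh", "changer", "mon", "contrat"]
pauses = [0.40, 0.05, 0.02, 0.60, 0.03, 0.01, 0.0]
labels = [1, 0, 0, 1, 0, 0, 0]

vec = DictVectorizer()
X = vec.fit_transform(token_features(tokens, pauses))
clf = LogisticRegression(max_iter=1000).fit(X, labels)

test = ["je", "veux", "euh", "résilier"]
X_test = vec.transform(token_features(test, [0.1, 0.3, 0.5, 0.0]))
print(list(zip(test, clf.predict(X_test))))
```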
4

Estudo de robustez em sistemas lineares por meio de relaxações em termos de desigualdades matriciais lineares / Robustness of linear systems by means of linear matrix inequalities relaxations

Oliveira, Ricardo Coração de Leão Fontoura de, 1978- 24 March 2006 (has links)
Advisor: Pedro Luis Dias Peres / Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: This thesis proposes, as its main contribution, a new methodology for solving parameter-dependent linear matrix inequalities, which frequently appear in robust analysis and control problems for linear systems with polytopic uncertainties. The proposed method relies on the parametrization of the solutions in terms of homogeneous polynomials of arbitrary degree with matrix-valued coefficients. To construct such solutions, a procedure based on optimization problems formulated in terms of a finite number of linear matrix inequalities is proposed, yielding sequences of relaxations that converge to a homogeneous polynomial solution whenever a solution exists. Problems of robust analysis and guaranteed cost are analyzed in detail for both continuous-time and discrete-time uncertain systems. Several numerical examples illustrate the efficiency of the proposed methods, in terms of both accuracy and computational burden, when compared with other methods from the literature / Doctorate / Automation / Doctor of Electrical Engineering
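The simplest instance of the relaxation family described above (degree zero, i.e. a single parameter-independent Lyapunov matrix imposed at the polytope vertices) can be sketched with a generic SDP modelling tool. The cvxpy formulation and the two vertex matrices below are illustrative assumptions, not the thesis's own implementation or its higher-degree homogeneous polynomial certificates.

```python
import cvxpy as cp
import numpy as np

# two vertices of a hypothetical polytopic system dx/dt = A(alpha) x
A1 = np.array([[-2.0, 1.0], [0.0, -1.0]])
A2 = np.array([[-1.0, 0.5], [0.3, -3.0]])

n = 2
P = cp.Variable((n, n), symmetric=True)   # degree-0 (common) Lyapunov matrix
eps = 1e-3
constraints = [P >> eps * np.eye(n)]
# Lyapunov inequality imposed at every vertex of the polytope
for A in (A1, A2):
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility SDP
prob.solve()
print("robustly stable (degree-0 certificate found):",
      prob.status == cp.OPTIMAL)
```

If this degree-0 test fails, the methodology summarized in the abstract increases the degree of the polynomial Lyapunov candidate, producing progressively less conservative LMI relaxations.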
5

The Effect of Amplitude Control and Randomness on Strongly Coupled Oscillator Arrays

Jiang, Hai 20 November 2009 (has links)
No description available.
