41

On discriminative semi-supervised incremental learning with a multi-view perspective for image concept modeling

Byun, Byungki 17 January 2012 (has links)
This dissertation presents the development of a semi-supervised incremental learning framework with a multi-view perspective for image concept modeling. For reliable image concept characterization, having a large number of labeled images is crucial. However, the size of the training set is often limited by the cost of generating concept labels for objects in a large quantity of images. To address this issue, we propose to incrementally incorporate unlabeled samples into the learning process to enhance concept models originally learned with a small number of labeled samples. To tackle the sub-optimality problem of conventional techniques, the proposed incremental learning framework selects unlabeled samples using an expected error reduction function that measures each sample's contribution by its ability to increase the modeling accuracy. To improve the convergence of the proposed framework, we further propose a multi-view learning approach that makes use of multiple image features, such as color and texture, when including unlabeled samples. For robustness to mismatches between training and testing conditions, a discriminative learning algorithm, namely a kernelized maximal-figure-of-merit (kMFoM) learning approach, is also developed. Combining these techniques, we conduct experiments on various image concept modeling problems, including handwritten digit recognition, object recognition, and image spam detection, to highlight the effectiveness of the proposed framework.
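The sample-selection step this abstract describes can be sketched in a few lines. The following is a generic illustration of expected error reduction, not the dissertation's implementation: the `LogisticRegression` stand-in model, the validation-set error proxy, and all names are assumptions made for this example.

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def expected_error_reduction(model, X_lab, y_lab, X_pool, X_val):
    """Score each pool sample by the expected drop in validation error
    if it were added to the training set, with its label marginalized
    under the current model's posterior."""
    def soft_error(m):
        # expected-error proxy: 1 - mean max posterior on the validation set
        return 1.0 - m.predict_proba(X_val).max(axis=1).mean()

    base = soft_error(model)
    post = model.predict_proba(X_pool)
    scores = np.empty(len(X_pool))
    for i in range(len(X_pool)):
        exp_err = 0.0
        for y, p in zip(model.classes_, post[i]):
            # retrain with the candidate sample under each hypothesized label
            m = clone(model).fit(np.vstack([X_lab, X_pool[i:i + 1]]),
                                 np.append(y_lab, y))
            exp_err += p * soft_error(m)
        scores[i] = base - exp_err  # larger = more expected benefit
    return scores

# usage sketch:
# model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
# pick = int(np.argmax(expected_error_reduction(model, X_lab, y_lab, X_pool, X_val)))
```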
42

Transformation of Stimulus Function Through Relational Networks: The Impact of Derived Stimulus Relations on Stimulus Control of Behavior

Florentino, Samantha Rose 01 January 2012 (has links)
Relational Frame Theory (RFT) research employs either of two protocols to establish relational networks and functions for the stimuli in those networks. The more prevalent method involves first establishing a relational frame, conditioning one of the stimuli to acquire a particular function, and then testing whether the function trained to that stimulus transferred through the relational network to the other stimuli. The less common method involves first training a particular function for a stimulus, entering that stimulus into a relational network with at least two other stimuli, and then testing whether the function transferred. Hayes, Kohlenberg, and Hayes (1991) hypothesized that not only do both procedures work, but that there is no difference between the two with regard to transformation of stimulus function. Although both protocols have been used in the RFT literature, a direct comparison had never been made. The current study makes that comparison in a within-subject analysis to determine whether the protocol used produces differentiated results in transformation of stimulus function. The within-subjects analysis indicates that probes of transformation of stimulus function yielded similar levels of correct responding under both training protocols, thus supporting the hypothesis put forth by Hayes and colleagues (1991).
43

Discriminative Alignment Models For Statistical Machine Translation

Tomeh, Nadi 27 June 2012 (has links) (PDF)
Bitext alignment is the task of aligning a text in a source language and its translation in the target language. Aligning amounts to finding the translational correspondences between textual units at different levels of granularity. Many practical natural language processing applications rely on bitext alignments to access the rich linguistic knowledge present in a bitext. While the most predominant application for bitexts is statistical machine translation, they are also used in multilingual (and monolingual) lexicography, word sense disambiguation, terminology extraction, computer-aided language learning and translation studies, to name a few. Bitext alignment is an arduous task because meaning is not expressed uniformly across languages: it varies with the linguistic properties and cultural backgrounds of the languages involved, and also depends on the translation strategy that was used to produce the bitext.

Current practices in bitext alignment model the alignment as a hidden variable in the translation process. In order to reduce the complexity of the task, such approaches assume that a word in the source sentence is aligned to at most one word in the target sentence. However, this over-simplistic assumption results in asymmetric, one-to-many alignments, whereas alignments are typically symmetric and many-to-many. To achieve symmetry, two one-to-many alignments in opposite translation directions are built and combined using a heuristic. In order to use these word alignments in phrase-based translation systems, which use phrases instead of words, another heuristic is used to extract phrase pairs that are consistent with the word alignment.

In this dissertation we address both the word alignment and the phrase-pair extraction problems, improving on the state of the art in several ways using discriminative learning techniques. We present a maximum entropy (MaxEnt) framework for word alignment in which links are predicted independently from one another using a MaxEnt classifier. The interaction between alignment decisions is approximated using stacking techniques, which allows us to account for a part of the structural dependencies without increasing the complexity. This formulation can be seen as an alignment combination method, in which the union of several input alignments is used to guide the output alignment; additionally, the input alignments are used to compute a rich set of feature functions. Our MaxEnt aligner obtains state-of-the-art results in terms of alignment quality, as measured by the alignment error rate, and translation quality, as measured by BLEU, on large-scale Arabic-English NIST'09 systems.

We also present a translation-quality-informed procedure for both the extraction and the evaluation of phrase pairs. We reformulate the problem in a supervised framework in which we decide, for each phrase pair, whether to keep it in the translation model. This offers a principled way to combine several features and makes the procedure more robust to alignment difficulties. We use a simple and effective method, based on oracle decoding, to annotate the phrase pairs that are useful for translation. Using machine learning techniques based on positive examples only, these annotations can be used to learn phrase alignment decisions. With this approach we obtain improvements in BLEU scores for recall-oriented translation models, which are suitable for small training corpora.
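The per-link MaxEnt prediction described above can be illustrated with a small sketch. Logistic regression over binary aligner "vote" features is one minimal instance of a MaxEnt link classifier; the feature set here is a toy stand-in for the thesis's rich feature functions, and the stacking step that models interactions between links is omitted for brevity.

```python
from sklearn.linear_model import LogisticRegression

def link_features(i, j, src, tgt, input_aligns):
    """Features for one candidate link (i, j). Each input aligner
    contributes one binary vote feature, mirroring the
    alignment-combination view; the rest are toy stand-ins."""
    votes = [float((i, j) in a) for a in input_aligns]
    rel_dist = abs(i / len(src) - j / len(tgt))
    same_form = float(src[i].lower() == tgt[j].lower())
    return votes + [rel_dist, same_form]

def train_link_classifier(corpus):
    """corpus: iterable of (src_tokens, tgt_tokens, input_aligns, gold),
    where alignments are sets of (i, j) pairs. Candidate links are taken
    from the union of the input alignments, which then guides the output."""
    X, y = [], []
    for src, tgt, input_aligns, gold in corpus:
        for i, j in set().union(*input_aligns):
            X.append(link_features(i, j, src, tgt, input_aligns))
            y.append(int((i, j) in gold))
    return LogisticRegression(max_iter=1000).fit(X, y)
```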
44

Soft margin estimation for automatic speech recognition

Li, Jinyu 27 August 2008 (has links)
In this study, a new discriminative learning framework, called soft margin estimation (SME), is proposed for estimating the parameters of continuous density hidden Markov models (HMMs). The proposed method makes direct use of the successful ideas of margins in support vector machines to improve generalization capability, and of decision feedback learning in discriminative training to enhance model separation in classifier design. SME directly maximizes the separation between competing models so that test samples still lead to correct decisions provided their deviation from the training samples stays within a safe margin. Frame and utterance selection are integrated into a unified framework to select the training utterances and frames critical for discriminating between competing models. SME offers a flexible and rigorous framework that facilitates the incorporation of new margin-based optimization criteria into HMM training. The choice of various loss functions is illustrated, and different kinds of separation measures are defined under a unified SME framework. SME is also shown to be able to jointly optimize feature extraction and HMMs. Both the generalized probabilistic descent algorithm and the extended Baum-Welch algorithm are applied to solve SME. SME has demonstrated a clear advantage over other discriminative training methods on several speech recognition tasks. Tested on the TIDIGITS digit recognition task, the proposed SME approach achieves a string accuracy of 99.61%, the best result reported in the literature. On the 5k-word Wall Street Journal task, SME reduced the word error rate (WER) from 5.06% with MLE models to 3.81%, a 25% relative WER reduction. This is the first demonstration of the effectiveness of margin-based acoustic modeling for large vocabulary continuous speech recognition in an HMM framework. The generalization of SME was also demonstrated on the Aurora 2 robust speech recognition task, with around a 30% relative WER reduction from the clean-trained baseline.
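The margin-based separation idea can be made concrete with a schematic sketch. The hinge-shaped loss and the selection band below are one plausible instantiation; the thesis defines several separation measures and loss shapes under its unified framework, and all names here are illustrative.

```python
def sme_separation(score_correct, scores_competing):
    """Separation measure for one utterance: the discriminant score of
    the correct model minus that of the best competitor."""
    return score_correct - max(scores_competing)

def sme_loss(d, margin):
    """Hinge-shaped soft margin loss: zero once the separation d clears
    the margin, growing linearly inside it."""
    return max(0.0, margin - d)

def select_utterances(separations, margin):
    """Utterance selection: keep only samples inside the margin band,
    since samples far on either side of the boundary carry little
    discriminative information (one plausible selection rule)."""
    return [k for k, d in enumerate(separations) if 0.0 < d < margin]
```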
45

Comparing a discriminative stimulus procedure to a pairing procedure: Conditioning neutral social stimuli to function as conditioned reinforcers.

Koelker, Rachel Lee 12 1900 (has links)
Social stimuli that function as reinforcers for most children generally do not function as reinforcers for children diagnosed with autism. These important social stimuli include smiles, head nods, thumbs-ups, and okay signs. An important goal of therapy for children with autism should be to condition these neutral social stimuli to function as reinforcers. There is empirical evidence to support both a pairing procedure (classical conditioning) and a discriminative stimulus procedure for conditioning neutral stimuli to function as reinforcers. However, there is no clear evidence that either procedure is more effective. Despite this, most textbooks and curriculum guides for children with autism present only the pairing procedure for conditioning neutral stimuli to function as reinforcers. Recent studies suggest that the discriminative stimulus procedure may in fact be more effective in conditioning neutral stimuli to function as reinforcers for children diagnosed with autism. The present research is a further comparison of these two procedures. Results from one participant support recent findings suggesting that the discriminative stimulus procedure is more effective; results from the other participant, however, show no effect from either procedure, suggesting the need for future research into the conditions necessary to condition neutral social stimuli to function as reinforcers for children with autism.
46

Probabilistic inference for phrase-based machine translation : a sampling approach

Arun, Abhishek January 2011 (has links)
Recent advances in statistical machine translation (SMT) have used dynamic programming (DP) based beam search methods for approximate inference within probabilistic translation models. Despite their success, these methods compromise the probabilistic interpretation of the underlying model, thus limiting the application of probabilistically defined decision rules during training and decoding. As an alternative, this thesis proposes a novel Monte Carlo sampling approach for theoretically sound approximate probabilistic inference within these models. The distribution of interest is the conditional distribution of a log-linear translation model; however, there is often no tractable way of computing its normalisation term. Instead, a Gibbs sampling approach for phrase-based machine translation models is developed which obviates the need to compute this term yet produces samples from the required distribution. We establish that the sampler effectively explores the distribution defined by a phrase-based model by showing that it converges to the desired distribution in a reasonable amount of time, irrespective of initialisation. Empirical evidence confirms that the sampler can provide accurate estimates of expectations of functions of interest; the mix of high- and low-probability derivations obtained through sampling is shown to yield more accurate estimates of expectations than using only the n most probable derivations. Subsequently, we show that the sampler provides a tractable solution for finding the maximum probability translation in the model. We also present a unified approach to approximating two additional intractable problems: minimum risk training and minimum Bayes risk decoding. Key to our approach is the sampler, which allows us to explore the entire probability distribution and to maintain a strict probabilistic formulation throughout the translation pipeline. For these tasks, sampling combines the simplicity of n-best list approaches with the extended view of the distribution that lattice-based approaches enjoy, while avoiding the biases associated with beam search. Our approach is theoretically well motivated and can give better and more stable results than current state-of-the-art methods.
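To make the sampling idea concrete, here is a schematic Gibbs-style step over translation derivations. The operator set and scoring function are placeholders for the thesis's actual operators (such as retranslating a phrase or moving a segmentation boundary); the key point shown is that resampling normalises only over a small local set, so the intractable global normaliser is never needed.

```python
import math
import random

def gibbs_step(derivation, local_alternatives, log_score):
    """One operator application: enumerate local alternatives to the
    current derivation and resample among them in proportion to the
    model score. Normalising over this local set keeps the step
    tractable even though the global normaliser is not computable."""
    options = [derivation] + local_alternatives(derivation)
    m = max(log_score(d) for d in options)
    weights = [math.exp(log_score(d) - m) for d in options]  # stable exp
    return random.choices(options, weights=weights, k=1)[0]

def estimate_expectation(init, local_alternatives, log_score, f,
                         n_samples, burn_in=100):
    """Monte Carlo estimate of E[f(derivation)] under the model."""
    d, total = init, 0.0
    for t in range(burn_in + n_samples):
        d = gibbs_step(d, local_alternatives, log_score)
        if t >= burn_in:
            total += f(d)
    return total / n_samples
```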
47

Les liens entre le statut parental et les infanticides des enfants de douze ans et moins

Quenneville, Jean-Philippe 04 1900 (has links)
Worldwide, infanticide is an important cause of infant mortality. In this thesis, infanticides are analyzed according to parental status, mode of death, and the age of the child. The first hypothesis proposes that non-biological parents are over-represented in infanticides of children under twelve, relative to base rates in the population. Hypothesis 2 predicts that infanticides committed by biological parents should be more lethal in character (use of firearms, poisoning, etc.) than those committed by non-biological parents, which should be characterized primarily by maltreatment and neglect. Further hypotheses are examined with respect to suicide rates and the sex of the aggressor. The study covers infanticide cases involving children aged twelve and under in Quebec, drawn from the archives of the coroner's office for the period from 1990 to 2007 (n = 182). The results partially support Hypothesis 1 and confirm Hypothesis 2. In this sense, the findings support evolutionary hypotheses, notably parental investment theory, which posit an influence of parental status on infanticidal behavior. More generally, the results highlight qualitative differences between biological and non-biological parents in cases of infanticide. The implications of these findings are discussed.
48

Visual Representations and Models: From Latent SVM to Deep Learning

Azizpour, Hossein January 2016 (has links)
Two important components of a visual recognition system are the representation and the model. Both involve selecting and learning the features that are indicative for recognition while discarding those that are uninformative. This thesis proposes different techniques within the frameworks of two learning systems for representation and modeling: latent support vector machines (latent SVMs) and deep learning. First, we propose various approaches to group the positive samples into clusters of visually similar instances, since, for a fixed representation, the sampled space of the positive distribution is usually structured. The proposed clustering techniques include a novel similarity measure based on exemplar learning, an approach for using additional annotation, and an augmentation of the latent SVM that automatically finds clusters whose members can be reliably distinguished from the background class. In another effort, a strongly supervised deformable part model (DPM) is proposed to study how these models can benefit from privileged information. The extra information comes in the form of semantic part annotations (their presence and location), which are used to constrain the DPM's latent variables during, or prior to, the optimization of the latent SVM; its effectiveness is demonstrated on the task of animal detection. Finally, we generalize the formulation of discriminative latent variable models, including DPMs, to incorporate a new set of latent variables representing the structure or properties of negative samples, which we accordingly term negative latent variables. We show that this generalization affects state-of-the-art techniques and helps visual recognition by explicitly searching for counter-evidence of an object's presence. Following the resurgence of deep networks, the last works of this thesis focus on deep learning in order to produce a generic representation for visual recognition. A convolutional network (ConvNet) is trained on ImageNet, a large annotated image classification dataset of roughly 1.3 million images. The activations at each layer of the trained ConvNet can then be treated as the representation of an input image. We show that such a representation is surprisingly effective for various recognition tasks, making it clearly superior to all the handcrafted features previously used in visual recognition (such as the HOG features in our earlier DPM work). We further investigate how this representation can be improved for a given target task, proposing various factors, applied before or after training the representation, that improve the efficacy of the ConvNet representation. These factors are analyzed on 16 datasets from various subfields of visual recognition.
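The "activations as representation" recipe is easy to reproduce with modern tooling. The sketch below uses a pretrained torchvision ResNet-18 purely as a stand-in for the networks used in the thesis (which predate it); it shows the general pattern of dropping the classifier head and keeping the penultimate activations as a generic image feature.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# A pretrained ImageNet ConvNet used as a fixed, generic feature
# extractor: keep the activations feeding the classifier head as the
# representation, then train any simple classifier on top of them.
net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
net.fc = torch.nn.Identity()  # drop the 1000-way ImageNet head
net.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def convnet_features(pil_images):
    """Map a list of PIL images to one 512-d feature vector each."""
    batch = torch.stack([preprocess(im) for im in pil_images])
    return net(batch)
```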
49

Discriminative Alignment Models For Statistical Machine Translation / Modèles Discriminants d'Alignement Pour La Traduction Automatique Statistique

Tomeh, Nadi 27 June 2012 (has links)
Bitext alignment is the task of aligning a text in a source language and its translation in the target language. Aligning amounts to finding the translational correspondences between textual units at different levels of granularity. Many practical natural language processing applications rely on bitext alignments to access the rich linguistic knowledge present in a bitext. While the most predominant application for bitexts is statistical machine translation, they are also used in multilingual (and monolingual) lexicography, word sense disambiguation, terminology extraction, computer-aided language learning and translation studies, to name a few. Bitext alignment is an arduous task because meaning is not expressed uniformly across languages: it varies with the linguistic properties and cultural backgrounds of the languages involved, and also depends on the translation strategy that was used to produce the bitext.

Current practices in bitext alignment model the alignment as a hidden variable in the translation process. In order to reduce the complexity of the task, such approaches assume that a word in the source sentence is aligned to at most one word in the target sentence. However, this over-simplistic assumption results in asymmetric, one-to-many alignments, whereas alignments are typically symmetric and many-to-many. To achieve symmetry, two one-to-many alignments in opposite translation directions are built and combined using a heuristic. In order to use these word alignments in phrase-based translation systems, which use phrases instead of words, another heuristic is used to extract phrase pairs that are consistent with the word alignment.

In this dissertation we address both the word alignment and the phrase-pair extraction problems, improving on the state of the art in several ways using discriminative learning techniques. We present a maximum entropy (MaxEnt) framework for word alignment in which links are predicted independently from one another using a MaxEnt classifier. The interaction between alignment decisions is approximated using stacking techniques, which allows us to account for a part of the structural dependencies without increasing the complexity. This formulation can be seen as an alignment combination method, in which the union of several input alignments is used to guide the output alignment; additionally, the input alignments are used to compute a rich set of feature functions. Our MaxEnt aligner obtains state-of-the-art results in terms of alignment quality, as measured by the alignment error rate, and translation quality, as measured by BLEU, on large-scale Arabic-English NIST'09 systems.

We also present a translation-quality-informed procedure for both the extraction and the evaluation of phrase pairs. We reformulate the problem in a supervised framework in which we decide, for each phrase pair, whether to keep it in the translation model. This offers a principled way to combine several features and makes the procedure more robust to alignment difficulties. We use a simple and effective method, based on oracle decoding, to annotate the phrase pairs that are useful for translation. Using machine learning techniques based on positive examples only (a one-class SVM approach), these annotations can be used to learn phrase alignment decisions. With this approach we obtain improvements in BLEU scores for recall-oriented translation models, which are suitable for small training corpora.
50

Phonemic variability and confusability in pronunciation modeling for automatic speech recognition / Variabilité et confusabilité phonémique pour les modèles de prononciations au sein d’un système de reconnaissance automatique de la parole

Karanasou, Panagiota 11 June 2013 (has links)
This thesis addresses the problems of phonemic variability and confusability from the pronunciation modeling perspective for an automatic speech recognition (ASR) system. Several research directions are investigated. First, automatic grapheme-to-phoneme (g2p) and phoneme-to-phoneme (p2p) converters are developed that generate alternative pronunciations for in-vocabulary as well as out-of-vocabulary (OOV) terms. Since adding alternative pronunciations may introduce homophones (or near-homophones), it increases the confusability of the system. A novel measure of this confusability is proposed in order to analyze it and to study its relation to ASR performance. This pronunciation confusability is higher when pronunciation probabilities are not provided, and it can severely degrade ASR performance; it should therefore be taken into account during pronunciation generation. Discriminative training approaches are then investigated to train the weights of a phoneme confusion model that allows alternative ways of pronouncing a term while keeping the phonemic confusability problem in check. The objective function to optimize is chosen to match the performance measure of the task at hand. Two tasks are investigated in this thesis: ASR and keyword spotting (KWS). For ASR, an objective that minimizes the phoneme error rate is adopted. For the KWS experiments, the Figure of Merit (FOM), a KWS performance measure, is directly maximized.
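The role of a weighted phoneme confusion model can be illustrated with a toy expansion routine. The confusion entries and weights below are invented for illustration only; in the thesis the weights are trained discriminatively against the task metric (phoneme error rate for ASR, FOM for KWS).

```python
import itertools

# Toy weighted phoneme confusion model: each phoneme maps to scored
# substitutes. These entries and weights are hypothetical.
CONFUSIONS = {
    "t":  [("t", 1.0), ("d", 0.3)],
    "ih": [("ih", 1.0), ("iy", 0.4)],
}

def pronunciation_variants(phones, max_variants=5):
    """Expand a canonical pronunciation into scored variants. Keeping
    (and later using) the scores is what counterbalances the extra
    confusability that alternative pronunciations introduce."""
    per_phone = [CONFUSIONS.get(p, [(p, 1.0)]) for p in phones]
    scored = []
    for combo in itertools.product(*per_phone):
        variant = [ph for ph, _ in combo]
        score = 1.0
        for _, w in combo:
            score *= w
        scored.append((variant, score))
    scored.sort(key=lambda v: -v[1])
    return scored[:max_variants]

print(pronunciation_variants(["t", "ih", "p"]))
# [(['t','ih','p'], 1.0), (['t','iy','p'], 0.4), (['d','ih','p'], 0.3), ...]
```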
