11 |
Extração de informações de conferências em páginas web / Extracting conference information from web pages — Garcia, Cássio Alan. January 2017 (has links)
Choosing the most suitable conference for submitting a paper is a task that depends on various factors: (i) the topic of the paper must be among the topics of interest of the conference; (ii) the submission deadline must be compatible with the time needed to write the paper; (iii) conference location and registration costs; and (iv) the quality or impact of the conference (e.g., the Qualis rating assessed by CAPES). These factors, together with the existence of thousands of conferences, make the search for the right event very time-consuming, especially when researching in a new area. To help researchers find conferences, this work presents a method for retrieving and extracting data from conference web sites. This is a challenging task, mainly because each conference has its own site with its own layout. We propose CONFTRACKER, which combines the identification of the URLs of conferences listed in the Qualis Table with the extraction of their deadlines. Information extraction is carried out independently of the conference, the page's layout, and how the dates are presented (formatting and labels). To evaluate the proposed method, we carried out experiments with real web data from Computer Science conferences. The results show that CONFTRACKER significantly outperformed a baseline based on the position of labels and dates. Finally, the extraction process is run for all conferences in the Qualis Table, and the collected data populate a database that can be queried through an online interface.
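CONFTRACKER's own pipeline is not reproduced in this listing, but the label/date position baseline it is compared against can be sketched in a few lines. The sketch below is a minimal illustration assuming hypothetical regular expressions and the python-dateutil package; the label lexicon and date patterns are placeholders, not the thesis's own.

```python
# A minimal sketch (assumptions throughout) of the label/date baseline:
# pair each deadline-like label with the nearest parseable date in the
# page text. Requires python-dateutil; the regexes are placeholders.
import re
from dateutil import parser as dateparser

LABELS = re.compile(r"(?:paper|abstract|submission|notification)\s+deadline", re.I)
DATEISH = re.compile(
    r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.?\s+"
    r"\d{1,2},?\s+\d{4}\b|\b\d{4}-\d{2}-\d{2}\b", re.I)

def extract_deadlines(page_text):
    """Yield (label, date) pairs, matching each label to its closest date."""
    dates = [(m.start(), m.group()) for m in DATEISH.finditer(page_text)]
    for m in LABELS.finditer(page_text):
        if not dates:
            return
        pos, date_str = min(dates, key=lambda d: abs(d[0] - m.start()))
        try:
            yield m.group(), dateparser.parse(date_str, fuzzy=True).date()
        except (ValueError, OverflowError):
            continue

text = "Paper submission deadline: March 3, 2017. Notification deadline 2017-05-20."
print(list(extract_deadlines(text)))
```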
|
12 |
Algoritmy pro rozpoznávání pojmenovaných entit / Algorithms for named entity recognition — Winter, Luca. January 2017 (has links)
The aim of this work is to find out which algorithm is best at recognizing named entities in e-mail messages. The theoretical part surveys existing tools in this field. The practical part describes two tools built specifically to create new models capable of recognizing named entities in e-mail messages. The first tool is based on a neural network and the second uses a CRF graphical model. The existing and newly created tools, and their ability to generalize, are compared on a subset of e-mail messages provided by Kiwi.com.
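As a rough illustration of the CRF tool's approach (not the thesis's actual code), a linear-chain CRF for NER can be trained with the sklearn-crfsuite package; the features and toy data below are assumptions made for the sketch.

```python
# A minimal linear-chain CRF for NER, assuming sklearn-crfsuite
# (pip install sklearn-crfsuite); features and data are illustrative.
import sklearn_crfsuite

def word_features(sent, i):
    w = sent[i]
    return {
        "lower": w.lower(),
        "is_title": w.istitle(),   # capitalization is a strong NER cue
        "is_digit": w.isdigit(),
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

# Toy training data in BIO tagging format.
sents = [["Book", "a", "flight", "with", "Kiwi", "Airlines"]]
tags = [["O", "O", "O", "O", "B-ORG", "I-ORG"]]

X = [[word_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, tags)
print(crf.predict(X))
```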
|
13 |
A Conditional Random Field (CRF) Based Machine Learning Framework for Product Review Mining — Ming, Yue. January 2019 (has links)
The task of opinion mining from product reviews has typically been tackled with rule-based approaches or generative learning models such as hidden Markov models (HMMs). This paper introduces a discriminative model using linear-chain Conditional Random Fields (CRFs), which can naturally incorporate arbitrary, non-independent features of the input without requiring conditional independence among the features or distributional assumptions about the inputs. The framework first performs part-of-speech (POS) tagging over each word in the review sentences. Performance is evaluated on three criteria: precision, recall, and F-score. The results show that this approach is effective for this type of natural language processing (NLP) task. The framework then extracts the keywords associated with each product feature and summarizes them into concise lists that are simple and intuitive for people to read.
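The three evaluation criteria are the standard ones; as a reminder, they can be computed as follows (the counts below are made-up examples, not the paper's results).

```python
# Standard precision/recall/F-score over extracted keywords;
# the counts in the example are invented for illustration.
def prf(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# e.g. 42 feature keywords correctly extracted, 8 spurious, 10 missed
print("P=%.2f R=%.2f F=%.2f" % prf(42, 8, 10))
```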
|
14 |
Segmentace obrazu s využitím hlubokého učení / Image segmentation using deep learning methods — Lukačovič, Martin. January 2017 (has links)
This thesis deals with current methods of semantic segmentation using deep learning. Other applications of neural networks in the area of deep learning are also discussed, covering the history of neural networks, their development, and their basic principles. Convolutional neural networks are nowadays the preferred networks for tasks such as detection, classification, and image segmentation. The functionality was verified in a freely available environment based on conditional random fields as recurrent neural networks, and compared with deep convolutional neural networks that use conditional random fields as a post-processing step. The latter method became the basis for training new models on two different datasets. Various environments exist for implementing deep-learning neural networks, offering diverse capabilities; for demonstration purposes, a Python application leveraging the BVLC/Caffe framework was created. The best accuracy achieved by a trained model is 50.74% for clothing segmentation and 68.52% for segmentation of VOC objects. The application allows interactive image segmentation based on the trained models.
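For readers who want to reproduce the "CRF as post-processing" idea without the Caffe stack, a fully connected CRF refinement step can be sketched with the pydensecrf package; this is an assumption for illustration, and the kernel parameters below are common defaults, not the thesis's settings.

```python
# Refine CNN softmax output with a dense CRF (pip install pydensecrf).
# Parameters are common example defaults, not tuned values.
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def refine(image, softmax, n_iters=5):
    """image: HxWx3 uint8 array; softmax: CxHxW class probabilities."""
    c, h, w = softmax.shape
    d = dcrf.DenseCRF2D(w, h, c)
    d.setUnaryEnergy(unary_from_softmax(softmax))   # -log(p) unaries
    d.addPairwiseGaussian(sxy=3, compat=3)          # smoothness kernel
    d.addPairwiseBilateral(sxy=80, srgb=13, rgbim=image, compat=10)
    q = d.inference(n_iters)                        # mean-field updates
    return np.argmax(q, axis=0).reshape(h, w)       # refined label map
```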
|
15 |
Identificación de opiniones de diferentes fuentes en textos en español / Identification d'opinions issues de diverses sources dans des textes en espagnol / Identification of opinions from different sources in Spanish texts — Rosá, Aiala. 28 September 2011 (has links)
This work presents a study of linguistic expressions of opinion from different sources in Spanish texts. It includes the definition of a model for opinion predicates and their arguments (source, topic, and message), the creation of a lexicon of opinion predicates annotated with information from the model, and the implementation of three systems. The first system, based on contextual rules, obtains good partial-match F-measure scores: predicate, 92%; source, 81%; topic, 75%; message, 89%; full opinion, 85%. For source identification, its exact-match F-measure is 79%. The second system, based on Conditional Random Fields (CRFs), was developed only for the identification of sources, giving 76% exact-match F-measure. The third system, which combines the two techniques (rules and CRFs), reaches 83% exact-match F-measure, showing that the combination yields interesting results. Regarding source identification, our system compares very favourably with work on languages other than Spanish, whose scores fall between 63% and 89.5%. Beyond the systems built for opinion identification, this work has also produced several resources for Spanish: a lexicon of opinion predicates, a 13,000-word corpus annotated with opinions, and a 40,000-word corpus annotated with opinion predicates and sources.
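A contextual rule of the kind the first system uses can be sketched as follows; the predicate lexicon and patterns are illustrative assumptions, not the thesis's actual rules.

```python
# Toy contextual rules for opinion-source detection in Spanish:
# a capitalized phrase adjacent to a reporting predicate is a source.
import re

PREDICATES = r"(?:dijo|afirmó|señaló|criticó|opinó)"
# Rule 1: a capitalized name directly before a reporting verb.
PRE = re.compile(r"((?:[A-ZÁÉÍÓÚÑ][\wáéíóúñ]+\s+)+)" + PREDICATES)
# Rule 2: "según X" introduces a source.
POST = re.compile(r"[Ss]egún\s+(?:el\s+|la\s+)?((?:[A-ZÁÉÍÓÚÑ][\wáéíóúñ]+\s*)+)")

def find_sources(sentence):
    hits = [m.group(1).strip() for m in PRE.finditer(sentence)]
    hits += [m.group(1).strip() for m in POST.finditer(sentence)]
    return hits

print(find_sources("Según el Presidente, la reforma avanza."))
print(find_sources("Vázquez dijo que apoya la reforma."))
```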
|
16 |
Machine Learning Methods for Articulatory Data — Berry, Jeffrey James. January 2012 (has links)
Humans make use of more than just the audio signal to perceive speech. Behavioral and neurological research has shown that a person's knowledge of how speech is produced influences what is perceived. With methods for collecting articulatory data becoming more ubiquitous, methods for extracting useful information are needed to make this data useful to speech scientists and for speech technology applications. This dissertation presents feature extraction methods for ultrasound images of the tongue and for data collected with an Electro-Magnetic Articulograph (EMA). The usefulness of these features is tested in several phoneme classification tasks. The feature extraction methods for ultrasound tongue images consist of automatically tracing the tongue surface contour using a modified Deep Belief Network (DBN) (Hinton et al. 2006), and of methods inspired by research in face recognition which use the entire image. The tongue tracing method trains a DBN as an autoencoder on concatenated images and traces, and then retrains the first two layers to accept only the image at runtime. This 'translational' DBN (tDBN) method is shown to produce traces comparable to those made by human experts, and an iterative bootstrapping procedure is presented for using the tDBN to assist a human expert in labeling a new data set. Tongue contour traces are compared with the Eigentongues method (Hueber et al. 2007) and a Gabor Jet representation in a 6-class phoneme classification task using Support Vector Classifiers (SVCs), with Gabor Jets performing best. These SVC methods are compared to a tDBN classifier, which extracts features from raw images and classifies them with accuracy only slightly lower than the Gabor Jet SVC method. For EMA data, supervised binary SVC feature detectors are trained for each feature in three versions of Distinctive Feature Theory (DFT): Preliminaries (Jakobson et al. 1954), The Sound Pattern of English (Chomsky and Halle 1968), and Unified Feature Theory (Clements and Hume 1995). Each of these feature sets, together with a fourth, unsupervised feature set learned using Independent Components Analysis (ICA), is compared on its usefulness in a 46-class phoneme recognition task. Phoneme recognition is performed using a linear-chain Conditional Random Field (CRF) (Lafferty et al. 2001), which takes advantage of the temporal nature of speech by looking at observations adjacent in time. Results show that Unified Feature Theory performs slightly better than the other versions of DFT. Surprisingly, ICA actually performs worse than running the CRF on raw EMA data.
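As a sketch of the per-frame classification setup, the SVC stage might look like this in scikit-learn; synthetic data stands in for the real EMA coil coordinates, and the dimensions are assumptions.

```python
# Toy 6-class phoneme classification from EMA-like feature vectors
# with a Support Vector Classifier; data is synthetic, for shape only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Pretend features: x/y coordinates of 6 EMA coils -> 12 dims per frame.
X = rng.normal(size=(600, 12))
y = rng.integers(0, 6, size=600)   # 6 phoneme classes, as in the text

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("accuracy: %.2f" % clf.score(X_te, y_te))
```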
|
17 |
Modèles graphiques discriminants pour l'étiquetage de séquences : application à la reconnaissance d'entités nommées radiophoniques / Discriminative graphical models for sequence labelling: application to named entity recognition in audio broadcast news — Zidouni, Azeddine. 08 December 2010 (has links)
Automatic processing of complex and varied data is a fundamental process in information extraction applications. Sequence annotation systems associate structured annotations with input data presented in sequential form. The named entity recognition (NER) task consists of identifying and classifying every word in a document into predefined categories such as person names, locations, organizations, and dates. The complexity of NER is largely related to the definition of the task and to the complexity of the relationships between words and their associated semantics. Structured stochastic models such as conditional random fields (CRFs) make it possible to build information extraction systems with strong generalization capacity. Our first contribution is devoted to defining the optimal context for extracting regularities between words and annotations in the NER task: the proposed approach integrates various sources of information to enrich the observations and improve the system's predictions. NER systems are built for a specific annotation protocol, so new applications must be developed for new protocols; the challenge is how to adapt an annotation system built for one application to another target application. Our second contribution is an adaptation approach for sequence labelling between two different protocols, based on enriching observations with data generated by other systems. This work is evaluated and validated on data from the ESTER campaign. Finally, we propose a multimodal approach that couples the signal level, represented by a voicing-quality index, with the semantic level; the objective is to study the link between a speaker's degree of articulation and the importance of his speech.
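The "optimal context" question can be made concrete with a small sketch: exposing a window of neighbouring words to the CRF as features. The window size and feature names below are illustrative, not the thesis's configuration.

```python
# Context-window features for a CRF tagger: each position sees its own
# word plus `size` neighbours on each side. Names are illustrative.
def window_features(words, i, size=2):
    feats = {"w0": words[i].lower()}
    for d in range(1, size + 1):
        feats[f"w-{d}"] = words[i - d].lower() if i - d >= 0 else "<BOS>"
        feats[f"w+{d}"] = words[i + d].lower() if i + d < len(words) else "<EOS>"
    return feats

print(window_features("le président Jacques Chirac a déclaré".split(), 2))
```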
|
18 |
On conditional random fields: applications, feature selection, parameter estimation and hierarchical modelling — Tran, The Truyen. January 2008 (has links)
There has been a growing interest in stochastic modelling and learning with complex data, whose elements are structured and interdependent. One of the most successful methods for modelling data dependencies is graphical models, which combine graph theory and probability theory. This thesis focuses on a special type of graphical model known as Conditional Random Fields (CRFs) (Lafferty et al., 2001), in which the output state spaces, when conditioned on some observational input data, are represented by undirected graphical models. The contributions of this thesis involve both (a) broadening the current applicability of CRFs in the real world and (b) deepening the understanding of theoretical aspects of CRFs. On the application side, we empirically investigate the application of CRFs in two real-world settings. The first application is the novel domain of Vietnamese accent restoration, in which we need to restore the accents of an accent-less Vietnamese sentence. Experiments on half a million sentences of news articles show that the CRF-based approach is highly accurate. In the second application, we develop a new CRF-based movie recommendation system called Preference Network (PN). The PN jointly integrates various sources of domain knowledge into a large and densely connected Markov network. We obtain competitive results against well-established methods in the recommendation field. / On the theory side, the thesis addresses three important theoretical issues of CRFs: feature selection, parameter estimation, and modelling recursive sequential data. These issues are all addressed under a general setting of partial supervision, in which training labels are not fully available. For feature selection, we introduce a novel learning algorithm called AdaBoost.CRF that incrementally selects features out of a large feature pool as learning proceeds. AdaBoost.CRF is an extension of the standard boosting methodology to structured and partially observed data. We demonstrate that AdaBoost.CRF is able to eliminate irrelevant features and, as a result, returns a very compact feature set without significant loss of accuracy. Parameter estimation of CRFs is generally intractable for arbitrary network structures. This thesis contributes to this area by proposing a learning method called AdaBoost.MRF (AdaBoosted Markov Random Forests). As learning proceeds, AdaBoost.MRF incrementally builds a tree ensemble (a forest) that covers the original network by selecting one best spanning tree at a time. As a result, we can approximately learn many rich classes of CRFs in linear time. The third theoretical contribution concerns modelling recursive, sequential data, in which each level of resolution is a Markov sequence and each state in the sequence is itself a Markov sequence at a finer grain. One of the key contributions of this thesis is the Hierarchical Conditional Random Field (HCRF), an extension of the currently popular sequential CRF and the recent semi-Markov CRF (Sarawagi and Cohen, 2004). Unlike previous CRF work, the HCRF does not assume any fixed graphical structure.
Rather, it treats structure as an uncertain aspect that can be estimated automatically from the data. The HCRF is motivated by the Hierarchical Hidden Markov Model (HHMM) (Fine et al., 1998). Importantly, the thesis shows that the HHMM is, with slight modification, a special case of the HCRF, and that the semi-Markov CRF is essentially a flat version of the HCRF. Central to our contribution in the HCRF is a polynomial-time algorithm for learning and inference based on the Asymmetric Inside-Outside (AIO) family developed in (Bui et al., 2004). Another important contribution is the extension of the AIO family to address learning with missing data and inference under partially observed labels. We also derive methods to deal with practical concerns associated with the AIO family, including numerical overflow and cubic-time complexity. Finally, we demonstrate good performance of the HCRF against rivals on two applications: indoor video surveillance and noun-phrase chunking.
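The numerical-overflow concern mentioned above is standard in inside-outside and forward-backward style algorithms: products of many small potentials underflow or overflow. The usual remedy, sketched below for a linear-chain forward pass, is to run the recursion in log space with log-sum-exp; this is the generic fix, not the thesis's exact implementation.

```python
# Log-space forward recursion for a linear-chain model; log-sum-exp
# keeps the partition-function computation numerically stable.
import numpy as np
from scipy.special import logsumexp

def forward_log(log_emit, log_trans):
    """log_emit: TxS per-position scores, log_trans: SxS; returns log Z."""
    alpha = log_emit[0]                   # log alpha_1(s)
    for t in range(1, len(log_emit)):
        # alpha_t(s) = emit_t(s) * sum_{s'} alpha_{t-1}(s') * trans(s', s)
        alpha = log_emit[t] + logsumexp(alpha[:, None] + log_trans, axis=0)
    return logsumexp(alpha)               # log partition function

T, S = 200, 5                             # long chain: naive products overflow
rng = np.random.default_rng(1)
print(forward_log(rng.normal(size=(T, S)), rng.normal(size=(S, S))))
```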
|
19 |
Road Surface Modeling using Stereo Vision / Modellering av Vägyta med hjälp av Stereokamera — Lorentzon, Mattis; Andersson, Tobias. January 2012 (has links)
Modern cars are often equipped with a variety of sensors that collect information about the car and its surroundings. The stereo camera is an example of a sensor that, in addition to regular images, also provides distances to points in its environment. This information can, for example, be used to detect approaching obstacles and warn the driver if a collision is imminent, or even to brake the vehicle automatically. Objects that constitute a potential danger are usually located on the road in front of the vehicle, which makes the road surface a suitable reference level from which to measure object heights. This Master's thesis describes how an estimate of the road surface can be found in order to make these height measurements. It describes how the large amount of data generated by the stereo camera can be scaled down to a more effective representation in the form of an elevation map. The report discusses a method for relating data from different instances in time using information from the vehicle's motion sensors, and shows how this method can be used for temporal filtering of the elevation map. For estimating the road surface, two different methods are compared: one that uses a RANSAC approach to iteratively fit a surface model, and one that uses conditional random fields to model the probability that different parts of the elevation map belong to the road. A way to detect curb lines, and to use them to improve the road surface estimate, is also shown. Both methods for road classification show good results, with a few differences that are discussed towards the end of the report. An example of how the road surface estimate can be used to detect obstacles is also included.
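The RANSAC approach can be sketched compactly: repeatedly fit a plane to three random elevation-map points and keep the hypothesis with the most inliers. The threshold and iteration count below are illustrative assumptions, not the thesis's values.

```python
# Toy RANSAC plane fit for a road-surface model over (x, y, z) points.
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.05, seed=0):
    """points: Nx3 array of (x, y, z); returns (unit normal, offset d)."""
    rng = np.random.default_rng(seed)
    best, best_count = None, -1
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate sample (collinear points)
            continue
        normal /= norm
        d = -normal @ p0
        count = int((np.abs(points @ normal + d) < threshold).sum())
        if count > best_count:
            best, best_count = (normal, d), count
    return best

# Synthetic elevation map: a flat road plus a raised obstacle.
pts = np.c_[np.random.rand(500, 2) * 20, np.random.randn(500) * 0.02]
pts[:40, 2] += 1.0                 # obstacle points well above the road
normal, d = ransac_plane(pts)
print("road plane normal:", np.round(normal, 2))
```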
|
20 |
Data-driven natural language generation using statistical machine translation and discriminative learning / L'approche discriminante à la génération de la parole — Manishina, Elena. 05 February 2016 (has links)
Humanity has long been fascinated by the idea of intelligent machines that can communicate freely with us in our language. Most modern systems that communicate directly with the user share one common feature: they are built on a dialog system (DS). Today almost all DS components have embraced statistical methods and widely use them as their core models. Until recently, the natural language generation (NLG) component of a dialog system used primarily hand-coded generation templates, which map model phrases in a natural language to particular semantic content. Today data-driven models are making their way into the NLG domain. In this thesis, we follow this new line of research and present several novel data-driven approaches to natural language generation. We focus on two important aspects of NLG system development: building an efficient generator and diversifying its output. Two key ideas that we defend are the following: first, the task of NLG can be regarded as translation between a natural language and a formal meaning representation, and can therefore be performed using statistical machine translation techniques; second, corpus extension and diversification, which traditionally involved manual paraphrasing and rule crafting, can be performed automatically using well-known and widely used synonym and paraphrase extraction methods. Concerning the first idea, we investigate the use of an n-gram translation framework and explore the potential of discriminative learning, notably Conditional Random Fields (CRFs), as applied to NLG; we build a generation pipeline which allows for the inclusion and combination of different generation models (n-gram and CRF) and which uses an efficient decoding framework (best-path search over finite-state transducers). Regarding the second objective, corpus extension, we propose to enlarge the system's vocabulary and the set of available syntactic structures by integrating automatically obtained synonyms and paraphrases into the training corpus. To our knowledge, there have been no previous attempts to increase the size of the system vocabulary by incorporating synonyms. To date, most studies on corpus extension have focused on paraphrasing and resorted to crowd-sourcing to obtain paraphrases, which then required additional manual validation, often performed by the system developers. We show that automatic corpus extension by means of paraphrase extraction and validation is just as effective as crowd-sourcing, while being less costly in terms of development time and resources. In intermediate experiments our generation models showed significantly better performance than the phrase-based baseline model and proved more robust in handling unknown combinations of concepts than the current in-house rule-based generator. The final human evaluation confirmed that our data-driven NLG models are a viable alternative to rule-based generators.
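The synonym-based vocabulary extension can be sketched with WordNet via NLTK; the example word and the (deliberately simplistic) filtering are assumptions for illustration, not taken from the thesis.

```python
# Harvest synonym candidates from WordNet (pip install nltk) as raw
# material for extending an NLG training corpus.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

def synonyms(word, pos=wn.NOUN):
    """Return WordNet lemma names for `word`, excluding the word itself."""
    lemmas = {
        lemma.name().replace("_", " ")
        for synset in wn.synsets(word, pos=pos)
        for lemma in synset.lemmas()
    }
    lemmas.discard(word)
    return sorted(lemmas)

print(synonyms("price"))   # candidates to swap into generated phrases
```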
|