21 |
Reconhecimento de entidades nomeadas na área da geologia: bacias sedimentares brasileiras
Amaral, Daniela Oliveira Ferreira do, 14 September 2017 (has links)
The treatment of textual information has become increasingly relevant in many domains. One of the first tasks for extracting information from texts is Named Entity Recognition (NER), which consists of identifying references to certain entities and classifying them. NER covers many domains, the most usual being medicine and biology. One of the challenging domains for the recognition of Named Entities (NE) is Geology, an area lacking computational linguistic resources. This thesis proposes a method for the recognition of relevant NE in the field of Geology, specifically the subarea of Brazilian Sedimentary Basins, in Portuguese texts. Generic and geological features were defined for the generation of a machine learning model. Among the automatic approaches to NE classification, the most prominent is the Conditional Random Fields (CRF) probabilistic model, which has been used effectively for natural language text processing. To generate our model, we created GeoCorpus, a reference corpus for geological NER annotated by specialists. Experimental evaluations were performed to compare the proposed method with other classifiers. The best results were achieved by CRF, which reached 76.78% Precision and 54.33% F-measure.
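The generic token-level features mentioned in the abstract are typically expressed as per-token dictionaries before being handed to a CRF toolkit. The sketch below is illustrative only: the feature names and the example tokens are invented here, not taken from the thesis.

```python
def word2features(tokens, i):
    """Generic features of the kind commonly fed to a CRF for NER:
    lowercased form, short suffix, capitalization, and left/right context."""
    word = tokens[i]
    features = {
        "word.lower": word.lower(),
        "word.suffix3": word[-3:],        # short suffixes capture morphology
        "word.is_title": word.istitle(),  # capitalization often marks entities
        "word.is_digit": word.isdigit(),
    }
    # Context features: neighbouring tokens help disambiguate labels
    if i > 0:
        features["prev.lower"] = tokens[i - 1].lower()
    else:
        features["BOS"] = True            # beginning of sentence
    if i < len(tokens) - 1:
        features["next.lower"] = tokens[i + 1].lower()
    else:
        features["EOS"] = True            # end of sentence
    return features

# Hypothetical geological phrase, for illustration only
tokens = ["Bacia", "de", "Campos"]
feats = word2features(tokens, 0)
```

Domain-specific geological features (gazetteer hits, formation-name patterns, and the like) would be added to the same dictionary alongside these generic ones.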
|
22 |
La structuration dans les entités nommées / Structuration in named entities
Dupont, Yoann, 23 November 2017 (has links)
Named entity recognition is a crucial NLP task. It supports the extraction of relations between named entities, which in turn enables the construction of knowledge bases (Surdeanu and Ji, 2014), automatic summarization (Nobata et al., 2002), and so on. Our interest in this thesis revolves around the structuring phenomena that surround named entities. We distinguish two kinds of structural elements in named entities. The first are recurrent substrings, which we call the characteristic affixes of a named entity. The second are tokens with strong discriminative power, which we call trigger tokens. We detail the algorithm we designed to extract characteristic affixes and compare it to Morfessor (Creutz and Lagus, 2005b). We then apply the same method to extract trigger tokens, which we use for French named entity recognition and postal address extraction. Another form of structuring in named entities is syntactic, generally following a nested or tree-shaped structure. We propose a cascade of linear taggers that had not previously been used for named entity recognition; it generalizes earlier approaches, which can only recognize entities of finite depth or cannot model certain characteristics of structured named entities. Ours can do both. Throughout this thesis, we compare two machine learning methods, CRFs and neural networks, and present the respective advantages and drawbacks of each.
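The idea of recurrent substrings as characteristic affixes can be illustrated with a toy frequency count over entity tokens. This is a minimal stand-in, not the thesis's algorithm (which is compared against Morfessor); the entity list and thresholds are invented for the example.

```python
from collections import Counter

def characteristic_affixes(entities, n=3, min_count=2):
    """Count length-n suffixes over a list of entity tokens and keep
    the recurrent ones as candidate characteristic affixes."""
    counts = Counter(e[-n:].lower() for e in entities if len(e) >= n)
    return {affix for affix, c in counts.items() if c >= min_count}

# Hypothetical organisation names sharing a recurrent suffix
orgs = ["Renault", "Peugeot", "Airbus", "Dassault", "Thales"]
affixes = characteristic_affixes(orgs, n=3, min_count=2)
```

Trigger tokens would be extracted similarly, but by counting whole tokens and scoring how strongly each one predicts a given entity type rather than raw frequency.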
|
23 |
Human Activity Recognition and Behavioral Prediction using Wearable Sensors and Deep Learning
Bergelin, Victor, January 2017 (has links)
As we move into a more connected world together with machines, mutual understanding will be very important. With the increased availability of wearable sensors, a better understanding of human needs is suggested. The Dartmouth Research study at the Psychiatric Research Center has examined the viability of detecting, and further on predicting, human behaviour and complex tasks. The field of smoking detection was challenged by using the Q-sensor by Affectiva as a prototype. Furthermore, this study implemented a framework for future research, as a basis for developing a low-cost, connected device with the Thayer School of Engineering at Dartmouth College. With three days of data from 10 subjects, smoking sessions were detected with just under 90% accuracy using the Conditional Random Field algorithm. However, predicting smoking with Electrodermal Momentary Assessment (EMA) remains an unanswered question. Hopefully, a tool has been provided as a platform for better understanding of habits and behaviour.
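Sequence classifiers like the CRF used above are normally fed windowed summary statistics of the raw sensor stream rather than raw samples. A minimal sketch of that preprocessing step, with invented window sizes and sample values (the thesis's actual pipeline is not shown here):

```python
import statistics

def window_features(signal, win, step):
    """Slide a fixed-size window over a 1-D sensor stream (e.g. an
    electrodermal-activity trace) and emit per-window summary features,
    a common preprocessing step before a sequence classifier."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        chunk = signal[start:start + win]
        feats.append({
            "mean": sum(chunk) / len(chunk),
            "std": statistics.pstdev(chunk),   # population std deviation
            "peak": max(chunk),
        })
    return feats

# Toy trace: flat baseline followed by an arousal spike
eda = [0.1, 0.1, 0.2, 0.8, 0.9, 0.2]
f = window_features(eda, win=3, step=3)
```

Each feature dictionary then becomes one observation in the label sequence (smoking / not smoking) that the CRF decodes.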
|
24 |
Analyse d'opinion dans les interactions orales / Opinion analysis in speech interactions
Barriere, Valentin, 15 April 2019 (has links)
Recognizing a speaker's opinions in a spoken interaction is a crucial step toward improving communication between a human and a virtual agent. In this thesis, we address opinion phenomena in natural, spontaneous spoken interactions from an automatic speech and language processing perspective. Opinion analysis is a task rarely addressed in speech processing, which until recently focused on emotions expressed through vocal and non-verbal content. Moreover, most existing recent systems do not use the interactional context to analyze the speaker's opinions. In this thesis, we tackle these topics within the framework of automatic detection using statistical learning models. After a study of modeling opinion dynamics with a latent-state model inside a monologue, we study how to integrate the dialogical interactional context, and finally how to combine audio with text using different types of fusion. We worked on a database of vlogs at the level of a global sentiment, and then on a database of multimodal dyadic interactions composed of open conversations, at the level of the speech turn and of pairs of speech turns. Finally, we annotated a database with opinions, because the existing databases were not satisfactory for the task at hand and did not allow a clear comparison with other state-of-the-art systems. At the dawn of the major shift brought about by the advent of neural methods, we study different types of representations: older hand-crafted representations, rigid but precise, and newer statistically learned representations, general and semantic. We also study different segmentations that account for the asynchronous character of multimodality. Lastly, we use a latent-state learning model that can adapt to a small dataset for the atypical task of opinion analysis, and we show that it both allows adapting descriptors from the written domain to the spoken domain and can serve as an attention layer through its clustering power. Since complex multimodal fusion is not handled well by the classifier used, and since audio has less impact on opinion than text, we study different feature selection methods to address these problems.
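One of the simplest fusion strategies alluded to above is late fusion: each modality produces its own per-class scores, and the scores are combined with a weight reflecting each modality's reliability. The sketch below is illustrative only; the weights, class names, and scores are invented, not the thesis's.

```python
def late_fusion(text_scores, audio_scores, w_text=0.7):
    """Weighted late fusion of per-class opinion scores from two
    modalities. Text gets the larger weight here since, as the abstract
    notes, audio impacts opinion less than text."""
    assert text_scores.keys() == audio_scores.keys()
    fused = {c: w_text * text_scores[c] + (1 - w_text) * audio_scores[c]
             for c in text_scores}
    return max(fused, key=fused.get)  # predicted opinion class

# Toy per-class posteriors from each modality
text = {"positive": 0.6, "negative": 0.4}
audio = {"positive": 0.3, "negative": 0.7}
pred = late_fusion(text, audio, w_text=0.7)
```

Early fusion, by contrast, concatenates the modality features before classification, which is where the asynchrony between segmentations of audio and text becomes a real design question.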
|
25 |
Discriminative Articulatory Feature-based Pronunciation Models with Application to Spoken Term Detection
Prabhavalkar, Rohit Prakash, 27 September 2013 (has links)
No description available.
|
26 |
An Efficient Ranking and Classification Method for Linear Functions, Kernel Functions, Decision Trees, and Ensemble Methods
Glass, Jesse Miller, January 2020 (has links)
Structural algorithms incorporate the interdependence of outputs into the prediction, the loss, or both. Frank-Wolfe optimization of pairwise losses and Gaussian conditional random fields (GCRF) for multivariate output regression are two such structural algorithms. Pairwise losses are standard 0-1 classification surrogate losses applied to pairs of features and outputs, resulting in improved ranking performance (area under the ROC curve, average precision, and F1 score) at the cost of increased learning complexity. In this dissertation, it is proven that the balanced-loss 0-1 SVM and the pairwise SVM have the same dual loss, and that the pairwise dual coefficient domain is a subdomain of the dual coefficient domain of the balanced-loss 0-1 SVM with bias. This provides a theoretical advancement in the understanding of pairwise loss, which we exploit to develop a novel ranking algorithm that is fast and memory-efficient, with state-of-the-art ranking-metric performance across eight benchmark data sets. Various practical advancements are also made in multivariate output regression. The learning time for Gaussian conditional random fields is greatly reduced, and the parameter domain is expanded to enable repulsion between outputs. Lastly, a novel multivariate regression is presented that keeps the desirable elements of GCRF and infuses them into a local regression model that improves mean squared error and reduces learning complexity. / Computer and Information Science
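A pairwise surrogate loss of the kind analyzed above can be illustrated with a naive O(n^2) computation: each positive example should outscore each negative by at least a margin. This is only a sketch of the general idea, not the dissertation's Frank-Wolfe-optimized formulation.

```python
def pairwise_hinge_loss(scores, labels, margin=1.0):
    """Average hinge loss over all (positive, negative) pairs.
    Driving this loss to zero means every positive outranks every
    negative, which is what improves AUC-style ranking metrics."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    losses = [max(0.0, margin - (p - n)) for p in pos for n in neg]
    return sum(losses) / len(losses)

scores = [2.0, 0.5, 1.0]  # toy model outputs
labels = [1, 0, 0]        # one positive, two negatives
loss = pairwise_hinge_loss(scores, labels)
```

The quadratic number of pairs is exactly the "increased learning complexity" the abstract mentions; the dissertation's dual-domain result is what makes a faster equivalent possible.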
|
27 |
3D real time object recognition
Amplianitis, Konstantinos, 01 March 2017 (links)
Object recognition is a natural process of the human brain performed in the visual cortex; it relies on a binocular depth perception system that renders a three-dimensional representation of the objects in a scene. Hitherto, computer and software systems have been used to simulate the perception of three-dimensional environments, with the aid of sensors that capture real-time images. In the process, such images are used as input data for further analysis and development of algorithms, an essential ingredient for simulating the complexity of human vision, so as to achieve scene interpretation for object recognition, similar to the way the human brain perceives it. The rapid pace of technological advancement in hardware and software is continuously bringing the machine-based process of object recognition nearer to the human vision prototype. The key in this field is the development of algorithms that achieve robust scene interpretation. A lot of significant effort has been successfully carried out over the years in 2D object recognition, as opposed to 3D. It is therefore within the context and scope of this dissertation to contribute towards the enhancement of 3D object recognition: a better interpretation and understanding of reality and the relationships between objects in a scene. Through the use and application of low-cost commodity sensors, such as the Microsoft Kinect, RGB and depth data of a scene have been retrieved and manipulated in order to generate human-like visual perception data. The goal herein is to show how RGB and depth information can be utilised to develop a new class of 3D object recognition algorithms, analogous to the perception processed by the human brain.
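Turning Kinect-style depth frames into 3D data usually starts with back-projecting each pixel through the pinhole camera model. A minimal sketch; the intrinsic parameters below are illustrative values in the range of a Kinect-class sensor, not calibration results from the dissertation.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth into a camera-space
    3-D point using the pinhole model: the standard first step when
    converting an RGB-D frame into a point cloud."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical intrinsics (focal lengths and principal point, in pixels)
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5
point = backproject(319.5, 239.5, 1.2, fx, fy, cx, cy)  # principal point
```

A pixel at the principal point maps onto the optical axis, so its x and y are zero and only the depth survives; off-axis pixels spread out proportionally to their depth.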
|
28 |
Prediction of Protein-Protein Interaction Sites with Conditional Random Fields / Vorhersage der Protein-Protein Wechselwirkungsstellen mit Conditional Random Fields
Dong, Zhijie, 27 April 2012 (has links)
No description available.
|
29 |
[en] EXTRACTING AND CONNECTING PLAINTIFF'S LEGAL CLAIMS AND JUDICIAL PROVISIONS FROM BRAZILIAN COURT DECISIONS / [pt] EXTRAÇÃO E CONEXÃO ENTRE PEDIDOS E DECISÕES JUDICIAIS DE UM TRIBUNAL BRASILEIRO
WILLIAM PAULO DUCCA FERNANDES, 03 November 2020 (has links)
[en] In this work, we propose a methodology to annotate Court decisions, create Deep Learning models to extract information, and visualize the aggregated information extracted from the decisions. We instantiate our methodology in two systems we have developed. The first one extracts Appellate Court modifications, a set of legal categories that are commonly modified by Appellate Courts. The second one (i) extracts plaintiff's legal claims and each specific provision on legal opinions enacted by lower and Appellate Courts, and (ii) connects each legal claim with the corresponding judicial provision. The system presents the results through visualizations. Information Extraction for legal texts has been previously addressed using different techniques and languages. Our proposals differ from previous work, since our corpora are composed of Brazilian lower and Appellate Court decisions. To automatically extract that information, we use a traditional Machine Learning approach and a Deep Learning approach, both as alternative solutions and also as a combined solution. In order to train and evaluate the systems, we have built the Kauane Junior corpus for the first system, and three corpora for the second system: Kauane Insurance Report, Kauane Insurance Lower, and Kauane Insurance Upper. We used public data disclosed by the State Court of Rio de Janeiro to build the corpora. For Kauane Junior, the best model, a Bidirectional Long Short-Term Memory network combined with Conditional Random Fields (BILSTM-CRF), obtained an Fβ=1 score of 94.79%. For Kauane Insurance Report, the best model, a Bidirectional Long Short-Term Memory network with character embeddings concatenated to word embeddings combined with Conditional Random Fields (BILSTM-CE-CRF), obtained an Fβ=1 score of 67.15%. For Kauane Insurance Lower, the best model, a BILSTM-CE-CRF, obtained an Fβ=1 score of 89.12%. For Kauane Insurance Upper, the best model, a BILSTM-CRF, obtained an Fβ=1 score of 83.66%.
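The Fβ=1 scores reported above are the harmonic mean of precision and recall; the general Fβ formula can be sketched as:

```python
def f_beta(precision, recall, beta=1.0):
    """F-beta score; beta=1 weights precision and recall equally,
    beta>1 favours recall, beta<1 favours precision."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

f1 = f_beta(0.9, 0.6)  # toy precision/recall values
```

Because the harmonic mean is dominated by the smaller of the two inputs, a high precision cannot compensate for a low recall in the reported Fβ=1 figures.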
|
30 |
Segmentation d'images de documents manuscrits composites : application aux documents de chimie / Heterogeneous handwritten document image segmentation: application to chemistry documents
Ghanmi, Nabil, 30 September 2016 (has links)
This thesis deals with the segmentation and structure analysis of chemistry documents. This work aims to help chemists by providing information on experiments that have already been carried out. The documents are handwritten, heterogeneous, and multi-writer. Their physical structure is relatively simple, consisting of a succession of three regions: the chemical formula of the experiment, a table of the products used, and one or more text blocks describing the experimental procedure. Nevertheless, several difficulties are encountered: the lines located at region boundaries and the imperfections of the table layout make the separation task a real challenge. The proposed methodology takes these difficulties into account by performing segmentation at several levels and treating region separation as a classification problem. First, the document image is segmented into linear structures using an appropriate horizontal smoothing. The horizontal threshold, combined with a vertical overlap tolerance, favors the consolidation of fragmented elements of the formula without overly merging the text. These linear structures are then classified as text or graphics based on discriminative structural features. Next, segmentation continues on the text lines to separate the rows of the table from the lines of the raw text blocks. For this classification, we propose a CRF model that determines the optimal labelling of the line sequence; this kind of model was chosen for its ability to absorb the variability of lines and to exploit contextual information. For the segmentation of the table into cells, we propose a hybrid method that combines two levels of analysis, structural and syntactic: the first relies on the presence of graphic lines and on the alignment of both text and spaces, while the second exploits the coherence of the highly regular syntax of the cell contents. In this context, we propose a recognition-based approach using contextual knowledge to detect the numeric fields present in the table, with recognition of both isolated and connected digits. The thesis was carried out in the framework of a CIFRE agreement, in collaboration with the eNovalys company. We have implemented and tested all the steps of the proposed system on a substantial dataset of chemistry documents.
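The horizontal smoothing step described above is commonly realized as run-length smoothing: short background gaps between ink pixels are filled so that characters merge into line-level blobs. A minimal one-row sketch (the thesis's actual thresholds and vertical tolerance are not reproduced here):

```python
def rlsa_row(row, threshold):
    """Horizontal run-length smoothing on one binary image row (1 = ink,
    0 = background): background runs no longer than `threshold` between
    two ink pixels are filled, merging nearby strokes into one blob."""
    out = row[:]
    last_fg = None            # index of the last ink pixel seen
    for i, px in enumerate(row):
        if px == 1:
            if last_fg is not None and (i - last_fg - 1) <= threshold:
                for j in range(last_fg + 1, i):  # fill the short gap
                    out[j] = 1
            last_fg = i
    return out

# Two close strokes merge; the wider gap is preserved
row = [1, 0, 0, 1, 0, 0, 0, 0, 1]
smoothed = rlsa_row(row, threshold=2)
```

Choosing the threshold is exactly the trade-off the abstract describes: large enough to reconnect fragmented formula elements, small enough not to merge separate text regions.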
|