221

Modélisation conjointe des thématiques et des opinions : application à l'analyse des données textuelles issues du Web / Joint topic-sentiment modeling : an application to Web data analysis

Dermouche, Mohamed 08 June 2015 (has links)
Cette thèse se situe à la confluence des domaines de "la modélisation de thématiques" (topic modeling) et l'"analyse d'opinions" (opinion mining). Le problème que nous traitons est la modélisation conjointe et dynamique des thématiques (sujets) et des opinions (prises de position) sur le Web et les médias sociaux. En effet, dans la littérature, ce problème est souvent décomposé en sous-tâches qui sont menées séparément. Ceci ne permet pas de prendre en compte les associations et les interactions entre les opinions et les thématiques sur lesquelles portent ces opinions (cibles). Dans cette thèse, nous nous intéressons à la modélisation conjointe et dynamique qui permet d'intégrer trois dimensions du texte (thématiques, opinions et temps). Afin d'y parvenir, nous adoptons une approche statistique, plus précisément, une approche basée sur les modèles de thématiques probabilistes (topic models). Nos principales contributions peuvent être résumées en deux points : 1. Le modèle TS (Topic-Sentiment model) : un nouveau modèle probabiliste qui permet une modélisation conjointe des thématiques et des opinions. Ce modèle permet de caractériser les distributions d'opinion relativement aux thématiques. L'objectif est d'estimer, à partir d'une collection de documents, dans quelles proportions d'opinion les thématiques sont traitées. 2. Le modèle TTS (Time-aware Topic-Sentiment model) : un nouveau modèle probabiliste pour caractériser l'évolution temporelle des thématiques et des opinions. En s'appuyant sur l'information temporelle (date de création de documents), le modèle TTS permet de caractériser l'évolution des thématiques et des opinions quantitativement, c'est-à-dire en terme de la variation du volume de données à travers le temps. Par ailleurs, nous apportons deux autres contributions : une nouvelle mesure pour évaluer et comparer les méthodes d'extraction de thématiques, ainsi qu'une nouvelle méthode hybride pour le classement d'opinions basée sur une combinaison de l'apprentissage automatique supervisé et la connaissance a priori. Toutes les méthodes proposées sont testées sur des données réelles en utilisant des évaluations adaptées. / This work is located at the junction of two domains: topic modeling and sentiment analysis. The problem that we propose to tackle is the joint and dynamic modeling of topics (subjects) and sentiments (opinions) on the Web. In the literature, the task is usually divided into sub-tasks that are treated separately. The models that operate this way fail to capture the topic-sentiment interaction and association. In this work, we propose a joint modeling of topics and sentiments, taking into account the associations between them. We are also interested in the dynamics of topic-sentiment associations. To this end, we adopt a statistical approach based on probabilistic topic models. Our main contributions can be summarized in two points: 1. TS (Topic-Sentiment model): a new probabilistic topic model for the joint extraction of topics and sentiments. This model characterizes the extracted topics with distributions over sentiment polarities. The goal is to discover the sentiment proportions specific to each of the extracted topics. 2. TTS (Time-aware Topic-Sentiment model): a new probabilistic model to characterize the topic-sentiment dynamics. Relying on each document's time information, TTS characterizes the quantitative evolution of each extracted topic-sentiment pair.
We also present two other contributions : a new evaluation framework for measuring the performance of topic-extraction methods, and a new hybrid method for sentiment detection and classification from text. This method is based on combining supervised machine learning and prior knowledge. All of the proposed methods are tested on real-world data based on adapted evaluation frameworks.
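
The TS and TTS models themselves are dedicated probabilistic graphical models defined in the thesis. As a rough illustration of the underlying idea only (characterizing each extracted topic by a distribution over sentiment polarities), the sketch below pairs a standard LDA topic model with a toy sentiment lexicon; the corpus, the lexicon and the post-hoc counting step are illustrative assumptions, not the thesis's method.

```python
# Rough sketch: characterize LDA topics by sentiment proportions.
# This is NOT the TS model from the thesis (which couples topics and
# sentiments in a single generative model); it only illustrates the
# idea of attaching a sentiment distribution to each extracted topic.
from collections import Counter, defaultdict
from gensim import corpora, models

docs = [
    "great camera excellent battery".split(),
    "terrible battery poor screen".split(),
    "excellent screen great resolution".split(),
]
lexicon = {"great": "pos", "excellent": "pos", "terrible": "neg", "poor": "neg"}  # toy lexicon

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)

# Accumulate sentiment counts under the dominant topic of each document.
topic_sentiment = defaultdict(Counter)
for doc, bow in zip(docs, corpus):
    topic = max(lda.get_document_topics(bow), key=lambda t: t[1])[0]
    for word in doc:
        if word in lexicon:
            topic_sentiment[topic][lexicon[word]] += 1

for topic, counts in topic_sentiment.items():
    total = sum(counts.values())
    print(topic, {pol: round(n / total, 2) for pol, n in counts.items()})
```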
222

Análise de sentimento e desambiguação no contexto da tv social / Sentiment analysis and disambiguation in the context of Social TV

Lima, Ana Carolina Espírito Santo 14 December 2012 (has links)
Fundação de Amparo a Pesquisa do Estado de São Paulo / Social media have become a way of expressing collective interests. People are motivated by the sharing of information and the feedback from friends and colleagues. Among the many social media tools available, the Twitter microblog is gaining popularity as a platform for instantaneous communication. Millions of messages are generated daily, from over 100 million users, about the most varied subjects. As it is a rapid communication platform, this microblog spurred a phenomenon called television storytellers, where users comment on what they watch on TV while the programs are being broadcast. Social TV emerged from this integration of social media and television. The amount of data generated about TV shows is rich material for data analysis. Broadcasters may use such information to improve their programs and increase interaction with their audience. Among the main challenges in social media data analysis are sentiment analysis (determining the polarity of a text, for instance positive or negative) and sense disambiguation (determining the right context of polysemous words). This dissertation aims to use machine learning techniques to create a tool to support Social TV, contributing specifically to the automation of sentiment analysis and disambiguation of Twitter messages. / As mídias sociais são uma forma de expressão dos interesses coletivos, as pessoas gostam de compartilhar informações e sentem-se valorizadas por causa disso. Entre as mídias sociais o microblog Twitter vem ganhando popularidade como uma plataforma para comunicação instantânea. São milhões de mensagens geradas todos os dias, por cerca de 100 milhões de usuários, carregadas dos mais diversos assuntos. Por ser uma plataforma de comunicação rápida esse microblog estimulou um fenômeno denominado narradores televisivos, em que os internautas comentam sobre o que assistem na TV no momento em que é transmitido. Dessa integração entre as mídias sociais e a televisão emergiu a TV Social. A quantidade de dados gerados sobre os programas de TV formam um rico material para análise de dados. Emissoras podem usar tais informações para aperfeiçoar seus programas e aumentar a interação com seu público. Dentre os principais desafios da análise de dados de mídias sociais encontram-se a análise de sentimento (determinação de polaridade em um texto, por exemplo, positivo ou negativo) e a desambiguação de sentido (determinação do contexto correto de palavras polissêmicas). Essa dissertação tem como objetivo usar técnicas de aprendizagem de máquina para a criação de uma ferramenta de apoio à TV Social com contribuições na automatização dos processos de análise de sentimento e desambiguação de sentido de mensagens postadas no Twitter.
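
The dissertation's exact pipeline is not given in the abstract; the following is a minimal, generic sketch of the kind of supervised tweet sentiment classifier it refers to, using scikit-learn with placeholder tweets and labels (the sense-disambiguation component is not shown).

```python
# Minimal sketch of a supervised tweet sentiment classifier
# (generic scikit-learn pipeline; the dissertation's actual features,
# classifier and Portuguese-language preprocessing are not specified here).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = [                       # placeholder training data
    "loved tonight's episode, amazing finale",
    "this show is boring, switching channels",
    "great acting and a great plot twist",
    "worst episode of the season",
]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(tweets, labels)

print(model.predict(["what an amazing episode"]))   # -> ['positive']
```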
223

Um modelo para predição de bolsa de valores baseado em mineração de opinião / A stock market prediction model based on opinion mining

Lima, Milson Louseiro 06 May 2016 (has links)
FUNDAÇÃO DE AMPARO À PESQUISA E AO DESENVOLVIMENTO CIENTIFICO E TECNOLÓGICO DO MARANHÃO / Predicting the behavior of stocks in the stock market is a challenging task, often related to unknown factors or influenced by variables of very different natures, ranging from high-profile news to the collective sentiment expressed in social network posts. Such market volatility may represent considerable financial losses for investors. In order to anticipate such variations, mechanisms to predict the behavior of assets in the stock market have been proposed based on pre-existing indicator data. Such mechanisms analyze only statistical data, not considering the collective human sentiment. This work aims to develop a stock market prediction model based on opinion mining, using artificial intelligence techniques such as natural language processing (NLP) and Support Vector Machines (SVM) to predict an asset's behavior. It should be emphasized, however, that this model is intended as an aid in the decision-making process involved in buying and selling shares on the stock market. / Predizer o comportamento das ações na bolsa de valores é uma tarefa desafiadora, muitas vezes relacionada a fatores desconhecidos ou influenciados por variáveis de naturezas bem distintas, que podem ir desde notícias de grande repercussão até o sentimento coletivo, expresso em publicações de redes sociais. Tal volatilidade do mercado pode representar perdas financeiras consideráveis para os investidores. No intuito de se antecipar a tais variações já foram propostos outros mecanismos para predizer o comportamento de ativos na bolsa de valores, baseados em dados de indicadores pré-existentes. Tais mecanismos analisam apenas dados estatísticos, não considerando o sentimento humano coletivo. Este trabalho tem como finalidade desenvolver um modelo para predição da bolsa de valores, baseado na mineração de opinião e, para isso, fará uso de técnicas de Inteligência artificial como processamento de linguagem natural (PLN) e Máquinas de Vetor de Suporte (SVM) para predizer o comportamento do ativo. No entanto, convém ressaltar que o referido modelo tem como finalidade ser uma ferramenta de auxílio no processo de tomada de decisão que envolve a compra e venda de ações na bolsa de valores.
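
As a hedged sketch of the general idea (not the thesis's actual model), an SVM can be fed a daily aggregate sentiment score alongside a pre-existing indicator to predict next-day direction; the features, data and labels below are synthetic.

```python
# Simplified sketch: SVM fed with a daily sentiment score plus a
# technical indicator to predict next-day direction (up/down).
# Features, data and labels are synthetic illustrations only.
import numpy as np
from sklearn.svm import SVC

# Each row: [mean sentiment of the day's posts, previous-day return]
X = np.array([
    [0.8,  0.01],
    [0.2, -0.02],
    [0.6,  0.00],
    [0.1, -0.01],
    [0.7,  0.02],
    [0.3, -0.03],
])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = price went up, 0 = went down

clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y)
print(clf.predict([[0.75, 0.01]]))  # -> [1]
```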
224

Mineração de opiniões baseada em aspectos para revisões de produtos e serviços / Aspect-based Opinion Mining for Reviews of Products and Services

Ivone Penque Matsuno Yugoshi 27 April 2018 (has links)
A Mineração de Opiniões é um processo que tem por objetivo extrair as opiniões e suas polaridades de sentimentos expressas em textos em língua natural. Essa área de pesquisa tem ganhado destaque devido ao volume de opiniões que os usuários compartilham na Internet, como revisões em sites de e-commerce, redes sociais e tweets. A Mineração de Opiniões baseada em Aspectos é uma alternativa promissora para analisar a polaridade do sentimento em um maior nível de detalhes. Os métodos tradicionais para extração de aspectos e classificação de sentimentos exigem a participação de especialistas de domínio para criar léxicos ou definir regras de extração para diferentes idiomas e domínios. Além disso, tais métodos usualmente exploram algoritmos de aprendizado supervisionado, porém exigem um grande conjunto de dados rotulados para induzir um modelo de classificação. Os desafios desta tese de doutorado estão relacionados a como diminuir a necessidade de grande esforço humano tanto para rotular dados, quanto para tratar a dependência de domínio para as tarefas de extração de aspectos e classificação de sentimentos dos aspectos para Mineração de Opiniões. Para reduzir a necessidade de grande quantidade de exemplos rotulados foi proposta uma abordagem semissupervisionada, denominada por Aspect-based Sentiment Propagation on Heterogeneous Networks (ASPHN) em que são propostas representações de textos nas quais os atributos linguísticos, os aspectos candidatos e os rótulos de sentimentos são modelados por meio de redes heterogêneas. Para redução dos esforços para construir recursos específicos de domínio foi proposta uma abordagem baseada em aprendizado por transferência entre domínios denominada Cross-Domain Aspect Label Propagation through Heterogeneous Networks (CD-ALPHN) que utiliza dados rotulados de outros domínios para suportar tarefas de aprendizado em domínios sem dados rotulados. Nessa abordagem são propostos uma representação em uma rede heterogênea e um método de propagação de rótulos. Os vértices da rede são os aspectos rotulados do domínio de origem, os atributos linguísticos e os candidatos a aspectos do domínio alvo. Além disso, foram analisados métodos de extração de aspectos e propostas algumas variações para considerar cenários não supervisionados e independentes de domínio. As soluções propostas nesta tese de doutorado foram avaliadas e comparadas às do estado-da-arte utilizando coleções de revisões de diferentes produtos e serviços. Os resultados obtidos nas avaliações experimentais são competitivos e demonstram que as soluções propostas são promissoras. / Opinion Mining is a process that aims to extract opinions and their sentiment polarities expressed in natural language texts. This area of research has gained prominence because of the volume of opinions that users share on the Internet (reviews on e-commerce sites, social networks, tweets, and others). Aspect-based Opinion Mining is a promising alternative for analyzing sentiment polarity at a greater level of detail. Traditional methods for aspect extraction and sentiment classification require the participation of domain experts to create lexicons or define extraction rules for different languages and domains. In addition, such methods usually exploit supervised machine learning algorithms, but require a large set of labeled data to induce a classification model.
The challenges of this doctoral thesis are related to how to reduce the need for great human effort both (i) to label data and (ii) to treat domain dependency for the tasks of aspect extraction and aspect sentiment classification for Opinion Mining. In order to reduce the need for a large number of labeled examples, a semi-supervised approach was proposed, called Aspect-based Sentiment Propagation on Heterogeneous Networks (ASPHN). In this approach, text representations are proposed in which linguistic attributes, candidate aspects and sentiment labels are modeled by heterogeneous networks. Also, a cross-domain learning approach called Cross-Domain Aspect Label Propagation through Heterogeneous Networks (CD-ALPHN) is proposed in order to reduce the effort of building domain-specific resources. This approach uses labeled data from other domains to support learning tasks in domains without labeled data. A representation in a heterogeneous network and a label propagation method are proposed in this cross-domain learning approach. The vertices of the network are the labeled aspects of the source domain, the linguistic attributes, and the candidate aspects of the target domain. In addition, aspect extraction methods were analyzed and some variations were proposed to consider unsupervised and domain-independent scenarios. The solutions proposed in this doctoral thesis were evaluated and compared to state-of-the-art solutions using collections of different product and service reviews. The results obtained in the experimental evaluations are competitive and demonstrate that the proposed solutions are promising.
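
ASPHN and CD-ALPHN are specific heterogeneous-network formulations from the thesis. As a rough, homogeneous analogue of the label-propagation step only, scikit-learn's LabelSpreading can spread a few known sentiment labels to unlabeled candidate aspects represented as feature vectors; the toy vectors and the plain k-NN graph below are assumptions and far simpler than the thesis's networks.

```python
# Rough analogue of the label-propagation step: spread a few known
# sentiment labels (-1 = unlabeled) to unlabeled aspect vectors.
# This uses a plain k-NN graph over feature vectors, not the
# heterogeneous networks (aspects + linguistic attributes) of ASPHN.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

X = np.array([                   # toy aspect feature vectors
    [1.0, 0.1], [0.9, 0.2],      # close together -> positive cluster
    [0.1, 1.0], [0.2, 0.9],      # close together -> negative cluster
])
y = np.array([1, -1, 0, -1])     # 1 = positive, 0 = negative, -1 = unlabeled

model = LabelSpreading(kernel="knn", n_neighbors=2)
model.fit(X, y)
print(model.transduction_)       # labels inferred for all four aspects
```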
225

A Cloud Based Platform for Big Data Science

Islam, Md. Zahidul January 2014 (has links)
With the advent of cloud computing, resizable, scalable infrastructures for data processing are now available to everyone. Software platforms and frameworks that support data-intensive distributed applications, such as Amazon Web Services and Apache Hadoop, give users the necessary tools and infrastructure to work with thousands of scalable computers and process terabytes of data. However, writing scalable applications that run on top of these distributed frameworks is still a demanding and challenging task. The thesis aimed to advance the core scientific and technological means of managing, analyzing, visualizing, and extracting useful information from large data sets, collectively known as "big data". The term "big data" in this thesis refers to large, diverse, complex, longitudinal and/or distributed data sets generated from instruments, sensors, internet transactions, email, social networks, Twitter streams, and/or all digital sources available today and in the future. We introduced architectures and concepts for implementing a cloud-based infrastructure for analyzing large volumes of semi-structured and unstructured data. We built and evaluated an application prototype for collecting, organizing, processing, visualizing and analyzing data from the retail industry gathered from indoor navigation systems and social networks (Twitter, Facebook, etc.). Our finding was that developing a large-scale data analysis platform is often quite complex when the processed data is expected to grow continuously in the future. The architecture varies depending on requirements. If we want to build a data warehouse and analyze the data afterwards (batch processing), the best choices are Hadoop clusters with Pig or Hive. This architecture has been proven at Facebook and Yahoo for years. On the other hand, if the application involves real-time data analytics, the recommendation is Hadoop clusters with Storm, which has been used successfully at Twitter. After evaluating the developed prototype, we introduced a new architecture able to handle large-scale batch and real-time data. We also proposed an upgrade of the existing prototype to handle real-time indoor navigation data.
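
As a hedged illustration of the batch-processing path only (the thesis's actual jobs, schemas and Pig/Hive scripts are not shown in the abstract), a Hadoop Streaming style mapper/reducer pair in Python that counts hashtag mentions; the input layout is an assumption.

```python
# Hadoop Streaming style mapper/reducer counting hashtags in tweets,
# illustrating the batch-processing path. In practice the mapper and
# reducer would be two separate scripts passed to hadoop-streaming.
import sys

def mapper(stream=sys.stdin):
    # reads raw tweet text on stdin, emits "hashtag\t1" per occurrence
    for line in stream:
        for token in line.split():
            if token.startswith("#"):
                print(f"{token.lower()}\t1")

def reducer(stream=sys.stdin):
    # reads sorted "hashtag\t1" lines, sums counts per key
    current, count = None, 0
    for line in stream:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = key, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")
```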
226

財報文字分析之句子風險程度偵測研究 / Risk-related Sentence Detection in Financial Reports

柳育彣, Liu, Yu-Wen Unknown Date (has links)
本論文的目標是利用文本情緒分析技巧，針對美國上市公司的財務報表進行以句子為單位的風險評估。過去的財報文本分析研究裡，大多關注於詞彙層面的風險偵測。然而財務文本中大多數的財務詞彙與前後文具有高度的語意相關性，僅靠閱讀單一詞彙可能無法完全理解其隱含的財務訊息。本文將研究層次由詞彙拉升至句子，根據基於嵌入概念的 fastText 與 Siamese CBOW 兩種句子向量表示法學習模型，利用基於嵌入概念模型中，使用目標詞與前後詞彙關聯性表示目標詞語意的特性，萃取出財報句子裡更深層的財務意涵，並學習出更適合用於財務文本分析的句向量表示法。實驗驗證部分，我們利用 10-K 財報資料與本文提出的財務標記資料集進行財務風險分類器學習，並以傳統詞袋模型(Bag-of-Word)作為基準，利用精確度(Accuracy)與準確度(Precision)等評估標準進行比較。結果證實基於嵌入概念模型的表示法在財務風險評估上比傳統詞袋模型有著更準確的預測表現。由於近年大數據時代的來臨，網路中的資訊量大幅成長，依賴少量人力在短期間內分析海量的財務資訊變得更加困難。因此如何協助專業人員進行有效率的財務判斷與決策，已成為一項重要的議題。為此，本文同時提出一個以句子為分析單位的財報風險語句偵測系統 RiskFinder，依照 fastText 與 Siamese CBOW 兩種模型，經由 10-K 財務報表與人工標記資料集學習出適當的風險語句分類器後，對 1996 至 2013 年的美國上市公司財務報表進行財報句子的自動風險預測，讓財務專業人士能透過系統的協助，有效率地由大量財務文本中獲得有意義的財務資訊。此外，系統會依照公司的財報發布日期動態呈現股票交易資訊與後設資料，以利使用者依股價的時間走勢比較財務文字型與數值型資料的關係。 / The main purpose of this thesis is to evaluate the risk of listed companies' financial reports at the sentence level, using textual sentiment analysis techniques. Most past sentiment analysis studies focused on word-level risk detection. However, most financial keywords are highly context-sensitive, which may yield biased results. Therefore, to advance the understanding of financial textual information, this thesis broadens the analysis from the word level to the sentence level. We use two sentence-level models, fastText and Siamese CBOW, to learn sentence embeddings and attempt to facilitate financial risk detection. In our experiments, we use the 10-K corpus and a financial sentiment dataset labeled by financial professionals to train our financial risk classifier. Moreover, we adopt the Bag-of-Words model as a baseline and use accuracy, precision, recall and F1-score to evaluate the performance of financial risk prediction. The experimental results show that the embedding models can lead to better performance than the Bag-of-Words model. In addition, this thesis proposes a web-based financial risk detection system called RiskFinder, built on the fastText and Siamese CBOW models. There are 40,708 financial reports in the system, and each risk-related sentence is highlighted according to the different sentence-embedding models. The system also provides metadata and a visualization of financial time-series data for the corresponding company according to the release date of each financial report. This system considerably facilitates case studies in the field of finance and can be of great help in capturing valuable insights within large amounts of textual information.
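
RiskFinder's sentence representations come from fastText and Siamese CBOW models trained on 10-K filings; the sketch below is a much-simplified stand-in that averages fastText word vectors into a sentence vector and feeds it to a linear classifier. The toy sentences and labels are assumptions, and mean pooling is a simplification of the thesis's sentence-embedding models.

```python
# Simplified sketch: average fastText word vectors as a sentence
# representation, then train a linear risk-sentence classifier.
# Corpus and labels here are toy examples, not 10-K data.
import numpy as np
from gensim.models import FastText
from sklearn.linear_model import LogisticRegression

sentences = [
    "the company may face significant litigation risk".split(),
    "quarterly revenue increased across all segments".split(),
    "we may be unable to refinance our outstanding debt".split(),
    "the board declared a regular cash dividend".split(),
]
labels = [1, 0, 1, 0]          # 1 = risk-related sentence, 0 = not

ft = FastText(sentences, vector_size=50, window=3, min_count=1, epochs=50)

def sent_vec(tokens):
    # mean of word vectors as a crude sentence embedding
    return np.mean([ft.wv[t] for t in tokens], axis=0)

X = np.vstack([sent_vec(s) for s in sentences])
clf = LogisticRegression().fit(X, labels)
print(clf.predict([sent_vec("substantial risk of default".split())]))
```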
227

Automatizovaná analýza sentimentu / Automated Sentiment Analysis

Zeman, Matěj January 2014 (has links)
The goal of my master's thesis is to describe automated sentiment analysis, its methods and cross-domain problems, and to test an already existing model. I have applied this model to data from the Czech-Slovak film database website CSFD.cz, the Czech e-shop MALL.cz, and Databazeknih.cz, one of the biggest Czech websites about books, in order to contribute to the solution of the cross-domain issue using n-grams and the analytics software RapidMiner.
228

Big data - použití v bankovní sféře / Big data - application in banking

Uřídil, Martin January 2012 (has links)
There is a growing volume of global data, which offers new possibilities to those market participants who know how to take advantage of it. Data, information and knowledge are a new, highly regarded commodity, especially in the banking industry. Traditional data analytics is intended for processing data with known structure and meaning. But how can we get knowledge from data with no such structure? The thesis focuses on Big Data analytics and its use in the banking and financial industry. Defining specific applications in this area and describing the benefits for international and Czech banking institutions are the main goals of the thesis. The thesis is divided into four parts. The first part defines the Big Data trend, and the second part specifies activities and tools in banking. The purpose of the third part is to apply Big Data analytics to those activities and show its possible benefits. The last part focuses on the particularities of Czech banking and shows what the actual situation regarding Big Data in Czech banks is. The thesis gives a comprehensive description of the possibilities of using Big Data analytics. I see my personal contribution in the detailed characterization of its application to real banking activities.
229

Filtragem baseada em conteúdo auxiliada por métodos de indexação colaborativa / Content-based filtering aided by collaborative indexing methods

Rafael Martins D\'Addio 10 June 2015 (has links)
Sistemas de recomendação surgiram da necessidade de selecionar e apresentar conteúdo relevante a usuários de acordo com suas preferências. Dentre os diversos métodos existentes, aqueles baseados em conteúdo fazem uso exclusivo da informação inerente aos itens. Estas informações podem ser criadas a partir de técnicas de indexação automática e manual. Enquanto que as abordagens automáticas necessitam de maiores recursos computacionais e são limitadas à tarefa específica que desempenham, os métodos manuais são caros e propensos a erros. Por outro lado, com a expansão da Web e a possibilidade de usuários comuns criarem novos conteúdos e anotações sobre diferentes itens e produtos, uma alternativa é obter esses metadados criados colaborativamente pelos próprios usuários. Entretanto, essas informações, em especial revisões e comentários, podem conter ruídos, além de estarem em uma forma desestruturada. Deste modo, este trabalho tem como objetivo desenvolver métodos de construção de representações de itens baseados em descrições colaborativas para um sistema de recomendação. Objetiva-se analisar o impacto que diferentes técnicas de extração de características, aliadas à análise de sentimento, causam na precisão da geração de sugestões, avaliando-se os resultados em dois cenários de recomendação: predição de notas e geração de ranques. Dentre as técnicas analisadas, observa-se que a melhor apresenta um ganho no poder descritivo dos itens, ocasionando uma melhora no sistema de recomendação. / Recommender systems arose from the need to select and present relevant content to users according to their preferences. Among the several existing methods, those based on content make exclusive use of the information inherent to the items. This information can be created through automatic and manual indexing techniques. While automatic approaches require greater computing resources and are limited to the specific task they perform, manual methods are expensive and prone to errors. On the other hand, with the expansion of the Web and the possibility for common users to create new content and descriptions about different items and products, an alternative is to obtain this metadata created collaboratively by the users themselves. However, this information, especially reviews and comments, may contain noise, besides being in an unstructured form. Thus, this study aims to develop methods for the construction of item representations based on collaborative descriptions for a recommender system. It analyzes the impact that different feature extraction techniques, combined with sentiment analysis, have on the accuracy of the generated suggestions, evaluating the results in two recommendation scenarios: rating prediction and ranking generation. Among the analyzed techniques, the best one is observed to describe items in a more efficient manner, resulting in an improvement in the recommender system.
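
As a rough sketch of the content-based side only, items can be represented by TF-IDF vectors of their user reviews and ranked by cosine similarity to a profile built from the items a user liked; the toy reviews are placeholders, and the sentiment weighting that the thesis combines with these representations is omitted.

```python
# Rough content-based sketch: represent each item by the TF-IDF of its
# user reviews and rank candidates by similarity to the user profile
# (mean vector of liked items). Sentiment weighting is omitted here.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

item_reviews = {
    "camera_a": "sharp photos great lens fast autofocus",
    "camera_b": "blurry photos weak battery slow autofocus",
    "camera_c": "great lens excellent photos solid build",
}
liked = ["camera_a"]                      # items the user rated highly

items = list(item_reviews)
X = TfidfVectorizer().fit_transform(item_reviews.values())
profile = np.asarray(X[[items.index(i) for i in liked]].mean(axis=0))

scores = cosine_similarity(profile, X).ravel()
ranking = sorted(
    (i for i in items if i not in liked),
    key=lambda i: scores[items.index(i)],
    reverse=True,
)
print(ranking)   # camera_c should rank above camera_b
```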
230

Analyse d'opinion dans les interactions orales / Opinion analysis in speech interactions

Barriere, Valentin 15 April 2019 (has links)
La reconnaissance des opinions d'un locuteur dans une interaction orale est une étape cruciale pour améliorer la communication entre un humain et un agent virtuel. Dans cette thèse, nous nous situons dans une problématique de traitement automatique de la parole (TAP) sur les phénomènes d'opinions dans des interactions orales spontanées naturelles. L'analyse d'opinion est une tâche peu souvent abordée en TAP qui se concentrait jusqu'à peu sur les émotions à l'aide du contenu vocal et non verbal. De plus, la plupart des systèmes récents existants n'utilisent pas le contexte interactionnel afin d'analyser les opinions du locuteur. Dans cette thèse, nous nous penchons sur ces sujets. Nous nous situons dans le cadre de la détection automatique en utilisant des modèles d'apprentissage statistiques. Après une étude sur la modélisation de la dynamique de l'opinion par un modèle à états latents à l'intérieur d'un monologue, nous étudions la manière d'intégrer le contexte interactionnel dialogique, et enfin d'intégrer l'audio au texte avec différents types de fusion. Nous avons travaillé sur une base de données de Vlogs au niveau d'un sentiment global, puis sur une base de données d'interactions dyadiques multimodales composée de conversations ouvertes, au niveau du tour de parole et de la paire de tours de parole. Pour finir, nous avons fait annoter une base de données en opinion car les bases de données existantes n'étaient pas satisfaisantes vis-à-vis de la tâche abordée, et ne permettaient pas une comparaison claire avec d'autres systèmes à l'état de l'art. A l'aube du changement important porté par l'avènement des méthodes neuronales, nous étudions différents types de représentations : les anciennes représentations construites à la main, rigides mais précises, et les nouvelles représentations apprises de manière statistique, générales et sémantiques. Nous étudions différentes segmentations permettant de prendre en compte le caractère asynchrone de la multi-modalité. Dernièrement, nous utilisons un modèle d'apprentissage à états latents qui peut s'adapter à une base de données de taille restreinte, pour la tâche atypique qu'est l'analyse d'opinion, et nous montrons qu'il permet à la fois une adaptation des descripteurs du domaine écrit au domaine oral, et servir de couche d'attention via son pouvoir de clusterisation. La fusion multimodale complexe n'étant pas bien gérée par le classifieur utilisé, et l'audio étant moins impactant sur l'opinion que le texte, nous étudions différentes méthodes de sélection de paramètres pour résoudre ces problèmes. / Recognizing a speaker's opinions in an oral interaction is a crucial step in improving communication between a human and a virtual agent. In this thesis, we address a problem of automatic speech processing concerning opinion phenomena in natural, spontaneous oral interactions. Opinion analysis is a task seldom addressed in speech processing, which until recently focused on emotions using vocal and non-verbal content. In addition, most existing recent systems do not use the interactional context to analyze the speaker's opinions. In this thesis, we focus on these topics. We work within the framework of automatic detection using statistical learning models. After a study on modeling the dynamics of opinion with a latent-state model within a monologue, we study how to integrate the dialogical interactional context, and finally how to integrate the audio with the text using different types of fusion.
We worked on a database of vlogs at the level of a global sentiment, then on a database of multimodal dyadic interactions composed of open conversations, at the level of the speech turn and of the pair of speech turns. Finally, we annotated a database with opinions, because the existing databases were not satisfactory with respect to the task addressed and did not allow a clear comparison with other state-of-the-art systems. At the dawn of the major change brought by the advent of neural methods, we study different types of representations: the older hand-crafted representations, rigid but precise, and the newer statistically learned representations, general and semantic. We study different segmentations that take into account the asynchronous nature of multi-modality. Finally, we use a latent-state learning model that can adapt to a database of limited size, for the atypical task of opinion analysis, and we show that it allows both an adaptation of descriptors from the written domain to the oral domain and can serve as an attention layer through its clustering power. Since complex multimodal fusion is not handled well by the classifier used, and since audio has less impact on opinion than text, we study different feature selection methods to address these problems.
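
The thesis studies several fusion strategies together with a latent-state classifier; as a very reduced illustration of late fusion only, two unimodal classifiers (text and audio) can be combined by averaging their predicted opinion probabilities. The features below are synthetic, and the thesis's latent-state model and feature-selection methods are not reproduced.

```python
# Reduced sketch of late fusion: average the opinion probabilities of
# a text-based and an audio-based classifier. Features are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])                  # 1 = positive opinion
X_text = rng.normal(size=(8, 20)) + y[:, None]          # synthetic text features
X_audio = rng.normal(size=(8, 12)) + 0.3 * y[:, None]   # weaker audio signal

clf_text = LogisticRegression().fit(X_text, y)
clf_audio = LogisticRegression().fit(X_audio, y)

p_text = clf_text.predict_proba(X_text)[:, 1]
p_audio = clf_audio.predict_proba(X_audio)[:, 1]
fused = (p_text + p_audio) / 2                          # late fusion by averaging
print((fused > 0.5).astype(int))
```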
