121 |
Exploring the Correlation Between Ratings, Adjectives and Sentiment on Customer Reviews. Sandström, Einar; Josefsson, Fredrik. January 2022
Customer reviews are important for both customers and companies. Customers want to find out whether a product or service meets their needs, while companies want to know whether their product is good enough for their customers. There is, however, an issue: customers very rarely write a product review. One possible solution is to let the customer choose between adjectives rather than write an entire review. To help future researchers determine whether this could make customers more willing to write reviews, this study looks at the correlation between sentiment and rating, as well as at the adjectives used when a rating and sentiment correlate. Other studies examine the correlation, or the precision of the tool used for sentiment analysis, but do not go in-depth on what makes a review correlate with its rating. To study this, four datasets with a total of 105,234 reviews were used. Using Stanford CoreNLP, each review text was assigned a predicted sentiment score, and the Pearson correlation coefficient was then computed between ratings and sentiments. The conclusion is that there is a weak to moderate correlation between ratings and sentiment. Adjectives with a positive sentiment had a higher correlation than negative adjectives; however, most of them still had a low correlation. The sentiment correlates better when reviews with only one sentence are omitted from the result.
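The rating-sentiment comparison described above reduces to computing Pearson's r between two paired series. A minimal pure-Python sketch with illustrative toy data (not the thesis's 105,234 reviews or its CoreNLP scores):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy example: star ratings (1-5) paired with predicted sentiment scores (0-4).
ratings = [5, 4, 1, 3, 2, 5, 1]
sentiments = [4, 3, 1, 2, 2, 3, 0]
print(round(pearson_r(ratings, sentiments), 3))
```

In practice a library routine such as `scipy.stats.pearsonr` would be used, which also returns a p-value.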
|
122 |
Self-supervised text sentiment transfer with rationale predictions and pretrained transformers. Sinclair, Neil. 21 April 2023
Sentiment transfer involves changing the sentiment of a sentence, such as from a positive to a negative sentiment, whilst maintaining the informational content. Whilst this challenge in the NLP research domain can be framed as a translation problem, traditional sequence-to-sequence translation methods are inadequate due to the dearth of parallel corpora for sentiment transfer. Thus, sentiment transfer can be posed as an unsupervised learning problem where a model must learn to transfer from one sentiment to another in the absence of parallel sentences. Given that the sentiment of a sentence is often defined by a limited number of sentiment-specific words within the sentence, this problem can also be posed as one of identifying and altering sentiment-specific words as a means of transferring from one sentiment to another. In this dissertation we use a novel method of sentiment word identification from the interpretability literature called the method of rationales. This method identifies the words or phrases in a sentence that explain the 'rationale' for a classifier's class prediction, in this case the sentiment of a sentence. This method is then compared against a baseline heuristic sentiment word identification method. We also experiment with a pretrained encoder-decoder Transformer model, known as BART, as a method for improving upon previous sentiment transfer results. This pretrained model is first fine-tuned in an unsupervised manner as a denoising autoencoder to reconstruct sentences where sentiment words have been masked out. The fine-tuned model then generates a parallel corpus which is used to further fine-tune the final stage of the model in a self-supervised manner. Results were compared against a baseline using automatic evaluations of accuracy and BLEU score as well as human evaluations of content preservation, sentiment accuracy and sentence fluency.
The results of this dissertation show that both neural network and heuristic-based methods of sentiment word identification achieve similar results across models for similar levels of sentiment word removal for the Yelp dataset. However, the heuristic approach leads to improved results with the pretrained model on the Amazon dataset. We also find that using the pretrained Transformers model improves upon the results of using the baseline LSTM trained from scratch for the Yelp dataset for all automatic metrics. The pretrained BART model scores higher across all human-evaluated outputs for both datasets, which is likely due to its larger size and pretraining corpus. These results also show a similar trade-off between content preservation and sentiment transfer accuracy as in previous research, with more favourable results on the Yelp dataset relative to the baseline.
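The first fine-tuning stage above, masking sentiment words out of a sentence so a denoising autoencoder can learn to reconstruct them, can be sketched with a toy lexicon. The lexicon and mask token below are illustrative assumptions, not the dissertation's actual rationale or heuristic method:

```python
# Toy sentiment lexicon; the dissertation identifies such words automatically
# via the method of rationales or a heuristic, not a fixed word list.
SENTIMENT_WORDS = {"great", "terrible", "delicious", "awful", "love", "hate"}
MASK_TOKEN = "<mask>"  # assumed mask token, as used by BART-style models

def mask_sentiment_words(sentence: str) -> str:
    """Replace sentiment-bearing words with a mask token, producing the
    noisy input a denoising autoencoder learns to reconstruct from."""
    tokens = sentence.split()
    masked = [MASK_TOKEN if t.lower().strip(".,!?") in SENTIMENT_WORDS else t
              for t in tokens]
    return " ".join(masked)

print(mask_sentiment_words("The food was delicious and I love this place!"))
# -> The food was <mask> and I <mask> this place!
```

The reconstruction objective then trains the model to fill each mask with a plausible sentiment word, which is what allows it to rewrite sentiment at transfer time.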
|
123 |
The Importance of Place in an Era of Placelessness? Distance's Influence on Community Satisfaction and Attachment. McKnight, Matthew L. 01 December 2014
The powerful influence of global consumerism on rural communities has led to claims of the "death of distance" and of the placelessness of community. However, skepticism remains that all unique elements of communities of place have been erased from rural life. Using data from Montana (N=3,508), this research investigates how distance, size, and other spatially bound factors influence sentiments of community satisfaction and attachment in communities of place. Findings suggest that distance can decrease community satisfaction in highly rural communities and increase attachment in rural communities along the urban fringe. An unanticipated key finding was that perceived satisfaction with community services was the strongest predictor of both community satisfaction and attachment. Therefore, this research argues that even though rural areas are being transformed by global consumerism, levels of community satisfaction and attachment continue to vary across place in significant but nuanced ways because of distance and community services.
|
124 |
A Longitudinal Study of Mental Health Patterns from Social Media. Yalamanchi, Neha. 26 July 2021
No description available.
|
125 |
How does Bipolar and Depressive Diagnoses Reflect in Linguistic Usage on Twitter: A Study using LIWC and Other Tools / Hur Reflekterar Bipolära respektive Depressiva Diagnoser Lingvistisk Användning på Twitter. Olsson, Viktor; Lindow, Madeleine. January 2018
Depression and bipolar disorder are two mental disorders which, left untreated, can have a devastating effect on a person's life, as they are considered both chronic and disabling. Seeking help is often a big step that can be put off for years, and misdiagnosis is a very common problem once contact with psychiatric care has finally been established. This paper investigates the correlation between posting patterns on Twitter and these diagnoses. For each day of the past year we quantify cues for emotional intensity and polarity, involvement with the user's social network, and activity, as well as metrics previously shown to be associated with depression. A number of statistical tests, including ANOVA, t-tests, and covariance analysis, are then constructed and fitted over our data. Our results show a significant difference between the groups in affective language use tied to emotional polarity, as well as an elevated use of first-person personal pronouns in both the depressed and bipolar groups. These findings strongly indicate that our approach is valid for finding cues about mental illness; however, the strong limitations of our data collection approach need to be addressed for our results to have real scientific merit. This study is motivated by the need to find predictive models for mental disorders and to better understand the disorders themselves. Predictive models can help clinical psychologists make proper diagnoses and help more people seek treatment.
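The group comparisons described above, such as first-person pronoun rates in diagnosed versus control users, rest on standard tests like the t-test. A minimal pure-Python sketch of Welch's t statistic, using hypothetical numbers rather than the study's Twitter data:

```python
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)  # sample variance of a
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)  # sample variance of b
    return (m1 - m2) / sqrt(v1 / n1 + v2 / n2)

# Hypothetical first-person-pronoun rates (% of tokens) per user.
depressed = [8.1, 9.4, 7.8, 10.2, 8.9]
control = [5.2, 6.1, 5.8, 6.7, 5.5]
print(round(welch_t(depressed, control), 2))
```

A library routine such as `scipy.stats.ttest_ind(a, b, equal_var=False)` additionally returns the p-value needed to judge significance.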
|
126 |
Sentiment Classification with Deep Neural Networks. Kalogiras, Vasileios. January 2017
Sentiment analysis is a subfield of natural language processing (NLP) that attempts to analyze the sentiment of written text. It is a complex problem that entails many challenges, and for this reason it has been studied extensively. In past years, traditional machine learning algorithms and handcrafted methodologies provided state-of-the-art results. However, the recent deep learning renaissance has shifted interest towards end-to-end deep learning models. On the one hand this has resulted in more powerful models, but on the other hand clear mathematical reasoning or intuition behind distinct models is still lacking. This thesis therefore attempts to shed some light on recently proposed deep learning architectures for sentiment classification. A study of their differences is performed, along with empirical results on how changes in the structure or capacity of a model can affect its accuracy and the way it represents and "comprehends" sentences.
|
127 |
Sentimentalisme moral et point de vue général. Katchelewa, Shimbi Kamba. January 2001
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
|
128 |
Le sentiment de présence comme précurseur d'incorporation de stimuli dans les rêves. Saucier, Sébastien. January 2006
Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
|
129 |
Predicting average response sentiments to mass sent emails using RNN / Förutspå genomsnittliga svarsuppfattningar på massutskickade meddelanden med RNN. Bavey, Adel. January 2021
This study is concerned with using the popular Recurrent Neural Network (RNN) model, and its variants Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM), on the novel problem of Sentiment Forecasting (SF). The goal of SF is to predict what the sentiment of a response will be in a conversation, using only the previous utterance. In more everyday terms, we want to be able to predict the sentiment of person B's response to something person A said, before B has said anything, using only A's utterance. The RNN models were trained on a Swedish email database containing email conversations, where the task was to predict the average sentiment of the responses to an initial mass-sent business email. The emails did not come with sentiment labels, so the Valence Aware Dictionary and sEntiment Reasoner (VADER) system was used to determine sentiments. Seventy-five training-and-testing experiments were run with varying RNN models and data conditions. Accuracy, precision, recall, and F1 scores were used to determine to what extent the models had been able to solve the problem. In particular, the F1 score of each model was compared to the F1 score of a dummy classifier that only answered with positive sentiment, the success criterion being that a model reached a higher F1 score than the dummy. The results show that the varying RNN models performed worse than or comparably to the dummy classifier: only 5 out of 75 experiments resulted in the RNN model reaching a higher F1 score than the positive-only classifier, and the average performance of the rare succeeding models exceeded the positive-only classifier by only 2.6 percentage points, which is not considered worthwhile relative to the time and resource investment involved in training RNNs. In the end, the results led to the conclusion that the RNN may not be able to solve the problem on its own, and a different approach might be needed.
This conclusion is somewhat limited by the fact that more work could have been done on experimenting with the data and pre-processing techniques, and the same experiments on a different dataset may show different results. Some of the observations showed that the RNN, particularly the deep GRU, might be used as the basis for a more complex model. Complex models built on top of RNNs have been shown to be useful on similar research problems within sentiment analysis, so this may prove a valuable avenue of research.
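The evaluation setup above, comparing a trained model's F1 score against an always-positive dummy classifier, can be sketched in pure Python. The labels below are toy values, not the study's email data:

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 for the positive class: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy average-response sentiments: 1 = positive, 0 = non-positive.
y_true = [1, 1, 0, 1, 0, 1, 1, 0]
model_pred = [1, 1, 0, 0, 0, 1, 1, 1]
dummy_pred = [1] * len(y_true)  # the always-positive baseline

# A model only counts as a success if its F1 exceeds the dummy's.
print(f1_score(y_true, model_pred), f1_score(y_true, dummy_pred))
```

Note that on imbalanced, mostly positive data the dummy's recall is perfect, which is why its F1 is a surprisingly hard baseline to beat, as the study found.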
|
130 |
ACQUIRING FIRMS’ STRATEGIC DISCLOSURE PRACTICES AROUND MERGERS AND ACQUISITIONS. WANG, JING. 07 November 2016
No description available.
|