1 |
Iterative parameter mixing for distributed large-margin training of structured predictors for natural language processing. Coppola, Gregory Francis. January 2015.
The development of distributed training strategies for statistical prediction functions is important for applications of machine learning generally, and the development of distributed structured prediction training strategies is important for natural language processing (NLP) in particular. With ever-growing data sets this is, first, because it is easier to increase computational capacity by adding more processor nodes than it is to increase the power of individual processor nodes, and, second, because data sets are often collected and stored in different locations. Iterative parameter mixing (IPM) is a distributed training strategy in which each node in a network of processors optimizes a regularized average loss objective on its own subset of the total available training data, making stochastic (per-example) updates to its own estimate of the optimal weight vector, and communicating with the other nodes by periodically averaging estimates of the optimal vector across the network. This algorithm has been contrasted with a close relative, called here the single-mixture optimization algorithm, in which each node stochastically optimizes an average loss objective on its own subset of the training data, operating in isolation until convergence, at which point the average of the independently created estimates is returned. Recent empirical results have suggested that this IPM strategy produces better models than the single-mixture algorithm, and the results of this thesis add to this picture.
The contributions of this thesis are as follows. The first contribution is to produce and analyze an algorithm for decentralized stochastic optimization of regularized average loss objective functions. This algorithm, which we call the distributed regularized dual averaging algorithm, improves over prior work on distributed dual averaging by providing a simpler algorithm (used in the rest of the thesis), better convergence bounds for the case of regularized average loss functions, and certain technical results that are used in the sequel. The central contribution of this thesis is to give an optimization-theoretic justification for the IPM algorithm. While past work has focused primarily on its empirical test-time performance, we give a novel perspective on this algorithm by showing that, in the context of the distributed dual averaging algorithm, IPM constitutes a convergent optimization algorithm for arbitrary convex functions, while the single-mixture algorithm does not. Experiments indeed confirm that the superior test-time performance of models trained using IPM, compared to single-mixture, correlates with better optimization of the objective value on the training set, a fact not previously reported. Furthermore, our analysis of general non-smooth functions justifies the use of distributed large-margin (support vector machine [SVM]) training of structured predictors, which we show yields better test performance than the IPM perceptron algorithm, the only version of IPM to have previously been given a theoretical justification. Our results confirm that IPM training can reach the same level of test performance as a sequentially trained model, and can reach better accuracies when one has a fixed budget of training time. Finally, we use the reduction in training time that distributed training allows to experiment with adding higher-order dependency features to a state-of-the-art phrase-structure parsing model.
We demonstrate that adding these features improves out-of-domain parsing results of even the strongest phrase-structure parsing models, yielding a new state-of-the-art for the popular train-test pairs considered. In addition, we show that a feature-bagging strategy, in which component models are trained separately and later combined, is sometimes necessary to avoid feature under-training and get the best performance out of large feature sets.
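A minimal sketch of the two training regimes contrasted in this abstract, iterative parameter mixing versus single-mixture averaging. The perceptron-style update, the epoch count, and the data representation are illustrative assumptions rather than the thesis's actual regularized large-margin objective:

```python
import numpy as np

def local_epoch(w, shard):
    """One pass of stochastic (per-example) updates on a node's data shard.
    shard: list of (candidates, gold) pairs, where candidates maps each candidate
    output to its feature vector. A structured-perceptron update stands in for the
    thesis's regularized large-margin objective."""
    w = w.copy()
    for candidates, gold in shard:
        pred = max(candidates, key=lambda y: w @ candidates[y])
        if pred != gold:
            w += candidates[gold] - candidates[pred]  # move toward gold, away from prediction
    return w

def iterative_parameter_mixing(shards, dim, epochs=10):
    """IPM: every node trains for one epoch from the shared weights, then the
    per-node estimates are averaged and redistributed, epoch after epoch."""
    w = np.zeros(dim)
    for _ in range(epochs):
        local_estimates = [local_epoch(w, shard) for shard in shards]  # parallel in practice
        w = np.mean(local_estimates, axis=0)                           # periodic mixing step
    return w

def single_mixture(shards, dim, epochs=10):
    """Single-mixture baseline: each node trains in isolation until done,
    and the independent estimates are averaged exactly once at the end."""
    finals = []
    for shard in shards:
        w = np.zeros(dim)
        for _ in range(epochs):
            w = local_epoch(w, shard)
        finals.append(w)
    return np.mean(finals, axis=0)
```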
|
2 |
GeneTUC: Natural Language Understanding in Medical Text. Sætre, Rune. January 2006.
Natural Language Understanding (NLU) is a 50-year-old research field, but its application to molecular biology literature (BioNLU) is less than 10 years old. After the complete human genome sequence was published by the Human Genome Project and Celera in 2001, there has been an explosion of research, shifting the NLU focus from domains like news articles to the domain of molecular biology and medical literature. BioNLU is needed because almost 2000 new articles are published and indexed every day, and biologists need to know about existing knowledge regarding their own research. So far, BioNLU results are not as good as in other NLU domains, so more research is needed to solve the challenges of creating useful NLU applications for biologists.
The work in this PhD thesis is a “proof of concept”. It is the first to show that an existing Question Answering (QA) system can be successfully applied in the hard BioNLU domain, once the essential challenge of unknown entities is solved. The core contribution is a system that automatically discovers and classifies unknown entities and the relations between them. The World Wide Web (through Google) is used as the main resource, and the performance is almost as good as other named entity extraction systems; the advantage of this approach is that it is much simpler and requires less manual labor than any of the comparable systems.
The first paper in this collection gives an overview of the field of NLU and shows how the Information Extraction (IE) problem can be formulated with Local Grammars. The second paper uses Machine Learning to automatically recognize protein names based on features from the GSearch Engine. In the third paper, GSearch is replaced with Google, and the task is to extract all unknown names belonging to one of 273 biomedical entity classes, such as genes, proteins, and processes. After promising results with Google, the fourth paper shows that this approach can also be used to retrieve interactions or relationships between the named entities. The fifth paper describes an online implementation of the system and shows that the method scales well to a larger set of entities.
The final paper concludes the “proof of concept” research and shows that the performance of the original GeneTUC NLU system has increased from handling 10% of the sentences in a large collection of abstracts in 2001 to 50% in 2006. This is still not good enough for a commercial system, but another 40% performance gain is believed to be achievable by importing more verb templates into GeneTUC, just as nouns were imported during this work. Work on this has already begun, in the form of a local Master's thesis.
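A schematic illustration of the web-based entity classification idea described above, assuming a hit-count comparison over simple naming patterns. The query patterns, the small class list, and web_hit_count() are hypothetical stand-ins for the thesis's actual GSearch/Google features:

```python
# Classify an unknown term by comparing web-search evidence across candidate classes.
CANDIDATE_CLASSES = ["gene", "protein", "cell line", "biological process"]  # assumed subset of the 273

def web_hit_count(query: str) -> int:
    """Stand-in for a call to a web search engine that returns a result count."""
    raise NotImplementedError("plug in a real search API here")

def classify_entity(term: str) -> str:
    """Pick the class whose naming pattern co-occurs most often with the term on the web."""
    scores = {}
    for cls in CANDIDATE_CLASSES:
        # Pattern-based queries such as '"<term> is a <class>"' are one common choice.
        scores[cls] = (web_hit_count(f'"{term} is a {cls}"')
                       + web_hit_count(f'"the {term} {cls}"'))
    return max(scores, key=scores.get)
```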
|
3 |
A Mixed Methods Study of Ranger Attrition: Examining the Relationship of Candidate Attitudes, Attributions and Goals. Coombs, Aaron Keith. 01 May 2023.
Elite military selection programs like the 75th Ranger Regiment's Ranger Assessment and Selection Program (RASP) are known for their difficulty and high attrition rates, despite substantial candidate screening just to get into such programs. The current study analyzes Ranger candidates' attitudes, attributions, and goals (AAGs) and their relationship with successful completion of pre-RASP, a preparation phase for the demanding eight-week RASP program. Candidates' entry and exit surveys were analyzed using natural language processing (NLP), as well as more traditional statistical analyses of Likert-measured survey items, to determine which reasons for joining and which individual goals were related to candidate success. Candidates' Intrinsic Motivations and Satisfaction as measured on entry surveys were the strongest predictors of success. Specifically, candidates' desire to deploy or serve in combat, and the goal of earning credibility in the Rangers, were the most important reasons and goals provided through candidates' open-text responses. Additionally, between-groups analyses of Black, Hispanic, and White candidates showed that differences in candidate abilities and motivations explain pre-RASP attrition better than demographic proxies such as race or ethnicity. The study's use of NLP demonstrates the practical utility of applying machine learning to quantitatively analyze open-text responses that have traditionally been limited to qualitative analysis or subject to human coding, although predictive models using more traditional Likert measurement of AAGs had better predictive accuracy. / Doctor of Philosophy / Elite military selection programs like the 75th Ranger Regiment's Ranger Assessment and Selection Program (RASP) are known for their difficulty and high attrition rates, despite substantial candidate screening just to get into such programs. The current study analyzes Ranger candidates' attitudes and goals and their relationship with successful completion of pre-RASP, a preparation phase for the demanding eight-week RASP program. Candidates' entry and exit surveys were analyzed to better understand the relationship between candidates' reasons for volunteering and their goals in the organization. Candidates' Intrinsic Motivations and their Satisfaction upon arrival for pre-RASP best predicted candidate success. Specifically, candidates' desires to deploy or serve in combat, and the goal of earning credibility in the Rangers, were the most important reasons and goals provided through candidates' open-text responses. Additionally, between-groups analyses of Black, Hispanic, and White candidates showed that differences in candidate abilities and motivations explain pre-RASP attrition better than demographic proxies such as race or ethnicity.
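One plausible way to combine open-text survey responses with Likert items when predicting completion, in the spirit of the NLP-versus-traditional-measures comparison above. The TF-IDF features, the logistic regression model, and the argument names are assumptions, not the dissertation's actual pipeline:

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def fit_attrition_model(open_text, likert_items, completed):
    """open_text: list of free-text responses; likert_items: (n, k) numeric array;
    completed: 0/1 labels for successful completion of the preparation phase."""
    vec = TfidfVectorizer(min_df=2, ngram_range=(1, 2))
    X_text = vec.fit_transform(open_text)                        # sparse bag-of-phrases features
    X = hstack([X_text, csr_matrix(np.asarray(likert_items, dtype=float))])
    model = LogisticRegression(max_iter=1000).fit(X, completed)  # simple linear classifier
    return vec, model
```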
|
4 |
Chatbot: A qualitative study of users' experience of Chatbots. Aljadri, Sinan. January 2021.
The aim of the present study has been to examine users' experience of chatbots from both a business perspective and a consumer perspective. The study has also focused on highlighting the limitations a chatbot can have and possible improvements for future development. The study is based on a qualitative research method with semi-structured interviews that were analyzed using thematic analysis. The interview material was analyzed in light of previous research and various theoretical perspectives, such as Artificial Intelligence (AI) and Natural Language Processing (NLP). The results show that the experience of chatbots can differ between the businesses that offer them, which are more positive, and the consumers who use them as customer service. Limitations of chatbots and suggestions for improving them are also a consistent finding of the study.
|
5 |
Automation of summarization evaluation methods and their application to the summarization process. Nahnsen, Thade. January 2011.
Summarization is the process of creating a more compact textual representation of a document or a collection of documents. In view of the vast increase in electronically available information sources in the last decade, filters such as automatically generated summaries are becoming ever more important to facilitate the efficient acquisition and use of required information. Different methods using natural language processing (NLP) techniques are being used to this end. One of the shallowest approaches is the clustering of available documents and the representation of the resulting clusters by one of the documents; an example of this approach is the Google News website. It is also possible to augment the clustering of documents with a summarization process, which results in a more balanced representation of the information in the cluster, NewsBlaster being an example. However, while some systems are already available on the web, summarization is still considered a difficult problem in the NLP community. One of the major problems hampering the development of proficient summarization systems is the evaluation of the (true) quality of system-generated summaries. This is exemplified by the fact that the current state-of-the-art evaluation method for assessing the information content of summaries, the Pyramid evaluation scheme, is a manual procedure. In this light, this thesis has three main objectives.
1. The development of a fully automated evaluation method. The proposed scheme is rooted in the ideas underlying the Pyramid evaluation scheme and makes use of deep syntactic information and lexical semantics. Its performance improves notably on previous automated evaluation methods.
2. The development of an automatic summarization system which draws on the conceptual idea of the Pyramid evaluation scheme and the techniques developed for the proposed evaluation system. The approach features an algorithm for determining the pyramid and bases importance on the number of occurrences of the variable-sized contributors of the pyramid, as opposed to the word-based methods exploited elsewhere.
3. The development of a text coherence component that can be used for obtaining the best ordering of the sentences in a summary.
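A simplified, illustrative rendering of pyramid-style content scoring as described above, where content units are weighted by how many reference summaries express them. The thesis's automated matching relies on deep syntactic information and lexical semantics; a naive exact-match over unit identifiers stands in for it here:

```python
from collections import Counter

def build_pyramid(reference_units):
    """reference_units: list of sets of content units, one set per reference summary.
    A unit's weight is the number of reference summaries that contain it."""
    weights = Counter()
    for units in reference_units:
        weights.update(units)
    return weights

def pyramid_score(peer_units, weights):
    """Score a peer summary by the weight it recovers, relative to an ideally
    informative summary expressing the same number of content units."""
    observed = sum(weights[u] for u in peer_units if u in weights)
    ideal = sum(sorted(weights.values(), reverse=True)[:len(peer_units)])
    return observed / ideal if ideal else 0.0
```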
|
6 |
Design, construction and evaluation of a Greek grammar checker (Σχεδιασμός, κατασκευή και αξιολόγηση ελληνικού γραμματικού διορθωτή). Γάκης, Παναγιώτης. 07 May 2015.
The aim of this thesis is to design and implement a useful and user-friendly electronic tool (a grammar checker) that carries out morphological and syntactic analysis of sentences, phrases and words in order to correct syntactic, grammatical and stylistic errors. The basis for dealing with all these issues is the Grammar (an adaptation of the Little Modern Greek Grammar of Manolis Triantafyllidis), which has been the officially codified grammar of Modern Greek since 1976. (The minimal differences that appear in the new school grammar for the fifth and sixth grades of elementary school have not been taken into account in this thesis.)
Given the absence of such a tool for Greek, the development of the product is based first on the detailed recording, analysis and classification of errors in written language, and then on the selection of software that can formally describe the grammatical errors. The thesis presents statistics that relate the errors to the writers' gender and to the text type in which they appear, as well as to their recognition by students.
This research presents the formalism used (Mnemosyne) and the particularities of the Greek language that hinder its computational processing. The formalism has already been used to identify multi-word terms and to implement phrase grammars aimed at automatic information extraction. In this way, all speakers of the language (native or not) can better understand not only how the various parts of the language system function but also how the mechanisms of linguistic analysis operate.
The main areas of grammatical errors that the grammar checker addresses are:
1) Accentuation and punctuation issues,
2) Final -n,
3) Stylistic issues (verb forms in cases of doublets, inflectional forms),
4) Established-usage issues for Modern Greek words and phrases (set phrases, learned forms),
5) Inflection issues (incorrect inflectional forms of nouns or verbs, whether through ignorance or confusion),
6) Vocabulary issues (cases of conceptual confusion, Greek renderings of foreign words, redundancy, and use of an incorrect word or phrase),
7) Orthographic confusion issues (homophonous words),
8) Agreement issues (disagreement between elements of the noun phrase or verb phrase),
9) Syntax issues (verb syntax), and
10) Cases of errors that require more specialized handling of spelling correction.
The basis for the implementation is the electronic morphological lexicon Neurolingo Lexicon, a 5-level lexicon consisting of at least 90,000 entries that produce about 1,200,000 inflected forms. These forms carry information that is a) orthographic (the correct spelling of the inflected form), b) morphemic (the kinds of morphemes, prefix, stem, suffix, ending, that make up the form), c) morphosyntactic (part of speech, gender, case, person, etc.), d) stylistic (the stylistic characteristics of the form: colloquial, learned, etc.), and e) terminological (additional information about whether the form belongs to a specialized vocabulary). This electronic lexicon is the foundation that supports the grammar checker: morphology is the first level at which the language is examined, and the syntactic level is based on and depends on the morphology of the words.
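A schematic sketch of how lexicon entries carrying such morphosyntactic information can back one of the checks listed above (error area 8, agreement). The record fields and the Greek example forms are illustrative, not the actual Neurolingo encoding:

```python
from dataclasses import dataclass

@dataclass
class WordForm:
    surface: str   # orthographic form
    pos: str       # part of speech
    gender: str    # masc / fem / neut
    case: str      # nom / gen / acc / voc
    number: str    # sg / pl

def noun_phrase_agrees(article: WordForm, noun: WordForm) -> bool:
    """Flag a disagreement inside a nominal phrase (article + noun)."""
    return (article.gender == noun.gender
            and article.case == noun.case
            and article.number == noun.number)

# e.g. noun_phrase_agrees(WordForm("του", "article", "masc", "gen", "sg"),
#                         WordForm("δρόμος", "noun", "masc", "nom", "sg"))  -> False
```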
A major problem in processing the language was lexical ambiguity, a product of the rich morphology of Greek. Given this problem, a tagger was designed on purely linguistic criteria for those cases where lexical ambiguity prevented errors in the use of Greek from being detected.
The texts given to the grammar checker for correction were also corrected by a human. In a very large percentage of cases the grammar checker approaches the accuracy of the human corrector; it falls short only on mistakes concerning the coherence of the text and, by extension, errors of meaning.
|
7 |
Efficient prediction of relational structure and its application to natural language processing. Riedel, Sebastian. January 2009.
Many tasks in Natural Language Processing (NLP) require us to predict a relational structure over entities. For example, in Semantic Role Labelling we try to predict the 'semantic role' relation between a predicate verb and its argument constituents. Often NLP tasks not only involve related entities but also relations that are stochastically correlated. For instance, in Semantic Role Labelling the roles of different constituents are correlated: we cannot assign the agent role to one constituent if we have already assigned this role to another. Statistical Relational Learning (also known as First Order Probabilistic Logic) allows us to capture the aforementioned nature of NLP tasks because it is based on the notions of entities, relations and stochastic correlations between relationships. It is therefore often straightforward to formulate an NLP task using a First Order probabilistic language such as Markov Logic. However, the generality of this approach comes at a price: the process of finding the relational structure with highest probability, also known as maximum a posteriori (MAP) inference, is often inefficient, if not intractable. In this work we seek to improve the efficiency of MAP inference for Statistical Relational Learning. We propose a meta-algorithm, namely Cutting Plane Inference (CPI), that iteratively solves small subproblems of the original problem using any existing MAP technique and inspects parts of the problem that are not yet included in the current subproblem but could potentially lead to an improved solution. Our hypothesis is that this algorithm can dramatically improve the efficiency of existing methods while remaining at least as accurate. We frame the algorithm in Markov Logic, a language that combines First Order Logic and Markov Networks. Our hypothesis is evaluated using two tasks: Semantic Role Labelling and Entity Resolution. It is shown that the proposed algorithm improves the efficiency of two existing methods by two orders of magnitude and leads an approximate method to more probable solutions. We also show that CPI, at convergence, is guaranteed to be at least as accurate as the method used within its inner loop. Another core contribution of this work is a theoretical and empirical analysis of the boundary conditions of Cutting Plane Inference. We describe cases when Cutting Plane Inference will definitely be difficult (because it instantiates large networks or needs many iterations) and when it will be easy (because it instantiates small networks and needs only few iterations).
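A high-level sketch of the Cutting Plane Inference loop as described above: solve a small instantiated subproblem with any base MAP solver, then add the parts of the full problem that the current solution violates and that could therefore change it. The solver, separation routine, and problem data structures are placeholders, not the thesis's Markov Logic implementation:

```python
def cutting_plane_inference(full_problem, base_map_solver, separate, max_iters=100):
    """
    full_problem:     the complete, implicitly specified ground network (placeholder object)
    base_map_solver:  MAP inference over an explicitly instantiated partial network
    separate:         returns factors/constraints of full_problem violated by a solution
    """
    partial = full_problem.initial_fragment()      # e.g. local factors only
    solution = base_map_solver(partial)
    for _ in range(max_iters):
        new_parts = separate(full_problem, solution)
        if not new_parts:                          # nothing violated: current solution stands
            break
        partial.add(new_parts)                     # grow the instantiated subproblem
        solution = base_map_solver(partial)        # re-solve the enlarged network
    return solution
```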
|
8 |
Weighted Aspects for Sentiment Analysis. Yoo, Byungkyu. 05 December 2022.
When people write a review about a business, they write and rate it based on their personal experience of the business. Sentiment analysis is a natural language processing technique that determines the sentiment of text, including reviews. However, unlike computers, the personal experience of humans emphasizes their preferences and observations that they deem important while ignoring other components that may not be as important to them personally. Traditional sentiment analysis does not consider such preferences. To utilize these human preferences in sentiment analysis, this paper explores various methods of weighting aspects in an attempt to improve sentiment analysis accuracy. Two types of methods are considered. The first method applies human preference by assigning weights to aspects in calculating overall sentiment analysis. The second method uses the results of the first method to improve the accuracy of traditional supervised sentiment analysis. The results show that the methods have high accuracy when people have strong opinions, but the weights of the aspects do not significantly improve the accuracy.
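A minimal sketch of the first method's idea as described above: the overall sentiment of a review is a weighted combination of per-aspect sentiment scores, with weights standing in for a reviewer's personal emphasis. The aspect names, weights, and score range are illustrative assumptions:

```python
def weighted_review_sentiment(aspect_scores, aspect_weights):
    """aspect_scores: {aspect: sentiment in [-1, 1]}; aspect_weights: {aspect: weight >= 0}.
    Returns a weighted average of aspect sentiments, ignoring aspects with zero weight."""
    total = sum(aspect_weights.get(a, 0.0) for a in aspect_scores)
    if total == 0:
        return 0.0
    return sum(score * aspect_weights.get(a, 0.0)
               for a, score in aspect_scores.items()) / total

# e.g. a diner who cares mostly about food:
# weighted_review_sentiment({"food": 0.8, "service": -0.4, "price": 0.1},
#                           {"food": 0.7, "service": 0.2, "price": 0.1})
```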
|
9 |
Automatic Extraction of Computer Science Concept Phrases Using a Hybrid Machine Learning Paradigm. Jahin, S M Abrar.
Indiana University-Purdue University Indianapolis (IUPUI) / With the proliferation of computer science in modern society in recent years, the number of computer science-related jobs is expanding quickly. Software engineer has been chosen as the best job for 2023 based on pay, stress level, opportunity for professional growth, and work-life balance, according to rankings by various news outlets, journals, and publications. Computer science occupations are anticipated to be in high demand not just in 2023 but for the foreseeable future. It is not surprising that the number of computer science students at universities is growing and will continue to grow. The enormous increase in student enrolment in many subdisciplines of computing has presented some distinct issues. If computer science is to be incorporated into the K-12 curriculum, it is vital that K-12 educators are competent, but one of the biggest problems with this plan is that there are not enough trained computer science teachers. Numerous new fields and applications are continually being introduced to computer science, and it is difficult for schools to recruit skilled computer science instructors for a variety of reasons, including low salaries. Utilizing the K-12 teachers who are already in the schools, who have a love for teaching, and who consider teaching a vocation is therefore the most effective strategy for addressing this issue. If we want teachers to quickly grasp computer science topics, we need to give them an easy way to learn about computer science. To simplify and expedite the study of computer science, we must acquaint schoolteachers with the terminology associated with computer science concepts so they know which things they need to learn according to their profile.
If we want to make it easier for schoolteachers to comprehend computer science concepts, it would be ideal to provide them with a tree of words and phrases from which they could determine where each phrase originates and which phrases are connected to it, so that the concepts can be learned effectively. To find good concept words or phrases, we must first identify concepts and then establish their connections or linkages. As computer science is a fast-developing field, its nomenclature is also expanding at a frenetic rate, so adding all concepts and terms to the knowledge graph by hand would be a challenging endeavor. Creating a system that automatically adds computer science domain terms to the knowledge graph is a straightforward solution to this issue. We have identified knowledge graph use cases for the schoolteacher training program, which motivates the development of a knowledge graph, and we have analyzed these use cases and the knowledge graph's ideal characteristics. We have designed a web-based system for adding, editing, and removing words from a knowledge graph, in which a term or phrase can be represented with its children list, parent list, and synonym list for enhanced comprehension. We have also developed an automated system that can extract computer science concept phrases from any supplied text, thereby enriching the knowledge graph. In this way, the knowledge graph is designed for use in teacher education so that schoolteachers can teach computer science topics to K-12 students effectively.
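A schematic representation of the knowledge-graph entry described above, where each concept phrase carries parent, child, and synonym links so a teacher can see where a term sits and what to learn next. The field names and the example concept are illustrative only:

```python
from dataclasses import dataclass, field

@dataclass
class ConceptNode:
    phrase: str
    parents: list = field(default_factory=list)    # broader concepts
    children: list = field(default_factory=list)   # narrower concepts
    synonyms: list = field(default_factory=list)   # alternative names

graph = {}

def add_concept(phrase, parents=(), synonyms=()):
    """Insert a concept phrase and wire up parent/child links in both directions."""
    node = graph.setdefault(phrase, ConceptNode(phrase))
    node.parents.extend(p for p in parents if p not in node.parents)
    node.synonyms.extend(s for s in synonyms if s not in node.synonyms)
    for p in parents:
        parent = graph.setdefault(p, ConceptNode(p))
        if phrase not in parent.children:
            parent.children.append(phrase)
    return node

# e.g. add_concept("binary search tree", parents=["tree"], synonyms=["BST"])
```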
|