31.
Into the Into of Earth Itself. Hodes, Amanda Kay, 26 May 2023.
Into the Into of Earth Itself is a poetry collection that investigates the relationship between ecological violation and the violation of women, as well as toxicity and toxic masculinity. In doing so, it draws from the histories of two Pennsylvania towns: Palmerton and Centralia. The former is a Superfund site ravaged by zinc pollution and currently under threat of hydraulic fracturing and pipeline expansion. The latter is a nearby ghost town that was condemned and evacuated due to an underground mine fire, which will continue for another 200 years. The manuscript uses visual forms and digital text mining techniques to craft poetry about these extractive relationships to land and women. The speaker asks herself: As a woman, how have I also been mined and fracked by these same societal technologies? / Master of Fine Arts / Into the Into of Earth Itself is a poetry collection.
32.
Statistical Learning for Sequential Unstructured Data. Xu, Jingbin, 30 July 2024.
Unstructured data, which cannot be organized into predefined structures, such as texts, human behavior status, and system logs, is often presented in a sequential format with inherent dependencies. Probabilistic models are commonly used to capture these dependencies in the data generation process through latent parameters and can naturally extend into hierarchical forms. However, these models rely on the correct specification of assumptions about the sequential data generation process, which often limits their ability to scale. The emergence of neural network tools has enabled scalable learning for high-dimensional sequential data. From an algorithmic perspective, efforts are directed towards reducing dimensionality and representing unstructured data units as dense vectors in low-dimensional spaces learned from unlabeled data, a practice often referred to as numerical embedding. While these representations offer measures of similarity, automated generalization, and semantic understanding, they frequently lack the statistical foundations required for explicit inference. This dissertation aims to develop statistical inference techniques tailored to the analysis of unstructured sequential data, with applications in the field of transportation safety. The first part of the dissertation presents a two-stage method. It adopts numerical embedding to map large-scale unannotated data into numerical vectors. Subsequently, a kernel test using maximum mean discrepancy is employed to detect abnormal segments within a given time period. Theoretical results show that learning from the numerical vectors is equivalent to learning directly from the raw data. A real-world example illustrates how mismatched driver visual behavior occurred during a lane change. The second part of the dissertation introduces a two-sample test for comparing text generation similarity.
The hypothesis tested is whether the probabilistic mapping measures that generate the textual data are identical for two groups of documents. The proposed test compares the likelihoods of text documents, estimated through neural network-based language models under an autoregressive setup. The test statistic is derived from an estimation-and-inference framework that first approximates the data likelihood on an estimation set before performing inference on the remaining data. The theoretical results indicate that the test statistic's asymptotic behavior approximates a normal distribution under mild conditions. Additionally, a multiple data-splitting strategy is utilized, combining p-values into a unified decision to enhance the test's power. The third part of the dissertation develops a method to measure differences in text generation between a benchmark dataset and a comparison dataset, focusing on word-level generation variations. This method uses the sliced-Wasserstein distance to compute a contextual discrepancy score. A resampling method establishes a threshold to screen the scores. Crash report narratives are analyzed to compare crashes involving vehicles equipped with level 2 advanced driver assistance systems and those involving human drivers. / Doctor of Philosophy / Unstructured data, such as texts, human behavior records, and system logs, cannot be neatly organized. This type of data often appears in sequences with natural connections. Traditional methods use models to understand these connections, but these models depend on specific assumptions, which can limit their effectiveness. New tools using neural networks have made it easier to work with large and complex data. These tools help simplify data by turning it into smaller, manageable pieces, a process known as numerical embedding. While this helps in understanding the data better, it often lacks the statistical foundation needed for subsequent inferential analysis.
This dissertation aims to develop statistical inference techniques for analyzing unstructured sequential data, focusing on transportation safety. The first part of the dissertation introduces a two-step method. First, it transforms large-scale unorganized data into numerical vectors. Then, it uses a statistical test to detect unusual patterns over a period of time. For example, it can identify when a driver's visual behavior is not properly aligned with the attention demands of driving during lane changes. The second part of the dissertation presents a method to compare the similarity of text generation. It tests whether the way texts are generated is the same for two groups of documents. This method uses neural network-based models to estimate the likelihood of text documents. Theoretical results show that as more data are observed, the distribution of the test statistic gets closer to the desired distribution under certain conditions. Additionally, combining multiple data splits improves the test's power. The third part of the dissertation constructs a score to measure differences in text generation processes, focusing on word-level differences. This score is based on a specific distance measure. To check that a difference is not a false discovery, a screening threshold is established using a resampling technique. If the score exceeds the threshold, the difference is considered significant. An application of this method compares crash reports from vehicles with advanced driver assistance systems to those from human-driven vehicles.
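The kernel two-sample test at the heart of the first part can be illustrated with a minimal sketch: an unbiased estimate of the squared maximum mean discrepancy (MMD) with a Gaussian kernel, applied to synthetic stand-ins for embedded segments. The dimensions, bandwidth, and data below are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

def rbf_kernel(X, Y, bandwidth):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd_unbiased(X, Y, bandwidth):
    """Unbiased estimate of the squared maximum mean discrepancy."""
    m, n = len(X), len(Y)
    Kxx = rbf_kernel(X, X, bandwidth)
    Kyy = rbf_kernel(Y, Y, bandwidth)
    Kxy = rbf_kernel(X, Y, bandwidth)
    # drop diagonal terms so the within-sample averages are unbiased
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 16))   # stand-in for embeddings of normal segments
shifted = rng.normal(1.0, 1.0, size=(200, 16))  # stand-in for an abnormal segment
same = rng.normal(0.0, 1.0, size=(200, 16))

# bandwidth chosen roughly at the scale of typical pairwise distances
print(mmd_unbiased(normal, shifted, bandwidth=4.0))  # clearly positive: distributions differ
print(mmd_unbiased(normal, same, bandwidth=4.0))     # near zero: same distribution
```

In the actual test a threshold would be calibrated (e.g. by permutation) rather than read off by eye; the sketch only shows why the statistic separates matched and mismatched segments.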
33.
Intégration du web social dans les systèmes de recommandation (Social web integration in recommendation systems). Nana Jipmo, Coriane, 19 December 2017.
The social Web continues to grow and gives access to a wide variety of resources from sharing sites such as del.icio.us, message-exchange services such as Twitter, professional social networks such as LinkedIn, and more general-purpose social networks such as Facebook and LiveJournal. The same individual can be registered and active on different social networks (potentially serving different purposes), on which he or she publishes diverse and constantly growing information, such as name, locality, communities, and various activities. Given the international dimension of the Web, this (textual) information is inherently multilingual and intrinsically ambiguous, since it is written in natural language, in a free vocabulary, by individuals of different origins. It is also a valuable source of data, especially for applications seeking to know their users in order to better understand their needs, activities, and interests. The objective of our research is to exploit, essentially by using the Wikipédia encyclopedia, the textual resources extracted from an individual's different social networks in order to construct an enriched profile characterising that individual, which can be exploited by applications such as recommendation systems. In particular, we conducted a study to characterise users' personality traits. Numerous experiments, analyses, and evaluations were carried out on real data collected from different social networks.
34.
Nutzen und Benutzen von Text Mining für die Medienanalyse (Use and Usability of Text Mining for Media Analysis). Richter, Matthias, 26 January 2011.
On the one hand, existing results from fields as diverse as empirical media research and text mining are brought together. The subject is content analysis, performed by hand, with computer support, or fully automatically, with particular attention to factors such as time, development, and change. This condensation and compilation not only provides an overview from an unfamiliar perspective; in the process, something new is also synthesised.
The underlying thesis remains an inclusive one throughout: just as it seems unlikely that computers will ever conduct analyses entirely without human interpretation, human interpreters will no longer be able to cover complex topics promptly, comprehensively, and without undue subjective bias unless they have the best available computer support, and analyses of substantial value will no longer be able to forgo such aids and instruments of quality assurance.
This immediately yields requirements: it must be clarified where the strengths and weaknesses of human analysts and of computational methods lie. Building on that, the goal is an optimal synthesis of the strengths of both sides while minimising their respective weaknesses. The practical aim, ultimately, is to reduce complexity and to enable a way out of the systemic state of being "overnewsed but uninformed".
35.
Entity-Centric Text Mining for Historical Documents. Coll Ardanuy, Maria, 07 July 2017.
No description available.
36.
Information extraction from chemical patents. Jessop, David M., January 2011.
The automated extraction of semantic chemical data from the existing literature is demonstrated. For reasons of copyright, the work focuses on the patent literature, though the methods are expected to apply equally to other areas of the chemical literature. Hearst patterns are applied to the patent literature in order to discover hyponymic relations describing chemical species. The acquired relations are manually validated to determine the precision of the determined hypernyms (85.0%) and of the asserted hyponymic relations (94.3%). It is demonstrated that the system acquires relations that are not present in the ChEBI ontology, suggesting that it could function as a valuable aid to the ChEBI curators. The relations discovered by this process are formalised using the Web Ontology Language (OWL) to enable re-use. PatentEye, an automated system for the extraction of reactions from chemical patents and their conversion to Chemical Markup Language (CML), is presented. Chemical patents published by the European Patent Office over a ten-week period are used to demonstrate its capability: 4444 reactions are extracted with a precision of 78% and recall of 64% with regard to determining the identity and amount of reactants employed, and an accuracy of 92% with regard to product identification. NMR spectra are extracted from the text using OSCAR3, which is developed further to greatly increase recall. The resulting system is presented as a significant advance towards the large-scale, automated extraction of high-quality reaction information. Extended Polymer Markup Language (EPML), a CML dialect for describing Markush structures as they are presented in the literature, is developed. Software to exemplify EPML and to enable substructure searching of EPML documents is presented. Further work is recommended to refine the language and code to publication quality before they are presented to the community.
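Hearst patterns of the kind applied above can be sketched as a handful of regular expressions. The two patterns and the sample sentences below are illustrative inventions, not the actual rules or data of the thesis.

```python
import re

# Two classic Hearst patterns: "X such as A, B and C" and "A, B and other X"
# both assert that A, B, ... are hyponyms of X.
PATTERNS = [
    re.compile(r"(?P<hyper>\w+) such as (?P<hypos>\w+(?:, \w+)*(?:,? (?:and|or) \w+)?)"),
    re.compile(r"(?P<hypos>\w+(?:, \w+)*) and other (?P<hyper>\w+)"),
]

def extract_hyponyms(text):
    """Return (hypernym, hyponym) pairs matched by the patterns."""
    pairs = []
    for pattern in PATTERNS:
        for match in pattern.finditer(text):
            hypernym = match.group("hyper")
            # split the coordinated list "A, B and C" into individual hyponyms
            for hyponym in re.split(r",\s*|\s+(?:and|or)\s+", match.group("hypos")):
                pairs.append((hypernym, hyponym))
    return pairs

sentence = "The mixture was washed with solvents such as methanol, ethanol and acetone."
print(extract_hyponyms(sentence))
# → [('solvents', 'methanol'), ('solvents', 'ethanol'), ('solvents', 'acetone')]
```

A production system would layer part-of-speech constraints and multi-word chemical-name recognition on top of such patterns; the sketch shows only the pattern-matching core.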
37.
Extraction of chemical structures and reactions from the literature. Lowe, Daniel Mark, January 2012.
The ever-increasing quantity of chemical literature necessitates the creation of automated techniques for extracting relevant information. This work focuses on two aspects: the conversion of chemical names to computer-readable structure representations and the extraction of chemical reactions from text. Chemical names are a common way of communicating chemical structure information. OPSIN (Open Parser for Systematic IUPAC Nomenclature), an open-source, freely available algorithm for converting chemical names to structures, was developed. OPSIN employs a regular grammar to direct tokenisation and parsing, leading to the generation of an XML parse tree. Nomenclature operations are applied successively to the tree, with many requiring the manipulation of an in-memory connection-table representation of the structure under construction. The areas of nomenclature supported are described, with attention drawn to difficulties that may be encountered in name-to-structure conversion. Results on sets of generated names and names extracted from patents are presented. On generated names, recall of between 96.2% and 99.0% was achieved with a lower bound of 97.9% on precision, with all results comparable or superior to the tested commercial solutions. On the patent names, OPSIN's recall was 2-10% higher than that of the tested solutions when the patent names were processed as found in the patents. The uses of OPSIN as a web service and as a tool for identifying chemical names in text are shown to demonstrate the direct utility of this algorithm. A software system for extracting chemical reactions from the text of chemical patents was developed. The system relies on the output of ChemicalTagger, a tool for tagging words and identifying phrases of importance in experimental chemistry text. Improvements to this tool required to facilitate this task are documented.
The structures of chemical entities are, where possible, determined using OPSIN in conjunction with a dictionary of name-to-structure relationships. Extracted reactions are atom mapped to confirm that they are chemically consistent. 424,621 atom-mapped reactions were extracted from 65,034 organic chemistry USPTO patents. On a sample of 100 of these extracted reactions, chemical entities were identified with 96.4% recall and 88.9% precision. Quantities could be associated with reagents in 98.8% of cases and with products in 64.9% of cases, whilst the correct role was assigned to chemical entities in 91.8% of cases. Qualitatively, the system captured the essence of the reaction in 95% of cases. This system is expected to be useful in the creation of searchable databases of reactions from chemical patents and in facilitating analysis of the properties of large populations of reactions.
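The grammar-directed tokenisation step that OPSIN performs on systematic names can be hinted at with a toy sketch: a greedy longest-match tokeniser over a tiny, invented vocabulary of name fragments. OPSIN's real grammar is far richer and uses a proper parse with backtracking; nothing below is taken from its actual implementation.

```python
# Tiny illustrative vocabulary of name fragments, longest first for greedy matching.
VOCAB = sorted(["meth", "eth", "prop", "but", "an", "ene", "ol", "yl", "-", "1", "2", ","],
               key=len, reverse=True)

def tokenise(name):
    """Greedy longest-match tokenisation; None if the name is not covered by VOCAB."""
    tokens, i = [], 0
    while i < len(name):
        fragment = next((v for v in VOCAB if name.startswith(v, i)), None)
        if fragment is None:
            return None  # unknown fragment: name falls outside the toy grammar
        tokens.append(fragment)
        i += len(fragment)
    return tokens

print(tokenise("propan-1-ol"))  # → ['prop', 'an', '-', '1', '-', 'ol']
print(tokenise("ethanol"))      # → ['eth', 'an', 'ol']
```

Once a name is segmented into known morphemes, each token can be mapped to a structural operation (chain length, unsaturation, substituent attachment), which is the part the sketch omits.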
38.
The Business Value of Text Mining. Stolt, Richard, January 2017.
Text mining is an enabling technology that will come to change how businesses derive insights and knowledge from the textual data available to them. The current literature focuses on text mining algorithms and techniques, whereas the practical aspects of text mining are lacking. This study aims at helping companies understand the business value of text mining with the help of a case study. Subsequently, an SMS survey was used to identify additional business areas where text mining could be used to derive business value. A literature review was conducted to conceptualize the business value of text mining, and a concept matrix was established in which each business category is specified together with its derived insights and knowledge, domain, and data source. The concept matrix was then used to decide when information was of business value, in order to show that text mining could be used to derive information of business value. Text mining analyses were conducted on survey-feedback data from a traffic school. The results revealed several patterns, with business value derived mainly in the categories of Quality Control and Quality Assurance. After comparing the results of the SMS survey with the empirical findings of the case study, some difficulties emerged in the categorization of derived information, implying that the categories need to be more specific and distinct. Furthermore, the concept matrix does not comprise all of the business categories that are sure to exist.
39.
Mining patient journeys from healthcare narratives. Dehghan, Azad, January 2015.
The aim of the thesis is to investigate the feasibility of using text mining methods to reconstruct patient journeys from unstructured clinical narratives. A novel method to extract and represent patient journeys is proposed and evaluated in this thesis. A composition of methods was designed, developed, and evaluated to this end, comprising health-related concept extraction, temporal information extraction, and concept clustering with automated workflow generation. A suite of methods to extract clinical information from healthcare narratives was proposed and evaluated in order to enable chronological ordering of clinical concepts. Specifically, we proposed and evaluated a data-driven method to identify key clinical events (i.e., medical problems, treatments, and tests) using a sequence labelling algorithm (CRF) with a combination of lexical and syntactic features, and a rule-based post-processing method including label correction, boundary adjustment, and false-positive filtering. The method was evaluated as part of the 2012 i2b2 challenge and achieved state-of-the-art performance, with strict and lenient micro F1-measures of 83.45% and 91.13% respectively. A method to extract temporal expressions using a hybrid knowledge-driven (dictionary and rules) and data-driven (CRF) approach was proposed and evaluated. The method demonstrated state-of-the-art performance at the 2012 i2b2 challenge: an F1-measure of 90.48% and accuracy of 70.44% for identification and normalisation respectively. For temporal ordering of events we proposed and evaluated a knowledge-driven method, with an F1-measure of 62.96% (considering the reduced temporal graph) or 70.22% for extraction of temporal links.
The method developed consisted of initial rule-based identification and classification components, which utilised contextual lexico-syntactic cues for inter-sentence links and string similarity for co-reference links, followed by a temporal closure component to calculate transitive relations of the extracted links. In a case study of survivors of childhood central nervous system tumours (medulloblastoma), qualitative evaluation showed that we were able to capture specific trends within patient journeys. An overall quantitative evaluation score (average precision and recall) of 94-100% for individual and 97% for aggregated patient journeys was also achieved, indicating that text mining methods can be used to identify, extract, and temporally organise the key clinical concepts that make up a patient's journey. We also present an analysis of healthcare narratives, specifically exploring the content of clinical and patient narratives using the methods developed to extract patient journeys. We found that health-related quality-of-life concepts are more common in patient narratives, while clinical concepts (e.g., medical problems, treatments, tests) are more prevalent in clinical narratives. In addition, while both aggregated sets of narratives contain all investigated concepts, clinical narratives contain, proportionally, more health-related quality-of-life concepts than the clinical concepts found in patient narratives. These results demonstrate that automated concept extraction, in particular of health-related quality-of-life concepts, as part of standard clinical practice is feasible. The proposed method demonstrates that text mining can be used efficiently to identify, extract, and temporally organise the key clinical concepts that make up a patient's journey in a healthcare system.
Automated reconstruction of patient journeys can potentially be of value to clinical practitioners and researchers, to aid large-scale analyses of implemented care pathways and subsequently to help monitor, compare, develop, and adjust clinical guidelines, both in areas of chronic disease where there is plenty of data and in rare conditions where there are potentially no established guidelines.
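The temporal closure component described above computes transitive relations over the extracted links. A minimal sketch for BEFORE links only, with invented events from a fictitious clinical narrative (the thesis's closure also handles other relation types):

```python
def temporal_closure(before_links):
    """Transitive closure of BEFORE links: if a<b and b<c, then a<c."""
    closure = set(before_links)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))  # new transitive link found
                    changed = True
    return closure

# Invented events, each pair meaning "first happened before second".
links = {("admission", "CT scan"), ("CT scan", "surgery"), ("surgery", "discharge")}
closed = temporal_closure(links)
print(sorted(closed))  # adds e.g. ('admission', 'surgery') and ('admission', 'discharge')
```

The fixed-point loop is quadratic per pass and fine for the handful of events in one narrative; a large-scale system would use a dedicated graph algorithm instead.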
40.
Analyse de données textuelles d'un forum médical pour évaluer le ressenti exprimé par les internautes au sujet des antidépresseurs et des anxiolytiques (Text Mining Analysis of an Online Forum to Evaluate Users' Perception about Antidepressants and Anxiolytics). Abbé, Adeline, 8 November 2016.
Analysis of textual data is facilitated by text mining (TM), which allows content analysis to be automated and has many applications in healthcare. One of them is the use of TM to explore the content of posts shared online. We performed a systematic literature review to identify applications of TM in psychiatry. In addition, we used TM to explore the concerns of users of the Doctissimo.com forum about antidepressants and anxiolytics between 2013 and 2015, analysing word frequencies, co-occurrences, topic models (LDA), and the popularity of topics. The four TM applications in psychiatry retrieved are the analysis of patients' narratives (psychopathology), feelings expressed online, the content of medical records, and screening of the biomedical literature. Four major topics were identified on the forum: withdrawal (the most frequent), escitalopram, anxiety about treatment effects, and side effects. While concerns about the side effects of treatment tended to decline, questions about withdrawal effects and changing medication increased and were associated with several antidepressants. Content analysis of online textual data helps us understand patients' major concerns and the support they receive, and can improve treatment adherence.