521

Sdílená ekonomika v kontextu postmateriálních hodnot: případ segmentu ubytování v Praze / Sharing Economy in the Context of Postmaterial Values: The Case of Accommodation Segment in Prague

Svobodová, Tereza January 2020 (has links)
This master's thesis examines the success of the sharing economy in the accommodation segment in Prague. It is grounded in theories that conceptualize the sharing economy as the result of social and value change, not only of technological change. Using online review data, the user experience of shared accommodation booked via Airbnb is compared with traditional accommodation booked via Booking, with a focus on the needs users report as satisfied and the values they report as fulfilled. Text mining techniques (topic modelling and sentiment analysis) were employed to process the data. The main finding is that in Prague the sharing-economy accommodation models meet the growing societal need to fulfil post-material values in the market much better than traditional accommodation models (hotels, hostels, boarding houses). Airbnb users reflect social and emotional values in their reviews more often, even though most sharing-economy accommodations in Prague do not involve any physical sharing with the host. The thesis thus offers a distinctive perspective on the Airbnb phenomenon in the Czech context and contributes to the discussion of why the market share of the sharing economy in Prague's accommodation segment has been growing while traditional models have stagnated.
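The abstract does not name the tools used; as one hedged illustration of the topic modelling and sentiment analysis pipeline it describes, the sketch below fits a small LDA model to review texts with gensim and scores them with NLTK's VADER lexicon. The library choices, toy reviews, and parameters are assumptions, not details from the thesis.

```python
# A minimal sketch of topic modelling + sentiment analysis on review text,
# assuming gensim and NLTK are available; the thesis does not specify its tools.
from gensim import corpora, models
from nltk.sentiment import SentimentIntensityAnalyzer
import nltk

nltk.download("vader_lexicon", quiet=True)

reviews = [
    "lovely host, felt like staying with a friend in a real neighbourhood",
    "clean room, fast check-in, close to the metro and the old town",
]

# Topic modelling: build a bag-of-words corpus and fit a small LDA model.
tokenized = [r.lower().split() for r in reviews]
dictionary = corpora.Dictionary(tokenized)
bow_corpus = [dictionary.doc2bow(doc) for doc in tokenized]
lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary, random_state=0)
for topic_id, words in lda.show_topics(num_topics=2, num_words=5, formatted=False):
    print(topic_id, [w for w, _ in words])

# Sentiment analysis: score each review with the VADER compound score.
sia = SentimentIntensityAnalyzer()
for r in reviews:
    print(sia.polarity_scores(r)["compound"], r)
```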
522

Alternative Media Online News during the Covid-19 Pandemic: within a Swedish Context : A comparative content analysis of Alternative Media and Mainstream media newspapers online in Sweden during the coverage of the coronavirus pandemic.

Ekberg, Robin, Svensson, Gina Michelle January 2021 (has links)
Technology has allowed for the creation of online platforms for both reliable and unreliable news media. It is therefore important to understand the role and relevance of alternative news media today and how disinformation spreads online. In this paper, we examine the role of alternative news media websites in Sweden and how their spread of information during the coverage of the Covid-19 pandemic compares with that of mainstream media websites. We also explore how their portrayals of events during the pandemic differ and what makes this difference appealing to certain readers. Using Google, we searched for the top ten articles from four online news sources: two mainstream media sources and two alternative media sources. These articles were retrieved for a specific timeframe using keywords such as "Corona" and "Covid-19". The timeframe runs from the outbreak of the coronavirus in Sweden to one year after that event. The top ten search results from each news source, as returned by Google's algorithm, were processed with a text analysis tool called Voyant. The findings are presented in three formats: a Word Cloud, a TermsBerry, and a Distinctive Words Comparison. The results show a stark contrast in how different online newspapers, in particular alternative media news sources, reported on the coronavirus. Further research on a larger scale is recommended within the topic of online alternative media.
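The study itself used the Voyant tool interactively; the sketch below is a rough programmatic analogue of its Distinctive Words Comparison, scoring terms by how over-represented they are in one corpus relative to another. The toy corpora, the smoothing, and the scoring rule are assumptions, not the study's settings.

```python
# Rough "distinctive words" comparison between two small corpora
# (alternative vs. mainstream coverage); all data here is invented.
from collections import Counter
import math

alternative = ["the state ignored the crisis", "restrictions failed the public"]
mainstream = ["the health agency issued new recommendations", "cases rose in the region"]

def counts(texts):
    return Counter(w for t in texts for w in t.lower().split())

alt_c, main_c = counts(alternative), counts(mainstream)
vocab = set(alt_c) | set(main_c)
alt_n, main_n = sum(alt_c.values()), sum(main_c.values())

# Log-ratio of smoothed relative frequencies: positive scores mark terms
# over-represented in the alternative-media corpus, negative in the mainstream one.
scores = {
    w: math.log(((alt_c[w] + 1) / (alt_n + len(vocab)))
                / ((main_c[w] + 1) / (main_n + len(vocab))))
    for w in vocab
}
for word, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(f"{word:>15s}  {score:+.2f}")
```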
523

Une approche générique pour l'analyse croisant contenu et usage des sites Web par des méthodes de bipartitionnement / A generic approach to combining web content and usage analysis using biclustering algorithms

Charrad, Malika 22 March 2010 (has links)
In this thesis, we propose WCUM (Web Content and Usage Mining based approach), a new approach that links the content analysis of a website to the analysis of its usage in order to better understand the general behavior of the site's visitors. This work is based on the CROKI2 block clustering algorithm, implemented with two different optimization strategies that we compare through experiments on artificially generated data. To mitigate the problem of determining the number of clusters on rows and columns, we propose generalizing several indices, originally introduced to evaluate the partitions produced by ordinary clustering algorithms, to the bipartitions produced by simultaneous clustering algorithms. To evaluate the performance of these indices on data with a bicluster structure, we also propose an algorithm for generating artificial biclusters, used to run simulations and validate the results. Experiments on artificial data as well as an application to real data were carried out to assess the effectiveness of the proposed approach.
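CROKI2 is not distributed in common libraries, so the sketch below uses scikit-learn's SpectralCoclustering as an illustrative stand-in for the block-clustering step on a sessions-by-pages usage matrix. The synthetic matrix, its dimensions, and the number of clusters are assumptions, not material from the thesis.

```python
# Illustrative block clustering of a small usage matrix (sessions x pages).
# SpectralCoclustering stands in for CROKI2, which is not publicly packaged.
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(0)
# Synthetic usage data: 20 sessions x 12 pages with two planted blocks.
X = rng.poisson(0.5, size=(20, 12)).astype(float)
X[:10, :6] += rng.poisson(4, size=(10, 6))   # block 1
X[10:, 6:] += rng.poisson(4, size=(10, 6))   # block 2

model = SpectralCoclustering(n_clusters=2, random_state=0)
model.fit(X)

print("session clusters:", model.row_labels_)
print("page clusters:   ", model.column_labels_)
```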
524

A visual analytics approach for multi-resolution and multi-model analysis of text corpora : application to investigative journalism / Une approche de visualisation analytique pour une analyse multi-résolution de corpus textuels : application au journalisme d’investigation

Médoc, Nicolas 16 October 2017 (has links)
As the production of digital texts grows exponentially, a greater need to analyze text corpora arises in many application domains, since such corpora constitute inexhaustible sources of shared information and knowledge. We therefore propose in this thesis a novel visual analytics approach for the analysis of text corpora, implemented for the real and concrete needs of investigative journalism. Motivated by the problems and tasks identified with a professional investigative journalist, the visualizations and interactions were designed through a user-centered methodology involving the user throughout the development process. Specifically, investigative journalists formulate hypotheses and exhaustively explore the field under investigation in order to multiply the sources supporting their working hypothesis. Carrying out such tasks in a large corpus is a daunting endeavor and requires visual analytics software addressing several challenging research issues covered in this thesis. First, the difficulty of making sense of a large text corpus lies in its unstructured nature. We resort to the Vector Space Model (VSM) and its strong relationship with the distributional hypothesis, leveraged by multiple text mining algorithms, to discover the latent semantic structure of the corpus. Topic models and biclustering methods are well suited to the extraction of coarse-grained topics, i.e. groups of documents concerning similar topics, each represented by a set of terms extracted from the textual contents. We provide a new Weighted Topic Map visualization that conveys a broad overview of the coarse-grained topics, allowing quick interpretation of contents through multiple tag clouds while depicting the topical structure, such as the relative importance of topics and their semantic similarity. Although exploring the coarse-grained topics helps locate a topic of interest and its neighborhood, identifying specific facts, viewpoints or angles related to events or stories requires a finer level of structuring to represent topic variants. This nested structure, revealed by Bimax, a pattern-based overlapping biclustering algorithm, captures in biclusters the co-occurrences of terms shared by multiple documents and can disclose facts, viewpoints or angles related to events or stories. The thesis tackles issues related to the visualization of a large number of overlapping biclusters by organizing term-document biclusters into a hierarchy that limits term redundancy and conveys their commonalities and specificities. We evaluated the utility of our software through a usage scenario and a qualitative evaluation with an investigative journalist. In addition, the co-occurrence patterns of the topic variants revealed by Bimax are determined by the enclosing topical structure supplied by the coarse-grained topic extraction method that is run beforehand.
Nonetheless, little guidance exists regarding the choice of that method and its impact on the exploration and comprehension of topics and topic variants. We therefore conducted both a numerical experiment and a controlled user experiment to compare two topic extraction methods, namely Coclus, a disjoint biclustering method, and hierarchical Latent Dirichlet Allocation (hLDA), an overlapping probabilistic topic model. The theoretical foundations of both methods are systematically analyzed by relating them to the distributional hypothesis. The numerical experiment provides statistical evidence of the difference between the topical structures produced by the two methods, and the controlled experiment shows their impact on the comprehension of topics and topic variants from the analyst's perspective. (...)
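As an illustration of the term-document biclustering idea underlying the topic-variant analysis, the sketch below groups terms by identical document support in a binary occurrence matrix. This is a deliberately simplified stand-in for Bimax, which enumerates inclusion-maximal all-ones submatrices; the documents and terms are invented.

```python
# A minimal sketch of extracting overlapping term-document biclusters from a
# binary occurrence structure; a simplified stand-in for Bimax.
from collections import defaultdict

docs = {
    "d1": {"mayor", "contract", "construction", "bribe"},
    "d2": {"mayor", "contract", "construction"},
    "d3": {"mayor", "election", "campaign"},
    "d4": {"election", "campaign", "donation"},
}

# Map each term to the exact set of documents it occurs in.
support = defaultdict(set)
for doc, terms in docs.items():
    for term in terms:
        support[term].add(doc)

# Terms sharing the same document support define a bicluster (those docs x those terms).
biclusters = defaultdict(set)
for term, doc_set in support.items():
    biclusters[frozenset(doc_set)].add(term)

for doc_set, terms in sorted(biclusters.items(), key=lambda kv: -len(kv[0])):
    if len(doc_set) >= 2 and len(terms) >= 2:
        print(sorted(doc_set), "share", sorted(terms))
```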
525

Condition-specific differential subnetwork analysis for biological systems

Jhamb, Deepali 04 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Biological systems behave differently under different conditions. Advances in sequencing technology over the last decade have led to the generation of enormous amounts of condition-specific data. However, these measurements often fail to identify low-abundance genes and proteins that can be biologically crucial. In this work, a novel text-mining system was first developed to extract condition-specific proteins from the biomedical literature. The literature-derived data were then combined with proteomics data to construct condition-specific protein interaction networks. Further, an innovative condition-specific differential analysis approach was designed to identify key differences, in the form of subnetworks, between any two given biological systems. The framework developed here was applied to understand the differences between the limb regeneration-competent Ambystoma mexicanum and the regeneration-deficient Xenopus laevis. This study provides an exhaustive systems-level analysis comparing regeneration-competent and regeneration-deficient subnetworks to show how different molecular entities interconnect with each other and are rewired during the formation of an accumulation blastema in regenerating axolotl limbs. It also demonstrates the importance of literature-derived knowledge, specific to limb regeneration, in augmenting the systems biology analysis. Our findings show that although proteins might be common to the two biological conditions, they can differ substantially in their biological and topological properties within the subnetwork. The knowledge gained from the distinguishing features of limb regeneration in amphibians can be used in the future to chemically induce regeneration in mammalian systems. The approach developed in this dissertation is scalable and adaptable to understanding differential subnetworks between any two biological systems. This methodology will not only facilitate the understanding of the biological processes and molecular functions that govern a given system but also provide novel insights into the pathophysiology of diseases and conditions.
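As an illustration of the kind of differential, condition-specific comparison described above, the sketch below contrasts the degree and betweenness centrality of proteins shared by two toy interaction networks. The networks, node names, and chosen metrics are assumptions, not the dissertation's data or method.

```python
# A minimal sketch of comparing the topological role of proteins shared by two
# condition-specific interaction networks; all nodes and edges are made up.
import networkx as nx

competent = nx.Graph([("p53", "mdm2"), ("p53", "bmp2"), ("bmp2", "sox9"), ("sox9", "col2a1")])
deficient = nx.Graph([("p53", "mdm2"), ("bmp2", "noggin"), ("noggin", "sox9")])

shared = set(competent) & set(deficient)
bc_comp = nx.betweenness_centrality(competent)
bc_defi = nx.betweenness_centrality(deficient)

# A protein present in both conditions can still be "rewired": compare its
# degree and betweenness centrality across the two networks.
for protein in sorted(shared):
    print(
        f"{protein:>8s}  degree {competent.degree(protein)} vs {deficient.degree(protein)}"
        f"  betweenness {bc_comp[protein]:.2f} vs {bc_defi[protein]:.2f}"
    )
```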
526

Text Mining for Social Harm and Criminal Justice Applications

Pandey, Ritika 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Increasing rates of social harm events and the plethora of available text data demand text mining techniques, not only to better understand the causes of such events but also to develop optimal prevention strategies. In this work, we study three social harm issues: crime topic models, transitions into drug addiction, and homicide investigation chronologies. Topic modeling for the categorization and analysis of crime report text allows for more nuanced categories of crime than the official UCR categorizations, with important implications for hotspot policing; we investigate the extent to which topic models that improve coherence lead to higher levels of crime concentration. We further explore transitions into drug addiction using Reddit data, proposing a prediction model that classifies users' transitions from a casual drug discussion forum to a recovery drug discussion forum and estimates the likelihood of such transitions. Through this study we offer insights into modern drug culture and provide tools with potential applications in combating the opioid crisis. Lastly, we present a knowledge graph based framework for homicide investigation chronologies that may aid investigators in analyzing homicide case data and also allows for post hoc analysis of the key features that determine whether a homicide is ultimately solved. For this purpose we perform named entity recognition to identify witnesses, detectives, and suspects in the chronologies, use keyword expansion to identify various evidence types, and finally link these entities and evidence to construct a homicide investigation knowledge graph. We compare the performance of several methodologies for these sub-tasks and analyze the association between network statistics of the knowledge graph and homicide solvability.
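As an illustration of the transition-prediction task, the sketch below trains a text classifier that estimates a user's likelihood of moving from casual drug discussion to a recovery forum. The features, classifier, and toy posts are assumptions rather than the dissertation's actual pipeline.

```python
# A minimal sketch of the user-transition prediction task: given a user's posts
# in casual drug discussion forums, predict whether the user later appears in a
# recovery forum. The toy data and model choices are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

user_posts = [
    "tried a higher dose this weekend, anyone compare sources",
    "mostly weekend use, curious about mixing",
    "cant get through the morning without it anymore",
    "thinking about tapering, the withdrawals scare me",
]
transitioned = [0, 0, 1, 1]  # 1 = user later posted in a recovery forum

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(user_posts, transitioned)

# predict_proba gives the estimated likelihood of a future transition.
new_post = ["need it every day just to feel normal"]
print(model.predict_proba(new_post)[0][1])
```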
527

Extrakce sémantických vztahů z textu / Extraction of Semantic Relations from Text

Schmidt, Marek January 2008 (has links)
The topic of this thesis is the extraction of semantic relations from English text, with a focus on exploiting a dependency parser. A method based on syntactic patterns is proposed and evaluated, alongside several statistical methods over syntactic features. The methods are applied to the extraction of the hypernymy relation and evaluated against the WordNet thesaurus. A system for extracting semantic relations from text is designed and implemented on the basis of these methods.
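As an illustration of pattern-based hypernymy extraction over a parsed sentence, the sketch below applies the classic "X such as Y" pattern to spaCy noun chunks. The pattern, the example sentence, and the model choice are assumptions; the thesis defines its own pattern set over a dependency parser.

```python
# A minimal sketch of "X such as Y" hypernymy extraction over spaCy noun chunks.
# The dependency-label filter is a rough heuristic, not the thesis's method.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Musical instruments such as violins and flutes require regular care.")

pairs = []
chunks = list(doc.noun_chunks)
for i, chunk in enumerate(chunks):
    # Look for a noun chunk immediately followed by "such as".
    after = doc[chunk.end:chunk.end + 2]
    if len(after) == 2 and after[0].lower_ == "such" and after[1].lower_ == "as":
        hypernym = chunk.root.lemma_
        # Keep only following chunks that hang off the preposition or its conjuncts.
        for hypo in chunks[i + 1:]:
            if hypo.root.dep_ in ("pobj", "conj"):
                pairs.append((hypo.root.lemma_, hypernym))

print(pairs)  # expected, roughly: [('violin', 'instrument'), ('flute', 'instrument')]
```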
528

Dynamic Probabilistic Risk Assessment of Nuclear Power Generation Stations

Elsefy, Mohamed HM January 2021 (has links)
Risk assessment is essential for nuclear power plants (NPPs) due to the complex dynamic nature of such systems-of-systems, as well as the devastating impacts of nuclear accidents on the environment, public health, and the economy. Lessons learned from the Fukushima nuclear accident demonstrated the importance of enhancing current risk assessment methodologies and developing efficient early warning decision support tools. Static probabilistic risk assessment (PRA) techniques (e.g., event and fault tree analysis) have been extensively adopted in nuclear applications to ensure NPPs comply with safety regulations. However, numerous studies have highlighted the limitations of static PRA methods, such as their failure to consider the dynamic hardware/software/operator interactions inside the NPP and the timing/sequence of events. In response, several dynamic probabilistic risk assessment (DPRA) methodologies have been developed and have continuously evolved over the past four decades to overcome the limitations of static PRA methods. DPRA presents a comprehensive approach to assessing the risks associated with complex, dynamic systems. However, current DPRA approaches face challenges associated with the intra- and interdependence within and between different complex NPP systems and the massive amount of data that needs to be analyzed and rapidly acted upon. In response to these limitations of previous work, the main objective of this dissertation is to develop a physics-based DPRA platform and an intelligent data-driven prediction tool for NPP safety enhancement under normal and abnormal operating conditions. The results of this dissertation demonstrate that the developed DPRA platform is capable of simulating the dynamic interaction between different NPP systems and estimating the temporal probability of core damage under different transients, with significant advantages in both computational time and data storage. The developed platform can also explicitly account for the effects of uncertainties in the NPP's physical parameters and operating conditions on the plant's response and the probability of core damage. Furthermore, an intelligent decision support tool, developed based on artificial neural networks (ANNs), can significantly improve the safety of NPPs by providing plant operators with fast and accurate predictions specific to the NPP in question. Such rapid prediction minimizes the need to resort to idealized physics-based simulators to predict the underlying complex physical interactions. Moving forward, the developed ANN model can be trained on plant operational data, the plant operating experience database, and data from rare-event simulations to account for, for example, plant ageing over time, operational transients, and rare events when predicting plant behavior. Such an intelligent tool can be key for NPP operators and managers in taking rapid and reliable actions under abnormal conditions. / Thesis / Doctor of Philosophy (PhD)
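The abstract does not detail the ANN architecture or training data of the developed tool; as a hedged illustration of the surrogate-modelling idea, the sketch below trains a small multilayer perceptron on synthetic operating-condition data. The inputs, the response variable, and the network size are all assumptions.

```python
# A minimal sketch of a data-driven surrogate: a small neural network maps plant
# operating parameters to a response variable (here, a made-up peak temperature).
# The training data is synthetic, standing in for simulator or plant data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Synthetic operating conditions: [coolant flow (kg/s), power level (%), inlet T (K)]
X = rng.uniform([300.0, 60.0, 550.0], [500.0, 100.0, 580.0], size=(500, 3))
# Synthetic response: a smooth invented function plus noise.
y = 900 + 2.5 * X[:, 1] - 0.4 * (X[:, 0] - 300) + 0.5 * (X[:, 2] - 550) + rng.normal(0, 5, 500)

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
)
surrogate.fit(X, y)

# Once trained, the surrogate returns predictions far faster than a physics-based simulator.
print(surrogate.predict([[420.0, 85.0, 565.0]]))
```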
529

Computational Prediction of Protein-Protein Interactions on the Proteomic Scale Using Bayesian Ensemble of Multiple Feature Databases

Kumar, Vivek 01 December 2011 (has links)
No description available.
530

Mining the imagery : A text mining and news media content analysis of the Swedish country image in the Guardian, 2010-2020

Tjellander, Axel January 2022 (has links)
In the field of public diplomacy, it has become increasingly relevant to develop analytical operations in order to gain knowledge of the perceptions of foreign publics and their attitudes towards countries, a construct known as the country image. In recent years, research on public diplomacy has been increasingly concerned with the impact of media on country images due to the continuous expansion and fragmentation of the hybrid media landscape. Academics and practitioners alike must navigate large quantities of data and different choices regarding the prioritization of sources and methods in order to find suitable analytical frameworks with which to properly investigate the country image as an analytical object. This thesis addresses these analytical challenges by developing a diachronic text mining analysis of the Swedish country image in the British newspaper the Guardian. Sweden has in recent years drawn attention from the international media, for example during the so-called refugee crisis of 2015 and during the Covid-19 pandemic of 2020. In the extensive media coverage of these major events, the Swedish course of action was met with a wide range of approval and criticism. Using a mixed-method approach of distant and close reading, this thesis approaches the news coverage of Sweden in the Guardian through a content analysis designed in three analytical steps: topic modelling, collocation and concordance analysis, and diachronic corpus-assisted discourse analysis. In each of these steps, the appearance of different dimensions of the country image was explored using the dimensional model for integrative country image analysis developed by Ingenhoff and Buhmann. The design of the mixed-method approach showcases how large quantities of textual data can be analyzed in a new diachronic approach, bringing strands of research from the digital humanities to the field of public diplomacy.
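As an illustration of the collocation and concordance step, the sketch below ranks bigrams by likelihood ratio and prints keyword-in-context lines for a node word using NLTK. The toy headlines, the node word, and the association measure are assumptions, not the corpus or settings of the thesis.

```python
# A minimal sketch of collocation and concordance analysis with NLTK on a toy
# set of headlines mentioning Sweden; all text below is invented.
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder
from nltk.text import Text

headlines = (
    "sweden keeps schools open during pandemic . "
    "critics question sweden strategy on herd immunity . "
    "sweden strategy divides experts as cases rise ."
)
tokens = headlines.split()

# Collocations: bigrams ranked by the likelihood-ratio association measure.
bigram_measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(tokens)
print(finder.nbest(bigram_measures.likelihood_ratio, 5))

# Concordance: keyword-in-context lines for the node word "sweden".
Text(tokens).concordance("sweden", width=50)
```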
