  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Triple Non-negative Matrix Factorization Technique for Sentiment Analysis and Topic Modeling

Waggoner, Alexander A 01 January 2017 (has links)
Topic modeling refers to the process of algorithmically sorting documents into categories based on some common relationship between them; this shared relationship is considered the “topic” of the documents. Sentiment analysis refers to the process of algorithmically sorting a document into a positive or negative category depending on whether the document expresses a positive or negative opinion on its topic. In this paper, I consider the open problem of classifying documents into both a topic category and a sentiment category. This has a direct application in the retail industry, where companies may want to scour the web for documents (blogs, Amazon reviews, etc.) that both speak about their product and give an opinion on it (positive, negative, or neutral). My solution uses a Non-negative Matrix Factorization (NMF) technique to determine the topic classifications of a document set, and then factors the matrix further to discover the sentiment within each topic category.
102

Inferência das áreas de atuação de pesquisadores / Inference of the area of expertise of researchers

Fonseca, Felipe Penhorate Carvalho da 30 January 2018 (has links)
Nowadays, a wide range of academic data is available on the web. With this information it is possible to solve tasks such as discovering specialists in a given area, identifying potential productivity-grant holders, and suggesting collaborators, among others. However, the success of these tasks depends on the quality of the data used, since incorrect or incomplete data tend to impair the performance of the applied algorithms. Several academic data repositories do not contain, or do not require, explicit information about researchers' areas of expertise. In the Lattes curricula this information exists, but it is entered manually by the researcher without any kind of validation (and so may be outdated, missing, or even incorrect). The present work applied machine learning techniques to infer researchers' areas of expertise from the data registered on the Lattes platform. The titles of scientific publications were used as the data source, enriched with semantically related information from other bases; diverse representations were adopted for the title text, along with other academic information such as supervised theses and research projects. The objective was to evaluate whether data enrichment improves the performance of the classification algorithms tested, and to analyze the contribution of factors such as social network metrics, title language, and the hierarchical structure of the areas themselves. The proposed technique can be applied to other academic data (it is not restricted to the Lattes platform), but data from that platform were used to test and validate the proposed solution. As a result, it was identified that the text-enrichment technique did not improve inference accuracy. However, social network metrics and numerical representations improved inference over state-of-the-art techniques, as did the use of the hierarchical class structure itself, which returned the best results overall.
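A minimal sketch of the kind of inference this abstract describes: predicting a researcher's area from publication titles. The titles and area labels below are invented toy examples, not Lattes data, and a plain TF-IDF plus logistic regression pipeline stands in for the richer representations and hierarchical classifiers the thesis actually evaluates.

```python
# Toy area-of-expertise inference from publication titles.
# Titles, labels, and the model choice are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

titles = [
    "convolutional networks for image recognition",
    "deep learning for object detection",
    "gene expression profiles in tumor cells",
    "protein folding and molecular dynamics",
]
areas = ["computer science", "computer science", "biology", "biology"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(titles, areas)

# an unseen title is assigned the area whose vocabulary it overlaps
pred = clf.predict(["neural networks for speech recognition"])[0]
print(pred)
```

The thesis additionally exploits the hierarchy of areas and social network metrics, which a flat classifier like this one ignores.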
104

Modélisation thématique probabiliste des services web / Probabilistic topic modeling of web services

Aznag, Mustapha 03 July 2015 (has links)
Work on web service management generally draws on techniques from information retrieval, data mining, and linguistic analysis. More recently, probabilistic topic models, originally developed for extracting themes from document corpora, have emerged as an alternative. The contribution of this thesis sits at the intersection of topic modeling and web service management. Its principal objective is to study and propose probabilistic algorithms for modeling the thematic structure of web services. First, we consider an unsupervised approach to tasks such as web service clustering and discovery. We then combine topic modeling with formal concept analysis to propose a hierarchical clustering method for web services, which enables a new interactive discovery process based on generalization and specialization operators over the retrieved results. Finally, we propose a semi-supervised method for automatic web service annotation (automatic tagging). We implemented these proposals in an online search engine called WS-Portal, which contains 7063 providers, 115 sub-classes of category, and 22236 web services crawled from the Internet. WS-Portal employs several techniques (web service clustering, tag recommendation, service rating and monitoring) to improve the effectiveness of web service discovery. We also integrate parameters such as the availability and reputation of web services, and more generally quality of service, to improve their ranking and therefore the relevance of search results.
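The unsupervised step described above can be sketched with a standard LDA topic model applied to short service descriptions. The descriptions below are invented, and scikit-learn's off-the-shelf LDA stands in for the thesis's own probabilistic models and the WS-Portal data.

```python
# Hedged sketch: standard LDA over toy web service descriptions.
# Neither the thesis's models nor its crawled data are used here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

descriptions = [
    "currency exchange rate conversion service",
    "convert currency amounts between exchange rates",
    "weather forecast temperature service",
    "temperature and weather forecast by city",
]

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(descriptions)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)   # per-service topic mixtures; rows sum to 1
print(theta.round(2))
```

Services can then be clustered, or matched against a query, by comparing these topic mixtures, which is the basis of the discovery and clustering tasks the abstract mentions.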
105

Réseaux de service web : construction, analyse et applications / Web service networks : analysis, construction and applications

Naim, Hafida 13 December 2017 (has links)
This thesis goes beyond the description of web services to consider their structure as networks (interaction networks and similarity networks). We propose methods based on pattern mining, probabilistic topic models, and formal concept analysis to improve the quality of discovered services. Three contributions follow: (1) diversified service discovery, (2) service recommendation, and (3) consistency of detected service communities. We first model the space of web services as networks. To diversify the discovery results for a given query, we propose a probabilistic method that balances relevance, diversity, and service density. For complex requests, which may require combining multiple web services, we exploit the constructed interaction network and the notion of diversity in graphs to identify candidate service compositions. We also propose a new hybrid recommendation system based on both content and collaborative filtering; its originality lies in combining probabilistic topic models with frequent pattern mining to capture the maximal common semantics of a set of services. Finally, instead of processing services individually, we consider sets of services grouped into service communities for recommendation. In this context we propose a method that combines network topology and semantics to evaluate the quality and semantic consistency of detected communities, and to rank community detection algorithms.
106

What makes an (audio)book popular? / Vad gör en (ljud)bok populär?

Barakat, Arian January 2018 (has links)
Audiobook reading has traditionally been used for educational purposes but has in recent times grown into a popular alternative to more traditional means of consuming literature. In order to differentiate themselves from other players in the market, but also to provide their users with enjoyable literature, several audiobook companies have lately directed their efforts toward producing their own content. Creating highly rated content is, however, no easy task, and one recurring challenge is how to make a bestselling story. In an attempt to identify latent features shared by successful audiobooks and to evaluate proposed methods for literary quantification, this thesis employs an array of frameworks from the fields of statistics, machine learning, and natural language processing on data and literature provided by Storytel, Sweden's largest audiobook company. We analyze and identify important features from a collection of 3077 Swedish books concerning their promotional and literary success. Considering features from the aspects Metadata, Theme, Plot, Style, and Readability, we found that popular books are typically published as part of a book series, cover 1-3 central topics, and write about, e.g., daughter-mother relationships and human closeness, but also hold, on average, a higher proportion of verbs and a lower degree of short words. Despite successfully identifying these and other factors, we recognized that none of our models predicted “bestseller” adequately, and future work may wish to study additional factors, employ other models, or even use different metrics to define and measure popularity. From our evaluation of the literary quantification methods, namely topic modeling and narrative approximation, we found that these methods are in general suitable for Swedish texts but require further improvement and experimentation before they can be successfully deployed for Swedish literature.
For topic modeling, we recognized that using nouns alone provided more interpretable topics and that including character names tended to pollute the topics. We also identified and discussed the problem of word inflections when modeling topics for morphologically complex languages, and noted that additional preprocessing such as word lemmatization or post-training text normalization may improve the quality and interpretability of topics. For the narrative approximation, we discovered that the method currently suffers from three shortcomings: (1) unreliable sentence segmentation, (2) unsatisfactory dictionary-based sentiment analysis, and (3) possible loss of sentiment information induced by translation. Despite examining only a handful of literary works, we further found that books originally written in Swedish had narratives that were more cross-language consistent than books written in English and then translated to Swedish.
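The inflection problem mentioned above can be shown in miniature: without normalization, inflected forms fragment a word's counts across the vocabulary. The tiny lemma table below is hand-made for illustration, not a real Swedish lemmatizer.

```python
# Toy illustration of how inflected forms fragment topic-model counts,
# and how a lemma lookup collapses them. The lemma table is invented.
from collections import Counter

# Swedish inflections of "hund" (dog): hundar / hunden / hundarna
lemma = {"hundar": "hund", "hunden": "hund", "hundarna": "hund"}
tokens = ["hund", "hundar", "hunden", "katt"]

raw = Counter(tokens)                              # counts per surface form
normalized = Counter(lemma.get(t, t) for t in tokens)  # counts per lemma
print(raw["hund"], normalized["hund"])             # prints "1 3"
```

A topic model built on the raw counts would treat the three forms as unrelated words, diluting the evidence for the underlying concept; lemmatization (or post-training normalization, as the thesis suggests) merges them.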
107

Labeling Clinical Reports with Active Learning and Topic Modeling / Uppmärkning av kliniska rapporter med active learning och topic modeller

Lindblad, Simon January 2018 (has links)
Supervised machine learning models require a high-quality labeled data set in order to perform well. Text data often exists in abundance, but it is usually not labeled. Labeling text data is a time-consuming process, especially when multiple labels can be assigned to a single document. The purpose of this thesis was to make the labeling process for clinical reports as effective and effortless as possible by evaluating different multi-label active learning strategies. The goal of the strategies was to reduce the number of labeled documents a model needs and to increase the quality of those documents. With the strategies, an accuracy of 89% was achieved with 2500 reports, compared to 85% with random sampling. In addition, 85% accuracy could be reached after labeling 975 reports, compared to 1700 reports with random sampling.
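One round of uncertainty sampling, the family of strategies this thesis evaluates against random sampling, can be sketched as follows. The data is synthetic and binary, whereas the thesis works with multi-label clinical reports; in a real setup the selected document would be sent to a human annotator.

```python
# Sketch of one uncertainty-sampling round on synthetic binary data.
# The thesis's multi-label setting and clinical data are not reproduced.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(100, 2))
y_pool = (X_pool[:, 0] > 0).astype(int)   # oracle labels (hidden in practice)

# seed the labeled set with clear examples of each class
order = np.argsort(X_pool[:, 0])
labeled = list(order[:5]) + list(order[-5:])
unlabeled = [i for i in range(100) if i not in set(labeled)]

clf = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])
proba = clf.predict_proba(X_pool[unlabeled])

margin = np.abs(proba[:, 1] - 0.5)        # small margin = model is uncertain
query = unlabeled[int(margin.argmin())]   # this sample goes to the annotator
labeled.append(query)
```

Repeating this loop concentrates annotation effort on the documents the model is least sure about, which is how fewer labeled reports can reach the same accuracy as a larger randomly sampled set.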
108

Explorer et apprendre à partir de collections de textes multilingues à l'aide des modèles probabilistes latents et des réseaux profonds / Mining and learning from multilingual text collections using topic models and word embeddings

Balikas, Georgios 20 October 2017 (has links)
Text is one of the most pervasive and persistent sources of information. Content analysis of text, in its broad sense, refers to methods for studying and retrieving information from documents. Nowadays, with ever-increasing amounts of text becoming available online in several languages and styles, content analysis is of tremendous importance, as it enables a variety of applications. To this end, unsupervised representation learning methods such as topic models and word embeddings constitute prominent tools. The goal of this dissertation is to study and address challenging problems in this area, focusing both on the design of novel text mining algorithms and tools and on how these tools can be applied to text collections written in one or several languages. In the first part of the thesis we focus on topic models, and more precisely on how to incorporate prior information about text structure into them. Topic models are built on the bag-of-words premise, under which words are exchangeable. While this assumption simplifies the calculation of conditional probabilities, it results in a loss of information. To overcome this limitation we propose two mechanisms that extend topic models by integrating knowledge of text structure. We assume that documents are partitioned into thematically coherent text segments. The first mechanism assigns the same topic to all words of a segment. The second capitalizes on the properties of copulas, a tool mainly used in economics and risk management to model the joint probability density of random variables while having access only to their marginals. The second part of the thesis explores bilingual topic models for comparable corpora with explicit document alignments. Typically, a document collection for such models is in the form of comparable document pairs.
The documents of a pair are written in different languages and are thematically similar. Unless they are translations, the documents of a pair are only similar to some extent. Yet representative topic models assume that paired documents have identical topic distributions, which is a strong and limiting assumption. To overcome it, we propose novel bilingual topic models that incorporate the notion of cross-lingual similarity between the paired documents into their generative and inference processes. Calculating this cross-lingual document similarity is a task in itself, which we propose to address using cross-lingual word embeddings. The last part of the thesis concerns the use of word embeddings and neural networks for three text mining applications. First, we discuss polylingual document classification, where we argue that translations of a document can be used to enrich its representation. Using an auto-encoder to obtain these robust document representations, we demonstrate improvements in the task of multi-class document classification. Second, we explore multi-task sentiment classification of tweets, arguing that jointly training classification systems on correlated tasks can improve performance; we show how to achieve state-of-the-art performance on a sentiment classification task using recurrent neural networks. The third application we explore is cross-lingual information retrieval: given a document written in one language, the task consists in retrieving the most similar documents from a pool of documents written in another language. In this line of research, we show that adapting the transportation problem to the task of estimating document distances yields important improvements.
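The transport-based document distance mentioned in the last application can be conveyed in one dimension with SciPy's Wasserstein distance. Real cross-lingual retrieval works in a shared multilingual embedding space and solves a full transport problem over word vectors; the 1-D coordinates below are invented solely to show that the true translation ends up closer than an unrelated document.

```python
# Minimal 1-D illustration of a transport-based document distance.
# Coordinates are hypothetical stand-ins for word embeddings.
from scipy.stats import wasserstein_distance

doc_en = [0.1, 0.2, 0.9]       # words of an English document
doc_fr = [0.12, 0.22, 0.88]    # its French counterpart, nearby in the space
doc_other = [2.0, 2.1, 2.2]    # an unrelated document

d_pair = wasserstein_distance(doc_en, doc_fr)
d_far = wasserstein_distance(doc_en, doc_other)
print(round(d_pair, 3), round(d_far, 3))
```

Ranking candidate documents by this distance retrieves the thematically closest ones, which is the cross-lingual retrieval setting the abstract describes.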
109

Exploring NMF and LDA Topic Models of Swedish News Articles

Svensson, Karin, Blad, Johan January 2020 (has links)
The ability to automatically analyze and segment news articles by their content is a growing research field. This thesis explores topic modeling, an unsupervised machine learning method, applied to Swedish news articles to generate topics that describe and segment the articles. Specifically, the algorithms non-negative matrix factorization (NMF) and latent Dirichlet allocation (LDA) are implemented and evaluated. Their usefulness in the news media industry is assessed by their ability to serve as a uniform categorization framework for news articles. This thesis fills a research gap by studying the application of topic modeling to Swedish news articles and contributes by showing that it can yield meaningful results. It is shown that Swedish text data requires extensive preparation for successful topic models and that nouns, especially common nouns, are the most suitable words to use. Furthermore, the results show that both NMF and LDA are valuable as content analysis tools and categorization frameworks, but they have different characteristics and are therefore optimal for different use cases. Lastly, the conclusion is that topic models have issues, since they can generate unreliable topics that could mislead news consumers, but that they can nonetheless be powerful methods for analyzing and segmenting articles efficiently at scale within organizations. The thesis project is a collaboration with one of Sweden's largest media groups, and its results have led to a topic modeling implementation for large-scale content analysis to gain insight into readers' interests.
110

Sdílená ekonomika v kontextu postmateriálních hodnot: případ segmentu ubytování v Praze / Sharing Economy in the Context of Postmaterial Values: The Case of Accommodation Segment in Prague

Svobodová, Tereza January 2020 (has links)
This master's thesis examines the success of the sharing economy in the accommodation segment in Prague. It builds on theories that conceptualize the sharing economy as the result of social and value change, not only a technological one. Using online review data, the user experiences of shared accommodation via Airbnb and of traditional accommodation via Booking are compared, with a focus on users' satisfied needs and fulfilled values. Text mining techniques (topic modelling and sentiment analysis) were employed to process the data. The major result is that in Prague, sharing economy accommodation models meet society's growing need to fulfil post-material values in the market much better than traditional accommodation models (hotels, hostels, boarding houses). In their experiences, Airbnb users reflect social and emotional values more often, even though most sharing economy accommodations in Prague do not involve any physical sharing with the host. The thesis thus brings a unique perspective on the Airbnb phenomenon in the Czech context and contributes to the discussion of why the market share of the sharing economy in Prague's accommodation segment has been growing while traditional models stagnated.
