71

The Impact of Training Epoch Size on the Accuracy of Collaborative Filtering Models in GraphChi Utilizing a Multi-Cyclic Training Regimen

Curnalia, James W. 04 June 2013 (has links)
No description available.
72

A sentiment analysis approach to manage the new item problem of Slope One / En ansats att använda attitydsanalys för att hantera problemet med nya föremål i Slope one

Johansson, Jonas, Runnman, Kenneth January 2017 (has links)
This report targets a specific problem for recommender algorithms, the new item problem, and proposes a method with sentiment analysis as its main tool. Collaborative filtering algorithms base their predictions on a database of users and their ratings of items. The new item problem occurs when a new item is introduced into the database: because the item has no ratings, it remains unavailable as a recommendation until it has gathered some. Products that users rate in online communities often have experts who get access to them before the consumer release date, and recommender systems can take advantage of this by using the experts as initial guides for predictions. The method used in this report relies on sentiment analysis to translate written expert reviews into ratings based on the sentiment of the text, so that a new item is added together with the ratings of experts in the field. The results of this study show that the recommender algorithm Slope One generates more reliable recommendations for a newly added item with a group of expert users than without. The added expert users must have ratings for other items as well as for the new item in order to make the recommendations more accurate.
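As a sketch of the scheme the abstract builds on, weighted Slope One predicts a user's rating of a target item from average rating deviations between item pairs; seeding the database with expert ratings gives a brand-new item nonzero co-rating support from day one. A minimal illustrative implementation (ours, not the thesis code):

```python
def slope_one_predict(ratings, user, target):
    """Predict `user`'s rating of `target` with weighted Slope One:
    average the rating deviations between item pairs, weighted by the
    number of users who rated both items."""
    num, den = 0.0, 0
    for other, r_other in ratings[user].items():
        if other == target:
            continue
        # deviation of `target` from `other`, over co-rating users
        diffs = [r[target] - r[other] for r in ratings.values()
                 if target in r and other in r]
        if diffs:
            dev = sum(diffs) / len(diffs)
            num += (r_other + dev) * len(diffs)
            den += len(diffs)
    return num / den if den else None  # None: no co-rating support
```

With no user rating both the target and some other item, the prediction is undefined; adding expert users who rated both the new item and older items is exactly what closes that gap in the thesis.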
73

A CONCEPT-BASED FRAMEWORK AND ALGORITHMS FOR RECOMMENDER SYSTEMS

NARAYANASWAMY, SHRIRAM 08 October 2007 (has links)
No description available.
74

Latent Factor Models for Recommender Systems and Market Segmentation Through Clustering

Zeng, Jingying 29 August 2017 (has links)
No description available.
75

User Interfaces for Topic Management of Web Sites

Amento, Brian 15 December 2003 (has links)
Topic management is the task of gathering, evaluating, organizing, and sharing a set of web sites for a specific topic. Current web tools do not provide adequate support for this task. We created and continue to develop the TopicShop system to address this need. TopicShop includes (1) a web crawler/analyzer that discovers relevant web sites and builds site profiles, and (2) user interfaces for information workspaces. We conducted an empirical pilot study comparing user performance with TopicShop vs. Yahoo!. Results from this study were used to improve the design of TopicShop. A number of key design changes were incorporated into a second version of TopicShop based on the results and user comments from the pilot study, including: (1) the tasks of evaluation and organization are treated as integral instead of separable, (2) spatial organization is important to users and must be well supported in the interface, and (3) distinct user and global datasets help users deal with the large quantity of information available on the web. A full empirical study using the second iteration of TopicShop covered more areas of the World Wide Web and validated the results of the pilot study. Across the two studies, TopicShop subjects found over 80% more high-quality sites (where quality was determined by independent expert judgements) while browsing only 81% as many sites and completing their task in 89% of the time. The site profile data that TopicShop provides -- in particular, the number of pages on a site and the number of other sites that link to it -- were the key to these results, as users exploited them to identify the most promising sites quickly and easily. We also evaluated a number of link- and content-based algorithms using a dataset of web documents rated for quality by human topic experts. Link-based metrics did a good job of picking out high-quality items. Precision at 5 (the common information retrieval metric indicating the fraction of the top five selected items that are actually high quality) is about 0.75, and precision at 10 is about 0.55; this is in a dataset where 32% of all documents were of high quality. Surprisingly, a simple content-based metric, which ranked documents by the total number of pages on their containing site, performed nearly as well. These studies give insight into users' needs for the task of topic management, and provide empirical evidence of the effectiveness of task-specific interfaces (such as TopicShop) for managing topical collections. / Ph. D.
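The precision-at-k figures quoted above follow the standard definition, which can be sketched in a few lines:

```python
def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k ranked documents that are actually
    relevant (here: judged high quality by the experts)."""
    return sum(1 for d in ranked_ids[:k] if d in relevant_ids) / k
```

So "precision at 5 is about 0.75" means roughly 3.75 of the top 5 ranked sites were judged high quality, against a 32% base rate.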
76

SNAP Biclustering

Chan, William Hannibal 22 January 2010 (has links)
This thesis presents a new ant-optimized biclustering technique known as SNAP biclustering, which runs faster and produces results of superior quality to previous techniques. Biclustering techniques have been designed to compensate for the weaknesses of classical clustering algorithms by allowing cluster overlap, and allowing vectors to be grouped for a subset of their defined features. These techniques have performed well in many problem domains, particularly DNA microarray analysis and collaborative filtering. A motivation for this work has been the biclustering technique known as bicACO, which was the first to use ant colony optimization. As bicACO is time intensive, much emphasis was placed on decreasing SNAP's runtime. The superior speed and biclustering results of SNAP are due to its improved initialization and solution construction procedures. In experimental studies involving the Yeast Cell Cycle DNA microarray dataset and the MovieLens collaborative filtering dataset, SNAP has run at least 22 times faster than bicACO while generating superior results. Thus, SNAP is an effective choice of technique for microarray analysis and collaborative filtering applications. / Master of Science
77

Towards Effective and Efficient Personalized Recommendation from a Spectral Perspective / スペクトル観点から効果的かつ効率的な個人推薦に向けて

Peng, Shaowen 25 March 2024 (has links)
Kyoto University / New-system doctorate by coursework / Doctor of Informatics / Degree No. Ko 25430 / Johaku No. 868 / 新制||情||145 (University Library) / Department of Social Informatics, Graduate School of Informatics, Kyoto University / (Examination committee) Prof. Takayuki Ito, Prof. Keishi Tajima, Prof. Hisashi Kashima, Prof. Kazunari Sugiyama (Osaka Seikei University) / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
78

The equivalence of contrastive learning and graph convolution in collaborative filtering

Wu, Yihong 09 1900 (has links)
In recent years, recommender systems have gained increased significance amidst the burgeoning information landscape. At the heart of these systems lies the collaborative filtering (CF) algorithm. Graph convolution and contrastive learning have recently emerged as prominent techniques within the CF framework, reflecting the evolving dynamics of this field. While many existing CF models incorporate these methods in their design, the foundational principles behind them have received limited in-depth analysis. This thesis provides a deeper analysis of both techniques to better understand their effects in CF. We bridge graph convolution, a pivotal element of graph-based models, with contrastive learning through a theoretical framework. By examining the learning dynamics and equilibrium of the contrastive loss function, we offer a fresh lens for understanding contrastive learning via graph-theory principles, namely the low-pass filter, emphasizing its capability to capture high-order connectivity. Building on this analysis, we further show that the graph convolution layers often used in graph-based models are not essential for modelling high-order connectivity and, on the contrary, may increase the risk of oversmoothing. Stemming from these findings, we introduce Simple Contrastive Collaborative Filtering (SCCF), a simple and effective algorithm based on matrix factorization and a modified contrastive loss function. The efficacy of the algorithm is demonstrated through extensive experiments on four public datasets. The main contribution of this thesis is the connection established, for the first time, between graph-based models and contrastive learning for CF. It explains why adding more convolution layers to graph models has not been effective, owing to the oversmoothing effect, and provides a new understanding of CF that can be used to build more effective CF models in the future.
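To make the ingredients of SCCF concrete, here is a hedged NumPy sketch of an InfoNCE-style contrastive loss over matrix-factorization embeddings. The thesis uses a modified contrastive loss whose exact form is not given here; this is only the standard variant, and the temperature is a placeholder:

```python
import numpy as np

def contrastive_mf_loss(user_emb, item_emb, pos_items, temperature=1.0):
    """InfoNCE-style contrastive loss on MF embeddings: pull each user
    toward its interacted (positive) item and push it away from the
    full item set, which serves as the negatives."""
    u = user_emb / np.linalg.norm(user_emb, axis=1, keepdims=True)
    v = item_emb / np.linalg.norm(item_emb, axis=1, keepdims=True)
    logits = (u @ v.T) / temperature                 # cosine similarities
    pos = logits[np.arange(len(pos_items)), pos_items]
    # log-sum-exp over all items is the repulsive (uniformity) term
    lse = np.log(np.exp(logits).sum(axis=1))
    return float(np.mean(lse - pos))                 # always >= 0
```

The attraction term aligns user and item embeddings while the log-sum-exp spreads them apart; the thesis's analysis relates this push-pull equilibrium to the low-pass filtering behaviour of graph convolution.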
79

Enhancing Cybersecurity in Agriculture 5.0: Probabilistic Machine Learning Approaches

Bissadu, Kossi Dodzi 05 1900 (has links)
Agriculture 5.0, marked by advanced technology and intensified human-machine collaboration, addresses significant challenges in traditional farming, such as labor shortages, declining productivity, climate change impacts, and gender disparities. This study assesses the effectiveness of probabilistic machine learning methods, with a specific focus on Bayesian networks (BN), collaborative filtering (CF), and fuzzy cognitive map (FCM) techniques, in enhancing cybersecurity risk analysis and management in Agriculture 5.0. It also explores unique cybersecurity threats within Agriculture 5.0. Using a systematic literature review (SLR), and leveraging historical data, case studies, experimental datasets, probabilistic machine learning algorithms, experiments, expert insights, and data analysis tools, the study evaluates the effectiveness of these techniques in improving cybersecurity risk analysis in Agriculture 5.0. BN, CF, and FCM were found effective in enhancing the cybersecurity of Agriculture 5.0. This research enhances our understanding of how probabilistic machine learning can bolster cybersecurity within Agriculture 5.0. The study's insights will be valuable to industry stakeholders, policymakers, and cybersecurity professionals, aiding the protection of agriculture's digital transformation amid increasing technological complexity and cyber threats, and setting the stage for future investigations into Agriculture 5.0 security.
80

Fördelar med att applicera Collaborative Filtering på Steam : En utforskande studie / Benefits of Applying Collaborative Filtering on Steam : An explorative study

Bergqvist, Martin, Glansk, Jim January 2018 (has links)
The use of recommender systems is everywhere. On popular platforms such as Netflix and Amazon, you are always given new recommendations on what to consume next, based on your specific profile. This is done by cross-referencing users and products to find probable patterns. The aim of this study was to compare the two main ways of generating recommendations on an unorthodox dataset where "best practice" might not apply. Recommendation efficiency was therefore compared between content-based filtering and collaborative filtering on the gaming platform Steam, in order to establish whether there is potential for a better solution. We approached this by gathering data from Steam, building a representative baseline content-based filtering recommendation engine modelled on what Steam currently uses, and a competing collaborative filtering engine based on a standard implementation. In the course of this study, we found that while content-based filtering performance initially grew linearly as the player base of a game increased, collaborative filtering performance grew exponentially from a small player base and then plateaued at a level exceeding the comparison. The practical consequence of these findings is a justification for applying collaborative filtering even on smaller, more complex datasets than is normally done; content-based filtering is usually preferred there because it is easier to implement and yields decent results, but with our results showing such a large discrepancy even for basic models, that attitude may well change. Collaborative filtering has rarely been used on such multifaceted datasets, but our results show that the potential to exceed content-based filtering is rather easily attainable on them as well. This potentially benefits all combined purchase/community platforms, since the usage of purchases can be monitored online, allowing misrepresentational factors to be adjusted as they appear.
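As a toy illustration of the kind of standard collaborative filtering implementation compared in the study (our sketch, not the thesis code), user-based CF on a binary game-ownership matrix scores unseen games by similarity-weighted sums over other users' rows:

```python
import numpy as np

def user_based_scores(matrix, user):
    """Score items for `user` by cosine-similarity-weighted sums over
    the other users' interaction rows (a standard CF baseline)."""
    norms = np.linalg.norm(matrix, axis=1, keepdims=True)
    unit = matrix / np.where(norms == 0, 1, norms)
    sims = unit @ unit[user]            # cosine similarity to every user
    sims[user] = 0                      # exclude the user themselves
    scores = sims @ matrix              # weighted sum of others' rows
    scores[matrix[user] > 0] = -np.inf  # mask games already owned
    return scores
```

The exponential-then-plateau behaviour the study reports fits this picture: with few co-owners the similarity signal is sparse, and each additional player adds overlapping rows that sharpen every similarity estimate at once.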
