About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Web personalization - a typology, instrument, and a test of a predictive model

Fan, Haiyan 15 May 2009 (has links)
No description available.
3

Borgo: Book Recommender For Reading Groups

Duzgun, Sayil 01 February 2012 (has links) (PDF)
With the increasing amount of data on the web, people need tools that help them single out the most significant items among thousands. Recommender systems emerged to fulfill this need, but most of them make recommendations for individuals. Some people, however, need recommendations for items they will use, or activities they will attend, together. Group recommenders serve this purpose. They diverge from individual recommenders in that they must aggregate the members of the group into a joint model, and to do so they need a user satisfaction function. There are two different aggregation methods and a few different satisfaction functions for the group recommendation process. Reading groups are a new domain for group recommenders. In this thesis we propose a web-based group recommender system for reading groups called BoRGo (Book Recommender for Reading Groups). BoRGo uses a new information filtering technique and presents a medium for post-recommendation processes. We present comparative evaluation results for this new technique in this thesis.
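The member-aggregation step mentioned in the abstract can be sketched minimally. The two standard strategies in the group-recommendation literature are averaging the members' predicted ratings and "least misery" (the group's score is its least satisfied member's score). This is an illustrative sketch only; the function names and ratings are invented and this is not BoRGo's actual technique.

```python
# Two standard group-aggregation strategies, sketched for illustration.

def aggregate_average(ratings):
    """Group score = mean of the members' predicted ratings."""
    return sum(ratings) / len(ratings)

def aggregate_least_misery(ratings):
    """Group score = the minimum member rating (nobody is very unhappy)."""
    return min(ratings)

# Predicted ratings of one candidate book for a three-member reading group.
predicted = [4.5, 3.0, 5.0]
print(aggregate_average(predicted))       # → 4.166666666666667
print(aggregate_least_misery(predicted))  # → 3.0
```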
4

Utilization of neural network and agent technology combination for distributed intelligent applications and services

Huhtinen, J. (Jouni) 25 October 2005 (has links)
Abstract The use of agent systems has increased enormously, especially in the field of mobile services, and intelligent services on the web have also grown rapidly. This thesis describes the use of software agent technology in mobile services and in decentralized intelligent services for the multimedia business. Both the Genie Agent Architecture (GAA) and the Decentralized International and Intelligent Software Architecture (DIISA) are described. Common problems in decentralized software systems are lack of intelligence, communication between software modules, and system learning. Another problem is the personalization of users and services. A third problem is matching user and service characteristics at the web application level in a non-linear way: web services should follow human steps and be capable of learning from human inputs and characteristics in an intelligent way. This third problem is addressed in this thesis, and solutions are presented in the form of two intelligent software architectures and services. The solutions are based on a combination of neural network and agent technology; to be more specific, on an intelligent agent that uses certain black-box information such as a Self-Organized Map (SOM). The process is as follows: information agents collect information from different sources such as the web, databases, users, other software agents, and the environment. The information is filtered and adapted into input vectors, and maps are created from the data entries of an SOM. Using the maps is very simple: input forms are completed by users (automatically or manually) or by user agents, input vectors are formed again and sent to a certain map, and the map gives several outputs which are passed through specific algorithms to an intelligent agent. The need for web intelligence and knowledge representation serving users is a current issue in many business solutions.
The main goal is to enable this by means of autonomous agents which communicate with each other using an agent communication language, and with users in their native languages, via several communication channels.
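The SOM pipeline described above (input vectors in, a trained map, winning nodes out) can be sketched with a minimal Self-Organized Map. The grid size, learning rate, neighborhood radius, and toy data below are illustrative assumptions, not the settings of GAA or DIISA.

```python
import numpy as np

# Minimal Self-Organized Map sketch: a grid of weight vectors is trained so
# that similar inputs activate nearby nodes ("best-matching units").
rng = np.random.default_rng(0)

class SOM:
    def __init__(self, rows, cols, dim):
        self.weights = rng.random((rows, cols, dim))
        self.coords = np.dstack(np.mgrid[0:rows, 0:cols])  # (rows, cols, 2) grid positions

    def bmu(self, x):
        # Best-matching unit: the node whose weight vector is closest to x.
        d = np.linalg.norm(self.weights - x, axis=2)
        return np.unravel_index(np.argmin(d), d.shape)

    def train(self, data, epochs=50, lr=0.5, radius=1.5):
        for _ in range(epochs):
            for x in data:
                r, c = self.bmu(x)
                # A Gaussian neighborhood around the BMU pulls nearby nodes toward x.
                dist2 = ((self.coords - (r, c)) ** 2).sum(axis=2)
                h = np.exp(-dist2 / (2 * radius ** 2))[:, :, None]
                self.weights += lr * h * (x - self.weights)

som = SOM(4, 4, 3)
data = rng.random((20, 3))      # stand-in for filtered agent-collected input vectors
som.train(data)
print(som.bmu(data[0]))         # grid position of the winning node
```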
5

A WEB PERSONALIZATION ARTIFACT FOR UTILITY-SENSITIVE REVIEW ANALYSIS

Flory, Long, Mrs. 01 January 2015 (has links)
Online customer reviews are web content voluntarily posted by the users of a product (e.g. a camera) or service (e.g. a hotel) to express their opinions about it. Online reviews are important resources for businesses and consumers. This dissertation focuses on the important consumer concern of review utility, i.e., the helpfulness or usefulness of online reviews in informing consumer purchase decisions. Review utility concerns consumers because not all online reviews are useful or helpful, and the number of online reviews of a product or service tends to be very large; manual assessment of review utility is not only time-consuming but also leads to information overload. To address this issue, review helpfulness research (RHR) has become a very active research stream dedicated to utility-sensitive review analysis (USRA) techniques for automating review utility assessment. Unfortunately, prior RHR solutions are inadequate, and RHR researchers have called for more suitable USRA approaches. Our research responds to this call by addressing the research problem: what is an adequate USRA approach? We address this problem by offering novel Design Science (DS) artifacts for personalized USRA (PUSRA). Our proposed solution extends not only RHR but also web personalization research (WPR), which studies web-based solutions for personalized web provision. We have evaluated the proposed solution by applying three evaluation methods: analytical, descriptive, and experimental. The evaluations corroborate the practical efficacy of our proposed solution. This research contributes what we believe to be (1) the first DS artifacts in the knowledge body of RHR and WPR, and (2) the first PUSRA contribution to USRA practice. Moreover, we consider our evaluations of the proposed solution the first comprehensive assessment of USRA solutions. In addition, this research contributes to the advancement of decision support research and practice.
The proposed solution is a web-based decision support artifact with the capability to substantially improve the accuracy of personalized webpage provision. Website designers can also apply our research solution to transform their work fundamentally; such transformation can add substantial value to businesses.
6

Crossing: A Framework To Develop Knowledge-based Recommenders In Cross Domains

Azak, Mustafa 01 February 2010 (has links) (PDF)
Over the last decade, an excessive amount of information has become available on the web, and information filtering systems such as recommender systems have become one of the most important technologies for overcoming the "Information Overload" problem by providing personalized services to users. Several studies have sought to improve the quality of recommendations and provide maximum user satisfaction within a single domain based on domain-specific knowledge. However, the current infrastructures of recommender systems do not provide complete mechanisms to meet user needs across several domains, and recommender systems show poor performance in cross-domain item recommendations. In this thesis, a dynamic framework is proposed that differs from previous work in that it focuses on the easy development of knowledge-based recommenders and offers intensive cross-domain capability with the help of domain knowledge. The framework has a generic and flexible structure in which data models and user interfaces are generated from ontologies. New recommendation domains can be integrated into the framework easily in order to improve recommendation diversity. Cross-domain recommendation is accomplished via an abstraction over domain features when direct matching of domain features is not possible because the domains are not very close to each other.
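The feature-abstraction idea can be illustrated with a toy sketch: when two domains share no directly matching features, each concrete feature is lifted to an abstract one and overlap is measured at the abstract level. All feature names and the abstraction table below are invented; the framework's actual ontology-driven matching is necessarily richer than this.

```python
# Toy cross-domain matching via feature abstraction (all names invented).
ABSTRACTION = {
    # movie domain               # book domain
    "film-noir": "dark-mood",    "gothic-novel": "dark-mood",
    "screwball": "humour",       "satire": "humour",
}

def abstract_profile(features):
    """Lift concrete domain features to their abstract counterparts."""
    return {ABSTRACTION.get(f, f) for f in features}

def cross_domain_score(source_item, target_item):
    """Jaccard overlap of the two items' abstract feature sets."""
    a, b = abstract_profile(source_item), abstract_profile(target_item)
    return len(a & b) / len(a | b)

liked_movie = {"film-noir", "screwball"}
candidate_book = {"gothic-novel"}
print(cross_domain_score(liked_movie, candidate_book))  # → 0.5
```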
7

Openmore: A Content-based Movie Recommendation System

Kirmemis, Oznur 01 May 2008 (has links) (PDF)
The tremendous growth of the Web has made the information overload problem increasingly serious. Users are often confused by the huge amount of information available on the internet and face the problem of finding the most relevant information that meets their needs. Recommender systems have proven to be an important solution to this problem. This thesis presents OPENMORE, a movie recommendation system primarily based on the content-based filtering technique. The distinctive point of this study lies in the methodology used to construct and update user and item profiles and in the optimizations used to fine-tune the constructed user models. The proposed system arranges movie content data as features in a set of dimension slots, where each feature is assigned a stable feature weight regardless of individual movies. These feature weights and the explicit feedback provided by the user are then used to construct the user profile, which is fine-tuned through a set of optimization mechanisms. Users can view their profiles, update them, and create multiple contexts in which they can provide negative and positive feedback on movies at the feature level.
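The profile-construction idea described above can be sketched minimally: features carry stable weights, explicit feedback accumulates into a user profile, and candidates are scored against that profile. The feature names, weights, and update rule are illustrative assumptions, not OPENMORE's actual optimization mechanisms.

```python
# Minimal content-based profile sketch (names and weights invented).
FEATURE_WEIGHTS = {"action": 1.0, "comedy": 1.0, "director:nolan": 1.5}

def update_profile(profile, movie_features, feedback):
    """Apply explicit feedback (+1 liked, -1 disliked) per movie feature,
    scaled by that feature's stable weight."""
    for f in movie_features:
        profile[f] = profile.get(f, 0.0) + feedback * FEATURE_WEIGHTS.get(f, 1.0)
    return profile

def score(profile, movie_features):
    """Rank a candidate movie by summing the profile's affinity for its features."""
    return sum(profile.get(f, 0.0) for f in movie_features)

profile = {}
update_profile(profile, {"action", "director:nolan"}, +1)  # user liked this movie
update_profile(profile, {"comedy"}, -1)                    # user disliked this one
print(score(profile, {"action"}))            # → 1.0
print(score(profile, {"comedy", "action"}))  # → 0.0
```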
8

Προσωποποιημένη προβολή περιεχομένου του Διαδικτύου με τεχνικές προ-επεξεργασίας, αυτόματης κατηγοριοποίησης και αυτόματης εξαγωγής περίληψης [Personalized presentation of Internet content with pre-processing, automatic categorization and automatic summary extraction techniques]

Πουλόπουλος, Βασίλειος 22 November 2007 (has links)
The scope of this MSc thesis is the extension and upgrade of the mechanism constructed for my undergraduate thesis, entitled "Construction of a Web Portal with Personalized Access to WWW content". That thesis covered the construction of a mechanism that began with information retrieval from the WWW (HTML pages from news portals) and concluded with the presentation of the information through a portal, after applying useful-text extraction, text pre-processing and automatic text categorization; it implemented a four-stage pre-processing algorithm and a categorization algorithm based on standard categories. The aim of the MSc thesis is to examine and implement further algorithms for each step of this process, compare them, and produce a higher-quality result; every stage of the mechanism is upgraded. The information retrieval module is based on a simple crawler fetching HTML pages from English-language news portals. The procedure relies on the fact that all major news portals provide RSS feeds: by reading the latest entries added to the feeds, we can locate the URLs of the HTML pages that contain the articles. The crawler then visits each URL and downloads the HTML page. These pages are filtered by the useful-text extraction mechanism so that only the body of the article is extracted. This procedure is based on the web-clipping technique: an HTML parser analyzes the DOM model of the page and locates the nodes (leaves) that contain large amounts of text and lie close to other nodes that also contain large amounts of text. These nodes are considered to contain the useful text. To the extracted articles we apply a five-stage pre-processing technique in order to extract the keywords that are representative of each article: we remove the punctuation and the numbers, convert all letters to lower case, remove the words shorter than 4 characters and the stopwords, and finally apply a stemming algorithm, so that only the root of each remaining keyword is kept. The keywords feed two interconnected analysis stages: summarization and categorization. The summarization stage constructs a representative summary of the article, while the categorization stage labels it. The labeling is not unique: the categorization applies multi-labeling, computing the cosine correlation of the article with each of the standard categories of the system, which were built from selected articles collected throughout the development of the mechanism. The summarization technique is based on heuristics: using lexical analysis of the text and facts about its words, we assign a score to each sentence. The higher the score of a sentence, the higher the probability that it is included in the summary, which consists of sentences of the original text. Besides a general summary for each article, the system can also create personalized summaries for each user. The combination of the two stages provides the information first shown to a user visiting the personalized portal, called perssonal. Personalization in the portal is based on the selections the user makes, the selections the user does not make, and the time the user spends on an article or on similar articles, in order to build a user profile. After a short period of time, the system adapts to the user's needs and presents only articles that match the user's preferences.
9

”Jag trivs ändå i min lilla bubbla” – En studie om studenters attityder till personalisering [”I'm still comfortable in my little bubble” – A study of students' attitudes towards personalization]

Hedin, Alice January 2016 (has links)
This study examines students' attitudes towards the development of personalization in web-based services and explores where the students' attitudes differ and converge. The empirical material was collected through five qualitative interviews and a web survey with 72 respondents. The study discusses the pros and cons of personalization, the possibilities of preventing it, and its possible consequences. The majority of the students have a positive attitude towards the personalization of web-based services. The students were most positive towards personalization of streaming services and least positive towards personalization of news services; to a large extent, users do not think news services should be personalized. There was a clear difference in the students' knowledge of the tools that can be used to prevent personalization: the more technical the programme the students were enrolled in, the better their knowledge of such tools. The results also showed that a large proportion of the students wished they could turn off the personalization function of services. Personalization has become a natural part of users' everyday life, and the majority of users do not have sufficient knowledge of the phenomenon; they therefore adopt a passive attitude and avoid reflecting on personalization and its possible consequences.
