1

Supporting the Student Research-paper Writing Process: Activities, Technologies, and Sources

Gilbert, Sarah 15 August 2011 (has links)
Students use a myriad of disparate technologies and information sources to carry out a variety of activities during the research-paper writing process. Although this process is considered a complex task, no "information appliance" exists to support it. Using established frameworks of the research-paper writing process, an online survey was conducted to describe how the activities, sources, and technologies students use relate to the various phases of that process. Connections were drawn between activities and technologies to show how an information appliance might support the process from onset to completion. Results show that the activities conducted during the process are iterative. One design implication is that some technologies, such as those that support searching, need not be visible at all times but must always be available. These connections provide further insight into the student research-paper writing process and offer an example of how design may support the task.
2

A methodology for evaluating digital repositories to ensure long-term preservation of and access to their contents / Una metodología de evaluación de repositorios digitales para asegurar la preservación en el tiempo y el acceso a los contenidos

De Giusti, Marisa Raquel January 2014 (has links)
An institutional repository is a collection of digital documents whose purpose is to manage, organize, store, preserve, and disseminate in open access the output of an organization's activities. The variety of materials held in an institutional repository depends on the content policy set by the institution itself; in principle, the contents could be kept in perpetuity and the repository implemented so as to guarantee this, but that also depends on the preservation policy the institution adopts.

The central objective of this thesis is therefore to propose an evaluation methodology for institutional repositories. The aim is to improve repository quality and standardization, to aid interoperability, and to increase the visibility of the works that an institution, in this case an educational one, holds in a repository. Among the specific objectives is ensuring the preservation of the repository's contents, so that they remain accessible and legible to both human users and machines. Achieving this requires understanding the field in which these repositories operate, framed by the Open Access Initiative that defined their scope and functions, and formulating a sound definition of them in order to answer the fundamental questions: what is a repository, and which structure and functions characterize it best?

To answer these questions, the models that have been used over time to represent a digital repository were surveyed, with a critical eye on the usefulness of each and on how much of the proposed structures and functions persists in today's repositories. Their similarities and differences were analyzed to identify the best-fitting model and to determine whether more than one model was needed to represent the repository. Once a model was chosen, the evaluation parameters relevant to the stated objectives were determined.

The central institutional repository of the Universidad Nacional de La Plata, the Servicio de Difusión de la Creación Intelectual (SEDICI), was selected as the object of study and experimentation, anticipating that the conclusions drawn, in terms of lines of action for meeting the stated objectives, could be extended to other institutional repositories. The state of the repository's digital objects was surveyed in order to determine the actions to take, propose changes, and outline a long-term plan for preserving the contents so that they are always available and in a condition that keeps them legible to users. Brief conclusions and reflections follow each analysis. The thesis closes with the general conclusions and with future work, mainly concerning the selection of and migration to formats better suited to preservation, new validation tasks for the metadata associated with the contents, and the elaboration of a comprehensive preservation plan.
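To suggest what the metadata-validation and format-migration tasks mentioned above might involve, here is a minimal sketch of our own, not the thesis's actual procedure: the required field list and the format whitelist are illustrative assumptions, not rules used by SEDICI.

```python
# Hypothetical sketch of a repository preservation audit.
# The required fields and the format whitelist are illustrative
# assumptions, not the rules used by SEDICI or the thesis.

REQUIRED_FIELDS = {"dc:title", "dc:creator", "dc:date", "dc:identifier"}
PRESERVATION_FORMATS = {".pdf", ".xml", ".txt", ".tif"}  # assumed whitelist

def audit_record(record):
    """Return a list of problems found in one repository record.

    `record` is a dict with a `metadata` dict and a `files` list,
    e.g. {"metadata": {"dc:title": "..."}, "files": ["thesis.doc"]}.
    """
    problems = []
    missing = REQUIRED_FIELDS - record["metadata"].keys()
    if missing:
        problems.append(f"missing metadata fields: {sorted(missing)}")
    for name in record["files"]:
        ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
        if ext not in PRESERVATION_FORMATS:
            problems.append(f"{name}: format {ext or '(none)'} may need migration")
    return problems

record = {"metadata": {"dc:title": "Tesis", "dc:creator": "De Giusti"},
          "files": ["tesis.doc"]}
print(audit_record(record))
# flags the two missing fields and the .doc file as a migration candidate
```

A real audit would of course follow the institution's own content and preservation policies; the point of the sketch is only that such checks can be run mechanically across the whole repository.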
3

A World Wide Web application for personalized dietary services using ontologies / Εφαρμογή παγκόσμιου ιστού για προσωποποιημένες υπηρεσίες διαιτολογίας με την χρήση οντολογιών

Οικονόμου, Φλώρα 11 June 2013 (has links)
The World Wide Web has become a huge data repository and keeps growing rapidly, whereas the human capacity to find, process, and understand the available content remains constant. Search engines facilitate searching the Web and have become an integral part of web users' daily lives. Users, however, have different needs, preferences, and characteristics, and while navigating large web structures they may lose sight of their goal. Web personalization, i.e., tailoring the returned results to the individual, is one of the most promising approaches to alleviating information overload by providing customized navigation experiences.

This dissertation presents a methodology for personalizing a search engine's results so that they correspond to users' preferences and dietary characteristics. The methodology has two parts: an offline part and an online part. The offline part uses a search engine's log files and the users' dietary characteristics to extract information about their preferences. Then, using an ontology built specifically for this work, the users' selections are semantically categorized and their profiles are constructed. A clustering algorithm then groups users according to their dietary characteristics and their choices in the search engine. In the online part, the personalization algorithm re-ranks the search engine's results by exploiting the semantic characterization of those results and the user clusters created offline: results that best match the preferences and characteristics of the cluster to which the user belongs are promoted to the top of the result list. Experimental evaluation confirmed the desired personalization results based on the semantic user clusters.
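To make the online re-ranking step concrete, the following is a minimal sketch under assumptions of our own: the category vocabulary, the cluster profile, and the dot-product scoring are invented for illustration and are not the thesis's ontology-driven algorithm.

```python
# Hypothetical sketch of cluster-based re-ranking. The category
# weights and profiles are invented; the thesis's ontology-driven
# semantic categorization is far richer.

def rerank(results, cluster_profile):
    """Reorder results so those matching the cluster profile come first.

    `results` is a list of (url, {category: weight}) pairs produced by
    semantic categorization; `cluster_profile` maps categories to the
    user cluster's preference weights.
    """
    def score(item):
        _, categories = item
        return sum(w * cluster_profile.get(cat, 0.0)
                   for cat, w in categories.items())
    return sorted(results, key=score, reverse=True)

# A user clustered as "low-sodium vegetarian" (assumed profile):
profile = {"vegetarian": 0.8, "low_sodium": 0.6, "dessert": 0.1}
results = [
    ("example.org/steak", {"meat": 0.9}),
    ("example.org/salad", {"vegetarian": 0.9, "low_sodium": 0.5}),
    ("example.org/cake",  {"dessert": 0.8}),
]
for url, _ in rerank(results, profile):
    print(url)  # salad first, then cake, then steak
```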
4

Leyline: a provenance-based desktop search system using a graphical sketchpad user interface

Ghorashi, Seyed Soroush 07 December 2011 (has links)
While there are powerful keyword search systems that index all kinds of resources, including emails and web pages, people have trouble recalling the semantic facts, such as the name, location, edit dates, and keywords, that uniquely identify resources in their personal repositories. Reusing information exacerbates this problem. A rarely used approach is to leverage episodic memory of file provenance. Provenance is traditionally defined as "the history of ownership of a valued object". For documents, we consider not only ownership but also the operations performed on the document, especially those that relate it to other people, events, or resources. This thesis investigates the potential advantages of using provenance data in desktop search, and consists of two manuscripts. First, a numerical analysis using field data from a longitudinal study shows that provenance information can effectively be used to identify files and resources in realistic repositories. We introduce the Leyline, the first provenance-based search system that supports dynamic relations between files and resources such as copy/paste, save-as, and file rename. The Leyline allows users to search by drawing queries as graphs in a sketchpad, and it overlays provenance information that may help users identify targets or explore information flow. A limited controlled experiment showed that this approach is feasible in terms of time and effort. Second, we explore the design of the Leyline and compare it to previous provenance-based desktop search systems in terms of their underlying assumptions and focus, search coverage and flexibility, and features and limitations. / Graduation date: 2012
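As a rough illustration of provenance-based lookup, not the Leyline's actual implementation, the sketch below stores provenance events as labeled edges and finds the files reachable from a remembered starting point; the event names, data, and graph representation are assumptions mirroring the relations the abstract mentions.

```python
# Hypothetical sketch of searching a file-provenance graph.
# Edge labels like "copy_paste" and "save_as" mirror the relations
# mentioned in the abstract; the data and traversal are illustrative.

from collections import defaultdict, deque

edges = defaultdict(list)  # node -> [(relation, node), ...]

def record(src, relation, dst):
    edges[src].append((relation, dst))

record("email:boss",   "copy_paste",  "notes.txt")
record("notes.txt",    "save_as",     "draft_v1.doc")
record("draft_v1.doc", "file_rename", "report_final.doc")

def reachable_from(start):
    """All resources connected to `start` by a chain of provenance events."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for relation, nxt in edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

# "I remember it started from that email from my boss..."
print(reachable_from("email:boss"))
# {'notes.txt', 'draft_v1.doc', 'report_final.doc'}
```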
5

Mining Clickthrough Data To Improve Search Engine Results

Veilumuthu, Ashok 05 1900 (has links) (PDF)
In this thesis, we aim at improving search result quality by utilizing the search intelligence (history of searches) available in the form of clickthrough data. We address two key issues, namely 1) relevance feedback extraction and fusion, and 2) deciphering search query intentions.

Relevance feedback extraction and fusion: Existing search engines depend heavily on the web linkage structure, in the form of hyperlinks, to determine the relevance and importance of documents. But these are collective judgments given by page authors and hence prone to collaborative spamming. To overcome spamming attempts and language semantics issues, it is also important to incorporate user feedback on document relevance. Since users can hardly be motivated to give explicit feedback on search quality, it becomes necessary to consider implicit feedback that can be collected from search engine logs. Though a number of implicit feedback measures have been proposed in the literature, we have not been able to identify studies that aggregate those measures in a meaningful way into a final ranking of documents. In this thesis, we first evaluate two implicit feedback measures, 1) click sequence and 2) time spent on the document, for the uniqueness of their content. We develop a mathematical programming model to collate the feedback collected from different sessions into a single ranking of documents. We use Kendall's τ rank correlation to determine the uniqueness of the information content in the individual feedback measures. Experimental evaluation on the top 30 selected queries from an actual search log confirms that the two measures are not in perfect agreement and hence incremental information can potentially be derived from them. Next, we study the feedback fusion problem, in which user feedback from various sessions must be combined meaningfully. Preference aggregation is a classical problem in economics, and we study a variation of it in which the rankers, i.e., the feedbacks, possess different expertise. We extend the generalized Mallows model to model the feedback rankings given in user sessions. We propose single-stage and two-stage aggregation frameworks that combine different feedbacks into one final ranking by taking their respective expertise into consideration. We show that the complexity of the parameter estimation problem is exponential in the number of documents and queries. We develop two scalable heuristics, namely 1) a greedy algorithm and 2) a weight-based heuristic, that closely approximate the solution. We also establish the goodness of fit of the model by testing it on actual log data with a log-likelihood ratio test. As independent evaluation of documents is not available, we conduct experiments on appropriately devised synthetic datasets to examine the merits of the heuristics. The experimental results confirm that expertise-oriented aggregation of feedback is possible, producing orderings better than both the best single ranker and an equal-weight aggregator. Motivated by this result, we extend the aggregation framework to handle infinite rankings for meta-search applications. Aggregation results on synthetic datasets show the extension to be fruitful and scalable.

Deciphering search query intentions: A search engine often retrieves a huge list of documents based on their relevance scores for a given query. Such a presentation strategy may work if the submitted query is very specific, homogeneous, and unambiguous. But often the queries posed to a search engine are too short to be specific and hence too ambiguous to identify the exact information need (e.g., "jaguar"). These ambiguous and heterogeneous queries invite results from diverse topics, and users may have to sift through the entire list to find the information they need, which can be difficult. The task can be simplified by organizing the search results under meaningful subtopics, helping users move directly to their topic of interest and ignore the rest. We develop a method to determine the various possible intentions behind a short, generic, ambiguous query using information from the clickthrough data. We propose a two-stage clustering framework that co-clusters queries and documents into intentions that can readily be presented on demand. For this problem, we adapt spectral bipartite partitioning, extending it to automatically determine the number of clusters hidden in the log data. The algorithm has been tested on selected ambiguous queries, and the results demonstrate its ability to distinguish among user intentions.
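For the agreement test between the two implicit feedback measures, a minimal sketch follows: it computes Kendall's τ between a click-sequence ranking and a time-spent ranking for the same session. The example rankings are made up, and scipy's kendalltau stands in for whatever exact formulation the thesis uses.

```python
# Hypothetical sketch: do click order and dwell time agree?
# Rankings are invented; tau near 1 means the two implicit feedback
# measures carry largely the same information, tau near 0 means each
# contributes something the other does not.

from scipy.stats import kendalltau

# Rank of each of six documents (1 = best) under the two measures.
click_sequence_rank = [1, 2, 3, 4, 5, 6]
time_spent_rank     = [2, 1, 5, 3, 6, 4]

tau, p_value = kendalltau(click_sequence_rank, time_spent_rank)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
# A tau well below 1 suggests the measures are not in perfect
# agreement, matching the thesis's finding on its 30 test queries.
```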
6

Evaluation of Queries on Linked Distributed XML Data / Auswertung von Anfragen an verteilte, verlinkte XML Daten

Behrends, Erik 18 December 2006 (has links)
No description available.
7

Inducing Conceptual User Models

Müller, Martin Eric 29 April 2002 (has links)
User modeling, and machine learning for user modeling, have both become important research topics and key techniques in recent adaptive systems. One of the most intriguing problems of the 'information age' is how to filter relevant information from the huge amount of available data. This problem is tackled by using models of the user's interest to increase precision and discriminate interesting information from uninteresting data. However, any user modeling approach suffers from several major drawbacks. First, user models built by the system need to be inspectable and understandable by the user. Second, users are generally unwilling to give feedback on their satisfaction with the delivered results, and without any evidence of the user's interest it is hard to induce a hypothetical user model at all. Finally, most current systems do not draw a clear line between domain knowledge and the user model, which makes the adequacy of a user model hard to determine. This thesis presents the novel approach of conceptual user models. Conceptual user models are easy to inspect and understand and allow the system to explain its actions to the user. It is shown that inductive logic programming (ILP) can be applied to the task of inducing user models from feedback, and a method for using mutual feedback for sample enlargement is introduced. Results are evaluated independently of domain knowledge within a clear machine learning problem definition. The whole approach is realized in a meta web search engine called OySTER.
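To suggest what an inspectable conceptual user model might look like, here is a small sketch of our own, not Müller's ILP system: the induced model is represented as a readable conjunctive rule over document features that the user can inspect and the system can apply.

```python
# Hypothetical sketch of an inspectable conceptual user model:
# the model is a human-readable conjunction of feature tests,
# loosely in the spirit of rules an ILP learner might induce.

user_model = [("topic", "machine_learning"), ("language", "en")]

def explain(model):
    """Render the model so the user can inspect and correct it."""
    return "interesting(D) if " + " and ".join(
        f"{attr}(D, {value})" for attr, value in model)

def matches(doc, model):
    return all(doc.get(attr) == value for attr, value in model)

docs = [
    {"url": "a.org", "topic": "machine_learning", "language": "en"},
    {"url": "b.org", "topic": "cooking",          "language": "en"},
]
print(explain(user_model))
# interesting(D) if topic(D, machine_learning) and language(D, en)
print([d["url"] for d in docs if matches(d, user_model)])  # ['a.org']
```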
