51

Motivations to upload and tag images vs. tagging practice : an investigation of the Web 2.0 site Flickr

Stuart, Emma January 2012 (has links)
Digital images are being created and uploaded online in large numbers, and this can be attributed to three main interconnected factors: a change in attitudes towards photography and its role in society; technological advancements in the camera industry; and changes in web technology. Many of these digital images are being uploaded to Flickr, one of the most popular of the new web 2.0 image management and sharing applications. Flickr supports secure storage, sharing, online communities, and tagging. Tagging is intended to aid with the organisation, description, and retrieval of images, and as tagging in Flickr generally relates to personal images (e.g., photographs), the tags assigned are highly subjective. Previous research has investigated motivations to upload and tag images in web 2.0 image management and sharing applications, and the types of tags used in such applications, and a limited number of studies have attempted to correlate the two; however, no research has attempted to correlate the two whilst also taking into account the subjective nature of image tagging. Identifying the discrepancies between why people want to use Flickr and how they use it can help system designers and users to get the best out of these applications. This thesis compares users’ motivations to upload and tag their images in Flickr with how they tag their images in practice. The study used a quantitative survey methodology consisting of a semi-structured questionnaire to explore user motivations. Tagging practices were investigated via a manual tag classification scheme applied to automatically extracted Flickr tags. The questionnaire results show that Flickr users are primarily motivated to upload their images to Flickr for the purposes of social-communication (i.e., to draw attention to their images for comments and feedback and to express and present aspects of their personality and identity) and for social-organisation (i.e., so other people can access and view the images uploaded). However, tagging images in Flickr is not associated with the motivation of social-organisation and is instead more closely aligned to social-communication and self-organisation (i.e., as a way of organising images for personal search and retrieval). Self-communication (i.e., documenting and recording for memory and personal reflection) was not found to be a popular motivation for either uploading or tagging images. Flickr users who are motivated to upload their images for the purposes of self-organisation have the clearest tagging practice, and they predominantly use tags that are only meaningful to themselves. Gender, pro account status, number of images, and number of contacts are also strong predictors of tagging practice. However, overall, tagging practice is more closely associated with image content than with the motivation the user has. Although the results show that social-communication is the most prominent factor in motivating users both to upload and to tag their images, the findings reveal that users who are motivated to upload their images for the purposes of social-organisation are not using the system to its full potential.
52

Development of a web-based system for the provision and management of digital educational material using metadata

Πομόνης, Τζανέτος 11 September 2008 (has links)
- / -
53

Internet search techniques : using word count, links and directory structure as Internet search tools

Moghaddam, Mehdi Minachi January 2005 (has links)
As the Web grows in size it becomes increasingly important that ways are developed to maximise the efficiency of the search process and to index its contents with minimal human intervention. An evaluation is undertaken of current popular search engines, which use a centralised index approach. Using a number of search terms and metrics that measure similarity between sets of results, it was found that there is very little commonality between the outcome of the same search performed using different search engines. A semi-automated system for searching the web is presented, the Internet Search Agent (ISA), which employs a method for indexing based upon the idea of "fingerprint types". These fingerprint types are based upon the text and links contained in the web pages being indexed. Three examples of fingerprint type are developed: the first concentrates upon the textual content of the indexed files, while the other two augment this with the use of links to and from those files. By looking at the results returned as a search progresses, in terms of the number and content of results for the effort expended, comparisons can be made between the three fingerprint types. The ISA model allows the searcher to be presented with results in context and potentially allows distributed searching to be implemented.
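The abstract does not name the similarity metrics used to compare engines; one common choice is the Jaccard overlap between the sets of URLs two engines return for the same query. The sketch below illustrates that measure only (the engine names and result lists are placeholder values, not data from the thesis).

    import java.util.*;

    // Illustrative sketch: Jaccard similarity between two engines' result sets
    // for the same query. The thesis's actual metrics are not specified in the
    // abstract; this is one common way to quantify "commonality" between results.
    public class ResultOverlap {

        // |A ∩ B| / |A ∪ B| over the sets of returned URLs.
        static double jaccard(Set<String> a, Set<String> b) {
            if (a.isEmpty() && b.isEmpty()) return 1.0;
            Set<String> intersection = new HashSet<>(a);
            intersection.retainAll(b);
            Set<String> union = new HashSet<>(a);
            union.addAll(b);
            return (double) intersection.size() / union.size();
        }

        public static void main(String[] args) {
            // Hypothetical top results from two engines for one search term.
            Set<String> engineA = new HashSet<>(List.of(
                    "http://example.org/page1", "http://example.org/page2",
                    "http://example.net/page3"));
            Set<String> engineB = new HashSet<>(List.of(
                    "http://example.org/page2", "http://example.com/page4",
                    "http://example.net/page5"));
            System.out.printf("Jaccard overlap: %.2f%n", jaccard(engineA, engineB));
            // A value near 0 indicates the engines return largely disjoint results.
        }
    }

Averaging such a measure over many queries gives a simple indicator of how much two engines' indexes and rankings agree.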
54

Comparison of application programming interfaces (APIs) for web ontology management and development of an intelligent query submission mechanism

Τζογάνης, Γεράσιμος 25 January 2010 (has links)
In this diploma thesis, some of the main application programming interfaces (APIs) developed for managing Semantic Web ontology documents are studied and compared. These APIs are expressed in JAVA, and each offers particular features for ontology management. The APIs were studied and compared with respect to several characteristics, including the ontology languages they support, the particular language subsets ("dialects") with specific features that they support, and the knowledge-discovery capabilities they provide, with or without support for a query language. The goals of each API and its fields of application were also examined, together with its comparative advantages and the limitations it introduces. Finally, a mechanism for knowledge discovery from ontology documents is proposed and its development is described. The mechanism submits intelligent queries to OWL ontologies, with broader expressive capabilities than existing proposals, which are usually oriented towards RDF documents or impose various restrictions on the nature of the queries. For this purpose the recently proposed SPARQL-DL query language was used; further goals during development were the functionality and extensibility of the overall application. The first chapter introduces the Semantic Web, describing its general characteristics, the need for its services and the various approaches to its implementation, and presents the languages that currently participate, or are intended to participate, in its construction, with particular emphasis on the language recommended by the World Wide Web Consortium, OWL. The second chapter describes the query languages currently available for knowledge discovery from ontologies, with the particular features and weaknesses of each, again emphasising the W3C standard, SPARQL, and its proposed evolution, SPARQL-DL. The third chapter describes the most common APIs, the features they provide and their weaknesses: the ontology languages they support, the query languages and, more generally, the query methods they support, the application areas in which they are used, and the ways in which the APIs interconnect with each other or with other software components, such as reasoners. The fourth and final chapter describes the intelligent query submission mechanism and presents its development, covering the choice of tools and the broader design decisions. Experiments are carried out to evaluate the query mechanism and to investigate both the limitations introduced by the specific tools used and the possibilities they offer. Possible extensions of the application are presented, for example making it accessible to remote applications, and a possible usage scenario in a real environment is described to make the advantages and usefulness of such a knowledge-discovery mechanism more evident.
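As an illustration of the kind of query answering described above, the following sketch submits a SPARQL-DL query against an OWL ontology. It assumes the OWL API together with the open-source derivo SPARQL-DL API (QueryEngine/Query) and an attached reasoner; the ontology file, namespace and query are invented placeholders, and the thesis's actual mechanism and tool choices may differ.

    import java.io.File;

    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.OWLOntology;
    import org.semanticweb.owlapi.model.OWLOntologyManager;
    import org.semanticweb.owlapi.reasoner.OWLReasoner;
    import org.semanticweb.owlapi.reasoner.structural.StructuralReasonerFactory;

    import de.derivo.sparqldlapi.Query;
    import de.derivo.sparqldlapi.QueryEngine;
    import de.derivo.sparqldlapi.QueryResult;

    // Sketch of answering a SPARQL-DL query over an OWL ontology.
    // Assumes the OWL API plus the derivo SPARQL-DL API; the ontology file,
    // namespace and query are illustrative placeholders, not the thesis's setup.
    public class SparqlDlDemo {
        public static void main(String[] args) throws Exception {
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
            OWLOntology ontology =
                    manager.loadOntologyFromOntologyDocument(new File("university.owl"));

            // A DL reasoner (e.g. HermiT or Pellet) would normally be attached here;
            // the built-in structural reasoner is used only to keep the sketch self-contained.
            OWLReasoner reasoner = new StructuralReasonerFactory().createReasoner(ontology);

            QueryEngine engine = QueryEngine.create(manager, reasoner);
            Query query = Query.create(
                    "PREFIX ex: <http://example.org/university#> " +
                    "SELECT ?s WHERE { Type(?s, ex:Student), " +
                    "PropertyValue(?s, ex:enrolledIn, ex:SemanticWebCourse) }");

            QueryResult result = engine.execute(query);
            System.out.println(result);  // bindings for ?s, one per matching individual
        }
    }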
55

Term selection in information retrieval

Maxwell, Kylie Tamsin January 2016 (has links)
Systems trained on linguistically annotated data achieve strong performance for many language processing tasks. This encourages the idea that annotations can improve any language processing task if applied in the right way. However, despite widespread acceptance and availability of highly accurate parsing software, it is not clear that ad hoc information retrieval (IR) techniques using annotated documents and requests consistently improve search performance compared to techniques that use no linguistic knowledge. In many cases, retrieval gains made using language processing components, such as part-of-speech tagging and head-dependent relations, are offset by significant negative effects. This results in a minimal positive, or even negative, overall impact for linguistically motivated approaches compared to approaches that do not use any syntactic or domain knowledge. In some cases, it may be that syntax does not reveal anything of practical importance about document relevance. Yet without a convincing explanation for why linguistic annotations fail in IR, the intuitive appeal of search systems that ‘understand’ text can result in the repeated application, and mis-application, of language processing to enhance search performance. This dissertation investigates whether linguistics can improve the selection of query terms by better modelling the alignment process between natural language requests and search queries. It is the most comprehensive work on the utility of linguistic methods in IR to date. Term selection in this work focuses on the identification of informative query terms of 1-3 words that both represent the semantics of a request and discriminate between relevant and non-relevant documents. Approaches to word association are discussed with respect to linguistic principles, and evaluated with respect to semantic characterization and discriminative ability. Analysis is organised around three theories of language that emphasize different structures for the identification of terms: phrase structure theory, dependency theory and lexicalism. The structures identified by these theories play distinctive roles in the organisation of language. Evidence is presented regarding the value of different methods of word association based on these structures, and the effect of method and term combinations. Two highly effective, novel methods for the selection of terms from verbose queries are also proposed and evaluated. The first method focuses on the semantic phenomenon of ellipsis with a discriminative filter that leverages diverse text features. The second method exploits a term ranking algorithm, PhRank, that uses no linguistic information and relies on a network model of query context. The latter focuses queries so that 1-5 terms in an unweighted model achieve better retrieval effectiveness than weighted IR models that use up to 30 terms. In addition, unlike models that use a weighted distribution of terms or subqueries, the concise terms identified by PhRank are interpretable by users. Evaluation with newswire and web collections demonstrates that PhRank-based query reformulation significantly improves the performance of verbose queries by up to 14% compared to highly competitive IR models, and is at least as good for short, keyword queries with the same models. Results illustrate that linguistic processing may help with the selection of word associations but does not necessarily translate into improved IR performance.
Statistical methods are necessary to overcome the limits of syntactic parsing and word adjacency measures for ad hoc IR. As a result, probabilistic frameworks that discover, and make use of, many forms of linguistic evidence may deliver small improvements in IR effectiveness, but methods that use simple features can be substantially more efficient and equally, or more, effective. Various explanations for this finding are suggested, including the probabilistic nature of grammatical categories, a lack of homomorphism between syntax and semantics, the impact of lexical relations, variability in collection data, and systemic effects in language systems.
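The abstract does not give PhRank's formulation, so the sketch below shows only the general idea of a network model of query context: candidate words from a verbose request are ranked by their weighted degree in a small co-occurrence graph and the top few are kept as the focused query. This is a simplified stand-in, not the PhRank algorithm; the stop-word list, window size and example request are illustrative assumptions.

    import java.util.*;

    // Illustrative stand-in for graph-based term selection from a verbose request:
    // rank each word by its weighted degree in a co-occurrence graph built with a
    // small sliding window. This is NOT the PhRank algorithm, whose formulation is
    // not given in the abstract; it only shows the "network model of query context"
    // idea. The stop-word list and window size are illustrative choices.
    public class TermSelectionSketch {

        static final Set<String> STOP = Set.of(
                "the", "a", "an", "of", "for", "and", "to", "in", "on", "about", "any");

        public static void main(String[] args) {
            String request = "find any reports about the environmental impact of "
                    + "oil spills on marine wildlife in the north sea";
            String[] tokens = request.toLowerCase().split("\\s+");

            // Weighted degree: count co-occurrences of each word with its
            // neighbours inside a window of +/- 2 positions.
            Map<String, Integer> degree = new HashMap<>();
            int window = 2;
            for (int i = 0; i < tokens.length; i++) {
                if (STOP.contains(tokens[i])) continue;
                for (int j = i + 1; j <= Math.min(i + window, tokens.length - 1); j++) {
                    if (STOP.contains(tokens[j])) continue;
                    degree.merge(tokens[i], 1, Integer::sum);
                    degree.merge(tokens[j], 1, Integer::sum);
                }
            }

            // Keep the few best-connected words as the focused query.
            degree.entrySet().stream()
                    .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                    .limit(5)
                    .forEach(e -> System.out.println(e.getKey() + "\t" + e.getValue()));
        }
    }

A full term-selection model would also draw on collection statistics and pseudo-relevant documents rather than the request alone; the point here is only the ranking-by-connectivity step.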
56

Disruptive innovation in the creation and dissemination of institutional repositories of open educational resources

Barchik, Rita Galgani January 2015 (has links)
Advisor: Prof. Dr. Glauco Gomes de Menezes / Master's dissertation - Universidade Federal do Paraná, Setor de Ciências Sociais Aplicadas, Graduate Programme in Administration. Defence: Curitiba, 18/03/2015 / Includes references: fls. 119-129 / Area of concentration: Innovation and technology / Abstract: The research problem of this study was to identify which elements of disruptive innovation can be characterised in the joint creation of an inter-institutional repository of Open Educational Resources (OER) by two Brazilian federal higher education institutions: the Universidade Federal do Paraná (UFPR) and the Universidade Tecnológica Federal do Paraná (UTFPR). The theoretical and empirical analysis examined the criteria adopted for inserting didactic content in the repository, the licensing policies for the content produced, the dissemination and faculty engagement strategies, and the technological resources adopted for creating the repository, from the perspective of Disruptive Innovation Theory. The research was qualitative and exploratory-descriptive in character, conducted as a single case study. Data were collected through semi-structured interviews, questionnaires and documents, and were examined using the content analysis technique, taking into account the interviewees' feedback and detailed, clear, information-rich description across the data sources to increase the reliability and validity of the information. It was found that, in the process of creating the OER repository, a gradual rupture is occurring in some of the elements analysed, promoting a break in regulated markets and a disruption of the value network. Disruption is still slow with respect to student-centred teaching, given that the engagement strategies are directed at the teaching staff for the production of OER content. The data analysis identified that the institutions are working differently with regard to content posting, metadata standards and strategies for disseminating open educational practices, but so far these elements have not compromised interoperability between the repositories of the two institutions, even though they operate independent institutional repositories; what was added was a federated search tool over the repositories indexed by REA Paraná. This research contributed to the dissemination of open educational practices and resources. Keywords: Open Educational Resources. Disruptive Innovation. Inter-institutional Repository. University.
57

Deriving and applying facet views of the Dewey Decimal Classification Scheme to enhance subject searching in library OPACs

Tinker, Amanda Jayne January 2005 (has links)
Classification is a fundamental tool in the organisation of any library collection for effective information retrieval. Several classifications exist, yet the pioneering Dewey Decimal Classification (DDC) still constitutes the most widely used scheme and the international de facto standard. Although once used for the dual purpose of physical organisation and subject retrieval in the printed library catalogue, library classification is now relegated to the singular role of shelf location. Numerous studies have highlighted the problem of subject access in library online public access catalogues (OPACs). The library OPAC has changed relatively little since its inception; it is designed to find what is already known, not to discover and explore. This research aims to enhance OPAC subject searching by deriving facets of the DDC and populating these with a library collection for display at a View-based searching OPAC interface. A novel method is devised that enables the automatic deconstruction of complex DDC notations into their component facets. Identifying facets based upon embedded notational components reveals alternative, multidimensional subject arrangements of a library collection and resolves the problem of disciplinary scatter. The extent to which the derived facets enhance users' subject searching perceptions and activities at the OPAC interface is evaluated in a small-scale usability study. The results demonstrate the successful derivation of four fundamental facets (Reference Type, Person Type, Time and Geographic Place). Such facet derivation and deconstruction of Dewey notations is recognised as a complex process, owing to the lack of a uniform notation, notational re-use, and the need for distinct facet indicators to delineate facet boundaries. The results of the preliminary usability study indicate that users are receptive to facet-based searching and that the View-based searching system performs as well as a current form fill-in interface and, in some cases, provides enhanced benefits. It is concluded that further exploration of facet-based searching is clearly warranted, and suggestions for future research are made.
58

Towards a comprehensive functional layered architecture for the Semantic Web

Gerber, Aurona J. 30 November 2006 (has links)
The Semantic Web, as the foreseen successor of the current Web, is envisioned to be a semantically enriched information space usable by machines or agents that perform sophisticated tasks on behalf of their users. The realisation of the Semantic Web prescribes the development of a comprehensive and functional layered architecture for the increasingly semantically expressive languages that it comprises. A functional architecture is a model specified at an appropriate level of abstraction identifying system components based on required system functionality, whilst a comprehensive architecture is an architecture founded on established design principles within Software Engineering. Within this study, an argument is formulated for the development of a comprehensive and functional layered architecture through the development of a Semantic Web status model, the extraction of the function of established Semantic Web technologies, as well as the development of an evaluation mechanism for layered architectures compiled from design principles as well as fundamental features of layered architectures. In addition, an initial version of such a comprehensive and functional layered architecture for the Semantic Web is constructed based on the building blocks described above, and this architecture is applied to several scenarios to establish its usefulness. In conclusion, based on the evidence collected as a result of the research in this study, it is possible to justify the development of an architectural model, or more specifically, a comprehensive and functional layered architecture for the languages of the Semantic Web. / Computing / PHD (Computer Science)
59

e-Research in the life sciences : from invisible to virtual colleges

Power, Lucy A. January 2011 (has links)
e-Research in the Life Sciences examines the use of online tools in the life sciences and finds that their use has significant impact, namely the formation of a Scientific/Intellectual Movement (SIM) (Frickel & Gross, 2005) complemented by a Computerisation Movement (CM) (Kling & Iacono, 1994), which is mobilising global electronic resources to form visible colleges of life science researchers, who are enrolling others and successfully promoting their open science goals via mainstream scientific literature. Those within this movement are also using these online tools to change their work practices, producing scientific knowledge in a highly networked and distributed group which has less regard for traditional institutional and disciplinary boundaries. This thesis, by combining ideas about SIMs and CMs, fills a gap in research that typically treats new tools as part of scientific communication, or within specialist areas such as distributed collaboration, but not in terms of broader changes in science. Case studies have been conducted for three types of online tools: the scientific social networking tool FriendFeed, open laboratory notebooks, and science blogs. Data have been collected from semi-structured interviews and the online writings of research participants. The case studies of exemplary use of the web by scientists form a baseline for future studies in the area. Boundaries between formal and informal scholarly communication are now blurred. At the formal level, where peer-reviewed print journals continue, many academic publishers now also provide online open access, frequently in advance of print publication. At the informal level, what used to be confined to water-cooler chat and the conference circuit is now also discussed on mailing lists, forums and blogs (Borgman, 2007). As these online tools generate new practices they have the potential to affect future academic assessment and dissemination practices.
60

Ontology learning for Semantic Web Services

Alfaries, Auhood January 2010 (has links)
The expansion of Semantic Web Services is restricted by traditional ontology engineering methods. Manual ontology development is a time-consuming, expensive and resource-exhaustive task. Consequently, it is important to support ontology engineers by automating the ontology acquisition process to help deliver the Semantic Web vision. Existing Web Services offer a rich source of domain knowledge for ontology engineers. Ontology learning can be seen as a plug-in in the Web Service ontology development process, which can be used by ontology engineers to develop and maintain an ontology that evolves with current Web Services. Supporting the domain engineer with an automated tool whilst building an ontological domain model reduces the time and effort needed to acquire the domain concepts and relations from Web Service artefacts, and effectively speeds up the adoption of Semantic Web Services, thereby allowing current Web Services to achieve their full potential. With that in mind, a Service Ontology Learning Framework (SOLF) is developed and applied to a real set of Web Services. The research contributes a rigorous method that effectively extracts domain concepts, and relations between these concepts, from Web Services and automatically builds the domain ontology. The method applies pattern-based information extraction techniques to automatically learn domain concepts and the relations between those concepts. The framework is automated via a tool that implements the techniques. Applying SOLF and the tool to different sets of services results in an automatically built domain ontology model that represents semantic knowledge in the underlying domain. The framework's effectiveness in extracting domain concepts and relations is evaluated by applying it to varying sets of commercial Web Services, including the financial domain. The standard evaluation metrics, precision and recall, are employed to determine both the accuracy and the coverage of the learned ontology models. Both the lexical and structural dimensions of the models are evaluated thoroughly. The evaluation results are encouraging, providing concrete outcomes in an area that is little researched.
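SOLF's actual extraction patterns are not listed in the abstract; the sketch below illustrates the general pattern-based idea on WSDL-style operation names, splitting camelCase identifiers so that a leading verb is read as a candidate relation and the remainder as a candidate domain concept. The operation names and the verb list are assumptions made for illustration, not the thesis's rules.

    import java.util.Arrays;
    import java.util.List;
    import java.util.Set;

    // Illustrative sketch of pattern-based extraction from Web Service artefacts:
    // split camelCase operation names and treat a leading verb as a candidate
    // relation and the rest as a candidate domain concept. The operation names
    // and verb list are assumptions for illustration; SOLF's actual patterns are
    // not given in the abstract.
    public class ServiceNameExtraction {

        static final Set<String> VERBS = Set.of("get", "set", "create", "update",
                "delete", "find", "list", "cancel");

        public static void main(String[] args) {
            List<String> operations = List.of(
                    "getStockQuote", "createCustomerAccount", "cancelPaymentOrder");

            for (String op : operations) {
                // Split at lower-to-upper case boundaries: "getStockQuote" -> [get, Stock, Quote]
                String[] parts = op.split("(?<=[a-z])(?=[A-Z])");
                if (parts.length > 1 && VERBS.contains(parts[0].toLowerCase())) {
                    String concept = String.join("",
                            Arrays.copyOfRange(parts, 1, parts.length));
                    System.out.println("relation: " + parts[0].toLowerCase()
                            + "  concept: " + concept);
                }
            }
        }
    }

In a fuller pipeline, candidate concepts extracted in this way would be filtered, merged and evaluated against a reference ontology using precision and recall, as the abstract describes.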
