171 |
Structure and form of folksonomy tags: The road to the public library catalogue
Spiteri, Louise 06 1900 (has links)
Folksonomies have the potential to add much value to public library catalogues by enabling clients to store, maintain, and organize items of interest in the catalogue using their own tags. The purpose of this paper is to examine how the tags that constitute folksonomies are structured. Tags were acquired over a thirty-day period from the daily tag logs of three folksonomy sites: Del.icio.us, Furl, and Technorati. The tags were evaluated against Section 6 (choice and form of terms) of the National Information Standards Organization (NISO) guidelines for the construction of controlled vocabularies. This evaluation revealed that the folksonomy tags correspond closely to the NISO guidelines concerning the types of concepts expressed by the tags, the predominance of single tags, the predominance of nouns, and the use of recognized spelling. Potential problem areas in the structure of the tags are the inconsistent use of the singular and plural forms of count nouns, and the incidence of ambiguous tags in the form of homographs and unqualified abbreviations or acronyms. Should library catalogues decide to incorporate folksonomies, they could provide clear guidelines to address these weaknesses, as well as links to external dictionaries and reference sources such as Wikipedia, to help clients disambiguate homographs and determine whether the full or abbreviated forms of tags would be preferable.
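The catalogue-side guidelines suggested above can be sketched in code. The following is a minimal, hypothetical illustration (not taken from the paper): a naive rule normalizes count nouns to a consistent plural form, and a small watch-list flags abbreviation-style tags that should prompt the client for qualification. The pluralization heuristic and the contents of `AMBIGUOUS_ABBREVIATIONS` are illustrative assumptions.

```python
# Hypothetical sketch of tag-normalization guidelines for a library catalogue.
# The pluralization rule and the ambiguous-abbreviation list are assumptions
# for illustration, not rules taken from the NISO guidelines themselves.

AMBIGUOUS_ABBREVIATIONS = {"ajax", "soap", "cal"}  # example homograph-prone tags

def normalize_tag(tag: str) -> str:
    """Lower-case a tag and apply a naive singular-to-plural rule for count nouns."""
    t = tag.strip().lower()
    if t.endswith("s") or " " in t:
        return t                      # already plural-looking, or a multi-word tag
    if t.endswith(("ch", "sh", "x", "z")):
        return t + "es"
    if t.endswith("y") and t[-2:-1] not in "aeiou":
        return t[:-1] + "ies"
    return t + "s"

def needs_qualification(tag: str) -> bool:
    """Flag tags on the watch-list so the client can be asked to disambiguate."""
    return tag.strip().lower() in AMBIGUOUS_ABBREVIATIONS

print(normalize_tag("Recipe"))      # -> recipes
print(needs_qualification("AJAX"))  # -> True
```

A real catalogue would replace the heuristic with dictionary lookups, which is exactly where the external reference sources mentioned above would come in.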
|
172 |
A phenomenological framework for the relationship between the semantic web and user-centered tagging systems
Campbell, D. Grant January 2006 (has links)
This paper uses Husserl's theory of phenomenology to provide a model for the relationship between user-centered tagging systems, such as del.icio.us, and the more highly structured systems of the Semantic Web. Using three aspects of phenomenological theory (the movement of the mind out towards an entity and then back in an act of reflection, multiplicities within unity, and the sharing of intentionalities within a community), the discussion suggests that both tagging systems and the Semantic Web foster an intersubjective domain for the sharing and use of information resources. The Semantic Web, however, resembles traditional library systems in that it relies for this intersubjective domain on the conscious implementation of domain-centered standards which are then encoded for machine processing, while tagging systems work on implied principles of emergence.
|
173 |
The KO roots of Taylor's Value-Added Model
Pimentel, David M. January 2009 (has links)
The model developed by Bob Taylor for his book Value-Added Processes in Information Systems (1986) has been highly influential in the field of library and information science. Yet despite its impact on the broader field, the potential of the Value-Added Model has gone largely unexplored by knowledge organization researchers. Unraveling the history behind Taylor's development of the model highlights the significant role played by professional indexers. The Value-Added Model is thus reexamined for its potential as a flexible framework for evaluating knowledge organization systems.
|
174 |
IndMED and medIND: NIC's Online Biomedical databases
Pandita, Naina, Singh, Sukhdev 10 1900 (has links)
Very few Indian biomedical journals have found a place in international databases, for reasons such as delayed or irregular publishing and a lack of quality articles. The National Library of Medicine's (NLM, USA) MEDLINE database covers approximately 50 Indian journals; as far as full text is concerned, MEDLINE covers only three of them. The ICMR-NIC Centre for Biomedical Information, the 17th International MEDLARS Centre, has been catering to the biomedical information needs of medical professionals since 1986. One of the tasks undertaken by the Centre is to address the glaring 'unavailability' of Indian biomedical research literature. The Centre therefore took up the challenging task of developing databases of Indian biomedical journals and providing a platform for making this literature available to the Indian as well as the international medical community. One such database is IndMED, which covers bibliographic details from 75 peer-reviewed Indian biomedical journals. IndMED has received considerable recognition, and the Centre strives to keep it at par with the MEDLINE database. The second database being developed is medIND, an online full-text database of Indian biomedical journals, which would cover the full text of the IndMED journals and serve as one vital resource for all Indian biomedical literature.
|
175 |
Latent semantic sentence clustering for multi-document summarization
Geiss, Johanna January 2011 (has links)
No description available.
|
176 |
Dynamic algorithms for indexing and filtering XML documents [Τεχνικές δυναμικής δεικτοδότησης και φιλτραρίσματος XML εγγράφων]
Παναγιώτης, Αντωνέλλης 22 October 2007 (has links)
The ever-increasing worldwide use of the Internet has led to the pressing need for a standard, well-defined, and widely accepted way of representing and exchanging information on the web. More and more heterogeneous systems and platforms need to exchange data and information with one another in a manner that is well defined yet also dynamic and flexible.
XML was developed to solve exactly this problem: a uniform and universally accepted representation of exchanged information. The rapid growth in the volume of data represented in XML, however, created the need to search the tree structure of an XML document for specific information. This need, together with the need for fast access to the nodes of an XML tree, has led to a variety of specialized indexes, each with different structure and features. But data in the modern information society do not remain static; they change continuously and dynamically. To keep up with this dynamism, indexes must themselves be able to change dynamically and at minimal cost.
Alongside the need to search for specific information within a set of XML data, the exact inverse need arose: filtering a set of XML data through stored patterns and rules in order to find the data that match them. This problem arises chiefly in publish/subscribe systems, in which users define their interests and the system delivers only the information and data relevant to those preferences. Representing information in XML has led such systems to incorporate algorithms that filter incoming XML data streams through the set of patterns and rules that users have defined in advance.
In this master's thesis we study and compare the existing techniques for dynamic indexing and filtering of XML documents, and we present a new filtering algorithm that outperforms the existing ones.
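The publish/subscribe filtering setting described above can be made concrete with a small sketch. This is not the thesis's algorithm, just a minimal illustration: each subscriber stores a path expression as a profile, and an incoming XML document is delivered to every subscriber whose profile matches at least one node. The profile syntax is limited to the XPath subset that Python's `xml.etree.ElementTree` supports; the subscriber names and feed format are invented for the example.

```python
# Minimal publish/subscribe XML filtering sketch (illustrative, not the
# thesis's algorithm): match each incoming document against stored profiles.

import xml.etree.ElementTree as ET

def match_subscribers(xml_text: str, profiles: dict) -> list:
    """Return the subscribers whose stored path expression matches the document."""
    root = ET.fromstring(xml_text)
    return [name for name, path in profiles.items() if root.findall(path)]

profiles = {
    "alice": ".//article[@topic='xml']",    # interested in XML articles
    "bob":   ".//article[@topic='sports']", # interested in sports articles
}

incoming = "<feed><article topic='xml'><title>Indexing XML</title></article></feed>"
print(match_subscribers(incoming, profiles))  # -> ['alice']
```

Real filtering engines avoid evaluating every profile per document; they index the profiles themselves (e.g., by shared path prefixes) so that one pass over the document stream matches many profiles at once, which is the efficiency problem the thesis addresses.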
|
177 |
Debating Regional Military Intervention: An Examination of the Australian and New Zealand Media-Government Relationship During the 2003 Solomon Islands Crisis
Roche, Jessica January 2012 (has links)
This study explores the Australian and New Zealand media-government relationship during foreign instability and regional military intervention. It offers a critique of print media coverage and political communication during the 2002-2003 Solomon Islands crisis and the subsequent Regional Assistance Mission to the Solomon Islands. By reviewing the Indexing Hypothesis and the CNN Effect, this thesis considers media and government data from the year preceding the intervention. By investigating the media-government relationship in the Pacific region, this study builds on a literature that has so far focused primarily on American- and European-led interventions. Previous research has illustrated the advantages and limitations of specific methodological practices, and this study draws on that literature to form a unique methodological approach. The methods used to test the Australian and New Zealand media-government relationship include content analysis and qualitative techniques, applied in four complementary tests. Findings indicate that while the media use the political elite as a cue for newsworthy issues to some degree, they often appear to report independently of elite perspectives. The political elite set the range of debate, and while the media stay within this range, they appear to sensationalise certain aspects of it. The government also appears to benefit from this media behaviour, as it uses the media to gauge responses during the policy formation process.
|
178 |
An Efficient, Extensible, Hardware-aware Indexing Kernel
Sadoghi Hamedani, Mohammad 20 June 2014 (has links)
Modern hardware has the potential to play a central role in scalable data management systems. A realization of this potential arises in the context of indexing queries, a recurring theme in real-time data analytics, targeted advertising, algorithmic trading, and data-centric workflows, and of indexing data, a challenge in multi-version analytical query processing. To enhance query and data indexing, in this thesis, we present an efficient, extensible, and hardware-aware indexing kernel. This indexing kernel rests upon novel data structures and (parallel) algorithms that utilize the capabilities offered by modern hardware, especially the abundance of main memory, multi-core architectures, hardware accelerators, and solid-state drives.
This thesis focuses on presenting our query indexing techniques for processing queries in data-intensive applications that are subject to ever-increasing data volume and velocity. At the core of our query indexing kernel lies the BE-Tree family of memory-resident indexing structures, which scales by overcoming the curse of dimensionality through a novel two-phase space-cutting technique, effective top-k processing, and adaptive parallel algorithms that operate directly on compressed data (exploiting the multi-core architecture). Furthermore, we achieve line-rate processing by harnessing the unprecedented degrees of parallelism and pipelining available only through low-level logic design on FPGAs. Finally, we present a comprehensive evaluation that establishes the superiority of BE-Tree in comparison with state-of-the-art algorithms.
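The query-indexing problem above (matching an incoming event against many stored Boolean-expression subscriptions) can be illustrated with a much simpler classic technique than the BE-Tree itself: the counting algorithm. This sketch is a hedged stand-in, not the thesis's data structure: an inverted index maps each (attribute, value) predicate to the subscriptions containing it, and a subscription matches once all of its predicates have been seen in the event. All names and the equality-only predicate model are illustrative assumptions.

```python
# Toy stand-in for Boolean-expression query indexing: the counting algorithm.
# The actual BE-Tree partitions a high-dimensional predicate space; this sketch
# only conveys the problem shape (many stored queries, one incoming event).

from collections import defaultdict

class CountingMatcher:
    def __init__(self):
        self.index = defaultdict(list)   # (attribute, value) -> [subscription ids]
        self.sizes = {}                  # subscription id -> number of predicates

    def subscribe(self, sub_id, predicates):
        """predicates: dict of attribute -> required value (a conjunction)."""
        self.sizes[sub_id] = len(predicates)
        for attr, value in predicates.items():
            self.index[(attr, value)].append(sub_id)

    def match(self, event):
        """Return subscriptions whose every predicate the event satisfies."""
        hits = defaultdict(int)
        for attr, value in event.items():
            for sub_id in self.index.get((attr, value), []):
                hits[sub_id] += 1
        return sorted(s for s, n in hits.items() if n == self.sizes[s])

m = CountingMatcher()
m.subscribe("q1", {"symbol": "IBM", "side": "buy"})
m.subscribe("q2", {"symbol": "IBM"})
print(m.match({"symbol": "IBM", "side": "buy", "qty": 100}))  # -> ['q1', 'q2']
```

The curse of dimensionality the abstract mentions shows up here as the index growing with every distinct (attribute, value) pair; space-cutting structures like the BE-Tree exist precisely to keep matching efficient as that space explodes.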
In this thesis, we further expand the scope of our indexing kernel and describe how to accelerate analytical queries on (multi-version) databases by enabling indexes on the most recent data. Our goal is to reduce the overhead of index maintenance, so that indexes can be used effectively for analytical queries without placing a heavy burden on transaction throughput. To this end, we re-design the data structures in the storage hierarchy to employ an extra level of indirection over solid-state drives. This indirection layer dramatically reduces the number of magnetic disk I/Os needed to update indexes and localizes index maintenance. As a result, by rethinking how data is indexed, we eliminate the dilemma between update and query performance and substantially reduce both index maintenance and query processing cost.
|
180 |
Image Retrieval using Landmark Indexing for Indoor Navigation
Sinha, Dwaipayan 25 April 2014 (has links)
A novel approach is proposed for real-time retrieval of images from a large database of overlapping images of an indoor environment. The procedure extracts visual features from images using selected computer vision techniques, and processes the extracted features to create a reduced list of features annotated with the frame numbers in which they appear. This method is named landmark indexing. Unlike some state-of-the-art approaches, the proposed method does not need to consider large image adjacency graphs, because the overlap of the images in the map sufficiently increases information gain, and mapping similar features to the same landmark reduces the search space and improves search efficiency. Empirical evidence from experiments on real datasets shows high (90-100%) accuracy in image retrieval, and an improvement in search time from the order of 100-200 milliseconds to the order of 10-30 milliseconds. The image retrieval technique is also demonstrated by integrating it into a 3D real-time navigation system. This system was tested in several indoor environments, and all experiments showed accurate localization in large indoor areas, with errors of only 15-20 centimeters. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2014-04-24 12:44:41.429
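The landmark-indexing idea described above can be sketched in a few lines. This is a hedged illustration under stated assumptions, not the thesis's implementation: visually similar features are quantized to the same "landmark" (here by crude coordinate rounding, standing in for real descriptor clustering), an inverted index maps landmarks to frame numbers, and retrieval ranks frames by counting votes instead of scanning every image.

```python
# Hedged sketch of landmark indexing: quantize features to landmarks, build an
# inverted index landmark -> frames, and retrieve by voting. Grid rounding is
# an illustrative stand-in for clustering real visual descriptors.

from collections import defaultdict

def quantize(feature, cell=1.0):
    """Map a feature vector to a landmark id by snapping it to a grid cell."""
    return tuple(round(x / cell) for x in feature)

def build_index(frames):
    """frames: {frame_no: [feature vectors]} -> {landmark: set of frame numbers}."""
    index = defaultdict(set)
    for frame_no, features in frames.items():
        for f in features:
            index[quantize(f)].add(frame_no)
    return index

def retrieve(index, query_features):
    """Rank frames by how many query features vote for them."""
    votes = defaultdict(int)
    for f in query_features:
        for frame_no in index.get(quantize(f), ()):
            votes[frame_no] += 1
    return sorted(votes, key=votes.get, reverse=True)

frames = {1: [(0.1, 0.2), (3.9, 4.1)], 2: [(3.8, 4.2)], 3: [(9.0, 9.0)]}
index = build_index(frames)
print(retrieve(index, [(4.0, 4.0), (0.0, 0.0)]))  # frame 1 ranks first
```

The reduction in search space the abstract reports comes from exactly this structure: a query touches only the frames sharing its landmarks, not the whole database.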
|