Evaluation and improvement of semantically-enhanced tagging system

Alsharif, Majdah Hussain January 2013 (has links)
The Social Web, or 'Web 2.0', is focused on interaction and collaboration between website users. It is credited with the rise of tagging systems, among other things such as blogs and wikis. Tagging systems like YouTube and Flickr offer their users simplicity and freedom in creating and sharing their own content, and folksonomy is therefore a very active research area in which many improvements have been proposed to overcome existing disadvantages such as lack of semantic meaning, ambiguity, and inconsistency. TE is a tagging system that proposes solutions to the problems of multilingualism, lack of semantic meaning, and shorthand writing (which is very common on the social web) with the aid of semantic and social resources. The current research presents an addition to the TE system in the form of an embedded stemming component that addresses the problem of differing lexical forms. Prior to this, the TE system had to be explored thoroughly and its efficiency determined in order to decide on the practicality of embedding any additional components as performance enhancements. This involved analysing the algorithm's efficiency using an analytical approach to determine its time and space complexity. TE has a time growth rate of O(N²), which is polynomial, so the algorithm is considered efficient; nonetheless, recommended modifications such as batch SQL execution can improve this. Regarding space complexity, the number of tags per photo represents the problem size; as it grows, the required memory space increases linearly. Based on these findings, the TE system is re-implemented on Flickr instead of YouTube because of a recent YouTube restriction. This is of greater benefit in a multilingual tagging system, since the language barrier is meaningless in this case. The re-implementation is achieved using 'flickrj' (a Java interface to the Flickr API).
Next, the stemming component is added to normalise tags prior to querying the ontologies. The component is embedded using the Java implementation of the Porter2 stemmer, which supports many languages, including Italian. The impact of the stemming component on the performance of the TE system, in terms of the size of the index table and the number of retrieved results, is investigated in an experiment that showed a 48% reduction in the size of the index table. This also means that search queries have fewer system tags to compare against the search keywords, which can speed up the search. Furthermore, the experiment ran similar search trials on two versions of the TE system, one without the stemming component and one with it, and found that the latter produced more results when working with valid words and valid stems. Embedding the stemming component in the new TE system has lessened the storage overhead of the generated system tags by reducing the size of the index table, which makes the system suited to many applications such as text classification, summarisation, email filtering, and machine translation.
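As an illustration of the tag-normalisation step described above, the sketch below uses a crude suffix-stripping stemmer. It is not the actual Porter2 algorithm the thesis embeds; the suffix list and the sample tags are invented for the example.

```python
# Minimal sketch of tag normalisation via suffix stripping, in the spirit of
# the Porter-style stemming described above. This is NOT the real Porter2
# algorithm; the suffix list and tags are illustrative assumptions.

SUFFIXES = ["ation", "ing", "ers", "ies", "ed", "es", "s"]  # longest first

def crude_stem(tag: str) -> str:
    """Strip the longest matching suffix, keeping at least 3 characters."""
    tag = tag.lower()
    for suf in SUFFIXES:
        if tag.endswith(suf) and len(tag) - len(suf) >= 3:
            return tag[: -len(suf)]
    return tag

def normalise_index(tags):
    """Map raw tags to stems; duplicates collapse, shrinking the index."""
    return {crude_stem(t) for t in tags}

raw = ["running", "runs", "swimmers", "swimming", "photos", "photo"]
print(f"{len(raw)} raw tags -> {len(normalise_index(raw))} index entries")
```

Collapsing inflected variants onto one stem is what shrinks the index table: each stored system tag stands in for several surface forms, so fewer entries need to be compared against search keywords.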

iNET Interoperability Tools

Araujo, Maria S., Seegmiller, Ray D., Noonan, Patrick J., Newton, Todd A., Samiadji-Benthin, Chris S., Moodie, Myron L., Grace, Thomas B., Malatesta, William A. 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / The integrated Network Enhanced Telemetry (iNET) program has developed standards for network-based telemetry systems, which implementers and range users of Telemetry Network System (TmNS) equipment can use to promote interoperability between components. While standards promote interoperability, only implementation of the standards can ensure it. This paper discusses the tools that are being developed by the iNET program which implement the technologies and protocols specified in the iNET standards in order to ensure interoperability between TmNS components and provide a general framework for device development. Capabilities provided by the tools include system management, TmNS message processing, metadata processing, and time synchronization.

Utilizing IHAL Instrumentation Descriptions in iNET Scenarios

Hamilton, John, Darr, Timothy, Fernandes, Ronald, Sulewski, Joe, Jones, Charles 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / In this paper, we explore ways in which CTEIP's integrated Network Enhanced Telemetry (iNET) program can benefit from the hardware descriptions supported by the Instrumentation Hardware Abstraction Language (IHAL). We describe how IHAL can be used at the end of the current iNET instrumentation configuration use-case to "fine tune" the instrumentation configuration. Additionally, we describe how IHAL can be used at the beginning of the current instrumentation configuration use-case to enable cross-vendor reasoning and automated construction of multi-vendor instrumentation configurations. Finally, we investigate how IHAL can be used within the iNET system manager to enhance capabilities such as instrumentation discovery.

Full-Text Aggregation: An Examination of Metadata Accuracy and the Implications for Resource Sharing

Cummings, Joel January 2003 (has links)
The author conducted a study comparing two lists of full-text content available in Academic Search Full-Text Elite. EBSCO provided the lists to the University College of the Fraser Valley. The study was conducted to compare the accuracy of the claims of full-text content, because the staff and library users at the University College of the Fraser Valley depend on this database as part of the libraries' journal collection. Interlibrary loan staff routinely used a printed list of Academic Search Full-Text Elite to check whether a journal was available at UCFV in electronic form; therefore, an accurate supplemental list or lists of the libraries' electronic journals was essential for cost-conscious interlibrary loan staff. The study found inaccuracies in the coverage of 57 percent of the journals sampled.
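The kind of list comparison described above can be sketched as follows: checking a vendor's claimed full-text coverage against observed holdings and reporting the mismatch rate. The journal titles, date ranges, and mismatch rule are fabricated for illustration.

```python
# Hypothetical sketch of comparing claimed vs. observed full-text coverage.
# All titles and date ranges below are invented for the example.

claimed = {
    "Journal of Metadata Studies": ("1995", "2003"),
    "Library Quarterly Review": ("1990", "2003"),
    "Telemetry Letters": ("1998", "2003"),
}
observed = {
    "Journal of Metadata Studies": ("1995", "2003"),   # matches the claim
    "Library Quarterly Review": ("1996", "2003"),      # coverage starts later
    "Telemetry Letters": ("1998", "2001"),             # coverage ends early
}

# A title is "inaccurate" if its observed date range differs from the claim.
inaccurate = [t for t in claimed if observed.get(t) != claimed[t]]
rate = 100 * len(inaccurate) / len(claimed)
print(f"{len(inaccurate)} of {len(claimed)} titles inaccurate ({rate:.0f}%)")
```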

Measuring metadata quality

Király, Péter 24 June 2019 (has links)
No description available.

Bezkamerové násilí / Cameraless Violence

Laytadze, Lali January 2018 (has links)
The code assigned to a subject entering an interactive environment sets and determines its future existence. In the virtual environment, the absence of a numerical description makes a subject untrackable. Only by assigning a unique code does the subject begin to exist for the computing world. After receiving the code, the subject somehow comes to life, becoming recognisable, categorised, searchable, possessing existence. Every environment we are situated in can be perceived as panned and scanned. We meet cameraless photography at a moment of our existence, in an artificial space. We are scanning and at the same time we are being scanned. What part does the human play in the process of scanning and taking pictures? Is he a tool of cameraless photography? Is he following the rules, or creating them? Or is he himself a tool of the process of cameraless photography? Is he the individual who determines and recognises the point that is set for the existence and meaning of cameraless photography?

Uma abordagem para promover reuso e processamento de inferências em ontologias de metadados educacionais / An approach to improve reuse and inference processing in educational metadata ontologies

Behr, André Rolim January 2016 (has links)
Metadata has been broadly employed to describe learning objects on the Web. However, even though the adoption of a single metadata standard could ensure reusability of resources and interoperability among applications, no metadata schema yet complies with all the requirements of every application. Consequently, new metadata standards and application profiles continue to be created over the years. Nowadays, the Semantic Web is systematically extending the Web, and the integration of its data has been achieved largely through the adoption of ontologies.
This dissertation proposes a three-layer, ontology-based approach to knowledge representation for educational metadata. The approach is composed of modular ontologies that aim to improve reuse and optimise the inference processing of metadata. In addition, an interoperability method between metadata described in XML and OWL is proposed for the modular ontologies. The results show gains from using modular ontologies and from closed-world cardinality verification. The proposed ontologies provide a unified knowledge representation and are compatible with current Semantic Web technologies.
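One simple way to picture the XML-to-OWL interoperability step mentioned above is to flatten an XML metadata record into subject-predicate-object triples, which could then be expressed in RDF/OWL. The element names and record below are invented for the example; the dissertation's actual method is more involved.

```python
# Illustrative sketch (not the dissertation's actual method) of mapping a
# small XML metadata record to subject-predicate-object triples, the kind
# of XML-to-OWL/RDF interoperability step described above. The element
# names and the record itself are invented assumptions.

import xml.etree.ElementTree as ET

record = """
<learningObject id="lo42">
  <title>Intro to Folksonomy</title>
  <language>en</language>
  <format>video</format>
</learningObject>
"""

def xml_to_triples(xml_text: str):
    """Flatten one metadata record into (subject, predicate, object) triples."""
    root = ET.fromstring(xml_text)
    subject = root.get("id")
    return [(subject, child.tag, child.text) for child in root]

for triple in xml_to_triples(record):
    print(triple)
```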

A FRAMEWORK FOR CONCEPTUAL INTEGRATION OF HETEROGENEOUS DATABASES

Srinivasan, Uma, Computer Science & Engineering, Faculty of Engineering, UNSW January 1997 (has links)
Autonomy of operations combined with decentralised management of data has given rise to a number of heterogeneous databases or information systems within an enterprise. These systems are often incompatible in structure as well as content and hence difficult to integrate. This thesis investigates the problem of heterogeneous database integration, in order to meet the increasing demand for obtaining meaningful information from multiple databases without disturbing local autonomy. In spite of heterogeneity, the unity of overall purpose within a common application domain, nevertheless, provides a degree of semantic similarity which manifests itself in the form of similar data structures and common usage patterns of existing information systems. This work introduces a conceptual integration approach that exploits the similarity in meta level information in existing systems and performs metadata mining on database objects to discover a set of concepts common to heterogeneous databases within the same application domain. The conceptual integration approach proposed here utilises the background knowledge available in database structures and usage patterns and generates a set of concepts that serve as a domain abstraction and provide a conceptual layer above existing legacy systems. This conceptual layer is further utilised by an information re-engineering framework that customises and packages information to reflect the unique needs of different user groups within the application domain. The architecture of the information re-engineering framework is based on an object-oriented model that represents the discovered concepts as customised application objects for each distinct user group.
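A toy sketch of the metadata-mining idea described above: grouping similarly named attributes from autonomous databases to suggest shared concepts. The schemas and the string-similarity rule below are invented assumptions, far simpler than the thesis's approach.

```python
# Toy metadata mining: pair up attribute names across two databases that
# look alike, as a first hint of a common concept. The schemas and the
# similarity threshold are fabricated for illustration.

from difflib import SequenceMatcher

schemas = {
    "billing_db": ["cust_name", "cust_addr", "invoice_total"],
    "support_db": ["customer_name", "ticket_id", "customer_address"],
}

def similar(a: str, b: str, threshold: float = 0.7) -> bool:
    """Character-level similarity ratio as a crude name-matching rule."""
    return SequenceMatcher(None, a, b).ratio() >= threshold

def common_concepts(schemas):
    """Return cross-database attribute pairs whose names look alike."""
    (db1, cols1), (db2, cols2) = schemas.items()
    return [(c1, c2) for c1 in cols1 for c2 in cols2 if similar(c1, c2)]

print(common_concepts(schemas))
```

In this sketch, `cust_name`/`customer_name` and `cust_addr`/`customer_address` pair up, hinting at shared "customer name" and "customer address" concepts; a real system would also exploit structure and usage patterns, as the thesis describes.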

Restinformation i elektroniska textdokument / Surplus information in electronic text documents

Hagel, Maria January 2005 (has links)
Some word processing programs save information that not all users of the program are aware of. This information includes, for example, who the writer of the document is, the time it took to write it, and where on the computer the document is saved. Text that has been changed or removed can also be saved. This information is not shown in the program, and the user will therefore not be aware of its existence. If the document is opened in a text editor that only reads plain ASCII text, this information becomes visible. If this information is confidential and also accessible to others, it could become a security risk.
In this thesis I sort out what kind of information this is and in what way it could be a security risk. I also discuss what measures can be taken to minimise the risk. This is done partly by studying literature, combined with some smaller tests that I have performed.
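The kind of inspection described above can be approximated by scanning a file's raw bytes for printable runs, much like the Unix `strings` utility, to surface text that the word processor never displays. The sample bytes below are fabricated for illustration.

```python
# A small sketch of the inspection technique described above: scanning a
# file's raw bytes for printable ASCII, the way hidden revision data can
# be spotted in a word-processor file. The sample bytes are fabricated.

import re

def printable_strings(data: bytes, min_len: int = 4):
    """Return runs of printable ASCII at least min_len characters long."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

# Fabricated file contents: visible text plus leftover "deleted" text.
raw = b"\x00\x01Visible paragraph.\x00\x00DELETED: old salary figures\x02"
print(printable_strings(raw))
```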

Utökning av LaTeX med stöd för semantisk information / Extending LaTeX with support for semantic information

Löfqvist, Ronny January 2007 (has links)
The Semantic Web is a vision of the Internet's future, where machines and humans can understand the same information. To make this possible, documents have to be provided with metadata in a general language; the W3C has created the Web Ontology Language (OWL) for this purpose.
This report presents the creation of a LaTeX package which makes it possible to include metadata in PDF files. It also presents how annotations can be created that are bound to the generated metadata. With the help of this package it is easy to create PDF documents with automatically generated metadata and annotations.
