281

SemDQ: A Semantic Framework for Data Quality Assessment

Zhu, Lingkai January 2014 (has links)
Objective: Access to, and reliance upon, high quality data is an enabling cornerstone of modern health delivery systems. Sadly, health systems are often awash with poor quality data, which both contributes to adverse outcomes and can compromise the search for new knowledge. Traditional approaches to purging poor data from health information systems often require manual, laborious and time-consuming procedures at the collection, sanitizing and processing stages of the information life cycle, with results that often remain sub-optimal. A promising solution may lie with semantic technologies - a family of computational standards and algorithms capable of expressing and deriving the meaning of data elements. Semantic approaches purport to offer the ability to represent clinical knowledge in ways that can support complex searching and reasoning tasks. It is argued that this ability offers exciting promise as a novel approach to assessing and improving data quality. This study examines the effectiveness of semantic web technologies as a mechanism by which high quality data can be collected and assessed in health settings. To make this assessment, key study objectives include determining whether a valid semantic data model can be constructed that sufficiently expresses the complexity present in the data, as well as developing a comprehensive set of validation rules that can be applied semantically to test the effectiveness of the proposed semantic framework. Methods: The Semantic Framework for Data Quality Assessment (SemDQ) was designed. A core component of the framework is an ontology representing data elements and their relationships in a given domain. In this study, the ontology was developed using openEHR standards, with extensions to capture data elements used for patient care and research purposes in a large organ transplant program. Data quality dimensions were defined and corresponding criteria for assessing data quality were developed for each dimension. These criteria were then applied, using semantic technology, to an anonymized research dataset containing medical data on transplant patients. Results were validated by clinical researchers. Another test was performed on a simulated dataset with the same attributes as the research dataset to confirm the computational accuracy and effectiveness of the framework. Results: A prototype of SemDQ was successfully implemented, consisting of an ontological model integrating the openEHR reference model, a vocabulary of transplant variables and a set of data quality dimensions. Thirteen criteria in three data quality dimensions were transformed into computational constructs using semantic web standards. Reasoning and logical inconsistency checking were first performed on the simulated dataset, which contains carefully constructed test cases to ensure the correctness and completeness of the logical computation. The same quality checking algorithms were then applied to an established research database. Data quality defects were successfully identified in this dataset, which had been manually cleansed and validated periodically. Among the 103,505 data entries, two criteria returned no errors, while the other eleven detected erroneous or missing data, with error rates ranging from 0.05% to 79.9%. Multiple review sessions were held with clinical researchers to verify the results, and the SemDQ framework was refined to reflect the intricate clinical knowledge involved.
Data corrections were implemented in the source dataset as well as in the clinical system used in the transplant program, resulting in improved data quality for both clinical and research purposes. Implications: This study demonstrates the feasibility and benefits of using semantic technologies in data quality assessment processes. SemDQ is based on semantic web standards, which allows easy reuse of rules and leverages generic reasoning engines for computation. This avoids the shortcomings of proprietary rule engines, which often make rulesets and knowledge developed for one dataset difficult to reuse with other datasets, even in a similar clinical domain. SemDQ can implement rules that have been shown to have a greater capacity to detect complex cross-reference logical inconsistencies. In addition, the framework allows easy extension of the knowledge base to incorporate more data types and validation criteria. It has the potential to be incorporated into current workflows in clinical care settings to reduce data errors during data capture.
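As a rough illustration of the kind of computational construct described above, the sketch below expresses one hypothetical completeness criterion as a SPARQL query over an RDF export of a transplant dataset. The namespace, class and property names, and the file name, are placeholders, not those used in SemDQ.

```python
# Hedged sketch: one illustrative completeness criterion applied with rdflib/SPARQL.
# The tx: vocabulary and file name are assumptions for demonstration only.
from rdflib import Graph

g = Graph()
g.parse("transplant_records.ttl", format="turtle")  # assumed RDF export of the dataset

# Criterion (illustrative): every transplant episode must record a transplant date.
MISSING_DATE = """
PREFIX tx: <http://example.org/transplant#>
SELECT ?episode WHERE {
    ?episode a tx:TransplantEpisode .
    FILTER NOT EXISTS { ?episode tx:transplantDate ?date }
}
"""

violations = list(g.query(MISSING_DATE))
print(f"{len(violations)} episodes are missing a transplant date")
```

Because the criterion is expressed in a standard query language rather than a proprietary rule syntax, the same construct could in principle be reused against any dataset that shares the vocabulary.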
282

Construction project information management in a semantic web environment

Pan, Jiayi January 2006 (has links)
Modern construction projects, characterised by severe fragmentation from both geographical and disciplinary perspectives, require accurate and timely sharing of information. Traditional information management systems operate on a textual basis and do not always consider the meaning of information. Current web-based information management technology supports information communication to a reasonable extent but still has many limitations, such as the lack of semantic awareness and poor interoperability of software applications. This research argues that Semantic Web technologies can enhance the efficiency of information management in construction projects by providing content-based and context-specific information to project team members, and by supporting the interoperation between independent applications. A Semantic Web-based Information Management System (SWiMS) for construction projects was created to demonstrate the above concept. The approach adopted for this research involved creating a new framework for Semantic Web-based information management. This extensible system framework enables the system to merge diverse construction information sources, ontologies and end-user applications into the overall Semantic Web environment. The semantic components developed in this research included a project document annotation model, a project partner user profile model, and several lightweight IFC-based ontologies for documented information management. This supports intelligent information management and interoperation between heterogeneous information sources and applications. The system framework, prototype annotations, and ontologies were applied to a concept demonstrator that illustrated how project documents were annotated, accessed, converted, categorised, and retrieved on the basis of content and context. The demonstrator (named SWiMS) acts as middleware, mediating between user needs and the information sources. Information in project partners' documents was mapped and accessed intelligently. This involved the use of rule-based filtering and thus prevented users from being overwhelmed by irrelevant documents or from missing relevant ones in heterogeneous and distributed information sources. It also enabled the adaptation of documents to individual contexts and preferences, and the dynamic composition of various document management services. Evaluation of the system framework and demonstrator revealed that the system enhances the efficiency of construction information management, with the three most beneficial areas being project knowledge management, collaborative design and communication between project team members. The SWiMS annotations, ontologies and deductive rules are important technologies that provide an innovative approach to managing construction information. These enable the information in construction documents, both structured and unstructured, to be interpretable by computers. This ensures the efficiency and precision of construction information management.
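A minimal sketch of the annotation-and-filtering idea described above, assuming a placeholder doc: vocabulary rather than the thesis's actual IFC-based ontologies:

```python
# Hedged sketch: annotating a project document with RDF and filtering by discipline.
# The doc: vocabulary and URIs are placeholders, not the SWiMS ontologies themselves.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF

DOC = Namespace("http://example.org/project-doc#")
g = Graph()

drawing = URIRef("http://example.org/docs/structural-drawing-017")
g.add((drawing, RDF.type, DOC.Drawing))
g.add((drawing, DOC.concernsElement, DOC.Beam))          # content annotation
g.add((drawing, DOC.discipline, Literal("structural")))  # context annotation

# Rule-style filter: return only documents relevant to a structural engineer.
relevant = g.query("""
    PREFIX doc: <http://example.org/project-doc#>
    SELECT ?d WHERE { ?d doc:discipline "structural" }
""")
for row in relevant:
    print(row.d)
```

A discipline-specific query of this kind is one simple form such rule-based filtering could take.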
283

The Contribution of Open Frameworks to Life Cycle Assessment

Sayan, Bianca January 2011 (has links)
Environmental metrics play a significant role in behavioural change, policy formation, education, and industrial decision-making. Life Cycle Assessment (LCA) is a powerful framework for providing information on environmental impacts, but LCA data is under-utilized, difficult to access, and difficult to understand. Among the issues that must be resolved to increase the relevance and use of LCA are accessibility, validation, reporting and publication, and transparency. This thesis proposes that many of these issues can be resolved through the application of open frameworks for LCA software and data. The open source software (OSS), open data, open access, and semantic web movements advocate the transparent development of software and data, inviting all interested parties to contribute. A survey was presented to the LCA community to gauge its interest in and receptivity to working within open frameworks, as well as its existing concerns with LCA data. Responses indicated dissatisfaction with existing tools and some interest in open frameworks, though interest in contributing was weak. The responses also pointed to transparency, the expansion of LCA information, and feedback as desirable areas for improvement. Software for providing online LCA databases was developed according to open source, open data, and linked data principles and practices. The produced software incorporates features that attempt to resolve issues identified in previous literature, in addition to needs defined from the survey responses. The developed software offers improvements over other databases in the areas of transparency, data structure flexibility, and ability to facilitate user feedback. The software was implemented as a proof of concept, as a test-bed for attracting data contributions from LCA practitioners, and as a tool for interested users. The implementation allows users to add LCA data, to search through LCA data, and to use data from the software in separate, independent tools. The research contributes to the LCA field by addressing barriers to improving LCA data and access, and by providing a platform on which LCA database tools and data can develop efficiently, collectively, and iteratively.
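To make the linked-data angle concrete, the sketch below publishes a single hypothetical LCA process record as RDF with rdflib. The lca: vocabulary, the URIs and the impact figure are illustrative placeholders, not the data model or data used in the thesis.

```python
# Hedged sketch: exposing one LCA process record as linked data with rdflib.
# The lca: vocabulary, URIs and numeric value are placeholders for illustration only.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

LCA = Namespace("http://example.org/lca#")
g = Graph()
g.bind("lca", LCA)

proc = LCA["process/steel-rebar-production"]
g.add((proc, RDF.type, LCA.Process))
g.add((proc, LCA.functionalUnit, Literal("1 kg steel rebar")))
g.add((proc, LCA.gwp100, Literal("1.85", datatype=XSD.decimal)))  # kg CO2-eq, invented value

print(g.serialize(format="turtle"))  # machine-readable, reusable by independent tools
```

Serializing records in a standard format like Turtle is what allows separate, independent tools to reuse the data without going through the database's own interface.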
284

Semantic Analysis of Wikipedia's Linked Data Graph for Entity Detection and Topic Identification Applications

AlemZadeh, Milad January 2012 (has links)
The Semantic Web and Linked Data community is now shaping the future of the Web. The standards and technologies defined in this field have opened a strong pathway towards a new era of knowledge management and representation for the computing world. The data structures and semantic formats introduced by the Semantic Web standards offer a platform for all data and knowledge providers in the world to present their information in a free, publicly available, semantically tagged, inter-linked, and machine-readable structure. As a result, the adoption of Semantic Web standards by data providers creates numerous opportunities for the development of new applications which were not possible or, at best, hardly achievable with the current state of the Web, which mostly consists of unstructured or semi-structured data with minimal attached semantic metadata, tailored mainly for human readability. This dissertation introduces a framework for effective analysis of Semantic Web data towards the development of solutions for a series of related applications. To build such a framework, Wikipedia is chosen as the main knowledge resource, largely because it is the central dataset of the Linked Data community. In this work, Wikipedia and its Semantic Web version, DBpedia, are used to create a semantic graph which constitutes the knowledge base and the back-end foundation of the framework. The semantic graph introduced in this research consists of two main concepts: entities and topics. The entities act as the knowledge items, while topics create the class hierarchy of the knowledge items. Therefore, by assigning entities to various topics, the semantic graph presents all the knowledge items in a categorized hierarchy ready for further processing. Furthermore, this dissertation introduces various analysis algorithms over entity and topic graphs which can be used in a variety of applications, especially in the natural language understanding and knowledge management fields. After explaining the details of the analysis algorithms, a number of possible applications are presented and potential solutions to these applications are provided. The main themes of these applications are entity detection, topic identification, and context acquisition. To demonstrate the efficiency of the framework algorithms, some of the applications are developed and comprehensively studied by providing detailed experimental results which are compared with appropriate benchmarks. These results show how the framework can be used in different configurations and how different parameters affect the performance of the algorithms.
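The kind of entity-to-topic edge such a graph is built from can be pulled directly from DBpedia's public SPARQL endpoint; the sketch below does this for one entity with SPARQLWrapper. The query shape is an assumption about how such a graph might be seeded, not the dissertation's actual extraction pipeline.

```python
# Hedged sketch: retrieving the topic categories of one DBpedia entity, i.e. the
# entity-to-topic edges a semantic graph of this kind could be seeded from.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX dct: <http://purl.org/dc/terms/>
    SELECT ?topic WHERE {
        <http://dbpedia.org/resource/Semantic_Web> dct:subject ?topic .
    }
""")

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["topic"]["value"])  # Wikipedia categories acting as topics
```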
285

ONTOLOGY MERGING USING SEMANTICALLY-DEFINED MERGE CRITERIA AND OWL REASONING SERVICES: TOWARDS EXECUTION-TIME MERGING OF MULTIPLE CLINICAL WORKFLOWS TO HANDLE COMORBIDITIES

Jafarpour, Borna 16 December 2013 (has links)
Semantic web based decision support systems represent domain knowledge using ontologies that capture the domain concepts, their relationships and instances. Typically, decision support systems use a single knowledge model (i.e. a single ontology), which at times restricts the knowledge coverage to only select aspects of the domain knowledge. The integration of multiple knowledge models (i.e. multiple ontologies) provides a holistic knowledge model that encompasses multiple perspectives, orientations and instances. The challenge is the execution-time merging of multiple ontologies whilst maintaining knowledge consistency and procedural validity. Knowledge morphing aims at the intelligent merging of multiple computerized knowledge artifacts, represented as distinct ontological models, in order to create a holistic and networked knowledge model. In our research, we have investigated and developed a knowledge morphing framework, termed OntoMorph, that supports ontology merging through: (1) Ontology Reconciliation, whereby we harmonize multiple ontologies in terms of their vocabularies, knowledge coverage, and description granularities; (2) Ontology Merging, where multiple reconciled ontologies are merged into a single merged ontology. To achieve ontology merging, we have formalized a set of semantically-defined merging criteria that determine ontology merge points, and describe the associated process-specific and knowledge consistency constraints that need to be satisfied to ensure consistent ontology merging; and (3) Ontology Execution, whereby we have developed logic-based execution engines for both execution-time ontology merging and the execution of the merged ontology to infer knowledge-based recommendations. We have utilized OWL reasoning services, for efficient and decidable reasoning, to execute an OWL ontology. We have applied the OntoMorph framework to clinical decision support, more specifically to achieve the dynamic merging of multiple clinical practice guidelines in order to handle comorbid situations where a patient may have multiple diseases and hence multiple clinical guidelines are to be simultaneously operationalized. We have demonstrated the execution-time merging of ontologically-modelled clinical guidelines, such that decision support recommendations are derived from multiple, yet merged, clinical guidelines and the inferred recommendations remain clinically consistent. The thesis contributes new methods for ontology reconciliation, merging and execution, and presents a solution for execution-time merging of multiple clinical guidelines.
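As a rough sketch of the merge-then-reason step, the code below loads two hypothetical guideline ontologies, merges them via an imports relationship, and checks the union for logical inconsistencies with an OWL reasoner (HermiT via owlready2, which requires a local Java runtime). The file names are placeholders, and this naive import-based merge stands in for, rather than reproduces, OntoMorph's semantically-defined merge criteria.

```python
# Hedged sketch: execution-time merge of two guideline ontologies plus a consistency
# check with an OWL reasoner. File names are placeholders; the import-based merge is
# a simplification, not OntoMorph's merge criteria.
from owlready2 import get_ontology, sync_reasoner, default_world

diabetes = get_ontology("file://diabetes_guideline.owl").load()
cardiac = get_ontology("file://cardiac_guideline.owl").load()

# Naive merge: have one ontology import the other, then reason over the union.
diabetes.imported_ontologies.append(cardiac)

with diabetes:
    sync_reasoner()  # classify the merged model and flag logical inconsistencies

print("Inconsistent classes after merge:",
      list(default_world.inconsistent_classes()))
```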
286

Relational Learning and Optimization in the Semantic Web

Fischer, Thomas 07 July 2011 (has links) (PDF)
In this paper, the author presents his current research topic, research objectives, and research questions. The paper motivates the integration of implicit background knowledge into data mining and optimization techniques based on semantic web knowledge bases. Furthermore, it outlines work in related research areas and states the research methodology.
288

Improving Centruflow using semantic web technologies : a thesis presented in partial fulfillment of the requirements for the degree of Master of Science in Computer Science at Massey University, Palmerston North, New Zealand

Giles, Jonathan Andrew January 2007 (has links)
Centruflow is an application that can be used to visualise structured data. It does this by drawing graphs, allowing users to explore information relationships that may not be visible or easily understood otherwise. This helps users to gain a better understanding of their organisation and to communicate more effectively. In earlier versions of Centruflow, it was difficult to develop new functionality as it was built using a relatively unsupported and proprietary visualisation toolkit. In addition, there were major issues surrounding information currency and trust. Something had to be done, and this was a sub-project of this thesis. The main purpose of this thesis, however, was to research and develop a set of mathematical algorithms to infer implicit relationships in Centruflow data sources. Once these implicit relationships were found, we could make them explicit by showing them within Centruflow. To enable this, relationships were to be calculated based on providing users with the ability to 'tag' resources with metadata. We believed that by using this tagging metadata, Centruflow could offer users far more insight into their own data. Implementing this was not a straightforward task, as it required a considerable amount of research and development to understand and appreciate technologies that could help us in our goal. Our focus was primarily on technologies and approaches common in the semantic web and 'Web 2.0' areas. By pursuing semantic web technologies, we ensured that Centruflow would be considerably more standards-compliant than it was previously. At the conclusion of our development period, Centruflow had been substantially 'retrofitted', with all proprietary technologies replaced with equivalent semantic web technologies. The result is that Centruflow is now positioned at the forefront of the semantic web wave, allowing for far more comprehensive and rapid visualisation of a far larger set of readily-available data than was possible previously. Having implemented all necessary functionality, we validated our approach and were pleased to find that our improvements led to a considerably more intelligent and useful Centruflow application than was previously available. This functionality is now available as part of 'Centruflow 3.0', which will be publicly released in March 2008. Finally, we conclude this thesis with a discussion of the future work that should be undertaken to improve on the current release.
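One simple way to infer an implicit relationship from such tagging metadata is tag-overlap similarity; the sketch below applies a Jaccard measure to a small invented tag set. It is a generic heuristic offered for illustration, not the algorithms developed in the thesis.

```python
# Hedged sketch: inferring implicit links from shared tags via Jaccard similarity.
# The resources, tags and threshold are invented; this is not Centruflow's method.
from itertools import combinations

tags = {  # resource -> user-assigned tags (illustrative data)
    "report-2007-q1": {"finance", "forecast", "sales"},
    "budget-draft":   {"finance", "forecast"},
    "org-chart":      {"hr", "structure"},
}

def jaccard(a: set, b: set) -> float:
    """Share of tags two resources have in common, out of all tags on either."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Make the implicit relationships explicit when similarity crosses a threshold.
for (r1, t1), (r2, t2) in combinations(tags.items(), 2):
    score = jaccard(t1, t2)
    if score >= 0.5:
        print(f"implicit link: {r1} <-> {r2} (tag overlap {score:.2f})")
```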
290

Developing Materials Informatics Workbench for Expediting the Discovery of Novel Compound Materials

Kwok Wai Steny Cheung Unknown Date (has links)
This project presents a Materials Informatics Workbench that resolves the challenges confronting materials scientists in the assimilation and dissemination of materials science data. It adopts an approach that ingeniously combines and extends the technologies of the Semantic Web, Web Service Business Process Execution Language (WSBPEL) and Open Archives Initiative Object Reuse and Exchange (OAI-ORE). These technologies enable the development of the novel user interfaces and the innovative algorithms and techniques behind the major components of the proposed workbench. In recent years, materials scientists have been struggling with the ever-increasing amount of complex materials science data available from online sources and generated by high-throughput laboratory instruments and data-intensive software tools. Meanwhile, funding organizations have encouraged, and even mandated, sponsored researchers across many domains to make scientifically valuable data, together with traditional scholarly publications, available to the public. This open access requirement provides an opportunity for materials scientists who are able to exploit the available data to expedite the discovery of novel compound materials. However, it also poses challenges for them. Materials scientists raise concerns about the difficulties of precisely locating and processing diverse, but related, data from different data sources and of effectively managing laboratory information and data. In addition, they lack simple tools for data access and publication, and require measures for Intellectual Property protection and standards for data sharing, exchange and reuse. The following paragraphs describe how the major workbench components resolve these challenges. First, the materials science ontology, represented in the Web Ontology Language (OWL), enables: (1) the mapping between and the integration of disparate materials science databases; (2) the modelling of experimental provenance information acquired in the physical and digital domains; and (3) the inferencing and extraction of new knowledge within the materials science domain. Next, the federated search interface based on the materials science ontology enables materials scientists to search, retrieve, correlate and integrate diverse, but related, materials science data and information across disparate databases. Then, a workflow management system underpinned by the WSBPEL engine is not only able to manage a scientific investigation process that incorporates multidisciplinary scientists distributed over a wide geographic region and self-contained computational services, but also systematically acquires the experimental data and information generated by the process. Finally, the provenance-aware scientific compound-object publishing system provides the scientists with a view of the highly complex scientific workflow at multiple levels of granularity. Thus, they can easily comprehend the science of the workflow, access experimental information and keep confidential information from unauthorised viewers.
It also enables the scientists to quickly and easily author and publish a scientific compound object that: (1) incorporates not only the internal experimental data, with provenance information, from the rendered view of a scientific experimental workflow, but also external digital objects with their metadata, for example, published scholarly papers discoverable via the World Wide Web; (2) is self-contained and explanatory, with IP protection; and (3) is guaranteed to be disseminated widely on the Web. Prototype systems of the major workbench components have been developed. The quality of the materials science ontology has been assessed based on Gruber's principles for the design of ontologies used for knowledge sharing, while its applicability has been evaluated through two of the workbench components, the ontology-based federated search interface and the provenance-aware scientific compound-object publishing system. These prototype systems have been deployed within a team of fuel cell scientists working within the Australian Institute for Bioengineering and Nanotechnology (AIBN) at the University of Queensland. Following the user evaluation, the overall feedback to date has been very positive. First, the scientists were impressed with the convenience of the ontology-based federated search interface because of the easy and quick access to the integrated databases and analytical tools. Next, they were relieved that the complex compound synthesis process could be managed by and monitored through the WSBPEL workflow management system. They were also excited that the system is able to systematically acquire the huge amounts of complex experimental data produced by self-contained computational services, which no longer have to be handled manually with paper-based laboratory notebooks. Finally, the scientific compound object publishing system inspired them to publish their data voluntarily, because it provides a scientist-friendly and intuitive interface that enables them to: (1) intuitively access experimental data and information; (2) author self-contained and explanatory scientific compound objects that incorporate experimental data and information about research outcomes, along with published scholarly papers and peer-reviewed datasets that strengthen those outcomes; (3) enforce proper measures for IP protection; (4) make those objects compliant with Open Archives Initiative Object Reuse and Exchange (OAI-ORE) to maximize their dissemination over the Web; and (5) ingest those objects into a Fedora-based digital library.
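For a concrete sense of what OAI-ORE compliance looks like, the sketch below describes a hypothetical compound object as an ORE aggregation with rdflib. The ore: terms come from the published OAI-ORE vocabulary, but every resource URI here is a placeholder rather than an artifact from the workbench.

```python
# Hedged sketch: describing a scientific compound object as an OAI-ORE aggregation.
# The ore: namespace is the standard OAI-ORE vocabulary; all resource URIs are placeholders.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

ORE = Namespace("http://www.openarchives.org/ore/terms/")
g = Graph()
g.bind("ore", ORE)

rem = URIRef("http://example.org/experiments/fuelcell-42/rem")  # resource map
agg = URIRef("http://example.org/experiments/fuelcell-42/agg")  # aggregation

g.add((rem, RDF.type, ORE.ResourceMap))
g.add((rem, ORE.describes, agg))
g.add((agg, RDF.type, ORE.Aggregation))
# Aggregate internal experimental data alongside an external published paper.
g.add((agg, ORE.aggregates, URIRef("http://example.org/experiments/fuelcell-42/raw-data.csv")))
g.add((agg, ORE.aggregates, URIRef("https://doi.org/10.0000/example-paper")))

print(g.serialize(format="turtle"))  # resource map ready for ingest into a repository
```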
