251 |
[en] A SEMANTIC WEB APPLICATION FRAMEWORK / [pt] UM FRAMEWORK DE APLICAÇÕES PARA A WEB SEMÂNTICA. Cunha, Leonardo Magela, 26 June 2007 (has links)
[en] Documents were the main vehicle of the Web until some years ago. With the advent of Web applications, data stored in organizations' databases and legacy systems was made available to users. However, very often, the exchange of data between those applications, or between them and end-user applications, was not possible because they used different formats for representing information. The development of standards and the use of the eXtensible Markup Language (XML) solved parts of the problem. That was a syntactic solution, and it works in several cases, e.g., schema interoperability in business-to-business e-commerce scenarios. Nevertheless, the lack of semantics in these data prevented applications from taking greater advantage of them. The idea behind the Semantic Web is to define explicitly the semantics of the data available on the Web. We therefore expect another step forward, in which applications, whether corporate or end-user, will understand the meaning of the data available on the Web. Once those applications can understand it, they will be able to help users take advantage of this data-driven Web and perform their daily tasks more easily. This thesis proposes a framework for the development of Semantic Web applications. Considering the scenario described above, the number of possible applications is almost infinite. For this reason, we restricted ourselves to examining the solutions that aim to solve the problem presented at the Semantic Web Challenge, and to proposing a framework that represents those solutions. The Challenge is concerned with demonstrating how Semantic Web techniques can provide valuable or attractive applications to end users. Our main concern was therefore to help developers achieve that added value or attractiveness through Semantic Web techniques, in a Software Engineering approach based on frameworks.
|
252 |
Semantic Matching for Model Integration: A Web Service Approach. Zeng, Chih-Jon, 31 July 2007 (has links)
Model integration that allows multiple models to work together for solving a sophisticated problem has been an important research issue in the management of decision models. The recent development of the service-oriented architecture (SOA) has provided an opportunity to apply this new technology to support model integration. This is particularly critical when more and more models are delivered as web services. A web-services-based approach to model management is useful in providing effective decision support.
Existing literature has adopted the approach of treating a model as a service; model integration can then be thought of as a composition of web services. In the composition process, the proper components and their relationships must be identified, which requires accurate model definition and reasoning.
In this research, we propose a semantics-based approach for developing such a system. The approach uses DAML-S to describe the capability of a service. The system can then discover suitable services for a particular requirement by applying semantic matching over these DAML-S documents. When suitable web services are found, the system uses BPEL4WS to compose them. The resulting composite web service can be applied to decision support. A prototype that demonstrates the feasibility of the proposed approach is implemented in Java.
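The discovery step described above hinges on ranking how well an advertised capability fits a request. A common way to picture this is the degree-of-match ranking from the service-matchmaking literature, sketched below in Python; the model-concept hierarchy and the exact ranking rules are invented for illustration and are not taken from the thesis.

```python
# Toy concept hierarchy for decision models: child -> parent.
HIERARCHY = {
    "LinearProgrammingModel": "OptimizationModel",
    "OptimizationModel": "DecisionModel",
    "ForecastingModel": "DecisionModel",
}

def ancestors(concept):
    """Yield a concept and all of its ancestors in the toy hierarchy."""
    while concept is not None:
        yield concept
        concept = HIERARCHY.get(concept)

def degree_of_match(advertised, requested):
    """Rank a match: exact, then plugin (advertised is more specific than
    requested), then subsumes (advertised is more general), else fail."""
    if advertised == requested:
        return "exact"
    if requested in ancestors(advertised):
        return "plugin"
    if advertised in ancestors(requested):
        return "subsumes"
    return "fail"
```

A matchmaker built on this idea would compute the degree for each advertised service profile and prefer exact over plugin over subsumes matches when selecting components for composition.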
|
253 |
From Interoperability to Harmonization in Metadata Standardization: Designing an Evolvable Framework for Metadata Harmonization. Nilsson, Mikael, January 2010 (has links)
Metadata is an increasingly central tool in the current web environment, enabling large-scale, distributed management of resources. Recent years have seen a growth in interaction between previously relatively isolated metadata communities, driven by a need for cross-domain collaboration and exchange. However, metadata standards have not been able to meet the needs of interoperability between independent standardization communities. For this reason the notion of metadata harmonization, defined as interoperability of combinations of metadata specifications, has risen as a core issue for the future of web-based metadata. This thesis presents a solution-oriented analysis of current issues in metadata harmonization. A set of widely used metadata specifications in the domains of learning technology, libraries and the general web environment has been chosen as the target of the analysis, with a special focus on Dublin Core, IEEE LOM and RDF. Through active participation in several metadata standardization communities, a body of knowledge of harmonization issues has been developed. The thesis presents an analytical framework of concepts and principles for understanding the issues arising when interfacing multiple standardization communities. The analytical framework focuses on a set of important patterns in metadata specifications and their respective contributions to harmonization issues:
- Metadata syntaxes as a tool for metadata exchange. Syntaxes are shown to be of secondary importance in harmonization.
- Metadata semantics as a cornerstone for interoperability. This thesis argues that incongruences in the interpretation of metadata descriptions play a significant role in harmonization.
- Abstract models for metadata as a tool for designing metadata standards. Such models are shown to be pivotal in the understanding of harmonization problems.
- Vocabularies as carriers of meaning in metadata. The thesis shows how portable vocabularies can carry semantics from one standard to another, enabling harmonization.
- Application profiles as a method for combining metadata standards. While application profiles have been put forward as a powerful tool for interoperability, the thesis concludes that they have only a marginal role to play in harmonization.
The analytical framework is used to analyze and compare seven metadata specifications, and a concrete set of harmonization issues is presented. These issues are used as the basis for a metadata harmonization framework in which a multitude of metadata specifications with different characteristics can coexist. The thesis concludes that the Resource Description Framework (RDF) is the only existing specification that has the right characteristics to serve as a practical basis for such a harmonization framework, and therefore must be taken into account when designing metadata specifications. Based on the harmonization framework, a best practice for metadata standardization development is developed, and a roadmap for harmonization improvements of the analyzed standards is presented.
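The role of portable vocabularies can be made concrete with a toy example: when two records from different communities express a title field with the same Dublin Core term as RDF-style triples, combining them is plain set union, with no translation layer in between. The namespaces below are the real Dublin Core URIs, but the records and the merging code are illustrative only.

```python
# Dublin Core Terms namespace (a real, widely shared vocabulary).
DCT = "http://purl.org/dc/terms/"

# Record from a library catalogue, as (subject, predicate, object) triples.
record_a = {
    ("urn:isbn:0451450523", DCT + "title", "The Name of the Rose"),
}

# Record from a learning-object repository that maps its own title
# field onto the same Dublin Core term.
record_b = {
    ("urn:isbn:0451450523", DCT + "title", "The Name of the Rose"),
    ("urn:isbn:0451450523", DCT + "language", "en"),
}

# Because both records reuse the same vocabulary URI, merging is set
# union and the shared statement deduplicates automatically.
merged = record_a | record_b
titles = {o for (s, p, o) in merged if p == DCT + "title"}
```

The point of the sketch is the absence of a mapping step: harmonization happens because the two communities committed to the same portable term, not because software translated between schemas.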
|
254 |
Interval Neutrosophic Sets and Logic: Theory and Applications in Computing. Wang, Haibin, 12 January 2006 (has links)
The neutrosophic set stems from neutrosophy, the study of the origin, nature, and scope of neutralities and their interactions with different ideational spectra. The neutrosophic set is a powerful general formal framework that has recently been proposed; however, it needs to be specified from a technical point of view. Here, we define set-theoretic operators on an instance of the neutrosophic set, which we call the Interval Neutrosophic Set (INS). We prove various properties of the INS connected to operations and relations over INSs. We also introduce a new logic system based on interval neutrosophic sets: we study the interval neutrosophic propositional calculus and the interval neutrosophic predicate calculus, and we create a neutrosophic logic inference system based on interval neutrosophic logic. Within the framework of the interval neutrosophic set, we propose a data model based on a special case of interval neutrosophic sets, the Neutrosophic Data Model, which extends the fuzzy data model and the paraconsistent data model. We generalize the set-theoretic and relation-theoretic operators of fuzzy relations and paraconsistent relations to neutrosophic relations, and we propose generalized SQL query constructs and a tuple-relational calculus for the Neutrosophic Data Model. We also design an architecture for a Semantic Web Services agent based on interval neutrosophic logic and conduct a simulation study.
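As a rough illustration of the set-theoretic operators mentioned above, the sketch below applies one common convention from the interval neutrosophic literature (union takes the max on truth and the min on indeterminacy and falsity); the thesis's own operator definitions may differ in detail.

```python
# An INS membership value is a triple (T, I, F) of truth, indeterminacy
# and falsity, each an interval given as an (inf, sup) pair.

def interval_max(a, b):
    return (max(a[0], b[0]), max(a[1], b[1]))

def interval_min(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]))

def ins_union(x, y):
    """Union of two INS values: max on truth, min on indeterminacy and falsity."""
    (tx, ix, fx), (ty, iy, fy) = x, y
    return (interval_max(tx, ty), interval_min(ix, iy), interval_min(fx, fy))

def ins_complement(x):
    """Complement swaps truth and falsity and reflects indeterminacy in [0, 1]."""
    t, i, f = x
    return (f, (1 - i[1], 1 - i[0]), t)
```

Intersection would follow dually (min on truth, max on the other components) under the same convention.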
|
255 |
Collaborative tagging: folksonomy, metadata, visualization, e-learning, thesis. Bateman, Scott, 12 December 2007
Collaborative tagging is a simple and effective method for organizing and sharing web resources using human-created metadata. It has arisen out of the need for an efficient method of personal organization as the number of digital resources in everyday life increases. While tagging has become a proven organization scheme through its popularity and widespread use on the Web, little is known about its implications and how it can be applied effectively in different situations, because tagging has evolved through several iterations of use on social software websites rather than through a scientific or engineering design process. The research presented in this thesis, through investigations in the domain of e-learning, seeks to understand more about the scientific nature of collaborative tagging through a number of human-subject studies. While broad in scope, touching on issues in human-computer interaction, knowledge representation, Web system architecture, e-learning, metadata, and information visualization, this thesis focuses on how collaborative tagging can supplement the growing metadata requirements of e-learning. I conclude by looking at how the findings may be used in future research, using information from the emergent social networks of social software to adapt automatically to the needs of individual users.
|
256 |
本體論為基礎的統計資訊整合-以政府公開資訊為例 / Ontology-Based Statistical Data Integration for Open Government. 梁世麒 (Liang, Shih Chi), Unknown Date (has links)
For enhancement of the value of democracy, governments are expected to publish statistical data to explain and monitor the performance of policy implementation while they spend national resources and tax revenue on their policies. The data provided by official departments usually cover multiple domains in diverse formats, which makes it difficult to generate value-added information from a single source. The embedded value can be revealed only by cross-referencing multiple sources: useful information must be collected, cross-referred, and compared across them. In addition, once a government publishes its data, the volume accumulates rapidly, and manual collection and comparison become increasingly difficult. It is therefore a considerable challenge to extract meaningful content from different sources dynamically. This study uses semantic web technology to overcome this difficulty by integrating the querying of diverse data on a single platform. On this platform, users can select a specific data dimension or measurement unit as an integration condition and query specific or unspecific objects; the value of the data is enhanced through the integrated results. The ultimate purpose of this study is to provide a systematized method to extract, integrate, and reuse the government's public statistical data in a meaningful way.
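The integration idea can be pictured as a join of statistical observations keyed by shared dimension identifiers, in the spirit of ontology-based data cubes: each agency's figures are lifted onto common region URIs so that observations combine without knowledge of either source's native layout. All URIs and figures below are invented for illustration.

```python
# Source A (census office): observations keyed by a region-URI tuple.
population = {
    ("http://example.org/region/north",): {"population": 120_000},
    ("http://example.org/region/south",): {"population": 80_000},
}

# Source B (finance ministry): a different measure, same dimension keys.
budget = {
    ("http://example.org/region/north",): {"budget": 3_600_000},
    ("http://example.org/region/south",): {"budget": 3_200_000},
}

def join_on_dimension(*datasets):
    """Merge observations that share the same dimension-key tuple."""
    merged = {}
    for ds in datasets:
        for key, measures in ds.items():
            merged.setdefault(key, {}).update(measures)
    return merged

# Cross-source derived measure: budget per capita by region.
combined = join_on_dimension(population, budget)
per_capita = {key: m["budget"] / m["population"] for key, m in combined.items()}
```

The value-added figure (budget per capita) exists in neither source alone, which is the kind of cross-source result the platform aims to make routine.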
|
257 |
Ontology Matching based On Class Context: to solve interoperability problem at Semantic Web. Lera Castro, Isaac, 17 May 2012 (has links)
When we look at the amount of resources spent converting formats into other formats, that is, simply making information systems useful, we realise that our communication model is inefficient. The transformation of information, like the transformation of energy, is only as efficient as its converters. In this work, we propose a new way to "convert" information: a mapping algorithm for semantic information based on the context of that information, which redefines the framework where this paradigm merges with multiple techniques. Our main goal is to offer a new view from which further progress can be made and, ultimately, to streamline and minimize the communication chain in integration processes.
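One way to picture context-based class matching is to describe each class by the set of labels in its neighbourhood (superclasses, properties, siblings) and align classes whose contexts overlap enough. The Jaccard measure, the threshold, and the toy ontologies below are illustrative choices, not the algorithm defined in the thesis.

```python
def jaccard(a, b):
    """Overlap of two label sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def match_classes(onto_a, onto_b, threshold=0.5):
    """Return (class_a, class_b) pairs whose context similarity
    meets the threshold; each ontology maps class name -> context set."""
    return [
        (ca, cb)
        for ca, ctx_a in onto_a.items()
        for cb, ctx_b in onto_b.items()
        if jaccard(ctx_a, ctx_b) >= threshold
    ]

# Two toy ontologies with invented context labels.
onto_a = {"Author": {"person", "writes", "name"},
          "Paper": {"document", "title", "abstract"}}
onto_b = {"Writer": {"person", "writes", "pseudonym"},
          "Invoice": {"amount", "date", "customer"}}
```

Here "Author" and "Writer" align because their contexts share "person" and "writes", even though the class names themselves differ, which is the intuition behind context-based rather than label-based matching.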
|
258 |
SWI-Prolog as a Semantic Web Tool for semantic querying in Bioclipse: Integration and performance benchmarking. Lampa, Samuel, January 2010 (has links)
The huge amounts of data produced by high-throughput techniques in the life sciences, and the need to integrate heterogeneous data from disparate sources in new fields such as Systems Biology and translational drug development, require better approaches to data integration. The semantic web is anticipated to provide solutions through new formats for knowledge representation and management. Software libraries for semantic web formats are becoming mature, but multiple tools exist that are based on foundationally different technologies. SWI-Prolog, a tool with semantic web support, was integrated into the Bioclipse bio- and cheminformatics workbench and evaluated, in terms of performance, against the non-Prolog-based semantic web tools available in Bioclipse, Jena and Pellet, for querying a data set consisting mostly of numerical NMR shift values in the semantic web format RDF. The integration has given access to the convenience of the Prolog language for working with semantic data and for defining data management workflows in Bioclipse. The performance comparison shows that SWI-Prolog is superior to Jena and Pellet for this specific dataset and suggests that Prolog-based tools are interesting candidates for further evaluation.
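The kind of query benchmarked here can be pictured as a filter over RDF-style triples of NMR shift values: find the spectra that have at least one shift in a given window. The predicate URI and data below are invented, and the thesis ran equivalent queries through SWI-Prolog, Jena and Pellet rather than plain Python; this is only a sketch of the query's shape.

```python
# Invented predicate URI for a numerical NMR shift value.
SHIFT = "http://example.org/onto/hasShiftValue"

# A toy RDF-like graph as (subject, predicate, object) triples.
triples = [
    ("spectrum:1", SHIFT, 7.26),
    ("spectrum:1", SHIFT, 2.17),
    ("spectrum:2", SHIFT, 7.30),
]

def subjects_with_shift_between(triples, low, high):
    """Distinct subjects having at least one shift value in [low, high]."""
    return sorted({s for (s, p, o) in triples
                   if p == SHIFT and low <= o <= high})
```

In a Prolog-based tool the same query is a conjunction of `rdf/3`-style goals with an arithmetic constraint, which is part of the convenience the integration was after.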
|
259 |
Distributed Search in Semantic Web Service Discovery. Ziembicki, Joanna, January 2006 (has links)
This thesis presents a framework for semantic Web Service discovery using descriptive (non-functional) service characteristics in a large-scale, multi-domain setting. The framework uses the Web Ontology Language for Services (OWL-S) to design a template for describing non-functional service parameters in a way that facilitates service discovery, and presents a layered scheme for organizing the ontologies used in service description. This service description scheme serves as the core for designing the four main functions of a service directory: a template-based user interface, semantic query expansion algorithms, a two-level indexing scheme that combines Bloom filters with a Distributed Hash Table, and a distributed approach to storing service descriptions. The service directory is, in turn, implemented as an extension of the Open Service Discovery Architecture.

The search algorithms presented in this thesis are designed to maximize the precision and completeness of service discovery, while the distributed design of the directory allows individual administrative domains to retain a high degree of independence and maintain access control over information about their services.
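The first level of the two-level index can be sketched as a Bloom filter that summarises which keys a directory node holds, letting a peer cheaply rule a node out before touching the DHT level (a filter answers "definitely not here" or "maybe here"). The bit-array size and the hashing scheme below are illustrative, not the thesis's parameters.

```python
import hashlib

class BloomFilter:
    """A minimal Bloom filter over an integer used as a bit array."""

    def __init__(self, size=1024, hashes=3):
        self.size = size      # number of bits
        self.hashes = hashes  # number of hash functions
        self.bits = 0         # all bits initially clear

    def _positions(self, item):
        # Derive `hashes` bit positions by salting SHA-256 with an index.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        """False means definitely absent; True may be a false positive."""
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

A node would publish its filter to peers; only when `might_contain` says "maybe" does the requester proceed to the exact (and more expensive) DHT lookup, trading a small false-positive rate for far fewer remote queries.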
|
260 |
The Contribution of Open Frameworks to Life Cycle Assessment. Sayan, Bianca, January 2011 (has links)
Environmental metrics play a significant role in behavioural change, policy formation, education, and industrial decision-making. Life Cycle Assessment (LCA) is a powerful framework for providing information on environmental impacts, but LCA data is under-utilized, difficult to access, and difficult to understand. Among the issues that must be resolved to increase the relevance and use of LCA are accessibility, validation, reporting and publication, and transparency.
This thesis proposes that many of these issues can be resolved through the application of open frameworks for LCA software and data. The open source software (OSS), open data, open access, and semantic web movements advocate the transparent development of software and data, inviting all interested parties to contribute.
A survey was presented to the LCA community to gauge its interest in and receptivity to working within open frameworks, as well as its existing concerns with LCA data. Responses indicated dissatisfaction with existing tools and some interest in open frameworks, though interest in contributing was weak. The responses also identified transparency, the expansion of LCA information, and feedback as desirable areas for improvement.
Software for providing online LCA databases was developed according to open source, open data, and linked data principles and practices. The produced software incorporates features that attempt to resolve issues identified in previous literature in addition to needs defined from the survey responses. The developed software offers improvements over other databases in areas of transparency, data structure flexibility, and ability to facilitate user feedback.
The software was implemented as a proof of concept, as a test-bed for attracting data contributions from LCA practitioners, and as a tool for interested users. The implementation allows users to add LCA data, to search through it, and to use data from the software in separate, independent tools.
The research contributes to the LCA field by addressing barriers to improving LCA data and access, and providing a platform on which LCA database tools and data can develop efficiently, collectively, and iteratively.
|