551 |
Um modelo semântico para integração automática de conteúdo com um agente conversacional. Santos, Fabio Rodrigues dos, 26 February 2016 (has links)
Submitted by Silvana Teresinha Dornelles Studzinski (sstudzinski) on 2016-05-09T13:07:53Z
No. of bitstreams: 1
Fabio Rodrigues dos Santos_.pdf: 7109958 bytes, checksum: 136a6e535dbd26994aa6dbd702767b78 (MD5) / Made available in DSpace on 2016-05-09T13:07:53Z (GMT).
Previous issue date: 2016-02-26 / IFRR - Instituto Federal de Educação Ciências e Tecnologia de Roraima / Um agente conversacional é capaz de interagir com usuários em linguagem natural, imitando o diálogo realizado entre seres humanos. Entretanto, a necessidade de trabalho manual por parte dos autores de conteúdo para a construção da sua base de diálogos não o torna muito atrativo para ser integrado a websites. Seria possível facilitar a criação de novos diálogos e automatizar a atualização dos mesmos usando como base os recursos disponibilizados pela Web Semântica para integrar o agente conversacional a um Sistema de Gerenciamento de Conteúdo Web (SGCW) de forma que seu conteúdo seja aproveitado. O trabalho aqui proposto descreve o modelo para utilização de informações de SGCW's em agentes conversacionais com auxílio da web semântica, denominado Eduardo. O principal diferencial está na automatização da atualização dos diálogos, pois os mesmos são extraídos do SGCW para uma base de dados de triplas, que são consultadas para geração dos diálogos. Caso haja alguma atualização na base do SGCW, basta refazer os procedimentos que já foram automatizados para que os diálogos sejam atualizados, evitando assim o trabalho manual de edição dos arquivos escritos em linguagem AIML (Artificial Intelligence Markup Language). Foi desenvolvido um protótipo deste modelo, que permite a integração dinâmica de conteúdo com um Agente Conversacional. O protótipo foi avaliado quanto à capacidade de gerar as informações necessárias para os diálogos, acessando as páginas Web e representando seu conteúdo nos formatos RDFa e AIML. Em um segundo momento, ensaios de interação foram realizados com usuários nos componentes do Agente Conversacional, para avaliar sua funcionalidade, aceitação e demais aspectos de uso. / A conversational agent is able to interact with users in natural language, imitating the dialogue held between humans.
However, the need for manual labor by content authors to build its dialogue base makes it unattractive for integration into websites. It would be possible to facilitate the creation of new dialogues and to automate their updating by using the resources provided by the Semantic Web to integrate the conversational agent with a Web Content Management System (WCMS) so that its content is reused. The work proposed here describes Eduardo, a model for using WCMS information in conversational agents with the aid of the semantic web. Its main contribution is the automated updating of dialogues: content is extracted from the WCMS into a triple database, which is queried to generate the dialogues. If the WCMS content is updated, the automated procedures are simply rerun to bring the dialogues up to date, avoiding the manual editing of files written in AIML (Artificial Intelligence Markup Language). A prototype of this model was developed, allowing dynamic content integration with a conversational agent. The prototype was evaluated for its ability to generate the information needed for the dialogues, accessing Web pages and representing their content in the RDFa and AIML formats. In a second stage, interaction tests were carried out with users on the components of the conversational agent to assess its functionality, acceptance and other aspects of use.
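The core mechanism summarized in this abstract (content triples extracted from the WCMS and queried to generate AIML dialogues) can be sketched as follows. This is a minimal illustration with invented predicate names and question templates, not the Eduardo prototype's actual code:

```python
# Minimal sketch: turn (subject, predicate, object) content triples
# extracted from a WCMS into AIML <category> entries. The triple
# vocabulary and the question template are illustrative assumptions.

TRIPLES = [
    ("Course:101", "hasTitle", "Introduction to Semantic Web"),
    ("Course:101", "hasDescription", "Covers RDF, RDFa and SPARQL."),
]

def make_aiml_category(pattern, template):
    """Render one AIML category; AIML patterns are uppercase by convention."""
    return (
        "<category>\n"
        f"  <pattern>{pattern.upper()}</pattern>\n"
        f"  <template>{template}</template>\n"
        "</category>"
    )

def triples_to_aiml(triples):
    """Generate one AIML category per descriptive triple.

    A real system would query a triple store (e.g. via SPARQL) and use
    richer templates; here only hasDescription triples are mapped to
    'WHAT IS <title>' questions."""
    titles = {s: o for s, p, o in triples if p == "hasTitle"}
    categories = []
    for s, p, o in triples:
        if p == "hasDescription" and s in titles:
            categories.append(make_aiml_category(f"what is {titles[s]}", o))
    return "\n".join(categories)

aiml = triples_to_aiml(TRIPLES)
print(aiml)
```

In the model described above, rerunning such a generation step after a WCMS update is what replaces the manual editing of AIML files.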
|
552 |
Web 3.0: impacto na sociedade de serviços uma análise da comunicação contemporânea. Koo, Lawrence Chung, 17 October 2011 (has links)
Made available in DSpace on 2016-04-26T18:11:14Z (GMT). No. of bitstreams: 1
Lawrence Chung Koo.pdf: 6692647 bytes, checksum: c6742bc4be0ad22c2cb8e80905bfebce (MD5)
Previous issue date: 2011-10-17 / This research aims to describe and analyze Web 3.0 in the world of communication, especially in the social media environment. The emergence of new search engines and of new forms of collaboration among internet users challenged us to study the development of this area. In particular, we studied the influence of Service Science in the digital world. We collected facts and analyzed the reasons why services are becoming the focus of attention in the Web environment, both in economic transactions and in processes of knowledge creation. We dedicate part of our study to the learning service, since this segment will be the foundation of future development on the Web.
The methodology was based on Chapter 4 of the book Pesquisa e Comunicação (Research and Communication) by Santaella (2004). The main authors who inspired the research on Service Science topics are Spohrer et al. (2007) and Maglio et al. (2010), while in the area of Web development the authors on whom we based our ideas are Spivack (2007) and Siemens (2005). The theoretical framework used in the thesis has been illustrated with the actual experiences of internet users, including my own daily records of surfing social networks, testing search engines, buying goods and services in e-commerce, using communication tools and, finally, talking to consumers in Brazil, for example on LinkedIn, Facebook, Google, Brands Club, etc.
This project was the result of data collection from academic literature and state-of-the-art articles, and of intensive Web browsing to verify the facts cited above, in the period from 2007 to 2011. It was intentional, on the part of the researcher, to use primarily electronic books (when there was a choice between print and e-book) and specialized websites and blogs moderated by experts on the research topics / Esta pesquisa tem como objetivo descrever e analisar a Web 3.0 que está no
contexto comunicacional, em especial, no ambiente de mídias sociais. O aparecimento
de novos mecanismos de busca e a inovação nas formas de colaboração entre os
internautas instigaram-nos a pesquisar o desenvolvimento dessa área. Em especial,
estudamos a influência da Ciência de Serviços no mundo digital. Coletamos fatos e
analisamos as razões pelas quais o serviço na Web passa a ser o foco das atenções, tanto
nas transações econômicas, como nos processos de criação do conhecimento.
Dedicamos parte do nosso estudo para o serviço de aprendizagem por ser esse segmento
o alicerce do desenvolvimento futuro na Web.
A metodologia utilizada na pesquisa teve como base o capítulo 4 do livro
Pesquisa e Comunicação, de Santaella (2004). Os autores principais que
fundamentaram este estudo na área da Ciência de Serviços foram Spohrer et al (2007),
Maglio et al (2010), e, na área de desenvolvimento da Web, os autores que mais nos
inspiraram foram O'Reilly (2009), Wheeler (2010), Spivack (2007) e Siemens (2005).
Além dos textos dos autores já mencionados, foram utilizadas as informações dos blogs
e dos artigos online de autores ou articulistas especialistas em internet e aquelas
coletadas pelo próprio pesquisador na Web, por meio dos sites de colaboração, das
redes sociais, dos mecanismos de buscas, do e-commerce, das ferramentas de
comunicação, do consumo e cultura no Brasil, como, por exemplo, plataformas
Linkedin, Facebook, Google, Twitter e site de compras coletivas Brands Club, Groupon
etc.
As conclusões obtidas na pesquisa, relativas à tendência da Web para se tornar
uma Web de serviços, resultaram da pesquisa bibliográfica e de artigos sobre o estado
da arte em relação ao tema sob estudo, e intensa navegação na Web para verificação dos
fatos citados acima, no período entre 2007 e 2011. Foi totalmente intencional, por parte
do pesquisador, utilizar prioritariamente os livros eletrônicos (quando havia opção entre
o impresso e o e-book), sites e blogs dos autores especialistas nos temas
|
553 |
Um modelo para ambientes inteligentes baseado em serviços web semânticos / A model for smart environments based on semantic web services. Guerra, Crhistian Alberto Noriega, 29 August 2007 (has links)
Um ambiente inteligente é um sistema de computação ubíqua e sensível ao contexto onde os sistemas computacionais embutidos no ambiente, a comunicação entre dispositivos e o ambiente, e a acessibilidade aos serviços do ambiente são transparentes ao usuário. O presente trabalho tem como objetivo propor um modelo para ambientes inteligentes baseado em serviços web semânticos, em que os serviços disponíveis para os dispositivos do ambiente são proporcionados como serviços web e a interação dispositivo - ambiente é feita em um contexto de computação móvel, onde a disponibilidade dos serviços e a informação de contexto do dispositivo mudam freqüentemente. No modelo proposto todas as funcionalidades do ambiente são fornecidas como serviços. Estes serviços são descobertos e executados automaticamente com a finalidade de ajudar o usuário a desenvolver tarefas específicas, permitindo ao usuário se concentrar nas tarefas e não na interação com o ambiente. O modelo se fundamenta na oferta de serviços dirigida pela tarefa a ser desenvolvida, o que é conhecido como Task-driven Computing. Por outro lado, para a automação do processo de descoberta e execução dos serviços é necessário ter uma especificação não ambígua da semântica dos serviços. Empregamos para isso a ontologia WSMO (Web Services Modeling Ontology) que fornece os elementos necessários para a descrição dos serviços disponíveis no ambiente e o contexto do dispositivo. Finalmente, como prova de conceitos do modelo proposto, foi implementado um ambiente inteligente para uma biblioteca. A ativação de um ambiente inteligente baseado no modelo proposto se baseia na definição de ontologias, descrição semântica dos serviços no ambiente e a implementação de serviços web tradicionais. 
/ A smart environment is a ubiquitous, context-aware computing system in which the computational systems embedded in the environment, the communication between devices and the environment, and the accessibility to services are transparent to the users. The aim of this work is to propose a semantic web services based model for smart environments, in which services are offered to devices as web services and the device-environment interactions take place in a mobile computing context, where the contextual information and the availability of services change frequently. In the proposed model all functionalities of the environment are offered as services. These services are automatically discovered and executed to support the user in a specific task, allowing the user to focus on the task rather than on the interaction with the environment. The model is based on offering services driven by the task to be performed, an approach known as task-driven computing. To automate the discovery and execution of services, a non-ambiguous specification of the semantics of the services is needed. We use the WSMO (Web Services Modeling Ontology), which provides the required elements for describing the services available in the environment and the device context. Finally, as a proof of concept of the proposed model, we implemented a smart environment for a library. Activating a smart environment based on the proposed model involves the definition of ontologies, the semantic description of the services in the environment and the implementation of traditional web services.
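The task-driven discovery idea in this abstract (services matched automatically to the user's current task rather than browsed by the user) can be illustrated with a toy sketch. Real WSMO descriptions involve ontologies, preconditions and effects; the set-based capability matching and the service names below are invented for illustration:

```python
# Sketch of task-driven service discovery: a task declares the
# capabilities it needs, and services advertised in the environment
# are matched automatically. Real WSMO-based matching reasons over
# ontologies; this plain set intersection is a toy stand-in.

SERVICES = {
    "PrintService": {"print-document"},
    "LocateBookService": {"locate-item", "query-catalog"},
    "LightingService": {"adjust-lighting"},
}

def discover(task_capabilities, services=SERVICES):
    """Return the names of services whose advertised capabilities
    overlap the capabilities required by the task."""
    return sorted(
        name for name, caps in services.items()
        if caps & set(task_capabilities)
    )

# A library user's "find a book" task needs a catalog lookup:
matches = discover({"query-catalog"})
print(matches)
```

In the library scenario of the abstract, such a matching step would run transparently whenever the user's task changes, so the user never selects services by hand.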
|
554 |
Applications communautaires spontanées dynamiquement reconfigurables en environnement pervasif / Dynamically reconfigurable applications for spontaneous communities in pervasive environment. Ben Nejma, Ghada, 22 December 2015 (has links)
Depuis quelques années, des évolutions importantes ont lieu en matière d’infrastructures technologiques. En particulier, la démocratisation des dispositifs mobiles (comme les PCs, Smartphones, Tablettes, etc.) a rendu l’information accessible par le grand public partout et à tout moment, ce qui est l’origine du concept d’informatique ubiquitaire. L’approche classique des systèmes de l’informatique ubiquitaire, qui répondent aux besoins des utilisateurs indépendants les uns des autres, a été bouleversée par l’introduction de la dimension sociale. Ce rapprochement est à l’origine d’une discipline naissante « le pervasive social computing » ou l’informatique socio-pervasive. Les applications socio-pervasives connaissent une véritable expansion. Ces dernières intègrent de plus en plus la notion de communauté. Le succès des applications communautaires se justifie par le but poursuivi par ces dernières qui est de répondre aux besoins des communautés et d’offrir un ‘chez soi’ virtuel, spécifique à la communauté, dans lequel elle va construire sa propre identité et réaliser ses objectifs. Par ailleurs, la notion de communauté représente une source d’informations contextuelles sociales. Elle est, aujourd’hui, au cœur des problématiques de personnalisation et d’adaptation des applications informatiques. Dans le cadre de cette thèse, nous étudions sous différents aspects les applications informatiques centrées communautés existantes et soulignons un certain nombre de carences au niveau même de la notion de communauté, des modèles de communautés, ou encore des architectures dédiées à ces applications communautaires, etc. Pour remédier à ces défauts, nous proposons trois principales contributions : Un nouveau type de communauté adapté aux exigences des environnements pervasifs qui vient rompre avec les traditionnelles communautés pérennes thématiques : des communautés éphémères, géolocalisées et spontanées (sans contrainte thématique).
Un modèle de communauté basé sur les standards du web sémantique pour répondre aux problèmes liés à l’hétérogénéité de conception des communautés. Une architecture dynamiquement reconfigurable pour promouvoir les communautés spontanées en aidant les utilisateurs nomades à intégrer des communautés environnantes et à découvrir les services dédiés.
Nous montrons la faisabilité de nos propositions pour la conception et le développement d'applications communautaires spontanées grâce au prototype Taldea. Enfin, nous testons les approches proposées de découverte de communauté et de services à travers plusieurs scénarios caractérisés par la mobilité et l'ubiquité. / Advances in technology, in particular the democratization of mobile devices (PCs, smartphones and tablets), have made information accessible to anyone at any time and from anywhere while facilitating the capture of physical contextual data, thereby justifying the growing interest in pervasive computing. The classical approach of pervasive computing has been affected by the introduction of the social dimension: ubiquitous systems no longer meet the needs of users independently from each other but take their social context into account. Fostering the social dimension has given rise to a fast-growing research field called Pervasive Social Computing. Applications in this area are increasingly concerned with communities. The contextual information associated with a community can be harnessed for personalization, adaptability and dynamic deployment of services, which are important factors for pervasive computing. A community is considered in our approach as a set of distinct social entities that should be supported with services just as a single user is. In this thesis, we examine different aspects of existing community-centered applications and identify several weaknesses and shortcomings in the notion of community, in community models and in the architecture of community applications. To overcome these shortcomings, we propose three main contributions: A new type of community that better fits the requirements of pervasive environments: short-lived, geolocated and spontaneous (without thematic constraint) communities. Intuitively, this is the type of community that best matches circumstantial, accidental, incidental or fortuitous situations.
This kind of community has to meet specific needs, which are not taken into account by perennial thematic communities.
A model for communities based on semantic web standards to overcome the problem of heterogeneity across definitions and models. The ontological representation allows us to organize and represent social data, to make information searches easier for users and to infer new knowledge.
A dynamically reconfigurable architecture for fostering spontaneous communities, in order to facilitate users' access to communities, information exchange between community members and service discovery.
The proposed approaches for community and service discovery have been validated through a prototype called Taldea and tested through several scenarios characterized by mobility and ubiquity.
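The notion of a geolocated, spontaneous community can be illustrated with a small sketch of proximity-based community discovery. The community records and geofence radii below are invented; Taldea models communities with semantic web standards rather than plain Python data:

```python
import math

# Sketch of discovering nearby spontaneous communities by geolocation:
# a nomadic user's position is tested against each community's geofence.
# Community records and the radius attribute are illustrative only.

COMMUNITIES = [
    {"name": "ConcertCrowd", "lat": 48.8566, "lon": 2.3522, "radius_km": 1.0},
    {"name": "TrainDelayGroup", "lat": 43.2965, "lon": 5.3698, "radius_km": 0.5},
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_communities(lat, lon, communities=COMMUNITIES):
    """Names of communities whose geofence contains the given position."""
    return [
        c["name"] for c in communities
        if haversine_km(lat, lon, c["lat"], c["lon"]) <= c["radius_km"]
    ]

# A user standing near the first community's centre:
print(nearby_communities(48.8570, 2.3520))
```

In the thesis's terms, such a check is only the geolocation half of discovery; the semantic community model then determines which surrounding communities and services are relevant to the user.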
|
555 |
Towards the French Biomedical Ontology Enrichment / Vers l'enrichissement d'ontologies biomédicales françaises. Lossio-Ventura, Juan Antonio, 09 November 2015 (has links)
En biomédecine, le domaine du « Big Data » (l'infobésité) pose le problème de l'analyse de gros volumes de données hétérogènes (i.e. vidéo, audio, texte, image). Les ontologies biomédicales, modèles conceptuels de la réalité, peuvent jouer un rôle important afin d'automatiser le traitement des données, les requêtes et la mise en correspondance des données hétérogènes. Il existe plusieurs ressources en anglais mais elles sont moins riches pour le français. Le manque d'outils et de services connexes pour les exploiter accentue ces lacunes. Dans un premier temps, les ontologies ont été construites manuellement. Au cours de ces dernières années, quelques méthodes semi-automatiques ont été proposées. Ces techniques semi-automatiques de construction/enrichissement d'ontologies sont principalement induites à partir de textes en utilisant des techniques du traitement du langage naturel (TALN). Les méthodes de TALN permettent de prendre en compte la complexité lexicale et sémantique des données biomédicales : (1) lexicale pour faire référence aux syntagmes biomédicaux complexes à considérer et (2) sémantique pour traiter l'induction du concept et du contexte de la terminologie. Dans cette thèse, afin de relever les défis mentionnés précédemment, nous proposons des méthodologies pour l'enrichissement/la construction d'ontologies biomédicales fondées sur deux principales contributions. La première contribution est liée à l'extraction automatique de termes biomédicaux spécialisés (complexité lexicale) à partir de corpus. De nouvelles mesures d'extraction et de classement de termes composés d'un ou plusieurs mots ont été proposées et évaluées. L'application BioTex implémente les mesures définies. La seconde contribution concerne l'extraction de concepts et le lien sémantique de la terminologie extraite (complexité sémantique).
Ce travail vise à induire des concepts pour les nouveaux termes candidats et à déterminer leurs liens sémantiques, c'est-à-dire les positions les plus pertinentes au sein d'une ontologie biomédicale existante. Nous avons ainsi proposé une approche d'extraction de concepts qui intègre de nouveaux termes dans l'ontologie MeSH. Les évaluations, quantitatives et qualitatives, menées par des experts et non experts, sur des données réelles soulignent l'intérêt de ces contributions. / Big Data in the biomedical domain raises a major issue: the analysis of large volumes of heterogeneous data (e.g. video, audio, text, image). Ontologies, conceptual models of reality, can play a crucial role in biomedicine in automating data processing, querying, and the matching of heterogeneous data. Various resources exist in English, but considerably fewer are available in French, and there is a strong lack of related tools and services to exploit them. Initially, ontologies were built manually. In recent years, a few semi-automatic methodologies have been proposed. Semi-automatic construction/enrichment of ontologies is mostly induced from texts by using natural language processing (NLP) techniques. NLP methods have to take into account the lexical and semantic complexity of biomedical data: (1) lexical refers to the complex phrases to take into account, (2) semantic refers to the sense and context induction of the terminology. In this thesis, we propose methodologies for the enrichment/construction of biomedical ontologies based on two main contributions, in order to tackle the previously mentioned challenges. The first contribution is the automatic extraction of specialized biomedical terms (lexical complexity) from corpora. New ranking measures for single- and multi-word term extraction have been proposed and evaluated. In addition, we present the BioTex software that implements the proposed measures.
The second contribution concerns concept extraction and the semantic linkage of the extracted terminology (semantic complexity). This work seeks to induce semantic concepts for new candidate terms and to find their semantic links, i.e. the relevant locations of the new candidate terms in an existing biomedical ontology. We proposed a methodology that integrates new terms into the MeSH ontology. The experiments conducted on real data highlight the relevance of the contributions.
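As context for the term-ranking contribution, the classical C-value measure for multi-word term extraction (a standard baseline in this literature, not the thesis's own measure) can be sketched as follows, with invented corpus frequencies:

```python
import math

# Reference sketch of the classical C-value measure for ranking
# multi-word term candidates; measures like those in BioTex extend
# such baselines. The candidate terms and frequencies are invented.

# candidate term -> corpus frequency
FREQ = {
    "basal cell carcinoma": 12,
    "cell carcinoma": 30,
    "carcinoma": 100,
}

def c_value(term, freq):
    """C-value: length-weighted frequency, discounted for nesting.

    If a term appears nested inside longer candidates, its frequency
    is reduced by the mean frequency of those longer candidates.
    log2(len + 1) is used so single-word terms do not score zero.
    Containment is tested by substring for simplicity."""
    words = term.split()
    nesting = [t for t in freq if term in t and t != term]
    f = freq[term]
    if nesting:
        f -= sum(freq[t] for t in nesting) / len(nesting)
    return math.log2(len(words) + 1) * f

scores = {t: c_value(t, FREQ) for t in FREQ}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)
```

Note how "cell carcinoma" is discounted because it mostly occurs inside "basal cell carcinoma"; this nesting discount is the distinctive feature of the measure.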
|
556 |
Investigação e implementação de ferramentas computacionais para otimização de websites com ênfase na descrição de conteúdo / Garcia, Léo Manoel Lopes da Silva. January 2011 (has links)
Resumo: Quando fala-se de evolução da Web, poderia realmente ser mais apropriado falar de design inteligente. Com a Web se tornando a principal opção para quem produz e dissemina conteúdo digital, cada vez mais, as pessoas tomam a atenção para esse valioso repositório de conhecimento. Neste ambiente, os mecanismos de busca configuram-se em aplicativos populares, tornando-se intermediários entre os usuários e a miríade de informações, serviços e recursos disponíveis na grande rede. Neste sentido, o Webdesigner pode atuar de forma decisiva, proporcionando uma melhor resposta na classificação dos mecanismos de busca. A correta representação do conhecimento é a chave para a recuperação e para a disseminação efetiva de dados, de informação e de conhecimentos. Este trabalho apresenta um estudo que pode trazer um progresso relevante aos usuários desta grande rede, buscando apresentar uma ferramenta de domínio público que apoie a aplicação de técnicas de descrição semântica de informação na Web. No decorrer da pesquisa investigamos técnicas e metodologias capazes de otimizar a indexação dos Websites pelos mecanismos de busca, enfatizando a descrição do conteúdo nele presente, melhorando sua classificação e consequentemente colaborando com a qualidade na recuperação de informações realizadas por meio de mecanismos de buscas. Tais técnicas foram testadas em alguns Websites, obtendo resultado satisfatório; a partir de então a ferramenta foi implementada e submetida a usuários para sua validação. O resultado desta validação é apresentado, demonstrando a viabilidade da ferramenta e a enumeração de novas funcionalidades para trabalhos futuros / Abstract: When we speak of the evolution of the Web, it might actually be more appropriate to speak of intelligent design. With the Web becoming the primary choice for those who produce and disseminate digital content, more and more people turn their attention to this valuable repository of knowledge.
In this environment, search engines have become popular applications, acting as intermediaries between users and the myriad of information, services and resources available on the World Wide Web. In this sense, the Web designer can act decisively, providing a better response in the ranking of search engines. The correct representation of knowledge is the key to the effective retrieval and dissemination of data, information and knowledge. This paper presents a study that can bring relevant progress to the users of this large network, presenting a public-domain tool that supports the application of techniques for the semantic description of information on the Web. In the course of the research we investigated techniques and methodologies capable of optimizing the indexing of websites by search engines, emphasizing the description of their content, improving their ranking and consequently contributing to the quality of information retrieval conducted through search engines. These techniques were tested on some websites, obtaining satisfactory results; the tool was then implemented and submitted to users for validation. The result of this validation is presented, demonstrating the feasibility of the tool, together with a list of new features for future work / Orientador: João Fernando Marar / Coorientador: Ivan Rizzo Guilherme / Banca: Edson Costa de Barros Carvalho Filho / Banca: Antonio Carlos Sementille / Mestre
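The kind of semantic content description discussed in this abstract can be illustrated with RDFa markup generation. The schema.org vocabulary and the page data below are assumptions for illustration, not output of the thesis's tool:

```python
# Sketch: annotate a page fragment with RDFa attributes so that search
# engines can extract typed content from it. The vocabulary (schema.org)
# and the page data are illustrative assumptions.

def rdfa_article(title, author, description):
    """Return an HTML fragment describing an article with RDFa."""
    return (
        '<div vocab="http://schema.org/" typeof="Article">\n'
        f'  <h1 property="headline">{title}</h1>\n'
        f'  <span property="author">{author}</span>\n'
        f'  <p property="description">{description}</p>\n'
        "</div>"
    )

html = rdfa_article(
    "Optimizing websites for search engines",
    "L. Garcia",
    "Describing page content with semantic markup.",
)
print(html)
```

An RDFa-aware crawler can read the `typeof` and `property` attributes back as triples about the page, which is what improves how the content is classified.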
|
557 |
Linked Enterprise Data als semantischer, integrierter Informationsraum für die industrielle Datenhaltung / Linked Enterprise Data as semantic and integrated information space for industrial data. Graube, Markus, 01 June 2018 (has links) (PDF)
Zunehmende Vernetzung und gesteigerte Flexibilität in Planungs- und Produktionsprozessen sind die notwendigen Antworten auf die gesteigerten Anforderungen an die Industrie in Bezug auf Agilität und Einführung von Mehrwertdiensten. Dafür ist eine stärkere Digitalisierung aller Prozesse und Vernetzung mit den Informationshaushalten von Partnern notwendig. Heutige Informationssysteme sind jedoch nicht in der Lage, die Anforderungen eines solchen integrierten, verteilten Informationsraums zu erfüllen.
Ein vielversprechender Kandidat ist jedoch Linked Data, das aus dem Bereich des Semantic Web stammt. Aus diesem Ansatz wurde Linked Enterprise Data entwickelt, welches die Werkzeuge und Prozesse so erweitert, dass ein für die Industrie nutzbarer und flexibler Informationsraum entsteht. Kernkonzept dabei ist, dass die Informationen aus den Spezialwerkzeugen auf eine semantische Ebene gehoben, direkt auf Datenebene verknüpft und für Abfragen sicher bereitgestellt werden. Dazu kommt die Erfüllung industrieller Anforderungen durch die Bereitstellung des Revisionierungswerkzeugs R43ples, der Integration mit OPC UA über OPCUA2LD, der Anknüpfung an industrielle Systeme (z.B. an COMOS), einer Möglichkeit zur Modelltransformation mit SPARQL sowie feingranularen Informationsabsicherung eines SPARQL-Endpunkts. / Increasing collaboration in production networks and increased flexibility in planning and production processes are responses to the increased demands on industry regarding agility and the introduction of value-added services. A solution is the digitalisation of all processes and a deeper connectivity to the information resources of partners. However, today’s information systems are not able to meet the requirements of such an integrated, distributed information space.
A promising candidate is Linked Data, which originates from the Semantic Web. Based on this approach, Linked Enterprise Data was developed, which extends the existing tools and processes so that an information space emerges that is usable and flexible for industry. The core idea is to raise information from legacy tools to a semantic level, link it directly on the data level even across organizational boundaries, and make it securely available for queries. This includes the fulfilment of industrial requirements through the provision of the revision tool R43ples, the integration with OPC UA via OPCUA2LD, the connection to industrial systems (for example to COMOS), a means of model transformation with SPARQL, as well as fine-grained information protection for a SPARQL endpoint.
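The revisioning idea behind R43ples (storing each revision of a graph as sets of added and removed triples, and materializing any revision by replaying the change sets) can be illustrated with a toy sketch. This is a conceptual illustration only, not the R43ples API:

```python
# Toy sketch of revisioned triple storage in the spirit of R43ples:
# each commit records only the sets of added and removed triples, and
# any past revision can be materialized by replaying the change sets.

class RevisionedGraph:
    def __init__(self, initial=()):
        self.base = frozenset(initial)   # revision 0
        self.changes = []                # [(added, removed), ...]

    def commit(self, added=(), removed=()):
        """Record a change set and return the new revision number."""
        self.changes.append((frozenset(added), frozenset(removed)))
        return len(self.changes)

    def materialize(self, revision=None):
        """Replay change sets up to `revision` (default: latest)."""
        if revision is None:
            revision = len(self.changes)
        triples = set(self.base)
        for added, removed in self.changes[:revision]:
            triples -= removed
            triples |= added
        return triples

g = RevisionedGraph([("plant1", "hasStatus", "running")])
g.commit(added=[("plant1", "hasStatus", "maintenance")],
         removed=[("plant1", "hasStatus", "running")])
print(g.materialize(0))
print(g.materialize(1))
```

Storing deltas rather than full copies is what makes it cheap to query any revision of an industrial information space while keeping a complete history.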
|
558 |
Web semântica : aspectos interdisciplinares da gestão de recursos informacionais no âmbito da ciência da informação / Ramalho, Rogério Aparecido Sá. January 2006 (has links)
Resumo: No âmbito da gestão de recursos informacionais os modelos e métodos de organização e recuperação de informações sempre estiveram condicionados às tecnologias utilizadas, de modo que com desenvolvimento e intensificação da utilização das tecnologias digitais uma nova gama de possibilidades vem sendo incorporada aos processos de produção, armazenamento, representação e recuperação de informações, atingindo um estágio em que os modelos clássicos de organização e recuperação de informações precisam ser (re)pensados sob diferentes perspectivas, pois os mesmos não parecem ser capazes de solucionar os problemas identificados no ambiente Web, evidenciando a necessidade de desenvolvimento de novas tecnologias que permitam otimizar a recuperação de informações em ambientes digitais. Nesse sentido, os estudos relacionados ao projeto Web Semântica vêm destacando-se como uma nova perspectiva no desenvolvimento de tecnologias que possibilitem um aumento na qualidade e relevância das informações recuperadas, a partir do desenvolvimento de instrumentos que permitam descrever formalmente, em um formato que possa ser processado por máquinas, os aspectos semânticos inerentes aos recursos informacionais, contribuindo para a identificação e contextualização das informações disponíveis no ambiente Web. Deste modo, a proposição deste trabalho é a realização de um estudo teórico e metodológico de caráter interdisciplinar acerca do projeto Web Semântica, buscando favorecer a "desmistificação" dos conceitos e tecnologias subjacentes e avaliar em que medida a área de Ciência da Informação pode contribuir para sua concretização, ressaltando os possíveis reflexos destas novas abordagens tecnológicas em seu corpus teórico. Assim, apresenta-se um levantamento bibliográfico acerca do desenvolvimento da Internet... 
(Resumo completo, clicar acesso eletrônico abaixo) / Abstract: In the scope of information resource management, the models and methods for organizing and retrieving information have always been conditioned by the technologies in use, so that with the development and intensified use of digital technologies a new range of possibilities has been incorporated into the processes of producing, storing, representing and retrieving information. We have reached a stage where the classic models of information organization and retrieval need to be (re)thought from different perspectives, because they do not seem able to solve the problems identified in the Web environment, making evident the need to develop new technologies that optimize information retrieval in digital environments. In this context, the studies related to the Semantic Web project stand out as a new perspective in the development of technologies that enable an increase in the quality and relevance of retrieved information, through instruments that allow the semantic aspects inherent to information resources to be formally described in a machine-processable format, contributing to the identification and contextualization of the information available in the Web environment. Accordingly, this research carries out a theoretical and methodological study of interdisciplinary character on the Semantic Web project, aiming to identify its theoretical basis, to favor the demystification of the underlying concepts and technologies, and to evaluate to what extent the Information Science field can contribute to its realization, highlighting the possible effects of these new technological approaches on its theoretical corpus. A bibliographic review of the development of the Internet and of the main concepts and technologies inherent to the Semantic Web...
(Complete abstract, click electronic address below) / Orientador: Silvana Aparecida Borsetti Gregório Vidotti / Coorientador: Mariângela Spotti Lopes Fujita / Banca: Marcos Luiz Mucheroni / Banca: Plácida Leopoldina Ventura Amorim da Costa Santos / Mestre
|
559 |
Studentenkonferenz Informatik Leipzig 2011. 18 April 2012 (has links) (PDF)
The Studentenkonferenz Informatik Leipzig 2011 offers an opportunity to awaken students' identification with computer science as a field of study and their enthusiasm for IT topics in general. For the conference, students submitted short articles on coursework, final theses, or computer-science-related projects completed in their free time. Other students, doctoral candidates, and research assistants from the Leipzig universities reviewed and discussed the submissions. Interesting and well-prepared submissions were accepted for presentation at the conference. This book contains the revised contributions of the student authors.
A student conference differs little from any other scientific conference. The range of topics, however, can be larger due to the breadth of subjects represented, and scientific novelty is not always the primary criterion in evaluating the papers. A student conference helps make the creative potential of students more visible and inspires students for computer science and research. It also strengthens the exchange between different disciplines within computer science and, in particular, fosters mutual understanding between teachers and students.
This year, the Studentenkonferenz Informatik Leipzig (SKIL 2011) was organised for the second time at the Institut für Angewandte Informatik (InfAI) e.V. SKIL 2011 was initiated and largely organised by the research groups Agile Knowledge Engineering and Semantic Web (AKSW) and Service Science and Technology (SeSaT) of the Universität Leipzig.
The conference took place on 2 December 2011 in Leipzig.
|
560 |
Semantic knowledge extraction from relational databases Mogotlane, Kgotatso Desmond 05 1900 (has links)
M. Tech. (Information Technology, Department of Information and Communications Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology / One of the main research topics in the Semantic Web is the semantic extraction of knowledge stored in relational databases through ontologies, since ontologies are core components of the Semantic Web. Several tools, algorithms and frameworks are therefore being developed to enable the automatic conversion of relational databases into ontologies. Ontologies produced with these tools, algorithms and frameworks need to be valid and competent if they are to be useful in Semantic Web applications within the target knowledge domains. However, many existing automatic ontology construction tools, algorithms and frameworks fail to address ontology verification and ontology competency evaluation. This study investigates possible solutions to these challenges. The study began with a literature review in the Semantic Web field, which led to the conceptualisation of a framework for semantic knowledge extraction that deals with the above-mentioned challenges. The proposed framework had to be evaluated in a real-life knowledge domain, so a knowledge domain was chosen as a case study. Data was collected and the business rules of the domain analysed to develop a relational data model, which was then implemented as a test relational database using the Oracle RDBMS. Thereafter, Protégé plugins were applied to automatically construct ontologies from the relational database. The resulting ontologies were validated by matching their structures against existing conceptual database-to-ontology mapping principles; the matching results show the performance and accuracy of the Protégé plugins in automatically converting relational databases into ontologies. Finally, the study evaluated the resulting ontologies against the requirements of the knowledge domain. These requirements are modelled as competency questions (CQs) and mapped to the ontology through the design, execution and analysis of SPARQL queries against users' views of the CQ answers. Experiments show that, although users hold different views of the answers to the CQs, executing the SPARQL translations of the CQs against the ontology does produce output instances that satisfy users' expectations. This indicates that the ontology generated from the relational database by the Protégé plugins embodies the domain and semantic features needed to be useful in Semantic Web applications.
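Evaluating a competency question means translating it into a SPARQL basic graph pattern and checking whether the ontology's instances answer it. A small self-contained Python sketch of that idea, using an in-memory triple list and a toy pattern matcher in place of a real SPARQL engine (the domain, predicates and individuals are hypothetical, not the thesis's actual case-study schema):

```python
# Sketch of answering a competency question as a SPARQL-style basic graph
# pattern over in-memory triples. All names are hypothetical examples.

triples = [
    ("Alice", "enrolledIn", "DB101"),
    ("Bob",   "enrolledIn", "DB101"),
    ("Cara",  "enrolledIn", "SW201"),
    ("DB101", "taughtBy",   "DrSmith"),
    ("SW201", "taughtBy",   "DrJones"),
]

def solve(patterns, binding=None):
    """Match a conjunction of triple patterns; strings starting with '?'
    are variables. Yields variable bindings, as a SPARQL engine would."""
    binding = binding or {}
    if not patterns:
        yield binding
        return
    (s, p, o), rest = patterns[0], patterns[1:]
    for triple in triples:
        b = dict(binding)
        ok = True
        for term, value in zip((s, p, o), triple):
            if term.startswith("?"):
                if b.get(term, value) != value:  # conflicting binding
                    ok = False
                    break
                b[term] = value
            elif term != value:                  # constant mismatch
                ok = False
                break
        if ok:
            yield from solve(rest, b)

# CQ: "Which students take a course taught by DrSmith?"
# SPARQL analogue: SELECT ?student WHERE {
#   ?student enrolledIn ?course . ?course taughtBy DrSmith }
answers = sorted({b["?student"] for b in solve(
    [("?student", "enrolledIn", "?course"),
     ("?course", "taughtBy", "DrSmith")])})
print(answers)  # → ['Alice', 'Bob']
```

The join through the shared `?course` variable is the step that makes CQ evaluation a meaningful competency test: the answer only appears if the extracted ontology correctly links both the enrolment and the teaching facts.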
|