41 |
Anotação semântica de dados geoespaciais / Semantic annotation of geospatial data. Macario, Carla Geovana do Nascimento, 15 August 2018
Advisor: Claudia Maria Bauzer Medeiros / Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Computação
Previous issue date: 2009

Resumo: Dados geoespaciais constituem a base para sistemas de decisão utilizados em vários domínios, como planejamento de trânsito, fornecimento de serviços ou controle de desastres. Entretanto, para serem usados, estes dados precisam ser analisados e interpretados, atividades muitas vezes trabalhosas e geralmente executadas por especialistas. Apesar disso, estas interpretações não são armazenadas e, quando o são, geralmente correspondem a alguma informação textual e em linguagem própria, gravadas em arquivos técnicos. A ausência de soluções eficientes para armazenar estas interpretações leva a problemas como retrabalho e dificuldades de compartilhamento de informação. Neste trabalho apresentamos uma solução para estes problemas que se baseia no uso de anotações semânticas, uma abordagem que promove um entendimento comum dos conceitos usados. Para tanto, propomos a adoção de workflows científicos para descrição do processo de anotação dos dados e também de um esquema de metadados e ontologias bem conhecidas, aplicando a solução a problemas em agricultura. As contribuições da tese envolvem: (i) identificação de um conjunto de requisitos para busca semântica a dados geoespaciais; (ii) identificação de características desejáveis para ferramentas de anotação; (iii) proposta e implementação parcial de um framework para a anotação semântica de diferentes tipos de dados geoespaciais; e (iv) identificação dos desafios envolvidos no uso de workflows para descrever o processo de anotação. Este framework foi parcialmente validado, com implementação para aplicações em agricultura.

Abstract: Geospatial data are a basis for decision making in a wide range of domains, such as traffic planning, consumer services, or disaster control. However, to be used, these data have to be analyzed and interpreted, which constitutes a hard task, prone to errors, and usually performed by experts. Despite all of these factors, the interpretations are not stored; when they are, they correspond to descriptive text stored in technical files. The absence of solutions to efficiently store them leads to problems such as rework and difficulties in information sharing. In this work we present a solution for these problems based on semantic annotations, an approach for a common understanding of the concepts being used. We propose the use of scientific workflows to describe the annotation process for each kind of data, and also the adoption of well-known metadata schemas and ontologies. The contributions of this thesis involve: (i) identification of requirements for semantic search of geospatial data; (ii) identification of desirable features for annotation tools; (iii) proposal, and partial implementation, of a framework for semantic annotation of different kinds of geospatial data; and (iv) identification of the challenges in adopting scientific workflows for describing the annotation process. This framework was partially validated, through an implementation that produces annotations for applications in agriculture.

Doctorate / Databases / Doctor of Computer Science
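To make the idea of a semantic annotation concrete, the sketch below records one such interpretation as RDF using Python's rdflib. It is only an illustration under assumed names: the dataset URI, the domain-ontology concept and the metadata fields are placeholders, not the metadata schema or ontologies actually adopted in the thesis.

```python
# A minimal sketch of the kind of annotation the framework produces: an RDF
# description tying a geospatial dataset to a concept from a domain ontology
# and to basic metadata. All URIs are placeholders; in practice the concept
# would come from a well-known vocabulary and the fields from the adopted
# metadata schema.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF, XSD

EX = Namespace("http://example.org/geo/")               # placeholder dataset namespace
ONTO = Namespace("http://example.org/agro-ontology#")   # placeholder domain ontology

g = Graph()
g.bind("dcterms", DCTERMS)

dataset = EX["soil-map-campinas-2009"]                  # hypothetical dataset
g.add((dataset, RDF.type, EX.GeospatialDataset))
g.add((dataset, DCTERMS.subject, ONTO.SoilType))        # the semantic annotation itself
g.add((dataset, DCTERMS.spatial, Literal("Campinas, SP, Brazil")))
g.add((dataset, DCTERMS.created, Literal("2009-01-01", datatype=XSD.date)))
g.add((dataset, DCTERMS.description,
       Literal("Interpretation recorded by an annotation workflow", lang="en")))

print(g.serialize(format="turtle"))
```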
|
42 |
Alinhamento de metadados da indústria de broadcast multimídia no contexto da TV digital com a web semântica / Alignment of broadcast multimedia industry metadata in the context of digital TV with the semantic web. Araújo, Rodrigo Cascão (1975-), 03 August 2013
Advisor: Ivan Luiz Marques Ricarte / Thesis (doctorate) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Previous issue date: 2013

Resumo: A integração da Internet e das tecnologias de comunicação móveis com as plataformas de televisão tem provido aos telespectadores novos serviços interativos de conteúdo digital. Devido a estes fatores, os equipamentos para o consumidor têm se tornado cada vez mais sofisticados, suportando uma variedade de conteúdos e conectividade com outras redes e dispositivos. A TV digital é uma plataforma híbrida que combina elementos da televisão tradicional com a Internet, provendo ao usuário o acesso a uma diversidade de conteúdos de mídia interativa. Com o crescimento do volume e da diversidade de serviços e conteúdos multimídia, a televisão está enfrentando os mesmos desafios de complexidade e excesso de informações que já vinham sendo encarados por outras mídias digitais relacionadas com a Internet. A tecnologia de metadados pode ser uma alternativa para lidar com esta complexidade de serviços e conteúdos digitais de forma prática e eficiente. Metadados são dados que complementam as informações digitais dos conteúdos multimídia com o objetivo de descrevê-los de forma sintática e semântica, facilitando a estruturação e o gerenciamento de grandes volumes de informação. O uso de metadados em TV digital não se restringe à construção de um ferramental de busca e indexação de conteúdos multimídia, e abre oportunidade para o desenvolvimento de uma gama de serviços inovadores. Atualmente existem diversas especificações de metadados utilizadas pela indústria de broadcast multimídia em redes de TV digital. Além disso, existem na Internet diversos repositórios de informação baseados em metadados que complementam as informações de metadados da TV digital. Contudo, como os padrões de metadados da TV digital e da Internet são baseados em diferentes especificações não relacionadas, surge o problema de como integrar estas informações, visando criar novos serviços para telespectadores que utilizem tanto informações de metadados da TV digital como informações de metadados da Internet. Esta tese de doutorado propõe um processo para alinhamento das especificações de metadados existentes em redes abertas de transmissão e recepção de TV digital terrestre com ontologias orientadas para a descrição de domínios de conhecimento específicos existentes em repositórios da Internet, utilizando tecnologias propostas pelo W3C para a Web Semântica. O processo proposto permitirá que o usuário da TV digital possa facilmente pesquisar conteúdos de interesse a partir da grade de programação dos canais existentes e dos conteúdos já gravados em seu receptor; receber sugestões de conteúdos para exibição ou gravação conforme o seu perfil e interesse; enriquecer sua experiência de assistir televisão acessando informações complementares sobre os programas transmitidos, como sinopses, críticas especializadas, histórico do elenco e direção, premiações recebidas, fotos, vídeos e conteúdos relacionados disponíveis para livre acesso via Internet; entre outras funções. A presente proposta foi validada através de uma prova de conceito implementada em um receptor híbrido de TV digital, que demonstrou a viabilidade de sua operacionalização sem a necessidade de impactar os padrões utilizados no Brasil para transmissão de sinal de TV digital terrestre (ISDB-T).

Abstract: The integration of the Internet and mobile communication technologies with television platforms has provided viewers with new interactive digital content services. Due to these factors, consumer equipment has become increasingly sophisticated, supporting a variety of content and connectivity with other networks and devices. Digital TV is a hybrid platform that combines elements of traditional television with the Internet, providing the user access to a variety of interactive media content. With the growth in the volume and diversity of services and multimedia content, television is experiencing the same challenges of complexity and information overload that were already being faced by other digital media related to the Internet. Metadata technology can be an alternative to deal with this complexity of digital content and services in a practical and efficient way. Metadata are data that supplement the information of digital multimedia content in order to describe it in a syntactic and semantic form, facilitating the structuring and management of large volumes of information. The use of metadata in digital TV is not restricted to building a tool for search and indexing of multimedia content, and it opens opportunities to develop a range of innovative services. Currently there are several metadata specifications used by the broadcast industry in multimedia digital TV networks. Moreover, there are many metadata-based information repositories on the Internet that complement the metadata of digital TV. However, as the metadata standards of digital TV and the Internet are based on different, unrelated specifications, a problem arises of how to integrate this information in order to create new services for viewers that use both digital TV metadata and Internet metadata. This thesis proposes a process for aligning the metadata specifications existing in open networks for transmission and reception of digital terrestrial TV with ontologies oriented towards describing specific knowledge domains in Internet repositories, using technologies proposed by the W3C for the Semantic Web. The proposed process allows the digital TV user to easily search for content of interest from the program schedule of the existing channels and from content already recorded in the receiver; to receive suggestions of content for viewing or recording according to his or her interests and profile; and to enrich the experience of watching television by accessing additional information about the transmitted programs, such as synopses, specialized reviews, cast and direction history, awards received, photos, videos and related content available for free access via the Internet; among other functions. This proposal has been validated through a proof of concept implemented in a hybrid digital TV receiver, which demonstrated the feasibility of its implementation without impacting the standards used in Brazil for digital terrestrial TV signal transmission (ISDB-T).

Doctorate / Computer Engineering / Doctor of Electrical Engineering
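As a rough illustration of the proposed alignment, the sketch below expresses a broadcast programme record as RDF and links it to an external Linked Data resource with W3C vocabularies, the kind of bridge the process establishes between TV metadata and Internet repositories. All URIs, property names and the linked resource are assumptions for illustration, not the actual specifications aligned in the thesis.

```python
# A rough sketch of the alignment idea: a programme description from a
# broadcast metadata record is expressed as RDF and linked to a resource in an
# Internet knowledge base, so a hybrid receiver can fetch complementary
# information (synopsis, cast, reviews). URIs, properties and the linked
# resource are illustrative assumptions, not the thesis's actual mapping.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

TV = Namespace("http://example.org/tv/")      # placeholder broadcaster namespace
DBR = Namespace("http://dbpedia.org/resource/")

g = Graph()
programme = TV["programme/12345"]             # hypothetical EPG entry
g.add((programme, RDF.type, TV.Programme))
g.add((programme, RDFS.label, Literal("Central do Brasil", lang="pt")))
g.add((programme, TV.genre, Literal("drama")))

# The alignment step: declare that the broadcast entry and the web resource
# describe the same work, so its open metadata can be reused by the receiver.
g.add((programme, OWL.sameAs, DBR["Central_do_Brasil"]))

for _, _, target in g.triples((programme, OWL.sameAs, None)):
    print("aligned with:", target)
```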
|
43 |
Arcabouço para anotação de componentes de imagem / A framework for semantic annotation of image components. Muraro, Émerson (1986-), 21 August 2018
Advisor: Ricardo da Silva Torres / Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Computação
Previous issue date: 2012

Resumo: Com a disseminação de dispositivos mais baratos para sua aquisição, armazenamento e disponibilização, imagens vêm sendo utilizadas em várias aplicações (tais como comerciais, científicas e pessoais). O uso de imagens nessas aplicações tem motivado a criação de objetos digitais heterogêneos. Imagens não são usadas isoladamente e podem compor outros objetos digitais. Esses novos objetos digitais são conhecidos como Objetos Complexos. Esta dissertação apresenta um arcabouço para anotação semântica automática de componentes de imagem, visando o seu uso na construção de objetos complexos. Esta proposta utiliza diversas formas de busca para encontrar termos para anotação: ontologias, busca por palavras-chave e por conteúdo visual. Os termos encontrados são ponderados por pesos que definem sua importância, e são combinados por técnicas de fusão de dados em uma única lista de sugestões. As principais contribuições deste trabalho são: a especificação do processo de anotação semântica automática de componentes de imagem, que considera o conteúdo visual da imagem, palavras-chave definidas, ontologias e possíveis combinações envolvendo estas alternativas; e a especificação e implementação parcial de um arcabouço para anotação de objetos complexos de imagens encapsulados em componentes.

Abstract: Due to the dissemination of low-cost devices for acquisition, storage, and sharing, images have been used in several applications (e.g., commercial, scientific, and personal). The use of images in those applications has motivated the creation of heterogeneous digital objects. Images are no longer used in isolation and are used to compose other digital objects, named Complex Objects. In this work, we present a new framework for automatic semantic annotation of image components, aiming at supporting their use in the construction of complex objects. Our proposal uses several approaches for defining appropriate terms to be used in the annotation process: ontologies, textual terms, and image content descriptions. Found terms are weighted according to their importance and combined using data fusion techniques. The main contributions of this work are: the specification of an automatic semantic annotation process for image components, which takes into account image visual properties, defined textual terms, ontologies, and their combinations; and the specification and partial implementation of an infrastructure for annotating image complex objects encapsulated in components.

Master's / Computer Science / Master of Computer Science
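The weighting and fusion step can be pictured with a small sketch: each source (ontology lookup, keyword search, content-based retrieval) proposes scored terms, and a weighted CombSUM-style merge yields the single ranked suggestion list. The source weights, scores and terms below are invented for illustration; the actual fusion techniques used in the dissertation may differ.

```python
# Minimal sketch of rank fusion for annotation suggestions: each source
# proposes terms with scores, scores are normalised per source, and a weighted
# CombSUM merge produces one ranked list. Weights and scores are illustrative.
from collections import defaultdict

def fuse_suggestions(sources, weights):
    """sources: {source_name: {term: score}}; weights: {source_name: weight}."""
    fused = defaultdict(float)
    for name, terms in sources.items():
        if not terms:
            continue
        top = max(terms.values())          # normalise each source's scores to [0, 1]
        for term, score in terms.items():
            fused[term] += weights.get(name, 1.0) * (score / top)
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

suggestions = {
    "ontology": {"Zea mays": 0.9, "crop": 0.6},
    "keywords": {"corn": 0.8, "Zea mays": 0.5},
    "visual":   {"plantation": 0.7, "crop": 0.4},
}
weights = {"ontology": 1.0, "keywords": 0.8, "visual": 0.6}

for term, score in fuse_suggestions(suggestions, weights):
    print(f"{term}: {score:.2f}")
```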
|
44 |
Raisonnement automatique basé ontologies appliqué à la hiérarchisation des alertes en télécardiologie / Ontology based Automatic Reasoning applied to telecardiology alerts. Rosier, Arnaud, 11 September 2015
Introduction : La télésurveillance des stimulateurs cardiaques et défibrillateurs sera à terme le standard pour le suivi des patients implantés. Pourtant, des alertes très nombreuses sont générées par ces dispositifs, et constituent un fardeau pour la prise en charge médicale. De plus, les alertes générées le sont indépendamment du contexte médical individuel du patient, et elles pourraient donc être mieux caractérisées. Cette thèse propose un outil de traitement automatique des alertes générées par la survenue de fibrillation atriale, et basé sur une modélisation des connaissances médicales de type ontologie en OWL2. En particulier, le score de risque cardio-embolique CHA2DS2-VASc a été évalué par le biais de l’ontologie, ainsi que le statut d’anticoagulation du patient.

Matériel et méthodes : Une ontologie d’application a été créée en OWL2, afin de représenter les concepts nécessaires au raisonnement sur les alertes. Cette ontologie a été utilisée pour raisonner sur 1783 alertes de FA détectées chez 60 porteurs de stimulateurs cardiaques. Les alertes ont été classées automatiquement selon leur importance d’après une échelle de gravité de 1 à 4. La classification automatique a été comparée à celle réalisée par 2 experts médicaux comme référence.

Résultats : 1749 alertes sur 1783 (98%) ont été classées correctement. 58 des 60 patients avaient toutes leurs alertes classées à l’identique par le système testé et par les évaluateurs-médecins. Une approche basée ontologie est à même de permettre un raisonnement automatique sur des données issues de dispositifs médicaux connectés, en les contextualisant en fonction des données médicales individuelles du patient.

Introduction: Remote monitoring of cardiac implantable electronic devices (CIED) such as pacemakers and defibrillators is the new follow-up standard. However, the numerous alerts generated in remote monitoring cause a burden for physicians. Moreover, many alerts are notified regardless of the individual patient's medical condition and could be refined. This work proposes an automatic tool for classifying atrial fibrillation alerts, based on an ontological knowledge model in OWL2. In particular, the CHA2DS2-VASc thrombo-embolic risk score and the patient's anticoagulation status are taken into account in order to determine alert importance.

Materials and methods: An application ontology was designed in OWL2, in order to represent the concepts needed for processing alerts. This ontology was used to infer the importance of 1783 AF alerts among 60 CIED recipients, using a 4-grade scale. Automatic classification was compared to that of 2 medical experts.

Results: 1749 of 1783 alerts (98%) were correctly classified. 58 of 60 patients had all their alerts classified with the same importance by the prototype and the human experts. An ontology-driven automatic reasoning tool is able to classify remote monitoring alerts by using the individual medical context. This technology could be important for managing data generated by connected medical devices.
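To give an idea of the patient context that the ontology encodes, the sketch below computes the CHA2DS2-VASc score, whose components are standard in cardiology, and maps it together with anticoagulation status onto a 1-to-4 alert grade. The grading rule shown is a hypothetical illustration of the idea only, not the OWL2 rule base or reasoner used in the thesis.

```python
# Sketch of grading an atrial-fibrillation alert from patient context.
# The CHA2DS2-VASc components are standard; the mapping of score and
# anticoagulation status to a 1-4 grade is a hypothetical illustration.

def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_or_tia, vascular_disease):
    score = 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 1 if female else 0
    score += 1 if chf else 0                 # congestive heart failure
    score += 1 if hypertension else 0
    score += 1 if diabetes else 0
    score += 2 if stroke_or_tia else 0       # prior stroke / TIA / thromboembolism
    score += 1 if vascular_disease else 0
    return score

def af_alert_grade(score, anticoagulated):
    """Hypothetical importance scale: 1 (low) to 4 (high)."""
    if anticoagulated:
        return 1 if score < 2 else 2
    return 3 if score < 2 else 4

s = cha2ds2_vasc(age=78, female=True, chf=False, hypertension=True,
                 diabetes=False, stroke_or_tia=False, vascular_disease=False)
print(s, af_alert_grade(s, anticoagulated=False))   # prints: 4 4
```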
|
45 |
Die Regensburger Verbundklassifikation (RVK) – „ein weites Feld“: Herausforderung von Semantic Web, Ontologien und Entitäten für die Dynamik einer Klassifikation / The Regensburger Verbundklassifikation (RVK), "a wide field": the challenge of the Semantic Web, ontologies and entities for the dynamics of a classification. Werr, Naoka, 28 January 2011
Buzzwords such as "information overload", "digital natives" or "digital immigrants" characterize today's information and knowledge society. Numerous scientific studies also show emphatically that technical development will advance even more rapidly in the coming years than anyone could have expected. Internet communication services are already of exceptional importance, and the trend is rising. Communication services such as Web 2.0 applications are likewise highlighted as an increasingly important factor in Internet use, and the current trend towards personal networking via the Internet is constantly emphasized. The importance of the core uses of the Internet, as a source of content and as a form of communication, will therefore continue to grow. Classification systems must also face this trend. With the web portal launched in October 2009, the RVK has taken a first step towards such networking. The information about the RVK that was previously scattered across various websites, together with the RVK databases, is now united under one interface, interlinked, and enriched with elements of social software (an RVK wiki for greater transparency in coordination and voting processes). In the context of the Semantic Web, itself currently a popular buzzword, the RVK portal marks a paradigm shift in the long history of the RVK: the entire body of knowledge about the RVK is conceptually linked according to its meaning and is already offered in largely machine-readable form (for example, with respect to the search function of the RVK-Online database). Knowledge management and the quality of the extensive information about the RVK have been greatly improved at the semantic level; combined with the RVK wiki, one could even speak of a first impulse towards Web 3.0 for the RVK. The hierarchical structure of the RVK also contributes substantially to the Semantic Web, since in a classification it is precisely hierarchical structures that help to bring order to the abundance of implicit knowledge. What is essential, then, is the definition of relations on the web (and thus of the corresponding ontologies and entities), in order to counter the sheer quantity of offerings on the World Wide Web with correspondingly high-quality services that add library value. For the data model of the Semantic Web, the provision of sustainable authority data, as is planned (and indeed almost implemented) for the RVK, is therefore necessary.
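The sustainable, machine-readable authority data called for here is commonly published with SKOS, a W3C vocabulary for classification schemes and thesauri. The sketch below, built with Python's rdflib, shows what a small fragment of a hierarchical classification could look like in that form; the notations, labels and URI pattern are illustrative assumptions, not an official RVK record.

```python
# Sketch of a classification fragment expressed as SKOS authority data.
# Notations, labels and the URI pattern are illustrative assumptions.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

RVK = Namespace("http://example.org/rvk/")   # placeholder URI pattern

g = Graph()
g.bind("skos", SKOS)

scheme = RVK["scheme"]
parent = RVK["ST"]        # an upper class (illustrative notation)
child = RVK["ST_515"]     # a narrower class (illustrative notation)

g.add((scheme, RDF.type, SKOS.ConceptScheme))
for concept in (parent, child):
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.inScheme, scheme))

g.add((parent, SKOS.prefLabel, Literal("Informatik", lang="de")))         # illustrative label
g.add((child, SKOS.prefLabel, Literal("Wissensmanagement", lang="de")))   # illustrative label
g.add((child, SKOS.broader, parent))     # the hierarchy emphasised in the text
g.add((parent, SKOS.narrower, child))

print(g.serialize(format="turtle"))
```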
|
46 |
Ontology Based Security Threat Assessment and Mitigation for Cloud Systems. Kamongi, Patrick, 12 1900
A malicious actor often relies on security vulnerabilities of IT systems to launch a cyber attack. Most cloud services are supported by an orchestration of large and complex systems which are prone to vulnerabilities, making threat assessment very challenging. In this research, I developed formal and practical ontology-based techniques that enable automated evaluation of a cloud system's security threats. I use an architecture for threat assessment of cloud systems that leverages a dynamically generated ontology knowledge base. I created an ontology model and represented the components of a cloud system. These ontologies are designed for a set of domains that covers various cloud aspects and cyber threat data for information technology products. The inputs to our architecture are the configurations of cloud assets and the component specifications (which encompass the desired assessment procedures), and the outputs are actionable threat assessment results. The focus of this work is on ways of enumerating, assessing, and mitigating emerging cyber security threats. A research toolkit system has been developed to evaluate our architecture. We expect our techniques to be leveraged by any cloud provider or consumer in closing the gap of identifying and remediating known or impending security threats facing their cloud's assets.
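One way to picture the architecture is as a knowledge base that relates cloud assets to the products they run and to known vulnerability records, which a query then turns into an assessment. The sketch below uses Python's rdflib with made-up class and property names; the product and CVE identifier are real but serve only as an example, and this is not the toolkit's actual ontology model.

```python
# Sketch of ontology-based threat enumeration: assets, the products they run,
# and vulnerability records live in one graph; a query lists exposed assets.
# Class and property names are invented; the CVE/product pair is an example.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

SEC = Namespace("http://example.org/cloudsec#")

g = Graph()
g.bind("sec", SEC)

# Input: configuration of cloud assets and the components they run
g.add((SEC.vm01, RDF.type, SEC.VirtualMachine))
g.add((SEC.vm01, SEC.runs, SEC.openssl_1_0_1))
g.add((SEC.openssl_1_0_1, RDFS.label, Literal("OpenSSL 1.0.1")))

# Threat data for IT products (e.g. gathered from an NVD-style feed)
g.add((SEC.cve_2014_0160, RDF.type, SEC.Vulnerability))
g.add((SEC.cve_2014_0160, SEC.affects, SEC.openssl_1_0_1))
g.add((SEC.cve_2014_0160, SEC.severity, Literal("high")))

# Output: an actionable list of exposed assets
q = """
PREFIX sec: <http://example.org/cloudsec#>
SELECT ?asset ?product ?vuln ?severity WHERE {
  ?asset sec:runs ?product .
  ?vuln a sec:Vulnerability ;
        sec:affects ?product ;
        sec:severity ?severity .
}
"""
for row in g.query(q):
    print(f"{row.asset} runs {row.product}: exposed to {row.vuln} ({row.severity})")
```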
|
47 |
Investigation and application of artificial intelligence algorithms for complexity metrics based classification of semantic web ontologies. Koech, Gideon Kiprotich, 11 1900
M. Tech. (Department of Information Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology. / The increasing demand for knowledge representation and exchange on the semantic web has resulted in an increase in both the number and size of ontologies. This increase in features has made ontologies more complex and, in turn, difficult to select, reuse and maintain. Several ontology evaluation and ranking tools have been proposed recently. Such evaluation tools provide a metrics suite that evaluates the content of an ontology by analysing its schema and instances. The availability of ontology metric suites may enable classification techniques to place ontologies into various categories or classes. Machine learning algorithms, which are mostly based on statistical methods for classifying data, are therefore well suited to performing classification of ontologies.
In this study, popular machine learning algorithms, including K-Nearest Neighbors, Support Vector Machines, Decision Trees, Random Forest, Naïve Bayes, Linear Regression and Logistic Regression, were used in the classification of ontologies based on their complexity metrics. A total of 200 biomedical ontologies were downloaded from the BioPortal repository. Ontology metrics were then generated using the OntoMetrics tool, an online ontology evaluation platform. These metrics constituted the dataset used in the implementation of the machine learning algorithms.
The results obtained were evaluated with standard performance measures, namely precision, recall, F-measure and Receiver Operating Characteristic (ROC) curves. The overall accuracy scores for the K-Nearest Neighbors, Support Vector Machines, Decision Trees, Random Forest, Naïve Bayes, Logistic Regression and Linear Regression algorithms were 66.67%, 65%, 98%, 99.29%, 74%, 64.67%, and 57%, respectively. From these scores, the Decision Tree and Random Forest algorithms performed best, which can be attributed to their ability to handle multiclass classification.
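The pipeline described in this study can be sketched with scikit-learn: complexity metrics form the feature matrix, a classifier is trained, and precision, recall and F-measure are reported. The synthetic features and labels below merely stand in for the 200 BioPortal ontologies and their OntoMetrics values, and only the Random Forest model is shown, with assumed hyperparameters.

```python
# Sketch of classifying ontologies by complexity metrics and evaluating with
# precision, recall and F-measure. Synthetic data stands in for the real
# OntoMetrics feature set; hyperparameters are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200                                        # number of ontologies in the study
# Hypothetical metrics: class count, maximum depth, relationship richness
X = rng.random((n, 3)) * np.array([5000, 20, 1.0])
y = (X[:, 0] > 2500).astype(int)               # placeholder complexity label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test),
                            target_names=["low complexity", "high complexity"]))
```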
|
48 |
The construction and use of an ontology to support a simulation environment performing countermeasure evaluation for military aircraft. Lombard, Orpha Cornelia, 05 1900
This dissertation describes a research study conducted to determine the benefits and use of ontology technologies to support a simulation environment that evaluates countermeasures employed to protect military aircraft.

Within the military, aircraft represent a significant investment and these valuable assets need to be protected against various threats, such as man-portable air-defence systems. To counter attacks from these threats, countermeasures are deployed, developed and evaluated by utilising modelling and simulation techniques. The system described in this research simulates real-world scenarios of aircraft, missiles and countermeasures in order to assist in the evaluation of infra-red countermeasures against missiles in specified scenarios.

Traditional ontology has its origin in philosophy, describing what exists and how objects relate to each other. The use of formal ontologies in Computer Science has brought new possibilities for modelling and representation of information and knowledge in several domains. These advantages also apply to military information systems, where ontologies support the complex nature of military information. After considering ontologies and their advantages against the requirements for enhancements of the simulation system, an ontology was constructed by following a formal development methodology. Design research, combined with an adaptive development methodology, was conducted in a unique way, thereby contributing to establishing design research as a formal research methodology. The ontology was constructed to capture the knowledge of the simulation system environment, and its use supports the functions of the simulation system in the domain.

The research study contributes to better communication among the people involved in the simulation studies, accomplished through a shared vocabulary and a knowledge base for the domain. These contributions affirmed that ontologies can be successfully used to support military simulation systems. / Computing / M. Tech. (Information Technology)
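A minimal sketch of the kind of shared vocabulary such an ontology provides is shown below with Python's rdflib: aircraft, threats and countermeasures as classes, a relation between them, and one scenario instance. The class and property names are assumptions for illustration, not the dissertation's actual ontology.

```python
# Sketch of a shared vocabulary for the countermeasure-evaluation domain.
# Class and property names are illustrative assumptions.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

SIM = Namespace("http://example.org/cm-sim#")

g = Graph()
g.bind("sim", SIM)

for cls in (SIM.Aircraft, SIM.Threat, SIM.Countermeasure, SIM.Scenario):
    g.add((cls, RDF.type, OWL.Class))
g.add((SIM.ManPortableAirDefenceSystem, RDFS.subClassOf, SIM.Threat))
g.add((SIM.InfraRedFlare, RDFS.subClassOf, SIM.Countermeasure))

g.add((SIM.evaluatedAgainst, RDF.type, OWL.ObjectProperty))
g.add((SIM.evaluatedAgainst, RDFS.domain, SIM.Countermeasure))
g.add((SIM.evaluatedAgainst, RDFS.range, SIM.Threat))

# One scenario instance of the kind the simulation reasons about
g.add((SIM.flare_salvo_01, RDF.type, SIM.InfraRedFlare))
g.add((SIM.manpads_01, RDF.type, SIM.ManPortableAirDefenceSystem))
g.add((SIM.flare_salvo_01, SIM.evaluatedAgainst, SIM.manpads_01))

print(g.serialize(format="turtle"))
```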
|
49 |
Word-sense disambiguation in biomedical ontologies. Alexopoulou, Dimitra, 11 June 2010
With the ever-increasing volume of biomedical literature, text-mining has emerged as an important technology to support bio-curation and search. Word sense disambiguation (WSD), the correct identification of terms in text in the light of ambiguity, is an important problem in text-mining. Since the late 1940s many approaches based on supervised machine learning (decision trees, naive Bayes, neural networks, support vector machines) and unsupervised machine learning (context clustering, word clustering, co-occurrence graphs) have been developed. Knowledge-based methods that make use of the WordNet computational lexicon have also been developed. But only a few make use of ontologies, i.e. hierarchical controlled vocabularies, to solve the problem, and none exploits inference over ontologies and the use of metadata from publications.
This thesis addresses the WSD problem in biomedical ontologies by suggesting different approaches for word sense disambiguation that use ontologies and metadata. The "Closest Sense" method assumes that the ontology defines multiple senses of the term; it computes the shortest path of co-occurring terms in the document to one of these senses. The "Term Cooc" method defines a log-odds ratio for co-occurring terms, including inferred co-occurrences. The "MetaData" approach trains a classifier on metadata; it does not require any ontology, but it does require training data, which the other methods do not. These approaches are compared to each other when applied to a manually curated training corpus of 2600 documents for seven ambiguous terms from the Gene Ontology and MeSH. All approaches achieve an 80% success rate on average over all conditions. The MetaData approach performs best, with 96%, when trained on high-quality data. Its performance deteriorates as the quality of the training data decreases. The Term Cooc approach performs better on the Gene Ontology (92% success) than on MeSH (73% success), as MeSH is not a strict is-a/part-of hierarchy but rather a loose is-related-to hierarchy. The Closest Sense approach achieves an 80% success rate on average.
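A sketch in the spirit of the "Term Cooc" method is given below: an ambiguous mention is resolved by scoring each candidate sense with a smoothed log-odds ratio of how often it co-occurs with the document's context terms. The corpus counts, the example term and the smoothing constant are illustrative assumptions; the thesis's formulation additionally folds in co-occurrences inferred over the ontology.

```python
# Toy log-odds co-occurrence scoring for word-sense disambiguation.
# Counts and smoothing are illustrative; the real method also uses
# co-occurrences inferred over the ontology hierarchy.
import math

def log_odds(cooc, total, sense_count, ctx_count, alpha=0.5):
    """Smoothed log-odds that the sense and a context term occur together."""
    a = cooc + alpha                                    # documents with both
    b = ctx_count - cooc + alpha                        # context term without sense
    c = sense_count - cooc + alpha                      # sense without context term
    d = total - sense_count - ctx_count + cooc + alpha  # documents with neither
    return math.log((a * d) / (b * c))

def score_sense(sense, context_terms, counts, total):
    return sum(
        log_odds(counts.get((sense, t), 0), total,
                 counts.get(sense, 1), counts.get(t, 1))
        for t in context_terms)

# Toy counts over a hypothetical corpus of 10,000 documents for the ambiguous
# term "development" (a Gene Ontology process vs. software development).
counts = {
    "GO:development": 300, "software development": 200, "embryo": 150,
    ("GO:development", "embryo"): 120, ("software development", "embryo"): 1,
}
for sense in ("GO:development", "software development"):
    print(sense, round(score_sense(sense, ["embryo"], counts, 10_000), 2))
```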
Furthermore, the thesis showcases applications ranging from ontology design to semantic search where WSD is important.
|
50 |
Comparative study of open source and dot NET environments for ontology development. Mahoro, Leki Jovial, 05 1900
M. Tech. (Department of Information & Communication Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology. / Many studies have evaluated and compared the existing open-source Semantic Web platforms for ontology development. However, none of these studies has included dot NET-based Semantic Web platforms in the empirical investigations. This study conducted a comparative analysis of open-source and dot NET-based Semantic Web platforms for ontology development. Two popular dot NET-based Semantic Web platforms, namely SemWeb.NET and dotNetRDF, were analyzed and compared against open-source environments including the Jena Application Programming Interface (API), Protégé, and RDF4J, also known as the Sesame Software Development Kit (SDK). Various metrics, such as storage mode, query support, consistency checking, interoperability with other tools, and many more, were used to compare the two categories of platforms. Five ontologies of different sizes were used in the experiments.
The experimental results showed that the open-source platforms provide more facilities for creating, storing and processing ontologies than the dot NET-based tools. Furthermore, the experiments revealed that the open-source Protégé and RDF4J platforms and the dotNetRDF platform provide both a graphical user interface (GUI) and a command-line interface for ontology processing, whereas the open-source Jena and SemWeb.NET are command-line platforms. Moreover, the results showed that the open-source platforms are capable of processing multiple ontology file formats, including the Resource Description Framework (RDF) and Web Ontology Language (OWL) formats, whereas the dot NET-based tools only process RDF ontologies. Finally, the experimental results indicate that the dot NET-based platforms are limited by memory, as they failed to load and query large ontologies that the open-source environments could handle.
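The basic workload behind several of the compared metrics, loading an ontology file and querying it, can be sketched in a few lines. Python's rdflib is used here purely as a stand-in for the Java and .NET toolkits benchmarked in the study, and the file name is a placeholder.

```python
# Sketch of the load-and-query task the comparison measures on each platform.
# rdflib stands in for the benchmarked toolkits; the file name is a placeholder.
from rdflib import Graph

g = Graph()
g.parse("ontology.owl", format="xml")   # an RDF/XML serialisation of an OWL ontology

q = """
PREFIX owl: <http://www.w3.org/2002/07/owl#>
SELECT (COUNT(?cls) AS ?classes) WHERE { ?cls a owl:Class . }
"""
for row in g.query(q):
    print("owl:Class count:", row.classes)

print("triples loaded:", len(g))        # a rough proxy for memory/scalability
```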
|