331 |
Context-based supply of documents in a healthcare process. Ismail, Muhammad; Jan, Attuallah. January 2012 (has links)
More advanced and reliable healthcare facilities depend in part on accumulated organizational knowledge. Ontologies and the semantic web are key factors in the long-term, sustainable improvement of the patient-treatment process. Researchers generally agree that knowledge is hard to capture because of its implicit nature, which makes it hard to manage. Medical professionals spend considerable time finding the right information at the right moment, even when it is already available on the intranet or Internet. The controversial but interesting debates on ontologies and the semantic web in the literature encouraged us to propose a method and a 4-tier architecture for retrieving context-based documents according to a user's information needs in a healthcare organization. Medical professionals face problems accessing the information and documents relevant to the different tasks of the patient-treatment process. We focus on providing context-based retrieval of documents for medical professionals by developing a semantic web solution. We also developed several OWL ontology models, which are mainly used for semantic tagging of web pages and for generating the context used to retrieve the relevant web-page documents. In addition, we developed a prototype to test our findings in the healthcare sector, with the goal of retrieving relevant documents in a practical manner. / E-Health
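The core idea the abstract describes, matching a user's context against semantically tagged documents, can be sketched in plain Python. This is only an illustrative toy (the thesis uses OWL ontologies and semantic web tooling, not this simplified tag store), and all document names and tag pairs are hypothetical:

```python
# Toy sketch of context-based document retrieval over semantic tags.
# The thesis uses OWL ontologies; this simplified tag store only
# illustrates the matching idea. All names are hypothetical.

documents = {
    "triage_guideline.html": {("topic", "triage"), ("role", "nurse")},
    "dosage_chart.html":     {("topic", "medication"), ("role", "physician")},
    "ward_protocol.html":    {("topic", "triage"), ("role", "physician")},
}

def retrieve(context):
    """Return documents whose semantic tags cover the user's context."""
    return sorted(doc for doc, tags in documents.items() if context <= tags)

physician_context = {("topic", "triage"), ("role", "physician")}
print(retrieve(physician_context))  # ['ward_protocol.html']
```

A broader context (fewer constraints) matches more documents, which mirrors the way a richer user context narrows retrieval in the described architecture.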
|
332 |
Development and analysis of a library of actions for robot arm-hand systems. Aein, Mohamad Javad. 16 September 2016 (has links)
No description available.
|
333 |
On Fundamental Elements of Visual Navigation Systems. Siddiqui, Rafid. January 2014 (has links)
Visual navigation is a ubiquitous yet complex task that many species perform in order to survive. Although visual navigation is actively studied within the robotics community, determining the elemental constituents of a robust visual navigation system remains a challenge. Motion estimation is mistakenly considered the sole ingredient of a robust autonomous visual navigation system, and efforts are therefore concentrated on improving the accuracy of motion estimates. On the contrary, there are other factors that are just as important as motion and whose absence could make seamless visual navigation, such as that exhibited by humans, impossible. A general model of a visual navigation system is therefore needed that describes it in terms of a set of elemental units. In this regard, this thesis suggests a set of visual navigation elements (spatial memory, motion memory, scene geometry, context, and scene semantics) as the building blocks of a visual navigation system. A set of methods is proposed to investigate the existence and role of these elements in a visual navigation system, and a quantitative research methodology, in the form of a series of systematic experiments, is applied to them. The thesis formulates, implements, and analyzes the proposed methods in the context of the visual navigation elements, arranged into three major groupings: a) spatial memory, b) motion memory, and c) Manhattan structure, context, and scene semantics. The investigations are carried out on multiple image datasets obtained from robot-mounted cameras (2D/3D) moving in different environments. Spatial memory is investigated by evaluating the proposed place recognition methods. The recognized places and inter-place associations are then used to represent a visited set of places in the form of a topological map. Such a representation of places and their spatial associations models the concept of spatial memory.
It resembles the human ability to represent and map places in large environments (e.g. cities). Motion memory in a visual navigation system is analyzed through a thorough investigation of various motion estimation methods. This leads to proposals for direct motion estimation methods that compute accurate motion estimates by basing the estimation process on dominant surfaces. In the everyday world, planar surfaces, especially ground planes, are ubiquitous, so the motion models are built upon this constraint. Manhattan structure provides geometric cues that are helpful in solving navigation problems, and a few unique geometric primitives (e.g. planes) make up an indoor environment. A plane detection method is therefore proposed as a result of the investigations performed on scene structure. The method uses supervised learning to successfully classify the segmented clusters in 3D point-cloud datasets. In addition to geometry, the context of a scene also plays an important role in the robustness of a visual navigation system. The context in which navigation is performed imposes a set of constraints on objects and sections of the scene, and enforcing these constraints enables the observer to robustly segment the scene and classify the various objects in it. A contextually aware scene segmentation method is proposed that classifies the image of a scene into a set of geometric classes. These geometric classes are sufficient for most navigation tasks; however, to facilitate cognitive visual decision making, the scene also ought to be semantically segmented. The semantics of indoor scenes and of outdoor scenes are dealt with separately, and separate methods are proposed for the visual mapping of each type of environment. An indoor scene consists of a corridor structure, which is modeled as a cubic space in order to build a map of the environment.
A “flash-n-extend” strategy is proposed to control the map update frequency. The semantics of outdoor scenes is also investigated, and a scene classification method is proposed that employs a Markov Random Field (MRF) classification framework to generate a set of semantic maps.
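The plane detection discussed above rests on finding dominant planar surfaces, such as the ground plane, in 3D point clouds. A standard way to do this is RANSAC plane fitting; the sketch below is only an illustration of that general technique (the thesis itself classifies segmented clusters with supervised learning, not necessarily this algorithm), with hypothetical data:

```python
import random

def plane_from_points(p1, p2, p3):
    # Plane normal n = (p2 - p1) x (p3 - p1); plane equation: n . x = d.
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    d = sum(n[i] * p1[i] for i in range(3))
    return n, d

def ransac_plane(points, iterations=200, tolerance=0.05, seed=0):
    """Find the plane supported by the most points (basic RANSAC)."""
    rng = random.Random(seed)
    best_count, best_plane = 0, None
    for _ in range(iterations):
        n, d = plane_from_points(*rng.sample(points, 3))
        norm = sum(c * c for c in n) ** 0.5
        if norm < 1e-9:          # degenerate (collinear) sample, skip
            continue
        count = sum(1 for p in points
                    if abs(sum(n[i] * p[i] for i in range(3)) - d) / norm < tolerance)
        if count > best_count:
            best_count, best_plane = count, (n, d)
    return best_count, best_plane

# 25 points on the ground plane (z = 0) plus two off-plane outliers
points = [(x / 10, y / 10, 0.0) for x in range(5) for y in range(5)]
points += [(0.2, 0.3, 1.0), (0.7, 0.1, 0.8)]
count, plane = ransac_plane(points)
print(count)  # 25: the ground plane is recovered, the outliers are rejected
```

Basing motion estimation on such a dominant plane, as the abstract suggests, constrains the estimation problem and makes it robust to the off-plane clutter that the outliers stand in for here.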
|
334 |
A Semantic Interpreter for Multimodal and Multirobot Data. Käshammer, Philipp Florian. January 2016 (has links)
Large natural disasters can be so devastating that they often overwhelm human rescuers, and yet they seem to occur ever more often. The TRADR (Long-Term Human-Robot Teaming for Robot Assisted Disaster Response) research project aims at developing methodology for heterogeneous teams composed of human rescuers as well as ground and aerial robots. As the robots, equipped with advanced sensors, swarm the disaster sites, they collect a huge amount of raw data that cannot be processed efficiently by humans. Therefore, in the work presented here, a semantic interpreter has been developed that crawls through the raw data, using state-of-the-art object detection algorithms to identify victim targets, and extracts all the information that is relevant for rescuers to plan their missions. This information is then restructured by a reasoning process and stored in a high-level database that can be queried accordingly and ensures data consistency.
|
335 |
Inteligência cibernética e uso de recursos semânticos na detecção de perfis falsos no contexto do Big Data / Cyber intelligence and the use of semantic resources to detect fake profiles in the context of Big Data. Oliveira, José Antonio Maurilio Milagre de. January 2016 (has links)
Advisor: José Eduardo Santarem Segundo / Committee: Ricardo César Gonçalves Sant'Ana / Committee: Mário Furlaneto Neto / Abstract: The development of the Internet has turned the virtual world into an endless repository of information. Every day, in the information society, people interact with, capture, and pour data into the most diverse social networking tools and Web environments. We face Big Data: an endless amount of data of inestimable value that is nevertheless difficult to process. There is no measuring the amount of information that can be extracted from these large Web data repositories. One of the great current challenges of the "Big Data" Internet is dealing with falsehoods and fake profiles in social tools, which cause alarm, commotion, and significant financial damage worldwide. Cyber intelligence and computer forensics aim to investigate events and verify information by extracting data from the network. Information Science, in turn, concerned with questions involving the retrieval, processing, interpretation, and presentation of information, offers elements that, when applied in this context, can improve the collection and processing of large volumes of data for the detection of fake profiles. Thus, through this literature-review, documentary, and exploratory research, we sought to review the international studies on the detection of fake profiles in social networks, investigating the techniques and technologies applied and, above all, their limitations. This work also presents contributions from areas of Information Science and criteri... (Complete abstract: click electronic access below) / Master's
|
336 |
IntegraWeb: uma proposta de arquitetura baseada em mapeamentos semânticos e técnicas de mineração de dados / IntegraWeb: an architectural proposal based on semantic mappings and data mining techniques. Pierin, Felipe Lombardi. 05 December 2017 (has links)
A great deal of content is produced and published on the Internet every day: documents published by different people and organizations, in countless formats, without any kind of standardization. For this reason, relevant information about a single domain of interest ends up spread across the Web in various portals, which hinders a broad, centralized, and objective view of that information. In this context, the integration of the data scattered across the network becomes a relevant research problem, enabling smarter queries that return results richer in meaning and closer to the user's interest. Such integration is not trivial, however, and is often costly because of its reliance on specialized systems and labor, since few models are reusable and easily integrable with one another. Thus, a standardized model for integrating the data and accessing the information produced by these different entities reduces the effort of building specific systems. In this work we propose an ontology-based architecture for the integration of data published on the Internet. Its use is illustrated through real use cases of information integration on the Internet, showing how the use of ontologies can yield more relevant results.
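The kind of integration the abstract describes can be pictured with a deliberately tiny sketch: two portals publish the same kind of record under different field names, and a shared vocabulary maps both into one unified view. All portal, field, and vocabulary names below are hypothetical stand-ins for the ontology-based mappings of the actual architecture:

```python
# Toy sketch of ontology-style data integration: heterogeneous schemas
# are mapped onto shared vocabulary terms. All names are hypothetical.

portal_a = [{"titulo": "Grafos", "autor": "Silva"}]
portal_b = [{"name": "Graphs", "writer": "Jones"}]

# Mapping from each portal's local schema to shared ontology terms.
mappings = {
    "portal_a": {"titulo": "dc:title", "autor": "dc:creator"},
    "portal_b": {"name": "dc:title", "writer": "dc:creator"},
}

def integrate(sources):
    """Rewrite every record into the shared vocabulary."""
    unified = []
    for source_name, records in sources.items():
        m = mappings[source_name]
        for rec in records:
            unified.append({m[field]: value for field, value in rec.items()})
    return unified

data = integrate({"portal_a": portal_a, "portal_b": portal_b})
print(data[0]["dc:title"])  # Grafos
```

Once both portals speak the shared vocabulary, a single query over `dc:title` reaches records from either source, which is the "broader, centralized view" the abstract argues for.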
|
337 |
Ativação de componentes de software com a utilização de uma ontologia de componentes / Activation of software components using a component ontology. Lorza, Augusto Carbol. 16 July 2007 (has links)
Many studies are under way to add value to the information available on the Web and thereby improve the results of users' interaction with it. One such line of research is the Semantic Web, which proposes adding semantic information to the current Web by means of ontologies. The W3C, the international organization that defines Web standards, has already proposed several standards to make the Semantic Web viable; beyond standards, however, it is also necessary to create or adapt tools that exploit its potential. One tool that provides significant support for the current Web, and that can be adapted to work with the Semantic Web, is the application server. By adding semantic information in the form of ontologies, we obtain an Ontology-Based Application Server (OBAS). In this work we developed a prototype system offering the minimum characteristics of an OBAS, investigating the Semantic Web technologies that could provide a solution in accordance with the standards recommended by the W3C. The properties and behaviors of the software components of an OBAS are related semantically using ontologies. Since an ontology is an explicit conceptual model, its component descriptions can be queried and reasoned over, improving the server's performance by combining the components best suited to a task and simplifying programming, since it is no longer necessary to know all the details of a component in order to activate it.
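The querying of component descriptions mentioned above can be illustrated with a minimal subsumption check: a request for a general capability matches every component whose class the ontology places under that capability. All class and component names here are hypothetical; a real OBAS would use OWL ontologies and a reasoner rather than this toy hierarchy:

```python
# Toy sketch of ontology-driven component selection: a subsumption
# hierarchy lets a request for a general capability match components
# registered under more specific classes. All names are hypothetical.

subclass_of = {"PdfRenderer": "Renderer", "HtmlRenderer": "Renderer"}
components = {"fastpdf": "PdfRenderer", "webview": "HtmlRenderer", "mailer": "Notifier"}

def is_a(cls, ancestor):
    """Walk the subclass chain upward to test subsumption."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = subclass_of.get(cls)
    return False

def find_components(capability):
    """All registered components whose class is subsumed by `capability`."""
    return sorted(name for name, cls in components.items() if is_a(cls, capability))

print(find_components("Renderer"))  # ['fastpdf', 'webview']
```

The caller never names `fastpdf` or `webview` directly, which is the point the abstract makes: the component's internal details need not be known in order to activate it.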
|
338 |
Afasia e linguagem figurada: o acesso lexical dentro de contextos metafóricos / Aphasia and figurative language: lexical access in metaphoric contexts. Lima, Bruna Seixas. 03 February 2011 (has links)
This research analyzes language phenomena drawn from interviews with six aphasic subjects presenting different degrees of lexical-access deficit. We observe the ability of these subjects to produce and comprehend names of animals used in a non-literal context. We developed an interview to determine whether the subjects had difficulty accessing the chosen animal names. In the first part of the interview, the subjects were asked to name and describe the animal pictures presented; afterwards, they had to produce and comprehend those names in a context provided by the interviewer. The hypothesis is that a subject's ability to produce and comprehend animal names may differ depending on the context presented. Two distinct analytical perspectives are adopted: first, theories based on the biological correlates of language; and second, Roman Jakobson's linguistic theory of language processing and its division into two main axes, metaphor and metonymy (modes of abstraction based on similarity and contiguity, respectively). Some subjects have difficulty producing word forms in their literal meaning, whereas the same does not occur when those words are produced in their non-literal meaning. This suggests that in these subjects the semantic-lexical system may be better preserved than expected, with the type of input or output of the lexical forms being the impaired element. The analysis of the interviews shows that comprehension of these same metaphors was a more laborious task for the subjects, which reinforces our hypothesis, since during the comprehension task the subjects were not provided with the context given in the production task.
|
339 |
Desenvolvimento de um portal de conhecimento para a tuberculose baseado em web semântica / Development of a knowledge portal for tuberculosis based on the semantic web. Lima, Ricardo Roberto de. 26 October 2018 (has links)
According to the World Health Organization's 2017 report, tuberculosis is the ninth leading cause of death in the world and the leading cause of death among infectious diseases. Brazil is among the 20 countries with the highest incidence of the disease; together, these countries account for 84% of tuberculosis cases in the world. The large body of research on tuberculosis holds consolidated knowledge that could be extracted and harnessed, using information technology, to provide information that assists health professionals and supports the creation of policies and strategies for tuberculosis control. This study proposes the development of a web portal with semantic markup to gather and make available knowledge, as well as public indicators, on tuberculosis. The Drupal software was used to create the portal, with part of its content marked up semantically using ontologies hosted in public repositories on the web and a virtual ontology repository configured in the cloud with the D2RQ platform for the standardization of tuberculosis indicators. A set of nine indicators used in the treatment and prevention of tuberculosis was selected for the study. The database chosen for testing was that of the Tuberculosis Patient Management System used in the city of Ribeirão Preto. As a result, a knowledge web portal was generated, gathering semantically marked information and indicators on tuberculosis. A virtual ontology server was also created, based on the relational database of the Tuberculosis Patient Management System. The study demonstrated the use of the Semantic Web in the creation of knowledge portals for tuberculosis, aiming at an enriching experience for users through a smart portal able to deliver information better matched to users' needs, and for computers that, through intelligent software, can interpret and understand its content as envisioned by the Semantic Web.
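Conceptually, what a D2RQ-style virtual repository does is expose rows of a relational database as RDF statements under ontology terms. The following sketch only illustrates that idea in plain Python; the table, column, and vocabulary names are hypothetical and are not taken from the thesis or from D2RQ's actual mapping language:

```python
# Hedged sketch of relational-to-RDF mapping in the spirit of D2RQ:
# each row becomes a subject, each mapped column a property-value pair.
# All table, column, and vocabulary names are hypothetical.

rows = [
    {"id": 1, "year": 2017, "new_cases": 120},
    {"id": 2, "year": 2018, "new_cases": 98},
]

# Column -> ontology property mapping.
column_map = {"year": "tb:referenceYear", "new_cases": "tb:newCases"}

def rows_to_triples(table):
    """Expose relational rows as subject-property-value triples."""
    triples = []
    for row in table:
        subject = f"tb:indicator/{row['id']}"
        for column, prop in column_map.items():
            triples.append((subject, prop, row[column]))
    return triples

triples = rows_to_triples(rows)
print(len(triples))  # 4: two mapped properties for each of the two rows
```

Because the triples are generated on demand from the live database, the indicators stay current without duplicating the relational data, which is the appeal of a virtual (rather than materialized) ontology repository.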
|
340 |
Modélisation des ports de Brest (France), Rosario et Mar del Plata (Argentine) en tant que macro systèmes technologiques complexes : application à la modélisation des connaissances pour l'histoire des sciences et des techniques / Modelling the ports of Brest (France), Rosario and Mar del Plata (Argentina) as Large Technical Systems: application to knowledge modeling for the history of science and technology. Rohou, Bruno. 13 December 2018 (has links)
This thesis is part of the F. Viète Centre's "Comparative History of Port Cultural Landscapes" programme and focuses on understanding the scientific and technological evolution of the ports of Brest (France) and of Mar del Plata and Rosario (Argentina) in contemporary times. The research hypothesis is to consider a port as a complex technological macro-system whose spatio-temporal evolution as an artifact is part of the history of science and technology; these artifacts are considered significant indicators of that evolution. The objective of this thesis is to build a comparative history of ports and to propose and validate new research methods in the digital humanities. To meet these objectives, we produced a comparative history of the ports considered. We then developed a model of the evolution of these ports, called HST-PORT, based on the humanities and social sciences meta-model ANY-ARTEFACT. From the HST-PORT model, we developed a reference ontology, called PHO (Port History Ontology). The latter is based on the CIDOC-CRM ontology and therefore uses its event model. This ontology was successfully evaluated by reproducing the comparative history of the considered ports made by historians. In the long term, the aim is to design new information systems based on these ontologies and the semantic web to index, publish, and query historical sources in order to produce a comparative history.
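The event model inherited from CIDOC-CRM can be pictured with a minimal sketch: artifacts are linked to dated events, and an artifact's evolution is read off its event timeline. The artifact and event names below are simplified placeholders, not actual CIDOC-CRM classes or data from the thesis:

```python
# Minimal sketch of an event-based model in the spirit of CIDOC-CRM:
# an artifact's history is the chronologically ordered list of events
# that reference it. All names and dates are hypothetical.

events = [
    {"artifact": "dry_dock_1", "type": "construction", "year": 1865},
    {"artifact": "dry_dock_1", "type": "extension", "year": 1910},
    {"artifact": "pier_3", "type": "construction", "year": 1902},
]

def timeline(artifact):
    """Chronologically ordered events for one artifact."""
    return sorted((e["year"], e["type"]) for e in events
                  if e["artifact"] == artifact)

print(timeline("dry_dock_1"))  # [(1865, 'construction'), (1910, 'extension')]
```

Comparing such timelines across ports is, in miniature, what a comparative history built on an event-centric ontology amounts to.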
|