251

A Generic BI Application for Real-time Monitoring of Care Processes

Baffoe, Shirley A. 14 June 2013 (has links)
Patient wait times and care service times are key performance measures for care processes in hospitals. Managing the quality of care delivered by these processes in real time is challenging. A key challenge is to correlate source medical events to infer the care process states that define patient wait times and care service times. Commercially available complex event processing engines have no built-in support for the concept of a care process state, which makes it unnecessarily complex to define and maintain rules for inferring states from source medical events in a care process. Another challenge is how to present the data in a real-time BI dashboard, and which underlying data model should support that dashboard; a poorly chosen data representation architecture can delay the processing and presentation of the data. In this research, we have investigated the problem of real-time monitoring of care processes, performed a gap analysis of current information system support for it, researched and assessed available technologies, and shown how to most effectively leverage event-driven and BI architectures when building information support for real-time monitoring of care processes. We introduce a state monitoring engine for inferring and managing states based on an application model for care process monitoring. A BI architecture is also leveraged for the data model to support the real-time data processing and reporting requirements of the application's portal. The research is validated with a case study that creates a real-time care process monitoring application for an Acute Coronary Syndrome (ACS) clinical pathway, in collaboration with IBM and Osler hospital. The research methodology is based on design-oriented research.
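The state-inference idea described in this abstract can be sketched roughly as follows. The event names, state names, and transition rules below are illustrative assumptions, not taken from the thesis; the point is only that declarative event-to-state rules let wait times fall out as differences between transition timestamps.

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    timestamp: float  # seconds since the start of the care episode

# Declarative rules: a source medical event triggers a transition to a state.
# These names are hypothetical examples, not the thesis's actual model.
TRANSITIONS = {
    "patient_registered": "WAITING_TRIAGE",
    "triage_started": "IN_TRIAGE",
    "triage_completed": "WAITING_PHYSICIAN",
    "physician_assessment_started": "IN_ASSESSMENT",
}

def infer_states(events):
    """Return (state, entered_at) pairs inferred from the event stream."""
    states = []
    for ev in sorted(events, key=lambda e: e.timestamp):
        new_state = TRANSITIONS.get(ev.name)
        if new_state is not None:
            states.append((new_state, ev.timestamp))
    return states

def wait_time(states, state_name):
    """Time spent in state_name, i.e. until the next transition fires."""
    for (state, entered), (_, left) in zip(states, states[1:]):
        if state == state_name:
            return left - entered
    return None
```

With such rules in one table, the "define and maintain rules" burden the abstract mentions becomes a data-maintenance task rather than engine-specific code.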
252

Data Integration with XML and Semantic Web Technologies

Tous Liesa, Rubén 04 October 2006 (has links)
In general, integration of multiple heterogeneous databases aims at giving a unified view over a set of pre-existing data. This thesis contributes to different aspects of the design of modern data integration systems in the context of the World Wide Web. On one hand, it contributes to the Semantic Integration research trend, which addresses the problem of reconciling data from autonomous sources using ontologies and other semantic tools. The thesis suggests a novel solution to XML-RDF semantic integration and also contributes to the problem of Ontology Alignment, defining a rigorous and scalable semantic similarity measure for RDF labelled directed graphs. On the other hand, the thesis suggests a novel solution to the problem of translating a user query (targeting a logical mediated schema) into queries over a set of autonomous data sources provided with restricted web interfaces.
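As a rough illustration of graph-based similarity (not the measure defined in the thesis, which is rigorous and scalable in ways this is not), one simple stand-in scores two RDF resources by the overlap of their outgoing labelled edges:

```python
# Hedged sketch: an RDF graph is modelled as a set of (subject, predicate,
# object) triples, and two resources are compared by Jaccard similarity over
# their outgoing (predicate, object) pairs. This only captures the flavour of
# structural similarity on labelled directed graphs.

def out_edges(graph, node):
    """Outgoing (predicate, object) pairs of a node in a set-of-triples graph."""
    return {(p, o) for (s, p, o) in graph if s == node}

def edge_jaccard(graph_a, node_a, graph_b, node_b):
    ea, eb = out_edges(graph_a, node_a), out_edges(graph_b, node_b)
    if not ea and not eb:
        return 1.0
    return len(ea & eb) / len(ea | eb)
```

A serious measure would also propagate similarity through neighbours and handle blank nodes; this snippet only shows the data model on which such a measure operates.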
253

Toward semantic interoperability for software systems

Lister, Kendall January 2008 (has links)
“In an ill-structured domain you cannot, by definition, have a pre-compiled schema in your mind for every circumstance and context you may find ... you must be able to flexibly select and arrange knowledge sources to most efficaciously pursue the needs of a given situation.” [57] / In order to interact and collaborate effectively, agents, whether human or software, must be able to communicate through common understandings and compatible conceptualisations. Ontological differences that occur either from pre-existing assumptions or as side-effects of the process of specification are a fundamental obstacle that must be overcome before communication can occur. Similarly, the integration of information from heterogeneous sources is an unsolved problem. Efforts have been made to assist integration, through both methods and mechanisms, but automated integration remains an unachieved goal. Communication and information integration are problems of meaning and interaction, or semantic interoperability. This thesis contributes to the study of semantic interoperability by identifying, developing and evaluating three approaches to the integration of information. These approaches have in common that they are lightweight in nature, pragmatic in philosophy and general in application. / The first work presented is an effort to integrate a massive, formal ontology and knowledge-base with semi-structured, informal heterogeneous information sources via a heuristic-driven, adaptable information agent. The goal of the work was to demonstrate a process by which task-specific knowledge can be identified and incorporated into the massive knowledge-base in such a way that it can be generally re-used. 
The practical outcome of this effort was a framework that illustrates a feasible approach to providing the massive knowledge-base with an ontologically-sound mechanism for automatically generating task-specific information agents to dynamically retrieve information from semi-structured information sources without requiring machine-readable meta-data. / The second work presented is based on reviving a previously published and neglected algorithm for inferring semantic correspondences between fields of tables from heterogeneous information sources. An adapted form of the algorithm is presented and evaluated on relatively simple and consistent data collected from web services in order to verify the original results, and then on poorly-structured and messy data collected from web sites in order to explore the limits of the algorithm. The results are presented via standard measures and are accompanied by detailed discussions on the nature of the data encountered and an analysis of the strengths and weaknesses of the algorithm and the ways in which it complements other approaches that have been proposed. / Acknowledging the cost and difficulty of integrating semantically incompatible software systems and information sources, the third work presented is a proposal and a working prototype for a web site to facilitate the resolution of semantic incompatibilities between software systems prior to deployment, based on the commonly-accepted software engineering principle that the cost of correcting faults increases exponentially as projects progress from phase to phase, with post-deployment corrections being significantly more costly than those performed earlier in a project's life. The barriers to collaboration in software development are identified and steps taken to overcome them.
The system presented draws on the recent collaborative successes of social and collaborative on-line projects such as SourceForge, Del.icio.us, digg and Wikipedia and a variety of techniques for ontology reconciliation to provide an environment in which data definitions can be shared, browsed and compared, with recommendations automatically presented to encourage developers to adopt data definitions compatible with previously developed systems. / In addition to the experimental works presented, this thesis contributes reflections on the origins of semantic incompatibility with a particular focus on interaction between software systems, and between software systems and their users, as well as detailed analysis of the existing body of research into methods and techniques for overcoming these problems.
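The instance-based field-matching idea in the second work above can be illustrated with a deliberately crude stand-in (the revived algorithm itself is not reproduced here). The feature choice and the field names are assumptions; the sketch only shows how fields can be paired by statistics of their values rather than by their labels:

```python
# Hedged sketch: profile each field by simple value statistics, then pair
# each field of one table with the closest-profiled field of the other.
# Mean length is scaled down so digit content dominates the distance.

def features(values):
    """Crude profile of a field: scaled mean length and digit-character ratio."""
    text = "".join(values)
    mean_len = sum(len(v) for v in values) / len(values)
    digit_ratio = sum(c.isdigit() for c in text) / max(len(text), 1)
    return (mean_len / 10, digit_ratio)

def match_fields(table_a, table_b):
    """Greedily pair each field of table_a with its closest field in table_b."""
    def dist(f, g):
        return sum((x - y) ** 2 for x, y in zip(f, g))
    profiles_b = {name: features(vals) for name, vals in table_b.items()}
    matches = {}
    for name, vals in table_a.items():
        f = features(vals)
        matches[name] = min(profiles_b, key=lambda n: dist(f, profiles_b[n]))
    return matches
```

On messy web data, as the abstract notes, such profiles degrade quickly, which is exactly the limit the evaluation explores.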
254

Towards Trustworthy Linked Data Integration and Consumption

Knap, Tomáš January 2013 (has links)
Title: Towards Trustworthy Linked Data Integration and Consumption Author: RNDr. Tomáš Knap Department: Department of Software Engineering Supervisor: RNDr. Irena Holubová, PhD., Department of Software Engineering Abstract: We are now finally at a point when datasets based upon open standards are being published on an increasing basis by a variety of Web communities, governmental initiatives, and various companies. Linked Data offers information consumers a level of information integration and aggregation agility that has up to now not been possible. Consumers can now "mashup" and readily integrate information for use in a myriad of alternative end uses. Indiscriminate addition of information can, however, come with inherent problems, such as the provision of poor quality, inaccurate, irrelevant or fraudulent information. All will come with associated costs of the consumed data which will negatively affect the data consumer's benefit and Linked Data applications' usage and uptake. In this thesis, we address these issues by proposing ODCleanStore, a Linked Data management and querying tool able to provide data consumers with Linked Data, which is cleansed, properly linked, integrated, and trustworthy according to the consumer's subjective requirements. Trustworthiness of data means that the data has associated...
255

A visual platform to integrate poorly structured and unknown data

Da Silva Carvalho, Paulo 19 December 2017 (has links)
We hear a lot about Big Data, Open Data, Social Data, Scientific Data, etc. The importance currently given to data is, in general, very high; we are living in the era of massive data. The analysis of these data is important if the objective is to successfully extract value from them so that they can be used. The work presented in this thesis concerns the understanding, assessment, correction/modification, management and finally the integration of data, in order to allow their exploitation and reuse. Our research focuses exclusively on Open Data and, more precisely, on Open Data organized in tabular form (CSV, one of the most widely used formats in the Open Data domain). The term Open Data first appeared in 1995, when the GCDIS group (Global Change Data and Information System, United States) used the expression to encourage entities with the same interests and concerns to share their data [Data et System, 1995]. However, the Open Data movement has only recently undergone a sharp increase and has become a popular phenomenon all over the world. As the movement is recent, the field is growing rapidly and its importance is considerable. The encouragement given by governments and public institutions to publish their data openly undoubtedly plays an important role at this level.
256

[en] EXTENSION OF AN INTEGRATION SYSTEM OF LEARNING OBJECTS REPOSITORIES AIMING AT PERSONALIZING QUERIES WITH FOCUS ON ACCESSIBILITY

RAPHAEL GHELMAN 16 October 2006 (has links)
[en] Nowadays e-learning is becoming more important, as it makes possible the dissemination of knowledge and information through the internet in a faster and less costly way. Consequently, in order to filter what is most relevant and/or of interest to the user, personalization architectures and techniques have been proposed. Among the many existing possibilities of personalization, the one that deals with accessibility is becoming essential, because it guarantees that a wide variety of users may have access to information according to their preferences and needs. Accessibility is not just about ensuring that disabled people can access information, although this is important and may be a legal requirement. It is also about ensuring that a wide variety of users and devices can all gain access to information, thereby maximizing the potential audience. This dissertation presents an extension of LORIS, an integration system of learning object repositories, describing the changes to its architecture that make it able to deal with accessibility and to recognize different versions of the same learning object, thus allowing a user to execute a query considering his/her preferences and needs. A prototype of the services described in the architecture was developed using web services and faceted navigation, as well as web, e-learning and accessibility standards. The use of web services and standards aims at providing flexibility and interoperability, while faceted navigation, as implemented, allows the user to apply multiple filters to the query results without needing to resubmit the query.
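The faceted-navigation behaviour this abstract describes, narrowing an already-fetched result set without resubmitting the query, can be sketched minimally as client-side filtering. The facet field names below are illustrative, not LORIS's actual metadata fields:

```python
# Hedged sketch: results are fetched once from the integrated repositories;
# each facet selection then filters the cached list locally, so no new
# federated query is issued as the user refines.

def apply_facets(results, facets):
    """Keep results whose metadata matches every selected facet value."""
    return [r for r in results
            if all(r.get(field) == value for field, value in facets.items())]
```

Adding a facet tightens the filter; removing one simply re-applies the remaining selections to the same cached results.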
257

[en] AN ARCHITECTURE BASED ON MEDIATORS AND WEB SERVICES FOR INTEGRATING LEARNING OBJECTS REPOSITORIES

SIMONE LEAL DE MOURA 10 March 2006 (has links)
[en] In web-based education there is an emphasis on reusing and sharing instructional content, due to the complexity of the process of developing high-quality learning materials. This leads to a learning objects orientation, as well as to partnerships among institutions to promote the sharing of these objects. In order to contribute to these efforts, we proposed an architecture based on mediators and wrappers for integrating learning objects repositories. The components of this architecture were implemented as web services, and the integration processes were enriched by ontologies. The use of mediators allows a query to be redefined as sub-queries that are distributed to the data sources, with the results then integrated. The wrappers allow the data sources to understand the sub-queries and the mediator to understand the respective answers. The implementation of the architecture components as web services allows more flexibility and interoperability among the participants of the community. The formalism of ontologies is used to deal with semantic heterogeneity: the metadata concepts of each repository are described and the equivalences between them are established. The development of this architecture resulted in LORIS, an integration system of learning objects repositories. LORIS is being adopted by PGL, an international partnership project for promoting web-based education.
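The mediator/wrapper pattern described above can be sketched as follows. The repository schemas and field mappings are invented for illustration; the sketch only shows the query fan-out and the schema translation each wrapper performs in both directions:

```python
# Hedged sketch: each wrapper maps mediated-schema field names to the
# source's own field names, answers sub-queries in the source schema, and
# translates results back; the mediator fans out and merges.

class Wrapper:
    def __init__(self, source, field_map):
        self.source = source          # records in the source's own schema
        self.field_map = field_map    # mediated field -> source field

    def query(self, mediated_field, value):
        src_field = self.field_map[mediated_field]
        inverse = {v: k for k, v in self.field_map.items()}
        # Answer in the mediated schema so the mediator can integrate.
        return [{inverse[k]: v for k, v in rec.items()}
                for rec in self.source if rec.get(src_field) == value]

def mediator_query(wrappers, field, value):
    """Fan the sub-query out to every wrapper and integrate the results."""
    results = []
    for w in wrappers:
        results.extend(w.query(field, value))
    return results
```

In LORIS the field maps would come from ontology-based equivalences between repository metadata rather than being hand-written dictionaries.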
258

Análise metadimensional em inferência de redes gênicas e priorização / Metadimensional analysis in gene network inference and prioritization

Marchi, Carlos Eduardo January 2017 (has links)
Advisor: Prof. Dr. David Corrêa Martins Júnior / Master's dissertation, Universidade Federal do ABC, Graduate Program in Computer Science, 2017.
259

[en] EDUCO: MODELING EDUCATIONAL CONTENT / [pt] EDUCO: MODELANDO CONTEÚDO EDUCACIONAL

SEAN WOLFGAND MATSUI SIQUEIRA 04 May 2005 (has links)
[en] In e-learning, the development of multimedia educational content has been a success factor. However, as these processes are expensive and time-consuming, there is a need to make content reuse easier, and institutions are establishing partnerships in order to share content and services. In this context, Learning Objects (LOs) and standard metadata have grown in acceptance. In spite of this, several developers have found it difficult to use and reuse LOs, so there is still a need for mechanisms that promote LO reuse. The current trend is to make these LOs even smaller, structured according to a hierarchy of interconnected nodes. Some recent approaches are based on the use of topic maps, ontologies and knowledge bases in order to work with the content embedded in educational material. This thesis presents a model for structuring and representing this content according to the information types and conceptual units involved. In addition, we also present an architecture that allows the different semantic levels of information to be considered in an e-learning environment. This architecture is based on related work on data integration, and it establishes a context for the proposed modeling approach for representing educational content, thereby contributing to its acceptance and use by the e-learning community.
260

Take your time, but don't be late: on-demand decision-making using web data

Silva, Manoela Camila Barbosa da 07 August 2017 (has links)
In the current knowledge age, with the continuous growth of the volume of web data, and with business decisions having to be made quickly, traditional BI mechanisms are increasingly inadequate for supporting the decision-making process. In response to this scenario the concept of BI 2.0 has arisen; it is a recent concept, based mainly on the evolution of the Web, and one of its main characteristics is the use of Web sources in decision-making. However, data from the Web tend to be too volatile to be stored in the DW, which makes them a good option for situational data. Situational data are useful for decision-making queries at a particular time and in a particular situation, and can be discarded after the analysis. Much research has been carried out on BI 2.0, but many points remain to be explored. This work proposes a generic architecture for Decision Support Systems that aims to integrate situational data from the Web into user queries at the right time, that is, when the user needs them for decision making. Its main contribution is the proposal of a new OLAP operator, called Drill-Conformed, which enables automatic data integration using only the domain of values of the situational data. In addition, the operator collaborates with the Semantic Web by making available the semantics-related discoveries it makes about the data domain used. The case study is a streaming provision system. The results of the experiments are presented and discussed, showing that it is possible to perform the data integration satisfactorily and with good processing times for the applied scenario.
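One way to picture the idea behind the proposed Drill-Conformed operator is a join driven purely by a shared domain of values. This is a guess at the mechanism from the abstract alone; the field names and the simple exact-value matching are assumptions, and the actual operator presumably does considerably more:

```python
# Hedged sketch: situational web data is attached to warehouse facts by
# matching on the value domain of one dimension (here, city names), with no
# pre-modelled conformed dimension linking the two sources.

def drill_conformed(facts, situational, dim):
    """Attach situational attributes to each fact sharing a dim value."""
    by_value = {row[dim]: row for row in situational}
    enriched = []
    for fact in facts:
        extra = by_value.get(fact[dim], {})
        merged = dict(fact)
        merged.update({k: v for k, v in extra.items() if k != dim})
        enriched.append(merged)
    return enriched
```

Facts with no matching situational value pass through unchanged, so the operator can run on demand and its output can be discarded after the analysis, matching the transient nature of situational data.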
