11

Answering Object Queries over Knowledge Bases with Expressive Underlying Description Logics

Wu, Jiewen January 2013 (has links)
Many information sources can be viewed as collections of objects and descriptions about objects. The relationships between objects are often characterized by a set of constraints that semantically encode background knowledge of some domain. The most straightforward and fundamental way to access information in these repositories is to search for objects that satisfy certain selection criteria. This work considers a description logic (DL) based representation of such information sources and object queries, which allows for automated reasoning over the constraints accompanying objects. Formally, a knowledge base K = (T, A) captures constraints in the terminology (a TBox) T, and objects with their descriptions in the assertions (an ABox) A, using some DL dialect L. In such a setting, object descriptions are L-concepts and object identifiers correspond to individual names occurring in K. Object queries then correspond to the well-known problem of instance retrieval in the underlying DL knowledge base K, which returns the identifiers of qualifying objects.
This work generalizes instance retrieval over knowledge bases to provide users with answers in which both the identifiers and the descriptions of qualifying objects are given. The proposed query paradigm, called assertion retrieval, is favoured over instance retrieval since it provides more informative answers to users. A more compelling reason is related to performance: assertion retrieval enables a transfer of basic relational database techniques, such as caching and query rewriting, into the context of an assertion retrieval algebra.
The main contributions of this work are twofold: one concerns optimizing the fundamental reasoning task that underlies assertion retrieval, namely instance checking; the other establishes a query compilation framework based on the assertion retrieval algebra. The former is necessary because an assertion retrieval query can entail a large volume of instance checking requests of the form K |= a : C, where "a" is an individual name and "C" is an L-concept. This work thus proposes a novel absorption technique, ABox absorption, to improve instance checking. ABox absorption handles knowledge bases whose underlying dialect L is expressive, for instance, one that requires disjunctive knowledge, and it works particularly well when knowledge bases contain a large number of concrete domain concepts for object descriptions. This work further presents a query compilation framework based on the assertion retrieval algebra to make assertion retrieval more practical. In this framework, a suite of rewriting rules generates a variety of query plans, with a focus on plans that avoid reasoning with respect to the background knowledge base when sufficient cached results of earlier requests exist.
ABox absorption and the query compilation framework have been implemented in a prototypical system, dubbed the CARE Assertion Retrieval Engine (CARE). CARE also defines a simple yet effective cost model to search for the best plan generated by query rewriting. Empirical studies of CARE show that the proposed techniques make assertion retrieval practical across a variety of domains.
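A minimal sketch of the instance-checking and assertion-retrieval tasks this abstract describes, restricted to atomic concepts and simple inclusion axioms. The KB contents and function names are illustrative assumptions, and the closure-based check below is far weaker than the tableau reasoning with ABox absorption that the thesis actually targets:

```python
"""Toy illustration of instance checking K |= a : C and assertion
retrieval. Hypothetical sketch, not the CARE engine: it handles only
atomic concepts and inclusion axioms (C subsumed by D), so it is
sound but incomplete for expressive DLs with disjunction or
concrete domains."""

# TBox: atomic inclusion axioms, e.g. GradStudent <= Student <= Person
TBOX = {
    "GradStudent": {"Student"},
    "Student": {"Person"},
}

# ABox: asserted concept memberships per individual name
ABOX = {
    "alice": {"GradStudent"},
    "bob": {"Professor"},
}

def entailed_concepts(asserted):
    """Close a set of asserted concepts under the TBox inclusions."""
    closure = set(asserted)
    frontier = list(asserted)
    while frontier:
        c = frontier.pop()
        for parent in TBOX.get(c, ()):
            if parent not in closure:
                closure.add(parent)
                frontier.append(parent)
    return closure

def instance_check(individual, concept):
    """Decide K |= a : C for the toy KB (atomic concepts only)."""
    return concept in entailed_concepts(ABOX.get(individual, set()))

def assertion_retrieval(concept):
    """Return (identifier, description) pairs, not just identifiers."""
    return [(a, entailed_concepts(cs)) for a, cs in ABOX.items()
            if concept in entailed_concepts(cs)]

print(instance_check("alice", "Person"))  # True
print(assertion_retrieval("Student"))     # [('alice', {'GradStudent', 'Student', 'Person'})]
```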
12

Desenvolvimento de uma base de dados e de indicadores para projetos de pesquisa em rede com ruminantes / Development of a database and indicators for networked research projects on ruminants

Martins, Jorge Dubal 16 February 2016 (has links)
This thesis describes the structure and main features of PampaDB, a computer system built around a database for networked research projects on ruminant production systems. The stored information is structured as follows: at the highest level is the data source, the basic unit of work in the system, which gathers metadata about a research project, a survey, or an agricultural production unit. A data source comprises one or more of the following components: agroecosystem, natural resource management system, animal, pasture, soil, feed, and location. Each component can store events or observations. From the data entered into the database, different indicators are computed and presented as tables and graphs. The query module is freely accessible and supports searches based on keywords, time periods, and data sources. PampaDB runs on a client-server architecture, with the data stored on the server and access provided over the internet.
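The hierarchy the abstract describes (a data source aggregating typed components, each recording events or observations) can be pictured with a small sketch; the class names, fields, and the sample indicator below are assumptions for illustration, not the actual PampaDB schema:

```python
"""Hypothetical sketch of the PampaDB data hierarchy described above."""

from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Observation:
    when: date
    variable: str      # e.g. "live_weight_kg"
    value: float

@dataclass
class Component:
    kind: str          # "animal", "pasture", "soil", "feed", ...
    identifier: str
    observations: List[Observation] = field(default_factory=list)

@dataclass
class DataSource:
    """Basic unit of work: a project, survey, or production unit."""
    title: str
    kind: str          # "research_project" | "survey" | "production_unit"
    components: List[Component] = field(default_factory=list)

    def indicator_mean(self, kind: str, variable: str) -> float:
        """A toy indicator: mean of one variable across components."""
        values = [o.value
                  for c in self.components if c.kind == kind
                  for o in c.observations if o.variable == variable]
        return sum(values) / len(values) if values else float("nan")

src = DataSource("Grazing trial 2015", "research_project", [
    Component("animal", "A001", [Observation(date(2015, 3, 1), "live_weight_kg", 310.0)]),
    Component("animal", "A002", [Observation(date(2015, 3, 1), "live_weight_kg", 298.0)]),
])
print(src.indicator_mean("animal", "live_weight_kg"))  # 304.0
```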
13

Acesso a dados baseado em ontologias com NoSQL / Ontology-based data access with NoSQL

Agena, Barbara Tieko 27 November 2017 (has links)
Ontology-based data access (OBDA) aims to give users access to data without specific knowledge of how the data are stored in their sources. To this end, an ontology serves as a high-level conceptual layer, exploiting its capacity to describe the domain and to cope with incomplete data. NoSQL (Not Only SQL) systems are becoming popular, offering features that relational database systems do not support, which creates the need to adapt OBDA systems to these new types of databases. This research proposes a new architecture for OBDA systems that allows access to data in both relational and NoSQL databases, using a simpler mapping to mediate between the ontology and the databases. Two OBDA prototypes were built, one for NoSQL systems and one for relational database systems, to empirically validate the proposed architecture.
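To illustrate the "simpler mapping" idea, here is a minimal sketch in which one declarative entry per ontology concept is compiled into either a SQL query or a MongoDB-style filter; the mapping format and all names are assumptions for illustration, not the thesis's actual mapping language:

```python
"""Hypothetical concept-to-source mapping for an OBDA layer that
targets both a relational backend and a NoSQL (document) backend."""

MAPPINGS = {
    # concept IRI -> where its instances live in each backend
    "ex:Student": {
        "relational": {"table": "person", "where": "role = 'student'"},
        "nosql": {"collection": "people", "filter": {"role": "student"}},
    },
}

def to_sql(concept: str) -> str:
    """Compile a concept query into SQL for the relational source."""
    m = MAPPINGS[concept]["relational"]
    return f"SELECT * FROM {m['table']} WHERE {m['where']};"

def to_mongo(concept: str) -> tuple:
    """Compile a concept query into a (collection, filter) pair,
    e.g. for db[collection].find(filter) with a document store."""
    m = MAPPINGS[concept]["nosql"]
    return m["collection"], m["filter"]

print(to_sql("ex:Student"))    # SELECT * FROM person WHERE role = 'student';
print(to_mongo("ex:Student"))  # ('people', {'role': 'student'})
```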
15

Techniques for Characterizing the Data Movement Complexity of Computations

Elango, Venmugil 08 June 2016 (has links)
No description available.
16

Integrated Mobility and Service Management for Network Cost Minimization in Wireless Mesh Networks

Li, Yinan 04 June 2012 (has links)
In this dissertation research, we design and analyze integrated mobility and service management for network cost minimization in Wireless Mesh Networks (WMNs). We first investigate the problem of mobility management in WMNs for which we propose two efficient per-user mobility management schemes based on pointer forwarding, and then a third one that integrates routing-based location update and pointer forwarding for further performance improvement. We further study integrated mobility and service management for which we propose protocols that support efficient mobile data access services with cache consistency management, and mobile multicast services. We also investigate reliable and secure integrated mobility and service management in WMNs, and apply the idea to the design of a protocol for secure and reliable mobile multicast. The most salient feature of our protocols is that they are optimal on a per-user basis (or on a per-group basis for mobile multicast), that is, the overall network communication cost incurred is minimized for each individual user (or group). Per-user based optimization is critical because mobile users normally have vastly different mobility and service characteristics. Thus, the overall cost saving due to per-user based optimization is cumulatively significant with an increasing mobile user population.
To evaluate the performance of our proposed protocols, we develop mathematical models and computational procedures used to compute the network communication cost incurred and build simulation systems for validating the results obtained from analytical modeling. We identify optimal design settings under which the network cost is minimized for our mobility and service management protocols in WMNs. Intensive comparative performance studies are carried out to compare our protocols with existing work in the literature. The results show that our protocols significantly outperform existing protocols under identical environmental and operational settings.
We extend the design notion of integrated mobility and service management for cost minimization to MANETs and propose a scalable dual-region mobility management scheme for location-based routing. The basic design concept is to use local regions to complement home regions and have mobile nodes in the home region of a mobile node serve as location servers for that node. We develop a mathematical model to derive the optimal home region size and local region size under which overall network cost incurred is minimized. Through a comparative performance study, we show that dual-region mobility management outperforms existing mobility management schemes based on static home regions. / Ph. D.
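As an illustration of the per-user optimization idea, the following toy model shows how an optimal pointer-forwarding threshold depends on a user's service-to-mobility ratio. The cost constants and the linear-chain assumption are purely illustrative, not the dissertation's analytical model:

```python
"""Toy per-user cost model for pointer forwarding: a longer allowed
forwarding chain K saves location-update traffic, but each service
delivery then traverses more hops."""

def cost_per_move(K, smr, c_update=10.0, c_hop=1.0):
    """Average signaling + delivery cost per handoff.

    K        : chain-length threshold; a full update is paid every K moves
    smr      : service-to-mobility ratio (service requests per handoff)
    c_update : assumed cost of one location update to the distant anchor
    c_hop    : assumed cost per extra forwarding hop per service delivery
    """
    update = c_update / K                     # update cost amortized over K moves
    forwarding = smr * c_hop * (K - 1) / 2.0  # mean chain length is (K-1)/2
    return update + forwarding

def best_threshold(smr, k_max=20):
    """Per-user optimum: users with a high SMR prefer short chains."""
    return min(range(1, k_max + 1), key=lambda K: cost_per_move(K, smr))

for smr in (0.1, 1.0, 10.0):
    print(smr, best_threshold(smr))  # threshold shrinks as SMR grows
```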
17

Archéologie et inventaire du patrimoine national : recherches sur les systèmes d'inventaire en Europe et Méditerranée occidentale (France, Espagne, Grande-Bretagne, Tunisie) : comparaisons et perspectives / Archaeology and national heritage record : research on record systems in Europe and the western Mediterranean (France, Spain, Great Britain, Tunisia) : comparisons and prospects

Ournac, Perrine 28 September 2011 (has links)
Comparing the archaeological heritage record systems of France, Spain, Great Britain and Tunisia consists in studying the organization and results of these records at the national level or, where no national inventory exists, at the regional level. For each country, the study identifies how a database aimed at protecting and promoting the archaeological resource is designed: the first inventories, the legal and institutional frameworks, the conditions of accessibility, and the current form of archaeological records have all been examined. After describing and testing these cases, the analysis highlights the circumstances that condition both the existence of a national archaeological record and the level of accessibility of the data maintained in these records.
18

Otimização de operações de entrada e saída visando reduzir o tempo de resposta de aplicações distribuídas que manipulam grandes volumes de dados / Optimizing input/output operations to reduce the response time of distributed applications that handle large volumes of data

Ishii, Renato Porfirio 01 September 2010 (has links)
Current scientific applications produce ever larger volumes of data, and handling, processing, and analyzing such data require large-scale computing infrastructures such as clusters and grids. In this context, various studies have focused on improving the performance of these applications by optimizing data access, employing techniques such as replication, migration, distribution, and parallelism of data. However, these common approaches do not use knowledge about the applications at hand to perform this optimization. This gap motivated the present thesis, which applies the historical and predicted behavior of applications to optimize their read and write operations on distributed data. A new heuristic was initially proposed that uses previously monitored information about applications to make decisions regarding replication, migration, and consistency of data. Its evaluation revealed that a set of historical events does help to estimate an application's future behavior and to optimize its access operations, so the heuristic was embedded into two optimization approaches.
The first approach requires at least one previous execution to build the history; this requirement may limit real-world applications that change their behavior or take very long to execute. To overcome this issue, a second technique was proposed that performs on-line predictions of application behavior, removing the need for any prior execution and adapting its estimates of future behavior to underlying changes. This behavior is modeled as time series: the method analyzes the series' properties in order to classify their generating processes, and this classification indicates the models that best fit each application's behavior, allowing more accurate predictions. Both approaches were implemented and evaluated using the OptorSim simulator (LHC/CERN project), which is widely adopted by the scientific community. Experiments confirmed that the proposed approaches reduce the response (execution) time of applications that handle large volumes of distributed data by approximately 50%.
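A minimal sketch of the predictive idea described above: classify a series' generating process with a crude lag-1 autocorrelation test and choose a forecaster accordingly. The threshold and the two candidate models are illustrative assumptions, much simpler than the classification the thesis develops:

```python
"""Classify a time series of I/O events and predict its next value."""

import numpy as np

def lag1_autocorr(x):
    """Sample autocorrelation at lag 1."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return float(np.dot(d[:-1], d[1:]) / np.dot(d, d))

def forecast_next(series, threshold=0.3):
    """Classify the generating process, then predict the next value."""
    r1 = lag1_autocorr(series)
    if abs(r1) < threshold:
        # looks like white noise around a level: predict the mean
        return "noise", float(np.mean(series))
    # noticeable serial dependence: fit AR(1), x_t = c + phi * x_{t-1}
    x, y = np.asarray(series[:-1]), np.asarray(series[1:])
    phi, c = np.polyfit(x, y, 1)
    return "ar1", float(c + phi * series[-1])

rng = np.random.default_rng(0)
noisy = rng.normal(100.0, 5.0, size=200)              # i.i.d. request sizes
trending = np.cumsum(rng.normal(1.0, 0.5, size=200))  # drifting workload
print(forecast_next(noisy))     # ('noise', ~100)
print(forecast_next(trending))  # ('ar1', ~next point on the drift)
```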
19

Tillgängliggöra data inom SSBI : En kvalitativ studie hur data bör tillgängliggöras vid införande av SSBI / Data access within SSBI : A qualitative study of how data should be made accessible when setting up SSBI

Götlind, Oskar January 2019 (has links)
In today's business world, it is practically a must for businesses to exploit the competitive advantage that comes from using the business data generated by their processes to inform decisions, which is made possible within a Business Intelligence (BI) environment. These decisions, however, often need to be taken faster than before, putting increasing pressure on the IT department that must supply decision makers with the necessary supporting material. To enable quick decisions, businesses need to shorten the lead times a traditional BI environment creates, in which a decision maker requests an analysis and the IT department then produces a report answering the request. To reduce these lead times, a business can introduce a so-called Self-Service BI environment (SSBI). SSBI focuses on making decision makers self-reliant by enabling them to access and analyze data themselves, without the help of an IT department, in order to make their decisions.
Businesses do not always succeed in introducing SSBI; such an introduction has proved to involve many major challenges. Making data available to decision makers in such an environment is one of these challenges, but what are the challenges specific to making data accessible, and what factors should businesses focus on when doing so? These are the questions this study answers, based on the research question below:
• How should data be made available to end users when introducing SSBI?
The study is carried out as a case study involving a number of interviews with respondents employed at a leading Swedish company that consults for large businesses on Business Intelligence and Self-Service BI. A literature review of research within the same domain was also conducted during the study, both to ground the interviews and to enrich the information from the respondents. The result of the study is a model of factors and challenges, organized into three main categories with associated subcategories, that businesses should focus on when making data available in an SSBI environment.
