  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
561

Semantic data sharing with a peer data management system /

Tatarinov, Igor. January 2004 (has links)
Thesis (Ph. D.)--University of Washington, 2004. / Vita. Includes bibliographical references (p. 113-124).
562

Integrating uncertain XML data from different sources

Eshmawi, Ala. January 1900 (has links)
Thesis (M.S.)--The University of North Carolina at Greensboro, 2009. / Directed by Fereidoon Sadri; submitted to the Dept. of Computer Science. Title from PDF t.p. (viewed May 5, 2010). Includes bibliographical references (p. 32).
563

Information systems to provide leading indicators of energy sufficiency : a report to the Federal Energy Administration

MIT Energy Lab January 1975 (has links)
Final working paper, submitted to Office of Data Policy, Federal Energy Administration in connection with A Study of information systems to provide leading indicators of energy sufficiency, (FEA Contract no. 14-01-001-2040).
564

Old institution meets new technology : GIS for quantifying church roles

Mans, Gerbrand 12 1900 (has links)
Thesis (MA)--Stellenbosch University, 2003. / ENGLISH ABSTRACT: South Africa today faces many social and welfare problems, three of which are very prominent: HIV/Aids; unemployment; and sexual and/or violent crimes against women and children. With churches being some of the biggest and most influential non-governmental organizations in the country, government is increasingly acknowledging that churches have a very important role to play in helping to curb social and welfare problems in the community. One inhibiting factor keeps churches from playing the role that government expects of them: the roles and expected roles of churches have not been quantified sufficiently. A geographical information system (GIS) was chosen to help in this process of quantification. Previous studies of GIS use by social and welfare services showed that this software gives these service agencies a powerful new way to analyse services in relation to clients and the communities in which they operate. The crux throughout the study is the process by which it is shown how a GIS can be used, and is central to the process of gathering data, storing and manipulating the gathered data, deriving information from it, through to communicating and visualising the obtained results. Key words: geographical information systems; GIS; ArcGIS; Statistica; Microsoft Access; church; NGO; social services; social problems; welfare services; welfare problems; database; database management systems; geodatabase; Factor Analysis; quantification
565

Uppfattningar om SwePub : En enkätstudie om svenska lärosätens bild av SwePub som analysverktyg / Perspectives on SwePub : A Survey of the Views of Swedish Universities Regarding SwePub as a Tool for Analysis

Jerkert, Kajsa January 2018 (has links)
This thesis examines a selection of Swedish universities' views on the Swedish national publication database SwePub. The study uses phenomenography as its methodology and, by means of a survey, asks questions about the universities' local publication databases, the national guidelines for documenting a scientific publication, and how the universities regard the SwePub analysis project as a whole. The purpose was to find out how the universities perceive the SwePub phenomenon overall. Participants were selected according to the size of the university, the subjects offered there and the publishing system used. Regarding the local publication databases, the answers focused on the difficulties and opportunities of registering scientific work in the universities' own publication databases. In the section on guidelines, I discuss how the universities relate to two documents of SwePub guidelines and recommendations. The analysis deals with the national guidelines in relation to the local practice of the universities, where national guidelines may sometimes collide with an institution's own needs and wishes. The part of the analysis that deals with the institutions' views on the SwePub analysis project at large relates the project to the terms function and relevance. In conclusion, I discuss to what extent I have found patterns in the answers linked to my selection criteria of university size, subject area and type of publishing system.
566

Análise e desenvolvimento de um novo algoritmo de junção espacial para SGBD geográficos / Analysis and design of a new algorithm to perform spatial join in geographic DBMS

Fornari, Miguel Rodrigues January 2006 (has links)
A Geographic Information System (GIS) stores geographic data, combining them to obtain new representations of the geographic space. The spatial join operation combines two sets of spatial features, A and B, based on a spatial predicate, such as intersection or distance between objects. It is a fundamental as well as one of the most expensive operations in a GIS: combining pairs of spatial, georeferenced objects from two different, and probably large, data sets implies the execution of a significant number of input/output (I/O) operations as well as a large number of CPU operations. This work presents a study of the performance of spatial join algorithms. Firstly, an analysis of the algorithms published in the literature is carried out, yielding cost expressions that predict the number of I/O operations and the algorithmic complexity. Then, several of the algorithms (Nested Loops, the Partition Based Spatial Join Method (PBSM), Synchronized Tree Traversal (STT) over R*-trees, and the Iterative Spatial Stripped Join (ISSJ)) are implemented in a test environment that lets the user vary input parameters such as data-set cardinality, available memory and join predicate, using both real and synthetic data sets. The tests showed that STT is best suited to small data sets; ISSJ when there is enough memory to sort the sets internally; and PBSM when little buffer memory is available. Based on this analysis, a new algorithm, called Histogram-based Hash Stripped Join (HHSJ), is proposed. HHSJ uses histograms of the spatial distribution of the objects to define the partitioning, stores the objects in hash-organized files, and subdivides the space into strips to reduce processing. The tests indicate that HHSJ is faster in most scenarios, and increasingly advantageous as the number of objects involved in the join grows. Finally, a cost-based query optimizer, capable of choosing the best algorithm to perform the filter step of a spatial join operation, is presented. It uses statistical information kept in the data dictionary to estimate the response time of each algorithm and picks the fastest for a given operation. This optimizer chose the right algorithm in 88.9% of the cases, erring only on joins of small data sets, where the impact is minor.
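The filter step the abstract describes, comparing minimum bounding rectangles (MBRs) within horizontal strips, can be sketched as follows. This is a minimal illustration of the strip-based idea, not the thesis's exact HHSJ implementation; the function names and the uniform strip decomposition are assumptions.

```python
def mbrs_intersect(a, b):
    """True if two MBRs (xmin, ymin, xmax, ymax) overlap: the filter predicate."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def strip_join(set_a, set_b, y_min, y_max, n_strips):
    """Partition both inputs into horizontal strips, then test only
    pairs of MBRs that share at least one strip."""
    height = (y_max - y_min) / n_strips

    def strips_of(mbr):
        # An MBR may span several strips; clamp indices to the valid range.
        lo = max(0, int((mbr[1] - y_min) // height))
        hi = min(n_strips - 1, int((mbr[3] - y_min) // height))
        return range(lo, hi + 1)

    buckets = [([], []) for _ in range(n_strips)]
    for mbr in set_a:
        for s in strips_of(mbr):
            buckets[s][0].append(mbr)
    for mbr in set_b:
        for s in strips_of(mbr):
            buckets[s][1].append(mbr)

    candidates = set()  # a set deduplicates pairs found in more than one strip
    for part_a, part_b in buckets:
        for a in part_a:
            for b in part_b:
                if mbrs_intersect(a, b):
                    candidates.add((a, b))
    return candidates
```

The payoff is that each pair is compared only when both MBRs fall into a common strip, which is what reduces the CPU cost relative to plain nested loops; the surviving candidate pairs would then go to an exact-geometry refinement step.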
567

Dynamic Energy-Aware Database Storage and Operations

Behzadnia, Peyman 29 March 2018 (has links)
Energy consumption has become a first-class optimization goal in the design and implementation of data-intensive computing systems. This is particularly true of database management systems (DBMS), which are among the most important servers in the software stack of modern data centers. The storage system is an essential component of a database and has been the subject of many research efforts aimed at reducing its energy consumption. In previous work, dynamic power management (DPM) techniques that make real-time decisions to transition disks to low-power modes are normally used to save energy in storage systems. In this research, we tackle the limitations of previous DPM proposals and design a dynamic energy-aware disk storage system for database servers. We introduce a DPM optimization model, integrated with a model predictive control (MPC) strategy, that minimizes the power consumption of the disk-based storage system while satisfying given performance requirements. It dynamically determines the state of the disks and plans inter-disk data-fragment migration to achieve a desirable balance between power consumption and query response time. Furthermore, by analyzing the optimization model to identify structural properties of optimal solutions, we propose a fast heuristic DPM algorithm that can be integrated into large-scale disk storage systems, where finding the exact optimum may take too long, achieving near-optimal power savings within short computation times. The proposed ideas are evaluated through simulations using an extensive set of synthetic workloads. The results show that our solution achieves up to 1.65 times more energy saving while providing up to 1.67 times shorter response time compared to the best existing algorithm in the literature.
Stream join is a dynamic and expensive database operation that performs the join in real time on continuous data streams. Stream joins, also known as window joins, impose high computational cost and potentially higher energy consumption than other database operations, so we also tackle the energy efficiency of stream join processing in this research. Given the strong linear correlation between energy efficiency and performance of in-memory parallel join algorithms in database servers, we study the parallelization of stream join algorithms on multicore processors to achieve both energy efficiency and high performance. Equi-join is the most frequent type of join in query workloads, and the symmetric hash join (SHJ) algorithm is the most effective way to evaluate equi-joins over data streams. To the best of our knowledge, we are the first to propose a shared-memory parallel symmetric hash join algorithm on multicore CPUs. Furthermore, we introduce a novel parallel hash-based stream join algorithm, called chunk-based pairing hash join, that aims at raising data throughput and scalability. We also tackle parallel processing of multi-way stream joins, where more than two input data streams take part in the join; to the best of our knowledge, we are likewise the first to propose an in-memory parallel multi-way hash-based stream join on multicore processors. Experimental evaluation of the proposed parallel algorithms demonstrates high throughput, significant scalability and low latency while reducing energy consumption. Our parallel symmetric hash join and chunk-based pairing hash join achieve up to 11 times and 12.5 times the throughput, respectively, of the state-of-the-art parallel stream join algorithm, and up to around 22 times and 24.5 times the throughput of non-parallel (sequential) stream join computation with a single processing thread.
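The symmetric hash join core that the abstract parallelizes can be sketched in a few lines: each arriving tuple first probes the opposite stream's hash table, then is inserted into its own, so join results are produced incrementally as tuples arrive from either side. This is a minimal single-threaded illustration of the classic SHJ idea, not the thesis's parallel or chunk-based variants; class and method names are invented for the sketch.

```python
from collections import defaultdict

class SymmetricHashJoin:
    """Incremental equi-join of two streams: one hash table per stream."""

    def __init__(self):
        self.tables = (defaultdict(list), defaultdict(list))

    def on_tuple(self, stream_id, key, payload):
        """Process one tuple from stream 0 or 1; return the matches it produces."""
        own = self.tables[stream_id]
        other = self.tables[1 - stream_id]
        # Probe the other stream's table first, so each matching pair
        # is reported exactly once, by whichever tuple arrives second.
        matches = [(payload, p) if stream_id == 0 else (p, payload)
                   for p in other[key]]
        own[key].append(payload)   # then build into our own table
        return matches
```

A parallel version would partition the key space (or chunks of tuples) across worker threads; the probe-then-build order per tuple is what keeps the operator symmetric and non-blocking.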
569

Núcleo gerenciador de objetos compatibilizando eficiência e flexibilidade / A core object manager balancing flexibility and efficiency

Carlos Roberto Valêncio 06 September 2000 (has links)
The technologies employed to build the current generation of database management systems (DBMS), including those based on the Relational Model, have been enough to support the needs of traditional business applications. However, more demanding applications, like computer-aided design and manufacturing (CAD, CAM and CIM), scientific data retrieval and analysis, computer-aided medical systems, telecommunications, geographical information systems and multimedia systems, have not yet been adequately supported. The objective of this work is to develop new technologies for building DBMSs that support those non-conventional applications. To this end, we implemented an object manager kernel incorporating a representative set of tools able to provide flexible and efficient support for key DBMS operations. The kernel is described in terms of an object-oriented data model; however, almost every technique proposed can be used with data managers supporting other data models. In particular, we show that the kernel can be used to build both object-oriented and relational DBMSs. 
The kernel also supports the construction of DBMSs that maintain loosely structured (semi-structured) data, providing a good starting point for building web-based applications that handle multimedia documents. The kernel was implemented in a modular, multi-level architecture. Each module provides a well-defined service through a well-defined interface, so more than one implementation can exist for each module, enabling comparison or tuning of the kernel for specific situations. Nonetheless, the structure enforces tight module integration, enabling efficient execution of the resulting DBMS. The main contributions of this work include new techniques for the following aspects of database managers: object identifier (OID) management; transaction and concurrency control based on application data semantics; disk-access optimization to manage page shadowing during transaction execution; the use of attribute tuples and lists to define structures; and integrated storage of schema and data in a common structure.
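The OID management the abstract lists as a contribution typically rests on an indirection table: OIDs are surrogate keys mapped to physical locations, so objects can move on disk without invalidating stored references. The sketch below illustrates that general idea only; the class, the (page, slot) addressing and all names are assumptions, not the thesis's actual design.

```python
import itertools

class OidManager:
    """Logical OIDs mapped to physical (page, slot) locations via indirection."""

    def __init__(self):
        self._next = itertools.count(1)   # monotonically increasing OIDs
        self._map = {}                    # OID -> (page, slot)

    def allocate(self, page, slot):
        """Assign a fresh OID to an object stored at (page, slot)."""
        oid = next(self._next)
        self._map[oid] = (page, slot)
        return oid

    def locate(self, oid):
        """Resolve an OID to its current physical location."""
        return self._map[oid]

    def relocate(self, oid, page, slot):
        # Moving the object updates only the indirection entry;
        # every stored reference to `oid` remains valid.
        self._map[oid] = (page, slot)
```

The design choice illustrated is the classic trade-off between logical OIDs (one extra lookup per access, cheap relocation) and physical OIDs (direct access, expensive reorganization).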
