About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
611

Comparison of Databases Used in Clearing House Systems

Hägglund, Casper January 2024 (has links)
This paper examines which types of databases could be useful in clearing house systems, focusing on one of Nasdaq's clearing house systems. The report starts by identifying which characteristics are important for the database used in a clearing house system. Based on these characteristics, candidate databases such as SQLite, MongoDB, Couchbase Lite, and Nasdaq's own database were tested to give an overview of their performance in terms of both latency and throughput. The test results are then analyzed to determine which database performs best under different conditions. The paper concludes that Nasdaq's current database is a good fit for this specific system and that the other databases would most likely not achieve the same or similar performance. While the other databases generally performed worse than the current solution, H2 did produce better results in some of the tests.
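
The thesis's benchmark harness is not reproduced in the abstract, but the latency/throughput methodology it describes can be sketched. Below is a minimal, hypothetical measurement against SQLite, one of the candidates named above; the schema, workload size, and reported metrics are illustrative assumptions, not the actual test setup.

```python
import sqlite3
import time

N_OPS = 10_000  # hypothetical workload size; the thesis's actual parameters are not given

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, account TEXT, amount REAL)")

latencies = []
start = time.perf_counter()
for i in range(N_OPS):
    t0 = time.perf_counter()
    conn.execute("INSERT INTO trades (account, amount) VALUES (?, ?)", (f"acct{i % 100}", 1.0))
    latencies.append(time.perf_counter() - t0)  # per-operation latency
conn.commit()
elapsed = time.perf_counter() - start

latencies.sort()
print(f"throughput:     {N_OPS / elapsed:.0f} ops/s")
print(f"median latency: {latencies[N_OPS // 2] * 1e6:.1f} us")
print(f"p99 latency:    {latencies[int(N_OPS * 0.99)] * 1e6:.1f} us")
```
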
612

Query evaluation with constant delay

Kazana, Wojciech 16 September 2013 (has links) (PDF)
This thesis is concerned with the problem of query evaluation. Given a query q and a database D, the goal is to compute the set q(D) of tuples resulting from evaluating q over D. However, q(D) may be larger than the database itself, since its size can be of the form n^l, where n is the size of the database and l is the arity of the query. Fully computing q(D) may therefore require more resources than are available. The main focus of this thesis is a particular solution to this problem: enumerating q(D) with constant delay. Intuitively, this means there is an algorithm with two phases: a preprocessing phase running in time linear in the size of the database, followed by an enumeration phase producing the elements of q(D) one by one, with a constant delay (independent of the size of the database) between two consecutive elements. In addition, four related problems are considered: model checking (where the query q is Boolean), counting (computing the size |q(D)|), testing (efficiently deciding whether a given tuple belongs to the query result), and the j-th solution (direct access to the j-th element of q(D)). The results presented in this thesis address the problems above for: first-order queries over classes of structures of bounded degree; monadic second-order queries over classes of structures of bounded treewidth; and first-order queries over classes of structures with bounded expansion.
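
To make the two-phase scheme concrete, here is a small illustration (not code from the thesis): enumerating the answers to the query E(x, y) over a graph of bounded degree. A linear-time preprocessing pass indexes the edge relation; the enumeration phase then yields each answer tuple with constant work between consecutive outputs.

```python
from collections import defaultdict

def preprocess(edges):
    """Preprocessing phase: index the edge relation by source node,
    in time linear in the size of the database."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    return adj

def enumerate_answers(adj):
    """Enumeration phase: yield each answer tuple of E(x, y) with constant
    delay between consecutive outputs (no search or backtracking)."""
    for u, neighbors in adj.items():
        for v in neighbors:
            yield (u, v)

edges = [(1, 2), (1, 3), (2, 3)]
for tup in enumerate_answers(preprocess(edges)):
    print(tup)
```
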
613

A non-intrusive approach for automated physical design tuning

JOSÉ MARIA DA SILVA MONTEIRO FILHO 14 January 2009 (has links)
The physical design of a database plays a critical role in its performance, and there has been considerable work on automated physical design tuning for database systems. Existing tools, however, are invoked offline and leave to the DBA, among other tasks, the decision of whether to apply the recommendations they produce. In dynamic environments with ad-hoc queries, it is difficult to identify useful physical design configurations in advance. Recently, a few initiatives have described prototypes that implement aspects of online physical tuning, but these work in an intrusive manner and only with a specific DBMS. This work proposes a non-intrusive approach to automated, on-the-fly maintenance of the database physical design. The proposed approach is completely decoupled from the DBMS code, can be used with any DBMS, and runs without human intervention. The strategy is based on heuristics that run continuously and, whenever necessary, modify the current physical design in reaction to changes in the query workload. To demonstrate the viability of these ideas, the approach was instantiated to solve two major physical design problems: the automatic maintenance of indexes and of alternative data clusterings.
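
A minimal sketch of the non-intrusive idea, under stated assumptions: the tuner runs as an ordinary client connection, observes the workload, and reacts by issuing standard DDL, so no DBMS code is touched. The workload capture and the index-choice heuristic below are hypothetical stand-ins for the thesis's heuristics.

```python
import sqlite3
from collections import Counter

SCAN_THRESHOLD = 100  # hypothetical: a column filtered this often gets an index

class OutsideTuner:
    """Runs as a plain client: observes queries and creates indexes through
    ordinary SQL DDL, never touching DBMS internals."""
    def __init__(self, conn):
        self.conn = conn
        self.column_hits = Counter()
        self.created = set()

    def observe(self, table, column):
        # In a real deployment the workload would be captured from a query
        # log; here the caller reports each filtered column explicitly.
        self.column_hits[(table, column)] += 1
        self.react()

    def react(self):
        # Continuously running heuristic: react to workload changes.
        for (table, column), hits in self.column_hits.items():
            if hits >= SCAN_THRESHOLD and (table, column) not in self.created:
                self.conn.execute(f"CREATE INDEX idx_{table}_{column} ON {table}({column})")
                self.created.add((table, column))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
tuner = OutsideTuner(conn)
for _ in range(SCAN_THRESHOLD):
    conn.execute("SELECT * FROM orders WHERE customer = ?", ("alice",))
    tuner.observe("orders", "customer")
print(tuner.created)  # {('orders', 'customer')} -- index created on the fly
```
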
614

A model for design and implementation of analytic-temporal databases

Poletto, Alex Sandro Romeu de Souza 07 December 2007 (has links)
This work describes a model for designing and implementing Analytic-Temporal Databases from existing Operational Databases. The main objective of an Analytic-Temporal Database is to store historical data, which in turn serves as a foundation for medium- and long-term decision making. The model is divided into three main activities. The first activity maps the Operational Data Models into a Unified Data Model; this unified model is the basis for the second activity, the generation of the Analytic-Temporal Data Model. For these two activities, a series of steps was elaborated covering the main characteristics to be verified and developed. The third activity provides mechanisms for generating, transporting, and storing the Analytic-Temporal data; for this activity, generic triggers and stored procedures were specified.
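
The generic triggers and stored procedures themselves are not given in the abstract; the following hypothetical sketch shows the data-capture idea with standard SQL issued from Python. A trigger copies each change to an operational table into a timestamped history table, the raw material of an analytic-temporal store. The schema is an illustrative assumption.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (id INTEGER PRIMARY KEY, price REAL);

-- History table: one row per change, with the time the change occurred.
CREATE TABLE product_history (
    id INTEGER,
    price REAL,
    changed_at TEXT DEFAULT (datetime('now'))
);

-- Generic capture trigger: record each update in the temporal store.
CREATE TRIGGER product_audit AFTER UPDATE ON product
BEGIN
    INSERT INTO product_history (id, price) VALUES (NEW.id, NEW.price);
END;
""")

conn.execute("INSERT INTO product (id, price) VALUES (1, 9.99)")
conn.execute("UPDATE product SET price = 12.50 WHERE id = 1")
print(conn.execute("SELECT * FROM product_history").fetchall())
```
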
616

The use of human patient simulators to enhance the clinical decision making of nursing students

Powell-Laney, Sharon Kay 01 January 2010 (has links)
One of the newest teaching modalities in nursing education is the use of human patient simulators (HPS). An HPS simulation scenario is a software-driven vignette in which students interact with a manikin to practice caring for critical patients in a risk-free environment. Although simulators are used extensively in schools of nursing, there is little research examining whether these expensive devices improve the clinical decision-making ability of nursing students. The purpose of this experimental differentiated-treatment study was to assess whether HPS technology improves clinical decision-making ability and clinical performance more than a paper-and-pencil case study. Students (n = 133) from practical nursing programs in Pennsylvania were randomly assigned to one of two groups learning about the care of a patient with a myocardial infarction: an HPS simulation group or a paper-and-pencil case study group. One-tailed independent t-tests were used to compare pre- and post-treatment exam scores and clinical performance scores on the care of a patient with a myocardial infarction. Results indicated a statistically significant learning gain from the use of HPS technology compared to the paper-and-pencil case study (p < 0.001). Students in the HPS simulation group also performed CPR more quickly than students in the case study group (p < 0.001). The research adds a rare control-group study to the literature and confirms previous findings about the effectiveness of HPS technology. Nurse educators can benefit, as the results validate the use of HPS technology in nursing education. Ultimately, patients may benefit from the increased quality and speed of care provided by practical nurses whose training included HPS technology.
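
For readers who want the shape of the analysis, a one-tailed independent t-test can be run as below. The score arrays are placeholders, not the study's data; only the test structure follows the abstract.

```python
from scipy import stats

# Placeholder exam scores -- illustrative values only, not the study's data.
hps_group = [88, 92, 85, 90, 87, 94, 91]
case_study_group = [80, 83, 78, 85, 81, 79, 84]

# One-tailed independent t-test: H1 is that the HPS group scores higher.
t_stat, p_value = stats.ttest_ind(hps_group, case_study_group, alternative="greater")
print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.4f}")
```
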
617

A core signaling component of the notch network + a molecular interaction database accessible through an online VLSIC-like interface

Barsi, Julius Christopher 28 August 2008 (has links)
Abstract not available.
618

Optimization and Execution of Complex Scientific Queries

Fomkin, Ruslan January 2009 (has links)
Large volumes of data produced and shared within scientific communities are analyzed by many researchers to investigate different scientific theories. Currently these analyses are implemented in traditional programming languages such as C++. This is inefficient for research productivity, since such programs are difficult to write, understand, and modify. Furthermore, the programs should scale over large data volumes and analysis complexity, which further complicates code development. This thesis investigates the use of database technologies to implement scientific applications in which data are complex objects describing measurements of independent events, and the analyses are selections of events made by applying conjunctions of complex numerical filters to each object separately. An example of such an application is the analysis of collision events produced by the ATLAS experiment for the presence of Higgs bosons. For efficient implementation of this ATLAS application, a new data stream management system, SQISLE, is developed. In SQISLE, queries are specified over complex objects which are efficiently streamed from sources through the query engine. This streaming approach is compared with the conventional approach of loading events into a database before querying. Since the queries implementing scientific analyses are large and complex, novel techniques are developed for efficient query processing. To obtain efficient plans for such queries, SQISLE implements runtime query optimization strategies which, during query execution, collect runtime statistics for a query, reoptimize the query using the collected statistics, and dynamically switch optimization strategies. The cost-based optimization utilizes a novel cost model for aggregate functions over nested subqueries. To alleviate estimation errors, large queries are decomposed into fragments, conjunctions of subqueries over which runtime statistics are measured. Performance is further improved by query transformation, view materialization, and partial evaluation. ATLAS queries in SQISLE using these query processing techniques perform close to or better than hard-coded C++ implementations of the same analyses. Scientific data are often stored in Grids, which manage both storage and computational resources. The thesis also includes a framework, POQSEC, that utilizes Grid resources to scale scientific queries over large data volumes by parallelizing the queries and shipping the data management system itself, e.g. SQISLE, to Grid computational nodes for parallel query execution.
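
SQISLE's internals are not shown in the abstract, but the runtime-statistics idea admits a small illustration: stream objects through a conjunction of filters, record each predicate's observed cost and selectivity, and periodically reorder so cheap, selective filters run first. The predicate wrapper and reoptimization interval below are hypothetical, not the thesis's actual strategies.

```python
import time

class StatFilter:
    """Wraps a predicate and records its observed cost and selectivity."""
    def __init__(self, name, pred):
        self.name, self.pred = name, pred
        self.calls = self.passes = 0
        self.total_time = 0.0

    def __call__(self, obj):
        t0 = time.perf_counter()
        ok = self.pred(obj)
        self.total_time += time.perf_counter() - t0
        self.calls += 1
        self.passes += ok
        return ok

    def rank(self):
        # Cheap and selective filters first: cost per call scaled by how
        # rarely the filter rejects (higher rejection -> better rank).
        if self.calls == 0:
            return 0.0
        selectivity = self.passes / self.calls
        cost = self.total_time / self.calls
        return cost / max(1e-9, 1.0 - selectivity)

def stream_query(events, filters, reopt_every=1000):
    """Stream events through a conjunction of filters, reoptimizing the
    filter order from runtime statistics every `reopt_every` events."""
    for i, ev in enumerate(events):
        if i and i % reopt_every == 0:
            filters.sort(key=StatFilter.rank)  # runtime reoptimization
        if all(f(ev) for f in filters):
            yield ev

# Example: keep events whose value is positive and whose tag is even.
events = ({"value": i - 500, "tag": i} for i in range(10_000))
filters = [
    StatFilter("positive", lambda e: e["value"] > 0),
    StatFilter("even_tag", lambda e: e["tag"] % 2 == 0),
]
print(sum(1 for _ in stream_query(events, filters)))
```
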
619

Application of active rules to support database integrity constraints and view management

Visavapattamawon, Suwanna 01 January 2001 (has links)
The project demonstrates the enforcement of integrity constraints in both conventional and active database systems. It implements a more complex user-defined constraint, a complicated view, and more detailed database auditing on the active database system.
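
As a hedged sketch of an active rule enforcing a user-defined integrity constraint (the project's actual DBMS and rules are not specified in the abstract), the trigger below aborts any update that would violate the constraint:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL);

-- Active rule: reject any update that would overdraw an account.
CREATE TRIGGER no_overdraft BEFORE UPDATE ON account
WHEN NEW.balance < 0
BEGIN
    SELECT RAISE(ABORT, 'integrity violation: negative balance');
END;
""")

conn.execute("INSERT INTO account VALUES (1, 100.0)")
try:
    conn.execute("UPDATE account SET balance = -50 WHERE id = 1")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```
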
620

Telephone directory web service

Sun, Hua 01 January 2003 (has links)
This project developed a Telephone Directory Web Service (TDWS) to provide convenient and cost-effective access to public telephone directory data.
