About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

Decision Tree Model to Support the Successful Selection of a Database Engine for Novice Database Administrators

Monjaras, Alvaro, Bendezu, Enrique, Raymundo, Carlos 09 May 2019 (has links)
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher. / There are currently several types of databases, each with its own way of manipulating data, and these differences affect transaction performance when handling the stored information. It is very important for companies to manage information quickly, so that no operation is lost to poor database performance; at the same time, they need to operate fast while preserving the integrity of the information. Each database category is designed to serve one or more specific use cases well. In this paper, we study and analyze SQL, NoSQL, and in-memory databases to understand the use cases they fit, and we run performance tests to build a decision tree that can support the choice of database category needed to maintain good performance. The precision of the tests was 96.26% for relational databases, 91.83% for NoSQL databases, and 93.87% for in-memory databases (IMDBs).
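A decision tree of the kind the authors describe can be sketched as a simple rule-based classifier over workload characteristics. The features, thresholds, and example engines below are illustrative assumptions, not the tree actually published in the paper:

```python
# Illustrative sketch of a database-category decision tree.
# Features, branch order, and named engines are hypothetical assumptions,
# not the decision tree published in the paper.

def choose_database_category(
    needs_acid: bool,          # strict transactional integrity required?
    schema_is_stable: bool,    # fixed, relational schema?
    dataset_fits_in_ram: bool, # working set small enough for memory?
    latency_critical: bool,    # sub-millisecond responses needed?
) -> str:
    """Return a database category suited to the described workload."""
    if latency_critical and dataset_fits_in_ram:
        return "in-memory database (e.g. Redis, SAP HANA)"
    if needs_acid and schema_is_stable:
        return "relational SQL database (e.g. PostgreSQL, MySQL)"
    # Flexible schemas and horizontal scaling favour NoSQL stores.
    return "NoSQL database (e.g. MongoDB, Cassandra)"

if __name__ == "__main__":
    print(choose_database_category(
        needs_acid=True, schema_is_stable=True,
        dataset_fits_in_ram=False, latency_critical=False,
    ))  # -> relational SQL database (e.g. PostgreSQL, MySQL)
```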
222

Bucketization Techniques for Encrypted Databases: Quantifying the Impact of Query Distributions

Raybourn, Tracey 06 May 2013 (has links)
No description available.
223

Trust Negotiation for Open Database Access Control

Porter, Paul A. 09 May 2006 (has links) (PDF)
Hippocratic databases are designed to protect the privacy of the individuals whose personal information they contain. This thesis presents a model for providing and enforcing access control in an open Hippocratic database system. Previously unknown individuals can gain access to information in the database by authenticating to roles through trust negotiation. Allowing qualified strangers to access the database increases the usefulness of the system without compromising privacy. This thesis presents the design and implementation of two methods for filtering information from database queries. First, we extend a query modification method for use in an open database system. Second, we introduce a novel filtering method that overcomes some limitations of the query modification method. We also provide results showing that the two methods have comparable performance that is suitable for interactive response time with our sample data set.
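Query modification of the kind the thesis extends typically appends role-derived predicates to a statement before it reaches the database. A minimal sketch, with a hypothetical role-to-predicate table and invented table and column names; the thesis's actual filtering mechanism may differ:

```python
# Minimal sketch of query modification for role-based filtering.
# The role-to-predicate mapping and column names are hypothetical.

ROLE_PREDICATES = {
    "researcher": "consent_research = 1",
    "billing":    "consent_billing = 1",
}

def modify_query(sql: str, role: str) -> str:
    """Append the role's privacy predicate to a SELECT statement."""
    predicate = ROLE_PREDICATES.get(role)
    if predicate is None:
        raise PermissionError(f"role {role!r} has no access predicate")
    # Naive rewrite: assumes a single-table SELECT with no existing
    # WHERE clause. A real implementation would parse the statement.
    return f"{sql} WHERE {predicate}"

print(modify_query("SELECT name, diagnosis FROM patients", "researcher"))
# SELECT name, diagnosis FROM patients WHERE consent_research = 1
```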
224

IMPLEMENTING EFORM-BASED BASELINE RISK DATA EXTRACTION FROM HIGH QUALITY PAPERS FOR THE BRISKET DATABASE AND TOOL

Jacob, Anand 06 1900 (has links)
This thesis was undertaken to investigate whether an eForm-based extractor interface would improve the efficiency of the baseline risk extraction process for BRiskeT (Baseline Risk e-Tool). The BRiskeT database will contain baseline risk data extracted from top prognostic research articles. BRiskeT uses McMaster University's PLUS (Premium Literature Service) database to vet articles thoroughly before their inclusion in BRiskeT. Articles that meet the inclusion criteria are passed into the extractor interface developed for this thesis, called MacPrognosis. MacPrognosis displays these articles to a data extractor, who fills out an electronic form summarizing the baseline risk information in an article. The baseline risk information is then saved to the BRiskeT database, which can be queried according to the end user's needs. One goal in switching from a paper-based extraction system to an eForm-based system was to save time in the extraction process. Another goal for MacPrognosis was to create an eForm that allowed baseline risk information to be extracted from as many disciplines as possible. To test whether MacPrognosis saved extraction time and improved the proportion of articles from which baseline risk data could be extracted, it was used to extract data from a large test set of articles. The results were then compared with those of a previously conducted data extraction pilot using a paper-based system, created during the feasibility analysis for BRiskeT in 2012. The new eForm-based extractor interface not only sped up data extraction but, with minor future alterations, may also increase the proportion of articles from which data can be successfully extracted, compared with a paper-based model of extraction. / Thesis / Master of Science (MSc)
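The pipeline the thesis describes — an eForm capturing baseline-risk fields that are then persisted to a queryable database — could be sketched roughly as below. All field names and the table layout are invented for illustration; they are not BRiskeT's actual schema:

```python
# Rough sketch of persisting eForm baseline-risk extractions.
# Field names and table layout are hypothetical, not BRiskeT's schema.

import sqlite3
from dataclasses import dataclass

@dataclass
class BaselineRiskRecord:
    article_id: str
    condition: str
    outcome: str
    baseline_risk_pct: float  # e.g. event rate in the untreated group
    followup_months: int

def save(record: BaselineRiskRecord, conn: sqlite3.Connection) -> None:
    conn.execute(
        """CREATE TABLE IF NOT EXISTS baseline_risk
           (article_id TEXT, condition TEXT, outcome TEXT,
            baseline_risk_pct REAL, followup_months INTEGER)"""
    )
    conn.execute(
        "INSERT INTO baseline_risk VALUES (?, ?, ?, ?, ?)",
        (record.article_id, record.condition, record.outcome,
         record.baseline_risk_pct, record.followup_months),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
save(BaselineRiskRecord("PMID-0000000", "atrial fibrillation",
                        "stroke", 4.5, 24), conn)
```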
225

A METHOD FOR SELECTIVE UPDATE PROPAGATION IN REPLICATED DATABASES

Jolson, Michael Jacob January 2008 (has links)
No description available.
226

Exploring the impact of varying prompts on the accuracy of database querying with an LLM

Lövlund, Pontus January 2024 (has links)
Large Language Models (LLMs) and their text-to-SQL abilities are a very relevant topic today, as using an LLM as a database interface would give easy access to the data in a database without any prior knowledge of SQL. This thesis studies how best to structure a prompt to increase the accuracy of an LLM on a text-to-SQL task. The experiments used five different prompts and a total of 22 questions about the database, with difficulties ranging from easy to extra hard. The results showed that a simpler, less descriptive prompt performed better on the easy and medium questions, while a more descriptive prompt performed better on the hard and extra-hard questions. The findings did not fully align with the hypothesis that more descriptive prompts would produce the most correct outputs. In conclusion, prompts that contained less "clutter" and were more straightforward were more effective on easy questions, while on harder questions a prompt with a better description and examples had a better impact.
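The study's contrast between a terse prompt and a descriptive one can be illustrated as below. The schema, wording, and example are invented for illustration; they are not the prompts actually tested in the thesis:

```python
# Illustrative text-to-SQL prompt variants, in the spirit of the study.
# The schema and all phrasing are invented, not the thesis's prompts.

SCHEMA = "students(id, name, enrolled_year), courses(id, title, credits)"

def simple_prompt(question: str) -> str:
    # Terse prompt: schema plus question, little else.
    return f"Schema: {SCHEMA}\nWrite one SQL query.\nQuestion: {question}"

def descriptive_prompt(question: str) -> str:
    # Descriptive prompt: adds rules and a worked example, which the
    # study found helped on the hard and extra-hard questions.
    return (
        "You translate natural-language questions into SQLite SQL.\n"
        f"Schema: {SCHEMA}\n"
        "Rules: return a single SELECT; use only the listed columns.\n"
        "Example: 'How many students?' -> SELECT COUNT(*) FROM students;\n"
        f"Question: {question}"
    )

print(simple_prompt("List course titles worth more than 5 credits."))
```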
227

The impact of the data management approach on information systems auditing

Furstenburg, Don Friedrich, 1953- 11 1900 (has links)
To establish the impact of formal data management practices on systems and systems development auditing in the context of a corporate database environment, the most significant aspects of a database environment as well as the concept of data management were researched. It was established that organisations need to introduce a data management function to ensure the availability and integrity of data for the organisation. It was further established that an effective data management function can fulfil a key role in ensuring the integrity of the overall database, and as such it becomes an important general control on which the auditor can rely. The audit of information systems in a database environment requires a more "holistic" audit approach, and as a result the auditor has to expand the scope of the systems audit to include an evaluation of the overall database environment. / Auditing / M. Com (Applied Accounting)
228

RDBMS AND XML FOR TELEMETRY ATTRIBUTES

Steele, Doug 10 1900 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / One problem facing telemetry engineers is translating telemetry attributes easily from one system to another. Engineers must develop a written set of attributes that defines a given telemetry stream and specifies how the telemetry stream is to be transmitted, received, and processed. Telemetry engineers take this document and create the configuration for each product that will be exposed to the telemetry stream (airborne, ground, flight line). This process is time-consuming and prone to error. L-3 Telemetry-West chose to implement a solution using relational databases and eXtensible Markup Language (XML) to solve this and other issues.
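Serializing telemetry attributes held in relational rows as XML, so that airborne, ground, and flight-line tools can consume one shared configuration, might look roughly like the sketch below. The element and attribute names are placeholders, not L-3 Telemetry-West's actual format:

```python
# Sketch of serializing a relational telemetry-attribute row as XML.
# Element and field names are placeholders, not L-3's actual format.

import xml.etree.ElementTree as ET

row = {  # one telemetry measurement as it might come from an RDBMS
    "name": "ENGINE_TEMP_1",
    "bit_rate": "1000000",
    "word_position": "12",
    "data_type": "unsigned",
}

measurement = ET.Element("Measurement", name=row["name"])
for field in ("bit_rate", "word_position", "data_type"):
    child = ET.SubElement(measurement, field)
    child.text = row[field]

# A single XML document per stream lets every product read the same
# configuration instead of each being hand-configured from a paper spec.
print(ET.tostring(measurement, encoding="unicode"))
```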
229

The construction and integration of genetic maps

Collins, Andrew Richard January 1993 (has links)
No description available.
230

DEVELOPMENT OF A REQUIREMENTS REPOSITORY FOR THE ADVANCED DATA ACQUISITION AND PROCESSING SYSTEM (ADAPS)

Rush, David, Hafner, F. W. (Bill), Humphrey, Patsy 10 1900 (has links)
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Standards lead to the creation of requirements listings and test verification matrices that allow developer and acquirer to assure themselves and each other that the requested system is actually what is being constructed. Further, given the intricacy of the software test description, traceability of the test process to the requirement under test is mandated, so that the acceptance test process can be accomplished in an efficient manner. In the view of the logistician, the maintainability of the software and the repair of found faults is primary, while these statistics can be gathered by the producer to ultimately enhance the Capability Maturity Model (CMM) rating of the vendor.
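A requirements-to-test traceability matrix of the kind described reduces to a simple mapping that can be checked mechanically for untraced requirements. The requirement and test IDs below are invented for illustration:

```python
# Tiny sketch of a requirements-to-test traceability check.
# Requirement and test IDs are invented for illustration.

requirements = {"REQ-001", "REQ-002", "REQ-003"}
test_coverage = {
    "TEST-A": {"REQ-001"},
    "TEST-B": {"REQ-001", "REQ-003"},
}

covered = set().union(*test_coverage.values())
untraced = requirements - covered
print("Requirements with no test:", sorted(untraced))  # ['REQ-002']
```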
