21

'n Ondersoek na en bydraes tot navraaghantering en -optimering deur databasisbestuurstelsels / An investigation into and contributions to query handling and optimisation by database management systems / L. Muller

Muller, Leslie January 2006 (has links)
The problems associated with the effective design and use of databases are increasing. The information contained in a database is becoming more complex, and the size of the data is causing storage problems. Technology must continually develop to accommodate this growing need. An investigation was conducted to find effective guidelines that could support queries in general in terms of performance and productivity. Two database management systems were researched to compare the theoretical aspects with the techniques implemented in practice. Microsoft SQL Server and MySQL were chosen as the candidates and both were put under close scrutiny. The systems were researched to uncover the methods employed by each to manage queries. The query optimizer forms the basis of each of these systems and manages the parsing and execution of any query. The methods employed by each system for storing data were researched, as were the ways each system manages table joins, uses indices and chooses optimal execution plans. Adjusted algorithms were introduced for various index structures such as B+ trees and hash indexes. Guidelines were compiled that are independent of the database management systems and help to optimize relational databases. Practical implementations of queries were used to acquire and analyse the execution plan for both MySQL and SQL Server. This plan, along with a few other variables such as execution time, is discussed for each system. A model is used for both database management systems in this experiment. / Thesis (M.Sc. (Computer Science))--North-West University, Potchefstroom Campus, 2007.
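The optimizer behaviour described in this abstract can be illustrated with a small sketch. SQLite is used here purely for convenience (the thesis itself examined MySQL and SQL Server), and the schema is invented: the point is only that the optimizer picks an index seek or a full scan depending on the predicate.

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.execute("CREATE INDEX idx_customer ON orders(customer)")

# Indexed predicate: the plan should use idx_customer.
plan_indexed = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?", ("alice",)
).fetchall()

# Non-indexed predicate: the plan falls back to a table scan.
plan_scan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE total > ?", (100.0,)
).fetchall()

print(plan_indexed)  # mentions idx_customer
print(plan_scan)     # mentions a SCAN of the table
```

MySQL's `EXPLAIN` and SQL Server's estimated execution plans expose the same decision at a much finer level of detail.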
22

The Use of Relation Valued Attributes in Support of Fuzzy Data

Williams, Larry Ritchie, Jr. 03 May 2013 (has links)
In his paper introducing fuzzy sets, L.A. Zadeh describes the difficulty of assigning some real-world objects to a particular class when the notion of class membership is ambiguous. If exact classification is not obvious, most people approximate using intuition and may reach agreement by placing an object in more than one class. Numbers, or 'degrees of membership', within these classes are used to provide an approximation that supports this intuitive process. The result is a 'fuzzy set': a set of any number of ordered pairs, each representing a class and the degree of membership in that class, which provides a formal representation that can be used to model this process. Although the fuzzy approach to reasoning and classification makes sense, it does not comply with two of the basic principles of classical logic: the laws of contradiction and excluded middle. While these play a significant role in logic, it is the violation of these principles that gives fuzzy logic its useful characteristics. The problem with this representation within a database system, however, is that the class and its degree of membership are represented by two separate but indivisible attributes. Further, the representation may contain any number of such pairs of attributes. While the data for class and membership are maintained in individual attributes, neither of these attributes may exist without the other without sacrificing meaning. And maintaining a variable number of such pairs within the representation is problematic. C. J. Date suggested a relation valued attribute (RVA), which can not only encapsulate the attributes associated with the fuzzy set and impose constraints on their use, but also provide a relation that may contain any number of such pairs. The goal of this dissertation is to establish a context in which the relational database model can be extended through the implementation of an RVA to support fuzzy data on an actual system.
This goal represents an opportunity to study through application and observation, the use of fuzzy sets to support imprecise and uncertain data using database queries which appropriately adhere to the relational model. The intent is to create a pathway that may extend the support of database applications that need fuzzy logic and/or fuzzy data.
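The RVA idea described above can be sketched in a few lines. All names here are illustrative, not from the dissertation: each tuple's `membership` attribute is itself a small relation of (class, degree) pairs, which keeps class and degree indivisible while allowing any number of pairs per entity.

```python
from typing import NamedTuple, FrozenSet, Tuple

class Person(NamedTuple):
    name: str
    # the relation-valued attribute: a relation of
    # (class, degree-of-membership) pairs
    membership: FrozenSet[Tuple[str, float]]

people = [
    Person("Ann", frozenset({("tall", 0.8), ("athletic", 0.6)})),
    Person("Bob", frozenset({("tall", 0.3)})),
]

def in_class(rel, cls, threshold):
    """Select tuples whose RVA contains `cls` with degree >= threshold."""
    return [p for p in rel
            if any(c == cls and d >= threshold for c, d in p.membership)]

print([p.name for p in in_class(people, "tall", 0.5)])  # ['Ann']
```

The same nesting is what an RVA-aware relational system would express as a relation-typed column with its own constraints.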
23

Attribute-Level Versioning: A Relational Mechanism for Version Storage and Retrieval

Bell, Charles Andrew 01 January 2005 (has links)
Data analysts today have at their disposal a seemingly endless supply of data repositories and, hence, datasets from which to draw. New datasets become available daily, making the choice of which dataset to use difficult. Furthermore, traditional data analysis has been conducted using structured data repositories such as relational database management systems (RDBMS). These systems, by their nature and design, prohibit duplication in indexed collections, forcing analysts to choose one value for each of the available attributes of an item in the collection. Often analysts discover two or more datasets with information about the same entity. When combining this data and transforming it into a form that is usable in an RDBMS, analysts are forced to deconflict the collisions and choose a single value for each duplicated attribute containing differing values. This deconfliction is the source of a considerable amount of guesswork and speculation on the part of the analyst in the absence of professional intuition. One must consider what is lost by discarding those alternative values. Are there relationships between the conflicting datasets that have meaning? Is each dataset presenting a different and valid view of the entity, or are the alternate values erroneous? If so, which values are erroneous? Is there historical significance in the variances? The analysis of modern datasets requires the use of specialized algorithms and storage and retrieval mechanisms to identify, deconflict, and assimilate variances of attributes for each entity encountered. These variances, or versions of attribute values, contribute meaning to the evolution and analysis of the entity and its relationship to other entities. A new, distinct storage and retrieval mechanism will enable analysts to efficiently store, analyze, and retrieve attribute versions without unnecessary complexity or additional alterations of the original or derived dataset schemas.
This paper presents technologies and innovations that assist data analysts in discovering meaning within their data and preserving all of the original data for every entity in the RDBMS.
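The core idea, keeping every observed value of each attribute as a version rather than forcing a single deconflicted value per entity, can be sketched as follows. The class and data are invented for illustration and do not reflect the thesis's actual storage mechanism.

```python
from collections import defaultdict

class VersionedStore:
    """Toy attribute-level version store: entity -> attribute -> versions."""

    def __init__(self):
        # entity -> attribute -> list of (source, value) versions
        self._data = defaultdict(lambda: defaultdict(list))

    def add(self, entity, attribute, value, source):
        self._data[entity][attribute].append((source, value))

    def versions(self, entity, attribute):
        return list(self._data[entity][attribute])

store = VersionedStore()
# Two datasets disagree on the same entity's attribute; keep both versions
# instead of guessing which one is correct.
store.add("ACME Corp", "headquarters", "New York", source="dataset_a")
store.add("ACME Corp", "headquarters", "NYC", source="dataset_b")
print(store.versions("ACME Corp", "headquarters"))
```

A later analysis step can then decide whether the variants are synonyms, errors, or historically meaningful changes, with the original evidence intact.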
24

Fuzzy databáze založená na E-R schématu / Fuzzy database based on an E-R schema

Plachý, Milan January 2012 (has links)
This text is especially intended for those who are interested in fuzzy logic and its application in relational databases. It is mainly focused on the concept of a fuzzified relational database and the implementation of such a database. The text consists of two parts: the theoretical aspects of fuzzification and an implementation part. The selected extension is based on a fuzzy E-R model so that the requirements of the real world can be better met. The paper also describes existing solutions at different levels of fuzzification. Part of the work is the design and implementation of simple software for querying a fuzzified relational database. This work should also serve as a guide for the design and implementation of a fuzzy database.
25

Aktualizace XML dat / Updating XML data

Mikuš, Tomáš January 2012 (has links)
Updating XML data is a very broad area that must solve a number of difficult problems, from designing a language with sufficient expressive power to building an XML data repository able to apply the changes, and there are only a few ways to deal with them. From this perspective, this work is dedicated solely to the XQuery language and its extension for updates, for which the W3C candidate recommendation was published only recently. A further specialisation of this work is its focus on XML data stored in an object-relational database, where the repository enforces the validity of documents against a schema described in XML Schema. This requirement, combined with the possibility of updating data in the repository, borders on contradictory requirements. In this thesis a language based on XQuery is designed, the evaluation of that language's update queries over the store is designed and implemented, and the store itself is described and implemented in an object-relational database.
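To make the flavour of an update primitive concrete, here is a sketch in the spirit of XQuery Update's "replace value of node", applied with the Python standard library. The document and paths are invented, and this does not model the thesis's object-relational store or its schema validation.

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("<book><title>Old</title><year>2010</year></book>")

def replace_value(root, path, new_text):
    """Analogue of XQuery Update's 'replace value of node' primitive."""
    node = root.find(path)
    if node is None:
        raise KeyError(path)
    node.text = new_text

replace_value(doc, "title", "New")
print(ET.tostring(doc, encoding="unicode"))
```

In a schema-validated store, each such primitive would additionally be checked against the XML Schema before the change is committed, which is exactly where the contradictory requirements mentioned above arise.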
26

Método de filtragem fuzzy para avaliação de bases de dados relacionais / Fuzzy filtering method for evaluation of relational databases

Penteado, Fernanda Bessani Leite 02 October 2009 (has links)
Often, the imprecise and vague information commonly found in the modeling of real-world problems is not handled appropriately by conventional database queries. Alternatively, fuzzy set theory has been considered a very promising tool for treating information regarded as imprecise and, in certain cases, even ambiguous. This work uses the standard SQL language and fuzzy set theory to develop a fuzzy query method for relational databases. Case studies on the applicability of the developed method are presented in order to show its potential in relation to traditional query methods.
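The general approach, standard SQL for retrieval plus a fuzzy membership function for filtering, can be sketched as below. The membership function, its parameters, and the schema are invented for illustration; the paper's own method details differ.

```python
import sqlite3

def young(age, a=25, b=40):
    """Trapezoid-style membership: 1 below a, linear between a and b, 0 above b."""
    if age <= a:
        return 1.0
    if age >= b:
        return 0.0
    return (b - age) / (b - a)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [("Ana", 22), ("Bia", 30), ("Caio", 45)])

# Standard SQL fetch, then fuzzy filtering with a cut level of 0.5.
rows = conn.execute("SELECT name, age FROM person").fetchall()
result = [(n, round(young(a), 2)) for n, a in rows if young(a) >= 0.5]
print(result)  # [('Ana', 1.0), ('Bia', 0.67)]
```

A crisp query (`WHERE age < 30`, say) would discard Bia entirely; the fuzzy version keeps her with a degree, which is the gain over traditional query methods.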
27

Jämförelse av NoSQL-databas och SQL-baserad relationsdatabas : En förklarande studie för när NoSQL kan vara att föredra framför en relationsdatabas / Comparison of NoSQL database and SQL relational database

Hedman, Jennifer, Holmberg, Mikael January 2019 (has links)
With the explosive development of the mobile world, web applications and Big Data, new requirements for the storage capacity and speed of database systems have arisen. The traditional relational database that has long dominated the market has received competition because of its lack of speed and scalability. NoSQL is a collective name for databases that are not based on the traditional relational model. NoSQL databases are designed to expand their storage capacity easily while delivering high performance. NoSQL databases have been around for decades, but the need for them is relatively new. Our partner expressed a desire to know what differences exist between NoSQL and the traditional relational database. To clarify these differences, we have answered the following questions in this work: When can a NoSQL database be preferred to a relational database? What are the differences in database performance? To answer these questions, a literature study was conducted together with experiments in which we tested the performance differences between the selected databases. Performance tests were performed with the benchmarking tool Yahoo Cloud Serving Benchmark to verify or falsify the claimed performance advantage of the NoSQL databases. The hypotheses were falsified for both NoSQL databases. The results showed that the relational database performed better than the cloud-based NoSQL databases, but also that the relational database's performance deteriorated as the load increased. The results of the experiments, combined with the literature study, together answer our questions. The conclusion is that no database type performs better than another in general; it depends on the requirements of the data to be stored. From these requirements, analyses can be made to draw conclusions about which kind of database is preferable.
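The Yahoo Cloud Serving Benchmark mentioned above drives a configurable mix of reads and updates against a key-value-shaped table and records latencies. A toy version of that workload idea, against SQLite rather than the databases the study actually benchmarked, might look like this:

```python
import random
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usertable (key INTEGER PRIMARY KEY, field TEXT)")
conn.executemany("INSERT INTO usertable VALUES (?, ?)",
                 [(i, f"value{i}") for i in range(1000)])

def run_workload(ops=500, read_ratio=0.95):
    """Mixed read/update workload; returns mean latency in seconds per op."""
    start = time.perf_counter()
    for _ in range(ops):
        key = random.randrange(1000)
        if random.random() < read_ratio:
            conn.execute("SELECT field FROM usertable WHERE key = ?",
                         (key,)).fetchone()
        else:
            conn.execute("UPDATE usertable SET field = ? WHERE key = ?",
                         ("updated", key))
    return (time.perf_counter() - start) / ops

latency = run_workload()
print(f"mean latency: {latency * 1e6:.1f} us/op")
```

YCSB's standard workloads vary exactly these knobs (operation count, read/update ratio, key distribution) while the target database and client thread count change, which is what makes cross-database comparisons like the one in this thesis possible.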
28

California State University, San Bernardino Chatbot

Desai, Krutarth 01 December 2018 (has links)
Nowadays, chatbot development has been moving from artificial-intelligence labs to desktop and mobile domain experts. In the fast-growing technology world, most smartphone users spend much of their time in messaging apps such as Facebook Messenger. A chatbot is a computer program that uses messaging channels to interact with users in natural language. A chatbot uses appropriate mapping techniques to transform user inputs into queries against a relational database, fetches the data by calling an existing API, and then sends an appropriate response to the user to drive its chats. Drawbacks of existing approaches include the need to learn and use chatbot-specific languages such as AIML (Artificial Intelligence Markup Language), high botmaster interference, and the use of immature technology. In this project, a Facebook Messenger-based chatbot is proposed to provide a domain-independent, easy-to-use, smart, scalable, dynamic and conversational agent for obtaining information about CSUSB. It has unique functionality to interpret user interactions expressed in natural language, and flawless support for various application domains. This provides ample unique capabilities that will be evaluated in future phases of this project.
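The mapping step described above, from a natural-language utterance to a relational lookup, can be sketched with a simple keyword-overlap matcher. The intents, phrases, and schema are invented for illustration; the project itself works through the Facebook Messenger platform, which this sketch does not touch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE faq (intent TEXT PRIMARY KEY, answer TEXT)")
conn.executemany("INSERT INTO faq VALUES (?, ?)", [
    ("library_hours", "The library is open 8am-10pm on weekdays."),
    ("parking", "Student parking permits are sold at Parking Services."),
])

# Hypothetical keyword sets per intent.
KEYWORDS = {
    "library_hours": {"library", "hours", "open"},
    "parking": {"parking", "permit", "car"},
}

def reply(utterance):
    words = set(utterance.lower().split())
    # Pick the intent sharing the most keywords with the utterance.
    intent = max(KEYWORDS, key=lambda i: len(KEYWORDS[i] & words))
    if not KEYWORDS[intent] & words:
        return "Sorry, I don't know about that yet."
    row = conn.execute("SELECT answer FROM faq WHERE intent = ?",
                       (intent,)).fetchone()
    return row[0]

print(reply("when is the library open"))
```

A production bot would replace the keyword sets with a trained intent classifier, but the pipeline shape (utterance, intent, database query, response) stays the same.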
29

Management of Time Series Data

Matus Castillejos, Abel, n/a January 2006 (has links)
Every day large volumes of data are collected in the form of time series. Time series are collections of events or observations, predominantly numeric in nature, sequentially recorded on a regular or irregular time basis. Time series are becoming increasingly important in nearly every organisation and industry, including banking, finance, telecommunication, and transportation. Banking institutions, for instance, rely on the analysis of time series for forecasting economic indices, elaborating financial market models, and registering international trade operations. More and more time series are being used in this type of investigation and becoming a valuable resource in today's organisations. This thesis investigates and proposes solutions to some current and important issues in time series data management (TSDM), using Design Science Research Methodology. The thesis presents new models for mapping time series data to relational databases which optimise the use of disk space, can handle different time granularities, status attributes, and facilitate time series data manipulation in a commercial Relational Database Management System (RDBMS). These new models provide a good solution for current time series database applications with RDBMS and are tested with a case study and prototype with financial time series information. Also included is a temporal data model for illustrating time series data lifetime behaviour based on a new set of time dimensions (confidentiality, definitiveness, validity, and maturity times) specially targeted to manage time series data which are introduced to correctly represent the different status of time series data in a timeline. The proposed temporal data model gives a clear and accurate picture of the time series data lifecycle. Formal definitions of these time series dimensions are also presented. In addition, a time series grouping mechanism in an extensible commercial relational database system is defined, illustrated, and justified.
The extension consists of a new data type and its corresponding rich set of routines that support modelling and operating on time series information at a higher level of abstraction. It extends the capability of the database server to organise and manipulate time series in groups. Thus, this thesis presents a new data type, referred to as GroupTimeSeries, along with its corresponding architecture and support functions and operations. Implementation options for the GroupTimeSeries data type in relational technologies are also presented. Finally, a framework for TSDM expressive enough to capture the main requirements of time series applications and the management of their data is defined. The framework aims at providing initial domain know-how and requirements for time series data management, avoiding the impracticality of designing a TSDM system on paper from scratch. Many aspects of time series applications, including the way time series data are organised at the conceptual level, are addressed. The central abstractions of the proposed domain-specific framework are the notions of business sections, groups of time series, and the time series itself. The framework integrates comprehensive specifications regarding structural and functional aspects of time series data management. A formal framework specification using conceptual graphs is also explored.
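The grouping abstraction can be sketched as a container of named series that supports group-level operations, such as taking a cross-section of every series at one timestamp. The class name echoes the thesis's GroupTimeSeries, but its interface and the data are invented here; the actual data type lives inside the database server.

```python
from datetime import date

class GroupTimeSeries:
    """Toy group-of-time-series container (interface is illustrative)."""

    def __init__(self):
        self._series = {}  # name -> sorted list of (timestamp, value)

    def add_series(self, name, points):
        self._series[name] = sorted(points)

    def at(self, ts):
        """Cross-section of the whole group at a single timestamp."""
        return {name: dict(pts).get(ts) for name, pts in self._series.items()}

g = GroupTimeSeries()
g.add_series("EUR/USD", [(date(2006, 1, 2), 1.20), (date(2006, 1, 3), 1.21)])
g.add_series("GBP/USD", [(date(2006, 1, 2), 1.77)])
print(g.at(date(2006, 1, 2)))
```

Pushing such a type into the server, as the thesis proposes, means group operations like `at` run next to the data instead of in client code.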
30

Computational Verification of Published Human Mutations.

Kamanu, Frederick Kinyua. January 2008 (has links)
The completion of the Human Genome Project, a remarkable feat by any measure, has provided over three billion bases of reference nucleotides for comparative studies. The next, and perhaps more challenging, step is to analyse sequence variation and relate this information to important phenotypes. Most human sequence variations are characterized by structural complexity and are, hence, associated with abnormal functional dynamics. This thesis covers the assembly of a computational platform for verifying these variations, based on accurate, published, experimental data.
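One elementary verification step such a platform needs is checking that a published point mutation is consistent with a reference sequence. The notation, sequence, and function below are invented for illustration and are not the thesis's actual pipeline.

```python
import re

def verify_mutation(reference, mutation):
    """Return True if a mutation written like 'A123T' matches `reference`
    (1-based position) and proposes a real amino-acid change."""
    m = re.fullmatch(r"([A-Z])(\d+)([A-Z])", mutation)
    if not m:
        return False
    wild, pos, variant = m.group(1), int(m.group(2)), m.group(3)
    if pos < 1 or pos > len(reference):
        return False
    # The published wild-type residue must match the reference, and the
    # variant must actually differ from it.
    return reference[pos - 1] == wild and wild != variant

ref = "MKTAYIAKQR"  # toy reference sequence
print(verify_mutation(ref, "T3A"))  # True: position 3 of ref is T
print(verify_mutation(ref, "G3A"))  # False: reference has T at 3, not G
```

Mutations that fail this check are exactly the kind of inconsistency between published reports and reference data that computational verification is meant to surface.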
