  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world.
51

Alchemy -- Transmuting base specifications into implementations

Yoo, Daniel. January 2008 (has links)
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: program synthesis; relational specification; Alloy. Includes bibliographical references (leaves 64-66).
52

A privacy protection model to support personal privacy in relational databases.

Oberholzer, Hendrik Johannes 02 June 2008 (has links)
The individual of today incessantly insists on more protection of his/her personal privacy than a few years ago. During the last few years, rapid technological advances, especially in the field of information technology, directed most attention and energy to the privacy protection of the Internet user. Research was done and is still being done covering a vast area to protect the privacy of transactions performed on the Internet. However, it was established that almost no research has been done on the protection of the privacy of personal data that are stored in tables of a relational database. Until now the individual had no say in the way his/her personal data might have been used, indicating who may access the data or who may not. The individual also had no way to indicate the level of sensitivity with regard to the use of his/her personal data or exactly what he/she consented to.

Therefore, the primary aim of this study was to develop a model to protect the personal privacy of the individual in relational databases in such a way that the individual will be able to specify how sensitive he/she regards the privacy of his/her data. This aim culminated in the development of the Hierarchical Privacy-Sensitive Filtering (HPSF) model. A secondary aim was to test the model by implementing the model into query languages and as such to determine the potential of query languages to support the implementation of the HPSF model. Oracle SQL served as an example for text or command-based query languages, while Oracle SQL*Forms served as an example of a graphical user interface. Eventually, the study showed that SQL could support implementation of the model only partially, but that SQL*Forms was able to support implementation of the model completely.

An overview of the research approach employed to realise the objectives of the study: At first, the concepts of privacy were studied to narrow down the field of study to personal privacy and the definition thereof. Problems that relate to the violation or abuse of the individual’s personal privacy were researched. Secondly, the right to privacy was researched on a national and international level. Based on the guidelines set by organisations like the Organisation for Economic Co-operation and Development (OECD) and the Council of Europe (COE), requirements were determined to protect the personal privacy of the individual. Thirdly, existing privacy protection mechanisms like privacy administration, self-regulation, and automated regulation were studied to see what mechanisms are currently available and how they function in the protection of privacy. Probably the most sensitive data about an individual is his/her medical data. Therefore, to conclude the literature study, the privacy of electronic medical records and the mechanisms proposed to protect the personal privacy of patients were investigated. The protection of the personal privacy of patients seemed to serve as the best example to use in the development of a privacy model. Eventually, the Hierarchical Privacy-Sensitive Filtering model was developed and introduced, and the potential of Oracle SQL and Oracle SQL*Forms to implement the model was investigated. The conclusion at the end of the dissertation summarises the study and suggests further research topics. / Prof. M.S. Olivier
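The dissertation develops the HPSF model itself; the snippet below (Python with SQLite) is only a rough sketch of the underlying idea, in which a data subject attaches a sensitivity level to each personal attribute and query results are filtered against the requesting user's clearance. The schema, the 0-3 level scheme, and the data are illustrative assumptions, not the model as defined by the author.

    import sqlite3

    # Illustrative sketch only: per-attribute sensitivity levels chosen by the
    # data subject, filtered against the requester's clearance. The schema and
    # level scheme are assumptions, not the HPSF model itself.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE person_attribute (
            person_id   INTEGER,
            attr_name   TEXT,
            attr_value  TEXT,
            sensitivity INTEGER  -- 0 = public ... 3 = highly sensitive, set by the data subject
        );
        INSERT INTO person_attribute VALUES
            (1, 'name',      'A. Jones',     0),
            (1, 'phone',     '555-0100',     2),
            (1, 'diagnosis', 'hypertension', 3);
    """)

    def query_with_clearance(clearance):
        """Return only attributes whose sensitivity does not exceed the clearance."""
        return conn.execute(
            "SELECT attr_name, attr_value FROM person_attribute "
            "WHERE person_id = 1 AND sensitivity <= ?",
            (clearance,),
        ).fetchall()

    print(query_with_clearance(1))  # only the public attribute
    print(query_with_clearance(3))  # all three attributes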
53

Hierarchical Alignment of Tuples in Databases for Fast Join Processing

Alqahatni, Zakyah 01 December 2019 (has links)
In a relational database, data is distributed across interconnected relations. Related tuples in distinct relations are combined by matching the values of a join attribute, a process called the equi-join operation. In contrast to standard attempts to design efficient join algorithms, this thesis proposes an approach that aligns tuples in relations so that joins can be performed readily and efficiently. We position tuples in their respective relations, a process called relation alignment, so that matching join attribute values occupy corresponding positions. We also address how to align relations and how to perform joins on aligned relations. Experiments were conducted to measure and analyze the efficiency of the proposed approach compared to standard MySQL joins.
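The thesis defines relation alignment precisely; the toy Python sketch below, with made-up relations, only conveys the intuition: both relations are grouped by the join attribute ahead of time so that matching tuples sit in corresponding buckets and the join itself needs no searching. The point of the approach is that this alignment is maintained persistently before queries arrive, rather than being rebuilt at join time.

    from collections import defaultdict

    # Toy relations R(a, x) and S(a, y), to be joined on the first attribute 'a'.
    R = [(2, "r2"), (1, "r1"), (3, "r3"), (1, "r1b")]
    S = [(1, "s1"), (3, "s3"), (2, "s2")]

    def align(relation):
        """Group tuples by join-attribute value. In an aligned layout this grouping
        is maintained ahead of time, so it is not paid for at query time."""
        buckets = defaultdict(list)
        for t in relation:
            buckets[t[0]].append(t)
        return buckets

    def aligned_equi_join(r_aligned, s_aligned):
        """Pair up tuples sitting in corresponding buckets of two aligned relations."""
        for key in r_aligned.keys() & s_aligned.keys():
            for rt in r_aligned[key]:
                for st in s_aligned[key]:
                    yield rt + st[1:]  # concatenate, keeping one copy of the join key

    print(sorted(aligned_equi_join(align(R), align(S))))
    # [(1, 'r1', 's1'), (1, 'r1b', 's1'), (2, 'r2', 's2'), (3, 'r3', 's3')]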
54

Plug-and-Play Web Services

Jain, Arihant 01 December 2019 (has links)
The goal of this research is to make it easier to design and create web services for relational databases. A web service is a software service for providing data over computer networks. Web services provide data endpoints for many web applications. We adopt a plug-and-play approach for web service creation whereby a designer constructs a “plug,” which is a simple specification of the output produced by the service. If the plug can be “played” on the database then the web service is generated. Our plug-and-play approach has three advantages. First, a plug is portable. You can take the plug to any data source and generate a web service. Second, a plug-and-play service is more reliable. The web service generation checks the database to determine if the service can be safely and correctly generated. Third, plug-and-play web services are easier to code for complex data since a service designer can write a simple plug, abstracting away the data’s real complexity. We describe how to build a system for plug-and-play web services, and experimentally evaluate the system. The software produced by this research will make life easier for web service designers.
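The plug language itself is defined in the thesis; the Python sketch below only conveys the flavour of the idea under assumed names: a plug lists the fields a service should expose, it can be "played" only if the database schema actually provides those fields, and a service query is generated only then. The dictionary keys, table, and validation rule are illustrative assumptions.

    import sqlite3

    # Hypothetical flavour of a "plug": a declarative description of the output a
    # web service should produce. Not the plug language from the thesis.
    plug = {
        "service": "list_orders",
        "table": "orders",
        "fields": ["order_id", "customer", "total"],
    }

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id INTEGER, customer TEXT, total REAL)")

    def can_play(plug, conn):
        """'Play' the plug on the database: succeed only if every requested field
        exists in the target table, so the generated service is known to be safe."""
        cols = {row[1] for row in conn.execute(f"PRAGMA table_info({plug['table']})")}
        return set(plug["fields"]) <= cols

    if can_play(plug, conn):
        query = f"SELECT {', '.join(plug['fields'])} FROM {plug['table']}"
        print("Generated service query:", query)
    else:
        print("Plug cannot be played on this database")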
55

Performance of the relational and non-relational databases

Alkhalaf, Ahmed, Al-Zubeidi, Hasan January 2023 (has links)
There are many types of databases, but the most common are relational and non-relational. These databases have different structures, which affects their performance. Many studies examine the differences between relational and non-relational databases and compare their performance, but no single study collects the results from different sources and makes them available to software professionals so they can choose a suitable database with little effort. This thesis examines and analyzes several studies investigating the performance of relational and non-relational databases. The analysis covers the performance of typical database operations (insert, delete, update, and select) on different numbers of records. The results show that the non-relational databases perform better regardless of the number of records in the database, although there are some cases where the relational databases perform better. The findings are based on an analysis of seven studies, encompassing the MSSQL, MySQL, PostgreSQL, Oracle, and MongoDB databases.
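As a very rough illustration of the kind of measurement the reviewed studies perform, the sketch below times inserts and a select at increasing record counts. It uses SQLite purely so the example is self-contained; the studies analysed here benchmark MSSQL, MySQL, PostgreSQL, Oracle, and MongoDB, and their results cannot be reproduced this way.

    import sqlite3
    import time

    # Self-contained illustration of timing inserts and a select at growing
    # record counts. SQLite is used only so the sketch runs anywhere; it is not
    # one of the systems compared in the reviewed studies.
    def time_operations(n_records):
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")

        start = time.perf_counter()
        conn.executemany("INSERT INTO t (payload) VALUES (?)",
                         (("x" * 100,) for _ in range(n_records)))
        conn.commit()
        insert_s = time.perf_counter() - start

        start = time.perf_counter()
        conn.execute("SELECT COUNT(*) FROM t").fetchone()
        select_s = time.perf_counter() - start
        return insert_s, select_s

    for n in (1_000, 10_000, 100_000):
        ins, sel = time_operations(n)
        print(f"{n:>7} records: insert {ins:.4f}s, select {sel:.4f}s")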
56

NoSQL database considerations and implications for businesses

Pretorius, Dawid Johannes 12 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2013. / NoSQL databases, a new way of storing and retrieving data, can provide businesses with many benefits, although they also pose many risks. The lack of knowledge about NoSQL databases among business decision-makers can lead to unaddressed risks and missed opportunities. This study, by means of an extensive literature review, identifies the key drivers, characteristics and benefits of a NoSQL database, thereby providing a clear understanding of the subject. The business imperatives related to NoSQL databases are also identified and discussed. This can help businesses to determine whether a NoSQL database might be a viable solution, and to align business and information technology (IT) objectives. The key strategic and operational IT risks are also identified and discussed, based on the literature review. This can help businesses to ensure that the risks related to the use of NoSQL databases are appropriately addressed. Lastly, the identified risks were mapped to the processes of COBIT (Control Objectives for Information and Related Technology) to inform a business of the highest-risk areas and the associated focus areas.
57

Using ontologies to semantify a Web information portal

Chimamiwa, Gibson 01 1900 (has links)
Ontology, an explicit specification of a shared conceptualisation, captures knowledge about a specific domain of interest. The realisation of ontologies has revolutionised the way data stored in relational databases is accessed and manipulated, through ontology and database integration. When integrating ontologies with relational databases, several choices exist regarding aspects such as database implementation, ontology language features, and mappings. However, it is unclear which aspects are relevant and when they affect specific choices. This makes it difficult to decide which choices to make and what their implications are for ontology and database integration solutions. Within this study, a decision-making tool is developed that guides users when selecting a technology and developing a solution that integrates ontologies with relational databases. A theory analysis is conducted to determine the current status of technologies that integrate ontologies with databases. Furthermore, a theoretical study is conducted to determine the important features affecting ontology and database integration, the relevant ontology language features, and the choices that one needs to make given each technology. Based on these building blocks, an artifact-building approach is used to develop the decision-making tool, which is verified through a proof of concept to demonstrate its usefulness. Key terms: Ontology, semantics, relational database, ontology and database integration, mapping, Web information portal. / Information Science / M. Sc. (Information Systems)
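The dissertation's contribution is the decision-making tool itself; the small Python sketch below only illustrates the kind of mapping those integration choices concern, exposing relational rows as subject-predicate-object triples in the spirit of a direct mapping. The table, namespace, and URI scheme are made up for the example.

    import sqlite3

    # Illustrative direct-mapping flavour: each row becomes triples whose subject
    # is derived from the table name and its key. Namespace and vocabulary are
    # assumptions for the example only.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT);
        INSERT INTO employee VALUES (7, 'N. Moyo', 'Research');
    """)

    BASE = "http://example.org/"

    def rows_to_triples(table):
        cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
        for row in conn.execute(f"SELECT * FROM {table}"):
            subject = f"{BASE}{table}/{row[0]}"  # row identified by its key column
            for col, value in zip(cols[1:], row[1:]):
                yield (subject, f"{BASE}{table}#{col}", value)

    for triple in rows_to_triples("employee"):
        print(triple)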
58

Database Forensics in the Service of Information Accountability

Pavlou, Kyriacos Eleftheriou January 2012 (has links)
Regulations and societal expectations have recently emphasized the need to mediate access to valuable databases, even by insiders. At one end of a spectrum is the approach of restricting access to information; at the other is information accountability. The focus of this work is on effecting information accountability of data stored in relational databases. One way to ensure appropriate use and thus end-to-end accountability of such information is through continuous assurance technology, via tamper detection in databases built upon cryptographic hashing. We show how to achieve information accountability by developing and refining the necessary approaches and ideas to support accountability in high-performance databases. These concepts include the design of a reference architecture for information accountability and several of its variants, the development of a sequence of successively more sophisticated forensic analysis algorithms and their forensic cost model, and a systematic formulation of forensic analysis for determining when the tampering occurred and what data were tampered with. We derive a lower bound for the forensic cost and prove that some of the algorithms are optimal under certain circumstances. We introduce a comprehensive taxonomy of the types of possible corruption events, along with an associated forensic analysis protocol that consolidates all extant forensic algorithms and the corresponding type(s) of corruption events they detect. Finally, we show how our information accountability solution can be used for databases residing in the cloud. In order to evaluate our ideas we design and implement an integrated tamper detection and forensic analysis system named DRAGOON. This work shows that information accountability is a viable alternative to information restriction for ensuring the correct storage, use, and maintenance of high-performance relational databases.
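DRAGOON and the forensic algorithms are developed in the dissertation itself; the snippet below is only a minimal sketch of the underlying hashing idea, with assumed transaction strings: successive records are chained with a cryptographic hash so that altering any stored record breaks every later link, and a later validation pass can bracket when the tampering occurred and what was affected.

    import hashlib

    # Minimal sketch of hash chaining for tamper detection (not DRAGOON itself).
    def chain(records):
        digest = b"\x00" * 32  # assumed genesis value
        links = []
        for rec in records:
            digest = hashlib.sha256(digest + rec.encode()).digest()
            links.append(digest.hex())
        return links

    audit_log = ["tx1: INSERT ...", "tx2: UPDATE ...", "tx3: DELETE ..."]
    notarized = chain(audit_log)  # these hashes would be stored or notarized off-site

    # A later forensic pass recomputes the chain over the current database state;
    # the first mismatching link localises the corruption event.
    tampered = ["tx1: INSERT ...", "tx2: UPDATE salary ...", "tx3: DELETE ..."]
    recomputed = chain(tampered)
    first_bad = next(i for i, (a, b) in enumerate(zip(notarized, recomputed)) if a != b)
    print(f"Corruption detected at transaction index {first_bad}")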
59

Quantifying Performance Costs of Database Fine-Grained Access Control

Kumka, David Harold 01 January 2012 (has links)
Fine-grained access control is a conceptual approach to addressing database security requirements. In relational database management systems, fine-grained access control refers to access restrictions enforced at the row, column, or cell level. While a number of commercial implementations of database fine-grained access control are available, there are presently no generalized approaches to implementing fine-grained access control for relational database management systems. Fine-grained access control is potentially a good solution for database professionals and system architects charged with designing database applications that implement granular security or privacy protection features. However, in the oral tradition of the database community, fine-grained access control is spoken of as imposing significant performance penalties, and is therefore best avoided. Regardless, there are current and emerging social, legal, and economic forces that mandate the need for efficient fine-grained access control in relational database management systems. In the study undertaken, the author was able to quantify the performance costs associated with four common implementations of fine-grained access control for relational database management systems. Security benchmarking was employed as the methodology to quantify performance costs. Synthetic data from the TPC-W benchmark as well as representative data from a real-world application were utilized in the benchmarking process. A simple graph-based performance model for Fine-grained Access Control Evaluation (FACE) was developed from benchmark data collected during the study. The FACE model is intended for use in predicting throughput and response times for relational database management systems that implement fine-grained access control using one of the common fine-grained access control mechanisms: authorization views, the Hippocratic Database, label-based access control, and transparent query rewrite. The author also addresses the issue of scalability for the fine-grained access control mechanisms that were evaluated in the study.
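As a rough illustration of two of the four mechanisms named above, the sketch below shows an authorization view that bakes a row-level restriction into the schema, and a transparent query rewrite that appends the restriction to an arbitrary query before execution. The schema, predicate, and user handling are assumptions for the example, not the benchmarked systems.

    import sqlite3

    # Illustrative only: schema, predicate, and user model are assumed; real
    # systems tie the restriction to the database session user.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders (order_id INTEGER, owner TEXT, total REAL);
        INSERT INTO orders VALUES (1, 'alice', 10.0), (2, 'bob', 20.0);

        -- Authorization view: exposes only the rows one user may see.
        CREATE VIEW alice_orders AS
            SELECT order_id, total FROM orders WHERE owner = 'alice';
    """)
    print(conn.execute("SELECT * FROM alice_orders").fetchall())  # [(1, 10.0)]

    def rewrite(query):
        """Transparent query rewrite: wrap the query and append a row-level predicate."""
        return f"SELECT * FROM ({query}) WHERE owner = ?"

    base_query = "SELECT order_id, owner, total FROM orders"
    print(conn.execute(rewrite(base_query), ("alice",)).fetchall())  # only alice's rows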
60

Control-Flow Patterns in Relational Databases

Braghetto, Kelly Rosa 23 June 2006 (has links)
The representation and execution of business processes have generated important challenges in Computer Science. One of these challenges is choosing the best formal foundation for specifying control-flow patterns. Some workflow languages advocate Petri nets or process algebras as that formal foundation. The use of Petri nets to specify classic workflows is a well-known approach; however, recent research has been promoting modern process algebra extensions as an alternative formal foundation for representing workflows. The first contribution of this work is the definition of the Navigation Plan Definition Language (NPDL), implemented as an extension of the SQL language. NPDL is an alternative for representing workflows that uses process algebra as its formal foundation. It provides an explicit separation between the specification environment and the execution environment of a workflow. This separation enables the reuse of business steps and the use of process algebra properties not only in process modeling but also in controlling process execution. After a workflow is specified in NPDL, the execution of its steps is controlled by the NavigationPlanTool, which is the second contribution of this research.
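NPDL's actual syntax is an SQL extension defined in the thesis and is not reproduced here; the toy Python sketch below only illustrates the kind of compositionality a process algebra gives a workflow, combining reusable business steps with sequence and exclusive-choice operators. The step names and the guard are made up for the example.

    # Toy illustration of process-algebra-style composition of workflow steps:
    # seq(...) plays the role of sequential composition, choice(...) of an
    # exclusive choice between alternatives. Not NPDL syntax.
    def step(name):
        def run(ctx):
            print(f"executing {name}")
            ctx.append(name)
            return ctx
        return run

    def seq(*procs):  # P . Q : run processes one after another
        def run(ctx):
            for p in procs:
                ctx = p(ctx)
            return ctx
        return run

    def choice(condition, p, q):  # P + Q : run exactly one alternative
        def run(ctx):
            return p(ctx) if condition(ctx) else q(ctx)
        return run

    approve_order = seq(
        step("receive_order"),
        choice(lambda ctx: True,  # assumed guard; a real engine evaluates business data
               step("auto_approve"),
               step("manual_review")),
        step("ship_order"),
    )
    print(approve_order([]))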
