21 |
Database forensics : Investigating compromised database management systems
Beyers, Hector Quintus January 2013 (has links)
The use of databases has become an integral part of modern human life. Often the data
contained within databases has substantial value to enterprises and individuals. As
databases become a greater part of people’s daily lives, it becomes increasingly interlinked with human behaviour. Negative aspects of this behaviour might include criminal activity,
negligence and malicious intent. In these scenarios a forensic investigation is required to collect evidence to determine what happened on a crime scene and who is responsible for the crime. A large amount of the research that is available focuses on digital forensics,
database security and databases in general, but little research exists on database forensics as such. It is difficult for a forensic investigator to conduct an investigation on a DBMS due to limited information on the subject and the absence of a standard approach to follow during a forensic investigation. Investigators therefore have to consult disparate sources of information on database forensics in order to compile a self-invented approach to investigating a database. A further effect of this lack of research is that compromised DBMSs (DBMSs that have been attacked and therefore behave abnormally) are neither considered nor understood in the database forensics field.

The concept of a compromised DBMS was illustrated in an article by Olivier, who suggested that the ANSI/SPARC model can be used to assist in a forensic investigation on a compromised DBMS. Based on the ANSI/SPARC model, the DBMS is divided into four layers known as the data model, data dictionary, application schema and application data. The extensional nature of the first three layers means they can influence the application data layer and ultimately manipulate the results it produces. It thus becomes problematic to conduct a forensic investigation on a DBMS if the integrity of the extensional layers is in question, because the results on the application data layer cannot then be trusted. To restore the integrity of a layer of the DBMS, a clean layer (a newly installed layer) could be used, but clean layers are not easy, or always possible, to configure on a DBMS, depending on the forensic scenario. Therefore a combination of clean and existing layers can be used to conduct a forensic investigation on a DBMS.
PROBLEM STATEMENT
The problem to be addressed is how to construct the appropriate combination of clean and existing layers for a forensic investigation on a compromised DBMS, and ensure the
integrity of the forensic results.
APPROACH
The study divides the relational DBMS into four abstract layers, illustrates how the layers
can be prepared to be either in a found or clean forensic state, and experimentally
combines the prepared layers of the DBMS according to the forensic scenario. The study
commences with background on databases, digital forensics and database forensics respectively, to give the reader an overview of the existing literature in these fields. It then discusses the four abstract layers of the DBMS and explains how the layers can influence one another. The clean and found environments are introduced because the DBMS differs from the technologies on which digital forensics has already been researched. The study then discusses each of the extensional abstract layers individually, and how and why an abstract layer can be converted to a clean or found state. A discussion of each extensional layer is required to understand how unique each layer of the DBMS is, and how the layers can be combined in a way that enables a forensic investigator to conduct an investigation on a compromised DBMS. It is illustrated that each layer is unique and can be corrupted in various ways. Therefore,
each layer must be studied individually in a forensic context before all four layers are
considered collectively. A forensic study is conducted on each abstract layer of the DBMS
that has the potential to influence other layers to deliver incorrect results. Ultimately, the
DBMS will be used as a forensic tool to extract evidence from its own encrypted data and
data structures. The last chapter therefore illustrates how a forensic investigator can
prepare a trustworthy forensic environment where a forensic investigation could be
conducted on an entire PostgreSQL DBMS by constructing a combination of the
appropriate forensic states of the abstract layers.
RESULTS
This study yields an empirically demonstrated approach for dealing with a compromised DBMS during a forensic investigation, making use of a combination of
various states of abstract layers in the DBMS. Approaches are suggested on how to deal
with a forensic query on the data model, data dictionary and application schema layer of
the DBMS. A forensic process is suggested on how to prepare the DBMS to extract
evidence from the DBMS. Another function of this study is that it advises forensic
investigators to consider alternative possibilities on how the DBMS could be attacked.
These alternatives might not have been considered during investigations on DBMSs to
date. Our methods have been tested by means of a practical example and have delivered
promising results. / Dissertation (MEng)--University of Pretoria, 2013. / gm2014 / Electrical, Electronic and Computer Engineering / unrestricted
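The clean-versus-found layer comparison described above can be sketched in a few lines of Python. This is only an illustrative sketch, not the dissertation's actual procedure: the dictionaries standing in for a data-dictionary layer, and the tampered `v_sales` view, are hypothetical.

```python
import hashlib

def layer_fingerprint(layer_objects):
    """Hash a sorted dump of a layer's objects (e.g. data-dictionary rows)."""
    digest = hashlib.sha256()
    for name, definition in sorted(layer_objects.items()):
        digest.update(f"{name}={definition}".encode())
    return digest.hexdigest()

def diff_layers(found, clean):
    """Report objects whose definition differs between a found and a clean layer."""
    return {name for name in found
            if found.get(name) != clean.get(name)} | (clean.keys() - found.keys())

# Hypothetical data-dictionary extracts: the found layer carries a tampered view.
clean_dd = {"orders": "TABLE(id int, total numeric)",
            "v_sales": "SELECT * FROM orders"}
found_dd = {"orders": "TABLE(id int, total numeric)",
            "v_sales": "SELECT * FROM orders WHERE id <> 42"}  # hides one row

if layer_fingerprint(found_dd) != layer_fingerprint(clean_dd):
    suspect = diff_layers(found_dd, clean_dd)
    print("compromised objects:", suspect)  # → compromised objects: {'v_sales'}
```

A real investigation would of course dump the layer from the DBMS itself (e.g. PostgreSQL system catalogs) rather than from in-memory dictionaries; the point is only that a clean layer gives a trusted baseline against which the found layer can be diffed.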
|
22 |
Optikos įmonės kompiuterizuotos IS sukūrimas ir tyrimas / Development and Research of the Computerized Information System for an Optical Enterprise
Paičienė, Kristina 20 September 2004 (has links)
Many small enterprises in Lithuania do not use information systems for their accounting. This is because almost all existing accounting software is quite complex, expensive, and has many additional features which are not useful for a small enterprise. It was therefore decided to develop custom software for goods accounting. The user interface and data structure should be adapted to the specific functions of the small optical enterprise.
The purposes of the developed information system are to increase work and accounting quality, to decrease time needed for accounting, to avoid saving redundant information, to automate and simplify the process of creating analytical reports, to avoid mistakes in accounting and make accounting more efficient.
In the process of developing this information system, functional, non-functional, managerial and common requirement issues were analyzed. Models of data flow, data structure, and applications were used in the requirements specification. The component architecture and software structure are also provided in this project.
The project was realized by means of Microsoft Access 2000. A database and a graphical user interface were created, and the integrated Microsoft Visual Basic for Applications was used to perform the programming tasks. The abilities of this software are fully sufficient for these tasks. The selected design techniques and tools proved themselves in developing software for small... [to full text]
|
23 |
ATUALIZAÇÃO DINÂMICA DE SOFTWARE EM SGBDS COM SUPORTE DO MODELO DE COMPONENTES / DYNAMIC SOFTWARE UPDATE IN DATABASE MANAGEMENT SYSTEMS WITH SUPPORT OF SOFTWARE COMPONENT MODEL
Gasperi, Cleandro Flores de 11 October 2011 (has links)
The daily use of Internet services in the most diverse human activities creates in users the expectation of high availability of those services. Many of them have database systems as an essential building block. Moreover, such services are subject to problems such as errors and aging. Error-free software, or non-aging software that needs no innovation, is a utopia. Thus, software updating is a required task. Currently, software-updating mechanisms are based on two different solutions: (i) the use of additional hardware, an expensive and complex solution, or (ii) service interruption, which is trivial but inefficient. In this work, we explore the application of Dynamic Software Update (DSU) techniques as an alternative for updating a Database Management System (DBMS) without requiring additional hardware or service unavailability. Our solution was developed in a hypothetical DBMS architecture with the support of a software component model. A prototype was developed in accordance with this model using FRACTAL. Experimental evaluation confirmed the functional viability of this approach. The
implementation overhead in a controlled environment was about 30%, which is acceptable. / The daily use of the Internet in the most diverse human activities creates in users the expectation of services that are available at any moment. Many of these services rely on Database Management Systems (DBMSs) as a basic and essential tool. Moreover, such software is subject to errors and aging. Software that is free of errors or needs no innovation is a utopia, so software must undergo updates. Currently, software-update mechanisms either use additional hardware, a more expensive and complex solution, or make the service unavailable to clients (system downtime), which is a trivial but inefficient solution. This work applies Dynamic Software Update (DSU) techniques as an alternative for updating a DBMS without additional hardware and without making the system unavailable. To this end, the development of a DBMS in a hypothetical architecture with the support of software components is proposed. A prototype was created in accordance with the proposed solution, using the FRACTAL component model. The experimental evaluation confirmed the functional viability of the solution; the implementation overhead in a controlled environment was approximately 30%, which is acceptable, since the DBMS can be updated without a complete stop.
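The core DSU idea — swapping a component behind a stable indirection point while the service keeps running — can be illustrated with a small Python sketch. FRACTAL itself is a Java component model, so this is only an analogy; the `ComponentHolder` class and the executor functions are hypothetical names, not part of the thesis's prototype.

```python
import threading

class ComponentHolder:
    """Indirection point: clients call through the holder, never the component
    directly, so the implementation can be swapped while the service stays up."""
    def __init__(self, component):
        self._component = component
        self._lock = threading.Lock()

    def call(self, *args, **kwargs):
        with self._lock:  # quiescence: no call is in flight during a swap
            return self._component(*args, **kwargs)

    def swap(self, new_component):
        with self._lock:
            self._component = new_component

# Hypothetical "query executor" components: v2 fixes a bug (untrimmed input) in v1.
def executor_v1(sql): return f"v1:{sql.lower()}"
def executor_v2(sql): return f"v2:{sql.strip().lower()}"

holder = ComponentHolder(executor_v1)
before = holder.call("SELECT 1 ")
holder.swap(executor_v2)  # update applied without stopping the service
after = holder.call("SELECT 1 ")
```

Clients holding a reference to `holder` never observe downtime; only calls that arrive during the swap wait briefly on the lock, which is one informal way to picture the roughly 30% overhead a real indirection layer can introduce.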
|
24 |
Study of Development of Java Applications in Eclipse Environment and Development of Java Based Calendar Application with Email Notifications
Nazir, Muhammad Abid January 2013 (has links)
Eclipse is one of the most widely used tools in the professional development of programming applications and software solutions. It is open-source software and provides extensive free libraries. In this thesis work, Eclipse was studied for Java application development. To deepen the study and gain hands-on experience with the Eclipse IDE, an application was developed using the Java programming language. The proposed application is a desktop application that can be used on all modern operating systems. The application was developed using Java SE (Standard Edition) version 1.7, the latest version available from Oracle Corporation at the time. The Java Swing API was used to build the application's GUI (graphical user interface). The database for event credentials was developed with the MySQL database management system, and the connection between the application and the database was made through Java Database Connectivity (JDBC). Some additional Java APIs were loaded into the Eclipse project workspace, and a comprehensive explanation is provided of how to use external libraries in the Eclipse environment.
|
25 |
Design And Implementation Of An OODBMS For VLSI Interconnect Parasitic Analysis
Arun, N S 07 1900 (has links) (PDF)
No description available.
|
26 |
Návrh systému pro účely administrativy fotbalového svazu / Design of a Football Association System for Administration Purposes
Vařacha, Jan January 2015 (has links)
This master’s thesis aims to design a suitable system, based on a relational database, for the administrative activities of the District Football Association. The relational database is to be managed primarily by the association secretary and, to a lesser extent, by members of the association's specialist committees. The database should be able to hold all the information and records that have so far been handled on paper (match fixtures, awarded fines, clubs’ fees, players’ punishments, etc.). Routine administrative work, such as reading, inserting, deleting and updating data, will be carried out through a web interface and should place no special demands on users’ computer skills.
|
27 |
Performance benchmarking of data-at-rest encryption in relational databases
Istifan, Stewart, Makovac, Mattias January 2022 (has links)
This thesis measures, through a controlled experiment, how data-at-rest encryption with varying AES key lengths affects the performance of Relational Database Management Systems in terms of transaction throughput. By measuring the effect through a series of load tests followed by statistical analysis, the impact of adopting a specific data-at-rest encryption algorithm could be shown. The results gathered from this experiment were measured in terms of the average transactional throughput of SQL operations. An OLTP workload in the benchmarking tool HammerDB was used to generate a transactional workload, which in turn was used to perform load tests on SQL databases encrypted with different AES key lengths. The data gathered from these tests then underwent statistical analysis to either keep or reject the stated hypotheses. The statistical analysis performed on the different versions of the AES algorithm showed no significant difference in transaction throughput in the results gathered from the load tests on MariaDB. However, statistically significant differences were proven to exist when running the same tests on MySQL. These results answered our research question, "Is there a significant difference in transaction throughput between the AES-128, AES-192, and AES-256 algorithms used to encrypt data-at-rest in MySQL and MariaDB?". The conclusion is that the statistical evidence suggests a significant difference in transactional throughput between AES algorithms in MySQL but not in MariaDB. This led us to investigate transactional database performance between MySQL and MariaDB further, measuring a specific type of transaction to determine whether there was a difference in performance between the databases themselves using the same encryption algorithm. The statistical evidence confirmed that MariaDB vastly outperformed MySQL in transactional throughput.
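The kind of statistical comparison described above can be sketched as follows. The transactions-per-minute samples are invented for illustration, and a real analysis would apply the full hypothesis-testing machinery (degrees of freedom, p-values) rather than a bare t-statistic.

```python
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples of throughput (TPM)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical TPM samples from repeated load-test runs under two key lengths.
aes128 = [10120, 10250, 10080, 10310, 10190]
aes256 = [9440, 9510, 9390, 9470, 9560]

t = welch_t(aes128, aes256)
# A |t| far above the critical value for the chosen significance level would
# lead to rejecting the null hypothesis of equal mean throughput.
print(round(t, 1))
```

With samples this far apart the statistic is large and the null hypothesis of equal throughput would be rejected, which is the shape of result the thesis reports for MySQL but not MariaDB.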
|
28 |
A Plan for OLAP
Jaecksch, Bernhard, Lehner, Wolfgang, Faerber, Franz 30 May 2022 (has links)
So far, data warehousing has often been discussed in the light of complex OLAP queries and as reporting facility for operative data. We argue that business planning as a means to generate plan data is an equally important cornerstone of a data warehouse system, and we propose it to be a first-class citizen within an OLAP engine. We introduce an abstract model describing relevant aspects of the planning process in general and the requirements it poses to a planning engine. Furthermore, we show that business planning lends itself well to parallelization and benefits from a column-store much like traditional OLAP does. We then develop a physical model specifically targeted at a highly parallel column-store, and with our implementation, we show nearly linear scaling behavior.
|
29 |
Sample synopses for approximate answering of group-by queries
Lehner, Wolfgang, Rösch, Philipp 22 April 2022 (has links)
With the amount of data in current data warehouse databases growing steadily, random sampling is continuously gaining in importance. In particular, interactive analyses of large datasets can greatly benefit from the significantly shorter response times of approximate query processing. Typically, those analytical queries partition the data into groups and aggregate the values within the groups. Further, with the commonly used roll-up and drill-down operations a broad range of group-by queries is posed to the system, which makes the construction of highly-specialized synopses difficult.
In this paper, we propose a general-purpose sampling scheme that is biased in order to answer group-by queries with high accuracy. While existing techniques focus on the size of a group when computing its sample size, our technique is based on its standard deviation. The basic idea is that the more homogeneous a group is, the fewer representatives are required in order to give a good estimate. With an extensive set of experiments, we show that our approach reduces both the estimation error and the construction cost compared to existing techniques.
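The allocation idea — sample sizes driven by group standard deviation rather than group size — can be sketched as follows. The groups and budget are hypothetical, and the paper's actual synopsis construction is more elaborate; this only shows the biasing principle.

```python
import statistics

def allocate_samples(groups, total_budget):
    """Distribute a sampling budget across groups proportional to each group's
    standard deviation: homogeneous groups need fewer representatives."""
    sds = {g: statistics.pstdev(vals) or 1e-9 for g, vals in groups.items()}
    sd_sum = sum(sds.values())
    # Every group keeps at least one representative so it can be estimated at all.
    return {g: max(1, round(total_budget * sd / sd_sum)) for g, sd in sds.items()}

# Hypothetical groups: 'flat' is nearly constant, 'spread' varies widely.
groups = {
    "flat":   [100, 101, 99, 100, 100, 101],
    "spread": [10, 500, 250, 990, 75, 640],
}
plan = allocate_samples(groups, total_budget=50)
print(plan)  # the heterogeneous group receives almost the entire budget
```

A size-proportional scheme would split the budget evenly here, wasting samples on the nearly constant group whose mean one representative already estimates well.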
|
30 |
Compression Selection for Columnar Data using Machine-Learning and Feature Engineering
Persson, Douglas, Juelsson Larsen, Ludvig January 2023 (has links)
There is a continuously growing demand for improved solutions that provide both efficient storage and efficient retrieval of big data for analytical purposes. This thesis researches the use of machine-learning together with feature engineering to recommend the most cost-effective compression algorithm and encoding combination for columns in a columnar database management system (DBMS). The framework consists of a cost function calculated using compression time, decompression time, and compression ratio. An XGBoost machine-learning model is trained on labels provided by the cost function to recommend the most cost-effective combination for columnar data within a column or vector-oriented DBMS. While the methods are applied on ClickHouse, one of the most popular open-source column-oriented DBMS on the market, the results are broadly applicable to column-oriented data which share data type and characteristics with IoT telemetry data. Using billions of available rows of numeric real business data obtained at Axis Communications in Lund, Sweden, a set of features are engineered to accurately describe the characteristics of a given column. The proposed framework allows for weighting the business interests (compression time, decompression time, and compression ratio) to determine the individually optimal cost-effective solution. The model reaches an accuracy of 99% on the test dataset and an accuracy of 90.1% on unseen data by leveraging data features that are predictive of compression algorithms and encodings performances. Following ClickHouse strategies and the most suitable practices in the field, combinations of general-purpose compression algorithms and data encodings are analysed that together yield the best results in efficiently compressing the data of certain columns. Applying the unweighted recommended combinations on all columns, the framework’s performance impact was measured to increase the average compression speed by 95.46%. 
This reduced the average time to compress the columns from 31.17 seconds to 13.17 seconds. Additionally, decompression speed increased by 59.87%, reducing the time to decompress the columns from 2.63 seconds to 2.02 seconds, at the cost of a 66.05% lower compression ratio, which increased the storage requirements by 94.9 MB. In column and vector databases, chunks of data belonging to a certain column are often stored together on disk. Therefore, choosing the right compression algorithm can lower storage requirements and boost database throughput.
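The weighted cost function described above can be sketched as follows. The codec names, measurements, and the exact form of the cost (a weighted sum, with the ratio taken as compressed/original size) are assumptions for illustration, not the thesis's exact formula.

```python
def compression_cost(comp_time, decomp_time, ratio, weights=(1.0, 1.0, 1.0)):
    """Weighted cost of a codec: lower is better. `ratio` is assumed to be
    compressed/original size, so all three terms shrink as the codec improves.
    The weights let the business favour speed or storage, as described above."""
    w_c, w_d, w_r = weights
    return w_c * comp_time + w_d * decomp_time + w_r * ratio

# Hypothetical per-column measurements for two codec+encoding candidates.
candidates = {
    "lz4+delta":  {"comp_time": 0.8, "decomp_time": 0.3, "ratio": 0.40},
    "zstd+plain": {"comp_time": 2.1, "decomp_time": 0.6, "ratio": 0.22},
}

# Weight decompression 5x: read-heavy analytics cares most about query speed.
best = min(candidates, key=lambda name: compression_cost(
    **candidates[name], weights=(1.0, 5.0, 1.0)))
print(best)  # → lz4+delta
```

Labels produced by such a cost function over many measured columns are what the machine-learning model is trained on, so shifting the weights shifts which combinations the model learns to recommend.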
|