501

Modelling and Evaluation of Performance, Security and Database Management Trade-offs in Cloud Computing Platforms. An investigation into quantitative modelling and simulation analysis of ‘optimal’ performance, security and database management trade-offs in Cloud Computing Platforms (CCPs), based on Stochastic Activity Networks (SANs) and a three-tier combined metrics

Akinyemi, Akinwale A. January 2020 (has links)
A framework for the quantitative analysis of performance, security and database management within a network system (e.g. a cloud computing platform) is presented in this research. Our study provides a methodology for modelling and quantitatively generating the significant metrics needed in the evaluation of a network system. To focus this research, a study is carried out into the quantitative modelling and analysis of performance, security and database management trade-offs in cloud computing platforms, based on Stochastic Activity Networks (SANs) and combined metrics. Cloud computing is an innovative distributed computing paradigm based on the infrastructure of the internet, providing computational power, application, storage and infrastructure services. Security mechanisms such as batch rekeying, intrusion detection, encryption/decryption or security protocols come at the expense of performance and computing-resource consumption. Furthermore, database management processing also has an adverse effect on performance, especially in the presence of big data. Stochastic Activity Networks (SANs), which offer synchronisation, timeliness and parallelism, are proposed for the modelling and quantitative evaluation of ‘optimal’ trade-offs involving performance, security and database management. Performance modelling and analysis of computer network systems has long been considered of utmost importance. Performance has for some time been quantified using stochastic models, and there is rising interest in applying stochastic modelling to security problems. Quantitative techniques that include analytical evaluations founded on queuing theory, discrete-event simulations and related approximations have been utilised in the examination of performance. Security suffers from the fact that no optimal case can be defined against which results may be interpreted. The most meaningful security metrics are analogous to reliability metrics.
The rapid rate at which data grows heightens the need for research into the design and development of cloud computing models that manage workload intensity and are suitable for data exploration. Handling big data, especially within cloud computing, is a resource-consuming, time-demanding and challenging task that necessitates substantial computational infrastructure to support successful data exploration. We present an improved Security State Transition Diagram (SSTD) by adding a new security state (the Failed/Freeze state). This new security state signifies a position in which the implemented security countermeasures cannot handle the security attacks and the computing network system fails completely. In a more sophisticated security system, when the security countermeasure(s) cannot in any form categorise the security attack, the network system is moved to the Failed/Freeze security state. In this state, the network system can only resume operation when restored by the system administrator. In this study, we propose a cloud computing system model, define security countermeasures and evaluate the optimisation problems for the trade-offs between performance, security and database management using the SAN formalism. We design, model and implement dependency within the presented security system, developing interaction among the security countermeasures using our proposed Security Group Communication System (SGCS). The choice of Petri nets enables the understanding and capture of the specified metrics at different stages of the proposed cloud computing model. In this thesis, an overview of cloud computing, including its classification and services, is presented in conjunction with a review of the existing literature. Subsequently, a methodology is proposed for the quantitative analysis of our proposed cloud computing model of performance-security-database trade-offs using the Möbius simulator.
Additionally, numerical experiments are presented with appropriate interpretations. We identified system parameters that can be used to optimise the presented abstract combined metrics, although these parameters are optimal for none of performance, security or database management independently. Based on the proposed quantitative simulation model framework, reliable numerical experiments were obtained that indicate scope for further extensions of this work, for example the use of Machine Learning (ML) or Artificial Intelligence (AI) in the predictive and preventive aspects of security systems.
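The improved SSTD described in this abstract can be illustrated with a minimal state-machine sketch. Only the Failed/Freeze state and its entry/exit rules come from the abstract; the other state names and method names are illustrative assumptions, not taken from the thesis:

```python
from enum import Enum, auto

class SecurityState(Enum):
    HEALTHY = auto()        # assumed baseline state (illustrative)
    UNDER_ATTACK = auto()   # assumed state: countermeasures engaged
    FAILED_FREEZE = auto()  # the new state added in the improved SSTD

class SSTD:
    """Sketch of the security state transition diagram with Failed/Freeze."""

    def __init__(self):
        self.state = SecurityState.HEALTHY

    def on_attack(self, categorised: bool):
        # If the countermeasures cannot categorise the attack at all,
        # the system moves to Failed/Freeze and stops operating.
        if self.state is SecurityState.FAILED_FREEZE:
            return  # frozen: further events are ignored until restore
        self.state = (SecurityState.UNDER_ATTACK if categorised
                      else SecurityState.FAILED_FREEZE)

    def on_mitigated(self):
        # countermeasures handled the attack; return to normal operation
        if self.state is SecurityState.UNDER_ATTACK:
            self.state = SecurityState.HEALTHY

    def admin_restore(self):
        # per the abstract, only the system administrator can leave
        # the Failed/Freeze state
        self.state = SecurityState.HEALTHY
```

The key property of the sketch is that once Failed/Freeze is entered, no attack or mitigation event changes the state; only `admin_restore` does.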
502

A Study of Migrating Biological Data from Relational Databases to NoSQL Databases

Moatassem, Nawal N. 18 September 2015 (has links)
No description available.
503

Database design of Ohio SPS test

Liu, Jiayan January 1997 (has links)
No description available.
504

Application for data mining in manufacturing databases

Fang, Cheng-Hung January 1996 (has links)
No description available.
505

Computer aided software engineering tool for automatically generating database management system code

Son, Ju Young January 1989 (has links)
No description available.
506

The identification of semantics for the file/database problem domain and their use in a template-based software environment /

Shubra, Charles John January 1984 (has links)
No description available.
507

The coordination of information in a highly differentiated organization : use of a computerized relational data base system as an integrating device for monitoring graduate education /

Malaney, Gary Douglas January 1985 (has links)
No description available.
508

Computerized Flow Process Charting System and Applications

Griffin, George H. 01 January 1987 (has links) (PDF)
A computerized flow process charting application program written in dBase III+ has been developed to aid in resource requirements planning and operations analysis. Traditional flow process charting has used the following data elements: assembly number, assembly sequence number, distance travelled, time required for the activity, and an activity symbol. The computerized system adds several variables to these in order to customize the application at Martin Marietta Electronic Systems. These additional variables include work center identification, machine number identification, lot sizes, set-up and run times, and manufacturing specifications. Additionally, the circle (operation) symbol has been expanded to differentiate between manual, process and test activities. Resource requirements planning and analysis are accomplished by a series of reports in which a user defines search requirements and enters three independent equation variables for the calculations. The three variables are the realization (safety) factor, resource availability in hours per month, and monthly production demand. The resource requirements can be used in methods engineering, make-buy decisions and resource planning. Sensitivity analyses can easily be accomplished by changing the input variables and/or data.
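The abstract names the three input variables but not the report arithmetic itself; a plausible sketch of such a calculation, with the formula and all parameter names assumed for illustration rather than taken from the thesis, is:

```python
def required_resources(monthly_demand, unit_time_hours,
                       realization_factor, availability_hours):
    """Estimate how many resource units (machines/operators) are needed.

    One plausible reading of the report calculation: workload equals
    demand times time per unit, inflated by a realization (safety)
    factor, divided by the hours one resource offers per month.
    All of this is an illustrative assumption, not the thesis formula.
    """
    workload = monthly_demand * unit_time_hours * realization_factor
    return workload / availability_hours

# e.g. 1000 assemblies/month at 0.5 h each, a 1.25 realization factor,
# and 160 available hours per resource per month gives
# 1000 * 0.5 * 1.25 / 160 = 3.90625, rounded up to 4 resource units.
```

A sensitivity analysis in this sketch is simply re-running the function with a changed realization factor, availability or demand, which matches the abstract's description of varying the input variables.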
509

SYMBOLIC ANALYSIS OF WEAK CONCURRENCY SEMANTICS IN MODERN DATABASE PROGRAMS

Kiarash Rahmani (13171128) 28 July 2022 (has links)
The goal of this dissertation is to design a collection of techniques and tools that enable the ease of programming under the traditional strong concurrency guarantees, without sacrificing the performance offered by modern distributed database systems. Our main thesis is that language-centric reasoning can help developers efficiently identify and eliminate concurrency anomalies in modern database programs, and we have demonstrated that it results in faster and safer database programs.
510

Sampling time-based sliding windows in bounded space

Gemulla, Rainer, Lehner, Wolfgang 12 October 2022 (has links)
Random sampling is an appealing approach to build synopses of large data streams because random samples can be used for a broad spectrum of analytical tasks. Users are often interested in analyzing only the most recent fraction of the data stream in order to avoid outdated results. In this paper, we focus on sampling schemes that sample from a sliding window over a recent time interval; such windows are a popular and highly comprehensible method to model recency. In this setting, the main challenge is to guarantee an upper bound on the space consumption of the sample while using the allotted space efficiently at the same time. The difficulty arises from the fact that the number of items in the window is unknown in advance and may vary significantly over time, so that the sampling fraction has to be adjusted dynamically. We consider uniform sampling schemes, which produce each sample of the same size with equal probability, and stratified sampling schemes, in which the window is divided into smaller strata and a uniform sample is maintained per stratum. For uniform sampling, we prove that it is impossible to guarantee a minimum sample size in bounded space. We then introduce a novel sampling scheme called bounded priority sampling (BPS), which requires only bounded space. We derive a lower bound on the expected sample size and show that BPS quickly adapts to changing data rates. For stratified sampling, we propose a merge-based stratification scheme (MBS), which maintains strata of approximately equal size. Compared to naive stratification, MBS has the advantage that the sample is evenly distributed across the window, so that no part of the window is over- or underrepresented. We conclude the paper with a feasibility study of our algorithms on large real-world datasets.
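To illustrate sampling from a time-based sliding window in bounded (expected logarithmic) space, here is a minimal priority-based sketch that draws one uniform item from the window. It follows the general idea of priority sampling for sliding windows rather than the exact BPS or MBS algorithms of the paper, and all names are illustrative:

```python
import collections
import random

def make_sampler():
    """One-item uniform sampler over a time-based sliding window.

    Each arriving item gets a random priority; we keep only the items
    whose priority exceeds that of every later arrival (a decreasing
    sequence of "candidates"). The in-window maximum-priority item is
    always among the candidates, and it is a uniform pick from the
    window because every in-window item is equally likely to hold the
    maximum priority.
    """
    # deque of (timestamp, priority, item); priorities strictly
    # decrease from front to back, so the front is the current sample
    candidates = collections.deque()

    def insert(item, now):
        p = random.random()
        # a later item with a higher priority supersedes earlier candidates
        while candidates and candidates[-1][1] < p:
            candidates.pop()
        candidates.append((now, p, item))

    def sample(now, window):
        # evict candidates that fell out of the time window
        while candidates and candidates[0][0] <= now - window:
            candidates.popleft()
        return candidates[0][2] if candidates else None

    return insert, sample
```

The space used is the number of "right-to-left maxima" among in-window priorities, which is logarithmic in the window size in expectation; guaranteeing a hard bound (and a minimum sample size, which the paper proves impossible for uniform schemes) is exactly where the paper's BPS construction goes beyond this sketch.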
