491

A multi-dimensional model for information security management

Eloff, Maria Margaretha 06 December 2011
D.Phil. / Any organisation is dependent on its information technology resources. The challenges posed by new developments such as the World Wide Web and e-business require new approaches to the management and protection of IT resources. Various documents exist containing recommendations for best practice in information security management; BS7799 is one such code of practice. The most important problem addressed in this thesis is the need for new approaches and perspectives on information security (IS) management in an organisation, taking cognisance of changing requirements in the realm of information technology. In this thesis various models and tools are developed that can assist management in understanding, adapting and using internationally accepted codes of practice for information security management to the best benefit of their organisations. The thesis consists of three parts. Chapter 1 and Chapter 2 constitute Part 1: Introduction and Background. In Chapter 1 the problem statement, objectives and deliverables are given. Further, the chapter contains definitions of important terminology used in the thesis as well as an overview of the research. Chapter 2 defines various terms associated with information security management in an attempt to eliminate existing confusion. The terms are mapped onto a hierarchical framework in order to illustrate the relationships between them. In Part 2: IS Management Perspectives and Models, consisting of Chapters 3, 4, 5 and 6, new approaches to information security management are discussed. In Chapter 3 different perspectives on using a code of practice, such as BS7799, for IS management are presented. The different perspectives are based on the unique characteristics of the organisation, such as its size and functional purpose.
These different perspectives also enable organisations to focus on the controls for specific resources or security services such as integrity or confidentiality. In Chapter 4 these different perspectives of business type/size, security services and resources are integrated into a multi-dimensional model and mapped onto BS7799. Using the multi-dimensional model enables management to answer questions such as: "Which BS7799 controls must a small retail organisation interested in preserving the confidentiality of its networks implement?" In Chapter 5 the SecComp model is proposed to assist in determining how well an organisation has implemented the BS7799 controls recommended for its needs. In Chapter 6 the underlying implemented IT infrastructure, i.e. the software, hardware and network products, is also incorporated into determining whether the information assets of organisations are sufficiently protected. This chapter combines technology aspects with management aspects to provide a consolidated approach towards the evaluation of IS. The thesis culminates in Part 3: Conclusion, which comprises one chapter only. In this last chapter, Chapter 7, the research undertaken thus far is summarised and the pros and cons of the proposed modelling approach are weighed up. The thesis concludes with a reflection on possible areas for further research.
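The multi-dimensional selection the abstract describes can be pictured as a tagged lookup across the three dimensions (organisation size, security service, resource). The sketch below is purely illustrative: the control identifiers and their tags are invented and do not reproduce BS7799's actual control numbering or content.

```python
# Hypothetical control catalogue: each control is tagged with the
# organisation sizes, security services, and resources it applies to.
# (Invented IDs and tags, for illustration only.)
CONTROLS = {
    "A.1": {"sizes": {"small", "large"}, "services": {"confidentiality"},
            "resources": {"networks"}},
    "A.2": {"sizes": {"large"}, "services": {"integrity"},
            "resources": {"software"}},
    "A.3": {"sizes": {"small"}, "services": {"confidentiality", "integrity"},
            "resources": {"networks", "hardware"}},
}

def relevant_controls(size, service, resource):
    """Select the controls matching one point in the three dimensions."""
    return sorted(
        cid for cid, tags in CONTROLS.items()
        if size in tags["sizes"]
        and service in tags["services"]
        and resource in tags["resources"]
    )

# The question quoted in the abstract, answered against the toy catalogue:
# which controls apply to a small organisation concerned with the
# confidentiality of its networks?
print(relevant_controls("small", "confidentiality", "networks"))
```

The point of the model is exactly this kind of intersection query: instead of working through the full code of practice, management sees only the controls relevant to their organisation's position in the three dimensions.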
492

The use of a virtual machine as an access control mechanism in a relational database management system.

Van Staden, Wynand Johannes 04 June 2008
This dissertation considers the use of a virtual machine as an access control mechanism in a relational database management system. Such a mechanism may prove to be more flexible than the normal access control mechanism that forms part of a relational database management system. The background information provided in this text (required to clearly comprehend the issues related to the virtual machine and its language) introduces databases, security and security mechanisms in relational database management systems. Finally, an existing implementation of a virtual machine that is used as a pseudo access control mechanism is described; this mechanism is used to examine data that travels across an electronic communications network. Subsequently, the language of the virtual machine is chiefly considered, since it is this language which determines the power and flexibility that the virtual machine offers. The capabilities of the language are illustrated by showing how it can be used to implement selected access control policies. Furthermore, it is shown that the language can be used to access data stored in relations in a safe manner, and that the addition of the programs to the DAC model does not cause a significant increase in the management burden of a decentralised access control model. Following the proposed language, the architecture of the "new" access control subsystem is also important, since this architecture determines where the virtual machine fits into the access control mechanism as a whole. Other extensions to the access control subsystem which are important for the functioning of the new access control subsystem are also reflected upon. Finally, before concluding, the dissertation provides general considerations that have to be taken into account for any potential implementation of the virtual machine. Aspects such as the runtime support system, data types and capabilities for extensions are taken into consideration.
By examining all of the previous aspects (the access control language and programs, the virtual machine and the extensions to the access control subsystem), it is shown that the virtual machine and the language offered in this text provide the capability of implementing all the basic access control policies that can normally be provided. Additionally, they can equip the database administrator with a tool to implement even more complex policies which cannot be handled in a simple manner by the normal access control system. It is also shown that using the virtual machine does not mean that certain complex policies have to be implemented at the application level, and that the new and extended access control subsystem does not significantly alter the way in which access control is managed in a relational database management system. / Prof. M.S. Olivier
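The core idea (a small program, run by a virtual machine, deciding each access request) can be sketched with a toy stack machine. The instruction set and program format below are invented for illustration; they do not reproduce the dissertation's actual access control language.

```python
# Toy "policy virtual machine": a program is a list of (opcode, arg)
# instructions that leaves True (grant) or False (deny) on the stack.
# Invented instruction set, for illustration only.

def run_policy(program, request):
    """Evaluate a policy program against a request dict (subject, action, ...)."""
    stack = []
    for op, arg in program:
        if op == "PUSH_ATTR":      # push an attribute of the access request
            stack.append(request[arg])
        elif op == "PUSH_CONST":   # push a literal value
            stack.append(arg)
        elif op == "EQ":           # pop two values, push their equality
            stack.append(stack.pop() == stack.pop())
        elif op == "AND":          # pop two booleans, push their conjunction
            a, b = stack.pop(), stack.pop()
            stack.append(a and b)
    return stack.pop()

# Policy: grant only if the subject is 'alice' AND the action is 'select'.
policy = [
    ("PUSH_ATTR", "subject"), ("PUSH_CONST", "alice"), ("EQ", None),
    ("PUSH_ATTR", "action"), ("PUSH_CONST", "select"), ("EQ", None),
    ("AND", None),
]

print(run_policy(policy, {"subject": "alice", "action": "select"}))  # True
print(run_policy(policy, {"subject": "bob", "action": "select"}))    # False
```

The flexibility argument follows from this shape: because the decision is an arbitrary program rather than a fixed grant table, policies that a conventional DAC mechanism cannot express directly (for example, conditions over request attributes) become ordinary programs.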
493

A Netcentric Scientific Research Repository

Harrington, Brian 12 1900
The Internet and networks in general have become essential tools for disseminating information. Search engines have become the predominant means of finding information on the Web and all other data repositories, including local resources. Domain scientists regularly acquire and analyze images generated by equipment such as microscopes and cameras, resulting in complex image files that need to be managed in a convenient manner. This type of integrated environment has been recently termed a netcentric scientific research repository. I developed a number of data manipulation tools that allow researchers to manage their information more effectively in a netcentric environment. The specific contributions are: (1) A unique interface for management of data including files and relational databases. A wrapper for relational databases was developed so that the data can be indexed and searched using traditional search engines. This approach allows data in databases to be searched with the same interface as other data. Furthermore, this approach makes it easier for scientists to work with their data if they are not familiar with SQL. (2) A Web services based architecture for integrating analysis operations into a repository. This technique allows the system to leverage the large number of existing tools by wrapping them with a Web service and registering the service with the repository. Metadata associated with Web services was enhanced to allow this feature to be included. In addition, an improved binary to text encoding scheme was developed to reduce the size overhead for sending large scientific data files via XML messages used in Web services. (3) Integrated image analysis operations with SQL. This technique allows for images to be stored and managed conveniently in a relational database. SQL supplemented with map algebra operations is used to select and perform operations on sets of images.
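Contribution (2) mentions an improved binary-to-text encoding for shipping large files inside XML messages. As a rough illustration of why the encoding choice matters, the snippet below compares the size overhead of two standard schemes from Python's standard library; the thesis's own scheme is not reproduced here.

```python
# Compare the expansion of two standard binary-to-text encodings.
# Base64 expands 3 input bytes to 4 output characters (~33% overhead);
# Base85 expands 4 input bytes to 5 output characters (~25% overhead).
import base64

payload = bytes(range(256)) * 1000  # 256,000 bytes of sample binary data

b64 = base64.b64encode(payload)
b85 = base64.b85encode(payload)

print(f"raw:    {len(payload)} bytes")
print(f"base64: {len(b64)} bytes ({len(b64) / len(payload):.0%})")
print(f"base85: {len(b85)} bytes ({len(b85) / len(payload):.0%})")
```

For repositories moving multi-gigabyte scientific files through XML, shaving even a few percent of encoding overhead translates into substantial savings in transfer time and message size, which is the motivation the abstract gives for the improved scheme.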
494

The study, design, and implementation of Data mart functions in Windows environments

Wen, Shenning 01 January 1998
No description available.
495

Web-based database management system for research and development laboratories: Technical service support system

Solórzano, Benito 01 January 2001
With the use of the Internet and the emergence of e-commerce, new and improved technologies and modeling techniques have been used to design and implement web-based database management systems.
496

Karst Database Development in Minnesota: Design and Data Assembly

Gao, Y., Alexander, E. C., Tipping, R. G. 01 May 2005
The Karst Feature Database (KFD) of Minnesota is a relational GIS-based Database Management System (DBMS). Previous karst feature datasets used inconsistent attributes to describe karst features in different areas of Minnesota. Existing metadata were modified and standardized to provide comprehensive metadata for all the karst features in Minnesota. Microsoft Access 2000 and ArcView 3.2 were used to develop this working database. Existing county and sub-county karst feature datasets have been assembled into the KFD, which is capable of visualizing and analyzing the entire data set. By November 17, 2002, 11,682 karst features were stored in the KFD of Minnesota. Data tables are stored in a Microsoft Access 2000 DBMS and linked to corresponding ArcView applications. The current KFD of Minnesota has been moved from a Windows NT server to a Windows 2000 Citrix server accessible to researchers and planners through networked interfaces.
497

A Data-Descriptive Feedback Framework for Data Stream Management Systems

Fernández Moctezuma, Rafael J. 01 January 2012
Data Stream Management Systems (DSMSs) provide support for continuous query evaluation over data streams. Data streams pose processing challenges due to their unbounded nature and varying characteristics, such as rate and density fluctuations. DSMSs need to adapt stream processing to these changes within certain constraints, such as available computational resources and minimum latency requirements in producing results. The proposed research develops an inter-operator feedback framework, where opportunities for run-time adaptation of stream processing are expressed in terms of descriptions of substreams and actions applicable to the substreams, called feedback punctuations. Both the discovery of adaptation opportunities and the exploitation of these opportunities are performed in the query operators. DSMSs are also concerned with state management, in particular, state derived from tuple processing. The proposed research also introduces the Contracts Framework, which provides execution guarantees about state purging in continuous query evaluation for systems with and without inter-operator feedback. This research provides both theoretical and design contributions. The research also includes an implementation and evaluation of the feedback techniques in the NiagaraST DSMS, and a reference implementation of the Contracts Framework.
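A feedback punctuation, as described in the abstract, pairs a description of a substream with an action that applies to it. The sketch below illustrates the general shape of that idea; the names and structure are invented and do not reproduce NiagaraST's actual API.

```python
# Illustrative sketch: a downstream operator emits a feedback punctuation
# (substream description + action), and an upstream operator applies it
# while processing, skipping work the downstream no longer needs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class FeedbackPunctuation:
    describes: Callable[[dict], bool]  # which tuples the feedback covers
    action: str                        # e.g. "discard" the described substream

def process(stream, feedback):
    """Upstream operator: drop tuples that a 'discard' punctuation describes."""
    for tup in stream:
        if feedback.action == "discard" and feedback.describes(tup):
            continue  # run-time adaptation: suppress the described substream
        yield tup

# Downstream has signalled that readings with timestamp < 100 are no
# longer useful (say, a window they contribute to has already closed).
fb = FeedbackPunctuation(describes=lambda t: t["ts"] < 100, action="discard")
stream = [{"ts": 50, "v": 1}, {"ts": 150, "v": 2}, {"ts": 99, "v": 3}]
print(list(process(stream, fb)))  # only the ts=150 tuple survives
```

The benefit is that adaptation flows against the direction of the data: operators late in the plan, which know what is still needed, can shed load at operators early in the plan without central coordination.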
498

Optimizing Data Movement in Hybrid Analytic Systems

Leyshock, Patrick Michael 21 December 2014
Hybrid systems for analyzing big data integrate an analytic tool and a dedicated data-management platform, storing data and operating on the data at both components. While hybrid systems have benefits over alternative architectures, in order to be effective, data movement between the two hybrid components must be minimized. Extant hybrid systems either fail to address performance problems stemming from inter-component data movement, or else require the user to explicitly reason about and manage data movement. My work presents the design, implementation, and evaluation of a hybrid analytic system for array-structured data that automatically minimizes data movement between the hybrid components. The proposed research first motivates the need for automatic data-movement minimization in hybrid systems, demonstrating that under workloads whose inputs vary in size, shape, and location, automation is the only practical way to reduce data movement. I then present a prototype hybrid system that automatically minimizes data movement. The exposition includes salient contributions to the research area, including a partial semantic mapping between hybrid components, the adaptation of rewrite-based query transformation techniques to minimize data movement in array-modeled hybrid systems, and empirical evaluation of the approach's utility. Experimental results not only illustrate the hybrid system's overall effectiveness in minimizing data movement, but also illuminate contributions made by various elements of the design.
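The decision at the heart of such a system is where to run each operation so that the fewest bytes cross the boundary between the analytic tool and the data-management platform. The cost model and names below are invented for illustration; they are not the thesis's actual rewrite machinery.

```python
# Hedged sketch: pick the execution site that minimises data movement,
# given the size and current location of each operand. (Invented cost
# model and site names, for illustration only.)

def bytes_moved(sizes, locations, target):
    """Total bytes shipped if the operation runs at `target`:
    every operand not already at the target must be moved there."""
    return sum(s for s, loc in zip(sizes, locations) if loc != target)

def choose_site(sizes, locations, sites=("analytic_tool", "db_platform")):
    """Greedy placement: run where the least data has to move."""
    return min(sites, key=lambda site: bytes_moved(sizes, locations, site))

# A 2 GB array stored at the database joined with a 4 KB array held by
# the analytic tool: run at the database and ship only the small operand.
sizes = [2_000_000_000, 4_000]
locations = ["db_platform", "analytic_tool"]
print(choose_site(sizes, locations))  # 'db_platform'
```

This also illustrates the abstract's point about automation: because operand sizes, shapes, and locations vary per workload, no fixed manual placement is right for all inputs, so the system must make this choice itself at run time.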
499

Chunked extendible arrays and its integration with the global array toolkit for parallel image processing

Nimako, Gideon January 2016 (has links)
A thesis submitted to the Faculty of Engineering and the Built Environment in fulfilment of the requirements for the degree of Doctor of Philosophy, 2016 / Online resource (xii, 151 leaves) / Several meetings of the Extremely Large Databases Community for large scale scientific applications have advocated the use of multidimensional arrays as the appropriate model for representing scientific databases. Scientific databases gradually grow to massive sizes of the order of terabytes and petabytes. As such, the storage of such databases requires efficient dynamic storage schemes in which the array is allowed to arbitrarily extend the bounds of its dimensions. Conventional multidimensional array representations in today's programming environments do not extend or shrink their bounds without relocating elements of the data set; in general, extendibility of the bounds is limited to only one dimension. This thesis presents a technique for storing dense multidimensional arrays by chunks such that the array can be extended along any dimension without compromising the access time of an element. This is done with a computed access mapping function that maps the k-dimensional index onto a linear index of the storage locations. This concept forms the basis for the implementation of an array file of any number of dimensions, where the bounds of the array dimensions can be extended arbitrarily. Such a feature currently exists in the Hierarchical Data Format version 5 (HDF5); however, extending the bound of a dimension in an HDF5 array file can be unusually expensive in time. Such extensions, in our storage scheme for dense array files, can be performed while still accessing elements of the array orders of magnitude faster than in HDF5 or conventional array files. We also present Parallel Chunked Extendible Dense Array (PEXTA), a new parallel I/O model for the Global Array Toolkit.
PEXTA not only provides the necessary Application Programming Interface (API) for explicit data transfer between the memory-resident global array and its secondary storage counterpart, but also allows the persistent array to be extended along any dimension without compromising the access time of an element or sub-array elements. Such APIs provide a platform for high speed and parallel hyperspectral image processing without performance degradation, even when the imagery files undergo extensions. / MT2017
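The key property claimed above (growth along any dimension without relocating existing elements) can be illustrated with a simplified chunk-based layout: each fixed-size chunk is stored independently, and a computed mapping locates any k-dimensional index. This sketch shows the general idea only, not the thesis's specific access mapping function.

```python
# Simplified chunked extendible array: chunks live in a dict keyed by
# chunk coordinates, so extending any bound just materialises new chunks;
# existing elements are never moved. (Illustrative sketch only.)

CHUNK = 4  # chunk edge length along every dimension

class ChunkedExtendibleArray:
    def __init__(self, ndim):
        self.ndim = ndim
        self.chunks = {}  # chunk coordinates -> flat list of elements

    def _locate(self, index):
        """Computed mapping: k-dimensional index -> (chunk key, offset)."""
        chunk_key = tuple(i // CHUNK for i in index)
        offset = 0
        for i in index:  # row-major offset within the CHUNK**ndim block
            offset = offset * CHUNK + (i % CHUNK)
        return chunk_key, offset

    def __setitem__(self, index, value):
        key, off = self._locate(index)
        block = self.chunks.setdefault(key, [None] * CHUNK**self.ndim)
        block[off] = value

    def __getitem__(self, index):
        key, off = self._locate(index)
        return self.chunks[key][off]

a = ChunkedExtendibleArray(ndim=2)
a[0, 0] = "first"
a[9, 17] = "extended"   # far outside the initial bounds: no relocation
print(a[0, 0], a[9, 17])
```

Because `_locate` is a pure computation, element access time is independent of how many times, or along which dimensions, the array has been extended, which is the contrast the abstract draws with HDF5's potentially expensive bound extension.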
500

Concurrency and sharing in prolog and in a picture editor for aldat

Gunnlaugsson, Bjorgvin January 1987
No description available.
