121 |
The deductive pathfinder : creating derivation plans for inferential question-answering / Klahr, Philip, January 1975 (has links)
Thesis (Ph. D.)--University of Wisconsin--Madison, 1975. / Typescript. Vita. eContent provider-neutral record in process. Description based on print version record. Includes bibliographical references (leaves 157-162).
|
122 |
GMIS : an experimental system for data management and analysis / Donovan, John J., Jacoby, Henry D. January 1975 (has links)
Prepared in association with the Sloan School of Management
|
123 |
Communication module for the micro-based small purchase contracting program / Bowman, W. Stevenson. January 1992 (has links)
Thesis (M.S. in Information Systems)--Naval Postgraduate School, March 1992. / Thesis Advisors: Matsushima, Rodney ; Sengupta, Kishore. "March, 1992." Description based on title screen as viewed on March 4, 2009. Includes bibliographical references (p. 270-271). Also available in print.
|
124 |
The impact of the data management approach on information systems auditing / Furstenburg, Don Friedrich, 1953- 11 1900 (has links)
In establishing the impact of formal data management practices on systems and systems development auditing in the context of a corporate data base environment, the most significant aspects of a data base environment as well as the concept of data management were researched.
It was established that organisations need to introduce a data management function to ensure the availability and integrity of data for the organisation. It was further established that an effective data management function can fulfil a key role in ensuring the integrity of the overall data base, and as such it becomes an important general control on which the auditor can rely.
The audit of information systems in a data base environment requires a more "holistic" audit approach; as a result, the auditor has to expand the scope of the systems audit to include an evaluation of the overall data base environment. / Auditing / M. Com (Applied Accounting)
|
125 |
Extensibility in ORDBMS databases : an exploration of the data cartridge mechanism in Oracle9i / Ndakunda, Tulimevava Kaunapawa 18 June 2013 (has links)
To support current and emerging database applications, Object-Relational Database Management Systems (ORDBMS) provide mechanisms to extend the data storage capabilities and the functionality of the database with application-specific types and methods. Using these mechanisms, the database may contain user-defined data types, large objects (LOBs), external procedures, extensible indexing, query optimisation techniques and other features that are treated in the same way as built-in database features. The many extensibility options provided by the ORDBMS, however, raise several implementation challenges that are not always obvious. This thesis examines a few of the key challenges that arise when extending an Oracle database with new functionality. To realise the potential of extensibility in Oracle, the thesis used the problem area of image retrieval as the main test domain. Current research efforts in image retrieval are lagging behind the required retrieval effectiveness, but are continuously improving. As better retrieval techniques become available, it is important that they are integrated into the available database systems to facilitate improved retrieval. The thesis also reports on the practical experiences gained from integrating an extensible indexing scenario. Sample scenarios are integrated in the Oracle9i database using the data cartridge mechanism, which allows Oracle database functionality to be extended with new functional components. The integration demonstrates how additional functionality may be effectively applied to both general and specialised domains in the database. It also reveals alternative design options that allow data cartridge developers, most of whom are not database server experts, to extend the database. The thesis is concluded with some of the key observations and options that designers must consider when extending the database with new functionality.
The main challenges for developers are the learning curve required to understand the data cartridge framework and the ability to adapt already developed code within the constraints of the data cartridge using the provided extensibility APIs. Maximum reusability relies on making good choices for the basic functions, out of which specialised functions can be built.
|
126 |
Design of relational database schemas : the traditional dependencies are not enough / Ola, Adegbemiga January 1982 (has links)
Hitherto, most relational database design methods have been based on functional dependencies (FDs) and multivalued dependencies (MVDs). Full mappings are proposed as an alternative to FDs and MVDs. A mapping between any two sets, apart from being one-one, many-one, or many-many, is either total or partial on the source and target sets. An 'into' mapping on a set expresses the fact that an element in the set may not be involved in the mapping; an 'onto' mapping on a set is total on the set. A many-many (into, onto) mapping from set A to set B is written as A^i m----n B^o.
The mappings incorporate more semantic information into data dependency specification. It is shown, informally, that the full mappings are more expressive than FDs and MVDs. Transformation rules, to generate Boyce-Codd normal form and projection-join normal form schemas from the full mappings, are defined. The full mapping/transformation rules provide a discipline for modeling nonfunctional relationships within a synthetic approach. / Science, Faculty of / Computer Science, Department of / Graduate
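The cardinality and into/onto classification described in the abstract lends itself to a small executable illustration. A minimal sketch, assuming hypothetical sets and a pair list for the example; the function name is an illustrative choice, not from the thesis:

```python
def classify_mapping(pairs, source, target):
    """Classify a binary relation from `source` to `target` in the
    full-mapping terminology: a cardinality (one-one, many-one,
    one-many, many-many) plus 'onto' (total) or 'into' (partial)
    on each side."""
    sources_of = {}  # target element -> set of related source elements
    targets_of = {}  # source element -> set of related target elements
    for a, b in pairs:
        targets_of.setdefault(a, set()).add(b)
        sources_of.setdefault(b, set()).add(a)
    left = "many" if any(len(s) > 1 for s in sources_of.values()) else "one"
    right = "many" if any(len(t) > 1 for t in targets_of.values()) else "one"
    # 'onto' means every element of the set takes part in the mapping
    src_kind = "onto" if set(targets_of) == set(source) else "into"
    tgt_kind = "onto" if set(sources_of) == set(target) else "into"
    return f"{left}-{right}", src_kind, tgt_kind

# Element 3 of A takes no part, while every element of B does:
# a many-many (into, onto) mapping, written A^i m----n B^o.
print(classify_mapping([(1, "x"), (2, "x"), (2, "y")], {1, 2, 3}, {"x", "y"}))
# → ('many-many', 'into', 'onto')
```

The into/onto flags are exactly the extra semantic information that FDs alone do not record: whether participation in the relationship is mandatory or optional on each side.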
|
127 |
Efficient Distributed Processing Over Micro-batched Data Streams / Ahmed Abdelhamid (10539053) 07 May 2021 (has links)
Advances in real-world applications require high-throughput processing over large data streams. Micro-batching is a promising computational model to support the needs of these applications. In micro-batching, the processing and batching of the data are interleaved, where the incoming data tuples are first buffered as data blocks, and then are processed collectively using parallel function constructs (e.g., Map-Reduce). The size of a micro-batch is set to guarantee a certain response-time latency that is to conform to the application’s service-level agreement. Compared to native tuple-at-a-time data stream processing, micro-batching can sustain higher data rates. However, existing micro-batch stream processing systems lack load-awareness optimizations that are necessary to maintain performance and enhance resource utilization. In this thesis, we investigate the micro-batching paradigm and pinpoint some of its design principles that can benefit from further optimization. A new data partitioning scheme termed Prompt is presented that leverages the characteristics of the micro-batch processing model. Prompt enables a balanced input to the batching and processing cycles of the micro-batching model, and achieves higher throughput processing with an increase in resource utilization. Moreover, Prompt+ is proposed to enforce latency by elastically adapting resource consumption according to workload changes. More specifically, Prompt+ employs a scheduling strategy that supports elasticity in response to workload changes while avoiding rescheduling bottlenecks. Moreover, we envision the use of deep reinforcement learning to efficiently partition data in distributed streaming systems. PartLy demonstrates the use of artificial neural networks to facilitate the learning of efficient partitioning policies that match the dynamic nature of streaming workloads.
Finally, all the proposed techniques are abstracted and generalized over three widely used stream processing engines. Experimental results using real and synthetic data sets demonstrate that the proposed techniques are robust against fluctuations in data distribution and arrival rates, and achieve up to 5x improvement in system throughput over state-of-the-art techniques without degradation in latency.
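The interleaved batch-then-process cycle the abstract describes can be sketched in a few lines. This is a minimal, single-machine illustration only; the function name, the fixed block size, and the thread-pool choice are assumptions for the sketch, not the Prompt implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def micro_batch_process(stream, block_size, fn, workers=4):
    """Buffer incoming tuples into fixed-size blocks (batching phase),
    then process each full block collectively with a parallel map
    construct (processing phase)."""
    results, block = [], []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for tup in stream:
            block.append(tup)              # batching: buffer the tuple
            if len(block) == block_size:
                # processing: apply fn to the whole block in parallel
                results.extend(pool.map(fn, block))
                block = []
        if block:                          # flush the final partial block
            results.extend(pool.map(fn, block))
    return results

# Example: square a stream of ten integers in blocks of four.
print(micro_batch_process(range(10), 4, lambda x: x * x))
# → [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

In a real engine the block size would be tuned against the service-level latency target rather than fixed, since a larger block raises throughput but delays every tuple buffered in it.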
|
128 |
A relational algebra database system for a microcomputer / Chiu, George Kwok-Wing. January 1982 (has links)
No description available.
|
129 |
Design of a large data base : a methodology comparison / Wilson, James R January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
|
130 |
Data structures and algorithms for data representation in constrained environments / Karras, Panagiotis. January 2007 (has links)
Computer Science / Doctoral / Doctor of Philosophy
|