About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Feedback design for sampled analog phase and gain detection in MDFE

Schmid, Volker, 1969- 05 May 1995 (has links)
Graduation date: 1995
12

Structure-from-motion for enclosed environments

Hakl, Henri. January 2007 (has links)
Thesis (PhD)--University of Stellenbosch, 2007. / Bibliography. Also available via the Internet.
13

Clock and data recovery circuits

Zhang, Ruiyuan, January 2004 (has links) (PDF)
Thesis (Ph. D.)--Washington State University. / Includes bibliographical references.
14

Database system architecture for fault tolerance and disaster recovery

Nguyen, Anthony. January 2009 (has links)
Thesis (M.S.C.I.T.)--Regis University, Denver, Colo., 2009. / Title from PDF title page (viewed on Jun. 26, 2010). Includes bibliographical references.
15

A cyclic approach to business continuity planning

Botha, Jacques January 2002 (has links)
The Information Technology (IT) industry has grown and has become an integral part of the world of business today. The importance of information, and of IT in particular, will in fact only increase with time (von Solms, 1999). For a large group of organizations, computer systems form the basis of their day-to-day functioning (Halliday, Badendorst & von Solms, 1996). These systems evolve at an incredible pace, and this brings about a greater need for securing them, as well as the organizational information processed, transmitted and stored. This technological evolution brings about new risks for an organization's systems and information (Halliday et al., 1996). If IT fails, the business could fail as well, creating a need for more rigorous IT management (International Business Machines Corporation, 2000).

For this reason, executive management must be made aware of the potential consequences that a disaster could have on the organization (Hawkins, Yen & Chou, 2000). A disaster could be any event that would cause a disruption in the normal day-to-day functioning of an organization. Such an event could range from a natural disaster, like a fire, an earthquake or a flood, to something more trivial, like a virus or system malfunction (Hawkins et al., 2000).

During the 1980s a discipline known as Disaster Recovery Planning (DRP) emerged to protect an organization's data centre, which was central to the organization's IT-based structure, from the effects of disasters. This solution, however, focused only on the protection of the data centre. During the early 1990s the focus shifted towards distributed computing and client/server technology. Data centre protection and recovery were no longer enough to ensure survival. Organizations needed to ensure the continuation of their mission-critical processes to support their continued goal of operations (IBM Global Services, 1999). Organizations now had to ensure that their mission-critical functions could continue while the data centre was recovering from a disaster. A different approach was required. It is for this reason that Business Continuity Planning (BCP) was accepted as a formal discipline (IBM Global Services, 1999).

To ensure that business continues as usual, an organization must have a plan in place that will help it ensure both the continuation and recovery of critical business processes and the recovery of the data centre, should a disaster strike (Moore, 1995). Wilson (2000) defines a business continuity plan as “a set of procedures developed for the entire enterprise, outlining the actions to be taken by the IT organization, executive staff, and the various business units in order to quickly resume operations in the event of a service interruption or an outage”. With markets being as highly competitive as they are, an organization needs a detailed listing of steps to follow to ensure minimal loss due to downtime. This is very important for maintaining its competitive advantage and public stature (Wilson, 2000). The fact that the company's reputation is at stake requires executive management to take continuity planning very seriously (IBM Global Services, 1999). Ensuring continuity of business processes and recovering the IT services of an organization is not the sole responsibility of the IT department. Management should therefore be aware that they could be held liable for any consequences resulting from a disaster (Kearvell-White, 1996).

Having a business continuity plan in place is important to the entire organization, as everyone, from executive management to the employees, stands to benefit from it (IBM Global Services, 1999). Despite this, numerous organizations do not have a business continuity plan in place. Organizations neglecting to develop a plan put themselves at tremendous risk and stand to lose everything (Kearvell-White, 1996).
16

Structure-from-motion for enclosed environments

Hakl, Henri 12 1900 (has links)
Thesis (PhD (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2007. / A structure-from-motion implementation for enclosed environments is presented. The aspects covered include a discussion of optimised luminance computations, a technique for computing an optimally weighted luminance that maintains the greatest amount of data fidelity. Furthermore, a visual engine is created that forms the basis of data input for reconstruction purposes; this inexpensive solution is found to offer realistic environments along with precise control of scene and camera elements. A motion estimation system provides tracking information for scene elements, and an unscented Kalman filter is used as a depth estimator. These elements are combined into an accurate reconstructor for enclosed environments.
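The abstract does not specify how the luminance weights are chosen, so the sketch below illustrates one plausible reading in Python: per-image weights taken from the leading principal component of the colour distribution (so the grey image retains the largest share of the original variance), compared against the fixed Rec. 601 weighting. The PCA-based weighting and the function names are illustrative assumptions, not the thesis implementation.

```python
# Hypothetical sketch, not the thesis implementation: per-image luminance
# weights chosen to preserve as much variance as possible, compared with
# the fixed Rec. 601 weighting.
import numpy as np

REC601 = np.array([0.299, 0.587, 0.114])  # standard fixed luminance weights


def fixed_luminance(rgb):
    """Luminance with fixed Rec. 601 weights; rgb has shape (H, W, 3)."""
    return rgb @ REC601


def optimised_luminance(rgb):
    """Per-image weighted luminance: weights follow the leading principal
    component of the colour distribution, one reading of 'data fidelity'."""
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    pixels -= pixels.mean(axis=0)
    cov = pixels.T @ pixels / len(pixels)        # 3x3 channel covariance
    _, eigvecs = np.linalg.eigh(cov)             # eigenvalues ascending
    w = np.abs(eigvecs[:, -1])                   # leading eigenvector
    w /= w.sum()                                 # normalise weights to sum to 1
    return rgb @ w


if __name__ == "__main__":
    frame = np.random.rand(120, 160, 3)          # stand-in for a rendered frame
    print("fixed variance:    ", fixed_luminance(frame).var())
    print("optimised variance:", optimised_luminance(frame).var())
```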
17

Empirical studies toward DRP constructs and a model for DRP development for information systems function

Ha, Wai On 01 January 2002 (has links)
No description available.
18

Logging Subsystem Performance: Model and Evaluation

Clark, Thomas K. 21 October 1994 (has links)
Transaction logging is an integral part of ensuring proper transformation of data from one state to another in modern data management. Because of this, the throughput of the logging subsystem can be critical to the throughput of an application. The purpose of this research is to break the log bottleneck at minimum cost.

We first present a model for evaluating a logging subsystem, where a logging subsystem is made up of a log device, a log backup device, and the interconnect algorithm between the two, which we term the log backup method. Included in the logging model is a set of criteria for evaluating a logging subsystem and a system for weighting the criteria in order to facilitate comparisons of two logging subsystem configurations to determine the better of the two.

We then present an evaluation of each of the pieces of the logging subsystem in order to increase the bandwidth of both the log device and log backup device, while selecting the best log backup method, at minimum cost. We show that the use of striping and RAID is the best alternative for increasing log device bandwidth. Along with our discussion of RAID, we introduce a new RAID algorithm that is designed to overcome the performance problems of small writes in a RAID log.

In order to increase the effective bandwidth of the log backup device, we suggest the use of inexpensive magnetic tape drives and striping in the log backup device, where the bandwidth of the log backup device is increased to the point that it matches the bandwidth of the log device. For the log backup interconnect algorithm, we present the novel approach of backing up the log synchronously, where the log backup device is essentially a mirror of the log device, as well as evaluating other log backup interconnect algorithms.

Finally, we present a discussion of a prototype implementation of some of the ideas in the thesis. The prototype was implemented in a commercial database system, using a beta version of INFORMIX-OnLine Dynamic Server™ version 6.0.
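To make the striping and synchronous-backup ideas concrete, here is a minimal Python sketch, assuming plain files stand in for log and backup devices: log records are striped round-robin across several log files and mirrored to a backup file before the append returns. The class, method names, and file paths are illustrative assumptions, not the thesis prototype or the INFORMIX implementation.

```python
# Illustrative sketch only: round-robin striping across "log devices" plus a
# synchronous mirror to a "log backup device", both modelled as plain files.
import os


class StripedMirroredLog:
    def __init__(self, stripe_paths, backup_path):
        # Unbuffered binary appends so each write reaches the OS immediately.
        self.stripes = [open(p, "ab", buffering=0) for p in stripe_paths]
        self.backup = open(backup_path, "ab", buffering=0)
        self.next_stripe = 0

    def append(self, record: bytes):
        """Write one log record; return only after both copies are durable."""
        stripe = self.stripes[self.next_stripe]
        self.next_stripe = (self.next_stripe + 1) % len(self.stripes)
        stripe.write(record)
        self.backup.write(record)          # synchronous backup (mirror of the log)
        os.fsync(stripe.fileno())          # force both copies to stable storage
        os.fsync(self.backup.fileno())

    def close(self):
        for f in self.stripes + [self.backup]:
            f.close()


if __name__ == "__main__":
    log = StripedMirroredLog(["log0.dat", "log1.dat"], "log_backup.dat")
    log.append(b"BEGIN T1; UPDATE ...; COMMIT T1\n")
    log.close()
```

With the mirror written synchronously, the backup is always as current as the log itself, which is the property the abstract highlights for the synchronous log backup method.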
19

Remote data backup system for disaster recovery

Lin, Hua. January 2004 (has links)
Thesis (M.S.)--University of Hawaii at Manoa, 2004. / Includes bibliographical references (leaves 64-66). Also available via World Wide Web.
20

Enhancing availability in large scale

Seshadri, Sangeetha. January 2009 (has links)
Thesis (Ph.D)--Computing, Georgia Institute of Technology, 2009. / Committee Chair: Ling Liu; Committee Member: Brian Cooper; Committee Member: Calton Pu; Committee Member: Douglas Blough; Committee Member: Karsten Schwan. Part of the SMARTech Electronic Thesis and Dissertation Collection.
