21

Clock and data recovery circuits

Zhang, Ruiyuan, January 2004 (has links) (PDF)
Thesis (Ph. D.)--Washington State University. / Includes bibliographical references.
22

Database system architecture for fault tolerance and disaster recovery

Nguyen, Anthony. January 2009 (has links)
Thesis (M.S.C.I.T.)--Regis University, Denver, Colo., 2009. / Title from PDF title page (viewed on Jun. 26, 2010). Includes bibliographical references.
23

A cyclic approach to business continuity planning

Botha, Jacques January 2002 (has links)
The Information Technology (IT) industry has grown and has become an integral part of the world of business today. The importance of information, and IT in particular, will in fact only increase with time (von Solms, 1999). For a large group of organizations computer systems form the basis of their day-to-day functioning (Halliday, Badenhorst & von Solms, 1996). These systems evolve at an incredible pace, and this brings about a greater need for securing them, as well as the organizational information processed, transmitted and stored. This technological evolution brings about new risks for an organization’s systems and information (Halliday et al., 1996). If IT fails, the business could fail as well, creating a need for more rigorous IT management (International Business Machines Corporation, 2000). For this reason, executive management must be made aware of the potential consequences that a disaster could have on the organization (Hawkins, Yen & Chou, 2000). A disaster could be any event that would cause a disruption in the normal day-to-day functioning of an organization. Such an event could range from a natural disaster, like a fire, an earthquake or a flood, to something more trivial, like a virus or system malfunction (Hawkins et al., 2000).
During the 1980s a discipline known as Disaster Recovery Planning (DRP) emerged to protect an organization’s data centre, which was central to the organization’s IT-based structure, from the effects of disasters. This solution, however, focussed only on the protection of the data centre. During the early 1990s the focus shifted towards distributed computing and client/server technology. Data centre protection and recovery were no longer enough to ensure survival. Organizations needed to ensure the continuation of their mission-critical processes to support their continued goal of operations (IBM Global Services, 1999). Organizations now had to ensure that their mission-critical functions could continue while the data centre was recovering from a disaster. A different approach was required. It is for this reason that Business Continuity Planning (BCP) was accepted as a formal discipline (IBM Global Services, 1999).
To ensure that business continues as usual, an organization must have a plan in place that will help it ensure both the continuation and recovery of critical business processes and the recovery of the data centre, should a disaster strike (Moore, 1995). Wilson (2000) defines a business continuity plan as “a set of procedures developed for the entire enterprise, outlining the actions to be taken by the IT organization, executive staff, and the various business units in order to quickly resume operations in the event of a service interruption or an outage”. With markets being as highly competitive as they are, an organization needs a detailed listing of steps to follow to ensure minimal loss due to downtime. This is very important for maintaining its competitive advantage and public stature (Wilson, 2000). The fact that the company’s reputation is at stake requires executive management to take continuity planning very seriously (IBM Global Services, 1999). Ensuring continuity of business processes and recovering the IT services of an organization is not the sole responsibility of the IT department. Management should therefore be aware that they could be held liable for any consequences resulting from a disaster (Kearvell-White, 1996).
Having a business continuity plan in place is important to the entire organization, as everyone, from executive management to the employees, stands to benefit from it (IBM Global Services, 1999). Despite this, numerous organizations do not have a business continuity plan in place. Organizations neglecting to develop a plan put themselves at tremendous risk and stand to lose everything (Kearvell-White, 1996).
24

Modelling and applications of MOS varactors for high-speed CMOS clock and data recovery

Sameni, Pedram 05 1900 (has links)
The high-speed clock and data recovery (CDR) circuit is a key building block of modern communication systems with applications spanning a wide range from wireline long-haul networks to chip-to-chip and backplane communications. In this dissertation, our focus is on the modelling, design and analysis of devices and circuits used in this versatile system in CMOS technology. Of these blocks, we have identified the voltage-controlled oscillator (VCO) as an important circuit that contributes to the total noise performance of the CDR. Among different solutions known for this circuit, LC-VCO is acknowledged to have the best phase noise performance, due to the filtering characteristic of the LC tank circuit. We provide details on modelling and characterization of a special type of varactor, the accumulation-mode MOS varactor, used in the tank circuit as a tuning component of these types of VCOs. We propose a new sub-circuit model for this type of varactor, which can be easily migrated to other technologies as long as an accurate model exists for MOS transistors. The model is suitable whenever the numerical models have convergence problems and/or are not defined for the specific designs (e.g., minimum length structures). The model is verified directly using measurement in a standard CMOS 0.13µm process, and indirectly by comparing the tuning curves of an LC-VCO designed in CMOS 0.13µm and 0.18µm processes. Using a varactor, a circuit technique is proposed for designing a narrowband tuneable clock buffer, which can be used in a variety of applications including the CDR system. The buffer automatically adjusts its driving bandwidth to that of the VCO, using the same control voltage that controls the frequency of the VCO. In addition, a detailed analysis of the impact of large output signals on the tuning characteristics of the LC-VCO is performed. It is shown that the oscillation frequency of the VCO deviates from that of an LC tank. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
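For orientation, the small-signal relation that the last point refers to is the standard LC-tank expression below; the symbols (L for the tank inductance, C_fix for the fixed tank capacitance, C_var for the varactor capacitance at control voltage V_ctrl) are generic background, not notation taken from the dissertation.

```latex
% Small-signal oscillation frequency of an LC tank tuned by a MOS varactor.
% Under large output swings the varactor is exercised over much of its C-V
% curve, so the effective capacitance becomes a swing-dependent average and
% the actual oscillation frequency deviates from this expression.
f_{\mathrm{osc}} \approx \frac{1}{2\pi\sqrt{L\left(C_{\mathrm{fix}} + C_{\mathrm{var}}(V_{\mathrm{ctrl}})\right)}}
```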
25

Signal generation and evaluation using Digital-to-Analog Converter and Software Defined Radio

Choudhury, Aakash 08 August 2023 (has links)
In contemporary communication systems, Digital-to-Analog Converters (DACs), Software Defined Radio (SDR) signal creation, and clock data recovery are essential components. DACs convert digital signals to analog signals, creating continuous waveforms. DACs provide versatility in SDR transmission by supporting a range of communication protocols. Clock data recovery enables precise signal recovery and synchronization at the receiver end. These elements work together to provide effective and high-quality communication systems across several sectors. With the development of quantum computing, SDR systems also find extensive use in generating precisely timed signals for controlling components of a quantum computer and for read-out operations from various specialized instruments. This thesis demonstrates a platform built from an FPGA (Xilinx vcu118) and a DAC (Analog Devices AD9081). It employs SDR to generate periodic signals as well as bit streams, which are then recovered using a simple Clock Data Recovery technique. The signal integrity of the generated signals and the error rate of the proposed Clock Data Recovery technique are also analyzed. / Master of Science / Communication systems in our networked world depend on key technologies to provide dependable connectivity. By converting digital data into continuous waveforms, Digital-to-Analog Converters (DACs) serve a crucial role in enabling the generation of various analog signals. This makes it possible for Software-Defined Radio (SDR) to produce a variety of modulated signals and enables smooth communication between different hardware and software systems. Clock and Data Recovery (CDR) algorithms correct for clock fluctuations and phase offsets to provide precise signal recovery and synchronization. Together, these technologies improve the effectiveness and dependability of communication networks, allowing seamless connectivity and enhancing our networked experiences. This thesis presents an SDR platform comprising the Xilinx vcu118 FPGA and the Analog Devices AD9081 high-speed DAC/ADC. A CDR algorithm is also proposed to recover data from the signals generated by the DAC, and its effectiveness and error rate are analyzed.
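As a rough illustration of what a "simple Clock Data Recovery technique" can look like in software, the sketch below oversamples a binary waveform, re-centres a bit counter on every transition, and slices each bit near the middle of its unit interval. The function name, oversampling factor, and clean test waveform are illustrative assumptions, not the implementation evaluated in the thesis.

```python
def recover_bits(samples, samples_per_bit):
    """Recover a bit stream from a binary waveform sampled at
    samples_per_bit times the bit rate (illustrative sketch only)."""
    bits = []
    count = samples_per_bit // 2      # aim the first decision at mid-bit
    prev = samples[0]
    for s in samples:
        if s != prev:
            # Edge seen: re-centre so the next decision lands half a
            # unit interval after the transition.
            count = samples_per_bit // 2
        else:
            count -= 1
            if count <= 0:            # mid-bit reached: slice the data
                bits.append(s)
                count = samples_per_bit
        prev = s
    return bits

if __name__ == "__main__":
    # Clean waveform: bits 1, 0, 0, 1 held for 4 samples each
    wave = [1] * 4 + [0] * 8 + [1] * 4
    print(recover_bits(wave, samples_per_bit=4))   # -> [1, 0, 0, 1]
```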
26

Structure-from-motion for enclosed environments

Hakl, Henri 12 1900 (has links)
Thesis (PhD (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2007. / A structure-from-motion implementation for enclosed environments is presented. The various aspects covered include a discussion of optimised luminance computations, a technique for computing an optimally weighted luminance that preserves the greatest amount of data fidelity. Furthermore, a visual engine is created that forms the basis of data input for reconstruction purposes; this inexpensive solution is found to offer realistic environments along with precise control of scene and camera elements. A motion estimation system provides tracking information for scene elements, and an unscented Kalman filter is used as a depth estimator. These elements are combined into an accurate reconstructor for enclosed environments.
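The weighted-luminance step mentioned above can be pictured with the minimal sketch below, which collapses an RGB image to a single channel using fixed Rec. 601 weights; the thesis derives optimised weights rather than using these standard values.

```python
import numpy as np

def luminance(rgb, weights=(0.299, 0.587, 0.114)):
    """Map an H x W x 3 RGB image to a single luminance channel as a
    weighted sum of the colour channels (standard weights, not the
    optimised ones from the thesis)."""
    w = np.asarray(weights, dtype=np.float64)
    return np.asarray(rgb, dtype=np.float64) @ w   # contracts the last axis

# Example: a 2 x 2 image with one red, green, blue and white pixel
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]])
print(luminance(img))
```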
27

Creating Volatility Support for FreeBSD

Bond, Elyse 11 August 2015 (has links)
Digital forensics is the investigation and recovery of data from digital hardware. The field has grown in recent years to include support for operating systems such as Windows, Linux and Mac OS X. However, little to no support has been provided for less well known systems such as the FreeBSD operating system. The project presented in this paper focuses on creating the foundational support for FreeBSD via Volatility, a leading forensic tool in the digital forensic community. The kernel and source code for FreeBSD were studied to understand how to recover various data from analysis of a given system’s memory image. This paper will focus on the base Volatility support that was implemented, as well as the additional plugins created to recover desired data, including but not limited to the retrieval of a system’s process list and mounted file systems.
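To make the recovery idea concrete, the sketch below shows the general technique of walking a linked list of process records inside a raw memory image. The field offsets and record layout are hypothetical placeholders; the actual plugins work against the real FreeBSD struct proc layout through Volatility's address-space and profile machinery rather than the flat reads used here.

```python
import struct

# Hypothetical record layout (NOT the real FreeBSD struct proc).
OFF_PID  = 0x00   # 4-byte process id
OFF_NAME = 0x08   # 16-byte command name, NUL-terminated
OFF_NEXT = 0x20   # 8-byte offset of the next record in the image
REC_SIZE = 0x28

def walk_process_list(image, head_offset):
    """Yield (pid, name) pairs by following 'next' links in a memory dump."""
    offset, seen = head_offset, set()
    while offset and offset not in seen and offset + REC_SIZE <= len(image):
        seen.add(offset)                                   # guard against cycles
        pid = struct.unpack_from("<I", image, offset + OFF_PID)[0]
        raw = image[offset + OFF_NAME:offset + OFF_NAME + 16]
        yield pid, raw.split(b"\0")[0].decode("ascii", errors="replace")
        offset = struct.unpack_from("<Q", image, offset + OFF_NEXT)[0]
```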
28

Empirical studies toward DRP constructs and a model for DRP development for information systems function

Ha, Wai On 01 January 2002 (has links)
No description available.
29

Logging Subsystem Performance: Model and Evaluation

Clark, Thomas K. 21 October 1994 (has links)
Transaction logging is an integral part of ensuring proper transformation of data from one state to another in modern data management. Because of this, the throughput of the logging subsystem can be critical to the throughput of an application. The purpose of this research is to break the log bottleneck at minimum cost. We first present a model for evaluating a logging subsystem, where a logging subsystem is made up of a log device, a log backup device, and the interconnect algorithm between the two, which we term the log backup method. Included in the logging model is a set of criteria for evaluating a logging subsystem and a system for weighting the criteria in order to facilitate comparisons of two logging subsystem configurations to determine the better of the two. We then present an evaluation of each of the pieces of the logging subsystem in order to increase the bandwidth of both the log device and log backup device, while selecting the best log backup method, at minimum cost. We show that the use of striping and RAID is the best alternative for increasing log device bandwidth. Along with our discussion of RAID, we introduce a new RAID algorithm that is designed to overcome the performance problems of small writes in a RAID log. In order to increase the effective bandwidth of the log backup device, we suggest the use of inexpensive magnetic tape drives and striping in the log backup device, where the bandwidth of the log backup device is increased to the point that it matches the bandwidth of the log device. For the log backup interconnect algorithm, we present the novel approach of backing up the log synchronously, where the log backup device is essentially a mirror of the log device, as well as evaluating other log backup interconnect algorithms. Finally, we present a discussion of a prototype implementation of some of the ideas in the thesis. The prototype was implemented in a commercial database system, using a beta version of INFORMIX-OnLine Dynamic Server™ version 6.0.
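The weighted-criteria comparison described in the model can be illustrated with the short sketch below; the criteria, weights, and ratings are invented for illustration and are not the thesis's actual evaluation data.

```python
def weighted_score(ratings, weights):
    """Total score of one logging-subsystem configuration: sum of
    rating * weight over the shared criteria."""
    return sum(ratings[c] * w for c, w in weights.items())

# Hypothetical criteria and weights (must sum to 1.0 for easy comparison)
weights = {"bandwidth": 0.4, "recovery_time": 0.3, "cost": 0.2, "complexity": 0.1}

striped_raid = {"bandwidth": 9, "recovery_time": 8, "cost": 5, "complexity": 6}
single_disk  = {"bandwidth": 4, "recovery_time": 6, "cost": 9, "complexity": 9}

for name, cfg in [("striped RAID log", striped_raid), ("single-disk log", single_disk)]:
    print(f"{name}: {weighted_score(cfg, weights):.1f}")
```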
30

Low Power Clock and Data Recovery Integrated Circuits

Ardalan, Shahab 22 October 2007 (has links)
Advances in technology and the introduction of high-speed processors have increased the demand for fast, compact and commercial methods of transferring large amounts of data. The next generation of the communication access network will use optical fiber as a medium for data transmission to the subscriber. In optical or chip-to-chip data communication, the continuous received data needs to be converted to discrete data. For this conversion, a synchronous clock and data are required. A clock and data recovery (CDR) circuit recovers the phase information from the data and generates the in-phase clock and data. In this dissertation, two clock and data recovery circuits for gigabit-per-second (Gbps) serial data communication are designed and fabricated in 180 nm and 90 nm CMOS technology. The primary objective was to reduce circuit power dissipation for multi-channel data communication applications. The power saving is achieved using a low-swing voltage signaling scheme. Furthermore, a novel low-input-swing Alexander phase detector is introduced. The proposed phase detector reduces the power consumption of the transmitter and receiver blocks. The circuit demonstrates a low power dissipation of 340 µW/Gbps in 90 nm CMOS technology. The CDR is able to recover input signals with a swing of 35 mVp. The peak-to-peak jitter is 21 ps and the RMS jitter is 2.5 ps. The total core area excluding pads is approximately 0.01 mm².
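For context, the decision rule of a conventional Alexander (bang-bang) phase detector, which the proposed low-swing variant builds on, can be sketched as follows; this is the textbook rule rather than the circuit reported in the dissertation.

```python
def alexander_pd(prev_bit, edge_sample, curr_bit):
    """Bang-bang phase decision from three half-bit-spaced samples:
    the previous data decision, the edge sample taken half a bit later,
    and the current data decision. Returns 'early', 'late', or None."""
    if prev_bit == curr_bit:
        return None            # no data transition, no phase information
    if edge_sample == prev_bit:
        return "early"         # edge sampled before the transition arrived
    return "late"              # edge sampled after the transition

# Example: a 0 -> 1 transition where the edge sample already reads 1,
# so the recovered clock is sampling late.
print(alexander_pd(0, 1, 1))   # -> 'late'
```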
