171

High level strategy for detection of transient faults in computer systems

Modi, Nimish Harsukh January 1988 (has links)
A major portion of digital system malfunctions is due to the presence of temporary faults, which are either intermittent or transient. An intermittent fault manifests itself at regular intervals, while a transient fault causes a temporary change in the state of the system without damaging any of the components. Transient faults are difficult to detect and isolate and hence become a source of major concern, especially in critical real-time applications. Since satellite systems are particularly susceptible to transient faults induced by the radiation environment, a satellite communications protocol model has been developed for experimental research purposes. The model implements the MIL-STD-1553B protocol, which dictates the modes of communication between several satellite systems. The model has been developed employing the structural and behavioral capabilities of the HILO simulation system. Single event upsets (SEUs) are injected into the protocol model and the effects on the program flow are investigated. A two-tier detection scheme employing the concept of Signature Analysis is developed. Performance evaluation of the detection mechanisms is carried out and the results are presented. / Master of Science
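
The two-tier scheme relies on signature analysis of the program flow. As an illustration only (the thesis implements its scheme inside the HILO protocol model, not in software like this), a minimal sketch of signature-based control-flow checking, with an arbitrarily chosen LFSR polynomial and hypothetical block IDs:

```python
# Hypothetical sketch of signature-based control-flow checking (not the
# thesis's HILO implementation): each executed block folds its ID into a
# running LFSR-style signature; a checkpoint compares it to a precomputed
# reference and flags a transient (e.g. SEU-induced) control-flow error.

POLY = 0xB400  # taps of a 16-bit LFSR, chosen arbitrarily for illustration

def step_signature(sig: int, block_id: int) -> int:
    """Fold one basic-block ID into the 16-bit running signature."""
    sig ^= block_id & 0xFFFF
    lsb = sig & 1
    sig >>= 1
    if lsb:
        sig ^= POLY
    return sig

def run_trace(trace):
    sig = 0
    for block_id in trace:
        sig = step_signature(sig, block_id)
    return sig

# Reference signature computed off-line from the fault-free program flow.
golden_trace = [0x01, 0x02, 0x05, 0x07]
REFERENCE = run_trace(golden_trace)

# A transient fault that diverts control flow changes the signature.
faulty_trace = [0x01, 0x02, 0x06, 0x07]     # block 0x05 skipped, 0x06 executed
assert run_trace(golden_trace) == REFERENCE
assert run_trace(faulty_trace) != REFERENCE  # detected at the checkpoint
```
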
172

The design of periodically self restoring redundant systems

Singh, Adit D. January 1982 (has links)
Most existing fault tolerant systems employ some form of dynamic redundancy and can be considered to be incident driven. Their recovery mechanisms are triggered by the detection of a fault. This dissertation investigates an alternative approach to fault tolerant design where the redundant system restores itself periodically to correct errors before they build up to the point of system failure. It is shown that periodically self restoring systems can be designed to be tolerant of both transient (intermittent) and permanent hardware faults. Further, the reliability of such designs is not compromised by fault latency. The periodically self restoring redundant (PSRR) systems presented in this dissertation employ, in general, N computing units (CU's) operating redundantly in synchronization. The CU's communicate with each other periodically to restore units that may have failed due to transient faults. This restoration is initiated by an interrupt from an external (fault tolerant) clocking circuit. A reliability model for such systems is developed in terms of the number of CU's in the system, their failure rates and the frequency of system restoration. Both transient and permanent faults are considered. The model allows the estimation of system reliability and mean time to failure. A restoration algorithm for implementing the periodic restoration process in PSRR systems is also presented. Finally a design procedure is described that can be used for designing PSRR systems to meet desired reliability specifications. / Ph. D.
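
The reliability model itself is developed in the dissertation; what follows is only a back-of-the-envelope sketch of how the reliability of an N-unit periodically restored system can be estimated, assuming Poisson transient arrivals, majority survival per restoration interval, and invented parameter values:

```python
# Illustrative sketch (not the dissertation's model): reliability of a
# periodically self-restoring system with N redundant computing units.
# Assumptions: transient upsets hit each CU as a Poisson process with rate
# lam (per hour), every restoration interval T wipes accumulated upsets,
# and an interval is survived as long as a majority of CUs stays clean.
from math import exp, comb

def interval_survival(n: int, lam: float, T: float) -> float:
    """P(a majority of the n CUs is untouched by a transient during one interval)."""
    p_clean = exp(-lam * T)            # one CU sees no upset in T
    need = n // 2 + 1                  # minimum number of clean CUs
    return sum(comb(n, k) * p_clean**k * (1 - p_clean)**(n - k)
               for k in range(need, n + 1))

def mission_reliability(n: int, lam: float, T: float, mission: float) -> float:
    """Survive every restoration interval over the whole mission time."""
    intervals = int(mission / T)
    return interval_survival(n, lam, T) ** intervals

# Example: 5 CUs, 0.01 upsets/hour per CU, 10-hour mission.
for T in (0.1, 1.0, 10.0):             # restore every 6 min, every hour, or never
    print(f"T={T:5.1f} h  R={mission_reliability(5, 0.01, T, 10.0):.6f}")
```
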
173

Built-in tests for a real-time embedded system.

Olander, Peter Andrew. January 1991 (has links)
Beneath the facade of the applications code of a well-designed real-time embedded system lies intrinsic firmware that facilitates a fast and effective means of detecting and diagnosing inevitable hardware failures. These failures can encumber the availability of a system, and, consequently, an identification of the source of the malfunction is needed. It is shown that the number of possible origins of all manner of failures is immense. As a result, fault models are contrived to encompass prevalent hardware faults. Furthermore, the complexity is reduced by determining syndromes for particular circuitry and applying test vectors at a functional block level. Testing phases and philosophies, together with standardisation policies, are defined to ensure the compliance of system designers with the underlying principles of evaluating system integrity. The three testing phases of power-on self tests at system start up, on-line health monitoring and off-line diagnostics are designed to ensure that the inherent test firmware remains inconspicuous during normal applications. The prominence of the code is, however, apparent on the detection or diagnosis of a hardware failure. The authenticity of the theoretical models, standardisation policies and built-in test philosophies is illustrated by means of their application to an intricate real-time system. The architecture and the software design implementing these ideologies are described extensively. Standardisation policies, enhanced by the proposition of generic tests for common core components, are advocated at all hierarchical levels. The presentation of the integration of the hardware and software is aimed at portraying the moderately complex nature of the task of generating a set of built-in tests for a real-time embedded system. In spite of generic policies, the intricacies of the architecture are found to have a direct influence on software design decisions. It is thus concluded that the diagnostic objectives of the user requirements specification should be lucidly expressed by both operational and maintenance personnel for all testing phases. Disparity may exist between the system designer and the end user in the understanding of the requirements specification defining the objectives of the diagnosis. Complete collaboration between the two parties is thus essential throughout the development life cycle, and especially during the preliminary design phase. Thereafter, the designer would be able to decide on the sophistication of the system testing capabilities. / Thesis (M.Sc.)-University of Natal, Durban, 1991.
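
As a rough illustration of the three testing phases named above (power-on self tests, on-line health monitoring, off-line diagnostics), a hypothetical skeleton follows; real built-in-test firmware of this kind would be written in C against actual hardware, and none of the names below come from the thesis:

```python
# Hypothetical organisation of the three built-in-test phases described in
# the abstract (real BIT firmware would be C on the target; Python is used
# here only to keep the illustration compact).
import time

def post_tests():
    """Power-on self tests: exhaustive checks run once at start-up."""
    return all(check() for check in (ram_march_test, rom_checksum, timer_check))

def online_monitor(period_s=1.0, budget_s=0.005):
    """On-line health monitoring: short, bounded checks interleaved with the
    application so the test code stays inconspicuous during normal running."""
    while True:
        deadline = time.monotonic() + budget_s
        for check in (watchdog_kick, stack_guard_check):
            if time.monotonic() >= deadline:
                break                      # never overrun the time budget
            if not check():
                raise RuntimeError(f"BIT failure in {check.__name__}")
        time.sleep(period_s)

def offline_diagnostics():
    """Off-line diagnostics: invasive tests run only when taken out of service."""
    return {check.__name__: check() for check in (ram_march_test, loopback_test)}

# Placeholder checks -- in a real system each would exercise actual hardware.
def ram_march_test():    return True
def rom_checksum():      return True
def timer_check():       return True
def watchdog_kick():     return True
def stack_guard_check(): return True
def loopback_test():     return True
```
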
174

Magnetic thin film coating and coding of the memory disk from a Minuteman Missile Computer

Turner, James A. 03 June 2011 (has links)
To regain operation of a Minuteman Missile guidance computer, a ferromagnetic film was sprayed onto a previously inoperable memory disk after the original coating was removed using paint remover. The coating was then polished down to provide a smooth and uniform film. The permanent data required for the clock and sector channels was determined from an operable Minuteman computer. This information was then recorded on the memory disk using the write heads which were part of the complete memory unit. Digital electronics using integrated circuits generated the recording data for the memory write heads. A "memory check" program verified the uniformity of the repaired memory by alternately writing "0's" and "1's" on each bit location and then reading the values back and comparing them against the written "0's" and "1's". / Ball State University, Muncie, IN 47306
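
A sketch of the alternating-pattern memory check described above, using a bytearray as a stand-in for the disk memory (illustrative only, not the original program):

```python
# Illustration of the "memory check" described above: alternately write all
# zeros and all ones to each location, read back, and compare.  The real
# check ran against the repaired magnetic disk; here a bytearray stands in
# for the memory under test.
def memory_check(mem: bytearray) -> list[int]:
    """Return the addresses that failed either the all-0s or all-1s pass."""
    bad = []
    for pattern in (0x00, 0xFF):            # "0's" pass, then "1's" pass
        for addr in range(len(mem)):
            mem[addr] = pattern
        for addr in range(len(mem)):
            if mem[addr] != pattern:
                bad.append(addr)
    return sorted(set(bad))

disk_image = bytearray(4096)                # stand-in for the disk surface
assert memory_check(disk_image) == []       # a healthy memory reports no errors
```
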
175

Cyberhistory

Falloon, Keith January 2002 (has links)
Cyberhistory is a thesis presented at The University of Western Australia for the Degree of Master of Science. Computer history is its prime field of focus. Cyberhistory pursues four key themes in computer history: gender, the notion of the periphery, access, and the role of the proselytiser. Cyberhistory argues that gender issues are significant to computer history, that culture ascribes gender to computing, and that culture has driven computer development as much as technological progress has. Cyberhistory identifies significant factors in the progress of computer technology in the 20th century. Cyberhistory finds that innovation can occur on the periphery, that access to computers can liberate and lead to progress, that key proselytisers have shaped the development of computing, and that computing has become decentralised due to a need for greater access to the information machine. Cyberhistory traces a symbolic journey from the industrial periphery to the centres of computing development during WWII, then out to a marginal computer centre and into the personal space of the room. From the room, Cyberhistory connects into cyberspace. Cyberhistory finds that, despite its chaos, the Internet can act as a sanctuary for those seeking to bring imagination and creativity to computing.
176

CLUE: A Cluster Evaluation Tool

Parker, Brandon S. 12 1900 (has links)
Modern high performance computing is dependent on parallel processing systems. Most current benchmarks reveal only the high level computational throughput metrics, which may be sufficient for single processor systems, but can lead to a misrepresentation of true system capability for parallel systems. A new benchmark is therefore proposed. CLUE (Cluster Evaluator) uses a cellular automata algorithm to evaluate the scalability of parallel processing machines. The benchmark also uses algorithmic variations to evaluate individual system components' impact on the overall serial fraction and efficiency. CLUE is not a replacement for other performance-centric benchmarks, but rather shows the scalability of a system and provides metrics to reveal where one can improve overall performance. CLUE is a new benchmark which demonstrates a better comparison among different parallel systems than existing benchmarks and can diagnose where a particular parallel system can be optimized.
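
CLUE's serial-fraction and efficiency metrics can be illustrated with the standard Karp-Flatt calculation; the sketch below uses invented timings and is not CLUE's implementation:

```python
# Illustration of the scalability metrics a CLUE-style benchmark reports.
# Given a measured speedup s on p processors, the experimentally determined
# serial fraction (Karp-Flatt) is f = (1/s - 1/p) / (1 - 1/p); efficiency
# is s/p.  The timings below are made up purely to show the calculation.
def serial_fraction(speedup: float, procs: int) -> float:
    return (1.0 / speedup - 1.0 / procs) / (1.0 - 1.0 / procs)

t1 = 100.0                                   # single-processor run time (s)
measured = {2: 52.0, 4: 28.0, 8: 16.0}       # hypothetical cluster timings

for p, tp in measured.items():
    s = t1 / tp
    print(f"p={p}: speedup={s:.2f}  efficiency={s/p:.2f}  "
          f"serial fraction={serial_fraction(s, p):.3f}")
```
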
177

Development of time and workload methodologies for Micro Saint models of visual display and control systems

Moscovic, Sandra A. 22 December 2005 (has links)
The Navy, through its Total Quality Leadership (TQL) program, has emphasized the need for objective criteria in making design decisions. There are numerous tools available to help human factors engineers meet the Navy's need. For example, simulation modeling provides objective design decisions without incurring the high costs associated with prototype building and testing. Unfortunately, simulation modeling of human-machine systems is limited by the lack of task completion time and variance data for various objectives. Moreover, no study has explored the use of a simulation model with a Predetermined Time System (PTS) as a valid method for making design decisions for display interactive consoles. This dissertation concerns the development and validation of a methodology to incorporate a PTS known as Modapts into a simulation modeling tool known as Micro Saint. The operator task context for the model was an interactive displays and controls console known as the AN/SLQ-32(V). In addition, the dissertation examined the incorporation of a cognitive workload metric known as the Subjective Workload Assessment Technique (SWAT) into the Micro Saint model. The dissertation was conducted in three phases. In the first phase, a task analysis was performed to identify operator task and hardware interface redesign options. In the second phase, data were collected from two groups of six participants who performed an operationally realistic task on 24 different configurations of a Macintosh AN/SLQ-32(V) simulator. Configurations of the simulated AN/SLQ-32(V) were defined by combinations of two display formats, two color conditions, and two emitter symbol sets, presented under three emitter density conditions. Data from Group 1 were used to assign standard deviations, probability distributions and Modapts times to a Micro Saint model of the task. The third phase of the study consisted of (1) verifying the model-generated performance scores and workload scores by comparison against scores obtained from Group 1 using regression analyses, and (2) validation of the model by comparison against Group 2. The results indicate that the Modapts/Micro Saint methodology was a valid way to predict performance scores obtained from the 24 simulated AN/SLQ-32(V) prototypes (R² = 0.78). The workload metric used in the task network model accounted for 76 percent of the variance in Group 2 mean workload scores, but the slope of the regression was different from unity (p = 0.05). The statistical finding suggests that the model does not provide an exact prediction of workload scores. Further regression analysis of Group 1 and Group 2 workload scores indicates that the two groups were not homogeneous with respect to workload ratings. / Ph. D.
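
The validation step, regressing observed scores on model-generated scores and inspecting R² and the slope, can be sketched as follows with invented numbers (not the dissertation's data):

```python
# Sketch of the validation step described above: regress observed scores on
# model-predicted scores, then inspect R^2 and how far the slope is from 1.
# The numbers are invented; they are not the dissertation's data.
import numpy as np

predicted = np.array([12.1, 14.0, 15.8, 18.2, 20.5, 22.9])   # model output
observed  = np.array([11.7, 14.6, 15.1, 18.9, 21.3, 22.2])   # group means

slope, intercept = np.polyfit(predicted, observed, 1)
fitted = slope * predicted + intercept
ss_res = np.sum((observed - fitted) ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"slope={slope:.3f}  intercept={intercept:.2f}  R^2={r_squared:.3f}")
# A slope near 1 with a small intercept indicates the model predicts scores
# directly; a good R^2 with a slope far from 1 (as reported for workload)
# indicates proportional rather than exact prediction.
```
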
178

Multimodule simulation techniques for chip level modeling

Cho, Chang H. January 1986 (has links)
A design and implementation of a multimodule chip-level simulator whose source description language is based on the original GSP2 system is described. To enhance the simulation speed, a special addressing ("sharing single memory location") scheme is used in the implementation of pin connections. The basic data structures and algorithms for the simulator are described. The developed simulator can simulate many digital devices interconnected as a digital network. It also has the capability of modeling external buses and handling the suspension of processes in the environment of multimodule simulation. An example of a multimodule digital system simulation is presented. / M.S.
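
The "sharing single memory location" idea for pin connections can be sketched as follows; the names are hypothetical, and the real simulator is a GSP2-based chip-level tool, not Python:

```python
# Sketch of the "sharing single memory location" idea for pin connections:
# every pin wired to the same net aliases one Signal cell, so driving the
# net is a single store and reading it needs no copying between modules.
# (Illustrative only; not the thesis's GSP2-based implementation.)
class Signal:
    __slots__ = ("value",)
    def __init__(self, value=0):
        self.value = value

class Module:
    def __init__(self, name):
        self.name = name
        self.pins = {}                       # pin name -> shared Signal cell

    def connect(self, pin, signal):
        self.pins[pin] = signal              # alias, not a copy

    def drive(self, pin, value):
        self.pins[pin].value = value         # one store updates the whole net

    def sample(self, pin):
        return self.pins[pin].value

bus = Signal()                               # one memory location per net
cpu, memory = Module("cpu"), Module("memory")
cpu.connect("data_out", bus)
memory.connect("data_in", bus)

cpu.drive("data_out", 0xA5)
assert memory.sample("data_in") == 0xA5      # visible without copying
```
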
179

Implementing security in an IP Multimedia Subsystem (IMS) next generation network - a case study

Unknown Date (has links)
The IP Multimedia Subsystem (IMS) has gone from just a step in the evolution of the GSM cellular architecture control core to being the de facto framework for Next Generation Network (NGN) implementations and deployments by operators worldwide, not only cellular mobile communications operators but also fixed-line, cable television, and alternative operators. With this transition from standards documents to the real world, engineers in these new multimedia communications companies need to face the task of making these new networks secure against threats and real attacks that were not a part of the previous generation of networks. We present the IMS and other competing frameworks, analyze the security issues, introduce the topic of Security Patterns along with several new patterns, including the basis for a Generic Network pattern, and apply these concepts to designing a security architecture for a fictitious 3G operator using IMS for the control core. / by Jose M. Ortiz-Villajos. / Thesis (M.S.C.S.)--Florida Atlantic University, 2009. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2009. Mode of access: World Wide Web.
180

Equivalence Checking for High-Assurance Behavioral Synthesis

Hao, Kecheng 10 June 2013 (has links)
The rapidly increasing complexities of hardware designs are forcing design methodologies and tools to move to the Electronic System Level (ESL), a higher abstraction level with better productivity than the state-of-the-art Register Transfer Level (RTL). Behavioral synthesis, which automatically synthesizes ESL behavioral specifications to RTL implementations, plays a central role in this transition. However, since behavioral synthesis is a complex and error-prone translation process, the lack of designers' confidence in its correctness becomes a major barrier to its wide adoption. Therefore, techniques for establishing equivalence between an ESL specification and its synthesized RTL implementation are critical to bring behavioral synthesis into practice. The major research challenge to equivalence checking for behavioral synthesis is the significant semantic gap between ESL and RTL. The semantics of ESL involve untimed, sequential execution; however, the semantics of RTL involve timed, concurrent execution. We propose a sequential equivalence checking (SEC) framework for certifying a behavioral synthesis flow, which exploits information on successive intermediate design representations produced by the synthesis flow to bridge the semantic gap. In particular, the intermediate design representation after scheduling and pipelining transformations permits effective correspondence of internal operations between this design representation and the synthesized RTL implementation, enabling scalable, compositional equivalence checking. Certifications of loop and function pipelining transformations are possible by a combination of theorem proving and SEC through exploiting pipeline generation information from the synthesis flow (e.g., the iteration interval of a generated pipeline). The complexity brought by bubbles in function pipelines is creatively reduced by symbolically encoding all possible bubble insertions in one pipelined design representation. The result of this dissertation is a robust, practical, and scalable framework for certifying RTL designs synthesized from ESL specifications. We have validated the robustness, practicality, and scalability of our approach on industrial-scale ESL designs that result in tens of thousands of lines of RTL implementations.
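
The untimed-versus-timed correspondence that such an SEC framework must establish can be made concrete with a toy example: an untimed specification, a two-stage pipelined implementation, and a latency-adjusted comparison of their output streams. The real framework proves this formally; the random co-simulation below is only an illustration with invented operations:

```python
# Toy illustration of the untimed-vs-timed correspondence that SEC must
# establish.  The untimed "ESL" spec maps an input straight to an output;
# the two-stage pipelined "RTL" produces the same stream two cycles later.
# A real SEC tool proves this formally; random co-simulation is used here
# only to make the correspondence concrete.
import random

def esl_spec(a, b, d):
    return a * b + d                     # untimed, sequential semantics

class PipelinedImpl:
    """Stage 1 computes a*b, stage 2 adds d: a latency of two cycles."""
    def __init__(self):
        self.stage1 = None               # (a*b, d) in flight
        self.stage2 = None               # finished result

    def cycle(self, inp):
        out = self.stage2
        self.stage2 = (self.stage1[0] + self.stage1[1]) if self.stage1 else None
        self.stage1 = (inp[0] * inp[1], inp[2]) if inp else None
        return out

impl = PipelinedImpl()
inputs = [tuple(random.randint(0, 255) for _ in range(3)) for _ in range(100)]
rtl_outputs = [impl.cycle(x) for x in inputs] + [impl.cycle(None), impl.cycle(None)]

# Correspondence: after dropping the two-cycle latency, the streams match.
assert rtl_outputs[2:] == [esl_spec(*x) for x in inputs]
```
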