11 |
Avoca: An environment for programming with protocols. O'Malley, Sean William. January 1990
This dissertation addresses three fundamental problems with network software as it is currently designed and implemented: the poor performance of highly modular (or layered) protocols, network software's inability to keep up with rapid changes in networking technology and application demands, and the inordinate amount of time it takes to produce new network protocols. These problems are solved through the use of a new platform for the implementation and execution of network protocols, a new methodology for the design of network protocols, and a new network architecture. Avoca is the result of taking a coordinated approach to network software design, implementation, and standardization, and consists of three parts: the Avoca platform, the Avoca methodology, and the Avoca Network Architecture. The Avoca platform is a small operating system kernel designed from scratch to implement network protocols efficiently. The Avoca methodology (or Meta-Protocol) is a set of rules governing the design and implementation of network protocols implemented on the Avoca platform. The Avoca Network Architecture is a novel architecture explicitly designed to support the rapidly changing networking environment. Using Avoca, highly layered network software can be implemented efficiently; Avoca proves that modularity is not inherently slow. Avoca's support for the encapsulation, underspecification, composition, and reuse of protocols is demonstrated. Avoca shows that network software is amenable to software engineering techniques that improve the protocol implementation process. Finally, Avoca demonstrates that a network architecture flexible enough to support a rapidly changing networking environment is possible.
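The layering model at the heart of this critique can be sketched generically: each protocol is a small object that frames messages on the send path and unframes them on the receive path, so a stack is just a composition of such objects. The sketch below is an illustrative, x-kernel-style toy in Python; the class and method names are assumptions, not Avoca's actual interface.

```python
# Illustrative sketch of protocol layering by composition. Class and
# method names (Protocol, push, pop) are assumptions for this example,
# not Avoca's real API.

class Protocol:
    """A protocol layer that frames messages for the layer below it."""
    def __init__(self, name, lower=None):
        self.name = name
        self.lower = lower  # next protocol down the stack, or None

    def push(self, payload: bytes) -> bytes:
        """Send path: prepend this layer's header, then descend."""
        framed = f"[{self.name}]".encode() + payload
        return self.lower.push(framed) if self.lower else framed

    def pop(self, packet: bytes) -> bytes:
        """Receive path: let lower layers strip their headers first."""
        data = self.lower.pop(packet) if self.lower else packet
        header = f"[{self.name}]".encode()
        assert data.startswith(header), f"bad frame for {self.name}"
        return data[len(header):]

# Compose a three-layer stack: rpc over transport over ip.
ip = Protocol("ip")
transport = Protocol("transport", lower=ip)
rpc = Protocol("rpc", lower=transport)

wire = rpc.push(b"hello")  # b'[ip][transport][rpc]hello'
print(wire)
print(rpc.pop(wire))       # b'hello'
```

Because each layer only knows its immediate neighbor, layers can be reordered, replaced, or reused; the efficiency question the dissertation answers is whether such composition must cost performance.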
|
12 |
COMPUTER AIDED PROCESS ORGANIZATION IN SOFTWARE DESIGN. KARIMI, JAHANGIR. January 1983
Recent studies point to major problems in today's software systems. Problems in cost, reliability, maintainability, and poor responsiveness to user requirements have their origin in the early phases of the system development effort. Although increasing awareness of poor design practice has stimulated several research efforts toward making the "detailed design" process more "systematic," there is a significant need for a computer-aided methodology to help designers cope with the complex design process. A framework is established for organizing activities in support of one important aspect of "detailed design": the organization of processes into appropriate process groups and program modules. A computer-aided methodology is presented for the analysis of a variety of inter-process relationships in the determination of effective modularizations. The proposed methodology extends current software engineering practice through partial automation of an important software engineering problem, the effective structuring of processes according to multiple design criteria. Multiple design criteria are used to determine inter-process relationships; the system accommodates a number of design criteria, including volume of data transport, distribution of data references, and information and control distribution. The methodology begins with the assignment of a graph structure to subsystem components and their interdependencies. The resulting graph is partitioned to determine subgraphs (modules) with strong intra-dependencies and weak inter-dependencies. The set of subgraphs defines modules which satisfy the principles of high module strength and low module coupling. The decomposition method used also produces a hierarchical structure of modules with little resource sharing. The resulting design limits "reference distribution" and "information distribution" between modules, which reduces the complexity of the total structure.
Analytical tools in support of these activities are presented, and support for the methodology is illustrated by a pilot study.
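The core partitioning step can be illustrated with a generic agglomerative heuristic (a sketch of the idea, not Karimi's exact algorithm): represent processes as graph nodes, weight edges by the strength of inter-process relationships, and merge strongly coupled processes into modules until only weak inter-module edges remain, yielding high module strength and low coupling.

```python
# Generic sketch of graph-based modularization: merge clusters joined by
# heavy edges (strong intra-dependencies), stop when only weak edges
# (low inter-module coupling) cross cluster boundaries. This heuristic
# and its threshold parameter are illustrative assumptions.

def modularize(edges, threshold):
    """edges: {(a, b): weight}. Merge clusters while the heaviest
    remaining inter-cluster edge weight exceeds `threshold`."""
    clusters = {n: {n} for pair in edges for n in pair}
    while True:
        best, best_w = None, threshold
        for (a, b), w in edges.items():
            if clusters[a] is not clusters[b] and w > best_w:
                best, best_w = (a, b), w
        if best is None:  # no strong inter-cluster edge left
            return {frozenset(c) for c in clusters.values()}
        merged = clusters[best[0]] | clusters[best[1]]
        for n in merged:
            clusters[n] = merged

# Processes with weighted relationships (e.g. data transport volume):
# p1-p2 and p3-p4 are tightly coupled; the weak p2-p3 link becomes the
# module boundary.
edges = {("p1", "p2"): 9, ("p2", "p3"): 1, ("p3", "p4"): 8}
print(modularize(edges, threshold=2))
# → {frozenset({'p1', 'p2'}), frozenset({'p3', 'p4'})}
```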
|
13 |
AN EXPERIMENTAL FRAME FOR A MODULAR HIERARCHICALLY COORDINATED ADAPTIVE COMPUTER ARCHITECTURE. Liaw, Yih-Shyan, 1955- January 1986
No description available.
|
14 |
Side channel attack resistance: Migrating towards high level methods. Borowczak, Mike. 21 December 2013
<p> Our world is moving towards ubiquitous networked computing with unstoppable momentum. With technology available at our every fingertip, we expect to connect quickly, cheaply, and securely on the sleekest devices. While the past four decades of design automation research have focused on making integrated circuits smaller, cheaper, and quicker, the past decade has drawn more attention towards security. Though security within the scope of computing is a large domain, the focus of this work is on the elimination of computationally based power byproducts, from high-level device models down to physical designs and implementations. The scope of this dissertation is the analysis, attack, and protection of power-based side channels. Research in the field concentrates on determining, masking, and/or eliminating the sources of data-dependent information leakage within designs. While a significant amount of research is devoted to reducing this leakage at low levels of abstraction, significantly less research effort has gone into higher levels of abstraction. This dissertation focuses on both ends of the design spectrum while motivating the future need for hierarchical side channel resistance metrics for hardware designs. Current low-level solutions focus on creating perfectly balanced standard cells through various straightforward logic styles. Each of these existing logic styles, while enhancing side channel resistance by reducing the channel's variance, comes at significant design expense in terms of area footprint, power consumption, delay, and even logic style structure. The first portion of this dissertation introduces a universal cell based on a dual multiplexer, implemented using pass-transistor logic, which approaches and exceeds some standard cell cost benchmarks. 
The proposed cell and circuit-level methods show significant improvements in security metrics over existing cells and approach standard CMOS cell and circuit performance by reducing area, power consumption, and delay. While most low-level works stop at the cell level, this work also investigates the impact of environmental factors on security. On the other end of the design spectrum, existing secure architecture and algorithm research attempts to mask side channels through random noise, variable timing, instruction reordering, and other similar methods. These methods attempt to obfuscate the primary source of the information leaked through side channels. Unfortunately, in most cases, the techniques are still susceptible to attack, and of those with promise, most are algorithm specific. This dissertation approaches high-level security by eliminating the relationship between high-level side channel models and the side channels themselves. This work discusses two different solutions targeting architecture-level protection. The first deals with the protection of Finite State Machines, while the second deals with the protection of a class of cryptographic algorithms using Feedback Shift Registers. This dissertation includes methods for reducing the power overhead of any FSM circuit (secured or not). The solutions proposed herein render potential side channel models moot by eliminating or reducing the model's data-dependent variability. Designers unwilling to accept a doubling of area can still add some, albeit sub-optimal, security to their devices. </p>
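The data dependence that these countermeasures attack can be made concrete with the standard Hamming-weight power model: an attacker assumes instantaneous power draw correlates with the Hamming weight of the data being switched. The sketch below is a generic illustration of that model, not the dissertation's method; the truncated S-box table and function names are assumptions for the demo.

```python
# Generic Hamming-weight leakage model, for illustration only. The
# target (a few AES S-box entries) is a stand-in; the point is that the
# modeled "power trace" varies with the secret key byte.

# First eight entries of the AES S-box (enough for the demo).
SBOX = [0x63, 0x7C, 0x77, 0x7B, 0xF2, 0x6B, 0x6F, 0xC5]

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

def leakage(key_byte, plaintexts):
    """Modeled power samples for S-box lookups under one key byte."""
    return [hamming_weight(SBOX[(p ^ key_byte) % len(SBOX)])
            for p in plaintexts]

plaintexts = list(range(8))
trace_k0 = leakage(0, plaintexts)
trace_k5 = leakage(5, plaintexts)
print(trace_k0)  # [4, 5, 6, 6, 5, 5, 6, 4]
print(trace_k5)

# The modeled traces differ with the key: that data dependence is what
# balanced (e.g. dual-rail) cells aim to remove, by making every
# transition consume the same energy regardless of the value processed.
assert trace_k0 != trace_k5
```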
|
15 |
General Purpose MCMC Sampling for Bayesian Model Averaging. Boyles, Levi Beinarauskas. 26 September 2014
<p> In this thesis we explore the problem of inference for Bayesian model averaging. Many popular topics in Bayesian analysis, such as Bayesian nonparametrics, can be cast as model averaging problems. Model averaging problems offer unique difficulties for inference, as the parameter space is not fixed, and may be infinite. As such, there is little existing work on general purpose MCMC algorithms in this area. We introduce a new MCMC sampler, which we call Retrospective Jump sampling, that is suitable for general purpose model averaging. In the development of Retrospective Jump, some practical issues arise in the need for an MCMC sampler for finite dimensions that is suitable for multimodal target densities; we introduce Refractive Sampling as a sampler suitable in this regard. Finally, we evaluate Retrospective Jump on several model averaging and Bayesian nonparametric problems, and develop a novel latent feature model with hierarchical column structure which uses Retrospective Jump for inference.</p>
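The multimodality problem motivating Refractive Sampling can be demonstrated with plain random-walk Metropolis (a standard baseline, not the thesis's samplers): with a small proposal step, the chain essentially never crosses the low-probability valley between two well-separated modes, so it explores only one of them.

```python
# Standard random-walk Metropolis on a two-mode target, illustrating the
# mode-trapping failure that motivates better samplers. This is generic
# Metropolis-Hastings, not Retrospective Jump or Refractive Sampling.
import math
import random

def log_target(x):
    """Log density (up to a constant) of a mixture of two unit-variance
    Gaussians centered at -10 and +10."""
    return math.log(math.exp(-0.5 * (x + 10) ** 2) +
                    math.exp(-0.5 * (x - 10) ** 2))

def metropolis(x0, step, n, seed=0):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        prop = x + rng.gauss(0, step)
        # Accept with probability min(1, target(prop)/target(x)).
        if math.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
    return samples

samples = metropolis(x0=-10.0, step=0.5, n=5000)
# With a 0.5-sd proposal the chain cannot climb the ~e^-50 probability
# valley, so every sample stays near the -10 mode and the +10 mode is
# never visited.
print(min(samples), max(samples))
```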
|
16 |
Single sign-on solution for MYSEA services. Bui, Sonia. 09 1900
The Monterey Security Architecture (MYSEA) is a trusted distributed environment enforcing multilevel security policies. To provide a scalable architecture, a federation of MYSEA servers handles service requests. However, the introduction of multiple servers creates security and usability problems associated with multiple user logins. A single sign-on solution for the MYSEA server federation is needed: after a user authenticates once to a single MYSEA server, the user's credentials are used to sign on to the other MYSEA servers. The goal of this thesis is to create a high-level design and specification of a single sign-on framework for MYSEA. This has entailed a review and comparison of existing single sign-on architectures and solutions, a study of the current MYSEA design, the development of a new architecture for single sign-on, an analysis of single sign-on threats within a MYSEA context, and a derivation of single sign-on objectives in MYSEA, leading up to the security requirements for single sign-on in MYSEA. Security and functionality are the main driving factors in the design. Other factors include performance, reliability, and the feasibility of integration into the existing MYSEA MLS network. These results will serve as a basis for a detailed design and future development of single sign-on in MYSEA.
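One common single sign-on building block is a signed credential: after the first authentication, the authenticating server issues a token that sibling servers can verify without re-prompting the user. The sketch below is a minimal generic illustration of that pattern, not the MYSEA design; the shared-key scheme, field layout, and validity window are all illustrative assumptions.

```python
# Generic signed-token SSO sketch (NOT the MYSEA architecture). A shared
# federation key, a "user|timestamp" payload, and a fixed validity
# window are illustrative assumptions for this example.
import hashlib
import hmac

FEDERATION_KEY = b"shared-secret-known-to-all-federated-servers"

def issue_token(user: str, issued_at: int) -> str:
    """First server: bind the user and issue time with an HMAC."""
    payload = f"{user}|{issued_at}"
    sig = hmac.new(FEDERATION_KEY, payload.encode(),
                   hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str, now: int, max_age: int = 300) -> bool:
    """Any federated server: check signature and freshness."""
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(FEDERATION_KEY, payload.encode(),
                        hashlib.sha256).hexdigest()
    _user, _, issued_at = payload.partition("|")
    return (hmac.compare_digest(sig, expected)
            and now - int(issued_at) <= max_age)

token = issue_token("alice", issued_at=1000)
print(verify_token(token, now=1100))        # fresh, intact → True
print(verify_token(token + "x", now=1100))  # tampered → False
```

A real design would also address replay, revocation, and the multilevel-security context that MYSEA imposes; those concerns are exactly what the thesis's threat analysis and requirements derivation cover.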
|
17 |
Authentication scenario for CyberCIEGE. Mueller, David S. 09 1900
Frequent media reports of the loss or compromise of data stored on computer systems indicate that attempts to educate users on proper computer security policies and procedures seem to be ineffective. In an effort to provide a means of education that will more fully engage users, the CyberCIEGE game was created. It is hoped that by playing CyberCIEGE, users will absorb computer security concepts better than they have through more traditional forms of instruction, because many find games to be a compelling experience. Many users do not understand why good passwords and password management are important for information systems. This effort developed a scenario for CyberCIEGE to teach players about issues involved in developing a password policy for a computer system; limited testing showed the scenario accomplishes this. CyberCIEGE uses a Scenario Definition Language to give developers and educators the ability to create scenarios that focus on particular concepts. To streamline scenario development, a Scenario Definition Tool has been created. This work also involved beta testing of the Scenario Definition Tool, a program that aids scenario developers in the creation of scenarios for the game. This testing resulted in several improvements to the tool.
|
18 |
Assessing the effects of honeypots on cyber-attackers. Lim, Sze Li Harry. 12 1900
A honeypot is a non-production system designed to interact with cyber-attackers to collect intelligence on attack techniques and behaviors. While the security community is reaping the fruits of this collection tool, the hacker community is increasingly aware of the technology and, in response, has developed anti-honeypot technology to detect and avoid honeypots. Until newer intelligence collection tools emerge, we need to maintain the relevance of honeypots. Since the development of anti-honeypot technology indicates that honeypots have a deterrent effect, we can capitalize on this effect to develop fake honeypots. A fake honeypot is a real production system with the deterring characteristics of a honeypot, intended to induce the avoidance behavior of cyber-attackers. Fake honeypots provide operators with workable production systems disguised as deterring honeypots when deployed in a hostile information environment. Deployed in the midst of real honeynets, they will confuse and delay cyber-attackers. To understand the effects of honeypots on cyber-attackers and to inform the design of fake honeypots, we exposed a tightly secured, self-contained virtual honeypot to the Internet over a period of 28 days. We conclude that it was able to withstand the duration of exposure without compromise. Metrics pertaining to the size of the last packet suggested the departure of cyber-attackers during reconnaissance.
|
19 |
Implementation and analysis of a threat model for IPv6 host autoconfiguration. Chozos, Savvas. 09 1900
IPv6, the successor of IPv4, introduces the stateless autoconfiguration feature as a convenient alternative to the Dynamic Host Configuration Protocol (DHCP). However, the security implications of this new approach have only been discussed at the conceptual level. This thesis research develops software, based on the open-source packet capture library Jpcap, to capture and build appropriate ICMPv6 autoconfiguration messages. The developed Java software is used to implement two DoS threats against the IPv6 autoconfiguration procedure in a laboratory IPv6 network. The results indicate that these threats are real and that further studies are required to identify suitable countermeasures. During this work, compliance defects were also identified in the Linux operating system's IPv6 implementation.
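The autoconfiguration step under attack here is easy to sketch: in classic SLAAC, a host combines the /64 prefix from a router advertisement with an EUI-64 interface identifier derived from its own MAC address (RFC 4291, Appendix A), with no DHCP server involved; that is exactly why a forged router advertisement can redirect or deny service to hosts. The sketch below illustrates the address derivation only, in Python, independent of the thesis's Jpcap-based tooling.

```python
# Sketch of the SLAAC/EUI-64 address derivation (RFC 4291): flip the
# universal/local bit of the MAC's first octet, insert ff:fe in the
# middle, and place the 64-bit result in the advertised /64 prefix.
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Build the SLAAC address for `mac` inside the /64 `prefix`."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe
    iid = int.from_bytes(bytes(eui64), "big")
    net = ipaddress.IPv6Network(prefix)
    return net[iid]  # prefix + interface identifier

addr = slaac_address("2001:db8::/64", "00:1a:2b:3c:4d:5e")
print(addr)  # 2001:db8::21a:2bff:fe3c:4d5e
```

Because the host trusts whatever prefix the router advertisement carries, an attacker who can inject ICMPv6 messages on the link controls this computation, which is the basis of the DoS threats implemented in the thesis.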
|
20 |
Productive Design of Extensible On-Chip Memory Hierarchies. Cook, Henry Michael. 02 September 2016
<p> As Moore’s Law slows and process scaling yields only small returns, computer architecture and design are poised to undergo a renaissance. This thesis brings the productivity of modern software tools to bear on the design of future energy-efficient hardware architectures. </p><p> In particular, it targets one of the most difficult design tasks in the hardware domain: Coherent hierarchies of on-chip caches. I have extended the capabilities of Chisel, a new hardware description language, by providing libraries for hardware developers to use to describe the configuration and behavior of such memory hierarchies, with a focus on the cache coherence protocols that work behind the scenes to preserve their abstraction of global shared memory. I discuss how the methods I provide enable productive and extensible memory hierarchy design by separating the concerns of different hierarchy components, and I explain how this forms the basis for a generative approach to agile hardware design. </p><p> This thesis describes a general framework for context-dependent parameterization of any hardware generator, defines a specific set of Chisel libraries for generating extensible cache-coherent memory hierarchies, and provides a methodology for decomposing high-level descriptions of cache coherence protocols into controller-localized, object-oriented transactions. </p><p> This methodology has been used to generate the memory hierarchies of a lineage of RISC-V chips fabricated as part of the ASPIRE Lab’s investigations into application-specific processor design.</p>
|