81

Enabling User Space Secure Hardware

Coughlin, Michael 02 June 2018 (has links)
User space software allows developers to customize applications beyond the limits of the privileged operating system. In this dissertation, we extend this concept to the hardware in the system, giving applications the ability to define secure hardware and effectively enabling hardware to be treated as a user space resource. This addresses a significant challenge facing industry today, which has an increasing need for secure hardware. With ever-increasing leaks of private data, growing use of computing platforms controlled by third parties, and increasingly sophisticated attacks, secure hardware is needed now more than ever to provide the protections we require. However, the current ecosystem of secure hardware is fractured and limited. Developers are left with few choices of platforms on which to implement their applications, and oftentimes those choices do not fully meet their needs. Instead of relying on manufacturers to make the correct design decisions and to implement these platforms correctly, we enable each application to define the exact secure hardware it needs to protect itself and its data.

This vision leverages the emergence of programmable hardware, specifically FPGAs, to serve as the basis of user space secure hardware. The challenges are that (i) sharing of FPGA resources among multiple applications is not currently practical, and (ii) the reprogrammability of FPGAs compromises the security properties of secure hardware. We address these challenges by introducing two systems, Cloud RTR and Software Defined Secure Hardware (SDSHW), which individually solve each challenge, and we then combine these solutions to realize the complete vision. Cloud RTR solves the first challenge by leveraging cloud compilation to allow an FPGA to be shared between applications, turning hardware into a user space resource. SDSHW solves the second challenge by introducing a self-provisioning system that allows an FPGA to be provisioned into a secure state, allowing secure hardware to run in an FPGA. We then combine the two, implementing the user space hardware provided by Cloud RTR on the secure platform provided by SDSHW, to deliver our vision of user space secure hardware.
82

Post-Traumatic Stress Disorder Severity Prediction on Web-based Trauma Recovery Treatments through Electrodermal Activity Measurements

Mallol-Ragolta, Adria 19 May 2018 (has links)
Recent studies have shown evidence of trauma recovery through web-based interventions. Currently, a widespread protocol is to assess trauma severity by answering the PTSD Checklist (PCL) questionnaire, which requires subjects' active participation. This thesis explores the feasibility of automatically predicting changes in trauma severity, δPCL, through the analysis of electrodermal activity measurements, so that subjects need not be burdened after the intense mental effort of the trauma recovery treatment. Furthermore, automatic trauma severity prediction can provide web-based trauma recovery treatments with tools to monitor subjects' progress during treatment, so that the treatment contents can be adapted to the subjects' needs.

The analysis is performed on the EASE dataset and evaluates the performance of a trauma severity prediction system when predicting global or symptom cluster-wise δPCL scores. The machine learning models presented in this work are assessed using three different feature sets extracted from skin conductance signals. One of these feature sets is proposed in this thesis, while the others are existing open-source sets. The baseline for all evaluations is the system performance using CSE-T scores as input, since CSE has proven to be a strong indicator of changes in trauma severity symptoms in several psychological studies.

According to the results obtained, the mean MSE measured when predicting global δPCL scores with a system that uses C = 1 and γ = 10⁻² equals 122.870 and 122.488 when the system input is CSE-T scores and the TEAP feature set extracted from skin conductance signals, respectively. Furthermore, the p-value of 0.9772 obtained between the two performances indicates that it appears feasible to replace CSE-T information with skin conductance signals to predict δPCL scores. On the other hand, the mean MSE measured with a system that employs C = 100 and γ = 10⁻¹ equals 294.916 and 138.277 when employing CSE-T scores and the TEAP feature set as system input, respectively. Moreover, the p-value of 0.0046 obtained between the two performances indicates that the use of skin conductance signals significantly outperforms the baseline. Similar results are obtained in both scenarios when predicting symptom cluster-wise δPCL scores.
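The C and γ values quoted above suggest an RBF-kernel support vector regressor, although the abstract does not name the learner explicitly; the sketch below is therefore only an assumed reconstruction of how such a comparison could be run. The feature arrays, fold count, and the paired t-test are placeholders, not the EASE data or the thesis's exact statistical procedure.

```python
# Minimal sketch, assuming the predictor is an RBF-kernel support vector
# regressor (suggested by the C and gamma hyperparameters in the abstract).
# Feature arrays and targets below are placeholders, not the EASE dataset.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
X_cse = rng.normal(size=(40, 1))    # baseline input: CSE-T scores (hypothetical)
X_eda = rng.normal(size=(40, 10))   # TEAP-style skin-conductance features (hypothetical)
y_dpcl = rng.normal(size=40)        # change in PCL score (delta-PCL) per subject

def mse_folds(X, y, C, gamma, folds=5):
    """Per-fold MSEs for an RBF SVR, mirroring the model comparison above."""
    model = SVR(kernel="rbf", C=C, gamma=gamma)
    return -cross_val_score(model, X, y, cv=folds,
                            scoring="neg_mean_squared_error")

mse_baseline = mse_folds(X_cse, y_dpcl, C=1, gamma=1e-2)
mse_eda = mse_folds(X_eda, y_dpcl, C=1, gamma=1e-2)

# Paired test on per-fold errors: a high p-value suggests the EDA features
# can stand in for CSE-T; a low one suggests a real performance difference.
stat, p_value = ttest_rel(mse_baseline, mse_eda)
print(mse_baseline.mean(), mse_eda.mean(), p_value)
```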
83

Reliable and Efficient Routing for Wireless Sensor Networks

Sinha, Saket 26 August 2017 (has links)
Recent advances in technology have enabled the emergence of Wireless Sensor Networks (WSNs) as a promising technology. Because of their wide range of applications in industrial, environmental-monitoring, military, and civilian domains, WSNs have become one of the most popular topics for research and development. The sensor nodes are low in cost and simple in architecture, and wireless sensor networks are increasingly employed in security-critical applications. However, their inherent characteristics make them prone to a variety of security attacks that can negatively affect data collection. This project presents an active detection-based routing scheme for WSNs that can quickly create numerous detection routes and obtain nodal trust, thereby improving data security. Simulation results show that the scheme can detect black hole attacks, protect against them, and conserve energy, thus improving network lifetime.
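The abstract does not describe the scheme's internals, so the sketch below is only a generic illustration of nodal-trust bookkeeping for avoiding black hole nodes; the node IDs, the update rule, and the trust threshold are hypothetical.

```python
# Minimal sketch of nodal-trust bookkeeping for black-hole avoidance.
# This is not the thesis's routing scheme; the update rule and threshold
# below are illustrative assumptions only.
from collections import defaultdict

class TrustTable:
    """Each node keeps a trust score per neighbor, updated from detection routes."""
    def __init__(self, threshold=0.4):
        self.trust = defaultdict(lambda: 0.5)  # neutral prior trust
        self.threshold = threshold

    def report(self, neighbor, forwarded: bool, alpha=0.2):
        # Exponential moving average: forwarding raises trust, dropping lowers it.
        observation = 1.0 if forwarded else 0.0
        self.trust[neighbor] = (1 - alpha) * self.trust[neighbor] + alpha * observation

    def next_hop(self, candidates):
        # Prefer the most trusted neighbor; exclude suspected black holes.
        trusted = [n for n in candidates if self.trust[n] >= self.threshold]
        return max(trusted, key=lambda n: self.trust[n]) if trusted else None

# Example: neighbor "n3" silently drops packets and falls below the threshold.
table = TrustTable()
for _ in range(10):
    table.report("n1", forwarded=True)
    table.report("n3", forwarded=False)
print(table.next_hop(["n1", "n3"]))   # -> "n1"
```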
84

Fault tolerance of feedforward artificial neural nets and synthesis of robust nets

Phatak, Dhananjay S 01 January 1994 (has links)
A method is proposed to estimate the fault tolerance of feedforward Artificial Neural Nets (ANNs) and synthesize robust nets. The fault model abstracts a variety of failure modes of hardware implementations into permanent stuck-at faults of single components. A procedure is developed to build fault-tolerant ANNs by replicating the hidden units. It exploits the intrinsic weighted summation operation performed by the processing units in order to overcome faults. It is simple, robust, and applicable to any feedforward net. Based on this procedure, metrics are devised to quantify fault tolerance as a function of redundancy. Furthermore, a lower bound on the redundancy required to tolerate all possible single faults is analytically derived. This bound demonstrates that less than Triple Modular Redundancy (TMR) cannot provide complete fault tolerance for all possible single faults. This general result establishes a necessary condition that holds for all feedforward nets, irrespective of the network topology or the task the network is trained on. Extensive simulations indicate that the actual redundancy needed to synthesize a completely fault-tolerant net is specific to the problem at hand and is usually much higher than that dictated by the general lower bound. The data imply that the conventional TMR scheme of replication and majority vote is the best way to achieve complete fault tolerance in most ANNs. Although the redundancy needed for complete fault tolerance is substantial, the results do show that ANNs exhibit good partial fault tolerance to begin with and degrade gracefully. For large nets, exhaustive testing of all possible single faults is prohibitive. Hence, the strategy of randomly testing a small fraction of the total number of links is adopted. It yields partial fault tolerance estimates that are very close to those obtained by exhaustive testing. The last part of the thesis develops improved learning algorithms that favor fault tolerance. Here, the objective function for the gradient descent is modified to include extra terms that favor fault tolerance. Simulations indicate that the algorithm works only if the relative weight of the extra terms is small. There are two different ways to achieve fault tolerance: (1) search for the minimal net and replicate it, or (2) provide redundancy to begin with and use improved training algorithms. A natural question is which of these two schemes is better. Contrary to expectation, the replication scheme seems to win in almost all cases. We provide a justification as to why this might be true. Several interesting open problems are discussed and future extensions are suggested.
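As a rough illustration of the replication idea above (copying hidden units and relying on the weighted summation to mask a stuck unit), here is a minimal numpy sketch; the weights, replication factor, and fault injection are illustrative and not taken from the thesis.

```python
# Minimal sketch of hidden-unit replication: each hidden unit is copied k
# times and its outgoing weights are divided by k, so the output sum is
# unchanged when all copies are healthy and degrades gracefully when one
# copy is stuck at a fixed value.
import numpy as np

def forward(x, W1, W2, stuck=None):
    """One hidden layer with tanh units; `stuck` maps unit index -> stuck value."""
    h = np.tanh(W1 @ x)
    if stuck:
        for i, v in stuck.items():
            h[i] = v                      # stuck-at fault on a hidden unit
    return W2 @ h

def replicate_hidden(W1, W2, k):
    """Replicate every hidden unit k times, scaling outgoing weights by 1/k."""
    W1_r = np.repeat(W1, k, axis=0)       # k copies of each hidden unit's inputs
    W2_r = np.repeat(W2, k, axis=1) / k   # share each unit's output weight
    return W1_r, W2_r

rng = np.random.default_rng(1)
x = rng.normal(size=4)
W1 = rng.normal(size=(3, 4))              # 3 hidden units, 4 inputs
W2 = rng.normal(size=(1, 3))

W1_r, W2_r = replicate_hidden(W1, W2, k=3)        # TMR-style triplication
print(forward(x, W1, W2))                         # fault-free original
print(forward(x, W1_r, W2_r, stuck={0: 0.0}))     # one replica stuck at 0
```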
85

Security issues in network virtualization for the future Internet

Natarajan, Sriram 01 January 2012 (has links)
This dissertation proposes multiple network defense mechanisms. In a typical virtualized network, the network infrastructure and the virtual network are managed by different administrative entities that may not trust each other, raising the concern that an honest-but-curious network infrastructure provider may snoop on traffic sent by the hosted virtual networks. In such a scenario, the virtual network might hesitate to disclose operational information (e.g., source and destination addresses of network traffic, routing information, etc.) to the infrastructure provider. However, the network infrastructure does need sufficient information to perform packet forwarding. We present Encrypted IP (EncrIP), a protocol for encrypting IP addresses that hides information about the virtual network while still allowing packet forwarding with the longest-prefix matching techniques implemented in commodity routers. Using probabilistic encryption, EncrIP prevents an observer from identifying which traffic belongs to the same source-destination pair. Our evaluation results show that EncrIP requires only a few MB of memory on the gateways where traffic enters and leaves the network infrastructure. In our prototype implementation of EncrIP on GENI, which uses standard IP headers, the success probability of a statistical inference attack that tries to identify packets belonging to the same session is less than 0.001%. Therefore, we believe EncrIP presents a practical solution for protecting privacy in virtualized networks.

While virtualizing the infrastructure components introduces flexibility by allowing the protocol stack to be reprogrammed, it does not directly solve the security issues encountered in the current Internet. On the contrary, the architecture increases the chances of additive vulnerabilities, thereby enlarging the attack space that can be exploited to launch attacks. It is therefore important to consider a virtual network instance that ensures only authorized traffic is transmitted and that attack traffic is squelched as close to its source as possible. Network virtualization provides an opportunity to host a network that can guarantee such high levels of security, protecting both the end systems and the network infrastructure components (i.e., routers, switches, etc.). In this work, we introduce a capabilities-based virtual network instance, which represents a fundamental shift in the security design of network architectures. Instead of permitting the transmission of packets from any source to any destination, routers deny forwarding by default. For a successful transmission, packets need to positively identify themselves and their permissions to each router in the forwarding path. The proposed capabilities-based system uses packet credentials based on Bloom filters. This high-performance design of capabilities makes it feasible to verify traffic at every router in the network, so that most attack traffic can be contained within a single hop. Our experimental evaluation confirms that less than one percent of attack traffic passes the first hop and that the performance overhead can be as low as 6% for large file transfers. Next, to identify packet forwarding misbehaviors in network virtualization, a controller-based misbehavior detection system is discussed as part of the future work. Overall, this dissertation introduces novel security mechanisms that can be instantiated as inherent security features in the network architecture for the future Internet.

The technical challenges in this dissertation involve solving problems from computer networking, network security, principles of protocol design, probability and random processes, and algorithms. (Abstract shortened by UMI.)
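A rough sketch of how Bloom-filter packet credentials could work is given below; the dissertation's actual credential construction is not stated in the abstract, so the hash choices, filter size, and identifiers here are assumptions.

```python
# Minimal sketch of Bloom-filter packet credentials, as a rough illustration
# of the capabilities idea above. Hash choices and sizes are assumptions.
import hashlib

M = 256          # Bloom filter size in bits (assumed)
K = 4            # hash functions per element (assumed)

def _positions(item: bytes):
    # Derive K bit positions from salted SHA-256 digests.
    return [int.from_bytes(hashlib.sha256(bytes([i]) + item).digest()[:4], "big") % M
            for i in range(K)]

def credential(path_routers, flow_id: bytes) -> int:
    """Sender-side capability: set the bits for every router on the approved path."""
    bits = 0
    for router_id in path_routers:
        for p in _positions(router_id.encode() + flow_id):
            bits |= 1 << p
    return bits

def router_check(bits: int, router_id: str, flow_id: bytes) -> bool:
    """Per-hop check: forward only if all of this router's bit positions are set."""
    return all(bits >> p & 1 for p in _positions(router_id.encode() + flow_id))

cap = credential(["r1", "r2", "r3"], flow_id=b"flow-42")
print(router_check(cap, "r2", b"flow-42"))   # True: on the authorized path
print(router_check(cap, "r9", b"flow-42"))   # almost surely False: not authorized
```

Because routers deny forwarding by default, a packet lacking a valid credential fails this check at its first hop, which is consistent with the claim above that most attack traffic is contained within a single hop.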
86

Reconfigurable technologies for next generation internet and cluster computing

Unnikrishnan, Deepak 01 January 2013 (has links)
Modern web applications are marked by distinct networking and computing characteristics. As applications evolve, they continue to operate over a large monolithic framework of networking and computing equipment, built from general-purpose microprocessors and Application Specific Integrated Circuits (ASICs), that offers few architectural choices. This dissertation presents techniques to diversify the next-generation Internet infrastructure by integrating Field-Programmable Gate Arrays (FPGAs), a class of reconfigurable integrated circuits, with general-purpose microprocessor-based techniques. Specifically, our solutions are demonstrated in the context of two applications: network virtualization and distributed cluster computing.

Network virtualization enables the physical network infrastructure to be shared among several logical networks that run diverse protocols and differentiated services. The design of a good network virtualization platform is challenging because the physical networking substrate must scale to support several isolated virtual networks with high packet forwarding rates while offering sufficient flexibility to customize networking features. The first major contribution of this dissertation is a novel high-performance heterogeneous network virtualization system that integrates FPGAs and general-purpose CPUs. Salient features of this architecture include the ability to scale the number of virtual networks in an FPGA using existing software-based network virtualization techniques, the ability to map virtual networks to a combination of hardware and software resources on demand, and the ability to use off-chip memory resources to scale virtual router features. Partial reconfiguration is exploited to dynamically customize virtual networking parameters, and an open software framework has been developed to describe virtual networking features in a hardware-agnostic language. Evaluation of our system on a NetFPGA card demonstrates one to two orders of magnitude higher throughput than state-of-the-art network virtualization techniques.

The demand for greater computing capacity grows as web applications scale. In state-of-the-art systems, an application is scaled by parallelizing the computation on a pool of commodity hardware machines using distributed computing frameworks. Although this technique is useful, it is inefficient because the sequential nature of execution in general-purpose processors does not suit all workloads equally well. Iterative algorithms form a pervasive class of web and data mining algorithms that are poorly executed on general-purpose processors due to the strict synchronization barriers in distributed cluster frameworks. This dissertation presents Maestro, a heterogeneous distributed computing framework that demonstrates how FPGAs can break down such synchronization barriers using asynchronous accumulative updates. These updates allow intermediate results to be accumulated for numerous data points without iteration-based barriers. The benefits of a heterogeneous cluster are illustrated by executing a general class of iterative algorithms on a cluster of commodity CPUs and FPGAs, with computation dynamically prioritized to accelerate algorithm convergence. We implement three iterative algorithms from this general class on a cluster of four FPGAs. A speedup of 7X is achieved over an implementation of asynchronous accumulative updates on a general-purpose CPU, and the system offers a 154X speedup versus a standard Hadoop-based CPU-workstation cluster. Overall, improved performance is achieved by clusters of FPGAs.
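The asynchronous accumulative update pattern can be illustrated with a small PageRank-style example: each vertex folds in its pending delta and propagates shares to its neighbors in any order, with no per-iteration barrier. This is only an illustration of the pattern, not the Maestro framework or its FPGA implementation.

```python
# Minimal sketch of asynchronous accumulative updates for an iterative
# algorithm (a PageRank-style example). Graph and tolerance are made up.
import random

def accumulative_pagerank(graph, damping=0.85, tol=1e-9):
    """graph: dict node -> list of out-neighbors. Vertices are updated one at
    a time in arbitrary order, accumulating deltas instead of waiting for a
    per-iteration synchronization barrier."""
    rank = {v: 0.0 for v in graph}
    delta = {v: 1.0 - damping for v in graph}   # pending accumulated change
    work = set(graph)
    while work:
        v = random.choice(tuple(work))          # no barrier: any order works
        work.discard(v)
        d, delta[v] = delta[v], 0.0
        rank[v] += d                            # fold the pending change in
        out = graph[v]
        if out:
            share = damping * d / len(out)
            for u in out:
                delta[u] += share               # propagate to neighbors
                if delta[u] > tol:
                    work.add(u)                 # reactivate only if worthwhile
    return rank

g = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(accumulative_pagerank(g))
```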
87

New digital structure designs of neural networks and filter banks

Liu, Xiaozhou 01 January 1996 (has links)
The rapid development of modern industry and technology has dramatically increased the demands on signal information systems. The performance of a signal processing system is measured by its accuracy, speed, and cost. In the implementation of a digital system such as a digital computer or a microprocessor, the basic computational operations are multiplication, addition, and delay. It is well known that, among these operations, multiplication is the slowest and the most complex. Thus, multipliers in hardware implementations and multiplications in software are often the bottleneck of an efficient design. It is therefore desirable to implement signal systems that are multiplier-free or that minimize the number of multipliers. This dissertation studies structure designs for the Multi-layer Neural Network (MNN) and the Cellular Neural Network (CNN). The designs are based on the Digital Differential Analyzer (DDA) technique, the CORDIC algorithm, and the Convergence Computation Method (CCM). They have the desired multiplier-free property and low complexity, and are suitable for VLSI implementation. Efficient design of the M-channel QMF bank has also been investigated and is proposed in the dissertation. The proposed design is based on the interpolated FIR design approach, in conjunction with a cosine-modulated QMF bank system. Compared with conventional design approaches, our new design reduces the number of multiplications in the filter bank and can achieve more than 50% savings in computation depending on the choice of the interpolation rate; this computational saving becomes more significant as the number of channels in the filter bank grows.
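As a reminder of why CORDIC lends itself to multiplier-free implementation, the sketch below computes sine and cosine with the textbook circular-rotation CORDIC, where each iteration uses only additions and operations that would be arithmetic shifts in hardware (plus one final constant gain correction). It is a generic illustration, not one of the thesis's structures.

```python
# Minimal sketch of the CORDIC idea referenced above: computing sin/cos with
# add/subtract and shift-equivalent operations only inside the iteration loop.
import math

N = 24                                          # number of iterations (assumed)
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
GAIN = 1.0
for i in range(N):
    GAIN *= math.cos(ANGLES[i])                 # constant; precomputed once

def cordic_sin_cos(theta):
    """Rotate (1, 0) toward angle theta; valid for |theta| < ~1.74 rad."""
    x, y, z = 1.0, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0             # rotation direction
        # Multiplying by 2**-i stands in for an arithmetic right shift in hardware.
        x, y = x - d * (y * 2.0 ** -i), y + d * (x * 2.0 ** -i)
        z -= d * ANGLES[i]                      # remaining angle
    return GAIN * x, GAIN * y                   # single constant gain correction

cos_t, sin_t = cordic_sin_cos(math.pi / 6)
print(cos_t, sin_t, math.cos(math.pi / 6), math.sin(math.pi / 6))
```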
88

A dynamic load balancing approach to the control of multiserver polling systems with applications to elevator system dispatching

Lewis, James Alan 01 January 1991 (has links)
This dissertation presents a new technique for the control of multi-server polled queueing systems. The new technique is referred to as dynamic load balancing (DLB). Using a simple cyclic service model, evidence is provided indicating that waiting time will be minimized if the servers of the polled queueing system remain maximally separated via a 'skip-ahead' control policy. Approximations are derived for average job waiting time in two polled queueing system 'modes': the maximum server separation mode and the minimum server separation mode. These approximations further suggest the desirability of a 'skip-ahead' control policy to maintain maximum server separation. A discrete-event model and a corresponding discrete-event simulation of the polled queueing system are developed. The DLB algorithm is developed to achieve the maximum server separation objective in the polled queueing system. Simulation results substantiate the approximations developed for the polled queueing system model over a wide range of system parameters and load levels. DLB is then adapted for elevator system control; changes to DLB were required to account for the presence of car calls and direction switching in the elevator system. Despite the added complexity of the elevator system over the multi-server polled queueing system, DLB is shown via simulation to provide improvement over a state-of-the-art elevator system control algorithm in six of six performance measures (e.g., average waiting time).
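The following toy sketch illustrates the maximum-server-separation objective: a free server "skips ahead" to whichever queue keeps the servers most evenly spread around the cycle. It is not the DLB algorithm itself; the cycle length and server positions are made up.

```python
# Toy illustration of the maximum-server-separation objective on a cyclic
# set of queues. Not the DLB algorithm; positions and sizes are invented.
def circular_gap(a, b, n):
    """Forward distance from position a to position b on an n-queue cycle."""
    return (b - a) % n

def min_separation(positions, n):
    """Smallest pairwise circular distance among server positions."""
    return min(circular_gap(p, q, n)
               for i, p in enumerate(positions)
               for j, q in enumerate(positions) if i != j)

def skip_ahead(server_idx, positions, n):
    """Move one server to whichever queue maximizes the minimum separation."""
    best, best_sep = positions[server_idx], -1
    for candidate in range(n):
        trial = positions.copy()
        trial[server_idx] = candidate
        sep = min_separation(trial, n)
        if sep > best_sep:
            best, best_sep = candidate, sep
    return best

# Two servers on an 8-queue cycle, currently bunched at queues 1 and 2:
positions = [1, 2]
print(skip_ahead(0, positions, 8))   # server 0 skips ahead to stay 4 queues away
```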
89

Fault-tolerant aspects of memory systems

Bowen, Nicholas S 01 January 1992 (has links)
Memory system design is important for providing high reliability and availability. This dissertation presents a memory architecture that supports checkpoints to improve reliability, along with algorithms that improve recoverable virtual memory. In addition, two novel techniques of reliability analysis are presented that account for program and operating system behavior. Checkpoint and rollback recovery is a method that allows a system to tolerate a failure by periodically saving the state and, if an error occurs, rolling back to the prior checkpoint. A technique is proposed that embeds the support for checkpoint and rollback recovery directly into the virtual memory translation hardware. A system with both highly reliable and normal memory enables recoverable virtual memory by placing modified data in the highly reliable memory and read-only data in normal memory. Hybrid algorithms are proposed for use in systems with multiple classes of physical memory; that is, one virtual memory policy for the highly reliable memory and one for the normal memory. These techniques are analyzed with a trace-driven simulation. Reliability analysis of memories and their relationship to system reliability is an important aspect of system design, and the dynamic aspects of the memory are very important. Two aspects studied here are memory usage patterns by a program and memory allocation by the operating system. A new model is developed for the successful execution of a program that takes memory reference patterns into account. This is contrasted with traditional memory reliability calculations, showing that the actual reliability may be more optimistic when program behavior is considered. A new theory to explain correlations between increased workloads and increased failure rates is proposed. The tradeoffs in performance and reliability for memory management policies (e.g., virtual or cache memory) are studied as a function of the block-miss reload time. A very small percentage of the memory is found to contribute to a majority of the unreliability. Techniques are proposed to dramatically improve reliability, namely an algorithm called selective scrubbing and the use of very small amounts of highly reliable memory.
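As a software analogy for the checkpoint-and-rollback support described above (which the dissertation embeds in the virtual memory translation hardware), the sketch below saves a page's old contents on the first write after a checkpoint and restores the saved copies on rollback; the page sizes and API are invented for illustration.

```python
# Minimal sketch of checkpoint-and-rollback at page granularity. A pure-
# software toy that only mimics the behavior: copy a page's old contents on
# the first write after a checkpoint, restore those copies on rollback.
class CheckpointedMemory:
    def __init__(self, num_pages, page_size=4):
        self.pages = [bytearray(page_size) for _ in range(num_pages)]
        self.saved = {}                  # page index -> contents at last checkpoint

    def checkpoint(self):
        self.saved.clear()               # new recovery point; nothing copied yet

    def write(self, page, offset, value):
        if page not in self.saved:       # first write since checkpoint: save old copy
            self.saved[page] = bytes(self.pages[page])
        self.pages[page][offset] = value

    def rollback(self):
        for page, old in self.saved.items():
            self.pages[page][:] = old    # restore every page touched since checkpoint
        self.saved.clear()

mem = CheckpointedMemory(num_pages=2)
mem.write(0, 0, 7)
mem.checkpoint()
mem.write(0, 0, 9)                       # modification after the checkpoint
mem.rollback()                           # simulated error: roll back
print(mem.pages[0][0])                   # -> 7, the checkpointed value
```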
