91

IMPROVING THE PERFORMANCE AND ENERGY EFFICIENCY OF EMERGING MEMORY SYSTEMS

Guo, Yuhua 01 January 2018 (has links)
Modern main memory is primarily built from dynamic random access memory (DRAM) chips. As DRAM chips scale to higher densities, three main problems impede DRAM scalability and performance improvement. First, DRAM refresh overhead grows from negligible to severe, which limits DRAM scalability and causes performance degradation. Second, although memory capacity has increased dramatically in the past decade, memory bandwidth has not kept pace with CPU performance scaling, leading to the memory wall problem. Third, DRAM dissipates considerable power and has been reported to account for as much as 40% of total system energy, a problem that worsens as DRAM scales up. To address these problems, 1) we propose Rank-level Piggyback Caching (RPC) to alleviate DRAM refresh overhead by servicing memory requests and refresh operations in parallel; 2) we propose SELF, a high-performance, bandwidth-efficient approach to breaking the memory bandwidth wall by exploiting die-stacked DRAM as a part of memory; 3) we propose Dual Role HBM (DR-HBM), a cost-effective and energy-efficient architecture for hybrid memory systems composed of high bandwidth memory (HBM) and phase change memory (PCM). In DR-HBM, hot pages are tracked in a cost-effective way and migrated to the HBM to improve performance, while cold pages are kept in the PCM to save energy.
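To make the hot/cold page idea concrete, here is a minimal sketch assuming simple per-page access counters and an epoch-based migration policy; the threshold, epoch length, and tier labels are illustrative assumptions, not the mechanism designed in the dissertation.

```python
# Illustrative sketch of hot/cold page tracking with epoch-based migration
# between a fast tier ("HBM") and a slow tier ("PCM"). The threshold and epoch
# length are made-up parameters, not values from the dissertation.
from collections import defaultdict

class TieredMemory:
    def __init__(self, hot_threshold=64, epoch_accesses=100_000):
        self.access_counts = defaultdict(int)   # per-page access counters
        self.tier = defaultdict(lambda: "PCM")  # pages start in the slow tier
        self.hot_threshold = hot_threshold
        self.epoch_accesses = epoch_accesses
        self.accesses_this_epoch = 0

    def access(self, page):
        self.access_counts[page] += 1
        self.accesses_this_epoch += 1
        if self.accesses_this_epoch >= self.epoch_accesses:
            self.rebalance()

    def rebalance(self):
        # Promote frequently touched pages to HBM, demote the rest to PCM,
        # then reset counters for the next epoch.
        for page, count in self.access_counts.items():
            self.tier[page] = "HBM" if count >= self.hot_threshold else "PCM"
        self.access_counts.clear()
        self.accesses_this_epoch = 0
```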
92

Toward Biologically-Inspired Self-Healing, Resilient Architectures for Digital Instrumentation and Control Systems and Embedded Devices

Khairullah, Shawkat Sabah 01 January 2018 (has links)
Digital Instrumentation and Control (I&C) systems in safety-related applications of next-generation industrial automation systems require high levels of resilience against different fault classes. One of the more essential concepts for achieving this goal is the notion of resilient and survivable digital I&C systems. In recent years, self-healing concepts based on biological physiology have received attention for the design of robust digital systems. However, many of these approaches have not been architected from the outset with safety in mind, nor have they been targeted at the automation community, where a significant need exists. This dissertation presents a new self-healing digital I&C architecture called BioSymPLe, inspired by the way nature responds, defends, and heals: the stem cells in the immune system of living organisms, the life cycle of the living cell, and the pathway from deoxyribonucleic acid (DNA) to protein. The BioSymPLe architecture integrates biological concepts, fault tolerance techniques, and operational schematics of the international standard IEC 61131-3 to facilitate adoption in the automation industry. BioSymPLe is organized into three hierarchical levels: the local function migration layer at the top, the critical service layer in the middle, and the global function migration layer at the bottom. The local layer monitors the correct execution of functions at the cellular level and activates healing mechanisms at the critical service level. The critical layer allocates a group of functional B cells, which represent the building blocks that execute the intended functionality of the critical application based on the expression of DNA genetic codes stored inside each cell. The global layer uses the concept of embryonic stem cells, differentiating these cells to repair faulty T cells and supervising all repair mechanisms. Finally, two industrial applications have been mapped onto the proposed architecture; they are capable of tolerating a significant number of faults (transient, permanent, and hardware common cause failures (CCFs)) that can stem from environmental disturbances, and we believe the nexus of its concepts can positively impact the next generation of critical systems in the automation industry.
93

Adaptive Performance and Power Management in Distributed Computing Systems

Chen, Ming 01 August 2010 (has links)
The complexity of distributed computing systems has raised two unprecedented challenges for system management. First, various customers need to be assured that their required service-level agreements, such as response time and throughput, are met. Second, system power consumption must be controlled in order to avoid system failures caused by power capacity overload or system overheating due to increasingly high server density. However, most existing work either relies on open-loop estimations based on off-line profiled system models, or evolves in a more ad hoc fashion that requires exhaustive iterations of tuning and testing, or oversimplifies the problem by ignoring the coupling between different system characteristics (i.e., response time and throughput, or the power consumption of different servers). As a result, the majority of previous work lacks rigorous guarantees on the performance and power consumption of computing systems, and may degrade overall system performance. In this thesis, we extensively study adaptive performance/power management and power-efficient performance management for distributed computing systems such as information dissemination systems, power grid management systems, and data centers, by proposing Multiple-Input-Multiple-Output (MIMO) control and hierarchical designs based on feedback control theory. For adaptive performance management, we design an integrated solution that controls both average response time and CPU utilization in an example information dissemination system, achieving bounded response time for high-priority information and maximized system throughput. In addition, we design a hierarchical control solution that guarantees the deadlines of real-time tasks in power grid computing by grouping tasks based on their characteristics. For adaptive power management, we design MIMO optimal control solutions for power control at the cluster and server levels and a hierarchical solution for large-scale data centers. Our MIMO control design can capture the coupling among different system characteristics, while our hierarchical design can coordinate controllers at different levels. For power-efficient performance management, we discuss a two-layer coordinated management solution for virtualized data centers. Experimental results on both physical testbeds and in simulations demonstrate that all the solutions outperform state-of-the-art management schemes by significantly improving overall system performance.
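As a rough illustration of the MIMO feedback idea (not the controllers designed in the thesis), the sketch below drives two measured outputs toward their setpoints by adjusting two actuators through a single coupled gain matrix; the gains, setpoints, and actuator meanings are placeholder assumptions.

```python
import numpy as np

# Minimal sketch of a discrete-time MIMO feedback loop: two measured outputs
# (cluster power, average response time) are driven toward their setpoints by
# adjusting two actuators (e.g., normalized CPU frequency levels on two
# servers). The gain matrix K and the setpoints are illustrative placeholders.

setpoints = np.array([500.0, 0.2])      # [power budget (W), response time (s)]
K = np.array([[0.002, -0.5],            # maps output errors to actuator deltas
              [0.001, -0.8]])

def control_step(measured, actuators):
    error = setpoints - measured        # deviation from targets
    actuators = actuators + K @ error   # coupled MIMO update
    return np.clip(actuators, 0.0, 1.0) # keep frequency settings in [0, 1]

# Example: one control period with measured power 540 W and latency 0.25 s.
u = control_step(np.array([540.0, 0.25]), np.array([0.8, 0.8]))
```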
94

Integrating high-level requirements into optimization problems: theory and applications

Roda, Fabio 01 March 2013 (has links) (PDF)
We combine Systems Engineering and mathematical programming to integrate high-level requirements into optimization problems, and we apply this method to three different types of systems. (1) Information Systems (IS), i.e., the networks of hardware, software, and user resources used in a company, must provide the foundation for the projects launched to meet business needs. IS must be able to evolve as one technology replaces another. We propose an operational model and a mathematical programming formulation that formalize a prioritization problem arising in the context of the technological evolution of an information system. (2) Recommender Systems (RS) are a type of search engine whose goal is to provide personalized recommendations. We consider the problem of designing Recommender Systems so that they provide good, interesting, and accurate suggestions. (3) The transportation of hazardous materials raises several problems related to the ecological consequences of possible incidents. The transportation system must carry hazardous waste to safe disposal in such a way that the risk of possible incidents is distributed equitably among the population. We consider two different notions of equity and integrate them into mathematical programming formulations.
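As a hedged illustration of the third application (not the formulations actually developed in the thesis), one common way to write an equity requirement into a hazardous-materials routing program is to cap the risk any single population zone may bear; all symbols below are introduced here for illustration only.

```latex
\begin{align*}
\min \;& \sum_{z \in Z} r_z
  && \text{(total risk over all population zones)} \\
\text{s.t.} \;& r_z = \sum_{a \in A} \rho_{za}\, x_a
  && \forall z \in Z \quad \text{(risk borne by zone } z\text{)} \\
& r_z \le \lambda
  && \forall z \in Z \quad \text{(equity cap on any one zone)} \\
& x_a \in \{0,1\}
  && \forall a \in A \quad \text{(arc } a \text{ used by a shipment or not)}
\end{align*}
```

Here $x_a$ selects which arcs of the transport network are used, $\rho_{za}$ is the incident risk that arc $a$ imposes on zone $z$, and $\lambda$ bounds the risk share of any single zone; routing and flow-conservation constraints are omitted from this sketch.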
95

An Open Systems Architecture for Telemetry Receivers

Parker, Peter, Nelson, John, Pippitt, Mark 10 1900 (has links)
An open systems architecture (OSA) is one in which all of the interfaces are fully defined, available to the public, and maintained according to a group consensus. One approach to achieving this is to use modular hardware and software and to buy commercial off-the-shelf (COTS) and commodity hardware. Benefits of an OSA include providing easy access to the latest technological advances in both hardware and software, enabling net-centric operations, and allowing a flexible design that can easily change as the needs of customers change. This paper provides details of an OSA system designed for a telemetry receiver and lists the benefits of OSA for the telemetry community.
96

A HyperNet Architecture

Huang, Shufeng 01 January 2014 (has links)
Network virtualization is becoming a fundamental building block of future Internet architectures. By adding networking resources to the “cloud”, users can rent virtual routers from the underlying network infrastructure, connect them with virtual channels to form a virtual network, and tailor the virtual network (e.g., load application-specific networking protocols, libraries, and software stacks onto the virtual routers) to carry out a specific task. In addition, network virtualization technology allows such special-purpose virtual networks to co-exist on the same network infrastructure without interfering with each other. Although the underlying network resources needed to support virtualized networks are rapidly becoming available, constructing a virtual network from the ground up and using it is a challenging and labor-intensive task, one best left to experts. To tackle this problem, we introduce the concept of a HyperNet: a pre-built, pre-configured network package that a user can easily deploy or access as a virtual network to carry out a specific task (e.g., multicast video conferencing). HyperNets package together the network topology configuration, software, and network services needed to create and deploy a custom virtual network. Users download HyperNets from HyperNet repositories and then “run” them on virtualized network infrastructure, much like users download and run virtual appliances on a virtual machine. To support the HyperNet abstraction, we created a Network Hypervisor service that provides a set of APIs that can be called to create a virtual network with certain characteristics. To evaluate the HyperNet architecture, we implemented several example HyperNets and ran them on our prototype implementation of the Network Hypervisor. Our experiments show that the Hypervisor API can be used to compose almost any special-purpose network – networks capable of carrying out functions that the current Internet does not provide. Moreover, the design of our HyperNet architecture is highly extensible, enabling developers to write high-level libraries (using the Network Hypervisor APIs) to accomplish complicated tasks.
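Because the abstract only states that the Network Hypervisor exposes APIs for building virtual networks, the sketch below merely illustrates what such a deploy-and-run workflow could look like; every class, method, and file name in it is hypothetical and not taken from the dissertation.

```python
# Hypothetical sketch of the workflow the abstract describes: download a
# HyperNet package and "run" it through a Network Hypervisor-style service.
# NetworkHypervisor, load_package, instantiate, and the package file name are
# all invented for illustration; the actual API is defined in the dissertation.

class NetworkHypervisor:
    def load_package(self, path):
        # Parse the package: topology, router software, and network services.
        return {"path": path, "topology": "multicast-tree", "services": ["video"]}

    def instantiate(self, package):
        # Reserve virtual routers and channels, configure them, return a handle.
        return f"virtual network deployed from {package['path']}"

hypervisor = NetworkHypervisor()
package = hypervisor.load_package("multicast-conference.hypernet")  # hypothetical file
network = hypervisor.instantiate(package)
```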
97

Efficient Anonymous Biometric Matching in Privacy-Aware Environments

Luo, Ying 01 January 2014 (has links)
Video surveillance is an important tool used in security and environmental monitoring; however, the widespread deployment of surveillance cameras has raised serious privacy concerns. Many privacy-enhancing schemes have recently been proposed to automatically redact images of selected individuals in surveillance video for protection. To identify these individuals for protection, the most reliable approach is to use biometric signals, as they are immutable and highly discriminative. If misused, however, these same characteristics of biometrics can seriously defeat the goal of privacy protection. In this dissertation, an Anonymous Biometric Access Control (ABAC) procedure based on biometric signals is proposed for privacy-aware video surveillance. The ABAC procedure uses Secure Multi-party Computation (SMC) based protocols to verify membership of an incoming individual without knowing his/her true identity. To make SMC-based protocols scalable to large biometric databases, I introduce the k-Anonymous Quantization (kAQ) framework to provide an effective and secure tradeoff between privacy and complexity. kAQ limits the system's knowledge of the incoming individual to k maximally dissimilar candidates in the database, where k is a design parameter that controls the complexity-privacy tradeoff. The relationship between biometric similarity and privacy is experimentally validated using a twin iris database. The effectiveness of the entire system is demonstrated on a public iris biometric database. To provide protected subjects with full access to their privacy information in the video surveillance system, I develop a novel privacy information management system that allows subjects to access their information via the same biometric signals used for ABAC. The system is composed of two encrypted-domain protocols: the privacy information encryption protocol encrypts the original video records using the iris pattern acquired during the ABAC procedure, and the privacy information retrieval protocol allows the video records to be anonymously retrieved through a GC-based iris pattern matching process. Experimental results on a public iris biometric database demonstrate the validity of my framework.
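The kAQ protocol itself is not spelled out in the abstract; the following non-cryptographic sketch only illustrates the underlying selection idea of picking k maximally dissimilar candidates, here with toy binary codes, Hamming distance, and greedy farthest-point selection as assumptions.

```python
import numpy as np

# Illustrative, non-cryptographic sketch of the idea behind kAQ: restrict the
# candidate set to k maximally dissimilar database entries so that membership
# in the set reveals little about the incoming individual. Dissimilarity is
# plain Hamming distance over toy binary iris codes; the real protocol runs
# under secure multi-party computation and is defined in the dissertation.

def greedy_dissimilar(codes, k, start=0):
    dist = lambda a, b: int(np.count_nonzero(a != b))  # Hamming distance
    chosen = [start]
    while len(chosen) < k:
        # Add the entry whose nearest already-chosen entry is farthest away.
        best = max(
            (i for i in range(len(codes)) if i not in chosen),
            key=lambda i: min(dist(codes[i], codes[j]) for j in chosen),
        )
        chosen.append(best)
    return chosen

codes = np.random.randint(0, 2, size=(100, 2048))  # 100 toy 2048-bit iris codes
candidate_set = greedy_dissimilar(codes, k=5)
```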
98

A COMPREHENSIVE HDL MODEL OF A LINE ASSOCIATIVE REGISTER BASED ARCHITECTURE

Sparks, Matthew A. 01 January 2013 (has links)
Modern processor architectures suffer from an ever-increasing gap between processor and memory performance. The current memory-register model attempts to hide this gap with a system of cache memory. Line Associative Registers (LARs) are proposed as a new mechanism to avoid the memory gap by pre-fetching and associatively updating both instructions and data. This thesis presents a fully LAR-based architecture, targeting a previously developed instruction set architecture. The architecture features an execution pipeline supporting SWAR operations, and a memory system supporting the associative behavior of LARs and lazy writeback to memory.
99

A Scalable Architecture for Simplifying Full-Range Scientific Data Analysis

Kendall, Wesley James 01 December 2011 (has links)
According to a recent exascale roadmap report, analysis will be the limiting factor in gaining insight from exascale data. Analysis problems that must operate on the full range of a dataset are among the most difficult. Some of the primary challenges in this regard come from disk access, data management, and the programmability of analysis tasks on exascale architectures. In this dissertation, I have provided an architectural approach that simplifies and scales data analysis on supercomputing architectures while masking parallel intricacies from the user. My architecture makes three primary general contributions: 1) a novel design pattern and implementation for reading multi-file and variable datasets, 2) the integration of querying and sorting as a way to simplify data-parallel analysis tasks, and 3) a new parallel programming model and system for efficiently scaling domain-traversal tasks. The design of my architecture has enabled studies in several application areas that were not previously possible, including large-scale satellite data and ocean flow analysis. The major driving example is an internal-model variability assessment of flow behavior in the GEOS-5 atmospheric modeling dataset. This application issued over 40 million particle traces for model comparison (the largest parallel flow tracing experiment to date), and my system was able to scale execution up to 65,536 processes on an IBM BlueGene/P system.
100

Fuzzy framework for robust architecture identification in concept selection

Patterson, Frank H. 07 January 2016 (has links)
An evolving set of modern physics-based, multi-disciplinary conceptual design methods seeks to explore the feasibility of a new generation of systems, with new capabilities, capable of missions that conventional vehicles cannot be empirically redesigned to perform. These methods provide a more complete understanding of a concept's design space, forecasting the feasibility of uncertain systems, but are often computationally expensive and time-consuming to prepare. This trend creates a unique and critical need to identify a manageable number of capable concept alternatives early in the design process. Ongoing efforts attempting to stretch capability through new architectures, like the U.S. Army's Future Vertical Lift effort and DARPA's Vertical Takeoff and Landing (VTOL) X-plane program, highlight this need.
The process of identifying and selecting a concept configuration is often given insufficient attention, especially when a small subset of favorable concept families is not immediately apparent. Commonly utilized methods for concept generation, like filtered morphological analysis, often identify an exponential number of alternatives. Simple approaches to concept selection then rely on designers to identify a relatively small subset of alternatives for comparison through simple methods typically based on decision matrices (Pugh, TOPSIS, AHP, etc.). More in-depth approaches utilize modeling and simulation to compare concepts with techniques such as stochastic optimization or probabilistic decision making, but a complicated setup limits these approaches to just a few discrete alternatives. A new framework to identify and select promising, robust concept configurations utilizing fuzzy methods is proposed in this research and applied to the example problem of concept selection for DARPA's VTOL X-plane program. The framework leverages fuzzy systems in conjunction with morphological analysis to assess large design spaces of potential architecture alternatives while capturing the inherent uncertainty and ambiguity in the evaluation of these early concepts. Experiments show how various fuzzy systems can be utilized for evaluating criteria of interest across disparate architectures by modeling expert knowledge as well as simple physics-based data. The models are integrated into a single environment and variations on multi-criteria optimization are tested to demonstrate an ability to identify a non-dominated set of architectural families in a large combinatorial design space. The resulting framework is shown to provide an approach to quickly identify promising concepts in the face of uncertainty early in the design process.
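As a hedged sketch of two ingredients such a framework can combine, the code below pairs a triangular fuzzy membership function with a Pareto (non-dominated) filter over scored architecture alternatives; the criteria, scores, and concept names are placeholders, not results from the dissertation.

```python
# Minimal sketch: a triangular fuzzy membership for an uncertain criterion
# score, plus a Pareto (non-dominated) filter over architecture alternatives.
# All values and concept names below are illustrative placeholders.

def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy number (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def non_dominated(alternatives):
    """Keep alternatives not dominated on all criteria (higher is better)."""
    def dominates(u, v):
        return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))
    return [a for a in alternatives
            if not any(dominates(b["scores"], a["scores"])
                       for b in alternatives if b is not a)]

# Example: fuzzify an uncertain cruise-speed estimate into a [0, 1] score.
speed_score = triangular(210.0, 150.0, 250.0, 300.0)  # placeholder bounds (knots)

alts = [
    {"name": "tilt-rotor", "scores": (0.7, 0.5)},   # (speed score, hover score)
    {"name": "compound",   "scores": (0.6, 0.6)},
    {"name": "lift-fan",   "scores": (0.5, 0.4)},   # dominated by "compound"
]
front = non_dominated(alts)  # -> tilt-rotor and compound remain
```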
