91. Parallel architectures for solving combinatorial problems of logic design. Ho, Phuong Minh, 01 January 1989.
This thesis presents a new, practical approach to solving various NP-hard combinatorial problems in logic synthesis, logic programming, graph theory, and related areas. A problem to be solved is reduced in polynomial time to one of several generic combinatorial problems that can be expressed in the form of the Generalized Propositional Formula (GPF): a Boolean product of clauses, where each clause is a sum of products of negated or non-negated literals.
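In symbols, a GPF has the following shape (our own LaTeX rendering of the definition above; the index bounds are generic, not taken from the thesis):

    F = \prod_{i=1}^{m} \Bigl( \sum_{j=1}^{n_i} \prod_{k=1}^{p_{ij}} \ell_{ijk} \Bigr),
    \qquad \ell_{ijk} \in \{\, x, \bar{x} : x \in X \,\}

where the outer product denotes Boolean AND over clauses, each sum denotes OR, and each literal \ell_{ijk} is a variable in X or its negation. For example, F = (x_1 \bar{x}_2 + x_3)(\bar{x}_1 + x_2 x_3) is a GPF with two clauses.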
92. IMPROVING THE PERFORMANCE AND ENERGY EFFICIENCY OF EMERGING MEMORY SYSTEMS. Guo, Yuhua, 01 January 2018.
Modern main memory is primarily built from dynamic random access memory (DRAM) chips. As DRAM chips scale to higher densities, three main problems impede DRAM scalability and performance improvement. First, DRAM refresh overhead grows from negligible to severe, which limits DRAM scalability and degrades performance. Second, although memory capacity has increased dramatically in the past decade, memory bandwidth has not kept pace with CPU performance scaling, leading to the memory wall problem. Third, DRAM dissipates considerable power and has been reported to account for as much as 40% of total system energy, a problem that worsens as DRAM scales up.
To address these problems, 1) we propose Rank-level Piggyback Caching (RPC) to alleviate DRAM refresh overhead by servicing memory requests and refresh operations in parallel; 2) we propose a high-performance, bandwidth-efficient approach, called SELF, that breaks through the memory bandwidth wall by exploiting die-stacked DRAM as part of memory; and 3) we propose a cost-effective and energy-efficient architecture for hybrid memory systems composed of high bandwidth memory (HBM) and phase change memory (PCM), called Dual Role HBM (DR-HBM). In DR-HBM, hot pages are tracked in a cost-effective way and migrated to the HBM to improve performance, while cold pages are kept in the PCM to save energy.
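As a flavor of the hot/cold page policy described above, here is a minimal Python sketch of epoch-based page migration in a two-tier HBM/PCM memory; the threshold, epoch mechanics, and names are our own assumptions for illustration, not details from the dissertation.

    from collections import Counter

    HOT_THRESHOLD = 64  # accesses per epoch before a page counts as "hot" (assumed value)

    class HybridMemory:
        """Toy two-tier memory: fast, power-hungry HBM over slow, low-energy PCM."""
        def __init__(self):
            self.hbm = set()         # pages currently in HBM
            self.pcm = set()         # pages resident in PCM
            self.counts = Counter()  # per-epoch access counts (the page tracker)

        def access(self, page):
            if page not in self.hbm:
                self.pcm.add(page)   # new pages start in the low-energy tier
            self.counts[page] += 1

        def end_epoch(self):
            """Promote hot pages to HBM, demote cold ones back to PCM."""
            for page, hits in self.counts.items():
                if hits >= HOT_THRESHOLD and page in self.pcm:
                    self.pcm.discard(page); self.hbm.add(page)
                elif hits < HOT_THRESHOLD and page in self.hbm:
                    self.hbm.discard(page); self.pcm.add(page)
            self.counts.clear()

The point of the epoch structure is that tracking stays cheap (one counter per touched page) while migration decisions amortize over many accesses.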
93. Toward Biologically-Inspired Self-Healing, Resilient Architectures for Digital Instrumentation and Control Systems and Embedded Devices. Khairullah, Shawkat Sabah, 01 January 2018.
Digital Instrumentation and Control (I&C) systems in safety-related applications of next-generation industrial automation require high levels of resilience against different fault classes. One of the more essential concepts for achieving this goal is the notion of resilient and survivable digital I&C systems. In recent years, self-healing concepts based on biological physiology have received attention for the design of robust digital systems. However, many of these approaches have not been architected from the outset with safety in mind, nor have they been targeted at the automation community, where a significant need exists. This dissertation presents a new self-healing digital I&C architecture called BioSymPLe, inspired by the way nature responds, defends, and heals: the stem cells in the immune system of living organisms, the life cycle of the living cell, and the pathway from deoxyribonucleic acid (DNA) to protein. The BioSymPLe architecture integrates biological concepts, fault tolerance techniques, and operational schematics for the international standard IEC 61131-3 to facilitate adoption in the automation industry. BioSymPLe is organized into three hierarchical levels: the local function migration layer at the top, the critical service layer in the middle, and the global function migration layer at the bottom. The local layer monitors the correct execution of functions at the cellular level and activates healing mechanisms at the critical service level. The critical layer allocates a group of functional B cells, the building blocks that execute the intended functionality of the critical application based on the DNA genetic codes stored inside each cell. The global layer uses the concept of embryonic stem cells, differentiating these cells to repair faulty T cells and supervising all repair mechanisms. Finally, two industrial applications have been mapped onto the proposed architecture; they tolerate a significant number of faults (transient, permanent, and hardware common cause failures, CCFs) that can stem from environmental disturbances, and we believe the nexus of these concepts can positively impact the next generation of critical systems in the automation industry.
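The healing flow can be pictured with a small sketch; everything here (the voter, the spare pool, the class names) is a hypothetical reconstruction of the ideas in the abstract, not code from the dissertation.

    class Cell:
        """A functional 'B cell' expressing one stored function (its 'DNA')."""
        def __init__(self, gene):
            self.gene = gene
            self.healthy = True
        def run(self, x):
            return self.gene(x) if self.healthy else None

    def vote(outputs):
        """Majority vote over replicated cells; disagreement exposes faults."""
        good = [o for o in outputs if o is not None]
        return max(set(good), key=good.count) if good else None

    def heal(cells, stem_pool):
        """'Differentiate' a spare stem cell to take over a faulty cell's gene."""
        for i, c in enumerate(cells):
            if not c.healthy and stem_pool:
                spare = stem_pool.pop()
                spare.gene, spare.healthy = c.gene, True
                cells[i] = spare

    cells = [Cell(lambda x: x + 1) for _ in range(3)]  # triplicated critical function
    cells[1].healthy = False                           # inject a fault
    assert vote([c.run(10) for c in cells]) == 11      # service survives the fault
    heal(cells, stem_pool=[Cell(None)])                # spare cell adopts the gene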
94. Adaptive Performance and Power Management in Distributed Computing Systems. Chen, Ming, 01 August 2010.
The complexity of distributed computing systems has raised two unprecedented challenges for system management. First, customers must be assured that their required service-level agreements, such as response time and throughput, are met. Second, system power consumption must be controlled to avoid system failures caused by power capacity overload, or by overheating due to increasingly high server density. Unfortunately, most existing work either relies on open-loop estimations based on off-line profiled system models, evolves in an ad hoc fashion requiring exhaustive iterations of tuning and testing, or oversimplifies the problem by ignoring the coupling between different system characteristics (e.g., response time and throughput, or the power consumption of different servers). As a result, most previous work lacks rigorous guarantees on the performance and power consumption of computing systems, and may degrade overall system performance. In this thesis, we extensively study adaptive performance/power management and power-efficient performance management for distributed computing systems such as information dissemination systems, power grid management systems, and data centers, by proposing Multiple-Input-Multiple-Output (MIMO) control and hierarchical designs based on feedback control theory. For adaptive performance management, we design an integrated solution that controls both the average response time and CPU utilization in an example information dissemination system, achieving bounded response time for high-priority information and maximized system throughput. In addition, we design a hierarchical control solution that guarantees the deadlines of real-time tasks in power grid computing by grouping them according to their characteristics. For adaptive power management, we design MIMO optimal control solutions for power control at the cluster and server levels and a hierarchical solution for large-scale data centers. Our MIMO control design captures the coupling among different system characteristics, while our hierarchical design coordinates controllers at different levels. For power-efficient performance management, we discuss a two-layer coordinated management solution for virtualized data centers. Experimental results on both physical testbeds and in simulation demonstrate that all the solutions outperform state-of-the-art management schemes by significantly improving overall system performance.
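As a flavor of the MIMO feedback approach, the sketch below couples two actuators (CPU frequency scaling and request admission rate) to two measured outputs (response time and power); the gain matrix, setpoints, and numbers are invented for illustration and are not the controllers designed in the thesis.

    import numpy as np

    # Setpoints: desired response time (s) and power budget (W); assumed values.
    r = np.array([0.5, 200.0])

    # Static gain matrix mapping output errors to actuator adjustments.
    # The off-diagonal terms model the coupling a MIMO design accounts for
    # and a pair of independent single-loop controllers would miss.
    K = np.array([[0.8, 0.01],
                  [0.05, 0.4]])

    def control_step(measured, actuators):
        """One discrete-time step of an integral MIMO controller."""
        error = r - measured        # e.g., [response_time, power] from sensors
        delta = K @ error           # coupled adjustment of both knobs at once
        return np.clip(actuators + delta, 0.0, None)

    knobs = np.array([1.0, 100.0])  # [CPU frequency scale, admitted requests/s]
    knobs = control_step(np.array([0.7, 220.0]), knobs)  # both targets violated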
95. Integrating High-Level Requirements into Optimization Problems: Theory and Applications. Roda, Fabio, 01 March 2013.
We use Systems Engineering and mathematical programming together to integrate high-level requirements into optimization problems, and we apply this method to three different types of system. (1) Information Systems (IS), i.e., the networks of hardware, software, and user resources used in an enterprise, must provide the basis for the projects launched to meet business needs, and they must be able to evolve as one technology replaces another. We propose an operational model and a mathematical programming formulation that formalize a prioritization problem arising in the context of the technological evolution of an information system. (2) Recommender Systems (RS) are a type of search engine whose objective is to provide personalized recommendations; we consider the problem of designing Recommender Systems so that they supply good, interesting, and accurate suggestions. (3) The transport of hazardous materials raises several problems tied to the ecological consequences of possible incidents: the transport system must carry hazardous waste to safe disposal in such a way that the risk of possible incidents is distributed equitably across the population. We consider two different notions of equity and integrate them into mathematical programming formulations.
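To make the equity idea concrete, here is a tiny, self-contained sketch (our own illustration with invented numbers) contrasting two objectives for assigning hazardous shipments to routes: minimizing total risk versus minimizing the worst risk borne by any one population zone.

    from itertools import product

    # risk[route][zone]: risk a route imposes on each of three zones (invented data)
    risk = {"A": (6, 1, 0), "B": (2, 2, 2), "C": (0, 1, 4)}
    shipments = 4  # identical shipments to assign to routes

    best_total, best_equitable = None, None
    for assignment in product(risk, repeat=shipments):
        zone_risk = [sum(risk[r][z] for r in assignment) for z in range(3)]
        total, worst = sum(zone_risk), max(zone_risk)
        if best_total is None or total < best_total[0]:
            best_total = (total, assignment)
        if best_equitable is None or worst < best_equitable[0]:
            best_equitable = (worst, assignment)

    print("min total risk:", best_total)           # may concentrate risk on one zone
    print("min worst-case risk:", best_equitable)  # spreads risk across zones

In a real mathematical programming formulation the enumeration is replaced by decision variables and a solver, but the tension is the same: the cheapest plan overall is rarely the fairest one.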
96. An Open Systems Architecture for Telemetry Receivers. Parker, Peter; Nelson, John; Pippitt, Mark, 10 1900.
An open systems architecture (OSA) is one in which all of the interfaces are fully defined, available to the public, and maintained according to a group consensus. One approach to achieving this is to use modular hardware and software and to buy commercial off-the-shelf and commodity hardware. Benefits of an OSA include easy access to the latest technological advances in both hardware and software, support for net-centric operations, and a flexible design that can easily change as customers' needs change. This paper provides details of an OSA system designed for a telemetry receiver and lists the benefits of OSA for the telemetry community.
97. A HyperNet Architecture. Huang, Shufeng, 01 January 2014.
Network virtualization is becoming a fundamental building block of future Internet architectures. By adding networking resources to the “cloud”, users can rent virtual routers from the underlying network infrastructure, connect them with virtual channels to form a virtual network, and tailor the virtual network (e.g., by loading application-specific networking protocols, libraries, and software stacks onto the virtual routers) to carry out a specific task. In addition, network virtualization technology allows such special-purpose virtual networks to co-exist on the same network infrastructure without interfering with each other.
Although the underlying network resources needed to support virtualized networks are rapidly becoming available, constructing a virtual network from the ground up and using the network is a challenging and labor-intensive task, one best left to experts.
To tackle this problem, we introduce the concept of a HyperNet: a pre-built, pre-configured network package that a user can easily deploy, or use to access a virtual network, in order to carry out a specific task (e.g., multicast video conferencing). HyperNets package together the network topology configuration, software, and network services needed to create and deploy a custom virtual network. Users download HyperNets from HyperNet repositories and then “run” them on virtualized network infrastructure, much as users download and run virtual appliances on a virtual machine. To support the HyperNet abstraction, we created a Network Hypervisor service that provides a set of APIs that can be called to create a virtual network with certain characteristics.
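The abstract does not reproduce the actual API, so the following Python-flavored sketch only suggests what calling the Network Hypervisor to build a virtual network might look like; every class, method, and parameter name here is hypothetical.

    # Hypothetical client-side sketch; names are invented for illustration
    # and do not come from the HyperNet implementation.

    class NetworkHypervisor:
        def __init__(self):
            self.topology = {"routers": [], "channels": []}
        def create_router(self, name, stack="ipv4"):
            self.topology["routers"].append({"name": name, "stack": stack})
            return name
        def create_channel(self, a, b, bandwidth_mbps=100):
            self.topology["channels"].append((a, b, bandwidth_mbps))
        def deploy(self):
            print("deploying virtual network:", self.topology)

    # A HyperNet package could script its topology once, so any user can
    # deploy it without touching the underlying infrastructure:
    hv = NetworkHypervisor()
    src = hv.create_router("source", stack="multicast")
    for i in range(3):
        viewer = hv.create_router(f"viewer-{i}")
        hv.create_channel(src, viewer, bandwidth_mbps=10)
    hv.deploy()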
To evaluate the HyperNet architecture, we implemented several example HyperNets and ran them on our prototype implementation of the Network Hypervisor. Our experiments show that the Hypervisor API can be used to compose almost any special-purpose network: networks capable of carrying out functions that the current Internet does not provide. Moreover, the design of our HyperNet architecture is highly extensible, enabling developers to write high-level libraries (using the Network Hypervisor APIs) to achieve complicated tasks.
98. Efficient Anonymous Biometric Matching in Privacy-Aware Environments. Luo, Ying, 01 January 2014.
Video surveillance is an important tool in security and environmental monitoring; however, the widespread deployment of surveillance cameras has raised serious privacy concerns. Many privacy-enhancing schemes have recently been proposed to automatically redact images of selected individuals in surveillance video for protection. The most reliable way to identify these individuals is to use biometric signals, as they are immutable and highly discriminative; but if misused, these same characteristics can seriously defeat the goal of privacy protection. In this dissertation, an Anonymous Biometric Access Control (ABAC) procedure based on biometric signals is proposed for privacy-aware video surveillance. The ABAC procedure uses Secure Multi-party Computation (SMC) based protocols to verify the membership of an incoming individual without knowing his or her true identity. To make SMC-based protocols scalable to large biometric databases, I introduce the k-Anonymous Quantization (kAQ) framework, which provides an effective and secure tradeoff between privacy and complexity. kAQ limits the system's knowledge of the incoming individual to k maximally dissimilar candidates in the database, where k is a design parameter that controls the complexity-privacy tradeoff. The relationship between biometric similarity and privacy is experimentally validated using a twin iris database. The effectiveness of the entire system is demonstrated on a public iris biometric database.
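The quantization step of kAQ can be pictured with a greedy farthest-point selection over toy binary "iris codes"; this sketch ignores the cryptographic protocols entirely and uses plain Hamming distance, so it illustrates the k-dissimilarity idea rather than the published algorithm.

    def hamming(a, b):
        """Distance between two equal-length binary code strings."""
        return sum(x != y for x, y in zip(a, b))

    def k_dissimilar(codes, k):
        """Greedily choose k mutually maximally dissimilar codes
        (farthest-point heuristic): membership in such a spread-out
        group reveals little about which member the probe actually is."""
        chosen, pool = [codes[0]], list(codes[1:])
        while len(chosen) < k and pool:
            far = max(pool, key=lambda c: min(hamming(c, s) for s in chosen))
            chosen.append(far)
            pool.remove(far)
        return chosen

    db = ["010101", "011111", "101010", "111000", "000011"]
    print(k_dissimilar(db, k=3))  # -> ['010101', '101010', '000011']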
To give protected subjects full access to their private information in the video surveillance system, I develop a novel privacy information management system that allows subjects to access their information via the same biometric signals used for ABAC. The system is composed of two encrypted-domain protocols: the privacy information encryption protocol encrypts the original video records using the iris pattern acquired during the ABAC procedure, and the privacy information retrieval protocol allows the video records to be anonymously retrieved through a garbled circuit (GC) based iris pattern matching process. Experimental results on a public iris biometric database demonstrate the validity of my framework.
99. A COMPREHENSIVE HDL MODEL OF A LINE ASSOCIATIVE REGISTER BASED ARCHITECTURE. Sparks, Matthew A., 01 January 2013.
Modern processor architectures suffer from an ever-increasing gap between processor and memory performance. The current memory-register model attempts to hide this gap with a system of cache memory. Line Associative Registers (LARs) are proposed as a new mechanism for avoiding the memory gap by prefetching and associatively updating both instructions and data. This thesis presents a fully LAR-based architecture targeting a previously developed instruction set architecture. The architecture features an execution pipeline supporting SWAR operations, and a memory system supporting the associative behavior of LARs and lazy writeback to memory.
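SWAR (SIMD Within A Register) packs several narrow lanes into one word and operates on all of them at once. The snippet below shows the classic carry-masking trick for adding four 8-bit lanes inside a 32-bit word; it is a generic illustration of the technique, not code from the thesis's HDL model.

    MASK_HI = 0x80808080  # top bit of each 8-bit lane
    MASK_LO = 0x7F7F7F7F  # low 7 bits of each lane

    def swar_add8(a, b):
        """Add four packed 8-bit lanes at once; carries never cross lanes."""
        low = (a & MASK_LO) + (b & MASK_LO)  # low 7 bits; any carry stays in bit 7
        return (low ^ ((a ^ b) & MASK_HI)) & 0xFFFFFFFF  # fold top bits back in

    # pack lanes [1, 2, 250, 16]; note 250 + 10 wraps to 4 within its lane
    a = (1 << 24) | (2 << 16) | (250 << 8) | 16
    b = (1 << 24) | (1 << 16) | (10 << 8) | 1
    s = swar_add8(a, b)
    print([(s >> sh) & 0xFF for sh in (24, 16, 8, 0)])  # -> [2, 3, 4, 17]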
100. A Scalable Architecture for Simplifying Full-Range Scientific Data Analysis. Kendall, Wesley James, 01 December 2011.
According to a recent exascale roadmap report, analysis will be the limiting factor in gaining insight from exascale data. Analysis problems that must operate on the full range of a dataset are among the most difficult. Some of the primary challenges come from disk access, data management, and the programmability of analysis tasks on exascale architectures. In this dissertation, I provide an architectural approach that simplifies and scales data analysis on supercomputing architectures while masking parallel intricacies from the user. My architecture makes three primary contributions: 1) a novel design pattern and implementation for reading multi-file and variable datasets, 2) the integration of querying and sorting as a way to simplify data-parallel analysis tasks, and 3) a new parallel programming model and system for efficiently scaling domain-traversal tasks.
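The second contribution, using querying and sorting to simplify data-parallel analysis, can be pictured with a small single-node Python sketch: each worker filters its local chunk, then one global sort groups matching records so a simple linear pass can finish the analysis. The data and structure below are invented for illustration and are not the supercomputer implementation.

    from multiprocessing import Pool

    # Invented records: (timestamp, variable, value) spread across "files"
    chunks = [
        [(3, "temp", 21.0), (1, "wind", 5.0)],
        [(2, "temp", 19.5), (4, "temp", 22.1)],
        [(1, "temp", 20.2), (2, "wind", 6.3)],
    ]

    def query(chunk):
        """Per-worker step: filter the local chunk (here: temperature records)."""
        return [rec for rec in chunk if rec[1] == "temp"]

    if __name__ == "__main__":
        with Pool(3) as pool:
            hits = [r for part in pool.map(query, chunks) for r in part]
        # The global sort by timestamp groups related records, so downstream
        # analysis (averaging, tracing, ...) is a single linear scan.
        for ts, var, val in sorted(hits):
            print(ts, var, val)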
The design of my architecture has enabled studies in several application areas that were not previously possible, including large-scale satellite data and ocean flow analysis. The major driving example is an internal-model variability assessment of flow behavior in the GEOS-5 atmospheric modeling dataset. This application issued over 40 million particle traces for model comparison (the largest parallel flow tracing experiment to date), and my system scaled execution up to 65,536 processes on an IBM BlueGene/P system.