101. Fuzzy framework for robust architecture identification in concept selection (Patterson, Frank H., 07 January 2016)
An evolving set of modern physics-based, multi-disciplinary conceptual design methods seek to explore the feasibility of a new generation of systems, with new capabilities, capable of missions that conventional vehicles cannot be empirically redesigned to perform. These methods provide a more complete understanding of a concept's design space, forecasting the feasibility of uncertain systems, but are often computationally expensive and time consuming to prepare. This trend creates a unique and critical need to identify a manageable number of capable concept alternatives early in the design process. Ongoing efforts attempting to stretch capability through new architectures, like the U.S. Army's Future Vertical Lift effort and DARPA's Vertical Takeoff and Landing (VTOL) X-plane program, highlight this need.
The process of identifying and selecting a concept configuration is often given insufficient attention, especially when a small subset of favorable concept families is not immediately apparent. Commonly utilized methods for concept generation, like filtered morphological analysis, often identify an exponential number of alternatives. Simple approaches to concept selection then rely on designers to identify a relatively small subset of alternatives for comparison through methods typically based on decision matrices (Pugh, TOPSIS, AHP, etc.). More in-depth approaches utilize modeling and simulation to compare concepts with techniques such as stochastic optimization or probabilistic decision making, but a complicated setup limits these approaches to just a few discrete alternatives.
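To make the decision-matrix style of selection concrete, here is a minimal TOPSIS sketch in Python; the alternatives, criterion scores, and weights are hypothetical placeholders rather than data from the thesis.

```python
import numpy as np

# Hypothetical decision matrix: rows are concept alternatives, columns are
# criteria (e.g., hover efficiency, cruise speed, empty weight fraction).
scores = np.array([[0.60, 240.0, 0.55],
                   [0.45, 320.0, 0.60],
                   [0.70, 180.0, 0.58]], dtype=float)
weights = np.array([0.5, 0.3, 0.2])          # assumed relative importance
benefit = np.array([True, True, False])      # False means lower is better

# 1. Vector-normalize each criterion column, then apply the weights.
norm = scores / np.linalg.norm(scores, axis=0)
weighted = norm * weights

# 2. Ideal and anti-ideal points depend on whether a criterion is a benefit or a cost.
ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))

# 3. Closeness coefficient: distance from the anti-ideal relative to total distance.
d_pos = np.linalg.norm(weighted - ideal, axis=1)
d_neg = np.linalg.norm(weighted - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)

print("Ranking (best first):", np.argsort(-closeness))
```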
A new framework to identify and select promising, robust concept configurations utilizing fuzzy methods is proposed in this research and applied to the example problem of concept selection for DARPA's VTOL X-plane program. The framework leverages fuzzy systems in conjunction with morphological analysis to assess large design spaces of potential architecture alternatives while capturing the inherent uncertainty and ambiguity in the evaluation of these early concepts. Experiments show how various fuzzy systems can be utilized for evaluating criteria of interest across disparate architectures by modeling expert knowledge as well as simple physics-based data. The models are integrated into a single environment and variations on multi-criteria optimization are tested to demonstrate an ability to identify a non-dominated set of architectural families in a large combinatorial design space. The resulting framework is shown to provide an approach to quickly identify promising concepts in the face of uncertainty early in the design process.
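The non-dominated filtering step can be illustrated with a short sketch. Here the fuzzy evaluation is collapsed into precomputed per-criterion scores, so this is a simplification of the framework's idea rather than its implementation, and the alternative names and scores are invented for illustration.

```python
from typing import Dict, List, Tuple

# Hypothetical architecture alternatives from a morphological matrix, each
# already scored (higher is better) on a few criteria by some fuzzy evaluation.
alternatives: Dict[str, Tuple[float, ...]] = {
    "tilt-rotor":    (0.72, 0.55, 0.60),
    "lift-fan":      (0.65, 0.70, 0.50),
    "tilt-wing":     (0.70, 0.52, 0.65),
    "stopped-rotor": (0.40, 0.45, 0.55),
}

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """a dominates b if it is at least as good everywhere and better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated(scores: Dict[str, Tuple[float, ...]]) -> List[str]:
    return [name for name, s in scores.items()
            if not any(dominates(other, s)
                       for o_name, other in scores.items() if o_name != name)]

print(non_dominated(alternatives))   # the Pareto set of architecture families
```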
102. Benchmarking Public and Private Blockchains and Understanding the Development of Private Blockchain Networks (Tilton, Peter, 01 January 2018)
This thesis explores blockchain technology from two angles: its technical performance, and the process of developing blockchain networks and applications. The first half of the paper analyzes two research papers, "Bitcoin-NG: A Scalable Blockchain Protocol" and "Untangling Blockchain: A Data Processing View of Blockchain Systems," to understand and explain the technical differences and shortcomings of blockchain technologies. The second half develops a private blockchain network on the Ethereum platform and deploys a smart contract on it. This process gives insight into the development of blockchain applications and identifies the challenges blockchain developers face.
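For context on the underlying data structure, the following generic sketch shows the hash-chaining that makes tampering in a blockchain detectable; it is not the private Ethereum network or the smart contract built in the thesis.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    # Hash a canonical JSON encoding of the block contents.
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def new_block(prev: dict, transactions: list) -> dict:
    return {
        "index": prev["index"] + 1,
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": block_hash(prev),   # links this block to its predecessor
    }

genesis = {"index": 0, "timestamp": 0.0, "transactions": [], "prev_hash": "0" * 64}
chain = [genesis]
chain.append(new_block(chain[-1], [{"from": "alice", "to": "bob", "amount": 5}]))
chain.append(new_block(chain[-1], [{"from": "bob", "to": "carol", "amount": 2}]))

# Verification: any edit to an earlier block changes its hash and breaks the link.
valid = all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))
print("chain valid:", valid)
```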
103. Model-based trade studies in systems architectures design phases (Albarello, Nicolas, 17 December 2012)
The design of system architectures is a complex task with high stakes. During this activity, system designers must create design alternatives and compare them in order to select the most relevant system architecture given a set of criteria. In order to investigate different alternatives, designers must generally limit their trade studies to a small portion of the design space, which can comprise an enormous number of solutions. Traditionally, the architecture design process is driven mainly by engineering judgment and designers' experience, and the selected alternatives are often adapted versions of known solutions. The risk is then to select a relevant but suboptimal solution. In order to increase confidence in the optimality of the selected solution, the coverage of the design space must be increased. The use of computational design synthesis methods has proved to be an efficient way to support designers in the design of engineering artifacts (structures, electrical circuits, etc.). In order to assist system designers during the architecture design process, a computational method for complex systems is defined. This method uses an evolutionary approach (genetic algorithms) to guide the design-space exploration process toward optimal zones. The initial population of the genetic algorithm is created by a computational design synthesis technique that can generate different physical architectures and allocation mappings for a given functional architecture. The method yields the optimal solutions of the stated design problem. These solutions can then be used by designers for more detailed trade studies or for technical negotiations with system suppliers.
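A minimal sketch of the evolutionary loop described above, assuming a toy morphological matrix and a placeholder fitness function; it illustrates the genetic-algorithm mechanics only, not the method's physics-based or allocation models.

```python
import random

# Hypothetical morphological matrix: each entry is the number of options for one
# architectural decision; an individual is one option index per decision.
OPTIONS = [3, 4, 2, 3]
POP, GENS, MUT_P = 30, 50, 0.1

def random_individual():
    return [random.randrange(n) for n in OPTIONS]

def fitness(ind):
    # Placeholder objective; a real framework would call physics-based or
    # expert models here.
    return -sum((x - (n - 1) / 2) ** 2 for x, n in zip(ind, OPTIONS))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind):
    return [random.randrange(OPTIONS[i]) if random.random() < MUT_P else x
            for i, x in enumerate(ind)]

population = [random_individual() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]                       # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print("best architecture:", max(population, key=fitness))
```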
104. Storage Management of Data-intensive Computing Systems (Xu, Yiqi, 18 March 2016)
Computing systems are becoming increasingly data-intensive because of the explosion of data and the need to process it, and storage management is critical to application performance in such data-intensive computing systems. However, existing resource management frameworks in these systems lack support for storage management, which causes unpredictable performance degradation when applications are under I/O contention. Storage management of data-intensive systems is a challenging problem because I/O resources cannot be easily partitioned and distributed storage systems require scalable management. This dissertation presents solutions to address these challenges for typical data-intensive systems, including high-performance computing (HPC) systems and big-data systems.
For HPC systems, the dissertation presents vPFS, a performance virtualization layer for parallel file system (PFS) based storage systems. It employs user-level PFS proxies to interpose and schedule parallel I/Os on a per-application basis. Based on this framework, it enables SFQ(D)+, a new proportional-share scheduling algorithm that allows diverse applications to share storage with good performance isolation and resource utilization. To manage an HPC system's total I/O service, it also provides two complementary synchronization schemes to coordinate the scheduling of large numbers of storage nodes in a scalable manner.
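For intuition, the sketch below implements a simplified start-time fair queueing scheduler with a bounded dispatch depth D, in the spirit of SFQ(D); it is not the SFQ(D)+ algorithm of the dissertation, and the weights and request stream are illustrative.

```python
import heapq
from collections import defaultdict

class SFQD:
    """Simplified start-time fair queueing with dispatch depth D (SFQ(D)-style)."""

    def __init__(self, weights, depth):
        self.weights = weights            # per-application share weights
        self.depth = depth                # max outstanding requests at the device
        self.vtime = 0.0                  # global virtual time
        self.last_finish = defaultdict(float)
        self.queue = []                   # (start_tag, seq, app, cost)
        self.outstanding = 0
        self.seq = 0

    def submit(self, app, cost):
        start = max(self.vtime, self.last_finish[app])
        self.last_finish[app] = start + cost / self.weights[app]
        heapq.heappush(self.queue, (start, self.seq, app, cost))
        self.seq += 1

    def dispatch(self):
        """Issue queued requests while fewer than D are outstanding."""
        issued = []
        while self.queue and self.outstanding < self.depth:
            start, _, app, cost = heapq.heappop(self.queue)
            self.vtime = start            # virtual time advances with dispatches
            self.outstanding += 1
            issued.append((app, cost))
        return issued

    def complete(self):
        self.outstanding -= 1

sched = SFQD(weights={"A": 3, "B": 1}, depth=4)
for _ in range(6):
    sched.submit("A", cost=1.0)
    sched.submit("B", cost=1.0)
print(sched.dispatch())   # dispatched requests favor A roughly 3:1
```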
For big-data systems, the dissertation presents IBIS, an interposition-based big-data I/O scheduler. By interposing the different I/O phases of big-data applications, it schedules the I/Os transparently to the applications. It enables a new proportional-share scheduling algorithm, SFQ(D2), to address the dynamics of the underlying storage by adaptively adjusting the I/O concurrency. Moreover, it employs a scalable broker to coordinate the distributed I/O schedulers and provide proportional sharing of a big-data system’s total I/O service.
Experimental evaluations show that these solutions have low overhead and provide strong I/O performance isolation. For example, vPFS's overhead is less than 3% in throughput and it delivers proportional sharing within 96% of the target for diverse workloads; and IBIS provides up to 99% better performance isolation for WordCount and 30% better proportional slowdown for TeraSort and TeraGen than native YARN.
105. On the Design of Real-Time Systems on Multi-Core Platforms under Uncertainty (Wang, Tianyi, 26 June 2015)
Real-time systems are computing systems that demand assurance of not only the logical correctness of computational results but also the timing of these results. To ensure timing constraints, traditional real-time system designs usually adopt a worst-case based deterministic approach. However, such an approach is becoming out of sync with the continuous evolution of IC technology and the increased complexity of real-time applications. As IC technology continues to evolve into the deep sub-micron domain, process variation causes processor performance to vary from die to die, chip to chip, and even core to core. The extensive resource sharing on multi-core platforms also significantly increases the uncertainty when executing real-time tasks. The traditional approach can only lead to extremely pessimistic, and thus impractical, designs of real-time systems.
Our research seeks to address the uncertainty problem when designing real-time systems on multi-core platforms. We first attacked the uncertainty problem caused by process variation. We proposed a virtualization framework and developed techniques to optimize the system's performance under process variation. We further studied the problem of peak temperature minimization for real-time applications on multi-core platforms. Three heuristics were developed to reduce the peak temperature for real-time systems. Next, we sought to address the uncertainty problem in real-time task execution times by developing statistical real-time scheduling techniques. We studied the problem of fixed-priority real-time scheduling of implicit periodic tasks with probabilistic execution times on multi-core platforms. We further extended our research to tasks with explicit deadlines, extending the concept of harmonic tasks to this more general task set and developing new task partitioning techniques. Throughout our research, we have conducted extensive simulations to study the effectiveness and efficiency of our developed techniques.
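As a much-simplified illustration of partitioned fixed-priority scheduling, the sketch below packs periodic tasks onto cores with a first-fit heuristic guarded by the Liu and Layland rate-monotonic utilization bound; it assumes deterministic worst-case execution times rather than the probabilistic ones studied in the dissertation, and the task set is invented.

```python
# Each task is (worst-case execution time, period); utilization = C / T.
tasks = [(1.0, 4.0), (2.0, 8.0), (1.0, 5.0), (3.0, 10.0), (2.0, 6.0)]

def rm_bound(n: int) -> float:
    """Liu & Layland utilization bound for n tasks under rate-monotonic scheduling."""
    return n * (2 ** (1.0 / n) - 1)

def first_fit_partition(tasks, num_cores):
    cores = [[] for _ in range(num_cores)]
    # Consider tasks in decreasing utilization order (first-fit decreasing).
    for c, t in sorted(tasks, key=lambda ct: ct[0] / ct[1], reverse=True):
        for core in cores:
            util = sum(ci / ti for ci, ti in core) + c / t
            if util <= rm_bound(len(core) + 1):   # sufficient schedulability test
                core.append((c, t))
                break
        else:
            return None                           # no feasible assignment found
    return cores

print(first_fit_partition(tasks, num_cores=2))
```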
The increasing process variation and the ever-increasing scale and complexity of real-time systems both demand a paradigm shift in the design of real-time applications. Effectively dealing with the uncertainty in design of real-time applications is a challenging but also critical problem. Our research is such an effort in this endeavor, and we conclude this dissertation with discussions of potential future work.
106. System-on-a-Chip (SoC) based Hardware Acceleration in Register Transfer Level (RTL) Design (Niu, Xinwei, 08 November 2012)
Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining hardware circuit size. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can bring significant performance improvement for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if hardware acceleration is used for the elements that incur performance overheads. The concepts presented in this study can be readily applied to a variety of sophisticated software applications.
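A hedged sketch of the profiling step, using Python's cProfile on a stand-in workload to surface hotspot candidates; the thesis itself profiles an H.264 CODEC with hardware-oriented attributes such as cycles per loop, which this toy example does not capture.

```python
import cProfile
import pstats

def dct_like_kernel(n=200):
    # Stand-in for a compute-heavy CODEC inner loop (hypothetical workload).
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += (i * j) % 7
    return total

def control_logic():
    return sum(range(1000))

def application():
    for _ in range(20):
        dct_like_kernel()
        control_logic()

profiler = cProfile.Profile()
profiler.enable()
application()
profiler.disable()

# Rank functions by cumulative time; the top entries are the hotspot candidates
# that would be considered for offloading to a hardware accelerator.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```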
The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified by using critical attributes such as cycles per loop, loop rounds, etc. (2) A hardware acceleration method based on Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is then converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed. The trade-off among these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow. Hardware optimization techniques are used for higher performance and lower resource costs.
Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique. The system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves a 7.9X performance improvement and saves 75.85% of energy consumption.
107. An Integrated Framework for Patent Analysis and Mining (Zhang, Longhui, 01 April 2016)
Patent documents are important intellectual resources for protecting the interests of individuals, organizations, and companies. These documents have great research value and are beneficial to the industry, business, law, and policy-making communities. Patent mining aims at assisting patent analysts in investigating, processing, and analyzing patent documents, and has attracted increasing interest in academia and industry. However, despite recent advances in patent mining, several critical issues in current patent mining systems have not been well explored in previous studies.
These issues include: 1) the query retrieval problem, which concerns assisting patent analysts in finding all patent documents relevant to a given patent application; 2) the patent document comparative summarization problem, which concerns helping patent analysts quickly review any given pair of patent documents; and 3) the key patent document discovery problem, which concerns helping patent analysts quickly grasp the linkage between different technologies in order to better understand the technical trend in a collection of patent documents.
This dissertation follows the stream of research that covers the aforementioned issues of existing patent analysis and mining systems. In this work, we delve into three interleaved aspects of patent mining techniques: (1) PatSearch, a framework for automatically generating a search query from a given patent application and retrieving relevant patents for the user; (2) PatCom, a framework for investigating the relationship, in terms of commonality and difference, between pairs of patent documents; and (3) PatDom, a framework for integrating multiple types of patent information to identify important patents from a large volume of patent documents.
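The query-generation idea behind PatSearch can be illustrated with a plain TF-IDF term ranking; the snippet below operates on toy documents and is a generic sketch, not the framework's actual query formulation.

```python
import math
import re
from collections import Counter

corpus = [
    "a rotor craft with tilting wing and electric propulsion",
    "battery management system for electric vehicle propulsion",
    "image compression method using discrete cosine transform",
]
application = "tilting rotor with distributed electric propulsion and folding wing"

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

# Document frequency of each term over the (toy) patent corpus.
df = Counter()
for doc in corpus:
    df.update(set(tokens(doc)))

tf = Counter(tokens(application))
n_docs = len(corpus)
tfidf = {term: count * math.log((1 + n_docs) / (1 + df[term]))
         for term, count in tf.items()}

# The highest-weighted terms become the search query for retrieval.
query_terms = sorted(tfidf, key=tfidf.get, reverse=True)[:5]
print(query_terms)
```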
In summary, the increasing volume and textual complexity of patent repositories lead to a series of challenges that are not well addressed by current-generation systems. My work proposed reasonable solutions to these challenges and provided insights on how to address them using a simple yet effective integrated patent mining framework.
108. Convergence properties of perceptrons (Adharapurapu, Ratnasri Krishna, 01 January 1995)
No description available.
109. A tabular propositional logic: and/or Table Translator (Lee, Chen-Hsiu, 01 January 2003)
The goal of this project is to design a tool that helps users translate any logic statement into Disjunctive Normal Form and presents the result as an AND/OR table, which makes the logical relation easier to express by using a two-dimensional grid of values or expressions. The tool is implemented as a web-based Java application, so users can access it via the World Wide Web.
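A minimal sketch of the translation step, assuming a small illustrative formula: enumerate the truth table of a propositional formula and collect the satisfying rows as a disjunction of conjunctions (DNF), each of which corresponds to one row of an AND/OR table. The project's actual Java implementation is not reproduced here.

```python
from itertools import product

def to_dnf(variables, formula):
    """Build DNF terms from the satisfying rows of the formula's truth table."""
    terms = []
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if formula(env):
            term = [v if env[v] else f"not {v}" for v in variables]
            terms.append("(" + " and ".join(term) + ")")
    return " or ".join(terms) if terms else "False"

# Example: (p -> q) and r, written with Python operators.
variables = ["p", "q", "r"]
formula = lambda e: ((not e["p"]) or e["q"]) and e["r"]

print(to_dnf(variables, formula))
# Each disjunct corresponds to one row of an AND/OR table.
```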
110. Network-on-Chip Synchronization (Buckler, Mark, 07 November 2014)
Technology scaling has enabled the number of cores within a System on Chip (SoC) to increase significantly. Globally Asynchronous Locally Synchronous (GALS) systems using Dynamic Voltage and Frequency Scaling (DVFS) operate each of these cores on distinct and dynamic clock domains. The main communication method between these cores is increasingly likely to be a Network-on-Chip (NoC). Typically, the interfaces between these clock domains experience multi-cycle synchronization latencies due to their use of “brute-force” synchronizers. This dissertation aims to improve the performance of NoCs, and thereby SoCs as a whole, by reducing this synchronization latency.
First, a survey of NoC improvement techniques is presented. One such technique, a multi-layer NoC, has been successfully simulated. Because DVFS is one of the most commonly used techniques, a thorough analysis and simulation of brute-force synchronizer circuits in both current and future process technologies is presented. Unfortunately, a multi-cycle latency is unavoidable when using brute-force synchronizers, so predictive synchronizers requiring only a single cycle of latency have been proposed.
To demonstrate the impact of these predictive synchronizer circuits at a high level, multi-core system simulations incorporating these circuits have been completed. Multiple forms of GALS NoC configurations have been simulated, including multi-synchronous, NoC-synchronous, and single-synchronizer. Speedup on the SPLASH benchmark suite was measured to directly quantify the performance benefit of predictive synchronizers in a full system. Additionally, Mean Time Between Failures (MTBF) has been calculated for each NoC synchronizer configuration to determine the reliability benefit possible when using predictive synchronizers.
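The MTBF figures referred to above follow from the standard metastability model MTBF = e^(t_r/tau) / (T_w * f_clk * f_data); the sketch below evaluates it for assumed parameter values, which are illustrative and not taken from the dissertation.

```python
import math

def synchronizer_mtbf(t_resolve, tau, t_w, f_clk, f_data):
    """Standard metastability MTBF model for a flip-flop synchronizer chain.

    t_resolve: time available for metastability to resolve (s)
    tau:       flip-flop resolution time constant (s)
    t_w:       metastability aperture / window (s)
    f_clk:     receiving clock frequency (Hz)
    f_data:    data transition rate (Hz)
    """
    return math.exp(t_resolve / tau) / (t_w * f_clk * f_data)

# Assumed (illustrative) values for a 1 GHz clock domain crossing.
f_clk, f_data = 1e9, 100e6
tau, t_w = 10e-12, 20e-12

for stages in (2, 3):
    # Roughly (stages - 1) clock periods are available for resolution.
    t_resolve = (stages - 1) / f_clk
    years = synchronizer_mtbf(t_resolve, tau, t_w, f_clk, f_data) / (3600 * 24 * 365)
    print(f"{stages}-flop synchronizer: MTBF ~ {years:.3e} years")
```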