11

Design of a Power-aware Dataflow Processor Architecture

Narayanaswamy, Ramya Priyadharshini, 12 August 2010
In a sensor-monitoring embedded computing environment, the data from a sensor is an event that triggers the execution of an application. A sensor node consists of multiple sensors and a general-purpose processor that handles the multiple events by deploying an event-driven software model. The software overhead of the general-purpose processor results in energy inefficiency. What is needed is a class of special-purpose processing elements that are more energy-efficient for computation. In the past, special-purpose microcontrollers have been designed that are energy-efficient for the targeted application space; however, reusing the same design techniques is not feasible for other application domains. This thesis therefore presents a power-aware dataflow processor architecture targeted at the electronic-textile computing space. The processor architecture has no instructions and handles multiple events inherently, without deploying software methods. This thesis also shows that the power-aware implementation reduces the overall static power consumption. / Master of Science
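To make the instruction-less, event-driven idea concrete, here is a minimal sketch (in TypeScript, with illustrative names; this is not the thesis's architecture) of the dataflow firing rule such a design relies on: a node fires as soon as all of its input tokens have arrived, so each sensor event drives computation directly, with no software dispatch loop in between.

```typescript
// Minimal sketch of the dataflow firing rule: a node runs as soon as
// all of its input tokens have arrived, so each sensor event directly
// triggers computation with no software dispatch in between.
// All names here are illustrative, not taken from the thesis.

type Token = number;

class DataflowNode {
  private slots: (Token | undefined)[];
  constructor(
    private arity: number,
    private op: (inputs: Token[]) => void, // fires when all slots are full
  ) {
    this.slots = new Array(arity).fill(undefined);
  }
  // Called by a sensor (or an upstream node) when an event produces data.
  accept(port: number, value: Token): void {
    this.slots[port] = value;
    if (this.slots.every((s) => s !== undefined)) {
      const inputs = this.slots as Token[];
      this.slots = new Array(this.arity).fill(undefined); // consume tokens
      this.op(inputs); // firing is driven by data arrival, not a program counter
    }
  }
}

// Two independent sensor events; the node fires only when both are present.
const fuse = new DataflowNode(2, ([t, h]) =>
  console.log(`temperature=${t} humidity=${h} -> alert=${t > 30 && h > 80}`),
);
fuse.accept(0, 32); // first event: nothing fires yet
fuse.accept(1, 85); // second event completes the token set; the node fires
```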
12

Practical Analysis of the Dynamic Characteristics of JavaScript

Wei, Shiyi, 05 October 2015
JavaScript is a dynamic object-oriented programming language designed with flexible programming mechanisms. It is widely used in developing sophisticated software systems, especially web applications. Despite its popularity, there is a lack of software tools that support JavaScript for software engineering clients. Dataflow analysis approximates software behavior by analyzing the program code; it is the foundation for many software tools. However, several unique features of JavaScript render existing dataflow analysis techniques ineffective. Reflective constructs, which generate code at runtime, make it difficult to acquire the complete program at compile time. Dynamic typing, which results in changes in object behavior, poses a challenge for building accurate models of objects. A variadic function can exhibit different functionality across calls; this variance in behavior may be caused by arguments whose values can only be known at runtime. Object constructors may be polymorphic, so that objects created by the same constructor contain different properties. In addition to object-oriented programming, JavaScript supports the functional and procedural paradigms; this renders dataflow analysis techniques ineffective when a JavaScript application uses multiple paradigms. Dataflow analysis needs to handle these challenges. In this work, we present an analysis framework and several dataflow analyses that can handle dynamic features in JavaScript. The first contribution of our work is the design and instantiation of the JavaScript Blended Analysis Framework (JSBAF). This general-purpose and flexible framework judiciously combines dynamic and static analyses. We have implemented an instance of JSBAF, blended taint analysis, to demonstrate the practicality of the framework. Our second contribution is a novel context-sensitive points-to analysis for JavaScript that accurately models object property changes. This algorithm uses a new program representation that enables partial flow-sensitive analysis, a more accurate object representation, and an expanded points-to graph. We have defined parameterized state sensitivity (i.e., k-state sensitivity) and evaluated the effectiveness of 1-state-sensitive analysis as the static phase of JSBAF. The third contribution of our work is an adaptive context-sensitive analysis that selectively applies context-sensitive analysis at the function level. This two-stage adaptive analysis extracts function characteristics from an inexpensive points-to analysis and uses learning-based heuristics to choose an appropriate context-sensitive analysis per function. The experimental results show that the adaptive analysis is more precise than any single context-sensitive analysis for several programs in the benchmarks, especially for multi-paradigm programs. / Ph. D.
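The dynamic features the abstract enumerates can be illustrated with a few generic snippets (written here as TypeScript-flavored JavaScript; these examples are not drawn from the dissertation itself):

```typescript
// Generic illustrations of the JavaScript feature classes the abstract
// lists as hard cases for static dataflow analysis.

// 1. Reflective construct: the accessed name is only known at runtime.
const propName: string = "pay" + "load"; // often built from external input
const obj: any = {};
obj[propName] = 42; // which property? unknowable from the source text alone

// 2. Dynamic typing: an object's shape changes during execution.
let x: any = { kind: "point", x: 1, y: 2 };
x.z = 3;       // property added after construction
delete x.kind; // property removed; a fixed object model is now wrong

// 3. Variadic function: behavior depends on how many arguments arrive.
function dist(...args: number[]): number {
  if (args.length === 2) return Math.hypot(args[0], args[1]);
  return Math.hypot(args[0] - args[2], args[1] - args[3]);
}

// 4. Polymorphic constructor: one constructor, differently-shaped objects.
function Shape(this: any, round: boolean) {
  if (round) this.radius = 1;
  else { this.width = 1; this.height = 1; }
}

console.log(dist(3, 4), dist(0, 0, 3, 4), obj[propName]);
console.log(new (Shape as any)(true), new (Shape as any)(false));
```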
13

ChipCflow - a tool for executing algorithms using the dynamic dataflow model in reconfigurable hardware - operators and dataflow graphs

Correia, Vasco Martins, 25 March 2009
ChipCflow is a tool for executing algorithms written in the C language using the dynamic dataflow model on partially reconfigurable hardware. The main goal of the ChipCflow project is to accelerate program execution through direct execution in hardware, exploiting as fully as possible the parallelism that is natural to the dataflow model. In this part of the project, a proof of concept for dataflow programming on reconfigurable hardware was carried out. Because of the complexity of a partially reconfigurable system, which is the subject of another module under development in the ChipCflow project, the dataflow model used here was the static one, on a platform without partial reconfiguration.
14

Dataflow Analysis and Workflow Design in Business Process Management

Sun, Xiaoyun January 2007 (has links)
Workflow technology has become a standard solution for managing increasingly complex business processes. Successful business process management depends on effective workflow modeling, which has been limited mainly to modeling the control and coordination of activities, i.e., the control flow perspective. However, even given a workflow specification that is flawless from the control flow perspective, errors can still occur due to an incorrect dataflow specification; such errors are referred to as dataflow anomalies. Currently, there are no adequate formalisms for discovering and preventing dataflow anomalies in a workflow specification. Therefore, the goal of this dissertation is to develop formal methods for automatically detecting dataflow anomalies in a given workflow model, together with a rigorous approach to workflow design that can help avoid dataflow anomalies during the design stage. In this dissertation, we first propose a formal approach to dataflow verification, which can detect dataflow anomalies such as missing data, redundant data, and potential data conflicts. In addition, we propose to use the dataflow matrix, a two-dimensional table showing the operations each activity performs on each data item, as a way to specify dataflow in workflows. We believe that our dataflow verification framework adds analytical rigor to business process management by enabling the systematic elimination of dataflow errors. We then propose a formal dependency-analysis-based approach to workflow design. A new concept called "activity relations" and a matrix-based analytical procedure are developed to enable the derivation of workflow models in a precise and rigorous manner. Moreover, we decouple the correctness issue from the efficiency issue as a way to reduce the complexity of workflow design, and we apply the concept of inline blocks to further simplify the procedure. These techniques make it easier to handle complex and unstructured workflow models, including overlapping patterns. In addition to proving the core theorems underlying the formal approaches and illustrating their validity by applying them to real-world cases, we provide detailed algorithms and system architectures as a roadmap for implementing the dataflow verification and workflow design procedures.
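To make the dataflow matrix idea concrete, here is a toy sketch (illustrative names, a linear activity sequence assumed; the dissertation's formalism covers general control-flow structures) of a matrix and a missing-data check:

```typescript
// A toy dataflow matrix: rows are activities in execution order, columns
// are data items, and each entry is the operation an activity performs
// on an item. A linear workflow is assumed for simplicity.

type Op = "read" | "write" | "-";
const items = ["order", "invoice", "approval"] as const;

// activity name -> operation on each data item
const matrix: [string, Op[]][] = [
  ["ReceiveOrder",  ["write", "-",     "-"]],
  ["CheckOrder",    ["read",  "-",     "write"]],
  ["BillCustomer",  ["read",  "read",  "-"]], // reads 'invoice' before any write
  ["CreateInvoice", ["read",  "write", "-"]],
];

// Missing-data anomaly: a read with no earlier write to the same item.
function findMissingData(m: [string, Op[]][]): string[] {
  const written = new Set<number>();
  const anomalies: string[] = [];
  for (const [activity, ops] of m) {
    ops.forEach((op, i) => {
      if (op === "read" && !written.has(i))
        anomalies.push(`${activity} reads '${items[i]}' before it is written`);
      if (op === "write") written.add(i);
    });
  }
  return anomalies;
}

console.log(findMissingData(matrix));
// -> [ "BillCustomer reads 'invoice' before it is written" ]
```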
16

JavaFlow: a Java DataFlow Machine

Ascott, Robert John, 10 February 2015
JavaFlow, a Java DataFlow Machine, is a machine design concept that implements a Java Virtual Machine while addressing technology roadmap issues, together with the ability to effectively utilize and manage very large numbers of processing cores. Specific design challenges addressed include: design complexity, through a common set of repeatable structures; low power, by featuring unused circuits and the ability to power off sections of the chip; clock propagation and wire limits, by using locality to bring data to processing elements and a Globally Asynchronous Locally Synchronous (GALS) design; and reliability, by allowing portions of the design to be bypassed in case of failures. A dataflow architecture is used with multiple heterogeneous networks connecting processing elements, each capable of executing a single Java bytecode instruction. Whole methods are cached in this DataFlow fabric, and the networks plus distributed intelligence are used for their management and execution. A mesh network is used for the dataflow transfers; two ordered networks are used for management and control flow mapping; and multiple high-speed rings are used to access the storage subsystem and a controlling General Purpose Processor (GPP). Analysis of benchmarks demonstrates the potential of this design concept. The design process was initiated by analyzing SPEC JVM benchmarks, which identified a small number of methods contributing a significant percentage of the overall bytecode operations. Additional analysis established static instruction mixes to prioritize the types of processing elements used in the DataFlow fabric. The overall objective of the machine is to provide multi-threading performance for Java methods deployed to this DataFlow fabric. With advances in technology, it is envisioned that from 1,000 to 10,000 cores/instructions could be deployed and managed using this structure. A DataFlow fabric of this size would allow all the key methods from the SPEC benchmarks to be resident. A baseline configuration is defined with a compressed dataflow structure and then compared to multiple configurations of instruction assignments and clock relationships. Using a series of methods from the SPEC benchmark running independently, the IPC (instructions per cycle) performance of the sparsely populated heterogeneous structure is 40% of the baseline. The average ratio of instructions to required nodes is 3.5. Innovative solutions to the loading and management of Java methods, along with the translation from control flow to DataFlow structure, are demonstrated.
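The control-flow-to-dataflow translation can be sketched for a straight-line bytecode fragment: stack-based instructions become nodes whose edges carry the values the operand stack would have carried. This is only an illustration of the general idea under assumed semantics; JavaFlow's fabric, networks, and method caching are not modeled here.

```typescript
// Sketch: translate straight-line Java bytecode into a dataflow graph by
// simulating the operand stack symbolically. Each stack slot holds the id
// of the node that produces its value.

interface Node { id: number; bytecode: string; inputs: number[] }

function toDataflow(bytecodes: string[]): Node[] {
  const nodes: Node[] = [];
  const stack: number[] = []; // producer node id per stack slot
  for (const bc of bytecodes) {
    const id = nodes.length;
    if (bc.startsWith("iload")) {
      nodes.push({ id, bytecode: bc, inputs: [] });
      stack.push(id);
    } else if (bc === "iadd" || bc === "imul") {
      const b = stack.pop()!, a = stack.pop()!;
      nodes.push({ id, bytecode: bc, inputs: [a, b] });
      stack.push(id);
    } else if (bc.startsWith("istore")) {
      nodes.push({ id, bytecode: bc, inputs: [stack.pop()!] });
    }
  }
  return nodes;
}

// c = a + b  =>  the iadd node depends on the two loads, istore on iadd.
console.log(toDataflow(["iload_0", "iload_1", "iadd", "istore_2"]));
```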
17

Supporting Transparent Distributed Messaging for Dataflow Applications in Power Electronics Control Systems

Mody, Parool K., 12 January 2004
This thesis presents the design and implementation of a transparent messaging protocol for distributed communication between processors designed using a dataflow architecture. The protocol ensures transparent asynchronous communication between distributed processes. It is designed so that an application can run without change in virtually any distributed configuration, where a configuration is the number of controllers used in the system together with the processor allocation strategy. It also enables an automated processor allocation strategy to transparently configure an application for any number of processor nodes without requiring changes or recompilation. The protocol works well even for single-controller applications and for a pre-defined allocation of processors to controllers. The thesis further includes an analysis of the time required for one complete cycle of inter-processor communication. / Master of Science
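The location-transparency idea can be sketched as follows (illustrative names and table format; this is not the thesis's protocol): application code sends to a logical process id, and a routing table fixed by the configuration decides whether delivery is a local enqueue or a network hop.

```typescript
// Sketch of configuration-independent messaging: the sender names a
// logical process, never a physical controller.

type ProcessId = string;

interface Transport { deliver(to: ProcessId, msg: unknown): void }

class LocalQueue implements Transport {
  deliver(to: ProcessId, msg: unknown) {
    console.log(`local enqueue for ${to}:`, msg);
  }
}
class NetworkLink implements Transport {
  constructor(private controller: string) {}
  deliver(to: ProcessId, msg: unknown) {
    console.log(`send to ${to} on ${this.controller}:`, msg);
  }
}

class Messenger {
  // processor-allocation output: which transport reaches each process
  constructor(private routes: Map<ProcessId, Transport>) {}
  send(to: ProcessId, msg: unknown): void {
    const route = this.routes.get(to);
    if (!route) throw new Error(`unknown process ${to}`);
    route.deliver(to, msg); // caller never learns whether 'to' was remote
  }
}

// The same application code works for one controller or several: only
// the routing table changes with the configuration.
const m = new Messenger(new Map<ProcessId, Transport>([
  ["filter", new LocalQueue()],
  ["pwm",    new NetworkLink("controller-2")],
]));
m.send("filter", { sample: 0.5 });
m.send("pwm",    { duty: 0.3 });
```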
18

An ECA-Based ZigBee Receiver

Zhang, Chen, 26 March 2008
Element CXI's Elemental Computing Array (ECA) delivers faster reconfiguration times and higher computational density than Field Programmable Gate Arrays (FPGAs) with similar computational power. It provides higher computational power than Digital Signal Processors (DSPs) with similar power consumption and price. It also offers a library-based graphical development environment that promotes ease of use and fast development. In this thesis, the design and implementation of a ZigBee receiver on an Element CXI ECA-64 platform is presented. The ZigBee receiver is evaluated through simulations and implementation on an ECA device. Design experience and tips gathered during the design and implementation of the ZigBee receiver are summarized. The design methodology on the ECA is studied in detail to ensure the implementation's correctness, since the methodology of the ECA differs from that of other platforms. / Master of Science
19

A model of programming languages for dynamic real-time streaming applications

Do, Xuan Khanh, 17 October 2016
There is an increasing interest in developing applications on homogeneous and heterogeneous multiprocessor platforms due to their broad availability and the appearance of many-core chips, such as the MPPA-256 from Kalray (256 cores) or the TEGRA X1 from NVIDIA (256 GPU cores and 8 64-bit CPU cores). Given the scale of these new massively parallel systems, programming languages based on the dataflow model of computation have strong assets in the race for productivity and scalability, meeting the requirements of these systems in terms of parallelism, functional determinism, and temporal and spatial data reuse. However, new complex signal and media processing applications often present several major challenges that do not fit the classical static restrictions: 1) How can guaranteed services be provided against unavoidable interference that can affect real-time performance? And 2) how can these often overly static streaming languages meet the needs of emerging embedded applications, such as context- and data-dependent dynamic adaptation? To tackle the first challenge, we propose and evaluate an analytical scheduling framework that bridges classical dataflow models of computation and real-time task models. In this framework, we introduce a new scheduling policy, Self-Timed Periodic (STP), an execution model that combines self-timed scheduling (STS), considered the most appropriate for streaming applications modeled as dataflow graphs, with periodic scheduling: STS improves the performance metrics of the programs, while the periodic model captures the timing aspects. We evaluate the performance of our scheduling policy on a set of 10 real-life streaming applications and find that in most cases our approach gives a significant improvement in latency compared to the Strictly Periodic Schedule (SPS) and competes well with STS. The experiments also show that, for more than 90% of the benchmarks, STP scheduling results in optimal throughput. Based on these results, we evaluate the latency between the initiation times of any two dependent actors, and we introduce a latency-based approach for fault-tolerant stream processing modeled as a Cyclo-Static Dataflow (CSDF) graph, addressing the problem of node or network failures. For the second challenge, we introduce a new dynamic Model of Computation (MoC), called Transaction Parameterized Dataflow (TPDF), which extends CSDF with parametric rates and a new type of control actor, channel, and port to express dynamic changes of the graph topology and time-triggered semantics. TPDF is designed to be statically analyzable with respect to the essential deadlock and boundedness properties, while avoiding the aforementioned restrictions of decidable dataflow models. Moreover, we demonstrate that TPDF can be used to accurately model task timing requirements in a great variety of situations, and we introduce a static scheduling heuristic to map TPDF applications to massively parallel embedded platforms. We validate the model and the associated methods using a set of realistic applications and random graphs, demonstrating significant buffer size and performance improvements (e.g., in throughput) compared to state-of-the-art models including Cyclo-Static Dataflow (CSDF) and Scenario-Aware Dataflow (SADF).
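The periodic half of STP can be sketched for a toy SDF graph (a minimal sketch under assumed semantics: one processor per actor, repetition counts and worst-case execution times given; the actual STP policy and its CSDF handling are considerably richer than this):

```typescript
// Toy periodic schedule for an SDF graph: actor i fires q[i] times per
// graph iteration, so with one processor per actor a common iteration
// period cannot be shorter than the busiest actor's workload q[i]*wcet[i].

interface Actor { name: string; q: number; wcet: number } // q: repetition count

function iterationPeriod(actors: Actor[]): number {
  // the throughput-limiting actor determines the minimal period
  return Math.max(...actors.map((a) => a.q * a.wcet));
}

function periodicStartTimes(a: Actor, period: number, iters: number): number[] {
  // spread the q firings of one iteration evenly over the period
  const spacing = period / a.q;
  const starts: number[] = [];
  for (let k = 0; k < iters * a.q; k++) starts.push(k * spacing);
  return starts;
}

// A -> B -> C with repetition vector (2, 3, 1)
const graph: Actor[] = [
  { name: "A", q: 2, wcet: 4 },
  { name: "B", q: 3, wcet: 3 },
  { name: "C", q: 1, wcet: 5 },
];
const T = iterationPeriod(graph); // = max(8, 9, 5) = 9
for (const a of graph)
  console.log(a.name, `period=${T}`, periodicStartTimes(a, T, 1));
```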
20

The Pulled-Macro-Dataflow Model: An Execution Model for Multicore Shared-Memory Computers

Richins, Daniel Joseph, 13 September 2011
The macro-dataflow model of execution has been used in scheduling heuristics for directed acyclic graphs. Since this model was developed for scheduling parallel applications on distributed computing systems, it is inadequate when applied to the multicore shared-memory computers prevalent in the market today. The pulled-macro-dataflow model is put forth as an alternative to the macro-dataflow model, having been designed specifically to describe the memory bandwidth limitations and the request-driven nature of communication characteristic of today's machines. The performance of the common scheduling heuristics DSC and CASS-II is evaluated under the pulled-macro-dataflow model, and it is shown that their poor performance motivates the development of a new scheduling heuristic. The Concurrent Tournament Reducer (ConTouR) is developed as a scheduling heuristic that operates well with the pulled-macro-dataflow model. ConTouR is compared to the existing heuristics Load Balancing and Communication Minimization in scheduling two programs; for both programs, the other reducers are shown to outperform ConTouR.
