121

XML manipulation by non-expert users / Manipulation des données XML par des utilisateurs non-experts

Tekli, Gilbert 04 October 2011 (has links)
Computers and the Internet are everywhere nowadays, in every home, domain and field. Communications between users, applications and heterogeneous information systems are mainly done via XML-structured data. XML, based on simple textual data and not requiring any specific platform or environment, has invaded and come to govern communication media. In the 21st century, these communications are inter-domain and have stepped outside the scope of computer science into other areas (medical, commerce, social, etc.). As a consequence, and due to the increasing amount of XML data floating between non-expert users (employees, scientists, etc.), whether on instant messaging, social networks, data storage or elsewhere, it is becoming crucial to allow non-experts to manipulate and control their data (e.g., parents who want to apply parental control over the instant messaging tools in their house, or a journalist who wants to gather information from different RSS feeds and filter it). The main objective of this work is the study of XML manipulation by non-expert users. Four main related categories have been identified in the literature: XML-oriented visual languages, Mashups, XML manipulation by security and adaptation techniques, and dataflow visual programming languages. However, none of them provides a full-fledged solution for appropriate XML data manipulation. In our research, we formally defined an XML manipulation framework, entitled XA2C (XML Alteration/Adaptation Composition Framework). XA2C is a visual programming environment (in the spirit of Visual Studio) for an XML-oriented DFVPL (Dataflow Visual Programming Language), called XCDL (XML-oriented Composition Definition Language), which constitutes the major contribution of this study. XCDL is based on Colored Petri Nets and allows non-expert users to compose manipulation operations. The XML manipulations range from simple data selection/projection to data modification (insertion, removal, obfuscation, etc.). The language is oriented toward XML data (documents and fragments), providing users with means to compose XML-oriented operations. Complementary to the language syntax and semantics, XA2C also formally defines the compiler and runtime environment of XCDL. In addition to this theoretical contribution, we developed a prototype, called X-Man, and formally defined an evaluation framework for XML-oriented visual languages and tools, which was used in a set of case studies and experiments to evaluate the quality of use of our language and compare it to existing approaches. The assessments obtained were positive and show that our approach outperforms existing ones. Several future tracks are being studied, such as the integration of more complex operations (control operators, loops, etc.), automated compositions, and language derivation to define specific languages oriented toward different XML-based standards (e.g., RSS, RDF, SMIL).
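As an editorial illustration of the kind of operation composition described above (not XCDL itself, whose operators are visual; the function names below are invented), here is a minimal Python sketch chaining XML selection, removal, and insertion:

```python
# Editorial sketch: composing XML manipulation operations of the kind
# XCDL lets non-experts wire together. Not the XCDL language itself.
import xml.etree.ElementTree as ET

def select(root, xpath):
    """Selection/projection: return elements matching a path expression."""
    return root.findall(xpath)

def remove_matching(root, xpath):
    """Removal: delete matching children anywhere in the tree."""
    for parent in list(root.iter()):
        for child in parent.findall(xpath):
            parent.remove(child)

def insert_child(root, xpath, tag, text):
    """Insertion: append a new element under each matching node."""
    for node in root.findall(xpath):
        ET.SubElement(node, tag).text = text

doc = ET.fromstring("<feed><item>news</item><item>spam</item></feed>")
remove_matching(doc, "item[.='spam']")     # filter unwanted items
insert_child(doc, ".", "checked", "true")  # annotate the feed
print(ET.tostring(doc, encoding="unicode"))
# -> <feed><item>news</item><checked>true</checked></feed>
print([e.text for e in select(doc, "item")])  # -> ['news']
```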
122

Automatic Storage Optimization of Arrays in Affine Loop Nests

Bhaskaracharya, Somashekaracharya G January 2016 (has links) (PDF)
Efficient memory usage is crucial for data-intensive applications, as a smaller memory footprint ensures better cache performance and allows one to run a larger problem size given a fixed amount of main memory. The solutions found by existing techniques for automatic storage optimization of arrays in affine loop nests, which minimize the storage requirements for the arrays, are often far from optimal and can even miss nearly all of the storage optimization potential. In this work, we present a new automatic storage optimization framework and techniques that can be used to achieve intra-array as well as inter-array storage reuse within affine loop nests with a pre-determined schedule. Over the last two decades, several heuristics have been developed for achieving complex transformations of affine loop nests using the polyhedral model. However, there are no comparably strong heuristics for tackling the problem of automatic memory footprint optimization. We tackle the problem of storage optimization for arrays by formulating it as one of finding the right storage partitioning hyperplanes: each storage partition corresponds to a single storage location. Statement-wise storage partitioning hyperplanes are determined that partition a unified global array space so that values with overlapping live ranges are not mapped to the same partition. Our integrated heuristic for exploiting intra-array as well as inter-array reuse opportunities is driven by a fourfold objective function that not only minimizes the dimensionality and storage requirements of arrays required for each high-level statement, but also maximizes inter-statement storage reuse. We built an automatic polyhedral storage optimizer called SMO using our storage partitioning approach. Storage reduction factors and other results we report from SMO demonstrate the effectiveness of our approach on several benchmarks drawn from the domains of image processing, stencil computations, high-performance computing, and the class of tiled codes in general. The reductions in storage requirement over previous approaches range from a constant factor to asymptotic in the loop blocking factor or array extents, the latter being a dramatic improvement for practical purposes. As an incidental and related topic, we also studied the problem of polyhedral compilation of graphical dataflow programs. While polyhedral techniques for program transformation are now used in several proprietary and open-source compilers, most of the research on polyhedral compilation has focused on imperative languages such as C, where the computation is specified in terms of statements with zero or more nested loops and other control structures around them. Graphical dataflow languages, where there is no notion of statements or a schedule specifying their relative execution order, have so far not been studied using a powerful transformation or optimization approach. The execution semantics and referential transparency of dataflow languages impose a different set of challenges. In this work, we attempt to bridge this gap by presenting techniques that can be used to extract a polyhedral representation from dataflow programs and to synthesize them from their equivalent polyhedral representation. We then describe PolyGLoT, a framework for automatic transformation of dataflow programs that we built using our techniques and other popular research tools such as Clan and Pluto.
For the purpose of experimental evaluation, we used our tools to compile LabVIEW, one of the most widely used dataflow programming languages. Results show that dataflow programs transformed using our framework are able to outperform those compiled otherwise by up to a factor of seventeen, with a mean speed-up of 2.30, while running on an 8-core Intel system.
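The storage-partitioning idea, mapping values whose live ranges never overlap to the same location, can be illustrated with the classic modulo contraction of a stencil buffer. The following is an editorial Python sketch of that general idea, not SMO's actual algorithm:

```python
# Editorial sketch of intra-array storage reuse: a 1-D stencil where each
# time step reads only the previous one, so the time dimension can be
# contracted from T rows to 2 via the mapping t -> t mod 2.
N, T = 8, 6

# Naive storage: a T x N array, one full row per time step.
full = [[0.0] * N for _ in range(T)]
full[0] = [float(i) for i in range(N)]
for t in range(1, T):
    for i in range(1, N - 1):
        full[t][i] = (full[t - 1][i - 1] + full[t - 1][i + 1]) / 2.0

# Contracted storage: 2 x N. A[t] and A[t-2] have disjoint live ranges,
# so mapping them to the same partition is safe.
buf = [[0.0] * N for _ in range(2)]
buf[0] = [float(i) for i in range(N)]
for t in range(1, T):
    src, dst = buf[(t - 1) % 2], buf[t % 2]
    dst[0] = dst[N - 1] = 0.0   # boundary, matching the naive version
    for i in range(1, N - 1):
        dst[i] = (src[i - 1] + src[i + 1]) / 2.0

assert buf[(T - 1) % 2] == full[T - 1]  # same result, T/2x less storage
```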
123

Fluxo de dados em redes de Petri coloridas e em grafos orientados a atores / Dataflow in colored Petri nets and in actors-oriented workflow graphs

Borges, Grace Anne Pontes 11 September 2008 (has links)
Three decades ago, business information systems were designed to support the execution of individual tasks. Today's information systems also need to support the organizational workflows and business processes. In scientific communities composed of physicists, astronomers, biologists, geologists, among others, information systems have characteristics different from those existing in business environments: repetitive procedures (such as re-execution of an experiment), transformation of raw data into publishable results, and coordination of the execution of experiments across several different software and hardware environments. The differing characteristics of business and scientific environments mean that existing tools and formalisms emphasize either control-flow or dataflow. However, there are situations where we must handle data transfer and control-flow simultaneously. This work aims to characterize and define the representation and control of dataflow in business processes and scientific workflows. To achieve this, two tools are compared: CPN Tools and KEPLER, based respectively on two formalisms: colored Petri nets and actor-oriented workflow graphs. The comparison is done through implementations of practical cases, using dataflow patterns as the basis of comparison between the tools.
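As an editorial illustration of the token/dataflow view that colored Petri nets and actor-oriented graphs share, the following Python toy fires a transition only when every input place holds a (data-carrying) token; it is a sketch of the formalism, not CPN Tools or KEPLER:

```python
# Editorial sketch: a transition (actor) fires only when every input
# place (port) holds a token; tokens carry typed ("colored") data.
from collections import deque

places = {"a": deque([1, 2, 3]), "b": deque([10, 20, 30]), "out": deque()}

def enabled(inputs):
    return all(places[p] for p in inputs)

def fire(inputs, output, func):
    args = [places[p].popleft() for p in inputs]   # consume one token each
    places[output].append(func(*args))             # produce a result token

while enabled(["a", "b"]):                         # run to completion
    fire(["a", "b"], "out", lambda x, y: x + y)

print(list(places["out"]))                         # [11, 22, 33]
```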
124

Elektros energijos apskaitos ir matavimo prietaisų maršrutizavimo kompiuterizuotos informacinės sistemos sukūrimas ir tyrimas / The Creation and Investigation of the Computerized Information System for the Electricity Energy Accounting and Measuring Instruments Routing

Griškėnienė, Edita 24 September 2004 (has links)
Computerized information systems are now widely used in Lithuanian companies. Many of them are universal enough to solve a variety of administrative problems, but such systems are distinguished by their great complexity and high price, hence the need for simpler and cheaper information systems. The aim of the project is to create a system that accumulates information about received and issued flows of electricity metering instruments, performs name and quantity accounting of those instruments, and produces analysis reports for a given period. The client of the project is the Electricity Energy Realization Division of the Alytus electricity network, a branch of Rytų skirstomieji tinklai AB. The benefits of the project to the client can be outlined as follows: · improve the quality of work and accounting results; · reduce the time spent on accounting tasks; · eliminate duplication of information; · simplify the composition of analysis reports; · avoid mistakes; · make accounting work effective. The project is implemented as an MS Access database with integrated Microsoft Visual Basic for Applications, whose capabilities are sufficient to realize the project; the package also supports building a graphical user interface (GUI). Several features were implemented to help users in their work: buttons, simplified entry of repetitive information, and help. The project was created to satisfy the users' needs and to diminish the use... [to full text]
125

Analyse d'Applications Flot de Données pour la Compilation Multiprocesseur / Dataflow Application Analysis for Multiprocessor Compilation

Bodin, Bruno 20 December 2013 (has links) (PDF)
Embedded systems are electronic and computing devices, subject to numerous constraints, whose operation must be continuous. Dataflow programming models are often used to define the behavior of these systems. This choice of model is motivated, on the one hand, by the fact that dataflow models can describe the cyclic behavior that embedded systems require, and on the other hand, by the fact that they lend themselves to analyses that can provide essential guarantees on correct operation and performance. The company Kalray offers an embedded architecture, the MPPA, accompanied by the ΣC programming language. This language describes applications in the form of an already well-studied dataflow model, the Cyclo-Static Dataflow Graph (CSDFG). However, the CSDFGs generated by this language are often too complex to allow the use of existing analysis techniques. The objective of this thesis is to provide algorithmic tools that solve the different analysis steps needed to study a ΣC application, within a reasonable execution time and on large instances. We study three distinct analysis problems: the liveness test, the evaluation of the maximal throughput, and memory sizing. For each of these problems, we provide fast algorithmic methods whose effectiveness has been verified experimentally. The methods we propose are derived from results on periodic schedules; they provide approximate results without any performance guarantee. To overcome this weakness, we also propose new analysis tools based on K-periodic schedules. These schedules generalize our work on periodic scheduling and will, in the near future, allow us to design far more effective analysis methods.
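For readers unfamiliar with dataflow analysis, the following editorial sketch shows the balance-equation computation that underlies consistency and throughput analysis of (C)SDF graphs; this is textbook SDF theory, far simpler than the CSDFG algorithms of the thesis:

```python
# Editorial sketch: consistency of a two-actor SDF edge via the balance
# equation rA * produce == rB * consume (standard SDF theory).
from math import gcd

def repetition_counts(produce, consume):
    """Smallest positive (rA, rB) with rA * produce == rB * consume."""
    g = gcd(produce, consume)
    return consume // g, produce // g

# Actor A writes 3 tokens per firing; actor B reads 2 per firing.
rA, rB = repetition_counts(produce=3, consume=2)
print(rA, rB)              # 2 3: A fires twice, B three times per iteration
assert rA * 3 == rB * 2    # the channel returns to its initial token count
```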
126

混合式的Java網頁應用程式分析工具 / A hybrid security analyzer for Java web applications

江尚倫 Unknown Date (has links)
In recent years, the development of web applications has flourished and, with the growing population of internet users, providing customer services and doing business through the network has become a prevalent trend. Consequently, web applications have become targets for attackers, and attack techniques are constantly being renewed. Some measures have been adopted to defend against web attacks, such as firewalls and encrypted connections, but these have only limited effect; the fundamental approach is to return to the design of the web application itself and identify the vulnerabilities within it, since that is the only way to counter ever-changing attack techniques. Program analysis is a common technique for detecting such vulnerabilities. There are two major approaches, static analysis and dynamic analysis, and both can detect vulnerabilities effectively. We reviewed recent web application analysis tools, most of which use static analysis. However, we noticed that static analysis alone is insufficient for Java web applications because of characteristics of the Java language such as polymorphism and reflection. Static analysis has inherent shortcomings in handling these features: since the program is not executing, runtime information is unavailable. In this thesis, we focus on dynamic analysis, where the analysis occurs while the program is executing, to solve the problems mentioned above for Java web applications. To retrieve runtime analysis information, we use the instrumentation mechanism provided by AspectJ. Our tool first weaves an information-gathering module into the application source and executes the program in a unit-testing manner; during execution, the dynamic analysis module receives the collected information and exploits the characteristics of the Java language to perform tainted-data tracking. We also considered that dynamic tracking may leave some vulnerabilities undiscovered when not all execution paths are exercised, so we adopted the concept of online analysis and designed an online tainted-dataflow analysis module to find potential vulnerabilities that dynamic tracking alone cannot detect. Our tool integrates the results of these two analysis modules and provides developers with vulnerability information for their web applications.
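The tainted-data tracking described above can be illustrated with a small editorial sketch (in Python for brevity; the thesis instruments Java via AspectJ, and all names below are invented):

```python
# Editorial sketch of dynamic taint tracking: values from untrusted
# sources carry a taint mark that propagates through operations, and a
# sensitive sink refuses tainted data.
class Tainted(str):
    """A string subclass marking data that came from an untrusted source."""

def concat(a, b):
    out = str(a) + str(b)
    # Propagation rule: mixing any tainted operand taints the result.
    return Tainted(out) if isinstance(a, Tainted) or isinstance(b, Tainted) else out

def sql_sink(query):
    """A sensitive sink (e.g., an SQL executor) that rejects tainted input."""
    if isinstance(query, Tainted):
        raise ValueError("tainted data reached a sensitive sink")
    print("executing:", query)

user_input = Tainted("1 OR 1=1")   # e.g., an HTTP request parameter
query = concat("SELECT * FROM t WHERE id=", user_input)
try:
    sql_sink(query)
except ValueError as err:
    print("blocked:", err)          # the taint was propagated and caught
```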
127

Real-time scheduling of dataflow graphs / Ordonnancement temps-réel des graphes flots de données

Bouakaz, Adnan 27 November 2013 (has links)
The ever-increasing functional and nonfunctional requirements in real-time safety-critical embedded systems call for new design flows that solve the specification, validation, and synthesis problems. Ensuring key properties, such as functional determinism and temporal predictability, has been the main objective of many embedded system design models. Dataflow models of computation (such as KPN, SDF, CSDF, etc.) are widely used to model stream-based embedded systems due to their inherent functional determinism. Since the introduction of the (C)SDF model, a considerable effort has been made to solve the static-periodic scheduling problem. Ensuring boundedness and liveness is the essence of the proposed algorithms in addition to optimizing some nonfunctional performance metrics (e.g. buffer minimization, throughput maximization, etc.). However, nowadays real-time embedded systems are so complex that real-time operating systems are used to manage hardware resources and host real-time tasks. Most real-time operating systems rely on priority-driven scheduling algorithms (e.g. RM, EDF, etc.) instead of static schedules, which are inflexible and difficult to maintain. This thesis addresses the real-time scheduling problem of dataflow graph specifications; i.e. transformation of the dataflow specification to a set of independent real-time tasks w.r.t. a given priority-driven scheduling policy such that the following properties are satisfied: (1) channels are bounded and overflow/underflow-free; (2) the task set is schedulable on a given uniprocessor (or multiprocessor) architecture. This problem requires the synthesis of scheduling parameters (e.g. periods, priorities, processor allocation, etc.) and channel capacities. Furthermore, the thesis considers two performance optimization problems: buffer minimization and throughput maximization.
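A minimal editorial sketch of the flavor of this synthesis problem, assuming a made-up graph: derive actor periods from repetition counts and apply the classical EDF utilization test. The thesis solves a much richer problem (phases, buffer bounds, multiprocessors); the numbers below are invented:

```python
# Editorial sketch: map dataflow actors to periodic tasks and check
# uniprocessor EDF schedulability with the classical test U <= 1.
actors = {              # name: (repetitions per graph iteration, WCET in ms)
    "src":  (2, 1.0),
    "fft":  (3, 2.0),
    "sink": (1, 0.5),
}
hyperperiod_ms = 30.0   # assumed duration of one graph iteration

# Each actor becomes a periodic task: period = hyperperiod / repetitions.
tasks = {n: (hyperperiod_ms / r, c) for n, (r, c) in actors.items()}

utilization = sum(c / t for t, c in tasks.values())
print(f"U = {utilization:.3f}")     # U = 0.283
print("EDF-schedulable" if utilization <= 1.0 else "not schedulable")
```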
128

Compiling For Coarse-Grained Reconfigurable Architectures Based On Dataflow Execution Paradigm

Alle, Mythri 12 1900 (has links) (PDF)
Coarse-Grained Reconfigurable Architectures (CGRAs) can be employed for accelerating computational workloads that demand both flexibility and performance. CGRAs comprise a set of computation elements interconnected by a network; this interconnection of computation elements is referred to as a reconfigurable fabric. The size of application that can be accommodated on the reconfigurable fabric is limited by the size of the instruction buffers associated with each compute element. When an application cannot be accommodated entirely, it is partitioned such that each partition can be executed on the reconfigurable fabric. These partitions are scheduled by an orchestrator that employs the dynamic dataflow execution paradigm, which has inherent support for synchronization and helps exploit the parallelism that exists across application partitions. In this thesis, we present a compiler that targets such CGRAs. The compiler is capable of accepting applications specified in the C89 standard. To enable architectural design-space exploration, the compiler is designed so that it can be customized for several instances of CGRAs employing the dataflow execution paradigm at the orchestrator, by specifying the appropriate configuration parameters. The focus of this thesis is to provide efficient support for various kinds of parallelism while ensuring correctness. The compiler is designed to support fine-grained task-level parallelism that exists across iterations of loops and function calls. Additionally, the compiler can support pipeline parallelism, where a loop is split into multiple stages that execute in a pipelined manner. The prototype compiler, which targets multiple instances of a CGRA, is demonstrated in this thesis. We used this compiler to target multiple variants of CGRAs employing the dataflow execution paradigm, varying the reconfigurable fabric, the orchestration mechanism employed, and the size of the instruction buffers. We chose applications from two different domains, viz. cryptography and linear algebra. The execution time on the CGRA (the best among all instances) is compared against an Intel quad-core processor. Cryptography applications show a performance improvement ranging from more than one order of magnitude to close to two orders of magnitude. These applications have large amounts of ILP, and our compiler successfully exposes the ILP available in them. Further, domain customization also played an important role in achieving good performance: we employed two custom functional units for accelerating cryptography applications, and the compiler could use them efficiently. In the linear algebra kernels we observe multiple iterations of a loop executing in parallel, effectively exploiting loop-level parallelism at runtime. In spite of this, we notice close to an order of magnitude performance degradation, which can be attributed to the use of non-pipelined floating-point units and the delays involved in accessing memory. Pipeline parallelism was demonstrated using this compiler for FFT and QR factorization. Thus, the compiler is capable of efficiently supporting different kinds of parallelism and supports the complete C89 standard. Further, it can support different instances of CGRAs employing the dataflow execution paradigm.
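As an editorial illustration of the pipeline parallelism mentioned above (not the compiler's actual code generation), the following Python sketch splits a loop into two stages connected by a bounded queue:

```python
# Editorial sketch: a loop body split into two concurrently running
# stages, connected by a bounded buffer -- the shape of pipeline
# parallelism, not how a CGRA compiler emits it.
import threading, queue

q = queue.Queue(maxsize=4)   # bounded buffer between the two stages
results = []

def stage1():                # producer half of the split loop body
    for i in range(10):
        q.put(i * i)
    q.put(None)              # end-of-stream marker

def stage2():                # consumer half, runs concurrently
    while (item := q.get()) is not None:
        results.append(item + 1)

t1 = threading.Thread(target=stage1)
t2 = threading.Thread(target=stage2)
t1.start(); t2.start(); t1.join(); t2.join()
print(results)               # [1, 2, 5, 10, 17, 26, 37, 50, 65, 82]
```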
130

Interconnection Optimization for Dataflow Architectures

Moser, Nico, Gremzow, Carsten, Menge, Matthias 08 June 2007 (has links)
In this paper we present a dataflow processor architecture based on [1], which is driven by control-flow-generated tokens. We show the special properties of this architecture with regard to scalability, extensibility, and parallelism. In this context we outline the application scope and compare our approach with related work. Advantages and disadvantages are discussed, and we suggest solutions to overcome the disadvantages. Finally, an example implementation of this architecture is given, and we take a look at further developments. We believe the features of this basic approach predestine the architecture especially for embedded systems and systems-on-chip.
