21

Implementação de um simulador para a arquitetura de dados Wolf. / Implementation of a simulator for the Wolf architecture.

Cavenaghi, Marcos Antônio 02 December 1992 (has links)
Esse trabalho apresenta a Proto-Arquitetura a fluxo de dados WOLF e trata da implementação de um simulador simplificado dirigido a eventos para essa arquitetura. O projeto WOLF propõe-se a implementar e estudar as características de um supercomputador de alta velocidade baseado no modelo de fluxo de dados dinâmico com granularidade variável. Para situar o trabalho, são apresentados alguns conceitos envolvidos em simulações e as principais implementações em fluxo de dados. Os resultados preliminares obtidos com o simulador são apresentados, analisados e comparados com dados conhecidos da arquitetura a fluxo de dados da Máquina de Manchester. Quando a máquina Proto-WOLF tiver suas características particulares desativadas, essa deve comportar-se como a Máquina de Manchester. Os resultados obtidos refletem esse comportamento. Conclui-se, portanto, que o simulador implementado está se comportando de maneira apropriada. / This work presents the Proto-WOLF dataflow architecture and the implementation of a simplified event-driven simulator for it. The WOLF project is a proposal for the implementation of a high-speed supercomputer based on the dynamic dataflow model with variable granularity. To place the work in context, some basic concepts involved in simulation are presented, along with a survey of the most relevant dataflow implementations. The preliminary simulation results are presented, analyzed, and compared with known results from the Manchester Dataflow Machine. When the simulated Proto-WOLF machine has all its unique features disabled, it is expected to behave in a Manchester-like fashion. The results obtained fully agree with this expectation, so it is possible to conclude that the implemented simulator behaves properly.
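The record above describes an event-driven simulator. As an illustration of the general technique only (not of the Proto-WOLF simulator's actual internals), a minimal C++ event-driven loop keeps a time-ordered agenda of events and repeatedly advances the clock to the earliest pending one; the event names and times below are invented:

```cpp
// Minimal sketch of an event-driven simulation loop.
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

struct Event {
    double time;                   // simulated time at which the event fires
    std::function<void()> action;  // work to perform when it fires
};

// Order the priority queue so the earliest event is dequeued first.
struct Later {
    bool operator()(const Event& a, const Event& b) const { return a.time > b.time; }
};

int main() {
    std::priority_queue<Event, std::vector<Event>, Later> agenda;
    double now = 0.0;

    // Hypothetical workload: an instruction becomes executable at t=1.0
    // and produces a result token at t=2.5.
    agenda.push({1.0, [] { std::puts("t=1.0: instruction fires"); }});
    agenda.push({2.5, [] { std::puts("t=2.5: result token produced"); }});

    // Core loop: pop the earliest event, jump the clock straight to it,
    // and run its action. Idle stretches of time cost nothing to simulate.
    while (!agenda.empty()) {
        Event e = agenda.top();
        agenda.pop();
        now = e.time;
        e.action();
    }
    std::printf("simulation ended at t=%.1f\n", now);
}
```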
22

Implementação e estudo da arquitetura a fluxo de dados Wolf. / Implementation and study of the Wolf Dataflow Architecture.

Cavenaghi, Marcos Antônio 25 November 1997 (has links)
Esse trabalho apresenta a arquitetura a fluxo de dados Wolf. Essa arquitetura foi proposta considerando-se alguns problemas conhecidos em execução de código em arquiteturas a fluxo de dados, tais como tratamento de código seqüencial e tratamento de estruturas de dados (vetores e matrizes). Wolf é baseado no modelo de fluxo de dados dinâmico e explora granulosidade variável, sendo a mais fina em nível de instrução. Alguns conceitos explorados por outras arquiteturas a fluxo de dados guiaram o desenvolvimento do Wolf. Entre esses estão os conceitos de macro-dataflow e multithreading. Para o estudo da arquitetura Wolf foi desenvolvido um simulador dirigido a tempo que implementa suas características. O simulador Saw foi escrito na linguagem orientada a objetos C++ e pode ser compilado em qualquer plataforma que use um compilador padrão ANSI de 32 bits. O código do simulador foi exaustivamente testado e os resultados numéricos dos programas executados estão de acordo com resultados apresentados por uma arquitetura Von Neumann padrão. Após o estudo dos resultados obtidos com as simulações, foram identificados alguns problemas com a arquitetura Wolf. Algumas hipóteses foram testadas para solucionar tais problemas. Os resultados desses testes levaram à alteração da arquitetura. Desta alteração originou-se a arquitetura Wolf II, que será apresentada também nesse trabalho. Por se tratar de uma conseqüência dos estudos realizados com a arquitetura Wolf, não serão feitos experimentos com o Wolf II. / This work presents the Wolf dataflow architecture. Wolf was proposed in response to some known problems of code execution on dataflow architectures, such as the handling of sequential code and of data structures (vectors and matrices), to name a few. Wolf is based on the dynamic dataflow model and explores variable granularity, the finest being at instruction level. Some concepts developed in the design of other hybrid architectures guided the Wolf implementation, macro-dataflow and multithreading among them. To study the Wolf architecture, a time-driven simulator (Saw) implementing its features was developed. Saw is written in the object-oriented language C++ and can be compiled on any platform with a 32-bit ANSI-standard compiler. The simulator code was exhaustively tested, and the numeric results of the executed programs agree with those produced by a standard von Neumann architecture. The study of the simulation results identified some problems with the Wolf architecture; some hypotheses were tested to address them, and the outcome of these tests led to a revision of the architecture. The revised architecture (Wolf II) is described in the last chapter but, being itself a consequence of the experiments performed on Wolf, was not submitted to experiments of its own.
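For contrast with the event-driven simulator of the previous record, a time-driven simulator such as Saw advances a global clock in fixed steps and visits every simulated component on every cycle. The sketch below illustrates only this general scheme; the unit names and the 3-cycle latency are invented, not taken from Saw:

```cpp
// Minimal sketch of a time-driven (clock-stepped) simulation loop.
#include <cstdio>
#include <vector>

struct Unit {
    const char* name;
    int busyUntil = 0;             // cycle at which the unit becomes free
    void tick(int cycle) {         // called once per simulated cycle
        if (cycle >= busyUntil) {
            // A real simulator would accept new work here; we just report
            // that the unit is free and pretend it starts a 3-cycle job.
            std::printf("cycle %d: %s accepts work\n", cycle, name);
            busyUntil = cycle + 3;
        }
    }
};

int main() {
    std::vector<Unit> units = {{"matching-store"}, {"functional-unit"}};
    const int kCycles = 10;
    // Unlike an event-driven simulator, every unit is visited on every
    // cycle, whether or not anything interesting happens in it.
    for (int cycle = 0; cycle < kCycles; ++cycle)
        for (Unit& u : units)
            u.tick(cycle);
}
```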
23

Implementing CAL Actor Component on Massively Parallel Processor Array

Khanfar, Husni January 2010 (has links)
No description available.
24

Reconstructing Data Flow Diagrams from Structure Charts Based on the Input and Output Relationship

YAMAMOTO, Shuichiro 20 September 1995 (has links)
No description available.
25

Comparison of Points-to Analyses

Gutzmann, Tobias January 2008 (has links)
Points-to analysis is a static program analysis which computes possible reference relations between different parts of a program. It serves as input to many high-level analyses. Points-to analyses differ, among other things, in flow- and context-sensitivity, program representation, and object abstraction. Most program representations used for points-to analysis are sparse representations which abstract from, e.g., primitive data types and intra-procedural control-flow. Thus, a certain degree of information is sacrificed for compact program representation, which results in scalable performance. In this thesis, we present a framework which allows building different versions of Points-to SSA (P2SSA), a sparse, Memory SSA based program representation. Distinct instantiations of P2SSA contain different levels of abstraction from a program's full representation. We present another framework which allows running Points-to analyses on these program representations. We use these two frameworks to instantiate different versions of P2SSA and compare them in terms of analysis precision and execution time.
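As a concrete illustration of the kind of fact a points-to analysis computes, the sketch below propagates points-to sets for two address-of statements and one copy statement, in the style of a generic inclusion-based (Andersen-style) analysis; it is not the P2SSA framework described above:

```cpp
// Toy points-to propagation over three statements.
#include <cstdio>
#include <map>
#include <set>
#include <string>

using PtsMap = std::map<std::string, std::set<std::string>>;

int main() {
    PtsMap pts;
    // p = &a;  q = &b;  -- address-of statements seed the points-to sets.
    pts["p"] = {"a"};
    pts["q"] = {"b"};
    // p = q;  -- copy: everything q may point to, p may also point to.
    pts["p"].insert(pts["q"].begin(), pts["q"].end());

    for (const auto& [var, targets] : pts) {
        std::printf("%s -> {", var.c_str());
        for (const auto& t : targets) std::printf(" %s", t.c_str());
        std::printf(" }\n");
    }
    // Prints p -> { a b } and q -> { b }: p may reference a or b, so a
    // client analysis must assume a write through p can affect either.
}
```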
26

SimITK: Visual Programming of the ITK Image Processing Library within Simulink

DICKINSON, ANDREW WILLIAM LAIRD 13 September 2011 (has links)
The Insight Segmentation and Registration Toolkit (ITK) is a long-established image processing library used for image analysis, visualisation, and image-guided surgery applications. ITK is a collection of C++ classes that can pose usability problems for users without appropriate C++ programming experience. In order to remove the programming complexities and facilitate rapid prototyping, an implementation of ITK within a higher-level visual programming environment is presented: SimITK. ITK functionalities are automatically wrapped into "blocks" within Simulink, the visual programming environment of MATLAB, where these blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. ITK's heavy use of C++ templates does not permit direct interaction between Simulink and ITK; an intermediary is required to convert the respective datatypes and allow intercommunication. As such, a SimITK "Virtual Block" has been developed that serves as a wrapper around the ITK class and is responsible for resolving the datatypes used by ITK to the native types used by Simulink. Part of this challenge involves automatically capturing and storing the pertinent class information (name, inputs/outputs, acceptable datatypes, and allowed dimensionalities) that must be reflected in the final block representation. The WrapITK package, included with ITK, generates initial class representations as complex eXtensible Markup Language (XML) files that are consolidated and refined to organise the information into an easily accessible structure for extraction during wrapping. Once refined, the data are transferred into custom-written SimITK templates - one template for each required filetype - through a series of custom-keyword substitutions that replace special keywords with appropriately retrieved XML information and/or programming code. The primary result of the SimITK wrapping procedure is the generation of multiple Simulink block libraries. From these libraries, blocks are selected and interconnected to demonstrate case-study examples: a 3D segmentation workflow using cranial CT data and a 3D MRI-to-CT registration workflow. Suggestions for future development are included, as well as several appendices containing a list of classes, code comparisons between ITK C++ and SimITK workflows, installation documentation (both user and developer), and example file templates used to integrate ITK within Simulink. / Thesis (Master, Computing) -- Queen's University, 2011-09-13
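The wrapping pipeline described above reduces, at its last step, to keyword substitution into file templates. The sketch below shows that mechanism in miniature; the @Keyword@ syntax, the template text, and the field names are invented stand-ins for illustration, not SimITK's actual template format:

```cpp
// Hypothetical keyword substitution into a wrapper-file template.
#include <cstdio>
#include <map>
#include <string>

// Replace every occurrence of each @Keyword@ with its value.
std::string expand(std::string text, const std::map<std::string, std::string>& subs) {
    for (const auto& [key, value] : subs) {
        for (std::size_t pos = text.find(key); pos != std::string::npos;
             pos = text.find(key, pos + value.size())) {
            text.replace(pos, key.size(), value);
        }
    }
    return text;
}

int main() {
    // A tiny stand-in for a wrapper-file template.
    const std::string tmpl =
        "class @ClassName@Block {\n"
        "  // wraps itk::@ClassName@<@PixelType@, @Dimension@>\n"
        "};\n";
    const std::map<std::string, std::string> subs = {
        {"@ClassName@", "MedianImageFilter"},  // values would come from the
        {"@PixelType@", "float"},              // refined XML description
        {"@Dimension@", "3"},
    };
    std::fputs(expand(tmpl, subs).c_str(), stdout);
}
```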
27

Flexible and efficient computation in large data centres

Gog, Ionel Corneliu January 2018 (has links)
Increasingly, online computer applications rely on large-scale data analyses to offer personalised and improved products. These large-scale analyses are performed on distributed data processing execution engines that run on thousands of networked machines housed within an individual data centre. These execution engines provide, to the programmer, the illusion of running data analysis workflows on a single machine, and offer programming interfaces that shield developers from the intricacies of implementing parallel, fault-tolerant computations. Many such execution engines exist, but they embed assumptions about the computations they execute, or only target certain types of computations. Understanding these assumptions involves substantial study and experimentation. Thus, developers find it difficult to determine which execution engine is best, and even if they did, they become “locked in” because engineering effort is required to port workflows. In this dissertation, I first argue that in order to execute data analysis computations efficiently, and to flexibly choose the best engines, the way we specify data analysis computations should be decoupled from the execution engines that run the computations. I propose an architecture for decoupling data processing, together with Musketeer, my proof-of-concept implementation of this architecture. In Musketeer, developers express data analysis computations using their preferred programming interface. These are translated into a common intermediate representation from which code is generated and executed on the most appropriate execution engine. I show that Musketeer can be used to write data analysis computations directly, and these can execute on many execution engines because Musketeer automatically generates code that is competitive with optimised hand-written implementations. The diverse execution engines cause different workflow types to coexist within a data centre, opening up both opportunities for sharing and potential pitfalls for co-location interference. However, in practice, workflows are either placed by high-quality schedulers that avoid co-location interference, but choose placements slowly, or schedulers that choose placements quickly, but with unpredictable workflow run time due to co-location interference. In this dissertation, I show that schedulers can choose high-quality placements with low latency. I develop several techniques to improve Firmament, a high-quality min-cost flow-based scheduler, to choose placements quickly in large data centres. Finally, I demonstrate that Firmament chooses placements at least as good as other sophisticated schedulers, but at the speeds associated with simple schedulers. These contributions enable more efficient and effective use of data centres for large-scale computation than current solutions.
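To make the scheduling objective concrete: a flow-based scheduler such as Firmament in effect chooses a minimum-cost assignment of tasks to machines, solved at data-centre scale as a min-cost flow problem over a flow network. The toy sketch below brute-forces a three-task instance with invented costs, simply to show the objective being minimized; it is not Firmament's algorithm:

```cpp
// Pick the task-to-machine assignment with minimum total cost.
#include <algorithm>
#include <climits>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    // cost[t][m]: penalty of placing task t on machine m (e.g. expected
    // co-location interference plus data-transfer cost; numbers invented).
    const int cost[3][3] = {{4, 8, 6}, {2, 3, 9}, {7, 5, 1}};

    std::vector<int> machine(3);
    std::iota(machine.begin(), machine.end(), 0);  // start from {0, 1, 2}

    int best = INT_MAX;
    std::vector<int> bestAssign;
    do {  // try every one-task-per-machine assignment
        int total = 0;
        for (int t = 0; t < 3; ++t) total += cost[t][machine[t]];
        if (total < best) { best = total; bestAssign = machine; }
    } while (std::next_permutation(machine.begin(), machine.end()));

    for (int t = 0; t < 3; ++t)
        std::printf("task %d -> machine %d\n", t, bestAssign[t]);
    std::printf("total cost %d\n", best);  // 4 + 3 + 1 = 8 here
}
```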
29

Runtime multicore scheduling techniques for dispatching parameterized signal and vision dataflow applications on heterogeneous MPSoCs / Techniques d'ordonnancement en ligne pour la répartition d'applications flot de données de traitement de signal et de l'image sur architectures multi-cœur hétérogène embarqué

Heulot, Julien 24 November 2015 (has links)
Une tendance importante dans le domaine de l'embarqué est l'intégration de plus en plus d'éléments de calcul dans les systèmes multiprocesseurs sur puce (MPSoC). Cette tendance est due en partie aux limitations des puissances individuelles de ces éléments, causées par des considérations de consommation d'énergie. Dans le même temps, en raison de leur sophistication croissante, les applications de traitement du signal ont des besoins en puissance de calcul de plus en plus dynamiques. Dans la conception et le développement d'applications de traitement de signal multicœur, l'un des principaux défis consiste à répartir efficacement les différentes tâches sur les éléments de calcul disponibles, tout en tenant compte des changements dynamiques des fonctionnalités de l'application et des ressources disponibles. Une utilisation inefficace peut conduire à une durée de traitement plus longue et/ou à une consommation d'énergie plus élevée, ce qui fait de la répartition des tâches sur un système multicœur un problème difficile à résoudre. Les modèles de calcul (MoC) flux de données sont communément utilisés dans la conception de systèmes de traitement du signal. Ils décomposent la fonctionnalité de l'application en acteurs qui communiquent exclusivement par l'intermédiaire de canaux. L'interconnexion des acteurs et des canaux de communication est modélisée et manipulée comme un graphe orienté, appelé graphe de flux de données. Il existe différents MoC de flux de données qui offrent différents compromis entre la prédictibilité et l'expressivité. Ces modèles de calcul sont communément utilisés dans la conception de systèmes de traitement du signal en raison de leur analysabilité et de leur expression naturelle du parallélisme de l'application. Dans cette thèse, une nouvelle méthode de répartition de tâches est proposée afin de répondre au défi posé par la programmation multicœur. Cette méthode de répartition prend ses décisions en temps réel afin d'optimiser le temps d'exécution global de l'application. Les applications sont décrites en utilisant le modèle paramétré et interfacé flux de données (PiSDF). Ce modèle permet de décrire une application paramétrée en autorisant des changements dans ses besoins en ressources de calcul lors de l'exécution. À chaque exécution, le modèle de flux de données paramétré est déroulé en un modèle intermédiaire faisant apparaître toutes les tâches de l'application ainsi que leurs dépendances. Ce modèle est ensuite utilisé pour répartir efficacement les tâches de l'application. La méthode proposée a été testée et validée sur plusieurs applications des domaines de la vision par ordinateur, du traitement du signal et du multimédia. / An important trend in embedded processing is the integration of increasingly more processing elements into Multiprocessor Systems-on-Chip (MPSoC). This trend is due in part to limitations in the processing power of individual elements, caused by power consumption considerations. At the same time, signal processing applications are becoming increasingly dynamic in terms of their hardware resource requirements, due to the growing sophistication of algorithms aiming at higher levels of performance. In the design and implementation of multicore signal processing systems, one of the main challenges is to dispatch computational tasks efficiently onto the available processing elements while taking into account dynamic changes in application functionality and resource requirements. An inefficient use can lead to longer processing times and higher energy consumption, making multicore task scheduling a very difficult problem to solve. Dataflow process network Models of Computation (MoCs) are widely used in the design of signal processing systems. They decompose application functionality into actors that communicate data exclusively through channels. The interconnection of actors and communication channels is modeled and manipulated as a directed graph, called a dataflow graph. Different dataflow MoCs offer different trade-offs between predictability and expressiveness; they are widely used in the design of signal processing systems due to their analyzability and their natural expression of application parallelism. In this thesis, we propose a novel scheduling method to address the multicore scheduling challenge. This method makes scheduling decisions at runtime to optimize the overall execution time of applications on heterogeneous multicore processing resources. Applications are described using the Parameterized and Interfaced Synchronous DataFlow (PiSDF) MoC. The PiSDF model allows describing parameterized applications, making it possible to change an application's resource requirements at runtime. At each execution, the parameterized dataflow graph is transformed into a locally static one that is used to efficiently schedule the application with a priori knowledge of its behavior. The proposed scheduling method has been tested and benchmarked on multiple state-of-the-art applications from the computer vision, signal processing, and multimedia domains.
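The dataflow model described above can be made concrete in a few lines: actors exchange tokens over FIFO channels and fire only when their input rates are satisfied. The sketch below shows this base model with invented actors and rates; PiSDF layers parameters and interfaces on top of it:

```cpp
// Two actors connected by one FIFO channel; firing is gated by token count.
#include <cstdio>
#include <queue>

int main() {
    std::queue<int> chan;  // FIFO channel between the two actors

    // Producer actor: fires unconditionally, emits one token per firing.
    auto producer = [&chan](int v) { chan.push(v); };

    // Consumer actor: fires only when two tokens are available (input
    // rate = 2), consuming both and printing their sum.
    auto consumer = [&chan] {
        if (chan.size() >= 2) {
            int a = chan.front(); chan.pop();
            int b = chan.front(); chan.pop();
            std::printf("consumer fired: %d\n", a + b);
        }
    };

    for (int i = 1; i <= 4; ++i) {
        producer(i);
        consumer();  // fires after tokens 2 and 4 arrive
    }
}
```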
