About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

On the use of control- and data-flow in fault localization / Sobre o uso de fluxo de controle e de dados para a localização de defeitos

Ribeiro, Henrique Lemos 19 August 2016 (has links)
Testing and debugging are key tasks during the development cycle, yet they are among the most expensive activities of the development process. To improve the productivity of developers during debugging, various fault localization techniques have been proposed; Spectrum-based Fault Localization (SFL), also known as Coverage-based Fault Localization (CBFL), is one of the most promising. SFL techniques pinpoint program elements (e.g., statements, branches, and definition-use associations), sorting them by their suspiciousness. Heuristics are used to rank the most suspicious program elements, which are then mapped onto lines to be inspected by developers. Although data-flow spectra (definition-use associations) have been shown to perform better than control-flow spectra (statements and branches) at locating the bug site, the high overhead of collecting data-flow spectra has prevented their use on industry-scale code. A data-flow coverage tool was recently implemented that presents on average 38% run-time overhead for large programs. Such a fairly modest overhead motivates the study of SFL techniques using data-flow information on programs similar to those developed in industry. To achieve this goal, we implemented Jaguar (JAva coveraGe faUlt locAlization Ranking), a tool that applies SFL techniques to both control-flow and data-flow coverage. The effectiveness and efficiency of both coverages are compared using 173 faulty versions of programs with sizes varying from 10 to 96 KLOC. Ten known SFL heuristics are used to rank the most suspicious lines. The results show that the heuristics behave similarly for control- and data-flow coverage: Kulczynski2 and Mccon perform better when few lines are investigated (from 5 to 30), while Ochiai performs better when more lines are inspected (30 to 100). The comparison between control- and data-flow coverage shows that data-flow locates more defects in the range of 10 to 50 inspected lines, being up to 22% more effective. Moreover, in the range of 20 to 100 lines, data-flow ranks the bug better than control-flow with statistical significance. However, data-flow is still more expensive than control-flow: it takes from 23% to 245% longer to obtain the most suspicious lines; on average, data-flow is 129% more costly. Therefore, our results suggest that data-flow is more effective at locating faults because it tracks more relationships during program execution. On the other hand, SFL techniques supported by data-flow coverage need to be improved for practical use in industrial settings.
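The suspiciousness heuristics named in the abstract (Ochiai, Kulczynski2) have standard closed forms over a coverage spectrum. The following is a minimal sketch, not the Jaguar implementation; the element names and toy spectrum are invented for illustration:

```python
from math import sqrt

def ochiai(n_ef, n_nf, n_ep):
    # n_ef: failing runs covering the element, n_nf: failing runs missing it,
    # n_ep: passing runs covering it.
    denom = sqrt((n_ef + n_nf) * (n_ef + n_ep))
    return n_ef / denom if denom else 0.0

def kulczynski2(n_ef, n_nf, n_ep):
    if n_ef == 0:
        return 0.0
    return 0.5 * (n_ef / (n_ef + n_nf) + n_ef / (n_ef + n_ep))

def rank(spectra, total_failing, heuristic):
    # spectra: {element: (covered_in_failing, covered_in_passing)}
    scored = [(heuristic(cf, total_failing - cf, cp), elem)
              for elem, (cf, cp) in spectra.items()]
    scored.sort(reverse=True)  # most suspicious first
    return [elem for _, elem in scored]

# Toy spectrum: s1 is covered only by the 3 failing runs.
spectra = {"s1": (3, 0), "s2": (3, 5), "s3": (1, 4)}
print(rank(spectra, 3, ochiai))  # s1 ranks first
```

An element covered by every failing run and no passing run scores 1.0 under Ochiai, which is why such elements surface at the top of the inspection list.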
52

Diretrizes para projetos de edifícios de escritórios. / Office buildings design guidelines.

Liu, Ana Wansul 05 April 2010 (has links)
The complexity of office building design development is related to the difficulty of reconciling the interests of all the players involved (owners, designers, contractors, and end-users) and to the increasing diversity and specialization of the design disciplines. Clarity about which key decisions must be made, and who should make them, during the conceptual design phase is imperative for the technical, constructive, and commercial feasibility of the project, and design management must have complete command of these aspects at that stage. The aim of this work is to identify the critical information from the several design disciplines that should be defined during the conceptual phase, and its correct sequence of insertion into the design process. The methodology is based on a literature review and a case study whose boundary conditions are considered singular: the design contractor is a real estate developer that fully understands the market needs of the product, holds an experienced technical team able to evaluate and select constructive solutions, and is also a facility manager, i.e., it operates the finished building. As a result, its design decisions actually focus on the cost of the project over its entire life cycle, which is not common in the Brazilian market. In conclusion, the work proposes an information flow for the conceptual design phase that indicates when each piece of information is needed in the design process, which helps clarify the correct role of each player and constitutes a useful tool for design management.
53

On Efficiency and Accuracy of Data Flow Tracking Systems

Jee, Kangkook January 2015 (has links)
Data Flow Tracking (DFT) is a technique broadly used in a variety of security applications such as attack detection, privacy leak detection, and policy enforcement. Although effective, DFT inherits the high overhead common to in-line monitors, which hinders its adoption in production systems. Typically, the runtime overhead of DFT systems ranges from 3× to 100× when applied to pure binaries, and from 1.5× to 3× when inserted during compilation. Many performance optimization approaches have been introduced to mitigate this problem by relaxing propagation policies under certain conditions, but these typically introduce inaccurate taint tracking that leads to over-tainting or under-tainting. Despite acknowledging these performance/accuracy trade-offs, the DFT literature consistently fails to provide insights about their implications. A core reason, we believe, is the lack of established methodologies for understanding accuracy. In this dissertation, we attempt to address both efficiency and accuracy issues. To this end, we begin with libdft, a DFT framework for COTS binaries running atop commodity OSes, and then introduce two major optimization approaches based on statically and dynamically analyzing program binaries. The first approach extracts the DFT tracking logic and abstracts it using TFA. We then apply classic compiler optimizations to eliminate redundant tracking logic and minimize interference with the target program; the resulting optimization achieves a 2× speed-up over the baseline performance measured for libdft. The second approach decouples the tracking logic from program execution and runs them in parallel, leveraging modern multi-core hardware. Applied again to libdft, it runs four times as fast while concurrently consuming fewer CPU cycles. We then present a generic methodology and tool for measuring the accuracy of arbitrary DFT systems in the context of real applications. With a prototype implementation for the Android framework, TaintMark, we discovered that TaintDroid's various performance optimizations lead to serious accuracy issues, and that certain optimizations should be removed to vastly improve accuracy at little performance cost. The TaintMark approach is inspired by black-box differential testing principles for detecting inaccuracies in DFTs, but it also addresses the numerous practical challenges that arise when applying those principles to real, complex applications. We introduce the TaintMark methodology by using it to understand taint-tracking accuracy trade-offs in TaintDroid, a well-known DFT system for Android. While the aforementioned works focus on the efficiency and accuracy of DFT systems that dynamically track data flow, we also explore another design choice that statically tracks information flow by analyzing and instrumenting the application source code. We apply this approach to the different problem of integer error detection in order to reduce the number of false alarms.
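The propagation policies whose relaxation causes over- or under-tainting can be pictured with a toy shadow-memory tracker. This is a hypothetical sketch, far simpler than libdft's byte-level shadow memory, with invented addresses and labels:

```python
class TaintTracker:
    # Minimal shadow memory: maps addresses to sets of taint labels.
    def __init__(self):
        self.shadow = {}

    def taint(self, addr, label):
        self.shadow.setdefault(addr, set()).add(label)

    def copy(self, dst, src):
        # A copy propagates the source's labels; an untainted source clears dst.
        if self.shadow.get(src):
            self.shadow[dst] = set(self.shadow[src])
        else:
            self.shadow.pop(dst, None)

    def binop(self, dst, src1, src2):
        # Union policy for arithmetic: the result depends on both operands.
        # Relaxing this (e.g., dropping one operand) is where under-tainting creeps in.
        labels = self.shadow.get(src1, set()) | self.shadow.get(src2, set())
        if labels:
            self.shadow[dst] = labels
        else:
            self.shadow.pop(dst, None)

    def is_tainted(self, addr):
        return bool(self.shadow.get(addr))

t = TaintTracker()
t.taint(0x1000, "network")
t.copy(0x2000, 0x1000)           # taint follows the copy
t.binop(0x3000, 0x2000, 0x4000)  # and flows through arithmetic
print(t.is_tainted(0x3000))      # True
```

A real in-line monitor executes such bookkeeping on every instruction, which is the source of the 3×–100× overheads the abstract cites.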
54

Verification of Weakly-Hard Requirements on Quasi-Synchronous Systems / Vérification de propriétés faiblement dures des systèmes quasi-synchrones

Smeding, Gideon 19 December 2013 (has links)
The synchronous approach to reactive systems, where time evolves by globally synchronized discrete steps, has proven successful for the design of safety-critical embedded systems. Synchronous systems are often distributed over asynchronous architectures for reasons of performance or physical constraints of the application. Such distributions typically require communication and synchronization protocols to preserve the synchronous semantics. In practice, protocols often have a significant overhead that may conflict with design constraints such as maximum available buffer space, minimum reaction time, and robustness. The quasi-synchronous approach considers independently clocked, synchronous components that interact via communication-by-sampling or FIFO channels. In such systems we can move from total synchrony, where all clocks tick simultaneously, to global asynchrony by relaxing constraints on the clocks and without additional protocols. Relaxing the constraints adds different behaviors depending on the interleavings of clock ticks. In the case of data-flow systems, one behavior is different from another when the values and timing of items in a flow of one behavior differ from the values and timing of items in the same flow of the other behavior. In many systems, such as distributed control systems, the occasional difference is acceptable as long as the frequency of such differences is bounded. We suppose hard bounds on the frequency of deviating items in a flow with, what we call, weakly-hard requirements, e.g., the maximum number of deviations out of a given number of consecutive items. We define relative drift bounds on pairs of recurring events such as clock ticks, the occurrence of a difference, or the arrival of a message. Drift bounds express constraints on the stability of clocks, e.g., at least two ticks of one clock per three consecutive ticks of the other. Drift bounds also describe weakly-hard requirements. This thesis presents analyses to verify weakly-hard requirements and infer weakly-hard properties of basic synchronous data-flow programs with asynchronous communication-by-sampling when executed with clocks described by drift bounds. Moreover, we use drift bounds as an abstraction in a performance analysis of stream processing systems based on FIFO channels.
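A weakly-hard requirement of the kind described, at most m deviations out of k consecutive items, can be checked over a recorded flow with a sliding window. A minimal sketch, with an invented trace (the thesis verifies such properties statically, not by trace checking):

```python
from collections import deque

def satisfies_weakly_hard(flow, m, k):
    # flow: booleans, True where the item deviates from the synchronous reference.
    # Requirement: at most m deviations in any window of k consecutive items.
    window = deque(maxlen=k)  # deque drops the oldest item automatically
    for deviates in flow:
        window.append(deviates)
        if sum(window) > m:
            return False
    return True

trace = [False, True, False, False, True, False, False, False]
print(satisfies_weakly_hard(trace, 1, 3))  # True: no 3-window holds 2 deviations
print(satisfies_weakly_hard(trace, 1, 4))  # False: items 2-5 hold 2 deviations
```

The same (m, k) notation is common in the real-time literature on weakly-hard constraints; here it bounds deviating flow items rather than deadline misses.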
56

Silicon Compilation and Test for Dataflow Implementations in GasP and Click

Mettala Gilla, Swetha 17 January 2018 (has links)
Many modern computer systems are distributed over space. Well-known examples are the Internet of Things and IBM's TrueNorth for deep learning applications. At the Asynchronous Research Center (ARC) at Portland State University we build distributed hardware systems using self-timed computation and delay-insensitive communication. Where appropriate, self-timed hardware operations can reduce average and peak power, energy, latency, and electromagnetic interference. Alternatively, self-timed operations can increase throughput, tolerance to delay variations, scalability, and manufacturability. The design of complex hardware systems requires design automation and support for test, debug, and product characterization. This thesis focuses on design compilation and test support for dataflow applications; both are necessary to go from self-timed circuits to large-scale hardware systems. As part of the research in design compilation, the ARCwelder compiler designed by Willem Mallon (previously with NXP and Philips Handshake Solutions) was extended. The key to testing distributed systems, including self-timed systems, is to identify the actions in the system. In distributed systems there is no such thing as a global action, so to test, debug, characterize, and even initialize such systems it is necessary to control the local actions. The designs developed at the ARC separate the actions from the states ab initio. As part of the research in test and debug, a special circuit to control actions, called MrGO, was implemented, along with a scan and JTAG test interface. These test implementations were built into two silicon test experiments, called Weaver and Anvil, and were used successfully for testing, debug, and performance characterization.
57

Study of a programmable cellular architecture: functional definition and programming methodology / Etude d'une architecture cellulaire programmable : définition fonctionnelle et méthodologie de programmation

Payan, Eric 11 June 1991 (has links) (PDF)
To meet ever-growing needs for computing power, studies of parallel architectures have multiplied over the past few years. Despite the variety of proposed solutions, there is still a class of applications that are difficult to execute in parallel. In this thesis we propose a massively parallel architecture based on a regular network of cells, which are fully asynchronous and communicate with each other through a message-routing mechanism. Each cell comprises a processing part, consisting of a small 8-bit microprocessor and its memory (data plus program), and a routing part that forwards messages. Our second objective was to devise and develop a programming method suited both to this new architecture and to the targeted class of algorithms. The solution studied consists of mapping a data-flow graph obtained from the Lustre language onto the cellular network. A first prototype of this compiler was implemented; it allowed us to study the importance of parameters such as the distribution of the computational load among the cells and the sequencing of the execution of several graph nodes placed on the same cell.
58

Efficiency of LTTng as a Kernel and Userspace Tracer on Multicore Environment

Guha Anjoy, Romik, Chakraborty, Soumya Kanti January 2010 (has links)
With the advent of huge multicore processors, complex hardware, intermingled networks, and huge disk storage capacities, the programs used in a system and the code written to control them are getting increasingly large and complicated. There is an increasing need for a framework that tracks issues, debugs the programs, and helps to analyze the reasons behind degradation of system and program performance. Another big concern in deploying such a framework on complex systems is its footprint on the setup. The LTTng project aims to provide such an effective tracing and debugging toolset for Linux systems. Our work measures the effectiveness of LTTng in a multicore environment and evaluates its effect on system and program performance. We incorporate control- and data-flow analysis of the system and of the LTTng binaries to reach a conclusion.
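LTTng itself is a native, low-overhead tracer, but the footprint question studied here can be illustrated language-neutrally: time a workload with and without a tracing hook installed and report the ratio. A toy Python sketch, not part of the thesis toolchain; the workload and iteration counts are invented:

```python
import sys
import timeit

def workload():
    # A small CPU-bound loop standing in for the traced application.
    total = 0
    for i in range(10_000):
        total += i * i
    return total

def trace_hook(frame, event, arg):
    # A do-nothing tracer: even an empty hook is invoked on every call and
    # line event, so it models a tracer's baseline per-event footprint.
    return trace_hook

def timed(with_tracing):
    if with_tracing:
        sys.settrace(trace_hook)
    try:
        return timeit.timeit(workload, number=50)
    finally:
        sys.settrace(None)

base = timed(False)
traced = timed(True)
print(f"tracing overhead: {traced / base:.1f}x")
```

CPython's line-level tracing is far heavier than LTTng's static tracepoints, so the ratio printed here will be much larger than what a production tracer imposes; the measurement structure (paired runs, identical workload) is the point.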
60

Hardware-Software Partitioning and Pipelined Scheduling of Multimedia Systems

Huang, Kuo-Chin 26 July 2004 (has links)
Due to the rapid advancement of VLSI technology, the functions of multimedia systems (e.g., MP3, image processing) have become more complex, so more complicated system architectures and greater computing power are required to attain real-time processing. In order to fulfill cost and efficiency requirements, multimedia systems usually consist of processors, ASICs, and various other components, and the diverse real-time functions of a multimedia system are implemented through the cooperation of these components. However, these components differ in area, efficiency, and cost. Accordingly, it is necessary to establish a useful method and tool for deciding on a better system architecture. The main objective of this thesis is to develop such a decision tool: given the specification and constraints of a multimedia system, it can quickly decide on a good system architecture that satisfies the constraints in accordance with the specification. Besides partitioning the various functions and mapping them onto hardware components, the decision tool must perform scheduling and pipelining to fulfill resource constraints and reach the real-time requirement; careful scheduling and pipelining can greatly reduce the time and cost of implementing the multimedia system. In addition, owing to time-to-market pressure in system development, we propose a design flow to speed up the design process. All decision tasks in the flow are performed automatically or semi-automatically to reduce the expenditure of manpower and time. Finally, we experiment with the proposed tool on several multimedia systems. The results show that the automatic process conforms to the constraints set by the system specifications and ensures the accuracy of the process.
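One simple way to make the hardware/software mapping decision such a tool automates is a greedy heuristic: move tasks to hardware in order of speedup gained per unit of silicon area until the area budget is exhausted. This is a sketch of the general idea under invented task data, not the thesis's actual partitioning algorithm:

```python
def partition(tasks, hw_area_budget):
    # tasks: list of (name, sw_time, hw_time, hw_area).
    # Sort by speedup per unit of hardware area, best candidates first.
    order = sorted(tasks,
                   key=lambda t: (t[1] - t[2]) / t[3],
                   reverse=True)
    hw, sw, used_area = [], [], 0
    for name, sw_time, hw_time, hw_area in order:
        # Move to hardware only if it is actually faster there and fits.
        if sw_time > hw_time and used_area + hw_area <= hw_area_budget:
            hw.append(name)
            used_area += hw_area
        else:
            sw.append(name)
    return hw, sw

# Hypothetical tasks: (name, software time, hardware time, hardware area)
tasks = [
    ("dct",    100, 10, 40),  # large speedup for moderate area
    ("quant",   30, 20,  5),
    ("huffman", 50, 45, 60),  # poor speedup per unit area
]
print(partition(tasks, 50))  # (['dct', 'quant'], ['huffman'])
```

Real partitioners must also account for communication cost between the two sides and for pipelined schedules, which is exactly what makes the exact problem hard and motivates automated tooling.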
