1 |
Cosynthesis of embedded systems using coloured interpreted Petri nets. Sananikone, Dang S. January 1996
The rising complexity of systems design and the shift in the hardware/software functionality boundary have spurred research into the development of EDA (Electronic Design Automation) tools at the system level. Codesign is a methodology that proposes an integrated approach to systems design, unifying both hardware and software approaches. Cosynthesis is a major field of research within codesign; cosynthesis takes a behavioural description and generates a hardware/software partition that satisfies system constraints. Current research is concerned with the automatic partitioning of systems. COSYN was developed to address the cosynthesis of embedded systems. A CIPN (Coloured Interpreted Petri Net) is used to model multiple processes and interprocess communication. The partitioning algorithm adopts a fine-grained approach to system partitioning (it considers moving nodes at the basic block level) and selects blocks according to their potential speedup and extra hardware requirements, using hardware and software execution-time estimators. The interdependence between interprocess communication primitives is exploited to achieve a better hardware/software partition. Results for an input example pdi are given which illustrate the benefits of the approach presented in this thesis.
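The abstract names the selection principle (rank blocks by potential speedup against extra hardware cost) without giving the algorithm itself. A minimal greedy sketch of that principle might look like the following; the `Block` fields, cost model, and all numbers are illustrative assumptions, not COSYN's actual estimators or CIPN model.

```python
# Illustrative sketch of speedup-driven block selection; not COSYN itself.
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    sw_time: float   # estimated software execution time
    hw_time: float   # estimated hardware execution time
    hw_area: float   # extra hardware required to move this block

def partition(blocks, area_budget):
    """Greedily move basic blocks to hardware by speedup per unit area."""
    ranked = sorted(blocks,
                    key=lambda b: (b.sw_time - b.hw_time) / b.hw_area,
                    reverse=True)
    hw, sw, used = [], [], 0.0
    for b in ranked:
        if b.sw_time > b.hw_time and used + b.hw_area <= area_budget:
            hw.append(b)
            used += b.hw_area
        else:
            sw.append(b)
    return hw, sw

hw, sw = partition([Block("bb0", 120, 15, 40), Block("bb1", 60, 50, 80)],
                   area_budget=100)
print([b.name for b in hw], [b.name for b in sw])  # ['bb0'] ['bb1']
```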
|
2 |
Design of High-performance, Low-power and Memory-efficient EBCOT and MQ Coder for JPEG2000. Chang, Tso-Hsuan. 01 September 2003
JPEG2000 is an emerging state-of-the-art standard for still image compression. The standard not only offers superior rate-distortion performance, but also provides a wide range of features and functionality compared to JPEG. However, the advantages of JPEG2000 come at the expense of computational complexity and memory requirements in bit-plane coding, so low-cost ASIC design for JPEG2000 remains a challenge and a dedicated hardware implementation of the EBCOT block coder is necessary. In this thesis a high-throughput EBCOT block coder is proposed. There are two main parts in the EBCOT block coder: context modeling and the MQ coder. For context modeling, a novel pass-parallel module based on the vertically causal mode is proposed. Pass-parallel modeling, which reduces the cycles needed to check each sample to be coded, processes the three originally sequential passes in a single pass and generates one or two context labels per cycle. It is fast and saves 8 Kbits of internal memory. Since context modeling generates one or two context labels per cycle, a multi-bit MQ coder is needed to keep the buffer between context modeling and the MQ coder from overflowing. For the MQ coder, three approaches that process one or two context labels per cycle are proposed. Furthermore, we modified the MQ coder architecture and propose two low-power implementation concepts: reducing memory accesses and disabling unused blocks.
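To see why a multi-bit MQ coder is needed, consider a toy cycle-level model of the buffer between the two stages: a producer emitting one or two labels per cycle will overrun any fixed-size FIFO if the consumer accepts only one label per cycle. The rates and simulation length below are illustrative assumptions, not figures from the thesis.

```python
# Toy model of the buffer between context modeling (1-2 labels/cycle)
# and the MQ coder; compares a 1-symbol and a 2-symbol consumer.
import random

def max_fifo_depth(cycles, consume_per_cycle, seed=0):
    random.seed(seed)
    depth = peak = 0
    for _ in range(cycles):
        depth += random.choice((1, 2))          # context modeling output
        depth -= min(depth, consume_per_cycle)  # MQ coder input
        peak = max(peak, depth)
    return peak

print("1-symbol MQ coder, peak FIFO depth:", max_fifo_depth(10_000, 1))
print("2-symbol MQ coder, peak FIFO depth:", max_fifo_depth(10_000, 2))
```

With a one-symbol consumer the backlog grows roughly half a symbol per cycle (average production is 1.5 labels/cycle), while the two-symbol consumer keeps the buffer bounded.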
|
3 |
Compiler Directed Codesign for FPGA-based Embedded Systems. Hauff, Martin Anthony (marty@extendabilities.com.au). January 2008
As embedded systems designers increasingly turn to programmable logic technologies in place of off-the-shelf microprocessors, there is a growing interest in the development of optimised custom processing cores that can be designed on a per-application basis. FPGAs blur the traditional distinction between hardware and software and offer the promise of application-specific hardware acceleration. But realising this in a general sense requires a significant departure from traditional embedded systems development flows. Whereas off-the-shelf processors have a fixed architecture, the same cannot be said of purpose-built FPGA-based processors. With this freedom comes the challenge of empirically determining the optimal boundary point between hardware and software. The fluidity of the hardware/software partition also poses an interesting challenge for compiler developers. This thesis presents a tool and methodology that address these codesign challenges in a new way. Described as 'compiler-directed codesign', the approach makes use of a suitably modified compiler to help direct the development of a custom processor core on a per-application basis. By exposing the compiler's internal representation of a compiled target program, visibility can be gained into the instructions and hardware resources that are most sought after by the compiler. This information is then used to inform further processor development and to determine the optimal partition between hardware and software. At each design iteration, the machine model is updated to reflect the available hardware resources, the compiler is rebuilt, and the target application is compiled once again. By including the compiler 'in the loop' of custom processor design, developers can accurately quantify the impact on performance caused by the addition or removal of specific hardware resources and iteratively converge on an optimal solution. Compiler Directed Codesign has advantages over existing codesign methodologies because it offers both a concrete starting point for the partitioning process and quantifiable, rapid feedback on the merits of different partitioning choices. When applied to an Adaptive PCM Encoder/Decoder case study, the Compiler Directed Codesign technique yielded a custom processor core that was between 36% and 73% smaller, consumed between 11% and 19% less memory, and performed up to 10X faster than comparable general-purpose FPGA-based processor cores. The conclusion of this work is that a suitably modified compiler can serve a valuable role in directing hardware/software partitioning on a per-application basis.
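The iteration the abstract describes (update the machine model, rebuild the compiler, recompile, measure) can be sketched schematically. The resource names, cycle savings, and area figures below are invented stand-ins for the compiler's real demand data, not values from the thesis.

```python
# Schematic, self-contained model of the compiler-in-the-loop iteration.
# Cycles saved and area used per optional unit are illustrative only.
BASELINE = 1000
SAVINGS = {"multiplier": 300, "barrel_shifter": 150, "mac_unit": 120}
AREA = {"multiplier": 90, "barrel_shifter": 40, "mac_unit": 70}

def compile_and_measure(model):
    """Stand-in for: rebuild compiler for `model`, recompile, measure cycles."""
    return BASELINE - sum(SAVINGS[u] for u in model)

def codesign_loop(target_cycles, area_budget):
    model, area = set(), 0
    while compile_and_measure(model) > target_cycles:
        # Pick the resource the "compiler" wants most: cycles saved per unit area.
        candidates = [u for u in SAVINGS
                      if u not in model and area + AREA[u] <= area_budget]
        if not candidates:
            break  # no affordable resource left; the partition has converged
        best = max(candidates, key=lambda u: SAVINGS[u] / AREA[u])
        model.add(best)
        area += AREA[best]
    return model, compile_and_measure(model)

print(codesign_loop(target_cycles=500, area_budget=150))
```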
|
4 |
HW/SW Codesign and Design, Evaluation of Software Framework for AcENoCs: An FPGA-Accelerated NoC Emulation Platform. Pai, Vinayak. December 2010
The majority of modern compute-intensive applications are heterogeneous in nature. To support their ever-increasing computational requirements, present-day System-on-Chip (SoC) architectures have adopted a multicore style of modeling, incorporating multiple heterogeneous processing cores on a single chip. The emerging Network-on-Chip (NoC) interconnect paradigm provides a scalable and power-efficient solution for communication among multiple cores, serving as a powerful replacement for traditional bus-based architectures. A fast, robust and flexible emulation platform is the key to successful realization and validation of such architectures within a very short span of time.
This research focuses on various aspects of Hardware/Software (HW/SW) codesign for AcENoCs (Accelerated Emulation Platform for NoCs), a Field Programmable Gate Array (FPGA)-accelerated, configurable, cycle-accurate platform for the emulation and validation of NoC architectures. This work also details the design, implementation and evaluation of AcENoCs' software framework, along with the various design optimizations carried out and tradeoffs considered in AcENoCs' HW/SW codesign to achieve an optimal balance between emulated network dimensions and emulation performance. The AcENoCs emulation platform is realized on a Xilinx Virtex-5 FPGA. AcENoCs' hardware framework consists of the NoC built from configurable hardware library components, while the software framework consists of Traffic Generators (TGs) and their associated source queues, Traffic Receptors (TRs) with a statistics-analysis module, and a dynamically controlled emulation clock generator. The software framework runs on an on-chip Xilinx MicroBlaze processor. This report also describes the interaction between the various HW/SW events in an emulation cycle and assesses AcENoCs' performance speedup and tradeoffs over existing FPGA emulators and software simulators.
FPGA synthesis results showed that networks with dimensions up to 5x5 could be accommodated inside the device. Varying synthetic traffic workloads, generated by the TGs, were used to evaluate the network. Real application-based traces were also run on the AcENoCs platform to evaluate the performance improvement over software simulators. To improve emulator performance, software profiling was carried out to identify and optimize the software components consuming the highest number of processor cycles in an emulation cycle. Emulation test cases were run and latency values recorded for varying traffic patterns in order to evaluate the AcENoCs platform. Experimental results showed emulation speedups on the order of 10,000-12,000X over HDL (Hardware Description Language) simulators and 14-47X over software simulators, without sacrificing cycle accuracy.
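The TG/TR split lends itself to a toy model: software traffic generators tag flits with their injection cycle, and traffic receptors compute latency on delivery. The single-queue "network", injection rate, and fixed delay below are stand-ins, not AcENoCs' emulated NoC.

```python
# Toy loop in the spirit of AcENoCs' software framework: TGs inject,
# a stubbed network advances one emulation cycle at a time, TRs record latency.
from collections import deque
import random

random.seed(1)
network = deque()   # stand-in for the FPGA-side NoC
latencies = []
NET_DELAY = 4       # stand-in network traversal delay, in cycles

for clock in range(1000):
    if random.random() < 0.3:                        # TG: Bernoulli injection
        network.append(clock)                        # tag flit with injection time
    if network and clock - network[0] >= NET_DELAY:  # flit has traversed the NoC
        latencies.append(clock - network.popleft())  # TR: record its latency

print("flits received:", len(latencies),
      "avg latency:", sum(latencies) / len(latencies))
```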
|
5 |
SHARP: Sustainable Hardware Acceleration for Rapidly-evolving Pre-existing Systems. Beeston, Julie. 13 September 2012
The goal of this research is to present a framework to accelerate the execution of legacy software systems without having to redesign them or limit future changes. The speedup is accomplished through hardware acceleration, based on a semi-automatic infrastructure which supports design decisions and simulates their impact. Many programs are available for translating code written in C into VHDL (Very High Speed Integrated Circuit Hardware Description Language). What is missing are simpler and more direct strategies to incorporate encapsulatable portions of the code, translate them to VHDL, and allow the VHDL code and the C code to communicate through a flexible interface.
SHARP is a streamlined, easily understood infrastructure which facilitates this process in two phases. In the first phase, the SHARP GUI (an interactive Graphical User Interface) is used to load a program written in a high-level general-purpose programming language, to scan the code for SHARP POINTs (Portions Only Including Non-interscoping Types) based on user-defined constraints, and then to automatically translate such POINTs to an HDL. Finally, the infrastructure needed to co-execute the updated program is generated. SHARP POINTs have a clearly defined interface and can be used by the SHARP scheduler.
In the second phase, the SHARP scheduler allows the SHARP POINTs to run on the chosen reconfigurable hardware, here an FPGA (Field Programmable Gate Array), and to communicate cleanly with the original processor (for the software).
The resulting system is a good (though not necessarily optimal) acceleration of the original software application that is easily maintained as the code continues to develop and evolve.
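The co-execution idea, in which POINTs expose a uniform interface so the scheduler can route a call to the hardware implementation when one exists and fall back to the original software otherwise, can be sketched as follows. The registry and both implementations are illustrative stand-ins, not SHARP's actual interface.

```python
# Minimal sketch of SHARP-style co-execution dispatch (hypothetical names).

def sw_checksum(data):
    """Original software routine."""
    return sum(data) & 0xFFFF

def fpga_checksum(data):
    """Stand-in for a VHDL-backed POINT: same contract, offloaded in reality."""
    return sum(data) & 0xFFFF

POINT_REGISTRY = {"checksum": fpga_checksum}   # POINTs translated to VHDL

def dispatch(name, sw_impl, *args):
    """Prefer the hardware POINT when registered, else run the software."""
    impl = POINT_REGISTRY.get(name, sw_impl)
    return impl(*args)

print(dispatch("checksum", sw_checksum, bytes(range(100))))
```

Because the software fallback is always available, the original program keeps working as it evolves; only the registry needs updating when a POINT is retranslated.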
|
6 |
Social discovery and information sharing in sport climbing. Sipinen, Anders. January 2015
In this report I examine current practices of knowledge sharing in the climbing community. Through the use of co-design methods, ideas are developed and brought out by the community. The role of the interaction designer becomes that of an enabler, allowing the community to create a participatory design of a system supporting information creation and knowledge sharing in the context of exploration. The aim is to use established interaction design research methods to examine the activity and, with practitioners from the community, explore what role technology can play in enhancing the experience of information sharing between individuals and groups. To answer the thesis's central research question about the role co-design can play in engaging non-designers in this diverse community in the design of an interactive service, practitioners from the community are engaged through co-design workshops and discussions. This collaborative methodology is applied from the initial ideation phase through the exploration phase and into the final design-formation stage. By using the creativity of the community throughout the entire process, the result is grounded not only in theory but also in the expectations and specific activities of its intended users. Another reason for engaging the community in the research is to involve and build on the unique perspectives that come from each participant's specific experiences. The information is explored in relation to current technologies and practice, and how it can be applied to the main activity to better achieve the community's or participants' goals. Using research-through-design methods to perform technological exploration, ideas are evaluated by constructing prototypes that can be used to introduce technology to the participants and test concepts. Outcomes and documentation are evaluated to examine the overall experience and identify desired results. The final result is a prototype of an interactive system called The Circuit Tool that supports the exploration of a new form of curated information, produced by individuals or groups of climbers to share within the community.
|
7 |
Interface Design and Synthesis for Structural Hybrid Microarchitectural Simulators. Ruan, Zhuo. 01 December 2013
Computer architects have discovered the potential of using FPGAs to accelerate software microarchitectural simulators. One type of FPGA-accelerated microarchitectural simulator, named the hybrid structural microarchitectural simulator, is very promising, because it combines structural software and hardware, and this particular organization provides both modeling flexibility and fast simulation speed. The performance of a hybrid simulator is significantly affected by how the interface between software and hardware is constructed. The work of this thesis creates an infrastructure, named the Simulator Partitioning Research Infrastructure (SPRI), to implement the synthesis of hybrid structural microarchitectural simulators, including simulator partitioning, simulator-to-hardware synthesis, and interface synthesis. With the support of SPRI, this thesis characterizes the design space of interfaces for synthesized hybrid structural microarchitectural simulators and provides implementations for several such interfaces. The evaluation thoroughly studies the important design tradeoffs and performance factors (e.g. hardware capacity, design scalability, and interface latency) involved in choosing an efficient interface. The work of this thesis is essential to the computer architecture research community: it not only contributes a complete synthesis infrastructure, but also provides guidelines to architects on how to organize software microarchitectural models and choose a proper software/hardware interface so that the hybrid microarchitectural simulators synthesized from these software models can achieve desirable speedup.
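One of the tradeoffs studied, interface latency against how much of the simulator sits in hardware, can be captured in a back-of-the-envelope model: moving more work into hardware helps only while the added software/hardware boundary crossings do not eat the gain. All numbers below are illustrative assumptions, not measurements from the thesis.

```python
# Amdahl-style model of a hybrid simulator's runtime (invented figures).

def sim_time(total_work, hw_fraction, hw_speedup, crossings, iface_latency):
    sw = total_work * (1 - hw_fraction)          # work left in software
    hw = total_work * hw_fraction / hw_speedup   # accelerated hardware work
    return sw + hw + crossings * iface_latency   # plus interface overhead

for crossings in (10, 10_000):
    t = sim_time(total_work=1e6, hw_fraction=0.9, hw_speedup=50,
                 crossings=crossings, iface_latency=100)
    print(f"{crossings:>6} crossings -> {t:,.0f} time units")
```

With few crossings the hardware partition dominates the runtime; with many, interface latency swamps everything else, which is why interface design matters as much as raw hardware capacity.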
|
8 |
Security-driven Design Optimization of Mixed Cryptographic Implementations in Distributed, Reconfigurable, and Heterogeneous Embedded Systems. Nam, HyunSuk. January 2017
Distributed heterogeneous embedded systems are increasingly prevalent in numerous applications, including automotive, avionics, smart and connected cities, the Internet of Things, etc. With pervasive network access within these systems, security is a critical design concern. This dissertation presents a modeling and optimization framework for distributed, reconfigurable, and heterogeneous embedded systems. Distributed embedded systems consist of numerous interconnected embedded devices, each composed of different computing resources, such as single-core processors, asymmetric multicore processors, field-programmable gate arrays (FPGAs), and various combinations thereof.
A dataflow-based modeling framework for streaming applications integrates models for computational latency, mixed cryptographic implementations for inter-task and intra-task communication, security levels, communication latency, and power consumption. For the security model, we present a level-based model of cryptographic algorithms using mixed cryptographic implementations, including both symmetric and asymmetric implementations. We utilize a multi-objective genetic optimization algorithm to optimize security and energy consumption subject to latency and minimum-security-level constraints. The presented methodology is evaluated using a video-based object detection and tracking application and several synthetic benchmarks representing various application types. Experimental results for these design and optimization frameworks demonstrate the benefits of the mixed cryptographic algorithm security model over single cryptographic algorithm alternatives. We further consider several distributed heterogeneous embedded system architectures.
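A drastically simplified version of the constrained two-objective problem can be enumerated exhaustively for a tiny instance: choose a cryptographic level per communication edge to maximize security and minimize energy, subject to a latency budget and a minimum level. The levels, costs, and constraints below are invented, and the dissertation uses a genetic algorithm over a far richer dataflow model; this sketch only shows the shape of the Pareto front being searched.

```python
# Exhaustive Pareto enumeration for a toy security/energy tradeoff.
import itertools

LEVELS = {  # level: (security score, energy cost, latency cost) -- invented
    1: (1.0, 1.0, 1.0),
    2: (2.5, 2.2, 1.8),
    3: (4.0, 4.5, 3.0),
}
EDGES, MIN_LEVEL, LATENCY_BUDGET = 4, 1, 9.0

def evaluate(assign):
    sec, eng, lat = (sum(LEVELS[l][i] for l in assign) for i in range(3))
    return sec, eng, lat

def dominates(a, b):
    """a dominates b: no worse on both objectives, strictly better on one."""
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

feasible = []
for assign in itertools.product([l for l in LEVELS if l >= MIN_LEVEL], repeat=EDGES):
    sec, eng, lat = evaluate(assign)
    if lat <= LATENCY_BUDGET:                 # latency constraint
        feasible.append((tuple(sorted(assign)), (sec, eng)))

front = {a: m for a, m in feasible
         if not any(dominates(m2, m) for _, m2 in feasible)}
for assign, (sec, eng) in sorted(front.items(), key=lambda kv: kv[1][0]):
    print(assign, f"security={sec:.1f} energy={eng:.1f}")
```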
|
9 |
Técnicas de profiling para o co-projeto de hardware e software baseado em computação reconfigurável aplicadas ao processador softcore Nios II da Altera / Hardware and software codesign profiling techniques based on reconfigurable computing applied to Altera's Nios II soft-core processor. Kiehn, Luiz Henrique. 21 September 2012
As electronic systems development paradigms have advanced, new concepts, models and techniques have emerged, yielding more efficient and objective tools. Among these, system-level (ESL) electronic design automation (EDA) tools have brought a considerable increase in productivity to the construction of electronic systems, including embedded systems. Regarding the performance of the resulting system, monitoring its execution and determining its operating profile are essential tasks for assessing, from its behavior, which points represent bottlenecks or hot spots that affect its overall efficiency. It is therefore necessary to investigate verification and optimization principles for such systems that are better adapted to the new project-development paradigms. The aim of this work is to implement a data collection and processing module for profiling programs written in C and executed on soft-core processors such as Altera's Nios II.
However, unlike the statistics offered by the GProf (GNU Profiling) tool for performance analysis, in which each sample obtained increments a counter for the function caught executing, this work turns its attention to profiling heap memory usage, which lies chiefly in the allocated volume observed at each sample. Thus, across different samples of the same function, the quantity of interest is the largest amount of memory the function used among all samples collected. This means that, instead of incrementing a counter per sample, the principle adopted is to record the highest memory usage, in bytes, observed for each function. The main features of the proposed module are: a) storing the heap memory usage information obtained by the profiling process in a format suitable for later use by hardware/software codesign applications; and b) generating profiling reports that present the volume of dynamic memory allocated while the analyzed programs run, so that the places where such usage is most critical can be identified, allowing the designer to make decisions about reworking the source code, increasing the amount of memory installed in the system, or redesigning the architecture in general.
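The peak-tracking rule is easy to state in code: per function, keep the maximum heap usage seen in any sample rather than a sample count. The sketch below simulates the sampler; the function names and byte counts are invented for illustration.

```python
# Sketch of per-function peak heap tracking (versus gprof-style counting).

peak_heap = {}  # function name -> max allocated bytes seen in any sample

def record_sample(function, heap_bytes):
    """Keep the largest heap allocation observed for this function."""
    if heap_bytes > peak_heap.get(function, 0):
        peak_heap[function] = heap_bytes

# Simulated samples: (function caught on the stack, current heap allocation).
for fn, size in [("parse", 4096), ("parse", 12288), ("render", 65536),
                 ("parse", 8192), ("render", 32768)]:
    record_sample(fn, size)

for fn, peak in sorted(peak_heap.items(), key=lambda kv: -kv[1]):
    print(f"{fn}: peak heap {peak} bytes")
```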
|