221 |
Entwurf, Methoden und Werkzeuge für komplexe Bildverarbeitungssysteme auf Rekonfigurierbaren System-on-Chip-Architekturen / Design, methodologies and tools for complex image processing systems on reconfigurable system-on-chip architectures
Mühlbauer, Felix January 2011 (has links)
Bildverarbeitungsanwendungen stellen besondere Ansprüche an das ausführende Rechensystem.
Einerseits ist eine hohe Rechenleistung erforderlich.
Andererseits ist eine hohe Flexibilität von Vorteil, da die Entwicklung tendentiell ein experimenteller und interaktiver Prozess ist.
Für neue Anwendungen tendieren Entwickler dazu, eine Rechenarchitektur zu wählen, die sie gut kennen, anstatt eine Architektur einzusetzen, die am besten zur Anwendung passt.
Bildverarbeitungsalgorithmen sind inhärent parallel, doch herkömmliche bildverarbeitende eingebettete Systeme basieren meist auf sequentiell arbeitenden Prozessoren.
Im Gegensatz zu dieser "Unstimmigkeit" können hocheffiziente Systeme aus einer gezielten Synergie aus Software- und Hardwarekomponenten aufgebaut werden.
Die Konstruktion solcher Systeme ist jedoch komplex und viele Lösungen, wie zum Beispiel grobgranulare Architekturen oder anwendungsspezifische Programmiersprachen, sind oft zu akademisch für einen Einsatz in der Wirtschaft.
Die vorliegende Arbeit soll ein Beitrag dazu leisten, die Komplexität von Hardware-Software-Systemen zu reduzieren und damit die Entwicklung hochperformanter on-Chip-Systeme im Bereich Bildverarbeitung zu vereinfachen und wirtschaftlicher zu machen.
Dabei wurde Wert darauf gelegt, den Aufwand sowohl für Einarbeitung und Entwicklung als auch für Erweiterungen gering zu halten.
Es wurde ein Entwurfsfluss konzipiert und umgesetzt, welcher es dem Softwareentwickler ermöglicht, Berechnungen durch Hardwarekomponenten zu beschleunigen und das zu Grunde liegende eingebettete System komplett zu prototypisieren.
Hierbei werden komplexe Bildverarbeitungsanwendungen betrachtet, welche ein Betriebssystem erfordern, wie zum Beispiel verteilte Kamerasensornetzwerke.
Die eingesetzte Software basiert auf Linux und der Bildverarbeitungsbibliothek OpenCV.
Die Verteilung der Berechnungen auf Software- und Hardwarekomponenten und die daraus resultierende Ablaufplanung und Generierung der Rechenarchitektur erfolgt automatisch.
Mittels einer auf der Antwortmengenprogrammierung basierten Entwurfsraumexploration ergeben sich Vorteile bei der Modellierung und Erweiterung.
Die Systemsoftware wird mit OpenEmbedded/Bitbake synthetisiert und die erzeugten on-Chip-Architekturen auf FPGAs realisiert. / Image processing applications place special demands on the executing computational system.
On the one hand, high computational power is necessary.
On the other hand, high flexibility is an advantage, because development tends to be an experimental and interactive process.
For new applications, developers tend to choose a computational architecture they know well instead of the one that fits the application best.
Image processing algorithms are inherently parallel, while common embedded image processing systems are mostly based on sequentially operating processors.
In contrast to this "mismatch", highly efficient systems can be built from a targeted synergy of software and hardware components.
However, the construction of such systems is complex, and many solutions, such as coarse-grained architectures or application-specific programming languages, are often too academic for industrial use.
The present work aims to reduce the complexity of hardware/software systems and thus to simplify, and make more economical, the development of high-performance on-chip systems in the domain of image processing.
Particular attention was paid to keeping the effort for familiarization, development, and extension low.
A design flow was conceived and implemented that allows the software developer to accelerate computations with hardware components and to prototype the whole embedded system.
Complex image processing applications that require an operating system, such as distributed camera sensor networks, are considered.
The software used is based on Linux and the image processing library OpenCV.
The partitioning of the computations between software and hardware components, and the resulting scheduling and generation of the computing architecture, are performed automatically.
The design space exploration is based on answer set programming, which offers advantages for modelling in terms of simplicity and extensibility.
The system software is synthesized with OpenEmbedded/BitBake, and the generated on-chip architectures are implemented on FPGAs.
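The partitioning problem sketched in this abstract can be illustrated in a few lines. The dissertation solves it with answer set programming; in its absence, the following Python sketch enumerates hardware/software assignments under an area constraint and keeps the fastest feasible one. All task names, latencies, and area figures are invented for illustration.

```python
from itertools import product

# Hypothetical image-processing tasks: (name, sw_time_ms, hw_time_ms, hw_area_units)
TASKS = [
    ("grayscale", 8.0, 1.0, 200),
    ("sobel",     20.0, 2.5, 900),
    ("threshold", 4.0, 0.5, 150),
]
AREA_BUDGET = 1000  # available FPGA area (invented unit)

def explore(tasks, area_budget):
    """Enumerate all HW/SW assignments and keep the fastest feasible one."""
    best = None
    for assignment in product(("SW", "HW"), repeat=len(tasks)):
        area = sum(t[3] for t, a in zip(tasks, assignment) if a == "HW")
        if area > area_budget:
            continue  # violates the area constraint
        time = sum(t[2] if a == "HW" else t[1] for t, a in zip(tasks, assignment))
        if best is None or time < best[0]:
            best = (time, assignment, area)
    return best

best_time, best_assignment, used_area = explore(TASKS, AREA_BUDGET)
```

An ASP solver would express the same search declaratively (choice rules for the assignment, an integrity constraint for the area budget, a `#minimize` directive for time) and scale far beyond this brute-force enumeration.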
|
222 |
DIAMOND : Une approche pour la conception de systèmes multi-agents embarqués
Jamont, Jean-Paul 29 September 2005 (has links) (PDF)
This thesis proposes a method for analysing problems arising in open physical complex systems using physical multi-agent systems. The method, which we call DIAMOND (Decentralized Iterative Approach for Multiagent Open Networks Design), arranges four phases into a spiral life cycle. For requirements gathering it uses UML notations, but it structures the overall operation of the system through a study of operating and stopping modes. It uses refinement, notably between the local and global levels of the system, and assembles individual behaviours and social behaviours while identifying the influences of each on the other. It guides the designer during the generic design phase by using components as the unit of work. At the end of the cycle, the hardware/software partitioning of the system takes place and enables the generation of code or hardware descriptions.

Proposing a method was not sufficient: treating the components of complex physical systems as cooperating nodes of a wireless network is an attractive approach that can be seen as the extreme physical expression of decentralization. Accordingly, specific architectural needs must be addressed. To that end, we propose the MWAC (Multi-Wireless-Agent Communication) model, which relies on the self-organization of the system's entities.

These two contributions are exploited in the EnvSys application, whose objective is the instrumentation of a hydrographic network.
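MWAC relies on nodes self-organizing into groups with distinct roles. The Python sketch below conveys the flavour of such neighbourhood-based role assignment only; the election rule used here (locally maximal degree wins, members bridging two representatives become relays) is an invented stand-in, not the actual MWAC rule, and the network is a toy example.

```python
def assign_roles(adjacency):
    """adjacency: dict node -> set of neighbour nodes."""
    degree = {n: len(nbrs) for n, nbrs in adjacency.items()}
    roles = {}
    # A node becomes a group representative if no neighbour beats it on
    # (degree, node id); otherwise it starts as a simple member.
    for n, nbrs in adjacency.items():
        if all((degree[n], n) >= (degree[m], m) for m in nbrs):
            roles[n] = "representative"
        else:
            roles[n] = "member"
    # A member adjacent to two or more representatives relays traffic
    # between groups, so it is promoted to a link role.
    for n, nbrs in adjacency.items():
        if roles[n] == "member":
            reps = [m for m in nbrs if roles[m] == "representative"]
            if len(reps) >= 2:
                roles[n] = "link"
    return roles

net = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4, 6, 7}, 6: {5}, 7: {5}}
roles = assign_roles(net)
```

Run on the toy network, nodes 3 and 5 (the best-connected ones) become representatives and node 4, sitting between both groups, becomes the link that routes inter-group messages.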
|
223 |
Playing and Learning Across Locations: Identifying Factors for the Design of Collaborative Mobile Learning
Spikol, Daniel January 2008 (has links)
The research presented in this thesis investigates the design challenges associated with the development and use of mobile applications and tools for supporting collaboration in educational activities. These technologies provide new opportunities to promote and enhance collaboration by engaging learners in a variety of activities across different places and contexts. A basic challenge is to identify how to design and deploy mobile tools and services that could be used to support collaboration in different kinds of settings. There is a need to investigate how to design collaborative learning processes and to support flexible educational activities that take advantage of mobility. The main research question that I focus on is the identification of factors that influence the design of mobile collaborative learning.

The theoretical foundations that guide my work rely on the concepts behind computer-supported collaborative learning and design-based research. These ideas are presented at the beginning of this thesis and provide the basis for developing an initial framework for understanding mobile collaboration. The empirical results from three different projects conducted as part of my efforts at the Center for Learning and Knowledge Technologies at Växjö University are presented and analyzed. These results are based on a collection of papers that have been published in two refereed international conference proceedings, a journal paper, and a book chapter. The educational activities and technological support have been developed in accordance with a grounded theoretical framework. The thesis ends by discussing those factors which have been identified as having a significant influence when it comes to the design and support of mobile collaborative learning.

The findings presented in this thesis indicate that mobility changes the contexts of learning and modes of collaboration, requiring different design approaches than those used in traditional system development to support teaching and learning. The major conclusion of these efforts is that the learners’ creations, actions, sharing of experiences and reflections are key factors to consider when designing mobile collaborative activities in learning. The results additionally point to the benefit of directly involving the learners in the design process by connecting them to the iterative cycles of interaction design and research.
|
224 |
A microprocessor performance and reliability simulation framework using the speculative functional-first methodology
Yuan, Yi 13 February 2012 (has links)
With the high complexity of modern-day microprocessors and the slow speed of cycle-accurate simulations, architects are often unable to adequately evaluate their designs during the architectural exploration phases of chip design. This thesis presents the design and implementation of the timing partition of the cycle-accurate, microarchitecture-level SFFSim-Bear simulator. SFFSim-Bear is an implementation of the speculative functional-first (SFF) methodology and utilizes a hybrid software-FPGA platform to accelerate simulation throughput. The timing partition, implemented in FPGA, features throughput-oriented, latency-tolerant designs to cope with the challenges of the hybrid platform. Furthermore, a fault injection framework is added to this implementation that allows designers to study the reliability aspects of their processors. The result is a simulator that is fast, accurate, flexible, and extensible.
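The core SFF idea is that a fast functional model runs ahead of the slower timing model and is rolled back when speculation turns out to be wrong. The Python sketch below caricatures that control flow in software only: a toy two-instruction ISA, an invented per-instruction cycle cost, and a flush penalty charged when the timing model flags a misprediction. The real SFFSim-Bear timing partition runs in FPGA and is far more detailed.

```python
def sff_simulate(program, mispredicted_pcs, cycles_per_op=2, flush_penalty=5):
    """Run the functional model ahead; when the timing model flags a pc as
    mispredicted, squash the speculative work and replay from that pc,
    paying a pipeline-flush penalty. Returns (final accumulator, cycles)."""
    state = {"acc": 0}
    cycles = 0
    replayed = set()
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if pc in mispredicted_pcs and pc not in replayed:
            # Timing partition detects that the speculative path was wrong:
            # discard the speculative result and re-execute this instruction.
            replayed.add(pc)
            cycles += flush_penalty
            continue
        if op == "add":
            state["acc"] += arg
        elif op == "mul":
            state["acc"] *= arg
        cycles += cycles_per_op
        pc += 1
    return state["acc"], cycles

program = [("add", 3), ("mul", 4), ("add", 1)]
result, cycles = sff_simulate(program, mispredicted_pcs={1})
```

The functional result is unaffected by the misprediction (the replay restores correctness); only the cycle count grows by the flush penalty, which is exactly the accuracy-versus-throughput trade the hybrid platform must manage.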
|
225 |
Compact physical models for power supply noise and chip/package co-design in gigascale integration (GSI) and three-dimensional (3-D) integration systems
Huang, Gang 25 September 2008 (has links)
The objective of this dissertation is to derive a set of compact physical models addressing power integrity issues in high-performance gigascale integration (GSI) systems and three-dimensional (3-D) systems. The aggressive scaling of CMOS integrated circuits makes the design of power distribution networks a serious challenge, because the supply current and clock frequency are increasing, which increases the power supply noise. The scaling of the supply voltage has slowed down in recent years, but the logic on the integrated circuit (IC) still becomes more sensitive to any supply voltage change because of the decreasing clock cycle and, therefore, the shrinking noise margin. Excessive power supply noise can lead to severe degradation of chip performance and even logic failure. Therefore, power supply noise modeling and power integrity validation are of great significance in GSI systems and 3-D systems.
Compact physical models enable quick recognition of the power supply noise without doing dedicated simulations. In this dissertation, accurate and compact physical models for the power supply noise are derived for power hungry blocks, hot spots, 3-D chip stacks, and chip/package co-design. The impacts of noise on transmission line performance are also investigated using compact physical modeling schemes. The models can help designers gain sufficient physical insights into the complicated power delivery system and tradeoff various important chip and package design parameters during the early stages of design. The models are compared with commercial tools and display high accuracy.
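The kind of quick, simulation-free estimate such compact models enable can be illustrated with the textbook first-order decomposition of supply noise into resistive (IR) drop and inductive (L·di/dt) noise. The formula and all numbers below are generic illustrative values, not the dissertation's models or data.

```python
def supply_noise(i_avg, r_grid, di, dt, l_loop):
    """First-order power-supply-noise estimate (all SI units)."""
    ir_drop = i_avg * r_grid     # static IR drop across the power grid
    ldi_dt = l_loop * di / dt    # transient noise from switching current
    return ir_drop + ldi_dt

# 10 A average current, 1 mOhm grid resistance, 5 A swing in 1 ns, 10 pH loop
noise = supply_noise(i_avg=10.0, r_grid=1e-3, di=5.0, dt=1e-9, l_loop=10e-12)
margin_ok = noise < 0.05 * 0.9   # within a 5% noise budget on a 0.9 V supply?
```

Even these invented but plausible numbers put the estimate (60 mV) outside a 5% budget, which is precisely the early-design feedback, trading decap, grid resistance, and package inductance, that compact models are meant to give before any dedicated simulation.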
|
226 |
Conception conjointe d’antenne active pour futurs modules de transmissions RF miniatures et faible pertes / Active antenna co-design for future compact and high efficient RF front-end
Ben abdallah, Essia 12 December 2016 (links)
L’évolution des différentes générations de systèmes de télécommunications cellulaires a entraîné une complexité du frontal des terminaux mobiles caractérisés notamment par la multiplication des chaînes RF qui le constituent. Chaque chaîne est dédiée à un standard, ce qui n’est optimal ni du point de vue du coût, ni de l’encombrement. Afin d’optimiser les performances et la consommation du transmetteur radiofréquence, l’approche retenue dans cette thèse consiste à concevoir de façon globale différents blocs afin de partager les contraintes. Dans cette thèse, l’approche globale de la co-conception est organisée en deux sous-études. Celles-ci sont destinées à terme à être intégrées dans un même frontal RF entièrement configurable. La première étude aborde la problématique de la conception conjointe entre une antenne et un amplificateur de puissance (PA) qui sont traditionnellement conçus séparément. Nous avons tout d’abord déterminé les spécifications de l’antenne permettant de maximiser le transfert d’énergie entre ces deux blocs. Ensuite, nous avons conçu l’antenne en partageant les contraintes d’impédance à la fois dans la bande utile et aux harmoniques entre cette dernière et le PA afin de relâcher les spécifications sur le réseau d’adaptation d’impédance. Cette approche permet de maintenir la linéarité du PA à des niveaux de puissance supérieurs par rapport au cas où l’antenne est adaptée sur 50 Ω. La seconde étude s’intéresse à la conception conjointe d’antennes et de composants agiles. Nous avons réparti l’effort de miniaturisation et les pertes ohmiques associées entre la structure d’antenne et le composant agile (capacité commutable numériquement). Les développements présentés se sont appuyés sur des simulations électromagnétiques, des modélisations, des caractérisations système (linéarité et temps de commutation) et des mesures en rayonnement (efficacité) de prototypes d’antennes miniatures dans les bandes basses 4G.
Nos études ont abouti à la conception d’une antenne fente reconfigurable fonctionnant sur la bande instantanée maximale autorisée par la 4G. Pour une intégration sur smartphone, l’élément rayonnant n’occupe que 18 x 3 mm2 de surface soit λ_0/30×λ_0/180 à 560 MHz. La fréquence de résonance de l’antenne varie entre 560 MHz et 1.03 GHz et l’efficacité totale varie entre 50% et 4%. Un banc de mesure de la linéarité a été implémenté afin d’évaluer la linéarité des antennes agiles. La spécification de linéarité exigée par le standard est maintenue jusqu’à une puissance de 22 dBm. / The recent development of cellular communication standards has led to increasing RF front-end complexity due to the ever-growing number of required RF paths. Each RF path is dedicated to a group of frequency bands, which is optimal neither for cost nor for occupied area. Consequently, in order to optimize RF performance and energy consumption, the approach used in this thesis is to share the constraints between the PA and the antenna of the front-end: this is called co-design. The considered co-design approach is twofold, and in the near future both results should be considered simultaneously and integrated into one fully reconfigurable RF front-end design. The first study addresses the co-design of an antenna and its associated power amplifier (PA), which are traditionally designed separately. We first determine the antenna impedance specifications that maximize the trade-off between energy transfer and PA linearity. Then, we propose to remove the impedance matching network between antenna and PA, while demonstrating that a low-impedance antenna can maintain the RF performance. Contrary to the classical approach where the antenna is matched to 50 Ω, the proposed co-design shows the possibility of keeping the PA linear even at high power levels (> 20 dBm).
The second study focuses on the co-design of an antenna and tunable components. We share the miniaturization effort and the resistive losses between the antenna structure and the digitally tunable capacitor (DTC). The developments are based on electromagnetic simulations, modeling, system characterization (linearity and switching time), and radiation measurements (efficiency) of miniature reconfigurable antenna prototypes in the 4G low bands. These studies have led to the design of a frequency-reconfigurable antenna addressing the maximum instantaneous bandwidth authorized by 4G. The radiator occupies only 18 x 3 mm2 (λ0/30 x λ0/180 at 560 MHz) and is thus extremely suitable for integration into smartphones. The antenna resonance frequency is tuned between 560 MHz and 1030 MHz, and the total efficiency varies between 50% and 4%. For the first time, the impact on linearity of an SOI DTC implemented on the antenna radiating structure is measured with a dedicated test bench. The linearity specified by 4G is maintained up to 22 dBm of transmitted power.
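The quantity at stake in PA-antenna co-design is how much of the PA's available power actually reaches an antenna that is not conjugately matched to it. A minimal Python sketch using the standard power-wave reflection coefficient (the impedances are illustrative round numbers, not the thesis's measured values):

```python
def delivered_fraction(z_s, z_l):
    """Fraction of available power delivered from a source of impedance z_s
    to a load of impedance z_l (power-wave reflection coefficient)."""
    gamma = (z_l - z_s.conjugate()) / (z_l + z_s)
    return 1 - abs(gamma) ** 2

matched = delivered_fraction(50 + 0j, 50 + 0j)  # classic 50-ohm match
low_z = delivered_fraction(50 + 0j, 10 + 0j)    # low-impedance antenna, no matching network
```

Against a 50-ohm source, a 10-ohm antenna by itself forfeits roughly 44% of the available power; the co-design argument is that when the PA's optimal load is itself low, designing the antenna directly for that impedance removes both the mismatch and the lossy matching network.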
|
227 |
De la conception de produit à la conception de filière : Quelles méthodologies pour les étapes amont de l’innovation ? / From product design to supply chain design : Which methodologies for the upstream stages of innovation?
Marche, Brunelle 22 November 2018 (has links)
Ce travail contribue à la recherche scientifique à travers différents aspects. Tout d’abord, le couple produit/filière, traditionnellement pensé de façon causaliste, a été envisagé à travers le prisme du paradigme de la complexité. Cette contribution théorique souligne la nécessité de co-concevoir le couple produit/filière afin d’atténuer les efforts associés au lancement d’un produit innovant sur le marché et de s’assurer de son succès. Cependant, une étude empirique a souligné que peu d’entreprises tenaient compte de la filière lors de la conception de leur produit innovant. Dans ce contexte, une ingénierie de conception de filière a été élaborée en se basant sur les données de conception du produit afin de concevoir, spécifier, valider et mettre en œuvre la filière d’un nouveau produit. Cette ingénierie se décompose en trois étapes majeures : une étape de co-conception, une étape de positionnement et une étape d’évaluation. L’étape de co-conception vise à collecter et à traiter les données de conception du produit fournies par l’équipe projet. Un modèle instancié de la filière a été développé afin de collecter les données nécessaires à la conception de la filière qui sont ensuite traités pour faciliter la modélisation. L’étape de positionnement vise à souligner le rôle de l’entreprise innovante au sein des différents scénarios de filière obtenus. Basée sur le processus Harmony for System Engineering et son outil Rational Rhapsody®, cette étape détaille la filière d’un point de vue exigences, acteurs, processus et comportement (chacun représenté par différents diagrammes) afin d’élaborer différents scenarios. Enfin, la dernière étape vise à évaluer ces scénarios de filière afin d’établir une stratégie cohérente. En effet, de nombreux chercheurs ont montré qu’une filière agile était plus apte à supporter un produit innovant lors de son lancement afin de s’adapter plus rapidement aux changements (organisationnels, tactiques, marketing, environnementaux…). 
Par conséquent, une trame basée sur des phénomènes observables a été développée afin de faciliter la mise en œuvre de stratégie d’agilité, ce qui permet d’évaluer la typologie de la filière actuelle et de décider des actions à mettre en place pour obtenir une filière plus agile. Cette ingénierie a été testée auprès d’entreprises manufacturières / This thesis contributes to scientific research through different aspects. First of all, the product/supply chain couple, traditionally thought of in a causalistic way, was considered through the prism of the complexity paradigm. This theoretical contribution underlines the need to co-design the product/supply chain couple in order to mitigate the efforts associated with launching an innovative product on the market and to ensure its success. However, an empirical study has pointed out that few companies consider the supply chain when designing their innovative product. In this context, supply chain design engineering was developed based on product design data in order to design, specify, validate and implement the supply chain of a new product. This engineering is divided into three major stages: a co-design stage, a positioning stage and an evaluation stage. The co-design stage aims to collect and process the product design data provided by the project team. An instantiated supply chain model was developed to collect the data needed to design the supply chain which is then processed to facilitate modeling. The positioning stage aims to highlight the role of the innovative company within the various supply chain scenarios obtained. Based on the Harmony for System Engineering process and its Rational Rhapsody® tool, this step details the supply chain from a point of view of requirements, stakeholders, processes and behavior (each represented by different diagrams) in order to elaborate different scenarios. Finally, the last step aims to evaluate these supply chain scenarios in order to establish a coherent strategy. 
Indeed, many researchers have shown that an agile supply chain is better able to support an innovative product when it is launched, in order to adapt more quickly to changes (organizational, tactical, marketing, environmental…). Consequently, a framework based on observable phenomena has been developed to facilitate the implementation of an agility strategy, which makes it possible to evaluate the typology of the current supply chain and decide which actions to implement to obtain a more agile supply chain. This engineering has been tested with manufacturing companies.
|
228 |
Extração e reconhecimento de caracteres ópticos a partir do co-projeto de hardware e software sobre plataforma reconfigurável / Extraction and recognition of optical characters based on hardware and software co-design over reconfigurable platform
Dessbesell, Gustavo Fernando 07 March 2008 (links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / This work presents the implementation and analysis of a system for the extraction and recognition of optical characters, based on the hardware/software co-design methodology and built on a reconfigurable platform. Since vision is a very important sense, research in the field of artificial vision systems has been carried out since the very beginning of the digital era, in the early 1960s. Given the recent evolution of the configurable computing area, a natural trend emerges toward the research and development of heterogeneous artificial vision systems. Among the main benefits provided by such systems-on-chip are reductions in power dissipation, financial cost, and physical area. In this sense, taking a License Plate Recognition System (LPRS) as a case study, the focus of this work is the implementation of the character localization and recognition steps, with the partitioning of hardware and software resources based on cost-benefit heuristics. Initially, a software-only version of the system is built on an x86 platform. Besides allowing the evaluation of several character localization methods, this software-only version also serves as a parameter of comparison for the embedded version of the system. The character recognition step is performed by means of an Artificial Neural Network. Based on the results provided by the software-only evaluation, the embedded version is implemented on an FPGA platform. In this embedded version, the character localization step consists of a dedicated hardware block, while the character recognition step is a piece of software executed on a microprocessor physically implemented inside the FPGA. Taking into account a 10 times higher operating frequency for the processor of the x86 platform, as well as the fact that most of the embedded hardware block employs a clock frequency of at most 25 MHz, the most noticeable result is the 2.25 times faster processing achieved by the embedded version. Regarding plate recognition capability, both systems have the same performance, successfully recognizing plates in 51.62% of the cases (in the best case). Beyond LPRSs, the system developed here could also be employed to build other applications that require optical character recognition, such as automatic traffic-sign recognition and serial-number reading of items on a production line. / Este trabalho apresenta a implementação e análise de um sistema voltado à extração e reconhecimento de caracteres ópticos a partir do co-projeto de hardware e software sobre uma plataforma reconfigurável. Por conta da importância atribuída ao sentido da visão, sistemas artificiais capazes de emular as tarefas envolvidas neste processo biológico têm sido alvo de pesquisas desde o surgimento dos primeiros computadores digitais, na década de 60. Tendo em vista a recente evolução experimentada na área da computação configurável, surge uma tendência natural à pesquisa e desenvolvimento de sistemas heterogêneos (compostos por uma combinação de blocos de hardware e software) de visão artificial baseados em tal plataforma. Dentre os principais benefícios proporcionados por sistemas em chip podem ser citados a redução no consumo de potência, custos financeiros e área física. Neste sentido, tomando
como estudo de caso um Sistema de Reconhecimento de Placas de Licenciamento Veicular (SRPLV), o foco do trabalho está situado na implementação das etapas de localização e
reconhecimento de caracteres, sendo o particionamento dos blocos de hardware e software baseado em heurísticas de custo-benefício. Inicialmente é realizada a implementação de uma versão totalmente em software do sistema aqui proposto, sobre plataforma x86, no intuito de avaliar os diversos métodos passíveis de implementação, bem como o de possibilitar um parâmetro de comparação com a versão embarcada do sistema. Os métodos avaliados dizem
respeito à etapa de localização de caracteres, haja vista a definição a priori do emprego de Redes Neurais Artificiais no reconhecimento dos mesmos. A partir dos resultados obtidos por esta avaliação é realizada a implementação da versão embarcada do sistema, tendo como plataforma um FPGA. Nesta versão, a etapa de localização de caracteres é implementada como um bloco dedicado de hardware, enquanto a de reconhecimento constitui-se num
software executado sobre um microprocessador fisicamente embutido no interior do FPGA. Considerando uma freqüência de operação 10 vezes superior para o processador da
plataforma x86, bem como o fato da maior parte do hardware embarcado utilizar um clock menor ou igual a 25 MHz, o principal resultado consiste no ganho de 2,25 vezes no tempo de execução obtido na segunda versão do sistema. No tocante à capacidade de reconhecimento de placas, os sistemas são equivalentes, sendo capazes de reconhecê-las corretamente em 51,62% das vezes, no melhor caso. Além de SRPLVs, o sistema aqui desenvolvido pode ser empregado na criação de outras aplicações que envolvam a problemática do reconhecimento de caracteres óticos, como reconhecimento automático de placas de trânsito e do número de série de itens numa linha de produção.
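The two stages of the embedded flow, character localization in a dedicated hardware block and ANN-based recognition in software, can be caricatured in a few lines of Python. Thresholding stands in for the localization block, and a linear template scorer stands in for the trained neural network; the 3x3 glyphs and pixel values are invented toy data, not the system's 

```python
GLYPHS = {
    "I": (0, 1, 0,
          0, 1, 0,
          0, 1, 0),
    "O": (1, 1, 1,
          1, 0, 1,
          1, 1, 1),
}

def binarize(gray, threshold=128):
    """Stage 1 (localization stand-in): threshold a grayscale tile so that
    dark pixels (the character strokes) become 1."""
    return tuple(1 if p < threshold else 0 for p in gray)

def classify(bitmap):
    """Stage 2 (ANN stand-in): a single-layer linear scorer, dot product
    against each stored template minus a per-template bias."""
    scores = {c: sum(a * b for a, b in zip(t, bitmap)) - sum(t) // 2
              for c, t in GLYPHS.items()}
    return max(scores, key=scores.get)

tile = (200, 30, 220,
        210, 25, 230,
        205, 40, 215)   # dark centre column on a light background
char = classify(binarize(tile))
```

In the real system the first stage is a pipelined hardware block streaming candidate regions, and the second is a trained multi-layer network running on the soft processor, but the data flow between them has exactly this shape.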
|
229 |
Uma metodologia para estimativa de área baseada em redes de Petri temporizadas para ambientes de sistemas de hardware/software co-design
Portela Machado, Albano January 2004
Previous issue date: 2004 / Most modern electronic systems consist of dedicated hardware and programmable components (called software components). Over the last few years, the number of methodologies that simultaneously apply techniques from different areas to develop mixed hardware/software systems has grown considerably.

Concurrent design of mixed hardware/software systems has proven advantageous when the system is considered as a whole rather than as independent entities. Nowadays, the electronics market demands high-performance, low-cost systems. These requirements are essential for market competitiveness. Moreover, a short time-to-market is an important factor: delay in launching a product causes serious reductions in profit, since it is easier to sell a product when there is little or no competition. This means that facilitating the reuse of previous designs, fast design exploration, qualitative analysis/verification in early design phases, prototyping, and reducing the time required for testing all shorten the overall time from specification to final product.

When designing such mixed hardware/software systems, analysing design alternatives and deciding where to implement each part of the system, that is, in hardware or in software, are very important tasks. Estimating quality metrics allows design space exploration and can guide the implementation decision for parts of the system. Such metrics are computed at the system level, that is, without an actual implementation. Consequently, such estimates also accelerate system design and allow the analysis of design constraints, providing feedback for design decisions.

Petri nets are formal specification techniques that allow both a graphical and a mathematical representation. They offer powerful methods that allow designers to perform qualitative and quantitative analyses. Timed Petri nets are extensions of Petri nets in which timing information is expressed as a duration (deterministic-time nets, three-phase firing policy) associated with transitions.

For a high-level behavioural description, the hardware design is divided into classes of functional blocks: datapath and controllers. The datapath consists of three types of RT components: storage units (registers and latches), functional units (ALUs and comparators), and interconnection units (multiplexers and buses). Storage units are required to store data values such as constants, variables, and arrays in the behaviour. Functional units are needed to implement the operations in the behaviour. After all variables and operations in the behaviour have been mapped to storage and functional units, respectively, the number of interconnection units, such as buses and multiplexers, required to connect the storage and functional units can be estimated.

This work proposes an approach for estimating hardware area from the number of storage, functional, and interconnection units, taking time constraints and data dependencies into account, and extends previous work with the goal of improving the precision of area estimation methods. That is, the proposed method considers a data-flow net that captures data dependencies and computes the datapath area from the number and types of its components, considering the temporal dependency relation.
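The bookkeeping described above, datapath area as a sum over storage, functional, and interconnection units, can be sketched in a few lines. The per-unit areas and the example allocation below are invented round numbers for illustration, not the thesis's estimation model or technology data.

```python
# Hypothetical area cost per RT component type (invented units)
UNIT_AREA = {"register": 10, "alu": 120, "comparator": 40, "mux": 8, "bus": 25}

def datapath_area(allocation):
    """allocation: dict mapping RT component type -> instance count.
    Total area is the cost-weighted sum over all allocated components."""
    return sum(UNIT_AREA[unit] * count for unit, count in allocation.items())

# After binding a behaviour: 4 registers and 1 ALU + 1 comparator for the
# operations, plus the interconnect (3 muxes, 1 bus) needed to wire them.
area = datapath_area({"register": 4, "alu": 1, "comparator": 1, "mux": 3, "bus": 1})
```

The thesis's contribution sits upstream of this sum: the timed Petri net, with its timing constraints and captured data dependencies, determines how many units of each type the schedule actually requires before the weighted sum is taken.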
|
230 |
Facilitating consumer involvement in design for additive manufacturing/3D printing products
Ariadi, Yudhi January 2016 (links)
This research investigates the potential of the general public to actively design their own products and either manufacture them themselves or send the files to manufacturers to be produced. This approach anticipates the rapid growth of fabrication technology, particularly in Additive Manufacturing (AM)/3D printing. Recent developments in the field of AM/3D printing have led to renewed interest in how to manufacture customised products in a way that allows consumers to create bespoke products more easily. These technologies can enhance the understanding of non-technology-compliant consumers and bring the manufacturing process closer to them. Consequently, to make AM/3D printing more accessible and easier to employ by the general public, the design aspects need to become as simple to operate as the AM/3D printing technologies themselves. These technologies will then attract consumers who want to produce Do-It-Yourself (DIY) products. This study proposes a Computer-aided Consumer Design (CaCODE) system, user-friendly design software that simplifies the Computer-Aided Design (CAD) stages required to produce the 3D model data needed by the AM/3D printing process. This software is an easy-to-operate design system in which consumers interact easily with parameters of designed forms instead of operating conventional CAD. In addition, this research investigates the current capabilities of AM/3D printing technologies in producing consumer products. To uncover the potential of consumer-led design and manufacturing, CaCODE was developed for consumer evaluation, which was needed to measure the appropriateness of the tool. In addition, a range of consumer product samples, in the form of pens, was built using a range of different materials, AM/3D printing technologies, and additional post-processing methods. This was undertaken to evaluate consumer acceptance of AM/3D-printed products based on their perceived quality.
Forty non-designer participants, 50% male and 50% female, aged 5 to 64, with six to seven participants in each of six ten-year age groups, were recruited. The results indicated that 75% of the participants would like to design their own product using consumer design software. The study compared how consumers interacted with the 3D model to manipulate its shape using two methods: indirect manipulation (sliders) and direct manipulation (drag points). The majority of the participants preferred direct manipulation because they felt it was easy to use and enabled them to enjoy the design process. The study concluded that direct manipulation was more acceptable because it enabled users to touch the digital product and manipulate it, making it more intuitive and natural. The research finds that there is potential for consumers to design a product using user-friendly design tools. Using these findings, a consumer design tool concept was created for future development. The study indicated that 53% of participants would like to use products made by AM/3D printing, although they still wanted the surface finish of injection-moulded parts. However, AM/3D printing has advantages that can fulfil participants' preferences, such as multi-material parts from the material jetting method, and additional post-processing was shown to increase participants' acceptance.
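The two interaction styles compared in the study can be sketched against a hypothetical parametric pen profile: indirect manipulation edits a named parameter (a slider value), while direct manipulation drags a profile point and the system recovers the parameter from the dragged position. The pen dimensions, the parameterisation, and the trivial inverse mapping are all invented for illustration, not CaCODE's actual model.

```python
def pen_profile(length=140.0, barrel_r=5.0, tip_r=1.0, n=5):
    """Radius of the pen at n evenly spaced stations, tapering linearly
    from the barrel radius down to the tip radius."""
    return [barrel_r + (tip_r - barrel_r) * i / (n - 1) for i in range(n)]

# Indirect manipulation: the user moves a "barrel radius" slider to 6 mm.
indirect = pen_profile(barrel_r=6.0)

# Direct manipulation: the user drags the first profile point to radius 6 mm;
# here that point IS the barrel radius, so the inverse mapping is trivial.
dragged_point_r = 6.0
direct = pen_profile(barrel_r=dragged_point_r)
```

Both routes produce the same geometry; the study's finding is about which route feels natural. Direct manipulation hides the parameter behind the touchable shape, which is why participants experienced it as more intuitive.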
|