241 |
Co-Projeto de hardware/software para correlação de imagens / Hardware/software co-design for image cross-correlation. Maurício Acconcia Dias, 26 July 2011 (has links)
Este trabalho de pesquisa tem por objetivo o desenvolvimento de um coprojeto de hardware/software para o algoritmo de correlação de imagens visando atingir um ganho de desempenho com relação à implementação totalmente em software. O trabalho apresenta um comparativo entre um conjunto bastante amplo e significativo de configurações diferentes do soft-processor Nios II implementadas em FPGA, inclusive com a adição de novas instruções dedicadas. O desenvolvimento do co-projeto foi feito com base em uma modificação do método baseado em profiling adicionando-se um ciclo de desenvolvimento e de otimização de software. A comparação foi feita com relação ao tempo de execução para medir o speedup alcançado durante o desenvolvimento do co-projeto que atingiu um ganho de desempenho significativo. Também analisou-se a influência de estruturas de hardware básicas e dedicadas no tempo de execução final do algoritmo. A análise dos resultados sugere que o método se mostrou eficiente considerando o speedup atingido, porém o tempo total de execução ainda ficou acima do esperado, considerando-se a necessidade de execução e processamento de imagens em tempo real dos sistemas de navegação robótica. No entanto, destaca-se que as limitações de processamento em tempo real estão também ligadas as restrições de desempenho impostas pelo hardware adotado no projeto, baseado em uma FPGA de baixo custo e capacidade média / This work presents an FPGA-based hardware/software co-design for the image normalized cross-correlation algorithm. The main goal is to achieve a significant speedup with respect to the execution time of the all-software implementation. The proposed co-design method is a modified profiling-based method with an added software development and optimization cycle. Execution times were compared, resulting in a significant speedup. To achieve this speedup, a comparison between 21 different configurations of the Nios II soft-processor was carried out. The influence of hardware on execution time was also evaluated, in order to determine how basic and dedicated hardware structures affect the algorithm's final execution time. The analysis of the results suggests that the method is efficient considering the achieved speedup, but the final execution time still remains higher than expected, considering the need for real-time image processing in robotic navigation systems. However, these real-time limitations are a consequence of the hardware adopted in this work, which is based on a low-cost, medium-capacity FPGA.
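The normalized cross-correlation kernel that this co-design accelerates is simple to state but computation-heavy, which is what makes it a natural candidate for dedicated Nios II instructions. Below is a minimal software sketch of the metric for orientation; the function name, the array layout and the absence of any hardware hooks are illustrative assumptions, not material from the thesis.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Normalized cross-correlation between an image patch and a template of the
// same size. Values close to 1.0 indicate a strong match. Names and the
// flat storage layout are illustrative assumptions.
double normalizedCrossCorrelation(const std::vector<double>& patch,
                                  const std::vector<double>& templ) {
    const std::size_t n = patch.size();
    if (n == 0 || templ.size() != n) return 0.0;

    double meanP = 0.0, meanT = 0.0;
    for (std::size_t i = 0; i < n; ++i) { meanP += patch[i]; meanT += templ[i]; }
    meanP /= n; meanT /= n;

    double num = 0.0, varP = 0.0, varT = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const double dp = patch[i] - meanP;
        const double dt = templ[i] - meanT;
        num  += dp * dt;   // cross term
        varP += dp * dp;   // patch energy
        varT += dt * dt;   // template energy
    }
    const double denom = std::sqrt(varP * varT);
    return denom > 0.0 ? num / denom : 0.0;
}
```

In a co-design of this kind, the accumulation loops are the part typically moved into a custom instruction or hardware block, since they dominate the execution time.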
|
242 |
Inovação social: Um desafio para o design: O papel do design estratégico no processo de inovação social. Eichenberg, Carolina Hermes, 27 March 2013 (has links)
O sistema de organização social contemporâneo requer soluções inovadoras e sustentáveis que permitam à sociedade continuar se desenvolvendo da melhor maneira possível, por um longo período de tempo e em equilíbrio com o ecossistema. Nas últimas décadas, foram adotadas diversas medidas com o intuito de reduzir o impacto das ações do homem sobre a biosfera. Todavia, essas iniciativas possuem um caráter pontual, que opera segundo critérios reducionistas, de acordo com o entendimento do modelo mental vigente. O tema da inovação social envolve a reflexão sobre como transformar as relações sociais, de modo que elas modifiquem a compreensão dos indivíduos sobre como lidar com as coisas. Nessa perspectiva, a presente pesquisa propõe-se a uma reflexão sobre o papel do design estratégico nesse processo. Por tratar-se de uma pesquisa relacionada à área do design, a análise incide em maior profundidade no conceito de descontinuidade sistêmica, proposto por Ezio Manzini (2008). O autor propõe que se analisem iniciativas de inovação radical praticadas em contextos locais. Para tanto, estuda-se o caso da Rede Ideia. Essa rede é formada por empreendimentos solidários localizados na cidade de Porto Alegre-RS e tem por objetivo promover a transformação social através da compreensão de uma economia solidária. A análise incide sobre dois aspectos interpretativos: o primeiro, de observação do caso; e o segundo, de concepts de projetos, resultados de um workshop realizado com alunos de especialização em design estratégico. Um dos resultados centrais deste estudo evidencia que o papel do design estratégico nesse cenário é servir de referência para a compreensão do sistema aberto e de experimento para a concepção de soluções de caráter complexo. / The contemporary system of social organization requires innovative and sustainable solutions that enable society to continue to develop in the best way it can, for a long time, and in equilibrium with the ecosystem. Over the last decades, several actions have been taken to reduce the impact of human action on the biosphere. However, such initiatives have been sporadic, operating according to reductionist criteria and the current mindset. The theme of social innovation involves a reflection on how to change social relationships so that they modify the individuals’ understanding of how to deal with things. From this perspective, this research aims at reflecting on the role of strategic design in this process. As this research is related to the design area, the analysis has focused on the concept of systemic discontinuity as proposed by Ezio Manzini (2008). The author has proposed the analysis of radically innovative initiatives carried out in local contexts. In order to do that, the case of Rede Ideia has been studied. This network consists of solidarity enterprises situated in Porto Alegre-RS. Its objective is to foster social change through the understanding of a solidarity economy. The analysis has been concentrated on two interpretative aspects: the first one has involved case observation; the second has addressed project concepts resulting from a workshop held with students of a specialization course in strategic design. One of the central results of this study is that the role of strategic design in this scenario is to function as both a reference for the understanding of the open system, and an experiment for the conception of complex solutions.
|
243 |
What improves the user-designer communication in co-design? Zeb, Irfan; Fahad, Shah, January 2013 (has links)
Today’s business and IT systems rely heavily on effective communication. Communication built on a poor foundation can create serious problems between the system designer and the user, and these problems have a severe impact on the efficiency of an information system, most importantly when a new information system is built through a co-design process. IT plays a huge role in any business organization these days. Although this is sometimes not given much emphasis, the use of IT in business cannot be taken for granted, because IT is now viewed as part of the business organization. Businesses need to continuously invest in their IT systems, which helps not only the individual organization but the industry as a whole. Effective communication is thus extremely important for today’s business and IT systems, and this thesis aims to reflect that importance. The purpose of this research is therefore to analyze the communication problems between designer and user during co-design, mainly in the field of business and information technology, and to build an understanding of how better communication between the different parties in system development can be achieved through co-design. Research can be classified, on the basis of the structure of the problem to be solved, into exploratory, descriptive and causal research. This research can be regarded as exploratory, because large amounts of data can be gathered from past research and literature; exploratory research explores the parameters of a problem in order to identify what should be measured and how best to undertake a study. In this research, qualitative data were gathered through detailed interviews and a literature review, which supports a better understanding through words, and the data were consolidated through triangulation. The results are presented using a detailed analysis of the data gathered from the interviews together with the analysis of the theoretical part. Meeting the changing needs of the business world is a very challenging task, and so is designing effective information technology for this purpose. The co-design of business and IT systems has many benefits for organizations. Information technology is basically used to support the business and its functions; it is therefore extremely important that the information technology is aligned with the business processes, considered as a part of the business, and not designed independently. Effective communication is important for managers in companies in order to perform the fundamental management functions, i.e., planning, leading, organizing, and controlling. Communication enables managers to carry out their jobs and responsibilities, and it provides a foundation for planning: all vital information has to be communicated to the managers, who in turn have to communicate the plans in order to put them into effect. Organizing likewise requires efficient communication with others regarding their job tasks. Hence, we can say that “effective communication is a basic element of successful business”; in other words, communication works as the lifeblood of an organization.
A strong literature review, together with sound research methodology and analysis, will certainly enhance the quality of this thesis and give readers a good understanding of the topic. / Program: Masterutbildning i Informatik
|
244 |
Hardware and software co-design toward flexible terabits per second traffic processing / Co-conception matérielle et logicielle pour du traitement de trafic flexible au-delà du terabit par seconde. Cornevaux-Juignet, Franck, 04 July 2018 (has links)
La fiabilité et la sécurité des réseaux de communication nécessitent des composants efficaces pour analyser finement le trafic de données. La diversification des services ainsi que l'augmentation des débits obligent les systèmes d'analyse à être plus performants pour gérer des débits de plusieurs centaines, voire milliers de Gigabits par seconde. Les solutions logicielles communément utilisées offrent une flexibilité et une accessibilité bienvenues pour les opérateurs du réseau mais ne suffisent plus pour répondre à ces fortes contraintes dans de nombreux cas critiques. Cette thèse étudie des solutions architecturales reposant sur des puces programmables de type Field-Programmable Gate Array (FPGA) qui allient puissance de calcul et flexibilité de traitement. Des cartes équipées de telles puces sont intégrées dans un flot de traitement commun logiciel/matériel afin de compenser les lacunes de chaque élément. Les composants du réseau développés avec cette approche innovante garantissent un traitement exhaustif des paquets circulant sur les liens physiques tout en conservant la flexibilité des solutions logicielles conventionnelles, ce qui est unique dans l'état de l'art. Cette approche est validée par la conception et l'implémentation d'une architecture de traitement de paquets flexible sur FPGA. Celle-ci peut traiter n'importe quel type de paquet au coût d'un faible surplus de consommation de ressources. Elle est de plus complètement paramétrable à partir du logiciel. La solution proposée permet ainsi un usage transparent de la puissance d'un accélérateur matériel par un ingénieur réseau sans nécessiter de compétence préalable en conception de circuits numériques. / The reliability and the security of communication networks require efficient components to finely analyze data traffic. Service diversification and throughput increase force network operators to constantly improve their analysis systems in order to handle throughputs of hundreds, even thousands, of Gigabits per second. Commonly used solutions are software-oriented and offer a flexibility and an accessibility that network operators welcome, but they can no longer meet these strong constraints in many critical cases. This thesis studies architectural solutions based on programmable chips like Field-Programmable Gate Arrays (FPGAs), which combine computation power and processing flexibility. Boards equipped with such chips are integrated into a common software/hardware processing flow in order to compensate for the shortcomings of each element. Network components developed with this innovative approach ensure an exhaustive processing of the packets transmitted on physical links while keeping the flexibility of usual software solutions, which had not been achieved in the previous state of the art. This approach is validated by the design and the implementation of a flexible packet processing architecture on FPGA. It is able to process any packet type at the cost of a slight over-consumption of resources. It is moreover fully configurable from software. With the proposed solution, network engineers can transparently use the processing power of a hardware accelerator without prior knowledge of digital circuit design.
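To make the idea of a packet processor that is fully configurable from software more concrete, the sketch below shows how a host program might write a matching rule into an FPGA accelerator through a memory-mapped register file. The register layout, field names and actions are purely hypothetical and are not taken from the architecture described in this thesis.

```cpp
#include <cstdint>

// Hypothetical register layout of a packet-matching unit exposed by the FPGA
// over a memory-mapped bus. This does not describe the thesis architecture;
// it only illustrates software-driven configuration of exhaustive packet
// processing.
struct MatchRuleRegs {
    volatile uint32_t byte_offset;  // offset of the inspected field in the packet
    volatile uint32_t mask;         // bits of the field to compare
    volatile uint32_t value;        // expected value after masking
    volatile uint32_t action;       // 0 = drop, 1 = forward, 2 = count only
    volatile uint32_t enable;       // written last to activate the rule
};

// Program one rule; 'regs' would typically come from mmap() on a UIO device
// or /dev/mem in a Linux host application.
void programRule(MatchRuleRegs* regs, uint32_t offset,
                 uint32_t mask, uint32_t value, uint32_t action) {
    regs->enable      = 0;        // disable while reconfiguring
    regs->byte_offset = offset;
    regs->mask        = mask;
    regs->value       = value;
    regs->action      = action;
    regs->enable      = 1;        // re-arm the rule
}
```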
|
245 |
Commande robuste structurée : application au co-design mécanique / contrôle d’attitude d’un satellite flexible / Integrated Control/Structure Design of a Flexible Satellite Using Structured Robust Control Synthesis. Perez Gonzalez, Jose Alvaro, 14 November 2016 (has links)
Dans cette étude de thèse, le problème du co-design mécanique/contrôle d’attitude avec méthodes de la commande robuste structurée est considéré. Le problème est abordé en développant une technique pour la modélisation de systèmes flexibles multi-corps, appelée modèle Two-Input Two-Output Port (TITOP). En utilisant des modèles d’éléments finis comme données d’entrée, ce cadre général permet de déterminer, sous certaines hypothèses, un modèle linéaire d’un système de corps flexibles enchaînés. De plus, cette modélisation TITOP permet de considérer des variations paramétriques dans le système, une caractéristique nécessaire pour réaliser des études de co-design contrôle/structure. La technique de modélisation TITOP est aussi étendue pour la prise en compte des actionneurs piézoélectriques et des joints pivots qui peuvent apparaître dans les sous-structures. Différentes stratégies de contrôle des modes rigides et flexibles sont étudiées avec les modèles obtenus afin de trouver la meilleure architecture de contrôle pour la réjection des perturbations basse fréquence et l’amortissement des vibrations. En exploitant les propriétés d’outils de synthèse H∞ structurée, la mise en oeuvre d’un schéma de co-design est expliquée, en considérant les spécifications du système (bande passante du système et amortissement des modes) sous forme de contraintes H∞. L’étude d’un tel co-design contrôle d’attitude/mécanique d’un satellite flexible est illustrée en utilisant toutes les techniques développées, optimisant simultanément une loi de contrôle et certains paramètres structuraux. / In this PhD thesis, the integrated control/structure design of a large flexible spacecraft is addressed using structured H∞ synthesis. The problem is tackled by developing a modeling technique for flexible multibody systems, called the Two-Input Two-Output Port (TITOP) model. This general framework allows the assembly of a flexible multibody system in a chain-like or star-like structure, using finite element models as input data. Additionally, the TITOP modeling technique allows the consideration of parametric variations inside the system, a necessary characteristic in order to perform integrated control/structure design. In contrast to another widely used method, the assumed modes method, the TITOP modeling technique is robust against changes in the boundary conditions which link the flexible bodies. Furthermore, the TITOP modeling technique can be used as an accurate approximation even when kinematic nonlinearities are large. The TITOP modeling technique is extended to the modeling of piezoelectric actuators and sensors for the control of flexible structures and revolute joints. Different control strategies, for controlling both rigid body and flexible body motion, are tested with the developed models in order to obtain the best controller architecture in terms of perturbation rejection and vibration damping. The implementation of the integrated control/structure design in the structured H∞ scheme is developed considering the different system specifications, such as the system's bandwidth or the damping of the modes, in the form of H∞ weighting functions. The integrated attitude control/structure design of a flexible satellite is performed using all the developed techniques, and the simultaneous optimization of the control law and several structural parameters is achieved.
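Stated abstractly, the co-design problem addressed with structured H∞ synthesis can be written as a single optimization over both the controller gains and the tunable structural parameters. The formulation below is a generic sketch of that idea (it assumes the amsmath package), not the exact program used in the thesis.

```latex
% Generic structured H-infinity co-design problem (illustrative form):
% K(s,\theta_c) is the fixed-structure controller, \theta_s gathers the
% structural parameters (e.g. stiffnesses, actuator placement), and the T_i
% are the weighted closed-loop transfer functions encoding bandwidth and
% damping specifications.
\begin{equation*}
\min_{\theta_c,\,\theta_s}\;\max_{i}\;
\bigl\lVert W_i(s)\, T_i\bigl(P(\theta_s),\,K(s,\theta_c)\bigr)\bigr\rVert_\infty
\quad\text{s.t.}\quad
\theta_s \in \Theta_s,\qquad K \text{ internally stabilizing.}
\end{equation*}
```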
|
247 |
MARTE based model driven design methodology for targeting dynamically reconfigurable FPGA based SoCs. Quadri, Imran Rafiq, 20 April 2010 (has links) (PDF)
The work presented in this thesis is carried out in the context of Systems-on-Chip (SoC) and the design of real-time embedded systems, in particular those dedicated to the dynamic reconfiguration associated with these complex systems. We present a new design flow based on Model-Driven Engineering (MDE) and the MARTE profile for SoC co-design and for the specification and implementation of these reconfigurable systems-on-chip, in order to raise the abstraction levels and reduce system complexity. The first contribution of this thesis is the identification of the parts of dynamically reconfigurable systems-on-chip that can be modeled at a high abstraction level. The thesis adopts an application-driven approach and targets high-level application models to be treated as the dynamic regions of reconfigurable SoCs. We also propose generic control models for the management of these regions during real-time execution. Although this semantics can be introduced at different abstraction levels of a SoC co-design environment, we particularly insist on its integration at the deployment level, which links intellectual property with the elements modeled at the high design level. Moreover, these concepts have been integrated into the MARTE meta-model and the corresponding profile in order to provide an adequate extension for expressing reconfiguration characteristics in high-level modeling. The second contribution is the proposal of an intermediate meta-model that isolates the concepts present at the Register Transfer Level (RTL). This meta-model integrates the concepts responsible for the hardware execution of the modeled applications, while enriching the control semantics, leading to the creation of a dynamically reconfigurable hardware accelerator with several available implementations. Finally, using MDE model transformations and the corresponding principles, we are able to generate HDL code equivalent to the different implementations of the reconfigurable accelerator, as well as C/C++ source code for the reconfiguration controller, which is ultimately responsible for switching between the different implementations. Our design flow was successfully verified in a case study related to an anti-collision radar detection system. A key component of this system was modeled using the extended MARTE specifications, and the generated code was used in the design and implementation of a SoC on a dynamically reconfigurable FPGA.
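The generated C/C++ reconfiguration controller mentioned above is essentially a small piece of control logic that selects among the available implementations of the reconfigurable accelerator. The sketch below only illustrates that idea; the mode names and the partial-bitstream loading function are placeholders and not the code produced by the MARTE-based flow.

```cpp
#include <cstdint>
#include <iostream>
#include <string>

// Available implementations (configurations) of the dynamically
// reconfigurable hardware accelerator. The modes are hypothetical.
enum class AcceleratorMode { LowPower, HighThroughput, HighPrecision };

// Placeholder for the platform-specific partial reconfiguration mechanism
// (e.g. feeding a partial bitstream to the configuration port of the FPGA).
bool loadPartialBitstream(const std::string& bitstream) {
    std::cout << "loading " << bitstream << "\n";  // stub for illustration
    return true;
}

class ReconfigurationController {
public:
    bool switchTo(AcceleratorMode mode) {
        if (mode == current_) return true;          // already active
        const std::string bit = bitstreamFor(mode); // pick the implementation
        if (!loadPartialBitstream(bit)) return false;
        current_ = mode;
        return true;
    }
private:
    static std::string bitstreamFor(AcceleratorMode m) {
        switch (m) {
            case AcceleratorMode::LowPower:       return "accel_low_power.bit";
            case AcceleratorMode::HighThroughput: return "accel_high_tp.bit";
            default:                              return "accel_high_prec.bit";
        }
    }
    AcceleratorMode current_ = AcceleratorMode::LowPower;
};
```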
|
248 |
High Performance FPGA-Based Computation and Simulation for MIMO Measurement and Control Systems. Palm, Johan, January 2009 (has links)
The Stressometer system is a measurement and control system used in cold rolling to improve the flatness of a metal strip. In order to achieve this goal the system employs a multiple input multiple output (MIMO) control system that has a considerable number of sensors and actuators. As a consequence, the computational load on the Stressometer control system becomes very high if too advanced functions are used. At the same time, advances in rolling mill mechanical design make it necessary to implement more complex functions in order for the Stressometer system to stay competitive. Most industrial players in this market consider improved computational power, for measurement, control and modeling applications, to be a key competitive factor. Accordingly there is a need to improve the computational power of the Stressometer system. Several different approaches towards this objective have been identified, e.g. exploiting hardware parallelism in modern general purpose and graphics processors. Another approach is to implement different applications in FPGA-based hardware, either tailored to a specific problem or as a part of hardware/software co-design. Through the use of a hardware/software co-design approach the efficiency of the Stressometer system can be increased, lowering the overall demand for processing power since the available resources can be exploited more fully. Hardware accelerated platforms can be used to increase the computational power of the Stressometer control system without the need for major changes in the existing hardware. Thus hardware upgrades can be as simple as connecting a cable to an accelerator platform, while hardware/software co-design is used to find a suitable hardware/software partition, moving applications between software and hardware. In order to determine whether this hardware/software co-design approach is realistic or not, the feasibility of implementing simulator, computational and control applications in FPGA-based hardware needs to be determined. This is accomplished by selecting two specific applications for a closer study, determining the feasibility of implementing a Stressometer measuring roll simulator and a parallel Cholesky algorithm in FPGA-based hardware. Based on these studies, this work has determined that the FPGA device technology is perfectly suitable for implementing both simulator and computational applications. The Stressometer measuring roll simulator was able to approximate the force and pulse signals of the Stressometer measuring roll at a relatively modest resource consumption, consuming only 1747 slices and eight DSP slices. Meanwhile, the parallel FPGA-based Cholesky component is able to provide performance in the range of GFLOP/s, exceeding the performance of the personal computer used for comparison in several simulations, although at a very high resource consumption. The result of this thesis, based on the two feasibility studies, indicates that it is possible to increase the processing power of the Stressometer control system using the FPGA device technology.
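The parallel Cholesky component evaluated here implements the standard factorization A = L·Lᵀ of a symmetric positive-definite matrix. A plain sequential reference version is sketched below for orientation; the FPGA implementation parallelizes the inner products, and the function name and storage layout are illustrative.

```cpp
#include <cmath>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Reference (sequential) Cholesky factorization: A = L * L^T for a symmetric
// positive-definite n x n matrix stored row-major. Returns the lower
// triangular factor L. This is only a software baseline; an FPGA version
// would parallelize the dot products in the inner loop.
std::vector<double> cholesky(const std::vector<double>& A, std::size_t n) {
    std::vector<double> L(n * n, 0.0);
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = 0; j <= i; ++j) {
            double sum = A[i * n + j];
            for (std::size_t k = 0; k < j; ++k)
                sum -= L[i * n + k] * L[j * n + k];   // dot product of two rows
            if (i == j) {
                if (sum <= 0.0)
                    throw std::runtime_error("matrix is not positive definite");
                L[i * n + i] = std::sqrt(sum);
            } else {
                L[i * n + j] = sum / L[j * n + j];
            }
        }
    }
    return L;
}
```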
|
249 |
Entwurf, Methoden und Werkzeuge für komplexe Bildverarbeitungssysteme auf Rekonfigurierbaren System-on-Chip-Architekturen / Design, methodologies and tools for complex image processing systems on reconfigurable system-on-chip-architectures. Mühlbauer, Felix, January 2011
Bildverarbeitungsanwendungen stellen besondere Ansprüche an das ausführende Rechensystem.
Einerseits ist eine hohe Rechenleistung erforderlich.
Andererseits ist eine hohe Flexibilität von Vorteil, da die Entwicklung tendentiell ein experimenteller und interaktiver Prozess ist.
Für neue Anwendungen tendieren Entwickler dazu, eine Rechenarchitektur zu wählen, die sie gut kennen, anstatt eine Architektur einzusetzen, die am besten zur Anwendung passt.
Bildverarbeitungsalgorithmen sind inhärent parallel, doch herkömmliche bildverarbeitende eingebettete Systeme basieren meist auf sequentiell arbeitenden Prozessoren.
Im Gegensatz zu dieser "Unstimmigkeit" können hocheffiziente Systeme aus einer gezielten Synergie aus Software- und Hardwarekomponenten aufgebaut werden.
Die Konstruktion solcher Systeme ist jedoch komplex und viele Lösungen, wie zum Beispiel grobgranulare Architekturen oder anwendungsspezifische Programmiersprachen, sind oft zu akademisch für einen Einsatz in der Wirtschaft.
Die vorliegende Arbeit soll ein Beitrag dazu leisten, die Komplexität von Hardware-Software-Systemen zu reduzieren und damit die Entwicklung hochperformanter on-Chip-Systeme im Bereich Bildverarbeitung zu vereinfachen und wirtschaftlicher zu machen.
Dabei wurde Wert darauf gelegt, den Aufwand für Einarbeitung, Entwicklung als auch Erweiterungen gering zu halten.
Es wurde ein Entwurfsfluss konzipiert und umgesetzt, welcher es dem Softwareentwickler ermöglicht, Berechnungen durch Hardwarekomponenten zu beschleunigen und das zu Grunde liegende eingebettete System komplett zu prototypisieren.
Hierbei werden komplexe Bildverarbeitungsanwendungen betrachtet, welche ein Betriebssystem erfordern, wie zum Beispiel verteilte Kamerasensornetzwerke.
Die eingesetzte Software basiert auf Linux und der Bildverarbeitungsbibliothek OpenCV.
Die Verteilung der Berechnungen auf Software- und Hardwarekomponenten und die daraus resultierende Ablaufplanung und Generierung der Rechenarchitektur erfolgt automatisch.
Mittels einer auf der Antwortmengenprogrammierung basierten Entwurfsraumexploration ergeben sich Vorteile bei der Modellierung und Erweiterung.
Die Systemsoftware wird mit OpenEmbedded/Bitbake synthetisiert und die erzeugten on-Chip-Architekturen auf FPGAs realisiert. / Image processing applications place special demands on the executing computational system.
On the one hand, high computational power is required.
On the other hand, high flexibility is an advantage, because development tends to be an experimental and interactive process.
For new applications, developers tend to choose a computational architecture they know well instead of the one that fits the application best.
Image processing algorithms are inherently parallel while common image processing systems are mostly based on sequentially operating processors.
In contrast to this "mismatch", highly efficient systems can be built from a targeted synergy of software and hardware components.
However, the construction of such systems is complex, and many solutions, like coarse-grained architectures or application-specific programming languages, are often too academic for industrial use.
The present work aims to contribute to reducing the complexity of hardware-software systems and thus to simplify, and make more economical, the development of high-performance on-chip systems in the domain of image processing.
In doing so, particular care was taken to keep the effort for familiarization, development and extensions low.
A design flow was developed and implemented which allows the software developer to accelerate calculations with hardware components and to prototype the whole embedded system.
Complex image processing applications that require an operating system, such as distributed camera sensor networks, are considered here.
The software used is based on Linux and the image processing library OpenCV.
The distribution of the calculations to software and hardware components, and the resulting scheduling and generation of the computational architecture, are done automatically.
The design space exploration is based on answer set programming, which brings advantages in terms of simplicity of modeling and extensibility.
The software is synthesized with the help of OpenEmbedded/Bitbake and the generated on-chip architectures are implemented on FPGAs.
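Since the flow starts from ordinary OpenCV code, the hardware/software partitioning can be pictured as replacing selected OpenCV calls with calls into a hardware accelerator while the rest stays in software. The fragment below is only a schematic of that idea; the accelerator hook shown is invented for illustration and is not part of the tool flow described here.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Hypothetical accelerator hook: returns true if the operation was executed
// by the FPGA fabric, false if the software fallback should be used.
// The stub below always falls back to software (no FPGA present).
bool hwSobel(const cv::Mat&, cv::Mat&) { return false; }

// Edge detection stage of an image pipeline: hardware if available,
// plain OpenCV (software) otherwise.
void sobelStage(const cv::Mat& gray, cv::Mat& edges) {
    if (!hwSobel(gray, edges)) {
        // Software path: standard OpenCV Sobel filter in the x-direction.
        cv::Sobel(gray, edges, CV_16S, 1, 0, 3);
    }
}
```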
|
250 |
DIAMOND : Une approche pour la conception de systèmes multi-agents embarqués / DIAMOND: An approach for the design of embedded multi-agent systems. Jamont, Jean-Paul, 29 September 2005 (has links) (PDF)
This thesis proposes a method for analyzing problems arising in open physical complex systems using physical multi-agent systems. This method, which we call DIAMOND (Decentralized Iterative Approach for Multiagent Open Networks Design), arranges four phases into a spiral life cycle. It proposes the use of UML notations for requirements gathering, but structures the overall operation of the system through a study of its operating and stopping modes. It uses refinement, in particular between the local and global levels of the system, and assembles individual and social behaviors while identifying the influences of one on the other. It guides the designer during the generic design phase by using components as the operational unit. At the end of the cycle, the hardware/software partitioning of the system takes place and enables the generation of code or hardware descriptions. Proposing a method was not enough: considering the components of physical complex systems as cooperating nodes of a wireless network is an attractive approach that can be seen as the extreme physical translation of decentralization. Consequently, specific architectural needs must be addressed. To this end, we propose the MWAC (Multi-Wireless-Agent Communication) model, which relies on the self-organization of the system's entities. These two contributions are exploited in the EnvSys application, whose objective is the instrumentation of a hydrographic network.
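To give a flavour of the self-organization idea behind a model such as MWAC, the sketch below shows a wireless node choosing a communication role from local neighbourhood information. The roles and the decision rule are simplified illustrations and should not be read as the actual MWAC specification.

```cpp
#include <cstddef>
#include <vector>

// Simplified communication roles inspired by group-based self-organization
// in wireless multi-agent systems. The decision rule below is an
// illustrative assumption, not the MWAC specification itself.
enum class Role { Representative, Link, SimpleMember };

struct NeighborInfo {
    int  id;
    Role role;
};

// A node elects itself representative when no neighbour already is one,
// becomes a link when it hears several representatives (it can bridge
// groups), and otherwise stays a simple member.
Role chooseRole(const std::vector<NeighborInfo>& neighbors) {
    std::size_t representatives = 0;
    for (const NeighborInfo& n : neighbors)
        if (n.role == Role::Representative) ++representatives;

    if (representatives == 0) return Role::Representative;
    if (representatives >= 2) return Role::Link;
    return Role::SimpleMember;
}
```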
|