241

High Performance FPGA-Based Computation and Simulation for MIMO Measurement and Control Systems

Palm, Johan January 2009 (has links)
The Stressometer system is a measurement and control system used in cold rolling to improve the flatness of a metal strip. To achieve this goal the system employs a multiple-input multiple-output (MIMO) control system with a considerable number of sensors and actuators. As a consequence, the computational load on the Stressometer control system becomes very high if too advanced functions are used. At the same time, advances in rolling mill mechanical design make it necessary to implement more complex functions for the Stressometer system to stay competitive. Most industrial players in this market consider improved computational power, for measurement, control and modeling applications, to be a key competitive factor. Accordingly, there is a need to improve the computational power of the Stressometer system. Several different approaches towards this objective have been identified, e.g. exploiting hardware parallelism in modern general-purpose and graphics processors.

Another approach is to implement different applications in FPGA-based hardware, either tailored to a specific problem or as part of hardware/software co-design. Through a hardware/software co-design approach the efficiency of the Stressometer system can be increased, lowering the overall demand for processing power since the available resources can be exploited more fully. Hardware-accelerated platforms can be used to increase the computational power of the Stressometer control system without major changes to the existing hardware. Hardware upgrades can thus be as simple as connecting a cable to an accelerator platform, while hardware/software co-design is used to find a suitable hardware/software partition, moving applications between software and hardware.

To determine whether this hardware/software co-design approach is realistic, the feasibility of implementing simulator, computational and control applications in FPGA-based hardware needs to be established. This is accomplished by selecting two specific applications for closer study: a Stressometer measuring roll simulator and a parallel Cholesky algorithm, both implemented in FPGA-based hardware.

Based on these studies, this work finds FPGA technology well suited for implementing both simulator and computational applications. The Stressometer measuring roll simulator was able to approximate the force and pulse signals of the Stressometer measuring roll at a relatively modest resource consumption, using only 1747 slices and eight DSP slices. The parallel FPGA-based Cholesky component, meanwhile, provides performance in the GFLOP/s range, exceeding the performance of the personal computer used for comparison in several simulations, although at a very high resource consumption. The results of this thesis, based on the two feasibility studies, indicate that it is possible to increase the processing power of the Stressometer control system using FPGA technology.
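For orientation only (this sketch is not from the thesis): the Cholesky factorization that the FPGA component parallelizes computes a lower-triangular L with A = L·Lᵀ for a symmetric positive-definite A. A minimal software reference, assuming a dense matrix stored as nested lists:

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L * L^T, for a symmetric
    positive-definite matrix A given as a list of lists."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)    # diagonal element
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]   # below-diagonal element
    return L

print(cholesky([[4.0, 2.0], [2.0, 3.0]]))  # [[2.0, 0.0], [1.0, 1.414...]]
```

Within each column the dot products for different rows are independent, which is the kind of parallelism an FPGA implementation can exploit with multiple multiply-accumulate units.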
242

MARTE based model driven design methodology for targeting dynamically reconfigurable FPGA based SoCs

Quadri, Imran Rafiq 20 April 2010 (has links) (PDF)
The work presented in this thesis was carried out in the context of Systems-on-Chip (SoC) and the design of real-time embedded systems, in particular those dedicated to dynamic reconfiguration. We present a new design flow based on Model-Driven Engineering (MDE) and the MARTE profile for SoC co-design, for the specification and implementation of these reconfigurable systems-on-chip, in order to raise abstraction levels and reduce system complexity. The first contribution of this thesis is the identification of the parts of dynamically reconfigurable systems-on-chip that can be modeled at a high abstraction level. The thesis adopts an application-driven approach and targets high-level application models to be treated as the dynamic regions of reconfigurable SoCs. We also propose generic control models for managing these regions during real-time execution. Although these semantics could be introduced at different abstraction levels of a SoC co-design environment, we focus in particular on their integration at the deployment level, which links intellectual property with the elements modeled at the high design level. Furthermore, these concepts have been integrated into the MARTE metamodel and the corresponding profile, providing a suitable extension for expressing reconfiguration features in high-level modeling. The second contribution is the proposal of an intermediate metamodel that isolates the concepts present at the register transfer level (RTL). This metamodel integrates the concepts responsible for the hardware execution of the modeled applications while enriching the control semantics, leading to the creation of a dynamically reconfigurable hardware accelerator with several available implementations. Finally, using MDE model transformations and the corresponding principles, we are able to generate HDL code equivalent to the different implementations of the reconfigurable accelerator, as well as C/C++ source code for the reconfiguration controller, which is ultimately responsible for switching between the different implementations. Our design flow was successfully verified in a case study related to an anti-radar collision detection system. A key component of this system was modeled using the extended MARTE specifications, and the generated code was used in the design and implementation of a SoC on a dynamically reconfigurable FPGA.
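The generated reconfiguration controller itself is not shown in the abstract. Purely as an illustration of the idea it describes (all names, modes, and the loader hook below are hypothetical), a controller that switches a dynamic region between available implementations might look like:

```python
# Illustrative sketch only: the thesis generates the real C/C++ controller
# from models; these mode names and bitstream files are invented.
BITSTREAMS = {
    "low_power":  "accel_lp.bit",   # partial bitstream, small/slow variant
    "high_speed": "accel_hs.bit",   # partial bitstream, large/fast variant
}

def switch_implementation(mode, load_partial_bitstream):
    """Reconfigure the dynamic region to the implementation for `mode`."""
    if mode not in BITSTREAMS:
        raise ValueError(f"unknown mode: {mode!r}")
    load_partial_bitstream(BITSTREAMS[mode])   # platform-specific loader
```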
244

Entwurf, Methoden und Werkzeuge für komplexe Bildverarbeitungssysteme auf Rekonfigurierbaren System-on-Chip-Architekturen / Design, methodologies and tools for complex image processing systems on reconfigurable system-on-chip-architectures

Mühlbauer, Felix January 2011 (has links)
Image processing applications place particular demands on the executing computer system. On the one hand, high computational power is required. On the other hand, high flexibility is an advantage, because development tends to be an experimental and interactive process. For new applications, developers tend to choose a computing architecture they know well instead of the one that fits the application best. Image processing algorithms are inherently parallel, while conventional embedded image processing systems are mostly based on sequentially operating processors. In contrast to this mismatch, highly efficient systems can be built from a targeted synergy of software and hardware components. However, constructing such systems is complex, and many solutions, such as coarse-grained architectures or application-specific programming languages, are often too academic for industrial use. The present work aims to reduce the complexity of hardware/software systems and thus to simplify, and make more economical, the development of high-performance on-chip systems in the image processing domain. Care was taken to keep the effort required for familiarization, development, and extensions low.
A design flow was developed and implemented that allows a software developer to accelerate computations with hardware components and to prototype the complete underlying embedded system. Complex image processing applications that require an operating system, such as distributed camera sensor networks, are considered. The software is based on Linux and the image processing library OpenCV. The distribution of computations across software and hardware components, and the resulting scheduling and generation of the computing architecture, is performed automatically. A design space exploration based on answer set programming yields advantages in modeling and extensibility. The system software is synthesized with OpenEmbedded/Bitbake, and the generated on-chip architectures are realized on FPGAs.
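The thesis performs this hardware/software partitioning with answer set programming; as a toy illustration of the underlying search problem only (task names, runtimes, areas, and the speedup factor below are invented), an exhaustive search has this shape:

```python
from itertools import product

# Toy illustration: the thesis solves partitioning with answer set
# programming; this brute force just shows the shape of the problem.
tasks = {"filter": (9.0, 120), "segment": (5.0, 300), "classify": (2.0, 80)}
#        name -> (software runtime in ms, FPGA area in slices); invented
HW_SPEEDUP = 8.0      # assumed speedup when a task moves to hardware
AREA_BUDGET = 350     # assumed available slices

best = None
for assign in product(("sw", "hw"), repeat=len(tasks)):
    names = list(tasks)
    area = sum(tasks[n][1] for n, a in zip(names, assign) if a == "hw")
    if area > AREA_BUDGET:
        continue                      # violates the FPGA resource constraint
    time = sum(tasks[n][0] / (HW_SPEEDUP if a == "hw" else 1.0)
               for n, a in zip(names, assign))
    if best is None or time < best[0]:
        best = (time, dict(zip(names, assign)))

print(best)   # fastest feasible partition and its total runtime
```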
245

DIAMOND : Une approche pour la conception de systèmes multi-agents embarqués

Jamont, Jean-Paul 29 September 2005 (has links) (PDF)
This thesis proposes a method for analyzing problems arising in open physical complex systems using physical multi-agent systems. The method, which we call DIAMOND (Decentralized Iterative Approach for Multiagent Open Networks Design), arranges four phases into a spiral life cycle. It uses UML notations for requirements gathering, but structures the overall operation of the system through a study of its operating and stopping modes. It uses refinement, in particular between the local and global levels of the system, and assembles individual and social behaviors while identifying the influences of each on the other. It guides the designer through the generic design phase, using components as the operational unit. At the end of the cycle, the hardware/software partitioning of the system takes place and enables the generation of code or hardware descriptions.

Proposing a method was not enough: treating the components of physical complex systems as cooperating nodes of a wireless network is an attractive approach that can be seen as the extreme physical expression of decentralization. Consequently, specific architectural needs must be addressed. For this purpose, we propose the MWAC (Multi-Wireless-Agent Communication) model, which relies on the self-organization of the system's entities.

These two contributions are exploited in the EnvSys application, whose objective is the instrumentation of a hydrographic network.
246

Playing and Learning Across Locations: Identifying Factors for the Design of Collaborative Mobile Learning

Spikol, Daniel January 2008 (has links)
The research presented in this thesis investigates the design challenges associated with the development and use of mobile applications and tools for supporting collaboration in educational activities. These technologies provide new opportunities to promote and enhance collaboration by engaging learners in a variety of activities across different places and contexts. A basic challenge is to identify how to design and deploy mobile tools and services that could be used to support collaboration in different kinds of settings. There is a need to investigate how to design collaborative learning processes and to support flexible educational activities that take advantage of mobility. The main research question that I focus on is the identification of factors that influence the design of mobile collaborative learning.

The theoretical foundations that guide my work rely on the concepts behind computer supported collaborative learning and design-based research. These ideas are presented at the beginning of this thesis and provide the basis for developing an initial framework for understanding mobile collaboration. The empirical results from three different projects conducted as part of my efforts at the Center for Learning and Knowledge Technologies at Växjö University are presented and analyzed. These results are based on a collection of papers that have been published in two refereed international conference proceedings, a journal paper, and a book chapter. The educational activities and technological support have been developed in accordance with a grounded theoretical framework. The thesis ends by discussing those factors which have been identified as having a significant influence when it comes to the design and support of mobile collaborative learning.

The findings presented in this thesis indicate that mobility changes the contexts of learning and modes of collaboration, requiring different design approaches than those used in traditional system development to support teaching and learning. The major conclusion of these efforts is that the learners’ creations, actions, sharing of experiences and reflections are key factors to consider when designing mobile collaborative activities in learning. The results additionally point to the benefit of directly involving the learners in the design process by connecting them to the iterative cycles of interaction design and research.
247

A microprocessor performance and reliability simulation framework using the speculative functional-first methodology

Yuan, Yi 13 February 2012 (has links)
With the high complexity of modern-day microprocessors and the slow speed of cycle-accurate simulations, architects are often unable to adequately evaluate their designs during the architectural exploration phases of chip design. This thesis presents the design and implementation of the timing partition of the cycle-accurate, microarchitecture-level SFFSim-Bear simulator. SFFSim-Bear is an implementation of the speculative functional-first (SFF) methodology, and utilizes a hybrid software-FPGA platform to accelerate simulation throughput. The timing partition, implemented on the FPGA, features throughput-oriented, latency-tolerant designs to cope with the challenges of the hybrid platform. Furthermore, a fault injection framework is added to this implementation that allows designers to study the reliability aspects of their processors. The result is a simulator that is fast, accurate, flexible, and extensible.
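The abstract does not detail the fault model; a common one for such frameworks is the single-event upset, i.e., flipping one bit of architectural state. A minimal software sketch of that idea (illustrative only; the thesis injects faults inside the FPGA timing partition, not in host software):

```python
import random

def inject_bit_flip(value, width=32, rng=random):
    """Model a single-event upset by flipping one randomly chosen bit
    of a `width`-bit register value."""
    bit = rng.randrange(width)        # pick a bit position uniformly
    return value ^ (1 << bit)         # XOR toggles exactly that bit

print(hex(inject_bit_flip(0xDEADBEEF)))   # e.g. 0xdeadbeeb
```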
248

Compact physical models for power supply noise and chip/package co-design in gigascale integration (GSI) and three-dimensional (3-D) integration systems

Huang, Gang 25 September 2008 (has links)
The objective of this dissertation is to derive a set of compact physical models addressing power integrity issues in high-performance gigascale integration (GSI) systems and three-dimensional (3-D) systems. The aggressive scaling of CMOS integrated circuits makes the design of power distribution networks a serious challenge, because the supply current and clock frequency are increasing, which increases the power supply noise. The scaling of the supply voltage has slowed in recent years, but the logic on the integrated circuit (IC) still becomes more sensitive to any supply voltage change because of the decreasing clock cycle and therefore shrinking noise margin. Excessive power supply noise can lead to severe degradation of chip performance and even logic failure. Therefore, power supply noise modeling and power integrity validation are of great significance in GSI and 3-D systems. Compact physical models enable quick estimation of the power supply noise without dedicated simulations. In this dissertation, accurate and compact physical models for the power supply noise are derived for power-hungry blocks, hot spots, 3-D chip stacks, and chip/package co-design. The impact of noise on transmission line performance is also investigated using compact physical modeling schemes. The models can help designers gain physical insight into the complicated power delivery system and trade off various important chip and package design parameters during the early stages of design. The models are compared with commercial tools and display high accuracy.
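The compact models themselves are not reproduced in the abstract. As a reminder of the first-order physics they refine, supply droop is commonly estimated as ΔV ≈ I·R + L·dI/dt; a quick calculation with invented example values:

```python
# First-order check of why droop grows with current and clock rate
# (textbook relation, not the dissertation's compact models):
#   dV ~ I*R + L * dI/dt
# All numbers below are hypothetical example values.
I = 50.0               # A,   block supply current
R = 1e-3               # ohm, effective grid resistance
L = 1e-12              # H,   effective loop inductance
dI, dt = 25.0, 1e-10   # A current step over s (one fast clock edge)

dV = I * R + L * (dI / dt)
print(f"droop ~ {dV * 1e3:.0f} mV")   # IR drop plus L*di/dt noise
```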
249

Conception conjointe d’antenne active pour futurs modules de transmissions RF miniatures et faible pertes / Active antenna co-design for future compact and high efficient RF front-end

Ben abdallah, Essia 12 December 2016 (has links)
The recent development of cellular communication standards has led to increasing RF front-end complexity, due to the ever-increasing number of RF paths required. Each RF path is dedicated to a group of frequency bands, which is optimal neither for cost nor for occupied area. Consequently, in order to optimize RF performance and energy consumption, the approach used in this thesis is to share the constraints between the PA and the antenna of the front-end: this is called co-design. The co-design approach considered here is twofold, and in the near future both results should be considered simultaneously and integrated into one fully reconfigurable RF front-end design.

The first study addresses the co-design of an antenna and its associated power amplifier (PA), which are traditionally designed separately. We first determine the antenna impedance specifications that maximize the trade-off between energy transfer and PA linearity. Then we propose to remove the impedance matching network between antenna and PA, while demonstrating that a low-impedance antenna can maintain RF performance. In contrast to the classical approach, where the antenna is matched to 50 Ω, the proposed co-design shows that the linearity of the PA can be kept even at high power levels (> 20 dBm).

The second study focuses on the co-design of an antenna and tunable components. We share the miniaturization effort and the resistive losses between the antenna structure and the digitally tunable capacitor (DTC). The developments are based on electromagnetic simulations, modeling, system characterization (linearity and switching time) and radiation measurements (efficiency) of miniature reconfigurable antenna prototypes in the 4G low bands. These studies led to the design of a frequency-reconfigurable antenna covering the maximum instantaneous bandwidth authorized by 4G. The radiator occupies only 18 x 3 mm2 (λ0/30 × λ0/180 at 560 MHz), making it well suited to integration into smartphones. The antenna resonance frequency is tuned between 560 MHz and 1030 MHz, and the total efficiency varies between 50% and 4%. For the first time, the impact on linearity of an SOI DTC mounted on the antenna's radiating structure is measured with a dedicated test bench. The linearity required by the 4G standard is maintained up to 22 dBm of transmitted power.
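The quoted electrical size can be checked directly (arithmetic only, assuming nothing beyond c ≈ 3×10⁸ m/s for the free-space wavelength):

```python
# Arithmetic check of the quoted electrical size.
c, f = 3.0e8, 560e6                    # m/s, Hz
lam = c / f                            # ~0.536 m at 560 MHz
print(f"{lam/30*1e3:.1f} mm x {lam/180*1e3:.1f} mm")  # -> 17.9 mm x 3.0 mm
# consistent with the 18 x 3 mm2 radiator footprint quoted above
```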
250

De la conception de produit à la conception de filière : Quelles méthodologies pour les étapes amont de l’innovation ? / From product design to supply chain design : Which methodologies for the upstream stages of innovation?

Marche, Brunelle 22 November 2018 (has links)
This thesis contributes to scientific research in several ways. First, the product/supply-chain couple, traditionally thought of in causal terms, is considered through the prism of the complexity paradigm. This theoretical contribution underlines the need to co-design the product/supply-chain couple in order to reduce the effort of launching an innovative product on the market and to ensure its success. However, an empirical study showed that few companies consider the supply chain when designing their innovative product. In this context, a supply chain design engineering approach was developed, based on product design data, to design, specify, validate and implement the supply chain of a new product. This engineering is divided into three major stages: a co-design stage, a positioning stage and an evaluation stage. The co-design stage collects and processes the product design data provided by the project team; an instantiated supply chain model was developed to collect the data needed to design the supply chain, which is then processed to facilitate modeling. The positioning stage highlights the role of the innovative company within the various supply chain scenarios obtained. Based on the Harmony for System Engineering process and its Rational Rhapsody® tool, this stage details the supply chain in terms of requirements, stakeholders, processes and behavior (each represented by different diagrams) in order to elaborate different scenarios. Finally, the last stage evaluates these supply chain scenarios in order to establish a coherent strategy. Many researchers have shown that an agile supply chain is better able to support an innovative product at launch, adapting more quickly to changes (organizational, tactical, marketing, environmental, and so on). Consequently, a framework based on observable phenomena was developed to facilitate the implementation of an agility strategy, making it possible to evaluate the typology of the current supply chain and to decide which actions to implement to obtain a more agile supply chain. This engineering approach has been tested with manufacturing companies.
