241 |
Integrated Control/Structure Design of a Flexible Satellite Using Structured Robust Control Synthesis. Perez Gonzalez, Jose Alvaro, 14 November 2016 (has links)
In this PhD thesis, the integrated control/structure design of a large flexible spacecraft is addressed using structured H∞ synthesis. The problem is tackled by developing a modeling technique for flexible multibody systems, called the Two-Input Two-Output Port (TITOP) model. Using finite element models as input data, this general framework allows the assembly of a flexible multibody system in chain-like or star-like structures. Additionally, the TITOP modeling technique allows the consideration of parametric variations inside the system, a characteristic necessary for integrated control/structure design. In contrast to the widely used assumed modes method, the TITOP technique is robust against changes in the boundary conditions linking the flexible bodies, and it remains an accurate approximation even when kinematic nonlinearities are large. The technique is further extended to the modeling of piezoelectric actuators and sensors for the control of flexible structures, and to revolute joints. Different control strategies, for both rigid body and flexible body motion, are tested with the developed models in order to find the controller architecture that best rejects low-frequency perturbations and damps vibrations. The implementation of the integrated control/structure design in the structured H∞ scheme is developed by expressing the system's specifications, such as bandwidth and modal damping, in the form of H∞ weighting functions. Finally, the integrated attitude control/structure design of a flexible satellite is performed using all the developed techniques, simultaneously optimizing the control law and several structural parameters.
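As a toy illustration of why H∞ constraints capture modal damping: the peak frequency-domain gain of a lightly damped flexible mode scales roughly as 1/(2ζ), so bounding the H∞ norm of a weighted closed loop directly enforces damping of that mode. The sketch below evaluates this numerically; the mode frequency and damping values are illustrative, not taken from the thesis.

```python
import numpy as np

def peak_gain(wn, zeta, w_grid):
    """Peak of |G(jw)| for the flexible mode G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)."""
    jw = 1j * w_grid
    G = wn**2 / (jw**2 + 2 * zeta * wn * jw + wn**2)
    return float(np.max(np.abs(G)))

wn = 2 * np.pi * 0.5                        # 0.5 Hz flexible mode (illustrative)
w = np.linspace(0.9 * wn, 1.1 * wn, 20001)  # fine grid around the resonance
g_light = peak_gain(wn, 0.005, w)           # 0.5 % damping -> peak ~ 1/(2*zeta) = 100
g_damped = peak_gain(wn, 0.05, w)           # ten times more damping -> peak ~ 10
```

Pushing the peak gain down (e.g. by tuning structural parameters together with the control law, as the thesis does) is exactly what an H∞ constraint on this channel expresses.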
|
242 |
High Performance FPGA-Based Computation and Simulation for MIMO Measurement and Control Systems. Palm, Johan, January 2009 (has links)
The Stressometer system is a measurement and control system used in cold rolling to improve the flatness of a metal strip. To achieve this goal the system employs a multiple input multiple output (MIMO) control system with a considerable number of sensors and actuators. As a consequence, the computational load on the Stressometer control system becomes very high if too advanced functions are used. At the same time, advances in rolling mill mechanical design make it necessary to implement more complex functions for the Stressometer system to stay competitive. Most industrial players in this market consider improved computational power, for measurement, control and modeling applications, to be a key competitive factor, so there is a need to improve the computational power of the Stressometer system. Several approaches towards this objective have been identified, e.g. exploiting hardware parallelism in modern general-purpose and graphics processors.

Another approach is to implement different applications in FPGA-based hardware, either tailored to a specific problem or as part of hardware/software co-design. Through a hardware/software co-design approach the efficiency of the Stressometer system can be increased, lowering the overall demand for processing power since the available resources are exploited more fully. Hardware-accelerated platforms can increase the computational power of the Stressometer control system without major changes to the existing hardware: a hardware upgrade can be as simple as connecting a cable to an accelerator platform, while hardware/software co-design is used to find a suitable hardware/software partition, moving applications between software and hardware.

To determine whether this hardware/software co-design approach is realistic, the feasibility of implementing simulator, computational and control applications in FPGA-based hardware needs to be established. This is accomplished by selecting two specific applications for closer study: a Stressometer measuring roll simulator and a parallel Cholesky algorithm, both implemented in FPGA-based hardware.

Based on these studies, this work finds the FPGA device technology well suited for implementing both simulator and computational applications. The measuring roll simulator was able to approximate the force and pulse signals of the Stressometer measuring roll at a relatively modest resource consumption, using only 1747 slices and eight DSP slices. The parallel FPGA-based Cholesky component, in contrast, provides performance in the GFLOP/s range, exceeding the performance of the personal computer used for comparison in several simulations, although at a very high resource consumption. The results of the two feasibility studies indicate that it is possible to increase the processing power of the Stressometer control system using FPGA device technology.
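A parallel Cholesky factorization like the one evaluated above exploits the fact that, once a pivot column is computed, the remaining updates in that column are mutually independent. A minimal software sketch of that dependency structure (plain NumPy, not the FPGA implementation):

```python
import numpy as np

def cholesky_lower(A):
    """Column-oriented Cholesky A = L @ L.T for symmetric positive definite A."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        L[j, j] = np.sqrt(A[j, j] - np.dot(L[j, :j], L[j, :j]))
        # The n-j-1 updates below do not depend on each other -> they can be
        # evaluated in parallel, which is what FPGA hardware exploits.
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - np.dot(L[i, :j], L[j, :j])) / L[j, j]
    return L

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])
L = cholesky_lower(A)
```

The inner loop over `i` is the parallelizable region; a hardware implementation replicates that update as many times as the device's resources allow.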
|
243 |
MARTE based model driven design methodology for targeting dynamically reconfigurable FPGA based SoCs. Quadri, Imran Rafiq, 20 April 2010 (has links) (PDF)
The work presented in this thesis is carried out in the context of Systems-on-Chip (SoC) and the design of real-time embedded systems, in particular the dynamic reconfiguration associated with these complex systems. We present a new design flow based on Model-Driven Engineering (MDE) and the MARTE profile for SoC co-design, covering the specification and implementation of these reconfigurable SoCs, in order to raise abstraction levels and reduce system complexity. The first contribution of this thesis is the identification of the parts of dynamically reconfigurable SoCs that can be modeled at a high abstraction level. The thesis adopts an application-driven approach and treats high-level application models as the dynamic regions of reconfigurable SoCs. We also propose generic control models for managing these regions at run time. Although these semantics can be introduced at different abstraction levels of a SoC co-design environment, we focus particularly on their integration at the deployment level, which links intellectual property blocks with the high-level modeled elements. Furthermore, these concepts have been integrated into the MARTE metamodel and the corresponding profile, providing an adequate extension for expressing reconfiguration features in high-level modeling. The second contribution is an intermediate metamodel that isolates the concepts present at the register transfer level (RTL). This metamodel integrates the concepts responsible for the hardware execution of the modeled applications while enriching the control semantics, resulting in a dynamically reconfigurable hardware accelerator with several available implementations. Using MDE model transformations and the corresponding principles, we are able to generate HDL code equivalent to the different implementations of the reconfigurable accelerator, as well as the C/C++ source code of the reconfiguration controller, which is ultimately responsible for switching between the implementations. Finally, our design flow has been successfully validated in a case study of an anti-radar collision detection system. A key component of this system was modeled using the extended MARTE specifications, and the generated code was used in the design and implementation of a SoC on a dynamically reconfigurable FPGA.
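The HDL code generation step can be pictured, in a deliberately simplified way, as a model-to-text transformation: a high-level component model is turned into an HDL skeleton. The sketch below is purely illustrative; the component name and port list are hypothetical, and the real flow operates on MARTE models through MDE transformation chains, not on Python dictionaries.

```python
# Hypothetical high-level component model (not a MARTE model):
component = {
    "name": "corr_filter",  # invented accelerator implementation name
    "ports": [("clk", "in", 1), ("din", "in", 16), ("dout", "out", 16)],
}

def to_vhdl_entity(comp):
    """Emit a VHDL entity declaration skeleton from the component model."""
    def port_decl(name, direction, width):
        vtype = "std_logic" if width == 1 else f"std_logic_vector({width - 1} downto 0)"
        return f"    {name} : {direction} {vtype}"
    ports = ";\n".join(port_decl(*p) for p in comp["ports"])
    return f"entity {comp['name']} is\n  port (\n{ports}\n  );\nend entity;"

print(to_vhdl_entity(component))
```

A real MDE chain layers several such transformations (model-to-model, then model-to-text), but the essence is the same: each generated artifact is a deterministic function of the high-level model.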
|
245 |
Design, Methodologies and Tools for Complex Image Processing Systems on Reconfigurable System-on-Chip Architectures. Mühlbauer, Felix, January 2011 (has links)
Image processing applications place special demands on the executing computational system. On the one hand, high computational power is necessary; on the other hand, high flexibility is an advantage, because development tends to be an experimental and interactive process. For new applications, developers tend to choose a computational architecture they know well instead of the one that fits the application best. Image processing algorithms are inherently parallel, while common embedded image processing systems are mostly based on sequentially operating processors. In contrast to this mismatch, highly efficient systems can be built from a directed synergy of software and hardware components. However, the construction of such systems is complex, and many solutions, such as coarse-grained architectures or application-specific programming languages, are often too academic for commercial use.

The present work aims to reduce the complexity of hardware/software systems and thus to simplify, and make more economical, the development of high-performance on-chip systems in the domain of image processing. Particular care was taken to keep the effort for familiarization, development, and extension low. A design flow was developed and implemented that allows a software developer to accelerate computations with hardware components and to prototype the whole embedded system. Complex image processing applications that require an operating system, such as distributed camera sensor networks, are considered. The software is based on Linux and the image processing library OpenCV. The distribution of the computations between software and hardware components, and the resulting scheduling and generation of the computing architecture, is done automatically. The design space exploration is based on answer set programming, which brings advantages for modeling in terms of simplicity and extensibility. The system software is synthesized with OpenEmbedded/BitBake, and the generated on-chip architectures are implemented on FPGAs.
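The automatic hardware/software distribution described above can be viewed as a constrained optimization: choose, for each task, a software or hardware implementation so that total latency is minimized within an FPGA area budget. The sketch below solves a toy instance by exhaustive search; the thesis encodes the real exploration in answer set programming, and all task numbers here are invented.

```python
from itertools import product

# name: (sw_time, hw_time, hw_area) -- illustrative values only
tasks = {
    "filter":  (9.0, 1.5, 40),
    "segment": (6.0, 2.0, 35),
    "match":   (4.0, 3.0, 30),
}
AREA_BUDGET = 70  # hypothetical FPGA slice budget

def best_partition(tasks, budget):
    """Enumerate all SW/HW assignments; keep the fastest one that fits the budget."""
    names = list(tasks)
    best = None
    for choice in product(("sw", "hw"), repeat=len(names)):
        area = sum(tasks[n][2] for n, c in zip(names, choice) if c == "hw")
        if area > budget:
            continue  # assignment does not fit on the device
        latency = sum(tasks[n][1] if c == "hw" else tasks[n][0]
                      for n, c in zip(names, choice))
        if best is None or latency < best[0]:
            best = (latency, dict(zip(names, choice)))
    return best

latency, mapping = best_partition(tasks, AREA_BUDGET)
```

Exhaustive search is exponential in the number of tasks; a declarative ASP encoding expresses the same constraints but lets a solver prune the search space and makes the model easy to extend.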
|
246 |
DIAMOND: An Approach for the Design of Embedded Multi-Agent Systems. Jamont, Jean-Paul, 29 September 2005 (has links) (PDF)
This thesis proposes a method for analyzing problems arising from open physical complex systems using physical multi-agent systems. The method, called DIAMOND (Decentralized Iterative Approach for Multiagent Open Networks Design), arranges four phases into a spiral life cycle. It uses UML notations for requirements gathering, but structures the global functioning of the system through a study of operating and stopping modes. It uses refinement, notably between the local and global levels of the system, and assembles individual and social behaviors while identifying the influences of one on the other. It guides the designer during the generic design phase, using components as the operational unit. At the end of the cycle, the hardware/software partitioning of the system takes place, enabling the generation of code or hardware descriptions. Proposing a method was not sufficient: considering the components of physical complex systems as cooperating nodes of a wireless network is an attractive approach, which can be seen as the extreme physical expression of decentralization. Specific architectural needs must therefore be addressed. To this end, we propose the MWAC (Multi-Wireless-Agent Communication) model, which relies on the self-organization of the system's entities. These two contributions are exploited in the EnvSys application, whose objective is the instrumentation of a hydrographic network.
|
247 |
Playing and Learning Across Locations: Identifying Factors for the Design of Collaborative Mobile Learning. Spikol, Daniel, January 2008 (has links)
The research presented in this thesis investigates the design challenges associated with the development and use of mobile applications and tools for supporting collaboration in educational activities. These technologies provide new opportunities to promote and enhance collaboration by engaging learners in a variety of activities across different places and contexts. A basic challenge is to identify how to design and deploy mobile tools and services that could be used to support collaboration in different kinds of settings. There is a need to investigate how to design collaborative learning processes and to support flexible educational activities that take advantage of mobility. The main research question that I focus on is the identification of factors that influence the design of mobile collaborative learning.

The theoretical foundations that guide my work rely on the concepts behind computer supported collaborative learning and design-based research. These ideas are presented at the beginning of this thesis and provide the basis for developing an initial framework for understanding mobile collaboration. The empirical results from three different projects conducted as part of my efforts at the Center for Learning and Knowledge Technologies at Växjö University are presented and analyzed. These results are based on a collection of papers that have been published in two refereed international conference proceedings, a journal paper, and a book chapter. The educational activities and technological support have been developed in accordance with a grounded theoretical framework. The thesis ends by discussing those factors which have been identified as having a significant influence when it comes to the design and support of mobile collaborative learning.

The findings presented in this thesis indicate that mobility changes the contexts of learning and modes of collaboration, requiring different design approaches than those used in traditional system development to support teaching and learning. The major conclusion of these efforts is that the learners' creations, actions, sharing of experiences and reflections are key factors to consider when designing mobile collaborative activities in learning. The results additionally point to the benefit of directly involving the learners in the design process by connecting them to the iterative cycles of interaction design and research.
|
248 |
A microprocessor performance and reliability simulation framework using the speculative functional-first methodology. Yuan, Yi, 13 February 2012 (has links)
With the high complexity of modern-day microprocessors and the slow speed of cycle-accurate simulations, architects are often unable to adequately evaluate their designs during the architectural exploration phases of chip design. This thesis presents the design and implementation of the timing partition of the cycle-accurate, microarchitecture-level SFFSim-Bear simulator. SFFSim-Bear is an implementation of the speculative functional-first (SFF) methodology, and utilizes a hybrid software-FPGA platform to accelerate simulation throughput. The timing partition, implemented in FPGA, features throughput-oriented, latency-tolerant designs to cope with the challenges of the hybrid platform. Furthermore, a fault injection framework is added to this implementation that allows designers to study the reliability aspects of their processors. The result is a simulator that is fast, accurate, flexible, and extensible.
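A fault injection framework of this kind typically revolves around corrupting architectural state to mimic single-event upsets. A minimal sketch of the idea follows; the helper names are hypothetical and this is not the SFFSim-Bear API.

```python
import random

def flip_bit(word, bit, width=32):
    """Flip one bit of a register value, modelling a single-event upset."""
    return (word ^ (1 << bit)) & ((1 << width) - 1)

def inject(trace, p_fault, rng):
    """Corrupt each register value in an execution trace with probability p_fault."""
    out = []
    for word in trace:
        if rng.random() < p_fault:
            word = flip_bit(word, rng.randrange(32))
        out.append(word)
    return out

rng = random.Random(42)                        # seeded for reproducible campaigns
trace = [0x00000010, 0xDEADBEEF, 0x00000000]   # illustrative register values
corrupted = inject(trace, p_fault=1.0, rng=rng)  # p=1.0: every value gets one flip
```

In a hardware-accelerated simulator the same XOR-mask mechanism can be wired into the FPGA timing partition, so that faults are injected at full simulation speed rather than in software.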
|
249 |
Compact physical models for power supply noise and chip/package co-design in gigascale integration (GSI) and three-dimensional (3-D) integration systems. Huang, Gang, 25 September 2008 (links)
The objective of this dissertation is to derive a set of compact physical models addressing power integrity issues in high-performance gigascale integration (GSI) systems and three-dimensional (3-D) systems. The aggressive scaling of CMOS integrated circuits makes the design of power distribution networks a serious challenge: supply currents and clock frequencies keep increasing, which increases the power supply noise. Although supply voltage scaling has slowed in recent years, the logic on an integrated circuit (IC) still becomes more sensitive to supply voltage variations as clock cycles, and therefore noise margins, shrink. Excessive power supply noise can lead to severe degradation of chip performance and even logic failure. Therefore, power supply noise modeling and power integrity validation are of great significance in GSI and 3-D systems.

Compact physical models enable quick estimation of the power supply noise without dedicated simulations. In this dissertation, accurate and compact physical models for the power supply noise are derived for power-hungry blocks, hot spots, 3-D chip stacks, and chip/package co-design. The impact of noise on transmission line performance is also investigated using compact physical modeling schemes. The models can help designers gain physical insight into the complicated power delivery system and trade off various important chip and package design parameters during the early stages of design. The models are compared with commercial tools and display high accuracy.
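A classic first-order estimate underlying such compact models is that supply noise combines a resistive IR drop with an inductive L·di/dt transient across the power delivery network (PDN). The numbers below are illustrative only, not taken from the dissertation.

```python
# First-order power supply noise estimate for a PDN.
R = 1e-3             # effective PDN resistance, ohms (illustrative)
L_pdn = 2e-12        # effective PDN loop inductance, henries (illustrative)
I = 50.0             # average supply current, amperes
dI_dt = 25.0 / 1e-9  # 25 A current swing over 1 ns, A/s

v_ir = I * R             # steady-state resistive drop: 50 mV
v_ldi = L_pdn * dI_dt    # transient inductive noise:   50 mV
v_noise = v_ir + v_ldi   # ~100 mV total, i.e. 10% of a 1 V supply
```

Even this crude estimate shows why the problem worsens with scaling: I and di/dt both grow with current and clock frequency, while the noise margin shrinks with the supply voltage.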
|
250 |
Active antenna co-design for future compact and high-efficiency RF front-ends. Ben Abdallah, Essia, 12 December 2016 (has links)
The recent development of cellular communication standards has led to increasing RF front-end complexity, owing to the ever growing number of RF paths required. Each RF path is dedicated to a group of frequency bands, which is optimal neither for cost nor for occupied area. In order to optimize RF performance and energy consumption, the approach used in this thesis is to share the constraints between the power amplifier (PA) and the antenna of the front-end: this is called co-design. The co-design approach considered here is twofold, and in the near future both results should be considered simultaneously and integrated into one fully reconfigurable RF front-end design.

The first study addresses the co-design of an antenna and its associated power amplifier, which are traditionally designed separately. We first determine the antenna impedance specifications that maximize the trade-off between energy transfer and PA linearity. We then propose to remove the impedance matching network between antenna and PA, demonstrating that a low-impedance antenna can maintain the RF performance. Contrary to the classical approach, in which the antenna is matched to 50 Ω, the proposed co-design shows that the linearity of the PA can be kept even at high power levels (> 20 dBm).

The second study focuses on the co-design of an antenna and tunable components. The miniaturization effort and the resistive losses are shared between the antenna structure and a digitally tunable capacitor (DTC). The developments are based on electromagnetic simulations, modeling, system characterization (linearity and switching time), and radiation (efficiency) measurements of miniature reconfigurable antenna prototypes in the 4G low bands. These studies led to the design of a frequency-reconfigurable slot antenna covering the maximum instantaneous bandwidth authorized by 4G. The radiator occupies only 18 x 3 mm2 (λ0/30 x λ0/180 at 560 MHz), making it well suited to integration in smartphones. The antenna resonance frequency is tuned between 560 MHz and 1030 MHz, and the total efficiency varies between 50% and 4%. For the first time, the impact on linearity of an SOI DTC implemented on the antenna radiating structure is measured with a dedicated test bench. The linearity specified by 4G is maintained up to 22 dBm of transmitted power.
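The electrical size quoted for the radiator is easy to sanity-check: at 560 MHz the free-space wavelength is about 0.535 m, so λ0/30 and λ0/180 match the stated 18 mm x 3 mm dimensions.

```python
# Verify the electrical-size figures quoted in the abstract.
c = 299_792_458.0             # speed of light, m/s
f = 560e6                     # lowest operating frequency, Hz
lam0 = c / f                  # free-space wavelength, ~0.535 m

length_mm = lam0 / 30 * 1e3   # ~17.8 mm -> quoted radiator length of 18 mm
width_mm = lam0 / 180 * 1e3   # ~3.0 mm  -> quoted radiator width of 3 mm
```

Dimensions this far below a wavelength are what makes the antenna electrically small, which in turn explains the efficiency penalty (down to 4%) at the low end of the tuning range.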
|