141

Emulation platform synthesis and NoC evaluation for embedded systems : towards next generation networks / Synthèse de plateformes d’émulation et évaluation de NoCs pour les systèmes embarqués : vers les réseaux du futur

Alcantara de Lima, Otavio Junior 09 September 2015 (has links)
La complexité croissante des systèmes embarqués multi-coeur exige des structures de communication flexibles et capables de supporter de nombreuses requêtes de trafics au moment de l’exécution. Les Réseaux sur Puce (NoC) émergent comme la technologie de communication la plus prometteuse pour les SoCs (Systèmes sur Puce), du fait de leur plus grande flexibilité par rapport aux autres solutions comme les bus et les connexions points à points. Les NoCs sont devenus le standard comme support de communication pour les SoC, mais les outils d’évaluation de performances deviennent critiques pour ces systèmes. Les outils d’émulation sur FPGA accélèrent l’analyse comparative de NoC ainsi que l’exploration de l’espace de conception. Ces outils ont une grande précision et un faible temps d’exécution par rapport aux simulateurs de NoC. Un outil d’émulation basé sur FPGA est composé de dizaines ou de centaines de composants distribués. Ces composants doivent être correctement gérés afin d’exécuter différents scénarii d’évaluation de trafic. Pour cela, il faut être à même de re-programmer les composants, en utilisant un protocole standard qui permet alors de piloter l’émulateur de NoC sur FPGA. Ces protocoles facilitent l’intégration des composants d’émulation développés par différents concepteurs et simplifient la configuration des noeuds d’émulation sans resynthèse ainsi que l’extraction des résultats d’émulation. Bien que l’émulation matérielle de NoC soit assez difficile, il est important de valider de nouvelles architectures de NoC avec des trafics basés sur les applications réelles pour permettre d’obtenir des résultats plus précis. La génération de modèles de trafic basés sur des applications est une préoccupation majeure pour l’émulation de NoC. Les traces intégrant des informations de dépendances sont plus précises que les traces ordinaires, ceci pour un large éventail d’architectures de NoC. Cependant, elles ont tendance à être plus grosses que les traces originales et exigent plus de ressources FPGA. L’objectif de cette thèse est la synthèse de plateformes d’émulation de NoC sur FPGA pour les futurs systèmes embarqués multi-noeuds. Une recherche approfondie s’est portée sur les stratégies éventuelles pour la génération des modèles réalistes de trafic pour le NoC émulé sur FPGA, et pour la gestion des plateformes d’émulation en utilisant des protocoles standard inspirés des protocoles de réseaux informatiques. Une première contribution de cette thèse est une structure (« framework ») d’analyse de traces capable d’extraire les dépendances de paquets. La plateforme proposée analyse un ensemble de traces extraites d’une application embarquée basée sur l’échange de messages afin de construire un modèle de calcul (MoC). Un générateur de trafic (TG) intégrant cette dépendance est créé à partir du MoC proposé. Ce TG reproduit le motif de trafic d’une application pour une plateforme d’émulation sur FPGA. Une seconde contribution est une version allégée du protocole SNMP (Simple Network Management Protocol) pour la gestion d’une plateforme d’émulation de NoC sur FPGA. L’architecture de la plateforme d’émulation proposée est basée sur les concepts du protocole SNMP. Elle offre une interface standard de haut niveau pour les composants d’émulation fournis par le protocole SNMP. Ce protocole facilite également l’intégration de composants d’émulation créés par différents concepteurs. Une analyse prospective des futures architectures de NoC constitue également une contribution dans cette thèse. 
Dans cette analyse, une architecture conceptuelle d’un système embarqué multi-noeuds du futur constitue un modèle pour extraire les contraintes de ces réseaux. Un autre mécanisme présenté est un NoC tolérant aux pannes, basé sur l’utilisation de liens de contournement. Enfin, la dernière contribution repose sur une analyse de base des besoins des futurs NoC pour les outils d’émulation sur FPGA / The ever-increasing complexity of many-core embedded system applications demands a flexible communication structure capable of supporting different traffic requirements at run-time. Networks-on-Chip (NoCs) have emerged as the most promising communication technology for modern many-core SoCs (Systems-on-Chip), since they offer greater scalability than other solutions such as buses and point-to-point connections. As NoCs become the de facto standard for on-chip systems, NoC performance evaluation tools become critical for SoC design. FPGA-based emulation platforms accelerate NoC benchmarking as well as design space exploration, offering high accuracy and low execution time compared with NoC simulators. An FPGA-based emulation platform is composed of tens or hundreds of distributed components, which must be properly managed in order to execute an evaluation scenario. There is, however, a lack of standard protocols to drive FPGA-based NoC emulators. Such protocols would ease the integration of emulation components developed by different designers, and would enable the configuration of the emulation nodes without FPGA re-synthesis as well as the extraction of emulation results. NoC hardware emulation is quite challenging: it is important to validate new NoC architectures with realistic workloads, because such workloads provide much more accurate results. The generation of application traffic patterns is therefore a key concern for NoC emulation. Dependency-aware traces are an appealing solution for generating realistic traffic workloads; they are more accurate than ordinary traces for a broad range of NoC architectures because they contain packet dependency information. However, they tend to be larger than the original traces, which demands more FPGA resources. This thesis addresses the synthesis of FPGA-based NoC emulation platforms for future multi-core embedded systems. We investigate strategies to generate realistic traffic patterns for NoCs emulated on FPGAs, as well as the management of the emulation platform using standard protocols inspired by computer network protocols. One contribution of this thesis is a trace analysis framework which addresses the packet dependency extraction problem. The proposed framework analyzes traces from a message-passing application in order to build a Model of Computation (MoC) that reproduces the communicative behavior of an application node. A dependency-aware Traffic Generator (TG) is created from the proposed MoC; this TG reproduces the application traffic pattern during an FPGA-based NoC emulation. Another contribution is a lightweight version of SNMP (Simple Network Management Protocol) to manage an FPGA-based NoC emulation platform. An FPGA-based emulation platform architecture is proposed based on the principles of the SNMP protocol. This platform has a high-level interface to the emulation components provided by that protocol, which also eases the integration of emulation components created by different designers.
The emulation platform and the protocol capabilities are evaluated during a task-mapping and mesh-topology design space exploration. A prospective analysis of future NoC architectures is also a contribution of this thesis. In this analysis, a conceptual architecture of a future multi-core embedded system is used as a model to extract the requirements of these networks. From this analysis, several networking mechanisms are proposed. The first is a congestion-aware routing algorithm, an adaptive routing algorithm that selects the output path for a given packet based on a simple prioritized set of rules. A congestion-control mechanism is also proposed for the vertical links interconnecting the layers of a 3D NoC; it is based upon the diffusion of congestion information via a piggyback protocol.
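To make the dependency-aware traffic generation idea concrete, the following minimal Python sketch shows how a generator might hold a packet back until the packets it depends on have been observed by the local receptor. The trace format, the field names, and the DependencyAwareTG class are illustrative assumptions for this listing, not the framework described in the thesis.

```python
# Minimal sketch of a dependency-aware traffic generator (illustrative only;
# the trace format and field names below are assumptions, not the thesis' format).

from collections import namedtuple

# Each trace record: packet id, destination node, payload size in flits,
# and the ids of packets that must be received before this one may be injected.
TraceEntry = namedtuple("TraceEntry", "pkt_id dest size deps")

class DependencyAwareTG:
    def __init__(self, trace):
        self.pending = list(trace)       # packets not yet injected
        self.received = set()            # ids of packets whose reception was observed

    def notify_received(self, pkt_id):
        """Called by the emulation receptor when a packet arrives."""
        self.received.add(pkt_id)

    def ready_packets(self):
        """Return packets whose dependencies are satisfied, preserving trace order."""
        ready = [e for e in self.pending if set(e.deps) <= self.received]
        self.pending = [e for e in self.pending if e not in ready]
        return ready

# Example: packet 2 models a reply that may only be sent after packet 1 arrives.
trace = [TraceEntry(1, dest=3, size=4, deps=[]),
         TraceEntry(2, dest=0, size=2, deps=[1])]
tg = DependencyAwareTG(trace)
assert [e.pkt_id for e in tg.ready_packets()] == [1]
tg.notify_received(1)
assert [e.pkt_id for e in tg.ready_packets()] == [2]
```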
142

Architectural exploration of network Interface for energy efficient 3D optical network-on-chip / Exploration architecturale d'un système 3D multi-coeurs communiquant par réseau optique embarqué sur puce

Pham, Van Dung 13 December 2018 (has links)
Depuis quelques années, les réseaux optiques sur puce (ONoC) sont devenus une solution intéressante pour surpasser les limitations des interconnexions électriques, compte tenu de leurs caractéristiques attractives concernant la consommation d’énergie, le délai de transfert et la bande passante. Cependant, les éléments optiques nécessaires pour définir un tel réseau souffrent d’imperfections qui introduisent des pertes durant les communications. De plus, l’utilisation de la technique de multiplexage en longueurs d’ondes (WDM) permet d’augmenter les performances, mais introduit de nouvelles pertes et de la diaphonie entre les longueurs d’ondes, ce qui a pour effet de réduire le rapport signal sur bruit et donc la qualité de la communication. Les contributions présentées dans ce manuscrit adressent cette problématique d’amélioration de performance des liens optiques dans un ONoC. Pour cela, nous proposons tout d’abord un modèle analytique des pertes et de la diaphonie dans un réseau optique sur puce WDM. Nous proposons ensuite une méthodologie pour améliorer les performances globales du système s’appuyant sur l’utilisation de codes correcteurs d’erreurs. Nous présentons deux types de codes, le premier (Hamming) est d’une complexité d’implémentation faible alors que le second (Reed-Solomon) est plus complexe, mais offre un meilleur taux de correction. Nous avons implémenté des blocs matériels supportant ces corrections d’erreurs avec une technologie 28nm FDSOI. Finalement, nous proposons la définition d’une interface complète entre le domaine électrique et le domaine optique permettant d’allouer les longueurs d’ondes, de coder l’information, de sérialiser le flux de données et de contrôler le driver du laser pour obtenir la modulation à la puissance optique souhaitée. / Electrical Networks-on-Chip (ENoCs) have long been considered the de facto interconnect technology for multiprocessor systems-on-chip (MPSoCs). However, with the increase in the number of cores integrated on a single chip, ENoCs are less and less able to meet the bandwidth and latency requirements of today's complex and highly parallel applications. In recent years, driven by power consumption constraints, low latency, and high data bandwidth requirements, optical interconnects have become an interesting solution to overcome these limitations. Indeed, Optical Networks-on-Chip (ONoCs) are based on waveguides which carry optical signals from source to destination with very low latency. Unfortunately, the optical devices used to build ONoCs suffer from imperfections which introduce losses during communications. These losses (crosstalk noise and optical losses) are important factors which impact the energy efficiency and the performance of the system. Furthermore, Wavelength Division Multiplexing (WDM) technology can help the designer improve ONoC performance, especially the bandwidth and the latency. However, using WDM introduces new losses and crosstalk noise which negatively impact the Signal-to-Noise Ratio (SNR) and the Bit Error Rate (BER). In practice, this results in a higher BER and increased power consumption, which in turn reduces the energy efficiency of the optical interconnect. The contributions presented in this manuscript address these issues. First, we model and analyze the optical losses and crosstalk in a WDM-based ONoC. The model provides an analytical evaluation of the worst-case loss and crosstalk under different parameters for an optical ring network-on-chip.
Based on this model, we propose a methodology to improve the performance and reduce the power consumption of optical interconnects relying on the use of forward error correction (FEC). We present two case studies of lightweight FECs with low implementation complexity and high error-correction performance in a 28nm Fully-Depleted Silicon-On-Insulator (FDSOI) technology. The results demonstrate the advantages of using FEC on the optical interconnect in the context of the CHAMELEON ONoC. Second, we propose a complete design of an Optical Network Interface (ONI), which is composed of data-flow allocation, integrated FECs, data serialization/deserialization, and control of the laser driver. The details of these different elements are presented in this manuscript. Relying on this network interface, allocation management to improve energy efficiency can be supported at runtime depending on application demands. This runtime management of energy vs. performance can be integrated into the ONI manager through a configuration manager located in each ONI. Finally, the design of an ONoC configuration sequencer (OCS), located at the center of the optical layer, is presented. By using the ONI managers, the OCS can configure the ONoC at runtime according to the application performance and energy requirements.
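As a rough illustration of the kind of low-complexity FEC mentioned above, here is a generic Hamming(7,4) encoder/decoder in Python that corrects a single flipped bit per codeword. This is a textbook code shown for illustration only: it is not the RTL implemented in the thesis, and the bit ordering is an arbitrary choice.

```python
# Illustrative Hamming(7,4) encoder/decoder: the kind of low-complexity FEC
# discussed for the optical link (generic textbook code, not the thesis' hardware).

def hamming74_encode(d):               # d: list of 4 data bits [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                  # each parity bit covers overlapping data positions
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # codeword positions 1..7

def hamming74_decode(c):               # c: 7 received bits; corrects one flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]     # syndrome bits recomputed from the received word
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3    # 0 means no detected error
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1           # flip the erroneous position
    return [c[2], c[4], c[5], c[6]]    # extract the 4 data bits

# A single bit error injected by crosstalk or loss on the link is corrected.
codeword = hamming74_encode([1, 0, 1, 1])
codeword[4] ^= 1
assert hamming74_decode(codeword) == [1, 0, 1, 1]
```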
143

Architecture logicielle et matérielle d'un système de détection des émotions utilisant les signaux physiologiques. Application à la mnémothérapie musicale / Hardware and software architecture of an emotions detection system using physiological signals. Application to the musical mnemotherapy

Koné, Chaka 01 June 2018 (has links)
Ce travail de thèse s’inscrit dans le domaine de l’informatique affective et plus précisément de l’intelligence artificielle et de l’exploration d’architecture. L’objectif de ce travail est de concevoir un système complet de détection des émotions en utilisant des signaux physiologiques. Ce travail se place donc à l’intersection de l’informatique pour la définition d’algorithme de détection des émotions et de l’électronique pour l’élaboration d’une méthodologie d’exploration d’architecture et pour la conception de nœuds de capteurs. Dans un premier temps, des algorithmes de détection multimodale et instantanée des émotions ont été définis. Deux algorithmes de classification KNN puis SVM, ont été implémentés et ont permis d’obtenir un taux de reconnaissance des émotions supérieurs à 80%. Afin de concevoir un tel système alimenté sur pile, un modèle analytique d’estimation de la consommation à haut niveau d’abstraction a été proposé et validé sur une plateforme réelle. Afin de tenir compte des contraintes utilisateurs, un outil de conception et de simulation d’architecture d’objets connectés pour la santé a été développé, permettant ainsi d’évaluer les performances des systèmes avant leur conception. Une architecture logicielle/matérielle pour la collecte et le traitement des données satisfaisant les contraintes applicatives et utilisateurs a ainsi été proposée. Doté de cette architecture, des expérimentations ont été menées pour la Mnémothérapie musicale. EMOTICA est un système complet de détection des émotions utilisant des signaux physiologiques satisfaisant les contraintes d’architecture, d’application et de l’utilisateur. / This thesis work belongs to the field of affective computing, and more precisely to artificial intelligence and architecture exploration. The goal of this work is to design a complete emotion detection system using physiological signals. This work therefore sits at the intersection of computer science, for the definition of emotion detection algorithms, and electronics, for the development of an architecture exploration methodology and the design of sensor nodes. First, algorithms for multimodal and instantaneous detection of emotions were defined. Two classification algorithms, KNN and then SVM, were implemented and achieved an emotion recognition rate above 80%. To design such a battery-powered system, an analytical model for estimating power consumption at a high level of abstraction was proposed and validated on a real platform. To take user constraints into account, a design and simulation tool for connected health-object architectures was developed, allowing the performance of systems to be evaluated prior to their design. We then used this tool to propose a hardware/software architecture for data collection and processing that satisfies the architectural and application constraints. With this architecture, experiments were conducted on musical mnemotherapy. EMOTICA is a complete emotion detection system using physiological signals that satisfies the architecture, application, and user constraints.
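The classification step described above can be pictured with a small scikit-learn sketch: a KNN and an SVM classifier trained on feature vectors derived from physiological signals. The feature names, values, and labels below are invented placeholders, not the thesis' dataset or feature-extraction pipeline.

```python
# Toy sketch of KNN / SVM emotion classification on physiological features
# (heart rate, skin conductance, skin temperature); data is illustrative only.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Each row = [mean heart rate (bpm), skin conductance level, skin temperature (C)]
X_train = np.array([[72., 2.1, 33.5], [95., 6.8, 32.1],
                    [70., 1.9, 33.8], [98., 7.2, 31.9]])
y_train = np.array(["calm", "stressed", "calm", "stressed"])

X_new = np.array([[90., 6.0, 32.3]])   # a new window of physiological measurements

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
svm = SVC(kernel="rbf").fit(X_train, y_train)

print(knn.predict(X_new), svm.predict(X_new))   # both report "stressed" here
```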
144

Dynamic instruction set extension of microprocessors with embedded FPGAs

Bauer, Heiner 31 March 2017 (has links)
Increasingly complex applications and recent shifts in technology scaling have created a large demand for microprocessors which can perform tasks more quickly and more energy-efficiently. Conventional microarchitectures exploit multiple levels of parallelism to increase instruction throughput and use application-specific instruction sets or hardware accelerators to increase energy efficiency. Reconfigurable microprocessors adopt the same principle of providing application-specific hardware, however with the significant advantage of post-fabrication flexibility. Not only does this offer similar gains in performance, but also the flexibility to configure each device individually. This thesis explored the benefit of a tightly coupled and fine-grained reconfigurable microprocessor. In contrast to previous research, a detailed design space exploration of logical architectures for island-style field programmable gate arrays (FPGAs) was performed in the context of a commercial 22nm process technology. Other research projects either reused general-purpose architectures or spent little effort to design and characterize custom fabrics, which are critical to system performance and to the practicality of frequently proposed high-level software techniques. Here, detailed circuit implementations and a custom area model were used to estimate the performance of over 200 different logical FPGA architectures with single-driver routing. Results of this exploration revealed tradeoffs and trends similar to those described by previous studies. The number of lookup table (LUT) inputs and the structure of the global routing network were shown to have a major impact on the area-delay product. However, the results suggested a much larger region of efficient architectures than previously reported. Finally, an architecture with 5-LUTs and 8 logic elements per cluster was selected. Modifications to the microprocessor, which was based on an industry-proven instruction set architecture, and to its software toolchain provided access to this embedded reconfigurable fabric via custom instructions. The baseline microprocessor was characterized with estimates from signoff data for a 28nm hardware implementation. A modified academic FPGA tool flow was used to transform Verilog implementations of custom instructions into a post-routing netlist with timing annotations. Simulation-based verification of the system was performed with a cycle-accurate processor model and diverse application benchmarks, ranging from signal processing and encryption to the computation of elementary functions. For these benchmarks, a significant increase in performance, with speedups from 3 to 15 relative to the baseline microprocessor, was achieved with the extended instruction set. Except for one case, the application speedup clearly outweighed the area overhead of the extended system, even though the modeled fabric architecture was primitive and contained no explicit arithmetic enhancements. Insights into fundamental tradeoffs of island-style FPGA architectures, the developed exploration flow, and a concrete cost model are relevant for the development of more advanced architectures. Hence, this work is a successful proof of concept and has laid the basis for further investigations into architectural extensions and physical implementations. Potential for further optimization was identified on multiple levels, and numerous directions for future research were described.
/ Zunehmend komplexere Anwendungen und Besonderheiten moderner Halbleitertechnologien haben zu einer großen Nachfrage an leistungsfähigen und gleichzeitig sehr energieeffizienten Mikroprozessoren geführt. Konventionelle Architekturen versuchen den Befehlsdurchsatz durch Parallelisierung zu steigern und stellen anwendungsspezifische Befehlssätze oder Hardwarebeschleuniger zur Steigerung der Energieeffizienz bereit. Rekonfigurierbare Prozessoren ermöglichen ähnliche Performancesteigerungen und besitzen gleichzeitig den enormen Vorteil, dass die Spezialisierung auf eine bestimmte Anwendung nach der Herstellung erfolgen kann. In dieser Diplomarbeit wurde ein rekonfigurierbarer Mikroprozessor mit einem eng gekoppelten FPGA untersucht. Im Gegensatz zu früheren Forschungsansätzen wurde eine umfangreiche Entwurfsraumexploration der FPGA-Architektur im Zusammenhang mit einem kommerziellen 22nm Herstellungsprozess durchgeführt. Bisher verwendeten die meisten Forschungsprojekte entweder kommerzielle Architekturen, die nicht unbedingt auf diesen Anwendungsfall zugeschnitten sind, oder die vorgeschlagenen FGPA-Komponenten wurden nur unzureichend untersucht und charakterisiert. Jedoch ist gerade dieser Baustein ausschlaggebend für die Leistungsfähigkeit des gesamten Systems. Deshalb wurden im Rahmen dieser Arbeit über 200 verschiedene logische FPGA-Architekturen untersucht. Zur Modellierung wurden konkrete Schaltungstopologien und ein auf den Herstellungsprozess zugeschnittenes Modell zur Abschätzung der Layoutfläche verwendet. Generell wurden die gleichen Trends wie bei vorhergehenden und ähnlich umfangreichen Untersuchungen beobachtet. Auch hier wurden die Ergebnisse maßgeblich von der Größe der LUTs (engl. "Lookup Tables") und der Struktur des Routingnetzwerks bestimmt. Gleichzeitig wurde ein viel breiterer Bereich von Architekturen mit nahezu gleicher Effizienz identifiziert. Zur weiteren Evaluation wurde eine FPGA-Architektur mit 5-LUTs und 8 Logikelementen ausgewählt. Die Performance des ausgewählten Mikroprozessors, der auf einer erprobten Befehlssatzarchitektur aufbaut, wurde mit Ergebnissen eines 28nm Testchips abgeschätzt. Eine modifizierte Sammlung von akademischen Softwarewerkzeugen wurde verwendet, um Spezialbefehle auf die modellierte FPGA-Architektur abzubilden und eine Netzliste für die anschließende Simulation und Verifikation zu erzeugen. Für eine Reihe unterschiedlicher Anwendungs-Benchmarks wurde eine relative Leistungssteigerung zwischen 3 und 15 gegenüber dem ursprünglichen Prozessor ermittelt. Obwohl die vorgeschlagene FPGA-Architektur vergleichsweise primitiv ist und keinerlei arithmetische Erweiterungen besitzt, musste dabei, bis auf eine Ausnahme, kein überproportionaler Anstieg der Chipfläche in Kauf genommen werden. Die gewonnen Erkenntnisse zu den Abhängigkeiten zwischen den Architekturparametern, der entwickelte Ablauf für die Exploration und das konkrete Kostenmodell sind essenziell für weitere Verbesserungen der FPGA-Architektur. Die vorliegende Arbeit hat somit erfolgreich den Vorteil der untersuchten Systemarchitektur gezeigt und den Weg für mögliche Erweiterungen und Hardwareimplementierungen geebnet. Zusätzlich wurden eine Reihe von Optimierungen der Architektur und weitere potenziellen Forschungsansätzen aufgezeigt.
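As a toy illustration of the selection criterion used in the exploration above, the sketch below ranks candidate logical FPGA architectures by their area-delay product. The candidate parameters and numbers are invented placeholders, not results from the study.

```python
# Rank candidate FPGA logic architectures by area-delay product (placeholder data).

from dataclasses import dataclass

@dataclass
class FpgaArch:
    lut_inputs: int        # K, inputs per lookup table
    cluster_size: int      # N, logic elements per cluster
    area_um2: float        # estimated tile area from an area model
    crit_delay_ns: float   # estimated critical-path delay from circuit simulation

    @property
    def area_delay(self):
        return self.area_um2 * self.crit_delay_ns

candidates = [
    FpgaArch(4, 8, area_um2=950.0, crit_delay_ns=0.62),
    FpgaArch(5, 8, area_um2=1100.0, crit_delay_ns=0.48),
    FpgaArch(6, 10, area_um2=1450.0, crit_delay_ns=0.45),
]

# The architecture minimizing the area-delay product would be selected.
best = min(candidates, key=lambda a: a.area_delay)
print(f"K={best.lut_inputs}, N={best.cluster_size}, AD={best.area_delay:.1f}")
```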
145

Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems

Tuzov, Ilya 25 January 2021 (has links)
[ES] La utilización de sistemas empotrados en cada vez más ámbitos de aplicación está llevando a que su diseño deba enfrentarse a mayores requisitos de rendimiento, consumo de energía y área (PPA). Asimismo, su utilización en aplicaciones críticas provoca que deban cumplir con estrictos requisitos de confiabilidad para garantizar su correcto funcionamiento durante períodos prolongados de tiempo. En particular, el uso de dispositivos lógicos programables de tipo FPGA es un gran desafío desde la perspectiva de la confiabilidad, ya que estos dispositivos son muy sensibles a la radiación. Por todo ello, la confiabilidad debe considerarse como uno de los criterios principales para la toma de decisiones a lo largo del todo flujo de diseño, que debe complementarse con diversos procesos que permitan alcanzar estrictos requisitos de confiabilidad. Primero, la evaluación de la robustez del diseño permite identificar sus puntos débiles, guiando así la definición de mecanismos de tolerancia a fallos. Segundo, la eficacia de los mecanismos definidos debe validarse experimentalmente. Tercero, la evaluación comparativa de la confiabilidad permite a los diseñadores seleccionar los componentes prediseñados (IP), las tecnologías de implementación y las herramientas de diseño (EDA) más adecuadas desde la perspectiva de la confiabilidad. Por último, la exploración del espacio de diseño (DSE) permite configurar de manera óptima los componentes y las herramientas seleccionados, mejorando así la confiabilidad y las métricas PPA de la implementación resultante. Todos los procesos anteriormente mencionados se basan en técnicas de inyección de fallos para evaluar la robustez del sistema diseñado. A pesar de que existe una amplia variedad de técnicas de inyección de fallos, varias problemas aún deben abordarse para cubrir las necesidades planteadas en el flujo de diseño. Aquellas soluciones basadas en simulación (SBFI) deben adaptarse a los modelos de nivel de implementación, teniendo en cuenta la arquitectura de los diversos componentes de la tecnología utilizada. Las técnicas de inyección de fallos basadas en FPGAs (FFI) deben abordar problemas relacionados con la granularidad del análisis para poder localizar los puntos débiles del diseño. Otro desafío es la reducción del coste temporal de los experimentos de inyección de fallos. Debido a la alta complejidad de los diseños actuales, el tiempo experimental dedicado a la evaluación de la confiabilidad puede ser excesivo incluso en aquellos escenarios más simples, mientras que puede ser inviable en aquellos procesos relacionados con la evaluación de múltiples configuraciones alternativas del diseño. Por último, estos procesos orientados a la confiabilidad carecen de un soporte instrumental que permita cubrir el flujo de diseño con toda su variedad de lenguajes de descripción de hardware, tecnologías de implementación y herramientas de diseño. Esta tesis aborda los retos anteriormente mencionados con el fin de integrar, de manera eficaz, estos procesos orientados a la confiabilidad en el flujo de diseño. Primeramente, se proponen nuevos métodos de inyección de fallos que permiten una evaluación de la confiabilidad, precisa y detallada, en diferentes niveles del flujo de diseño. Segundo, se definen nuevas técnicas para la aceleración de los experimentos de inyección que mejoran su coste temporal. 
Tercero, se define dos estrategias DSE que permiten configurar de manera óptima (desde la perspectiva de la confiabilidad) los componentes IP y las herramientas EDA, con un coste experimental mínimo. Cuarto, se propone un kit de herramientas que automatiza e incorpora con eficacia los procesos orientados a la confiabilidad en el flujo de diseño semicustom. Finalmente, se demuestra la utilidad y eficacia de las propuestas mediante un caso de estudio en el que se implementan tres procesadores empotrados en un FPGA de Xilinx serie 7. / [CA] La utilització de sistemes encastats en cada vegada més àmbits d'aplicació està portant al fet que el seu disseny haja d'enfrontar-se a majors requisits de rendiment, consum d'energia i àrea (PPA). Així mateix, la seua utilització en aplicacions crítiques provoca que hagen de complir amb estrictes requisits de confiabilitat per a garantir el seu correcte funcionament durant períodes prolongats de temps. En particular, l'ús de dispositius lògics programables de tipus FPGA és un gran desafiament des de la perspectiva de la confiabilitat, ja que aquests dispositius són molt sensibles a la radiació. Per tot això, la confiabilitat ha de considerar-se com un dels criteris principals per a la presa de decisions al llarg del tot flux de disseny, que ha de complementar-se amb diversos processos que permeten aconseguir estrictes requisits de confiabilitat. Primer, l'avaluació de la robustesa del disseny permet identificar els seus punts febles, guiant així la definició de mecanismes de tolerància a fallades. Segon, l'eficàcia dels mecanismes definits ha de validar-se experimentalment. Tercer, l'avaluació comparativa de la confiabilitat permet als dissenyadors seleccionar els components predissenyats (IP), les tecnologies d'implementació i les eines de disseny (EDA) més adequades des de la perspectiva de la confiabilitat. Finalment, l'exploració de l'espai de disseny (DSE) permet configurar de manera òptima els components i les eines seleccionats, millorant així la confiabilitat i les mètriques PPA de la implementació resultant. Tots els processos anteriorment esmentats es basen en tècniques d'injecció de fallades per a poder avaluar la robustesa del sistema dissenyat. A pesar que existeix una àmplia varietat de tècniques d'injecció de fallades, diverses problemes encara han d'abordar-se per a cobrir les necessitats plantejades en el flux de disseny. Aquelles solucions basades en simulació (SBFI) han d'adaptar-se als models de nivell d'implementació, tenint en compte l'arquitectura dels diversos components de la tecnologia utilitzada. Les tècniques d'injecció de fallades basades en FPGAs (FFI) han d'abordar problemes relacionats amb la granularitat de l'anàlisi per a poder localitzar els punts febles del disseny. Un altre desafiament és la reducció del cost temporal dels experiments d'injecció de fallades. A causa de l'alta complexitat dels dissenys actuals, el temps experimental dedicat a l'avaluació de la confiabilitat pot ser excessiu fins i tot en aquells escenaris més simples, mentre que pot ser inviable en aquells processos relacionats amb l'avaluació de múltiples configuracions alternatives del disseny. Finalment, aquests processos orientats a la confiabilitat manquen d'un suport instrumental que permeta cobrir el flux de disseny amb tota la seua varietat de llenguatges de descripció de maquinari, tecnologies d'implementació i eines de disseny. 
Aquesta tesi aborda els reptes anteriorment esmentats amb la finalitat d'integrar, de manera eficaç, aquests processos orientats a la confiabilitat en el flux de disseny. Primerament, es proposen nous mètodes d'injecció de fallades que permeten una avaluació de la confiabilitat, precisa i detallada, en diferents nivells del flux de disseny. Segon, es defineixen noves tècniques per a l'acceleració dels experiments d'injecció que milloren el seu cost temporal. Tercer, es defineix dues estratègies DSE que permeten configurar de manera òptima (des de la perspectiva de la confiabilitat) els components IP i les eines EDA, amb un cost experimental mínim. Quart, es proposa un kit d'eines (DAVOS) que automatitza i incorpora amb eficàcia els processos orientats a la confiabilitat en el flux de disseny semicustom. Finalment, es demostra la utilitat i eficàcia de les propostes mitjançant un cas d'estudi en el qual s'implementen tres processadors encastats en un FPGA de Xilinx serie 7. / [EN] Embedded systems are steadily extending their application areas, dealing with increasing requirements in performance, power consumption, and area (PPA). Whenever embedded systems are used in safety-critical applications, they must also meet rigorous dependability requirements to guarantee their correct operation during an extended period of time. Meeting these requirements is especially challenging for systems based on Field Programmable Gate Arrays (FPGAs), since they are very susceptible to Single Event Upsets, which leads to increased dependability threats, especially in harsh environments. Dependability should therefore be considered as one of the primary criteria for decision making throughout the whole design flow, complemented by several dependability-driven processes. First, dependability assessment quantifies the robustness of hardware designs against faults and identifies their weak points. Second, dependability-driven verification ensures the correctness and efficiency of fault mitigation mechanisms. Third, dependability benchmarking allows designers to select (from a dependability perspective) the most suitable IP cores, implementation technologies, and electronic design automation (EDA) tools. Finally, dependability-aware design space exploration (DSE) allows the selected IP cores and EDA tools to be optimally configured so as to improve as much as possible the dependability and PPA features of the resulting implementations. The aforementioned processes rely on fault injection testing to quantify the robustness of the designed systems. Although a wide variety of fault injection solutions exists nowadays, several important problems still need to be addressed to better cover the needs of a dependability-driven design flow. In particular, simulation-based fault injection (SBFI) should be adapted to implementation-level HDL models to take into account the architecture of diverse logic primitives, while keeping the injection procedures generic and low-intrusive. Likewise, the granularity of FPGA-based fault injection (FFI) should be refined to enable the accurate identification of weak points in FPGA-based designs. Another important challenge that dependability-driven processes face in practice is the reduction of SBFI and FFI experimental effort. The high complexity of modern designs raises the experimental effort beyond the available time budgets, even in simple dependability assessment scenarios, and it becomes prohibitive in the presence of alternative design configurations.
Finally, dependability-driven processes lack instrumental support covering the semicustom design flow in all its variety of description languages, implementation technologies, and EDA tools. Existing fault injection tools only partially cover the individual stages of the design flow, as they are usually specific to a particular design representation level and implementation technology. This work addresses the aforementioned challenges by efficiently integrating dependability-driven processes into the design flow. First, it proposes new SBFI and FFI approaches that enable an accurate and detailed dependability assessment at different levels of the design flow. Second, it improves the performance of dependability-driven processes by defining new techniques for accelerating SBFI and FFI experiments. Third, it defines two DSE strategies that enable the optimal dependability-aware tuning of IP cores and EDA tools, while reducing the robustness evaluation effort as much as possible. Fourth, it proposes a new toolkit (DAVOS) that automates and seamlessly integrates the aforementioned dependability-driven processes into the semicustom design flow. Finally, it illustrates the usefulness and efficiency of these proposals through a case study consisting of three soft-core embedded processors implemented on a Xilinx 7-series SoC FPGA. / Tuzov, I. (2020). Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/159883 / TESIS
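To give a flavour of what a simulation-based fault injection (SBFI) campaign involves, the sketch below flips one bit of a toy design's state per run and compares the result with a golden run. The toy accumulator, the outcome labels, and the campaign loop are illustrative assumptions and do not reflect the DAVOS toolkit's actual interfaces.

```python
# Toy SBFI campaign: inject one single-event upset per run and classify the outcome.

import random

def simulate(stimuli, flip=None):
    """Toy cycle-accurate model: accumulate inputs, output only the low 16 bits.
    `flip` is an optional (cycle, bit) pair emulating a single-event upset."""
    acc = 0
    for cycle, value in enumerate(stimuli):
        acc = (acc + value) & 0xFFFFFFFF
        if flip is not None and cycle == flip[0]:
            acc ^= 1 << flip[1]            # bit flip in the accumulator register
    return acc & 0xFFFF                    # bits above 15 never reach the output

def sbfi_campaign(stimuli, n_runs=1000):
    golden = simulate(stimuli)             # reference output without faults
    stats = {"masked": 0, "sdc": 0}        # sdc = silent data corruption
    for _ in range(n_runs):
        target = (random.randrange(len(stimuli)),  # injection instant
                  random.randrange(32))            # which bit of the register
        faulty = simulate(stimuli, flip=target)
        stats["masked" if faulty == golden else "sdc"] += 1
    return stats

# Roughly half the flips land in bits that never reach the output, so they are masked.
print(sbfi_campaign([random.randrange(100) for _ in range(200)]))
```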
146

Real Time Design Space Exploration of Static and Vibratory Structural Responses in Turbomachinery Through Surrogate Modeling with Principal Components

Bunnell, Spencer Reese 04 June 2020 (has links)
Design space exploration (DSE) is used to improve and understand engineering designs. Such designs must meet objectives and structural requirements. Design improvement is non-trivial and requires new DSE methods. Turbomachinery manufacturers must continue to improve existing engines to keep up with global demand. Two challenges of turbomachinery DSE are the time required to evaluate designs and knowing which designs to evaluate. This research addressed these challenges by developing novel surrogate- and principal component analysis (PCA)-based DSE methods. Node- and PCA-based surrogates were created to allow faster DSE of turbomachinery blades. The surrogates provided static stress estimation within 10% error. Surrogate error was related to the number of sampled finite element (FE) models used to train the surrogate and the variables used to change the designs. Surrogates were able to provide structural evaluations three to five orders of magnitude faster than FEA evaluations. The PCA-based surrogates were then used to create a PCA-based design workflow to help designers know which designs to evaluate. The workflow used either two-point correlation or stress and geometry coupling to relate the design variables to principal component (PC) scores. These scores were projections of the FE models onto the PCs obtained from PCA. Analysis showed that this workflow could be used in DSE to better explore and improve designs. The surrogate methods were then applied to vibratory stress. A computationally simplified analysis workflow was developed to allow for enough fluid and structural analyses to create a surrogate model. The simplified analysis workflow introduced 10% error but decreased the computational cost by 90%. The surrogate methods could not be applied directly to the emulation of vibration due to the large spikes which occur near resonance. A novel, indirect emulation method was therefore developed to better estimate vibratory responses. Surrogates were used to estimate the inputs needed to calculate the vibratory responses; during DSE, these estimates were used to calculate the vibratory responses. This method reduced the error between the surrogate and FEA from 85% to 17%. Lastly, a PCA-based multi-fidelity surrogate method was developed, which assumed that the PCs of the high- and low-fidelity models were similar. The high-fidelity FE models had tens of thousands of nodes and the low-fidelity FE models had a few hundred nodes. The computational cost to create the surrogate was decreased by 75% for the same errors. For the same computational cost, the error was reduced by 50%. Together, the methods developed in this research were shown to decrease the cost of evaluating the structural responses of turbomachinery blade designs. They also provide a method to help the designer understand which designs to explore. This research paves the way for better, and more thoroughly understood, turbomachinery blade designs.
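A compact Python sketch of how a PCA-plus-RBF surrogate of the kind described above might be assembled is shown below. The snapshot data is random placeholder input, and the array shapes, component count, and kernel choice are assumptions rather than the thesis' actual settings.

```python
# Sketch of a PCA-based surrogate for nodal stress fields, assuming a snapshot
# matrix of pre-computed FE results is available (placeholder random data here).

import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
n_samples, n_vars, n_nodes = 40, 3, 5000
X = rng.uniform(size=(n_samples, n_vars))        # sampled design variables (e.g. blade params)
stress = rng.normal(size=(n_samples, n_nodes))   # stand-in for FE nodal stresses per sample

# 1) PCA of the stress snapshots: keep a handful of principal components.
mean = stress.mean(axis=0)
U, S, Vt = np.linalg.svd(stress - mean, full_matrices=False)
k = 5
scores = (stress - mean) @ Vt[:k].T              # (n_samples, k) PC scores

# 2) RBF surrogate mapping design variables -> PC scores.
surrogate = RBFInterpolator(X, scores, kernel="thin_plate_spline")

# 3) Real-time prediction: new design -> scores -> reconstructed full stress field.
x_new = rng.uniform(size=(1, n_vars))
stress_pred = surrogate(x_new) @ Vt[:k] + mean   # (1, n_nodes), no FEA required
print(stress_pred.shape)
```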
147

Linear and Nonlinear Dimensionality-Reduction-Based Surrogate Models for Real-Time Design Space Exploration of Structural Responses

Bird, Gregory David 03 August 2020 (has links)
Design space exploration (DSE) is a tool used to evaluate and compare designs as part of the design selection process. While evaluating every possible design in a design space is infeasible, understanding design behavior and response throughout the design space may be accomplished by evaluating a subset of designs and interpolating between them using surrogate models. Surrogate modeling is a technique that uses low-cost calculations to approximate the outcome of more computationally expensive calculations or analyses, such as finite element analysis (FEA). While surrogates make quick predictions, accuracy is not guaranteed and must be considered. This research addressed the need to improve the accuracy of surrogate predictions in order to improve DSE of structural responses. This was accomplished by performing comparative analyses of linear and nonlinear dimensionality-reduction-based radial basis function (RBF) surrogate models for emulating various FEA nodal results. A total of four dimensionality reduction methods were investigated, namely principal component analysis (PCA), kernel principal component analysis (KPCA), isometric feature mapping (ISOMAP), and locally linear embedding (LLE). These methods were used in conjunction with surrogate modeling to predict nodal stresses and coordinates of a compressor blade. The research showed that using an ISOMAP-based dual-RBF surrogate model for predicting nodal stresses decreased the estimated mean error of the surrogate by 35.7% compared to PCA. Using nonlinear dimensionality-reduction-based surrogates did not reduce surrogate error for predicting nodal coordinates. A new metric, the manifold distance ratio (MDR), was introduced to measure the nonlinearity of the data manifolds. When applied to the stress and coordinate data, the stress space was found to be more nonlinear than the coordinate space for this application. The upfront training cost of the nonlinear dimensionality-reduction-based surrogates was larger than that of their linear counterparts but small enough to remain feasible. After training, all the dual-RBF surrogates were capable of making real-time predictions. This same process was repeated for a separate application involving the nodal displacements of mode shapes obtained from a FEA modal analysis. The modal assurance criterion (MAC) calculation was used to compare the predicted mode shapes, as well as their corresponding true mode shapes obtained from FEA, to a set of reference modes. The research showed that two nonlinear techniques, namely LLE and KPCA, resulted in lower surrogate error in the more complex design spaces. Using a RBF kernel, KPCA achieved the largest average reduction in error of 13.57%. The results also showed that surrogate error was greatly affected by mode shape reversal. Four different approaches of identifying reversed mode shapes were explored, all of which resulted in varying amounts of surrogate error. Together, the methods explored in this research were shown to decrease surrogate error when performing DSE of a turbomachine compressor blade. As surrogate accuracy increases, so does the ability to correctly make engineering decisions and judgements throughout the design process. Ultimately, this will help engineers design better turbomachines.
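The modal assurance criterion (MAC) used above for pairing predicted and reference mode shapes is a standard measure of consistency between two mode shape vectors. A minimal sketch follows, with random placeholder vectors standing in for the FEA and surrogate mode data.

```python
# Modal assurance criterion: MAC = |phi_a^T phi_b|^2 / ((phi_a^T phi_a)(phi_b^T phi_b)).
# A value of 1 means the two shapes are identical up to scale; data here is random.

import numpy as np

def mac(phi_a, phi_b):
    num = np.abs(phi_a @ phi_b) ** 2
    return num / ((phi_a @ phi_a) * (phi_b @ phi_b))

rng = np.random.default_rng(1)
reference_modes = [rng.normal(size=300) for _ in range(4)]   # e.g. an FEA reference set
predicted_mode = reference_modes[2] * -1.0                   # surrogate output, sign-reversed

# MAC is insensitive to scaling, so a sign-reversed shape still pairs with the right
# reference mode; the reversal must still be detected before computing surrogate error.
scores = [mac(predicted_mode, ref) for ref in reference_modes]
print(np.argmax(scores), scores)
```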
148

Současné výzvy odstraňování vesmírného odpadu: souhrn a perspektiva / Contemporary Challenges of Space Debris Removal: Overview and Outlook

Vojáková, Eliška January 2021 (has links)
Charles University, Faculty of Social Sciences, Institute of Political Studies, Department of International Security Studies. Study programme: Security Studies. Supervisor: Mgr. Bohumil Doboš, Ph.D. Year of the defence: 2021. Abstract: The sustainability of the outer space environment is necessary for all actors to execute all existing and future human space operations safely. While the severe negative consequences of the uncontrolled space debris population are not new, government agencies and intergovernmental organizations' initiatives to lessen the predicament continue to be insufficient. Scientific research and simulation models show that mere mitigation measures cannot stop the ongoing degradation of the outer space environment polluted from the past space missions. Instead, research supports the development of space projects designed with a primary objective to remove debris from space. National administrations attempt to cooperate at the international level to formulate uniform debris mitigation standards and hold each other mutually accountable for worsening the space debris situation. However, joint public international missions to actively remove debris remain unthinkable. The privatization...
149

Mission-based Design Space Exploration and Traffic-in-the-Loop Simulation for a Range-Extended Plug-in Hybrid Delivery Vehicle

Anil, Vijay Sankar January 2020 (has links)
No description available.
150

Identification of Improvement areas in Servitization within European Space Exploration : A multi-stakeholder case study of challenges in servitization / Identifiering av förbättringsområden inom tjänstefiering i Europeisk rymdutforskning: En case studie av utmaningar inom tjänstefiering

Malmberg, Jonathan January 2023 (has links)
The space industry is currently undergoing a significant servitization shift as space agencies globally are transitioning from the government-led, product-oriented procurement approach that has been the standard for decades to a more commercial, service-oriented procurement approach. The purpose of this thesis is to identify what challenges exist within servitization in European space exploration and to translate these into related improvement areas for the service-oriented procurement approach adopted by the European Space Agency (ESA). The thesis adopts a qualitative case-study approach in which four different commercial services, developed through a commercial partnership between ESA and private enterprises, are studied. In total, the study identifies 21 challenges across three different life-cycle stages of the commercial services. First, the study identifies cultural challenges for both the space agency and industry, as they struggle to transition to a service culture from the existing culture that is strongly linked with the traditional approach. Second, the study identifies several challenges related to how the processes established within the frame of the commercial partnership are currently inadequate to support the transition to commercial services. In particular, the study highlights knowledge gaps related to business planning and marketing, insufficient processes to ensure a balance between cost and quality incentives, and high barriers to entry for SMEs. Finally, the study identifies relational challenges with regard to the collaboration between the space agency and the commercial partner. The results indicate that the collaboration between ESA and the commercial partners currently lacks the transparency and efficiency needed to succeed with servitization. To resolve these challenges, the study proposes 21 different improvement areas for ESA in relation to its commercialisation initiative. In particular, the thesis highlights process improvements related to the choice of procurement approach, the development of business plans, the evaluation of upfront commitment to utilization, and visibility into the service design. The thesis concludes by highlighting the need for continued work on the development of improvements. The results serve as a starting point for developing a future approach to planning and managing the development of commercial services within space exploration. / Rymdindustrin genomgår för närvarande en betydande tjänstefiering där rymdorganisationer globalt övergår från en statligt styrd produkt-orienterad upphandlingsmetod till en mer kommersiell tjänste-orienterad upphandlingsmetod. Syftet med examensarbetet är att identifiera vilka utmaningar som finns inom tjänstefiering i europeisk rymdutforskning samt vilka relaterade förbättringsområden som följaktligen finns inom den tjänste-orienterade upphandlingsmetod som European Space Agency (ESA) har antagit. Examensarbetet baseras på en fallstudie där fyra olika kommersiella tjänster, utvecklade genom ett kommersiellt partnerskap mellan ESA och privata företag, studeras. Studien identifierar totalt sett 21 utmaningar över tre olika livscykelfaser för de kommersiella tjänsterna. För det första identifierar studien kulturella utmaningar för både rymdorganisationen och industrin, som upplever svårigheter i att övergå från den befintliga kulturen som starkt är kopplad till den traditionella metoden till en tjänste-orienterad kultur.
För det andra identifierar studien även flera utmaningar relaterade till hur processerna som etablerats inom ramen för det kommersiella partnerskapet för närvarande är otillräckliga för att stödja övergången till kommersiella tjänster. Studien lyfter särskilt fram kunskapsluckor inom affärsplanering och marknadsföring, otillräckliga processer för att säkerställa balans mellan kostnads- och kvalitetsincitament samt höga inträdeshinder för små och medelstora företag. Slutligen identifierar studien relationsmässiga utmaningar med avseende på samarbetet mellan rymdorganisationen och den kommersiella partnern. Resultaten indikerar att samarbetet mellan ESA och industrin idag saknar den nödvändiga transparensen och effektiviteten i samarbetet som krävs för att lyckas med tjänstefiering. För att lösa dessa utmaningar föreslår studien 21 olika förbättringsområden för ESA i relation till dess kommersialiseringsinitiativ. Särskilt framhävs processförbättringar relaterade till val av upphandlingsmetod, utveckling av affärsplaner, utvärdering av tidiga åtaganden för utnyttjande och insyn i tjänstedesignen. Examensarbetet avslutas med att betona behovet av fortsatt arbete med utveckling av förbättringar. Resultaten utgör en startpunkt för att utveckla en framtida strategi för planering och hantering av utvecklingen av kommersiella tjänster inom rymdutforskning.
