  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Analysis and Optimisation of Real-Time Systems with Stochastic Behaviour

Manolache, Sorin January 2005 (has links)
Embedded systems have become indispensable in our lives: household appliances, cars, airplanes, power plant control systems, medical equipment, telecommunication systems and space technology all contain digital computing systems with dedicated functionality. Most of them, if not all, are real-time systems, i.e. their responses to stimuli have timeliness constraints. The timeliness requirement has to be met despite some unpredictable, stochastic behaviour of the system. In this thesis, we address two causes of such stochastic behaviour: the application and platform-dependent stochastic task execution times, and the platform-dependent occurrence of transient faults on network links in networks-on-chip. We present three approaches to the analysis of the deadline miss ratio of applications with stochastic task execution times. Each of the three approaches fits best in a different context. The first approach is an exact one and is efficiently applicable to monoprocessor systems. The second approach is an approximate one, which allows for a designer-controlled trade-off between analysis accuracy and analysis speed. It is efficiently applicable to multiprocessor systems. The third approach is less accurate but sufficiently fast to be placed inside optimisation loops. Based on the last approach, we propose a heuristic for task mapping and priority assignment for deadline miss ratio minimisation.

Our contribution is manifold in the area of buffer- and time-constrained communication along unreliable on-chip links. First, we introduce the concept of communication supports, an intelligent combination of spatially and temporally redundant communication. We provide a method for constructing a sufficiently varied pool of alternative communication supports for each message. Second, we propose a heuristic for exploring the space of communication support candidates such that the task response times are minimised. The resulting time slack can be exploited by means of voltage and/or frequency scaling for communication energy reduction. Third, we introduce an algorithm for the worst-case analysis of the buffer space demand of applications implemented on networks-on-chip. Last, we propose an algorithm for communication mapping and packet timing for buffer space demand minimisation. All our contributions are supported by sets of experimental results obtained from both synthetic and real-world applications of industrial size.
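As a hedged illustration of the deadline miss ratio metric at the heart of the first part of the thesis (not a reproduction of any of the three analysis approaches, which are analytical), the sketch below estimates the miss ratio of a single hypothetical periodic task with a stochastic execution time by Monte Carlo simulation; the task parameters and the execution-time distribution are invented.

```python
import random

def miss_ratio(period, deadline, sample_exec_time, jobs=100_000):
    """Monte Carlo estimate of the deadline miss ratio of one periodic task
    running alone on a monoprocessor; jobs are served in release order."""
    misses = 0
    backlog = 0.0                        # unfinished work carried over from earlier jobs
    for _ in range(jobs):
        c = sample_exec_time()           # stochastic execution time of this job
        response = backlog + c           # response time measured from the release instant
        if response > deadline:
            misses += 1
        backlog = max(0.0, response - period)   # work spilling into the next period
    return misses / jobs

# Illustrative task: period 10 ms, deadline 10 ms, execution time ~ N(6, 2) truncated at 0
ratio = miss_ratio(10.0, 10.0, lambda: max(0.0, random.gauss(6.0, 2.0)))
print(f"estimated deadline miss ratio: {ratio:.3f}")
```

The analytical approaches in the thesis replace exactly this kind of simulation with faster, bounded computations, which is what makes the third approach cheap enough to sit inside an optimisation loop.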
42

Ordnungsreduktion von elektrostatisch-mechanischen Finite Elemente Modellen für die Mikrosystemtechnik / Order reduction of electrostatic-mechanical finite element models for microsystems technology

Bennini, Fouad 07 October 2005 (has links) (PDF)
This thesis develops and analyses a procedure for the order reduction of finite element models of micromechanical structures with an electrostatic actuation principle. The order reduction rests on a coordinate transformation from local finite element coordinates to global coordinates. The global coordinates of the reduced model are described by a small number of shape functions, so the macromodel is no longer described by local nodal displacements but by global shape functions that influence the deformation of the entire structure. It is shown that the eigenvectors of the linearised mechanical structure provide simple and efficient shape functions. Furthermore, the method can be applied to certain nonlinearities and to the various loads occurring in microsystems. The result is macromodels that can be connected to system simulators via terminals, reach the accuracy of a finite element analysis, and exhibit run times typical of system simulation.
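A minimal linear sketch of the projection idea described above (ignoring the electrostatic nonlinearity and using randomly generated matrices purely as placeholders for an assembled finite element model) could look as follows.

```python
import numpy as np
from scipy.linalg import eigh

# Placeholder FE matrices: a small symmetric positive-definite system standing in
# for the assembled stiffness K and mass M of a micromechanical structure.
n = 200
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)              # stiffness matrix (SPD by construction)
M = np.diag(rng.uniform(1.0, 2.0, n))    # lumped mass matrix
f = rng.standard_normal(n)               # static load vector

# Shape functions: the m lowest eigenvectors of the linearised problem K x = w^2 M x
m = 5
_, Phi = eigh(K, M, subset_by_index=[0, m - 1])   # n x m basis of global shape functions

# Reduced-order (macro) model: m generalised coordinates instead of n nodal DOFs
K_r = Phi.T @ K @ Phi
q = np.linalg.solve(K_r, Phi.T @ f)      # generalised coordinates
u_reduced = Phi @ q                      # back-projected nodal displacements

u_full = np.linalg.solve(K, f)
print("relative error of the reduced static solution:",
      np.linalg.norm(u_full - u_reduced) / np.linalg.norm(u_full))
```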
43

Integration of virtual platform models into a system-level design framework

Salinas Bomfim, Pablo E. 24 November 2010 (has links)
The fields of System-on-Chip (SoC) and Embedded Systems Design have received a lot of attention in recent years. As part of an effort to increase productivity and reduce the time-to-market of new products, different approaches for Electronic System-Level Design frameworks have been proposed. These methods promise a transparent co-design of hardware and software without having to focus on the final hardware/software split. In our work, we focused on enhancing the component database, modeling and synthesis capabilities of the System-On-Chip Environment (SCE). We investigated two different virtual platform emulators (QEMU and OVP) for integration into SCE. Based on a comparative analysis, we opted to integrate the Open Virtual Platforms (OVP) models and tested the enhanced SCE simulation, design and synthesis capabilities with a JPEG encoder application, which uses both custom hardware and software as part of the system. Our approach not only provides fast functional verification support for designers (more than 10 times faster than cycle-accurate models), but also offers a good speed/accuracy relationship when compared against the integration of cycle-accurate or behavioral (host-compiled) models.
44

Une méthodologie de conception de modèles analytiques de surface et de puissance de réseaux sur puce hautement paramétriques basée sur une méthode d’apprentissage automatique / A machine-learning based methodology to design analytical area and power models of highly parametric networks-on-chip

Dubois, Florentine 04 July 2013 (has links)
Les réseaux sur puces (NoCs - Networks-on-chip) sont apparus durant la dernière décennie en tant que solution flexible et efficace pour interconnecter le nombre toujours croissant d'éléments inclus dans les systèmes sur puces (SoCs - Systems-on-chip). Les réseaux sur puces sont en mesure de répondre aux besoins grandissants en bande-passante et en scalabilité tout en respectant des contraintes fortes de performances. Cependant, ils sont habituellement caractérisés par un grand nombre de paramètres architecturaux et d'implémentation qui forment un vaste espace de conception. Dans ces conditions, trouver une architecture de NoC adaptée aux besoins d'une plateforme précise est un problème difficile. De plus, la plupart des grands choix architecturaux (topologie, routage, qualité de service) sont généralement faits au niveau architectural durant les premières étapes du flot de conception, mais mesurer les effets de ces décisions majeures sur les performances finales du système est complexe à un tel niveau d'abstraction. Les analyses statiques (méthodes non basées sur des simulations) sont apparues pour répondre à ce besoin en méthodes d'estimation des performances des SoCs fiables et disponibles rapidement dans le flot de conception. Au vu du haut niveau d'abstraction utilisé, il est irréaliste de s'attendre à une estimation précise des performances et coûts de la puce finale. L'objectif principal est alors la fidélité (caractérisation des grandes tendances d'une métrique permettant une comparaison équitable des alternatives) plutôt que la précision. Cette thèse propose une méthodologie de modélisation pour concevoir des analyses statiques des coûts des composants des NoCs. La méthode proposée est principalement orientée vers la généralité. En particulier, aucune hypothèse n'est faite ni sur le nombre de paramètres des composants ni sur la nature des dépendances de la métrique considérée sur ces mêmes paramètres. Nous sommes alors en mesure de modéliser des composants proposant des millions de possibilités de configurations (ordre de 1e+30 possibilités de configurations) et d'estimer le coût de réseaux sur puce composés d'un grand nombre de ces composants au niveau architectural. Il est complexe de modéliser ce type de composants avec des modèles analytiques expérimentaux à cause du trop grand nombre de possibilités de configurations. Nous proposons donc un flot entièrement automatisé qui peut être appliqué tel quel à n'importe quelles architectures et technologies. Le flot produit des prédicteurs de coûts des composants des réseaux sur puce capables d'estimer les différentes métriques pour n'importe quelles configurations de l'espace de conception en quelques secondes. Le flot conçoit des modèles analytiques à grains fins sur la base de résultats obtenus au niveau porte et d'une méthode d'apprentissage automatique. Il est alors capable de concevoir des modèles présentant une meilleure fidélité que les méthodes basées uniquement sur des théories mathématiques tout en conservant leurs qualités principales (basse complexité, disponibilité précoce). Nous proposons d'utiliser une méthode d'interpolation basée sur la théorie de Kriging. La théorie de Kriging permet de minimiser le nombre d'exécutions du flot d'implémentation nécessaires à la modélisation tout en caractérisant le comportement des métriques à la fois localement et globalement dans l'espace. La méthode est appliquée pour modéliser la surface logique des composants clés des réseaux sur puces.
L'inclusion du trafic dans la méthode est ensuite traitée et un modèle de puissance statique et dynamique moyenne des routeurs est conçu sur cette base. / In the last decade, Networks-on-chip (NoCs) have emerged as an efficient and flexible interconnect solution to handle the increasing number of processing elements included in Systems-on-chip (SoCs). NoCs are able to handle high-bandwidth and scalability needs under tight performance constraints. However, they are usually characterized by a large number of architectural and implementation parameters, resulting in a vast design space. In these conditions, finding a suitable NoC architecture for specific platform needs is a challenging issue. Moreover, most of the main design decisions (e.g. topology, routing scheme, quality of service) are usually made at architectural level during the first steps of the design flow, but measuring the effects of these decisions on the final implementation at such a high level of abstraction is complex. Static analysis (i.e. non-simulation-based methods) has emerged to fulfill this need for reliable performance and cost estimation methods available early in the design flow. As the level of abstraction of static analysis is high, it is unrealistic to expect an accurate estimation of the performance or cost of the chip. Fidelity (i.e. characterization of the main tendencies of a metric) is thus the main objective rather than accuracy. This thesis proposes a modeling methodology to design static cost analyses of NoC components. The proposed method is mainly oriented towards generality. In particular, no assumption is made either on the number of parameters of the components or on the dependence of the modeled metric on these parameters. We are then able to address components with millions of configuration possibilities (on the order of 1e+30) and to estimate the cost of complex NoCs composed of a large number of these components at architectural level. It is difficult to model such components with experimental analytical models because of the huge number of configuration possibilities. We thus propose a fully automated modeling flow which can be applied directly to any architecture and technology. The output of the flow is a NoC component cost predictor able to estimate a metric of interest for any configuration of the design space in a few seconds. The flow builds fine-grained analytical models on the basis of gate-level results and a machine-learning method. It is then able to design models with better fidelity than purely mathematical methods while preserving their main qualities (i.e. low complexity, early availability). Moreover, it is also able to take into account the effects of the technology on the performance. We propose to use an interpolation method based on Kriging theory. By using the Kriging methodology, the number of implementation-flow runs required in the modeling process is minimized and the main characteristics of the metrics are modeled both globally and locally across the design space. The method is applied to model the logic area of key NoC components. The inclusion of traffic is then addressed, and a NoC router leakage and average dynamic power model is designed on this basis.
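Kriging, as used in the thesis for area and power models, is closely related to Gaussian-process regression over the component's configuration parameters. The hedged sketch below fits such a predictor to an invented training set; the router parameters (flit width, buffer depth, port count), the synthetic area values standing in for gate-level results, and the kernel choice are illustrative assumptions, not the models from the thesis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

# Hypothetical training set: (flit width, buffer depth, port count) configurations and
# the logic area obtained for each; a synthetic formula stands in for gate-level runs.
rng = np.random.default_rng(1)
X = rng.uniform([16, 2, 3], [128, 16, 8], size=(40, 3))              # sampled configurations
y = 50 * X[:, 0] * X[:, 1] + 300 * X[:, 2] + rng.normal(0, 500, 40)  # stand-in area values

# Kriging / Gaussian-process model of area as a function of the router parameters
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[10.0, 2.0, 1.0]),
                              normalize_y=True).fit(X, y)
area, std = gp.predict(np.array([[64, 8, 5]]), return_std=True)
print(f"predicted area: {area[0]:.0f} (+/- {std[0]:.0f})")
```

In the same spirit as the thesis' flow, the trained predictor answers queries for unseen configurations in seconds, and its predictive variance indicates where additional implementation-flow runs would be most informative.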
45

Estudo de um Sistema de Nível com Dois Tanques Interligados Sujeito a Perturbações Utilizando Desigualdades Matriciais Lineares / Study of a level system with two interconnected tanks subject to disturbances using linear matrix inequalities

Kelson de Sousa Leite 24 January 2012 (has links)
Universidade Federal do Ceará / A teoria de controle robusto evoluiu consideravelmente ao longo das últimas décadas, apresentando soluções para vários tipos de problemas de análise, desempenho e síntese de sistemas lineares incertos. As desigualdades matriciais lineares (LMIs) e suas técnicas surgiram como poderosas ferramentas em diversas áreas de engenharia de controle para projetos estruturais. Uma propriedade importante das LMIs reside no fato de que o seu conjunto solução é convexo. Esta propriedade é fundamental para que se possam formular problemas em controle robusto como sendo problemas de otimização convexa que minimizam uma função objetivo. Diante destas afirmações, o presente trabalho utiliza um sistema de nível de líquido com dois tanques interligados como planta, onde a mesma foi modelada e, em seguida, foi desenvolvido um controlador para garantir a sua estabilidade quadrática, quando submetido a perturbações externas incertas definidas em um politopo. Utilizou-se o regulador linear quadrático com ação integral (LQI) como controlador, porém, o conceito ótimo do LQR não leva em consideração as incertezas paramétricas existentes nas plantas de projeto; com isso, foi apresentado um método de resolução do LQR utilizando otimização convexa. O LQR otimizado via LMIs permite a adição de incertezas para a obtenção do ganho de realimentação de estado. Os resultados obtidos comprovaram que a estratégia de controle LQI via resolução LMI é eficaz como controle robusto, pois é capaz de incluir características referentes à imprecisão do processo; além disso, o controle LQI garante a otimalidade do controle. / Robust control theory has evolved considerably over the past decades, providing solutions for various problems of analysis, synthesis and performance of uncertain linear systems. Linear matrix inequalities (LMIs) and their associated techniques have emerged as powerful tools in various areas of control engineering for structural projects. An important property of LMIs is that their solution set is convex. This property is crucial for casting robust control problems as convex optimization problems that minimize an objective function. Building on this, the present work models a liquid-level system with two interconnected tanks as the plant and then designs a controller that guarantees its quadratic stability when subjected to uncertain external disturbances defined over a polytope. The linear quadratic regulator with integral action (LQI) is used as the controller; however, since the optimal LQR formulation does not take the parametric uncertainties of the plant into account, a method for solving the LQR problem using convex optimization is presented. The LQR optimized via LMIs allows uncertainty to be included when obtaining the state-feedback gain. The results show that LQI control via LMI resolution is effective as a robust control strategy, since it can account for imprecision in the process; moreover, the LQI control retains optimality.
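As a hedged companion to the abstract, the sketch below computes a nominal LQR state-feedback gain for an invented linearised two-tank model; the matrices are illustrative placeholders, and the LMI-based robust formulation with polytopic uncertainty developed in the work is not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Invented linearised two-tank model: x = [level of tank 1, level of tank 2], u = pump flow
A = np.array([[-0.5,  0.0],
              [ 0.5, -0.3]])
B = np.array([[1.0],
              [0.0]])
Q = np.diag([10.0, 10.0])   # state weighting
R = np.array([[1.0]])       # control-effort weighting

# Nominal LQR: solve the algebraic Riccati equation and form K = R^-1 B^T P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("state-feedback gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```

The thesis replaces this Riccati-equation solution with an LMI formulation, which is what allows the polytopic disturbance and parameter uncertainty to be added as extra convex constraints.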
46

Synthesis-driven Derivation of Process Graphs from Functional Blocks for Time-Triggered Embedded Systems

Sivatki, Ghennadii January 2005 (has links)
Embedded computer systems are used as control systems in many products, such as VCRs, digital cameras, washing machines, automobiles, airplanes, etc. As the complexity of embedded applications grows and the time-to-market of the products they are used in shrinks, designing reliable systems that satisfy multiple requirements is a great challenge. Successful design, nowadays, cannot be performed without good design tools based on powerful design methodologies. These tools should explore different design alternatives to find the best one, and do that at high abstraction levels to manage the complexity and reduce the design time. A design is specified using models. Different models are used at different design stages and abstraction levels. For example, the functionality of an application can be specified using hierarchical functional blocks. However, for such design tasks as mapping and scheduling, a lower-level flat model of interacting processes is needed. Deriving this model from a higher-level model of functional blocks is the main focus of this thesis. Our objective is to develop efficient strategies for such derivations, aiming at producing a process graph specification which helps the synthesis tasks to find schedulable implementations. We propose several strategies and evaluate them experimentally.
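To make the derivation problem concrete, here is a hedged sketch of flattening a hierarchy of functional blocks into a flat process graph; the Block data structure, the naming scheme and the rule for lifting hierarchical edges to a representative process are invented for illustration and are much simpler than the strategies developed in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """A functional block: a leaf becomes one process, a hierarchy is flattened."""
    name: str
    children: list = field(default_factory=list)
    edges: list = field(default_factory=list)   # (child_a, child_b) data dependencies

def flatten(block, prefix=""):
    """Derive a flat process graph (processes, edges) from a hierarchical block."""
    path = f"{prefix}/{block.name}" if prefix else block.name
    if not block.children:                          # leaf block -> one process
        return {path: block}, []
    processes, edges = {}, []
    entry = {}                                      # representative process of each child
    for child in block.children:
        sub_procs, sub_edges = flatten(child, path)
        processes.update(sub_procs)
        edges.extend(sub_edges)
        entry[child.name] = next(iter(sub_procs))   # naive choice of connection point
    for a, b in block.edges:                        # lift hierarchical edges to process edges
        edges.append((entry[a], entry[b]))
    return processes, edges

ctrl = Block("ctrl", children=[Block("filt"), Block("law")], edges=[("filt", "law")])
top = Block("top", children=[Block("sense"), ctrl, Block("act")],
            edges=[("sense", "ctrl"), ("ctrl", "act")])
procs, deps = flatten(top)
print(sorted(procs), deps)
```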
47

System-Level Hardware Synthesis of Dataflow Programs with HEVC as Study Use Case / Synthèse matérielle au niveau système des programmes flots-de-données : étude de cas du décodeur HEVC

Abid, Mariem 28 April 2016 (has links)
Les applications de traitement d'image et vidéo sont caractérisées par le traitement d'une grande quantité de données. La conception de ces applications complexes avec des méthodologies de conception traditionnelles bas niveau provoque l'augmentation des coûts de développement. Afin de résoudre ces défis, des outils de synthèse haut niveau ont été proposés. Le principe de base est de modéliser le comportement de l'ensemble du système en utilisant des spécifications haut niveau afin de permettre la synthèse automatique vers des spécifications bas niveau pour implémentation efficace en FPGA. Cependant, l'inconvénient principal de ces outils de synthèse haut niveau est le manque de prise en compte de la totalité du système, c.-à-d. la création de la communication entre les différents composants pour atteindre le niveau système n'est pas considérée. Le but de cette thèse est d'élever le niveau d'abstraction dans la conception des systèmes embarqués au niveau système. Nous proposons un flot de conception qui permet une synthèse matérielle efficace des applications de traitement vidéo décrites en utilisant un langage spécifique à un domaine pour la programmation flot-de-données. Le flot de conception combine un compilateur flot-de-données pour générer des descriptions à base de code C et d'un synthétiseur pour générer des descriptions niveau de transfert de registre. Le défi majeur de l'implémentation en FPGA des canaux de communication des programmes flot-de-données basés sur un modèle de calcul est la minimisation des frais généraux de la communication. Pour cela, nous avons introduit une nouvelle approche de synthèse de l'interface qui mappe les grandes quantités de données vidéo sur des mémoires partagées en FPGA, ce qui conduit à une diminution considérable de la latence et à une augmentation du débit. Ces résultats ont été démontrés sur la synthèse matérielle du standard vidéo émergent High-Efficiency Video Coding (HEVC). / Image and video processing applications are characterized by the processing of a huge amount of data. The design of such complex applications with traditional design methodologies at a low level of abstraction causes increasing development costs. In order to resolve the above-mentioned challenges, Electronic System Level (ESL) synthesis or High-Level Synthesis (HLS) tools were proposed. The basic premise is to model the behavior of the entire system using high-level specifications, and to enable the automatic synthesis to low-level specifications for efficient implementation on a Field-Programmable Gate Array (FPGA). However, the main downside of HLS tools is that the entire system is not considered, i.e. the establishment of the communication between components needed to reach the system level is not yet addressed. The purpose of this thesis is to raise the level of abstraction in the design of embedded systems to the system level. A novel design flow is proposed that enables an efficient hardware implementation of video processing applications described using a Domain Specific Language (DSL) for dataflow programming. The design flow combines a dataflow compiler for generating C-based HLS descriptions from a dataflow description and a C-to-gate synthesizer for generating Register-Transfer Level (RTL) descriptions. The challenge of implementing the communication channels of dataflow programs relying on a Model of Computation (MoC) on an FPGA is the minimization of the communication overhead. To address this issue, we introduce a new interface synthesis approach that maps the large amounts of data processed by multimedia and image processing applications to shared memories on the FPGA. This leads to a tremendous decrease in latency and an increase in throughput. These results were demonstrated on the hardware synthesis of the emerging High-Efficiency Video Coding (HEVC) standard.
48

A thermofluid network-based methodology for integrated simulation of heat transfer and combustion in a pulverized coal-fired furnace

van Der Meer, Willem Arie 02 March 2021 (has links)
Coal-fired power plant boilers consist of several complex subsystems that all need to work together to ensure plant availability, efficiency and safety, while limiting emissions. Analysing this multi-objective problem requires a thermofluid process model that can simulate the water/steam cycle and the coal/air/flue gas cycle for steady-state and dynamic operational scenarios, in an integrated manner. The furnace flue gas side can be modelled using a suitable zero-dimensional model in a quasi-steady manner, but this will only provide an overall heat transfer rate and a single gas temperature. When more detail is required, CFD is the tool of choice. However, the solution times can be prohibitive. A need therefore exists for a computationally efficient model that captures the three-dimensional radiation effects, flue gas exit temperature profile, carbon burnout and O2 and CO2 concentrations, while integrated with the steam side process model for dynamic simulations. A thermofluid network-based methodology is proposed that combines the zonal method to model the radiation heat transfer in three dimensions with a one-dimensional burnout model for the heat generation, together with characteristic flow maps for the mass transfer. Direct exchange areas are calculated using a discrete numerical integration approximation together with a suitable smoothing technique. Models of Leckner and Yin are applied to determine the gas and particle radiation properties, respectively. For the heat sources the burnout model developed by the British Coal Utilisation Research Association is employed and the advection terms of the mass flow are accounted for by superimposing a mass flow map that is generated via an isothermal CFD solution. The model was first validated by comparing it with empirical data and other numerical models applied to the IFRF single-burner furnace. The full scale furnace model was then calibrated and validated via detailed CFD results for a wall-fired furnace operating at full load. The model was shown to scale well to other load conditions and real plant measurements. Consistent results were obtained for sensitivity studies involving coal quality, particle size distribution, furnace fouling and burner operating modes. The ability to do co-simulation with a steam-side process model in Flownex® was successfully demonstrated for steady-state and dynamic simulations.
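As a hedged illustration of the zonal radiation balance underlying the furnace model, the sketch below computes the net radiative heat gained by each zone from a matrix of exchange areas and the zone temperatures; the exchange areas and temperatures are invented placeholders, and the smoothing technique, the gas and particle property models and the burnout coupling described above are not reproduced.

```python
import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_radiative_gain(exchange_areas, temperatures):
    """Net radiation gained by each zone [W] in a zonal balance:
    Q_i = sum_j S_ij * sigma * (T_j^4 - T_i^4), with S_ij the exchange area."""
    E = SIGMA * np.asarray(temperatures, dtype=float) ** 4   # blackbody emissive power
    S = np.asarray(exchange_areas, dtype=float)
    return S @ E - S.sum(axis=1) * E                         # received minus emitted

# Three illustrative zones: two hot gas zones and one cooler wall zone
S = np.array([[0.0, 2.0, 4.0],
              [2.0, 0.0, 3.0],
              [4.0, 3.0, 0.0]])      # exchange areas [m^2], symmetric by reciprocity
T = [1600.0, 1400.0, 700.0]          # zone temperatures [K]
print(net_radiative_gain(S, T))      # the hot zones lose heat, the wall zone gains it
```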
49

Modularization of Test Rigs / Modularisering av provningsriggar

Williamsson, David January 2015 (has links)
This Master of Science Thesis contains the result of a product development project, conducted in collaboration with Scania CV AB in Södertälje. Scania has a successful history in vehicle modularization and therefore wanted to investigate the possibility of modularizing its test rigs as well, in order to gain various types of benefits. The section UTT (Laboratory Technology) at Scania, where the project was conducted, had, however, little experience in product modularization. The author of the thesis therefore identified a specific test rig and modularized it by using appropriate methods. Moreover, a new method was developed by the author, in order to modularize the test rig according to both product complexity and company strategies. This was done by adapting the DSM (Design Structure Matrix) with strategies from the MIM (Module Indication Matrix), before clustering it with the IGTA++ clustering algorithm. The results of the different modularization methods were finally evaluated and compared, before choosing the most suitable modular test rig architecture. The chosen architecture was then analyzed, in order to determine potential benefits that it could offer. Another purpose of the thesis was to answer the research questions about the possibility of combining a DSM and a MIM, and whether doing so would improve the result when modularizing a product. The thesis also aimed at providing the project owners with a theoretical background in the field of product modularization and System-Level design (embodiment design). The conclusions of the thesis are that the chosen modular test rig architecture has 41% less complexity (compared with the original architecture) and could potentially increase the flexibility, reduce the risk of design mistakes and reduce the development time by up to 70%. It would also be theoretically possible to reuse up to 57% of the modules when redesigning the test rig in the future. The thesis also identified that it is possible to transfer some information from a MIM and import it to a DSM, which answered one of the research questions; it was, however, not possible to claim that this will always improve the result. / Detta M.Sc. examensarbete innehåller resultatet av ett produktframtagningsprojekt som genomfördes i samarbete med Scania CV AB i Södertälje. Scania har en framgångsrik historia inom modularisering av fordon och var därför intresserade av att undersöka möjligheten att modularisera sina provningsriggar, för att uppnå olika typer av strategiska fördelar. Sektionen UTT (Laboratorieteknik) på Scania, där projektet genomfördes, hade dock lite erfarenhet av modularisering av produkter. Författaren av detta examensarbete identifierade därför en specifik provningsrigg och modulariserade den med hjälp av lämpliga metoder. Dessutom utvecklades en ny metod av författaren för att både kunna betrakta företagsstrategier och produktkomplexiteten under modulariseringen. Detta gjordes genom att anpassa en DSM (Design Structure Matrix) med strategier från en MIM (Module Indication Matrix), innan den klustrades med hjälp av algoritmen IGTA++. Resultatet av de olika modulariseringsmetoderna utvärderades och jämfördes slutligen innan den lämpligaste modulära provriggsarkitekturen valdes. Den valda arkitekturen analyserades sedan för att identifiera tänkbara strategiska fördelar som den skulle kunna möjliggöra. Ett annat syfte med examensarbetet var att besvara forskningsfrågorna om möjligheten att kombinera en DSM och MIM, och om det i så fall skulle förbättra resultatet av modulariseringen.
Målet med examensarbetet var också att förse sektionen UTT med en teoretisk bakgrund inom modularisering och systemkonstruktion. Slutsatserna av examensarbetet är att den valda modulära produktarkitekturen har 41% lägre komplexitet (jämfört med den ursprungliga arkitekturen) och skulle dessutom potentiellt kunna öka flexibiliteten, minska risken för konstruktionsfel samt minska ledtiden (under utvecklingen) med upp till 70%. Det skulle också vara teoretiskt möjligt att återanvända upp till 57% av modulerna när den studerade provningsriggen behöver utvecklas i framtiden. Under examensarbetet identifierades också möjligheten att överföra information från en MIM till en DSM, vilket besvarade en av forskningsfrågorna. Det var dock inte möjligt att besvara frågan om det alltid förbättrar resultatet.
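The central idea described above, biasing a DSM with strategic information from a MIM before clustering, can be sketched as follows; the component names, strategy labels and weighting scheme are invented for illustration and do not reproduce the adaptation actually developed in the thesis.

```python
import numpy as np

# Invented 6-component DSM for a test rig: entry [i, j] > 0 means component i interacts with j.
components = ["frame", "fixture", "sensor", "daq", "hydraulics", "controller"]
dsm = np.array([
    [0, 3, 1, 0, 2, 0],
    [3, 0, 2, 0, 1, 0],
    [1, 2, 0, 3, 0, 1],
    [0, 0, 3, 0, 0, 3],
    [2, 1, 0, 0, 0, 2],
    [0, 0, 1, 3, 2, 0],
])

# Invented MIM-style module driver per component (e.g. carry-over vs. technology push).
# Components sharing a driver get their mutual coupling boosted, nudging the clustering
# step to place them in the same module.
driver = {"frame": "carry-over", "fixture": "carry-over", "sensor": "tech-push",
          "daq": "tech-push", "hydraulics": "carry-over", "controller": "tech-push"}
bonus = 2.0
weighted = dsm.astype(float)
for i, a in enumerate(components):
    for j, b in enumerate(components):
        if i != j and driver[a] == driver[b]:
            weighted[i, j] += bonus

print(weighted)   # this strategy-aware DSM would then be clustered (e.g. with IGTA++)
```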
50

Developing multi-criteria performance estimation tools for Systems-on-chip

Vander Biest, Alexis 23 March 2009 (has links)
The work presented in this thesis targets the analysis and implementation of multi-criteria performance prediction methods for Systems-on-Chip (SoC).

These new SoC architectures offer the opportunity to integrate complete heterogeneous systems into a single chip and can be used to design battery-powered handhelds, security-critical systems, consumer electronics devices, etc. However, this variety in terms of application usually comes with many different performance objectives such as power consumption, yield, design cost, production cost, silicon area and many others. These performance requirements are often very difficult to meet together, so that SoC design usually relies on making the right design choices and finding the best performance compromises.

In parallel with this architectural paradigm shift, new Very Deep Submicron (VDSM) silicon processes have an increasing impact on performance and deeply modify the way a VLSI system is designed, even at the first stages of a design flow.

In such a context, where many new technological and system-related variables enter the game, early exploration of the impact of design choices becomes crucial to estimate the performance of the system to design and to reduce its time-to-market.

In this context, this thesis presents:

- A study of state-of-the-art tools and methods used to estimate the performances of VLSI systems, and an original classification based on several features and concepts that they use. Based on this comparison, we highlight their weaknesses and gaps to identify new opportunities in performance prediction.
- The definition of new concepts to enable the automatic exploration of large design spaces based on flexible performance criteria and degrees of freedom representing design choices.
- The implementation of two new tools of our own:
  - Nessie, a tool enabling hierarchical representation of an application along with its platform, which automatically performs the mapping and the estimation of their performance.
  - Yeti, a C++ library enabling the definition and value estimation of closed-form expressions and table-based relations. It provides the user with input and model sensitivity analysis capability, simulation scripting, run-time building and automatic plotting of the results. Additionally, Yeti can work in standalone mode to provide the user with an independent framework for model estimation and analysis.

To demonstrate the use and interest of these tools, we provide in this thesis several case studies whose results are discussed and compared with the literature. Using Yeti, we successfully reproduced the results of a model estimating multi-core computation power and extended them thanks to the representation flexibility of our tool. We also built several models from the ground up to help the dimensioning of interconnect links and clock frequency optimization. Thanks to Nessie, we were able to reproduce the NoC power consumption results of an H.264/AVC decoding application running on a multicore platform. These results were then extended to the case of a 3D die-stacked architecture, and the performance benefits are discussed. We end by highlighting the advantages of our technique and discuss future opportunities for performance prediction tools to explore. / Doctorat en Sciences de l'ingénieur
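Yeti itself is a C++ library, so the following is only a hedged Python sketch of the concept it implements (evaluating a closed-form performance model and reporting input sensitivities); the model, parameter names and values are invented and do not correspond to Yeti's actual API.

```python
# Illustrative closed-form model: interconnect link energy per transferred bit.
def energy_per_bit(c_wire_pf, v_dd, activity):
    return activity * c_wire_pf * 1e-12 * v_dd ** 2   # E = a * C * Vdd^2  [J]

params = {"c_wire_pf": 250.0, "v_dd": 1.1, "activity": 0.3}
nominal = energy_per_bit(**params)
print(f"nominal energy per bit: {nominal:.3e} J")

# One-at-a-time sensitivity: relative output change per +1 % change of each input.
for name in params:
    bumped = dict(params, **{name: params[name] * 1.01})
    sensitivity = (energy_per_bit(**bumped) - nominal) / nominal / 0.01
    print(f"{name:>10}: normalised sensitivity ~ {sensitivity:.2f}")
```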
