81

System-Level Power Estimation Methodology for MPSoC-Based Platforms

Rethinagiri, Santhosh Kumar 14 March 2013
With the rise of new deep-submicron silicon integration technologies, power consumption in multiprocessor systems-on-chip (MPSoC) has become a primary factor in the design flow. Taking this key factor into account from the earliest design phases is crucial, since it increases component reliability and reduces the time-to-market of the final product. Shifting the design entry point up to the system level is the most important countermeasure adopted to manage the increasing complexity of MPSoCs. The reason is that decisions taken at this level, early in the design cycle, have the greatest impact on the final design in terms of power and energy efficiency. However, taking decisions at this level is very difficult, since the design space is extremely wide and its exploration has so far been a mostly manual activity. Efficient system-level power estimation tools are therefore necessary to enable proper Design Space Exploration (DSE) based on power/energy and timing.
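To make the estimation idea concrete, here is a minimal Python sketch of the activity-based power modeling that system-level estimators of this kind typically rely on: total power is the sum of per-component static power plus activity-weighted dynamic power. The component set, energy coefficients, and counter values are illustrative assumptions, not models from the thesis.

COMPONENTS = {
    # name: (static_power_mW, dynamic_energy_per_access_nJ) -- illustrative
    "cpu":  (120.0, 0.80),
    "l2":   (40.0, 0.35),
    "dram": (85.0, 2.10),
}

def estimate_power(activity, window_s):
    """Average power (mW) over a window, given per-component access counts."""
    total_mw = 0.0
    for name, (static_mw, energy_nj) in COMPONENTS.items():
        accesses = activity.get(name, 0)
        dynamic_mw = accesses * energy_nj * 1e-9 / window_s * 1e3  # nJ -> mW
        total_mw += static_mw + dynamic_mw
    return total_mw

# Example: activity counters from a simulated 1 ms execution window.
activity = {"cpu": 900_000, "l2": 250_000, "dram": 40_000}
print(f"estimated power: {estimate_power(activity, 1e-3):.1f} mW")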
82

Safe and secure model-driven design for embedded systems

Li, Letitia 03 September 2018
The presence of communicating embedded systems and IoT devices in our daily lives has brought a myriad of benefits, from added convenience and entertainment to improved safety of our commutes and health care. However, the flaws and vulnerabilities in these devices expose their users to risks of property damage, monetary losses, and personal injury. For example, consumer vehicles, both connected and conventional, have succumbed to a variety of design flaws resulting in injuries and death. At the same time, as vehicles are increasingly connected (and, in the near future, autonomous), researchers have demonstrated possible hacks on their sensors or internal control systems, including direct injection of messages on the CAN bus. Ensuring the safety of users and bystanders involves considering multiple factors. Conventional safety suggests that a system should not contain software and hardware flaws that can prevent it from functioning correctly. 'Safety of the Intended Function' involves avoiding situations that the system or its components cannot handle, such as extreme environmental conditions. Timing can be critical for certain real-time systems, as the system must respond to certain events, such as obstacle avoidance, within a set period to avoid dangerous situations. Finally, the safety of a system depends on its security: an attacker who can send custom commands or modify the software of the system may change its behavior and send it into various unsafe situations. Various safety and security countermeasures for embedded systems, especially connected vehicles, have been proposed. Placing these countermeasures correctly requires methods for analyzing and verifying that the system meets all safety, security, and performance requirements, preferably in the early design phases to reduce time-to-market and minimize costly rework. This thesis discusses safety and security considerations for embedded systems, in the context of Institut Vedecom's autonomous vehicle. Among the approaches proposed to ensure safety and security in embedded systems, Model-Driven Engineering is one that covers the full design process, from elicitation of requirements, through design of hardware and software and simulation/formal verification, to final code generation. This thesis proposes a modeling-based methodology for safe and secure design, based on the SysML-Sec methodology, which involves new modeling and verification methods. Security modeling is generally performed in the last phases of design; however, security impacts the architecture and mapping: HW/SW partitioning decisions should be made based on the ability of the architecture to satisfy security requirements. This thesis proposes how to model security mechanisms and the impact of an attacker in the HW/SW partitioning phase.
As security protocols negatively impact performance, it becomes important to measure both the usage of hardware components and the response times of the system; overloaded components can result in unpredictable performance and undesired delays. This thesis also discusses latency measurements of safety-critical events, focusing on one critical to autonomous vehicles: braking after obstacle detection. Together, these contributions support the safe and secure design of embedded systems.
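As an illustration of the latency-measurement idea, the following Python sketch pairs obstacle-detection events with subsequent brake commands in a timestamped trace and checks them against a deadline; the trace format and the 100 ms budget are assumptions for the example, not values from the thesis.

# Measuring the latency of a safety-critical event chain
# (obstacle detected -> brake command) from a timestamped event trace.
DEADLINE_S = 0.100  # assumed latency budget

trace = [  # (time_s, event) -- illustrative trace
    (1.002, "obstacle_detected"),
    (1.041, "brake_command"),
    (2.310, "obstacle_detected"),
    (2.455, "brake_command"),
]

def brake_latencies(events):
    """Pair each detection with the next brake command; return latencies."""
    pending = None
    latencies = []
    for t, ev in events:
        if ev == "obstacle_detected":
            pending = t
        elif ev == "brake_command" and pending is not None:
            latencies.append(t - pending)
            pending = None
    return latencies

for lat in brake_latencies(trace):
    status = "OK" if lat <= DEADLINE_S else "DEADLINE MISS"
    print(f"{lat * 1e3:.0f} ms  {status}")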
83

Modeling the Mechanical Morphospace of Neotropical Leaf-nosed Bat Skull: A 3D Parametric CAD and FE Study

Samavedam, Krishna C 01 January 2011
In order to understand the relationship between feeding behavior and the evolution of mammalian skull form, it is essential to evaluate the impact of bite force over large regions of the skull. There are about 1,100 bat species worldwide, representing about 20% of all classified mammal species; hence, a study of the evolution of bat skull form may provide general understanding of the overall evolution of skull form in mammals. These biomechanical studies are generally performed by first building solid Finite Element (FE) models of the skull from micro-CT scans, a process that is both tedious and time consuming. Therefore, a new approach is developed in this research project to build these FE models quickly and efficiently. I used SolidWorks to build a parameterized, three-dimensional surface CAD model of the skull of the short-tailed fruit bat, Carollia perspicillata, using coordinate data from an STL model of the species. The overall shape of this model closely resembled that of the solid model of C. perspicillata constructed from micro-CT scans. Finite element analyses of the solid and surface models yielded comparable results in terms of the magnitude and distribution of von Mises stress and mechanical advantage. Using this parametric surface model, FE plate (shell) element models of different bat species were generated by varying two parameters, palate length and palate width. Parametric analyses were performed on these FE plate models, and response surfaces of the performance criteria (von Mises stress, strain energy, and mechanical advantage) were generated by varying the input parameters. After generating the response surfaces, species of bats from the morphologically diverse family of New World leaf-nosed bats (Family Phyllostomidae) were overlain on them to determine which portions of the performance design space (palate length × width) are and are not occupied. These plots serve as a foundation for understanding the effect of different performance criteria on the evolution of bat skull form.
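The response-surface step can be sketched in Python as follows: fit a quadratic model of one performance criterion over the two palate parameters and evaluate it at new points. The sample data are synthetic placeholders standing in for FE results, not numbers from the thesis.

import numpy as np

rng = np.random.default_rng(0)
length = rng.uniform(8.0, 14.0, 30)   # palate length (mm), illustrative range
width = rng.uniform(5.0, 9.0, 30)     # palate width (mm)
# Synthetic "observed" criterion standing in for an FE output:
stress = 2.0 + 0.3 * length - 0.5 * width + 0.02 * length * width

def design_matrix(l, w):
    """Full quadratic model in two factors: 1, l, w, l*w, l^2, w^2."""
    return np.column_stack([np.ones_like(l), l, w, l * w, l**2, w**2])

coef, *_ = np.linalg.lstsq(design_matrix(length, width), stress, rcond=None)

# Evaluate the fitted response surface at a new (length, width) point.
pred = design_matrix(np.array([11.0]), np.array([7.0])) @ coef
print("predicted stress:", pred.item())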
84

A Hybrid Bishop-Hill Model for Microstructure Sensitive Design

Takahashi, Ribeka 08 November 2012
A method is presented for adapting the classical Bishop-Hill model to the requirements of elastic/yield-limited design in metals of arbitrary crystallographic texture. The proposed Hybrid Bishop-Hill (HBH) model, which is applied to ductile FCC metals, retains the 'stress corners' of the polyhedral Bishop-Hill yield surface, but replaces the 'maximum work criterion' with a criterion that minimizes the Euclidean distance between the applicable local corner stress state and the macroscopic stress state. This compromise leads to a model that is much more accessible for yield-limited design problems. The performance of the HBH model is demonstrated on an extensive database for oxygen-free electronic (OFE) copper. The study also applies the HBH model to the polycrystalline yield surface via standard finite element analysis (FEA) tools to carry out microstructure-sensitive design. Anisotropic elastic properties, as defined by the sample texture, are incorporated into the FEA software. The derived local stress tensor is assessed using the HBH approach to determine a safety factor relating to the distance from the yield surface, thereby highlighting vulnerable spots in the component and providing a quantitative ranking of the suitability of a given design. By following standard inverse design techniques, an ideal microstructure (meaning texture, in this context) may be arrived at. The design problems considered are a hole-in-plate configuration of sheets loaded in uniaxial tension and simple compliant mechanisms. A further improvement of the HBH model is discussed, introducing geometrically necessary dislocation (GND) densities in addition to the crystal orientations used in the standard microstructure-based method, and the correlations between crystal orientations and GND densities are studied. The shape of the yield surface is most influenced by the texture of the material, while the volume of the envelope scales with the GND density; however, correlations between crystal orientation and GND content modify the yield surface shape and size. While correlations between GND density and crystal orientation are not strong for most copper samples, there are sufficient dependencies to demonstrate the benefits of the detailed four-parameter model. The four-parameter approach has potential for improving estimates of the elastic-yield limit in all polycrystalline FCC materials.
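A minimal Python sketch of the HBH selection rule described above: among the corner stress states of the polyhedral yield surface, pick the one closest (in Euclidean distance) to the macroscopic stress, and use the distance for a safety-factor-like ranking. The corner states below are random placeholders, not the actual Bishop-Hill corner set.

import numpy as np

rng = np.random.default_rng(1)
# Corner stress states as 5-component deviatoric stress vectors; random
# placeholders standing in for the Bishop-Hill corner set.
corners = rng.normal(size=(28, 5))
macro_stress = np.array([0.4, -0.1, 0.2, 0.0, 0.3])

def nearest_corner(corners, sigma):
    """Index and Euclidean distance of the corner closest to sigma."""
    d = np.linalg.norm(corners - sigma, axis=1)
    i = int(np.argmin(d))
    return i, float(d[i])

i, dist = nearest_corner(corners, macro_stress)
print(f"active corner {i}, distance {dist:.3f}")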
85

Airship Systems Design, Modeling, and Simulation for Social Impact

Richards, Daniel C. 03 June 2022
Although there have been oscillations in airship interest since their use in the early 1900s, technological advancements and the need for more flexible and environmentally friendly transportation modes have caused a surge in airship study and development in recent years. For companies and governments to understand how airships can be incorporated into their fleets to fulfil new or existing mission types, system design space exploration is an important step in understanding airships, their uses, and their design parameters. A decision support system (DSS), Design Exploration of Lighter-Than-Air Systems (DELTAS), was developed to help stakeholders with this task. DELTAS allows users to design airships and missions to determine how a design will perform in a given scenario. Simulations can also be run for a given mission to find the Pareto-optimal designs over user-defined ranges of high-level airship design parameters. A case study demonstrates how DELTAS can be used to explore the airship design space for three specified missions. These three mission case studies show how design of experiments is important for covering the design space more thoroughly and for finding and understanding the relationships between airship design variables that lead to optimal mission times and costs. This research also explores the impacts of introducing an airship into operation. Engineered products have economic, environmental, and social impacts, which comprise the major dimensions of sustainability. This work seeks to determine the interaction between design parameters when social impacts are incorporated into the concept development phase of the systems design process. Social impact evaluation is increasing in importance, much as environmental impact consideration has in recent years in the design of engineered products. Concurrently, research into new airship design has increased. Airships have yet to be reintroduced at a large scale or for a range of applications in society, and although they have the potential for positive environmental and economic impacts, the social impacts are still rarely considered. This work presents a case study of the hypothetical introduction of airships in the Amazon region of Brazil to help local farmers transport their produce to market. It explores the design space in terms of both engineering parameters and social impacts, using a discrete-event simulation to model the system. The social impacts are found to depend not only on the social factors and airship design parameters, but also on the farmer-airship system, suggesting that socio-technical systems design will benefit from integrated social impact metric analysis. This thesis seeks to demonstrate how computer-aided engineering tools can be used to predict social impacts, to more effectively explore a system's design space, and to optimize the system design for maximum positive impact, using the modern airship as a case study.
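The Pareto-filtering step of such a design space exploration can be sketched in a few lines of Python; the airship evaluation model here is a toy placeholder, not the DELTAS simulation.

import random

random.seed(42)
designs = [{"length_m": random.uniform(50, 150),
            "speed_kmh": random.uniform(60, 140)} for _ in range(200)]

def evaluate(d):
    """Stand-in for the mission simulation: returns (time_h, cost_units)."""
    time_h = 5000.0 / d["speed_kmh"]           # fixed mission distance
    cost = 0.8 * d["length_m"] + 0.5 * d["speed_kmh"]
    return time_h, cost

def pareto_front(objs):
    """Indices of designs not strictly dominated (minimizing both)."""
    front = []
    for i, p in enumerate(objs):
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for j, q in enumerate(objs) if j != i)
        if not dominated:
            front.append(i)
    return front

objs = [evaluate(d) for d in designs]
print(f"{len(pareto_front(objs))} Pareto-optimal designs of {len(designs)}")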
86

OPTIMIZATION TECHNIQUES FOR PHARMACEUTICAL MANUFACTURING AND DESIGN SPACE ANALYSIS

Daniel Joseph Laky 21 July 2022
In this dissertation, numerical analysis frameworks and software tools for the digital design of process systems are developed, focused on the pharmaceutical manufacturing space. Batch processing represents the traditional and still predominant pathway to manufacture pharmaceuticals in both the drug substance and drug product spaces. Drug substance processes start with raw materials or precursors to produce an active pharmaceutical ingredient (API) through synthesis and purification; drug product processes take this pure API in powder form, add excipients, and process the powder into consumer doses such as capsules or tablets.

Continuous manufacturing has allowed many other chemical industries to take advantage of real-time process management through process control, process optimization, and real-time detection of off-spec material. The possibility of reducing total unit cleaning time and encouraging green chemistry through solvent reduction or recycling also makes continuous manufacturing an attractive alternative to batch manufacturing. However, to fully understand and take advantage of real-time process management, digital tools are required, both as soft sensors during process control and during process design and optimization. Since the shift from batch to continuous manufacturing will proceed in stages, processes will likely combine continuous and batch unit operations, which we call hybrid pharmaceutical manufacturing routes. Even though such processes will soon become common in the industry, digital tools that address comparison of batch, hybrid, and continuous manufacturing routes in the pharmaceutical space are lacking, especially for hybrid routes. For this reason, PharmaPy, an open-source tool for pharmaceutical process development, was created to enable rapid in-silico design of hybrid pharmaceutical processes.

Throughout this work, the focus is on analyzing alternative operating modes within the drug substance manufacturing context. First, the mathematical models for PharmaPy's synthesis, crystallization, and filtration units are discussed. Then, the simulation capabilities of PharmaPy are highlighted, showcasing dynamic simulation of both fully continuous and hybrid processes. The technical focus of the work as a whole, however, is primarily on optimization techniques for pharmaceutical process design. Many derivative-free optimization frameworks for simulation-optimization were constructed and utilized, with PharmaPy performing the simulations of pharmaceutical processes. The work originally began with derivative-based methods to solve mixed-integer programs (MIPs) for water network sampling and security, as well as nonlinear programs (NLPs) and some mixed-integer nonlinear programs (MINLPs) for design space and feasibility analysis. Building on this, a method for process design was implemented that combines the ease of implementation of a process simulator (PharmaPy) with the computational performance of derivative-based optimization. Recent developments in Pyomo, through the PyNumero package, allow callbacks to an input-output or black-box model while using Ipopt as a derivative-based solver through the cyipopt interface.
Using this approach, it was found that embedding a PharmaPy simulation as a black box within a derivative-based solver resulted in quicker solve times than traditional derivative-free optimization strategies, and it offers a much quicker implementation strategy than a simultaneous, equation-oriented algebraic definition of the problem.

Uncertainty exists in virtually all process systems. Traditionally, uncertainty is analyzed through sampling approaches such as Monte Carlo simulation, but these quickly become computational obstacles as problem scale increases. In the 1980s, chemical plant design under uncertainty through flexibility analysis became an option for explicitly considering model uncertainty using mathematical programming. Such formulations pose computational obstacles of their own, however, as most process models produce challenging MINLPs under the flexibility analysis framework. For pharmaceutical processes specifically, recent FDA initiatives have piqued interest in flexibility analysis because of the so-called design space: the region over which critical quality attributes (CQAs) can be guaranteed for a set of interactions between the inputs and process parameters. Since uncertainty is intrinsic to such operations, industry is interested in guaranteeing that CQAs hold with a set confidence level over a given operating region. In this work, the probabilistic design space defined by these confidence levels is presented to demonstrate the computational advantages of a fully model-based flexibility analysis framework over a Monte Carlo sampling approach; the flexibility analysis framework decreased design space identification time by more than two orders of magnitude.

Given the implementation difficulty of new digital tools for both students and professionals, educational material was developed for PharmaPy and presented as part of a pharmaceutical API process development course at Purdue. Surveyed afterward, many of the students found the framework approachable through the use of Jupyter notebooks and would consider using PharmaPy and Python for pharmaceutical modeling and data analysis in the future.

Through the development of software and numerical analysis frameworks, digital design of pharmaceutical processes has expanded and become more approachable. The incorporation of rigorous simulation under process uncertainty promotes the use of digital tools in regulatory filings and reduces unnecessary process development costs through model-based design. These improvements are evident in PharmaPy, the simulation-optimization framework built on it, and the flexibility analysis tools, which yielded computational benefits of one to two orders of magnitude compared to methods used in practice and, in some cases, reduced the modeling time required to determine optimal operating conditions or the design space of a pharmaceutical manufacturing process.
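For contrast with the model-based flexibility analysis, here is a minimal Python sketch of the sampling baseline: a Monte Carlo estimate of a probabilistic design space, checking at each candidate operating point how often a CQA stays in spec under parameter uncertainty. The CQA model, spec limit, and parameter distribution are toy assumptions, not PharmaPy models.

import random

random.seed(0)

def cqa(temp_c, residence_min, k):
    """Toy CQA model: impurity level as a function of the operating point
    and an uncertain kinetic parameter k."""
    return 0.05 * k * temp_c / residence_min

def in_spec_probability(temp_c, residence_min, n=2000):
    """Monte Carlo estimate of P(CQA within spec) at one operating point."""
    ok = 0
    for _ in range(n):
        k = random.gauss(1.0, 0.15)                  # uncertain parameter
        ok += cqa(temp_c, residence_min, k) <= 0.20  # assumed spec limit
    return ok / n

for temp in (20, 30, 40):
    for res in (10, 20):
        print(f"T={temp} C, tau={res} min: "
              f"P(in spec) = {in_spec_probability(temp, res):.2f}")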
87

Design space exploration using HLS in relation to code structuring

Das, Debraj January 2022
High-Level Synthesis (HLS) is a methodology for translating a model developed at a high abstraction level, e.g. in C/C++/SystemC, that describes an algorithm into a Register-Transfer Level (RTL) description such as Verilog or VHDL. The resulting RTL description is shaped by multiple user-controlled directives and by an internal design space exploration algorithm specific to the toolchain used. HLS allows designers to focus on the behaviour of the design at a higher abstraction level than the behavioural modelling available within a Hardware Description Language (HDL), as the compiler decides the movement and timing of data in the resulting design. Ericsson uses a legacy Advanced Peripheral Bus (APB)-like interface called the Memory/Register Interface (MIRI) for data movement in a subsystem of one of their Application-Specific Integrated Circuits (ASICs). This thesis attempts to upgrade the protocol to one of the higher-performance ARM Advanced Microcontroller Bus Architecture (AMBA) interfaces, Advanced High-performance Bus (AHB) or Advanced eXtensible Interface (AXI). SystemC provides a host of functionalities for defining the complete behaviour of a circuit at a high level of abstraction. This thesis explores the effect of structuring SystemC models on their synthesis, and performs design space exploration to understand the best design methodology to adopt in SystemC model design, comparing the models on final synthesis metrics such as area, timing, and register counts. The toolchain for the thesis is the Stratus HLS compiler developed by Cadence, which supports all synthesizable constructs of SystemC. Most HLS research focuses on improving the design space exploration algorithms used internally in HLS tools; however, designers can use algorithm structuring to provide the HLS engines with a better starting point. In this thesis, the Stratus toolchain is used to experiment with different models with equivalent behaviour and performance, and thereafter to extract which constructs used in the models are optimal for allowing the internal design space exploration algorithm to perform as well as possible.
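A minimal Python sketch of directive-level design space exploration of the kind HLS flows perform internally: enumerate combinations of knobs (loop unroll factor, pipelining, memory partitioning) and keep the cheapest configuration that meets a latency budget. The knob set and cost model are crude placeholders for real Stratus synthesis runs.

import itertools

UNROLL = (1, 2, 4, 8)
PIPELINE = (False, True)
PARTITION = (1, 2, 4)

def mock_synthesis(unroll, pipeline, partition):
    """Stand-in for a synthesis run: returns (latency_cycles, area_units)."""
    latency = 1024 // (unroll * (2 if pipeline else 1))
    area = 100 * unroll + (150 if pipeline else 0) + 30 * partition
    if partition < unroll // 2:       # memory ports become the bottleneck
        latency *= 2
    return latency, area

BUDGET = 200  # latency budget in cycles (illustrative)
feasible = []
for u, p, m in itertools.product(UNROLL, PIPELINE, PARTITION):
    lat, area = mock_synthesis(u, p, m)
    if lat <= BUDGET:
        feasible.append((area, lat, u, p, m))

area, lat, u, p, m = min(feasible)
print(f"best: unroll={u}, pipeline={p}, partition={m} "
      f"({lat} cycles, {area} area units)")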
88

Graphical Tools, Incorporating Cost and Optimizing Central Composite Designs for Split-Plot Response Surface Methodology Experiments

Liang, Li 14 April 2005
In many industrial experiments, completely randomized designs (CRDs) are impractical due to restrictions on randomization or the existence of one or more hard-to-change factors. Under these situations, split-plot experiments are more realistic. The two separate randomizations in split-plot experiments lead to an error structure different from that in CRDs, which affects not only response modeling but also the choice of design. In this dissertation, two graphical tools, three-dimensional variance dispersion graphs (3-D VDGs) and fraction of design space (FDS) plots, are adapted for split-plot designs (SPDs). They are used for examining and comparing different variations of central composite designs (CCDs) with standard, V- and G-optimal factorial levels. The graphical tools are shown to be informative for evaluating, and for developing strategies to improve, the prediction performance of SPDs. The overall cost of an SPD involves two types of experimental units, and an individual whole plot is often more expensive than an individual subplot and measurement. Therefore, considering only the total number of observations is likely not the best way to reflect the cost of split-plot experiments. In this dissertation, a cost formulation involving the weighted sum of the number of whole plots and the total number of observations is discussed, and three cost-adjusted optimality criteria are proposed. The effects of considering different cost scenarios on the choice of design are shown in two examples. Often in practice it is difficult for the experimenter to select only one criterion by which to find the optimal design; a realistic strategy is to select a design with good balance across multiple estimation and prediction criteria. Variations of the CCDs with the best cost-adjusted performance for estimation and prediction are studied for the combination of the D-, G-, and V-optimality criteria and for each individual criterion. / Ph. D.
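The cost formulation itself is simple to sketch in Python: cost is a weighted sum of whole plots and total observations, and candidate designs can be ranked by efficiency per unit cost. The weights, candidate designs, and efficiency values below are illustrative, not results from the dissertation.

def design_cost(n_whole_plots, n_obs, w_wp=5.0, w_obs=1.0):
    """Cost = w_wp * (number of whole plots) + w_obs * (total observations)."""
    return w_wp * n_whole_plots + w_obs * n_obs

# (name, whole plots, total observations, raw D-efficiency) -- made up
candidates = [
    ("CCD-A", 11, 44, 0.82),
    ("CCD-B", 9, 54, 0.78),
    ("CCD-C", 14, 42, 0.88),
]

for name, wp, n, d_eff in candidates:
    cost = design_cost(wp, n)
    print(f"{name}: cost={cost:.0f}, D-eff per unit cost={d_eff / cost:.4f}")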
89

Fast Code Exploration for Pipeline Processing in FPGA Accelerators

Rosa, Leandro de Souza 31 May 2019
The increasing demand for energy-efficient computing has endorsed the usage of Field-Programmable Gate Arrays (FPGAs) to create hardware accelerators for large and complex codes. However, implementing such accelerators involves two complex decisions: deciding which code snippet is the best candidate for an accelerator, and deciding how to implement that accelerator. When both decisions are considered concomitantly, the problem becomes harder, since the snippet's implementation affects the choice of snippet, creating a combined design space to be explored. As such, fast design space exploration of accelerator implementations is crucial to allow the exploration of different code snippets. However, such design space exploration suffers from several time-consuming tasks during the compilation and evaluation steps, making it unviable for snippet exploration. In this work, we focus on the efficient implementation of pipelined hardware accelerators and present our contributions to speeding up pipeline creation and its design space exploration. For loop pipelining, the proposed approaches achieve up to 100× speed-up compared to state-of-the-art methods, saving 164 hours in a full design space exploration with less than 1% impact on the final result quality. For design space exploration, the proposed methods achieve up to 9.5× speed-up, again with less than 1% impact on result quality.
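A minimal Python sketch of the standard cycle model behind loop-pipelining speed-ups: the initiation interval (II) is bounded below by resource and recurrence constraints, and the latency of n pipelined iterations is roughly depth + II * (n - 1). All numbers are illustrative; this is the textbook model, not the thesis's scheduling algorithm.

import math

def min_ii(n_ops, n_units, rec_delay, rec_distance):
    """Lower bound on II: resource-constrained and recurrence-constrained."""
    res_ii = math.ceil(n_ops / n_units)
    rec_ii = math.ceil(rec_delay / rec_distance)
    return max(res_ii, rec_ii)

def pipeline_cycles(n_iters, depth, ii):
    """Total cycles for a pipelined loop of n_iters iterations."""
    return depth + ii * (n_iters - 1)

ii = min_ii(n_ops=12, n_units=4, rec_delay=6, rec_distance=2)
sequential = 1000 * 10                    # 1000 iterations, 10-cycle body
pipelined = pipeline_cycles(1000, depth=10, ii=ii)
print(f"II={ii}, speed-up ~ {sequential / pipelined:.2f}x")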
90

Exploration of compiler optimization sequences based on hybrid complex-data-mining techniques

Martins, Luiz Gustavo Almeida 25 September 2015
Due to the large number of optimizations provided by modern compilers and the many possible orderings of these transformations, a Design Space Exploration (DSE) is necessary to search for the best compiler optimization sequence for a given function or code fragment. As this exploration is a complex and time-consuming task, we present new DSE strategies that reduce the exploration time while still selecting optimization sequences that improve the performance of each function. The approach uses a set of reference functions for which a symbolic representation of the code (its DNA) and the best optimization sequence are known. The DSE of new functions is based on a clustering approach that groups similar functions and then explores the reduced search space formed by the optimizations previously suggested for the functions in each group. The identification of similarities between functions applies three data mining techniques to the DNA representation: normalized compression distance, the Neighbor Joining phylogenetic tree reconstruction algorithm, and ambiguity-based group identification. The DSE strategies use the reduced optimization set identified by clustering in two ways: as the design space itself, or as the initial configuration of the search algorithm. In both cases, the adoption of a clustering-based pre-selection allows the use of simple and fast DSE algorithms. Several experiments evaluate the effectiveness of the proposed approach, and we investigate the impact of each technique and component employed in the selection process. Experimental results reveal that the new clustering-based DSE approach achieved a significant reduction in total exploration time while obtaining performance speedups close to those of a more extensive and expensive traditional genetic-algorithm-based search.
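The normalized compression distance at the heart of the clustering step is easy to sketch with a standard compressor: NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C is the compressed size. The 'DNA' strings below are toy stand-ins for the thesis's symbolic code representation.

import zlib

def csize(data: bytes) -> int:
    """Compressed size, standing in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = csize(x), csize(y)
    return (csize(x + y) - min(cx, cy)) / max(cx, cy)

dna_a = b"LLSBAALLSBAACRLLSB" * 20   # toy symbolic encodings of functions
dna_b = b"LLSBAALLSBAACRLLSC" * 20   # nearly identical to A
dna_c = b"XQWERTYXQWZXCVBNMX" * 20   # unrelated

print(f"NCD(A, B) = {ncd(dna_a, dna_b):.3f}")  # small: similar functions
print(f"NCD(A, C) = {ncd(dna_a, dna_c):.3f}")  # larger: dissimilar functions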
