11

The Influence of Temporal Saliency on Young Children's Estimates of Performance

Beilstein, Elizabeth A. 13 April 2007 (has links)
No description available.
12

Development of a Plug and Play Solution for Commercial Off-grid Solar Refrigeration : Presenting a Battery Supported System Providing the AC Power Required to run a Coolfinity 300L Commercial Refrigerator

de Groot, Martijn January 2021 (has links)
This report discusses the design and testing of a plug-and-play system to run Coolfinity's Icevolt 300 refrigerator on solar panels. Such a system can provide adequate cooling for food and beverages in areas with unreliable or no electricity. Currently, such systems are only available for small chest refrigerators, whereas the Icevolt 300 is a large upright commercial refrigerator with a glass door, making it ideal for shops, cafés, restaurants, and smaller distribution centres. The system contains a solar charge controller, a battery, and an inverter. First, the component specifications and the required solar panels are calculated. From those calculations, system components are evaluated, and a custom casing is designed to fit them. An OEM is chosen and the selected inverter is tested extensively; the tests show that the inverter has no problems starting the Icevolt 300 compressor at a reduced voltage. Several battery manufacturers are evaluated, and samples from three of them are obtained and tested. Samples from one manufacturer match the specifications and have no issues with the compressor's high start-up power. A full system test proves that the system works, but also indicates that the original estimate of the refrigerator's consumption was too low, meaning more PV panels are needed than originally estimated. With the information from the tests, a new model is built that estimates the performance more accurately, and a program is written to estimate performance and determine the required number of PV panels. The pilot series of the casing showed that many improvements are needed in the case design, especially regarding cost. A test has been prepared in Mali, but no test data has been obtained yet. Based on the work done, it is recommended to investigate direct DC refrigerators instead of continuing down the path of PV-to-AC systems.
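The panel-sizing step described above can be illustrated with a short, hedged sketch. The load figure, sun hours, panel wattage, and efficiencies below are illustrative placeholders, not values from the report:

```python
# Minimal sketch of off-grid PV and battery sizing for a refrigerator
# load. All numeric values are assumptions for illustration only.
import math

def required_panels(daily_load_wh: float, peak_sun_hours: float,
                    panel_w: float, system_efficiency: float) -> int:
    """Panels needed so that average daily PV yield covers the daily load."""
    yield_per_panel_wh = panel_w * peak_sun_hours * system_efficiency
    return math.ceil(daily_load_wh / yield_per_panel_wh)

def battery_capacity_wh(daily_load_wh: float, autonomy_days: float,
                        depth_of_discharge: float) -> float:
    """Battery capacity so the refrigerator survives sunless periods."""
    return daily_load_wh * autonomy_days / depth_of_discharge

if __name__ == "__main__":
    load = 1200.0  # Wh/day, assumed refrigerator consumption
    print(required_panels(load, peak_sun_hours=5.0,
                          panel_w=330.0, system_efficiency=0.75))
    print(battery_capacity_wh(load, autonomy_days=1.5,
                              depth_of_discharge=0.8))
```

Note how an underestimated `load`, as the full system test revealed, directly inflates the required panel count, which is why the consumption model had to be revised.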
13

Promoting Systematic Practices for Designing and Developing Edge Computing Applications via Middleware Abstractions and Performance Estimation

Dantas Cruz, Breno 09 April 2021 (has links)
Mobile, IoT, and wearable devices have been transitioning from passive consumers to active generators of massive amounts of user-generated data. Edge-based processing eliminates network bottlenecks and improves data privacy. However, developing edge applications remains hard, and developers often have to employ ad-hoc software development practices to meet their requirements. By doing so, they introduce low-level, hard-to-maintain code into the codebase, which is error-prone, expensive to maintain, and vulnerable in terms of security. The thesis of this research is that modular middleware abstractions, exemplar use cases, and ML-based performance estimation can make the design and development of edge applications more systematic. To prove this thesis, this dissertation comprises three research thrusts: (1) understand the characteristics of edge-based applications in terms of their runtime, architecture, and performance; (2) provide exemplary use cases to support the development of edge-based applications; (3) innovate in the realm of middleware to address the unique challenges of edge-based data transfer and processing. We provide programming support and performance estimation methodologies to help edge-based application developers improve their software development practices. This dissertation is based on three conference papers, presented at MOBILESoft 2018, VTC 2020, and IEEE SMDS 2020. / Doctor of Philosophy / Mobile, IoT, and wearable devices are generating massive volumes of user data. Processing this data can reveal valuable insights. For example, a wearable device collecting its user's vitals can use the collected data to provide health advice. Typically, the collected data is sent to remote computing resources for processing. However, due to the vastly increasing volumes of such data, it becomes infeasible to transfer it efficiently over the network. Edge computing is an emerging system architecture that employs nearby devices for processing and can be used to alleviate the aforementioned data transfer problem. However, it remains hard to design and develop edge computing applications, making this a task reserved for expert developers. This dissertation is concerned with democratizing the development of edge applications, so that the task becomes accessible to regular developers. The overriding idea is to make the design and implementation of edge applications more systematic by means of programming support, exemplary use cases, and methodologies.
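Where the abstract mentions ML-based performance estimation, a hedged sketch of the general idea follows. The features (input size, compute demand, network hops), the measurements, and the linear least-squares model are all assumptions for illustration, not the dissertation's method:

```python
# Hedged sketch: fitting a linear model that predicts edge-task latency
# from workload features. Feature names and data are illustrative only.
import numpy as np

# Columns: input size (MB), compute demand (MFLOPs), network hops.
X = np.array([[1.0, 50.0, 1.0],
              [4.0, 200.0, 2.0],
              [8.0, 400.0, 1.0],
              [16.0, 800.0, 3.0]])
y = np.array([12.0, 45.0, 80.0, 170.0])  # measured latencies (ms), assumed

# Least-squares fit with an intercept column appended.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_latency_ms(size_mb: float, mflops: float, hops: float) -> float:
    """Predict latency for an unseen task from its workload features."""
    return float(np.dot([size_mb, mflops, hops, 1.0], coef))

print(predict_latency_ms(6.0, 300.0, 2.0))
```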
14

Multiobjective combinatorial optimization techniques applied to electrical performance estimation of distribution networks

Hashimoto, Kleber 27 September 2004 (has links)
This thesis contributes to the estimation of electrical performance in the distribution of electrical energy, with implications for a wide range of distribution operation and planning problems. Electrical performance is understood as the evaluation of network congestion parameters, losses, and voltage level. The work was motivated by the distribution utilities' compulsory permanent measurement campaigns and by the regulatory agency's need to establish operational performance standards, as stated in the Distribution Code of Aneel, the Brazilian energy regulatory agency. Electrical performance estimation is formulated as a multiobjective optimization problem whose objective functions combine an evaluation of occurrence probability with an evaluation of the proximity between calculated electrical parameters and measured values. Load values are discretized according to occurrence probabilities within each interval, so the formulation results in a multiobjective combinatorial optimization problem of exponential dimension. A network reduction procedure that substantially shrinks the decision space and a network expansion procedure to recompose it are proposed. Specific heuristics are also proposed to obtain solutions with diversified and unbalanced loads. 
To apply these heuristics adequately, a metaheuristic evolutionary method is proposed and applied to build feasible solutions, ranked according to the concept of Pareto dominance. For each dominance frontier, or group of frontiers, the application builds the probabilistic distribution of current and power flow for each branch, the voltage level at every bus, and the circuit's total technical losses. The mathematical optimization formulation is flexible enough for practical application, taking into account the different implementation stages of current supervisory systems. The proposed metaheuristic evolutionary model was applied to an illustrative case, highlighting its strengths and the points to be improved.
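The Pareto-dominance ranking mentioned above is a standard construction; the sketch below assumes two objectives, both minimized, with illustrative values:

```python
# Hedged sketch of Pareto-dominance ranking into successive frontiers.
# Assumes all objectives are minimized; sample values are illustrative.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_frontiers(solutions):
    """Peel off non-dominated fronts (simple O(n^2) check per front)."""
    remaining = list(solutions)
    fronts = []
    while remaining:
        front = [s for s in remaining
                 if not any(dominates(o, s) for o in remaining if o is not s)]
        fronts.append(front)
        remaining = [s for s in remaining if s not in front]
    return fronts

# Each solution: (probability mismatch, distance to measured values), assumed.
candidates = [(0.2, 3.0), (0.5, 1.0), (0.3, 2.5), (0.6, 2.0), (0.25, 2.8)]
for rank, front in enumerate(pareto_frontiers(candidates), start=1):
    print(rank, front)
```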
16

Parameterized Partition Valuation for Parallel Logic Simulation

Hering, Klaus, Haupt, Reiner, Petri, Udo 01 February 2019 (has links)
Parallelization of logic simulation at the register-transfer and gate level is a promising way to accelerate the extremely time-consuming system simulation processes involved in designing whole processor structures. The background of this paper is the functional simulator parallelTEXSIM, which realizes simulation based on the clock-cycle algorithm on loosely coupled parallel processor systems. In preparation for parallel cycle simulation, hardware models must be partitioned, which essentially determines the efficiency of the subsequent simulation. We introduce a new method of parameterized partition valuation for use within model-partitioning algorithms. It is based on a formal definition of parallel cycle simulation involving a model of parallel computation called Communicating Processors. Parameters within the valuation function permit consideration of specific properties of both the simulation target architecture and the hardware design to be simulated. Our partition valuation method allows performance estimation with respect to the corresponding parallel simulation. This has been confirmed by tests on several models of real processors, for instance the PowerPC 604, with parallel simulation running on an IBM SP2.
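To make the idea of a parameterized partition valuation concrete, the following hedged sketch scores a partition as a weighted combination of the maximum per-processor evaluation load and the number of cut signals. This cost shape and its parameters are assumptions for illustration, not the paper's actual valuation function:

```python
# Hedged sketch of a parameterized partition valuation for parallel
# cycle simulation. The cost model is an assumption, not the paper's.

def valuate_partition(blocks, partition, comm, alpha=1.0, beta=0.5):
    """Estimate the cost of one parallel simulation cycle.

    blocks:    {block_id: evaluation cost}
    partition: {block_id: processor index}
    comm:      iterable of (src_block, dst_block) signal dependencies
    alpha, beta: tunable weights for computation vs. communication,
                 standing in for target-architecture properties.
    """
    load = {p: 0.0 for p in set(partition.values())}
    for block, cost in blocks.items():
        load[partition[block]] += cost
    # Signals crossing processor boundaries must be exchanged each cycle.
    cut = sum(1 for s, d in comm if partition[s] != partition[d])
    return alpha * max(load.values()) + beta * cut

blocks = {"b0": 4.0, "b1": 2.0, "b2": 3.0, "b3": 5.0}  # assumed costs
comm = [("b0", "b1"), ("b1", "b2"), ("b2", "b3")]
split = {"b0": 0, "b1": 0, "b2": 1, "b3": 1}
print(valuate_partition(blocks, split, comm))
```

A partitioning algorithm would call such a function to compare candidate partitions before committing to one for the actual parallel run.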
17

Development and validation of NESSIE: a multi-criteria performance estimation tool for SoC

Richard, Aliénor 18 November 2010 (has links)
The work presented in this thesis aims at validating an original multi-criteria performance estimation tool, NESSIE, dedicated to the prediction of performance in order to accelerate the design of electronic embedded systems.

This tool was developed in a previous thesis to cope with the limitations of existing design tools, and it offers a new solution to the growing complexity of current applications and electronic platforms and the multiple constraints they are subject to. More precisely, the goal of the tool is to propose a flexible framework targeting embedded systems in a generic way and to enable fast exploration of the design space, based on the estimation of user-defined criteria and a joint hierarchical representation of the application and the platform.

In this context, the purpose of the thesis is to put the original NESSIE framework to the test and analyze whether it is indeed useful and able to solve current design problems. Hence, the dissertation presents:

- A study of the state of the art in existing design tools. I propose a classification of these tools and compare them based on typical criteria. This substantial survey completes the state of the art covered in the previous work and shows that the NESSIE framework offers solutions to the limitations of these tools.
- The framework of our original mapping tool and its calculation engine. Through this presentation, I highlight the main ingredients of the tool and explain the implemented methodology.
- Two external case studies chosen to validate NESSIE, which form the core of the thesis. These case studies pose two different design problems (a reconfigurable processor, ADRES, applied to a matrix multiplication kernel, and a 3D-stacked MPSoC problem applied to a video decoder) and show the ability of our tool to target different applications and platforms.

The validation is performed by comparing a multi-criteria estimation of the performance, for a significant number of solutions, between NESSIE and the external design flow. In particular, I discuss the prediction capability of NESSIE and the accuracy of the estimation. The study is completed, for each case study, by a quantification of the modeling time and the design time in both flows, in order to analyze the gain achieved by our tool used upstream of the classical tool chain, compared to the existing design flow alone.

The results showed that NESSIE is able to predict, with a high degree of accuracy, the solutions that are the best candidates for the design in the lower design flows. Moreover, in both case studies, modeled respectively at a low and a higher abstraction level, I obtained a significant gain in design time. However, I also identified limitations that affect the modeling time and could prevent efficient use of the tool for more complex problems. To address these issues, I conclude by proposing several improvements to the framework and perspectives for further development of the tool. / Doctorate in Engineering Sciences
18

Native simulation of MPSoC: instrumentation and modeling of non-functional aspects

Matoussi, Oumaima 30 November 2017 (has links)
Modern embedded systems are endowed with a high level of parallelism and significant processing capabilities, as they integrate hundreds of cores on a single chip communicating through a network on chip. The complexity of these systems and their dedicated software should not be an excuse for long design cycles, even though the design space is enormous and the underlying design decisions are critical. Thus, design space exploration, hardware/software co-verification, and performance estimation need to be conducted within a reasonable amount of time, and early enough in the design process to avoid any tardy detection of functional or performance deficiencies.

Co-simulation platforms are becoming an increasingly important part of the design and verification steps. With instruction-interpretation-based software simulation platforms being too slow, as they model low-level details of the target system, an alternative software simulation approach known as native simulation, or host-compiled simulation, has gained momentum this past decade. Native simulation consists of compiling the embedded software to the host binary format and executing it directly on the host machine. However, this technique fails to reflect the performance of the embedded software and its actual interaction with the target hardware. So the speedup gained by native simulation comes at a price: the absence of the non-functional information (such as time and energy) needed to estimate the performance of the entire system and ensure its proper functioning. Without such information, native simulation approaches are limited to functional validation.

Yielding accurate estimates entails integrating high-level abstract models that mimic the behavior of target-specific micro-architectural components into the simulation platform, and accurately placing the obtained non-functional information in the high-level code. Back-annotating non-functional information at the right place requires a mapping between the binary instructions and the high-level code statements, which can be challenging, particularly when compiler optimizations are enabled.

In this thesis, we propose an annotation framework working at the compiler intermediate representation level to accurately annotate performance metrics extracted from the binary code, thanks to a dedicated mapping algorithm. This mapping algorithm is further enhanced to deal with aggressive compiler optimizations, such as loop unrolling, that radically alter the structure of the code. Our target architecture being a VLIW processor, we also model its instruction buffer at a high level to faithfully reproduce its timing behavior. The experiments we conducted to validate our mapping algorithm and component models yielded accurate results and high simulation speed compared to a cycle-accurate ISS of the target platform.
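A hedged sketch of the timing back-annotation idea follows; the block structure, cycle costs, and accumulation scheme are illustrative assumptions rather than the thesis' mapping algorithm:

```python
# Hedged sketch of timing back-annotation for native simulation.
# Per-block cycle costs would come from mapping the target binary to
# the IR; the values and block structure here are assumptions.

SIM_TIME_CYCLES = 0  # global simulated-time counter

def annotate(cycles: int) -> None:
    """Charge the simulated clock for one basic block."""
    global SIM_TIME_CYCLES
    SIM_TIME_CYCLES += cycles

# Costs obtained (hypothetically) by mapping binary blocks to IR blocks.
COST = {"entry": 12, "loop_body": 48, "exit": 7}

def saxpy(a, xs, ys):
    annotate(COST["entry"])          # block: function prologue
    out = []
    for x, y in zip(xs, ys):
        annotate(COST["loop_body"])  # block: one loop iteration
        out.append(a * x + y)
    annotate(COST["exit"])           # block: epilogue
    return out

saxpy(2.0, [1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
print(f"simulated time: {SIM_TIME_CYCLES} cycles")
```

The hard part the thesis addresses is producing a `COST` table that stays correct when the compiler optimizes the code, e.g. when loop unrolling merges or duplicates blocks.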
19

Scheduling Local and Remote Memory in Cluster Computers

Serrano Gómez, Mónica 02 September 2013 (has links)
Cluster computers represent a cost-effective alternative to supercomputers. In these systems, it is common to constrain the memory address space of a given processor to the local motherboard. Constraining the system in this way is much cheaper than using a full-fledged shared-memory implementation among motherboards. However, memory usage among motherboards may be unevenly balanced, depending on the memory requirements of the applications running on each motherboard. This situation can lead to disk swapping, which severely degrades system performance, even though there may be unused memory on other motherboards. A straightforward solution is to increase the amount of available memory in each motherboard, but the cost of this solution may become prohibitive. On the other hand, remote memory access (RMA) hardware provides fast interconnects among the motherboards of a cluster computer. In recent works, this characteristic has been used to extend the addressable memory space of selected motherboards. In this work, the baseline machine uses this capability as a fast mechanism to allow the local OS to access DRAM memory installed in a remote motherboard. In this context, efficient memory scheduling becomes a major concern, since main-memory latencies have a strong impact on the overall execution time of applications, given that remote memory access latencies may be several orders of magnitude higher than local ones. Additionally, changing the memory distribution is a slow process that may involve several motherboards; hence, the memory scheduler needs to make sure that the target distribution provides better performance than the current one.

This dissertation addresses the aforementioned issues by proposing several memory scheduling policies. First, an ideal algorithm and a heuristic strategy to assign main memory from the different memory regions are presented. Additionally, a quality-of-service control mechanism has been devised to prevent unacceptable performance degradation for the running applications. The ideal algorithm finds the optimal memory distribution, but its computational cost is prohibitive for a high number of applications. This drawback is handled by the heuristic strategy, which approximates the best local and remote memory distribution among applications at an acceptable computational cost.

The previous algorithms are based on profiling. To deal with this potential shortcoming, we focus on analytical solutions. This dissertation proposes an analytical model that estimates the execution time of a given application for a given memory distribution. This technique is used as a performance predictor that provides the input to a memory scheduler. The estimates are used by the memory scheduler to dynamically choose the optimal target memory distribution for each application running in the system, in order to achieve the best overall performance.

Scheduling at a higher granularity allows simpler scheduling policies. This work studies the feasibility of scheduling at OS-page granularity. A conventional hardware-based block interleaving and an OS-based page interleaving are taken as the baseline schemes. From the comparison of the two baseline schemes, we have concluded that only the performance of some applications is significantly affected by page-based interleaving. The reasons behind this impact on performance have been studied and have provided the basis for the design of two OS-based memory allocation policies. The first one, on-demand (OD), is a simple strategy that places new pages in local memory until this region is full, benefiting from the premise that the most accessed pages are requested and allocated earlier than the least accessed ones. Nevertheless, when this premise does not hold for some benchmarks, OD performs worse. The second policy, most-accessed in-local (Mail), is proposed to avoid this problem.

Serrano Gómez, M. (2013). Scheduling Local and Remote Memory in Cluster Computers [Doctoral thesis]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31639
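The analytical predictor and scheduler described in this abstract can be illustrated with a hedged sketch; the linear latency model and every parameter value below are assumptions for illustration, not the dissertation's model:

```python
# Hedged sketch: an analytical model estimating execution time from the
# local/remote memory split, used to divide a limited local-page budget
# between two applications. All numbers are illustrative assumptions.

def exec_time_s(cpu_s, mem_accesses, local_fraction,
                lat_local_ns=100.0, lat_remote_ns=2000.0):
    """CPU time plus memory stall time under a given memory split."""
    avg_lat_ns = (local_fraction * lat_local_ns
                  + (1.0 - local_fraction) * lat_remote_ns)
    return cpu_s + mem_accesses * avg_lat_ns * 1e-9

# Assumed per-application profiles: (name, cpu seconds, accesses, pages needed).
apps = [("A", 40.0, 5e9, 1000), ("B", 25.0, 1e9, 1000)]
LOCAL_BUDGET = 1200  # local pages available to share, assumed

def total_time(pages_for_a: int) -> float:
    """Total estimated time when A gets pages_for_a local pages, B the rest."""
    grants = (pages_for_a, LOCAL_BUDGET - pages_for_a)
    t = 0.0
    for (name, cpu, accesses, need), granted in zip(apps, grants):
        t += exec_time_s(cpu, accesses, min(granted, need) / need)
    return t

# The scheduler exhaustively tries candidate splits and keeps the best.
best = min(range(0, LOCAL_BUDGET + 1, 100), key=total_time)
print(best, LOCAL_BUDGET - best, total_time(best))
```

With these made-up profiles, the memory-hungry application A receives the bulk of the local pages, which mirrors the dissertation's point that estimates, rather than blind balancing, should drive the target distribution.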
20

Performance Estimation for Design Space Exploration in Complex HW/SW Embedded Systems

Posadas Cobo, Héctor 01 July 2011 (has links)
Estimating and verifying the performance of embedded system designs as early as possible in the design process is a milestone of great importance. Fast estimation tools are required to evaluate different design possibilities, such as HW/SW partitioning or resource allocation, to verify the fulfillment of system constraints, and to support design space exploration flows. In this context, the thesis proposes a tool capable of simulating embedded systems using source-code annotation; at the cost of some accuracy, very fast simulations are obtained with minimal design effort. Several tasks were performed to develop this tool. First, the SystemC language was extended with primitives of a real-time operating system (RTOS), enabling the correct simulation, scheduling, debugging, and refinement of embedded SW modules. The second element added is an infrastructure capable of dynamically estimating and annotating timing information for each basic block in the source code, enabling timed simulations of the SW and verification of the required characteristics. Alongside the SW elements, high-level generic TLM components were developed to model the main elements of an embedded HW platform, such as buses, memories, and communication networks. Finally, additional components were developed to integrate this infrastructure into automatic design space exploration flows driven by initial system descriptions in XML format. The simulation and performance estimation infrastructure has been developed and tested in several European projects and in collaboration with private companies.
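As a hedged illustration of annotation-based timed simulation with an OS-like scheduler on top (the segment costs and the toy scheduler below are assumptions for illustration, not the thesis' SystemC infrastructure):

```python
# Hedged sketch: annotated SW tasks driving a toy timed scheduler, in
# the spirit of source-annotation-based simulation. Segment costs and
# the scheduling policy are illustrative assumptions.
import heapq

def task_sensor():
    yield 150   # annotated cost of segment 1 (us): read and filter sample
    yield 40    # segment 2: push result to queue

def task_control():
    yield 300   # segment 1: control-law computation
    yield 60    # segment 2: actuator write

def run(tasks, horizon_us=1000):
    """Advance tasks segment by segment, ordered by simulated time."""
    ready = [(0, i, t()) for i, t in enumerate(tasks)]
    heapq.heapify(ready)
    while ready:
        now, i, gen = heapq.heappop(ready)
        if now >= horizon_us:
            break
        try:
            cost = next(gen)  # execute one annotated segment
            print(f"t={now:4d}us task{i} ran a segment ({cost}us)")
            heapq.heappush(ready, (now + cost, i, gen))
        except StopIteration:
            print(f"t={now:4d}us task{i} finished")

run([task_sensor, task_control])
```

The per-segment costs stand in for the timing data that the annotation infrastructure would extract per basic block; the heap stands in for the RTOS model's scheduler.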
