The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
41

Throughput Constrained and Area Optimized Dataflow Synthesis for FPGAs

Sun, Hua 21 February 2008 (has links) (PDF)
Although high-level synthesis has been researched for many years, synthesizing minimum hardware implementations under a throughput constraint for computationally intensive algorithms remains a challenge. In this thesis, three important techniques are studied carefully and applied in an integrated way to meet this challenging synthesis requirement. The first is pipeline scheduling, which generates a pipelined schedule that meets the throughput requirement. The second is module selection, which decides the most appropriate circuit module for each operation. The third is resource sharing, which reuses a circuit module by sharing it between multiple operations. This work shows that combining module selection and resource sharing while performing pipeline scheduling can significantly reduce the hardware area, by either using slower, more area-efficient circuit modules or by time-multiplexing faster, larger circuit modules, while meeting the throughput constraint. The results of this work show that the combined approach can generate on average 43% smaller hardware than is possible when a single technique (resource sharing or module selection) is applied. There are four major contributions of this work. First, given a fixed throughput constraint, it explores all feasible frequency and data introduction interval design points that meet this throughput constraint. This enlarged pipelining design space exploration results in superior hardware architectures compared to previous pipeline synthesis work because of the larger space explored. Second, the module selection algorithm in this work considers different module architectures, as well as different pipelining options for each architecture. This not only addresses the unique architecture of most FPGA circuit modules, it also performs retiming at the high-level synthesis level. Third, this work proposes a novel approach that integrates the three inter-related synthesis techniques of pipeline scheduling, module selection and resource sharing. To the author's best knowledge, this is the first attempt to do so. The integrated approach is able to identify more efficient hardware implementations than when only one or two of the three techniques are applied. Fourth, this work proposes and implements several algorithms that explore the combined pipeline scheduling, module selection and resource sharing design space and identify the most efficient hardware architecture under the synthesis constraint. These algorithms explore the combined design space in different ways, representing the trade-off between algorithm execution time and the size of the explored design space.
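As a concrete illustration of the enlarged design space described in this abstract, the following minimal Python sketch enumerates (frequency, data introduction interval) points that satisfy a throughput constraint and estimates area under simple module selection and resource sharing. The module library, numbers and cost model are invented for illustration and are not taken from the thesis.

```python
"""Sketch: enumerate (frequency, initiation-interval) points meeting a
throughput constraint and estimate area with module selection + sharing.
All module data below is made up for illustration."""

# Hypothetical module library: name -> (max_freq_MHz, area_LUTs)
ADD_MODULES = {"adder_slow": (100, 120), "adder_fast": (250, 300)}
MUL_MODULES = {"mult_slow": (100, 700), "mult_fast": (250, 1500)}

def feasible_design_points(throughput_msps, freqs_mhz, max_ii):
    """Yield (freq, ii) pairs whose sustained rate freq/ii meets the target."""
    for f in freqs_mhz:
        for ii in range(1, max_ii + 1):
            if f / ii >= throughput_msps:      # one result every ii cycles
                yield f, ii

def area_estimate(freq, ii, n_adds, n_muls):
    """Pick the cheapest module fast enough for `freq`; with initiation
    interval `ii`, one fully pipelined module can be time-shared by up to
    `ii` operations of the same type."""
    def pick(modules):
        ok = [a for fmax, a in modules.values() if fmax >= freq]
        return min(ok) if ok else None
    add_area, mul_area = pick(ADD_MODULES), pick(MUL_MODULES)
    if add_area is None or mul_area is None:
        return None                            # no module runs at this clock
    adders = -(-n_adds // ii)                  # ceil: shared modules needed
    mults = -(-n_muls // ii)
    return adders * add_area + mults * mul_area

if __name__ == "__main__":
    best = None
    for f, ii in feasible_design_points(throughput_msps=50,
                                        freqs_mhz=[100, 250], max_ii=8):
        area = area_estimate(f, ii, n_adds=6, n_muls=4)
        if area is not None and (best is None or area < best[0]):
            best = (area, f, ii)
    print("smallest design:", best)   # (area, freq_MHz, initiation_interval)
```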
42

Distribuované systémy na platformě .NET Framework / Distributed Systems on the .NET Framework Platform

Vítek, Martin January 2009 (has links)
With the expansion of Internet communication and the related availability of an increasing number of services built on different technologies, distributed systems represent a solution for integrating these network services and providing them to users in a coherent form. The .NET Framework, which provides an environment for application development in the highly distributed environment of the Internet and intranets, can be used to achieve this goal. This PhD thesis deals with access to shared resources in the context of distributed systems using the .NET platform. The first part of the work is devoted to describing the basic principles of distributed systems and the .NET platform techniques that can be used to implement those principles. For processing requests of an asynchronous nature, not only in distributed systems, a universal interface for describing asynchronous operations was designed and implemented; it extends the standard asynchronous techniques of the .NET platform. To address the issue of access to shared resources, a model was designed based on the principles of object-oriented programming, along with a basic algorithm to avoid deadlock when resources are used by multiple processes (threads) simultaneously. This extensible model has been successfully implemented and its functionality verified in basic scenarios of access to shared resources. After the definition of resources and their dependencies, the implemented model allows working with resources as with any other objects on the .NET platform; synchronization proceeds transparently in the background.
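A brief sketch of one classic deadlock-avoidance idea mentioned above (acquiring all needed resource locks in a single global order) follows. It is written in Python for illustration and is not taken from the thesis, whose implementation targets the .NET platform.

```python
"""Sketch: avoid deadlock when a task needs several shared resources at once
by always acquiring their locks in one global, canonical order."""
import threading

class Resource:
    _counter = 0
    def __init__(self, name):
        self.name = name
        self.lock = threading.Lock()
        # Global ordering key: resources are always locked in creation order.
        Resource._counter += 1
        self.order = Resource._counter

class ResourceScope:
    """Context manager that locks a set of resources in canonical order,
    so two tasks needing overlapping sets can never deadlock."""
    def __init__(self, *resources):
        self.resources = sorted(resources, key=lambda r: r.order)
    def __enter__(self):
        for r in self.resources:
            r.lock.acquire()
        return self.resources
    def __exit__(self, *exc):
        for r in reversed(self.resources):
            r.lock.release()

if __name__ == "__main__":
    a, b = Resource("printer"), Resource("database")
    def worker(first, second, label):
        for _ in range(1000):
            with ResourceScope(first, second):   # order is normalized inside
                pass
        print(label, "done")
    t1 = threading.Thread(target=worker, args=(a, b, "t1"))
    t2 = threading.Thread(target=worker, args=(b, a, "t2"))  # reversed request order
    t1.start(); t2.start(); t1.join(); t2.join()
```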
43

SecuRES: Secure Resource Sharing System : An investigation into the use of public ledger technology to create decentralized digital resource-sharing systems

Leung, Philip, Svensson, Daniel January 2015 (has links)
The project aims at solving the problems of non-repudiation, integrity and confidentiality of data when digitally exchanging sensitive resources between parties that need to be able to trust each other without the need for a trusted third party. This is done in the framework of answering to what extent digital resources can be shared securely in a decentralized, public ledger-based system compared to trust-based alternatives. A survey of existing resource-sharing solutions shows an abundance of third-party, trust-based systems, but also an interest in public ledger solutions in the form of the Storj network, which uses such technology but focuses on storage rather than sharing. The proposed solution, called SecuRES, is a communication protocol based on public ledger technology which acts similarly to Bitcoin. A prototype based on the protocol has been implemented which demonstrates the ability to share encrypted files with one or several recipients through a decentralized, public ledger-based network. It was concluded that the SecuRES solution could do away with the requirement of trust in third parties for all but some optional operations that use external authentication services. This is achieved while still maintaining data integrity to a degree similar to or greater than trust-based solutions, and it offers the additional benefits of non-repudiation, high confidentiality and high transparency, since source code and protocol documentation can be made openly available without endangering the system. Further research is needed to investigate whether the system can scale up for widespread adoption while maintaining security and reasonable performance requirements.
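To illustrate the integrity property that a public ledger provides, here is a toy Python sketch of a hash-chained log of file-sharing records. It is not the SecuRES protocol, and it omits the digital signatures that real non-repudiation would require.

```python
"""Sketch: a toy append-only ledger of file-sharing records, hash-chained
for integrity (real non-repudiation would additionally require digital
signatures, omitted here)."""
import hashlib, json, time

def _digest(record):
    # Canonical JSON so the hash is stable regardless of key order.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class Ledger:
    def __init__(self):
        self.blocks = []

    def append_share(self, sender, recipient, ciphertext_hash):
        """Record that `sender` shared an encrypted file with `recipient`."""
        record = {
            "prev": self.blocks[-1]["hash"] if self.blocks else "0" * 64,
            "time": time.time(),
            "sender": sender,
            "recipient": recipient,
            "ciphertext_hash": ciphertext_hash,
        }
        record["hash"] = _digest(record)
        self.blocks.append(record)

    def verify(self):
        """Integrity check: every block's hash and back-link must match."""
        prev = "0" * 64
        for b in self.blocks:
            body = {k: v for k, v in b.items() if k != "hash"}
            if b["prev"] != prev or _digest(body) != b["hash"]:
                return False
            prev = b["hash"]
        return True

if __name__ == "__main__":
    ledger = Ledger()
    ledger.append_share("alice", "bob", hashlib.sha256(b"encrypted bytes").hexdigest())
    print("intact:", ledger.verify())
    ledger.blocks[0]["recipient"] = "mallory"       # tamper with the record
    print("after tampering:", ledger.verify())
```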
44

Managing resource sharing in selected Seventh-day Adventist tertiary institutions in Sub-Saharan Africa: problems and prospects

Adeogun, Margaret Olufunke 30 November 2004 (has links)
Universities in the new millennium find themselves in a knowledge-driven economy that is challenging them to produce a qualified and adaptable work force if they are to contribute to societal development. Owing to the structural change in the economy, entrepreneurs require high-level scientists, professionals and technicians who not only have the capability to create and support innovations by adapting knowledge to local use, but who also have managerial and lifelong learning skills. Such people can accelerate change and make organizations more productive and efficient in the services they render. Consequently, universities in Sub-Saharan Africa are challenged to transform learning so as to produce graduates who have both knowledge and competencies. Such a system will create a balance between university education and the changing labour market. Satisfying these new educational demands is only possible through research and unhindered access to global information resources. Paradoxically, some private university libraries, because of limited funding, find themselves fiscally constrained in providing unhindered access to global stores of information, particularly at a time of exponential growth in both the number and cost of information resources. This has led libraries to re-examine resource sharing as a viable option for meeting the new demands placed on universities. It is for these reasons that this study examines the practice, problems and prospects of resource sharing in selected Seventh-day Adventist university libraries in Sub-Saharan Africa. It examines scientifically the causes of poor sharing practices that are unique to each library, and the situational and environmental factors that can enhance resource sharing. It also provides research-based information that will help determine the best ways by which each library can gain greater access to information resources. There are proposals for resolving the problems, and recommendations for dealing with the matter on a more permanent basis. The study advances a resource-sharing model called the Consortium of Adventist University Libraries in Africa (CAULA) as a resource-sharing network for Seventh-day Adventist libraries in Africa. The organizational structure for CAULA is outlined and discussed. The proposed cooperation is not only sustainable but also structured to provide efficiency and greater regional cooperation among SDA libraries in Sub-Saharan Africa. / Information Science / DLITT ET PHIL (INF SCIENCE)
45

Library automation as a prerequisite for 21st century library service provision for Lesotho library consortium libraries

Monyane, Mamoeletsi Cecilia 07 1900 (has links)
Library automation is approaching its 90th birthday (deduced from Pace, 2009:1), and many librarians no longer remember the inefficiencies of the manual systems that were previously in place. For some, however, automation has not gone nearly far enough. In this second decade of the new millennium, some libraries in Lesotho face multiple challenges in automating their services, while libraries internationally stay relevant by rapidly adapting their services to address the needs and demands of their clients. It was anticipated that full library automation is a prerequisite for delivering 21st-century library services, and the researcher embarked on a process to establish whether libraries belonging to the Lesotho Library Consortium (LELICO) have automated to the extent where they are able to provide the services that are currently in demand. The purpose of this study was to analyse whether full library automation is indeed a prerequisite for libraries to offer the services required in the current millennium. The study focused on LELICO member libraries, and benchmarking was done with selected South African academic libraries. Data were collected by means of interviews with all respondents, namely LELICO member libraries, librarians from South African libraries, and international system vendors operating from South Africa. The study found that LELICO member libraries are indeed lagging behind in terms of service provision. LELICO member libraries do not appear to understand which library services become possible when state-of-the-art technology is fully implemented. The study found, furthermore, that this laggard status is caused by factors such as a lack of funding, too few professional staff and ineffective support from management. These and other findings helped formulate recommendations to underpin a renewal strategy for LELICO. The proposed recommendations include that LELICO should deliver a more meaningful service to its current members, that LELICO member libraries should use technology more effectively in their operations, that a good relationship between a system vendor and its clients should be seen as an asset to be maintained, and that LELICO should play a key role in making change a reality. / Information Science / M.A. (Information Science)
46

Energy-aware scheduling : complexity and algorithms

Renaud-Goud, Paul 05 July 2012 (has links) (PDF)
In this thesis we have tackled a few scheduling problems under an energy constraint, since the energy issue is becoming crucial for both economic and environmental reasons. In the first chapter, we exhibit tight bounds on the energy metric of a classical algorithm that minimizes the makespan of independent tasks. In the second chapter, we schedule several independent but concurrent pipelined applications and address problems combining multiple criteria, namely period, latency and energy. We perform an exhaustive complexity study and describe the performance of new heuristics. In the third chapter, we study the replica placement problem in a tree network. We try to minimize the energy consumption in a dynamic setting. After a complexity study, we confirm the quality of our heuristics through a complete set of simulations. In the fourth chapter, we come back to streaming applications, but in the form of series-parallel graphs, and try to map them onto a chip multiprocessor. The design of a polynomial algorithm for a simple variant of the problem allows us to derive heuristics for the most general problem, whose NP-completeness has been proven. In the fifth chapter, we study energy bounds of different routing policies in chip multiprocessors, compared to the classical XY routing, and develop new routing heuristics. In the last chapter, we compare the performance of different algorithms from the literature that tackle the problem of mapping DAG applications so as to minimize energy consumption.
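The makespan/energy trade-off underlying this work can be illustrated with a small Python sketch combining list scheduling with the common cubic dynamic-power model; the model and the numbers are illustrative assumptions, not results from the thesis.

```python
"""Sketch: greedy makespan scheduling of independent tasks plus a common
energy model (dynamic power proportional to f**3, so energy per unit of
work grows as f**2). Illustrates the energy/makespan trade-off."""
import heapq

def greedy_schedule(work, n_procs):
    """Longest-Processing-Time-first list scheduling; returns per-processor load."""
    loads = [0.0] * n_procs
    heap = [(0.0, p) for p in range(n_procs)]
    heapq.heapify(heap)
    for w in sorted(work, reverse=True):
        load, p = heapq.heappop(heap)     # least-loaded processor so far
        loads[p] = load + w
        heapq.heappush(heap, (loads[p], p))
    return loads

def energy(load, freq):
    """E = P * t with P ~ freq**3 and t = load / freq  ->  E ~ load * freq**2."""
    return load * freq ** 2

if __name__ == "__main__":
    work = [5, 9, 3, 7, 4, 8, 2, 6]       # task execution times at frequency 1.0
    loads = greedy_schedule(work, n_procs=3)
    deadline = 1.25 * max(loads)           # allowed makespan: 25% slack

    e_fast = sum(energy(l, 1.0) for l in loads)
    # Slow each processor down just enough to still finish by the deadline.
    e_slow = sum(energy(l, l / deadline) for l in loads)
    print(f"makespan at f=1.0: {max(loads):.1f}, energy: {e_fast:.1f}")
    print(f"energy with per-processor frequency scaling: {e_slow:.1f}")
```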
48

Gestion hospitalière en situation d'exception : optimisation des ressources critiques / Hospital disaster management : optimization of critical resources

Nouaouri, Issam 12 May 2010 (has links)
Disasters like terrorist attacks, earthquakes and hurricanes often cause a high degree of damage, and thousands of people might be affected. The 2006 annual report of the International Federation of Red Cross and Red Crescent Societies shows that the number of natural and man-made disasters has increased considerably over the last decades. In such situations, hospitals must be able to receive injured persons for medical and surgical treatment; the routine means of the health-care system are often overwhelmed by the massive influx of victims, so optimization of the different medical resources is fundamental to saving human lives. In this context, we propose in this thesis to study the optimization of critical human and material resources in hospital management, focusing on surgeons and operating rooms in crisis situations. The goal is to treat the maximum number of victims and thus save the maximum number of human lives. Our research consists of two phases: (1) sizing the critical resources during the preparedness phase of the disaster management plan, the so-called white plan ("plan blanc"); and (2) an operational phase that optimizes the scheduling of surgical acts in the operating rooms. We also study the impact of sharing resources on the number of treated victims. One of the challenges of operating-room scheduling in exceptional situations is the ability to cope with disruptions; in this setting, we address a reactive problem of optimizing the scheduling of surgical acts in the operating rooms. We consider various possible disruptions: a surgical procedure exceeding its assessed duration, the insertion of a new victim into the operating schedule, and a change in a victim's emergency level. This work was achieved in collaboration with several public health institutions (hospitals, ministries, etc.) in both France and Tunisia. The empirical study shows that the proposed approaches offer substantial decision-making support.
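The operational scheduling problem can be illustrated with a small Python sketch that assigns victims to operating rooms by earliest deadline; the greedy rule and the numbers are illustrative only and do not reproduce the methods of the thesis.

```python
"""Sketch: assign victims to operating rooms so as to treat as many as
possible before their individual care deadlines (earliest-deadline-first
greedy). Also hints at the sizing question: more rooms, more victims treated."""

def schedule_victims(victims, n_rooms):
    """victims: list of (victim_id, surgery_duration_h, deadline_h).
    Returns (treated_ids, per-room finish times)."""
    room_free = [0.0] * n_rooms            # time at which each room becomes free
    treated = []
    # Most urgent victims (earliest deadline) are considered first.
    for vid, duration, deadline in sorted(victims, key=lambda v: v[2]):
        room = min(range(n_rooms), key=lambda r: room_free[r])
        if room_free[room] + duration <= deadline:   # surgery ends in time
            room_free[room] += duration
            treated.append(vid)
    return treated, room_free

if __name__ == "__main__":
    victims = [("v1", 2.0, 3.0), ("v2", 1.0, 2.0), ("v3", 3.0, 8.0),
               ("v4", 2.5, 5.0), ("v5", 1.5, 4.0)]
    for rooms in (1, 2):
        treated, busy = schedule_victims(victims, n_rooms=rooms)
        print(f"{rooms} room(s): treated {treated}, busy until {busy}")
```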
49

Sécurité et vie privée dans les applications web / Web applications security and privacy

Somé, Dolière Francis 29 October 2018 (has links)
In this thesis, we studied security and privacy threats in web applications and browser extensions. There are many attacks targeting the web, of which XSS (Cross-Site Scripting) is one of the most notorious. Third-party tracking is the ability of an attacker to benefit from its presence in many web applications in order to track the user as she browses the web and build her browsing profile. Extensions are third-party software that users install to extend their browser's functionality and improve their browsing experience. Malicious or poorly programmed extensions can be exploited by attackers in web applications in order to benefit from the extensions' privileged capabilities and access sensitive user information. Content Security Policy (CSP) is a security mechanism for mitigating the impact of content injection attacks in general, and XSS in particular. The Same Origin Policy (SOP) is a security mechanism implemented by browsers to isolate web applications of different origins from one another. In a first work on CSP, we analyzed the interplay of CSP with the SOP and demonstrated that the latter allows the former to be bypassed. We then scrutinized the three CSP versions and found that a CSP is interpreted differently depending on the browser, the version of CSP it implements, and how compliant the implementation is with respect to the specification. To help developers deploy effective policies that encompass all these differences in CSP versions and browser implementations, we proposed the deployment of dependency-free policies that effectively protect against attacks in all browsers. Finally, previous studies have identified many limitations of CSP. We reviewed the different solutions proposed in the wild and showed that they do not fully mitigate the identified shortcomings of CSP. Therefore, we proposed to extend the CSP specification and showed the feasibility of our proposals with an example implementation. Regarding third-party tracking, we introduced and implemented a web architecture that prevents user tracking and can be deployed by web developers willing to include third-party content in their applications. Intuitively, third-party requests are automatically routed to a trusted middle-party server which removes tracking information from the requests. Finally, considering browser extensions, we first showed that the extensions a user installs and the websites she is logged into can serve to uniquely identify and track her. We then studied the communications between browser extensions and web applications and demonstrated that malicious or poorly programmed extensions can be exploited by web applications to benefit from the extensions' privileged capabilities. We also demonstrated that extensions can disable the Same Origin Policy by tampering with CORS headers. All this enables web applications to read sensitive user information, such as emails or social network profiles. To mitigate these threats, we proposed countermeasures, along with a more fine-grained permission system and review process for browser extensions. We believe that this can help browser vendors identify malicious extensions and warn users about the threats posed by the extensions they install.
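The CORS tampering issue described above can be illustrated with a small Python sketch that models the browser's cross-origin read check and shows how an extension rewriting response headers effectively disables the Same Origin Policy; it is an illustration under simplified assumptions, not code from the thesis.

```python
"""Sketch: a tiny model of the browser's CORS check that decides whether a
cross-origin page may read a response, and what happens when an extension
rewrites the response headers."""

def browser_allows_read(requesting_origin, response_headers, with_credentials=True):
    """Simplified CORS read check as performed by the browser."""
    allow_origin = response_headers.get("Access-Control-Allow-Origin")
    allow_creds = response_headers.get("Access-Control-Allow-Credentials") == "true"
    if allow_origin == "*":
        # A wildcard is never valid for credentialed requests.
        return not with_credentials
    return allow_origin == requesting_origin and (not with_credentials or allow_creds)

def malicious_extension_rewrite(requesting_origin, response_headers):
    """An extension with header-rewriting privileges can reflect the caller's
    origin and allow credentials, effectively disabling the Same Origin Policy."""
    patched = dict(response_headers)
    patched["Access-Control-Allow-Origin"] = requesting_origin
    patched["Access-Control-Allow-Credentials"] = "true"
    return patched

if __name__ == "__main__":
    attacker = "https://attacker.example"
    webmail_response = {}   # the webmail site sends no CORS headers: cross-origin reads denied
    print("normal browser:", browser_allows_read(attacker, webmail_response))
    tampered = malicious_extension_rewrite(attacker, webmail_response)
    print("with tampering extension:", browser_allows_read(attacker, tampered))
```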
50

Interference Analysis and Resource Management in Server Processors: from HPC to Cloud Computing

Pons Escat, Lucía 01 September 2023 (has links)
One of the main concerns of today's data centers is to maximize server utilization. In each server processor, multiple applications are executed concurrently, increasing resource efficiency. However, performance and fairness highly depend on the share of resources that each application receives, leading to performance unpredictability. The rising number of cores (and running applications) with every new generation of processors is leading to a growing concern for interference at the shared resources. This thesis focuses on addressing resource interference when different applications are consolidated on the same server processor from two main perspectives: high-performance computing (HPC) and cloud computing. In the context of HPC, resource management approaches are proposed to reduce inter-application interference at two major critical resources: the last level cache (LLC) and the processor cores. The LLC plays a key role in the system performance of current multi-cores by reducing the number of long-latency main memory accesses. LLC partitioning approaches are proposed for both inclusive and non-inclusive LLCs, as both designs are present in current server processors. In both cases, newly problematic LLC behaviors are identified and efficiently detected, granting a larger cache share to those applications that make the best use of the LLC space. As for processor cores, many parallel applications, like graph applications, do not scale well with an increasing number of cores. Moreover, the default Linux time-sharing scheduler performs poorly when running graph applications, which process vast amounts of data. To maximize system utilization, this thesis proposes to co-locate multiple graph applications on the same server processor by assigning the optimal number of cores to each one, dynamically adapting the number of threads spawned by the running applications. When studying the impact of system-shared resources on cloud computing, this thesis addresses three major challenges: the complex infrastructure of cloud systems, the nature of cloud applications, and the impact of inter-VM interference. Firstly, this thesis presents the experimental platform developed to perform representative cloud studies with the main cloud system components (hardware and software). Secondly, an extensive characterization study is presented on a set of representative latency-critical workloads which must meet strict quality of service (QoS) requirements. The aim of the studies is to outline issues cloud providers should consider to improve performance and resource utilization. Finally, we propose an online approach that detects and accurately estimates inter-VM interference when co-locating multiple latency-critical VMs. The approach relies on metrics that can be easily monitored in the public cloud, as VMs are handled as "black boxes". The research described above is carried out following the restrictions and requirements to be applicable to public cloud production systems. In summary, this thesis addresses contention in the main system shared resources in the context of server consolidation, both in HPC and cloud computing. Experimental results show that important gains are obtained over the Linux OS scheduler by reducing interference. In inclusive LLCs, turnaround time (TT) is reduced by over 40% while improving IPC by more than 3%. In non-inclusive LLCs, fairness and TT are improved by 44% and 24%, respectively, while improving performance by up to 3.5%. By distributing core resources efficiently, almost perfect fairness can be obtained (94%), and TT can be reduced by up to 80%. In cloud computing, performance degradation due to resource contention can be estimated with an overall prediction error of 5%. All the approaches proposed in this thesis have been designed to be applied in commercial server processors without requiring any prior information, making decisions dynamically with data collected from hardware performance counters. / Pons Escat, L. (2023). Interference Analysis and Resource Management in Server Processors: from HPC to Cloud Computing [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/195840
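As an illustration of black-box interference estimation from easily monitored metrics, here is a small Python sketch that flags co-located VMs whose cycles-per-instruction rises above a solo-run baseline; the simple CPI-ratio model and the numbers are assumptions for illustration, not the estimator proposed in the thesis.

```python
"""Sketch: estimate inter-VM interference from coarse, externally observable
per-VM counters (cycles and instructions), relative to a solo-run baseline."""

def cpi(cycles, instructions):
    """Cycles per instruction from two hardware counters."""
    return cycles / instructions

def estimate_degradation(solo_cpi, colocated_cycles, colocated_instructions):
    """Relative slowdown of a VM versus its measured solo baseline."""
    return cpi(colocated_cycles, colocated_instructions) / solo_cpi - 1.0

def flag_interference(vms, threshold=0.10):
    """Report VMs whose estimated slowdown exceeds `threshold` (e.g. 10%)."""
    offenders = {}
    for name, (solo_cpi, cycles, instr) in vms.items():
        slowdown = estimate_degradation(solo_cpi, cycles, instr)
        if slowdown > threshold:
            offenders[name] = round(slowdown, 3)
    return offenders

if __name__ == "__main__":
    # (solo CPI baseline, cycles and instructions sampled while co-located)
    vms = {
        "memcached-vm": (0.80, 9.5e9, 9.0e9),   # CPI 1.06 -> ~32% slowdown
        "batch-vm":     (1.20, 1.3e10, 1.0e10), # CPI 1.30 -> ~8% slowdown
    }
    print("interfering VMs:", flag_interference(vms))
```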
