41

GRID superscalar: a programming model for the Grid

Sirvent Pardell, Raül 03 February 2009
In recent years the Grid has emerged as a new platform for distributed computing. Grid technology allows resources from different administrative domains to be joined into a single virtual supercomputer. Many research groups have dedicated their efforts to developing a set of basic services to offer a Grid middleware: a layer that enables the use of the Grid. However, using these services is not an easy task for many end users, especially when their expertise is not related to computer science. This has a negative influence on the adoption of Grid technology by the scientific community: it is seen as a powerful technology but one very difficult to exploit. To ease the use of the Grid, an extra layer is needed that hides all of its complexity and allows users to program or port their applications in an easy way.

There have been many proposals of programming tools for the Grid. In this thesis we give an overview of some of them, showing that both Grid-aware and Grid-unaware environments exist (programmed with or without specifying details of the Grid, respectively). Moreover, very few existing tools can exploit the implicit parallelism of the application; in most of them, the user must define the parallelism explicitly. Another important feature we consider is whether they are based on widely used programming languages (such as C++ or Java), which eases adoption by end users.

In this thesis, our main objective has been to create a programming model for the Grid based on sequential programming and well-known imperative programming languages, able to exploit the implicit parallelism of applications and to speed them up by using Grid resources concurrently. Moreover, because the Grid is distributed, heterogeneous and dynamic in nature, and because the number of resources that form a Grid can be very large, the probability that an error arises during an application's execution is high. Thus, another of our objectives has been to deal automatically with any type of error that may arise during execution, whether application-related or Grid-related.

GRID superscalar (GRIDSs), the main contribution of this thesis, is a programming model that achieves these objectives by providing a very small and simple interface and a runtime that is able to execute the provided code in parallel on the Grid. Our programming interface allows a user to program a Grid-unaware application in well-known and popular imperative languages (such as C/C++, Java, Perl or shell script) and in a sequential fashion, an important step towards helping end users adopt Grid technology.

We have applied our knowledge of computer architecture and microprocessor design to the GRIDSs runtime. As in a superscalar processor, the GRIDSs runtime performs a data-dependence analysis between the tasks that form an application and applies renaming techniques to increase its parallelism. GRIDSs automatically generates, from the user's main code, a graph describing the data dependencies in the application. We also present real use cases of the programming model in the fields of computational chemistry and bioinformatics, which demonstrate that our objectives have been achieved.

Finally, we have studied the application of several fault detection and treatment techniques: checkpointing, task retry and task replication. Our proposal is to provide an environment able to deal with all types of failures, transparently to the user whenever possible. The main advantage of implementing these mechanisms at the programming-model level is that application-level knowledge can be exploited to dynamically create a fault-tolerance strategy for each application, avoiding overhead in error-free environments.
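The superscalar analogy is the core of the runtime: tasks play the role of instructions, and files or objects play the role of registers. A minimal sketch of that idea follows (plain Python with invented task descriptors, not the actual GRIDSs runtime, which works on annotated C/C++/Java/Perl code): read-after-write dependencies become graph edges, while write-after-read and write-after-write conflicts disappear through renaming.

```python
# Sketch: build a task dependency graph from declared in/out parameters,
# renaming outputs so that false (WAR/WAW) dependencies vanish, as a
# superscalar processor would do with registers.
from collections import defaultdict

def build_task_graph(tasks):
    """tasks: list of (name, inputs, outputs) over file/object names."""
    edges = set()
    version = defaultdict(int)   # current version of each datum
    last_writer = {}             # (datum, version) -> producing task index
    for i, (name, ins, outs) in enumerate(tasks):
        for d in ins:            # RAW: depend on the last writer
            w = last_writer.get((d, version[d]))
            if w is not None:
                edges.add((w, i))
        for d in outs:           # rename: bump the version, so WAR/WAW
            version[d] += 1      # conflicts create no edge at all
            last_writer[(d, version[d])] = i
    return edges

tasks = [("t0", [], ["f1"]), ("t1", ["f1"], ["f2"]),
         ("t2", ["f1"], ["f2"])]   # t2 overwrites f2: renamed, no WAW edge
print(build_task_graph(tasks))     # {(0, 1), (0, 2)} -> t1, t2 can run in parallel
```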
42

Spectral analysis of executions of computer programs and its applications on performance analysis

Casas Guix, Marc 09 March 2010
This work is motivated by the growing intricacy of high-performance computing infrastructures. For example, the MareNostrum supercomputer (installed in 2005 at BSC) has 10240 processors, and there are currently machines with more than 100,000 processors. The complexity of these systems increases the complexity of manually analyzing the performance of parallel applications, so automatic tools and methodologies become mandatory.

The performance analysis group of BSC and UPC has extensive experience in analyzing parallel applications. The approach of this group consists mainly in analyzing tracefiles (obtained from executions of parallel applications) using performance analysis and visualization tools such as Paraver. Given the general characteristics of current systems, this method can be very time-consuming and inefficient. To overcome these problems, this thesis makes several contributions.

The first one is an automatic system able to detect the internal structure of executions of high-performance computing applications. This system is able to rule out non-significant regions of executions, to detect redundancies and, finally, to select small but significant execution regions. This automatic detection is based on spectral analysis (wavelet transform, Fourier transform, etc.) and works by detecting the most important frequencies of the application's execution. These main frequencies are strongly related to the internal loops of the application's source code. An automatic detection of small but significant execution regions remarkably reduces the complexity of the performance analysis process.

The second contribution is an automatic methodology able to show general but non-trivial performance trends, which can be very useful for the analyst when carrying out a performance analysis of the application. The methodology is based on an analytical model consisting of several performance factors that modify the value of the linear speedup in order to fit the real speedup. That is, if the real speedup is far from the linear one, we immediately detect which performance factor is undermining the scalability of the application. The model can also be used to predict the performance of high-performance computing applications: from several executions on a small number of processors, we extract the model's performance factors and extrapolate these values to executions on larger numbers of processors, obtaining a speedup prediction from the analytical model.

The third contribution is the automatic detection of the optimal sampling frequency of applications. We show that this frequency can be extracted using spectral analysis. In the case of sequential applications, we show that using this frequency improves the results of recognized techniques focused on reducing a serial application's instruction execution stream (SimPoint, SMARTS, etc.). In the case of parallel benchmarks, we show that the optimal frequency is very useful for extracting significant performance information efficiently and accurately.

In summary, this thesis proposes a set of techniques based on signal processing. Their main focus is to perform an automatic analysis of applications, reporting an initial diagnosis of their performance and showing their internal iterative structure. Finally, these methods also provide a reduced tracefile from which it is easy to start manual fine-grain performance analysis. The contributions of the thesis are not limited to proposals and publications: the research carried out over these last years has produced a tool for analyzing the structure of applications. Moreover, the methodology is general and can be adapted to many performance analysis methods, remarkably improving their efficiency, flexibility and generality.
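As a rough illustration of the frequency-detection step, the sketch below uses a synthetic signal rather than a real Paraver tracefile: it derives a periodic signal from an "execution" and applies the Fourier transform to recover the period of the application's main loop.

```python
# Minimal sketch of spectral structure detection on a synthetic signal:
# the dominant frequency of the spectrum corresponds to the iteration
# period of the application's main loop.
import numpy as np

fs = 1000                                   # samples per second
t = np.arange(0, 10, 1 / fs)                # 10 s of "execution"
signal = np.sin(2 * np.pi * 4 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
main_freq = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC component

print(f"dominant frequency: {main_freq:.2f} Hz "
      f"-> iteration period ~ {1 / main_freq:.3f} s")
```

Once the period is known, a single period-long slice of the trace is enough to characterize the whole iterative region, which is what makes the tracefile reduction possible.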
43

Formal mission specification and execution mechanisms for unmanned aircraft systems

Santamaría Barnadas, Eduard 15 June 2010
Unmanned Aircraft Systems (UAS) are rapidly gaining attention due to the increasing potential of their applications in the civil domain. UAS can provide great value in environmental applications, during emergency situations, as monitoring and surveillance tools, and as communication relays, among other uses. In general, they are especially well suited for the so-called D-cube operations (Dirty, Dull or Dangerous).

Most current commercial solutions, if not remotely piloted, rely on waypoint-based flight control systems for their navigation and are unable to coordinate UAS flight with payload operation. Therefore, automation capabilities and the ability of the system to operate autonomously are very limited. Motivators that turn autonomy into an important requirement include limited bandwidth, limits on the long-term attention spans of human operators, faster access to sensed data (which also results in better reaction times), and the benefits derived from reducing operators' workload and training requirements. Other important requirements we believe are key to the success of UAS in the civil domain are reconfigurability and cost-effectiveness: an affordable platform should be able to operate in different application scenarios with reduced human intervention.

To increase the capabilities of UAS and satisfy these requirements, we propose adding flight plan and mission management layers on top of a commercial off-the-shelf flight control system. By doing so, a high level of autonomy can be achieved while taking advantage of available technologies and avoiding huge investments. Reconfiguration is made possible by separating flight and mission execution from their specification. The flight and mission management components presented in this thesis integrate into a wider hardware/software architecture being developed by the ICARUS research group. This architecture follows a service-oriented approach where UAS subsystems are connected through a common networking infrastructure; components can be added to and removed from the network in order to adapt the system to the target mission.

The first contribution of this thesis is a flight specification language that enables the description of the flight plan in terms of legs. Legs provide a higher level of abstraction than plain waypoints, since they specify not only a destination but also the trajectory that should be followed to reach it. The leg concept is extended with additional constructs that enable the specification of alternative routes, repetition, and the generation of complex trajectories from a reduced number of parameters. A Flight Plan Manager (FPM) service has been developed that is responsible for executing the flight plan. Since the underlying flight control system is still waypoint-based, additional intermediate waypoints are automatically generated to adjust the flight to the desired trajectory.

To coordinate UAS flight and payload operation, a Mission Manager (MMa) service has also been developed. The MMa is able to adapt payload operation to the current flight phase, but it can also act on the FPM and modify the flight plan for a better adaptation to the mission needs. To specify UAS behavior, instead of designing a new language, we propose using an in-development standard for the specification of state machines called State Chart XML.

Finally, the proposed specification and execution elements are validated with two example missions executed in a simulation environment. The first mission mimics the procedures required for inspecting navigation aids and shows the UAS performance in a complex flight scenario; only the FPM is involved. The second example combines the FPM with the MMa: the mission consists of detecting hotspots in a given area after a hypothetical wildfire. This second simulation shows how the MMa is able to modify the flight plan in order to adapt the trajectory to the mission needs; in particular, a figure-eight pattern is flown over each of the dynamically detected potential hot spots.
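To make the leg abstraction concrete, here is a hedged sketch (invented leg types and parameters, not the thesis' actual specification language) of how legs could expand into the intermediate waypoints that a waypoint-based autopilot understands:

```python
# Sketch: a flight plan is a list of legs; each leg expands into the
# sequence of plain waypoints needed to approximate its trajectory.
import math

def expand_leg(leg):
    kind = leg["type"]
    if kind == "direct":                      # straight line to destination
        return [leg["to"]]
    if kind == "orbit":                       # circle around a point,
        cx, cy = leg["center"]                # sampled every 22.5 degrees
        return [(cx + leg["radius"] * math.cos(a),
                 cy + leg["radius"] * math.sin(a))
                for a in (i * math.pi / 8 for i in range(17))]
    raise ValueError(f"unknown leg type: {kind}")

plan = [{"type": "direct", "to": (10.0, 0.0)},
        {"type": "orbit", "center": (10.0, 5.0), "radius": 1.0}]
waypoints = [wp for leg in plan for wp in expand_leg(leg)]
print(len(waypoints), "waypoints generated")   # 18
```

The same expansion step is what lets a leg-level manager like the FPM sit on top of an unmodified waypoint-based flight control system.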
44

Adaptive execution environments for application servers

Carrera Pérez, David 08 July 2008
The growth experienced by the web and by the Internet over the last years has driven web application servers into most existing distributed execution environments. Web application servers take distributed applications one step forward in accessibility, ease of development and standardization, by using the most widespread communication protocols and by providing rich development frameworks.

Following the evolution of the application server execution environment, the factors that determine its performance have evolved too: new factors have appeared with the rising complexity of the environment, while those that determined performance in the early stages of the technology remain relevant in modern scenarios. Initially, the performance of an application server was mainly determined by the behavior of its local execution stack, which was usually the source of most performance bottlenecks. Later, when the middleware became more efficient, more load could be put on each application server instance, and the management of such a large number of concurrent client connections became a new hot spot in terms of performance. Finally, when the capacity of any single node was exceeded, execution environments were massively clustered to spread the load across a very large number of application server instances, which meant that each instance had to be allocated a certain amount of resources. The result of this process is that even in the most advanced service management architecture currently available, 1) understanding the performance impact of the application server execution stack, 2) efficiently managing client connections, and 3) adequately allocating resources to each application server instance are three incremental steps of crucial importance to optimize the performance of such a complex facility. And given the size and complexity of modern data centers, all of them should operate automatically, without the need for human interaction.

Following the three items described above, this thesis contributes to the performance management of a complex application server execution environment by 1) proposing an automatic monitoring framework that provides performance insight in the context of a single machine; 2) proposing and evaluating a new architectural application server design that improves adaptability to changing workload conditions; and 3) proposing and evaluating an automatic resource allocation technique for clustered and virtualized execution environments. Together, the three techniques proposed in this thesis open up a new range of options to improve the performance of the system both off-line (1) and on-line (2 and 3).
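As a toy illustration of on-line resource allocation (a simple proportional-share heuristic with assumed demand metrics, not the technique evaluated in the thesis), consider:

```python
# Sketch: periodically reassign CPU shares to application-server
# instances in proportion to their observed demand.
def allocate(capacity, demands):
    """Split `capacity` CPU units across instances by relative demand."""
    total = sum(demands.values())
    if total == 0:                      # idle cluster: split evenly
        share = capacity / len(demands)
        return {k: share for k in demands}
    return {k: capacity * d / total for k, d in demands.items()}

observed = {"app-1": 120.0, "app-2": 40.0, "app-3": 40.0}  # requests/s
print(allocate(capacity=8.0, demands=observed))
# {'app-1': 4.8, 'app-2': 1.6, 'app-3': 1.6}
```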
45

Improved self-management of datacenter systems applying machine learning

Berral García, Josep Lluís 22 November 2013
Autonomic Computing is a Computer Science and Technologies research area that originated in the mid-2000s. It focuses on the optimization and improvement of complex distributed computing systems through self-control and self-management. As distributed computing systems grow in complexity, like multi-datacenter systems in cloud computing, system operators and architects need more help to understand, design and optimize these systems manually, even more when the systems are distributed around the world and belong to different entities and authorities. Self-management lets these distributed computing systems improve their resource and energy management, a very important issue given the cost of obtaining, running and maintaining resources.

Here we propose to improve Autonomic Computing techniques for resource management by applying modeling and prediction methods from Machine Learning and Artificial Intelligence. Machine Learning methods can find accurate models of system behaviors, often with intelligible explanations, and can predict and infer system states and values. Models obtained from automatic learning have the advantage of being easily updated after workload or configuration changes by re-collecting examples and re-training the predictors. By employing automatic modeling and prediction, we can find new methods for making "intelligent" decisions and for discovering new information and knowledge about systems. This thesis moves from the current state of the art, where management is based on administrators' expertise, well-known data, ad-hoc algorithms and models, and elements studied from the point of view of a single computing machine, to a novel one where management is driven by models learned from the system itself, providing useful feedback and making up for incomplete, missing or uncertain data, from the point of view of a global network of datacenters.

- First, we cover the scenario where the decision maker knows every piece of information about the system: how much each job will consume, what the desired quality of service is and will be, what the deadlines for the workload are, etc., focusing on each component and policy of each element involved in executing these jobs.
- Then we focus on the scenario where, instead of fixed oracles that provide information from an expert formula or set of conditions, machine learning is used to create these oracles. Here we look at components and specific details while some part of the information is unknown and must be learned and predicted.
- We reduce the problem of optimizing resource allocations and requirements for virtualized web services to a mathematical problem, indicating each factor, variable and element involved, as well as all the constraints the scheduling process must attend to. The scheduling problem can be modeled as a Mixed Integer Linear Program. Here we face the scenario of a full datacenter, and we further introduce some information prediction (see the sketch after this list).
- We complement the model by expanding the predicted elements, studying the main resources (CPU, memory and IO) that can suffer from noise, inaccuracy or unavailability. Once learned predictors for certain components improve decision making, the system becomes more "expert-knowledge independent" and research can focus on a scenario where all the elements provide noisy, uncertain or private information. We also introduce new factors into the management optimization, since context and costs may change for each datacenter, turning the model into a "multi-datacenter" one.
- Finally, we review the cost of placing datacenters depending on green energy sources, and distribute the load according to green energy availability.
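A minimal sketch of that MILP formulation follows, with assumed demands, capacities and costs and the PuLP library as solver front-end; the thesis' actual model includes many more factors (quality of service, energy, migration, etc.):

```python
# Sketch: place jobs on hosts minimizing cost, subject to host capacity.
import pulp

jobs, hosts = range(4), range(2)
cpu = [2, 3, 1, 2]                       # CPU demand per job
cap = [4, 5]                             # CPU capacity per host
cost = [[1, 2], [2, 1], [1, 1], [3, 1]]  # cost of running job j on host h

prob = pulp.LpProblem("placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (jobs, hosts), cat="Binary")
prob += pulp.lpSum(cost[j][h] * x[j][h] for j in jobs for h in hosts)
for j in jobs:                           # every job placed exactly once
    prob += pulp.lpSum(x[j][h] for h in hosts) == 1
for h in hosts:                          # respect host capacity
    prob += pulp.lpSum(cpu[j] * x[j][h] for j in jobs) <= cap[h]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(j, h) for j in jobs for h in hosts if x[j][h].value() == 1])
```

In the learned-oracle scenarios described above, the fixed `cpu` and `cost` inputs would be replaced by the outputs of trained predictors.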
46

Regularized approximate policy iteration using kernel for on-line reinforcement learning

Esposito, Gennaro 17 June 2015
By using Reinforcement Learning (RL), an autonomous agent interacting with the environment can learn how to take adequate actions in every situation in order to optimally achieve its own goal. RL provides a general methodology able to solve uncertain and complex decision problems present in many real-world applications. RL problems are usually modeled as Markov Decision Processes (MDPs), which are deeply studied in the literature. The main peculiarity of an RL algorithm is that the agent is assumed to learn the optimal policies from its experiences, without knowing the parameters of the MDP. The key element in solving the MDP is learning a value function that gives the expected total reward the agent can obtain from its current state when taking a given action; this value function yields the optimal policy.

In this thesis we study the capacity of Support Vector Regression (SVR) with kernel methods to adapt to and solve complex RL problems in large or continuous state spaces. SVR can be studied through a geometrical interpretation in terms of optimal margin, or seen as a regularization problem in a Reproducing Kernel Hilbert Space (RKHS). SVR has good generalization properties and, being based on a convex optimization problem, does not suffer from sub-optimality. SVR is non-parametric, showing the ability to adapt automatically to the complexity of the problem. Accordingly, applying SVR to approximate value functions is a promising approach. SVR can be solved in batch mode, when the whole set of training samples is at the disposal of the learning agent, or incrementally, which enables the addition or removal of training samples very effectively. Incremental SVR finds the appropriate KKT conditions for new or updated data by modifying their influence on the regression function, maintaining consistency in the KKT conditions for the rest of the data used for learning. In RL problems, an incremental SVR should be able to approximate the action-value function leading to the optimal policy. Accordingly, the computational load should be lower, the learning speed faster and the generalization more effective than with other existing methods.

The overall contribution of our work is to develop, formalize, implement and study a new RL technique for generalization in discrete and continuous state spaces with finite actions. Our method uses the Approximate Policy Iteration (API) framework with the BRM criterion, which allows representing the action-value function using SVR. To our knowledge, this is the first RL approach using SVR that is compatible with the agent-interacting-with-the-environment framework of RL, and it shows its power by solving a large number of benchmark problems, including very difficult ones like the bicycle driving and riding control problem. In addition, unlike most RL approaches to generalization, we develop a proof establishing theoretical bounds for the convergence of the method to the optimal solution under given conditions.
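To give a flavour of SVR-based value-function approximation, here is a toy one-step problem using scikit-learn's batch SVR; it is only a sketch of the idea, not the thesis' incremental solver or its BRM formulation:

```python
# Sketch: approximate Q(s, a) with one SVR per discrete action and act
# greedily on the predictions.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(200, 1))
actions = rng.integers(0, 2, size=200)                 # two discrete actions
rewards = -np.abs(states[:, 0] + (actions * 2 - 1) * 0.1)  # goal: reach 0

models = []
for a in (0, 1):                       # one regressor per action
    mask = actions == a
    q = SVR(kernel="rbf", C=10.0, epsilon=0.01)
    q.fit(states[mask], rewards[mask])  # one-step Q approximation
    models.append(q)

def greedy_policy(s):
    """Pick the action whose predicted Q-value is highest."""
    return int(np.argmax([m.predict([[s]])[0] for m in models]))

print(greedy_policy(0.5))   # expect action 0 (step toward the origin)
```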
47

Client-side privacy-enhancing technologies in web search

Romero Tris, Cristina 15 September 2014
Web search engines (WSEs) are tools that allow users to locate specific information on the Internet. One of the objectives of WSEs is to return the results that best match the interests of each user. For this purpose, WSEs collect and analyze users' search histories in order to build profiles. Consequently, a profiled user who submits a certain query will receive the results that are most interesting to her in the first positions. Although they offer a very useful service, WSEs also represent a threat to their users' privacy: profiles are built from past queries and other related data that may contain private and personal information. To avoid this privacy threat, it is necessary to provide privacy-preserving mechanisms that protect users.

Nowadays, several solutions exist that intend to provide privacy in this field. One of the goals of this work is to survey the current solutions, analyzing their differences and highlighting the advantages and disadvantages of each approach. Then, based on the current state of the art, we present new proposals that protect users' privacy. More specifically, this dissertation proposes three different privacy-preserving multi-party protocols for web search. A multi-party protocol for web search arranges users into groups where they exchange their queries; this serves as an obfuscation method to hide the real queries of each user.

The first multi-party protocol that we propose focuses on reducing the query delay, i.e., the time every group member has to wait to receive the query results. The second proposed multi-party protocol improves on the current literature because it is resilient against internal attacks, outperforming similar proposals in terms of computation and communication. The third proposal is a P2P protocol, where users are grouped according to their preferences. This obfuscates users' profiles while conserving their general interests; consequently, the WSE is able to rank the results of their queries better.
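The core obfuscation idea, users in a group swapping queries before submission, can be sketched as follows. This is a deliberately naive toy: it uses no cryptography and a trusted shuffler, whereas the actual protocols must prevent any party, internal or external, from learning the query-to-user mapping:

```python
# Sketch: each group member ends up submitting some other member's query,
# so the WSE cannot attribute queries to the user who originated them.
import random

def exchange_queries(group):
    """group: {user: query}. Returns {user: query_to_submit}."""
    users = list(group)
    queries = [group[u] for u in users]
    random.shuffle(queries)              # who submits what is randomized
    return dict(zip(users, queries))

group = {"alice": "flu symptoms", "bob": "cheap flights",
         "carol": "tax deadline"}
print(exchange_queries(group))
```

A real protocol additionally has to route each result set back to the query's true originator without revealing that mapping, which is where most of the computation and communication cost discussed above comes from.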
48

Automated design of domain-specific custom instructions

González Alvárez, Cecilia Noemí 16 November 2015
Joint doctorate (cotutelle) between the Universitat Politècnica de Catalunya and Universiteit Gent, Faculteit Ingenieurswetenschappen en Architectuur, Vakgroep Elektronica en Informatiesystemen.

In the last years, hardware specialization has received renewed attention as chips approach a utilization wall. Specialized accelerators can take advantage of underutilized transistors by implementing custom hardware that complements the main processor. However, specialization adds complexity to the design process and limits reuse. Application-Specific Instruction Processors (ASIPs) balance performance and reusability, extending a general-purpose processor with custom instructions (CIs) specific to an application, implemented in Specialized Functional Units (SFUs). Still, time-to-market is a major issue with application-specific designs because, if CIs are not frequently executed, the acceleration benefits will not compensate for the overall design cost. Domain-specific acceleration increases the applicability of ASIPs, as it targets several applications that run on the same hardware; reconfigurable SFUs and the automation of CI design can solve the aforementioned problems. In this dissertation, we explore different automated approaches to the design of CIs that extend a baseline processor for domain-specific acceleration, improving both performance and energy efficiency.

First, we develop automated techniques to identify code sequences within a domain to create CI candidates. Due to the disparity among the coding styles of different programs, it is difficult to find patterns that are represented by a unique CI across applications. Therefore, we propose an analysis at the basic-block level that identifies equivalent CIs within and across different programs. We use the Taylor Expansion Diagram (TED) canonical representation to find not only structurally equivalent CIs but also functionally similar ones, as opposed to the commonly applied directed acyclic graph (DAG) isomorphism detection. We combine both methods into a new hybrid DAG/TED technique to identify more patterns across applications that map to the same CI. Then, we select the subset of CI candidates that fits in the available SFU area. Because of the complexity of the problem, we propose four scoring heuristics to reduce the design space and smooth the potential speedup across applications. We include these methods in the FuSInG framework and estimate performance with hardware models on a set of media benchmarks. Results show that, when the core area devoted to specialization is limited, the SFU customization with the largest speedups includes a mix of application- and domain-specific custom instructions.

If we target larger CIs to obtain higher speedups, reusability across applications becomes critical; without enough equivalences, CIs cannot be generalized for a domain. We aim to share partially common operations among CIs to accelerate more code, especially across basic blocks, and to reduce the hardware area needed for specialization. Hence, we create a new canonical representation across basic blocks, the Merging Diagram, to facilitate similarity detection and improve code coverage. We also introduce clustering-based partial matching to identify partially similar domain-specific CIs, which generally leads to better performance than application-specific ones. Yet, at small areas, merging two CIs induces a high additional overhead that might penalize energy efficiency. Thus, we also detect fragments of CIs and join them with the existing merged clusters, resulting in minimal extra overhead. Moreover, using speedup as the deciding factor for CI selection may not be optimal for devices with limited power budgets; for that reason, we propose a linear-programming-based selection that balances performance and energy consumption. We implement these techniques in the MInGLE framework and evaluate them with media benchmarks. The selected CIs significantly improve the energy-delay product and performance, demonstrating that we can accelerate a domain, covering more code, while reducing the area needed for the CI implementation.
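As a simplified illustration of structural CI-equivalence detection, the sketch below computes a canonical hash over small operation DAGs; the thesis' hybrid DAG/TED method additionally catches functional equivalences that a purely structural check like this one misses:

```python
# Sketch: two candidate CIs are structurally equivalent if their operation
# DAGs have the same canonical form (commutative operands are sorted so
# operand order does not matter).
def canon(dag, node):
    """Canonical string of the subgraph rooted at `node`.
    dag: {node: (op, [child, ...])}."""
    op, children = dag[node]
    forms = [canon(dag, c) for c in children]
    if op in {"add", "mul", "and", "or", "xor"}:
        forms.sort()                  # commutative: ignore operand order
    return f"{op}({','.join(forms)})"

g1 = {"a": ("ld", []), "b": ("ld", []), "c": ("add", ["a", "b"])}
g2 = {"x": ("ld", []), "y": ("ld", []), "z": ("add", ["y", "x"])}
print(canon(g1, "c") == canon(g2, "z"))   # True: same CI candidate
```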
49

Run-time support for multi-level disjoint memory address spaces

Bueno Hedo, Javier 05 November 2015
High Performance Computing (HPC) systems have become widely used tools in many industry areas and research fields. Research to produce more powerful and efficient systems has grown in parallel with their popularity. As a consequence, the complexity of modern HPC architectures has increased in order to provide systems with the highest levels of performance. This increased complexity has also affected the way HPC systems are programmed: HPC users have to deal with new devices, languages and tools, and this can be a significant access barrier to people who do not have deep knowledge of computer science.

Alongside the evolution of HPC systems, programming models have also evolved to ease the task of developing applications for these machines. Two well-known examples are OpenMP and MPI. The former can be used in shared-memory systems and is praised for offering an easy software development methodology. The latter is more popular because it targets distributed environments, but it is considered burdensome to use. Besides these two, many programming models have emerged to propose new methodologies or to handle new hardware devices. One of these models is OmpSs, a programming model for modern HPC systems based on OpenMP and StarSs. Developed by the Programming Models group at the Barcelona Supercomputing Center, it targets the latest generation of HPC systems while benefiting from the ease of use of OpenMP. OmpSs offers asynchronous parallelism through tasks with data dependencies. Tasks specify sections of code that can be executed in parallel, while the dependencies specify restrictions on the order in which the tasks can be executed. With this, OmpSs programs can adapt to many different system configurations while fundamentally remaining sequential programs with annotations.

This thesis explores the benefits of giving OmpSs the capability to target architectures with complex memory hierarchies. An example of such systems is the new generation of clusters that use accelerators to power their computing capabilities: their memory hierarchy is composed of a first level of distributed memory formed by the memory of each individual node, and a second level formed by the private memory of each accelerator device.

Our first contribution is the implementation of support for clusters of multi-cores in the OmpSs programming model, together with two optimizations to boost the performance of applications running on cluster systems: a specific task scheduling policy and the addition of slave-to-slave transfers. We evaluate our implementation using a set of benchmarks coded in OmpSs and compare them against the same applications implemented using MPI, the most widely used programming model for these systems. Our second contribution extends this initial implementation with support for clusters of GPUs. We show that OmpSs programs targeting these complex systems are capable of achieving good performance when compared against MPI+CUDA implementations. The third contribution presents an implementation and an evaluation of the performance and programmability impact of supporting non-contiguous memory regions. Offering this feature allows applications with complex data accesses to be easily annotated with OmpSs, which is important to widen the spectrum of applications the programming model can handle.
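OmpSs itself annotates C/C++ with pragmas; the sketch below only mimics its semantics in Python (an invented mini-runtime, not OmpSs syntax): tasks declare their inputs and outputs, and a task runs as soon as everything it reads has been produced.

```python
# Sketch of dataflow task execution: the "runtime" tracks, per datum,
# the future that produces it, and each task blocks on its inputs.
from concurrent.futures import ThreadPoolExecutor

class Runtime:
    def __init__(self):
        self.pool = ThreadPoolExecutor()
        self.producer = {}                 # datum -> future producing it

    def task(self, fn, ins, outs):
        deps = [self.producer[d] for d in ins if d in self.producer]
        fut = self.pool.submit(self._run, fn, deps)
        for d in outs:
            self.producer[d] = fut         # later readers will wait on this
        return fut

    @staticmethod
    def _run(fn, deps):
        for d in deps:                     # block until all inputs are ready
            d.result()
        return fn()

rt = Runtime()
rt.task(lambda: print("produce A"), ins=[], outs=["A"])
rt.task(lambda: print("consume A"), ins=["A"], outs=[])  # runs after A
rt.pool.shutdown(wait=True)
```

Independent tasks submitted this way run concurrently, which is exactly the property the OmpSs runtime exploits to spread work across cluster nodes and accelerators.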
50

Intelligent instrumentation techniques to improve the traces information-volume ratio

Llort Sánchez, Germán M. 03 November 2015
With ever more powerful machines being constantly deployed, it is crucial to manage computational resources efficiently. This is important both from the point of view of the individual user, who expects fast results, and of the supercomputing center hosting the whole infrastructure, which is interested in maximizing its overall productivity. Nevertheless, the real sustained performance achieved by applications can be significantly lower than the theoretical peak performance of the machines, and a key factor in bridging this performance gap is to understand how parallel computers behave. Performance analysis tools are essential not only to understand the behavior of parallel applications, but to identify why performance expectations might not have been met, serving as guidelines to correct the inefficiencies that caused poor performance and driving both software and hardware optimizations.

However, detailed analysis of the behavior of a parallel application requires processing a large amount of data that also grows extremely fast. Current large-scale systems already comprise hundreds of thousands of cores, and upcoming exascale systems are expected to assemble more than a million processing elements. With such a number of hardware components, the traditional analysis methodology of blindly collecting as much data as possible and then performing exhaustive lookups is no longer applicable, because the volume of performance data generated becomes absolutely unmanageable to store, process and analyze. The evolution of the tools suggests that more complex approaches are needed, incorporating intelligence to perform the challenging and important task of detailed analysis competently.

In this thesis, we address the scalability problem of performance analysis tools in large-scale systems, where an in-depth understanding of the interactions between all the system components is more compelling than ever for an effective use of the parallel resources. To this end, our work includes a thorough review of techniques that have been successfully applied to aid in the task of Big Data analytics in fields like machine learning, data mining, signal processing and computer vision. We have leveraged these techniques to improve the analysis of large-scale parallel applications by automatically uncovering repetitive patterns, finding data correlations, detecting performance trends and extracting further useful analysis information. By combining them, we have minimized the volume of performance data captured from an execution while maximizing the benefit and insight gained from this data, and we have proposed new and more effective methodologies for single- and multi-experiment performance analysis.
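One of the simplest forms of the pattern-detection idea can be sketched with autocorrelation; here a synthetic event stream stands in for a real trace. Once the iteration period is known, a single representative iteration can be kept instead of the whole run:

```python
# Sketch: find the period of a repetitive signal via autocorrelation,
# then keep only one period-long representative slice.
import numpy as np

signal = np.tile(np.array([0, 3, 1, 4, 1, 5, 9, 2]), 50).astype(float)
signal += 0.05 * np.random.randn(signal.size)      # measurement noise

centered = signal - signal.mean()
ac = np.correlate(centered, centered, mode="full")[centered.size - 1:]
period = int(np.argmax(ac[1:]) + 1)                # first non-zero-lag peak

print(f"detected period: {period} events")          # expect 8
print("representative slice:", np.round(signal[:period], 1))
```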
