1

Modeling and Optimization Frameworks for Runtime Adaptable Embedded Systems

Lizarraga, Adrian January 2016 (has links)
The widespread adoption of embedded computing systems has resulted in the realization of numerous sensing, decision, and control applications with diverse application-specific requirements. However, such embedded systems applications are becoming increasingly difficult to design, simulate, and optimize due to the multitude of interdependent parameters that must be considered to achieve optimal, or near-optimal, performance that meets design constraints. This situation is further exacerbated for data-adaptable embedded systems (DAES) applications due to the dynamic characteristics of the deployment environment and the data streams on which these systems operate. As operating conditions change, these embedded systems must continue to adapt their configuration and composition at runtime in order to meet application requirements. To assist both platform developers and application domain experts, this dissertation presents design and optimization frameworks for the synthesis of runtime adaptable embedded systems. For sensor network applications, we present an initial dynamic profiling and optimization platform that profiles network and sensor node activity to generate optimal node configurations at runtime based on designer-specified application requirements. To support a broader class of DAES applications, we present a modeling and optimization framework that supports the specification of application task flows, data types, and runtime estimation models for the runtime adaptation of task implementations and device mappings. Experimental results for these design and optimization frameworks demonstrate the benefits of dynamic optimization compared to static optimization alternatives. For the presented sensor network and video-based collision avoidance applications, dynamic configurations exhibited improvements of up to 109% and 76%, respectively. Moreover, the performance of the heuristic design space exploration (DSE) algorithms utilized by the runtime optimization frameworks is compared to exhaustive DSE implementations, resulting in speedups of up to 1662X and 544X for the same two applications, respectively.
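The dynamic optimization step described in this abstract can be pictured as a small runtime loop: profile activity, estimate each candidate configuration against the profiled conditions, and reconfigure. The Python sketch below illustrates that loop under toy energy and latency models; all names, models, and constants are hypothetical, not the dissertation's framework.

```python
# Illustrative sketch of a runtime node-configuration optimizer.
# Energy/latency models and parameter ranges are made up for this example.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Config:
    cpu_mhz: int
    sample_hz: int
    tx_power: int

def estimate_lifetime(cfg: Config, duty_cycle: float) -> float:
    # Toy energy model: higher clock/rate/power drains the battery faster.
    drain = cfg.cpu_mhz * 0.01 + cfg.sample_hz * 0.05 + cfg.tx_power * 0.1
    return 1000.0 / (drain * duty_cycle)

def estimate_latency(cfg: Config) -> float:
    return 1000.0 / cfg.sample_hz  # ms between samples (toy model)

def optimize(duty_cycle: float, max_latency_ms: float) -> Config:
    """Score candidate configurations against the profiled duty cycle;
    a heuristic DSE would prune this space instead of enumerating it."""
    candidates = [Config(m, s, p)
                  for m, s, p in product([8, 16, 32], [1, 10, 100], [1, 2, 3])]
    feasible = [c for c in candidates if estimate_latency(c) <= max_latency_ms]
    return max(feasible, key=lambda c: estimate_lifetime(c, duty_cycle))

# Runtime loop body: re-optimize whenever profiled conditions change.
profiled_duty_cycle = 0.25          # fraction of time the node is active
best = optimize(profiled_duty_cycle, max_latency_ms=100.0)
print(best)
```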
2

Efficient XML Stream Processing with Automata and Query Algebra

Jian, Jinhui 27 August 2003 (has links)
"XML Stream Processing is an emerging technology designed to support declarative queries over continuous streams of data. The interest in this novel technology is growing due to the increasing number of real world applications such as monitoring systems for stock, email, and sensor data that need to analyze incoming data streams. There are however several open challenges. One, we must develop efficient techniques for pattern matching over the nested tag structure of XML as data streams in token by token. Two, we must develop techniques for query optimization to cope with complex user queries while given only incomplete knowledge of source data. When considering these challenges separately, then automata models have been shown by several recent works to be suited to tackle the first problem, while algebraic query models have been regarded as appropriate foundations to tackle the second problem. The question however remains how best to put these two models together to have an overall effective system. This thesis aims to exactly fill this gap. We propose a unified query framework to augment automata-style processing with algebra-based query optimization capabilities. We use the automata model to handle the token-oriented streaming XML data and use the algebraic model to support set-oriented optimization techniques. The framework has been designed in two layers such that the logical layer provides a uniform abstraction across the two models and any optimization techniques can be applied in either model uniformly using query rewritings. The physical layer, on the other hand, allows us to refine the implementation details after the logical layer optimization. We have successfully applied this framework in the Raindrop stream processing system. We have identified several trade-offs regarding which query functionality should be realized in which specific query model. We have developed novel optimization techniques to exploit these trade-offs. For example, a query rewrite rule can flexibly push down a pattern matching into the automata model when the optimizer decides that it is more efficient to do so. To deal with incomplete knowledge of source data, we have also developed novel techniques to monitor data statistics, based on which we can apply optimization techniques to choose the optimal query plan at runtime. Our experimental study confirms that considerable performance gains are being achieved when these optimization techniques are applied in our system."
3

Runtime optimization of binary through vectorization transformations

Hallou, Nabil 18 December 2017 (has links)
In many cases, applications are not optimized for the hardware on which they run. This is due to the backward compatibility of ISAs, which guarantees functionality but not the best exploitation of the hardware. Many reasons contribute to this unsatisfying situation, such as legacy code, commercial code distributed in binary form, or deployment on compute farms. Our work focuses on maximizing CPU efficiency for SIMD extensions. The first contribution is a lightweight binary translation mechanism that does not include a vectorizer, but instead leverages what a static vectorizer previously did. We show that many loops compiled for x86 SSE can be dynamically converted to the more recent and more powerful AVX, and how correctness is maintained with regard to challenges such as data dependencies and reductions. We obtain speedups in line with those of a native compiler targeting AVX. The second contribution is a runtime auto-vectorization of scalar loops. For this purpose, we use open-source frameworks that we have tuned and integrated to (1) dynamically lift the x86 binary into the Intermediate Representation of the LLVM compiler, (2) abstract hot loops in the polyhedral model, (3) use the power of this mathematical framework to vectorize them, and (4) finally compile them back into executable form using the LLVM Just-In-Time compiler. In most cases, the obtained speedups are close to the number of elements that can be simultaneously processed by the SIMD unit. The re-vectorizer and auto-vectorizer are implemented inside a dynamic optimization platform; the system is completely transparent to the user, does not require any rewriting of the binaries, and operates during program execution.
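The SSE-to-AVX re-vectorization idea can be sketched as a rewrite over an instruction list: widen each packed SSE instruction to its AVX counterpart and halve the loop trip count. The toy IR below is hypothetical and glosses over the dependence and reduction checks the thesis handles; only the opcode names are real x86.

```python
# Illustrative sketch (not the thesis tool): rewrite SSE (128-bit) loop
# bodies into AVX (256-bit) equivalents and adjust the trip count.
SSE_TO_AVX = {
    "movaps": "vmovaps",   # 4 packed floats -> 8 per instruction
    "addps":  "vaddps",
    "mulps":  "vmulps",
}

def retranslate(loop_body, trip_count):
    """Translate an SSE loop to AVX, assuming correctness conditions
    (data dependencies, reductions) were already established."""
    new_body = []
    for opcode, operands in loop_body:
        avx = SSE_TO_AVX.get(opcode)
        if avx is None:
            return loop_body, trip_count    # bail out: cannot widen
        # xmm (128-bit) registers become ymm (256-bit) registers.
        new_body.append((avx, [r.replace("xmm", "ymm") for r in operands]))
    return new_body, trip_count // 2        # twice the lanes, half the iterations

sse_loop = [("movaps", ["xmm0", "[rsi]"]),
            ("addps",  ["xmm0", "xmm1"]),
            ("movaps", ["[rdi]", "xmm0"])]
print(retranslate(sse_loop, trip_count=1024))
```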
4

An Internal Representation for Adaptive Online Parallelization

Rehme, Koy D. 29 May 2009 (has links) (PDF)
Future computer processors may have tens or hundreds of cores, increasing the need for efficient parallel programming models. The nature of multicore processors will present applications with the challenge of diversity: a variety of operating environments, architectures, and data will be available, and the compiler will have no foreknowledge of the environment until run time. Adaptive Online Parallelization (ADOPAR) is a unifying framework that attempts to overcome diversity by separating the discovery and packaging of parallelism. Scheduling for execution may then occur at run time, when diversity may best be resolved. This work presents a compact representation of parallelism based on the task graph programming model, tailored especially for ADOPAR and for regular and irregular parallel computations. Task graphs can be unmanageably large for fine-grained parallelism. Rather than representing each task individually, similar tasks are grouped into task descriptors. From these, a task descriptor graph may be constructed, with relationship descriptors forming its edges. While even highly irregular computations often have structure, previous representations have chosen to restrict what can be easily represented, thus limiting full exploitation by the back end. Therefore, in this work, task and relationship descriptors have been endowed with instantiation functions (methods of descriptors that act as factories) so the front end may have a full range of expression when describing the task graph. The representation uses descriptors to express a full range of regular and irregular computations in a very flexible and compact manner. The representation also allows for dynamic optimization and transformation, which assists ADOPAR in its goal of overcoming various forms of diversity. We have successfully implemented this representation using new compiler intrinsics, allowing ADOPAR schedulers to operate on the described task graph for parallel execution, and we demonstrate its low code size overhead and the necessity for native schedulers.
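The descriptor idea lends itself to a compact sketch: rather than materializing each fine-grained task, descriptors carry instantiation functions (factories) that expand tasks and edges on demand. The Python below is an illustrative reading of the abstract, not the ADOPAR implementation; all names are hypothetical.

```python
# Sketch: a task descriptor stands for many similar tasks; a relationship
# descriptor stands for many similar edges. Factories expand them lazily.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TaskDescriptor:
    name: str
    count: int                          # how many similar tasks it stands for
    instantiate: Callable[[int], dict]  # factory: index -> concrete task

@dataclass
class RelationshipDescriptor:
    src: TaskDescriptor
    dst: TaskDescriptor
    # Factory mapping a src instance to the dst instances it feeds,
    # so even irregular edge patterns can be expressed compactly.
    edge: Callable[[int], List[int]]

# 1000 element-wise tasks feeding a tree reduction, without 2000 graph nodes.
work = TaskDescriptor("scale", 1000, lambda i: {"op": "mul", "index": i})
summ = TaskDescriptor("reduce", 999, lambda i: {"op": "add", "index": i})
deps = RelationshipDescriptor(work, summ, lambda i: [i // 2])

# A scheduler expands only the instances it needs at run time:
print(work.instantiate(42), "->", deps.edge(42))
```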
5

Interference Analysis and Resource Management in Server Processors: from HPC to Cloud Computing

Pons Escat, Lucía 01 September 2023 (has links)
One of the main concerns of today's data centers is to maximize server utilization. In each server processor, multiple applications are executed concurrently, increasing resource efficiency. However, performance and fairness highly depend on the share of resources that each application receives, leading to performance unpredictability. The rising number of cores (and running applications) with every new generation of processors is leading to a growing concern for interference at the shared resources. This thesis focuses on addressing resource interference when different applications are consolidated on the same server processor from two main perspectives: high-performance computing (HPC) and cloud computing. In the context of HPC, resource management approaches are proposed to reduce inter-application interference at two major critical resources: the last-level cache (LLC) and the processor cores. The LLC plays a key role in the system performance of current multi-cores by reducing the number of long-latency main memory accesses. LLC partitioning approaches are proposed for both inclusive and non-inclusive LLCs, as both designs are present in current server processors. In both cases, newly problematic LLC behaviors are identified and efficiently detected, granting a larger cache share to those applications that make the best use of the LLC space. As for processor cores, many parallel applications, like graph applications, do not scale well with an increasing number of cores. Moreover, the default Linux time-sharing scheduler performs poorly when running graph applications, which process vast amounts of data. To maximize system utilization, this thesis proposes to co-locate multiple graph applications on the same server processor by assigning the optimal number of cores to each one, dynamically adapting the number of threads spawned by the running applications. When studying the impact of system-shared resources on cloud computing, this thesis addresses three major challenges: the complex infrastructure of cloud systems, the nature of cloud applications, and the impact of inter-VM interference. Firstly, this thesis presents the experimental platform developed to perform representative cloud studies with the main cloud system components (hardware and software). Secondly, an extensive characterization study is presented on a set of representative latency-critical workloads which must meet strict quality of service (QoS) requirements; the aim of this study is to outline issues cloud providers should consider to improve performance and resource utilization. Finally, we propose an online approach that detects and accurately estimates inter-VM interference when co-locating multiple latency-critical VMs. The approach relies on metrics that can be easily monitored in the public cloud, as VMs are handled as "black boxes". The research described above is carried out following the restrictions and requirements to be applicable to public cloud production systems. In summary, this thesis addresses contention in the main system shared resources in the context of server consolidation, both in HPC and cloud computing. Experimental results show that important gains are obtained over the Linux OS scheduler by reducing interference. In inclusive LLCs, turnaround time (TT) is reduced by over 40% while improving IPC by more than 3%. In non-inclusive LLCs, fairness and TT are improved by 44% and 24%, respectively, while improving performance by up to 3.5%. By distributing core resources efficiently, almost perfect fairness can be obtained (94%), and TT can be reduced by up to 80%. In cloud computing, performance degradation due to resource contention can be estimated with an overall prediction error of 5%. All the approaches proposed in this thesis have been designed to be applied in commercial server processors without requiring any prior information, making decisions dynamically with data collected from hardware performance counters. / Pons Escat, L. (2023). Interference Analysis and Resource Management in Server Processors: from HPC to Cloud Computing [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/195840
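The black-box interference estimation described in this abstract can be illustrated with a simple model fitted on metrics observable without instrumenting the VM, such as CPI and LLC misses per kilo-instruction from hardware counters. The sketch below fits a least-squares model on made-up sample data; the thesis's actual metrics, model, and coefficients are not reproduced here.

```python
# Sketch: predict % performance degradation of a co-located "black box" VM
# from externally observable counter metrics. All numbers are illustrative.
import numpy as np

# Offline samples: (CPI, LLC misses per kilo-instruction) vs measured
# % degradation under co-location.
X = np.array([[0.8, 1.2], [1.1, 5.0], [1.5, 9.8], [2.0, 14.3]])
y = np.array([2.0, 11.0, 24.0, 39.0])

# Least-squares fit of: degradation ~ a*CPI + b*MPKI + c
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def estimate_degradation(cpi: float, mpki: float) -> float:
    """Online estimate from counters sampled outside the VM."""
    return float(coef @ np.array([cpi, mpki, 1.0]))

print(round(estimate_degradation(1.3, 7.0), 1))  # predicted % slowdown
```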
