1

Increasing and Detecting Memory Address Congruence

Larsen, Samuel, Witchel, Emmett, Amarasinghe, Saman P. 01 1900 (has links)
A static memory reference exhibits a unique property when its dynamic memory addresses are congruent with respect to some non-trivial modulus. Extraction of this congruence information at compile time enables new classes of program optimization. In this paper, we present methods for forcing congruence among the dynamic addresses of a memory reference. We also introduce a compiler algorithm for detecting this property. Our transformations do not require interprocedural analysis and introduce almost no overhead. As a result, they can be incorporated into real compilation systems. On average, our transformations are able to achieve a five-fold increase in the number of congruent memory operations. We are then able to detect 95% of these references. This success is invaluable in providing performance gains in a variety of areas. When congruence information is incorporated into a vectorizing compiler, we can increase the performance of a G4 AltiVec processor by up to a factor of two. Using the same methods, we are able to reduce energy consumption in a data cache by as much as 35%. / Singapore-MIT Alliance (SMA)
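As a rough, hand-written illustration of the congruence property described above (a sketch only, with hypothetical helper names; not the paper's transformations or detection algorithm), the dynamic addresses of a strided reference share a residue modulo the stride:

    # Sketch: the dynamic addresses touched by a strided memory reference are
    # congruent with respect to a non-trivial modulus (here, the stride).
    # Hypothetical helper names; not the compiler algorithm from the paper.

    def reference_addresses(base, stride, count):
        """Addresses touched by a reference such as a[i] inside a loop."""
        return [base + i * stride for i in range(count)]

    def common_residue(addresses, modulus):
        """Return the shared residue if all addresses are congruent mod `modulus`, else None."""
        residues = {addr % modulus for addr in addresses}
        return residues.pop() if len(residues) == 1 else None

    addrs = reference_addresses(base=0x1000, stride=16, count=8)
    print(common_residue(addrs, 16))   # 0    -> all accesses congruent mod 16
    print(common_residue(addrs, 32))   # None -> a larger modulus would need padding/alignment first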
2

Program Improvement by Automatic Redistribution of Intermediate Results

Hall, Robert Joseph 01 February 1991 (has links)
Introducing function sharing into designs allows costly structure to be eliminated by adapting existing structure to perform its function. This can remove many of the inefficiencies of reusing general components in specific contexts. "Redistribution of intermediate results" focuses on instances where adaptation requires only the addition or deletion of data flow and the removal of unused code. I show that this approach unifies and extends several well-known classes of optimization. The system performs search and screening by deriving operational filtering predicates from input teleological information, using a novel explanation-based generalization technique. The key advantage is that the system's effort is focused on optimizations that are easier to prove safe.
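A toy example of the kind of opportunity this targets (hypothetical code, not drawn from the thesis): a caller re-derives a value that an existing component already computed internally, and redistribution adds the data flow needed to share it while the now-unused recomputation is removed:

    # Toy illustration of redistributing an intermediate result (hypothetical code).

    def mean(xs):
        total = sum(xs)          # intermediate result hidden inside the component
        return total / len(xs)

    # Before adaptation: the specific context recomputes the sum that mean() already produced.
    def mean_and_total_before(xs):
        return mean(xs), sum(xs)

    # After redistribution: a data-flow edge exposes the intermediate value,
    # and the redundant recomputation disappears.
    def mean_and_total_after(xs):
        total = sum(xs)
        return total / len(xs), total

    xs = [1.0, 2.0, 3.0, 4.0]
    assert mean_and_total_before(xs) == mean_and_total_after(xs) == (2.5, 10.0)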
3

Mathematical Models and Solution Approach for Staff Scheduling with Cross-Training at Call Centers

Kilincli Taskiran, Gamze 31 August 2015 (has links)
No description available.
4

A program manipulation system based on partial evaluation

Haraldsson, Anders January 1977 (has links)
Program manipulation is the task of performing transformations on program code, normally in order to optimize the code with respect to the use of some computer resource. Partial evaluation is the task of performing, before a program is actually executed, those computations in it that can already be carried out. If a parameter to a procedure is constant, a specialized version of that procedure can be generated by substituting the constant for the parameter in the procedure body and performing as many of the computations in the code as possible. A system is described which works on programs written in INTERLISP and which performs partial evaluation together with other transformations such as beta-expansion and certain other optimization operations. The system works on full LISP, not only a "pure" LISP dialect, and deals with the problems that arise there involving side effects, variable assignments, etc. An analysis of a previous system, REDFUN, results in a list of problems, desired extensions and new features. This is used as the basis for a new design, resulting in a new implementation, REDFUN-2. This implementation, design considerations, constraints in the system, remaining problems, and other experience from the development of and experiments with the system are reported in this paper.
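As a minimal sketch of the specialization step described above (written in Python rather than INTERLISP, and not the REDFUN-2 system itself), a procedure with a constant parameter can be replaced by a residual procedure in which the computations that depend only on that constant have already been performed:

    # Minimal sketch of partial evaluation: specializing power(x, n) for a constant n.
    # The loop over n runs at specialization time; only work depending on the
    # run-time argument x remains in the residual procedure. Illustrative only.

    def power(x, n):
        result = 1
        for _ in range(n):
            result *= x
        return result

    def specialize_power(n):
        """Generate a residual procedure for the constant exponent n."""
        body = " * ".join(["x"] * n) if n > 0 else "1"
        namespace = {}
        exec("def specialized(x):\n    return " + body, namespace)
        return namespace["specialized"]

    power_3 = specialize_power(3)          # residual program: return x * x * x
    assert power_3(5) == power(5, 3) == 125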
5

Restructuration interactive des programmes / Interactive Program Restructuring

Zinenko, Oleksandr 25 November 2016 (has links)
Software development and program manipulation are becoming increasingly complex with the massive adoption of parallel architectures, requiring significant expertise from developers. While numerous programming models and languages allow efficient programs to be created, they fall short at helping developers restructure existing programs for more effective execution. At the same time, automatic approaches are too conservative and imprecise to achieve a substantial portion of a system's performance without supplementary semantic information from the developer. To answer these challenges, we propose interactive program restructuring, a bridge between semi-automatic program manipulation and software visualization. It is illustrated in this thesis by, first, extending the polyhedral model, a state-of-the-art program representation, so that it supports high-level program manipulation and, second, designing and evaluating a direct-manipulation visual interface for program restructuring. This interface exposes information about the program that is not immediately accessible in the code and allows programs to be manipulated without rewriting them.
We also propose a representation of an automatically computed program optimization in an understandable form, easily modifiable and reusable by the developer both visually and textually, in a form of human-machine partnership. To support different aspects of program restructuring, we design and evaluate a new interaction technique for communicating supplementary information that is not critical for the task at hand. After an empirical study of developers' attention distribution when faced with visual and textual program representations, we discuss the implications for the design of program manipulation tools within the instrumental interaction paradigm. We expect interactive program restructuring to make program manipulation for optimization more efficient and more widely adopted.
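As a hand-written example of the kind of high-level loop restructuring the polyhedral model can express (a sketch only, not output from the tool described in the thesis), tiling changes the schedule of a loop nest without changing its result:

    # Hand-written sketch of one restructuring expressible in the polyhedral model:
    # tiling a doubly nested loop. Both versions compute the same result; only the
    # iteration order (the schedule) changes.

    N, TILE = 8, 4
    A = [[i * N + j for j in range(N)] for i in range(N)]

    def summed_original(A):
        total = 0
        for i in range(N):
            for j in range(N):
                total += A[i][j]
        return total

    def summed_tiled(A):
        total = 0
        for ii in range(0, N, TILE):                      # tile loops
            for jj in range(0, N, TILE):
                for i in range(ii, min(ii + TILE, N)):    # point loops
                    for j in range(jj, min(jj + TILE, N)):
                        total += A[i][j]
        return total

    assert summed_original(A) == summed_tiled(A)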
6

Elimination of redundant polymorphism queries in object-oriented design patterns

Brown, Rhodes Hart Fraser 07 May 2010 (has links)
This thesis presents an investigation of two new techniques for eliminating redundancy inherent in uses of dynamic polymorphism operations such as virtual dispatches and type tests. The novelty of both approaches derives from taking a subject-oriented perspective which considers multiple applications to the same run-time values, as opposed to previous site-oriented reductions which treat each operation independently. The first optimization (redundant polymorphism elimination -- RPE) targets reuse over intraprocedural contexts, while the second (instance-specializing polymorphism elimination -- ISPE) considers repeated uses of the same fields over the lifetime of individual object and class instances. In both cases, the specific formulations of the techniques are guided by a study of intentionally polymorphic constructions as seen in applications of common object-oriented design patterns. The techniques are implemented in Jikes RVM for the dynamic polymorphism operations supported by the Java programming language, namely virtual and interface dispatching, type tests, and type casts. In studying the complexities of Jikes RVM's adaptive optimization system and run-time environment, an improved evaluation methodology is derived for characterizing the performance of adaptive just-in-time compilation strategies. This methodology is applied to demonstrate that the proposed optimization techniques yield several significant improvements when applied to the DaCapo benchmarks. Moreover, dramatic improvements are observed for two programs designed to highlight the costs of redundant polymorphism. In the case of the intraprocedural RPE technique, a speedup of 14% is obtained for a program designed to focus on the costs of polymorphism in applications of the Iterator pattern. For the instance-specific technique, an improvement of 29% is obtained for a program designed to focus on the costs inherent in constructions similar to the Decorator pattern. Further analyses also point to several ways in which the results of this work may be used to complement and extend existing optimization techniques, and provide clarification regarding the role of polymorphism in object-oriented design.
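A toy Python analogue of the redundancy being targeted (hypothetical code; the thesis works on Java's virtual/interface dispatch and type tests inside Jikes RVM): a loop re-evaluates the same polymorphism query on the same receiver on every iteration, whereas a subject-oriented view resolves it once per instance:

    # Toy analogue of a redundant polymorphism query (hypothetical Python code).

    class Source:
        def read(self):
            return b"raw"

    class CompressedSource(Source):
        def read(self):
            return b"zipped"

    def decompress(block):
        return block.upper()

    def copy_naive(src, n):
        out = []
        for _ in range(n):
            # Site-oriented view: the query is re-evaluated each iteration,
            # although its answer cannot change for this receiver.
            if isinstance(src, CompressedSource):
                out.append(decompress(src.read()))
            else:
                out.append(src.read())
        return out

    def copy_hoisted(src, n):
        # Subject-oriented view: resolve the query once for this receiver,
        # then reuse the resolved behavior inside the loop.
        if isinstance(src, CompressedSource):
            step = lambda: decompress(src.read())
        else:
            step = src.read
        return [step() for _ in range(n)]

    src = CompressedSource()
    assert copy_naive(src, 3) == copy_hoisted(src, 3)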
7

Optimization of Triola Plc. Loyalty Programme / Optimalizace věrnostního programu společnosti Triola a.s.

Wachtlová, Dominika January 2015 (has links)
The aim of this thesis is to suggest optimizations for the Triola Plc. loyalty program based on a theoretical understanding of loyalty and its role in the success of a company, as well as numerous practical analyses. The thesis is divided into two main parts. The first part reviews the theoretical knowledge about consumer loyalty and how a loyalty program can be an effective tool in building that loyalty. The second part consists of a variety of practical analyses, including an analysis of Triola sales data, a data mining analysis using the MML-TGI data analyser, a consumer survey and, lastly, a benchmarking analysis. The last chapter is a synthesis of the theoretical and practical parts and suggests recommendations for the Triola loyalty program based on the insights gathered throughout the thesis.
8

Crowdtuning: towards practical and reproducible auto-tuning via crowdsourcing and predictive analytics

Memon, Abdul Wahid 17 June 2016 (has links)
Tuning general compiler optimization heuristics or optimizing software for rapidly evolving hardware has become intolerably complex, ad hoc, time consuming and error prone due to the enormous number of available design and optimization choices, complex interactions between all software and hardware components, and multiple strict requirements placed on performance, power consumption, size, reliability and cost. Iterative feedback-directed compilation, auto-tuning and machine learning have shown great potential for solving these problems. For example, we successfully used them to enable the world's first machine-learning-based self-tuning compiler, Milepost GCC, which automatically learns the best optimizations across multiple programs, data sets and architectures based on static and dynamic program features. Unfortunately, its practical use was severely limited by very long training times and a lack of representative benchmarks and data sets. Furthermore, "black box" machine learning models alone could not give full insight into the correlations between features and best optimizations. In this thesis, we present a methodology and framework, the first to our knowledge, called Collective Mind (cM), that lets the community share benchmarks, data sets, compilers, tools and other artifacts while formalizing and crowdsourcing optimization and learning in a reproducible way across many users and platforms. Our open-source framework and public optimization repository help make auto-tuning and machine learning practical. Furthermore, cM lets the community validate optimization results, share unexpected run-time behavior or model mispredictions, provide useful feedback for improvement, customize common auto-tuning and learning modules, improve predictive models and find missing features. Our analysis and evaluation of the proposed framework demonstrate that it can effectively expose, isolate and collaboratively identify the key features that contribute to the model prediction accuracy. At the same time, the formalization of auto-tuning and machine learning allows us to continuously apply standard complexity-reduction techniques, leaving a minimal set of influential optimizations and relevant features as well as truly representative benchmarks and data sets. We released most of the experimental results, benchmarks and data sets at http://c-mind.org while validating our techniques in the EU FP6 MILEPOST project and during a HiPEAC internship at STMicroelectronics.
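A minimal sketch of the feature-based prediction idea behind this line of work (hypothetical feature vectors and flags; not the Collective Mind or Milepost GCC implementation): reuse the best-known optimizations of the most similar previously shared program:

    # Minimal sketch of predicting compiler optimizations from program features.
    # The features, flags and distance metric are illustrative placeholders.

    import math

    # Crowdsourced repository: (static program features, best flags found by auto-tuning)
    repository = [
        ((120, 0.8, 3), "-O3 -funroll-loops"),     # loop-heavy kernel
        ((400, 0.2, 1), "-Os"),                    # large, branchy, cold code
        ((90,  0.7, 4), "-O3 -ftree-vectorize"),   # small vectorizable kernel
    ]

    def predict_flags(features):
        """Nearest-neighbour prediction: reuse the flags of the most similar program."""
        _, flags = min(repository, key=lambda entry: math.dist(entry[0], features))
        return flags

    print(predict_flags((110, 0.8, 3)))   # "-O3 -funroll-loops" (closest to the loop-heavy kernel)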
