1

Intercepting functions for memoization

Suresh, Arjun 10 May 2016 (has links)
We have proposed mechanisms to implement function memoization at the software level as part of our effort to improve sequential code performance. We have analyzed the potential of function memoization on applications and the performance gain it brings on current architectures.
We have proposed three schemes: a simple load-time approach which works for any dynamically linked function, a compile-time approach using the LLVM framework which can enable memoization for any program function, and a hardware proposal for doing memoization in hardware along with its potential benefits. Demonstration of the load-time approach with transcendental functions showed that memoization is applicable and gives good benefit even under modern architectures and compilers (with the restriction that it can be applied only to dynamically linked functions). Our compile-time approach extends the scope of memoization and also increases the benefit it yields. It works for both user-defined functions and library functions, and can handle certain kinds of non-pure functions, such as functions with pointer arguments and global-variable usage. Hardware memoization further lowers the threshold for a function to be profitably memoized and gives more performance gain on average.
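The core mechanism behind all three schemes — saving a function's return value keyed by its argument tuple and eliding repeat executions — can be sketched as a small Python wrapper. This is an illustrative sketch of the general technique, not the thesis's load-time or LLVM implementation; `memoize` and `fast_sin` are names invented here.

```python
import math
import functools

def memoize(fn):
    """Intercept calls to a pure function and cache results by argument tuple."""
    cache = {}

    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:       # cache miss: run the real function
            cache[args] = fn(*args)
        return cache[args]          # cache hit: the call is elided
    wrapper.cache = cache
    return wrapper

# Intercepting a transcendental library function, in the spirit of the
# load-time scheme applied to dynamically linked math routines.
fast_sin = memoize(math.sin)

fast_sin(0.5)   # computed and stored
fast_sin(0.5)   # served from the cache
```

Whether this pays off depends on the function's cost versus the hash lookup — the profitability threshold that the hardware proposal above aims to lower.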
2

Applying Memoization as an Approximate Computing Method for Transiently Powered Systems

Perju, Dragos-Stefan January 2019 (has links)
Internet of Things (IoT) is becoming a more and more prevalent technology, as it not only makes our daily routines easier, but also helps industry and enterprise become more efficient. The high potential of IoT can also help support our own population on Earth, through precision agriculture, smart transportation, smart cities, and so on. It is therefore important that IoT is made scalable in a sustainable manner, in order to secure our own future as well.

The current work concerns transiently powered systems (TPS), which are embedded systems that use energy harvesting as their only power source. In their basic form, TPS suffer frequent reboots due to the unreliable availability of energy from the environment, so the throughput of such systems is initially lower than that of their battery-powered counterparts. To improve this, TPS checkpoint RAM and processor state to non-volatile memory, so that computation progress is preserved across power-loss intervals.

The aim of this project is to lower the number of checkpoints needed during an application run on a TPS in the generic case, by using approximate computing. Approximation lowers the energy needs of the TPS, meaning more results come through while the system is working between power-loss periods. For this study, the memoization technique is implemented in the form of a hash table. The Kalman filter is taken as the test application of choice, running on the Microchip SAM-L11 embedded platform.

The memoization technique yields an improvement for the Kalman application considered, versus the initial baseline version of the program. A user can "balance" between greater energy savings and more inaccurate results, or the opposite, by adjusting a "quality knob" variable epsilon (ε). For example, for ε = 0.7, the improvement is 32% fewer checkpoints needed than for the baseline version, with the output deviating by 42% on average and 71% at its maximum point.

This establishes the proof of concept that approximate computing can indeed improve the throughput of TPS and make them more feasible. It is pointed out, however, that only a single application type was tested, with a certain input trace. The hash-table method implemented can behave differently depending on the application and/or data it works with. It is therefore suggested that a pre-analysis of the specific dataset and application be done at design time, in order to check the feasibility of applying approximations for the case considered.
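One common way to realize hash-table memoization with a quality knob ε — assumed here for illustration, not necessarily the thesis's exact scheme — is to quantize inputs by ε before hashing, so that nearby inputs reuse one cached result. Larger ε means coarser buckets, more cache hits, and less accurate output.

```python
def approx_memoize(fn, epsilon):
    """Approximate memoization: inputs within roughly `epsilon` of each
    other map to the same hash-table bucket, trading output accuracy for
    fewer (energy-hungry) recomputations. Illustrative sketch only."""
    table = {}

    def wrapper(x):
        k = round(x / epsilon)   # quantize: the "quality knob" at work
        if k not in table:
            table[k] = fn(x)     # miss: compute and store
        return table[k]          # hit: return the approximate result
    wrapper.table = table
    return wrapper

# Example with a stand-in for an expensive per-sample computation.
f = approx_memoize(lambda x: x * x, epsilon=0.1)
f(1.00)   # computed and stored
f(1.02)   # falls in the same bucket: reuses the cached result for 1.00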
3

Identifying Method Memoization Opportunities in Java Programs

Chugh, Pallavi January 2016 (has links) (PDF)
Memoization of a method is a commonly used refactoring wherein a developer modifies the code of a method to save return values for some or all incoming parameter values. Whenever a parameter tuple is received for the second or subsequent time, the method's execution can be elided and the corresponding saved value returned. It is quite challenging for developers to identify suitable methods for memoization, as these may not necessarily be the methods that account for a high fraction of the running time in the program. What are really sought are the methods that cumulatively incur significant execution time in invocations that receive repeated parameter values. Our primary contribution is a novel dynamic analysis approach that emits a report containing, for each method in an application, an estimate of the execution-time savings to be expected from memoizing that method. The key technical novelty of our approach is a set of design elements that allow it to target real-world programs and to compute the estimates in a fine-grained manner. We describe our approach in detail and evaluate an implementation of it on several real-world programs. Our evaluation reveals that there do exist many methods with good estimated savings, that the approach is reasonably efficient, and that it has good precision (relative to actual savings).
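The core idea of the dynamic analysis above — attribute to each method the time spent in invocations whose parameter tuple was seen before — can be sketched as a small profiler. This is a rough illustration under assumed names (`MemoizationProfiler`, `profile`, `report`), not the thesis's actual tool, which targets Java and handles many real-world complications.

```python
import time
from collections import defaultdict

class MemoizationProfiler:
    """Estimate, per profiled function, the execution time that
    memoization could save: the time spent in calls whose argument
    tuple was seen before. Illustrative sketch only."""

    def __init__(self):
        self.seen = defaultdict(set)           # name -> argument tuples seen
        self.repeat_time = defaultdict(float)  # name -> elidable time (seconds)

    def profile(self, fn):
        name = fn.__name__
        def wrapper(*args):
            start = time.perf_counter()
            result = fn(*args)
            elapsed = time.perf_counter() - start
            if args in self.seen[name]:   # repeat invocation: memoizable
                self.repeat_time[name] += elapsed
            else:
                self.seen[name].add(args)
            return result
        return wrapper

    def report(self):
        # Estimated savings per method, best candidates first.
        return sorted(self.repeat_time.items(), key=lambda kv: -kv[1])

profiler = MemoizationProfiler()

@profiler.profile
def expensive(n):
    return sum(i * i for i in range(n))

for n in (10_000, 10_000, 20_000, 10_000):
    expensive(n)

profiler.report()   # 'expensive' appears with a nonzero savings estimate
```

Note that a hot method with always-fresh arguments would report zero savings here, which is exactly the distinction the abstract draws between high running time and memoizability.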
4

FFRU: A Time- and Space-Efficient Caching Algorithm

Garrett, Benjamin, 0000-0003-1587-6585 January 2021 (has links)
Cache replacement policies have applications that are nearly ubiquitous in technology. Among these is an interesting subset which occurs when referentially transparent functions are memoized, e.g., in compilers, in dynamic programming, and in other software caches. In many applications the least recently used (LRU) approach likely preserves the items most needed by memoized function calls. However, despite its popularity, LRU is expensive to implement, which has prompted a spate of research initiatives aimed at approximating its cache-miss performance in exchange for faster and more memory-efficient implementations. We present a novel caching algorithm, Far From Recently Used (FFRU), which offers a simple but highly configurable mechanism for providing lower bounds on the usage recency of items evicted from the cache. The algorithm preserves the constant amortized time cost of insertions and updates and minimizes the memory overhead needed to administer the eviction guarantees. We study the cache-miss performance of several memoized optimization problems which vary in the number of subproblems generated and in the access patterns exhibited by their recursive calls. We study their cache-miss performance under LRU cache replacement, then show the performance of FFRU in the same problem scenarios. We show that, for equivalent minimum eviction-age guarantees, FFRU incurs fewer cache misses than LRU, and does so using less memory. We also present variations of the algorithms studied (Fibonacci, KMP, LCS, and Euclidean TSP) which exploit the characteristics of the cache replacement algorithm being employed, further improving cache-miss performance. Finally, we present a novel implementation of a well-known approximation algorithm for the Euclidean Traveling Salesman Problem due to Sanjeev Arora. Our implementation of this algorithm outperforms the currently known implementations of the same.
It has long remained an open question whether algorithms relying on geometric divisions of space can be implemented as practical tools, and our implementation of Arora's algorithm establishes a new benchmark in that arena. / Computer and Information Science
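FFRU itself is the thesis's contribution and is not reproduced here; for context, the LRU baseline it is measured against can be sketched with an ordered hash map, used as a bounded memo table for a recursive computation, as in the evaluation's Fibonacci scenario. Names here are illustrative.

```python
from collections import OrderedDict

class LRUCache:
    """The LRU baseline that FFRU is compared against: evict the least
    recently used entry when the cache is full. FFRU (per the abstract)
    relaxes exact recency ordering to gain speed and memory."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
        self.misses = 0

    def get_or_compute(self, key, compute):
        if key in self.data:
            self.data.move_to_end(key)      # mark as most recently used
            return self.data[key]
        self.misses += 1                    # cache miss: compute the value
        value = compute(key)
        if len(self.data) >= self.capacity:
            self.data.popitem(last=False)   # evict the least recently used
        self.data[key] = value
        return value

# Memoizing a recursive Fibonacci through the cache.
cache = LRUCache(capacity=64)

def fib(n):
    if n < 2:
        return n
    return cache.get_or_compute(n, lambda k: fib(k - 1) + fib(k - 2))

fib(30)   # runs in linear time; each subproblem is computed once
```

The `move_to_end` bookkeeping on every hit is precisely the per-access cost that FFRU-style approximations aim to avoid.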
5

Simplifying the Analysis of C++ Programs

Solodkyy, Yuriy 16 December 2013 (has links)
Based on our experience of working with different C++ front ends, this thesis identifies numerous problems that complicate the analysis of C++ programs along the entire spectrum of analysis applications. We utilize library, language, and tool extensions to address these problems and offer solutions to many of them. In particular, we present efficient, expressive and non-intrusive means of dealing with abstract syntax trees of a program, which together render the visitor design pattern obsolete. We further extend C++ with open multi-methods to deal with the broader expression problem. Finally, we offer two techniques, one based on refining the type system of a language and the other on abstract interpretation, both of which allow developers to statically ensure or verify various run-time properties of their programs without having to deal with the full language semantics or even the abstract syntax tree of a program. Together, the solutions presented in this thesis make ensuring properties of interest about C++ programs available to average language users.
