41 |
Applications, performance analysis, and optimization of weather and air quality models. Sobhani, Negin, 01 December 2017 (has links)
Atmospheric particulate matter (PM) is linked to various adverse environmental and health impacts. PM in the atmosphere reduces visibility, alters precipitation patterns by acting as cloud condensation nuclei (CCN), and changes the Earth’s radiative balance by absorbing or scattering solar radiation in the atmosphere. The long-range transport of pollutants leads to increased PM concentrations even in remote locations such as polar regions and mountain ranges. One significant effect of PM on the Earth’s climate occurs when light-absorbing PM, such as Black Carbon (BC), is deposited on snow. In the Arctic, BC deposition on highly reflective surfaces (e.g. glaciers and sea ice) has very intense effects, causing snow to melt more quickly. Thus, characterizing PM sources, identifying long-range transport pathways, and quantifying the climate impacts of PM are crucial in order to inform emission abatement policies for reducing both the health and environmental impacts of PM.
Chemical transport models provide mathematical tools for better understanding the atmospheric system, including chemical and particle transport, pollution diffusion, and deposition. The technological and computational advances of the past decades allow higher-resolution air quality and weather forecast simulations with more accurate representations of the physical and chemical mechanisms of the atmosphere.
Due to the significant role of air pollutants in public health and the environment, several countries and cities perform air quality forecasts to warn the population about future air pollution events and to take local preventive measures, such as traffic regulations, that minimize the impacts of the forecasted episode. However, the costs associated with complex air quality forecast models, especially for higher-resolution simulations, make forecasting a challenge. This dissertation therefore also focuses on applications, performance analysis, and optimization of meteorology and air quality forecasting models.
This dissertation presents several modeling studies at various scales to better understand the transport of aerosols from different geographical sources and economic sectors (i.e. transportation, residential, industry, biomass burning, and power) and to quantify their climate impacts. The simulations are evaluated using various observations, including ground site measurements, field campaigns, and satellite data.
The sector-based modeling studies elucidated the importance of various economic sectors and geographical regions for global air quality and for the climatic impacts associated with BC. This dissertation provides policy makers with implications to inform emission mitigation policies that target the source sectors and regions with the highest impacts. Furthermore, advances were made in better understanding the impacts of light-absorbing particles on climate and surface albedo.
Finally, to improve modeling speed, the performance of the models was analyzed, and optimizations were proposed to improve their computational efficiency. These optimizations show a significant improvement in the performance of the Weather Research and Forecasting (WRF) and WRF-Chem models. The modified code was validated and incorporated back into the WRF source code to benefit all WRF users. Although weather and air quality models are shown to be an excellent means for forecasting applications at both local and hemispheric scales, further studies are needed to optimize the models and improve the performance of the simulations.
|
42 |
Cost-Efficient Designs for Assessing Work-Related Biomechanical Exposures. Rezagholi, Mahmoud, January 2012 (has links)
Work-related disorders due to biomechanical exposures have been subject to extensive research. Studies addressing these exposures have, however, paid limited attention to an efficient use of resources in exposure assessment. The present thesis investigates cost-efficient procedures for the assessment of work-related biomechanical exposures, i.e. procedures aiming at a proper balance between statistical and economic performance. Paper I is a systematic review of tools used in the literature to provide cost-efficient data collection designs. Two main approaches were identified in nine publications: comparing cost efficiency among alternative data collection designs, and optimizing resource allocation between different stages of data collection, e.g. subjects and samples within subjects. The studies presented, in general, simplified analyses, in particular with respect to economics. Paper II compared the cost efficiency of four video-based techniques for assessing upper arm postures. The comparison was based on a comprehensive model of cost and error as well as on two simplified models. Labour costs were a dominant factor in the cost efficiency comparison. Measurement bias and costs other than labour influenced the ranking and economic evaluation of the assessment techniques. Paper III compared the cost efficiency of different combinations of direct and indirect methods for exposure assessment. Although a combination of methods could significantly reduce the total cost of obtaining a desired level of precision, the total cost was, in the investigated scenario, lowest when only direct measurements were performed. However, when the total number of measurements was fixed, a combination was the most cost-efficient choice. In Paper IV, demand functions were derived for a four-stage measurement strategy, with the focus on either minimizing the cost for a required precision or maximizing the precision for a predetermined budget. The paper presents algorithms for identifying optimal values of measurement inputs at all four stages, adjusted to integers as necessary for practical application. In summary, the thesis shows that it is important to address all sources of costs and errors associated with alternative measurement designs in any particular study, and that an optimal determination of samples at different stages can be identified in several cases not previously addressed in the literature.
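A minimal sketch of the kind of allocation problem addressed here, using the textbook two-stage result (samples within subjects): the variance components, costs, and precision target below are illustrative assumptions, and the thesis's own four-stage algorithms are more general than this simplified case.

```python
import math

def optimal_two_stage_allocation(var_between, var_within, cost_subject, cost_sample, target_variance):
    """Classical two-stage allocation: choose samples per subject (m) and number of
    subjects (n) that minimize cost n*(c1 + m*c2) while keeping
    Var(mean) = var_between/n + var_within/(n*m) at or below the target."""
    # Optimal samples per subject depends only on the variance and cost ratios.
    m = math.sqrt((var_within * cost_subject) / (var_between * cost_sample))
    m = max(1, round(m))  # integer-adjust, as required for practical application
    # Number of subjects needed to reach the required precision with this m.
    n = math.ceil((var_between + var_within / m) / target_variance)
    total_cost = n * (cost_subject + m * cost_sample)
    return n, m, total_cost

# Illustrative (assumed) inputs: between-subject variance, within-subject variance,
# cost per subject, cost per measurement, required variance of the mean.
print(optimal_two_stage_allocation(4.0, 9.0, 100.0, 10.0, 1.0))
```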
|
43 |
Computational Redundancy in Image Processing. Khalvati, Farzad, January 2008 (has links)
This research presents a new performance improvement technique, window memoization, for software and hardware implementations of local image processing algorithms. Window memoization combines the memoization techniques proposed in software and hardware with a characteristic of image data, computational redundancy, to improve the performance (in software) and efficiency (in hardware) of local image processing algorithms.
The computational redundancy of an image indicates the percentage of computations that can be skipped when performing a local image processing algorithm on the image. Our studies show that computational redundancy is inherited from two principal redundancies in image data: coding redundancy and interpixel redundancy. We have shown mathematically that the amount of coding and interpixel redundancy of an image has a positive effect on its computational redundancy: higher coding and interpixel redundancy leads to higher computational redundancy. We have also demonstrated (mathematically and empirically) that the amount of coding and interpixel redundancy of an image has a positive effect on the speedup obtained for the image by window memoization, in both software and hardware.
Window memoization minimizes the number of redundant computations performed on an image by identifying similar neighborhoods of pixels in the image. It uses a memory, called a reuse table, to store the results of previously performed computations. When a set of computations has to be performed for the first time, the computations are performed and the corresponding result is stored in the reuse table. When the same set of computations has to be performed again in the future, the previously calculated result is reused and the actual computations are skipped.
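The reuse-table mechanism can be illustrated with a small sketch; this assumes a grayscale image stored as a nested list and a hypothetical local-variance kernel, and is only a dictionary-based illustration of the idea, not the optimized software or hardware architectures developed in this work.

```python
def window_memoize(image, kernel, radius=1):
    """Apply `kernel` to every (2*radius+1)^2 neighborhood, reusing results for
    neighborhoods that have been seen before (the reuse table)."""
    reuse_table = {}          # neighborhood tuple -> previously computed result
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            window = tuple(image[y + dy][x + dx]
                           for dy in range(-radius, radius + 1)
                           for dx in range(-radius, radius + 1))
            if window not in reuse_table:          # first occurrence: compute
                reuse_table[window] = kernel(window)
            out[y][x] = reuse_table[window]        # repeat occurrence: reuse
    return out

# Example kernel (assumed): local variance of the neighborhood.
def local_variance(win):
    mean = sum(win) / len(win)
    return sum((v - mean) ** 2 for v in win) / len(win)

img = [[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]
print(window_memoize(img, local_variance))
```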
Implementing the window memoization technique in software speeds up the computations required to complete an image processing task. In software, we have developed an optimized architecture for window memoization and applied it to six image processing algorithms: Canny edge detector, morphological gradient, Kirsch edge detector, Trajkovic corner detector, median filter, and local variance. The typical speedups range from 1.2 to 7.9 with a maximum factor of 40. We have also presented a performance model to predict the speedups obtained by window memoization in software.
In hardware, we have developed an optimized architecture that embodies the window memoization technique. Our hardware design for window memoization achieves high speedups with an overhead in hardware area that is significantly less than that of conventional performance improvement techniques. As case studies in hardware, we have applied window memoization to the Kirsch edge detector and median filter. The typical and maximum speedup factors in hardware are 1.6 and 1.8, respectively, with 40% less hardware in comparison to conventional optimization techniques.
|
45 |
Secure and high-performance big-data systems in the cloud. Tang, Yuzhe, 21 September 2015 (has links)
Cloud computing and big data technology continue to revolutionize how computing and data analysis are delivered today and in the future. To store and process fast-changing big data, various scalable systems (e.g. key-value stores and MapReduce) have recently emerged in industry. However, there is a huge gap between what these open-source software systems can offer and what real-world applications demand. First, scalable key-value stores are designed for simple data access methods, which limits their use in advanced database applications. Second, existing systems in the cloud need automatic performance optimization for better resource management with minimized operational overhead. Third, the demand continues to grow for privacy-preserving search and information sharing between autonomous data providers, as exemplified by healthcare information networks.
My Ph.D. research aims at bridging these gaps.
First, I proposed HINDEX, which provides secondary index support on top of write-optimized key-value stores (e.g. HBase and Cassandra). To update the index structure efficiently in the face of an intensive write stream, HINDEX synchronously executes append-only operations and defers the expensive, so-called index-repair operations. The core contribution of HINDEX is a scheduling framework for deferred and lightweight execution of index repairs. HINDEX has been implemented and is currently being transferred to an IBM big data product.
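A minimal sketch of the append-and-defer idea, assuming a simple in-memory store; the class and method names below are illustrative and do not reflect the actual HINDEX implementation or API.

```python
from collections import defaultdict, deque

class DeferredSecondaryIndex:
    """Sketch: on every base-table write, append the new index entry immediately
    (cheap) and defer removal of the stale entry (the 'index repair') to a pass
    that can be scheduled when the write load is low."""
    def __init__(self):
        self.base = {}                        # row key -> attribute value
        self.index = defaultdict(set)         # attribute value -> set of row keys
        self.repair_queue = deque()           # deferred (old_value, key) repairs

    def put(self, key, value):
        old = self.base.get(key)
        self.base[key] = value
        self.index[value].add(key)            # append-only, no read-before-write
        if old is not None and old != value:
            self.repair_queue.append((old, key))   # defer the expensive cleanup

    def repair(self, budget=100):
        """Run up to `budget` deferred repairs (e.g. scheduled off-peak)."""
        for _ in range(min(budget, len(self.repair_queue))):
            old, key = self.repair_queue.popleft()
            self.index[old].discard(key)

    def lookup(self, value):
        # Filter out stale entries that have not been repaired yet.
        return {k for k in self.index.get(value, set()) if self.base.get(k) == value}
```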
Second, I proposed Auto-pipelining for automatic performance optimization of streaming applications on multi-core machines. The goal is to prevent the bottleneck scenario in which the streaming system is blocked by a single core while all other cores are idling, which wastes resources. To partition the streaming workload evenly across all the cores and to search for the best partitioning among many possibilities, I proposed a heuristic-based search strategy that achieves locally optimal partitioning with lightweight search overhead. The key idea is to use a white-box approach to search for the theoretically best partitioning and then a black-box approach to verify the effectiveness of that partitioning. The proposed technique, called Auto-pipelining, is implemented on IBM Stream S.
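A minimal sketch of the two-step idea, assuming per-operator cost estimates are available: a greedy longest-processing-time placement stands in for the white-box search, and a measurement callback stands in for the black-box verification; none of this is the actual Auto-pipelining implementation.

```python
import heapq

def white_box_partition(operator_costs, n_cores):
    """Greedy (LPT) placement of streaming operators onto cores so that the
    estimated per-core load is as even as possible. `operator_costs` maps
    operator name -> estimated cost per tuple (assumed, e.g. from profiling)."""
    cores = [(0.0, i, []) for i in range(n_cores)]   # (load, core id, operators)
    heapq.heapify(cores)
    for op, cost in sorted(operator_costs.items(), key=lambda kv: -kv[1]):
        load, i, ops = heapq.heappop(cores)          # least-loaded core first
        ops.append(op)
        heapq.heappush(cores, (load + cost, i, ops))
    return sorted(cores, key=lambda c: c[1])

def black_box_verify(partition, measure_throughput):
    """Accept the candidate partition only if measured throughput improves;
    `measure_throughput` is a stand-in for running the real pipeline."""
    return measure_throughput(partition)

# Illustrative (assumed) per-tuple costs in microseconds.
costs = {"parse": 4.0, "filter": 1.5, "join": 6.0, "aggregate": 3.0, "sink": 0.5}
print(white_box_partition(costs, n_cores=3))
```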
Third, I proposed ε-PPI, a suite of privacy-preserving index algorithms that allow data sharing among unknown parties while maintaining a desired level of data privacy. To differentiate the privacy concerns of different persons, I proposed a personalized privacy definition and substantiated this new privacy requirement by injecting false positives into the published ε-PPI data. To construct the ε-PPI securely and efficiently, I proposed to optimize the performance of the otherwise expensive multi-party computations; the key idea is to use an inexpensive addition-homomorphic secret sharing mechanism and to carry out the distributed computation in a scalable P2P overlay.
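A minimal sketch of addition-homomorphic (additive) secret sharing, the generic primitive referred to above; the modulus and party count are illustrative assumptions, not parameters of ε-PPI.

```python
import random

MODULUS = 2**61 - 1  # assumed prime modulus, for illustration only

def share(secret, n_parties):
    """Split `secret` into n additive shares that sum to it modulo MODULUS."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares):
    return sum(shares) % MODULUS

# Addition homomorphism: each party adds its local shares of x and y, and the
# reconstruction of those sums equals x + y without any party learning x or y.
x_shares, y_shares = share(41, 3), share(1, 3)
sum_shares = [(a + b) % MODULUS for a, b in zip(x_shares, y_shares)]
assert reconstruct(sum_shares) == 42
```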
|
46 |
Duomenų filtravimo ir atrankos sprendimų analizė / The analysis of data filtration and selection solutions. Vairaitė, Rūta, 10 July 2008 (has links)
As the amount of stored data grows, efficient processing becomes essential, and users expect ever higher database performance. This thesis addresses the problem of making databases respond faster when their tables contain very large numbers of records, focusing on database performance tuning and optimization. After analyzing existing database tuning methods and the causes of reduced performance, a method is proposed that processes and filters data more efficiently and returns query results to the user faster. The MS SQL Server database management system was chosen for the work. An experiment measured query speed, comparing the proposed method with the method based on views (virtual tables); the results show that the proposed method achieves better query performance.
|
47 |
Μία μέθοδος ανάλυσης της αποδοτικότητας μεγάλων οργανισμών / A method for analyzing the efficiency of large organizations. Καρατζάς, Ανδρέας, 18 February 2010 (has links)
The purpose of this thesis is to present the methodology of Data Envelopment Analysis (DEA), a technique for comparing organizations with a view to optimizing their operation and measuring their efficiency. DEA methods are generally used as a tool to better understand and analyze how the resources of an organization are distributed and how each contributes to its production. They aim to maximize performance either by reducing the resources used while keeping the output constant, or by increasing the output while keeping the resources constant.
To be studied, a decision making unit is described in terms of its "inputs" and "outputs". The inputs of a unit are the resources it needs in order to operate, and the outputs are the products or services it delivers. For example, the inputs of a bank branch can be taken to be the operating costs of the premises and the personnel, and the outputs the total financial transactions carried out or the number of customers doing business with that branch.
The CCR model is presented first, as it is the fundamental DEA model (introduced in 1978 by Charnes, Cooper and Rhodes). Its computational procedure is given, together with an extension based on the dual problem, which outperforms the primal in solution speed when the number of decision making units is large, as well as in the range of solutions to the problem. The CCR model and its extensions rest on the assumption of constant returns to scale: if an activity (x, y) is feasible, then for every positive number t the activity (tx, ty) is also feasible. Thus, if the efficiency of all decision making units is plotted and the efficient frontier is drawn, the frontier consists of line segments whose vertices are decision making units.
The next model presented is that of Banker, Charnes and Cooper (BCC), whose main difference from the previous one is that the production set is the convex hull of the points representing the decision making units (rather than their line segments). A common feature of both models is that the analysis must each time focus either on minimizing the inputs or on maximizing the outputs in order to draw conclusions. The additive models presented next do not make this distinction, as they are based on simultaneously minimizing the excess input resources and maximizing the produced outputs. A further advantage of the additive models is that they can handle negative data, which was not possible with the previous models. An extension of the additive model is also presented in which the efficiency measure is not affected by differences in the units of measurement of the inputs and outputs.
Finally, sensitivity analysis is described. It is an important component of DEA methods, as it makes it possible to study how the results change when decision making units are added or removed, or when inputs and outputs are added to or removed from a problem.
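A minimal sketch of the input-oriented CCR envelopment model under constant returns to scale, solved as a linear program with SciPy; the three-unit data set is an illustrative assumption, not data from the thesis.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of decision making unit k.
    X: (m inputs x n units), Y: (s outputs x n units).
    Solves: min theta  s.t.  X @ lam <= theta * X[:, k],  Y @ lam >= Y[:, k],  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate(([1.0], np.zeros(n)))          # variables: [theta, lam_1..lam_n]
    A_in = np.hstack((-X[:, [k]], X))                 # X lam - theta * x_k <= 0
    A_out = np.hstack((np.zeros((s, 1)), -Y))         # -Y lam <= -y_k
    A_ub = np.vstack((A_in, A_out))
    b_ub = np.concatenate((np.zeros(m), -Y[:, k]))
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Illustrative (assumed) data: 2 inputs, 1 output, 3 units.
X = np.array([[2.0, 4.0, 8.0], [3.0, 1.0, 1.0]])
Y = np.array([[1.0, 1.0, 1.0]])
print([round(ccr_efficiency(X, Y, k), 3) for k in range(3)])
```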
|
48 |
A techno-economic environmental approach to improving the performance of PV, battery, grid-connected, diesel hybrid energy systems : A case study in Kenya. Wilson, Jason Clifford, January 2018 (has links)
Backup diesel generator (DG) systems continue to be a heavily polluting and costly solution for institutions with unreliable grid connections. These systems slow economic growth and accelerate climate change. Photovoltaic (PV), energy storage (ES), grid-connected, DG hybrid energy systems (HESs), or PV-HESs, can alleviate the overwhelming costs and harmful emissions incurred by traditional backup DG systems and improve the reliability of power supply. However, from project conception to end of lifetime, PV-HESs face significant barriers of uncertainty and variable operating conditions. The fit-and-forget solution previously applied to backup DG systems should not be adopted for PV-HESs. To maximize cost and emission reductions, PV-HESs must be adapted to their boundary conditions, for example irradiance, temperature, and demand. These conditions can be defined and monitored using measurement equipment. From this, an opportunity for performance optimization can be established. The method demonstrated in this study is a techno-economic and environmental approach to improving the performance of PV-HESs. The method has been applied to a case study of an existing PV-HES in Kenya. A combination of analytical and numerical analyses has been conducted. The analytical analysis has been carried out in Microsoft Excel with the intent of being easily repeatable and practical in a business environment. Simulation analysis has been conducted in improved Hybrid Optimization by Genetic Algorithms (iHOGA), a commercially available software package for simulating HESs. Using six months of measurement data, the method identifies performance inefficiencies and explores corrective interventions. The proposed interventions are evaluated, by simulation analyses, against a set of techno-economic and environmental key performance indicators, namely: Net Present Cost (NPC), generator runtime, fuel consumption, total system emissions, and renewable fraction. Five corrective interventions are proposed, and predictions indicate that if these are implemented, fuel consumption can be reduced by 70 %, battery lifetime extended by 28 %, net present cost reduced by 30 %, and emissions cut by 42 %. This method has only been applied to a single PV-HES; however, its potential impact on sub-Saharan Africa and similar regions with unreliable grid connections is significant. In the future, in sub-Saharan Africa alone, over $500 million (USD) and 1.7 billion kg of CO2 emissions could be saved annually if only 25 % of the fuel savings identified in this study were realized. The method proposed here could be improved with additional measurement data and refined simulation models. Furthermore, it could potentially be fully automated, which would allow it to be implemented more frequently and at lower cost.
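Two of the key performance indicators named above can be sketched directly; the cash flows, discount rate, and energy figures below are illustrative assumptions and are not taken from the case study.

```python
def net_present_cost(annual_costs, discount_rate):
    """NPC = sum of yearly costs discounted to present value:
    NPC = sum_t C_t / (1 + i)^t, with t = 0 the investment year."""
    return sum(c / (1 + discount_rate) ** t for t, c in enumerate(annual_costs))

def renewable_fraction(pv_energy_kwh, total_energy_kwh):
    """Share of served energy that comes from the renewable (PV) source."""
    return pv_energy_kwh / total_energy_kwh

# Illustrative (assumed) figures: capital cost now, then fuel and O&M per year.
costs = [120_000] + [15_000] * 10        # USD over a 10-year horizon
print(round(net_present_cost(costs, 0.08), 0))
print(round(renewable_fraction(pv_energy_kwh=68_000, total_energy_kwh=100_000), 2))
```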
|
49 |
Extraction and traceability of annotations for WCET estimation / Extraction et traçabilité d’annotations pour l’estimation de WCET. Li, Hanbing, 09 October 2015 (has links)
Real-time systems have become ubiquitous and play an important role in our everyday life. For hard real-time systems, computing correct results is not the only requirement: the results must also be produced within a bounded time interval. Knowing the worst-case execution time (WCET) is necessary to guarantee that the system meets its timing constraints. Tight WCET estimation requires annotations, which are usually added at the source code level, while WCET analysis is performed at the binary code level. Compiler optimization sits between these two levels and affects both the structure of the code and the annotations. This thesis proposes a transformation framework that, for each optimization, traces the annotation information from the source code level down to the binary code level without loss of flow information. We chose LLVM as the compiler for implementing the framework, and we used the Mälardalen, TSVC and gcc-loops benchmarks to demonstrate its impact on compiler optimizations and annotation transformation. The experimental results show that with this framework many optimizations can be turned on while the WCET can still be estimated safely, and the estimated WCET is better (lower) than the original one. We also show that compiler optimizations are beneficial for real-time systems.
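As a simplified illustration of tracing one flow annotation through one optimization, consider a loop-bound annotation under loop unrolling; the rule and the annotation format below are assumptions for illustration and are not the thesis's LLVM-based transformation framework.

```python
def transform_loop_bound(bound, unroll_factor):
    """When a loop with at most `bound` iterations is unrolled by `unroll_factor`
    (with an epilogue loop for leftover iterations), the unrolled body executes at
    most bound // unroll_factor times and the remainder loop at most
    bound % unroll_factor times, so the source-level bound annotation must be
    rewritten accordingly at the binary level."""
    return {
        "unrolled_loop_bound": bound // unroll_factor,
        "remainder_loop_bound": bound % unroll_factor,
    }

# Source-level annotation: loop runs at most 100 times; compiler unrolls by 4.
print(transform_loop_bound(bound=100, unroll_factor=4))
# -> {'unrolled_loop_bound': 25, 'remainder_loop_bound': 0}
```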
|
50 |
Identifying Method Memoization Opportunities in Java Programs. Chugh, Pallavi, January 2016 (has links) (PDF)
Memoization of a method is a commonly used refactoring wherein a developer modifies the code of a method to save its return values for some or all incoming parameter values. Whenever a parameter tuple is received for the second or a subsequent time, the method's execution can be elided and the corresponding saved value can be returned. It is quite challenging for developers to identify suitable methods for memoization, as these are not necessarily the methods that account for a high fraction of the running time in the program. What are really sought are the methods that cumulatively incur significant execution time in invocations that receive repeat parameter values. Our primary contribution is a novel dynamic analysis approach that emits a report containing, for each method in an application, an estimate of the execution time savings to be expected from memoizing that method. The key technical novelty of our approach is a set of design elements that allow it to target real-world programs and to compute the estimates in a fine-grained manner. We describe our approach in detail and evaluate an implementation of it on several real-world programs. Our evaluation reveals that there do exist many methods with good estimated savings, that the approach is reasonably efficient, and that it has good precision (relative to actual savings).
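A minimal sketch of the underlying estimate, profiling how much time a method spends in invocations whose parameter tuple has been seen before; this Python decorator is only an illustration of the idea, not the dissertation's Java analysis tool.

```python
import time
from collections import defaultdict
from functools import wraps

stats = defaultdict(lambda: {"repeat_time": 0.0, "total_time": 0.0})

def profile_memoization_opportunity(fn):
    """Record, per function, the time spent in calls whose argument tuple has
    already been observed; that time approximates the savings memoization could
    deliver (ignoring lookup overhead and side effects)."""
    seen = set()
    @wraps(fn)
    def wrapper(*args):
        start = time.perf_counter()
        result = fn(*args)
        elapsed = time.perf_counter() - start
        stats[fn.__name__]["total_time"] += elapsed
        if args in seen:                       # repeat parameter tuple
            stats[fn.__name__]["repeat_time"] += elapsed
        seen.add(args)
        return result
    return wrapper

@profile_memoization_opportunity
def slow_square(x):
    time.sleep(0.01)     # stand-in for real work
    return x * x

for v in [1, 2, 1, 1, 3, 2]:
    slow_square(v)
print(stats["slow_square"])   # repeat_time is the estimated memoization saving
```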
|