1 |
Achievement in education : improving measurement and testing models and predictors / McIlroy, David. January 2000 (has links)
No description available.
|
2 |
An investigation of the role of simulation in the performance prediction of data parallel Fortran (HPF) programs / Vassiliou, Vassilios. January 1999 (has links)
No description available.
|
3 |
Performance Prediction of Quantization Based Automatic Target Recognition Algorithms / Horvath, Matthew Steven. January 2015 (has links)
No description available.
|
4 |
Re-thinking termination guarantee of eBPF / Sahu, Raj. 10 June 2024 (has links)
In the rapidly evolving landscape of BPF kernel extensions, where the industry deploys an increasing number of simultaneously running BPF programs, the need to account for BPF-induced overhead on latency-sensitive kernel functions is becoming critical. We also find that eBPF's termination guarantee is insufficient to protect systems from BPF programs that run extraordinarily long due to compute-heavy operations and runtime factors such as contention. Operators lack a mechanism to identify and avoid installing long-running BPF programs, and also need a mechanism to abort such programs when they are found to add high latency overhead to performance-critical kernel functions. In this work, we propose a runtime estimator and a dynamic termination mechanism to address these two issues, respectively. We use a hybrid of static and dynamic analysis to produce a runtime range that we demonstrate encompasses the actual runtime of the BPF program. For safe BPF termination, we propose a short-circuiting approach that skips all costly operations so the program quickly reaches completion. Evaluating the proposed solutions, we find the runtime estimate alone too broad, but paired with dynamic termination it can be used by a BPF orchestrator to impose policies on the overhead that BPF programs add to a call path. The proposed dynamic termination mechanism has zero overhead on BPF programs that are never terminated, with a verification overhead proportional to the number of helper calls in a BPF program. In the future, we aim to make BPF execution atomic, guaranteeing that kernel objects modified within a BPF program are always left in a consistent state in the event of program termination. / Master of Science / The Linux kernel has a relatively recent feature called eBPF that allows new code to be added to a running system without needing a reboot.
Due to the flexibility eBPF offers, the technology is attracting widespread adoption for diverse use cases such as system health monitoring, security, and program acceleration. In this work, we identify that eBPF programs have a non-negligible performance impact on a system which, in the extreme case, can cause Denial-of-Service attacks on the host machine despite passing all security checks enforced by eBPF. We propose a two-part solution: an eBPF runtime estimator and a Fast-Path termination mechanism. The runtime estimator aims to prevent the installation of eBPF programs that would cause a large performance impact, while Fast-Path termination acts as a safety net for cases when an installed program unexpectedly runs longer. The overall solution will enable better management of eBPF programs with respect to their performance impact and enforce strict bounds on the added latency. Potential future work includes factoring impacts other than performance, such as inter-BPF interaction, into the solution, and designing knobs that an operator can easily tune to relax or constrain the side effects of the eBPF programs installed in the system.
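The "runtime range" idea above can be sketched as follows. This is a minimal illustration, not the thesis's estimator: the per-helper latency ranges, per-instruction costs, and helper names' figures are all invented for the example; only the shape of the computation (static instruction bound combined with measured helper-cost ranges to bracket runtime) follows the abstract.

```python
# Hypothetical sketch: bracket a BPF program's runtime by combining a static
# worst-case instruction count with measured per-helper latency ranges.
# All numbers below are illustrative assumptions, not kernel measurements.

# Assumed per-helper latency ranges in nanoseconds (best case, contended case).
HELPER_LATENCY_NS = {
    "bpf_map_lookup_elem": (20, 400),    # cache-hot lookup vs. bucket contention
    "bpf_ktime_get_ns":    (10, 30),
    "bpf_ringbuf_output":  (100, 2000),  # free buffer vs. reserve retries
}

def runtime_range(insn_count, helper_calls, ns_per_insn=(0.5, 2.0)):
    """Return a (low, high) runtime estimate in nanoseconds.

    insn_count   -- max instructions on the longest path (e.g. from the verifier)
    helper_calls -- mapping of helper name -> call count on that path
    """
    low = insn_count * ns_per_insn[0]
    high = insn_count * ns_per_insn[1]
    for name, count in helper_calls.items():
        lo, hi = HELPER_LATENCY_NS[name]
        low += count * lo
        high += count * hi
    return low, high

low, high = runtime_range(4096, {"bpf_map_lookup_elem": 8, "bpf_ringbuf_output": 1})
print(f"estimated runtime: {low:.0f}-{high:.0f} ns")
```

The wide gap between the bounds mirrors the abstract's finding that a static range alone is too broad, which is why it is paired with dynamic termination.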
|
5 |
Iterative compilation and performance prediction for numerical applications / Fursin, Grigori G. January 2004 (has links)
As the current rate of improvement in processor performance far exceeds that of memory performance, memory latency is the dominant overhead in many performance-critical applications. In many cases, automatic compiler-based approaches to improving memory performance are limited, and programmers frequently resort to manual optimisation techniques. However, this process is tedious and time-consuming. Furthermore, a diverse range of rapidly evolving hardware makes the optimisation process even more complex. It is often hard to predict the potential benefits of different optimisations, and there are no simple criteria for stopping optimisation, i.e. for deciding when optimal memory performance has been achieved or sufficiently approached. This thesis presents a platform-independent optimisation approach for numerical applications based on iterative feedback-directed program restructuring, using a new, reasonably fast and accurate performance-prediction technique to guide optimisations. New strategies for searching the optimisation space by means of profiling, to find the best possible program variant, have been developed. These strategies have been evaluated using a range of kernels and programs on different platforms and operating systems. The new approaches achieve a significant performance improvement compared to state-of-the-art native static and platform-specific feedback-directed compilers.
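The iterative, feedback-directed search loop the abstract describes can be sketched as follows. Everything here is a stand-in: the optimisation space (tile sizes, unroll factors) and the analytic cost function replace an actual compile-and-profile cycle, and none of it reflects the thesis's specific search strategies or prediction model.

```python
# Minimal sketch of iterative compilation: enumerate program variants from an
# optimisation space, "profile" each one, and keep the fastest. A real system
# would recompile and time the program; the cost function here is invented so
# the sketch runs deterministically.
import itertools

# Hypothetical optimisation space: loop tile sizes and unroll factors.
SPACE = {"tile": [0, 32, 64], "unroll": [1, 2, 4]}

def profile(variant):
    """Stand-in for building and timing one program variant (lower is better)."""
    tile, unroll = variant["tile"], variant["unroll"]
    cost = 100.0
    cost -= 15.0 if tile else 0.0   # tiling improves cache reuse
    cost -= 2.0 * (unroll - 1)      # unrolling trims loop overhead
    cost += 0.1 * tile              # large tiles add register pressure
    return cost

def search(space):
    """Exhaustive feedback-directed search over the optimisation space."""
    variants = [dict(zip(space, vals))
                for vals in itertools.product(*space.values())]
    best = min(variants, key=profile)
    return best, profile(best)

best, cost = search(SPACE)
print("best variant:", best, "cost:", cost)
```

Real optimisation spaces are far too large for exhaustive search, which is where the thesis's performance-prediction technique comes in: ranking variants cheaply so that only promising ones are profiled.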
|
6 |
Evaluation of the NRC 1996 winter feed requirements for beef cows in western Canada / Bourne, Jodi Lynn. 28 February 2007
A trial was conducted to evaluate the accuracy of the 1996 NRC beef model in predicting DMI and ADG of pregnant cows under western Canadian conditions. Over two consecutive years, 90 Angus cows (587±147 kg) assigned to 15 pens (n=6 per pen) were fed typical diets ad libitum, formulated to stage of pregnancy. Data collection included pen DMI and ADG (corrected for pregnancy), calving date, calf weight, body condition scores, ultrasound fat measurements, weekly feed samples and daily ambient temperature. DMI and ADG for each pen of cows in each trimester were predicted using the computer program Cowbytes, based on the 1996 NRC beef model. The results indicate that in the 2nd and 3rd trimesters of both years the model under-predicted (P≤0.05) ADG based on observed DMI. Ad libitum intake was over-predicted (P≤0.05) during the 2nd trimester and under-predicted (P≤0.05) during the 3rd trimester of pregnancy. A second evaluation was carried out assuming thermal neutral (TN) conditions. In this case, it was found that during the 2nd and 3rd trimesters there was an over-prediction (P≤0.05) of ADG relative to observed. Under these same TN conditions, the ad libitum intake of these cows was under-predicted (P≤0.05) for both the 2nd and 3rd trimesters. These results suggest that the current energy equations for modelling environmental stress over-predict maintenance requirements for wintering beef cows in western Canada. The results also suggest that the cows experienced some degree of cold stress, but not as severe as modelled by the NRC (1996) equations. Further research is required to more accurately model cold stress experienced by mature cattle, and their ability to acclimatise to western Canadian winter conditions.
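The over/under-prediction comparison at the heart of this evaluation reduces to a simple mean-bias calculation, sketched below. The per-pen ADG values are invented for illustration; they are not the trial's data, and the actual study additionally tests significance (P≤0.05) rather than reporting bias alone.

```python
# Illustrative sketch of the model-evaluation arithmetic: compare predicted
# versus observed per-pen values and report the mean bias. Numbers are
# hypothetical, not data from the trial.

def mean_bias(predicted, observed):
    """Average of (predicted - observed); negative means under-prediction."""
    diffs = [p - o for p, o in zip(predicted, observed)]
    return sum(diffs) / len(diffs)

# Hypothetical per-pen ADG (kg/d) for one trimester:
predicted_adg = [0.42, 0.38, 0.45, 0.40, 0.44]
observed_adg  = [0.55, 0.51, 0.49, 0.58, 0.53]

bias = mean_bias(predicted_adg, observed_adg)
print(f"mean bias: {bias:+.3f} kg/d "
      f"({'under' if bias < 0 else 'over'}-prediction)")
```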
|
7 |
Metareasoning about propagators for constraint satisfaction / Thompson, Craig Daniel Stewart. 11 July 2011
Given the breadth of constraint satisfaction problems (CSPs) and the wide variety of CSP solvers, it is often very difficult to determine a priori which solving method is best suited to a problem. This work explores the use of machine learning to predict which solving method will be most effective for a given problem. We use four different problem sets to determine the CSP attributes that can be used to decide which solving method should be applied. After choosing an appropriate set of attributes, we determine how well J48 decision trees can predict which solving method to apply. Furthermore, we take a cost-sensitive approach that emphasizes problem instances with a large difference in runtime between algorithms. We also attempt to use information gained on one class of problems to inform decisions about a second class of problems. Finally, we show that the additional cost of deciding which method to apply is outweighed by the time savings compared to applying the same solving method to all problem instances.
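The algorithm-selection idea can be sketched as follows: extract cheap features from a CSP instance, then route it through a learned rule to a solving method. The feature names, thresholds, and method labels below are invented stand-ins, and the hand-written nested tests merely imitate the shape of a trained J48 decision tree rather than reproduce one.

```python
# Toy sketch of per-instance solver selection for CSPs: cheap instance
# features feed a decision rule that picks a solving method. The rule is a
# hand-written stand-in for a trained J48 tree; all thresholds are invented.

def csp_features(num_vars, num_constraints, avg_domain_size):
    """Cheap, a-priori-computable features of a CSP instance."""
    return {
        "density": num_constraints / num_vars,
        "avg_domain": avg_domain_size,
    }

def choose_solver(features):
    """Stand-in for a learned J48 tree: nested threshold tests on features."""
    if features["density"] > 3.0:
        return "AC-3 propagation"   # tightly constrained: strong pruning pays off
    if features["avg_domain"] > 20:
        return "forward checking"   # large domains: keep propagation cheap
    return "backtracking only"

f = csp_features(num_vars=50, num_constraints=200, avg_domain_size=10)
print(choose_solver(f))
```

The work's cost-sensitive twist would weight training instances by the runtime gap between methods, so the tree concentrates on the decisions where choosing wrongly is expensive.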
|
10 |
Evaluating MapReduce System Performance: A Simulation Approach / Wang, Guanying. 13 September 2012 (has links)
The scale of data generated and processed is exploding in the Big Data era. The MapReduce system popularized by open-source Hadoop is a powerful tool for this exploding-data problem and is widely employed in many areas involving data at large scale. In many circumstances, hypothetical MapReduce systems must be evaluated, e.g. to provision a new MapReduce system to meet a certain performance goal, to upgrade a currently running system to meet increasing business demands, or to evaluate a novel network topology, new scheduling algorithms, or resource-arrangement schemes. The traditional trial-and-error solution involves the time-consuming and costly process of first building and then benchmarking a real cluster. In this dissertation, we propose to simulate MapReduce systems and evaluate hypothetical MapReduce systems using simulation. This simulation approach offers significantly lower turn-around time and lower cost than experiments. Simulation cannot entirely replace experiments, but it can be used as a preliminary step to reveal potential flaws and gain critical insights.
We studied MapReduce systems in detail and developed a comprehensive performance model for MapReduce, including sub-task, phase-level performance models for both map and reduce tasks and a model for resource contention between multiple concurrently running processes. Based on the performance model, we developed a comprehensive simulator for MapReduce, MRPerf. MRPerf is the first full-featured MapReduce simulator. It supports both workload simulation and resource contention, and it still offers the most complete feature set among all MapReduce simulators to date. Using MRPerf, we conducted two case studies, evaluating scheduling algorithms in MapReduce and shared storage in MapReduce, without building real clusters.
Furthermore, to integrate simulation and performance prediction into MapReduce systems and leverage predictions to improve system performance, we developed an online prediction framework for MapReduce, which periodically runs simulations within a live Hadoop MapReduce system. The framework can predict task execution within a window of the near future. These predictions can be used by other components of MapReduce systems to improve performance. Our results show that the framework achieves high prediction accuracy and incurs negligible overhead. We present two potential use cases: prefetching and a dynamically adapting scheduler. / Ph. D.
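A phase-level MapReduce performance model of the kind the dissertation builds can be sketched as below. This is an illustrative toy, not MRPerf's model: the phase breakdown (map waves, then shuffle, then reduce), throughput constants, and split size are all assumed values, and MRPerf additionally simulates network topology and per-node resource contention.

```python
# Minimal sketch of a phase-level MapReduce job-runtime model: map tasks run
# in waves over fixed-size input splits, then shuffle and reduce are modeled
# as aggregate-throughput phases. All constants are illustrative assumptions.
import math

def job_runtime(input_gb, map_slots, reduce_slots,
                map_gbps=0.05, shuffle_gbps=0.02, reduce_gbps=0.04,
                split_gb=0.128):
    """Estimate runtime in seconds for a simple map -> shuffle -> reduce job."""
    num_maps = math.ceil(input_gb / split_gb)      # one map task per split
    map_waves = math.ceil(num_maps / map_slots)    # waves of concurrent maps
    map_time = map_waves * (split_gb / map_gbps)

    # Shuffle and reduce scale with aggregate reducer throughput.
    shuffle_time = input_gb / (shuffle_gbps * reduce_slots)
    reduce_time = input_gb / (reduce_gbps * reduce_slots)
    return map_time + shuffle_time + reduce_time

t = job_runtime(input_gb=100, map_slots=64, reduce_slots=16)
print(f"estimated job runtime: {t:.0f} s")
```

Even this crude model exposes the provisioning questions the dissertation targets, e.g. how runtime shifts as reduce slots or input size change, without touching a real cluster.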
|