1 |
Execution Time Control: A hardware accelerated Ada implementation with novel support for interrupt handling. Gregertsen, Kristoffer Nyborg. January 2012 (has links)
Execution time control is a technique that allows execution time budgets to be set and overruns to be handled dynamically to prevent deadline misses. This removes the need for the worst-case execution time (WCET) of tasks to be found by offline timing analysis – a problem that can be very hard to solve for modern computer architectures. Execution time control can also increase processor utilization, as the WCET will often be much higher than the average execution time. This thesis describes how the GNU Ada Compiler and a bare-board Ravenscar run-time environment were ported to the Atmel AVR32 UC3 microcontroller series, making the Ada programming language available on this architecture for the first time, and presents an implementation of Ada execution time control for this system that supports full execution time control for interrupt handling. Usage patterns for this new feature are demonstrated in Ada by extending the object-oriented real-time framework with execution time servers for interrupt handling, allowing the system to be protected against unexpected bursts of interrupts that could otherwise result in deadline misses. Separate execution time measurement for interrupt handling also improves the accuracy of measurement for tasks. As a direct result of the work presented in this thesis, separate execution time measurement for interrupts will be included in the forthcoming ISO standard for Ada 2012. While the implementation of execution time control is for the Ada programming language and the UC3 microcontroller series, the design and implementation should be portable to other architectures, and the principles of execution time control for interrupt handling are applicable to other programming languages. Low run-time overhead is important for execution time control to be useful for real-time systems. Therefore a hardware Time Management Unit (TMU) was designed to reduce the overhead of execution time control.
This design has been implemented for the UC3, and performance tests with the developed run-time environment show that it significantly reduces overhead. The memory-mapped design of the TMU also allows it to be implemented on other architectures.
|
2 |
Processor pipelines and static worst-case execution time analysis. Engblom, Jakob. January 2002 (has links)
Diss. Uppsala : Univ., 2002.
|
3 |
Execution time analysis for dynamic real-time systems. Zhou, Yongjun. January 2002 (has links)
No description available.
|
4 |
Performance evaluation of Multithreading, Hashtables, and Anonymous Functions for Rust and C++: in Game Development. Nordström, Oscar; Raivio, Lowe. January 2023 (has links)
Background. C++ is a programming language introduced in 1985, while Rust was introduced in 2010. Rust focuses on speed and safety and was created with the need for concurrency in mind. These languages have different memory management systems: C++ originally only supported manual memory management, while Rust's memory management system performs checks before compilation of the application begins, to prevent issues such as dereferencing null pointers, use-after-free errors, and buffer overflows. These languages' standard libraries have some features in common, such as anonymous functions, hashtables, and threads. These features can be utilized in games by implementing resource management with hashtables, event systems with anonymous functions, and parallelization with threads. Objectives. The objectives included designing two equivalent game implementations, one in Rust and one in C++. These games were the testing grounds used to test the standard library implementations of anonymous functions, hashtables, and threads. These features' execution times were measured and compared to determine whether there existed a difference between them in Rust and C++. Methods. Using Raylib, two identical games were created that utilized and collected execution time metrics for anonymous functions, hashtables, and threads. These games were executed 90 times for a duration of 10 seconds. When all tests were completed, the execution time data was compiled. This data was visualized and analyzed to determine the differences in execution time between Rust and C++ for these specific features. Results. The results indicate that Rust performs better at creating anonymous functions, searching and deleting entries in hashtables, and joining threads. The results also reveal that C++ performs better at calling anonymous functions, inserting into hashtables, and creating and starting threads.
Conclusions A substantial statistical difference exists between the execution times for the selected features in Rust and C++. The performance differences are significant to the extent that a developer can gain some performance by selecting the language that performs best depending on their needs. In the end, both languages are well suited for game development based on the result of this limited study.
|
5 |
Optimizing scoped and immortal memory management in real-time Java. Hamza, Hamza. January 2013 (has links)
The Real-Time Specification for Java (RTSJ) introduces a new memory management model which avoids interfering with the garbage collection process and achieves better deterministic behaviour. In addition to the heap memory, two types of memory areas are provided: immortal and scoped. The research presented in this thesis aims to optimize the use of the scoped and immortal memory model in RTSJ applications. Firstly, it provides an empirical study of the impact of scoped memory on execution time and memory consumption with different data objects allocated in scoped memory areas. It highlights different characteristics of the scoped memory model in one of the RTSJ implementations (SUN RTS 2.2). Secondly, a new RTSJ case study which integrates scoped and immortal memory techniques to apply different memory models is presented. A simulation tool for a real-time Java application is developed, the first in the literature that shows scoped memory and immortal memory consumption of an RTSJ application over a period of time. The simulation tool helps developers to choose the most appropriate scoped memory model by monitoring memory consumption and application execution time. The simulation demonstrates that a developer is able to compare scoped memory design models and choose the one that achieves the smallest memory footprint. Results showed that the memory design model with a higher number of scopes achieved the smallest memory footprint. However, the number of scopes per se does not always indicate a satisfactory memory footprint; choosing the right objects/threads to be allocated into scopes is an important factor to be considered. Recommendations and guidelines for developing RTSJ applications which use a scoped memory model are also provided. Finally, monitoring scoped and immortal memory at runtime may help in catching possible memory leaks. The case study with the simulation tool developed showed a space overhead incurred by immortal memory.
In this research, dynamic code slicing is also employed as a debugging technique to explore constant increases in immortal memory. Two programming design patterns are presented for decreasing immortal memory overheads generated by specific data structures. Experimental results showed a significant decrease in immortal memory consumption at runtime.
|
6 |
Fault-Tolerant Average Execution Time Optimization for General-Purpose Multi-Processor System-On-Chips. Väyrynen, Mikael. January 2009 (has links)
Due to semiconductor technology development, fault tolerance is important not only for safety-critical systems but also for general-purpose (non-safety-critical) systems. However, instead of guaranteeing that deadlines are always met, for general-purpose systems it is important to minimize the average execution time (AET) while ensuring fault tolerance. For a given job and a soft (transient) no-error probability, we define mathematical formulas for AET using voting (active replication), rollback-recovery with checkpointing (RRC) and a combination of these (CRV), where bus communication overhead is included. For a given multi-processor system-on-chip (MPSoC), we define integer linear programming (ILP) models that minimize the AET including bus communication overhead when: (1) selecting the number of checkpoints when using RRC or a combination where RRC is included, (2) finding the number of processors and job-to-processor assignment when using voting or a combination where voting is used, and (3) selecting the fault tolerance scheme (voting, RRC or CRV) for each job. Experiments demonstrate significant savings in AET.
|
8 |
Android Elastic Service Execution and Evaluation. Heidari, Ramin. January 2013 (has links)
Context. Mobile devices have recently attained huge popularity. In recent years there have been many attempts to delegate the computing-intensive parts of mobile applications to more powerful remote servers, due to the shortage of resources on mobile devices. However, there are still research challenges in this area regarding the models and principles that govern when a part of a mobile application should be executed remotely on a server, and the effects of such execution on smartphone resources. Objectives. The aim of this research is to propose a model for executing the service component of an Android application on a remote server. This study exploits an enhancement of Android operating system functionality to execute service components on a powerful remote machine. It reports the model as well as the enhancements made to achieve this purpose. Additionally, an experiment is conducted to identify the factors that determine whether a computation should be executed locally on the mobile device or offloaded to a remote machine. Methods. Two research methodologies have been used in performing this research: a case study and a controlled experiment. In the case study we investigate the feasibility of enhancing the Android operating system to run service components of Android applications on a remote server. We propose a new model for this purpose and motivate it with several sources, such as journal and conference papers and the Android developer site. A prototype of the model is implemented for use in the next part of the study. Second, a controlled experiment is conducted on the prototype resulting from the case study, to explore the principles that govern executing the service component of an Android application on a powerful remote machine and the effects of this execution on mobile resources. Results.
A model for executing the service component of an Android application on a powerful remote server is proposed, and a prototype is implemented according to the model. The effects of executing Android service components on a remote machine on the energy consumption and performance of a smartphone are investigated. Moreover, we examine when it would be beneficial to offload an intensive computation to the remote server. Conclusions. We conclude that it is feasible to enhance the Android OS to execute the service component of an Android application on a remote server. We also conclude that there is a strong correlation between the amount of payload and the amount of computation that is worth executing on a remote server: offloading the computation is beneficial when a large amount of computation is combined with a small amount of communication and payload. Furthermore, we conclude that executing intensive computations on the server drastically improves execution time, while for smaller computations performance is better on the smartphone. Beyond that, energy consumption on the smartphone grows gradually once the payload exceeds a particular size.
|
9 |
Performance comparison between OOD and DOD with multithreading in games. Wingqvist, David; Wickström, Filip. January 2022 (has links)
Background. The frame rate of a game is important for both the end-user and the developer. Maintaining at least 60 FPS in a PC game is the current standard, and demands for efficient game applications rise. Currently, the industry standard in programming is to use Object-Oriented Design (OOD). But with the trend towards larger games, this frame rate might not be maintainable using OOD. A design pattern that mitigates this is Data-Oriented Design (DOD), which focuses on utilizing the CPU and memory efficiently. These design patterns differ in how they handle the data associated with them. Objectives. In this thesis, two games were created, each in two versions that used either OOD or DOD. The first game included multithreading. Modern hardware utilizes several CPU cores; therefore, this thesis compares both singlethreaded and multithreaded versions of these design patterns. Methods. Experiments were made to measure the execution time and cache misses on the CPU. Each experiment started with a baseline that was gradually increased to stress the systems under test. Results. The results gathered from the experiments showed that the sections of the code that used DOD were significantly faster than OOD. DOD also had a better affinity with multithreading and in certain parts achieved up to 13 times the speed of equivalently conditioned OOD. In the special-case comparison, DOD proved to be faster than OOD even though it had larger objects. Conclusions. DOD has shown to be significantly faster in execution time with fewer cache misses compared to OOD. Using multithreading with DOD proved to be the most efficient.
|
10 |
Compiler optimization VS WCET: Battle of the ages / Kompilatoroptimering VS WCET. Harrius, Tova; Nordin, Max. January 2022 (has links)
Optimization by a compiler can be performed with many different methods. The defence company Saab provided us with a mission: to see if we could optimize their code with the help of the GCC compiler and its optimization flags. For this thesis we conducted a study of the optimization flags to decrease the worst-case execution time. The first step in assembling an effective base of flags was reading the documentation for the flags. We then tested the different flags and analyzed them. In the end we arrived at four chosen sets that we saw fit to discuss and analyze further. The results did not live up to our expectations, as we thought the flags would reduce the execution time. In the majority of cases the flags gave a small increase in execution time. We had only one set where the flags gave us a decrease, which we called the Expensive Optimization. With these results we conclude that Saab does not need to change their existing set of optimization flags to optimize their compiler further.
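A flag-set study of this kind is configured on the GCC command line. The sets below are illustrative assumptions only; the thesis does not publish Saab's flags, though `-fexpensive-optimizations` and `-funroll-loops` are real GCC options that plausibly correspond to the "Expensive Optimization" naming.

```shell
# Hypothetical flag sets of the kind such a study would compare.
BASE="-O2"
EXPENSIVE="-O2 -fexpensive-optimizations -funroll-loops"
# A WCET-oriented build might trade average speed for predictability:
PREDICTABLE="-O1 -fno-unroll-loops"

echo "base:        $BASE"
echo "expensive:   $EXPENSIVE"
echo "predictable: $PREDICTABLE"
```

Each set would then be used to build the same benchmark (e.g. `g++ $EXPENSIVE -o bench bench.cpp`) and the resulting worst-case timings compared, which matches the thesis's finding that higher optimization levels do not automatically lower the WCET.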
|