1. The safety of industrially-based controllers incorporating software. Bennett, P. A. January 1984.
This thesis is concerned with the safety of industrial controllers which incorporate software. Software safety is compared with software reliability as a means of discussing the special concerns of safety. Definitions are given for the terms hazard, risk, danger and safe. A relationship between these terms is proposed and the philosophy of safety is discussed. A formal definition of software safety is given. The factors influencing the development of software are examined. The subjectivity of safety is discussed in the context of safety measurement being a conjoint measurement. Methods of assessing the risk resulting from the use of software are described, along with a discussion of the impracticability of using state transition diagrams to isolate catastrophic failure conditions. Categories of danger are discussed and three categories are advanced. The structuring of software for safety is discussed and the principle of using safety modules and integrity locks is proposed. In discussing the reasons why errors remain present in software after testing, two methods of measurement are suggested: Plexus and the Fallibility Index. The need to declare variables is discussed. An experiment involving 119 volunteers was conducted to examine the influence of the length of variable names on their correct usage. It was found that variable names seven characters long have a better probability of correct interpretation than others. The methods of assessing safety are discussed, and the proposed measurements were applied to a commercially available product in the form of a Software Safety Audit. It is concluded that some aspects of the safety of controllers incorporating software can be quantified and that further research is needed.

2. A Study on applying project management framework to improve software development performance: The case of IT department transformation in a high-technology company. Chuker, Tsung. 05 June 2008.
The sweeping penetration of the internet into our daily life has greatly encouraged the international exposure of domestic software companies and engendered an unprecedented demand for information. However, the increasing cost of manual labor throughout the world in recent years has caused the labor force in mainland China and India to exceed that in Taiwan. Therefore, if enterprises still hope to apply internal resources to support system software development and maintenance in the future, they should plan to ask the IT (Information Technology) department to improve cost control as well as the effectiveness of software development from now on. The purpose of this study is to delve into the IT department's response to reform and restructuring within a company and its assimilation of new management strategies to boost efficiency.
This research examines how one domestic global enterprise adopted project management to further improve the quality of internal software development and promote its effective use. It can be observed from this in-depth comparative study that different management methodologies induce a corresponding difference in the efficiency of developing new software, which, in turn, is reflected in the reform of the entire department.
The methodology of this research is a qualitative case study: a questionnaire was designed to interview internal technical specialists from different areas, and two differently structured project management methods were compared and investigated in depth. From this research, we find that different project management methods yield different degrees of improvement in software development efficiency. When it comes to corporate training in software development management or the reform of IT departments, this study may serve as a valuable index or example for future reference.

3. Performance Benchmarking Software-Defined Radio Frameworks: GNURadio and CRTSv.2. Gadgil, Kalyani Surendra. 08 April 2020.
In this thesis, we benchmark the Cognitive Radios Test System version 2.0 (CRTSv.2) to analyze its software performance with respect to its internal structure and design choices. With the help of system monitoring and profiling tools, CRTSv.2 is tested to quantitatively evaluate its features and understand its shortcomings. Using GNU Radio, a popular, easy-to-use software radio framework, as a reference point, we ascertain that CRTSv.2 has a low memory footprint, fewer dependencies, and is overall a lightweight framework that can potentially be used for real-time signal processing. Several open-source measurement tools such as valgrind, perf, and top are used to evaluate CPU utilization and memory footprint and to postulate the origins of latencies. Based on our evaluation, we observe that CRTSv.2 shows a CPU utilization of approximately 9% whereas GNU Radio's is 59%. CRTSv.2 has a lower heap memory consumption of approximately 3 MB compared to GNU Radio's 25 MB. This study establishes a methodology to evaluate the performance of two SDR frameworks systematically and quantitatively.

/ Master of Science / When picking the best person for a job, we rely on the person's performance in past projects of a similar nature. The same can be said of software. Software radios provide the capability to perform signal processing functions in software, making them prime candidates for solving modern problems such as spectrum scarcity, internet-of-things (IoT) adoption, and vehicle-to-vehicle communication. In order to operate and configure software radios, software frameworks are provided that let the user make changes to the waveform, perform signal processing, and manage data. In this thesis, we consider two such frameworks, GNU Radio and CRTSv.2. A software performance evaluation is conducted to assess the framework overheads contributing to operation of an orthogonal frequency-division multiplexing (OFDM) digital modulation scheme. This provides a quantitative analysis of a signals-specific use case which researchers can use to evaluate the optimal framework for their work. This analysis can be generalized to different signal processing capabilities by separating the total framework overhead from the signal processing costs.
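
A rough sketch of this kind of black-box measurement is given below. It is an illustration under stated assumptions, not the thesis's tooling: the cross-platform `psutil` package stands in for valgrind/perf/top, and the process name "ofdm_tx" is a hypothetical stand-in for a running SDR flowgraph.

```python
# Sketch: sample CPU utilization and resident memory of a running process.
# Assumes the `psutil` package; "ofdm_tx" is a hypothetical process name.
import time
import statistics
import psutil

def sample_process(name: str, duration_s: float = 10.0, period_s: float = 0.5):
    """Return (mean CPU %, peak RSS bytes) for the first process named `name`."""
    proc = next(p for p in psutil.process_iter(["name"]) if p.info["name"] == name)
    cpu, rss = [], []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        cpu.append(proc.cpu_percent(interval=period_s))  # % over this interval
        rss.append(proc.memory_info().rss)               # resident memory, bytes
    return statistics.mean(cpu), max(rss)

if __name__ == "__main__":
    mean_cpu, peak_rss = sample_process("ofdm_tx")
    print(f"mean CPU: {mean_cpu:.1f}%  peak RSS: {peak_rss / 2**20:.1f} MiB")
```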

4. Automated Analysis of Load Testing Results. Jiang, Zhen Ming. 29 January 2013.
Many software systems must be load tested to ensure that they can scale up under high load while maintaining functional and non-functional requirements. Studies show that field problems are often related to systems not scaling to field workloads rather than to feature bugs. To assure the quality of these systems, load testing is a required testing procedure in addition to conventional functional testing procedures, such as unit and integration testing. Current industrial practices for checking the results of a load test remain ad hoc, involving high-level manual checks. Few research efforts are devoted to the automated analysis of load testing results, mainly due to limited access to large-scale systems for use as case studies. Approaches for the automated and systematic analysis of load tests are needed, as many services are being offered online to an increasing number of users. This dissertation proposes automated approaches to assess the quality of a system under load by mining some of the recorded load testing data (execution logs). Execution logs, which are readily available yet rarely used, are generated by output statements which developers insert into the source code. Execution logs are hard to parse and analyze automatically due to their free-form structure. We first propose a log abstraction approach that uncovers the internal structure of each log line. Then we propose automated approaches to assess the quality of a system under load by deriving various models (functional, performance and reliability models) from the large set of execution logs. Case studies show that our approaches scale well to large enterprise and open-source systems and output high-precision results that help load testing practitioners effectively analyze the quality of the system under load. / Thesis (Ph.D., Computing), Queen's University, 2013.
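
As a toy illustration of the log abstraction step (a minimal sketch, not the dissertation's algorithm), dynamic values can be masked so that log lines generated by the same output statement collapse into a single template:

```python
# Sketch: collapse free-form log lines into templates by masking dynamic values.
import re
from collections import Counter

def abstract_line(line: str) -> str:
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)    # pointers / addresses
    line = re.sub(r"\b\d+(\.\d+)?\b", "<NUM>", line)   # counters, IDs, durations
    return line.strip()

logs = [
    "user 4211 logged in after 0.93 s",
    "user 17 logged in after 2.40 s",
    "buffer flushed, 512 bytes at 0x7f3a21",
]
templates = Counter(abstract_line(l) for l in logs)
for template, count in templates.most_common():
    print(count, template)
# 2 user <NUM> logged in after <NUM> s
# 1 buffer flushed, <NUM> bytes at <HEX>
```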

5. Design-time performance testing. Hopkins, Ian Keith. 01 April 2009.
Software designers make decisions between alternate approaches early in the development of a software application, and these decisions can be difficult to change later. Designers make these decisions based on estimates of how the alternatives affect software qualities. One software quality that can be difficult to predict is performance, that is, the efficient use of resources in the system. It is particularly challenging to estimate the performance of large, interconnected software systems composed of components. With the proliferation of class libraries, middleware systems, web services, and third-party components, many software projects rely on third-party services to meet their requirements. Often, choosing between services involves considering both the functionality and the performance of the services. To help software developers compare their designs and third-party services, I propose using performance prototypes of alternatives and test suites to estimate performance trade-offs early in the development cycle, a process called Design-Time Performance Testing (DTPT).

Providing software designers with performance evidence based on prototypes will allow them to make informed decisions regarding performance trade-offs. To show how DTPT can help inform real design decisions, this thesis presents: a process for DTPT, a framework implementation written in Java, and experiments to verify and validate the process and implementation. The implemented framework assists in designing, running, and documenting performance test suites, allowing designers to make accurate comparisons between alternate approaches. Performance metrics are captured by instrumenting and running prototypes.

This thesis describes the process and framework for gathering software performance estimates at design-time using prototypes and test suites.
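
A minimal sketch of such a test harness is shown below. The two sort routines are hypothetical stand-ins for alternate designs or third-party services, and the thesis's actual framework is written in Java rather than Python:

```python
# Sketch: time two prototype implementations against one shared test suite.
import time
import statistics

def run_suite(prototype, cases, repeats=5):
    """Mean wall-clock time of `prototype` across the whole test suite."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        for case in cases:
            prototype(list(case))            # fresh copy so runs are independent
        times.append(time.perf_counter() - start)
    return statistics.mean(times)

def insertion_sort(xs):                      # deliberately naive alternative
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

cases = [list(range(n, 0, -1)) for n in (100, 1000, 2000)]
for name, proto in {"builtin_sort": sorted, "insertion_sort": insertion_sort}.items():
    print(f"{name}: {run_suite(proto, cases):.4f} s")
```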

6. Improve game performance tracking tools: Heatmap as a tool / Förbättra prestandaspårningsverktyg: Färgdiagram för visualisering av prestanda. Wessman, Niklas. January 2022.
Software testing is a crucial development technique for catching defects and slow code. When testing 3D graphics, it is hard to create automatic tests that detect errors or slow performance. Finding performance issues in game maps is a complex task that requires much manual work. Gaming companies such as EA DICE could benefit from automating the process of finding these performance issues in their game maps. This thesis tries to solve the problem by creating automatic tests where the camera is placed in a top-down perspective and flies over the in-game map, recording the time it takes to create render and client-simulation frames for each map segment. The resulting trace is then visualised as a heatmap, where the mean frame-creation times are rendered with pseudo-colouring techniques to help pinpoint possible issues for the test engineers. The key findings of this thesis are that a heatmap visualisation of frame-creation times saves much time for the developers trying to find these issues; it also lowers the amount of knowledge needed to find performance issues. This tool automates a process that formerly needed considerable manual work to get the same result. Now, artists with little coding experience can find performance issues without the technical knowledge of a Quality Assurance engineer. The thesis also highlights the drawbacks of a top-down camera trace, since this is not how EA DICE games are usually rendered for the player at runtime. With this thesis as a base, other tests could be made with other ways of moving the camera and visualising the trace.
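
A minimal sketch of the visualisation step follows, using synthetic data rather than EA DICE tooling; numpy and matplotlib are assumed. A camera trace of (x, y, frame time) samples is binned into map cells and the per-cell mean is pseudo-coloured:

```python
# Sketch: render mean frame-creation time per map cell as a heatmap.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Synthetic fly-over trace: positions and frame-creation times with one hotspot.
x = rng.uniform(0, 100, 5000)
y = rng.uniform(0, 100, 5000)
t = 8 + 6 * np.exp(-((x - 70) ** 2 + (y - 30) ** 2) / 200) + rng.normal(0, 0.5, 5000)

bins = 25
sums, xe, ye = np.histogram2d(x, y, bins=bins, weights=t)
counts, _, _ = np.histogram2d(x, y, bins=[xe, ye])
mean_ms = np.divide(sums, counts, out=np.full_like(sums, np.nan), where=counts > 0)

plt.imshow(mean_ms.T, origin="lower", cmap="inferno",
           extent=[xe[0], xe[-1], ye[0], ye[-1]])
plt.colorbar(label="mean frame creation time (ms)")
plt.title("Synthetic frame-time heatmap")
plt.savefig("heatmap.png")
```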

7. Mapping HW resource usage towards SW performance. Suljevic, Benjamin. January 2019.
With software applications increasing in complexity, description of hardware is becoming increasingly relevant. To ensure the quality of service of specific applications, it is imperative to have an insight into hardware resources. Cache memory is used for storing data needed for quick access closer to the processor, and it improves the quality of service of applications. The description of cache memory usually consists of the size of the different cache levels, the set associativity, or the line size. Software applications would benefit from a more detailed model of cache memory.

In this thesis, we offer a way of describing the behavior of cache memory which benefits software performance. Several performance events are tested, including L1 cache misses, L2 cache misses, and L3 cache misses. With the collected information, we develop performance models of cache memory behavior. Goodness of fit is tested for these models, and they are used to predict the behavior of the cache memory during future runs of the same application.

Our experiments show that L1 cache misses can be modeled to predict future runs. The L2 cache-miss model is less accurate but still usable for predictions, and the L3 cache-miss model is the least accurate and is not feasible for predicting the behavior of future runs.
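
As an illustration only, with synthetic counter values rather than measured ones, fitting and validating such a model might look as follows; in practice the miss counts would come from hardware performance events (e.g. perf's L1-dcache-load-misses):

```python
# Sketch: fit a linear model of L1 misses vs. working-set size, check its fit,
# and predict a future run. The numbers below are synthetic placeholders.
import numpy as np

sizes = np.array([1, 2, 4, 8, 16, 32], dtype=float)            # working set, MiB
l1_misses = np.array([0.9, 2.1, 4.2, 8.0, 16.5, 31.8]) * 1e6   # per earlier run

model = np.poly1d(np.polyfit(sizes, l1_misses, deg=1))

pred = model(sizes)
ss_res = np.sum((l1_misses - pred) ** 2)
ss_tot = np.sum((l1_misses - l1_misses.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.4f}")                      # goodness of fit
print(f"predicted L1 misses at 64 MiB: {model(64.0):.3e}")
```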

8. Reducing the cost of heuristic generation with machine learning. Ogilvie, William Fraser. January 2018.
The space of compile-time transformations and/or run-time options which can improve the performance of a given code is usually so large as to be virtually impossible to search in any practical time-frame. Thus, heuristics are leveraged which can suggest good, but not necessarily best, configurations. Unfortunately, since such heuristics are tightly coupled to processor architecture, performance is not portable; heuristics must be tuned, traditionally manually, for each device in turn. This is extremely laborious and the result is often outdated heuristics and less effective optimisation. Ideally, to keep up with changes in hardware and run-time environments, a fast and automated method to generate heuristics is needed. Recent works have shown that machine learning can be used to produce mathematical models or rules in their place, which is automated but not necessarily fast. This thesis proposes the use of active machine learning, sequential analysis, and active feature acquisition to accelerate the training process in an automatic way, thereby tackling this timely and substantive issue.

First, a demonstration of the efficiency of active learning over the previously standard supervised machine learning technique is presented in the form of an ensemble algorithm. This algorithm learns a model capable of predicting the best processing device in a heterogeneous system to use per workload size, per kernel. Active machine learning is a methodology which is sensitive to the cost of training; specifically, it is able to reduce the time taken to construct a model by predicting how much is expected to be learnt from each new training instance and then choosing to learn only from the most profitable examples. The exemplar heuristic is constructed on average 4x faster than with a baseline approach, whilst maintaining comparable quality.

Next, a combination of active learning and sequential analysis is presented which reduces both the number of samples per training example and the number of training examples overall. This allows for the creation of models based on noisy information, sacrificing accuracy per training instance for speed, without having a significant effect on the quality of the final product. In particular, the runtime of high-performance compute kernels is predicted from code transformations one may want to apply, using a heuristic which was generated up to 26x faster than with active learning alone.

Finally, preliminary work demonstrates that an automated system can be created which optimises both the number of training examples and which features to select during training, to further substantially accelerate learning in cases where each feature value that is revealed comes at some cost.
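
As a toy sketch of the core active-learning idea (not the thesis's ensemble algorithm), a bootstrap committee of cheap models can spend expensive benchmark runs only on the configurations it disagrees about most:

```python
# Sketch: query-by-committee active learning over a pool of configurations.
import numpy as np

rng = np.random.default_rng(1)

def measure(x):
    """Stand-in for an expensive benchmark run of configuration x."""
    return 3.0 * x + 2.0 + rng.normal(0.0, 0.3)

pool = np.linspace(0.0, 10.0, 200)                 # candidate configurations
X = [pool[i] for i in (0, 50, 100, 150, 199)]      # small seed set
y = [measure(v) for v in X]

for _ in range(10):
    committee = []
    for _ in range(8):                             # bootstrap committee of line fits
        idx = rng.integers(0, len(X), len(X))
        committee.append(np.polyfit(np.asarray(X)[idx], np.asarray(y)[idx], 1))
    preds = np.array([np.polyval(c, pool) for c in committee])
    best = pool[int(np.argmax(preds.std(axis=0)))]  # highest disagreement
    X.append(best)
    y.append(measure(best))                         # pay for a measurement only here

print(f"model trained with {len(X)} measurements instead of {pool.size}")
```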

9. Modeling and Evaluating Energy Performance of Smartphones. Palit, Rajesh. January 2012.
With advances in hardware miniaturization and wireless communication technologies, even small portable wireless devices have substantial communication bandwidth and computing power. These devices include smartphones, tablet computers, and personal digital assistants. Users of these devices expect to run software applications that they usually have on their desktop computers as well as the new applications that are being developed for mobile devices. Web browsing, social networking, gaming, online multimedia playing, global positioning system based navigation, and accessing emails are examples of a few popular applications. Mobile versions of thousands of desktop applications are already available in mobile application markets, and consequently, the expected operational time of smartphones is rising rapidly.
At the same time, the complexity of these applications is growing in terms of computation and communication needs, and there is a growing demand for energy in smartphones. However, unlike the exponential growth in computing and communication technologies, in terms of speed and packaging density, battery technology has not kept pace with the rapidly growing energy demand of these devices. Therefore, designers are faced with the need to enhance the battery life of smartphones. Knowledge of how energy is used and lost in the system components of the devices is vital to this end. With this view, we focus on modeling and evaluating the energy performance of smartphones in this thesis. We also propose techniques for enhancing the energy efficiency and functionality of smartphones.
The detailed contributions of the thesis are as follows: (i) we present a finite state machine based model to estimate the energy cost of an application running on a smartphone, and provide practical approaches to extract model parameters; (ii) the concept of an energy cost profile is introduced to assess the impact of design decisions on energy cost at an early stage of software design; (iii) a generic architecture is proposed and implemented for enhancing the capabilities of smartphones by sharing resources; (iv) we have analyzed the Internet traffic of smartphones to observe the energy saving potential, and have studied the implications for existing energy saving techniques; and finally, (v) we have provided a methodology to select user-level test cases for performing energy cost evaluation of applications. All of our concepts and proposed methodology have been validated with extensive measurements on a real test bench.
Our work contributes both to a theoretical understanding of the energy efficiency of software applications and to practical methodologies for evaluating energy efficiency. In summary, the results of this work can be used by application developers to make implementation-level decisions that affect the energy efficiency of software applications on smartphones. In addition, this work leads to the design and implementation of energy efficient smartphones.
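
In the spirit of contribution (i), a minimal sketch of a finite state machine energy model is given below; the states, power figures, and trace are illustrative assumptions, not values from the thesis:

```python
# Sketch: estimate the energy cost of an application run as the sum of
# (assumed average power per device state) x (time spent in that state).
POWER_MW = {        # illustrative power draws, milliwatts (assumptions)
    "idle":      20,
    "cpu":      600,
    "wifi_tx": 1100,
    "screen":   400,
}

def energy_mj(trace):
    """Energy in millijoules for a trace of (state, seconds) pairs."""
    return sum(POWER_MW[state] * seconds for state, seconds in trace)

# Hypothetical trace of an email-sync task: wake, fetch over Wi-Fi, render.
trace = [("idle", 5.0), ("cpu", 1.2), ("wifi_tx", 0.8), ("screen", 3.0)]
print(f"estimated energy: {energy_mj(trace):.0f} mJ")
```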