1. An approach to operational design co-ordination. Coates, Graham, January 2001.
Design co-ordination is aimed at improving the performance of the design development process. It can be viewed as the continuous, coherent organisation and control of the assignment of inter-related tasks to the most relevant resources, such that the tasks can be undertaken and completed in a suitable order in a timely and appropriate manner. The nature of operational design co-ordination is discussed, resulting in the identification of key issues: coherence, communication/interaction, task management, resource management, schedule management and real-time support. Based on these key issues, existing approaches related to operational engineering management have been critically reviewed and found to exhibit a number of fundamental limitations. A set of requirements has therefore been established that defines an approach to operational design co-ordination, aimed at addressing the key issues identified and overcoming the limitations of existing approaches. A novel, integrated and holistic approach to operational design co-ordination has been developed that enables the performance of the design development process to be improved. This approach consists of two components: a methodology and a knowledge modelling formalism. The methodology, in turn, consists of two parts: real-time and prospective. Real-time operational design co-ordination enables the coherent, timely and appropriate structured undertaking of inter-related tasks while continuously optimising the utilisation of resources, in accordance with dynamically derived schedules, within a changeable design development process. Prospective operational design co-ordination facilitates the identification of deficiencies in existing resources with respect to scheduled tasks and, thus, the assessment of proposed improvements to those resources. The knowledge modelling formalism of tasks, resources and schedules supports the methodology. Three practical case studies from engineering industry have been used to evaluate the approach. A prototype agent-oriented system, the Design Co-ordination System, has been developed to evaluate the implementation of the real-time part of the methodology by applying it to a turbine blade design process. The prospective part of the methodology has been applied to case studies concerning a marine vessel conversion design programme and a rotary drum dryer design development process. Based on the evaluation of the approach, its strengths and weaknesses have been identified. Finally, areas of possible future work are recommended to improve the approach and further develop the Design Co-ordination System, and, based on industrial feedback, further applications of the approach are suggested.
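
As an illustration of the kind of knowledge model such a methodology rests on, the following is a minimal sketch of tasks, resources and a dynamically derived schedule, with a greedy assignment of each task to the earliest-available capable resource. This is not the thesis's actual formalism; all class names, fields and the dispatch rule are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    duration: float                      # estimated effort in hours
    requires: str                        # capability needed, e.g. "stress-analysis"
    predecessors: list = field(default_factory=list)

@dataclass
class Resource:
    name: str
    capabilities: set
    free_at: float = 0.0                 # time the resource next becomes available

def schedule(tasks, resources):
    """Greedily assign each task (in precedence order) to the capable
    resource that can start it earliest, yielding a simple schedule.
    Assumes tasks are topologically sorted and every task has at least
    one capable resource."""
    finish = {}
    plan = []
    for t in tasks:
        ready = max((finish[p] for p in t.predecessors), default=0.0)
        r = min((r for r in resources if t.requires in r.capabilities),
                key=lambda r: max(r.free_at, ready))
        start = max(r.free_at, ready)
        r.free_at = finish[t.name] = start + t.duration
        plan.append((t.name, r.name, start, finish[t.name]))
    return plan
```

A real-time variant would re-run such a scheduling step whenever tasks, durations or resource availability change, which is the role the dynamically derived schedules play in the methodology.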

2. Editorial: Performance Engineering of Communication Systems and Applications. Awan, Irfan U., 19 November 2012.

3. Performance engineering of computer and communication systems. Awan, Irfan U.; Fretwell, Rod J., January 2005.

4. Automated Analysis of Load Testing Results. Jiang, Zhen Ming, 29 January 2013.
Many software systems must be load tested to ensure that they can scale up under high load while maintaining functional and non-functional requirements. Studies show that field problems are often related to systems not scaling to field workloads rather than to feature bugs. To assure the quality of these systems, load testing is a required procedure in addition to conventional functional testing procedures, such as unit and integration testing. Current industrial practices for checking the results of a load test remain ad hoc, involving high-level manual checks. Few research efforts are devoted to the automated analysis of load testing results, mainly due to the limited access to large-scale systems for use as case studies. Approaches for the automated and systematic analysis of load tests are needed, as many services are being offered online to an increasing number of users. This dissertation proposes automated approaches to assess the quality of a system under load by mining some of the recorded load testing data (execution logs). Execution logs, which are readily available yet rarely used, are generated by output statements which developers insert into the source code. Execution logs are hard to parse and analyze automatically due to their free-form structure. We first propose a log abstraction approach that uncovers the internal structure of each log line. We then propose automated approaches to assess the quality of a system under load by deriving various models (functional, performance and reliability models) from the large set of execution logs. Case studies show that our approaches scale well to large enterprise and open-source systems and output high-precision results that help load testing practitioners effectively analyze the quality of the system under load. Thesis (Ph.D., Computing), Queen's University, 2013.
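
To make the log-abstraction step concrete, here is a minimal sketch of one common way to recover the static structure of free-form log lines: mask the dynamic tokens (numbers, hex identifiers, IP-like strings) so that lines generated by the same output statement collapse to the same template. This illustrates the general technique, not the dissertation's exact algorithm; the masking rules and sample lines are invented.

```python
import re
from collections import Counter

# Masking rules: order matters (more specific patterns first).
_MASKS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),   # IPv4 addresses
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),           # hex identifiers
    (re.compile(r"\b\d+\b"), "<NUM>"),                      # plain integers
]

def abstract_line(line: str) -> str:
    """Reduce a raw log line to its static template (execution event)."""
    for pattern, token in _MASKS:
        line = pattern.sub(token, line)
    return line.strip()

logs = [
    "user 4096 connected from 10.0.0.7",
    "user 5121 connected from 10.0.0.9",
    "buffer 0x7f3a flushed in 12 ms",
]
templates = Counter(abstract_line(l) for l in logs)
for template, count in templates.items():
    print(count, template)
# -> 2 user <NUM> connected from <IP>
#    1 buffer <HEX> flushed in <NUM> ms
```

Once lines are reduced to templates, counting and sequencing those templates across a load test is what makes the derivation of functional, performance and reliability models tractable.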

5. Are the frameworks good enough? A study of performance implications of JavaScript framework choice through load- and stress-testing Angular, Vue, React and Svelte. Marx-Raacz Von Hidvég, Tomas, January 2022.
The subject of JavaScript frameworks/libraries, and which to choose as a developer, organization or client, has been the target of considerable research and even more debate. This thesis investigates four of them, namely Angular, React, Svelte and Vue, by building a functionally identical application in each and running performance tests against them through Apache JMeter to measure throughput. Furthermore, the JMeter WebDriver plugin is used to measure full render response times, adding another dimension to the discussion. The results show that while some of the tests speak strongly in favor of particular frameworks/libraries, others do not, and adding previous research on perceived values as well as performance metrics makes the picture even more complex. Moreover, the frameworks/libraries in question evolve rapidly, constantly contending with one another. What this study ultimately provides is a basic, extensible method of comparison that helps stakeholders investigate the frameworks'/libraries' strengths and weaknesses and determine which framework fits their project's needs.
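
The thesis pairs JMeter with its WebDriver plugin; as a rough Python analogue, the sketch below drives a real browser with Selenium and reads the browser's Navigation Timing data to approximate a full render response time. The URL and the choice of headless Chrome are placeholders, not the thesis's setup.

```python
from selenium import webdriver

URL = "http://localhost:3000/"          # placeholder: the app under test

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")  # run without a visible window
driver = webdriver.Chrome(options=options)
try:
    driver.get(URL)                     # blocks until the load event fires
    # Navigation Timing: time from navigation start to load event end.
    load_ms = driver.execute_script(
        "const t = performance.timing;"
        "return t.loadEventEnd - t.navigationStart;"
    )
    print(f"full page load: {load_ms} ms")
finally:
    driver.quit()
```

Repeating such a measurement across the four candidate applications, under the same network conditions, gives the render-time dimension that raw JMeter throughput numbers miss.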

6. Comparative Study of Open-Source Performance Testing Tools versus OMEXUS. Xia, Ziqi, January 2021.
With the development of service digitalization and the increased adoption of web services, modern large-scale software systems often need to support a large volume of concurrent transactions. Performance testing focused on evaluating the performance of systems under workload has therefore gained greater attention in current software development. Although many performance testing tools are available to assist with load generation, there is a lack of a systematic evaluation process to provide guidance and parameters for tool selection in a specific domain. Focusing on business operations as the specific domain, and on the Nasdaq Central Securities Depository (NCSD) system as an example of a large-scale software system, this thesis explores the opportunities and challenges of existing open-source performance testing tools as measured by usability and feasibility metrics. The thesis presents an approach to evaluating performance testing tools with respect to requirements from the business domain and the system under test. The approach consists of a user study conducted with four quality assurance experts discussing general performance metrics and specific analytical needs. The outcome of the user study provided the assessment metrics for a comparative experimental evaluation of three open-source performance testing tools (JMeter, Locust and Gatling) in a realistic test scenario. The three tools were evaluated in terms of their affordances and limitations in presenting analytical details of performance metrics, the efficiency of their load generation, and their ability to implement realistic load models. The research shows that the user study with potential tool users provided a clear direction for evaluating the usability of the three tools. Additionally, the realistic test case was sufficient to reveal each tool's capability to achieve the same scale of performance as Nasdaq's in-house testing tool OMEXUS, and to provide additional value through realistic simulation of user population and user behavior during performance testing with regard to the specified requirements.
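
Of the three tools compared, Locust is itself a Python library, so a minimal load scenario is easy to show. The sketch below defines a weighted two-task user; the endpoints, payload and think-time values are hypothetical, not the NCSD workload.

```python
from locust import HttpUser, task, between

class TradingUser(HttpUser):
    # Simulated think time between consecutive user actions.
    wait_time = between(0.5, 2.0)

    @task(3)  # weighted: issued three times as often as status checks
    def submit_instruction(self):
        self.client.post("/api/instructions",
                         json={"isin": "SE0000000001", "quantity": 100})

    @task(1)
    def check_status(self):
        self.client.get("/api/instructions/latest")

# Run with, for example:
#   locust -f locustfile.py --host https://test-env.example --users 500 --spawn-rate 50
```

Because the user classes are ordinary Python, Locust makes it straightforward to encode the kind of realistic load models (distinct user populations with distinct behaviors) that the evaluation criteria call for.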

7. Traffic and performance evaluation for optical networks: an investigation into modelling and characterisation of traffic flows, and performance analysis and engineering for optical network architectures. Mouchos, Charalampos, January 2009.
The convergence of multiservice heterogeneous networks and ever-increasing Internet applications, such as peer-to-peer networking, together with the growing number of users and services, demands more efficient bandwidth allocation in optical networks. In this context, new architectures and protocols are needed, in conjunction with cost-effective quantitative methodologies, to provide insight into the performance aspects of the next and future generation Internet. This thesis reports an investigation, based on efficient simulation methodologies, that assesses existing high-performance algorithms and proposes new ones. An analysis of the traffic characteristics of an OC-192 link (9953.28 Mbps) is first conducted, a requirement arising from the discovery of self-similar, long-range-dependent properties in network traffic, and the suitability of the Generalised Exponential (GE) distribution for modelling interarrival times of bursty traffic at short time scales is presented. Subsequently, using a heuristic approach, the self-similar properties of the GE/G/∞ queue are presented, providing a method for generating self-similar traffic that takes burstiness at small time scales into consideration. The state of the art in optical networking is then described, providing a deeper insight into current technologies, protocols and architectures in the field and motivating further research into the promising switching technique of Optical Burst Switching (OBS). An investigation into the performance impact of various burst assembly strategies on an OBS edge node's mean buffer length is conducted, considering realistic traffic characteristics based on the analysis of the OC-192 backbone traffic traces. In addition, the effect of burstiness at small time scales on mean assembly time and burst size distribution is investigated. A new Dynamic OBS Offset Allocation Protocol is devised, and favourable comparisons are carried out between the proposed protocol and the Just Enough Time (JET) protocol in terms of mean queue length, blocking and throughput. Finally, the research focuses on the simulation methodologies employed throughout the thesis, using the Graphics Processing Unit (GPU) of a commercial NVIDIA GeForce 8800 GTX, a card originally designed for gaming computers. Parallel generators of optical bursts are implemented in the Compute Unified Device Architecture (CUDA) and compared with simulations run on a general-purpose CPU, showing the GPU to be a cost-effective platform that can significantly speed up calculations and make simulations of more complex and demanding networks easier to develop.
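
To illustrate the GE interarrival model mentioned above: a GE distribution with mean rate ν and squared coefficient of variation C² ≥ 1 can be sampled as a probabilistic mix of a zero gap (an arrival in the same batch, with probability 1 − τ where τ = 2/(C² + 1)) and an exponential stage with rate τν. The sketch below is this standard construction with illustrative parameter values, not figures or code from the thesis.

```python
import random

def ge_interarrival(nu: float, scv: float) -> float:
    """Sample an interarrival time from a GE distribution with
    mean 1/nu and squared coefficient of variation scv (>= 1)."""
    tau = 2.0 / (scv + 1.0)
    if random.random() > tau:
        return 0.0                     # arrival belongs to the same batch
    return random.expovariate(tau * nu)

# Illustrative check: the sample mean should approach 1/nu.
nu, scv = 2.0, 9.0                     # rate 2 arrivals/unit, bursty (C^2 = 9)
samples = [ge_interarrival(nu, scv) for _ in range(100_000)]
print(sum(samples) / len(samples))     # ~0.5
```

The zero-gap branch is what captures burstiness at small time scales: a high C² drives τ down, so arrivals increasingly cluster into batches while the long exponential gaps between batches preserve the required mean rate.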

8. MPI Performance Engineering with the MPI Tools Information Interface. Ramesh, Srinivasan, 6 September 2018.
The desire for high performance on scalable parallel systems is increasing the complexity of MPI implementations and the need to tune them. The MPI Tools Information Interface (MPI_T), introduced in the MPI 3.0 standard, provides an opportunity for performance tools and external software to introspect and understand MPI runtime behavior at a deeper level to detect scalability issues. The interface also provides a mechanism to fine-tune the performance of the MPI library dynamically at runtime. This thesis describes the motivation, design, and challenges involved in developing an MPI performance engineering infrastructure using MPI_T for two performance toolkits: the TAU Performance System and Caliper. I validate the design of the infrastructure for TAU by developing optimizations for production and synthetic applications. I show that the MPI_T runtime introspection mechanism in Caliper enables a meaningful analysis of performance data. This thesis includes previously published co-authored material.

9. Software Performance Prediction: using SPE. Gyarmati, Erik; Stråkendal, Per, January 2002.
Performance objectives are often neglected during the design phase of a project, and performance problems are often not discovered until the system is implemented. Industry therefore needs a method for predicting the performance of a system early in the design phase. One method that tries to solve this problem is Software Performance Engineering (SPE). This report gives a short introduction to software performance and an overview of the SPE method for performance prediction. It also contains a case study in which SPE is applied to an existing system.
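
To give a flavor of the kind of early estimate SPE-style prediction produces, the sketch below totals per-step resource demands from a simple software execution model and applies the open-system single-queue approximation R = D / (1 − U), with U = X·D from the utilization law. This is a simplified illustration of the idea, not the SPE method itself; the scenario steps, demands and arrival rate are invented.

```python
# Software execution model: (step, CPU demand in seconds) for one scenario.
scenario = [
    ("validate request", 0.002),
    ("query database",   0.015),
    ("render response",  0.005),
]

arrival_rate = 30.0                            # requests per second (assumed)

demand = sum(d for _, d in scenario)           # total demand D per request
utilisation = arrival_rate * demand            # utilization law: U = X * D
if utilisation >= 1.0:
    print(f"saturated: U = {utilisation:.2f} >= 1, demand must be reduced")
else:
    response = demand / (1.0 - utilisation)    # M/M/1: R = D / (1 - U)
    print(f"D = {demand*1000:.1f} ms, U = {utilisation:.0%}, "
          f"predicted R = {response*1000:.1f} ms")
```

Even this crude model shows the value of predicting at design time: with the assumed numbers, a 22 ms total demand balloons to roughly 65 ms of response time at 66% utilization, long before any code exists to measure.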

10. Academic Performance as a Predictor of Student Growth in Achievement and Mental Motivation During an Engineering Design Challenge in Engineering and Technology Education. Mentzer, Nathan, 1 December 2008.
The purpose of this correlational research study was to determine whether students' academic success was correlated with (a) student change in achievement during an engineering design challenge, and (b) student change in mental motivation toward problem solving and critical thinking during an engineering design challenge. Multiple experimental studies have shown that engineering design challenges increase student achievement and improve attitudes toward learning, but conflicting evidence surrounds the impact on higher and lower academically achieving students. A high school classroom was chosen in which elements of engineering design were purposefully taught. Eleventh-grade student participants represented a diverse set of academic backgrounds (measured by grade point average [GPA]). Participants were measured in terms of achievement and mental motivation at three time points.
Longitudinal multilevel modeling techniques were employed to identify significant predictors of growth in achievement and mental motivation during the school year. Student achievement was significantly correlated with science GPA, but not with math or communication GPA. Changes in achievement score over time were not significantly correlated with science, math, or communication GPA. Mental motivation was measured by five subscales. Mental focus was correlated with math and science GPA. Increases in mental focus over time were negatively correlated with science GPA, indicating that the initial score differential between higher and lower science GPA students decreased over time. Learning orientation and cognitive integrity were not correlated with GPA. Creative problem solving was correlated with science GPA, but gains over time were not correlated with GPA. Scholarly rigor was correlated with science GPA, but change over time was not correlated with GPA.
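
For readers unfamiliar with longitudinal multilevel modeling, the sketch below shows how a growth model of this general shape can be fit in Python with statsmodels: repeated achievement measurements nested within students, a random intercept and slope per student, and GPA predicting both initial status and growth. The data file and column names are hypothetical, and this is not the study's actual model specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per student per time point (hypothetical file).
df = pd.read_csv("design_challenge_scores.csv")
# columns: student, time (0, 1, 2), achievement, science_gpa, math_gpa, comm_gpa

# Random intercept and random slope for time, grouped by student.
# The time:science_gpa interaction tests whether growth differs by science GPA.
model = smf.mixedlm(
    "achievement ~ time * science_gpa + math_gpa + comm_gpa",
    data=df,
    groups=df["student"],
    re_formula="~time",
)
result = model.fit()
print(result.summary())
```

A negative interaction coefficient in such a model would correspond to the pattern reported above for mental focus: students with higher science GPA start higher but gain less over time, narrowing the initial gap.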