1

Synthetic Traffic Models that Capture Cache Coherent Behaviour

Badr, Mario 24 June 2014 (has links)
Modern and future many-core systems represent large and complex architectures. The communication fabrics in these large systems play an important role in their performance and power consumption. Current simulation methodologies for evaluating networks-on-chip (NoCs) are not keeping pace with the increased complexity of our systems; architects often want to explore many different design knobs quickly. Methodologies that trade off some accuracy but preserve important workload trends in exchange for faster simulation times are highly beneficial at early stages of architectural exploration. We propose a synthetic traffic generation methodology that captures both application behaviour and cache coherence traffic to rapidly evaluate NoCs. This allows designers to quickly run detailed performance simulations without the cost of long-running full system simulation, while still capturing a full range of application and coherence behaviour. Our methodology has an average (geometric) error of 10.9% relative to full system simulation, and provides 50x speedup on average over full system simulation.
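The thesis does not publish its generator here, but the core idea of coherence-aware synthetic traffic can be sketched. In the toy generator below, packet injection is driven by a small Markov chain over coherence message classes, so the synthetic stream preserves a request/response/writeback mix like one measured from a real trace. The states, transition probabilities, and node count are invented for illustration; they are not taken from the thesis.

```python
import random

# Transition probabilities between coherence message classes.
# These numbers are hypothetical; in practice they would be fitted
# to a full-system simulation trace of the target workload.
TRANSITIONS = {
    "request":   [("response", 0.7), ("request", 0.2), ("writeback", 0.1)],
    "response":  [("request", 0.6), ("writeback", 0.2), ("response", 0.2)],
    "writeback": [("request", 0.8), ("response", 0.1), ("writeback", 0.1)],
}

def next_state(state, rng):
    # Sample the next message class from the current state's distribution.
    r, acc = rng.random(), 0.0
    for nxt, p in TRANSITIONS[state]:
        acc += p
        if r < acc:
            return nxt
    return TRANSITIONS[state][-1][0]

def generate(n_packets, n_nodes=16, seed=0):
    # Emit (cycle, message_class, src, dst) tuples; uniform random
    # source/destination stands in for a fitted spatial traffic pattern.
    rng = random.Random(seed)
    state, packets = "request", []
    for cycle in range(n_packets):
        src = rng.randrange(n_nodes)
        dst = rng.randrange(n_nodes)
        packets.append((cycle, state, src, dst))
        state = next_state(state, rng)
    return packets

trace = generate(1000)
```

The resulting trace can be replayed against a detailed NoC simulator in place of full-system simulation, which is where the speedup in such methodologies comes from.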
3

Performance modelling of reactive web applications using trace data from automated testing

Anderson, Michael 29 April 2019 (has links)
This thesis evaluates a method for extracting architectural dependencies and performance measures from an evolving distributed software system. The research goal was to establish methods of identifying potential scalability issues in a distributed software system as it is iteratively developed. The research evaluated the use of industry-available distributed tracing methods to extract performance measures and queueing network model parameters for common user activities. Additionally, a method was developed to trace and collect the system operations that correspond to these user activities by utilizing automated acceptance testing. Performance measure extraction was tested with this method across several historical releases of a real-world distributed software system. The trends in performance measures across releases correspond to several scalability issues identified in the production software system.
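Extracting queueing-model parameters from trace data can be illustrated with a small sketch: given tracing spans recorded during an automated test run, compute each operation's arrival rate and mean service demand, from which per-station utilization follows. The span format, operation names, and numbers below are invented for illustration and are not the thesis's trace schema.

```python
from collections import defaultdict

# Hypothetical spans from a 10-second automated test run:
# (operation, start_ms, duration_ms).
spans = [
    ("checkout", 0,    42), ("checkout", 900,  38),
    ("checkout", 2100, 55), ("search",   300,  12),
    ("search",   1500, 15), ("search",   4000,  9),
]
RUN_SECONDS = 10.0

# Group durations by operation.
by_op = defaultdict(list)
for op, _start, dur_ms in spans:
    by_op[op].append(dur_ms / 1000.0)  # ms -> s

# Derive queueing-network inputs: arrival rate (lambda), mean service
# demand (S), and utilization (rho = lambda * S) per operation.
params = {}
for op, durations in by_op.items():
    lam = len(durations) / RUN_SECONDS
    s = sum(durations) / len(durations)
    params[op] = {"lambda": lam, "service_s": s, "util": lam * s}
```

Comparing these derived parameters release-over-release is one plausible way trends like growing service demand surface as scalability warnings.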
4

Performance Modelling of Message-Passing Parallel Programs

Grove, Duncan January 2003 (has links)
Parallel computing is essential for solving very large scientific and engineering problems. An effective parallel computing solution requires an appropriate parallel machine and a well-optimised parallel program, both of which can be selected via performance modelling. This dissertation describes a new performance modelling system, called the Performance Evaluating Virtual Parallel Machine (PEVPM). Unlike previous techniques, the PEVPM system is relatively easy to use, inexpensive to apply and extremely accurate. It uses a novel bottom-up approach, where submodels of individual computation and communication events are dynamically constructed from data-dependencies, current contention levels and the performance distributions of low-level operations, which define performance variability in the face of contention. During model evaluation, the performance distribution attached to each submodel is sampled using Monte Carlo techniques, thus simulating the effects of contention. This allows the PEVPM to accurately simulate a program's execution structure, even if it is non-deterministic, and thus to predict its performance. Obtaining these performance distributions required the development of a new benchmarking tool, called MPIBench. Unlike previous tools, which simply measure average message-passing time over a large number of repeated message transfers, MPIBench uses a highly accurate and globally synchronised clock to measure the performance of individual communication operations. MPIBench was used to benchmark three parallel computers, which encompassed a wide range of network performance capabilities, namely those provided by Fast Ethernet, Myrinet and QsNet. Network contention, a problem ignored by most research in this area, was found to cause extensive performance variation during message-passing operations. For point-to-point communication, this variation was best described by Pearson 5 distributions. 
Collective communication operations could be modelled using their constituent point-to-point operations. In cases of severe contention, extreme outliers were common in the observed performance distributions, which were shown to be the result of lost messages and their subsequent retransmit timeouts. The highly accurate benchmark results provided by MPIBench were coupled with the PEVPM models of a range of parallel programs, and simulated by the PEVPM. These case studies proved that, unlike previous modelling approaches, the PEVPM technique successfully unites generality, flexibility, cost-effectiveness and accuracy in one performance modelling system for parallel programs. This makes it a valuable tool for the development of parallel computing solutions. / Thesis (Ph.D.)--Computer Science, 2003.
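The Monte Carlo idea at the heart of this approach can be sketched briefly: each computation or communication event carries a performance *distribution* rather than a single mean, and repeatedly sampling the whole event sequence yields a runtime distribution for the program. The sketch below uses a lognormal as a stand-in for the heavy-tailed (Pearson 5) per-message distributions the thesis reports; the event list and parameters are invented for illustration.

```python
import math
import random
import statistics

# Hypothetical event sequence: (name, median_ms, sigma of the log).
# Communication events get larger sigma to mimic contention-induced
# variability; compute events are nearly deterministic.
EVENTS = [
    ("compute", 5.0, 0.05),
    ("send",    0.8, 0.60),
    ("compute", 3.0, 0.05),
    ("recv",    1.2, 0.80),
]

def simulate_run(rng):
    # One Monte Carlo trial: sample every event's duration and sum.
    return sum(rng.lognormvariate(math.log(med), sig)
               for _name, med, sig in EVENTS)

rng = random.Random(7)
runs = [simulate_run(rng) for _ in range(5000)]
mean_ms = statistics.mean(runs)
p95_ms = sorted(runs)[int(0.95 * len(runs))]
```

Because each trial resamples every event, the tail of the resulting runtime distribution reflects the compounded effect of contention, which a single-mean model would hide.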
5

Performance Modelling of Database Designs using a Queueing Networks Approach. An investigation in the performance modelling and evaluation of detailed database designs using queueing network models.

Osman, Rasha Izzeldin Mohammed January 2010 (has links)
Databases form the common component of many software systems, including mission critical transaction processing systems and multi-tier Internet applications. There is a large body of research in the performance of database management system components, while studies of overall database system performance have been limited. Moreover, performance models specifically targeted at the database design have not been extensively studied. This thesis attempts to address this concern by proposing a performance evaluation method for database designs based on queueing network models. The method is targeted at designs of large databases in which I/O is the dominant cost factor. The database design queueing network performance model is suitable for providing 'what if' comparisons of database designs before database system implementation. A formal specification that captures the essential database design features while keeping the performance model sufficiently simple is presented. Furthermore, the simplicity of the modelling algorithms permits the direct mapping between database design entities and queueing network models. This allows for a more applicable performance model that provides relevant feedback to database designers and can be straightforwardly integrated into early database design development phases. The accuracy of the modelling technique is validated by modelling an open source implementation of the TPC-C benchmark. The contribution of this thesis is considered to be significant in that the majority of performance evaluation models for database systems target capacity planning or overall system properties, with limited work in detailed database transaction processing and behaviour.
In addition, this work is deemed to be an improvement over previous methodologies in that the transaction is modelled at a finer granularity, and that the database design queueing network model provides for the explicit representation of active database rules and referential integrity constraints. / Iqra Foundation
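The flavour of such a queueing-network evaluation can be shown with a minimal open-network sketch: each station (CPU, disk) is treated as an M/M/1 queue, and a transaction's response time is the sum of per-station residence times given its visit counts and service times. The station parameters and transaction rate below are hypothetical, chosen only to mirror the thesis's I/O-dominant setting; this is textbook open-network analysis, not the thesis's specific model.

```python
def open_qn_response_time(lam, stations):
    """Mean response time of an open queueing network of M/M/1 stations.

    lam      -- transaction arrival rate (transactions/s)
    stations -- list of (visits_per_txn, service_time_s) per station
    """
    total = 0.0
    for visits, service in stations:
        rho = lam * visits * service  # station utilization
        if rho >= 1.0:
            raise ValueError("station is saturated (rho >= 1)")
        # Residence time per visit is service/(1-rho); multiply by visits.
        total += visits * service / (1.0 - rho)
    return total

# Hypothetical design: CPU visited once at 2 ms, disk visited 4 times
# at 8 ms per transaction -- disk (I/O) dominates, as in the thesis.
r = open_qn_response_time(lam=20.0, stations=[(1, 0.002), (4, 0.008)])
```

Comparing `r` across alternative designs (e.g. an index change that reduces disk visits from 4 to 2) is the kind of 'what if' question the model is meant to answer before implementation.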
9

Performance Modelling and Simulation of Automotive Camera Sensors: An exploration of methods and techniques to simulate the behaviour of lane detection cameras

Trasiev, Yavor January 2015 (has links)
Safety, along with efficiency, is today one of the two strongest shaping forces in the automotive world, with advanced active safety applications being the major concentration of effort. Their development depends heavily on the quality of sensor data, a detailed measure of which is often up to the automotive manufacturers to derive, since the original equipment manufacturers (OEMs) may not disclose it on trade secrecy grounds. A model would not only provide a measure of the real-world performance of the sensor, but would also enable a higher degree of simulation accuracy, which is vital to active safety function development. This is largely due to the high cost and risk involved in testing, a significant part of which can be done in simulation alone. This thesis is an effort to derive, on behalf of Volvo Trucks, a model of the performance of one of the most crucial sensors in current active safety: a lane detection camera. The work focuses on investigating approaches for modelling and simulating the lane estimation process within the black-box camera, using reverse engineering of the sensor's principles of operation. The main areas of analysis used to define the factors that affect performance are the optics, image sensor, software and computer vision algorithms, and system interface. Each of them is considered separately, and methods for modelling are then proposed, motivated, and derived accordingly. Finally, the finished model is evaluated to provide a measure of the work's success and a basis for further development. / The thesis work was carried out at Volvo Group Trucks Technology in Göteborg, Sweden. Supervisor for GTT: Mansour Keshavarz.
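A black-box sensor model of the kind described often reduces to an error model applied to ground truth. The toy sketch below perturbs a true lane offset with a fixed bias, Gaussian noise, occasional dropouts, and a saturation range; every parameter is hypothetical, not taken from the thesis or any real camera datasheet.

```python
import random

def lane_camera_model(true_offset_m, rng, bias_m=0.02, noise_std_m=0.05,
                      dropout_p=0.03, range_m=1.5):
    """Toy error model for one lane-offset measurement (metres).

    All parameters are illustrative assumptions: a constant bias,
    additive Gaussian noise, a per-frame probability of losing the
    detection entirely, and clipping at the sensor's reporting range.
    """
    if rng.random() < dropout_p:
        return None  # detection lost this frame
    measured = true_offset_m + bias_m + rng.gauss(0.0, noise_std_m)
    return max(-range_m, min(range_m, measured))

rng = random.Random(42)
readings = [lane_camera_model(0.3, rng) for _ in range(1000)]
valid = [r for r in readings if r is not None]
mean_valid = sum(valid) / len(valid)
```

Feeding such a model with simulated ground truth lets an active safety function be exercised against realistic sensor imperfections without track testing, which is the cost and risk argument the abstract makes.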
10

Traffic and performance evaluation for optical networks : an investigation into modelling and characterisation of traffic flows and performance analysis and engineering for optical network architectures

Mouchos, Charalampos January 2009 (has links)
The convergence of multiservice heterogeneous networks and ever increasing Internet applications, like peer-to-peer networking and the growing number of users and services, demand more efficient bandwidth allocation in optical networks. In this context, new architectures and protocols are needed in conjunction with cost-effective quantitative methodologies in order to provide an insight into the performance aspects of the next and future generation Internets. This thesis reports an investigation, based on efficient simulation methodologies, to assess existing high performance algorithms and to propose new ones. The analysis of the traffic characteristics of an OC-192 link (9953.28 Mbps) is initially conducted, a requirement due to the discovery of self-similar, long-range dependent properties in network traffic, and the suitability of the GE distribution for modelling interarrival times of bursty traffic at short time scales is presented. Consequently, using a heuristic approach, the self-similar properties of the GE/G/∞ queue are presented, providing a method to generate self-similar traffic that takes burstiness at small time scales into consideration. A description of the state of the art in optical networking then provides a deeper insight into current technologies, protocols and architectures in the field, creating the motivation for further research into the promising switching technique of 'Optical Burst Switching' (OBS). An investigation into the performance impact of various burst assembly strategies on an OBS edge node's mean buffer length is conducted. Realistic traffic characteristics are considered, based on the analysis of the OC-192 backbone traffic traces. In addition, the effect of burstiness at small time scales on mean assembly time and burst size distribution is investigated.
A new Dynamic OBS Offset Allocation Protocol is devised, and favourable comparisons are carried out between the proposed OBS protocol and the Just Enough Time (JET) protocol in terms of mean queue length, blocking and throughput. Finally, the research focuses on the simulation methodologies employed throughout the thesis, using the Graphics Processing Unit (GPU) on a commercial NVidia GeForce 8800 GTX, which was originally designed for gaming computers. Parallel generators of optical bursts are implemented and simulated in 'Compute Unified Device Architecture' (CUDA) and compared with simulations run on a general-purpose CPU, proving the GPU to be a cost-effective platform that can significantly speed up calculations and make simulations of more complex and demanding networks easier to develop.
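The GE (Generalized Exponential) interarrival model mentioned above has a simple two-branch sampling form: with some probability the interarrival time is zero (a batch arrival, capturing small-time-scale burstiness), otherwise it is exponential. The sketch below uses the common parameterization in terms of the mean rate and the squared coefficient of variation (SCV); treat it as an illustrative recollection of the standard GE construction, not as the exact formulation used in the thesis.

```python
import random

def ge_interarrival(lam, scv, rng):
    """One GE-distributed interarrival time.

    lam -- mean arrival rate (mean interarrival is 1/lam)
    scv -- squared coefficient of variation, must be >= 1;
           scv == 1 degenerates to a plain exponential.
    With probability (scv-1)/(scv+1) the interarrival is zero
    (a batch arrival); otherwise it is exponential with rate
    2*lam/(scv+1), preserving the overall mean 1/lam.
    """
    p_batch = (scv - 1.0) / (scv + 1.0)
    if rng.random() < p_batch:
        return 0.0
    return rng.expovariate(2.0 * lam / (scv + 1.0))

rng = random.Random(1)
n = 100_000
times = [ge_interarrival(lam=1.0, scv=9.0, rng=rng) for _ in range(n)]
sample_mean = sum(times) / n  # should be close to 1/lam = 1.0
```

The zero-interarrival branch is what lets a GE arrival stream reproduce the clustered, bursty packet arrivals seen at small time scales on links like the OC-192 trace analysed in the thesis.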
