The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Improving the performance of distributed multi-agent based simulation

Mengistu, Dawit January 2011 (has links)
This research investigates approaches to improving the performance of multi-agent based simulation (MABS) applications executed in distributed computing environments. MABS is a type of micro-level simulation used to study dynamic systems consisting of interacting entities, and in some cases the number of simulated entities can be very large. Most existing publicly available MABS tools are single-threaded desktop applications that are not suited for distributed execution. For this reason, general-purpose multi-agent platforms with multi-threading support are sometimes used to deploy MABS on distributed resources. However, these platforms do not scale well for large simulations because of their huge communication overheads. In this research, different strategies for deploying large-scale MABS in distributed environments are explored, e.g., tuning existing multi-agent platforms, porting single-threaded MABS tools to a distributed environment, and implementing a service-oriented architecture (SOA) deployment model. Although the factors affecting the performance of distributed applications are well known, their relative significance depends on the architecture of the application and the behaviour of the execution environment. We developed mathematical performance models to understand the influence of these factors and to analyze the execution characteristics of MABS. These performance models are then used to formulate algorithms for resource management and application-tuning decisions. The most important performance-improvement solutions achieved in this thesis are: predictive estimation of optimal resource requirements, heuristics for agent reallocation to reduce communication overhead, and an optimistic synchronization algorithm to minimize time-management overhead. Additional application-tuning techniques, such as agent directory caching and message aggregation for fine-grained simulations, are also proposed.
These solutions were experimentally validated in different types of distributed computing environments. A further contribution of this research is that all of the proposed improvement measures are implemented at the application level, so they do not affect the configuration of the computing and communication resources on which the application runs. Such application-level optimizations are useful for application developers and users who have limited access to remote resources and lack the authorization to carry out resource-level optimizations.
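The message-aggregation idea mentioned above can be sketched in a few lines of Python; the class name, batch size, and counters below are illustrative, not the thesis's actual implementation:

```python
from collections import defaultdict

class AggregatingRouter:
    """Buffers agent-to-agent messages per destination host and flushes
    them as a single batch, trading a little latency for far fewer
    network sends (the dominant overhead in fine-grained simulations)."""

    def __init__(self, batch_size=10):
        self.batch_size = batch_size
        self.buffers = defaultdict(list)   # host -> pending messages
        self.sends = 0                     # count of network operations

    def send(self, host, message):
        self.buffers[host].append(message)
        if len(self.buffers[host]) >= self.batch_size:
            self.flush(host)

    def flush(self, host):
        if self.buffers[host]:
            self.sends += 1                # one network call per batch
            self.buffers[host].clear()

router = AggregatingRouter(batch_size=10)
for i in range(100):
    router.send("host-a", f"msg-{i}")
# 100 messages cross the network in only 10 sends
```

In a real platform the flush would also be triggered by a timer so that a partially filled buffer cannot stall the simulation.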
2

An Evaluation of the Linux Virtual Memory Manager to Determine Suitability for Runtime Variation of Memory

Muthukumaraswamy Sivakumar, Vijay 01 June 2007 (has links)
Systems that support virtual memory virtualize the available physical memory so that applications running on them operate under the assumption that the system has more memory than is actually present. The memory managers of these systems manage the virtual and physical address spaces and are responsible for converting the virtual addresses used by applications to the physical addresses used by the hardware. These memory managers assume that the amount of physical memory is constant and does not change during their period of operation. Some operating scenarios, however, such as power-conservation mechanisms and virtual machine monitors, require the ability to vary the physical memory available at runtime, invalidating the assumptions made by these memory managers. In this work we evaluate the suitability of the Linux memory manager, which assumes that the available physical memory is constant, for varying the memory at runtime. We have implemented an infrastructure over the Linux 2.6.11 kernel that enables the user to vary the physical memory available to the system. The available physical memory is logically divided into banks, and each bank can be turned on or off independently of the others using new system calls we have added to the kernel. Apart from adding support for the new system calls, other changes had to be made to the Linux memory manager to support the runtime variation of memory. To evaluate this suitability we performed experiments with varying memory sizes on both the modified and the unmodified kernels. We observed that the design of the existing memory manager is not well suited to supporting the runtime variation of memory; we provide suggestions for making it better suited to such purposes.
Even though applications running on systems that support virtual memory do not use physical memory directly and are not aware of the physical addresses they use, the amount of physical memory available affects their performance. The results of our experiments helped us study the influence that the amount of available physical memory has on the performance of various types of applications. These results can be used in scenarios requiring the ability to vary memory at runtime to do so with the least degradation in application performance. / Master of Science
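The bank-based model of physical memory can be illustrated with a small simulation; the class, bank sizes, and method names are hypothetical stand-ins for the kernel-level system calls the thesis adds:

```python
class BankedMemory:
    """Models physical memory logically divided into equal banks that
    can be turned on or off at runtime, as in the modified kernel."""

    def __init__(self, total_mb, banks):
        assert total_mb % banks == 0, "banks must divide total memory"
        self.bank_size = total_mb // banks
        self.online = [True] * banks       # all banks start online

    def set_bank(self, idx, on):
        # In the thesis this state change is requested via a new
        # system call; here we simply flip the bank's flag.
        self.online[idx] = on

    def available_mb(self):
        # Memory visible to the allocator is the sum of online banks.
        return sum(self.online) * self.bank_size

mem = BankedMemory(total_mb=512, banks=8)   # 8 banks of 64 MB each
mem.set_bank(7, False)                      # power down two banks
mem.set_bank(6, False)
# available memory drops from 512 MB to 384 MB
```

The hard part the thesis addresses is what the real memory manager must do before a bank can go offline: migrating or evicting the pages that currently live in it.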
3

Flexible Event Processing Subsystem for the Java Performance Monitoring Framework

Júnoš, Peter January 2015 (has links)
The Java Performance Monitoring Framework (JPMF) is a framework for describing the points at which performance is measured. This description is used to gather performance data at those points at run time. The data are gathered and written out without any processing, which inflates bandwidth requirements and puts a high load on storage; JPMF gives the user no way to reduce this data. This thesis aims to solve the described problem by introducing filtering and aggregation, which should reduce the required bandwidth. Additionally, performance bottlenecks in various parts of JPMF are investigated and removed.
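The filtering-and-aggregation idea can be sketched as follows; the event fields and helper names are assumed for illustration and are not taken from JPMF's API:

```python
def process(events, keep, key):
    """Filter raw performance events with `keep`, then aggregate the
    survivors per `key`, so only compact summaries reach storage."""
    summary = {}
    for e in filter(keep, events):
        k = key(e)
        count, total = summary.get(k, (0, 0))
        summary[k] = (count + 1, total + e["elapsed_us"])
    return summary  # one (count, total) record per key, not per event

events = [
    {"probe": "db.query",  "elapsed_us": 900},
    {"probe": "db.query",  "elapsed_us": 1500},
    {"probe": "cache.get", "elapsed_us": 40},
    {"probe": "db.query",  "elapsed_us": 2100},
]
# Keep only slow events (>= 1000 us) and summarise them per probe point.
out = process(events,
              keep=lambda e: e["elapsed_us"] >= 1000,
              key=lambda e: e["probe"])
# out == {"db.query": (2, 3600)}
```

Four raw events collapse to a single summary record, which is exactly the bandwidth reduction the thesis is after.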
4

Kartläggning av systemanvändning genom Application Performance Monitoring

Lundgren, Thomas January 2020 (has links)
The use of Application Performance Monitoring (APM) for collecting data about performance and end-user behaviour in complex software systems is increasing. APM is used to ensure availability and robustness and to enhance end-user experiences. This study investigates how APM can be adopted, what challenges arise during implementation, and what costs and performance overhead APM entails.
This is done through a case study in which APM is introduced into the Enterprise Resource Planning (ERP) system MONITOR G5, developed and maintained by the Swedish software company Monitor ERP System AB. The system is built on Microsoft's .NET Framework, and the APM service used is Microsoft's Application Insights. The study resulted in a proposed APM solution in which data on user interactions, performance, and errors are collected and visualized. Six dashboards were created showing different aspects of the collected data, for instance: which parts of the system are most and least frequently used, errors, view load times, and performance metrics such as processor and memory usage. The cost analysis shows that monetary costs can be very high, but strategies for keeping costs down are proposed. The performance tests conducted to determine the overhead of APM were inconclusive, but the performance penalty of using APM is likely small.
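One common cost-suppression strategy in APM services is sampling telemetry before ingestion. The sketch below is a generic illustration under that assumption; it is not the specific strategy proposed in the study, and the event fields are made up:

```python
import random

def sampled(events, rate, rng=None):
    """Forward only a fraction `rate` of telemetry events, attaching a
    weight so aggregate counts can still be estimated without bias."""
    rng = rng or random.Random(0)          # fixed seed for repeatability
    kept = []
    for e in events:
        if rng.random() < rate:
            kept.append({**e, "weight": 1.0 / rate})
    return kept

events = [{"view": "invoice", "ms": i} for i in range(1000)]
kept = sampled(events, rate=0.1)           # ingest roughly 10% of events
estimate = sum(e["weight"] for e in kept)  # unbiased estimate of 1000
```

Ingestion-billed services charge per event or per gigabyte, so forwarding a tenth of the events cuts the bill proportionally while weighted sums keep dashboard counts approximately correct.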
5

Understanding the Impact of OS Background Noise with a Custom Performance Evaluation Tool

Westberg, Daniel January 2023 (has links)
Understanding the background activity of a computer and its operating system while an arbitrary application runs can lead to important performance discoveries. This is especially interesting in cases where the same task of an application is run over and over again and there is an expected run time, such as in testing. If a major deviation in the run time occurs, it can be crucial to know the reason in order to prevent it from happening again. Additionally, finding the relevant measurements to explain the performance in a compact way, such as a score, can further help both the readability and the understanding of the performance. For this project, a tool was developed that, using existing tools, measures various parts of a computer and its operating system and presents their activity during the run time of a selected application over multiple iterations, and that calculates the relevance of the different measurements with the purpose of finding one that can consistently rate the overall performance. Using the results, no single measurement was found that could rate the overall performance consistently, only for specific scenarios. Possible causes for performance deviations could be found, however. The results show that although there is some activity in the background, most background operating-system noise does not have a major effect on performance, and major deviations in the run time are rare. However, injecting artificial noise, in the form of either CPU or memory load, can cause major performance penalties, sometimes reaching up to double the average run time.
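Flagging run-time deviations across repeated iterations, as the tool above does, can be sketched with a simple standard-deviation check; the threshold and sample data are illustrative:

```python
import statistics

def deviations(runtimes, threshold=2.0):
    """Return the indices of iterations whose run time deviates from
    the mean by more than `threshold` standard deviations."""
    mean = statistics.mean(runtimes)
    sd = statistics.stdev(runtimes)
    return [i for i, t in enumerate(runtimes)
            if sd > 0 and abs(t - mean) > threshold * sd]

# Ten iterations of the same task; one run took roughly double the time,
# e.g. because of background OS activity or injected noise.
times = [1.00, 1.02, 0.99, 1.01, 1.00, 2.05, 1.01, 0.98, 1.00, 1.02]
flagged = deviations(times)   # -> [5]
```

Knowing *which* iteration deviated is what lets the tool then correlate the outlier with the background measurements recorded at the same time.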
6

Characterizing and improving last mile performance using home networking infrastructure

Sundaresan, Srikanth 27 August 2014 (has links)
More than a billion people access the Internet through residential broadband connections worldwide, and this number is projected to grow further. Surprisingly little is known about some important properties of these networks: What performance do users obtain from their ISP? What factors affect the performance of broadband networks? Are users bottlenecked by their ISP or by their home network? How are applications such as the Web affected by these factors? Answering these questions is difficult; there is tremendous diversity of technologies and applications in home and broadband networks. While a lot of research has tackled these questions piecemeal, the lack of a good vantage point for obtaining measurements from these networks makes a holistic characterization of the "last mile" notably difficult. In this dissertation we use the home gateway to characterize home and access networks and to mitigate performance bottlenecks that are specific to such networks. The home gateway is uniquely situated: it is always on and, as the hub of the network, it can directly observe the home network, the access network, and user traffic. We present one such gateway-based platform, BISmark, which currently has nearly 200 active access points in over 20 countries. We do a holistic characterization of three important components of the last mile using the gateway as the vantage point: the access link that connects the user to the wider Internet, the home network to which devices connect, and Web performance, one of the most commonly used applications on today's Internet. We first describe the design, development, and deployment of the BISmark platform. BISmark uses custom gateways to enable measurements and to evaluate performance optimizations directly from home networks. We characterize access-link performance in the US using measurements from the gateway; we evaluate existing techniques and propose new techniques that help us understand these networks better.
We show how access-link technology and home networking hardware can affect performance. We then develop a new system that uses passive measurements at the gateway to localize bottlenecks to either the wireless network or the access link. We deploy this system in 64 homes worldwide and characterize the nature of bottlenecks and the state of the wireless network in these homes; specifically, we show that the wireless network is rarely the bottleneck, as its throughput exceeds 35 Mbits/s. Finally, we characterize bottlenecks affecting Web performance that are specific to the last mile. We show how latency in the last mile causes page load times to stagnate once throughput exceeds 16 Mbits/s, and how simple techniques deployed at the gateway can mitigate these bottlenecks.
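A minimal sketch of throughput-based bottleneck localization, assuming each hop's throughput can be measured separately; the function and margin are illustrative, not BISmark's actual passive algorithm:

```python
def locate_bottleneck(wireless_mbps, access_mbps, margin=1.2):
    """Attribute the end-to-end bottleneck to whichever hop sustains
    clearly lower throughput; `margin` avoids flip-flopping when the
    two measurements are close."""
    if wireless_mbps * margin < access_mbps:
        return "wireless"
    if access_mbps * margin < wireless_mbps:
        return "access link"
    return "inconclusive"

# A 100 Mbit/s access link behind a congested 20 Mbit/s wireless hop:
where = locate_bottleneck(wireless_mbps=20, access_mbps=100)
# where == "wireless"
```

The dissertation's system works passively from user traffic rather than from active throughput probes, but the attribution logic follows this shape: whichever segment saturates first is the bottleneck.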
7

Applikationsövervakning : Dess möjliga bidrag till en verksamhet

Dellestrand, August, Lundin, Tobias January 2015 (has links)
Application monitoring is a term for monitoring applications in real time in order to discover faults before the end-user notices a problem. Application monitoring covers not only the individual piece of software but everything surrounding the application in question. Trafikverket wishes to deliver high quality in its applications. At present, developers have little or no insight into how an application performs in a live environment after they have handed over responsibility to operations.
In order to maintain good quality in their applications, they want to explore how application monitoring may help identify the need for changes in applications before major problems occur. In a case study consisting of interviews and document studies, and using situation-based FA/SIMM, the present ways of working are captured, together with the goals and problems expressed in the organization around the development, maintenance, and operation of applications. These are then analyzed to examine in which ways application monitoring would help developers and maintainers, but also operations staff, in their work. The result shows that the problems and goals that are brought forward are partly organizational in nature, and the DevOps way of working is put forward as a possible solution. It also shows that application monitoring could in fact contribute to increased quality in the applications by providing an opportunity to work more proactively.
8

Efficient Implementation of Mesh Generation and FDTD Simulation of Electromagnetic Fields

Hill, Jonathan 06 October 1999 (has links)
This thesis presents an implementation of the Finite Difference Time Domain (FDTD) method on a massively parallel computer system for the analysis of electromagnetic phenomena. In addition, the implementation of an efficient mesh generator is presented. For this research we selected the MasPar system, as it is a relatively low-cost, reliable, high-performance computer system. In this thesis we are primarily concerned with selecting an efficient algorithm for each of the programs written for our application, and with devising clever ways to make the best use of the MasPar system. The thesis places a large emphasis on examining application performance.
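The textbook core of the FDTD method the thesis parallelizes can be sketched as a one-dimensional Yee update loop; this is a serial illustration in normalized units (Courant number 1, free space, soft Gaussian source), not the MasPar implementation:

```python
import math

def fdtd_1d(nz=200, steps=300):
    """Leapfrog E and H field updates on a 1-D grid: each step, H is
    updated from the spatial difference of E, then E from that of H."""
    ez = [0.0] * nz   # electric field samples
    hy = [0.0] * nz   # magnetic field samples, staggered half a cell
    for t in range(steps):
        # Update magnetic field from the curl (difference) of E.
        for k in range(nz - 1):
            hy[k] += ez[k + 1] - ez[k]
        # Update electric field from the curl (difference) of H.
        for k in range(1, nz):
            ez[k] += hy[k] - hy[k - 1]
        # Soft Gaussian pulse source injected near the left boundary.
        ez[1] += math.exp(-((t - 30) ** 2) / 100.0)
    return ez

field = fdtd_1d()   # snapshot of the E field after 300 time steps
```

Each grid point depends only on its immediate neighbours, which is exactly why the update maps so naturally onto a massively parallel SIMD machine like the MasPar: every processing element can update its own cells simultaneously.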
