21 |
Design and Implementation of a Configurable and Cost-Effective Web Benchmark / Liang, Ming-Chang, 29 August 2001
As the WWW has grown rapidly to become the most popular information system on the Internet, web site owners invest heavily in improving the performance of their web servers. With the advent of the 'server farm' architecture, web server performance is no longer limited by a single machine, which has driven further performance gains. All of these factors raise the cost of keeping website performance-measurement programs in step with server performance. In addition, because dynamic web pages and database back ends are now widely deployed in production environments, stateful HTTP requests with user identification (for example, sessions) are increasing rapidly. Traditional web benchmarking methods are out of date and do not support these new transaction models. Moreover, traditional methodologies report only maximum and average values as an overview, which cannot properly describe the performance of websites with many dynamic pages. Together, these problems show that traditional benchmarking methodologies are insufficient for today's technologies.
In this paper, we design and implement a configurable and cost-effective website performance-measurement program that addresses these problems. We introduce the concept of a 'workload' and pre-design a detailed HTTP request table in which the timing, content, and HTTP commands of each request can be assigned. We also adopt a configurable, replaceable design of open modules, dividing the system into a workload generator, a load generator, and a report generator. These modules can even run as three independent programs, which makes the benchmark flexible and adaptable to new technologies without changing the kernel. We further introduce the concept of a 'virtual user' to describe real user behavior: HTTP state and identification are preserved by automatically replying with cookies and assigning a user identification within the same process. To increase efficiency, each load generator can run self-diagnostics, quantify its measurements, and reassign workloads according to the values the system returns. As a result, every load generator does as much work as it can, is not stalled by a slow machine, and mis-measurements caused by overload are prevented. The experimental results show that our design can describe web server performance and load changes over time, and can break them down by request category and URL to show administrators the root causes on a time basis. Overall, our web benchmarking methodology demonstrates its strength over traditional methods.
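The 'virtual user' and request-table ideas in this abstract can be pictured with a short sketch. This is a minimal illustration under stated assumptions, not the thesis's actual implementation: the class name, table layout, and use of the requests library are all hypothetical, but a requests.Session keeps cookies across requests in the same way the abstract describes keeping HTTP state and identification within one process.

```python
import time
import requests

class VirtualUser:
    """Replays a pre-designed HTTP request table while keeping session state."""

    def __init__(self, user_id: str, request_table):
        self.user_id = user_id
        self.request_table = request_table   # [(delay_s, method, url, body), ...]
        self.session = requests.Session()    # cookies persist across requests,
                                             # mirroring the stateful sessions above
        self.timings = []                    # raw material for the report generator

    def run(self):
        for delay_s, method, url, body in self.request_table:
            time.sleep(delay_s)              # honour the pre-assigned schedule
            t0 = time.perf_counter()
            resp = self.session.request(method, url, data=body, timeout=30)
            self.timings.append((url, resp.status_code, time.perf_counter() - t0))
        return self.timings
```

In the architecture the abstract describes, a load generator would run many such users concurrently and hand the collected timings to the report generator.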
|
22 |
A collection of case studies for verification of reservoir simulators / Li, Xue (active 2012), 03 February 2014
A variety of oil recovery improvement techniques have been developed and applied over the productive life of oil reservoirs. Reservoir simulators have a firmly established role in identifying opportunities and selecting the most suitable techniques for optimal improvement in reservoir productivity. This is especially important for reservoirs whose operating and development costs are high, because numerical modeling can simulate an improved-recovery process and evaluate its performance without undertaking field trials. Moreover, rapid developments in modeling give engineers diverse choices, so the need for complete and comprehensive case studies is growing. This study shows the different characteristics of in-house (UTCOMP and GPAS) and commercial simulators and can also validate future implementation and development of models.
The purpose of this thesis is to present a series of case studies with analytical solutions, together with a series of more complicated field case studies with no exact solution, to verify and test the functionality and efficiency of various simulators. The case studies are performed with three reservoir simulators: UTCOMP, GPAS, and CMG. UTCOMP and GPAS were both developed at the Center for Petroleum and Geosystems Engineering at The University of Texas at Austin, and CMG is a commercial reservoir simulator developed by Computer Modelling Group Ltd. The simulators are first applied to twenty case studies with exact solutions; the simulation results are compared with the exact solutions to examine the mathematical formulations and ensure the correctness of the program coding. Then, ten more complicated field-scale case studies are performed. These case studies vary in difficulty and complexity, often featuring heterogeneity, larger numbers of components and wells, and very fine gridblocks.
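The first verification step described here, comparing simulator output against an exact solution, amounts to computing an error norm on a shared grid. A minimal sketch of that comparison, assuming both solutions are sampled as NumPy arrays; the function and tolerance are illustrative and not taken from UTCOMP, GPAS, or CMG:

```python
import numpy as np

def verify_case(simulated, analytical, rel_tol=1e-2):
    """Maximum relative error between a simulated profile and the exact
    solution sampled on the same grid, plus a pass/fail flag."""
    simulated = np.asarray(simulated, dtype=float)
    analytical = np.asarray(analytical, dtype=float)
    scale = np.maximum(np.abs(analytical), 1e-12)  # guard against division by zero
    max_rel_err = float(np.max(np.abs(simulated - analytical) / scale))
    return max_rel_err, max_rel_err <= rel_tol
```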
|
23 |
Risk-adjusted returns on riding the yield curve: A Holographic Neuron Model Approach / Eggenschwyler, Basil, January 2005 (PDF)
Master's thesis, University of St. Gallen, 2005.
|
24 |
Behavioral Selection Criteria and Portfolio Performance / Deijk, Manuel, January 2006 (PDF)
Master's thesis, University of St. Gallen, 2006.
|
25 |
Destination benchmarking: facilities, customer satisfaction and levels of tourist expenditure / Kozak, Metin, January 2000
An extensive review of past benchmarking literature showed that there have been a substantial number of both conceptual and empirical attempts to formulate a benchmarking approach, particularly in the manufacturing industry. However, there has been limited investigation and application of benchmarking in tourism, particularly in tourist destinations. The aim of this research is to further develop the concept of benchmarking for application within tourist destinations and to evaluate its potential impact on destination performance.
A holistic model for destination benchmarking was developed using the three main types of benchmarking: internal, external and generic. Internal benchmarking aimed at improving a destination's internal performance by evaluating quantitative and qualitative measures. External benchmarking used tourist motivation, satisfaction and expenditure scores to investigate how one destination may perform better than another. Generic benchmarking aimed at evaluating and improving a destination's performance using quality and eco-label standards.
This study developed four hypotheses to test the possible measures and methods to be used in carrying out destination benchmarking research and to investigate how cross-cultural differences between tourists and between destinations might influence its formulation and application. These hypotheses and the model were tested utilising both primary and secondary data collection methods. The primary data was collected from eight different groups of British and German tourists visiting Mallorca and Turkey in the summer of 1998 (n=2,582). Findings were analysed using content analysis and a series of statistical procedures such as chi-square, mean difference (t-test), factor analysis and multiple regression. Personal observations were also recorded. The secondary data included statistical figures on tourism in Mallorca and Turkey.
This research provides a discussion of findings and their implications for benchmarking theory and practitioners. The relevance of benchmarking to tourist destinations was examined through the measurement of performance, types of destination benchmarking and taking action. It is apparent that specific measures could be developed for destinations. Both internal and external benchmarking could be applied to benchmarking of destinations. However, in the case of external benchmarking, this research indicated that each destination might have its own regional differentiation and unique characteristics in some respects. Cross-cultural differences between tourists from different countries also need to be considered. Given these findings, it is possible to suggest that this research makes a fresh and innovative contribution to the literature not only on tourism but also on benchmarking. The contribution of this study's findings to knowledge exists in the methods and techniques used to identify the factors influencing selected destination performance variables and in the methods to be employed for comparison between the two destinations. Caution should be used in generalising the results to apply to other destinations.
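One of the statistical procedures named above, the mean-difference t-test between destinations, can be sketched briefly. The numbers below are hypothetical, not the study's data, and Welch's unequal-variance form is one common choice among several:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mallorca = rng.normal(4.1, 0.6, 300)  # hypothetical 1-5 satisfaction scores
turkey = rng.normal(3.9, 0.7, 300)

# Welch's t-test does not assume equal variances between the two samples.
t_stat, p_value = stats.ttest_ind(mallorca, turkey, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```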
|
26 |
Earnings Management to Achieve the Peer Performance Benchmark / Yi, Sheng, 16 June 2016
In addition to the three extensively researched earnings thresholds, avoiding earnings declines, avoiding negative earnings, and avoiding negative earnings surprises (Burgstahler and Dichev 1997; Degeorge, Patel, and Zeckhauser 1999), peer performance is a threshold that is often mentioned in news reports, compensation contracts, and analysts' reports, yet largely ignored in academic research. I therefore examine whether firms manage earnings to achieve peer performance. First, I examine accruals-based earnings management to achieve peer performance. The empirical results show that firms exhibit more income-increasing accruals management in the current year in the following situations: 1) when a firm's prior-year performance is below that of its peer group; 2) when a firm's average performance over the prior two years is below that of its peer group; and 3) when a firm's expected performance is below its peer group's expected performance. In addition, firms whose cumulative performance through the first three quarters of the fiscal year is lower than that of their peer group exhibit more upward accruals management in the fourth quarter. Second, I investigate real activities manipulation to achieve peer performance. The empirical results show that firms exhibit more income-increasing real activities manipulation in the current year in the following situations: 1) when a firm's prior-year performance is below that of its peer group; and 2) when a firm's average performance over the prior two years is below that of its peer group. Third, firms that are under pressure to achieve peer performance benchmarks tend to restate financial statements in subsequent years. Specifically, firms in the following four situations are more likely to restate current earnings in the future: 1) the firm's prior-year performance is below that of its peer group; 2) the firm's average performance over the prior two years is below that of its peer group; 3) the firm's expected performance is below that of its peer group; and 4) the firm's cumulative performance for the first three fiscal quarters is below that of its peer group. The influence of peer performance on earnings management behavior implies that relative performance evaluation can induce income-increasing earnings management and subsequent restatements.
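The abstract does not state which accruals model is used; a standard proxy in this literature is the modified Jones (1991) model, in which discretionary accruals are the residuals of a cross-sectional regression. The sketch below is written under that assumption, with illustrative column names rather than the study's actual variable definitions:

```python
import pandas as pd
import statsmodels.api as sm

def discretionary_accruals(df: pd.DataFrame) -> pd.Series:
    # Modified Jones model: regress scaled total accruals on the inverse of
    # lagged assets, the change in cash revenue (revenue change net of the
    # receivables change), and gross PPE, all deflated by lagged total assets.
    # Residuals proxy for the discretionary (managed) part of accruals.
    X = pd.DataFrame({
        "inv_assets": 1.0 / df["lag_assets"],
        "d_cash_rev": (df["d_revenue"] - df["d_receivables"]) / df["lag_assets"],
        "ppe":        df["ppe_gross"] / df["lag_assets"],
    })
    y = df["total_accruals"] / df["lag_assets"]
    fit = sm.OLS(y, sm.add_constant(X)).fit()  # one common specification
    return fit.resid   # income-increasing management shows up as positive residuals
```

In practice the regression is usually estimated by industry-year group, and positive residuals for firms lagging their peer benchmark would be consistent with the behavior the abstract documents.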
|
27 |
Benchmarking of G4STORK for the Coolant Void Reactivity of the Super Critical Water Reactor Design / Ford, Wesley, January 2016
The objective of this thesis was to validate G4STORK for use in investigating the SCWR lattice cell. MCNP6 was chosen as the program against which the methodology of G4STORK would be validated. Over multiple steps, the methodology of G4STORK was matched to that of MCNP6 (described in Section 3.4). After each step, the outputs of the two programs were compared, allowing us to pinpoint where and why discrepancies arose. At the end of this process, we showed that when G4STORK used the same assumptions as MCNP6, it produced similar results (Section 4.1.4). The results of G4STORK simulating the SCWR lattice cell with its more accurate default methodology were then compared to those of MCNP6 (Section 4.2.1). Large differences appeared in the transient cases because of the inaccurate assumptions MCNP6 makes there. We concluded that, despite minor discrepancies between the results of MCNP and G4STORK in some cases, G4STORK remains the theoretically more accurate method for simulating lattice cell cases such as these, owing to MCNP's use of the generational method. Thesis: Master of Applied Science (MASc)
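Two quantities implicit in this abstract can be written down directly: the coolant void reactivity computed from two multiplication factors, and a simple agreement measure between two Monte Carlo k-eff estimates given their statistical uncertainties. A minimal sketch; the numerical values are illustrative, not the thesis's results:

```python
import math

def coolant_void_reactivity(k_cooled: float, k_voided: float) -> float:
    # Reactivity rho = (k - 1) / k; CVR is the reactivity change on voiding.
    return (k_voided - 1.0) / k_voided - (k_cooled - 1.0) / k_cooled

def keff_agreement_z(k1: float, s1: float, k2: float, s2: float) -> float:
    # Difference of two Monte Carlo k-eff estimates in units of their
    # combined standard deviation; |z| below roughly 2-3 is usually
    # read as statistical agreement between the codes.
    return (k1 - k2) / math.sqrt(s1 ** 2 + s2 ** 2)

print(coolant_void_reactivity(1.1200, 1.1450))           # illustrative values
print(keff_agreement_z(1.1450, 0.0003, 1.1438, 0.0004))
```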
|
28 |
Performance Investigations of JVM Implementations in Containerized Environments, Using Podman as an Example / Wanck, Cedric, 29 October 2024
This bachelor's thesis focuses on investigating the performance of the Java Virtual Machine (JVM), particularly in containerized environments. The motivation for the work, its objectives, and the methodological approach, including the tools and methods used, are laid out in Chapter 1.
Chapter 2 explains the theoretical foundations needed to understand the performance investigations in this thesis. This includes an introduction to program performance and its measurement, the fundamentals of container technologies, and a detailed examination of the structure of the Java Virtual Machine and of the JVM implementations used in this work. The tools and methods employed and the current state of research in this area are also presented.
Chapter 3 describes the experiment in detail. It gives an in-depth look at the research and decision processes that led to the performance investigations, including the development of the benchmarks and the expected outcomes. The implementation of the application, the containerization of the JVMs, and the steps needed to run and measure the benchmarks are also explained.
Chapter 4 presents the measured container image sizes and startup times as well as the results of the implemented benchmarks. The data are analyzed and interpreted with respect to various performance aspects, with the aim of showing the advantages and disadvantages of the different JVM implementations and supporting design decisions. A sketch of one such startup-time measurement appears below, after the table of contents.
Finally, Chapter 5 summarizes the results of the thesis, discusses the conditions under which they are valid, points out possible limitations of the investigation, and gives an outlook on potential future research.
Table of contents:
List of Figures
List of Tables
Listings
1 Introduction
1.1 Motivation
1.2 Objectives
1.3 Methodological Approach
2 Theoretical Foundations
2.1 Program Performance
2.2 Containers and Podman
2.3 The JVM and Its Structure
2.3.1 Runtime Data Areas
2.3.2 Execution Engine
2.4 Differences Between JVM Implementations
2.4.1 HotSpot
2.4.2 GraalVM
2.4.3 Zulu
2.4.4 OpenJ9
2.5 Tools and Methods
2.6 State of the Art
3 Experiment
3.1 Planning
3.1.1 Background Research
3.1.2 Benchmark Design
3.1.3 Expectations
3.2 Implementation
3.3 Execution and Measurement
4 Evaluation and Comparison
5 Conclusion
Bibliography
A Source Code and Benchmark Results
A.1 Electronic Data Medium
A.2 Matrix Multiplication Source Code
A.3 Excerpts from the GC Benchmark Source Code
A.4 Excerpts from the Quicksort Benchmark Source Code
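As referenced in the abstract above, one of the simplest measurements in such an experiment is the cold-start time of a containerized JVM under Podman. A minimal sketch under stated assumptions: the image names and the trivial `java -version` workload are illustrative stand-ins, not the thesis's actual images or benchmark application.

```python
import statistics
import subprocess
import time

IMAGES = [  # illustrative image choices, not necessarily those of the thesis
    "docker.io/library/eclipse-temurin:21",
    "docker.io/azul/zulu-openjdk:21",
]

def cold_start_seconds(image: str, runs: int = 10) -> float:
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        # Launch a throwaway container and wait for the JVM to start and exit.
        subprocess.run(
            ["podman", "run", "--rm", image, "java", "-version"],
            check=True, capture_output=True,
        )
        samples.append(time.perf_counter() - t0)
    return statistics.mean(samples)

for image in IMAGES:
    print(image, f"{cold_start_seconds(image):.2f} s")
```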
|
30 |
Serverless Computing as Function-as-a-Service: Performance Differences Between GCP, Azure and AWS / Kristiansson, Albin, January 2022
The pace of digitalization is ever-increasing. To fill society's need for digitalization, a digital workforce is needed, as well as the infrastructure to support said workforce. In the wake of digitalization, cloud computing and cloud providers have become an integrated part of software production. An abstraction layer that builds on top of cloud computing has gained traction over the last couple of years: serverless computing. This is an abstraction layer that cloud providers offer, which takes away the responsibility of scaling and maintaining servers.
This study constructs a framework to benchmark the performance of serverless infrastructure on three large cloud providers. The framework is a grey-box implementation of a recursive algorithm that calculates the 45th number in a Fibonacci series. The algorithm is tested in Python, Java and NodeJS, on the cloud providers Google Cloud Platform, Amazon Web Services and Microsoft Azure. The purpose of the study is to show any differences in execution time and memory consumption, for the given algorithm, on all three platforms and between the programming languages.
The study shows that there are statistically significant differences in execution time as well as memory consumption, for all coding languages, between all three platforms. The biggest difference is observed for NodeJS, followed by Java and lastly Python. At an aggregated level, the differences in memory consumption are greater than those in execution time.
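The benchmark at the core of this study, a naive recursive Fibonacci, is easy to sketch. The version below is illustrative rather than the thesis's deployed function handler: it uses a smaller n so the demo finishes quickly (the study itself computed the 45th number inside each FaaS runtime), and tracemalloc tracks Python heap allocations only, a rough stand-in for the memory metric a cloud platform would report.

```python
import time
import tracemalloc

def fib(n: int) -> int:
    # Deliberately naive recursion: the point is CPU load, not efficiency.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

tracemalloc.start()
t0 = time.perf_counter()
result = fib(35)  # the study used the 45th number; 35 keeps this demo quick
elapsed = time.perf_counter() - t0
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"fib(35) = {result}, {elapsed:.2f}s, peak traced memory {peak} bytes")
```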
|