321 |
Kongruens inom nominalfraser : En performansanalys om andraspråkselevers användning av nominalfraser i skriftliga och muntliga former [Agreement within noun phrases: A performance analysis of second-language students' use of noun phrases in written and spoken form]
Ali Ahmed, Faiza, January 2017 (has links)
The purpose of this study is to examine how second-language learners use noun phrases in their written texts and oral speech, and how the two differ from each other. To answer this, the following questions were formulated: What agreement (congruence) errors are found in the noun phrases in the students' texts and oral speeches? Are there any differences in the use of noun phrases between the students' texts and oral speeches? The study is based on performance analysis, in which student texts and transcribed recordings of students' oral speeches are analyzed. The results show that agreement errors in the students' noun phrases are mostly due to the misuse of gender, definite form and number. The results also show that the students made more agreement errors in the written texts, while similar, but fewer, mistakes occurred in the oral speeches. This means that no general conclusions can be drawn about differences between the written texts and the oral speeches.
|
322 |
CLUE: A Cluster Evaluation Tool
Parker, Brandon S., 12 1900 (has links)
Modern high performance computing is dependent on parallel processing systems. Most current benchmarks reveal only high-level computational throughput metrics, which may be sufficient for single-processor systems but can misrepresent true system capability for parallel systems. A new benchmark is therefore proposed. CLUE (Cluster Evaluator) uses a cellular automata algorithm to evaluate the scalability of parallel processing machines. The benchmark also uses algorithmic variations to evaluate individual system components' impact on the overall serial fraction and efficiency. CLUE is not a replacement for other performance-centric benchmarks; rather, it shows the scalability of a system and provides metrics that reveal where overall performance can be improved. CLUE is a new benchmark that enables a better comparison among different parallel systems than existing benchmarks and can diagnose where a particular parallel system can be optimized.
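The abstract does not define how CLUE computes the serial fraction and efficiency; as a rough illustration of how such scalability figures can be derived from measured runtimes, the sketch below uses the standard Karp-Flatt formulation (the function name and sample timings are invented, not taken from CLUE):

```python
# Hypothetical sketch: estimating speedup, efficiency, and the
# experimentally determined (Karp-Flatt) serial fraction from measured
# runtimes. CLUE's actual formulas are not given in the abstract.

def scalability_metrics(t_serial: float, t_parallel: float, p: int):
    """Return (speedup, efficiency, serial_fraction) for p > 1 processors."""
    speedup = t_serial / t_parallel
    efficiency = speedup / p
    # Karp-Flatt metric: the serial fraction implied by the observed speedup.
    serial_fraction = (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)
    return speedup, efficiency, serial_fraction

if __name__ == "__main__":
    # Example: a run that takes 100 s serially and 18 s on 8 nodes.
    s, e, f = scalability_metrics(100.0, 18.0, 8)
    print(f"speedup={s:.2f}, efficiency={e:.2f}, serial fraction={f:.3f}")
```

A rising serial fraction with growing node counts is the classic sign that a system component, not the algorithm itself, limits scalability, which is what the benchmark's algorithmic variations are meant to expose.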
|
323 |
Desempenho de sistemas com dados georeplicados com consistência em momento indeterminado e na linha do tempo / Performance of systems with geo-replicated data with eventual consistency and timeline consistency
Mauricio José de Oliveira de Diana, 21 March 2013 (has links)
Large-scale web systems are distributed among thousands of servers spread over multiple data centers in geographically different locations, operating over wide area networks (WANs). Several techniques are employed to achieve the high levels of scalability required by such systems. One of the main techniques is data replication, which aims to reduce latency, increase throughput and/or increase availability. The main drawback of replication in geo-replicated systems is that it is hard to guarantee consistency between replicas without considerably impacting system performance and availability. System performance is affected by WAN latencies, typically of hundreds of milliseconds, while system availability is affected by failures cutting off communication between replicas.
The more rigid the consistency model provided by a storage system, the simpler the development of the system using it, but the lower its performance and availability. Eventual consistency is one of the more relaxed and most widespread consistency models among geo-replicated systems. This consistency model guarantees that all replicas converge at some unspecified time after writes have stopped. A more rigid and less widespread model is timeline consistency. This consistency model uses a master replica to guarantee that no write conflicts occur. Clients can read the most up-to-date values from the master replica, or they can explicitly choose to read stale values to obtain greater performance or availability. Timeline consistency has lower availability than eventual consistency in particular situations, but there are no data comparing their performance. The main goal of this work was to compare the performance of a geo-replicated storage system using these two consistency models. For each consistency model, experiments were conducted to measure system response time under different workloads and network conditions between data centers. The study shows that a system using timeline consistency achieves performance similar to the same system using eventual consistency over a WAN when access locality is high. This comparison can help developers and system administrators with capacity planning and the development of geo-replicated systems.
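As a rough sketch of the read-side choice that timeline consistency gives clients, consider the toy store below; all class and method names are invented for illustration and do not come from the thesis or from any particular storage system:

```python
# Minimal sketch of timeline consistency, assuming single-master writes.
# Replica is an invented in-memory stand-in for a data-center replica.

class Replica:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

class TimelineStore:
    def __init__(self, master, local_replica):
        self.master = master        # authoritative copy with ordered writes
        self.local = local_replica  # nearby, possibly stale copy

    def write(self, key, value):
        # All writes are serialized at the master, so no write conflicts.
        # Propagation to self.local would happen asynchronously in reality.
        self.master.put(key, value)

    def read_latest(self, key):
        # Strong read: pays the WAN round-trip to the master.
        return self.master.get(key)

    def read_any(self, key):
        # Relaxed read: faster and more available, may return stale data.
        return self.local.get(key)

# Usage: a client explicitly chooses freshness vs. latency per read.
store = TimelineStore(Replica(), Replica())
store.write("counter", 42)
print(store.read_latest("counter"))  # 42
print(store.read_any("counter"))     # possibly stale (None here)
```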
|
324 |
Analýza výkonnosti skupiny podniků / Performance Analysis of a Group of Companies
Blaženec, Jakub, January 2014 (has links)
This master's thesis deals with the performance analysis of a group of companies. It is divided into several parts. The first part covers the theoretical foundations, describing a group of companies, the consolidated financial statements, and financial analysis based on the consolidated financial statements. The second part deals with the practical application of financial analysis to a group of companies. The thesis also contains an analysis of problems and proposals for improving the economic situation of the group.
|
325 |
Análise da eficiência de um método alternativo de integração das equações diferenciais ordinárias de linhas de transmissão [Analysis of the efficiency of an alternative method for integrating the ordinary differential equations of transmission lines]
Fernandes, João Paulo, January 2020 (has links)
Advisor: Sérgio Kurokawa. A transmission line represented by a cascade of π circuits is described by a system of ordinary differential equations that can be solved directly in the time domain through numerical integration methods. At each calculation step, a system of order 2n is solved, n being the number of π circuits. This is the classical solution procedure. A recently proposed alternative method solves the systems of differential equations for lines of up to 5 km represented by at most 333 π circuits. This work investigates the application of this alternative method to lines of greater length and its efficiency in solving the differential equations that represent the transmission line. In the alternative method, each π circuit of the line is solved individually through a system that does not depend on the number of π circuits and whose matrices are of order 2×2. Thus, for n π circuits it is necessary to solve n systems of two differential equations at each instant of time. This procedure is called the accelerated method. The performance of the accelerated method is analyzed through the number of floating-point operations (flops) and through the time needed to compute the state variables along the transmission line. The same procedure is carried out for the classical method, and comparing the results makes it possible to verify the efficiency of the accelerated method for certain transmission line configurations.
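To make the per-segment idea concrete, here is a small sketch of one time step of a π-circuit cascade in which each segment is advanced using only its own two state variables (series current and shunt voltage). The integration scheme (semi-implicit Euler) and all parameter values are illustrative assumptions; the abstract does not specify the numerical method used in the thesis.

```python
import numpy as np

def step_accelerated(i, v, v_src, R, L, G, C, dt):
    """Advance a cascade of n pi circuits by one time step, handling one
    small per-segment system at a time instead of one system of order 2n.
    Semi-implicit Euler (currents first, then voltages) is used here
    purely to keep this sketch numerically stable."""
    n = len(i)
    # Series branch of each segment: di/dt = (v_in - v_k - R*i_k) / L
    for k in range(n):
        v_in = v_src if k == 0 else v[k - 1]
        i[k] += dt * (v_in - v[k] - R * i[k]) / L
    # Shunt branch of each segment: dv/dt = (i_k - i_next - G*v_k) / C
    for k in range(n):
        i_next = 0.0 if k == n - 1 else i[k + 1]
        v[k] += dt * (i[k] - i_next - G * v[k]) / C
    return i, v

# Usage: 100 segments, open-ended line energized by a 1 V step.
# R, L, G, C are invented per-segment parameters, not values from the thesis.
n = 100
i, v = np.zeros(n), np.zeros(n)
for _ in range(2000):
    i, v = step_accelerated(i, v, 1.0, R=0.05, L=1e-6, G=1e-9, C=1e-11, dt=1e-9)
print(f"receiving-end voltage after 2 us: {v[-1]:.3f} V")
```

For n segments this loop performs O(n) work per step, whereas solving the coupled system of order 2n directly costs considerably more per step, which is the source of the flops advantage the abstract describes.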
|
326 |
Visual Analytics for Decision Making in Performance Evaluation
Jieqiong Zhao (8791535), 05 May 2020 (links)
Performance analysis often considers numerous factors contributing to performance, and the relative importance of these factors is evolving based on dynamic conditions and requirements. Investigating large numbers of factors and understanding individual factors' predictability within the ultimate performance are challenging tasks. A visual analytics approach that integrates interactive analysis, novel visual representations, and predictive machine learning models can provide new capabilities to examine performance effectively and thoroughly. Currently, only limited research has been done on the possible applications of visual analytics for performance evaluation. In this dissertation, two specific types of performance analysis are presented: (1) organizational employee performance evaluation and (2) performance improvement of machine learning models with interactive feature selection. Both application scenarios leverage the human-in-the-loop approach to assist the identification of influential factors. For organizational employee performance evaluation, a novel visual analytics system, MetricsVis, is developed to support exploratory organizational performance analysis. MetricsVis incorporates hybrid evaluation metrics that integrate quantitative measurements of observed employee achievements and subjective feedback on the relative importance of these achievements to demonstrate employee performance at and between multiple levels regarding the organizational hierarchy. MetricsVis II extends the original system by including actual supervisor ratings and user-guided rankings to capture preferences from users through derived weights. Comparing user preferences with objective employee workload data enables users to relate user evaluation to historical observations and even discover potential bias. For interactive feature selection and model evaluation, a visual analytics system, FeatureExplorer, allows users to refine and diagnose a model iteratively by selecting features based on their domain knowledge, interchangeable features, feature importance, and the resulting model performance. FeatureExplorer enables users to identify stable, trustable, and credible predictive features that contribute significantly to a prediction model.
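As a rough illustration of what a "hybrid evaluation metric" of this kind can look like, the sketch below combines normalized objective achievement counts with subjective importance weights; the category names, weights, and scoring form are invented, since the abstract does not give the exact model used by MetricsVis:

```python
# Hypothetical sketch of a hybrid evaluation metric: objective
# per-category achievement counts are normalized and combined with
# subjective importance weights (e.g., supplied by a supervisor).

def hybrid_score(achievements, weights, maxima):
    """Weighted sum of normalized per-category achievements, in [0, 1]."""
    total_weight = sum(weights.values())
    return sum(
        (weights[c] / total_weight) * (achievements[c] / maxima[c])
        for c in achievements
    )

# Usage: one employee, three illustrative work categories. The weights
# encode subjective feedback on each category's relative importance.
achievements = {"tasks_completed": 42, "reports_filed": 12, "training_hours": 10}
maxima       = {"tasks_completed": 60, "reports_filed": 20, "training_hours": 40}
weights      = {"tasks_completed": 0.5, "reports_filed": 0.3, "training_hours": 0.2}
print(f"score = {hybrid_score(achievements, weights, maxima):.3f}")
```

Changing the weights interactively and watching the resulting rankings shift is exactly the kind of human-in-the-loop exploration such a system supports.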
|
327 |
Untersuchungen zur weiteren Vervollkommnung der Anschlagtechniken Liegend und Stehend im Biathlonschießen [Investigations into the further perfection of the prone and standing shooting positions in biathlon shooting]
Espig, Nico, 16 June 2014 (links)
The share of shooting performance within overall biathlon performance is becoming ever more important due to the introduction of new, shorter competition disciplines and the increasing density at the top of world-class performance. Besides the general shooting-technique elements of breathing, trigger action, aiming and position, as well as their optimal coordination, body sway and position stability can be named as essential factors influencing shooting technique in biathlon. The objective of this study was to shed light on the complex web of relationships in shooting technique for the prone and standing positions in biathlon. Based on the sport-specific demands of biathlon shooting, the relationships between position stability and position configuration were analyzed for both positions, as a function of anthropometric characteristics (e.g. age and body height) and of the type of preceding exercise load, with regard to keeping movements of the muzzle at the moment of shot release as small as possible, as a prerequisite for reliable hit results. To answer the research questions, a complex shooting measurement station was used, which enables the synchronous recording and analysis of force-time curves at the athlete-rifle contact points, movements of the muzzle, sway of the athlete-rifle system, the load distribution of the support points on the support surface, and angles and angle changes of the shooting position during shot preparation and follow-up as well as over the course of a complete shot series. In an evaluation study conducted over a period of two years, a large number of performance-relevant parameters were identified. In addition to refining the technical model of the sport, norm and reference values were derived as orientation measures for the technique training process. Building on the findings of the evaluation study, a specific intervention study demonstrated ways to increase shooting performance by reducing the corresponding performance deficits.
Table of contents:
List of figures
List of tables
1 Introduction
1.1 Problem statement
1.2 Objectives
1.3 Structure of the thesis
2 Theoretical background
2.1 Structure and development trends of competition performance in biathlon
2.1.1 Skiing speed
2.1.2 Time at the shooting range
2.1.3 Shooting result
2.1.4 Overall biathlon performance
2.2 Structure of performance capacity and diagnostics in biathlon shooting
2.2.1 Demand structure of shooting technique
2.2.2 Position technique
2.2.3 Position configuration
2.2.4 Position stability
2.2.5 Aiming technique
2.2.6 Breathing technique
2.2.7 Trigger technique
2.2.8 Summary − diagnostics of shooting performance in biathlon
2.3 Training structure in biathlon shooting
2.3.1 Sports technique, norm-value orientation and technique training
2.3.2 Technique training in biathlon shooting
3 Research questions and scientific hypotheses
3.1 Research questions
3.2 Research hypotheses
4 Study design
4.1 Sample
4.1.1 Evaluation study
4.1.2 Intervention study
4.2 Approach and procedure
4.2.1 Evaluation study
4.2.2 Intervention study
4.3 Methods
4.3.1 Rifle sensors (biathlon shooting measurement station)
4.3.2 Stabilometry using Footscan® Balance (RS Scan Intl.)
4.3.3 2D movement analysis
4.3.4 Synchronization and merging of the measurement data from the different methods
4.4 Experimental setup
4.5 Statistical data processing
4.6 Critique of the methodology
5 Evaluation study
5.1 Results of the evaluation study
5.1.1 Relationship network of position stability
5.1.2 Relationship network of position configuration
5.2 Discussion of the evaluation study
5.2.1 Discussion of the results on position stability
5.2.2 Discussion of the results on position configuration
6 Intervention study
6.1 Results of the intervention study
6.1.1 Balance and holding training
6.1.2 Measurement-station training
6.2 Discussion of the intervention study
6.2.1 Prone position
6.2.2 Standing position
7 Summary and outlook
7.1 Recommendations for sports practice
7.2 Outlook for future research
Bibliography
Appendix
|
328 |
Comparison and End-to-End Performance Analysis of Parallel Filesystems
Kluge, Michael, 05 September 2011 (links)
This thesis presents a contribution to the field of performance analysis for Input/Output (I/O) related problems, focusing on the area of High Performance Computing (HPC).
Besides the compute nodes, high performance computing systems need a large number of supporting components that add their individual behavior to the overall performance characteristics of the whole system. File systems in such environments, in particular, have their own infrastructure. File operations are typically initiated at the compute nodes and proceed through a deep software stack until the file content arrives at the physical medium. A handful of shortcomings characterize the current state of the art for performance analysis in this area: it lacks system-wide data collection, a comprehensive analysis approach for all collected data, a trace event analysis adapted to I/O-related problems, and methods to compare current with archived performance data.
This thesis proposes to instrument all soft- and hardware layers to enhance the performance analysis of file operations. The additional information can be used to investigate the performance characteristics of parallel file systems. To perform I/O analyses on HPC systems, a comprehensive approach is needed to gather related performance events, examine the collected data and, if necessary, replay relevant parts on different systems. One larger part of this thesis is dedicated to algorithms that reduce the amount of information found in trace files to the level needed for an I/O analysis. This reduction is based on the assumption that for this type of analysis all I/O events, but only a subset of all synchronization events, of a parallel program trace have to be considered. To extract an I/O pattern from an event trace, only those synchronization points are needed that describe dependencies among different I/O requests. Two algorithms are developed to remove negligible events from the event trace.
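As a rough sketch of that reduction assumption, the filter below keeps every I/O event but only those synchronization events that can actually order I/O requests; the event representation is invented for illustration, and the two concrete algorithms developed in the thesis are not reproduced here:

```python
# Hypothetical sketch of trace reduction: keep all I/O events, and keep
# a synchronization event only if some I/O occurs before it and some
# I/O occurs after it among the processes it couples, i.e., only if it
# can describe a dependency between I/O requests. O(n^2) for clarity.

from dataclasses import dataclass

@dataclass
class Event:
    time: float
    process: int
    kind: str              # "io" or "sync"
    partners: tuple = ()   # other processes coupled by a sync event

def reduce_trace(events):
    events = sorted(events, key=lambda e: e.time)
    kept = []
    for idx, e in enumerate(events):
        if e.kind == "io":
            kept.append(e)
            continue
        procs = (e.process, *e.partners)
        before = any(x.kind == "io" and x.process in procs
                     for x in events[:idx])
        after = any(x.kind == "io" and x.process in procs
                    for x in events[idx + 1:])
        if before and after:
            kept.append(e)
    return kept

# Usage: a sync event between processes 0 and 1 that orders two I/O calls.
trace = [Event(1.0, 0, "io"), Event(2.0, 0, "sync", (1,)), Event(3.0, 1, "io")]
print(len(reduce_trace(trace)))  # 3: the sync point is dependency-relevant
```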
Considering the related work on the analysis of parallel file systems, the inclusion of counter data from external sources, e.g. the infrastructure of a parallel file system, has been identified as a major milestone towards a holistic analysis approach. This infrastructure contains a large amount of valuable information that is essential to describe performance effects observed in applications. This thesis presents an approach to collect and subsequently process and store this data. Ways to correctly merge the collected values with application traces are discussed. Here, a revised definition of the term "performance counter" is the first step, followed by a tree-based approach to combine raw values into secondary values. A visualization approach for I/O patterns closes another gap in the analysis process.
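A minimal sketch of such a tree-based combination step might look as follows; the node names and the aggregation choice are illustrative assumptions, not the thesis's actual counter hierarchy:

```python
# Hypothetical sketch: leaves hold raw counters (e.g., per-server bytes
# written), inner nodes derive secondary values from their children.

class CounterNode:
    def __init__(self, name, children=(), combine=sum):
        self.name = name
        self.children = list(children)
        self.combine = combine  # how to derive the secondary value
        self.raw = 0.0          # used when the node is a leaf

    def value(self):
        if not self.children:
            return self.raw
        return self.combine(c.value() for c in self.children)

# Usage: total file-system write volume as the sum of two server counters.
oss0 = CounterNode("oss0.write_bytes"); oss0.raw = 3.2e9
oss1 = CounterNode("oss1.write_bytes"); oss1.raw = 2.8e9
total = CounterNode("fs.write_bytes", [oss0, oss1])
print(total.value())  # 6.0e9
```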
Replaying I/O-related performance events or event patterns can be done by a flexible I/O benchmark. The constraints for the development of such a benchmark are identified, as well as the overall architecture for a prototype implementation.
Finally, different examples demonstrate the usage of the developed methods and show their potential. All examples are real use cases situated on the HRSK research complex and the 100GBit Testbed at TU Dresden. The I/O-related parts of a bioinformatics and a CFD application have been analyzed in depth, and enhancements for both are proposed. An instance of a Lustre file system was deployed and tuned on the 100GBit Testbed through extensive use of external performance counters.
|
329 |
Energy and Design Cost Efficiency for Streaming Applications on Systems-on-Chip
Zhu, Jun, January 2009 (links)
With the increasing capacity of today's integrated circuits, a number of heterogeneous system-on-chip (SoC) architectures in embedded systems have been proposed. In order to achieve energy- and design-cost-efficient streaming applications on these systems, new design space exploration frameworks and performance analysis approaches are required. This thesis considers three state-of-the-art SoC architectures, i.e., multi-processor SoCs (MPSoCs) with network-on-chip (NoC) communication, hybrid CPU/FPGA architectures, and run-time reconfigurable (RTR) FPGAs. The main topic of the author's research is to model and capture the application scheduling, architecture customization, and buffer dimensioning problems, according to the real-time requirements. Since these problems are NP-complete, heuristic algorithms and a constraint programming solver are used to compute solutions.
For NoC-communication-based MPSoCs, an approach to optimize real-time streaming applications with customized processor voltage-frequency levels and memory sizes is presented. A multi-clocked synchronous model of computation (MoC) framework is proposed for heterogeneous timing analysis and energy estimation. Using heuristic search (i.e., greedy and taboo search), the experiments show an energy reduction (up to 21%) without any loss in application throughput compared with an ad-hoc approach.
On hybrid CPU/FPGA architectures, the buffer minimization scheduling of real-time streaming applications is addressed. Based on event models, the problem has been formalized declaratively as constraint-based scheduling and solved by the public domain constraint solver Gecode. Compared with the traditional PAPS method, the proposed method needs significantly smaller buffers (2.4% of PAPS in the best case), while high throughput guarantees can still be achieved.
Furthermore, a novel compile-time analysis approach based on iterative timing phases is proposed for run-time reconfigurations in adaptive real-time streaming applications on RTR FPGAs. Finally, the reconfiguration analysis and design trade-off analysis capabilities of the proposed framework are exemplified with experiments on both example and industrial applications.
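As a loose illustration of the greedy half of such a heuristic search, the sketch below lowers per-processor voltage-frequency levels while a throughput constraint still holds; the cost-model callbacks are placeholders, and the thesis's actual MoC-based timing and energy analysis (and the taboo search) are not reproduced:

```python
# Hypothetical sketch of greedy voltage-frequency selection: repeatedly
# take the single level reduction with the largest energy gain that
# still meets the required throughput.

def greedy_vf_selection(levels, energy, throughput, required):
    """levels: current level index per processor (0 = slowest/lowest power).
    energy(levels) -> float, throughput(levels) -> float."""
    while True:
        best_gain, best_trial = 0.0, None
        for p in range(len(levels)):
            if levels[p] == 0:   # already at the lowest level
                continue
            trial = list(levels)
            trial[p] -= 1        # one step slower and lower power
            if throughput(trial) >= required:
                gain = energy(levels) - energy(trial)
                if gain > best_gain:
                    best_gain, best_trial = gain, trial
        if best_trial is None:
            return levels
        levels = best_trial

# Toy cost model: higher level = faster and more power-hungry.
freqs = [0.5, 1.0, 1.5, 2.0]                        # GHz per level
energy = lambda lv: sum(freqs[l] ** 2 for l in lv)   # roughly ~ V^2 * f
throughput = lambda lv: min(freqs[l] for l in lv)    # bottleneck processor
print(greedy_vf_selection([3, 3, 3], energy, throughput, required=1.0))
```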
|
330 |
Hadoop scalability evaluation for machine learning algorithms on physical machines : Parallel machine learning on computing clusters
Roderus, Jens, Larson, Simon, Pihl, Eric, January 2021 (links)
The amount of available data has allowed the field of machine learning to flourish. But with growing data set sizes comes an increase in algorithm execution times. Cluster computing frameworks provide tools for distributing data and processing power over several computer nodes and allow algorithms to run in feasible time frames when data sets are large. Different cluster computing frameworks come with different trade-offs. In this thesis, the scalability of the execution time of machine learning algorithms running on the Hadoop cluster computing framework is investigated. A recent version of Hadoop and algorithms relevant in industrial machine learning (K-means, latent Dirichlet allocation, and naive Bayes) are used in the experiments. This thesis provides valuable information to anyone choosing between different cluster computing frameworks. The results show everything from moderate scalability to no scalability at all. These results indicate that Hadoop as a framework may have serious restrictions in how well tasks are actually parallelized. Possible scalability improvements could be achieved by modifying the machine learning library algorithms or by Hadoop parameter tuning.
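As a small illustration of how such scalability results can be quantified, the sketch below computes relative speedup and parallel efficiency from execution times measured at several node counts; the timing numbers are invented, not results from this thesis:

```python
# Hypothetical sketch: quantifying the kind of scalability reported
# above. Flat speedup as nodes are added is the "no scalability" case.

times = {1: 3600, 2: 2000, 4: 1300, 8: 1100}  # nodes -> seconds (invented)

base = times[1]
for nodes, t in sorted(times.items()):
    speedup = base / t
    efficiency = speedup / nodes
    print(f"{nodes} nodes: speedup {speedup:.2f}, efficiency {efficiency:.2f}")
```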
|