1 |
How To Present Performance Data to Decision Makers in Healthcare
Jennings, Heather 30 April 2013 (has links)
Healthcare organizations are moving toward the use of dashboards for presenting performance data and away from the use of balanced scorecards, but there is little research that addresses whether dashboards are better than balanced scorecards. This study gathers qualitative and quantitative data from interviews with decision makers (6 directors and 10 managers) from a large healthcare organization. Decision makers were presented with the most commonly used graphic formalisms from the dashboard and the balanced scorecard, a gauge and a tabular format respectively. The presentation contained information about healthcare decision-making scenarios. Neither format affected the decision makers' ultimate decision on whether to take action, and for both display formats decision makers requested more information than was presented to them. However, the gauge format was perceived as easier to understand, as better supporting decision making, and as containing more complete information. Overall, the analysis reveals that 94% of participants preferred the graphic formalisms from a dashboard to the graphic formalisms in the balanced scorecard. This study shows that decision makers prefer dashboards to balanced scorecards when comparing the most common graphic formalisms found in balanced scorecards (tabular format) and dashboards (gauge format). The results are consistent with a move toward greater use of dashboards in healthcare. Theoretical implications of the work are discussed.
|
2 |
Machine learning to detect anomalies in datacenter
Lindh, Filip January 2019 (has links)
This thesis investigates the possibility of using anomaly detection on performance data of virtual servers in a datacenter to detect malfunctioning servers. Using anomaly detection can potentially reduce the time a server is malfunctioning, as the server can be detected and checked before the error has a significant impact. Several approaches and methods were applied and evaluated on one virtual server: the K-nearest neighbor algorithm, the support-vector machine, the K-means clustering algorithm, self-organizing maps, the CPU-memory usage ratio under a Gaussian model, and time series analysis using a neural network and linear regression. The evaluation and comparison of the methods were mainly based on errors reported during the time period in which they were tested: the better the detected anomalies matched the reported errors, the higher the score a method received. It turned out that anomalies in performance data could be linked to real errors in the server to some extent, which makes it possible to use anomaly detection on performance data as a way to detect malfunctioning servers. The simplest method, examining the ratio between memory usage and CPU usage, was the most successful, detecting the most errors; however, its anomalies were often detected only just after the error had been reported. The support-vector machine was more successful at detecting anomalies before they were reported. The assumed proportion of anomalies played a large role, however, and K-nearest neighbor received a higher score when a higher proportion of anomalies was used.
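As a rough illustration of the simplest approach described above, the sketch below fits a Gaussian model to the CPU-to-memory usage ratio of healthy samples and flags new samples whose ratio is unlikely under that model. This is a minimal sketch, not the thesis's implementation; the 3-sigma threshold and the synthetic data are assumptions.

```python
import numpy as np

def fit_ratio_model(cpu: np.ndarray, mem: np.ndarray) -> tuple:
    """Fit a Gaussian (mean, std) to the CPU/memory usage ratio of healthy samples."""
    ratio = cpu / np.clip(mem, 1e-9, None)  # guard against division by zero
    return ratio.mean(), ratio.std()

def flag_anomalies(cpu: np.ndarray, mem: np.ndarray, mean: float, std: float,
                   n_sigma: float = 3.0) -> np.ndarray:
    """Mark samples whose ratio deviates more than n_sigma standard deviations."""
    ratio = cpu / np.clip(mem, 1e-9, None)
    return np.abs(ratio - mean) > n_sigma * std

# Synthetic performance data (percent CPU, percent memory) for illustration only.
rng = np.random.default_rng(0)
mean, std = fit_ratio_model(rng.normal(40, 5, 1000), rng.normal(60, 5, 1000))

cpu_new = np.array([42.0, 95.0, 38.0])
mem_new = np.array([61.0, 20.0, 58.0])
print(flag_anomalies(cpu_new, mem_new, mean, std))  # expected: [False  True False]
```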
|
3 |
Návrh části webové aplikace pro výpočet režijních nákladů / A Design of a Portion of Web Application for Overhead Cost Calculation
Florians, Patrik January 2021 (has links)
The subject of this thesis is the design of a web application for overhead cost calculation, intended to replace the currently used solution, which is considered deprecated. The work is carried out as part of the strategy of SAP SE, the corporation for which the solution is designed. The ambition to develop and improve the company's cloud portfolio should lead to better applications of this type and, in the long run, to a stronger market position for the company and its products. The thesis is divided into three parts. It begins with a description of the theoretical concepts, tools, and principles used in the later chapters. The following chapter analyzes the current state of affairs, illustrating what the current solution looks like and which parts of it are key. The final, third chapter describes the implemented solution and details the key differences from the solution analyzed in chapter 2.
|
4 |
Analýza letových výkonů letounu VUT 081 KONDOR / Flight performance analysis of the airplane VUT 081 KONDOR
Kerndl, Jindřich January 2013 (has links)
The aim of this diploma thesis is to analyse the flight performance of the airplane VUT-081 Kondor. The first part focuses on the determination and estimation of the aerodynamic characteristics of the airplane. Based on these data, the flight performance was calculated and evaluated according to CS-ELSA. The last part is dedicated to a comparison of the flight performance of the VUT-081 Kondor with other similar airplanes.
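As a small example of the kind of calculation such an analysis involves, stall speed follows directly from the lift equation once the aerodynamic characteristics are estimated. The sketch below uses the standard formula with placeholder values; these are illustrative assumptions, not the VUT-081 Kondor's actual parameters.

```python
import math

def stall_speed(mass_kg: float, wing_area_m2: float, cl_max: float,
                rho: float = 1.225) -> float:
    """Stall speed in m/s from L = 0.5 * rho * V^2 * S * CL_max = W."""
    weight_n = mass_kg * 9.81
    return math.sqrt(2 * weight_n / (rho * wing_area_m2 * cl_max))

# Placeholder values for illustration only.
print(round(stall_speed(mass_kg=600, wing_area_m2=13.0, cl_max=1.6), 1))  # ~21.5 m/s
```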
|
5 |
An Investigation of Gas Foil Thrust Bearing Performance and its Influencing Factors
Dickman, Joseph Robert 17 May 2010 (has links)
No description available.
|
6 |
Efficient, Practical Dynamic Program Analyses for Concurrency Correctness
Cao, Man 15 August 2017 (has links)
No description available.
|
7 |
Analysis of investment strategies: a new look at investment returns
Rubio, Jose F 20 December 2013 (has links)
Chapter 1:
Intuition suggests that constrained investment strategies will result in losses due to a limited portfolio allocation. Yet prior research has shown that this is not the case for a particular set of constrained mutual funds, so-called Socially Responsible Investing (SRI) funds. In this paper I show that such assets do face losses in portfolio efficiency due to their limited asset universe. I contribute to the literature by employing two techniques to estimate asset performance. First, I estimate a DEA-based efficiency score that allows for direct comparison between ex-post efficiency rankings, and I test the ex-ante relevance of such scores by including them in asset pricing models. Second, I check whether these results are consistent when comparing the performance of ethical funds based on the alphas of traditional asset pricing models, even after adjusting for coskewness risk. Overall, the results suggest that ethical funds underperform traditional unconstrained investment assets.
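For background on the DEA-based efficiency scores mentioned above, the sketch below computes an input-oriented CCR (constant returns to scale) efficiency score by linear programming. This is the generic textbook formulation, not the author's exact specification; the toy inputs and outputs (volatility as the input, mean return as the output) are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(inputs: np.ndarray, outputs: np.ndarray, unit: int) -> float:
    """Input-oriented CCR efficiency of one decision-making unit (e.g. a fund).

    inputs:  (n_units, n_inputs)  matrix, e.g. risk or expense measures
    outputs: (n_units, n_outputs) matrix, e.g. return measures
    """
    n_units, n_inputs = inputs.shape
    n_outputs = outputs.shape[1]
    c = np.zeros(n_units + 1)          # variables: [theta, lambda_1..lambda_n]
    c[0] = 1.0                         # minimize theta
    A_ub, b_ub = [], []
    for i in range(n_inputs):          # sum_j lambda_j * x_ij <= theta * x_i,unit
        A_ub.append(np.concatenate(([-inputs[unit, i]], inputs[:, i])))
        b_ub.append(0.0)
    for r in range(n_outputs):         # sum_j lambda_j * y_rj >= y_r,unit
        A_ub.append(np.concatenate(([0.0], -outputs[:, r])))
        b_ub.append(-outputs[unit, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n_units + 1))
    return res.x[0]

# Toy example: 4 funds, one input (volatility) and one output (mean return).
x = np.array([[0.10], [0.12], [0.15], [0.20]])
y = np.array([[0.06], [0.08], [0.07], [0.09]])
for j in range(4):
    print(j, round(dea_ccr_efficiency(x, y, j), 3))  # fund 1 scores 1.0 (efficient)
```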
Chapter 2:
Starting after the turn of the millennium, inflation has been persistently higher than the short-term T-Bill rate. Under the traditional view, this implies negative real rates of return, which have become commonplace in the US economy. This paper examines whether an inflation risk discount contained in nominal rates exists and can explain low or negative real rates, using a consumption-based asset pricing model. Evidence suggests that using the traditional Fisher equation to calculate real rates leads to an overestimate of real rates due to a modest inflation risk premium. To achieve non-negative real rates in a consumption-based asset pricing framework, the covariance between consumption growth and inflation innovations would have to be at least thirty times larger than empirically found, and in the opposite direction, for the post-Volcker era. Still, although the post-2000 covariance is positive, which suggests a discount on the risk-free rate, its magnitude is too small to explain the negativity of real rates.
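For reference, the Fisher relation referred to above links nominal rates, expected inflation, and real rates. The sketch below shows the exact form and a variant that strips an inflation risk premium from the nominal rate before applying it; the numbers are illustrative assumptions, not estimates from the paper.

```python
def real_rate_fisher(nominal: float, expected_inflation: float) -> float:
    """Exact Fisher relation: (1 + i) = (1 + r) * (1 + pi), solved for r."""
    return (1 + nominal) / (1 + expected_inflation) - 1

def real_rate_adjusted(nominal: float, expected_inflation: float,
                       inflation_risk_premium: float) -> float:
    """Remove an inflation risk premium from the nominal rate before applying Fisher."""
    return real_rate_fisher(nominal - inflation_risk_premium, expected_inflation)

# Illustrative numbers only.
i, pi, phi = 0.015, 0.025, 0.005
print(round(real_rate_fisher(i, pi), 4))        # -0.0098
print(round(real_rate_adjusted(i, pi, phi), 4)) # -0.0146
```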
JEL Classification: E21, E31
Keywords: Mutual Funds, Performance, Data Envelopment Analysis, Coskewness, Risk Factors, Real Returns, Consumption-Based Asset Pricing Models, Inflation
|
8 |
Exploration of parallel graph-processing algorithms on distributed architectures / Exploration d'algorithmes de traitement parallèle de graphes sur architectures distribuées
Collet, Julien 06 December 2017 (has links)
With the advent of ever-increasing graph datasets in a large number of domains, parallel graph-processing applications deployed on distributed architectures are more and more needed to cope with the growing demand for memory and compute resources. Though large-scale distributed architectures are available, notably in the High-Performance Computing (HPC) domain, the programming and deployment complexity of such graph-processing algorithms, whose parallelization and complexity are highly data-dependent, hampers usability. Moreover, the difficulty of evaluating the performance behavior of these applications complicates the assessment of the relevance of the architecture used. With this in mind, this thesis work deals with the exploration of graph-processing algorithms on distributed architectures, notably using GraphLab, a state-of-the-art graph-processing framework. Two use cases are considered, one from execution-trace analysis and the other from genomic data processing. For each, a parallel implementation is proposed and deployed on several distributed architectures of varying scales. This study highlights operating ranges, which can be leveraged to appropriately select a relevant operating point with respect to the datasets processed and the cluster nodes used. A further study enables a performance comparison of commodity cluster architectures and higher-end compute servers using the two use cases previously developed. This study highlights the particular relevance, in this applicative context, of clustered commodity workstations, which are considerably cheaper and simpler with respect to node architecture, over higher-end systems; the gap widens further when performance is weighted by purchase and operating costs. Then, this thesis work explores how performance studies are helpful in cluster design for graph processing. In particular, studying the throughput performance of a graph-processing system gives fruitful insights for further node-architecture improvements. Moreover, this work shows that a more in-depth performance analysis can lead to guidelines for the appropriate sizing of a cluster for a given workload, paving the way toward resource allocation for graph processing. Finally, hardware improvements for next generations of graph-processing servers are proposed and evaluated. A flash-based victim-swap mechanism is proposed to mitigate the significant performance drop observed when the cluster operates with saturated main memory. Then, the relevance of ARM-based microservers for graph processing is investigated with a port of GraphLab on an NVIDIA TX2-based architecture; the measured performance is encouraging and shows that the drop in raw performance relative to existing architectures is offset by much higher energy efficiency.
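To give a concrete feel for the vertex-centric programming model that frameworks such as GraphLab expose, the sketch below runs PageRank in a gather-apply-scatter (GAS) style on a toy graph. It is a sequential, conceptual illustration under assumed parameter choices; it is not GraphLab's actual API and not one of the thesis's two use cases.

```python
# Minimal, sequential illustration of the gather-apply-scatter style of
# vertex-centric computation popularized by GraphLab, using PageRank.
from collections import defaultdict

def pagerank_gas(edges, damping=0.85, iterations=20):
    out_neighbors = defaultdict(list)
    vertices = set()
    for src, dst in edges:
        out_neighbors[src].append(dst)
        vertices.update((src, dst))
    rank = {v: 1.0 / len(vertices) for v in vertices}
    for _ in range(iterations):
        # Gather: accumulate contributions arriving along in-edges.
        gathered = defaultdict(float)
        for src, dsts in out_neighbors.items():
            share = rank[src] / len(dsts)
            for dst in dsts:
                gathered[dst] += share
        # Apply: update each vertex's value from the gathered sum.
        rank = {v: (1 - damping) / len(vertices) + damping * gathered[v]
                for v in vertices}
        # Scatter: a distributed engine would signal neighbors of changed vertices here.
    return rank

print(pagerank_gas([("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]))
```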
|
9 |
Closing the Gaps in Professional Development: A Tool for School-based Leadership Teams
Sampayo, Sandra 01 January 2015 (has links)
The field of professional learning in education has been studied and added to extensively in the last few decades. Because learning in authentic contexts through professional dialogue has become so important, high-quality, school-based professional learning is vital to building capacity at the school level. Unfortunately, the literature on professional development (PD) does not provide much guidance on how to bridge theory and practice at the school level, creating a gap. With the goal of PD ultimately being to improve teacher performance and student learning, the problem with this gap is that school-level professional development is arbitrarily planned, resulting in variable outcomes. I propose that the reason for this is that schools lack a comprehensive framework or tool that guides the design of a quality professional learning plan. This problem was identified in Orange County Public Schools, and this dissertation in practice aims at developing a solution that accounts for the district's specific contextual needs. My proposed solution is the design of an integrative tool that school leaders can use to guide them through the professional development planning process. The School-based Professional Learning Design Tool incorporates the professional development standards in planning, learning, implementing, and evaluating outlined in the Florida Professional Development System Evaluation Protocol. It also guides leaders in taking an inventory of the culture and context of their school in order to plan PD that will be viable given those considerations. The components of the Tool guide teams through assessing school teacher performance and student achievement data to help identify focus groups; determining gaps in learning through root cause analysis; creating goals aligned to gaps in performance; and selecting strategies for professional learning, follow-up support, and evaluation. The development of the Tool was informed by the extant literature on professional development, organizational theory, state and national standards for professional development, and principles of design. The Tool is to be completed in four phases. Phases one and two, the focus of this paper, include the literature review, organizational assessment, design specifications, and the first iteration of the Tool. In the next phases, the goals are to solicit feedback from an expert panel review, create a complete version of the Tool, and pilot it in elementary schools. Although the development of the Tool through its final phases will refine it considerably, there are limitations that will transcend all iterations. While the Tool incorporates best practices in professional development, the lack of empirical evidence on the effectiveness of specific PD elements in the literature renders this Tool only a best guess at helping schools plan effective professional development. Another limitation is that the Tool is not prescriptive and cannot use school data to make decisions about which strategies to implement. Taking these limitations into consideration, the use of this Tool can significantly impact the quality and effectiveness of professional development in schools.
|
10 |
Big and Small Data for Value Creation and Delivery: Case for Manufacturing Firms
Stout, Blaine David, PhD January 2018 (has links)
No description available.
|