191

The theory and practice of benchmarking: Then and now

Yasin, Mahmoud M. 01 August 2002 (has links)
The literature on benchmarking practice and theory from 1986 to 2000 was reviewed. The earlier stages of benchmarking development stressed a process and/or activity orientation. More recently, however, the scope of benchmarking appears to have expanded to include strategies and systems. Despite these advances, the field of benchmarking still suffers from a lack of theoretical development, which is badly needed to guide its multi-faceted applications.
192

Construction Project Benchmarking in the U.S. Army Corps of Engineers

LaBarre, Philip Samuel 11 May 2013 (has links)
The construction industry is unique and faces many challenges, and managing claims can be one of the greatest. Construction projects are increasingly influenced by factors that lead to claims; the literature review highlighted several of these, including safety issues, design errors, delays, and changes. The review also presented studies in performance measurement and benchmarking as a way to mitigate these factors. This research presents the results of a benchmarking study used to improve contractors performing work for the Army Corps of Engineers, Vicksburg District. Forty randomly selected construction contractors were analyzed. Five performance elements were identified for measuring each contractor, and each contractor was evaluated against these elements on a five-point scale. The results indicate that benchmarking is an effective tool for improving performance and mitigating the causes of claims.
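As a rough illustration of the scoring scheme described above (five performance elements, each rated on a five-point scale), the sketch below aggregates ratings into a contractor score and an overall benchmark. The element names, weights, and values are assumptions for illustration only, not taken from the study.

```python
from statistics import mean

# Hypothetical performance elements; the study identifies five elements but
# does not list them here, so these names are illustrative only.
ELEMENTS = ["safety", "schedule", "quality", "change_management", "claims_avoidance"]

def contractor_score(ratings: dict) -> float:
    """Average a contractor's five-point ratings (1 = poor, 5 = excellent)."""
    for element in ELEMENTS:
        if not 1 <= ratings[element] <= 5:
            raise ValueError(f"{element} rating must be on the 1-5 scale")
    return mean(ratings[element] for element in ELEMENTS)

# Example: two contractors' evaluations and the benchmark across contractors.
contractors = {
    "Contractor A": {"safety": 4, "schedule": 3, "quality": 5,
                     "change_management": 3, "claims_avoidance": 4},
    "Contractor B": {"safety": 2, "schedule": 4, "quality": 3,
                     "change_management": 2, "claims_avoidance": 3},
}
scores = {name: contractor_score(r) for name, r in contractors.items()}
benchmark = mean(scores.values())  # district-wide average to compare against
print(scores, benchmark)
```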
193

Eye-Tracking over Source Code : A Benchmarking Extension to Evaluate the Accuracy of EyesOnTheCode

Kyrk, John, Gagas Piechowiak, Sylwia January 2023 (has links)
Context: This report covers the development of a benchmarking extension to the eye-tracking software "EyesOnTheCode", which utilizes WebGazer. The extension makes it possible to benchmark the software and lets researchers quantify external factors' impact on eye-tracking. Objective: To create a benchmarking extension to EyesOnTheCode that measures the accuracy of its readings, and additionally a data analysis tool that assists researchers in interpreting the data more easily. Approach: The benchmarking software displays a moving object on the screen that the user follows with their gaze. The software records data throughout the benchmarking session that can then be analyzed with the data analysis tool, which can generate charts for research purposes. Results: The use of reading glasses increased the Euclidean distance by 44.55 pixels, a decrease in accuracy of 20.5 percent compared with the control test. A decrease in web camera resolution from 1920 by 1080 to 1280 by 720 pixels reduced the accuracy by an average of 77.5 pixels, or roughly 35.7 percent. Increasing the web camera resolution from 1920 by 1080 to 3840 by 2160 pixels decreased the Euclidean distance by 27.66 pixels, increasing accuracy by roughly 12.7 percent. Conclusion: Data collected from the software indicates that using reading glasses and a low web camera resolution negatively impacts eye-tracking. Further work and more test samples are needed to verify the data, but the benchmarking software shows promising results. More features may need to be implemented to make the software more usable for researchers.
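The accuracy figures above are Euclidean distances between estimated gaze points and the position of the moving on-screen target. As a minimal sketch of that kind of computation (not the extension's actual code or data format), the following assumes gaze samples and target positions have already been paired by timestamp:

```python
import math

def euclidean_px(gaze, target):
    """Pixel distance between an estimated gaze point and the target position."""
    return math.hypot(gaze[0] - target[0], gaze[1] - target[1])

def mean_distance(samples):
    """samples: list of ((gaze_x, gaze_y), (target_x, target_y)) pairs,
    already matched by timestamp."""
    distances = [euclidean_px(g, t) for g, t in samples]
    return sum(distances) / len(distances)

# Comparing a control run against a run with reading glasses, as in the study
# (coordinates are made up for illustration):
control = mean_distance([((410, 300), (400, 310)), ((620, 505), (600, 500))])
glasses = mean_distance([((450, 340), (400, 310)), ((660, 540), (600, 500))])
change_pct = (glasses - control) / control * 100  # relative accuracy loss
print(f"control {control:.1f}px, glasses {glasses:.1f}px, change {change_pct:+.1f}%")
```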
194

Measuring, Rating, and Predicting the Energy Efficiency of Servers / Messung, Bewertung und Vorhersage von Serverenergieeffizienz

von Kistowski, Jóakim Gunnarsson January 2019 (has links) (PDF)
Energy efficiency of computing systems has become an increasingly important issue over the last decades. In 2015, data centers were responsible for 2% of the world's greenhouse gas emissions, which is roughly the same as the amount produced by air travel. In addition to these environmental concerns, power consumption of servers in data centers results in significant operating costs, which increase by at least 10% each year. To address this challenge, the U.S. EPA and other government agencies are considering the use of novel measurement methods in order to label the energy efficiency of servers. The energy efficiency and power consumption of a server is subject to a great number of factors, including, but not limited to, hardware, software stack, workload, and load level. This huge number of influencing factors makes measuring and rating of energy efficiency challenging. It also makes it difficult to find an energy-efficient server for a specific use-case. Among others, server provisioners, operators, and regulators would profit from information on the servers in question and on the factors that affect those servers' power consumption and efficiency. However, we see a lack of measurement methods and metrics for energy efficiency of the systems under consideration. Even assuming that a measurement methodology existed, making decisions based on its results would be challenging. Power prediction methods that make use of these results would aid in decision making. They would enable potential server customers to make better purchasing decisions and help operators predict the effects of potential reconfigurations. Existing energy efficiency benchmarks cannot fully address these challenges, as they only measure single applications at limited sets of load levels. In addition, existing efficiency metrics are not helpful in this context, as they are usually a variation of the simple performance per power ratio, which is only applicable to single workloads at a single load level. Existing data center efficiency metrics, on the other hand, express the efficiency of the data center space and power infrastructure, not focusing on the efficiency of the servers themselves. Power prediction methods for not-yet-available systems that could make use of the results provided by a comprehensive power rating methodology are also lacking. Existing power prediction models for hardware designers have a very fine level of granularity and detail that would not be useful for data center operators. This thesis presents a measurement and rating methodology for energy efficiency of servers and an energy efficiency metric to be applied to the results of this methodology. We also design workloads, load intensity and distribution models, and mechanisms that can be used for energy efficiency testing. Based on this, we present power prediction mechanisms and models that utilize our measurement methodology and its results for power prediction. Specifically, the six major contributions of this thesis are: We present a measurement methodology and metrics for energy efficiency rating of servers that use multiple, specifically chosen workloads at different load levels for a full system characterization. We evaluate the methodology and metric with regard to their reproducibility, fairness, and relevance. We investigate the power and performance variations of test results and show fairness of the metric through a mathematical proof and a correlation analysis on a set of 385 servers. 
We evaluate the metric's relevance by showing the relationships that can be established between metric results and third-party applications. We create models and extraction mechanisms for load profiles that vary over time, as well as load distribution mechanisms and policies. The models are designed to be used to define arbitrary dynamic load intensity profiles that can be leveraged for benchmarking purposes. The load distribution mechanisms place workloads on computing resources in a hierarchical manner. Our load intensity models can be extracted in less than 0.2 seconds and our resulting models feature a median modeling error of 12.7% on average. In addition, our new load distribution strategy can save up to 10.7% of power consumption on a single server node. We introduce an approach to create small-scale workloads that emulate the power consumption-relevant behavior of large-scale workloads by approximating their CPU performance counter profile, and we introduce TeaStore, a distributed, micro-service-based reference application. TeaStore can be used to evaluate power and performance model accuracy, elasticity of cloud auto-scalers, and the effectiveness of power saving mechanisms for distributed systems. We show that we are capable of emulating the power consumption behavior of realistic workloads with a mean deviation less than 10% and down to 0.2 watts (1%). We demonstrate the use of TeaStore in the context of performance model extraction and cloud auto-scaling also showing that it may generate workloads with different effects on the power consumption of the system under consideration. We present a method for automated selection of interpolation strategies for performance and power characterization. We also introduce a configuration approach for polynomial interpolation functions of varying degrees that improves prediction accuracy for system power consumption for a given system utilization. We show that, in comparison to regression, our automated interpolation method selection and configuration approach improves modeling accuracy by 43.6% if additional reference data is available and by 31.4% if it is not. We present an approach for explicit modeling of the impact a virtualized environment has on power consumption and a method to predict the power consumption of a software application. Both methods use results produced by our measurement methodology to predict the respective power consumption for servers that are otherwise not available to the person making the prediction. Our methods are able to predict power consumption reliably for multiple hypervisor configurations and for the target application workloads. Application workload power prediction features a mean average absolute percentage error of 9.5%. Finally, we propose an end-to-end modeling approach for predicting the power consumption of component placements at run-time. The model can also be used to predict the power consumption at load levels that have not yet been observed on the running system. We show that we can predict the power consumption of two different distributed web applications with a mean absolute percentage error of 2.2%. In addition, we can predict the power consumption of a system at a previously unobserved load level and component distribution with an error of 1.2%. The contributions of this thesis already show a significant impact in science and industry. The presented efficiency rating methodology, including its metric, have been adopted by the U.S. 
EPA in the latest version of the ENERGY STAR Computer Server program. They are also being considered by additional regulatory agencies, including the EU Commission and the China National Institute of Standardization. In addition, the methodology's implementation and the underlying methodology itself have already found use in several research publications. Regarding future work, we see a need for new workloads targeting specialized server hardware. At the moment, we are witnessing a shift in execution hardware to specialized machine learning chips, general purpose GPU computing, FPGAs being embedded into compute servers, etc. To ensure that our measurement methodology remains relevant, workloads covering these areas are required. Similarly, power prediction models must be extended to cover these new scenarios. / In den vergangenen Jahrzehnten hat die Energieeffizienz von Computersystemen stark an Bedeutung gewonnen. Bereits 2015 waren Rechenzentren für 2% der weltweiten Treibhausgasemissionen verantwortlich, was mit der durch den Flugverkehr verursachten Treibhausgasmenge vergleichbar ist. Dabei wirkt sich der Stromverbrauch von Rechenzentren nicht nur auf die Umwelt aus, sondern verursacht auch erhebliche, jährlich um mindestens 10% steigende, Betriebskosten. Um sich diesen Herausforderungen zu stellen, erwägen die U.S. EPA und andere Behörden die Anwendung von neuartigen Messmethoden, um die Energieeffizienz von Servern zu bestimmen und zu zertifizieren. Die Energieeffizienz und der Stromverbrauch eines Servers wird von vielen verschiedenen Faktoren, u.a. der Hardware, der zugrundeliegenden Ausführungssoftware, der Arbeitslast und der Lastintensität, beeinflusst. Diese große Menge an Einflussfaktoren führt dazu, dass die Messung und Bewertung der Energieeffizienz herausfordernd ist, was die Auswahl von energieeffizienten Servern für konkrete Anwendungsfälle erheblich erschwert. Informationen über Server und ihre Energieeffizienz bzw. ihren Stromverbrauch beeinflussenden Faktoren wären für potentielle Kunden von Serverhardware, Serverbetreiber und Umweltbehörden von großem Nutzen. Im Allgemeinen mangelt es aber an Messmethoden und Metriken, welche die Energieeffizienz von Servern in befriedigendem Maße erfassen und bewerten können. Allerdings wäre es selbst unter der Annahme, dass es solche Messmethoden gäbe, dennoch schwierig Entscheidungen auf Basis ihrer Ergebnisse zu fällen. Um derartige Entscheidungen zu vereinfachen, wären Methoden zur Stromverbrauchsvorhersage hilfreich, um es potentiellen Serverkunden zu ermöglichen bessere Kaufentscheidungen zu treffen und Serverbetreibern zu helfen, die Auswirkungen möglicher Rekonfigurationen vorherzusagen. Existierende Energieeffizienzbenchmarks können diesen Herausforderungen nicht vollständig begegnen, da sie nur einzelne Anwendungen bei wenigen Lastintensitätsstufen ausmessen. Auch sind die vorhandenen Energieeffizienzmetriken in diesem Kontext nicht hilfreich, da sie normalerweise nur eine Variation des einfachen Verhältnisses von Performanz zu Stromverbrauch darstellen, welches nur auf einzelne Arbeitslasten bei einer einzigen gemessenen Lastintensität angewandt werden kann. Im Gegensatz dazu beschreiben die existierenden Rechenzentrumseffizienzmetriken lediglich die Platz- und Strominfrastruktureffizienz von Rechenzentren und bewerten nicht die Effizienz der Server als solche. 
Methoden zur Stromverbrauchsvorhersage noch nicht für Kunden verfügbarer Server, welche die Ergebnisse einer ausführlichen Stromverbrauchsmessungs- und Bewertungsmethodologie verwenden, gibt es ebenfalls nicht. Stattdessen existieren Stromverbrauchsvorhersagemethoden und Modelle für Hardwaredesigner und Hersteller. Diese Methoden sind jedoch sehr feingranular und erfordern Details, welche für Rechenzentrumsbetreiber nicht verfügbar sind, sodass diese keine Vorhersage durchführen können. In dieser Arbeit werden eine Energieeffizienzmess- und Bewertungsmethodologie für Server und Energieeffizienzmetriken für diese Methodologie vorgestellt. Es werden Arbeitslasten, Lastintensitäten und Lastverteilungsmodelle und -mechanismen, die für Energieeffizienzmessungen und Tests verwendet werden können, entworfen. Darauf aufbauend werden Mechanismen und Modelle zur Stromverbrauchsvorhersage präsentiert, welche diese Messmethodologie und die damit produzierten Ergebnisse verwenden. Die sechs Hauptbeiträge dieser Arbeit sind: Eine Messmethodologie und Metriken zur Energieeffizienzbewertung von Servern, die mehrere, verschiedene Arbeitslasten unter verschiedenen Lastintensitäten ausführt, um die beobachteten Systeme vollständig zu charakterisieren. Diese Methodologie wird im Bezug auf ihre Wiederholbarkeit, Fairness und Relevanz evaluiert. Es werden die Stromverbrauchs- und Performanzvariationen von wiederholten Methodologieausführungen untersucht und die Fairness der Methodologie wird durch mathematische Beweise und durch eine Korrelationsanalyse anhand von Messungen auf 385 Servern bewertet. Die Relevanz der Methodologie und der Metrik wird gezeigt, indem Beziehungen zwischen Metrikergebnissen und der Energieeffizienz von anderen Serverapplikationen untersucht werden. Modelle und Extraktionsverfahren für sich mit der Zeit verändernde Lastprofile, sowie Lastverteilungsmechanismen und -regeln. Die Modelle können dazu verwendet werden, beliebige Lastintensitätsprofile, die zum Benchmarking verwendet werden können, zu entwerfen. Die Lastverteilungsmechanismen, hingegen, platzieren Arbeitslasten in hierarchischer Weise auf Rechenressourcen. Die Lastintensitätsmodelle können in weniger als 0,2 Sekunden extrahiert werden, wobei die jeweils resultierenden Modelle einen durchschnittlichen Medianmodellierungsfehler von 12,7% aufweisen. Zusätzlich dazu kann die neue Lastverteilungsstrategie auf einzelnen Servern zu Stromverbrauchseinsparungen von bis zu 10,7% führen. Ein Ansatz um kleine Arbeitslasten zu erzeugen, welche das Stromverbrauchsverhalten von größeren, komplexeren Lasten emulieren, indem sie ihre CPU Performance Counter-Profile approximieren sowie den TeaStore: Eine verteilte, auf dem Micro-Service-Paradigma basierende Referenzapplikation. Der TeaStore kann verwendet werden, um Strom- und Performanzmodellgenauigkeit, Elastizität von Cloud Autoscalern und die Effektivität von Stromsparmechanismen in verteilten Systemen zu untersuchen. Das Arbeitslasterstellungsverfahren kann das Stromverbrauchsverhalten von realistischen Lasten mit einer mittleren Abweichung von weniger als 10% und bis zu einem minimalen Fehler von 0,2 Watt (1%) nachahmen. Die Anwendung des TeaStores wird durch die Extraktion von Performanzmodellen, die Anwendung in einer automatisch skalierenden Cloudumgebung und durch eine Demonstration der verschiedenen möglichen Stromverbräuche, die er auf Servern verursachen kann, gezeigt. 
Eine Methode zur automatisierten Auswahl von Interpolationsstrategien im Bezug auf Performanz und Stromverbrauchscharakterisierung. Diese Methode wird durch einen Konfigurationsansatz, der die Genauigkeit der auslastungsabhängigen Stromvorhersagen von polynomiellen Interpolationsfunktionen verbessert, erweitert. Im Gegensatz zur Regression kann der automatisierte Interpolationsmethodenauswahl- und Konfigurationsansatz die Modellierungsgenauigkeit mit Hilfe eines Referenzdatensatzes um 43,6% verbessern und kann selbst ohne diesen Referenzdatensatz eine Verbesserung von 31,4% erreichen. Einen Ansatz, der explizit den Einfluss von Virtualisierungsumgebungen auf den Stromverbrauch modelliert und eine Methode zur Vorhersage des Stromverbrauches von Softwareapplikationen. Beide Verfahren nutzen die von der in dieser Arbeit vorgegestellten Stromverbrauchsmessmethologie erzeugten Ergebnisse, um den jeweiligen Stromverbrauch von Servern, die den Vorhersagenden sonst nicht zur Verfügung stehen, zu ermöglichen. Die vorgestellten Verfahren können den Stromverbrauch für verschiedene Hypervisorkonfigurationen und für Applikationslasten zuverlässig vorhersagen. Die Vorhersage des Stromverbrauchs von Serverapplikationen erreicht einen mittleren absoluten Prozentfehler von 9,5%. Ein Modellierungsansatz zur Stromverbrauchsvorhersage für Laufzeitplatzierungsentscheidungen von Softwarekomponenten, welcher auch dazu verwendet werden kann den Stromverbrauch für bisher nicht beobachtete Lastintensitäten auf dem laufenden System vorherzusagen. Der Modellierungsansatz kann den Stromverbrauch von zwei verschiedenen, verteilten Webanwendungen mit einem mittleren absoluten Prozentfehler von 2,2% vorhersagen. Zusätzlich kann er den Stromverbrauch von einem System bei einer in der Vergangenheit nicht beobachteten Lastintensität und Komponentenverteilung mit einem Fehler von 1,2% vorhersagen. Die Beiträge in dieser Arbeit haben sich bereits signifikant auf Wissenschaft und Industrie ausgewirkt. Die präsentierte Energieeffizienzbewertungsmethodologie, inklusive ihrer Metriken, ist von der U.S. EPA in die neueste Version des ENERGY STAR Computer Server-Programms aufgenommen worden und wird zurzeit außerdem von weiteren Behörden, darunter die EU Kommission und die Nationale Chinesische Standardisierungsbehörde, in Erwägung gezogen. Zusätzlich haben die Implementierung der Methodologie und die zugrundeliegende Methodologie bereits Anwendung in mehreren wissenschaftlichen Arbeiten gefunden. In Zukunft werden im Rahmen von weiterführenden Arbeiten neue Arbeitslasten erstellt werden müssen, um die Energieeffizienz von spezialisierter Hardware zu untersuchen. Zurzeit verändert sich die Server-Rechenlandschaft in der Hinsicht, dass spezialisierte Ausführungseinheiten, wie Chips zum maschinellen Lernen, GPGPU Rechenchips und FPGAs in Servern verbaut werden. Um sicherzustellen, dass die Messmethodologie aus dieser Arbeit weiterhin relevant bleibt, wird es nötig sein, Arbeitslasten zu erstellen, welche diese Fälle abdecken, sowie Stromverbrauchsmodelle zu entwerfen, die in der Lage sind, derartige spezialisierte Hardware zu betrachten.
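The rating methodology described in this abstract characterizes a server with multiple workloads at multiple load levels rather than a single performance-per-power ratio. As a hedged illustration of that general idea, the sketch below aggregates per-load-level efficiency ratios with a geometric mean over four hypothetical load levels; it is an assumption-laden simplification, not the metric defined by the thesis or the ENERGY STAR Computer Server program.

```python
from math import prod

def perf_per_watt(throughput: float, avg_power_watts: float) -> float:
    """Simple performance-per-power ratio at one load level."""
    return throughput / avg_power_watts

def efficiency_score(measurements):
    """Geometric mean of per-load-level efficiency ratios.

    measurements: list of (throughput, avg_power_watts) tuples, one per
    load level (e.g. 25%, 50%, 75%, 100% of maximum throughput).
    """
    ratios = [perf_per_watt(t, p) for t, p in measurements]
    return prod(ratios) ** (1 / len(ratios))

# Example: one server measured at four load levels (illustrative numbers).
server = [(2500, 180.0), (5000, 240.0), (7500, 310.0), (10000, 400.0)]
print(f"efficiency score: {efficiency_score(server):.2f}")
```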
195

Benchmarking microservices: effects of tracing and service mesh

Unnikrishnan, Vivek 04 November 2023 (has links)
Microservices have become the current standard in software architecture. As the number of microservices increases, so does the need for better visualization, debugging, and configuration management. Developers currently adopt various tools to achieve these functionalities, two of which are tracing tools and service meshes. Despite the advantages they bring, the overhead they add is also significant. In this thesis, we try to understand these overheads in latency and throughput by conducting experiments on known benchmarks with different tracing tools and service meshes. We introduce a new tool called Unified Benchmark Runner (UBR) that allows easy benchmark setup, enabling a more systematic way to run multiple benchmark experiments under different scenarios. UBR supports Jaeger, TCP Dump, Istio, and three popular microservice benchmarks: Social Network, Hotel Reservation, and Online Boutique. Using UBR, we conduct experiments with all three benchmarks and report performance for different deployments and configurations.
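The overhead question at the heart of this work can be pictured as a before/after latency comparison between a baseline deployment and one with tracing or a service mesh enabled. The sketch below is a hypothetical harness, not UBR itself: the URLs, ports, and deployment labels are assumptions, and UBR's actual interface may differ.

```python
import statistics
import time
import urllib.request

def measure_latencies(url: str, n_requests: int = 200) -> list:
    """Issue sequential requests and record per-request latency in milliseconds."""
    latencies = []
    for _ in range(n_requests):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()
        latencies.append((time.perf_counter() - start) * 1000)
    return latencies

def report(name: str, latencies: list) -> None:
    cuts = statistics.quantiles(latencies, n=100)  # 99 percentile cut points
    print(f"{name}: p50={cuts[49]:.1f}ms p99={cuts[98]:.1f}ms")

# Hypothetical deployments of the same benchmark endpoint, exposed with and
# without tracing/service mesh; the percentile gap approximates the overhead.
baseline = measure_latencies("http://localhost:8080/index")
with_mesh = measure_latencies("http://localhost:8081/index")
report("baseline", baseline)
report("istio + jaeger", with_mesh)
```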
196

BENCHMARKING IN RADIATION ONCOLOGY: DISCOVERING INCONSISTENCIES IN REPORTING METHODOLOGIES

MARTIN, ROBERT SPENCER 02 July 2004 (has links)
No description available.
197

Strategic Planning and Benchmarking for Dummies

Channing, Jill, Ebenhack, K. 02 March 2020 (has links)
No description available.
198

Benchmarking Methods For Predicting Phenotype Gene Associations

Tyagi, Tanya 16 September 2020 (has links)
Assigning human genes to diseases and related phenotypes is an important topic in modern genomics. Human Phenotype Ontology (HPO) is a standardized vocabulary of phenotypic abnormalities that occur in human diseases. Computational methods such as label propagation and supervised learning address challenges posed by traditional approaches such as manual curation to link genes to phenotypes in the HPO. It is only in recent years that computational methods have been applied in a network-based approach for predicting genes to disease-related phenotypes. In this thesis, we present an extensive benchmarking of various computational methods for the task of network-based gene classification. These methods are evaluated on multiple protein interaction networks and feature representations. We empirically evaluate the performance of multiple prediction tasks using two evaluation experiments: cross-fold validation and the more stringent temporal holdout. We demonstrate that all of the prediction methods considered in our benchmarking analysis have similar performance, with each of the methods outperforming a random predictor. / Master of Science / For many years biologists have been working towards studying diseases, characterizing disease history and identifying what factors and genetic variants lead to diseases. Such studies are critical to working towards the advanced prognosis of diseases and being able to identify targeted treatment plans to cure diseases. An important characteristic of diseases is that they can be expressed by a set of phenotypes. Phenotypes are defined as observable characteristics or traits of an organism, such as height and the color of the eyes and hair. In the context of diseases, the phenotypes that describe diseases are referred to as clinical phenotypes, with some examples being short stature, abnormal hair pattern, etc. Biologists have identified the importance of deep phenotyping, which is defined as a concise analysis that gathers information about diseases and their observed traits in humans, in finding genetic variants underlying human diseases. We make use of the Human Phenotype Ontology (HPO), a standardized vocabulary of phenotypic abnormalities that occur in human diseases. The HPO provides relationships between phenotypes as well as associations between phenotypes and genes. In our study, we perform a systematic benchmarking to evaluate different types of computational approaches for the task of phenotype-gene prediction, across multiple molecular networks using various feature representations and for multiple evaluation strategies.
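One family of methods benchmarked in such studies, label propagation, spreads known phenotype annotations through a molecular network so that genes near annotated genes receive higher scores. The sketch below is a toy random-walk-with-restart variant on a made-up five-gene network; it illustrates the idea only and is not the thesis's code, networks, or evaluation setup.

```python
import numpy as np

def propagate_labels(adj: np.ndarray, seeds: np.ndarray,
                     alpha: float = 0.8, iterations: int = 50) -> np.ndarray:
    """Random-walk-with-restart style label propagation.

    adj:   symmetric adjacency matrix of the gene/protein network
    seeds: 1.0 for genes already annotated with the phenotype, else 0.0
    """
    deg = adj.sum(axis=1)
    deg[deg == 0] = 1.0                 # avoid division by zero for isolated nodes
    w = adj / deg[:, None]              # row-normalized transition matrix
    scores = seeds.copy()
    for _ in range(iterations):
        scores = alpha * w.T @ scores + (1 - alpha) * seeds
    return scores                       # higher score = stronger predicted association

# Toy network of 5 genes; genes 0 and 2 are already annotated with the phenotype.
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 1, 0],
                [1, 1, 0, 0, 1],
                [0, 1, 0, 0, 1],
                [0, 0, 1, 1, 0]], dtype=float)
seeds = np.array([1.0, 0.0, 1.0, 0.0, 0.0])
print(propagate_labels(adj, seeds))
```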
199

An investigation into benchmarking for the Asset Administration Industry

Morkel, Carl 12 1900 (has links)
Thesis (MBA)--Stellenbosch University, 2003. / ENGLISH ABSTRACT: The Asset Administration Industry is managed as a back office entity with limited tools to assess operational performance. There is no industry index for performance or platform for collaborative learning. In order to manage operational efficiency, the old cliché of "what gets measured gets managed" applies. Benchmarking is a proven management tool that is used to establish measures of operational performance relative to an industry benchmark. It is a systematic and continuous measurement process that assists a company in determining its relative performance and shows up the factors that influence performance. The theory of benchmarking is a dynamic field, and various types of benchmarking have evolved. In spite of its noted benefits, the popularity of benchmarking has led to sub-standard benchmarking exercises, giving it "management fad" status. It is therefore imperative that any benchmarking study be well planned and focused. The selection of the appropriate benchmarking type is important. A data benchmarking exercise was chosen as a pilot study to introduce the concept to participants in a simple, non-threatening format that could serve as a platform for future benchmarking studies. A five-step benchmarking process model was followed, consisting of: (1) determine what to benchmark; (2) form a benchmarking team; (3) identify benchmarking partners; (4) collect and analyse benchmarking information; and (5) take action. Application of benchmarking theory to the Asset Administration Industry led to the development of specific performance indicators from a process and financial perspective as well as a learning and growth perspective. Due to the sensitivity of the information, the benchmarking report was customised for each participant, reflecting only the industry average measures (the benchmark) and the particular company's measurement. In conclusion, the pilot study proved able to generate robust measures useful to the management of the Asset Administration function by determining relative performance. The benchmarking exercise also succeeded in introducing the concept of shared learning and a platform for future benchmarking studies. Despite these positive outcomes, the real benefits of a process benchmarking exercise have not been explored and could generate tremendous benefit for the effective operation of Asset Administration. / AFRIKAANSE OPSOMMING: Die Administrasie van Batebestuur Industrie word bestuur as 'n agterkantoor funksie met beperkte hulpmiddels om operasionele werkverrigting te bepaal. Daar bestaan geen industrie indeks vir werkverrigting asook geen basis vir samewerking nie. Die ou gesegde dat "wat gemeet word, word bestuur" is hier van toepassing. Hoogtemerking (benchmarking) is 'n bewese bestuursmiddel wat gebruik word om operasionele werkverrigting relatief tot die industrie te bepaal. Hoogtemerking is 'n sistematiese en voortdurende proses van meting wat 'n maatskappy help om hul relatiewe werksverrigting te bepaal sowel as om die faktore wat bydra tot werkverrigting uit te lig. Die teorie van hoogtemerking is dinamies en verskeie tipes hoogtemerking het reeds ontstaan. Ten spyte van bewese voordele het die populariteit van hoogtemerking gelei tot sub-standaard hoogtemerking oefeninge waardeur dit die reputasie van 'n bestuursfoefie gekry het. Dit is daarom belangrik dat enige hoogtemerking studie goed beplan word en gefokus is. Die keuse van die gepaste hoogtemerking tipe is belangrik.
Ten einde die konsep van hoogtemerking bekend te stel en 'n basis te skep vir toekomstige hoogtemerking is besluit om 'n eenvoudige proefprojek te loods. Die hoogtemerking proses bestaan uit vyf stappe, nl: 1. Bepaal die basis van die hoogtemerk. 2. Stel 'n hoogtemerking span saam. 3. Identifiseer hoogtemerking vennote. 4. Vesamel en analiseer hoogtemerking informasie. 5. Neem aksie. Die toepassing van hoogtemerking teorie tot die Batebestuur Administrasieindustrie het gelei tot die ontwikkeling van spesifieke werkverrigting aanwysers vanuit 'n proses en finansiële perspektief aan die een kant, en 'n leer en groei perspektief aan die ander kant. As gevolg van die sensitiewe aard van die informasie is die hoogtemerking verslag volgens maat voorberei vir elke deelnemende maatskappy. Hierdie veslag reflekteer net die maatskappy se spesifieke hoogtemerk in verhouding tot die industrie gemiddelde. Ter afsluiting het die proefprojek daarin geslaag om robuuste data oor relatiewe werkverrigting te genereer wat gebruik kan word in die bestuur van Batebestuursadministrasie. Die hoogtemerking oefening het ook daarin geslaag om die konsep van gemeenskaplike leersaamheid oor te dra en 'n basis te skep vir toekomstige hoogtemerking studies. Ten spyte van al die positiewe gevolge is die werklike waarde van proses hoogtemerking nog nie ontgin nie en mag dit geweldige voordele ontsluit vir die effektiewe werking van Batebestuurs-administrasie.
200

Miljömärkning av logianläggningar : En studie av effekterna på Green Key-märkta hotell och vandrarhems miljöprestanda

Börjesson, Hannah January 2016 (has links)
Logisektorn står för en betydande del av resurskonsumtionen och den miljömässiga påverkan från turistindustrin. Idag finns ett växande antal miljömärkningar för att hjälpa logianläggningar att bli mer hållbara - en av de ledande internationella miljömärkningarna är Green Key. Logianläggningar kan ha olika motiv till att ansluta sig till en miljömärkning förutom ett internt miljöengagemang; kostnadsbesparingar, konkurrensfördelar, fler gäster och ökad lönsamhet. Den här studien har undersökt skillnader kring Green Keys effekter på anläggningar smiljöprestanda beroende på typ av anläggning, samt om antalet uppfyllda poängkriterier påverkar miljöprestandan. Syftet var framförallt att ta reda på om Green Key minskar logianläggningars miljöpåverkan över tid, och om märkningen leder till ett ökat antal gäster beroende på vilken typ av gäster anläggningarna hade främst. En kvantitativ metod valdes och longitudinell data över Green Key-anslutna hotell och vandrarhems årliga vatten-, el- och energiförbrukning samlades in. Även data över antal gästnätter per år, typ av gäster och total inomhusyta sammanställdes. Statistiska tester genomfördes och de visade att det inte fanns ett samband mellan antal uppfyllda poängkriterier och miljöpåverkan. Logianläggningarnas resursförbrukning skiljde sig endast åt gällande elförbrukning, där hotell hade en signifikant högre elförbrukning/m2 än vandrarhem. Resultatet visade att det fanns en effekt av Green Key på anläggningarnas resursförbrukning över tid. Effekten slog något olika, men majoriteten av anläggningarna hade minskat förbrukningen över tid. Det fanns en signifikant skillnad i vattenförbrukning/gästnatt från startåret med Green Key i jämförelse med 2015. Det fanns ingen skillnad i ökning av antalet gäster beroende på typ av gäster, men antalet gästnätter totalt var dock fler efter en miljömärkning med Green Key än före. Det är emellertid svårt att påvisa om effekten beror på Green Key eller andra faktorer. / The accommodation sector accounts for a significant proportion of resource consumption and environmental impact of the tourism industry. Today, there are a growing number of ecolabels in the accommodation sector to help establishments become more sustainable. One of them is the international eco-label Green Key. Lodging establishments have different motives for joining an eco-label in addition to an internal commitment to the environment; cost savings, competitive advantages, more guests and increased profitability. This study investigated if there is any difference when it comes to the effects of Green Key depending on the type of facility, and if the number of ‘scoring criteria’ affect environmental performance. The purpose was mainly to find out whether Green Key reduces the environmental impact over time, and if the label leads to an increased number of guests, depending on the type of guests. A quantitative method was chosen and longitudinal data over Green Key hotel and hostel's annual water, electricity and energy consumption was collected. Data on the number of guest nights per year, type of guests and total indoor area was also collected. Statistical tests were conducted and they showed that there was no correlation between the number of ‘scoring criteria’ and environmental impact. Only hotel and hostel's electricity consumption differed - hotels had a significantly higher electricity consumption/m2 than hostels. 
The results showed that there was an effect of Green Key on the establishments' resource consumption over time, although the effect differed; the majority of the establishments had reduced their consumption. The results also showed a significant difference in water consumption per guest night between the starting year with Green Key and 2015. There was no difference in the increase of guest nights depending on the type of guests, although the total number of guest nights had increased significantly after the establishments had been awarded Green Key. However, it is difficult to demonstrate whether the effect is due to Green Key or to other factors.
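The before/after comparison of water consumption per guest night described above is the kind of analysis a paired test captures: the same establishments measured in their first Green Key year and again in 2015. The sketch below uses made-up consumption figures purely for illustration; it is not the study's data or code.

```python
from scipy import stats

# Hypothetical water use (litres per guest night) for the same establishments
# in their first Green Key year and in 2015 -- illustrative numbers only.
start_year = [212.0, 185.5, 240.3, 198.7, 176.2, 205.9, 221.4, 190.0]
year_2015 = [196.1, 172.0, 231.8, 183.5, 170.4, 188.2, 214.9, 181.3]

# Paired t-test: does per-guest-night consumption differ between the two years?
t_stat, p_value = stats.ttest_rel(start_year, year_2015)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A non-parametric alternative if normality of the differences is doubtful:
w_stat, w_p = stats.wilcoxon(start_year, year_2015)
print(f"Wilcoxon W = {w_stat:.1f}, p = {w_p:.4f}")
```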
