881 |
Beyond budgeting during the corona pandemic: A case study of Ahlsell Sverige AB. Petersson, Axel; Torstensson, Jacob; Lundegårdh, Theo. January 2023 (has links)
Some research holds that organizations benefit from the more traditional budgeting system during a crisis, while other research argues that the beyond budgeting approach is preferable. The basis of our research is to examine how a beyond budgeting company applied the six management control principles during the corona pandemic. This is examined by studying how the six principles of beyond budgeting are used within the company Ahlsell Sverige AB. The purpose of the thesis is thus to increase understanding of how Ahlsell manages its operations, how it uses the principles, and how this was affected during the corona pandemic. Did the crisis lead the company to act in a way that is less in line with beyond budgeting? To that end, a case study forms the basis of the research design and method: a qualitative study with an inductive approach, followed by semi-structured interviews with a number of respondents within the company. The goal of the interviews was to obtain nuanced and in-depth information. Based on the collected empirical data and the analysis together with the theory, the conclusion is that Ahlsell applies the six beyond budgeting management principles in varying forms. Some are fully in line with beyond budgeting, while others align less closely with its management processes. The conclusions also show that the corona pandemic barely affected the company or its management processes. Ahlsell posted strong numbers and continued to manage without a budget in the same direction, without major changes.
|
882 |
System Support for Next-Gen Mobile Applications. Jiayi Meng (16512234). 10 July 2023 (has links)
<p>Next-generation (Next-Gen) mobile applications such as Extended Reality (XR), which encompasses Virtual/Augmented/Mixed Reality (VR/AR/MR), promise to revolutionize how people interact with technology and the world, ushering in a new era of immersive experiences. However, because of battery constraints, the hardware capacity of mobile devices cannot grow in proportion to the escalating resource demands of these apps. To bridge the gap, edge computing has emerged as a promising approach, further boosted by emerging 5G cellular networks, which promise low latency and high bandwidth. Realizing the full potential of edge computing, however, faces several fundamental challenges.</p>
<p><br></p>
<p>In this thesis, we first discuss a set of fundamental design challenges in supporting Next-Gen mobile applications via edge computing. These challenges extend across the three key system components involved — mobile clients, edge servers, and cellular networks. We then present how we address several of these challenges, including (1) how to coordinate mobile clients and edge servers to achieve stringent QoE requirements for Next-Gen apps; (2) how to optimize energy consumption of running Next-Gen apps on mobile devices to ensure long-lasting user experience; and (3) how to model and generate control-plane traffic of cellular networks to enable innovation on mobile network architectural design to support Next-Gen apps not only over 4G but also over 5G and beyond.</p>
<p><br></p>
<p>First, we present how to optimize the latency in edge-assisted XR system via the mobile-client and edge-server co-design. Specifically, we exploit key insights about frame similarity in VR to build the first multiplayer edge-assisted VR design, Coterie. We demonstrate that compared with the prior work on single-player VR, Coterie reduces the per-player network load by 10.6X−25.7X, and can easily support 4 players for high-quality VR apps on Pixel 2 over 802.11ac running at 60 FPS and under 16ms responsiveness without exhausting the finite wireless bandwidth.</p>
<p><br></p>
<p>Second, we focus on the energy perspective of running Next-Gen apps on mobile devices. We study a major limitation of a classic and de facto app energy management technique, reactive energy-aware app adaptation, which was first proposed two decades ago. We propose, design, and validate a new solution, the first proactive energy-aware app adaptation, that effectively tackles the limitation and achieves higher app QoE while meeting a given energy drain target. Compared with traditional approaches, our proactive solution improves the QoE by 44.8% (Pixel 2) and 19.2% (Moto Z3) under low power budget.</p>
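<p>As a concrete and purely illustrative sketch of the proactive idea described above, the planner below picks, at each epoch, the highest-fidelity level whose projected drain over the entire remaining horizon still fits the energy budget, instead of reacting only after the battery drains too fast. The QoE and power numbers are hypothetical, not measurements from this work:</p>

```python
# Hypothetical fidelity levels, best-QoE first: (QoE score, power in watts).
LEVELS = [(1.0, 2.0), (0.8, 1.4), (0.6, 1.0), (0.4, 0.7)]

def plan_epoch(budget_joules, horizon_s):
    """Proactive choice: highest QoE whose projected drain over the
    whole remaining horizon fits the remaining energy budget."""
    for qoe, watts in LEVELS:
        if watts * horizon_s <= budget_joules:
            return qoe, watts
    return LEVELS[-1]  # fall back to the lowest-power level

def run(budget_joules, horizon_s, epoch_s=60):
    """Simulate the horizon, re-planning at every epoch boundary."""
    qoe_sum, t = 0.0, 0
    while t < horizon_s:
        qoe, watts = plan_epoch(budget_joules, horizon_s - t)
        budget_joules -= watts * epoch_s  # drain for this epoch
        qoe_sum += qoe * epoch_s
        t += epoch_s
    return qoe_sum / horizon_s  # average QoE delivered
```

<p>Under a tight budget the planner settles on a sustainable level immediately, rather than running rich early and being forced to the floor later, which is the failure mode of reactive adaptation.</p>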
<p><br></p>
<p>Finally, we delve into the third system component, cellular networks. To facilitate innovation in mobile network architecture to better support Next-Gen apps, we characterize and model the control-plane traffic of cellular networks, which has been mostly overlooked by prior work. We first prove that the probability distributions traditionally used to model Internet data traffic (e.g., Poisson, Pareto, and Weibull) cannot capture control-plane traffic, whose cumulative distributions exhibit much higher burstiness and longer tails. We then propose a two-level state-machine-based traffic model built on the semi-Markov model. Finally, we validate that traces synthesized by our model stay close to real traces, within 1.7%, 4.9%, and 0.8% for phones, connected cars, and tablets, respectively. We also show that our model can be easily adjusted from LTE to 5G, enabling further research on control-plane design and optimization for 4G/5G and beyond.</p>
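<p>To make the modeling idea concrete, the toy generator below sketches a two-level semi-Markov process: a top level alternates device regimes with heavy-tailed (Pareto) dwell times, and a bottom level emits control-plane events at a regime-specific rate. The states, rates, and dwell parameters are illustrative assumptions, not the fitted parameters from this work:</p>

```python
import random

NEXT       = {"ACTIVE": "IDLE", "IDLE": "ACTIVE"}  # top-level regime chain
DWELL_MEAN = {"ACTIVE": 10.0, "IDLE": 30.0}        # mean seconds per regime
EVENT_GAP  = {"ACTIVE": 0.5,  "IDLE": 5.0}         # mean gap between events (s)

def synthesize(total_time, seed=1):
    """Generate (timestamp, regime) control-plane events for total_time seconds."""
    rng = random.Random(seed)
    state, t, trace = "ACTIVE", 0.0, []
    while t < total_time:
        # Pareto dwell (alpha=2.5 has mean ~1.67, hence the divisor) gives
        # the burstiness and long tails that Poisson-style models miss.
        dwell = DWELL_MEAN[state] * rng.paretovariate(2.5) / 1.67
        end = min(t + dwell, total_time)
        while True:
            t += rng.expovariate(1.0 / EVENT_GAP[state])  # within-regime gaps
            if t >= end:
                break
            trace.append((round(t, 3), state))
        t = end
        state = NEXT[state]
    return trace
```

<p>Swapping the dwell distribution or the per-regime event rates is how such a model would be re-fitted from LTE to 5G traces.</p>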
|
883 |
Composable, Sound Transformations for Nested Recursion and Loops. Kirshanthan Sundararajah (16647885). 26 July 2023 (has links)
<p>Programs that use loops to operate over arrays and matrices are generally known as <em>regular programs</em>. These programs appear in critical applications such as image processing, differential equation solvers, and machine learning. Over the past few decades, extensive research has been done on composing, verifying, and applying scheduling transformations like loop interchange and loop tiling for regular programs. As a result, we have general frameworks such as the polyhedral model to handle transformations for loop-based programs. Similarly, programs that use recursion and loops to manipulate pointer-based data structures are known as <em>irregular programs</em>. Irregular programs also appear in essential applications such as scientific simulations, data mining, and graphics rendering. However, there is no analogous framework for recursive programs. In the last decade, although many scheduling transformations have been developed for irregular programs, they are ad-hoc in various aspects, such as being developed for a specific application and lacking portability. This dissertation examines principled ways to handle scheduling transformations for recursive programs through a unified framework resulting in performance enhancement. </p>
<p>Finding principled approaches to optimizing irregular programs at compile time is a long-standing problem. We specifically focus on scheduling transformations that reorder a program's operations to improve performance by enhancing locality and exploiting parallelism. In the first part of this dissertation, we present PolyRec, a unified general framework that can compose and apply scheduling transformations to nested recursive programs and reason about the correctness of the composed transformations. PolyRec is a first-of-its-kind unified transformation framework for irregular programs consisting of nested recursion and loops. It is built on solid theoretical foundations from the world of automata and transducers and provides a fundamentally novel way to think about recursive programs and their scheduling transformations. The core idea is to design mechanisms that strike a balance between expressivity, in representing the set of dynamic instances of computations, transformations, and dependences, and decidability, in checking the correctness of composed transformations. We use <em>multi-tape</em> automata and transducers to represent the set of dynamic instances of computations and transformations, respectively. These machines are similar to, yet more expressive than, their classical single-tape counterparts. While properties that are decidable for classical machines are in general undecidable for multi-tape machines, we have proven that they are decidable for the class of machines we consider, and we present algorithms to verify them. These machines therefore provide the building blocks to compose and verify scheduling transformations for nested recursion and loops. The crux of the PolyRec framework is its regular string-based representation of dynamic instances, which allows instances to be ordered lexicographically in exactly their execution order. All the transformations considered in PolyRec require different orderings of these strings, representable with only <em>additive</em> changes to the strings.</p>
<p>Loop transformations such as <em>skewing</em> require performing arithmetic on the representation of dynamic instances. In the second part of this dissertation, we explore this space of transformations by introducing skewing for nested recursion. Skewing plays an essential role in producing easily parallelizable loop nests from seemingly difficult ones whose dependences are carried across loops. Including skewing for nested recursion in PolyRec requires significant extensions to the representations of dynamic instances and transformations that facilitate <em>performing arithmetic using strings</em>. First, we prove that the machines representing the transformations are still composable. Then we prove that the representation of dependences and the algorithm that checks the correctness of composed transformations hold with minimal changes. We call the extended framework UniRec, since it resembles the unimodular transformations for perfectly nested loops, which consider any combination of the primary transformations interchange, reversal, and skewing. UniRec opens up the possibility of producing newly composed transformations for nested recursion and loops and verifying their correctness. We claim that UniRec completely subsumes the unimodular framework for loop transformations, since nested recursion is more general than loop nests.</p>
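<p>For intuition about the classical unimodular side that UniRec generalizes, the sketch below applies a skewing matrix to a made-up dependence vector and checks legality via lexicographic positivity. It illustrates the loop-nest case only, not UniRec's string-based machinery:</p>

```python
def apply(T, v):
    """Multiply integer matrix T by an iteration/dependence vector v."""
    return tuple(sum(T[i][j] * v[j] for j in range(len(v))) for i in range(len(T)))

def lex_positive(v):
    """A dependence is legal if its first nonzero component is positive."""
    for x in v:
        if x != 0:
            return x > 0
    return False

SKEW = [[1, 0],
        [1, 1]]   # (i, j) -> (i, i + j); determinant 1, hence unimodular

dep = (1, -1)     # a hypothetical cross-loop dependence blocking inner parallelism
new_dep = apply(SKEW, dep)
# new_dep == (1, 0): after skewing the dependence is carried only by the
# outer loop, so the inner loop can run in parallel.
```

<p>The legality check is the same for any composition of interchange, reversal, and skewing: compose the matrices, transform every dependence vector, and verify all results stay lexicographically positive.</p>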
|
884 |
Performance Evaluation of Certified Pilots in Flight Simulator. Krishna, Anandu. 15 May 2023 (has links)
No description available.
|
885 |
The Development of a Multifunction UGV. Xing, Anzhou. January 2023 (has links)
With the increasingly prevalent use of robots, this paper presents the design and evaluation of a multifunctional Unmanned Ground Vehicle (UGV) with an adjustable suspension system, overmolded omni-wheels, and a unique tool-head pick-up mechanism. The UGV addresses current limitations in adaptability, performance, and versatility across various industries, including agriculture, construction, and surveillance. The adjustable suspension system enhances the UGV's stability and adaptability on diverse terrains, and the overmolded omni-wheels improve maneuverability and durability in off-road conditions. The tool-head pick-up mechanism allows for the seamless integration of various tools, enabling the UGV to perform multiple tasks without manual intervention. A comprehensive performance evaluation assessed the UGV's versatility, load capacity, passability, and adaptability. The results indicate that the proposed UGV design successfully addresses current limitations and has the potential to revolutionize applications in a range of industries. Further research and development are necessary to optimize the UGV's performance, safety, and cost-effectiveness. / Thesis / Master of Applied Science (MASc)
|
886 |
Characterization and Implementation of Screen-Printed, Flexible PTC Heaters for Portable Diagnostic Testing. Riley J Brown (15348913). 26 April 2023 (has links)
<p>The 2020 pandemic emphasized the need for accessible and accurate point-of-care diagnostic tests, a need that the continued development of isothermal nucleic acid amplification tests can help meet. These tests require heating to and holding a specific temperature, in this case 65 °C for 30 minutes, for amplification to occur. Heaters often rely on external feedback to control the temperature, which raises the device's cost. Several self-regulating heaters have instead been made from materials with a positive temperature coefficient (PTC) of resistance, eliminating the need for complex control circuitry and making point-of-care diagnostic tests simpler and more accessible. In this study, ink-based PTC heaters are developed and characterized using the scalable method of screen printing to reach 65 °C and aid in the detection of SARS-CoV-2. Various curing methods and screen-printing parameters were evaluated to improve the stability of the heaters and the understanding of their reproducibility. The longevity of the heaters was evaluated with oxidation studies, and a COMSOL model was created to study the heat transfer within the device. Furthermore, the heaters were successfully implemented in a second-generation electronic point-of-care diagnostic device. Detecting SARS-CoV-2 with a self-regulating heater removes the need for complex circuitry, improving the accessibility of point-of-care tests, with the potential to be expanded to a wide range of pathogens.</p>
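<p>As a rough illustration of why a PTC material self-regulates without feedback circuitry, the toy lumped model below iterates to the temperature at which Joule heating balances convective loss: as temperature rises, resistance rises, so dissipated power falls until equilibrium. All parameter values are made-up assumptions, not measurements from this work:</p>

```python
V, R0, T0 = 6.0, 10.0, 25.0   # volts; ohms at reference temperature T0 (deg C)
ALPHA     = 0.1               # hypothetical steep PTC coefficient (1/deg C)
H         = 0.02              # lumped heat-loss coefficient (W/deg C)
T_AMB     = 25.0              # ambient temperature (deg C)

def resistance(T):
    """Linearized PTC resistance model: R rises with temperature."""
    return R0 * (1.0 + ALPHA * (T - T0))

def equilibrium(iters=200):
    """Fixed-point iteration for the T where V^2/R(T) == H*(T - T_AMB)."""
    T = T_AMB
    for _ in range(iters):
        T = T_AMB + (V * V / resistance(T)) / H
    return T

T_eq = equilibrium()
# With these made-up constants the heater settles in the low 60s (deg C),
# holding temperature with no external feedback: any excursion above the
# fixed point raises R, cuts power, and pulls the temperature back down.
```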
|
887 |
[en] AN ESSAY ON ANALYSIS OF PERFORMANCE AND USE OF RESOURCES IN HEALTHCARE SYSTEMS / [pt] UM ENSAIO SOBRE ANÁLISE DE DESEMPENHO E USO DE RECURSOS EM SISTEMAS DE SAÚDE. Leonardo dos Santos Lourenco Bastos. 28 December 2021 (has links)
[en] The adequate management of healthcare resources is essential to providing optimal patient care, especially under high-strain conditions or limited resources. Benchmarking is helpful for evaluating the performance and quality of care within these systems and for providing targets for improvement. This is especially important for intensive care units (ICUs), which deal with complex cases and high costs and provide relevant insights for treating severe diseases. Under usual conditions, assessing performance in intensive care is complex, since metrics must account for the patients' case-mix and the unit's organizational settings. When high-strain conditions arise, resource use becomes more variable and the unit performs in unusual ways. One such setting is the COVID-19 pandemic, which has overwhelmed healthcare systems worldwide since December 2019, notably their intensive care resources. This thesis aims to evaluate the use of resources and the performance of healthcare systems both before and during the COVID-19 pandemic. Using data from Brazilian hospitals, we developed six individual studies that perform ICU benchmarking in a pre-pandemic period and examine the use of ICU resources and patient outcomes as the pandemic progressed. We managed each work as a data science project following the Data Science Life Cycle, under the Design Science research framework, and used different statistical approaches to analyze the data. Our main results show that before the pandemic, the assessment of quality-of-care metrics and active surveillance of infections were organizational activities associated with efficient ICUs. During the pandemic, the use of resources and outcomes varied temporally and regionally in Brazil. The North and Northeast, regions with more vulnerable healthcare systems and lower availability of ICU resources, showed poorer outcomes than the South and Southeast. The impact on the Brazilian healthcare system intensified during a second pandemic surge, with increasing use of respiratory resources and rising mortality. Finally, in a network of private hospitals that had undergone preparedness and had ample resources, overall mortality was low and decreased over time, and noninvasive respiratory support was independently associated with a reduction in mortality.
|
888 |
Comparing Android Runtime with native: Fast Fourier Transform on Android. Danielsson, André. January 2017 (has links)
This thesis investigates the performance differences between Java code compiled by Android Runtime and C++ code compiled by Clang on Android. The Fast Fourier Transform (FFT) was chosen as the test workload because it exemplifies when high-performance computing is relevant on a mobile device. Several aspects that can affect the execution time of a program were examined. One test measured the overhead of the Java Native Interface (JNI); the results showed that this overhead is insignificant for FFT sizes larger than 64. Another test compared matching FFT implementations in Java and native code, concluding that, of the converted algorithms, the Columbia Iterative FFT performed best in both Java and C++. A third test evaluated vectorization, which proved to be an efficient option for native optimization. Finally, tests examined the effect of single-precision (float) versus double-precision (double) data types; choosing float could improve performance by using the cache efficiently.
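For readers unfamiliar with the benchmarked kernel, a generic iterative radix-2 Cooley-Tukey FFT looks roughly as follows. This is a Python sketch of the general algorithm family, not the thesis's Java or C++ implementations:

```python
import cmath

def fft_iterative(x):
    """Iterative radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    a = list(x)
    # Bit-reversal permutation puts elements in butterfly order.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # Butterfly passes over sub-transforms of doubling length.
    length = 2
    while length <= n:
        wlen = cmath.exp(-2j * cmath.pi / length)  # principal root of unity
        for i in range(0, n, length):
            w = 1.0
            for k in range(i, i + length // 2):
                u = a[k]
                v = a[k + length // 2] * w
                a[k] = u + v
                a[k + length // 2] = u - v
                w *= wlen
        length <<= 1
    return a
```

The iterative (rather than recursive) structure is what makes this family of implementations cache- and JNI-friendly, since the whole transform runs over one flat array.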
|
889 |
Evaluating the Performance of Animal Shelters: An Application of Data Envelopment Analysis. Heyde, Brandy. 01 January 2008 (has links)
The focus of this thesis is the application of data envelopment analysis to understand and evaluate the performance of diverse animal welfare organizations across the United States. The results include identification of the most efficient animal welfare organizations, at least among those that post statistics on their operations, and a discussion of partnerships that may improve the performance of the less efficient ones. The Humane Society of the United States estimates that there are 4,000-6,000 independently run animal shelters across the United States, with an estimated 6-8 million companion animals entering them each year. Unfortunately, more than half of these animals are euthanized. The methods shared in this research illustrate how data envelopment analysis may help shelters improve these statistics through evaluation and cooperation. Data envelopment analysis (DEA) is based on the principle that the efficiency of an organization depends on its ability to transform its inputs into the desired outputs. The result of a DEA model is a single measure that summarizes the relative efficiency of each decision-making unit (DMU) when compared with similar organizations. The DEA linear program defines an efficiency frontier from the most efficient animal shelters in the model, which "envelops" the other DMUs. Individual efficiency scores are calculated by determining how close each DMU is to the frontier. The results shared in this research focus on the performance of 15 animal shelters. The lack of standardized data on individual shelter performance limited the ability to review a larger number of shelters and provide more robust results. Various programs are in place within the United States to improve the collection and availability of individual shelter performance data.
Specifically, the Asilomar Accords provide a strong framework for doing this and could significantly reduce euthanasia of companion animals if more shelters adopted the practice of collecting and reporting their data in this format. This research demonstrates that combining performance data with financial data within the data envelopment analysis technique can be powerful in helping shelters identify how to deliver better results. The addition of data from other organizations will make the results even more robust and useful for each shelter involved.
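To illustrate the DEA principle in miniature: while the general multi-input, multi-output case solves one linear program per DMU, in the single-input, single-output special case CCR efficiency reduces to comparing each unit's output/input ratio against the best ratio, which defines the frontier. Shelter names and figures below are hypothetical, not data from this thesis:

```python
# Hypothetical shelters: (operating cost in $1000s [input], live outcomes [output]).
shelters = {
    "A": (200, 1800),
    "B": (350, 2100),
    "C": (150,  900),
    "D": (400, 4000),
}

def ccr_efficiency(data):
    """Single-input/single-output CCR: efficiency relative to the best ratio."""
    ratios = {k: out / cost for k, (cost, out) in data.items()}
    frontier = max(ratios.values())  # the "enveloping" best practice
    return {k: r / frontier for k, r in ratios.items()}

scores = ccr_efficiency(shelters)
# Shelter D (10 outcomes per $1000) defines the frontier with score 1.0;
# A scores 0.9, while B and C score 0.6, quantifying how far each sits
# from best practice.
```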
|
890 |
A comparative analysis of database sanitization techniques for privacy-preserving association rule mining. Mårtensson, Charlie. January 2023 (has links)
Association rule hiding (ARH) is the process of modifying a transaction database to prevent sensitive patterns (association rules) from being discovered by data miners. An optimal ARH technique hides all sensitive patterns while leaving all nonsensitive patterns public. In practice, however, many ARH algorithms cause undesirable side effects, such as failing to hide sensitive rules or mistakenly hiding nonsensitive ones. Evaluating the utility of ARH algorithms therefore involves measuring the side effects they cause. A wide array of ARH techniques is in use, with evolutionary algorithms in particular gaining popularity in recent years. However, previous research in the area has focused on incremental improvement of existing algorithms; no work was found that compares the performance of ARH algorithms without the incentive of promoting a newly suggested algorithm as superior. To fill this research gap, this project compares three ARH algorithms developed between 2019 and 2022 (ABC4ARH, VIDPSO, and SA-MDP) using identical and unbiased parameters. The algorithms were run on three real databases and three synthetic ones of various sizes, in each case given four different sets of sensitive rules to hide. Their performance was measured in terms of side effects, runtime, and scalability (i.e., performance as database size increases). The performance of the algorithms varied considerably depending on the characteristics of the input data, with no algorithm consistently outperforming the others at mitigating side effects. VIDPSO was the most efficient in terms of runtime, while ABC4ARH maintained the most robust performance as the database size increased. However, results matching the quality of those reported in the papers originally describing each algorithm could not be reproduced, showing a clear need to validate the reproducibility of research before its results can be trusted.
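As an illustration of the side-effect metrics such comparisons rely on, the sketch below contrasts the rules mined before and after sanitization to count sensitive rules still exposed (hiding failures), nonsensitive rules accidentally hidden (lost rules), and spurious new rules (ghost rules). The rule sets are toy examples, not data from the study:

```python
def side_effects(original, sensitive, sanitized):
    """Return (hiding failures, lost rules, ghost rules) as sets."""
    original, sensitive, sanitized = set(original), set(sensitive), set(sanitized)
    hiding_failures = sensitive & sanitized            # still minable: hiding failed
    lost_rules = (original - sensitive) - sanitized    # nonsensitive rules destroyed
    ghost_rules = sanitized - original                 # artifacts of sanitization
    return hiding_failures, lost_rules, ghost_rules

original  = {"bread->milk", "milk->eggs", "beer->diapers", "eggs->flour"}
sensitive = {"beer->diapers"}
sanitized = {"bread->milk", "milk->eggs", "tea->sugar"}  # mined after hiding

hf, lost, ghost = side_effects(original, sensitive, sanitized)
# hf is empty (hiding succeeded), but sanitization lost "eggs->flour"
# and introduced the ghost rule "tea->sugar".
```

An ideal sanitization leaves all three sets empty except that every sensitive rule disappears; real algorithms trade these side effects against runtime, which is exactly what the thesis measures.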
|