771 |
Ansible in different cloud environments - Witt, Axel, Westling, Sebastian January 2023 (has links)
Cloud computing offers higher reliability and lower up-front IT costs than traditional computing environments and is a great way to dynamically scale both resources and capabilities. For further efficiency and consistency, cloud computing tasks can also be automated with tools such as Ansible. In this thesis, we analyze and compare the capabilities of Ansible on the three leading cloud platforms: Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). To evaluate this, we cover three areas of cloud-platform automation: performance, user complexity, and missing network functionality. Performance was evaluated through experiments that revealed a large gap between the platforms, with AWS the clear winner in all scenarios. Microsoft Azure was slightly faster than GCP when a low number of virtual machines was created, but GCP scaled better than Azure as more virtual machines were created. User complexity was evaluated on the setup process and the creation of playbooks, where Azure was the clear winner in both areas. AWS and GCP had similar setup processes, but AWS took second place thanks to its superior documentation for creating playbooks. All three platforms had missing network functionality, but most of it was not relevant, as it was unrelated to deployment, which is the main use of an automation tool like Ansible. However, some deployment functions were missing; for example, the firewall function for AWS was missing in Ansible.
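As a rough illustration of the kind of playbook-driven automation compared above, the sketch below launches a hypothetical provisioning playbook through the ansible-runner Python package. The project directory, playbook name, and extra variables are assumptions made for the example, not artifacts from the thesis.

```python
# Minimal sketch: running a hypothetical cloud-provisioning playbook with
# ansible-runner. Directory, playbook, and variables are placeholders.
import ansible_runner

result = ansible_runner.run(
    private_data_dir="provisioning",          # assumed project directory
    playbook="create_vms.yml",                # hypothetical playbook
    extravars={"vm_count": 10, "region": "eu-north-1"},
)

print(result.status)                          # "successful" or "failed"
print(result.rc)                              # process return code
for event in result.events:
    print(event.get("event"))                 # one entry per task event
```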
|
772 |
Predictive Maintenance for Cyclotrons using Machine Learning - Pawlik, Cesar January 2023 (has links)
A cyclotron is used for diagnosing and treating cancer. Pipes in the cyclotron have to be replaced as they wear out when isotopes travel through them. This thesis aims to use machine learning models to predict when these parts have to be changed. Based on previous studies on predictive maintenance, three different machine learning models are used: random forest, gradient boosting, and support vector machines. The results show that a gradient boosting regressor predicting the number of remaining runs before the pipes have to be changed is the preferred model. However, some data augmentation had to be done to obtain these results, and future studies could explore the possibility of using a bigger data set or a multiple-classifier approach.
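A minimal sketch of the preferred model type, a gradient boosting regressor predicting remaining runs before pipe replacement, is shown below. The features and synthetic data are illustrative assumptions, not the thesis dataset.

```python
# Sketch: gradient boosting regressor for "remaining runs" prediction.
# Feature meanings and the synthetic target are assumptions for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 4))          # e.g. beam current, pressure, run length, temperature
y = np.maximum(0, 200 - 30 * X[:, 0] + 10 * rng.normal(size=n))  # remaining runs

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X_train, y_train)

print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```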
|
773 |
Performance Evaluation of Digital Image Processing on the Web using WebAssembly - Nyberg, Christoffer January 2023 (has links)
JavaScript has been the de facto standard programming language for Web browsers for some time. Although it has enabled the interactive and complex web pages we have today, it has long been characterized by performance issues. A promising new technology, WebAssembly, aims to enable near-native performance on the Web. WebAssembly is a binary instruction format designed as a compilation target for programming languages like C/C++, allowing developers to deploy their applications for execution in a Web browser environment. Previous benchmarks have examined the performance of WebAssembly and observed slowdowns ranging from about 10% to around 55% relative to native code. Recent additions to the WebAssembly standard, such as support for SIMD instructions and multithreading, enable even greater performance, so new benchmarks need to be constructed. This thesis explores the performance implications of these new features by applying them in the domain of digital image processing, which is particularly suited to such optimizations. The OpenCV library was used to construct two benchmark suites, one running natively and one running in two different Web browsers using WebAssembly. The results indicate that, although performance in some cases approached native performance, the mean slowdown was approximately a factor of two compared to native code.
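The thesis's benchmark suites are C++ builds targeting native and WebAssembly; as a rough illustration of the kind of image-processing kernel and timing methodology involved, the sketch below times a single OpenCV operation natively through OpenCV's Python bindings. Image size, kernel, and repetition count are assumptions, not the thesis's benchmark parameters.

```python
# Illustrative native micro-benchmark of one OpenCV image-processing kernel.
# Parameters are assumed; this is not the thesis's C++/WebAssembly suite.
import time
import numpy as np
import cv2

image = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)

cv2.GaussianBlur(image, (5, 5), 1.5)          # warm-up run

samples = []
for _ in range(50):
    start = time.perf_counter()
    cv2.GaussianBlur(image, (5, 5), 1.5)
    samples.append(time.perf_counter() - start)

print(f"median blur time: {sorted(samples)[len(samples) // 2] * 1000:.2f} ms")
```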
|
774 |
Evaluation of the H2 database for GlobalEye compared with Sybase: performance tests between databases - Nilsson, Mattias January 2021 (has links)
Replacing the database in an existing system is difficult. To find out whether another database works satisfactorily with the system, there is little to do but test it. The purpose of this work is to evaluate whether it is possible for SAAB to switch database from Sybase ASE Express Edition to H2. For the evaluation, a Proof-of-Concept is built with the new database connected to the system. With this setup, tests are performed to investigate differences in functionality and performance. The tests are carried out by building workloads that both databases have to handle, measuring the time it takes to complete them. The system's linear search methods are also evaluated and replaced with theoretically better binary search methods. The results show that the Sybase database has better performance when reading from the database, while H2 has much better performance when writing to the database, and that the binary search methods are faster than the linear ones. The conclusion drawn from the evaluation is that H2 works as a replacement for Sybase ASE Express Edition.
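As a rough illustration of the linear-to-binary search change evaluated above, the sketch below times both lookup strategies over the same sorted keys. The data is synthetic and only demonstrates the asymptotic difference; it is not the GlobalEye system or its workloads.

```python
# Sketch: timing linear scans vs. binary search over sorted keys.
# Sizes are chosen only to make the difference visible; data is synthetic.
import bisect
import time

keys = list(range(0, 200_000, 2))        # 100,000 sorted keys
targets = list(range(1, 2_001, 2))       # 1,000 lookups (all misses here)

start = time.perf_counter()
hits_linear = sum(1 for t in targets if t in keys)       # O(n) per lookup
linear_time = time.perf_counter() - start

start = time.perf_counter()
hits_binary = 0
for t in targets:
    i = bisect.bisect_left(keys, t)                      # O(log n) per lookup
    if i < len(keys) and keys[i] == t:
        hits_binary += 1
binary_time = time.perf_counter() - start

print(f"hits {hits_linear}/{hits_binary}, linear {linear_time:.3f} s, binary {binary_time:.5f} s")
```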
|
775 |
Predicting Early Hospital Readmissions using Machine Learning - Temmel, Adam January 2022 (has links)
An early hospital readmission means that a newly discharged patient is readmitted within a small time frame (< 30 days) for reasons directly related to the original admission. This generally risks negatively impacting both the wellbeing of the patient in question and, economically, the care unit admitting the patient. Being able to use modern computational tools to predict, prior to or during discharge, which patients run a high risk of soon being admitted once more can help prevent these incidents altogether. During this study, 65 different machine learning models were trained on a dataset assembled using metrics from 130 American hospitals over a 10-year period. While the dataset is specialised to patients affected by diabetes, the study also presents generalized models trained on a version of the dataset free from attributes unique to patients affected by diabetes. Several of these models are trained using methods specifically designed to counter an inherent class imbalance present within the chosen problem domain. The study presents several performance-related metrics for the trained models, including AUC scores and an approximation of the early readmission cost per patient predicted using the different models. Lastly, the study concludes with some examples of potential alternative methods that may further improve the performance of the models designed for this task, as well as a discussion regarding the ethics of deploying such a solution in the real world.
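The sketch below shows one common way of countering class imbalance in this kind of readmission prediction, namely class weighting, together with AUC evaluation. The data is synthetic and the thesis's 65 actual models and resampling choices are not reproduced here.

```python
# Sketch: class-weighted classifier and AUC on an imbalanced synthetic set.
# This illustrates the imbalance-handling idea, not the thesis's models.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# imbalanced toy data: roughly 10% positives (early readmissions)
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000, class_weight="balanced")
clf.fit(X_train, y_train)

probs = clf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probs))
```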
|
776 |
Compressing Deep Convolutional Neural Networks - Mancevo del Castillo Ayala, Diego January 2017 (has links)
Deep Convolutional Neural Networks, and "deep learning" in general, stand at the cutting edge of a range of applications, from image-based recognition and classification to natural language processing, speech and speaker recognition, and reinforcement learning. Very deep models, however, are often large, complex and computationally expensive to train and evaluate. Deep learning models are thus seldom deployed natively in environments where computational resources are scarce or expensive. To address this problem, we turn our attention to a range of techniques that we collectively refer to as "model compression", where a lighter student model is trained to approximate the output produced by the model we wish to compress. To this end, the output from the original model is used to craft the training labels of the smaller student model. This work contains some experiments on CIFAR-10 and demonstrates how to use the aforementioned techniques to compress a people-counting model whose precision, recall and F1-score are improved by as much as 14% against our baseline.
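A minimal sketch of the student-training idea described above follows: the teacher's softened outputs become the training targets for a smaller student network. The architectures, temperature, and data are illustrative assumptions, not the thesis's people-counting models.

```python
# Sketch: one distillation step where the teacher's outputs craft the
# student's training labels. Networks and data are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 4.0

x = torch.randn(128, 32)                     # one synthetic batch of inputs

with torch.no_grad():
    teacher_logits = teacher(x)              # "labels" taken from the teacher

student_logits = student(x)
loss = F.kl_div(                             # match the softened distributions
    F.log_softmax(student_logits / temperature, dim=1),
    F.softmax(teacher_logits / temperature, dim=1),
    reduction="batchmean",
) * temperature ** 2

optimizer.zero_grad()
loss.backward()
optimizer.step()
print("distillation loss:", loss.item())
```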
|
777 |
Distributed Dominant Resource Fairness using Gradient Overlay - Östman, Alexander January 2017 (has links)
Resource management is an important component of many distributed clusters. A resource manager decides which server a task should run on and which user's task should be allocated next. If a system has multiple users with similar demands, all users should have an equal share of the cluster, making the system fair. Today this is typically done using a centralized server that has full knowledge of all servers in the cluster and of the different users. Having a centralized server brings problems such as a single point of failure and purely vertical scaling of the resource manager. This thesis focuses on fairness between users during task allocation with a decentralized resource manager. A solution called Parallel Distributed Gradient-based Dominant Resource Fairness is proposed. It allows servers to handle a subset of users and to allocate tasks in parallel, while maintaining fairness results close to those of a centralized server. The solution uses a gradient network topology overlay to sort the servers based on their users' current usage, allowing a server to know whether it has the user with the currently lowest resource usage. The solution is compared with pre-existing solutions in terms of fairness and allocation time. The results show that the solution is fairer than the pre-existing solutions as measured by the Gini coefficient. They also show that allocation time scales with the number of users in the cluster, since more users allow more parallel allocations by the servers, although it does not scale as well as existing distributed solutions. With 40 users and over 100 servers, the solution matches the allocation time of a centralized solution, and it outperforms the centralized solution with more users.
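The sketch below shows the core Dominant Resource Fairness step that the thesis distributes: always allocate the next task of the user whose dominant share is currently lowest. The gradient-overlay machinery itself is not shown, and the cluster capacities, demands, and user names are illustrative placeholders.

```python
# Sketch of the centralized DRF allocation loop; the distributed,
# gradient-overlay variant from the thesis is not reproduced here.
TOTAL = {"cpu": 64.0, "mem": 256.0}                  # assumed cluster capacity

users = {
    "alice": {"demand": {"cpu": 1.0, "mem": 4.0}, "used": {"cpu": 0.0, "mem": 0.0}},
    "bob":   {"demand": {"cpu": 3.0, "mem": 1.0}, "used": {"cpu": 0.0, "mem": 0.0}},
}

def dominant_share(user):
    # a user's dominant share is the largest fraction of any one resource they hold
    return max(user["used"][r] / TOTAL[r] for r in TOTAL)

free = dict(TOTAL)
for _ in range(40):                                   # allocate up to 40 tasks
    name, user = min(users.items(), key=lambda kv: dominant_share(kv[1]))
    demand = user["demand"]
    if any(free[r] < demand[r] for r in TOTAL):       # no room for the next task
        break
    for r in TOTAL:
        free[r] -= demand[r]
        user["used"][r] += demand[r]

for name, user in users.items():
    print(name, "dominant share:", round(dominant_share(user), 3))
```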
|
778 |
A Runtime Bounds-Checks Lister for SoftBoundCETS - Hedencrona, Daniel January 2018 (has links)
Memory-safe execution of C programs has been well researched, but the ability to find memory-safety violations before execution has often been overlooked. One approach to memory-safe C is SoftBoundCETS, which infers some memory accesses as statically safe while the rest become runtime-checked. One problem with this approach is that it is not obvious to the programmer which accesses are runtime-checked and which are inferred as safe. This report analyses the approach taken by SoftBoundCETS by implementing a runtime bounds-checks lister for it. The resulting program can track 99% of the inlined runtime bounds checks back to user-program source code lines in programs compiled with -O3 and link-time optimisation. Analysing SoftBoundCETS with this tool reveals that it can eliminate about 35% of the memory loads and stores as statically safe in Coreutils 8.27.
|
779 |
Simultaneous Measurement Imputation and Rehabilitation Outcome Prediction for Achilles Tendon Rupture - Hamesse, Charles January 2018 (has links)
Achilles Tendon Rupture (ATR) is one of the typical soft tissue injuries. Rehabilitation after such musculoskeletal injuries remains a prolonged process with a very variable outcome. Being able to predict the rehabilitation outcome accurately is crucial for treatment decision support. In this work, we design a probabilistic model to predict the rehabilitation outcome for ATR using a clinical cohort with numerous missing entries. Our model is trained end-to-end in order to simultaneously predict the missing entries and the rehabilitation outcome. We evaluate our model and compare it with multiple baselines, including multi-stage methods. Experimental results demonstrate the superiority of our model over these baseline multi-stage approaches with various data imputation methods for ATR rehabilitation outcome prediction.
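The sketch below illustrates the kind of multi-stage baseline mentioned above: impute the missing entries first, then predict the outcome. The thesis's own end-to-end probabilistic model is not reproduced, and the synthetic data stands in for the clinical cohort.

```python
# Sketch: impute-then-predict baseline on synthetic data with missing entries.
# This is the baseline strategy, not the thesis's end-to-end model.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                                    # synthetic patient features
y = (X[:, 0] + X[:, 3] + rng.normal(size=300) > 0).astype(int)   # synthetic outcome
X[rng.random(X.shape) < 0.3] = np.nan                            # 30% missing entries

baseline = make_pipeline(IterativeImputer(random_state=0),
                         RandomForestClassifier(random_state=0))
scores = cross_val_score(baseline, X, y, cv=5, scoring="roc_auc")
print("baseline AUC:", scores.mean().round(3))
```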
|
780 |
Utterances classifier for chatbots’ intents - Joigneau, Axel January 2018 (has links)
Chatbots are the next big improvement in the era of conversational services. A chatbot is a virtual person who can carry out a conversation with a human about a certain subject, using interactive textual skills. Currently, many cloud-based chatbot services are being developed and improved, such as IBM Watson, well known for winning the quiz show “Jeopardy!” in 2011. Chatbots are based on a large amount of structured data. They contain many examples of questions, each associated with a specific intent that represents what the user wants to say. These associations are currently made by hand, and this project focuses on improving this data structuring using both supervised and unsupervised algorithms. A supervised reclassification using an improved Barycenter method reached 85% precision and 75% recall on a data set containing 2005 questions. Questions that did not match any intent were then clustered in an unsupervised way using a K-means algorithm, which reached a purity of 0.5 for the optimal K chosen.
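As an illustration of the two steps described above, the sketch below uses a standard nearest-centroid ("barycenter") classifier over TF-IDF vectors for the supervised step, followed by K-means clustering of unmatched questions. The thesis's improved Barycenter method and its 2005-question data set are not reproduced; the toy utterances and intents are made up for the example.

```python
# Sketch: nearest-centroid intent classification plus K-means clustering of
# unmatched utterances. Utterances and intents are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestCentroid
from sklearn.cluster import KMeans

questions = ["what is my balance", "show my account balance",
             "book a flight to paris", "find flights to london"]
intents = ["balance", "balance", "flight", "flight"]
unmatched = ["play some jazz music", "put on a song", "what's the weather"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(questions).toarray()

# supervised step: assign a new utterance to the closest intent centroid
clf = NearestCentroid()
clf.fit(X, intents)
print(clf.predict(vectorizer.transform(["how much money do I have"]).toarray()))

# unsupervised step: cluster utterances that matched no intent
km = KMeans(n_clusters=2, n_init=10, random_state=0)
print(km.fit_predict(vectorizer.transform(unmatched).toarray()))
```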
|