741 |
Locality-aware Scheduling and Characterization of Task-based Programs. Muddukrishna, Ananya. January 2014 (has links)
Modern computer architectures expose an increasing number of parallel features supported by complex memory access and communication structures. Currently used task scheduling techniques perform poorly since they focus solely on balancing computational load across parallel features and remain oblivious to the locality properties of the supporting structures. We contribute locality-aware task scheduling mechanisms which improve execution time performance on average by 44% and 11% respectively on two locality-sensitive architectures: the Tilera TILEPro64 manycore processor and a four-socket SMP machine based on the AMD Opteron 6172 processor. Programmers need task performance metrics such as the amount of task parallelism and task memory hierarchy utilization to analyze the performance of task-based programs. However, existing tools indicate performance mainly using thread-centric metrics. Programmers therefore resort to low-level and tedious thread-centric analysis methods to infer task performance. We contribute tools and methods to characterize task-based OpenMP programs at the level of tasks, with which programmers can quickly understand important properties of the task graph, such as the critical path and parallelism, as well as properties of individual tasks, such as instruction count and memory behavior.
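As a loose illustration of the task-level metrics mentioned in this abstract, the sketch below computes the critical path and average parallelism (total work divided by critical-path length) of a small task graph. The graph, costs, and function names are made up for illustration and do not reproduce the thesis tooling.

```python
# Minimal sketch: deriving task-graph metrics of the kind described above.
# The task graph, node costs, and function names are illustrative only.

def critical_path_and_parallelism(tasks, deps, cost):
    """tasks: list of task ids; deps: dict task -> list of predecessor tasks;
    cost: dict task -> work (e.g., instruction count or cycles)."""
    finish = {}

    def longest_finish(t):
        # Longest-path (critical-path) computation via memoized recursion over the DAG.
        if t not in finish:
            preds = deps.get(t, [])
            finish[t] = cost[t] + (max(longest_finish(p) for p in preds) if preds else 0)
        return finish[t]

    total_work = sum(cost[t] for t in tasks)                # work of all tasks
    critical_path = max(longest_finish(t) for t in tasks)   # longest dependency chain
    return critical_path, total_work / critical_path        # average available parallelism

# Example: a small fork-join graph of four tasks.
tasks = ["a", "b", "c", "d"]
deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
cost = {"a": 10, "b": 40, "c": 40, "d": 10}
print(critical_path_and_parallelism(tasks, deps, cost))     # (60, 100/60 ~ 1.67)
```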
|
742 |
Scenario Based Comparison Between Risk Assessment Schemes. Rydén, Calle. January 2020 (has links)
Background. In the field of risk management focusing on digital infrastructure, there is uncertainty about which methods and algorithms are relevant and correct. Behind this uncertainty lies a need to test and evaluate different risk management analysis methods in order to determine how effective they are relative to each other. Purpose. The purpose of this thesis is to construct a reproducible and universal method for comparing risk management analysis methods. This is based on the need to compare two risk assessment analysis methods: one relies solely on impact information, and the other expands on that concept by also utilizing information about the network environment. Method. A network is modeled into a scenario. A risk assessment of the scenario is conducted by risk assessment experts and used as the correct solution. The tested risk management analysis methods are applied to the scenario and their results are compared with the expert risk assessment. The distance between the assessments is measured with the Mean Square Error; a smaller distance between an assessment and the experts' assessment indicates that it is more correct. Result. The results show that it is possible to reproducibly compare risk management analysis methods by comparing their respective outputs with an established truth. The conducted comparison shows that a method that uses network environment data is capable of producing a more correct assessment than one which uses only impact data. Conclusion. A scenario-based approach to comparing risk management analysis methods for risk assessment has proven effective.
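A minimal sketch of the comparison step described above, assuming both the expert assessment and a tested method's output can be expressed as risk scores over the same set of assets; the asset names and scores below are invented for illustration, not taken from the thesis scenario.

```python
# Sketch of the comparison metric described above: Mean Square Error between
# a method's risk scores and the expert ("ground truth") scores.
# Asset names and score values are illustrative, not taken from the thesis.

def mean_square_error(expert, method):
    """expert, method: dicts mapping the same asset ids to risk scores."""
    assert expert.keys() == method.keys()
    diffs = [(expert[a] - method[a]) ** 2 for a in expert]
    return sum(diffs) / len(diffs)

expert_scores   = {"web-server": 8.0, "database": 9.0, "workstation": 3.0}
impact_only     = {"web-server": 6.0, "database": 9.0, "workstation": 5.0}
impact_plus_env = {"web-server": 7.5, "database": 8.5, "workstation": 3.5}

# The method with the lower MSE is closer to the expert assessment.
print(mean_square_error(expert_scores, impact_only))      # 2.67
print(mean_square_error(expert_scores, impact_plus_env))  # 0.25
```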
|
743 |
Lastsimulator för anodiseringsutrustning : Realtidssimulering av en ytbehandlingsprocess på BeagleBone Black / Load simulator for anodization equipment: Real-time simulation of a surface treatment process on BeagleBone Black. Sjökvist, Bob; Eriksson, Fredrik. January 2021 (has links)
Prevas Development AB wants to make service, maintenance, development, and troubleshooting more efficient by simulating an anodization process on a single-board computer. By simulating this process, Prevas can avoid the chemical substances involved in a real anodization process. The difficulty in the project lies in delivering the right information to the right place at the right time. The purpose of our thesis project was to choose a suitable simulation platform and to build a prototype of a simulator on this platform. Choosing a development environment was also part of our work. After a combined theoretical and practical evaluation of a number of possible platforms and development environments, the choice fell on the BeagleBone Black, running the load simulator on the single-board computer's main A8 processor. We chose the Python programming language for the graphical interface and the information handling, since it is well suited for these purposes. For the data communication, C was chosen for its speed. We succeeded in producing a simulator that can extract the right information and send it to the right place at the right time intervals. This was demonstrated by measuring over the load that the simulator is to deliver information to and comparing this against a real anodization process.
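As a rough illustration of the core timing requirement (delivering the right value at the right time), the sketch below runs a fixed-rate simulation loop in Python. The process model and the send_to_load transport are hypothetical placeholders; the real simulator's communication layer is written in C.

```python
# Minimal sketch of a fixed-rate simulation loop of the kind described above.
# simulate_step() and send_to_load() are hypothetical placeholders; in the real
# system the communication layer is implemented in C for speed.
import time

def simulate_step(t):
    # Hypothetical process model: bath current ramps up and settles.
    return {"bath_current_A": min(50.0, 5.0 * t), "bath_temp_C": 20.0 + 0.1 * t}

def send_to_load(values):
    print(values)  # placeholder for the real communication layer

def run(period_s=0.1, duration_s=1.0):
    start = time.monotonic()
    next_deadline = start
    while (t := time.monotonic() - start) < duration_s:
        send_to_load(simulate_step(t))
        next_deadline += period_s
        time.sleep(max(0.0, next_deadline - time.monotonic()))  # keep a fixed send rate

if __name__ == "__main__":
    run()
```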
|
744 |
Enabling container failover by extending current container migration techniques. Terneborg, Martin. January 2021 (has links)
Historically, virtual machines have been the backbone of the cloud industry, allowing cloud providers to offer virtualized multi-tenant solutions. A key aspect of the cloud is its flexibility and abstraction of the underlying hardware. Virtual machines enhance this aspect by supporting live migration and failover. Live migration is the process of moving a running virtual machine from one host to another, and failover ensures that a failed virtual machine is automatically restarted (possibly on another host). Today, as containers continue to increase in popularity and make up a larger portion of the cloud, often replacing virtual machines, it becomes increasingly important for these processes to be available to containers as well. However, little support for container live migration and failover exists, and what exists remains largely experimental. Furthermore, no solution seems to exist that offers both live migration and failover for containers in a unified solution. The thesis presents a proof-of-concept implementation and description of a system that supports both live migration and failover for containers by extending current container migration techniques. It can offer this to any OCI-compliant container and could therefore potentially be integrated into current container and container orchestration frameworks. In addition, measurements of the proof-of-concept implementation are provided and used to compare it to a current container migration technique. The thesis also presents an overview of the history and implementation of containers and of current migration techniques, and introduces metrics that can be used to measure different migration techniques. It concludes that current container migration techniques can be extended to support both live migration and failover, and that in doing so one can expect a downtime equal to, and a total migration time lower than, that of pre-copy migration. Supporting both live migration and failover, however, comes at the cost of an increased amount of data that needs to be transferred between the hosts.
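A minimal sketch of the failover idea on top of checkpoint-based migration: periodically checkpoint the running container to shared storage and restore the latest checkpoint on a standby host when a health check fails. The checkpoint_container, restore_container, and is_healthy helpers are placeholders for whatever migration mechanism is used (for example CRIU-based tooling); this is not the thesis implementation.

```python
# Sketch of combining periodic checkpointing (for failover) with container
# migration machinery. checkpoint_container(), restore_container() and
# is_healthy() are hypothetical placeholders to be wired to real tooling.
import time

def checkpoint_container(container_id, dest_dir):  # placeholder
    raise NotImplementedError

def restore_container(container_id, checkpoint, host):  # placeholder
    raise NotImplementedError

def is_healthy(container_id, host):  # placeholder health probe
    raise NotImplementedError

def run_with_failover(container_id, primary, standby, shared_dir, interval_s=5.0):
    latest = None
    while True:
        if is_healthy(container_id, primary):
            # Periodic checkpoint to shared storage; this is the extra data
            # transfer the abstract names as the cost of failover support.
            latest = checkpoint_container(container_id, shared_dir)
        elif latest is not None:
            # Primary failed: restart from the most recent checkpoint on the standby.
            restore_container(container_id, latest, standby)
            return
        time.sleep(interval_s)
```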
|
745 |
Computation offloading of 5G devices at the Edge using WebAssembly. Hansson, Gustav. January 2021 (has links)
With an ever-increasing percentage of the human population connected to the internet, the amount of data produced and processed is at an all-time high. Edge computing has emerged as a paradigm to handle this growth and, combined with 5G, enables complex time-sensitive applications running on resource-restricted devices. This master thesis investigates the use of WebAssembly in the context of computational offloading at the edge. The focus is on utilizing WebAssembly to move computationally heavy parts of a system from an end device to an edge server. One objective is to improve program performance by reducing the execution time and energy consumption on the end device. A proof-of-concept offloading system is developed to research this. The system is evaluated on three different use cases: calculating Fibonacci numbers, matrix multiplication, and image recognition. Each use case is tested on a Raspberry Pi 3 and a Pi 4, comparing execution of the WebAssembly module both locally and offloaded. Each test is also run natively on both the server and the end device to provide a baseline for comparison.
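A minimal sketch of the local-versus-offloaded comparison, assuming an edge endpoint that runs the same workload; the URL and the Fibonacci stand-in below are illustrative, and the real system executes WebAssembly modules on both sides.

```python
# Sketch of the local-vs-offloaded timing comparison described above.
# The edge endpoint URL and the workload are illustrative; the real system
# runs the same WebAssembly module locally and on the edge server.
import time
import urllib.request

def fib(n):  # stand-in for the locally executed workload
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def run_local(n):
    start = time.perf_counter()
    result = fib(n)
    return result, time.perf_counter() - start

def run_offloaded(n, url="http://edge.example:8080/fib"):  # hypothetical endpoint
    start = time.perf_counter()
    with urllib.request.urlopen(f"{url}?n={n}") as resp:
        result = int(resp.read())
    return result, time.perf_counter() - start

if __name__ == "__main__":
    print("local:", run_local(30))
    # print("offloaded:", run_offloaded(30))  # requires a reachable edge server
```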
|
746 |
En tillämpning av UTAUT för val av JS-frontend ramverk / An application of UTAUT for choosing a JavaScript front-end framework. Hübenette, Philip; Eidman, Johan. January 2021 (has links)
The aim of this thesis is to examine differences between the three JavaScript frameworks Angular, React, and Vue along the Unified Theory of Acceptance and Use of Technology (UTAUT) model, with the purpose of supporting developers in their choice of framework. Previous studies focused on advantages and disadvantages of the frameworks from a technical perspective; we therefore decided to carry out a qualitative study based on developers' experience. Our method of choice was a case study where the phenomenon was "why a framework is chosen", which guided and motivated our research. We used a survey to collect data and gathered responses from more than 500 developers. The results are presented in tables and charts which we analyze and compare with the results of previous related studies. The conclusion of our research is that Vue is a good option for a developer who is learning on their own, looking for a framework that is easier to learn, or developing a smaller project alone. Angular is considered the most complicated framework to learn, but is a good option for anyone who is looking for a job or is involved in bigger projects; it has also been available for a longer time and therefore offers broader possibilities. React has the lowest overall values in all the UTAUT categories, but has shown to be a viable option for someone looking for better job opportunities. The advantage React has over Angular is that it is easier to learn. The final results of our thesis are recommendations and not strict rules. Decisions for and against a framework depend heavily on the use case and other varying circumstances. Our study can hopefully give better insight and support developers in their choice of framework.
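A minimal sketch of how survey answers could be aggregated per framework and UTAUT construct (mean Likert score per group); the responses and construct labels below are invented for illustration and are not the collected data.

```python
# Sketch of aggregating Likert-scale survey answers per UTAUT construct and
# framework (mean score per group). The responses below are made up; the real
# study collected answers from more than 500 developers.
from collections import defaultdict

responses = [  # (framework, UTAUT construct, 1-5 Likert score)
    ("Vue", "Effort Expectancy", 5), ("Vue", "Effort Expectancy", 4),
    ("Angular", "Effort Expectancy", 2), ("Angular", "Performance Expectancy", 4),
    ("React", "Performance Expectancy", 4), ("React", "Effort Expectancy", 3),
]

def mean_scores(rows):
    sums, counts = defaultdict(float), defaultdict(int)
    for framework, construct, score in rows:
        sums[(framework, construct)] += score
        counts[(framework, construct)] += 1
    return {key: sums[key] / counts[key] for key in sums}

for (framework, construct), mean in sorted(mean_scores(responses).items()):
    print(f"{framework:8s} {construct:25s} {mean:.2f}")
```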
|
747 |
Comparative study of Canvas and Google Classroom Learning Management Systems using usability heuristics. Gattupalli, Monica; Reddivari, Ananya. January 2021 (has links)
Learning management systems (LMS) play a key role in education. Educational institutions use LMS platforms to make communication and collaboration between teachers and students easier, which inspired this study to measure user satisfaction with different platforms by applying usability heuristics. A survey evaluation is used to measure user satisfaction, and the main objective of the study is to measure the user experience when using the interactive interfaces. The selected LMS platforms are Canvas and Google Classroom. The experiment involves creating a dummy course in each LMS platform, fabricating course assignments, gathering users, and enrolling them into the platforms. The enrolled users complete the assignments and take a survey on their experience with the platforms. The time taken by each user to complete the assignments and the survey is recorded, and comments given outside the timed tasks are collected. The survey responses are collected and each question is interpreted graphically. Statistical attributes such as population variance and standard deviation are calculated to measure the user experience and are tabulated for each LMS platform. User satisfaction with Canvas and Google Classroom was measured using usability heuristics. From the survey results, we conclude that the Canvas web application obeys all of the usability heuristics, whereas Google Classroom obeys only seven of them.
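A minimal sketch of the statistics mentioned above, computing the population variance and standard deviation of per-user scores with Python's statistics module; the score lists are invented for illustration.

```python
# Sketch of the statistics used above: population variance and standard
# deviation of per-user scores. The score lists are illustrative only.
import statistics

canvas_scores = [4, 5, 4, 3, 5, 4]        # e.g. per-user satisfaction scores
classroom_scores = [3, 4, 2, 4, 3, 3]

for name, scores in [("Canvas", canvas_scores), ("Google Classroom", classroom_scores)]:
    var = statistics.pvariance(scores)    # population variance
    std = statistics.pstdev(scores)       # population standard deviation
    print(f"{name}: mean={statistics.mean(scores):.2f} pvar={var:.2f} pstd={std:.2f}")
```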
|
748 |
Assessing HTTP Security Header implementations : A study of Swedish government agencies’ first line of defense against XSS and client-side supply chain attacks. Johnson, Ludwig; Mårtensson, Lukas. January 2021 (has links)
Background. Security on the web is a fundamental requirement as the web becomes a bigger part of society and more information than ever is shared over it. However, as recent incidents have shown, even Swedish government agencies have had issues with their website security. One example is when a client-side supply chain used by several governmental websites was hacked and malicious JavaScript was subsequently found on those websites. Hence this study is aimed at assessing the security of Swedish government agencies' first line of defense against attacks such as XSS and client-side supply chain attacks. Objectives. The main objective of the thesis is to assess the first line of defense, namely HTTP security headers, of Swedish government agency websites. In addition, statistics on which HTTP security headers are actually used by Swedish government agencies today were gathered for comparison with similar studies. Methods. To fulfill the objectives of the thesis, a scan of all Swedish government agency websites found in Myndighetsregistret was completed, and an algorithm was developed to assess the implementation of the security features. In order to facilitate tunable assessments for different types of websites, the algorithm has granular weights that can be assigned to each test to make it more generalized. Results. The results show a low overall implementation rate of the various HTTP security headers among Swedish government agency websites. However, when compared to similar studies, the adoption of all security features is higher among the Swedish government agency websites tested in this thesis. Conclusions. Previous tools and studies mostly checked whether a header was implemented or not. With our algorithm, the strength of the security header implementation is also assessed. According to our results, there is a significant difference between whether a security header has been implemented and whether it has been implemented well and provides adequate security. Therefore, traditional tools for testing HTTP security headers may be inefficient and misleading.
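A minimal sketch of the weighted-scoring idea, assuming a presence-only check of a few well-known security headers with tunable weights; the thesis algorithm also grades how well each header is configured, which this sketch does not.

```python
# Sketch of a weighted HTTP security header check in the spirit of the
# assessment above. Weights and the presence-only scoring are illustrative;
# the thesis algorithm also grades the strength of each header's configuration.
import urllib.request

SECURITY_HEADERS = {            # header name -> weight (tunable per website type)
    "Content-Security-Policy": 3.0,
    "Strict-Transport-Security": 3.0,
    "X-Content-Type-Options": 1.0,
    "X-Frame-Options": 1.0,
    "Referrer-Policy": 1.0,
}

def score_headers(url):
    with urllib.request.urlopen(url) as resp:
        headers = {k.lower(): v for k, v in resp.getheaders()}
    achieved = sum(w for name, w in SECURITY_HEADERS.items() if name.lower() in headers)
    return achieved / sum(SECURITY_HEADERS.values())  # 0.0 (none present) .. 1.0 (all present)

if __name__ == "__main__":
    print(score_headers("https://example.com"))
```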
|
749 |
Integration of CloudMe to Sonos wireless HiFi speaker system. Velusamy Chandramohan, Pavithra. January 2013 (has links)
CloudMe is a cloud computing service for business and home users. CloudMe lets users store personal files such as music, video, documents, and images. The primary focus of this thesis is on music. Personal music files can be uploaded to CloudMe manually or by using CloudMe sync, in any order, just as on a personal computer. CloudMe offers different services to access the cloud from other devices such as smartphones, web browsers, and the home computer. The Sonos wireless HiFi system is a set of Sonos components interconnected over a mesh network whose primary function is to play digital audio. The components include subwoofers, speakers, and Bridges used to connect the wireless speakers. The Sonos system is connected to the internet through Ethernet or via Wi-Fi. Sonos gives access to music libraries stored on a computer, free internet radio stations, and additional music services. The controller for the complete system comes in various forms, such as iPhone and Android apps and dedicated Sonos controllers. However, with Sonos alone, a computer has to be running all the time in order to access personal music files stored on the personal computer. Connecting CloudMe to Sonos removes the requirement of an always-on computer. Instead, selected personal music files can be stored in the user's private CloudMe account, and the music can be accessed from the cloud storage through the internet at any time. The main objective of this thesis is to implement the APIs specified by Sonos that are required in order to access CloudMe from Sonos. Each API handles a specific task to present CloudMe through Sonos to the user; for example, one API handles user authentication and another handles metadata access. All the APIs are implemented on the server provided by CloudMe. This integration not only provides access to the music files in the way they are stored in the cloud, but is also implemented so that they can be accessed via categories such as artist, album, genre, and composer, as well as via playlists stored in the cloud. In order to build this menu view of all the music, the metadata of the entire music library in CloudMe is retrieved and processed to present the different music options in the menu.
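A hypothetical sketch of the two kinds of API handlers mentioned above, one for authentication and one that exposes cloud music metadata as browsable categories; the function names, cloud_client object, and menu structure are placeholders rather than the actual Sonos or CloudMe interfaces.

```python
# Hypothetical sketch of the two kinds of API handlers described above: one for
# user authentication and one that presents cloud music metadata as browsable
# categories. cloud_client and the handler names are placeholders, not the
# actual Sonos or CloudMe interfaces.

CATEGORIES = ["artists", "albums", "genres", "composers", "playlists"]

def handle_authentication(username, password, cloud_client):
    # Placeholder: exchange credentials for a session with the cloud service.
    return cloud_client.login(username, password)

def handle_get_metadata(container_id, cloud_client, session):
    # Root menu: present the music library grouped by category rather than by folder.
    if container_id == "root":
        return [{"id": cat, "title": cat.capitalize(), "is_container": True}
                for cat in CATEGORIES]
    # Otherwise list the tracks or sub-containers under the requested category.
    return cloud_client.list_children(session, container_id)
```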
|
750 |
Learning Decision Trees and Random Forests from Histogram Data : An application to component failure prediction for heavy duty trucks. Gurung, Ram Bahadur. January 2017 (has links)
Large volumes of data have become commonplace in many domains. Machine learning algorithms can be trained to look for useful hidden patterns in such data. Sometimes these big data need to be summarized into a manageable size, for example by using histograms. Traditionally, machine learning algorithms can be trained on data expressed as real numbers and/or categories, but not on a complex structure such as a histogram. Since machine learning algorithms that can learn from histogram data have not been explored to any major extent, this thesis intends to further explore this domain. The thesis is limited to classification algorithms, in particular tree-based classifiers such as decision trees and random forests. Decision trees are among the simplest and most intuitive algorithms to train. A single decision tree might not be the best algorithm in terms of predictive performance, but it can be largely enhanced by considering an ensemble of many diverse trees as a random forest; this is the reason why both algorithms were considered. The objective of this thesis is therefore to investigate how these algorithms can be adapted to learn better from histogram data. Our proposed approach uses multiple bins of a histogram simultaneously to split a node during the tree induction process. Treating bins simultaneously is expected to capture dependencies among them, which could be useful. The proposed approaches were evaluated experimentally by comparing them with the standard approach of growing a tree where a single bin is used to split a node. Accuracy and the area under the receiver operating characteristic (ROC) curve (AUC), along with the average time taken to train a model, were used for comparison. For experimental purposes, real-world data from a large fleet of heavy duty trucks were used to build a component-failure prediction model. These data contain information about the operation of trucks over the years, where most operational features are summarized as histograms. Experiments were also performed on a synthetically generated dataset. The results show that the proposed approach outperforms the standard approach in predictive performance and compactness of the model, but lags behind in training time. This thesis was motivated by a real-life problem encountered in the operation of heavy duty trucks in the automotive industry while building a data-driven failure-prediction model, so all the details about collecting and cleansing the data and the challenges encountered while preparing the data for training are presented in detail.
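A minimal sketch contrasting the standard single-bin split with a split that uses several histogram bins simultaneously, here via a fixed linear combination of bins; this is only one possible way to use multiple bins at once and not necessarily the thesis's construction. Gini impurity is used as the split criterion.

```python
# Sketch contrasting a single-bin split with a multi-bin split on a histogram
# feature. Using a fixed linear combination of bins is only one illustrative way
# to use several bins simultaneously; it is not necessarily the thesis's method.
import numpy as np

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_quality(values, labels, threshold):
    left, right = labels[values <= threshold], labels[values > threshold]
    if len(left) == 0 or len(right) == 0:
        return np.inf
    n = len(labels)
    return (len(left) * gini(left) + len(right) * gini(right)) / n  # weighted impurity

# Toy data: each row is a 4-bin histogram feature, y is the class label.
X = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.1, 0.5, 0.3, 0.1],
              [0.3, 0.1, 0.5, 0.1],
              [0.1, 0.1, 0.3, 0.5]])
y = np.array([0, 0, 1, 1])

single_bin = split_quality(X[:, 0], y, threshold=0.2)      # standard: one bin at a time
weights = np.array([1.0, 1.0, -1.0, -1.0])                 # multi-bin projection
multi_bin = split_quality(X @ weights, y, threshold=0.0)   # bins used simultaneously
# Lower weighted impurity is better: 0.5 vs 0.0 here, i.e. no single threshold
# on bin 0 separates the classes, but a combination of bins does.
print(single_bin, multi_bin)
```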
|