581 |
Blockkedjesystem för smarta elnät : En undersökning om befintliga blockkedjesystem där deras egenskaper analyseras, och kvantitativa mätningar gällande deras responstid utförs (Blockchain systems for smart grids: a study in which existing blockchain systems are analysed and quantitative measurements of their response times are performed). Prestberg, Sara. January 2020.
The interest in blockchain technology is increasing, and the technology is being applied in more areas than just cryptocurrencies. To facilitate the use of blockchains, suppliers are launching blockchain-as-a-service, a fusion of cloud services and blockchains. The objective of this project is to investigate which existing blockchain system is most suitable for an energy trading scenario, where the idea is that in the future it will be possible to buy and sell local energy via smart grids. The project starts with literature studies to gain basic and in-depth knowledge about blockchain systems, cryptocurrencies, and smart grids. Furthermore, the availability, documentation, and payment options of a number of systems are investigated. Based on how well they meet these parameters, two systems move on to the test phase. To perform the quantitative measurements, a blockchain member is deployed at each supplier together with the associated software. To measure the response time, a program is written that can send one or more transactions, small or large. The results show that both systems can fit into an energy trading scenario, but that Kaleido has a faster response time than Azure Blockchain Service. The conditions have been largely the same for both blockchain systems, but a number of factors can affect the measurement results. The report is rounded off with an ethical and societal discussion that addresses the blockchain's impact on the black market, how the technology can be of benefit with regard to COVID-19, and whether it is really a sustainable alternative given its energy consumption, and closes with proposals for further work.
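The response-time measurement described in the abstract can be sketched as a small timing harness. The `fake_send` stub below stands in for a real call to a Kaleido or Azure Blockchain Service endpoint; the transport, payloads, and summary statistics are assumptions for illustration, not the thesis's actual code:

```python
import statistics
import time

def measure_response_times(send_tx, payloads):
    """Time each call to send_tx and return per-transaction latencies in seconds."""
    latencies = []
    for payload in payloads:
        start = time.perf_counter()
        send_tx(payload)  # in practice: an HTTP POST to the provider's endpoint
        latencies.append(time.perf_counter() - start)
    return latencies

def summarize(latencies):
    return {
        "min": min(latencies),
        "max": max(latencies),
        "mean": statistics.mean(latencies),
    }

# Stub transaction sender standing in for a real blockchain-as-a-service call.
def fake_send(payload):
    time.sleep(0.001)

# Five small and five large (4 KiB) transactions, mirroring the small/large setup.
lat = measure_response_times(fake_send, [b"small"] * 5 + [b"x" * 4096] * 5)
stats = summarize(lat)
```

Swapping `fake_send` for a real sender would reproduce the per-transaction measurement setup without changing the harness.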
|
582 |
A study and review of distributed ledger technologies. Olsson, Maria. January 2020.
With the rise in popularity of cryptocurrencies, distributed ledger technology is a term that has gained traction. The aim of this study is to review and compare the distributed ledger technologies blockchain and directed acyclic graph, examining their internal structures as well as some platforms and existing areas of application. An implementation, whose goal is to illustrate the components of a possible distributed ledger solution and how they might interact, has been made in the form of a smart contract deployed on a simulated distributed ledger network. To explain the foundations of distributed ledger technology, a brief overview is given of cryptography, the underlying data structures, and the frameworks used in this study. The literature study has been conducted by collecting and reviewing primarily scientific articles on distributed ledger technologies and consensus algorithms, as well as white papers on selected distributed ledger platforms. The implementation has been built using the framework Hyperledger Fabric. The results chapter reviews how the implemented smart contract fulfills the concrete goals. The study concludes with a discussion of how distributed ledgers might be used in the future, what might be done to further develop the implemented smart contract, and some of the ethical concerns surrounding distributed ledger technology.
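As a minimal illustration of the internal structure such a study reviews, a hash-linked chain of blocks can be sketched in a few lines. This is a generic toy, not the Hyperledger Fabric network or smart contract used in the thesis:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over the block's fields (excluding its own hash)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    """Link a new block to the previous one via its hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev": prev, "data": data}
    block["hash"] = block_hash(block)
    chain.append(block)
    return chain

def verify(chain):
    """Recompute every hash and check each back-link; any tampering breaks it."""
    for i, b in enumerate(chain):
        body = {k: v for k, v in b.items() if k != "hash"}
        if b["hash"] != block_hash(body):
            return False
        if i and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, "genesis")
append_block(chain, {"from": "a", "to": "b", "amount": 3})
```

Changing any field of an earlier block invalidates its hash and every back-link after it, which is the tamper-evidence property both blockchains and DAG-based ledgers build on.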
|
583 |
Inferring Dataset Relations using Knowledge Graph Metadata. Edström, August; Isaksson, Johan. January 2020.
The web site dataportalen.se aims to increase the availability of Swedish open datasets. This is achieved by collecting metadata about the open datasets provided by Swedish organizations. At the time of writing, metadata from more than two thousand datasets reside in the portal, and this number is set to increase. As the number of datasets grows, browsing for relevant information becomes increasingly difficult and time-consuming. The web site supports searching by text and then filtering the results by theme, organization, file format, or license. We believe that there is potential to connect the datasets, thus making it easier to find a dataset of interest. The idea is to find common denominators in the metadata of the datasets. Since no user data is available, the datasets had to be connected based solely on the metadata. The datasets are annotated with metadata such as title, description, keywords, and themes. By comparing metadata from different datasets, a measure of similarity can be computed. This measure can then be used to find the datasets most relevant to a specific dataset. The achieved results suggest that it is indeed possible to find similar datasets by analyzing only the metadata. By exploring various methods, we found that text data holds useful information that can be used to find relations between datasets. Using a related work as a benchmark, we found that our results are as good, if not better. Furthermore, the approach taken in this project is quite general and should theoretically be applicable in other scenarios where textual data is available.
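The metadata-similarity idea can be sketched as a bag-of-words cosine comparison over titles and descriptions. The dataset names and texts below are invented examples, and the thesis's actual method may tokenize and weight fields quite differently:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine of the angle between two bag-of-words term-frequency vectors."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented dataset metadata (title + description concatenated).
datasets = {
    "bike-traffic": "daily bicycle traffic counts stockholm",
    "road-traffic": "daily road traffic counts stockholm",
    "air-quality": "hourly air quality measurements gothenburg",
}

def most_similar(name):
    """Return (other_dataset, score) for the closest dataset by metadata alone."""
    others = [(other, cosine_similarity(datasets[name], meta))
              for other, meta in datasets.items() if other != name]
    return max(others, key=lambda p: p[1])
```

Here `most_similar("bike-traffic")` picks `"road-traffic"`, since the two share four of five terms, while the air-quality metadata shares none.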
|
584 |
Automated Model Generation using Graphwalker Based On Given-When-Then Specifications. Korhonen, Joakim. January 2020.
Software testing is often a laborious and costly process, as testers need extensive domain-specific knowledge and engineering experience to manually create test cases for diverse test scenarios. In many industrial projects these scenarios are represented in requirement specification documents. Since the creation of test cases from these requirements is manual and error-prone, researchers have proposed methods to automate the creation and execution of tests. One of the most popular approaches is called model-based testing, which manually or automatically derives tests from models of the system under test. Since most of the effort in model-based testing lies in the creation of the model, this thesis aims at improving a model-based testing tool so that it can generate a model from natural language, which is what requirements usually are written in. Given-When-Then is a test-case writing template used to specify a system's behavior. To bring natural language processing into a model-based testing tool, an extension for GraphWalker was created. GraphWalker is a popular open-source model-based testing tool that can create, edit, and test models. Its models are based on finite state machines with elements such as vertices and edges; a model can also change its state, change the values of variables, and block access to certain elements. GraphWalker cannot, however, generate models from natural language requirements, and this thesis shows how such requirements can be transformed into models. The extension accepts requirements through both manual input and a JSON file, processes the text, and tags each word. These tags are then used to interpret the meaning of each sentence and either create a transition, change a value, or block access to a selected element. The results of this thesis show that the extension is an applicable method to automatically generate models for the GraphWalker tool, and it can be used and improved by both researchers and practitioners.
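A heavily simplified version of the Given-When-Then interpretation can be sketched with a regular expression instead of full word tagging. The function and the model shape are illustrative only, not GraphWalker's actual parsing or file format:

```python
import re

def gwt_to_model(requirement):
    """Turn 'Given <state> When <action> Then <state>' into vertices and an edge.

    A toy stand-in for the tagging-based interpretation described above."""
    m = re.match(r"Given (.+?) when (.+?) then (.+)", requirement, re.IGNORECASE)
    if not m:
        raise ValueError(f"not a Given-When-Then requirement: {requirement!r}")
    given, when, then = (s.strip() for s in m.groups())
    return {
        "vertices": [given, then],  # states of the system before and after
        "edges": [{"from": given, "to": then, "action": when}],
    }

model = gwt_to_model(
    "Given a logged-out user When valid credentials are submitted "
    "Then the user is logged in"
)
```

The Given and Then clauses become vertices (states) and the When clause becomes the labeled edge (transition) between them, mirroring how a requirement maps onto a finite state machine.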
|
585 |
Evaluating remote and local web rendering of real-time interactive 2D graphics using Blazor. Andersson Tholin, Alexander. January 2020.
With the growing popularity of the web, companies are starting to extend current development to reflect this. When extending desktop applications to the web, it can be difficult to choose which techniques and technologies to use, as current solutions might not be directly applicable. Rendering high-performance interactive 2D graphics on the web can be achieved in multiple ways. The rise of open standards such as the Canvas API allows the client to render natively in the browser, provided it can receive the full object state. In some cases this is simply not possible, because the object state is too large or the client lacks sufficient hardware. A possible solution is migrating the rendering of the graphics from the client to the server. However, remote rendering comes with its own set of issues, as it often lacks high interaction capabilities and would theoretically require more resources as the number of connections grows. This thesis evaluates the performance differences and individual capabilities of remote and local rendering in terms of scalability and Quality of Experience using ASP.NET Core Blazor. The evaluation is done through the implementation of four different solutions for the scenario, based on Canvas and SVG using remote and local rendering. Different test configurations, such as how much data should be rendered and how many clients are connected, were used to see how they affect response time and interaction latency. The results show that remote rendering performed better in all scalability tests, with remote SVG being the recommended approach. Due to implementation issues and the lack of a proper testing environment, the number of concurrent clients was downsized, which caused problems when analyzing the results and made it difficult to draw concrete conclusions. In tests with increasing image size, the client solution suffered memory exceptions, preventing the local versions from being tested further. When interaction capabilities were tested by measuring interaction latency, the SVG technology significantly outperformed Canvas, since SVG does not require a full re-render of the elements.
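Comparing two rendering approaches by interaction latency reduces to collecting latency samples per approach and comparing their distributions. The sketch below shows one such comparison; the millisecond figures are invented, not the thesis's measurements:

```python
import statistics

def percentile(samples, p):
    """Nearest-rank percentile of a sample set (p in 0..100)."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
    return ordered[k]

def compare(name_a, samples_a, name_b, samples_b):
    """Median and 95th-percentile latency per approach, for side-by-side reading."""
    return {
        name_a: {"median": statistics.median(samples_a),
                 "p95": percentile(samples_a, 95)},
        name_b: {"median": statistics.median(samples_b),
                 "p95": percentile(samples_b, 95)},
    }

# Illustrative interaction-latency samples in milliseconds (made-up numbers).
svg_remote = [12, 14, 13, 15, 40, 13, 12, 14, 13, 16]
canvas_remote = [25, 27, 26, 90, 28, 26, 25, 27, 29, 26]
report = compare("svg", svg_remote, "canvas", canvas_remote)
```

Reporting a high percentile alongside the median matters here: a single slow full re-render (the outliers above) barely moves the median but dominates the tail that users actually feel.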
|
586 |
Privacy-preserving proof-of-location using homomorphic encryption. Lee, Carmen. January 2020.
Location-based software services require knowledge about a user's geographic data. Sharing these data risks compromising the user's privacy, exposes the user to targeted marketing, and enables potentially undesired behavioural profiling. Today, there exist several privacy-preserving proof-of-location solutions. However, these solutions often rely on a trusted third party, which reduces a user's control of their own data, or feature novel encryption schemes that may contain yet undiscovered security vulnerabilities. This thesis adopts a generic homomorphic encryption scheme and presents a way of generating location proofs without a user having to reveal their location.
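The additive homomorphism at the heart of such a scheme can be demonstrated with a toy Paillier keypair. The primes below are far too small to be secure and the "coordinate" is a single integer, so this illustrates the property the thesis relies on, not its actual protocol:

```python
import math
import secrets

# Toy Paillier keypair with tiny primes -- illustration only, NOT secure.
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # valid because g = n + 1

def encrypt(m):
    r = 0
    while math.gcd(r, n) != 1:       # random r coprime to n
        r = secrets.randbelow(n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Additive homomorphism: a verifier can add a blinding offset to an
# encrypted coordinate without ever seeing the coordinate itself.
ct = encrypt(42)                  # user's (toy) coordinate
shifted = (ct * encrypt(7)) % n2  # verifier adds 7 under encryption
```

Multiplying ciphertexts adds the plaintexts, so `decrypt(shifted)` recovers 49 even though neither party manipulated 42 in the clear; this is the building block that lets a location be checked without being revealed.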
|
587 |
A tool for extracting and visualizing source code metric trends. Sjövall, Albin. January 2022.
The quality of code can be measured using source code metrics, and the trends of these metrics over time can reveal a potential decrease in code quality. Low-quality code leads to technical debt, resulting in a higher cost of maintenance for a software project. In the context of a research project on this topic, a tool was created to analyse source code metric trends for Git repositories. This tool had two problems: it was slow and unreliable. Both issues were caused by the tool requiring target repositories to be built during metrics collection. To solve them, the tool was modified to collect metrics straight from the source code files, removing the build step and thereby increasing performance and reliability. The result was a sixfold increase in performance, and the tool now functions on a wider range of repositories.
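Collecting metrics straight from the files, with no build step, can be sketched as simple line-based counting. The metric set and the C-style snippet below are invented for illustration; the real tool computes richer metrics over whole Git histories:

```python
def source_metrics(source):
    """Lightweight line-based metrics computed straight from the file text,
    without building the project -- the idea behind the modified tool."""
    lines = source.splitlines()
    code = [l for l in lines if l.strip() and not l.strip().startswith("//")]
    comments = [l for l in lines if l.strip().startswith("//")]
    return {
        "loc": len(lines),                                      # total lines
        "sloc": len(code),                                      # source lines
        "comment_ratio": len(comments) / len(lines) if lines else 0.0,
    }

snippet = """\
// compute the total
int total = 0;
for (int i = 0; i < n; i++) {
    total += values[i];
}
"""
metrics = source_metrics(snippet)
```

Because this only reads text, it works on any checkout of any revision, which is why dropping the build requirement widens the range of repositories the tool can handle.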
|
588 |
How to incorporate DL models into a microprocessor for EDGE applications. Tóth, Zsombor. January 2022.
No description available.
|
589 |
Automate memory forensics investigation. Mohamed, Azeem; Saad, Tirmizi. January 2022.
The growth of digital technology spawns both positive and negative effects. Cybercrime rises with the advancement of computer technology, necessitating digital forensics investigations of the evolving digital world to help solve crimes and trace criminals' digital activity. Every process executed in a digital system must run in memory at some point; therefore, volatile memory forensics is at the forefront of forensic investigation and incident response. Memory analysis techniques retrieve artifacts in order to analyze inappropriate behavior. A bit-to-bit memory image contains significant artifacts that provide the analyst with relevant clues, such as system processes, recent activities, open network ports, and connections. However, all this information is lost as soon as the system is shut down, which flushes the volatile memory. It also takes a long time to gather, analyze, and present data from various devices for every crime, because the number of devices and the amount of data are constantly growing, adding to the backlog of devices to examine. Therefore, to eliminate human error and backlogs, we develop multiple machine learning classification models and identify the best-performing model for automating the memory forensics process.
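The classification step can be sketched as a nearest-centroid model over feature vectors extracted from memory images. The feature names and numbers below are invented, and the thesis compares several real machine learning models rather than this toy:

```python
import math

def centroid(rows):
    """Per-dimension mean of a list of equal-length feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def train(samples):
    """Nearest-centroid classifier over memory-image feature vectors
    (e.g. process count, open ports, injected-region count -- invented)."""
    return {label: centroid(rows) for label, rows in samples.items()}

def classify(model, features):
    """Label whose centroid is closest in Euclidean distance."""
    return min(model, key=lambda label: math.dist(features, model[label]))

# Made-up training data: [process_count, open_ports, injected_regions].
training = {
    "benign":   [[40, 5, 0], [55, 8, 0], [48, 6, 1]],
    "infected": [[70, 30, 6], [65, 25, 4], [80, 40, 9]],
}
model = train(training)
verdict = classify(model, [60, 22, 5])
```

Once features are extracted automatically from each memory image, classification like this replaces the manual triage step that causes backlogs.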
|
590 |
Latency-aware Resource Management at the Edge. Toczé, Klervie. January 2020.
The increasing diversity of connected devices leads to new application domains being envisioned. Some of these need ultra-low latency or have privacy requirements that cannot be satisfied by the current cloud. By bringing resources closer to the end user, the recent edge computing paradigm aims to enable such applications. One critical aspect of ensuring the successful deployment of the edge computing paradigm is efficient resource management. Obtaining the needed resources is crucial for the applications using the edge, but the resource picture of this paradigm is complex. First, as opposed to the nearly infinite resources provided by the cloud, edge devices have finite resources. Moreover, different resource types are required depending on the application, and the devices supplying those resources are very heterogeneous. This thesis studies several challenges towards enabling efficient resource management for edge computing. It begins with a review of the state-of-the-art research on resource management in the edge computing context; a taxonomy is proposed to provide an overview of the current research and to identify areas in need of further work. One identified challenge is studying how the resource supply is organized when a mix of mobile and stationary devices provides the edge resources. The ORCH framework is proposed as a means to orchestrate this edge device mix, and an evaluation performed in a simulator shows that this combination of devices enables a higher quality of service for latency-critical tasks. Another area is understanding the resource demand side. The thesis presents a study of the workload of a killer application for edge computing: mixed reality. The MR-Leo prototype is designed and used as a vehicle to understand the end-to-end latency, the throughput, and the characteristics of the workload for this type of application. A method for modeling the workload of an application is devised and applied to MR-Leo in order to obtain a synthetic workload exhibiting the same characteristics, which can be used in further studies.
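The supply-side orchestration problem can be illustrated with a greedy latency-aware placement sketch over a mix of mobile and stationary edge devices. The device names, capacities, and latency figures are invented, and ORCH itself is considerably more sophisticated than this:

```python
def place_task(task_demand, devices, cloud_latency_ms=120):
    """Pick the lowest-latency edge device with spare capacity,
    falling back to the distant cloud when no edge device fits."""
    candidates = [d for d in devices if d["free"] >= task_demand]
    if not candidates:
        return "cloud", cloud_latency_ms
    best = min(candidates, key=lambda d: d["latency_ms"])
    best["free"] -= task_demand  # reserve capacity on the chosen device
    return best["name"], best["latency_ms"]

# A mix of stationary and mobile edge devices (hypothetical figures).
edge = [
    {"name": "stationary-1", "free": 4, "latency_ms": 10},
    {"name": "mobile-1", "free": 2, "latency_ms": 5},
]
first = place_task(2, edge)   # mobile-1 is closest and has room
second = place_task(3, edge)  # only stationary-1 still fits
third = place_task(8, edge)   # nothing fits, so fall back to the cloud
```

The sketch shows why the device mix helps latency-critical tasks: nearby mobile devices absorb small tasks at low latency, while stationary devices and the cloud catch what does not fit.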
|