561 |
An Evaluation of Spring WebFlux : With focus on built in SQL features. Dahlin, Karl. January 2020 (has links)
In today's society there is a need for more hardware-efficient software, since some believe that the doubling of computing power at the same price predicted by Moore's law no longer holds. Reactive programming can be a step in the right direction, and this has led to an increase in interest in it. The objective of this thesis is to evaluate the possibility of using reactive programming and R2DBC in Java to communicate with a relational database. This has been done by creating two Spring applications: one using the standard JDBC and servlet stack, and one using R2DBC and the reactive stack. Both were connected to a MySQL database, values were inserted into and selected from it, and the CPU usage, memory usage and execution time were measured. In addition, the possibilities of handling BLOBs in a good enough way were researched. The study shows that there are both advantages and disadvantages to using R2DBC: it has basic support and is based on a good idea, but at the time of this thesis it still needs more development before it can be fully used.
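As a rough illustration of the two stacks compared above, the blocking JDBC query below returns a fully materialized list, while the R2DBC variant returns a reactive Flux that emits rows as the database produces them. This is a minimal sketch, assuming Spring Data R2DBC's DatabaseClient, a hypothetical users table and placeholder credentials; it is not the application code used in the thesis.

```java
import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;
import org.springframework.r2dbc.core.DatabaseClient;
import reactor.core.publisher.Flux;

import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class BlockingVsReactive {

    // JDBC: the calling thread blocks until all rows have been fetched.
    static List<String> namesJdbc() throws SQLException {
        List<String> names = new ArrayList<>();
        try (Connection con = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/testdb", "user", "secret");
             PreparedStatement ps = con.prepareStatement("SELECT name FROM users");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                names.add(rs.getString("name"));
            }
        }
        return names;
    }

    // R2DBC: nothing is fetched until the Flux is subscribed to,
    // and rows are emitted asynchronously without blocking a thread.
    static Flux<String> namesR2dbc() {
        ConnectionFactory factory =
            ConnectionFactories.get("r2dbc:mysql://user:secret@localhost:3306/testdb");
        DatabaseClient client = DatabaseClient.create(factory);
        return client.sql("SELECT name FROM users")
                     .map((row, meta) -> row.get("name", String.class))
                     .all();
    }
}
```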
|
562 |
Comparison and Implementation of Software Frameworks for Internet of Things / Jämförelse och implementation av mjukvaruramverk för Internet of Things. Björnström, Tommie; Cederqvist, Reidar. January 2015 (has links)
There is no established standard for how Internet of Things devices communicate with each other; every manufacturer uses its own proprietary software and protocols. This makes it difficult to ensure the best possible user experience. There are several projects that could become a standard for how devices discover each other, communicate, form networks and so on. The goal of this thesis work was to compare such software frameworks in some areas and to investigate how Inteno's operating system Iopsys OS can be complemented by implementing one of these frameworks. A literature study gave two candidates for the comparison, AllJoyn and Bonjour. The result of the comparison showed that AllJoyn was the most appropriate choice for Inteno to implement into their OS. AllJoyn was chosen because it has the potential to become an established standard and includes tools for easy implementation. To make a proof of concept, an AllJoyn application was created. The application, together with a JavaScript web page, can show and control options for an AllJoyn Wi-Fi manager application and AllJoyn-enabled lamps. / There is no established standard for how devices within the Internet of Things communicate with each other. When every manufacturer uses its own software and protocols, it becomes harder to create the best possible user experience. There are several projects developing software frameworks, and several of these have the potential to become a standard for how devices discover each other, communicate and so on. The goal of this thesis was to compare such software frameworks in certain areas and to investigate how Inteno's operating system Iopsys OS can be improved by implementing one of these frameworks. A literature study gave two candidates for the comparison, AllJoyn and Bonjour. The result of the comparison showed that AllJoyn was the most suitable choice for Inteno to implement in its operating system. AllJoyn was chosen because it has the potential to become an established standard and includes tools for easy implementation. To prove the concept, an AllJoyn application was created. Together with JavaScript, the application can generate a web page where the user can control Wi-Fi settings and control lamps via AllJoyn.
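As a sketch of what an AllJoyn proof of concept could look like at the code level, the fragment below declares a hypothetical lamp bus interface and registers an implementation on the AllJoyn bus. The interface name, object path and method are illustrative assumptions, not Inteno's actual application; only the core AllJoyn Java classes (BusAttachment, BusObject and the bus annotations) come from the framework.

```java
import org.alljoyn.bus.BusAttachment;
import org.alljoyn.bus.BusException;
import org.alljoyn.bus.BusObject;
import org.alljoyn.bus.Status;
import org.alljoyn.bus.annotation.BusInterface;
import org.alljoyn.bus.annotation.BusMethod;

// Hypothetical lamp interface; the name and method are illustrative only.
@BusInterface(name = "com.example.Lamp")
interface LampInterface {
    @BusMethod
    void setOn(boolean on) throws BusException;
}

// Service-side object implementing the interface.
class LampService implements LampInterface, BusObject {
    public void setOn(boolean on) {
        System.out.println("Lamp turned " + (on ? "on" : "off"));
    }
}

public class LampApp {
    static {
        // The AllJoyn Java bindings are backed by a native library.
        System.loadLibrary("alljoyn_java");
    }

    public static void main(String[] args) {
        BusAttachment bus = new BusAttachment("LampApp");
        Status status = bus.connect();          // join the local AllJoyn router
        if (status != Status.OK) {
            throw new IllegalStateException("Bus connect failed: " + status);
        }
        status = bus.registerBusObject(new LampService(), "/com/example/lamp");
        if (status != Status.OK) {
            throw new IllegalStateException("Object registration failed: " + status);
        }
        // A real application would also bind a session port and advertise a
        // well-known name so that controllers can discover the lamp.
    }
}
```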
|
563 |
Säker identitetshantering på internet : Att minimera bedrägerier och öka konsumentens säkerhet och inflytande vid e-handel / Secure identity management online : Minimizing fraud and improving consumer security and influence in e-commerce. Åkerberg, Mathias; Tibbling, Anders. January 2015 (has links)
The risk that an unauthorized party can obtain and use an individual consumer's identity documents is high, while the individual's ability to control how and when their identity is used is small. The research question to be answered was how identity theft and fraud on the internet could be minimized while at the same time giving the consumer greater influence over the management of their identity. The goal was to centralize and create a common approach to identity management on the internet for the benefit of consumers, and in that way minimize the spread of individual actors' own identity management solutions. The solution resulted in a system model with the means to authenticate the consumer, manage filters for how individual identity documents may be used on the internet, and enable communication with the consumer by sending notifications about events connected to a specific identity. Through a user portal, the consumer would be able to administer their filters for different e-services and online stores, and get an overview of specific events that have occurred. A prototype was developed to demonstrate the basic functionality of the system model in practice. It included functionality to authenticate the consumer, send notifications about events, and check existing filters for a specific identity. The prototype consisted of a simplified system according to the developed model, with an associated API and two models corresponding to an online store and a payment provider that would use the functionality by calling the system's API. The solution was evaluated based on the results of interviews with experts in the problem area and a functional check of the developed prototype. The evaluation led to the conclusion that identity-based fraud would very likely drop drastically and that the individual consumer's influence and awareness would be strengthened. The main contributing factor to this conclusion was considered to be the consistent and standardized way created for authenticating and communicating with the consumer. In this way, the actors themselves would offload parts of the functionality and the security risks associated with the handling of identities on the internet. The difficulty with the proposed solution was considered to be getting consumers, online stores and payment providers to join a central system when, for business reasons, they choose to keep certain parts in-house. / There is a high risk that an unauthorized party can gain access to and use a consumer's identity, while the ability to control how and when a personal identity is used is small. The question to be answered was how identity theft and online fraud could be minimized while giving consumers greater influence and more control over the management of their identity online. The goal was to centralize and establish a common approach for identity management online, with greater benefits for consumers. Through a central service, individual consumers would be able to set conditions for which online shops and services would be able to access their identities, and grant access in each specific transaction. This would remove the need for non-central control of identities and, as a result, remove the need for independent storage of identity information.
The solution resulted in a system model with the potential to authenticate the consumer, manage conditions for how individual identity documents may be used online, and provide the consumer with an online history by sending notifications of events that have occurred with regard to a specific identity. A prototype was developed to demonstrate the basic functionality in practice. This included the functionality to authenticate the consumer with Mobile BankID, send notifications about events and check existing conditions regarding a specific identity. The prototype came to consist of a simplified system according to the model developed, an associated API, and two models representing an online store and a payment provider that would utilize the functionality of the system by calling the API. The proposed solution was evaluated through two interviews with experts in the fields of IT security and e-commerce. The conclusion was that identity fraud would probably drop drastically and that the individual consumer's influence and awareness would be fortified. The main reason for this was considered to be the consistent and standardized way for authenticating and communicating with the consumer. This would remove the individual risk for online services. The challenge with this proposed solution is believed to be getting consumers, online retailers and payment providers to accept a central solution instead of relying on internally developed and disconnected solutions.
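A very small sketch of what the central system's API could expose to an online store or payment provider is given below. All names and types are hypothetical illustrations of the described functionality (authentication, filter checks and event notifications), not the API developed in the thesis.

```java
import java.time.Instant;

// Hypothetical API of the central identity service; names and types are illustrative.
interface IdentityService {

    // Authenticate the consumer, e.g. via Mobile BankID, and return a session token.
    String authenticate(String personalIdentityNumber);

    // Check whether the consumer's filters allow this identity to be used by the given merchant.
    boolean isUsageAllowed(String sessionToken, String merchantId, String purpose);

    // Record an event and notify the consumer that the identity was used.
    void reportEvent(String sessionToken, String merchantId, String description, Instant when);
}

// Example of how a web shop or payment provider might call the API during checkout.
class CheckoutFlow {
    private final IdentityService identityService;

    CheckoutFlow(IdentityService identityService) {
        this.identityService = identityService;
    }

    boolean completePurchase(String personalIdentityNumber, String merchantId) {
        String token = identityService.authenticate(personalIdentityNumber);
        if (!identityService.isUsageAllowed(token, merchantId, "purchase")) {
            return false; // blocked by the consumer's filter
        }
        identityService.reportEvent(token, merchantId, "Purchase completed", Instant.now());
        return true;
    }
}
```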
|
564 |
Qualitative analysis about the experience of VPN from people with software expertise in Sweden. Gerdtsson, Markus; Nielsen, Erik. January 2022 (has links)
A VPN is primarily used to encrypt your network traffic and protect your identity online when connecting from a private location. This can be used as a safety measure to prevent theft of personal data. It also allows its users to change their geolocation to wherever they want, which makes it possible to use another country's services. Related work has shown that there are also downsides to using VPN services: some VPN solutions have security problems that their users may be unaware of. This study explored the experience of and beliefs surrounding the usage of VPNs while browsing the internet among people with software expertise. Interviews were conducted with people in different areas surrounding the usage of VPN services to get a deeper understanding of why VPNs are used and to what extent they believe a VPN provides anonymity and security for their data. The findings from this study are that the main reason to use a VPN is to access unavailable services. These services range from online content that is not available in the region from which you access the internet, to work-related services that are locked to specific networks. Another finding was that, among these people, the belief that the use of a VPN is by itself enough to make a user anonymous is controversial.
|
565 |
Åtkomst nekad : Autentisering och säkerhetsrutiner för lokala nätverk / Access denied : Authentication and security routines for local area networks. Wiström, Edvard. January 2022 (has links)
In the field of cybersecurity, it is essential to know who is connected to your system. The functionality for authenticating users connecting to the local area network is the focus of this report. Various authentication protocols exist; this report covers IEEE 802.1X, since it is the protocol most suitable for wired local area networks. The IEEE 802.1X protocol is studied in theory, with its architecture of Supplicant, Authenticator and Authentication server and the communication protocols it uses, EAPOL and RADIUS. A practical test was then performed as a proof of concept to learn more about the pros and cons of utilizing these protocols; the fundamentals of the protocol communication are observed, and the prerequisites for a larger-scale implementation are then described. The outcome of the test illustrates the relative difficulty of keeping up with the pace of cybersecurity evolution: older equipment was initially intended to be used, but due to incompatibility of hardware and software the test had to be revised to use other equipment. The learning outcome from the test is that setting up authentication is a complex task; competent staff as well as suitable equipment are needed. The motivation for setting up IEEE 802.1X is found in larger organizations where the risks of an attack are high; the large number of users calls for centralized systems for handling users and network policies. Due to the Bring Your Own Device trend, a policy for handling unauthorized users and devices needs to be in place. The default behavior may be to simply deny access to unauthorized devices; however, with an authentication system implemented, the unauthorized user can instead be automatically referred to a guest network in a secure manner, and the authorized user gains the flexibility to access the network through any available network port. For improving and maintaining cybersecurity administration, an Information Security Management System is found useful; the organization can thereby continuously improve its work and document the system's features and routines. In case of a security breach, such a system supports immediate action on the problem, and it provides even stronger preparation for cyber defense in the form of good backup routines and monitoring of normal-state activity, where all devices are either authorized or unauthorized and placed into their proper network according to network policy. / Degree project for the Bachelor of Science in Engineering degree in Network Engineering
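To make the EAPOL side of the exchange concrete, the sketch below builds the four-byte EAPOL-Start PDU that a supplicant sends to the authenticator to initiate IEEE 802.1X authentication; actually sending it on the wire (EtherType 0x888E, to the PAE group address) would require a raw-socket library and is omitted here.

```java
import java.nio.ByteBuffer;

// Builds the payload of an EAPOL-Start frame (IEEE 802.1X), which the supplicant sends
// to the authenticator to start authentication. The EAPOL header is:
// 1 byte protocol version, 1 byte packet type, 2 bytes body length.
public class EapolStart {
    static final int ETHERTYPE_EAPOL = 0x888E;   // carried in the Ethernet header
    static final byte EAPOL_VERSION = 0x02;      // 802.1X-2004
    static final byte TYPE_EAPOL_START = 0x01;   // 0 = EAP-Packet, 1 = Start, 2 = Logoff

    static byte[] buildEapolStart() {
        ByteBuffer frame = ByteBuffer.allocate(4);
        frame.put(EAPOL_VERSION);
        frame.put(TYPE_EAPOL_START);
        frame.putShort((short) 0);               // EAPOL-Start carries no body
        return frame.array();
    }

    public static void main(String[] args) {
        byte[] pdu = buildEapolStart();
        System.out.printf("EAPOL-Start PDU: %02x %02x %02x %02x%n",
                pdu[0], pdu[1], pdu[2], pdu[3]);
    }
}
```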
|
566 |
VM Instruction Decoding Using C Unions in Stack and Register Architectures. Strömberg Skott, Kasper. January 2022 (has links)
The architecture of virtual machine (VM) interpreters has long been a subject of research and discussion. The initial trend of stack-based interpreters was shortly thereafter challenged by research showing the performance advantages of virtual register machines. Despite this, many VM interpreters are still stack-based, with some notable exceptions, like Lua, Android Runtime, and its predecessor Dalvik. A register architecture is usually associated with greater overhead from instruction dispatch and, to some extent, instruction decoding. By designing and implementing a novel technique that replaces the conventional way of decoding instructions, this thesis attempts to reduce that overhead. More specifically, a VM interpreter is developed as an artifact of design-science research. The novel technique is then evaluated through benchmarking in various configurations. As the results indicate, however, using this technique showed no performance advantage, as the resulting machine instructions are exactly the same after compiler optimization. This suggests that there is no apparent decoding overhead to begin with. As a result, register-based VMs seem not to suffer from any dispatch-related overhead, other than the fact that there are more operands per instruction to access. Source code is available on GitHub, at https://github.com/kaspr61/RackVM.
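For context, the conventional decoding that the union-based technique is meant to replace typically extracts opcode and operands from an instruction word with shifts and masks, roughly as in the sketch below. The field widths are an assumed example layout, and the sketch is written in Java for illustration, whereas the thesis's alternative relies on C unions to reinterpret the instruction word directly.

```java
// Assumed example layout of a 32-bit register-machine instruction:
// bits 0-7 opcode, bits 8-15 destination register, bits 16-23 and 24-31 source registers.
public final class Decode {
    static final int ADD = 0x01;

    public static void dispatch(int[] code, int[] regs) {
        int pc = 0;
        while (pc < code.length) {
            int instr = code[pc++];
            int opcode = instr & 0xFF;          // conventional mask-and-shift decoding
            int dst    = (instr >>> 8)  & 0xFF;
            int src1   = (instr >>> 16) & 0xFF;
            int src2   = (instr >>> 24) & 0xFF;
            switch (opcode) {
                case ADD: regs[dst] = regs[src1] + regs[src2]; break;
                default:  return;               // halt on unknown opcode
            }
        }
    }
}
```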
|
567 |
Predicting Service Metrics from Device Statistics in a Container-Based Environment. Jiang, Zuoying. January 2015 (has links)
Service assurance is critical for high-demand services running on telecom clouds. Since service performance metrics may not always be available in real time to telecom operators or service providers, service performance prediction becomes an important building block for such a system; however, it is generally hard to achieve. In this master thesis, we propose a machine-learning based method that enables performance prediction for services running in virtualized environments with Docker containers. This method is service agnostic, and the prediction models built by this method use only device statistics collected from the server machine and from the containers hosted on it to predict the values of the service-level metrics experienced on the client side. The evaluation results from the testbed, which runs a Video-on-Demand service using containerized servers, show that such a method can accurately predict different service-level metrics under various scenarios and that, by applying suitable preprocessing techniques, the performance of the prediction models can be further improved. In this thesis, we also show the design of a proof-of-concept Real-Time Analytics Engine that uses online learning methods to predict the service-level metrics in real time in a container-based environment.
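As an illustration of the kind of model such a method builds, the sketch below fits an ordinary least-squares model that maps device statistics collected on the server and its containers (the feature vector) to a client-side service metric. It assumes Apache Commons Math and made-up feature names and values; the thesis evaluates its own choice of learning methods and features.

```java
import org.apache.commons.math3.stat.regression.OLSMultipleLinearRegression;

public class ServiceMetricModel {
    public static void main(String[] args) {
        // Each row: device statistics sampled at one point in time,
        // e.g. [CPU utilization, memory used (GB), disk I/O rate] (made-up values).
        double[][] deviceStats = {
            {0.20, 1.1, 120}, {0.35, 1.4, 180}, {0.50, 1.9, 260},
            {0.65, 2.3, 310}, {0.80, 2.8, 400}, {0.90, 3.1, 450}
        };
        // Corresponding service-level metric measured on the client, e.g. frame rate.
        double[] serviceMetric = {24.8, 24.5, 23.9, 22.7, 20.1, 17.6};

        OLSMultipleLinearRegression ols = new OLSMultipleLinearRegression();
        ols.newSampleData(serviceMetric, deviceStats);
        double[] beta = ols.estimateRegressionParameters(); // [intercept, b1, b2, b3]

        // Predict the metric for a new device-statistics sample.
        double[] sample = {0.70, 2.5, 350};
        double prediction = beta[0];
        for (int i = 0; i < sample.length; i++) {
            prediction += beta[i + 1] * sample[i];
        }
        System.out.printf("Predicted service metric: %.2f%n", prediction);
    }
}
```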
|
568 |
Predicting Service Metrics from Device and Network Statistics. Forte, Paolo. January 2015 (has links)
For an IT company that provides a service over the Internet like Facebook or Spotify, it is very important to provide a high quality of service; however, predicting the quality of service is generally a hard task. The goal of this thesis is to investigate whether an approach that makes use of statistical learning to predict the quality of service can obtain accurate predictions for a Voldemort key-value store [1] in the presence of dynamic load patterns and network statistics. The approach follows the idea that the service-level metrics associated with the quality of service can be estimated from server-side statistical observations, like device and network statistics. The advantage of the approach analysed in this thesis is that it can work with virtually any kind of service, since it is based only on device and network statistics, which are unaware of the type of service provided. The approach is structured as follows. During the service operations, a large amount of device statistics from the Linux kernel of the operating system (e.g. CPU usage level, disk activity, interrupt rate) and some basic end-to-end network statistics (e.g. average round-trip time, packet loss rate) are periodically collected on the service platform. At the same time, some service-level metrics (e.g. average reading time, average writing time, etc.) are collected on the client machine as indicators of the store's quality of service. To emulate network statistics, such as dynamic delay and packet loss, all the traffic is redirected to flow through a network emulator. Then, different types of statistical learning methods, based on linear and tree-based regression algorithms, are applied to the data collections to obtain a learning model able to accurately predict the service-level metrics from the device and network statistics. The results, obtained for different traffic scenarios and configurations, show that the thesis' approach can find learning models that can accurately predict the service-level metrics for a single-node store with error rates lower than 20% (NMAE), even in the presence of network impairments.
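Since the prediction error above is reported as NMAE, a short worked example may help. Assuming NMAE is defined as the mean absolute error normalized by the mean of the measured values (a common convention; the thesis may state its exact definition), it can be computed as in this sketch.

```java
public class Nmae {
    // NMAE = mean(|measured - predicted|) / mean(measured), assuming this common definition.
    static double nmae(double[] measured, double[] predicted) {
        double absErrorSum = 0.0, measuredSum = 0.0;
        for (int i = 0; i < measured.length; i++) {
            absErrorSum += Math.abs(measured[i] - predicted[i]);
            measuredSum += measured[i];
        }
        return (absErrorSum / measured.length) / (measuredSum / measured.length);
    }

    public static void main(String[] args) {
        double[] measured  = {10.0, 12.0, 8.0, 11.0};   // e.g. average reading times (ms)
        double[] predicted = { 9.0, 13.5, 7.0, 12.0};
        // |errors| = 1.0, 1.5, 1.0, 1.0 -> MAE = 1.125; mean(measured) = 10.25
        System.out.printf("NMAE = %.3f%n", nmae(measured, predicted)); // ~0.110, i.e. ~11%
    }
}
```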
|
569 |
Sound Modular Extraction of Control Flow Graphs from Java Bytecode. de Carvalho Gomes, Pedro. January 2012 (has links)
Control flow graphs (CFGs) are abstract program models that preserve the control flow information. They have been widely utilized for many static analyses in the past decades. Unfortunately, previous studies about the CFG construction from modern languages, such as Java, have either neglected advanced features that influence the control flow, or do not provide a correctness argument. This is a bearable issue for some program analyses, but not for formal methods, where the soundness of CFGs is a mandatory condition for the verification of safety-critical properties. Moreover, when developing open systems, i.e., systems in which at least one component is missing, one may want to extract CFGs to verify the available components. Soundness is even harder to achieve in this scenario, because of the unknown inter-dependencies involving missing components. In this work we present two variants of a CFG extraction algorithm from Java bytecode considering precise exceptional flow, which are sound w.r.t. the JVM behavior. The first algorithm extracts CFGs from fully-provided (closed) programs only. It proceeds in two phases. Initially, the Java bytecode is translated into a stack-less intermediate representation named BIR, which provides explicit representation of exceptions, and is more compact than the original bytecode. Next, we define the transformation from BIR to CFGs, which, among other features, considers the propagation of uncaught exceptions within method calls. We then establish its correctness: the behavior of the extracted CFGs is shown to be a sound over-approximation of the behavior of the original programs. Thus, temporal safety properties that hold for the CFGs also hold for the program. We prove this by suitably combining the properties of the two transformations with those of a previous idealized CFG extraction algorithm, whose correctness has been proven directly. The second variant of the algorithm is defined for open systems. We generalize the extraction algorithm for closed systems to a modular set-up, and resolve inter-dependencies involving missing components by using user-provided interfaces. We establish its correctness by defining a refinement relation between open systems, which constrains the instantiation of missing components. We prove that if the relation holds, then the CFGs extracted from the components of the original open system are sound over-approximations of the CFGs for the same components in the refined system. Thus, temporal safety properties that hold for an open system also hold for closed systems that refine it. We have implemented both algorithms as the ConFlEx tool. It uses Sawja, an external library for the static analysis of Java bytecode, to transform bytecode into BIR, and to resolve virtual method calls. We have extended Sawja to support open systems, and improved its exception type analysis. Experimental results have shown that the algorithm for closed systems generates more precise CFGs than the modular algorithm. This was expected, due to the heavy over-approximations the latter has to perform to be sound. Also, both algorithms are linear in the number of bytecode instructions. Therefore, ConFlEx is efficient for the extraction of CFGs from either open or closed Java bytecode programs.
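To make the notion of a CFG with exceptional flow more concrete, the sketch below shows a minimal graph representation in which each node is a program point and each edge is labeled as either normal or exceptional control flow. It is only an illustrative data structure under assumed names, not the representation used by ConFlEx or BIR.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal, illustrative CFG with normal and exceptional edges (assumed names).
public class ControlFlowGraph {
    enum EdgeKind { NORMAL, EXCEPTIONAL }

    record Edge(String from, String to, EdgeKind kind, String exceptionType) {}

    private final Map<String, List<Edge>> successors = new HashMap<>();

    void addNode(String label) {
        successors.putIfAbsent(label, new ArrayList<>());
    }

    void addNormalEdge(String from, String to) {
        addNode(from); addNode(to);
        successors.get(from).add(new Edge(from, to, EdgeKind.NORMAL, null));
    }

    void addExceptionalEdge(String from, String to, String exceptionType) {
        addNode(from); addNode(to);
        successors.get(from).add(new Edge(from, to, EdgeKind.EXCEPTIONAL, exceptionType));
    }

    public static void main(String[] args) {
        ControlFlowGraph cfg = new ControlFlowGraph();
        // A division instruction has a normal successor and an exceptional one
        // leading to a handler (or to the method exit if the exception is uncaught).
        cfg.addNormalEdge("idiv@7", "istore@8");
        cfg.addExceptionalEdge("idiv@7", "handler@12", "java.lang.ArithmeticException");
        System.out.println(cfg.successors);
    }
}
```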
|
570 |
Live Streaming / Video-on-Demand : An Integration. Haghighi Fard, Sara. January 2012 (has links)
Video delivery over the Internet is becoming increasingly popular and comes in many flavors, such as Live Streaming and Video-on-Demand. In recent years, many peer-to-peer solutions for Live Streaming and VoD have been proposed, as opposed to centralized solutions, which are not scalable due to their high bandwidth requirements. In all existing solutions, Live Streaming and VoD have traditionally and artificially been considered as separate technical problems. We propose a system integrating Live Streaming with VoD that offers the potential for users to watch live TV with short delays. In Live Streaming, peers are interested in the content that is being generated live by the streaming source, unlike VoD, in which peers are interested in the content that has been generated from the beginning of the streaming. In this manner, Live nodes can contribute to VoD nodes and send them the pieces they have downloaded since their joining time. In this work, we demonstrate that our system, called Live-VoD, makes it possible to have both types of nodes in one system, each being served based on its interest. We propose a P2P Live-VoD protocol for overlay construction based on peers' upload bandwidth, built on top of the Gradient topology, together with an innovative algorithm based on the number of pieces peers can contribute to each other. We also propose an innovative stochastic algorithm for data dissemination based on the rareness of a piece and the requesting node's download position. Our experiments show that Live-VoD is decentralized, scalable and self-organizing. We also show that even when most of the nodes in the system are VoD nodes, all VoD nodes, regardless of their joining time, manage to download the whole movie with no assistance from the streaming source.
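As a rough illustration of combining piece rarity with the requesting node's download position, the sketch below scores the pieces ahead of the requester's position by their rarity, weighted by their distance from that position, and then samples one at random with probability proportional to its score. The weighting and sampling scheme are assumptions for illustration, not the exact stochastic algorithm proposed in the thesis.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class PieceSelector {
    private static final Random RANDOM = new Random();

    // availability[i] = number of neighbours holding piece i; downloadPosition = first piece
    // the requesting node still needs. Returns the index of the piece to request, or -1.
    static int selectPiece(int[] availability, boolean[] have, int downloadPosition) {
        List<Integer> candidates = new ArrayList<>();
        List<Double> scores = new ArrayList<>();
        double total = 0.0;
        for (int i = downloadPosition; i < availability.length; i++) {
            if (!have[i] && availability[i] > 0) {
                // Rarer pieces (low availability) close to the download position score higher.
                double score = (1.0 / availability[i]) * (1.0 / (1 + i - downloadPosition));
                candidates.add(i);
                scores.add(score);
                total += score;
            }
        }
        if (candidates.isEmpty()) return -1;
        double r = RANDOM.nextDouble() * total;   // stochastic, score-proportional choice
        for (int j = 0; j < candidates.size(); j++) {
            r -= scores.get(j);
            if (r <= 0) return candidates.get(j);
        }
        return candidates.get(candidates.size() - 1);
    }
}
```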
|