271 |
3D Visualization of MPC-based Algorithms for Autonomous Vehicles. Sörliden, Pär. January 2019 (has links)
The area of autonomous vehicles is an interesting research topic, popular in both academia and industry worldwide. Linköping University is no exception, and some of its research is based on using Model Predictive Control (MPC) for autonomous vehicles: MPC is used to plan a path and to control the vehicles. Additionally, different methods (for example deep learning or likelihood methods) are used to calculate collision probabilities for obstacles. These are very complex algorithms, and it is not always easy to see how they work. It is therefore interesting to study whether a visualization tool, where the algorithms are presented in a three-dimensional way, can be useful for understanding them and for developing them. This project consisted of implementing such a visualization tool and evaluating it. This was done by implementing a visualization using a 3D library and then evaluating it both analytically and empirically. The evaluation showed positive results: the proposed tool is shown to be helpful when developing algorithms for autonomous vehicles, but some aspects of the algorithms still need more research on how they could be visualized. This concerns the neural networks, which were shown to be difficult to visualize, especially given the available data. It was found that more information about the internal variables in the network would be needed to visualize them better.
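The abstract does not give the thesis's actual MPC formulation. As a rough, hedged illustration of the receding-horizon idea behind MPC, the sketch below plans over a short horizon for a toy double-integrator vehicle model, applies only the first control, and re-plans at every step; all names and parameters are illustrative assumptions, not the thesis's method.

```python
from itertools import product

def mpc_step(x, v, goal, horizon=4, dt=0.5, u_set=(-1.0, 0.0, 1.0)):
    """Enumerate candidate control sequences over a short horizon and
    return the first control of the cheapest one (receding horizon)."""
    best_u, best_cost = 0.0, float("inf")
    for seq in product(u_set, repeat=horizon):
        px, pv, cost = x, v, 0.0
        for u in seq:
            pv += u * dt                        # toy double-integrator model
            px += pv * dt
            cost += (px - goal) ** 2 + 0.1 * u ** 2   # track goal, penalize effort
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

# Drive the state toward the goal, re-planning at every step
x, v, goal = 0.0, 0.0, 5.0
for _ in range(30):
    u = mpc_step(x, v, goal)
    v += u * 0.5
    x += v * 0.5
print(x)  # x settles near the goal of 5.0
```

Real MPC solvers use gradient-based optimization over continuous inputs rather than enumeration; the enumeration here only keeps the example self-contained.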
|
272 |
Discrete Event Simulation of Bus Terminals. Lindberg, Therese. January 2019 (has links)
Public transport is important to society, as it provides spatial accessibility and reduces congestion and pollution compared to other motorized modes. To assure a high-quality service, all parts of the system need to be well functioning and properly planned. One important aspect of the system's bus terminals is their capacity, which needs to be high enough to avoid congestion, queues, and the delays these may lead to. During planning processes, various suggested designs and solutions for a terminal need to be evaluated. Estimating capacity and how well the suggestions will function is a challenging problem, however: it requires analysis of complex interactions and vehicle behaviour. Such analyses can preferably be carried out using microsimulation. Furthermore, a discrete event simulation approach can make use of the fact that the path of a vehicle through a terminal can readily be described by a sequence of events (such as arriving, starting to drive to a stop, etc.). The overall aim of this thesis is to investigate how discrete event simulation can be used to evaluate bus terminal design and traffic control policies. The main contribution is the development of a method for bus terminal simulation. As a first step, a discrete event simulation model of a combined bus and tram stop is formulated. The model is tested on a real system, where the current design is compared to an alternative one. The test shows that a model developed with a discrete event approach can be used to evaluate the situation at a stop and compare design alternatives. In the next step, a general discrete event simulation model of bus terminals is formulated. A modular approach is introduced, where a terminal can be constructed from a set of module building blocks. Another important contribution of the model is its spatial resolution, which allows queues and blockages to occur throughout the terminal.
By applying the simulation model in a case study, it is shown that the model can be used to evaluate and compare various scenarios related to the layout, the number of passengers, and the surrounding traffic situation. Lastly, the bus terminal simulation model is used in a second case study to compare model output with empirical data. This study identified a number of factors that may have influenced the differences between observations and simulation results and that are of interest for further study, including the actual adherence to terminal rules and the effects of model parameters.
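The event-based view described above (arrive, start driving to a stop, depart) maps naturally onto a time-ordered event queue. The following is a minimal sketch of that mechanism, not the thesis's model: a single stop with a limited number of berths, where buses queue when all berths are occupied.

```python
import heapq

def simulate(arrivals, dwell, n_berths=1):
    """Tiny discrete event simulation of one stop: buses arrive, queue for a
    berth, dwell for a fixed time, and depart. Returns departure times."""
    events = [(t, i, "arrive") for i, t in enumerate(arrivals)]
    heapq.heapify(events)                    # min-heap ordered by event time
    free = n_berths
    waiting, departures = [], {}
    while events:
        t, i, kind = heapq.heappop(events)
        if kind == "arrive":
            waiting.append(i)
        else:                                # "depart": berth becomes free
            free += 1
            departures[i] = t
        while free and waiting:              # assign idle berths to queued buses
            free -= 1
            bus = waiting.pop(0)
            heapq.heappush(events, (t + dwell, bus, "depart"))
    return departures

# Two buses, one berth, 30 s dwell: bus 1 is blocked until bus 0 leaves
print(simulate([0, 10], dwell=30, n_berths=1))  # {0: 30, 1: 60}
```

The thesis's model adds what this sketch omits: modular terminal layouts, spatial resolution, and blocking that propagates between modules.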
|
273 |
Fog Computing: Architecture and Security Aspects. Bozios, Athanasios. January 2018 (has links)
As the number of Internet of Things (IoT) devices in daily use increases, the inadequacy of cloud computing to provide necessary IoT-related features, such as low latency, geographic distribution, and location awareness, is becoming more evident. Fog computing is introduced as a new computing paradigm to solve this problem by extending the cloud's storage and computing resources to the network edge. However, the introduction of this new paradigm is also confronted by various security threats and challenges, since the security practices implemented in cloud computing cannot be applied directly to this new architectural paradigm. To this end, various papers have been published on fog computing security in an effort to establish best security practices towards the standardization of fog computing. In this thesis, we perform a systematic literature review of current research in order to provide a classification of the various security threats and challenges in fog computing. Furthermore, we present the solutions that have been proposed so far and which security challenges they address. Finally, we attempt to distinguish common aspects among the various proposals, evaluate current research on the subject, and suggest directions for future research.
|
274 |
Comparison of Auto-Scaling Policies Using Docker Swarm / Jämförelse av autoskalningspolicies med hjälp av Docker Swarm. Adolfsson, Henrik. January 2019 (has links)
When deploying software applications in the cloud, two similar software components are used: Virtual Machines and Containers. In recent years containers have seen an increase in popularity and usage, in part because of tools such as Docker and Kubernetes. Virtual Machines (VMs) have also seen an increase in usage as more companies move to cloud solutions with services like Amazon Web Services, Google Compute Engine, Microsoft Azure, and DigitalOcean. There are also solutions using auto-scaling, a technique where VMs are commissioned and deployed to as load increases in order to increase application performance; as the application load decreases, VMs are decommissioned to reduce costs. In this thesis we implement and evaluate auto-scaling policies that use both Virtual Machines and Containers. We compare four different policies, including two baseline policies. For the non-baseline policies we define a policy with a single Container per Virtual Machine and a policy with several Containers per Virtual Machine. To compare the policies we deploy an image-serving application and run workloads to test them. We find that the choice of deployment strategy and policy matters for response time and error rate. We also find that deploying applications as described in the method is estimated to take roughly 2 to 3 minutes.
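A threshold rule is one common way to realize the commission/decommission behaviour described above. The sketch below is an illustrative assumption, not one of the thesis's four policies; thresholds, capacities, and the load trace are all hypothetical.

```python
def autoscale(load, capacity_per_vm, vms, scale_up_at=0.8, scale_down_at=0.3,
              min_vms=1, max_vms=10):
    """Threshold-based auto-scaling decision: commission one VM when
    utilization is high, decommission one when it is low."""
    utilization = load / (vms * capacity_per_vm)
    if utilization > scale_up_at and vms < max_vms:
        return vms + 1
    if utilization < scale_down_at and vms > min_vms:
        return vms - 1
    return vms

vms = 2
for load in [150, 180, 190, 60, 20]:   # requests/s over successive intervals
    vms = autoscale(load, capacity_per_vm=100, vms=vms)
print(vms)  # 1 — fleet scaled up under load, then back down to the minimum
```

Adding or removing one VM per interval keeps the fleet from oscillating; real policies typically also add a cooldown period, which is omitted here for brevity.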
|
275 |
CA UIM: Övervakning för Sametingets nätverk / Monitoring for Sametinget's Network. Sandberg, Oskar; Nutti, Kenneth. January 2019 (has links)
The global consulting company CGI is the supplier of Sametinget's network, and CGI requested improved monitoring of it. Renewing and improving the monitoring ensures faster troubleshooting and higher availability. The platform used to realize this goal is CA Unified Infrastructure Management (CA UIM). Since the end product would monitor a production environment, the work began by setting up a test environment where experiments could be carried out. A small part of the work consisted of mapping the network to be monitored; the main task was getting to know the software. UIM is built around probes that collect data, which is then forwarded to a shared database for the specific domain. The database can in turn deliver data to, for example, dashboards on the front end. The following probes were used in our work: Net_Connect, which uses ICMP to confirm a device's availability; Interface_Traffic, which monitors network traffic via SNMP agents; and CDM, which is responsible for monitoring CPU, disk, and memory utilization on servers. Once the design of our dashboards was finished and the monitoring worked, we moved to the production environment and set up the same concept there. The work was then presented to CGI in Kiruna, with positive reactions.
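Net_Connect's availability test uses ICMP echo, which requires raw-socket privileges that ordinary scripts lack. As an unprivileged, hedged stand-in, the sketch below probes a TCP port instead of sending ICMP; the host and port are hypothetical, and this is not how the CA UIM probe is implemented.

```python
import socket

def device_reachable(host, port, timeout=2.0):
    """Probe-style availability check. A stand-in for an ICMP echo test:
    reports whether a TCP port on the device accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:        # refused, timed out, or unresolvable
        return False

# Example: check a hypothetical switch's management interface
print(device_reachable("127.0.0.1", port=9))  # False unless something listens on port 9
```

Note that a TCP check also verifies that a service is listening, so it can report a live device as down; that is one reason monitoring platforms prefer ICMP for pure reachability.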
|
276 |
Resource utilization comparison of Cassandra and Elasticsearch. Selander, Nizar. January 2019 (has links)
Elasticsearch and Cassandra are two of the most widely used databases today, with Elasticsearch showing a more recent resurgence due to its unique full-text search feature, akin to that of a search engine, contrasting with the conventional query-language-based methods used to perform data searching and retrieval. The demand for more powerful and better performing, yet more feature-rich and flexible, databases has kept growing. This project studies how the two databases perform under a specific workload of 2,000,000 fixed-size logs, in an environment where the two can be compared while keeping the results of the experiment meaningful for the production environment for which they are intended. A total of three benchmarks were carried out: an Elasticsearch deployment using the default configuration, and two Cassandra deployments, one with the default configuration and one with a modified configuration reflecting a setup currently running in production for the task at hand. The benchmarks showed very interesting performance differences in terms of CPU, memory, and disk space usage. Elasticsearch showed the best performance overall, using significantly less memory and disk space, as well as less CPU to some degree. However, the benchmarks were done with a very specific set of configurations, a very specific data set, and a specific workload; those limitations should be considered when comparing the benchmark results.
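The thesis measures CPU, memory, and disk at the process and host level with tooling the abstract does not name. As a small-scale analogue of that kind of measurement, this sketch times a workload and records peak Python heap usage with the standard library; the workload itself is a hypothetical log-ingestion stand-in, not the actual benchmark.

```python
import time
import tracemalloc

def profile(workload):
    """Run a workload and report (wall time, peak Python heap bytes) —
    a rough analogue of the benchmarks' time/memory comparison."""
    tracemalloc.start()
    t0 = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak

def insert_logs():
    # Hypothetical stand-in for ingesting fixed-size log entries
    return [{"id": i, "msg": "log entry"} for i in range(100_000)]

elapsed, peak = profile(insert_logs)
print(f"{elapsed:.3f}s, peak {peak // 1024} KiB")
```

Measuring the database process itself would instead sample OS-level counters (e.g. resident set size and CPU time) from outside the process, since the database's memory is not Python heap.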
|
277 |
Generic Compare Tool. Nordgren, Daniel. January 2007 (has links)
As today's computer systems evolve, their complexity in most cases grows as well. This has negative consequences in the form of more difficult testing and troubleshooting of the systems. This thesis aims to explain which faults can occur and why. Solutions in the form of system modifications are also discussed. The client for the study, Saab, sees opportunities in being able to easily reproduce faults in the computer system of the Gripen aircraft. This could drastically reduce troubleshooting costs. Since much time is spent verifying new software for the computer system, verification is also a major cost in development work. It is therefore also desirable to investigate whether regression testing of new software could be automated. Initially, articles in the field were studied and the market was surveyed for existing tools. After compiling the theory behind the problems, an analysis of the existing computer system could begin; which problems could arise, and whether they could be solved, was investigated with the help of the system's documentation. Some problems were discovered, and some of them could be remedied with a well-designed tool. Certain problems, however, could not be shown to be deterministically solvable, which means that the goal of a complete regression test will probably be difficult to achieve. On the other hand, other kinds of tests for probing the robustness of the system will be feasible. Above all, there will be a foundation for future systems, where the problems can be taken into account from the start. The tools available on the market that were analyzed all proved to have useful functions, but none of them can single-handedly meet the stated requirements.
|
279 |
Omvandlingen av en skrivbordsapplikation till en webbapplikation / Transformation of a desktop application to a web application. Razafimandimbison, Handi; Håkanson, Daniel. January 2015 (has links)
This thesis investigates how a developer can move a desktop application to the web environment. The investigation also includes finding out what the limitations of a web application are and what cannot be moved from the desktop to the web. The question is whether any functionality is gained or lost in the web application after the move. A survey was conducted to find out what types of problems occur when moving an application from the desktop to the web. The authors of this thesis developed two prototypes for a company that wants to move a limited part of its current desktop application to the web in the future. The prototypes were developed in Silverlight and ASP.NET. The survey revealed that one of the disadvantages of web applications is the difficulty of working with local files and hardware on the computer. An information search suggested that a function in a web application can run faster or slower than in a desktop application depending on the conditions. The practical work showed that Silverlight can make the move to the web easier for companies that specialize in desktop applications, because not much additional knowledge is required for the migration. During the implementation of graphical components, Silverlight also showed that it can easily carry over functionality such as drag and drop with the aid of its toolkit. The shift from desktop to web can be seen as building a web application from scratch, and the result can be influenced by the developer's experience with web or desktop development. Increased knowledge of the methods and technology used in web application development is a necessity for anyone considering making the move from the desktop.
|
280 |
Fence surveillance with convolutional neural networks. Jönsson, Jonatan; Stenbäck, Felix. January 2018 (has links)
Broken fences are a big security risk for any facility or area with strict security standards. In this report we suggest a machine learning approach to automating the surveillance of chain-link fences. The main challenge is to classify broken and non-broken fences with the help of a convolutional neural network. Data for this task was gathered by hand; the dataset consists of about 127 videos, 26 minutes in total length, from 23 different locations. The model and dataset are tested on three performance traits: scaling, augmentation improvement, and false rate. In these tests we concluded that nearest-neighbour scaling increased accuracy. Classifying fences that were included in the training data gave a low false rate, about 1%, while classifying fences unknown to the model produced a false rate of about 90%. From these results we conclude that this method and dataset are useful under the right circumstances, but not in an unknown environment.
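The network architecture is not described in the abstract. The sketch below illustrates only the core convolution operation such a classifier applies to image patches, in pure Python; the patch and the edge-detecting kernel are hypothetical examples, not the thesis's model or data.

```python
def conv2d(image, kernel):
    """Single-channel 2D convolution (no padding, stride 1): the basic
    operation a convolutional layer applies before nonlinearity/pooling."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(ow)]
            for i in range(oh)]

# A vertical-edge kernel responds only at the dark/bright boundary, the kind
# of local structure a trained filter might use to spot a cut in a fence
patch = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge = [[1, -1],
        [1, -1]]
print(conv2d(patch, edge))  # [[0, -2, 0], [0, -2, 0], [0, -2, 0]]
```

A real CNN learns many such kernels from data and stacks them with nonlinearities and pooling; frameworks also implement this far more efficiently than nested Python loops.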
|