101 | Control and design of engineering mechanics systems. Yedeg, Esubalewe Lakie, January 2013.
No description available.
102 | Identification and tuning of algorithmic parameters in parallel matrix computations : Hessenberg reduction and tensor storage format conversion. Eljammaly, Mahmoud, January 2018.
This thesis considers two problems in numerical linear algebra and high performance computing (HPC): (i) parallelizing a new blocked Hessenberg reduction algorithm using Parallel Cache Assignment (PCA) and investigating the tunability of its algorithm parameters, and (ii) storing and manipulating dense tensors on shared memory HPC systems. The Hessenberg reduction appears in the Aggressive Early Deflation (AED) process for identifying converged eigenvalues in the distributed multishift QR algorithm (the state-of-the-art algorithm for computing all eigenvalues of dense square matrices). Since the AED process becomes a parallel bottleneck, it motivates further study of its components. We present a new Hessenberg reduction algorithm based on PCA that is NUMA-aware and targets relatively small problem sizes on shared memory systems. The tunability of the algorithm parameters is investigated. A simple off-line tuning is presented, and the performance of the new Hessenberg reduction algorithm is compared to its counterparts from LAPACK and ScaLAPACK. The new algorithm outperforms LAPACK in all tested cases and outperforms ScaLAPACK for problems smaller than order 1500, which are common problem sizes for AED in the context of the distributed multishift QR algorithm. We also investigate automatic tuning of the algorithm parameters. The parameters span a huge search space, and it is impractical to tune them using standard auto-tuning and optimization techniques. We present a modular auto-tuning framework that applies search space decomposition, binning, and multi-stage search to enable searching the huge space efficiently. Using these techniques, the framework exposes the underlying subproblems, which allows standard auto-tuning methods to be used on them. In addition, the framework defines an abstract interface which, combined with its modular design, allows testing various tuning algorithms. In the last part of the thesis, the focus is on storing and manipulating dense tensors. Developing open source tensor algorithms and applications is hard due to the lack of open source software for fundamental tensor operations. We present a software library, dten, which includes tools for storing dense tensors in shared memory and for converting a tensor storage format from one canonical form to another. The library provides two different ways to perform the conversion in parallel, in-place and out-of-place. The conversion involves moving blocks of contiguous data and is done so as to maximize the size of the blocks moved. In addition, the library supports tensor matricization for one or two tensors at the same time; the latter case is important in preparing tensors for contraction operations. The library is general purpose and highly flexible.
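As an aside on terminology: matricization (unfolding) rearranges a dense tensor into a matrix along a chosen mode. The sketch below, assuming NumPy and one common unfolding convention, is purely illustrative; it is not the dten library's implementation, whose canonical storage formats and blocked, parallel conversion routines are not reproduced here.

```python
import numpy as np

def matricize(tensor: np.ndarray, mode: int) -> np.ndarray:
    """Mode-`mode` matricization (unfolding) of a dense tensor.

    Rows are indexed by the chosen mode; all remaining modes are flattened
    into the columns. This is one common convention; the layouts handled by
    dten may differ.
    """
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Example: a 3 x 4 x 5 tensor unfolded along each of its three modes.
x = np.arange(3 * 4 * 5, dtype=float).reshape(3, 4, 5)
for n in range(x.ndim):
    print(n, matricize(x, n).shape)   # (3, 20), (4, 15), (5, 12)
```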
103 | Performance Analysis and Improvement of PR-SCTP in an Event Logging Context. Rajiullah, Mohammad, January 2012.
Due to certain shortcomings in TCP and UDP, the Stream Control Transmission Protocol (SCTP) was defined for transporting telephony signaling traffic. The partially reliable extension of SCTP, PR-SCTP, has been considered a candidate for prioritizing content-sensitive traffic and trading reliability against timeliness for applications with soft real-time requirements. In this thesis, we investigate the applicability of PR-SCTP for event logging applications. Event logs are inherently prioritized, which makes PR-SCTP a promising candidate for transporting them. However, the performance gain of PR-SCTP can be very limited when application message sizes are small and messages have mixed reliability requirements. Several factors influence PR-SCTP’s performance; one key factor is the inefficiency of the forward_tsn mechanism in PR-SCTP. We examine this inefficiency in detail and propose several solutions. Moreover, we implement and evaluate one solution that utilizes the Non-Renegable Selective Acknowledgements (NR-SACKs) mechanism, which is currently being standardized in the IETF and is available in the FreeBSD operating system. Our results show a significant performance gain for PR-SCTP with NR-SACKs: in some scenarios, the average message transfer delay is reduced by more than 75%. Finally, we evaluate NR-SACK-based PR-SCTP using real traces from the syslog event logging application; it significantly improves syslog performance compared to SCTP, TCP, and UDP.
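For readers unfamiliar with the mechanism: in PR-SCTP the sender may abandon messages whose lifetime has expired and use a FORWARD-TSN chunk to tell the receiver to move its cumulative acknowledgement point past them. The toy model below, with hypothetical field names and no real protocol machinery, only illustrates that idea; it is not the RFC 3758 or FreeBSD implementation evaluated in the thesis.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Chunk:
    tsn: int                     # transmission sequence number
    lifetime_ms: Optional[int]   # PR-SCTP timed reliability; None = fully reliable
    age_ms: int = 0
    acked: bool = False

def advance_cum_ack(chunks: list, cum_tsn: int) -> int:
    """Toy model of the FORWARD-TSN idea: chunks whose lifetime has expired are
    abandoned by the sender and no longer block the cumulative ack point, so the
    receiver can deliver later messages without waiting for retransmissions."""
    for c in sorted(chunks, key=lambda c: c.tsn):
        if c.tsn != cum_tsn + 1:
            break
        abandoned = c.lifetime_ms is not None and c.age_ms > c.lifetime_ms
        if c.acked or abandoned:
            cum_tsn = c.tsn      # skip over acked and abandoned chunks alike
        else:
            break                # an outstanding reliable chunk still blocks
    return cum_tsn

chunks = [
    Chunk(tsn=11, lifetime_ms=None, acked=True),
    Chunk(tsn=12, lifetime_ms=50, age_ms=80),   # expired low-priority log line -> abandoned
    Chunk(tsn=13, lifetime_ms=None, acked=True),
]
print(advance_cum_ack(chunks, cum_tsn=10))      # 13: the expired chunk no longer blocks delivery
```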
104 | Guidelines for integration testing of asynchronous many-to-many message passing applications for use in 4G and 5G telecommunication. Jansson, Oskar; Nilsson, Niklas, January 2018.
Message passing systems (MPS) are today a widely used architecture for distributed embedded systems, where components communicate by sending and receiving messages. Integration testing a system using MPS with a many-to-many relationship can be demanding, as both the time at which and the order in which messages are delivered depend on the execution environment. This non-determinism can lead to message race faults, where the order of messages makes tests pass or fail for the wrong reasons. If a test cannot continue execution until a response has been received, it can potentially lead to a message deadlock. Google Test is a popular framework for testing code written in C/C++; it features a rich set of assertions as well as fatal and non-fatal failures. This paper presents guidelines on how to test a non-deterministic message order in an MPS system using additions to the Google Test framework. From our studies, a set of solutions was brought forward. Each solution was evaluated using a minimalistic MPS system that we constructed for the task, and the guidelines are based upon the results of these evaluations.
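The guidelines themselves target C/C++ and Google Test; as a language-agnostic illustration of the underlying pattern, the sketch below (hypothetical message names, standard-library only) shows how asserting on the set of received messages within a bounded wait avoids both message races and test deadlocks.

```python
import queue
import threading
import time

def collect_until(q: "queue.Queue[str]", expected: set, timeout_s: float) -> set:
    """Drain messages until all expected ones have arrived or the deadline expires.
    Asserting on the *set* of received messages, with a timeout, makes the test
    independent of delivery order and guarantees the wait is bounded."""
    received = set()
    deadline = time.monotonic() + timeout_s
    while not expected.issubset(received):
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        try:
            received.add(q.get(timeout=remaining))
        except queue.Empty:
            break
    return received

# Toy system under test: three components reply in a nondeterministic order.
q: "queue.Queue[str]" = queue.Queue()
expected = {"ack:A", "ack:B", "ack:C"}
for name in expected:
    threading.Thread(target=lambda n=name: (time.sleep(0.01), q.put(n))).start()

assert collect_until(q, expected, timeout_s=1.0) == expected
```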
105 | Utveckling av applikation för visualisering av data kring kulturverksamhet – med fokus på användbarhet : En studie för Region Gävleborg / Development of an application for visualizing data on cultural activities, with a focus on usability : A study for Region Gävleborg. Olsson, Christian; Suau Carvajal, Nicolas, January 2018.
This report addresses the topic of usability and how it can be applied at Region Gävleborg's department for cultural development (Kulturutveckling). Today the department lacks an application that, based on Excel data, graphically visualizes on a map the cultural actors with a regional mandate and their activities. To be able to create and design a user-friendly application, the authors have examined various applications, definitions, and standards concerning usability. A user test was carried out to confirm the authors' thesis that, with the help of Shneiderman's golden rules, an application can be built that can later be used to support decision-making about the cultural actors' activities in the region. In addition to the feedback from the intended users, the authors have also produced a quick-reference guide so that users with varying knowledge and experience can use the application as a tool in their work. The conclusion that can be drawn is that applying a set of usability rules is sufficient to create a user-friendly application.
106 | A Visualization Application for Anomaly Detection in Water Management Systems / En Visualiseringsapplikation för Anomalitetsdetektion i Vattenhanteringssystem. Eberhardsson, Elias, January 2018.
This thesis outlines the process of designing and implementing an application for visualizing data related to water management systems. The visualization is implemented on top of a map view. The data is provided by Aquaductus sensor units placed in water management systems, which periodically relay their readings to the Aquaductus database. The application both gives a quick overview of the status of a large number of sensor units and serves as a tool for diagnosing and locating possible leaks or blockages. It is made to be used by office workers, who are familiar with feature-rich tools and have high technical proficiency, but it should also work as a tool for workers out in the field who use it on tablets or possibly smartphones. This wide range of technical skill and user environments puts a high priority on the user interface and on designing tools that can be used at different levels of skill. The finished tool delivers the power needed, but presents it in a user-friendly manner, upholding the design philosophy and delivering the usability and flexibility promised.
107 | Synkronisera grafik till flera skärmar för sportevenemang / Synchronize graphics to multiple screens for sports events. Moritz, Hugo; Hellgren, William, January 2018.
Multi-screen video playback is often used for sports events, where it is important that the graphics are visualized in a synchronized manner. The thesis investigates a solution for synchronizing video to multiple devices through a central computer. Programs to test response time and synchronization in the OpenCV and libVLC software libraries were developed and tested using automated measurements, empirical measurements, and manual tests. The results showed that libVLC gave better and more reliable results for response time and synchronization time, although OpenCV managed to achieve more synchronized playback when more synchronization attempts were used. The work provided insight into how synchronization can be achieved with these software libraries and into how the tests could be further developed using NTP and other network protocols.
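As an illustration of what a response-time measurement can look like, the sketch below uses the python-vlc bindings to libVLC and a hypothetical local clip clip.mp4; it measures the wall-clock delay from calling play() until the playback clock starts advancing. It is illustrative only and is not the measurement programs developed in the thesis.

```python
import time
import vlc  # python-vlc bindings for libVLC

def response_time_ms(path: str) -> float:
    """Wall-clock delay from calling play() until the playback clock starts moving."""
    player = vlc.MediaPlayer(path)
    start = time.monotonic()
    player.play()
    while player.get_time() <= 0:            # get_time() reports the position in milliseconds
        if time.monotonic() - start > 10:
            raise RuntimeError("playback never started")
        time.sleep(0.001)
    elapsed = (time.monotonic() - start) * 1000.0
    player.stop()
    return elapsed

if __name__ == "__main__":
    for i in range(3):
        print(f"run {i}: {response_time_ms('clip.mp4'):.1f} ms")
```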
108 | Applied machine learning in the logistics sector : A comparative analysis of supervised learning algorithms. Allberg, Petrus, January 2018.
Background: Machine learning is an area being explored at a great pace these days, which inspired this study to investigate how seven different supervised learning algorithms perform compared to each other. The algorithms were used to perform classification tasks on logistics consignments; the classification is binary, and a consignment is classified either as missed or not. Objectives: The goal was to find which of these algorithms perform well when used for this classification task and to see how the results vary with differently sized datasets. The importance of the features included in the datasets has been analyzed with the intention of finding whether there is any connection between human errors and these missed consignments. Methods: The process from raw data to a predicted classification has many steps, including data gathering, data preparation, feature investigation, and more. Through cross-validation, the algorithms were all trained and tested on the same datasets and then evaluated based on the metrics recall and accuracy. Results: The scores on both metrics increase with the size of the datasets, and when comparing the seven algorithms, two do not perform on par with the other five, which all perform roughly the same. Conclusions: Any of the five algorithms mentioned above can be chosen for this type of classification, or studied further based on other measurements, and there is an indication that human errors could play a part in whether a consignment gets classified as missed or not.
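As a hedged illustration of the evaluation methodology (cross-validation scored on accuracy and recall), the sketch below uses scikit-learn on synthetic, class-imbalanced data; the consignment features and the seven algorithms compared in the thesis are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic, imbalanced binary data standing in for missed/not-missed consignments.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(),
    "random_forest": RandomForestClassifier(),
    "k_nearest_neighbors": KNeighborsClassifier(),
}

for name, model in models.items():
    # Every model is trained and tested on the same folds, then scored on both metrics.
    scores = cross_validate(model, X, y, cv=5, scoring=["accuracy", "recall"])
    print(f"{name:22s} accuracy={scores['test_accuracy'].mean():.3f} "
          f"recall={scores['test_recall'].mean():.3f}")
```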
109 | Tjänstebaserad Business Intelligence med Power BI i en webbapplikation / Service Based Business Intelligence using Power BI in a web application. Österling Sicking, Sigrid; Hegna Tengstål, Vidar, January 2018.
Business Intelligence (BI) is about helping companies make business-beneficial decisions by analysing and visualising data with the help of tools and applications. QBIM is a company in Karlstad that provides its customers with service-based BI in various industries. QBIM compiles the customers' data using Microsoft Power BI and displays the results in its portal SKI-ANALYTICS. The portal currently has many flaws, and this report focuses on identifying and improving them. The work is split into three sub-goals: Q&A embedding, report embedding, and mobile adaptation. The improvements are implemented with the Power BI REST API and the Power BI JavaScript API. The result is a more user-friendly and attractive customer portal that is easy to use, in which the three sub-goals have been implemented. The report also includes a discussion of the pros and cons of using a ready-made API that leaves little freedom to the developer.
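For context, report embedding with the Power BI REST API typically involves generating an embed token that the Power BI JavaScript API then uses in the browser. The sketch below, assuming an already-acquired Azure AD access token and hypothetical workspace and report IDs, shows one way that call can look; it is illustrative and not the portal's actual code.

```python
import requests

def generate_embed_token(aad_token: str, group_id: str, report_id: str) -> str:
    """Ask the Power BI REST API for an embed token for one report in one workspace."""
    url = (f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}"
           f"/reports/{report_id}/GenerateToken")
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {aad_token}"},
        json={"accessLevel": "View"},   # read-only embedding
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["token"]         # handed to the Power BI JavaScript API in the web page

# Hypothetical usage:
# token = generate_embed_token(aad_token, "<workspace-guid>", "<report-guid>")
```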
110 | Using domain knowledge functions to account for heterogeneous context for tasks in decision support systems for planning. Roslund, Anton, January 2018.
This thesis describes a way to represent domain knowledge as functions. These functions can be composed and used to better predict the time needed for a task. They can aggregate data from different systems to provide a more complete view of the contextual environment without the need to consolidate the data into one system, and they can be crafted to make a more precise time prediction for a specific task that needs to be carried out in a specific context. We describe a possible way to structure and model the data that could be used with the functions. As a proof of concept, a prototype was developed to test an envisioned scenario with simulated data. The prototype is compared to predictions using min, max, and average values from previous experience. The result shows that domain knowledge, represented as functions, can be used for improved prediction. This way of defining functions for domain knowledge can be used as part of a case-based reasoning (CBR) system to provide decision support in a problem domain where information about context is available. It is scalable in the sense that more context can be added to new tasks over time and more functions can be added and composed. The functions can be validated on old cases to ensure consistency.
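A minimal sketch of the idea, with hypothetical context fields and adjustment functions (not the thesis prototype): each domain knowledge function maps a context and a running time estimate to an adjusted estimate, so functions can be composed per task and new ones added over time.

```python
from functools import reduce
from typing import Callable, Dict, List

Adjustment = Callable[[Dict[str, float], float], float]

def weather_delay(ctx: Dict[str, float], minutes: float) -> float:
    # Assumed rule: heavy snowfall slows outdoor work by 30%.
    return minutes * 1.3 if ctx.get("snow_cm", 0) > 5 else minutes

def crew_experience(ctx: Dict[str, float], minutes: float) -> float:
    # Assumed rule: an experienced crew is 10% faster.
    return minutes * 0.9 if ctx.get("crew_years", 0) >= 5 else minutes

def predict(base_minutes: float, ctx: Dict[str, float], fns: List[Adjustment]) -> float:
    # Chain the adjustment functions over the base estimate.
    return reduce(lambda est, fn: fn(ctx, est), fns, base_minutes)

context = {"snow_cm": 8, "crew_years": 6}          # aggregated from different systems
print(predict(60.0, context, [weather_delay, crew_experience]))   # roughly 70.2 minutes
```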