181.
Reverse analysis of the epoxy kinetic model : A search for Kamal model parameters to fit measured data. Asalya, Oday; Fjällborg, Joar; Hagert, Lucas. January 2021.
This report concerns the curing of epoxy and how to find a model that fits measured temperature data. The curing process of the polymer was modeled using the Kamal model, which includes several parameters. The goal of the report was to gain an understanding of the Kamal model and to learn how to approach the problem of finding these parameter values using measured temperature data of the epoxy during its exothermic reaction. Using the heat equation, we derived a system of equations that describes the temperature of the epoxy. To understand the parameters, we varied each one drastically, which gave us an intuition for the Kamal model. In an attempt to fit the measured data, we first changed the parameters by trial and error. Thereafter, an optimization method was implemented that, given an initial guess, iteratively changed the parameters to approach the measured data. This approach was quantified by a loss function that measures the closeness of the simulated and the measured data. By using a large grid of starting guesses, many local minima were found and the best-fitting parameters were documented. The achieved results were inconclusive, as the model did not fit the exothermic peak sufficiently well, but the goal of the report, to create an approach to this problem, was still met. To further improve the model, all of its assumptions should be analyzed and possibly revised, and more datasets would have to be fitted before drawing further conclusions.
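The fitting loop described above (simulate the curing, score the simulation with a loss function, and minimize from a grid of starting guesses) can be sketched as follows. For brevity the sketch fits the degree of cure directly via the Kamal rate law dα/dt = (k1 + k2·α^m)(1 - α)^n rather than the full temperature model; the parameter values, the synthetic "measured" data, and the tiny grid of starts are illustrative stand-ins for the report's setup.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def kamal_rate(t, alpha, k1, k2, m, n):
    # Kamal model: d(alpha)/dt = (k1 + k2*alpha^m) * (1 - alpha)^n
    a = np.clip(alpha[0], 1e-6, 1.0)
    return [(k1 + k2 * a**m) * (1.0 - a)**n]

def simulate(params, t_eval):
    k1, k2, m, n = params
    sol = solve_ivp(kamal_rate, (t_eval[0], t_eval[-1]), [1e-3],
                    t_eval=t_eval, args=(k1, k2, m, n))
    return sol.y[0] if sol.success else np.array([])

def loss(params, t_eval, measured):
    sim = simulate(params, t_eval)
    if sim.size != measured.size:        # integration failed: penalize heavily
        return 1e12
    return float(np.sum((sim - measured) ** 2))

t = np.linspace(0.0, 10.0, 50)
true_params = [0.05, 0.4, 1.0, 1.5]      # illustrative "unknown" parameters
measured = simulate(true_params, t)       # synthetic stand-in for lab data

# minimize from a (tiny) grid of starting guesses; keep the best local minimum
starts = [(0.01, 0.1, 0.5, 1.0), (0.1, 0.5, 1.5, 2.0)]
best = min((minimize(loss, s, args=(t, measured), method="Nelder-Mead")
            for s in starts), key=lambda r: r.fun)
```

In a real run the grid would be much larger, and each local minimum found this way would be recorded and compared, exactly as the report describes.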
182.
Improving sales forecast accuracy for restaurants / Förbättrad träffsäkerhet i försäljningsprognoser för restauranger. Adolfsson, Rickard; Andersson, Eric. January 2019.
Data mining and machine learning techniques are becoming more popular in helping companies with decision-making, owing to their ability to automatically search through very large amounts of data and discover patterns that are hard to see with the human eye. Onslip is one of the companies looking to get more value from its data. It provides a cloud-based cash register to small businesses, with a primary focus on restaurants. Restaurants are heavily affected by variations in sales: they sell products with short expiration dates and low profit margins, and much of their expenses are tied to personnel. By predicting future demand, it is possible to plan inventory levels and make more effective employee schedules, thus reducing food waste and putting less stress on workers. The project described in this report examines how sales forecasts can be improved by incorporating factors known to affect sales into the training of machine learning models. Several different models are trained to predict the future sales of 130 different restaurants, using varying amounts of additional information, and the accuracies of the predictions are then compared against each other. The factors known to impact sales were chosen and categorized into restaurant information, sales history, calendar data, and weather information. The results show that, by providing additional information, the vast majority of forecasts could be improved significantly. In 7 of the 8 examined cases, the addition of more sales factors had an average positive effect on the predictions. The average improvement was 6.88% for product sales predictions and 26.62% for total sales. The sales history information was most important to the models' decisions, followed by the calendar category. It also became evident that not every factor that impacts sales had been captured, and further improvement is possible by examining each company individually.
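The kind of comparison run in the project (a model trained on sales history alone versus one that also sees calendar information) can be sketched on synthetic data. The daily-sales series, the lag-7 history feature, and the gradient-boosting model below are assumptions chosen for illustration, not the data or models of the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
days = np.arange(400)
weekday = days % 7
# synthetic daily sales: base level, a strong weekend lift, and noise
sales = 100.0 + 40.0 * np.isin(weekday, [5, 6]) + rng.normal(0.0, 15.0, days.size)

lag7 = np.roll(sales, 7)               # sales one week earlier (the wrap-around
train, test = days < 300, days >= 300  # affects only the first week, in train)

def holdout_mae(X):
    model = GradientBoostingRegressor(random_state=0)
    model.fit(X[train], sales[train])
    return mean_absolute_error(sales[test], model.predict(X[test]))

mae_history_only = holdout_mae(lag7.reshape(-1, 1))
mae_with_calendar = holdout_mae(np.column_stack([lag7, weekday]))
```

With the clean weekday signal available, the calendar-augmented model can recover the weekend lift more reliably than the noisy lag feature alone, mirroring the report's finding that calendar data was among the most useful additions.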
183.
The feasibility and practicality of a generic social media library. Jonsén, Fredrik; Stolpe, Alexander. January 2017.
Many people today use social media in one form or another, and many of these platforms have released APIs that developers can use to integrate social media into their applications. Since many of these platforms share a great deal of functionality, we see a need for a library that encapsulates this shared functionality and eases the development process of working with the platforms. The purpose of this paper is to find common functionality and explore the possibility of generalization in this regard. We first look for common denominators between the top social media networks, and using this information we attempt an implementation to evaluate its practicality. After the development process we analyze our findings and discuss the usability and maintainability of such a library. Our findings show that the current state of the studied APIs is not suitable for generalization.
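The common-denominator idea can be illustrated with a minimal abstract interface. The operation names (post, feed) and the in-memory adapter below are hypothetical, chosen only to show the shape such a library might take, not the APIs examined in the paper.

```python
from abc import ABC, abstractmethod

class SocialMediaClient(ABC):
    """Generic facade: each platform adapter implements only the
    operations found to be common across platforms."""

    @abstractmethod
    def post(self, text: str) -> str:
        """Publish a post and return its platform-specific id."""

    @abstractmethod
    def feed(self, limit: int = 10) -> list[str]:
        """Return the most recent posts, newest first."""

class InMemoryClient(SocialMediaClient):
    """Stand-in adapter; a real one would wrap a platform's API."""

    def __init__(self) -> None:
        self._posts: list[str] = []

    def post(self, text: str) -> str:
        self._posts.append(text)
        return f"id-{len(self._posts)}"

    def feed(self, limit: int = 10) -> list[str]:
        return self._posts[-limit:][::-1]
```

Application code would then depend only on SocialMediaClient; the paper's conclusion is essentially that real platform APIs diverge too much for such a facade to stay this small.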
184.
Best way for collecting data for low-resourced languages. Karim, Hiva. January 2020.
Low-resource languages possess a limited number of digitized texts, making it challenging to build a satisfactory language audio corpus and information retrieval services. Low-resource languages, especially those spoken exclusively in African countries, lack a well-defined and annotated language corpus, which is a major obstacle for experts seeking to provide a comprehensive text processing system. In this study, I identified best practices for producing and collecting data for such zero/low-resource languages by means of crowd-sourcing. For the purpose of this study, a number of research articles (n=260) were extracted from Google Scholar, Microsoft Academic, and ScienceDirect. Of these articles, only 60 met the inclusion criteria and were considered for eligibility review. A full-text version of these research articles was downloaded and carefully screened to ensure eligibility. Of the 60 potentially eligible full-text articles, only 25 were selected and qualified for inclusion in the final review. From the final pool of selected articles, concerning data generation practices and data collection for low-resource languages, it can be concluded that speech-based audio data is one of the most common and accessible data types. It can be contended that collecting audio data from speech-based resources, such as native speakers of the intended language and available audio recordings, while taking advantage of new technologies, is the most practical, cost-effective, and common method for collecting data for low-resource languages.
185.
Baljangåvan: Omtanke på distans : Hur kan en webbapplikation som säljer enkla gåvor utformas så att den upplevs navigerbar, tillförlitlig och har en effektiv köpprocess? / The Baljan Gift: Caring from afar : How can a web application that sells simple gifts be designed to be perceived as navigable, trustworthy and have an efficient buying process? Dahlström, Felicia; Funnemark, Eirik; Gudmundsson, Tomas; Lindberg, Sophie; Nilsson, Filip; Olsson, Marcus; Svensk, Herman; Sörensen, Joakim. January 2018.
The purpose of this study was to examine how a web application can satisfy the need to give simple gifts to friends and acquaintances at Linköping University in a quick and simple manner. A web application was developed with three key aspects in mind: navigability, trust, and an efficient buying process. During development, user tests of the web application were performed in different phases, using the aforementioned aspects as metrics. Methods used in connection with the user tests included Smith's L-formula, to evaluate navigability, as well as the Concurrent Think-Aloud Procedure and Retrospective Probing. The test results were then gathered and used as the basis for further development. Even though the development was premised on the test data, perceived navigability was at its lowest after the last test, something that is explored further in the discussion chapter. Furthermore, the objective of an efficient buying process was to keep the number of clicks, inputs, and decisions needed to complete a purchase as low as possible; this ended up being four clicks, six inputs, and two decisions. Lastly, the perceived trust of the web application was high and attained its best result in the last test iteration. The conclusion indicates that compromises are needed to develop a web application that maximizes all three key aspects. What benefits one parameter may disadvantage another, and priorities may be required. In the case of the Baljan Gift, the result was a web application that was perceived as navigable and trustworthy, while the buying process was kept as efficient as possible.
186.
Utformning av en e-butik för överproducerad mat med fokus på god navigerbarhet / Designing an e-store for overproduced food with a focus on good navigability. Jonsson, Nils; Nyman, Lowe; Davidsson, Maria; Luu, Katarina; Hakegård, Viktor; Berggren, Oskar; Woxén, Gustav. January 2018.
This report examines how an e-store can be developed to achieve good navigability with respect to the user's subjective experience. Based on existing scientific theory in the field of usability, with a focus on navigability, an e-commerce platform for selling overproduced food in the form of lunch boxes was developed. User tests were carried out during the development, which followed an iterative project method. The results of these user tests, together with the scientific theory, were then used in the continued development of the web application. On this basis, the research question "How can a web application, intended as a trading platform between restaurants and consumers for the sale of overproduced lunches, be designed to achieve good navigability with respect to the user's subjective experience?" was investigated and answered. It was concluded that an important aspect of navigability is that users are always aware of where in the navigation tree they are. It should also be clear when and where the user is navigating in the application, which can be facilitated by, among other things, animations or color highlights in the navigation bar. It also proved important to redirect the user to relevant places in the application after certain functionality has been used. Furthermore, it was concluded that broken or unclear links reduced perceived navigability; links should therefore perform their expected function. Another important factor affecting navigability is what information is shown to the user and how it is presented.
187.
Keystroke dynamics for student authentication in online examinations. Mattsson, Rebecka. January 2020.
Biometrics are distinctive for each person and cannot be given away or stolen like a password. Keystroke dynamics is a behavioral biometric characteristic that can be used as a complementary authentication step [1]. In online examinations it is difficult to ensure that each student writes their own work. Keystroke dynamics from these examinations could be used to detect attempted cheating. To detect cheating attempts, a Gaussian Mixture Model with a Universal Background Model (GMM-UBM) was implemented and tested on a benchmark data set recorded from online examinations written in free text. The use of a Universal Background Model (UBM) allows students to be enrolled using a limited amount of data, making the suggested approach suitable for the use case. The GMM-UBM achieved an Equal Error Rate (EER) of 5.4% and an accuracy of 94.5%.
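A minimal sketch of the GMM-UBM scoring idea, using scikit-learn and fabricated two-dimensional "timing features". Note that re-fitting from the UBM's parameters is a simplified stand-in for proper MAP adaptation, and none of the data or settings below come from the thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# synthetic keystroke-timing features (e.g. key hold time, flight time)
background = rng.normal(0.0, 1.0, (500, 2))   # many other typists
enrolment = rng.normal(1.5, 0.5, (40, 2))     # small sample from one student
genuine = rng.normal(1.5, 0.5, (20, 2))       # same student at exam time
impostor = rng.normal(0.0, 1.0, (20, 2))      # someone else typing

# 1. Universal Background Model trained on pooled background data.
ubm = GaussianMixture(n_components=4, random_state=0).fit(background)

# 2. Student model derived from the UBM (simplified stand-in for MAP
#    adaptation: re-fit the student's data starting at the UBM parameters).
student = GaussianMixture(n_components=4, random_state=0,
                          weights_init=ubm.weights_,
                          means_init=ubm.means_).fit(enrolment)

# 3. Verification score: average log-likelihood ratio between the student
#    model and the UBM; thresholding this score where false accepts equal
#    false rejects yields the EER.
def score(samples):
    return student.score(samples) - ubm.score(samples)
```

A genuine sample should score higher than an impostor sample, which is exactly what the enrolment-with-little-data property of the UBM makes possible.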
188.
Moving Garbage Collection with Low-Variation Memory Overhead and Deterministic Concurrent Relocation. Norlinder, Jonas. January 2020.
A parallel and concurrent garbage collector keeps latency spikes low. A common approach in such collectors is to move objects around in memory without stopping the application. This imposes additional overhead on the application in the form of tracking objects' movements, so that all pointers to them can eventually be updated to the new locations. Typical ways of storing this information suffer from pathological cases where the size of this "forwarding information" can theoretically become as large as the heap itself. Dimensioning the application for the pathological case would waste resources, since actual memory usage is usually significantly lower; this makes it hard to determine an application's memory requirements. In this thesis, we propose a new design that trades memory for CPU, with a maximum memory overhead of less than 3.2%. To evaluate the impact of this trade-off, application execution times were measured using the benchmarks from the DaCapo suite and SPECjbb2015. For 6 configurations in DaCapo, a statistically significant negative effect on execution time in the range of 1-3% was found for the new design. For 10 configurations in DaCapo, no statistically significant change in execution time was found, and two configurations showed statistically significant shorter execution times for the new design, of 6% and 22%, respectively. In SPECjbb2015, both max-jOPS and critical-jOPS show, for the new design, a statistically significant performance regression of about 2%. This suggests that for applications where a stable and predictable memory footprint is important, this approach could be worth considering.
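The role of forwarding information can be shown with a toy compaction pass. The dict-based "heap" below is purely illustrative and says nothing about the thesis's actual design, which concerns how compactly such a table can be stored.

```python
# heap: address -> (payload, pointer-to-another-address or None)
heap = {0: ("A", 2), 2: ("B", 5), 5: ("C", None)}

# relocate (compact) the live objects, recording old -> new addresses;
# this recording is the "forwarding information"
forwarding, new_heap, next_free = {}, {}, 0
for addr in sorted(heap):
    forwarding[addr] = next_free
    new_heap[next_free] = heap[addr]
    next_free += 1

# fix-up pass: every surviving pointer is rewritten via the table
for addr, (payload, ptr) in new_heap.items():
    if ptr is not None:
        new_heap[addr] = (payload, forwarding[ptr])

# new_heap is now {0: ("A", 1), 1: ("B", 2), 2: ("C", None)}
```

In the worst case every live object needs its own forwarding entry, which is the pathological "as large as the heap" scenario that motivates bounding the overhead instead.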
189.
Introducing a library for declarative list user interfaces on mobile devices. Hedbrandh Strömberg, Casper. January 2020.
Developing user interfaces that consist of lists on native mobile platforms is complex. This project aims to reduce that complexity for application developers building dynamic, interactive lists on the iOS platform, by creating an abstraction that lets the application developer write code at a higher level of abstraction. The result is a library providing an abstraction that developers can use to build list user interfaces in a declarative way.
190.
Forecasting Financial Time Series through Causal and Dilated Convolutional Neural Networks. Börjesson, Lukas. January 2020.
In this paper, predictions of future price movements of a major American stock index were made by analysing past movements of the same and other correlated indices. A model that has shown very good results in speech recognition was modified to suit the analysis of financial data and was then compared to a base model restricted by the assumptions of an efficient market. The performance of any model trained on past observations is heavily influenced by how the data is divided into training, validation, and test sets. This is further exaggerated by the temporal structure of financial data, which means that the causal relationship between the predictors and the response is time-dependent. The complexity of the financial system further increases the difficulty of making accurate predictions, but the model suggested here was still able to outperform the naive base model by more than 20 percent. The model is, however, too primitive to be used as a trading system, but suitable modifications to turn it into one are discussed at the end of the paper.
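The model referred to builds on causal, dilated one-dimensional convolutions, the WaveNet-style building block from speech recognition. A minimal NumPy version, with left padding so that no future values leak into the output, might look like this; the kernel and inputs are illustrative only.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """y[t] = sum_j w[j] * x[t - j*dilation]; depends only on the past."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

# with dilation 2 and kernel [1, 1], each output is x[t] + x[t-2]
y = causal_dilated_conv([1.0, 2.0, 3.0, 4.0], [1.0, 1.0], dilation=2)
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth, which is what makes this architecture practical for long time series such as daily index prices.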