  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Volatility and variance swaps : A comparison of quantitative models to calculate the fair volatility and variance strike

Röring, Johan January 2017
Volatility is a common risk measure in the field of finance that describes the magnitude of an asset’s up and down movements. From only being a risk measure, volatility has become an asset class of its own, and volatility derivatives enable traders to get an isolated exposure to an asset’s volatility. Two kinds of volatility derivatives are volatility swaps and variance swaps. The problem with volatility swaps and variance swaps is that they require estimations of the future variance and volatility, which are used as the strike price for a contract. This thesis addresses that difficulty and estimates strike prices with several different models. I describe how the variance strike for a variance swap can be estimated with a theoretical replicating scheme and how the result can be manipulated to obtain the volatility strike, a technique that requires Laplace transformations. The famous Black-Scholes model is described, along with how it can be used to estimate a volatility strike for volatility swaps. A new model that uses the Greeks vanna and vomma is described and put to the test. The thesis also covers a couple of stochastic volatility models, Exponentially Weighted Moving Average (EWMA) and Generalized Autoregressive Conditional Heteroskedasticity (GARCH). The models’ estimations are compared to the realized volatility. A comparison of the models’ performance over 2015 is made, as well as a more extensive backtesting for Black-Scholes, EWMA and GARCH. The GARCH model performs the best in the comparison, and the model that uses vanna and vomma gives a good result. However, because of limited data, one cannot fully conclude that the model that uses vanna and vomma can be used when calculating the fair volatility strike for a volatility swap.
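As a rough illustration of the EWMA approach mentioned in the abstract above, the one-day variance forecast can be updated recursively from squared returns. This is a minimal sketch, not code from the thesis; the decay factor lambda = 0.94 (the RiskMetrics convention) and the sample returns are assumptions.

```python
import math

def ewma_variance(returns, lam=0.94, init_var=None):
    """Recursive EWMA update: var_t = lam * var_{t-1} + (1 - lam) * r_t^2."""
    var = returns[0] ** 2 if init_var is None else init_var
    for r in returns:
        var = lam * var + (1 - lam) * r ** 2
    return var

def annualized_vol(daily_var, trading_days=252):
    """Scale a daily variance estimate to an annualized volatility."""
    return math.sqrt(daily_var * trading_days)

# Hypothetical daily log returns
daily_returns = [0.010, -0.020, 0.015, -0.005, 0.012]
print(annualized_vol(ewma_variance(daily_returns)))
```

A GARCH(1,1) forecast adds a constant term and a separate coefficient on the lagged variance, but the recursion has the same shape.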
42

Increase resource utilization of Alfa Laval's Vertical Lift Modules by maximizing their order picking share

Gullberg, Sofia, Lundberg, Elina January 2017
Alfa Laval is currently a world leader in heat transfer, separation and fluid handling. One of their distribution centers is situated in Tumba, Stockholm, where they maintain a storage warehouse used for spare parts to their products. In the warehouse, there are currently 4 216 items located in so-called Vertical Lift Modules, VLMs. The goal for DC Tumba is to reach an order line picking share of 40% provided by these VLMs, to decrease time spent on picking activities. The purpose of this project was to evaluate and improve the current method for item location management of these VLMs by optimizing their resource utilization to reach a picking share of 40% of total order lines and thereby increase efficiency.

Data were provided by Alfa Laval containing item information and historical order lines. Additional data were also collected by measurement tests and observations in the warehouse. A mathematical optimization model was then developed and formulated as a binary knapsack problem, using the collected data as input. The model was thereafter implemented via AMPL and solved with the Gurobi Optimizer solver and, upon request, by a modified Greedy Heuristic algorithm implemented via VBA in Excel. The Gurobi Optimizer solver generated an order line picking share from the VLMs of 41.93%, and this solution was used to verify the strength and credibility of the solution generated by the Greedy Heuristic solver. The Greedy Heuristic solution resulted in a picking share of 41.25%.

Improvement was achieved as the new solutions increased the order picking share from the VLMs by at least seven percentage points, which implies picking time savings of nearly 20% per year. The Greedy Heuristic solver also proved to be almost as good as the exact Gurobi Optimizer solver, since the two solutions have 94.44% of selected items in common. Therefore, the Greedy Heuristic solver is considered good and useful for Alfa Laval in the future.
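The greedy heuristic used in the abstract above as an alternative to the exact solver can be sketched as follows: rank items by value per unit of occupied capacity and add them while they still fit. This is an illustrative sketch under invented item data, not the thesis's VBA implementation.

```python
def greedy_knapsack(items, capacity):
    """Binary knapsack heuristic: take items in order of value density
    (value / weight) as long as they still fit in the remaining capacity.
    items: list of (name, value, weight) tuples."""
    chosen, used, total_value = [], 0.0, 0.0
    for name, value, weight in sorted(items, key=lambda it: it[1] / it[2], reverse=True):
        if used + weight <= capacity:
            chosen.append(name)
            used += weight
            total_value += value
    return chosen, total_value

# Hypothetical items: (name, picking frequency as value, storage volume as weight)
items = [("A", 60, 10), ("B", 100, 20), ("C", 120, 30)]
print(greedy_knapsack(items, capacity=50))  # picks A and B; the exact optimum is B and C
```

The small example shows why the greedy pass is only near-optimal: it commits to the densest item first and misses the exact optimum, mirroring the gap between the heuristic and the Gurobi solution reported above.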
43

Summability of Fourier transforms of functions from Lorentz spaces

Kopezhanova, Aigerim January 2017
This PhD thesis is devoted to the study of relations between integrability properties of functions and summability properties of their Fourier coefficients and transforms. The relations are given in terms of generalized weighted Lorentz norms, where the weights have some additional growth properties. The thesis contains six papers (papers A-F) together with an introduction, which puts these papers into a general frame. In paper A, some relations between weighted Lorentz norms and some corresponding sums of Fourier coefficients are studied for the case of a general orthonormal bounded system. Under certain circumstances, even two-sided estimates are obtained. In paper B, we study relations between summability of Fourier coefficients and integrability of the corresponding functions for generalized weighted Lorentz spaces in the case of a regular system. Some new inequalities of Hardy-Littlewood-Paley type with respect to a regular system for these generalized Lorentz spaces are obtained. It is also proved that the obtained results are in a sense sharp. In paper C, we investigate integrability properties of orthogonal series with coefficients from generalized weighted Lorentz spaces in the case of a regular system. Upper and lower estimates of some corresponding Lorentz-type norms of the Fourier coefficients are obtained. In paper D, some new Boas-type theorems for generalized weighted Lorentz spaces with respect to regular systems for generalized monotone functions are proved. In paper E, inequalities for the Fourier transform of functions from the generalized weighted Lorentz spaces are studied. Upper and lower estimates of the norm of the Fourier transform in generalized weighted Lorentz spaces are derived. Finally, in paper F, a new inequality concerning the Fourier transform is derived. Moreover, conditions are described under which this result is sharp, in the sense that both upper and lower bounds are obtained.
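For orientation, the classical weighted Lorentz norm underlying these spaces can be written as follows; this is the standard textbook definition, not necessarily the exact generalized version used in the thesis:

```latex
\|f\|_{\Lambda_q(w)} = \left( \int_0^\infty \big(f^*(t)\big)^q \, w(t)\, dt \right)^{1/q},
```

where $f^*$ denotes the non-increasing rearrangement of $f$ and $w$ is a weight function on $(0,\infty)$.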
44

Homogenization Results for Parabolic and Hyperbolic-Parabolic Problems and Further Results on Homogenization in Perforated Domains

Lobkova, Tatiana January 2017
This thesis is based on four papers. The main focus is on homogenization of selected parabolic problems with time oscillations, and hyperbolic-parabolic problems without time oscillations. The problems are approached by means of certain methods, such as two-scale convergence, multiscale convergence and evolution multiscale convergence. We also discuss further results on homogenization of evolution problems in perforated domains.

At the time of the defence the following papers were unpublished: paper 1 submitted, paper 2 accepted, paper 4 submitted.
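As background, two-scale convergence, the basic notion behind the multiscale methods mentioned above, is usually defined as follows; this is the standard definition, stated here for orientation rather than taken from the thesis:

```latex
\lim_{\varepsilon \to 0} \int_\Omega u_\varepsilon(x)\,
  \varphi\!\left(x, \tfrac{x}{\varepsilon}\right) dx
  = \int_\Omega \int_Y u_0(x, y)\, \varphi(x, y)\, dy\, dx
```

for all admissible test functions $\varphi(x,y)$ that are $Y$-periodic in $y$; one then says that $u_\varepsilon$ two-scale converges to $u_0$.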
45

Mathematical modelling in upper secondary mathematics education in Sweden

Bergman Ärlebäck, Jonas January 2010
The aim of this thesis is to investigate and enhance our understanding of the notions of mathematical models and modelling at the Swedish upper secondary school level. Focus is on how mathematical models and modelling are viewed by the different actors in the school system, and on what characterises the collaborative process of a didactician and a group of teachers engaged in designing, developing, implementing and evaluating teaching modules (so-called modelling modules) exposing students to mathematical modelling in line with the present mathematics curriculum. The thesis consists of five papers and reports, along with a summary introduction, addressing both theoretical and empirical aspects of mathematical modelling. The thesis uses both qualitative and quantitative methods and draws partly on design-based research methodology and cultural-historical activity theory (CHAT). The results of the thesis are presented using the structure of the three curriculum levels of the intended, potentially implemented, and attained curriculum respectively. The results show that since 1965 and up to the present day, gradually more explicit emphasis has been put on mathematical models and modelling in the syllabuses at this school level. However, no explicit definitions of these notions are provided; they are described only implicitly, leaving them open to a diversity of interpretations. From the collaborative work case study it is concluded that the participating teachers could not express a clear conception of the notions of mathematical models or modelling, that the design process was often restrained by constraints originating from the local school context, and that working with modelling highlights many systemic tensions in the established school practice. In addition, meta-results in the form of suggestions for how to resolve different kinds of tensions in order to improve the study design are reported.
In a questionnaire study with 381 participating students, it is concluded that only one out of four students stated that they had heard about or used mathematical models or modelling in their education before, and that the expressed overall attitudes towards working with mathematical modelling as represented in the test items were negative. Students’ modelling proficiency was positively affected by the students’ grade, the last mathematics course taken, and whether they thought the problems in the tests were easy or interesting. In addition, empirical findings indicate that so-called realistic Fermi problems given to students working in groups inherently evoke modelling activities.
46

Portfolio investment strategy based on Twitter sentiment

Lohman, Pontus January 2017
This paper investigates whether it is possible to create a portfolio investment strategy by looking at the sentiment (i.e. whether they are positive or negative) of Twitter data for ten companies: five IT companies and five fashion companies. 764 340 tweets were collected during the study, which spanned 60 trading days; of those tweets, 483 946 were from the IT companies and the rest from the fashion companies. The tweets were collected in a Python program using Twitter’s API, and then analyzed and classified in another Python program using three different Naive Bayes classifiers that had been trained on a training set consisting of positive and negative text. The sentiment results were then used to create two different portfolios, where one was based solely on sentiment and the other was a combination of sentiment and market capitalization; the ratio used was determined by testing. Those portfolios were then compared against a market capitalization portfolio and a Sharpe portfolio. I found that for the IT companies the portfolio based solely on sentiment performed decently, but was the worst of the four portfolios. The combination portfolio performed well, and when comparing it to the Sharpe portfolio and the market capitalization portfolio, it might even be the preferable strategy depending on the investor’s appetite for risk, as it had the highest ratio between return and standard deviation. For the fashion companies the sentiment portfolio performed very poorly. The combination portfolio performed decently, but only because it consisted mainly (85%) of the market capitalization portfolio, which performed the best of all strategies, thereby “saving” the combination portfolio.
The poor performance of the sentiment portfolio for the fashion companies might in part be explained by the fact that there were almost twice as many tweets for the IT companies, making the sentiment less accurate and less reliable for the fashion companies compared to the sentiment of the IT companies. It might also be that more irrelevant content is tweeted about the fashion companies, causing the sentiment portfolio to perform worse.
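The sentiment classification step described above can be sketched as a small multinomial Naive Bayes classifier with Laplace smoothing. This is an illustrative toy version, not the thesis's actual classifiers, and the training "tweets" are invented.

```python
import math
from collections import Counter

class NaiveBayesSentiment:
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""

    def __init__(self):
        self.counts = {"pos": Counter(), "neg": Counter()}
        self.docs = {"pos": 0, "neg": 0}

    def train(self, tokens, label):
        self.counts[label].update(tokens)
        self.docs[label] += 1

    def classify(self, tokens):
        vocab = set(self.counts["pos"]) | set(self.counts["neg"])
        scores = {}
        for label in ("pos", "neg"):
            total = sum(self.counts[label].values())
            # log prior plus smoothed log likelihood of each token
            score = math.log(self.docs[label] / sum(self.docs.values()))
            for tok in tokens:
                score += math.log((self.counts[label][tok] + 1) / (total + len(vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

nb = NaiveBayesSentiment()
nb.train(["great", "profit", "up"], "pos")
nb.train(["bad", "loss", "down"], "neg")
print(nb.classify(["profit", "up"]))  # prints "pos" for this toy data
```

A sentiment-weighted portfolio would then allocate weights in proportion to each company's share of positively classified tweets, possibly blended with market-capitalization weights as in the combination portfolio above.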
47

Statistisk regressionsanalys i matematisk och didaktisk kontext : En studie av hur matematikböcker i Sverige, Norge och Finland stödjer gymnasieelevers förståelse av regressionsanalysens matematiska natur och dess tillämpningsmöjligheter / Statistic Regression Analysis in the Mathematical and Didactic Contexts

Österberg, Marika January 2016
No description available.
48

A Machine Learning Approach to Fiber Delivery Lead Time Estimation

Lyxell, Erika January 2019
Everybody’s talking about big data. Telecom companies have access to huge quantities of data, but unfortunately most of it is not further analyzed in search of valuable information regarding the optimization of processes. Telia Infra delivery operates to supply fiber to the whole of Sweden. An important part of planning future deliveries is to decide on a delivery date immediately after the order is placed, a so-called ”First promise”. Today, there are great opportunities for improvement in this decision making. In 2018, the business had a low delivery precision on First promise, which today is done with manual resources. Therefore, the aim of the master thesis project is to develop and apply a model that improves the delivery precision at First promise and can be implemented in the business as support for decision making on the delivery date. A variety of machine learning algorithms were trained and tested in order to estimate fiber delivery lead time. One of the algorithms was chosen, and results have shown that a significant increase in delivery precision can be obtained by implementing an automated system for estimation of delivery lead times based on analyzed available data. This gives reason to believe that taking the step from basing decisions on a hunch to a more technology-based solution will optimize processes and very possibly reduce costs and resource demands. The master thesis has shown that there exists great potential for other tasks within the field of machine learning and automation of today’s processes.
49

Predicting Marketing Churn Using Machine Learning Models

Ahlin, Mikael, Ranby, Felix January 2019
For any organisation that engages in marketing actions there is a need to understand how people react to the communication messages that are sent. Since the introduction of the General Data Protection Regulation, the requirements for personal data usage have increased, and people are able to affect the way their personal information is used by companies. For instance, people have the possibility to unsubscribe from communication that is sent; this is called Opt-Out and can be viewed as churning from communication channels. When a customer Opts Out, the organisation loses the opportunity to send personalised marketing to that individual, which in turn results in lost revenue. The aim of this thesis is to investigate the Opt-Out phenomenon and build a model that is able to predict the risk of losing a customer from the communication channels. The risk of losing a customer is measured as the estimated probability that a specific individual will Opt-Out in the near future. To predict future Opt-Outs, the project uses machine learning algorithms on aggregated communication and customer data. Of the algorithms that were tested, the best and most stable performance was achieved by an Extreme Gradient Boosting algorithm that used simulated variables. The performance of the model is best described by an AUC score of 0.71 and a lift score of 2.21, with an adjusted threshold, on data two months into the future from when the model was trained. With a model that uses simulated variables, the computational cost goes up. However, the increase in performance is significant, and it can be concluded that the choice to include information about specific communications is relevant for the outcome of the predictions. A boosted method such as the Extreme Gradient Boosting algorithm generates stable results, which leads to a longer time between model retraining sessions.
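The lift score quoted above compares the Opt-Out rate among the customers the model scores as riskiest with the overall base rate. A minimal sketch of that metric, with invented labels and scores; the thesis's exact threshold choice is not reproduced here.

```python
def lift_score(y_true, y_score, top_fraction=0.1):
    """Lift: positive rate among the top-scored fraction of customers,
    divided by the overall positive (base) rate."""
    n = len(y_true)
    k = max(1, int(n * top_fraction))
    ranked = sorted(zip(y_score, y_true), key=lambda p: p[0], reverse=True)
    top_rate = sum(y for _, y in ranked[:k]) / k
    base_rate = sum(y_true) / n
    return top_rate / base_rate

# Hypothetical Opt-Out labels and model scores for ten customers
y_true = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]
y_score = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
print(lift_score(y_true, y_score, top_fraction=0.2))  # prints 2.5
```

A lift of 2.21, as reported above, means the flagged group Opts Out at a little over twice the base rate.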
50

Simplices of f-vectors

Kupreyeva, Aliaksandra January 2019
In this thesis the properties of the convex hulls of the sets of f-vectors of simplicial complexes on p vertices are investigated. The work is primarily constrained to the convex hulls of the sets of f-vectors of simplicial complexes of dimension at most 2 and 3. We study the functions which count the number of lattice points in these convex hulls and in their integral dilations. The center of mass for these sets of f-vectors is also studied. For the convex hulls of the sets of f-vectors with cardinality at most 3, we also investigate whether there is a compact rational function representing their f-vectors. Chapter 6 is devoted to researching whether there is any relationship between the number of f-vectors and the total number of lattice points in the convex hulls of the sets of f-vectors with cardinality at most n.
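An f-vector, as studied above, records the number of faces of each dimension of a simplicial complex: f_0 vertices, f_1 edges, f_2 triangles, and so on. A small sketch of how one can be computed from a list of facets; this helper is an illustration, not code from the thesis.

```python
from itertools import combinations

def f_vector(facets):
    """f-vector of the simplicial complex generated by the given facets.
    Every non-empty subset of a facet is a face; f_i counts the faces
    of dimension i, i.e. the subsets of size i + 1."""
    faces = set()
    for facet in facets:
        for k in range(1, len(facet) + 1):
            faces.update(combinations(sorted(facet), k))
    max_dim = max(len(f) for f in faces) - 1
    fv = [0] * (max_dim + 1)
    for f in faces:
        fv[len(f) - 1] += 1
    return tuple(fv)

# Boundary of a triangle plus one extra edge: 4 vertices and 4 edges
print(f_vector([(1, 2), (2, 3), (1, 3), (3, 4)]))  # prints (4, 4)
```

A solid triangle, for comparison, has f-vector (3, 3, 1): three vertices, three edges, and one 2-dimensional face.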
