81 | Skattning av kausala effekter med matchat fall-kontroll data / Estimation of causal effects with matched case-control data. Abramsson, Evelina; Grind, Kajsa. January 2017.
No description available.
82 | Reinforcement Learning for 5G Handover. Bonneau, Maxime. January 2017.
The development of the 5G network is in progress, and one part of the process that needs to be optimised is the handover. This operation, which consists of changing the base station (BS) providing data to a user equipment (UE), must be efficient enough to appear seamless. From the BS point of view, it should also be as economical as possible while still satisfying the UE's needs. This thesis addresses the 5G handover problem using reinforcement learning. A review of reinforcement learning methods narrowed the choice to model-free, off-policy methods, specifically the Q-Learning algorithm. In its basic form, applied to simulated data, this method gives information about which kinds of reward, action space, and state space produce good results. However, although it works on restricted datasets, the algorithm does not scale well because of long computation times. The trained agent therefore cannot use much data in its learning process, and neither the state space nor the action space can be extended very far, which restricts the basic Q-Learning algorithm to discrete variables. Since the signal strength (RSRP), which is central to meeting the UE's needs, is a continuous variable, a continuous form of Q-Learning is required. A function approximation method, namely artificial neural networks, is therefore investigated. In addition to the long computation times, the results obtained so far are not convincing. Thus, despite some promising results from the basic form of the Q-Learning algorithm, the extension to the continuous case has not been successful. Moreover, the computation times mean that reinforcement learning is practical in this domain only on very powerful computers.
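The abstract describes the standard tabular Q-Learning setup: a model-free, off-policy agent that learns a state-action value table from simulated handover decisions. The sketch below illustrates that idea on a toy handover problem; the state encoding (serving BS plus a discretised RSRP level), the reward with a handover penalty, and the simulated environment are illustrative assumptions, not the setup used in the thesis.

```python
import numpy as np

# Minimal tabular Q-Learning sketch for a toy handover problem.
# All names, the reward shape, and the simulated environment are
# illustrative assumptions, not the thesis's actual configuration.

rng = np.random.default_rng(0)

N_BS = 3          # candidate base stations (action = which BS to attach to)
N_LEVELS = 5      # discretised RSRP levels for the serving BS
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
HANDOVER_COST = 0.5

# State: (serving BS, discretised RSRP level of the serving BS)
Q = np.zeros((N_BS, N_LEVELS, N_BS))

def step(serving_bs, action):
    """Toy environment: sample per-BS RSRP levels, return reward and next state."""
    rsrp = rng.integers(0, N_LEVELS, size=N_BS)   # fake signal levels per BS
    reward = float(rsrp[action])                  # better signal -> higher reward
    if action != serving_bs:                      # penalise unnecessary handovers
        reward -= HANDOVER_COST
    return reward, (action, int(rsrp[action]))

state = (0, N_LEVELS // 2)
for episode in range(10_000):
    bs, level = state
    # epsilon-greedy action selection over candidate base stations
    if rng.random() < EPS:
        action = int(rng.integers(N_BS))
    else:
        action = int(np.argmax(Q[bs, level]))
    reward, next_state = step(bs, action)
    nbs, nlevel = next_state
    # standard off-policy Q-Learning update
    Q[bs, level, action] += ALPHA * (
        reward + GAMMA * np.max(Q[nbs, nlevel]) - Q[bs, level, action]
    )
    state = next_state

print("Greedy BS choice per (serving BS, RSRP level):")
print(np.argmax(Q, axis=2))
```

The Q-table here has N_BS x N_LEVELS x N_BS entries; adding base stations or refining the RSRP discretisation grows the table and the data needed to fill it, which is the scaling problem the abstract points to and the reason a function approximator such as a neural network is considered for the continuous case.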
83 | Inferens på rangordningar - En Monte Carlo-analys / Inference on rankings - a Monte Carlo analysis. Bohlin, Lars. January 2015.
No description available.
84 | En Säsongsspelmodell / A seasonal betting model. Pirsech, William. January 2015.
No description available.
85 | Tennismodellen II : En undersökning om fördelaktiga odds och spelstrategi för spel på tennismatcher med hjälp av en statistisk modell / The Tennis Model II: a study of favourable odds and betting strategy for betting on tennis matches using a statistical model. Ericsson, Tomas. January 2015.
No description available.
86 | Datainsamlingsmetoder för statistiska undersökningar : en beskrivande litteraturstudie / Data collection methods for statistical surveys: a descriptive literature study. Malmkvist, Jenny. January 2015.
No description available.
87 | Test of Causality in Conditional Variance: Hafner and Herwatz Test for Causality. Jatta, Abdullah. January 2015.
No description available.
88 | A Monte Carlo Study Comparing Three Methods for Determining the Number of Principal Components and Factors. Sheytanova, Teodora. January 2015.
No description available.
89 | Finance Forecasting in Fractal Market Hypothesis. Lin, Wangke. January 2015.
No description available.
90 | Oddssättning : utvärdering av modeller för skattning av matchodds i Svenska Superligan i innebandy / Odds setting: an evaluation of models for estimating match odds in Svenska Superligan floorball. Mundt, Henrik. January 2015.
No description available.