  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Skattning av kausala effekter med matchat fall-kontroll data / Estimation of causal effects with matched case-control data

Abramsson, Evelina, Grind, Kajsa January 2017 (has links)
No description available.
82

Reinforcement Learning for 5G Handover

Bonneau, Maxime January 2017 (has links)
The development of the 5G network is in progress, and one part of the process that needs to be optimised is the handover. This operation, which consists of changing the base station (BS) providing data to a user equipment (UE), needs to be efficient enough to be seamless. From the BS point of view, it should also be as economical as possible while still satisfying the UE's needs. In this thesis, the 5G handover problem is addressed with reinforcement learning. A review of the methods proposed in the reinforcement learning literature narrowed the field to model-free, off-policy methods, specifically the Q-Learning algorithm. In its basic form, used with simulated data, this method shows which kind of reward and which kinds of action space and state space produce good results. However, even on restricted datasets, the algorithm does not scale well due to lengthy computation times. The trained agent therefore cannot use much data in its learning process, and neither the state space nor the action space can be extended far, which restricts the basic Q-Learning algorithm to discrete variables. Since the strength of the signal (RSRP), which is central to matching the UE's needs, is a continuous variable, a continuous form of Q-Learning is required. A function approximation method, artificial neural networks, is therefore investigated. In addition to the lengthy computation times, the results obtained with this extension are not yet convincing. Thus, despite some interesting results from the basic Q-Learning algorithm, the extension to the continuous case has not been successful, and the computation times make reinforcement learning applicable in this domain only on very powerful computers.
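The tabular, model-free, off-policy Q-Learning loop the abstract refers to can be sketched as follows. The environment here is a hypothetical toy stand-in, not the thesis's actual simulator: two base stations, RSRP discretised into five levels that drift as a random walk, and an assumed reward of +1 for being attached to the stronger BS and -1 otherwise.

```python
import random

random.seed(0)

N_LEVELS = 5          # discretised RSRP levels per base station (assumption)
ACTIONS = [0, 1]      # action = which of the two BSs the UE attaches to

def step(state):
    """Random walk of the two RSRP levels (a crude stand-in for UE mobility)."""
    return tuple(min(N_LEVELS - 1, max(0, s + random.choice([-1, 0, 1])))
                 for s in state)

def reward(state, action):
    """+1 when attached to the stronger BS, else -1 (assumed reward shape)."""
    return 1.0 if state[action] >= state[1 - action] else -1.0

def q_learning(episodes=500, steps=50, alpha=0.1, gamma=0.9, eps=0.1):
    Q = {}  # sparse Q-table: (state, action) -> value
    for _ in range(episodes):
        state = (random.randrange(N_LEVELS), random.randrange(N_LEVELS))
        for _ in range(steps):
            # epsilon-greedy behaviour policy
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
            nxt = step(state)
            r = reward(state, action)
            # off-policy target: max over next-state actions
            best_next = max(Q.get((nxt, a), 0.0) for a in ACTIONS)
            Q[(state, action)] = Q.get((state, action), 0.0) + alpha * (
                r + gamma * best_next - Q.get((state, action), 0.0))
            state = nxt
    return Q

Q = q_learning()
# Greedy policy check: with BS 0 at full strength and BS 1 at zero,
# the learned policy should attach to BS 0.
policy = max(ACTIONS, key=lambda a: Q.get(((4, 0), a), 0.0))
print(policy)
```

Even this toy illustrates the scaling problem the abstract describes: the Q-table grows with the product of the discretised state dimensions, which is why a continuous RSRP signal pushes the thesis toward function approximation.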
83

Inferens på rangordningar - En Monte Carlo-analys / Inference on rankings - A Monte Carlo analysis

Bohlin, Lars January 2015 (has links)
No description available.
84

En Säsongsspelmodell / A Seasonal Betting Model

Pirsech, William January 2015 (has links)
No description available.
85

Tennismodellen II : En undersökning om fördelaktiga odds och spelstrategi för spel på tennismatcher med hjälp av en statistisk modell / The Tennis Model II: A study of favourable odds and betting strategy for betting on tennis matches using a statistical model

Ericsson, Tomas January 2015 (has links)
No description available.
86

Datainsamlingsmetoder för statistiska undersökningar : en beskrivande litteraturstudie / Data collection methods for statistical surveys: a descriptive literature review

Malmkvist, Jenny January 2015 (has links)
No description available.
87

Test of Causality in Conditional Variance: Hafner and Herwartz Test for Causality

Jatta, Abdullah January 2015 (has links)
No description available.
88

A Monte Carlo Study Comparing Three Methods for Determining the Number of Principal Components and Factors

Sheytanova, Teodora January 2015 (has links)
No description available.
89

Finance Forecasting in Fractal Market Hypothesis

Lin, Wangke January 2015 (has links)
No description available.
90

Oddssättning : utvärdering av modeller för skattning av matchodds i Svenska Superligan i innebandy / Odds setting: an evaluation of models for estimating match odds in Svenska Superligan (Swedish floorball)

Mundt, Henrik January 2015 (has links)
No description available.
