  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
491

A Statistical Model of Lexical Context

Jabbari, Sanaz January 2010 (has links)
No description available.
492

A statistical approach to sports betting

Altmann, Anton January 2004 (has links)
While gambling on sports fixtures is a popular activity, for the majority of gamblers it is not a profitable one. In order to make a consistent profit through gambling, one of the requirements is the ability to assess accurate probabilities for the outcomes of the events upon which one wishes to place bets. Through experience of betting, familiarity with certain sports and a natural aptitude for estimating probabilities, a small number of gamblers are able to do this. This thesis also attempts to achieve this, but through purely scientific means.

There are three main areas covered in this thesis: the market for red and yellow cards in Premier League soccer, the market for scores in American football (NFL) and the market for scores in US basketball (NBA). There are several issues that must be considered when attempting to fit a statistical model to any of these betting markets. These are introduced in the early stages of this thesis along with some previously suggested solutions. Among these, for example, is the importance of obtaining estimates of team characteristics that reflect the belief that these characteristics adjust over time. It is also important to devise measures for evaluating the success of any model and to be able to compare the predictive abilities of different models for the same market.

A general method is described which is suitable for modelling the sporting markets featured in this thesis. This method is adapted from a previous study on UK soccer results and involves the maximisation of a likelihood function. In order to make predictions that have any chance of competing with the odds supplied by professional bookmakers, this modelling process must be expanded to reflect the idiosyncrasies of each sport. With the market for red and yellow cards in Premier League soccer matches, in addition to considering the characteristics of the two teams in the match, one must also consider the effect of the referee. It is also discovered that the average booking rate for Premier League soccer matches varies significantly throughout the course of a season.

The unusual scoring system used in the NFL means that a histogram of the final scores of match results does not resemble any standard statistical distribution. There is also a wealth of data available for every NFL match besides the final score. It is worth investigating whether, by exploiting this additional past data, more accurate predictions for future matches can be obtained.

The analysis of basketball considers the busier schedule of games that NBA teams face, compared to NFL or Premier League soccer teams. The result of one match may plausibly be affected by the number of games that the team has had to play in the days immediately before the match. Furthermore, data is available giving the scores of the game at various stages throughout the match. By using this data, one can assess to what extent, and in which situations, the scoring rate varies during a match.

These issues, among many others, are addressed during this thesis. In each case a model is devised and a betting strategy is simulated by comparing model predictions with odds that were supplied by professional bookmakers prior to fixtures. The limitations of each model are discussed and possible extensions of the analysis are suggested throughout.
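The likelihood-maximisation approach the abstract describes can be illustrated with a deliberately simplified sketch: fitting a single Poisson booking rate to per-match card counts by maximising the log-likelihood over a grid. The data and rate grid here are invented for illustration; the thesis itself fits far richer models with team, referee and seasonal effects.

```python
import math

# Hypothetical card counts for one team over ten matches (invented data).
cards = [2, 3, 1, 4, 2, 2, 3, 5, 1, 2]

def poisson_log_likelihood(rate, counts):
    """Log-likelihood of i.i.d. Poisson counts at a given rate."""
    return sum(k * math.log(rate) - rate - math.lgamma(k + 1) for k in counts)

# Maximise over a grid of candidate rates; for a Poisson model the
# maximum-likelihood rate is simply the sample mean.
rates = [r / 100 for r in range(50, 501)]
best = max(rates, key=lambda r: poisson_log_likelihood(r, cards))
print(best, sum(cards) / len(cards))  # the grid maximiser sits at the sample mean, 2.5
```

In practice one rate per team (plus referee and time effects) is estimated, and the grid search is replaced by numerical optimisation, but the principle of choosing parameters to maximise the likelihood of the observed results is the same.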
493

A statistical analysis of electrocardiogram variation

Johnson, T. E. January 1981 (has links)
No description available.
494

Two studies in statistical inference

Wearing, P. J. January 1983 (has links)
No description available.
495

Statistical classification of atmospheric regimes

Law, Barry Ka-Ping January 1996 (has links)
Meteorologists have spent decades attempting to predict the weather over extended periods of time. Complex models of up to several million variables can only produce reliable predictions up to four days ahead. By representing the atmosphere in a multi-dimensional 'phase space', we hope to find preferred areas of this space where the weather will persist. Using a simple simulation model, we applied nine clustering methods, some of which are new, to the simulated data. These methods represent three different levels of interaction between the user and the method. While developing new clustering methods, we also developed an outlier method which is shown, on a real dataset, to be better than 16 current multivariate outlier methods. The results of the simulation studies indicate that the more interaction between the user and the method, the better the outcome. Next, we adapted the usual Ward's, and Caussinus and Ruiz's, clustering methods to take time into consideration. This created six new time-constrained clustering methods, which we applied to simulated data from a new time-dependent simulation model. Consistent patterns were found, and the results also indicate that if we apply the usual Ward's clustering method to suspected time-dependent data then we would achieve the best outcome only 35% of the time, at most. Finally, we looked at ways of sieving transient observations from cluster groups and highlighting significant transitions by applying several techniques to a meteorological dataset.
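Ward's criterion, which the thesis adapts, merges at each step the pair of clusters whose union least increases the total within-cluster sum of squares. A minimal pure-Python sketch on invented one-dimensional "regime" observations (the real work is multivariate and adds time constraints):

```python
def sse(cluster):
    """Within-cluster sum of squared deviations from the mean."""
    m = sum(cluster) / len(cluster)
    return sum((x - m) ** 2 for x in cluster)

def ward_merge_step(clusters):
    """Merge the pair of clusters whose union minimises the SSE increase."""
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            inc = sse(clusters[i] + clusters[j]) - sse(clusters[i]) - sse(clusters[j])
            if best is None or inc < best[0]:
                best = (inc, i, j)
    _, i, j = best
    merged = clusters[i] + clusters[j]
    return [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

# Toy one-dimensional observations forming two well-separated regimes.
clusters = [[x] for x in [0.1, 0.2, 0.15, 5.0, 5.1, 4.9]]
while len(clusters) > 2:
    clusters = ward_merge_step(clusters)
print(sorted(sorted(c) for c in clusters))  # [[0.1, 0.15, 0.2], [4.9, 5.0, 5.1]]
```

The time-constrained variants studied in the thesis additionally restrict which pairs may merge (e.g. only temporally adjacent clusters), which changes the candidate set in the double loop but not the criterion itself.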
496

A statistical study of rainrate distributions

Holland, D. A. January 1988 (has links)
No description available.
497

Simulation systems for statistical tests

Aziz, A. M. A. H. January 1987 (has links)
No description available.
498

Counting statistics of stochastic processes

Gillespie, Colin Stevenson January 2003 (has links)
No description available.
499

Introduction of statistics in optimization

Teytaud, Fabien 08 December 2011 (has links) (PDF)
In this thesis we study two fields of optimization. In the first part, we study the use of evolutionary algorithms for solving derivative-free optimization problems in continuous space. In the second part we are interested in multistage optimization, where decisions must be made in a discrete environment with a finite horizon and a large number of states; in this part we use in particular Monte-Carlo Tree Search algorithms.

In the first part, we work on evolutionary algorithms in a parallel context, when a large number of processors is available. We start by presenting some state-of-the-art evolutionary algorithms, and then show that these algorithms are not well designed for parallel optimization. Because these algorithms are population-based, they should be well suited to parallelization, but experiments show that their results are far from the theoretical bounds. In order to resolve this discrepancy, we propose some rules (such as a new selection ratio or a faster decrease of the step-size) to improve the evolutionary algorithms. Experiments on several evolutionary algorithms show that, with the help of these new rules, the algorithms reach the theoretical speedup.

Concerning the work on multistage optimization, we start by presenting some state-of-the-art algorithms (Min-Max, Alpha-Beta, Monte-Carlo Tree Search, Nested Monte-Carlo). After that, we show the generality of the Monte-Carlo Tree Search algorithm by successfully applying it to the game of Havannah. The application has been a real success: today, every Havannah program uses Monte-Carlo Tree Search algorithms instead of the classical Alpha-Beta. Next, we study more precisely the Monte-Carlo part of the Monte-Carlo Tree Search algorithm. Three generic rules are proposed in order to improve this Monte-Carlo policy, and experiments demonstrate the efficiency of these rules.
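The selection step at the heart of Monte-Carlo Tree Search balances exploitation against exploration, commonly with the UCB1 rule. A flat-bandit sketch with invented move win-probabilities shows the rule concentrating play on the strongest move; a full MCTS additionally grows a tree of such bandits, one per visited state.

```python
import math
import random

def ucb1_bandit(payouts, rounds=5000, seed=0):
    """Flat UCB1: the exploration rule applied at each node of an MCTS tree."""
    rng = random.Random(seed)
    counts = [0] * len(payouts)
    wins = [0.0] * len(payouts)
    for t in range(1, rounds + 1):
        if t <= len(payouts):
            arm = t - 1  # play each arm once to initialise
        else:
            # mean reward plus an exploration bonus that shrinks with visits
            arm = max(range(len(payouts)),
                      key=lambda a: wins[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        counts[arm] += 1
        wins[arm] += 1.0 if rng.random() < payouts[arm] else 0.0
    return counts

# Hypothetical win probabilities for three candidate moves.
counts = ucb1_bandit([0.2, 0.5, 0.8])
print(counts)  # the 0.8 arm should dominate the visit counts
```

The "Monte-Carlo policy" improvements the abstract refers to concern how the random playouts below the tree are conducted, not this selection rule itself.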
500

Data mining and classical statistics

Luo, Man January 2004 (has links)
This study provides an overview of data mining. It suggests that methods derived from classical statistics are an integral part of data mining, although there are substantial differences between the two areas. Classical statistical models and the non-statistical models used in data mining, such as regression trees and artificial neural networks, are presented to emphasize their distinct approaches to extracting information from data. In summary, this research provides some background to data mining and the role classical statistics plays in it.
Department of Mathematical Sciences
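The contrast the abstract draws between classical regression and tree-based methods comes down to how a tree partitions the data: each node picks the single split that most reduces squared error. A minimal sketch of that core step, on invented step-shaped data that a single linear fit handles poorly (real regression-tree software such as CART recurses on both halves):

```python
def best_stump_split(xs, ys):
    """Find the single threshold on x that minimises total squared error,
    i.e. one node of a regression tree."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        err = sse(left) + sse(right)
        if best is None or err < best[0]:
            best = (err, t)
    return best[1]

# Invented step-shaped data: one split at x = 3 captures it exactly.
xs = [1, 2, 3, 4, 5, 6]
ys = [1.0, 1.0, 1.0, 9.0, 9.0, 9.0]
print(best_stump_split(xs, ys))  # → 3
```

A classical linear regression would summarise the same data with a single slope and intercept; the tree instead yields a piecewise-constant fit, which is the "unique approach" the abstract alludes to.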
