611 |
Contributions to industrial statistics / Leung, Bartholomew Ping Kei. January 1999 (has links) (PDF)
No description available.
|
612 |
Dualities in Abelian statistical models / Jaimungal, Sebastian. January 1999 (has links) (PDF)
No description available.
|
613 |
Statistical estimation of effective bandwidth / Rabinovitch, Peter. January 2000 (has links) (PDF)
No description available.
|
614 |
Contributions to Bayesian statistical inference / Thabane, Lehana. January 1998 (has links) (PDF)
No description available.
|
615 |
The statistical work of Süssmilch / Crum, Frederick Stephen. January 1901 (has links)
Thesis (Ph.D.)--Cornell University. / OSU copy imperfect: cover wanting. Cover title. "Reprinted from Quarterly Publications of the American Statistical Association, September, 1901." Bibliography: p. 36-40.
|
616 |
Chemical and statistical soot modeling / Blanquart, Guillaume. January 2008 (has links)
Thesis (Ph. D.)--Stanford University, 2008. / Submitted to the Department of Mechanical Engineering. Copyright by the author.
|
617 |
Contributions to industrial statistics / Fontdecaba Rigat, Sara. 23 December 2015 (has links)
Thesis by compendium of publications. / This thesis is about statistics' contributions to industry. It is an article compendium comprising four articles divided into two blocks: (i) two contributions for a water supply company, and (ii) the significance of effects in Design of Experiments. In the first block, great emphasis is placed on how research design and statistics can be applied to the real problems that a water company raises, with the aim of convincing water management companies that statistics can be very useful for improving their services.
The article "A methodology to model water demand based on the identification of homogeneous client segments. Application to the city of Barcelona", makes a comprehensive review of all the steps carried out for developing a mathematical model to forecast future water demand. It pays attention on how to know more about the influence of socioeconomic factors on customer's consumption in order to detect segments of customers with homogenous habits to objectively explain the behavior of the demand.
The second article, also related to water demand management, "An Approach to disaggregating total household water consumption into major end-uses", describes a procedure for assigning water consumption to microcomponents (taps, showers, cisterns, washing machines and dishwashers) on the basis of the water consumption readings of the household water meter. The main idea is to treat the consumption of each device as a stochastic process in order to determine which device caused each recorded consumption.
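As a purely hypothetical illustration of the assignment step (the Gaussian event models, the parameter values and the helper names below are invented for this sketch and are not the article's actual stochastic-process model), one could score a metered consumption event against simple per-device models and pick the most likely device:

```python
import numpy as np

# Hypothetical per-device signatures of a single use event: (mean, sd) of
# volume in litres and duration in seconds. Illustrative numbers only.
devices = {
    "tap":        {"vol": (2.0, 1.0),   "dur": (20.0, 10.0)},
    "shower":     {"vol": (50.0, 15.0), "dur": (420.0, 120.0)},
    "cistern":    {"vol": (7.0, 2.0),   "dur": (60.0, 20.0)},
    "washer":     {"vol": (60.0, 10.0), "dur": (3600.0, 600.0)},
    "dishwasher": {"vol": (15.0, 3.0),  "dur": (5400.0, 900.0)},
}

def gaussian_loglik(x, mean, sd):
    # log density of a normal distribution, dropping no terms
    return -0.5 * ((x - mean) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))

def classify(volume, duration):
    """Assign a metered consumption event to the device with the highest
    likelihood under independent Gaussian models of volume and duration."""
    scores = {name: gaussian_loglik(volume, *p["vol"]) + gaussian_loglik(duration, *p["dur"])
              for name, p in devices.items()}
    return max(scores, key=scores.get)

print(classify(volume=48.0, duration=400.0))  # -> "shower"
print(classify(volume=6.5, duration=55.0))    # -> "cistern"
```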
In the second block of the thesis, a better way to judge the significance of effects in unreplicated factorial experiments is described.
The article "Proposal of a Single Critical Value for the Lenth Method" analyzes the many analytical procedures that have been proposed for identifying significant effects in not replicated two level factorial designs. Many of them are based on the original "Lenth Method and explain and try to overcome the problems that it presents". The article proposes a new strategy to choose the critical values to better differentiate the inert from the active factors.
The last article, "Analysing DOE with Statistical Software Packages: Controversies and Proposals", reviews the statistical software packages with DOE capabilities that are most important and most commonly used in industry: JMP, Minitab, SigmaXL, StatGraphics and Statistica, and evaluates how well they resolve the problem of analyzing the significance of effects in unreplicated factorial designs.
|
618 |
Investigations into the robustness of statistical decisions / Watson, James Andrew. January 2016 (has links)
Decision theory is a cornerstone of Statistics, providing a principled framework in which to act under uncertainty. It underpins Bayesian theory via the Savage axioms, game theory via Wald's minimax, and supplies a mathematical formulation of 'rational choice'. This thesis argues that its role is of particular importance in the so-called 'big data' era. Indeed, as data have become larger, statisticians are confronted with an explosion of new methods and algorithms indexing ever more complicated statistical models. Many of these models are not only high-dimensional and highly non-linear, but are also approximate by design, deliberately making approximations for reasons such as tractability and interpretation. For Bayesian theory, and for Statistics in general, this raises many important questions, which I believe decision theory can help elucidate.
From a foundational standpoint, how does one interpret the outputs of Bayesian computations when the model is known to be approximate and misspecified? Concerns of misspecification violate the necessary assumptions for the use of the Savage axioms. Should principles such as expected loss minimisation apply in such settings? On a practical level, how can modellers assess the extent of the impact of model misspecification? How can this be integrated into the process of model construction in order to inform the user whether more work needs to be done (for example, more hours of computation, or a more accurate model)? They need to know whether the model is unreliable, or whether the conclusions of the model are robust and can be trusted.
In the history of Robust Statistics, whose main aspects are covered in Chapter 1, there has been periodic concern with misspecification. Robust Bayesian analysis was a particularly active area of research through the 1980s to mid-90s, but later declined due to methodological and computational advances which overcame the original concerns of misspecification. Now, however, the complexity of datasets frequently prohibits the possibility of constructing fully specified and well-crafted models, and therefore Bayesian robustness merits a reappraisal. Additionally, new methods have been developed which are characterised by their deliberately approximate and misspecified nature, such as integrated nested Laplace approximation (INLA), approximate Bayesian computation (ABC), Variational Bayes, and composite likelihoods. These all start with a premise of misspecification.
The work described in this thesis concerns the development of a comprehensive framework addressing challenges associated with imperfect models, encompassing both formal methods to assess the sensitivity of the model (Chapters 2 & 3) and diagnostic exploratory methods via graphical plots and summary statistics (Chapters 4 & 5). This framework is built on a post hoc sensitivity analysis of the posterior approximating model via the loss function. Chapter 2 describes methods for estimating the sensitivity of a model with respect to the loss function by analysing the effect of local perturbations in neighbourhoods centred at the approximating model (in a Bayesian context this would be the posterior distribution). These neighbourhoods are defined using the Kullback-Leibler divergence. This approach provides a bridge between the two dominant paradigms in decision theory: Wald's minimax and Savage's expected loss criterion. Two key features of this framework are that the solution is analytical and that it unifies other well-known methods in Statistics such as predictive tempering, power likelihoods and Gibbs posteriors. It also offers an interesting solution to the Ellsberg paradox. Another application of the work is in the area of computational decision theory, where the statistician only has access to the model via a finite set of samples. In this context, the methods can be used at very little extra computational cost.
Chapter 3 considers nonparametric extensions to the approximating reference model. In particular, we look at the Pólya tree process, the Dirichlet process and bootstrap procedures. Again using the Kullback-Leibler divergence, it is possible to characterise random samples of these nonparametric models with respect to the base model, and therefore understand the effect of local perturbations on the distribution of loss of the approximating model.
A series of diagnostic plots and summary statistics are presented in Chapter 4, and further illustrated in Chapter 5 by means of two applications taken from the medical decision-making literature. These complete the framework of post hoc assessment of model stability and allow the user to understand why the model might be sensitive to misspecification. Graphical displays are an essential part of statistical analyses, indeed the point of departure for any serious data analysis. Their use in model exploration in the context of decision theory, however, is not common. We borrow some ideas from finance and econometrics as a basis for exploratory decision-system plots. Other plots come as natural consequences of the methodology of the two previous chapters.
The final chapter examines a very specific application of statistical decision theory, namely the analysis of randomised clinical trials to assess the evidence in favour of patient heterogeneity. This problem, known as subgroup analysis, has traditionally been solved using predictive models which are a proxy for the real object of interest: evidence of patient heterogeneity. By formally expressing the decision problem as a hypothesis test, and working from first principles, the problem is shown to be in fact much easier than previously thought. The method avoids issues involving counterfactuals by testing decision rules against their mirror images. It can harness the strength of well-known model-free tests and uses a random forest-type approach for post hoc exploration of decision rules. The randomisation allows for a causal interpretation of the results.
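As a rough illustration of the kind of analytical solution described above, the worst-case expected loss over a Kullback-Leibler neighbourhood of a baseline model admits a well-known one-dimensional convex dual, and the maximising measure is an exponential tilting of the baseline, which is where the connection to tempering, power likelihoods and Gibbs posteriors comes from. The sketch below is a generic Monte Carlo version of that identity, not the thesis's exact formulation; the simulated losses and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative only: draws of the loss L(theta) for posterior samples theta ~ P.
rng = np.random.default_rng(0)
losses = rng.gamma(shape=2.0, scale=1.0, size=10_000)

def worst_case_expected_loss(losses, eps):
    """Upper bound on E_Q[L] over all Q with KL(Q || P) <= eps, computed from
    samples of L under P via the convex dual
        inf_{beta > 0}  beta * eps + beta * log E_P[ exp(L / beta) ].
    The maximising Q re-weights the samples proportionally to exp(L_i / beta*),
    i.e. an exponential tilting of the baseline by the loss."""
    n = len(losses)

    def dual(log_beta):
        beta = np.exp(log_beta)
        log_mean_exp = np.logaddexp.reduce(losses / beta) - np.log(n)
        return beta * eps + beta * log_mean_exp

    res = minimize_scalar(dual, bounds=(-10.0, 10.0), method="bounded")
    beta = np.exp(res.x)
    weights = np.exp((losses - losses.max()) / beta)
    weights /= weights.sum()
    return res.fun, weights

print(f"posterior expected loss       : {losses.mean():.3f}")
for eps in (0.05, 0.25, 1.0):
    bound, _ = worst_case_expected_loss(losses, eps)
    print(f"worst case within KL <= {eps:4.2f} : {bound:.3f}")
```

Plotting how the bound grows with the neighbourhood size eps gives a simple numerical summary of how sensitive the expected loss is to local perturbations of the approximating model.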
|
619 |
The evolutionary roots of intuitive statistics / Eckert, Johanna. 24 September 2018 (has links)
No description available.
|
620 |
Didaktika statistiky / Didactics of statistics. Kvaszová, Milena. January 2012 (has links)
The thesis Didactics of statistics is devoted to the problem of understanding the basic concepts of probability theory and statistics. I conducted a study of college students' understanding of basic statistical concepts such as average, randomness, chance, variability, and sample. I also studied whether the students are able to work with statistical data, whether they can read values from a chart and identify and interpret their arithmetic mean. I interpreted the data obtained from the research using the model of the Swiss psychologist Jean Piaget, according to which it is possible to distinguish three stages in the development of a theory, which Piaget named INTRA, INTER and TRANS. My research indicates that students' ideas about the basic statistical concepts are often very different from those of the teacher. There is also interference between the scientific concepts and the similar-sounding terms of everyday language. It is important that the teacher be aware of these differences and explain them using appropriate examples.
|