321

Statistical metric spaces

Bogar, Gary Allan. January 1963 (has links)
Call number: LD2668 .R4 1963 B674
322

Statistical analysis of pyrosequence data

Keating, Karen January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Gary L. Gadbury / Since their commercial introduction in 2005, DNA sequencing technologies have become widely available and are now cost-effective tools for determining the genetic characteristics of organisms. While the biomedical applications of DNA sequencing are apparent, these technologies have been applied to many other research areas. One such area is community ecology, in which DNA sequence data are used to identify the presence and abundance of microscopic organisms that inhabit an environment. This is currently an active area of research, since it is generally believed that a change in the composition of microscopic species in a geographic area may signal a change in the overall health of the environment. An overview of DNA pyrosequencing, as implemented by the Roche/Life Science 454 platform, is presented and aspects of the process that can introduce variability in data are identified. Four ecological data sets that were generated by the 454 platform are used for illustration. Characteristics of these data include high dimensionality, a large proportion of zeros (usually in excess of 90%), and nonzero values that are strongly right-skewed. A nonparametric method to standardize these data is presented and effects of standardization on outliers and skewness are examined. Traditional statistical methods for analyzing macroscopic species abundance data are discussed, and the applicability of these methods to microscopic species data is examined. One objective that receives focus is the classification of microscopic species as either rare or common species. This is an important distinction since there is much evidence to suggest that the biological and environmental mechanisms that govern common species are distinctly different than the mechanisms that govern rare species. This indicates that the abundance patterns for common and rare species may follow different probability models, and the suitability of the Pareto distribution for rare species is examined. Techniques for classifying macroscopic species are shown to be ill-suited for microscopic species, and an alternative technique is presented. Recognizing that the structure of the data is similar to that of financial applications (such as insurance claims and the distribution of wealth), the Gini index and other statistics based on the Lorenz curve are explored as potential test statistics for distinguishing rare versus common species.
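The Lorenz-curve statistics mentioned in this abstract can be computed directly from a species-abundance vector. Below is a minimal sketch of the Gini index using the standard Lorenz-curve formula; the abundance vector is hypothetical and this is not the author's implementation.

```python
import numpy as np

def gini_index(abundances):
    """Gini index of a non-negative abundance vector.

    Values near 0 mean species are roughly equally abundant; values
    near 1 mean abundance is concentrated in a few species.
    """
    x = np.sort(np.asarray(abundances, dtype=float))
    n = x.size
    total = x.sum()
    if n == 0 or total == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    # Standard Lorenz-curve formula: G = 2*sum(i*x_(i)) / (n*sum(x)) - (n+1)/n
    return 2.0 * np.sum(ranks * x) / (n * total) - (n + 1.0) / n

# Hypothetical species counts: mostly zeros, strongly right-skewed
counts = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 3, 150]
print(gini_index(counts))  # close to 1, i.e. highly unequal abundance
```

A high Gini index for such zero-heavy, right-skewed counts illustrates why Lorenz-curve statistics are natural candidates for separating rare from common species.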
323

The formulation of the relativistic statistical mechanics

Suh, Kiu Suk. January 1957 (has links)
Call number: LD2668 .T4 1957 S94 / Master of Science
324

A Generalization of The Partition Problem in Statistics

Zhou, Jie 20 December 2013 (has links)
In this dissertation, the problem of partitioning a set of treatments with respect to a control treatment is considered. Starting in the 1950s, a number of researchers have worked on this problem and have proposed alternative solutions. Tong (1979) proposed a formulation to solve this problem, and hundreds of researchers and practitioners have used that formulation for the partition problem. However, Tong's formulation is somewhat rigid and can mislead practitioners if the distance between the "good" and the "bad" populations is large. In this case, the indifference zone becomes quite large, and the undesirable feature of Tong's formulation of partitioning the populations in the indifference zone without any penalty can lead it to produce misleading partitions. In this dissertation, a generalization of Tong's formulation is proposed, under which the treatments in the indifference zone are not partitioned as "good" or "bad", but are partitioned as an identifiable set. For this generalized partition, a fully sequential procedure and a two-stage procedure are proposed and their theoretical properties are derived. The proposed procedures are also studied via Monte Carlo simulation studies. The thesis concludes with some non-parametric partition procedures and a study of the robustness of the various procedures available in the statistical literature.
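As a rough illustration of the kind of partition rule discussed in this abstract (not Tong's formulation or the author's sequential and two-stage procedures), a naive single-stage rule might classify each treatment by comparing its sample mean to the control, with an indifference zone between two thresholds. All names, thresholds, and data below are hypothetical.

```python
import numpy as np

def partition_treatments(control, treatments, delta1, delta2):
    """Naive single-stage partition of treatments relative to a control.

    A treatment is labelled 'good' if its sample mean exceeds the control
    mean by at least delta2, 'bad' if the excess is at most delta1, and
    'indifference' otherwise.  Purely illustrative; not the procedures
    developed in the dissertation.
    """
    mu0 = np.mean(control)
    labels = {}
    for name, sample in treatments.items():
        diff = np.mean(sample) - mu0
        if diff >= delta2:
            labels[name] = "good"
        elif diff <= delta1:
            labels[name] = "bad"
        else:
            labels[name] = "indifference"
    return labels

# Hypothetical data: three treatments versus a control
rng = np.random.default_rng(0)
control = rng.normal(0.0, 1.0, size=30)
treatments = {
    "A": rng.normal(1.5, 1.0, size=30),
    "B": rng.normal(0.2, 1.0, size=30),
    "C": rng.normal(-1.0, 1.0, size=30),
}
print(partition_treatments(control, treatments, delta1=0.5, delta2=1.0))
```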
325

Statistical Steganalysis of Images

Min Huang (7036661) 13 August 2019 (has links)
Steganalysis is the study of detecting secret information hidden via steganography in objects such as images, videos, texts, time series, and games. Among these objects, the image is the most widely used for hiding secret messages. Detection of secret information possibly hidden in images has attracted a lot of attention over the past ten years. People may conduct covert communications by exchanging images in which secret messages are embedded in the bits. One of the main advantages of steganography over cryptography is that the former makes such communication imperceptible to human beings, so statistical methods or tools are needed to help distinguish cover images from stego images.

In this thesis, we start with a discussion of image steganography. Different kinds of embedding schemes for hiding secret information in images are investigated. We also propose a hiding scheme using a reference matrix to lower the distortion caused by embedding. As a result, we obtain Peak Signal-to-Noise Ratios (PSNRs) of stego images that are higher than those given by a Sudoku-based embedding scheme. Next, we consider statistical steganalysis of images in two different frameworks. We first study steganalysis in the framework of statistical hypothesis testing; that is, we cast a cover/stego image detection problem as a hypothesis testing problem. For this purpose, we employ different statistical models for cover images and simulate the effects caused by secret-information embedding operations on cover images. Steganalysis can then be characterized as a hypothesis testing problem in terms of the embedding rate. Rao's score statistic is used to help make a decision. The main advantage of using Rao's score test for this problem is that it eliminates an assumption used in previous work, where approximate log likelihood ratio (LR) statistics were commonly employed for the hypothesis testing problems.

We also investigate steganalysis in the deep learning framework. Motivated by neural network architectures applied in computer vision and other tasks, we propose a carefully designed deep convolutional neural network architecture to classify cover and stego images. We empirically show that the proposed neural network outperforms the state-of-the-art ensemble classifier using a rich model, and is also comparable to other convolutional neural network architectures used for steganalysis.

The image databases used in the thesis are available on websites cited in the thesis. The stego images are generated from the image databases using source code from http://dde.binghamton.edu/download/.
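For reference, the PSNR figure of merit mentioned in this abstract compares a stego image against its cover image. The sketch below uses the standard PSNR definition with a toy least-significant-bit style modification; the images and embedding are hypothetical, not the thesis's reference-matrix scheme.

```python
import numpy as np

def psnr(cover, stego, max_val=255.0):
    """Peak Signal-to-Noise Ratio between a cover image and a stego image.

    Higher PSNR means the embedding introduced less visible distortion.
    """
    cover = np.asarray(cover, dtype=float)
    stego = np.asarray(stego, dtype=float)
    mse = np.mean((cover - stego) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: flip the least significant bit in a small block of pixels
rng = np.random.default_rng(7)
cover = rng.integers(0, 256, size=(64, 64))
stego = cover.copy()
stego[:8, :8] ^= 1  # LSB-style modification of 64 pixels
print(psnr(cover, stego))
```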
326

Statistical Consulting at Draper Laboratory

Richard, Noelle M. 27 August 2014 (has links)
"This Master’s capstone was conducted in conjunction with Draper Laboratory, a non-profit research and development organization in Cambridge, Massachusetts. During a three month period, the author worked for the Microfabrication Department, assisting with projects related to statistics and quality control. The author gained real-world experience in data collection and analysis, and learned a new statistical software. Statistical methods covered in this report include regression analysis, control charts and capability, Gage R & R studies, and basic exploratory data analysis."
327

Learning to teach statistics meaningfully.

Lampen, Christine Erna 06 January 2014 (has links)
Following international trends, statistics is a relatively new addition to the South African mathematics curriculum at school level and its implementation was fraught with problems. Since 2001, teaching statistics in the Further Education and Training Phase (Grades 10 to 12) has been optional due to a lack of professional development of teachers. From 2014 teaching statistics will be compulsory. This study is therefore timely, as it provides information about different discourses in discussions of an ill-structured problem in a data-rich context, as well as in discussions of the meaning of the statistical mean. A qualitative case study of informal statistical reasoning was conducted with a group of students who attended an introductory course in descriptive statistics as part of an honours degree in mathematics education at the University of the Witwatersrand. The researcher was the course lecturer. Transcripts of the discussions in four video-recorded sessions at the start of the semester-long course form the bulk of the data. The discussions in the first three sessions of the course were aimed at structuring the data-context, or grasping the system dynamics of the data-context, as is required at the start of a cycle of statistical investigation. The discussion in the fourth session was about the syntactical meaning of the mean algorithm. It provides guidelines for meaningful disobjectification of the well-known mean algorithm. This study provides insight into informal statistical reasoning that is currently described as idiosyncratic or verbal according to statistical reasoning models. Discourse analysis based on Sfard's (2008) theory of Commognition was used to investigate and describe discursive patterns that constrain shifting from colloquial to informal statistical discourse. The main finding is that colloquial discourse aimed at decision making in a data-context is incommensurable with statistical discourse, since the comparison of data in the two discourses is drawn on incommensurable scales: a qualitative evaluation scale and a quantitative descriptive scale. The problem of comparison on a qualitative scale also emerged in the discourse on the syntactical meaning of the mean algorithm, where average as a qualitative judgement conflicted with the mean as a quantitative measurement. Implications for teaching and teacher education are that the development of statistical discourse may be dependent on alienation from data-contexts and the abstraction of measurements as abstract numerical units. Word uses that confound measurements as properties of objects and measurements as abstract units are discussed. Attention to word use is vital in order to discern evaluation narratives as deed routines from exploration narratives and routines.
328

Statistical analysis of bioequivalence studies

Nyathi, Mavuto January 2016 (has links)
A Research Report submitted to the Faculty of Science in partial fulfilment of the requirements for the degree of Master of Science. 26 October 2016. / Healthcare has become expensive the world over, and a large share of the cost goes to buying drugs. In order to reduce the cost of drugs, drug manufacturers came up with the idea of manufacturing generic drugs, which cost less than brand name drugs. The challenge that arose was whether generic drugs are as safe, effective and efficient as brand name drugs for the people who buy them. As a consequence of this challenge, bioequivalence studies evolved: statistical procedures for assessing whether generic and brand name drugs are similar in treating patients for various diseases. This study was undertaken to show the existence of bioequivalence in drugs. The bioavailability of generic drugs is examined, using statistical tests, to ensure that it is more or less the same as that of the original drugs. The United States Food and Drug Administration took the lead in research on statistical methods for certifying generic drugs as bioequivalent to brand name drugs. Pharmacokinetic parameters are obtained from blood samples after dosing study subjects with generic and brand name drugs. The design for analysis in this research report will be a 2 × 2 crossover design. Average, population and individual bioequivalence are checked from pharmacokinetic parameters to ascertain whether drugs are bioequivalent or not. Statistical procedures used include confidence intervals and interval hypothesis tests, using parametric as well as nonparametric statistical methods. When presenting results on whether drugs are bioequivalent or not, effect sizes will be reported in addition to hypothesis tests and confidence intervals, which indicate whether there is a difference or not. If there is a difference between generic and brand name drugs, effect sizes quantify the magnitude of the difference. KEY WORDS: bioequivalence, bioavailability, generic (test) drugs, brand name (reference) drugs, average bioequivalence, population bioequivalence, individual bioequivalence, pharmacokinetic parameters, therapeutic window, pharmaceutical equivalence, confidence intervals, hypothesis tests, effect sizes. / TG2016
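As a rough illustration of the average-bioequivalence criterion described in this abstract, the sketch below computes a 90% confidence interval for the geometric mean ratio of a log-transformed pharmacokinetic parameter and checks it against the conventional 0.80 to 1.25 regulatory limits. It treats the data as paired observations per subject and ignores the period and sequence effects of a full 2 × 2 crossover analysis; the data are simulated and the function name is hypothetical.

```python
import numpy as np
from scipy import stats

def average_bioequivalence(pk_test, pk_ref, limits=(0.80, 1.25), alpha=0.05):
    """Simplified average-bioequivalence check on log-transformed data.

    Paired-subject sketch only: period and sequence effects of the
    2 x 2 crossover are ignored.  Bioequivalence is concluded when the
    90% CI for the geometric mean ratio lies inside the limits.
    """
    d = np.log(np.asarray(pk_test, dtype=float)) - np.log(np.asarray(pk_ref, dtype=float))
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(1 - alpha, df=n - 1)   # 90% CI = two one-sided 5% tests
    lo = np.exp(d.mean() - t_crit * se)
    hi = np.exp(d.mean() + t_crit * se)
    return (lo, hi), bool(limits[0] < lo and hi < limits[1])

# Hypothetical AUC values for 12 subjects under reference and test drugs
rng = np.random.default_rng(1)
auc_ref = rng.lognormal(mean=4.0, sigma=0.3, size=12)
auc_test = auc_ref * rng.lognormal(mean=0.02, sigma=0.10, size=12)
ci, equivalent = average_bioequivalence(auc_test, auc_ref)
print(ci, equivalent)
```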
329

INFERENTIAL STATISTICS FOR MULTILEVEL ANALYSIS

Unknown Date (has links)
The purpose of this study was to compare alternative models for analyzing student learning outcomes. The models were: (a) the single equation approach, (b) the separate equation approach using the actual within-group intercept (actual b_0j), and (c) the separate equation approach using an adjusted within-group intercept (adjusted b_0j). The models were compared first under an additive model assumption and then under an interactive model assumption. Basic models, assumptions and procedures were discussed. Data were generated using computer simulation. / The simulation model assumed that the within-group process is represented by the within-group slope and intercept. Those parameters were assumed to be a linear function of the group mean. / One hundred, 500 and 1,000 replications were generated for the additive model, the interactive model assuming an intraclass correlation of 0.2, and the interactive model assuming an intraclass correlation of 0.4, respectively. Each set of replications was analyzed using the three approaches. Sampling distributions for the additive constant (b_0), the individual (b_s), group (b_c) and interaction (b_sc) effects were compared. / The results suggested that, for the additive model, the single equation approach and the separate equation approach using adjusted b_0j provided unbiased estimates of b_0, b_s, and b_c with approximately equal sizes for the actual standard errors of the estimates. However, only the separate equation approach using adjusted b_0j provided an accurate picture of the actual precision of the estimates. / Results for the interactive model suggested that the separate equation approaches are superior to the single equation approach in terms of providing equal and unbiased estimates. However, only the separate equation approach using actual b_0j is recommended, because it is less costly in both computer time and personnel time. / Source: Dissertation Abstracts International, Volume: 43-01, Section: A, page: 0147. / Thesis (Ph.D.)--The Florida State University, 1982.
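A small simulation in the spirit of the design described in this abstract can contrast the single equation approach with a separate equation approach. All parameter values, group sizes, and variable names below are hypothetical, not those used in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population values: within-group intercept and slope are
# linear functions of the group mean, as in the simulation model described.
J, n_j = 50, 25              # number of groups, students per group
g0, g1 = 1.0, 0.5            # intercept_j = g0 + g1 * group_mean_j
h0, h1 = 0.3, 0.2            # slope_j     = h0 + h1 * group_mean_j

group_means = rng.normal(0.0, 1.0, size=J)
groups = []
for w in group_means:
    x = rng.normal(w, 1.0, size=n_j)                          # individual predictor
    y = (g0 + g1 * w) + (h0 + h1 * w) * x + rng.normal(0, 1, size=n_j)
    groups.append((x, y, np.full(n_j, w)))

x = np.concatenate([g[0] for g in groups])
y = np.concatenate([g[1] for g in groups])
w = np.concatenate([g[2] for g in groups])

# (a) Single equation approach: one regression with an interaction term
X = np.column_stack([np.ones_like(x), x, w, x * w])
coef_single, *_ = np.linalg.lstsq(X, y, rcond=None)
print("single equation coefficients:", coef_single)

# (b) Separate equation approach: per-group fits, then regress the
#     estimated intercepts and slopes on the group means
intercepts, slopes = [], []
for xg, yg, wg in groups:
    b1, b0 = np.polyfit(xg, yg, 1)        # slope, intercept for group j
    intercepts.append(b0)
    slopes.append(b1)
A = np.column_stack([np.ones(J), group_means])
print("intercept model:", np.linalg.lstsq(A, np.array(intercepts), rcond=None)[0])
print("slope model:    ", np.linalg.lstsq(A, np.array(slopes), rcond=None)[0])
```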
330

CONTRIBUTIONS TO BAYESIAN NONPARAMETRIC STATISTICS

Unknown Date (has links)
Source: Dissertation Abstracts International, Volume: 40-09, Section: B, page: 4367. / Thesis (Ph.D.)--The Florida State University, 1979.
