751 |
Statistical Simulation of Patient Compliance. Song, Mikyung. 08 1900
Patient compliance is one of the important factors which affect patient outcome in therapeutic trials. As methods of measuring patient compliance have been developed, the impact of compliance on therapeutic regimen performance has become possible to study. In this project, patient compliance measures are defined and compliance distributions are introduced. The two are combined to improve the knowledge of drug-outcome combinations, and compliance information has been simulated in the case of therapeutic trials. The study of compliance effects on patient outcome was approached in two ways: looking at fixed compliance effects upon the entire dose-response curve, and treating the compliance measure as a variable affecting the dose-response outcome of patients, the latter by means of statistical simulation. It was shown that patient compliance information affected patient outcome in terms of the achievement of the therapeutic goal. The results of this project also bear on what can be said about the basis of drug-outcome combinations and prescriptions when patient compliance is considered. In addition, compliance-response-outcome chains, expressed as percentages of patients responding, were obtained from simulations of some common design models used in therapeutic trials. / Thesis / Master of Science (MS)
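The abstract above does not spell out its simulation models; as a rough sketch of the general idea only, the fragment below draws a compliance fraction for each simulated patient from an assumed Beta distribution, scales the prescribed dose accordingly, and passes the effective dose through a hypothetical logistic dose-response curve to estimate the percentage of patients who reach the therapeutic goal. Every distribution and parameter here is an illustrative assumption, not the thesis's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_outcomes(n_patients=10_000, prescribed_dose=1.0, ed50=0.6, slope=8.0):
    """Simulate therapeutic outcomes when compliance scales the effective dose.

    Compliance is drawn from a Beta(5, 2) distribution (an assumption);
    the probability of reaching the therapeutic goal follows a logistic
    dose-response curve with midpoint `ed50` and steepness `slope`.
    """
    compliance = rng.beta(5, 2, size=n_patients)       # fraction of doses actually taken
    effective_dose = compliance * prescribed_dose      # dose actually received
    p_response = 1.0 / (1.0 + np.exp(-slope * (effective_dose - ed50)))
    responded = rng.random(n_patients) < p_response
    return compliance, responded

compliance, responded = simulate_outcomes()
print(f"mean compliance: {compliance.mean():.2f}")
print(f"percent reaching therapeutic goal: {100 * responded.mean():.1f}%")
```

Rerunning the sketch with compliance fixed at 1.0 shows the gap between ideal and realized outcomes, which is the kind of compliance-response-outcome chain the abstract describes.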
|
752 |
The application of Mellin transforms to statistics. Mercier, Raymond P. January 1956
No description available.
|
753 |
A statistical problem in the geometry of numbers. Smith, Norman Edward. January 1952
No description available.
|
754 |
Statistical models for purchasing behaviour. Marchant, Carol L. January 1968
No description available.
|
755 |
Schur complements and statistics. Ouellette, Diane Valérie. January 1978
No description available.
|
756 |
Statistical approaches to population genetics. Chun, Dan. January 1962
This paper is concerned with the statistical approach to population genetics. The genetical characteristics of the population under consideration are the population size and the gene frequencies.
The limitations on the population are
(a) the individuals are either all haploids or all diploids;
(b) the alleles are only two in number, either A or a;
(c) there are no selection or migration pressures acting on the population.
Three main model types are discussed: branching processes, Markov chains, and diffusion processes.
Published results on the subject are presented along with some new investigations. Comparable results obtained by various authors are checked, some of which are found to be invalid.
Continuous approximations are used to derive some of the results of discrete processes, but a chapter is devoted to the justification of such approximations. / M.S.
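A minimal sketch of one of the Markov-chain models the abstract refers to, under the stated limitations (two alleles A and a, haploid individuals, no selection or migration): a Wright-Fisher chain in which the count of allele A in the next generation is binomial with success probability equal to its current frequency. The population size and starting count below are arbitrary illustrative choices, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def wright_fisher(n_pop=100, n_a=50, generations=200):
    """Track the count of allele A in a haploid population of fixed size n_pop.

    Each generation, offspring alleles are drawn binomially from the current
    gene frequency; the chain is absorbed at 0 or n_pop (loss or fixation of A).
    No selection or migration pressure is applied.
    """
    counts = [n_a]
    for _ in range(generations):
        freq = counts[-1] / n_pop
        counts.append(rng.binomial(n_pop, freq))
        if counts[-1] in (0, n_pop):
            break
    return counts

path = wright_fisher()
print(f"final count of allele A after {len(path) - 1} generations: {path[-1]}")
```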
|
757 |
Statistical models with dichotomous response. Daugherty, Sherryl A. January 1984
Statistical models with dichotomous response are investigated. Many techniques for modeling such situations, involving various types of distributions, are discussed. The two distributions examined in detail are those underlying probit and logit analysis. Several estimation methods are developed for determining the parameters of these two distributions.
Examples are given for each of these estimation methods, and finally the estimation methods are compared. / Master of Science
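The abstract does not reproduce its estimation methods; as one standard illustration of maximum-likelihood estimation for a logit model, the sketch below fits an intercept and slope to made-up grouped dose-response data by Newton-Raphson iteration. The data, starting values, and convergence tolerance are assumptions for the example.

```python
import numpy as np

# Hypothetical grouped dose-response data: dose level, subjects, responders.
dose = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
n = np.array([40, 40, 40, 40, 40])
r = np.array([4, 10, 22, 31, 37])

X = np.column_stack([np.ones_like(dose), dose])   # design matrix: intercept + dose
beta = np.zeros(2)

# Newton-Raphson iterations for the logit maximum-likelihood estimates.
for _ in range(25):
    eta = X @ beta
    p = 1.0 / (1.0 + np.exp(-eta))
    score = X.T @ (r - n * p)                     # gradient of the binomial log-likelihood
    W = n * p * (1.0 - p)                         # weights for the information matrix
    info = X.T @ (W[:, None] * X)
    step = np.linalg.solve(info, score)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break

print(f"intercept = {beta[0]:.3f}, slope = {beta[1]:.3f}")
print(f"estimated ED50 = {-beta[0] / beta[1]:.3f}")  # dose giving a 50% response rate
```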
|
758 |
Statistical study of measurements of nematodes. Al-Sady, Salem Dhaib. January 1964
This thesis is a preliminary statistical study of two nematodes, Horsenettle Cyst and Osborne Cyst. Each of the two nematodes was collected from two hosts, Horsenettle weed and tobacco. For each of the four combinations, 115 nemas were selected, and six measurements were taken from each nema.
A test of normality was made to investigate the normality of the distributions of the combinations. Two tests of homogeneity of variances were made for the combinations considered to be normally distributed. Analyses of variance were made for the measurement combination which satisfied the assumptions of normality and homogeneity of variances.
Nonparametric analyses of variance were made for the combinations which had been considered non-normally distributed or had unequal variance.
A discriminant function was estimated to discriminate between any two of the combinations based on the six measurements, and a test of significance for each discriminant function was made. The separation point between each pair of groups was determined, and the probability of misclassification was computed. / Master of Science
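The nematode measurements themselves are not given in the abstract; the sketch below only shows the general form of the analysis described, estimating Fisher's linear discriminant between two groups of six-dimensional measurements, the separation point, and the misclassification probability Phi(-D/2) under equal-covariance normal assumptions. The stand-in data are randomly generated.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Stand-in data: two groups of 115 nemas, six measurements each (made up).
group1 = rng.normal(loc=10.0, scale=1.0, size=(115, 6))
group2 = rng.normal(loc=11.0, scale=1.0, size=(115, 6))

m1, m2 = group1.mean(axis=0), group2.mean(axis=0)
# Pooled within-group covariance matrix (equal-covariance assumption).
s_pooled = (np.cov(group1, rowvar=False) * 114 +
            np.cov(group2, rowvar=False) * 114) / (115 + 115 - 2)

# Fisher's linear discriminant coefficients and the separation point.
coef = np.linalg.solve(s_pooled, m1 - m2)
cutoff = coef @ (m1 + m2) / 2.0        # classify to group 1 if coef @ x > cutoff

# Mahalanobis distance between groups and the equal-prior misclassification probability.
d2 = (m1 - m2) @ coef
print(f"Mahalanobis D^2 = {d2:.2f}")
print(f"estimated misclassification probability = {norm.cdf(-np.sqrt(d2) / 2):.4f}")
```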
|
759 |
Statistical analysis for streamflow prediction. Hancher, Boyd Thomas. January 1965
For the six regions under investigation, a statistical analysis of mean monthly flows was attempted. The relationship was established for all but one region.
The analysis compared the coefficient of variation of the monthly flows to the size of the drainage area for each basin in a region. The regions were defined by basins of similar topography and climate. Streamflow prediction would be made by mathematical synthesis from the standard deviation parameter computed from the graphical relationships established for each region.
The value of such a relationship was evidenced by the general consistency of the results. / Master of Science
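The basin data are not reproduced here; the sketch below only illustrates the kind of computation the abstract describes, the coefficient of variation of mean monthly flows for each basin, which would then be plotted against drainage area to establish the regional relationship. All numbers are hypothetical.

```python
import numpy as np

# Hypothetical mean monthly flows (cfs) and drainage areas (sq. miles)
# for a few basins in one region; the values are illustrative only.
basins = {
    "Basin A": {"area": 120.0, "flows": [55, 48, 62, 90, 130, 95, 60, 42, 38, 45, 50, 58]},
    "Basin B": {"area": 340.0, "flows": [160, 150, 175, 240, 320, 250, 170, 130, 120, 135, 145, 155]},
    "Basin C": {"area": 760.0, "flows": [390, 370, 420, 540, 700, 560, 400, 330, 310, 340, 355, 375]},
}

for name, basin in basins.items():
    flows = np.asarray(basin["flows"], dtype=float)
    cv = flows.std(ddof=1) / flows.mean()   # coefficient of variation of monthly flows
    print(f"{name}: drainage area = {basin['area']:7.1f} sq mi, CV = {cv:.3f}")

# Within a region, the CV-versus-area relationship (established graphically in the
# thesis) lets one back out a standard deviation for an ungauged basin and
# synthesize monthly flows from it.
```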
|
760 |
Data Analytics for Statistical Learning. Komolafe, Tomilayo A. 05 February 2019
The prevalence of big data has rapidly changed the usage and mechanisms of data analytics within organizations. Big data is a widely used term without a clear definition. The difference between big data and traditional data can be characterized by four Vs: velocity (the speed at which data is generated), volume (the amount of data generated), variety (the data can take on different forms), and veracity (the data may be of poor or unknown quality). As many industries begin to recognize the value of big data, organizations try to capture it through means such as side-channel data in a manufacturing operation, unstructured text data reported by healthcare personnel, demographic information of households from census surveys, and the range of communication data that define communities and social networks.
Big data analytics generally follows this framework: first, a digitized process generates a stream of data; this raw data stream is pre-processed to convert it into a usable format; and the pre-processed data is analyzed using statistical tools. In this stage, called statistical learning of the data, analysts have two main objectives: (1) develop a statistical model that captures the behavior of the process from a sample of the data, and (2) identify anomalies in the process.
However, several open challenges remain in this framework for big data analytics. Recently, data types such as free-text data have also begun to be captured. Although many established processing techniques exist for other data types, free-text data comes from a wide range of individuals and is subject to variation in syntax, grammar, language, and colloquialisms that requires substantially different processing approaches. Even once the data is processed, open challenges remain in the statistical learning step of understanding the data.
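The dissertation's own pre-processing tool is not shown here; as a rough illustration of what pre-processing free-text data involves, the sketch below lowercases, strips punctuation, tokenizes, and removes common stopwords from raw narratives before any statistical learning step. The stopword list and sample narratives are illustrative assumptions.

```python
import re
from collections import Counter

# A small stopword list for illustration; a real tool would use a fuller list
# and handle misspellings, abbreviations, and domain-specific jargon.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "was", "is", "for", "on"}

def preprocess(text: str) -> list[str]:
    """Normalize a free-text narrative into a clean list of tokens."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)   # strip punctuation and symbols
    tokens = text.split()
    return [t for t in tokens if t not in STOPWORDS and len(t) > 2]

narratives = [
    "Patient was given the wrong medication dose on the night shift.",
    "Wrong-site marking was missed; the error was caught before surgery.",
]
for doc in narratives:
    print(preprocess(doc))

# Token counts across the corpus feed the later modeling step.
print(Counter(t for doc in narratives for t in preprocess(doc)).most_common(5))
```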
Statistical learning aims to satisfy two objectives: (1) develop a model that highlights general patterns in the data, and (2) create a signaling mechanism to identify whether outliers are present in the data. Statistical modeling is widely utilized; researchers have created a variety of statistical models to explain everyday phenomena such as energy usage behavior, traffic patterns, and stock market behavior, among others. However, new applications of big data with increasingly varied designs present interesting challenges. Consider the example of free-text analysis posed above. There is renewed interest in modeling free-text narratives from sources such as online reviews, customer complaints, and patient safety event reports into intuitive themes or topics. As previously mentioned, documents describing the same phenomena can vary widely in their word usage and structure.
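The community extraction model is developed in the dissertation itself; as a generic stand-in that conveys the goal of grouping free-text documents into themes, the sketch below clusters a few toy narratives using TF-IDF features and k-means. This is a common baseline approach, not the dissertation's method.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "medication dose error on night shift",
    "wrong medication administered to patient",
    "machine spindle vibration exceeded tolerance",
    "spindle speed anomaly during milling operation",
]

# Represent each document by TF-IDF weights, then cluster into 2 themes.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(documents)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for doc, label in zip(documents, labels):
    print(f"theme {label}: {doc}")
```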
Another recent area of interest in statistical learning is using the environmental conditions in which people live, work, and grow to infer their quality of life. It is well established that social factors play a role in overall health outcomes; however, the clinical application of these social determinants of health is a recent and open problem. These examples are just a few of many in which new applications of big data pose complex challenges requiring thoughtful and inventive approaches to processing, analyzing, and modeling data.
Although a large body of research exists in the area of anomaly detection, increasingly complicated data sources (such as side-channel data or network-based data) present equally convoluted challenges. For effective anomaly detection, analysts define parameters and rules so that, when large collections of raw data are aggregated, pieces of data that do not conform are easily noticed and flagged.
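As a simple illustration of this "parameters and rules" framing, and not of the dissertation's network-based or side-channel methods, the sketch below estimates control limits from a baseline window of sensor readings and flags later readings that fall more than three standard deviations from the baseline mean. The data and injected anomalies are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Baseline readings from an in-control process, then a stream with injected anomalies.
baseline = rng.normal(loc=100.0, scale=2.0, size=500)
stream = rng.normal(loc=100.0, scale=2.0, size=50)
stream[[17, 42]] += 15.0                         # two injected anomalies

# Rule: flag any reading more than 3 standard deviations from the baseline mean.
mu, sigma = baseline.mean(), baseline.std(ddof=1)
flags = np.abs(stream - mu) > 3 * sigma

print("flagged indices:", np.flatnonzero(flags))
```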
In this work, I investigate the different steps of the data analytics framework and propose improvements for each step, paired with practical applications, to demonstrate the efficacy of my methods. This work focuses on the healthcare, manufacturing, and social-networking industries, but the materials are broad enough to have wide applications across data analytics generally. My main contributions can be summarized as follows:
• In the big data analytics framework, raw data initially goes through a pre-processing step. Although many pre-processing techniques exist, there are several challenges in pre-processing text data, and I develop a pre-processing tool for text data.
• In the next step of the data analytics framework, there are challenges in both statistical modeling and anomaly detection:
o I address the research area of statistical modeling in two ways:
- There are open challenges in defining models to characterize text data. I introduce a community extraction model that autonomously aggregates text documents into intuitive communities/groups
- In health care, it is well established that social factors play a role in overall health outcomes; however, developing a statistical model that characterizes these relationships is an open research area. I developed statistical models for generalizing relationships between the social determinants of health of a cohort and general medical risk factors
o I address the research area of anomaly detection in two ways:
- A variety of anomaly detection techniques already exist; however, some of these methods lack a rigorous statistical investigation, making them ineffective for a practitioner. I identify critical shortcomings of a proposed network-based anomaly detection technique and introduce methodological improvements
- Manufacturing enterprises, which are now more connected than ever, are vulnerable to anomalies in the form of cyber-physical attacks. I developed a sensor-based side-channel technique for anomaly detection in a manufacturing process / PHD /
The prevalence of big data has rapidly changed the usage and mechanisms of data analytics within organizations. The fields of manufacturing and healthcare are two examples of industries currently undergoing significant transformations due to the rise of big data. The addition of large sensory systems is changing how parts are manufactured and inspected, and the prevalence of Health Information Technology (HIT) systems in healthcare is also changing the way healthcare services are delivered. These industries are turning to big data analytics in the hope of acquiring many of the benefits other sectors are experiencing, including reducing cost, improving safety, and boosting productivity. However, many challenges exist throughout the framework of big data analytics, from pre-processing raw data, to statistical modeling of the data, to identifying anomalies present in the data or process. This work offers significant contributions in each of the aforementioned areas and includes practical real-world applications.
Big data analytics generally follows this framework: first, a digitized process generates a stream of data; this raw data stream is pre-processed to convert it into a usable format; and the pre-processed data is analyzed using statistical tools. In this stage, called 'statistical learning of the data', analysts have two main objectives: (1) develop a statistical model that captures the behavior of the process from a sample of the data, and (2) identify anomalies or outliers in the process.
In this work, I investigate the different steps of the data analytics framework and propose improvements for each step, paired with practical applications, to demonstrate the efficacy of my methods. This work focuses on the healthcare and manufacturing industries, but the materials are broad enough to have wide applications across data analytics generally. My main contributions can be summarized as follows:
• In the big data analytics framework, raw data initially goes through a pre-processing step. Although many pre-processing techniques exist, there are several challenges in pre-processing text data, and I develop a pre-processing tool for text data.
• In the next step of the data analytics framework, there are challenges in both statistical modeling and anomaly detection:
o I address the research area of statistical modeling in two ways:
- There are open challenges in defining models to characterize text data. I introduce a community extraction model that autonomously aggregates text documents into intuitive communities/groups
- In health care, it is well established that social factors play a role in overall health outcomes; however, developing a statistical model that characterizes these relationships is an open research area. I developed statistical models for generalizing relationships between the social determinants of health of a cohort and general medical risk factors
o I address the research area of anomaly detection in two ways:
- A variety of anomaly detection techniques already exist; however, some of these methods lack a rigorous statistical investigation, making them ineffective for a practitioner. I identify critical shortcomings of a proposed network-based anomaly detection technique and introduce methodological improvements
- Manufacturing enterprises, which are now more connected than ever, are vulnerable to anomalies in the form of cyber-physical attacks. I developed a sensor-based side-channel technique for anomaly detection in a manufacturing process.
|