About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Euclidean Domains

Tombs, Vandy Jade 01 July 2018 (has links)
In the usual definition of a Euclidean domain, a ring has a norm function whose codomain is the positive integers. It was noticed by Motzkin in 1949 that the codomain could be replaced by any well-ordered set. This motivated the study of transfinite Euclidean domains, in which the codomain of the norm function is replaced by the class of ordinals. We prove that for every indecomposable ordinal there exists a (transfinitely valued) Euclidean domain whose Euclidean order type is that ordinal. Modifying the construction, we prove that there exists a Euclidean domain with no multiplicative norm. Following a definition of Clark and Murty, we define a set of admissible primes. We develop an algorithm that can be used to find sets of admissible primes in the ring of integers of quadratic extensions of the rationals, and provide some examples.
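For readers who want the definition the opening sentence refers to, one standard statement is sketched below; Motzkin's observation and the transfinite variant amount to changing the codomain of the norm.

```latex
% Classical Euclidean domain: an integral domain R with a norm
%   N : R \setminus \{0\} \to \mathbb{Z}_{>0}
% such that division with remainder is always possible:
\forall a, b \in R,\ b \neq 0:\ \exists\, q, r \in R \ \text{ with }\
a = qb + r \ \text{ and } \ \bigl(r = 0 \ \text{ or } \ N(r) < N(b)\bigr).
% Motzkin's observation: the codomain \mathbb{Z}_{>0} may be replaced by any
% well-ordered set; transfinitely valued Euclidean domains allow the class
% of ordinals as the codomain.
```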
52

Comparing trend and gap statistics across tests: distributional change using ordinal methods and Bayesian inference

Denbleyker, John Nickolas 01 May 2012 (has links)
The shortcomings of the proportion above cut (PAC) statistic used so prominently in the educational landscape render it a very problematic measure for making correct inferences from student test data. The limitations of PAC-based statistics are more pronounced in cross-test comparisons because of their dependency on cut-score location. A better alternative is mean-based statistics that translate to parametric effect-size measures. However, these statistics can also be problematic: when Gaussian assumptions are not met, reasonable transformations of a score scale produce non-monotonic outcomes. The present study develops a distribution-wide approach to summarize trend, gap, and gap-trend (TGGT) measures. This approach counters the limitations of PAC-based and mean-based statistics and addresses TGGT-related questions in a manner more closely tied to both the data and the questions asked about student achievement. The distribution-wide approach encompasses visual graphics such as percentile trend displays and probability-probability plots fashioned after Receiver Operating Characteristic (ROC) curve methodology. The latter follows the P-P plot framework proposed by Ho (2008) as a way to examine trends and gaps with more consideration given to questions of scale and policy decisions. The extension in this study involves three main components: (1) incorporating Bayesian inference, (2) using a multivariate structure for longitudinal data, and (3) accounting for measurement error at the individual level. The analysis is based on mathematics assessment data for Grades 3 through 7 from a large Midwestern school district. Findings suggest that P-P-based effect sizes provide a useful framework for measuring aggregate test-score change and achievement gaps. The distribution-wide perspective adds insight by examining, both visually and numerically, how trends and gaps are affected throughout the score distribution. Two notable findings using the P-P-based effect sizes were that (1) achievement gaps were very similar between the Focal and Audit tests, and (2) trend measures were significantly larger for the Audit test. Additionally, measurement-error corrections using the multivariate Bayesian CTT approach yielded effect sizes disattenuated from those based on observed scores. The ordinal-based effect-size statistics were generally larger than their parametric counterparts, and this disattenuation was practically equivalent to that obtained by accounting for measurement error. Finally, the rank-based estimator of P(X>Y) via estimated true scores had smaller standard errors than its parametric counterpart.
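As a rough illustration of the nonparametric gap statistic underlying the P-P approach, the sketch below computes a rank-based estimate of P(X>Y) and maps it onto a normal-metric effect size in the spirit of Ho (2008). The data, function name, and specific transformation shown are illustrative assumptions, not the dissertation's multivariate Bayesian machinery.

```python
import numpy as np
from scipy.stats import norm

def pp_gap_statistics(x, y):
    """Rank-based estimate of P(X > Y) (counting half of ties) for two score
    distributions, and a normal-metric effect size derived from it.

    Illustrative sketch only; no measurement-error correction is applied."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # Pairwise comparisons: equivalent to the Mann-Whitney U statistic
    # divided by the number of pairs.
    diff = x[:, None] - y[None, :]
    p_x_gt_y = np.mean(diff > 0) + 0.5 * np.mean(diff == 0)
    # Map the ordinal statistic onto a Gaussian effect-size metric.
    v = np.sqrt(2.0) * norm.ppf(p_x_gt_y)
    return p_x_gt_y, v

# Example with simulated focal- and audit-test scores (hypothetical data).
rng = np.random.default_rng(0)
focal = rng.normal(0.3, 1.0, size=500)
audit = rng.normal(0.0, 1.2, size=500)
print(pp_gap_statistics(focal, audit))
```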
53

Constrained ordinal models with application in occupational and environmental health

Capuano, Ana W. 01 May 2012 (has links)
Occupational and environmental epidemiological studies often involve ordinal data, including antibody titer data, indicators of health perceptions, and certain psychometrics. Ideally, such data should be analyzed using approaches that exploit the ordinal nature of the scale while making a minimum of assumptions. In this work, we first review and illustrate the analytical technique of ordinal logistic regression called the "proportional odds model". This model, which is based on a constrained ordinal model, is considered the most popular ordinal model. We use hypothetical data to illustrate a situation where the proportional odds model holds exactly, and we demonstrate through derivations and simulations how this model provides better statistical power than simple logistic regression. The section concludes with an example illustrating the use of the model in avian and swine influenza research. In the middle section of this work, we show how the proportional odds assumption can be relaxed to a less restrictive model called the "trend odds model". We demonstrate how this model is related to latent logistic, normal, and exponential distributions. In particular, scale changes in these potential latent distributions are found to be consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. Actual data on antibody titer against avian and swine influenza among occupationally exposed participants and non-exposed controls illustrate the fit and interpretation of the proportional odds model and the trend odds model. Finally, we show how to perform a multivariable analysis in which some of the variables meet the proportional odds assumption and some meet the trend odds assumption. Likert-scaled data pertaining to violence among middle school students illustrate the fit and interpretation of the multivariable proportional-trend odds model. In conclusion, the proportional odds model provides superior power compared to models that employ arbitrary dichotomization of ordinal data. In addition, the added complexity of the trend odds model provides improved power over the proportional odds model when there are moderate to severe departures from proportionality. The increase in power is of great public health relevance in a time of increasingly scarce resources for occupational and environmental health research. The trend odds model indicates and tests the presence of a trend in odds, providing a new dimension to risk factor and disease etiology analyses. In addition to the applications demonstrated in this work, other research areas in occupational and environmental health can benefit from the use of these methods. For example, worker fatigue is often self-reported using ordinal scales, and traumatic brain injury recovery is measured using recovery scores such as the Glasgow Outcome Scale (GOS).
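For orientation, the proportional odds model can be written, in one common parameterization for an ordinal outcome with categories j = 1, ..., J and covariate vector x, as below; the comment on the trend odds model paraphrases the abstract rather than giving its exact form.

```latex
% Proportional odds model: one slope vector shared by all J-1 cumulative logits.
\operatorname{logit} P(Y \le j \mid x) \;=\; \alpha_j - \beta^{\top} x,
\qquad j = 1, \dots, J-1 .
% The trend odds model relaxes this constraint by letting the covariate effect
% change systematically across the cut points j instead of being identical for
% every cumulative logit (the abstract notes odds increasing in a linear or
% nearly linear fashion under latent logistic or exponential distributions).
```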
54

Automatically Proving the Termination of Functional Programs

Vroon, Daron 27 August 2007 (has links)
Establishing the termination of programs is a fundamental problem in the field of software verification. For transformational programs, termination is used to extend partial correctness to total correctness. For reactive systems, termination reasoning is used to establish liveness properties. In the context of theorem proving, termination is used to establish the consistency of definitional axioms and to automate proofs by induction. Of course, termination is an undecidable problem, as Turing himself proved. However, the question remains: how automatic can a general termination analysis be in practice? In this dissertation, we develop two new general frameworks for reasoning about termination and demonstrate their effectiveness in automating the task of proving termination in the domain of applicative first-order functional languages. The foundation of the first framework is the development of the first known complete set of algorithms for ordinal arithmetic over an ordinal notation. We provide algorithms for ordinal ordering ($<$), addition, subtraction, multiplication, and exponentiation on the ordinals up to epsilon-naught. We prove correctness and complexity results for each algorithm. We also create a library for automating arithmetic reasoning over epsilon-naught in the ACL2 theorem proving system. This ordinal library enables termination proofs that were not possible in previous versions of ACL2. The foundation of the second framework is an algorithm for fully automating termination reasoning with no user assistance. This algorithm uses a combination of theorem proving and static analysis to create a Calling Context Graph (CCG), a novel abstraction that captures the looping behavior of the program. Calling Context Measures (CCMs) are then used to prove that no infinite path through the CCG can be an actual computation of the program. We implement this algorithm in ACL2 and empirically evaluate its effectiveness on the regression suite, a collection of over 11,000 user-defined functions from a wide variety of applications.
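The ACL2 library itself is not reproduced here; as a rough illustration of what arithmetic over an ordinal notation looks like, the sketch below works with Cantor normal forms restricted to ordinals below omega^omega (so exponents are plain natural numbers rather than ordinals up to epsilon-naught) and implements only comparison and addition. The representation and names are assumptions of this sketch, not those of the thesis's algorithms.

```python
# Ordinals below omega^omega in Cantor normal form:
#   a = omega^e1 * c1 + omega^e2 * c2 + ...  with e1 > e2 > ... >= 0, ci >= 1,
# represented as a tuple of (exponent, coefficient) pairs in decreasing
# exponent order; the empty tuple represents 0.

def ord_lt(a, b):
    """Compare two normal forms; lexicographic tuple order matches ordinal order."""
    return a < b

def ord_add(a, b):
    """Ordinal addition: terms of a below the leading exponent of b are
    absorbed, and equal leading exponents merge their coefficients."""
    if not b:
        return a
    lead_exp, lead_coef = b[0]
    kept = tuple(t for t in a if t[0] > lead_exp)      # terms that survive
    same = [t for t in a if t[0] == lead_exp]           # term to merge, if any
    if same:
        return kept + ((lead_exp, same[0][1] + lead_coef),) + b[1:]
    return kept + b

# 1 + omega = omega, but omega + 1 > omega (addition is not commutative):
one   = ((0, 1),)
omega = ((1, 1),)
assert ord_add(one, omega) == omega
assert ord_lt(omega, ord_add(omega, one))
```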
55

Financial resource allocation in Texas: how does money matter

Villarreal, Rosa Maria 30 April 2014 (has links)
The study examined school district expenditures in Texas and their correlations with student achievement. The following research question guided the study: which resource allocations produce statistically significant correlations between resource-allocation variances among school districts and student achievement? An ordinal logistic regression analysis included 1,009 school districts in the State of Texas, per-pupil dollar amounts for 18 of 26 possible finance function codes, and 9 of 11 possible demographic categories; the school district was the unit of analysis. The statistical model regressed the dollar amounts categorized by financial function code, together with percentage student demographics, against the dependent variable, the Texas Education Agency's accountability rating, over the five-year period 2004-2008. At the national level, there is a long-standing debate over whether the amount of money allocated to education affects student achievement. The literature review presents both sides of the debate concerning whether financial resources make a difference with regard to student achievement as represented through district-level accountability ratings. The research revealed that specific school district resource allocations by function code are statistically significant with regard to district-level accountability measures under the Texas Education Agency (TEA) accountability system; however, the odds ratios temper the practical impact of this significance. The research also revealed that demographics are statistically significant in the State of Texas accountability system.
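As a rough sketch of the kind of cumulative (ordinal) logistic regression described, the snippet below uses the OrderedModel class from statsmodels on synthetic district-level data. The column names, data-generating values, and rating thresholds are hypothetical stand-ins, not the study's 18 function codes and 9 demographic categories.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical district-level data standing in for per-pupil function-code
# expenditures and demographic percentages.
rng = np.random.default_rng(1)
n = 300
instruction = rng.normal(4800, 400, n)      # per-pupil instruction spending
pct_disadv = rng.uniform(10, 90, n)         # % economically disadvantaged
latent = 0.002 * instruction - 0.03 * pct_disadv + rng.logistic(size=n)
rating = pd.Series(pd.cut(
    latent, bins=[-np.inf, 6.5, 8.0, 9.5, np.inf],
    labels=["Unacceptable", "Acceptable", "Recognized", "Exemplary"]))

X = pd.DataFrame({"instruction_per_pupil": instruction,
                  "pct_econ_disadv": pct_disadv})
res = OrderedModel(rating, X, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())
print(np.exp(res.params.iloc[:2]))   # odds ratios for the two predictors
```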
56

Sample Size in Ordinal Logistic Hierarchical Linear Modeling

Timberlake, Allison M 07 May 2011 (has links)
Most quantitative research is conducted by randomly selecting members of a population on which to conduct a study. When statistics are computed from a sample, and not the entire population of interest, they are subject to a certain amount of error. Many factors can impact the amount of error, or bias, in statistical estimates. One important factor is sample size; larger samples are more likely to minimize bias than smaller samples. Therefore, determining the sample size necessary to obtain accurate statistical estimates is a critical component of designing a quantitative study. Much research has been conducted on the impact of sample size on simple statistical techniques such as group mean comparisons and ordinary least squares regression. Less sample size research, however, has been conducted on complex techniques such as hierarchical linear modeling (HLM). HLM, also known as multilevel modeling, is used to explain and predict an outcome based on knowledge of other variables in nested populations. Ordinal logistic HLM (OLHLM) is used when the outcome variable has three or more ordered categories. While there is a growing body of research on sample size for two-level HLM with a continuous outcome, there is no existing research exploring sample size for OLHLM. The purpose of this study was to determine the impact of sample size on statistical estimates for ordinal logistic hierarchical linear modeling. A Monte Carlo simulation study was used to investigate this research question. Four variables were manipulated: level-one sample size, level-two sample size, sample outcome category allocation, and predictor-criterion correlation. The statistical estimates explored include bias in level-one and level-two parameters, power, and prediction accuracy. Results indicate that, in general, holding other conditions constant, bias decreases as level-one sample size increases; however, bias increases or remains unchanged as level-two sample size increases. Power to detect the independent variable coefficients increased as both level-one and level-two sample sizes increased, holding other conditions constant. Overall, prediction accuracy was extremely poor: the overall prediction accuracy rate across conditions was 47.7%, with little variance across conditions, and there was a strong tendency to over-predict the middle outcome category.
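A compact sketch of the data-generation step such a Monte Carlo study might use is shown below: a level-two random intercept, a level-one predictor, and a latent logistic response thresholded into three ordered categories. The parameter values, category cut points, and function name are illustrative assumptions, not the study's simulation conditions.

```python
import numpy as np

def simulate_olhlm(n_groups=30, n_per_group=50, beta=0.5, tau=1.0,
                   cuts=(-1.0, 1.0), seed=0):
    """Generate two-level data with an ordinal outcome (3 categories here).

    Latent model: y*_ij = u_j + beta * x_ij + e_ij, with u_j ~ N(0, tau^2)
    and e_ij standard logistic; y is y* cut at the supplied thresholds."""
    rng = np.random.default_rng(seed)
    group = np.repeat(np.arange(n_groups), n_per_group)
    u = rng.normal(0.0, tau, n_groups)            # level-two random intercepts
    x = rng.normal(size=group.size)               # level-one predictor
    latent = u[group] + beta * x + rng.logistic(size=group.size)
    y = np.digitize(latent, cuts)                 # ordered categories 0, 1, 2
    return group, x, y

group, x, y = simulate_olhlm()
print(np.bincount(y))   # outcome category allocation, one manipulated factor
```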
57

Logistic Regression Analysis to Determine the Significant Factors Associated with Substance Abuse in School-Aged Children

Maxwell, Kori Lloyd Hugh 17 April 2009 (has links)
Substance abuse is the overindulgence in and dependence on a drug or chemical leading to detrimental effects on the individual's health and the welfare of those surrounding him or her. Logistic regression analysis is an important tool for analyzing the relationship between various explanatory variables and nominal response variables. The objective of this study is to use this statistical method to determine the factors that are significant contributors to the use or abuse of substances in school-aged children, and to determine what measures can be implemented to minimize their effect. The logistic regression model was used to build models for the three main types of substances considered in this study: tobacco, alcohol, and drugs. This facilitated the identification of the significant factors that appear to influence their use in children.
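The core model here is an ordinary binary logistic regression; a minimal sketch on synthetic data with hypothetical predictor names (the abstract does not list the study's actual explanatory variables) might look like this.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey extract: outcome 1 = reported tobacco use; the
# predictors are illustrative stand-ins, not the study's variables.
rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "age": rng.integers(11, 18, n),
    "peer_use": rng.integers(0, 2, n),
    "parental_monitoring": rng.normal(0, 1, n),
})
logit_p = (-6 + 0.3 * df["age"] + 1.1 * df["peer_use"]
           - 0.8 * df["parental_monitoring"])
df["tobacco_use"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["age", "peer_use", "parental_monitoring"]])
res = sm.Logit(df["tobacco_use"], X).fit(disp=False)
print(res.summary())    # significant factors show up via the Wald z-tests
```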
58

Essays on Estimation Methods for Factor Models and Structural Equation Models

Jin, Shaobo January 2015 (has links)
This thesis, which consists of four papers, is concerned with estimation methods in factor analysis and structural equation models. New estimation methods are proposed and investigated. In Paper I an approximation of penalized maximum likelihood (ML) is introduced to fit an exploratory factor analysis model. Approximated penalized ML continuously and efficiently shrinks the factor loadings towards zero. It naturally factorizes a covariance matrix or a correlation matrix and is applicable to both orthogonal and oblique structures. Paper II, a simulation study, investigates the properties of approximated penalized ML with an orthogonal factor model. Different combinations of penalty terms and tuning-parameter selection methods are examined, and differences between factorizing a covariance matrix and factorizing a correlation matrix are also explored. It is shown that approximated penalized ML frequently improves on the traditional estimation-rotation procedure. In Paper III we focus on pseudo ML for multi-group data. Data from different groups are pooled and normal theory is used to fit the model. It is shown that pseudo ML produces consistent estimators of factor loadings and that it is numerically easier than multi-group ML. Normal theory is, however, not applicable for estimating standard errors, so a sandwich-type estimator of standard errors is derived. Paper IV examines properties of the recently proposed polychoric instrumental variable (PIV) estimators for ordinal data through a simulation study. PIV is compared with conventional estimation methods (unweighted least squares and diagonally weighted least squares). PIV produces accurate estimates of factor loadings and factor covariances in a correctly specified confirmatory factor analysis model, and accurate estimates of loadings and coefficient matrices in a correctly specified structural equation model. If the model is misspecified, the robustness of PIV depends on model complexity, the underlying distribution, and the choice of instrumental variables.
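For orientation, the penalized ML idea in Papers I and II can be summarized as minimizing the usual ML discrepancy for a factor model plus a penalty that shrinks loadings toward zero. The lasso-type penalty below is one common choice shown for illustration; the thesis itself works with an approximation of the penalized likelihood, whose exact form is not reproduced here.

```latex
% ML discrepancy for a p-variate factor model with loadings \Lambda and
% uniquenesses \Psi, fitted to sample covariance matrix S:
F_{\mathrm{ML}}(\Lambda, \Psi) \;=\;
  \log\lvert \Sigma(\Lambda,\Psi) \rvert
  + \operatorname{tr}\!\bigl(S\,\Sigma(\Lambda,\Psi)^{-1}\bigr)
  - \log\lvert S \rvert - p ,
  \qquad \Sigma(\Lambda,\Psi) = \Lambda\Lambda^{\top} + \Psi .
% Penalized ML adds a shrinkage term on the loadings, e.g. a lasso penalty:
\min_{\Lambda,\Psi}\; F_{\mathrm{ML}}(\Lambda, \Psi)
  \;+\; \kappa \sum_{i,j} \lvert \lambda_{ij} \rvert ,
% with the tuning parameter \kappa chosen by one of the selection methods
% compared in Paper II.
```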
60

Effective and Efficient Optimization Methods for Kernel Based Classification Problems

Tayal, Aditya January 2014 (has links)
Kernel methods are a popular choice for solving a number of problems in statistical machine learning. In this thesis, we propose new methods for two important kernel-based classification problems: 1) learning from highly unbalanced large-scale datasets and 2) selecting a relevant subset of input features for a given kernel specification. The first problem is known as the rare class problem, which is characterized by a highly skewed or unbalanced class distribution. Unbalanced datasets can introduce significant bias into standard classification methods. In addition, due to the growth of data in recent years, large datasets with millions of observations have become commonplace. We propose an approach that addresses both the bias and the computational complexity of rare class problems, by optimizing area under the receiver operating characteristic curve and by using a rare-class-only kernel representation, respectively. We justify the proposed approach theoretically and computationally. Theoretically, we establish an upper bound on the difference between selecting a hypothesis from a reproducing kernel Hilbert space and from a hypothesis space that can be represented using a subset of kernel functions. This bound shows that, for a fixed number of kernel functions, it is optimal to first include functions corresponding to rare class samples. We also discuss the connection of a subset kernel representation with the Nyström method for a general class of regularized loss minimization methods. Computationally, we illustrate that the rare class representation produces statistically equivalent test error results on highly unbalanced datasets compared to the full kernel representation, but with significantly better time and space complexity. Finally, we extend the method to rare class ordinal ranking and apply it to a recent public competition problem in health informatics. The second problem studied in the thesis is known in the literature as the feature selection problem. Embedding feature selection in kernel classification leads to a non-convex optimization problem. We specify a primal formulation and solve the problem using a second-order trust-region algorithm. To improve efficiency, we use the two-block Gauss-Seidel method, breaking the problem into a convex support vector machine subproblem and a non-convex feature selection subproblem. We reduce the possibility of saddle-point convergence and improve solution quality by sharing an explicit functional margin variable between block iterates. We illustrate how our algorithm improves upon state-of-the-art methods.
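A minimal sketch of the rare-class-only kernel representation idea: kernel features are computed against the minority-class samples only, so the expansion scales with the rare class rather than the full data set. The classifier below is an off-the-shelf stand-in on synthetic data; the dissertation instead optimizes area under the ROC curve directly over this reduced representation, which is not reproduced here.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Highly unbalanced synthetic data (a few percent positives).
rng = np.random.default_rng(3)
n = 5000
X = rng.normal(size=(n, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=n) > 4.0).astype(int)

# Kernel expansion against rare-class points only: n x n_rare instead of n x n.
X_rare = X[y == 1]
Phi = rbf_kernel(X, X_rare, gamma=0.1)

# Stand-in classifier on the reduced representation; the in-sample AUC is
# shown only to illustrate the mechanics of the expansion.
clf = LogisticRegression(max_iter=1000, class_weight="balanced").fit(Phi, y)
print("AUC:", roc_auc_score(y, clf.decision_function(Phi)))
```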
