  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Popper's views of theory formation compared with the development of post-relativistic cosmological models

Leith, Thomas Henry January 1963 (has links)
Thesis (Ph.D.)--Boston University / This dissertation confronts contemporary physical cosmology with Karl Popper's standards of scientific method and theory construction. To the degree to which there are differences, an attempt is made to criticize the major cosmological models in the light of Popper's analysis and, in turn, to explore revisions necessitated in this analysis by the unique problems of cosmology. As background, the major facets of Popper's work are presented in detail: his falsifiability criterion for demarcating scientific theories from metaphysics, his hypothetico-deductive method, and his rejection of induction. Then the origins of general relativity and its competitors are analyzed both as explanatory background to modern cosmology and so as to reveal the history of certain problems pertinent to Popper's scheme: for instance, the use of arguments from simplicity, the ideas of the utility of analogy and models, and the relation of theory to reality. Finally, the great variety of evolutionary, fundamentalistic, and steady-state models available for study is explored in detail as to presupposition and methodology so that their distinctives are revealed and a basis for comparison with Popper's suggestions provided. [TRUNCATED]
92

Symmetrical complementation designs

Beyer, William H. January 1961 (has links)
The “Symmetrical Complementation Design” discussed in this dissertation is intended for experimental situations where the levels of three factors always sum to the same constant. The levels of the factors, if referred to a common unit of measurement, must be equally spaced. Certain cell entries are omitted to ensure complete interchangeability of the three factors. The usual additive model is assumed. A detailed study of the types of functions that are estimable in this design is presented in the chapter on linear estimation. The study shows that the number of contrasts that can be estimated is limited. For example, the usual linear contrast, which would lead to the hypothesis of equality of effects of all levels of one factor, is not testable in this design. On the other hand, quadratic and higher-order contrasts are estimable for each factor separately. These contrasts are combined into different hypotheses. Estimable functions in one factor only and in two factors are presented for the general case of p levels. Several methods can be employed to obtain estimates of the treatment effects under various constraints. It must be noted, however, that these estimates are rather meaningless quantities in themselves; only when they are combined into estimable functions are unique results obtained. Two methods are described in complete detail: the “high-low” method if only estimation is required, and the “modified high-low” method if both estimation and tests of hypotheses are required. The complete inverse matrix required for the latter method, or a method of obtaining this patterned inverse, is presented. For testing hypotheses, a general technique based upon the inversion of the matrix in the modified high-low method is presented. Sums of squares and test statistics are presented for the various hypotheses formulated.
Sections are also included which indicate how one might obtain the response for intermediate levels of the factors, and how one might obtain response functions for single factors. A chapter on extensions is presented, where n observations are available per treatment combination. In this connection, three different cases are considered: a) the replications are strictly repetitions of the experiment under otherwise identical conditions, in which case the analysis proceeds as a customary three-way analysis with n replicates per treatment combination; b) the experiments within a cell represent repetitions over a period of time, during which some kind of trend may be present, in which case the analysis is readily extended into an analysis of covariance; c) the experiments within a cell represent several experiments with the same experimental units, so that the observations within a cell are dependent. On the assumption that the covariance matrix of observations in a cell is the same for every cell, a multivariate analysis can be performed. The problem of estimation is essentially the same in these methods; however, different methods are necessary for testing hypotheses. Special discussion is also presented for the case where the levels of the factors are not equally spaced, and for the case where the model is considered a mixed model. In conclusion, it has been found that this type of design requires rather careful consideration of the types of functions that can be estimated and the types of hypotheses that can be tested. Recommendations for interpretation and statements of limitations are made in detail. / Ph. D.
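A minimal sketch of the cell structure such a design imposes: three factors with equally spaced levels (coded 0..p−1 here) whose levels must sum to a fixed constant, with all other cells omitted. The function name and the particular values of p and the constant are illustrative, not taken from the dissertation.

```python
from itertools import product

def complementation_cells(p, total):
    """Enumerate the cells (i, j, k) of a three-factor layout whose
    coded levels 0..p-1 sum to the fixed constant `total`.
    Omitting all other cells is what makes the three factors
    completely interchangeable."""
    return [(i, j, k)
            for i, j, k in product(range(p), repeat=3)
            if i + j + k == total]

cells = complementation_cells(p=4, total=3)
```

Because the constraint is symmetric in the three indices, any permutation of a retained cell is also retained, which is the interchangeability property the abstract describes.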
93

Change-over designs

Mason, James Mark January 1970 (has links)
When several treatments must be applied in succession to a given subject, the residual effect of one treatment on another must be taken into consideration. Many designs have been developed for this purpose; those presented in this paper can be summarized as follows:
Type I: Balanced for first-order residual effects. For n (the number of treatments) even, any number of Latin squares can be used; for n odd, an even number of squares is necessary.
Type II: Formed by repeating the final period of Type I designs. Direct and residual effects are orthogonal.
Type III: Formed from p<n corresponding rows of n-1 orthogonal n×n Latin squares.
Type IV: Complete orthogonality except for subjects and residuals. Very efficient, but large numbers of observations are necessary.
Type V: Designs balanced for first- and second-order effects. Also formed from orthogonal Latin squares.
Type VI: Designs orthogonal for direct, first- and second-order residuals. Designs presented for n = 2, 3, and 5.
Type VII: Orthogonal for linear, quadratic, ... components of direct effects and the linear component of residual effects. Analysis includes the linear direct × linear residual interaction. Designs given for n = 4, 5.
Type VIII: Type II designs analyzed under the model for Type VII designs. Less efficient, but designs are available for all n.
Type IX: Designs useful for testing more than one treatment and direct × residual interactions.
The analysis for most designs includes normal equations, analysis of variance, variances of estimates, expected mean squares, efficiencies, and missing value formulas. A list of designs is presented in an appendix. / Master of Science
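One standard construction consistent with the Type I description is the Williams square, in which each treatment is immediately preceded by every other treatment exactly once when n is even (for odd n a mirrored second square is needed, matching the "even number of squares" remark). This is a sketch of a well-known construction, not necessarily the one used in the thesis.

```python
def williams_square(n):
    """Build an n x n change-over design (rows = subjects,
    columns = periods) balanced for first-order residual effects
    when n is even. First row interleaves low and high treatment
    labels: 0, 1, n-1, 2, n-2, ...; remaining rows are cyclic
    shifts modulo n."""
    first = [0]
    lo, hi = 1, n - 1
    while len(first) < n:
        first.append(lo)
        lo += 1
        if len(first) < n:
            first.append(hi)
            hi -= 1
    return [[(t + s) % n for t in first] for s in range(n)]
```

For n = 4 this yields four subjects over four periods, with each ordered pair of distinct treatments occurring in adjacent periods exactly once.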
94

Optimal Experimental Designs for the Poisson Regression Model in Toxicity Studies

Wang, Yanping 31 July 2002 (has links)
Optimal experimental designs for generalized linear models have received increasing attention in recent years. Yet most of the current research focuses on binary data models, especially the one-variable first-order logistic regression model. This research extends the topic to count data models. The primary goal of this research is to develop efficient and robust experimental designs for the Poisson regression model in toxicity studies. D-optimal designs for both the one-toxicant second-order model and the two-toxicant interaction model are developed, and their dependence upon the model parameters is investigated. Application of the D-optimal designs is very limited because these optimal designs, in terms of ED levels, depend upon the unknown parameters. Thus, some practical designs like equally spaced designs and conditional D-optimal designs, which, in terms of ED levels, are independent of the parameters, are studied. It turns out that these practical designs are quite efficient when the design space is restricted. Designs specified in terms of ED levels, like D-optimal designs, are not robust to parameter misspecification. To deal with this problem, sequential designs are proposed for Poisson regression models. Both fully sequential designs and two-stage designs are studied, and they are found to be efficient and robust to parameter misspecification. For experiments that involve two or more toxicants, restrictions on the survival proportion lead to restricted design regions dependent on the unknown parameters. It is found that sequential designs perform very well under such restrictions. In most of this research, the log link is assumed to be the true link function for the model. However, in some applications, more than one link function fits the data very well. To help identify the link function that generates the data, experimental designs for discrimination between two competing link functions are investigated.
T-optimal designs for discrimination between the log link and other link functions such as the square root link and the identity link are developed. To relax the dependence of T-optimal designs on the model truth, sequential designs are studied, which are found to converge to T-optimal designs for large experiments. / Ph. D.
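The dependence of local D-optimality on the unknown parameters can be seen in a toy one-toxicant first-order Poisson model (the thesis treats second-order and two-toxicant models; this simpler sketch, with assumed parameter values, only illustrates why the criterion shifts with the parameter vector).

```python
import math

def d_criterion(xs, beta):
    """Determinant of the 2x2 Fisher information X'WX for Poisson
    regression with log link, log(mu) = b0 + b1*x, one observation
    per design point."""
    b0, b1 = beta
    s0 = s1 = s2 = 0.0
    for x in xs:
        w = math.exp(b0 + b1 * x)   # GLM weight: Var(Y) = mu for Poisson
        s0 += w
        s1 += w * x
        s2 += w * x * x
    return s0 * s2 - s1 * s1

# The criterion value, and hence the locally optimal points, shift with
# the assumed (unknown) parameters -- the practical difficulty the
# thesis addresses with conditional and sequential designs.
equally_spaced = [0.0, 1.0, 2.0, 3.0]
```

Evaluating `d_criterion(equally_spaced, beta)` at two different assumed slopes gives different criterion values for the same design, which is the root of the robustness problem the abstract describes.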
95

A response surface approach to the mixture problem when the mixture components are categorized

Cornell, John A. 02 June 2010 (has links)
A method is developed for experiments with mixtures where the mixture components are categorized (acids, bases, etc.), and each category of components contributes a fixed proportion to the total mixture. The number of categories of mixture components is general and each category will be represented in every mixture by one or more of its member components. The purpose of this paper is to show how standard response surface designs and polynomial models can be used for estimating the response to mixtures of the k mixture components. The experimentation is concentrated in an ellipsoidal region chosen by the experimenter, subject to the constraints placed on the components. The selection of this region, the region of interest, permits the exclusion of work in areas not of direct interest. The transformation from a set of linearly dependent mixture components to a set of linearly independent design variables is shown. This transformation is accomplished with the use of an orthogonal matrix. Since we want the properties of the predictor ŷ at a point w to be invariant to the arbitrary elements of the transformation matrix, we choose to use rotatable designs. Frequently, there are underlying sources of variation in the experimental program whose effects can be measured by dividing the experimentation into stages, that is, blocking the observations. With the use of orthogonal contrasts of the observations, it is shown how these effects can be measured. This concept of dividing the program of experiments into stages is extended to include second degree designs. The radius of the largest sphere, in the metric of the design variables, that will fit inside the factor space is derived. This sphere provides an upper bound on the size of an experimental design. This is important when one desires to use a design to minimize the average variance of ŷ only for a first-degree model. 
An example also shows how, with the use of the largest sphere, one can cover almost all combinations of the mixture components, subject to the constraints. / Ph. D.
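The transformation from linearly dependent mixture components to linearly independent design variables can be sketched for three components with one Helmert-style orthonormal choice of axes (the thesis allows any orthogonal matrix, with rotatable designs making the predictor invariant to that arbitrary choice; the particular matrix below is an assumption for illustration).

```python
import math

def to_design_vars(x1, x2, x3):
    """Project a 3-component mixture point onto two orthonormal axes
    spanning the simplex plane x1 + x2 + x3 = 1. Rows of the implied
    matrix are (1/sqrt2, -1/sqrt2, 0) and (1/sqrt6, 1/sqrt6, -2/sqrt6),
    so Euclidean distances within the simplex are preserved."""
    w1 = (x1 - x2) / math.sqrt(2.0)
    w2 = (x1 + x2 - 2.0 * x3) / math.sqrt(6.0)
    return w1, w2
```

The centroid (1/3, 1/3, 1/3) maps to the origin of the design-variable space, and distances between mixture points are unchanged, which is what makes rotatable designs meaningful in the transformed coordinates.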
96

Structures and properties of repeated measurement designs

Shing, Chen-Chi January 1984 (has links)
In this study the structure and properties of repeated measurement (RM) designs are investigated from different points of view, such as (i) balancedness or partial balancedness, (ii) construction versus estimation, (iii) underlying linear models, (iv) factorial treatment structure. In studying balanced repeated measurement designs for the first-order residual effects model it becomes apparent that one has to distinguish between balancedness with respect to construction and balancedness with respect to estimation. These two concepts do not necessarily imply each other as they do, for example, for the balanced incomplete block design. Such designs are referred to as BRM1 and BRM1E designs, respectively. It is shown that they are embedded in a much larger class of RM designs. This class is based on generalized partially balanced incomplete block designs and is hence referred to as GPBRM1 designs. The properties of GPBRM1 designs can be investigated by means of association matrices. For the construction of these designs the concept of asymmetrically repeated differences is introduced as a generalization of the symmetrically repeated differences used for constructing certain PBIB designs. Another generalization of RM designs concerns the underlying linear model. In particular, the situation is considered where in addition to first-order residual effects the model also contains second-order residual effects. This leads to BRM2 and BRM2E designs. Extensions to kᵗʰ-order residual effect models are mentioned briefly. Modifications of existing RM designs can be achieved if the treatments have a factorial structure and if certain, usually higher-order, interactions can be considered negligible. In particular, it is shown how this can lead to a substantial reduction in the number of periods and/or subjects for an RM design. / Doctor of Philosophy
97

Platform design for customizable products and processes with non-uniform demand

Williams, Christopher Bryant 01 December 2003 (has links)
No description available.
98

Computer and physical experiments: design, modeling, and multivariate interpolation

Kang, Lulu 28 June 2010 (has links)
Many problems in science and engineering are solved through experimental investigations. Because experiments can be costly and time consuming, it is important to efficiently design the experiment so that maximum information about the problem can be obtained. It is also important to devise efficient statistical methods to analyze the experimental data so that none of the information is lost. This thesis makes contributions on several aspects in the field of design and analysis of experiments. It consists of two parts. The first part focuses on physical experiments, and the second part on computer experiments. The first part on physical experiments contains three works. The first work develops Bayesian experimental designs for robustness studies, which can be applied in industries for quality improvement. The existing methods rely on modifying the effect hierarchy principle to give more importance to control-by-noise interactions, which can violate the true effect order of a system because the order should not depend on the objective of an experiment. The proposed Bayesian approach uses a prior distribution to capture the effect hierarchy property and then uses an optimal design criterion to satisfy the robustness objectives. The second work extends the above Bayesian approach to blocked experimental designs. The third work proposes a new modeling and design strategy for mixture-of-mixtures experiments and applies it in the optimization of Pringles potato crisps. The proposed model substantially reduces the number of parameters in the existing multiple-Scheffé model and thus helps engineers design much smaller experiments. The second part on computer experiments introduces two new methods for analyzing the data.
The first is an interpolation method called regression-based inverse distance weighting (RIDW) method, which is shown to overcome some of the computational and numerical problems associated with kriging, particularly in dealing with large data and/or high dimensional problems. In the second work, we introduce a general nonparametric regression method, called kernel sum regression. More importantly, we make an interesting discovery by showing that a particular form of this regression method becomes an interpolation method, which can be used to analyze computer experiments with deterministic outputs.
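The base interpolator underlying the RIDW idea can be sketched in one dimension. Note this is plain inverse distance weighting only; the thesis's regression-based variant adds a regression term that is not shown here.

```python
def idw_predict(x0, xs, ys, power=2.0):
    """Inverse-distance-weighted prediction at x0 from 1-D training
    points xs with responses ys. Weights decay as distance**(-power),
    and the predictor reproduces the training data exactly at the
    data sites, i.e. it is an interpolator."""
    num = den = 0.0
    for x, y in zip(xs, ys):
        d = abs(x - x0)
        if d == 0.0:
            return y            # exact interpolation at a data site
        w = d ** (-power)
        num += w * y
        den += w
    return num / den
```

Unlike kriging, no matrix inversion is required, which hints at why such schemes scale better to large data sets, as the abstract notes.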
99

Sequential optimal design of neurophysiology experiments

Lewi, Jeremy 31 March 2009 (has links)
For well over 200 years, scientists and doctors have been poking and prodding brains in every which way in an effort to understand how they work. The earliest pokes were quite crude, often involving permanent forms of brain damage. Though neural injury continues to be an active area of research within neuroscience, technology has given neuroscientists a number of tools for stimulating and observing the brain in very subtle ways. Nonetheless, the basic experimental paradigm remains the same: poke the brain and see what happens. For example, neuroscientists studying the visual or auditory system can easily generate any image or sound they can imagine to see how an organism or neuron will respond. Since neuroscientists can now easily design more pokes than they could ever deliver, a fundamental question is “What pokes should they actually use?” The complexity of the brain means that only a small number of the pokes scientists can deliver will produce any information about the brain. One of the fundamental challenges of experimental neuroscience is finding the right stimulus parameters to produce an informative response in the system being studied. This thesis addresses this problem by developing algorithms to sequentially optimize neurophysiology experiments. Every experiment we conduct contains information about how the brain works. Before conducting the next experiment we should use what we have already learned to decide which experiment to perform next. In particular, we should design the experiment that will reveal the most information about the brain. At a high level, neuroscientists already perform this type of sequential, optimal experimental design; for example, crude experiments which knock out entire regions of the brain have given rise to modern experimental techniques which probe the responses of individual neurons using finely tuned stimuli.
The goal of this thesis is to develop automated and rigorous methods for optimizing neurophysiology experiments efficiently and at a much finer time scale. In particular, we present methods for near instantaneous optimization of the stimulus being used to drive a neuron.
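The sequential idea can be caricatured in one dimension: given the current parameter estimate, pick the next stimulus that carries the most Fisher information under an exponential-family firing-rate model. This is a deliberately simplified sketch with an assumed scalar model, not the thesis's actual algorithm.

```python
import math

def next_stimulus(candidates, beta_hat):
    """Greedy one-step design for a toy neuron model with firing rate
    mu(x) = exp(beta * x): choose the candidate stimulus x whose
    Fisher information about beta, mu(x) * x**2, is largest at the
    current estimate beta_hat."""
    def info(x):
        return math.exp(beta_hat * x) * x * x
    return max(candidates, key=info)
```

As the estimate of beta is updated after each response, the chosen stimulus changes, which is the "use what we have already learned" loop described above.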
100

Vitamin supplementation of sows

Shelton, Nicholas William January 1900 (has links)
Doctor of Philosophy / Department of Animal Sciences and Industry / Jim Nelssen / A total of 701 pigs were used to evaluate the effects of natural vitamin E relative to synthetic vitamin E in sow diets, of late gestation feeding level on sow reproductive performance, of dietary L-carnitine and chromium on sow reproductive performance, and of experimental design on nursery pig trial interpretation. As D-α-tocopheryl acetate increased in the sow’s diet, concentrations of α-tocopherol increased (P < 0.03) in sow plasma, colostrum, milk, pig plasma, and pig heart. Regression analysis indicated that the bioavailability coefficients for D-α-tocopheryl acetate relative to DL-α-tocopheryl acetate ranged from 2.1 to 4.2 for sow and pig plasma α-tocopherol, 2.9 to 3.0 for colostrum α-tocopherol, 1.6 for milk α-tocopherol, 1.8 for heart α-tocopherol, and 2.0 for liver α-tocopherol. Overall, this study indicates that the relative bioavailability of D-α-tocopheryl acetate relative to DL-α-tocopheryl acetate varies with the response criterion but is greater than the standard potency value of 1.36. Increasing sow gestation feeding level by 0.9 kg from d 90 of gestation through farrowing reduced (P = 0.001) daily lactation feed intake in gilts but also resulted in improved conception rate in gilts, whereas increasing late gestation feeding level decreased conception rate in sows (interaction; P = 0.03). Increasing late gestation feed intake in gilts also increased (P < 0.02) pig weaning weights during the second parity. Increasing late gestation feeding levels did not improve performance of older sows. Adding L-carnitine and chromium from chromium picolinate to sow gestation and lactation diets reduced (P = 0.01) sow weight loss during lactation but did not improve (P > 0.05) litter size, pig birth weight, or the variation in pig birth weight.
Blocking pens of nursery pigs by BW in a randomized complete block design (RCBD) did not improve the estimates of the error variance σ² compared to a completely randomized design (CRD) in which all pens were allotted to have similar means and variations of body weight. Therefore, the added degrees of freedom for the error term in the CRD allowed more power to detect treatment differences with the CRD than with the RCBD.
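The degrees-of-freedom trade-off behind that conclusion can be sketched numerically; the pen, treatment, and block counts below are hypothetical, not the study's.

```python
def error_df_crd(n_units, n_trt):
    """Residual degrees of freedom for a completely randomized
    design with one-way treatment structure: N - t."""
    return n_units - n_trt

def error_df_rcbd(n_units, n_trt, n_blocks):
    """Residual df for a randomized complete block design: blocking
    spends an extra (b - 1) df that the CRD keeps in the error
    term, reducing power when blocks explain little variation."""
    return n_units - n_trt - (n_blocks - 1)

# Hypothetical layout: 48 pens, 6 treatments, 8 blocks of 6 pens.
crd_df = error_df_crd(48, 6)        # 42
rcbd_df = error_df_rcbd(48, 6, 8)   # 35
```

When blocking does not reduce the error variance, the CRD's larger error df yields a smaller critical F value and hence more power, which is the study's point.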
