621

Evaluation of new industrial product ideas: an empirical study of the new product screening model and an analysis of managers' screening behavior

De Brentani, Ulrike. January 1983 (has links)
No description available.
622

The problem of classifying members of a population into groups

Flora, Roger Everette January 1965 (has links)
A model is assumed in which individuals are to be classified into groups as to their "potential" with respect to a given characteristic. For example, one may wish to classify college applicants into groups with respect to their ability to succeed in college. Although actual values for the "potential," or underlying variable of classification, may be unobservable, it is assumed possible to divide the individuals into groups with respect to this characteristic. Division into groups may be accomplished either by fixing the boundaries of the underlying variable of classification or by fixing the proportion of the individuals which may belong to a given group.

For discriminating among the different groups, a set of measurements is obtained for each individual. In the example above, for instance, classification might be based on test scores achieved by the applicants on a set of tests administered to them. Since the value of the underlying variable of classification is unobservable, we may assign, in place of this variable, a characteristic random variable to each individual. The characteristic variable will be the same for every member of a given group. We then consider a choice of characteristic random variable and a linear combination of the observed measurements such that the correlation between the two is a maximum with respect to both the coefficients of the different measurements and the characteristic variable. If a significant correlation is found, one may then use as a discriminant for a randomly selected individual the linear combination obtained by using the coefficients found by the above procedure.

In order to facilitate a test of validity for the proposed discriminant function, the distribution of a suitable function of the above correlation coefficient is found under the null hypothesis of no correlation between the underlying variable of classification and the observed measurements. A test procedure based on the statistic for which the null distribution is found is then described.

Special consideration is given in the study to the case of only two classification groups with the proportion of individuals to belong to each group fixed. For this case, in addition to obtaining the null distribution, the distribution of the test statistic is also considered under the alternative hypothesis. Low-order moments of the test criterion are obtained, and the approximate power of the proposed test is found for specific cases of the model by fitting an appropriate density to the moments derived. The behavior of the power function as the sample size increases, and as the population multiple correlation between the underlying variable of classification and the observed measurements increases, is also investigated.

Finally, the probability of misclassification, or "the problem of shrinkage" as it is often called, is considered. Possible approaches to the problem and some of the difficulties in investigating this problem are indicated. / Ph. D.
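For the two-group case with fixed proportions, the procedure can be illustrated numerically. The sketch below is a minimal illustration on synthetic data, with an assumed fixed 0/1 characteristic variable (the thesis also optimizes the choice of characteristic variable, which this sketch omits): regressing the centered indicator on the centered measurements yields the correlation-maximizing coefficients, and an F statistic on the resulting squared multiple correlation gives a test of the null hypothesis of no correlation.

```python
# Minimal sketch: two-group discriminant obtained by maximizing the
# correlation between a 0/1 characteristic variable and a linear
# combination of measurements. Synthetic data; not the thesis's exact
# test statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 40, 3
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, p)),      # group 1
               rng.normal(0.8, 1.0, (n // 2, p))])     # group 2, shifted mean
y = np.repeat([0.0, 1.0], n // 2)                      # characteristic variable

Xc, yc = X - X.mean(axis=0), y - y.mean()
beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)         # discriminant coefficients
R2 = np.corrcoef(Xc @ beta, yc)[0, 1] ** 2             # squared multiple correlation

# F test of the null hypothesis of no correlation
F = (R2 / p) / ((1 - R2) / (n - p - 1))
print(f"R^2 = {R2:.3f}, p-value = {stats.f.sf(F, p, n - p - 1):.4f}")
```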
623

The problem of classifying members of a population on a continuous scale

Barnett, Frederic Charles January 1964 (has links)
Having available a vector of measurements for each individual in a random sample from a multivariate population, we assume in addition that these individuals can be ranked on some criterion of interest. As an example of this situation, we may have measured certain physiological characteristics (blood pressure, amounts of certain chemical substances in the blood, etc.) in a random sample of schizophrenics. After a series of treatments (perhaps shock treatments, doses of a tranquillizer, etc.) these individuals might be ranked on the basis of favorable response to treatment. We shall in general be interested in predicting which individuals in a new group would respond most favorably. Thus, in the example, we should wish to know which individuals would most likely benefit from the series of treatments. Some difficulties in applying the classical discriminant function analysis to problems of this type are noted.

We have chosen to use the multiple correlation coefficient of ranks with measured variates as a statistic in testing whether ranks are associated with measurements. We give to this coefficient the name "quasi-rank multiple correlation coefficient" and proceed to find its first four exact moments under the assumption that the underlying probability distribution is multivariate normal. Two methods are used to approximate the power of tests based on the quasi-rank multiple correlation coefficient in the case of just one measured variate; the agreement for a sample size of twenty is quite good. The asymptotic relative efficiency of the squared quasi-rank coefficient vis-a-vis the squared standard multiple correlation coefficient is 9/π², a result which does not depend on the number of measured variates.

If the null hypothesis that ranks are not associated with measurements is rejected, it is appropriate to use the measurements in some way to predict the ranks. The quasi-rank multiple correlation coefficient is the maximized simple correlation of ranks with linear combinations of the measured variates. The maximizing linear combination of measured variates is taken as a discriminant function, and its values for subsequently chosen individuals are used to rank these individuals in order of merit.

A demonstration study is included in which we employ a random sample of size twenty from a six-variate normal distribution of known structure (for which the population multiple correlation coefficient is .655). The null hypothesis of no association of ranks with measurements is rejected in a two-sided size .05 test. The discriminant function is obtained and is used to "predict" the true ranks of the twenty individuals in the sample. The predicted ranks represent the true ranks rather well, with no predicted rank more than four places from the true rank. For other populations in which the population multiple correlation coefficient is greater than .655 we should expect to obtain even better sets of predicted ranks.

In developing the moments of the quasi-rank multiple correlation coefficient it was necessary to obtain exact moments of a certain linear combination of quasi-ranges in a random sample from a normal population. Since this quasi-range statistic may be useful in other investigations, we include also its moment generating function and some derivatives of that function. / Ph. D.
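A rough numerical sketch of the prediction step (synthetic data; the known six-variate structure of the demonstration study is not reproduced here): ranks are regressed on the measurements, the resulting correlation is the quasi-rank multiple correlation, and the fitted linear combination serves as the discriminant function for ranking new individuals.

```python
# Minimal sketch of the quasi-rank discriminant: the least-squares linear
# combination of measurements maximizes the simple correlation with the
# criterion ranks; that correlation is the quasi-rank multiple correlation.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(1)
n, p = 20, 6
X = rng.normal(size=(n, p))
criterion = X @ rng.normal(size=p) + rng.normal(size=n)   # latent merit
ranks = rankdata(criterion)                                # observed ranking

Xc, rc = X - X.mean(axis=0), ranks - ranks.mean()
w, *_ = np.linalg.lstsq(Xc, rc, rcond=None)                # discriminant weights
R = np.corrcoef(Xc @ w, rc)[0, 1]                          # quasi-rank multiple corr.

# Rank five new individuals in order of merit by discriminant score
X_new = rng.normal(size=(5, p))
order = np.argsort(-((X_new - X.mean(axis=0)) @ w))
print(f"R = {R:.3f}; predicted order of merit: {order}")
```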
624

Comparison of two drugs by multiple stage sampling using Bayesian decision theory

Smith, Armand V. 02 February 2010 (has links)
The general problem considered in this thesis is to determine an optimum strategy for deciding how to allocate the observations in each stage of a multi-stage experimental procedure between two binomial populations (e.g., the numbers of successes for two drugs) on the basis of the results of previous stages. After all of the stages of the experiment have been performed, one must make the terminal decision of which of the two populations has the higher probability of success. The optimum strategy is to be optimum relative to a given loss function, and a prior distribution, or weighting function, for the probabilities of success for the two populations is assumed. Two general classes of loss functions are considered, and it is assumed that the total number of observations in each stage is fixed prior to the experiment.

In order to find the optimum strategy a method of analysis called extensive-form analysis is used. This is essentially a method for enumerating all the possible outcomes and corresponding strategies and choosing the optimum strategy for a given outcome. However, this method of analysis is much too long for all but small examples, even when a digital computer is used. Because of this difficulty two alternative procedures, which are approximations to extensive-form analysis, are proposed. In the stage-by-stage procedure one assumes at each stage that he is at the last stage of his multi-stage procedure and allocates his observations to each of the two populations accordingly; it is shown that this is equivalent to assuming at each stage that one has a one-stage procedure. In the approximate procedure one (approximately) minimizes, at each stage, the posterior variance of the difference of the probabilities of success for the two populations. The computations for this procedure are quite simple to perform. The stage-by-stage procedure is also considered for the case in which the two populations are normal with known variance rather than binomial; it is then shown that the approximate procedure can be derived as an approximation to the stage-by-stage procedure when normal approximations to binomial distributions are used.

The three procedures are compared with each other, and with equal division of the observations, in several examples by computing the probability of making the correct terminal decision for various values of the population parameters (the probabilities of success). It is assumed in these computations that the prior distributions of the population parameters are rectangular distributions and that the loss functions are symmetric; i.e., the losses are as great for one wrong terminal decision as they are for the other. These computations show that, for the examples studied, there is relatively little loss in using the stage-by-stage procedure rather than extensive-form analysis and relatively little gain in using the approximate procedure instead of equal division of the observations. However, there is a relatively large loss in using the approximate procedure rather than the stage-by-stage procedure when the population parameters are close to 0 or 1.

At first it is assumed that there are a fixed number of stages in the experiment, but later in the thesis this restriction is weakened so that only the maximum number of stages possible in the experiment is fixed and the experiment can be stopped at any stage before the last possible stage is reached. Stopping rules for the stage-by-stage and the approximate procedures are then derived. / Ph. D.
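The flavor of the approximate procedure can be sketched with Beta posteriors. The following minimal sketch uses rectangular (Beta(1,1)) priors and a simplified allocation rule — each observation goes to the population whose posterior variance is currently larger, which greedily shrinks the posterior variance of the difference — rather than the thesis's exact rules.

```python
# Minimal sketch of multi-stage allocation between two binomial
# populations with Beta posteriors; a simplification of the thesis's
# "approximate procedure", followed by the terminal decision.
import numpy as np

rng = np.random.default_rng(2)
p_true = (0.55, 0.45)               # unknown success probabilities
post = [[1.0, 1.0], [1.0, 1.0]]     # Beta(1,1): rectangular priors

def beta_var(a, b):
    """Posterior variance of a Beta(a, b) success probability."""
    return a * b / ((a + b) ** 2 * (a + b + 1.0))

n_stages, per_stage = 5, 10
for _ in range(n_stages):
    for _ in range(per_stage):
        # allocate the observation to the less precisely estimated arm
        i = 0 if beta_var(*post[0]) >= beta_var(*post[1]) else 1
        success = rng.random() < p_true[i]    # observe one trial on drug i
        post[i][0] += success                 # update Beta parameters
        post[i][1] += 1 - success

means = [a / (a + b) for a, b in post]
print("terminal decision: population", 1 + int(means[1] > means[0]))
```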
625

Some parametric empirical Bayes techniques

Rutherford, John Ross January 1965 (has links)
This thesis considers two distinct aspects of the empirical Bayes decision problem. The first aspect considered is the problem of point estimation and hypothesis testing. The second is that of estimating the prior distribution and then estimating the posterior distribution and confidence intervals.

In the first aspect we assume that there exists an unobservable parameter space 𝔏 = {λ} on which is defined a prior distribution G(λ). For any action a from a class A there is a loss, L(a,λ) ≥ 0, which we incur when we take action a and the true parameter is λ. There exists an observable random vector X̰ = (X₁,...,X<sub>k</sub>), k ≥ 1, which admits of a sufficient statistic T = T(X̰) for λ. The conditional density function (c.d.f.) of T is f(t|λ). We assume that there exists a decision function δ<sub>G</sub>(t) from a class D (δεD implies that δ(t)εA for all t) such that the expected loss, R(δ,G) = ∫∫ L(δ(t),λ) f(t|λ) dt dG(λ), is minimized. This minimizing decision function is called a Bayes decision function, and the associated minimum expected loss is called the Bayes risk R(G). We assume that there exists a sequence of independent identically distributed random vectors <(X₁,...,X<sub>k</sub>,λ)<sub>n</sub>> (or <(T,λ)<sub>n</sub>>), with each element distributed independently of and identically to (X₁,...,X<sub>k</sub>,λ) (or (T,λ)). The problem is to construct sequential decision functions, δ<sub>n</sub>(t;t₁,t₂,...,t<sub>n</sub>) = δ<sub>n</sub>(t), which are asymptotically optimal (a.o.), that is, which satisfy lim<sub>n→∞</sub> R(δ<sub>n</sub>(T),G) = R(G).

We extend a theorem of Robbins (Ann. Math. Statist. 35, 1-20) to provide a simple method for the construction of a.o. point estimators of λ with a squared-error loss function when f(t|λ) is continuous. We extend the results of Samuel (Ann. Math. Statist. 34, 1370-1385) to provide a.o. tests of certain parametric hypotheses with loss functions which are polynomials in λ. The c.d.f.'s which are considered are all continuous and include some of those of the exponential class and some whose range depends upon the parameter. This latter result is applied to the problem of the one-sided truncation of density functions. The usefulness of all of these results is predicated upon the fact that the forms of the Bayes decision functions can be determined in such a way that the construction of the analogous a.o. empirical Bayes decision functions is simple. Two manipulative techniques, which provide the desired forms of the Bayes decision function, are introduced. These techniques are applied to several examples, and a.o. decision functions are defined.

To estimate the prior distribution we assume that there exists a sequence of independent identically distributed random vectors <(T,λ)<sub>n</sub>>, each distributed according to the joint density function J(t,λ) = G(λ)F(t|λ). The sequence <λ<sub>n</sub>> of <(T,λ)<sub>n</sub>> is unobservable. G(λ) belongs to a subclass g of a class G<sub>p</sub>(g), and F(t|λ) belongs to a class F. G<sub>p</sub>(g) is determined by the conditions: (a) G(λ) is absolutely continuous with respect to Lebesgue measure; (b) its density function, g(λ), is determined completely by a continuous function of its first p moments, p ≥ 2; (c) the first p moments are finite; (d) the subclass g contains those density functions which are determined by a particular known continuous function. The class F is determined by the condition that there exist known functions h<sub>k</sub>(.), k = 1,...,p, such that E[h<sub>k</sub>(T)|λ] = λᵏ. Under these conditions we construct an estimate, G<sub>n</sub>(λ), of G(λ) such that lim<sub>n→∞</sub> E[(G<sub>n</sub>(λ) - G(λ))²] = 0, a.e. λ. Estimates of the posterior distribution and of confidence intervals are constructed using G<sub>n</sub>(λ). / Ph. D.
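The moment condition in the second part can be made concrete with an assumed model. If T|λ ~ N(λ, 1), then h₁(t) = t and h₂(t) = t² - 1 satisfy E[h<sub>k</sub>(T)|λ] = λᵏ, so their sample means over the observed sequence estimate the first two moments of G; the sketch below checks this on simulated data with an assumed normal prior.

```python
# Minimal sketch of the moment condition E[h_k(T) | lam] = lam**k under
# an assumed sampling model T | lam ~ N(lam, 1): since E[T | lam] = lam
# and E[T**2 - 1 | lam] = lam**2, the sample means of h1(T) = T and
# h2(T) = T**2 - 1 estimate the prior's first two moments.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
lam = rng.normal(2.0, 0.5, n)     # unobserved draws from the prior G
t = rng.normal(lam, 1.0)          # observed sequence T_1, ..., T_n

m1 = t.mean()                     # estimates E[lam]     (true value 2.0)
m2 = (t ** 2 - 1.0).mean()        # estimates E[lam**2]
print(f"prior mean ~ {m1:.3f}, prior variance ~ {m2 - m1 ** 2:.3f}")  # var true 0.25
```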
626

Factors affecting group decision making: an insight on information practices by investigating decision making process among tactical commanders

Mishra, Jyoti L. 12 1900 (has links)
Introduction. Decision making, though an important form of information use, has not been vigorously researched in information practices research. By studying how decision makers make decisions in groups, we can learn about several underlying issues in information practices. Method. Twenty middle-level (tactical) commanders from blue light services in the UK were interviewed about how and where they seek information and how they make decisions while managing major incidents. Analysis. Activity theory was used as an overarching framework both to design the interview questions and to analyse the data. Results. Information need and information practices such as information sharing and information use are investigated. A model of the group decision-making process and of the factors affecting group decision making is proposed. Conclusions. By understanding the factors affecting decision making, decision support system designers and policy makers can address the underlying issues. Moreover, this paper reiterates the need for studying decision making to understand information practices. / This research is funded by ESRC and 1Spatial PLC.
627

Decision analysis in Turkey

Gonul, M.S., Soyer, E., Onkal, Dilek 05 1900 (has links)
No
628

Ethical Reasoning and Risk Propensity: A Comparison of Hospital and General Industry Senior Executives

Williamson, Stanley G. (Stanley Greer) 12 1900 (has links)
This research explores whether differences in ethical reasoning levels exist between senior hospital managers and top-level general industry executives. Similar comparisons are made between not-for-profit hospital managers and their peers in for-profit hospitals. Also examined are the ethical reasoning levels used most often by practicing executives, regardless of industry affiliation.
629

The acquisition of expertise in auditing: a judgmental analysis

Ettenson, Richard Thomas January 2011 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
630

Bayesian decision-makers reaching consensus using expert information

Garisch, I. January 2009 (has links)
Published Article / The paper is concerned with the problem of Bayesian decision-makers seeking consensus about the decision that should be taken from a decision space. Each decision-maker has his own utility function, and it is assumed that the parameter space has two points, Θ = {θ₁, θ₂}. The decision-makers' initial probabilities for Θ can be updated by information provided by an expert. The decision-makers have an opinion about the expert, formed by observing the expert's past performance. It is shown how the decision-makers can decide beforehand, on the basis of this opinion, whether consulting the expert will result in consensus.
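A minimal numerical sketch of the "decide beforehand" check (all priors, utilities, and the expert's track record below are assumed for illustration): enumerate every report the expert could make, update each decision-maker's P(θ₁) by Bayes' rule using the expert's past performance, and see whether the utility-maximizing decisions coincide for every report.

```python
# Minimal sketch: two Bayesian decision-makers, a two-point parameter
# space, and an expert whose reliability is known from past performance.
# Consensus is guaranteed beforehand iff their optimal actions agree for
# every report the expert could make.
priors = [0.6, 0.3]                          # P(theta1) for decision-makers 1, 2
accuracy = {"t1": 0.9, "t2": 0.2}            # P(expert reports "t1" | true state)

# utilities[i][action][state]: each decision-maker has his own utilities
utilities = [
    {"a1": {"t1": 10, "t2": 0}, "a2": {"t1": 2, "t2": 6}},
    {"a1": {"t1": 8,  "t2": 1}, "a2": {"t1": 0, "t2": 9}},
]

def best_action(p1, u):
    """Action maximizing expected utility given P(theta1) = p1."""
    eu = {a: p1 * u[a]["t1"] + (1 - p1) * u[a]["t2"] for a in u}
    return max(eu, key=eu.get)

for report in ("t1", "t2"):                  # every possible expert report
    like1 = accuracy["t1"] if report == "t1" else 1 - accuracy["t1"]
    like2 = accuracy["t2"] if report == "t1" else 1 - accuracy["t2"]
    choices = []
    for p, u in zip(priors, utilities):
        post = p * like1 / (p * like1 + (1 - p) * like2)   # Bayes update
        choices.append(best_action(post, u))
    print(report, choices, "consensus" if len(set(choices)) == 1 else "no consensus")
```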
