1 |
Futures markets : Theory and tests. Antoniou, A. January 1986
No description available.
|
2 |
Optimal Distributed Detection of Multiple Hypotheses Using Blind Algorithms. Liu, Bin 10 1900
In a parallel distributed detection system, each local detector makes a decision based on its own observations and transmits that local decision to a fusion center, where a global decision is made. Given fixed local decision rules, the fusion center needs perfect knowledge of the performance of the local detectors, as well as of the prior probabilities of the hypotheses, in order to design the optimal fusion rule. Such knowledge is not available in most practical cases. In this thesis, we propose a blind technique for the general distributed detection problem with multiple hypotheses. We start by formulating the M-ary fusion rule that is optimal in the sense of minimizing the overall error probability when the local decision rules are fixed. Optimality can be achieved only if the prior probabilities of the hypotheses and the parameters describing local detector performance are known. Next, we propose a blind technique to estimate these parameters, since in most cases they are unknown. The numbers of occurrences of the possible decision combinations across all local detectors are multinomially distributed, with occurrence probabilities that are nonlinear functions of the prior probabilities of the hypotheses and the parameters describing local detector performance. We derive both nonlinear least squares (LS) and maximum likelihood (ML) estimates of the unknown parameters. The ML estimator accounts for the known parametric form of the likelihood function of the local decision combinations and hence achieves better estimation accuracy.
Finally, we present closed-form expressions for the overall detection performance in both the binary and the M-ary case, and show that the performance obtained with estimated parameter values quickly approaches that obtained with the true values. We also investigate how various factors affect the overall detection performance. The simulation results show that the blind algorithm proposed in this thesis provides an efficient way to solve distributed detection problems. / Thesis / Master of Applied Science (MASc)
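To make the fusion step concrete, here is a minimal sketch of the MAP fusion rule for M hypotheses when the priors and the local detectors' confusion matrices are known (or have been estimated). It is an illustration only, not the thesis's implementation; the function name, array layout, and toy numbers are assumptions of the sketch.

```python
import numpy as np

def map_fusion(local_decisions, priors, confusion):
    """Fuse N local decisions into the global MAP decision over M hypotheses.

    local_decisions : sequence of N ints; the i-th entry is detector i's decision
    priors          : (M,) array, priors[j] = P(H_j)
    confusion       : (N, M, M) array, confusion[i, d, j]
                      = P(detector i decides d | H_j is true)
    """
    log_post = np.log(np.asarray(priors, dtype=float))
    for i, u in enumerate(local_decisions):
        # accumulate each detector's log-likelihood for its observed decision
        log_post = log_post + np.log(confusion[i, u, :])
    return int(np.argmax(log_post))  # minimizes the overall error probability

# Toy example: two identical detectors, three hypotheses.
priors = np.array([0.5, 0.3, 0.2])
conf = np.tile(0.1 * np.ones((3, 3)) + 0.7 * np.eye(3), (2, 1, 1))
print(map_fusion([1, 1], priors, conf))  # both detectors say H_1 -> fuses to 1
```

When the priors and confusion matrices are unknown, the blind step described above replaces them with their LS or ML estimates before fusing.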
|
3 |
Aid to Lesotho: dilemmas of state survival and development. MATLOSA, KHABELE TEBOHO January 1995
Philosophiae Doctor - PhD / This thesis discusses the triangular relationship of aid, state and development since Lesotho's independence. It builds on three key hypotheses. First, during the pre-adjustment period aid entrenched bureaucratic state power, but this changed with the adoption of the adjustment programme, which only facilitates state survival. Secondly, hemmed in by external developments and internal political and economic crisis, the state is caught between survival and shrinking resources. Thirdly, given the above, development has remained elusive in spite of the infusion of aid on highly preferential terms. Since the Cold War, aid issues have undergone three phases. Until the 1960s, donor concerns focussed primarily on economic growth. Growth with redistribution, or the basic needs approach, dominated aid disbursement up to the late 1970s. Since the 1980s, aid has been influenced predominantly by the IMF/World Bank orthodoxy of adjustment. Much of the debate on aid to Africa generally, and to Lesotho specifically, has revolved around whether aid develops or underdevelops recipient countries. The view that aid bolsters state power is not new. This study argues, however, that this may not be the case under adjustment conditions. Aid facilitates state survival in a context in which donors mount a systematic offensive against dirigisme and economic nationalism. As they do so, the locus of economic production and interaction shifts to private agents and autonomous social movements, and the role of the state is cut back. Donor confidence, therefore, shifts from states to markets.
The implications of these processes for the Lesotho state, and prospects for development, form the central thrust of this study. Non-probability purposive sampling was used for data collection. This approach rests on qualitative research methodology. Respondents were chosen on the basis of their position and influence on decision-making processes that impinge on the interface amongst aid, state and development. Primary data sources are clustered into three categories: government; donor agencies and embassies; and non-governmental organisations.
|
4 |
Cue-Sampling Strategies and the Role of Verbal Hypotheses in Concept Identification. Hislop, Mervyn W. 03 1900
The role of verbal hypotheses in concept identification was explored by manipulating three variables affecting the relation between verbalized rules and classification performance. (i) Verbalizing rules before and after classification changed subjects' cue-sampling strategies and the control of verbal hypotheses over sorting performance. (ii) The difficulty of stimulus description affected how subjects utilized verbal hypotheses, and whether verbalized rules completely specified the cues used for classification. (iii) The number of irrelevant attributes changed the relative efficiency of stimulus-learning over rule-learning for concept identification.
These investigations demonstrate effective techniques for varying and evaluating the importance of verbal rules for classification, and suggest that subjects' prior verbal habits markedly affect the degree of reliance placed on verbal hypotheses in concept attainment. / Thesis / Doctor of Philosophy (PhD)
|
5 |
Intergenerational Differences in Barriers that Impede Mental Health Service Use among Latinos. Escobar-Galvez, Irene 07 1900
Research has extensively documented the mental health disparities that exist for ethnic and racial minorities living in the United States. With respect to Latinos, such disparities are marked by less access to care and poorer quality of mental health treatment. Studies on Latino mental health have found differences in mental health service utilization among ethnic subgroups and among different generations of Latinos. However, empirical data examining specific attitudes and barriers to mental health treatment among different generations of Latinos are limited. This study explored the relationships between Latino generational status, mental health service utilization, psychological distress, and barriers to mental health treatment. An online survey (N = 218) included samples of first-generation (n = 67), second-generation (n = 86), and third-generation or beyond Latinos (n = 65). Results indicated that first-generation Latinos had the lowest rate of mental health service utilization and reported greater linguistic and structural knowledge barriers; however, they had lower perceived social stigma around mental health services when age at migration was considered. Implications of these findings for research, mental health service providers, and mental health policy are discussed.
|
6 |
Topics in multiple hypotheses testing. Qian, Yi 25 April 2007
It is common to test many hypotheses simultaneously in applications of statistics. The probability of making a false discovery grows with the number of statistical tests performed. When all the null hypotheses are true, and the test statistics are independent and continuous, the error rates from the familywise error rate (FWER)- and the false discovery rate (FDR)-controlling procedures are equal to the nominal level. When some of the null hypotheses are not true, both procedures are conservative. In the first part of this study, we review the background of the problem and propose methods to estimate the number of true null hypotheses. The estimates can be used in FWER- and FDR-controlling procedures with a consequent increase in power. We conduct simulation studies and apply the estimation methods to data sets with biological or clinical significance.
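One standard way such an estimate raises power is to plug it into the Benjamini-Hochberg step-up rule. The sketch below does this with Storey's simple tail-based estimator of the proportion of true nulls; it is an illustration only, not the estimation method proposed in this study, and the function names are our own.

```python
import numpy as np

def storey_pi0(pvals, lam=0.5):
    """Storey's tail-based estimate of the proportion of true null hypotheses."""
    pvals = np.asarray(pvals, dtype=float)
    pi0 = np.mean(pvals > lam) / (1.0 - lam)
    return min(1.0, max(pi0, 1.0 / len(pvals)))  # clamp into (0, 1]

def adaptive_bh(pvals, q=0.05):
    """Benjamini-Hochberg step-up rule with the level divided by the
    estimated proportion of true nulls, which raises power when many
    nulls are false.  Returns a boolean rejection mask."""
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    pi0 = storey_pi0(pvals)
    order = np.argsort(pvals)
    thresholds = q * np.arange(1, m + 1) / (m * pi0)
    below = pvals[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index passing the step-up
        rejected[order[: k + 1]] = True
    return rejected
```

When pi0 is estimated to be well below 1, the effective level q/pi0 exceeds the nominal q, recovering the power lost to conservativeness.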
In the second part of the study, we propose a mixture model approach for the analysis of ChIP-chip high density oligonucleotide array data to study the interactions between proteins and DNA. If we could identify the specific locations where proteins interact with DNA, we could increase our understanding of many important cellular events. Most experiments to date are performed in culture on cell lines, bacteria, or yeast; future experiments will include those in developing tissues, organs, or cancer biopsies, and they are critical in understanding the function of genes and proteins. Here we investigate the ChIP-chip data structure and use a beta-mixture model to help identify the binding sites. To determine the appropriate number of components in the mixture model, we suggest the Anderson-Darling test. Our study indicates that it is a reasonable means of choosing the number of components in a beta-mixture model. The mixture model procedure has broad applications in biology and is illustrated with several data sets from bioinformatics experiments.
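For the second part, here is a minimal sketch of fitting a beta mixture to scores rescaled into (0, 1). It maximizes the mixture likelihood directly with a general-purpose optimizer rather than reproducing the study's procedure, and it does not implement the Anderson-Darling selection of the number of components; the function names and the random-restart heuristic are assumptions of the sketch.

```python
import numpy as np
from scipy.stats import beta
from scipy.optimize import minimize

def fit_beta_mixture(x, k=2, restarts=5, seed=0):
    """Fit a k-component beta mixture to data in (0, 1) by direct
    maximum likelihood; returns (weights, a-parameters, b-parameters)."""
    rng = np.random.default_rng(seed)

    def unpack(theta):
        w = np.exp(theta[:k]); w = w / w.sum()   # softmax mixture weights
        a = np.exp(theta[k:2 * k])               # shape parameters kept > 0
        b = np.exp(theta[2 * k:])
        return w, a, b

    def nll(theta):
        w, a, b = unpack(theta)
        dens = sum(w[j] * beta.pdf(x, a[j], b[j]) for j in range(k))
        return -np.sum(np.log(dens + 1e-300))    # guard against log(0)

    best = None
    for _ in range(restarts):                    # crude multi-start heuristic
        res = minimize(nll, rng.normal(size=3 * k), method="Nelder-Mead",
                       options={"maxiter": 5000})
        if best is None or res.fun < best.fun:
            best = res
    return unpack(best.x)
```

Components concentrated near 1 would then flag probes whose enrichment scores are consistent with binding sites.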
|
7 |
Thresholding FMRI images. Pavlicova, Martina January 2004
No description available.
|
8 |
Definição do nível de significância em função do tamanho amostral / Setting the level of significance depending on the sample size. Oliveira, Melaine Cristina de 28 July 2014
Currently, when testing hypotheses, it is conventional to fix a single value (usually 0.05) for the maximum acceptable Type I error (the probability of rejecting H0 given that it is true), also known as the significance level of the proposed hypothesis test, denoted alpha. Most of the time, the Type II error, beta (the probability of accepting H0 given that it is false), is not even computed. Nor is it usual to ask whether the adopted alpha is reasonable for the problem being analyzed, or even for the sample size at hand. This text aims to prompt reflection on these questions, and it suggests that the significance level should be a function of the sample size. Instead of fixing a single significance level, we propose fixing the ratio of severity between Type I and Type II errors, based on the losses incurred in each case, and then, given a sample size, defining the ideal significance level as the one that minimizes the linear combination of the two decision errors. We present examples with simple, composite, and sharp hypotheses for the comparison of proportions, contrasting the conventional approach with the proposed Bayesian one.
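A minimal numerical sketch of the idea (our own illustration, not code from the thesis): for a one-sided test of H0: p = p0 against H1: p = p1 under a normal approximation, fix the relative loss of the two error types and, for each sample size, pick the level that minimizes the weighted sum of the error probabilities. Function and parameter names are assumptions of the sketch.

```python
import numpy as np
from scipy.stats import norm

def optimal_alpha(n, p0=0.5, p1=0.6, loss_ratio=1.0):
    """Level minimizing loss_ratio * alpha + beta for a one-sided test of
    H0: p = p0 vs H1: p = p1 with n observations (normal approximation)."""
    alphas = np.linspace(1e-4, 0.5, 5000)
    z = norm.ppf(1.0 - alphas)                  # critical values under H0
    se0 = np.sqrt(p0 * (1 - p0) / n)
    se1 = np.sqrt(p1 * (1 - p1) / n)
    beta = norm.cdf((p0 + z * se0 - p1) / se1)  # P(fail to reject | H1)
    return alphas[np.argmin(loss_ratio * alphas + beta)]

# The minimizing level shrinks as the sample size grows:
for n in (30, 100, 1000):
    print(n, round(float(optimal_alpha(n)), 4))
```

With these defaults, the minimizing level falls steadily as n grows, which is the point of making the significance level a function of the sample size rather than a fixed 0.05.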
|
9 |
Three essays on hypotheses testing involving inequality constraints. Hsu, Yu-Chin, 1978- 21 September 2010
The focus of this research is on hypotheses testing involving inequality constraints. In the first chapter of this dissertation, we propose Kolmogorov-Smirnov type tests for stochastic dominance relations between the potential outcomes of a binary treatment under the unconfoundedness assumption. Our stochastic dominance tests compare every point of the cumulative distribution functions (CDFs), so they can fully utilize all the information in the distributions. For first order stochastic dominance, the test statistic is defined as the supremum of the difference between two inverse-probability-weighting estimators of the CDFs of the potential outcomes. The critical values are approximated by a simulation method. We show that our test has good size properties and is consistent in the sense that it can detect any violation of the null hypothesis asymptotically. First order stochastic dominance tests in the treated subpopulation, and higher order stochastic dominance tests in the whole population and among the treated, are shown to share the same properties. The tests are applied to evaluate the effect of a job training program on incomes, and we find that job training has a positive effect on real earnings. Finally, we extend our tests to cases in which the unconfoundedness assumption does not hold.
On the other hand, considerable attention has been paid to testing inequality restrictions using Wald type tests. As noted by Wolak (1991), there are certain situations where it is difficult to obtain tests with correct size even asymptotically. These situations occur when the variance-covariance matrix of the functions in the constraints depends on the unknown parameters, as would be the case in nonlinear models. This dependence on the unknown parameters makes it computationally difficult to find the least favorable configuration (LFC), which can be used to bound the size of the test. In the second chapter of this dissertation, we extend Hansen's (2005) superior predictive ability (SPA) test to testing hypotheses involving general inequality constraints in which the variance-covariance matrix can depend on the unknown parameters. Our test attains correct size asymptotically and is consistent, without requiring knowledge of the LFC, and it can be applied to a wider class of problems than considered in Wolak (1991).
In the last chapter, we construct new Kolmogorov-Smirnov tests for stochastic dominance of any pre-specified order without resorting to the LFC, to improve on the power of Barrett and Donald's (2003) tests. To do this, we first show that, under the null hypothesis, if the objects being compared at a given income level are not equal, then the objects at that income level have no effect on the null distribution. Second, we extend Hansen's (2005) recentering method to a continuum of inequality constraints and construct a recentering function that converges uniformly to the underlying parameter function asymptotically under the null hypothesis. We treat the recentering function as the true underlying parameter function and add it to the simulated Brownian bridge processes to simulate the critical values. We show that our tests control size asymptotically and are consistent. We also show that, by avoiding the LFC, our tests are less conservative and more powerful than Barrett and Donald's (2003) tests. Monte Carlo simulations support our results, and we examine the performance of our tests in an empirical example.
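As a concrete illustration of the starting point the last chapter improves upon, the following sketches a KS-type test of first-order stochastic dominance for two independent samples, with critical values approximated by resampling under the least favorable configuration F_x = F_y, in the spirit of Barrett and Donald (2003). It omits the inverse-probability weighting of the first chapter and the recentering of the third; all names are assumptions of this sketch.

```python
import numpy as np

def sd1_statistic(x, y):
    """KS-type statistic for H0: F_x(t) <= F_y(t) for all t (first-order
    dominance of x over y); large values signal violations."""
    x, y = np.sort(x), np.sort(y)
    n, m = len(x), len(y)
    grid = np.concatenate([x, y])
    Fx = np.searchsorted(x, grid, side="right") / n   # empirical CDF of x
    Fy = np.searchsorted(y, grid, side="right") / m   # empirical CDF of y
    return np.sqrt(n * m / (n + m)) * np.max(Fx - Fy)

def sd1_pvalue(x, y, n_boot=999, seed=0):
    """Bootstrap p-value under the least favorable case F_x = F_y,
    resampling both samples from the pooled data."""
    rng = np.random.default_rng(seed)
    stat = sd1_statistic(x, y)
    pooled = np.concatenate([x, y])
    hits = sum(
        sd1_statistic(rng.choice(pooled, len(x)), rng.choice(pooled, len(y)))
        >= stat
        for _ in range(n_boot)
    )
    return (hits + 1) / (n_boot + 1)
```

Because the resampling imposes the least favorable configuration everywhere, this version is conservative when the CDFs are far apart over part of the support, which is exactly the slack the dissertation's recentering approach removes.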
|