About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Tests of Independence in a Single 2x2 Contingency Table with Random Margins

Yu, Yuan 01 May 2014 (has links)
In the analysis of contingency tables, Fisher's exact test is an important and commonly used significance test of independence between two variables. However, Fisher's exact test is based on the assumption of fixed margins; that is, it uses information beyond the table, which makes it conservative. To address this, we allow the margins to be random. Instead of fitting the count data to the hypergeometric distribution as in Fisher's exact test, we model the margins and one cell using a multinomial distribution, and then use the likelihood ratio to test the hypothesis of independence. Furthermore, using Bayesian inference, we consider the Bayes factor as another test statistic. To judge test performance, we compare the power of the likelihood ratio test, the Bayes factor test and Fisher's exact test. In addition, we use our methodology to analyse data gathered from the Worcester Heart Attack Study to assess gender differences in the therapeutic management of patients with acute myocardial infarction (AMI) by selected demographic and clinical characteristics.
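For readers who want a concrete point of comparison, the sketch below contrasts Fisher's exact test with an ordinary multinomial likelihood ratio (G) test of independence on a hypothetical 2x2 table; it illustrates the general idea of treating the counts as multinomial rather than conditioning on fixed margins, and is not the thesis's specific random-margin model or Bayes factor.

```python
# Minimal sketch (not the thesis's exact model): Fisher's exact test vs a
# multinomial likelihood-ratio (G) test of independence for a 2x2 table.
import numpy as np
from scipy.stats import fisher_exact, chi2

table = np.array([[8, 2],     # hypothetical counts
                  [1, 5]])

# Fisher's exact test conditions on both margins being fixed.
_, p_fisher = fisher_exact(table)

# Likelihood-ratio (G) test: counts treated as multinomial, margins random.
obs = table.astype(float)
n = obs.sum()
expected = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / n
mask = obs > 0                                # avoid log(0) for empty cells
G = 2.0 * np.sum(obs[mask] * np.log(obs[mask] / expected[mask]))
p_lrt = chi2.sf(G, df=1)                      # (2-1)*(2-1) = 1 degree of freedom

print(f"Fisher exact p = {p_fisher:.4f}, LRT (G-test) p = {p_lrt:.4f}")
```
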
2

Analyzing and classifying the jumping spider of Eugaria albidentata

Lin, Shih-hua 28 July 2010 (has links)
Under the mechanism of natural selection, creatures are forced to evolve in order to survive. Keen-sighted jumping spiders have long been considered a main source of predation pressure on terrestrial arthropods, and many species benefit from mimicking the appearance of jumping spiders. In this study, based on the experimental data of Wang (2009b), we analyze the behavior of male Ptocasius strupifer toward six subject groups, namely male Ptocasius strupifer, female Ptocasius strupifer, male Plexippus paykulli, female Plexippus paykulli, Cataclysta angulata and Eugauria albidentata, in order to investigate the jumping-spider mimicry of Eugauria albidentata. Our interest is to compare the behavior of male Ptocasius strupifer toward Eugauria albidentata with its behavior toward the other five groups and to identify which is most similar. We use several statistical methods, namely the likelihood ratio test, factor analysis and cluster analysis, to evaluate the closeness of the behavior between groups. The analysis shows that the behavior of Ptocasius strupifer toward Eugauria albidentata is most similar to its behavior toward female Ptocasius strupifer and female Plexippus paykulli. Moreover, there is a wide discrepancy between Eugauria albidentata and Cataclysta angulata, although both belong to Musotiminae.
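A rough sketch of the cluster-analysis step is given below; the behaviour-frequency profiles are invented for illustration and are not Wang's (2009b) data.

```python
# Illustrative sketch only: hierarchical clustering of hypothetical
# behaviour-frequency profiles (rows = subject groups, columns = behaviours).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

groups = ["male P. strupifer", "female P. strupifer", "male P. paykulli",
          "female P. paykulli", "C. angulata", "E. albidentata"]
# Hypothetical counts of e.g. approach / display / retreat / ignore responses.
profiles = np.array([[12,  8, 3,  2],
                     [ 3, 15, 2,  5],
                     [10,  6, 5,  4],
                     [ 4, 14, 3,  4],
                     [ 1,  2, 9, 13],
                     [ 3, 13, 2,  6]], dtype=float)
profiles /= profiles.sum(axis=1, keepdims=True)   # compare proportions, not totals

Z = linkage(pdist(profiles, metric="euclidean"), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")   # cut the tree into two clusters
for group, lab in zip(groups, labels):
    print(f"{group}: cluster {lab}")
```
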
3

Rarities of genotype profiles in a normal Swedish population

Hedell, Ronny January 2010 (has links)
Investigations of stains from crime scenes are commonly used in the search for criminals. At the National Laboratory of Forensic Science, where these stains are examined, a number of questions of theoretical and practical interest regarding the databases of DNA profiles and the strength of DNA evidence against a suspect in a trial are not fully investigated. The first part of this thesis deals with how a sample of DNA profiles from a population is used in the process of estimating the strength of DNA evidence in a trial, taking population genetic factors into account. We then consider how to combine hypotheses regarding the relationship between a suspect and other possible donors of the stain from the crime scene by two applications of Bayes' theorem. After that we assess the DNA profiles that minimize the strength of DNA evidence against a suspect, and investigate how the strength is affected by sampling error using the bootstrap method and a Bayesian method. In the last part of the thesis we examine discrepancies between different databases of DNA profiles by both descriptive and inferential statistics, including likelihood ratio tests and Bayes factor tests. Little evidence of major differences is found.
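The role of Bayes' theorem in combining a likelihood ratio with prior odds over donor hypotheses can be sketched as follows; the likelihood ratios and the prior are invented for illustration, not taken from the thesis.

```python
# Sketch of evidence combination with Bayes' theorem (illustrative numbers only).
# Posterior odds = likelihood ratio x prior odds; applied once per hypothesis pair.

def posterior_probability(likelihood_ratio: float, prior_prob: float) -> float:
    """Update P(prosecution hypothesis) given an LR and a prior probability."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    post_odds = likelihood_ratio * prior_odds
    return post_odds / (1.0 + post_odds)

lr_dna = 1.0e6          # hypothetical strength of the DNA evidence
prior = 1.0 / 10_000    # hypothetical prior probability that the suspect is the donor

p1 = posterior_probability(lr_dna, prior)
# A second application, e.g. after evidence bearing on whether a close
# relative could instead be the donor (hypothetical LR):
p2 = posterior_probability(5.0, p1)
print(f"after DNA evidence: {p1:.4f}; after relationship evidence: {p2:.4f}")
```
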
4

Evaluation of evidence for autocorrelated data, with an example relating to traces of cocaine on banknotes

Wilson, Amy Louise January 2014 (has links)
Much research in recent years for evidence evaluation in forensic science has focussed on methods for determining the likelihood ratio in various scenarios. One proposition concerning the evidence is put forward by the prosecution and another is put forward by the defence. The likelihood of each of these two propositions is calculated, given the evidence. The likelihood ratio, or value of the evidence, is then given by the ratio of the likelihoods associated with these two propositions. The aim of this research is twofold. Firstly, it is intended to provide methodology for the evaluation of the likelihood ratio for continuous autocorrelated data. The likelihood ratio is evaluated for two such scenarios. The first is when the evidence consists of data which are autocorrelated at lag one. The second, an extension to this, is when the observed evidential data are also believed to be driven by an underlying latent Markov chain. Two models have been developed to take these attributes into account, an autoregressive model of order one and a hidden Markov model, which does not assume independence of adjacent data points conditional on the hidden states. A nonparametric model which does not make a parametric assumption about the data and which accounts for lag one autocorrelation is also developed. The performance of these three models is compared to the performance of a model which assumes independence of the data. The second aim of the research is to develop models to evaluate evidence relating to traces of cocaine on banknotes, as measured by the log peak area of the ion count for cocaine product ion m/z 105, obtained using tandem mass spectrometry. Here, the prosecution proposition is that the banknotes are associated with a person who is involved with criminal activity relating to cocaine and the defence proposition is the converse, which is that the banknotes are associated with a person who is not involved with criminal activity relating to cocaine. Two data sets are available, one of banknotes seized in criminal investigations and associated with crime involving cocaine, and one of banknotes from general circulation. Previous methods for the evaluation of this evidence were concerned with the percentage of banknotes contaminated or assumed independence of measurements of quantities of cocaine on adjacent banknotes. It is known that nearly all banknotes have traces of cocaine on them and it was found that there was autocorrelation within samples of banknotes, so these methods are not appropriate. The models developed for autocorrelated data are applied to evidence relating to traces of cocaine on banknotes; the results obtained for each of the models are compared using rates of misleading evidence, Tippett plots and scatter plots. It is found that the hidden Markov model is the best choice for the modelling of cocaine traces on banknotes because it has the lowest rate of misleading evidence and it also results in likelihood ratios which are large enough to give support to the prosecution proposition for some samples of banknotes seized from crime scenes. Comparison of the results obtained for models which take autocorrelation into account with the results obtained from the model which assumes independence indicates that not accounting for autocorrelation can result in the overstating of the likelihood ratio.
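A minimal sketch of the lag-one autoregressive idea is given below: the same Gaussian AR(1) likelihood is evaluated for a questioned trace series under parameters estimated from crime-related and from general-circulation training data, and the ratio is reported on the log10 scale. All data are simulated and the moment-based fit is a simplification, not the thesis's fitted banknote models.

```python
# Sketch: likelihood ratio for a lag-one autocorrelated trace, with AR(1)
# parameters for each proposition estimated from simulated training data.
import numpy as np

def ar1_loglik(x, mu, phi, sigma):
    """Gaussian AR(1) log-likelihood: x_t - mu = phi*(x_{t-1} - mu) + eps_t."""
    z = x - mu
    var0 = sigma**2 / (1.0 - phi**2)           # stationary variance of the first point
    ll = -0.5 * (np.log(2 * np.pi * var0) + z[0]**2 / var0)
    resid = z[1:] - phi * z[:-1]
    ll += np.sum(-0.5 * (np.log(2 * np.pi * sigma**2) + resid**2 / sigma**2))
    return ll

def fit_ar1(x):
    """Crude moment-based AR(1) fit (illustration only)."""
    mu = x.mean()
    z = x - mu
    phi = np.sum(z[1:] * z[:-1]) / np.sum(z[:-1] ** 2)
    sigma = np.std(z[1:] - phi * z[:-1])
    return mu, phi, sigma

def simulate_ar1(n, mu, phi, sigma, rng):
    x = np.empty(n)
    x[0] = mu + rng.normal(0, sigma / np.sqrt(1 - phi**2))
    for t in range(1, n):
        x[t] = mu + phi * (x[t - 1] - mu) + rng.normal(0, sigma)
    return x

rng = np.random.default_rng(0)
crime_train = simulate_ar1(500, mu=7.0, phi=0.5, sigma=1.0, rng=rng)    # hypothetical
general_train = simulate_ar1(500, mu=5.0, phi=0.5, sigma=1.0, rng=rng)  # hypothetical
questioned = simulate_ar1(40, mu=6.8, phi=0.5, sigma=1.0, rng=rng)      # questioned sample

log_lr = (ar1_loglik(questioned, *fit_ar1(crime_train))
          - ar1_loglik(questioned, *fit_ar1(general_train)))
print(f"log10 LR = {log_lr / np.log(10):.2f}")
```
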
5

Best-subset model selection based on multitudinal assessments of likelihood improvements

Carter, Knute Derek 01 December 2013 (has links)
Given a set of potential explanatory variables, one model selection approach is to select the best model, according to some criterion, from among the collection of models defined by all possible subsets of the explanatory variables. A popular procedure that has been used in this setting is to select the model that results in the smallest value of the Akaike information criterion (AIC). One drawback in using the AIC is that it can lead to the frequent selection of overspecified models. This can be problematic if the researcher wishes to assert, with some level of certainty, the necessity of any given variable that has been selected. This thesis develops a model selection procedure that allows the researcher to nominate, a priori, the probability at which overspecified models will be selected from among all possible subsets. The procedure seeks to determine if the inclusion of each candidate variable results in a sufficiently improved fitting term, and hence is referred to as the SIFT procedure. In order to determine whether there is sufficient evidence to retain a candidate variable, a set of threshold values is computed. Two procedures are proposed: a naive method based on a set of restrictive assumptions, and an empirical permutation-based method. Graphical tools have also been developed to be used in conjunction with the SIFT procedure. The graphical representation of the SIFT procedure clarifies the process being undertaken. Using these tools can also assist researchers in developing a deeper understanding of the data they are analyzing. The naive and empirical SIFT methods are investigated by way of simulation under a range of conditions within the standard linear model framework. The performance of the SIFT methodology is compared with model selection by minimum AIC; minimum Bayesian information criterion (BIC); and backward elimination based on p-values. The SIFT procedure is found to behave as designed—asymptotically selecting those variables that characterize the underlying data generating mechanism, while limiting the selection of false or spurious variables to the desired level. The SIFT methodology offers researchers a promising new approach to model selection, whereby they are now able to control the probability of selecting an overspecified model to a level that best suits their needs.
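For context, the exhaustive minimum-AIC best-subset search that SIFT is benchmarked against can be sketched as follows; the data and the true model are invented, and the SIFT thresholds themselves are not reproduced here.

```python
# Sketch of the comparison baseline: best-subset selection by minimum AIC over
# all possible subsets of candidate predictors (hypothetical data; x0, x2 matter).
from itertools import combinations

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, p = 200, 6
X = rng.normal(size=(n, p))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)

best_aic, best_subset = np.inf, ()
for k in range(p + 1):
    for subset in combinations(range(p), k):
        cols = list(subset)
        design = sm.add_constant(X[:, cols]) if cols else np.ones((n, 1))
        fit = sm.OLS(y, design).fit()
        if fit.aic < best_aic:
            best_aic, best_subset = fit.aic, subset

print(f"minimum-AIC subset: {best_subset} (AIC = {best_aic:.1f})")
# Criteria like AIC often admit spurious variables; SIFT instead caps the
# probability of selecting an overspecified model at a user-chosen level.
```
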
6

Subset selection based on likelihood from uniform and related populations

Chotai, Jayanti January 1979 (has links)
Let π1, π2, ..., πk be k (≥ 2) populations, where πi (i = 1, 2, ..., k) is characterized by the uniform distribution on (ai, bi) and exactly one of ai and bi is unknown. With unequal sample sizes, suppose that we wish to select a random-size subset of the populations containing the one with the smallest value of θi = bi − ai. Rule Ri selects πi iff a likelihood-based k-dimensional confidence region for the unknown (θ1, ..., θk) contains at least one point having θi as its smallest component. A second rule, R, is derived through a likelihood ratio and is equivalent to that of Barr and Rizvi (1966) when the sample sizes are equal. Numerical comparisons are made. The results apply to the larger class of densities g(z; θi) = M(z)Q(θi) for a(θi) < z < b(θi). Extensions to the cases when both ai and bi are unknown and when θmax is of interest are indicated.
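The likelihood-based regions underlying rule Ri can be illustrated in the simplest one-population case, Uniform(0, θ) with only the upper endpoint unknown: the likelihood ratio (max xi / θ)^n is itself Uniform(0, 1) at the true θ, giving an exact interval. The sketch below shows only this one-dimensional simplification, not the k-dimensional region of the report.

```python
# Sketch (one population only): an exact likelihood-based confidence interval
# for theta when X_1, ..., X_n ~ Uniform(0, theta). The likelihood ratio is
# (max x / theta)^n, so {theta : LR >= alpha} is an exact (1 - alpha) interval.
import numpy as np

def uniform_theta_interval(x, alpha=0.05):
    x = np.asarray(x, dtype=float)
    n, m = x.size, x.max()
    return m, m * alpha ** (-1.0 / n)      # [MLE, upper limit]

rng = np.random.default_rng(2)
sample = rng.uniform(0.0, 4.0, size=25)    # hypothetical data, true theta = 4
lo, hi = uniform_theta_interval(sample, alpha=0.05)
print(f"95% likelihood-based interval for theta: [{lo:.3f}, {hi:.3f}]")
```
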
7

Subset selection based on likelihood ratios : the normal means case

Chotai, Jayanti January 1979 (has links)
Let π1, ..., πk be k (≥ 2) populations such that πi, i = 1, 2, ..., k, is characterized by the normal distribution with unknown mean μi and variance aiσ², where ai is known and σ² may be unknown. Suppose that, on the basis of independent samples of size ni from πi (i = 1, 2, ..., k), we are interested in selecting a random-size subset of the given populations which hopefully contains the population with the largest mean. Based on likelihood ratios, several new procedures for this problem are derived in this report. Some of these procedures are compared with the classical procedure of Gupta (1956, 1965) and are shown to be better in certain respects. (New revised edition; a slightly revised version of Statistical Research Report No. 1978-6.)
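The flavour of a Gupta-type subset selection rule ("select πi if its sample mean is within a calibrated distance of the largest sample mean") can be sketched as below; the constant is calibrated by Monte Carlo under equal means rather than taken from published tables, and the report's likelihood-ratio procedures are not reproduced.

```python
# Sketch of a Gupta-type subset selection rule for normal means (known common
# variance, equal sample sizes), with the constant calibrated by Monte Carlo
# under the least favourable configuration (all means equal). Illustrative only.
import numpy as np

def calibrate_c(k, n, sigma, p_star=0.90, reps=20000, seed=0):
    rng = np.random.default_rng(seed)
    means = rng.normal(0.0, sigma / np.sqrt(n), size=(reps, k))
    gap = means.max(axis=1) - means[:, 0]      # shortfall of a designated "best"
    return np.quantile(gap, p_star)            # c so that P(best is selected) >= p*

def select_subset(sample_means, c):
    sample_means = np.asarray(sample_means)
    return np.flatnonzero(sample_means >= sample_means.max() - c)

k, n, sigma = 5, 20, 1.0
c = calibrate_c(k, n, sigma, p_star=0.90)

rng = np.random.default_rng(3)
true_means = np.array([0.0, 0.1, 0.2, 0.2, 0.6])        # hypothetical configuration
xbars = rng.normal(true_means, sigma / np.sqrt(n))
print("selected populations:", select_subset(xbars, c))
```
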
8

Robust Channel Estimation for Cooperative Communication Systems in the Presence of Relay Misbehaviors

Chou, Po-Yen 17 July 2012 (has links)
In this thesis, we investigate the problem of channel estimation in amplify-and-forward cooperative communication systems when the network may contain selfish relays. The information received at the destination is detected and then used to estimate the channel. In previous studies, the relays deliver information under the prerequisite of cooperation, and the destination receives the information sent from the source without any selfish relay; the channel is therefore estimated under this overly idealistic assumption. Unfortunately, the assumption does not hold in real applications, since there is currently no mechanism to guarantee that relays will always cooperate, and the performance of channel estimation degrades significantly when selfish relays are present in the network. This thesis therefore considers an amplify-and-forward cooperative communication system with direct transmission and proposes a detection mechanism to overcome the misbehaving-relay problem. The detection mechanism is based on a likelihood ratio test that uses both the direct-transmission and the relayed information. The detection result is then used to reconstruct the codeword used for estimating the product channel gain of the source-to-relay and relay-to-destination links. The mathematical derivation for the considered problem is developed, and numerical simulations are carried out for illustration. The simulation results verify that the proposed method is indeed able to achieve robust channel estimation.
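A generic likelihood-ratio-style detector for relay misbehaviour can be sketched as follows; it tests whether a known pilot was actually forwarded, assuming an unknown complex gain and known noise variance, and is only a simplified stand-in for the thesis's scheme (which also reconstructs the codeword and estimates the product channel gain).

```python
# Generic sketch (not the thesis's full scheme): a likelihood-ratio-type test for
# whether a relay actually forwarded a known pilot. Under H1 the destination sees
# y = h*x + noise with unknown gain h; under H0 (selfish relay) it sees noise only.
import numpy as np

rng = np.random.default_rng(4)
N, sigma2, p_fa = 64, 1.0, 1e-3
x = np.exp(2j * np.pi * rng.random(N))                 # known unit-modulus pilot

def glrt_statistic(y, x, sigma2):
    """GLRT for unknown complex gain h: T = |x^H y|^2 / (sigma^2 ||x||^2)."""
    return np.abs(np.vdot(x, y)) ** 2 / (sigma2 * np.vdot(x, x).real)

threshold = -np.log(p_fa)                              # T ~ Exp(1) under H0

def noise(n):
    return np.sqrt(sigma2 / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))

y_selfish = noise(N)                                   # relay forwarded nothing useful
y_cooperative = 0.8 * np.exp(1j * 0.3) * x + noise(N)  # hypothetical channel gain

for name, y in [("selfish", y_selfish), ("cooperative", y_cooperative)]:
    T = glrt_statistic(y, x, sigma2)
    print(f"{name}: T = {T:.2f}, declared cooperative = {T > threshold}")
```
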
9

A Study of the Mean Residual Life Function and Its Applications

Mbowe, Omar B 12 June 2006 (has links)
The mean residual life (MRL) function is an important tool for characterizing lifetime in survival analysis, actuarial science, reliability, economics and other social sciences. Different methods have been proposed for inference on the MRL, but their coverage probabilities for small sample sizes are not good enough. In this thesis we apply the empirical likelihood method and carry out a simulation study of the MRL function using different statistical distributions. The simulation study compares the empirical likelihood method and the normal approximation method. The comparisons are based on the average lengths of confidence intervals and coverage probabilities; we also compare median lengths of confidence intervals for the MRL. We found that the empirical likelihood method gives better coverage probability and shorter confidence intervals than the normal approximation method for almost all the distributions considered. Applying the two methods to real data, we also found that the empirical likelihood method gives narrower pointwise confidence bands.
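The empirical mean residual life itself is simple to compute, as in the sketch below on simulated exponential lifetimes; the empirical likelihood confidence bands studied in the thesis are not reproduced.

```python
# Sketch: the empirical mean residual life, MRL(t) = E[X - t | X > t],
# estimated by averaging the exceedances over t (simulated data only).
import numpy as np

def empirical_mrl(x, t):
    x = np.asarray(x, dtype=float)
    tail = x[x > t]
    return np.nan if tail.size == 0 else np.mean(tail - t)

rng = np.random.default_rng(5)
lifetimes = rng.exponential(scale=10.0, size=1000)   # exponential: true MRL is constant (10)

for t in [0.0, 5.0, 10.0, 20.0]:
    print(f"t = {t:5.1f}: empirical MRL = {empirical_mrl(lifetimes, t):.2f}")
```
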
10

Linear Approximations for Second Order High Dimensional Model Representation of the Log Likelihood Ratio

Foroughi pour, Ali 19 June 2019 (has links)
No description available.
