1

An adaptive multistage interference cancellation receiver for CDMA

Kaul, Ashish 13 February 2009 (has links)
Most of the previous research on multistage interference cancellation receivers for Code Division Multiple Access (CDMA) systems has relied on simulation techniques for performance evaluation. This thesis formulates a model for an adaptive multistage interference cancellation receiver within a CDMA system, to be employed at the cellular radio base station. A closed-form expression for the probability of bit error of this receiver is derived using a Gaussian approximation for Multiple Access Interference (MAI). The Bit Error Rate (BER) after any stage of interference cancellation can be computed from the signal-to-noise ratio, the number of users, and the processing gain of the CDMA system. The BER expressions are extended to derive asymptotic limits on the performance of interference cancellation as the number of cancellation stages approaches infinity, demonstrating a fundamental limit on the improvement that can be expected from any multistage interference cancellation scheme. Furthermore, the analysis quantifies the conditions under which interference cancellation may degrade performance. This thesis also extends a software implementation of the Multistage Rake receiver to a wide range of channel models, including Gaussian noise, MAI, multipath propagation, and near-far effects. Simulation results demonstrate the robustness of the Multistage Rake receiver to near-far effects and a manifold capacity improvement over conventional demodulation techniques. / Master of Science
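
For context, a minimal sketch of the standard Gaussian approximation for DS-CDMA bit error rate, the general form underlying analyses of this kind: MAI from the K-1 interfering users is treated as additional Gaussian noise with relative variance (K-1)/(3N). This is the generic single-stage textbook formula, not the thesis's multistage expressions; the function and parameter names are illustrative.

    from math import erfc, sqrt

    def q_function(x):
        # Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))
        return 0.5 * erfc(x / sqrt(2.0))

    def cdma_ber_gaussian(num_users, processing_gain, ebn0_linear):
        # Standard Gaussian approximation for asynchronous DS-CDMA:
        # MAI variance (K-1)/(3N) plus thermal-noise term 1/(2*Eb/N0).
        mai_term = (num_users - 1) / (3.0 * processing_gain)
        noise_term = 1.0 / (2.0 * ebn0_linear)
        return q_function(1.0 / sqrt(mai_term + noise_term))

    # Example: 20 users, processing gain 127, Eb/N0 = 10 dB
    print(cdma_ber_gaussian(20, 127, 10 ** (10 / 10.0)))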
2

The effect of sample size re-estimation on type I error rates when comparing two binomial proportions

Cong, Danni January 1900 (has links)
Master of Science / Department of Statistics / Christopher I. Vahl / Estimation of sample size is a critical step in the design of clinical trials. A trial with an inadequate sample size may fail to produce a statistically significant result. On the other hand, an unnecessarily large sample size increases the expenditure of resources and may raise an ethical problem by exposing an excessive number of human subjects to an inferior treatment. A poor estimate of the necessary sample size is often due to the limited information available at the planning stage, so mid-trial adjustment of the sample size has recently become a popular strategy. In this work, we introduce two methods of sample size re-estimation for trials with a binary endpoint, both utilizing interim information collected from the trial: a blinded method and a partially unblinded method. The blinded method recalculates the sample size based on the overall event proportion from the first stage, while the partially unblinded method uses only the control event proportion from the first stage. We performed simulation studies with different combinations of expected proportions based on fixed ratios of response rates, using equal sample sizes per group. The study shows that both methods preserved the type I error rate satisfactorily.
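
As a rough sketch of the blinded method described above (using only the pooled stage-one event proportion and the design-stage ratio of response rates), the following recalculates the per-group sample size from the standard two-proportion z-test formula. The function names, and the choice of a relative-risk parameterization for the fixed ratio, are assumptions for illustration.

    from scipy.stats import norm

    def n_per_group(p1, p2, alpha=0.05, power=0.80):
        # Two-sided z-test for two independent proportions, equal allocation.
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        var = p1 * (1 - p1) + p2 * (1 - p2)
        return (z_a + z_b) ** 2 * var / (p1 - p2) ** 2

    def blinded_reestimate(pooled_interim, risk_ratio, alpha=0.05, power=0.80):
        # Blinded re-estimation: only the overall (pooled) event proportion
        # from stage one is used; the assumed ratio p2/p1 stays fixed.
        p1 = 2 * pooled_interim / (1 + risk_ratio)  # since pooled = (p1+p2)/2
        p2 = risk_ratio * p1
        return n_per_group(p1, p2, alpha, power)

    # Example: design assumed ratio 1.5; interim pooled event rate 0.33
    print(blinded_reestimate(0.33, 1.5))  # ~195 per group before rounding up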
3

A Monte Carlo Analysis of Experimentwise and Comparisonwise Type I Error Rate of Six Specified Multiple Comparison Procedures When Applied to Small k's and Equal and Unequal Sample Sizes

Yount, William R. 12 1900 (has links)
The problem of this study was to determine the differences in experimentwise and comparisonwise Type I error rate among six multiple comparison procedures when applied to twenty-eight combinations of normally distributed data. These were the Least Significant Difference (LSD), the Fisher-protected Least Significant Difference (FLSD), the Student-Newman-Keuls (SNK) test, the Duncan Multiple Range Test (MRT), the Tukey Honestly Significant Difference (HSD), and the Scheffé Significant Difference (SSD). The Spjøtvoll-Stoline (HSD-SS) and Tukey-Kramer (HSD-TK) modifications were used for unequal-n conditions. A Monte Carlo simulation was used for twenty-eight combinations of k and n. The scores were normally distributed (µ=100; σ=10). The specified multiple comparison procedures were applied under two conditions: (a) all experiments and (b) only experiments in which the F-ratio was significant at the 0.05 level. Error counts were maintained over 1000 repetitions. The FLSD held the experimentwise Type I error rate to nominal alpha for the complete null hypothesis, and was more sensitive to sample mean differences than the HSD while still protecting against experimentwise error. The unprotected LSD was the only procedure to yield a comparisonwise Type I error rate at nominal alpha. The SNK and MRT error rates fell between the FLSD and HSD rates. The SSD error rate was the most conservative. Use of the harmonic mean of the two unequal sample n's (HSD-TK) yielded uniformly better results than use of the minimum n (HSD-SS). Bernhardson's formulas controlled the experimentwise Type I error rate of the LSD and MRT to nominal alpha, but pushed the HSD below the 0.95 confidence interval; they had no effect on the SSD. Use of the unprotected HSD produced fewer significant departures from nominal alpha.
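
The flavor of such a simulation can be sketched as follows, estimating the experimentwise Type I error rate under the complete null for the unprotected LSD and the Fisher-protected LSD. Pairwise t-tests stand in for the pooled-variance LSD here as a simplification, so the numbers will only approximate those of the study.

    import numpy as np
    from scipy import stats

    def experimentwise_error(k=4, n=10, reps=1000, alpha=0.05, seed=1):
        # Complete null hypothesis: all k groups drawn from N(100, 10).
        rng = np.random.default_rng(seed)
        lsd_errors = flsd_errors = 0
        for _ in range(reps):
            groups = [rng.normal(100.0, 10.0, n) for _ in range(k)]
            f_significant = stats.f_oneway(*groups).pvalue < alpha
            any_pair_rejected = any(
                stats.ttest_ind(groups[i], groups[j]).pvalue < alpha
                for i in range(k) for j in range(i + 1, k))
            lsd_errors += any_pair_rejected                     # unprotected LSD
            flsd_errors += any_pair_rejected and f_significant  # Fisher-protected
        return lsd_errors / reps, flsd_errors / reps

    print(experimentwise_error())  # FLSD rate should sit near nominal alpha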
4

BEST SOURCE SELECTORS AND MEASURING THE IMPROVEMENTS

Gatton, Tim 10 1900 (has links)
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada / After years of tracing the evolution of solutions for finding the best data, I learned that it isn't best source selection that we all want. What we need is best data selection.
5

USE OF ID-1 HIGH DENSITY DIGITAL RECORDING SYSTEMS FOR TEST RANGE SUPPORT

Schoeck, Kenneth O. 10 1900 (has links)
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The Space and Missile Systems Center at Vandenberg AFB has integrated ID-1 high-bit-rate helical scan digital recorders into its ground-based and mobile telemetry receiving and processing facilities. The systems are used for recording higher bit rates than those available with the current IRIG-standard longitudinal wideband and double-density instrumentation magnetic tape recorder/reproducers. In addition to the 400 Mbps digital recorders, the systems consist of high-speed multiplexer/demultiplexers and multi-channel bit synchronizers for recording numerous telemetry data links and sources on a single recorder. This paper describes the system configurations and compares their recording capabilities with those of the previous generation of instrumentation magnetic tape recorder/reproducers.
6

Statistical Analysis of High-Dimensional Gene Expression Data

Justin Zhu Unknown Date (has links)
The use of diagnostic rules based on microarray gene expression data has received wide attention in bioinformatics research. In order to form diagnostic rules, statistical techniques are needed to form classifiers with estimates of their associated error rates, and to correct for any selection biases in those estimates. There is also the associated problem of identifying the genes most useful in making these predictions. Traditional statistical techniques require the number of samples to be much larger than the number of features, but gene expression datasets usually have a small number of samples and a large number of features. In this thesis, some new techniques are developed, and traditional techniques are used innovatively after appropriate modification, to analyse gene expression data.

Classification: We first consider classifying tissue samples based on the gene expression data. We employ external cross-validation with recursive feature elimination to provide classification error rates for tissue samples with different numbers of genes. The techniques are implemented as an R package, BCC (Bias-Corrected Classification), and are applied to a number of real-world datasets. The results demonstrate that the error rates vary with the number of genes; for each dataset, there is usually an optimal number of genes that returns the lowest cross-validation error rate.

Detecting Differentially Expressed Genes: We then consider the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidates, it needs to be carried out within the framework of multiple hypothesis testing. The focus is on the use of mixture models to handle the multiplicity issue. The mixture model approach provides a framework for estimating the prior probability that a gene is not differentially expressed, and estimates various error rates, including the FDR (False Discovery Rate) and the FNR (False Negative Rate). We also develop a method for selecting biomarker genes for classification, based on their repeatability among the highly differentially expressed genes in cross-validation trials; this method incorporates both gene selection and classification.

Selection Bias: When forming a prediction rule on the basis of a small number of classified tissue samples, some form of feature (gene) selection is usually adopted. This is a necessary step if the number of features is high. As the subset of genes used in the final form of the rule has not been randomly selected, but rather chosen according to criteria designed to reflect the predictive power of the rule, there will be a selection bias inherent in estimates of the rule's error rates if care is not taken. Various situations are presented where selection biases arise in the formation of a prediction rule and where there is a consequent need to correct for them. Three types of selection bias are analysed: the bias from not using external cross-validation, the bias from not working with the full set of genes, and the bias from optimizing the classification error rate over a number of subsets obtained according to a selection method. Here we mostly employ the support vector machine with recursive feature elimination; a sketch of the bias-correcting cross-validation scheme follows this abstract. This thesis includes a description of cross-validation schemes that are able to correct for these selection biases. Furthermore, we examine the bias incurred when using the predicted rather than the true outcomes to define the class labels in forming and evaluating the performance of the discriminant rule.

Case Study: We present a case study using the breast cancer datasets. In the study, we compare the 70 highly differentially expressed genes proposed by van 't Veer and colleagues against the set of genes selected using our repeatability method. The results demonstrate that there is more than one set of biomarker genes. We also examine the selection biases that may exist when analysing this dataset; they are demonstrated to be substantial.
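
A minimal sketch of the external cross-validation scheme described above, using scikit-learn's recursive feature elimination inside each fold so that gene selection is repeated on every training split (the step that removes the selection bias). The toy data, feature counts, and classifier choice are illustrative assumptions, not the BCC package itself.

    import numpy as np
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    # Toy stand-in for a microarray matrix: 60 samples, 500 "genes".
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 500))
    y = rng.integers(0, 2, size=60)

    # RFE runs inside each fold, so genes are re-selected on every
    # training split; the resulting error estimate is corrected for
    # the selection bias discussed above.
    model = make_pipeline(
        RFE(SVC(kernel="linear"), n_features_to_select=20, step=0.5),
        SVC(kernel="linear"))
    scores = cross_val_score(model, X, y, cv=5)
    print("external-CV error rate:", 1.0 - scores.mean())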
7

Language Production in a Typological Perspective: A Corpus Study of Turkish Slips of the Tongue

Erisen, Ibrahim Ozgur 01 June 2010 (has links) (PDF)
The main purpose of this study is to establish a Turkish slips of the tongue (SOT) corpus and to make typological comparisons with English, French, and German corpora. In the first part of the study, a slips of the tongue corpus was created: 85 podcast recordings were analyzed and 53 SOT errors were found. The SOT errors were extracted from the podcasts, and the audio clips were combined with their spectrograms in a flash video. Classification of the SOT errors was carried out with respect to the linguistic units involved, the type of error, and the repair behavior. It was hypothesized that Turkish would have more morphological errors due to agglutination, and fewer phonological errors, since vowel harmony would function as an extra control mechanism. Classification with respect to the linguistic units involved shows that 54.27% of the errors are phonological, 16.98% morphological, 13.21% lexical, and 7.55% phrasal. Classification with respect to error type shows that 26.42% of the errors are anticipations, 30.19% perseverations, 18.87% substitutions, and 7.56% blends. These percentages differ from those of the other corpora: Turkish has more morphological and phonological errors. The data also show more perseverations than anticipations, similar to German. Typological comparison with the other languages suggests that the difference in ratios might be caused by the SOV sentence structure rather than by agglutination, so the first hypothesis was only partly confirmed. The second hypothesis was not supported: vowel harmony did not function as a control mechanism on the phonological well-formedness of the utterance; rather, it seems to operate at the level of morpho-phonology in the lexicon proper. Turkish having more phonological errors might also be related to a higher demand on working memory caused by the head-final SOV sentence structure. In order to draw more reliable conclusions, the size of the Turkish SOT database needs to be increased.
8

A Developmental Analysis of Sentence Production Errors in the Writing of Secondary School Students

Stromberg, Linda J. (Linda Jones) 12 1900 (has links)
This study measured the effect of mode of discourse and developmental factors on composition length, syntactic complexity, and sentence-production error rate in the writing of secondary school students. The study also included a descriptive analysis of syntactic and logical patterns found in the sentence production errors. The 297 students whose writing samples provided the data for this study were enrolled in grades 7, 9, and 11. The students were divided into low and high within-grade developmental groups. Each student wrote two compositions, one in the descriptive mode and one in the persuasive mode.
9

Learning Algorithms Using Chance-Constrained Programs

Jagarlapudi, Saketha Nath 07 1900 (has links)
This thesis explores Chance-Constrained Programming (CCP) in the context of learning. It is shown that chance-constrained approaches lead to improved algorithms for three important learning problems: classification with specified error rates, large-dataset classification, and Ordinal Regression (OR). Using moments of the training data, the CCPs are posed as Second Order Cone Programs (SOCPs), and novel iterative algorithms for solving the resulting SOCPs are derived. Borrowing ideas from robust optimization theory, the proposed formulations are made robust to moment estimation errors. A maximum margin classifier with specified false positive and false negative rates is derived; the key idea is to employ a chance-constraint for each class which implies that the actual misclassification rates do not exceed those specified. The formulation is applied to the case of biased classification. The problems of large-dataset classification and ordinal regression are addressed by deriving formulations which employ chance-constraints for clusters in the training data rather than for each data point. Since the number of clusters can be substantially smaller than the number of data points, the resulting formulations are very small and scale well to large datasets. The scalable classification and OR formulations are extended to feature spaces, and the kernelized duals turn out to be instances of SOCPs with a single cone constraint. Exploiting this special structure, fast iterative solvers which outperform generic SOCP solvers are proposed. Compared to state-of-the-art learners, the proposed algorithms achieve speedups as high as 10000 times when the specialized SOCP solvers are employed. The proposed formulations involve second-order moments of the data and hence are susceptible to moment estimation errors. A generic way of making the formulations robust to such estimation errors is illustrated: two novel confidence sets for the moments are derived, and it is shown that when either confidence set is employed, the robust formulations also yield SOCPs.
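
A toy sketch of the kind of class-wise chance-constrained classifier described above, posed as an SOCP with cvxpy. Following the standard Chebyshev-based relaxation, each class constraint P(error) <= eta becomes a second-order cone constraint with multiplier kappa = sqrt((1-eta)/eta); the data, the unit-margin normalization, and the variable names are illustrative assumptions, not the thesis's exact formulation.

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    X_pos = rng.normal(2.0, 0.5, size=(50, 2))    # toy positive-class samples
    X_neg = rng.normal(-2.0, 0.5, size=(50, 2))   # toy negative-class samples

    def moments(X):
        # Sample mean and a Cholesky factor L with covariance = L @ L.T
        S = np.cov(X.T) + 1e-6 * np.eye(X.shape[1])
        return X.mean(axis=0), np.linalg.cholesky(S)

    mu_p, L_p = moments(X_pos)
    mu_n, L_n = moments(X_neg)

    eta = 0.1                             # specified per-class error rate
    kappa = np.sqrt((1 - eta) / eta)      # multivariate Chebyshev multiplier

    w, b = cp.Variable(2), cp.Variable()
    # Each cone constraint enforces P(misclassify class) <= eta via the
    # Chebyshev bound:  w'mu - b >= 1 + kappa * ||Sigma^{1/2} w||.
    constraints = [w @ mu_p - b >= 1 + kappa * cp.norm(L_p.T @ w),
                   b - w @ mu_n >= 1 + kappa * cp.norm(L_n.T @ w)]
    cp.Problem(cp.Minimize(cp.norm(w)), constraints).solve()
    print("w =", w.value, "b =", b.value)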
10

The Effect of Length of Study on the Processing of Loanwords and Kanji Words by Chinese Learners of Japanese [中国人日本語学習者による外来語および漢字語の処理における学習期間の影響]

CHU, Xiang Juan (初 相娟), TAMAOKA, Katsuo (玉岡 賀津雄), YAMATO, Yuko (大和 祐子) 15 December 2010 (has links)
No description available.
