1

Like a wrecking ball: Gillian Welch and the modern South

Kirby, Jason. January 2006
Thesis (M.A.)--Bowling Green State University, 2006. / Document formatted into pages; contains v, 159 p. Includes bibliographical references.
2

Effects of Transcription Errors on Supervised Learning in Speech Recognition

Sundaram, Ramasubramanian H 13 December 2003
Supervised learning using Hidden Markov Models has been used to train acoustic models for automatic speech recognition for several years. Typically, clean transcriptions form the basis for this training regimen. However, results have shown that readily available but occasionally erroneous transcription sources (e.g., closed captions) do not degrade performance significantly. This work analyzes the effects of mislabeled data on recognition accuracy. For this purpose, training is performed using manually corrupted training data and the results are observed on three different databases: TIDigits, Alphadigits, and Switchboard. For Alphadigits, with 16% of the data mislabeled, the performance of the system degrades by 12% relative to the baseline results. For a complex task like Switchboard, at 16% mislabeled training data, the performance degrades by 8.5% relative to the baseline results. The training process is relatively robust to mislabeled data because the Gaussian mixtures used to model the underlying distribution tend to cluster around the majority of the correct data; the outliers (incorrect data) do not contribute significantly to the reestimation process.
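The mechanism the abstract describes — mixture components clustering around the majority of correctly labeled data while outliers receive little weight during reestimation — can be illustrated with a small simulation. Below is a minimal numpy sketch on a toy 1-D two-class problem (not the thesis's HMM/speech setup); the 16% corruption rate mirrors the abstract, everything else is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Two classes with well-separated 1-D feature distributions.
X = np.concatenate([rng.normal(0.0, 1.0, n), rng.normal(4.0, 1.0, n)])
y = np.concatenate([np.zeros(n, int), np.ones(n, int)])

# Mislabel 16% of the training data, mirroring the rate in the abstract.
y_bad = y.copy()
flip = rng.random(y.size) < 0.16
y_bad[flip] = 1 - y_bad[flip]

# Supervised estimates from the noisy labels are biased toward each other.
mu = np.array([X[y_bad == c].mean() for c in (0, 1)])
var = np.array([X[y_bad == c].var() for c in (0, 1)])
print("means from noisy labels:", mu)

# A few EM reestimation steps (equal mixture weights assumed): soft
# responsibilities downweight outliers, so the means drift back toward
# the majority (correct) data -- the robustness the abstract observes.
for _ in range(20):
    # E-step: responsibility of each component for each sample.
    dens = np.exp(-0.5 * (X[:, None] - mu) ** 2 / var) / np.sqrt(var)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted mean/variance updates.
    w = resp.sum(axis=0)
    mu = (resp * X[:, None]).sum(axis=0) / w
    var = (resp * (X[:, None] - mu) ** 2).sum(axis=0) / w
print("means after EM refit:  ", mu)  # close to the true 0.0 and 4.0
```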
3

Like a Wrecking Ball: Gillian Welch and the Modern South

Kirby, Jason 26 June 2006
No description available.
4

A rhetorical analysis of The Blue Book: a major speech by Robert Welch

Hosterman, Craig Allan January 1970
This thesis is a rhetorical analysis of a major speech by Robert Welch, founder of The John Birch Society. A verbatim copy of the speech used in this analysis is available under the title The Blue Book of the John Birch Society; the speech itself was originally delivered by Welch in December 1958. The analysis examines the purpose and organization of The Blue Book speech, and the use of logical, factual, and non-factual arguments. An attempt is also made to point out fallacies in arguments, weaknesses in organization, multiplicity of purposes, unethical techniques, and factual discrepancies.
5

A city hall for Welch, West Virginia

Brunschwyler, Richard Grant January 1951
This thesis has five objectives. They are as follows: (a) to present a study of the existing conditions of local government facilities within the city of Welch, West Virginia, showing the need for a new local government building and fire station; (b) to present the findings of a study made to determine the facilities needed to fulfill the requirements of the city government and fire department; (c) to present the findings of a study made to select the best available site; (d) to present a study of the possibilities of remodeling an existing building for use as a city hall and fire department; (e) to present a design of a new local government building which shall house the administrative and enforcement offices and departments of the city government, together with a fire department. This building shall be designated, “A City Hall for Welch, West Virginia.” / Master of Science
6

Training of Hidden Markov models as an instance of the expectation maximization algorithm

Majewsky, Stefan 27 July 2017
In Natural Language Processing (NLP), speech and text are parsed and generated with language models and parser models, and translated with translation models. Each model contains a set of numerical parameters which are found by applying a suitable training algorithm to a set of training data. Many such training algorithms are instances of the Expectation-Maximization (EM) algorithm. In [BSV15], a generic EM algorithm for NLP is described. This work presents a particular speech model, the Hidden Markov model, and its standard training algorithm, the Baum-Welch algorithm. It is then shown that the Baum-Welch algorithm is an instance of the generic EM algorithm of [BSV15], from which it follows that all statements about the generic EM algorithm also apply to the Baum-Welch algorithm, in particular its correctness and convergence properties.
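To make the connection concrete, here is a minimal numpy sketch of one Baum-Welch reestimation step for a discrete-observation HMM: scaled forward-backward recursions followed by expected-count updates. It follows the standard textbook formulation rather than the generic EM algorithm of [BSV15], and all parameter values in the usage example are made up:

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Scaled forward/backward passes over observation sequence `obs`."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N)); beta = np.zeros((T, N)); scale = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]; scale[0] = alpha[0].sum(); alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t-1] @ A) * B[:, obs[t]]
        scale[t] = alpha[t].sum(); alpha[t] /= scale[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t+1]] * beta[t+1])) / scale[t+1]
    return alpha, beta, scale

def baum_welch_step(pi, A, B, obs):
    """One E+M step: reestimate (pi, A, B) from expected counts."""
    T, N = len(obs), len(pi)
    alpha, beta, scale = forward_backward(pi, A, B, obs)
    gamma = alpha * beta                    # P(state at t | obs), per row
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((N, N))                   # expected transition counts
    for t in range(T - 1):
        x = alpha[t][:, None] * A * (B[:, obs[t+1]] * beta[t+1])[None, :]
        xi += x / x.sum()
    new_pi = gamma[0]
    new_A = xi / xi.sum(axis=1, keepdims=True)
    new_B = np.zeros_like(B)
    for t in range(T):
        new_B[:, obs[t]] += gamma[t]        # expected emission counts
    new_B /= new_B.sum(axis=1, keepdims=True)
    return new_pi, new_A, new_B

# Usage: iterate from some initialization; the likelihood of `obs`
# increases monotonically, which is exactly the EM convergence property.
obs = [0, 1, 1, 0, 2, 1, 0]                 # a sequence over 3 symbols
pi = np.full(2, 0.5)
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.3, 0.2], [0.1, 0.3, 0.6]])
for _ in range(50):
    pi, A, B = baum_welch_step(pi, A, B, obs)
```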
7

Comparing Welch's ANOVA, a Kruskal-Wallis test and traditional ANOVA in case of Heterogeneity of Variance

Liu, Hangcheng 01 January 2015
Analysis of variance (ANOVA) is robust to violations of the normality assumption, but it may be inappropriate when the assumption of homogeneity of variance is violated. Welch's ANOVA and the Kruskal-Wallis test (a non-parametric method) are applicable in this case. In this study we compare the three methods' empirical type I error rates and power when heterogeneity of variance occurs, and determine which method is most suitable in which cases, including balanced/unbalanced designs, small/large sample sizes, and normal/non-normal distributions.
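The comparison can be sketched in a short simulation. scipy provides classic ANOVA (f_oneway) and Kruskal-Wallis (kruskal) but no direct Welch's ANOVA routine, so the Welch statistic is implemented by hand from its standard formula; the group sizes and variances below are illustrative assumptions, not the thesis's design:

```python
import numpy as np
from scipy import stats

def welch_anova_p(groups):
    """p-value of Welch's ANOVA, computed from the standard formula."""
    k = len(groups)
    n = np.array([len(g) for g in groups], float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                    # precision weights
    mw = (w * m).sum() / w.sum()                 # weighted grand mean
    A = (w * (m - mw) ** 2).sum() / (k - 1)
    h = ((1 - w / w.sum()) ** 2 / (n - 1)).sum()
    F = A / (1 + 2 * (k - 2) / (k**2 - 1) * h)
    df2 = (k**2 - 1) / (3 * h)
    return stats.f.sf(F, k - 1, df2)

rng = np.random.default_rng(1)
sizes, sds = [10, 20, 40], [1.0, 2.0, 4.0]       # unbalanced, heteroscedastic
reps, alpha = 5000, 0.05
hits = np.zeros(3)
for _ in range(reps):
    # All groups share the same mean, so every rejection is a type I error.
    g = [rng.normal(0, s, n) for n, s in zip(sizes, sds)]
    p = [stats.f_oneway(*g).pvalue, welch_anova_p(g), stats.kruskal(*g).pvalue]
    hits += np.array(p) < alpha
print(dict(zip(["ANOVA", "Welch", "Kruskal-Wallis"], hits / reps)))
# Classic ANOVA's empirical error rate typically drifts away from the
# nominal 0.05 under heteroscedasticity; Welch's ANOVA stays close to it.
```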
8

A Dynamically Partitionable Compressed Cache

Chen, David, Peserico, Enoch, Rudolph, Larry 01 1900
The effective size of an L2 cache can be increased by using a dictionary-based compression scheme. Naive application of this idea performs poorly since the data values in a cache vary greatly in their "compressibility." The novelty of this paper is a scheme that dynamically partitions the cache into sections of different compressibilities. While compression is often researched in the context of a large stream, in this work it is applied repeatedly to smaller, cache-line-sized blocks so as to preserve the random-access requirement of a cache. When a cache line is brought into the L2 cache, or the cache line is to be modified, the line is compressed using a dynamic LZW dictionary. Depending on the compression achieved, it is placed into the relevant partition. The partitioning is dynamic in that the ratio of space allocated to compressed and uncompressed data varies depending on the actual performance. Certain SPEC-2000 benchmarks using a compressed L2 cache show an 80% reduction in L2 miss rate when compared to using an uncompressed L2 cache of the same area, taking into account all area overhead associated with the compression circuitry. For other SPEC-2000 benchmarks, the compressed cache performs as well, in terms of hit rate, as a traditional cache that is 4.3 times as large. The adaptivity ensures that, in terms of miss rates, the compressed cache never performs worse than a traditional cache. / Singapore-MIT Alliance (SMA)
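As a rough illustration of the dictionary-based compression in question, here is a minimal LZW sketch applied to a 64-byte "cache line". It shows only the software idea; the paper's scheme is a hardware design with a dynamically managed dictionary and partitioning logic that this does not model:

```python
def lzw_compress(block: bytes) -> list[int]:
    """Return the LZW code sequence for `block`."""
    # Start from the 256 single-byte strings.
    table = {bytes([i]): i for i in range(256)}
    codes, cur = [], b""
    for b in block:
        nxt = cur + bytes([b])
        if nxt in table:
            cur = nxt                      # keep extending the match
        else:
            codes.append(table[cur])       # emit code for the longest match
            table[nxt] = len(table)        # grow the dictionary
            cur = bytes([b])
    if cur:
        codes.append(table[cur])
    return codes

# Highly regular lines compress well; lines with no repetition do not --
# which is why the paper partitions the cache by compressibility.
line_regular = bytes(64)                   # 64 zero bytes
line_irregular = bytes(range(64))          # 64 distinct bytes
print(len(lzw_compress(line_regular)), "codes for the regular line")
print(len(lzw_compress(line_irregular)), "codes for the irregular line")
```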
9

Exploring the Behaviour of the Hidden Markov Model on CpG Island Prediction

2013 April 1900
DNA can be represented abstractly as a language with only four nucleotides represented by the letters A, C, G, and T, yet the arrangement of those four letters plays a major role in determining the development of an organism. Understanding the significance of certain arrangements of nucleotides can unlock the secrets of how the genome achieves its essential functionality. Regions of DNA particularly enriched with cytosine (C nucleotides) and guanine (G nucleotides), especially the CpG di-nucleotide, are frequently associated with biological function related to gene expression, and concentrations of CpGs referred to as "CpG islands" are known to collocate with regions upstream from gene coding sequences within the promoter region. The pattern of occurrence of these nucleotides, relative to adenine (A nucleotides) and thymine (T nucleotides), lends itself to analysis by machine-learning techniques such as Hidden Markov Models (HMMs) to predict the areas of greater enrichment. HMMs have been applied to CpG island prediction before, but often without an awareness of how the outcomes are affected by the manner in which the HMM is applied. Two main findings of this study are: 1. The outcome of a HMM is highly sensitive to the setting of the initial probability estimates. 2. Without the appropriate software techniques, HMMs cannot be applied effectively to large data such as whole eukaryotic chromosomes. Both of these factors are rarely considered by users of HMMs, but are critical to a successful application of HMMs to large DNA sequences. In fact, these shortcomings were discovered through a close examination of published results of CpG island prediction using HMMs, and without being addressed, can lead to an incorrect implementation and application of HMM theory. A first-order HMM is developed and its performance compared to two other historical methods, the Takai and Jones method and the UCSC method from the University of California Santa Cruz. The HMM is then extended to a second-order to acknowledge that pairs of nucleotides define CpG islands rather than single nucleotides alone, and the second-order HMM is evaluated in comparison to the other methods. The UCSC method is found to be based on properties that are not related to CpG islands, and thus is not a fair comparison to the other methods. Of the other methods, the first-order HMM method and the Takai and Jones method are comparable in the tests conducted, but the second-order HMM method demonstrates superior predictive capabilities. However, these results are valid only when taking into consideration the highly sensitive outcomes based on initial estimates, and finding a suitable set of estimates that provide the most appropriate results. The first-order HMM is applied to the problem of producing synthetic data that simulates the characteristics of a DNA sequence, including the specified presence of CpG islands, based on the model parameters of a trained HMM. HMM analysis is applied to the synthetic data to explore its fidelity in generating data with similar characteristics, as well as to validate the predictive ability of an HMM. Although this test fails to meet expectations, a second test using a second-order HMM to produce simulated DNA data using frequency distributions of CpG island profiles exhibits highly accurate predictions of the pre-specified CpG islands, confirming that when the synthetic data are appropriately structured, an HMM can be an accurate predictive tool.
One outcome of this thesis is a set of software components (CpGID 2.0 and TrackMap) capable of efficient and accurate application of an HMM to genomic sequences, together with visualization that allows quantitative CpG island results to be viewed in conjunction with other genomic data. CpGID 2.0 is an adaptation of a previously published software component that has been extensively revised, and TrackMap is a companion product that works with the results produced by the CpGID 2.0 program. Executing these components allows one to monitor output aspects of the computational model such as the number and size of the predicted CpG islands, including their CG content percentage and level of CpG frequency. These outcomes can then be related to the input values used to parameterize the HMM.
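The first-order model the thesis describes can be sketched with a tiny two-state HMM (island vs. background) over the A/C/G/T alphabet, decoded with log-space Viterbi. All parameter values below are illustrative assumptions, not the thesis's trained estimates:

```python
import numpy as np

STATES = ["island", "background"]
SYMS = {c: i for i, c in enumerate("ACGT")}

log_pi = np.log([0.5, 0.5])                  # initial state probabilities
log_A = np.log([[0.99, 0.01],                # both states are "sticky":
                [0.01, 0.99]])               # islands span many bases
log_B = np.log([[0.15, 0.35, 0.35, 0.15],    # island: C/G enriched
                [0.30, 0.20, 0.20, 0.30]])   # background: closer to uniform

def viterbi(seq):
    """Most likely state path for `seq`, in log space."""
    obs = [SYMS[c] for c in seq]
    T, N = len(obs), len(STATES)
    dp = np.zeros((T, N)); back = np.zeros((T, N), int)
    dp[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        cand = dp[t-1][:, None] + log_A      # cand[i, j]: best path via i -> j
        back[t] = cand.argmax(axis=0)
        dp[t] = cand.max(axis=0) + log_B[:, obs[t]]
    path = [int(dp[-1].argmax())]
    for t in range(T - 1, 0, -1):            # backtrack
        path.append(int(back[t][path[-1]]))
    return [STATES[s] for s in reversed(path)]

flank = "TATATTATATTA"
seq = flank + "CGCGCGGC" * 3 + flank         # a CG-rich run between AT flanks
labels = viterbi(seq)
print("".join("I" if s == "island" else "." for s in labels))
# The abstract's first finding in miniature: the decoding is sensitive to
# the initial/transition estimates -- try log_pi = np.log([0.01, 0.99]) or
# less sticky transitions and watch the predicted island boundaries move.
```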
10

Verification of Pipelined Ciphers

Lam, Chiu Hong January 2009
The purpose of this thesis is to explore the formal verification techniques of completion functions and equivalence checking by verifying two pipelined cryptographic circuits, the KASUMI and WG ciphers. Most current methods of communication involve either a personal computer or a mobile phone. To ensure that information is exchanged in a secure manner, encryption circuits are used to transform the information into an unintelligible form. To be highly secure, these circuits are generally designed to be hard to analyze, which makes it difficult to locate a design error during verification. Cryptographic circuits therefore pose significant challenges in the area of formal verification. Formal verification uses mathematics to formulate correctness criteria for designs, to develop mathematical models of designs, and to verify designs against their correctness criteria. The results of this work can extend the existing collection of verification methods as well as benefit the area of cryptography. In this thesis, we implemented the KASUMI cipher in VHDL, and we applied the optimization technique of pipelining to create three additional implementations of KASUMI. We verified the three pipelined implementations of KASUMI with completion functions and equivalence checking. During the verification of KASUMI, we developed a methodology to handle the completion functions efficiently based on VHDL generic parameters. We also implemented the WG cipher in VHDL, and applied the optimization techniques of pipelining and hardware re-use to create an optimized implementation of WG, which we verified with completion functions and equivalence checking. During the verification of WG, we developed a "skipping" methodology that can decrease the number of verification obligations required to verify the correctness of a circuit, and a way of applying the completion-functions approach to a circuit that has been optimized with hardware re-use.
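The completion-functions idea can be illustrated on a toy datapath: a completion function maps in-flight pipeline state to the architecturally visible result, and verification reduces to checking that stepping the pipeline commutes with completion. The sketch below exhaustively checks a made-up 4-bit, 2-stage pipeline; it is a drastic simplification of the technique as applied to KASUMI and WG in the thesis, and every function and bit-width here is an assumption:

```python
MASK = 0xF                                   # 4-bit toy datapath

def stage1(x): return (x + 3) & MASK         # first combinational stage
def stage2(x): return (x * 5) & MASK         # second combinational stage
def spec(x):   return stage2(stage1(x))      # unpipelined specification

def step(state, x):
    """One clock of the pipeline: registers (r1, r2) latch stage outputs."""
    r1, r2 = state
    return (stage1(x), stage2(r1))

def complete(state):
    """Completion function: finish every in-flight value's remaining
    stages, yielding the architecturally visible results."""
    r1, r2 = state
    return (stage2(r1), r2)

# Verification obligation (the commuting diagram): stepping the pipeline
# and then completing must equal the spec's result for the new input,
# paired with the completed value already in flight.
for x in range(16):
    for r1 in range(16):
        for r2 in range(16):
            done = complete(step((r1, r2), x))
            assert done == (spec(x), complete((r1, r2))[0]), (x, r1, r2)
print("all completion-function obligations hold")
```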
