221

A systematic, experimental methodology for design optimization

Ritchie, Paul Andrew, 1960- January 1988 (has links)
Much attention has been directed at off-line quality control techniques in recent literature. This study is a refinement of and an enhancement to one technique, the Taguchi Method, for determining the optimum setting of design parameters in a product or process. In place of the signal-to-noise ratio, the mean square error (MSE) for each quality characteristic of interest is used. Polynomial models describing mean response and variance are fit to the observed data using statistical methods. The settings for the design parameters are determined by minimizing a statistical model. The model uses a multicriterion objective consisting of the MSE for each quality characteristic of interest. Minimum bias central composite designs are used during the data collection step to determine the settings of the parameters where observations are to be taken. Included is the development of minimum bias designs for various cases. A detailed example is given.
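The procedure this abstract describes — fit polynomial models for the mean response and variance, then minimize a mean-square-error objective over the design parameters — can be sketched as follows. This is a minimal illustration with simulated data, a single hypothetical design parameter, an assumed target value, and a grid search in place of the minimum bias central composite designs the study actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one design parameter x, five replicates per setting,
# a quality characteristic with target value 10, and noise that grows with x.
TARGET = 10.0
x = np.repeat(np.linspace(0, 4, 9), 5)
y = 6 + 3 * x - 0.5 * x**2 + rng.normal(0, 0.2 + 0.1 * x, size=x.size)

# Fit polynomial models for the mean response and the (log) variance.
levels = np.unique(x)
means = np.array([y[x == v].mean() for v in levels])
vars_ = np.array([y[x == v].var(ddof=1) for v in levels])
mean_model = np.polynomial.Polynomial.fit(levels, means, deg=2)
var_model = np.polynomial.Polynomial.fit(levels, np.log(vars_), deg=1)

# Choose the setting minimizing MSE = (mean - target)^2 + variance,
# the role played by the signal-to-noise ratio in the classical method.
grid = np.linspace(0, 4, 401)
mse = (mean_model(grid) - TARGET) ** 2 + np.exp(var_model(grid))
best = grid[np.argmin(mse)]
print(f"optimal setting ~ {best:.2f}")
```

With a multicriterion objective, the MSE terms for each quality characteristic would be summed (possibly weighted) before minimizing.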
222

Statistical analysis of marine water quality data in Hong Kong

Cheung, Ngai-pang., 張毅鵬. January 2001 (has links)
published_or_final_version / Environmental Management / Master / Master of Science in Environmental Management
223

Managerial use of quantitative techniques in building project management: contractors perspectives

Lin, Chun-ming., 連振明. January 2000 (has links)
published_or_final_version / Architecture / Master / Master of Science in Construction Project Management
224

A Simulation Study Comparing Various Confidence Intervals for the Mean of Voucher Populations in Accounting

Lee, Ihn Shik 12 1900 (has links)
This research examined the performance of three parametric methods for confidence intervals: the classical, the Bonferroni, and the bootstrap-t method, as applied to estimating the mean of voucher populations in accounting. Usually auditing populations do not follow standard models. The population for accounting audits generally is a nonstandard mixture distribution in which the audit data set contains a large number of zero values and a comparatively small number of nonzero errors. This study assumed a situation in which only overstatement errors exist. The nonzero errors were assumed to be normally, exponentially, and uniformly distributed. Five indicators of performance were used. The classical method was found to be unreliable. The Bonferroni method was conservative for all population conditions. The bootstrap-t method was excellent in terms of reliability, but the lower limit of the confidence intervals produced by this method was unstable for all population conditions. The classical method provided the shortest average width of the confidence intervals among the three methods. This study provided initial evidence as to how the parametric bootstrap-t method performs when applied to the nonstandard distribution of audit populations of line items. Further research should provide a reliable confidence interval for a wider variety of accounting populations.
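The bootstrap-t interval studied above can be sketched for a hypothetical zero-inflated audit population (many zero values, a few positive overstatement errors); the population parameters here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical audit population: ~90% zero errors, exponentially
# distributed overstatement errors otherwise -- a nonstandard mixture.
errors = np.where(rng.random(200) < 0.1, rng.exponential(50, 200), 0.0)

def bootstrap_t_ci(sample, b=2000, alpha=0.05, rng=rng):
    """Bootstrap-t confidence interval for the population mean."""
    n = sample.size
    mean, se = sample.mean(), sample.std(ddof=1) / np.sqrt(n)
    t_stats = np.empty(b)
    for i in range(b):
        resample = rng.choice(sample, n, replace=True)
        rs_se = resample.std(ddof=1) / np.sqrt(n)
        t_stats[i] = (resample.mean() - mean) / max(rs_se, 1e-12)
    # Studentized pivot: invert the bootstrap t-distribution's quantiles.
    lo_q, hi_q = np.quantile(t_stats, [1 - alpha / 2, alpha / 2])
    return mean - lo_q * se, mean - hi_q * se

lo, hi = bootstrap_t_ci(errors)
print(f"95% bootstrap-t CI for the mean error: ({lo:.2f}, {hi:.2f})")
```

The instability of the lower limit noted in the abstract stems from resamples that draw few or no nonzero errors, which makes the studentized statistic's tail quantiles erratic.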
225

AUC estimation under various survival models

Unknown Date (has links)
In medical science, the receiver operating characteristic (ROC) curve is a graphical representation used to evaluate the accuracy of a medical diagnostic test at any cut-off point. The area under the ROC curve (AUC) is an overall performance measure for a diagnostic test. There are two parts in this dissertation. In the first part, we study the properties of bi-Exponentiated Weibull models. First, we derive a general moment formula for single Exponentiated Weibull models. We then derive the precise formula for the AUC and study its maximum likelihood estimation (MLE). Finally, we obtain the asymptotic distribution of the estimated AUC. Simulation studies are used to check the performance of the MLE of the AUC under moderate sample sizes. The second part of the dissertation studies the estimation of the AUC under the crossing model, which extends the AUC formula in Gonen and Heller (2007). / by Fazhe Chang. / Thesis (Ph.D.)--Florida Atlantic University, 2012. / Includes bibliography. / Mode of access: World Wide Web. / System requirements: Adobe Reader.
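The AUC has a Mann-Whitney interpretation — the probability that a diseased subject's score exceeds a healthy subject's — which a short sketch can illustrate. The Weibull scale model and all parameter values below are assumptions for illustration; the dissertation's bi-Exponentiated Weibull setting is more general:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical diagnostic scores under a Weibull scale model:
# same shape (1.5), scale 1 for healthy subjects vs. scale 2 for diseased.
healthy = rng.weibull(1.5, 300)
diseased = rng.weibull(1.5, 300) * 2.0

def empirical_auc(neg, pos):
    """Empirical AUC in Mann-Whitney form: the fraction of (diseased,
    healthy) pairs where the diseased score is larger, ties counted half."""
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

auc = empirical_auc(healthy, diseased)
print(f"empirical AUC ~ {auc:.3f}")
```

For this scale model the true AUC is 2^1.5 / (1 + 2^1.5) ≈ 0.739, so the empirical value should land nearby; a parametric MLE plugs estimated parameters into the closed-form AUC formula instead.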
226

Latent Variable Modeling and Statistical Learning

Chen, Yunxiao January 2016 (has links)
Latent variable models, which attempt to uncover the underlying structure of responses to test items, play an important role in psychological and educational measurement. This thesis focuses on the development of statistical learning methods based on latent variable models, with applications to psychological and educational assessments. In that connection, the following problems are considered. The first problem arises from a key assumption in latent variable modeling, namely the local independence assumption, which states that given an individual's latent variable (vector), his/her responses to items are independent. This assumption is likely violated in practice, as many other factors, such as item wording and question order, may exert additional influence on the item responses. Any exploratory analysis that relies on this assumption may result in choosing too many nuisance latent factors that can neither be stably estimated nor reasonably interpreted. To address this issue, a family of models is proposed that relaxes the local independence assumption by combining latent factor modeling and graphical modeling. Under this framework, the latent variables capture the across-the-board dependence among the item responses, while a second graphical structure characterizes the local dependence. In addition, the number of latent factors and the sparse graphical structure are both unknown and are learned from the data with a statistically solid and computationally efficient method. The second problem is to learn the relationship between items and latent variables, a structure that is central to multidimensional measurement. In psychological and educational assessments, this relationship is typically specified by experts when items are written and is incorporated into the model without further verification after data collection.
Such a non-empirical approach may lead to model misspecification and a substantial lack of model fit, resulting in erroneous interpretation of assessment results. Motivated by this, I consider learning the item-latent variable relationship from data. It is formulated as a latent variable selection problem, for which theoretical analysis and a computationally efficient algorithm are provided.
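The latent variable selection idea can be illustrated with a deliberately simple linear-factor sketch (the thesis works with item response models, and the design, loadings, and threshold below are invented): recover loadings from data, then keep only the item-factor links with non-negligible loadings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical true structure, unknown to the analyst: items 0-4 load on
# factor 1 and items 5-9 on factor 2, with distinct loading strengths.
n = 2000
loadings = np.zeros((10, 2))
loadings[:5, 0] = 1.0
loadings[5:, 1] = 0.8
factors = rng.normal(size=(n, 2))
responses = factors @ loadings.T + rng.normal(scale=0.5, size=(n, 10))

# Estimate loadings from the top eigenvectors of the sample covariance,
# then "select" item-latent links by zeroing out small loadings.
eigvals, eigvecs = np.linalg.eigh(np.cov(responses, rowvar=False))
est = eigvecs[:, -2:] * np.sqrt(eigvals[-2:])
est[np.abs(est) < 0.5] = 0.0

links = (est != 0).sum(axis=1)   # number of factors linked to each item
print(links)
```

A thresholded eigen-decomposition stands in here for the penalized-likelihood selection the thesis develops; the point is only that the item-factor structure is recovered from data rather than fixed a priori by experts.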
227

Advances in Credit Risk Modeling

Neuberg, Richard January 2017 (has links)
Following the recent financial crisis, financial regulators have placed a strong emphasis on reducing expectations of government support for banks, and on better managing and assessing risks in the banking system. This thesis considers three current topics in credit risk and the statistical problems that arise there. The first of these topics is expectations of government support in distressed banks. We utilize unique features of the European credit default swap market to find that market expectations of European government support for distressed banks have decreased -- an important development in the credibility of financial reforms. The second topic we treat is the estimation of covariance matrices from the perspective of market risk management. This problem arises, for example, in the central clearing of credit default swaps. We propose several specialized loss functions, and a simple but effective visualization tool to assess estimators. We find that proper regularization significantly improves the performance of dynamic covariance models in estimating portfolio variance. The third topic we consider is estimation risk in the pricing of financial products. When parameters are not known with certainty, a better informed counterparty may strategically pick mispriced products. We discuss how total estimation risk can be minimized approximately. We show how a premium for remaining estimation risk may be determined when one counterparty is better informed than the other, but a market collapse is to be avoided, using a simple example from loan pricing. We illustrate the approach with credit bureau data.
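The covariance-regularization point in the second topic can be sketched in a small simulation. Everything here is an assumed toy setup (dimensions, the equicorrelated true covariance, and a fixed shrinkage weight rather than a data-driven rule):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical setting: 50 assets but only 60 return observations, so the
# sample covariance is ill-conditioned for portfolio construction.
p, n = 50, 60
true_cov = 0.3 * np.ones((p, p)) + 0.7 * np.eye(p)
returns = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
sample = np.cov(returns, rowvar=False)

# Regularize by shrinking toward a scaled identity (fixed weight for
# illustration; Ledoit-Wolf-style rules would pick it from the data).
target = np.trace(sample) / p * np.eye(p)
shrunk = 0.3 * target + 0.7 * sample

def min_var_weights(cov):
    """Weights of the minimum-variance portfolio implied by `cov`."""
    w = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return w / w.sum()

# Evaluate each estimator by the TRUE variance its weights achieve.
var_sample = min_var_weights(sample) @ true_cov @ min_var_weights(sample)
var_shrunk = min_var_weights(shrunk) @ true_cov @ min_var_weights(shrunk)
print(f"sample: {var_sample:.3f}  shrunk: {var_shrunk:.3f}")
```

The near-singular sample covariance produces extreme, overfit weights; shrinkage tames them, which is the sense in which "proper regularization significantly improves the performance" of covariance estimators for portfolio variance.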
228

Persistence and heterogeneity in habitat selection studies

Usner, Dale Wesley 16 May 2000 (has links)
Recently, the independent multinomial selections model (IMS) with the multinomial logit link has been suggested as an analysis tool for radio-telemetry habitat selection data. This model assumes independence between animals, independence between sightings within an animal, and identical multinomial habitat selection probabilities for all animals. We propose two generalizations to the IMS model. The first generalization is to allow a Markov chain dependence between consecutive sightings of the same animal. This generalization allows for both positive correlation (individuals persisting in the same habitat class in which they were previously sighted) and negative correlation (individuals vacating the habitat class in which they were previously sighted). The second generalization is to allow for heterogeneity. Here, a hierarchical Dirichlet-multinomial distribution is used to allow for variability in selection probabilities between animals. This generalization accounts for over-dispersion of selection probabilities and allows for inference to the population of animals, assuming that the animals studied constitute a random sample from that population. Both generalizations are one-parameter extensions to the multinomial logit model and allow for testing the assumptions of identical multinomial selection probabilities and independence. These tests are performed using the score, Wald, and asymptotic likelihood ratio statistics. Estimates of model parameters are obtained using maximum likelihood techniques, and habitat characteristics are tested using drop-in-deviance statistics. Using example data, we show that persistence and heterogeneity exist in habitat selection data and illustrate the difference in analysis results between the IMS model and the persistence and heterogeneity models.
Through simulation, we show that analyzing persistence data assuming independence between sightings within an animal gives liberal tests of significance for habitat characteristics when the data are generated with positive correlation and conservative tests of significance when the data are generated with negative correlation. Similarly, we show that analyzing heterogeneous data, assuming identical multinomial selection probabilities, gives liberal tests of significance for habitat characteristics. / Graduation date: 2001
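The over-dispersion that the Dirichlet-multinomial generalization captures is easy to see in simulation. The habitat probabilities, concentration parameter, and sample sizes below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical study: 100 animals, 30 sightings each, 3 habitat classes.
p = np.array([0.5, 0.3, 0.2])
n_sightings, n_animals = 30, 100

# IMS model: identical multinomial selection probabilities for every animal.
ims_counts = rng.multinomial(n_sightings, p, size=n_animals)

# Heterogeneity model: each animal draws its own selection probabilities
# from a Dirichlet with mean p; a small concentration (alpha0) means large
# between-animal variability.
alpha0 = 5.0
probs = rng.dirichlet(alpha0 * p, size=n_animals)
dm_counts = np.array([rng.multinomial(n_sightings, q) for q in probs])

# Over-dispersion: counts for the first habitat class vary far more
# between animals under the Dirichlet-multinomial model.
print(ims_counts[:, 0].var(), dm_counts[:, 0].var())
```

An analysis that wrongly assumes the IMS model would treat this extra between-animal variance as if it were multinomial sampling noise, which is why the abstract finds liberal tests of significance under heterogeneity.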
229

New ab initio methods of small genome sequence interpretation

Mills, Ryan Edward 07 April 2006 (has links)
This thesis presents novel methods for analyzing short viral sequences and identifying biologically significant regions based on their statistical properties. The first section of this thesis describes the ab initio method for identifying genes in viral genomes of varying type, shape and size. This method uses statistical models of the viral protein-coding and non-coding regions. We have created an interactive database summarizing the results of the application of this method to viral genomes currently available in GenBank. This database, called VIOLIN, provides access to the genes identified for each viral genome, allows for further analysis of these gene sequences and the translated proteins, and displays graphically the distribution of protein-coding potential in a viral genome. The next two sections of this thesis describe individual projects for two specific viral genomes analyzed with the new method. The first project was devoted to the recently sequenced Herpes B virus from Rhesus macaque. This genome was initially thought to lack an ortholog of the gamma-34.5 gene encoding a neurovirulence factor necessary for viability of its two close relatives, human herpes simplex viruses 1 and 2. The genome of Rhesus macaque Herpes B virus was annotated using the new gene finding procedure, and an in-depth analysis was conducted to find a gamma-34.5 ortholog using a variety of tools for a similarity search. A profound similarity in codon usage between B virus and its host was also identified, despite the large difference in their GC contents (74% and 51%, respectively). The last thesis section describes the analysis of the Mouse Cytomegalovirus (MCMV) genome by a combination of methods including sequence segmentation, gene finding and protein identification by mass spectrometry. The MCMV genome is a challenging subject for statistical sequence analysis due to the heterogeneity of its protein coding regions.
Therefore the MCMV genome was segmented based on its nucleotide composition and then each segment was considered independently. A thorough analysis was conducted to identify previously unnoticed genes, incorrectly annotated genes and potential sequence errors causing frameshifts. All the findings were then corroborated by the mass spectrometry analysis.
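At its simplest, ab initio gene finding begins by enumerating candidate open reading frames before any statistical coding-potential model is applied. The following is a minimal sketch of that first step on a toy sequence (the real method scores candidates with protein-coding statistics, which this omits):

```python
# Hypothetical sketch: scan all three forward reading frames for
# start-to-stop open reading frames of a minimum length.
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=5):
    """Return (start, end) index pairs of ORFs in the three forward frames."""
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i                      # first start codon in frame
            elif codon in STOPS and start is not None:
                if (i - start) // 3 >= min_codons:
                    orfs.append((start, i + 3))
                start = None
    return orfs

toy = "CC" + "ATG" + "GCT" * 6 + "TAA" + "GGATGAA"
print(find_orfs(toy))
```

A full pipeline would also scan the reverse complement and rank the surviving ORFs by coding potential under the statistical models the abstract describes.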
230

Methods for improving the reliability of semiconductor fault detection and diagnosis with principal component analysis

Cherry, Gregory Allan 28 August 2008 (has links)
Not available / text
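Although this record's abstract is unavailable, the title names PCA-based fault detection, for which a standard ingredient is the Hotelling T² statistic in the retained principal subspace. The following is a generic sketch of that technique on simulated data, not a reconstruction of this dissertation's method; all dimensions and thresholds are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical normal-operation data: 10 sensors driven by 3 latent
# process variables plus small measurement noise.
p, n, k = 10, 500, 3
mix = rng.normal(size=(k, p))                 # assumed process structure
normal_data = rng.normal(size=(n, k)) @ mix + rng.normal(scale=0.1, size=(n, p))

# Fit PCA on normal data and keep the top-k principal directions.
mean = normal_data.mean(axis=0)
_, s, vt = np.linalg.svd(normal_data - mean, full_matrices=False)
load = vt[:k].T                               # retained principal directions
var_k = s[:k] ** 2 / (n - 1)                  # variance along each direction

def t2(x):
    """Hotelling T^2 of one sample in the retained PCA subspace."""
    score = (x - mean) @ load
    return float(np.sum(score**2 / var_k))

# Control limit from the empirical 99th percentile of training T^2 values.
limit = np.quantile([t2(row) for row in normal_data], 0.99)
fault = normal_data[0] + 10 * mix[0]          # inject a large process fault
print(t2(fault) > limit)
```

Monitoring schemes of this kind typically pair T² with a residual (Q/SPE) statistic to also catch faults outside the retained subspace.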
