51

Parameter Estimation for the Beta Distribution

Owen, Claire Elayne Bangerter 20 November 2008 (has links) (PDF)
The beta distribution is useful in modeling continuous random variables that lie between 0 and 1, such as proportions and percentages. The beta distribution takes on many different shapes and may be described by two shape parameters, alpha and beta, that can be difficult to estimate. Maximum likelihood and method of moments estimation are possible, though method of moments is much more straightforward. We examine both of these methods here and compare them to three further proposed methods of parameter estimation: 1) a method used in the Program Evaluation and Review Technique (PERT), 2) a modification of the two-sided power distribution (TSP), and 3) a quantile estimator based on the first and third quartiles of the beta distribution. We find the quantile estimator performs as well as the maximum likelihood and method of moments estimators for most beta distributions. The PERT and TSP estimators do well for a smaller subset of beta distributions, though they never outperform the maximum likelihood, method of moments, or quantile estimators. We apply these estimation techniques to two data sets to see how well they approximate real data from Major League Baseball (batting averages) and the U.S. Department of Energy (radiation exposure). We find the maximum likelihood, method of moments, and quantile estimators perform well with batting averages (sample size 160), and the method of moments and quantile estimators perform well with radiation exposure proportions (sample size 20). Maximum likelihood estimators would likely do fine with such a small sample size were it not for the iterative method needed to solve for alpha and beta, which is quite sensitive to starting values. The PERT and TSP estimators perform more poorly in both situations. We conclude that, in addition to maximum likelihood and method of moments estimation, our method of quantile estimation is efficient and accurate in estimating parameters of the beta distribution.
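
As an illustration of the method-of-moments approach the abstract describes as straightforward, the sketch below equates the sample mean and variance to the beta distribution's moments and solves for the shape parameters. This is not the author's code; the simulated sample, seed, and the scipy comparison are assumptions made for the example:

```python
import numpy as np
from scipy import stats

def beta_mom(x):
    """Method-of-moments estimates of the beta shape parameters:
    solve mean = a/(a+b) and variance = ab/((a+b)^2 (a+b+1))."""
    m, v = np.mean(x), np.var(x, ddof=1)
    common = m * (1 - m) / v - 1          # equals a + b
    return m * common, (1 - m) * common   # (alpha_hat, beta_hat)

rng = np.random.default_rng(0)
sample = rng.beta(2.0, 5.0, size=160)     # same size as the batting-average data
print(beta_mom(sample))                   # close to (2, 5)
# Iterative maximum likelihood for comparison, with location/scale fixed
print(stats.beta.fit(sample, floc=0, fscale=1)[:2])
```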
52

Parameter Estimation for the Lognormal Distribution

Ginos, Brenda Faith 13 November 2009 (has links) (PDF)
The lognormal distribution is useful in modeling continuous random variables that are greater than or equal to zero. Example scenarios in which the lognormal distribution is used include, among many others: in medicine, latent periods of infectious diseases; in environmental science, the distribution of particles, chemicals, and organisms in the environment; in linguistics, the number of letters per word and the number of words per sentence; and in economics, age at marriage, farm size, and income. The lognormal distribution is also useful in modeling data that would be considered normally distributed except for being more or less skewed (Limpert, Stahel, and Abbt 2001). Appropriately estimating the parameters of the lognormal distribution is vital for the study of these and other subjects. Depending on the values of its parameters, the lognormal distribution takes on various shapes, including a bell curve similar to the normal distribution. This paper contains a simulation study concerning the effectiveness of various estimators for the parameters of the lognormal distribution. A comparison is made among Maximum Likelihood estimators, Method of Moments estimators, estimators by Serfling (2002), and estimators by Finney (1941). A simulation is conducted to determine which parameter estimators work better across various parameter combinations and sample sizes of the lognormal distribution. We find that the Maximum Likelihood and Finney estimators perform the best overall, with a preference given to Maximum Likelihood over the Finney estimators because of its much greater simplicity. The Method of Moments estimators seem to perform best when σ is less than or equal to one, and the Serfling estimators are quite accurate in estimating μ, but not σ, in all regions studied. Finally, these parameter estimators are applied to a data set counting the number of words in each sentence for various documents, after which each estimator's performance is reviewed. Again, we find that the Maximum Likelihood estimators perform best for the given application, but that Serfling's estimators are preferred when outliers are present.
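
For context on why Maximum Likelihood is preferred here for its simplicity: under the standard parameterization, the lognormal MLEs have closed forms, namely the sample mean and standard deviation of the log-transformed data. A minimal sketch (illustrative only; the parameter values, seed, and sample size are invented for the example):

```python
import numpy as np

def lognormal_mle(x):
    """Closed-form MLEs for the lognormal: the mean and standard
    deviation of log(x); the MLE of sigma uses the 1/n variance."""
    logs = np.log(x)
    return logs.mean(), logs.std(ddof=0)  # (mu_hat, sigma_hat)

rng = np.random.default_rng(1)
sample = rng.lognormal(mean=1.5, sigma=0.75, size=500)
print(lognormal_mle(sample))              # close to (1.5, 0.75)
```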
53

Diagnostics after a Signal from Control Charts in a Normal Process

Lou, Jianying 03 October 2008 (has links)
Control charts are fundamental SPC tools for process monitoring. When a control chart or combination of charts signals, knowing the change point, which distributional parameter changed, and/or the change size helps to identify the cause of the change, remove it, and adjust the process back into control correctly and immediately. In this study, we propose using maximum likelihood (ML) estimation of the current process parameters, together with their ML confidence intervals, after a signal to identify and estimate the changed parameters. The performance of this ML diagnostic procedure is evaluated for several different charts or chart combinations at multiple sample sizes, and compared to the traditional approaches to diagnostics. None of the ML or traditional estimators performs well for all patterns of shifts, but the ML estimator has the best overall performance. The ML confidence interval diagnostics are better overall at determining which parameter has shifted than the traditional diagnostics based on which chart signals. The performance of the generalized likelihood ratio (GLR) chart in shift detection and in ML diagnostics is comparable to the best EWMA chart combination. Because the ML diagnostics follow naturally from a GLR chart, in contrast to the traditional control charts, the study of GLR charts in process monitoring can be deepened further in future work. / Ph. D.
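
To make the ML diagnostic idea concrete, the sketch below estimates the change point for a sustained mean shift in a normal process by maximizing the likelihood over candidate change times. It is a simplified illustration, assuming a known in-control mean and variance and a mean-shift-only model, which is narrower than the joint mean/variance diagnostics studied in the thesis:

```python
import numpy as np

def ml_change_point(x, mu0):
    """ML estimate of the change point for a one-time mean shift in a
    normal process with known in-control mean mu0 and known variance:
    tau maximizes (T - t) * (mean(x[t:]) - mu0)**2 over candidate t."""
    T = len(x)
    crit = [(T - t) * (np.mean(x[t:]) - mu0) ** 2 for t in range(T)]
    tau_hat = int(np.argmax(crit))            # first post-shift index
    shift_hat = np.mean(x[tau_hat:]) - mu0    # ML estimate of the shift size
    return tau_hat, shift_hat

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 1.0, 30),   # in control
                    rng.normal(1.5, 1.0, 10)])  # after the shift
print(ml_change_point(x, mu0=0.0))              # tau near 30, shift near 1.5
```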
54

Positive Selection in Transcription Factor Genes Along the Human Lineage

Nickel, Gabrielle Celeste 06 October 2008 (has links)
No description available.
55

Food For Thought: When Information Optimization Fails to Optimize Utility

Agarwala, Edward K. 03 August 2009 (has links)
No description available.
56

Performance of Recursive Maximum Likelihood Turbo Decoding

Krishnamurthi, Sumitha 03 December 2003 (has links)
No description available.
57

Maximum likelihood estimation of phylogenetic tree with evolutionary parameters

Wang, Qiang 19 May 2004 (has links)
No description available.
58

Coral Paleo-geodesy: Inferring Local Uplift Histories from the Heights and Ages of Coral Terraces

Sui, Weiguang 20 October 2011 (has links)
No description available.
59

Energy-efficient custom integrated circuit design of universal decoders using noise-centric GRAND algorithms

Riaz, Arslan 24 May 2024 (has links)
Whenever data is stored or transmitted, it inevitably encounters noise that can lead to harmful corruption. Communication technologies rely on decoding the data using Error Correcting Codes (ECC), which enable the rectification of noise to retrieve the original message. Maximum Likelihood (ML) decoding has proven to be optimally accurate, but it has not been adopted because its computational complexity has ruled out a feasible implementation. It has been established that ML decoding of arbitrary linear codes is a Nondeterministic Polynomial-time (NP) hard problem. As a result, many code-specific decoders have been developed as approximations of an ML decoder. This code-centric decoding approach leads to hardware implementations that are tightly coupled to a specific code structure. The recently proposed Guessing Random Additive Noise Decoding (GRAND) offers a solution by establishing a noise-centric decoding approach, thereby making it a universal ML decoder. Both the soft-detection and hard-detection variants of GRAND have been shown to be capacity-achieving for any moderate-redundancy arbitrary code. This thesis claims that GRAND can be efficiently implemented in hardware with low complexity while offering significantly higher energy efficiency than state-of-the-art code-centric decoders. In addition to being hardware-friendly, GRAND offers a degree of parallelizability that can be chosen according to the throughput requirement, making it flexible for a wide range of applications. To support this claim, this thesis presents custom-designed, energy-efficient integrated circuits and hardware architectures for the family of GRAND algorithms. The universality of the algorithm is demonstrated through measurements across various codebooks under different channel conditions. Furthermore, we employ the noise-recycling technique in both hard-detection and soft-detection scenarios to improve decoding by exploiting temporal noise correlations. Using the fabricated chips, we demonstrate that employing noise recycling with GRAND significantly reduces energy and latency while providing additional gains in decoding performance. Efficient integrated architectures for GRAND will significantly reduce hardware complexity while future-proofing a device so that it can decode any forthcoming code. The noise-centric decoding approach overcomes the need for code standardization, making it adaptable to a wide range of applications. A single GRAND chip can replace all existing decoders, offering competitive decoding performance while also providing significantly higher energy and area efficiency.
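
The noise-centric idea behind hard-detection GRAND can be stated compactly: test noise patterns from most to least likely (for a binary symmetric channel, in order of increasing Hamming weight), remove each guess from the received word, and stop at the first result that passes the codebook membership check. Below is a toy software sketch, not the hardware architecture described in the thesis; the (7,4) Hamming code and the abandonment threshold are assumptions made for the example:

```python
import itertools
import numpy as np

def grand_decode(y, H, max_weight=3):
    """Hard-detection GRAND: query noise patterns in order of increasing
    Hamming weight until the corrected word satisfies every parity check
    (H @ c == 0 mod 2), i.e. until it lands in the codebook."""
    n = len(y)
    for w in range(max_weight + 1):
        for flips in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(flips)] = 1
            c = (y + e) % 2                 # guess: remove this noise pattern
            if not np.any((H @ c) % 2):     # codebook membership check
                return c, e                 # first hit is the ML codeword
    return None, None                       # abandon: declare an erasure

# A (7,4) Hamming parity-check matrix, used here purely as a test code
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
received = np.zeros(7, dtype=int)           # the all-zero word is a codeword
received[2] ^= 1                            # channel flips one bit
print(grand_decode(received, H))            # recovers the all-zero codeword
```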
60

Classifying Maximum Likelihood Degree for Small Colored Gaussian Graphical Models

Kuhlin, Jacob January 2023 (has links)
The Maximum Likelihood Degree (ML degree) of a statistical model is the number of complex critical points of its likelihood function. In this thesis we study this quantity for colored Gaussian graphical models, classifying the ML degree of colored graphs of order up to three. We do this by calculating the rational function degree of the gradient of the log-likelihood. Moreover, we find that coloring a graph can lower the ML degree. Finally, we calculate solutions to the homaloidal partial differential equation developed by Améndola et al. The code developed for these calculations can be used on graphs of higher order.
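
To make the ML degree concrete: for a colored Gaussian graphical model it counts the complex critical points of the log-likelihood log det K - tr(SK) in the concentration matrix K for generic data S. A toy sympy computation for the smallest colored example, two vertices whose diagonal entries share a color (an assumed illustration, not the thesis code; S is an arbitrary generic choice):

```python
import sympy as sp

x, y = sp.symbols('x y')
# Smallest interesting colored model: 2 vertices, with both diagonal
# entries of the concentration matrix sharing one color (parameter x)
K = sp.Matrix([[x, y], [y, x]])
S = sp.Matrix([[2, 1], [1, 3]])          # a generic sample covariance
L = sp.log(K.det()) - (S * K).trace()    # log-likelihood up to constants
grad = [sp.diff(L, v) for v in (x, y)]
# Clear denominators, solve the polynomial system, discard singular K
eqs = [sp.together(g).as_numer_denom()[0] for g in grad]
sols = [s for s in sp.solve(eqs, [x, y], dict=True) if K.det().subs(s) != 0]
print(len(sols), sols)                   # one critical point for this generic S
```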
