About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

O problema da superdispersão em dados categorizados politômicos nominais em estudos agrários / The problem of overdispersion in nominal polytomous categorical data in agrarian studies

Salvador, Maria Letícia 31 May 2019 (has links)
Variáveis politômicas são comuns em experimentos agronômicos, apresentando natureza nominal ou ordinal. O modelo dos logitos generalizados é uma classe de modelos que pode ser empregada para a análise desses dados. Uma das características deste modelo é a pressuposição de que a variância é uma função conhecida da média e, espera-se, que a variância observada esteja próxima da variância pressuposta pelo modelo assumido. Contudo, quando ela é maior do que a especificada pelo modelo, tem-se o fenômeno da superdispersão. Nesse contexto, o presente trabalho objetivou caracterizar o problema da superdispersão associado a dados nominais em estudos "cross-sectional". Como motivação apresentam-se dois estudos adaptados da área de ciências agrárias relativos à fruticultura e zootecnia, ambos planejados no delineamento inteiramente casualizado. Verifica-se indicativo de superdispersão nos dados dos dois exemplos e como uma alternativa metodológica utilizou-se o modelo Dirichlet-multinomial. Por meio do gráfico de diagnóstico half-normal plot avaliou-se o ajuste do modelo dos logitos generalizados e do Dirichlet-multinomial. Adicionalmente, foi proposta uma extensão do índice de dispersão para os dados politômicos, com performance avaliada sob simulação. O modelo Dirichlet-multinomial mostrou-se adequado para o ajuste aos dados com superdispersão comparativamente ao modelo dos logitos generalizados. Apesar dos resultados satisfatórios obtidos, ressalta-se que este trabalho é uma introdução ao problema. / Polytomous variables, of nominal or ordinal nature, are common in agronomic experiments. The generalized logits model is a class of models that can be used to analyze such data. One of its characteristics is the assumption that the variance is a known function of the mean, and the observed variance is expected to be close to the variance assumed by the model. When it is larger than the variance specified by the model, the phenomenon of overdispersion arises. In this context, the present work aims to characterize the problem of overdispersion associated with nominal data in cross-sectional studies. As motivation, two studies adapted from the agricultural sciences, related to fruit growing and animal science, are presented, both planned under a completely randomized design. Evidence of overdispersion is found in the data of both examples, and the Dirichlet-multinomial model is used as a methodological alternative. The fits of the generalized logits model and of the Dirichlet-multinomial model are assessed by means of the half-normal plot diagnostic. In addition, an extension of the dispersion index to polytomous data is proposed, with its performance evaluated by simulation. Compared with the generalized logits model, the Dirichlet-multinomial model proved adequate for fitting overdispersed data. Despite the satisfactory results obtained, this work is only an introduction to the problem.
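The overdispersion described in this record can be illustrated numerically: when the category probabilities vary between experimental units (drawn here from a Dirichlet distribution, which is exactly what the Dirichlet-multinomial model assumes), the count variances exceed the multinomial benchmark n*p*(1-p). A minimal simulation sketch; the cluster size, probabilities, and precision parameter are illustrative, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(8)
n, trials = 50, 2000                       # cluster size and number of units
p = np.array([0.5, 0.3, 0.2])              # mean category probabilities
mn = rng.multinomial(n, p, size=trials)    # plain multinomial counts
# Dirichlet-multinomial: probabilities vary between units (precision s = 2)
ps = rng.dirichlet(2.0 * p, size=trials)
dm = np.array([rng.multinomial(n, q) for q in ps])
base_var = n * p * (1 - p)                 # variance under the multinomial model
disp_mn = mn.var(axis=0) / base_var        # close to 1: no overdispersion
disp_dm = dm.var(axis=0) / base_var        # far above 1: overdispersion
```

Under the Dirichlet-multinomial, the variance inflation factor is (n + s) / (1 + s), so a small precision s produces dispersion indices far above one even though the mean counts match the multinomial model.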
222

影響產業獲利率因素之探討--以台灣中游石化業為例 / Factors affecting industry profitability: the case of Taiwan's midstream petrochemical industry

趙國卿 Unknown Date (has links)
Building an oligopoly-theoretic model for a small open economy and drawing on empirical data for Taiwan's midstream petrochemical industry from 1989 to 1996 (ROC years 78 to 85), this study estimates two simultaneous equations for industry profitability and industry concentration by the Full Information Maximum Likelihood (FIML) method. The results show that industry concentration has a positive but insignificant effect on industry profitability, and the weighted exchange rate a negative but likewise insignificant effect; tariffs and capacity utilization have significantly positive effects on profitability, while the import ratio and export transportation costs have significantly negative ones. Conversely, industry profitability, the import ratio, and import transportation costs have significantly positive effects on industry concentration, while market size has a significantly negative effect.
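The Full Information Maximum Likelihood estimation of a simultaneous two-equation system mentioned in this record can be sketched on a toy linear model y = By + Gx + u: the log-likelihood carries a Jacobian term log|det(I - B)| that distinguishes FIML from equation-by-equation least squares. A minimal sketch assuming unit-variance Gaussian errors; the coefficients and data are simulated, not the thesis's petrochemical data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(2, n))                  # exogenous regressors x1, x2
U = rng.normal(size=(2, n))                  # unit-variance structural errors
B_true = np.array([[0.0, 0.3], [0.2, 0.0]])  # simultaneous feedback coefficients
G_true = np.array([[1.0, 0.0], [0.0, 1.5]])  # exogenous coefficients
Y = np.linalg.solve(np.eye(2) - B_true, G_true @ X + U)  # reduced form

def negloglik(theta):
    b12, b21, g1, g2 = theta
    B = np.array([[0.0, b12], [b21, 0.0]])
    G = np.array([[g1, 0.0], [0.0, g2]])
    E = (np.eye(2) - B) @ Y - G @ X          # implied structural errors
    _, logdet = np.linalg.slogdet(np.eye(2) - B)
    # FIML log-likelihood: Jacobian term plus Gaussian error term
    return -(n * logdet - 0.5 * np.sum(E ** 2))

res = minimize(negloglik, np.zeros(4), method="BFGS")
```

With 5000 observations the estimates land close to the true coefficients; a full FIML treatment would also estimate the error covariance rather than fixing it at the identity.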
223

Maximum Likelihood Estimation of Hammerstein Models / Maximum Likelihood-metoden för identifiering av Hammersteinmodeller

Sabbagh, Yvonne January 2003 (has links)
<p>In this Master's thesis, Maximum Likelihood-based parametric identification methods for discrete-time SISO Hammerstein models from perturbed observations on both input and output are investigated. </p><p>Hammerstein models, consisting of a static nonlinear block followed by a dynamic linear one, are widely applied to modeling nonlinear dynamic systems, i.e., dynamic systems having a nonlinearity at the input. </p><p>Two identification methods are proposed. The first one assumes a Hammerstein model where the input signal is noise-free and the output signal is perturbed with colored noise. The second, however, assumes white noise added to the input and output of the nonlinearity and to the output of the whole considered Hammerstein model. Both methods operate directly in the time domain and their properties are illustrated by a number of simulated examples. It should be observed that attention is focused on the derivation, numerical calculation, and simulation corresponding to the first identification method mentioned above.</p>
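A common entry point to Hammerstein identification, simpler than the errors-in-variables Maximum Likelihood methods this thesis develops, is the overparametrisation trick: when the static nonlinearity is a polynomial, the model becomes linear in an extended regressor and can be estimated by ordinary least squares. A hedged sketch with a first-order linear block and a quadratic nonlinearity; all coefficients and signals are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000
u = rng.uniform(-1.0, 1.0, N)              # input signal
fu = 1.0 * u + 0.5 * u ** 2                # static nonlinearity f(u)
y = np.zeros(N)
for t in range(1, N):                      # first-order linear dynamic block
    y[t] = 0.5 * y[t - 1] + fu[t - 1] + 0.01 * rng.normal()
# overparametrised linear regression: y(t) on y(t-1), u(t-1), u(t-1)^2
Phi = np.column_stack([y[:-1], u[:-1], u[:-1] ** 2])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
# theta recovers approximately [0.5, 1.0, 0.5]
```

This least-squares route only handles equation-error noise; the thesis's ML formulations are needed once both input and output observations are perturbed.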
224

Tree species classification using support vector machine on hyperspectral images / Trädslagsklassificering med en stödvektormaskin på hyperspektrala bilder

Hedberg, Rikard January 2010 (has links)
<p>For several years, FORAN Remote Sensing in Linköping has been using pulse-intense laser scannings together with multispectral imaging for developing analysis methods in forestry. One area these laser scannings and images are used for is to classify the species of single trees in forests. The species have been divided into pine, spruce and deciduous trees, classified by a Maximum Likelihood classifier. This thesis presents the work done on a more spectrally high-resolution imagery, hyperspectral images. These images are divided into more, and finer graded, spectral components, but demand more signal processing. A new classifier, the Support Vector Machine, is tested against the previously used Maximum Likelihood classifier, to see if it is possible to increase the performance. The classifiers are also set to divide the deciduous trees into aspen, birch, black alder and gray alder. The thesis shows how the new data set is handled and processed for the different classifiers, and shows how a better result can be achieved using a Support Vector Machine.</p>
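A support vector machine of the kind compared in this record can be sketched from first principles: a linear soft-margin SVM trained by stochastic subgradient descent on the hinge loss. The synthetic two-class "spectra" and hyperparameters below are illustrative, not FORAN's data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 10                                  # samples per class, spectral bands
X = np.vstack([rng.normal(0.2, 0.1, (n, d)),    # class -1 reflectances
               rng.normal(0.5, 0.1, (n, d))])   # class +1 reflectances
y = np.hstack([-np.ones(n), np.ones(n)])
w, b = np.zeros(d), 0.0
lam, lr = 1e-3, 0.1                             # regularisation, step size
for _ in range(50):                             # subgradient-descent epochs
    for i in rng.permutation(2 * n):
        if y[i] * (X[i] @ w + b) < 1.0:         # inside margin: hinge active
            w += lr * (y[i] * X[i] - lam * w)
            b += lr * y[i]
        else:                                   # outside margin: shrink only
            w -= lr * lam * w
acc = np.mean(np.sign(X @ w + b) == y)
```

A practical hyperspectral pipeline would use a kernel SVM with a multi-class scheme for the species labels; this sketch only shows the margin-based update that separates the two clusters.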
225

Testing the Hazard Rate, Part I

Liero, Hannelore January 2003 (has links)
We consider a nonparametric survival model with random censoring. To test whether the hazard rate has a parametric form, the unknown hazard rate is estimated by a kernel estimator. Based on a limit theorem stating the asymptotic normality of the quadratic distance of this estimator from the smoothed hypothesis, an asymptotic α-test is proposed. Since the test statistic depends on the maximum likelihood estimator for the unknown parameter in the hypothetical model, properties of this parameter estimator are investigated. Power considerations complete the approach.
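The kernel hazard estimator such a test is built on can be sketched by smoothing the Nelson-Aalen increments (events divided by the risk-set size) with a Gaussian kernel; under random censoring and a constant true hazard, the estimate should hover near the true rate. A minimal sketch in which the sample size, bandwidth, and censoring rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, lam_true = 4000, 1.0
T = rng.exponential(1.0 / lam_true, n)     # true lifetimes, constant hazard 1
C = rng.exponential(2.0, n)                # random censoring times
X = np.minimum(T, C)
delta = (T <= C).astype(float)             # 1 = event observed, 0 = censored
order = np.argsort(X)
X, delta = X[order], delta[order]
at_risk = n - np.arange(n)                 # size of the risk set at each time
jumps = delta / at_risk                    # Nelson-Aalen increments
h = 0.2                                    # kernel bandwidth

def hazard(t):
    """Kernel-smoothed hazard rate estimate at time t."""
    K = np.exp(-0.5 * ((t - X) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return float(np.sum(K * jumps))
```

The test statistic in the record would then compare this nonparametric estimate with the smoothed parametric fit via an integrated quadratic distance.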
226

Estimation of Pareto distribution functions from samples contaminated by measurement errors

Lwando Orbet Kondlo January 2010 (has links)
<p>The intention is to draw more specific connections between certain deconvolution methods and also to demonstrate the application of the statistical theory of estimation in the presence of measurement error. A parametric methodology for deconvolution when the underlying distribution is of the Pareto form is developed. Maximum likelihood estimation (MLE) of the parameters of the convolved distributions is considered. Standard errors of the estimated parameters are calculated from the inverse Fisher's information matrix and a jackknife method. Probability-probability (P-P) plots and Kolmogorov-Smirnov (K-S) goodness-of-fit tests are used to evaluate the fit of the posited distribution. A bootstrapping method is used to calculate the critical values of the K-S test statistic, which are not available.</p>
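Setting the measurement-error (deconvolution) part aside, the MLE step for an uncontaminated Pareto sample has a closed form, and a K-S statistic can be computed against the fitted CDF. A minimal sketch with illustrative parameter values; note that K-S p-values are only approximate when the parameters are estimated from the same data, which is one motivation for the bootstrap critical values the record mentions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
xm, alpha = 1.0, 2.5
x = xm * (1.0 + rng.pareto(alpha, 3000))        # classical Pareto(alpha, xm) sample
xm_hat = x.min()                                 # MLE of the scale parameter
alpha_hat = x.size / np.log(x / xm_hat).sum()    # MLE of the shape parameter
# K-S goodness-of-fit test against the fitted Pareto CDF
ks = stats.kstest(x, lambda t: 1.0 - (xm_hat / t) ** alpha_hat)
```

The thesis's contribution is the harder problem of recovering these parameters when the sample is further contaminated by additive measurement error, which this sketch omits.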
227

Knowledge-based speech enhancement

Srinivasan, Sriram January 2005 (has links)
Speech is a fundamental means of human communication. In the last several decades, much effort has been devoted to the efficient transmission and storage of speech signals. With advances in technology making mobile communication ubiquitous, communication anywhere has become a reality. The freedom and flexibility offered by mobile technology bring with them new challenges, one of which is robustness to acoustic background noise. Speech enhancement systems form a vital front-end for mobile telephony in noisy environments such as cars, cafeterias, and subway stations, for hearing aids, and for improving the performance of speech recognition systems. In this thesis, which consists of four research articles, we discuss both single- and multi-microphone approaches to speech enhancement. The main contribution of this thesis is a framework to exploit available prior knowledge about both speech and noise. The physiology of speech production places a constraint on the possible shapes of the speech spectral envelope, and this information is captured using codebooks of speech linear predictive (LP) coefficients obtained from a large training database. Similarly, information about commonly occurring noise types is captured using a set of noise codebooks, which can be combined with sound environment classification to treat different environments differently. In paper A, we introduce maximum-likelihood estimation of the speech and noise LP parameters using the codebooks. The codebooks capture only the spectral shape. The speech and noise gain factors are obtained through a frame-by-frame optimization, providing good performance in practical nonstationary noise environments. The estimated parameters are subsequently used in a Wiener filter. Paper B describes Bayesian minimum mean squared error estimation of the speech and noise LP parameters and functions thereof, while retaining the instantaneous gain computation. Both memoryless and memory-based estimators are derived.
While papers A and B describe single-channel techniques, paper C describes a multi-channel Bayesian speech enhancement approach, where, in addition to temporal processing, the spatial diversity provided by multiple microphones is also exploited. In paper D, we introduce a multi-channel noise reduction technique motivated by blind source separation (BSS) concepts. In contrast to standard BSS approaches, we use the knowledge that one of the signals is speech and that the other is noise, and exploit their different characteristics. / QC 20100929
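The Wiener filter into which the estimated parameters feed (paper A) applies, per frequency bin, the gain G = S / (S + N) built from the speech and noise power spectra. A minimal oracle sketch, using the true spectra of a synthetic tone-plus-noise signal rather than codebook-estimated ones, shows the resulting SNR gain:

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 8000
t = np.arange(fs) / fs                        # one second of signal
clean = np.sin(2 * np.pi * 440 * t)           # stand-in for "speech"
noise = 0.5 * rng.normal(size=t.size)
noisy = clean + noise
S = np.abs(np.fft.rfft(clean)) ** 2           # oracle speech power spectrum
N = np.abs(np.fft.rfft(noise)) ** 2           # oracle noise power spectrum
G = S / (S + N)                               # Wiener gain per frequency bin
enhanced = np.fft.irfft(G * np.fft.rfft(noisy), n=t.size)

def snr(ref, est):
    """Output SNR in dB of an estimate against the clean reference."""
    return 10 * np.log10(np.mean(ref ** 2) / np.mean((est - ref) ** 2))

snr_in, snr_out = snr(clean, noisy), snr(clean, enhanced)
```

The thesis's point is precisely that these oracle spectra are unavailable in practice and must be estimated, which is what the LP codebooks and the frame-by-frame gain optimization provide.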
229

Localization of Dynamic Acoustic Sources with a Maneuverable Array

Rogers, Jeffrey S. January 2010 (has links)
<p>This thesis addresses the problem of source localization and time-varying spatial spectrum estimation with maneuverable arrays. Two applications, each having different environmental assumptions and array geometries, are considered: 1) passive broadband source localization with a rigid 2-sensor array in a shallow water, multipath environment and 2) time-varying spatial spectrum estimation with a large, flexible towed array. Although both applications differ, the processing scheme associated with each is designed to exploit array maneuverability for improved localization and detection performance.</p><p>In the first application considered, passive broadband source localization is accomplished via time delay estimation (TDE). Conventional TDE methods, such as the generalized cross-correlation (GCC) method, make the assumption of a direct-path signal model and thus suffer localization performance loss in shallow water, multipath environments. Correlated multipath returns can result in spurious peaks in GCC outputs resulting in large bearing estimate errors. A new algorithm that exploits array maneuverability is presented here. The multiple orientation geometric averaging (MOGA) technique geometrically averages cross-correlation outputs to obtain a multipath-robust TDE. A broadband multipath simulation is presented and results indicate that the MOGA effectively suppresses correlated multipath returns in the TDE.</p><p>The second application addresses the problem of field directionality mapping (FDM) or spatial spectrum estimation in dynamic environments with a maneuverable towed acoustic array. Array processing algorithms for towed arrays are typically designed assuming the array is straight, and are thus degraded during tow ship maneuvers. In this thesis, maneuvering the array is treated as a feature allowing for left and right disambiguation as well as improved resolution towards endfire. 
The Cramér-Rao lower bound is used to motivate the improvement in source localization which can theoretically be achieved by exploiting array maneuverability. Two methods for estimating time-varying field directionality with a maneuvering array are presented: 1) maximum likelihood (ML) estimation solved using the expectation maximization (EM) algorithm and 2) a non-negative least squares (NNLS) approach. The NNLS method is designed to compute the field directionality from beamformed power outputs, while the ML algorithm uses raw sensor data. A multi-source simulation is used to illustrate both proposed algorithms' ability to suppress ambiguous towed-array backlobes and to resolve closely spaced interferers near endfire, which pose challenges for conventional beamforming approaches, especially during array maneuvers. Receiver operating characteristics (ROCs) are presented to evaluate the algorithms' detection performance versus SNR. Results indicate that both FDM algorithms offer the potential to provide superior detection performance in the presence of noise and interfering backlobes when compared to conventional beamforming with a maneuverable array.</p> / Dissertation
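The NNLS route to field directionality mapping can be sketched for a static straight array (no maneuvering, unlike the thesis): beam powers are linear in the per-direction source powers, bp = K f, and a non-negative least squares solve recovers a sparse, nonnegative field directionality. The array size, angle grid, and source directions below are illustrative:

```python
import numpy as np
from scipy.optimize import nnls

M = 16                                        # sensors, half-wavelength spacing
angles = np.arange(-90, 91, 5, dtype=float)   # candidate source directions (deg)

def steer(theta):
    """Plane-wave steering vector for a uniform linear array."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.radians(theta)))

A = np.array([steer(a) for a in angles])
true_dirs = [-30.0, 25.0]
# covariance of two unit-power uncorrelated sources plus weak sensor noise
R = sum(np.outer(steer(d), steer(d).conj()) for d in true_dirs) + 0.01 * np.eye(M)
bp = np.real(np.einsum('im,mn,in->i', A.conj(), R, A)) / M ** 2   # beam powers
# beam power from a unit source at direction j, seen at look direction i
K = np.abs(A.conj() @ A.T) ** 2 / M ** 2
f_hat, _ = nnls(K, bp)
top2 = angles[np.argsort(f_hat)[-2:]]         # two strongest recovered directions
```

The towed-array setting in the record adds the time-varying array shape to the steering vectors, which is what breaks the left/right backlobe ambiguity during maneuvers.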
230

Exponential Smoothing for Forecasting and Bayesian Validation of Computer Models

Wang, Shuchun 22 August 2006 (has links)
Despite their success and widespread usage in industry and business, exponential smoothing (ES) methods have received little attention from the statistical community. We investigate three types of statistical models that have been found to underpin ES methods: ARIMA models, state space models with multiple sources of error (MSOE), and state space models with a single source of error (SSOE). We establish the relationship among the three classes of models and conclude that the class of SSOE state space models is broader than the other two and provides a formal statistical foundation for ES methods. To better understand ES methods, we investigate their behavior for time series generated from different processes, focusing mainly on time series of ARIMA type. ES methods forecast a time series using only the series' own history. To include covariates in ES methods for better forecasting, we propose a new method, Exponential Smoothing with Covariates (ESCov). ESCov uses an ES method to model what is left unexplained in a time series by the covariates. We establish the optimality of ESCov, identify the SSOE state space models underlying it, and derive analytically the variances of its forecasts. Empirical studies show that ESCov outperforms ES methods and regression with ARIMA errors. We suggest a model selection procedure for choosing appropriate covariates and ES methods in practice. Computer models have been commonly used to investigate complex systems for which physical experiments are highly expensive or very time-consuming. Before using a computer model, we need to address an important question: "How well does the computer model represent the real system?" The process of addressing this question is called computer model validation and generally involves the comparison of computer outputs and physical observations. In this thesis, we propose a Bayesian approach to computer model validation.
This approach integrates computer outputs and physical observations to give a better prediction of the real system output. This prediction is then used to validate the computer model. We investigate the impacts of several factors on the performance of the proposed approach and propose a generalization of it.
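The simplest member of the ES family discussed in this record is simple exponential smoothing, the recursion f(t) = alpha * y(t-1) + (1 - alpha) * f(t-1); on a local-level series (the SSOE-type data-generating process the thesis connects ES methods to), the smoothing constant can be chosen by minimizing the in-sample one-step-ahead squared error. A minimal sketch with illustrative values:

```python
import numpy as np

def ses(y, alpha):
    """One-step-ahead forecasts from simple exponential smoothing."""
    f = np.empty(len(y))
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1.0 - alpha) * f[t - 1]
    return f

rng = np.random.default_rng(7)
level = np.cumsum(0.1 * rng.normal(size=300))   # slowly drifting level
y = level + 0.5 * rng.normal(size=300)          # local-level (SSOE-type) series
# grid search for the smoothing constant on one-step squared error
alphas = np.linspace(0.01, 0.99, 99)
sse = [np.sum((y[1:] - ses(y, a)[1:]) ** 2) for a in alphas]
best = float(alphas[int(np.argmin(sse))])
```

The ESCov method in the record extends this idea by first regressing on covariates and then applying an ES recursion of this kind to the unexplained remainder.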
