About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Landbird Response to Fine-Scale Habitat Characteristics Within Riparian Forests of the Central California Coast

Melcer, Ronald E., Jr. 01 March 2012
Riparian corridors in California are known to be an important but reduced and degraded resource for landbirds. Despite previous research, the habitat characteristics that correlate with high landbird abundance remain poorly understood; in particular, the scale at which predictive models are useful (fine scale, watershed, sub-region, or region) is ill defined. Here, point-count-based abundance indices for 8 riparian-associated or riparian-obligate species with uniform and high detection probabilities are correlated with biotic and abiotic habitat variables: a sums-of-squares procedure is used to select the top 5 predictive variables for each species, best-fit linear models are selected in an information-theoretic framework, and the relative importance of individual variables is assessed. These analyses identified site and vegetation characteristics that could serve as targets for restoration and conservation efforts within this coastal central California region. The specific characteristics vary somewhat across the 8 species I surveyed, and the characteristics I found important as predictors are distinct from those identified in analyses conducted by others. Therefore, just as we should probably accept regional variation in the composition of riparian avifaunas, we should also probably expect regional variation in the relationship between habitat variables and avian abundance. Important habitat characteristics appear to vary at the fine, watershed, sub-region, and regional scales, reducing the generality of all currently available models.
2

A Monte Carlo Study of Fit Indices in Hierarchical Linear Models

McMurray, Kelly 01 January 2010
In educational research, students often exist in a multilevel social setting: students within classrooms, classrooms nested in schools, schools nested in school districts, districts nested in counties, and counties nested in states. Such data are considered hierarchical, nested, or multilevel because students in the same community share similar experiences that have the potential to influence an outcome. Because students within the same classroom have similar characteristics, observations on these students cannot be treated as independent. To accommodate this nested data structure, multilevel analysis techniques such as hierarchical linear modeling (HLM) can be used. One purpose of HLM is to specify a model that includes appropriate random effects (Guo, 2005). One method for deciding on the inclusion or exclusion of random effects, and for evaluating the goodness of fit of the final model, is to compare models with different random-effects specifications using the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), or the Deviance Information Criterion (DIC), which correct for bias induced by sample size and the number of random effects. AIC, BIC, and DIC are information criteria that measure the statistical fit of a model. No research in the multilevel literature has examined the joint impact of sample size and choice of information criterion. This Monte Carlo simulation compared the influence of sample size on the ability of AIC, BIC, and DIC to select the best model in two-level hierarchical models. Results of this investigation showed that all three information criteria had very low or nonexistent success in choosing the best hierarchical linear model.
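The information criteria compared in this study are simple functions of a model's maximized log-likelihood and its number of parameters. As a rough illustration (not the dissertation's actual simulation design), the sketch below simulates two-level data and compares an intercept-only model with one that acknowledges the grouping, using Gaussian AIC and BIC; all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two-level data: 20 "classrooms" of 30 students each, with a
# classroom-level intercept that induces within-group correlation.
n_groups, n_per = 20, 30
group = np.repeat(np.arange(n_groups), n_per)
u = rng.normal(0.0, 2.0, n_groups)              # group effects
y = 1.0 + u[group] + rng.normal(0.0, 1.0, group.size)

def gaussian_ll(resid):
    """Profile Gaussian log-likelihood given least-squares residuals."""
    n = resid.size
    sigma2 = np.mean(resid ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

def aic_bic(resid, k):
    """AIC = 2k - 2*ll,  BIC = k*ln(n) - 2*ll."""
    n = resid.size
    ll = gaussian_ll(resid)
    return 2 * k - 2 * ll, k * np.log(n) - 2 * ll

# Model 1: ignore the grouping entirely (grand mean only).
resid1 = y - y.mean()
aic1, bic1 = aic_bic(resid1, k=2)               # mean + variance

# Model 2: a separate mean per classroom (a fixed-effects stand-in
# for a random intercept, to keep the sketch dependency-free).
group_means = np.array([y[group == g].mean() for g in range(n_groups)])
resid2 = y - group_means[group]
aic2, bic2 = aic_bic(resid2, k=n_groups + 1)
```

With strong group effects, both criteria decisively prefer the model that respects the hierarchy, despite its larger parameter penalty.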
3

Error Models for Quantum State and Parameter Estimation

Schwarz, Lucia 17 October 2014
Within the field of Quantum Information Processing, we study two subjects: For quantum state tomography, one common assumption is that the experimentalist possesses a stationary source of identical states. We challenge this assumption and propose a method to detect and characterize the drift of nonstationary quantum sources. We distinguish diffusive and systematic drifts and examine how quickly one can determine that a source is drifting. Finally, we give an implementation of this proposed measurement for single photons. For quantum computing, fault-tolerant protocols assume that errors are of certain types. But how do we detect errors of the wrong type? The problem is that for large quantum states, a full state description is impossible to analyze, and so one cannot detect all types of errors. We show through a quantum state estimation example (on up to 25 qubits) how to attack this problem using model selection. We use, in particular, the Akaike Information Criterion. Our example indicates that the number of measurements that one has to perform before noticing errors of the wrong type scales polynomially both with the number of qubits and with the error size. This dissertation includes previously published co-authored material.
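The model-selection idea can be illustrated with a toy calculation (the setup and numbers are invented, not taken from the dissertation): two measurement settings have genuinely different error rates, so a "wrong-type" model with one shared rate underfits, and AIC favors the two-rate model once the data are informative enough:

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)

# Two measurement settings with genuinely different error rates;
# a single-rate error model is the "wrong type" here.
n = 2000                        # shots per setting
p_true = np.array([0.05, 0.12])
k = rng.binomial(n, p_true)     # observed error counts

# Model 1: one shared error rate (1 parameter).
p_hat1 = k.sum() / (2 * n)
ll1 = binom.logpmf(k, n, p_hat1).sum()

# Model 2: a separate rate per setting (2 parameters).
p_hat2 = k / n
ll2 = binom.logpmf(k, n, p_hat2).sum()

aic1 = 2 * 1 - 2 * ll1
aic2 = 2 * 2 - 2 * ll2          # lower AIC wins
```

The extra parameter in model 2 is worth its AIC penalty only when the likelihood gain is real, which is exactly the trade-off the dissertation exploits at scale.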
4

Selection of Predictors and Estimators in Spatial Statistics

Bradley, Jonathan R. 19 September 2013
No description available.
5

Classification of Hate Tweets and Their Reasons using SVM

Tarasova, Natalya January 2016
This study focuses on classifying hate tweets directed at the mobile operators Verizon, AT&T, and Sprint. The main aim is to use the machine-learning algorithm Support Vector Machines (SVM) to classify messages into four categories - Hate, Reason, Explanatory, and Other - in order to identify a hate tweet and its reason. The study resulted in two methods: a "naive" method (the Naive Method, NM) and a more "advanced" method (the Partial Timeline Method, PTM). NM is binary in the sense that it asks: "Does this tweet belong to the class Hate?". PTM asks the same question, but only of a restricted set of tweets, namely those posted within ±30 minutes of the hate tweet. In summary, the results indicate that PTM is more accurate than NM, but it does not consider every tweet on the user's timeline. The choice of method therefore involves a trade-off: PTM offers a more accurate classification, while NM offers a more exhaustive one. / This study focused on finding the hate tweets posted by customers of three mobile operators, Verizon, AT&T, and Sprint, and identifying the reasons for their dissatisfaction. Timelines containing a hate tweet were collected and examined for the presence of an explanation. A machine-learning approach was employed using four categories: Hate, Reason, Explanatory, and Other. The classification was conducted with a one-versus-all approach using the Support Vector Machines algorithm implemented in the LIBSVM tool. The study resulted in two methodologies: the Naive Method (NM) and the Partial Timeline Method (PTM). The Naive Method relied only on a feature space consisting of the most representative words chosen with the Akaike Information Criterion. PTM exploited the fact that the majority of explanations were posted within a one-hour window of the hate tweet. We found that PTM is more accurate than NM. In addition, PTM saves time and memory by analysing fewer tweets, although this implies a trade-off between relevance and completeness.

Opponent: Kristina Wettainen
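A minimal sketch of the classification setup, using scikit-learn's LinearSVC (which trains one-versus-rest by default) as a stand-in for the LIBSVM tool used in the thesis; the tweets and labels below are invented placeholders, not the study's data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented placeholder tweets; the real data came from operator timelines.
texts = [
    "I hate verizon so much worst service ever",
    "att dropped my call again this is awful",
    "because the network is down all day",
    "my bill doubled for no reason",
    "the outage started this morning",
    "sprint store explained the billing change today",
    "what a sunny day in the park",
    "just got a new phone case",
]
labels = ["Hate", "Hate", "Reason", "Reason",
          "Explanatory", "Explanatory", "Other", "Other"]

# TF-IDF word features feed a linear SVM, trained one-versus-rest over
# the four categories used in the study.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)

pred = clf.predict(["verizon is the worst i hate this network"])
```

The PTM restriction would simply filter `texts` to tweets within the ±30-minute window before classification; the classifier itself is unchanged.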
6

Sparse Signal Reconstruction Modeling for MEG Source Localization Using Non-convex Regularizers

Samarasinghe, Kasun M. 19 October 2015
No description available.
7

Um procedimento para seleção de variáveis em modelos lineares generalizados duplos / A procedure for variable selection in double generalized linear models

Cavalaro, Lucas Leite 01 April 2019
Double generalized linear models (DGLM), unlike generalized linear models (GLM), allow the dispersion parameter of the response variable to be modeled as a function of predictor variables, improving the way phenomena are modeled. They are thus a possible option when the assumption of a constant dispersion parameter is not reasonable and the response variable has a distribution belonging to the exponential family. Given our interest in variable selection in this class of models, we studied the two-step variable selection scheme proposed by Bayer and Cribari-Neto (2015) and, based on this method, developed a scheme for variable selection in up to k steps. To assess the performance of our procedure, we carried out Monte Carlo simulation studies in DGLMs. The results indicate that our variable selection procedure generally performs similarly to, or better than, the other methodologies studied, without requiring a large computational cost. We also evaluated the up-to-k-step selection scheme on a real data set and compared it with different regression methods. The results showed that our procedure can also be a good alternative when the interest lies in making predictions. / The double generalized linear models (DGLM), unlike the generalized linear models (GLM), allow the dispersion parameter of the response variable to be fitted as a function of predictor variables, improving the way phenomena are modeled. Thus, they are a possible solution when the assumption of a constant dispersion parameter is unreasonable and the response variable has a distribution belonging to the exponential family. Considering our interest in variable selection in this class of models, we studied the two-step variable selection scheme proposed by Bayer and Cribari-Neto (2015) and, based on this method, developed a scheme to select variables in up to k steps. To check the performance of our procedure, we performed Monte Carlo simulation studies in DGLMs. The results indicate that our procedure for variable selection presents, in general, performance similar or superior to that of the other methods studied, without requiring a large computational cost. We also evaluated the scheme to select variables in up to k steps on a set of real data and compared it with different regression methods. The results showed that our procedure can also be a good alternative when the interest is in making predictions.
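The up-to-k-step idea can be sketched in a much simpler setting than a DGLM: greedy forward selection of mean-model predictors by AIC, stopping after at most k steps or when no candidate improves the criterion. Everything below is an illustrative stand-in for the thesis's procedure, using ordinary least squares rather than a double GLM:

```python
import numpy as np

def ols_aic(X, y):
    """Gaussian AIC of an OLS fit; X already contains the intercept."""
    n = y.size
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = np.mean(resid ** 2)
    ll = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * (X.shape[1] + 1) - 2 * ll        # +1 for the variance

def forward_select(X, y, max_steps):
    """Add, at each step, the predictor that lowers AIC the most,
    for at most max_steps steps; stop early if nothing helps."""
    n, p = X.shape
    chosen = []
    current = np.ones((n, 1))                   # intercept-only start
    best_aic = ols_aic(current, y)
    for _ in range(max_steps):
        scores = {j: ols_aic(np.column_stack([current, X[:, j]]), y)
                  for j in range(p) if j not in chosen}
        j_best = min(scores, key=scores.get)
        if scores[j_best] >= best_aic:
            break                               # no predictor improves AIC
        chosen.append(j_best)
        current = np.column_stack([current, X[:, j_best]])
        best_aic = scores[j_best]
    return chosen

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=300)
selected = forward_select(X, y, max_steps=4)
```

In a DGLM the same loop would score candidates for both the mean and dispersion submodels; the AIC bookkeeping is unchanged.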
8

Habitat Selection by Feral Horses in the Alberta Foothills

Bevan, Tisa L Unknown Date
No description available.
9

GLS detrending, efficient unit root tests and structural change / GLS para eliminar los componentes determinísticos, estadísticos de raíz unitaria eficientes y cambio estructural

Perron, Pierre, Rodríguez, Gabriel 10 April 2018
We extend the class of M-tests for a unit root analyzed by Perron and Ng (1996) and Ng and Perron (1997) to the case where a change in the trend function is allowed to occur at an unknown time. These M(GLS) tests adopt the GLS detrending approach of Dufour and King (1991) and Elliott, Rothenberg and Stock (1996) (ERS). Following Perron (1989), we consider two models: one allowing for a change in slope, and the other for both a change in intercept and slope. We derive the asymptotic distribution of the tests as well as that of the feasible point-optimal tests PT(GLS) suggested by ERS. The asymptotic critical values of the tests are tabulated. We also compute the non-centrality parameter used for the local GLS detrending that permits the tests to have 50% asymptotic power at that value. We show that the M(GLS) and PT(GLS) tests have an asymptotic power function close to the power envelope. An extensive simulation study analyzes size and power in finite samples under various methods of selecting the truncation lag for the autoregressive spectral density estimator. An empirical application is also provided. / We extend the M-type statistics for a unit root analyzed by Perron and Ng (1996) and Ng and Perron (2001) to the case where the change in the trend function is allowed to occur at an unknown point. These statistics (MGLS) adopt the GLS detrending approach developed by Elliott et al. (1996) (ERS), following the results of Dufour and King (1991). Following Perron (1989), we consider two models: one that allows a change in slope, and another that allows both a change in intercept and in slope. We derive the asymptotic distributions as well as the feasible point-optimal statistic (PT GLS) suggested by ERS. We also compute the non-centrality parameter used by the local-to-unity GLS detrending approach that allows the PT GLS statistic to have 50% asymptotic power at that value. The asymptotic critical values of the statistics have been tabulated. We show that the MGLS and PT GLS statistics have an asymptotic power function close to the power envelope. A simulation study analyzes size and power in finite samples under various methods of selecting the truncation lag for estimating the autoregressive spectral density. Finally, an empirical application is also presented.
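The GLS detrending step itself is compact enough to sketch. The following is a minimal illustration of local-to-unity quasi-differencing for the constant-plus-trend case, not the authors' full testing procedure; in particular, the break-in-trend regressors of the paper are omitted, and the conventional c-bar = -13.5 for the linear-trend model is assumed:

```python
import numpy as np

def gls_detrend(y, cbar=-13.5):
    """ERS-style GLS detrending for the constant-plus-trend case.
    cbar = -13.5 is the usual local-to-unity parameter for this model."""
    n = y.size
    abar = 1.0 + cbar / n
    z = np.column_stack([np.ones(n), np.arange(1.0, n + 1)])   # [1, t]
    # Quasi-difference: keep the first observation, then y_t - abar*y_{t-1}.
    yq = np.concatenate(([y[0]], y[1:] - abar * y[:-1]))
    zq = np.vstack((z[0], z[1:] - abar * z[:-1]))
    # OLS on the quasi-differenced data gives the GLS trend coefficients.
    beta, *_ = np.linalg.lstsq(zq, yq, rcond=None)
    return y - z @ beta          # detrended series fed to the M-tests

rng = np.random.default_rng(42)
t = np.arange(1.0, 501)
y = 2.0 + 0.05 * t + np.cumsum(rng.normal(size=500))  # trend + random walk
yd = gls_detrend(y)
```

A useful sanity check is that an exactly linear series is detrended to (numerically) zero, since quasi-differencing is a linear operation that preserves the trend regression exactly.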
10

Essays on Fine Structure of Asset Returns, Jumps, and Stochastic Volatility

Yu, Jung-Suk 22 May 2006
There has been an ongoing debate about the most suitable model among a variety of specifications and parameterizations. The first dissertation essay investigates whether asymmetric leptokurtic return distributions, such as Hansen's (1994) skewed t-distribution combined with GARCH specifications, can outperform mixed GARCH-jump models such as Maheu and McCurdy's (2004) GARJI model, which incorporates an autoregressive conditional jump intensity parameterization in the discrete-time framework. I find that the more parsimonious GJR-HT model is superior to mixed GARCH-jump models. Likelihood-ratio (LR) tests, information criteria such as AIC, SC, and HQ, and Value-at-Risk (VaR) analysis confirm that GJR-HT is one of the most suitable specifications, offering both better fit to the data and parsimony of parameterization. The benefits of estimating GARCH models with asymmetric leptokurtic distributions are more substantial for highly volatile series, such as emerging stock markets, which exhibit a higher degree of non-normality. Furthermore, Hansen's skewed t-distribution also provides an excellent risk-management tool, as evidenced by VaR analysis. The second dissertation essay provides a variety of empirical evidence that stochastic volatility is redundant for S&P 500 index returns when it is combined with infinite-activity pure Lévy jump models, and that stochastic volatility is important for reducing pricing errors for S&P 500 index options regardless of the jump specification. This finding is important because recent studies have shown that stochastic volatility in a continuous-time framework provides an excellent fit for financial asset returns when combined with finite-activity Merton-type compound Poisson jump-diffusion models. The second essay also shows that the stochastic volatility with jumps (SVJ) and extended variance-gamma with stochastic volatility (EVGSV) models perform almost equally well for option pricing, which strongly implies that the type of Lévy jump specification is not an important factor in model performance once stochastic volatility is incorporated. In the second essay, I compute option prices via an improved Fast Fourier Transform (FFT) algorithm using characteristic functions to match arbitrary, equally spaced log-strike grids to each moneyness and maturity of the actual market option prices.
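The pricing step can be illustrated with a small Black-Scholes sanity check: the same damped-call Fourier transform that an FFT algorithm inverts on a log-strike grid can be inverted by plain quadrature at a single strike and compared with the closed form. All parameters below are arbitrary, and this is a sketch of the Carr-Madan transform, not the essay's improved FFT implementation:

```python
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
alpha = 1.5                             # Carr-Madan damping parameter

def phi(u):
    """Characteristic function of ln S_T under Black-Scholes."""
    mu = np.log(S0) + (r - 0.5 * sigma**2) * T
    return np.exp(1j * u * mu - 0.5 * sigma**2 * u**2 * T)

def call_via_transform(k_log, v_max=100.0, n_grid=4000):
    """Invert the damped-call transform at one log-strike by trapezoid rule
    (an FFT would evaluate a whole grid of strikes at once)."""
    v = np.linspace(1e-8, v_max, n_grid)
    psi = np.exp(-r * T) * phi(v - 1j * (alpha + 1.0)) / (
        alpha**2 + alpha - v**2 + 1j * (2.0 * alpha + 1.0) * v)
    f = np.real(np.exp(-1j * v * k_log) * psi)
    dv = v[1] - v[0]
    integral = dv * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoid rule
    return np.exp(-alpha * k_log) / np.pi * integral

price = call_via_transform(np.log(K))

# Closed-form Black-Scholes benchmark for the same call.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
```

Swapping `phi` for a Lévy-jump or stochastic-volatility characteristic function changes nothing else in the inversion, which is why the essay can compare jump specifications within one pricing pipeline.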
