About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Improved modelling in finite-sample and nonlinear frameworks

Lawford, Stephen Derek Charles January 2001 (has links)
No description available.
2

Essays on wage dispersion

Davies, Stuart January 1999 (has links)
No description available.
3

Análise espaço-temporal do uso do habitat pelo boto-cinza (Sotalia guianensis) na Baía de Guanabara, Rio de Janeiro / Spatio-temporal analysis of habitat use by Guiana dolphins (Sotalia guianensis) in Guanabara Bay, Rio de Janeiro

Rafael Ramos de Carvalho 01 February 2013 (has links)
Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Determining home ranges has been an important issue in studies that try to understand the relationship between a species and its environment. Guanabara Bay shelters a resident population of Guiana dolphins (Sotalia guianensis), and this study analyses the habitat use of S. guianensis in Guanabara Bay (RJ) between 2002 and 2012. A total of 204 survey days was analysed and 902 points were selected for the distribution maps. The bay was divided into four sections in which differences in survey effort did not exceed 16%. The kernel density method was used to estimate and interpret the habitat use of Guiana dolphin groups. Core areas were also interpreted using 1.5 km x 1.5 km grid cells and Pianka's niche overlap index. Depths used by S. guianensis did not show significant differences over the study period (p = 0.531). Areas used during 2002/2004 were estimated at 79.4 km², with core areas of 19.4 km². For 2008/2010 and 2010/2012 the areas were estimated at 51.4 and 58.9 km², with core areas of 10.8 and 10.4 km², respectively. The areas used by the dolphin groups covered regions along the main channel of the bay and the northeast of the study area, where the Guapimirim Environmental Protection Area is located. Nevertheless, the population's home range and its core areas decreased gradually over the years, particularly around Paquetá Island and the central-south portion of the main channel. Groups of more than 10 individuals and groups with ≥ 25% calves in their composition showed reductions in habitat use of more than 60%. The population of Guiana dolphins has been decreasing drastically and the individuals interact with sources of disturbance on a daily basis; these are possible causes of the reduction in habitat use in Guanabara Bay. The results are therefore of fundamental importance for the conservation of this population, as they demonstrate the consequences of long-term interaction with a highly impacted coastal environment.
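The record above describes the method only in outline, so here is a minimal Python sketch of the grid-based part: proportional use per 1.5 km x 1.5 km cell, a volume-contour proxy for home range and core area, and Pianka's niche overlap index between two survey periods. The sighting coordinates, the 15 km extent and the contour levels are illustrative assumptions, not the thesis's data or code.

```python
import numpy as np

def grid_utilisation(points, edges):
    """Proportional use p_i: fraction of sightings falling in each grid cell."""
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=[edges, edges])
    return counts.ravel() / counts.sum()

def volume_contour_area(p, cell_area, level=0.95):
    """Area of the smallest set of cells holding `level` of total use --
    a grid proxy for the 95% home range (or 50% core area) contour."""
    order = np.sort(p)[::-1]
    n_cells = np.searchsorted(np.cumsum(order), level) + 1
    return n_cells * cell_area

def pianka_overlap(p, q):
    """Pianka's niche overlap index between two proportional-use vectors."""
    return np.sum(p * q) / np.sqrt(np.sum(p ** 2) * np.sum(q ** 2))

# Hypothetical sighting coordinates (km) for two survey periods.
rng = np.random.default_rng(0)
period_a = rng.normal(loc=[5.0, 8.0], scale=[2.0, 1.5], size=(300, 2))
period_b = rng.normal(loc=[6.0, 7.0], scale=[1.5, 1.5], size=(300, 2))

edges = np.arange(0.0, 15.0 + 1.5, 1.5)          # 1.5 km x 1.5 km cells
p, q = grid_utilisation(period_a, edges), grid_utilisation(period_b, edges)
print(f"home range ~ {volume_contour_area(p, 2.25, 0.95):.1f} km^2")
print(f"core area  ~ {volume_contour_area(p, 2.25, 0.50):.1f} km^2")
print(f"Pianka overlap between periods: {pianka_overlap(p, q):.2f}")
```

A full kernel-density treatment would replace the raw cell counts with a smoothed utilisation distribution before taking the 95% and 50% contours.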
4

Assessing the Impacts of Anthropogenic Drainage Structures on Hydrologic Connectivity Using High-Resolution Digital Elevation Models

Bhadra, Sourav 01 August 2019 (has links)
Stream flowline delineation from high-resolution digital elevation models (HRDEMs) can be problematic because of the fine representation of terrain features as well as anthropogenic drainage structures (e.g., bridges, culverts) within the grid surface. Anthropogenic drainage structures (ADS) may create digital dams when stream flowlines are delineated from HRDEMs. This study assessed the effects of ADS locations, spatial resolution (ranging from 1 m to 10 m), depression processing methods, and flow direction algorithms (D8, D-Infinity, and MFD-md) on hydrologic connectivity through digital dams using HRDEMs in Nebraska. The assessment was based on the offset distances between modeled stream flowlines and the original ADS locations, using kernel density estimation (KDE) and the frequency of ADS samples within given offset distances. Three major depression processing techniques (depression filling, stream breaching, and stream burning) were considered. Finally, an automated method, constrained burning, is proposed for HRDEMs; it uses ancillary datasets to create underneath stream crossings at possible ADS locations and to perform DEM reconditioning. The results suggest that coarser-resolution DEMs with depression filling and breaching can produce better hydrologic connectivity through ADS than finer-resolution DEMs with different flow direction algorithms. It was also found that stream burning with known stream crossings at ADS locations outperformed depression filling and breaching for HRDEMs in terms of hydrologic connectivity. Flow direction algorithms combined with depression filling and breaching do not have significant effects on the hydrologic connectivity of modeled stream flowlines; for stream burning, however, D8 was the best-performing flow direction algorithm in HRDEMs, with statistical significance. Stream flowlines delineated from the HRDEM using the proposed constrained burning method were better than those from depression filling and breaching, and the method has an overall accuracy of 78.82% in detecting possible ADS locations within the study area.
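As a toy illustration of the digital-dam problem discussed above (not the thesis's constrained-burning workflow), the sketch below runs a D8 steepest-descent lookup on a hypothetical 5 x 5 DEM in which an embankment row blocks the channel, then burns the known crossing cell so flow can continue; the 5 m burn depth is an arbitrary constant chosen for the example.

```python
import numpy as np

# Hypothetical 1 m DEM with a road embankment (a "digital dam") crossing a channel.
dem = np.array([
    [5.0, 4.0, 3.0, 4.0, 5.0],
    [5.0, 3.5, 2.5, 3.5, 5.0],
    [6.0, 6.0, 6.0, 6.0, 6.0],   # embankment row blocks downstream flow
    [5.0, 3.0, 2.0, 3.0, 5.0],
    [5.0, 2.5, 1.5, 2.5, 5.0],
])

def burn_stream(dem, crossing_cells, depth=5.0):
    """Stream burning: lower the DEM along known culvert/bridge cells so the
    modelled flowline can pass through the anthropogenic drainage structure."""
    burned = dem.copy()
    for r, c in crossing_cells:
        burned[r, c] -= depth
    return burned

def d8_direction(dem, r, c, cellsize=1.0):
    """Return the (dr, dc) of the steepest-descent D8 neighbour, or None for a pit."""
    best, best_drop = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]:
                dist = cellsize * (2 ** 0.5 if dr and dc else 1.0)
                drop = (dem[r, c] - dem[rr, cc]) / dist
                if drop > best_drop:
                    best, best_drop = (dr, dc), drop
    return best

burned = burn_stream(dem, crossing_cells=[(2, 2)])
print("before burning:", d8_direction(dem, 1, 2))     # None: pit behind the digital dam
print("after burning: ", d8_direction(burned, 1, 2))  # flow continues down the channel
```

Before burning, the channel cell upstream of the embankment has no downslope neighbour; after burning the crossing cell, D8 routes flow through the structure.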
5

Multivariate Analysis of Diverse Data for Improved Geostatistical Reservoir Modeling

Hong, Sahyun 11 1900 (has links)
Improved numerical reservoir models are constructed when all available diverse data sources are accounted for to the maximum extent possible. Integrating diverse data is not a simple problem, because the data differ in precision, in relevance to the primary variables being modeled, in quality, and in their (often nonlinear) relations. Previous approaches rely on a strong Gaussian assumption or on combining source-specific probabilities that are individually calibrated from each data source. This dissertation develops different approaches to integrating diverse earth science data. The first approach is based on probability combination: each data source is calibrated to generate an individual conditional probability, and these are combined by a combination model. Some existing models are reviewed and a combination model with a new weighting scheme is proposed. The weaknesses of probability combination schemes (PCS) are addressed. As an alternative to PCS, this dissertation develops a multivariate analysis technique that models the multivariate distribution without a parametric distribution assumption and without ad hoc probability combination procedures. The method accounts for nonlinear features and different types of data. Once the multivariate distribution is modeled, the marginal distribution constraints are evaluated. A sequential iteration algorithm is proposed for this evaluation: the algorithm compares the marginal distributions extracted from the modeled multivariate distribution with the known marginal distributions and corrects the multivariate distribution. Ultimately, the corrected distribution satisfies all axioms of probability distribution functions as well as the complex features among the given data. The methodology is applied to several problems: (1) integration of continuous data for categorical attribute modeling, (2) integration of continuous data and a discrete geologic map for categorical attribute modeling, and (3) integration of continuous data for continuous attribute modeling. Results are evaluated against defined criteria, such as the fairness of the estimated probability or probability distribution and reasonable reproduction of the input statistics. / Mining Engineering
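The sequential marginal-correction idea can be illustrated with an iterative-proportional-fitting-style update on a small discretised joint table; this is only a sketch of the general principle, not the dissertation's algorithm, and the facies/attribute probabilities below are invented.

```python
import numpy as np

def correct_marginals(joint, target_row, target_col, n_iter=50, tol=1e-10):
    """Sequentially rescale a joint probability table so its marginals match
    known target marginals (an iterative-proportional-fitting style correction)."""
    p = joint / joint.sum()
    for _ in range(n_iter):
        # correct the row marginal
        row = p.sum(axis=1, keepdims=True)
        p = p * (target_row[:, None] / np.where(row > 0, row, 1.0))
        # correct the column marginal
        col = p.sum(axis=0, keepdims=True)
        p = p * (target_col[None, :] / np.where(col > 0, col, 1.0))
        if (np.abs(p.sum(axis=1) - target_row).max() < tol
                and np.abs(p.sum(axis=0) - target_col).max() < tol):
            break
    return p

# Hypothetical joint distribution of a facies indicator vs. a binned seismic attribute.
joint = np.array([[0.20, 0.10, 0.05],
                  [0.05, 0.25, 0.35]])
target_row = np.array([0.4, 0.6])        # known facies proportions
target_col = np.array([0.3, 0.3, 0.4])   # known attribute histogram

corrected = correct_marginals(joint, target_row, target_col)
print(corrected.round(3), corrected.sum())
```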
6

Modelling Probability Distributions from Data and its Influence on Simulation

Hörmann, Wolfgang, Bayar, Onur January 2000 (has links) (PDF)
Generating random variates as a generalisation of a given sample is an important task for stochastic simulations. The three main methods suggested in the literature are: fitting a standard distribution, constructing an empirical distribution that approximates the cumulative distribution function, and generating variates from the kernel density estimate of the data. The last method is practically unknown in the simulation literature although it is as simple as the other two. A comparison of the theoretical performance of the methods and the results of three small simulation studies show that a variance-corrected version of kernel density estimation performs best and should be used for generating variates directly from a sample. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
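The variance-corrected kernel method recommended here is simple to implement: resample a data point, add Gaussian noise scaled by the bandwidth, and shrink the result towards the sample mean so the generated variance matches the sample variance. The sketch below assumes Silverman's rule-of-thumb bandwidth and synthetic gamma data; both are my choices, not taken from the paper.

```python
import numpy as np

def kde_variates(sample, n, rng=None):
    """Smoothed bootstrap: draw variates from a Gaussian kernel density estimate
    of `sample`, rescaled so the generated variance matches the sample variance."""
    rng = np.random.default_rng(rng)
    x = np.asarray(sample, dtype=float)
    mean, std = x.mean(), x.std(ddof=1)
    # Silverman's rule-of-thumb bandwidth for a Gaussian kernel.
    h = 1.06 * std * len(x) ** (-1 / 5)
    picks = rng.choice(x, size=n, replace=True)
    noisy = picks + h * rng.standard_normal(n)
    # Variance correction: without it the output variance would be s^2 + h^2.
    return mean + (noisy - mean) / np.sqrt(1.0 + (h / std) ** 2)

data = np.random.default_rng(1).gamma(shape=2.0, scale=3.0, size=200)
generated = kde_variates(data, n=10_000, rng=2)
print(f"sample var {data.var(ddof=1):.2f}  generated var {generated.var(ddof=1):.2f}")
```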
7

EMPIRICAL BAYES NONPARAMETRIC DENSITY ESTIMATION OF CROP YIELD DENSITIES: RATING CROP INSURANCE CONTRACTS

Ramadan, Anas 16 September 2011 (has links)
This thesis examines a newly proposed density estimator in order to evaluate its usefulness for government crop insurance programs confronted with the problem of adverse selection. While the Federal Crop Insurance Corporation (FCIC) offers multiple insurance programs, including the Group Risk Plan (GRP), a more accurate method of estimating actuarially fair premium rates is needed in order to eliminate adverse selection. The Empirical Bayes Nonparametric Kernel Density Estimator (EBNKDE) showed a substantial efficiency gain in estimating crop yield densities. The objective of this research was to apply EBNKDE empirically by means of a simulated game in which I assumed the role of a private insurance company, in order to test for profit gains from the greater efficiency and accuracy promised by EBNKDE. Employing EBNKDE as well as parametric and nonparametric methods, insurance premium rates for 97 Illinois counties for the years 1991 to 2010 were estimated using corn yield data from 1955 to 2010 taken from the National Agricultural Statistics Service (NASS). The results revealed a substantial efficiency gain from using EBNKDE as opposed to other estimators such as the Normal, the Weibull, and the standard kernel density estimator (KDE). Further research using yield data for other crops and other states will provide greater insight into EBNKDE and its performance in other situations.
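For orientation, the rating calculation behind such premiums can be sketched as the expected indemnity under an estimated yield density divided by the liability. The snippet uses scipy's plain Gaussian KDE as a stand-in for the EBNKDE studied in the thesis, and the coverage level, price and synthetic county yields are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fair_premium_rate(yields, coverage=0.75, price=1.0, n_grid=2000):
    """Actuarially fair premium rate: expected indemnity divided by liability,
    with the yield density estimated by a plain Gaussian KDE (a stand-in for
    the EBNKDE examined in the thesis)."""
    y = np.asarray(yields, dtype=float)
    guarantee = coverage * y.mean()               # guaranteed yield level
    grid = np.linspace(0.0, y.max() * 1.5, n_grid)
    dx = grid[1] - grid[0]
    density = gaussian_kde(y)(grid)
    density /= density.sum() * dx                 # renormalise on the truncated grid
    indemnity = price * np.clip(guarantee - grid, 0.0, None)
    expected_indemnity = (indemnity * density).sum() * dx
    return expected_indemnity / (price * guarantee)

# Hypothetical detrended county corn yields (bu/acre), 1955-2010.
hist = np.random.default_rng(3).normal(loc=150.0, scale=25.0, size=56)
print(f"fair premium rate at 75% coverage: {fair_premium_rate(hist):.3%}")
```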
8

Multivariate Analysis of Diverse Data for Improved Geostatistical Reservoir Modeling

Hong, Sahyun Unknown Date
No description available.
9

Applications of Copulas to Analysis of Efficiency of Weather Derivatives as Primary Crop Insurance Instruments

Filonov, Vitaly 2011 August 1900 (has links)
Numerous authors note the failure of private insurance markets to provide affordable and comprehensive crop insurance. Economic logic suggests that index contracts may have advantages over traditional (farm-based) crop insurance. It is also well known that weather is an important production factor and, at the same time, one of the greatest sources of risk in agriculture. Hence, crop insurance contracts based on weather indexes might be a reasonable way to mitigate the problems associated with traditional crop insurance products and possibly lower the cost of insurance for end users. Although the market for weather derivatives was the fastest-growing derivatives market in the USA before the financial crisis of 2008-09, agricultural producers did not express much interest in applying weather derivatives to the management of their systematic risk. There are several reasons for this, but the most important is high basis risk, which has two major components: technological basis risk (the goodness of fit between yield and the weather index) and geographical basis risk. The majority of researchers focus either on the pricing of weather derivatives or on mitigating geographical basis risk, while the number of papers studying ways to decrease the technological basis is quite limited, and they always assume a linear dependency between yields and weather variables when estimating the risk-reducing efficiency of weather contracts, which is a large deviation from reality. The objective of this study is to estimate the risk-reducing efficiency of crop insurance contracts based on weather derivatives (indexes) in the state of Texas. The distributions of a representative farmer's profits with the proposed contracts are compared to the distributions of profits without a contract, to demonstrate the risk-mitigating effect of the proposed contracts. Moreover, the study accounts for more complex dependency structures between yields and weather variables by using copulas to construct the joint distribution of yield and weather data. The optimal copula is selected in an out-of-sample efficiency framework. An effort is made to identify the most relevant periods of the year, when weather has the most significant influence on crop yields, for inclusion in the model, and to find the most effective copula for modeling joint weather/yield risk. Results suggest that effective insurance of crop yields in the state of Texas by means of the proposed weather derivatives is possible. Moreover, data-mining techniques allow a more accurate selection of the time periods to be included in the model than the ad hoc procedures previously used in the literature. Finally, the selection of the optimal copula for modeling the joint weather/yield distribution should be crop- and county-specific, while in general the Clayton and Frank copulas of the Archimedean family provide the best out-of-sample results.
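To make the copula approach concrete, the sketch below samples from a Clayton copula (one of the families the study finds effective) by conditional inversion, pushes the uniforms through assumed yield and rainfall marginals, and compares revenue variability with and without a simple rainfall-deficit contract. The marginals, strike, tick size and dependence parameter are illustrative assumptions, not the study's estimates.

```python
import numpy as np
from scipy import stats

def clayton_sample(theta, n, rng=None):
    """Draw (u1, u2) from a Clayton copula via the conditional-inversion method."""
    rng = np.random.default_rng(rng)
    u1 = rng.uniform(size=n)
    w = rng.uniform(size=n)
    u2 = (u1 ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u1, u2

# Hypothetical marginals: county corn yield (bu/acre) and a cumulative rainfall
# index (mm) over a critical growth window; theta sets the lower-tail dependence.
n_sims, theta = 10_000, 2.0
u_yield, u_rain = clayton_sample(theta, n_sims, rng=4)
yields = stats.norm.ppf(u_yield, loc=140.0, scale=30.0)
rain = stats.gamma.ppf(u_rain, a=4.0, scale=60.0)

# Payoff of a simple rainfall-deficit contract and the hedged revenue.
strike, tick = 180.0, 0.5                       # mm, $/mm of deficit
payout = tick * np.clip(strike - rain, 0.0, None)
price = 4.0                                     # $/bu
revenue_unhedged = price * yields
revenue_hedged = revenue_unhedged + payout - payout.mean()   # fair-premium contract
print(f"std unhedged {revenue_unhedged.std():.1f}  hedged {revenue_hedged.std():.1f}")
```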
10

Aplicação do valor de base da frequência fundamental via estatística MVKD em comparação forense de locutor / Applying base value of fundamental frequency via MVKD in forensic speaker comparison

Silva, Ronaldo Rodrigues da 13 December 2016 (has links)
Master's dissertation — Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2016. / Forensic speaker comparison (FSC) is used as a complementary approach to confirm the authorship of a crime. The most widespread methodology in this type of examination is based on perceptual and acoustic analysis. One of the acoustic measures most used in FSC is the fundamental frequency (F0). F0 is robust in low-quality audio and independent of speech content, which makes it an attractive parameter for forensic analysis; in addition, its extraction algorithm has low computational complexity. This work analyses the discriminant power of a long-term measure of the fundamental frequency called the F0 base value, which recent studies have shown to be less affected by variations in content, speaking style, and recording channel, and to require less material to obtain a stable measure than other long-term measures, such as the arithmetic mean and the standard deviation. The gain in discriminant power obtained by combining the F0 base value with other long-term F0 measures commonly used in forensics was evaluated using the Multivariate Kernel Density (MVKD) statistical approach. The experiments used a corpus of recordings of male Brazilian Portuguese speakers, each containing 60 seconds of voiced speech, and achieved an Equal Error Rate (EER) of 13%, outperforming recent research.
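The reported metric, the equal error rate, can be computed with a simple threshold sweep once same-speaker and different-speaker comparison scores are available; the sketch below uses synthetic Gaussian score distributions and does not reproduce the MVKD scoring itself.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """EER: threshold at which the false-rejection rate of same-speaker trials
    equals the false-acceptance rate of different-speaker trials."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    frr = np.array([(genuine_scores < t).mean() for t in thresholds])
    far = np.array([(impostor_scores >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(frr - far))
    return (frr[i] + far[i]) / 2.0

# Hypothetical log-likelihood-ratio scores from comparisons of long-term F0
# measures (base value, mean, standard deviation); values are synthetic.
rng = np.random.default_rng(5)
same_speaker = rng.normal(loc=1.5, scale=1.0, size=500)
diff_speaker = rng.normal(loc=-0.8, scale=1.0, size=5000)
print(f"EER = {equal_error_rate(same_speaker, diff_speaker):.1%}")
```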
