
Precificação de derivativos climáticos no Brasil: uma abordagem estatística alternativa e construção de um algoritmo em R / Pricing weather derivatives in Brazil: a statistical approach and algorithm building using R

Lemos, Gabriel Bruno de 07 February 2014
Many businesses are exposed to weather variability and have few alternatives for mitigating this kind of risk. Over the last twenty years, weather derivative markets have developed, mainly in Canada, the USA and Europe, transferring weather-related risks to investors better able to absorb them, such as insurance companies, reinsurers and investment funds. This study implemented a pricing methodology for such contracts in Brazil, with daily average temperature as the underlying variable. Data from 265 weather stations registered with BDMEP/INMET were used, covering daily observations from 1970 to 2012. While most pricing studies focus on a single location, this study sought a more general solution that would let participants in this new market benchmark price expectations at any point in the country with a weather station. The main challenge of this approach was the gaps in the time series, and a gap-filling methodology based on NCEP/NCAR reanalysis data was developed to address it. Each station was run through the temperature-series analysis and modelling algorithm. A station was classified as a "Success" (36.2% of cases) when the modelling process produced residuals that were white noise, stationary and homoscedastic, and as a "Failure" (63.8%) when at least one of these conditions was violated. Trend was removed with local polynomial regression (LOESS); seasonality was estimated by spectral analysis and fitted with a Fourier series; and serial autocorrelation in the residuals was handled with ARFIMA models, which include a parameter for the long memory of the process.
Spatial analysis of the results suggests a higher "Success" rate for contracts priced in the Center-South of the country, with worse results in the North and Northeast. The gap-filling method should not be applied indiscriminately across the country, since the correlation between the BDMEP/INMET and NCEP/NCAR series is not constant and shows a clear spatial pattern. Contracts were priced by "burning cost", "index modelling" and "daily modelling of average temperature". In the last case, the simulated temperatures showed a slight positive bias relative to the historical data, which can cause large pricing distortions; simulated values should therefore be corrected before pricing. The quality and consistency of weather data are the main threat to the use of weather derivatives in Brazil, especially in the Center-West, where there are few weather stations, and in the Northeast, where the "Success" rate is very low despite a reasonable number of stations.
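The modelling pipeline described above (detrending, Fourier-series seasonality, residual diagnostics) can be sketched on synthetic data. The thesis builds its algorithm in R; what follows is only a minimal Python illustration in which all series and parameters are invented and a simple polynomial fit stands in for LOESS.

```python
import numpy as np

rng = np.random.default_rng(42)
days = np.arange(4 * 365)

# Synthetic daily-mean temperature: weak trend + annual cycle + noise
# (a stand-in for a BDMEP/INMET station series; all values invented)
temp = (22.0 + 0.0005 * days
        + 5.0 * np.sin(2 * np.pi * days / 365.25 - 1.3)
        + rng.normal(0.0, 1.0, days.size))

# 1) Detrend -- the thesis uses LOESS; a first-degree polynomial stands in here
coeffs = np.polynomial.polynomial.polyfit(days, temp, 1)
detrended = temp - np.polynomial.polynomial.polyval(days, coeffs)

# 2) Seasonality: truncated Fourier series fitted by ordinary least squares
K = 2  # number of harmonics (an arbitrary choice, not the thesis value)
X = np.column_stack(
    [np.ones(days.size)]
    + [f(2 * np.pi * k * days / 365.25) for k in range(1, K + 1) for f in (np.sin, np.cos)]
)
beta, *_ = np.linalg.lstsq(X, detrended, rcond=None)
residual = detrended - X @ beta

# 3) Crude white-noise check: lag-1 autocorrelation of the residuals
r1 = np.corrcoef(residual[:-1], residual[1:])[0, 1]
print(f"lag-1 residual autocorrelation: {r1:.3f}")
```

In the thesis the residual diagnostics are formal tests for stationarity, homoscedasticity and autocorrelation (plus ARFIMA modelling of any remaining serial dependence); the lag-1 autocorrelation above is only the simplest stand-in for that step.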

Landscape pattern and blister rust infection in whitebark pine (Pinus albicaulis) at alpine treeline, Northern Rocky Mountains, U.S.A.

Franklin, Lauren Nicole 26 July 2011
Whitebark pine (Pinus albicaulis) is a foundation and keystone species at alpine treelines of the northern Rocky Mountains and is threatened by the white pine blister rust fungus (Cronartium ribicola). The disease affects all five-needled white pines but has caused particularly widespread mortality in whitebark pine. The objectives of this research were: 1) to characterize the landscape structure of the treeline study sites at Divide Mountain in Glacier National Park and at Wyoming Creek in the Beartooth Mountains of Montana, using landscape metrics and fieldwork; 2) to determine the frequency of blister rust infection in whitebark pine and whether landscape pattern is correlated with higher infection rates; and 3) to characterize the climate at alpine treeline. I used both field surveys and subsequent statistical analysis to meet these objectives. Field data collection included detailed surveys of blister rust infection in treeline whitebark pine and characterization of landscape cover type in a combined total of 60 quadrats, positioned at the study sites using a random sampling scheme stratified by aspect. Landscape metrics such as patch area, proximity and contagion were generated with FRAGSTATS software and ArcGIS. Spearman's rank correlation analysis found significant correlations between tree-island patch size, patch perimeter and percent of landscape, and blister rust infection intensity at both study sites. These findings support previous research on the relationship between patch area and blister rust infection rates, and contribute to landscape ecology by identifying which other landscape metrics are significant in invasive disease infection patterns. / Master of Science
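The correlation step described above can be illustrated with a small sketch. Spearman's rho is just the Pearson correlation of ranks, so it needs only NumPy; the patch areas and infection intensities below are invented for illustration, with a monotone relationship built in on purpose.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-quadrat data: tree-island patch area and blister rust
# infection intensity (values invented; 60 quadrats as in the study design)
patch_area = rng.uniform(10.0, 500.0, 60)
infection = 0.02 * patch_area + rng.normal(0.0, 1.5, 60)

def rank(x):
    # ordinal ranks (ties are essentially impossible with continuous draws)
    r = np.empty(x.size)
    r[np.argsort(x)] = np.arange(1, x.size + 1)
    return r

# Spearman's rho: Pearson correlation of the ranked data
rho = np.corrcoef(rank(patch_area), rank(infection))[0, 1]
print(f"Spearman rho = {rho:.2f}")
```

In practice `scipy.stats.spearmanr` would also return a p-value and handle ties by averaging ranks; the manual version above is only the core idea.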

Estimation of average travel speed on a road segment based on weather and road accidents

Höjmark, André, Singh, Vivek January 2023
Research on travel-speed prediction is extensive, but existing work rarely estimates travel speed under non-recurrent events such as car accidents and road maintenance. This research implements a machine learning model to predict the average speed on a road segment with and without road accidents. Such a model could help (1) plan the most efficient route, reducing CO2 emissions and travel time; (2) give drivers stuck in traffic an estimate of when it will open up again; and (3) let the authorities take safety measures if drivers are expected to be stuck for too long. In our work, we reviewed candidate machine learning models for time-series prediction and compared GRU (Gated Recurrent Unit) and LSTM (Long Short-Term Memory) networks on travel-speed data for a road in Sweden provided by the Swedish Transport Administration; we found no major difference in performance between the two algorithms. We also studied the impact of weather, date and accident-related input parameters on the model's predictions: including weather data produced much better results, while the inclusion of road events vaguely hints at an improvement that could not be verified due to the low number of road accidents in our dataset.
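Before a GRU or LSTM can be trained, the speed series must be cut into supervised (look-back window, next value) pairs. The sketch below shows that windowing and a persistence baseline on a synthetic speed series; the actual Swedish Transport Administration data is not reproduced here, and all values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hourly average speeds (km/h) on one segment: a morning-rush
# dip plus noise stands in for the real traffic data.
hours = np.arange(24 * 60)   # 60 days of hourly values
speed = (80.0 - 15.0 * np.exp(-((hours % 24 - 8) ** 2) / 8.0)
         + rng.normal(0.0, 2.0, hours.size))

# Supervised (window -> next value) pairs, the shape fed to a GRU/LSTM
W = 12  # look-back length in hours, an arbitrary choice
X = np.lib.stride_tricks.sliding_window_view(speed[:-1], W)
y = speed[W:]

# Persistence baseline: predict that the next hour equals the last observed one
mae_persistence = np.abs(X[:, -1] - y).mean()
print(f"{X.shape[0]} windows, persistence MAE = {mae_persistence:.2f} km/h")
```

A recurrent model is worth its complexity only if it beats simple baselines like this persistence forecast, which is one way to read the thesis's GRU-versus-LSTM comparison.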

Climatology and firn processes in the lower accumulation area of the Greenland ice sheet

Charalampidis, Charalampos January 2016
The Greenland ice sheet is the largest Northern Hemisphere store of fresh water, and it is responding rapidly to the warming climate. In situ observations document the changing ice sheet properties in the lower accumulation area of Southwest Greenland. Firn densities retrieved in May 2012 at 1840 meters above sea level revealed a 5.5-meter-thick near-surface ice layer formed by the recent increase in melt and refreezing in firn. As a consequence, vertical meltwater percolation in the extreme summer of 2012 was inefficient, resulting in surface runoff. Meltwater percolated and refroze at six meters depth only after the end of the melt season. This prolonged autumn refreezing under the newly accumulated snowpack caused unprecedented firn warming, raising the temperature at ten meters depth by more than four degrees Celsius. Simulations confirm that meltwater reached at most nine meters depth. The refrozen meltwater was estimated at 0.23 meters water equivalent, amounting to 25% of the total 2012 ablation. A surface energy balance model was used to evaluate the seasonal and interannual variability of all surface energy fluxes at that elevation for the years 2009 to 2013. Owing to meltwater at the surface in 2012, the summer-averaged albedo was significantly reduced (0.71 in 2012; typically 0.78). A sensitivity analysis revealed that 71% of the resulting additional solar radiation in 2012 was used for melt, corresponding to 36% of the total 2012 surface lowering. This interplay between melt and firn properties highlights that the lower accumulation area of the Greenland ice sheet will respond rapidly to a warming climate. / Stability and Variations of Arctic Land Ice (SVALI) / Programme for Monitoring of the Greenland Ice Sheet (PROMICE) / Greenland Analogue Project (GAP)
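The albedo sensitivity reported above can be put in back-of-envelope form: a lower summer albedo means more absorbed shortwave radiation, part of which goes into melt. In the sketch below, the two albedos (0.78 versus 0.71) and the 71% melt fraction come from the abstract; the shortwave flux and season length are assumed for illustration only.

```python
# Back-of-envelope albedo sensitivity in the spirit of the study's analysis.
LATENT_HEAT_FUSION = 334_000.0      # J per kg of ice
sw_down = 250.0                     # summer-mean downward shortwave, W m^-2 (assumed)
albedo_typical, albedo_2012 = 0.78, 0.71
season_seconds = 90 * 24 * 3600     # ~90-day melt season (assumed)

extra_absorbed = sw_down * (albedo_typical - albedo_2012)   # W m^-2
melt_fraction = 0.71                # share of the extra energy used for melt
extra_melt_kg = extra_absorbed * melt_fraction * season_seconds / LATENT_HEAT_FUSION
print(f"extra absorption {extra_absorbed:.1f} W m^-2 "
      f"-> ~{extra_melt_kg:.0f} kg m^-2 extra melt")
```

Under these assumed inputs the extra melt is on the order of a few tenths of a meter water equivalent, the same order as the refrozen-meltwater figure in the abstract; the real study computes this from the full surface energy balance, not a single flux.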

Imputação de dados pluviométricos e sua aplicação na modelagem de eventos extremos de seca agrícola / Imputation of rainfall data and its application in modeling extreme events of agricultural drought

Ferrari, Gláucia Tatiana 17 June 2011
This study describes the procedure used to build a continuous database of daily precipitation from weather stations located in the state of Paraná. The database comprises 484 historical series with data from January 1975 to December 2009. Three imputation methods were tested to fill the missing values: nearest neighbour, inverse distance weighting and linear regression. The root mean square error (RMSE) was used to compare the methods, and inverse distance weighting gave the best results. After imputation, the data went through a quality-control process aimed at identifying possible errors, such as identical precipitation on seven consecutive days (not applied to zero-precipitation records) and precipitation values differing significantly from those at neighbouring stations; 1.21% of the precipitation values were replaced in this step. With the continuous database, extreme value theory was used to model the dry spell (the maximum number of consecutive days between January and February with precipitation below 7 mm) critical to the grain-filling stage of soybean in the five main producing mesoregions of Paraná (Center-West, Center-South, North-Central, West and Southwest). By the Kolmogorov-Smirnov test at the 5% significance level, the Gumbel distribution fitted the data of each mesoregion best, and from it were calculated the probability of extreme dry spells longer than 5, 25, 35 and 45 days, the return period of the largest value recorded in each mesoregion, and the return levels for periods of 5, 25, 50 and 75 years.
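Of the three imputation methods compared above, inverse distance weighting performed best; it is also the simplest to sketch. The snippet below shows an IDW estimate at a gap, plus a Gumbel return level of the kind computed for dry spells. The station coordinates, rainfall values and Gumbel parameters are all hypothetical.

```python
import numpy as np

# Inverse-distance-weighted (IDW) imputation of one day's rainfall at a gap,
# from neighbouring stations; coordinates and values are invented.
def idw(target_xy, station_xy, values, power=2.0):
    d = np.linalg.norm(station_xy - target_xy, axis=1)
    if np.any(d == 0):                      # exact co-location: copy that station
        return float(values[np.argmin(d)])
    w = d ** -power
    return float(np.sum(w * values) / np.sum(w))

stations = np.array([[0.0, 1.0], [2.0, 0.0], [1.0, 2.0]])   # hypothetical coordinates
rain_mm = np.array([12.0, 4.0, 8.0])
estimate = idw(np.array([1.0, 1.0]), stations, rain_mm)
print(f"imputed rainfall: {estimate:.2f} mm")

# Gumbel return level for dry-spell length: z_T = mu - beta * ln(-ln(1 - 1/T)),
# with hypothetical location/scale parameters fitted to annual maxima
mu, beta, T = 20.0, 6.0, 25
z_T = mu - beta * np.log(-np.log(1.0 - 1.0 / T))
print(f"{T}-year return level: {z_T:.1f} days")
```

The study additionally compares IDW against nearest-neighbour and linear-regression imputation by RMSE and fits the Gumbel parameters to the observed annual maxima per mesoregion; both steps are omitted here for brevity.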

Pragmatism and Cooperation: Canadian-American Defence Activities in the Arctic, 1945-1951

Kikkert, Peter January 1900
During the early Cold War, as the Soviet threat placed Canada between two hostile superpowers, the Canadian government decided to take steps to ensure that its sovereignty and national interests were not threatened by the Americans in the new strategic environment. This study examines the extent to which the Canadian government actually defended its sovereignty and rights against American intrusions in the early Cold War. At its core is an examination of the government’s policy of gradual acquisition in the Arctic between 1945 and 1951. The thesis explores the relationships that existed at the time, the essence of the negotiations, the state of international law, and the potential costs and benefits of particular Canadian courses of action. It also explains how Canada’s quiet diplomacy allowed it to avoid alienating its chief ally, contribute to continental defence, and strengthen its sovereignty during this period.

Analysis of Long-Term Utah Temperature Trends Using Hilbert-Huang Transforms

Hargis, Brent H 01 June 2014
We analyzed long-term temperature trends in Utah using a relatively recent signal processing method called Empirical Mode Decomposition (EMD). From the available weather records in Utah we selected 52 stations with records longer than 60 years. We analyzed daily minimum and maximum temperatures using EMD, which decomposes non-stationary data (data with a trend) into periodic components and an underlying trend; most decomposition algorithms require stationary data with constant periods, constraints that temperature data do not meet. In addition to the long-term trend, we identified other periodic processes in the data. While the immediate goal of this research is to characterize long-term temperature trends and identify periodic processes and anomalies, these techniques can be applied to any time series. For example, the approach could be used to separate the effects of dams or other regulatory structures on river flow from natural flow, or to characterize the underlying trends, anomalies and periodic fluctuations in other water quality records. If periodic fluctuations can be associated with physical processes, their causes or drivers might be discovered, helping to better understand the system. Separating long-term trends with EMD in this way supports better evaluation of climate-change extremes in nonlinear, nonstationary records. The research also identified several areas in which it could be extended, including reconstruction of data for time periods with missing records.
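EMD extracts intrinsic mode functions (IMFs) by repeatedly "sifting": subtracting the mean of envelopes drawn through the local maxima and minima. A single, simplified sifting pass on a synthetic two-tone signal is sketched below; real EMD iterates each sift to a stopping criterion, uses cubic-spline envelopes, and extracts several IMFs in sequence, so this is only an illustration of the core step.

```python
import numpy as np

# One simplified sifting pass in the spirit of EMD: subtract the mean of
# envelopes drawn through local maxima and minima.
def sift_once(x):
    i = np.arange(1, x.size - 1)
    peaks = i[(x[i] > x[i - 1]) & (x[i] > x[i + 1])]
    troughs = i[(x[i] < x[i - 1]) & (x[i] < x[i + 1])]
    t = np.arange(x.size)
    upper = np.interp(t, peaks, x[peaks])    # linear envelopes for simplicity;
    lower = np.interp(t, troughs, x[troughs])  # cubic splines in real EMD
    return x - (upper + lower) / 2.0

t = np.linspace(0.0, 10.0, 2000)
slow = np.sin(2 * np.pi * 0.3 * t)           # stand-in for a long-term oscillation
fast = 0.4 * np.sin(2 * np.pi * 3.0 * t)     # stand-in for a short-period component
candidate_imf = sift_once(slow + fast)       # should resemble the fast component
```

Subtracting each extracted IMF and sifting the remainder again is what eventually isolates the slow underlying trend the study is after.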
