41 |
ECONOMIC IMPACTS OF THE EXPANSION OF RENEWABLE ENERGY: THE EXPERIENCE AT THE COUNTY AND NATIONAL LEVEL. Alma R Cortes Selva (11249646). 09 August 2021.
This dissertation examines the impact of the expansion of renewable technology at both the national and the local level through distinct essays. At the national level, the first paper analyzes the economic and distributional impacts of climate mitigation policy in the context of a developing country, to understand the interactions between the energy system and the macroeconomic environment. At the local level, the second paper uses the synthetic control method to estimate the county-level effect of utility-scale wind on development indicators for two counties in the U.S.
The first paper assesses the economic and distributional impacts of Nicaragua's commitments to limit future greenhouse gas emissions in the context of the Paris Agreement, known as the Nationally Determined Contributions (NDCs). The analysis relies on two distinct models. The first is a top-down approach based on a single-country computable general equilibrium (CGE) model, known as the Mitigation, Adaptation and New Technologies Applied General Equilibrium (MANAGE) Model. The second is a bottom-up approach based on the Open-Source energy Modeling System (OSeMOSYS), which is a technology-rich energy model. The combined model is calibrated to an updated social accounting matrix for Nicaragua, which disaggregates households into 20 representative types: 10 rural and 10 urban households. For the household disaggregation we have used information from the 2014 Living Standards Measurement Study (LSMS) for Nicaragua. Our analysis focuses on the distributional impacts of meeting the NDCs as well as additional scenarios, in a dynamic framework, as the MANAGE model is a (recursive) dynamic model. The results show that a carbon tax has the greatest potential for reducing emissions, with modest impacts on macroeconomic variables. An expansion of renewable sources in the electricity matrix also leads to a significant reduction in emissions. Only a carbon tax achieves a reduction in emissions consistent with keeping global warming below 2°C. Nicaragua's NDC alone would not achieve the target, and mitigation instruments are needed. An expansion of generation from renewable sources does not lead to a scenario consistent with a 2°C pathway.
The second paper measures the impact of wind generation on county-level outcomes through the use of the Synthetic Control Method (SCM). SCM avoids the pitfalls of other methods, such as input-output models and project-level case studies, that do not provide county-level estimates. We find that the local per capita income effect of utility-scale wind is 6 percent (translating into an increase of $1,511 in per capita income for 2019) for Benton County and 8 percent for White County in Indiana (an increase of $2,100 in per capita income for 2019). The per capita income effect measures the average impact, which includes the gains in rents from capital, land, and labor from wind power in these counties. Moreover, we find that most of the rents from wind power accrue to the owners of capital and labor. Even assuming the lowest projections of electricity prices and the highest reasonable costs, we still find a 10 percent minimum rate of return to capital for both Benton and White counties' wind power generators. Furthermore, we find that there are excess rents that could be taxed and redistributed at the county, state, or federal level without disincentivizing investment in wind power.
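As an illustration of the estimation strategy only (not the dissertation's actual code or data), a minimal synthetic control fit can be sketched in Python: a weighted combination of donor counties is chosen to reproduce the treated county's pre-treatment per capita income, and the post-treatment gap is read as the effect. All county labels, years, and income figures below are hypothetical placeholders.

```python
# Hedged sketch of a synthetic control fit: build a "synthetic Benton County" as a
# convex combination of donor counties that matches pre-treatment per capita income.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
pre_years = np.arange(2000, 2008)            # assumed pre-treatment period
donors = [f"donor_{i}" for i in range(12)]   # untreated comparison counties (placeholders)

# Per capita income (thousands of $): rows = years, columns = donor counties.
X_donors = 25 + rng.normal(0, 2, size=(len(pre_years), len(donors)))
x_treated = 25 + rng.normal(0, 2, size=len(pre_years))   # treated county, pre-treatment

def loss(w):
    # Pre-treatment fit: squared gap between treated county and weighted donors.
    return np.sum((x_treated - X_donors @ w) ** 2)

n = len(donors)
res = minimize(
    loss,
    x0=np.full(n, 1.0 / n),
    bounds=[(0.0, 1.0)] * n,                                      # non-negative weights
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # weights sum to 1
    method="SLSQP",
)
weights = res.x

# The effect in a post-treatment year is the gap between the observed outcome and
# the synthetic control's prediction for that year (all numbers are placeholders).
post_observed = 26.7
post_donors = 25 + rng.normal(0, 2, size=n)
effect = post_observed - post_donors @ weights
print(f"estimated per capita income effect: {effect:.2f} thousand $")
```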
|
42 |
Design in öko-sozialen Transformationsprozessen: Eine explorative Betrachtung seiner Wirkung und Wirkungsmacht / Design in eco-social transformation processes: an exploratory examination of its impact and transformative power. Fineder, Martina; Baedeker, Carolin; Fastenrath, Felix; Kremser, Katja; Liedtke, Christa. 21 January 2025.
DESIGNING GOVERNANCE – POWER AND COMPLICITY:
Introduction
The joint project Transform.NRW
Thematic framing
Explorative discussion of impact potentials
Theories of change
The Good Practice Breakfast format
Initial findings
Good Practice Breakfast No. 1 - Symbioses
Good Practice Breakfast No. 3 - The Outside Inside
Discussion
Conclusion
References / The joint project transform.NRW is developing a hybrid platform through which art, culture, design, and science can enter into exchange with business, politics, and society to implement social-ecological transformations. In this work-in-progress paper, we address how the impact and transformative power of design can be made visible by demonstrating its impact potentials in an open discourse among different groups of actors, as part of our collective curation process. The underlying premise is that design projects have the transformative capacity to reveal structures such as living and production conditions and to transform them into new, sustainable relations and patterns. Specifically, we discuss the process-based questioning of good-practice examples with regard to their impact potentials. For this part of our collective and interdisciplinary curation process, the Good Practice Breakfast format was developed. A combination of theories of change and the Sustainable Development Goals (SDGs) serves as the heuristic framework for classifying the examples. Using two examples, we show that, in combination, the method and the format open up a significant discursive space in which both a dialogical production of knowledge and meaning and shared learning are fostered. The ongoing documentation, as a narrative history and a fund of knowledge, is intended to contribute to developing a shared foundation for transformation and sustainability literacy.
|
43 |
Predição de mudanças conjuntas de artefatos de software com base em informações contextuais / Predicting co-changes of software artifacts based on contextual information. Wiese, Igor Scaliante. 18 March 2016.
O uso de abordagens de predição de mudanças conjuntas auxilia os desenvolvedores a encontrar artefatos que mudam conjuntamente em uma tarefa. No passado, pesquisadores utilizaram análise estrutural para construir modelos de predição. Mais recentemente, têm sido propostas abordagens que utilizam informações históricas e análise textual do código fonte. Apesar dos avanços obtidos, os desenvolvedores de software ainda não usam essas abordagens amplamente, presumidamente por conta do número de falsos positivos. A hipótese desta tese é que informações contextuais obtidas das tarefas, da comunicação dos desenvolvedores e das mudanças dos artefatos descrevem as circunstâncias e condições em que as mudanças conjuntas ocorrem e podem ser utilizadas para realizar a predição de mudanças conjuntas. O objetivo desta tese consiste em avaliar se o uso de informações contextuais melhora a predição de mudanças conjuntas entre dois arquivos em relação às regras de associação, que é uma estratégia frequentemente usada na literatura. Foram construídos modelos de predição específicos para cada par de arquivos, utilizando as informações contextuais em conjunto com o algoritmo de aprendizagem de máquina random forest. Os modelos de predição foram avaliados em 129 versões de 10 projetos de código aberto da Apache Software Foundation. Os resultados obtidos foram comparados com um modelo baseado em regras de associação. Além de avaliar o desempenho dos modelos de predição também foram investigadas a influência do modo de agrupamento dos dados para construção dos conjuntos de treinamento e teste e a relevância das informações contextuais. Os resultados indicam que os modelos baseados em informações contextuais predizem 88% das mudanças corretamente, contra 19% do modelo de regras de associação, indicando uma precisão 3 vezes maior. Os modelos criados com informações contextuais coletadas em cada versão do software apresentaram maior precisão que modelos construídos a partir de um conjunto arbitrário de tarefas. As informações contextuais mais relevantes foram: o número de linhas adicionadas ou modificadas, número de linhas removidas, code churn, que representa a soma das linhas adicionadas, modificadas e removidas durante um commit, número de palavras na descrição da tarefa, número de comentários e papel dos desenvolvedores na discussão, medido pelo valor do índice de intermediação (betweenness) da rede social de comunicação. Os desenvolvedores dos projetos foram consultados para avaliar a importância dos modelos de predição baseados em informações contextuais. Segundo esses desenvolvedores, os resultados obtidos ajudam desenvolvedores novatos no projeto, pois não têm conhecimento da arquitetura e normalmente não estão familiarizados com as mudanças dos artefatos durante a evolução do projeto. Modelos de predição baseados em informações contextuais a partir de mudanças de software são relativamente precisos e, consequentemente, podem ser usados para apoiar os desenvolvedores durante a realização de atividades de manutenção e evolução de software / Co-change prediction aims to make developers aware of which artifacts may change together with the artifact they are working on. In the past, researchers relied on structural analysis to build prediction models. More recently, hybrid approaches relying on historical information and textual analysis have been proposed. Despite the advances in the area, software developers still do not use these approaches widely, presumably because of the number of false recommendations. 
The hypothesis of this thesis is that contextual information of software changes collected from issues, developers' communication, and commit metadata describes the circumstances and conditions under which a co-change occurs and is useful to predict co-changes. The aim of this thesis is to use contextual information to build co-change prediction models that improve the overall accuracy, especially by decreasing the amount of false recommendations. We built predictive models specific to each pair of files using contextual information and the Random Forest machine learning algorithm. The approach was evaluated on 129 versions of 10 open source projects from the Apache Software Foundation. We compared our approach to a baseline model based on association rules, which is often used in the literature. We evaluated the performance of the prediction models, investigating the influence of data aggregation to build training and test sets, as well as the identification of the most relevant contextual information. The results indicate that models based on contextual information can correctly predict 88% of co-change instances, against 19% achieved by the association rules model. This indicates that models based on contextual information can be 3 times more accurate. Models created with contextual information collected in each software version were more accurate than models built from an arbitrary amount of contextual information collected from more than one version. The most important pieces of contextual information for building the prediction models were: number of lines of code added or modified, number of lines of code removed, code churn, number of words in the discussion and description of a task, number of comments, and the role of developers in the discussion (measured by the closeness value obtained from the communication social network). We asked project developers about the relevance of the results obtained by the prediction models based on contextual information. According to them, the results can help developers who are new to the project, since these developers have no knowledge about the architecture and are usually not familiar with the artifacts' history. Thus, our results indicate that prediction models based on contextual information are useful to support developers during maintenance and evolution activities.
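As a hedged illustration of the kind of per-file-pair classifier described above (not the thesis's actual pipeline or data), a Random Forest over contextual features might be set up as follows in Python; the feature names mirror those listed in the abstract, and all values and labels are synthetic placeholders.

```python
# Hedged sketch: predict whether file B co-changes with file A in a commit,
# using a Random Forest over contextual features of the kind listed above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(42)
n_commits = 500

# Contextual features per commit touching file A (illustrative names):
# lines added/modified, lines removed, code churn, words in the issue text,
# number of comments, centrality of the committer in the communication network.
X = np.column_stack([
    rng.poisson(30, n_commits),   # lines_added_modified
    rng.poisson(10, n_commits),   # lines_removed
    rng.poisson(45, n_commits),   # code_churn
    rng.poisson(80, n_commits),   # words_in_description
    rng.poisson(5, n_commits),    # n_comments
    rng.random(n_commits),        # committer_centrality
])
# Synthetic label: did file B change in the same commit?
y = (X[:, 2] + 40 * X[:, 5] + rng.normal(0, 10, n_commits) > 60).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
print("precision:", precision_score(y_test, pred), "recall:", recall_score(y_test, pred))
print("feature importances:", model.feature_importances_)
```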
|
44 |
Hydrologic Impacts of Climate Change: Quantification of Uncertainties. Raje, Deepashree. 12 1900.
General Circulation Models (GCMs), which are mathematical models based on principles of fluid dynamics, thermodynamics and radiative transfer, are the most reliable tools available for projecting climate change. However, the spatial scale on which typical GCMs operate is very coarse compared to that of hydrologic processes, and hence the output from a GCM cannot be directly used in hydrologic models. Statistical Downscaling (SD) derives a statistical or empirical relationship between the variables simulated by the GCM (predictors) and a point-scale meteorological series (predictand). In this work, a new downscaling model, called the CRF-downscaling model, is developed in which the conditional distribution of the hydrologic predictand sequence, given atmospheric predictor variables, is represented as a conditional random field (CRF) to downscale the predictand in a probabilistic framework. Features defined in the downscaling model capture information about various factors influencing precipitation such as circulation patterns, temperature and pressure gradients and specific humidity levels. Uncertainty in prediction is addressed by projecting future cumulative distribution functions (CDFs) for a number of the most likely precipitation sequences. Direct classification of dry/wet days as well as precipitation amount is achieved within a single modeling framework, and changes in the non-parametric distribution of precipitation and dry and wet spell lengths are projected. Application of the method is demonstrated with the case study of downscaling to daily precipitation in the Mahanadi basin in Orissa, with the A1B scenario of the MIROC3.2 GCM from the Center for Climate System Research (CCSR), Japan.
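A minimal sketch of the idea, assuming the third-party sklearn-crfsuite package and purely synthetic predictors, is shown below; it covers only the dry/wet/heavy state-classification part of such a model, not the thesis's actual CRF-downscaling implementation or its feature set.

```python
# Hedged sketch: a linear-chain CRF over daily dry/wet/heavy labels conditioned on
# GCM-scale predictors. Predictor names and values are illustrative placeholders.
import numpy as np
import sklearn_crfsuite

rng = np.random.default_rng(1)

def day_features(mslp, sphum, tgrad):
    # One feature dict per day; crfsuite accepts numeric feature values.
    return {"mslp": mslp, "specific_humidity": sphum, "temp_gradient": tgrad}

def make_sequence(n_days=90):
    feats, labels = [], []
    for _ in range(n_days):
        sphum = rng.random()
        feats.append(day_features(rng.normal(1010, 5), sphum, rng.normal(0, 1)))
        labels.append("heavy" if sphum > 0.9 else ("wet" if sphum > 0.6 else "dry"))
    return feats, labels

sequences = [make_sequence() for _ in range(20)]   # 20 synthetic seasons
X = [s[0] for s in sequences]
y = [s[1] for s in sequences]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, y)
print(crf.predict(X[:1])[0][:10])   # downscaled dry/wet/heavy sequence for one season
```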
An uncertainty modeling framework is presented in this work, which combines GCM, scenario and downscaling uncertainty using Dempster-Shafer (D-S) evidence theory to represent and combine uncertainty. The methodology for combining uncertainties is applied to projections of hydrologic drought in terms of the monsoon standardized streamflow index (SSFI-4), derived from streamflow projections for the Mahanadi river at Hirakud. The results from the work indicate an increasing probability of extreme, severe and moderate drought and a decreasing probability of normal to wet conditions, as a result of a decrease in monsoon streamflow in the Mahanadi river due to climate change.
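For readers unfamiliar with D-S combination, a small sketch of Dempster's rule over hypothetical drought-class evidence is given below; the mass values are illustrative and are not taken from this work.

```python
# Hedged sketch of Dempster's rule of combination for two sources of evidence
# over drought classes (illustrative masses only).
from itertools import product

FRAME = ("normal_or_wet", "moderate", "severe", "extreme")

def combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets of classes."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb            # mass assigned to contradictory evidence
    # Normalize by 1 - K, where K is the total conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Evidence from two downscaled projections (hypothetical masses).
m_source1 = {frozenset({"severe", "extreme"}): 0.6, frozenset(FRAME): 0.4}
m_source2 = {frozenset({"moderate", "severe"}): 0.5, frozenset(FRAME): 0.5}
print(combine(m_source1, m_source2))
```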
In most studies to date, the nature of the downscaling relationship is assumed stationary, or remaining unchanged in a future climate. In this work, an uncertainty modeling framework is presented in which, in addition to GCM and scenario uncertainty, uncertainty in the downscaling relationship itself is explored by linking downscaling with changes in frequencies of modes of natural variability. Downscaling relationships are derived for each natural variability cluster and used for projections of hydrologic drought. Each projection is weighted with the future projected frequency of occurrence of that cluster, called ‘cluster-linking’, and scaled by the GCM performance with respect to the associated cluster for the present period, called ‘frequency scaling’. The uncertainty modeling framework is applied to a case study of projections of hydrologic drought or SSFI-4 classifications, using projected streamflows for the Mahanadi river at Hirakud. It is shown that a stationary downscaling relationship will either over- or under-predict downscaled hydrologic variable values and associated uncertainty. Results from the work show improved agreement between GCM predictions at the regional scale, which are validated for the 20th century, implying that frequency scaling and cluster-linking may indeed be a valid method for constraining uncertainty.
To assess the impact of climate change on reservoir performance, in this study, a range of integrated hydrologic scenarios are projected for the future. The hydrologic scenarios incorporate increased irrigation demands; rule curves dictated by increased need for flood storage and downscaled projections of streamflow from an ensemble of GCMs and emission scenarios. The impact of climate change on multipurpose reservoir performance is quantified, using annual hydropower and RRV criteria, under GCM and scenario uncertainty. The ‘business-as-usual’ case using Standard Operating Policy (SOP) is studied initially for quantifying impacts. Adaptive Stochastic Dynamic Programming (SDP) policies are subsequently derived for the range of future hydrologic scenarios, with the objective of maximizing reliabilities with respect to multiple reservoir purposes of hydropower, irrigation and flood control. It is shown that the hydrologic impact of climate change is likely to result in decreases in performance criteria and annual hydropower generation for Hirakud reservoir. Adaptive policies show that a marginal reduction in irrigation and flood control reliability can achieve increased hydropower reliability in future. Hence, reservoir rules for flood control may have to be revised in the future.
|
45 |
Squid Impact Analyser: uma ferramenta para análise de impacto de mudança em linhas de produto de software. Vianna, Alexandre Strapação Guedes. 25 May 2012.
Conselho Nacional de Desenvolvimento Científico e Tecnológico / Software Product Lines (SPL) is a software engineering approach to developing software system families that share common features and differ in other features according to the requested software systems. The adoption of the SPL approach can promote several benefits such as cost reduction, product quality, productivity, and time to market. On the other hand, the SPL approach brings new challenges to software evolution that must be considered. Recent research work has explored and proposed automated approaches based on code analysis and traceability techniques for change impact analysis in the context of SPL development. There are existing limitations concerning these approaches, such as the customization of the analysis functionalities to address different strategies for change impact analysis, and the change impact analysis of fine-grained variability. This dissertation proposes a change impact analysis tool for SPL development, called Squid Impact Analyzer. The tool allows the implementation of change impact analysis based on information from variability modeling, the mapping of variability to code assets, and existing dependency relationships between code assets. An assessment of the tool is conducted through an experiment that compares the change impact analysis results provided by the tool with real changes applied to several evolution releases of an SPL for media management on mobile devices. / Linhas de Produtos de Software (LPS) consiste em um paradigma de desenvolvimento de software, no qual famílias de sistemas compartilham características comuns e tornam explícitas outras características que variam de acordo com o sistema final sendo considerado. Esta abordagem oferece benefícios ao desenvolvimento de software como redução de custos, qualidade do produto final, produtividade e tempo de desenvolvimento reduzido. Por outro lado, a abordagem impõe novos desafios para a atividade de evolução dos artefatos que modelam e implementam a LPS. Trabalhos de pesquisa recentes propõem abordagens com suporte automatizado de ferramentas de análise de impacto de mudança no contexto de evolução de LPSs. Tais abordagens são baseadas em técnicas de análise de impacto de mudanças e rastreabilidade de artefatos, porém apresentam limitações quanto à análise de impacto de mudanças em variabilidades de granularidade fina, bem como à customização dos tipos e estratégias de análise realizadas. Esta dissertação propõe uma ferramenta de análise de impacto de mudança, denominada Squid Impact Analyzer, que utiliza uma estratégia de estimativa de impacto baseada em informações de características, mapeamento de tais características em artefatos de código, e dependência existente entre artefatos de implementação. A ferramenta é avaliada através da condução de experimentos que realizam a quantificação de métricas de cobertura, precisão e média harmônica nos resultados de buscas de análise de impacto de mudança da ferramenta proposta em contraposição às mudanças reais realizadas nos artefatos de diversas versões de evolução de uma LPS para gerenciamento de mídias em dispositivos móveis. A ferramenta foi desenvolvida com base em uma infraestrutura que serve de base para a instanciação de ferramentas de análise de propriedades de código de LPSs, e que é também parte da contribuição da dissertação.
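As a hedged sketch of the general idea behind such a tool (not Squid Impact Analyzer's actual implementation), an impact set can be estimated by seeding from the code assets mapped to a changed feature and traversing reverse dependencies; all feature names, files, and edges below are hypothetical.

```python
# Hedged sketch: estimate a change impact set from a variability-to-asset mapping
# plus dependency relationships between code assets (illustrative data only).
from collections import deque

# feature -> code assets realizing it (variability-to-asset mapping)
feature_to_assets = {
    "PhotoView": {"PhotoScreen.java", "MediaController.java"},
    "SMSTransfer": {"SmsSender.java"},
}
# asset -> assets that depend on it (reverse dependency edges)
dependents = {
    "MediaController.java": {"AlbumController.java", "SmsSender.java"},
    "AlbumController.java": {"MainUI.java"},
}

def impact_set(changed_feature):
    seeds = feature_to_assets.get(changed_feature, set())
    impacted, queue = set(seeds), deque(seeds)
    while queue:                               # transitive closure over dependents
        asset = queue.popleft()
        for dep in dependents.get(asset, ()):  # follow reverse dependencies
            if dep not in impacted:
                impacted.add(dep)
                queue.append(dep)
    return impacted

print(sorted(impact_set("PhotoView")))
```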
|
46 |
Modelagem de mudanças climáticas: do nicho fundamental à conservação da biodiversidade / Climate change modeling: from the fundamental niche to biodiversity conservation. Faleiro, Frederico Augusto Martins Valtuille. 07 March 2016.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Climate change is one of the major threats to biodiversity, and its impact is expected to increase over the 21st century. Climate change affects all levels of biodiversity, from individuals to biomes, reducing ecosystem services. Despite this, the prediction of climate change impacts on biodiversity is still a challenge. Overcoming these issues depends on improvements in different aspects of the science that supports predictions of climate change impacts on biodiversity. The common practice to predict climate change impacts consists in formulating ecological niche models based on the current climate and projecting the changes based on the future climate predicted by the climate models. However, there are some recognized limitations both in the formulation of the ecological niche models and in the use of predictions from the climate models that need to be analyzed. Here, in the first chapter we review the science behind the climate models in order to reduce the knowledge gap between the scientific community that formulates the climate models and the community that uses the predictions of these models. We showed that there is no consensus on how to evaluate the climate models, obtain regional models with higher spatial resolution, and define consensus models. However, we gave some guidelines for using the predictions of the climate models. In the second chapter, we tested whether the predictions of correlative ecological niche models fitted with presence-absence data match the predictions of models fitted with abundance data on the metrics of climate change impact on orchid bees in the Atlantic Forest. We found that the presence-absence models were a partial proxy of change in abundance when the output of the models was continuous, but the same was not true when the predictions were converted to binary. The orchid bees in general will decrease in abundance in the future, but will retain a good number of suitable sites, and the distance to newly gained climatically suitable areas can be very short, despite great variation. The change in species richness and turnover will occur mainly in the western and some southern regions of the Atlantic Forest. In the third chapter, we discussed the drawbacks of using estimates of the realized niche instead of the fundamental niche, such as overpredicting the effect of climate change on species' extinction risk. We proposed a framework based on phylogenetic comparative and missing-data methods to predict the dimensions of the fundamental niche of species with missing data. Moreover, we explore sources of uncertainty in predictions of the fundamental niche and highlight future directions to overcome current limitations of phylogenetic comparative and missing-data methods to improve predictions. We conclude that it is possible to make better use of the current knowledge about species' fundamental niche, with phylogenetic information and auxiliary traits, to predict the fundamental niche of poorly studied species. In the fourth chapter, we used the framework of chapter three to test the performance of two recent phylogenetic modeling methods to predict the thermal niche of mammals. We showed that PhyloPars had better performance than Phylogenetic Eigenvector Maps in predicting the thermal niche. Moreover, the error and bias had a similar phylogenetic pattern for both margins of the thermal niche, while they had differences in the geographic pattern.
The variance in the performance was explained by taxonomic differences and not by methodological aspects. Finally, our models better predicted the upper margin than the lower margin of the thermal niche. This is good news for predicting the effect of climate change on species without physiological data. We hope our findings can be used to improve the predictions of climate change effects on biodiversity in future studies and to support policy decisions on minimizing the effects of climate change on biodiversity. / As mudanças climáticas são uma das principais ameaças à biodiversidade e é esperado que aumente seu impacto ao longo do século XXI. As mudanças climáticas afetam todos os níveis de biodiversidade, de indivíduos à biomas, reduzindo os serviços ecossistêmicos. Apesar disso, as predições dos impactos das mudanças climáticas na biodiversidade é ainda um desafio. A superação dessas questões depende de melhorias em diferentes aspectos da ciência que dá suporte para predizer o impacto das mudanças climáticas na biodiversidade. A prática comum para predizer o impacto das mudanças climáticas consiste em formular modelos de nicho ecológico baseado no clima atual e projetar as mudanças baseadas no clima futuro predito pelos modelos climáticos. No entanto, existem algumas limitações reconhecidas na formulação do modelo de nicho ecológico e no uso das predições dos modelos climáticos que precisam ser analisadas. Aqui, no primeiro capítulo nós revisamos a ciência por detrás dos modelos climáticos com o intuito de reduzir a lacuna de conhecimentos entre a comunidade científica que formula os modelos climáticos e a comunidade que usa as predições dos modelos. Nós mostramos que não existe consenso sobre avaliar os modelos climáticos, obter modelos regionais com maior resolução espacial e definir modelos consensuais. No entanto, nós damos algumas orientações para usar as predições dos modelos climáticos. No segundo capítulo, nós testamos se as predições dos modelos correlativos de nicho ecológicos ajustados com presença-ausência são congruentes com aqueles ajustados com dados de abundância nas medidas de impacto das mudanças climáticas em abelhas de orquídeas da Mata Atlântica. Nós encontramos que os modelos com presença-ausência foram substitutos parciais das mudanças na abundância quando o resultado dos modelos foi contínuo (adequabilidade), mas o mesmo não ocorreu quando as predições foram convertidas para binárias. As espécies de abelhas, de modo geral, irão diminuir em abundância no futuro, mas reterão uma boa quantidade de locais adequados no futuro e a distância para áreas climáticas adequadas ganhadas podem estar bem próximo, apesar da grande variação. A mudança na riqueza e na substituição de espécies ocorrerá principalmente no Oeste e algumas regiões no sul da Mata Atlântica. No terceiro capítulo, nós discutimos as desvantagens no uso de estimativas do nicho realizado ao invés do nicho fundamental, como superestimar o efeito das mudanças climáticas no risco de extinção das espécies. Nós propomos um esquema geral baseado em métodos filogenéticos comparativos e métodos de dados faltantes para predizer as dimensões do nicho fundamental das espécies com dados faltantes. Além disso, nós exploramos as fontes de incerteza nas predições do nicho fundamental e destacamos direções futuras para superar as limitações atuais dos métodos comparativos filogenéticas e métodos de dados faltantes para melhorar as predições.
Nós concluímos que é possível fazer melhor uso do conhecimento atual sobre o nicho fundamental das espécies com informação filogenética e caracteres auxiliares para predizer o nicho fundamental de espécies pouco estudadas. No quarto capítulo, nós usamos o esquema geral do capítulo três para testar a performance de dois novos métodos de modelagem filogenética para predizer o nicho térmico dos mamíferos. Nós mostramos que o “PhyloPars” teve uma melhor performance que o “Phylogenetic Eigenvector Maps” em predizer o nicho térmico. Além disso, o erro e o viés tiveram um padrão filogenético similar para ambas as margens do nicho térmico, enquanto eles apresentaram diferentes padrões espaciais. A variância na performance foi explicada pelas diferenças taxonômicas e não pelas diferenças em aspectos metodológicos. Finalmente, nossos modelos melhor predizem a margem superior do que a margem inferior do nicho térmico. Essa é uma boa notícia para predizer o efeito das mudanças climáticas em espécies sem dados fisiológicos. Nós esperamos que nossos resultados possam ser usados para melhorar as predições do efeito das mudanças climáticas na biodiversidade em estudos futuros e dar suporte para decisões políticas para minimização dos efeitos das mudanças climáticas na biodiversidade.
|
47 |
Predição de mudanças conjuntas de artefatos de software com base em informações contextuais / Predicting co-changes of software artifacts based on contextual information. Igor Scaliante Wiese. 18 March 2016.
O uso de abordagens de predição de mudanças conjuntas auxilia os desenvolvedores a encontrar artefatos que mudam conjuntamente em uma tarefa. No passado, pesquisadores utilizaram análise estrutural para construir modelos de predição. Mais recentemente, têm sido propostas abordagens que utilizam informações históricas e análise textual do código fonte. Apesar dos avanços obtidos, os desenvolvedores de software ainda não usam essas abordagens amplamente, presumidamente por conta do número de falsos positivos. A hipótese desta tese é que informações contextuais obtidas das tarefas, da comunicação dos desenvolvedores e das mudanças dos artefatos descrevem as circunstâncias e condições em que as mudanças conjuntas ocorrem e podem ser utilizadas para realizar a predição de mudanças conjuntas. O objetivo desta tese consiste em avaliar se o uso de informações contextuais melhora a predição de mudanças conjuntas entre dois arquivos em relação às regras de associação, que é uma estratégia frequentemente usada na literatura. Foram construídos modelos de predição específicos para cada par de arquivos, utilizando as informações contextuais em conjunto com o algoritmo de aprendizagem de máquina random forest. Os modelos de predição foram avaliados em 129 versões de 10 projetos de código aberto da Apache Software Foundation. Os resultados obtidos foram comparados com um modelo baseado em regras de associação. Além de avaliar o desempenho dos modelos de predição também foram investigadas a influência do modo de agrupamento dos dados para construção dos conjuntos de treinamento e teste e a relevância das informações contextuais. Os resultados indicam que os modelos baseados em informações contextuais predizem 88% das mudanças corretamente, contra 19% do modelo de regras de associação, indicando uma precisão 3 vezes maior. Os modelos criados com informações contextuais coletadas em cada versão do software apresentaram maior precisão que modelos construídos a partir de um conjunto arbitrário de tarefas. As informações contextuais mais relevantes foram: o número de linhas adicionadas ou modificadas, número de linhas removidas, code churn, que representa a soma das linhas adicionadas, modificadas e removidas durante um commit, número de palavras na descrição da tarefa, número de comentários e papel dos desenvolvedores na discussão, medido pelo valor do índice de intermediação (betweenness) da rede social de comunicação. Os desenvolvedores dos projetos foram consultados para avaliar a importância dos modelos de predição baseados em informações contextuais. Segundo esses desenvolvedores, os resultados obtidos ajudam desenvolvedores novatos no projeto, pois não têm conhecimento da arquitetura e normalmente não estão familiarizados com as mudanças dos artefatos durante a evolução do projeto. Modelos de predição baseados em informações contextuais a partir de mudanças de software são relativamente precisos e, consequentemente, podem ser usados para apoiar os desenvolvedores durante a realização de atividades de manutenção e evolução de software / Co-change prediction aims to make developers aware of which artifacts may change together with the artifact they are working on. In the past, researchers relied on structural analysis to build prediction models. More recently, hybrid approaches relying on historical information and textual analysis have been proposed. Despite the advances in the area, software developers still do not use these approaches widely, presumably because of the number of false recommendations. 
The hypothesis of this thesis is that contextual information of software changes collected from issues, developers' communication, and commit metadata describes the circumstances and conditions under which a co-change occurs and is useful to predict co-changes. The aim of this thesis is to use contextual information to build co-change prediction models that improve the overall accuracy, especially by decreasing the amount of false recommendations. We built predictive models specific to each pair of files using contextual information and the Random Forest machine learning algorithm. The approach was evaluated on 129 versions of 10 open source projects from the Apache Software Foundation. We compared our approach to a baseline model based on association rules, which is often used in the literature. We evaluated the performance of the prediction models, investigating the influence of data aggregation to build training and test sets, as well as the identification of the most relevant contextual information. The results indicate that models based on contextual information can correctly predict 88% of co-change instances, against 19% achieved by the association rules model. This indicates that models based on contextual information can be 3 times more accurate. Models created with contextual information collected in each software version were more accurate than models built from an arbitrary amount of contextual information collected from more than one version. The most important pieces of contextual information for building the prediction models were: number of lines of code added or modified, number of lines of code removed, code churn, number of words in the discussion and description of a task, number of comments, and the role of developers in the discussion (measured by the closeness value obtained from the communication social network). We asked project developers about the relevance of the results obtained by the prediction models based on contextual information. According to them, the results can help developers who are new to the project, since these developers have no knowledge about the architecture and are usually not familiar with the artifacts' history. Thus, our results indicate that prediction models based on contextual information are useful to support developers during maintenance and evolution activities.
|
48 |
ADAPTIVE MANAGEMENT OF MIXED-SPECIES HARDWOOD FORESTS UNDER RISK AND UNCERTAINTY. Vamsi K Vipparla (9174710). 28 July 2020.
Forest management involves numerous stochastic elements. To sustainably manage forest resources, it is crucial to acknowledge these sources of uncertainty and risk and to incorporate them into adaptive decision-making. Here, I developed several stochastic programming models in the form of passive or active adaptive management for natural mixed-species hardwood forests in Indiana. I demonstrated how to use these tools to deal with time-invariant and time-variant natural disturbances in the optimal planning of harvests.
Markov decision process (MDP) models were first constructed based upon stochastic simulations of an empirical forest growth model for the forest type of interest. They were then optimized to seek optimal or near-optimal harvesting decisions while considering risk and uncertainty in natural disturbances. In particular, a classic expected-criterion infinite-horizon MDP model was first used as a passive adaptive management tool to determine the optimal action for a specific forest state when the probabilities of forest transition remained constant over time. Next, a two-stage non-stationary MDP model combined with a rolling-horizon heuristic was developed, which allowed information updates and corresponding adjustments of decisions. It was used to determine active adaptive harvesting decisions for a three-decade planning horizon during which natural disturbance probabilities may be altered by climate change.
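A minimal sketch of the passive adaptive (stationary) case, assuming a toy three-state forest MDP with illustrative transition probabilities and rewards rather than the thesis's fitted model, is given below; value iteration returns a stationary harvest policy.

```python
# Hedged sketch: value iteration for a tiny infinite-horizon discounted MDP over
# forest basal-area states, with a "no_harvest" vs "harvest" action and an
# illustrative per-decade windthrow probability (all numbers are placeholders).
import numpy as np

states = ["low_BA", "medium_BA", "high_BA"]
actions = ["no_harvest", "harvest"]
p_windthrow = 0.10   # per-decade disturbance probability (assumption)
gamma = 0.95         # per-decade discount factor (assumption)

# P[a][s, s'] transition matrices and R[a][s] decadal rewards (illustrative).
P = {
    "no_harvest": np.array([[0.2 + p_windthrow, 0.8 - p_windthrow, 0.0],
                            [p_windthrow,       0.3,               0.7 - p_windthrow],
                            [p_windthrow,       0.1,               0.9 - p_windthrow]]),
    "harvest":    np.array([[0.9, 0.1, 0.0],
                            [0.7, 0.3, 0.0],
                            [0.2, 0.7, 0.1]]),
}
R = {"no_harvest": np.array([0.0, 0.0, 0.0]),
     "harvest":    np.array([100.0, 700.0, 1400.0])}   # sawtimber value by state

V = np.zeros(len(states))
for _ in range(1000):                       # value iteration to (near) convergence
    Q = np.stack([R[a] + gamma * P[a] @ V for a in actions])
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-6:
        break
    V = V_new

policy = [actions[i] for i in Q.argmax(axis=0)]   # stationary optimal action by state
print(dict(zip(states, policy)), V_new.round(1))
```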
The empirical results can be used to make some useful quantitative management recommendations and shed light on the impacts of decision-making on the forests and timber yield when some stochastic elements in forest management change. In general, an increase in the likelihood of damage to forests by natural disturbance would lead to more aggressive decisions if timber production were the management objective. When windthrow did not pose a threat to mixed hardwood forests, the average optimal yield of sawtimber was estimated to be 1,376 ft³/ac, while the residual basal area was 88 ft²/ac. Assuming a 10 percent per-decade probability of windthrow that would reduce the stand basal area considerably, the optimal sawtimber yield per decade would decline by 17%, but the residual basal area would be lowered by only 5%. Assuming that the frequency of windthrow increased by 5% every decade under climate change, the average sawtimber yield would be reduced by 31%, with an average residual basal area of around 76 ft²/ac. For validation purposes, I compared the total sawtimber yield over three decades obtained from the heuristic approach to that of a three-decade MDP model making ex post decisions. The heuristic approach proved to provide a satisfactory result, only about 18% lower than the actual optimum.
These findings highlight the need for landowners, both private and public, to monitor forests frequently and use flexible planning approaches in order to anticipate climate change impacts. They also suggest that climate change may considerably lower sawtimber yield, causing a concerning decline in the timber supply in Indiana. Future improvements of the approaches used here are recommended, including addressing changing stumpage market conditions and developing a more flexible rolling-horizon heuristic approach.
|
49 |
Tourism and climate change: an investigation of the two-way linkages for the Victoria Falls resort, Zimbabwe. Dube, Kaitano. 02 1900.
There remain vast knowledge gaps in the global south as to how tourism will affect climate change and vice versa. Recent extreme weather events in southern Africa attributed to climate variability and change have led to speculation that the Victoria Falls is under threat from climate change. This research was aimed at examining the two-way linkage between tourism and climate change. The research adopted a pragmatism paradigm in a mixed-method case study. A number of research techniques were used to investigate the problem, namely an online survey (n=427), secondary data analysis, field observation and interviews. Data analysis was done making use of Mann-Kendall trend analysis, QuestionPro analytics, the Microsoft Excel Analysis ToolPak, tools from ArcMap 10.3.1 and SPSS 24. Content analysis and thematic analysis were used to analyse secondary and interview data, respectively. It emerged that the Victoria Falls is experiencing climate change, which has resulted in a statistically significant increase in temperature of between 0.3°C and 0.75°C per decade over the past 40 years. However, no significant changes in rainfall were noted, although there has been a seasonal shift in average rainfall onset. Weather extremes and annual rainfall point to an increased occurrence and severity of extreme years of drought and wetting, which has in turn also affected the water-flow regime at the waterfall. The changes have a negative impact on wildlife, tourists, and tourism business in the area. The study also revealed that tourism is an equally significant driver of climate change through carbon emissions throughout its value chain. Carbon emissions from the tourism value chain are set to increase in the foreseeable future, despite the industry's efforts to go green, owing to the exponential growth of the industry. There is, therefore, a need for the industry to adapt, mitigate and intensify green tourism efforts to achieve sustainability. The study further suggests that there is a need for better communication and education to build resilience and capacity for the tourism industry to deal with climate change. Further research is suggested to ascertain the tourism threshold for the area, the impact of climate change on wildlife, and the basin changes that led to the increase in water flow in the Zambezi River. / Environmental Sciences / Ph. D. (Environmental Management)
|
50 |
Impacts of Climate Change on IDF Relationships for Design of Urban Stormwater Systems. Saha, Ujjwal. January 2014.
Increasing global mean temperature, or global warming, has the potential to affect the hydrologic cycle. According to the UN Intergovernmental Panel on Climate Change (IPCC), alterations in the frequency and magnitude of high-intensity rainfall events are very likely in the 21st century. A trend of increasing urbanization across the globe is noticeable at the same time. These changes will have a great impact on water infrastructure as well as the environment in urban areas. One of the impacts may be an increase in the frequency and extent of flooding. India has witnessed a number of urban floods in recent years that have resulted in huge economic losses, an instance being the flooding of Mumbai in July 2005. To prevent catastrophic damage due to floods, it has become increasingly important to understand the likely changes in extreme rainfall in the future, their effect on urban drainage systems, and the measures that can be taken to prevent or reduce flood damage. Reliable estimation of future design rainfall intensity accounting for uncertainties due to climate change is an important research issue. In this context, rainfall intensity-duration-frequency (IDF) relationships are among the most extensively used hydrologic tools in the planning, design and operation of various drainage-related infrastructure in urban areas. There is, thus, a need for a study that investigates the potential effects of climate change on IDF relationships.
The main aim of the research reported in this thesis is to investigate the effect of climate change on intensity-duration-frequency relationships in an urban area. Rainfall in Bangalore City is used as a case study to demonstrate the applications of the methodologies developed in the research.
Before studying future changes, it is essential to investigate the signature of change in the observed hydrological and climatological data series. Initially, the yearly mean temperature records are studied to detect the signature of global warming. It is observed that the temperature of Bangalore City shows evidence of a warming trend at a statistical confidence level of 99.9%, and that the warming effect is visible as an increase in minimum temperature at a rate higher than that of maximum temperature. Interdependence studies between temperature and extreme rainfall reveal that, up to a certain range, an increase in temperature intensifies short-duration rainfall at a rate greater than that of average rainfall. From these two findings, it is clear that short-duration rainfall intensities may intensify in the future due to global warming and the urban heat island effect. Possible urbanization signatures in the extreme rainfall, in terms of intensification in the evenings and on weekends, are also inferred, although inconclusively. The IDF relationships are developed with historical data, and changes in the long-term characteristics of daily rainfall extremes are studied. Multidecadal oscillations in the daily rainfall extreme series are also examined. Further, non-parametric trend analyses of various indices of extreme rainfall are carried out to confirm that there is a trend of increase in extreme rainfall amount and frequency, and therefore it is essential to study the effects of climate change on the IDF relationships of Bangalore City.
Estimation of future changes in rainfall at the hydrological scale generally relies on simulations of future climate provided by Global Climate Models (GCMs). Due to the mismatch in spatial and temporal resolution, GCM results need to be downscaled to obtain information at the station scale and at the time resolutions necessary in the context of urban flooding. The downscaling of extreme rainfall characteristics at an urban station scale poses the following challenges: (1) the downscaling methodology should be efficient enough to simulate rainfall at the tail of the rainfall distribution (e.g., annual maximum rainfall), (2) downscaling at hourly or up to a few minutes' temporal resolution is required, and (3) various uncertainties such as GCM uncertainties, future scenario uncertainties and uncertainties due to various statistical methodologies need to be addressed. For overcoming the first challenge, a stochastic rainfall generator is developed for spatial downscaling of GCM precipitation flux information to the station scale to obtain the daily annual maximum rainfall series (AMRS). Although Regional Climate Models (RCMs) are meant to simulate precipitation at regional scales, they fail to simulate extreme events accurately. Transfer-function-based methods and weather typing techniques are also generally inefficient in simulating extreme events. Due to its stochastic nature, a rainfall generator is better suited for extreme event generation. An algorithm for stochastic simulation of rainfall, which simulates both the mean and the extreme rainfall satisfactorily, is developed in the thesis and used for future projection of rainfall by perturbing the parameters of the rainfall generator for the future time periods. In this study, instead of the customary two-state (rain/dry) Markov chain, a three-state hybrid Markov chain is developed. The three states used in the Markov chain are: dry day, moderate rain day and heavy rain day. The model first decides whether a day is dry or rainy, like the traditional weather generator (WGEN), using two transition probabilities: the probability of a rain day following a dry day (P01) and that of a rain day following a rain day (P11). The state of a rain day is then further classified as a moderate rain day or a heavy rain day; for this purpose, rainfall above the 90th percentile value of the non-zero precipitation distribution is termed a heavy rain day, and the state is assigned based on the transition probabilities and a uniform random number. The rainfall amount is generated by the Monte Carlo method for the moderate and heavy rain days separately, with two different gamma distributions fitted for the moderate and heavy rain days. Segregating the rain days into two classes improves the generation of extreme rainfall. For overcoming the second challenge, i.e., the requirement of finer temporal scales, the daily-scale IDF ordinates are disaggregated into hourly and sub-hourly durations. Disaggregating a continuous rainfall time series at the sub-hourly scale requires continuous rainfall data at a fine scale (15 minutes), which is not available for most Indian rain gauge stations. Hence, the scale-invariance properties of extreme rainfall time series over various rainfall durations are investigated through the scaling behavior of the non-central moments (NCMs) of the generalized extreme value (GEV) distribution. The scale-invariance properties of extreme rainfall time series are then used to disaggregate the distributional properties of daily rainfall to hourly and sub-hourly scales. Assuming the scaling relationships to be stationary, future sub-hourly and hourly IDF relationships are developed.
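A minimal sketch of the three-state daily rainfall generator described above, using illustrative transition probabilities and gamma parameters rather than the fitted Bangalore values, could look like this:

```python
# Hedged sketch of a three-state (dry / moderate / heavy) daily rainfall generator.
# All parameter values below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(7)

p01, p11 = 0.30, 0.55        # P(wet | previous day dry), P(wet | previous day wet)
p_heavy_given_wet = 0.10     # heavy day ~ rainfall above the 90th percentile of wet days
gamma_moderate = (0.8, 8.0)  # (shape, scale) in mm for moderate rain days
gamma_heavy = (2.0, 30.0)    # (shape, scale) in mm for heavy rain days

def simulate(n_days=365):
    rain = np.zeros(n_days)
    wet_prev = False
    for t in range(n_days):
        p_wet = p11 if wet_prev else p01
        if rng.random() < p_wet:                    # wet day
            if rng.random() < p_heavy_given_wet:    # classify as a heavy rain day
                shape, scale = gamma_heavy
            else:                                   # moderate rain day
                shape, scale = gamma_moderate
            rain[t] = rng.gamma(shape, scale)       # Monte Carlo rainfall amount
            wet_prev = True
        else:
            wet_prev = False
    return rain

series = simulate()
print("annual total (mm):", round(series.sum(), 1),
      "annual daily maximum (mm):", round(series.max(), 1))
```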
Uncertainties associated with climate change impacts arise due to the existence of several GCMs developed by different institutes across the globe, climate simulations available for different representative concentration pathway (RCP) scenarios, and the diverse statistical techniques available for downscaling. Downscaled output from a single GCM with a single emission scenario represents only a single trajectory of all possible future climate realizations and cannot be representative of the full extent of climate change. Therefore, a comprehensive assessment of future projections should use the collective information from an ensemble of GCM simulations. In this study, 26 different GCMs and 4 RCP scenarios are taken into account to produce a range of IDF curves for different future time periods. The reliability ensemble averaging (REA) method is used to obtain a weighted average from the ensemble of projections. Scenario uncertainty is not addressed in this study. Two different downscaling techniques (viz., delta change and the stochastic rainfall generator) are used to assess the uncertainty due to downscaling techniques. From the results, it can be concluded that the delta change method under-estimated the extreme rainfall compared to the rainfall generator approach. This study also confirms that the delta change method is not suitable for impact studies related to changes in extreme events, similar to some earlier studies. Thus, mean IDF relationships for three different future periods and four RCP scenarios are simulated using the rainfall generator, the scaling GEV method, and the REA method. The results suggest that shorter-duration rainfall will intensify more due to climate change. The change in rainfall intensities is likely to be in the range of 20% to 80% across all durations.
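As a hedged sketch of REA-style weighting (a simplified stand-in for the full method, with illustrative numbers rather than the study's GCM ensemble), each model's projected change can be weighted by a performance term based on its historical bias and a convergence term based on its distance from the ensemble consensus:

```python
# Hedged, simplified REA-style ensemble weighting (all numbers are placeholders).
import numpy as np

observed_hist = 65.0                                       # observed historical index
hist = np.array([60.0, 70.0, 55.0, 66.0, 80.0])            # GCM-simulated historical values
future_change = np.array([0.25, 0.40, 0.15, 0.30, 0.70])   # projected fractional changes
eps_bias = 5.0    # natural-variability scale for the bias criterion (assumed)
eps_conv = 0.2    # spread scale for the convergence criterion (assumed)

def rea_weights(hist, change, n_iter=20):
    # Performance factor: penalize models whose historical bias exceeds eps_bias.
    r_bias = np.minimum(1.0, eps_bias / np.maximum(np.abs(hist - observed_hist), 1e-9))
    weights = r_bias / r_bias.sum()
    for _ in range(n_iter):   # consensus and weights depend on each other, so iterate
        consensus = weights @ change
        r_conv = np.minimum(1.0, eps_conv / np.maximum(np.abs(change - consensus), 1e-9))
        weights = r_bias * r_conv
        weights /= weights.sum()
    return weights

w = rea_weights(hist, future_change)
print("REA-style weights:", w.round(3))
print("weighted mean change:", round(float(w @ future_change), 3))
```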
Finally, the projected future rainfall intensities are used to investigate the possible impact of climate change on the existing drainage system of the Challaghatta valley in Bangalore City by running the Storm Water Management Model (SWMM) for the historical period and for the best- and worst-case scenarios for three future time periods: 2021–2050, 2051–2080 and 2071–2100. The results indicate that the existing drainage is inadequate for the current condition as well as for the future scenarios. The number of nodes flooded will increase as the time period advances, and a huge change in runoff volume is projected. Modifications of the drainage system are suggested, providing storage ponds for the excess high-speed runoff in order to restrict the width of the drain. The main research contribution of this thesis thus comes from an analysis of trends of extreme rainfall in an urban area, followed by projecting changes in the IDF relationships under climate change scenarios and quantifying the uncertainties in the projections.
|