51

Volatility Modeling and Risk Measurement using Statistical Models based on the Multivariate Student's t Distribution

Banasaz, Mohammad Mahdi 01 April 2022 (has links)
An effective risk management program requires reliable risk measurement. Failure to assess the inherent risks in mortgage-backed securities in the U.S. market contributed to the financial crisis of 2007–2008, which has prompted government regulators to pay greater attention to controlling risk in banks, investment funds, credit unions, and other financial institutions to prevent bankruptcy and financial crises in the future. In order to calculate risk in a reliable manner, this thesis focuses on the statistical modeling of expected return and volatility. The primary aim of this study is to propose a framework, based on the probabilistic reduction approach, to reliably quantify market risk using statistical models and historical data. Particular emphasis is placed on the importance of the validity of the probabilistic assumptions in risk measurement, by demonstrating how a statistically misspecified model will lead the evaluation of risk astray. The concept of market risk is explained by discussing the narrow definition of risk in a financial context and its evaluation and implications for financial management. After highlighting empirical evidence and discussing the limitations of ARCH/GARCH-type volatility models using exchange rate and stock market data, we propose Student's t Autoregressive models to estimate expected return and volatility and to measure risk using Value at Risk (VaR) and Expected Shortfall (ES). Misspecification testing shows that the proposed models can adequately capture the chance regularities in exchange rate and stock index data and give reliable estimates of the regression and skedastic functions used in risk measurement. According to the empirical findings, the COVID-19 pandemic posed an enormous risk to global financial markets in the first quarter of 2020; in 2021, after COVID-19 vaccine distribution started in developed countries, risk in financial markets returned to pre-pandemic levels. / Doctor of Philosophy / Reliable risk measurement is necessary for any effective risk management program. Hence, the primary purpose of this dissertation is to propose a framework to quantify market risk using statistical models and historical data, with particular emphasis placed on checking the validity of the probabilistic assumptions underlying the models. After discussing the concept of market risk and its evaluation methods in financial management, we explore the empirical evidence in financial data and highlight some limitations of other well-known modeling approaches. To address these limitations, this study proposes Student's t Autoregressive models to estimate the conditional mean and the conditional variance of financial variables and uses them to measure risk via two popular methods: Value at Risk (VaR) and Expected Shortfall (ES). Further investigation shows that the proposed models can adequately model exchange rate and stock index data and give reliable estimates for use in risk measurement. We used our model to quantify risk in global financial markets in recent years. The results show that the COVID-19 pandemic posed an enormous risk to global financial markets in the first quarter of 2020; in 2021, after COVID-19 vaccine distribution started in developed countries, risk returned to pre-pandemic levels.
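The VaR and ES computations described in this abstract can be illustrated with a short sketch. Assuming a Student's t autoregressive model has already been fitted (the coefficients and data below are hypothetical placeholders, not the author's estimates), the α-level VaR is the conditional-return quantile, and ES follows from the closed-form tail expectation of the t distribution:

```python
import numpy as np
from scipy import stats

# Hypothetical fitted Student's t AR(1): r_t = c + phi * r_{t-1} + sigma * z_t,
# z_t ~ t with nu degrees of freedom (placeholder values, not thesis estimates).
c, phi, sigma, nu = 0.0002, 0.05, 0.012, 5.0
r_prev = -0.004          # previous day's return
alpha = 0.01             # 1% tail

mu = c + phi * r_prev                    # conditional mean (regression function)
q = stats.t.ppf(alpha, nu)               # standardized t quantile
var_alpha = mu + sigma * q               # 1% Value at Risk (return space)

# Closed-form tail expectation of the t distribution:
# E[Z | Z <= q] = -pdf(q)/alpha * (nu + q^2)/(nu - 1)
es_alpha = mu - sigma * stats.t.pdf(q, nu) * (nu + q**2) / ((nu - 1) * alpha)

print(f"VaR(1%) = {var_alpha:.4%}, ES(1%) = {es_alpha:.4%}")

# Monte Carlo sanity check of the closed-form Expected Shortfall
z = stats.t.rvs(nu, size=1_000_000, random_state=0)
sim = mu + sigma * z
print("simulated ES:", sim[sim <= var_alpha].mean())
```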
52

Análise de custo-eficácia dos pagamentos por serviços ambientais em paisagens fragmentadas: estudo de caso de São Paulo / Cost-effectiveness analysis of payments for environmental services in fragmented landscapes: case study in the State of São Paulo

Fendrich, Arthur Nicolaus 14 November 2017 (has links)
Although the dependence of human activities on ecosystem services has risen in the past decades, the rate of genetic diversity loss on the planet has reached levels comparable to those of major extinction events, underscoring the need for action to conserve natural resources. In addition to the traditional command-and-control approach to conservation, growing attention has been given to economic instruments, especially Payments for Environmental Services (PES). Despite the potential of the PES instrument, many programs fail to incorporate scientific knowledge in their execution, and such a lack of foundation may result in low environmental and economic performance. The present research evaluates the cost-effectiveness of PES in fragmented landscapes. The study area is the state of São Paulo, which has been fragmented historically by agricultural and pasture expansion and by the impacts of large population growth. A survey covering different PES programs for the restoration of native vegetation was sent to rural landowners, and responses were analyzed and linked to socioeconomic and environmental characteristics through a zero-inflated beta mixed model within the GAMLSS framework. The model was then used to predict the enrollment of non-respondents in the different PES programs, and an investment curve for different conservation returns was constructed. Finally, the relationship between the total area for restoration and the amount of resources needed for each program was compared to the environmental budget of the state of São Paulo. Results show that PES is a very costly alternative relative to São Paulo's environmental budgets and provides few benefits for restoration in the state.
The work has a theoretical orientation, as it contributes to understanding the suitability of PES in fragmented landscapes, and a practical orientation, as it quantifies the amount of resources required to execute the programs analyzed.
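The zero-inflated beta regression at the core of this analysis can be sketched directly. The thesis fits a mixed (random-effects) model in the GAMLSS framework; the minimal Python sketch below drops the random effects and fits only a fixed-effects zero-inflated beta likelihood by maximum likelihood, with simulated stand-in data (all names and values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(0)

# Simulated stand-in data: x = landowner covariate, y = fraction of property
# enrolled for restoration (exactly 0 with probability pi, Beta otherwise).
n = 500
x = rng.normal(size=n)
pi_true, mu_true, phi_true = 0.4, 0.3, 8.0
y = np.where(rng.random(n) < pi_true, 0.0,
             rng.beta(mu_true * phi_true, (1 - mu_true) * phi_true, size=n))

def negloglik(theta, x, y):
    b0, b1, g0, g1, log_phi = theta
    pi = expit(g0 + g1 * x)          # P(y = 0), logit link
    mu = expit(b0 + b1 * x)          # Beta mean, logit link
    phi = np.exp(log_phi)            # Beta precision
    a, b = mu * phi, (1 - mu) * phi
    zero = y == 0
    ll = np.sum(np.log(pi[zero]))    # point mass at zero
    yc, ac, bc = y[~zero], a[~zero], b[~zero]
    ll += np.sum(np.log1p(-pi[~zero])                      # continuous part
                 + gammaln(ac + bc) - gammaln(ac) - gammaln(bc)
                 + (ac - 1) * np.log(yc) + (bc - 1) * np.log1p(-yc))
    return -ll

res = minimize(negloglik, x0=np.zeros(5), args=(x, y), method="BFGS")
print(res.x)  # fitted coefficients; enrollment of non-respondents is predicted from these
```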
53

Maxent Estimation of Aquatic Escherichia coli Stream Impairment

Gilfillan, Dennis, Joyner, Timothy Andrew, Scheuerman, Phillip R. 13 September 2018 (has links)
Background: The leading cause of surface water impairment in United States' rivers and streams is pathogen contamination. Although the use of fecal indicators has reduced human health risk, current approaches to identify and reduce exposure can be improved. One important knowledge gap within exposure assessment is the characterization of the complex fate and transport processes of fecal pollution. Novel modeling processes can inform watershed decision-making to improve exposure assessment. Methods: We used the ecological model Maxent and the fecal indicator bacterium Escherichia coli to identify environmental factors associated with surface water impairment. Samples were collected in August, November, February, and May for 8 years on Sinking Creek in Northeast Tennessee and analyzed for 10 water quality parameters and E. coli concentrations. Univariate and multivariate models estimated the probability of impairment given the water quality parameters. Model performance was assessed using the area under the receiver operating characteristic curve (AUC) and prediction accuracy, defined as the model's ability to predict both true positives (impairment) and true negatives (compliance). Univariate models generated action values, or environmental thresholds, to indicate potential E. coli impairment based on a single parameter. Multivariate models predicted the probability of impairment given a suite of environmental variables, and jack-knife sensitivity analysis removed unresponsive variables to elicit a set of the most responsive parameters. Results: Water temperature univariate models performed best as indicated by AUC, but alkalinity models were the most accurate at correctly classifying impairment. Sensitivity analysis revealed that models were most sensitive to the removal of specific conductance. Other sensitive variables included water temperature, dissolved oxygen, discharge, and NO3. The removal of dissolved oxygen improved model performance based on testing AUC, justifying the development of two optimized multivariate models: a 5-variable model including all sensitive parameters, and a 4-variable model that excluded dissolved oxygen. Discussion: Results suggest that E. coli impairment in Sinking Creek is influenced by seasonality and agricultural run-off, stressing the need for multi-month sampling along a stream continuum. Although discharge alone was not predictive of E. coli impairment, its interactive effect stresses the importance of both flow-dependent and flow-independent processes associated with E. coli impairment. This research also highlights the interactions between nutrient and fecal pollution, a key consideration for watersheds with multiple synergistic impairments. Although one indicator cannot mimic the plethora of existing pathogens in water, incorporating modeling can fine-tune an indicator's utility, providing information concerning the fate, transport, and source of fecal pollution while prioritizing resources and increasing confidence in decision making.
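The univariate workflow described here (a single water-quality parameter predicting exceedance of an E. coli criterion, scored by AUC, with an "action value" threshold) can be sketched as follows. The data are simulated placeholders, and scikit-learn's ROC utilities stand in for Maxent's internal evaluation:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)

# Simulated stand-in data: water temperature (deg C) and whether the sample
# exceeded the E. coli impairment criterion (1 = impaired, 0 = compliant).
n = 300
temp = rng.uniform(5, 30, size=n)
p_impaired = 1 / (1 + np.exp(-(temp - 18) / 3))      # warmer -> more likely impaired
impaired = (rng.random(n) < p_impaired).astype(int)

# Model performance: area under the receiver operating characteristic curve.
auc = roc_auc_score(impaired, temp)

# "Action value": the single-parameter threshold that best balances true
# positives (impairment) and true negatives (compliance) -- here, Youden's J.
fpr, tpr, thresholds = roc_curve(impaired, temp)
action_value = thresholds[np.argmax(tpr - fpr)]

print(f"AUC = {auc:.3f}, action value = {action_value:.1f} deg C")
```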
54

Modification Of Magnetic Properties Of Siderite By Thermal Treatment

Alkac, Dilek 01 September 2007 (has links) (PDF)
Obtaining high magnetic susceptibility phases from Hekimhan–Deveci siderite ore via preliminary thermal treatment has been the basic target of the thesis study. Thermal decomposition characteristics of samples, determined by thermogravimetric analysis (TGA), differential thermal analysis (DTA), and differential scanning calorimetry (DSC), were referenced in the advancement of the study. Heat treatment experiments, particularly roasting, were carried out by conventional heating and microwave heating. Results showed that roasting of Hekimhan–Deveci siderite samples could not be achieved by microwave energy, whilst conventional heating experiments recorded success. Subsequent low-intensity magnetic separation of roasted samples gave recovery above 90%, where low-intensity magnetic separation of the run-of-mine sample had failed. Formation of high magnetic susceptibility phases in the roasted samples was verified by magnetic susceptibility balance and X-ray diffraction analysis (XRD). Statistical modeling was applied to determine the optimum roasting conditions in the conventional heating system, with heating temperature, time of heating, and particle size as factors. It was concluded that roasting at T = 560 °C for t = 45 minutes was adequate to obtain the desired results; particle size was noted to be less influential than the other factors in the studied size range. Kinetics (E, n) and the reaction mechanism for thermal decomposition in the conventional heating system were evaluated with different solid-state reaction models by interpretation of the model graphs. Three-dimensional diffusion reaction models were reported to characterize the thermal decomposition well, with activation energy values of E = 85.53 kJ/mol (Jander) and E = 85.49 kJ/mol (Ginstling–Brounshtein).
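The three-dimensional diffusion models named in the abstract have standard integral forms — Jander, g(α) = [1 − (1 − α)^(1/3)]², and Ginstling–Brounshtein, g(α) = 1 − 2α/3 − (1 − α)^(2/3) — and the activation energy follows from an Arrhenius plot of the fitted rate constants. A minimal sketch with placeholder conversion data (not the thesis's measurements):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def g_jander(a):
    return (1 - (1 - a) ** (1 / 3)) ** 2

def g_gb(a):
    return 1 - 2 * a / 3 - (1 - a) ** (2 / 3)

# Placeholder isothermal data: conversion alpha measured at time t (min) for
# several roasting temperatures T (K). Not the thesis's measurements.
T = np.array([773.0, 813.0, 833.0])            # e.g. 500, 540, 560 deg C
t = np.array([45.0, 45.0, 45.0])
alpha = np.array([0.35, 0.55, 0.68])

for name, g in [("Jander", g_jander), ("Ginstling-Brounshtein", g_gb)]:
    k = g(alpha) / t                            # g(alpha) = k t  =>  k = g(alpha) / t
    slope, intercept = np.polyfit(1 / T, np.log(k), 1)
    E = -slope * R / 1000                       # Arrhenius: ln k = ln A - E/(R T)
    print(f"{name}: E = {E:.1f} kJ/mol, ln A = {intercept:.2f}")
```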
55

Age Dependent Analysis and Modeling of Prostate Cancer Data

Bonsu, Nana Osei Mensa 01 January 2013 (has links)
The growth rate of a prostate cancer tumor is an important aspect of understanding the natural history of prostate cancer. Using real prostate cancer data from the SEER database, with tumor size as the response variable, we clustered tumor sizes into age groups to make their behavior more analytically tractable. The rate of change of the response variable as a function of age is given for each cluster. Residual analysis attests to the quality of the analytical model and the resulting estimates. In addition, we identified the probability distribution that characterizes the behavior of the response variable and proceeded with basic parametric analysis. Several treatment options are available to prostate cancer patients. In the present study, we considered the three most commonly used treatments for prostate cancer: radiation therapy, surgery, and a combination of surgery and radiation therapy. The study uses data from the SEER database to evaluate and rank the effectiveness of these treatment options using survival analysis in conjunction with basic parametric analysis; the evaluation is based on the stage classification of the prostate cancer. Improvement in prostate cancer outcomes can be measured by improvement in mortality, and mortality projection is crucial for policy makers and for the financial stability of the insurance business. Our research applies a parametric model proposed by Renshaw et al. (1996) to project the force of mortality for prostate cancer; the proposed modeling structure can pick up both age and year effects.
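The exact parameterization of the Renshaw et al. (1996) model is not reproduced in the abstract; the sketch below shows only the general shape of such an approach — a log-linear force of mortality with age and calendar-year effects, fitted as a Poisson GLM on death counts with an exposure offset. All data are simulated placeholders, not SEER data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Simulated placeholder data: deaths and person-years of exposure on an
# age x calendar-year grid.
ages = np.arange(50, 90)
years = np.arange(1990, 2011)
A, Y = np.meshgrid(ages, years, indexing="ij")
exposure = rng.uniform(5e3, 2e4, size=A.shape)
true_mu = np.exp(-9.5 + 0.085 * (A - 50) - 0.015 * (Y - 1990))  # improving mortality
deaths = rng.poisson(true_mu * exposure)

# Log-linear force of mortality: log mu(x, t) = b0 + b1 * age + b2 * year,
# a simplified age-period structure in the spirit of Renshaw-type models.
X = sm.add_constant(np.column_stack([A.ravel() - 50, Y.ravel() - 1990]))
fit = sm.GLM(deaths.ravel(), X, family=sm.families.Poisson(),
             offset=np.log(exposure.ravel())).fit()
print(fit.params)  # recovers b0, b1, b2

# Project the force of mortality for a 70-year-old in 2015
b0, b1, b2 = fit.params
print("mu(70, 2015) =", np.exp(b0 + b1 * 20 + b2 * 25))
```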
56

Protein Crystallization: Soft Matter and Chemical Physics Perspectives

Fusco, Diana January 2014 (has links)
X-ray and neutron crystallography are the predominant methods for obtaining atomic-scale information on biological macromolecules. Despite the success of these techniques, the generation of well diffracting crystals critically limits going from protein to structure. In practice, the crystallization process proceeds through knowledge-informed empiricism. A better physico-chemical understanding remains elusive because of the large number of variables involved; hence, little guidance is available to systematically identify solution conditions that promote crystallization.
The fields of structural biology and soft matter have independently sought out fundamental principles to rationalize protein crystallization, yet the conceptual differences and limited overlap between the two disciplines may have prevented a comprehensive understanding of the phenomenon from emerging. Part of this dissertation focuses on computational studies of rubredoxin and human ubiquitin that bridge the two fields.
Using atomistic simulations, the protein crystal contacts are characterized, and patchy particle models are parameterized accordingly. Comparing the phase diagrams of these schematic models with experimental results enables a critical review of the assumptions behind the two approaches, and reveals insights about protein-protein interactions that can be leveraged to crystallize proteins more generally. In addition, exploration of the model parameter space provides a rationale for several experimental observations, such as the success and occasional failure of George and Wilson's proposal for protein crystallization conditions and the competition between different crystal forms.
These simple physical models illuminate the connection between protein phase behavior and protein-protein interactions, which are, however, remarkably sensitive to the protein's chemical environment. To help determine relationships between physico-chemical protein properties and crystallization propensity, statistical models are trained on samples of 182 proteins supplied by the Northeast Structural Genomics consortium. Gaussian processes, which capture trends beyond the reach of linear statistical models, distinguish between two main physico-chemical mechanisms driving crystallization. One is characterized by low levels of side-chain entropy and has been extensively reported in the literature; the other involves specific electrostatic interactions not previously described in the crystallization context. Because evidence for the two distinct mechanisms can be gleaned both from crystal contacts and from solution conditions leading to successful crystallization, the model offers future avenues for optimizing crystallization screens based on partial structural information. The availability of crystallization data coupled with structural outcomes, analyzed through state-of-the-art statistical models, may thus guide macromolecular crystallization toward a more rational basis.
To conclude, the behavior of water in protein crystals is specifically examined. Water is not only essential for the correct functioning and folding of proteins, it is also a key player in protein crystal assembly. Although water occupies up to 80% of the volume fraction of a protein crystal, its structure has so far received little attention and is often overly simplified in the structural refinement process. Merging information derived from molecular dynamics simulations with the original structural information provides a way to better understand the behavior of water in crystals and to develop a method that enriches standard structural refinement. / Dissertation
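The Gaussian-process classification described above — mapping physico-chemical protein properties to crystallization propensity — can be sketched with scikit-learn. Features, labels, and kernel choice below are illustrative placeholders, not the NESG dataset or the dissertation's trained model:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Placeholder features for 182 proteins: [side-chain entropy score, net charge,
# surface hydrophobicity]. Labels: 1 = crystallized, 0 = did not.
n = 182
X = rng.normal(size=(n, 3))
# Two hypothetical "mechanisms": low side-chain entropy and an electrostatic term.
logit = -1.2 * X[:, 0] + 0.8 * X[:, 1] ** 2 - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# A GP with an RBF kernel captures smooth nonlinear trends beyond the reach of
# linear statistical models.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
print("cross-validated accuracy:", cross_val_score(gpc, X, y, cv=5).mean())
```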
57

A Microdata Analysis Approach to Transport Infrastructure Maintenance

Svenson, Kristin January 2017 (has links)
Maintenance of transport infrastructure assets is widely advocated as the key to minimizing current and future costs of the transportation network. While effective maintenance decisions are often a result of engineering skills and practical knowledge, efficient decisions must also account for the net result over an asset's life-cycle. One essential aspect of the long-term perspective on transport infrastructure maintenance is to proactively estimate maintenance needs. In dealing with immediate maintenance actions, support tools that can prioritize potential maintenance candidates are important for an efficient maintenance strategy. This dissertation consists of five individual research papers presenting a microdata analysis approach to transport infrastructure maintenance. Microdata analysis is a multidisciplinary field in which large quantities of data are collected, analyzed, and interpreted to improve decision-making. Increased access to transport infrastructure data enables a deeper understanding of causal effects and the possibility of predicting future outcomes. The microdata analysis approach covers the complete process from data collection to actual decisions and is therefore well suited to the task of improving efficiency in transport infrastructure maintenance. Statistical modeling was the selected analysis method in this dissertation and provided solutions to the different problems presented in each of the five papers. In Paper I, a time-to-event model was used to estimate remaining road pavement lifetimes in Sweden. In Paper II, an extension of the model in Paper I assessed the impact of latent variables on road lifetimes, identifying the sections of a road network that are weaker due to, for example, subsoil conditions or undetected heavy traffic. The study in Paper III incorporated a probabilistic parametric distribution as a representation of road lifetimes into an equation for the marginal cost of road wear; differentiated road-wear marginal costs for heavy and light vehicles are an important information basis for decisions regarding vehicle miles traveled (VMT) taxation policies. In Paper IV, a distribution-based clustering method was used to distinguish between road segments that are deteriorating and road segments in a stationary condition. Within railway networks, temporary speed restrictions are often imposed because of maintenance and must be addressed in order to maintain punctuality. The study in Paper V evaluated the empirical effect of speed restrictions on running time on a Norwegian railway line using a generalized linear mixed model.
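Paper I's remaining-lifetime estimation is a time-to-event problem with right censoring (many pavement sections have not yet failed when observed). A minimal Weibull sketch with simulated placeholder data, fitted by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Simulated placeholder data: pavement lifetimes (years), right-censored at the
# last inspection for sections that have not yet failed.
n = 400
true_life = rng.weibull(2.2, size=n) * 18.0          # shape 2.2, scale 18 years
censor_at = rng.uniform(5, 25, size=n)
time = np.minimum(true_life, censor_at)
event = (true_life <= censor_at).astype(float)       # 1 = failed, 0 = censored

def negloglik(theta):
    k, lam = np.exp(theta)                           # shape, scale (log-parameterized)
    z = time / lam
    log_pdf = np.log(k / lam) + (k - 1) * np.log(z) - z ** k   # failures
    log_surv = -z ** k                                         # censored sections
    return -np.sum(event * log_pdf + (1 - event) * log_surv)

res = minimize(negloglik, x0=[0.0, 2.0])
k_hat, lam_hat = np.exp(res.x)
print(f"shape = {k_hat:.2f}, scale = {lam_hat:.1f} years")
print("median lifetime:", lam_hat * np.log(2) ** (1 / k_hat))
```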
58

Modeling and Survival Analysis of Breast Cancer: A Statistical, Artificial Neural Network, and Decision Tree Approach

Mudunuru, Venkateswara Rao 26 March 2016 (has links)
Survival analysis is widely used today in the medical and biological sciences, the social sciences, econometrics, and engineering. It is a statistical approach designed to account for the time between entry into observation and a subsequent event; here the event of interest is death, and the analysis follows each subject until death. Events or outcomes are defined by a transition from one discrete state to another at an instantaneous moment in time. In recent years, research in survival analysis has grown considerably because of its wide use in the biosciences and pharmaceutical studies. After identifying the probability density functions that best characterize the tumor sizes and survival times of women with breast cancer, one purpose of this research is to compare the efficiency of competing estimators of the survival function. Our study includes parametric, semi-parametric, and nonparametric analyses of probability survival models. Artificial Neural Networks (ANNs), recently applied to clinical, business, forecasting, time-series prediction, and other problems, are computational systems consisting of artificial neurons called nodes, arranged in layers with interconnecting links. The main interest in neural networks comes from their ability to approximate complex nonlinear functions. Among the wide range of available neural networks, most research concentrates on feed-forward networks called multi-layer perceptrons (MLPs). An important component of an ANN is the activation function, and this work discusses the properties of activation functions in multilayer neural networks applied to breast cancer stage classification. A further objective is to compare and analyze the performance of MLPs trained with back-propagation, using various activation functions for the neurons of the hidden and output layers, on the stage classification of breast cancer data. Survival analysis can also be cast as a classification problem, to which machine-learning methods apply: by establishing meaningful time intervals for a particular situation, survival can be predicted as class membership. The method commonly used to classify such data is logistic regression, but its underlying assumptions do not always hold, and choosing an appropriate model depends on the complexity and characteristics of the data. Two strategies in frequent use today, artificial neural networks and decision trees (DTs), require minimal assumptions; both are nonlinear modeling tools that can provide better prediction and classification results than traditional methodologies such as logistic regression. This study therefore compares predictions of the ANN, DT, and logistic models of breast cancer survival. Our goal is to design models using both artificial neural networks and logistic regression that can precisely predict the survival of breast cancer patients.
Finally, we compare the performance of these models using receiver operating characteristic (ROC) analysis.
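The comparison the abstract describes — MLPs with different activation functions versus logistic regression and a decision tree, scored by ROC — maps directly onto scikit-learn. The synthetic data below is a placeholder for the SEER variables:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholder for SEER-style features (age, tumor size, stage, ...) and a
# binary survival outcome.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"logistic regression": LogisticRegression(max_iter=1000),
          "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0)}
# MLPs trained with back-propagation, varying the hidden-layer activation
for act in ["logistic", "tanh", "relu"]:
    models[f"MLP ({act})"] = MLPClassifier(hidden_layer_sizes=(16,),
                                           activation=act, max_iter=2000,
                                           random_state=0)

for name, model in models.items():
    auc = roc_auc_score(y_te, model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```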
59

Two-dimensional expressive speech animation = Animação 2D de fala expressiva

Costa, Paula Dornhofer Paro, 1978- 26 August 2018 (has links)
Advisor: José Mario De Martino / Doctoral dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: Facial animation technology faces an increasing demand for applications involving virtual assistants, sellers, tutors, and newscasters; lifelike game characters; social agents; and tools for scientific experiments in psychology and the behavioral sciences. A relevant and challenging aspect of the development of talking heads is the realistic reproduction of speech articulatory movements combined with the elements of non-verbal communication and the expression of emotions. This work presents an image-based (2D) facial animation synthesis methodology that can reproduce a wide range of expressive speech emotional states and also supports the modulation of head movements and the control of facial elements such as the blinking of the eyes and the raising of the eyebrows. The synthesis uses a database of prototype images that are combined to produce animation keyframes. The weights used for combining the prototype images are derived from a statistical active appearance model (AAM) built from a set of sample images extracted from an audio-visual corpus of a real face.
The generation of the animation keyframes is driven by the timed phonetic transcription of the speech to be animated and by the desired emotional state. The keyposes consist of expressive context-dependent visemes that implicitly model speech coarticulation effects. The transition between adjacent keyposes is performed through a non-linear image morphing algorithm. The synthesized animations were assessed through a perceptual evaluation based on the recognition of emotions. Among the contributions of this work is the construction of a database of expressive-speech video and motion-capture data for Brazilian Portuguese. / Doctorate / Computer Engineering / Doctor in Electrical Engineering
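The two synthesis steps the abstract names — a weighted combination of prototype images to form a keypose, then a non-linear transition between adjacent keyposes — reduce to a few lines of array arithmetic. The sketch below uses a plain cross-dissolve with an ease-in/ease-out timing curve as a stand-in for the thesis's full warp-plus-blend morphing algorithm; images and weights are placeholders:

```python
import numpy as np

rng = np.random.default_rng(5)

# Placeholder prototype images (grayscale, 64x64) and AAM-derived blending
# weights for one keypose; in the thesis the weights come from the fitted
# statistical appearance model.
prototypes = rng.random((4, 64, 64))
weights = np.array([0.5, 0.2, 0.2, 0.1])          # convex weights, sum to 1

def keypose(prototypes, weights):
    """Keyframe as a convex combination of prototype images."""
    return np.tensordot(weights, prototypes, axes=1)

def ease(s):
    """Non-linear (smoothstep) timing for the keypose transition."""
    return s * s * (3 - 2 * s)

def transition(pose_a, pose_b, n_frames):
    """Cross-dissolve stand-in for the warp-based image morph."""
    return np.stack([(1 - ease(s)) * pose_a + ease(s) * pose_b
                     for s in np.linspace(0, 1, n_frames)])

pose_a = keypose(prototypes, weights)
pose_b = keypose(prototypes, np.array([0.1, 0.2, 0.3, 0.4]))
clip = transition(pose_a, pose_b, n_frames=12)
print(clip.shape)  # (12, 64, 64): frames between two context-dependent visemes
```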
