31

[en] A COMPARATIVE STUDY OF METHODOLOGIES FOR MODELLING COMPLEX SURVEY DATA - AN APPLICATION TO SAEB 99 / [es] UN ESTUDIO COMPARATIVO DE LAS METODOLOGÍAS DE MODELAJE DE DATOS PROVENIENTES DE MUESTREOS COMPLEJOS - UNA APLICACIÓN AL SAEB 99 / [pt] UM ESTUDO COMPARATIVO DAS METODOLOGIAS DE MODELAGEM DE DADOS AMOSTRAIS COMPLEXOS - UMA APLICAÇÃO AO SAEB 99

MARCEL DE TOLEDO VIEIRA 23 July 2001 (has links)
Taking the sample design into account is essential when analysing and modelling data selected through complex sampling designs; only then can results be produced that are genuinely useful and reliable for public policy makers. The main objective of this dissertation is to draw attention to the importance of using techniques suited to complex survey data, and to discuss the consequences of not adopting them. Suitable methodologies fall into two approaches. The first, the aggregated approach, incorporates sampling weights and design effects into the fitting of standard statistical models such as contingency tables and regression. The second, the disaggregated approach, changes the modelling logic itself to incorporate the effects of complex sampling, for example through hierarchical (or multilevel) linear models.

The data analysed in this dissertation were collected in 1999 by the Brazilian National System of Basic Education Assessment (SAEB), a survey comprising an achievement exam and a socio-economic-demographic questionnaire covering more than 200,000 students, their schools, teachers and principals. The SAEB 99 sample was selected under a complex design: stratified random sampling of clustered units in multiple stages. Point estimation of descriptive statistics from complex survey data poses no great difficulty provided the expansion weights are used correctly. An example illustrates the importance of the sampling weights in estimation, showing that ignoring them when computing a mean would, in the case examined, overestimate it.

The dissertation presents the theoretical aspects of the techniques, appropriate to complex survey data, for point estimation of regression model parameters and their variances, and discusses design effects, confidence intervals, hypothesis tests, and the SUDAAN package. The techniques are applied to the SAEB 99 data, alongside a study of the determinants of student proficiency. The consequences of ignoring the sample design when estimating model parameters and their variances are presented and analysed, and the results are given an educational interpretation.
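The overestimation effect described in the abstract is easy to reproduce. Below is a minimal Python sketch, with invented proficiency scores and expansion weights (not actual SAEB 99 values), of how ignoring the weights biases the estimated mean upward when the heavily weighted part of the population scores low:

```python
import numpy as np

# Hypothetical proficiency scores and expansion weights for a complex
# sample (illustrative values only; not actual SAEB 99 data).
scores = np.array([210.0, 245.0, 260.0, 300.0, 320.0])
weights = np.array([900.0, 400.0, 250.0, 120.0, 80.0])  # low scorers carry large weights

unweighted_mean = scores.mean()
# Horvitz-Thompson style weighted mean: sum(w * y) / sum(w)
weighted_mean = np.average(scores, weights=weights)

print(f"unweighted: {unweighted_mean:.1f}")  # ignores the design, ~267
print(f"weighted:   {weighted_mean:.1f}")    # expands each unit to the population, ~236
```

Because the oversampled (low-weight) units here are the high scorers, the unweighted mean overstates the population mean, which is the situation the dissertation warns about.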
32

VARIATION I VALDELTAGANDE : En statistisk undersökning av moderniseringens subnationella effekter på det svenska valdeltagandet / VARIATION IN VOTER TURNOUT : A statistical study of the subnational effects of modernization on Swedish voter turnout

Elfving, Johan, Rosén, Elin January 2017 (has links)
This essay sets out to add to the knowledge of political participation at an aggregated level in Sweden. Its theoretical starting point is the modernization theory of Lipset (1959), which is tested to see whether its variables affect Swedish voter turnout; the study also asks whether the economic context conditions the effect of modernization. The main questions are: (1) what effect does modernization have on voter turnout in Swedish municipalities? (2) How does the business cycle influence the effect of modernization on voter turnout? The method is a quantitative analysis in the form of bivariate and multiple regression, drawing on statistics from several public agencies and covering three election years: 1994, 2006 and 2014. The empirical study shows that socioeconomic preconditions, such as average income level, and urbanization have a strong positive effect on Swedish voter turnout at an aggregated level. Modernization theory is thus not relevant in full; rather, certain parts of the original theory remain relevant today. The effect of modernization on voter turnout is also much stronger when the economy is booming, and it weakens when the economy turns down.
33

Empirical Essays in Development Economics

Dadzie, Nicholas Nyamekeh January 2013 (has links)
No description available.
34

Development of simplified power grid models in EU project Spine

Alharbi, Mohammad January 2020 (has links)
The electric power system is among the largest and most complex man-made physical networks in the world. Growing electricity demand, the integration of ICT for the modernization of the grid, and the introduction of intermittent renewable generation have further increased the complexity of operating and planning the grid optimally. Analysing large-scale power systems while considering all state variables is therefore a very complicated procedure, so it is necessary to explore methods that represent the original network with smaller equivalent networks in order to simplify power system studies. The equivalent network should provide an accurate and efficient estimate of the behaviour of the original power system without requiring a full analytical model of the grid infrastructure.

This thesis investigates partitioning and reduction methodologies in order to develop a suitable reduction model. The K-means and K-medoids clustering algorithms are employed to partition the network into clusters of buses. The Radial, Equivalent, and Independent (REI) method is then further developed, implemented, and evaluated for obtaining a reduced, equivalent circuit of each cluster of the original power system. The basic idea of the REI method is to aggregate the power injections of the eliminated buses onto two fictitious buses through a zero power balance network.

The method is implemented in the Julia language using the PowerModels.jl package. The reduction methodology is evaluated on the IEEE 5-bus, 30-bus, and 118-bus systems by comparing a series of accuracy and performance indices. Factors examined include the chosen number of clusters, different assumptions for the slack bus, and the effect of the voltage limits imposed on the fictitious REI buses.
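As a rough illustration of the partitioning step that precedes the REI reduction, the sketch below clusters buses with scikit-learn's K-means. The feature matrix is invented (the thesis's actual bus features, such as electrical distances, are not given here), so this only shows the shape of the workflow:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature matrix: one row per bus. A common choice is a row
# of an electrical-distance matrix or geographic coordinates; the exact
# features used in the thesis are not specified here.
rng = np.random.default_rng(0)
bus_features = rng.normal(size=(30, 2))  # e.g. the 30 buses of the IEEE 30-bus system

n_clusters = 4
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(bus_features)

# Each cluster would then be collapsed into an REI equivalent: the
# eliminated buses' injections aggregated onto fictitious buses through
# a zero-power-balance network.
for c in range(n_clusters):
    members = np.where(km.labels_ == c)[0]
    print(f"cluster {c}: buses {members.tolist()}")
```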
35

Programação linear com controle de risco para o planejamento da operação do SIN / Linear programming with risk control for the operation planning of SIN

Rui Bertho Junior 08 March 2013 (has links)
The operation planning of the Brazilian national interconnected power system (SIN) is performed by a chain of computational models for optimization and simulation of system operation. However, the deficit risk, an important energy-security indicator for the electric sector, is treated only as an output variable of these models. Medium-term planning uses the NEWAVE software, which represents the system as aggregated equivalent subsystems. This work proposes a linear optimization model for medium-term operation planning that is able to take the deficit risk into account directly in its formulation. To control the deficit risk, the risk measure known as CVaR (Conditional Value at Risk) is proposed, because it is a coherent risk measure and can be implemented through a set of linear constraints.
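The claim that CVaR can be enforced through linear constraints follows the Rockafellar-Uryasev linearization: CVaR_alpha(L) = min over t of { t + E[(L - t)+] / (1 - alpha) }, where the positive parts become auxiliary variables. The sketch below encodes this in a deliberately tiny dispatch problem with SciPy; all scenario data, costs and the CVaR budget are hypothetical, and the model is far simpler than the NEWAVE/SIN formulation:

```python
import numpy as np
from scipy.optimize import linprog

h = np.array([60.0, 45.0, 30.0, 10.0])  # hydro energy per scenario (hypothetical)
p = np.full(4, 0.25)                     # scenario probabilities (uniform)
demand, c_thermal = 100.0, 1.0
alpha, cvar_max = 0.75, 8.0              # CVaR level and deficit budget

S = len(h)
n = 2 + 2 * S                            # variables: [x, t, deficit_1..S, u_1..S]
cost = np.zeros(n); cost[0] = c_thermal  # minimize thermal dispatch cost

A_ub, b_ub = [], []
for s in range(S):
    # deficit_s >= demand - x - h_s   <=>   -x - deficit_s <= h_s - demand
    row = np.zeros(n); row[0] = -1.0; row[2 + s] = -1.0
    A_ub.append(row); b_ub.append(h[s] - demand)
    # u_s >= deficit_s - t   <=>   deficit_s - t - u_s <= 0
    row = np.zeros(n); row[2 + s] = 1.0; row[1] = -1.0; row[2 + S + s] = -1.0
    A_ub.append(row); b_ub.append(0.0)
# CVaR(deficit) = t + sum_s p_s * u_s / (1 - alpha) <= cvar_max
row = np.zeros(n); row[1] = 1.0; row[2 + S:] = p / (1.0 - alpha)
A_ub.append(row); b_ub.append(cvar_max)

bounds = [(0, None), (None, None)] + [(0, None)] * (2 * S)
res = linprog(cost, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds)
print(res.x[0], res.fun)  # dispatch that keeps the CVaR of the deficit within budget
```

Minimizing cost pushes the dispatch down while the CVaR constraint pushes it up, so the tail of the deficit distribution, not just its mean, is controlled, which is what makes CVaR attractive as a coherent risk measure here.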
37

Inference for Discrete Time Stochastic Processes using Aggregated Survey Data

Davis, Brett Andrew, Brett.Davis@abs.gov.au January 2003 (has links)
We consider a longitudinal system in which transitions between states are governed by a discrete time finite state space stochastic process X. Our aim, using aggregated sample survey data of the form typically collected by official statistical agencies, is to undertake model-based inference for the underlying process X. We develop inferential techniques for continuing sample surveys of two distinct types: first, longitudinal surveys, in which the same individuals are sampled in each cycle of the survey; second, cross-sectional surveys, which sample the same population in successive cycles but make no attempt to track particular individuals from one cycle to the next. Some of the basic results have appeared in Davis et al (2001) and Davis et al (2002).

Longitudinal surveys provide data in the form of transition frequencies between the states of X. In Chapter Two we develop a method for modelling and estimating the one-step transition probabilities in the case where X is a non-homogeneous Markov chain and transition frequencies are observed at unit time intervals. However, due to their expense, longitudinal surveys are typically conducted at widely, and sometimes irregularly, spaced time points; that is, the observable frequencies pertain to multi-step transitions. Continuing to assume the Markov property for X, in Chapter Three we show that these multi-step transition frequencies can be stochastically interpolated to provide accurate estimates of the one-step transition probabilities of the underlying process. These unit-increment estimates can be used to calculate estimates of expected future occupation time in the different states of X, conditional on an individual's state at the initial point of observation.

For reasons of cost, most statistical collections run by official agencies are cross-sectional sample surveys. The data observed from an on-going survey of this type are marginal frequencies in the states of X at a sequence of time points. In Chapter Four we develop a model-based technique for estimating the marginal probabilities of X using data of this form. Note that, in contrast to the longitudinal case, the Markov assumption does not simplify inference based on marginal frequencies. The marginal probability estimates enable estimation of future occupation times (in each of the states of X) for an individual of unspecified initial state, although in the applications discussed (see Sections 4.4 and 4.5) the estimated occupation times are conditional on both gender and initial age.

The longitudinal data envisaged in Chapter Two come from the surveillance of the same sample in each cycle of an on-going survey. In practice, to preserve data quality it is necessary to control respondent burden using sample rotation, usually through a mechanism known as rotation group sampling. In Chapter Five we consider the particular form of rotation group sampling used by the Australian Bureau of Statistics in its Monthly Labour Force Survey (from which official estimates of labour force participation rates are produced). We show that our approach to estimating the one-step transition probabilities of X from transition frequencies observed at incremental time intervals, developed in Chapter Two, can be modified to deal with data collected under this rotation scheme. Furthermore, we show that valid inference is possible even when the Markov property does not hold for the underlying process.
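To make the estimation targets concrete, here is a small Python sketch (with invented counts) of the maximum-likelihood one-step estimate from unit-interval transition frequencies, plus a naive matrix-root recovery of a one-step matrix from a k-step one. The matrix root merely illustrates the problem that Chapter Three's stochastic interpolation addresses; it is not the thesis's method, and in general the root need not be a valid stochastic matrix:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

# Hypothetical 3-state transition counts from one cycle of a longitudinal
# survey (rows: state at time n, columns: state at time n+1).
counts = np.array([[120.0,  30.0,  10.0],
                   [ 25.0, 200.0,  15.0],
                   [  5.0,  20.0,  75.0]])

# MLE of the one-step transition matrix: row-normalised frequencies.
P_hat = counts / counts.sum(axis=1, keepdims=True)

# If survey cycles are k time units apart, the observed frequencies
# estimate the k-step matrix P^k. A crude way to recover a one-step
# matrix is a matrix k-th root; the result may need projection back
# onto the set of valid stochastic matrices.
k = 3
P_k = np.linalg.matrix_power(P_hat, k)          # pretend this is what was observed
P_root = fractional_matrix_power(P_k, 1.0 / k)  # approximate one-step matrix
print(np.round(P_root - P_hat, 6))              # near zero in this toy example
```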
38

Entwicklung eines aggregierten Modells zur Simulation der Gewässergüte in Talsperren als Baustein eines Flussgebietsmodells / Development of an aggregated model for simulating water quality in reservoirs as a building block of a river basin model

Siemens, Katja 20 January 2010 (has links)
The large-scale extraction of lignite in Lusatia had an extreme impact on the water balance of the Spree river catchment. The restoration and flooding of the open-cast pits place heavy demands on the existing surface waters over a long period, and the resulting artificial lakes have to be integrated into the river network. Coupling water-management models with water-quality models makes it possible to consider both the availability and distribution of the limited water resources in the catchment and the water quality resulting from their management. This corresponds to the principles of the EU Water Framework Directive (2000) for integrated river basin management: a basin-wide consideration of the available resources that takes into account all influencing and influenced characteristics. If models describing systems of different sensitivity and complexity are to be coupled, their data structures and time scales must be adjusted.

The main goal of this work was to develop simple, robust simulation tools for predicting water quality in the Bautzen and Quitzdorf reservoirs. The complex lake water-quality model SALMO served as the basis. In a first step, simple algorithms were added so that the model produced plausible results despite a strongly reduced data basis. Stochastically generated management scenarios, together with the corresponding water-quality results simulated by the complex model, were then used as training data for an artificial neural network (ANN). The ANNs trained for the two reservoirs are efficient black-box modules able to mirror the complex system behaviour of the deterministic model SALMO. By coupling the developed ANNs with the management model WBalMo, management alternatives can be evaluated in terms of their consequences for water quality. ANNs are system-specific models and cannot be transferred to other water bodies; however, the methodology developed here is a sound approach that can be applied to the development of further aggregated water-quality modules within integrated management models.
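The surrogate-training workflow can be sketched in a few lines. In the toy below, a made-up function stands in for the deterministic water-quality model (SALMO) and scikit-learn's MLPRegressor stands in for the ANN; the scenario variables and network size are assumptions for illustration only:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A deterministic water-quality model (here a made-up function playing
# the role of SALMO) is evaluated on stochastically generated management
# scenarios, and an ANN is trained on the pairs as a fast surrogate.
rng = np.random.default_rng(42)
scenarios = rng.uniform(0.0, 1.0, size=(500, 3))  # e.g. inflow, withdrawal, nutrient load

def quality_model(x):
    # placeholder for the complex deterministic simulation
    return np.sin(3 * x[:, 0]) + x[:, 1] * x[:, 2] + 0.1 * x[:, 2] ** 2

quality = quality_model(scenarios)

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
)
surrogate.fit(scenarios, quality)

# The trained surrogate can be embedded in a management model (WBalMo in
# the thesis) and queried cheaply for new scenarios.
new_scenario = np.array([[0.4, 0.7, 0.2]])
print(surrogate.predict(new_scenario))
```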
39

The determinants of UK Equity Risk Premium

Chandorkar, Pankaj Avinash January 2016 (has links)
The Equity Risk Premium (ERP) is a cornerstone of financial economics: it is a basic input to stock valuation, portfolio performance evaluation, and asset allocation. Over the past decades, several studies have investigated the relationship between macroeconomic drivers and the ERP. In this work I empirically investigate the macroeconomic determinants of the UK ERP, parsimoniously covering the large body of literature stemming from the ERP puzzle and motivating the empirical investigation through three mutually exclusive theoretical lenses. The thesis is organised in journal-paper format.

In the first paper I review the literature on the ERP over the past twenty-eight years. The aim of the paper is threefold: first, to review the methods and techniques proposed in the literature for estimating the ERP; second, to review the literature that attempts to resolve the ERP puzzle, first coined by Mehra and Prescott (1985), by exploring five types of modification to the standard utility framework; and third, to review the literature that investigates the relationship between the ERP and various macroeconomic and market factors in domestic and international contexts. I find that the ERP puzzle remains a puzzle within the universe of the standard power utility framework and the Consumption Capital Asset Pricing Model, a conclusion in line with Kocherlakota (1996) and Mehra (2003).

In the second paper I investigate the impact of structural monetary policy shocks on the ex-post ERP; specifically, whether the response of the UK ERP to such shocks differs before and after the implementation of Quantitative Easing (QE) in the UK. I find that monetary policy shocks negatively affect the ERP at the aggregate level, while at the sectoral level the magnitude of the response is heterogeneous. Further, monetary policy shocks have a significant negative (positive) impact on the ERP before (after) the implementation of QE. This evidence sheds light on the equity market's asymmetric response to the Bank of England's monetary policy before and after the monetary stimulus.

In the third paper I examine the impact of aggregate and disaggregate consumption shocks on the ex-post ERP of various FTSE indices and of the 25 Fama-French style value-weighted portfolios constructed on size and book-to-market characteristics. I extract consumption shocks using a Structural Vector Autoregression (SVAR) and investigate their time-series and cross-sectional implications for the ERP in the UK. These structural shocks represent the deviation of agents' actual consumption path from its theoretically expected path. Aggregate consumption shocks explain significant time variation in the ERP; at the disaggregated level, the ERP rises when actual consumption falls short of expectations, and durable and semi-durable consumption shocks have a greater impact on the ERP than non-durable ones.

In the fourth and final paper I investigate the impact of short- and long-term market implied volatility on the UK ERP, and examine the pricing implications of innovations to short- and long-term implied market volatility in the cross-section of stock returns. Both short- and long-term implied volatility have a significant negative impact on the aggregate ERP, while the sectoral impact is heterogeneous. Both are priced negatively, indicating that (i) investors care about both short- and long-term market implied volatility and (ii) they are willing to pay for insurance against these risks.
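A schematic version of the shock-extraction-and-regression exercise is sketched below with simulated data. The thesis identifies structural shocks via an SVAR; here, for brevity, reduced-form VAR residuals stand in for the structural consumption shocks, and the series themselves are invented rather than UK data:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.api import VAR

# Simulated stand-ins for the macro series and the ex-post ERP.
rng = np.random.default_rng(1)
T = 300
macro = rng.normal(size=(T, 2))                           # e.g. consumption and income growth
erp = 0.5 * macro[:, 0] + rng.normal(scale=0.5, size=T)   # ex-post equity risk premium

# Fit a reduced-form VAR; its residuals play the role of the shocks
# (the thesis applies a structural identification on top of this step).
var_res = VAR(macro).fit(maxlags=2)
shocks = var_res.resid[:, 0]           # residuals of the consumption equation

# Align the ERP with the residual sample (the VAR drops the first k_ar obs)
# and regress the ERP on the extracted shocks.
y = erp[var_res.k_ar:]
X = sm.add_constant(shocks)
ols = sm.OLS(y, X).fit()
print(ols.params, ols.pvalues)
```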
40

Planejamento da produção e da logística para empresas produtoras de sementes de milho / Production and logistics planning for corn seed producing companies

Junqueira, Rogério de Ávila Ribeiro 23 May 2006 (has links)
The production of corn seeds involves a complex agro-industrial production chain whose agents must offer high-quality products at low cost to stay competitive. Efficient tools to coordinate this chain are therefore essential. This work proposes a linear programming model for the tactical planning of production, storage and transportation, designed to minimize production, transportation and tax costs subject to crop-schedule, capacity and demand constraints. Although relevant to the product's final cost, taxes such as the Brazilian ICMS are not traditionally considered by the planning methods currently used in the seed industry: the raw material is generally sent to the industrial unit closest to the farm, with proximity to demand treated as secondary. The main productive processes and features of the seed industry are described based on the literature and on visits to corn seed producers. The mathematical model is implemented in the GAMS language and solved with CPLEX. It was validated on different scenarios built from realistic data collected during the plant visits, and the results were consistent with expectations. A case study then compared the model's results, using a full season of data from one of the companies, against the company's traditional method of assigning each farm to the nearest industrial unit. The proposed model achieved a substantial reduction in total cost, confirming, for this case as well, the importance of integrating tax planning with production and transportation planning at the logistics level.
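The contrast between nearest-plant assignment and joint cost minimization can be sketched as a small transportation LP. Everything below is hypothetical (two regions, two plants, and a per-route "tax" term standing in for ICMS differences); it only shows how tax costs enter the objective alongside transport costs:

```python
import numpy as np
from scipy.optimize import linprog

# Route seed lots from growing regions to processing plants, minimizing
# transport plus tax cost rather than shipping to the nearest plant.
transport = np.array([[4.0, 9.0],    # region 0 -> plants 0,1 (cost per ton)
                      [7.0, 3.0]])   # region 1 -> plants 0,1
tax = np.array([[2.0, 0.5],          # hypothetical interstate tax cost per ton
                [0.5, 2.0]])
supply = np.array([100.0, 80.0])     # tons harvested per region
capacity = np.array([120.0, 120.0])  # tons each plant can process

cost = (transport + tax).ravel()     # flatten x[r, p] -> x[2*r + p]

# each region ships out exactly its harvest
A_eq = np.array([[1, 1, 0, 0],
                 [0, 0, 1, 1]], dtype=float)
# plant intake stays within capacity
A_ub = np.array([[1, 0, 1, 0],
                 [0, 1, 0, 1]], dtype=float)

res = linprog(cost, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply,
              bounds=[(0, None)] * 4)
print(res.x.reshape(2, 2))  # optimal flows region x plant
print(res.fun)              # minimum transport + tax cost
```

In the full model the same structure carries additional stages (production and storage periods) and the real ICMS rules, which is where the gains over the shortest-distance heuristic come from.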
