71 |
Estrutura conceitual para análise de risco nas operações de corte, transbordo e transporte: estudo de caso em uma empresa paulista do setor sucroenergético / Conceptual framework for risk analysis in cutting, transhipment and transportation operations: a case study at a São Paulo company in the sugarcane sector. Assumpção, André Luís, 30 August 2018 (has links)
Risk management has proven to be a very successful option for companies in many sectors: it gives management better means of analyzing and controlling losses, and it promotes competitiveness through better management practices. Given the lack of studies on the risks in these operations, a need was identified to propose a conceptual framework for analyzing and evaluating the risks intrinsic to the Cutting, Transhipment and Transportation (CTT) of sugarcane at a São Paulo company in the sugar-energy sector. This qualitative research consists of three stages. The first stage involved a literature review on risk management, international standards, risk analysis tools and the sugar and ethanol sector. In the second stage, document analysis, participant observation and semi-structured interviews with eight specialists were carried out in order to understand more deeply the risks incurred in the CTT system. In the third stage, a conceptual framework for risk analysis and evaluation in the CTT system was presented, based on the tools from international risk management standards that best fit the suggested structure: Fault Tree Analysis (FTA), Bow-Tie Analysis, and the Risk Matrix (probability and consequence). According to industry experts, this conceptual framework for risk analysis in CTT operations can contribute significantly to the studied company and to other organizations in the sugar-energy sector, helping management make decisions that reduce costs and the losses and damage caused to the vehicles, machines and equipment, people and environment involved in CTT operations.
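One of the three tools the framework adopts, the probability-consequence risk matrix, can be sketched in a few lines. The level labels, rating thresholds and example CTT risks below are illustrative assumptions, not values taken from the thesis.

```python
# Minimal sketch of a 5x5 probability-consequence risk matrix.
PROBABILITY = ["rare", "unlikely", "possible", "likely", "almost certain"]
CONSEQUENCE = ["negligible", "minor", "moderate", "major", "catastrophic"]

def rate(probability: str, consequence: str) -> str:
    """Combine the two ordinal scales into a single risk rating."""
    score = (PROBABILITY.index(probability) + 1) * (CONSEQUENCE.index(consequence) + 1)
    if score >= 15:
        return "extreme"
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# Hypothetical CTT risks, for illustration only
risks = {
    "harvester collision with transhipment tractor": ("possible", "major"),
    "cane spillage during road transport": ("unlikely", "minor"),
}
for name, (p, c) in risks.items():
    print(f"{name}: {rate(p, c)}")
```

In practice the thresholds and scale sizes would come from the organization's risk appetite; the point is only that each identified risk maps to one cell of the matrix.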
|
72 |
Avaliação de riscos em processos de implantação de produção enxuta / Risk assessment in lean production implementation processes. Marodin, Giuliano Almeida, January 2013 (has links)
Companies worldwide have improved operational performance through lean production implementation (LPI), although the results are often smaller than expected, slow to materialize or difficult to sustain in the long term. These facts indicate a need to deepen knowledge of the difficulties in the LPI process. In this research, those difficulties are reinterpreted and investigated from a risk management (RM) perspective, since RM leads to systematic treatment of the risks under the PDCA logic and to a broad understanding of the context in which they occur. The main objective of this thesis was to develop a method for risk assessment in LPI. The research was structured as five articles, with the following methods and purposes: 1) a systematic literature review to identify the state of the art on LPI and propose an agenda for future research; 2) a survey and a case study to identify and validate a list of the major risks present in LPI and the relationships between them; 3) a literature review to create an RM framework for LPI, developing procedures for the context-description and risk-assessment steps; 4) a case study applying the context-description and risk-analysis steps, both part of risk assessment; 5) a case study highlighting risk analysis and risk evaluation through modeling of the relationships between risks. The main contributions of the thesis are to develop and test a method for risk assessment in LPI, to identify and describe the characteristics of the main risks in LPI, and to create procedures for risk identification, analysis and evaluation, the three steps of risk assessment.
|
73 |
Application of fuzzy logic in the Streeter-Phelps model to analyze the risk of contamination of rivers, considering multiple processes and multiple discharges / Aplicação da lógica fuzzy no modelo de Streeter-Phelps para analisar o risco de contaminação das águas de rios, considerando múltiplos processos e múltiplos lançamentos. Sales, Raquel Jucá de Moraes, 12 February 2014 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In an attempt to facilitate the diagnosis of the various factors that affect water quality and to predict possible future impacts on the environment, actions are adopted to rationalize water use through the optimization of natural and technological processes. Mathematical modeling is one example; combined with fuzzy theory, which allows results to be analyzed without the need for large databases, risk can be established as an indicator of river water contamination, with practical value for decision making and the granting of discharge permits. In this study, a mathematical model was developed applying the full Streeter-Phelps equations with fuzzy-number theory, in order to analyze the contamination risk of a watercourse that receives pollutants from multiple discharge sources. Model simulations examined different scenarios, verifying the influence of the model parameters, as well as of point and nonpoint pollution sources, on the risk percentages. According to the results, the amount of load discharged influences the dilution time of that mass in the system: for larger discharges the dilution time is shorter, favoring the decay processes and the formation of the benthic layer. Regarding the physical, chemical and biological reactions, the sedimentation, photosynthesis and respiration processes, for the average data found in the literature, have little influence on the behavior of the dissolved oxygen (DO) concentration and risk curves, while the nitrification process has a strong influence. Temperature plays a significant role in DO behavior: at higher temperatures the DO deficit is larger and, consequently, the risk percentages increase. Finally, the model, developed to facilitate decision making in the control of effluent discharges into rivers, proved to be a viable alternative of practical analytical value, since the objectives were achieved.
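The classical (non-fuzzy) core of the approach is the Streeter-Phelps DO deficit, D(t) = kd·L0/(ka−kd)·(e^(−kd·t) − e^(−ka·t)) + D0·e^(−ka·t); fuzziness can enter by treating a parameter as a triangular fuzzy number evaluated over alpha-cut intervals. The sketch below is a minimal illustration of that combination, not the thesis model, and every coefficient value is an invented placeholder.

```python
# Streeter-Phelps DO deficit with a triangular fuzzy deoxygenation rate kd.
import numpy as np

def do_deficit(t, L0, D0, kd, ka):
    """Streeter-Phelps DO deficit (mg/L) at travel time t (days)."""
    return kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t)) + D0 * np.exp(-ka * t)

def tri_alpha_cut(low, mode, high, alpha):
    """Interval of a triangular fuzzy number at membership level alpha."""
    return (low + alpha * (mode - low), high - alpha * (high - mode))

DO_SAT, DO_CRIT = 8.0, 4.0        # saturation and critical DO (mg/L), illustrative
t = np.linspace(0.01, 10, 500)    # travel times along the reach (days)
for alpha in (0.0, 0.5, 1.0):
    kd_lo, kd_hi = tri_alpha_cut(0.2, 0.35, 0.5, alpha)   # fuzzy kd (1/day)
    # Worst-case deficit envelope over the kd interval (crisp ka = 0.6/day)
    worst = np.maximum(do_deficit(t, 20.0, 1.0, kd_lo, 0.6),
                       do_deficit(t, 20.0, 1.0, kd_hi, 0.6))
    risk = np.mean(DO_SAT - worst < DO_CRIT)   # fraction of the reach below critical DO
    print(f"alpha={alpha}: fraction of reach below critical DO = {risk:.2f}")
```

A fuller treatment would fuzzify the loads and reaeration rate as well and derive a membership function for the risk itself.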
|
75 |
Dispersão de gases densos na atmosfera / Dispersion of dense gases in the atmosphere. Fontana, Geraldo Luiz Pereira, 21 December 2004 (has links)
Advisor: Rubens Maciel Filho / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Química
Abstract: This work describes significant alterations to the DENZ computer code, which is applied to the study of the consequences of accidentally releasing large quantities of heavy (i.e. heavier-than-air) toxic or flammable gases to the atmosphere. It also discusses why the study of such substances is necessary and describes the characteristic features of their behaviour. With these alterations it is possible to run the code on a microcomputer PC, including additional graphical facilities and a friendly interface. The most interesting characteristic of the dispersion of this type of gas is the phenomenon of gravity-driven slumping. This feature can lead to the rapid formation of toxic clouds that may affect people standing close to the release or even upwind. The second important feature of dense gas dispersion is that, once slumping ends, atmospheric turbulence dilutes the cloud in such a way that the growth rate of the cloud height is considerably smaller than that expected for a passive plume: the entrainment of air is suppressed in the presence of a density gradient. As the cloud is diluted and becomes warmer, through heat transfer with the surroundings, its density approaches that of the surrounding air. It becomes increasingly like a passive gas and, in due course, may be treated as a "conventional" plume using standard models of atmospheric dispersion. The predictions allow the dispersion phenomena of dense gases to be understood and represent a powerful decision-making tool, which can also be employed in personnel training for factory emergency situations. / Master's in Chemical Engineering (Chemical Process Development)
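The gravity-driven slumping phase described above is often captured with a "box model": a cylindrical cloud whose front advances at a speed proportional to the square root of reduced gravity times cloud height, conserving volume as it spreads. The sketch below illustrates that idea only; the front constant, time step and initial conditions are invented placeholders, not values from the dissertation or from DENZ, and density is held constant (air entrainment during slumping is ignored).

```python
# Toy box model of gravity slumping for a cylindrical dense-gas cloud.
import math

G = 9.81        # gravity (m/s^2)
RHO_AIR = 1.2   # ambient air density (kg/m^3)

def slump(radius, height, rho_cloud, dt=0.1, steps=600, front_const=1.1):
    """Spread a cylindrical dense cloud by gravity, conserving its volume."""
    volume = math.pi * radius**2 * height
    g_prime = G * (rho_cloud - RHO_AIR) / RHO_AIR   # reduced gravity
    for _ in range(steps):
        front_speed = front_const * math.sqrt(g_prime * height)
        radius += front_speed * dt                  # gravity front advances
        height = volume / (math.pi * radius**2)     # cloud thins as it spreads
    return radius, height

r, h = slump(radius=10.0, height=5.0, rho_cloud=1.8)
print(f"after 60 s of slumping: radius = {r:.0f} m, height = {h:.2f} m")
```

A code like DENZ adds entrainment, heat transfer and the handover to a passive plume once the density difference becomes negligible.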
|
76 |
Probabilistic Risk Assessment in Clouds: Models and Algorithms. Palhares, André Vitor de Almeida, 08 March 2012 (has links)
Cloud reliance is critical to its success. Although fault-tolerance mechanisms are employed by cloud providers, there is always the possibility of failure of infrastructure components. We consequently need to think proactively about how to deal with the occurrence of failures, in an attempt to minimize their effects. In this work, we draw on the risk concept from probabilistic risk analysis in order to achieve this. In probabilistic risk analysis, consequence costs are associated with failure events of the target system, and failure probabilities are associated with infrastructure components; the risk is the expected consequence for the whole system. We use the risk concept to present representative mathematical models for which computational optimization problems are formulated and solved in a cloud computing environment. In these problems, consequence costs are associated with incoming applications that must be allocated in the cloud, and the risk is treated either as an objective function to be minimized or as a constraint to be bounded. The proposed problems are solved either by reductions to optimally solvable problems or by approximation algorithms with provable performance guarantees. Finally, the models and problems are discussed from a more practical point of view, with examples of how to assess risk using these solutions. The solutions are also evaluated and results on their performance are established, showing that they can be used in the effective planning of the cloud.
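The "risk as expected consequence" idea above can be made concrete with a tiny placement example: each application is assigned to a host, and the risk of a placement is the sum over hosts of the host's failure probability times the consequence cost of the applications that would be lost with it. All names and numbers below are illustrative, and the brute-force search stands in for the reductions and approximation algorithms the thesis actually uses.

```python
# Expected-consequence risk of placing applications on failure-prone hosts.
from itertools import product

failure_prob = {"host_a": 0.02, "host_b": 0.10}               # per-host failure probability
consequence = {"app1": 500.0, "app2": 300.0, "app3": 100.0}   # cost if the app is lost

def risk(placement):
    """Expected consequence of a placement {app: host}, assuming independent failures."""
    return sum(failure_prob[h] * sum(consequence[a] for a, hh in placement.items() if hh == h)
               for h in failure_prob)

# Exhaustive search for the risk-minimizing placement (fine at this tiny scale)
apps, hosts = list(consequence), list(failure_prob)
best = min((dict(zip(apps, assign)) for assign in product(hosts, repeat=len(apps))), key=risk)
print(best, round(risk(best), 2))
```

With risk as a constraint instead, one would maximize some utility subject to `risk(placement) <= budget`, which is where the optimization formulations come in.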
|
77 |
Develop a Secure Network – A Case Study. Rayapati, Habeeb, January 2010 (has links)
In recent years many networks have been built, and only some organizations are able to secure them. The performance of a network depends on the amount of security implemented without compromising the network's capabilities. To build a secure network, administrators should know the possible attacks and their mitigation techniques, and should perform a risk analysis to find the risks involved in designing the network. They must also know how to design security policies for implementing the network and for educating employees, in order to protect the organization's information. The goal of this case study is to build a campus network that can withstand reconnaissance attacks. The thesis describes the relevant network attacks and explores their mitigation techniques, which will help an administrator be prepared for coming attacks. It explains how to perform a risk analysis and the two different ways of performing it, and it describes the importance of security policies and how they are designed in the real world.
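One common quantitative way to perform the risk analysis mentioned above is annualized loss expectancy (ALE = single loss expectancy × annual rate of occurrence). The sketch below shows how threats might be ranked by ALE; the asset values, exposure factors and occurrence rates are invented for illustration and are not from the thesis.

```python
# Rank network threats by annualized loss expectancy (ALE = SLE x ARO).
def ale(asset_value, exposure_factor, annual_rate_of_occurrence):
    """Annualized loss expectancy for one threat against one asset."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

threats = {
    # threat: (asset value $, exposure factor 0-1, expected occurrences/year)
    "malware outbreak on lab subnet": (80_000, 0.25, 1.5),
    "stolen staff laptop":            (3_000, 1.00, 2.0),
    "web server defacement":          (50_000, 0.10, 0.5),
}

# Mitigation budget goes to the threats with the largest expected annual loss
for name, params in sorted(threats.items(), key=lambda kv: -ale(*kv[1])):
    print(f"{name}: ALE = ${ale(*params):,.0f}/year")
```

The other, qualitative way typically replaces these dollar figures with ordinal likelihood/impact ratings when hard numbers are unavailable.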
|
78 |
Protection of Non-Volatile Data in IaaS Environments. Sundqvist, Erik, January 2014 (has links)
Infrastructure-as-a-Service (IaaS) cloud solutions continue to experience growth, but many enterprises and organizations hold the opinion that cloud adoption has decreased security in several respects. This thesis addresses the protection of non-volatile data in IaaS environments. A risk analysis is conducted, using the CORAS method, to identify and evaluate risks and to propose treatments for those risks considered non-acceptable. The complex and distributed nature of an IaaS deployment is investigated to identify different approaches to data protection using encryption in combination with Trusted Computing principles. The outcome of the risk analysis is then used to weigh the advantages and drawbacks of the different approaches: encryption on the storage host, on the compute host, or inside the virtual machine. Encryption on the compute host is found to be the most beneficial, due to minimal trust requirements, minimal data exposure and key management aspects, while retaining a high degree of automation and usability for cloud consumers without specific security knowledge. A revisited risk analysis shows that both non-acceptable and acceptable risks are mitigated and partly eliminated, but leaves virtual machine security as an important topic for further research. Along with the risk analysis and treatment proposal, this thesis provides a proof-of-concept implementation using encryption and Trusted Computing on the compute host to protect block storage data in an OpenStack environment. The implementation follows the Domain-Based Storage Protection (DBSP) protocol, developed by Ericsson Research and SICS, for key management and attestation of the involved hosts.
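The compute-host approach can be sketched as an envelope scheme: the compute host derives a per-volume key from a domain key (which, under a DBSP-style protocol, it would receive only after remote attestation) and encrypts block data before it ever reaches the storage host. The toy below illustrates only that key-separation idea; the XOR keystream stands in for a real cipher such as AES, and every function name here is an illustrative assumption, not the actual DBSP or OpenStack API.

```python
# Toy envelope encryption on the compute host: the storage host only ever
# sees ciphertext, and never the domain or per-volume keys.
import hashlib, hmac, os

def derive_volume_key(domain_key: bytes, volume_id: str) -> bytes:
    """Per-volume key = HMAC(domain key, volume id)."""
    return hmac.new(domain_key, volume_id.encode(), hashlib.sha256).digest()

def xor_keystream(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Symmetric toy cipher: XOR with a SHA-256 counter keystream (NOT secure AES)."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

domain_key = os.urandom(32)                   # released to an attested compute host
key = derive_volume_key(domain_key, "volume-1234")
nonce = os.urandom(16)
block = b"guest filesystem block"
ciphertext = xor_keystream(key, nonce, block)          # what the storage host stores
assert xor_keystream(key, nonce, ciphertext) == block  # round-trip on read
```

Because the per-volume key never leaves the attested compute host, a compromised storage host yields only ciphertext, which is the minimal-exposure property the thesis argues for.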
|
79 |
Framing a New Nuclear Renaissance Through Environmental Competitiveness, Community Characteristics, and Cost Mitigation Through Passive Safety. Carless, Travis Seargeoh Emile, 01 May 2018 (has links)
The nuclear power sector has a history of challenges to its competitiveness relative to other forms of electricity generation. The availability of low-cost natural gas, the Fukushima accident, and the cancellation of the AP1000 V.C. Summer project played a considerable role in ending the short-lived "Nuclear Renaissance." Historically, the nuclear industry focused on direct cost reduction through construction, increasing installed capacity, and improving capacity factors in the 1990s and 2000s as ways to remain competitive with other forms of energy generation. With renewables serving as an emerging low-carbon competitor, added focus needs to be placed on indirect methods of increasing the competitiveness of nuclear power. This thesis focuses on establishing pathways by which nuclear power can be competitive with other forms of electricity generation, given its advantages environmentally with Small Modular Reactors (SMRs), socioeconomically with legacy nuclear power plants, and through passive safety with SMRs. In Chapter 2, I estimate the life cycle GHG emissions and examine the cost of carbon abatement when nuclear is used to replace fossil fuels, for the Westinghouse SMR (W-SMR) and the AP1000. I created LCA models using past literature and Monte Carlo simulation to estimate the mean (and 90% confidence interval) life cycle GHG emissions of the W-SMR at 7.4 g of CO2-eq/kWh (4.5 to 11.3) and of the AP1000 at 7.6 g of CO2-eq/kWh (5.0 to 11.3). Within the analysis I find that the estimated cost of carbon abatement with an AP1000 against coal and natural gas is $2/tonne of CO2-eq (-$13 to $26) and $35/tonne of CO2-eq ($3 to $86), respectively. In comparison, for a W-SMR the cost of carbon abatement against coal and natural gas is $3/tonne of CO2-eq (-$15 to $28) and $37/tonne of CO2-eq (-$1 to $90), respectively.
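The Monte Carlo step described above can be illustrated in miniature: draw uncertain per-stage emissions, sum them, and report the mean and 90% confidence interval of the total. The per-stage triangular distributions below are invented placeholders, not the thesis's LCA data.

```python
# Monte Carlo propagation of life-cycle stage uncertainty to a mean and 90% CI.
import random, statistics

random.seed(7)

def sample_emissions():
    """One draw of total life-cycle GHG intensity (g CO2-eq/kWh)."""
    construction = random.triangular(1.0, 5.0, 2.5)   # plant construction
    fuel_cycle   = random.triangular(2.0, 6.0, 3.5)   # mining through enrichment
    operations   = random.triangular(0.5, 2.0, 1.0)   # O&M and decommissioning
    return construction + fuel_cycle + operations

draws = sorted(sample_emissions() for _ in range(100_000))
mean = statistics.fmean(draws)
lo, hi = draws[int(0.05 * len(draws))], draws[int(0.95 * len(draws))]
print(f"mean = {mean:.1f} g CO2-eq/kWh, 90% CI = ({lo:.1f}, {hi:.1f})")
```

With real distributions fitted from the literature for each stage, the same procedure yields estimates of the kind reported in Chapter 2.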
I conclude that, with the exception of hydropower, the Westinghouse SMR design and the AP1000 have a smaller footprint than all other generation technologies, including renewables. Assigning a cost to carbon for natural gas plants, or implementing zero-emission incentives, can improve the economic competitiveness of nuclear power. The retirement of small and medium-scale coal power plants due to the availability of natural gas can provide an opportunity for SMRs to replace that missing capacity. This trade-off between higher costs and lower GHG emissions demonstrates that, depending on the value placed on carbon, SMR technology could be economically competitive with fossil fuel technologies. Following the environmental competitiveness analysis, I shift to investigating the socioeconomic competitiveness of legacy large-scale nuclear power plants compared with baseload coal and natural gas plants. In Chapter 3, I use ANOVA models, Tukey's tests, and t-tests to explore the socioeconomic characteristics and disparities within counties and communities that contain baseload power plants. My results indicate that, relative to the home counties of nuclear plants, communities closer to nuclear plants have higher home values and incomes than those further away. Conversely, communities near coal and natural gas plants have incomes and home values that increase with distance from the plant. Communities near coal plants are typically either in less wealthy parts of the county or have a socioeconomic makeup similar to the county as a whole. This suggests that equity issues regarding community characteristics should be included in discussions of converting existing power plants to other fuel sources. Communities near power plants are not created equal and have different needs. While communities near nuclear power plants may benefit from the added tax base and the absence of emissions, this is not the case for communities near coal and natural gas plants.
With the impending retirement of large-scale coal plants, converting these plants to natural gas or small modular reactors presents an opportunity to reduce negative environmental externalities while retaining some of the economic benefits. In Chapter 4, I present a model for estimating environmental dose exposure in a post-accident scenario to support scalable emergency planning zones (EPZs). The model calculates the radionuclide inventory; estimates the impact that decontamination factors from the AP1000, NUREG-6189, and EPRI's Experimental Verification of Post-Accident iPWR Aerosol Behavior test have on radioactivity within containment; and estimates dose exposure using atmospheric dispersion models. This work compares historical decontamination factors with updated ones to outline the impact on containment radioactivity and dose exposure relative to the Environmental Protection Agency's Protective Action Guide (PAG) limits. On average, the AP1000, Surry, and iPWR produce 139, 153, and 104 curies/ft3, respectively, 75 minutes after a LOCA. The iPWR produces less radioactivity per volume in containment than the AP1000 and Surry 84% and 96% of the time, respectively; the AP1000 produces less radioactivity per volume than Surry 68% of the time. On average, the AP1000, Surry, and iPWR produce 84,000, 106,000, and 7,000 curies/MWth 75 minutes after a LOCA. The 5 rem upper PAG limit is never exceeded, and the mean whole-body exposure at the 5-mile EPZ does not exceed the 1 rem lower PAG limit. Considering that this analysis uses a simple worst-case Gaussian plume model for atmospheric dispersion, the findings can be used in conjunction with the State-of-the-Art Reactor Consequence Analyses (SOARCA) to provide accurate and realistic estimates of exposure. I believe this analysis can help develop a regulatory basis for a technology-neutral, risk-based approach to EPZs for iPWRs.
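The worst-case Gaussian plume screening mentioned above reduces, for a ground-level release on the plume centerline, to C(x) = Q/(π·σy·σz·u)·exp(−H²/(2σz²)). The sketch below evaluates this under stable (Pasquill class F) conditions with Briggs rural dispersion coefficients; the source term and wind speed are placeholders, not the thesis's values, and no dose conversion is applied.

```python
# Ground-level centerline Gaussian plume concentration, stability class F.
import math

def sigma_y(x):   # crosswind spread (m), Briggs rural, stability F
    return 0.04 * x / math.sqrt(1 + 0.0001 * x)

def sigma_z(x):   # vertical spread (m), Briggs rural, stability F
    return 0.016 * x / (1 + 0.0003 * x)

def centerline_conc(q, u, x, h=0.0):
    """Ground-level centerline concentration at downwind distance x (m)."""
    sy, sz = sigma_y(x), sigma_z(x)
    return q / (math.pi * sy * sz * u) * math.exp(-h**2 / (2 * sz**2))

Q = 1.0e6   # release rate (units carry through to the concentration)
U = 1.0     # low wind speed (m/s), conservative worst case
for miles in (1, 5, 10):
    x = miles * 1609.34
    print(f"{miles:2d} mi: relative concentration = {centerline_conc(Q, U, x):.3e}")
```

Converting such concentrations to whole-body dose, and comparing against the 1-5 rem PAG band at candidate EPZ radii, is the step the chapter's model performs with realistic source terms.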
Finally, in Chapter 5 I discuss historical challenges facing the nuclear industry, policy implications, and recommendations. These implications and recommendations serve as pathways to frame a new nuclear renaissance. I also recommend future work, detailing opportunities for improving nuclear competitiveness. Ultimately, this thesis can help policy and decision makers improve competitiveness and minimize risk as they relate to the expansion of the nuclear power sector.
Scenario thinking and stochastic modelling for strategic and policy decisions in agriculture / Strauss, P.G. (Petrus Gerhardus) 06 June 2010 (has links)
In 1985, Pierre Wack, arguably the father of modern scenario thinking, wrote the following: “Forecasts often work because the world does not always change. But sooner or later forecasts will fail when they are needed most: in anticipating major shifts…” (Wack, 1985: 73). The truth of this statement has again become apparent, first as the “food price crisis” played out during 2007 and 2008, and secondly as the current financial and economic crisis is playing out. Respected market commentators and analysts, both internationally and within South Africa, made all sorts of “informed predictions” on topics ranging from oil prices, interest rates, and economic growth rates to input costs and food prices. The problem is that none of these “respected views” and “informed predictions and estimates” came true within the period assigned to them. In fact, just the opposite occurred: the unexpected implosion of the global economy and hence of commodity markets. The result of the experts “getting it so wrong” is that questions are being asked about the reliability of risk and uncertainty analysis. Even though the experts used highly advanced analytical techniques in analyzing the risks and uncertainties in order to formulate predictions and outlooks, both the “food price crisis” and the economic implosion were totally unanticipated. The same questions need to be asked of risk and uncertainty analyses in agricultural economics. With agriculture experiencing a period of fundamental change causing significant uncertainty, risk and uncertainty analyses in agriculture will need to move to the next level to ensure that policies and business strategies are robust enough to withstand these newly arising uncertainties. The proposed solution to this problem, and therefore the hypothesis offered and tested by this thesis, is to work with two techniques in conjunction, without combining them, when developing a view of the future.
The two techniques used, namely intuitive scenario thinking and stochastic modelling, are based on two fundamentally different hypotheses: the future is like the past and present (stochastic modelling), and the future is not like the past and present but results from combining current and unexpectedly new forces or factors (intuitive scenario thinking). The idea behind this stems from the philosophy of Socrates, who postulated that the truth can never be fully known and that, when working with the truth, one therefore needs to work with multiple hypotheses about it until all but one can be discarded. This brings one closer to the truth, but never to knowing it in full, since the truth can’t be known in full. Applying this idea means conjunctively using two techniques based on the two hypotheses about the future. From a literature review it was realised that two such techniques existed: stochastic modelling and scenario thinking. Stochastic modelling, by its very nature, is based on the assumption that the future is like the past and present, since historical data, historical inter-relationships, experience, and modelling techniques are used to develop the model, apply it, and interpret its results. Scenario thinking, on the other hand, and specifically intuitive logics scenario thinking, is based on the notion that the future is not like the past or present, but is rather a combination of existing factors and new, unknown factors and forces. At first the problem was thought to lie in combining the two techniques, since they are fundamentally different because of the fundamentally different assumptions on which they are based. The question and challenge was therefore whether these two techniques could be used in combination, and how. However, the solution to this problem was more elementary than initially thought.
As the two techniques are fundamentally different, they can’t be combined, because the two underlying assumptions can’t be combined. What is possible, however, is to use them in conjunction without adjusting either technique. Rather, each technique is allowed to run its course, which at the same time leads to cross-pollination of ideas and perspectives where possible and applicable. This cross-pollination creates a process whereby ideas regarding the two basic assumptions about the future are crystallised and refined through a learning process, resulting in clearer perspectives on both hypotheses: whether the future will be like the past and present, or whether it will be a combination of existing and new but unknown factors and forces. These clearer perspectives give the decision-maker a framework in which the two basic hypotheses about the future can be applied simultaneously to develop strategies and policies that are likely robust enough to succeed in either case. The framework also allows reality to be interpreted as it unfolds, signalling to the decision-maker which of the two hypotheses is playing out. This assists the decision-maker in better perceiving what is in fact happening, hence what the newly perceived truth is in terms of the future, and therefore what needs to be done to survive and grow within this newly developing future, reality, or truth. The presentation of three case studies assists in testing the hypothesis of this thesis as presented in chapter one, and concludes that the hypothesis can’t be rejected.
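The conjunctive use of the two techniques can be sketched as a toy Monte Carlo model: stochastic modelling generates the commodity price paths, while each scenario contributes its own drift and volatility assumptions, and the two parameter sets are never merged into a single model. All names and figures below are hypothetical illustrations, not parameters from the case studies:

```python
import math
import random
import statistics

def simulate_prices(p0, drift, vol, years, n_paths, seed=1):
    """Lognormal yearly price shocks; returns the final price of each path."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        price = p0
        for _ in range(years):
            # Geometric Brownian motion step over a one-year horizon.
            price *= math.exp(drift - 0.5 * vol ** 2 + vol * rng.gauss(0.0, 1.0))
        finals.append(price)
    return finals

# Two hypotheses about the future, each parameterised separately:
scenarios = {
    "past-like": {"drift": 0.02, "vol": 0.10},          # future resembles the past
    "structural-shift": {"drift": 0.05, "vol": 0.30},   # new forces at work
}
results = {name: simulate_prices(100.0, years=5, n_paths=2000, **params)
           for name, params in scenarios.items()}
```

Comparing the resulting distributions per scenario (means, spreads, tails) gives the decision-maker the signalling framework described above: observed prices can be checked against each distribution to see which hypothesis reality appears to be playing out.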
Hence, through the presentation of the case studies it is found that using scenario thinking in conjunction with stochastic modelling does indeed facilitate a more complete understanding of the risks and uncertainties pertaining to policy and strategic business decisions in agricultural commodity markets, through fostering a more complete learning experience. It therefore does facilitate better decision-making in an increasingly turbulent and uncertain environment. / Thesis (PhD)--University of Pretoria, 2010. / Agricultural Economics, Extension and Rural Development / unrestricted