  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Process algebra for located Markovian agents and scalable analysis techniques for the modelling of Collective Adaptive Systems

Feng, Cheng January 2017 (has links)
Recent advances in information and communications technology have led to a surge in the popularity of artificial Collective Adaptive Systems (CAS). Such systems, composed of many spatially distributed autonomous entities with decentralised control, can often achieve discernible characteristics at the global level, a phenomenon sometimes termed emergence. Examples include smart transport systems, smart electricity power grids and robot swarms. The design and operational management of CAS are of vital importance because different configurations of CAS may exhibit very large variability in their performance and the quality of services they offer. However, due to the complexity arising from their varied behaviour, large scale and highly distributed nature, it is often very difficult to understand and predict the behaviour of CAS under different conditions. Novel modelling and quantitative analysis methodologies are therefore required to address the challenges posed by the complexity of such systems. In this thesis, we develop a process algebraic modelling formalism that can express the complex dynamic behaviour of CAS, together with fast and scalable analysis techniques to investigate that behaviour and support the design and operational management of such systems. The major contributions of this thesis are: (i) the development of a novel high-level formalism, PALOMA, the Process Algebra for Located Markovian Agents, for the modelling of CAS. CAS specified in PALOMA can be automatically translated to their underlying mathematical models, called Population Continuous-Time Markov Chains (PCTMCs). (ii) The development of an automatic moment-closure approximation method which provides rapid Ordinary Differential Equation-based analysis of PALOMA models. (iii) The development of an automatic model reduction algorithm to speed up stochastic simulation of PALOMA/PCTMC models.
(iv) A case study predicting bike availability in stations of Santander Cycles, the public bike-sharing system in London, showing that our techniques are well suited to analysing real CAS.
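The ODE-based analysis in contribution (ii) can be sketched in miniature. The following is a hypothetical two-state example, not the PALOMA formalism itself: agents switch between states A and B at Markovian rates, and the first-moment (mean-field) closure of the underlying population CTMC yields a pair of linear ODEs for the expected populations, integrated here with a simple forward-Euler step.

```python
# Hypothetical sketch (not the PALOMA formalism itself): mean-field ODE
# analysis of a two-state population CTMC. Agents switch A -> B at rate
# alpha and B -> A at rate beta per agent; the first-moment closure gives
# linear ODEs for the expected populations.

def mean_field(a0, b0, alpha, beta, t_end, dt=1e-3):
    """Integrate dA/dt = -alpha*A + beta*B and dB/dt = alpha*A - beta*B."""
    a, b = float(a0), float(b0)
    for _ in range(int(t_end / dt)):
        da = -alpha * a + beta * b
        a, b = a + dt * da, b - dt * da  # transitions conserve total agents
    return a, b

a, b = mean_field(100, 0, alpha=2.0, beta=1.0, t_end=10.0)
# the long-run split approaches beta/(alpha+beta) : alpha/(alpha+beta)
```

Moment closure replaces an exponentially large CTMC state space with a fixed, small number of ODEs, which is what makes this style of fluid analysis scale with population size.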
2

Computational Investigations of Earth Viscosity Structure Using Surficial Geophysical Observables Related to Isostatic Adjustment

Hill, Alexander Mackay 09 October 2020 (has links)
The research presented in this thesis seeks to address meaningful geodynamic problems related to the viscosity structure of the Earth’s interior. Isostatic adjustment is a process which depends upon the mechanical properties of the lithosphere and mantle. By performing computational simulations of the isostatic response for various surface-loading scenarios and numerous viscosity structures, insight can be gained into the mechanical structure of the Earth and geodynamic processes related to that structure. The modelled isostatic signal for a given set of Earth model parameters can be compared to real-world observational data in order to identify valid Earth model configurations. In Chapter 2, the “Transition Zone Water Filter” theory is tested by modelling the geophysical effects of a low-viscosity melt-rich layer atop the 410 km discontinuity. The thickness and viscosity of this layer, and of the surrounding mantle, are constrained using observations of relative sea level and the geodetic J̇₂ parameter, as well as multiple ice-loading scenarios by which the isostatic adjustment process is driven. The relative sea level data, being most sensitive to the upper mantle and the theorized melt-rich layer it contains, constrain layer properties more effectively than the J̇₂ observation, which is strongly dependent on the lower mantle. Constraints on the viscosity of the melt-rich layer vary according to thickness, with thicker layers requiring stiffer viscosities to satisfy observations. For instance, a 20 km thick layer would require a viscosity of 10^17 Pa s or greater, but any of the considered viscosities could be possible for a 1 km thick layer. Similarly, a broad range of upper mantle viscosities are possible, but they must be balanced by variations in the lower mantle. However, J̇₂ results show a strong preference for a high-viscosity lower mantle (≥ 10^22 Pa s).
For every evaluated Earth model parameter, there is evidence of ice-model sensitivity in the inversion results. Although the results of this study demonstrate that observables related to glacial isostatic adjustment can provide constraints on the properties of this theorized melt-rich layer, the confounding effect of parameter trade-off prevents a more definitive test of this model of mantle geodynamics. The purpose of the study presented in Chapter 3 is to analyze the nature of solid-Earth deformation beneath the Lower Mississippi River, most crucially in the Mississippi Delta region where subsidence is an ongoing and costly problem. The study uses the displacement of the long profile of the Lower Mississippi River over the last 80 kyr to constrain isostatic deformation and determine constraints on the mechanical structure of both the mantle and lithosphere. Deformation recorded in the northern portion of the long profile is dominated by the effect of glacial isostatic adjustment, whereas the southern portion is governed by sediment isostatic adjustment. However, the southern portion is also potentially affected by past fault displacement, and to account for this the observational data are corrected using two distinct faulting scenarios. Displacement of the long profile is modelled using either an entirely elastic lithosphere or a lithosphere with internal viscoelastic structure, the latter of which is derived from two end-member geothermal profiles. Between the elastic and viscous lithosphere models, the viscous models are better able to replicate the observational data for each faulting scenario – both of which prefer a viscous lithosphere corresponding to the warmer geotherm. 
The chosen faulting scenario exerts no control over the optimal mantle model configuration; however, the optimal mantle for the viscous lithosphere models is much stiffer than that determined for their elastic counterparts, reflecting significant parameter trade-off between mantle and lithosphere mechanical structure. These results demonstrate the utility of the long-profile displacement data set for constraining Earth viscosity structure, as well as the importance of considering more complex models of lithosphere mechanical structure when addressing surface-loading problems similar to those encountered in the Mississippi Delta region.
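For context on the viscosity range spanned by this record, a back-of-envelope Maxwell relaxation time gives a feel for the time scales involved. This is a generic illustration, not a calculation from the thesis; the shear modulus value is an assumed typical mantle figure.

```python
# Back-of-envelope context (not a thesis calculation): the Maxwell
# relaxation time tau = eta / mu indicates how quickly a viscoelastic
# layer relaxes under load. mu = 7e10 Pa is an assumed typical mantle
# shear modulus.
SECONDS_PER_YEAR = 3.15576e7
MU = 7e10  # Pa, assumed shear modulus

def maxwell_relaxation_years(eta_pa_s, mu=MU):
    """Maxwell time tau = eta / mu, converted from seconds to years."""
    return eta_pa_s / mu / SECONDS_PER_YEAR

tau_melt_layer = maxwell_relaxation_years(1e17)    # melt-rich layer bound
tau_lower_mantle = maxwell_relaxation_years(1e22)  # stiff lower mantle
# a 10^17 Pa s layer relaxes within weeks; a 10^22 Pa s mantle over millennia
```

The five-orders-of-magnitude spread is why the melt-rich layer and the lower mantle leave such different signatures in relative sea level versus J̇₂ data.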
3

Quantitative decision making in reverse logistics networks with uncertainty and quality of returns considerations

Niknejad, A. January 2014 (has links)
Quantitative modelling of reverse logistics networks and product recovery has been the focus of much research over the past few decades. Interest in these models is largely due to the complexity of reverse logistics networks, which necessitates analysis with the help of mathematical models. In comparison to traditional forward logistics networks, reverse logistics networks have to deal with quality-of-returns issues as well as a high degree of uncertainty in the return flow. Additionally, a variety of recovery routes, such as reuse, repair, remanufacturing and recycling, exist. Deciding how to utilise these routes requires the quality of returns and the uncertainty of the return flow to be considered. In this research, integrated forward and reverse logistics networks with repair, remanufacturing and disposal routes are considered. Returns are assumed to be classified into ordinal quality levels, and quality thresholds are used to split the returned products into repairable, remanufacturable and disposable returns. Fuzzy numbers are used to model the uncertainty in demand and in return quantities of different quality levels. Setup costs, non-stationary demand and return quantities, and different lead times are considered. To facilitate decision making in such networks, a two-phase optimisation model is proposed. Given quality thresholds as parameters, the decision variables, including the quantities of products sent to repair, disassembly and disposal, components to be procured, and products to be repaired, disassembled or produced for each time period within the time horizon, are determined using a fuzzy optimisation model. A sensitivity analysis of the fuzzy optimisation model is carried out on the network parameters, including the quantity of returned products, unit repair and disassembly costs, and procurement, production, disassembly and repair setup costs.
A fuzzy controller is proposed to determine quality thresholds based on ratios of the reverse logistics network parameters, including repair to new unit cost, disassembly to new unit cost, repair to disassembly setup, disassembly to procurement setup and return to demand ratios. The fuzzy controller’s sensitivity is also examined in relation to parameters such as average repair and disassembly costs; repair, disassembly, production and procurement setup costs; and the return to demand ratio. Finally, a genetic fuzzy method is developed to tune the fuzzy controller and improve its rule base. The rule base obtained and the results of the sensitivity analyses are used to gain better managerial insight into these reverse logistics networks.
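The kind of fuzzy-number machinery such a model relies on can be illustrated briefly. This is a generic sketch, not the thesis's optimisation model: uncertain return quantities are represented as triangular fuzzy numbers (low, mode, high) and defuzzified, for instance by the centroid, before entering a crisp computation. The quality-level names and figures below are hypothetical.

```python
# Generic sketch, not the thesis's model: triangular fuzzy numbers
# (l, m, u) representing uncertain return quantities per quality level,
# defuzzified by the centroid. All figures are invented for illustration.

def centroid(tfn):
    """Centroid of a triangular fuzzy number (l, m, u): (l + m + u) / 3."""
    l, m, u = tfn
    return (l + m + u) / 3.0

# fuzzy return quantities per ordinal quality level (hypothetical)
returns = {"repairable": (40, 50, 66), "remanufacturable": (10, 20, 30)}
crisp = {level: centroid(tfn) for level, tfn in returns.items()}
# crisp["repairable"] -> 52.0
```

In a full fuzzy optimisation model the fuzzy quantities would typically be carried through the constraints rather than defuzzified up front, but the centroid step shows the basic bridge from fuzzy inputs to crisp decisions.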
4

Decisão de mix de produtos sob a ótica do custeio baseado em atividades e tempo. / Product-mix decision from the perspective of time-driven activity-based costing.

Saraiva Júnior, Abraão Freires 25 February 2010 (has links)
This research addresses the theme "product-mix decision", which, from a Production and Operations Management perspective, can be understood as the definition of the optimum quantity of each type of product to be produced in a given period, considering that these products compete for limited resources, in order to maximize the firm's economic result (e.g. net profit).
Product-mix decision models use information on profitability, which is determined by comparing sales prices with the costs of the products supplied by the company; these product costs are measured by costing methods. Among the costing methods in the literature, Absorption Costing, Direct Costing, Activity-Based Costing (ABC) and Time-Driven Activity-Based Costing (TDABC) stand out. TDABC, despite appearing in the literature in 2004 and being detailed in 2007 in a book by Robert Kaplan and Steven Anderson, has not been directly explored in the literature that deals with the product-mix decision, unlike the other costing methods mentioned. In this context, the dissertation aims primarily to build a quantitative model to underpin the product-mix decision that incorporates the TDABC approach. To meet the primary objective, the dissertation first develops a literature review to discuss concepts and position the research on product-mix decision and on costing methods, emphasizing TDABC. Then, quantitative modeling is used to propose a TDABC-based model to assist the product-mix decision. An application of the proposed model is illustrated through a didactic example involving a manufacturing environment. Finally, it is concluded that the model proposed from the perspective of TDABC can be helpful for decision making related to product mix. Secondarily, the dissertation also aims to position theoretically, analyze critically, and characterize the academic literature on the product-mix decision published in international journals with respect to (i) the countries where the studies originated, (ii) the main journals that publish the studies, (iii) the research approaches used, and (iv) the highlights in terms of the authors and publications most cited.
In order to meet the secondary objectives, a bibliographic survey on three journal portals, a bibliometric analysis of the retrieved papers, and a critical analysis of the content of product-mix decision publications are carried out. The bibliographic survey yielded 70 academic articles on product-mix decision published in international journals. From the results of the bibliometric analysis, it was found that: (i) product-mix decision academic research has grown considerably since 1991; (ii) 71% of the publications appear in academic journals strictly related to the Production and Operations Management field; (iii) there is a strong concentration of research in American universities, and the role of researchers from institutions in Eastern countries (Taiwan, India and China) is relevant in international product-mix decision publication; (iv) 77% of the publications adopted one of two research approaches, Mathematical/Quantitative Modeling or Theoretical/Conceptual; and (v) the most cited author in the articles analyzed is Gerrard Plenert, a researcher from Brigham Young University.
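The flavour of a TDABC-based product-mix decision can be sketched in a few lines. This is a hypothetical toy instance, not the dissertation's model: each product consumes resource time priced at a capacity cost rate, and integer quantities are chosen to maximise profit subject to the available minutes. The product names, prices and rates below are invented for illustration.

```python
# Hypothetical TDABC-style product-mix sketch (not the dissertation's
# model): resource time is costed at a capacity cost rate per minute,
# and a brute-force search picks the profit-maximising mix within the
# available capacity. All figures are invented.
from itertools import product as cartesian

CAPACITY_MIN = 400  # available resource minutes
COST_RATE = 2.0     # capacity cost rate ($ per minute)
# product: (price, direct material cost, minutes consumed, max demand)
products = {"P1": (50.0, 10.0, 8, 40), "P2": (30.0, 5.0, 4, 60)}

def profit(qty):
    """Total profit of a mix, or -inf if it exceeds the time capacity."""
    total, minutes = 0.0, 0
    for name, q in qty.items():
        price, material, mins, _ = products[name]
        total += q * (price - material - COST_RATE * mins)
        minutes += q * mins
    return total if minutes <= CAPACITY_MIN else float("-inf")

best = max(
    (dict(zip(products, qs))
     for qs in cartesian(*(range(p[3] + 1) for p in products.values()))),
    key=profit,
)
```

An exact solver (e.g. an integer program) would replace the brute force at realistic scale; the point is only that TDABC prices each product's capacity consumption in time units before the mix is optimised.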
6

Cu-catalyzed chemical vapour deposition of graphene : synthesis, characterization and growth kinetics

Wu, Xingyi January 2017 (has links)
Graphene is a two-dimensional carbon material whose outstanding properties have been envisaged for a variety of applications. Cu-catalyzed chemical vapour deposition (Cu-CVD) is promising for large-scale production of high-quality monolayer graphene, but the existing Cu-CVD technology is not ready for industry-level production. It still needs to be improved in several respects, three of which are: synthesizing industrially usable graphene films under safe conditions, visualizing the domain boundaries of continuous graphene, and understanding the kinetic features of the Cu-CVD process. This thesis presents research aimed at these three objectives. By optimizing the Cu pre-treatments and the CVD process parameters, continuous graphene monolayers with millimetre-scale domain sizes have been synthesized. Process safety has been ensured by carefully diluting the flammable gases. Through a novel optical microscope set-up, the spatial distributions of the domains in continuous Cu-CVD graphene films have been directly imaged and the domain boundaries visualised. This technique is non-destructive to the graphene and hence could help manage the domain boundaries of large-area graphene. By establishing novel rate equations for graphene nucleation and growth, this study has revealed the essential kinetic characteristics of general Cu-CVD processes. For both edge-attachment-controlled and surface-diffusion-controlled growth, the rate equations for the time evolution of the domain size, the nucleation density and the coverage are solved, interpreted, and used to explain various Cu-CVD experimental results. Continuous nucleation and inter-domain competition prove to have non-trivial influences on the growth process. This work further examines the temperature dependence of the graphene formation kinetics, leading to a discovery of the internal correlations of the associated energy barriers.
The complicated effects of temperature on the nucleation density are explored, and criteria for identifying the rate-limiting step are proposed. The model also elucidates the kinetics-dependent formation of the characteristic domain outlines. By accomplishing these three objectives, this research has brought current Cu-CVD technology a large step closer to practical industrial implementation and hence made high-quality graphene closer to being commercially viable.
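The qualitative behaviour of nucleation-and-growth kinetics of this kind can be sketched with a classical Avrami-type coverage law. This is a generic stand-in, not the thesis's rate equations: for a constant nucleation rate on the uncovered Cu surface and 2D radial domain growth, coverage follows θ(t) = 1 − exp(−K·t³), with inter-domain impingement built in through the exponential.

```python
# Generic Avrami-type stand-in, not the thesis's rate equations: with a
# constant nucleation rate on the uncovered Cu surface and 2D radial
# domain growth, film coverage follows theta(t) = 1 - exp(-K * t**3);
# the exponential accounts for impingement between competing domains.
import math

def coverage(t, K=1e-3):
    """Fractional coverage at time t; K lumps nucleation and growth rates."""
    return 1.0 - math.exp(-K * t ** 3)

theta_early, theta_late = coverage(2.0), coverage(20.0)
# apparent growth stalls near full coverage as domains impinge
```

The thesis's coupled rate equations go beyond this closed form, tracking nucleation density and domain size separately, but the saturating coverage curve captures why inter-domain competition has non-trivial effects late in growth.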
7

ANÁLISE DO DESEMPENHO DO MODELO SWMM5 ACOPLADO AO CALIBRADOR PEST NA BACIA DO ARROIO CANCELA/RS. / PERFORMANCE EVALUATION OF THE SWMM5 MODEL COUPLED WITH THE PEST CALIBRATOR IN THE CANCELA CREEK/RS BASIN.

Beling, Fabio Alex 16 May 2013 (has links)
This dissertation presents the results of the qualitative and quantitative modeling of the Arroio Cancela urban basin, with an area of 4.35 km², using the Storm Water Management Model (SWMM5). The generation and routing of runoff and base flows, and the processes of buildup and washoff of total suspended sediments (TSS) and of organic matter represented by biochemical oxygen demand (BOD5), were modeled. The PEST (Parameter Estimator) package was used to calibrate the most sensitive parameters of SWMM5. Eight months of monitored rainfall and runoff data containing 34 rainfall events were calibrated. Calibration of the quality processes used 10 events with monitored TSS and BOD5 concentrations. Validation of the averaged calibrated parameters was carried out over a period of three months covering 16 rainfall events, 4 of which contain monitored TSS and BOD5 data. The results indicate that SWMM5 is more sensitive to parameters related to the impermeable areas of the basin; the parameters of the permeable areas were more sensitive in events of greater magnitude. The use of PEST proved valuable in optimizing the model, considering the speed at which the algorithm converges to a satisfactory solution. The event-based runoff calibration reached very good Nash-Sutcliffe efficiencies (ENS) (average of 0.92). For the continuous simulations, the calculated ENS reached 0.72. The average errors in flow volume for both cases were less than 14%. The calibration of TSS reached an average ENS of 0.56 and, for BOD5, an average ENS of -0.75, with high dispersion of the washoff parameters for both pollutants. The event-based runoff validation produced an average ENS of 0.47, a calculated median of 0.87, and hydrographs that reproduce the shape of the observed data well.
The validation of the continuous series presented an ENS of 0.74 and an underestimation of the flow volume of 7.7%. The validation of the quality processes resulted in very poor ENS indices, with deficient representation of the variation of TSS and BOD5 concentrations. The results indicate that the SWMM5 model coupled with the PEST calibrator can produce good results in the prediction of runoff events and continuous flow series. However, the representation of the qualitative processes requires better initial parameter estimates for buildup and washoff, in addition to improvements in the algorithm used to calculate buildup and washoff of pollutants.
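The Nash-Sutcliffe efficiency (ENS) quoted throughout this record compares simulated against observed flows relative to the observed mean: 1 indicates a perfect fit, 0 a fit no better than always predicting the mean, and negative values (such as the -0.75 reported for BOD5) a fit worse than the mean. A minimal implementation, with made-up flow values for illustration:

```python
# Nash-Sutcliffe efficiency: 1 - SSE(sim vs obs) / SS(obs about its mean).
# ENS = 1 is a perfect fit; ENS = 0 matches a mean-only prediction;
# negative ENS (e.g. the -0.75 reported for BOD5) is worse than the mean.

def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

obs = [1.0, 3.0, 5.0, 4.0, 2.0]  # made-up flows for illustration
ens_perfect = nash_sutcliffe(obs, obs)     # perfect simulation
ens_mean = nash_sutcliffe(obs, [3.0] * 5)  # mean-only baseline
```

Because ENS squares the residuals, it rewards matching the peaks of a hydrograph, which is why event-based and continuous simulations of the same basin can score quite differently.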
8

An investigation into the integration of qualitative and quantitative techniques for addressing systemic complexity in the context of organisational strategic decision-making

McLucas, Alan Charles, Civil Engineering, Australian Defence Force Academy, UNSW January 2001 (has links)
System dynamics modelling has been used for around 40 years to address complex, systemic, dynamic problems, often described as wicked. But system dynamics modelling is not an exact science, and debate continues about which techniques are most suitable in which circumstances. The nature of these wicked problems is investigated through a series of case studies in which poor situational awareness among stakeholders was identified. This was found to be an underlying cause of management failure, suggesting a need for better ways of recognising and managing wicked problem situations. Human cognition is considered as both a limitation on and an enabler of decision-making in wicked problem environments. Naturalistic and deliberate decision-making are reviewed. The thesis identifies the need for integration of qualitative and quantitative techniques. Case study results and a review of the literature led to the identification of a set of principles of method to be applied in an integrated framework, the aim being to develop an improved way of addressing wicked problems. These principles were applied to a series of cases in an action research setting. However, organisational and political barriers were encountered, which limited the exploitation and investigation of cases to varying degrees. In response to a need identified in the literature review and the case studies, a tool is designed to facilitate analysis of multi-factorial, non-linear causality. This unique tool, and its use to assist in problem conceptualisation and as an aid to testing alternative strategies, is demonstrated. Further investigation is needed into the veracity of combining causal influences using this tool and system dynamics more broadly. System dynamics modelling was found to have the utility needed to support analysis of wicked problems. However, failure occurred in a particular modelling project when it became necessary to rely on human judgement in estimating values to be input into the models.
This was found to be problematic and unacceptably risky for sponsors of the modelling effort. Finally, this work has identified that further study is required into the use of human judgement in decision-making, and into the validity of system dynamics models that rely on the quantification of human judgement.
