  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
751

Contribuição à modelagem da secagem em leito deslizante concorrente / Contribution to the modelling of drying in concurrent moving beds

Lira, Taisa Shimosakai de, 26 August 2005
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The modelling of heat and mass transfer between the air and soybean seeds in moving bed dryers is based on the application of mass and energy balance equations for both the solid and the fluid phase (two-phase model). In the development of these equations, some classic hypotheses are assumed, such as a flat air velocity profile and physico-chemical parameters that are constant along the bed, among others. The main goal of this thesis was to discuss the validity of these assumptions in the modelling of concurrent moving bed dryers. To that end, several equations for the bed porosity and air velocity distributions were considered and compared with experimental data. The study verified that non-flat fluid velocity profiles are significant in beds with ratio dT/dp = 13.3. The equation that best represented the experimental data was that of Fahien and Stankovich (1979), and it was incorporated into the two-phase model. Comparisons between experimental data and simulated responses showed the significant influence of the air velocity distribution on the drying process. The influence of some physico-chemical parameters of the model was also analysed through sensitivity studies, using experimental design and derivative methods with the code DASPK 3.0. The model showed the largest absolute sensitivity to perturbations of the specific heat of dry air (Cpf), meaning that small variations in the value of this parameter strongly influence the results obtained with the model. / Master's degree in Chemical Engineering
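The derivative-based sensitivity study described above can be illustrated with a minimal sketch: a lumped thin-layer drying curve and a central finite-difference derivative stand in for the two-phase model and the DASPK derivative method (the model form and the value of k are illustrative, not those of the thesis):

```python
import numpy as np

def drying_curve(t, k):
    """Toy thin-layer drying model: moisture ratio MR(t) = exp(-k * t)."""
    return np.exp(-k * t)

def sensitivity(model, t, p, rel_step=1e-4):
    """Central finite-difference sensitivity d(output)/d(parameter)."""
    h = rel_step * p
    return (model(t, p + h) - model(t, p - h)) / (2.0 * h)

t = np.linspace(0.0, 10.0, 50)  # drying time, arbitrary units
k = 0.3                         # illustrative drying constant, not a thesis value
s = sensitivity(drying_curve, t, k)
# dMR/dk is zero at t = 0 and negative afterwards: a larger k dries faster.
print(s[0], s[-1])
```

A model is judged highly sensitive to a parameter when, as for Cpf in the thesis, small relative perturbations of the parameter produce large changes in the simulated output.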
752

Problemas inversos em processos difusivos com retenção / Inverse problems in diffusive process with retention

Luciano Gonçalves da Silva, 21 February 2013
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / A study of the numerical solution of the diffusion-with-retention model proposed by Bevilacqua et al. (2011) is presented, using the finite difference method, together with an implicit formulation of the inverse problem for estimating the parameters involved in the mathematical formulation of the model. Through a thorough sensitivity analysis and the calculation of the Pearson correlation coefficient, the chances of success in solving the inverse problem are assessed for the deterministic Levenberg-Marquardt method and for the stochastic Particle Collision Algorithm (PCA) and Differential Evolution (DE) methods. Results obtained with these three optimization methods are presented for three parameter-set cases. A strong correlation was observed between two of the three parameters, which made their simultaneous estimation difficult; however, each parameter could be estimated successfully on its own. Good results were obtained for the factors that multiply the differential terms of the equation that models the diffusion-with-retention phenomenon.
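The role of the Pearson correlation in diagnosing whether two parameters can be estimated simultaneously can be sketched as follows; the two-parameter decay model is illustrative, not the diffusion-with-retention equations, and finite differences stand in for the derivative computation:

```python
import numpy as np

def model(t, a, b):
    """Illustrative two-parameter decay model (not the thesis equations)."""
    return a * np.exp(-b * t)

def pearson(x, y):
    """Pearson correlation coefficient between two vectors."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

t = np.linspace(0.1, 5.0, 100)
a, b, h = 2.0, 0.8, 1e-6
# Sensitivity of the output with respect to each parameter.
s_a = (model(t, a + h, b) - model(t, a - h, b)) / (2 * h)
s_b = (model(t, a, b + h) - model(t, a, b - h)) / (2 * h)
r = pearson(s_a, s_b)
# |r| close to 1 signals near-linear dependence of the sensitivity vectors,
# i.e. the two parameters are hard to estimate simultaneously from the data.
print(r)
```

This is the diagnostic logic of the abstract: strongly correlated sensitivities explain why two parameters resisted simultaneous estimation while each could still be recovered individually.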
753

Evaluation de la sensibilité de l’instrument FCI à bord du nouveau satellite Meteosat Troisième Génération imageur (MTG-I) aux variations de la quantité d’aérosols d’origine désertique dans l’atmosphère / Assessment of the sensitivity of the instrument FCI aboard the new satellite Meteosat Third Generation imager (MTG-I) to changes in load of dust aerosols in the atmosphere

Aoun, Youva, 19 September 2016
This thesis deals with a methodology to assess the capabilities of future spaceborne instruments. The case study is the Flexible Combined Imager (FCI) of the future Meteosat Third Generation Imaging mission (MTG-I), and in particular its ability to detect variations in the load of desert dust aerosols in a realistically variable atmosphere. A better understanding of the behaviour of these aerosols is among the needs regularly expressed for climate studies, weather forecasting, and the assessment of the solar resource in arid areas such as the Sahara. This type of aerosol is abundant in the atmosphere, and its physical and chemical properties make it distinguishable from other aerosol types, such as those resulting from anthropogenic pollution, especially as desert aerosols are emitted in areas protected from contamination by these other types. They therefore represent a simple case study with which to validate the methodology developed in this thesis. The methodology consists in building a simulator of the scene viewed by the instrument; running a large number of simulations of the radiance measured under different atmospheric conditions and ground albedos; analysing the results so as to quantify the influence of each variable on the variation of the radiance; and then drawing conclusions on the detection capabilities through a detectability test that takes the characteristics of the instrument into account. The simulator was validated by comparison against actual measurements from the SEVIRI instruments onboard the Meteosat Second Generation satellites. The main innovation lies in the use of the global sensitivity analysis (GSA) approach, which quantifies the influence of each variable separately as well as their cross terms. Cumulative distribution functions of the variables were computed from actual observations, allowing a realistic sensitivity analysis of the instrument. The GSA is also used to compute functional representations of the influence of one or more variables on the variability of the observed signal; the usefulness of such representations for various remote sensing applications is discussed.
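A minimal sketch of a variance-based first-order sensitivity index, the core quantity of GSA, is given below; the test function is invented for illustration, whereas the thesis applies GSA to radiances from a radiative-transfer simulator:

```python
import numpy as np

rng = np.random.default_rng(0)

def signal(x1, x2):
    """Illustrative model in which x1 dominates the output variance."""
    return 4.0 * x1 + 0.5 * x2

def first_order_index(x, y, bins=20):
    """Estimate the Sobol first-order index S_i = Var(E[Y|X_i]) / Var(Y)
    by binning X_i on its quantiles and averaging Y within each bin."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

n = 20000
x1 = rng.uniform(size=n)
x2 = rng.uniform(size=n)
y = signal(x1, x2)
s1 = first_order_index(x1, y)
s2 = first_order_index(x2, y)
print(s1 > s2)  # x1 explains most of the output variance
```

In the thesis the inputs are not uniform draws but follow distributions extracted from observations, which is what makes the resulting sensitivity analysis realistic.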
754

Ekonomické zhodnocení efektivnosti podpor z Programu rozvoje venkova v zemědělském podniku / Analysis of economical efficiency of Rural development program subsidies in selected agriculture business

SEVERINOVÁ, Monika, January 2011
The aim of the thesis is to evaluate the efficiency of investments supported by the Rural Development Programme at a selected farm. Several investments were considered; in practice, the firm's largest investment, a biogas plant, was analysed. To evaluate the effectiveness of the investment, the assumptions made when planning it were compared with information resulting from two years of operation of the biogas plant. The fulfilment of the objectives of the programme was also assessed on the basis of selected indicators.
755

Liquidity in the banking sector / Liquidité dans le secteur bancaire

Salé, Laurent, 24 November 2016
As one determinant of a bank's survival during the financial crisis of 2007-2008, liquidity in the banking sector presents a challenge for the financial and academic communities and has recently become a central point of interest. The three articles presented in this thesis focus on the two main facets of liquidity in the banking sector: the holding of liquid assets (i.e., cash and assimilated resources) and the process of liquidity creation in banks used to fund loans. As will be discussed in the articles, these two aspects of liquidity can be viewed as two sides of the same coin. I acknowledge that liquidity in banking is linked to the creation of money; however, this thesis focuses on the two aspects of liquidity mentioned above. First, this section presents how ideas about liquidity in the banking sector have evolved in mainstream economic thought. Second, it considers the revival of cash holding that has been observed in the banking sector since the financial crisis of 2007-2008. Third, it discusses the properties of liquidity. Fourth, it explores what we do not know about liquidity. Fifth, it identifies the fundamental issues analysed in the three articles. Finally, it presents the methodology used in the articles to address these issues. Chapter 1: "Why do banks hold cash?". This paper investigates the determinants of bank cash holding using international data for the period 1981-2014. Based on a large sample, we document a secular increase in bank cash holdings over this 35-year period. The results do not seem to support the substitutability hypothesis regarding the substitutive relation between cash and debt levels. Further, using the GMM-system estimation method, we find no support for the dynamic optimal cash model, suggesting that cash management in the banking sector is bounded by a number of constraints that make it difficult for agents to optimize their utility; this contrasts with the non-banking sector, where dynamic optimal cash holding is observed. Chapter 2: "Does an increase in capital negatively impact banking liquidity creation?". From a dataset composed of a panel of 940 listed banks based in European, American and Asian countries, this paper documents the evolution of bank liquidity creation over a 35-year period (1981-2014). The empirical evidence confirms that risk and equity levels play a significant and negative role. Overall, the negative effects of equity increases on bank liquidity creation are more significant than the corresponding positive effects on risk management, suggesting that capital requirements imposed to support financial stability negatively affect liquidity creation. These findings have broad implications for policymakers. Chapter 3: "Positive effects of Basel III on banking liquidity creation". This paper estimates the effect of the Basel III regulatory framework on banking liquidity creation. The results are based on a panel data set of U.S. banks representing approximately 60% of U.S. loans and deposits over a 7-year period (2009-2015), using difference-in-difference and standard survival methods. With all components of Basel III taken together, there is empirical evidence that Basel III has a positive effect on banking liquidity creation in the U.S. market, in particular for major banks. These findings have broad implications for policymakers.
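The difference-in-difference design used in Chapter 3 can be sketched as follows; the figures are invented for illustration, not taken from the U.S. bank panel:

```python
import numpy as np

# Illustrative liquidity-creation figures (treated = banks subject to the
# reform, control = comparable banks), before and after the policy change.
treated_pre  = np.array([10.0, 12.0, 11.0])
treated_post = np.array([14.0, 16.0, 15.0])
control_pre  = np.array([9.0, 11.0, 10.0])
control_post = np.array([10.0, 12.0, 11.0])

# Difference-in-difference estimator: the treated group's change minus the
# control group's change removes time trends common to both groups.
did = (treated_post.mean() - treated_pre.mean()) \
    - (control_post.mean() - control_pre.mean())
print(did)  # 3.0
```

The thesis combines this estimator with standard survival methods on the bank panel; the sketch only shows the arithmetic at the heart of the identification strategy.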
756

Skyfallskartering i Kumla : 2D-hydraulisk modellering och känslighetsanalys / Cloudburst mapping in Kumla : 2D hydraulic modelling and sensitivity analysis

Friman, Jacob, January 2017
Urban floods caused by intense rainfall have occurred more frequently in the last couple of years. These rainfall events are expected to become more common in the future and to create more floods in urban areas. This makes it important to investigate the extent and water levels of urban floods in the future. In order to simulate floods, different types of data are needed; these data can be both time-consuming and difficult to obtain. With this in mind, it is interesting to investigate possible simplifications and assumptions regarding model parameters. A cloudburst mapping was made with 2D hydraulic modelling in Kumla with the software MIKE 21 Flow Model FM. The flood maps created were used to identify areas in Kumla that have a higher risk of being subject to high water levels. One uncertainty when modelling urban floods is the process of validating the results: there is often a lack of data for the rainfall events used, or of information from previous floods in the area. The data used in flood modelling describe different model parameters; these come with additional uncertainties and can make validation more difficult. A sensitivity analysis was made in order to examine the effects on the results of variations in model parameters. The cloudburst mapping showed that large parts of Kumla will be affected by water levels of up to 1 m. The area Kumlaby was identified as sensitive to high water levels. This is due to the placement of Kumlaby below higher ground, which causes water to flow toward it, and to the fact that the ground is mostly made up of clay, which has a low infiltration capacity. In the sensitivity analysis, the bed resistance and infiltration capacity were identified as governing parameters for the extent and water levels of urban floods. In order to avoid over- or underestimating floods, it is important to have knowledge of these parameters in the model area. The use of a runoff coefficient instead of bed resistance, infiltration and evaporation was examined; the difference in the resulting flood was large across the whole model area. In smaller areas, where a more detailed classification of the surfaces can be made, a runoff coefficient could be used with better results. A scenario in which green roofs were assumed to have been installed on all buildings in Kumla was also examined. The simulations showed that both the extent and the water levels decreased, owing to the capacity of green roofs to store water and delay water flows.
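The runoff-coefficient simplification examined above is, in its simplest form, the rational method Q = C · i · A; the sketch below uses illustrative coefficient values, not those of the Kumla study:

```python
# Rational method: peak runoff Q = C * i * A, a common lumped alternative to
# modelling surface roughness, infiltration and evaporation separately.
# The coefficient values below are illustrative, not taken from the thesis.
def peak_runoff(c, intensity_mm_per_h, area_ha):
    """Return peak flow in litres per second (1 mm/h over 1 ha ~ 2.78 l/s)."""
    return 2.78 * c * intensity_mm_per_h * area_ha

paved = peak_runoff(0.9, intensity_mm_per_h=50.0, area_ha=2.0)  # dense urban surface
green = peak_runoff(0.3, intensity_mm_per_h=50.0, area_ha=2.0)  # vegetated surface
print(paved > green)  # a higher runoff coefficient gives a higher peak flow
```

This illustrates why the choice of coefficient matters: in a 2D hydraulic model the same lumping decision propagates directly into the simulated flood extent and water levels.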
757

Modelo para avaliação técnico-econômica e otimização de investimentos na proteção de redes de distribuição de energia elétrica contra descargas atmosféricas / A model for technical-economic evaluation and optimization of investments in lightning protection of power distribution networks

Paulo Sergio Milano Bernal, 26 April 2018
Lightning causes damage and losses to power companies, consumers, and society as a whole. Different methods can be used to improve power quality and the reliability of the electrical system. However, lightning overvoltages depend on several parameters, so the effectiveness of a given protection alternative depends not only on the network configuration but also on the characteristics of the region, especially the ground flash density and the soil resistivity. Consequently, the cost-benefit ratio of each alternative also depends on the characteristics of each region. It is therefore important that a model for conducting these assessments take into account all the factors involved, so that it can assist power companies in making decisions on investments in lightning protection. However, feasibility-analysis models that yield broad economic conclusions to support decision-making are not normally used, owing to the complexity of the phenomena associated with lightning overvoltages and the difficulty of constructing economic models in this context. To fill this gap, this work proposes a model for analysing the cost and benefit of installing lightning protection systems in distribution networks, considering the investments, the reduction in unavailability, and the costs avoided by the distribution company and by society. The financial analysis is based on the internal rate of return on investments and on the cost-benefit ratio, which facilitates the sensitivity analysis and allows the determination of the conditions under which an alternative is feasible. The model accounts for energy unavailability and system unreliability through the indicators of equivalent interruption duration per consumer unit (DEC) and equivalent interruption frequency per consumer unit (FEC), respectively. The costs avoided due to the reduction in unavailability are treated as benefits. The model is used to analyse the conditions under which the application of a given protection method to a given network is technically and economically feasible. A sensitivity analysis is also carried out, and the influence of several parameters on the cost-benefit ratios of the regions studied is discussed. The model, which in fact is not limited to the analysis of investments in improving the lightning performance of distribution networks, proved practical and very useful for decision-making on a given investment, taking into account the characteristics of the region and the effectiveness of the protection method analysed.
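The internal rate of return on which the financial analysis rests can be computed by root-finding on the net present value; a minimal sketch with invented cash flows (not figures from the thesis):

```python
def npv(rate, cash_flows):
    """Net present value of yearly cash flows, starting at year 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0):
    """Internal rate of return by bisection: the rate at which NPV = 0.
    Assumes a single sign change in the cash flows (investment then returns)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Illustrative project: a protection investment of 100 followed by five
# years of avoided-interruption benefits of 30 per year.
flows = [-100.0, 30.0, 30.0, 30.0, 30.0, 30.0]
rate = irr(flows)
print(rate)
```

An alternative whose IRR exceeds the company's discount rate (equivalently, whose benefit-cost ratio exceeds 1 at that rate) is economically feasible under this criterion.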
758

Avaliação de ciclo de vida na construção civil: análise de sensibilidade / Life cycle assessment in building construction: sensitivity analysis

Cristiane Bueno, 16 May 2014
Regarding the evaluation of building systems, an analysis of existing environmental certification schemes for buildings reveals that few tools assess environmental performance objectively and holistically through Life Cycle Assessment (LCA); recognition of individual product attributes predominates, and the global perspective on impacts is lost. Among the main difficulties in applying LCA to building systems is the scarcity of inventory data available for building systems in the Brazilian context, which makes the methodology even more complex and time-consuming to apply. On the other hand, international databases provide a considerable amount of information, which is often used in studies addressing the Brazilian context. This research therefore sought to answer the following questions: a) when collected for identical processes, do the data available in validated international databases lead to results similar to those obtained from primary data collected in the Brazilian context? b) Are the impact assessment methodologies currently available able to evaluate, fully and consistently, the main potential impacts arising from the life cycle of traditional building materials?
The objective of this research was thus to assess the sensitivity of the results of a comparative LCA case study to the use of secondary data (from European databases) versus primary data (collected in the Brazilian context), as well as to the use of different Life Cycle Impact Assessment (LCIA) methodologies, identifying the impact categories that contribute most significantly to the evaluation of traditional construction systems. The results showed that the geographic scope of the data sources and the choice among different LCIA methodologies are points of high sensitivity in LCA studies, which must be evaluated and described in detail to avoid misleading conclusions. Furthermore, the development of an LCIA category addressing the impacts of mining activities emerged as the main demand for future work.
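The data-source sensitivity this abstract describes can be illustrated with a minimal sketch: compute, per impact category, how far results based on secondary (European) inventory data deviate from those based on primary (Brazilian) data. The category names and impact values below are hypothetical placeholders, not results from the thesis.

```python
# Hedged sketch: quantifying how sensitive comparative LCA results are to the
# data source (primary Brazilian vs. secondary European inventories).
# All impact values below are hypothetical placeholders.

def relative_difference(primary, secondary):
    """Relative deviation of secondary-data results from primary-data results."""
    return {cat: (secondary[cat] - primary[cat]) / primary[cat]
            for cat in primary}

primary = {"climate change": 120.0, "acidification": 0.45}    # hypothetical
secondary = {"climate change": 150.0, "acidification": 0.40}  # hypothetical

for cat, dev in relative_difference(primary, secondary).items():
    print(f"{cat}: {dev:+.1%}")
```

Large deviations (or sign changes in a comparative study) flag the categories where the choice of database must be reported and justified.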
759

Sensitivity in optimal power flow

Edmarcio Antonio Belati, 21 May 2003
An approach to solving the perturbed Optimal Power Flow (OPF) problem is proposed in this study. The methodology consists of obtaining the optimal solution of the initial problem with an OPF program and then using sensitivity analysis to estimate new solutions after perturbations occur in the problem. These perturbations are load variations at one or more buses of the system. The sensitivity technique is based on second-order information and on the optimality conditions. Computing the solution after the system is perturbed is direct and does not depend on the initial and correction parameters, such as penalty and barrier terms, used in conventional OPF programs. The numerical results demonstrate the potential of this methodology for solving the perturbed OPF problem.
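The sensitivity idea in this abstract (re-estimating the optimum after a load perturbation from second-order information, instead of re-running the solver) can be sketched on a toy problem. The quadratic model, matrices, and load vector below are illustrative assumptions, not the thesis's OPF formulation.

```python
import numpy as np

# Hedged sketch: after solving an OPF-like problem once, re-estimate the
# optimum for a perturbed load from the stationarity conditions rather than
# re-running the solver. Toy unconstrained quadratic stand-in:
#   minimize f(x; p) = 0.5 x^T Q x + (c + A p)^T x,
# where p plays the role of the bus loads. All matrices are illustrative.

Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # Hessian (second-order information)
c = np.array([1.0, 2.0])
A = np.array([[1.0, 0.0], [0.0, 1.0]])   # how the loads enter the objective

def solve(p):
    # Stationarity: Q x + c + A p = 0
    return np.linalg.solve(Q, -(c + A @ p))

p0 = np.array([0.5, 0.5])
x0 = solve(p0)                       # "expensive" base-case solution

dp = np.array([0.1, -0.05])          # load perturbation
dxdp = np.linalg.solve(Q, -A)        # sensitivity matrix dx*/dp
x_est = x0 + dxdp @ dp               # cheap sensitivity update

assert np.allclose(x_est, solve(p0 + dp))
```

For a quadratic model the update is exact; for a real OPF it is a first-order estimate around the base-case optimum, which is the appeal of the approach: no fresh penalty or barrier parameters are needed.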
760

Using GPGPUs to accelerate simulations of the innate human immune system

Rocha, Pedro Augusto Ferreira, 27 August 2012
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Two mechanisms are used by the Human Immune System (HIS) to protect the body against diseases caused by distinct pathogens: the innate and the adaptive immune systems. The first is composed of cells and chemicals that use a generic defense mechanism to prevent or limit infections caused by most pathogens. The second is activated by the first; it has the ability to recognize and remember specific pathogens, contributing to the mounting of a more powerful attack each time the same pathogen is encountered again. Despite being widely studied, many questions about the functioning of the HIS remain open because of its complexity and the large number of interactions among its components at distinct levels. In this sense, computational tools are a powerful instrument to assist researchers in this field. This work falls within that scope and is split into two parts.
In the first part, this work presents the results of a sensitivity analysis of a mathematical-computational model that simulates the innate immune response to lipopolysaccharide (LPS); the main objective of the analysis was to find the most sensitive parameters of the model. The second part proposes the extension of the original model to a three-dimensional one. The simulations in both parts proved computationally expensive, requiring long periods of time to complete, so GPGPUs (General Purpose Graphics Processing Units) were used to reduce execution times. The use of GPGPUs yielded speedups of 276 times for the massive sensitivity analysis, compared to the sequential version, and of 87 times for computations using the three-dimensional model.
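The massive sensitivity analysis this abstract mentions is embarrassingly parallel: each perturbed parameter set is an independent simulation, which is why GPGPUs pay off. A hedged CPU-side sketch using NumPy vectorization as a stand-in for the CUDA kernels the work actually used; the three-parameter model below is purely illustrative.

```python
import numpy as np

# Hedged sketch: a one-at-a-time sensitivity sweep over model parameters.
# On a GPGPU each perturbed parameter set would be simulated by an
# independent thread/block; here the whole batch is evaluated in one
# vectorized call. The model is an illustrative stand-in, not the thesis's.

def model(params):
    # Stand-in for one expensive simulation run (params: shape (..., 3)).
    a, b, c = params[..., 0], params[..., 1], params[..., 2]
    return a * np.exp(-b) + c

base = np.array([1.0, 0.5, 2.0])
perturbations = np.linspace(0.9, 1.1, 5)   # +/-10% around each parameter
n = len(perturbations)

# Build the batch of perturbed parameter sets, one parameter at a time.
batch = np.repeat(base[None, :], 3 * n, axis=0)
for i in range(3):
    batch[i * n:(i + 1) * n, i] = base[i] * perturbations

outputs = model(batch)                          # all runs in one shot
spread = np.ptp(outputs.reshape(3, -1), axis=1)  # output range per parameter
print(spread)  # a larger spread marks a more sensitive parameter
```

Because the runs share no state, the speedup of 276 times reported above is plausible for this workload class: it is bounded mainly by how many simulations the device can keep in flight at once.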
