51

Characterization of Crazing Properties of Polycarbonate

Clay, Stephen Brett 06 September 2000 (has links)
The purpose of this study was to characterize the craze growth behavior of polycarbonate (PC) as a function of stress level, model the residual mechanical properties of PC at various craze levels and strain rates, and determine whether the total surface area of crazing is the sole factor in residual properties or whether the crazing stress also plays a role. To achieve these goals, a new in-situ reflective imaging technique was developed to quantify craze severity in transparent polymers. To characterize the craze growth rate, polycarbonate samples were placed under a creep load in a constant-temperature, constant-humidity environment. Using the new technique, the relative craze density was measured as a function of time under load at stresses of 40, 45, and 50 MPa. The craze growth rates were found to increase exponentially with stress level, and the times to 1% relative craze density were found to decrease exponentially with stress level. One exception to this behavior was found at a crazing stress of 50 MPa, at which over half of the samples tested experienced delayed necking, indicating competing mechanisms of crazing and shear yielding. The draw stress was found to be a lower bound below which delayed necking will not occur in a reasonable time frame. The yield stress, elastic modulus, failure stress, and ductility were correlated to crazing stress, relative craze density, and strain rate using a Design of Experiments (DOE) approach. The yield stress was found to correlate only with the strain rate, appearing to be unaffected by the presence of crazes. No correlation was found between the elastic modulus and the experimental factors. The failure stress was found to decrease with an increase in relative craze density from 0 to 1%, increase with an increase in crazing stress from 40 to 45 MPa, and correlate with the interaction between the crazing stress and the strain rate.
The ductility of polycarbonate was found to decrease significantly with an increase in relative craze density, a decrease in crazing stress, and an increase in strain rate. The craze microstructure was correlated to the magnitude of stress during craze formation. The area of a typical craze formed at 40 MPa was measured to be more than 2.5 times larger than the area of a typical craze formed at 45 MPa. The fewer, but larger, crazes formed at the lower stress level were found to decrease the failure strength and ductility of polycarbonate more severely than the large number of smaller crazes formed at the higher stress level. / Ph. D.
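As a rough illustration of the DOE analysis described above, the sketch below estimates main effects and one interaction from a two-level factorial. The factor roles mirror the study, but the coded design and the response values are entirely hypothetical.

```python
from itertools import product

# Two-level full factorial in three coded factors (-1/+1):
# A = crazing stress, B = relative craze density, C = strain rate.
# The factor roles match the study; the failure-stress responses
# below are invented numbers for illustration only.
runs = list(product([-1, 1], repeat=3))
y = [62.0, 58.5, 60.1, 55.2, 63.4, 61.0, 61.8, 58.9]  # MPa, hypothetical

def effect(contrast):
    """Average response where the contrast is +1 minus where it is -1."""
    hi = [yi for r, yi in zip(runs, y) if contrast(r) == 1]
    lo = [yi for r, yi in zip(runs, y) if contrast(r) == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

main_A = effect(lambda r: r[0])           # crazing stress main effect
main_B = effect(lambda r: r[1])           # craze density main effect
inter_AC = effect(lambda r: r[0] * r[2])  # stress x strain-rate interaction
```

With real data, effects whose magnitude clearly exceeds the replication noise would be declared active, which is the form the strain-rate and interaction conclusions above take.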
52

Investigation of performance and surge behavior of centrifugal compressors through CFD simulations

Tosto, Francesco January 2018 (has links)
The use of turbocharged Diesel engines is nowadays widespread in the automotive sector: heavy-duty vehicles such as trucks and buses, in particular, are often equipped with turbocharged engines. An accurate study of the flow field developing inside both main components of a turbocharger, i.e. the compressor and the turbine, is therefore necessary: the synergistic use of CFD simulations and experimental tests makes it possible to fulfill this requirement. The aim of this thesis is to investigate the performance and the flow field that develops inside a centrifugal compressor for automotive turbochargers. The study is carried out by means of numerical simulations, both steady-state and transient, based on RANS (Reynolds-Averaged Navier-Stokes) models. The code used for the numerical simulations is Ansys CFX.   The first part of the work develops a CFD method for predicting the performance of a centrifugal compressor based solely on steady-state RANS models. The results obtained are then compared with experimental observations. The study continues with an analysis of the sensitivity of the developed CFD method to different parameters: the influence on overall performance of both the position and the model used for the rotor-stator interfaces, and of the axial tip clearance, is studied and quantified.   In the second part, a design optimization study based on the Design of Experiments (DoE) approach is performed. In detail, transient RANS simulations are used to identify which geometry of the recirculation cavity hollowed inside the compressor shroud (ported shroud design) best mitigates the backflow that appears at low mass-flow rates. Backflow can be observed when the operating point of the compressor is suddenly moved from design to surge conditions. On actual heavy-duty vehicles, these conditions may arise when a rapid gear shift is performed.
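The DoE-based cavity optimization can be sketched as a loop over a small factorial of geometry candidates, one expensive simulation per design point. The parameter names, ranges, and the analytic stand-in for the transient CFD result below are all hypothetical, not values from the thesis.

```python
from itertools import product

# Hypothetical DoE over ported-shroud cavity geometry; in the real study
# each design point costs one transient RANS simulation.
cavity_widths_mm = [4.0, 6.0, 8.0]
cavity_depths_mm = [10.0, 15.0]

def backflow_metric(width, depth):
    """Placeholder for a simulated backflow measure: lower is better."""
    return (width - 6.5) ** 2 + 0.1 * (depth - 12.0) ** 2

designs = list(product(cavity_widths_mm, cavity_depths_mm))
best = min(designs, key=lambda d: backflow_metric(*d))  # -> (6.0, 10.0)
```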
53

Development of a machine-tooling-process integrated approach for abrasive flow machining (AFM) of difficult-to-machine materials with application to oil and gas exploration components

Howard, Mitchell James January 2014 (has links)
Abrasive flow machining (AFM) is a non-traditional manufacturing technology used to expose a substrate to a pressurised multiphase slurry, comprised of superabrasive grit suspended in a viscous, typically polymeric carrier. Extended exposure to the slurry causes material removal, where the quantity of removal is subject to complex interactions among over 40 variables. Flow is contained within boundary walls, complex in form, causing physical phenomena to alter the behaviour of the media. In setting factors and levels prior to this research, engineers had two options: embark upon a wasteful, inefficient and poor-capability trial-and-error process, or attempt to relate findings achieved in simple geometry to complex geometry through a series of transformations, providing information that could be applied over and over. By condensing process variables into appropriate study groups, it becomes possible to quantify output while manipulating only a handful of variables. Those that remain un-manipulated are integral to the factors identified. Through factorial and response surface methodology experiment designs, data is obtained and interrogated, before feeding into a simulated replica of a simple system. Correlation with physical phenomena is sought, to identify flow conditions that drive material removal location and magnitude. This correlation is then applied to complex geometry with relative success. It is found that prediction of viscosity through computational fluid dynamics can be used to estimate as much as 94% of the edge-rounding effect on final complex geometry. Surface finish prediction is lower (~75%), but shows a strong enough relationship to warrant further investigation.
Original contributions made in this doctoral thesis include: 1) a method of utilising computational fluid dynamics (CFD) to derive a suitable process model for the productive and reproducible control of the AFM process, including identification of the core physical phenomena responsible for driving erosion; 2) comprehensive understanding of the effects of B4C-loaded polydimethylsiloxane variants used to process Ti6Al4V in the AFM process, including prediction equations containing numerically-verified second-order interactions (factors for grit size, grain fraction and modifier concentration); 3) equivalent understanding of machine factors providing energy input, studying velocity, temperature and quantity, with verified predictions made from data collected in Ti6Al4V substrate material using response surface methodology; 4) a holistic method for translating process data in control geometry to an arbitrary geometry for industrial gain, extending to a framework for collecting new data and integrating it into current knowledge; and 5) application of the methodology, using research-derived CFD, to complex geometry, proven by measured process output. As a result of this project, four publications have been made to date: two peer-reviewed journal papers and two peer-reviewed international conference papers. Further publications will be made from June 2014 onwards.
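One common way to link CFD wall quantities to erosion, in the spirit of the process model described above, is a Preston-type removal law in which local removal depth scales with wall pressure, slip speed, and exposure time. The coefficient and the wall data below are illustrative only, not values from the thesis.

```python
# Preston-type removal law: depth ~ Kp * p * v * t, a generic way to turn
# CFD wall fields into an erosion estimate. All numbers are hypothetical.
K_p = 1e-13        # Preston-type coefficient, m^2/N (assumed)
t_exposure = 60.0  # media exposure time per cycle, s

# Hypothetical wall-cell data extracted from a CFD solution:
wall_cells = [(2.0e6, 0.8), (1.5e6, 1.2), (3.0e6, 0.5)]  # (Pa, m/s)

# Per-cell removal depth in metres over one exposure cycle.
removal_depth = [K_p * p * v * t_exposure for p, v in wall_cells]
```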
54

Modeling & optimisation of coarse multi-vesiculated particles

Clarke, Stephen Armour 03 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: Multi-vesiculated particles (MVP) are synthetic insoluble polymeric particles containing a multitude of vesicles (micro-voids). The particles are generally produced and used as a suspension in an aqueous fluid and are therefore readily incorporated in latex paints as opacifiers. The coarse or suede MVP have a large volume-mean diameter (VMD), generally in the range of 35-60μm, which makes them suitable for textured-effect paints. The general principle behind the MVP technology is that as the particles dry, the vesicles drain of liquid and fill with air. The large refractive index difference between the polymer shell and air results in the scattering of incident light, which gives the MVP their white opaque appearance and makes them suitable as an opacifier for the partial replacement of TiO2 in coating systems. Whilst the coarse MVP have been successfully commercialized, insufficient understanding of the influence of the MVP system parameters on the final product characteristics, coupled with the MVP's sensitivity towards the unsaturated polyester resin (UPR), resulted in a product with significant quality variation. On the other hand, these uncertainties provided the opportunity to model and optimise the MVP system: to develop a better understanding of how the system parameters influence the product characteristics, to describe these relationships mathematically, and to optimise the system to achieve the product specifications whilst minimising the variation observed in the product characteristics.
The primary MVP characteristics for this study were the particle size distribution (quantified by the volume-mean diameter (VMD)) and the reactor buildup. The approach taken was to analyse the system, determining all possible system factors that may affect it, and then to reduce the total number of system factors by selecting those which have a significant influence on the characteristics of interest. A model was then developed to mathematically describe the relationship between these significant factors and the characteristics of interest. This was done utilising a set of statistical methods known as design of experiments (DoE). A screening DoE was conducted on the identified system factors, reducing them to a subset of factors which had a significant effect on the VMD and buildup. The UPR was characterised by its acid value and viscosity, and in combination with the identified significant factors a response surface model (RSM) was developed for the chosen design space, mathematically describing their relationship with the MVP characteristics. Utilising a DoE method known as robust parameter design (specifically propagation of error), an optimised MVP system was numerically determined which brought the MVP product within specification and simultaneously reduced the MVP's sensitivity to the UPR. The validation of the response surface model indicated that the average error in the VMD prediction was 2.16μm (5.16%), which compared well to the 1.96μm standard deviation of replication batches. The high Pred-R2 value of 0.839 and the low validation error indicate that the model is well suited for predicting the VMD characteristic of the MVP system. The application of propagation of error to the model during optimisation resulted in an MVP process and formulation which brought the VMD response from the standard's average of 44.56μm to the optimised system's average of 47.84μm, significantly closer to the desired optimum of 47.5μm.
The most notable value added to the system by the propagation of error technique was the reduction in the variation around the mean of the VMD, due to the UPR, by over 30% from the standard to the optimised MVP system. In addition to the statistical model, dimensional analysis (specifically the Buckingham-Π method) was applied to the MVP system to develop a semi-empirical dimensionless model for the VMD. The model parameters were regressed from the experimental data obtained from the DoE, and the model was compared to several models cited in the literature. The dimensionless model was not ideal for predicting the VMD, as indicated by the R2 value of 0.59 and the high average error of 21.25%. However, it described the VMD better than any of the models cited in the literature, many of which had negative R2 values and were therefore not suitable for modelling the MVP system.
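The robust parameter design step, propagation of error, can be sketched as follows: push the spread of a noise variable (here the resin acid value) through a fitted response surface and choose controllable settings that shrink the transmitted standard deviation. The model coefficients below are hypothetical, not the thesis's fitted RSM.

```python
# Robust parameter design via first-order propagation of error.
# Coefficients and factor names are assumptions for illustration.

def vmd_model(speed, acid_value):
    """Hypothetical response-surface prediction of VMD (um)."""
    return 30.0 + 2.5 * speed - 0.8 * acid_value + 0.05 * speed * acid_value

def transmitted_sd(speed, acid_value, sd_acid, h=1e-5):
    """Transmitted variation |df/dz| * sd(z), with the derivative in the
    noise variable z (acid value) taken by central finite difference."""
    dfdz = (vmd_model(speed, acid_value + h)
            - vmd_model(speed, acid_value - h)) / (2 * h)
    return abs(dfdz) * sd_acid

# Raising the controllable factor cancels the resin sensitivity here:
sd_low = transmitted_sd(speed=4.0, acid_value=20.0, sd_acid=1.5)    # ~0.9 um
sd_high = transmitted_sd(speed=16.0, acid_value=20.0, sd_acid=1.5)  # ~0.0 um
```

Choosing the second setting is exactly the kind of move that produced the reported 30% reduction in VMD variation transmitted from the UPR.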
55

Automated Selection of Mixed Integer Program Solver Parameters

Stewart, Charles 30 April 2010 (has links)
This paper presents a method that uses designed experiments and statistical models to extract information about how solver parameter settings perform for classes of mixed integer programs. The use of experimental design facilitates fitting a model that describes the response surface across all combinations of parameter settings, even those not explicitly tested, allowing identification of both desirable and poor settings. Identifying parameter settings that give the best expected performance for a specific class of instances and a specific solver can be used to more efficiently solve a large set of similar instances, or to ensure solvers are being compared at their best.
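A minimal sketch of the idea, assuming hypothetical parameters and solve times: fit a main-effects model to a half-fraction of a 2^3 design, then use the model to rank all eight parameter combinations, including the four that were never run.

```python
from itertools import product

# Half-fraction (I = ABC) of a 2^3 design over three hypothetical
# solver parameters, coded -1/+1; the solve times (s) are invented.
runs = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
times = [8.0, 14.0, 11.0, 17.0]

grand_mean = sum(times) / len(times)
effects = []
for j in range(3):
    hi = sum(t for r, t in zip(runs, times) if r[j] == 1) / 2
    lo = sum(t for r, t in zip(runs, times) if r[j] == -1) / 2
    effects.append(hi - lo)

def predict(x):
    """Main-effects model: covers all 8 combinations, tested or not."""
    return grand_mean + sum(xj * ej / 2 for xj, ej in zip(x, effects))

# Rank every combination, including those not in the fraction.
best = min(product([-1, 1], repeat=3), key=predict)
```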
56

Applications of Sure Independence Screening Analysis for Supersaturated Designs

Nicely, Lindsey 25 April 2012 (has links)
Experimental design has applications in many fields, from medicine to manufacturing. Incorporating statistics into both the planning and analysis stages of the experiment will ensure that appropriate data are collected to allow for meaningful analysis and interpretation of the results. If the number of factors of interest is very large, or if the experimental runs are very expensive, then a supersaturated design (SSD) can be used for factor screening. These designs have n runs and k > n - 1 factors, so there are not enough degrees of freedom to allow estimation of all of the main effects. This paper will first review some of the current techniques for the construction and analysis of SSDs, as well as the analysis challenges inherent to SSDs. Analysis techniques of Sure Independence Screening (SIS) and Iterative Sure Independence Screening (ISIS) are discussed, and their applications for SSDs are explored using simulation, in combination with the Smoothly Clipped Absolute Deviation (SCAD) approach for down-selecting and estimating the effects.
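The SIS step itself is simple enough to sketch directly: rank all k factors by absolute marginal correlation with the response and keep the top d, after which a penalized method such as SCAD can work on the reduced set. The design, the active factors, and the coefficients below are simulated, hypothetical values.

```python
import random

random.seed(1)

# Supersaturated setting: n runs but k > n - 1 candidate factors,
# of which only a few are truly active. All values are simulated.
n, k, d = 12, 30, 4
X = [[random.choice([-1, 1]) for _ in range(k)] for _ in range(n)]
# Hypothetical truth: only factors 3 and 17 drive the response.
y = [2.0 * X[i][3] - 1.5 * X[i][17] + random.gauss(0, 0.3) for i in range(n)]

def marginal_corr(j):
    """|sample correlation| between factor column j and the response."""
    xbar = sum(X[i][j] for i in range(n)) / n
    ybar = sum(y) / n
    sxy = sum((X[i][j] - xbar) * (y[i] - ybar) for i in range(n))
    sxx = sum((X[i][j] - xbar) ** 2 for i in range(n)) ** 0.5
    syy = sum((yi - ybar) ** 2 for yi in y) ** 0.5
    if sxx == 0 or syy == 0:  # constant column carries no information
        return 0.0
    return abs(sxy / (sxx * syy))

# SIS step: keep only the d factors with the largest marginal correlation;
# a SCAD-penalized fit would then be run on this reduced set.
kept = sorted(range(k), key=marginal_corr, reverse=True)[:d]
```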
57

Considerations for Screening Designs and Follow-Up Experimentation

Leonard, Robert D 01 January 2015 (has links)
The success of screening experiments hinges on the effect sparsity assumption, which states that only a few of the factorial effects of interest actually have an impact on the system being investigated. The development of a screening methodology to harness this assumption requires careful consideration of the strengths and weaknesses of a proposed experimental design, in addition to the ability of an analysis procedure to properly detect the major influences on the response. However, for the most part, screening designs and their complementing analysis procedures have been proposed separately in the literature without clear consideration of their ability to perform as a single screening methodology. As a contribution to this growing area of research, this dissertation investigates the pairing of non-replicated and partially replicated two-level screening designs with model selection procedures that allow for the incorporation of a model-independent error estimate. Using simulation, we focus attention on the ability to screen active effects from a first-order model with two-factor interactions and the possible benefits of using partial replication as part of an overall screening methodology. We begin with a focus on single-criterion optimum designs and propose a new criterion to create partially replicated screening designs. We then extend the newly proposed criterion into a multi-criterion framework where estimation of the assumed model, in addition to protection against model misspecification, is considered. This is an important extension of the work since initial knowledge of the system under investigation is considered to be poor in the cases presented. A methodology to reduce a set of competing design choices is also investigated using visual inspection of plots meant to represent uncertainty in design criterion preferences.
Because screening methods typically involve sequential experimentation, we present a final investigation into the screening process by presenting simulation results which incorporate a single follow-up phase of experimentation. In this concluding work we extend the newly proposed criterion to create optimal partially replicated follow-up designs. Methodologies are compared which use different methods of incorporating knowledge gathered from the initial screening phase into the follow-up phase of experimentation.
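The model-independent error estimate that motivates partial replication can be sketched in a few lines: with duplicated design points, each replicate pair contributes one degree of freedom of pure error, regardless of which effects end up in the fitted model. The response values below are hypothetical.

```python
# Pure-error estimate from a partially replicated design: each pair of
# replicated runs contributes (y1 - y2)^2 / 2 with one degree of freedom,
# independent of whatever model is later fitted. Data are illustrative.
replicate_pairs = [
    (10.2, 10.8),  # design point A, runs 1 and 2
    (7.9, 7.5),    # design point B
    (12.1, 12.7),  # design point C
]

ss_pe = sum((a - b) ** 2 / 2 for a, b in replicate_pairs)
df_pe = len(replicate_pairs)
pure_error_var = ss_pe / df_pe  # model-independent error variance
```

This variance can then calibrate the model selection step, which is the pairing the dissertation studies.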
58

Regeneration of activated carbon by photocatalysis using titanium dioxide

Carballo-Meilan, M. Ara January 2015 (has links)
The adsorption of methylene blue onto two types of commercial activated carbon, a mesoporous type (Norit CA1) and a microporous type (207C), was analysed. Powdered TiO2 was mixed with the carbon and added to the dye solution to determine the influence of the photocatalyst during the adsorption process. Equilibrium and kinetics experiments were done with and without the addition of photocatalytic titanium dioxide (TiO2). Changes in capacity, heterogeneity, and heat of adsorption were detected and related to changes in the quantity of TiO2 added by evaluating the equilibrium parameter from 13 isotherm models. The influence of TiO2 on the adsorption kinetics of the dye was correlated using a simplified kinetic-type model as well as mass transfer parameters. Using a formal design of experiments approach, responses such as the removal of the dye, variation of pH, external mass transfer rate (KF) and intraparticle rate constant (Ki) were evaluated. Results indicated that TiO2 increased the uptake of methylene blue onto CA1, increased Ki, and that the CA1-TiO2 interactions were electrostatic in nature. In contrast, TiO2 was seen to inhibit the equilibrium adsorption for 207C by reducing its capacity. The 207C-TiO2 interaction was attributed to a specific adsorption of TiO2 on the coconut-based adsorbent, as zeta potential and pH measurements seemed to suggest. The regeneration of activated carbon using UV-C/TiO2 heterogeneous photocatalysis in a novel bell photocatalytic reactor, and in a standard coiled-tube photoreactor, was also studied. Initially, response surface methodology was applied to find the optimum conditions for the mineralization of methylene blue in both reactors, using methylene blue as the model compound and TiO2 as the photocatalyst in direct photocatalytic decolourization. Methylene blue concentration, TiO2 concentration and pH were the variables under study.
Complete mineralization of the dye was achieved in the coiled-tube reactor using 3.07 mg/L of methylene blue at pH 6.5 with 0.4149 g/L TiO2. The regeneration experiments in the coiled-tube photoreactor were done using the One Variable at a Time (OVAT) method; the mass of TiO2 was the only variable studied. The study indicated an increase in the regeneration of CA1 and a decrease in pH during the oxidation step at higher concentrations of the photocatalyst. In the case of the regeneration of 207C, the addition of TiO2 lowered the regeneration and made the suspension more basic during the photocatalytic step. However, these results were not statistically significant. Experiments using the bell photoreactor were performed applying the same response surface method used in direct photocatalytic decolourization (control). The variables under study were pH, concentration of dye-saturated carbon and TiO2 concentration, with the regeneration percentage as the chosen response. Low regeneration percentages were achieved (maximum 63%), and significant differences (95% confidence interval) were found between the regeneration of the activated carbons, being higher for powdered CA1 as compared with granular 207C.
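As an aside on the isotherm-fitting step, one of the simpler of the 13 models, the Langmuir isotherm, can be fitted with a plain linear regression via the linearization Ce/qe = Ce/qmax + 1/(KL*qmax). The equilibrium data below are synthetic, generated from assumed parameters rather than taken from the study.

```python
# Langmuir fit by linearization; synthetic noise-free data generated
# with qmax = 200 mg/g and KL = 0.05 L/mg (assumed values).
Ce = [5.0, 10.0, 20.0, 40.0, 80.0]          # equilibrium concentration, mg/L
qe = [10 * c / (1 + 0.05 * c) for c in Ce]  # uptake, mg/g

# Ordinary least squares on y = Ce/qe versus x = Ce.
x, y = Ce, [c / q for c, q in zip(Ce, qe)]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = ybar - slope * xbar

qmax_fit = 1 / slope        # monolayer capacity, mg/g
KL_fit = slope / intercept  # Langmuir constant, L/mg
```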
59

Stochastic methods of data modeling: application to the reconstruction of non-regular data

Buslig, Leticia 06 October 2014 (has links)
No abstract available.
60

BLACK OIL RESERVOIRS SIMULATOR PROXY USING COMPUTATIONAL INTELLIGENCE AND FRACTIONAL FACTORIAL DESIGN OF EXPERIMENTS

ALEXANDRE DE CASTRO ALMEIDA 30 March 2009 (has links)
In many stages of the work chain of the Oil & Gas Industry, activities of petroleum engineering demand processes that involve optimization.
More specifically, in reservoir management, methodologies for decision making on the use of intelligent wells involve optimization processes. In those processes, the goal is usually to maximize the NPV (Net Present Value), which is calculated from the curves of oil, gas and water production supplied by a reservoir simulator. Such simulations carry a high computational cost, so in many cases the optimization processes become unfeasible. In this study, computational intelligence techniques (artificial neural network and neuro-fuzzy models) are applied to build proxies for reservoir simulators, aiming to reduce the computational cost of a decision support system for using, or not, intelligent wells in oil reservoirs. In order to reduce the number of samples needed to build the models, a Fractional Factorial Design of Experiments was used. The proxies have been tested on two oil reservoirs: a synthetic one, very sensitive to changes in the control of intelligent wells, and another with realistic characteristics. The replacement of the simulator by the reservoir proxy in the optimization process gave good results in terms of performance: low errors and significantly reduced computational cost. Moreover, the tests demonstrated that total replacement of the simulator by the proxy is an interesting strategy for using the optimization system, providing the user with a fast decision-support tool.
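The proxy idea reduces to: validate a cheap approximator against the expensive simulator on a modest grid, and only then let the optimizer call the proxy. Both functions below are illustrative analytic stand-ins; the actual work uses trained neural and neuro-fuzzy models.

```python
# Proxy validation sketch: check a cheap approximator against the
# expensive simulator before trusting it inside an optimizer.
# Both functions are hypothetical analytic stand-ins.

def simulator_npv(x):
    """Stand-in for a full reservoir-simulator NPV evaluation."""
    return 50.0 + 10.0 * x - 4.0 * x * x  # a real run takes minutes to hours

def proxy_npv(x):
    """Cheap proxy, as if its coefficients were learned from a few runs."""
    return 49.8 + 10.1 * x - 4.05 * x * x

xs = [i / 10 for i in range(21)]  # control-variable grid on [0, 2]
max_abs_err = max(abs(simulator_npv(x) - proxy_npv(x)) for x in xs)
```

If the maximum discrepancy is acceptably small relative to the NPV scale, the optimizer can query the proxy thousands of times at negligible cost, which is the speedup the abstract reports.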
