481
Modeling Of The Biomass Power Generation And Techno-Economic Analysis. Methuku, Shireesha, 11 December 2009
Biomass is one of the renewable energy sources widely used for power generation. This research develops a comprehensive model of a biomass-based power generation system and analyzes its technical, economic, and environmental impacts. The research objectives include modeling of the system, stability studies, and sensitivity analysis using MATLAB/Simulink. A mathematical model of the gas turbine has been developed and successfully interconnected with the distribution network. Transient stability analysis of the power system has been carried out for four-bus and six-bus test case systems. Maximum rotor speed deviation, oscillation duration, rotor angle, and mechanical power have been taken as the stability indicators to analyze the system characteristics. Additionally, the sensitivity of the system to changes in gas turbine parameters has been investigated under balanced and unbalanced fault scenarios. The economic and environmental impacts of biomass have been analyzed using HOMER software, developed by the National Renewable Energy Laboratory (NREL). The net present costs of four biomass resources, namely agricultural resources, forest residues, animal waste, and energy crops, were obtained, and the costs of the biomass fuels were compared with that of diesel. To investigate the environmental impact, the carbon emissions of the different biomass fuels were explored using HOMER.
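As a hedged illustration (not taken from the thesis), two of the time-domain stability indicators named above can be computed from a simulated rotor-speed trace roughly as follows; the settle band and the synthetic signal are assumptions of this sketch.

```python
import numpy as np

def stability_indicators(t, omega, omega_sync=1.0, settle_band=0.002):
    """t: time [s]; omega: rotor speed [p.u.]; settle_band: assumed tolerance."""
    dev = omega - omega_sync
    max_dev = np.max(np.abs(dev))            # maximum rotor speed deviation
    outside = np.abs(dev) > settle_band      # samples still oscillating
    # Oscillation duration: last time the deviation leaves the settle band
    osc_duration = t[np.flatnonzero(outside)[-1]] if outside.any() else 0.0
    return max_dev, osc_duration

# Synthetic damped swing after a fault cleared at t = 1 s (illustrative only)
t = np.linspace(0.0, 10.0, 2001)
tau = np.clip(t - 1.0, 0.0, None)
omega = 1.0 + 0.01 * np.exp(-0.5 * tau) * np.sin(2 * np.pi * 1.5 * tau)
print(stability_indicators(t, omega))
```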
482
CAUSAL MEDIATION ANALYSIS FOR NON-LINEAR MODELS. Wang, Wei, 26 June 2012
No description available.
483
Verification and Validation of a Transient Heat Exchanger Model. Carper, Jayme Lee, 01 September 2015
No description available.
484
Statistical Methods for Functional Genomics Studies Using Observational Data. Lu, Rong, 15 December 2016
No description available.
485
Satellite Attitude Determination Using Laser Communication Systems. Sabala, Ryan J., 25 September 2008
No description available.
486
Modeling, Optimization and Estimation in Electric Arc Furnace (EAF) Operation. Ghobara, Emad Moustafa Yasser, 10 1900
The electric arc furnace (EAF) is a highly energy-intensive process used to convert scrap metal into molten steel. The aim of this research is to develop a dynamic model of an industrial EAF process and investigate its application for optimal EAF operation. This work makes three main contributions: the first is a model, largely based on MacRosty and Swartz (2005), adapted to the operation of a new industrial partner (ArcelorMittal Contrecoeur Ouest, Quebec, Canada); the second is a set of sensitivity analyses investigating the effect of the scrap components on the EAF process; the third is a constrained multi-rate extended Kalman filter (EKF) that infers the states of the system from the measurements provided by the plant.

A multi-zone model is developed and discussed in detail. Heat and mass transfer relationships are considered. Chemical equilibrium is assumed in two of the zones and calculated through the minimization of the Gibbs free energy. The most sensitive parameters are identified and estimated using plant measurements. The model is then validated against plant data and shows a reasonable level of accuracy.

Local differential sensitivity analysis is performed to investigate the effect of scrap components on EAF operation. Iron was found to have the greatest effect among the components present. The optimal operation of the furnace is then determined through economic optimization, in which the trade-off between electrical and chemical energy is resolved to maximize profit. Different scenarios are considered, including price variations in electricity, methane, and oxygen.

A constrained multi-rate EKF is implemented to estimate the states of the system using plant measurements. The EKF showed high performance in tracking the true states of the process, even in the presence of a parametric plant-model mismatch. / Master of Applied Science (MASc)
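A minimal sketch of the estimator structure described above, under stated assumptions: a generic EKF whose measurement update runs only when a sensor reports (multi-rate) and whose state is projected onto box bounds, one simple way to impose constraints. The functions f, h, their Jacobians, and the bounds are placeholders, not the thesis's EAF model.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R, lo, hi):
    """One EKF cycle; pass z=None when this sensor has no sample (multi-rate)."""
    # Predict through the nonlinear model
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    if z is None:
        return np.clip(x_pred, lo, hi), P_pred   # prediction only
    # Measurement update
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_upd = x_pred + K @ (z - h(x_pred))
    P_upd = (np.eye(len(x)) - K @ H) @ P_pred
    # Constraint handling by projecting onto known physical box bounds
    return np.clip(x_upd, lo, hi), P_upd
```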
487
Cost Modeling Based on Support Vector Regression for Complex Products During the Early Design Phases. Huang, Guorong, 04 September 2007
The purpose of a cost model is to provide designers and decision-makers with accurate cost information to assess and compare multiple alternatives, obtain the optimal solution, and control cost. The cost models developed in the design phases are the most important and the most difficult to develop. It is therefore necessary to identify appropriate cost drivers and employ appropriate modeling techniques to estimate cost accurately and so guide designers. The objective of this study is to provide higher predictive accuracy of cost estimation for guiding designers in the early design phases of complex products.
After a generic cost estimation model is presented and the existing methods for identifying cost drivers and the different cost modeling techniques are reviewed, the dissertation first proposes new methodologies to identify and select cost drivers: the Causal-Associated (CA) method and the Tabu-Stepwise selection approach. The CA method increases understanding and explanation of the cost analysis and helps avoid missing cost drivers. The Tabu-Stepwise selection approach is used to select significant cost drivers and eliminate irrelevant ones in nonlinear situations. A case study illustrates their procedure and benefits, and the test data show they can improve predictive capacity.
Second, this dissertation introduces Tabu-SVR, a nonparametric approach based on support vector regression (SVR) for cost estimation of complex products in the early design phases. Tabu-SVR determines the parameters of SVR via a tabu search algorithm improved by the author. For verification and validation of Tabu-SVR's performance, five common basic cost characteristics are summarized: accumulation, linear function, power function, step function, and exponential function. Based on these five characteristics and the Flight Optimization Systems (FLOPS) cost module (engine part), seven test data sets are generated to test Tabu-SVR and compare it with traditional methods (parametric modeling, neural networks, and case-based reasoning). The results show that Tabu-SVR significantly improves performance over plain SVR in the empirical study. The radial basis function (RBF) kernel, which is much more robust, often outperforms linear and polynomial kernels. Compared with other traditional cost estimating approaches, Tabu-SVR with the RBF kernel has strong predictive capability and is able to capture nonlinearities and discontinuities along with interactions among cost drivers.
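The following is a hedged sketch of the general Tabu-SVR idea, tuning SVR hyperparameters (C, gamma, epsilon) with a basic tabu search; the neighborhood scheme, the scikit-learn usage, and all settings are assumptions of this illustration, not the author's improved algorithm.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def tabu_svr(X, y, n_iter=30, tabu_len=8, seed=0):
    rng = np.random.default_rng(seed)
    cur = np.array([0.0, -1.0, -2.0])          # log10 of (C, gamma, epsilon)
    best, best_score, tabu = cur.copy(), -np.inf, []
    for _ in range(n_iter):
        # Candidate moves: random perturbations not on the tabu list
        neigh = [cur + rng.normal(0.0, 0.3, 3) for _ in range(10)]
        neigh = [p for p in neigh
                 if not any(np.allclose(p, q, atol=0.05) for q in tabu)]
        if not neigh:
            continue
        scores = [cross_val_score(
                      SVR(kernel="rbf", C=10**p[0], gamma=10**p[1],
                          epsilon=10**p[2]),
                      X, y, cv=3, scoring="neg_mean_squared_error").mean()
                  for p in neigh]
        i = int(np.argmax(scores))
        cur = neigh[i]                  # accept best neighbor, even if worse
        tabu = (tabu + [cur.copy()])[-tabu_len:]
        if scores[i] > best_score:
            best, best_score = cur.copy(), scores[i]
    return {"C": 10**best[0], "gamma": 10**best[1], "epsilon": 10**best[2]}
```

Accepting the best non-tabu neighbor even when it is worse than the incumbent is what lets tabu search escape local optima, which grid search and plain hill climbing cannot do.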
The third part of this dissertation focuses on semiparametric cost estimating approaches. Extensive studies are conducted on three semiparametric algorithms based on SVR. Three data sets are produced by combining the aforementioned five common basic cost characteristics. The experiments show that Semiparametric Algorithm 1 is the best approach in most situations, with better cost estimating accuracy than both the pure nonparametric and the pure parametric approach. Model complexity influences the estimating accuracy of Semiparametric Algorithms 2 and 3: if inexact functional forms are used as the parametric component of a semiparametric algorithm, they often bring no improvement in cost estimating accuracy over the pure nonparametric approach and can even worsen performance.
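One plausible reading of such a semiparametric scheme, sketched under the assumption that the parametric component is linear and the nonparametric component is an SVR fitted to its residuals; the thesis's exact algorithms are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVR

class SemiparametricCost:
    """Parametric trend + SVR residual model (illustrative structure only)."""
    def __init__(self):
        self.svr = SVR(kernel="rbf", C=10.0)

    def fit(self, X, y):
        # Parametric component: linear least squares with intercept
        A = np.column_stack([np.ones(len(X)), X])
        self.beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ self.beta
        self.svr.fit(X, resid)           # nonparametric residual model
        return self

    def predict(self, X):
        A = np.column_stack([np.ones(len(X)), X])
        return A @ self.beta + self.svr.predict(X)
```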
The last part of this dissertation introduces two existing methods for sensitivity analysis to improve the explanatory capability of the SVR-based cost estimating approach. These methods are able to show the contribution of cost drivers, determine their effects, establish their profiles, and conduct monotonic analysis. They can thus help designers conduct trade-off studies and answer “what-if” questions. / Ph. D.
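A hedged sketch of the simplest profile of this kind: sweep one cost driver over its observed range with the others held at their medians and record the predicted cost. The helper name and sampling choices are illustrative, not the two methods the thesis uses.

```python
import numpy as np

def driver_profile(model, X, j, n=25):
    """Cost profile of driver j, other drivers fixed at their medians."""
    grid = np.linspace(X[:, j].min(), X[:, j].max(), n)
    base = np.median(X, axis=0)
    Xs = np.tile(base, (n, 1))
    Xs[:, j] = grid
    return grid, model.predict(Xs)   # feeds "what-if" and monotonicity checks
```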
488
Robust State Estimation, Uncertainty Quantification, and Uncertainty Reduction with Applications to Wind Estimation. Gahan, Kenneth Christopher, 17 July 2024
Indirect wind estimation onboard unmanned aerial systems (UASs) can be accomplished using existing air vehicle sensors along with a dynamic model of the UAS augmented with additional wind-related states. It is often desired to extract the mean component of the wind from the higher-frequency fluctuations (i.e., turbulence). Commonly, a variation of the Kalman filter is used, with explicit or implicit assumptions about the nature of the random wind velocity. This dissertation presents an H-infinity (H∞) filtering approach to wind estimation which requires no assumptions about the statistics of the process or measurement noise. To specify the wind frequency content of interest, a low-pass filter is incorporated. We develop the augmented UAS model in continuous time, derive the H∞ filter, and introduce a Kalman-Bucy filter for comparison. The filters are applied to data gathered during UAS flight tests and validated using a vaned air data unit onboard the aircraft. The H∞ filter provides quantitatively better estimates of the wind than the Kalman-Bucy filter, with approximately 10-40% less root-mean-square (RMS) error in the majority of cases. It is also shown that incorporating Dryden turbulence does not improve the Kalman-Bucy results. Additionally, this dissertation describes the theory and process for using generalized polynomial chaos (gPC) to re-cast the dynamics of a system with non-deterministic parameters as a deterministic system. The concepts are applied to the problem of wind estimation and to characterizing the precision of wind estimates over time due to known parametric uncertainties. A novel truncation method, known as Sensitivity-Informed Variable Reduction (SIVR), was developed. In the multivariate case presented here, gPC and the SIVR-derived reduced gPC (gPCr) exhibit a computational advantage over Monte Carlo sampling-based methods for uncertainty quantification (UQ) and sensitivity analysis (SA), with time reductions of 38% and 98%, respectively. Lastly, while many estimation approaches achieve desirable accuracy under the assumption of known system parameters, reducing the effect of parametric uncertainty on wind estimate precision is desirable and has not been thoroughly investigated. This dissertation describes the theory and process for combining gPC and H∞ filtering. In the multivariate case presented, the gPC H∞ filter shows superiority over a nominal H∞ filter in terms of variance in estimates due to model parametric uncertainty. The error due to parametric uncertainty, as characterized by the variance in estimates from the mean, is reduced by as much as 63%. / Doctor of Philosophy / On unmanned aerial systems (UASs), determining wind conditions indirectly, without direct measurements, is possible by utilizing onboard sensors and computational models. Often, the goal is to isolate the average wind speed while ignoring turbulent fluctuations. Conventionally, this is achieved using a mathematical tool called the Kalman filter, which relies on assumptions about the wind. This dissertation introduces a novel approach called H-infinity (H∞) filtering, which does not rely on such assumptions and includes an additional mechanism to focus on specific wind frequencies of interest. The effectiveness of this method is evaluated using real-world data from UAS flights, comparing it with the traditional Kalman-Bucy filter. Results show that the H∞ filter provides significantly improved wind estimates, with approximately 10-40% less error in most cases.
Furthermore, the dissertation addresses the challenge of dealing with uncertainty in wind estimation. It introduces another mathematical technique called generalized polynomial chaos (gPC), which is used to quantify and manage uncertainties within the UAS system and their impact on the indirect wind estimates. By applying gPC, the dissertation shows that the amount and sources of uncertainty can be determined more efficiently than by traditional methods (up to 98% faster). Lastly, this dissertation shows the use of gPC to provide more precise wind estimates. In experimental scenarios, employing gPC in conjunction with H∞ filtering demonstrates superior performance compared to using a standard H∞ filter alone, reducing errors caused by uncertainty by as much as 63%.
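To make the filter concrete, here is a sketch of one standard discrete-time H∞ a priori filter recursion (the form given in D. Simon, Optimal State Estimation, ch. 11), applied to a toy random-walk "mean wind" state observed through noise. The dissertation's augmented UAS model and low-pass shaping are not reproduced; all numbers here are assumptions.

```python
import numpy as np

def hinf_filter(y, F, H, Q, R, L, S, theta, x0, P0):
    """Estimate z_k = L x_k for x_{k+1} = F x_k + w_k, y_k = H x_k + v_k.
    theta > 0 is the H-infinity performance bound; theta -> 0 gives Kalman."""
    n = len(x0)
    x, P = x0.astype(float), P0.astype(float)
    Rinv = np.linalg.inv(R)
    Sbar = L.T @ S @ L
    z_est = []
    for yk in np.asarray(y).reshape(len(y), -1):
        z_est.append(L @ x)
        M = np.eye(n) - theta * Sbar @ P + H.T @ Rinv @ H @ P
        # Existence requires P^{-1} - theta*Sbar + H'R^{-1}H > 0 at each step
        K = P @ np.linalg.inv(M) @ H.T @ Rinv
        x = F @ x + F @ K @ (yk - H @ x)
        P = F @ P @ np.linalg.inv(M) @ F.T + Q
    return np.array(z_est)

# Toy use: scalar slowly varying wind observed through heavy noise
rng = np.random.default_rng(1)
wind = 5.0 + np.cumsum(rng.normal(0.0, 0.05, 400))   # "truth"
y = wind + rng.normal(0.0, 1.0, 400)                 # wind pseudo-measurement
I = np.eye(1)
w_hat = hinf_filter(y, F=I, H=I, Q=0.0025 * I, R=I, L=I, S=I,
                    theta=0.2, x0=np.zeros(1), P0=I.copy())
```

As theta approaches zero the recursion collapses to the Kalman filter, which is one way to see the trade between worst-case robustness and average-case optimality.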
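And a hedged sketch of non-intrusive gPC for a single standard-normal parameter: Gauss-Hermite collocation yields the chaos coefficients, from which mean and variance follow without Monte Carlo sampling. The stand-in model response f is an assumption, not the UAS dynamics.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def gpc_coeffs(f, order=4):
    # Probabilists' Hermite (He_k) quadrature, weight exp(-x^2/2);
    # dividing by sqrt(2*pi) turns it into a standard-normal expectation.
    nodes, w = hermegauss(order + 1)
    w = w / np.sqrt(2.0 * np.pi)
    c = np.empty(order + 1)
    for k in range(order + 1):
        He_k = hermeval(nodes, np.eye(order + 1)[k])            # He_k at nodes
        c[k] = np.sum(w * f(nodes) * He_k) / math.factorial(k)  # <f,He_k>/k!
    return c

f = lambda xi: np.exp(0.3 * xi)     # stand-in model response y = f(xi)
c = gpc_coeffs(f)
mean = c[0]                         # E[y] = c_0
var = sum(c[k]**2 * math.factorial(k) for k in range(1, len(c)))
```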
489
Structural Shape Optimization Based On The Use Of Cartesian Grids. Marco Alacid, Onofre, 06 July 2018
Thesis by compendium / As ever more challenging designs are required in present-day industries, the traditional trial-and-error procedure frequently used for designing mechanical parts slows down the design process and yields suboptimal designs, so new approaches are needed to gain a competitive advantage. With the rise of the Finite Element Method (FEM) in the engineering community in the 1970s, structural shape optimization emerged as a promising area of application.
However, due to the iterative nature of shape optimization processes, the handling of large numbers of numerical models, together with the approximate character of numerical methods, can discourage the use of these techniques (or prevent their full potential from being exploited), especially as the development time of new products becomes ever shorter.
This Thesis is concerned with the formulation of a 3D methodology based on the Cartesian-grid Finite Element Method (cgFEM) as a tool for efficient and robust numerical analysis. This methodology belongs to the category of embedded (or fictitious) domain discretization techniques in which the key concept is to extend the structural analysis problem to an easy-to-mesh approximation domain that encloses the physical domain boundary.
The use of Cartesian grids provides a natural platform for structural shape optimization because the numerical domain is separated from a physical model, which can easily be changed during the optimization procedure without altering the background discretization. Another advantage is the fact that mesh generation becomes a trivial task since the discretization of the numerical domain and its manipulation, in combination with an efficient hierarchical data structure, can be exploited to save computational effort.
However, these advantages are challenged by several numerical issues. Essentially, the computational effort has moved from the use of expensive meshing algorithms towards the use of, for example, elaborate numerical integration schemes designed to capture the mismatch between the geometrical domain boundary and the embedding finite element mesh. To do this, we used a stabilized formulation to impose boundary conditions and developed novel techniques to capture the exact boundary representation of the models.
To complete the implementation of a structural shape optimization method, an adjoint formulation is used to compute the design sensitivities required by gradient-based algorithms. The derivatives are not only variables required by the process but also constitute a powerful tool for projecting information between different designs, or even for projecting information to create h-adapted meshes without going through a full h-adaptive refinement process.
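A minimal sketch of the adjoint shortcut this paragraph relies on, for the self-adjoint compliance case: since K(p) u = f implies dC/dp_i = -u^T (dK/dp_i) u, every design sensitivity follows from the single solve already performed for the analysis. The two-spring system below is an illustrative stand-in for a cgFEM stiffness matrix.

```python
import numpy as np

def compliance_and_sensitivities(p, f):
    # Toy "design": two spring stiffnesses p = [k1, k2] assemble K
    K = np.array([[p[0] + p[1], -p[1]],
                  [-p[1],        p[1]]])
    u = np.linalg.solve(K, f)        # one primal solve
    C = f @ u                        # compliance C = f^T u
    dK = [np.array([[1.0, 0.0], [0.0, 0.0]]),      # dK/dk1
          np.array([[1.0, -1.0], [-1.0, 1.0]])]    # dK/dk2
    grad = np.array([-u @ dKi @ u for dKi in dK])  # adjoint sensitivities
    return C, grad

C, g = compliance_and_sensitivities(np.array([2.0, 1.0]), np.array([0.0, 1.0]))
```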
The proposed improvements are reflected in the numerical examples included in this Thesis. These analyses clearly show the improved behavior of the cgFEM technology as regards numerical accuracy and computational efficiency, and consequently the suitability of the cgFEM approach for shape optimization or contact problems. / Marco Alacid, O. (2017). Structural Shape Optimization Based On The Use Of Cartesian Grids [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86195
490
Efficient Computational Tools for Variational Data Assimilation and Information Content Estimation. Singh, Kumaresh, 23 August 2010
The overall goals of this dissertation are to advance the field of chemical data assimilation and to develop efficient computational tools that allow the atmospheric science community to benefit from state-of-the-art assimilation methodologies. Data assimilation is the procedure of combining data from observations with model predictions to obtain a more accurate representation of the state of the atmosphere.
As models become more complex, determining the relationships between pollutants and their sources and sinks becomes computationally more challenging. The construction of an adjoint model (capable of efficiently computing sensitivities of a few model outputs with respect to many input parameters) is a difficult, labor-intensive, and error-prone task. This work develops adjoint systems for two of the most widely used chemical transport models: Harvard's GEOS-Chem global model and the Environmental Protection Agency's CMAQ regional air quality model. Both the GEOS-Chem and CMAQ adjoint models are now used by the atmospheric science community to perform sensitivity analysis and data assimilation studies.
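A hedged sketch of what a discrete adjoint buys, on a toy linear model x_{k+1} = M x_k: one forward run plus one backward (adjoint) sweep returns the gradient of the observation misfit with respect to every component of the initial state. M, H, R, and the data are placeholders, not GEOS-Chem or CMAQ.

```python
import numpy as np

def misfit_gradient(M, H, Rinv, x0, ys, N):
    """dJ/dx0 for J = 0.5 * sum_k (H x_k - y_k)' Rinv (H x_k - y_k)."""
    xs = [x0]
    for _ in range(N):                        # forward sweep
        xs.append(M @ xs[-1])
    lam = H.T @ Rinv @ (H @ xs[N] - ys[N])    # adjoint sweep, reverse order
    for k in range(N - 1, -1, -1):
        lam = M.T @ lam + H.T @ Rinv @ (H @ xs[k] - ys[k])
    return lam                                # gradient w.r.t. all of x0 at once
```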
Despite the continuous increase in capabilities, models remain imperfect, and models alone cannot provide accurate long-term forecasts. Observations of the atmospheric composition are now routinely taken from sondes, ground stations, aircraft, and satellites. This work develops three- and four-dimensional variational data assimilation capabilities for GEOS-Chem and CMAQ, which make it possible to estimate chemical states that best fit the observed reality.
Most data assimilation systems to date use diagonal approximations of the background covariance matrix, which ignore error correlations and may lead to inaccurate estimates. This dissertation develops computationally efficient representations of covariance matrices that capture spatial error correlations in data assimilation.
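A minimal sketch of one such non-diagonal representation, assuming a 1-D grid and an exponential correlation model exp(-d/L); the length scale and variances are illustrative, not the dissertation's construction.

```python
import numpy as np

def background_covariance(coords, sigma, corr_length):
    d = np.abs(coords[:, None] - coords[None, :])   # pairwise distances (1-D)
    C = np.exp(-d / corr_length)                    # spatial correlation model
    return np.outer(sigma, sigma) * C               # B = D C D, non-diagonal

coords = np.linspace(0.0, 100.0, 50)                # 1-D grid, km
B = background_covariance(coords, sigma=np.full(50, 2.0), corr_length=20.0)
```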
Not all observations used in data assimilation are of equal importance. Erroneous and redundant observations not only affect the quality of an estimate but also add unnecessary computational expense to the assimilation system. This work proposes information-theoretic metrics to quantify the information content of the observations used in assimilation.
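One widely used metric of this kind, sketched for the linear Gaussian case: the degrees of freedom for signal, DFS = trace(A), where A is the averaging kernel; the diagonal of the observation-space influence matrix HK apportions the DFS across individual observations. Dimensions and inputs are placeholders.

```python
import numpy as np

def information_content(H, B, R):
    """Total DFS and per-observation contributions (linear Gaussian case)."""
    Rinv = np.linalg.inv(R)
    P = np.linalg.inv(H.T @ Rinv @ H + np.linalg.inv(B))  # analysis covariance
    A = P @ H.T @ Rinv @ H          # averaging kernel (state space)
    HK = H @ P @ H.T @ Rinv         # influence matrix (observation space)
    return np.trace(A), np.diag(HK)
```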
The four-dimensional variational approach to data assimilation provides accurate estimates but requires the construction of an adjoint and uses considerable computational resources. This work studies versions of the four-dimensional variational method (quasi-4D-Var) that use approximate gradients and are less expensive to develop and run.
Variational and Kalman filter approaches are both used in data assimilation, but their relative merits and disadvantages in the context of chemical data assimilation have not been assessed. This work provides a careful comparison on a chemical assimilation problem with real data sets. The assimilation experiments performed here demonstrate for the first time the benefit of using satellite data to improve estimates of tropospheric ozone. / Ph. D.