  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Automatic history matching in Bayesian framework for field-scale applications

Mohamed Ibrahim Daoud, Ahmed 12 April 2006 (has links)
Conditioning geologic models to production data and assessing uncertainty are generally done in a Bayesian framework. The current Bayesian approach suffers from three major limitations that make it impractical for field-scale applications: first, the CPU time of the Bayesian inverse problem using the modified Gauss-Newton algorithm with the full covariance as regularization scales quadratically with model size; second, sensitivity calculation using finite differences on the forward model depends on the number of model parameters or the number of data points; and third, the calculation of the covariance matrix requires high CPU time and memory. Attempts have been made to alleviate the third limitation by using analytically derived stencils, but these are limited to exponential covariance models. We propose a fast and robust adaptation of the Bayesian formulation for inverse modeling that overcomes many of these limitations. First, we use a commercial finite difference simulator, ECLIPSE, as the forward model, which is general and can account for the complex physical behavior that dominates most field applications. Second, the production data misfit is represented by a single generalized travel time misfit per well, effectively reducing the number of data points to one per well while ensuring a match of the entire production history. Third, we use both the adjoint method and a streamline-based sensitivity method for the sensitivity calculations. The cost of the adjoint method depends on the number of wells integrated, which is generally an order of magnitude smaller than the number of data points or model parameters. The streamline method is even more efficient, as it requires only one simulation run per iteration regardless of the number of model parameters or data points.
Fourth, for solving the inverse problem, we utilize an iterative sparse matrix solver, LSQR, along with an approximation of the square root of the inverse of the covariance calculated using a numerically-derived stencil, which is broadly applicable to a wide class of covariance models. Our proposed approach is computationally efficient and, more importantly, the CPU time scales linearly with respect to model size. This makes automatic history matching and uncertainty assessment using a Bayesian framework more feasible for large-scale applications. We demonstrate the power and utility of our approach using synthetic cases and a field example. The field example is from Goldsmith San Andres Unit in West Texas, where we matched 20 years of production history and generated multiple realizations using the Randomized Maximum Likelihood method for uncertainty assessment. Both the adjoint method and the streamline-based sensitivity method are used to illustrate the broad applicability of our approach.
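The regularized inverse step described in this abstract can be illustrated with a minimal numerical sketch. Everything below is a stand-in (a random sensitivity matrix in place of adjoint or streamline sensitivities, and a simple first-difference stencil in place of the numerically derived square root of the inverse covariance); at field scale the stacked system would be solved with the sparse iterative solver LSQR rather than a dense solver:

```python
import numpy as np

# Hypothetical stand-ins: G is a sensitivity matrix (one generalized
# travel-time misfit per well -> few rows), d the data misfit vector.
rng = np.random.default_rng(0)
n_wells, n_cells = 5, 50
G = rng.normal(size=(n_wells, n_cells))
d = rng.normal(size=n_wells)

# L approximates the square root of the inverse prior covariance; here a
# simple first-difference stencil stands in for the numerically derived one.
L = np.eye(n_cells) - np.eye(n_cells, k=1)

# Solve min ||G dm - d||^2 + ||L dm||^2 via the stacked system
# [G; L] dm = [d; 0] (LSQR would be used for large sparse systems).
A = np.vstack([G, L])
b = np.concatenate([d, np.zeros(n_cells)])
dm, *_ = np.linalg.lstsq(A, b, rcond=None)
print(dm.shape)  # (50,)
```

The stacked form makes the linear CPU-time scaling plausible: each LSQR iteration only needs matrix-vector products with the sparse G and L.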
242

Structural Health Monitoring Of Composite Structures Using Magnetostrictive Sensors And Actuators

Ghosh, Debiprasad 01 1900 (has links)
Fiber reinforced composite materials are widely used in the aerospace, mechanical, civil and other industries because of their high strength-to-weight and stiffness-to-weight ratios. However, composite structures are highly prone to impact damage. Possible types of defect or damage in composites include matrix cracking, fiber breakage, and delamination between plies. Moreover, delamination in a laminated composite is usually invisible; it is very difficult to detect while the component is in service, and it can eventually lead to catastrophic failure of the structure. Such damage may be caused by dropped tools and ground handling equipment. Damage in a composite structure normally starts as a tiny speckle and gradually grows as the load increases; once it reaches a threshold level, a serious accident can occur. Hence, it is important to have up-to-date information on the integrity of the structure to ensure the safety and reliability of composite components, which requires frequent inspections to identify and quantify damage that may have occurred during manufacturing, transportation or storage. How to identify damage from the information obtained from a damaged composite structure is one of the most pivotal research objectives. Various forms of structural damage cause variations in structural mechanical characteristics, and this property is extensively exploited for damage detection. Existing traditional non-destructive inspection techniques utilize a variety of methods, such as acoustic emission, C-scan, thermography, shearography and Moiré interferometry. Each of these techniques is limited in accuracy and applicability. Most of these methods require access to the structure, as well as a significant amount of equipment and expertise to perform the inspection, and inspections are typically based on a schedule rather than on the condition of the structure.
Furthermore, the cost associated with these traditional non-destructive techniques can be rather prohibitive. Therefore, there is a need to develop a cost-effective, in-service diagnostic system for monitoring structural integrity in composite structures. Structural health monitoring techniques based on dynamic response have been in use for several years. Changes in the lower natural frequencies and mode shapes, together with their spatial derivatives, or stiffness/flexibility calculated from the measured displacement mode shapes, are the most common parameters used in damage identification. But the sensitivity of these parameters to incipient damage is not satisfactory. On the other hand, for in-service structural health monitoring, the direct use of structural response histories is more suitable. However, very few works are reported in the literature on these aspects, especially for composite structures, where the higher order modes are the ones that normally get excited due to the presence of flaws. In the absence of a suitable direct procedure, damage identification from response histories requires inverse mapping, such as an artificial neural network. The main difficulty in such mapping using whole response histories is their high dimensionality. General-purpose dimension reduction procedures, such as principal component analysis or independent component analysis, are available in the literature. However, these dimensionally reduced spaces may lose output uniqueness, which is an essential requirement for neural network mapping, and suitable algorithms for extracting damage signatures from these response histories are not available. Alternatively, fusion of networks trained on different partitionings of the damage space, or with different dimension reduction techniques, can overcome this issue efficiently.
In addition, coordination of different networks trained with different partitionings of the training and testing samples, training algorithms, initial conditions, learning and momentum rates, architectures, and training sequences are some of the factors that improve the mapping efficiency of the networks. Applications of smart materials have drawn much attention in aerospace, civil, mechanical and even bio-engineering. The emerging field of smart composite structures offers the promise of truly integrated health and usage monitoring, where a structure can sense and adapt to its environment, loading conditions and operational requirements, and materials can self-repair when damaged. The concept of structural health monitoring using smart materials relies on a network of sensors and actuators integrated with the structure. This area shows great promise, as it makes it possible to monitor the condition of a structure throughout its service lifetime. Integrating intelligence into structures using such networks has become an interesting field of research in recent years. Materials used for this purpose include piezoelectric, magnetostrictive and fiber-optic sensors. Structural health monitoring using piezoelectric or fiber-optic sensors is well covered in the literature. However, very few works have been reported on the use of magnetostrictive materials, especially for composite structures. Non-contact sensing and actuation with a high coupling factor, along with other properties such as large bandwidth and low voltage requirements, make magnetostrictive materials increasingly popular as potential candidates for sensors and actuators in structural health monitoring. The constitutive relationships of a magnetostrictive material are represented by two equations, one for actuation and the other for sensing, both coupled through the magneto-mechanical coupling coefficient.
In existing finite element formulations, the two equations are decoupled by assuming that the magnetic field is proportional to the applied current. This assumption neglects the stiffness contribution arising from the coupling between the mechanical and magnetic domains, which can cause the computed response to deviate from the true time response. In addition, due to various fabrication and curing difficulties, the actual properties of this material, such as the magneto-mechanical coupling coefficient or the elastic modulus, may differ from values measured under laboratory conditions. Hence, identification of the material properties of these embedded sensors and actuators in their in-situ condition is essential. Although the finite element method remains the most versatile, accurate and generally applicable technique for numerical analysis, it is computationally expensive for wave propagation analysis of large structures. This is because, for accurate prediction, the finite element size should be of the order of the wavelength, which is very small under high frequency loading. Even in health monitoring studies, when the flaw sizes are very small (of the order of a few hundred microns), only the higher order modes are affected, which essentially leads to a wave propagation problem. The need for cost-effective computation of wave propagation brings us to the spectral finite element method, which is well suited to the study of wave propagation problems. By virtue of its transformed-domain formulation, it bypasses the large system sizes of the finite element method. Further, inverse problems such as force identification can be performed more conveniently and efficiently than with other existing methods. In addition, the spectral element approach allows force identification to be performed directly from the response histories measured by the sensor. The spectral finite element is widely used for both elementary and higher order one- and two-dimensional waveguides.
Higher order waveguides normally exhibit behavior in which an evanescent (damped) mode starts propagating beyond a certain frequency, called the cut-off frequency. Hence, when the loading frequencies are well beyond the corresponding cut-off frequencies, higher order modes start propagating along the structure and should be considered in the analysis of wave propagation. Based on these considerations, three main goals are identified to be pursued in this thesis. The first is to develop constitutive relationships for magnetostrictive sensors and actuators suitable for structural analysis. The second is the development of different numerical tools for modelling damage. The third is the application of these developed elements to solving inverse problems such as material property identification, impact force identification, and detection and identification of delamination in composite structures. The thesis consists of four parts spread over six chapters. In the first part, linear, nonlinear, coupled and uncoupled constitutive relationships of magnetostrictive materials are studied, and the elastic modulus and magnetostrictive constant are evaluated from experimental results reported in the literature. In the uncoupled model, the magnetic field for the actuator is taken as the coil constant times the coil current. The coupled model is studied without assuming any explicit direct relationship with the magnetic field. In the linear coupled model, the elastic modulus, the permeability and the magnetostrictive coupling are assumed constant. In the nonlinear coupled model, the nonlinearity is decoupled and solved separately for the magnetic and mechanical domains using two nonlinear curves, namely the stress vs. strain curve and the magnetic flux density vs. magnetic field curve. This is done by two different methods.
In the first, the magnetic flux density is computed iteratively, while in the second, an artificial neural network is used, where a trained network gives the necessary strain and magnetic flux density for a given magnetic field and stress level. In the second part, different finite element formulations for composite structures with embedded magnetostrictive patches, which can act both as sensors and actuators, are studied. Both mechanical and magnetic degrees of freedom are considered in the formulation. One-, two- and three-dimensional finite element formulations for both coupled and uncoupled analysis are developed. These elements are then used to quantify the errors in the overall response of the structure due to the uncoupled assumption for the magnetostrictive patches, and it is shown that this error is comparable with the sensitivity of the response to different damage scenarios. These studies clearly bring out the need for coupled analysis in structural health monitoring when magnetostrictive sensors and actuators are used. For the specific case of beam elements, a superconvergent finite element formulation for composite beams with embedded magnetostrictive patches is introduced for its superior convergence; in addition, these elements are free from shear locking. A refined 2-node beam element is derived based on classical and first order shear deformation theory for axial-flexural-shear coupled deformation in asymmetrically stacked laminated composite beams with magnetostrictive patches. The element has an exact shape function matrix, derived by exactly solving the static part of the governing equations of motion for a general ply stacking. This makes the element superconvergent for static analysis. The formulated consistent mass matrix, however, is approximate.
Since the stiffness is represented exactly, the formulated element predicts natural frequencies to a greater level of accuracy with a smaller discretization than other conventional finite elements. Finally, these elements are used for material property identification in conjunction with an artificial neural network. In the third part, frequency domain analysis is performed using spectrally formulated beam elements. The formulated elements account for deformation due to both shear and lateral contraction, and numerical experiments are performed to highlight the higher order effects, especially at high frequencies. A spectral element is developed for modelling wave propagation in composite laminates in the presence of magnetostrictive patches. The element, by virtue of its frequency domain formulation, can analyze very large domains at nominal computational cost and is suitable for studying wave propagation through composite materials. Furthermore, identification of impact force is performed from the magnetostrictive sensor response histories using these spectral elements. In the last part, different numerical examples for structural health monitoring are directed towards studying the responses due to the presence of delamination in the structure, and the identification of the delamination from these responses using an artificial neural network. A neural network is applied to obtain the structural damage status from the finite element response using its mapping feature, which requires output uniqueness. To overcome the loss of output uniqueness caused by dimension reduction, the damage space is divided into different overlapping zones and separate networks are trained for these zones. A committee machine is used to coordinate among these networks.
Next, a five-stage hierarchy of networks is used for the partitioning of the damage space, where different dimension reduction algorithms and different partitionings between training and testing samples are used for better mapping in the identification procedure. The delamination detection results for composite laminates show that the method developed in this thesis can be applied to structural damage detection and health monitoring for various industrial structures. This thesis collectively addresses all aspects pertaining to the solution of the inverse problem, and especially the health monitoring of composite structures using magnetostrictive sensors and actuators. In addition, the thesis discusses the necessity of higher order theory in the high frequency analysis of wave propagation. The thesis ends with a brief summary of the tasks accomplished, the significant contributions made to the literature, and the future applications to which the proposed methods can be applied.
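The dimension-reduction step discussed in the abstract (compressing whole response histories before neural network mapping) can be sketched generically with principal component analysis; the data here are random stand-ins, not simulated response histories:

```python
import numpy as np

# Hypothetical data: 200 simulated response histories, 1000 samples each.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1000))

# Principal component analysis via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 10              # keep a low-dimensional "damage signature"
Z = Xc @ Vt[:k].T   # reduced inputs, one row per response history
print(Z.shape)      # (200, 10)
```

Each network in a committee would then be trained on a reduced representation like `Z`, with the overlap between damage-space zones restoring the output uniqueness that reduction can destroy.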
243

Analytic and Numerical Methods for the Solution of Electromagnetic Inverse Source Problems

Popov, Mikhail January 2001 (has links)
No description available.
244

New algorithms for solving inverse source problems in imaging techniques with applications in fluorescence tomography

Yin, Ke 16 September 2013 (has links)
This thesis is devoted to solving the inverse source problem arising in image reconstruction. In general, the solution is non-unique and the problem is severely ill-posed; small perturbations, such as noise in the data or modeling error in the forward problem, cause huge errors in the computations. In practice, the most widely used approach is based on Tikhonov-type regularization, which minimizes a cost function combining a regularization term and a data fitting term. However, because the two tasks, regularization and data fitting, are coupled together in Tikhonov regularization, they are difficult to solve jointly, even when each task can be solved efficiently on its own. We propose a method that overcomes the two major difficulties, the non-uniqueness of the solution and noisy data fitting, separately. First we find a particular solution, called the orthogonal solution, that satisfies the data fitting term. Then we add to it a correction function in the kernel space so that the final solution fulfills the regularization and other physical requirements. The key idea is that the correction function in the kernel has no impact on the data fit, and the regularization is imposed in a smaller space. Moreover, no parameter is needed to balance the data fitting and regularization terms. As a case study, we apply the proposed method to Fluorescence Tomography (FT), an emerging imaging technique well known for its ill-posedness and for the low image resolution of existing reconstruction techniques. We demonstrate by theory and examples that the proposed algorithm can drastically improve the computation speed and the image resolution over existing methods.
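The two-step idea (an orthogonal particular solution plus a kernel-space correction that leaves the data fit untouched) can be sketched for a toy underdetermined linear system; the matrix below is a random stand-in for the imaging forward operator:

```python
import numpy as np

# Underdetermined toy system A x = b (stand-in for the forward model).
rng = np.random.default_rng(2)
A = rng.normal(size=(3, 6))
b = rng.normal(size=3)

# Orthogonal (minimum-norm) particular solution: it lies in the row
# space of A, i.e. it is orthogonal to the kernel of A.
x_orth = np.linalg.pinv(A) @ b

# Basis for the kernel of A from the SVD (last right singular vectors).
U, s, Vt = np.linalg.svd(A)
N = Vt[3:].T                       # null-space basis, shape (6, 3)

# Any kernel correction leaves the data fit unchanged; the correction
# coefficients are where regularization would act.
x = x_orth + N @ np.array([1.0, -2.0, 0.5])
print(np.allclose(A @ x, b))       # True
```

Because `A @ N` is numerically zero, regularization can be imposed purely on the correction coefficients, with no trade-off parameter against the data term.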
245

Comparative Deterministic and Probabilistic Modeling in Geotechnics: Applications to Stabilization of Organic Soils, Determination of Unknown Foundations for Bridge Scour, and One-Dimensional Diffusion Processes

Yousefpour, Negin 16 December 2013 (has links)
This study presents different aspects of the use of deterministic methods, including Artificial Neural Networks (ANNs) and linear and nonlinear regression, as well as probabilistic methods, including Bayesian inference and Monte Carlo methods, to develop reliable solutions for challenging problems in geotechnics. It addresses the theoretical and computational advantages and limitations of these methods in application to: 1) prediction of the stiffness and strength of stabilized organic soils, 2) determination of unknown foundations for bridges vulnerable to scour, and 3) uncertainty quantification for one-dimensional diffusion processes. ANNs were successfully implemented in this study to develop nonlinear models for the mechanical properties of stabilized organic soils. The ANN models were able to learn from the training examples and then generalize the trend to make predictions of the stiffness and strength of stabilized organic soils. A stepwise parameter selection and a sensitivity analysis method were implemented to identify the most relevant factors for the prediction of stiffness and strength, and the variation of stiffness and strength with respect to each factor was investigated. A deterministic and a probabilistic approach were proposed to evaluate the characteristics of unknown foundations of bridges subjected to scour. The proposed methods were implemented and validated using data collected for bridges in the Bryan District. ANN models were developed and trained on this database of bridges to predict the foundation type and embedment depth. The probabilistic Bayesian approach generated probability distributions for the foundation and soil characteristics and was able to capture the uncertainty in the predictions. The parametric and numerical uncertainties in the one-dimensional diffusion process were evaluated under varying observation conditions.
The inverse problem was solved using Bayesian inference formulated by both the analytical and numerical solutions of the ordinary differential equation of diffusion. The numerical uncertainty was evaluated by comparing the mean and standard deviation of the posterior realizations of the process corresponding to the analytical and numerical solutions of the forward problem. It was shown that higher correlation in the structure of the observations increased both parametric and numerical uncertainties, whereas increasing the number of data dramatically decreased the uncertainties in the diffusion process.
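The Bayesian treatment of the one-dimensional diffusion process can be sketched with a generic random-walk Metropolis sampler; the exponential-decay ODE, noise level, and proposal scale below are illustrative assumptions, not the study's actual setup:

```python
import numpy as np

# Infer a decay coefficient k in du/dt = -k*u from noisy observations,
# using the analytical forward solution u(t) = exp(-k*t).
rng = np.random.default_rng(3)
t = np.linspace(0.0, 2.0, 20)
k_true, sigma = 1.5, 0.05
u_obs = np.exp(-k_true * t) + sigma * rng.normal(size=t.size)

def log_post(k):
    if k <= 0.0:
        return -np.inf                        # positivity prior on k
    r = u_obs - np.exp(-k * t)                # residual vs. forward model
    return -0.5 * np.sum(r**2) / sigma**2

k, lp, chain = 1.0, log_post(1.0), []
for _ in range(5000):
    k_prop = k + 0.1 * rng.normal()           # random-walk proposal
    lp_prop = log_post(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        k, lp = k_prop, lp_prop
    chain.append(k)

post = np.array(chain[1000:])                 # discard burn-in
print(round(float(post.mean()), 2), round(float(post.std()), 2))
```

Swapping the analytical forward solution for a numerical ODE solver inside `log_post` is exactly what lets the numerical uncertainty be compared against the analytical posterior, as the abstract describes.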
246

Predictive power of nuclear mean-field theories for exotic-nuclei problem

Rybak, Karolina 21 September 2012 (has links) (PDF)
This thesis is a critical examination of phenomenological nuclear mean-field theories, focusing on the reliable description of individual-particle levels. The approach presented here is new in the sense that it not only predicts the numerical values obtained with this formalism but also yields an estimate of the probability distributions corresponding to the experimental results. We introduce the concept of 'theoretical errors' to estimate uncertainties in theoretical models. We also introduce a subjective notion of the 'predictive power' of nuclear Hamiltonians, which is analyzed in the context of individual-particle energy spectra. The mathematical concept of the 'inverse problem' is applied to a realistic mean-field Hamiltonian; this technique makes it possible to predict the properties of a system from a limited number of data. To deepen our understanding of inverse problems, we focus on a simple mathematical problem: a function depending on four free parameters is introduced in order to reproduce 'experimental' data, and we study the behavior of the fitted parameters, their correlations and the associated errors. This study highlights the importance of the correct formulation of the problem, and of including theoretical and experimental errors in the solution.
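The four-parameter fitting exercise described above can be sketched generically; the model function, noise level, and data below are illustrative assumptions, but the mechanics (least-squares fit, then parameter covariance and correlations from the Jacobian) follow the standard procedure:

```python
import numpy as np

# Generic four-parameter fit sketch (illustrative, not the thesis code).
rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 40)
p_true = np.array([1.0, -2.0, 0.5, 0.3])

# Model linear in its four parameters: a + b*x + c*x^2 + d*sin(3x),
# so the design matrix J is also the Jacobian of the model.
J = np.column_stack([np.ones_like(x), x, x**2, np.sin(3.0 * x)])
y = J @ p_true + 0.01 * rng.normal(size=x.size)  # pseudo-experimental data

p_hat, *_ = np.linalg.lstsq(J, y, rcond=None)

# Parameter covariance; off-diagonal entries give the correlations
# between fitted parameters discussed in the abstract.
cov = 0.01**2 * np.linalg.inv(J.T @ J)
err = np.sqrt(np.diag(cov))                      # 1-sigma errors
corr = cov / np.outer(err, err)
print(p_hat.shape, corr.shape)
```

Inspecting `corr` shows how strongly the four parameters trade off against one another, which is the kind of correlation study the thesis uses to probe the formulation of the inverse problem.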
247

Multi-scale nonlinear constitutive models using artificial neural networks

Kim, Hoan-Kee 12 March 2008 (has links)
This study presents a new approach to nonlinear multi-scale constitutive models using artificial neural networks (ANNs). Three ANN classes are proposed to characterize the nonlinear multi-axial stress-strain behavior of metallic, polymeric, and fiber reinforced polymeric (FRP) materials, respectively. Load-displacement responses from nanoindentation of metallic and polymeric materials are used to train a new generation of dimensionless ANN models with different micro-structural properties as additional variables alongside the load-deflection data. The proposed ANN models are effective in inverse problems set up to back-calculate in-situ material parameters from overall nanoindentation test data, with or without time-dependent material behavior. Towards that goal, nanoindentation tests were performed on a silicon (Si) substrate with and without a copper (Cu) film. Nanoindentation creep test data, available in the literature for a polycarbonate substrate, are used in these inverse problems. The properties predicted by the ANN models can also be used to calibrate classical constitutive parameters. The third class of ANN models is used to generate the effective multi-axial stress-strain behavior of FRP composites under plane-stress conditions. The training data are obtained from coupon tests performed in this study using off-axis tension/compression and pure shear tests on pultruded FRP E-glass/polyester composite systems. It is shown that the trained nonlinear ANN model can be directly coupled with a finite-element (FE) formulation as a material model at the Gaussian integration points of each layered-shell element. This FE-ANN modeling approach is applied to simulate an FRP plate with an open hole and compared with experimental results. Micromechanical nonlinear ANN models with a damage formulation are also formulated and trained using FE simulations of the periodic microstructure.
These new multi-scale ANN constitutive models are effective and can be extended by including more material variables to capture complex material behavior, such as softening due to micro-structural damage or failure.
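The idea of training an ANN on stress-strain data can be sketched with a minimal one-hidden-layer network; the synthetic response curve, input scaling and network settings below are illustrative stand-ins, not the thesis models:

```python
import numpy as np

# Minimal one-hidden-layer network fitted to a synthetic nonlinear
# stress-strain curve (illustrative stand-in for an ANN material model).
rng = np.random.default_rng(8)
strain = np.linspace(0.0, 0.05, 100).reshape(-1, 1)
stress = np.tanh(60.0 * strain)          # assumed nonlinear response

x = strain * 20.0                        # scale inputs to [0, 1]
W1, b1 = rng.normal(size=(1, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

lr = 0.05
for _ in range(5000):                    # full-batch gradient descent
    h = np.tanh(x @ W1 + b1)
    e = h @ W2 + b2 - stress             # prediction error
    dh = (e @ W2.T) * (1.0 - h**2)       # backpropagated error
    W2 -= lr * (h.T @ e) / len(x)
    b2 -= lr * e.mean(0)
    W1 -= lr * (x.T @ dh) / len(x)
    b1 -= lr * dh.mean(0)

pred = np.tanh(x @ W1 + b1) @ W2 + b2
mse = float(np.mean((pred - stress) ** 2))
print(mse)
```

Once trained, such a network can be evaluated pointwise, which is what makes coupling it to an FE code at the Gaussian integration points practical.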
248

Fundamental numerical schemes for parameter estimation in computer vision.

Scoleri, Tony January 2008 (has links)
An important research area in computer vision is parameter estimation. Given a mathematical model and a sample of image measurement data, key parameters are sought that encapsulate the geometric properties of a relevant entity, and an optimisation problem is often formulated to find them. This thesis presents an elaboration of fundamental numerical algorithms for estimating parameters of multi-objective models of importance in computer vision applications. The work examines ways to solve unconstrained and constrained minimisation problems from the viewpoints of theory, computational methods, and numerical performance. The research starts by considering a particular form of multi-equation constraint function that characterises a wide class of unconstrained optimisation tasks. Increasingly sophisticated cost functions are developed within a consistent framework, ultimately resulting in the creation of a new iterative estimation method. The scheme operates in a maximum likelihood setting and yields near-optimal estimates of the parameters. Salient features of the method are its simple update rules and fast convergence. Then, to accommodate models with functional dependencies, two variants of this initial algorithm are proposed. These methods are improved again by reshaping the objective function in a way that presents the original estimation problem in a reduced form. This procedure leads to a novel algorithm with enhanced stability and convergence properties. To extend the capacity of these schemes to deal with constrained optimisation problems, several a posteriori correction techniques are proposed to impose the so-called ancillary constraints. The work culminates in two methods that can tackle ill-conditioned constrained functions. The combination of the previous unconstrained methods with these post-hoc correction schemes provides an array of powerful constrained algorithms.
The practicality and performance of the methods are evaluated on two specific applications: planar homography matrix computation and trifocal tensor estimation. In the case of fitting a homography to image data, only the unconstrained algorithms are necessary. For the problem of estimating a trifocal tensor, significant work is first done on expressing sets of usable constraints, especially the ancillary constraints, which are critical to ensure that the computed object conforms to the underlying geometry; here the post-correction schemes must be incorporated into the computational mechanism. For both example problems, the performance of the unconstrained and constrained algorithms is compared to existing methods. Experiments reveal that the new methods match a state-of-the-art technique in accuracy but surpass it in execution speed. / Thesis (Ph.D.) - University of Adelaide, School of Mathematical Sciences, Discipline of Pure Mathematics, 2008
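For the homography application, a common baseline (not the thesis's maximum likelihood scheme) is the direct linear transformation (DLT), which recovers the 3x3 matrix from point correspondences via an SVD:

```python
import numpy as np

# Baseline DLT sketch: estimate the 3x3 homography H mapping points
# x to x' from point matches (noiseless synthetic data for clarity).
rng = np.random.default_rng(5)
H_true = np.array([[1.0,   0.2,   5.0],
                   [-0.1,  0.9,   3.0],
                   [0.001, 0.002, 1.0]])

pts = rng.uniform(0.0, 100.0, size=(8, 2))
proj = np.hstack([pts, np.ones((8, 1))]) @ H_true.T
proj = proj[:, :2] / proj[:, 2:3]          # projected (matched) points

# Each correspondence gives two linear equations in the entries of H.
rows = []
for (x, y), (u, v) in zip(pts, proj):
    rows.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
    rows.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
A = np.asarray(rows)

# Solution: right singular vector of the smallest singular value.
H = np.linalg.svd(A)[2][-1].reshape(3, 3)
H /= H[2, 2]                               # fix the projective scale
print(np.allclose(H, H_true, atol=1e-6))   # True
```

With noisy data, DLT gives only an algebraic fit; the maximum likelihood iterations developed in the thesis refine exactly this kind of initial estimate.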
249

Stochastic model of high-speed train dynamics for the prediction of long-time evolution of the track irregularities

Lestoille, Nicolas 16 October 2015 (has links)
Railway tracks are subjected to increasing demands: the number of high-speed trains, their speed, and their load keep growing, which contributes to the formation of track geometry irregularities. In return, these irregularities influence the dynamic response of the train and degrade passenger comfort. To guarantee good comfort conditions, railway companies perform track maintenance operations, which are very costly. These companies therefore have a strong interest in predicting the long-term evolution of the track irregularities for a given track portion, in order to anticipate the start of maintenance operations, and thus reduce maintenance costs and improve running conditions. In this thesis, the long-term evolution of a given track portion is analyzed through a vector-valued indicator of the train dynamics. For this track portion, a local stochastic model of the track irregularities is constructed using a global stochastic model of the track irregularities and a large body of experimental irregularity measurements performed by a measuring train.
This local stochastic model takes into account the variability of the track irregularities and allows realizations of the irregularities to be generated at each measurement time. After validating the computational model of the train dynamics, the train dynamic responses on the measured track portion are numerically simulated using the local stochastic model of the track irregularities. A vector-valued random dynamic indicator is defined to characterize the train dynamic responses on the given track portion. This dynamic indicator is constructed so as to take into account the model uncertainties in the train dynamics computational model. To identify the stochastic model of the track irregularities and to characterize the model uncertainties, advanced stochastic methods such as polynomial chaos expansion and multivariate maximum likelihood are applied to non-Gaussian and non-stationary random fields. Finally, a stochastic predictive model is proposed for predicting the statistical quantities of the random dynamic indicator, which allows the need for track maintenance to be anticipated. This model is constructed using the results of the train dynamics simulations and consists of a non-stationary Kalman-filter-type model with a non-Gaussian initial condition. The proposed model is validated against experimental data from the French railway network for high-speed trains.
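The abstract above describes a Kalman-filter-type model for predicting a dynamic indicator over time. As a minimal illustration of the predict/update recursion (not the thesis model, which is non-stationary with a non-Gaussian initial condition), a scalar sketch with placeholder matrices `F`, `Q`, `H`, `R` might look like this:

```python
import numpy as np

# Hypothetical scalar Kalman predict/update step; F, Q, H, R and the data
# below are illustrative placeholders, not the model identified in the thesis.
def kalman_predict_update(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
    # Prediction step: propagate the state estimate and its variance
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update step: correct with the new measurement z
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Filter a toy sequence of noisy indicator measurements
rng = np.random.default_rng(0)
truth = np.linspace(0.0, 1.0, 20)            # slowly degrading indicator
measurements = truth + rng.normal(0.0, 0.3, size=20)

x, P = measurements[0], 1.0                  # crude Gaussian initialization
for z in measurements[1:]:
    x, P = kalman_predict_update(x, P, z)
```

After the loop, `x` holds the filtered estimate of the current indicator value and `P` its (much reduced) variance; a predictive model would then iterate the prediction step alone to extrapolate the indicator forward in time.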
250

Modelagem e avaliação comparativa dos métodos Luus-Jaakola e R2W aplicados na estimativa de parâmetros cinéticos de adsorção / Modeling and comparative evaluation of Luus-Jaakola and R2W methods applied in estimating kinetic parameters of adsorption

Melicia Aline Cortat Ribeiro 18 June 2012 (has links)
Fundação Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro / Inverse techniques have been used to determine important parameters involved in the design and performance of many industrial processes. The application of stochastic methods has grown in recent years, demonstrating their potential for the study and analysis of different systems in engineering applications. Stochastic routines are able to search for the optimum over a wide range of the variable domain, making it possible to determine the parameters of interest simultaneously. In this work, two stochastic methods, Luus-Jaakola (LJ) and Random Restricted Window (R2W), were adopted to obtain the optimal adsorption kinetic parameters in a batch chromatography system, with the aim of determining which method provides the best fit between the computational simulation results and the experimental data. The model was solved using the fourth-order Runge-Kutta method for the solution of the ordinary differential equations.
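The Luus-Jaakola method mentioned above is a random-search procedure: candidates are sampled in a window around the current best point, and the window is contracted at each outer iteration. A minimal sketch, using a toy least-squares misfit (a simple saturation curve standing in for the adsorption kinetics, with made-up data) rather than the thesis model:

```python
import math
import random

# Hedged sketch of the Luus-Jaakola random-search idea: sample candidates in a
# shrinking window around the best point found so far. All names and data here
# are illustrative, not taken from the thesis.
def luus_jaakola(objective, x0, radius, n_outer=50, n_inner=20, contraction=0.95):
    best_x, best_f = list(x0), objective(x0)
    r = list(radius)
    for _ in range(n_outer):
        for _ in range(n_inner):
            cand = [b + random.uniform(-ri, ri) for b, ri in zip(best_x, r)]
            f = objective(cand)
            if f < best_f:
                best_x, best_f = cand, f
        r = [ri * contraction for ri in r]   # shrink the search window
    return best_x, best_f

# Toy parameter estimation: recover (a, b) in y = a * (1 - exp(-b * t))
t_data = [0.5, 1.0, 2.0, 4.0]
y_data = [2.0 * (1 - math.exp(-0.7 * t)) for t in t_data]  # synthetic "data"

def misfit(p):
    a, b = p
    return sum((a * (1 - math.exp(-b * t)) - y) ** 2
               for t, y in zip(t_data, y_data))

random.seed(1)
params, f = luus_jaakola(misfit, x0=[1.0, 1.0], radius=[1.0, 1.0])
```

In the thesis the objective would instead be the misfit between the experimental data and an adsorption-kinetics model integrated with a fourth-order Runge-Kutta scheme; the search loop itself is unchanged.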
