About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
401

Ground-based attitude determination and gyro calibration

Kim, Chang-Su, doctor of aerospace engineering 03 October 2012 (has links)
Some modern spacecraft missions require precise knowledge of the attitude, obtained from the ground processing of on-board attitude sensors. A traditional 6-state attitude determination filter, containing three attitude errors and three gyro bias errors, has been recognized for its robust performance when it is used with high quality measurement data from a star tracker for many past and present missions. However, as higher accuracies are required for attitude knowledge in the missions, systematic errors such as sensor misalignment and scale factor errors, which could often be neglected in previous missions, have become serious, and sometimes the dominant, error sources. The star tracker data have gaps and degradation caused by, for example, the Sun or Moon blocking the field of view and data time tag errors. Thus, attitude determination based on the gyro data without using the star tracker data is inevitably required for most missions for the period when the star tracker is unable to provide accurate data. However, any gyro-based attitude errors would eventually grow without bound because of the uncorrected systematic errors of gyros and the uncorrected gyro random noise. An improved understanding of the gyro random noise characteristics and the estimation of the gyro scale factor errors and gyro misalignments are necessary for precise attitude determination for some present and future missions. The 6-state filters have been extended to 15-state filters to estimate the scale factor and misalignment errors of gyros, especially during a high-slew maneuver, and the performance of these filters has been investigated. During a starless period, the inevitable drift of the EKF solutions, which is caused by the uncorrected gyro systematic errors and the gyro random noise, can be replaced with the batch solutions, which are less affected by the data gaps in the star tracker.
Power Spectral Density and the Allan Variance Method are used for analyzing the gyro random noise in both ICESat and simulated gyro data, which provide better information about the process noise covariance in the attitude filter. Both simulated and real data are used for analyzing and evaluating the performances of the EKF and batch algorithms.
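The Allan variance analysis mentioned above can be sketched in a few lines. This is not the thesis's ICESat processing; it is a minimal non-overlapping Allan variance applied to simulated white gyro rate noise (angle random walk), with the sample rate, noise level, and cluster sizes invented for illustration.

```python
import random

def allan_variance(rates, dt, m):
    """Allan variance at cluster size m (averaging time tau = m * dt)."""
    # Average the rate samples in non-overlapping clusters of length m.
    n = len(rates) // m
    means = [sum(rates[i * m:(i + 1) * m]) / m for i in range(n)]
    # Allan variance: half the mean squared difference of successive cluster means.
    return 0.5 * sum((means[k + 1] - means[k]) ** 2 for k in range(n - 1)) / (n - 1)

random.seed(0)
dt = 0.01
# Simulated gyro output: pure white rate noise (angle random walk), 0.02 rad/s std.
rates = [random.gauss(0.0, 0.02) for _ in range(100_000)]

# For white rate noise the Allan variance falls as 1/tau, so quadrupling the
# averaging time should reduce the Allan variance by roughly a factor of four.
av1 = allan_variance(rates, dt, 10)
av2 = allan_variance(rates, dt, 40)
print(av1 / av2)  # ~4 for pure white rate noise
```

The slope of the Allan deviation on a log-log plot (here -1/2) is one signature such an analysis uses to separate noise terms and set the filter's process noise covariance.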
402

Data assimilation for parameter estimation in coastal ocean hydrodynamics modeling

Mayo, Talea Lashea 25 February 2014 (has links)
Coastal ocean models are used for a vast array of applications. These applications include modeling tidal and coastal flows, waves, and extreme events, such as tsunamis and hurricane storm surges. Tidal and coastal flows are the primary application of this work as they play a critical role in many practical research areas such as contaminant transport, navigation through intracoastal waterways, development of coastal structures (e.g. bridges, docks, and breakwaters), commercial fishing, and planning and execution of military operations in marine environments, in addition to recreational aquatic activities. Coastal ocean models are used to determine tidal amplitudes, time intervals between low and high tide, and the extent of the ebb and flow of tidal waters, often at specific locations of interest. However, modeling tidal flows can be quite complex, as factors such as the configuration of the coastline, water depth, ocean floor topography, and hydrographic and meteorological impacts can have significant effects and must all be considered. Water levels and currents in the coastal ocean can be modeled by solving the shallow water equations. The shallow water equations contain many parameters, and the accurate estimation of both tides and storm surge is dependent on the accuracy of their specification. Of particular importance are the parameters used to define the bottom stress in the domain of interest [50]. These parameters are often heterogeneous across the seabed of the domain. Their values cannot be measured directly and relevant data can be expensive and difficult to obtain. The parameter values must often be inferred and the estimates are often inaccurate, or contain a high degree of uncertainty [28]. In addition, as is the case with many numerical models, coastal ocean models have various other sources of uncertainty, including the approximate physics, numerical discretization, and uncertain boundary and initial conditions.
Quantifying and reducing these uncertainties is critical to providing more reliable and robust storm surge predictions. It is also important to reduce the resulting error in the forecast of the model state as much as possible. The accuracy of coastal ocean models can be improved using data assimilation methods. In general, statistical data assimilation methods are used to estimate the state of a model given both the original model output and observed data. A major advantage of statistical data assimilation methods is that they can often be implemented non-intrusively, making them relatively straightforward to apply. They also provide estimates of the uncertainty in the predicted model state. Unfortunately, with the exception of the estimation of initial conditions, they do not contribute to the information contained in the model. The model error that results from uncertain parameters is reduced, but information about the parameters in particular remains unknown. Thus, the other commonly used approach to reducing model error is parameter estimation. Historically, model parameters such as the bottom stress terms have been estimated using variational methods. Variational methods formulate a cost functional that penalizes the difference between the modeled and observed state, and then minimize this functional over the unknown parameters. Though variational methods are an effective approach to solving inverse problems, they can be computationally intensive and difficult to code, as they generally require the development of an adjoint model. They also are not formulated to estimate parameters in real time, e.g., as a hurricane approaches landfall. The goal of this research is to estimate parameters defining the bottom stress terms using statistical data assimilation methods. In this work, we use a novel approach to estimate the bottom stress terms in the shallow water equations, which we solve numerically using the Advanced Circulation (ADCIRC) model.
In this model, a modified form of the 2-D shallow water equations is discretized in space by a continuous Galerkin finite element method, and in time by finite differencing. We use the Manning’s n formulation to represent the bottom stress terms in the model, and estimate various fields of Manning’s n coefficients by assimilating synthetic water elevation data using a square root Kalman filter. We estimate three types of fields defined on both an idealized inlet and a more realistic spatial domain. For the first field, a Manning’s n coefficient is given a constant value over the entire domain. For the second, we let the Manning’s n coefficient take two distinct values, letting one define the bottom stress in the deeper water of the domain and the other define the bottom stress in the shallower region. And finally, because bottom stress terms are generally spatially varying parameters, we consider the third field as a realization of a stochastic process. We represent a realization of the process using a Karhunen-Loève expansion, and then seek to estimate the coefficients of the expansion. We perform several observation system simulation experiments, and find that we are able to accurately estimate the bottom stress terms in most of our test cases. Additionally, we are able to improve forecasts of the model state in every instance. The results of this study show that statistical data assimilation is a promising approach to parameter estimation.
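The Karhunen-Loève representation of a spatially varying field, which reduces the estimation problem to a handful of expansion coefficients, can be sketched generically. This is not the thesis's ADCIRC setup: the 1-D grid, exponential covariance kernel, correlation length, and variance below are illustrative assumptions standing in for a Manning's n field.

```python
import numpy as np

# 1-D grid standing in for the seabed; exponential covariance kernel with
# illustrative variance sigma2 and correlation length L.
x = np.linspace(0.0, 1.0, 200)
sigma2, L = 0.0004, 0.2
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / L)

# Karhunen-Loeve modes: eigenpairs of the covariance matrix, largest first.
vals, vecs = np.linalg.eigh(C)
vals, vecs = vals[::-1], vecs[:, ::-1]

# Truncate: keep enough modes to capture 95% of the field variance, so a
# filter only has to estimate k coefficients instead of 200 nodal values.
k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), 0.95)) + 1

# A realization of the field: mean value plus the truncated KL series.
rng = np.random.default_rng(1)
xi = rng.standard_normal(k)                  # the coefficients to be estimated
n_field = 0.025 + vecs[:, :k] @ (np.sqrt(vals[:k]) * xi)
print(k, n_field.shape)
```

Estimating the k coefficients xi (for example with a square root Kalman filter, as in the thesis) then implicitly estimates the whole spatially varying field.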
403

Geostationary satellite observations of ozone air quality

Zoogman, Peter William 14 October 2013 (has links)
Ozone in surface air is the primary cause of polluted air in the United States. The current ozone observing network is insufficient either to assess air quality or to fully inform our understanding of the factors controlling tropospheric ozone. This thesis investigates the benefit of an instrument in geostationary orbit for observing near surface ozone using Observing System Simulation Experiments (OSSEs). / Earth and Planetary Sciences
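The logic of an Observing System Simulation Experiment can be sketched in miniature. This is not the thesis's ozone OSSE: the "nature run", observation noise, and sampling densities below are invented numbers whose only purpose is to show how a denser observing system is scored against a withheld truth.

```python
import math, random

random.seed(4)

# Minimal OSSE sketch: a "nature run" supplies the true state, a simulated
# instrument samples it with noise, and the analysis is scored against the
# withheld truth. All numbers are illustrative, not real ozone statistics.
truth = [40.0 + 10.0 * math.sin(0.3 * k) for k in range(200)]
prior = [40.0] * len(truth)                      # climatological background

def analyze(obs_every, obs_std, prior_var=100.0):
    """Assimilate synthetic observations every obs_every steps; return RMSE."""
    analysis = list(prior)
    for k in range(0, len(truth), obs_every):
        z = truth[k] + random.gauss(0.0, obs_std)
        gain = prior_var / (prior_var + obs_std ** 2)   # scalar Kalman gain
        analysis[k] = prior[k] + gain * (z - prior[k])
    return math.sqrt(sum((a - t) ** 2 for a, t in zip(analysis, truth)) / len(truth))

sparse_rmse = analyze(obs_every=20, obs_std=2.0)  # stand-in for a sparse network
dense_rmse = analyze(obs_every=1, obs_std=2.0)    # stand-in for dense geo sampling
print(sparse_rmse, dense_rmse)
```

The point of the experiment design is that both hypothetical observing systems are evaluated against the same synthetic truth, so their analysis errors are directly comparable.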
404

Nonlinear orbit uncertainty prediction and rectification for space situational awareness

DeMars, Kyle Jordan 07 February 2011 (has links)
A new method for predicting the uncertainty in a nonlinear dynamical system is developed and analyzed in the context of uncertainty evolution for resident space objects (RSOs) in the near-geosynchronous orbit regime under the influence of central body gravitational acceleration, third body perturbations, and attitude-dependent solar radiation pressure (SRP) accelerations and torques. The new method, termed the splitting Gaussian mixture unscented Kalman filter (SGMUKF), exploits properties of the differential entropy or Rényi entropy for a linearized dynamical system to determine when a higher-order prediction of uncertainty reaches a level of disagreement with a first-order prediction, and then applies a multivariate Gaussian splitting algorithm to reduce the impact of induced nonlinearity. In order to address the relative accuracy of the new method with respect to the more traditional approaches of the extended Kalman filter (EKF) and unscented Kalman filter (UKF), several concepts regarding the comparison of probability density functions (pdfs) are introduced and utilized in the analysis. The research also describes high-fidelity modeling of the nonlinear dynamical system which drives the motion of an RSO, and includes models for evaluation of the central body gravitational acceleration, the gravitational acceleration due to other celestial bodies, and attitude-dependent SRP accelerations and torques when employing a macro plate model of an RSO. Furthermore, a high-fidelity model of the measurement of the line-of-sight of a spacecraft from a ground station is presented, which applies light-time and stellar aberration corrections, and accounts for observer and target lighting conditions, as well as for the sensor field of view. The developed algorithms are applied to the problem of forward predicting the time evolution of the region of uncertainty for RSO tracking, and uncertainty rectification via the fusion of incoming measurement data with prior knowledge.
It is demonstrated that the SGMUKF method is significantly better able to forward predict the region of uncertainty and is subsequently better able to utilize new measurement data.
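The core idea of Gaussian splitting, replacing one Gaussian with a moment-matched mixture before nonlinearity distorts it, can be shown in its simplest form. The published splitting libraries (including the multivariate ones a filter like SGMUKF would use) are more refined; the sketch below is only a two-component univariate split, with the spread parameter an illustrative choice, that preserves the first two moments exactly.

```python
import math

def split_gaussian(mu, var, spread=0.5):
    """Split N(mu, var) into two equally weighted components.

    Each component keeps variance (1 - spread**2) * var and the means are
    offset by +/- spread * sqrt(var), so the mixture's first two moments
    match the original Gaussian exactly.
    """
    a = spread * math.sqrt(var)
    v = (1.0 - spread ** 2) * var
    return [(0.5, mu - a, v), (0.5, mu + a, v)]

mix = split_gaussian(0.0, 4.0, spread=0.5)
# Check the mixture moments: E[x] and E[x^2] - E[x]^2.
mean = sum(w * m for w, m, _ in mix)
var = sum(w * (v + m ** 2) for w, m, v in mix) - mean ** 2
print(mean, var)  # moments preserved: 0.0, 4.0
```

Because each component is narrower than the original, a linearization (or unscented transform) of the dynamics about each component's mean is locally more accurate, which is the motivation for splitting when a nonlinearity indicator fires.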
405

A method for parameter estimation and system identification for model based diagnostics

Rengarajan, Sankar Bharathi 16 February 2011 (has links)
Model based fault detection techniques utilize functional redundancies in the static and dynamic relationships among system inputs and outputs for fault detection and isolation. Analytical models based on the underlying physics of the system can capture the dependencies between different measured signals in terms of system states and parameters. These physical models of the system can be used as a tool to detect and isolate system faults. As a machine degrades, system outputs deviate from desired outputs, generating residuals defined by the error between sensor measurements and corresponding model simulated signals. These error residuals contain valuable information to interpret system states and parameters. Setting up the measurements from a faulty system as a baseline, the parameters of the idealistic model can be varied to minimize these residuals. This process is called “Parameter Tuning”. A framework to automate this “Parameter Tuning” process is presented with a focus on DC motors and 3-phase induction motors. The parameter tuning module presented is a multi-tier module which is designed to operate on real system models that are highly non-linear. The tuning module combines artificial intelligence techniques like Quasi-Monte Carlo (QMC) sampling (Hammersley sequencing) and a genetic algorithm (the Non-Dominated Sorting Genetic Algorithm) with an Extended Kalman filter (EKF), which utilizes the system dynamics information available via the physical models of the system. A tentative Graphical User Interface (GUI) was developed to simplify the interaction between a machine operator and the module. The tuning module was tested with real measurements from a DC motor. A simulation study was performed on a 3-phase induction motor by suitably adjusting parameters in an analytical model. The QMC sampling and genetic algorithm stages worked well even on measurement data with the system operating in steady state condition.
The downside was the computational expense and the inability to estimate the parameters online; these stages effectively form a “batch estimator”. The EKF module enabled online estimation, with updates made as measurements arrived. However, the observability of the system from the incoming measurements posed a major challenge when dealing with state estimation filters. Implementation details and results are included, with plots comparing real and faulty systems.
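The online EKF stage described above amounts to joint state-and-parameter estimation: the unknown parameter is appended to the state as a random walk and tuned from measurements. A minimal sketch on a toy first-order motor model follows; the normalized dynamics dw/dt = -b*w + u, the noise levels, and the damping parameter b are illustrative assumptions, not the thesis's motor model.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, u, b_true = 0.01, 1.0, 2.0          # normalized motor: dw/dt = -b*w + u

# Augmented state [omega, b]: the damping parameter is modeled as a random
# walk so the EKF can tune it online from speed measurements.
x = np.array([0.0, 0.5])                # deliberately poor initial guess of b
P = np.diag([1.0, 1.0])
Q = np.diag([1e-6, 1e-6])               # small process noise keeps b adaptable
R = 1e-4                                # speed sensor noise variance

w_true = 0.0
for k in range(3000):
    # Truth simulation and noisy measurement of omega.
    w_true += dt * (-b_true * w_true + u)
    z = w_true + rng.normal(0.0, 1e-2)

    # EKF predict: propagate the state and the linearized covariance.
    F = np.array([[1.0 - dt * x[1], -dt * x[0]],
                  [0.0,              1.0      ]])
    x = np.array([x[0] + dt * (-x[1] * x[0] + u), x[1]])
    P = F @ P @ F.T + Q

    # EKF update with H = [1, 0]: only omega is measured, b is inferred.
    H = np.array([1.0, 0.0])
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * (z - x[0])
    P = P - np.outer(K, H @ P)

print(x)  # the b estimate should approach b_true = 2.0
```

The observability caveat in the abstract shows up directly here: if the input u were zero, omega would settle at zero and the measurements would carry no information about b.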
406

Control-friendly scheduling algorithms for multi-tool, multi-product manufacturing systems

Bregenzer, Brent Constant 27 January 2012 (has links)
The fabrication of semiconductor devices is a highly competitive and capital intensive industry. Due to the high costs of building wafer fabrication facilities (fabs), it is expected that products should be made efficiently with respect to both time and material, and that expensive unit operations (tools) should be utilized as much as possible. The process flow is characterized by frequent machine failures, drifting tool states, parallel processing, and reentrant flows. In addition, the competitive nature of the industry requires products to be made quickly and within tight tolerances. All of these factors conspire to make both the scheduling of product flow through the system and the control of product quality metrics extremely difficult. Up to now, much research has been done on the two problems separately, but until recently, interactions between the two systems, which can sometimes be detrimental to one another, have mostly been ignored. The research contained here seeks to tackle the scheduling problem by utilizing objectives based on control system parameters so that the two systems might behave in a more mutually beneficial manner. A non-threaded control system is used that models the multi-tool, multi-product process in a state space form, and estimates the states using a Kalman filter. Additionally, the process flow is modeled by a discrete event simulation. The two systems are then merged to give a representation of the overall system. Two control system matrices, the estimate error covariance matrix from the Kalman filter and a square form of the system observability matrix called the information matrix, are used to generate several control-based scheduling algorithms. These methods are then tested against more traditional approaches from the scheduling literature to determine their effectiveness on the basis of both how well they maintain the outputs near their targets and how well they minimize the cycle time of the products in the system.
The two metrics are viewed simultaneously through the use of Pareto plots, and the merits of the various scheduling methods are judged on the basis of Pareto optimality for several test cases.
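Judging schedulers by Pareto optimality over two minimized objectives (here, cycle time and output deviation) can be sketched in a few lines. The candidate points below are invented for illustration, not results from the thesis.

```python
def pareto_front(points):
    """Return the non-dominated subset when minimizing both objectives."""
    front = []
    for p in points:
        # p is dominated if some other point is no worse in both objectives.
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return front

# Hypothetical (mean cycle time, mean squared output deviation) per scheduler.
candidates = [(3.2, 0.9), (2.8, 1.4), (4.0, 0.5), (3.5, 1.0), (2.8, 1.1)]
front = pareto_front(candidates)
print(front)
```

A scheduler on the front cannot be improved in one metric without worsening the other, which is exactly the trade-off a Pareto plot of the test cases visualizes.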
407

Data Assimilation In Systems With Strong Signal Features

Rosenthal, William Steven January 2014 (has links)
Filtering problems in high dimensional geophysical applications often require spatially continuous models to interpolate spatially and temporally sparse data. Many applications in numerical weather and ocean state prediction are concerned with tracking and assessing the uncertainty in the position of large scale vorticity features, such as storm fronts, jet streams, and hurricanes. Quantifying the amplitude variance in these features is complicated by the fact that both height and lateral perturbations in the feature geometry are represented in the same covariance estimate. However, when there are sufficient observations to detect feature information like spatial gradients, the positions of these features can be used to further constrain the filter, as long as the statistical model (cost function) has provisions for both height perturbations and lateral displacements. Several authors since the 1990s have proposed various formalisms for the simultaneous modeling of position and amplitude errors, and the typical approaches to computing the generalized solutions in these applications are variational or direct optimization. The ensemble Kalman filter is often employed in large scale nonlinear filtering problems, but its predication on Gaussian statistics causes its estimators to suffer from analysis deflation or collapse, as well as the usual curse of dimensionality in high dimensional Monte Carlo simulations. Moreover, there is no theoretical guarantee of the performance of the ensemble Kalman filter with nonlinear models. Particle filters, which employ importance sampling to focus attention on the important regions of the likelihood, have shown promise in recent studies on the control of particle size. Consider an ensemble forecast of a system with prominent feature information.
The correction of displacements in these features, by pushing them into better agreement with observations, is an application of importance sampling, and Monte Carlo methods, including particle filters, and possibly the ensemble Kalman filter as well, are well suited to applications of feature displacement correction. In the present work, we show that the ensemble Kalman filter performs well in problems where large features are displaced both in amplitude and position, as long as it is used on a statistical model which includes both function height and local position displacement in the model state. In a toy model, we characterize the performance-degrading effect that untracked displacements have on filters when large features are present. We then employ tools from classical physics and fluid dynamics to statistically model displacements by area-preserving coordinate transformations. These maps preserve the area of contours in the displaced function, and using strain measures from continuum mechanics, we regularize the statistics on these maps to ensure they model smooth, feature-preserving displacements. The position correction techniques are incorporated into the statistical model, and this modified ensemble Kalman filter is tested on a system of vortices driven by a stochastically forced barotropic vorticity equation. We find that when the position correction term is included in the statistical model, the modified filter provides estimates which exhibit substantial reduction in analysis error variance, using a much smaller ensemble than what is required when the position correction term is removed from the model.
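The modified filter described above augments the statistical model with position displacements; the underlying analysis step it modifies is the standard stochastic EnKF update, which can be sketched generically. The state dimension, ensemble size, and observation below are illustrative, and no position correction is included in this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(X, y, H, r):
    """Stochastic EnKF analysis: X is (n_state, n_ens), y a scalar observation,
    H a linear observation row vector, r the observation error variance."""
    n = X.shape[1]
    Hx = H @ X                                   # predicted observations, (n,)
    # Sample covariances estimated from the ensemble.
    Xp = X - X.mean(axis=1, keepdims=True)
    Hp = Hx - Hx.mean()
    Pxy = Xp @ Hp / (n - 1)                      # state-obs cross-covariance
    Pyy = Hp @ Hp / (n - 1) + r
    K = Pxy / Pyy                                # Kalman gain
    # Perturbed observations keep the analysis ensemble spread consistent.
    yp = y + rng.normal(0.0, np.sqrt(r), size=n)
    return X + np.outer(K, yp - Hx)

X = rng.normal(0.0, 1.0, size=(3, 200))          # prior ensemble
H = np.array([1.0, 0.0, 0.0])                    # only the first state observed
Xa = enkf_update(X, 0.5, H, 0.25)
print(X[0].var(), Xa[0].var())                   # observed component's variance shrinks
```

The thesis's contribution sits on top of a step like this: before (or within) the update, ensemble members are mapped by area-preserving coordinate transformations so the Gaussian analysis acts on amplitude errors rather than on untracked position errors.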
408

Ensemble Filtering Methods for Nonlinear Dynamics

Kim, Sangil January 2005 (has links)
The standard ensemble filtering schemes such as the Ensemble Kalman Filter (EnKF) and Sequential Monte Carlo (SMC) do not properly represent states of low prior probability when the number of samples is too small and the dynamical system is a high-dimensional system with highly non-Gaussian statistics. For example, when the standard ensemble methods are applied to two well-known simple, but highly nonlinear systems, such as a one-dimensional stochastic diffusion process in a double-well potential and the well-known three-dimensional chaotic dynamical system of Lorenz, they produce erroneous results when tracking transitions of the systems from one state to the other. In this dissertation, a set of new parametric resampling methods are introduced to overcome this problem. The new filtering methods are motivated by a general H-theorem for the relative entropy of Markov stochastic processes. The entropy-based filters first approximate a prior distribution of a given system by a mixture of Gaussians, and the Gaussian components represent different regions of the system. Then the parameters in each Gaussian, i.e., the weight, mean, and covariance, are determined sequentially as new measurements are available. These alternative filters yield a natural generalization of the EnKF method to systems with highly non-Gaussian statistics when the mixture model consists of one single Gaussian and measurements are taken on full states. In addition, the new filtering methods give the quantities of the relative entropy and log-likelihood as by-products with no extra cost. We examine the potential usage and qualitative behaviors of the relative entropy and log-likelihood for the new filters. Those results of EnKF and SMC are also included. We present results of the new methods on the applications to the above two ordinary differential equations and one partial differential equation, with comparisons to the standard filters, EnKF and SMC.
These results show that the entropy-based filters correctly track the transitions between likely states in both highly nonlinear systems, even with a small sample size (N = 100).
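The relative entropy these filters report as a by-product has a closed form between Gaussians. A one-dimensional sketch (not the dissertation's code, and only the single-Gaussian special case of the mixture setting) is:

```python
import math

def kl_gaussians(m0, v0, m1, v1):
    """Relative entropy D( N(m0, v0) || N(m1, v1) ) in nats, 1-D case."""
    return 0.5 * (v0 / v1 + (m1 - m0) ** 2 / v1 - 1.0 + math.log(v1 / v0))

# Identical Gaussians carry zero relative entropy...
d0 = kl_gaussians(0.0, 1.0, 0.0, 1.0)
# ...and it grows as the filter distribution drifts from the reference.
d1 = kl_gaussians(1.0, 1.0, 0.0, 1.0)
print(d0, d1)  # 0.0 and 0.5
```

Monitoring a quantity like this is what lets an entropy-based filter quantify, at no extra cost, how far its posterior approximation has moved between updates.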
409

Diagnosis of a Truck Engine using Nonlinear Filtering Techniques

Nilsson, Fredrik January 2007 (has links)
Scania CV AB is a large manufacturer of heavy duty trucks that, with increasingly strict emission legislation, has a rising demand for an effective On Board Diagnosis (OBD) system. One idea for improving the OBD system is to employ a model for the construction of an observer based diagnosis system. The proposal in this report is, because of a nonlinear model, to use a nonlinear filtering method for improving the needed state estimates. Two nonlinear filters are tested, the Particle Filter (PF) and the Extended Kalman Filter (EKF). The primary objective is to evaluate the use of the PF for Fault Detection and Isolation (FDI), and to compare the result against the use of the EKF. With the information provided by the PF and the EKF, two residual based diagnosis systems and two likelihood based diagnosis systems are created. The results with the PF and the EKF are evaluated for both types of systems using real measurement data. It is shown that the four systems give approximately equal results for FDI, with the exception that using the PF is more computationally demanding than using the EKF. There are, however, some indications that the PF, due to the nonlinearities, could offer more if enough CPU time is available.
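A bootstrap particle filter of the kind compared against the EKF here can be sketched generically. The dynamics, noise levels, and particle count below are invented for illustration, not Scania's engine model; the per-particle propagate/weight/resample loop is also where the extra computational demand noted above comes from.

```python
import math, random

random.seed(3)

def particle_filter_step(particles, z, f, h, q_std, r_std):
    """One bootstrap PF cycle: propagate, weight by likelihood, resample."""
    # Propagate each particle through the (nonlinear) process model.
    particles = [f(x) + random.gauss(0.0, q_std) for x in particles]
    # Weight by the Gaussian measurement likelihood of observation z.
    w = [math.exp(-0.5 * ((z - h(x)) / r_std) ** 2) for x in particles]
    total = sum(w)
    w = [wi / total for wi in w]
    # Multinomial resampling to counter weight degeneracy.
    return random.choices(particles, weights=w, k=len(particles))

f = lambda x: 0.5 * x + 2.0 * math.sin(x)   # illustrative nonlinear dynamics
h = lambda x: x                             # direct (noisy) state measurement

x_true, particles = 0.1, [random.gauss(0.0, 2.0) for _ in range(500)]
for _ in range(30):
    x_true = f(x_true) + random.gauss(0.0, 0.1)
    z = h(x_true) + random.gauss(0.0, 0.3)
    particles = particle_filter_step(particles, z, f, h, 0.1, 0.3)

est = sum(particles) / len(particles)
print(est, x_true)  # the posterior mean should track the true state
```

Unlike the EKF, nothing here linearizes f or h, which is why the PF can pay off on strongly nonlinear models when CPU time permits.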
410

Evaluating SLAM algorithms for Autonomous Helicopters

Skoglund, Martin January 2008 (has links)
Navigation with unmanned aerial vehicles (UAVs) requires good knowledge of the current position and other states. A UAV navigation system often uses GPS and inertial sensors in a state estimation solution. If the GPS signal is lost or corrupted, state estimation must still be possible, and this is where simultaneous localization and mapping (SLAM) provides a solution. SLAM considers the problem of incrementally building a consistent map of a previously unknown environment while simultaneously localizing the vehicle within this map; thus, a solution does not require position from the GPS receiver. This thesis presents a visual feature based SLAM solution using a low resolution video camera, a low-cost inertial measurement unit (IMU) and a barometric pressure sensor. State estimation is made with an extended information filter (EIF) where sparseness in the information matrix is enforced with an approximation. An implementation is evaluated on real flight data and compared to an EKF-SLAM solution. Results show that both solutions provide similar estimates but the EIF is over-confident. The sparse structure is exploited, though possibly not fully, making the solution nearly linear in time; storage requirements are linear in the number of features, which enables evaluation over a longer period of time.
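Why the information form lends itself to sparse SLAM can be shown on a tiny example. This is not the thesis's filter (which additionally enforces sparsity with an approximation); it is a generic two-state information-form measurement update, with all numbers invented: a vehicle coordinate, one map feature, and a relative measurement between them.

```python
import numpy as np

# Information (canonical) form: with Y = P^-1 and eta = Y @ x, fusing a linear
# measurement z = H x + v, v ~ N(0, R), is purely additive:
#     Y   <- Y   + H^T R^-1 H
#     eta <- eta + H^T R^-1 z
# Each measurement touches only the entries of Y for the pose and features it
# involves, which is the (near-)sparsity EIF-based SLAM exploits.

P = np.diag([4.0, 1.0])              # prior covariance of [vehicle, feature]
x = np.array([0.0, 1.0])             # prior mean
Y = np.linalg.inv(P)
eta = Y @ x

H = np.array([[1.0, -1.0]])          # relative vehicle-to-feature measurement
R = np.array([[0.25]])
z = np.array([-1.2])

Rinv = np.linalg.inv(R)
Y = Y + H.T @ Rinv @ H
eta = eta + H.T @ Rinv @ z

# Recover the moment form to read off the posterior estimate.
x_post = np.linalg.solve(Y, eta)
print(x_post)
```

The trade-off is visible even here: updates are cheap and local in (Y, eta), but recovering the state estimate requires solving a linear system, which is where sparsity of Y pays for itself.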
