191 | Kalman filter and its application to flow forecasting. Ngan, Patricia, January 1985
The Kalman Filter has been applied to many fields of hydrology, particularly in the area of flood forecasting. This recursive estimation technique is based on a state-space approach that combines a model description of a process with observed data and accounts for uncertainties in a hydrologic system. This thesis deals with applications of the Kalman Filter to ARMAX models in the context of streamflow prediction. Implementation of the Kalman Filter requires specification of the noise covariances (Q, R) and the initial conditions of the state vector (x₀, P₀). Difficulties arise in streamflow applications because these quantities are often not known.
Forecasting performance of the Kalman Filter is examined using synthetic flow data, generated with chosen values for the initial state vector and the noise covariances. An ARMAX model is cast into state-space form with the coefficients as the state vector. Sensitivity of the flow forecasts to the specification of x₀, P₀, Q, and R (which may differ from the generation values) is examined. The filter's forecasting performance is mainly affected by the combined specification of Q and R. When both noise covariances are unknown, they should be specified relatively large in order to achieve reasonable forecasting performance. Specifying Q too small and R too large should be avoided, as it results in poor flow forecasts. The filter's performance is also examined using actual flow data from a large river whose behavior changes slowly with time. Three simple ARMAX models are used for this investigation. Although there are different ways of writing the ARMAX model in state-space form, it is found that the best forecasting scheme is to model the ARMAX coefficients as the state vector. Under this formulation, the Kalman Filter gives recursive estimates of the coefficients, so flow predictions can be revised at each time step with the latest state estimate. This formulation also has the feature that initial values of the ARMAX coefficients need not be known accurately.
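This formulation, an ARMAX model whose coefficients are tracked as the Kalman state, can be sketched in a few lines. The Python example below is not taken from the thesis; the random-walk coefficient model, the ARX(1,1) structure, and all noise values are illustrative assumptions.

```python
import numpy as np

def kalman_armax_step(x, P, H, y, Q, R):
    """One Kalman recursion treating the ARMAX coefficients as the state.

    x : current coefficient estimate (state vector)
    P : coefficient error covariance
    H : row vector of regressors (lagged flows, inputs, residuals)
    y : observed flow at this time step
    Q, R : process and measurement noise covariances (assumed known here)
    """
    # Time update: coefficients modeled as a random walk (identity transition)
    x_pred, P_pred = x, P + Q

    # Flow forecast and innovation
    innov = y - H @ x_pred
    S = H @ P_pred @ H.T + R              # innovation variance (1x1 here)

    # Measurement update
    K = P_pred @ H.T / S                  # Kalman gain
    x_new = x_pred + (K * innov).ravel()
    P_new = P_pred - np.outer(K, H @ P_pred)
    return x_new, P_new

# Illustrative ARX(1,1) example: flow_t = a*flow_{t-1} + b*rain_{t-1} + noise
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
flows, rains = [1.0], rng.uniform(0, 1, 50)
for t in range(1, 50):
    flows.append(a_true * flows[-1] + b_true * rains[t - 1] + 0.05 * rng.standard_normal())

x = np.zeros(2)                           # poor initial guess for (a, b)
P = np.eye(2) * 10.0                      # large P0: little confidence in x0
Q, R = np.eye(2) * 1e-4, 0.05 ** 2
for t in range(1, 50):
    H = np.array([[flows[t - 1], rains[t - 1]]])
    x, P = kalman_armax_step(x, P, H, flows[t], Q, R)
print("estimated coefficients:", x)       # should approach (0.8, 0.5)
```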
The noise variances of each of the three models are estimated by the method of maximum likelihood, with the likelihood function evaluated in terms of the innovations. Analyses of flow data for the stations considered in this thesis indicate that the variance of the measurement error is proportional to the square of the flow.
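For reference, the innovations form of the Gaussian log-likelihood commonly used for this kind of estimation is shown below; the thesis's exact parameterization of Q and R may differ.

```latex
% Innovations form of the Gaussian log-likelihood (standard result):
\log L(\theta) = -\frac{1}{2}\sum_{t=1}^{T}
  \left[\, \log 2\pi + \log S_t(\theta)
         + \frac{\nu_t(\theta)^{2}}{S_t(\theta)} \,\right],
\qquad
\nu_t = y_t - H_t\,\hat{x}_{t\mid t-1}, \quad
S_t = H_t P_{t\mid t-1} H_t^{\top} + R .
```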
In practice, flow predictions several time steps in advance are often required. For autoregressive processes, this involves unknown elements in the system matrix H of the Kalman model. The Kalman algorithm underestimates the variance of the forecast error if H and x are both unknown. For the AR(1) model, a general expression for the mean square error of the forecast is developed. It is shown that the formula reduces to the Kalman equation for the case where the system matrix is known. The importance of this formula is evident in forecasting situations where management decisions depend on the reliability of flow predictions, as reflected by their mean square errors.
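The thesis's general expression is not reproduced here; as a point of reference, the textbook k-step-ahead result for an AR(1) process with a known coefficient, which the general formula reduces to in the known-system-matrix case, is:

```latex
% Reference case only (not the thesis's general formula): k-step-ahead
% forecast of y_{t+1} = \phi y_t + w_t with \phi known, w_t ~ N(0, \sigma_w^2):
\hat{y}_{t+k\mid t} = \phi^{k} y_t, \qquad
\mathrm{MSE}\!\left(\hat{y}_{t+k\mid t}\right)
  = \sigma_w^{2}\sum_{j=0}^{k-1}\phi^{2j}
  = \sigma_w^{2}\,\frac{1-\phi^{2k}}{1-\phi^{2}} \quad (\phi \neq \pm 1).
```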
192 | Low-Rank Kalman Filtering in Subsurface Contaminant Transport Models. El Gharamti, Mohamad
Understanding the geology and hydrology of the subsurface is important for modeling the fluid flow and the behavior of the contaminant. Accurate knowledge of how contaminants move through the porous medium is essential in order to track them and later extract them from the aquifer. A two-dimensional flow model is studied and then applied to a linear contaminant transport model in the same porous medium. Because of the different possible sources of uncertainty, the deterministic model by itself cannot give exact estimates of the future contaminant state. Incorporating observations into the model can guide it toward the true state. This is usually done using the Kalman filter (KF) when the system is linear and the extended Kalman filter (EKF) when the system is nonlinear. To overcome the high computational cost of the KF, we use the singular evolutive Kalman filter (SEKF) and the singular evolutive extended Kalman filter (SEEKF), approximations of the KF that operate with low-rank covariance matrices. The SEKF can be implemented on large-dimensional contaminant problems for which the full KF is not feasible. Experimental results show that, with both perfect and imperfect models, the low-rank filters can provide estimates as accurate as the full KF at much lower computational cost. Localization helps the filter analysis as long as there are enough neighboring data around the point being analyzed. Estimating the permeabilities of the aquifer is successfully tackled using both the EKF and the SEEKF.
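To make the low-rank idea concrete, here is a minimal sketch of a Kalman analysis step carried out with a factored covariance P ≈ LLᵀ, so the expensive algebra stays in the small rank-r and observation spaces. It is not the SEKF/SEEKF algorithm itself (which also evolves the factor through the model); the problem sizes, observation operator, and noise levels are arbitrary assumptions.

```python
import numpy as np

def low_rank_kf_analysis(x_f, L, H, y, R):
    """Kalman analysis step with a low-rank forecast covariance P_f ~ L L^T.

    x_f : forecast state (n,)
    L   : low-rank covariance factor (n, r), r << n
    H   : observation operator (m, n)
    y   : observations (m,)
    R   : observation error covariance (m, m)
    """
    V = H @ L                                    # (m, r)
    S = V @ V.T + R                              # innovation covariance (m, m)
    K = L @ V.T @ np.linalg.inv(S)               # Kalman gain (n, m)
    x_a = x_f + K @ (y - H @ x_f)

    # Updated covariance P_a = L (I_r - V^T S^{-1} V) L^T, kept in factored form.
    M = np.eye(L.shape[1]) - V.T @ np.linalg.solve(S, V)    # (r, r), SPD
    w, U = np.linalg.eigh(M)
    L_a = L @ (U * np.sqrt(np.clip(w, 0.0, None)))          # new factor (n, r)
    return x_a, L_a

# Tiny illustrative problem (sizes are arbitrary assumptions)
rng = np.random.default_rng(1)
n, r, m = 200, 10, 20
x_f = rng.standard_normal(n)
L = rng.standard_normal((n, r)) * 0.3
H = np.zeros((m, n)); H[np.arange(m), rng.choice(n, m, replace=False)] = 1.0
y = H @ x_f + 0.1 * rng.standard_normal(m)
x_a, L_a = low_rank_kf_analysis(x_f, L, H, y, 0.01 * np.eye(m))
```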
193 | A new deterministic Ensemble Kalman Filter with one-step-ahead smoothing for storm surge forecasting. Raboudi, Naila Mohammed Fathi
The Ensemble Kalman Filter (EnKF) is a popular data assimilation method for state-parameter estimation. Following a sequential assimilation strategy, it breaks the problem into alternating cycles of forecast and analysis steps. In the forecast step, the dynamical model is used to integrate a stochastic sample approximating the state analysis distribution (called the analysis ensemble) to obtain a forecast ensemble. In the analysis step, the forecast ensemble is updated with the incoming observation using a Kalman-like correction, and the result is used for the next forecast step. In realistic large-scale applications, EnKFs are implemented with limited ensembles and often poorly known model error statistics, leading to a crude approximation of the forecast covariance that strongly limits the filter performance. Recently, a new EnKF was proposed in [1] following a one-step-ahead smoothing strategy (EnKF-OSA), which involves an OSA smoothing of the state between two successive analyses. At each time step, EnKF-OSA exploits the observation twice. The incoming observation is first used to smooth the ensemble at the previous time step. The resulting smoothed ensemble is then integrated forward to compute a "pseudo forecast" ensemble, which is again updated with the same observation. The idea of constraining the state with future observations is to add more information to the estimation process in order to mitigate the suboptimal character of EnKF-like methods. The second EnKF-OSA "forecast" is computed from the smoothed ensemble and should therefore provide an improved background.
In this work, we propose a deterministic variant of the EnKF-OSA based on the Singular Evolutive Interpolated Kalman (SEIK) filter. The motivation is to avoid the observation perturbations of the stochastic EnKF in order to improve the scheme's behavior when assimilating large data sets with small ensembles. The new SEIK-OSA scheme is implemented, and its efficiency is demonstrated by performing assimilation experiments with the highly nonlinear Lorenz model and with a realistic setting of the Advanced Circulation (ADCIRC) model configured for storm surge forecasting in the Gulf of Mexico during Hurricane Ike.
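For context, the sketch below shows the textbook stochastic EnKF analysis step, including the per-member observation perturbations that a deterministic (SEIK-type) variant is designed to avoid. It is not the thesis's SEIK-OSA scheme; the shapes and random generator are assumptions.

```python
import numpy as np

def enkf_analysis(E_f, H, y, R, rng):
    """Stochastic EnKF analysis: update forecast ensemble E_f (n, N) with obs y (m,).

    Each member assimilates a perturbed copy of the observation, which is
    exactly the sampling noise a deterministic square-root scheme avoids.
    """
    n, N = E_f.shape
    X = E_f - E_f.mean(axis=1, keepdims=True)        # state anomalies (n, N)
    HX = H @ E_f
    HA = HX - HX.mean(axis=1, keepdims=True)         # obs-space anomalies (m, N)
    P_HH = HA @ HA.T / (N - 1) + R                   # approx. H P H^T + R
    P_xH = X @ HA.T / (N - 1)                        # approx. P H^T
    K = P_xH @ np.linalg.inv(P_HH)

    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return E_f + K @ (Y - HX)                        # analysis ensemble

# In an OSA-smoothing variant, the same observation would be used twice:
# first to smooth the previous analysis ensemble, then to update the
# "pseudo forecast" ensemble obtained by propagating the smoothed members.
```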
194 | Switching hybrid recommender system to aid the knowledge seekers. Backlund, Alexander, January 2020
In our daily life, time is of the essence. People do not have time to browse through hundreds of thousands of digital items every day to find the right item for them. This is where a recommendation system shines. Tigerhall is a company that distributes podcasts, ebooks and events to subscribers. They are expanding their digital content warehouse, which leads to more data for the users to filter. To make it easier for users to find the right podcast or the most exciting e-book or event, a recommendation system has been implemented. A recommender system can be implemented in many different ways. Content-based filtering methods focus on information about the items themselves and try to find relevant items based on that. Another alternative is collaborative filtering, which uses information about what the consumer has previously consumed in correlation with what other users have consumed to find relevant items. In this project, a hybrid recommender system that uses a k-nearest neighbors algorithm alongside a matrix factorization algorithm has been implemented. The k-nearest neighbors algorithm performed well despite the sparse data, while the matrix factorization algorithm performed worse overall; the matrix factorization algorithm performed well only for users who had consumed plenty of items.
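A switching hybrid of this kind can be reduced to a few lines: use matrix factorization scores when a user's history is long enough, otherwise fall back to the neighborhood model. The sketch below is only illustrative; the threshold, the scoring callables, and their names are assumptions, not Tigerhall's implementation.

```python
def recommend(user_id, interactions, knn_scores, mf_scores, min_history=5, k=10):
    """Switching hybrid: fall back to kNN when a user's history is too sparse
    for matrix factorization to be reliable.

    interactions : dict user_id -> set of consumed item ids
    knn_scores, mf_scores : callables (user_id) -> dict item_id -> score
    min_history : switch threshold (an illustrative assumption)
    """
    history = interactions.get(user_id, set())
    scores = mf_scores(user_id) if len(history) >= min_history else knn_scores(user_id)
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [item for item, _ in ranked if item not in history][:k]
```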
195 | Recommender System for Retail Industry: Ease customers' purchase by generating personal purchase carts consisting of relevant and original products. CARRA, Florian, January 2016
In this study we explore the problem of purchase cart recommendation in the field of retail. How can we propose the right customized purchase cart, one that takes into account both habit and serendipity constraints? Recommender system applications have largely been restricted to Internet service providers: movie recommendation, e-commerce, search engines. We brought algorithmic and technological breakthroughs to outdated retail systems while keeping in mind their specificities: purchase carts rather than single products, and restricted interactions between customers and products. After collecting ingenious recommendation methods, we defined two major directions, correctness and serendipity, that serve as discriminating aspects for comparing the multiple solutions we implemented. We expect our solutions to benefit customers, saving them time and encouraging open-mindedness, and to gradually erase the separation between supermarkets and e-commerce platforms as far as customized experience is concerned.
196 | Development of a Terrain Pre-filtering Technique applicable to Probabilistic Terrain using Constraint Mode Tire Model. Ma, Rui, 15 October 2013
The vertical force generated from terrain-tire interaction has long been of interest for vehicle dynamic simulations and chassis development. As the terrain serves as the main excitation to the suspension system through the pneumatic tires, proper terrain and tire models are required to produce reliable vehicle responses. Due to the high complexity of the tire structure and the immense size of a high-fidelity terrain profile, it is not efficient to calculate the terrain-tire interaction at every location. The use of a simpler tire model (e.g., a point-follower tire model) and a pre-filtered terrain profile as an equivalent input considerably reduces the simulation time. The resulting responses would be nearly identical to those obtained using a complex tire model and unfiltered terrain, with a significant improvement in computational efficiency.
In this work, a terrain pre-filtering technique is developed to improve simulation efficiency while still providing reliable load prediction. The work is divided into three parts. First, a stochastic gridding method is developed to include the measurement uncertainties in the gridded terrain profile used as input to the vehicle simulation. The resulting uniformly spaced terrain is considered probabilistic, a series of gridding nodes whose heights are represented by random variables. Next, a constraint mode tire model is proposed to emulate the tire radial displacement and the corresponding force given the terrain excitation. Finally, based on the constraint mode tire model, the pre-filtering technique is developed. At each location along the tire's path, the tire center height is adjusted until the spindle load reaches a pre-designated constant load. The resulting tire center trajectory is the pre-filtered terrain profile and serves as an equivalent input to the simple tire model. The vehicle response produced by using the pre-filtered terrain profile and the simple tire model is analyzed for accuracy, and the improvement in computational efficiency is also examined. The effectiveness of the pre-filtering technique is validated on probabilistic terrain by using different realizations of terrain profiles. It is shown across multiple profiles that the computational efficiency can be improved by three orders of magnitude with no statistically significant change in the resulting loads.
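The core of the pre-filtering step, adjusting the tire center height at each station until the tire model returns the designated constant spindle load, can be sketched as a simple bisection. The tire_load callable, the bracket size, and the monotonicity assumption are placeholders for the constraint mode tire model described above, not the dissertation's actual code.

```python
import numpy as np

def prefilter_terrain(terrain_heights, tire_load, target_load, tol=1e-3):
    """Pre-filter a terrain profile: at each station, find the tire-center
    height at which the tire model returns the designated spindle load.

    terrain_heights : 1-D array of gridded terrain heights along the path
    tire_load(z_center, i) : spindle load from the tire model with the tire
        centered at height z_center over station i; assumed to decrease
        monotonically as the center is raised
    target_load : pre-designated constant spindle load
    Returns the tire-center trajectory, i.e. the pre-filtered profile.
    """
    filtered = np.empty_like(terrain_heights, dtype=float)
    for i, z_ground in enumerate(terrain_heights):
        lo, hi = z_ground, z_ground + 2.0       # bracket in meters (assumed)
        while hi - lo > tol:                    # bisection on center height
            mid = 0.5 * (lo + hi)
            if tire_load(mid, i) > target_load:
                lo = mid                        # load too high -> raise center
            else:
                hi = mid
        filtered[i] = 0.5 * (lo + hi)
    return filtered
```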
197 | classCleaner: A Quantitative Method for Validating Peptide Identification in LC-MS/MS Workflows. Key, Melissa Chester (Indiana University-Purdue University Indianapolis, IUPUI)
Because label-free liquid chromatography-tandem mass spectrometry (LC-MS/MS) shotgun proteomics infers the peptide sequence of each measurement, there is inherent uncertainty in the identity of each peptide and its originating protein. Removing misidentified peptides can improve the accuracy and power of downstream analyses when differences between proteins are of primary interest.
In this dissertation I present classCleaner, a novel algorithm designed to identify misidentified peptides from each protein using the available quantitative data. The algorithm is based on the idea that distances between peptides belonging to the same protein are stochastically smaller than those between peptides in different proteins. The method first determines a threshold based on the estimated distribution of these two groups of distances. This is used to create a decision rule for each peptide based on counting the number of within-protein distances smaller than the threshold.
Using simulated data, I show that classCleaner always reduces the proportion of misidentified peptides, with better results for larger proteins (by number of constituent peptides), smaller inherent misidentification rates, and larger sample sizes. ClassCleaner is also applied to an LC-MS/MS proteomics data set and the Congressional Voting Records data set from the UCI machine learning repository. The latter is used to demonstrate that the algorithm is not specific to proteomics.
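A toy version of the decision rule described above, counting how many of a peptide's within-protein distances fall below a threshold, might look as follows. The threshold is assumed to have been estimated separately from the fitted within- and between-protein distance distributions, and the keep/flag fraction is an illustrative assumption rather than the dissertation's exact rule.

```python
import numpy as np
from scipy.spatial.distance import cdist

def flag_misidentified(peptide_data, threshold, min_fraction=0.5):
    """Toy version of a classCleaner-style decision rule.

    peptide_data : (p, s) array of quantitative values for the p peptides
        assigned to one protein across s samples
    threshold : distance cutoff, assumed already estimated from the
        within- vs between-protein distance distributions
    min_fraction : keep a peptide if at least this fraction of its distances
        to the protein's other peptides falls below the threshold
        (an assumption; the dissertation's rule may differ)
    Returns a boolean array, True where a peptide is flagged as misidentified.
    """
    D = cdist(peptide_data, peptide_data)         # pairwise peptide distances
    np.fill_diagonal(D, np.inf)                   # ignore self-distances
    close_counts = (D < threshold).sum(axis=1)    # within-protein "votes"
    return close_counts < min_fraction * (len(peptide_data) - 1)
```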
198 | Multi-rate Sensor Fusion for GPS Navigation Using Kalman Filtering. Mayhew, David McNeil, 08 July 1999
With the advent of the Global Positioning System (GPS), we now have the ability to determine absolute position anywhere on the globe. Although GPS systems work well in open environments with no overhead obstructions, they are subject to large unavoidable errors when the reception from some of the satellites is blocked. This occurs frequently in urban environments, such as downtown New York City. GPS systems require at least four visible satellites to maintain a good position fix; tall buildings and tunnels often block several, if not all, of them. Additionally, due to Selective Availability (SA), where small amounts of error are intentionally introduced, GPS errors can typically range up to 100 ft or more. This thesis proposes several methods for improving the position estimation capabilities of a system by incorporating other sensor and data technologies, including Kalman-filtered inertial navigation systems, rule-based and fuzzy-based sensor fusion techniques, and a unique map-matching algorithm.
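A minimal sketch of the multi-rate idea, dead-reckoning on inertial data every step and applying a Kalman correction only when a GPS fix is available, is shown below for a one-dimensional constant-velocity state. The noise levels, sample rates, and state layout are illustrative assumptions, not the thesis's actual filter.

```python
import numpy as np

def fuse_gps_ins(accel, gps, dt, sigma_a=0.5, sigma_gps=10.0):
    """Predict with high-rate inertial data; correct with GPS when available.

    accel : high-rate longitudinal acceleration samples
    gps   : dict step_index -> position fix (missing entries = blocked satellites)
    dt    : inertial sample period; state is [position, velocity]
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    H = np.array([[1.0, 0.0]])
    Q = sigma_a**2 * np.outer(B, B)
    R = np.array([[sigma_gps**2]])
    x, P = np.zeros(2), np.eye(2) * 100.0
    track = []
    for k, a in enumerate(accel):
        x = F @ x + B * a                      # inertial (dead-reckoning) prediction
        P = F @ P @ F.T + Q
        if k in gps:                           # GPS fix available at this step
            innov = gps[k] - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ innov).ravel()
            P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return np.array(track)
```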
199 | A Unified Approach to Linear Filtering Using a Generalized Covariance Representation. Thomas, Stephen J.
No description available.
200 | Identification of linear systems using periodic inputs. Carew, Burian, January 1974
No description available.