251 |
A methodology for the application of an automated and interactive reification process in a virtual Community of Practice. Rauffet, Philippe, 09 October 2007 (has links) (PDF)
Communities of practice are particular, identified knowledge networks operating in a new global, virtual and digital framework. The study of their specific characteristics, Legitimate Peripheral Participation and the Reification/Participation duality, provides the background needed to understand and formalize the barriers and limits of this new context. To overcome them, an analysis of the tools and methods for computerized reification (content analysis, information architecture, information visualization) and for enrichment and assessment of content and users (human-computer interaction, collaborative filtering) makes it possible to develop a methodology supporting the application of an automated and interactive reification process in a virtual Community of Practice.
|
252 |
Wideband Sigma-Delta Modulators. Yuan, Xiaolong, January 2010 (has links)
Sigma-delta modulators (SDMs) have emerged as an attractive candidate for analog-to-digital conversion in single-chip front ends thanks to their continuously improving performance. Their major disadvantage is the limited bandwidth due to the need for oversampling; extending these converters to broadband applications therefore requires lowering the oversampling ratio (OSR). The aim of this thesis is to investigate topologies and structures of sigma-delta modulators suitable for wideband applications, e.g. wireline or wireless communication systems with a digital baseband of about one to ten MHz.

It has recently become popular to feed forward the input signal in wideband sigma-delta modulators so that the integrators only process quantization errors, the advantage being that the actual signal is not distorted by opamp and integrator nonlinearities. An improved feedforward 2-2 cascaded structure is presented based on a unity-gain signal transfer function (STF). An improved signal-to-noise ratio (SNR) is obtained by optimizing the zero placement of the noise transfer function (NTF) and adopting a multi-bit quantizer. The proposed structure has low distortion across the entire input range.

In high-order single-loop continuous-time (CT) sigma-delta modulators, excess loop delay may cause instability. Previous techniques for compensating internal quantizer and feedback DAC delay are studied, especially for the feedforward structure. Two alternative low-power feedforward continuous-time sigma-delta modulators with excess loop delay compensation are proposed. Simulation-based CT modulator synthesis from discrete-time topologies is adopted to obtain the loop filter coefficients. Design examples are given to illustrate the proposed structure and synthesis methodology.

Continuous-time quadrature bandpass sigma-delta modulators (QBSDM) efficiently realize asymmetric noise shaping thanks to the complex filtering embedded in their loops. The effect of different feedback waveforms inside the modulator on the NTF of quadrature sigma-delta modulators is presented. An observation is made that a complex NTF can be realized by implementing the loop as a cascade of complex integrators with an SCR feedback digital-to-analog converter (DAC), which is desirable for its lower sensitivity to loop mismatch. The QBSDM design for different bandpass center frequencies relative to the sampling frequency is illustrated.

The last part of the thesis is devoted to the design of a wideband reconfigurable sigma-delta pipelined modulator, which consists of a 2-1-1 cascaded modulator and a pipelined analog-to-digital converter (ADC) as a multi-bit quantizer in the last stage. It is scalable for different bandwidth/resolution applications. The detailed design is presented from system to circuit level. The prototype chip is fabricated in a TSMC 0.25 um process and measured on the test bench. The measurement results show that an SNR over 60 dB is obtained with a sampling frequency of 70 MHz and an OSR of ten.
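The noise-shaping idea underlying all of these architectures can be illustrated with a much simpler first-order, 1-bit discrete-time sigma-delta modulator (not the 2-2 cascaded or continuous-time structures of the thesis); the sketch below, with illustrative signal parameters and a crude boxcar decimation filter, shows the 1-bit stream recovering the oversampled input:

```python
import numpy as np

def sigma_delta_1st_order(x):
    """First-order discrete-time sigma-delta modulator with a 1-bit quantizer.

    The integrator accumulates the error between the input and the fed-back
    quantizer output, so quantization noise is pushed to high frequencies
    (first-order noise shaping: y[n] = x[n] + e[n] - e[n-1])."""
    integrator = 0.0
    y = np.empty_like(x)
    for n, xn in enumerate(x):
        integrator += xn - (y[n - 1] if n > 0 else 0.0)
        y[n] = 1.0 if integrator >= 0 else -1.0
    return y

# Oversampled low-frequency sine: after low-pass filtering (here a simple
# moving average over one OSR window) the 1-bit stream tracks the input.
osr = 64
n = 4096
t = np.arange(n)
x = 0.5 * np.sin(2 * np.pi * t / (2 * osr * 8))
y = sigma_delta_1st_order(x)
recon = np.convolve(y, np.ones(osr) / osr, mode="same")
err = np.mean((recon - x) ** 2)   # small despite the 1-bit quantizer
```

Because the shaped noise is a first difference, averaging a window of length OSR telescopes it down to roughly 2/OSR, which is why even this crude filter works.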
|
253 |
Particle Filtering for Location Estimation. Krenek, Oliver Francis Daley, January 2011 (has links)
Vehicle location and tracking has a variety of commercial applications and none of the techniques currently used can provide accurate results in all situations. This thesis details a preliminary investigation into a new location estimation method which uses optical environmental data, gathered by the vehicle during motion, to locate and track vehicle positions by comparing said data to pre-recorded optical maps of the intended location space. The design and implementation of an optical data recorder is presented. The map creation process is detailed and the location algorithm, based on a particle filter, is described in full.
System tests were performed offline on a desktop PC using real-world data collected by the data recorder, and their results are presented. These tests show good performance for the system tracking the vehicle once its approximate location is determined. However, locating a vehicle from scratch appears to be infeasible in a realistically large location space.
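The particle-filter core of such a location system can be sketched in a one-dimensional toy form; the Gaussian map-matching measurement model, noise levels and vehicle speed below are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_1d(observations, n_particles=1000,
                       motion_std=0.5, meas_std=1.0, speed=1.0):
    """Bootstrap particle filter tracking a vehicle along a 1-D track.

    Each observation is a noisy position measurement (a stand-in for
    matching optical data against a pre-recorded map). Returns the
    posterior mean position at each step."""
    particles = rng.uniform(0, 100, n_particles)   # unknown initial position
    estimates = []
    for z in observations:
        # predict: nominal speed plus motion noise
        particles += speed + rng.normal(0, motion_std, n_particles)
        # update: weight particles by Gaussian measurement likelihood
        w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
        w /= w.sum()
        # resample (multinomial) to focus particles on likely positions
        particles = rng.choice(particles, size=n_particles, p=w)
        estimates.append(particles.mean())
    return np.array(estimates)

# simulate a vehicle starting at position 20.0 moving at unit speed
true_pos = 20.0 + np.arange(1, 31)
obs = true_pos + rng.normal(0, 1.0, 30)
est = particle_filter_1d(obs)
final_error = abs(est[-1] - true_pos[-1])
```

The initial uniform spread over the whole track mirrors the "locating from scratch" problem: the filter converges here only because the track is short, consistent with the thesis finding that global localization in a large space is much harder than tracking.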
|
254 |
Nonlinear Transformations and Filtering Theory for Space Operations. Weisman, Ryan Michael, 1984-, 14 March 2013 (has links)
Decisions for asset allocation and protection are predicated upon accurate knowledge of the current operating environment as well as correctly characterizing the evolution of the environment over time. The desired kinematic and kinetic states of objects in question cannot be measured directly in most cases and instead are inferred or estimated from available measurements using a filtering process. Often, nonlinear transformations between the measurement domain and desired state domain distort the state domain probability density function yielding a form which does not necessarily resemble the form assumed in the filtering algorithm. The distortion effect must be understood in greater detail and appropriately accounted for so that even if sensors, state estimation algorithms, and state propagation algorithms operate in different domains, they can all be effectively utilized without any information loss due to domain transformations.
This research presents an analytical investigation into understanding how non-linear transformations of stochastic, but characterizable, processes affect state and uncertainty estimation with direct application to space object surveillance and spacecraft attitude determination. Analysis is performed with attention to construction of the state domain probability density function since state uncertainty and correlation are derived from the statistical moments of the probability density function. Analytical characterization of the effect nonlinear transformations impart on the structure of state probability density functions has direct application to conventional nonlinear filtering and propagation algorithms in three areas: (1) understanding how smoothing algorithms used to estimate indirectly observed states impact state uncertainty, (2) justification or refutation of assumed state uncertainty distribution for more realistic uncertainty quantification, and (3) analytic automation of initial state estimate and covariance in lieu of user tuning.
A nonlinear filtering algorithm based upon Bayes’ Theorem is presented to account for the impact nonlinear domain transformations impart on probability density functions during the measurement update and propagation phases. The algorithm is able to accommodate different combinations of sensors for state estimation and can also be used to hypothesize system parameters or unknown states from available measurements, because information is appropriately accounted for.
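The density-distortion effect described above is easy to demonstrate numerically: push a Gaussian through even a mild nonlinearity and the transformed density acquires a curvature-corrected mean and nonzero skewness that a linearized, Gaussian-assuming filter would miss. A minimal Monte Carlo sketch, with y = x**2 chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# A Gaussian state passed through a nonlinear measurement map: the
# transformed density is no longer Gaussian.
x = rng.normal(loc=2.0, scale=0.5, size=200_000)
y = x ** 2

# First-order (linearized) propagation predicts mean f(mu) = 4.0, but the
# true mean carries a curvature correction: E[x^2] = mu^2 + sigma^2 = 4.25.
linearized_mean = 2.0 ** 2
true_mean = y.mean()

# Sample skewness of the transformed variable: clearly nonzero, so any
# filter that assumes a Gaussian state density mischaracterizes it.
skew = np.mean(((y - y.mean()) / y.std()) ** 3)
```

For this map the exact skewness works out to about 0.73, so both the shifted mean and the asymmetry are visible effects of the transformation, not sampling noise.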
|
255 |
Practical on-line model validation for model predictive controllers (MPC). Naidoo, Yubanthren Tyrin. January 2010 (has links)
A typical petro-chemical or oil-refining plant is known to operate with hundreds if not
thousands of control loops. All critical loops are primarily required to operate at their
respective optimal levels in order for the plant to run efficiently. With such a large
number of vital loops, it is difficult for engineers to monitor and maintain these loops
so that they operate under optimum conditions at all times. Parts of
processes are interactive, increasingly so with today's greater integration, calling for
more advanced control systems. The most widely applied advanced
process control system is the Model Predictive Controller (MPC). The success of these
controllers is noted in the large number of applications worldwide. These controllers rely
on a process model in order to predict future plant responses.
Naturally, the performance of model-based controllers is intimately linked to the quality
of the process models. Industrial project experience has shown that the most difficult and
time-consuming work in an MPC project is modeling and identification. With time, the
performance of these controllers degrades due to changes in feed, working regime as well
as plant configuration. One of the causes of controller degradation is this degradation of
process models. If a discrepancy between the controller’s plant model and the plant itself
exists, controller performance may be adversely affected. It is important to detect these
changes and re-identify the plant model to maintain control performance over time.
In order to avoid the time-consuming process of complete model identification, a model
validation tool is developed which provides a model quality indication based on real-time
plant data. The focus has been on developing a method that is simple to implement but
still robust. The techniques and algorithms presented are developed as far as possible to
resemble an on-line software environment and are capable of running parallel to the
process in real time. These techniques are based on parametric (regression) and nonparametric
(correlation) analyses which complement each other in identifying problems within
on-line models. These methods pinpoint the precise location of a mismatch. This
implies that only a few inputs have to be perturbed in the re-identification process and
only the degraded portion of the model is to be updated. This work is carried out for the
benefit of SASOL, exclusively focused on the Secunda plant which has a large number of
model predictive controllers that are required to be maintained for optimal economic
benefit. The efficacy of the methodology developed is illustrated in several simulation
studies with the key intention to mirror occurrences present in industrial processes. The
methods were also tested on an industrial application. The key results and shortfalls of
the methodology are documented. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2010.
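One way such a non-parametric (correlation) mismatch check can work is to cross-correlate model residuals with the inputs: for a good model the residual is uncorrelated with every input, and a significant correlation flags the mismatched channel. A toy sketch with a static-gain model (the actual SASOL tool is not public; all signals and gains below are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

def mismatch_score(u, y, model_gain, lags=10):
    """Cross-correlate one-step residuals with the input over several lags.

    For a matched model the residual is (close to) white noise
    uncorrelated with u; a large normalized correlation indicates
    model-plant mismatch in that input channel."""
    residual = y - model_gain * u                 # simple static-gain model
    r, e = u - u.mean(), residual - residual.mean()
    denom = np.sqrt(np.sum(r ** 2) * np.sum(e ** 2))
    corrs = [np.sum(r[:len(r) - k] * e[k:]) / denom for k in range(lags)]
    return max(abs(c) for c in corrs)

n = 2000
u = rng.normal(size=n)                  # plant input (excitation)
plant_gain = 2.0
y = plant_gain * u + 0.1 * rng.normal(size=n)

good = mismatch_score(u, y, model_gain=2.0)   # matched model: near zero
bad = mismatch_score(u, y, model_gain=1.5)    # mismatched: near one
```

In a multivariable setting the same score computed per input-output pair is what lets the method pinpoint the location of a mismatch, so only the degraded channels need re-identification.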
|
256 |
Real-time observer modelling of a gas-phase ethylene polymerisation reactor. Thomason, Richard. January 2000 (has links)
The desire for precise polymer property control, minimum wastage through grade transitions,
and early instrument fault detection, has led to a significant effort in the modelling and control
of ethylene polymerisation world-wide. Control is difficult due to complex inter-relationships
between variables and long response times from gas to solid phase.
The approach in this study involves modelling using the kinetic equations. This forms the
basis of a scheme for real-time kinetic parameter identification and Kalman filtering of the
reactor gas composition. The scheme was constructed off-line and tested on several
industrial polymer grades using historical plant data. The scheme was also converted into a
form for use on the linear low-density polyethylene plant, Poly 2, at POLIFIN Limited.
There proved to be no difficulty in the identification step, but the Kalman filter requires more
tuning for reliable fault detection. The software has been commissioned on-line and results
from the POLIFIN plant match the off-line model exactly. / Thesis (M.Sc.Eng.)-University of Natal, Durban, 2000.
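The Kalman-filtering step for a noisy on-line reading can be sketched in scalar form (the thesis filter is kinetic and multivariable; the random-walk model and numbers below are illustrative assumptions only):

```python
import numpy as np

def kalman_scalar(zs, x0=0.0, p0=1.0, q=1e-4, r=0.04):
    """Scalar Kalman filter smoothing a noisy on-line measurement,
    e.g. a gas-composition reading. Random-walk state model:
        x[k] = x[k-1] + w,   w ~ N(0, q)
        z[k] = x[k]   + v,   v ~ N(0, r)."""
    x, p = x0, p0
    out = []
    for z in zs:
        p += q                  # predict: state uncertainty grows
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update: blend prediction and measurement
        p *= (1 - k)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(7)
true_level = 0.30                               # constant true composition
zs = true_level + 0.2 * rng.normal(size=500)    # noisy sensor readings
xs = kalman_scalar(zs)
steady_error = abs(xs[-1] - true_level)
```

The ratio q/r sets the effective averaging window, which is the "tuning for reliable fault detection" the abstract refers to: too small and the filter lags real changes, too large and sensor noise leaks through.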
|
257 |
Enhanced positioning in harsh environments / Förbättrad positionering i svåra miljöer. Glans, Fredrik, January 2013 (has links)
Today’s heavy duty vehicles are equipped with safety and comfort systems, e.g. ABS and ESP, which totally or partly take over the vehicle in certain risk situations. As these systems become more and more autonomous, more robust positioning is needed. In the right conditions the GPS system provides precise and robust positioning. However, in harsh environments, e.g. dense urban areas and dense forests, the GPS signals may be affected by multipath, which means that the signals are reflected on their way from the satellites to the receiver. This can cause large positioning errors and thus have devastating effects for autonomous systems. This thesis evaluates different methods to enhance a low-cost GPS in harsh environments, with a focus on mitigating multipath. Four methods are compared: a regular unscented Kalman filter, probabilistic multipath mitigation, an unscented Kalman filter with vehicle sensor input, and probabilistic multipath mitigation with vehicle sensor input. The algorithms are tested and validated on real data from both dense forest areas and dense urban areas. The results show that the positioning is enhanced, in particular when integrating the vehicle sensors, compared to a low-cost GPS alone.
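The unscented Kalman filter mentioned above rests on the unscented transform, which propagates deterministically chosen sigma points through the nonlinearity instead of linearizing it. A one-dimensional sketch with standard textbook weights (not the thesis configuration):

```python
import numpy as np

def unscented_transform_1d(mu, var, f, kappa=2.0):
    """1-D unscented transform: propagate mean and variance through a
    nonlinearity f using sigma points, rather than linearizing f as an
    extended Kalman filter would."""
    n = 1
    spread = np.sqrt((n + kappa) * var)
    sigma = np.array([mu, mu + spread, mu - spread])     # sigma points
    w = np.array([kappa / (n + kappa),                   # weights
                  0.5 / (n + kappa),
                  0.5 / (n + kappa)])
    y = f(sigma)
    mean = np.sum(w * y)
    variance = np.sum(w * (y - mean) ** 2)
    return mean, variance

# For f(x) = x**2 with x ~ N(1, 0.5): the exact mean is mu^2 + var = 1.5
# and the exact variance is 2.5, while linearization around mu would
# predict only f(mu) = 1.0. With kappa = 2 the transform recovers both.
m, v = unscented_transform_1d(1.0, 0.5, lambda x: x ** 2)
```

For Gaussian priors the choice kappa = 3 - n matches the fourth moment, which is why this quadratic example comes out exact; in general the transform is accurate to second order without computing any Jacobians.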
|
258 |
Cooperative tracking for persistent littoral undersea surveillance. Scott, Robert Derek, 05 1900 (has links)
The US Navy has identified a need for an autonomous, persistent, forward deployed system to Detect, Classify, and Locate submarines. In this context, we investigate a novel method for multiple sensor platforms acting cooperatively to locate an uncooperative target. Conventional tracking methods based on techniques such as Kalman filtering or particle filters have been used with great success for tracking targets from a single manned platform; the application of these methods can be difficult for a cooperative tracking scenario with multiple unmanned platforms that have considerable navigation error. This motivates applying an alternative, set-based tracking algorithm, first proposed by Detweiler et al. for sensor network localization, to the cooperative tracking problem. The Detweiler algorithm is appealing for its conceptual simplicity and minimal assumptions about the target motion. The key idea of this approach is to compute the temporal evolution of potential target positions in terms of bounded regions that grow between measurements as the target moves and shrink when measurements do occur, based on an assumed worst-case bound for uncertainty. In this thesis, we adapt the Detweiler algorithm to the scenario of cooperative tracking for persistent undersea surveillance, and explore its limitations when applied to this domain. The algorithm has been fully implemented and tested both in simulation and with postprocessing of autonomous surface craft (ASC) data from the PLUSNet Monterey Bay 2006 experiment. The results indicate that the method provides disappointing performance when applied to this domain, especially in situations where communication links between the autonomous tracking platforms are poor. We conclude that the method is more appropriate for a large-N tracking scenario, with a large number of small, expendable tracking nodes, instead of our intended scenario with a smaller number of more sophisticated mobile trackers.
/ CIVINS / US Navy (USN) author.
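The grow-and-shrink idea of the set-based algorithm can be sketched in one dimension: the feasible interval expands by the worst-case target speed between measurements and contracts by intersection with each bounded-error measurement (illustrative bounds; the actual algorithm works with 2-D regions shared across platforms):

```python
def set_based_track(measurements, v_max=1.0, meas_err=2.0):
    """1-D sketch of set-based tracking: maintain a bounded interval of
    feasible target positions. Between measurements the interval grows
    by the worst-case speed; each measurement shrinks it by intersection
    with the measurement's bounded-error interval."""
    lo, hi = float("-inf"), float("inf")    # initially: target anywhere
    history = []
    for dt, z in measurements:              # (time since last fix, reading)
        lo -= v_max * dt                    # grow: target may have moved
        hi += v_max * dt
        lo = max(lo, z - meas_err)          # shrink: intersect with
        hi = min(hi, z + meas_err)          # bounded-error measurement
        history.append((lo, hi))
    return history

# target drifting slowly; each measurement bounds it within +/- 2 units
meas = [(1.0, 10.0), (1.0, 10.5), (1.0, 11.0), (1.0, 11.5)]
regions = set_based_track(meas)
width = regions[-1][1] - regions[-1][0]
```

The sketch also shows why poor communication hurts this method: if dt between usable measurements grows, the region inflates by v_max * dt and the worst-case bound quickly becomes uninformative.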
|
259 |
Corrigé de localisation dans un environnement extérieur sans fil en utilisant estimation, filtrage, la prévision et des techniques de fusion : une application par wifi utilisant le terrain à base de connaissances / Error corrected location determination in an outdoor wireless environment by using estimation, filtering, prediction and fusion techniques: A wifi application using terrain based knowledge. Alam, Muhammad Mansoor, 04 November 2011 (has links)
Location estimation of wireless nodes has been a very popular research area for the past few years. The research in location estimation is not limited to satellite communication, but also covers WLAN, MANET, WSN and cellular communication. Because of the growth and advancement of cellular communication architectures, the usage of handheld devices has increased rapidly, and calls originated by mobile users have increased with it. It is estimated that more than 50% of emergency calls are originated from mobile phones. Researchers have used different location estimation techniques, such as satellite-based, geometrical, statistical and mapping techniques.
In order to achieve better accuracy, researchers have combined two or more techniques. However, terrain-based location estimation is an area that has not been considered extensively. Because radio waves behave differently in different atmospheres, calculating a few parameters is not sufficient to achieve accuracy across different terrains, especially when the calculation is based entirely on RSS, which carries impairments. This research focuses on the localization of wireless nodes using geometrical and statistical techniques while taking into account the impairment/attenuation of terrains. The proposed model consists of four stages: estimation, filtering, prediction and fusion. A prototype has been built using the WiFi IEEE 802.11x standard. In step one, using the signal-to-noise ratio, the peninsular Malaysia geographical area is categorized into 13 different terrains/clutters. In step two, point-to-point data points are recorded using available signal strength and received signal strength, taking the different terrains into consideration. Estimation of the location is done in step three using the triangulation method. The estimated locations are further filtered in step four using the average and the mean of means; for error correction, filtering of the location is also done using the k-nearest neighbour rule. Prediction is done in step five using a combined variance which predicts the region of interest; the region of interest helps to eliminate locations outside of the selected area. In step six, the filtering results are fused with the prediction in order to achieve better accuracy. Results show that the current research is capable of reducing errors from 18 m to 6 m in highly attenuated terrains and from 3.5 m to 0.5 m in low attenuated terrains.
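The triangulation step in such a pipeline is often implemented as least-squares trilateration from RSS-derived distances; a sketch under idealized noise-free distances (the anchor positions are made up):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares trilateration: subtracting the first anchor's circle
    equation from the others linearizes the problem, a common way to
    implement the triangulation step from distance estimates."""
    x0, y0 = anchors[0]
    d0 = dists[0]
    a_rows, b_rows = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        # from (x-xi)^2 + (y-yi)^2 = di^2 minus the anchor-0 equation:
        a_rows.append([2 * (xi - x0), 2 * (yi - y0)])
        b_rows.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(a_rows), np.array(b_rows), rcond=None)
    return sol

# three hypothetical WiFi access points and a node at (3, 4)
aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true = np.array([3.0, 4.0])
d = [np.linalg.norm(true - np.array(a)) for a in aps]
est = trilaterate(aps, d)
```

In practice the distances come from an RSS path-loss model and carry terrain-dependent errors, which is exactly why the subsequent filtering, k-nearest-neighbour correction and region-of-interest steps are needed.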
|
260 |
Inference and parameter estimation for diffusion processes. Lyons, Simon, January 2015 (has links)
Diffusion processes provide a natural way of modelling a variety of physical and economic phenomena. It is often the case that one is unable to observe a diffusion process directly, and must instead rely on noisy observations that are discretely spaced in time. Given these discrete, noisy observations, one is faced with the task of inferring properties of the underlying diffusion process. For example, one might be interested in inferring the current state of the process given observations up to the present time (this is known as the filtering problem). Alternatively, one might wish to infer parameters governing the time evolution of the diffusion process. In general, one cannot apply Bayes’ theorem directly, since the transition density of a general nonlinear diffusion is not computationally tractable. In this thesis, we investigate a novel method of simplifying the problem. The stochastic differential equation that describes the diffusion process is replaced with a simpler ordinary differential equation, which has a random driving noise that approximates Brownian motion. We show how one can exploit this approximation to improve on standard methods for inferring properties of nonlinear diffusion processes.
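When the transition density is intractable, one typically falls back on discretizing the SDE; an Euler-Maruyama sketch (a standard scheme, not necessarily the method developed in the thesis), sanity-checked against the known stationary variance of an Ornstein-Uhlenbeck process:

```python
import numpy as np

rng = np.random.default_rng(3)

def euler_maruyama(drift, diffusion, x0, t_end, n_steps, n_paths=10_000):
    """Simulate dX = drift(X) dt + diffusion(X) dW with the Euler-Maruyama
    scheme, the workhorse discretization when, as for most nonlinear
    diffusions, the transition density is not available in closed form."""
    dt = t_end / n_steps
    x = np.full(n_paths, x0, dtype=float)        # many sample paths at once
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), x.shape)   # Brownian increments
        x += drift(x) * dt + diffusion(x) * dw
    return x

# Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW: its stationary
# variance sigma^2 / (2*theta) is known, so it makes a handy sanity check.
theta, sigma = 1.0, 0.5
xs = euler_maruyama(lambda x: -theta * x,
                    lambda x: sigma * np.ones_like(x),
                    x0=0.0, t_end=10.0, n_steps=1000)
stationary_var = xs.var()        # should be near 0.5**2 / 2 = 0.125
```

Schemes like this underpin simulation-based inference for diffusions: the discretization stands in for the intractable transition density inside filtering or parameter-estimation algorithms, at the cost of a bias controlled by the step size.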
|