21 |
Tracking of Ground Vehicles: Evaluation of Tracking Performance Using Different Sensors and Filtering Techniques. Homelius, Marcus, January 2018 (has links)
It is crucial to find a good balance between positioning accuracy and cost when developing navigation systems for ground vehicles. In open sky or even in a semi-urban environment, a single global navigation satellite system (GNSS) constellation performs sufficiently well. However, the positioning accuracy decreases drastically in urban environments. Because of the limitation in tracking performance for standalone GNSS, particularly in cities, many solutions are now moving toward integrated systems that combine complementary sensors. In this master thesis the improvement in tracking performance for a low-cost ground vehicle navigation system is evaluated when complementary sensors are added and different filtering techniques are used. The thesis explains how a GNSS-aided inertial navigation system (INS) is used to track ground vehicles; this has proven to be a very effective way of tracking a vehicle through GNSS outages. Measurements from an accelerometer and a gyroscope are used as inputs to the inertial navigation equations. GNSS measurements are then used to correct the tracking solution and to estimate the biases in the inertial sensors. When velocity constraints on the vehicle's motion along the y- and z-axes are included, the GNSS-aided INS shows very good performance, even during long GNSS outages. Two versions of the Rauch-Tung-Striebel (RTS) smoother and a particle filter (PF) version of the GNSS-aided INS have also been implemented and evaluated. The PF has proven to be computationally demanding in comparison with the other approaches, and a real-time implementation on the considered embedded system is not feasible. The RTS smoother gives a smoother trajectory, but a large amount of extra information needs to be stored and the position accuracy is not significantly improved. Moreover, map matching has been combined with GNSS measurements and estimates from the GNSS-aided INS. The Viterbi algorithm is used to output the road segment identification numbers of the most likely path, and the estimates are then matched to the closest positions on these roads. A suggested solution to acquire reliable tracking with high accuracy in all environments is to run the GNSS-aided INS in real time in the vehicle and simultaneously send the horizontal position coordinates to a back office where map information is kept and map matching is performed.
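The Rauch-Tung-Striebel smoother mentioned above runs a backward pass over stored filter outputs. A minimal sketch for a generic linear-Gaussian state-space model is given below; it illustrates only the smoothing principle, not the thesis's GNSS-aided INS implementation, and the interface and array layout are assumptions:

    import numpy as np

    def rts_smoother(xf, Pf, xp, Pp, F):
        # xf, Pf : filtered means (N x n) and covariances (N x n x n)
        # xp, Pp : one-step predictions, where xp[k], Pp[k] predict step k+1 from step k
        # F      : state transition matrix of the assumed linear model
        N = xf.shape[0]
        xs, Ps = xf.copy(), Pf.copy()
        for k in range(N - 2, -1, -1):
            G = Pf[k] @ F.T @ np.linalg.inv(Pp[k])           # smoother gain
            xs[k] = xf[k] + G @ (xs[k + 1] - xp[k])          # corrected state
            Ps[k] = Pf[k] + G @ (Ps[k + 1] - Pp[k]) @ G.T    # corrected covariance
        return xs, Ps

In a nonlinear INS setting, F would be replaced at each step by the transition Jacobian of the error-state model.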
|
22 |
Assimilation rétrospective de données par lissage de rang réduit : application et évaluation dans l'Atlantique Tropical / Retrospective data assimilation with a reduced-rank smoother : application and evaluation in the tropical Atlantic. Freychet, Nicolas, 11 January 2012 (has links)
The Kalman filter is widely used in data assimilation for operational oceanography, in particular for forecasting problems. Yet, now that data assimilation applications are diversifying, with reanalysis problems for instance, the three-dimensional (3D) formulation of the filter does not make optimal use of the observations. The four-dimensional extension of the 3D methods, called smoothers, makes better use of the observations by assimilating them retrospectively. In this work we study the implementation and the effects of a reduced-rank smoother on reanalyses, using a realistic tropical Atlantic ocean circulation model. We first describe some sensitive but necessary steps of the smoother implementation, most notably the parametrisation of the error statistics and their temporal evolution. The benefits of smoothing for reanalyses are then assessed by comparing the quality of the smoothed solution with the filtered one. The results illustrate the advantages of 4D assimilation: the global error is reduced by about 15% on assimilated variables (such as temperature), and the smoother provides a solution dynamically more consistent with the reference, as illustrated by the rephasing of sensitive structures such as Brazil rings. Finally, a configuration that departs from the theory but is easier to implement (and more commonly used in operational centres), based on optimal interpolation instead of the filter, is used to study the benefits and limits of smoothing in such a setting. The temporal evolution of the errors proves necessary to remain consistent with the true error structures; nevertheless, the smoother still gives encouraging results with optimal interpolation, reducing the global error level by 10 to 15%.
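As a rough illustration of the retrospective use of observations described above, the sketch below applies one smoother correction to a past analysis using a reduced-rank (square-root) representation of the error covariances. It assumes the past and present error modes share the same reduced coefficient space, as in ensemble- or SEEK-type smoothers; this is an illustrative simplification, not the exact algorithm evaluated in the thesis, and all names are assumptions:

    import numpy as np

    def retrospective_update(x_past, S_past, S_now, H, R, y, Hx_now):
        # x_past : past analysis mean (n,)
        # S_past : reduced-rank square root of the past error covariance (n x r)
        # S_now  : reduced-rank square root of the current forecast covariance (n x r)
        # H, R   : observation operator (m x n) and observation error covariance (m x m)
        # y      : new observation (m,); Hx_now : current forecast in observation space (m,)
        HS = H @ S_now                      # error modes as seen by the observations
        innov_cov = HS @ HS.T + R
        # cross-covariance between the past state and the current innovation,
        # obtained through the shared reduced-rank modes
        K_retro = S_past @ HS.T @ np.linalg.inv(innov_cov)
        return x_past + K_retro @ (y - Hx_now)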
|
23 |
Efficient "black-box" multigrid solvers for convection-dominated problemsRees, Glyn Owen January 2011 (has links)
The main objective of this project is to develop a "black-box" multigrid preconditioner for the iterative solution of finite element discretisations of the convection-diffusion equation with dominant convection. This equation can be considered as a stand-alone scalar problem or as part of a more complex system of partial differential equations, such as the Navier-Stokes equations. The project focuses on the stand-alone scalar problem. Multigrid is considered an optimal preconditioner for scalar elliptic problems. This strategy can also be used for convection-diffusion problems; however, an appropriate robust smoother needs to be developed to achieve mesh-independent convergence. The focus of the thesis is on the development of such a smoother. In this context a novel smoother is developed, referred to as the truncated incomplete factorisation (tILU) smoother. In terms of computational complexity and memory requirements, this smoother is considerably less expensive than the standard ILU(0) smoother. At the same time, it exhibits the same robustness as ILU(0) with respect to the problem and discretisation parameters. The new smoother significantly outperforms the standard damped Jacobi smoother and is a competitor to the Gauss-Seidel smoother (and in a number of important cases tILU outperforms the Gauss-Seidel smoother). The new smoother depends on a single parameter (the truncation ratio). The project obtains a default value for this parameter and demonstrates the robust performance of the smoother on a broad range of problems. Therefore, the new smoothing method can be regarded as "black-box". Furthermore, the new smoother does not require any particular ordering of the nodes, which is a prerequisite for many robust smoothers developed for convection-dominated convection-diffusion problems. To test the effectiveness of the preconditioning methodology, we consider a number of model problems (in both 2D and 3D) including uniform and complex (recirculating) convection fields discretised on uniform, stretched and adaptively refined grids. The new multigrid preconditioner was also tested within block preconditioning of the Navier-Stokes equations. The numerical results gained during the investigation confirm that tILU is a scalable, robust smoother for both geometric and algebraic multigrid, and comprehensive tests show that it is a competitive method.
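One plausible reading of the truncation idea is sketched below: off-diagonal entries that are small relative to the largest off-diagonal entry in their row are dropped before an incomplete factorisation, and the resulting factors drive a few Richardson sweeps. The specific truncation rule, the use of SciPy's ILUT-style spilu as a stand-in for ILU(0), and the default ratio are assumptions made for illustration, not the thesis's exact construction:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spilu

    def make_tilu_smoother(A, truncation_ratio=0.5):
        # Drop small off-diagonal entries of A, factorise the truncated matrix,
        # and return a smoothing step x <- x + M^{-1} (b - A x).
        A = sp.csr_matrix(A, dtype=float)
        n = A.shape[0]
        rows, cols, vals = [], [], []
        for i in range(n):
            row = A.getrow(i).toarray().ravel()
            off = np.abs(row).copy()
            off[i] = 0.0
            thresh = truncation_ratio * off.max()
            for j in np.nonzero(row)[0]:
                if j == i or abs(row[j]) >= thresh:   # always keep the diagonal
                    rows.append(i)
                    cols.append(j)
                    vals.append(row[j])
        A_trunc = sp.csc_matrix((vals, (rows, cols)), shape=(n, n))
        ilu = spilu(A_trunc, drop_tol=0.0, fill_factor=1.0)   # ILU(0)-like factorisation

        def smooth(x, b, sweeps=2):
            for _ in range(sweeps):
                x = x + ilu.solve(b - A @ x)
            return x

        return smooth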
|
24 |
Using Primary Dynamic Factor Analysis on repeated cross-sectional surveys with binary responses / Primär Dynamisk Faktoranalys för upprepade tvärsnittsundersökningar med binära svar. Edenheim, Arvid, January 2020 (has links)
With the growing popularity of business analytics, companies experience an increasing need for reliable data. Although the availability of behavioural data showing what consumers do has increased, access to data showing consumer mentality, what consumers actually think, remains heavily dependent on tracking surveys. This thesis investigates the performance of a Dynamic Factor Model using respondent-level data gathered through repeated cross-sectional surveys. Through Monte Carlo simulations, the model was shown to improve the accuracy of brand tracking estimates by double-digit percentages, or equivalently to reduce the required amount of data by more than a factor of 2 while maintaining the same level of accuracy. Furthermore, the study showed clear indications that even greater performance benefits are possible.
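As a toy illustration of pooling information across survey waves through a latent dynamic factor, the sketch below runs a local-level Kalman filter on wave-level proportions with binomial sampling variance. The thesis's model works on respondent-level binary data and is considerably richer; the model structure and the parameter value below are assumptions made for the example:

    import numpy as np

    def track_latent_proportion(successes, sample_sizes, q=1e-4):
        # successes[t], sample_sizes[t]: positive answers and respondents in wave t
        # q: assumed random-walk variance of the latent proportion between waves
        p = successes[0] / sample_sizes[0]
        P = p * (1 - p) / sample_sizes[0]
        estimates = [p]
        for s, n in zip(successes[1:], sample_sizes[1:]):
            P = P + q                                  # time update (random walk)
            y = s / n
            r = (y * (1 - y) + 1e-6) / n               # binomial sampling variance
            K = P / (P + r)                            # Kalman gain
            p = p + K * (y - p)                        # measurement update
            P = (1 - K) * P
            estimates.append(p)
        return np.array(estimates)

Smoothing the filtered sequence backwards in time would, in the same spirit, let later waves refine earlier estimates.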
|
25 |
On statistical approaches to climate change analysis. Lee, Terry Chun Kit, 21 April 2008 (has links)
Evidence for a human contribution to climatic changes during the past century is accumulating rapidly. Given the strength of the evidence, it seems natural to ask whether forcing projections can be used to forecast climate change. A Bayesian method for post-processing forced climate model simulations that produces probabilistic hindcasts of inter-decadal temperature changes on large spatial scales is proposed. Hindcasts produced for the last two decades of the 20th century are shown to be skillful. The suggestion that skillful decadal forecasts can be produced on large regional scales by exploiting the response to anthropogenic forcing provides additional evidence that anthropogenic change in the composition of the atmosphere has influenced our climate. In the absence of large negative volcanic forcing on the climate system (which cannot presently be forecast), the global mean temperature for the decade 2000-2009 is predicted to lie above the 1970-1999 normal with probability 0.94. The global mean temperature anomaly for this decade relative to 1970-1999 is predicted to be 0.35°C (5-95% confidence range: 0.21°C to 0.48°C).

Reconstruction of temperature variability of the past centuries using climate proxy data can also provide important information on the role of anthropogenic forcing in the observed 20th century warming. A state-space model approach that allows incorporation of additional non-temperature information, such as the estimated response to external forcing, to reconstruct historical temperature is proposed. An advantage of this approach is that it permits simultaneous reconstruction and detection analysis as well as future projection. A difficulty in using this approach is that estimation of several unknown state-space model parameters is required. To take advantage of the data structure in the reconstruction problem, the existing parameter estimation approach is modified, resulting in two new estimation approaches. The competing estimation approaches are compared based on theoretical grounds and through simulation studies. The two new estimation approaches generally perform better than the existing approach.

A number of studies have attempted to reconstruct hemispheric mean temperature for the past millennium from proxy climate indicators. Different statistical methods are used in these studies and it therefore seems natural to ask which method is more reliable. An empirical comparison between the different reconstruction methods is considered using both climate model data and real-world paleoclimate proxy data. The proposed state-space model approach and the RegEM method generally perform better than their competitors when reconstructing interannual variations in Northern Hemispheric mean surface air temperature. On the other hand, a variety of methods are seen to perform well when reconstructing decadal temperature variability. The similarity in performance provides evidence that the difference between many real-world reconstructions is more likely to be due to the choice of the proxy series, or the use of different target seasons or latitudes, than to the choice of statistical method.
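To illustrate how a forced-response estimate can be folded into a state-space reconstruction, the following scalar sketch filters proxy observations with a latent temperature driven by an AR(1) term plus the forcing response. The specific model form and all parameter values are illustrative assumptions; in the thesis those parameters are unknown and their estimation is the central difficulty:

    import numpy as np

    def reconstruct_temperature(proxies, forced_response, a=0.7, b=1.0, q=0.05, r=0.5):
        # proxies[t]        : proxy-derived temperature observation (NaN if missing)
        # forced_response[t]: estimated response to external forcing (exogenous input)
        # a, b, q, r        : assumed AR coefficient, forcing scaling, process and
        #                     observation noise variances
        T, P = 0.0, 1.0
        out = np.empty(len(proxies))
        for t, (y, f) in enumerate(zip(proxies, forced_response)):
            T = a * T + b * f          # time update: AR(1) dynamics plus forcing
            P = a * a * P + q
            if not np.isnan(y):        # measurement update, skipped for missing proxies
                K = P / (P + r)
                T = T + K * (y - T)
                P = (1 - K) * P
            out[t] = T
        return out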
|
26 |
An Autonomous Small Satellite Navigation System for Earth, Cislunar Space, and Beyond. Omar Fathi Awad (15352846), 27 April 2023 (has links)
The Global Navigation Satellite System (GNSS) is heavily relied on for the navigation of Earth satellites. For satellites in cislunar space and beyond, GNSS is not readily available. As a result, other sources such as NASA's Deep Space Network (DSN) must be relied on for navigation. However, DSN is overburdened and can only support a small number of satellites at a time. Furthermore, communication with external sources can become interrupted or deprived in these environments. Given NASA's current efforts towards cislunar space operations and the expected increase in cislunar satellite traffic, there will be a need for more autonomous navigation options in cislunar space and beyond.

In this thesis, a navigation system capable of accurate and computationally efficient orbit determination in these communication-deprived environments is proposed and investigated. The emphasis on computational efficiency is in support of cubesats, which are constrained in size, cost, and mass; this makes navigation even more challenging when resources such as GNSS signals or ground station tracking become unavailable.

The proposed navigation system, called GRAVNAV in this thesis, involves a two-satellite formation orbiting a planet. The primary satellite hosts an Extended Kalman Filter (EKF) and is capable of measuring the relative position of the secondary satellite; accurate attitude estimates are also available to the primary satellite. The relative position measurements allow the EKF to estimate the absolute position and velocity of both satellites. In this thesis, the proposed navigation system is investigated in the two-body and three-body problems.

The two-body analysis illuminates the effect of gravity model error on orbit determination performance. High-fidelity gravity models can be computationally expensive for cubesats; however, celestial bodies such as the Earth and Moon have non-uniform and highly irregular gravity fields that require complex models to describe the motion of satellites orbiting in their gravity field. Initial results show that when a second-order zonal harmonic gravity model is used, the orbit determination accuracy is poor at low altitudes due to large gravity model errors, while high-altitude orbits yield good accuracy due to small gravity model errors. To remedy the poor performance for low-altitude orbits, a Gravity Model Error Compensation (GMEC) technique is proposed and investigated. Along with a special tuning model developed specifically for GRAVNAV, this technique is demonstrated to work well for various geocentric and lunar orbits.

In addition to the gravity model error, other variables affecting the state estimation accuracy are also explored in the two-body analysis. These variables include the six Keplerian orbital elements, measurement accuracy, intersatellite range, and satellite formation shape. The GRAVNAV analysis shows that a smaller intersatellite range results in increased state estimation error. Despite the intersatellite range bounds, semimajor axis, measurement model, and measurement errors being identical for both orbits, the satellite formation shape also has a strong influence on orbit determination accuracy. Formations that place the two satellites in different orbits significantly outperform those that place both satellites in the same orbit.

The three-body analysis primarily focuses on characterizing the unique behavior of GRAVNAV in Near Rectilinear Halo Orbits (NRHOs). As in the two-body analysis, the effect of the satellite formation shape is characterized and shown to have a similar impact on orbit determination performance. Unlike the two-body problem, however, different orbits possess different stability properties, which are shown to significantly affect orbit determination performance. The more stable NRHOs yield better GRAVNAV performance and are also less sensitive to factors that negatively impact performance, such as measurement error, process noise, and decreased intersatellite range.

Overall, the analyses in this thesis show that GRAVNAV yields accurate and computationally efficient orbit determination when GMEC is used. This, along with the independence of GRAVNAV from GNSS signals and ground-station tracking, shows that GRAVNAV has good potential for navigation in cislunar space and beyond.
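A minimal sketch of the kind of measurement update the primary satellite's EKF performs is given below, with the intersatellite observation modelled simply as the difference of the two position vectors in the inertial frame. The state ordering and the omission of the body-frame rotation (which the thesis's attitude estimates would supply) are simplifying assumptions:

    import numpy as np

    def relative_position_update(x, P, z, R):
        # x : state [r1, v1, r2, v2] (12,), positions/velocities of both satellites
        # P : state covariance (12 x 12); z : measured relative position (3,)
        # R : measurement noise covariance (3 x 3)
        H = np.zeros((3, 12))
        H[:, 0:3] = -np.eye(3)                 # sensitivity to the primary's position
        H[:, 6:9] = np.eye(3)                  # sensitivity to the secondary's position
        z_pred = x[6:9] - x[0:3]               # predicted relative position
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x_new = x + K @ (z - z_pred)
        P_new = (np.eye(12) - K @ H) @ P
        return x_new, P_new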
|
27 |
Désagrégation spatiale de températures Météosat par une méthode d'assimilation de données (lisseur particulaire) dans un modèle de surface continentale / Spatial downscaling of Meteosat temperatures based on a data assimilation approach (Particle Smoother) to constrain a land surface model. Mechri, Rihab, 04 December 2014 (has links)
Land surface temperature (LST) is one of the most important meteorological variables, giving access to the water and energy budgets governing the Biosphere-Atmosphere continuum. Because of its high variability in space and time, monitoring soil moisture and vegetation states requires LST measurements at high spatial and temporal resolution. Despite the growing availability of thermal infrared (TIR) remote-sensing LST products at different spatial and temporal resolutions, TIR data at both high spatial resolution (HSR) and high temporal resolution (HTR) are still not available because of the satellite resolution trade-off: the most frequent LST products are low-spatial-resolution (LSR) ones. It is therefore necessary to develop methods to estimate HSR/HTR LST from the available LSR/HTR TIR data. This is known as downscaling, and the present thesis proposes a new approach for downscaling LST based on data assimilation (DA) methods. The basic idea is to constrain the HSR/HTR LST dynamics simulated by a dynamical model by minimizing the discrepancy between their aggregated values and the LSR observations, under the assumption that LST is homogeneous per land cover type inside each LSR pixel. The method uses a particle smoother DA scheme implemented in the SETHYS land surface model (Suivi de l'Etat Hydrique du Sol). The proposed approach was first evaluated in a synthetic framework and then validated using actual TIR LST over a small area in the south-east of France. Meteosat LST time series at 5 km spatial resolution were downscaled to 90 m and validated against ASTER HSR LST over one day. These encouraging results led us to expand the study area and consider a longer assimilation period of seven months. The downscaled Meteosat LSTs were validated quantitatively at 1 km spatial resolution with MODIS data and qualitatively at 30 m with Landsat7 data. The results demonstrate good performance, with errors of less than 2.5 K on the temperatures downscaled to 1 km.
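The core constraint, that the aggregated high-resolution temperatures should match the coarse observation, can be illustrated by the particle-weighting step below, where each particle carries one candidate LST per land cover class inside the coarse pixel. The Gaussian misfit model and the variable names are assumptions, and the resampling and backward smoothing passes of the particle smoother are omitted:

    import numpy as np

    def weight_particles(particles, cover_fractions, lst_lsr_obs, sigma_obs=1.0):
        # particles       : (n_particles, n_cover_types) candidate HSR temperatures
        # cover_fractions : (n_cover_types,) area fraction of each class in the LSR pixel
        # lst_lsr_obs     : observed low-resolution temperature for that pixel
        aggregated = particles @ cover_fractions          # aggregation to the LSR pixel
        misfit = aggregated - lst_lsr_obs
        logw = -0.5 * (misfit / sigma_obs) ** 2           # Gaussian observation misfit
        w = np.exp(logw - logw.max())                     # normalised importance weights
        return w / w.sum()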
|
28 |
Expectation-Maximization (EM) Algorithm Based Kalman Smoother For ERD/ERS Brain-Computer Interface (BCI). Khan, Md. Emtiyaz, 06 1900 (PDF)
No description available.
|
29 |
Zkoumání konektivity mozkových sítí pomocí hemodynamického modelování / Exploring Brain Network Connectivity through Hemodynamic Modeling. Havlíček, Martin, January 2012 (has links)
Functional magnetic resonance imaging (fMRI), which uses the blood-oxygen-level-dependent effect as an indicator of local activity, is a very useful technique for identifying brain regions that are active during perception, cognition, and action, but also during the resting state. Recently there has also been growing interest in studying the connectivity between these regions, particularly in the resting state. This thesis presents a new and original approach to the problem of the indirect relationship between the measured hemodynamic response and its cause, i.e. the neuronal signal. This indirect relationship complicates the estimation of effective connectivity (causal influence) between different brain regions from fMRI data. The novelty of the presented approach lies in the use of a (generalized nonlinear) blind deconvolution technique, which allows the estimation of endogenous neuronal signals (i.e. the system inputs) from the measured hemodynamic responses (i.e. the system outputs). This means the method enables data-driven assessment of effective connectivity at the neuronal level even when only noisy hemodynamic responses are measured. The solution of this difficult deconvolution (inverse) problem is achieved using nonlinear recursive Bayesian estimation, which provides a joint estimate of the unknown model states and parameters. The thesis is divided into three main parts. The first part proposes a method for solving the problem described above. The method builds on square-root forms of the nonlinear cubature Kalman filter and the cubature Rauch-Tung-Striebel smoother, extended to solve the so-called joint estimation problem, defined as the simultaneous estimation of states and parameters in a sequential manner. The method is designed primarily for continuous-discrete systems and achieves an accurate and stable model discretization by combining the nonlinear (cubature) filter with a local linearization scheme. This inversion method is further complemented by adaptive estimation of the measurement noise and process noise statistics (i.e. the noise on the unknown states and parameters). The first part of the thesis focuses on model inversion for a single time course, i.e. on estimating neuronal activity from the fMRI signal. The second part generalizes the proposed approach and applies it to multiple time courses in order to enable estimation of the coupling parameters of a neuronal interaction model, i.e. estimation of effective connectivity. This method represents an innovative stochastic treatment of dynamic causal modelling, which distinguishes it from previously introduced approaches. The second part also deals with Bayesian model selection methods and proposes a technique for detecting irrelevant coupling parameters in order to achieve improved parameter estimation. Finally, the third part is devoted to validating the proposed approach using both simulated and empirical fMRI data, and provides substantial evidence of its very satisfactory performance.
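For orientation, the sketch below shows the spherical-radial cubature rule that underlies a cubature Kalman filter time update: 2n equally weighted points are drawn from the current Gaussian, propagated through the dynamics, and the predicted moments are recovered from the propagated points. This is only the generic textbook step; the thesis's square-root, continuous-discrete formulation with joint state and parameter estimation goes well beyond it:

    import numpy as np

    def cubature_points(x, P):
        # 2n points of the third-degree spherical-radial cubature rule
        n = len(x)
        S = np.linalg.cholesky(P)
        directions = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # (n, 2n)
        return x[:, None] + S @ directions

    def cubature_predict(x, P, f, Q):
        # propagate the cubature points through the dynamics f and re-estimate moments
        pts = cubature_points(x, P)
        prop = np.column_stack([f(pts[:, i]) for i in range(pts.shape[1])])
        x_pred = prop.mean(axis=1)
        diff = prop - x_pred[:, None]
        P_pred = diff @ diff.T / prop.shape[1] + Q
        return x_pred, P_pred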
|
30 |
Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms. Vestin, Albin, Strandberg, Gustav, January 2019 (has links)
Today, a main research focus for the automotive industry is finding solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
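Since the abstract stresses that correct measurement-to-track association drives multi-object performance, the sketch below shows one standard way to do it: a global-nearest-neighbour assignment with a simple Euclidean cost and gating. This is a generic illustration under assumed names and thresholds, not the association logic actually used in the thesis:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def associate(predicted_tracks, measurements, gate=5.0):
        # predicted_tracks : (n_tracks, 2) predicted object positions
        # measurements     : (n_meas, 2) detected positions from camera/LIDAR processing
        cost = np.linalg.norm(predicted_tracks[:, None, :] - measurements[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)          # optimal one-to-one assignment
        pairs = [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]   # gating
        unmatched = sorted(set(range(len(measurements))) - {c for _, c in pairs})
        return pairs, unmatched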
|