1 |
Recommendation based trust model with an effective defence scheme for MANETs. Shabut, Antesar R.M., Dahal, Keshav P., Bista, Sanat K., Awan, Irfan U. January 2015
The reliability of delivering packets through multi-hop intermediate nodes is a significant issue in mobile ad hoc networks (MANETs). The distributed mobile nodes establish connections to form the MANET, which may include selfish and misbehaving nodes. Recommendation-based trust management has been proposed in the literature as a mechanism to filter out misbehaving nodes while searching for a packet delivery route. However, a trust model that relies on recommendations from other nodes in the network is vulnerable to dishonest behaviour of the recommending nodes, such as bad-mouthing, ballot-stuffing, and collusion. This paper investigates the attacks posed by misbehaving nodes while propagating recommendations in existing trust models. We propose a recommendation-based trust model with a defence scheme that uses a clustering technique to dynamically filter out dishonest recommendations within a given time window, based on the number of interactions, the compatibility of information, and node closeness. The model is empirically tested in several mobile and disconnected topologies in which nodes experience changes in their neighbourhoods and consequently face frequent route changes. The empirical analysis demonstrates the robustness and accuracy of the trust model in a dynamic MANET environment.
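A minimal sketch of the kind of dishonest-recommendation filtering described above, assuming trust values in [0, 1]; the confidence weighting and deviation threshold are illustrative choices, not the authors' actual clustering model:

```python
import numpy as np

def filter_recommendations(trust_values, interactions, closeness, dev_thresh=0.25):
    """Aggregate recommended trust, discarding outlying (dishonest) reports.

    trust_values : recommended trust in [0, 1] from each neighbour
    interactions : number of past interactions with each recommender
    closeness    : proximity weight in (0, 1], e.g. derived from hop count
    """
    t = np.asarray(trust_values, dtype=float)
    inter = np.asarray(interactions, dtype=float)
    close = np.asarray(closeness, dtype=float)
    conf = inter / (1.0 + inter) * close           # recommender confidence
    centre = np.average(t, weights=conf)           # confidence-weighted centre
    honest = np.abs(t - centre) <= dev_thresh      # simple deviation test
    if not honest.any():                           # degenerate case: keep all
        return float(t.mean())
    return float(np.average(t[honest], weights=conf[honest]))

# Two colluding bad-mouthers report 0.1 about a well-behaved node:
print(filter_recommendations([0.8, 0.75, 0.85, 0.1, 0.1],
                             [20, 15, 30, 2, 3],
                             [1.0, 0.5, 1.0, 0.5, 0.5]))   # ~0.81
```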
|
2 |
Constrained clustering by constraint programming / Classification non supervisée sous contrainte utilisateurs par la programmation par contraintes. Duong, Khanh-Chuong. 10 December 2014
Cluster analysis is an important task in Data Mining, with hundreds of different approaches in the literature. Over the last decade, cluster analysis has been extended to constrained clustering, also called semi-supervised clustering, so as to integrate prior knowledge on the data into clustering algorithms. Different types of user constraints can be considered, bearing either on the clusters or on the instances. In this dissertation, we explore Constraint Programming (CP) for solving constrained clustering tasks. The main principles in CP are: (1) users specify the problem declaratively as a Constraint Satisfaction Problem; (2) solvers search for solutions by constraint propagation and search. Relying on CP has two main advantages: declarativity, which makes it easy to add new constraints, and the ability to find an optimal solution satisfying all the constraints (when one exists). We propose two CP-based models to address constrained clustering tasks. The models are general and flexible: they support must-link and cannot-link instance-level constraints and different cluster-level constraints, and they allow the user to choose among different optimization criteria. In order to improve efficiency, various aspects have been studied in the dissertation. Experiments on various classical datasets show that our models are competitive with other exact approaches. We also show that our models can easily be embedded in a more general process, and we illustrate this on the problem of finding the Pareto front of a bi-criterion constrained clustering problem.
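As an illustration of the declarativity argument, here is a minimal feasibility sketch of instance-level constrained clustering with Google OR-Tools CP-SAT; this is a different solver and a much simpler model than those proposed in the dissertation (in particular, it omits the optimization criteria):

```python
from ortools.sat.python import cp_model

def constrained_clustering(n, k, must_link, cannot_link):
    """Assign n points to k clusters subject to must-link/cannot-link pairs."""
    model = cp_model.CpModel()
    x = [model.NewIntVar(0, k - 1, f'x{i}') for i in range(n)]
    for i, j in must_link:
        model.Add(x[i] == x[j])      # user constraints are added declaratively
    for i, j in cannot_link:
        model.Add(x[i] != x[j])
    model.Add(x[0] == 0)             # break cluster-label symmetry
    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        return [solver.Value(v) for v in x]
    return None                      # constraints are unsatisfiable

print(constrained_clustering(5, 2, must_link=[(0, 1), (2, 3)],
                             cannot_link=[(1, 2), (3, 4)]))
```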
|
3 |
Dynamic State Estimation Techniques For Identification Of Parameters Of Finite Element Structural Models. Ahmed, Nasrellah Hassan. 04 1900
The thesis outlines the development and application of a few novel dynamic state estimation based methods for estimating the parameters of vibrating engineering structures. The study investigates strategies for fusing data from multiple tests, possibly of different types and with different sensor quantities, through the introduction of a common pseudo-time parameter. These strategies are developed within the framework of Kalman and particle filtering techniques. The proposed methods are applied to a suite of problems that includes laboratory and field studies, with a primary focus on finite element model updating of bridge structures and vehicle-structure interaction problems. The study also describes how finite element models residing in commercially available software can be made to communicate with a database of measurements via a particle filtering algorithm developed on the Matlab platform.
The thesis is divided into six chapters and an appendix. A review of the literature on structural system identification, with emphasis on dynamic state estimation techniques, is presented in Chapter 1. The problem of system parameter identification when measurements originate from multiple tests and multiple sensors is considered in Chapter 2, and a solution based on a Neumann expansion of the structural static/dynamic stiffness matrix and Kalman filtering is proposed to tackle it. The question of decoupling the problem of parameter estimation from state estimation is also discussed. The avoidance of linearization of the stiffness matrix and the solution of the parameter estimation problem using Monte Carlo filters are examined in Chapter 3; this also enables the treatment of nonlinear structural mechanics problems. The proposed method is assessed using synthetic and laboratory measurement data. The problem of interfacing structural models residing in professional finite element analysis software with measured data via a particle filtering algorithm developed on the Matlab platform is considered in Chapter 4. Illustrative examples here cover laboratory studies on a beam structure and also field studies on an existing multi-span masonry railway arch bridge. Identification of parameters of systems with strong nonlinearities, such as a rectangular rubber sheet with a concentric hole, is also investigated. Studies on parameter identification in the beam-moving oscillator problem are reported in Chapter 5. The efficacy of the particle filtering strategy in identifying parameters of this class of time-varying systems is demonstrated. A resume of the contributions made and a few suggestions for further research are provided in Chapter 6. The appendix contains details of the interfaces developed among the finite element software (NISA), the database of measurements, and the particle filtering algorithm (developed on the Matlab platform).
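A minimal sketch of the augmented-state particle filtering idea that underlies such parameter estimation, here recovering one parameter of a scalar linear system; the thesis's specific schemes (Neumann expansion, decoupled estimation, FE-software interfaces) are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

# True system: x[t+1] = a*x[t] + process noise ; measurement y[t] = x[t] + noise
a_true, T, N = 0.8, 200, 1000
x, ys = 0.0, []
for _ in range(T):
    x = a_true * x + rng.normal(0, 0.5)
    ys.append(x + rng.normal(0, 0.3))

# SIR particle filter on the augmented state (x, a)
xs = rng.normal(0, 1, N)            # state particles
a = rng.uniform(0, 1, N)            # parameter particles
for y in ys:
    a += rng.normal(0, 0.005, N)    # small jitter keeps parameter diversity
    xs = a * xs + rng.normal(0, 0.5, N)
    w = np.exp(-0.5 * ((y - xs) / 0.3) ** 2)   # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(N, N, p=w)     # multinomial resampling
    xs, a = xs[idx], a[idx]

print('estimated a =', a.mean(), '(true value 0.8)')
```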
|
4 |
Imagerie pour le sonar à ouverture synthétique multistatique (Imaging for multistatic synthetic aperture sonar). Hervé, Caroline. 21 January 2011
This study deals with multistatic SAS (Synthetic Aperture Sonar) systems, which produce higher-resolution images of targets from acoustic waves than classical sonar. The SAS technique is largely exploited in the monostatic configuration, but few studies exist to date on multistatic SAS. The work therefore consists in evaluating resolution and detection performance in bistatic and multistatic configurations and comparing it with the known monostatic performance. A computation method used in radar has been adapted to sonar to derive the bistatic resolution, which is an original result of this work. The algorithm classically used to reconstruct images rests on the hypothesis that the target is a sum of point scatterers. This hypothesis is not well adapted to underwater acoustics, so a new algorithm has been developed that better accounts for the diffraction phenomena occurring at the interface between the water and the target. The scattered-field model of the target is obtained by combining boundary integral equations with the Kirchhoff approximation. An image reconstruction method based on the 2D Fourier transform of this model has been implemented and tested on simulated data, then on data acquired during tank experiments. The new algorithm achieves better reconstruction accuracy and can extract quantitative information about the target. The interest of multistatic configurations for target recognition has also been demonstrated in this thesis work.
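For context, a sketch of the classical point-scatterer (delay-and-sum) image formation that the thesis's diffraction-based algorithm improves upon, written here for a bistatic transmitter/receiver geometry; the sound speed, sampling model and complex-baseband assumption are illustrative:

```python
import numpy as np

def bistatic_backprojection(echoes, tx, rxs, grid_x, grid_y, fs, c=1500.0):
    """Delay-and-sum image under the point-scatterer hypothesis.

    echoes : (n_rx, n_samples) complex baseband echo series
    tx     : (2,) transmitter position; rxs : (n_rx, 2) receiver positions
    """
    tx = np.asarray(tx, dtype=float)
    img = np.zeros((len(grid_y), len(grid_x)), dtype=complex)
    for rx, sig in zip(np.asarray(rxs, dtype=float), echoes):
        for iy, y in enumerate(grid_y):
            for ix, xp in enumerate(grid_x):
                p = np.array([xp, y])
                # Bistatic delay: transmitter -> pixel -> receiver
                delay = (np.linalg.norm(p - tx) + np.linalg.norm(rx - p)) / c
                k = int(round(delay * fs))
                if k < sig.shape[0]:
                    img[iy, ix] += sig[k]
    return np.abs(img)
```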
|
5 |
Simultaneous localization and mapping for autonomous robot navigation in a dynamic noisy environment. Agunbiade, Olusanya Yinka. 11 1900
D. Tech. (Department of Information Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology.
Simultaneous Localization and Mapping (SLAM) is a significant problem that has been extensively researched in robotics. Its contribution to autonomous robot navigation has drawn researchers to this area. Various techniques have been proposed to address the SLAM problem, with remarkable achievements, but several factors can degrade the effectiveness of a SLAM technique. These factors include environmental noise (light intensity and shadow), dynamic environments, the kidnapped-robot problem, and computational cost. These problems create inconsistencies that can lead to erroneous results in implementation. In an attempt to address these problems, a novel SLAM technique known as DIK-SLAM was proposed.
DIK-SLAM is a SLAM technique augmented with filtering algorithms and several re-modifications of the Monte Carlo algorithm to increase its robustness while keeping the computational complexity in check. A morphological technique and the Normalized Difference Index (NDI) are filters introduced into the novel technique to overcome shadows. A dark-channel model and a specular-to-diffuse transformation are filters introduced to mitigate light-intensity effects. These filters operate in parallel, since computational cost is a concern. The re-modified Monte Carlo algorithm, based on initial localization and a grid-map technique, was introduced to overcome the kidnapped-robot problem and the dynamic environment, respectively.
In this study, the publicly available TUM-RGBD dataset and a privately generated dataset from a university in South Africa were employed to evaluate the filtering algorithms. Experiments were carried out in Matlab simulation and evaluated using quantitative and qualitative methods. The experimental results showed improved performance of DIK-SLAM compared with the original Monte Carlo algorithm and another SLAM technique from the literature. The DIK-SLAM algorithm discussed in this study has the potential to improve autonomous robot navigation, path planning, and exploration while reducing robot accident rates and human injuries.
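A minimal sketch of one predict/weight/resample cycle of the baseline Monte Carlo localization that DIK-SLAM re-modifies, assuming a simple range-to-beacon measurement model; the motion model and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def mcl_step(particles, control, ranges, beacons, sigma=0.3):
    """particles: (N, 3) poses [x, y, heading]; control: (v, w);
    ranges: measured distances to known beacons; beacons: (n_b, 2)."""
    v, w = control
    n = len(particles)
    # Predict: noisy unicycle motion model
    particles[:, 2] += w + rng.normal(0, 0.02, n)
    particles[:, 0] += (v + rng.normal(0, 0.05, n)) * np.cos(particles[:, 2])
    particles[:, 1] += (v + rng.normal(0, 0.05, n)) * np.sin(particles[:, 2])
    # Weight: Gaussian range likelihood to each beacon
    d = np.linalg.norm(particles[:, None, :2] - beacons[None], axis=2)  # (N, n_b)
    logw = -0.5 * np.sum(((d - ranges) / sigma) ** 2, axis=1)
    wts = np.exp(logw - logw.max())
    wts /= wts.sum()
    # Resample -- the stage DIK-SLAM re-modifies for kidnap recovery
    return particles[rng.choice(n, n, p=wts)]
```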
|
6 |
Subband Adaptive Filtering Algorithms And Applications. Sridharan, M K. 06 1900
In the system identification scenario, the linear approximation of the system, modelled by its impulse response, is estimated in real time by gradient-type Least Mean Square (LMS) or Recursive Least Squares (RLS) algorithms. In recent applications such as acoustic echo cancellation, the order of the impulse response to be estimated is very high; the traditional approaches then become inefficient and real-time implementation difficult. Alternatively, the system is modelled by a set of shorter adaptive filters operating in parallel on subsampled signals. This approach, referred to as subband adaptive filtering, is expected not only to reduce the computational complexity but also to improve the convergence rate of the adaptive algorithm. In practice, however, different subband adaptive algorithms have to be used to enhance performance with respect to complexity, convergence rate and processing delay. A single subband adaptive filtering algorithm that outperforms the fullband scheme in all applications is yet to be realized.
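For reference, a minimal fullband system-identification loop using NLMS (a normalized variant of LMS); the subband schemes studied in the thesis split this adaptation across shorter filters on decimated signals. The impulse-response length and step size here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Unknown impulse response to identify (kept short here; AEC uses thousands of taps)
L = 64
h = rng.normal(0, 1, L) * np.exp(-0.1 * np.arange(L))
x = rng.normal(0, 1, 5000)                                    # far-end signal
d = np.convolve(x, h)[:len(x)] + rng.normal(0, 1e-3, len(x))  # echo + noise

mu, eps = 0.5, 1e-6
w = np.zeros(L)
for n in range(L - 1, len(x)):
    u = x[n - L + 1:n + 1][::-1]       # regressor, most recent sample first
    e = d[n] - w @ u                   # a priori error
    w += mu * e * u / (u @ u + eps)    # normalized gradient step
print('misalignment (dB):', 10 * np.log10(np.sum((w - h) ** 2) / np.sum(h ** 2)))
```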
This thesis is intended to study the subband adaptive filtering techniques and explore the possibilities of better algorithms for performance improvement. Three different subband adaptive algorithms have been proposed and their performance have been verified through simulations. These algorithms have been applied to acoustic echo cancellation and EEG artefact minimization problems.
Details of the work
To start with, the fast FIR filtering scheme introduced by Mou and Duhamel has been generalized. The Perfect Reconstruction Filter Bank (PRFB) is used to model the linear FIR system. The structure offers efficient implementation with reduced arithmetic complexity. By using a PRFB in which non-adjacent filters are non-overlapping, many channel filters can be eliminated from the structure. This reduces the complexity of the structure further, but introduces an approximation in the model. The modelling error depends on the stop-band attenuation of the filters of the PRFB. The error introduced by this approximation is tolerable for applications like acoustic echo cancellation.
The filtered output of the modified generalized fast filtering structure is given by
(formula)
where P_k(z) is the main-channel output, P_{k,k+1}(z) is the output of the auxiliary channel filters at the reduced rate, G_k(z) is the k-th synthesis filter, and M is the number of channels in the PRFB. An adaptation scheme is developed for adapting the main-channel filters; the auxiliary channel filters are derived from the main-channel filters.
Secondly, the aliasing problem of the classical structure is reduced without using cross filters. Aliasing components in the estimated signal result in very poor steady-state performance of the classical structure. Attempts to eliminate the aliasing have reduced the computational gain margin and the convergence rate, and any attempt to estimate the subband reference signals from the aliased subband input signals itself results in aliasing. An analysis filter H_k(z) with the following antialiasing property
(formula)
can avoid aliasing in the input subband signal. The asymmetry of the frequency response prevents the use of real analysis filters. In the investigation presented in this thesis, complex analysis filters and real synthesis filters are used in the classical structure to reduce the aliasing errors and to achieve a superior convergence rate.
PRFB is traditionally used in implementing the Interpolated FIR (IFIR) structure. These filters may not be ideal for processing the input signal of an adaptive algorithm. As a third contribution, the IFIR structure is modified using discrete finite frames. The model of an FIR filter s is given by Fc, with c = Hs. The columns of the matrix F form a frame, with the rows of H as its dual frame. The matrix elements can be arbitrary, except that the transformation should be implementable as a filter bank. This freedom is used to optimize the filter bank, with knowledge of the input statistics, for initial convergence-rate enhancement.
Next, the proposed subband adaptive algorithms are applied to the acoustic echo cancellation problem with realistic parameters. Speech input and a sufficiently long Room Impulse Response (RIR) are used in the simulations. The Echo Return Loss Enhancement (ERLE) and the steady-state error spectrum are used as performance measures to compare these algorithms with the fullband scheme and other representative subband implementations.
Finally, a subband adaptive algorithm is used to minimize EOG (electrooculogram) artefacts in measured EEG (electroencephalogram) signals. An IIR filter bank providing sufficient isolation between the frequency bands is used in the modified IFIR structure, and this structure has been employed in the artefact minimization scheme. The estimation error in the high-frequency range is reduced, and the output signal-to-noise ratio is increased by a couple of dB over that of the fullband scheme.
Conclusions
Efforts to find elegant subband adaptive filtering algorithms will continue in the future. In this thesis, however, the generalized filtering algorithm offers a gain in filtering complexity of the order of M/2 and reduced misadjustment. The complex classical scheme offers an improved convergence rate, reduced misadjustment, and computational gains of the order of M/4. The modifications of the IFIR structure using discrete finite frames make it possible to eliminate the processing delay and enhance the convergence rate. Typical performance of the complex classical scheme for speech input in a realistic scenario (8-channel case) is an ERLE of more than 45 dB. The subband approach to EOG artefact minimization in EEG signals was found to be superior to its fullband counterpart.
(Refer PDF file for Formulas)
|
7 |
A Stochastic Search Approach to Inverse Problems. Venugopal, Mamatha. January 2016
The focus of the thesis is the development of a few stochastic search schemes for inverse problems and their applications in medical imaging. After the introduction in Chapter 1, which motivates and puts in perspective the work done in later chapters, the main body of the thesis may be viewed as composed of two parts: the first part concerns the development of stochastic search algorithms for inverse problems (Chapters 2 and 3), while the second part elucidates the applicability of these search schemes to inverse problems of interest in tomographic imaging (Chapters 4 and 5). The chapter-wise contributions of the thesis are summarized below.
Chapter 2 proposes a Monte Carlo stochastic filtering algorithm for the recursive estimation of diffusive processes in linear/nonlinear dynamical systems that modulate the instantaneous rates of Poisson measurements. The same scheme is applicable when the set of partial and noisy measurements is of a diffusive nature. A key aspect of our development here is the filter-update scheme, derived from an ensemble approximation of the time-discretized nonlinear Kushner-Stratonovich equation and modified to account for Poisson-type measurements. Specifically, the additive update through a gain-like correction term, empirically approximated from the innovation integral in the filtering equation, eliminates the problem of particle collapse encountered in many conventional particle filters that adopt weight-based updates. Through a few numerical demonstrations, the versatility of the proposed filter is brought forth, first with application to filtering problems with diffusive or Poisson-type measurements and then to an automatic control problem wherein the extremization of the associated cost functional is achieved simply by an appropriate redefinition of the innovation process.
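The additive, gain-like (weight-free) correction can be sketched in its simplest ensemble form as below; the thesis's actual update is derived from the time-discretized Kushner-Stratonovich equation and modified for Poisson-type measurements, which this generic EnKF-style step does not capture:

```python
import numpy as np

def additive_ensemble_update(X, y, h, R):
    """Gain-type update of an ensemble, avoiding weight-based particle collapse.

    X : (N, d) ensemble of state particles
    y : (m,) observed measurement
    h : callable mapping one state to its predicted measurement
    R : (m, m) measurement noise covariance
    """
    Y = np.array([h(x) for x in X])            # predicted measurements (N, m)
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    N = len(X)
    Pxy = Xc.T @ Yc / (N - 1)                  # state-measurement covariance
    Pyy = Yc.T @ Yc / (N - 1) + R              # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)               # empirically approximated gain
    return X + (y - Y) @ K.T                   # additive correction, no weights
```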
The aim of one of the numerical examples in Chapter 2 is to minimize the structural response of a Duffing oscillator under external forcing. We pose this active control problem within a filtering framework wherein the goal is to estimate the control force that minimizes an appropriately chosen performance index. We employ the proposed filtering algorithm to estimate the control force along with the oscillator displacements and velocities that are minimized as a result of applying it. Fig. 1 shows the time histories of the uncontrolled and controlled displacements and velocities of the oscillator, and a plot of the estimated control force against the applied external force is given in Fig. 2.
Fig. 1. Time histories of the uncontrolled and controlled (a) displacements and (b) velocities.
Fig. 2. Time histories of the external force and the estimated control force.
Stochastic filtering, despite its numerous applications, amounts only to a directed search and is best suited to inverse problems and optimization problems with unimodal solutions. For general optimization problems involving multimodal objective functions with a priori unknown optima, filtering, like a regularized Gauss-Newton (GN) method, may only serve as a local (or quasi-local) search. In Chapter 3, therefore, we propose a stochastic search (SS) scheme that, whilst maintaining the basic structure of a filtered martingale problem, also incorporates randomization techniques such as scrambling and blending, which are meant to help avoid so-called local traps. The key contribution of this chapter is the introduction of yet another technique, termed state space splitting (3S), a paradigm based on the principle of divide-and-conquer. The 3S technique, incorporated within the optimization scheme, offers a better assimilation of measurements and is found to outperform filtering in the context of quantitative photoacoustic tomography (PAT), recovering the optical absorption field from sparsely available PAT data using a bare-minimum ensemble. In addition, the proposed scheme is numerically shown to be better than, or at least as good as, CMA-ES (covariance matrix adaptation evolution strategy), one of the best-performing optimization schemes, in minimizing a set of benchmark functions.
Table 1 gives the comparative performance of the proposed scheme and CMA-ES in minimizing a set of 40-dimensional functions (F1-F20), all of which have their global minimum at 0, using an ensemble size of 20. Here, 10^-5 is the tolerance limit to be attained on the objective function value and MAX is the maximum number of iterations permitted for the optimization scheme to arrive at the global minimum.
Table 1. Performance of the SS scheme and CMA-ES.
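A schematic of the divide-and-conquer splitting idea, reduced here to a plain block-wise random search on a benchmark function; the actual 3S scheme embeds the split within a filtered martingale problem together with scrambling and blending, none of which is captured by this toy sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

def split_search(f, d, n_blocks=4, ensemble=20, iters=2000, sigma=0.3):
    """Optimize f over R^d by perturbing one coordinate block at a time."""
    x = rng.normal(0, 1, d)
    blocks = np.array_split(np.arange(d), n_blocks)
    for it in range(iters):
        blk = blocks[it % n_blocks]            # cycle over sub-spaces
        cand = np.tile(x, (ensemble, 1))
        cand[:, blk] += rng.normal(0, sigma, (ensemble, len(blk)))
        vals = np.array([f(c) for c in cand])
        if vals.min() < f(x):                  # greedy acceptance
            x = cand[vals.argmin()]
    return x

sphere = lambda z: float(np.sum(z ** 2))       # global minimum 0 at the origin
print('f(x*) =', sphere(split_search(sphere, d=40)))
```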
Chapter 4 gathers numerical and experimental evidence to support our conjecture in the previous chapters that even a quasi-local search (afforded, for instance, by the filtered martingale problem) is generally superior to a regularized GN method in solving inverse problems. Specifically, in this chapter, we solve the inverse problems of ultrasound modulated optical tomography (UMOT) and diffraction tomography (DT). In UMOT, we perform a spatially resolved recovery of the mean-squared displacements p(r) of the scattering centres in a diffusive object by measuring the modulation depth in the decaying autocorrelation of the incident coherent light. This modulation is induced by the input ultrasound focussed on a specific region referred to as the region of interest (ROI) in the object. Since the ultrasound-induced displacements are a measure of the material stiffness, in principle, UMOT can be applied to the early diagnosis of cancer in soft tissues. In DT, on the other hand, we recover the real refractive index distribution n(r) of an optical fiber from the experimentally acquired transmitted intensity of light traversing through it. In both cases, the filtering step encoded within the optimization scheme recovers superior reconstructions vis-à-vis the GN method in terms of quantitative accuracy.
Fig. 3 gives a comparative cross-sectional plot through the centre of the reference and reconstructed p(r) images in UMOT when the ROI is at the centre of the object. Here, the anomaly presents as an increase in the displacements and is at the centre of the ROI.
Fig. 4 shows the comparative cross-sectional plot of the reference and reconstructed refractive index distributions n(r) of the optical fiber in DT.
Fig. 3. Cross-sectional plot through the centre of the reference and reconstructed p(r) images.
Fig. 4. Cross-sectional plot through the centre of the reference and reconstructed n(r) distributions.
In Chapter 5, the SS scheme is applied to our main application, viz. photoacoustic tomography (PAT), for the recovery of the absorbed energy map, the optical absorption coefficient and the chromophore concentrations in soft tissues. The main contribution of this chapter is a single-step method for the recovery of the optical absorption field from both simulated and experimental time-domain PAT data. A single-step direct recovery is shown to yield better reconstructions than the generally adopted two-step method for quantitative PAT. Such a quantitative reconstruction may be converted to a functional image through a linear map. Alternatively, one could also perform a one-step recovery of the chromophore concentrations from the boundary pressure, as shown using simulated data in this chapter. Being a Monte Carlo scheme, the SS scheme is highly parallelizable, and the availability of such a machine-ready inversion scheme should finally enable PAT to emerge as a clinical tool in medical diagnostics.
Given below in Fig. 5 is a comparison of the optical absorption map of the Shepp-Logan phantom with the reconstruction obtained as a result of a direct (1-step) recovery.
Fig. 5. The (a) exact and (b) reconstructed optical absorption maps of the Shepp-Logan phantom. The x- and y-axes are in m and the colormap is in mm^-1.
Chapter 6 concludes the work with a brief summary of the results obtained and suggestions for future exploration of some of the schemes and applications described in this thesis.
|
8 |
Estimation du mouvement bi-dimensionnel de la paroi artérielle en imagerie ultrasonore par une approche conjointe de segmentation et de speckle tracking / Estimation of the bi-dimensional motion of the arterial wall in ultrasound imaging with a combined approach of segmentation and speckle tracking. Zahnd, Guillaume. 10 December 2012
This thesis is focused on the domain of biomedical image processing. The aim of our study is to assess in vivo the parameters that characterize the mechanical properties of the carotid artery in ultrasound imaging, for the early detection of cardiovascular disease. The analysis of the longitudinal motion of the arterial wall tissues, i.e. in the same direction as the blood flow, is the principal motivation of this work.
The three main contributions of this work are i) the development of an original, semi-automatic methodological framework dedicated to the segmentation and motion estimation of the arterial wall in in vivo ultrasound B-mode image sequences, ii) the description of a protocol for generating a reference, involving the manual tracings of several experts, in order to quantify the accuracy of the results of our method despite the absence of a ground truth inherent to ultrasound imaging, and iii) the clinical evaluation of the association between the mechanical and dynamical parameters of the arterial wall and cardiovascular risk factors, for the early detection of atherosclerosis. We propose a semi-automatic method based on a combined approach of wall segmentation and tissue motion estimation. The extraction of the interface positions is realized via an approach specific to the morphological structure of the carotid artery, based on a dynamic programming strategy exploiting a matched filter. The motion estimation is performed via a robust block matching method, based on a priori knowledge of the displacement as well as a temporal update of the reference block with a specific Kalman filter. The accuracy of our method, evaluated in vivo, is of the same order of magnitude as that of the manual operations performed by experts, and remains noticeably better than that obtained with two other traditional methods (i.e. a classical implementation of the block matching technique, and the commercial software Velocity Vector Imaging). We also present four clinical studies conducted in a hospital setting, in which we evaluate the association between longitudinal motion and cardiovascular risk factors. We suggest that the longitudinal motion, an emerging and still little-studied risk marker, constitutes a pertinent index complementary to traditional markers in the characterization of arterial physiopathology, reflects the overall cardiovascular risk level, and could be well suited to the early detection of atherosclerosis.
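A minimal sketch of block matching with a temporally updated reference block; the exponential blend below merely stands in for the specific Kalman update of the thesis, and the block size, search range and blend factor are illustrative:

```python
import numpy as np

def track_block(frames, top_left, size=16, search=8, alpha=0.8):
    """Track a tissue block through a B-mode sequence; frames: list of 2D arrays."""
    y, x = top_left
    ref = frames[0][y:y + size, x:x + size].astype(float)
    trajectory = [(y, x)]
    for frame in frames[1:]:
        best, best_cost = (y, x), np.inf
        for dy in range(-search, search + 1):         # exhaustive search window
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if (0 <= yy and 0 <= xx and
                        yy + size <= frame.shape[0] and xx + size <= frame.shape[1]):
                    cand = frame[yy:yy + size, xx:xx + size].astype(float)
                    cost = np.sum((cand - ref) ** 2)   # sum of squared differences
                    if cost < best_cost:
                        best, best_cost = (yy, xx), cost
        y, x = best
        # Temporal update of the reference block, so it follows slow speckle
        # decorrelation without drifting off the tissue.
        ref = alpha * ref + (1 - alpha) * frame[y:y + size, x:x + size]
        trajectory.append((y, x))
    return trajectory
```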
|