21

Performance and Implementation Aspects of Nonlinear Filtering

Hendeby, Gustaf January 2008 (has links)
In many situations it is important to extract as much and as good information as possible from the measurements that are available. Extracting information about, for example, the position and velocity of an aircraft is called filtering. In this case the position and velocity are examples of states of the aircraft, which in turn is a system. A typical example of problems of this kind is found in surveillance systems, but the same need is becoming increasingly common in ordinary consumer products such as mobile phones (which tell you where the phone is), navigation aids in cars, and for placing experience-enhancing graphics in films and TV shows. A standard tool used to extract the needed information is nonlinear filtering. The methods are especially common in positioning, navigation, and target tracking applications. This thesis examines in depth several questions related to nonlinear filtering: * How does one evaluate how well a filter or detector performs? * What distinguishes different methods, and what does that mean for their properties? * How does one program the computers used to extract the information? The measure most often used to describe how well a filter performs is the RMSE (root mean square error), which is essentially a measure of how far from the correct state the obtained estimate can be expected to be on average. An advantage of using the RMSE as a measure is that it is bounded from below by the Cramér-Rao lower bound (CRLB). The thesis presents methods for determining the effect different noise distributions have on the CRLB. Noise refers to the disturbances and errors that are always present when measuring or trying to describe a behavior, and a noise distribution is a statistical description of how the noise behaves. The study of the CRLB leads to an analysis of intrinsic accuracy (IA), the inherent accuracy of the noise. For linear systems the results are straightforward and can be used to determine whether the stated goals can be met or not. The same method can also be used to indicate whether nonlinear methods such as the particle filter can be expected to give better results than linear methods such as the Kalman filter. Corresponding IA-based methods can also be used to evaluate detection algorithms. Such algorithms are used to detect faults or changes in a system. Using the RMSE to evaluate filtering algorithms captures one aspect of the filtering result, but there are many other properties that may be of interest. Simulations in the thesis show that even if two filtering methods deliver the same RMSE performance, the state distributions they produce can differ substantially depending on the noise affecting the studied system. These differences can be significant in some cases. As an alternative to the RMSE, the Kullback divergence is therefore used here; it clearly exposes the shortcomings of relying solely on RMSE analyses. The Kullback divergence is a statistical measure of how much two distributions differ. Two filtering algorithms are analyzed in more detail: the Rao-Blackwellized particle filter (RBPF) and the method known as the unscented Kalman filter (UKF). The analysis of the RBPF leads to a new way of presenting the algorithm that makes it easier to use in a computer program. Moreover, the new presentation can give a better understanding of how the algorithm works. In the study of the UKF, the focus is on the underlying so-called unscented transform, which is used to describe what happens to a noise distribution when it is transformed, for example by a measurement. The result consists of a number of simulation studies that illustrate the behavior of the different methods. Another result is a comparison between the UT and first- and second-order Gaussian approximation formulas. This thesis also describes a parallel implementation of a particle filter and an object-oriented framework for filtering in the programming language C++. The particle filter has been implemented on a graphics card. A graphics card is an example of inexpensive hardware found in most modern computers, where it is mostly used for computer games and therefore rarely used to its full potential. A parallel particle filter, that is, a program that runs several parts of the particle filter simultaneously, opens up new applications where speed and good performance are important. The object-oriented filtering framework achieves the flexibility and performance needed for large-scale Monte Carlo simulations through modern software design. The framework can also make it easier to go from a prototype of a signal processing system to a finished product.
/ Nonlinear filtering is an important standard tool for information and sensor fusion applications, e.g., localization, navigation, and tracking. It is an essential component in surveillance systems and of increasing importance for standard consumer products, such as cellular phones with localization, car navigation systems, and augmented reality. This thesis addresses several issues related to nonlinear filtering, including performance analysis of filtering and detection, algorithm analysis, and various implementation details. The most commonly used measure of filtering performance is the root mean square error (RMSE), which is bounded from below by the Cramér-Rao lower bound (CRLB). This thesis presents a methodology to determine the effect different noise distributions have on the CRLB. This leads up to an analysis of the intrinsic accuracy (IA), the informativeness of a noise distribution. For linear systems the resulting expressions are direct and can be used to determine whether a problem is feasible or not, and to indicate the efficacy of nonlinear methods such as the particle filter (PF). A similar analysis is used for change detection performance analysis, which once again shows the importance of IA. A problem with the RMSE evaluation is that it captures only one aspect of the resulting estimate, and the distributions of the estimates can differ substantially. To address this problem, the Kullback divergence has been evaluated, demonstrating the shortcomings of pure RMSE evaluation. Two estimation algorithms have been analyzed in more detail: the Rao-Blackwellized particle filter (RBPF), by some authors referred to as the marginalized particle filter (MPF), and the unscented Kalman filter (UKF). The RBPF analysis leads to a new way of presenting the algorithm, thereby making it easier to implement. In addition, the presentation can possibly give new intuition for the RBPF as a stochastic Kalman filter bank. In the analysis of the UKF the focus is on the unscented transform (UT). The results include several simulation studies and a comparison with the Gauss approximation of the first and second order in the limit case.
This thesis presents an implementation of a parallelized PF and outlines an object-oriented framework for filtering. The PF has been implemented on a graphics processing unit (GPU), i.e., a graphics card. The GPU is an inexpensive parallel computational resource available in most modern computers and is rarely used to its full potential. Being able to implement the PF in parallel makes possible new applications where speed and good performance are important. The object-oriented filtering framework provides the flexibility and performance needed for large-scale Monte Carlo simulations using modern software design methodology. It can also be used to help turn a prototype efficiently into a finished product.
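To make the filtering problem concrete, the following is a minimal bootstrap particle filter with an RMSE evaluation of the kind the abstract uses as its baseline performance measure. It is an illustrative sketch only: the scalar benchmark model, noise levels, and particle count are assumptions, not taken from the thesis.

```python
# Minimal bootstrap particle filter for a scalar nonlinear model (a sketch;
# the model and all parameters are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)

def f(x, t):       # assumed state transition (classic benchmark form)
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * t)

def h(x):          # assumed measurement function
    return x**2 / 20

T, N = 50, 500                          # time steps, particles
x_true, y = np.zeros(T), np.zeros(T)
for t in range(1, T):                   # simulate ground truth and data
    x_true[t] = f(x_true[t - 1], t) + rng.normal(0, np.sqrt(10))
    y[t] = h(x_true[t]) + rng.normal(0, 1)

particles = rng.normal(0, 2, N)
est = np.zeros(T)
for t in range(1, T):
    particles = f(particles, t) + rng.normal(0, np.sqrt(10), N)  # propagate
    w = np.exp(-0.5 * (y[t] - h(particles))**2) + 1e-300         # weight
    w /= w.sum()
    est[t] = w @ particles                                       # MMSE estimate
    particles = particles[rng.choice(N, N, p=w)]                 # resample

rmse = np.sqrt(np.mean((est - x_true)**2))
print(f"RMSE over one run: {rmse:.2f}")
```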
22

Stochastic volatility: maximum likelihood estimation and specification testing

White, Scott Ian January 2006 (has links)
Stochastic volatility (SV) models provide a means of tracking and forecasting the variance of financial asset returns. While SV models have a number of theoretical advantages over competing variance modelling procedures, they are notoriously difficult to estimate. The distinguishing feature of the SV estimation literature is that those algorithms that provide accurate parameter estimates are conceptually demanding and require a significant amount of computational resources to implement. Furthermore, although a significant number of distinct SV specifications exist, little attention has been paid to how one would choose the appropriate specification for a given data series. Motivated by these facts, a likelihood-based joint estimation and specification testing procedure for SV models is introduced that overcomes the operational issues surrounding existing estimators. The estimation and specification testing procedures in this thesis are made possible by the introduction of a discrete nonlinear filtering (DNF) algorithm. This procedure uses the nonlinear filtering equations to provide maximum likelihood estimates for the general class of nonlinear latent variable problems, which includes the SV model class. The DNF algorithm provides a fast and accurate implementation of the nonlinear filtering equations by treating the continuously valued state variable as if it were a discrete Markov variable with a large number of states. When the DNF procedure is applied to the standard SV model, very accurate parameter estimates are obtained. Since the accuracy of the DNF is comparable to that of other procedures, its advantages are ease and speed of implementation and the provision of online filtering (prediction) of the variance. Additionally, the DNF procedure is very flexible and can be used for any dynamic latent variable problem with closed-form likelihood and transition functions. Likelihood-based specification testing for non-nested SV specifications is undertaken by formulating and estimating an encompassing model that nests two competing SV models. Likelihood ratio statistics are then used to make judgements regarding the optimal SV specification. The proposed framework is applied to SV models that incorporate either extreme returns or asymmetries.
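As an illustration of the DNF idea, here is a sketch of a grid-based filter for a standard SV model, where the continuously valued log-volatility is treated as a discrete Markov variable on a fine grid and the forward recursion yields the likelihood. The model parameterization, grid width, and parameter values are assumptions for illustration, not the thesis's implementation.

```python
# Grid-based (discrete nonlinear) filter for the standard SV model
#   h_t = mu + phi (h_{t-1} - mu) + sigma eta_t,   y_t = exp(h_t / 2) eps_t.
# All parameter values below are assumed for the demo.
import numpy as np
from scipy.stats import norm

mu, phi, sigma = -0.5, 0.95, 0.2
K = 200                                     # number of grid states
sd_h = sigma / np.sqrt(1 - phi**2)          # stationary std of h_t
grid = np.linspace(mu - 4 * sd_h, mu + 4 * sd_h, K)

# Transition matrix: P[i, j] ~ P(h_t = grid[j] | h_{t-1} = grid[i])
means = mu + phi * (grid - mu)
P = norm.pdf(grid[None, :], means[:, None], sigma)
P /= P.sum(axis=1, keepdims=True)

def dnf_loglik(y):
    """Forward filter over the grid; returns the log-likelihood of y."""
    p = norm.pdf(grid, mu, sd_h); p /= p.sum()      # stationary prior
    ll = 0.0
    for yt in y:
        p = p @ P                                   # predict
        p *= norm.pdf(yt, 0.0, np.exp(grid / 2))    # measurement update
        c = p.sum(); ll += np.log(c); p /= c        # normalize, accumulate
    return ll

# For estimation, maximize dnf_loglik over (mu, phi, sigma), e.g. with
# scipy.optimize.minimize on the negative log-likelihood.
y = np.random.default_rng(1).normal(0, 1, 500)      # toy return series
print(dnf_loglik(y))
```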
23

EM algorithm for Markov chains observed via Gaussian noise and point process information: Theory and case studies

Damian, Camilla, Eksi-Altay, Zehra, Frey, Rüdiger January 2018 (has links) (PDF)
In this paper, we study parameter estimation via the Expectation Maximization (EM) algorithm for a continuous-time hidden Markov model with diffusion and point-process observations. Inference problems of this type arise, for instance, in credit risk modelling. A key step in the application of the EM algorithm is the derivation of finite-dimensional filters for the quantities needed in the E-step of the algorithm. In this context we obtain exact, unnormalized, and robust filters, and we discuss their numerical implementation. Moreover, we propose several goodness-of-fit tests for hidden Markov models with Gaussian noise and point-process observations. We run an extensive simulation study to test the speed and accuracy of our methodology. The paper closes with an application to credit risk: we estimate the parameters of a hidden Markov model for credit quality where the observations consist of rating transitions and credit spreads for US corporations.
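The flavor of the E-step can be conveyed with a discrete-time analogue: forward-backward filtering for a finite-state hidden Markov chain with Gaussian observations, followed by closed-form M-step updates. This is a simplified sketch under assumed dynamics; the paper's filters are continuous-time and additionally incorporate the point-process (rating-transition) information.

```python
# One EM iteration for a discrete-time hidden Markov chain with Gaussian
# observations -- a simplified analogue of the paper's setting.
import numpy as np
from scipy.stats import norm

def em_step(y, A, mu, sig, pi0):
    T, K = len(y), len(mu)
    B = norm.pdf(y[:, None], mu[None, :], sig)        # emission likelihoods
    alpha = np.zeros((T, K)); beta = np.ones((T, K)); c = np.zeros(T)
    alpha[0] = pi0 * B[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):                             # forward pass (scaled)
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    for t in range(T - 2, -1, -1):                    # backward pass
        beta[t] = (A @ (B[t + 1] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta                              # smoothed state probs
    xi = (alpha[:-1, :, None] * A[None] *             # pairwise smoothed probs
          (B[1:] * beta[1:])[:, None, :] / c[1:, None, None])
    A_new = xi.sum(0) / gamma[:-1].sum(0)[:, None]    # M-step: transitions
    mu_new = (gamma * y[:, None]).sum(0) / gamma.sum(0)  # M-step: means
    return A_new, mu_new, gamma, np.log(c).sum()      # also the log-likelihood

# Toy usage with assumed two-state data:
rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(-1, 1, 100), rng.normal(2, 1, 100)])
A = np.array([[0.95, 0.05], [0.05, 0.95]])
A, mu, gamma, ll = em_step(y, A, np.array([-2.0, 1.0]), 1.0, np.array([0.5, 0.5]))
print(ll, mu)
```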
24

Real-time visual tracking using image processing and filtering methods

Ha, Jin-cheol 01 April 2008 (has links)
The main goal of this thesis is to develop real-time computer vision algorithms in order to detect and to track targets in uncertain complex environments purely based on a visual sensor. Two major subjects addressed by this work are: 1. The development of fast and robust image segmentation algorithms that are able to search and automatically detect targets in a given image. 2. The development of sound filtering algorithms to reduce the effects of noise in signals from the image processing. The main constraint of this research is that the algorithms should work in real time with limited computing power on an onboard computer in an aircraft. In particular, we focus on contour tracking, which tracks the outline of the target represented by contours in the image plane. This thesis is concerned with three specific categories, namely image segmentation, shape modeling, and signal filtering. We have designed image segmentation algorithms based on geometric active contours implemented via level set methods. Geometric active contours are deformable contours that automatically track the outlines of objects in images. In this approach, the contour in the image plane is represented as the zero-level set of a higher dimensional function. (One example of the higher dimensional function is a three-dimensional surface for a two-dimensional contour.) This approach handles the topological changes (e.g., merging, splitting) of the contour naturally. Although geometric active contours prevail in many fields of computer vision, they suffer from the high computational costs associated with level set methods. Therefore, simplified versions of level set methods such as fast marching methods are often used in problems of real-time visual tracking. This thesis presents the development of a fast and robust segmentation algorithm based on up-to-date extensions of level set methods and geometric active contours, namely a fast implementation of Chan-Vese's (active contour) model (FICVM). The shape prior is a useful cue in the recognition of the true target. For the contour tracker, the outline of the target can be easily disrupted by noise. In geometric active contours, to cope with deviations from the true outline of the target, a higher dimensional function is constructed based on the shape prior, and the contour tracks the outline of an object by considering the difference between the higher dimensional functions obtained from the shape prior and from a measurement in a given image. The higher dimensional function is often a distance map, which requires high computational costs to construct. This thesis focuses on the extraction of shape information from only the zero-level set of the higher dimensional function. This strategy compensates for inaccuracies in the calculation of the shape difference that occur when a simplified higher dimensional function is used. This is called contour-based shape modeling. Filtering is an essential element in tracking problems because of the presence of noise in system models and measurements. The well-known Kalman filter provides an exact solution only for problems which have linear models and Gaussian distributions (linear/Gaussian problems). For nonlinear/non-Gaussian problems, particle filters have received much attention in recent years. Particle filtering is useful in the approximation of complicated posterior probability distribution functions. However, the computational burden of particle filtering prevents it from performing at full capacity in real-time applications.
This thesis concentrates on improving the processing time of particle filtering for real-time applications. In principle, we follow the particle filter in the geometric active contour framework. This thesis proposes an advanced blob tracking scheme in which a blob contains shape prior information about the target. This scheme simplifies the sampling process and quickly suggests the samples which have a high probability of being the target. Only for these samples is the contour tracking algorithm applied to obtain a more detailed state estimate. Curve evolution in the contour tracking is realized by the FICVM. The dissimilarity measure is calculated by the contour-based shape modeling method, and the shape prior is updated when it satisfies certain conditions. The new particle filter is applied to problems of low contrast and severe daylight conditions, to cluttered environments, and to tracking appearing/disappearing targets. We have also demonstrated the utility of the filtering algorithm for multiple target tracking in the presence of occlusions. This thesis presents several test results from simulations and flight tests, in which the proposed algorithms demonstrated promising results in a variety of tracking situations.
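A bare-bones sketch of the two-phase piecewise-constant (Chan-Vese) update underlying a fast implementation such as the FICVM is given below: it alternates between re-estimating the region means and pushing the level-set function toward the better-matching region. The curvature/length term and the narrow-band machinery needed for real-time use are deliberately omitted, so this is an illustration of the principle, not the thesis's algorithm.

```python
# Simplified Chan-Vese-style level-set iteration (no curvature term,
# no narrow band): pixels are pulled toward the region whose mean
# intensity explains them better.
import numpy as np

def chan_vese_step(img, phi, dt=0.5, lam=1.0):
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else 0.0       # mean inside contour
    c2 = img[~inside].mean() if (~inside).any() else 0.0   # mean outside
    # Pointwise force: positive where the pixel is closer to c1 (inside),
    # so phi grows there and the contour expands to include it.
    force = lam * ((img - c2)**2 - (img - c1)**2)
    return phi + dt * force

# Usage: initialize phi as a signed distance to a circle and iterate.
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0          # toy square target
yy, xx = np.mgrid[:64, :64]
phi = 15.0 - np.sqrt((yy - 32)**2 + (xx - 32)**2)          # circular init
for _ in range(100):
    phi = chan_vese_step(img, phi)
seg = phi > 0   # recovered region approximates the square
```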
25

Robust and early detection of oscillatory failures in flight control systems / Détection robuste et précoce des pannes oscillatoires dans les systèmes de commandes de vol

Simon, Pascal 07 December 2011 (has links)
The research work in this thesis was carried out under a CIFRE agreement between the IMS laboratory of the University of Bordeaux 1 and Airbus Operations S.A.S. The thesis deals with the robust and early detection of low-amplitude oscillatory failures in electrical flight control systems. An oscillatory failure is an abnormal oscillation of a control surface caused by a malfunction in the servo-control loop of the control surface actuator. Oscillatory failures affect the structure, the aeroelasticity, and the controllability of the aircraft when they lie within the actuator bandwidth. The ability to detect these failures is very important because they have an impact on the structural design of the aircraft. Methodologically, we focus on adaptive joint parameter and state estimation based on a local nonlinear filtering technique. The filtering mechanism operates on a nonlinear model of the control loop of the hydraulic actuator upstream of the control surfaces. The estimation algorithm is based on a polynomial interpolation of linear operators and offers the advantage of a relatively straightforward implementation. A crucial underlying problem is the determination of the tuning hyper-parameters of this algorithm. We propose a dedicated off-line procedure that incorporates a sensitivity criterion with respect to the failures to be detected. The proposed technique has been implemented and tested: experimental results obtained on a test bench and on an A380 simulator clearly demonstrate the benefit of the new approach in terms of performance, while maintaining the same level of robustness. / The research work done in this PhD has been carried out in the framework of an industrial agreement (CIFRE) between the IMS laboratory and Airbus Operations S.A.S. The thesis deals with robust and early detection of oscillatory failures (OFC: Oscillatory Failure Case) in the Electrical Flight Control System. An oscillatory failure is an abnormal oscillation of a control surface due to component malfunction in control surface servoloops. OFCs have an influence on structural loads, aeroelasticity and controllability when located within the actuator bandwidth. The ability to detect these failures is very important because they have an impact on the structural design of the aircraft. Usual monitoring techniques cannot always guarantee to remain within an envelope with acceptable robustness. In this work, we develop a model-based strategy to detect such failures with small amplitude at a very early stage. The monitoring strategy is based on dedicated nonlinear local filtering for on-line joint parameter/state estimation, allowing for model parameter variations during A/C flight. This strategy is associated with the same decision-making rules as currently used on the in-service Airbus A380. We propose a method for adjusting the tuning parameters so that various design goals and trade-offs can be easily formulated and managed. The performance of the proposed fault detection scheme is measured by its detection delay, its propensity to issue false alarms and whether it permits a failure to go undetected. The proposed technique has been implemented and tested with success on Airbus test facilities including an A380 flight simulator.
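To illustrate the monitoring principle in the simplest terms, the sketch below compares the measured surface position against an actuator model and declares a failure when the residual oscillates beyond a threshold for enough alternating half-cycles. The first-order actuator model, the threshold, and the confirmation count are placeholder assumptions, not Airbus values and not the thesis's nonlinear joint parameter/state filter.

```python
# Toy oscillatory-failure detector: model-based residual generation plus
# oscillation counting.  All numbers are illustrative assumptions.
import numpy as np

def detect_ofc(measured, commanded, thr=0.05, n_alarm=6, dt=0.01, tau=0.07):
    # First-order actuator model x' = (u - x) / tau, standing in for the
    # thesis's nonlinear hydraulic servo-loop model.
    model = np.zeros_like(commanded)
    for k in range(1, len(commanded)):
        model[k] = model[k - 1] + dt * (commanded[k - 1] - model[k - 1]) / tau
    residual = measured - model
    # Count alternating threshold crossings of the residual; an oscillatory
    # failure shows up as a sustained alternation.
    count, last_sign = 0, 0
    for r in residual:
        s = 1 if r > thr else (-1 if r < -thr else 0)
        if s != 0 and s == -last_sign:
            count += 1
            if count >= n_alarm:
                return True          # confirmed oscillatory failure
        if s != 0:
            last_sign = s
    return False
```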
26

Towards spectral mathematical morphology / Vers la morphologie mathématique spectrale

Deborah, Hilda 21 December 2016 (has links)
By providing, in addition to spatial information, a spectral measure as a function of wavelength, hyperspectral imaging achieves far greater accuracy than color imaging. Thanks to this, it has been used in quality control, material inspection, and other fields. However, to fully exploit this potential, it is important to treat the spectral data as a measurement, hence the need for metrology, in which accuracy, uncertainty, and bias must be controlled at every level of processing. With this objective, we chose to develop a nonlinear approach based on mathematical morphology and to extend it to the spectral domain by means of a spectral ordering relation based on distance functions. A new spectral distance function and a new ordering relation are thus proposed, together with a new analysis tool based on histograms of spectral differences. To ensure the validity of the operators, rigorous theoretical validation and metrological assessment were carried out at each stage of development. Protocols for assessing the quality of morphological processing are proposed, using artificial datasets for theoretical validation, datasets with known characteristics to assess robustness and stability, and real-case datasets to demonstrate the value of the approaches in an applicative context. The applications are developed in the cultural heritage context for the analysis of paintings and pigments. / Providing not only spatial information but also a spectral measure as a function of wavelength, hyperspectral imaging offers a much greater gain in accuracy than traditional color imaging. For this capability, hyperspectral imaging has been employed for quality control and inspection of materials in various fields. However, to fully exploit this potential, it is important to process the spectral data as a measure. This induces the need for metrology, where accuracy, uncertainty, and bias are managed at every level of processing. Aiming at developing a metrological image processing framework for spectral data, we chose to develop a nonlinear approach using the mathematical morphology framework and extended it to the spectral domain by means of a distance-based ordering relation. A novel spectral distance function and spectral ordering relation are proposed, in addition to a new analysis tool based on spectral differences. To ensure the validity of the spectral mathematical morphology framework, rigorous theoretical validation and metrological assessment are carried out at each development stage. Accordingly, protocols for quality assessment of spectral image processing tools are developed. These protocols consist of artificial datasets to completely validate the theoretical requirements, datasets with known characteristics to assess the robustness and stability, and datasets from real cases to prove the usefulness of the framework in an applicative context. The application tasks themselves are within the cultural heritage domain, where the target images come from pigments and paintings. / Hyperspectral imaging enables much more accurate measurements than traditional grayscale and color images, through both high spatial and spectral resolution (as a function of wavelength).
Because of this, hyperspectral imaging has increasingly been applied in various applications such as quality control and inspection of materials. But to fully exploit its potential, it is important to be able to treat spectral image data as measurements in a valid way. This induces the need for metrology, where accuracy, uncertainty, and bias are addressed and controlled at every level of the image processing. Aiming to develop a metrological framework for spectral image processing, we chose a nonlinear methodology based on the established mathematical morphology framework. We have extended this framework to the spectral domain using a distance-based ordering relation. A new spectral distance function and new spectral ordering relations were proposed, as well as new tools for spectral image analysis based on histograms of spectral differences. To ensure the validity of the new spectral framework for mathematical morphology, we performed a thorough theoretical validation and metrological assessment at every step of the development. Accordingly, new protocols for quality assessment of spectral image processing tools were also developed. These protocols consist of artificial datasets to validate the theoretical metrological requirements, image datasets with known properties to assess robustness and stability, and datasets from real applications to prove the usefulness of the framework in an applied context. The chosen applications are within the cultural heritage field, where the analyzed images are of pigments and paintings.
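The distance-based ordering idea can be sketched compactly: each pixel spectrum is ranked by its distance to a reference spectrum, and erosion returns the spectrum in the structuring element that is extremal under that ranking. The Euclidean distance and the argmin convention below are placeholders for the thesis's proposed spectral distance function and ordering relation.

```python
# Distance-based spectral erosion: order spectra in each window by their
# distance to a reference spectrum, and keep the extremal one.  Which
# extremum plays the role of erosion depends on the chosen ordering
# convention; argmin is assumed here for illustration.
import numpy as np

def spectral_erode(img, ref, size=3):
    """img: (H, W, B) spectral cube; ref: (B,) reference spectrum."""
    H, W, B = img.shape
    d = np.linalg.norm(img - ref, axis=2)     # ordering key per pixel
    out = img.copy()
    r = size // 2
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - r), min(H, i + r + 1)
            j0, j1 = max(0, j - r), min(W, j + r + 1)
            win = d[i0:i1, j0:j1]
            k = np.unravel_index(win.argmin(), win.shape)  # closest to ref
            out[i, j] = img[i0 + k[0], j0 + k[1]]          # keep that spectrum
    return out  # dilation: replace argmin with argmax
```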
27

B-Spline Based Multitarget Tracking

Sithiravel, Rajiv January 2014 (has links)
Multitarget tracking in the presence of false alarms is a difficult problem to consider. The objective of multitarget tracking is to estimate the number of targets and their states recursively from available observations. At any given time, targets can be born, die, and spawn from already existing targets. Sensors detect these targets with a defined threshold, where the observations are normally contaminated by false alarms. Also, targets with a low signal-to-noise ratio (SNR) may not be detected. Random Finite Set (RFS) filters can be used to solve such multitarget problems efficiently. In particular, one of the best and most widely used RFS-based filters is the Probability Hypothesis Density (PHD) filter. The PHD filter approximates the posterior probability density function (PDF) by its first-order moment only, under the assumption that the target SNR is relatively high. The PHD filter supports target birth, death, spawning, and missed detections through the well-known implementations, including the Sequential Monte Carlo Probability Hypothesis Density (SMC-PHD) and Gaussian Mixture Probability Hypothesis Density (GM-PHD) methods. The SMC-PHD filter suffers from the well-known degeneracy problem, while the GM-PHD filter may not be suitable for nonlinear and non-Gaussian target tracking problems. It is desirable to have a filter that can provide continuous estimates for any distribution. This is the motivation for the use of B-splines in this thesis. One of the main focuses of the thesis is the B-spline based PHD (SPHD) filter. Spline theory is well developed and has been used in academia and industry for more than five decades. B-splines can represent any numerical, geometrical, and statistical functions and models, including the PDF and PHD. The SPHD filter can be applied to linear, nonlinear, Gaussian, and non-Gaussian multitarget tracking applications. SPHD continuity can be maintained by selecting splines of order three or higher, which avoids the degeneracy-related problem. Another important characteristic of the SPHD filter is that the SPHD can be locally controlled, which allows manipulation of the SPHD and gives it a natural tendency for handling nonlinear problems. The SPHD filter can be further extended to support maneuvering multitarget tracking, where it can serve as an alternative to any available PHD filter implementation. The PHD filter does not work well for very low observable (VLO) target tracking problems, where the target SNR is normally very low. For very low SNR scenarios the PDF must be approximated by higher-order moments, so the PHD implementations may not be suitable for the problem considered in this thesis. One of the best estimators to use in VLO target tracking problems is the Maximum Likelihood Probabilistic Data Association (ML-PDA) algorithm. The standard ML-PDA algorithm is widely used in single-target initialization or geolocation problems with high false alarm rates. B-splines are also used in the ML-PDA (SML-PDA) implementation. The SML-PDA algorithm can determine the global maximum of the ML-PDA log-likelihood ratio with high efficiency in terms of state estimates and low computational complexity. For fast passive track initialization, and for search and rescue operations, the SML-PDA algorithm can be used more efficiently than the standard ML-PDA algorithm. With an extension, the SML-PDA algorithm also supports multitarget tracking. / Thesis / Doctor of Philosophy (PhD)
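The core SPHD representation can be sketched briefly: the intensity (PHD) function is carried by a set of cubic B-spline coefficients, so the represented function stays continuous while the filter manipulates only the coefficients, and its integral gives the expected number of targets. Knot placement and the actual prediction/update recursions are the thesis's contribution; everything below is an illustrative toy with assumed values.

```python
# Representing a 1-D PHD (intensity) function with a cubic B-spline and
# reading off the expected number of targets as its integral.
import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(-10, 10, 25)                 # coarse samples of the PHD
# Assumed intensity: two targets with weights 1.5 and 0.8.
phd = 1.5 * np.exp(-0.5 * (x - 3)**2) / np.sqrt(2 * np.pi) \
    + 0.8 * np.exp(-0.125 * (x + 4)**2) / np.sqrt(8 * np.pi)

spl = make_interp_spline(x, phd, k=3)        # cubic: order >= 3 keeps the
                                             # represented PHD smooth
n_expected = spl.integrate(-10, 10)          # integral of PHD = expected count
print(f"expected number of targets ~ {n_expected:.2f}")   # ~ 2.3 (= 1.5 + 0.8)
```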
28

Approximations and Applications of Nonlinear Filters / Approximation und Anwendung nichtlinearer Filter

Bröcker, Jochen 30 January 2003 (has links)
No description available.
29

A Stochastic Search Approach to Inverse Problems

Venugopal, Mamatha January 2016 (has links) (PDF)
The focus of the thesis is on the development of a few stochastic search schemes for inverse problems and their applications in medical imaging. After the introduction in Chapter 1, which motivates and puts in perspective the work done in later chapters, the main body of the thesis may be viewed as composed of two parts: while the first part concerns the development of stochastic search algorithms for inverse problems (Chapters 2 and 3), the second part elucidates the applicability of search schemes to inverse problems of interest in tomographic imaging (Chapters 4 and 5). The chapter-wise contributions of the thesis are summarized below. Chapter 2 proposes a Monte Carlo stochastic filtering algorithm for the recursive estimation of diffusive processes in linear/nonlinear dynamical systems that modulate the instantaneous rates of Poisson measurements. The same scheme is applicable when the set of partial and noisy measurements is of a diffusive nature. A key aspect of our development here is the filter-update scheme, derived from an ensemble approximation of the time-discretized nonlinear Kushner-Stratonovich equation, that is modified to account for Poisson-type measurements. Specifically, the additive update through a gain-like correction term, empirically approximated from the innovation integral in the filtering equation, eliminates the problem of particle collapse encountered in many conventional particle filters that adopt weight-based updates. Through a few numerical demonstrations, the versatility of the proposed filter is brought forth, first with application to filtering problems with diffusive or Poisson-type measurements and then to an automatic control problem wherein the extremization of the associated cost functional is achieved simply by an appropriate redefinition of the innovation process. The aim of one of the numerical examples in Chapter 2 is to minimize the structural response of a Duffing oscillator under external forcing. We pose this problem of active control within a filtering framework wherein the goal is to estimate the control force that minimizes an appropriately chosen performance index. We employ the proposed filtering algorithm to estimate the control force and the oscillator displacements and velocities that are minimized as a result of the application of the control force. Fig. 1 shows the time histories of the uncontrolled and controlled (a) displacements and (b) velocities of the oscillator; Fig. 2 plots the estimated control force against the external force applied. Stochastic filtering, despite its numerous applications, amounts only to a directed search and is best suited for inverse problems and optimization problems with unimodal solutions. In view of general optimization problems involving multimodal objective functions with a priori unknown optima, filtering, similar to a regularized Gauss-Newton (GN) method, may only serve as a local (or quasi-local) search. In Chapter 3, therefore, we propose a stochastic search (SS) scheme that, whilst maintaining the basic structure of a filtered martingale problem, also incorporates randomization techniques such as scrambling and blending, which are meant to aid in avoiding the so-called local traps.
The key contribution of this chapter is the introduction of yet another technique, termed state space splitting (3S), a paradigm based on the principle of divide-and-conquer. The 3S technique, incorporated within the optimization scheme, offers a better assimilation of measurements and is found to outperform filtering in the context of quantitative photoacoustic tomography (PAT), recovering the optical absorption field from sparsely available PAT data using a bare minimum ensemble. Beyond that, the proposed scheme is numerically shown to be better than, or at least as good as, CMA-ES (covariance matrix adaptation evolution strategies), one of the best-performing optimization schemes, in minimizing a set of benchmark functions. Table 1 compares the performance of the proposed SS scheme and CMA-ES in minimizing a set of 40-dimensional functions (F1-F20), all of which have their global minimum at 0, using an ensemble size of 20. Here, 10^-5 is the tolerance limit to be attained for the objective function value and MAX is the maximum number of iterations permissible for the optimization scheme to arrive at the global minimum. Chapter 4 gathers numerical and experimental evidence to support our conjecture in the previous chapters that even a quasi-local search (afforded, for instance, by the filtered martingale problem) is generally superior to a regularized GN method in solving inverse problems. Specifically, in this chapter, we solve the inverse problems of ultrasound modulated optical tomography (UMOT) and diffraction tomography (DT). In UMOT, we perform a spatially resolved recovery of the mean-squared displacements, p(r), of the scattering centres in a diffusive object by measuring the modulation depth in the decaying autocorrelation of the incident coherent light. This modulation is induced by the input ultrasound focussed on a specific region, referred to as the region of interest (ROI), in the object. Since the ultrasound-induced displacements are a measure of the material stiffness, in principle, UMOT can be applied for the early diagnosis of cancer in soft tissues. In DT, on the other hand, we recover the real refractive index distribution, n(r), of an optical fiber from experimentally acquired transmitted intensity of light traversing through it. In both cases, the filtering step encoded within the optimization scheme recovers superior reconstructions vis-à-vis the GN method in terms of quantitative accuracy. Fig. 3 gives a comparative cross-sectional plot through the centre of the reference and reconstructed p(r) images in UMOT when the ROI is at the centre of the object; here, the anomaly appears as an increase in the displacements at the centre of the ROI. Fig. 4 shows the corresponding cross-sectional plot of the reference and reconstructed refractive index distributions, n(r), of the optical fiber in DT. In Chapter 5, the SS scheme is applied to our main application, viz. photoacoustic tomography (PAT), for the recovery of the absorbed energy map, the optical absorption coefficient, and the chromophore concentrations in soft tissues.
Nevertheless, the main contribution of this chapter is to provide a single-step method for the recovery of the optical absorption field from both simulated and experimental time-domain PAT data. A single-step direct recovery is shown to yield better reconstructions than the generally adopted two-step method for quantitative PAT. Such a quantitative reconstruction may be converted to a functional image through a linear map. Alternatively, one could also perform a one-step recovery of the chromophore concentrations from the boundary pressure, as shown using simulated data in this chapter. Being a Monte Carlo scheme, the SS scheme is highly parallelizable, and the availability of such a machine-ready inversion scheme should finally enable PAT to emerge as a clinical tool in medical diagnostics. Fig. 5 compares the exact optical absorption map of the Shepp-Logan phantom with the reconstruction obtained from a direct (1-step) recovery (axes in m, colormap in mm^-1). Chapter 6 concludes the work with a brief summary of the results obtained and suggestions for future exploration of some of the schemes and applications described in this thesis.
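A toy rendition of the SS ingredients described above is sketched below: an ensemble is nudged by a gain-like additive update toward low-cost regions (the directed, filtering-like part) and occasionally scrambled to escape local traps. All specifics, including the pseudo-likelihood weights and step size, are illustrative assumptions; the thesis derives its scheme from a filtered martingale problem rather than this heuristic.

```python
# Toy ensemble stochastic search with an additive, gain-like update and
# occasional scrambling.  Illustrative only; not the thesis's derivation.
import numpy as np

def stochastic_search(cost, dim, n=20, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(0, 3, (n, dim))                 # ensemble of candidates
    for _ in range(iters):
        c = np.array([cost(x) for x in X])
        w = np.exp(-(c - c.min()))                 # pseudo-likelihood weights
        w /= w.sum()
        mean = w @ X                               # weighted ensemble mean
        X += 0.5 * (mean - X)                      # gain-like additive update
        X += rng.normal(0, 0.1, X.shape)           # exploration noise
        if rng.random() < 0.1:                     # occasional scrambling to
            k = rng.integers(n)                    # escape local traps
            X[k] = rng.normal(0, 3, dim)
    c = np.array([cost(x) for x in X])
    return X[c.argmin()], c.min()

# Usage on a multimodal benchmark (Rastrigin-like), global minimum at 0:
f = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
x_best, f_best = stochastic_search(f, dim=5)
print(f_best)
```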
