
Predictability of Nonstationary Time Series using Wavelet and Empirical Mode Decomposition Based ARMA Models

Lanka, Karthikeyan, January 2013
The premise of time series forecasting is that the past carries information about the future. How that information can be extracted and used to extrapolate future events constitutes the crux of time series analysis and forecasting. Several families of methods have been developed, including qualitative techniques (e.g., the Delphi method), causal techniques (e.g., least squares regression) and quantitative techniques (e.g., smoothing methods and time series models); the common idea is to establish a model, theoretically or mathematically, from past observations and to estimate the future from it. Of all these, time series models such as the autoregressive moving average (ARMA) process have gained popularity because they are simple to implement and produce accurate forecasts. However, these models assume that a time series possesses certain properties, and classical decomposition techniques were developed to meet those requirements: they express a time series in terms of simple patterns (trend, cyclical and seasonal components) plus noise. Decomposing a series into such components, modeling each component with a forecasting process and combining the component forecasts yields better performance than standard forecasting techniques applied directly. All these methods rest on moving average computation, but classical decomposition has drawbacks: it imposes a fixed number of components on every series, the decomposition is data-independent, and the edges of a series may not be modeled properly during the moving average computation, which degrades long-range forecasts. These issues call for more efficient and advanced decomposition techniques such as wavelets and Empirical Mode Decomposition (EMD), two of the most innovative concepts in time series analysis, aimed at nonlinear and nonstationary series. Hence, this research ascertains the predictability of nonstationary time series using wavelet- and EMD-based ARMA models. Wavelets developed out of Fourier analysis and the windowed Fourier transform; the thesis first motivates the need for wavelets and then discusses the advantages they offer. Wavelets were originally defined for continuous signals; to match real-world requirements, a discrete counterpart, the Discrete Wavelet Transform (DWT), was later defined, and it is the DWT that this thesis uses for time series decomposition. The theory behind the decomposition and the mathematical view of DWT-based decomposition, including the decomposition algorithm, are discussed in detail. EMD belongs to the same class of decomposition techniques: it builds on the observation that most natural time series contain multiple frequencies, so that different scales exist simultaneously.
Compared with standard Fourier analysis and wavelet algorithms, EMD adapts more readily to a wide range of nonstationary time series. It decomposes a complicated series into a small, finite number of empirical modes (Intrinsic Mode Functions, IMFs), each of which carries information about the original series. The EMD decomposition algorithm is presented after the conceptual discussion. The proposed forecasting algorithm coupling EMD with ARMA models is then presented; it takes as input the number of time steps ahead to forecast. To test the wavelet- and EMD-based algorithms on nonstationary series, streamflow data from the USA and rainfall data from India are used: four nonstationary streamflow sites (USGS data resources) with monthly total volumes and two nonstationary gridded rainfall sites (IMD) with monthly total rainfall. Predictability is checked in two scenarios, six-months-ahead and twelve-months-ahead forecasts, evaluated with the Normalized Root Mean Square Error (NRMSE) and the Nash-Sutcliffe Efficiency Index (Ef). On these measures, the wavelet-based analyses track the observed variations well for six-months-ahead forecasts at most sites. Although both methods capture the minima of the series effectively at both horizons, the wavelet-based method yields better twelve-months-ahead forecasts than the EMD-based method. It is therefore inferred that the wavelet-based method has the better prediction capability, despite some limitations of time series methods and of the manner in which the decomposition takes place. The study concludes that the wavelet-based algorithm could be used to model events such as droughts with reasonable accuracy, and suggests model modifications that would extend its applicability to other areas of hydrology.
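A minimal sketch of the decompose-model-recombine scheme the abstract describes, in Python. This is not the thesis code: PyWavelets and statsmodels stand in for the author's implementation, and the wavelet, decomposition level and ARMA order are illustrative assumptions. The NRMSE and Nash-Sutcliffe definitions are the commonly used ones.

```python
import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA

def wavelet_components(x, wavelet="db4", level=2):
    """Additive decomposition: reconstruct each DWT coefficient band on
    its own. By linearity of the inverse transform, the returned
    components sum back to the original series."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c)
                for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(kept, wavelet)[: len(x)])
    return comps

def wavelet_arma_forecast(x, steps=6, order=(2, 0, 1)):
    """Fit an ARMA model to each component; sum the component forecasts."""
    total = np.zeros(steps)
    for comp in wavelet_components(np.asarray(x, dtype=float)):
        fit = ARIMA(comp, order=order).fit()
        total += np.asarray(fit.forecast(steps=steps))
    return total

def nrmse(obs, pred):
    """RMSE normalized by the observed range (one common convention)."""
    obs, pred = np.asarray(obs), np.asarray(pred)
    return np.sqrt(np.mean((obs - pred) ** 2)) / (obs.max() - obs.min())

def nash_sutcliffe(obs, pred):
    """Ef = 1 - sum((obs - pred)^2) / sum((obs - mean(obs))^2)."""
    obs, pred = np.asarray(obs), np.asarray(pred)
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

The EMD variant replaces `wavelet_components` with the IMFs of an EMD routine; the modeling and recombination steps are unchanged.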

Quelques problèmes en analyse harmonique non commutative / Some problems in noncommutative harmonic analysis

Hong, Guixiang, 29 September 2012

Characterization of carotid artery plaques using noninvasive vascular ultrasound elastography

Li, Hongliang, 09 1900
Atherosclerosis is a complex vascular disease that affects artery walls (by thickening) and lumens (by plaque formation). The rupture of a carotid artery plaque may induce ischemic stroke and complications. Several medical imaging modalities are used to evaluate the stability of a plaque, but they present limitations such as irradiation, invasiveness, low clinical availability and high cost. Ultrasound is a safe imaging method with real-time capability for the assessment of biological tissues, and it is clinically used for early screening and diagnosis of carotid artery plaques. However, current vascular ultrasound technologies only identify the morphology of a plaque in terms of echo brightness, or the impact of the vessel narrowing on flow properties, which may not be sufficient for optimum diagnosis. Noninvasive vascular elastography (NIVE) has shown promise for determining the stability of a plaque. Specifically, NIVE can determine the strain field of the moving vessel wall of a carotid artery caused by the natural cardiac pulsation.
Due to Young's modulus differences among vessel tissues, the different components of a plaque present different strains and can thus be detected, thereby helping to characterize plaque stability. Currently, sub-optimum performance and computational efficiency limit the clinical acceptance of NIVE as a fast and efficient method for the early diagnosis of vulnerable plaques. There is therefore a need to develop NIVE further as a non-invasive, fast and computationally inexpensive imaging tool to better characterize plaque vulnerability. NIVE analysis consists of an image formation step and an image post-processing step, and this thesis aimed to systematically improve the accuracy of both to facilitate prediction of carotid plaque vulnerability. The first effort of this thesis targeted image formation (Chapter 5). Transverse oscillation beamforming was introduced into NIVE, and the performance of transverse oscillation imaging coupled with two model-based strain estimators, the affine phase-based estimator (APBE) and the Lagrangian speckle model estimator (LSME), was evaluated. In all simulations and in vitro studies, the LSME without transverse oscillation imaging outperformed the APBE with transverse oscillation imaging. Nonetheless, comparable or better principal strain estimates could be obtained with the LSME using transverse oscillation imaging in the case of complex and heterogeneous tissue structures. During the acquisition of ultrasound signals for image formation, out-of-plane motions, perpendicular to the two-dimensional (2-D) scan plane, occur. The second objective of this thesis was to evaluate the influence of out-of-plane motions on the performance of 2-D NIVE (Chapter 6). For this purpose, we designed an in vitro experimental setup to simulate out-of-plane motions of 1 mm, 2 mm and 3 mm. The in vitro results showed more strain estimation artifacts for the LSME as the magnitude of out-of-plane motion increased; even so, robust strain estimates were obtained with 2.0 mm of out-of-plane motion (correlation coefficients higher than 0.85). For a clinical dataset of 18 participants with carotid artery stenosis, we proposed using two sets of scans of the same carotid plaque, one cross-sectional and the other longitudinal, to deduce the out-of-plane motions (estimated to range from 0.25 mm to 1.04 mm). Clinical results showed that strain estimates remained reproducible for all motion magnitudes, since inter-frame correlation coefficients were higher than 0.70 and normalized cross-correlations between radiofrequency images were above 0.93, indicating that confident motion estimates can be obtained when analyzing clinical datasets of carotid plaques with the LSME. Finally, regarding image post-processing, in which NIVE algorithms estimate vessel wall strains from reconstructed images to identify soft and hard tissues, the last objective of this thesis was to develop a strain estimation algorithm with pixel-wise resolution and high computational efficiency (Chapter 7). We proposed a sparse model strain estimator (SMSE) in which the dense strain field is parameterized with discrete cosine transform descriptions, yielding affine strain components (axial and lateral strains and shears) without mathematical derivative operations.
Compared with the LSME, the SMSE reduced estimation errors in simulations and in in vitro and in vivo tests. Moreover, the sparse implementation of the SMSE reduced processing time by a factor of 4 to 25 relative to the LSME across the simulation, in vitro and in vivo results, suggesting a possible real-time implementation of NIVE.
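The abstract's description of the SMSE (a dense strain field parameterized by discrete cosine transform descriptions, so that strain components follow without derivative operations) can be illustrated with a toy 1-D sketch. This shows only the underlying idea, not the published estimator: `coeffs` is a hypothetical set of DCT-II displacement coefficients, whereas the real method fits such descriptions to 2-D ultrasound data.

```python
import numpy as np

def dct_displacement_and_strain(coeffs, n):
    """Evaluate u(x) = sum_k c_k cos(pi k (x + 0.5) / n) on n samples,
    together with its exact derivative. Because the parameterization is
    a smooth cosine series, the strain du/dx comes out analytically,
    with no finite-difference (derivative) operation on sampled data."""
    x = np.arange(n)
    u = np.zeros(n)
    strain = np.zeros(n)
    for k, c in enumerate(coeffs):
        phase = np.pi * k * (x + 0.5) / n
        u += c * np.cos(phase)
        strain += -c * (np.pi * k / n) * np.sin(phase)
    return u, strain

# Hypothetical low-order DCT description of a smooth displacement field.
u, eps = dct_displacement_and_strain([0.0, 0.3, -0.1], n=128)
```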

Filtrace signálů EKG s využitím vlnkové transformace / Wavelet filtering of ECG Signals

Ryšánek, Jan, January 2012
This work deals with filtering of the ECG signal, the first stage and the basis for successful delineation and subsequent diagnosis from the ECG. Filtering here means suppressing power-line interference from the electrical grid. The work describes filters realized through the wavelet transform, as well as linear filtering, as means of suppressing this interference: the stationary (dyadic) wavelet transform, the wavelet packet transform and wavelet Wiener filtering on the one hand, and two narrow-band FIR filters on the other. The objective of this work is to implement the various wavelet and linear filters in Matlab, filter ECG signals, and compare the success of the filtering methods. The ECG signals used in this work come from the CSE database.
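A hedged Python sketch of two of the ingredients named above: a narrow-band FIR stop-band filter for power-line interference and dyadic-DWT soft-thresholding. The sampling rate, the 50 Hz grid frequency, the wavelet and the threshold rule are assumptions for illustration; the thesis's wavelet Wiener and wavelet-packet variants are not shown.

```python
import numpy as np
import pywt
from scipy.signal import firwin, filtfilt

def notch_grid_interference(ecg, fs=500.0, f0=50.0, width=2.0, numtaps=401):
    """Narrow-band FIR band-stop around the power-line frequency f0."""
    taps = firwin(numtaps, [f0 - width, f0 + width],
                  pass_zero="bandstop", fs=fs)
    return filtfilt(taps, [1.0], ecg)     # zero-phase filtering

def wavelet_denoise(ecg, wavelet="db4", level=4):
    """Dyadic DWT with soft-thresholded detail coefficients."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    out = [coeffs[0]]                      # approximation kept as-is
    for d in coeffs[1:]:
        sigma = np.median(np.abs(d)) / 0.6745          # robust noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(len(ecg)))  # universal threshold
        out.append(pywt.threshold(d, thr, mode="soft"))
    return pywt.waverec(out, wavelet)[: len(ecg)]
```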

Development of a continuous condition monitoring system based on probabilistic modelling of partial discharge data for polymeric insulation cables

Ahmed, Zeeshan, 09 August 2019
Partial discharge (PD) measurement is a widely accepted online insulation condition assessment method for high-voltage equipment. Two experimental PD measuring setups were established to study how PD characteristics vary as insulation degrades, in terms of the physical phenomena taking place at PD sources, up to the point of failure. Probabilistic lifetime modeling techniques based on classification, regression and multivariate time series analysis were applied to a system of PD response variables, i.e., average charge, pulse repetition rate, average charge current, and largest repetitive discharge magnitude over the data acquisition period. Experimental lifelong PD data obtained from samples subjected to accelerated degradation was used to study the dynamic trends and relationships among these response variables. Distinguishable data clusters detected by the t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm allow state-of-the-art modeling techniques to be examined on PD data, and the response behavior of the trained models distinguishes the different stages of insulation degradation. In parallel with the classification and regression models, a multivariate time series analysis was performed to forecast PD activity (the PD response variables corresponding to insulation degradation). Observed data and forecasted mean values lie within the 95% confidence intervals of the forecasts over a definite horizon, demonstrating the soundness and accuracy of the models. A life-predicting model based on the cointegrated relations between the response variables, with trained model responses correlated with experimentally evaluated time-to-breakdown values and well-known physical discharge mechanisms, can be used to set an early-warning alarm trigger and as a step towards long-term continuous monitoring of partial discharge activity. Furthermore, this dissertation proposes an effective PD monitoring system based on wavelet and deflation compression techniques for optimal data acquisition, together with an algorithm for large-scale data reduction that minimizes PD data size and retains only the useful PD information. This historically recorded information can then be used not only for post-fault diagnostics but also for improving the performance of the modeling algorithms and for accurate threshold detection.
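As a sketch of the clustering and forecasting steps described above, the fragment below embeds a matrix of PD response variables in two dimensions with t-SNE and fits a small vector autoregression. The feature matrix is a random stand-in for the experimental PD data, and the VAR lag order is an arbitrary choice, not the dissertation's.

```python
import numpy as np
from sklearn.manifold import TSNE
from statsmodels.tsa.api import VAR

# One row per acquisition window; the four columns play the role of the
# PD response variables (average charge, pulse repetition rate, average
# charge current, largest repetitive discharge magnitude).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))

# 2-D embedding; for real PD data, distinguishable clusters in this
# plane would correspond to stages of insulation degradation.
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(X)

# Multivariate forecasting of the response variables with a VAR(2) model.
var_fit = VAR(X).fit(2)
forecast = var_fit.forecast(X[-var_fit.k_ar:], steps=12)  # 12 windows ahead
```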

Video extraction for fast content access to MPEG compressed videos

Jiang, Jianmin, Weng, Y., 09 June 2009
As existing video processing technology is primarily developed in the pixel domain, yet digital video is stored in compressed format, applying those techniques to compressed videos would require decompression. For discrete cosine transform (DCT)-based MPEG compressed videos, the computing cost of the standard row-by-row and column-by-column inverse DCT (IDCT) for a block of 8 × 8 elements is 4096 multiplications and 4032 additions, although a practical implementation only requires 1024 multiplications and 896 additions. In this paper, we propose a new algorithm to extract videos directly in the MPEG compressed (DCT) domain without full IDCT, described in three extraction schemes: 1) video extraction in 2 × 2 blocks with four coefficients; 2) video extraction in 4 × 4 blocks with four DCT coefficients; and 3) video extraction in 4 × 4 blocks with nine DCT coefficients. The computing cost incurred is only 8 additions and no multiplications for the first scheme, 2 multiplications and 28 additions for the second, and 47 additions (no multiplications) for the third. Extensive experiments were carried out, and the results reveal that: 1) the extracted video maintains competitive quality in terms of visual perception and inspection, and 2) the extracted videos preserve the content well in comparison with fully decompressed ones in terms of histogram measurement. The proposed algorithm thus provides useful tools for bridging the gap between the pixel domain and the compressed domain, facilitating content analysis with low latency and high efficiency in applications such as surveillance video, interactive multimedia, and image processing.
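The paper's exact scheme is not spelled out in the abstract, but one plausible realization of the first scheme (a 2 × 2 block from four DCT coefficients) is the sign pattern of a separable 2-point inverse DCT with the constant scale factor folded away, which costs exactly the quoted 8 additions and no multiplications:

```python
def extract_2x2(F):
    """2x2 spatial block from the four lowest-frequency DCT coefficients
    F[0][0], F[0][1], F[1][0], F[1][1] of an 8x8 block, via the sign
    pattern of a separable 2-point inverse DCT (constant scaling folded
    away): 8 additions/subtractions, no multiplications."""
    f00, f01, f10, f11 = F[0][0], F[0][1], F[1][0], F[1][1]
    s0, s1 = f00 + f10, f00 - f10        # vertical 2-point butterfly
    t0, t1 = f01 + f11, f01 - f11        # same butterfly, second column
    return [[s0 + t0, s0 - t0],          # horizontal 2-point butterfly
            [s1 + t1, s1 - t1]]
```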

Moments of the Ruin Time in a Lévy Risk Model

Strietzel, Philipp Lukas, Behme, Anita, 08 April 2024
We derive formulas for the moments of the ruin time in a Lévy risk model and use these to determine the asymptotic behavior of the moments of the ruin time as the initial capital tends to infinity. In the special case of the perturbed Cramér-Lundberg model with phase-type or even exponentially distributed claims, we explicitly compute the first two moments of the ruin time. All our considerations distinguish between the profitable and the unprofitable setting.
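The closed-form moment formulas are in the paper; as a hedged stand-in, the Python sketch below estimates the conditional mean ruin time by Monte Carlo in the classical (unperturbed) Cramér-Lundberg model with exponential claims. All parameter values are illustrative; with c > λμ the setting is profitable, ruin happens with probability less than one, and the mean is taken over ruined paths only.

```python
import numpy as np

def ruin_time(u, c, lam, mu, horizon=200.0, rng=None):
    """First time the surplus u + c*t - (claims so far) goes negative.

    Classical Cramér-Lundberg model: Poisson(lam) claim arrivals,
    Exp(mean mu) claim sizes. Returns np.inf if no ruin before
    `horizon`. Ruin can only occur at claim instants, since the surplus
    drifts upward between claims, so checking after each claim suffices."""
    rng = rng or np.random.default_rng()
    t, total_claims = 0.0, 0.0
    while t < horizon:
        t += rng.exponential(1.0 / lam)       # inter-arrival time
        total_claims += rng.exponential(mu)   # claim size, mean mu
        if u + c * t - total_claims < 0.0:
            return t
    return np.inf

rng = np.random.default_rng(1)
times = np.array([ruin_time(u=5.0, c=1.1, lam=1.0, mu=1.0, rng=rng)
                  for _ in range(5000)])
ruined = times[np.isfinite(times)]
print("P(ruin) ~", len(ruined) / len(times))   # ruin probability estimate
print("E[tau | ruin] ~", ruined.mean())        # first conditional moment
```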

Etude de champs de température séparables avec une double décomposition en valeurs singulières : quelques applications à la caractérisation des propriétés thermophysiques des matériaux et au contrôle non destructif / Study of separable temperature fields with a double singular value decomposition: some applications in characterization of thermophysical properties of materials and non destructive testing

Ayvazyan, Vigen, 14 December 2012
Infrared thermography is a widely used method for the characterization of thermophysical properties of materials. The advent of practical, inexpensive laser diodes with a broad spectrum of characteristics extends the metrological possibilities of infrared cameras and provides a set of powerful new tools for thermal characterization and non-destructive evaluation. However, new difficulties must be overcome, such as the processing of large volumes of noisy data and the low sensitivity of such data to the estimated parameters.
This requires revisiting the existing methods of signal processing and adopting new, sophisticated mathematical tools for data compression and the processing of relevant information. The new strategies consist in using orthogonal transforms of the signal as prior data-compression tools that allow measurement noise to be reduced and controlled. Correlation analysis, based on the local correlation between partial derivatives of the experimental signal, completes these strategies. A theoretical analogy in Fourier space has been developed in order to better understand the "physical" meaning of modal approaches. The response to an instantaneous point source of heat has been revisited both numerically and experimentally. By using separable temperature fields, a new inversion technique based on a double singular value decomposition of the experimental signal has been introduced. In comparison with previous methods, it takes two- or three-dimensional heat diffusion into account and therefore better exploits the spatial content of infrared images. Numerical and experimental examples allowed us to validate, in a first approach, this new estimation method for the characterization of longitudinal thermal diffusivities. Non-destructive testing applications based on the new technique are also introduced. An old issue, determining the initial temperature field from noisy data, has been approached in a new light; the need to know the thermal diffusivities of an orthotropic medium and to account for often three-dimensional heat transfer makes it a complicated problem. The implementation of the double singular value decomposition achieved interesting results given its ease of use. Indeed, modal approaches are statistical methods based on high-volume data processing and, as observed, are accordingly more robust to measurement noise.
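As a sketch of the compression and denoising idea behind the modal approach (not the thesis's double decomposition, which applies the SVD along each spatial direction of a separable field), the Python fragment below truncates the SVD of a space-time temperature matrix:

```python
import numpy as np

def svd_truncate(T, rank):
    """Project the space-time temperature matrix T (n_x x n_t) onto its
    leading `rank` singular triplets; for a separable, low-rank field
    the discarded triplets carry mostly measurement noise."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# A separable rank-1 field T(x, t) = f(x) g(t) plus noise.
x = np.linspace(0.0, 1.0, 64)
t = np.linspace(0.0, 1.0, 200)
T = np.outer(np.exp(-(x - 0.5) ** 2 / 0.02), np.exp(-3.0 * t))
T_noisy = T + 0.01 * np.random.default_rng(0).normal(size=T.shape)
T_filtered = svd_truncate(T_noisy, rank=1)
```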

Numerical analysis and multi-precision computational methods applied to the extant problems of Asian option pricing and simulating stable distributions and unit root densities

Cao, Liang, January 2014
This thesis considers new methods that exploit recent developments in computer technology to address three extant problems in the area of Finance and Econometrics. The problem of Asian option pricing has endured for the last two decades in spite of many attempts to find a robust solution across all parameter values. All recently proposed methods are shown to fail when computations are conducted using standard machine precision because as more and more accuracy is forced upon the problem, round-off error begins to propagate. Using recent methods from numerical analysis based on multi-precision arithmetic, we show using the Mathematica platform that all extant methods have efficacy when computations use sufficient arithmetic precision. This creates the proper framework to compare and contrast the methods based on criteria such as computational speed for a given accuracy. Numerical methods based on a deformation of the Bromwich contour in the Geman-Yor Laplace transform are found to perform best provided the normalized strike price is above a given threshold; otherwise methods based on Euler approximation are preferred. The same methods are applied in two other contexts: the simulation of stable distributions and the computation of unit root densities in Econometrics. The stable densities are all nested in a general function called a Fox H function. The same computational difficulties as above apply when using only double-precision arithmetic but are again solved using higher arithmetic precision. We also consider simulating the densities of infinitely divisible distributions associated with hyperbolic functions. Finally, our methods are applied to unit root densities. Focusing on the two fundamental densities, we show our methods perform favorably against the extant methods of Monte Carlo simulation, the Imhof algorithm and some analytical expressions derived principally by Abadir. Using Mathematica, the main two-dimensional Laplace transform in this context is reduced to a one-dimensional problem.
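A self-contained illustration of the round-off propagation the abstract describes, with the mpmath library standing in for the thesis's Mathematica platform. Summing the alternating Taylor series of exp(-30) term by term is hopeless in double precision, because intermediate terms reach about 8e11 while the answer is about 9.4e-14; with 50-digit arithmetic the same loop succeeds. This shows the flavor of the fix, not the Geman-Yor pricing computation itself.

```python
import math
import mpmath

def exp_taylor(x, terms=200):
    """Sum the Taylor series of exp(x) term by term, in whatever
    arithmetic the type of x provides (float or mpmath.mpf)."""
    term = x ** 0          # 1 in the arithmetic of x
    total = term
    for n in range(1, terms):
        term = term * x / n
        total += term
    return total

mpmath.mp.dps = 50                  # 50 significant digits
print(exp_taylor(-30.0))            # double precision: ruined by cancellation
print(exp_taylor(mpmath.mpf(-30)))  # multi-precision: correct
print(math.exp(-30))                # reference value, ~9.36e-14
```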

Application of new shape descriptors and theory of uncertainty in image processing / Примена нових дескриптора облика и теорије неодређености у обради слике / Primena novih deskriptora oblika i teorije neodređenosti u obradi slike

Ilić, Vladimir, 20 December 2019
The doctoral thesis deals with the study of quantitative aspects of shape attributes suitable for numerical characterization, i.e., shape descriptors, as well as the theory of uncertainty, particularly the theory of fuzzy sets, and their application in image processing. The original contributions and results of the thesis fall naturally into two groups, in accordance with the approaches used to obtain them. The first group relates to the introduction of new shape descriptors (hexagonality and fuzzy squareness) and associated measures that evaluate to what extent a given shape satisfies these properties. The introduced measures are naturally defined, theoretically well founded, and satisfy most of the desirable properties expected of a well-defined shape measure: both range through (0,1] and achieve the largest possible value 1 if and only if the shape considered is a hexagon, respectively a fuzzy square; no shape of non-zero area has hexagonality or fuzzy squareness equal to 0; both measures are invariant to similarity transformations; and both produce results consistent with theoretically proven results as well as with human perception and expectation. Numerous experiments on synthetic and real examples illustrate the theoretical considerations and give clearer insight into the behaviour of the introduced measures. Their advantages and applicability are illustrated in various tasks of recognizing and classifying object images from several well-known and frequently used image datasets. In addition, the thesis contains research related to the application of the theory of uncertainty, in the narrower sense fuzzy set theory, to different tasks of image processing and shape analysis. We distinguish between tasks relating to the extraction of shape features and those relating to improving the performance of different image processing and analysis techniques. Regarding the first group, fuzzy set theory is applied to introduce the new fuzzy shape descriptor named fuzzy squareness and to measure how fuzzy square a given fuzzy shape is. In the second group, we study improving the performance of estimates of both the Euclidean distance transform in three dimensions (3D EDT) and the centroid distance signature of a shape in two dimensions. The improvement is particularly reflected in the achieved accuracy and precision, in increased invariance to geometrical transformations (e.g., rotation and translation), and in robustness in the presence of noise and of the uncertainty resulting from imperfections of devices or imaging conditions. The latter also constitutes the second group of original contributions of the thesis. It is motivated by the fact that shape analysis traditionally assumes that the objects appearing in an image have previously been uniquely and crisply extracted from it. This is usually achieved by a crisp (i.e., binary) segmentation of the original image, in which the decision on a point's membership of an imaged object is made in a sharp manner. Nevertheless, due to imperfections of imaging conditions or devices, the presence of noise, and various types of imprecision (e.g., the lack of a precise object boundary or of clear boundaries between objects, errors in computation, lack of information, etc.), different levels of uncertainty and vagueness may arise when deciding the membership of an image point. This is particularly noticeable when the continuous image domain is discretized (i.e., sampled) and a single image element, related to its corresponding sample point, is covered by multiple objects in the image. Crisp segmentation can therefore lead to wrong membership decisions and, consequently, to irreversible loss of information about the imaged objects: it does not permit an image point to belong to a particular object to some degree, so points partially contained in an object before segmentation risk not being assigned to that object after segmentation. If, instead, segmentation decides membership in a gradual rather than crisp manner, enabling a point to belong to an object to some extent, then a sharp membership decision can be avoided at this early analysis step, and a potentially large amount of object information can be preserved after segmentation and used in the following analysis steps. In this regard, we are interested in one specific type of fuzzy segmentation, named coverage image segmentation, which results in a fuzzy digital image representation in which the membership value assigned to each image element is proportional to its relative coverage by a continuous object in the original image. In this thesis, we study a coverage digitization model providing such coverage digital image representations and show that significant improvements in estimating the 3D EDT, as well as the centroid distance signature of a continuous shape, can be achieved if the coverage information available in this image representation is appropriately taken into account.
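A toy illustration of coverage digitization as described above, not the thesis's model: each pixel's value is made proportional to its relative coverage by a continuous object (here a disk, with coverage approximated by supersampling), and even the simplest downstream estimate, object area, is typically recovered more accurately from coverage values than from a crisp 0/1 segmentation of the same image.

```python
import numpy as np

def disk_coverage(n, cx, cy, r, samples=8):
    """Relative coverage of each pixel of an n x n grid by the disk of
    radius r centred at (cx, cy), approximated by supersampling each
    pixel with samples x samples points."""
    cov = np.zeros((n, n))
    offs = (np.arange(samples) + 0.5) / samples
    for i in range(n):
        for j in range(n):
            xs = j + offs[None, :]           # subsample x-coordinates
            ys = i + offs[:, None]           # subsample y-coordinates
            cov[i, j] = np.mean((xs - cx) ** 2 + (ys - cy) ** 2 <= r * r)
    return cov

cov = disk_coverage(32, cx=16.3, cy=15.7, r=10.2)
true_area = np.pi * 10.2 ** 2
print(abs(cov.sum() - true_area))            # coverage-based area error
print(abs((cov >= 0.5).sum() - true_area))   # crisp-segmentation area error
```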
