301 |
Machine Learning Methods to Understand Textual Data
Unknown Date (has links)
The amount of textual data produced every minute on the internet is extremely high. Processing this tremendous volume of mostly unstructured data is not straightforward, but the enormous amount of useful information buried within it motivates scientists to investigate efficient and effective techniques and algorithms for discovering meaningful patterns. Social network applications give people around the world opportunities to stay in contact and share their knowledge through features such as chat, comments, and discussion boards. In everyday conversation, people usually do not care about spelling or accurate grammatical construction, so extracting information from such datasets is more complicated. Text mining can be a solution to this problem. Text mining is a knowledge discovery process used to extract patterns from natural language. Applying text mining techniques to social networking websites can reveal a significant amount of information. Text mining in conjunction with social networks can be used to find the general opinion about a particular subject, discover human thinking patterns, and identify groups. In this study, we investigate machine learning methods for textual data in six chapters. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
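As a small illustration of the preprocessing step such text mining depends on, the sketch below normalises noisy social-media text (lowercase, strip punctuation) and surfaces frequently used terms. The posts and the frequency threshold are invented for illustration; real pipelines would add stemming, stop-word removal, and spelling normalisation.

```python
import re
from collections import Counter

def extract_terms(posts, min_count=2):
    """Surface frequently used terms from noisy social-media text.

    A deliberately tiny text-mining sketch: lowercase, strip
    punctuation, count tokens, keep terms seen at least min_count
    times.
    """
    counts = Counter()
    for post in posts:
        counts.update(re.findall(r"[a-z0-9']+", post.lower()))
    return [term for term, c in counts.most_common() if c >= min_count]

# Invented example posts with the informal spelling typical of chat.
posts = [
    "gr8 game last nite!!",
    "the game was gr8, no doubt",
    "cant wait for the next game",
]
terms = extract_terms(posts)
```

Even this crude frequency count recovers "game" as the dominant topic despite the nonstandard spelling around it.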
|
302 |
Machine Learning Algorithms with Big Medicare Fraud Data
Unknown Date (has links)
Healthcare is an integral component of people's lives, especially for the rising elderly population, and must be affordable. The United States Medicare program is vital in serving the needs of the elderly. The growing number of people enrolled in the Medicare program, along with the enormous volume of money involved, increases the appeal for, and risk of, fraudulent activities. For many real-world applications, including Medicare fraud, the interesting observations tend to be less frequent than the normative observations. This difference between the normal observations and those of interest can create highly imbalanced datasets. The problem of class imbalance, including the classification of rare cases indicating extreme class imbalance, is an important and well-studied area in machine learning. Research on the effects of class imbalance with big data in the real-world Medicare fraud application domain, however, is limited. In particular, detecting fraud in Medicare claims is critical in lessening the financial and personal impacts of these transgressions. Fortunately, the healthcare domain is one area where the successful detection of fraud can garner meaningful positive results. The application of machine learning techniques, together with methods to mitigate the adverse effects of class imbalance and rarity, can be used to detect fraud and lessen its impact on all Medicare beneficiaries. This dissertation presents the application of machine learning approaches to detect Medicare provider claims fraud in the United States. We discuss novel techniques to process three big Medicare datasets and create a new, combined dataset, which includes mapping fraud labels associated with known excluded providers. We investigate the ability of machine learning techniques, unsupervised and supervised, to detect Medicare claims fraud, and leverage data sampling methods to lessen the impact of class imbalance and increase fraud detection performance. Additionally, we extend the study of class imbalance to assess the impact of rare cases in big data on Medicare fraud detection. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
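One of the simplest data sampling methods used against class imbalance is random undersampling of the majority class. The sketch below is a generic illustration of that idea, not the dissertation's pipeline; the toy data and the 1-in-10 fraud rate are invented.

```python
import random

def random_undersample(X, y, minority_label=1, seed=0):
    """Randomly drop majority-class rows until classes are balanced.

    A common mitigation for class imbalance in fraud detection: the
    rare (fraud) class is kept intact and the majority class is
    subsampled down to the same size.
    """
    rng = random.Random(seed)
    minority = [(x, t) for x, t in zip(X, y) if t == minority_label]
    majority = [(x, t) for x, t in zip(X, y) if t != minority_label]
    majority = rng.sample(majority, k=len(minority))
    balanced = minority + majority
    rng.shuffle(balanced)
    Xb, yb = zip(*balanced)
    return list(Xb), list(yb)

# Toy example: 1 "fraud" case per 9 "normal" claims.
X = [[i] for i in range(100)]
y = [1 if i % 10 == 0 else 0 for i in range(100)]
Xb, yb = random_undersample(X, y)
```

After undersampling, the two classes contribute equally, at the cost of discarding majority-class information; that trade-off is why such methods are usually evaluated empirically, as the dissertation does.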
|
303 |
Ensemble Learning Algorithms for the Analysis of Bioinformatics Data
Unknown Date (has links)
Developments in advanced technologies, such as DNA microarrays, have generated tremendous amounts of data for researchers in the field of bioinformatics. These state-of-the-art technologies present not only unprecedented opportunities to study biological phenomena of interest, but also significant challenges in processing the data. Furthermore, these datasets inherently exhibit a number of challenging characteristics, such as class imbalance, high dimensionality, small dataset size, noisy data, and complexity in the form of hard-to-distinguish decision boundaries between classes.

In recognition of these challenges, this dissertation utilizes a variety of machine-learning and data-mining techniques, such as ensemble classification algorithms in conjunction with data sampling and feature selection, to alleviate these problems while improving the classification results of models built on these datasets. In building classification models, however, researchers and practitioners encounter the challenge that no single classifier performs relatively well in all cases. Numerous classification approaches, such as ensemble learning methods, have therefore been developed to address this problem successfully in a majority of circumstances. Ensemble learning is a promising technique that generates multiple classification models and then combines their decisions into a single final result; it often performs better than single base classifiers on classification tasks.

This dissertation conducts thorough empirical research through a series of case studies that evaluate how ensemble learning techniques can be used to enhance overall classification performance and improve the generalization ability of ensemble models. It investigates the boosting, bagging, and random forest algorithms, and proposes a number of modifications to existing ensemble techniques to further improve classification results. It examines the effectiveness of ensemble learning techniques in accounting for the challenging characteristics of class imbalance and difficult-to-learn class decision boundaries. Next, it looks into ensemble methods that are relatively tolerant of class noise and can not only account for the problem of class noise but also improve classification performance. Finally, it examines the joint effects of data sampling and ensemble techniques, to determine whether sampling can further improve the classification performance of the resulting ensemble models. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2016. / FAU Electronic Theses and Dissertations Collection
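The core of bagging, one of the ensemble families the dissertation studies, can be sketched in a few lines: train several base learners on bootstrap resamples and combine them by majority vote. This is a generic illustration with one-feature threshold "stumps" on invented data, not the dissertation's algorithms.

```python
import random
from collections import Counter

def bootstrap_sample(X, y, rng):
    # Sample n rows with replacement (a standard bootstrap resample).
    idx = [rng.randrange(len(X)) for _ in range(len(X))]
    return [X[i] for i in idx], [y[i] for i in idx]

def train_stump(X, y):
    # One-feature threshold stump: split at the mean, label each side
    # by the majority class on that side.
    xs = [x[0] for x in X]
    thresh = sum(xs) / len(xs)
    left = [t for x, t in zip(X, y) if x[0] <= thresh]
    right = [t for x, t in zip(X, y) if x[0] > thresh]
    left_lab = Counter(left).most_common(1)[0][0] if left else 0
    right_lab = Counter(right).most_common(1)[0][0] if right else 1
    return lambda x: left_lab if x[0] <= thresh else right_lab

def bagging_predict(models, x):
    # Combine the base learners' decisions by majority vote.
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

rng = random.Random(42)
X = [[i] for i in range(20)]
y = [0] * 10 + [1] * 10          # class 1 for feature >= 10
models = [train_stump(*bootstrap_sample(X, y, rng)) for _ in range(11)]
pred = bagging_predict(models, [15])
```

Each stump sees a slightly different resample, so individual errors tend to cancel in the vote, which is the intuition behind bagging's improved generalization.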
|
304 |
Unravelling higher order chromatin organisation through statistical analysis
Moore, Benjamin Luke, January 2016 (has links)
Recent technological advances underpinned by high throughput sequencing have given new insights into the three-dimensional structure of mammalian genomes. Chromatin conformation assays have been the critical development in this area, particularly the Hi-C method, which ascertains genome-wide patterns of intra- and inter-chromosomal contacts. However, many open questions remain concerning the functional relevance of such higher order structure, the extent to which it varies, and how it relates to other features of the genomic and epigenomic landscape. Current knowledge of nuclear architecture describes a hierarchical organisation ranging from small loops between individual loci, to megabase-sized self-interacting topological domains (TADs), encompassed within large multi-megabase chromosome compartments. In parallel with the discovery of these strata, the ENCODE project has generated vast amounts of data through ChIP-seq, RNA-seq and other assays applied to a wide variety of cell types, forming a comprehensive bioinformatics resource. In this work we combine Hi-C datasets describing physical genomic contacts with a large and diverse array of chromatin features derived at a much finer scale in the same mammalian cell types. These features include levels of bound transcription factors, histone modifications and expression data. These data are then integrated in a statistically rigorous way, through a predictive modelling framework from the machine learning field. These studies were extended, within a collaborative project, to encompass a dataset of matched Hi-C and expression data collected over a murine neural differentiation timecourse. We compare higher order chromatin organisation across a variety of human cell types and find pervasive conservation of chromatin organisation at multiple scales. We also identify structurally variable regions between cell types that are rich in active enhancers and contain loci of known cell-type specific function.
We show that broad aspects of higher order chromatin organisation, such as nuclear compartment domains, can be accurately predicted in a variety of human cell types, using models based upon underlying chromatin features. We dissect these quantitative models and find them to be generalisable to novel cell types, presumably reflecting fundamental biological rules linking compartments with key activating and repressive signals. These models describe the strong interconnectedness between locus-level patterns of local histone modifications and bound factors, on the order of hundreds or thousands of basepairs, with much broader compartmentalisation of large, multi-megabase chromosomal regions. Finally, boundary regions are investigated in terms of chromatin features and co-localisation with other known nuclear structures, such as association with the nuclear lamina. We find boundary complexity to vary between cell types and link TAD aggregations to previously described lamina-associated domains, as well as exploring the concept of meta-boundaries that span multiple levels of organisation. Together these analyses lend quantitative evidence to a model of higher order genome organisation that is largely stable between cell types, but can selectively vary locally, based on the activation or repression of key loci.
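The predictive-modelling idea behind compartment prediction — map locus-level chromatin features to a compartment label — can be sketched with a tiny from-scratch logistic regression. The feature names (an "active" and a "repressive" mark level) and the data are invented for illustration and are not the thesis's features or model.

```python
import math
import random

def train_logistic(X, y, lr=0.5, epochs=500, seed=0):
    """Tiny logistic-regression sketch: chromatin feature levels ->
    probability of the active (A) compartment. Plain SGD on the
    log-loss; illustrative only."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(len(X[0]))]
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                      # gradient of the log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z > 0 else 0

# Rows: [active_mark_level, repressive_mark_level]; label 1 = A compartment.
X = [[0.9, 0.1], [0.8, 0.2], [0.7, 0.1], [0.2, 0.9], [0.1, 0.8], [0.2, 0.7]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
```

Inspecting the learned weights in such a model is what makes it dissectable: a large positive weight on the activating mark mirrors the "key activating and repressive signals" the thesis reports.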
|
305 |
Machine learning models on random graphs / CUHK electronic theses & dissertations collection
January 2007 (has links)
In summary, the viewpoint of random graphs indeed provides an opportunity to improve some existing machine learning algorithms. / In this thesis, we establish three machine learning models on random graphs: Heat Diffusion Models on Random Graphs, Predictive Random Graph Ranking, and Random Graph Dependency. The heat diffusion models on random graphs lead to Graph-based Heat Diffusion Classifiers (G-HDC) and a novel ranking algorithm for Web pages called DiffusionRank. For G-HDC, a random graph is constructed on the data points. The generated random graph can be considered a representation of the underlying geometry, and the heat diffusion model on it can be considered an approximation to the way heat flows on a geometric structure. Experiments show that G-HDC can achieve better accuracy on some benchmark datasets. For DiffusionRank, we show theoretically that it is a generalization of PageRank when the heat diffusion coefficient tends to infinity, and empirically that it achieves resistance to manipulation. / Predictive Random Graph Ranking (PRGR) incorporates DiffusionRank. PRGR aims to solve the problem that incomplete information about the Web structure causes inaccurate results from various ranking algorithms. The Web structure is predicted as a random graph, on which ranking algorithms are expected to improve in accuracy. Experimental results show that the PRGR framework can improve the accuracy of ranking algorithms such as PageRank and Common Neighbor. / Three special forms of the novel Random Graph Dependency measure on two random graphs are investigated. The first special form can improve the speed of the C4.5 algorithm, and achieves better results on attribute selection than the gamma measure used in Rough Set Theory.
The second special form of the general random graph dependency measure generalizes the conditional entropy, becoming equivalent to it when the random graphs take their special form: equivalence relations. Experiments demonstrate that the second form is an informative measure, showing its success in decision trees on small-sample-size problems. The third special form can help to search for two parameters in G-HDC faster than the cross-validation method. / Yang, Haixuan. / "August 2007." / Advisers: Irwin King; Michael R. Lyu. / Source: Dissertation Abstracts International, Volume: 69-02, Section: B, page: 1125. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 184-197). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract in English and Chinese. / School code: 1307.
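The heat-diffusion idea underlying G-HDC and DiffusionRank — heat injected at labeled or seed nodes flows along graph edges according to dh/dt = -γLh, with L = D - A the graph Laplacian — can be illustrated with a simple Euler-step simulation. This is a generic sketch on a three-node path graph, not the thesis's classifiers or ranking algorithm.

```python
def heat_diffusion(adj, heat, gamma=1.0, steps=100):
    """Euler-step simulation of heat flow on a graph:
    dh/dt = -gamma * L h, where L = D - A is the graph Laplacian.

    adj is an adjacency matrix (list of lists); heat is the initial
    heat vector.
    """
    n = len(adj)
    dt = 1.0 / steps
    h = list(heat)
    for _ in range(steps):
        new_h = []
        for i in range(n):
            deg = sum(adj[i])
            # Laplacian action at node i: deg*h[i] - sum_j A_ij h[j]
            lap = deg * h[i] - sum(adj[i][j] * h[j] for j in range(n))
            new_h.append(h[i] - gamma * dt * lap)
        h = new_h
    return h

# Path graph 0-1-2: heat injected at node 0 spreads toward node 2.
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
h = heat_diffusion(adj, [1.0, 0.0, 0.0])
```

Total heat is conserved while the distribution relaxes toward uniform, which is why finite diffusion time yields an informative, locality-aware score rather than the trivial stationary one.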
|
306 |
IMPROVING THE REALISM OF SYNTHETIC IMAGES THROUGH THE MIXTURE OF ADVERSARIAL AND PERCEPTUAL LOSSES
Atapattu, Charith Nisanka, 01 December 2018 (has links)
This research describes a novel method for generating synthetic images with improved realism while preserving annotation information and the eye gaze direction. Furthermore, it describes how the perceptual loss can be utilized, together with basic features and techniques from adversarial networks, for better results.
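The general shape of such a mixture — an adversarial term that rewards fooling a discriminator plus a perceptual term that keeps the refined image close to the input in feature space (preserving annotations such as gaze direction) — can be sketched as follows. The weights, toy 1-D "images", and fixed feature extractor are all invented for illustration; they are not the thesis's networks or values.

```python
import math

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def perceptual_loss(feat, refined, synthetic):
    # Compare images in a feature space rather than raw pixel space.
    return mse(feat(refined), feat(synthetic))

def mixed_loss(d_prob_real, feat, refined, synthetic,
               lam_adv=1.0, lam_perc=10.0):
    # Adversarial term: penalise the refiner when the discriminator
    # assigns low "looks real" probability to its output.
    adv = -math.log(max(d_prob_real, 1e-12))
    # Weighted mixture of adversarial and perceptual terms.
    return lam_adv * adv + lam_perc * perceptual_loss(feat, refined, synthetic)

# Toy 1-D "images" and a fixed feature extractor (pairwise averages).
feat = lambda img: [(img[i] + img[i + 1]) / 2 for i in range(len(img) - 1)]
synthetic = [0.2, 0.4, 0.6, 0.8]
refined = [0.25, 0.42, 0.58, 0.81]
loss = mixed_loss(0.9, feat, refined, synthetic)
```

The perceptual weight acts as a regulariser: pushing it up forces the refiner to stay faithful to the synthetic input's content even as the adversarial term pushes toward realism.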
|
307 |
Image representation, processing and analysis by support vector regression / 支援矢量回歸法之影像表示式及其影像處理與分析 (Zhi yuan shi liang hui gui fa zhi ying xiang biao shi shi ji qi ying xiang chu li yu fen xi)
January 2001 (has links)
Chow Kai Tik = 支援矢量回歸法之影像表示式及其影像處理與分析 / 周啓迪. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 380-383). / Text in English; abstracts in English and Chinese. / Chow Kai Tik = Zhi yuan shi liang hui gui fa zhi ying xiang biao shi shi ji qi ying xiang chu li yu fen xi / Zhou Qidi. / Abstract in English / Abstract in Chinese / Acknowledgement / Content / List of figures / Chapter Chapter 1 --- Introduction --- p.1-11 / Chapter 1.1 --- Introduction --- p.2 / Chapter 1.2 --- Road Map --- p.9 / Chapter Chapter 2 --- Review of Support Vector Machine --- p.12-124 / Chapter 2.1 --- Structural Risk Minimization (SRM) --- p.13 / Chapter 2.1.1 --- Introduction / Chapter 2.1.2 --- Structural Risk Minimization / Chapter 2.2 --- Review of Support Vector Machine --- p.21 / Chapter 2.2.1 --- Review of Support Vector Classification / Chapter 2.2.2 --- Review of Support Vector Regression / Chapter 2.2.3 --- Review of Support Vector Clustering / Chapter 2.2.4 --- Summary of Support Vector Machines / Chapter 2.3 --- Implementation of Support Vector Machines --- p.60 / Chapter 2.3.1 --- Kernel Adatron for Support Vector Classification (KA-SVC) / Chapter 2.3.2 --- Kernel Adatron for Support Vector Regression (KA-SVR) / Chapter 2.3.3 --- Sequential Minimal Optimization for Support Vector Classification (SMO-SVC) / Chapter 2.3.4 --- Sequential Minimal Optimization for Support Vector Regression (SMO-SVR) / Chapter 2.3.5 --- Lagrangian Support Vector Classification (LSVC) / Chapter 2.3.6 --- Lagrangian Support Vector Regression (LSVR) / Chapter 2.4 --- Applications of Support Vector Machines --- p.117 / Chapter 2.4.1 --- Applications of Support Vector Classification / Chapter 2.4.2 --- Applications of Support Vector Regression / Chapter Chapter 3 --- Image Representation by Support Vector Regression --- p.125-183 / Chapter 3.1 --- Introduction of SVR Representation --- p.116 / Chapter 3.1.1 --- Image Representation by SVR / Chapter 3.1.2 
--- Implicit Smoothing of SVR representation / Chapter 3.1.3 --- "Different Insensitivity, C value, Kernel and Kernel Parameters" / Chapter 3.2 --- Variation on Encoding Method [Training Process] --- p.154 / Chapter 3.2.1 --- Training SVR with Missing Data / Chapter 3.2.2 --- Training SVR with Image Blocks / Chapter 3.2.3 --- Training SVR with Other Variations / Chapter 3.3 --- Variation on Decoding Method [Testing or Reconstruction Process] --- p.171 / Chapter 3.3.1 --- Reconstruction with Different Portion of Support Vectors / Chapter 3.3.2 --- Reconstruction with Different Support Vector Locations and Lagrange Multiplier Values / Chapter 3.3.3 --- Reconstruction with Different Kernels / Chapter 3.4 --- Feature Extraction --- p.177 / Chapter 3.4.1 --- Features on Simple Shape / Chapter 3.4.2 --- Invariant of Support Vector Features / Chapter Chapter 4 --- Mathematical and Physical Properties of SVR Representation --- p.184-243 / Chapter 4.1 --- Introduction of RBF Kernel --- p.185 / Chapter 4.2 --- Mathematical Properties: Integral Properties --- p.187 / Chapter 4.2.1 --- Integration of an SVR Image / Chapter 4.2.2 --- Fourier Transform of SVR Image (Hankel Transform of Kernel) / Chapter 4.2.3 --- Cross Correlation between SVR Images / Chapter 4.2.4 --- Convolution of SVR Images / Chapter 4.3 --- Mathematical Properties: Differential Properties --- p.219 / Chapter 4.3.1 --- Review of Differential Geometry / Chapter 4.3.2 --- Gradient of SVR Image / Chapter 4.3.3 --- Laplacian of SVR Image / Chapter 4.4 --- Physical Properties --- p.228 / Chapter 4.4.1 --- Transformation between Reconstructed Image and Lagrange Multipliers / Chapter 4.4.2 --- Relation between Original Image and SVR Approximation / Chapter 4.5 --- Appendix --- p.234 / Chapter 4.5.1 --- Hankel Transform for Common Functions / Chapter 4.5.2 --- Hankel Transform for RBF / Chapter 4.5.3 --- Integration of Gaussian / Chapter 4.5.4 --- Chain Rules for Differential Geometry / Chapter 4.5.5 --- Derivation
of Gradient of RBF / Chapter 4.5.6 --- Derivation of Laplacian of RBF / Chapter Chapter 5 --- Image Processing in SVR Representation --- p.244-293 / Chapter 5.1 --- Introduction --- p.245 / Chapter 5.2 --- Geometric Transformation --- p.241 / Chapter 5.2.1 --- "Brightness, Contrast and Image Addition" / Chapter 5.2.2 --- Interpolation or Resampling / Chapter 5.2.3 --- Translation and Rotation / Chapter 5.2.4 --- Affine Transformation / Chapter 5.2.5 --- Transformation with Given Optical Flow / Chapter 5.2.6 --- A Brief Summary / Chapter 5.3 --- SVR Image Filtering --- p.261 / Chapter 5.3.1 --- Discrete Filtering in SVR Representation / Chapter 5.3.2 --- Continuous Filtering in SVR Representation / Chapter Chapter 6 --- Image Analysis in SVR Representation --- p.294-370 / Chapter 6.1 --- Contour Extraction --- p.295 / Chapter 6.1.1 --- Contour Tracing by Equi-potential Line [using Gradient] / Chapter 6.1.2 --- Contour Smoothing and Contour Feature Extraction / Chapter 6.2 --- Registration --- p.304 / Chapter 6.2.1 --- Registration using Cross Correlation / Chapter 6.2.2 --- Registration using Phase Correlation [Phase Shift in Fourier Transform] / Chapter 6.2.3 --- Analysis of the Two Methods for Registration in SVR Domain / Chapter 6.3 --- Segmentation --- p.347 / Chapter 6.3.1 --- Segmentation by Contour Tracing / Chapter 6.3.2 --- Segmentation by Thresholding on Smoothed or Sharpened SVR Image / Chapter 6.3.3 --- Segmentation by Thresholding on SVR Approximation / Chapter 6.4 --- Appendix --- p.368 / Chapter Chapter 7 --- Conclusion --- p.371-379 / Chapter 7.1 --- Conclusion and contribution --- p.372 / Chapter 7.2 --- Future work --- p.378 / Reference --- p.380-383
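The core idea of the thesis — represent an image as an RBF support vector regression of intensity over pixel coordinates, so the support vectors and their Lagrange multipliers become the image's representation — can be sketched with an off-the-shelf SVR. This assumes NumPy and scikit-learn are available; the thesis implements its own solvers (Kernel Adatron, SMO, Lagrangian SVR), and the image, kernel width, and C value here are illustrative.

```python
import numpy as np
from sklearn.svm import SVR

# A small synthetic "image": a smooth diagonal intensity ramp.
n = 8
ys, xs = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
coords = np.c_[xs.ravel(), ys.ravel()].astype(float)
pixels = (coords[:, 0] + coords[:, 1]) / (2 * (n - 1))  # intensities in [0, 1]

# Encode: regress intensity on (x, y) with an RBF kernel; the support
# vectors and their multipliers form the representation, and the
# epsilon-insensitive tube gives the implicit smoothing.
model = SVR(kernel="rbf", C=100.0, epsilon=0.01, gamma=0.5)
model.fit(coords, pixels)

# Decode: reconstruction is just evaluating the regression at the
# pixel coordinates (or at any continuous coordinates, which is what
# makes interpolation and geometric transforms natural in this form).
recon = model.predict(coords)
n_sv = len(model.support_)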
|
308 |
A novel fuzzy first-order logic learning system
January 2002 (has links)
Tse, Ming Fun. / Thesis submitted in: December 2001. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (leaves 142-146). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Problem Definition --- p.2 / Chapter 1.2 --- Contributions --- p.3 / Chapter 1.3 --- Thesis Outline --- p.4 / Chapter 2 --- Literature Review --- p.6 / Chapter 2.1 --- Representing Inexact Knowledge --- p.7 / Chapter 2.1.1 --- Nature of Inexact Knowledge --- p.7 / Chapter 2.1.2 --- Probability Based Reasoning --- p.8 / Chapter 2.1.3 --- Certainty Factor Algebra --- p.11 / Chapter 2.1.4 --- Fuzzy Logic --- p.13 / Chapter 2.2 --- Machine Learning Paradigms --- p.13 / Chapter 2.2.1 --- Classifications --- p.14 / Chapter 2.2.2 --- Neural Networks and Gradient Descent --- p.15 / Chapter 2.3 --- Related Learning Systems --- p.21 / Chapter 2.3.1 --- Relational Concept Learning --- p.21 / Chapter 2.3.2 --- Learning of Fuzzy Concepts --- p.24 / Chapter 2.4 --- Fuzzy Logic --- p.26 / Chapter 2.4.1 --- Fuzzy Set --- p.27 / Chapter 2.4.2 --- Basic Notations in Fuzzy Logic --- p.29 / Chapter 2.4.3 --- Basic Operations on Fuzzy Sets --- p.29 / Chapter 2.4.4 --- "Fuzzy Relations, Projection and Cylindrical Extension" --- p.31 / Chapter 2.4.5 --- Fuzzy First Order Logic and Fuzzy Prolog --- p.34 / Chapter 3 --- Knowledge Representation and Learning Algorithm --- p.43 / Chapter 3.1 --- Knowledge Representation --- p.44 / Chapter 3.1.1 --- Fuzzy First-order Logic ´ؤ A Powerful Language --- p.44 / Chapter 3.1.2 --- Literal Forms --- p.48 / Chapter 3.1.3 --- Continuous Variables --- p.50 / Chapter 3.2 --- System Architecture --- p.61 / Chapter 3.2.1 --- Data Reading --- p.61 / Chapter 3.2.2 --- Preprocessing and Postprocessing --- p.67 / Chapter 4 --- Global Evaluation of Literals --- p.71 / Chapter 4.1 --- Existing Closeness Measures between Fuzzy Sets --- p.72 / Chapter 4.2 --- The Error Function and the Normalized Error 
Functions --- p.75 / Chapter 4.2.1 --- The Error Function --- p.75 / Chapter 4.2.2 --- The Normalized Error Functions --- p.76 / Chapter 4.3 --- The Nodal Characteristics and the Error Peaks --- p.79 / Chapter 4.3.1 --- The Nodal Characteristics --- p.79 / Chapter 4.3.2 --- The Zero Error Line and the Error Peaks --- p.80 / Chapter 4.4 --- Quantifying the Nodal Characteristics --- p.85 / Chapter 4.4.1 --- Information Theory --- p.86 / Chapter 4.4.2 --- Applying the Information Theory --- p.88 / Chapter 4.4.3 --- Upper and Lower Bounds of CE --- p.89 / Chapter 4.4.4 --- The Whole Heuristics of FF99 --- p.93 / Chapter 4.5 --- An Example --- p.94 / Chapter 5 --- Partial Evaluation of Literals --- p.99 / Chapter 5.1 --- Importance of Covering in Inductive Learning --- p.100 / Chapter 5.1.1 --- The Divide-and-conquer Method --- p.100 / Chapter 5.1.2 --- The Covering Method --- p.101 / Chapter 5.1.3 --- Effective Pruning in Both Methods --- p.102 / Chapter 5.2 --- Fuzzification of FOIL --- p.104 / Chapter 5.2.1 --- Analysis of FOIL --- p.104 / Chapter 5.2.2 --- Requirements on System Fuzzification --- p.107 / Chapter 5.2.3 --- Possible Ways in Fuzzifying FOIL --- p.109 / Chapter 5.3 --- The α Covering Method --- p.111 / Chapter 5.3.1 --- Construction of Partitions by α-cut --- p.112 / Chapter 5.3.2 --- Adaptive-α Covering --- p.112 / Chapter 5.4 --- The Probabilistic Covering Method --- p.114 / Chapter 6 --- Results and Discussions --- p.119 / Chapter 6.1 --- Experimental Results --- p.120 / Chapter 6.1.1 --- Iris Plant Database --- p.120 / Chapter 6.1.2 --- Kinship Relational Domain --- p.122 / Chapter 6.1.3 --- The Fuzzy Relation Domain --- p.129 / Chapter 6.1.4 --- Age Group Domain --- p.134 / Chapter 6.1.5 --- The NBA Domain --- p.135 / Chapter 6.2 --- Future Development Directions --- p.137 / Chapter 6.2.1 --- Speed Improvement --- p.137 / Chapter 6.2.2 --- Accuracy Improvement --- p.138 / Chapter 6.2.3 --- Others --- p.138 / Chapter 7 --- Conclusion --- p.140 /
Bibliography --- p.142 / Chapter A --- C4.5 to FOIL File Format Conversion --- p.147 / Chapter B --- FF99 example --- p.150
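The fuzzy-set basics this thesis builds on (the min/max operations of its Section 2.4.3 and the α-cut used by its α-covering method) can be sketched with membership dictionaries. The example sets and names are invented for illustration.

```python
def fuzzy_and(a, b):
    # Intersection via the min t-norm.
    return {x: min(a.get(x, 0.0), b.get(x, 0.0)) for x in set(a) | set(b)}

def fuzzy_or(a, b):
    # Union via the max s-norm.
    return {x: max(a.get(x, 0.0), b.get(x, 0.0)) for x in set(a) | set(b)}

def fuzzy_not(a):
    # Standard complement: membership 1 - m.
    return {x: 1.0 - m for x, m in a.items()}

def alpha_cut(a, alpha):
    """Crisp set of elements with membership >= alpha — the operation
    an alpha-covering method uses to partition fuzzy examples."""
    return {x for x, m in a.items() if m >= alpha}

young = {"alice": 0.9, "bob": 0.5, "carol": 0.1}
tall = {"alice": 0.4, "bob": 0.8, "carol": 0.7}
young_and_tall = fuzzy_and(young, tall)
```

An α-cut turns a fuzzy set back into a crisp one, which is what lets a FOIL-style covering loop operate on graded memberships.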
|
309 |
Autonomous visual learning for robotic systems
Beale, Dan, January 2012 (links)
This thesis investigates the problem of visual learning using a robotic platform. Given a set of objects, the robot's task is to autonomously manipulate, observe, and learn. This allows the robot to recognise objects in a novel scene and pose, or to separate them into distinct visual categories. The main focus of the work is autonomously acquiring object models through robotic manipulation. Autonomous learning is important for robotic systems. In the context of vision, it allows a robot to adapt to new and uncertain environments, updating its internal model of the world. It also reduces the amount of human supervision needed to build visual models. This leads to machines that can operate in environments with rich and complicated visual information, such as the home or industrial workspace, and in environments that are potentially hazardous for humans. The hypothesis claims that inducing robot motion on objects aids the learning process. It is shown that extra information from the robot's sensors provides enough information to localise an object and distinguish it from the background, and that decisive planning allows the object to be separated and observed from a variety of different poses, giving a good foundation on which to build a robust classification model. Contributions include a new segmentation algorithm, a new classification model for object learning, and a method for allowing a robot to supervise its own learning in cluttered and dynamic environments.
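The intuition that induced motion separates object from background can be illustrated with simple frame differencing: pixels that change when the robot nudges an object are marked foreground. This is a toy sketch of the idea, not the thesis's segmentation algorithm; the frames and threshold are invented.

```python
def motion_mask(frame_before, frame_after, thresh=0.2):
    """Toy motion-cue segmentation: mark pixels whose intensity
    changes by more than thresh between the frame before and after
    the robot nudges an object."""
    return [
        [1 if abs(a - b) > thresh else 0 for a, b in zip(row_b, row_a)]
        for row_b, row_a in zip(frame_before, frame_after)
    ]

# The "object" (intensity 1.0) shifts one pixel right between frames;
# the static background stays at 0.0.
before = [[0.0, 1.0, 0.0, 0.0],
          [0.0, 1.0, 0.0, 0.0]]
after = [[0.0, 0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0, 0.0]]
mask = motion_mask(before, after)
```

The mask highlights exactly the region the object vacated and the region it entered, which is the extra signal manipulation provides over passive observation.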
|
310 |
Machine learning and forward looking information in option prices
Hu, Qi, January 2018 (has links)
The use of forward-looking information from option prices attracted a lot of attention after the 2008 financial crisis, which highlighted the difficulty of using historical data to predict extreme events. Although a considerable number of papers investigate the extraction of forward-looking information from cross-sectional option prices, Figlewski (2008) argues that it is still an open question and that none of the techniques is clearly superior. This thesis focuses on extracting information from option prices and investigates two broad topics: applying machine learning to extract the state price density, and recovering the natural probability from option prices. The estimation of the state price density (often described as the risk-neutral density in the option pricing literature) is of considerable importance, since it contains valuable information about investors' expectations and risk preferences. However, this is a non-trivial task due to data limitations and complex arbitrage-free constraints. In this thesis, I develop a more efficient linear programming support vector machine (L1-SVM) estimator for the state price density which incorporates no-arbitrage restrictions and the bid-ask spread. This method does not depend on a particular approximation function or framework and is, therefore, universally applicable. In a parallel empirical study, I apply the method to options on the S&P 500, showing it to be comparatively accurate and smooth. In addition, since the existing literature has no consensus about what information is recovered by the Recovery Theorem, I empirically examine this recovery problem in a continuous diffusion setting. Using market data on S&P 500 index options and synthetic data generated by an Ornstein-Uhlenbeck (OU) process, I show that the recovered probability is not the real-world probability. Finally, to further explain why the Recovery Theorem fails and to show the existence of the associated martingale component, I demonstrate an example of bivariate recovery.
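The textbook baseline for state price density extraction is the Breeden-Litzenberger relation, q(K) = e^{rT} * d²C/dK², estimated here with finite differences on synthetic call prices. This is the classical method the thesis improves upon, not its L1-SVM estimator; the uniform terminal-price distribution is invented for illustration.

```python
import math

def bl_density(strikes, calls, r=0.0, T=1.0):
    """Breeden-Litzenberger estimate of the state price density:
    q(K) ~ exp(r*T) * d2C/dK2, via central finite differences over
    the interior strikes. A textbook baseline; it enforces no
    no-arbitrage constraints, which is one motivation for the
    thesis's L1-SVM estimator."""
    dens = []
    for i in range(1, len(strikes) - 1):
        dk = strikes[i + 1] - strikes[i]
        second = (calls[i + 1] - 2 * calls[i] + calls[i - 1]) / dk ** 2
        dens.append(math.exp(r * T) * second)
    return dens

# Synthetic call prices under a discrete uniform terminal price on
# [90, 110]: C(K) = E[max(S - K, 0)].
outcomes = list(range(90, 111))
strikes = list(range(80, 121))
calls = [sum(max(s - k, 0) for s in outcomes) / len(outcomes) for k in strikes]
density = bl_density(strikes, calls)
```

On clean synthetic prices the recovered density is nonnegative and integrates to one; on real, noisy cross-sections the raw second difference easily violates both, which is where constrained estimators earn their keep.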
|