681 |
Essays in risk management: conditional expectation with applications in finance and insurance. Maj, Mateusz, 08 June 2012
In this work we study two problems motivated by risk management: the optimal design of financial products from an investor's point of view, and the calculation of bounds and approximations for sums of non-independent random variables. The element that interconnects these two topics is the notion of conditioning, a fundamental concept in probability and statistics which turns out to be a useful device in finance. In the first part of the dissertation, we analyse structured products that are now widespread in the banking and insurance industry. These products typically protect the investor against bearish stock markets while offering upside participation when the markets are bullish. Examples include the capital-guaranteed funds commercialised by banks and the equity-linked contracts sold by insurers. The design of these products is complex in general, and it is vital to examine to what extent they are actually interesting from the investor's point of view and whether they can be dominated by other strategies. In the academic literature on structured products the focus has been almost exclusively on the pricing and hedging of these instruments and much less on their performance from an investor's point of view. In this work we analyse the attractiveness of these products. We assess the theoretical cost of inefficiency when buying a structured product and describe the optimal strategy explicitly whenever possible. Moreover, we examine the cost of this inefficiency in practice. We extend the results of Dybvig (1988a, 1988b) and Cox & Leland (1982, 2000), who investigated the inefficiency of path-dependent pay-offs in a complete, one-dimensional market. In the dissertation we consider this problem in one-dimensional Lévy and multidimensional Black-Scholes financial markets, and we provide evidence that decision makers with a fixed investment horizon should not prefer path-dependent pay-offs and should buy path-independent structures instead. In these market settings we also exhibit the optimal contract that provides a given distribution to the consumer, and for risk-averse investors we propose two ways of improving the design of financial products. Finally, we illustrate the theory with a few well-known securities and strategies, e.g. dollar cost averaging, buy-and-hold investments and widely used portfolio insurance strategies.

The second part of the dissertation considers the problem of finding the distribution of a sum of non-independent random variables. Such dependent sums appear quite often in insurance and finance, for instance as the aggregate claim distribution or the loss distribution of an investment portfolio. An interesting way to cope with this problem is to use so-called convex bounds, studied by Dhaene et al. (2002a, 2002b), who applied them to sums of log-normal random variables. In their papers they showed how these convex bounds can be used to derive closed-form approximations for several risk measures of such a sum. In the dissertation we prove that, unlike in the log-normal case, the construction of an explicit convex lower bound appears to be out of reach for general sums of log-elliptical risks. We show instead how stop-loss bounds can be constructed, and we use these to obtain explicit, mean-preserving approximations for general sums of log-elliptical distributions. / Doctorat en Sciences
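To illustrate the convex-bound idea studied in the second part, the sketch below compares the tail value-at-risk of a dependent sum of log-normal risks with that of its comonotonic upper bound (Dhaene et al., 2002a, 2002b), obtained by driving all marginals with a single uniform variable; for log-normal marginals the bound's TVaR has a closed form. This is a minimal sketch with hypothetical parameters and a hypothetical Gaussian-copula dependence, not a computation from the dissertation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical marginals: X_i = exp(mu_i + sigma_i * Z_i), log-normal risks.
mu = np.array([0.0, 0.1, 0.2])
sigma = np.array([0.3, 0.4, 0.5])
rho = 0.5                       # hypothetical Gaussian-copula correlation
p = 0.99                        # confidence level for the tail value-at-risk
n = len(mu)

# Simulate the dependent sum S = X_1 + ... + X_n.
corr = rho * np.ones((n, n)) + (1 - rho) * np.eye(n)
Z = rng.multivariate_normal(np.zeros(n), corr, size=500_000)
S = np.exp(mu + sigma * Z).sum(axis=1)
var_S = np.quantile(S, p)
tvar_S = S[S > var_S].mean()    # empirical TVaR_p of the dependent sum

# Comonotonic upper bound S_c = sum_i F_i^{-1}(U): TVaR is additive for comonotonic
# risks, and for a log-normal marginal TVaR_p = exp(mu + sigma^2/2) * Phi(sigma - z_p) / (1 - p).
z_p = stats.norm.ppf(p)
tvar_Sc = np.sum(np.exp(mu + sigma**2 / 2) * stats.norm.cdf(sigma - z_p)) / (1 - p)

print(f"TVaR at {p}: simulated dependent sum = {tvar_S:.3f}")
print(f"TVaR at {p}: comonotonic upper bound = {tvar_Sc:.3f}  (larger, by convex order)")
```

Since the comonotonic sum dominates the true sum in convex order, its TVaR (though not necessarily its quantile at every level) is guaranteed to be at least as large, which is what makes it a conservative, closed-form approximation.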
|
682 |
Univariate and multivariate symmetry: statistical inference and distributional aspects / Symétrie univariée et multivariée: inférence statistique et aspects distributionnels. Ley, Christophe, 26 November 2010
This thesis deals with several statistical and probabilistic aspects of symmetry and asymmetry, both in a univariate and a multivariate context, and is divided into three distinct parts.

The first part, composed of Chapters 1, 2 and 3 of the thesis, solves two conjectures associated with multivariate skew-symmetric distributions. Since the introduction in 1985 by Adelchi Azzalini of the most famous representative of that class of distributions, namely the skew-normal distribution, it has been well known that, in the vicinity of symmetry, the Fisher information matrix is singular and the profile log-likelihood function for skewness admits a stationary point whatever the sample under consideration. Since then, researchers have tried to determine the subclasses of skew-symmetric distributions that suffer from each of those problems, which has led to the aforementioned two conjectures. This thesis completely solves both problems.

The second part of the thesis, namely Chapters 4 and 5, aims at applying and constructing extremely general skewing mechanisms. In Chapter 4, we make use of the univariate mechanism of Ferreira and Steel (2006) to build optimal (in the Le Cam sense) tests for univariate symmetry which are very flexible: since their mechanism can turn a given symmetric distribution into any asymmetric distribution, the alternatives to the null hypothesis of symmetry can take any possible shape. These univariate mechanisms, besides that surjectivity property, enjoy numerous good properties, but cannot be extended to higher dimensions in a satisfactory way. For this reason, we propose in Chapter 5 different general mechanisms, sharing all the nice properties of their competitors in Ferreira and Steel (2006), but which moreover can be extended to any dimension. We formally prove that the surjectivity property holds in dimensions k > 1 and we study the principal characteristics of these new multivariate mechanisms.

Finally, the third part of this thesis, composed of Chapter 6, proposes a test for multivariate central symmetry based on the concepts of statistical depth and runs. This test extends the celebrated univariate runs test of McWilliams (1990) to higher dimensions. We analyse its asymptotic behaviour (especially in dimension k = 2) under the null hypothesis and its invariance and robustness properties. We conclude with an overview of possible modifications of these new tests. / Doctorat en Sciences
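For readers unfamiliar with the class discussed in the first part, the univariate skew-symmetric construction, with Azzalini's (1985) skew-normal as its best-known member, is commonly written as below; this is standard notation added for context, not a statement taken from the thesis.

```latex
% Skew-symmetric density generated from a symmetric density f and a skewing
% function \Pi satisfying \Pi(-z) + \Pi(z) = 1; delta = 0 recovers symmetry.
f_{\delta}(z) \;=\; 2\, f(z)\, \Pi(\delta z), \qquad z \in \mathbb{R};
\qquad \text{skew-normal:}\quad f_{\delta}(z) \;=\; 2\,\phi(z)\,\Phi(\delta z).
```

At δ = 0 the density reduces to the symmetric kernel f, and it is in this vicinity of symmetry that the singular Fisher information and the stationary profile log-likelihood mentioned above arise for the skew-normal case.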
|
683 |
Approximations polynomiales de densités de probabilité et applications en assurance / Polynomial approximations of probability density functions with applications to insurance. Goffard, Pierre-Olivier, 29 June 2015
This PhD thesis studies numerical methods to approximate the probability density function of random variables governed by compound distributions. Such random variables are used in actuarial science to model the risk borne by a portfolio of contracts. In ruin theory, the probability of ultimate ruin within the compound Poisson ruin model is the survival function of a geometric compound distribution. The proposed method consists in projecting the probability density function onto an orthogonal polynomial system. These polynomials are orthogonal with respect to a reference probability measure that belongs to the Natural Exponential Families with Quadratic Variance Function. The polynomial approximation is compared to other numerical methods that recover the probability density function from the moments or the Laplace transform of the distribution. The polynomial method is then extended to a multidimensional setting, along with the probability density estimator derived from the approximation formula. An aggregation procedure adapted to individual savings-type life insurance portfolios is also described. The method aims at building a portfolio of model points in order to compute the best estimate liabilities in a timely manner and in a way that is compliant with the European directive Solvency II.
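A minimal sketch of the projection idea described above: if f_ref is a reference density with orthonormal polynomials {Q_k}, then f(x) ≈ f_ref(x) Σ_k E[Q_k(X)] Q_k(x), and the coefficients can be estimated from simulated (or observed) values of X. The sketch assumes a gamma reference density, one member of the quadratic natural exponential families, with its associated generalized Laguerre polynomials; the compound Poisson parameters, the truncation order K and the moment-matched reference are hypothetical illustration choices, not the thesis's exact setup.

```python
import numpy as np
from math import lgamma, exp
from scipy.special import eval_genlaguerre
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical compound Poisson sum: N ~ Poisson(lam), claim sizes ~ Gamma(2, scale 1).
lam, K = 2.0, 15
N = rng.poisson(lam, size=100_000)
X = np.array([rng.gamma(2.0, 1.0, size=n).sum() for n in N])
X = X[X > 0]                                   # approximate only the continuous part (X > 0)

# Gamma reference density matched to the first two moments of X (shape r, rate m).
mean, var = X.mean(), X.var()
m = mean / var                                 # rate
r = mean * m                                   # shape

def f_ref(x):
    return stats.gamma.pdf(x, a=r, scale=1.0 / m)

def Q(k, x):
    """Orthonormal polynomial of degree k w.r.t. the Gamma(r, rate m) density."""
    norm = exp(0.5 * (lgamma(k + 1) + lgamma(r) - lgamma(k + r)))
    return norm * eval_genlaguerre(k, r - 1.0, m * x)

# Projection coefficients a_k = E[Q_k(X)], estimated by Monte Carlo.
a = np.array([Q(k, X).mean() for k in range(K + 1)])

def f_hat(x):
    """Polynomial approximation of the density of X (conditional on X > 0)."""
    return f_ref(x) * sum(a[k] * Q(k, x) for k in range(K + 1))

xs = np.linspace(0.5, 15, 5)
print(np.round(f_hat(xs), 4))                  # compare e.g. with a kernel density estimate
```

The choice of reference measure matters in practice: it must be heavy enough in the tails for f/f_ref to be square-integrable with respect to f_ref, which is one of the issues such a method has to address.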
|
684 |
Etude des Distributions de Parton Généralisées avec la Diffusion Compton Profondément Virtuelle "genre espace" et "genre temps" / Generalized Parton Distributions with spacelike and timelike Deeply Virtual Compton Scattering. Boër, Marie, 28 November 2014
More than forty years after the discovery of pointlike constituents inside the nucleon, its quark and gluon (parton) structure is still intensively studied. Certain exclusive processes (where all the final-state products are known) of leptoproduction or photoproduction of a photon or a meson off the nucleon provide access to the Generalized Parton Distributions (GPDs). These functions parameterize the complex structure of the nucleon and contain information about the longitudinal momentum and the transverse spatial distribution of partons inside the nucleon. Such exclusive processes are the "spacelike" and "timelike" Deeply Virtual Compton Scattering processes (DVCS and TCS respectively), which correspond to the scattering of a high-energy photon off a quark in the nucleon and are measured respectively in the reactions lN⇾l'N'γ (N = proton or neutron, l = lepton) and γN⇾N'l+l-.

The first part of this thesis is devoted to the experimental study of DVCS, using the 2009 data from the COMPASS experiment at CERN. As a first step, the deep inelastic scattering cross section is measured in order to validate the muon flux measurement and to evaluate some systematic effects in the track reconstruction. Then, the cross section for the exclusive production of a photon is measured. It contains the DVCS process (the photon is emitted by a quark of the nucleon) and the Bethe-Heitler process (the photon is emitted by the scattered lepton), which have the same final state. The study of the background also allowed an upper limit on the cross section for exclusive neutral-pion production to be estimated.

The second part of the thesis is devoted to a phenomenological study of TCS at typical energies of the JLab 12 GeV upgrade. First, the amplitudes for TCS and for the associated Bethe-Heitler process are derived. Then, all single and double polarization asymmetries of the target and/or of the linearly or circularly polarized beam are calculated as a function of different GPD contributions. Finally, a fitting method is presented to extract the Compton Form Factors (which are functions of the GPDs) from DVCS and/or TCS data and/or simulations.
|
685 |
Mesure de la section efficace d'électroproduction de photons sur le neutron à Jefferson Lab en vue de la séparation du terme de diffusion Compton profondément virtuelle / Measurement of the photon electroproduction cross section off the neutron at Jefferson Lab in view of the separation of the deeply virtual Compton scattering term. Desnault, Camille, 17 September 2015
The photon electroproduction cross section off the nucleon is proportional to the squared deeply virtual Compton scattering (DVCS) and Bethe-Heitler amplitudes, together with an interference term between these two processes. Its measurement on the neutron was performed in the framework of the E08-025 experiment, which took place in 2010 in Hall A at Jefferson Lab (USA). Thanks to a high sensitivity to the interference term, it allowed the extraction of three observables that depend on the Generalized Parton Distributions (GPDs), and it opens the prospect of extracting the |DVCS|² term through a Rosenbluth separation. The GPDs are structure functions which describe the internal structure of nucleons in terms of the correlation between the transverse spatial and longitudinal momentum distributions of quarks inside the nucleon. Beyond providing a three-dimensional picture of the elementary arrangement of the nucleon, the measurement of GPDs on the neutron would give access, via Ji's sum rule, to the angular momentum of quarks in the nucleon, a missing piece in understanding the nucleon spin puzzle. This thesis outlines the theoretical context of the measurement of the photon electroproduction cross section off the neutron and describes the experimental setup used for the measurement. It then presents the selection of the data of interest and the results obtained by fitting the data to a Monte Carlo simulation, a procedure which is explained in detail. Finally, a systematic study of the results completes the manuscript.
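To make the first sentence concrete, the decomposition at the amplitude level can be written schematically as follows (standard notation, not the thesis's exact conventions):

```latex
% Photon electroproduction eN -> e'N'gamma: the Bethe-Heitler and DVCS amplitudes
% add coherently, giving three terms in the cross section.
\left|\mathcal{T}\right|^{2}
  \;=\; \left|\mathcal{T}_{\mathrm{BH}}\right|^{2}
  \;+\; \left|\mathcal{T}_{\mathrm{DVCS}}\right|^{2}
  \;+\; \underbrace{\mathcal{T}_{\mathrm{DVCS}}\,\mathcal{T}_{\mathrm{BH}}^{*}
        \;+\; \mathcal{T}_{\mathrm{DVCS}}^{*}\,\mathcal{T}_{\mathrm{BH}}}_{\text{interference }\mathcal{I}}
```

The Rosenbluth-type separation mentioned above exploits the fact that, at fixed kinematics (x_B, Q², t), these terms depend differently on the incident beam energy, so that measurements at two or more beam energies allow the |DVCS|² contribution to be isolated.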
|
686 |
Private Income Transfers and Development: Three Applied Essays on Latin America / Transferts Privés de Revenus et Développement : trois Essais Appliqués à l'Amérique Latine. Alban Conto, Maria-Carolina, 12 January 2018
For decades, economists have been interested in studying why and how agents support each other, giving a special place to the analysis of private income transfers. Recent applications include very diverse topics such as the analysis of capital accumulation, social cohesion and solidarity, market insurance and interest rates, risk-coping strategies against negative shocks, and government policies.

The present dissertation analyzes how inter-household transfer decisions, international remittances and intra-household transfers contribute to shaping five fundamental aspects of development: (i) social interactions, (ii) market and household work, (iii) spending patterns, (iv) nutrition and (v) health.

Three research questions are addressed using applied data from Colombia, Ecuador and Peru, and multiple econometric techniques. First, is there a relationship between inter-household transfer dynamics and the distance between donors and receivers? Second, do remittances asymmetrically shape labor supply responses depending on people's characteristics? Third, do intra-household transfers influence spending patterns, nutrition and health outcomes?

Results suggest that private income transfers play a key redistributive role, shaping agents' living standards and improving individual and social well-being. In contexts of economic deprivation, where social safety nets are scarce, informality is widespread, institutions are highly fragmented and the public sector is weak, money and in-kind help from other households or individuals constitute crucial livelihood strategies. Thus, enhancing our understanding of this dimension of social behaviors is essential.
|
687 |
Structures contrôlées pour les équations aux dérivées partielles / Controlled structures for partial differential equations. Furlan, Marco, 26 June 2018
The thesis project has several possible directions: a) Improve the understanding of the relations between the theory of Regularity Structures developed by M. Hairer and the method of Paracontrolled Distributions developed by Gubinelli, Imkeller and Perkowski, and possibly provide a synthesis of the two. This is highly speculative and at the moment there is no clear path towards this long-term goal. b) Use the theory of Paracontrolled Distributions to study different types of PDEs: transport equations and general hyperbolic evolution equations, dispersive equations, and systems of conservation laws. These PDEs lie outside the scope of the current methods, which were developed mainly to handle semilinear parabolic evolution equations. c) Once a theory of transport equations driven by rough signals has been established, it will become possible to tackle the phenomenon of regularization by noise, which for the moment has been studied only in the context of transport equations driven by Brownian motion, using standard tools of stochastic analysis. d) Renormalization group (RG) techniques and multi-scale expansions have already been used both to tackle PDE problems and to define Euclidean quantum field theories. Paracontrolled Distributions theory can be understood as a kind of multiscale analysis of non-linear functionals, and it would be interesting to explore the interplay of paradifferential techniques with more standard techniques such as cluster expansions and RG methods.
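As background on the vocabulary (added here for context, not part of the abstract), the method of Paracontrolled Distributions rests on Bony's decomposition of a product of two distributions into two paraproducts and a resonant term, written in the Littlewood-Paley notation of Gubinelli, Imkeller and Perkowski as:

```latex
% Bony decomposition: low-high paraproduct, resonant term, high-low paraproduct.
fg \;=\; f \prec g \;+\; f \circ g \;+\; f \succ g .
```

Only the resonant term f ∘ g is ill-defined when the sum of the regularities of f and g is non-positive; the paracontrolled ansatz tames it by postulating that the unknown has the form u = u′ ≺ X + u♯, with X a reference (stochastic) object and u♯ a more regular remainder.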
|
688 |
Evaluating the effectiveness of Benford's law as an investigative tool for forensic accountants. Kellerman, Lizan, January 2014
“Some numbers really are more popular than others.”
Mark J. Nigrini (1998a:15)
The above idea appears to defy common sense. In a random sequence of numbers drawn from a company’s financial books, every digit from 1 to 9 would seem to have a one-in-nine chance of being the leading digit. But according to a mathematical formula more than 60 years old that is making its way into the field of accounting, certain numbers are actually more popular than others (Nigrini, 1998a:15).
Accounting numbers usually follow a mathematical law, known as Benford’s Law, whose pattern is so counter-intuitive that fraudsters and manipulators, as a rule, do not succeed in conforming to it. With this knowledge, the forensic accountant is empowered to detect irregularities, anomalies, errors or fraud that may be present in a financial data set.
The main objective of this study was to evaluate the effectiveness of Benford’s Law as a tool for forensic accountants. The empirical research used data from Company X to test the hypothesis that, in the context of financial fraud investigations, a significant difference between the actual digit frequencies and those expected under Benford’s Law could be an indication of an error, fraud or irregularity.
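A generic sketch of the kind of comparison involved (not the study's actual procedure, software or data): the expected first-digit proportions under Benford's Law are P(d) = log10(1 + 1/d), and observed digit counts can be tested against them, for instance with a chi-square statistic.

```python
import numpy as np
from scipy import stats

def first_digits(amounts):
    """Leading (most significant) digit of each non-zero amount."""
    a = np.asarray(amounts, dtype=float)
    a = np.abs(a[a != 0])
    # Scale each amount into [1, 10) and take the integer part.
    return (a / 10.0 ** np.floor(np.log10(a))).astype(int)

def benford_first_digit_test(amounts):
    digits = first_digits(amounts)
    observed = np.array([(digits == d).sum() for d in range(1, 10)])
    expected = len(digits) * np.log10(1 + 1 / np.arange(1, 10))   # Benford's Law
    chi2, p_value = stats.chisquare(observed, expected)
    return observed, expected, chi2, p_value

# Hypothetical example: amounts spread over several orders of magnitude
# (such data conform closely to Benford's Law).
rng = np.random.default_rng(42)
amounts = 10 ** rng.uniform(1, 6, size=10_000)
obs, exp, chi2, p = benford_first_digit_test(amounts)
print("observed counts:", obs)
print("expected counts:", np.round(exp, 1))
print(f"chi-square = {chi2:.2f}, p-value = {p:.3f}")
```

In practice, digits (or digit groups) whose observed counts deviate strongly from the expected counts flag the transactions that merit further scrutiny.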
The effectiveness of Benford’s Law was evaluated according to findings from the literature review and the empirical study. The results indicated that a Benford’s Law analysis was effective in identifying the target groups in the data set that needed further investigation, as their numbers did not match Benford’s Law. / MCom (Forensic Accountancy), North-West University, Potchefstroom Campus, 2014
|
690 |
A total quality management (TQM) strategic measurement perspective with specific reference to the software industry. Pohl, Martha Jacoba, 11 1900
The dissertation aims to obtain an integrated and comprehensive perspective on measurement issues that play a strategic role in organisations that aim at continuous quality improvement through TQM. A multidimensional definition of quality is proposed in order to view quality holistically. The definition is dynamic; the dimensions are therefore subject to evolution. Measurement of the quality dimensions is investigated. The relationships between quality and cost, productivity and profitability, respectively, are examined. The product quality dimensions are redefined for processes. Measurement is a strategic component of TQM. Integration of financial measures with supplier, customer, performance and internal process measurement is essential for synergism. Measurement of quality management is an additional strategic quality dimension. Applicable research was integrated. Quantitative structures used successfully in industry to achieve quality improvement are important; the quality management maturity grid, cleanroom software engineering, software factories, quality function deployment, benchmarking and the ISO 9000 standards are therefore briefly described. Software metrics programs are considered to be an application of a holistic measurement approach to quality. Two practical approaches are identified and a framework for initiating implementation is proposed. Two strategic software measurement issues are reliability and cost estimation. Software reliability measurement and modelling are introduced, and a strategic approach to software cost estimation is suggested. The critical role of data collection is emphasized. Different approaches to implementing software cost estimation in organisations are proposed, with a total installed cost template envisaged as the ultimate goal. An overview of selected software cost estimation models is provided and potential research areas are identified. The linear/nonlinear nature of the software production function is analysed. The synergy between software cost estimation models and project management techniques is investigated. The quantification of uncertainty in activity durations, pertaining to project scheduling, is discussed: statistical distributions for activity durations are reviewed and compared, and a structured view of the criteria determining the selection of an activity duration distribution is provided. Estimation issues are reviewed. The integration of knowledge from dispersed fields leads to new dimensions of interaction. Research and practical experience regarding software metrics and software metrics programs can be successfully applied to address the measurement of strategic indicators in other industries. / Business Management / D. Phil. (Operations Research)
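As a purely illustrative sketch of the scheduling point above, a Monte Carlo simulation can propagate three-point (optimistic, most likely, pessimistic) activity-duration estimates through a simple serial schedule using a PERT-style beta distribution; the activities, the numbers and the choice of distribution below are hypothetical and are not drawn from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical activities with three-point estimates (optimistic a, most likely m, pessimistic b), in days.
activities = {
    "requirements": (5, 8, 15),
    "design":       (8, 12, 25),
    "coding":       (20, 30, 60),
    "testing":      (10, 15, 35),
}

def pert_beta_sample(a, m, b, size, rng):
    """Sample a PERT-style beta distribution rescaled to [a, b] (mean (a + 4m + b)/6)."""
    alpha = 1 + 4 * (m - a) / (b - a)
    beta = 1 + 4 * (b - m) / (b - a)
    return a + (b - a) * rng.beta(alpha, beta, size)

n_sim = 100_000
# Serial schedule: the total duration is the sum of the activity durations.
total = sum(pert_beta_sample(a, m, b, n_sim, rng) for a, m, b in activities.values())

print(f"mean simulated duration      : {total.mean():.1f} days")
print(f"sum of (a + 4m + b)/6 values : "
      f"{sum((a + 4 * m + b) / 6 for a, m, b in activities.values()):.1f} days")
print(f"80% / 95% quantiles          : {np.quantile(total, 0.8):.1f} / {np.quantile(total, 0.95):.1f} days")
```

The choice of distribution (beta, triangular, log-normal, and so on) drives the spread of the resulting completion-time distribution, which is exactly the selection question the dissertation's structured view of criteria addresses.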
|