361 |
Statistical Predictions Based on Accelerated Degradation Data and Spatial Count Data. Duan, Yuanyuan, 04 March 2014
This dissertation aims to develop methods for statistical predictions based on various types of data from different areas. We focus on applications from reliability and spatial epidemiology. Chapter 1 gives a general introduction to statistical predictions. Chapters 2 and 3 investigate the photodegradation of an organic coating, which is mainly caused by ultraviolet (UV) radiation but is also affected by environmental factors, including temperature and humidity. In Chapter 2, we identify a physically motivated nonlinear mixed-effects model, including the effects of environmental variables, to describe the degradation path. Unit-to-unit variability is modeled with random effects. The maximum likelihood approach is used to estimate parameters based on accelerated test data from the laboratory. The developed model is then extended to allow for time-varying covariates and is used to predict outdoor degradation, where the explanatory variables are time-varying.
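A stylized sketch of this kind of cumulative-damage path model is given below; the functional form (a normalized Arrhenius temperature term and a linear humidity term), all parameter values, and the variable names are illustrative assumptions rather than the dissertation's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def degradation_path(t, uv, temp_c, rh, beta, Ea=0.3, b_rh=0.5, unit_re=0.0):
    """D(t) = (beta + unit_re) * accumulated UV dose, modulated by a
    normalized Arrhenius temperature term and a humidity term (all assumed)."""
    kB = 8.617e-5                                    # Boltzmann constant, eV/K
    T, T_ref = temp_c + 273.15, 298.15
    arrh = np.exp((Ea / kB) * (1.0 / T_ref - 1.0 / T))   # equals 1 at 25 C
    rate = uv * arrh * (1.0 + b_rh * rh / 100.0)
    dt = np.diff(t, prepend=t[0])
    return (beta + unit_re) * np.cumsum(rate * dt)

t = np.linspace(0.0, 100.0, 201)                     # days
uv = 1.0 + 0.5 * np.sin(2 * np.pi * t / 30)          # time-varying UV dose rate
temp = 25.0 + 10.0 * np.sin(2 * np.pi * t / 365)     # outdoor temperature, deg C
rh = 50.0 + 20.0 * np.cos(2 * np.pi * t / 30)        # relative humidity, percent

for unit in range(3):                                # unit-to-unit variability
    re = rng.normal(0.0, 0.02)                       # random effect on the rate
    path = degradation_path(t, uv, temp, rh, beta=0.1, unit_re=re)
    print(f"unit {unit}: cumulative damage at t=100 days is {path[-1]:.2f}")
```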
Chapter 3 introduces a class of models for analyzing degradation data with dynamic covariate information. We use a general path model with random effects to describe the degradation paths and a vector time series model to describe the covariate process. Shape-restricted splines are used to estimate the effects of dynamic covariates on the degradation process. The unknown parameters of these models are estimated by maximum likelihood. Algorithms for computing the estimated lifetime distribution are also described. The proposed methods are applied to predict the photodegradation path of an organic coating in a complicated dynamic environment.
Chapter 4 investigates the emergence of Lyme disease in Virginia at the census-tract level. Based on areal (census-tract level) counts of Lyme disease cases in Virginia from 1998 to 2011, we analyze the spatial patterns of the disease using statistical smoothing techniques. We also use spatial and space-time scan statistics to reveal the presence of clusters in the spatial and spatio-temporal distribution of Lyme disease.
Chapter 5 builds a predictive model for Lyme disease based on historical data and environmental/demographic information for each census tract. We propose a Divide-Recombine method to take advantage of parallel computing. Simulation studies show that our method provides comparable fitting and prediction accuracy while being far more computationally efficient. We also apply the proposed method to the Virginia Lyme disease spatio-temporal data; it makes large-scale spatio-temporal predictions possible. Chapter 6 reviews the contributions of this dissertation and discusses directions for future research. / Ph. D.
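The core divide-recombine idea can be sketched in a few lines: partition the data, fit the same model independently on each block (in parallel in practice), and combine the block estimates by averaging. The linear model below is a hypothetical stand-in for the spatio-temporal model used in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, n_blocks = 12000, 5, 8
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -0.5, 0.0, 2.0, 0.3])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def fit_block(Xb, yb):
    # Each block fit is an ordinary least-squares solve.
    coef, *_ = np.linalg.lstsq(Xb, yb, rcond=None)
    return coef

blocks = np.array_split(np.arange(n), n_blocks)
block_coefs = [fit_block(X[idx], y[idx]) for idx in blocks]  # parallelizable

beta_dr = np.mean(block_coefs, axis=0)     # recombine: simple average
beta_full = fit_block(X, y)                # full-data fit for comparison
print("divide-recombine:", np.round(beta_dr, 3))
print("full-data OLS:  ", np.round(beta_full, 3))
```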
|
362 |
EFFICIENT INFERENCE AND DOMINANT-SET BASED CLUSTERING FOR FUNCTIONAL DATA. Xiang Wang (18396603), 03 June 2024
<p dir="ltr">This dissertation addresses three progressively fundamental problems for functional data analysis: (1) To do efficient inference for the functional mean model accounting for within-subject correlation, we propose the refined and bias-corrected empirical likelihood method. (2) To identify functional subjects potentially from different populations, we propose the dominant-set based unsupervised clustering method using the similarity matrix. (3) To learn the similarity matrix from various similarity metrics for functional data clustering, we propose the modularity guided and dominant-set based semi-supervised clustering method.</p><p dir="ltr">In the first problem, the empirical likelihood method is utilized to do inference for the mean function of functional data by constructing the refined and bias-corrected estimating equation. The proposed estimating equation not only improves efficiency but also enables practically feasible empirical likelihood inference by properly incorporating within-subject correlation, which has not been achieved by previous studies.</p><p dir="ltr">In the second problem, the dominant-set based unsupervised clustering method is proposed to maximize the within-cluster similarity and applied to functional data with a flexible choice of similarity measures between curves. The proposed unsupervised clustering method is a hierarchical bipartition procedure under the penalized optimization framework with the tuning parameter selected by maximizing the clustering criterion called modularity of the resulting two clusters, which is inspired by the concept of dominant set in graph theory and solved by replicator dynamics in game theory. The advantage offered by this approach is not only robust to imbalanced sizes of groups but also to outliers, which overcomes the limitation of many existing clustering methods.</p><p dir="ltr">In the third problem, the metric-based semi-supervised clustering method is proposed with similarity metric learned by modularity maximization and followed by the above proposed dominant-set based clustering procedure. Under semi-supervised setting where some clustering memberships are known, the goal is to determine the best linear combination of candidate similarity metrics as the final metric to enhance the clustering performance. Besides the global metric-based algorithm, another algorithm is also proposed to learn individual metrics for each cluster, which permits overlapping membership for the clustering. This is innovatively different from many existing methods. This method is superiorly applicable to functional data with various similarity metrics between functional curves, while also exhibiting robustness to imbalanced sizes of groups, which are intrinsic to the dominant-set based clustering approach.</p><p dir="ltr">In all three problems, the advantages of the proposed methods are demonstrated through extensive empirical investigations using simulations as well as real data applications.</p>
|
363 |
Likelihood Ratio Combination of Multiple Biomarkers and Change Point Detection in Functional Time Series. Du, Zhiyuan, 24 September 2024
Utilizing multiple biomarkers in medical research is crucial for the diagnostic accuracy of detecting diseases. An optimal method for combining these biomarkers is essential to maximize the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC). The optimality of the likelihood ratio has been proven, but challenges persist in estimating it, primarily in the estimation of multivariate density functions. In this study, we propose a non-parametric approach that uses smoothing spline density estimation to approximate the full likelihood functions of the diseased and non-diseased groups, which compose the likelihood ratio. Simulation results demonstrate the efficiency of our method compared to other biomarker combination techniques under various settings for the generated biomarker values. Additionally, we apply the proposed method to a real-world study aimed at detecting childhood autism spectrum disorder (ASD), showcasing its practical relevance and potential for future applications in medical research.
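A minimal sketch of the likelihood-ratio combination is given below, with scipy's Gaussian kernel density estimator standing in for the smoothing-spline density estimator used in the study; the simulated biomarker distributions are assumptions for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
healthy = rng.multivariate_normal([0, 0], [[1, .3], [.3, 1]], size=300)
diseased = rng.multivariate_normal([1, .8], [[1, -.2], [-.2, 1]], size=200)

f0 = gaussian_kde(healthy.T)           # density estimate, non-diseased group
f1 = gaussian_kde(diseased.T)          # density estimate, diseased group

def lr_score(x):
    # Combined score: estimated likelihood ratio f1/f0 at each point.
    return f1(x.T) / np.maximum(f0(x.T), 1e-12)

def auc(pos_scores, neg_scores):
    # AUC = P(score_diseased > score_healthy), via the Mann-Whitney statistic.
    diff = pos_scores[:, None] - neg_scores[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

print("AUC of LR combination:",
      round(auc(lr_score(diseased), lr_score(healthy)), 3))
```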
Change point detection for functional time series has attracted considerable attention from researchers. Existing methods either rely on functional principal component analysis (FPCA), which may perform poorly with complex data, or use bootstrap approaches in forms that fall short of effectively detecting diverse change functions. We propose a novel self-normalized (SN) test for functional time series, implemented via a non-overlapping block bootstrap, that circumvents reliance on FPCA. The SN factor ensures both monotonic power and adaptability for detecting diverse change functions in complex data. We also demonstrate the test's robustness in detecting changes in the autocovariance operator. Simulation studies confirm the superior performance of our test across various settings, and real-world applications further illustrate its practical utility. / Doctor of Philosophy / In medical research, it is crucial to accurately detect diseases and predict patient outcomes using multiple health indicators, also known as biomarkers. Combining these biomarkers effectively can significantly improve our ability to diagnose and treat various health conditions. However, finding the best way to combine them has been a long-standing challenge. In this study, we propose a new, easy-to-understand method for combining multiple biomarkers using advanced estimation techniques. Our method takes various factors into account and provides a more accurate way to evaluate the combined information from different biomarkers. Through simulations, we demonstrate that our method performs better than existing methods under a variety of scenarios. Furthermore, we applied it to a real-world study on detecting childhood autism spectrum disorder (ASD), highlighting its practical value and potential for future applications in medical research.
Detecting changes in patterns over time, especially shifts in averages, has become an important focus in data analysis. Existing methods often rely on techniques that may not perform well with more complex data or are limited in the types of changes they can detect. In this study, we introduce a new approach that improves the accuracy of detecting changes in complex data patterns. Our method is flexible and can identify changes in both the mean and variation of the data over time. Through simulations, we demonstrate that this approach is more accurate than current methods. Furthermore, we applied our method to real-world climate research data, illustrating its practical value.
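A schematic sketch of the change-point test's ingredients appears below: a CUSUM process for the curve means, a generic self-normalizer built from the same partial sums, and a non-overlapping block bootstrap for the critical value. The dissertation's statistic adapts the normalizer to the candidate change location; this simplified version only illustrates the pipeline.

```python
import numpy as np

def sn_cusum(X):
    """X: (n, p) array of n discretized curves on a common grid."""
    n = X.shape[0]
    S = np.cumsum(X, axis=0)                       # partial-sum process
    k = np.arange(1, n + 1)[:, None]
    C = (S - k / n * S[-1]) / np.sqrt(n)           # CUSUM at each k
    V = np.sum(C**2) / n                           # generic self-normalizer
    return np.max(np.sum(C**2, axis=1)) / V

def block_bootstrap_pvalue(X, block_len=10, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Xc = X - X.mean(axis=0)                        # impose the null (no change)
    # Non-overlapping blocks preserve short-range temporal dependence.
    blocks = [Xc[i:i + block_len] for i in range(0, n - block_len + 1, block_len)]
    obs = sn_cusum(X)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(blocks), size=len(blocks))
        boot[b] = sn_cusum(np.vstack([blocks[i] for i in idx]))
    return (boot >= obs).mean()

# Curves with a mean shift halfway through the series.
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 30)
X = rng.normal(scale=0.5, size=(100, 30))
X[50:] += 0.8 * np.sin(np.pi * t)                  # the change function
print("bootstrap p-value:", block_bootstrap_pvalue(X))
```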
|
364 |
Graph Cut Based Mesh Segmentation Using Feature Points and Geodesic Distance. Liu, L., Sheng, Y., Zhang, G., Ugail, Hassan, January 2015
Both prominent feature points and geodesic distance are key factors for mesh segmentation. With these two factors, this paper proposes a graph cut based mesh segmentation method. The mesh is first preprocessed by Laplacian smoothing. According to the Gaussian curvature, candidate feature points are then selected by a predefined threshold. With DBSCAN (Density-Based Spatial Clustering of Applications with Noise), the selected candidate points are separated into clusters, and the point with the maximum curvature in each cluster is regarded as a final feature point. We label these feature points and regard the faces of the mesh as nodes for the graph cut. Our energy function is constructed from the ratio between the geodesic distance and the Euclidean distance of vertex pairs of the mesh. The final segmentation is obtained by minimizing the energy function using the graph cut. The proposed algorithm is pose-invariant and can robustly segment the mesh into parts in line with the selected feature points.
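The feature-point selection step can be sketched as follows: threshold vertices by Gaussian curvature, cluster the candidates with DBSCAN, and keep the highest-curvature vertex per cluster. The synthetic "mesh", the threshold, and the DBSCAN parameters below are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(5)
# Stand-in "mesh": background vertices plus three high-curvature blobs.
background = rng.uniform(size=(1500, 3))
blobs = np.vstack([c + 0.02 * rng.normal(size=(40, 3))
                   for c in ([.2, .2, .2], [.8, .3, .6], [.5, .9, .4])])
verts = np.vstack([background, blobs])
curv = np.concatenate([rng.gamma(1.0, 0.1, size=1500),    # low curvature
                       rng.gamma(5.0, 0.4, size=120)])    # high at the blobs

thresh = np.quantile(curv, 0.93)              # predefined curvature threshold
cand = np.where(curv > thresh)[0]             # candidate feature points

labels = DBSCAN(eps=0.08, min_samples=5).fit(verts[cand]).labels_
# One feature point per cluster: the candidate with maximum curvature.
feature_idx = [cand[labels == c][np.argmax(curv[cand][labels == c])]
               for c in sorted(set(labels)) if c != -1]
print("number of clusters:", len(feature_idx))
print("feature vertex indices:", feature_idx)
```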
|
365 |
The Propagation-Separation Approach: theoretical study and application to magnetic resonance imaging. Becker, Saskia, 16 May 2014
In statistics, nonparametric estimation is often based on local parametric modeling. For pointwise estimation of the target function, the parametric neighborhoods can be described by weights that depend on the design points or on the observations. As it turned out, the comparison of noisy observations at single points suffers from a lack of robustness. The Propagation-Separation Approach by Polzehl and Spokoiny [2006] overcomes this problem by using a multiscale approach with iteratively updated weights. The method has been successfully applied to a large variety of statistical problems. Here, we present a theoretical study and numerical results, which provide a better understanding of this versatile procedure. For this purpose, we introduce and analyse a novel strategy for the choice of the crucial parameter of the algorithm, namely the adaptation bandwidth. In particular, we study its variability with respect to the unknown target function. This justifies a choice independent of the data at hand. For piecewise constant and piecewise bounded functions, this choice enables theoretical proofs of the main heuristic properties of the algorithm. Additionally, we consider the case of a misspecified model. Here, we introduce a specific step function, and we establish a pointwise error bound between this function and the corresponding estimates of the Propagation-Separation Approach. Finally, we develop a method for the denoising of diffusion-weighted magnetic resonance data, which is based on the Propagation-Separation Approach.
Our new procedure, called (ms)POAS, relies on a specific description of the data, which enables simultaneous smoothing in the measured positions and with respect to the directions of the applied diffusion-weighting magnetic field gradients. We define and justify two distance functions on the combined measurement space, where we follow a differential geometric approach. We demonstrate the capability of (ms)POAS on simulated and experimental data.
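A one-dimensional sketch of a propagation-separation style iteration is given below: the weights combine a location kernel with a statistical penalty that compares current local estimates, and the location bandwidth grows with each step. The specific kernels, bandwidth schedule, and adaptation bandwidth lambda are illustrative assumptions, not the choices analyzed in the thesis.

```python
import numpy as np

def propagation_separation(y, x, sigma=1.0, lam=4.0, n_iter=8):
    theta = y.copy()                      # initial pointwise estimates
    N = np.ones_like(y)                   # effective local sample sizes
    for k in range(n_iter):
        h = 0.01 * 1.5**k                 # growing location bandwidth
        d_loc = (x[:, None] - x[None, :])**2 / h**2
        # Statistical penalty: scaled divergence between current Gaussian fits.
        pen = N[:, None] * (theta[:, None] - theta[None, :])**2 \
              / (2 * sigma**2 * lam)
        w = np.exp(-d_loc) * np.clip(1.0 - pen, 0.0, 1.0)  # adaptive weights
        N = w.sum(axis=1)
        theta = w @ y / N                 # iteratively updated estimates
    return theta

rng = np.random.default_rng(6)
x = np.linspace(0, 1, 200)
truth = np.where(x < 0.5, 0.0, 2.0)       # piecewise constant signal
y = truth + rng.normal(scale=1.0, size=x.size)
est = propagation_separation(y, x)
print("RMSE:", np.sqrt(np.mean((est - truth)**2)).round(3))
```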
|
366 |
ROBUST INFERENCE FOR HETEROGENEOUS TREATMENT EFFECTS WITH APPLICATIONS TO NHANES DATA. Ran Mo (20329047), 10 January 2025
<p dir="ltr">Estimating the conditional average treatment effect (CATE) using data from the National Health and Nutrition Examination Survey (NHANES) provides valuable insights into the heterogeneous</p><p dir="ltr">impacts of health interventions across diverse populations, facilitating public health strategies that consider individual differences in health behaviors and conditions. However, estimating CATE with NHANES data face challenges often encountered in observational studies, such as outliers, heavy-tailed error distributions, skewed data, model misspecification, and the curse of dimensionality. To address these challenges, this dissertation presents three consecutive studies that thoroughly explore robust methods for estimating heterogeneous treatment effects. </p><p dir="ltr">The first study introduces an outlier-resistant estimation method by incorporating M-estimation, replacing the \(L_2\) loss in the traditional inverse propensity weighting (IPW) method with a robust loss function. To assess the robustness of our approach, we investigate its influence function and breakdown point. Additionally, we derive the asymptotic properties of the proposed estimator, enabling valid inference for the proposed outlier-resistant estimator of CATE.</p><p dir="ltr">The method proposed in the first study relies on a symmetric assumption which is commonly required by standard outlier-resistant methods. To remove this assumption while maintaining </p><p dir="ltr">unbiasedness, the second study employs the adaptive Huber loss, which dynamically adjusts the robustification parameter based on the sample size to achieve optimal tradeoff between bias and robustness. The robustification parameter is explicitly derived from theoretical results, making it unnecessary to rely on time-consuming data-driven methods for its selection.</p><p dir="ltr">We also derive concentration and Berry-Esseen inequalities to precisely quantify the convergence rates as well as finite sample performance.</p><p dir="ltr">In both previous studies, the propensity scores were estimated parametrically, which is sensitive to model misspecification issues. The third study extends the robust estimator from our first </p><p dir="ltr">project by plugging in a kernel-based nonparametric estimation of the propensity score with sufficient dimension reduction (SDR). Specifically, we adopt a robust minimum average variance estimation (rMAVE) for the central mean space under the potential outcome framework. Together with higher-order kernels, the resulting CATE estimation gains enhanced efficiency.</p><p dir="ltr">In all three studies, the theoretical results are derived, and confidence intervals are constructed for inference based on these findings. The properties of the proposed estimators are verified through extensive simulations. Additionally, applying these methods to NHANES data validates the estimators' ability to handle diverse and contaminated datasets, further demonstrating their effectiveness in real-world scenarios.</p><p><br></p>
|
367 |
Enhancing the Efficacy of Predictive Analytical Modeling in Operational Management Decision Making. Najmizadehbaghini, Hossein, 08 1900
In this work, we focus on enhancing the efficacy of predictive modeling in operational management decision making in two settings: Essay 1 addresses demand forecasting for companies, and Essay 2 uses longitudinal data to analyze illicit drug seizures and overdose deaths in the United States.

In Essay 1, we use an operational system (the newsvendor model) to evaluate forecast outcomes and provide guidelines for assessing the performance of a forecasting method (the exponential smoothing model) and for judgmental adjustments. To assess the forecast outcome, we consider not only the common forecast-error minimization approach but also profit maximization at the end of the forecast horizon. Including profit in our assessment enables us to determine whether error minimization always results in maximum profit. We also examine different levels of profit margin to analyze their impact on forecasting performance, and we investigate how different demand patterns influence it. Our study shows that the exponential smoothing model family performs better on high-profit products, and that performance declines with demand uncertainty at a higher rate in a stationary demand environment.

In the second essay, we focus on the illicit drug overdose death rate. Illicit drug overdoses are the leading cause of injury death in the United States. In 2017, overdose deaths reached the highest level ever recorded (70,237), and statistics show that the problem is growing. The age-adjusted rate of drug overdose deaths in 2017 (21.7 per 100,000) was 9.6% higher than the rate in 2016 (19.8 per 100,000) (U.S. Drug Enforcement Administration, 2018, p. V). Also, marijuana consumption among youth has increased since 2009. The magnitude of the illegal drug trade and its resulting problems have led the government to produce large and comprehensive datasets on a variety of phenomena relating to illicit drugs. In this study, we utilize these datasets to examine how marijuana usage among youth influences excessive drug usage, which we measure as the drug overdose death rate per state. Our study shows that illegal marijuana consumption increases excessive drug use. We also analyze the pattern of the most frequently seized illicit drugs and compare it with the drugs most frequently involved in overdose deaths. We extend the analysis to seizure patterns across layers of the heroin and cocaine supply chains across states. This analysis reveals that the most active layers of the heroin supply chain in the American market are retailers and wholesalers, while multi-kilo traffickers are the most active players in the cocaine supply chain.

In summary, the studies in this dissertation explore the use of analytical, descriptive, and predictive models to detect patterns that improve efficacy and support better operational management decision making.
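The Essay 1 evaluation loop can be sketched as follows: simple exponential smoothing produces the forecast, the newsvendor quantile rule turns it into an order quantity, and realized profit is tracked alongside forecast error. The cost parameters, smoothing constant, and normal-error assumption are illustrative.

```python
import numpy as np
from scipy.stats import norm

def ses(y, alpha=0.3):
    # Simple exponential smoothing: one-step-ahead forecasts f[t+1].
    f = np.empty(len(y) + 1)
    f[0] = y[0]
    for t in range(len(y)):
        f[t + 1] = alpha * y[t] + (1 - alpha) * f[t]
    return f

rng = np.random.default_rng(8)
demand = 100 + rng.normal(scale=15, size=60)           # stationary demand

price, cost, salvage = 10.0, 6.0, 2.0
cu, co = price - cost, cost - salvage                  # underage / overage costs
critical_ratio = cu / (cu + co)                        # newsvendor quantile

f = ses(demand)
resid_sd = np.std(demand - f[:-1])
orders = f[:-1] + norm.ppf(critical_ratio) * resid_sd  # order quantities

sales = np.minimum(orders, demand)
profit = price * sales + salvage * (orders - sales) - cost * orders
print(f"MAE: {np.abs(demand - f[:-1]).mean():.1f}   "
      f"total profit: {profit.sum():.0f}")
```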
|
368 |
SPARSE SUBARRAYS FOR DIRECTION OF ARRIVAL ESTIMATION: ALGORITHMS AND GEOMETRIES. Wesley Souza Leite, 06 February 2025
This thesis explores advanced array signal processing techniques for both fully and partially calibrated arrays. We introduce novel sparse array geometries based on sparse linear subarrays and develop new direction-of-arrival (DOA) estimation algorithms for narrowband electromagnetic signals, framed within statistical signal processing principles. The proposed algorithms, named Generalized Coarray MUSIC (GCA-MUSIC) and Generalized Coarray Root MUSIC (GCA-rMUSIC), extend the classical Multiple Signal Classification (MUSIC) framework to sparse subarray configurations. Sparse linear subarray design techniques are proposed, along with an analysis of the degrees of freedom of the subarrays (sDoF) as a function of the degrees of freedom of the whole array (DoF). Additionally, we develop Variable Window Size (VWS) versions of these algorithms, which incorporate flexible spatial smoothing apertures. These methods provide high-accuracy DOA estimates and offer the key advantage of resolving more sources than the number of physical sensors in each subarray by exploiting coarray structures. Performance analysis demonstrates that GCA-MUSIC and GCA-rMUSIC, along with their VWS variants, improve accuracy in the context of partially calibrated arrays, where calibration uncertainties may exist. Furthermore, VWS variants of the Coarray MUSIC (CA-MUSIC) algorithm are presented for fully calibrated (coherent) arrays, enabling adaptable smoothing strategies for enhanced performance. In addition to algorithmic development, we compute the Fisher Information Matrices (FIMs) for the complete set of parameters in this generalized data model, including both self- and cross-coupled parameter relationships. These matrices account for source directions, source powers, noise power, and the real and imaginary components of all calibration parameters, representing both correlated and uncorrelated source scenarios. This work significantly advances the theoretical understanding of DOA estimation performance limits by providing a more rigorous quantification of the Cramér-Rao bounds. These bounds are particularly relevant in scenarios with partially calibrated arrays and uncorrelated sources, as demonstrated using the Khatri-Rao product-based data model.
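The coarray idea that lets these algorithms resolve more sources than physical sensors can be sketched directly: the set of pairwise sensor-position differences determines the available degrees of freedom. The nested-array geometry below is a standard textbook example, not necessarily one of the geometries proposed in the thesis.

```python
import numpy as np

def difference_coarray(positions):
    # All pairwise differences of sensor positions (the difference coarray).
    pos = np.asarray(positions)
    return np.unique((pos[:, None] - pos[None, :]).ravel())

def uniform_dof(lags):
    # Longest run of consecutive integer lags around zero; this is the
    # aperture usable by spatial-smoothing-based coarray MUSIC.
    lagset = set(lags.tolist())
    m = 0
    while m + 1 in lagset:
        m += 1
    return 2 * m + 1

# Nested array: a dense ULA of 3 sensors plus a sparse ULA of 3 sensors.
nested = list(range(1, 4)) + [4 * k for k in range(1, 4)]
lags = difference_coarray(nested)
print("sensor positions:", nested)           # 6 physical sensors
print("uniform coarray DoF:", uniform_dof(lags))   # 23 consecutive lags
```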
|
369 |
Some Advanced Model Selection Topics for Nonparametric/Semiparametric Models with High-Dimensional Data. Fang, Zaili, 13 November 2012
Model and variable selection have attracted considerable attention in areas of application where datasets usually contain thousands of variables. Variable selection is a critical step in reducing the dimension of high dimensional data by eliminating irrelevant variables. The general objective of variable selection is not only to obtain a set of cost-effective predictors but also to improve prediction accuracy and reduce prediction variance. We have made several contributions to this issue through a range of advanced topics: providing a graphical view of Bayesian Variable Selection (BVS), recovering sparsity in multivariate nonparametric models, and proposing a testing procedure for evaluating a nonlinear interaction effect in a semiparametric model.
To address the first topic, we propose a new Bayesian variable selection approach via the graphical model and the Ising model, which we refer to as the "Bayesian Ising Graphical Model" (BIGM). Our BIGM has several advantages: it is easy to (1) employ the single-site and cluster updating algorithms, both of which are suitable for problems with small sample sizes and a large number of variables, (2) extend the approach to nonparametric regression models, and (3) incorporate graphical prior information.
In the second topic, we propose a Nonnegative Garrote on a Kernel machine (NGK) to recover the sparsity of input variables in smoothing functions. We model the smoothing function by a least squares kernel machine and construct a nonnegative garrote on the kernel model as a function of the similarity matrix. An efficient coordinate descent/backfitting algorithm is developed.
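A simplified illustration of the nonnegative garrote with coordinate descent is given below, using the classical linear-model version; the dissertation constructs the garrote on a least squares kernel machine instead, but the nonnegative shrinkage factors and the clamped coordinate update have the same form.

```python
import numpy as np

rng = np.random.default_rng(9)
n, p = 200, 8
X = rng.normal(size=(n, p))
beta = np.array([3.0, -2.0, 1.5, 0, 0, 0, 0, 0])   # sparse truth
y = X @ beta + rng.normal(scale=1.0, size=n)

beta_init, *_ = np.linalg.lstsq(X, y, rcond=None)  # initial (unpenalized) fit
Z = X * beta_init                                   # garrote design: z_j = beta_j * x_j

# Minimize 0.5 * ||y - Z d||^2 + lam * sum(d_j) subject to d_j >= 0.
lam = 20.0
d = np.ones(p)
for _ in range(100):                                # coordinate descent sweeps
    for j in range(p):
        r_j = y - Z @ d + Z[:, j] * d[j]            # partial residual
        d[j] = max(0.0, (Z[:, j] @ r_j - lam) / (Z[:, j] @ Z[:, j]))

print("garrote factors d:", np.round(d, 2))         # exact zeros drop variables
print("shrunken coefficients:", np.round(d * beta_init, 2))
```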
The third topic involves a specific genetic pathway dataset in which the pathways interact with the environmental variables. We propose a semiparametric method to model the pathway-environment interaction. We then employ a restricted likelihood ratio test and a score test to evaluate the main pathway effect and the pathway-environment interaction. / Ph. D.
|
370 |
The Empirical Study of Earning Quality and Motivation of Earning Management: The Example of Publicly Listed Taiwanese Companies. Lin, Yu Kai, Unknown Date
During the last few years, there have been numerous cases of financial manipulation and scandals, and the participation of management in creative accounting has worsened, imposing enormous disciplinary risks and unnecessary costs on the capital market. To draw investors' attention to earning quality and to provide a more objective understanding of earning management, this study proposes two earning quality classifications based on different criteria and examines the differences in financial characteristics and earning management components of publicly listed Taiwanese companies under these classifications.

Most existing local literature discusses the essence of earning quality or the phenomenon of earning management separately; combining the two topics into one study is a new attempt. The first part of this study defines earning quality, and the second part examines issues of earning management in detail.

A regression analysis was conducted on 381 publicly listed firms in Taiwan over the period from the third quarter of 2002 to the third quarter of 2004, for a total of 3,429 sample points.

First, the samples' earning qualities were classified on two bases: (1) the relationship between earnings and operating cash flow, and (2) the comparison between the growth rate of accounts receivable and the growth rate of revenue. After classification, earning quality was used to test the valuation ability and persistence of earnings. The study finds that earning quality based on the relationship between earnings and operating cash flow provides incremental valuation ability, whereas earning quality based on the comparison between the growth rates of accounts receivable and revenue does not. Both earning quality classifications contribute significantly to earnings persistence.

The second part separates the samples into earning smoothing and non-smoothing firms to examine the strength of earnings' valuation ability. The results show that earning smoothing does not affect valuation ability. Introducing earning quality and observing the differences under cross grouping reveals that the group combining high earning quality with non-smoothing has the highest earnings valuation coefficient. Earnings are further decomposed into three components: operating cash flow, discretionary accruals, and nondiscretionary accruals. The focus is on discretionary accruals, whose valuation ability and persistence are tested with earning quality included. Discretionary accruals are found to possess valuation ability; under both earning quality classifications, discretionary accruals of high earning quality provide no incremental valuation contribution, while both classifications contribute positively to the persistence of discretionary accruals.

Finally, in order to test the reaction of earning quality under the incentives of earning management, the incentives were divided into two targets, reaching breakeven and exceeding prior-period earnings, and earning quality was included to observe the interactions. Under both earning targets, earning management does indeed exist. After including the earning quality variables, both classifications exert a suppressive effect on the two management incentives.
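The accruals decomposition used in the second part can be illustrated with a standard model; the abstract does not name its decomposition, so the Jones-model regression below (with simulated data) is an assumption chosen for illustration: regress total accruals on inverse lagged assets, revenue growth, and PPE, and take the residual as the discretionary component.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 381                                        # firms, as in the sample
lag_assets = rng.lognormal(mean=8, sigma=1, size=n)
d_rev = rng.normal(scale=0.1, size=n) * lag_assets     # change in revenue
ppe = 0.4 * lag_assets * rng.uniform(0.5, 1.5, size=n) # gross PPE
total_accruals = (0.02 * d_rev - 0.05 * ppe
                  + rng.normal(scale=0.02, size=n) * lag_assets)

# Jones (1991) model: all variables scaled by lagged total assets.
ta = total_accruals / lag_assets
Z = np.column_stack([1 / lag_assets, d_rev / lag_assets, ppe / lag_assets])
coef, *_ = np.linalg.lstsq(Z, ta, rcond=None)

nondiscretionary = Z @ coef
discretionary = ta - nondiscretionary          # residual = discretionary accruals
print("mean |discretionary accruals|:", round(np.abs(discretionary).mean(), 4))
```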
|