251 |
Conceptual and empirical advances in antitrust market definition with application to South African competition policy
Boshoff, Willem Hendrik
Thesis (PhD)--Stellenbosch University, 2011.
ENGLISH ABSTRACT: Delineating the relevant product and geographic market is an important first step in competition inquiries,
as it permits an assessment of market power and substitutability. Critics often argue that market definition
is arbitrary and increasingly unnecessary, as modern econometric models can directly predict the
competitive effects of a merger or anti-competitive practice. Yet practical constraints (such as limited
data) and legal considerations (such as case law precedent) continue to support a formal definition of the
relevant market. Within this context, this dissertation develops three tools to improve market definition:
two empirical tools for cases with limited data and one conceptual decision-making tool to elucidate
important factors and risks in market definition.
The first tool for market definition involves a systematic analysis of consumer characteristics (i.e. the
demographic and income profiles of consumers). Consumer characteristics can assist in defining markets
as consumers with similar characteristics tend to switch to similar products following a price rise.
Econometric models therefore incorporate consumer characteristics data to improve price elasticity
estimates. Even though data constraints often prevent the use of econometric models, a systematic
analysis of consumer characteristics can still be useful for market definition. Cluster analysis offers a
statistical technique to group products on the basis of the similarity of their consumers' characteristics. A
recently concluded partial radio station merger in South Africa offers a case study for the use of consumer
characteristics in defining markets.
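The sketch below illustrates how cluster analysis might group products by the similarity of their consumers' characteristics; the station names, variables and audience shares are hypothetical illustrations, not data from the Primedia/Kaya FM case.

```python
# Sketch: grouping products (e.g. radio stations) by the similarity of their
# listeners' demographic and income profiles. Station names and the audience
# shares below are hypothetical placeholders, not data from the case.
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage

# Rows: products; columns: share of each product's consumers with a given
# characteristic (all values are made up for illustration).
profiles = pd.DataFrame(
    {
        "share_age_16_24": [0.45, 0.40, 0.10, 0.12],
        "share_age_50_plus": [0.10, 0.12, 0.55, 0.50],
        "share_high_income": [0.30, 0.35, 0.60, 0.25],
        "share_urban": [0.80, 0.85, 0.40, 0.45],
    },
    index=["Station A", "Station B", "Station C", "Station D"],
)

# Standardise each characteristic so no single variable dominates the distances.
standardised = (profiles - profiles.mean()) / profiles.std()

# Ward's method merges products whose consumer profiles are most alike.
tree = linkage(standardised.values, method="ward")

# Cut the tree into two candidate groups; products sharing a label have similar
# audiences and are therefore plausible demand-side substitutes.
labels = fcluster(tree, t=2, criterion="maxclust")
print(dict(zip(profiles.index, labels)))
```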
The second tool, or set of tools, for defining markets involves using tests for price co-movement. Critics
argue that price tests are not appropriate for defining markets, as these tests are based on the law of one
price - which tests only for price linkages and not for the ability to raise prices. Price tests, however, are
complements to existing market definition tools, rather than substitutes. Critics also argue that price tests
suffer from low statistical power in discriminating between close and less close substitutes. But these criticisms
ignore inter alia the role of price tests as tools for gathering information and the range of price tests with
better size and power properties that are available, including new stationarity tests and autoregressive
models. A recently concluded investigation in the South African dairy industry offers price data to
evaluate the market definition insights of various price tests.
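As an illustration of the kind of price test discussed here, the sketch below checks whether the log relative price of two regions is stationary using an augmented Dickey-Fuller test; the price series are simulated rather than the dairy-case data, and other stationarity tests or autoregressive models could be substituted.

```python
# Sketch: a simple price test for market definition. If the (log) relative
# price of two products or regions is stationary, their prices move together,
# which is consistent with them belonging to the same market. The series
# below are simulated for illustration, not the dairy-case data.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n = 200

# Region A: a random-walk (log) price; Region B: the same price plus
# stationary deviations, as arbitrage within one market would imply.
log_p_a = np.cumsum(rng.normal(0, 0.02, n)) + np.log(10.0)
log_p_b = log_p_a + rng.normal(0, 0.01, n)

relative_price = log_p_a - log_p_b

# Augmented Dickey-Fuller test: H0 = unit root (no stable price linkage).
adf_stat, p_value, *_ = adfuller(relative_price, autolag="AIC")
print(f"ADF statistic = {adf_stat:.2f}, p-value = {p_value:.3f}")
if p_value < 0.05:
    print("Relative price looks stationary: evidence of price co-movement.")
else:
    print("No evidence of co-movement at the 5% level.")
```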
The third tool is conceptual in nature and involves a decision rule for defining markets. If market
definition is a binary classification problem (a product is either 'in' or 'out' of the market), it faces risks of misclassification (incorrectly including or excluding a product). Analysts can manage these risks using
a Bayesian decision rule that balances (1) the weight of evidence in favour of and against substitutability,
(2) prior probabilities determined by previous cases and economic research, and (3) the loss function of
the decision maker. The market definition approach adopted by the South African Competition Tribunal
in the Primedia / Kaya FM merger investigation offers a useful case study to illustrate the implementation
of such a rule in practice.
AFRIKAANSE OPSOMMING (Afrikaans abstract): Competition cases usually begin with the delineation of the relevant product and geographic market. The market definition process often sheds light on market power and substitution possibilities, and thus supports the assessment of a competition case. Critics, however, regard market definition as arbitrary and even unnecessary, especially since econometric models can directly predict the effect of a merger or an anti-competitive practice on competition. Practitioners nonetheless still prefer to define markets formally, on the grounds of both practical considerations (including data constraints that complicate econometric modelling) and legal considerations (including the role of case law precedent). This dissertation therefore develops three tools for the definition of markets: two empirical tools for cases where data are limited, as well as a conceptual tool to manage, among other things, the risks surrounding market definition.
The first tool for the definition of markets involves the systematic analysis of consumer characteristics, including the demographic and income profiles of consumers. Consumer characteristics shed light on substitution, since similar consumers tend to switch to similar products following a price increase. Econometric models therefore use consumer characteristics data to improve estimates of price elasticity. Although data constraints often limit econometric modelling, consumer characteristics can still be useful on their own for delineating the market. Cluster analysis offers a statistical method for a systematic examination of consumer characteristics for market definition, in that it groups products on the basis of similar consumer characteristics. A recent investigation in South Africa concerning the partial merger of the Primedia and Kaya FM radio stations provides data to illustrate the use of cluster analysis and consumer characteristics for market definition purposes.
The second tool for market definition involves statistical tests for relationships between the price time series of different products or regions. These price tests are based on the law of one price and emphasise price linkages rather than the ability to raise prices (which is the ultimate focus of competition policy). This emphasis does not necessarily diminish the insights that price tests offer, however, since market definition often requires a comprehensive analysis. Critics also often describe the statistical power of price tests as weak. This technical criticism regards price tests as narrowly defined hypothesis tests rather than as tools for exploring substitution patterns. Furthermore, it ignores a variety of new price tests with better discriminatory power, including new tests for stationarity and new autoregressive models. A recent competition investigation in the South African dairy industry provides price data with which to examine the performance of various price tests for geographic market definition.
The third tool for the definition of markets involves a decision rule. Under this approach, market definition is viewed as a binary classification problem in which a product or region must be placed either 'inside' or 'outside' the market. Given that this classification takes place under conditions of uncertainty, market definition is exposed to risks of misclassification. Practitioners can manage these risks by using a Bayesian decision rule. Such a rule balances (1) the weight of evidence for and against substitutability, (2) prior probabilities as determined by previous competition cases and academic research, and (3) the loss function of the decision maker. The approach of the South African Competition Tribunal in the case concerning the partial merger of Primedia and Kaya FM offers a useful case study with which to demonstrate these principles.
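A minimal numerical sketch of such a Bayesian decision rule follows; the prior, likelihood ratio and loss values are hypothetical and serve only to show how the three ingredients interact.

```python
# Sketch of the Bayesian decision rule for market definition described above.
# All probabilities and losses are hypothetical illustrations.

def include_in_market(prior_in, likelihood_ratio,
                      loss_false_exclusion, loss_false_inclusion):
    """Return True if including the product has the lower expected loss.

    prior_in             : prior probability (from past cases/research) that
                           the product belongs in the market
    likelihood_ratio     : P(evidence | in market) / P(evidence | not in market),
                           i.e. the weight of the substitutability evidence
    loss_false_exclusion : loss from wrongly excluding a genuine substitute
    loss_false_inclusion : loss from wrongly including a non-substitute
    """
    prior_odds = prior_in / (1.0 - prior_in)
    posterior_odds = prior_odds * likelihood_ratio
    posterior_in = posterior_odds / (1.0 + posterior_odds)

    expected_loss_include = (1.0 - posterior_in) * loss_false_inclusion
    expected_loss_exclude = posterior_in * loss_false_exclusion
    return expected_loss_include < expected_loss_exclude

# Example: a moderately informative prior, evidence mildly favouring
# substitution, and a decision maker who fears wrongly widening the market
# (false inclusion) more than wrongly narrowing it.
print(include_in_market(prior_in=0.4, likelihood_ratio=2.5,
                        loss_false_exclusion=1.0, loss_false_inclusion=2.0))
```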
|
252 |
Some Novel Statistical Inferences
Li, Chenxue, 12 August 2016
In medical diagnostic studies, the area under the Receiver Operating Characteristic (ROC) curve (AUC) and the Youden index are two summary measures widely used in the evaluation of the diagnostic accuracy of a medical test with continuous test results. The first half of this dissertation will highlight ROC analysis, including the extension of the Youden index to the partial Youden index, as well as novel confidence interval estimation for the AUC and the Youden index in the presence of covariates in induced linear regression models. Extensive simulation results show that the proposed methods perform well with small to moderate sample sizes. In addition, some real examples will be presented to illustrate the methods.
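For reference, the sketch below computes the two summary measures on simulated test scores: the empirical AUC and the Youden index (the maximum of sensitivity plus specificity minus one) with its optimal cut-off. It does not implement the covariate-adjusted intervals proposed in the dissertation.

```python
# Sketch: empirical AUC and Youden index for a continuous diagnostic test.
# Test scores below are simulated; they stand in for real biomarker data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, 300)   # non-diseased scores
diseased = rng.normal(1.2, 1.0, 200)  # diseased scores (shifted upwards)

scores = np.concatenate([healthy, diseased])
labels = np.concatenate([np.zeros_like(healthy), np.ones_like(diseased)])

auc = roc_auc_score(labels, scores)

# Youden index J = max over cut-offs of (sensitivity + specificity - 1).
fpr, tpr, thresholds = roc_curve(labels, scores)
j_values = tpr - fpr
best = np.argmax(j_values)
print(f"AUC = {auc:.3f}, Youden J = {j_values[best]:.3f}, "
      f"optimal cut-off = {thresholds[best]:.3f}")
```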
The latter half focuses on the application of the empirical likelihood method in economics and finance. Two models draw our attention. The first is the predictive regression model with independent and identically distributed errors. Some uniform tests have been proposed in the literature without distinguishing whether the predicting variable is stationary or nearly integrated. Here, we extend the empirical likelihood methods in Zhu, Cai and Peng (2014) with independent errors to the case of an AR error process. The proposed new tests do not need to know whether the predicting variable is stationary or nearly integrated, or whether it has a finite or an infinite variance. The other model we consider is a GARCH(1,1) sequence or an AR(1) model with ARCH(1) errors. It is known that the observations have a heavy tail and that the tail index is determined by an estimating equation. Therefore, one can estimate the tail index by solving the estimating equation with the unknown parameters replaced by the quasi maximum likelihood estimate (QMLE), and the profile empirical likelihood method can be employed to construct a confidence interval for the tail index effectively. However, this requires the errors of such a model to have at least a finite fourth moment to ensure asymptotic normality at the n^{1/2} rate of convergence and the validity of Wilks' theorem. We show that the finite fourth moment condition can be relaxed by employing a least absolute deviations estimate (LADE) instead of the QMLE for the unknown parameters, noting that the estimating equation for determining the tail index is invariant to a scale transformation of the underlying model. Furthermore, the proposed tail index estimators have a normal limit at the n^{1/2} rate of convergence under a minimal moment condition, which may allow an infinite fourth moment, and Wilks' theorem holds for the proposed profile empirical likelihood methods. Hence a confidence interval for the tail index can be obtained without estimating any additional quantities such as the asymptotic variance.
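The sketch below illustrates the kind of estimating equation referred to here for a GARCH(1,1) sequence, whose tail index kappa solves E[(alpha*eta^2 + beta)^(kappa/2)] = 1 and is invariant to rescaling the data. The residuals and parameter values are placeholders rather than QMLE or LADE output, and the profile empirical likelihood interval is not implemented.

```python
# Sketch: the type of estimating equation that determines the tail index of a
# GARCH(1,1) sequence X_t = sigma_t * eta_t with
#   sigma_t^2 = omega + alpha * X_{t-1}^2 + beta * sigma_{t-1}^2.
# The tail index kappa solves E[(alpha * eta^2 + beta)^(kappa/2)] = 1, which is
# invariant to rescaling the data. The residuals and parameter values below are
# placeholders (in practice they would come from QMLE or, as proposed, LADE).
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
eta_hat = rng.standard_normal(5000)   # stand-in for standardised innovations
alpha_hat, beta_hat = 0.10, 0.85      # stand-in GARCH(1,1) estimates

def moment_equation(kappa):
    # Sample analogue of E[(alpha * eta^2 + beta)^(kappa/2)] - 1.
    return np.mean((alpha_hat * eta_hat**2 + beta_hat) ** (kappa / 2.0)) - 1.0

# The sample moment dips below one for small kappa and grows without bound for
# large kappa, so the positive root can be bracketed and found numerically.
kappa_hat = brentq(moment_equation, 0.5, 30.0)
print(f"estimated tail index kappa = {kappa_hat:.2f}")
```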
|
253 |
Data cleaning techniques for software engineering data sets
Liebchen, Gernot Armin, January 2010
Data quality is an important issue which has been addressed and recognised in research communities such as data warehousing, data mining and information systems. It has been agreed that poor data quality will impact the quality of results of analyses and that it will therefore impact on decisions made on the basis of these results. Empirical software engineering has neglected the issue of data quality to some extent. This fact poses the question of how researchers in empirical software engineering can trust their results without addressing the quality of the analysed data. One widely accepted definition for data quality describes it as 'fitness for purpose', and the issue of poor data quality can be addressed by either introducing preventative measures or by applying means to cope with data quality issues. The research presented in this thesis addresses the latter with a special focus on noise handling. Three noise handling techniques, which utilise decision trees, are proposed for application to software engineering data sets. Each technique represents a noise handling approach: robust filtering, where training and test sets are the same; predictive filtering, where training and test sets are different; and filtering and polish, where noisy instances are corrected.
The techniques were first evaluated in two different investigations by applying them to a large real world software engineering data set. In the first investigation the techniques' ability to improve predictive accuracy at differing noise levels was tested. All three techniques improved predictive accuracy in comparison to the do-nothing approach. Filtering and polish was the most successful technique in improving predictive accuracy. The second investigation utilising the large real world software engineering data set tested the techniques' ability to identify instances with implausible values. These instances were flagged for the purpose of evaluation before applying the three techniques. Robust filtering and predictive filtering decreased the number of instances with implausible values, but substantially decreased the size of the data set too. The filtering and polish technique actually increased the number of implausible values, but it did not reduce the size of the data set. Since the data set contained historical software project data, it was not possible to know the real extent of noise detected.
This led to the production of simulated software engineering data sets, which were modelled on the real data set used in the previous evaluations to ensure domain specific characteristics. These simulated versions of the data set were then injected with noise, such that the real extent of the noise was known. After the noise injection the three noise handling techniques were applied to allow evaluation. This procedure of simulating software engineering data sets combined the incorporation of domain specific characteristics of the real world with control over the simulated data. This is seen as a special strength of this evaluation approach. The results of the evaluation of the simulation showed that none of the techniques performed well. Robust filtering and filtering and polish performed very poorly, and based on the results of this evaluation they would not be recommended for the task of noise reduction. The predictive filtering technique was the best performing technique in this evaluation, but it did not perform significantly well either.
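A minimal sketch of one of the three approaches, predictive filtering, is given below: a decision tree trained on other instances (cross-validation keeps training and test sets different) predicts each instance, and instances with large prediction errors are flagged as potentially noisy. The data set, tree depth and threshold are synthetic illustrations, not the thesis's exact procedure.

```python
# Sketch of predictive filtering: a decision tree learned on other instances
# ("training and test sets are different") predicts each instance's target,
# and instances the tree cannot reproduce within a tolerance are flagged as
# potentially noisy. The data set and threshold below are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
n = 500
X = rng.uniform(0, 1, size=(n, 4))              # e.g. project size, team size, ...
effort = 100 * X[:, 0] + 30 * X[:, 1] + rng.normal(0, 5, n)

# Inject implausible values into 5% of the instances to imitate noise.
noisy_idx = rng.choice(n, size=25, replace=False)
effort[noisy_idx] *= 10

tree = DecisionTreeRegressor(max_depth=5, random_state=0)
predicted = cross_val_predict(tree, X, effort, cv=10)

residuals = np.abs(effort - predicted)
threshold = np.percentile(residuals, 95)        # the tolerance is a design choice
flagged = np.where(residuals > threshold)[0]

print(f"flagged {len(flagged)} instances; "
      f"{len(set(flagged) & set(noisy_idx))} of them were truly noisy")
```

The filtering and polish variant would instead replace the flagged values with the tree's predictions, and robust filtering would train and flag on the same set.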
An exhaustive systematic literature review has been carried out investigating to what extent the empirical software engineering community has considered data quality. The findings showed that the issue of data quality has been largely neglected by the empirical software engineering community. The work in this thesis highlights an important gap in empirical software engineering. It provided clarification and distinctions of the terms noise and outliers. Noise and outliers are overlapping, but they are fundamentally different. Since noise and outliers are often treated the same in noise handling techniques, a clarification of the two terms was necessary. To investigate the capabilities of noise handling techniques a single investigation was deemed as insufficient. The reasons for this are that the distinction between noise and outliers is not trivial, and that the investigated noise cleaning techniques are derived from traditional noise handling techniques where noise and outliers are combined. Therefore three investigations were undertaken to assess the effectiveness of the three presented noise handling techniques. Each investigation should be seen as a part of a multi-pronged approach. This thesis also highlights possible shortcomings of current automated noise handling techniques. The poor performance of the three techniques led to the conclusion that noise handling should be integrated into a data cleaning process where the input of domain knowledge and the replicability of the data cleaning process are ensured.
|
254 |
EMPIRICAL LIKELIHOOD AND DIFFERENTIABLE FUNCTIONALS
Shen, Zhiyuan, 01 January 2016
Empirical likelihood (EL) is a recently developed nonparametric method of statistical inference. It has been shown by Owen (1988, 1990) and many others that the empirical likelihood ratio (ELR) method can be used to produce nice confidence intervals or regions. Owen (1988) shows that -2 log ELR converges to a chi-square distribution with one degree of freedom subject to a linear statistical functional in terms of distribution functions. However, a generalization of Owen's result to the right censored data setting is difficult since no explicit maximization can be obtained under constraint in terms of distribution functions. Pan and Zhou (2002), instead, study the EL with right censored data using a linear statistical functional constraint in terms of cumulative hazard functions. In this dissertation, we extend Owen's (1988) and Pan and Zhou's (2002) results subject to non-linear but Hadamard differentiable statistical functional constraints. For this purpose, a study of differentiable functionals with respect to hazard functions is carried out. We also generalize our results to two sample problems. Stochastic process and martingale theories will be applied to prove the theorems. The confidence intervals based on the EL method are compared with other available methods. Real data analysis and simulations are used to illustrate our proposed theorem with an application to Gini's absolute mean difference.
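To make the basic construction concrete, the sketch below computes Owen's (1988) -2 log ELR for a mean by solving the Lagrange multiplier equation and compares it with a chi-square(1) quantile. It covers only the uncensored, linear-functional case, not the censored-data or Hadamard-differentiable extensions developed here.

```python
# Sketch of the basic Owen (1988) empirical likelihood ratio for a mean:
# -2 log ELR(mu) is compared with a chi-square(1) quantile. This is the
# uncensored, linear-functional case referenced above, not the censored-data
# or Hadamard-differentiable extension developed in the dissertation.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_elr(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                     # mu outside the convex hull of the data
    # Solve sum z_i / (1 + lam * z_i) = 0 for the Lagrange multiplier lam,
    # which must keep every weight 1 + lam * z_i strictly positive.
    lo = (-1.0 + 1e-10) / z.max()
    hi = (-1.0 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(4)
x = rng.exponential(scale=2.0, size=80)   # skewed sample, true mean = 2

stat = neg2_log_elr(x, mu=2.0)
print(f"-2 log ELR = {stat:.3f}, chi2(1) 95% cutoff = {chi2.ppf(0.95, 1):.3f}")
# A 95% EL confidence interval is { mu : -2 log ELR(mu) <= chi2.ppf(0.95, 1) }.
```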
|
255 |
Coordinating requirements engineering and software testing
Unterkalmsteiner, Michael, January 2015
The development of large, software-intensive systems is a complex undertaking that is generally tackled by a divide and conquer strategy. Organizations thereby face the challenge of coordinating the resources which enable the individual aspects of software development, commonly solved by adopting a particular process model. The alignment between requirements engineering (RE) and software testing (ST) activities is of particular interest as those two aspects are intrinsically connected: requirements are an expression of user/customer needs, while testing increases the likelihood that those needs are actually satisfied. The work in this thesis is driven by empirical problem identification, analysis and solution development towards two main objectives. The first is to develop an understanding of RE and ST alignment challenges and characteristics. Building this foundation is a necessary step that facilitates the second objective, the development of solutions relevant and scalable to industry practice that improve REST alignment. The research methods employed to work towards these objectives are primarily empirical. Case study research is used to elicit data from practitioners, while technical action research and field experiments are conducted to validate the developed solutions in practice.
This thesis contains four main contributions: (1) an in-depth study on REST alignment challenges and practices encountered in industry; (2) a conceptual framework in the form of a taxonomy providing constructs that further our understanding of REST alignment; (3) an assessment framework, REST-bench, which operationalizes the taxonomy, was designed to be lightweight and can be applied as a postmortem when closing development projects; and (4) an extensive investigation into the potential of information retrieval techniques to improve test coverage, a common REST alignment challenge, resulting in a solution prototype, risk-based testing supported by topic models (RiTTM). REST-bench has been validated in five cases and has been shown to be efficient and effective in identifying improvement opportunities in the coordination of RE and ST. Most of the concepts operationalized from the REST taxonomy were found to be useful, validating the conceptual framework. RiTTM, on the other hand, was validated in a single case experiment where it showed great potential, in particular by identifying test cases that were originally overlooked by expert test engineers, effectively improving test coverage.
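RiTTM itself is not reproduced here, but the sketch below illustrates the underlying information retrieval idea: fit a topic model over requirement and test case descriptions and flag requirements whose topic profile matches no existing test case as possible coverage gaps. The documents, topic count and similarity threshold are made-up illustrations.

```python
# Sketch of topic-model-supported coverage analysis: fit an LDA topic model
# over requirements and test descriptions, then flag requirements whose topic
# profile is far from every existing test case, i.e. candidate coverage gaps.
# The documents are made up for illustration; this is not the RiTTM tool.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = [
    "user shall log in with password and two factor authentication",
    "system shall export monthly billing report as pdf",
    "search results shall be returned within two seconds",
]
test_cases = [
    "verify login with valid password and one time code",
    "verify pdf export of the billing report",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(requirements + test_cases)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(counts)

req_topics = topics[: len(requirements)]
test_topics = topics[len(requirements):]
similarity = cosine_similarity(req_topics, test_topics)

for req, sims in zip(requirements, similarity):
    best = sims.max()
    status = "covered" if best > 0.8 else "possible coverage gap"
    print(f"{status:>21}: {req!r} (best match {best:.2f})")
```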
|
256 |
EMPIRICAL PROCESSES FOR ESTIMATED PROJECTIONS OF MULTIVARIATE NORMAL VECTORS WITH APPLICATIONS TO E.D.F. AND CORRELATION TYPE GOODNESS OF FIT TESTS
Saunders, Christopher Paul, 01 January 2006
Goodness-of-fit and correlation tests are considered for dependent univariate data that arises when multivariate data is projected to the real line with a data-suggested linear transformation. Specifically, tests for multivariate normality are investigated. Let {Y_i} be a sequence of independent k-variate normal random vectors, and let d_0 be a fixed linear transform from R^k to R. For a sequence of linear transforms {d(Y_1, ..., Y_n)} converging almost surely to d_0, the weak convergence of the empirical process of the standardized projections from d(Y_1, ..., Y_n) to a tight Gaussian process is established. This tight Gaussian process is identical to that which arises in the univariate case where the mean and standard deviation are estimated by the sample mean and sample standard deviation (Wood, 1975). The tight Gaussian process determines the limiting null distribution of E.D.F. goodness-of-fit statistics applied to the process of the projections. A class of tests for multivariate normality, which are based on the Shapiro-Wilk statistic and the related correlation statistics applied to the dependent univariate data that arises with a data-suggested linear transformation, is also considered. The asymptotic properties for these statistics are established. In both cases, the statistics based on random linear transformations are shown to be asymptotically equivalent to the statistics using the fixed linear transformation. The statistics based on the fixed linear transformation have the same critical points as the corresponding tests of univariate normality; this allows an easy implementation of these tests for multivariate normality. Of particular interest are two classes of transforms that have been previously considered for testing multivariate normality and are special cases of the projections considered here. The first transformation, originally considered by Wood (1981), is based on a symmetric decomposition of the inverse sample covariance matrix. The asymptotic properties of these transformed empirical processes were fully developed using classical results. The second class of transforms is the principal components that arise in principal component analysis. Peterson and Stromberg (1998) suggested using these transforms with the univariate Shapiro-Wilk statistic. Using these suggested projections, the limiting distributions of the E.D.F. goodness-of-fit and correlation statistics are developed.
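The sketch below gives a small numerical illustration of the two data-suggested projections mentioned above, the symmetric decomposition of the inverse sample covariance matrix and the principal components, each followed by a univariate Shapiro-Wilk test on the projected data. It illustrates the procedure only, not the limiting theory established in the dissertation.

```python
# Sketch: project k-variate data to the real line with a data-suggested linear
# transform and apply a univariate test to the projections. Two transforms
# mentioned above are illustrated: the symmetric square root of the inverse
# sample covariance (Wood, 1981) and the first principal component. This is a
# numerical illustration only; the asymptotic theory is not reproduced here.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(5)
k, n = 3, 200
cov = np.array([[1.0, 0.5, 0.2],
                [0.5, 1.0, 0.3],
                [0.2, 0.3, 1.0]])
Y = rng.multivariate_normal(mean=np.zeros(k), cov=cov, size=n)

Yc = Y - Y.mean(axis=0)
S = np.cov(Yc, rowvar=False)

# Transform 1: symmetric decomposition of the inverse sample covariance.
vals, vecs = np.linalg.eigh(S)
S_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
proj_wood = (Yc @ S_inv_sqrt)[:, 0]          # one standardized coordinate

# Transform 2: first principal component (eigenvector of the largest eigenvalue).
pc1 = vecs[:, np.argmax(vals)]
proj_pca = Yc @ pc1

for name, proj in [("inverse-root projection", proj_wood),
                   ("first principal component", proj_pca)]:
    stat, p = shapiro(proj)
    print(f"{name}: Shapiro-Wilk W = {stat:.3f}, p = {p:.3f}")
```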
|
257 |
EMPIRICAL PROCESSES AND ROC CURVES WITH AN APPLICATION TO LINEAR COMBINATIONS OF DIAGNOSTIC TESTS
Chirila, Costel, 01 January 2008
The Receiver Operating Characteristic (ROC) curve is the plot of Sensitivity vs. 1- Specificity of a quantitative diagnostic test, for a wide range of cut-off points c. The empirical ROC curve is probably the most used nonparametric estimator of the ROC curve. The asymptotic properties of this estimator were first developed by Hsieh and Turnbull (1996) based on strong approximations for quantile processes. Jensen et al. (2000) provided a general method to obtain regional confidence bands for the empirical ROC curve, based on its asymptotic distribution.
Since most biomarkers do not have high enough sensitivity and specificity to qualify for good diagnostic test, a combination of biomarkers may result in a better diagnostic test than each one taken alone. Su and Liu (1993) proved that, if the panel of biomarkers is multivariate normally distributed for both diseased and non-diseased populations, then the linear combination, using Fisher's linear discriminant coefficients, maximizes the area under the ROC curve of the newly formed diagnostic test, called the generalized ROC curve. In this dissertation, we will derive the asymptotic properties of the generalized empirical ROC curve, the nonparametric estimator of the generalized ROC curve, by using the empirical processes theory as in van der Vaart (1998). The pivotal result used in finding the asymptotic behavior of the proposed nonparametric is the result on random functions which incorporate estimators as developed by van der Vaart (1998). By using this powerful lemma we will be able to decompose an equivalent process into a sum of two other processes, usually called the brownian bridge and the drift term, via Donsker classes of functions. Using a uniform convergence rate result given by Pollard (1984), we derive the limiting process of the drift term. Due to the independence of the random samples, the asymptotic distribution of the generalized empirical ROC process will be the sum of the asymptotic distributions of the decomposed processes. For completeness, we will first re-derive the asymptotic properties of the empirical ROC curve in the univariate case, using the same technique described before. The methodology is used to combine biomarkers in order to discriminate lung cancer patients from normals.
|
258 |
AN INNOVATIVE APPROACH TO MECHANISTIC EMPIRICAL PAVEMENT DESIGN
Graves, Ronnie Clark, II, 01 January 2012
The Mechanistic Empirical Pavement Design Guide (MEPDG), developed by National Cooperative Highway Research Program (NCHRP) project 1-37A, is a very powerful tool for the design and analysis of pavements. The designer uses an iterative process to select design parameters and predict performance; if the performance is not acceptable, the design parameters must be changed until an acceptable design is achieved.
The design process has more than 100 input parameters across many areas, including climatic conditions, material properties for each layer of the pavement, and information about the anticipated truck traffic. Many of these parameters are known to have an insignificant influence on the predicted performance.
During the development of this procedure, input parameter sensitivity analysis varied a single input parameter while holding other parameters constant, which does not allow for the interaction between specific variables across the entire parameter space. A portion of this research identified a methodology of global sensitivity analysis of the procedure using random sampling techniques across the entire input parameter space. This analysis was used to select the most influential input parameters which could be used in a streamlined design process.
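The sketch below illustrates the idea of global sensitivity analysis by random sampling across the whole input space; the response function is a stand-in for the MEPDG performance prediction, and the input names, ranges and sensitivity measure (rank correlation) are illustrative assumptions.

```python
# Sketch of global sensitivity analysis by random sampling of the whole input
# space, in contrast to varying one input at a time. The response function
# below is a placeholder for the MEPDG performance prediction, and the input
# names and ranges are hypothetical.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n_runs = 2000

inputs = {
    "asphalt_thickness_mm": rng.uniform(75, 300, n_runs),
    "base_modulus_mpa": rng.uniform(100, 600, n_runs),
    "annual_trucks_millions": rng.uniform(0.5, 30, n_runs),
    "mean_annual_temp_c": rng.uniform(5, 30, n_runs),
}

# Placeholder response: predicted rutting, with interactions between inputs.
rutting = (
    20.0 / np.sqrt(inputs["asphalt_thickness_mm"])
    * inputs["annual_trucks_millions"] ** 0.4
    * (1 + 0.03 * inputs["mean_annual_temp_c"])
    + 50.0 / inputs["base_modulus_mpa"]
    + rng.normal(0, 0.1, n_runs)
)

# Rank inputs by the magnitude of their Spearman correlation with the output,
# a simple global sensitivity measure over the sampled space.
sensitivity = {name: abs(spearmanr(values, rutting)[0])
               for name, values in inputs.items()}
for name, s in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
    print(f"{name:>24}: {s:.2f}")
```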
This streamlined method has been developed using Multivariate Adaptive Regression Splines (MARS) to build predictive models derived from a series of actual pavement design solutions from the design software provided by NCHRP. Two different model structures have been developed: one is a series of models which predict pavement distress (rutting, fatigue cracking, faulting and IRI); the second is a forward solution to predict a pavement thickness given a desired level of distress. These thickness prediction models could be developed for any subset of MEPDG solutions desired, such as typical designs within a given state or climatic zone. These solutions could then be modeled with the MARS process to produce an "Efficient Design Solution" of pavement thickness and performance predictions. The procedure developed has the potential to significantly improve the efficiency of pavement designers by allowing them to look at many different design scenarios prior to selecting a design for final analysis.
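The sketch below illustrates the forward-solution idea on a toy scale: sample design solutions, then fit a surrogate that returns the thickness required for a target distress level. Since MARS is not available in scikit-learn, a gradient boosting regressor stands in for it here, and the design function is a placeholder rather than the MEPDG software.

```python
# Sketch of the "forward solution" idea: generate design solutions over a
# subset of inputs, then fit a surrogate that returns the thickness needed to
# hit a target distress level. MARS is used in the dissertation; a
# gradient-boosting regressor stands in for it here, and the design function
# below is a placeholder, not the MEPDG software.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(8)
n = 3000
thickness_mm = rng.uniform(75, 300, n)
trucks_millions = rng.uniform(0.5, 30, n)

# Placeholder: distress (e.g. rutting) falls with thickness, rises with traffic.
rutting_mm = 400.0 * trucks_millions ** 0.35 / thickness_mm + rng.normal(0, 0.2, n)

# Forward model: thickness as a function of (target distress, traffic).
X = np.column_stack([rutting_mm, trucks_millions])
forward = GradientBoostingRegressor(random_state=0).fit(X, thickness_mm)

# Designer's query: thickness needed for at most 6 mm rutting at 12M trucks/yr.
query = np.array([[6.0, 12.0]])
print(f"suggested thickness for the query: {forward.predict(query)[0]:.0f} mm")
```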
|
259 |
MODELING OF AN AIR-BASED DENSITY SEPARATOR
Ghosh, Tathagata, 01 January 2013
Fundamental studies that apply state-of-the-art numerical and scale modeling techniques to the theoretical and technical aspects of air table separators, and that seek to understand and improve the efficiency of the process, are lacking. This dissertation details the development of a workable empirical model, a numerical model and a scale model to demonstrate the use of a laboratory air table unit.
The modern air-based density separator achieves effective density-based separation for particle sizes greater than 6 mm. Parametric studies with the laboratory-scale unit using low-rank coal have demonstrated its applicability to finer size fractions in the range of 6 mm to 1 mm. The statistically significant empirical models showed that all four parameters, i.e., blower and table frequency and longitudinal and transverse angle, were significant in determining separation performance. Furthermore, the tests show that an increase in the transverse angle increased the flow rate of solids to the product end, and that the introduction of feed dampens the airflow at the feed end. Higher table frequency and feed rate had a detrimental effect on product yield due to the low residence time of particle settlement.
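A sketch of the kind of empirical response model described here is shown below: product yield regressed on the four operating parameters with an interaction term, using ordinary least squares. The observations, ranges and coefficients are simulated stand-ins for the laboratory data.

```python
# Sketch: fitting an empirical response model of separation performance to the
# four operating parameters (blower frequency, table frequency, longitudinal
# and transverse angle) and inspecting which terms are statistically
# significant. The observations are simulated stand-ins, not the laboratory
# measurements.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 80
data = pd.DataFrame({
    "blower_hz": rng.uniform(30, 50, n),
    "table_hz": rng.uniform(10, 20, n),
    "long_angle": rng.uniform(0, 8, n),
    "trans_angle": rng.uniform(0, 6, n),
})
# Placeholder response: yield improves with transverse angle, worsens with
# high table frequency, with some interaction and noise.
data["product_yield"] = (
    60 + 2.5 * data["trans_angle"] - 1.2 * data["table_hz"]
    + 0.4 * data["blower_hz"] - 0.05 * data["blower_hz"] * data["table_hz"]
    + rng.normal(0, 2, n)
)

model = smf.ols(
    "product_yield ~ blower_hz * table_hz + long_angle + trans_angle", data=data
).fit()
print(model.summary().tables[1])   # coefficients, t-statistics and p-values
```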
The research further evaluated fine particle upgrading using various modeling techniques. The numerical model was evaluated using k-epsilon and RSM turbulence formulations and validated against the experimental dataset. The results show that fine coal vortices forming around the riffles act as a transport mechanism for higher-density particle movement across the table deck, resulting in 43% displacement of the middlings and 29% displacement of the heavies to the product side. The velocity and vector plots show high local variance of air speeds and pressure near the feed end, and an increase in feed rate results in a drop in the deshaling capability of the table.
The table was further evaluated using modern scale-modeling concepts, and the scaling laws indicated that the vibration velocity has an integral effect on separation performance. The difference between the full-scale model and the scaled prototype was 3.83%, thus validating the scaling laws.
|
260 |
An empirical analysis of the income tax incentives for individual charitable donations in Taiwan (台灣地區個人捐贈的所得稅誘因之實證分析)
Chu Chi Yen (朱紀燕)
Over the past several decades, while the concept of social welfare flourished in Europe and the United States, Taiwan was still in a stage of economic take-off and, in pursuing economic prosperity, neglected the well-being of its people. In recent years, Taiwan's economic development has reached a stable stage; having met their basic needs, people have begun to pay attention to their own welfare, prompting the government to shift its policy focus to the implementation of a social welfare system. Decades of social welfare practice abroad have accumulated valuable experience that can serve as a reference for Taiwan's policy making. Because the benefits of a social welfare system extend to the entire population and its effects are both deep and far-reaching, the government must evaluate the promotion of social welfare carefully.
To encourage charitable giving, the government uses income tax deductions to lower the price of giving; this lower donation price raises the incentive to donate. Abroad, many scholars have used income tax return databases and household income and expenditure surveys to estimate the price elasticity and substitution elasticity of charitable giving, in order to test the effectiveness of income tax deductions in raising donation incentives; most empirical results support the effectiveness of such deductions. In Taiwan, the income tax deduction applies not only to charitable donations but also to non-charitable donations such as contributions to political parties and private schools, each with its own policy purpose. Owing to the nature of the data, this paper uses only individual income tax data for Taiwan to estimate the price and income elasticities of charitable donations, and at the same time conducts an empirical analysis of the various incentives for giving.
The results show that the price elasticity of charitable donations is -4.0768; compared with foreign findings, changes in the donation price appear to have a greater influence on individual giving. A lower donation price does indeed raise the incentive for charitable giving. Therefore, if the government aims to expand social welfare provision and wishes to prevent charities from having to downsize or close for lack of funds, tax relief can be used as an instrument to encourage donations without increasing the government's burden.
With respect to the estimates by income bracket, the income elasticity estimate for the 175,000 to 400,000 bracket is insignificant, while the income elasticity estimates for the remaining brackets are significant and positive, with income elasticity rising as disposable income increases. For the price elasticity, only the sample with disposable income between 900,000 and 1,800,000 yields a significant estimate; the estimates for the other brackets are insignificant. As the income bracket rises, income elasticity increases while price elasticity declines, a pattern consistent with the findings in the foreign literature.
When other itemized deductions are added to the regression to examine their effect on charitable giving, medical and maternity expenses have no significant effect on the amount donated, whereas personal insurance premiums have a significant negative effect, consistent with the theoretical model in this paper.
Finally, in the binary choice model of whether a taxpayer makes charitable donations, the spouse's salary income is statistically significant, but its marginal effect on the probability of donating is small; age and the number of dependants are also significant and, relative to the marginal effect of the combined salary income of the taxpayer and spouse, have a larger effect on the probability of donating. Marital status is the focus of this regression model: being married has a larger and significant effect on the probability that a taxpayer donates, with married taxpayers more likely than unmarried taxpayers to engage in charitable giving. An intuitive explanation is that married individuals lead more stable lives with more stable income sources, and are therefore more willing, both psychologically and financially, to donate. The donation price has a very large marginal effect on whether a taxpayer donates; that is, a reduction in the donation price greatly raises the probability of giving, which once again confirms the effectiveness of the government's income tax deduction policy in encouraging donations.
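A sketch of the binary choice model described here follows: a logit model of donation participation with average marginal effects for the donation price, income, marital status, age and number of dependants. The records, tax rates and coefficients are simulated illustrations, not the individual income tax data used in the study.

```python
# Sketch of the binary choice model: probability of making a charitable
# donation as a function of the donation price, income, marital status, age
# and number of dependants, with average marginal effects. The records below
# are simulated, not the tax-return data analysed in the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 5000
df = pd.DataFrame({
    "log_price": np.log(1 - rng.choice([0.06, 0.13, 0.21, 0.30], n)),  # price = 1 - marginal tax rate
    "log_income": rng.normal(13.0, 0.8, n),
    "married": rng.integers(0, 2, n),
    "age": rng.integers(20, 70, n),
    "dependants": rng.poisson(1.2, n),
})
# Placeholder data-generating process: cheaper giving and marriage raise participation.
latent = (-3.0 - 2.0 * df["log_price"] + 0.15 * df["log_income"]
          + 0.4 * df["married"] + 0.01 * df["age"] + 0.05 * df["dependants"])
df["donates"] = (latent + rng.logistic(0, 1, n) > 0).astype(int)

X = sm.add_constant(df[["log_price", "log_income", "married", "age", "dependants"]])
logit = sm.Logit(df["donates"], X).fit(disp=0)
print(logit.get_margeff().summary())   # average marginal effects on Pr(donate)
```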
|