421 |
Risk properties and parameter estimation on mean reversion and GARCH models / Sypkens, Roelf, 09 1900
Most of the notations and terminological conventions used in this thesis are statistical.
The aim in risk management is to describe the risk factors present in time series. In order
to group these risk factors, one needs to distinguish between different stochastic
processes and put them into different classes. The risk factors discussed in this thesis are
fat tails and mean reversion. The presence of these risk factors first needs to be established in the historical dataset, which I will refer to as the original dataset. The Ljung-Box-Pierce test will be used in this thesis to determine whether the original dataset exhibits mean reversion or not. / Mathematical Sciences / M.Sc. (Applied Mathematics)
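As a minimal sketch of the kind of autocorrelation check described here (the series, lag choice and AR(1) coefficient below are illustrative, not taken from the thesis), the Ljung-Box Q statistic tests whether the autocorrelations that underlie mean reversion are jointly zero:
```python
# Ljung-Box Q test computed directly from its formula:
# Q = n(n+2) * sum_{k=1}^{h} rho_k^2 / (n - k), compared with a chi-squared(h) distribution.
import numpy as np
from scipy.stats import chi2

def ljung_box(x, h=10):
    """Return the Ljung-Box statistic and p-value for lags 1..h."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    # sample autocorrelations rho_1 .. rho_h
    rho = np.array([np.sum(xc[k:] * xc[:-k]) / denom for k in range(1, h + 1)])
    q = n * (n + 2) * np.sum(rho ** 2 / (n - np.arange(1, h + 1)))
    return q, chi2.sf(q, df=h)

# Example on a simulated AR(1) (mean-reverting) series; a small p-value indicates
# significant autocorrelation, i.e. evidence against a pure random walk.
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.6 * x[t - 1] + rng.normal()
print(ljung_box(x, h=10))
```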
|
422 |
Probabilistic analysis of monthly peak factors in a regional water distribution system / Kriegler, Benjamin Jacobus, 12 1900
Thesis (MScEng)--Stellenbosch University, 2013. / ENGLISH ABSTRACT: The design of a water supply system relies on the knowledge of the water demands of its specific end-users.
It is also important to understand the end-users’ temporal variation in water demand. Failure of the system to
provide the required volume of water at the required flow-rate is deemed a system failure. The system
therefore needs to be designed with sufficient capacity to ensure that it is able to supply the required volume
of water during the highest demand periods. In practice, bulk water supply systems do not have to cater for
the high frequency, short duration high peak demand scenarios of the end-user, such as the peak hour or peak
day events, as the impact of such events is reduced by the provision of water storage capacity at the off-take from
the bulk supply system. However, for peak demand scenarios with durations longer than an hour or a day,
depending on the situation, the provision of sufficient storage capacity to reduce the impact on the bulk water
system becomes impractical and could lead to water quality issues during low demand periods. It
is, therefore, a requirement that bulk water systems be designed to be able to meet the peak weekly or peak
month end-user demands. These peak demand scenarios usually occur only during a certain portion of the
year, generally concentrated in a two to three month period during the drier months. Existing design
guidelines usually follow a deterministic design approach, whereby a suitable design peak factor (DPF) is applied to the
average annual daily system demand in order to determine the expected peak demand on the system. This DPF does
not account for the potential variability in end-user demand profiles, or for the impact that end-user storage has on
the required DPF of the bulk system.
This study investigated the temporal variations of end-user demand on two bulk water supply systems. These
systems are located in the winter rainfall region of the Western Cape province of South Africa. The data
analysed comprised the monthly measured consumption figures of different end-users supplied from the two
systems. The data sets extended over 14 years. Actual monthly peak factors were extracted from this
data and used in deterministic and probabilistic methods to determine the expected monthly peak factor for
both the end-user and the system design. The probabilistic method made use of a Monte Carlo analysis,
whereby the actual recorded monthly peak factor for each end-user per bulk system was used as an input into
discrete probability functions. The Monte Carlo analysis executed 1 500 000 iterations in order to produce
probability distributions of the monthly peak factors for each system. The deterministic and probabilistic
results were compared to the actual monthly peak factors as calculated from the existing water use data, as
well as against current DPFs as published in guidelines used in the industry. The study demonstrated that the
deterministic method would overstate the expected peak system demand and result in an oversized system.
The probabilistic method yielded good results and compared well with the actual monthly peak factors. It is
thus deemed an appropriate tool for determining the required DPF of a bulk water system for a chosen
reliability of supply. The study also indicated that the DPFs proposed by current guidelines are too low. The
study identified a potential relationship between the average demand of an end-user and the expected
maximum monthly peak factor, whereas in current guidelines peak factors are not indicated as being
influenced by the end-user average demand. / AFRIKAANSE OPSOMMING: Die ontwerp van ‘n watervoorsiening stelsel berus op die kennis van die water aanvraag van sy spesifieke
eindverbruikers. Dit is ook belangrik om ‘n begrip te hê van die tydelike variasie van die eindverbruiker se
water-aanvraag. Indien die voorsieningstelsel nie in staat is om die benodigde volume water teen die
verlangde vloeitempo te kan lewer nie, word dit beskou as ‘n faling. Die stelsel word dus ontwerp met
voldoende kapasiteit wat dit sal in staat stel om die benodigde volume gedurende die hoogste aanvraag
periodes te kan voorsien. In die praktyk hoef grootmaat water-voorsiening stelsels nie te voldoen aan spits
watergebeurtenisse met hoë frekwensie en kort duurtes, soos piek-dag of piek-uur aanvraag nie, aangesien
hierdie gebeurtenisse se impak op die grootmaat stelsel verminder word deur die voorsiening van wateropgaring
fasiliteite by die aftap-punte vanaf die grootmaatstelsels. Nieteenstaande, vir piek-aanvraag
gebeurtenisse met langer duurtes as ‘n uur of dag, raak die voorsiening van voldoende wateropgaring
kapasiteit by die aftap-punt onprakties en kan dit selfs lei tot waterkwaliteits probleme. Dit is dus ‘n vereiste
dat grootmaat watervoorsienings stelsels ontwerp moet word om die piek-week of piek-maand eindverbruiker
aanvrae te kan voorsien. Hierdie piek-aanvraag gebeurtenisse vind algemeen in gekonsentreerde
twee- of drie maand periodes tydens die droeër maande plaas. Bestaande ontwerpsriglyne volg gewoonlik ‘n
deterministiese ontwerp benadering, deurdat ‘n voldoende ontwerp spits faktor toegepas word op die
gemiddelde jaarlikse daaglikse stelsel aanvraag om sodoende te bepaal wat die verwagte spits aanvraag van
die stelsel sal wees. Hierdie ontwerp spits faktor maak nie voorsiening vir die potensiële variasie in die
eindverbruiker se aanvraag karakter of die impak van die beskikbare water-opgaring fasiliteit op die
benodigde ontwerp spits faktor van die grootmaat-stelsel nie.
Hierdie studie ondersoek die tydelike variasie van die eindverbruiker se aanvraag op twee grootmaat watervoorsiening
stelsels. Die twee stelsels is geleë in die winter reënval streek van die Wes-Kaap provinsie van
Suid-Afrika. Die data wat geanaliseer is was die maandelikse gemeterde verbruiksyfers van verskillende
eindverbruikers voorsien deur die twee stelsels. Die datastelle het oor 14 jaar gestrek. Die ware maand piekfaktore
is bereken vanaf die data en is in deterministiese en probabilistiese metodes gebruik om die verwagte
eindverbruiker en stelsel ontwerp se maand spits-faktore te bereken. Die probabilistiese metode het gebruik
gemaak van ‘n Monte Carlo analise metode, waardeur die ware gemeette maand spits-faktor vir elke
eindverbruiker vir elke grootmaatstelsel gebruik is as invoer tot diskrete waarskynlikheids funksies. Die
Monte Carlo analise het 1 500 000 iterasies voltooi om waarskynlikheids-verdelings van elke maand spitsfaktor
vir elke stelsel te bereken. Die deterministiese en probabilistiese resultate is vergelyk met die ware
maand spits faktore soos bereken vanuit die bestaande waterverbruik data, asook teen huidige gepubliseerde
ontwerp spits-faktore, wat in die bedryf gebruik word.
Die studie het aangetoon dat die deterministiese metode te konserwatief is en dat dit die verwagte piekaanvraag
van die stelsel sal oorskat en dus sal lei tot ‘n oorgrootte stelsel. Die probabilistiese metode het
goeie resultate opgelewer wat goed vergelyk met die ware maand piek-faktore. Dit word gereken as ‘n
toepaslike metode om die benodigde ontwerp spits-faktor van ‘n grootmaat-watervoorsiening stelsel te bepaal vir ‘n gekose voorsieningsbetroubaarheid. Die studie het ook aangedui dat die ontwerps piek-faktore
voorgestel deur die huidige riglyne te laag is en dat dit tot die falings van ‘n stelsel sal lei. Die studie het ‘n
moontlike verwantskap tussen die gemiddelde daaglikse wateraanvraag van die eindverbruiker en die
verwagte maksimum maand spits faktor geïdentifiseer, nademaal die piek-faktore soos voorgestel deur die
huidige riglyne nie beïnvloed word deur die eindverbruiker se gemiddelde verbruik nie.
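A minimal sketch of the Monte Carlo step described in this abstract, with made-up values standing in for the recorded monthly peak factors and demands of each end-user (the names, factor values and demand weights are illustrative only): each iteration draws a monthly peak factor for every end-user from its empirical discrete distribution, combines the draws into a demand-weighted system factor, and the DPF is then read off at a chosen reliability level.
```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical recorded monthly peak factors per end-user (empirical discrete distributions)
recorded_factors = {
    "town_A":    [1.10, 1.25, 1.40, 1.55, 1.30],
    "town_B":    [1.05, 1.15, 1.35, 1.20, 1.45],
    "irrigator": [1.60, 1.90, 2.20, 1.75, 2.05],
}
# Hypothetical average demands (ML/day) used to weight end-users into a system factor
avg_demand = {"town_A": 5.0, "town_B": 3.0, "irrigator": 1.5}

n_iter = 1_500_000  # matches the number of iterations reported in the study
total = sum(avg_demand.values())

draws = np.column_stack([
    rng.choice(recorded_factors[u], size=n_iter) * avg_demand[u] for u in recorded_factors
])
system_factor = draws.sum(axis=1) / total  # demand-weighted monthly peak factor per iteration

# Design peak factor for, e.g., a 95% reliability of supply
print("95th percentile system DPF:", np.quantile(system_factor, 0.95))
```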
|
423 |
Development of an integrated decision analysis framework for selecting ICT-based logistics systems in the construction industry / Fadiya, Olusanjo Olaniran, January 2012
The current application of logistics in the construction industry is relatively inefficient when compared with other industries such as retail and manufacturing. The factors attributed to this inefficiency include the fragmented and short-term nature of the construction process and inadequate tracking facilities on site. The inefficiency of construction logistics creates, inter alia, loss of materials and equipment, waste, construction delays, excessive costs and collision accidents on site. Meanwhile, several information and communication technologies (ICT) have been proposed and developed by researchers to improve logistics functions such as tracking and monitoring of resources through the supply chain to the construction site. Such technologies include global positioning systems (GPS), radio frequency identification devices (RFID), wireless sensor networks (WSN) and geographical information systems (GIS). While considerable research has been undertaken to develop the aforementioned systems, limited work has so far been done on investment justification prior to implementation. In this research, a framework has been developed to assess the extent of construction logistics problems, measure the significance of the problems, match the problems with existing ICT-based solutions and develop a robust ready-to-use multi-criteria analysis tool that can quantify the costs and benefits of implementing several ICT-based construction logistics systems. The tool is an integrated platform of related evaluation techniques such as Fault Tree Analysis, Decision Tree Analysis, Life Cycle Cost Analysis and Multi-Attribute Utility Theory. Prior to the development of this tool, data was collected through a questionnaire survey and analysed by means of statistical analysis in order to derive some foundational parameters of the tool. A quantitative research method was adopted for data collection because the processes of the tool for which the data was required are quantitative. The implementation of this tool is novel given the integration of the analytical techniques mentioned above and the application of the tool for selecting ICT-based construction logistics systems. The tool takes in data such as costs and quantities of materials for a building project and quantifies the costs and benefits of alternative ICT-based tracking systems that can improve the logistics functions of the project. The application of the tool will eliminate guesswork about the benefits of ICT-based tracking systems by providing an objective platform for the quantification of the costs and benefits of the systems prior to implementation.
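As a rough illustration of the multi-attribute step only (the attribute names, weights and scores below are invented, and the thesis integrates this with fault-tree, decision-tree and life-cycle-cost analyses that are not reproduced here), a simple additive multi-attribute utility ranking of candidate tracking systems might look like this:
```python
# Additive multi-attribute utility: U(alternative) = sum_i w_i * u_i(alternative),
# with weights summing to 1 and per-attribute utilities scaled to [0, 1].
attributes = ["material_loss_reduction", "delay_reduction", "life_cycle_cost", "ease_of_use"]
weights = {"material_loss_reduction": 0.35, "delay_reduction": 0.30,
           "life_cycle_cost": 0.25, "ease_of_use": 0.10}

# Hypothetical single-attribute utilities for three ICT-based logistics systems
scores = {
    "RFID": {"material_loss_reduction": 0.8, "delay_reduction": 0.6, "life_cycle_cost": 0.5, "ease_of_use": 0.7},
    "GPS":  {"material_loss_reduction": 0.5, "delay_reduction": 0.7, "life_cycle_cost": 0.7, "ease_of_use": 0.8},
    "WSN":  {"material_loss_reduction": 0.7, "delay_reduction": 0.5, "life_cycle_cost": 0.4, "ease_of_use": 0.5},
}

utilities = {alt: sum(weights[a] * s[a] for a in attributes) for alt, s in scores.items()}
for alt, u in sorted(utilities.items(), key=lambda kv: -kv[1]):
    print(f"{alt}: overall utility {u:.2f}")
```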
|
424 |
Learning and development of probability concepts: Effects of computer-assisted instruction and diagnosis / Callahan, Philip, January 1989
This study considered spontaneous versus feedback-induced changes in probability strategies using grouped trials of two-choice problems. Third- and sixth-grade Anglo and Apache children were the focus of computer-assisted instruction and diagnostics designed to maximize performance and measure understanding of probability concepts. Feedback, using indeterminate problems directed at specific strategies, in combination with a large problem set, permitted examination of response latency and hypothesis alternation. Explicit training, in the form of computer-based tutorials, administered feedback as (a) correctness and frequency information, (b) mathematical solutions, or (c) a graphical format, targeted at weaknesses in the prevailing strategy. The tutorials encouraged an optimal proportional strategy and sought to affect the memorial accessibility or availability of information through the vividness of presentation. Because subjects were asked to select for the best chance of winning, each bucket of the two-choice bucket problems was coded as containing target, or winner (W), balls and distractor, or loser (L), balls. Third- and sixth-grade subjects came to the task with position-oriented strategies focusing on the winner or target elements. The strategies' sophistication was related to age, with older children displaying less confusion and using proportional reasoning to a greater extent than the third-grade children. Following the tutorial, the subjects displayed a marked decrease in winner-focused strategies, shifting instead to strategies that considered both winners and losers; however, there was a general tendency to return to the simpler strategies over the course of the posttest. These simpler strategies provided the fastest response latencies within this study. Posttest results indicated that both third- and sixth-grade subjects had made comparable gains in the use of strategies addressing both winners and losers. Based on the results of a long-term written test, sixth-grade subjects appeared better able to retain or apply the knowledge that both winners and losers must be considered when addressing the two-choice bucket problems. Yet, for younger children, knowledge of these sophisticated strategies did not necessarily support generalization to other mathematical skills such as fraction understanding.
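A small numeric illustration of the strategies contrasted above, with made-up bucket contents: a winners-only strategy picks the bucket with more winner (W) balls, while the optimal proportional strategy compares W/(W+L).
```python
# Two hypothetical buckets: (winner balls W, loser balls L)
buckets = {"left": (4, 8), "right": (3, 3)}

winners_only = max(buckets, key=lambda b: buckets[b][0])                     # compares W only
proportional = max(buckets, key=lambda b: buckets[b][0] / sum(buckets[b]))   # compares W/(W+L)

print("winners-only strategy picks:", winners_only)   # left  (4 > 3)
print("proportional strategy picks:", proportional)   # right (0.50 > 0.33)
```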
|
425 |
Quantization Dimension for Probability Definitions / Lindsay, Larry J., 12 1900
The term quantization refers to the process of estimating a given probability by a discrete probability supported on a finite set. The quantization dimension D_r of a probability is related to the asymptotic rate at which the expected distance (raised to the r-th power) to the support of the quantized version of the probability goes to zero as the size of the support is allowed to go to infinity. This assumes that the quantized versions are in some sense "optimal" in that the expected distances have been minimized. In this dissertation we give a short history of quantization as well as some basic facts. We develop a generalized framework for the quantization dimension which extends the current theory to include a wider range of probability measures. This framework uses the theory of thermodynamic formalism and the multifractal spectrum. It is shown that at least in certain cases the quantization dimension function D(r) = D_r is a transform of the temperature function b(q), which is already known to be the Legendre transform of the multifractal spectrum f(α). Hence, these ideas are all closely related and it would be expected that progress in one area could lead to new results in another. It would also be expected that the results in this dissertation would extend to all probabilities for which a quantization dimension function exists. The cases considered here include probabilities generated by conformal iterated function systems (and include self-similar probabilities) and also probabilities generated by graph directed systems, which further generalize the idea of an iterated function system.
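For reference, a standard way of writing the objects discussed here (the notation is generic, not copied from the dissertation): the n-th quantization error of order r of a probability measure μ, the quantization dimension it defines, and the known Graf–Luschgy characterisation in the self-similar case.
```latex
% n-th quantization error of order r, and the quantization dimension of order r
\[
  e_{n,r}(\mu) \;=\; \inf_{\substack{A \subset \mathbb{R}^d \\ \#A \le n}}
  \left( \int d(x,A)^{r} \, d\mu(x) \right)^{1/r},
  \qquad
  D_r(\mu) \;=\; \lim_{n \to \infty} \frac{\log n}{-\log e_{n,r}(\mu)}
  \quad \text{(when the limit exists).}
\]
% For a self-similar measure with probabilities p_i and contraction ratios s_i
% satisfying the open set condition, the Graf--Luschgy result gives D_r implicitly:
\[
  \sum_i \bigl( p_i \, s_i^{\,r} \bigr)^{D_r/(D_r + r)} \;=\; 1 .
\]
```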
|
426 |
A Comparison of Some Continuity Corrections for the Chi-Squared Test in 3 x 3, 3 x 4, and 3 x 5 Tables / Mullen, Jerry D. (Jerry Davis), 05 1900
This study was designed to determine whether chi-squared-based tests for independence give reliable estimates (as compared to the exact values provided by Fisher's exact probabilities test) of the probability of a relationship between the variables in 3 x 3, 3 x 4, and 3 x 5 contingency tables when the sample size is 10, 20, or 30. In addition to the classical (uncorrected) chi-squared test, four methods for continuity correction were compared to Fisher's exact probabilities test. The four methods were Yates' correction, two corrections attributed to Cochran, and Mantel's correction. The study was modeled after a similar comparison conducted on 2 x 2 contingency tables and published by Michael Haber.
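A hedged sketch of the kind of comparison involved (only the classical statistic and a cell-wise Yates-type correction are shown; the Cochran and Mantel corrections examined in the thesis, and Fisher's exact test for r x c tables, are not implemented here, and the table below is invented):
```python
import numpy as np
from scipy.stats import chi2

def chi_squared_tests(table):
    """Classical and Yates-corrected chi-squared statistics for an r x c table."""
    obs = np.asarray(table, dtype=float)
    expected = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / obs.sum()
    df = (obs.shape[0] - 1) * (obs.shape[1] - 1)

    classical = np.sum((obs - expected) ** 2 / expected)
    # Yates-type continuity correction applied cell-wise
    yates = np.sum((np.abs(obs - expected) - 0.5) ** 2 / expected)

    return {"classical": (classical, chi2.sf(classical, df)),
            "yates": (yates, chi2.sf(yates, df))}

# Hypothetical sparse 3 x 3 table with total sample size 20
table = [[3, 1, 2],
         [0, 4, 1],
         [2, 3, 4]]
print(chi_squared_tests(table))
```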
|
427 |
Culture, cognition and uncertainty: metacognition in the learning and teaching of probability theory / Broekmann, Irene Anne, 30 August 2016
A Research Report submitted to the Faculty of Education, University
of the Witwatersrand, Johannesburg, in fulfilment of the
requirements for the degree of Master of Education by course-work
and research report.
Johannesburg, 1992 / This research report investigates the psychological dimensions in
the learning and teaching of probability theory. It begins by
outlining some problems arising from the author's own experience in
the learning and teaching of probability theory, and develops a
theoretical position using the Theory of Activity. This theory
places education within the broad social context and recognises the
centrality of affective aspects of cognition.
[Abbreviated abstract]
|
428 |
Characterisation and application of tests for recent infection for HIV incidence surveillance / Kassanjee, Reshma, 02 February 2015
A thesis submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Doctor of Philosophy. 21 October, 2014. / Three decades ago, the discovery of the Human Immunodeficiency Virus (HIV) was
announced. The subsequent HIV pandemic has continued to devastate the global
community, and many countries have set ambitious HIV reduction targets over the years.
Reliable methods for measuring incidence, the rate of new infections, are essential for
monitoring the virus, allocating resources, and assessing interventions. The estimation of
incidence from single cross-sectional surveys using tests that distinguish between ‘recent’
and ‘non-recent’ infection has therefore attracted much interest. The approach provides a
promising alternative to traditional estimation methods which often require more complex
survey designs, rely on poorly known inputs, and are prone to bias. More specifically, the
prevalence of HIV and ‘recent’ HIV infection, as measured in a survey, are used together
with relevant test properties to infer incidence. However, there has been a lack of
methodological consensus in the field, caused by limited applicability of proposed
estimators, inconsistent test characterisation (or estimation of test properties) and
uncertain test performance. This work aims to address these key obstacles. A general
theoretical framework for incidence estimation is developed, relaxing unrealistic
assumptions used in earlier estimators. Completely general definitions of the required test
properties emerge from the analysis. The characterisation of tests is then explored: a new
approach, which utilises specimens from subjects observed only once after infection, is
demonstrated; and currently used approaches, which require that subjects are followed up
over time after infection, are systematically benchmarked. The first independent and
consistent characterisation of multiple candidate tests is presented, and was performed on
behalf of the Consortium for the Evaluation and Performance of HIV Incidence Assays
(CEPHIA), which was established to provide guidance and foster consensus in the field.
Finally, the precision of the incidence estimator is presented as an appropriate metric for
evaluating, optimising and comparing tests, and the framework serves to counter existing
misconceptions about test performance. The contributions together provide sound
theoretical and methodological foundations for the application, characterisation and
optimisation of recent infection tests for HIV incidence surveillance, allowing the focus
to now shift towards practical application.
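To make the cross-sectional idea concrete, here is a sketch of a widely used form of the recent-infection incidence estimator in this literature (the survey counts and test-property values below are invented; Omega_T is the mean duration of recent infection and beta_T the false-recent rate, both defined relative to a cut-off time T):
```python
def incidence_estimate(n_neg, n_pos, n_recent, mdri_years, frr, big_t_years):
    """Cross-sectional incidence estimate from a test for recent infection.

    n_neg       : number testing HIV-negative in the survey
    n_pos       : number testing HIV-positive
    n_recent    : number of positives classified as 'recently' infected
    mdri_years  : mean duration of recent infection, Omega_T (years)
    frr         : false-recent rate, beta_T
    big_t_years : cut-off time T (years) relative to which MDRI and FRR are defined
    """
    numerator = n_recent - frr * n_pos
    denominator = n_neg * (mdri_years - frr * big_t_years)
    return numerator / denominator  # infections per person-year among susceptibles

# Hypothetical survey: 8000 negatives, 2000 positives, 120 classified 'recent',
# MDRI of 170 days, FRR of 1.5%, T = 2 years
print(incidence_estimate(8000, 2000, 120, 170 / 365.25, 0.015, 2.0))
```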
|
429 |
Modelos de transição de Markov: um enfoque em experimentos planejados com dados binários correlacionados / Markov transition models: a focus on planned experiments with correlated binary data / Lordelo, Mauricio Santana, 30 May 2014
Os modelos de transição de Markov constituem uma ferramenta de grande importância para diversas áreas do conhecimento quando são desenvolvidos estudos com medidas repetidas. Eles caracterizam-se por modelar a variável resposta ao longo do tempo condicionada a uma ou mais respostas anteriores, conhecidas como a história do processo. Além disso, é possível a inclusão de outras covariáveis. No caso das respostas binárias, pode-se construir uma matriz com as probabilidades de transição de um estado para outro. Neste trabalho, quatro abordagens diferentes de modelos de transição foram comparadas para avaliar qual estima melhor o efeito causal de tratamentos em um estudo experimental em que a variável resposta é um vetor binário medido ao longo do tempo. Estudos de simulação foram realizados levando em consideração experimentos balanceados com três tratamentos de natureza categórica. Para avaliar as estimativas foram utilizados o erro padrão, viés e percentual de cobertura dos intervalos de confiança. Os resultados mostraram que os modelos de transição marginalizados são mais indicados na situação em que um experimento é desenvolvido com um reduzido número de medidas repetidas. Como complementação, apresenta-se uma forma alternativa de realizar comparações múltiplas, uma vez que os pressupostos como normalidade, independência e homocedasticidade são violados impossibilitando o uso dos métodos tradicionais. Um experimento com dados reais no qual se registrou a presença de fungos (considerada como sucesso) em cultivos de citros e morango foi analisado por meio do modelo de transição apropriado. Para as comparações múltiplas, intervalos de confiança simultâneos foram construídos para o preditor linear e os resultados foram estendidos para a resposta média que neste caso são as probabilidades de sucesso. / Markov transition models are a tool of great importance for several areas of knowledge when studies with repeated measures are developed. They are characterized by modelling the response variable over time conditional on one or more previous responses, known as the history of the process. In addition, it is possible to include other covariates. In the case of binary responses, a matrix of transition probabilities from one state to another can be constructed. In this work, four different approaches to transition models were compared in order to assess which best estimates the causal effect of treatments in an experimental study where the outcome is a vector of binary responses measured over time. Simulation studies were carried out considering balanced experiments with three treatments of a categorical nature. To assess the estimates, the standard error, the bias and the coverage percentage of the confidence intervals were used. The results showed that the marginalized transition models are more appropriate in situations where an experiment is developed with a reduced number of repeated measurements. As a complement, an alternative way of performing multiple comparisons is presented, since assumptions such as normality, independence and homoscedasticity are violated, precluding the use of traditional methods. An experiment with real data, in which the presence of fungi (considered a success) was recorded in citrus and strawberry crops, was analysed by means of the appropriate transition model. For the multiple comparisons, simultaneous confidence intervals were constructed for the linear predictor and the results were extended to the mean response, which in this case is the probability of success.
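A minimal sketch of the first-order transition idea for binary repeated measures (the data are simulated and this is the simplest possible version, without the treatment covariates or the marginalized formulation compared in the thesis): the transition matrix collects P(Y_t = j | Y_{t-1} = i).
```python
import numpy as np

def transition_matrix(sequences):
    """Estimate a 2 x 2 first-order transition matrix P[i, j] = P(Y_t = j | Y_{t-1} = i)
    from a list of binary (0/1) response sequences."""
    counts = np.zeros((2, 2))
    for seq in sequences:
        for prev, curr in zip(seq[:-1], seq[1:]):
            counts[prev, curr] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Simulated repeated binary measurements for a few experimental units
rng = np.random.default_rng(1)
true_p = np.array([[0.8, 0.2],   # P(next=0|prev=0), P(next=1|prev=0)
                   [0.4, 0.6]])  # P(next=0|prev=1), P(next=1|prev=1)
sequences = []
for _ in range(30):          # 30 units
    y = [rng.integers(0, 2)]
    for _ in range(9):       # 10 time points each
        y.append(rng.choice(2, p=true_p[y[-1]]))
    sequences.append(y)

print(transition_matrix(sequences))  # should be close to true_p
```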
|
430 |
An empirical analysis of the factors affecting appropriateness of confidence in predicting financially distressed firms. / Empirical analysis of the major factors affecting appropriateness of confidence in predicting financially distressed firms / January 1996
by Siu-yeung Chan. / Publication date from spine. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 254-278).
Contents: Chapter I, Introduction; Chapter II, Literature Review on Behavioural Decision Theory; Chapter III, Literature Review on Behavioural Decision Research in Accounting; Chapter IV, Research Model and Hypotheses; Chapter V, Research Method and Design; Chapter VI, Analysis of Data; Chapter VII, Summary, Discussions and Implications; References; Appendix A, Experiment Instrument (in English); Appendix B, Experiment Instrument (in Chinese); Appendix C, Stepwise Logit Analysis Results.
|