101

Framework para sistema tutor adaptativo ao raciocínio crítico em contabilidade - STARCC / Framework for adaptive tutoring system for critical thinking in accounting - STARCC

Souza, Marcelo Cunha de 13 June 2019 (has links)
O desenvolvimento de habilidades cognitivas de análise, síntese e avaliação, diretamente ligadas à capacidade de raciocínio crítico, constitui um importante objetivo do processo educacional. Há décadas a educação contábil é criticada pela deficiência de seus egressos na aquisição e no uso dessas habilidades. Algumas críticas estão diretamente relacionadas ao conteúdo tecnicista da formação (currículo); outras se referem à metodologia aplicada nas salas de aula (pedagogia). O cenário atual, de avanços tecnológicos, cria um ambiente de constantes mudanças na profissão contábil, sendo necessário que haja mudanças na forma e no conteúdo dos cursos para acompanhá-las. O corpo de doutores e pesquisadores em Contabilidade não é suficiente para protagonizar essa mudança, e o uso de tecnologias pode auxiliar. Diante desse cenário, o presente estudo buscou identificar em que medida os sistemas tutores adaptativos auxiliam o estudante de Contabilidade no desenvolvimento das habilidades de raciocínio crítico, propondo um framework para desenvolvimento de Sistemas Tutores Adaptativos ao Raciocínio Crítico em Contabilidade (STARCC). Como resultado, o presente estudo desenvolveu um método de classificação do nível de raciocínio crítico de estudantes da disciplina de História da Contabilidade, com base nos logs de acesso do sistema de apoio ao ensino, no processamento de linguagem natural dos textos produzidos para a disciplina e no índice Flesch-Kincaid de legibilidade dos materiais produzidos. Análises demonstram que o modelo classifica os estudantes com acurácia de 86,20% em relação ao processo realizado por um professor. Entretanto, os resultados precisam ser analisados com cuidado, dado que o modelo deve ser testado e melhorado em outras disciplinas e em outros conjuntos de dados para que possa ser fonte confiável de classificação do nível de raciocínio crítico dos estudantes.
Como sugestão de pesquisas futuras, pode-se comparar os resultados do modelo de classificação baseado em inteligência artificial desta pesquisa com os resultados de testes consagrados pela literatura, como o California Critical Thinking Skills Test (CCTST) e o Ennis-Weir Critical Thinking Essay Test (EWCTET). O framework STARCC mostrou-se útil para a elaboração de sistemas de apoio ao processo de ensino e aprendizagem no curso de História da Contabilidade, e pesquisas futuras devem submetê-lo a testes em relação a atributos como facilidade de uso, utilidade e custo-benefício. / The development of the cognitive abilities of analysis, synthesis, and evaluation, directly linked to the capacity for critical thinking, is an important objective of the educational process. For decades, accounting education has been criticized for its graduates' deficiencies in acquiring and using these skills. Some criticisms are directly related to the technical content of the training (curriculum); others refer to the methodology applied in classrooms (pedagogy). The current scenario of technological advances creates an environment of constant change in the accounting profession, making changes in the form and content of courses necessary to keep pace. The body of doctoral researchers in Accounting is not large enough to lead this change on its own, and the use of technology can help. Given this scenario, the present study sought to identify the extent to which adaptive tutoring systems help Accounting students develop critical thinking skills, proposing a framework for the development of Adaptive Tutoring Systems for Critical Thinking in Accounting (STARCC).
As a result, the present study developed a method for classifying the level of critical thinking of students in an Accounting History course, based on the access logs of the teaching support system, natural language processing of the texts produced for the course, and the Flesch-Kincaid readability index of the materials produced. Analyses show that the model classifies students with an accuracy of 86.20% relative to the classification performed by a teacher. However, the results need to be interpreted carefully, since the model must be tested and improved in other courses and on other datasets before it can be a reliable source of classification of students' critical thinking level. As a suggestion for future research, the results of this study's artificial-intelligence-based classification model could be compared with the results of tests established in the literature, such as the California Critical Thinking Skills Test (CCTST) and the Ennis-Weir Critical Thinking Essay Test (EWCTET). The STARCC framework proved useful for building systems that support the teaching and learning process in the Accounting History course, and future research should test it against attributes such as ease of use, usefulness, and cost-benefit.
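The abstract's readability measure is the Flesch-Kincaid index, a simple function of word, sentence, and syllable counts. A minimal sketch of the grade-level variant with its standard published weights (the abstract does not specify which Flesch-Kincaid variant the thesis used, so the choice of the grade-level formula here is an assumption):

```python
def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid grade level from raw counts (standard published constants)."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# A 100-word passage with 5 sentences and 130 syllables:
print(round(flesch_kincaid_grade(100, 5, 130), 2))  # 7.55
```

Longer sentences and more syllables per word both push the grade level up, which is why the index serves as a cheap proxy for text complexity in the classification features described above.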
102

Modelos de previsão da despassivação das armaduras em estruturas de concreto sujeitas à carbonatação. / Prediction models for the depassivation of reinforcement in concrete structures subject to carbonation.

Carmona, Thomas Garcia 10 June 2005 (has links)
Este trabalho é iniciado apresentando os conceitos teóricos necessários para o bom entendimento do tema tratado, incluindo corrosão de armaduras, passivação, despassivação, vida útil e também conceitos de análise de riscos e teoria da confiabilidade. No terceiro capítulo é feita a revisão bibliográfica das variáveis que influem na carbonatação do concreto, apresentando um panorama do conhecimento atual sobre o tema, tanto no Brasil como no exterior. No quarto capítulo são apresentados e discutidos os modelos de previsão da carbonatação, sendo também feitas comparações entre os resultados obtidos pelos modelos principais. No capítulo cinco é apresentado o trabalho experimental, que objetiva contribuir com o conhecimento sobre a variabilidade da carbonatação e dos cobrimentos por meio de um estudo de caso real. A estrutura estudada foi o subsolo de um edifício residencial na zona central da cidade de São Paulo, no qual foram feitas diversas medidas de profundidade de carbonatação, cobrimentos de armaduras, concentração de CO2 ambiente e umidade relativa do ar. Os resultados foram tratados por meio de análise de variância e os valores de profundidade de carbonatação foram comparados com os valores previstos empregando modelos de previsão. Foi realizado o cálculo teórico da probabilidade de despassivação, que foi comparada com a incidência real de despassivação observada. Os coeficientes de variação encontrados também foram comparados com os resultados de outras pesquisas atuais. É apresentado o desenvolvimento de um programa computacional para previsão do período de iniciação por métodos deterministas e probabilistas. / This work begins by presenting the theoretical concepts needed for a good understanding of its subject, including corrosion of steel in concrete, passivation, depassivation, service life, and concepts of risk analysis and reliability theory.
Chapter three reviews the variables that influence concrete carbonation, presenting an overview of current knowledge on the topic in Brazil and abroad. Chapter four presents and discusses the carbonation prediction models, and comparisons are made between the results of the main models. Chapter five presents the experimental work, which aims to contribute to knowledge about the variability of carbonation and concrete cover through a real case study. The studied structure was the basement parking garage of a 30-year-old residential building in the central zone of the city of São Paulo, Brazil, in which numerous measurements were made of carbonation depth, concrete cover, ambient CO2 concentration, and relative humidity. The collected data were treated with analysis of variance, and the measured carbonation depths were compared with those estimated using prediction models. The theoretical probability of depassivation was calculated and compared with the actual incidence of depassivation observed. The coefficients of variation obtained were also compared with the results of other recent investigations. The development of a computer program for predicting the initiation period using deterministic and probabilistic methods is also presented.
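Most initiation-period models of the kind compared in chapter four are variants of the square-root-of-time law x = k·√t, and a depassivation probability follows from comparing the predicted depth with the cover distribution. A hypothetical Monte Carlo sketch of that comparison (the coefficient and cover statistics below are illustrative, not the thesis's measured values):

```python
import math
import random

def carbonation_depth(k: float, t_years: float) -> float:
    """Square-root-of-time carbonation model: x = k * sqrt(t), depth in mm."""
    return k * math.sqrt(t_years)

def depassivation_probability(k_mean, k_cv, cover_mean, cover_cv,
                              t_years, n=100_000, seed=42):
    """P(carbonation depth >= cover) with normally distributed k and cover."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        k = rng.gauss(k_mean, k_cv * k_mean)
        cover = rng.gauss(cover_mean, cover_cv * cover_mean)
        if carbonation_depth(k, t_years) >= cover:
            hits += 1
    return hits / n

print(carbonation_depth(5.0, 25.0))  # 25.0 mm after 25 years with k = 5 mm/yr^0.5
print(depassivation_probability(4.0, 0.3, 25.0, 0.2, 30.0))
```

The probabilistic branch of the computer program mentioned in the abstract would follow this same pattern, with the distributions of k and cover fitted to field measurements rather than assumed.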
103

Avaliação de métodos para a quantificação de biomassa e carbono em florestas nativas e restauradas da Mata Atlântica / Evaluation methods for quantifying biomass and carbon in native and restored Atlantic Forests

Gusson, Eduardo 12 December 2013 (has links)
A quantificação de biomassa e carbono em florestas requer a aplicação de métodos adequados para se obter estimativas confiáveis de seus estoques. Neste sentido, o objetivo deste trabalho foi avaliar a aplicação de alguns métodos utilizados para a predição e estimação dessas variáveis em florestas nativas e restauradas da Mata Atlântica. Para isso, um primeiro capítulo aborda o uso do índice de vegetação NDVI como ferramenta auxiliar no inventário de estoques de biomassa em áreas de restauração florestal. Diferentes métodos de amostragem foram comparados em termos de precisão e conservadorismo das estimativas. Os resultados demonstraram que o NDVI apresentou adequada correlação com a biomassa estimada nas parcelas do inventário florestal instaladas em campo, sendo viável sua aplicação, seja para auxiliar na determinação de estratos, na aplicação da amostragem estratificada, seja como variável suplementar em um estimador de regressão no procedimento da amostragem dupla. Este último método possibilitou minimizar as incertezas acerca das estimativas valendo-se de uma intensidade amostral reduzida, fato que torna seu uso interessante, principalmente em estudos de escala ampla, de modo a aumentar a confiabilidade das quantificações de estoques de carbono presentes na biomassa florestal, a custos de inventário reduzidos. Um segundo capítulo discute a abordagem metodológica utilizada para inferir sobre a qualidade de modelos preditivos quando da seleção de modelos concorrentes para aplicação em estudos de biomassa de florestas nativas. Para tanto, seis modelos considerando diferentes combinações de variáveis preditoras, incluindo diâmetro, altura total e alguma informação relativa à densidade da madeira, foram construídos a partir de dados de uma amostra de 80 árvores. As equações de predição de biomassa seca geradas por estes modelos foram avaliadas quanto à sua qualidade de ajuste e desempenho de aplicação.
Neste segundo caso, aplicando-as aos dados de outra amostra composta por 146 árvores presentes em nove parcelas destrutivas instaladas em diferentes estágios sucessionais da floresta, de modo a possibilitar a avaliação dos vieses preditivos. No intuito de se verificar as discrepâncias nas estimativas de biomassa devido à aplicação das diferentes equações de predição, as equações desenvolvidas, junto a outras disponíveis na literatura, foram aplicadas aos dados de um inventário florestal realizado na área estudada. O estudo confirma a natureza empírica destas equações, atentando para a necessidade de prévia avaliação de seu desempenho de predição antes de sua aplicação, em especial das ajustadas com amostras de outras florestas, expondo alguns dos principais fatores associados às causas de incertezas nas quantificações dos estoques de biomassa nos estudos realizados em florestas nativas. / The quantification of biomass and carbon requires appropriate methods to obtain reliable estimates of their stocks in natural and planted forests. The aim of this study was to evaluate different methods applicable to estimating biomass in both natural and restored Atlantic Forests. The first chapter discusses the use of the vegetation index NDVI as an auxiliary tool in the inventory of biomass stocks in forest restoration areas. Different sampling methods were compared in terms of their accuracy and conservativeness. The results showed an adequate correlation between the vegetation index and the measured biomass, making NDVI applicable either as a decision-support tool to define strata in stratified sampling or as a predictor in the double sampling procedure. The latter method minimized the uncertainties in the biomass estimates while reducing sampling effort, which makes the approach attractive, especially in the context of large-scale surveys.
The second chapter discusses the methodological approach used to evaluate the quality of predictive models applied to biomass studies in natural forests. For this, six models were fitted from 80 sample trees, using different combinations of predictor variables such as diameter, total height, and wood density information. The predictive equations generated by the models were evaluated according to their quality of fit and prediction performance. To evaluate their prediction performance, the equations were applied to a dataset of another 146 sample trees measured in nine destructive sample plots. The plots were located in different forest successional stages, allowing the evaluation of model predictive bias among the stages. A third step of the analysis applied equations from the literature to the dataset of a forest inventory conducted in the study area, in order to verify the discrepancies in the estimates due to the use of these different models. The study confirms the empirical nature of biomass equations and the need for prior evaluation of their prediction performance before application. This conclusion is even more relevant for equations obtained from other forest types, exposing some of the key factors associated with the causes of uncertainty in biomass estimation in natural forests.
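The double sampling procedure described in the first chapter pairs a large sample of the cheap auxiliary variable (NDVI) with a small subsample where biomass is also measured, then corrects the subsample mean with a regression estimator. A sketch of the classical formulation (the toy data are illustrative, not the thesis's inventory values):

```python
def mean(xs):
    return sum(xs) / len(xs)

def double_sampling_estimate(ndvi_large, ndvi_small, biomass_small):
    """Regression estimator: y_reg = y_bar + b * (x_bar_large - x_bar_small)."""
    xb, yb = mean(ndvi_small), mean(biomass_small)
    # ordinary least-squares slope of biomass on NDVI in the subsample
    b = (sum((x - xb) * (y - yb) for x, y in zip(ndvi_small, biomass_small))
         / sum((x - xb) ** 2 for x in ndvi_small))
    return yb + b * (mean(ndvi_large) - xb)

# Perfectly linear toy data: biomass = 10 * NDVI
ndvi_small = [0.2, 0.4, 0.6]
biomass_small = [2.0, 4.0, 6.0]
ndvi_large = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
print(round(double_sampling_estimate(ndvi_large, ndvi_small, biomass_small), 2))  # 4.5
```

The stronger the NDVI-biomass correlation, the more the large sample's NDVI mean tightens the biomass estimate, which is how the method buys precision at a reduced field-sampling intensity.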
104

THE USE OF 3-D HIGHWAY DIFFERENTIAL GEOMETRY IN CRASH PREDICTION MODELING

Amiridis, Kiriakos 01 January 2019 (has links)
The objective of this research is to evaluate and introduce a new methodology for rural highway safety. Current practice relies on crash prediction models that use specific explanatory variables, and the repository of knowledge from past research is the Highway Safety Manual (HSM). Most of the prediction models in the HSM identify the effect of individual geometric elements on crash occurrence and consider their combination in a multiplicative manner, where each effect is multiplied with the others to determine their combined influence. Three-dimensional (3-D) representations of the roadway surface have also been explored in the past to model the highway structure and optimize the roadway alignment. Differential geometry has recently been applied to the 3-D roadway surface to understand how new metrics can identify and express roadway geometric elements, indicating that this may be a new approach for representing the combined effects of all geometric features in single variables. This research further explores this potential and examines the possibility of using 3-D differential geometry to represent the roadway surface and of using its associated metrics to consider the combined effect of roadway features on crashes. It is anticipated that a series of single metrics could combine horizontal and vertical alignment features and eventually predict roadway crashes in a more robust manner. It should also be noted that the main purpose of this research is not simply to suggest predictive crash models, but to show in a statistically concrete manner that 3-D metrics of differential geometry, e.g., Gaussian curvature and mean curvature, can assist in analyzing highway design and safety. The value of this research is therefore oriented towards a proof of concept of the link between 3-D geometry in highway design and safety.
This thesis presents the steps and rationale of the procedure followed to complete the proposed research. Finally, the results of the suggested methodology are compared with those derived from the state-of-the-art Interactive Highway Safety Design Model (IHSDM), the software currently in use, which is based on the findings of the HSM.
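For a roadway surface expressed as a height field z = f(x, y), the Gaussian and mean curvatures named above follow from the first and second partial derivatives of f. A sketch using central finite differences (the paraboloid test surface is illustrative; the thesis works with surveyed alignment data, not closed-form surfaces):

```python
def surface_curvatures(f, x, y, h=1e-4):
    """Gaussian (K) and mean (H) curvature of z = f(x, y) via central differences."""
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h ** 2)
    g = 1 + fx ** 2 + fy ** 2                       # metric factor 1 + |grad f|^2
    K = (fxx * fyy - fxy ** 2) / g ** 2             # Gaussian curvature
    H = ((1 + fy ** 2) * fxx - 2 * fx * fy * fxy
         + (1 + fx ** 2) * fyy) / (2 * g ** 1.5)    # mean curvature
    return K, H

# Paraboloid z = (x^2 + y^2) / 2: at the origin K = 1 and H = 1.
K, H = surface_curvatures(lambda x, y: (x * x + y * y) / 2, 0.0, 0.0)
print(round(K, 3), round(H, 3))
```

K and H collapse the horizontal and vertical alignment of the surface at a point into two scalars, which is exactly the "single metric" property the research exploits.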
105

Rozhodování uživatele účetní závěrky o finanční pozici podniku / Decision making of the user of the financial statements about the financial position of the enterprise

VALDMANOVÁ, Dominika January 2019 (has links)
The financial health of a company matters to users of financial statements for various reasons. Future investors may need it to decide whether to invest in the company, and a bank may need it to decide whether to grant the company credit. There are other cases as well in which a user needs to know whether a company is financially healthy. For this evaluation, methods used to detect and assess the manipulation of financial statements are selected: the CFEBT model, the Beneish model, and the Jones non-discretionary accruals model. Creditworthiness models are used to determine whether the company is creating value for the future and whether it is in danger of bankruptcy; the creditworthiness index, the Tamari model, and the IN05 model are selected. In the diploma thesis, these methods are applied to six companies: a parent company named ABC a.s. and its five subsidiaries named with Roman numerals I-V. The company names are fictitious. In the practical part of the diploma thesis, the companies are analysed according to the individual models, and the influence of the subsidiaries on the parent company is then determined using the correlation coefficient. Finally, the hypotheses are confirmed or refuted. The first hypothesis states that the whole consolidated unit is financially healthy; the second assumes that the results of the subsidiaries positively influence the results of the parent company.
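The Beneish model named in the abstract scores eight year-on-year financial-statement indices. A sketch with the published eight-variable coefficients (the illustrative input values are invented; the thesis's company data are not reproduced here):

```python
def beneish_m_score(dsri, gmi, aqi, sgi, depi, sgai, tata, lvgi):
    """Eight-variable Beneish M-score; scores above roughly -1.78 flag
    a heightened likelihood of earnings manipulation."""
    return (-4.84 + 0.920 * dsri + 0.528 * gmi + 0.404 * aqi + 0.892 * sgi
            + 0.115 * depi - 0.172 * sgai + 4.679 * tata - 0.327 * lvgi)

# A firm whose indices all equal 1 (no year-on-year change) and with zero accruals:
m = beneish_m_score(1, 1, 1, 1, 1, 1, 0, 1)
print(round(m, 2), "likely manipulator:", m > -1.78)  # -2.48 likely manipulator: False
```

The accruals term (TATA) carries by far the largest weight, so a company with aggressive accruals can cross the threshold even when the ratio indices look unremarkable.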
106

Plant species rarity and data restriction influence the prediction success of species distribution models

Mugodo, James, n/a January 2002 (has links)
There is a growing need for accurate distribution data for both common and rare plant species for conservation planning and ecological research purposes. A database of more than 500 observations for nine tree species with different ecological and geographical distributions and a range of frequencies of occurrence in south-eastern New South Wales (Australia) was used to compare the predictive performance of logistic regression models, generalised additive models (GAMs) and classification tree models (CTMs) using different data restriction regimes and several model-building strategies. Environmental variables (mean annual rainfall, mean summer rainfall, mean winter rainfall, mean annual temperature, mean maximum summer temperature, mean minimum winter temperature, mean daily radiation, mean daily summer radiation, mean daily June radiation, lithology and topography) were used to model the distribution of each of the plant species in the study area. Model predictive performance was measured as the area under the curve of a receiver operating characteristic (ROC) plot. The initial predictive performance of logistic regression models and generalised additive models (GAMs) using unrestricted, temperature restricted, major gradient restricted and climatic domain restricted data gave results that were contrary to current practice in species distribution modelling. Although climatic domain restriction has been used in other studies, it was found to produce models that had the lowest predictive performance. The performance of domain restricted models was significantly (p = 0.007) inferior to the performance of major gradient restricted models when the predictions of the models were confined to the climatic domain of the species. Furthermore, the effect of data restriction on model predictive performance was found to depend on the species as shown by a significant interaction between species and data restriction treatment (p = 0.013). 
As found in other studies, however, the predictive performance of GAM was significantly (p = 0.003) better than that of logistic regression. The superiority of GAM over logistic regression was unaffected by different data restriction regimes and was not significantly different within species. The logistic regression models used in the initial performance comparisons were based on models developed using the forward selection procedure in a rigorous-fitting model-building framework designed to produce parsimonious models. The rigorous-fitting model-building framework involved testing for a significant reduction in model deviance (p = 0.05) and significance of the parameter estimates (p = 0.05). The size of the parameter estimates and their standard errors were inspected, because large estimates and/or standard errors are an indication of model degradation from overfitting or effects such as multicollinearity. For additional variables to be included in a model, they had to contribute significantly (p = 0.025) to the model predictive performance. In an attempt to improve the performance of logistic regression species distribution models in a rigorous-fitting model-building framework, the backward elimination procedure was employed for model selection, but it yielded models with reduced performance. A liberal-fitting model-building framework that used significant model deviance reduction at the p = 0.05 (low significance models) and p = 0.00001 (high significance models) levels as the major criterion for variable selection was employed for the development of logistic regression models using the forward selection and backward elimination procedures. Liberal fitting yielded models with significantly greater predictive performance than the rigorous-fitting logistic regression models (p = 0.0006). The predictive performance of the former models was comparable to that of GAMs and classification tree models (CTMs).
The low significance liberal-fitting models had a much larger number of variables than the high significance liberal-fitting models, but with no significant increase in predictive performance. To develop liberal-fitting CTMs, the tree shrinking program in S-PLUS was used to produce a number of trees of different sizes (subtrees) by optimally reducing the size of a full CTM for a given species. The 10-fold cross-validated model deviance for the subtrees was plotted against subtree size as a means of selecting an appropriate tree size. In contrast to liberal-fitting logistic regression, liberal-fitting CTMs had poor predictive performance. Species geographical range and species prevalence within the study area were used to categorise the tree species into different distributional forms. These were then used to compare the effect of plant species rarity on the predictive performance of logistic regression models, GAMs and CTMs. The distributional forms included restricted and rare (RR) species (Eucalyptus paliformis and Eucalyptus kybeanensis), restricted and common (RC) species (Eucalyptus delegatensis, Eucryphia moorei and Eucalyptus fraxinoides), widespread and rare (WR) species (Eucalyptus data) and widespread and common (WC) species (Eucalyptus sieberi, Eucalyptus pauciflora and Eucalyptus fastigata). There were significant differences (p = 0.076) in predictive performance among the distributional forms for logistic regression and GAM. The predictive performance for the WR distributional form was significantly lower than the performance for the other plant species distributional forms. The predictive performance for the RC and RR distributional forms was significantly greater than the performance for the WC distributional form. The trend in model predictive performance among plant species distributional forms was similar for CTMs, except that the CTMs had poor predictive performance for the RR distributional form.
This study shows the importance of data restriction to model predictive performance with major gradient data restriction being recommended for consistently high performance. Given the appropriate model selection strategy, logistic regression, GAM and CTM have similar predictive performance. Logistic regression requires a high significance liberal-fitting strategy to both maximise its predictive performance and to select a relatively small model that could be useful for framing future ecological hypotheses about the distribution of individual plant species. The results for the modelling of plant species for conservation purposes were encouraging since logistic regression and GAM performed well for the restricted and rare species, which are usually of greater conservation concern.
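Predictive performance throughout this abstract is the area under a ROC plot, which for presence/absence data equals the probability that a randomly chosen presence site receives a higher model score than a randomly chosen absence site. A minimal pairwise-ranking sketch:

```python
def roc_auc(labels, scores):
    """AUC as the normalized count of correctly ordered (presence, absence)
    score pairs; tied scores count half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

Because AUC depends only on the ranking of scores, it is comparable across logistic regression, GAMs, and CTMs even though their fitted outputs are on different scales, which is what makes it a fair yardstick for the model comparisons above.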
107

Turkey-adjusted Next Generation Attenuation Models

Kargioglu, Bahadir 01 September 2012 (has links) (PDF)
The objective of this study is to evaluate the regional differences between the worldwide NGA-W1 ground motion models and the available Turkish strong ground motion dataset and to make the required adjustments to the NGA-W1 models. A strong motion dataset with parameters consistent with the NGA ground motion models is developed by including strong motion data from Turkey. The average horizontal component of ground motion is computed for response spectral values at all available periods using the GMRotI50 definition, consistent with the NGA-W1 models. A random-effects regression with a constant term only is used to evaluate the systematic differences in the average level of shaking. Plots of residuals are used to evaluate the differences in magnitude, distance, and site amplification scaling between the Turkish dataset and the NGA-W1 models. Model residuals indicated that ground motions are significantly overestimated by all five NGA-W1 models, especially for small-to-moderate magnitude earthquakes. Plots of model residuals against distance measures suggest that the NGA-W1 models slightly underestimate ground motions at rupture distances in the 100-200 km range. Models including aftershocks over-predict ground motions at stiff soil/engineering rock sites. The misfit between the actual data and the model predictions is corrected with adjustment functions for each scaling term. The Turkey-Adjusted NGA-W1 models proposed in this study are compatible with Turkish strong ground motion characteristics and preserve the well-constrained features of the global models. Therefore, these models are suitable candidates for ground motion characterization and PSHA studies conducted in Turkey.
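The "random-effects regression with a constant term only" reduces, in sketch form, to estimating one overall offset between observed and predicted log spectral values while grouping residuals by earthquake so that well-recorded events do not dominate. A simplified two-stage approximation (averaging within events, then across events; this is not the exact mixed-effects algorithm used in NGA regressions, and the record values are invented):

```python
import math
from collections import defaultdict

def average_model_bias(records):
    """records: (event_id, observed, predicted) tuples with positive spectral
    values. Returns the mean log-residual, averaged first within each event
    and then across events."""
    by_event = defaultdict(list)
    for event, obs, pred in records:
        by_event[event].append(math.log(obs) - math.log(pred))
    event_means = [sum(r) / len(r) for r in by_event.values()]
    return sum(event_means) / len(event_means)

# Two hypothetical events whose motions are half the model's prediction:
recs = [("eq1", 0.10, 0.20), ("eq1", 0.05, 0.10), ("eq2", 0.02, 0.04)]
print(round(average_model_bias(recs), 3))  # ln(0.5) ≈ -0.693: the model overpredicts
```

A negative constant, as here, is the signature of the overestimation reported above; the adjustment functions then remove that bias term by term.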
108

Probabilistic Seismic Hazard Assessment For Earthquake Induced Landslides

Balal, Onur 01 January 2013 (has links) (PDF)
Earthquake-induced slope instability is one of the major sources of earthquake hazard in near-fault regions. Simplified tools, such as Newmark's sliding block (NSB) analysis, are widely used to represent the stability of a slope under earthquake shaking. The outcome of this analogy is the slope displacement, where larger displacement values indicate higher seismic slope instability risk. Recent studies in the literature propose empirical models between the slope displacement and single or multiple ground motion intensity measures such as peak ground acceleration or Arias intensity. These correlations are based on the analysis of large datasets from a global ground motion recording database (the PEER NGA-W1 database). Ground motions from earthquakes in Turkey are poorly represented in the NGA-W1 database, since corrected and processed data from Turkey were not available until recently. The objective of this study is to evaluate the compatibility of available NSB displacement prediction models for Probabilistic Seismic Hazard Assessment (PSHA) applications in Turkey using a comprehensive dataset of ground motions recorded during earthquakes in Turkey. The application of the selected NSB displacement prediction model in a vector-valued PSHA framework is then demonstrated, with explanations of the seismic source characterization, ground motion prediction models, and ground motion intensity measure correlation coefficients. The results of the study are presented as hazard curves, and a comparison is made with a case history in the Asarsuyu region, where seismically induced landslides (the Bakacak landslides) took place during the 1999 Düzce earthquake.
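Newmark's sliding block analogy integrates the ground acceleration in excess of the slope's yield acceleration: the block slides while its relative velocity is positive, and the accumulated relative displacement is the NSB displacement. A minimal one-way rigid-block sketch with a rectangular acceleration pulse (the yield acceleration and pulse are illustrative, not a recorded motion):

```python
def newmark_displacement(accel, dt, a_yield):
    """Permanent downslope displacement of a rigid block (one-way sliding).
    accel: ground acceleration time series in m/s^2; dt in s; result in m."""
    v = 0.0  # relative sliding velocity (m/s)
    d = 0.0  # accumulated permanent displacement (m)
    for a in accel:
        if v > 0.0 or a > a_yield:
            v += (a - a_yield) * dt  # block decelerates once a drops below yield
            v = max(v, 0.0)          # sliding stops when relative velocity reaches zero
        d += v * dt
    return d

dt = 0.001
# 2 m/s^2 pulse for 1 s, then 2 s of quiet; yield acceleration 1 m/s^2.
accel = [2.0] * 1000 + [0.0] * 2000
print(round(newmark_displacement(accel, dt, 1.0), 2))  # 1.0 m
```

The empirical models the abstract refers to regress displacements of this kind, computed for large record sets, against intensity measures such as PGA or Arias intensity.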
109

Bankroto prognozavimo modelių pritaikomumas skirtingo mokumo ir pelningumo įmonėms / Adaptation of bankruptcy prediction models for different solvency and profitability firms

Budrikienė, Rasa 02 July 2012 (has links)
Bakalauro baigiamajame darbe pritaikyti įvairūs bankroto prognozavimo modeliai bei apskaičiuoti dažniausiai naudojami mokumo ir pelningumo rodikliai, neįeinantys į modelius, tačiau turintys didelę įtaką bankroto prognozėms. / Various bankruptcy prediction models are applied in this bachelor thesis, and the most commonly used solvency and profitability ratios are calculated; these ratios are not part of the models but have a strong impact on bankruptcy prediction.
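The abstract does not name the specific models applied; the best-known example of this family is Altman's Z-score, sketched here with its original 1968 coefficients for public manufacturing firms (the input ratios below are invented for illustration):

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Original Altman Z-score.
    Inputs: working capital / total assets, retained earnings / total assets,
    EBIT / total assets, market value of equity / total liabilities,
    sales / total assets.
    Z < 1.81: distress zone; Z > 2.99: safe zone; between: grey zone."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

z = altman_z(0.2, 0.3, 0.15, 1.1, 0.9)
print(round(z, 3))  # 2.715 → grey zone
```

Solvency and profitability ratios enter the score directly through its inputs, which is why the thesis computes them alongside the models even when they are not model variables themselves.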
110

Development of bank acquisition targets prediction models

Pasiouras, Fotios January 2005 (has links)
This thesis develops a range of prediction models for the purpose of predicting the acquisition of commercial banks in the European Union using publicly available data. Over the last thirty years, there have been approximately 30 studies that have attempted to identify potential acquisition targets, all of them focusing on non-bank sectors. We consider that prediction models developed specifically for the banking industry are essential due to the unusual structure of banks' financial statements, differences in the environment in which banks operate and other specific characteristics of banks that in general distinguish them from non-financial firms. We focus specifically on the EU banking sector, where M&A activity has been considerable in recent years, yet academic research relating to the EU has been rather limited compared to the case of the US. The methodology for developing prediction models involved identifying past cases of acquired banks and combining these with non-acquired banks in order to evaluate the prediction accuracy of various quantitative classification techniques. In this study, we construct a base sample of commercial banks covering 15 EU countries, and financial variables measuring capital strength, profit and cost efficiency, liquidity, growth, size and market power, with data in both raw and country-adjusted (i.e. raw variables divided by the average of the banking sector for the corresponding country) form. In order to allow for a proper comparative evaluation of classification methods, we select common subsets of the base sample and variables with high discriminatory power, dividing the sample period (1998-2002) into a training sub-sample for model development (1998-2000), and a holdout sub-sample for model evaluation (2001-2002).
Although the results tend to support the findings of studies on non-financial firms, highlighting the difficulties in predicting acquisition targets, the prediction models we develop show classification accuracies generally higher than chance assignment based on prior probabilities. We also consider the use of equal and unequal matched holdout samples for evaluation, and find that overall classification accuracy tends to increase in the unequal matched samples, implying that equal matched samples do not necessarily overstate the prediction ability of models. The main goal of this study has been to compare and evaluate a variety of classification methods including statistical, econometric, machine learning and operational research techniques, as well as integrated techniques combining the predictions of individual classification methods. We found that some methods achieved very high accuracies in classifying non-acquired banks, but at the cost of relatively poor accuracy in classifying acquired banks. This suggests a trade-off in achieving high classification accuracy, although some methods (e.g. Discriminant) performed reasonably well in terms of achieving balanced overall classification accuracies above chance predictions. Integrated prediction models offer the advantage of counterbalancing relatively poor performance of some classification methods with good performance of others, but in doing so could not outperform all individual classification methods considered. In general, we found that which method performed best depended largely on the group classification accuracy considered, as well as to some extent on the choice of the discriminatory variables. Concerning the use of raw or country-adjusted data, we found no clear effect on the prediction ability of the classification methods.
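The key comparison in this abstract is each method's group-wise accuracy against chance assignment based on prior probabilities. A sketch of that evaluation, using the proportional chance criterion as the baseline (the prediction vectors are invented, not the thesis's bank data):

```python
def group_accuracies(y_true, y_pred):
    """Per-class accuracy plus overall accuracy for a binary classifier
    (1 = acquired bank, 0 = non-acquired bank)."""
    acc = {}
    for cls in (0, 1):
        idx = [i for i, y in enumerate(y_true) if y == cls]
        acc[cls] = sum(y_pred[i] == cls for i in idx) / len(idx)
    overall = sum(p == t for p, t in zip(y_pred, y_true)) / len(y_true)
    return acc, overall

def proportional_chance(y_true):
    """Expected accuracy of random assignment matching the class priors."""
    p1 = sum(y_true) / len(y_true)
    return p1 ** 2 + (1 - p1) ** 2

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # unequal (1:2) holdout sample
y_pred = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0]
acc, overall = group_accuracies(y_true, y_pred)
print(acc[1], acc[0], overall, round(proportional_chance(y_true), 3))
```

Reporting acc[1] and acc[0] separately is what exposes the trade-off described above: a method can score a high overall accuracy on an unequal sample while classifying almost no acquired banks correctly.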
