1

The Pricing of CDOs Based on an Incomplete Information Credit Model

Lien, Wei-chih 21 June 2006 (has links)
Credit risk and market risk have been explored intensively, and reliable models of both have been developed progressively. This study seeks a method for pricing CDOs (Collateralized Debt Obligations) based on an incomplete information credit model. Among the various approaches to CDO valuation, the most widely accepted is the copula approach, which is considered well suited to describing default correlation. Combined with Monte Carlo simulation, it can price CDOs effectively.
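As a concrete illustration of the approach the abstract describes, the following is a minimal sketch of one-factor Gaussian copula default simulation with Monte Carlo pricing of a single tranche's expected loss. The pool size, correlation, default probability, recovery rate, and attachment points are illustrative assumptions rather than values from the thesis, and a full pricer would also discount premium and protection legs over time.

```python
import numpy as np
from scipy.stats import norm

def cdo_tranche_el(n_names=125, rho=0.3, p_default=0.05, recovery=0.4,
                   attach=0.03, detach=0.07, n_sims=50_000, seed=0):
    """Monte Carlo expected loss on the [attach, detach] tranche (illustrative)."""
    rng = np.random.default_rng(seed)
    c = norm.ppf(p_default)                       # latent default threshold
    m = rng.standard_normal((n_sims, 1))          # common (systematic) factor
    e = rng.standard_normal((n_sims, n_names))    # idiosyncratic shocks
    x = np.sqrt(rho) * m + np.sqrt(1 - rho) * e   # correlated latent variables
    frac_defaulted = (x < c).mean(axis=1)         # per-path default fraction
    pool_loss = (1 - recovery) * frac_defaulted   # pool loss, fraction of notional
    tranche = np.clip(pool_loss - attach, 0.0, detach - attach) / (detach - attach)
    return tranche.mean()

print(cdo_tranche_el())   # expected fractional loss on a 3%-7% tranche
```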
2

Bank performance and credit risk management

Takang, Felix Achou, Ntui, Claudine Tenguh January 2008 (has links)
Banking, as a topic, practice, business, or profession, is almost as old as the very existence of man, but in the literature it can be traced back to the days of the Renaissance (the Florentine bankers). It has grown from very primitive Stone-Age banking, through the Victorian age, to technology-driven Google-age banking, encompassing automatic teller machines (ATMs), credit and debit cards, and correspondent and internet banking. Credit risk has always been an area of concern not only to bankers but to everyone in the business world, because the risk of a trading partner not fulfilling his obligations in full on the due date can seriously jeopardize the affairs of the other partner. The aim of this study is to form a clearer picture of how banks manage their credit risk. In this light, the first section gives the background to the study, and the second part is a detailed literature review on banking and on credit risk management tools and assessment models. The third part tests the hypothesis using a simple regression model. This leads us to conclude, in the last section, that banks with good credit risk management policies have a lower loan default rate and relatively higher interest income.
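The simple regression the abstract mentions can be sketched as follows. The variable names and simulated data below are hypothetical and only illustrate the shape of such a test, not the thesis's actual data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: a credit-risk-management quality score per bank and that
# bank's loan default rate, simulated with the relationship the study reports.
rng = np.random.default_rng(1)
n = 40
crm_score = rng.uniform(0, 10, n)
default_rate = 8 - 0.5 * crm_score + rng.normal(0, 1, n)
banks = pd.DataFrame({"crm_score": crm_score, "default_rate": default_rate})

fit = smf.ols("default_rate ~ crm_score", data=banks).fit()
print(fit.summary())  # a significantly negative slope supports the hypothesis
```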
3

Partial Credit Models for Scale Construction in Hedonic Information Systems

Mair, Patrick, Treiblmaier, Horst January 2008 (has links) (PDF)
Information Systems (IS) research frequently uses survey data to measure the interplay between technological systems and human beings. Researchers have developed sophisticated procedures to build and validate multi-item scales that measure real-world phenomena (latent constructs). Most studies use so-called classical test theory (CTT), which suffers from several shortcomings. We first compare CTT to Item Response Theory (IRT) and subsequently apply a Rasch model approach to measure hedonic aspects of websites. The results not only show which attributes are best suited for scaling hedonic information systems, but also introduce IRT as a viable substitute that overcomes several shortcomings of CTT. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
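For readers unfamiliar with the Rasch family, the following is a minimal sketch of the partial credit model's category probabilities in Masters' divide-by-total formulation; the threshold values are illustrative.

```python
import numpy as np

def pcm_probs(theta, thresholds):
    """P(X = k | theta) for k = 0..m, given m item step thresholds."""
    # Category logits are cumulative sums of (theta - threshold);
    # category 0 has logit 0 by convention (the empty sum).
    logits = np.concatenate(([0.0], np.cumsum(theta - np.asarray(thresholds))))
    expl = np.exp(logits - logits.max())   # numerically stabilised softmax
    return expl / expl.sum()

# Example: a four-category item (three thresholds) for a person at theta = 0.5
print(pcm_probs(0.5, thresholds=[-1.0, 0.0, 1.2]))
```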
4

How Does Listed Companies' Non-systematic Risk Influence Credit Risk?

Wang, Hsin-ping 21 June 2012 (has links)
Seeking maximum profit, investors began paying closer attention to risk management after the financial crisis of 2008, and risk management and prediction have become more and more complex. This paper focuses on two risks: non-systematic risk and credit risk. Since the financial crisis, countries have paid more attention to credit risk, and now, because of the European debt crisis, investors and governments are also concerned with the credit ratings published by credit rating agencies. Besides credit risk, a firm's specific risk (i.e., non-systematic risk) is also more important than before: recent empirical studies find that stocks are affected not only by systematic risk but also by non-systematic risk. Following Kuo and Lu (2005), this thesis uses two models, Moody's KMV credit model and a Markov regime-switching model, to estimate credit risk and non-systematic risk. The sample period runs from January 2002 to November 2010, and the test sample consists of the constituent stocks of the Taiwan 50. The purpose of this paper is to examine the relationship between credit risk and non-systematic risk. The empirical results show a positive relationship between non-systematic risk and credit risk, and both risks differ significantly across industries: the plastics and communications network industries show lower credit risk, whereas the electronics and financial industries show higher credit risk. The study also finds that even within the same industry, each company faces a different risk level.
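A sketch of the Merton-style calculation underlying Moody's KMV model may help here: the unobserved asset value and asset volatility are backed out from equity data, and the distance to default follows. The inputs below are illustrative, and the final mapping to a default probability uses the normal CDF for illustration rather than KMV's proprietary EDF mapping.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import fsolve

def kmv_distance_to_default(E, sigma_E, D, r=0.02, T=1.0):
    """Solve Merton's equations for (V, sigma_V), then compute DD."""
    def equations(x):
        V, sigma_V = x
        d1 = (np.log(V / D) + (r + 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
        d2 = d1 - sigma_V * np.sqrt(T)
        eq_value = V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2) - E
        eq_vol = (V / E) * norm.cdf(d1) * sigma_V - sigma_E
        return [eq_value, eq_vol]

    V, sigma_V = fsolve(equations, x0=[E + D, sigma_E * E / (E + D)])
    dd = (np.log(V / D) + (r - 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
    return dd, norm.cdf(-dd)   # distance to default and a theoretical PD

print(kmv_distance_to_default(E=4.0, sigma_E=0.6, D=6.0))
```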
5

An investigation of the optimal test design for multi-stage test using the generalized partial credit model

Chen, Ling-Yin 27 January 2011 (has links)
Although the design of multistage testing (MST) has received increasing attention, previous studies mostly focused on comparing the psychometric properties of MST with those of CAT and paper-and-pencil (P&P) tests. Few studies have systematically examined the number of items in the routing test, the number of subtests in a stage, or the number of stages in a test design needed to achieve accurate measurement in MST. Given that no study has identified an ideal MST design using polytomously-scored items, the current study conducted a simulation to investigate the optimal design for MST using the generalized partial credit model (GPCM). Eight test designs were examined for ability estimation across two routing test lengths (short and long) and two total test lengths (short and long). The item pool and generated item responses were based on items calibrated from a national test consisting of 273 partial credit items. Across all test designs, the maximum information routing method was employed and maximum likelihood estimation was used for ability estimation. Ten samples of 1,000 simulees were used to assess each test design, whose performance was evaluated in terms of the precision of ability estimates, item exposure rate, item pool utilization, and item overlap. The study found that all test designs produced very similar results. Although there were some variations among the eight test structures in the ability estimates, their overall measurement precision did not substantially deviate from one another with regard to total test length and routing test length. However, the results suggest that routing test length does have a significant effect on the number of non-convergent cases in MST tests: short routing tests tended to produce more non-convergent cases, and structures with fewer stages yielded more such cases than structures with more stages. Overall, unlike previous findings, the results of the present study indicate that the MST test structure is less likely to be a factor affecting ability estimation when polytomously-scored items are used, based on the GPCM.
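To make the routing rule concrete, here is a minimal sketch of GPCM category probabilities and Fisher information, the quantity that maximum-information routing maximizes; the slope and step parameters are illustrative. For the GPCM, the item information reduces to the squared slope times the conditional variance of the item score.

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """P(X = k | theta), k = 0..m, for slope a and step parameters b (length m)."""
    logits = np.concatenate(([0.0], np.cumsum(a * (theta - np.asarray(b)))))
    expl = np.exp(logits - logits.max())
    return expl / expl.sum()

def gpcm_info(theta, a, b):
    """Fisher information; for the GPCM this equals a^2 * Var(X | theta)."""
    p = gpcm_probs(theta, a, b)
    k = np.arange(len(p))
    return a**2 * (np.sum(k**2 * p) - np.sum(k * p)**2)

# Route the examinee to whichever module is more informative at theta_hat
theta_hat = 0.3
print(gpcm_info(theta_hat, a=1.2, b=[-0.8, 0.4]),
      gpcm_info(theta_hat, a=0.7, b=[-1.5, 1.0]))
```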
6

Construction and Validation of an Enterprise Credit Model Using Fuzzy Dependent Variables: The Plastics and Food Industries as Examples

鐘冠智 Unknown Date (has links)
Listed companies in Taiwan have announced restructurings, bounced checks, full-delivery trading status, or delistings without warning, causing losses to the investing public; an enterprise credit model is therefore needed to monitor their operating condition. This study finds that financial ratios deteriorate gradually from five years before a corporate crisis, indicating that deterioration precedes the crisis, and that financial ratios remain depressed for several years afterward. The study therefore treats corporate crisis as a variable that increases or decreases year by year and transforms it using fuzzy numbers, adds macroeconomic variables from before and after the crisis, and combines multivariate statistical analysis with fuzzy theory from data mining to build the model, using an exhaustive search to find the enterprise credit model with the best explanatory power. The results show that the dependent variable transformed by fuzzy numbers is highly significant.
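To make the fuzzy-number transformation concrete, the following illustrative sketch replaces a 0/1 crisis dummy with a membership value that ramps up over the five years before a crisis and decays afterward. The membership shape and window lengths are assumptions for illustration, not the thesis's actual specification.

```python
import numpy as np

def fuzzy_crisis_dv(years_to_crisis, rise=5, decay=2):
    """Map 'years until (+) / since (-) crisis' to a [0, 1] membership value."""
    t = np.asarray(years_to_crisis, dtype=float)
    before = np.clip(1 - t / rise, 0, 1)   # rises linearly over `rise` years before
    after = np.clip(1 + t / decay, 0, 1)   # fades out over `decay` years after
    return np.where(t >= 0, before, after)

# Membership from five years before a crisis through two years after it
print(fuzzy_crisis_dv([5, 4, 3, 2, 1, 0, -1, -2]))
```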
7

A comparison of item selection procedures using different ability estimation methods in computerized adaptive testing based on the generalized partial credit model

Ho, Tsung-Han 17 September 2010 (has links)
Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT can not only shorten test length and administration time but also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most widely used item selection procedure. The major challenge with MI, however, is the attenuation paradox, which arises because the MI algorithm may select items that are not well targeted at an examinee's true ability level, producing more error in subsequent ability estimates. The solution is to find an alternative item selection procedure or an appropriate ability estimation method. CAT studies have not investigated the association between these two components of a CAT system based on polytomous IRT models. The present study compared the performance of four item selection procedures (MI, MPWI, MEI, and MEPV) across four ability estimation methods (MLE, WLE, EAP-N, and EAP-PS) under mixed-format CAT based on the generalized partial credit model (GPCM). The test-unit pool and generated responses were based on test-units calibrated from an operational national test that included both independent dichotomous items and testlets. Several test conditions were manipulated: unconstrained CAT as well as constrained CAT, in which the CCAT was used for content balancing and the progressive-restricted procedure with a maximum exposure rate of 0.19 (PR19) served as the exposure control. The performance of each CAT condition was evaluated in terms of measurement precision, exposure control properties, and the extent of selected-test-unit overlap. Results suggested that all item selection procedures, regardless of ability estimation method, performed equally well on all evaluation indices across the two CAT conditions. The MEPV procedure, however, was favorable in terms of a slightly lower maximum exposure rate, better pool utilization, and reduced test and selected-test-unit overlap relative to the other three item selection procedures when both the CCAT and PR19 procedures were implemented. It is therefore not necessary to implement the sophisticated and computing-intensive Bayesian item selection procedures across ability estimation methods under GPCM-based CAT. As for the ability estimation methods, MLE, WLE, and the two EAP methods, regardless of item selection procedure, did not produce practical differences on any evaluation index across the two CAT conditions. The WLE method, however, generated significantly fewer non-convergent cases than the MLE method, so WLE rather than MLE should be considered, because non-convergence is less of an issue. The EAP estimation method, on the other hand, should be used with caution unless an appropriate prior θ distribution is specified.
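As one example of the estimators compared, here is a self-contained sketch of EAP ability estimation by numerical quadrature under the GPCM; the standard normal prior, quadrature grid, and item parameters are illustrative assumptions.

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """GPCM category probabilities for slope a and step parameters b."""
    logits = np.concatenate(([0.0], np.cumsum(a * (theta - np.asarray(b)))))
    expl = np.exp(logits - logits.max())
    return expl / expl.sum()

def eap_estimate(responses, items, grid=np.linspace(-4, 4, 81)):
    """responses: observed categories; items: list of (a, b) parameter pairs."""
    prior = np.exp(-0.5 * grid**2)    # standard normal prior, unnormalised
    like = np.ones_like(grid)
    for x, (a, b) in zip(responses, items):
        like *= np.array([gpcm_probs(t, a, b)[x] for t in grid])
    post = prior * like
    return np.sum(grid * post) / np.sum(post)   # posterior mean of theta

items = [(1.2, [-0.8, 0.4]), (0.9, [-1.0, 0.0, 1.1]), (1.5, [0.2])]
print(eap_estimate(responses=[2, 1, 0], items=items))
```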
8

Bewertungskompetenz im Physikunterricht: Entwicklung eines Messinstruments zum Themenfeld Energiegewinnung, -speicherung und -nutzung / Decision-making competencies and Physics education: Development of a questionnaire in the context of generation, storage and use of electric energy

Sakschewski, Mark 30 October 2013 (has links)
This study discusses the development of a test instrument for measuring decision-making competence, in the sense of the sub-competence Evaluating, Deciding and Reflecting (BER) within the Göttingen model of decision-making competence in the context of sustainable development (Bögeholz 2011), for physics classes at the secondary level. The selected task contexts describe the generation, storage, and use of electric energy. They thereby connect to the current societal discussion of renewable energies and examine the decision-making ability and evaluation competence of today's students in this regard. The usability of the test instrument developed in this study was first checked in two pilot studies before the main data collection was carried out as a cross-sectional study in grades 6, 8, 10, and 12 (N = 850 students at Gymnasien). Following the approach of Eggert (2008) and Eggert and Bögeholz (2006, 2010), it is designed as a paper-and-pencil test and contains two decision-making tasks and one reflection task. The empirical data were first coded using a scoring guide developed for this purpose and then analyzed from the perspectives of both classical and probabilistic test theory. The developed test instrument proved itself with respect to reliability and validity. Item fit parameters show that the empirical data are well represented by a one-dimensional Rasch partial credit model. Among other findings, relations between BER and students' school age were demonstrated. BER correlates only weakly with various school grades (including German, mathematics, politics, and physics in grade 10), and the BER test result is scarcely influenced by reading competence. External link to the test instrument: http://dx.doi.org/10.7477/39:41:17
9

A GLM framework for item response theory models. Reissue of 1994 Habilitation thesis.

Hatzinger, Reinhold January 2008 (has links) (PDF)
The aim of the monograph is to contribute towards bridging the gap between methodological developments that have evolved in the social sciences, in particular in psychometric research, and methods of statistical modelling in a more general framework. The first part surveys certain special psychometric models (often referred to as the Rasch family of models) that share common properties: separation of parameters describing qualities of the subject under investigation from parameters related to properties of the situation under which the response of a subject is observed. Using conditional maximum likelihood estimation, both types of parameters may be estimated independently of each other. In particular, the Rasch model, the rating scale model, the partial credit model, hybrid types, and linear extensions thereof are treated. The second part reviews basic ideas of generalized linear models (GLMs), an excellent framework for unifying different approaches and providing a natural, technical background for model formulation, estimation, and testing. This is followed by a short introduction to the software package GLIM, chosen to illustrate the formulation of psychometric models in the GLM framework. The third part is the main part of this monograph and shows the application of generalized linear models to psychometric approaches. It gives a unified treatment of Rasch family models in the context of log-linear models and contains some new material on log-linear longitudinal modelling. The last part of the monograph is devoted to showing the usefulness of the latent variable approach in a variety of applications, such as panel, cross-over, and therapy evaluation studies, where standard statistical analysis does not necessarily lead to satisfactory results. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
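The monograph's central idea, formulating Rasch-type models inside the GLM framework, can be sketched as a logistic regression with person and item factors. This sketch uses joint (unconditional) maximum likelihood on simulated data, whereas the monograph emphasizes conditional ML and log-linear formulations, and it substitutes statsmodels for GLIM purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_persons, n_items = 200, 5
theta = rng.normal(0, 1, n_persons)    # person abilities
beta = np.linspace(-1, 1, n_items)     # item difficulties

# Long-format 0/1 responses generated from the Rasch model
rows = [(p, i, int(rng.random() < 1 / (1 + np.exp(-(theta[p] - beta[i])))))
        for p in range(n_persons) for i in range(n_items)]
long = pd.DataFrame(rows, columns=["person", "item", "correct"])

# Drop zero and perfect person scores: they have no finite ML estimates
totals = long.groupby("person")["correct"].transform("sum")
long = long[(totals > 0) & (totals < n_items)]

fit = smf.glm("correct ~ C(person) + C(item)", data=long,
              family=sm.families.Binomial()).fit()
print(fit.params.filter(like="item"))  # item contrasts track difficulty differences
```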
