  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Compactions in Apache Cassandra : Performance Analysis of Compaction Strategies in Apache Cassandra

Kona, Srinand January 2016
Context: Global communication systems are growing tremendously, generating a wide range of data. Telecom operators, which produce large amounts of data, need to manage these data efficiently. As database management technology has advanced, NoSQL databases have seen remarkable growth in the 21st century. Apache Cassandra is an advanced NoSQL database system, popular for handling semi-structured and unstructured Big Data. Cassandra manages its on-disk data through different compaction strategies. This research analyzes the performance of different compaction strategies in different use cases under the default Cassandra stress model. The analysis can suggest better usage of compaction strategies in Cassandra for a write-heavy workload. Objectives: In this study, we identify appropriate performance metrics for evaluating compaction strategies, and we provide a detailed analysis of Size Tiered Compaction Strategy, Date Tiered Compaction Strategy, and Leveled Compaction Strategy for a write-heavy (90/10) workload, using the default cassandra-stress tool. Methods: A detailed literature study was conducted on NoSQL databases and on how the different compaction strategies in Apache Cassandra work. The performance metrics were selected based on this literature study and on the opinions of the supervisors and Ericsson's Apache Cassandra team. Two tools were developed to collect the considered metrics: the first, written in Jython, collects the Cassandra metrics, and the second, written in Python, collects the operating-system metrics. Graphs were generated in Microsoft Excel from the values produced by the scripts.
Results: Date Tiered Compaction Strategy and Size Tiered Compaction Strategy showed broadly similar behaviour during the stress tests. Leveled Compaction Strategy showed some remarkable results that affected system performance compared to the other two strategies. Date Tiered Compaction Strategy does not perform well for the default Cassandra stress model. Size Tiered Compaction Strategy can be preferred for the default stress model, but is less suitable for big data volumes. Conclusions: From a detailed analysis and logical comparison of the metrics, we conclude that Leveled Compaction Strategy performs best for a write-heavy (90/10) workload under the default Cassandra stress model, compared to Size Tiered and Date Tiered Compaction Strategies.
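To make the compared strategies concrete, here is a minimal illustrative sketch (not Cassandra's actual implementation) of the core idea behind size-tiered compaction: SSTables of similar size are grouped into buckets, and a bucket becomes a compaction candidate once it holds at least a minimum number of tables. The function name, the bucketing factors and the threshold are assumptions chosen for illustration.

```python
# Illustrative sketch of size-tiered compaction candidate selection.
# bucket_low/bucket_high bound how far a table's size may stray from the
# bucket's running average; min_threshold is the trigger count.

def pick_bucket(sstable_sizes, bucket_low=0.5, bucket_high=1.5, min_threshold=4):
    """Return the sizes in one bucket eligible for compaction, or None."""
    buckets = []  # each bucket is a list of similar sizes
    for size in sorted(sstable_sizes):
        for bucket in buckets:
            avg = sum(bucket) / len(bucket)
            if bucket_low * avg <= size <= bucket_high * avg:
                bucket.append(size)
                break
        else:
            buckets.append([size])  # start a new bucket
    eligible = [b for b in buckets if len(b) >= min_threshold]
    # compact the fullest bucket first
    return max(eligible, key=len) if eligible else None

# Four ~100 MB tables form a bucket and trigger compaction;
# the lone 5 GB table is left alone.
print(pick_bucket([100, 110, 95, 105, 5000]))  # -> [95, 100, 105, 110]
```

Leveled compaction, by contrast, organises SSTables into non-overlapping levels of bounded size, which is why its I/O profile differed so markedly in the stress tests above.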
212

Predicting financial distress using corporate efficiency and corporate governance measures

Zhiyong, Li January 2014
Credit models are essential to control credit risk, and accurately predicting bankruptcy and financial distress has become even more necessary after the recent global financial crisis. Although accounting and financial information have been the main variables in corporate credit models for decades, academics continue searching for new attributes to model the probability of default. This thesis investigates the use of corporate efficiency and corporate governance measures in standard statistical credit models, using cross-sectional and hazard models. Relative efficiency, as calculated by Data Envelopment Analysis (DEA), can be used in prediction, but most previous literature using such variables has failed to follow the assumptions of Variable Returns to Scale and sample homogeneity, so efficiency may not have been correctly measured. This research builds industry-specific models to incorporate DEA efficiency scores for different industries, and it is the first to decompose overall Technical Efficiency into Pure Technical Efficiency and Scale Efficiency in the context of modelling financial distress. It is found that efficiency measures improve predictive accuracy and that Scale Efficiency is the most important of these efficiency measures. Furthermore, as no previous work has attempted a panel analysis of DEA scores to predict distress, this research extends the cross-sectional analysis to a survival analysis using Malmquist DEA and discrete hazard models. Results show that dynamic efficiency scores calculated with reference to the global efficiency frontier have the best discriminant power to classify distressed and non-distressed companies. Four groups of corporate governance measures, namely board composition, ownership structure, management compensation, and director and manager characteristics, are incorporated in the hazard models to predict financial distress.
It is found that state control, institutional ownership, salaries to independent directors, the Chair's age, the CEO's education, the work location of independent directors and the concurrent position of the CEO have significant associations with the risk of financial distress. The best predictive accuracy comes from the model combining governance measures, financial ratios and macroeconomic variables. Policy implications for the regulatory commission are discussed.
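The decomposition at the heart of the thesis can be stated compactly: overall Technical Efficiency from a constant-returns-to-scale (CRS) DEA model equals Pure Technical Efficiency from the variable-returns (VRS) model times Scale Efficiency. The sketch below only performs that final decomposition step on given scores (the scores themselves would come from solving the DEA linear programs, which is omitted here); the function name is an assumption.

```python
# Scale Efficiency (SE) = TE_CRS / TE_VRS. In DEA, TE_CRS <= TE_VRS always
# holds, so SE lies in (0, 1]; SE == 1 means the firm operates at optimal scale.

def scale_efficiency(te_crs, te_vrs):
    """Decompose overall TE into pure TE and scale efficiency; returns SE."""
    if not (0 < te_crs <= te_vrs <= 1):
        raise ValueError("expect 0 < TE_CRS <= TE_VRS <= 1")
    return te_crs / te_vrs

# A firm fully efficient under VRS but not under CRS is scale-inefficient:
print(scale_efficiency(0.8, 1.0))  # -> 0.8
print(scale_efficiency(0.9, 0.9))  # -> 1.0, i.e. optimal scale
```

A low SE thus flags a firm whose inefficiency stems from operating at the wrong scale rather than from poor management, which is why the thesis treats it as a distinct distress predictor.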
213

Estimating Loss-Given-Default through Survival Analysis : A quantitative study of Nordea's default portfolio consisting of corporate customers

Hallström, Richard January 2016
In Sweden, all banks must report their regulatory capital to the market, and their models for calculating this capital must be approved by the financial authority, Finansinspektionen. The regulatory capital is the capital a bank has to hold as security for credit risk; it should serve as a buffer if the bank were to lose unexpected amounts of money in its lending business. Loss-Given-Default (LGD) is one of the main drivers of regulatory capital, and the minimum required capital is highly sensitive to the reported LGD. Workout LGD is based on the discounted future cash flows recovered from defaulted customers. The main issue with workout LGD is incomplete workouts, which create two problems for banks calculating workout LGD: a bank either has to wait for the workout period to end, which in some cases takes several years, or must exclude the incomplete workouts or make rough assumptions about them. In this study, ideas from survival analysis (SA) have been used to solve these problems. The most widely used SA model, the Cox proportional hazards model (Cox model), was applied to investigate the effect of covariates on the length of survival of a monetary unit. The considered covariates are Country of booking, Secured/Unsecured, Collateral code, Loan-To-Value, Industry code, Exposure-At-Default and Multi-collateral. The data sample was first split into an 80% training sample and a 20% test sample. The Cox model was fitted on the training sample and then validated on the test sample by interpreting Kaplan-Meier survival curves for risk groups created from the prognostic index (PI). The results show that the model correctly ranks the expected LGD for new customers but is not always able to distinguish between risk groups.
With the results presented in the study, Nordea can obtain an expected LGD for newly defaulted customers, given the customers' information on the covariates considered in this study. They can also get a clear picture of which factors drive a low or a high LGD, respectively.
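The Kaplan-Meier curves used for validation above can be sketched directly: at each observed event time, the survival estimate is multiplied by (1 - d/n), where d events occur among n units still at risk; censored observations leave the curve unchanged but shrink the risk set. The data below are invented, and in the thesis's setting "survival" of a monetary unit stands in for the recovery process, so this is only a generic illustration.

```python
# Minimal Kaplan-Meier product-limit estimator (invented toy data).
# times: observed durations; events: 1 = event observed, 0 = censored.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each event time."""
    s, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        n = sum(1 for ti in times if ti >= t)  # still at risk just before t
        if d:
            s *= (1 - d / n)
            curve.append((t, s))
    return curve

# Five units; the units with times 2 (second one) and 4 are censored.
print(kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0]))
```

Comparing such curves across PI-based risk groups is exactly the validation step described: well-separated curves mean the Cox model discriminates between the groups.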
214

Determinants of U.S. corporate credit spreads

Kume, Ortenca January 2012
This thesis deals with various issues regarding the determinants of US corporate credit spreads. These spreads are estimated as the difference between the yields to maturity of corporate bonds and of default-free instruments (Treasury bonds) of the same maturity. Corporate credit spreads are considered measures of default risk. However, the premium investors require for holding risky rather than risk-free bonds incorporates compensation not only for default risk but also for other factors related to corporate bonds, such as market liquidity or the tax differential between corporate and Treasury bonds. We first examine the relationship between bond ratings and credit spreads, given that bond rating changes are expected to carry informational value for debt investors. The findings indicate that bond ratings generally do carry informational value for corporate bond investors. The Granger-causal relationship is more evident for negative watch lists and during periods of uncertainty in financial markets. In line with previous studies, our results suggest that changes in credit spreads are significantly related to interest rate levels, systematic risk factors (the Fama-French factors) and equity returns.
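The spread measure defined above can be computed directly: subtract the Treasury yield at the matching maturity from the corporate bond's yield to maturity. The sketch below uses invented yields and assumes simple linear interpolation between Treasury curve nodes, which is one common convention but not necessarily the thesis's.

```python
# Credit spread = corporate YTM minus the (interpolated) Treasury yield
# at the same maturity. Curve nodes and yields below are invented.

def credit_spread(corporate_ytm, treasury_curve, maturity):
    """treasury_curve: {maturity_years: yield}; linear interpolation between nodes."""
    knots = sorted(treasury_curve)
    for lo, hi in zip(knots, knots[1:]):
        if lo <= maturity <= hi:
            w = (maturity - lo) / (hi - lo)
            risk_free = treasury_curve[lo] * (1 - w) + treasury_curve[hi] * w
            return corporate_ytm - risk_free
    raise ValueError("maturity outside the curve")

curve = {2: 0.010, 5: 0.015, 10: 0.020}
# 7-year Treasury yield interpolates to ~1.7%, so a 4.5% corporate YTM
# implies a spread of roughly 280 basis points.
print(credit_spread(0.045, curve, 7))
```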
215

Credit Spread Dynamics and Default Correlation

Nieh, Camille (聶怡婷) Unknown Date
In this paper, I empirically investigate the dynamics of credit spreads with a regime-switching analysis. The findings show evidence of two distinct regimes in both the volatility and the mean of credit spread changes. Moreover, I document (1) that the volatility of credit spreads corresponds positively to default correlation and (2) that the lower (higher) volatility regime corresponds to the boom (bust) state of the economy.
216

Pricing kth-to-Default Swaps: Copula Methods

賴偉聖 Unknown Date
Credit derivatives are instruments that transfer credit risk from one party to another. The most common credit derivative is the single-entity credit default swap (CDS). A basket default swap is similar to a single-entity CDS except that the underlying obligation is a basket of entities rather than a single reference asset. Copula methods play an important role in pricing a multi-name product, since the assets in the portfolio are not independent: the correlated default times must be modelled using copula functions. In this article, we develop a copula-based methodology for pricing kth-to-default swaps from market CDS quotes. To understand how changes in price drivers such as correlations and default intensities affect spreads, we also present a sensitivity analysis.
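The modelling step described, namely generating correlated default times through a copula, can be sketched with a simplified one-factor Gaussian copula Monte Carlo. All parameters below (intensities, correlation, horizon) are invented; a real pricer would bootstrap the intensities from market CDS quotes and discount premium and protection legs, which this sketch omits, estimating only the probability that the kth default in the basket occurs before the horizon.

```python
import math
import random

def phi(x):
    """Standard normal CDF via erf."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def kth_to_default_prob(intensities, rho, k, horizon, n_paths=20000, seed=7):
    """P(kth default among the basket names occurs before `horizon`),
    under a one-factor Gaussian copula with common correlation `rho`
    and exponential marginal default times with the given intensities."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        m = rng.gauss(0, 1)  # common market factor
        taus = []
        for lam in intensities:
            z = math.sqrt(rho) * m + math.sqrt(1 - rho) * rng.gauss(0, 1)
            u = min(phi(z), 1 - 1e-12)  # guard against log(0)
            taus.append(-math.log(1 - u) / lam)  # exponential default time
        taus.sort()
        if taus[k - 1] <= horizon:
            hits += 1
    return hits / n_paths

p1 = kth_to_default_prob([0.02, 0.03, 0.04], rho=0.3, k=1, horizon=5)
p2 = kth_to_default_prob([0.02, 0.03, 0.04], rho=0.3, k=2, horizon=5)
print(p1, p2)  # first-to-default is always at least as likely as second-to-default
```

Raising `rho` clusters defaults together, which raises the second-to-default probability while lowering the first-to-default probability, the sensitivity pattern the article analyses.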
217

A Corporate Credit Rating Model: The Case of the Construction Industry (企業信用評等模型-以營造業為例)

林孟寬 Unknown Date
The purpose of this study is to apply the data mining analysis process, using the tools provided by SPSS Clementine 11.0, to the construction of a corporate credit rating model for corporate exposures under the Internal Ratings-Based approach, in accordance with the new Basel Capital Accord and the guidelines of the Financial Supervisory Commission. The model variables comprise financial variables and macroeconomic variables. Comparing oversampling ratios and modelling methods, the model trained with a 1:2 ratio achieved better recall and good overall accuracy; logistic regression was selected as the final model. The credit rating system constructed in this study has 8 rating grades, with the probability of default increasing with the grade and grade 8 assigned to defaulted borrowers. In validation, borrowers are evenly distributed across the 8 grades, and the predicted default probability of each grade is close to the actual default rate. In summary, the model constructed in this study is stable and predictive and passes the requirements of the new Basel Capital Accord and the Financial Supervisory Commission, indicating that it can be applied in banks' credit-granting practice.
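The final step of such a rating system, mapping a predicted probability of default (PD) from the logistic regression onto one of 8 grades with PD increasing in the grade and grade 8 reserved for defaulted borrowers, can be sketched as follows. The cut-off values here are invented for illustration; the actual thresholds would be calibrated on the bank's portfolio.

```python
# Map a predicted PD to a rating grade. Cut-offs are illustrative upper PD
# bounds for grades 1-7; anything worse, or an actual default, is grade 8.

GRADE_CUTOFFS = [0.002, 0.005, 0.01, 0.02, 0.05, 0.10, 0.20]

def rating_grade(pd, defaulted=False):
    """Return the rating grade (1 = best, 8 = worst/defaulted) for a PD."""
    if defaulted:
        return 8
    for grade, cutoff in enumerate(GRADE_CUTOFFS, start=1):
        if pd <= cutoff:
            return grade
    return 8  # PD above the last cut-off maps to the worst grade

print(rating_grade(0.003))                 # -> 2
print(rating_grade(0.5))                   # -> 8
print(rating_grade(0.0, defaulted=True))   # -> 8
```

Validating such a system then amounts to checking, per grade, that the realised default rate tracks the predicted PD, as the study reports.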
218

Earnings Announcements In The Credit Default Swap Market - An Event Study

Johansson, Martin, Nederberg, Johanna January 2014
This paper investigates the European CDS market's response to earnings announcements between 2011 and 2013. Using event study methodology, we investigate whether the CDS market reacts to earnings news in terms of abnormal spread changes. Furthermore, by exploring the pre- and post-announcement windows, the study examines the efficiency of the CDS market. The results imply that earnings announcements provide valuable information to the CDS market, with statistically significant results at the 5% and 10% significance levels for negative and positive news respectively. Additionally, the paper shows that the market reacts rather symmetrically to negative and positive earnings news, since there is no significant difference between the effects. The paper further reveals no significant difference in the response across credit rating groups. In terms of market efficiency, the study cannot confirm that earnings announcements are anticipated. It further shows that there is no post-earnings-announcement drift in the CDS market and that the market is, overall, efficient in incorporating the information into spreads. Finally, a cross-sectional regression analysis confirms that negative earnings surprises are linked to large announcement-day reactions, while positive earnings surprises are not.
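The abnormal-spread-change measure at the core of such an event study can be sketched simply: the abnormal change on the event day is the observed spread change minus the expected change, here proxied by the mean change over a pre-event estimation window. This mean-adjusted model is one common event-study benchmark, not necessarily the exact model the paper uses, and the data are invented.

```python
# Abnormal spread change = observed change on the event day minus the
# mean change over the preceding estimation window (all values in bp).

def abnormal_change(spread_changes, event_index, est_window=5):
    """spread_changes: daily CDS spread changes; event_index: event day."""
    est = spread_changes[event_index - est_window:event_index]
    expected = sum(est) / len(est)
    return spread_changes[event_index] - expected

# Five quiet days, then a 6 bp widening on the announcement day:
changes = [0.5, -0.2, 0.1, 0.3, -0.1, 6.0]
print(abnormal_change(changes, event_index=5, est_window=5))
```

Averaging such abnormal changes across events, and testing whether the average differs from zero, yields the significance results reported above.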
219

An analysis of the covered warrants market in the UK

Klinpratoom, Apinya January 2010
The covered warrant market in the UK has gained in popularity since it was first launched in 2002. It offers an alternative investment choice: derivative securities with a life of typically one to two years, fulfilling many of the functions of a traded options market. Since most research has focused on options trading, investigation of covered warrants trading is still very limited, largely because readily available data on end-traded and existing covered warrants are lacking. A unique set of hand-collected data, supplemented by public and private data from the main covered warrants issuer and the financial database, makes this thesis possible. The sample periods form two separate sets. UK covered warrants traded during July 2004 - December 2006 are used to examine the impact of warrant introduction and expiration on the price, volume and volatility of the underlying securities. For the introduction analyses, both the announcement and the listing of covered warrants have negative impacts on the price of the underlying securities, for both call and put warrants, though the impact of the announcement is more pronounced than that of the listing. These effects are temporary and do not persist much beyond the introduction of the warrants. Negative price impacts of the expiration event are also reported for both call and put covered warrants. However, this study finds no significant impact on the trading volume of the underlying securities from the announcement, listing or expiration of call and put covered warrants. Further evidence indicates an increase in the volatility of the underlying securities around the announcement and listing of covered warrants, for both call and put warrants. On the other hand, stock volatility decreases following the expiration of both call and put covered warrants.
The second data set comprises the call covered warrants traded in the UK market between April 2007 and December 2008, analysed to determine the most appropriate covered warrant pricing model. This study identifies default risk as a major concern for the warrant price, yielding what is called the Vulnerable warrant price. The reasons for this concern are the issuer's creditworthiness in the light of fraudulent actions by traders and the recent subprime problems, the difficulty of dynamic hedging by issuers because of market imperfections, and the absence of guarantees on covered warrant trading from the London Stock Exchange. The most salient findings are the following. The Vulnerable warrant price is generally lower than both the Black-Scholes price and the warrant market price throughout the warrant's lifetime, suggesting that warrants are overvalued in the UK market. Moreover, in-the-money warrants indicate a higher rate of default than out-of-the-money warrants. An additional finding is that the market becomes aware of the default risk only on a short-term basis. The negative abnormal returns of both the market and the Black-Scholes prices support the assumption that default risk is a relevant factor in pricing UK covered warrants. These findings add to the literature on the effect of derivatives trading on the underlying securities and provide further empirical evidence on a particular covered warrant market. This could be of interest not only to practitioners seeking to widen their investment opportunities but also to regulators as a guideline for planning future related policies.
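One stylised way to see why a vulnerable price sits below the Black-Scholes price is to scale the Black-Scholes value by the issuer's risk-neutral survival probability exp(-hT) over the warrant's life. Treating issuer default as independent of the underlying, with full loss on default, is a strong simplifying assumption made only for this sketch; the thesis's vulnerable-pricing model may differ in both respects.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via erf."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def bs_call(s, k, r, sigma, t):
    """Plain Black-Scholes European call."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

def vulnerable_call(s, k, r, sigma, t, hazard):
    """Black-Scholes value scaled by issuer survival probability exp(-hazard*t);
    assumes default independent of the underlying and zero recovery."""
    return math.exp(-hazard * t) * bs_call(s, k, r, sigma, t)

bs = bs_call(100, 100, 0.02, 0.25, 1.0)
vul = vulnerable_call(100, 100, 0.02, 0.25, 1.0, hazard=0.03)
print(bs, vul)  # the vulnerable price is strictly below the Black-Scholes price
```

Under this simplification the discount is the same multiplicative factor at every strike, so the thesis's finding that in-the-money warrants show a higher default discount points to a richer model in which default risk interacts with moneyness.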
220

Essays in financial economics

Bova, Giuseppe January 2013
This thesis presents three distinct models in financial economics. In the first chapter we present a pure-exchange economy model with collateral constraints in the spirit of Kiyotaki and Moore (1997). As a first result we prove the existence of an equilibrium for this type of economy. We show that in this type of model bubbles can exist, and we provide a bubble example in which the asset containing the bubble pays positive dividends. We also show, for the case of high interest rates, the equivalence between this type of model and the Arrow-Debreu market structure. In the second chapter we present a model with limited commitment and one-sided exclusion from financial markets in case of default. For this type of model we prove a no-trade theorem in the spirit of Bulow and Rogoff (1989), for an economy both with and without bounded investment in a productive activity. The third chapter presents a two-period economy with complete markets and 250 states of the world and assets. For this economy we generate a sequence of observed returns and show that a market proxy containing only 80% of the assets in the economy gives results similar to those from the true market portfolio when estimating the CAPM. We also show that, for the examples presented, a vast number of observations is required to reject the CAPM. This raises the question of what drives the poor empirical performance of the CAPM.
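The estimation exercise in the third chapter can be sketched in miniature: CAPM beta is the OLS slope cov(r_i, r_m)/var(r_m), computed once against the full market return and once against a proxy, to see how close the two estimates are. The return series below are invented toy data, far shorter than the chapter's simulated samples.

```python
# CAPM beta as the OLS slope of asset returns on market returns:
# beta = cov(r_asset, r_market) / var(r_market).

def beta(asset, market):
    n = len(asset)
    mean_a, mean_m = sum(asset) / n, sum(market) / n
    cov = sum((a - mean_a) * (m - mean_m) for a, m in zip(asset, market)) / n
    var = sum((m - mean_m) ** 2 for m in market) / n
    return cov / var

market = [0.01, -0.02, 0.015, 0.03, -0.01]   # "true" market returns
proxy  = [0.012, -0.018, 0.014, 0.028, -0.011]  # proxy tracking the market
asset  = [0.02, -0.03, 0.02, 0.05, -0.02]

print(beta(asset, market), beta(asset, proxy))  # similar betas under the proxy
```

With a proxy that tracks the market closely, the two betas barely differ, which is the chapter's point: a proxy covering a large share of the assets can reproduce the full-market CAPM estimates.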
