251

Evaluating Regime Switching in Dynamic Conditional Correlation

Stenberg, Kristoffer, Wikerman, Henrik January 2013 (has links)
This paper provides a comparative study of the Dynamic Conditional Correlation (DCC) model introduced by Engle (2002) and the Independent Switching Dynamic Conditional Correlation (IS-DCC) model introduced by Lee (2010) by evaluating the models on a set of known correlation processes. The evaluation is also extended to empirical data to assess the practical performance of the models. The data include the price of gold and oil, the yield on benchmark 10-year U.S. Treasury notes and the Euro-U.S. dollar exchange rate from January 2007 to December 2009. In addition, a general description of the difficulties of estimating correlations is presented to give the reader a better understanding of the limitations of the models. From the results, it is concluded that neither the IS-DCC model nor the DCC model is generally superior, except for very short-lived correlation shifts, where the IS-DCC model outperforms in both detecting and measuring correlations. However, this paper recommends that these models be used in combination with a qualitative study in empirical situations to better understand the underlying correlation dynamics.
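For orientation, the correlation recursion at the heart of Engle's DCC model is commonly written as follows; this is the standard textbook form, not an excerpt from the thesis:

```latex
% Standard DCC(1,1) dynamics (Engle, 2002): \varepsilon_t are standardized
% return residuals and \bar{Q} is their unconditional covariance matrix.
\begin{aligned}
Q_t &= (1 - a - b)\,\bar{Q} + a\,\varepsilon_{t-1}\varepsilon_{t-1}^{\top} + b\,Q_{t-1},
      \qquad a, b \ge 0,\; a + b < 1,\\
R_t &= \operatorname{diag}(Q_t)^{-1/2}\, Q_t\, \operatorname{diag}(Q_t)^{-1/2}.
\end{aligned}
```

In the IS-DCC extension of Lee (2010), the parameters are allowed to differ across regimes selected by an independent switching process, which is what gives it the edge for short-lived correlation shifts reported above.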
252

Hedonic House Price Index

Säterbrink, Filip January 2013 (has links)
Nasdaq OMX Valueguard-KTH Housing Index (HOX) is a hedonic price index that illustrates the price development of condominiums in Sweden and is obtained using regression techniques. Concerns have been raised regarding the influence of the monthly fee on the index: condominiums with a low fee could be more popular because of the low monthly cost, while condominiums with a high fee tend to sell for a lower price due to the high monthly cost. As the price of a condominium rises, the importance of the monthly fee decreases. Because of this, the monthly fee might affect the regression that produces the index. Furthermore, housing cooperatives are usually indebted. These loans are paid off through the monthly fee, which can be considered to finance a debt that few are aware of. This issue has been investigated by iteratively estimating the importance of the level of debt in order to find a model that better takes into account the possible impact of the monthly fee on the price development. Due to a somewhat simplified model that produces index values with many cases of high standard deviation, no conclusive evidence has been found that confirms the initial hypothesis. Nevertheless, converting part of the monthly fee into debt has shown a general improvement in the fit of the regression equation to the data. It is therefore recommended that real data on debt in housing cooperatives be tested in Valueguard's model.
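One simple way to picture the idea of converting part of the fee into debt is a hedonic equation in which a share of the capitalized monthly fee enters as an implied debt term; this is an illustrative specification under assumed parameters, not necessarily the one used for HOX:

```latex
% Illustrative hedonic specification: a share \theta of the annualized fee is
% capitalized at an assumed rate r into an implied debt D_i; x_i collects the
% usual attributes (area, rooms, location, ...). Not Valueguard's exact model.
\ln P_i = \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_i - \gamma\, D_i + \varepsilon_i,
\qquad D_i = \frac{12\,\theta\,\mathrm{fee}_i}{r}.
```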
253

Missing Data in Value-at-Risk Analysis : Conditional Imputation in Optimal Portfolios Using Regression

Andersson, Joacim, Falk, Henrik January 2013 (has links)
A regression-based method is presented in order to regenerate missing data points in stock return time series. The method uses only complete time series of assets in optimal portfolios, in which the returns of the underlying tend to correlate inadequately with each other. The study shows that the method is able to replicate empirical VaR-backtesting results where all data are available, even when up to 90% of the time series in half of the assets in the portfolios have been removed.
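As a rough picture of what such regression-based imputation can look like, the sketch below regresses an incomplete return series on fully observed ones and fills the gaps with fitted values; the data and setup are invented and the authors' actual procedure may differ.

```python
# Illustrative sketch of regression-based imputation of missing returns:
# an asset with gaps is regressed on assets with complete histories over the
# overlapping sample, and the fitted values fill the gaps. Simulated data.
import numpy as np

rng = np.random.default_rng(0)
T = 500
complete = rng.normal(0.0, 0.01, size=(T, 3))            # fully observed assets
target = complete @ np.array([0.5, -0.2, 0.3]) + rng.normal(0.0, 0.005, T)
target[rng.random(T) < 0.9] = np.nan                      # knock out ~90% of the series

obs = ~np.isnan(target)
X = np.column_stack([np.ones(T), complete])               # intercept + complete assets
beta, *_ = np.linalg.lstsq(X[obs], target[obs], rcond=None)

imputed = target.copy()
imputed[~obs] = X[~obs] @ beta                            # conditional-mean imputation
# (Optionally add resampled regression residuals to preserve return variance.)
```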
254

Non-local means denoising of projection images in cone beam computed tomography

Dacke, Fredrik January 2013 (has links)
A new edge-preserving denoising method is used to increase image quality in cone beam computed tomography. The reconstruction algorithm for cone beam computed tomography used by Elekta enhances high-frequency image details, e.g. noise, and we propose that denoising be done on the projection images before reconstruction. The denoising method is shown to have a connection with computational statistics, and some mathematical improvements to the method are considered. Comparisons are made with the state-of-the-art method on both artificial and physical objects. The results show that the smoothness of the images is enhanced at the cost of blurring out image details. Some results show how the settings of the method parameters influence the trade-off between smoothness and blurred image details in the images.
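For reference, the generic non-local means estimator (Buades et al.) that this class of methods builds on can be written as below; this is the textbook form, not Elekta's implementation:

```latex
% Generic non-local means: u is the noisy projection image, N_i a patch
% around pixel i, a the patch Gaussian width, h the filtering parameter.
\hat{u}(i) = \frac{1}{Z(i)} \sum_{j} w(i,j)\, u(j), \qquad
w(i,j) = \exp\!\left( -\frac{\lVert u(N_i) - u(N_j) \rVert_{2,a}^{2}}{h^{2}} \right), \qquad
Z(i) = \sum_{j} w(i,j).
```

The filtering parameter h governs exactly the smoothness versus detail trade-off discussed in the results.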
255

Forecasting Euro Area Inflation By Aggregating Sub-components

Clason Diop, Noah January 2013 (has links)
The aim of this paper is to see whether one can improve on the naive forecast of Euro Area inflation, where by naive forecast we mean that the year-over-year inflation rate one year ahead will be the same as in the past year. Various model selection procedures are employed on an autoregressive-moving-average model and several Phillips-curve-based models. We also test whether we can improve on the Euro Area inflation forecast by first forecasting the sub-components and aggregating them. We manage to substantially improve on the forecast by using a Phillips-curve-based model. We also find further improvement by forecasting the sub-components first and aggregating them to Euro Area inflation.
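Schematically, the benchmark, a generic Phillips-curve forecast and the aggregation step can be written as follows; these are illustrative forms, with x_t an activity or slack measure and w_k the sub-component weights, not the paper's exact specifications:

```latex
% Naive benchmark, generic Phillips-curve forecast, and aggregation of
% sub-component forecasts into Euro Area inflation (illustrative only).
\hat{\pi}_{t+12\mid t}^{\text{naive}} = \pi_t, \qquad
\hat{\pi}_{t+12\mid t} = \alpha + \beta(L)\,\pi_t + \gamma(L)\,x_t, \qquad
\hat{\pi}_{t+12\mid t}^{\text{EA}} = \sum_{k} w_k\, \hat{\pi}_{t+12\mid t}^{(k)}.
```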
256

Modeling deposit prices

Walås, Gustav January 2013 (has links)
This report investigates whether there are sufficient differences between a bank's depositors to motivate price discrimination. This is done by looking at time series of individual depositors to try to find predictors through a regression analysis. To be able to conclude on the value of more stable deposits for the bank, and hence deduce a price, one also needs to look at regulatory aspects of deposits and different depositors. Once these qualities of a deposit have been assigned by both the bank and the regulator, they need to be transformed into a price. This is done by replication with market funding instruments.
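A minimal sketch of the replication idea, under invented numbers: the stable share of balances is replicated with longer-maturity market funding and the volatile share with overnight funding, and the blended market rate serves as the internal deposit price. This illustrates the general technique only, not the report's model.

```python
# Toy replicating-portfolio rate for deposits: the stable share of balances
# (estimated from the time series) earns longer-maturity market rates, the
# volatile share earns the overnight rate. Assumed rates and weights.
import numpy as np

balances = np.array([100, 104, 98, 101, 103, 99, 102, 100.0])   # monthly balances
stable_share = balances.min() / balances.mean()                  # crude stability proxy

market_rates = {"overnight": 0.010, "1y": 0.015, "5y": 0.025}    # assumed funding curve
stable_mix = {"1y": 0.4, "5y": 0.6}                              # chosen replication weights

stable_rate = sum(w * market_rates[k] for k, w in stable_mix.items())
deposit_rate = stable_share * stable_rate + (1 - stable_share) * market_rates["overnight"]
print(f"replicating-portfolio transfer rate: {deposit_rate:.4f}")
```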
257

Anomaly Detection in Machine-Generated Data: A Structured Approach

Eriksson, André January 2013 (has links)
Anomaly detection is an important issue in data mining and analysis, with applications in almost every area in science, technology and business that involves data collection. The development of general anomaly detection techniques can therefore have a large impact on data analysis across many domains. In spite of this, little work has been done to consolidate the different approaches to the subject. In this report, this deficiency is addressed for the target domain of temporal machine-generated data. To this end, new theory for comparing and reasoning about anomaly detection tasks and methods is introduced, which facilitates a problem-oriented rather than a method-oriented approach to the subject. Using this theory as a basis, the possible approaches to anomaly detection in the target domain are discussed, and a set of interesting anomaly detection tasks is highlighted. One of these tasks is selected for further study: the detection of subsequences that are anomalous with regard to their context within long univariate real-valued sequences. A framework for relating methods derived from this task is developed and is used to derive new methods and an algorithm for solving a large class of derived problems. Finally, a software implementation of this framework, along with a set of evaluation utilities, is discussed and demonstrated.
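As a concrete instance of the highlighted task, the sketch below scores each fixed-length subsequence of a long univariate series by how far its mean deviates from the surrounding context; this is one possible scoring rule chosen for illustration, not the framework developed in the report.

```python
# Contextual subsequence anomaly scoring: each window is compared against
# the statistics of its local context. Simulated data with one injected anomaly.
import numpy as np

def contextual_scores(x: np.ndarray, win: int = 20, context: int = 200) -> np.ndarray:
    scores = np.zeros(len(x) - win)
    for start in range(len(scores)):
        seg = x[start:start + win]
        lo, hi = max(0, start - context), min(len(x), start + win + context)
        ctx = np.concatenate([x[lo:start], x[start + win:hi]])
        scores[start] = abs(seg.mean() - ctx.mean()) / (ctx.std() + 1e-9)
    return scores

rng = np.random.default_rng(1)
series = rng.normal(size=5000)
series[3000:3020] += 4.0                        # short anomalous subsequence
print(int(contextual_scores(series).argmax()))  # should land near index 3000
```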
258

Who is Granted Disability Benefit in Sweden? : Description of risk factors and the effect of the 2008 law reform

Blomberg, Renée January 2013 (has links)
Disability benefit is a publicly funded benefit in Sweden that provides financial protection to individuals with permanent working ability impairments due to disability, injury or illness. The eligibility requirements for disability benefit were tightened on June 1, 2008 to require that the working ability impairment be permanent and that no other factors, such as age or local labor market conditions, can affect eligibility for the benefit. The goal of this paper is to investigate risk factors for the incidence of disability benefit and the effects of the 2008 reform. This is the first study to investigate the impact of the 2008 reform on the demographics of those who received disability benefit. A logistic regression model was used to study the effect of the 2008 law change. The regression results show that the 2008 reform did have a statistically significant effect on the demographics of the individuals who were granted disability benefit. After the reform, women were less overrepresented, the older age groups were more overrepresented, and people with short educations were more overrepresented. Although the variables for SKL regions were jointly statistically significant, their coefficients were small and this group of variables had the least explanatory value compared to the variables for age, education, gender and the interaction variables.
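To make the modelling step concrete, a sketch of such a logistic regression with a reform-by-group interaction is given below, on simulated data; the covariates (education, SKL region, etc.) and their coding in the thesis are richer than this.

```python
# Logistic regression with a reform indicator interacted with a group variable,
# in the spirit of studying who was granted disability benefit before and
# after June 1, 2008. Data and variable names are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "post_reform": rng.integers(0, 2, n),     # decision made after the 2008 reform
    "age": rng.integers(20, 65, n),
})
lin = -2.0 + 0.4 * df["female"] - 0.3 * df["female"] * df["post_reform"] + 0.02 * df["age"]
df["granted"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

model = smf.logit("granted ~ female * post_reform + age", data=df).fit(disp=False)
print(model.summary().tables[1])              # interaction terms capture the reform effect
```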
259

Statistical Analysis of Computer Network Security

Ali, Dana, Kap, Goran January 2013 (has links)
In this thesis it is shown how to measure the annual loss expectancy of computer networks due to the risk of cyber attacks. With the development of metrics for measuring the exploitation difficulty of identified software vulnerabilities, it is possible to make a measurement of the annual loss expectancy for computer networks using Bayesian networks. To enable the computations, computer network vulnerability data in the form of vulnerability model descriptions, vulnerable data connectivity relations and intrusion detection system measurements are transformed into vector-based numerical form. This data is then used to generate a probabilistic attack graph, which is a Bayesian network representation of an attack graph. The probabilistic attack graph forms the basis for computing the annualized loss expectancy of a computer network. Further, it is shown how to compute an optimized order of vulnerability patching to mitigate the annual loss expectancy. An example of the computation of the annual loss expectancy is provided for a small invented example network.
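The end product of such an analysis is an annualized loss expectancy; a toy version of the arithmetic, with the attack graph collapsed into a single independent chain of exploitation steps and all numbers invented, looks roughly like this:

```python
# Toy annualized loss expectancy (ALE): assumed per-step exploitation
# probabilities, attack frequency, and single-incident loss. A real
# probabilistic attack graph would handle step dependencies via a Bayesian network.
step_success = {"phishing": 0.3, "privilege_escalation": 0.5, "database_access": 0.4}

p_chain = 1.0
for step, p in step_success.items():
    p_chain *= p                              # steps treated as independent here

attacks_per_year = 50                         # assumed annual rate of attempts
loss_per_incident = 200_000                   # assumed single-loss expectancy
ale = attacks_per_year * p_chain * loss_per_incident
print(f"P(chain succeeds) = {p_chain:.3f}, ALE = {ale:,.0f} per year")
```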
260

Finding Risk Factors for Long-Term Sickness Absence Using Classification Trees

Lundström, Ina January 2013 (has links)
In this thesis a model for predicting whether someone has an over-risk for long-term sickness absence during the forthcoming year is developed. The model is a classification tree that classifies objects as having high or low risk for long-term sickness absence based on their answers on the HealthWatch form. The HealthWatch form is a questionnaire about health consisting of eleven questions, such as "How do you feel right now?", "How did you sleep last night?", "How is your job satisfaction right now?" etc. As a measure of risk for long-term sickness absence, the Oldenburg Burnout Inventory and a scale for performance-based self-esteem are used. Separate models are made for men and for women. The model for women shows good enough performance on a test set to be acceptable as a general model and can be used for prediction. Some conclusions can also be drawn from the additional information given by the classification tree; workload and work atmosphere do not seem to contribute much to an increased risk for long-term sickness absence, while job satisfaction seems to be one of the most important factors. The model for men performs poorly on a test set, and it is therefore not advisable to use it for prediction or to draw other conclusions from it.
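As an illustration of the modelling setup, the sketch below fits a shallow classification tree to simulated HealthWatch-style answers; the feature construction, outcome definition and validation in the thesis are more involved than this.

```python
# Shallow classification tree for high/low risk of long-term sickness absence,
# fitted to simulated questionnaire answers on 1-10 scales. Illustrative only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 2000
X = rng.integers(1, 11, size=(n, 3)).astype(float)          # e.g. wellbeing, sleep, job satisfaction
y = (X[:, 2] + rng.normal(0.0, 2.0, n) < 4.0).astype(int)   # low job satisfaction -> high risk

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, class_weight="balanced").fit(X_tr, y_tr)
print("test accuracy:", round(tree.score(X_te, y_te), 3))
```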
