About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
381

Forecasting Euro Area Inflation By Aggregating Sub-components

Clason Diop, Noah January 2013 (has links)
The aim of this paper is to see whether one can improve on the naive forecast of Euro Area inflation, where by naive forecast we mean that the year-over-year inflation rate one year ahead will be the same as in the past year. Various model selection procedures are employed on an autoregressive-moving-average model and several Phillips-curve-based models. We also test whether we can improve on the Euro Area inflation forecast by first forecasting the sub-components and aggregating them. We manage to substantially improve on the forecast by using a Phillips-curve-based model. We also find further improvement by forecasting the sub-components first and aggregating them to Euro Area inflation.
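As a loose illustration of the aggregation idea in this abstract (not the thesis's actual models or data), a bottom-up headline forecast can be formed as a weighted sum of sub-component forecasts and compared against the naive year-over-year benchmark; the component names, weights, and figures below are invented:

```python
def naive_forecast(last_yoy_inflation):
    """Naive benchmark: inflation one year ahead equals the past year's rate."""
    return last_yoy_inflation

def aggregate_forecast(component_forecasts, weights):
    """Bottom-up headline forecast: weighted sum of sub-component forecasts."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to one"
    return sum(f * w for f, w in zip(component_forecasts, weights))

# Invented HICP-style components: food, energy, services, industrial goods
weights = [0.20, 0.10, 0.45, 0.25]
component_forecasts = [2.5, 4.0, 1.8, 1.2]  # per-component YoY forecasts, %

headline = aggregate_forecast(component_forecasts, weights)
print(round(headline, 2))  # weighted sum of the component forecasts
```

In the thesis each sub-component would itself be forecast by a fitted model; here the component forecasts are simply given numbers.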
382

Modeling deposit prices

Walås, Gustav January 2013 (has links)
This report investigates whether there are sufficient differences between a bank's depositors to motivate price discrimination. This is done by looking at time series of individual depositors to try to find predictors by a regression analysis. To be able to conclude on the value of more stable deposits for the bank, and hence deduce a price, one also needs to look at regulatory aspects of deposits and different depositors. Once these qualities of a deposit have been assigned by both the bank and the regulator, they need to be transformed into a price. This is done by replication with market funding instruments. / This study aims to map any differences between a bank's depositors in order to determine whether those differences motivate different interest rates. Possible differences are established by analyzing time series of deposited amounts and performing a regression analysis. Bank deposits are also strongly affected by various regulations, so the effects of these are included in the study. To derive a value for the deposits, they are then replicated under given criteria with different debt instruments.
383

Anomaly Detection in Machine-Generated Data: A Structured Approach

Eriksson, André January 2013 (has links)
Anomaly detection is an important issue in data mining and analysis, with applications in almost every area in science, technology and business that involves data collection. The development of general anomaly detection techniques can therefore have a large impact on data analysis across many domains. In spite of this, little work has been done to consolidate the different approaches to the subject. In this report, this deficiency is addressed in the target domain of temporal machine-generated data. To this end, new theory for comparing and reasoning about anomaly detection tasks and methods is introduced, which facilitates a problem-oriented rather than a method-oriented approach to the subject. Using this theory as a basis, the possible approaches to anomaly detection in the target domain are discussed, and a set of interesting anomaly detection tasks is highlighted. One of these tasks is selected for further study: the detection of subsequences that are anomalous with regard to their context within long univariate real-valued sequences. A framework for relating methods derived from this task is developed, and is used to derive new methods and an algorithm for solving a large class of derived problems. Finally, a software implementation of this framework along with a set of evaluation utilities is discussed and demonstrated.
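A minimal sketch (not the thesis's framework) of the highlighted task — flagging points of a univariate real-valued series that are anomalous relative to their local context — is a sliding-window z-score; the window size, threshold, and data below are invented:

```python
import statistics

def context_anomalies(series, window, threshold):
    """Return indices whose value deviates from the mean of the preceding
    `window` points by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        ctx = series[i - window:i]
        mu = statistics.fmean(ctx)
        sd = statistics.pstdev(ctx) or 1e-12  # guard against a constant context
        if abs(series[i] - mu) / sd > threshold:
            flagged.append(i)
    return flagged

data = [1.0, 1.1, 0.9, 1.0, 1.05, 5.0, 1.0, 0.95]
print(context_anomalies(data, 4, 3.0))  # the spike at index 5 is flagged
```

The thesis generalizes from single points to anomalous subsequences and relates whole families of such methods; this stump only conveys the "anomalous with regard to context" notion.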
384

Who is Granted Disability Benefit in Sweden? : Description of risk factors and the effect of the 2008 law reform

Blomberg, Renée January 2013 (has links)
Disability benefit is a publicly funded benefit in Sweden that provides financial protection to individuals with permanent working-ability impairments due to disability, injury, or illness. The eligibility requirements for disability benefit were tightened on June 1, 2008 to require that the working-ability impairment be permanent and that no other factors, such as age or local labor market conditions, can affect eligibility for the benefit. The goal of this paper is to investigate risk factors for the incidence of disability benefit and the effects of the 2008 reform. This is the first study to investigate the impact of the 2008 reform on the demographics of those who received disability benefit. A logistic regression model was used to study the effect of the 2008 law change. The regression results show that the 2008 reform did have a statistically significant effect on the demographics of the individuals who were granted disability benefit. After the reform, women were less overrepresented, the older age groups were more overrepresented, and people with short educations were more overrepresented. Although the variables for SKL regions were jointly statistically significant, their coefficients were small and the group of variables had the least explanatory value compared to the variables for age, education, gender, and the interaction variables.
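The reform effect described above is the kind of thing a logistic model with a reform-period interaction captures. A toy sketch (coefficients are invented, not the thesis's estimates) showing how a negative gender-by-reform interaction shrinks the female odds ratio after the reform:

```python
import math

def granted_prob(female, post_reform):
    """P(granted benefit) under a toy logistic model with a gender x reform
    interaction. All coefficients are invented for illustration only."""
    b0, b_female, b_reform, b_inter = -2.0, 0.5, -0.3, -0.4
    z = b0 + b_female * female + b_reform * post_reform + b_inter * female * post_reform
    return 1.0 / (1.0 + math.exp(-z))

# Odds ratio for women vs men, before and after the reform:
or_pre = math.exp(0.5)          # interaction term inactive before the reform
or_post = math.exp(0.5 - 0.4)   # interaction shrinks the gender effect after it
print(round(or_pre, 2), round(or_post, 2))
```

A jointly significant but small interaction block, as the abstract reports for the SKL-region variables, would correspond to coefficients near zero in such a model.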
385

Statistical Analysis of Computer Network Security

Ali, Dana, Kap, Goran January 2013 (has links)
In this thesis it is shown how to measure the annual loss expectancy of computer networks due to the risk of cyber attacks. With the development of metrics for measuring the exploitation difficulty of identified software vulnerabilities, it is possible to measure the annual loss expectancy for computer networks using Bayesian networks. To enable the computations, computer network vulnerability data in the form of vulnerability model descriptions, vulnerable data connectivity relations, and intrusion detection system measurements are transformed into vector-based numerical form. This data is then used to generate a probabilistic attack graph, which is a Bayesian network representation of an attack graph. The probabilistic attack graph forms the basis for computing the annualized loss expectancy of a computer network. Further, it is shown how to compute an optimized order of vulnerability patching to mitigate the annual loss expectancy. An example computation of the annual loss expectancy is provided for a small invented example network.
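As a much-simplified stand-in for the probabilistic attack graph (a serial chain rather than a Bayesian network, with invented figures), the annualized loss expectancy can be sketched as attack frequency times chain success probability times loss per success:

```python
def chain_success_prob(step_probs):
    """Probability an attacker traverses every step of a serial attack chain."""
    p = 1.0
    for q in step_probs:
        p *= q
    return p

def annual_loss_expectancy(attempts_per_year, step_probs, loss_per_success):
    """ALE = expected successful attacks per year times loss per success."""
    return attempts_per_year * chain_success_prob(step_probs) * loss_per_success

# Invented figures: 100 attempts/year, three exploit steps, 10k loss per breach
ale = annual_loss_expectancy(100, [0.9, 0.5, 0.2], 10_000)
print(round(ale))
```

Patching prioritization then amounts to recomputing the ALE with each candidate step's exploit probability reduced and patching the one that lowers it most; the thesis does this over a full Bayesian attack graph rather than a single chain.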
386

Finding Risk Factors for Long-Term Sickness Absence Using Classification Trees

Lundström, Ina January 2013 (has links)
In this thesis a model is developed for predicting whether someone has an elevated risk of long-term sickness absence during the forthcoming year. The model is a classification tree that classifies individuals as having high or low risk for long-term sickness absence based on their answers to the HealthWatch form. The HealthWatch form is a questionnaire about health consisting of eleven questions, such as "How do you feel right now?", "How did you sleep last night?", and "How is your job satisfaction right now?". As a measure of risk for long-term sickness absence, the Oldenburg Burnout Inventory and a scale for performance-based self-esteem are used. Separate models are made for men and for women. The model for women performs well enough on a test set to be acceptable as a general model and can be used for prediction. Some conclusions can also be drawn from the additional information given by the classification tree; workload and work atmosphere do not seem to contribute much to an increased risk for long-term sickness absence, while job satisfaction seems to be one of the most important factors. The model for men performs poorly on a test set, and it is therefore not advisable to use it for prediction or to draw other conclusions from it.
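A classification tree of this kind reduces to nested threshold rules on the questionnaire items. The toy rule below is not the fitted tree from the thesis; the split variables, scoring scale, and thresholds are invented, though the choice of job satisfaction as the first split mirrors the abstract's finding:

```python
def classify_risk(answers):
    """Toy decision rule in the spirit of a fitted classification tree.
    `answers` maps HealthWatch-style items to scores from 1 (worst) to 10 (best).
    Splits and thresholds are invented for illustration."""
    if answers["job_satisfaction"] <= 4:   # most informative split per the abstract
        return "high"
    if answers["sleep_quality"] <= 3:
        return "high"
    return "low"

print(classify_risk({"job_satisfaction": 3, "sleep_quality": 8}))
print(classify_risk({"job_satisfaction": 8, "sleep_quality": 7}))
```

In practice the tree structure and thresholds are learned from labeled data (here, the burnout and self-esteem measures) rather than hand-written.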
387

Interest Rate Risk – Using Benchmark Shifts in a Multi Hierarchy Paradigm / Ränterisk baserad på scenarioanalys i hierarkiska räntemodeller

Murase, Takeo January 2013 (has links)
This master's thesis investigates the generic benchmark approach to measuring interest rate risk. First the background and market situation are described, followed by an outline of the concept and meaning of measuring interest rate risk with generic benchmarks. Finally, a single yield curve in an arbitrary currency is analyzed in the cases where linear interpolation and cubic interpolation are utilized. It is shown that, in the single-yield-curve setting with linear or cubic interpolation, the problem of finding interest rate scenarios can be formulated as a convex optimization problem, implying properties such as convexity and monotonicity. The analysis also sheds light on the difference between linear and cubic interpolation with respect to which scenarios are generated, and on how to solve for the scenarios generated by the views imposed on the generic benchmark instruments. Further research on the generic benchmark approach that would advance the understanding of the model is suggested at the end of the paper. At this stage, however, using generic benchmark instruments for measuring interest rate risk appears to be a consistent and computationally viable option which not only measures the interest rate risk exposure but also provides guidance on how to act in order to manage interest rate risk in a multi-hierarchy paradigm.
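The linear-interpolation case above can be made concrete with a small sketch (not the thesis's implementation; tenors, rates, and the shift scenario are invented): the curve is interpolated between quoted benchmark tenors, and a scenario is expressed as shifts applied to those benchmark quotes before re-interpolating.

```python
def linear_interp(tenors, rates, t):
    """Linearly interpolate a yield curve at tenor t (flat extrapolation)."""
    if t <= tenors[0]:
        return rates[0]
    if t >= tenors[-1]:
        return rates[-1]
    for i in range(len(tenors) - 1):
        t0, t1 = tenors[i], tenors[i + 1]
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return rates[i] + w * (rates[i + 1] - rates[i])

def scenario_curve(rates, shifts):
    """Apply a benchmark-shift scenario to the quoted rates; the interpolated
    curve is then rebuilt from the shifted quotes."""
    return [r + s for r, s in zip(rates, shifts)]

tenors = [1, 2, 5, 10]                        # years
rates = [0.010, 0.015, 0.020, 0.025]          # quoted benchmark yields
shocked = scenario_curve(rates, [0.001] * 4)  # parallel +10bp benchmark shift
print(round(linear_interp(tenors, rates, 3), 6))
print(round(linear_interp(tenors, shocked, 3), 6))
```

Because the interpolated rate is a convex combination of the benchmark quotes, a shift at the quotes propagates linearly to every interpolated point, which is one reason the scenario-finding problem stays convex in the linear case.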
388

Optimized Transport Planning through Coordinated Collaboration between Transport Companies

Bjarnason, Jónas January 2013 (has links)
This thesis studies a specific transport planning problem, based on a realistic scenario in the transport industry, which deals with the delivery of goods by transport companies to their customers. The main aspect of the planning problem is whether each company should deliver the cargo on its own or through a collaboration in which the companies share the deliveries. In order to find out whether collaboration should take place, the transport planning problem is represented as a mathematical optimization problem, formulated using a column generation method, whose objective function minimizes costs. Three different solution cases are considered, each taking into account different combinations of vehicles used for delivering the cargo as well as different maximum allowed driving times. The goal of the thesis is twofold: firstly, to see if the optimization problem can be solved and, secondly, if so, to investigate whether it is beneficial for transport companies to collaborate under the aforementioned circumstances in order to incur lower costs in all instances considered. It turns out that both goals are achieved. To achieve the first goal, a few simplifications need to be made. The simplifications pertain both to the formulation of the problem and to its implementation, as it is not only difficult to formulate a transport planning problem of this kind with respect to real-life situations, but the problem is also difficult to solve due to its computational complexity. As for the second goal, a numerical comparison between the different instances demonstrates that the costs under collaborative transport planning turn out to be considerably lower, which suggests that, under the circumstances considered in the thesis, collaboration between transport companies is beneficial for the companies involved.
389

Improved estimation of the ATT from longitudinal data / Förbättrade skattningar av ATT från longitudinellt data

Ecker, Kreske January 2018 (has links)
Our goal is to improve the estimation of the average treatment effect among treated (ATT) from longitudinal data. When the ATT is estimated at one time point (or separately at each), outcome-regression (OR), inverse probability weighting and doubly robust estimators can be used. These methods involve estimating the relationships that the covariates have with the outcome and/or propensity score, in different regression models. Assuming these relationships do not vary drastically between close-by time points, we can improve estimation by also using information from neighboring points. We use local regression to smooth the coefficient estimates in the outcome- and propensity score-model over time. Our simulation study shows that when the true coefficients are constant over time, the performance of all estimators is improved by smoothing. Especially in terms of precision, the improvement is greater the more the coefficient estimates are smoothed. We also evaluate the OR-estimator in more complex scenarios where the true regression coefficients vary linearly and non-linearly over time. Here we find that larger degrees of smoothing have a negative effect on the estimators’ accuracy, but continue to improve their precision. This is especially prominent in the non-linear scenario.
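The smoothing step described above can be illustrated with a deliberately crude stand-in (a symmetric moving average rather than the local regression used in the thesis; the coefficient series and bandwidth are invented): per-time-point coefficient estimates are replaced by averages over neighbouring time points, trading a little bias for precision.

```python
def smooth_coefficients(betas, bandwidth):
    """Smooth a time series of per-period coefficient estimates with a
    symmetric moving average over neighbouring time points (a crude
    stand-in for local regression; windows shrink at the boundaries)."""
    n = len(betas)
    smoothed = []
    for i in range(n):
        lo, hi = max(0, i - bandwidth), min(n, i + bandwidth + 1)
        window = betas[lo:hi]
        smoothed.append(sum(window) / len(window))
    return smoothed

# Noisy estimates of a coefficient that is actually constant over time
print(smooth_coefficients([1.0, 3.0, 2.0, 4.0], 1))
```

When the true coefficients are constant, wider windows reduce variance with no bias, matching the simulation finding; when they vary over time, wide windows smear genuine variation, matching the reported loss of accuracy in the linear and non-linear scenarios.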
390

Sequential testing of the sign of the drift of a Brownian motion

Karimidamavandi, Ashkan January 2021 (has links)
No description available.
