321

Portfolio optimisation : improved risk-adjusted return?

Mårtensson, Jonathan January 2006
In this thesis, portfolio optimisation is used to evaluate whether a specific sample of portfolios has a higher risk level or lower expected return than could be obtained through optimisation. It also compares the return of optimised portfolios with the return of the original portfolios. The risk analysis software Aegis Portfolio Manager, developed by Barra, is used for the optimisations. With the expected return and risk level used in this thesis, all portfolios can obtain a higher expected return and a lower risk. Over a six-month period, the optimised portfolios do not consistently outperform the original portfolios, so the optimisation does not appear to improve the return of the portfolios. This might be due to the uncertainty of the expected returns used in this thesis.
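The thesis relies on a commercial optimiser, but the underlying idea can be sketched with a textbook mean-variance calculation. The sketch below is illustrative only (it is not the Barra Aegis optimiser, and the covariance matrix is invented): the fully invested minimum-variance portfolio has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1).

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form minimum-variance weights: solve(cov, 1) normalised to sum 1."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Hypothetical covariance matrix for three assets (invented numbers).
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)
variance = float(w @ cov @ w)  # portfolio variance at the optimum
```

By construction the optimised variance can be no higher than that of holding the single least-risky asset, which is the "lower risk at no cost" effect the thesis measures against real portfolios.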
322

Parallel and Deterministic Algorithms for MRFs: Surface Reconstruction and Integration

Geiger, Davi, Girosi, Federico 01 May 1989
In recent years many researchers have investigated the use of Markov random fields (MRFs) for computer vision. The computational complexity of the implementation has been a drawback of MRFs. In this paper we derive deterministic approximations to MRF models. All the theoretical results are obtained in the framework of mean field theory from statistical mechanics. Because we use MRF models, the mean field equations lead to parallel and iterative algorithms. One of the considered models for image reconstruction is shown to give, in a natural way, the graduated non-convexity algorithm proposed by Blake and Zisserman.
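The mean-field idea can be illustrated with a deliberately simplified sketch (my own quadratic-smoothness example, not the paper's equations): every pixel is updated in parallel toward a weighted average of its noisy observation and the current mean field of its neighbours.

```python
import numpy as np

def mean_field_smooth(g, lam=1.0, iters=50):
    """Deterministic mean-field-style reconstruction for a quadratic-smoothness MRF.

    Each iteration updates ALL pixels simultaneously (a parallel update),
    pulling each toward its data term g and the mean of its 4 neighbours.
    """
    f = g.astype(float).copy()
    for _ in range(iters):
        # 4-neighbour sum (periodic boundary, purely for brevity)
        nb = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
              np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f = (g + lam * nb) / (1.0 + 4.0 * lam)
    return f

noisy = np.random.default_rng(0).normal(0.0, 1.0, (16, 16)) + 5.0
rec = mean_field_smooth(noisy, lam=2.0)
```

The update is embarrassingly parallel, which is the practical point of replacing stochastic MRF sampling with deterministic mean-field equations.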
323

Constitutive and fatigue crack propagation behaviour of Inconel 718

Gustafsson, David January 2010
In this licentiate thesis the work done in the TURBO POWER project Influence of high temperature hold times on the fatigue life of nickel-based superalloys will be presented. The overall objective of this project is to develop and evaluate tools for designing against fatigue in gas turbine applications, with special focus on the nickel-based superalloy Inconel 718. Firstly, the constitutive behaviour of the material has been studied, where focus has been placed on describing the mean stress relaxation and initial softening of the material at intermediate temperatures. Secondly, the fatigue crack propagation behaviour under high temperature hold times has been studied. Focus has here been placed on investigating the main fatigue crack propagation phenomena with the aim of setting up a basis for fatigue crack propagation modelling. This thesis is divided into two parts. The first part describes the general framework, including basic constitutive and fatigue crack propagation behaviour as well as a theoretical background for the constitutive modelling of mean stress relaxation. This framework is then used in the second part, which consists of the four included papers.
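As a point of reference for the crack-propagation part, a textbook Paris-law integration can be sketched. This is a generic baseline, not the hold-time-dependent model developed in the thesis, and every material constant below is invented for illustration.

```python
import math

def grow_crack(a0, cycles, C=1e-11, m=3.0, dS=400.0, Y=1.0, step=100):
    """Euler-forward integration of the Paris law da/dN = C * (dK)^m.

    dK = Y * dS * sqrt(pi * a) is the stress intensity factor range
    (dS in MPa, a in metres; constants are illustrative, not Inconel 718 data).
    """
    a = a0
    for _ in range(0, cycles, step):
        dK = Y * dS * math.sqrt(math.pi * a)
        a += C * dK**m * step  # crack growth accumulated over `step` cycles
    return a

# A 1 mm starting crack after 10,000 cycles under the assumed loading.
a_end = grow_crack(a0=1e-3, cycles=10_000)
```

The growth rate feeds back on itself through dK(a), which is why crack growth accelerates; hold-time effects, the thesis's actual subject, add environment- and time-dependent terms on top of this baseline.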
324

Analysis and Estimation of Customer Survival Time in Subscription-based Businesses

Mohammed, Zakariya Mohammed Salih. January 2008
The aim of this study is to illustrate, adapt and develop methods of survival analysis for analysing and estimating customer survival time in subscription-based businesses. Two particular objectives are studied. The first objective is to redefine the existing survival analysis techniques in business terms and to discuss their uses in order to understand various issues related to the customer-firm relationship.
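The standard survival-analysis building block in this setting is the Kaplan-Meier estimator, which correctly handles customers whose subscriptions are still active (censored observations). A minimal sketch on invented customer data:

```python
def kaplan_meier(data):
    """Kaplan-Meier survival curve.

    `data` is a list of (months_observed, churned) tuples; churned=False
    means the subscription was still active at last observation (censored).
    Returns [(time, survival_probability), ...] at each churn time.
    """
    churn_times = sorted({t for t, churned in data if churned})
    curve, s = [], 1.0
    for t in churn_times:
        at_risk = sum(1 for u, _ in data if u >= t)          # still subscribed at t
        churns = sum(1 for u, c in data if u == t and c)     # churned exactly at t
        s *= 1.0 - churns / at_risk
        curve.append((t, s))
    return curve

# Hypothetical customers: (months observed, churned?)
customers = [(3, True), (5, True), (5, False), (8, True), (12, False), (12, False)]
curve = kaplan_meier(customers)  # -> [(3, 5/6), (5, 2/3), (8, 4/9)]
```

Reading the curve in business terms, as the thesis proposes: roughly 44% of customers are expected to survive past month 8, and the censored customers still contribute to the at-risk counts rather than being discarded.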
325

A model for managing pension funds with benchmarking in an inflationary market

Nsuami, Mozart January 2011
Aggressive fiscal and monetary policies by governments and central banks in developed markets could push inflation to very high levels in the long run. Due to decreasing pension fund benefits and an increasing inflation rate, pension companies are selling inflation-linked products to hedge against inflation risk. Such companies are seriously considering the possible effects of inflation volatility on their investments, and some of them tend to include inflationary allowances in the pension payment plan. In this dissertation we study the management of pension funds of the defined contribution type in the presence of inflation-recession. We study how the fund manager maximizes his fund's wealth when the salaries and stocks are affected by inflation. In this regard, we consider the case of a pension company which invests in a stock, inflation-linked bonds and a money market account, while basing its investment on the contribution of the plan member. We use a benchmarking approach and martingale methods to compute an optimal strategy which maximizes the fund wealth.
326

Linear Models of Nonlinear Systems

Enqvist, Martin January 2005
Linear time-invariant approximations of nonlinear systems are used in many applications and can be obtained in several ways. For example, using system identification and the prediction-error method, it is always possible to estimate a linear model without considering the fact that the input and output measurements in many cases come from a nonlinear system. One of the main objectives of this thesis is to explain some properties of such approximate models. More specifically, linear time-invariant models that are optimal approximations in the sense that they minimize a mean-square error criterion are considered. Linear models, both with and without a noise description, are studied. Some interesting, but in applications usually undesirable, properties of such optimal models are pointed out. It is shown that the optimal linear model can be very sensitive to small nonlinearities. Hence, the linear approximation of an almost linear system can be useless for some applications, such as robust control design. Furthermore, it is shown that standard validation methods, designed for identification of linear systems, cannot always be used to validate an optimal linear approximation of a nonlinear system. In order to improve the models, conditions on the input signal that imply various useful properties of the linear approximations are given. It is shown, for instance, that minimum phase filtered white noise in many senses is a good choice of input signal. Furthermore, the class of separable signals is studied in detail. This class contains Gaussian signals and it turns out that these signals are especially useful for obtaining approximations of generalized Wiener-Hammerstein systems. It is also shown that some random multisine signals are separable. In addition, some theoretical results about almost linear systems are presented. In standard methods for robust control design, the size of the model error is assumed to be known for all input signals. 
However, in many situations, this is not a realistic assumption when a nonlinear system is approximated with a linear model. In this thesis, it is described how robust control design of some nonlinear systems can be performed based on a discrete-time linear model and a model error model valid only for bounded inputs. It is sometimes undesirable that small nonlinearities in a system influence the linear approximation of it. In some cases, this influence can be reduced if a small nonlinearity is included in the model. In this thesis, an identification method with this option is presented for nonlinear autoregressive systems with external inputs. Using this method, models with a parametric linear part and a nonparametric Lipschitz continuous nonlinear part can be estimated by solving a convex optimization problem.
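The sensitivity of MSE-optimal linear models to small nonlinearities is easy to reproduce numerically. In this sketch (my own example, not one from the thesis), the almost linear static system y = u + 0.1u³ driven by unit-variance Gaussian input has MSE-optimal linear gain E[uy]/E[u²] = 1 + 0.1·E[u⁴] = 1.3, noticeably different from 1 even though the cubic term is small.

```python
import numpy as np

# Gaussian input: a separable signal, the favourable case discussed above.
rng = np.random.default_rng(1)
u = rng.normal(0.0, 1.0, 200_000)

# Almost linear static system: a small cubic perturbation of the identity.
y = u + 0.1 * u**3

# Least-squares (MSE-optimal) linear gain: sample estimate of E[uy]/E[u^2].
gain = float(u @ y / (u @ u))  # theory predicts 1 + 0.1 * E[u^4] = 1.3
```

The 30% shift from the nominal unit gain is exactly the kind of sensitivity that can make an optimal linear approximation of an almost linear system misleading for robust control design.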
327

Reconciling capital structure theories: How pecking order and tradeoff theories can be equated

Dedes, Vasilis January 2010
In this paper we study the pecking order and tradeoff theories of capital structure on a sample of 121 Swedish, non-financial, listed firms over the period 2000 to 2009. We find that the Swedish firms' financing behavior appears to have features consistent with the predictions of both theories. The evidence shows a preference for a financing behavior consistent with the tradeoff theory for the whole sample and for a sample of small firms, whereas large firms appear to follow a pecking order in their financing decisions. We show that under sufficient conditions both theories might be seen as "reconciled" rather than mutually exclusive, and we find evidence for the large firms of our sample consistent with this notion.
328

Examining the Effects of Site-Selection Criteria for Evaluating the Effectiveness of Traffic Safety Improvement Countermeasures

Kuo, Pei-Fen 2012 May 1900
The before-after study is still the most popular method used by traffic engineers and transportation safety analysts for evaluating the effects of an intervention. However, this kind of study may be plagued by important methodological limitations, which could significantly alter the study outcome: the regression-to-the-mean (RTM) and site-selection effects. So far, most of the research on these biases has focused on the RTM. Hence, the primary objective of this study is to present a method that can reduce the site-selection bias when an entry criterion is used in before-after studies for continuous data (e.g. speed, reaction times) and count data (e.g. number of crashes, number of fatalities). The proposed method documented in this research provides a way to adjust the Naive estimator by using the sample data, without relying on data collected from a control group, since finding enough appropriate sites for a control group is much harder in traffic-safety analyses. In this study, the proposed method, a.k.a. the Adjusted method, was compared to commonly used methods in before-after studies. The study results showed that among all methods evaluated, the Naive method is the most significantly affected by the selection bias. Using the control group (CG), the ANCOVA, or the empirical Bayes method based on a control group (EBCG) can eliminate the site-selection bias, as long as the characteristics of the control group are exactly the same as those of the treatment group. However, control group data that have the same characteristics based on a truncated distribution or sample may not be available in practice. Moreover, the site-selection bias generated by using a dissimilar control group might be even higher than with the Naive method. The Adjusted method can partially eliminate site-selection bias even when biased estimators of the mean, variance, and correlation coefficient of a truncated normal distribution are used or are not known with certainty.
In addition, three actual datasets were used to evaluate the accuracy of the Adjusted method for estimating site-selection biases for various types of data that have different mean and sample-size values.
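The regression-to-the-mean effect behind the site-selection bias is easy to reproduce in a simulation (an invented setup, not the study's data): selecting sites by a before-period entry criterion makes the Naive before-after estimator report an improvement even when the treatment does nothing at all.

```python
import numpy as np

rng = np.random.default_rng(7)
true_mean = 10.0  # identical crash rate before and after: no treatment effect

before = rng.poisson(true_mean, 10_000)  # before-period crash counts per site
after = rng.poisson(true_mean, 10_000)   # after-period counts, same distribution

# Entry criterion: only "high-crash" sites get the (inert) treatment.
selected = before >= 15

# Naive before-after estimator on the selected sites.
naive_effect = float(before[selected].mean() - after[selected].mean())
# naive_effect is strongly positive: an apparent "reduction" that is pure RTM,
# because selection conditioned on unusually high before-period counts.
```

This is precisely why the Naive estimator needs the adjustment the study proposes when a control group with matching (truncated) characteristics is unavailable.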
329

Analysis and Optimization of Classifier Error Estimator Performance within a Bayesian Modeling Framework

Dalton, Lori Anne 2012 May 1900
With the advent of high-throughput genomic and proteomic technologies, in conjunction with the difficulty in obtaining even moderately sized samples, small-sample classifier design has become a major issue in the biological and medical communities. Training-data error estimation becomes mandatory, yet none of the popular error estimation techniques have been rigorously designed via statistical inference or optimization. In this investigation, we place classifier error estimation in a framework of minimum mean-square error (MMSE) signal estimation in the presence of uncertainty, where uncertainty is relative to a prior over a family of distributions. This results in a Bayesian approach to error estimation that is optimal and unbiased relative to the model. The prior addresses a trade-off between estimator robustness (modeling assumptions) and accuracy. Closed-form representations for Bayesian error estimators are provided for two important models: discrete classification with Dirichlet priors (the discrete model) and linear classification of Gaussian distributions with fixed, scaled identity or arbitrary covariances and conjugate priors (the Gaussian model). We examine robustness to false modeling assumptions and demonstrate that Bayesian error estimators perform especially well for moderate true errors. The Bayesian modeling framework facilitates both optimization and analysis. It naturally gives rise to a practical expected measure of performance for arbitrary error estimators: the sample-conditioned mean-square error (MSE). Closed-form expressions are provided for both Bayesian models. We examine the consistency of Bayesian error estimation and illustrate a salient application in censored sampling, where sample points are collected one at a time until the conditional MSE reaches a stopping criterion. 
We address practical considerations for gene-expression microarray data, including the suitability of the Gaussian model, a methodology for calibrating normal-inverse-Wishart priors from unused data, and an approximation method for non-linear classification. We observe superior performance on synthetic high-dimensional data and real data, especially for moderate to high expected true errors and small feature sizes. Finally, arbitrary error estimators may be optimally calibrated assuming a fixed Bayesian model, sample size, classification rule, and error estimation rule. Using a calibration function mapping error estimates to their optimally calibrated values off-line, error estimates may be calibrated on the fly whenever the assumptions apply.
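The flavour of the closed-form estimator for the discrete model can be sketched as follows. The specific assumptions here (known equal class priors, a flat Dirichlet prior on each class's bin probabilities, and a majority-vote plug-in classifier) are mine for illustration, not necessarily those of the dissertation.

```python
def bayes_error_estimate(counts0, counts1, alpha=1.0):
    """Bayesian error estimate for a discrete classifier with Dirichlet priors.

    counts0/counts1: per-bin sample counts for class 0 and class 1.
    Under a symmetric Dirichlet(alpha) prior, the posterior mean of bin j's
    probability is (count_j + alpha) / (n + alpha * bins); the expected error
    sums, over bins, the posterior mass of the class the classifier rejects.
    Class priors are assumed known and equal (0.5 each).
    """
    n0, n1 = sum(counts0), sum(counts1)
    bins = len(counts0)
    est = 0.0
    for j in range(bins):
        p0 = (counts0[j] + alpha) / (n0 + alpha * bins)  # posterior mean, class 0
        p1 = (counts1[j] + alpha) / (n1 + alpha * bins)  # posterior mean, class 1
        # Majority-vote classifier: bin j goes to the class with more samples,
        # so the losing class's posterior mass contributes to the error.
        est += 0.5 * (p0 if counts1[j] >= counts0[j] else p1)
    return est

e = bayes_error_estimate([6, 1, 1], [1, 2, 5])  # -> 3/11
```

Unlike resubstitution or cross-validation, this estimate is a posterior expectation, which is the sense in which the dissertation's estimators are optimal and unbiased relative to the model.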
330

Model Specification Searches in Latent Growth Modeling: A Monte Carlo Study

Kim, Min Jung 2012 May 1900
This dissertation investigated the optimal strategy for the model specification search in latent growth modeling. Although developing an initial model based on theory from prior research is favored, researchers may sometimes need to specify the starting model in the absence of theory. In this simulation study, the effectiveness of the start models in searching for the true population model was examined. The four possible start models adopted in this study were: the simplest mean and covariance structure model, the simplest mean and the most complex covariance structure model, the most complex mean and the simplest covariance structure model, and the most complex mean and covariance structure model. Six model selection criteria were used to determine the recovery of the true model: the likelihood ratio test (LRT), ΔCFI, ΔRMSEA, ΔSRMR, ΔAIC, and ΔBIC. The results showed that specifying the most complex covariance structure (UN) with the most complex mean structure recovered the true mean trajectory most successfully, with the average hit rate above 90% using ΔCFI, ΔBIC, ΔAIC, and ΔSRMR. In searching for the true covariance structure, the LRT, ΔCFI, ΔAIC, and ΔBIC performed successfully regardless of the searching method with different start models.
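The information-criterion deltas used above follow the generic definitions AIC = 2k − 2 log L and BIC = k ln n − 2 log L. A small sketch with hypothetical fit results (the log-likelihoods and parameter counts below are invented) shows how a modest likelihood gain from a more complex model is rejected by both deltas:

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2*logL (lower is better)."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k*ln(n) - 2*logL (lower is better)."""
    return k * math.log(n) - 2 * loglik

n = 200  # hypothetical sample size
simple = {"loglik": -512.3, "k": 4}    # simpler mean/covariance structure
complex_ = {"loglik": -511.8, "k": 6}  # more complex structure, tiny logL gain

# Positive delta means the criterion prefers the simpler model.
delta_aic = aic(complex_["loglik"], complex_["k"]) - aic(simple["loglik"], simple["k"])
delta_bic = bic(complex_["loglik"], complex_["k"], n) - bic(simple["loglik"], simple["k"], n)
```

Because BIC's per-parameter penalty ln(n) exceeds AIC's constant 2 once n > e², ΔBIC punishes the extra parameters harder, which is one reason the two criteria can disagree in specification searches like the one studied here.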
