111

Large-Scale Portfolio Allocation Under Transaction Costs and Model Uncertainty

Hautsch, Nikolaus, Voigt, Stefan 09 1900 (has links) (PDF)
We theoretically and empirically study portfolio optimization under transaction costs and establish a link between turnover penalization and covariance shrinkage, with the penalization governed by transaction costs. We show how the ex-ante incorporation of transaction costs shifts optimal portfolios towards regularized versions of efficient allocations. The regularizing effect of transaction costs is studied in an econometric setting incorporating parameter uncertainty and optimally combining predictive distributions resulting from high-frequency and low-frequency data. In an extensive empirical study, we illustrate that turnover penalization is more effective than commonly employed shrinkage methods and is crucial for constructing empirically well-performing portfolios.
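Below is a minimal numerical sketch of the mechanism the abstract describes: with a quadratic turnover penalty whose strength is governed by transaction costs, the minimum-variance problem becomes one with a shrunken covariance matrix Sigma + beta*I and a pull towards the current allocation. The function name, the purely quadratic cost, and the toy data are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def turnover_penalized_min_variance(Sigma, w_prev, beta):
    """Minimum-variance weights with a quadratic turnover penalty.

    Solves  min_w  w' Sigma w + beta * ||w - w_prev||^2   s.t.  1'w = 1,
    which is equivalent to a minimum-variance problem with the shrunken
    covariance Sigma + beta * I (the link sketched in the abstract).
    """
    n = len(w_prev)
    ones = np.ones(n)
    # KKT system of the equality-constrained quadratic programme
    kkt = np.block([[2.0 * (Sigma + beta * np.eye(n)), ones[:, None]],
                    [ones[None, :], np.zeros((1, 1))]])
    rhs = np.concatenate([2.0 * beta * w_prev, [1.0]])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]

# Toy example: a higher beta (costlier trading) keeps weights closer to w_prev.
rng = np.random.default_rng(0)
A = rng.standard_normal((250, 5))
Sigma = np.cov(A, rowvar=False)
w_prev = np.full(5, 0.2)
for beta in (0.0, 0.1, 1.0):
    w = turnover_penalized_min_variance(Sigma, w_prev, beta)
    print(beta, np.round(w, 3), "turnover:", round(np.abs(w - w_prev).sum(), 3))
```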
112

Statistical Computing on Manifolds for Computational Anatomy

Pennec, Xavier 18 December 2006 (has links) (PDF)
During the last decade, my main research topic was medical image analysis, and more particularly image registration. However, I was also following, in the background, a more theoretical research track on statistical computing on manifolds. With the recent emergence of computational anatomy, this topic has gained a lot of importance in the medical image analysis community. While writing this habilitation manuscript, I felt that it was time to present a simpler and more unified view of how it works and why it can be important. This is why the usual short synthesis of the habilitation became a hundred-page text in which I tried to synthesize the main notions of statistical computing on manifolds, with applications in registration and computational anatomy. Of course, this synthesis is centered on and illustrated by my personal research work.
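As an illustration of the kind of statistical computing on manifolds the manuscript synthesizes, the sketch below computes a Fréchet (Karcher) mean on the unit sphere by iterated tangent-space averaging with the exponential and logarithm maps. This is a generic textbook construction shown for orientation only, not code from the manuscript.

```python
import numpy as np

def sphere_log(p, q):
    """Log map on the unit sphere: tangent vector at p pointing towards q."""
    w = q - np.dot(p, q) * p          # project q onto the tangent plane at p
    nw = np.linalg.norm(w)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return np.zeros_like(p) if nw < 1e-12 else (theta / nw) * w

def sphere_exp(p, v):
    """Exp map on the unit sphere: follow the geodesic from p in direction v."""
    nv = np.linalg.norm(v)
    return p if nv < 1e-12 else np.cos(nv) * p + np.sin(nv) * v / nv

def frechet_mean(points, iters=50):
    """Fréchet (Karcher) mean by iterated tangent-space averaging."""
    mean = points[0]
    for _ in range(iters):
        v = np.mean([sphere_log(mean, q) for q in points], axis=0)
        mean = sphere_exp(mean, v)
    return mean

# Toy data: noisy directions clustered around the north pole.
rng = np.random.default_rng(1)
pts = rng.normal([0.0, 0.0, 1.0], 0.1, size=(20, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(np.round(frechet_mean(pts), 3))
```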
113

Combining analytical and iterative reconstruction in helical cone-beam CT

Sunnegårdh, Johan January 2007 (has links)
Contemporary algorithms employed for reconstruction of 3D volumes from helical cone-beam projections are so-called non-exact algorithms. This means that the reconstructed volumes contain artifacts irrespective of the detector resolution and number of projection angles employed in the process. In this thesis, three iterative schemes for suppression of these so-called cone artifacts are investigated.

The first scheme, iterative weighted filtered backprojection (IWFBP), is based on iterative application of a non-exact algorithm. For this method, artifact reduction, as well as spatial resolution and noise properties, are measured. During the first five iterations, cone artifacts are clearly reduced. As a side effect, spatial resolution and noise are increased. To avoid this side effect and improve the convergence properties, a regularization procedure is proposed and evaluated.

In order to reduce the cost of the IWFBP scheme, a second scheme is created by combining IWFBP with the so-called ordered subsets technique, which we call OSIWFBP. This method divides the projection data set into subsets and operates sequentially on each of these in a certain order, hence the name "ordered subsets". We investigate two different ordering schemes and numbers of subsets, as well as the possibility to accelerate cone artifact suppression. The main conclusion is that the ordered subsets technique indeed reduces the number of iterations needed, but that it suffers from the drawback of noise amplification.

The third scheme starts by dividing input data into high- and low-frequency parts, followed by non-iterative reconstruction of the high-frequency part and IWFBP reconstruction of the low-frequency part. This opens up the possibility of acceleration by reducing the amount of data in the iterative part. The results show that a suppression of artifacts similar to that of the IWFBP method can be obtained, even if a significant part of the high-frequency data is reconstructed non-iteratively.
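The correction loop underlying an IWFBP-type scheme can be summarized as: reconstruct, reproject, and feed the projection residual back through the non-exact reconstruction operator. The sketch below illustrates that loop generically; the toy matrix operators stand in for the actual cone-beam projection and weighted filtered backprojection operators and are assumptions made only to keep the example runnable.

```python
import numpy as np

def iwfbp(projections, forward_project, approx_reconstruct, n_iter=5, relax=1.0):
    """Generic iterative-correction loop of IWFBP type (illustrative sketch).

    Each iteration reprojects the current volume, compares with the measured
    projections, and feeds the residual through the non-exact (approximate)
    reconstruction operator -- the mechanism by which the artifacts of the
    non-exact algorithm are iteratively suppressed.
    """
    volume = approx_reconstruct(projections)          # initial non-exact reconstruction
    for _ in range(n_iter):
        residual = projections - forward_project(volume)
        volume = volume + relax * approx_reconstruct(residual)
    return volume

# Toy stand-in: a random "projection" matrix P and a deliberately inexact
# reconstruction operator R (a scaled transpose), so the loop can run as-is.
rng = np.random.default_rng(2)
P = rng.standard_normal((80, 40))
R = P.T / np.linalg.norm(P, ord=2) ** 2               # crude approximate inverse
x_true = rng.standard_normal(40)
b = P @ x_true
x_hat = iwfbp(b, lambda v: P @ v, lambda p: R @ p, n_iter=200)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```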
114

Algorithms for a Partially Regularized Least Squares Problem

Skoglund, Ingegerd January 2007 (has links)
When water samples taken from, e.g., a watercourse are analyzed, the concentrations of various substances are determined. These concentrations often depend on the water flow. It is of interest to find out whether observed changes in the concentrations are due to natural variation or are caused by other factors. To investigate this, a statistical time-series model containing unknown parameters has been proposed. The model is fitted to measured data, which leads to an underdetermined system of equations. Among other things, the thesis studies different ways of ensuring a unique and reasonable solution. The basic idea is to impose certain additional conditions on the sought parameters. In the model studied one can, for example, require that certain parameters do not vary strongly over time while still allowing seasonal variation. This is done by regularizing these parameters in the model.

This gives rise to a least squares problem with one or two regularization parameters. Since not all of the parameters involved are regularized, we moreover obtain a partially regularized least squares problem. In general, the values of the regularization parameters are not known, and the problem may need to be solved for several different values in order to obtain a reasonable solution. The thesis studies how this problem can be solved numerically with essentially two different methods, one iterative and one direct. In addition, some ways of determining suitable values of the regularization parameters are studied.

In an iterative solution method, a given initial approximation is improved step by step until a suitably chosen stopping criterion is satisfied. Here we use the conjugate gradient method with specially constructed preconditioners. The number of iterations required to solve the problem with and without preconditioning is compared both theoretically and in practice. The method is investigated here only with the same value for the two regularization parameters.

In the direct method, QR factorization is used to solve the least squares problem. The idea is to first perform the computations that can be made independently of the regularization parameters, while taking the special structure of the problem into account.

To determine values of the regularization parameters, Reinsch's method is generalized to the case of two parameters. Generalized cross-validation and a computationally cheaper Monte Carlo method are also investigated. / Statistical analysis of data from rivers deals with time series which are dependent, e.g., on climatic and seasonal factors. For example, it is a well-known fact that the load of substances in rivers can be strongly dependent on the runoff. It is of interest to find out whether observed changes in riverine loads are due only to natural variation or caused by other factors. Semi-parametric models have been proposed for estimation of time-varying linear relationships between runoff and riverine loads of substances. The aim of this work is to study some numerical methods for solving the linear least squares problem which arises.

The model gives a linear system of the form A1 x1 + A2 x2 + n = b1. The vector n consists of identically distributed random variables, all with mean zero. The unknowns, x, are split into two groups, x1 and x2. In this model there are usually more unknowns than observations, and the resulting linear system is most often consistent, having an infinite number of solutions. Hence some constraint on the parameter vector x is needed. One possibility is to avoid rapid variation in, e.g., the parameters x2. This can be accomplished by regularizing using a matrix A3, which is a discretization of some norm. The problem is formulated as a partially regularized least squares problem with one or two regularization parameters. The parameter x2 here has a two-dimensional structure. By using two different regularization parameters it is possible to regularize separately in each dimension.

We first study (for the case of one parameter only) the conjugate gradient method for solution of the problem. To improve the rate of convergence, block preconditioners of Schur complement type are suggested, analyzed and tested. A direct solution method based on QR decomposition is also studied. The idea is to first perform the operations that are independent of the values of the regularization parameters. Here we utilize the special block structure of the problem. We further discuss the choice of regularization parameters and, in particular, generalize Reinsch's method to the case with two parameters. Finally, the cross-validation technique is treated. Here a Monte Carlo method is also used, by which an approximation to the generalized cross-validation function can be computed efficiently.
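A minimal sketch of the partially regularized least squares problem described above, in which only the block x2 is penalized through a matrix A3 (here a discrete second-difference operator). The stacked formulation and the toy dimensions are illustrative assumptions, and numpy's generic least-squares routine is used in place of the structured QR and preconditioned conjugate gradient methods developed in the thesis.

```python
import numpy as np

def partially_regularized_lsq(A1, A2, A3, b, lam):
    """Solve  min ||A1 x1 + A2 x2 - b||^2 + lam * ||A3 x2||^2.

    Only the block x2 is regularized (hence "partially regularized").
    The problem is rewritten as an ordinary least squares problem on a
    stacked system and handed to numpy's least-squares solver.
    """
    m, n1 = A1.shape
    n2 = A2.shape[1]
    k = A3.shape[0]
    A_aug = np.vstack([np.hstack([A1, A2]),
                       np.hstack([np.zeros((k, n1)), np.sqrt(lam) * A3])])
    b_aug = np.concatenate([b, np.zeros(k)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x[:n1], x[n1:]

# Toy underdetermined system: more unknowns than observations, with a
# second-difference matrix A3 penalizing rapid variation in x2.
rng = np.random.default_rng(3)
m, n1, n2 = 30, 3, 50
A1 = rng.standard_normal((m, n1))
A2 = rng.standard_normal((m, n2))
A3 = np.diff(np.eye(n2), n=2, axis=0)      # discrete second derivative
b = rng.standard_normal(m)
x1, x2 = partially_regularized_lsq(A1, A2, A3, b, lam=10.0)
print(x1.shape, x2.shape)
```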
115

Regularization as a tool for managing irregular immigration : An evaluation of the regularization of irregular immigrants in Spain through the labour market

Alonso Hjärtström, Livia January 2008 (has links)
The objective of the thesis is to make a stakeholder evaluation of the regularization process that in 2005 gave irregular immigrants in Spain the right to apply for legal status. I want to portray how different groups in the labour market experienced the process and to identify the factors that contributed to the result. I further want to study whether regularization can be seen as an effective measure for managing irregular immigration. The methods are qualitative interviews and text analysis combined with evaluation methods. The main theories are Venturini's and Levinson's suggestions for a successful regularization. Other prominent theories are Soysal's theory about citizenship and Jordan and Düvell's and Castles's theories about irregular immigration. The result shows that the main argument for carrying out the process was to improve the situation in the labour market. The most prominent factors that affected the outcome were the social consensus preceding the process and the prerequisite of having a job contract. The regularization of irregular immigrants had an overall positive outcome, but the stringent prerequisites for being regularized, together with problems with sanctions against employers, probably had a somewhat negative effect on the result of the regularization.
116

Shrinkage methods for multivariate spectral analysis

Böhm, Hilmar 29 January 2008 (has links)
In spectral analysis of high dimensional multivariate time series, it is crucial to obtain an estimate of the spectrum that is both numerically well conditioned and precise. The conventional approach is to construct a nonparametric estimator by smoothing locally over the periodogram matrices at neighboring Fourier frequencies. Despite being consistent and asymptotically unbiased, these estimators are often ill-conditioned. This is because a kernel smoothed periodogram is a weighted sum over the local neighborhood of periodogram matrices, which are each of rank one. When treating high dimensional time series, the result is a bad ratio between the smoothing span, which is the effective local sample size of the estimator, and the dimension. In classification, clustering and discrimination, and in the analysis of non-stationary time series, this is a severe problem, because inverting an estimate of the spectrum is unavoidable in these contexts. Areas of application like neuropsychology, seismology and econometrics are affected by this theoretical problem. We propose a new class of nonparametric estimators that have the appealing properties of simultaneously having smaller L2-risk than the smoothed periodogram and being numerically more stable due to a smaller condition number. These estimators are obtained as convex combinations of the averaged periodogram and a shrinkage target. The choice of shrinkage target depends on the availability of prior knowledge on the cross dimensional structure of the data. In the absence of any information, we show that a multiple of the identity matrix is the best choice. By shrinking towards identity, we trade the asymptotic unbiasedness of the averaged periodogram for a smaller mean-squared error. Moreover, the eigenvalues of this shrinkage estimator are closer to the eigenvalues of the real spectrum, rendering it numerically more stable and thus more appropriate for use in classification. These results are derived under a rigorous general asymptotic framework that allows for the dimension p to grow with the length of the time series T. Under this framework, the averaged periodogram even ceases to be consistent and has asymptotically almost surely higher L2-risk than our shrinkage estimator. Moreover, we show that it is possible to incorporate background knowledge on the cross dimensional structure of the data in the shrinkage targets. We derive an exemplary instance of a custom-tailored shrinkage target in the form of a one factor model. This offers a new answer to problems of model choice: instead of relying on information criteria such as AIC or BIC for choosing the order of a model, the minimum order model can be used as a shrinkage target and combined with a non-parametric estimator of the spectrum, in our case the averaged periodogram. The comprehensive Monte Carlo studies we perform show the overwhelming gain in terms of L2-risk of our shrinkage estimators, even for very small sample sizes. We also give an overview of regularization techniques that have been designed for i.i.d. data, such as ridge regression or sparse PCA, and show the interconnections between them.
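A minimal sketch of the shrinkage idea described above: the kernel-smoothed (averaged) periodogram is combined convexly with a scaled identity target, trading a little bias for a much better condition number. Function names, the toy white-noise data, and the fixed shrinkage weight are illustrative assumptions; the thesis derives data-driven weights and custom-tailored targets.

```python
import numpy as np

def smoothed_periodogram(X, freq_index, span):
    """Kernel-smoothed periodogram matrix at one Fourier frequency.

    X: (T, p) multivariate time series. Averages the rank-one periodogram
    matrices over the 2*span + 1 neighbouring Fourier frequencies.
    """
    T, p = X.shape
    d = np.fft.fft(X, axis=0) / np.sqrt(2.0 * np.pi * T)   # discrete Fourier transform
    idx = np.arange(freq_index - span, freq_index + span + 1) % T
    mats = [np.outer(d[j], d[j].conj()) for j in idx]      # rank-one periodograms
    return np.mean(mats, axis=0)

def shrink_to_identity(S_hat, alpha):
    """Convex combination of the averaged periodogram and a scaled identity."""
    p = S_hat.shape[0]
    mu = np.trace(S_hat).real / p                          # preserves the average eigenvalue
    return (1.0 - alpha) * S_hat + alpha * mu * np.eye(p)

# Toy illustration: a high dimension relative to the smoothing span makes the
# averaged periodogram ill-conditioned; shrinkage repairs the conditioning.
rng = np.random.default_rng(4)
T, p = 512, 20
X = rng.standard_normal((T, p))
S_hat = smoothed_periodogram(X, freq_index=40, span=12)    # 25 rank-one terms, p = 20
S_shrunk = shrink_to_identity(S_hat, alpha=0.3)
print("condition numbers:", np.linalg.cond(S_hat), np.linalg.cond(S_shrunk))
```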
117

Computing Visible-Surface Representations

Terzopoulos, Demetri 01 March 1985 (has links)
The low-level interpretation of images provides constraints on 3D surface shape at multiple resolutions, but typically only at scattered locations over the visual field. Subsequent visual processing can be facilitated substantially if the scattered shape constraints are immediately transformed into visible-surface representations that unambiguously specify surface shape at every image point. The required transformation is shown to lead to an ill-posed surface reconstruction problem. A well-posed variational principle formulation is obtained by invoking 'controlled continuity,' a physically nonrestrictive (generic) assumption about surfaces which is nonetheless strong enough to guarantee unique solutions. The variational principle, which admits an appealing physical interpretation, is locally discretized by applying the finite element method to a piecewise, finite element representation of surfaces. This forms the mathematical basis of a unified and general framework for computing visible-surface representations. The computational framework unifies formal solutions to the key problems of (i) integrating multiscale constraints on surface depth and orientation from multiple visual sources, (ii) interpolating these scattered constraints into dense, piecewise smooth surfaces, (iii) discovering surface depth and orientation discontinuities and allowing them to restrict interpolation appropriately, and (iv) overcoming the immense computational burden of fine resolution surface reconstruction. An efficient surface reconstruction algorithm is developed. It exploits multiresolution hierarchies of cooperative relaxation processes and is suitable for implementation on massively parallel networks of simple, locally interconnected processors. The algorithm is evaluated empirically in a diversity of applications.
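A minimal sketch of the surface reconstruction idea, using a simple membrane (first-order) smoothness term in place of the controlled-continuity model and plain single-grid relaxation in place of the multiresolution hierarchy the thesis uses for efficiency; the function names and toy data are assumptions for illustration only.

```python
import numpy as np

def reconstruct_surface(depth, known, lam=1.0, n_iter=2000):
    """Dense surface from scattered depth constraints by iterative relaxation.

    Minimizes  sum_known (z - d)^2 + lam * sum_edges (z_i - z_j)^2, a membrane-type
    smoothness term (the thesis uses the richer controlled-continuity model).
    Each sweep replaces a pixel by a weighted average of its neighbours and,
    where available, the data constraint -- a simple local relaxation process.
    """
    z = np.zeros_like(depth)
    for _ in range(n_iter):
        # average of the four neighbours (values replicated at the border)
        zp = np.pad(z, 1, mode="edge")
        nbr = 0.25 * (zp[:-2, 1:-1] + zp[2:, 1:-1] + zp[1:-1, :-2] + zp[1:-1, 2:])
        # pointwise minimizer: data term where known, smoothness term everywhere
        z = np.where(known, (depth + 4.0 * lam * nbr) / (1.0 + 4.0 * lam), nbr)
    return z

# Toy example: sample a smooth surface at 5% of the pixels and fill it in.
rng = np.random.default_rng(5)
xx, yy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
truth = np.sin(3 * xx) * np.cos(2 * yy)
known = rng.random(truth.shape) < 0.05
z = reconstruct_surface(np.where(known, truth, 0.0), known, lam=1.0)
print("RMS error:", np.sqrt(np.mean((z - truth) ** 2)))
```

The slow convergence of such single-grid relaxation is exactly the computational burden that the multiresolution hierarchies of cooperative relaxation processes described above are designed to overcome.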
118

Challenges for the Accurate Determination of the Surface Thermal Condition via In-Depth Sensor Data

Elkins, Bryan Scott 01 August 2011 (has links)
The overall goal of this work is to provide a systematic methodology by which the difficulties associated with the inverse heat conduction problem (IHCP) can be resolved. To this end, two inverse heat conduction methods are presented. First, a space-marching IHCP method (discrete space, discrete time) utilizing a Gaussian low-pass filter for regularization is studied. The stability and accuracy of this inverse prediction is demonstrated to be more sensitive to the temporal mesh than the spatial mesh. The second inverse heat conduction method presented aims to eliminate this feature by employing a global time, discrete space inverse solution methodology. The novel treatment of the temporal derivative in the heat equation, combined with the global time Gaussian low-pass filter provides the regularization required for stable, accurate results. A physical experiment used as a test bed for validation of the numerical methods described herein is also presented. The physics of installed thermocouple sensors are outlined, and loop-current step response (LCSR) is employed to measure and correct for the delay and attenuation characteristics of the sensors. A new technique for the analysis of LCSR data is presented, and excellent agreement is observed between this model and the data. The space-marching method, global time method, and a new calibration integral method are employed to analyze the experimental data. First, data from only one probe is used which limits the results to the case of a semi-infinite medium. Next, data from two probes at different depths are used in the inverse analysis which enables generalization of the results to domains of finite width. For both one- and two-probe analyses, excellent agreement is found between the actual surface heat flux and the inverse predictions. The most accurate inverse technique is shown to be the calibration integral method, which is presently restricted to one-probe analysis. It is postulated that the accuracy of the global time method could be improved if the required higher-time derivatives of temperature data could be more accurately measured. Some preliminary work in obtaining these higher-time derivatives of temperature from a voltage-rate interface used in conjunction with the thermocouple calibration curve is also presented.
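A minimal sketch of the regularizing role of a Gaussian low-pass filter in this setting: filtering the noisy in-depth temperature history before differentiating it stabilizes the time derivatives that a space-marching inverse solution needs. The filter form, the cutoff parameter, and the toy data are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def gaussian_lowpass(signal, dt, cutoff_time):
    """Gaussian low-pass filter applied in the frequency domain.

    Attenuates frequency content above roughly 1/cutoff_time; used here as the
    regularizing step that stabilizes the time derivatives required by a
    space-marching inverse solution.
    """
    n = len(signal)
    freq = np.fft.rfftfreq(n, d=dt)
    gain = np.exp(-0.5 * (freq * cutoff_time) ** 2)       # Gaussian transfer function
    return np.fft.irfft(np.fft.rfft(signal) * gain, n)

# Toy data: a smooth temperature history plus measurement noise. The time
# derivative of the raw signal is dominated by noise; after filtering it is usable.
rng = np.random.default_rng(6)
dt = 0.01
t = np.arange(0, 10, dt)
T_clean = 20.0 + 5.0 * np.sin(0.8 * t)
T_noisy = T_clean + rng.normal(0.0, 0.1, t.size)
dT_raw = np.gradient(T_noisy, dt)
dT_filt = np.gradient(gaussian_lowpass(T_noisy, dt, cutoff_time=0.5), dt)
true_dT = 4.0 * np.cos(0.8 * t)
interior = slice(100, -100)                               # avoid FFT edge effects
print("max |error| raw:     ", np.max(np.abs(dT_raw - true_dT)[interior]))
print("max |error| filtered:", np.max(np.abs(dT_filt - true_dT)[interior]))
```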
119

The Cauchy problem for the Lamé system in infinite domains in R^m

Makhmudov, O. I., Niyozov, I. E. January 2005 (has links)
We consider the problem of analytic continuation of the solution of the multidimensional Lamé system in infinite domains through known values of the solution and the corresponding strain tensor on a part of the boundary, i.e., the Cauchy problem.
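For reference, the isotropic, homogeneous Lamé system of elastostatics referred to here has the standard form

\[
\mu\,\Delta u + (\lambda + \mu)\,\nabla(\nabla\cdot u) = 0 \quad \text{in } \Omega \subset \mathbb{R}^{m},
\]

with the displacement u and the corresponding strain (or stress) data prescribed only on a part of the boundary; prescribing the data on part of the boundary rather than all of it is what makes this a Cauchy problem and, as for elliptic systems in general, an ill-posed one.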
120

Regularization of the Cauchy Problem for the System of Elasticity Theory in R^m

Makhmudov, O. I., Niyozov, I. E. January 2005 (has links)
In this paper we consider the regularization of the Cauchy problem for a system of second order differential equations with constant coefficients.
