301

Dating Divergence Times in Phylogenies

Anderson, Cajsa Lisa January 2007 (has links)
This thesis concerns different aspects of dating divergence times in phylogenetic trees, using molecular data and multiple fossil age constraints.

Datings of phylogenetically basal eudicots, monocots and modern birds (Neoaves) are presented. Large phylograms and multiple fossil constraints were used in all these studies. Eudicots and monocots are suggested to be part of a rapid divergence of angiosperms in the Early Cretaceous, with most families present at the Cretaceous/Tertiary boundary. Stem lineages of Neoaves were present in the Late Cretaceous, but the main divergence of extant families took place around the Cretaceous/Tertiary boundary.

A novel method and computer software for dating large phylogenetic trees, PATHd8, is presented. PATHd8 is a nonparametric smoothing method that smoothes one pair of sister groups at a time, by taking the mean of the added branch lengths from a terminal taxon to a node. Because of the local smoothing, the algorithm is simple, hence providing stable and very fast analyses, allowing for thousands of taxa and an arbitrary number of age constraints.

The importance of fossil constraints and their placement is discussed, and they are concluded to be the most important factor for obtaining reasonable age estimates.

Different dating methods are compared, and it is concluded that different age estimates are obtained from penalized likelihood, PATHd8, and the Bayesian autocorrelation method implemented in the multidivtime program. In the Bayesian method, prior assumptions about the evolutionary rate at the root, the rate variance and the level of rate smoothing between internal edges are suggested to influence the results.
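As a rough illustration of the local mean path length idea described above — not the published PATHd8 implementation — the sketch below estimates a node age by averaging branch-length paths down to the descendant tips and scaling by a fixed root calibration; the tree format, branch lengths and the 100 Ma root age are invented for the example.

```python
# Toy tree as nested dicts; "length" is the branch leading into the node (hypothetical format).
def mean_path_to_tips(node):
    """Mean branch-length distance from `node` down to its descendant tips."""
    if not node["children"]:                      # terminal taxon
        return 0.0
    return sum(child["length"] + mean_path_to_tips(child)
               for child in node["children"]) / len(node["children"])

root = {"length": 0.0, "children": [
    {"length": 0.12, "children": []},
    {"length": 0.05, "children": [
        {"length": 0.07, "children": []},
        {"length": 0.09, "children": []},
    ]},
]}

root_depth = mean_path_to_tips(root)              # mean root-to-tip path (substitutions/site)
root_age = 100.0                                  # assumed fossil calibration for the root, in Ma

def node_age(node):
    """Scale a node's mean path length by the calibrated root age."""
    return root_age * mean_path_to_tips(node) / root_depth

print(node_age(root["children"][1]))              # estimated age of the single internal node
```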
302

Monetary policy under uncertainty

Söderström, Ulf January 1999 (has links)
This thesis contains four chapters, each of which examines a different aspect of the uncertainty facing monetary policymakers. ''Monetary policy and market interest rates'' investigates how interest rates set on financial markets respond to policy actions taken by the monetary authorities. The reaction of market rates is shown to depend crucially on market participants' interpretation of the factors underlying the policy move. These theoretical predictions find support in an empirical analysis of U.S. financial markets. ''Predicting monetary policy using federal funds futures prices'' examines how prices of federal funds futures contracts can be used to predict policy moves by the Federal Reserve. Although the futures prices exhibit systematic variation across trading days and calendar months, they are shown to be fairly successful in predicting the federal funds rate target that will prevail after the next meeting of the Federal Open Market Committee, over the period 1994 to 1998. ''Monetary policy with uncertain parameters'' examines the effects of parameter uncertainty on the optimal monetary policy strategy. Under certain parameter configurations, increasing uncertainty is shown to lead to more aggressive policy, in contrast to the accepted wisdom. ''Should central banks be more aggressive?'' examines why a certain class of monetary policy models leads to more aggressive policy prescriptions than what is observed in reality. These counterfactual results are shown to be due to model restrictions rather than to central banks being too cautious in their policy behavior. An unrestricted model, taking the dynamics of the economy and multiplicative parameter uncertainty into account, leads to optimal policy prescriptions that are very close to observed Federal Reserve behavior. / Diss. Stockholm: Handelshögskolan, 1999
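For context on the "accepted wisdom" referred to above, the classic Brainard attenuation result can be stated in a one-period sketch; the notation is generic and not taken from the thesis. With target $y^*$, instrument $u$ and an uncertain transmission parameter $b + \varepsilon$, $\varepsilon \sim (0, \sigma_b^2)$:

$$\min_u \; \mathrm{E}\big[(y - y^*)^2\big], \qquad y = (b + \varepsilon)\,u \quad\Longrightarrow\quad u^* = \frac{b\,y^*}{b^2 + \sigma_b^2},$$

so greater parameter uncertainty $\sigma_b^2$ damps the optimal response; the thesis shows that this conclusion can be reversed under other parameter configurations.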
303

Svenska småföretags användning av reserveringar för resultatutjämning och intern finansiering / Swedish small firms’ utilization of allowances for income smoothing and internal financing

Andersson, Håkan A. January 2006 (has links)
Small firms often have inadequate access to the capital necessary for successful management. In the mid-1990s the Swedish Government introduced allowance rules that facilitate the retention of profit for sole proprietorships and partnership firms. The tax credits arising from the allowances give certain benefits as a source of financing compared to traditional forms of credit. Among the more essential benefits is that payment of some parts of the tax credit can be postponed almost indefinitely, or alternatively never be made. The firms are free to use these means, and the responsibility for future payment of the postponed tax debt stays with the individual firm. The comprehensive purpose of the dissertation is to increase the understanding of how small Swedish firms, especially sole proprietorships, utilize the possibilities for allowances for income smoothing and internal financing. The dissertation begins with case studies comprising a smaller selection of micro-firms. Starting from the accounted and reported income-tax returns, alternative calculations are made in which additional positive tax and financing effects appear possible to obtain. One purpose of these studies is to increase insight into the possibilities of income smoothing and internal financing that arise from utilizing these allowances. The studies also illuminate to what extent, and in what way, the allowances are being used in practice. Another objective is to give a more substantive insight into the techniques behind the different allowances: appropriation to positive or negative interest rate allocation, appropriation to or dissolution of the tax allocation reserve, and appropriation to or dissolution of the “expansion fund”. Theories regarding the creation of resources through the building of capital, and theories on financial planning and strategy, are studied. The purpose is to find support for the choice of theoretically grounded independent variables that can be used in cross-sectional studies to explain the use of the appropriation possibilities. The theories of finance of greatest interest in the operationalisation of these variables are those that discuss the choice of financing alternatives for small firms. The “pecking order theory” describes the firm’s order of priority when choices of financing alternatives are made. The concept of “financial bootstrapping” expands the frame of financing choices that especially very small firms have at their disposal. The last part of the theoretical frame deals with the phenomenon of “income smoothing,” that is, the levelling out of profits and losses. A number of financial and non-financial variables are supported by and operationalised from these theories, e.g. return on sales, capital turnover, quick ratio and debt-to-equity ratio, as well as age, gender and line of business. Cross-sectional studies are implemented for the taxation years 1996 and 1999, on databases extracted from Statistics Sweden. The 87,276 sole proprietorships included in the study were required to complete tax returns and pay taxes for their business activity according to the supporting schedule N2, an accounting statement attached to the income tax return form that contains information from the sole proprietorship’s income statement and balance sheet. The possibilities of allowances are treated as dependent variables.
The intention of the cross-sectional studies is to survey and describe the utilization of the possible allowances, with the support of the financial and non-financial independent variables. The connection of these variables to the sole proprietorships’ decision to appropriate to the tax allocation reserve is also summarized in a logistic regression model. A number of theoretically based propositions are made in order to observe how the variables are related to the probability that sole proprietorships actually appropriate to this form of allowance. Appropriation to the tax allocation reserve stands out as the most practised form of allowance. The studies also make clear that utilization varies among the different forms of allowances, and that not all firms that have the prerequisites to utilize the possibilities actually do so to the full. Further utilization of the different allowance possibilities is often conceivable. For the sole proprietorships that are not utilizing these possibilities, the allowances should be regarded as a potential contribution to internal financing and to increased access to capital.
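A hedged sketch of the kind of logistic regression summarized above; the variable list, synthetic data and coefficients are placeholders and do not reproduce the study's actual specification or estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000                                    # synthetic stand-in for the Statistics Sweden data
X = np.column_stack([
    rng.normal(size=n),                     # return on sales (standardized, hypothetical)
    rng.normal(size=n),                     # capital turnover
    rng.normal(size=n),                     # quick ratio
    rng.normal(size=n),                     # debt-to-equity ratio
    rng.integers(0, 2, size=n),             # gender dummy
])
true_beta = np.array([0.8, 0.3, 0.2, -0.4, 0.1])            # made-up coefficients
p = 1.0 / (1.0 + np.exp(-(-0.5 + X @ true_beta)))
y = rng.binomial(1, p)                      # 1 = appropriated to the tax allocation reserve

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(model.summary())                      # log-odds of appropriation vs. the explanatory variables
```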
305

Statistical methods with application to machine learning and artificial intelligence

Lu, Yibiao 11 May 2012 (has links)
This thesis consists of four chapters. Chapter 1 presents theoretical results on high-order Laplacian-based regularization in function estimation. We study iterated Laplacian regularization in the context of supervised learning in order to achieve both nice theoretical properties (like thin-plate splines) and good performance over complex regions (like the soap film smoother). In Chapter 2, we propose an innovative static path-planning algorithm called m-A* for environments full of obstacles. Theoretically, we show that m-A* reduces the number of vertices. In a simulation study, our approach outperforms A* armed with the standard L1 heuristic and stronger ones such as True-Distance Heuristics (TDH), yielding faster query times, adequate memory usage and reasonable preprocessing time. Chapter 3 proposes the m-LPA* algorithm, which extends m-A* to dynamic path-planning and achieves better performance than the benchmark, Lifelong Planning A* (LPA*), in terms of robustness and worst-case computational complexity. Employing the same beamlet graphical structure as m-A*, m-LPA* encodes the information of the environment in a hierarchical, multiscale fashion, and therefore produces a more robust dynamic path-planning algorithm. Chapter 4 focuses on an approach for predicting spikes in spot electricity prices via a combination of boosting and wavelet analysis. Extensive numerical experiments show that our approach improves prediction accuracy compared to support vector machines, owing to the fact that gradient boosting trees inherit the good properties of decision trees, such as robustness to irrelevant covariates, fast computation and good interpretability.
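As a generic illustration of the boosting step in Chapter 4 — not the thesis's wavelet-based pipeline — gradient boosting trees can be trained to flag price spikes from lagged returns; the synthetic data, spike definition and lag features below are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
prices = 100 + np.cumsum(rng.normal(size=2000))                 # synthetic spot-price path
returns = np.diff(prices)
spikes = (np.abs(returns) > 2.5 * returns.std()).astype(int)    # crude spike label

# Lagged returns as features (a stand-in for the wavelet coefficients used in the thesis).
lags = 5
X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
y = spikes[lags:]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("out-of-sample accuracy:", clf.score(X_te, y_te))
```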
306

Direction Of Arrival Estimation By Array Interpolation In Randomly Distributed Sensor Arrays

Akyildiz, Isin 01 December 2006 (has links) (PDF)
In this thesis, DOA estimation using array interpolation in randomly distributed sensor arrays is considered. Array interpolation is a technique in which a virtual array is obtained from the real array, and the outputs of the virtual array, computed from the real array using a linear transformation, are used for direction of arrival estimation. The idea of array interpolation techniques is to make simplified and computationally less demanding high-resolution direction finding methods applicable to the general class of non-structured arrays. In this study, we apply an interpolation technique for arbitrary array geometries in an attempt to extend the root-MUSIC algorithm to arbitrary array geometries. Another issue of array interpolation related to direction finding is spatial smoothing in the presence of multipath sources. It is shown that, due to the Vandermonde structure of the virtual array manifold vector obtained from the proposed interpolation method, it is possible to use spatial smoothing algorithms in the case of multipath sources.
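A minimal numpy sketch of the virtual-array mapping described above, assuming a standard least-squares interpolation matrix fitted over a design sector; the sensor positions, virtual geometry and sector are invented for the example, and the thesis's own method may differ in detail.

```python
import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength
rng = np.random.default_rng(2)

real_pos = rng.uniform(-1.0, 1.0, size=(6, 2))                  # randomly distributed sensors (x, y)
virt_pos = np.column_stack([np.arange(8) * wavelength / 2,      # virtual uniform linear array
                            np.zeros(8)])

def steering(positions, thetas):
    """Plane-wave steering vectors for azimuth angles `thetas` (radians)."""
    directions = np.column_stack([np.cos(thetas), np.sin(thetas)])
    return np.exp(1j * k * positions @ directions.T)            # shape: (sensors, angles)

sector = np.deg2rad(np.linspace(30, 60, 61))                    # design sector for the fit
A_real, A_virt = steering(real_pos, sector), steering(virt_pos, sector)

T = A_virt @ np.linalg.pinv(A_real)                             # least squares: T @ A_real ~ A_virt

# Virtual-array snapshots follow from real-array snapshots: x_virt = T @ x_real.
x_real = A_real[:, 30:31] + 0.01 * rng.standard_normal((6, 1))
x_virt = T @ x_real
print(np.linalg.norm(x_virt - A_virt[:, 30:31]))                # interpolation error within the sector
```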
307

Target Tracking With Correlated Measurement Noise

Oksar, Yesim 01 January 2007 (has links) (PDF)
A white Gaussian measurement noise model is widely used in target tracking problem formulations. In practice, the measurement noise may not be white; this phenomenon is due to the scintillation of the target. In many radar systems, the measurement frequency is high enough that the correlation cannot be ignored without degrading tracking performance. In this thesis, the target tracking problem with correlated measurement noise is considered. The correlated measurement noise is modeled by a first-order Markov model. The effect of correlation is treated as interference, and the Optimum Decoding Based Smoothing Algorithm is applied. For linear models, the estimation performance of the Optimum Decoding Based Smoothing Algorithm is compared with that of the Alpha-Beta Filter Algorithm. For nonlinear models, the estimation performance of the Optimum Decoding Based Smoothing Algorithm is compared with that of the Extended Kalman Filter through various simulations.
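A small sketch of the first-order Markov (Gauss-Markov) measurement noise model mentioned above; the correlation coefficient and driving-noise variance are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
rho, sigma, n = 0.9, 1.0, 5000          # assumed correlation coefficient and driving-noise std

# v_k = rho * v_{k-1} + w_k,  w_k ~ N(0, sigma^2): correlated (colored) measurement noise.
w = rng.normal(0.0, sigma, size=n)
v = np.zeros(n)
for kk in range(1, n):
    v[kk] = rho * v[kk - 1] + w[kk]

# The lag-1 sample autocorrelation should be close to rho for a long record.
print(np.corrcoef(v[:-1], v[1:])[0, 1])
```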
308

Finite Element Modeling Of Electromagnetic Scattering Problems Via Hexahedral Edge Elements

Yilmaz, Asim Egemen 01 July 2007 (has links) (PDF)
In this thesis, quadratic hexahedral edge elements are applied to three-dimensional, open-region electromagnetic scattering problems. For this purpose, a semi-automatic all-hexahedral mesh generation algorithm is developed and implemented. Material properties inside the elements and along the edges are also determined and prescribed during the mesh generation phase in order to be used in the solution phase. Based on the condition-number quality metric, the generated mesh is optimized by means of the Particle Swarm Optimization (PSO) technique. A framework implementing hierarchical hexahedral edge elements is built to investigate the performance of linear and quadratic hexahedral edge elements. Perfectly Matched Layers (PMLs), implemented by means of a complex coordinate transformation, are used for mesh truncation in the software. Sparse storage and efficient matrix ordering are used for the representation of the system of equations, and both direct and indirect sparse matrix solution methods are implemented and used. The performance of quadratic hexahedral edge elements is investigated in depth via the radar cross-sections of several curved or flat objects, with and without patches. Instead of the de facto standard of 0.1-wavelength linear element size, a quadratic element size of 0.3-0.4 wavelengths is observed to be a potential new criterion for electromagnetic scattering and radiation problems.
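For reference, the complex coordinate transformation commonly used for PMLs stretches each coordinate by a complex factor; this is a standard textbook form, and the specific stretching profile used in the thesis may differ:

$$\frac{\partial}{\partial x} \;\longrightarrow\; \frac{1}{s_x}\,\frac{\partial}{\partial x}, \qquad s_x = 1 + \frac{\sigma_x}{j\omega\varepsilon_0},$$

with analogous factors $s_y$ and $s_z$, so that outgoing waves decay inside the layer without reflection at its interface.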
309

Image Segmentation Based On Variational Techniques

Altinoklu, Metin Burak 01 February 2009 (has links) (PDF)
In this thesis, image segmentation methods based on the Mumford–Shah variational approach are studied. By obtaining an optimum point of the Mumford–Shah functional, which consists of a piecewise-smooth approximate image and a set of edge curves, an image can be decomposed into regions. This piecewise-smooth approximate image is smooth inside regions but is allowed to be discontinuous across region boundaries. Unfortunately, because of the irregularity of the Mumford–Shah functional, it cannot be used directly for image segmentation. There are, however, several approaches to approximating the Mumford–Shah functional. In the first approach, suggested by Ambrosio and Tortorelli, it is regularized in a special way; the regularized (Ambrosio–Tortorelli) functional is supposed to be gamma-convergent to the Mumford–Shah functional. In the second approach, the Mumford–Shah functional is minimized in two steps: in the first step, the edge set is held constant and the resulting functional is minimized; the second step updates the edge set using level set methods. This second approximation to the Mumford–Shah functional is known as the Chan–Vese method. In both approaches, the resulting PDEs (the Euler–Lagrange equations of the associated functionals) are solved by finite difference methods. In this study, both approaches are implemented in a MATLAB environment, and the overall performance of the algorithms is investigated through computer simulations over a series of images ranging from simple to complicated.
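For reference, the Mumford–Shah functional discussed above is usually written as follows (a standard textbook form; the notation is not copied from the thesis):

$$E_{\mathrm{MS}}(u, K) \;=\; \int_{\Omega} (u - g)^2\,dx \;+\; \mu \int_{\Omega \setminus K} |\nabla u|^2\,dx \;+\; \nu\,\mathrm{length}(K),$$

where $g$ is the observed image, $u$ the piecewise-smooth approximation and $K$ the edge set. The Ambrosio–Tortorelli regularization replaces $K$ with a smooth phase field $v$:

$$E_{\varepsilon}(u, v) \;=\; \int_{\Omega} (u - g)^2\,dx \;+\; \mu \int_{\Omega} v^2\,|\nabla u|^2\,dx \;+\; \nu \int_{\Omega} \Big( \varepsilon\,|\nabla v|^2 + \frac{(1 - v)^2}{4\varepsilon} \Big)\,dx.$$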
310

Optimizable Multiresolution Quadratic Variation Filter For High-frequency Financial Data

Sen, Aykut 01 February 2009 (has links) (PDF)
As tick-by-tick data on financial transactions become easier to obtain, processing that amount of information in an efficient and correct way to estimate integrated volatility gains importance. However, empirical findings show that this much data may become unusable due to microstructure effects. The most common way to get around this problem is to sample the data at equidistant intervals on a calendar, tick or business time scale. Comparative research on the subject generally asserts that the most successful sampling scheme is calendar-time sampling every 5 to 20 minutes. But this generally means throwing out more than 99 percent of the data, so a more efficient sampling method is clearly needed. Although there is some research on alternative techniques, none has been shown to be the best. Our study is concerned with a sampling scheme that uses the information in different frequency scales and is less prone to microstructure effects. We introduce a new concept of business intensity, whose sampler is named the Optimizable Multiresolution Quadratic Variation Filter. Our filter uses multiresolution analysis techniques to decompose the data into different scales, and quadratic variation to build up the new business time scale. Our empirical findings show that our filter is clearly less prone to microstructure effects than any other common sampling method. We use classified tick-by-tick data for the Turkish Interbank FX market. The market is closed for nearly 14 hours of the day, so big jumps occur between closing and opening prices; we also propose a new smoothing algorithm to reduce the effects of those jumps.
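As a baseline illustration of the calendar-time sampling the abstract compares against — not the proposed multiresolution filter — realized variance from 5-minute sampling can be computed as below; the data are synthetic placeholders, not the Turkish Interbank FX ticks.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)

# Synthetic tick-by-tick log prices over one trading session (placeholder for real FX ticks).
ticks = pd.date_range("2009-02-02 09:30", "2009-02-02 16:00", periods=23_400)
log_price = pd.Series(np.cumsum(rng.normal(0.0, 1e-4, size=len(ticks))), index=ticks)

# Calendar-time sampling: keep the last quote in each 5-minute interval, then sum the
# squared returns of the sampled series to estimate the integrated volatility.
sampled = log_price.resample("5min").last().dropna()
realized_variance = (sampled.diff().dropna() ** 2).sum()
print(realized_variance)
```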
