301

Svenska småföretags användning av reserveringar för resultatutjämning och intern finansiering / Swedish small firms’ utilization of allowances for income smoothing and internal financing

Andersson, Håkan A. January 2006 (has links)
Small firms often have inadequate access to the capital necessary for successful management. In the mid-1990s the Swedish Government introduced allowance rules that facilitate the retention of profit for sole proprietorships and partnership firms. The tax credits arising from the allowances give certain benefits as a source of financing compared to traditional forms of credit. Among the more essential benefits is that payment of some parts of the tax credit can be put on hold almost indefinitely, or alternatively never be paid. The firms are free to use these means, and the responsibility for future payment of the postponed tax debt stays with the individual firm. The comprehensive purpose of the dissertation is to increase the understanding of how small Swedish firms, especially sole proprietorships, utilize the possibilities of allowances for income smoothing and internal financing. The dissertation begins with case studies comprising a smaller selection of micro-firms. Starting from the accounted and reported income-tax returns, alternative calculations are made in which additional positive tax and financing effects appear possible to obtain. One purpose of these studies is to increase insight into the possibilities of income smoothing and internal financing that arise from utilizing these allowances; the studies also illuminate to what extent, and in what way, the allowances are used in practice. Another objective is to give a more substantive insight into the techniques behind the different allowances: appropriation to positive or negative interest-rate allocation; appropriation to, or dissolution of, the tax allocation reserve; and appropriation to, or dissolution of, the "expansion fund". Theories regarding the creation of resources through the building of capital, and theories on financial planning and strategy, are then studied.
The purpose is to find support for the choice of theoretically grounded underlying independent variables that can be used in cross-sectional studies to explain the use of the appropriation possibilities. The theories of finance of greatest interest in the operationalisation of these variables are those that discuss the choice among different financing alternatives for small firms. The pecking order theory describes the firm's order of priority when choices among finance alternatives are made. The concept of financial bootstrapping expands the frame of financing choices that especially very small firms have at their disposal. The last part of the theoretical frame deals with the phenomenon of income smoothing, that is, the levelling out of profits and losses. A number of financial and non-financial variables are supported by and operationalised from these theories, e.g. return on sales, capital turnover, quick ratio and debt-to-equity ratio, as well as age, gender and line of business. Cross-sectional studies are carried out for the taxation years 1996 and 1999, on databases extracted from Statistics Sweden. The 87,276 sole proprietorships included in the study were required to file tax returns and pay taxes on their business activity according to the supporting schedule N2, an accounting statement accompanying the income-tax return form that contains information from the proprietorship's income statement and balance sheet. The allowance possibilities are treated as dependent variables. The intention of the cross-sectional studies is to survey and describe the utilization of the possible allowances with the support of the financial and non-financial independent variables. The connection of these variables to the sole proprietorships' decision to appropriate to the tax allocation reserve is also summarized in a logistic regression model.
A number of theoretically based propositions are made for the purpose of observing how the variables are connected to the chances that sole proprietorships actually appropriate to this form of allowance. Appropriation to the tax allocation reserve stands out as the most practiced form of allowance. The studies also clarify that utilization varies among the different forms of allowances, but that not all firms with the prerequisites to utilize the possibilities actually do so in full. Further utilization of the different allowance possibilities is often conceivable. For the sole proprietorships not yet utilizing these possibilities, the allowances should be considered an eligible contribution to internal financing and a means of increasing access to capital.
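The financial variables operationalised above can be made concrete with a short sketch. The function below computes the four ratios named in the abstract; the argument names are illustrative placeholders, not the actual fields of the Statistics Sweden databases used in the dissertation.

```python
def financial_ratios(sales, profit, total_assets, current_assets,
                     inventory, current_liabilities, total_debt, equity):
    # Compute the four ratio variables named in the abstract.
    # All inputs are hypothetical balance-sheet / income-statement items.
    return {
        "return_on_sales": profit / sales,
        "capital_turnover": sales / total_assets,
        "quick_ratio": (current_assets - inventory) / current_liabilities,
        "debt_to_equity": total_debt / equity,
    }
```

These values would then enter a logistic regression as explanatory variables, with "appropriated to the tax allocation reserve: yes/no" as the dependent variable.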
302

Dating Divergence Times in Phylogenies

Anderson, Cajsa Lisa January 2007 (has links)
This thesis concerns different aspects of dating divergence times in phylogenetic trees, using molecular data and multiple fossil age constraints. Datings of phylogenetically basal eudicots, monocots and modern birds (Neoaves) are presented. Large phylograms and multiple fossil constraints were used in all these studies. Eudicots and monocots are suggested to be part of a rapid divergence of angiosperms in the Early Cretaceous, with most families present at the Cretaceous/Tertiary boundary. Stem lineages of Neoaves were present in the Late Cretaceous, but the main divergence of extant families took place around the Cretaceous/Tertiary boundary. A novel method and computer program for dating large phylogenetic trees, PATHd8, is presented. PATHd8 is a nonparametric smoothing method that smoothes one pair of sister groups at a time, by taking the mean of the added branch lengths from a terminal taxon to a node. Because the smoothing is local, the algorithm is simple, providing stable and very fast analyses that allow for thousands of taxa and an arbitrary number of age constraints. The importance of fossil constraints and their placement is discussed; they are concluded to be the most important factor for obtaining reasonable age estimates. Different dating methods are compared, and different age estimates are shown to be obtained from penalized likelihood, PATHd8, and the Bayesian autocorrelation method implemented in the multidivtime program. In the Bayesian method, prior assumptions about the evolutionary rate at the root, the rate variance, and the level of rate smoothing between internal edges are suggested to influence the results.
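The mean-path-length idea underlying PATHd8 can be sketched in a few lines: a node's age is taken proportional to the mean of the branch-length paths from that node down to its descendant tips, calibrated here by a single fixed root age. This is a toy illustration of the principle only, not the PATHd8 program (which also handles multiple fossil constraints and sister-pair smoothing).

```python
import statistics

def tip_paths(node):
    # Sum of branch lengths from this node down to each descendant tip.
    # A tip is a string; an internal node is {"children": [(child, branch_length), ...]}.
    if isinstance(node, str):
        return [0.0]
    paths = []
    for child, branch_length in node["children"]:
        paths.extend(branch_length + p for p in tip_paths(child))
    return paths

def mpl_age(node, root, root_age):
    # Node age proportional to its mean path length, scaled so the root
    # receives exactly root_age (one fixed calibration point).
    return root_age * statistics.mean(tip_paths(node)) / statistics.mean(tip_paths(root))
```

For a tree ((A:1, B:3):1, C:4) with the root fixed at 100 Myr, the inner node's mean path length (2.0) against the root's (10/3) gives an age estimate of 60 Myr.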
303

Statistical methods with application to machine learning and artificial intelligence

Lu, Yibiao 11 May 2012 (has links)
This thesis consists of four chapters. Chapter 1 focuses on theoretical results on high-order Laplacian-based regularization in function estimation. We studied iterated Laplacian regularization in the context of supervised learning in order to achieve both nice theoretical properties (like thin-plate splines) and good performance over complex regions (like the soap-film smoother). In Chapter 2, we propose an innovative static path-planning algorithm called m-A* for environments full of obstacles. Theoretically, we show that m-A* reduces the number of vertices. In a simulation study, our approach outperforms A* armed with the standard L1 heuristic and stronger ones such as True-Distance Heuristics (TDH), yielding faster query times, adequate memory usage and reasonable preprocessing time. Chapter 3 proposes the m-LPA* algorithm, which extends m-A* to dynamic path-planning and achieves better performance than the benchmark, Lifelong Planning A* (LPA*), in terms of robustness and worst-case computational complexity. Employing the same beamlet graphical structure as m-A*, m-LPA* encodes the information of the environment in a hierarchical, multiscale fashion, and therefore yields a more robust dynamic path-planning algorithm. Chapter 4 focuses on an approach to predicting electricity spot-price spikes via a combination of boosting and wavelet analysis. Extensive numerical experiments show that our approach improves prediction accuracy compared with support vector machines, thanks to the fact that gradient boosting with trees inherits the good properties of decision trees, such as robustness to irrelevant covariates, fast computation and good interpretability.
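For reference, the baseline that m-A* is compared against can be sketched compactly: plain A* on a 4-connected grid with the L1 (Manhattan) heuristic. The beamlet-based m-A*/m-LPA* machinery itself is not reproduced here.

```python
import heapq

def a_star(grid, start, goal):
    # Plain A* on a 4-connected grid with the L1 (Manhattan) heuristic.
    # grid[r][c] == 1 marks an obstacle. Returns the number of unit steps
    # on a shortest path, or None if the goal is unreachable.
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start)]       # (f = g + h, g, cell)
    best = {start: 0}
    while open_set:
        f, g, cur = heapq.heappop(open_set)
        if cur == goal:
            return g
        if g > best.get(cur, float("inf")):
            continue                        # stale queue entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```

Since the L1 heuristic is admissible on a unit-cost grid, the first time the goal is popped its g-value is the optimal path length.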
304

Direction Of Arrival Estimation By Array Interpolation In Randomly Distributed Sensor Arrays

Akyildiz, Isin 01 December 2006 (has links) (PDF)
In this thesis, DOA estimation using array interpolation in randomly distributed sensor arrays is considered. Array interpolation is a technique in which a virtual array is obtained from the real array, and the outputs of the virtual array, computed from the real array using a linear transformation, are used for direction-of-arrival estimation. The idea of array interpolation techniques is to make simplified and computationally less demanding high-resolution direction-finding methods applicable to the general class of non-structured arrays. In this study, we apply an interpolation technique in an attempt to extend the root-MUSIC algorithm to arbitrary array geometries. Another issue of array interpolation related to direction finding is spatial smoothing in the presence of multipath sources. It is shown that, due to the Vandermonde structure of the virtual-array manifold vector obtained from the proposed interpolation method, it is possible to use spatial smoothing algorithms in the case of multipath sources.
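The spatial-smoothing step mentioned above can be illustrated with a short NumPy sketch: averaging the sample covariances of overlapping subarrays of a Vandermonde-structured (virtual uniform linear) array restores the covariance rank that coherent multipath sources collapse. This is a generic forward-smoothing sketch under assumed half-wavelength spacing, not the thesis's interpolation code.

```python
import numpy as np

def smoothed_covariance(X, sub_len):
    # Forward spatial smoothing: average the sample covariances of all
    # overlapping subarrays of length sub_len taken from the (virtual)
    # uniform linear array data X of shape (num_sensors, num_snapshots).
    M, N = X.shape
    K = M - sub_len + 1
    R = np.zeros((sub_len, sub_len), dtype=complex)
    for k in range(K):
        Xk = X[k:k + sub_len, :]
        R += Xk @ Xk.conj().T / N
    return R / K

def steering(M, theta):
    # ULA steering vector, half-wavelength element spacing (assumed).
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))
```

With two fully coherent sources, the full-array covariance is rank one, while the smoothed covariance recovers rank two, which is what makes subspace methods such as root-MUSIC applicable again.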
305

Target Tracking With Correlated Measurement Noise

Oksar, Yesim 01 January 2007 (has links) (PDF)
A white Gaussian noise measurement model is widely used in target tracking problem formulations. In practice, the measurement noise may not be white; this phenomenon is due to the scintillation of the target. In many radar systems, the measurement frequency is high enough that the correlation cannot be ignored without degrading tracking performance. In this thesis, the target tracking problem with correlated measurement noise is considered. The correlated measurement noise is modeled by a first-order Markov model. The effect of the correlation is treated as interference, and the Optimum Decoding Based Smoothing Algorithm is applied. For linear models, the estimation performance of the Optimum Decoding Based Smoothing Algorithm is compared with that of the Alpha-Beta Filter Algorithm; for nonlinear models, it is compared with that of the Extended Kalman Filter through various simulations.
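For reference, the two ingredients of the linear-model comparison can be sketched briefly: the classic alpha-beta tracker used as the benchmark, and a first-order Markov (AR(1)) generator for the correlated measurement noise. Gains and parameters here are illustrative, not those used in the thesis.

```python
def alpha_beta_track(measurements, dt, alpha, beta, x0=0.0, v0=0.0):
    # Classic alpha-beta tracker: predict with a constant-velocity model,
    # then correct position (alpha) and velocity (beta) with the residual.
    x, v = x0, v0
    estimates = []
    for z in measurements:
        x_pred = x + dt * v          # predict
        r = z - x_pred               # innovation / residual
        x = x_pred + alpha * r       # position correction
        v = v + (beta / dt) * r      # velocity correction
        estimates.append(x)
    return estimates

def ar1_noise(n, rho, sigma, seed=1):
    # First-order Markov (AR(1)) measurement noise, as in the thesis's
    # correlated-noise model: e[k] = rho * e[k-1] + w[k], w white Gaussian.
    import random
    rnd = random.Random(seed)
    e, out = 0.0, []
    for _ in range(n):
        e = rho * e + rnd.gauss(0.0, sigma)
        out.append(e)
    return out
```

With rho near one the noise samples are strongly correlated between the closely spaced radar measurements, which is exactly the regime where ignoring the correlation degrades tracking.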
306

Finite Element Modeling Of Electromagnetic Scattering Problems Via Hexahedral Edge Elements

Yilmaz, Asim Egemen 01 July 2007 (has links) (PDF)
In this thesis, quadratic hexahedral edge elements have been applied to three-dimensional open-region electromagnetic scattering problems. For this purpose, a semi-automatic all-hexahedral mesh generation algorithm is developed and implemented. Material properties inside the elements and along the edges are also determined and prescribed during the mesh generation phase, to be used in the solution phase. Based on the condition-number quality metric, the generated mesh is optimized by means of the Particle Swarm Optimization (PSO) technique. A framework implementing hierarchical hexahedral edge elements is developed to investigate the performance of linear and quadratic hexahedral edge elements. Perfectly Matched Layers (PMLs), implemented by using a complex coordinate transformation, are used for mesh truncation in the software. Sparse storage and efficient matrix ordering are used for the representation of the system of equations. Both direct and iterative sparse matrix solution methods are implemented and used. The performance of quadratic hexahedral edge elements is investigated in depth via the radar cross-sections of several curved or flat objects with or without patches. Instead of the de facto standard of 0.1-wavelength linear element size, a 0.3-0.4-wavelength quadratic element size was observed to be a potential new criterion for electromagnetic scattering and radiation problems.
307

Image Segmentation Based On Variational Techniques

Altinoklu, Metin Burak 01 February 2009 (has links) (PDF)
In this thesis, image segmentation methods based on the Mumford-Shah variational approach have been studied. By obtaining an optimum point of the Mumford-Shah functional, which consists of a piecewise smooth approximate image and a set of edge curves, an image can be decomposed into regions. The piecewise smooth approximate image is smooth inside regions but is allowed to be discontinuous across region boundaries. Unfortunately, because of the irregularity of the Mumford-Shah functional, it cannot be used directly for image segmentation. There are, however, several approaches that approximate the Mumford-Shah functional. In the first approach, suggested by Ambrosio and Tortorelli, it is regularized in a special way; the regularized (Ambrosio-Tortorelli) functional is Gamma-convergent to the Mumford-Shah functional. In the second approach, the Mumford-Shah functional is minimized in two steps: in the first step, the edge set is held constant and the resulting functional is minimized; the second step updates the edge set using level-set methods. This second approximation to the Mumford-Shah functional is known as the Chan-Vese method. In both approaches, the resulting PDEs (the Euler-Lagrange equations of the associated functionals) are solved by finite-difference methods. In this study, both approaches are implemented in a MATLAB environment, and the overall performance of the algorithms is investigated through computer simulations over a series of images from simple to complicated.
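The piecewise-constant (Chan-Vese) idea can be illustrated compactly: alternate between updating the two region means and reassigning pixels to the nearer mean. The sketch below keeps only the data-fidelity term of the energy, omitting the contour-length penalty and the level-set evolution that the thesis actually implements; it is a minimal illustration, not the studied algorithm.

```python
import numpy as np

def chan_vese_regions(img, n_iter=20):
    # Two-phase piecewise-constant segmentation: alternately update the
    # region means (c1, c2) and reassign each pixel to the closer mean.
    # This is the data-fidelity part of the Chan-Vese energy only; the
    # contour-length penalty is dropped for brevity.
    mask = img > img.mean()              # initial partition
    for _ in range(n_iter):
        c1 = img[mask].mean()            # mean inside
        c2 = img[~mask].mean()           # mean outside
        new_mask = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_mask, mask):
            break                        # converged
        mask = new_mask
    return mask, c1, c2
```

On a two-valued image this fixed-point iteration converges immediately to the true partition; on real images the length penalty is what keeps the boundary smooth.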
308

Optimizable Multiresolution Quadratic Variation Filter For High-frequency Financial Data

Sen, Aykut 01 February 2009 (has links) (PDF)
As tick-by-tick data on financial transactions become easier to obtain, processing that much information efficiently and correctly to estimate integrated volatility gains importance. However, empirical findings show that this much data may become unusable due to microstructure effects. The most common way to overcome this problem is to sample the data at equidistant intervals on calendar, tick or business time scales. Comparative research on the subject generally asserts that the most successful sampling scheme is calendar-time sampling every 5 to 20 minutes. But this generally means throwing away more than 99 percent of the data, so a more efficient sampling method is clearly needed. Although some research has explored alternative techniques, none has been proven best. Our study concerns a sampling scheme that uses information at different frequency scales and is less prone to microstructure effects. We introduce a new concept of business intensity, whose sampler is named the Optimizable Multiresolution Quadratic Variation Filter. Our filter uses multiresolution analysis techniques to decompose the data into different scales, and quadratic variation to build up the new business time scale. Our empirical findings show that the filter is clearly less prone to microstructure effects than any other common sampling method. We use classified tick-by-tick data for the Turkish Interbank FX market. The market is closed for nearly 14 hours a day, so big jumps occur between closing and opening prices; we therefore also propose a new smoothing algorithm to reduce the effects of those jumps.
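The core combination, decomposing a return series across scales and measuring the quadratic variation each scale contributes, can be sketched with an orthonormal Haar transform. This is a generic illustration of multiresolution quadratic variation, not the thesis's filter.

```python
import numpy as np

def haar_qv_by_scale(returns, levels=3):
    # Decompose a return series with the orthonormal Haar transform and
    # report the quadratic variation (sum of squared coefficients)
    # contributed by each scale. Because the transform is orthonormal,
    # the per-scale contributions sum to the total quadratic variation.
    x = np.asarray(returns, dtype=float)
    qv = {}
    approx = x
    for lev in range(1, levels + 1):
        pairs = approx[: len(approx) // 2 * 2].reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)   # fine-scale part
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)   # coarse-scale part
        qv[lev] = float(np.sum(detail ** 2))
    qv["approx"] = float(np.sum(approx ** 2))
    return qv
```

Scales dominated by microstructure noise show up as disproportionately large fine-scale quadratic variation, which is the kind of information a multiresolution sampler can exploit.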
309

台灣上市櫃公司資產減損之探討 / A study of asset impairment of listed companies in Taiwan

楊美雪 Unknown Date (has links)
國內35號會計準則公報的實施是會計從歷史成本原則走向公平價值之重要里程碑，在新公報提高財務報表攸關性之同時，卻可能因放棄可靠性而增大企業報導盈餘的空間。是以實施35號公報對企業財務與營運面資訊揭露上之影響，除資產減損認列項目的正確性，認列金額的適足性，相關揭露報導的適當性外，影響資產減損的因素，以及是否具有公司或產業差異性等，均為值得深入探討的議題。 本論文以2004年報及2005年半年報為研究期間，針對資產減損之認列內容是否符合35號公報之規範內容，本研究發現國內上市櫃公司，將原來規範在1號或5號公報、後來納入35號公報受評之資產，在研究期間內認列並報導的資產減損損失合計約為新台幣203億元，分析結果隱喻國內上市櫃公司在適用35號公報時，確有存在不當認列之可能。本研究同時發現在財務報告資訊品質方面存在財務報告附註揭露「會計變動理由及其影響數」以及會計師查核意見書未對適用35號公報予以適當揭露者計有163家；以及母子公司適用35號公報之時點不同者。 至於認列資產減損金額之決定因素，本研究之實證結果發現，企業認列資產減損之大小受獲利能力、經營績效以及資產使用效能等企業營運因素之影響。在企業特性方面，本研究發現負債比例愈高及企業信用風險愈差之企業，其資產減損金額愈大。規模愈大之公司，認列之資產減損愈小；以及資訊電子業者認列顯著較高之資產減損金額。由於企業在適用35號公報上保有彈性判斷之空間，因此本研究發現企業認列資產減損之大小受到企業本身承受能力及洗大澡動機之影響，隱喻35號公報可能是管理當局可以操弄盈餘之工具之一。 / The implementation of the new accounting standard on asset impairment (SFAS No. 35) is a milestone in the move from the historical-cost principle towards the fair-value principle. As SFAS No. 35 may enhance the relevance of financial information at the cost of reliability, the new communiqué creates room for flexibility in reported earnings. Given the importance of SFAS No. 35 for a company's financial and operational reporting, this thesis investigates the accuracy and adequacy of recognized asset impairments, the appropriateness of impairment reporting and disclosure, the determinants of asset impairment, and differences across companies and industries. On the question of whether the content of recognized asset impairments complies with SFAS No. 35, this study finds that listed companies in Taiwan recognized and reported asset impairments of NT$88,094 million for the study period from December 2004 to June 2005, of which approximately NT$20,300 million should already have been periodically evaluated in accordance with SFAS No. 1 or SFAS No. 5 before the adoption of SFAS No. 35. This suggests that listed companies in Taiwan may have used SFAS No. 35 as an occasion for writing off asset values.
We explored the accuracy of asset impairment losses and the appropriateness of impairment reporting for listed companies in Taiwan. When analyzing reporting quality, we found that for the study period from December 2004 to June 2005 there were 163 financial reports of listed companies without a footnote disclosing the "accounting change and its effect" or an explanatory paragraph for the accounting change in the auditor's opinion. In addition, we found that four companies within two consolidated groups adopted SFAS No. 35 at different times, contrary to the rule of consistent adoption of accounting principles among consolidated entities. We also explored the determinants of asset impairment for the same period. Our empirical results show the following: (1) the size of asset impairment is associated with operational factors such as profitability, operating performance and the effectiveness of asset utilization; (2) regarding company characteristics, the size of asset impairment is associated with the debt ratio and with worse credit risk; (3) bigger companies recognized smaller impairment losses, and, compared with other industries (excluding financial institutions and securities firms), significantly larger impairments were recognized in the electronics industry. Since the evaluation of asset values involves a great deal of professional judgment, we found that the size of the impairment loss was associated with management's reporting motivation and its capacity to absorb such losses. This suggests that SFAS No. 35 may serve as a vehicle for earnings smoothing.
310

Akcijų kainų kintamumo analizė / Stock price volatility analysis

Šimkutė, Jovita 16 August 2007 (has links)
Darbe „Akcijų kainų kintamumo analizė“ nagrinėjami ir lyginami Baltijos (Lietuvos, Latvijos, Estijos) bei Lotynų Amerikos (Meksikos, Venesuelos) šalių duomenys. Atliekama pasirinktų akcijų kainų grąžų analizė. Jai naudojami trijų metų kiekvienos dienos duomenys (akcijų kainos). Pirmoje darbo dalyje supažindinama su bendra prognozavimo metodų teorija, aprašomi skirtingi, dažnai literatūroje ir praktikoje sutinkami modeliai. Antrojoje dalyje aprašyti prognozavimo metodai taikomi realiems duomenims, t.y. pasirinktoms akcijoms. Prognozuojama akcijų kainų grąža, kuri po to yra palyginama su realia reikšme, apskaičiuojamos prognozavimo metodų paklaidos. Pagrindinis darbo tikslas – atlikti lyginamąją prognozavimo modelių analizę su pasirinktomis akcijomis ir atrinkti tuos metodus, kurie duoda geriausius rezultatus. Darbo tikslui įgyvendinti naudojama SAS statistinio paketo ekonometrikos ir laiko eilučių analizės posistemė SAS/ETS (Time Series Forecasting System). / Most empirical surveys in macroeconomics and financial economics are based on time-series analysis. In this work, data from the Baltic states (Lithuania, Latvia, Estonia) and Latin American countries (Mexico, Venezuela) are analyzed and compared. An analysis of stock-price returns is presented using three years of daily data. The first part of the work presents general forecasting theory and describes different models frequently met in the literature and in practice. In the second part, the forecasting models are applied to real data, i.e. the selected stocks. We present the results of forecasting stock returns, comparing them with the realized values, and calculate the forecast errors, which lets us judge the adequacy of each model. The aim of the work is thus to carry out a comparative analysis of forecasting models on the selected stocks and to single out the methods that give the best results. The analysis was made using the SAS statistical package and its econometrics and time-series analysis subsystem SAS/ETS (Time Series Forecasting System).
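The model comparison described above rests on forecast-error measures. Two standard ones are sketched below as an illustration; the abstract does not specify which error measures the thesis actually uses.

```python
import math

def rmse(actual, forecast):
    # Root mean squared error of a forecast series against realized values.
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mape(actual, forecast):
    # Mean absolute percentage error (undefined when an actual value is 0).
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)
```

Computing such errors out-of-sample for each candidate model, and ranking the models by them, is the generic way to "single out the methods that give the best results".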
