91

Opportunism vid nedskrivningsprövning av goodwill? : En kritisk studie av tidigare angivna förklaringar till avvikelser mellan en genom CAPM beräknad diskonteringsränta och den av företaget redovisade, vid nedskrivningsprövning av goodwill.

Carlborg, Christian, Renman Claesson, Ludvig January 2012 (has links)
År 2005 implementerades IFRS 3 och IAS 36 i Sverige. I och med detta genomför företag nedskrivningsprövningar av goodwill. Dessa kan inbegripa nuvärdesberäkningar av framtida kassaflöden. Forskarna Carlin och Finch utförde år 2009 en studie på australiensiska börsnoterade företag för att undersöka om diskonteringsräntor, vilka används vid en nedskrivningsprövning, sätts opportunistiskt. Studien genomfördes genom att de visade på förekomsten av avvikelser mellan diskonteringsräntan som företagen redovisat och en av forskarna estimerad teoretisk diskonteringsränta beräknad genom the Capital Asset Pricing Model [CAPM]. Carlin och Finch hävdar att användandet av diskonteringsräntor vilka avvek mer än 150 räntepunkter från de teoretiska diskonteringsräntorna inte kan förklaras av estimeringsfel och därmed är i linje med opportunistiskt beteende. Det har presenterats olika former av opportunism som förklaring till dessa avvikande diskonteringsräntor. Dessa inbegriper opportunistiskt beteende genom earnings management i form av big bath och income smoothing. Denna studie undersöker om avvikande diskonteringsräntor förekommer och om förklaringarna presenterade av Carlin och Finch har bärighet år 2010 för företag noterade på Nasdaq OMX Stockholm. Detta genom att använda samma metod som Carlin och Finch gällande beräknandet av teoretiska diskonteringsräntor för att sedan relatera detta till resultatutveckling och faktiskt utförd goodwillnedskrivning. Denna studie visar att avvikelser mellan företagens redovisade och en genom CAPM beräknad teoretisk diskonteringsränta tycks vara vanligt förekommande och att avvikelser som kan förklaras av big bath förekommer, detta tycks dock vara ovanligt. Ingen avvikelse mellan redovisad och teoretisk diskonteringsränta kan påvisas som kan förklaras av opportunistiskt beteende genom income smoothing i syfte att dämpa resultat. Vidare framför denna studie kritik av tidigare studiers slutsatser om förekomst av agerande i linje med opportunism då redovisad diskonteringsränta avviker från en genom CAPM beräknad diskonteringsränta. / In 2005 IFRS 3 and IAS 36 were implemented in Sweden. Since then, companies have performed impairment testing of goodwill. These impairment tests may include discounted cash flow analyses. The researchers Carlin and Finch conducted a study in 2009 of Australian listed companies to investigate whether the discount rates used in these impairment tests were set opportunistically. They did this by demonstrating deviations between the discount rates that companies reported and discount rates calculated by the researchers using the Capital Asset Pricing Model [CAPM]. Carlin and Finch argue that reported discount rates that deviated more than 150 basis points from the estimated discount rates cannot be explained by estimation error and are thus consistent with opportunistic behavior. Explanations were presented by Carlin and Finch concerning the occurrence of these deviations. These include earnings management in the form of big bath and income smoothing. This study examines whether deviating discount rates occur and whether the explanations presented by Carlin and Finch can be documented for companies listed on Nasdaq OMX Stockholm in 2010. This is done by using the same method as Carlin and Finch for calculating the theoretical discount rates, which are then related to earnings development and actually performed goodwill impairments.
This study shows that deviations between reported discount rates and theoretical discount rates estimated by CAPM are prevalent, and that some of these deviations may have been motivated by big bath behavior, though this appears to be unusual. No deviations between reported and theoretical discount rates can be shown to be explained by opportunistic income smoothing intended to dampen earnings. Furthermore, this study criticizes earlier studies' conclusions that behavior consistent with opportunism explains deviations between reported discount rates and theoretical discount rates calculated using CAPM.
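For reference, the theoretical discount rate compared against the reported rate above is the textbook CAPM cost of equity; the inputs used in the thesis (risk-free rate, beta, market premium) are not given in this abstract, so only the standard form and the 150-basis-point criterion are shown:

```latex
% Textbook CAPM expected return used as the theoretical discount rate:
E(R_i) = R_f + \beta_i \left( E(R_m) - R_f \right)
% Carlin and Finch treat a reported rate as deviating when
\left| r_{\text{reported}} - r_{\text{CAPM}} \right| > 150 \text{ basis points},
% i.e. beyond what they attribute to estimation error.
```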
92

Forecasting daily maximum temperature of Umeå

Naz, Saima January 2015 (has links)
The aim of this study is to find an approach that can help improve predictions of the daily maximum temperature of Umeå. Weather forecasts are available through various sources nowadays, and various software packages and methods exist for time series forecasting. Our aim is to investigate the daily maximum temperatures of Umeå and compare the performance of some methods in forecasting these temperatures. Here we analyse the data of daily maximum temperatures and produce predictions for a local period using autoregressive integrated moving average (ARIMA), exponential smoothing (ETS), and cubic splines. The forecast package in R is used for this purpose, and the automatic forecasting methods available in the package are applied for modelling with ARIMA, ETS, and cubic splines. The thesis begins with some initial modelling on the univariate time series of daily maximum temperatures. The data of daily maximum temperatures of Umeå from 2008 to 2013 are used to compare the methods across various lengths of training period, and on the basis of accuracy measures we try to choose the best method. Keeping in mind that various factors can cause variability in daily temperature, we then try to improve the forecasts in the next part of the thesis by using a multivariate time series forecasting method on the maximum temperatures together with some other variables. A vector autoregressive (VAR) model from the vars package in R is used to analyse the multivariate time series. Results: ARIMA is selected as the best method, in comparison with ETS and cubic smoothing splines, for forecasting the one-step-ahead daily maximum temperature of Umeå with a training period of one year. It is observed that ARIMA also provides better forecasts of daily temperatures for the next two or three days. On the basis of this study, VAR (for multivariate time series) does not help to improve the forecasts significantly. The proposed ARIMA model with a one-year training period produces forecasts comparable with the forecasts of daily maximum temperature of Umeå obtained from the Swedish Meteorological and Hydrological Institute (SMHI).
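The thesis relies on the automatic model selection in R's forecast package (auto.arima, ets) and on cubic splines. As a rough illustration only, the sketch below performs the same kind of rolling one-step-ahead comparison in Python with statsmodels, using a fixed ARIMA order in place of automatic order selection and synthetic data in place of the Umeå series; the orders, data and accuracy measures in the thesis differ.

```python
# Illustrative sketch (not the thesis code): rolling one-step-ahead forecasts of
# daily maximum temperature with a one-year training window, comparing ARIMA and
# exponential smoothing (ETS) by mean absolute error.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def rolling_one_step_mae(y: pd.Series, train_len: int = 365, horizon_days: int = 60):
    """Compare ARIMA and ETS one-step-ahead forecasts over `horizon_days`."""
    err_arima, err_ets = [], []
    for t in range(train_len, min(train_len + horizon_days, len(y))):
        train = y.iloc[t - train_len:t]
        # Fixed ARIMA order as a stand-in for auto.arima's AIC-based selection.
        arima_fc = ARIMA(train, order=(2, 0, 1)).fit().forecast(steps=1).iloc[0]
        ets_fc = ExponentialSmoothing(train, trend="add").fit().forecast(1).iloc[0]
        err_arima.append(abs(y.iloc[t] - arima_fc))
        err_ets.append(abs(y.iloc[t] - ets_fc))
    return np.mean(err_arima), np.mean(err_ets)

# Example with synthetic data shaped like daily maximum temperatures (seasonal + noise):
days = pd.date_range("2008-01-01", periods=2 * 365, freq="D")
temp = pd.Series(5 + 12 * np.sin(2 * np.pi * np.arange(len(days)) / 365.25)
                 + np.random.default_rng(0).normal(0, 2, len(days)), index=days)
print(rolling_one_step_mae(temp))
```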
93

Extending covariance structure analysis for multivariate and functional data

Sheppard, Therese January 2010 (has links)
For multivariate data, when testing homogeneity of covariance matrices arising from two or more groups, Bartlett's (1937) modified likelihood ratio test statistic is appropriate under the null hypothesis of equal covariance matrices, where the null distribution of the test statistic rests on the restrictive assumption of normality. Zhang and Boos (1992) provide a pooled bootstrap approach when the data cannot be assumed to be normally distributed. We give three alternative bootstrap techniques for testing homogeneity of covariance matrices when it is inappropriate to pool the data into a single population, as in the pooled bootstrap procedure, and when the data are not normally distributed. We further show that our alternative bootstrap methodology can be extended to testing Flury's (1988) hierarchy of covariance structure models. Where deviations from normality exist, we show by simulation that the normal-theory log-likelihood ratio test statistic is less viable than our bootstrap methodology. For functional data, Ramsay and Silverman (2005) and Lee et al. (2002) together provide four computational techniques for functional principal component analysis (PCA) followed by covariance structure estimation. When the smoothing method for smoothing individual profiles is based on least squares cubic B-splines or regression splines, we find that the ensuing covariance matrix estimate suffers from loss of dimensionality. We show that ridge regression can be used to resolve this problem, but only for the discretisation and numerical quadrature approaches to estimation, and that the choice of a suitable ridge parameter is not arbitrary. We further show the unsuitability of regression splines when deciding on the optimal degree of smoothing to apply to individual profiles. To gain insight into smoothing parameter choice for functional data, we compare kernel and spline approaches to smoothing individual profiles in a nonparametric regression context. Our simulation results justify a kernel approach using a new criterion based on predicted squared error. We also show by simulation that, when taking account of correlation, a kernel approach using a generalized cross-validatory type criterion performs well. These data-based methods for selecting the smoothing parameter are illustrated prior to a functional PCA on a real data set.
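As background for the bootstrap ideas above, here is a minimal sketch of a pooled bootstrap test of equal covariance matrices in the spirit of Zhang and Boos (1992), using a Box's M-type log-determinant statistic; it is a generic illustration, not the thesis's alternative (non-pooled) bootstrap procedures.

```python
# Minimal illustrative sketch: pooled bootstrap test of H0: equal covariance
# matrices across groups, using a Box's M-type statistic (log-determinant based).
import numpy as np

def box_m(groups):
    """Box's M-type statistic: (N-g) ln|S_pooled| - sum_i (n_i-1) ln|S_i|."""
    ns = [len(g) for g in groups]
    N, g = sum(ns), len(groups)
    covs = [np.cov(x, rowvar=False) for x in groups]
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (N - g)
    return (N - g) * np.linalg.slogdet(pooled)[1] - sum(
        (n - 1) * np.linalg.slogdet(S)[1] for n, S in zip(ns, covs))

def pooled_bootstrap_pvalue(groups, n_boot=999, seed=0):
    rng = np.random.default_rng(seed)
    observed = box_m(groups)
    # Centre each group and pool, so resamples are drawn under the null hypothesis.
    pooled_data = np.vstack([x - x.mean(axis=0) for x in groups])
    count = 0
    for _ in range(n_boot):
        boot_groups = [pooled_data[rng.integers(0, len(pooled_data), size=len(x))]
                       for x in groups]
        if box_m(boot_groups) >= observed:
            count += 1
    return (count + 1) / (n_boot + 1)

# Example: two groups with genuinely different covariance matrices.
rng = np.random.default_rng(1)
a = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=60)
b = rng.multivariate_normal([0, 0], [[2.0, -0.5], [-0.5, 0.5]], size=80)
print(pooled_bootstrap_pvalue([a, b]))
```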
94

Hybridation GPS/Vision monoculaire pour la navigation autonome d'un robot en milieu extérieur / Outdoor robotic navigation by GPS and monocular vision sensors fusion

Codol, Jean-Marie 15 February 2012 (has links)
On assiste aujourd'hui à l'importation des NTIC (Nouvelles Technologies de l'Information et de la Télécommunication) dans la robotique. L'union de ces technologies donnera naissance, dans les années à venir, à la robotique de service grand-public. Cet avenir, s'il se réalise, sera le fruit d'un travail de recherche, amont, dans de nombreux domaines : la mécatronique, les télécommunications, l'automatique, le traitement du signal et des images, l'intelligence artificielle ... Un des aspects particulièrement intéressants en robotique mobile est alors le problème de la localisation et de la cartographie simultanée. En effet, dans de nombreux cas, un robot mobile, pour accéder à une intelligence, doit nécessairement se localiser dans son environnement. La question est alors : quelle précision pouvons-nous espérer en termes de localisation ? Et à quel coût ? Dans ce contexte, un des objectifs de tous les laboratoires de recherche en robotique, objectif dont les résultats sont particulièrement attendus dans les milieux industriels, est un positionnement et une cartographie de l'environnement, qui soient à la fois précis, tous-lieux, intègre, bas-coût et temps-réel. Les capteurs de prédilection sont les capteurs peu onéreux tels qu'un GPS standard (de précision métrique), et un ensemble de capteurs embarquables en charge utile (comme les caméras-vidéo). Ce type de capteurs constituera donc notre support privilégié, dans notre travail de recherche. Dans cette thèse, nous aborderons le problème de la localisation d'un robot mobile, et nous choisirons de traiter notre problème par l'approche probabiliste. La démarche est la suivante, nous définissons nos 'variables d'intérêt' : un ensemble de variables aléatoires. Nous décrivons ensuite leurs lois de distribution, et leurs modèles d'évolution, enfin nous déterminons une fonction de coût, de manière à construire un observateur (une classe d'algorithme dont l'objectif est de déterminer le minimum de notre fonction de coût). Notre contribution consistera en l'utilisation de mesures GPS brutes (les mesures brutes - ou raw-datas - sont les mesures issues des boucles de corrélation de code et de phase, respectivement appelées mesures de pseudo-distances de code et de phase) pour une navigation bas-coût précise en milieu extérieur suburbain. En utilisant la propriété dite 'entière' des ambiguïtés de phase GPS, nous étendrons notre navigation pour réaliser un système GPS-RTK (Real Time Kinematic) en mode différentiel local, précis et bas-coût. Nos propositions sont validées par des expérimentations réalisées sur notre démonstrateur robotique. / We are witnessing nowadays the importation of ICT (Information and Communications Technology) into robotics. These technologies will give birth, in upcoming years, to general-public service robotics. This future, if realised, will be the result of upstream research conducted in several domains: mechatronics, telecommunications, automatic control, signal and image processing, artificial intelligence ... One particularly interesting aspect in mobile robotics is hence the simultaneous localisation and mapping problem. Indeed, to access certain information, a mobile robot has, in many cases, to map and localise itself inside its environment. The following question is then posed: What precision can we aim for in terms of localisation?
And at what cost? In this context, one of the objectives of laboratories engaged in robotics research, whose results are particularly awaited by industry, is a positioning and mapping of the environment that is at once precise, usable everywhere, reliable, low-cost and real-time. The preferred sensors are inexpensive ones, such as a standard GPS receiver (of metric precision) and a set of embeddable payload sensors (e.g. video cameras). This type of sensor is therefore the main support in our work. In this thesis, we address the localisation problem of a mobile robot, which we choose to handle with a probabilistic approach. The procedure is as follows: we first define our 'variables of interest', a set of random variables, and then we describe their distribution laws and their evolution models. Afterwards, we determine a cost function in such a manner as to build an observer (a class of algorithms whose objective is to minimise the cost function). Our contribution consists of using raw GPS measurements (raw data are the measurements produced by the code and phase correlation loops, respectively called code and carrier-phase pseudorange measurements) for precise, low-cost navigation in an outdoor suburban environment. By exploiting the integer property of GPS carrier-phase ambiguities, we extend the navigation to achieve a GPS-RTK (Real-Time Kinematic) system in a precise and low-cost local differential mode. Our proposals have been validated through experiments carried out on our robotic demonstrator.
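For readers unfamiliar with the raw GPS measurements mentioned above, the standard textbook observation equations for the code pseudorange and carrier phase are given below in generic notation (not taken from the thesis):

```latex
% Code pseudorange P and carrier phase \Phi between receiver r and satellite s:
\begin{aligned}
P_r^s    &= \rho_r^s + c\,(\delta t_r - \delta t^s) + I_r^s + T_r^s + \varepsilon_P,\\
\Phi_r^s &= \rho_r^s + c\,(\delta t_r - \delta t^s) - I_r^s + T_r^s + \lambda\,N_r^s + \varepsilon_\Phi .
\end{aligned}
```

Here ρ is the geometric receiver-satellite range, c the speed of light, δt_r and δt^s the receiver and satellite clock offsets, I and T the ionospheric and tropospheric delays, λ the carrier wavelength, and N the integer carrier-phase ambiguity whose integer nature RTK ambiguity fixing exploits.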
95

Prediction and variable selection in sparse ultrahigh dimensional additive models

Ramirez, Girly Manguba January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Haiyan Wang / Advances in technology have enabled many fields to collect datasets where the number of covariates (p) tends to be much bigger than the number of observations (n), the so-called ultrahigh dimensionality. In this setting, classical regression methodologies are invalid. There is a great need to develop methods that can explain the variation of the response variable using only a parsimonious set of covariates. In recent years, there have been significant developments in variable selection procedures. However, the available procedures usually result in the selection of too many false variables. In addition, most of the available procedures are appropriate only when the response variable is linearly associated with the covariates. Motivated by these concerns, we propose another procedure for variable selection in the ultrahigh dimensional setting which has the ability to reduce the number of false positive variables. Moreover, this procedure can be applied when the response variable is continuous or binary, and when the response variable is linearly or non-linearly related to the covariates. Inspired by the Least Angle Regression approach, we develop two multi-step algorithms to select variables in sparse ultrahigh dimensional additive models. The variables go through a series of nonlinear dependence evaluations following a Most Significant Regression (MSR) algorithm. In addition, the MSR algorithm is also designed to implement prediction of the response variable. The first algorithm, called MSR-continuous (MSRc), is appropriate for a dataset with a continuous response variable. Simulation results demonstrate that this algorithm works well. Comparisons with other methods, such as greedy-INIS by Fan et al. (2011) and the generalized correlation procedure by Hall and Miller (2009), showed that MSRc not only has a false positive rate significantly lower than both methods, but also has accuracy and a true positive rate comparable with greedy-INIS. The second algorithm, called MSR-binary (MSRb), is appropriate when the response variable is binary. Simulations demonstrate that MSRb is competitive in terms of prediction accuracy and true positive rate, and better than GLMNET in terms of false positive rate. Application of MSRb to real datasets is also presented. In general, the MSR algorithm usually selects fewer variables while preserving the accuracy of predictions.
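The MSR algorithm itself is not specified in this abstract. Purely as an illustration of the kind of marginal nonlinear screening step that such multi-step procedures build on, the sketch below ranks covariates by the fit of a univariate cubic-spline regression of the response on each covariate; it is a generic stand-in, not the authors' method, and all names in it are hypothetical.

```python
# Generic illustration of a marginal nonlinear screening step (NOT the MSR
# algorithm from the thesis): rank each covariate by how much a univariate
# cubic-spline regression of y on x_j reduces the residual sum of squares.
import numpy as np

def spline_basis(x, n_knots=5):
    """Truncated-power cubic spline basis with interior knots at quantiles."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def marginal_screen(X, y, keep=10):
    """Return indices of the `keep` covariates with the best marginal spline fit."""
    rss = []
    for j in range(X.shape[1]):
        B = spline_basis(X[:, j])
        beta, *_ = np.linalg.lstsq(B, y, rcond=None)
        rss.append(np.sum((y - B @ beta) ** 2))
    return np.argsort(rss)[:keep]

# Example: n = 100 observations, p = 2000 covariates, only x_0 and x_1 matter
# (one of them nonlinearly), mimicking the sparse ultrahigh-dimensional setting.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2000))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=100)
print(marginal_screen(X, y, keep=5))
```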
96

Measurement of biomass concentration using a microwave oven and analysis of data for estimation of specific rates

Buono, Mark Anthony. January 1985 (has links)
Call number: LD2668 .T4 1985 B86 / Master of Science
97

ANALYSIS OF VOCAL FOLD KINEMATICS USING HIGH SPEED VIDEO

Unnikrishnan, Harikrishnan 01 January 2016 (has links)
Vocal folds are the twin in-foldings of the mucous membrane stretched horizontally across the larynx. They vibrate, modulating the constant airflow initiated from the lungs. The pulsating pressure wave blowing through the glottis is thus the source for voiced speech production. Study of vocal fold dynamics during voicing is critical for the treatment of voice pathologies. Since the vocal folds move at 100-350 cycles per second, their visual inspection is currently done by stroboscopy, which merges information from multiple cycles to present an apparent motion. High Speed Digital Laryngeal Imaging (HSDLI), with a temporal resolution of up to 10,000 frames per second, has been established as better suited for assessing the vocal fold vibratory function through direct recording. But the widespread use of HSDLI is limited due to lack of consensus on modalities such as the features to be examined. Development of image processing techniques that circumvent the tedious and time-consuming effort of examining large volumes of recordings still has room for improvement. Fundamental questions, such as the required frame rate or resolution for the recordings, are still not adequately answered. HSDLI also cannot provide absolute physical measurements of the anatomical features and vocal fold displacement. This work addresses these challenges through improved signal processing. A vocal fold edge extraction technique with subpixel accuracy, suited even for the hard-to-record pediatric population, is developed first. The algorithm, which is equally applicable to pediatric and adult subjects, is implemented to facilitate user inspection and intervention. Objective features describing the fold dynamics, extracted from the edge displacement waveform, are proposed and analyzed on a diverse dataset of healthy males, females and children. The sampling and quantization noise present in the recordings is analyzed and methods to mitigate it are investigated. A customized Kalman smoothing and spline interpolation on the displacement waveform is found to improve the stability of feature estimation. The relationship between frame rate, spatial resolution and vibration for efficient capture of information is derived. Finally, to address the inability to obtain absolute physical measurements, a structured light projection calibrated with respect to the endoscope is prototyped.
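The customized Kalman smoother mentioned above is not detailed in the abstract. The sketch below shows a generic constant-velocity Kalman filter with a Rauch-Tung-Striebel smoothing pass applied to a noisy displacement waveform, only to illustrate the kind of smoothing involved; the state model, noise parameters and sampling rate are assumptions, not the author's.

```python
# Generic illustration (not the thesis's customized smoother): a constant-velocity
# Kalman filter followed by a Rauch-Tung-Striebel smoothing pass, applied to a
# noisy vocal fold edge-displacement waveform sampled at a high frame rate.
import numpy as np

def kalman_rts_smooth(z, dt, q=50.0, r=0.05):
    """z: noisy displacement samples; q: process noise scale; r: measurement variance."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])                     # we observe displacement only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    n = len(z)
    x_f = np.zeros((n, 2)); P_f = np.zeros((n, 2, 2))
    x_p = np.zeros((n, 2)); P_p = np.zeros((n, 2, 2))
    x, P = np.array([z[0], 0.0]), np.eye(2)
    for k in range(n):                             # forward filtering pass
        x_p[k], P_p[k] = F @ x, F @ P @ F.T + Q
        S = H @ P_p[k] @ H.T + R
        K = P_p[k] @ H.T @ np.linalg.inv(S)
        x = x_p[k] + (K @ (z[k] - H @ x_p[k])).ravel()
        P = (np.eye(2) - K @ H) @ P_p[k]
        x_f[k], P_f[k] = x, P
    x_s = x_f.copy()
    for k in range(n - 2, -1, -1):                 # backward RTS smoothing pass
        G = P_f[k] @ F.T @ np.linalg.inv(P_p[k + 1])
        x_s[k] = x_f[k] + G @ (x_s[k + 1] - x_p[k + 1])
    return x_s[:, 0]                               # smoothed displacement

# Example: a 200 Hz vibration sampled at 4000 frames/s with measurement noise.
fps, f0 = 4000.0, 200.0
t = np.arange(0, 0.05, 1.0 / fps)
z = np.sin(2 * np.pi * f0 * t) + np.random.default_rng(0).normal(0, 0.2, t.size)
smoothed = kalman_rts_smooth(z, dt=1.0 / fps)
```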
98

Factors influencing U.S. canine heartworm (Dirofilaria immitis) prevalence

Wang, Dongmei, Bowman, Dwight, Brown, Heidi, Harrington, Laura, Kaufman, Phillip, McKay, Tanja, Nelson, Charles, Sharp, Julia, Lund, Robert January 2014 (has links)
BACKGROUND: This paper examines the individual factors that influence prevalence rates of canine heartworm in the contiguous United States. A data set provided by the Companion Animal Parasite Council, which contains county-by-county results of over nine million heartworm tests conducted during 2011 and 2012, is analyzed for predictive structure. The goal is to identify the factors that are important in predicting high canine heartworm prevalence rates. METHODS: The factors considered in this study are those envisioned to impact whether a dog is likely to have heartworm. The factors include climate conditions (annual temperature, precipitation, and relative humidity), socio-economic conditions (population density, household income), local topography (surface water and forestation coverage, elevation), and vector presence (several mosquito species). A baseline heartworm prevalence map is constructed using estimated proportions of positive tests in each county of the United States. A smoothing algorithm is employed to remove localized small-scale variation and highlight large-scale structures of the prevalence rates. Logistic regression is used to identify significant factors for predicting heartworm prevalence. RESULTS: All of the examined factors have power in predicting heartworm prevalence, including median household income, annual temperature, county elevation, and presence of the mosquitoes Aedes trivittatus, Aedes sierrensis and Culex quinquefasciatus. Interactions among factors also exist. CONCLUSIONS: The factors identified are significant in predicting heartworm prevalence. The factor list is likely incomplete due to data deficiencies. For example, coyotes and feral dogs are known reservoirs of heartworm infection, but no complete data on their populations were available. The regression model considered is currently being explored to forecast future values of heartworm prevalence.
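As a hedged sketch of the kind of model described in METHODS (not the paper's actual specification or data), a binomial logistic regression of county-level positive-test counts on candidate risk factors could be set up as follows; the file name and column names are hypothetical placeholders.

```python
# Illustrative sketch only (not the paper's actual model or data): a binomial GLM
# relating county-level heartworm test results to candidate risk factors.
# File and column names below are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

counties = pd.read_csv("county_heartworm.csv")   # hypothetical input file
counties["negatives"] = counties["tests"] - counties["positives"]

# Positives/negatives per county as the binomial response; factors named in the study.
model = smf.glm(
    "positives + negatives ~ median_income + annual_temp + elevation "
    "+ precipitation + relative_humidity + annual_temp:relative_humidity",
    data=counties,
    family=sm.families.Binomial(),
).fit()
print(model.summary())
```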
99

Empirical Bayesian Smoothing Splines for Signals with Correlated Errors: Methods and Applications

Rosales Marticorena, Luis Francisco 22 June 2016 (has links)
No description available.
100

Some statistical aspects of LULU smoothers

Jankowitz, Maria Dorothea 12 1900 (has links)
Thesis (PhD (Statistics and Actuarial Science))--University of Stellenbosch, 2007. / The smoothing of time series plays a very important role in various practical applications. Estimating the signal and removing the noise is the main goal of smoothing. Traditionally linear smoothers were used, but nonlinear smoothers have become more popular through the years. From the family of nonlinear smoothers, the class of median smoothers, based on order statistics, is the most popular. A new class of nonlinear smoothers, called LULU smoothers, was developed by using the minimum and maximum selectors. These smoothers have very attractive mathematical properties. In this thesis their statistical properties are investigated and compared to those of the class of median smoothers. Smoothing, together with related concepts, is discussed in general. Thereafter, the class of median smoothers from the literature is discussed. The class of LULU smoothers is defined, their properties are explained and new contributions are made. The compound LULU smoother is introduced and its property of variation decomposition is discussed. The probability distributions of some LULU smoothers with independent data are derived. LULU smoothers and median smoothers are compared according to the properties of monotonicity, idempotency, co-idempotency, stability, edge preservation, output distributions and variation decomposition. A comparison is made of their respective abilities for signal recovery by means of simulations. The success of the smoothers in recovering the signal is measured by the integrated mean square error and the regression coefficient calculated from the least squares regression of the smoothed sequence on the signal. Finally, LULU smoothers are applied in practice.
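To make the minimum/maximum construction concrete, here is a small sketch of the basic LULU operators L_n and U_n, assuming Rohwer's standard definitions, together with one composite smoother; the edge-replication padding is an implementation choice, and this is not the thesis code.

```python
# Minimal sketch (assuming Rohwer's standard definitions, not the thesis code):
# (L_n x)_i = max over windows of length n+1 containing i of the window minimum,
# (U_n x)_i = min over windows of length n+1 containing i of the window maximum.
# L_n removes upward spikes of width <= n; U_n removes downward pits.
import numpy as np

def _windows(x, n):
    """All sliding windows of length n+1 of x padded by edge replication."""
    xp = np.pad(x, n, mode="edge")
    return np.lib.stride_tricks.sliding_window_view(xp, n + 1)

def L(x, n):
    win_min = _windows(x, n).min(axis=1)
    return np.lib.stride_tricks.sliding_window_view(win_min, n + 1).max(axis=1)

def U(x, n):
    win_max = _windows(x, n).max(axis=1)
    return np.lib.stride_tricks.sliding_window_view(win_max, n + 1).min(axis=1)

def UL(x, n):
    """The composite smoother U_n L_n: removes upward spikes first, then pits."""
    return U(L(x, n), n)

# Example: a noisy sinusoid with isolated impulsive spikes of width <= 2.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.normal(size=t.size)
x[[40, 41, 120]] += 4.0                      # impulsive noise
smoothed = UL(x, n=2)
```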
