341

Kommunal redovisning : förekomsten av artificiell resultatstyrning i kommuner / Municipal accounting : the existence of artificial earnings management in municipalities

Samuelsson, Karin; Hultberg, Ellen, January 2017
This thesis examines the occurrence of earnings management in Swedish municipalities. Previous research shows that such management mainly takes the form of accruals, and that the main explanatory factors are the municipality's finances, politics, and civil servants. Existing theory holds that the transition from cash accounting to accrual accounting made earnings management easier, and that it occurs in both Swedish municipalities and companies. The study seeks to explain this occurrence using the accounting item "contributions to national infrastructure" ("bidrag till statlig infrastruktur"), asking what the main incentives for managing reported results are and how those incentives affect the decision-making process. It is a quantitative documentary study whose empirical material is drawn mainly from municipal annual reports; several statistical analyses lead to the results. The findings show that earnings management does occur in municipalities and that decisions are driven mainly by short-term incentives: the period over which a municipality chooses to amortize a contribution depends chiefly on the size of the contribution and on whether the municipality received large one-off amounts that year, for example AFA insurance refunds, AFA premium repayments, or cyclical government support. This indicates short-term thinking with a strong focus on net income. The thesis itself is written in Swedish.
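The abstract does not reproduce the authors' statistical models, but the stated finding — that the amortization period depends on grant size and on one-off receipts — suggests a regression of roughly the following shape. This is a sketch on synthetic data; the variable names, coefficients, and OLS specification are assumptions, not the thesis's actual model:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-ins for the study's variables: amortization period chosen
# for an infrastructure contribution (years), contribution size (MSEK), and
# a dummy for large one-off receipts (e.g. AFA refunds) that year.
rng = np.random.default_rng(0)
n = 120
grant_size = rng.lognormal(mean=2.0, sigma=0.8, size=n)
one_off = rng.integers(0, 2, size=n)
period = np.clip(3 + 0.2 * grant_size + 2.0 * one_off + rng.normal(0, 1.5, n), 1, 25)

# Regress the chosen amortization period on the two candidate explanations.
X = sm.add_constant(np.column_stack([grant_size, one_off]))
print(sm.OLS(period, X).fit().summary())
```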
342

Statistical Analysis and Modeling Health Data: A Longitudinal Study

Tharu, Bhikhari Prasad, 09 June 2016
Lung cancer is one of the leading causes of death in the USA, where cancer overall remains the second most common cause of death. Understanding the behavior of a disease over time yields important information for decision making: statistical models can provide crucial clues for diagnosis, budget allocation, evaluation, and prevention, and longitudinal trend analysis helps to understand a disease's long-term nature and effects. Cholesterol level is likewise one of the major risk factors for coronary heart disease, and studying it statistically provides crucial information for mitigating its impact on public health. In this study, we analyze lung cancer mortality in the USA by age at death, period at death, and birth cohort to investigate its longitudinal behavior. We estimate mortality rates for different age groups and the relative risk of mortality attributable to period effects and to birth cohort. Our statistical analysis and modeling are based on data from the Surveillance, Epidemiology, and End Results (SEER) program of the United States. We also investigate the probabilistic behavior of average cholesterol level by gender and ethnicity. The study reveals significant differences in the distributions these groups follow and in the resulting inferences, which can inform conclusions on related public-health issues. The change in an individual's cholesterol level over time is also a good source for studying the association between cholesterol level, coronary heart disease, and age. The cholesterol data are obtained from the Inter-university Consortium for Political and Social Research (ICPSR) and the National Health and Nutrition Examination Survey (NHANES). We study the longitudinal association of sex and time with total serum cholesterol level as people age, and observe that age, sex, and time each have individual effects and can act differently when considered jointly; their adverse effects in raising cholesterol levels could worsen cholesterol-related, and hence heart-related, diseases. We believe this study helps characterize the target population's cholesterol levels and supports useful public-health inference. Finally, we analyze the average cholesterol data with a functional data analysis approach; since this approach offers more modeling flexibility, it can provide additional insight into cholesterol level and its dependence on age.
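To make the age-period-cohort setup concrete, here is a minimal sketch of a Poisson rate model on synthetic SEER-style counts. The table layout, age groups, and the two-model identification workaround are illustrative assumptions, not the dissertation's actual specification:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic SEER-style table: deaths and person-years by age group and period.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age": np.repeat([50, 55, 60, 65, 70], 4),
    "period": np.tile([1990, 1995, 2000, 2005], 5),
})
df["cohort"] = df["period"] - df["age"]
df["pyears"] = rng.integers(90_000, 110_000, size=len(df))
df["deaths"] = rng.poisson(df["pyears"] * 1e-3)

# Because cohort = period - age, the full age-period-cohort model is not
# identified; one common workaround is to fit age-period and age-cohort
# models separately and read the relative risks off each.
ap = smf.glm("deaths ~ C(age) + C(period)", data=df,
             family=sm.families.Poisson(), offset=np.log(df["pyears"])).fit()
ac = smf.glm("deaths ~ C(age) + C(cohort)", data=df,
             family=sm.families.Poisson(), offset=np.log(df["pyears"])).fit()
print(np.exp(ap.params))  # rate ratios: age effects and period relative risks
print(np.exp(ac.params))  # rate ratios: age effects and cohort relative risks
```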
343

Performance Analysis Of Root-MUSIC With Spatial Smoothing For Arbitrary And Uniform Circular Arrays

Reddy, K Maheswara, 07 1900
No description available.
344

Metody konstrukce výnosové křivky státních dluhopisů na českém dluhopisovém trhu / Methods for construction of zero-coupon yield curve from the Czech coupon bond market

Hladíková, Hana, January 2008
The zero-coupon yield curve is one of the most fundamental tools in finance and is essential to the pricing of various fixed-income securities. Zero-coupon rates are not observable in the market for the full range of maturities, so an estimation methodology is required to derive the zero-coupon yield curve from observable data. When approximating empirical data to create yield curves, it is necessary to choose suitable mathematical functions. We discuss the following methods: methods based on cubic spline functions, methods employing a linear combination of Fourier or exponential basis functions, and the parametric model of Nelson and Siegel. The mathematical apparatus currently employed for this kind of approximation is outlined. To find the models' parameters, we minimize the least-squares distance between computed and observed prices. The theoretical background is applied to the estimation of zero-coupon yield curves from the Czech coupon bond market. Choosing proper smoothing functions and bond weights is crucial for selecting the method that performs best according to the given criteria. The best performance is obtained with B-spline models with smoothing.
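Of the methods listed, the Nelson-Siegel model is the easiest to show compactly. The sketch below fits its four parameters to made-up zero rates by ordinary least squares; the thesis itself minimizes weighted errors in bond prices, so treat this as a simplified illustration:

```python
import numpy as np
from scipy.optimize import least_squares

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel zero-coupon rate for maturity tau (in years)."""
    x = tau / lam
    h = (1.0 - np.exp(-x)) / x
    return beta0 + beta1 * h + beta2 * (h - np.exp(-x))

# Made-up observed zero rates for a set of maturities.
maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 15])
observed = np.array([0.020, 0.021, 0.023, 0.026, 0.028, 0.031, 0.033, 0.035, 0.036])

fit = least_squares(lambda p: nelson_siegel(maturities, *p) - observed,
                    x0=[0.03, -0.01, 0.01, 2.0])
beta0, beta1, beta2, lam = fit.x
print(f"beta0={beta0:.4f} beta1={beta1:.4f} beta2={beta2:.4f} lambda={lam:.2f}")
```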
345

Forecasting Ability of the Phillips Curve / Předpověď inflace Euro zóny pomocí Phillipsovy křivky

Michálková, Simona, January 2015
The aim of this paper is to investigate various versions of the Phillips curve and their ability to forecast inflation in the Euro Area. We consider autoregressive distributed lag models and use two types of trend estimation: successive (the trend is estimated before the remaining parameters) and joint, using exponential smoothing. The versions of the Phillips curve are evaluated with rolling- and recursive-window methods, various selection criteria for lagged variables, and different combinations of inflation indicators. To evaluate the forecasts, we calculate the RMSE over three 7-year periods: 1993-1999 (the run-up to the Euro Area), 2000-2006 (a stable-inflation period), and 2007-2013 (the financial crisis). Across all our modifications, we find models that achieve satisfactory RMSE results, albeit not for all forecasting periods: some models perform well only in the stable period but not in periods of low inflation, and vice versa.
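As a simplified illustration of this setup, the sketch below estimates an autoregressive distributed lag Phillips curve on synthetic inflation and unemployment series and scores one-step rolling-window forecasts by RMSE. The lag orders, window length, and data-generating process are assumptions, not the paper's choices:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 160                                        # synthetic quarterly sample
unemp = 8 + np.cumsum(rng.normal(0, 0.1, T))   # slow-moving unemployment
infl = 2 - 0.3 * (unemp - 8) + rng.normal(0, 0.3, T)

def adl_forecast(y, x, p=2, q=2):
    """One-step-ahead forecast from an ADL(p, q) Phillips curve fit by OLS."""
    rows, targets = [], []
    for t in range(max(p, q), len(y)):
        rows.append(np.r_[1.0, y[t - p:t][::-1], x[t - q:t][::-1]])
        targets.append(y[t])
    beta = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)[0]
    return np.r_[1.0, y[-p:][::-1], x[-q:][::-1]] @ beta

# Rolling-window evaluation: re-estimate on a fixed window, score by RMSE.
window, errs = 80, []
for t in range(window, T - 1):
    f = adl_forecast(infl[t - window:t], unemp[t - window:t])
    errs.append(infl[t] - f)
print("rolling-window RMSE:", np.sqrt(np.mean(np.square(errs))))
```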
346

Méthodes de lissage et d'estimation dans des modèles à variables latentes par des méthodes de Monte-Carlo séquentielles / Smoothing and estimation methods in hidden variable models through sequential Monte-Carlo methods

Dubarry, Cyrille, 09 October 2012
Hidden Markov chain models, or more generally Feynman-Kac models, are now widely used: they allow the modelling of a large variety of time series (in finance, biology, signal processing, ...). Their increasing complexity gave birth to approximations via Monte-Carlo methods, among them Markov Chain Monte-Carlo (MCMC) and Sequential Monte-Carlo (SMC). This thesis deals with SMC methods applied to particle filtering and smoothing, which approximate the law of interest through a sequentially defined particle population. Various algorithms have already been developed and studied in the literature. We refine some of these results in the particular cases of the Forward Filtering Backward Smoothing and Forward Filtering Backward Simulation algorithms by proving exponential deviation inequalities and non-asymptotic upper bounds on the mean error. We also introduce a new smoothing algorithm that improves a particle population through MCMC iterations and allows the estimator's variance to be estimated without any further simulation. Part of the work presented in this thesis is devoted to the parallel computation of particle estimators; we propose different interaction schemes between several particle populations. Finally, we illustrate the use of hidden Markov chains in the modelling of financial data with an algorithm that uses Expectation-Maximization to calibrate the parameters of the multiscale exponential Ornstein-Uhlenbeck stochastic volatility model.
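To give a feel for forward filtering followed by backward simulation, here is a minimal bootstrap particle filter plus one backward-simulated trajectory on a synthetic linear-Gaussian model (where the Kalman smoother would be exact, making it a convenient test case). The model parameters and particle counts are arbitrary; this is the standard FFBSi baseline the thesis refines, not its algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 100, 500                  # time steps, particles
phi, sq, sr = 0.9, 0.5, 1.0      # AR coefficient, state noise std, obs noise std

# Simulate a linear-Gaussian state-space model so the example is self-contained.
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + sq * rng.normal()
y = x + sr * rng.normal(size=T)

# Forward bootstrap particle filter, storing particles and normalized weights.
P = np.zeros((T, N))
W = np.zeros((T, N))
P[0] = rng.normal(0.0, 1.0, N)
w = np.exp(-0.5 * ((y[0] - P[0]) / sr) ** 2)
W[0] = w / w.sum()
for t in range(1, T):
    anc = rng.choice(N, size=N, p=W[t - 1])          # multinomial resampling
    P[t] = phi * P[t - 1][anc] + sq * rng.normal(size=N)
    w = np.exp(-0.5 * ((y[t] - P[t]) / sr) ** 2)
    W[t] = w / w.sum()

# Backward simulation (FFBSi): draw one smoothed trajectory from back to front.
traj = np.zeros(T)
j = rng.choice(N, p=W[-1])
traj[-1] = P[-1, j]
for t in range(T - 2, -1, -1):
    bw = W[t] * np.exp(-0.5 * ((traj[t + 1] - phi * P[t]) / sq) ** 2)
    j = rng.choice(N, p=bw / bw.sum())
    traj[t] = P[t, j]
print("smoothed mean abs error:", np.abs(traj - x).mean())
```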
347

Machine learning methods for seasonal allergic rhinitis studies

Feng, Zijie, January 2021
Seasonal allergic rhinitis (SAR) is a disease triggered by allergens and shaped by both environmental and genetic factors. Researchers have previously studied SAR with traditional genetic methodologies. As technology has developed, a newer technique, single-cell RNA sequencing (scRNA-seq), can generate high-dimensional data. We apply two machine learning (ML) algorithms, random forest (RF) and partial least squares discriminant analysis (PLS-DA), for cell-source classification and gene selection on SAR scRNA-seq time-series data from three allergic patients and four healthy controls, denoised with single-cell variational inference (scVI). We additionally propose a fitting method combining the bootstrap with cubic smoothing splines to fit the averaged gene expression per cell across different populations. In summary, we find that both RF and PLS-DA provide high classification accuracy, with RF preferable for its stable performance and strong gene-selection ability. Based on our analysis, 10 genes have discriminatory power to separate cells of allergic patients from healthy controls at any time point. Although we found no literature showing direct connections between these 10 genes and SAR, some studies indirectly confirm potential associations. This suggests the possibility of alerting allergic patients before a disease outbreak based on their genetic information. Our results also indicate that, compared with traditional techniques, ML algorithms may uncover relationships between genes and SAR that merit further genetic analysis.
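A minimal sketch of the RF part of such a pipeline on synthetic data: classify cells by donor group and rank genes by impurity-based importance. The matrix shapes, the planted signal, and all parameters are assumptions; the study's actual pipeline works on scVI-denoised expression with its own tuning:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a denoised expression matrix: cells x genes,
# labels 0 = healthy control, 1 = allergic patient. Sizes are made up.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 1000))
y = rng.integers(0, 2, size=300)
X[y == 1, :10] += 1.0          # plant a weak signal in the first 10 "genes"

rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("5-fold CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())

rf.fit(X, y)
top10 = np.argsort(rf.feature_importances_)[::-1][:10]
print("Top 10 genes by importance:", top10)
```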
348

Enhancing the Efficacy of Predictive Analytical Modeling in Operational Management Decision Making

Najmizadehbaghini, Hossein, 08 1900
In this work, we focus on enhancing the efficacy of predictive modeling for operational management decision making in two settings: Essay 1 focuses on demand forecasting for companies, and the second study uses longitudinal data to analyze illicit drug seizures and overdose deaths in the United States.

In Essay 1, we use an operational system (the newsvendor model) to evaluate forecast outcomes and provide guidelines for assessing the performance of a forecasting method (the exponential smoothing family) and for judgmental adjustments. To assess the forecast outcome, we consider not only the common forecast-error-minimization approach but also profit maximization at the end of the forecast horizon; including profit in the assessment lets us determine whether error minimization always results in maximum profit. We also examine different profit-margin levels and different demand patterns to analyze their impact on forecasting performance. Our study shows that the exponential smoothing family performs better on high-profit products, and that its performance declines faster with demand uncertainty in a stationary demand environment.

In the second essay, we focus on the illicit drug overdose death rate. Illicit drug overdoses are the leading cause of injury death in the United States. In 2017, overdose deaths reached the highest level ever recorded (70,237), and statistics show the problem is growing: the age-adjusted rate of drug overdose deaths in 2017 (21.7 per 100,000) was 9.6% higher than in 2016 (19.8 per 100,000) (U.S. Drug Enforcement Administration, 2018, p. V). Marijuana consumption among youth has also increased since 2009. The magnitude of the illegal drug trade and its resulting problems have led the government to produce large, comprehensive datasets on a variety of phenomena relating to illicit drugs. We use these datasets to examine how marijuana usage among youth influences excessive drug usage, measured as the drug overdose death rate per state. Our study shows that illegal marijuana consumption increases excessive drug use. We also analyze the pattern of the most frequently seized illicit drugs, compare it with the drugs most frequently involved in overdose deaths, and extend the analysis to seizure patterns across the layers of the heroin and cocaine supply chains across states. This reveals that the most active layers of the heroin supply chain in the American market are retailers and wholesalers, while multi-kilo traffickers are the most active players in the cocaine supply chain. In summary, the studies in this dissertation explore analytical, descriptive, and predictive models that detect patterns to improve operational management decision making.
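Essay 1's pairing of a forecasting method with an operational criterion can be sketched as follows: a simple exponential smoothing forecast feeds a newsvendor order quantity, and we track both RMSE and realized profit. The demand process, the smoothing constant, the prices, and the normal-demand assumption behind the critical fractile are all illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

def ses(y, alpha=0.3):
    """Simple exponential smoothing; returns the one-step-ahead forecast."""
    f = y[0]
    for obs in y[1:]:
        f = alpha * obs + (1 - alpha) * f
    return f

rng = np.random.default_rng(4)
demand = rng.normal(100, 20, 60).clip(min=0)   # synthetic demand history

price, cost = 12.0, 8.0                        # assumed price and unit cost, zero salvage
cu, co = price - cost, cost                    # underage and overage costs
fractile = cu / (cu + co)                      # newsvendor critical fractile
sigma = demand.std(ddof=1)                     # crude demand-uncertainty estimate

profits, sq_err = [], []
for t in range(30, len(demand)):
    f = ses(demand[:t])
    q = f + sigma * norm.ppf(fractile)         # order quantity around the forecast
    sold = min(q, demand[t])
    profits.append(price * sold - cost * q)
    sq_err.append((demand[t] - f) ** 2)
print("RMSE:", np.sqrt(np.mean(sq_err)), "| mean profit:", np.mean(profits))
```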
349

[pt] INSERÇÃO DE VARIÁVEIS EXÓGENAS NO MODELO HOLT-WINTERS COM MÚLTIPLOS CICLOS PARA PREVISÃO DE DADOS DE ALTA FREQUÊNCIA OBSERVACIONAL DE DEMANDA DE ENERGIA ELÉTRICA / [en] INTRODUCE EXOGENOUS VARIABLES IN HOLT-WINTERS EXPONENTIAL SMOOTHING WITH MULTIPLE SEASONAL PATTERNS HIGH FREQUENCY ELECTRICITY DEMAND OBSERVATIONS

05 November 2021
The aim of this thesis is to insert exogenous variables into the genuinely univariate Holt-Winters model with multiple seasonal cycles. Hourly electricity demand data from a city in southeastern Brazil are used together with temperature data, both in raw and derived form, for example indicators of hot days known as cooling degree days (CDD). The goal is to improve the model's predictive power, generating more accurate forecasts.
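One way to picture the exogenous term is the cooling-degree computation itself plus a crude way of injecting it into a univariate smoothing pipeline. Everything here — the base temperature, the synthetic series, and regressing the CDD effect out before smoothing — is an illustrative assumption, not the dissertation's formulation, which embeds the exogenous variables inside the multi-seasonal Holt-Winters recursions:

```python
import numpy as np
import pandas as pd

# Synthetic hourly load and temperature for one year (all values made up).
idx = pd.date_range("2012-01-01", periods=24 * 365, freq="h")
rng = np.random.default_rng(5)
temp = 22 + 8 * np.sin(2 * np.pi * idx.dayofyear / 365) + rng.normal(0, 2, len(idx))
load = 500 + 10 * np.maximum(temp - 24, 0) + rng.normal(0, 15, len(idx))
df = pd.DataFrame({"load": load, "temp": temp}, index=idx)

# Cooling degree days: positive excess of the daily mean temperature over a
# base temperature (the 24 C base here is an assumption).
BASE = 24.0
daily_cdd = (df["temp"].resample("D").mean() - BASE).clip(lower=0)
df["cdd"] = daily_cdd.reindex(df.index, method="ffill")

# A crude two-stage alternative to a built-in exogenous term: regress load
# on CDD, then hand the residual series to a purely univariate smoother.
beta = np.polyfit(df["cdd"], df["load"], 1)
df["resid"] = df["load"] - np.polyval(beta, df["cdd"])
print(df[["load", "cdd", "resid"]].head())
```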
350

Tvorba 3D modelu čelistního kloubu / Creating 3D Model of Temporomandibular Joint

Šmirg, Ondřej, January 2015
This dissertation deals with 3D reconstruction of the temporomandibular joint from 2D tissue slices obtained by magnetic resonance imaging. Current practice uses 2D MRI slices for diagnosis, whereas 3D models offer many diagnostic advantages based on spatial information. Contemporary medicine already uses 3D tissue models, but for the temporomandibular joint the articular disc is very difficult to segment: this small tissue has low contrast and statistical characteristics very similar to its surroundings. New segmentation methods were therefore developed, based on anatomical knowledge of the joint area around the disc and on genetic-algorithm-based statistics. A set of 2D slices has different resolutions along the x-, y-, and z-axes, so an up-sampling algorithm that preserves the tissue's shape properties was developed to unify the resolutions. In the final phase of creating the 3D models, standard smoothing and decimation methods were used, but with different settings (number of polygons in the model, number of iterations). Since the aim is to obtain as precise a model of the real tissue as possible, an objective method was needed to tune these algorithms to the best compromise between distortion and model credibility.
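As context for the resolution-unification step, the sketch below shows the generic baseline: cubic-spline upsampling of the slice axis so that voxels become (nearly) isotropic. The voxel spacings are assumptions, and the thesis's own algorithm is shape-preserving rather than this plain interpolation:

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical MRI volume: 24 slices of 256x256, with anisotropic voxels.
vol = np.random.rand(24, 256, 256).astype(np.float32)
spacing = (3.0, 0.5, 0.5)                  # assumed z, y, x voxel size in mm

# Upsample the slice (z) axis with cubic spline interpolation so the voxel
# spacing matches the in-plane resolution.
factors = (spacing[0] / spacing[1], 1.0, 1.0)
iso = zoom(vol, factors, order=3)
print(vol.shape, "->", iso.shape)          # (24, 256, 256) -> (144, 256, 256)
```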
