331 |
What determines the amount of reported goodwill impairment? An investigation of Nasdaq Stockholm OMX (OMXS)
Friberg, Gusten; Åström Johansson, Carl (January 2018)
Background: How to account for goodwill has long been a subject of intense debate among actors in financial accounting. In 2004, the IASB released a new standard, IFRS 3 - Business Combinations, that changed the accounting for goodwill. Studies of how IAS 36 goodwill impairments are applied in practice have found patterns of earnings management, suggesting a possible gap between the standard setter's basic aim of IAS 36 and what practitioners actually do. Purpose: To examine what determines the amount of reported goodwill impairment for firms listed on the Nasdaq OMX Stockholm (OMXS). Method: To fulfil the purpose of the thesis, the authors take a quantitative research approach, using a multiple linear regression model. The regression model is based on proxies for economic impairment, earnings management and corporate governance mechanisms drawn from previous literature (Stenheim & Madsen, 2016; AbuGhazaleh, Al-Hares, & Roberts, 2011; Riedl, 2004). The data for the regression model were collected from the published annual reports of 69 firms listed on the Nasdaq OMX Stockholm (OMXS) between 2007 and 2016. Conclusion: The findings show that "Big Bath" accounting behaviour is exercised by firms listed on the Nasdaq OMX Stockholm (OMXS). The proxies for economic impairment have, to some extent, an impact on the amount of reported goodwill impairment, but the majority of the proxies for corporate governance mechanisms do not. These findings suggest that IAS 36, which regulates the accounting for goodwill, may not entirely fulfil its purpose of creating more transparent financial reporting.
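A minimal sketch of the kind of pooled regression the thesis describes, using statsmodels; the file name and every proxy column below are hypothetical placeholders, not the thesis's actual variable definitions.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical firm-year panel; all column names are illustrative placeholders.
df = pd.read_csv("omxs_goodwill_panel.csv")

y = df["gw_impairment"]              # reported impairment scaled by total assets
X = df[[
    "delta_roa",                     # economic-impairment proxy: change in ROA
    "delta_gdp",                     # economic-impairment proxy: macro conditions
    "ceo_change",                    # earnings-management proxy: new CEO ("big bath")
    "pre_impairment_earnings",       # earnings-management proxy: smoothing incentive
    "board_independence",            # governance proxy
]]
X = sm.add_constant(X)

# Pooled OLS with heteroskedasticity-robust standard errors
model = sm.OLS(y, X, missing="drop").fit(cov_type="HC1")
print(model.summary())
```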
|
332 |
Previsão da demanda de energia elétrica por combinações de modelos lineares e de inteligência computacional [Electricity demand forecasting by combinations of linear and computational intelligence models]
Defilippo, Samuel Belini (20 September 2017)
The production, transmission and distribution of electric energy occur concomitantly with its consumption. This is necessary because there is as yet no feasible way to store energy in large quantities; the energy generated must therefore be consumed almost instantaneously. This makes demand forecasting essential for the proper management of energy systems. This thesis focuses on short-term demand forecasting methods, up to one day ahead. In the simpler methods, forecasts are made by linear models that use historical data on energy demand. However, models based on computational intelligence have been studied for this purpose, as they exploit the nonlinear relationship between energy demand and climatic variables. In general, these models achieve better forecasts than linear methods. Their results, however, are unstable and sensitive to measurement errors, producing outlying forecast errors that can have serious consequences for the production process. In this thesis, we use artificial neural networks and genetic algorithms to model historical load and climate data, and combine these models with traditional linear methods. The aim is to achieve forecasts that are not only more accurate on average, but also less sensitive to measurement errors.
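A minimal sketch of one such linear/nonlinear combination: a simple average of an AR forecast and an MLP forecast on lagged load, on toy data. This is an illustration of the combination idea only, not the thesis's architecture or its GA-tuned weights.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
load = 100 + 10*np.sin(2*np.pi*np.arange(400)/24) + rng.normal(0, 2, 400)  # toy hourly load
train, test = load[:336], load[336:]

# Linear component: autoregressive model on the raw series
f_lin = AutoReg(train, lags=24).fit().forecast(len(test))

# Nonlinear component: MLP on lagged values, iterated multi-step forecast
L = 24
X = np.array([train[i-L:i] for i in range(L, len(train))])
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X, train[L:])
hist, f_mlp = list(train[-L:]), []
for _ in range(len(test)):
    yhat = mlp.predict(np.array(hist[-L:])[None, :])[0]
    f_mlp.append(yhat)
    hist.append(yhat)
f_mlp = np.array(f_mlp)

f_comb = 0.5*f_lin + 0.5*f_mlp   # equal weights; a GA could tune them instead
print("MSE linear:", np.mean((test - f_lin)**2),
      "MSE MLP:",    np.mean((test - f_mlp)**2),
      "MSE combo:",  np.mean((test - f_comb)**2))
```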
|
333 |
Análise da prática do alisamento de resultados sobre o endividamento das empresas abertas após o processo de convergência às IFRS [Analysis of the practice of income smoothing on the indebtedness of listed companies after the IFRS convergence process]
Pereira, Ana Cristina (27 October 2014)
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
The aim of this study was to examine whether managers practice income smoothing when exercising their judgment in choosing accounting practices, aiming at a particular level of debt. It is an empirical-analytical study of a descriptive, longitudinal nature. The population consists of all Brazilian non-financial companies listed on the BM&FBOVESPA. Company data were collected from the Economática database. The sample comprised 273 companies over the period 2008 to 2013. For the independent variables, income smoothing and debt, lagged values were used, on the assumption that in each period managers adjust the capital structure toward the desired goal: target debt. Smoothing and non-smoothing firms were first identified, following the methodology proposed by Eckel (1981). The degree of smoothing of the smoothing firms was then calculated according to the model of Leuz, Nanda and Wysocki (2003). Finally, a regression model was estimated on panel data with fixed effects, chosen after a Hausman test. No evidence of income smoothing after IFRS adoption was found, and the results were not statistically significant enough to claim that income smoothing affects debt. Finally, it was found that current debt is influenced by past debt; that is, firms adjust their capital structure.
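A minimal sketch of Eckel's (1981) classification rule used in this study: a firm is classified as a smoother when the coefficient of variation of income changes is smaller than that of sales changes. The income and sales series below are hypothetical.

```python
import numpy as np

def cv(x):
    """Coefficient of variation of the first differences of a series."""
    d = np.diff(np.asarray(x, dtype=float))
    return np.std(d, ddof=1) / abs(np.mean(d))

def eckel_is_smoother(net_income, sales):
    """Eckel (1981): smoother if CV(dNI) / CV(dSales) < 1."""
    return cv(net_income) / cv(sales) < 1

ni    = [120, 125, 128, 131, 135, 140]     # hypothetical net income
sales = [900, 980, 940, 1100, 1020, 1200]  # hypothetical sales
print(eckel_is_smoother(ni, sales))        # True: income varies less than sales
```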
|
334 |
Mobilidade de capital no Brasil no período de 1970-2007: análise pela abordagem intertemporal da conta corrente [Capital mobility in Brazil, 1970-2007: an analysis using the intertemporal approach to the current account]
Silva, Júlia Goes da (19 December 2012)
The theoretical debate on capital mobility can be divided into two strands of the literature: one based on measuring the saving-investment correlation, following the seminal paper of Feldstein and Horioka (1980); the other comparing the variance of the theoretical current account derived from an intertemporal equilibrium model with its actual counterpart, as proposed by Ghosh (1995). The present work analyzes the Brazilian case from 1970 to 2007, following Ghosh (1995), who focuses on the analysis of the current account under the hypothesis of intertemporal equilibrium. In order to find evidence on the degree of international capital mobility and on the consumption-smoothing behavior of the current account, we largely follow the model developed in Huang (2010), which investigates the importance of including the world real interest rate and the terms of trade in the basic model of Ghosh (1995). Using the instrumental-variable method proposed in Huang (2010), the degree of capital mobility for Brazil between 1970 and 2007 could not be properly evaluated, because the key parameter that measures it was not statistically different from zero in any of the estimated models. However, the inclusion of the terms of trade and the interest rate improves the fit of the estimated models to the actual current account, confirming the importance of these variables in explaining its movements. Comparing variances, the generated theoretical current account does not match the volatility of the observed one, leading to a finding of "excess mobility", as defined in Ghosh (1995), over the whole sample. Nevertheless, when the theoretical series is divided into three subperiods, namely 1970-1989, 1990-2007 and 1994-2007, a different result emerges for the complete model (comprising all the proposed variables), with the "excess mobility" no longer holding after 1994.
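A rough sketch of the final variance-comparison step in a Ghosh (1995)-style test, assuming the theoretical (consumption-smoothing) current account has already been constructed from the VAR; the series below are hypothetical.

```python
import numpy as np

def variance_ratio(ca_actual, ca_theory):
    """Ghosh (1995)-style check: ratio > 1 suggests 'excess mobility'."""
    return np.var(ca_actual, ddof=1) / np.var(ca_theory, ddof=1)

# Hypothetical annual current-account series (% of GDP)
ca_actual = np.array([-2.1, -3.4, 1.2, 0.8, -4.5, -2.9, 3.1, -1.7])
ca_theory = np.array([-1.0, -1.5, 0.6, 0.4, -1.8, -1.2, 1.1, -0.8])
print(variance_ratio(ca_actual, ca_theory))  # > 1: actual CA more volatile
```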
|
335 |
Avaliação do algoritmo Gradient Boosting em aplicações de previsão de carga elétrica a curto prazo [Evaluation of the Gradient Boosting algorithm in short-term electricity load forecasting applications]
Mayrink, Victor Teixeira de Melo (31 August 2016)
Funding: FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais.
The storage of electrical energy is still not feasible on a large scale due to technical and economic issues. Therefore, all energy to be consumed must be produced instantly; it is not possible to store surplus production, or to cover supply shortages with safety stocks, even for a short period of time. Thus, one of the main challenges of energy planning consists in computing accurate forecasts of future demand. In this work, we present a model for short-term load forecasting. The methodology consists in composing a prediction committee by applying the Gradient Boosting algorithm in combination with decision tree models and the exponential smoothing technique. This strategy comprises a supervised learning method that adjusts the forecasting model based on historical energy consumption data, recorded temperatures and calendar variables. The proposed models were tested on two different datasets and showed good performance when compared with results published in other recent works.
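A minimal sketch of the boosting component described above: gradient boosting over decision trees on lagged load, temperature, and calendar features, via scikit-learn on synthetic data. The feature set is illustrative, and the thesis's exponential-smoothing component is omitted here.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
idx = pd.date_range("2015-01-01", periods=24*365, freq="h")
temp = 20 + 8*np.sin(2*np.pi*idx.dayofyear/365) + rng.normal(0, 1, len(idx))
load = 500 + 5*temp + 40*np.sin(2*np.pi*idx.hour/24) + rng.normal(0, 10, len(idx))

df = pd.DataFrame({"load": load, "temp": temp,
                   "hour": idx.hour, "dow": idx.dayofweek, "month": idx.month})
df["load_lag24"] = df["load"].shift(24)   # same hour, previous day
df = df.dropna()

X, y = df.drop(columns="load"), df["load"]
cut = -24*7  # hold out the final week as a test set
gbr = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3).fit(X[:cut], y[:cut])
pred = gbr.predict(X[cut:])
print("test MAPE:", np.mean(np.abs((y[cut:] - pred) / y[cut:])))
```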
|
336 |
Uma investigação sobre o impacto dos accruals na variabilidade dos resultados nos diferentes contrastes cross-sectional nas firmas brasileiras de capital aberto [An investigation of the impact of accruals on earnings variability across different cross-sectional contrasts in Brazilian listed firms]
Barreto, Marcello Silva (27 June 2012)
The main objective of this dissertation is to investigate the impact of accruals on corporate earnings variability (EVAR), which influences the practice of income smoothing in Brazilian listed firms. The dissertation first discusses the importance of financial statements, which must be disclosed in compliance with generally accepted accounting principles; their disclosure should represent the economic and financial reality of the firm for the decision-making of shareholders and creditors. At certain times, however, managers are motivated to manage reported earnings in an attempt to reduce the variability of profits through the use of accruals. Accruals represent the difference between net income and operating cash flow. In reducing earnings volatility, managers resort to the practice of income smoothing, seeking to reduce distortions in the firm's share price. The sample comprises 163 firms listed on Bovespa that reported financial information between 2000 and 2007, categorized by sector, with data obtained from Economática. The statistical method used was regression analysis, applied to explain the different cross-sectional models. The results indicate that accruals are significant in explaining the EVAR of Brazilian companies, and suggest that the structural model identifying EVAR in Brazilian companies should be explained by non-accounting variables different from those reported for U.S. firms.
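A minimal sketch of the two quantities the abstract defines: total accruals as net income minus operating cash flow, and EVAR as the variability of earnings per firm. The panel below is hypothetical.

```python
import pandas as pd

# Hypothetical firm-year panel; column names are illustrative only.
df = pd.DataFrame({
    "firm":   ["A"]*5 + ["B"]*5,
    "year":   list(range(2003, 2008))*2,
    "ni":     [10, 12, 11, 13, 12,  5, 15, 2, 18, 1],   # net income
    "cfo":    [11, 11, 12, 12, 13,  9,  8, 9, 10, 9],   # operating cash flow
    "assets": [100]*10,
})

df["accruals"] = (df["ni"] - df["cfo"]) / df["assets"]  # scaled total accruals
evar = df.groupby("firm")["ni"].std(ddof=1)             # earnings variability by firm
print(evar)  # firm B's earnings vary far more than firm A's
```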
|
337 |
[en] DIRECT EXPONENTIAL SMOOTHING METHOD INCORPORATING SEASONAL COMPONENT MODELLED BY HARRISON HARMONIC APPROACH / [pt] MÉTODO DE AMORTECIMENTO DIRETO COM TRATAMENTO DA SAZONALIDADE ATRAVÉS DO MÉTODO HARMÔNICO DE HARRISON
JOSE MUNIZ DA COSTA VARGENS (18 January 2007)
Exponential smoothing methods, although originally proposed in the 1960s, remain in wide use today. In this thesis we present a new forecasting method for time series with or without seasonality, applying the theories of exponential smoothing and harmonic analysis. The series is assumed to be composed of a secular trend (constant, linear or quadratic) and a seasonal part. The trend parameters are updated sequentially by the direct smoothing procedure, while the seasonal part of the process is treated separately through the technique of harmonic analysis, as suggested by Harrison (1964). In this way, the proposed method can be viewed as an alternative to that of Souza & Epprecht (1983); its main advantage is the routine for initial estimation of the parameters, which in the Souza & Epprecht method produces biased estimators in some cases.
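A minimal sketch of the harmonic representation of seasonality underlying Harrison's (1964) approach: the seasonal pattern is written as a few sine/cosine terms at the seasonal frequency and its multiples, fitted here by least squares on toy data. The number of harmonics is an illustrative choice.

```python
import numpy as np

def harmonic_design(t, period, n_harmonics):
    """Columns: cos/sin pairs at the seasonal frequency and its multiples."""
    cols = []
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2*np.pi*k*t/period))
        cols.append(np.sin(2*np.pi*k*t/period))
    return np.column_stack(cols)

t = np.arange(48)                                            # four years, monthly
season = 10*np.sin(2*np.pi*t/12) + 3*np.cos(4*np.pi*t/12)    # toy seasonal signal
y = season + np.random.default_rng(2).normal(0, 1, t.size)

X = harmonic_design(t, period=12, n_harmonics=2)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares harmonic fit
print(np.round(coef, 2))  # recovers ~[0, 10, 3, 0]: cos1, sin1, cos2, sin2
```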
|
338 |
ARIMA demand forecasting by aggregation / Prévision de la demande type ARIMA par agrégation
Rostami Tabar, Bahman (10 December 2013)
Demand forecasting performance is subject to the uncertainty underlying the time series an organisation is dealing with. There are many approaches that may be used to reduce demand uncertainty and consequently improve forecasting (and inventory control) performance. An intuitively appealing approach that is known to be effective is demand aggregation. One approach is to aggregate demand in lower-frequency 'time buckets'; such an approach is often referred to, in the academic literature, as temporal aggregation. Another approach discussed in the literature is cross-sectional aggregation, which involves aggregating different time series to obtain higher-level forecasts. This research discusses whether it is appropriate to use the original (non-aggregated) data to generate a forecast, or whether one should rather aggregate the data first and then generate a forecast. This Ph.D. thesis reveals the conditions under which each approach leads to superior performance, as judged on forecast accuracy. Throughout this work, it is assumed that the underlying structure of the demand time series follows an AutoRegressive Integrated Moving Average (ARIMA) process.

In the first part of our research, the effect of temporal aggregation on demand forecasting is analysed. It is assumed that the non-aggregate demand follows an autoregressive moving average process of order one, ARMA(1,1). Additionally, the associated special cases of a first-order autoregressive process, AR(1), and a first-order moving average process, MA(1), are also considered, and a Single Exponential Smoothing (SES) procedure is used to forecast demand. These demand processes are often encountered in practice, and SES is one of the standard estimators used in industry. Theoretical Mean Squared Error expressions are derived for the aggregate and the non-aggregate demand in order to contrast the relevant forecasting performances. The theoretical analysis is validated by an extensive numerical investigation and experimentation with an empirical dataset. The results indicate that performance improvements achieved through the aggregation approach are a function of the aggregation level, the smoothing constant value used for SES, and the process parameters.

In the second part of our research, the effect of cross-sectional aggregation on demand forecasting is evaluated. More specifically, the relative effectiveness of top-down (TD) and bottom-up (BU) approaches is compared for forecasting aggregate and sub-aggregate demand. It is assumed that the sub-aggregate demand follows either an ARMA(1,1) or a non-stationary Integrated Moving Average process of order one, IMA(1,1), and a SES procedure is used to extrapolate future requirements. Such demand processes are often encountered in practice and, as discussed above, SES is one of the standard estimators used in industry (in addition to being the optimal estimator for an IMA(1,1) process). Theoretical Mean Squared Errors are derived for the BU and TD approaches in order to contrast the relevant forecasting performances. The theoretical analysis is supported by an extensive numerical investigation at both the aggregate and sub-aggregate levels, in addition to empirical validation of our findings on a real dataset from a European superstore. The results show that the superiority of each approach is a function of the series autocorrelation, the cross-correlation between series, and the comparison level.

Finally, for both parts of the research, valuable insights are offered to practitioners and an agenda for further research in this area is provided.
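A minimal sketch of the temporal-aggregation setup: SES applied to a toy demand series, and again to the same series summed into non-overlapping buckets. The smoothing constant and bucket size are arbitrary illustrative values.

```python
import numpy as np

def ses(x, alpha):
    """Single exponential smoothing; returns the one-step-ahead forecast."""
    level = x[0]
    for obs in x[1:]:
        level = alpha*obs + (1 - alpha)*level
    return level

def aggregate(x, m):
    """Sum into non-overlapping buckets of length m (temporal aggregation)."""
    n = len(x) - len(x) % m
    return np.asarray(x[:n]).reshape(-1, m).sum(axis=1)

rng = np.random.default_rng(3)
demand = 50 + rng.normal(0, 5, 120)                # toy daily demand

f_daily  = ses(demand, alpha=0.2)                  # next-day forecast
f_bucket = ses(aggregate(demand, m=7), alpha=0.2)  # next-week forecast
print(f_daily, f_bucket / 7)                       # per-day terms, for comparison
```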
|
339 |
Optimisation non-lisse pour l'apprentissage statistique avec régularisation matricielle structurée / Nonsmooth optimization for statistical learning with structured matrix regularization
Pierucci, Federico (23 June 2017)
Training machine learning methods boils down to solving optimization problems whose objective functions often decompose into two parts: a) the empirical risk, built upon the loss function, whose shape is determined by the performance metric and the noise assumptions; b) the regularization penalty, built upon a norm or a gauge function, whose structure is determined by the prior information available for the problem at hand. Common loss functions, such as the hinge loss for binary classification, or more advanced loss functions, such as the one arising in classification with reject option, are non-smooth. Sparse regularization penalties such as the (vector) l1-penalty, or the (matrix) nuclear-norm penalty, are also non-smooth. However, basic non-smooth optimization algorithms, such as subgradient optimization or bundle-type methods, do not leverage the composite structure of the objective. The goal of this thesis is to study doubly non-smooth learning problems (with non-smooth loss functions and non-smooth regularization penalties) and first-order optimization algorithms that leverage the composite structure of non-smooth objectives.

In the first chapter, we introduce new regularization penalties, called the group Schatten norms, to generalize the standard Schatten norms to block-structured matrices. We establish the main properties of the group Schatten norms using tools from convex analysis and linear algebra; in particular, we retrieve some convex envelope properties. We discuss several potential applications of the group nuclear norm, in collaborative filtering, database compression, and multi-label image tagging.

In the second chapter, we present a survey of smoothing techniques that allow us to use first-order optimization algorithms designed for composite objectives decomposing into a smooth part and a non-smooth part. We also show how smoothing can be used on the loss function corresponding to the top-k accuracy, used for ranking and multi-class classification problems. We outline some first-order algorithms that can be used in combination with the smoothing technique: i) conditional gradient algorithms; ii) proximal gradient algorithms; iii) incremental gradient algorithms.

In the third chapter, we further study conditional gradient algorithms for solving doubly non-smooth optimization problems. We show that adaptive smoothing combined with the standard conditional gradient algorithm gives rise to new conditional gradient algorithms with the expected theoretical convergence guarantees. We present promising experimental results in collaborative filtering for movie recommendation and in image categorization.
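A minimal sketch of a conditional gradient (Frank-Wolfe) method over a nuclear-norm ball, the constraint structure studied here, on a matrix-completion toy problem. The squared loss used below is a smooth stand-in for the smoothed non-smooth losses the thesis analyzes; the linear minimization oracle is a rank-one step from the top singular pair.

```python
import numpy as np

def frank_wolfe_nuclear(M, mask, radius, n_iter=100):
    """min 0.5*||mask*(X-M)||^2  s.t.  ||X||_* <= radius, via conditional gradient."""
    X = np.zeros_like(M)
    for t in range(n_iter):
        grad = mask * (X - M)                     # gradient of the smooth loss
        u, s, vt = np.linalg.svd(-grad)           # LMO: top singular pair of -grad
        S = radius * np.outer(u[:, 0], vt[0, :])  # extreme point of the nuclear ball
        gamma = 2.0 / (t + 2.0)                   # standard FW step size
        X = (1 - gamma)*X + gamma*S
    return X

rng = np.random.default_rng(4)
M = rng.normal(size=(20, 5)) @ rng.normal(size=(5, 20))  # rank-5 ground truth
mask = rng.random(M.shape) < 0.5                         # observed entries
X = frank_wolfe_nuclear(M, mask, radius=np.linalg.norm(M, "nuc"))
print("observed RMSE:", np.sqrt(np.mean((mask * (X - M))**2)))
```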
|
340 |
Centralized versus Decentralized Inventory Control in Supply Chains and the Bullwhip Effect
Qu, Zhan; Raff, Horst (20 October 2017)
This paper constructs a model of a supply chain to examine how demand volatility is passed upstream through the chain. In particular, we seek to determine how likely it is that the chain experiences a bullwhip effect, where the variance of the upstream firm’s production exceeds the variance of the downstream firm’s sales. We show that the bullwhip effect is more likely to occur and is greater in size in supply chains in which inventory control is centralized rather than decentralized, that is, exercised by the downstream firm.
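The bullwhip effect is commonly quantified as the ratio Var(orders)/Var(sales); a value above 1 means volatility is amplified upstream. A minimal sketch simulating a textbook order-up-to policy with a moving-average demand forecast, a standard setting (not this paper's model) in which the ratio exceeds one.

```python
import numpy as np

rng = np.random.default_rng(5)
demand = 100 + rng.normal(0, 10, 5000)   # downstream sales (i.i.d. for simplicity)
L, w = 2, 5                              # replenishment lead time, forecast window

orders = []
for t in range(w + 1, len(demand)):
    d_hat      = demand[t - w:t].mean()          # moving-average demand forecast
    d_hat_prev = demand[t - w - 1:t - 1].mean()
    # Order-up-to policy: order = demand + change in the base-stock level
    orders.append(demand[t] + (L + 1) * (d_hat - d_hat_prev))

bullwhip = np.var(orders, ddof=1) / np.var(demand[w + 1:], ddof=1)
print("bullwhip ratio:", round(bullwhip, 2))  # > 1: variance amplified upstream
```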
|