81

An investigation of accuracy, learning and biases in judgmental adjustments of statistical forecasts

Eroglu, Cuneyt 21 November 2006 (has links)
No description available.
82

Probabilistic Flood Forecast Using Bayesian Methods

Han, Shasha January 2019 (has links)
The number of flood events and the estimated costs of floods have increased dramatically over the past few decades. To reduce the negative impacts of flooding, reliable flood forecasting is essential for early warning and decision making. Although various flood forecasting models and techniques have been developed, assessing and reducing the uncertainties associated with the forecast remains a challenging task. This thesis therefore investigates Bayesian methods for producing probabilistic flood forecasts that accurately quantify predictive uncertainty and enhance forecast performance and reliability. In the thesis, hydrologic uncertainty was quantified by a Bayesian post-processor, the Hydrologic Uncertainty Processor (HUP), and the predictive ability of HUP with different hydrologic models under different flow conditions was investigated. This was followed by an extension of HUP into an ensemble prediction framework, constituting the Bayesian Ensemble Uncertainty Processor (BEUP). BEUP was then tested with bias-corrected ensemble weather inputs to improve predictive performance. In addition, the effects of input and model type on BEUP were investigated through different combinations of BEUP with deterministic/ensemble weather predictions and lumped/semi-distributed hydrologic models. Results indicate that the Bayesian method is robust for probabilistic flood forecasting with uncertainty assessment. HUP is able to improve on the deterministic forecast from the hydrologic model and produce a more accurate probabilistic forecast. Under high-flow conditions, a better-performing hydrologic model yields a better probabilistic forecast after applying HUP. BEUP can significantly improve the accuracy and reliability of short-range flood forecasts, but the improvement becomes less pronounced as lead time increases. The best results for short-range forecasts are obtained by applying both bias correction and BEUP. Results also show that bias-correcting each ensemble member of the weather inputs generates better flood forecasts than bias-correcting only the ensemble mean. The improvement in BEUP brought by the hydrologic model type is more significant than that brought by the input data type, and BEUP with a semi-distributed model is recommended for short-range flood forecasts. / Dissertation / Doctor of Philosophy (PhD) / Flooding is one of the most damaging weather-related hazards, causing serious property damage and loss of life every year worldwide. If the timing and magnitude of a flood event could be accurately predicted in advance, there would be time to prepare, reducing its negative impacts. This research focuses on improving flood forecasts through advanced Bayesian techniques. The main objectives are: (1) enhancing the reliability and accuracy of the flood forecasting system; and (2) improving the assessment of predictive uncertainty associated with the flood forecasts. The key contributions include: (1) application of Bayesian forecasting methods in a semi-urban watershed to advance predictive uncertainty quantification; and (2) investigation of the Bayesian forecasting methods with different inputs and models, combined with bias correction, to further improve forecast performance. It is expected that the findings from this research will benefit flood impact mitigation, watershed management and water resources planning.
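The member-wise versus mean-only bias-correction finding lends itself to a short illustration. This is a minimal sketch, not the author's code: the linear-scaling correction, the synthetic rainfall data and the train/test split are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=5.0, size=200)               # observed rainfall (mm)
ensemble = obs[None, :] * 1.3 + rng.normal(0, 2, (20, 200))   # 20 biased forecast members

def scaling_factor(forecast, observed):
    """Multiplicative linear-scaling bias factor fitted on a training period."""
    return observed.mean() / forecast.mean()

train, test = slice(0, 150), slice(150, 200)

# Strategy 1: correct each ensemble member with its own factor.
factors = np.array([scaling_factor(m[train], obs[train]) for m in ensemble])
member_corrected = (ensemble * factors[:, None]).mean(axis=0)

# Strategy 2: correct only the ensemble mean.
ens_mean = ensemble.mean(axis=0)
mean_corrected = ens_mean * scaling_factor(ens_mean[train], obs[train])

def rmse(forecast):
    return np.sqrt(np.mean((forecast[test] - obs[test]) ** 2))

print("member-wise correction RMSE:", rmse(member_corrected))
print("mean-only correction RMSE:  ", rmse(mean_corrected))
```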
83

[en] DEMAND FORECAST: A CASE STUDY IN SUPPLY CHAIN / [pt] PREVISÃO DE DEMANDA: ESTUDO DE CASO NA CADEIA DE SUPRIMENTOS

ACHILES RAMOS RIBEIRO 08 November 2017 (has links)
[en] The main objective of this dissertation is to present the basic demand forecasting methods and, through a case study in a supply chain, to select and implement the most suitable one. The first chapter points out the importance of forecasting in this context and describes the company selected for the case study, including the internal processes under scrutiny. The second chapter reviews the concepts of demand forecasting and the principal existing models. In chapter three, the problem to be addressed is presented; standard forecasting techniques are applied to real data (ten time series) from the company, and the most appropriate model is selected in each case, seeking improvements over the forecasting methods the company already uses. Model fitting is performed with the Forecast Pro software, one of the best-known demand forecasting packages in the market. Chapter four concludes by evaluating the impact of the proposed methodology on the company's results, especially the increased forecast accuracy and the consequent reduction in import costs and stockout rates.
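As a rough sketch of the model-selection step the dissertation delegates to Forecast Pro, the snippet below fits a few candidate methods to a simulated demand series and keeps the one with the lowest holdout MAPE. The candidate set, the simulated series and the selection criterion are assumptions for illustration, not the software's actual expert-selection logic.

```python
import numpy as np

def ses(y, alpha):
    """One-step-ahead simple exponential smoothing forecasts."""
    f = np.empty(len(y))
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

def mape(actual, forecast):
    return np.mean(np.abs((actual - forecast) / actual)) * 100

rng = np.random.default_rng(1)
demand = 100 + 0.5 * np.arange(48) + rng.normal(0, 5, 48)  # 48 months of demand
holdout = slice(36, 48)                                    # last 12 months

candidates = {
    "naive": lambda y: np.r_[y[0], y[:-1]],
    "ses(0.3)": lambda y: ses(y, 0.3),
    "ses(0.7)": lambda y: ses(y, 0.7),
}
scores = {name: mape(demand[holdout], f(demand)[holdout])
          for name, f in candidates.items()}
print(scores, "-> selected:", min(scores, key=scores.get))
```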
84

Quality of sell-side analysts' forecasts: empirical evidence from the Brazilian market

Villalobos, Sonia Julia Sulzbeck 17 October 2005 (has links)
This dissertation analyses the forecast error of sell-side investment analysts in the Brazilian market, defined as the difference between the consensus of analysts' forecasts and the earnings reported by the company. The size of the forecast error is used as a proxy for the quality of the forecasts produced by the analysts of a given capital market. A vast academic literature shows that an improvement in the quality of analysts' forecasts, measured by a decrease in the size of the forecast error, is associated with reduced information asymmetry and increased market value of companies. Two regressions are tested, in which company characteristics, such as sector, size, leverage and earnings variability, and characteristics of the company's information environment, such as ADR listing, number of analysts following the company and forecast convergence, are tested against two metrics of forecast error: accuracy and bias. Our hypotheses are that there are factors that significantly affect both the size of the forecast error (accuracy) and the bias of the forecasts (bias). These hypotheses were confirmed: each regression presented at least one statistically significant factor affecting either the size of the forecast error (hypotheses H1 and H2) or its bias (hypothesis H3). However, the results show that several factors that are significant in tests conducted in developed markets, such as size, leverage and earnings variability, were not significant in the Brazilian market. On the other hand, factors related to the company's results in the forecast year or in the previous year proved strongly significant. We believe these results can be explained in three ways: 1) the ability of analysts to add value over statistical forecasting models is very small, owing to their lack of skill; or 2) macroeconomic instability in Brazil is so great that its influence on companies' results dominates all other factors that could affect the size of the forecast error; or 3) earnings in developed markets are so heavily managed, that is, so stable, that more subtle factors such as size, leverage and earnings variability can become significant. This dissertation does not allow us to distinguish which explanation is correct. One of its limitations is that it does not include variables for analysts' skill and experience, nor variables related to corporate governance and disclosure. In a line of research that is very extensive in developed countries but practically nonexistent in Brazil, we hope that future studies fill these gaps and allow a better understanding of the quality of earnings forecasts in the Brazilian context.
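A minimal sketch of the kind of regression described, using simulated data; the variable names, the data-generating process and the use of statsmodels are illustrative assumptions, not the dissertation's actual dataset or code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
size = rng.normal(0, 1, n)            # log market cap (standardized)
leverage = rng.normal(0, 1, n)
n_analysts = rng.poisson(5, n)        # analysts following the company
has_adr = rng.integers(0, 2, n)       # ADR listing dummy

# Accuracy metric: absolute forecast error, here simulated so that more
# analyst coverage reduces the error.
accuracy = 0.05 - 0.01 * n_analysts + rng.normal(0, 0.02, n)

X = sm.add_constant(np.column_stack([size, leverage, n_analysts, has_adr]))
model = sm.OLS(accuracy, X).fit()
print(model.summary(xname=["const", "size", "leverage", "n_analysts", "adr"]))
```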
85

A Study on the Forecasting of Advertising Volume in Taiwan

曾喬彬 Unknown Date (has links)
No description available.
86

Forecasting Cigarette Consumption in Taiwan

陳清樂 Unknown Date (has links)
1. Objective: The main purpose of this study is to use autocorrelation analysis to fit models to series assumed to be generated by a stochastic process, and then to use the fitted model to obtain optimal forecasts. 2. Method: Under the stochastic-process assumption, a time series whose observations are correlated can be regarded as the output of a stochastic process passed through a linear filter. From this viewpoint, a family of models for fitting time series is derived. Statistical methods are then used to identify the model form and to estimate the model coefficients and variance; from the fitted model, optimal forecasts are derived under the minimum mean square error criterion, and the forecasts are updated as new observations arrive. 3. Content: First, the models generally available for fitting time series are introduced, the properties of stationary models are discussed, and the treatment is extended to nonstationary series; the key elements are the autocorrelation function and partial autocorrelation function of each model and their relationship to the model's behaviour. Next, the model form is identified from these functions and preliminary estimates of the coefficients are obtained; the least squares estimates (l.s.e.) derived from the sum-of-squares function are used as approximations to the maximum likelihood estimates (MLE). Finally, the estimated model is used to compute the residuals, the autocorrelation function of the residuals and the Q statistic formed from it; the chi-square distribution is used to test the acceptability of the model, and the residual autocorrelations serve as a basis for model revision. Once a model is accepted, optimal forecasts are derived from it; the forecasts of cigarette consumption in Taiwan are based on such time series models. Because these forecasts are probabilistic, confidence intervals for the forecast values are also constructed. 4. Conclusion: The purpose of fitting a time series is forecasting, and forecasts are chiefly useful to management as a basis for planning; fitting a time series is thus a means to the end of forecasting or control.
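The identification, estimation, diagnostic-checking and forecasting cycle described above is the classic Box-Jenkins procedure. A minimal sketch with simulated data (not the thesis's cigarette-consumption series), using statsmodels:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import acf, pacf
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(3)
y = np.cumsum(rng.normal(0.5, 1.0, 120))        # trending annual series

# Identification: inspect ACF/PACF of the differenced series.
print(acf(np.diff(y), nlags=5), pacf(np.diff(y), nlags=5))

fit = ARIMA(y, order=(1, 1, 0)).fit()           # estimation
print(acorr_ljungbox(fit.resid, lags=[10]))     # residual Q-statistic check

fc = fit.get_forecast(steps=5)                  # optimal forecasts
print(fc.predicted_mean, fc.conf_int(alpha=0.05))  # 95% confidence intervals
```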
87

High resolution re-analysis of wind speeds over the British Isles for wind energy integration

Hawkins, Samuel Lennon January 2012 (has links)
The UK has highly ambitious targets for wind development, particularly offshore, where over 30 GW of capacity is proposed for development. Integrating such a large amount of variable generation presents enormous challenges. Answering key questions depends on a detailed understanding of the wind resource and its temporal and spatial variability. However, sources of wind speed data, particularly offshore, are relatively sparse: satellite data have low temporal resolution; weather buoys and met stations have low spatial resolution; and observations from ships and platforms are affected by the structures themselves. This work uses a state-of-the-art mesoscale atmospheric model to produce a new high-resolution wind speed dataset over the British Isles and surrounding waters. This covers the whole region at a resolution of 3 km for eleven consecutive years, from 2000 to 2010 inclusive, and is thought to be the first high-resolution re-analysis to represent a true historic time series, rather than a statistically averaged climatology. The results are validated against observations from met stations, weather buoys, offshore platforms and satellite-derived wind speeds, and model bias is reduced offshore using satellite-derived wind speeds. The ability of the dataset to predict power outputs from current wind farms is demonstrated, and the expected patterns of power output from future onshore and offshore wind farms are predicted. Patterns of wind production are compared to patterns of electricity demand to provide the first conclusive combined assessment of the ability of future onshore and offshore wind generation to meet electricity demand and contribute to secure energy supplies.
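One step in such an assessment is mapping hub-height wind speeds to power output through a turbine power curve and comparing the result with demand. The sketch below is illustrative only: the cut-in/rated/cut-out speeds, the Weibull wind distribution and the sinusoidal demand profile are generic assumptions, not values from the thesis.

```python
import numpy as np

def power_curve(v, cut_in=3.5, rated=13.0, cut_out=25.0):
    """Fraction of rated power for wind speed v (m/s): cubic below rated,
    constant at rated, zero below cut-in and above cut-out."""
    p = np.clip((v**3 - cut_in**3) / (rated**3 - cut_in**3), 0.0, 1.0)
    return np.where((v < cut_in) | (v > cut_out), 0.0, p)

rng = np.random.default_rng(4)
hours = 24 * 365
wind = rng.weibull(2.0, hours) * 9.0                           # hourly wind speeds
demand = 35 + 10 * np.sin(2 * np.pi * np.arange(hours) / 24)   # daily demand cycle (GW)

output = power_curve(wind)
print("capacity factor:", output.mean())
print("wind-demand correlation:", np.corrcoef(output, demand)[0, 1])
```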
88

Secular variation prediction of the Earth's magnetic field using core surface flows

Beggan, Ciarán D. January 2009 (has links)
The Earth’s magnetic field is generated by fluid motion of liquid iron in the outer core. Flows at the top of the outer core are believed to be responsible for the secular variation (SV) observed at the surface of the Earth. Modelling of this flow is open to considerable ambiguity, though methods adopting different physical assumptions do lead to similar flow velocity regimes. Some aspects of these ambiguities are investigated in this thesis. The last decade has seen a significant improvement in the capability to observe the global field at high spatial resolution. Several satellite missions have been launched, providing a rich new set of scalar and vector magnetic measurements from which to model the global field in detail. These data complement the existing record of ground-based observatories, which have continuous temporal coverage at a single point. I exploit these new data to model the secular variation globally and attempt to improve on the core flow models constructed to date. Using the approach developed by Mandea and Olsen (2006), I create a set of evenly distributed ‘Virtual Observatories’ (VO) at 400 km above the Earth’s surface, encompassing measurements from the CHAMP satellite over seven years (2001-2007), and invert the SV calculated at each VO to infer flow along the core-mantle boundary. Direct comparison of the SV generated by the flow model with the SV at individual VO can then be made, and the residual differences investigated in detail. Comparisons of residuals from flow models generated from a number of VO datasets provide evidence that they are consistent with internal and external field effects in the satellite data. I also show that the binning and processing of the VO data can induce artefacts, including sectorial banding, into the residuals. By employing the core flows from the inversion of SV data, it may be possible to forecast the change of the present magnetic field forwards in time for a short period (e.g. less than five years) within an acceptable error budget. Using simple advection of steady or non-steady flows to forecast magnetic field change gives a reasonably good fit to field models such as GRIMM, POMME or xCHAOS (< 50 nT root mean square difference after five years). The forecast of magnetic field change can be improved by optimally assimilating measurements of the field into the forecast from flow models at discrete points in time (e.g. annually). To achieve this, an Ensemble Kalman Filter (EnKF) can be used to capture the non-linearity of the model and delineate the error bounds by means of a Monte Carlo representation of the field evolution over time. In the EnKF model, an ensemble of probable state vectors (Gauss coefficients) evolves over time, driven by SV derived from core flows; the SV is randomly perturbed at each step before addition to the state vectors. The mean of the ensemble is chosen as the most likely state (i.e. field model) and the error associated with the estimate can be gauged from the standard deviation about the mean. I show an implementation of the EnKF for steady and non-steady flows generated from ‘Virtual Observatory’ field models, compared to the field models GRIMM and xCHAOS over the period 2002-2008. Using the EnKF, the maximum difference never exceeds 25 nT over the period. This promising approach allows measurements to be included in model predictions to improve the forecast.
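A minimal sketch of the perturbed-observation EnKF scheme described above, with toy dimensions and synthetic data; the state size, noise levels and observation operator are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)
n_state, n_ens, n_obs = 8, 50, 8
truth = rng.normal(0, 100, n_state)                 # "true" Gauss coefficients (nT)
sv = rng.normal(0, 5, n_state)                      # flow-derived SV (nT/yr)
ens = truth + rng.normal(0, 20, (n_ens, n_state))   # initial ensemble
H = np.eye(n_obs, n_state)                          # observe the field directly
R = 4.0 * np.eye(n_obs)                             # observation error covariance

for year in range(5):
    truth = truth + sv
    ens = ens + sv + rng.normal(0, 2, ens.shape)    # forecast step: perturbed SV
    obs = H @ truth + rng.normal(0, 2, n_obs)       # annual field measurement
    A = ens - ens.mean(axis=0)                      # ensemble anomalies
    P = A.T @ A / (n_ens - 1)                       # ensemble covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    # Perturbed-observation analysis update, applied to every member.
    noise = rng.multivariate_normal(np.zeros(n_obs), R, n_ens)
    ens = ens + (obs + noise - ens @ H.T) @ K.T

print("RMS error of ensemble mean:",
      np.sqrt(np.mean((ens.mean(axis=0) - truth) ** 2)))
```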
89

On the Estimation of Frequency Trends of Extreme Weather and Climate Events

Mudelsee, Manfred, Börngen, Michael, Tetzlaff, Gerd 03 January 2017 (has links) (PDF)
The advantages of kernel estimation over counting of events within time intervals are shown. Cross-validation offers a solution to the smoothing problem common to both methods. For flood events of the river Oder between 1350 and 1850, a decrease in frequency after about 1675 is found; more detailed conclusions require homogenized data. The method is currently being implemented in the computer program XTREND.
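A minimal sketch of kernel occurrence-rate estimation as described: event dates are smoothed with a Gaussian kernel to give a time-dependent rate (events per year). The event dates and the fixed bandwidth are assumptions for illustration; the paper selects the bandwidth by cross-validation.

```python
import numpy as np

def occurrence_rate(event_times, t_grid, bandwidth):
    """Kernel estimate of the event rate: a Gaussian kernel is placed at
    each event date and the kernels are summed on the time grid."""
    u = (t_grid[:, None] - event_times[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(6)
floods = np.sort(rng.uniform(1350, 1850, 80))   # hypothetical flood years
years = np.linspace(1350, 1850, 501)
rate = occurrence_rate(floods, years, bandwidth=30.0)
print("rate around 1600 (events/yr):", rate[np.searchsorted(years, 1600)])
```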
90

XTREND: A computer program for estimating trends in the occurrence rate of extreme weather and climate events

Mudelsee, Manfred 05 January 2017 (has links) (PDF)
XTREND consists of the following methodological parts: time interval extraction (Part 1) to analyse different portions of a time series; extreme event detection (Part 2) with robust smoothing; magnitude classification (Part 3) by hand; occurrence rate estimation (Part 4) with kernel functions; and bootstrap simulations (Part 5) to estimate confidence bands around the occurrence rate. You work interactively with XTREND (parameter adjustment, calculation, graphics) to acquire more intuition for your data. Although computing time is acceptable (less than a few minutes) for typical data sizes (fewer than, say, 1000 events) on modern machines, parameters should be adjusted carefully to avoid spurious results or excessive computing times; this report helps you achieve that. Although it explains the statistical concepts used, it generally does so briefly, and you should consult the given references (which include some textbooks) for a deeper understanding.
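Part 5, bootstrap confidence bands around the kernel occurrence-rate estimate, can be sketched as follows. This is an illustrative reconstruction, not XTREND's code; the event data, bandwidth and band level are assumptions.

```python
import numpy as np

def occurrence_rate(event_times, t_grid, bandwidth):
    """Gaussian-kernel estimate of the event occurrence rate."""
    u = (t_grid[:, None] - event_times[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(7)
floods = np.sort(rng.uniform(1350, 1850, 80))   # hypothetical flood years
years = np.linspace(1350, 1850, 501)

# Resample the event set with replacement and re-estimate the rate each time.
n_boot = 1000
boot = np.empty((n_boot, years.size))
for b in range(n_boot):
    resample = rng.choice(floods, size=floods.size, replace=True)
    boot[b] = occurrence_rate(resample, years, bandwidth=30.0)

lower, upper = np.percentile(boot, [5, 95], axis=0)   # pointwise 90% band
i = np.searchsorted(years, 1600)
print("90% band at 1600 (events/yr):", lower[i], upper[i])
```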
