51 |
[en] VERY SHORT TERM LOAD FORECASTING IN THE NEW BRAZILIAN ELECTRICAL SCENARIO / [es] PREVISIÓN DE CARGA A CORTÍSIMO PLAZO EN EL NUEVO ESCENARIO ELÉCTRICO BRASILERO / [pt] PREVISÃO DE CARGA DE CURTÍSSIMO PRAZO NO NOVO CENÁRIO ELÉTRICO BRASILEIRO
Guilherme Martins Rizzo, 19 July 2001
[en] This dissertation proposes a hybrid model for very short-term load forecasting that combines simple exponential smoothing with feed-forward artificial neural networks. The model provides point forecasts together with upper and lower limits for a fifteen-day-ahead horizon. These limits form an interval with an associated empirical confidence level, estimated through an out-of-sample test. The model's performance is evaluated in a simulation carried out with real data from two Brazilian electric utilities.
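The abstract does not spell out how the two components are combined; the sketch below shows one plausible reading, in which a feed-forward network models the residuals of simple exponential smoothing and the interval half-widths come from empirical quantiles of out-of-sample errors. The smoothing constant, window length and network size are illustrative assumptions, not values from the dissertation.

```python
# One plausible hybrid: simple exponential smoothing (SES) plus a feed-forward
# network trained on the SES residuals; prediction limits come from empirical
# quantiles of held-out errors.  alpha, window and the network size are
# illustrative assumptions, not values taken from the dissertation.
import numpy as np
from sklearn.neural_network import MLPRegressor

def ses(y, alpha=0.3):
    """One-step-ahead simple exponential smoothing forecasts."""
    f = np.empty(len(y))
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

def fit_residual_net(y, window=7, alpha=0.3):
    """Train a small feed-forward net to predict the SES residual from its past."""
    resid = y - ses(y, alpha)
    X = np.array([resid[t - window:t] for t in range(window, len(y))])
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    net.fit(X, resid[window:])
    return net

def empirical_limits(out_of_sample_errors, level=0.95):
    """Interval half-widths from the empirical quantiles of held-out errors."""
    lo, hi = np.quantile(out_of_sample_errors, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi
```

For a fifteen-day horizon, the point forecast would be iterated one step at a time, adding the network's residual correction to the SES forecast at each step.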
|
52 |
Towards a flexible statistical modelling by latent factors for evaluation of simulated responses to climate forcings
Fetisova, Ekaterina, January 2017
In this thesis, using the principles of confirmatory factor analysis (CFA) and the cause-effect concept associated with structural equation modelling (SEM), a new flexible statistical framework is suggested for evaluating climate model simulations against observational data. The design of the framework also makes it possible to investigate the magnitude of the influence of different forcings on temperature, as well as a general causal latent structure of temperature data. In terms of the questions of interest, the framework can be viewed as a natural extension of the statistical approach of 'optimal fingerprinting' employed in many Detection and Attribution (D&A) studies. Its flexibility means that it can be applied under different circumstances concerning the availability of simulated data, the number of forcings in question, the climate-relevant properties of these forcings, and the properties of the climate model under study, in particular those concerning the reconstructions of forcings and their implementation. Although the framework uses near-surface temperature as the climate variable of interest and focuses on the period covering approximately the last millennium prior to industrialisation, the statistical models included in the framework can in principle be generalised to any period in the geological past, provided that simulations and proxy data on a continuous climate variable are available. Within the confines of this thesis, the performance of several CFA and SEM models is evaluated in pseudo-proxy experiments, in which the true unobservable temperature series is replaced by temperature data from a selected climate model simulation. The results indicated that, depending on the climate model and the region under consideration, the underlying latent structure of temperature data can be of varying complexity, which makes the framework, serving as a basis for a wide range of CFA and SEM models, a powerful and flexible tool. Thanks to these properties, its application may ultimately contribute to increased confidence in conclusions about the ability of the climate model in question to simulate observed climate changes. / At the time of the doctoral defence, Papers 2 and 3 were unpublished manuscripts.
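For readers less familiar with the CFA vocabulary, a generic confirmatory factor model of the kind such a framework builds on can be written as below; the notation is a textbook illustration, not the thesis's own specification.

```latex
% Generic CFA measurement model (textbook notation, not the thesis's own):
% the observed series x (simulated and proxy-based temperatures) load on
% latent forcing-related factors xi through the loading matrix Lambda.
\begin{aligned}
  \mathbf{x} &= \boldsymbol{\Lambda}\,\boldsymbol{\xi} + \boldsymbol{\delta},
  &\qquad \operatorname{Cov}(\mathbf{x}) &=
  \boldsymbol{\Lambda}\,\boldsymbol{\Phi}\,\boldsymbol{\Lambda}^{\top} + \boldsymbol{\Theta},
\end{aligned}
% where Phi = Cov(xi) and Theta = Cov(delta).
```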
|
53 |
Technologie elektrojiskrového drátového řezání / Technology of wire electrical discharge machining
Brázda, Radim, January 2013
This master's thesis deals with the unconventional technology of wire electrical discharge machining. It describes the principles and essence of electrical discharge machining, and of wire electrical discharge machining in particular, with emphasis on the application of this technology in a medium-sized engineering company. It also describes the complete setup of the wire-cutting technology and the machining process on the Excetek V 650 wire cutter. The thesis then statistically evaluates the precision parameters of the machined surfaces, specifically for the belt pulley 116-8M-130. It concludes with a technical and economic evaluation addressing the hourly cost of machining on the Excetek V 650 wire cutter.
|
54 |
Utilisation de l’estimateur d’Agresti-Coull dans la construction d’intervalles de confiance bootstrap pour une proportion / Use of the Agresti-Coull estimator in the construction of bootstrap confidence intervals for a proportion
Pilotte, Mikaël, 10 1900
Various bootstrap approaches can be used to construct confidence intervals. A difficulty arises in the specific case of a proportion parameter when the usual estimator, the sample proportion of successes p̂, equals zero. In the classical setting of independent and identically distributed (i.i.d.) Bernoulli observations, the generated bootstrap samples then contain only failures with probability 1, and the bootstrap confidence intervals degenerate to the single point 0. In a finite-population setting, the same problem occurs when a bootstrap method is applied to a sample containing only failures. A possible solution is inspired by the estimator used in the methods of [Wilson, 1927] and [Agresti et Coull, 1998], who consider p̃, the proportion of successes in an augmented sample to which two successes and two failures have been added. The solution introduced here is to bootstrap the distribution of p̂ while applying the bootstrap methods to the sample augmented by two successes and two failures, both in the classical setting and for a finite population. The results show that a version of the percentile method is the most effective bootstrap method for interval estimation of a proportion, both in the i.i.d. context and under simple random sampling without replacement. The simulations also show that this percentile method can compete favourably with the best traditional methods.
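A minimal sketch of the augmented-sample percentile bootstrap described above, for the i.i.d. Bernoulli case only; the number of resamples, the confidence level and the use of the augmented-sample mean as the bootstrap replicate are illustrative choices, and the exact variant studied in the thesis may differ.

```python
# Percentile bootstrap interval for a proportion, with the bootstrap applied to
# the Agresti-Coull-style augmented sample (two successes and two failures added).
# Sketch for the i.i.d. Bernoulli case; B, level and the replicate definition
# are illustrative assumptions.
import numpy as np

def augmented_percentile_ci(x, level=0.95, B=2000, seed=None):
    rng = np.random.default_rng(seed)
    # Augment the observed 0/1 sample with two successes and two failures.
    x_aug = np.concatenate([np.asarray(x, dtype=float), [1.0, 1.0, 0.0, 0.0]])
    boot = np.empty(B)
    for b in range(B):
        resample = rng.choice(x_aug, size=x_aug.size, replace=True)
        boot[b] = resample.mean()          # bootstrap replicate of the proportion
    alpha = 1.0 - level
    return tuple(np.quantile(boot, [alpha / 2, 1 - alpha / 2]))

# Even a sample with no observed successes no longer yields a degenerate interval:
print(augmented_percentile_ci(np.zeros(25), seed=1))
```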
|
55 |
A Monte Carlo Study of Missing Data Treatments for an Incomplete Level-2 Variable in Hierarchical Linear Models
Kwon, Hyukje, 20 July 2011
No description available.
|
56 |
A comparative study of permutation procedures
Van Heerden, Liske, 30 November 1994
The unique problems encountered when analyzing weather data sets, that is, measurements taken while conducting a meteorological experiment, have forced statisticians to reconsider conventional analysis methods and to investigate permutation test procedures. These problems are simulated in a Monte Carlo study, and the results of the parametric and permutation t-tests are compared with regard to significance level, power, and average confidence interval length. Seven population distributions are considered: three are variations of the normal distribution, and the others are the gamma, lognormal, rectangular and empirical distributions. The normal distribution contaminated with zero measurements is also simulated. In the simulated situations in which the variances are unequal, the permutation test procedure was also performed using other test statistics, namely the Scheffé, Welch and Behrens-Fisher test statistics. / Mathematical Sciences / M. Sc. (Statistics)
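A minimal sketch of a two-sample permutation t-test of the kind compared in the study; the choice of the Welch statistic and the number of permutations are illustrative assumptions, not details from the dissertation.

```python
# Two-sample permutation test built on the t statistic: the observed statistic
# is compared with its distribution over random relabellings of the pooled data.
import numpy as np
from scipy import stats

def permutation_t_test(x, y, n_perm=5000, seed=None):
    rng = np.random.default_rng(seed)
    observed = stats.ttest_ind(x, y, equal_var=False).statistic
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        t = stats.ttest_ind(perm[:len(x)], perm[len(x):], equal_var=False).statistic
        if abs(t) >= abs(observed):
            count += 1
    # Two-sided permutation p-value with the usual +1 correction.
    return observed, (count + 1) / (n_perm + 1)
```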
|
57 |
Approche bayésienne de la construction d'intervalles de crédibilité simultanés à partir de courbes simulées / A Bayesian approach to constructing simultaneous credible intervals from simulated curves
Lapointe, Marc-Élie, 07 1900
This master's thesis addresses the simulation of simultaneous credible intervals in a Bayesian context. First, we study precipitation data and two functions based on these data: the empirical distribution function and the return period, a non-linear function of the empirical distribution. We review known methods for obtaining simultaneous confidence intervals for these functions using a polynomial basis, and we present a method for simulating simultaneous credible intervals. We then move to a Bayesian setting and explore several prior distributions; for the most complex model, Monte Carlo simulation is required to obtain the simultaneous posterior credible intervals. Finally, we use a non-linear basis, based on the angular transformation and on monotone splines, to obtain a valid simultaneous credible interval for the return period.
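One generic way to turn a sample of posterior-simulated curves into a simultaneous band is sketched below (a sup-norm calibration of pointwise intervals); this is a standard construction and not necessarily the method developed in the thesis.

```python
# Simultaneous credible band from posterior-simulated curves: widen the
# pointwise centre/scale band until a fraction `level` of the simulated curves
# lies entirely inside it.  Generic construction, not the thesis's own method.
import numpy as np

def simultaneous_band(curves, level=0.95):
    """curves: array of shape (n_sim, n_grid), one posterior draw per row."""
    centre = curves.mean(axis=0)
    scale = curves.std(axis=0, ddof=1)
    scale = np.where(scale > 0, scale, 1.0)        # guard against zero variance
    # Sup-norm of each standardized curve; its `level` quantile calibrates the band.
    sup_dev = np.max(np.abs(curves - centre) / scale, axis=1)
    c = np.quantile(sup_dev, level)
    return centre - c * scale, centre + c * scale
```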
|
58 |
Méthode d'inférence par bootstrap pour l'estimateur sisVIVE en randomisation mendélienne / A bootstrap inference method for the sisVIVE estimator in Mendelian randomization
Dessy, Tatiana, 11 1900
No description available.
|
59 |
[en] TEMPORAL MODELLING OF THE WATER DISCHARGES MEASUREMENTS ON FUNIL DAM (RJ) USING NEURAL NETWORK AND STATISTICAL METHODS / [pt] MODELAGEM TEMPORAL DAS MEDIDAS DE VAZÃO DE DRENOS NA BARRAGEM DE FUNIL (RJ) UTILIZANDO REDES NEURAIS E MÉTODOS ESTATÍSTICOS
Janaina Veiga Carvalho, 15 September 2005
[en] In large works of great responsibility (ports, dams, nuclear power plants, etc.), the amount of instrumentation data may be sufficient to allow the construction of models of the temporal variability of the properties of interest based on artificial neural networks. In the case of dams, monitoring through an installed instrumentation system plays a fundamental role in evaluating the behaviour of these structures, both during construction and during operation. In this work, temporal neural networks (TNN) were used for the analysis, modelling and forecasting of water discharge values at the Funil dam, part of the Furnas Centrais Elétricas system, based on the instrumentation data available for the period from 02/09/1985 to 25/02/2002. The temporal neural networks employed were: a TNN with feed-forward architecture combined with a windowing technique, the recurrent Elman TNN, the FIR TNN and the Jordan TNN. Two additional techniques, the Box & Jenkins (1970) models and geostatistical methods, were used to analyse the time series and to compare their performance with that of the TNNs. The generation of confidence intervals for the TNNs and for the geostatistical methods was also investigated. The discharge forecasts analysed in this work for the Funil dam gave satisfactory results, for the temporal neural network models as well as for the Box & Jenkins and geostatistical methods.
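The windowing technique mentioned above turns a single discharge series into supervised input/target pairs for a feed-forward network; a minimal sketch follows, with the window length chosen purely for illustration.

```python
# Sliding-window construction for one-step-ahead forecasting with a feed-forward
# network: each input is the previous `window` discharge values, the target is
# the next value.  The window length is an illustrative choice.
import numpy as np

def make_windows(series, window=10):
    series = np.asarray(series, dtype=float)
    X = np.array([series[t - window:t] for t in range(window, len(series))])
    y = series[window:]
    return X, y

# X, y can then be fed to any feed-forward regressor (for example
# sklearn.neural_network.MLPRegressor), and the last `window` observed values
# are reused recursively to produce multi-step forecasts.
```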
|