991 |
A comparison of flare forecasting methods. II. Benchmarks, metrics and performance results for operational solar flare forecasting systems. Leka, K.D., Park, S.-H., Kusano, K., Andries, J., Barnes, G., Bingham, S., Bloomfield, D.S., McCloskey, A.E., Delouille, V., Falconer, D., Gallagher, P.T., Georgoulis, M.K., Kubo, Y., Lee, K., Lee, S., Lobzin, V., Mun, J., Murray, S.A., Nagem, T.A.M.H., Qahwaji, Rami S.R., Sharpe, M., Steenburgh, R., Steward, G., Terkildsen, M. 25 July 2019
Solar flares are extremely energetic phenomena in our Solar System. Their impulsive,
often drastic radiative increases, in particular at short wavelengths, bring immediate
impacts that motivate solar physics and space weather research to understand solar
flares to the point of being able to forecast them. As data and algorithms improve
dramatically, questions must be asked concerning how well the forecasting performs;
crucially, we must ask how to rigorously measure performance in order to critically
gauge any improvements. Building upon earlier-developed methodology (Barnes et al.
2016, Paper I), international representatives of regional warning centers and research
facilities assembled in 2017 at the Institute for Space-Earth Environmental Research,
Nagoya University, Japan to – for the first time – directly compare the performance
of operational solar flare forecasting methods. Multiple quantitative evaluation metrics
are employed, with focus and discussion on evaluation methodologies given the restrictions of operational forecasting. Numerous methods performed consistently above the
“no skill” level, although which method scored top marks is decisively a function of
flare event definition and the metric used; there was no single winner. Subsequently in
this paper series we ask why the performances differ by examining implementation
details (Leka et al. 2019, Paper III), and we then present a novel analysis method for
evaluating temporal patterns of forecasting errors (Park et al. 2019, Paper IV). With
these works, this team presents a well-defined and robust methodology for evaluating
solar flare forecasting methods in both research and operational frameworks, and today’s performance benchmarks against which improvements and new methods may be
compared.
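For concreteness, the categorical skill scores underlying such comparisons are computed from a 2x2 contingency table of forecast versus observed events. Below is a minimal Python sketch of a representative subset of such metrics (the paper employs a broader suite; function and variable names here are illustrative):

```python
import numpy as np

def skill_scores(forecast, observed):
    """Categorical skill scores from binary forecasts and observed events.

    forecast, observed: arrays of 0/1 (flare forecast issued / flare occurred).
    Assumes both events and non-events are present in `observed`.
    """
    forecast = np.asarray(forecast, dtype=bool)
    observed = np.asarray(observed, dtype=bool)
    tp = np.sum(forecast & observed)    # hits
    fp = np.sum(forecast & ~observed)   # false alarms
    fn = np.sum(~forecast & observed)   # misses
    tn = np.sum(~forecast & ~observed)  # correct rejections

    pod = tp / (tp + fn)                # probability of detection
    far = fp / (tp + fp)                # false alarm ratio
    pofd = fp / (fp + tn)               # probability of false detection
    tss = pod - pofd                    # true skill statistic (Hanssen-Kuipers)
    n = tp + fp + fn + tn
    # Heidke skill score: improvement over chance agreement
    expected = ((tp + fn) * (tp + fp) + (tn + fn) * (tn + fp)) / n
    hss = (tp + tn - expected) / (n - expected)
    return {"POD": pod, "FAR": far, "TSS": tss, "HSS": hss}

# Example: ten daily forecasts under one flare event definition
print(skill_scores([1, 1, 0, 0, 1, 0, 0, 1, 0, 0],
                   [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]))
```

Both TSS and HSS are zero for forecasts with no skill and one for perfect forecasts, which is what makes the "no skill" level above a natural reference line.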
|
992 |
A systematic, experimental methodology for design optimization. Ritchie, Paul Andrew, 1960- January 1988
Much attention has been directed at off-line quality control techniques in the recent literature. This study refines and enhances one such technique, the Taguchi Method, for determining the optimum settings of design parameters in a product or process. In place of the signal-to-noise ratio, the mean square error (MSE) of each quality characteristic of interest is used. Polynomial models describing the mean response and the variance are fit to the observed data using statistical methods, and the settings of the design parameters are determined by minimizing a multicriterion objective consisting of the MSE for each quality characteristic of interest. Minimum bias central composite designs are used during the data collection step to determine the parameter settings at which observations are to be taken. Included is the development of minimum bias designs for various cases. A detailed example is given.
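A minimal sketch of the optimization step just described, assuming the polynomial models for the mean and variance have already been fit; all coefficients, the target value, and the design region below are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative fitted polynomial models for one quality characteristic,
# obtained elsewhere by regression on central-composite-design data.
def mean_model(x):
    # predicted mean response at design parameters x = (x1, x2)
    return 10.0 + 2.0 * x[0] - 1.5 * x[1] + 0.8 * x[0] * x[1] + x[0] ** 2

def var_model(x):
    # predicted variance at x; kept positive by construction
    return 0.5 + 0.3 * x[0] ** 2 + 0.2 * x[1] ** 2

TARGET = 12.0  # target value for the quality characteristic (illustrative)

def mse(x):
    # MSE = squared bias about the target + variance
    bias = mean_model(x) - TARGET
    return bias ** 2 + var_model(x)

# Minimize the MSE objective over the coded design region [-2, 2]^2
result = minimize(mse, x0=np.zeros(2), bounds=[(-2, 2), (-2, 2)])
print("optimal settings:", result.x, "MSE:", result.fun)
```

With several quality characteristics, mse would return a weighted sum of the individual MSE terms, giving the multicriterion objective described above.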
|
993 |
Risk and admissibility for a Weibull class of distributions. Negash, Efrem Ocubamicael 12 1900
Thesis (MSc)--Stellenbosch University, 2004. / ENGLISH ABSTRACT: The Bayesian approach to decision-making is considered in this thesis for reliability/survival
models pertaining to a Weibull class of distributions. A generalised right censored sampling
scheme has been assumed and implemented. The Jeffreys' prior for the inverse mean lifetime
and the survival function of the exponential model were derived. The consequent posterior distributions
of these two parameters were obtained using this non-informative prior. In addition
to the Jeffreys' prior, the natural conjugate prior was considered as a prior for the parameter
of the exponential model and the consequent posterior distribution was derived. In many
reliability problems, overestimating a certain parameter of interest is more detrimental than underestimating
it, and hence the LINEX loss function was used to estimate the parameters and
their consequent risk measures. Moreover, analogous derivations have been carried
out for the commonly used symmetric squared error loss function. The risk function,
the posterior risk and the integrated risk of the estimators were obtained and are regarded in
this thesis as the risk measures. The performance of the estimators has been compared relative
to these risk measures. For the Jeffreys' prior under the squared error loss function, the
comparison resulted in crossing-over risk functions and hence none of these estimators is
completely admissible. However, relative to the LINEX loss function, it was found that a correctly
chosen Bayesian estimator outperforms an incorrectly chosen alternative. For the conjugate prior,
on the other hand, crossing-over of the risk functions of the estimators was again evident.
In comparing the performance of the Bayesian estimators, whenever closed-form expressions
of the risk measures do not exist, numerical techniques such as Monte Carlo procedures were
used. The posterior risks and integrated risks were used in similar fashion in the performance
comparisons.
The Weibull pdf, with its scale and shape parameter, was also considered as a reliability model.
The Jeffreys' prior and the consequent posterior distribution of the scale parameter of the
Weibull model have also been derived when the shape parameter is known. In this case, the estimation process of the scale parameter is analogous to the exponential model. For the case
when both parameters of the Weibull model are unknown, the Jeffreys' and the reference priors
have been derived and the computational difficulty of the posterior analysis has been outlined.
The Jeffreys' prior for the survival function of the Weibull model has also been derived, when
the shape parameter is known. In all cases, two forms of the scalar estimation error have been
used to compare as many risk measures as possible. The performance of the estimators was
compared for acceptability in a decision-making framework. This can be seen as a type of
procedure that addresses the robustness of an estimator relative to a chosen loss function. / AFRIKAANS ABSTRACT: The Bayesian approach to decision-making is considered in this thesis for reliability/survival
models belonging to a Weibull class of distributions. A generalised right censored sampling
scheme has been assumed and implemented. The Jeffreys' prior for the inverse of the mean
lifetime and for the survival function was derived for the exponential model. The resulting
posterior distributions of these two parameters were derived using this non-informative prior.
In addition to the Jeffreys' prior, the natural conjugate prior was considered as a prior for the
parameter of the exponential model and the corresponding posterior distribution was derived.
In many reliability problems, overestimating a parameter has more serious consequences than
underestimating it, and vice versa; consequently the LINEX loss function was used to estimate
the parameters together with the corresponding risk measures. Analogous derivations were
carried out for the common symmetric quadratic loss function. The risk function, the posterior
risk and the integrated risk of the estimators were obtained and are regarded in this thesis as
the risk measures. The behaviour of the estimators was compared relative to these risk measures.
For the Jeffreys' prior under quadratic loss, the comparison resulted in crossing-over risk
functions, and consequently none of these estimators is completely admissible. Relative to the
LINEX loss function, however, it was found that the correct Bayes estimator performs better
than the alternative estimator. On the other hand, crossing-over risk functions of the estimators
were found for the conjugate prior. In these behavioural comparisons, numerical techniques
such as Monte Carlo procedures are applied whenever the measures cannot be found in closed
form. In a similar fashion the posterior risk and the integrated risks were used in the behavioural
comparisons.
The Weibull probability distribution, with a scale and shape parameter, was also considered as
a reliability model. The Jeffreys' prior and the resulting posterior distribution of the scale
parameter of the Weibull model were derived for the case where the shape parameter is known.
In this case the estimation process for the scale parameter is analogous to the derivations for
the exponential model. When both parameters of the Weibull model are unknown, the Jeffreys'
prior and the reference prior were derived, and the computational complications of a posterior
analysis were pointed out. The Jeffreys' prior for the survival function of the Weibull model
was also derived for the case where the shape parameter is known. In all cases two forms of
the scalar estimation error were used in the comparisons, so that as many risk measures as
possible could be compared. The behaviour of the estimators was compared for acceptability
within the decision-making framework. This can be seen as a procedure to address the
robustness of an estimator relative to a chosen loss function.
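To make the exponential-model ingredients above concrete: with rate parameter lambda (the inverse mean lifetime), Jeffreys' prior proportional to 1/lambda, and a complete sample of size n with total observed lifetime T, the posterior is Gamma(n, T). Under the LINEX loss L(d, lambda) = exp(a(d - lambda)) - a(d - lambda) - 1, the Bayes estimator is d = -(1/a) log E[exp(-a lambda) | data] = (n/a) log(1 + a/T), while under squared error it is the posterior mean n/T. A minimal Monte Carlo sketch of the risk comparison at a fixed true rate follows (uncensored data only; the generalized censoring scheme of the thesis is not reproduced, and all numerical values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
a = 1.0          # LINEX shape: a > 0 penalizes overestimation more heavily
lam_true = 2.0   # true exponential rate (inverse mean lifetime)
n = 10           # sample size

def linex_loss(d, lam):
    z = a * (d - lam)
    return np.exp(z) - z - 1.0

# The total lifetime of n iid Exp(lam) observations is Gamma(n, rate lam),
# so each simulated sample can be summarized by one Gamma draw.
reps = 100_000
T = rng.gamma(shape=n, scale=1.0 / lam_true, size=reps)

d_linex = (n / a) * np.log1p(a / T)   # Bayes estimator under LINEX loss
d_se = n / T                          # posterior mean (squared-error estimator)

print("risk of LINEX estimator under LINEX loss:",
      linex_loss(d_linex, lam_true).mean())
print("risk of squared-error estimator under LINEX loss:",
      linex_loss(d_se, lam_true).mean())
```

Sweeping lam_true over a grid traces out the risk functions whose crossings, or lack thereof, drive the admissibility conclusions above.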
|
994 |
Statistical analysis of marine water quality data in Hong Kong. Cheung, Ngai-pang., 張毅鵬. January 2001
Master of Science in Environmental Management
|
995 |
Managerial use of quantitative techniques in building project management: contractors' perspectives. Lin, Chun-ming., 連振明. January 2000
Master of Science in Construction Project Management
|
996 |
Applications of Bayesian statistical model selection in social science research. So, Moon-tong., 蘇滿堂. January 2007
Doctor of Philosophy, Social Sciences
|
997 |
A comparison of Bayesian and classical statistical techniques used to identify hazardous traffic intersections. Hecht, Marie B. January 1988
The accident rate at an intersection is one attribute used to evaluate the hazard associated with the intersection. Two techniques traditionally used to make such evaluations are the rate-quality technique and a technique based on the confidence interval of classical statistics. Both of these techniques label intersections as hazardous if their accident rate is greater than some critical accident rate determined by the technique. An alternative technique is one based on a Bayesian analysis of available accident number and traffic volume data. In contrast to the two classic techniques, the Bayesian technique identifies an intersection as hazardous based on a probabilistic assessment of accident rates. The goal of this thesis is to test and compare the ability of the three techniques to accurately identify traffic intersections known to be hazardous. Test data is generated from an empirical distribution of accident rates. The techniques are then applied to the generated data and compared based on the simulation results.
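One common concrete form of such a Bayesian technique is a Poisson-Gamma model: accident counts are Poisson given the underlying rate, a Gamma prior describes region-wide rates, and the posterior probability that an intersection's rate exceeds a critical value drives the hazardous label. The sketch below assumes this conjugate formulation, which is not necessarily the thesis's exact one, and all numbers are illustrative:

```python
from scipy import stats

# Gamma prior on the accident rate (per million entering vehicles),
# e.g. fitted to region-wide historical data; values are illustrative.
alpha0, beta0 = 2.0, 2.0  # prior shape and rate

def prob_hazardous(accidents, exposure, critical_rate):
    """Posterior probability that the true accident rate exceeds critical_rate.

    accidents: observed accident count at the intersection
    exposure:  traffic volume, in millions of entering vehicles
    Poisson likelihood + Gamma prior => Gamma posterior.
    """
    alpha_post = alpha0 + accidents
    beta_post = beta0 + exposure
    return stats.gamma.sf(critical_rate, a=alpha_post, scale=1.0 / beta_post)

# Flag as hazardous if P(rate > critical) exceeds a chosen threshold, say 0.95
p = prob_hazardous(accidents=9, exposure=3.0, critical_rate=1.5)
print(f"P(rate > 1.5) = {p:.3f} -> hazardous: {p > 0.95}")
```

In contrast, the rate-quality and classical confidence-interval techniques compare the observed rate directly against a critical rate, with no probabilistic statement about the true rate.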
|
998 |
Complexity as a Form of Transition From Dynamics to Thermodynamics: Application to Sociological and Biological Processes. Ignaccolo, Massimiliano 05 1900
This dissertation addresses the delicate problem of establishing the statistical mechanical foundation of complex processes. These processes are characterized by a delicate balance of randomness and order, and a correct paradigm for them seems to be the concept of sporadic randomness. First, we studied whether it is possible to establish a foundation for these processes on the basis of a generalized, non-extensive version of thermodynamics. A detailed account of this attempt is reported in Ignaccolo and Grigolini (2001), which shows that this approach leads to inconsistencies. It is shown that there is no need to generalize the Kolmogorov-Sinai entropy by means of a non-extensive indicator, and that the anomaly of these processes rests not on their non-extensive nature but on the fact that the transition from dynamics to thermodynamics, while still extensive, occurs on an exceptionally extended time scale. Even when the invariant distribution exists, the time necessary to reach the thermodynamic scaling regime is infinite. Where no invariant distribution exists, the complex system lives forever in a condition intermediate between dynamics and thermodynamics. This discovery made it possible to create a new method of analysis of non-stationary time series, currently applied to problems of sociological and physiological interest.
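The closing sentence's method is, in this research line, plausibly the diffusion entropy analysis of Grigolini and co-workers; treating it as such is an assumption here. The idea is to turn the series into an ensemble of diffusion trajectories and read a scaling exponent delta from the growth of the Shannon entropy of the displacement distribution, S(t) ~ A + delta ln t. A minimal sketch:

```python
import numpy as np

def diffusion_entropy(xi, t_max=100, bins=50):
    """Scaling exponent from a diffusion-entropy analysis of a series xi.

    Overlapping windows of length t are summed to build diffusion
    displacements x(t); S(t) is the Shannon entropy of their histogram.
    """
    csum = np.concatenate(([0.0], np.cumsum(xi)))
    ts = np.unique(np.logspace(0.3, np.log10(t_max), 20).astype(int))
    entropies = []
    for t in ts:
        x = csum[t:] - csum[:-t]  # displacements over windows of length t
        p, edges = np.histogram(x, bins=bins, density=True)
        dx = edges[1] - edges[0]
        p = p[p > 0]
        entropies.append(-np.sum(p * np.log(p) * dx))
    # slope of S(t) against ln t is the scaling exponent delta
    return np.polyfit(np.log(ts), entropies, 1)[0]

# For ordinary (Gaussian) diffusion, delta should be close to 0.5
rng = np.random.default_rng(1)
print(diffusion_entropy(rng.normal(size=100_000)))
```

Deviations of delta from 0.5, or a drift of the local slope with t, are the signatures of the anomalously slow dynamics-to-thermodynamics transition described above.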
|
999 |
Literaturauswahl zur Statistik. Huschens, Stefan 30 March 2017
A selection of literature on statistics that is subjective, historically grown, selective and incomplete, but perhaps useful nonetheless.
|
1000 |
A Simulation Study Comparing Various Confidence Intervals for the Mean of Voucher Populations in Accounting. Lee, Ihn Shik 12 1900
This research examined the performance of three parametric methods for confidence intervals: the classical, the Bonferroni, and the bootstrap-t method, as applied to estimating the mean of voucher populations in accounting. Usually auditing populations do not follow standard models. The population for accounting audits generally is a nonstandard mixture distribution in which the audit data set contains a large number of zero values and a comparatively small number of nonzero errors. This study assumed a situation in which only overstatement errors exist. The nonzero errors were assumed to be normally, exponentially, and uniformly distributed. Five indicators of performance were used. The classical method was found to be unreliable. The Bonferroni method was conservative for all population conditions. The bootstrap-t method was excellent in terms of reliability, but the lower limit of the confidence intervals produced by this method was unstable for all population conditions. The classical method provided the shortest average width of the confidence intervals among the three methods. This study provided initial evidence as to how the parametric bootstrap-t method performs when applied to the nonstandard distribution of audit populations of line items. Further research should provide a reliable confidence interval for a wider variety of accounting populations.
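The bootstrap-t interval studied here can be sketched compactly. The code below simulates a nonstandard audit mixture (many zero errors and a few exponentially distributed overstatements; all parameters are illustrative, not drawn from the study) and computes the studentized bootstrap interval for the mean:

```python
import numpy as np

rng = np.random.default_rng(42)

# Nonstandard audit mixture: many zero errors, few positive overstatements
n = 200
errors = np.where(rng.random(n) < 0.9, 0.0,
                  rng.exponential(scale=100.0, size=n))

def bootstrap_t_ci(x, alpha=0.05, B=5000):
    """Studentized (bootstrap-t) confidence interval for the mean of x."""
    n = len(x)
    mean, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
    t_stats = np.empty(B)
    for b in range(B):
        xb = x[rng.integers(0, n, size=n)]      # resample with replacement
        se_b = xb.std(ddof=1) / np.sqrt(n)
        t_stats[b] = (xb.mean() - mean) / se_b  # studentized statistic
    lo_q, hi_q = np.quantile(t_stats, [1 - alpha / 2, alpha / 2])
    return mean - lo_q * se, mean - hi_q * se   # note reversed quantiles

print("bootstrap-t 95% CI for the mean error:", bootstrap_t_ci(errors))
```

Resamples that happen to capture very few nonzero errors make the studentized statistic heavily skewed, which is one route to the unstable lower limit reported above.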
|