1 |
A comparison of flare forecasting methods. III. Systematic behaviors of operational solar flare forecasting systems
Leka, K.D., Park, S-H., Kusano, K., Andries, J., Barnes, G., Bingham, S., Bloomfield, D.S., McCloskey, A.E., Delouille, V., Falconer, D., Gallagher, P.T., Georgoulis, M.K., Kubo, Y., Lee, K., Lee, S., Lobzin, V., Mun, J., Murray, S.A., Nageem, T.A.M.H., Qahwaji, Rami S.R., Sharpe, M., Steenburgh, R., Steward, G., Terkildsen, M. (25 July 2019)
A workshop was recently held at Nagoya University (31 October – 02 November
2017), sponsored by the Center for International Collaborative Research, at the Institute for Space-Earth Environmental Research, Nagoya University, Japan, to quantitatively compare the performance of today’s operational solar flare forecasting facilities.
Building upon Paper I of this series (Barnes et al. 2016), in Paper II (Leka et al. 2019)
we described the participating methods for this latest comparison effort, the evaluation methodology, and presented quantitative comparisons. In this paper we focus on
the behavior and performance of the methods when evaluated in the context of broad
implementation differences. Acknowledging the short testing interval and the small
number of participating methods, we do find that forecast performance: 1) appears to
improve by including persistence or prior flare activity, region evolution, and a human
“forecaster in the loop”; 2) is hurt by restricting data to disk-center observations; 3)
may benefit from long-term statistics, but mostly when combined with modern
data sources and statistical approaches. These trends are arguably weak and must be
viewed with numerous caveats, as discussed both here and in Paper II. Following this
present work, we present in Paper IV a novel analysis method to evaluate temporal
patterns of forecasting errors of both types (i.e., misses and false alarms; Park et al.
2019). Hence, most importantly, with this series of papers we demonstrate the techniques for facilitating comparisons in the interest of establishing performance-positive
methodologies.
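For readers unfamiliar with the term used in point 1 above, a "persistence" forecast simply carries recent activity forward in time. A minimal illustrative sketch of such a baseline (the function name and the sample flare history are invented for illustration, not taken from the paper):

```python
def persistence_forecast(flared_previous_day: list) -> list:
    """Persistence baseline: forecast a flare for day t iff one occurred on day t-1."""
    return list(flared_previous_day)

# Hypothetical daily flare record for one active region (True = flare occurred).
history = [False, True, True, False, False]

# Forecasts for days 1..4 are simply the outcomes of days 0..3 carried forward.
forecasts = persistence_forecast(history[:-1])
print(forecasts)  # [False, True, True, False]
```

Because large flares cluster in time within an active region, even this trivial baseline can be surprisingly hard to beat, which is one reason the paper highlights its value as an input to more sophisticated methods.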
2 |
A comparison of flare forecasting methods, I: results from the “All-clear” workshop
Barnes, G., Leka, K.D., Schrijver, C.J., Colak, Tufan, Qahwaji, Rami S.R., Ashamari, Omar, Yuan, Y., Zhang, J., McAteer, R.T.J., Bloomfield, D.S., Higgins, P.A., Gallagher, P.T., Falconer, D.A., Georgoulis, M.K., Wheatland, M.S., Balch, C. (05 July 2016)
Solar flares produce radiation which can have an almost immediate effect on the near-Earth
environment, making it crucial to forecast flares in order to mitigate their negative effects. The
number of published approaches to flare forecasting using photospheric magnetic field observations
has proliferated, with varying claims about how well each works. Because of the different analysis techniques and
data sets used, it is essentially impossible to compare the results from the literature. This problem
is exacerbated by the low event rates of large solar flares. The challenges of forecasting rare events
have long been recognized in the meteorology community, but have yet to be fully acknowledged
by the space weather community. During the interagency workshop on “all clear” forecasts held in
Boulder, CO in 2009, the performance of a number of existing algorithms was compared on common
data sets, specifically line-of-sight magnetic field and continuum intensity images from MDI, with
consistent definitions of what constitutes an event. We demonstrate the importance of making such
systematic comparisons, and of using standard verification statistics to determine what constitutes
a good prediction scheme. When a comparison was made in this fashion, no one method clearly
outperformed all others, which may in part be due to the strong correlations among the parameters
used by different methods to characterize an active region. For M-class flares and above, the set of
methods tends towards a weakly positive skill score (as measured with several distinct metrics), with
no participating method proving substantially better than climatological forecasts.

This work is the outcome of many collaborative and cooperative efforts. The 2009 “Forecasting the All-Clear” Workshop in Boulder, CO was sponsored by NASA/Johnson Space Flight Center’s Space Radiation Analysis Group, the National Center for Atmospheric Research, and the NOAA/Space Weather Prediction Center, with additional travel support for participating scientists from NASA LWS TRT NNH09CE72C to NWRA. The authors thank the participants of that workshop, in particular Drs. Neal Zapp, Dan Fry, and Doug Biesecker, for the informative discussions during those three crazy days, and NCAR’s Susan Baltuch and NWRA’s Janet Biggs for organizational prowess. Workshop preparation and analysis support was provided for GB and KDL by NASA LWS TRT NNH09CE72C and NASA Heliophysics GI NNH12CG10C. PAH and DSB received funding from the European Space Agency PRODEX Programme, while DSB and MKG also received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 640216 (FLARECAST project). MKG also acknowledges research performed under the A-EFFort project and subsequent service implementation, supported under ESA Contract number 4000111994/14/D/MPR. YY was supported by the National Science Foundation under grants ATM 09-36665, ATM 07-16950, and ATM-0745744, and by NASA under grants NNX0-7AH78G and NNXO-8AQ90G. YY owes his deepest gratitude to his advisers Prof. Frank Y. Shih, Prof. Haimin Wang, and Prof. Ju Jing for long discussions, for reading previous drafts of his work, and for providing many valuable comments that improved the presentation and contents of this work. JMA was supported by NSF CAREER Grant AGS-1255024 and by an NMSU Vice President for Research Interdisciplinary Research Grant.
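The "standard verification statistics" and "skill scores" this abstract refers to are typically built from a 2×2 contingency table of forecasts versus observed events. As an illustration only (not the workshop's actual code; the counts below are invented), one widely used measure, the True Skill Statistic, can be computed as:

```python
def true_skill_statistic(hits: int, misses: int,
                         false_alarms: int, correct_negatives: int) -> float:
    """TSS = POD - POFD: 0 for no-skill forecasts, 1 for perfect ones.

    Unlike some metrics, TSS is insensitive to the event rate, which matters
    for rare events such as large flares.
    """
    pod = hits / (hits + misses)                              # probability of detection
    pofd = false_alarms / (false_alarms + correct_negatives)  # probability of false detection
    return pod - pofd

# Hypothetical counts for a rare-event forecast period:
print(round(true_skill_statistic(8, 2, 20, 170), 3))  # 0.695
```

A no-skill forecast (e.g., one that issues "flare" at random, or always at the climatological rate) has POD equal to POFD and hence a TSS near zero, which is the benchmark the abstract's comparison is made against.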
3 |
A comparison of flare forecasting methods. II. Benchmarks, metrics and performance results for operational solar flare forecasting systems
Leka, K.D., Park, S-H., Kusano, K., Andries, J., Barnes, G., Bingham, S., Bloomfield, D.S., McCloskey, A.E., Delouille, V., Falconer, D., Gallagher, P.T., Georgoulis, M.K., Kubo, Y., Lee, K., Lee, S., Lobzin, V., Mun, J., Murray, S.A., Nageem, T.A.M.H., Qahwaji, Rami S.R., Sharpe, M., Steenburgh, R., Steward, G., Terkildsen, M. (25 July 2019)
Solar flares are extremely energetic phenomena in our Solar System. Their impulsive,
often drastic radiative increases, in particular at short wavelengths, bring immediate
impacts that motivate solar physics and space weather research to understand solar
flares to the point of being able to forecast them. As data and algorithms improve
dramatically, questions must be asked concerning how well the forecasting performs;
crucially, we must ask how to rigorously measure performance in order to critically
gauge any improvements. Building upon earlier-developed methodology (Barnes et al.
2016, Paper I), international representatives of regional warning centers and research
facilities assembled in 2017 at the Institute for Space-Earth Environmental Research,
Nagoya University, Japan to – for the first time – directly compare the performance
of operational solar flare forecasting methods. Multiple quantitative evaluation metrics
are employed, with focus and discussion on evaluation methodologies given the restrictions of operational forecasting. Numerous methods performed consistently above the
“no skill” level, although which method scored top marks is decisively a function of
flare event definition and the metric used; there was no single winner. Following in
this paper series we ask why the performances differ by examining implementation
details (Leka et al. 2019, Paper III), and then we present a novel analysis method to
evaluate temporal patterns of forecasting errors (Park et al. 2019, Paper IV). With
these works, this team presents a well-defined and robust methodology for evaluating
solar flare forecasting methods in both research and operational frameworks, and today’s performance benchmarks against which improvements and new methods may be
compared.
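Among the "multiple quantitative evaluation metrics" mentioned above, probabilistic forecasts are commonly scored against a climatological reference via the Brier skill score. A hedged illustrative sketch, with invented sample numbers (this is not the workshop's evaluation code):

```python
def brier_score(probs, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(probs, outcomes):
    """BSS > 0: forecast beats always issuing the climatological event rate;
    BSS = 1: perfect; BSS < 0: worse than climatology ("no skill" benchmark)."""
    climatology = sum(outcomes) / len(outcomes)
    bs_ref = brier_score([climatology] * len(outcomes), outcomes)
    return 1.0 - brier_score(probs, outcomes) / bs_ref

# Hypothetical daily flare probabilities and observed outcomes (1 = flare):
probs = [0.9, 0.1, 0.8, 0.2, 0.1]
outcomes = [1, 0, 1, 0, 0]
print(round(brier_skill_score(probs, outcomes), 3))  # 0.908
```

Because different metrics reward different behaviors (rare-event detection, calibration, discrimination), a method's ranking can change with the metric and event definition, consistent with the abstract's finding that there was no single winner.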