19–21 Sep 2016

Questions for the Panel Discussion

Please send your suggestions for discussion items for Wednesday's panel session to Tom Junk and Louis Lyons.  Questions we have received so far:

1)  Discussions of whether to do Bayesian analyses in physics often flounder because the choice of prior is controversial or unclear.  However, in neutrino physics the choice of mass hierarchy has an obvious and seemingly uncontroversial prior: P(NH)=P(IH)=0.5.  What is the best argument, if any, for not using Bayesian techniques with this prior when trying to determine the mass hierarchy?
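
As background for this question, here is a minimal sketch (in Python, with made-up likelihood numbers) of what such a Bayesian comparison would look like; in a real analysis the two likelihood values would come from marginalising or profiling the full experimental likelihood over the oscillation and nuisance parameters.

# Minimal sketch: Bayesian comparison of the two mass hierarchies.
# The likelihood values below are hypothetical placeholders.

prior_NH = 0.5   # P(NH), the prior discussed in the question
prior_IH = 0.5   # P(IH)

L_NH = 2.3e-5    # hypothetical marginal likelihood P(data | NH)
L_IH = 0.9e-5    # hypothetical marginal likelihood P(data | IH)

evidence = L_NH * prior_NH + L_IH * prior_IH    # P(data)
posterior_NH = L_NH * prior_NH / evidence       # P(NH | data)

print(f"P(NH | data) = {posterior_NH:.3f}")
print(f"Posterior odds NH:IH = {posterior_NH / (1.0 - posterior_NH):.2f}")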

2)  What issues that arise in preparing results from neutrino experiments are less common, less important, or nonexistent in collider analyses?

3)  How can we improve the communication of experimental results to maximize their future value?   What sorts of information can be put in electronic files accompanying a result, and what associated documentation is needed?

4)  What are the relative merits of using marginalisation or profiling to eliminate nuisance parameters from Bayesian posteriors or Likelihood functions?
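
To make the distinction concrete, the toy sketch below (assuming a single Gaussian measurement with one Gaussian-constrained nuisance parameter; all numbers are illustrative) shows marginalisation, which integrates the nuisance parameter out, next to profiling, which maximises over it.

import numpy as np
from scipy import integrate, optimize
from scipy.stats import norm

# Toy example: parameter of interest mu, one nuisance parameter theta.
# Measurement x ~ N(mu + theta, 1), constraint theta ~ N(0, 1).

def likelihood(mu, theta, x=1.2):
    return norm.pdf(x, loc=mu + theta, scale=1.0) * norm.pdf(theta, 0.0, 1.0)

def marginal_likelihood(mu):
    # marginalisation: integrate the nuisance parameter out
    val, _ = integrate.quad(lambda th: likelihood(mu, th), -10.0, 10.0)
    return val

def profile_likelihood(mu):
    # profiling: maximise over the nuisance parameter
    res = optimize.minimize_scalar(lambda th: -likelihood(mu, th),
                                   bounds=(-10.0, 10.0), method="bounded")
    return -res.fun

for mu in (0.0, 1.0, 2.0):
    print(mu, marginal_likelihood(mu), profile_likelihood(mu))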
 
5) It seems that unfolding is a difficult process, especially when it comes to finding reliable estimates of the uncertainties and correlations for the unfolded spectra. Should we try to avoid unfolding in neutrino physics, or are there situations in which unfolding is really required?
 
6)  What are the arguments for using prior or posterior predictive p-values?
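
For orientation, a posterior predictive p-value can be computed roughly as in the toy sketch below (a Poisson counting example with a hypothetical Gamma posterior for the rate); a prior predictive p-value would instead draw the rate from the prior.

import numpy as np

rng = np.random.default_rng(seed=1)

# Toy posterior predictive p-value for a Poisson counting experiment.
# Assumptions (hypothetical): observed count n_obs, Gamma posterior for the rate.

n_obs = 12
post_shape, post_rate = 8.0, 1.0                             # hypothetical posterior parameters

lam = rng.gamma(post_shape, 1.0 / post_rate, size=100_000)   # rates drawn from the posterior
n_rep = rng.poisson(lam)                                     # replicated data sets

# the test statistic here is simply the observed count itself
p_value = np.mean(n_rep >= n_obs)
print(f"posterior predictive p-value = {p_value:.3f}")

# A prior predictive p-value would be obtained the same way,
# but with lam drawn from the prior rather than from the posterior.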
 
7)  Is there a way of using Bayesian techniques for nested Hypothesis Testing that is not too sensitive to the choice of priors for the extra parameters in the larger hypothesis?
 
8)  Sometimes one of the hypotheses in Model Selection corresponds to having a parameter value fixed at the end of its physical range, e.g. the signal strength is zero,
the mass of the lightest neutrino is zero, etc. In a Bayesian approach, is it better to have a delta function at zero as part of the prior, or just some continuous function from zero upwards?
 
9)  In trying to distinguish simple hypotheses such as the different mass hierarchies, does it make sense to use a model where the pdf for the data test statistic is modelled as a linear combination of the pdfs for the two separate hierarchies?

10) The standard technique used to compute the systematic errors due to uncertainties in neutrino cross-sections is to reweight the events simulated at the nominal values of the cross-section parameters. The event weights are computed at +/- 1 or 2 sigma from the nominal values of these parameters. How statistically sound is this method? Are there specific cases where it may fail? Are there other, preferable techniques?
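
As a point of reference for the discussion, the reweighting procedure described here is, schematically, something like the sketch below (the cross-section model and all numbers are hypothetical; in a real analysis the weights come from the event generator's cross-section model evaluated event by event).

import numpy as np

rng = np.random.default_rng(seed=2)

def sigma_model(E, a):
    # hypothetical cross-section depending on neutrino energy E and parameter a
    return 1.0 + a * E

E_events = rng.uniform(0.5, 5.0, size=10_000)   # simulated event energies (GeV)
a_nominal, a_sigma = 1.0, 0.1                    # nominal parameter value and 1-sigma uncertainty

def shifted_spectrum(n_sigma):
    a_shifted = a_nominal + n_sigma * a_sigma
    weights = sigma_model(E_events, a_shifted) / sigma_model(E_events, a_nominal)
    hist, _ = np.histogram(E_events, bins=20, range=(0.5, 5.0), weights=weights)
    return hist

nominal = shifted_spectrum(0.0)
plus1 = shifted_spectrum(+1.0)
minus1 = shifted_spectrum(-1.0)
syst_error = 0.5 * np.abs(plus1 - minus1)        # one common (symmetrised) convention
print(syst_error[:5])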

11) A more basic question: how should systematic errors be computed? Are there any general guiding principles that the panel would like to share?

12-15)
Most nuisance parameters are treated as if they were constrained by some external measurement of finite resolution (typically given a Gaussian penalty term with a specified mean and variance). For such parameters, the justifications used for profiling or marginalisation are (in principle) clear.  

But sometimes the nuisance parameters (and associated constraints) are more ad hoc: for example, an interpolation factor between two unrelated models of the background.  Quite often these will use some other prior (for example no penalty term, corresponding to a flat prior), which may be improper, or may be constrained by boundaries (again possibly imposed in an ad hoc fashion); a toy sketch of such a setup is given after these questions.

In such cases are there any general guides as to whether marginalisation and/or profiling across the parameter will still produce acceptable results?
Is it logically consistent for both approaches to use the same penalty term if it is not Gaussian?
If there are hard boundaries on the nuisance parameter, do they need special attention?
Are there other techniques that are problematic (e.g. incorporating the parameter into post-fit covariances)?
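
For concreteness, the kind of likelihood behind questions 12-15 can be sketched as follows (all numbers and model shapes are hypothetical): one nuisance parameter carries a Gaussian penalty term, while a second, ad hoc parameter interpolates between two background models and has only hard boundaries, i.e. a flat prior.

import numpy as np
from scipy.stats import norm, poisson

# Hypothetical counting experiment with two nuisance parameters:
#   theta - externally constrained, Gaussian penalty with mean 0 and width 1
#   f     - ad hoc interpolation between two background models, flat in [0, 1]

n_obs = np.array([25, 30, 22])              # hypothetical observed counts per bin
bkg_model_A = np.array([20.0, 25.0, 18.0])
bkg_model_B = np.array([24.0, 21.0, 20.0])

def log_likelihood(mu, theta, f):
    f = np.clip(f, 0.0, 1.0)                         # hard boundary on the ad hoc parameter
    bkg = (1.0 - f) * bkg_model_A + f * bkg_model_B
    expected = mu * np.array([5.0, 4.0, 3.0]) + bkg * (1.0 + 0.1 * theta)
    ll = poisson.logpmf(n_obs, expected).sum()
    ll += norm.logpdf(theta, 0.0, 1.0)               # Gaussian penalty term for theta
    # no penalty term for f: corresponds to a flat (and bounded) prior
    return ll

print(log_likelihood(mu=1.0, theta=0.0, f=0.5))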

16)  Should we abandon the D'Agostini method entirely? Assuming we need to unfold, is this method more problematic than the others? Can we get some kind of statement about this from the statisticians?
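
For orientation, the D'Agostini (iterative Bayesian) unfolding update being referred to is, schematically, the toy sketch below (with a hypothetical 3-bin response matrix); the concern raised in the question is about the reliability of the resulting uncertainties and correlations, which this sketch does not attempt to estimate.

import numpy as np

# Schematic D'Agostini (iterative Bayesian) unfolding in a 3-bin toy example.
# R[i, j] = P(measured bin i | true bin j); all numbers are hypothetical.

R = np.array([[0.70, 0.15, 0.00],
              [0.20, 0.60, 0.20],
              [0.00, 0.15, 0.70]])
data = np.array([100.0, 150.0, 120.0])   # measured counts
efficiency = R.sum(axis=0)               # probability that a true event is measured at all

unfolded = np.full(3, data.sum() / 3.0)  # flat starting prior
for iteration in range(4):               # a small, fixed number of iterations, as usually done
    num = R * unfolded                                # num[i, j] = R[i, j] * unfolded[j]
    P_true = num / num.sum(axis=1, keepdims=True)     # P(true bin j | measured bin i)
    unfolded = (P_true.T @ data) / efficiency         # updated estimate of the true spectrum
    print(iteration, unfolded)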



----------------------------------
Draft question, replaced by #5 above

Should we try to avoid using unfolding, or are there methods that give reliable estimates, uncertainties, and correlations for the unfolded spectra? Are there situations in which unfolding is really required? And how should bin sizes be chosen, or should we be using unbinned methods?  Mikael Kuusela's talk is after the panel session and may mention this.