1) Discussions of whether to do Bayesian analyses in physics often flounder because the choice of prior is controversial or unclear. However, in neutrino physics the mass hierarchy has an obvious and seemingly uncontroversial prior: P(NH) = P(IH) = 0.5. What is the best argument, if any, for not using Bayesian techniques with this prior when trying to determine the mass hierarchy?
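For concreteness, a minimal sketch of how the 0.5/0.5 prior enters the posterior odds; the likelihood values below are invented placeholders, not taken from any experiment:

```python
# Minimal sketch: posterior probability of the normal hierarchy (NH) given
# equal priors P(NH) = P(IH) = 0.5.  The likelihood values are placeholders;
# in practice they would be the data likelihoods maximised (or marginalised)
# over the oscillation and nuisance parameters under each hypothesis.
prior_NH, prior_IH = 0.5, 0.5
L_NH, L_IH = 1.0e-12, 4.0e-13   # hypothetical L(data | NH), L(data | IH)

posterior_NH = L_NH * prior_NH / (L_NH * prior_NH + L_IH * prior_IH)
bayes_factor = L_NH / L_IH      # with equal priors, posterior odds = Bayes factor

print(f"P(NH | data) = {posterior_NH:.3f}, Bayes factor NH/IH = {bayes_factor:.2f}")
```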
2) What issues that arise in preparing results from neutrino experiments are less common, less important, or nonexistent in collider analyses?
3) How can we improve the communication of experimental results to maximize their future value? What sorts of information can be put in electronic files accompanying a result, and what associated documentation is needed?
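As one illustration of the kind of machine-readable information that could accompany a result, a sketch of writing a one-dimensional likelihood scan plus metadata to a JSON file; the file name, field names, and numbers are all invented:

```python
# Sketch: storing a 1D likelihood scan with enough metadata to be reusable
# later.  The field names and values here are invented placeholders.
import json

result = {
    "parameter": "delta_CP",                     # name of the scanned parameter
    "values": [-3.0, -1.5, 0.0, 1.5, 3.0],       # scan points
    "delta_2lnL": [4.1, 1.2, 0.0, 0.9, 3.6],     # -2 * delta log-likelihood at each point
    "nuisance_treatment": "profiled",            # how nuisance parameters were removed
    "notes": "see accompanying documentation for the likelihood definition",
}

with open("result_scan.json", "w") as f:
    json.dump(result, f, indent=2)
```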
4) What are the relative merits of using marginalisation or profiling to eliminate nuisance parameters from Bayesian posteriors or likelihood functions?
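To illustrate the distinction, a toy sketch comparing the two ways of eliminating a single nuisance parameter theta from a Gaussian likelihood in a parameter of interest mu; all numbers are invented:

```python
# Toy sketch: eliminating a nuisance parameter theta from L(mu, theta)
# either by profiling (maximise over theta) or by marginalising
# (integrate over theta with its prior).  All numbers are invented.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

def log_like(mu, theta):
    # toy Gaussian likelihood in (mu, theta) with a unit-Gaussian
    # constraint/prior term on theta already included
    return -0.5 * ((mu - theta)**2 / 0.5**2 + theta**2 / 1.0**2)

def profile(mu):
    # profile log-likelihood: maximise over theta at fixed mu
    res = minimize_scalar(lambda th: -log_like(mu, th))
    return -res.fun

def marginal(mu):
    # marginal log-likelihood: integrate exp(logL) over theta
    val, _ = quad(lambda th: np.exp(log_like(mu, th)), -10, 10)
    return np.log(val)

for mu in (0.0, 1.0, 2.0):
    print(f"mu={mu}: profile logL={profile(mu):.3f}, marginal logL={marginal(mu):.3f}")
```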
10) The standard technique used to compute the systematic errors due to uncertainties in neutrino cross-sections is to reweight the events simulated at the nominal values of the cross-section parameters. The event weights are computed at +/- 1 or 2 sigma from the nominal values of these parameters. How statistically sound is this method? Are there specific cases where it may fail? Are there other, preferable techniques?
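A schematic sketch of the reweighting procedure being described; the cross-section model, parameter name, and numbers below are invented placeholders:

```python
# Schematic sketch of cross-section reweighting: events are generated at the
# nominal parameter value, and per-event weights are formed as the ratio of
# the cross-section model evaluated at the shifted parameter value to its
# nominal value.  The "model" here is an invented stand-in.
import numpy as np

rng = np.random.default_rng(1)
E_nu = rng.uniform(0.5, 5.0, size=10_000)          # toy neutrino energies (GeV)

def xsec_model(E, axial_mass):
    # invented stand-in for a cross-section dependence on an axial-mass-like parameter
    return E / (1.0 + (axial_mass / E)**2)

nominal, sigma = 1.0, 0.1                           # nominal value and 1-sigma uncertainty
for shift in (-2, -1, 0, 1, 2):
    weights = xsec_model(E_nu, nominal + shift * sigma) / xsec_model(E_nu, nominal)
    print(f"{shift:+d} sigma: mean weight = {weights.mean():.3f}")
```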
11) A more basic question: how should systematic errors be computed? Are there any general guiding principles that the panel would like to share?
12-15) Most nuisance parameters are treated as if they were constrained by some external measurement of finite resolution (typically given a Gaussian penalty term with a specified mean and variance). For such parameters, the justifications used for profiling or marginalisation are (in principle) clear.
But sometimes the nuisance parameters (and associated constraints) are more ad hoc: for example, an interpolation factor between two unrelated models of the background (see the sketch after this set of questions). Quite often these will use some other prior (for example no penalty term, corresponding to a flat prior), which may be improper, or may be constrained by boundaries (again possibly imposed in an ad hoc fashion).
In such cases are there any general guidelines as to whether marginalisation and/or profiling across the parameter will still produce acceptable results?
Is it logically consistent for both approaches to use the same penalty term if it is not Gaussian?
If there are hard boundaries on the nuisance parameter, do they need special attention?
Are there other techniques that are problematic (e.g. incorporating the parameter into post-fit covariances)?
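To make the question concrete, a toy sketch of a nuisance parameter f that interpolates between two background models, with a flat prior and hard boundaries at [0, 1], eliminated either by bounded profiling or by marginalisation within the boundaries; all numbers are invented:

```python
# Toy sketch: an interpolation parameter f in [0, 1] mixing two background
# models, with a flat prior and hard boundaries.  The parameter is removed
# either by profiling (bounded maximisation) or by marginalising (integration
# over [0, 1]).  All numbers are invented.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

n_obs = 25.0
signal = 10.0
bkg_model_A, bkg_model_B = 12.0, 20.0

def log_like(mu, f):
    # Poisson log-likelihood with expected count mu*signal + interpolated background
    expected = mu * signal + (1 - f) * bkg_model_A + f * bkg_model_B
    return n_obs * np.log(expected) - expected

def profile(mu):
    res = minimize_scalar(lambda f: -log_like(mu, f), bounds=(0.0, 1.0), method="bounded")
    return -res.fun

def marginal(mu):
    val, _ = quad(lambda f: np.exp(log_like(mu, f)), 0.0, 1.0)  # flat prior on [0, 1]
    return np.log(val)

for mu in (0.5, 1.0, 1.5):
    print(f"mu={mu}: profile logL={profile(mu):.2f}, marginal logL={marginal(mu):.2f}")
```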
16) Should we abandon the D'Agostini method entirely? Assuming we need to unfold, is this method more problematic than the others? Can we get some kind of statement about this from the statisticians?
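For reference, a minimal sketch of the iterative (D'Agostini-style) unfolding update being discussed, applied to an invented two-bin toy with unit efficiencies:

```python
# Minimal sketch of D'Agostini-style iterative unfolding on an invented toy.
# R[i, j] = P(reconstructed bin i | true bin j); efficiencies taken to be 1.
import numpy as np

R = np.array([[0.8, 0.3],
              [0.2, 0.7]])            # invented response matrix (columns sum to 1)
observed = np.array([120.0, 80.0])    # invented observed counts

unfolded = np.full(2, observed.sum() / 2)   # flat starting prior
for iteration in range(4):
    prior = unfolded / unfolded.sum()
    folded = R @ prior                      # P(reco bin i) under current prior
    # Bayes' theorem: P(true j | reco i) = R[i, j] * prior[j] / folded[i]
    posterior = R * prior[None, :] / folded[:, None]
    unfolded = posterior.T @ observed       # redistribute observed counts
    print(f"iteration {iteration + 1}: unfolded = {unfolded.round(1)}")
```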
----------------------------------
Draft question, replaced by #5 above
Should we try to avoid using unfolding, or are there methods which give reliable estimates, uncertainties and correlations for the unfolded spectra? Are there situations in which unfolding is really required? And how should bin sizes be chosen, or should we be using unbinned methods? Mikael Kuusela's talk is after the panel session and may mention this.