Peak Radius setting

Mantid has a “Peak Radius” setting, a multiple of FWHM beyond which any peak function is assumed to go to zero. It speeds up calculations, especially on a long spectrum with many sharp peaks, but it is only defined (and visible) in the global preferences. Some functions, such as Lorentzian, have long tails: about 1% of the maximum amplitude remains at x-x0 = 5*FWHM, for example. This could cause errors when fitting a small peak of interest roughly one Peak Radius away from a more intense peak, because at that distance the intense peak’s simulated curve drops off a cliff. To be safe it’s best to set Peak Radius to a generously large value (99?) and make sure that anyone repeating the same analysis has done the same in their copy of Mantid.
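The Lorentzian tail figure is simple arithmetic; a quick check (my own snippet, nothing to do with Mantid code):

```python
# Fraction of a Lorentzian's peak height remaining n*FWHM from the centre:
# y/ymax = 1 / (1 + (2n)^2), so at n = 5 it is 1/101, i.e. about 1%,
# while at n = 2 almost 6% of the height still remains.
for n in (2, 5, 10):
    print(n, 1.0 / (1.0 + (2.0 * n) ** 2))
```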

Should the value of Peak Radius be set within the function, or could a function find out what it is and complain if it’s likely to cause errors? Or should it be a parameter to Fit()? The minimum safe radius depends on the data quality as well as on the functions used. Perhaps Mantid shouldn’t allow the value in the preferences to be set too small. The example value of 2 in the documentation is never going to be useful: it will get the wrong width and area even for a single isolated peak.

The GitHub issue “Make the peak radius an attribute of the function” (mantidproject/mantid #6391) should fix the problem.

Thanks for this. Having read the issue description, I’m not sure an attribute per function is necessarily the best approach. It may have to be handled outside the function: I note that a Python peak fit function is only called with the subset of x values already inside the radius, so by then it’s too late. It would also add a lot of unnecessary clutter for users who shouldn’t need to worry about such details.

Walking out from x=PeakCentre until the function looks “small enough” is likely to fail for functions that don’t decrease monotonically. y=sin(x)/x is a good example (and one for which perhaps no truncation ought to be done at all).
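A quick illustration of why the walk-out heuristic breaks down (my own numpy sketch, not Mantid internals):

```python
import numpy as np

# Stopping at the first point where |y| drops below a threshold fails for
# sin(x)/x, which climbs back above the threshold after every zero crossing.
x = np.linspace(1e-9, 50.0, 5000)
y = np.abs(np.sin(x) / x)
threshold = 0.05

first_small = x[np.argmax(y < threshold)]   # where a naive walk-out would stop
print("walk-out would stop at x ~", first_small)
print("but |y| later exceeds the threshold again:",
      bool(np.any(y[x > first_small] > threshold)))
```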

I’d suggest a parameter to Fit called something like “AcceptablePeakError” (defaulting to zero in many cases); a peak function could then optionally define a method “RadiusRequired(self, AcceptablePeakError)” which returns the minimum radius needed to satisfy that accuracy. The function can use knowledge of its own behaviour, or could return “None” or “Inf” to say that it should not be truncated at all. Omitting such a method would also default to no truncation.
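To make the idea concrete, a rough sketch of what such a hook might look like for a Lorentzian (the class and method name here are hypothetical, not an existing Mantid API):

```python
import math

class Lorentzian:
    """Hypothetical sketch only: a peak function reporting its own safe radius."""

    def radius_required(self, acceptable_peak_error):
        # For a Lorentzian, y/ymax = 1 / (1 + (2r)^2) at r*FWHM from the centre,
        # so the radius (in FWHM) needed for a given amplitude error is analytic.
        if acceptable_peak_error <= 0:
            return None                      # meaning: never truncate this function
        return 0.5 * math.sqrt(1.0 / acceptable_peak_error - 1.0)

print(Lorentzian().radius_required(0.01))    # ~5 FWHM, matching the 1% figure above
```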

The error could be defined either as max(y(|x-x0| >= cutoff)) / max(y), or, perhaps better, as integral(cut-off tails) / integral(whole peak). These ratios only need to be worked out by hand for each function when it is written, and the method would be called once when the function is added / set up.
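For a Lorentzian both ratios come out analytically, and they give quite different answers (my own working, with the radius measured in FWHM):

```python
import math

eps = 0.01
# Amplitude criterion: max(y outside +/- r*FWHM) / max(y) = 1 / (1 + (2r)^2)
r_amplitude = 0.5 * math.sqrt(1.0 / eps - 1.0)             # ~5 FWHM
# Integral criterion: fraction of area outside +/- r*FWHM = 1 - (2/pi)*atan(2r)
r_integral = 0.5 * math.tan(math.pi / 2.0 * (1.0 - eps))   # ~32 FWHM
print(r_amplitude, r_integral)
```

So for slowly decaying shapes the integral criterion is considerably more demanding than the amplitude one.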

What about functions for which the ratio is impossible to work out, or composite functions such as Convolution?

If the ratio is impossible (e.g. a peak function whose integral from -inf to +inf is not defined) then you default to no radius, i.e. evaluating it at all x values.

For peak shapes that are too difficult to integrate into a simple analytical expression, evaluate the error numerically (at code-writing time) for accuracies of 0.01, 0.001, etc. and store a small table from which a safe value can be chosen. Alternatively, evaluate the errors for peak radii of 4, 8, 16, 32, etc. and store those. There’s no need to be very accurate just to save 1% of computing effort.
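Something like this could generate the table once, at code-writing time (a sketch of my own, using a 50/50 pseudo-Voigt as a stand-in for an awkward shape):

```python
import numpy as np

# Tabulate the tail fraction left outside +/- radius*FWHM for a shape with no
# convenient analytic integral; these are the numbers one would store.
fwhm = 1.0
x = np.arange(-500.0, 500.0, 0.005)
y = 0.5 * np.exp(-4.0 * np.log(2.0) * (x / fwhm) ** 2) \
    + 0.5 / (1.0 + (2.0 * x / fwhm) ** 2)

dx = x[1] - x[0]
total = y.sum() * dx
for radius in (4, 8, 16, 32, 64):
    inside = y[np.abs(x) <= radius * fwhm].sum() * dx
    print(radius, 1.0 - inside / total)      # tail fraction to store in the table
```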

For a convolution, is it safe to evaluate out to (width1 * radius1) + (width2 * radius2) on each side, if radius1 and radius2 are the values requested by each function on its own?
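A brute-force numerical check of that rule is easy enough outside Mantid; something along these lines (my own sketch) truncates a Gaussian and a Lorentzian at their individual radii, convolves them, and compares against the untruncated convolution:

```python
import numpy as np

x = np.linspace(-200.0, 200.0, 8001)
dx = x[1] - x[0]

fwhm_g, fwhm_l = 2.0, 3.0
radius_g, radius_l = 5.0, 20.0            # radii each function would ask for alone

gauss = np.exp(-4.0 * np.log(2.0) * (x / fwhm_g) ** 2)
lor = 1.0 / (1.0 + (2.0 * x / fwhm_l) ** 2)

def truncate(y, half_width):
    out = y.copy()
    out[np.abs(x) > half_width] = 0.0
    return out

full = np.convolve(gauss, lor, mode="same") * dx
trunc = np.convolve(truncate(gauss, radius_g * fwhm_g),
                    truncate(lor, radius_l * fwhm_l), mode="same") * dx

# The truncated result is exactly zero beyond radius_g*fwhm_g + radius_l*fwhm_l,
# so this reports what the truncation costs inside and outside that window.
print("max error relative to peak height:", np.max(np.abs(full - trunc)) / full.max())
```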

I think I’ll have to agree with you that having the peak radius as a function attribute was a bad idea.
I am still not sure what to do with convolutions; I’ll have to experiment.
Maybe Fit should have 3 new parameters:

  1. ValueError
  2. IntegralError
  3. FixedRadius

and use whichever gives the largest radius.
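For what it’s worth, the combination step itself would be trivial; a hypothetical sketch (the per-function hook names below are made up, not existing API):

```python
def effective_radius(func, value_error=None, integral_error=None, fixed_radius=None):
    """Pick the most generous (largest) radius implied by whichever inputs are given."""
    candidates = []
    if fixed_radius is not None:
        candidates.append(fixed_radius)
    if value_error is not None:
        candidates.append(func.radius_for_value_error(value_error))          # hypothetical hook
    if integral_error is not None:
        candidates.append(func.radius_for_integral_error(integral_error))    # hypothetical hook
    if not candidates or any(c is None for c in candidates):
        return None               # None meaning: evaluate the function at all x values
    return max(candidates)
```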