Progress bar for child algorithm

I have a parent algorithm written in Python which calls built-in Mantid algorithms such as Fit to do the work. My algorithm has a progress bar of its own, but it is overwritten by the progress bars of the child algorithms, which whizz from 0 to 100% many times. How do I either suppress the progress of the children or confine each one to a sub-range, for example from i*10% to (i+1)*10%?
(I’m using myalg=AlgorithmManager.create(AlgName) and myalg.setChild(True) to prevent them logging anything or putting their results in the ADS. That alone has speeded up the parent algorithm considerably!)

You can specify extra arguments in your call to the child algorithm for what part of the progress bar you will use, startProgress and endProgress. There are lots of examples in AlignAndFocusPowderFromFiles. The biggest challenge is figuring out what the start/end percentages are.
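One way to work out the start/end fractions is a small helper that carves the parent's own progress range into equal sub-intervals, one per child call. This is just a sketch (child_span is a hypothetical name, not a Mantid function); note that startProgress/endProgress are fractions in [0, 1], as in the excerpt below:

```python
def child_span(parent_start, parent_end, i, n):
    """Progress sub-interval for the i-th of n child calls,
    within the parent's [parent_start, parent_end] range."""
    width = (parent_end - parent_start) / n
    return parent_start + i * width, parent_start + (i + 1) * width

# e.g. the 4th of 10 child calls, with the parent using the whole bar:
start, end = child_span(0.0, 1.0, 3, 10)
# these become the startProgress/endProgress arguments of the child call
```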

Thanks. The following simple excerpt (wrapped in an Algorithm) works, and the progress bar scrolls smoothly across. The only flaw is a missing “estimated time to completion”.

for i, r in enumerate(runs):   # runs holds the 10 run numbers to load
    LoadMuonNexus(Filename=basename.format(r), OutputWorkspace="HIFI{:08d}".format(r),
                  startProgress=i*0.1, endProgress=(i+1)*0.1)

But the following (which I need in order to do setChild() and avoid all the “LoadMuonNexus started” and “LoadMuonNexus successful, Duration 0.37 seconds”) doesn’t:

for i, r in enumerate(runs):   # child-algorithm version of the same loop
    myalg = AlgorithmManager.create("LoadMuonNexus")
    myalg.setChild(True)
    myalg.setProperty("Filename", basename.format(r))
    myalg.setProperty("OutputWorkspace", "HIFI{:08d}".format(r))
    myalg.execute()   # no obvious place to pass startProgress/endProgress

Is there a myalg.setStartProgress() I should be using instead here? I can’t find anything in the online Python API description of Algorithm.

I’d be equally happy with a way to suppress the progress bar of the children and do my own, as would happen if the algorithm did its own numerical calculation. The actual application calls Fit() many hundreds of times, each runs very quickly, and I’d be satisfied with the progress bar incrementing only on the completion of each Fit.

The startProgress and endProgress are pseudo-properties that are stripped out before the actual algorithm is called. I believe that if you specify them in AlgorithmManager.create instead they will work, but I cannot find code to link to as an example.

Separately, if you are calling Fit a lot of times, you are losing a lot of performance by creating the algorithm anew each time. Instead, create one Fit algorithm, then repeatedly set its properties, call execute, and get the results, until you've performed all the fits. The overhead of object creation takes up more time than you would expect.
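In outline the reuse pattern looks like this. FitLike below is a pure-Python stand-in for the algorithm object you would get from AlgorithmManager.create("Fit") (Mantid itself isn't importable in this sketch, and its "fit" just averages the data); the point is only the shape of the loop: one object created up front, with properties re-set and execute() called on each pass.

```python
class FitLike:
    """Stand-in for an algorithm object such as AlgorithmManager.create('Fit')."""
    def __init__(self):
        self._props = {}

    def setProperty(self, name, value):
        self._props[name] = value

    def execute(self):
        data = self._props["InputData"]   # pretend "fit": return the mean
        return sum(data) / len(data)

alg = FitLike()                    # created once, outside the loop
results = []
for dataset in ([1, 2, 3], [4, 5, 6]):
    alg.setProperty("InputData", dataset)   # re-set properties each pass
    results.append(alg.execute())           # execute and collect the result
```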

Useful to know. The re-used Algorithm works fine for LoadMuonNexus (where the run time is probably dominated by the actual data transfer anyway). But as usual Fit is more fussy. I tried:

a = AlgorithmManager.create("Fit")   # created once and reused, with a.setChild(True)
for h in range(10):
    # ... set a's Function, InputWorkspace, etc. for fit h, then:
    a.execute()
    p = a.getProperty("OutputParameters").value   # the fit parameter table
    fr = dict((r["Name"], r["Value"]) for r in p)
    fe = dict((r["Name"], r["Error"]) for r in p)
    print(h, fr["A0"], "+-", fe["A0"])

and it falls over on the second attempt to run a.execute() with

RuntimeError: Property with given name already exists search object OUTPUTNORMALISEDCOVARIANCEMATRIX
  at line 12 in 'New script'

Also if I can only specify the progress percentages in AlgorithmManager.create(), that instance of Fit is destined to run its scroll bar from (for example) 0 to 5% each of the 20 times it’s run…

I had a go at profiling my algorithm (averages per call):

- Creating a new Fit() child algorithm object with AlgorithmManager.create(): 0.155 ms.
- Filling in its properties prior to calling it: 0.346 ms.
- Executing it: 6.93 ms. (This is with the scroll bar appearing at 0%, scrolling along to perhaps 1-5% in most cases since the fit converges quickly, and vanishing again. A few bad spectra take the whole 100 iterations.)

So there's probably not all that much to be gained, unless alg.execute() itself runs setup code that would be bypassed on the second and subsequent uses. The total execution time of all the Fits accounts for about 85% of that of the parent algorithm, so any improvement would be welcome…
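Plugging in those numbers supports that conclusion: the per-call overhead that reuse could remove is only a few percent of the Fit time, and a little over 5% of the whole parent algorithm (simple arithmetic on the figures above):

```python
# timings from the profile above, in ms per call
create, fill, execute = 0.155, 0.346, 6.93
overhead = (create + fill) / (create + fill + execute)
print("fraction of Fit time spent on setup: %.1f%%" % (100 * overhead))
parent_saving = 0.85 * overhead   # the Fits are ~85% of the parent's runtime
print("best-case saving on the parent:      %.1f%%" % (100 * parent_saving))
```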