TestIntegratePeaksMDHKL.py (5.0 KB)
An error is raised while executing IntegratePeaksMDHKL on a workspace generated by ConvertQtoHKLMDHisto. I'm posting this question in addition to reading the source code, to understand what causes the problem and to find a possible workaround.
The Python script that caused the error is attached, and the procedure is briefly described below.
Thanks in advance.
• flux and solid-angle data are loaded
• the nexus file /SNS/TOPAZ/IPTS-XXXXX/nexus/TOPAZ_YYYYY.nxs.h5 is loaded as "eventsName", and bad pulses are filtered
• an IsawDetCal calibration is loaded into eventsName
• eventsName is converted to MD; the MD output is named "mdName"
• peaks are found in mdName; the peaks workspace is named "peaksName"
• a UB matrix is loaded into peaksName (the UB was found earlier and is known to work well for run YYYYY)
• mdName is fed to ConvertQtoHKLMDHisto (see the attached code for details, and the Python sketch after this list)
- H, K, and L each run from -10 to 10 in steps of 0.1
• Before continuing, I verified that the data is binned as expected.
It appears that the binning for a given dimension in ConvertQtoHKLMDHisto is

assignedValue(bin_n) = min + (max - min)/(2N) + (n - 1)(max - min)/N

or equivalently

min + (max - min) * (1/(2N) + (n - 1)/N)

where min = min(Extents), max = max(Extents), and N is the number of bins; that is, each bin is assigned its centre. For example, with extents -10 to 10 and N = 200 (step 0.1), bin 1 is assigned -10 + 20/400 = -9.95.
• mdHistoName is fed to IntegratePeaksMDHKL
- tried once specifying DeltaHKL, GridPoints, FluxWorkspace, and SolidAngleWorkspace (plus the mandatory parameters)
- tried once specifying only the mandatory parameters
• Based on the log message "Rounding max from: 5.3 to the nearest whole width at: 5.35", and many more messages like it, I figured there are two possibilities:
- IntegratePeaksMDHKL uses the binning min + (n - 1)(max - min)/N
- IntegratePeaksMDHKL uses binning similar to ConvertQtoHKLMDHisto, but with min = min(value assigned to a bin by ConvertQtoHKLMDHisto)

It is possible that this need for rounding leads to the error message:

"Error in execution of algorithm IntegrateMDHistoWorkspace: Error making MDHistoDimension. Cannot have dimension with min > max"
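For concreteness, here is a minimal Python sketch of the chain described above. It is not the attached script: the workspace names (eventsName, mdName, peaksName, mdHistoName) are the placeholders used in this post, the file names and numeric parameters are example values, and the ConvertQtoHKLMDHisto / FindPeaksMD parameter names are from memory, so please check the attached code and the algorithm docs. The last few lines check the bin-centre formula above.

from mantid.simpleapi import (Load, LoadEventNexus, FilterBadPulses,
                              LoadIsawDetCal, ConvertToMD, FindPeaksMD,
                              LoadIsawUB, ConvertQtoHKLMDHisto,
                              IntegratePeaksMDHKL)

# Flux and solid-angle data (hypothetical file names)
Load(Filename='flux.nxs', OutputWorkspace='flux')
Load(Filename='solid_angle.nxs', OutputWorkspace='sa')

# Load the run, filter bad pulses, and apply the ISAW calibration
LoadEventNexus(Filename='/SNS/TOPAZ/IPTS-XXXXX/nexus/TOPAZ_YYYYY.nxs.h5',
               OutputWorkspace='eventsName')
FilterBadPulses(InputWorkspace='eventsName', OutputWorkspace='eventsName')
LoadIsawDetCal(InputWorkspace='eventsName', Filename='TOPAZ.DetCal')

# Convert to MD, find peaks, and load the previously determined UB
ConvertToMD(InputWorkspace='eventsName', QDimensions='Q3D',
            dEAnalysisMode='Elastic', Q3DFrames='Q_sample',
            OutputWorkspace='mdName')
FindPeaksMD(InputWorkspace='mdName', PeakDistanceThreshold=0.5,
            MaxPeaks=400, OutputWorkspace='peaksName')
LoadIsawUB(InputWorkspace='peaksName', Filename='YYYYY.mat')

# H, K, L each from -10 to 10 in steps of 0.1, i.e. 200 bins per dimension
mdHisto = ConvertQtoHKLMDHisto(InputWorkspace='mdName',
                               PeaksWorkspace='peaksName',
                               Extents='-10,10,-10,10,-10,10',
                               Bins='200,200,200',
                               OutputWorkspace='mdHistoName')

# Check the bin-centre formula:
#   centre(n) = min + (max - min)/(2N) + (n - 1)(max - min)/N
# (n is 1-based in the formula; the loop index below is 0-based)
dim = mdHisto.getDimension(0)
N, lo, hi = dim.getNBins(), dim.getMinimum(), dim.getMaximum()
for n in range(3):
    expected = lo + (hi - lo) / (2 * N) + n * (hi - lo) / N
    actual = 0.5 * (dim.getX(n) + dim.getX(n + 1))
    print(n + 1, expected, actual)

# This is the call that fails (with or without DeltaHKL, GridPoints,
# FluxWorkspace, and SolidAngleWorkspace)
IntegratePeaksMDHKL(InputWorkspace='mdHistoName', PeaksWorkspace='peaksName',
                    OutputWorkspace='peaksIntegrated')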
A naive guess at a workaround, changing IntegratePeaksMDHKL::cropHisto in IntegratePeaksMDHKL.cpp, is as follows:
MDHistoWorkspace_sptr IntegratePeaksMDHKL::cropHisto(int h, int k, int l, double box, const IMDWorkspace_sptr &ws) {
  auto cropMD = createChildAlgorithm("IntegrateMDHistoWorkspace", 0.0, 0.3);
  cropMD->setProperty("InputWorkspace", ws);
  // First attempt: the current behaviour. Crop each dimension to
  // [x - box, x + box]; a bin width of 0 keeps the original binning.
  cropMD->setProperty("P1Bin", boost::lexical_cast<std::string>(h - box) + ",0," +
                                   boost::lexical_cast<std::string>(h + box));
  cropMD->setProperty("P2Bin", boost::lexical_cast<std::string>(k - box) + ",0," +
                                   boost::lexical_cast<std::string>(k + box));
  cropMD->setProperty("P3Bin", boost::lexical_cast<std::string>(l - box) + ",0," +
                                   boost::lexical_cast<std::string>(l + box));
  cropMD->setPropertyValue("OutputWorkspace", "out");
  try {
    cropMD->executeAsChildAlg();
  } catch (...) {
    // Second attempt: shift the window up by box / 2, with an explicit
    // bin width of box / 2.
    cropMD->setProperty("P1Bin", boost::lexical_cast<std::string>(h - box / 2) + "," +
                                     boost::lexical_cast<std::string>(box / 2) + "," +
                                     boost::lexical_cast<std::string>(h + 3 * box / 2));
    cropMD->setProperty("P2Bin", boost::lexical_cast<std::string>(k - box / 2) + "," +
                                     boost::lexical_cast<std::string>(box / 2) + "," +
                                     boost::lexical_cast<std::string>(k + 3 * box / 2));
    cropMD->setProperty("P3Bin", boost::lexical_cast<std::string>(l - box / 2) + "," +
                                     boost::lexical_cast<std::string>(box / 2) + "," +
                                     boost::lexical_cast<std::string>(l + 3 * box / 2));
    try {
      cropMD->executeAsChildAlg();
    } catch (...) {
      // Third attempt: shift the window down by box / 2 instead
      // (note the negative width).
      cropMD->setProperty("P1Bin", boost::lexical_cast<std::string>(h - 3 * box / 2) + "," +
                                       boost::lexical_cast<std::string>(-box / 2) + "," +
                                       boost::lexical_cast<std::string>(h + box / 2));
      cropMD->setProperty("P2Bin", boost::lexical_cast<std::string>(k - 3 * box / 2) + "," +
                                       boost::lexical_cast<std::string>(-box / 2) + "," +
                                       boost::lexical_cast<std::string>(k + box / 2));
      cropMD->setProperty("P3Bin", boost::lexical_cast<std::string>(l - 3 * box / 2) + "," +
                                       boost::lexical_cast<std::string>(-box / 2) + "," +
                                       boost::lexical_cast<std::string>(l + box / 2));
      cropMD->executeAsChildAlg();
    }
  }
  IMDHistoWorkspace_sptr outputWS = cropMD->getProperty("OutputWorkspace");
  return std::dynamic_pointer_cast<MDHistoWorkspace>(outputWS);
}
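In the meantime, the failing child call can be reproduced (and the binning hypothesis tested) from Python without rebuilding Mantid, by running IntegrateMDHistoWorkspace directly on the histo workspace with the same P1Bin/P2Bin/P3Bin pattern that cropHisto uses. A minimal sketch, assuming the hypothetical workspace name mdHistoName from above; h, k, l, and box are example values (box plays the role of DeltaHKL):

from mantid.simpleapi import IntegrateMDHistoWorkspace

h, k, l, box = 1.0, 2.0, -3.0, 0.5
try:
    # A width of 0 tells IntegrateMDHistoWorkspace to keep the original binning
    cropped = IntegrateMDHistoWorkspace(InputWorkspace='mdHistoName',
                                        P1Bin=[h - box, 0, h + box],
                                        P2Bin=[k - box, 0, k + box],
                                        P3Bin=[l - box, 0, l + box],
                                        OutputWorkspace='out')
except RuntimeError as err:
    # Expected for peaks that trigger the bad rounding:
    # "Error making MDHistoDimension. Cannot have dimension with min > max"
    print(err)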
Even if errors from IntegrateMDHistoWorkspace can be caught with catch(...), there is certainly a better way to go about this, so I am looking through the related source files for a better solution.
At the moment I cannot test the naive workaround, since it involves altering the source code: my computer only has 12 GB of RAM, and I almost certainly don't have write or superuser privileges on the SNS virtual machine I'm using.
If anyone knows of a way to access a virtual machine that can build and run Mantid while Mantid is holding ~22 GB of data, it would be appreciated.
Any help would be great.
Thank you