Use case "Scaling with SCALA"

From EdnaWiki


The main objective is to scale the collected data once they have been integrated (either by MOSFLM or XDS). The idea is to use SCALA for the scaling step. Before running SCALA, the MTZ file coming from MOSFLM should be sorted with SORTMTZ.

Please refer to the SCALA documentation for more details:

Importance: Very important
Priority: High priority
Use Frequency: Frequently used
Direct Actors: Kernel / User
Pre-requirements: Kernel running



The inputs to the scaling step are:

  • the anomalous mode (on/off)
  • the sorted MTZ file
  • the high-resolution limit
  • the low-resolution limit
  • the scale method (related to the "SCALES" subkeys)
    • BATCH or ROTATION method
      • if the ROTATION method is used, the variation of the scale factor should be set (either by defining the number of scale factors or a rotation delta, referred to as SPACING)
      • if the BFACTOR option has been set, the variation of the B-factor should be set (either by defining the number of B-factors or a time delta, referred to as SPACING)
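For illustration, these inputs map fairly directly onto SCALA keyword input. The helper below is a hypothetical sketch (the function and parameter names are not part of any EDNA API) that assembles the corresponding SCALES, RESOLUTION and ANOMALOUS keywords:

```python
def scala_keywords(anomalous, res_low, res_high, method,
                   spacing=None, bfactor=False, bfactor_spacing=None):
    """Build SCALA keyword input from the inputs listed above.

    method: "batch" or "rotation" (the SCALES subkeys).
    spacing: rotation delta for the scale factors (ROTATION method only).
    bfactor_spacing: time delta for the B-factors (BFACTOR option only).
    All names here are illustrative, not an EDNA interface.
    """
    lines = ["run 1 all"]
    scales = "scales " + ("batches" if method == "batch" else "rotation")
    if method == "rotation" and spacing is not None:
        scales += " spacing %g" % spacing
    if bfactor:
        scales += " bfactor on"
        if bfactor_spacing is not None:
            scales += " brotation spacing %g" % bfactor_spacing
    lines.append(scales)
    lines.append("resolution low %g high %g" % (res_low, res_high))
    lines.append("anomalous %s" % ("on" if anomalous else "off"))
    return "\n".join(lines)

print(scala_keywords(True, 50.0, 2.0, "rotation", spacing=5.0,
                     bfactor=True, bfactor_spacing=20.0))
```

Running SORTMTZ on the MOSFLM output and then feeding a keyword file like this to SCALA would complete the step described above.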





Main Success Scenario


Notes and Questions

This is a very good starting point. I think the next step would be to detail/model the "collected data that have been integrated" in a common data-model context. This is, apparently:

  • something like a selected set of generic integrationSubWedgeResult(s)?
  • all items in the set are post-refined/integrated using a common (or sufficiently close?) indexingSolutionSelected as a starting point?
  • they come along with a full set of mutual experimentalCondition(s), in order to be able to decide where discontinuities in the scale factors are permitted?

  • What kind of sub-selection/filtering do we foresee within the data set/each subwedge? E.g. the draft model implies only "resolutionLow"/"resolutionHigh" applied to all data; this may be insufficient for real sub-wedges or multi-pass (low-/high-resolution) data.

  • Do we need sub-selections on a frame-range basis (as we started to discuss at DIAMOND regarding poorly scaling last frames)?
  • Is the MTZ file format here just a technical detail?
  • Basically, I am suggesting starting top-down rather than bottom-up on this problem
  • Use cases for sub-selecting the frames to be scaled (future developments):
    • get Rmerge per frame, cluster (within a SubWedge)
    • find outlying frames
    • re-scale with those frames excluded, until convergence
    • a next, more complex filter is to look at Rmerge per frame vs. resolution and exclude only the high-resolution data from some frames, etc.
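The iterative exclusion loop above could be sketched as follows. The per-frame Rmerge values and the n-sigma threshold rule are placeholders, and a real implementation would re-run SCALA on the retained frames at each iteration rather than reuse the same statistics:

```python
def exclude_outlying_frames(rmerge_per_frame, n_sigma=3.0, max_iter=10):
    """Iteratively drop frames whose Rmerge deviates from the mean by
    more than n_sigma standard deviations, until no outliers remain.
    Hypothetical sketch: in EDNA the re-scaling step would re-run
    SCALA with the outlying frames excluded instead of reusing the
    initial per-frame Rmerge values."""
    kept = dict(enumerate(rmerge_per_frame))  # frame index -> Rmerge
    for _ in range(max_iter):
        values = list(kept.values())
        mean = sum(values) / len(values)
        sigma = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
        outliers = [f for f, v in kept.items()
                    if sigma > 0 and abs(v - mean) > n_sigma * sigma]
        if not outliers:      # converged: no more outlying frames
            break
        for f in outliers:    # re-scaling would happen here
            del kept[f]
    return sorted(kept)       # frame indices retained for scaling
```

The convergence criterion and the choice of statistic are design decisions still open in the discussion above.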