Re: [ccp4bb]: molecular replacement woes!

> simulated annealing to see if Rfree drops.  One more thing, when you do the
> first few rounds of refinements, do not use all your high resolution data
> straight away.  You should start with a high resolution limit that is more
> or less consistent with the one you used to do MR and increase this
> gradually as refinement goes on.  And you also need to check the overall

Finally, a good statement we can start a nice argument about ;-)

<FLAME>
So, basically I would like to strongly disagree.
</FLAME>

If you do have data at any resolution (the higher the better) and a
molecular replacement model that you suspect is correctly positioned,
I would always use ALL the data. That's your best chance to get the
refinement going.
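
Just as an illustration (the file and column names below are made up,
not from anyone's actual job), a Refmac run that simply omits the
RESOLUTION keyword will refine against everything in the MTZ:

  # Hypothetical sketch: refine the MR solution against ALL the data.
  # data.mtz, mr_model.pdb and the column labels are placeholders.
  refmac5 HKLIN data.mtz XYZIN mr_model.pdb \
          HKLOUT refined.mtz XYZOUT refined.pdb << eof
  LABIN FP=FP SIGFP=SIGFP FREE=FreeR_flag
  REFI TYPE RESTRAINED
  NCYC 10
  END
  eof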

I think, historically, people had been using the 'slow' extension to
high resolution for two reasons:
1. More data, slower refinement. That's not an issue for current
computers/software.
2. Data were not weighted by their sigmas nor by SigmaA. Nowadays any
decent refinement program does that type of weighting. In particular,
the SigmaA weights will downweight the higher resolution data
appropriately, so you do not have to worry about it.
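
For the record, by SigmaA weighting I mean the usual Read scheme
(Acta Cryst. (1986) A42, 140); in LaTeX notation, for an acentric
reflection, roughly:

  m = \frac{I_1(X)}{I_0(X)}, \qquad
  X = \frac{2 \sigma_A |E_o| |E_c|}{1 - \sigma_A^2}, \qquad
  D = \sigma_A \sqrt{\Sigma_N / \Sigma_P}

Since sigmaA is estimated in resolution shells and drops towards zero
wherever the model stops agreeing with the data, m and D (and with
them the weight on the high resolution terms) fall off automatically.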

In cases where you have data to resolution higher than 2.0 A, what was
very successful in our hands with really bad starting models (see the
Acta D CCP4 proceedings, 2001) was to 1. do some density modification
(I am still a big DM fan, but 'prime and switch' in RESOLVE should be
even better), and 2. use the improved phases to auto-build a brand new
model.
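
A minimal DM sketch, just to show the idea (the solvent content and
all labels are placeholders; PHIC/FOM would be calculated phases and
figures of merit from the refinement, and the keyword set is from
memory, so do check the DM documentation):

  # Hypothetical sketch: solvent flattening + histogram matching,
  # starting from model phases.  All values here are placeholders.
  dm HKLIN refined.mtz HKLOUT dm.mtz << eof
  SOLC 0.45
  MODE SOLV HIST
  NCYCLE AUTO
  LABIN FP=FP SIGFP=SIGFP PHIO=PHIC FOMO=FOM
  LABOUT PHIDM=PHIDM FOMDM=FOMDM
  END
  eof

The improved PHIDM/FOMDM then go into your favourite auto-building
program.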

In cases where the resolution is worse than 2.0 A, I would first
refine and look at the 2mFo-DFc map, but what I would also suggest,
for bad starting models, is to also consult a DM or RESOLVE map while
model building.
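
For example (FWT/PHWT being the map coefficient labels that Refmac
writes out; the map file name is arbitrary):

  # Hypothetical sketch: turn Refmac's 2mFo-DFc coefficients
  # into a map with fft.
  fft HKLIN refined.mtz MAPOUT 2mfodfc.map << eof
  LABIN F1=FWT PHI=PHWT
  END
  eof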

    Tassos