An evaluation submitted for an okay is reviewed only as far as the first major outness (see HCO P/L 3 July 74, Data Series 33) and is then returned for correction.
Only when no major correction is necessary does one then verify all data or go to an extensive review of the whole eval.
This makes the line very fast. It also saves a great deal of work for one and all. If the stats are incorrectly given, that's it. Reject. If the Why given is really just the situation restated, that's it. Reject.
On the reject, one cites the letter of Data Series 33 under which the eval is not correct and any reference to the Data Series that would seem helpful.
An evaluation corrector will see how well this rejection system works when he finds that the eval, let us say, has no situation on it but only some stats. Why verify anything, when a whole new body of data may have to be found?
In correcting evals, if a situation is given, I usually call for the main stats of the unit being evaluated to see if these show any reason to handle it at all. I recently found an activity had had its chief removed when his stats were in Power. The activity then crashed. And that was the situation. It was made by an evaluator and an eval corrector not looking at the stats!
If no error exists in the situation or stats, I read the eval down to the bright idea and look especially at the Why, ideal scene and handling to see if the one would bring about the others.
If that's okay, I look at the targets of handling and the resources.
If those are okay, I look at data and outpoints. If these are all okay, I then verify the data. But if at any of these steps I find an error, I reject at once for immediate correction. Often, when one rejects on only the basic points, the whole eval has to be redone, as the basics are so far wrong.
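The order of these checks can be sketched as a simple fail-fast procedure. The Python sketch below is only an illustration of that order, under the assumption that each step can be treated as a pass/fail check; every name in it (Rejection, review, the placeholder checks) is hypothetical and not drawn from the Data Series itself.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Rejection:
    point: str       # the point found out, e.g. "stats" or "Why"
    reference: str   # the reference cited back with the reject


def review(eval_doc: dict,
           checks: list[tuple[str, str, Callable[[dict], bool]]]) -> Optional[Rejection]:
    """Run the checks in order; reject at the first failure, None if all pass."""
    for point, reference, check in checks:
        if not check(eval_doc):
            return Rejection(point, reference)  # reject at once, no further review
    return None  # no major correction needed; full data verification can follow


# Checks in the order given above: situation and stats first, then the Why,
# ideal scene and handling, then targets and resources, then data and outpoints.
# The predicates here are placeholders only.
checks = [
    ("situation", "Data Series 33", lambda e: bool(e.get("situation"))),
    ("stats", "Data Series 33", lambda e: bool(e.get("stats"))),
    ("Why", "Data Series 33", lambda e: bool(e.get("why"))),
]

result = review({"stats": [10, 12, 9]}, checks)
# result -> Rejection(point='situation', reference='Data Series 33'):
# no situation was given, so nothing further is looked at or verified.
```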
If you try to correct the whole thing before rejecting or if you correct tiny little things instead of the big ones, the whole line slows. Eval correction should be a fast, helpful line, strictly on-policy, no opinion.
That way the job of correction becomes easier and easier.