To show that evaluating individual orgs and getting their programs done does raise stats, the following brief review is published:
Around mid-July I got onto the eval approval lines for about a week and had the orgs of one continent evaluated by some Flag evaluators.
We got several evals through, done strictly according to the Data Series rules.
Here are the results of 7 of them.
Thus 5 out of 7 of the above evals were successful.
The two that failed were obviously insufficiently broad, as other matters got in the way of them. The evaluator could not have had the real situation. This means not enough preliminary work was done to find the area that should have been evaluated.
Verbal tech on a DSEC should be severely handled if found.
Note that the evals above were very purely supervised, with corrections referring only to departures from the Data Series P/Ls.
Pure eval per Data Series 33R was the push in getting these evals done. I was simply demanding full Data Series P/L application.
The reason for verbal tech is Mis-U words!
It is pretty easy to tell if an eval is getting done or if it is failing. The two poor evals among the 7 just weren't caught fast enough by the evaluators. You cancel a failing eval fast and do a better one.
Failing to cancel or redo a failing eval on an org would be the real reason for that org continuing to go down.
If you got 5/7ths of all our orgs purely evaluated, no nonsense with verbal tech, you would have booming Int stats!
Just like pcs: unprogrammed pcs fail, and pcs audited with hearsay tech fail! Orgs without evaluated, pushed programs tend to fail. And evaluations done on hearsay tech are a waste of paper.
How about it?
A boom or crash?
It's up to you.