HUBBARD COMMUNICATIONS OFFICE
Saint Hill Manor, East Grinstead, Sussex
HCO POLICY LETTER OF 18 MARCH 1977-1R
ADDITION OF 20 MARCH 1977
REVISED 14 JUNE 1977
Remimeo Data Series 43-1R

EVALUATION SUCCESS

To show that evals on individual orgs and getting programs done do raise stats, the following brief review is published:

Around mid-July I got on the eval approval lines for about a week and had orgs of one continent evaluated by some Flag evaluators.

We got several evals through, severely according to the Data Series rules.

Here are the results of 7 of them.

  1. Program was reported fully done. Stats went up.
  2. 18 July eval. Pgm was almost fully done. Finance got bugged. Org crashed 22 August 74.
  3. 22 July eval. By 15 Aug stats had gone up.
  4. 21 July 74 eval, but not started on until 26 Sept 74 as the Study Manuals on which the eval depended were delayed. After the eval began to be done, org stats went up and by the end of Oct hit highest ever almost across the boards.
  5. 20 July 74 eval. Started on 10 Aug 74. Half-done. By 24 Oct stats went up.
  6. 23 July 74 issue. Bugged. Not completed. Stats went up first couple weeks. Org crashed 24 Oct 74. (Eval was also cross-ordered by removal of CO.)
  7. 23 July 74. Three-quarters done. Stats went up.

Thus 5 out of 7 of the above evals were successful.

The two that failed were obviously insufficiently broad, as other matters got in the way of them. The evaluator could not have had the real situation. This means not enough preliminary work was done to find the area that should have been evaluated.

VERBAL TECH

Verbal tech on a DSEC should be severely handled if found.

Note that the evals as above were very purely supervised, referring only to departures from the Data Series P/Ls.

Pure eval per Data Series 33R was the push on getting the evals done. I was simply demanding full Data Series P/L application.

The reason for verbal tech is Mis-U words!

FAILING EVALS

It is pretty easy to tell if an eval is getting done or if it is failing. The two poor evals in the 7 just weren't watched fast enough by the evaluators. You cancel a failing eval fast and do a better one.

Failing to cancel or redo a failing eval on an org would be the real reason for that org continuing to go down.

SUMMARY

If you got 5/7ths of all our orgs purely evaluated, no nonsense with verbal tech, you would have booming Int stats!

Just like pcs: unprogrammed pcs fail, and pcs audited with hearsay tech fail! Orgs without evaluated, pushed programs for that org tend to fail. And evaluations done on hearsay tech are a waste of paper.

How about it?

A boom or crash?

It's up to you.

Compiled from ED 552 Flag,
EVALUATION SUCCESS,
by LRH, 4 November 1974

L. RON HUBBARD
Founder
As assisted by AVU Flag

LRH:MH:MW:SH:lf.nf