
You can read previous comments below. You do not need to log in. To post your own comments, please use your initials or another consistent identifier, and when you have commented, click ADD COMMENT at the bottom of the screen. Once you have typed your comments, be sure to click POST.



  1. Anonymous

    DY:  Welcome to the Web-based national hearing for the proposed 3rd Edition of the Program Evaluation Standards.  Thank you very much for your participation.  Please be sure to fill out and submit your registration form if you haven't already.  We are looking forward to your comments. 

  2. Anonymous

    After reviewing the standards revisions, I have a concern about the reordering of the standards (the order in which they are presented). While I am sure there are valid reasons for the chosen ordering, it seems important to point out that those of us who are familiar with the standards realize the order is relative. For those who look to the standards as a guide for making sense of evaluation in their own context, however, the order matters. All of the stakeholders I have ever worked with want to know how an evaluation is going to be useful before they invest in whether it is proper and fair (propriety), feasible, or accurate.

    Stakeholders' primary concerns seem to be: what will I get out of this and how will it affect me; then, can I really do it; and will it really be accurate... and maybe, in a perfect world, was that a good evaluation (metaevaluation)?

    My point is that if you want non-evaluators to understand how evaluation, structured by the guidance of the standards, can work for them, then you have to present the standards in a way that does not lose readers before they hear what interests them, so that they are motivated to learn more.


    1. Anonymous

      DY:  Thanks for your comment on the PgES3. Do you have specific suggestions about how to order the standards?  Is your comment directed at how the aspect chapters are ordered (feasibility, propriety, accuracy, and utility), or at the level of the individual standards in one or more specific aspect chapters?

  3. Anonymous

    I have reviewed the accuracy standards.  I appreciate the hard work of the task force members.  However, as someone who has worked in the evaluation field for approximately 20 years and will use the standards, I am very concerned that they are too academic, unclear, and lengthy.  For instance, the concept of cultural sensitivity, which I understand in practice, is not described in the accuracy standards in a way that an average evaluator in the field could understand or apply.  It would be helpful if the task force would cut through the academic talk and state simply and succinctly what the concept is and how it relates to the standards.

    In addition, it would be best to use commonly understood terminology rather than terms only seen in college textbooks.  For instance, while the use of the term "evaluand" may be correct, the average evaluator is not going to know what this term means.  Even if the term is defined in a glossary, it is not widely used and makes reading and understanding the standards more difficult.

    More than terminology, the overall narrative associated with the accuracy standards is simply too lengthy and sometimes seemingly pointless.  If the standards are to be used, and not just studied by students in MPA programs, then they must use common terminology, be more clearly written, and be more direct.  PLEASE make substantial revisions to the current draft before the standards are finalized.

    1. Anonymous

      DY:  We will make every effort to edit this into a form that is more succinct and communicates more clearly. Thanks for your comments. 

  4. During the online hearing period, I am commenting on the standard statements related to metaevaluation standard M3, Identified Standards of Quality. First, the standard statement titled "Metaevaluations should be based on appropriate and identified dimensions and standards of quality, including selected program evaluations" was clearly stated. However, it is my professional judgment that this standard could include some reference to an index of effect size or strength of relationship in selected metaevaluations. Second, the M3 Information section of the August 4, 2008 document, on line 492, stated that "Metaevaluations should be based on adequate and accurate information documenting the program evaluation or evaluation components to be metaevaluated," which is philosophically and methodologically sound.

    The rationale for the above standard on lines 496-530 clearly delineated the performance criteria expected of internal evaluators, external evaluators, and third parties.

    The Clarification section of the above standard, on line 531, addressed a) the M1 (first standard) Purposes considerations for investigators intent upon designing and executing evaluations with high levels of quality assurance. Moreover, b) the M2 Quality of Points subsection included four (4) points requiring the alignment of specific standards of quality with the metaevaluation questions and the user's needs. The appropriate documentary references and evidentiary criteria of accuracy, sufficiency, and pertinence of purpose were mentioned.

    Beginning on line 564, the Implementation section of the above standard incorporated content related to six (6) subsections. The drafters included references to user needs and a catalogue or case data set of metaevaluation questions. The use of a table of desired information was cited as essential. This subsection concluded, on line 590, with the need for appropriate documentation of the adequate program evaluation processes that have taken place.

    Next, the Hazards section of this standard, beginning on line 591 of the August 4, 2008 document, cited four (4) subsections of concern. In this connection, lines 594 and 595 state, "When working with metaevaluators (experts) who are familiar with the full domain of program quality...such expert program evaluators who should be expected to know what the pertinent dimensions, standards, and criteria for quality look like." The implication is that discussion of the relationship between different statistical techniques could usefully include some reference to an index of effect size or strength of relationship during the conduct of metaevaluations.


    American Psychological Association. (2001). Publication manual. Washington, DC: APA.

    Newton, R. R., & Rudestam, K. E. (1999). Chapter 11: The bigger picture. In Your statistical consultant: Answers to your data analysis questions. Thousand Oaks, CA: Sage Publications, Inc.

    Schmidt, F. L. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for training of researchers. Psychological Methods, 1(2), 115-129.

    Schmidt, F. L., & Hunter, J. E. (1996). Measurement error in psychological research: Lessons from 26 research scenarios. Psychological Methods, 1, 199-223.

    1. Anonymous

      DY:  Thank you very much.

  5. My comments concern the Metaevaluation standards.  The feedback I present below is based on a discussion I facilitated at The Evaluation Center at Western Michigan University about the Metaevaluation standards.

    • A handful of folks, all recent graduates of our interdisciplinary Ph.D. program, did not agree with the idea that an evaluation of an evaluation design should count as metaevaluation (see lines 96-99 of the metaevaluation standards). Given the definition of evaluation as the "systematic investigation of the worth or merit of an object," they do not think an evaluation design counts as evaluation; therefore, appraisal of an evaluation design does not count as metaevaluation.  Please note that others strongly disagreed with this position, and in a follow-up discussion online, Michael Scriven himself, inventor of the term, heartily agreed that evaluation of an evaluation design should be included in metaevaluation.  However, the inclusion of things other than ongoing or completed evaluations (e.g., evaluation plans, instruments, budgets, contracts, etc.) as potential objects of metaevaluation may merit some discussion and clarification.
    • This may conflict with the point above (I'm not sure), but it was noted that the metaevaluation standards make no mention of, and offer no guidance for, evaluating evaluation systems.
    • One participant - a distinguished evaluation scholar - agreed that metaevaluation deserves special attention as a fifth domain, but thought one standard would suffice, since a metaevaluation should/would invoke the full range of standards.
    • Regarding the caveat that those who engage in metaevaluation should be experts in evaluation (see lines 34-41), one person raised the question as to what qualifies someone as an evaluation expert. There is some discussion in the draft standards about how such expertise is gained, but it may be helpful to offer more in the way of concrete indicators for determining expertise.
    • The introduction to the standards refers to Standard M3 as both "Accurate Information" and "Documentation." It is called "Documentation" in the metaevaluation section. There should be consistency.  Also, I have concerns about the use of the word "documentation."  It would be incredibly time-consuming to document every detail of an evaluation that might be of interest in a metaevaluation.  I think the key issue here is that the information should be verifiable.  Not all evidence can be obtained in the form of documentation.
    • It may be helpful to say a bit about how to discuss metaevaluation with evaluation clients so they understand its purposes and why the additional costs (if any) are warranted and so that one does not raise unfounded concerns about the primary evaluator's competence.
    1. Anonymous

      DY:  Thanks for the summary of what must have been a stimulating and rich discussion.  The timing is perfect, as we will be discussing these standards this week at the annual meeting.  I appreciate all of the points above.  Most are issues we have struggled with.  For example, we have an alternative chapter based on one standard and seriously considered going with it. We will no doubt revisit that decision. Your emphasis on verification as opposed to documentation strikes me as an important improvement. In addition, we definitely need to address the evaluation of evaluation systems. Each point above will receive full attention and deliberation.



  8. Anonymous