George Washington University
The Evaluators' Institute
Courses

Course Descriptions:

Evaluation Theory, Design and Methods

Using Program Theory and Logic Models in Evaluation

Instructor: Dr. Patricia Rogers, Professor in Public Sector Evaluation at RMIT University (Royal Melbourne Institute of Technology), Australia

Description: It is now commonplace to use program theory, or logic models, in evaluation as a means of explaining how a program is understood to contribute to its intended or observed outcomes. However, this does not mean that they are always used appropriately or to best effect. At their best, logic models can provide conceptual clarity across complex programs, motivate staff, and focus evaluations. At their worst, they can divert time and attention from other critical evaluation activities, provide an invalid or misleading picture of the program, and discourage critical investigation of causal pathways and unintended outcomes. This course focuses on developing useful logic models and using them effectively to guide evaluation while avoiding some of the most common traps. It begins with the assumption that participants already know something about logic models and program theory (see the note below) but come with different understandings of terminology and options. Application exercises are used throughout the course to demonstrate concepts and techniques: (a) ways to use logic models to positive advantage (e.g., to identify criteria, develop questions, and identify data sources and bases of comparison); (b) ways they are used with negative results (e.g., focusing only on intended outcomes, ignoring differential effects for client subgroups, or seeking only evidence that confirms the theory); and (c) strategies for avoiding common traps (e.g., differentiated theory, market segmentation, and competitive elaboration of alternative hypotheses). Participants receive the instructor's co-authored text, Purposeful Program Theory: Effective Use of Theories of Change and Logic Models (Jossey-Bass, 2011).

Note: Prior to attendance, those with no previous experience with program theory should work through the University of Wisconsin-Extension's course 'Enhancing Program Performance with Logic Models', available at no cost at www.uwex.edu/ces/lmcourse.

Qualitative Evaluation Methods

Instructor: Dr. Michael Quinn Patton, Director, Utilization-Focused Evaluation and Independent Evaluation Consultant

Description: Qualitative inquiries use in-depth interviews, focus groups, observational methods, document analyses, and case studies to provide rich descriptions of people, programs, and community processes. To be credible and useful, the unique sampling, design, and analysis approaches of qualitative methods must be understood and used. Qualitative data can be used for various purposes, including evaluating individualized outcomes, capturing program processes, exploring a new area of interest (e.g., to identify unknown variables one might want to measure in greater depth or breadth), identifying unanticipated consequences and side effects, supporting participatory evaluations, assessing quality, and humanizing evaluations by portraying the people and stories behind the numbers. This class will cover the basics of qualitative evaluation, including design, case selection (purposeful sampling), data collection techniques, and beginning analysis. Ways of increasing the rigor and credibility of qualitative evaluations will be examined. Mixed-methods approaches will be included. Alternative qualitative strategies and new, innovative directions will complete the course. The strengths and weaknesses of various qualitative methods will be identified. Exercises will provide experience in applying qualitative methods and analyses in evaluations.

Individuals enrolled in this class will each receive one copy of Dr. Patton's text, Qualitative Research and Evaluation Methods, 3rd Edition (Sage, 2002).

Outcome and Impact Assessment

Instructor: Dr. Mark W. Lipsey, Director of the Peabody Research Institute at Vanderbilt University

Description: Valid assessment of the outcomes or impact of a social program is among the most challenging evaluation tasks, but also one of the most important. This course will review monitoring and tracking approaches to assessing outcomes as well as the experimental and quasi-experimental methods that are the foundation for contemporary impact evaluation. Attention will also be given to issues related to the measurement of outcomes, ensuring detection of meaningful program effects, and interpreting the magnitude of effects. Emphasis will be mainly on the logic of outcome evaluation and the conceptual and methodological nature of the approaches, including research design and associated analysis issues. Nonetheless, some familiarity with social science methods and statistical analysis is necessary to engage effectively with the topics covered in this course. Participants in this class will each receive one copy of the Rossi et al. text, Evaluation: A Systematic Approach, 7th Edition (Sage, 2003).
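
To give a flavor of one topic mentioned above, interpreting the magnitude of effects, the sketch below computes a standardized mean difference (Cohen's d) between a treatment and a comparison group. It is a minimal illustration only; the outcome scores and group sizes are invented and are not drawn from the course materials.

```python
# Minimal illustration: standardized mean difference (Cohen's d) between
# hypothetical treatment and comparison groups (data invented for this sketch).
import numpy as np

treatment = np.array([72.0, 68.5, 75.2, 70.1, 69.8, 74.3])   # hypothetical outcome scores
comparison = np.array([65.4, 70.0, 66.2, 68.9, 64.7, 67.5])  # hypothetical outcome scores

n1, n2 = len(treatment), len(comparison)

# Pooled standard deviation using sample variances (ddof=1).
pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                     (n2 - 1) * comparison.var(ddof=1)) / (n1 + n2 - 2))

d = (treatment.mean() - comparison.mean()) / pooled_sd
print(f"Standardized mean difference (Cohen's d): {d:.2f}")
```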

Prerequisites: At least some background in social science methods and statistical analysis or direct experience with outcome measurement and impact assessment designs.

Case Studies in Evaluation

Instructor: Dr. Delwyn Goodrick, Evaluation Practitioner/Psychologist, Melbourne, Australia

Description: Case study approaches are widely used in program evaluation. They facilitate an understanding of the way in which context mediates the influence of program and project interventions. While case study designs are often adopted to describe or depict program processes, their capacity to illuminate depth and detail can also contribute to an understanding of the mechanisms responsible for program outcomes.

The literature on case study is impressive, but there remains tension in perspectives about what constitutes good case study practice in evaluation. This leads to substantive differences in the way case study is conceived and practiced within the evaluation profession. This workshop aims to disentangle the discussions and debates, and highlight the central principles critical to effective case study practice and reporting.

This two-day workshop will explore case study design, analysis, and representation. The workshop will address case study topics through brief lecture presentations, small-group discussion, and workshop activities built around realistic case study scenarios. Participants will be encouraged to examine the conceptual underpinnings, defining features, and practices involved in doing case studies in evaluation contexts. Discussion of the ethical principles underpinning case study will be integrated throughout the workshop.

Specific topics and questions to be addressed over the two days include:

  • The utility of case studies in evaluation, and circumstances in which case studies may not be appropriate
  • Evaluation questions that are suitable for a case study approach
  • Selecting the unit of analysis in case study
  • Design frameworks in case study – single and multiple case study; the intrinsic and instrumental case
  • The use of mixed methods in case study approaches – sequential and concurrent designs
  • Developing case study protocols and case study guides
  • Analyzing case study materials – within-case and cross-case analysis, matrix and template displays that facilitate analysis
  • Principles and protocols for effective teamwork in multiple case study approaches
  • Transferability/generalisability of case studies
  • Validity and trustworthiness of case studies
  • Synthesising case materials
  • Issues of representation of the case and cases in reporting

Designing, Managing, and Analyzing Multi-Site Evaluations

Instructor: Dr. Debra J. Rog, Associate Director, Westat Inc., and Vice President, The Rockville Institute

Description: Guidance on how to carry out multi-site evaluations is scarce, and what is available tends to focus on quantitative data collection and analysis and usually treats diverse sites in a uniform manner. This course will present instruction on designing, managing, and analyzing multi-site studies, focusing on the differences required by the specifics of the situation, e.g., central evaluator control vs. interactive collaboration; driven by research vs. program interests; planned and prospective vs. retrospective; varied vs. standardized sites; exploratory vs. confirmatory purpose; and data that are exclusively quantitative vs. qualitative vs. mixed. Topics include stakeholder involvement, collaborative design, maintaining integrity/quality in data, monitoring and technical assistance, data submission, communication and group process, cross-site synthesis and analysis, and cross-site reporting and dissemination. Practical strategies learned through first-hand experience as well as from review of other studies will be shared. Teaching will include large- and small-group discussions, and students will work together on several problems. Detailed course materials are provided. Text provided: Herrell, J.M. & R.B. Straw, Conducting Multiple Site Evaluations in Real-World Settings, New Directions for Evaluation #94 (Jossey-Bass, 2002).

Prerequisite: Understanding of evaluation and research design.

Sampling: Basic Methods for Probability and Non-Probability Samples

Instructor: Dr. Gary T. Henry, Duncan MacRae ’09 and Rebecca Kyle MacRae Professor of Public Policy, Department of Public Policy and Director of the Carolina Institute for Public Policy at the University of North Carolina at Chapel Hill

Description: Careful use of sampling methods can save resources and often increase the validity of evaluation findings. This seminar will focus on the following: (a) The Basics: defining sample, sampling, and validity, probability and non-probability samples, and when not to sample; (b) Error and Sampling: study logic and sources of error, target population, and sampling frame; (c) Probability Sampling Methods: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling; (d) Making Choices before, during, and after sampling; and (e) Sampling Issues. Many examples will be used to illustrate these topics, and participants will have the opportunity to work with case exercises. The instructor's text, Practical Sampling (Sage, 1990), is provided as part of the course fee, along with take-home class materials.
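
As a small, hypothetical illustration of one of the probability methods listed above, the sketch below draws a proportionate stratified random sample from an invented sampling frame using pandas. The frame, the strata, and the 10% sampling fraction are assumptions made only for this example and are not part of the course.

```python
# Minimal illustration: proportionate stratified random sampling from a
# hypothetical sampling frame (the frame and strata are invented).
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
frame = pd.DataFrame({
    "unit_id": range(1000),
    "stratum": rng.choice(["urban", "suburban", "rural"], size=1000, p=[0.5, 0.3, 0.2]),
})

# Draw 10% within each stratum so the sample mirrors the frame's composition.
sample = frame.groupby("stratum").sample(frac=0.10, random_state=42)

print(sample["stratum"].value_counts())   # roughly 50 urban, 30 suburban, 20 rural
```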

Design and Administration of Internet, Mail and Mixed-Mode Surveys

Instructor: Dr. Jolene D. Smyth, Assistant Professor with a joint appointment in the Survey Research and Methodology Program and the Department of Sociology at the University of Nebraska-Lincoln

Description: Sample surveys provide a powerful means of describing the opinions and behavior of millions of people while obtaining data from only a few hundred or thousand individuals in those populations. However, methods for surveying are changing, and conducting surveys on the Internet is one of the most promising new approaches. Web and mail questionnaires share many common features that influence their design; e.g., both are self-administered and depend on visual communication. Increasingly, each is being used as part of mixed-mode strategies for achieving high survey response, whereby some respondents are asked to respond to one or both of these modes while others are interviewed by phone or in person.

This course begins with a discussion of the multiple sources of error (coverage, sampling, measurement, and non-response) that must be overcome to achieve quality results from Web and mail surveys. Next, principles for writing questions in ways that minimize measurement error across survey methods are described, followed by a discussion of the consequences of ordering questions in different ways, and how self-administered questionnaires often produce different answers than do telephone interviews. Principles for constructing questionnaires follow, beginning with a discussion of how page layouts (graphics and numbers in addition to words) influence people to read and answer questions. Day one concludes with a discussion of the ways in which these principles of design and layout need to be applied similarly, but with appropriate variations, for paper vs. electronic formats. Day two emphasizes methods for achieving high response rates while minimizing non-response error. Attention will be given to how mail and Internet implementation strategies need to build upon similar foundations but will differ in their details.

Using Non-experimental Designs for Impact Evaluation

Instructor: Dr. Gary T. Henry, Duncan MacRae ’09 and Rebecca Kyle MacRae Professor of Public Policy, Department of Public Policy and Director of the Carolina Institute for Public Policy at the University of North Carolina at Chapel Hill

Description: In the past few years, there have been very exciting developments in approaches to causal inference that have expanded our knowledge and toolkit for conducting impact evaluations. Evaluators, statisticians, and social scientists have focused a great deal of attention on causal inference, the benefits and drawbacks of random assignment studies, and alternative designs for estimating program impacts. For this two-day workshop, we will have three goals:

1. to understand a general theory of causal inference that covers both random assignment and observational studies, including quasi-experimental and non-experimental studies;

2. to identify the assumptions needed to infer causality in evaluations; and

3. to describe, compare, and contrast six promising alternatives to random assignment studies for inferring causality, including the requirements for implementing these designs, the strengths and weaknesses of each, and examples from evaluations where these designs have been applied.

The six alternative designs to be described and discussed are: regression discontinuity; propensity score matching; instrumental variables; fixed effects (within-unit variance); difference-in-differences; and interrupted time series. Also, current findings concerning the accuracy of these designs relative to random assignment studies, drawn from “within-study” assessments of bias, will be presented and the implications for practice discussed.
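
To make one of these designs concrete, the sketch below works through a minimal two-group, two-period difference-in-differences calculation using invented group means. It is an illustration only: real applications use unit-level data, regression estimation, and checks of the parallel-trends assumption.

```python
# Minimal illustration: a 2x2 difference-in-differences estimate from
# hypothetical group means (all numbers invented for this sketch).

pre_treated, post_treated = 48.0, 57.0        # outcome means, treated group
pre_comparison, post_comparison = 50.0, 53.0  # outcome means, comparison group

change_treated = post_treated - pre_treated            # 9.0
change_comparison = post_comparison - pre_comparison   # 3.0

# The comparison group's change stands in for the counterfactual trend
# (the parallel-trends assumption), so the impact estimate is the
# difference between the two changes.
did_estimate = change_treated - change_comparison      # 9.0 - 3.0 = 6.0
print(f"Difference-in-differences estimate of program impact: {did_estimate:.1f}")
```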

Prerequisite: This class assumes some familiarity with research design, threats to validity, impact evaluations, and multivariate regression.
