George Washington University
The Evaluators' Institute
Courses

Course Descriptions:

Evaluation Approaches and Techniques

Comparative Effectiveness: Exploring Alternatives to Randomized Clinical Trials

Instructor: Dr. Ann Doucette, TEI Director, Research Professor, Columbian College of Arts and Sciences, George Washington University

Evidence is the foundation on which we make judgments, decisions, and policy. Gathering evidence can be a challenging and time-intensive process. Although there are many approaches to gathering evidence, randomized clinical trials (RCTs) have remained the “gold standard” for establishing effectiveness, impact, and causality, even though strong proponents of RCTs acknowledge that they are neither the only valid method nor necessarily the optimal approach to gathering evidence. RCTs can be costly in terms of time and resources; can raise ethical concerns about excluding individuals from treatments or interventions from which they might benefit; and can be inappropriate if the intervention is not sufficiently and stably implemented, or if the program/service is so complex that such a design would be challenging at best and unlikely to yield ecologically valid results.

Comparative effectiveness (CE) has emerged as an accepted approach to gathering evidence for healthcare decision-making and policymaking. CE arose out of worldwide concern about rising healthcare costs and variability in healthcare quality, and out of a more immediate need for evidence of effective healthcare. RCTs, while yielding strong evidence, are time-intensive and pose significant delays in providing data on which to base timely policy and care decisions. CE provided a new approach to gathering objective evidence, emphasizing how rigorous evaluation of the data yielded across existing studies (qualitative and quantitative) could answer the questions of what works, for whom, and under what conditions. Essentially, CE is a rigorous evaluation of the impact of various intervention options, based on the existing studies available for specific populations. The CE evaluation of existing studies focuses not only on the benefits and risks of various interventions but can also incorporate the costs associated with them. CE takes advantage of both quantitative and qualitative methods, using a standardized protocol to judge the strength of, and synthesize, the evidence provided by existing studies.
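
As a small, purely illustrative sketch of one quantitative building block often used when synthesizing evidence across existing studies, the Python snippet below pools hypothetical effect estimates with inverse-variance (fixed-effect) weighting; the study values and the choice of method are assumptions for illustration, not material from the course.

```python
# Hypothetical illustration: inverse-variance (fixed-effect) pooling of
# effect estimates drawn from several existing studies. All values are invented.
studies = [
    # (effect estimate, standard error) reported by each study
    (0.42, 0.15),
    (0.31, 0.10),
    (0.55, 0.20),
]

weights = [1 / se ** 2 for _, se in studies]  # precision (inverse-variance) weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")
```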

The basic CE questions are: Is the available evidence good enough to support high-stakes decisions? If we rely solely on RCTs for evidence, do we risk treating available non-RCT evidence as an insufficient basis for policy decisions? Will sufficient evidence be available to decision-makers at the time they need it? What alternatives can be used to ensure that rigorous findings are available to decision-makers when they need to act? CE has become an accepted alternative to RCTs in medicine and health. While the CE approach has focused on medical interventions, it has potential for human and social interventions implemented in other areas (education, justice, environment, etc.).

This course will provide an overview of CE from an international perspective (U.S., U.K., Canada, France, Germany, Turkey), illustrating how different countries have defined and established CE frameworks; how data are gathered, analyzed, and used in healthcare decision-making; and how information is disseminated and whether it leads to change in healthcare delivery. Though CE has been targeted toward enhancing the impact of healthcare interventions, this course will consistently focus on whether and how CE (definition, methods, analytical models, dissemination strategies, etc.) can be applied to other human and social program areas (education, justice, poverty, environment, etc.).

No prerequisites are required for this one-day course.

Utilization-Focused Evaluation

Instructor: Dr. Michael Quinn Patton, Director, Utilization-Focused Evaluation and Independent Evaluation Consultant

Description: Utilization-Focused Evaluation begins with the premise that evaluations should be judged by their utility and actual use; therefore, evaluators should facilitate the evaluation process and design any evaluation with careful consideration of how everything that is done, from beginning to end, will affect use. Use concerns how real people in the real world apply evaluation findings and experience the evaluation process. Therefore, the focus in utilization-focused evaluation is on intended use by intended users.

Utilization-focused evaluation is a process for helping primary intended users select the most appropriate content, model, methods, theory, and uses for their particular situation. Situational responsiveness guides the interactive process between evaluator and primary intended users. A psychology of use undergirds and informs utilization-focused evaluation: intended users are more likely to use evaluations if they understand and feel ownership of the evaluation process and findings; they are more likely to understand and feel ownership if they've been actively involved; by actively involving primary intended users, the evaluator is training users in use, preparing the groundwork for use, and reinforcing the intended utility of the evaluation every step along the way.

Participants will learn:

  • Key factors in doing useful evaluations, common barriers to use, and how to overcome those barriers.
  • Implications of focusing an evaluation on intended use by intended users.
  • Options for evaluation design and methods based on situational responsiveness, adaptability and creativity.
  • Ways of building evaluation into the programming process to increase use.

Participants will receive a copy of the instructor’s text: Utilization-Focused Evaluation, 4th ed. (Sage, 2008).

Mixed Method Evaluation Approaches

New Course! Description coming soon.

Linking Evaluation Questions to Analysis Techniques

Instructor: Dr. Melvin M. Mark, Professor of Psychology at the Pennsylvania State University

Description: Statistics are a mainstay in the toolkit of program and policy evaluators. Human memory being what it is, however, even evaluators with reasonable statistical training often forget the basics over the years. And the basics aren't always enough. If evaluators are going to make sensible use of consultants, communicate effectively with funders, and understand others' evaluation reports, they often need at least a conceptual understanding of relatively complex, recently developed statistical techniques. The purposes of this course are: to link common evaluation questions with appropriate statistical procedures; to offer a strong conceptual grounding in several important statistical procedures; and to describe how to interpret statistical results in ways that are principled and persuasive to intended audiences. The general format for the class will be to start with an evaluation question and then discuss the choice and interpretation of the best-suited statistical test(s). Emphasis will be on creating a basic understanding of what statistical procedures do, when to use them, and why, and then on how to learn more from the data. Little attention is given to equations or computer programs; the emphasis is instead on conceptual understanding and practical choices. Within a framework of common evaluation questions, statistical procedures and principled data inquiry will be explored.

(A) More fundamental topics to be covered include (1) basic data quality checks and basic exploratory data analysis procedures, (2) basic descriptive statistics, (3) the core functions of inferential statistics (why we use them), (4) common inferential statistics, including t-tests, the correlation coefficient, and chi square, and (5) the fundamentals of regression analysis.
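
As a minimal sketch of how these basic procedures might look in practice (not course material, and using invented data), the Python example below runs a t-test, a correlation, a chi-square test, and a simple regression on a hypothetical program dataset with SciPy.

```python
# Minimal sketch: the common inferential statistics named above,
# applied to a hypothetical evaluation dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatment = rng.normal(72, 10, 50)   # hypothetical outcome scores, program group
comparison = rng.normal(68, 10, 50)  # hypothetical outcome scores, comparison group

# t-test: do mean outcomes differ between groups?
t, p_t = stats.ttest_ind(treatment, comparison)

# correlation: is dosage (hours of service) related to outcomes?
dosage = rng.normal(20, 5, 50)
r, p_r = stats.pearsonr(dosage, treatment)

# chi-square: is completion status independent of site?
table = np.array([[30, 20], [25, 25]])  # completers/non-completers by site
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# simple regression: outcome as a function of dosage
slope, intercept, r_val, p_reg, se = stats.linregress(dosage, treatment)

print(f"t = {t:.2f} (p = {p_t:.3f}); r = {r:.2f}; chi2 = {chi2:.2f}; slope = {slope:.2f}")
```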

(B) For certain types of evaluation questions, more complex statistical techniques need to be considered. More complex techniques to be discussed (again, at a conceptual level) include (1) structural equation modeling, (2) multi-level modeling, and (3) cluster analysis and other classification techniques.

(C) Examples of methods for learning from data (i.e., for "snooping" with validity, making new discoveries in a principled way, and reporting findings more persuasively) will include (1) planned and unplanned tests of moderation, (2) graphical methods for unequal treatment effects, (3) use of previously discussed techniques such as clustering, (4) identifying and describing converging patterns of evidence, and (5) iterating between findings and explanations.
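
To make the idea of a moderation test concrete, here is a brief, hypothetical sketch (not course material): a regression with a treatment-by-subgroup interaction, estimated with statsmodels, where a notable interaction term suggests the program effect differs across subgroups. The variable names and data are invented.

```python
# Hypothetical moderation test: does the program effect differ by subgroup?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
treated = rng.integers(0, 2, n)   # 1 = received the program
urban = rng.integers(0, 2, n)     # hypothetical moderator (1 = urban site)
outcome = 50 + 5 * treated + 2 * urban + 4 * treated * urban + rng.normal(0, 5, n)

df = pd.DataFrame({"outcome": outcome, "treated": treated, "urban": urban})

# The treated:urban interaction term carries the moderation question:
# is the treatment effect larger (or smaller) for urban participants?
model = smf.ols("outcome ~ treated * urban", data=df).fit()
print(model.params)
print("Interaction p-value:", model.pvalues["treated:urban"])
```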

Each participant will receive a set of readings and current support materials.

Prerequisites: Familiarity with basic statistics.

Resource Evaluation and Systems Change

Instructor: Dr. Doreen Cavanaugh, Research Associate Professor at the Georgetown Public Policy Institute, Georgetown University

Description: Worldwide financial crises challenge evaluators to examine the efficiency as well as the effectiveness of the programs and interventions implemented to effect favorable systems change. This course puts systems change under a microscope by examining three essential infrastructure elements of successful program efforts (collaboration, leadership, and resource allocation) and the methods used to evaluate them.

The need to do more with less has increased the value of, and emphasis on, maximizing performance and results. Improved collaboration across participating stakeholders is one potential way of achieving both program efficiency and effectiveness. Existing studies indicate that groups form partnerships by engaging in four increasingly complex activities: networking, coordination of services or resources, cooperation, and finally collaboration. This course discusses each of these activities, their similarities and differences, their contributions to project/program outcomes, and methods for evaluating them.

We know that collaborative frameworks yield new styles of leadership and, as a consequence, the need for new evaluation approaches. Familiar hierarchical, top-down management models give way to an array of new stakeholder roles, such as change champions and boundary spanners (individuals who can manage across organizational boundaries), each contributing to the outcome and impact of a project or program. This course will provide participants with an understanding of differing leadership styles, linking each style to project/program objectives, with an emphasis on methods for evaluating the effect of leadership on intermediate and long-term project/program outcomes.

Today, efficient and effective systems change often requires a reallocation of human and financial resources, and with it flexible evaluation approaches. This course examines the role of resource allocation in project/program outcomes; prerequisites for determining efficiency; a tracking method, resource mapping, for redesigning resource deployment; and how to evaluate the resulting effects of resource reallocation on systems change and project/program outcomes.

Resource mapping is a process most often used to identify funds and in-kind contributions expended by an entity (governmental, donor, foundation, etc.) within a specific timeframe to address a certain issue or population of interest. The information gathered through this process is then available to inform the design and development of a comprehensive evaluation approach for examining a proposed systems change that uses available funds in the most efficient and effective ways.

Resource mapping may be employed to answer any number of evaluation questions. In some projects/programs, funders may wish to design, develop, and support a healthcare, education, transportation, or other system for a specific population group. In other cases, stakeholders may want to evaluate the efficiency of these systems. Others may wish to harness resources specifically allocated to diverse divisions within one agency or organization. For any question of interest regarding resource allocation, this mapping strategy is a tool to help evaluators inform policymakers, program developers, and managers in answering essential questions such as:

  • What financial resources do we have to work with?
  • What is the best way to organize, allocate and administer these resources for maximum efficiency and effectiveness?
  • How will redesigning resource allocation contribute to the outcome and impact of the effort (project/program) at hand?

Participants will learn that completing a resource map is not an end in itself but rather a means of gathering evaluative information that informs the development of a comprehensive plan for resourcing project goals, asking whether resources are indeed sufficient to achieve the stated goals and objectives. Completing the mapping exercise provides an x-ray of the system: it identifies gaps, inefficiencies, overlaps, and opportunities for collaboration with all participating partners. The map may help evaluators inform planners and stakeholders in identifying which resources might be combined in pooled, braided, or blended arrangements to achieve optimal outcomes for projects and/or programs.
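
As a purely hypothetical illustration of the kind of bookkeeping a resource map involves, the sketch below represents funding streams as simple records tagged by the goal they support and checks each goal's allocation against an estimated cost; all sources, goals, and figures are invented.

```python
# Hypothetical resource map: funding streams tagged by the goal they support,
# with a gap check against the estimated cost of each goal.
resource_map = [
    {"source": "Federal grant",  "goal": "outreach",   "amount": 250_000},
    {"source": "State contract", "goal": "treatment",  "amount": 400_000},
    {"source": "Foundation",     "goal": "outreach",   "amount": 75_000},
    {"source": "In-kind staff",  "goal": "evaluation", "amount": 40_000},
]
estimated_cost = {"outreach": 300_000, "treatment": 500_000, "evaluation": 60_000}

# Total the resources allocated to each goal.
allocated = {}
for item in resource_map:
    allocated[item["goal"]] = allocated.get(item["goal"], 0) + item["amount"]

# Flag goals whose allocated resources fall short of their estimated cost.
for goal, cost in estimated_cost.items():
    gap = cost - allocated.get(goal, 0)
    status = "gap" if gap > 0 else "covered"
    print(f"{goal}: allocated {allocated.get(goal, 0):,}, needed {cost:,} -> {status}")
```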

On Day 1, participants will use examples from their own experience to apply the essential infrastructure elements of collaboration, leadership, and resource allocation to a real-life evaluation situation. Day 2 will focus on ways to evaluate the contributions of collaboration, leadership, and resource allocation strategies to systems change goals, outcomes, and impact.

Performance Measurement for Government and Non-profit Organizations

Instructor: Dr. Theodore H. Poister, Professor of Public Management & Policy, Andrew Young School of Policy Studies, Georgia State University

Description: A commitment to performance measurement has become pervasive throughout government and the non-profit sector in response to demands for increased accountability, pressures for improved quality and customer service, and mandates to "do more with less," as well as the drive to strengthen the capacity for results-oriented management among professional public and non-profit administrators. In addition to stand-alone performance reporting systems, performance measures are critical to the success of strategic planning efforts, quality improvement programs, performance management, results-based budgeting systems, and program evaluation processes.

While the idea of setting goals and identifying measures of success in achieving them might appear to be a straightforward process, a myriad of conceptual, managerial, logistical, cultural, and organizational constraints, as well as methodological issues, make this a very challenging enterprise. This course presents a 10-step process for designing and implementing effective performance measurement systems in public and non-profit agencies, with an emphasis on enhancing their utility in improving organizational and program performance. The focus is on the interplay between measurement and management, as well as between performance monitoring and program evaluation, and all topics are illustrated with examples from a wide variety of program areas.

Day 1 covers the basics of performance measurement and looks at the identification of outcomes and other dimensions of performance, data sources and the definition of performance indicators, and criteria for systematically evaluating the usefulness of potential indicators. Day 2 looks at the analysis and reporting of performance data and addresses selected topics such as comparative performance measurement, balanced scorecard models, monitoring customer feedback, and developing performance measures to help manage programs in networked environments. The course concludes with a discussion of the "process side" of designing and implementing performance measures and of strategies for building effective monitoring systems.

The instructor's text, Performance Measurement for Public and Non-profit Organizations (Jossey-Bass, 2003), case studies, and other materials are provided.

Policy Analysis, Implementation and Evaluation

Instructor: Dr. Doreen Cavanaugh, Research Associate Professor at the Georgetown Public Policy Institute, Georgetown University

Description: Policy drives the decisions and actions that shape our world and affect the wellbeing of individuals around the globe. It forms the foundation of every intervention, yet its underlying assumptions and values are often not thoroughly examined in evaluations. In this course, participants will explore the policy development process, study the theoretical basis of policy, and examine the logical sequence by which a policy intervention is intended to bring about change. Participants will explore several models of policy analysis, including the institutional model, the process model, and the rational model.

Participants will experience a range of policy evaluation methods for systematically investigating the effectiveness of policy interventions, implementation, and processes, and for determining their merit, worth, or value in terms of improving the social and economic conditions of different stakeholders. The course will differentiate evaluation from monitoring and address several barriers to effective policy evaluation, including goal specification and goal change, measurement, targets, efficiency and effectiveness, values, politics, and increasing expectations. The course will present models from a range of policy domains. At the beginning of this two-day course, participants will select a policy from their own work to use as a working example throughout the class. Participants will develop the components of a policy analysis and design a policy evaluation.

Policy Evaluation and Analysis

Instructor: Dr. Gary T. Henry, Duncan MacRae ’09 and Rebecca Kyle MacRae Professor of Public Policy, Department of Public Policy and Director of the Carolina Institute for Public Policy at the University of North Carolina at Chapel Hill

Description: Policy evaluation and analysis produce evidence intended to influence policymaking. Just as there are many types of evaluation, policy analysis is conducted in different ways and for different purposes. One type, scientific policy analysis, has much in common with policy evaluation: both usually involve an independent assessment of the social problem to be addressed through government action and an assessment of the costs and consequences of relevant policy alternatives. Another type, professional policy analysis, is intended to have more direct, short-term influence on policy, often using data from previous evaluations and extrapolating results to a new setting. Advocacy policy analysis selectively uses data to make a case for a predetermined policy position.

This course will explore the types of policy analysis and the types of evaluation that are most likely to be influential in the policy process. Participants will develop major components of a professional policy analysis and design a policy evaluation. In addition, the class will focus on the development of a communication strategy for a policy evaluation.

Evaluability Assessment

Instructor: Dr. Debra J. Rog, Associate Director, Westat Inc., Vice President, The Rockville Institute

Description: Increasingly, both public and private funders are looking to evaluation not only as a tool for determining the accountability of interventions but also as a way to add to the evidence base on what works in particular fields. With scarce evaluation resources, however, funders are interested in targeting those resources in the most judicious fashion and with the highest yield. Evaluability assessment (EA) is a tool that can inform decisions on whether a program or initiative is suitable for evaluation and on the type of evaluation that would be most feasible, credible, and useful.

This course will provide students with the background, knowledge, and skills needed to conduct an evaluability assessment. Using materials and data from actual EA studies and programs, students will participate in the various stages of the method, including the assessment of the logic of a program’s design and the consistency of its implementation; the examination of the availability, quality, and appropriateness of existing measurement and data capacities; the analysis of the plausibility that the program/initiative can achieve its goals; and the assessment of appropriate options for either evaluating the program, improving the program design/implementation, or strengthening the measurement. The development and analysis of logic models will be stressed, and an emphasis will be placed on the variety of products that can emerge from the process.

Students will be sent several articles prior to the course as a foundation for the method.

Prerequisite: Background in evaluation is useful and desirable, as is familiarity with conducting program level site visits.

Developmental Evaluation: Systems and Complexity

(Formerly taught as: Alternative Evaluation Designs: Implications from Systems Thinking and Complexity Theory)

Instructor: Dr. Michael Quinn Patton, Director, Utilization-Focused Evaluation and Independent Evaluation Consultant

Description: The field of evaluation already has a rich variety of contrasting models, competing purposes, alternative methods, and divergent techniques that can be applied to projects and organizational innovations that vary in scope, comprehensiveness, and complexity. The challenge, then, is to match the evaluation to the nature of the initiative being evaluated. This means that we need options beyond the traditional approaches (e.g., linear logic models, experimental designs, pre-post tests) when faced with systems change dynamics and initiatives that display the characteristics of emergent complexity. Important complexity concepts with implications for evaluation include uncertainty, nonlinearity, emergence, adaptation, dynamical interactions, and co-evolution.

Developmental Evaluation supports innovation development to guide adaptation to emergent and dynamic realities in complex environments. Innovations can take the form of new projects, programs, products, organizational changes, policy reforms, and system interventions. A complex system is characterized by a large number of interacting and interdependent elements in which there is no central control. Patterns of change emerge from rapid, real-time interactions that generate learning, evolution, and development, if one is paying attention and knows how to observe and capture the important and emergent patterns. Complex environments for social interventions and innovations are those in which what to do to solve problems is uncertain and key stakeholders are in conflict about how to proceed.

Developmental Evaluation involves real-time feedback about what is emerging in complex dynamic systems as innovators seek to bring about systems change. Participants will learn the unique niche of developmental evaluation and what perspectives such as Systems Thinking and Complex Nonlinear Dynamics can offer for alternative evaluation approaches. Participants will receive a copy of the instructor's new text: Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use (Guilford, 2010).

Internal Evaluation: Building Organizations from Within

Instructor: Dr. Arnold Love, internationally recognized independent consultant based in Toronto, Canada

Description: Internal evaluations are conducted by an organization's own staff members rather than by outside evaluators. Internal evaluators have the enormous advantage of an insider's knowledge, so they can rapidly focus evaluations on areas managers and staff know are important, develop systems that spot problems before they occur, constantly evaluate ways to improve service delivery processes, strengthen accountability for results, and build organizational learning that empowers staff and program participants alike.

This course begins with the fundamentals of designing and managing effective internal evaluation, including an examination of internal evaluation with its advantages and disadvantages, understanding internal evaluation within the organizational context, recognizing both positive and potentially negative roles for internal evaluators, defining the tasks of managers and evaluators, identifying the major steps in the internal evaluation process, strategies for selecting the right internal evaluation tools, and key methods for making information essential for decision making available to management, staff, board members, and program participants.

The second day will focus on practical ways of designing and managing internal evaluations that make a difference, including: methods for reducing the potential for bias and threats to validity, practical steps for organizing the internal evaluation function, the specific skills the internal evaluator needs, strategies for building internal evaluation capacity in your organization, and ways of building links between internal evaluation and organizational development. Teaching will be interactive, combining presentations with opportunities for participation and discussion. Time will be set aside on the second day for an in-depth discussion of key issues and concerns raised by participants. The instructor's book, Internal Evaluation: Building Organizations from Within (Sage), is provided along with other resource materials.

Evaluating Training Programs

Instructor: Mr. James Bell, President of James Bell Associates, Inc., a firm that has specialized in national health and human services program evaluation for more than 30 years

Description: The long-established approach to training program evaluation (Kirkpatrick, 1976) emphasizes participants’ initial reactions to training, gains in knowledge and skills, changes in individual behavior, and end results indicated by organizational and societal benefits and costs. Even with this basic framework, the selection of measures and of data collection and analysis methods continues to challenge evaluators. Training programs usually have unique characteristics that must be accommodated in evaluations that tend to be modestly funded. A single training program, for example, might encompass a range of approaches, such as a large-group presentation, a self-directed online tutorial, or a single- or multi-day workshop, along with a similarly diverse array of complementary follow-up technical assistance, including webinars or one-to-one help in applying new knowledge and skills.

The course’s four segments are designed to increase the practical knowledge and skills of those who plan, conduct, or oversee training evaluations. First, the focus is on key elements of the training program being evaluated, including instructional objectives and modes, criteria for selecting attendees, hours of instruction, expected changes in attitudes and behavior, and prominent contextual factors that might moderate results. The second segment focuses on evaluation design and preparation, including the selection of measures, adaptation of data collection instruments and administration procedures, and data analysis planning. Data collection is the focus of the third segment, which emphasizes procedures for assuring respondents’ anonymity, adequate response rates for follow-up surveys, and data quality. The fourth segment focuses on data analysis and presentation of findings to demonstrate performance and, if needed, improvements in the training program design. Case examples of training evaluation challenges and solutions will be examined and used in class exercises. The case examples will be provided by the instructor as well as drawn from course participants' training evaluation experiences.
