Wikimedia Education Greenhouse/Unit 3 - Module 2

Impact evaluation

A year from now...
Imagine we're in the year 2021.

You have successfully completed this online course (congratulations!) and you have developed your project idea from start to end (double congratulations!). You implemented your evaluation plan and collected a lot of information about the process of your project and the goals achieved. You presented this information through a comprehensive report and shared it with different stakeholders.

To think a little more deeply about some of these details, answer the three questions below!
 * 1) Why did you create this evaluation?
 * 2) Who read your evaluation report?
 * 3) Then what happened?

Let's start with the "Why?"
Evaluation is a complex and extensive topic. You can find entire books, courses, conferences, etc. dedicated only to exploring this area of project management. This lesson will take you through the basic concepts needed to understand the purpose of impact evaluation, design an effective and feasible evaluation plan, and decide what to do with the information collected through your evaluation activities.

Listen to Vasanthi Hargyono walk us through the importance of designing an evaluation strategy with purpose at its base. As you watch the video, reflect on the following questions: Is this practice relevant for your Wikimedia education project? Have you done this or a similar process before? Feel free to share your answers in the Discuss section!

You can find the full transcript of this video at this link.

What and how?
What?

Conducting an impact evaluation allows us to determine what changed as a result of our Wikimedia education project and acquire valuable lessons that will help us improve future interventions. Mercy Corps' Guidebook on Design, Monitoring and Evaluation tells us that impact evaluations "seek to determine the results of a project; its impact or effects on the target population". UNICEF adds that impact evaluation "goes beyond looking only at goals and objectives to also examine unintended impacts". It helps us answer some key questions: Was our project successful at achieving its outcomes and goals? Will we replicate this initiative? Will we scale it up? Will we change it? Will we discontinue it completely?

How?

In the previous module we established that monitoring and evaluation activities have our logic models at their base. Take a look at the example in Section 7: Example of a Logic Model with Evaluation Questions of the UW Extension Course on Logic Models. Can you identify the components of the logic model in the first section and how they relate to the Key Evaluation Questions? In their example you can see that there are evaluation questions corresponding to each part of the project's development, including the inputs (staff, money, partners) and outputs. In this module we are only focusing on evaluating outcomes and long-term goals: the higher levels of the logic model, and the ones that can help us understand whether our project is creating the change it intended. In other words, our evaluation efforts are an extension of our logic model. We will explore more of this in the next section.

Additional considerations
When planning our evaluation activities we need to take into account additional factors that will determine whether the evaluation plan is feasible. This will influence the type of evaluation activities you can develop and it might require you to prioritize the most important outcomes/goals to measure according to the purpose you have set for your evaluation. Such factors to consider can be:
 * Money - Do you need any financial resources to carry out your evaluation activities? For example: printing surveys, traveling to conduct interviews, recording professional videos, etc.
 * Time - How much time do you have available to conduct evaluation activities? For example: face-to-face interviewing, classroom observations, detailed data analysis, etc.
 * Human resources - Can other members of your team get involved in conducting evaluation activities? For example: dividing up responsibilities for data collection, data analysis, reporting, etc.
 * Expertise - Do you have the knowledge and skills needed to carry out these evaluation activities? Do you need additional support from experts?

The final stretch!
In this section we will see the different elements that can help us build an evaluation plan with logic models at the base. We will explore this together with the team of Wikimedians you met in previous units. Remember them? "Hey there! It's Laura, Alex, and Isaac again! We are developing the evaluation plan of our WikiCafé project. The main purpose of our evaluation is to learn if our project actions had the intended impact on our participants, and to see if this is a project that we can replicate in other school districts. We intend to share our evaluation report with other Wikimedia communities, with the school principal who supported us, and with potential donors for a future edition of this project."

Baseline
At the beginning of the previous unit you learned how conducting a Needs Assessment before the start of a project helps us collect data and identify the needs or gaps present in our participants and context. This activity also provides us with a baseline: a picture of our audience at the beginning of the project that lets us identify the changes that occur as a result of our intervention.

In their Design, Monitoring and Evaluation Guidebook, Mercy Corps states that: "For impact evaluations, the baseline data gives a starting point against which further progress can be measured. Without a baseline, it is extremely difficult for an evaluation to gauge project impact".

To see how this looks, let's go back to the project from our team of Wikimedians: 'Before the beginning of our project, we conducted a needs assessment to learn more about our participants. We reviewed the surveys and interviews we developed and learned that:
 * Only 1 out of the 15 participants knew how to write a very basic SPARQL query
 * Only 2 out of the 15 participants stated that they understood the value of Wikidata and other Wikimedia projects as OERs
 * All of the participants expressed that they wanted to be able to use SPARQL queries to create better didactic resources for their classes but they were not sure how to start.
We are very excited to see how their skills and perspectives change after our project implementation!'
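For readers unfamiliar with the skill measured in the first baseline item, the sketch below shows what a "very basic SPARQL query" against Wikidata might look like. This is only a hypothetical illustration, not part of the team's actual needs assessment; the identifiers used (P31 for "instance of", Q3305213 for "painting") are real Wikidata IDs, and the query can be run in the Wikidata Query Service:

```sparql
# List ten paintings with their English labels.
# ?painting matches any item whose "instance of" (P31) is "painting" (Q3305213).
SELECT ?painting ?paintingLabel WHERE {
  ?painting wdt:P31 wd:Q3305213 .
  # The label service fills in ?paintingLabel automatically.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
```

A participant who can read and modify a query like this one (for example, swapping in a different class or language) would meet the baseline skill the team measured.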

Evaluation questions
Maybe you have seen this quote attributed to Vanessa Redgrave: "Ask the right questions if you're going to find the right answers". The next step to build our evaluation plan is to craft the questions that will help assess how effective our project really was in creating the change we wanted to see.

As general guidelines, K. Shakman and S. Rodriguez state that evaluation questions should:
 * Be relevant to and aligned with your logic model's objectives and impact statements. Can the questions be answered given the program?
 * Be prioritized according to the purpose and audience of your evaluation. What are the key, most important questions?
 * Respond to the resources and availability that the team has to collect this information. Are the questions practical and appropriate to the capacity you have to answer them?
 * Be clear and jargon-free. Would someone who is not steeped in the language of your particular field understand the question?

To see this in action, let's check how our team of Wikimedians has created the evaluation questions for their WikiCafé project: 'Alright! First, let's review our logic model:'

'Now we are going to focus on the objectives and impact to create our evaluation questions. We created 2 evaluation questions for each of these outcomes; they are the following:' "We think these questions can help us collect accurate and relevant information that will tell us if we achieved the change we wanted to see through our project"

Indicators of success
Now that we have the questions, how will the answers to these questions tell us if our project was truly successful?

Indicators (also called "metrics") are Specific, Measurable, Accurate, Relevant and Timebound (SMART) statements that provide us with a picture of the desired achievements of our project. They align with the evaluation questions and outcomes/objectives by providing the key values we aim to achieve. To create indicators of success for our evaluation plan, K. Shakman and S. Rodriguez suggest that we reflect on these three questions:
 * What would achieving the goal reflected in the outcome look like?
 * How would we know if we achieved it?
 * If I were visiting the program, what would I see, hear, or read that would tell me that the program is doing what it intends?

Let's see this in practice again with our group of Wikimedians! "We think these are the indicators that will tell us if our project was successful:"

Data collection methods
We created clear evaluation questions and set measurable indicators of success for our project. How do we know if we reached these indicators? What tools can we use to collect this data?

Deciding on the right data collection method will depend on the type of information you want to document (quantitative? qualitative? both?), the availability of your resources, and the purpose of your evaluation. We have seen different data collection methods in the module about needs assessments and in the previous module on monitoring and course correction. You can use ready-made data collection tools, modify past tools, or create your own from scratch. For more alternatives, you can see a handy chart of data collection methods for different purposes at this link. In any case, it is important to remember to allocate the necessary time and resources to this process, as you did for the other stages of your project planning.

Let's see what data collection methods our team of Wikimedians is using: Our evaluation plan is looking good!

We are moving the information we collected into a table so it is more organized and easier to understand for external audiences.

''We decided that we will design our own data collection tools since we could not find any existing ones that responded to our needs. We also gathered the teachers' contact information and their approval to interview them 6 months after the project. Sadly, we think we might not have the time and resources to visit each teacher to do classroom observations or to conduct printed surveys, so we will use online surveys instead and rely on their honesty. We are also planning to conduct interviews via Skype for the same reason. We are positive that it will be a very informative experience!''

Data analysis and report
UNICEF's Overview of Impact Evaluation states that "evaluations must focus on producing useful and accessible findings, not just academic reports". What is the use of all the data that you collected through interviews, surveys, tests, etc. if you don't effectively organize it and present it to the relevant audiences? After collecting the needed information through your chosen data collection methods - for example: interviewing your participants, reviewing tests and surveys, etc. - your next step will be to reflect on what that information tells you: how it responds to your evaluation questions and how it stands against your indicators of success. As Mercy Corps' Guidebook puts it: “What does this mean for our project?” and “What conclusions can we draw from this information?”.

As with other activities we have seen earlier, analyzing the data you have collected does not have to be a solitary endeavor. Engage different members of your team, seek thought partnership from your internal stakeholders, and discuss your reflections with others to make sure your conclusions are objective and unbiased.

Finally, the way you will organize this information to share with different audiences can vary. A written report (with various degrees of detail and depth) is appropriate and needed, but do not be afraid to get creative! Contributors to the National Council for Voluntary Organisations (UK) suggest other reporting formats to present your evaluation, such as:


 * Infographics
 * Animations
 * Blogs/Newsletters
 * Podcasts

It all comes down to who your audiences are and the relevant information and lessons you want to share with them.

Course Portfolio Assignment: The start of an evaluation plan for your Wikimedia education initiative
Let's start building an evaluation plan for your Wikimedia education project!


 * Step 1: Review the logic model you created in the previous unit (if you have not completed that module yet, take some time to do so and create a logic model for your project idea following that lesson) or another logic model you have previously created.
 * Step 2: On your Course Portfolio, create a section called "Evaluation Plan". First, write down the main purpose of your evaluation (the "why") and the stakeholders that you would share this information with (the "who").
 * Step 3: Choose two outcomes/objectives to focus your evaluation plan on (they can be the same ones you chose for the previous activity on monitoring) and your long-term goal/impact.
 * Step 4: For each outcome/objective and for your long-term goal/impact, create two evaluation questions. For each evaluation question indicate the data collection method you will use and when you will collect this information. Remember to use a visual representation that is easy to follow (such as the tables presented in the example of the previous section).

If you need some inspiration, check out the work of participants of the first cohort of the Wikimedia Education Greenhouse online course:


 * Innovación educativa URJC-UCM by Florencia Claes
 * Sum of all paintings: my local artist by Pietro Valocchi
 * Wikimedia Educators Emeritus by Gina Benett
 * Pequeñas poblaciones del interior de Argentina by Agustín Zanotti