Evaluation Theories/Pre-Class Notes Wk. 11

Chapter 3: Evaluation of the Natural Resources Leadership Program, 1995 Through 1998: An Interview With Jennifer C. Greene

interview Professor Greene about the three-year evaluation of the Natural Resources Leadership Program (NRLP)

She selected this study, conducted with her colleagues Camille Tischler and Alan Hahn, as one that is typical of her work.

she makes use of an intensive case study, observations, interviews, and surveys of participants to gain an understanding of the program and make a judgment about it

intensive case study

observations

interviews

surveys of participants

A major goal of the program is to change the way in which disputes are resolved among those involved in making decisions about environmental policy.

After a pilot run of this leadership program in one state in 1995, the program was implemented the next year in three southeastern states and for a third year in two states.

from 1995 to 1998

Specific objectives for the participants were (1) to understand and apply collaborative, win-win approaches to problem solving; (2) to become skilled in working with people who have different interests, values, and philosophies on issues involving natural resources; and (3) to understand natural resource policy and decision-making processes, and take into account the biological, economic, social and political implications of natural resources management decisions. (curriculum materials)

implemented as a series of five two-and-one-half-day sessions of residential instruction—in communication and conflict resolution skills, leadership development, and government and public policy processes— complemented by a trip to Washington, D.C., and a follow-up, year-long practicum.

evaluation of this leadership program was framed by the evaluation priorities of the funder, the W. K. Kellogg Foundation, and of the visionary developers and primary implementers of the program.

called for data gathering on program context, implementation, and outcomes.

surveyed

conducted mini-case studies

surveyed

asked general questions

sampled the “best” of the participant practicum projects

asked work site supervisors for their views on relevant participant attitudes and behaviors

three members of the evaluation team

Expertise in leadership development and in conflict resolution was also featured on the team.

we had worked with some of the key program staff before and had a strong, collaborative working relationship with them. (J: wow, the independence of this evaluation seems like it may be in question... but Greene usually likes meta-evaluation, no? So she seems to be cognizant of this)

This strand of ongoing analysis and discussion was a key contributing factor to the overall success of this evaluation.

success of this evaluation. (J: PLEASE TELL ME YOU DEFINE SUCCESS, AND HAVE A METRIC FOR IT THAT YOU SHARE WITH US...I AM SICK OF SOMATIC MEMORY IN THE EVALUATION FIELD)

OUTCOMES

Our evaluation indicated that the NRLP was generally successful in realizing its learning aims.

Most program participants reported that they changed their conceptual understanding of environmental conflict

changed their ideas of effective leadership in conflict situations

learned new skills and techniques for organizing people and information toward the resolution of conflicts they encounter (J: UGH! IS THIS ALL YOU LOOKED AT?!?! ATTITUDES?!?! WHAT ABOUT METRICS OUTSIDE OF OPINION?!?!)

In terms of effects on practice, only a few participants were able to enact the new lessons learned in the field, in terms of actually facilitating a consensual decision in a conflict situation—for example, rewriting a state's nuisance wildlife regulation to meet requirements for both animal protection and efficiency. (J: quantify that: HOW MANY OUT OF HOW MANY WERE ABLE TO DO SO?!?!)

A number of others were able to use the project ideas to make important progress toward mediated solutions that respected multiple interests—for example, holding an open symposium on hog farm regulations so that farmers, public health officials, economic developers, and county residents could all voice their views.

the emphases evident in this evaluation on (a) learning about and understanding the complexities of this leadership program as envisioned and as experienced in context, (b) foregrounding the value dimensions of the program's intentions and enactments, and (c) using a diverse mix of methods well capture my current work, as represented in the references that follow the interview.

diverse mix of methods

how this evaluation illustrates some of the methods you advocate in evaluation?

Some of my work has highlighted the intertwining of values and methodology

general thinking about methodology [in context of intertwining values and methodology]

Let me say a little bit more about

the ways I think about methodology that make explicit the value dimensions

the ways this study illustrates that intertwining of values and methods

My approach to evaluation is responsive, in Bob Stake's sense of the word [Link to Bob Stake's "sense of the word"]

responsive

evaluation tries to become attuned to the issues and concerns of people in the context of this program.

Second, my methodology is inclusive in that multiple, diverse perspectives, stances, and concerns are included in the evaluation

Finally

the methodology is reflective

continuing cycle of reflection and analysis.

What have we learned? What does it mean? What do we, as evaluators, think about this?

This reflective, iterative cycle was not only one that we as an evaluation team did, but was one that we shared with key stakeholders so we were able to say, “Here’s what we’re learning. What do you think about it?” (J: WHERE'S THE MATRIX OF PROS AND CONS AND CONNECTIONS TO THIS REFLEXIVE PROCESS?)

We used a process where we would meet and talk substance first

We had a good working relationship with the program people before this evaluation began, and this was very helpful.

We wrote up everything, each evaluation activity, that is, each observation, each set of interviews, each survey. We didn’t wait for the end. We shared the data and our thoughts with each other and all the stakeholders (J: TRANSPARENCY: A GOOD EXAMPLE, I THINK!)

primary users or stakeholders

The program people—the developers and implementers in each state—were the primary stakeholders

We were mindful of the Foundation and their philosophy, and that did influence, to some degree, the way we framed the evaluation. We did, in the larger vision, see the program participants and their organizations as stakeholders; however, we didn’t communicate with them throughout the process. But we certainly thought of them as important stakeholders.

primary purpose of this evaluation?

How did these purposes emerge? Is this purpose common to many evaluations you do? (J: DOUBLE BARRELLED)

Fitzpatrick: In any evaluation, there are generally more questions that we would like to answer than resources permit

Were there other evaluation questions that you would have liked to have added?

Greene: No, I attribute some of the smoothness of the initial process to the fact that we had worked with some of the key developers of the institute before. They liked our prior work and expected something similar to what we proposed. They already had a familiarity with how we worked.

Greene: Each member of the evaluation team had primary responsibility for one of the three participating states in the program. Phone calls, e-mail, site visits were all used as communication channels in these relationships. At least twice during the program period, these key state leaders convened, along with program developers, and at least one member of our team also attended. And, as I said, we worked hard to establish and nurture an ongoing evaluative conversation with these key staff. (J: THIS SOUNDS CHUMMY . . . NOT THAT IT SHOULDN'T BE A GOOD WORKING ENVIRONMENT, BUT WHEN THERE IS GOVERNMENT INVOLVEMENT AND PUBLIC GOOD TO UPHOLD, I THINK WORKING RELATIONSHIPS MAY BE IMPORTANT TO THINK THROUGH. . . )

we did not push for any particular program change. We did not believe we had authority to do that.

albeit an interesting and unusual one. (J: UH, HOW IS IT INTERESTING AND UNUSUAL?! IT SEEMS PAR FOR COURSE FOR ME: AND VERY DICEY ON WHETHER THE THING WAS ACTUALLY USEFUL...)

Fitzpatrick: This is, essentially, an evaluation of a training program

Did you make use of any of the literature on training evaluation, such as Kirkpatrick’s model for evaluating training or Brinkerhoff’s writing?

one of the team members—this was definitely a team effort—was an expert in leadership development in this kind of setting—that is, adult education, cooperative extension, rural farm, and agricultural issues, so we all just deferred to him. (J: IN MY EXPERIENCE, EXPERTS ARE JUST PEOPLE WITH TIME. . . WITHOUT A RIGOROUS EVALUATIVE BASE, THEY ARE BASICALLY JUST A COLLECTION OF ANECDOTAL INFORMATION WITH THE ABILITY (BY ACCESSING AND SHARING APPROPRIATE ANECDOTES) TO CONVINCE OTHERS TO TRUST THEM. THAT'S THE WAY THE WORLD HAS WORKED FOR MILLENNIA, SO WE SHOULD NOT THINK WE CAN MAKE SOMETHING BETTER THAN THAT! BUT WE SHOULD BE SEEING WHAT WE CAN DO TO AUGMENT IT!)

The meaning of leadership became a pretty important issue because the institute was trying to advance an alternative way of thinking of leadership, leadership as helping groups resolve conflicts and reach consensus, as opposed to advocating for, or even mandating, a certain position. But it did not do so explicitly; so many participants fell into old ways of thinking of leadership. And there were conflicts between the new way of thinking about leadership and certain activities in the program. For example, the program began with a team-building activity, often using a ropes course. Generating this sense of camaraderie in the group was somewhat in conflict with the emphasis on understanding and accepting difference that was central to the new concept of leadership. So the concept that we anchored our work around was more leadership than training.

You make use of a combination of observations of the institute sessions, telephone interviews with a sample of participants, interviews with staff and advisory board members, surveys of participants and supervisors, and mini-case studies of some practicum projects. (J: UGH!!! PLEASE TELL ME YOU AT LEAST GOT SOME DATA OUT OF THAT THAT YOU CAN ACTUALLY USE TO COMPARE. . .)

participants had limited positional authority

three criteria that influenced our choices.

Maximizing the depth of our understanding

Inclusion was another important criterion

The third criterion for our decisions about methods and samples was to ensure that we captured the experience and effects of this leadership program at its “best.” (J: I THINK THIS IS IMPORTANT; THOUGH WITH THE RESOURCES A BEST/WORST (AND INDICATION OF THE CURVE COMPARED TO NORMAL CURVE) WOULD BE EVEN BETTER)

In terms of the broader goal of developing an understanding of the program and its accomplishments, there is no substitute for being there. (J: BUT IF YOU ARE THERE, AND TRAINED; YOU SHOULD BE ABLE TO GET THINGS ACROSS; ONE OF THE MAIN POINTS OF EVALUATION SEEMS TO BE TO PROVIDE INFORMATION TO BE A SUBSTITUTE FOR BEING THERE; OR TO MAKE A DECISION ABOUT WHETHER YOU WANT TO BE THERE. AND FOR EVERYONE WHO IS THERE, THERE IS THE POSSIBLE BIAS OF BEING SELF-SELECTED TO BE THERE. . .)

The surveys were always ho-hum. Surveys yielded pretty much what we thought we would get back, but we wanted to use them for inclusiveness and perspective

The surveys were also important in documenting consistency in certain findings across site and time.
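As a rough illustration of the kind of cross-site, cross-year consistency check such surveys can support, here is a minimal Python sketch; the states, years, and Likert values are all invented placeholders, not data from the NRLP evaluation:

```python
from statistics import mean

# Hypothetical 5-point Likert responses keyed by (state, year).
# All states and values below are invented for illustration.
responses = {
    ("State A", 1996): [4, 5, 4, 3, 4],
    ("State A", 1997): [4, 4, 5, 4, 4],
    ("State B", 1996): [3, 4, 4, 4, 3],
    ("State B", 1997): [4, 4, 3, 4, 4],
}

def site_year_means(data):
    """Mean rating per (site, year) cell, to eyeball consistency."""
    return {key: round(mean(vals), 2) for key, vals in data.items()}

def spread(data):
    """Range of the cell means: a small range suggests findings
    that hold up across sites and years."""
    means = [mean(vals) for vals in data.values()]
    return max(means) - min(means)
```

A small `spread` across cells is one crude signal of the consistency Greene describes; a real analysis would also look at item-level distributions and response rates.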

mini-case studies. You used these to portray what participants have accomplished in the field; this focus corresponds to Kirkpatrick’s Level 3 focus on outcomes or applications of training on the job

You made use of telephone interviews, surveys, and examinations of the practicum projects themselves to describe these applications

there were many factors about the program's implementation that undermined the real prospect of conflicts being solved in a different way out in the field.

But we still believed that whether and how participants had been able to actually resolve conflicts differently in the field was the question to pursue.

We deliberated a lot about the sampling of the mini-case studies

It was not just a matter of interviewing the participant but also tracking down others involved in the dispute resolution and learning more about what the participant had contributed

We wanted to examine the institute at its best

telephone interview

What kinds of things did you ask, or do, to establish rapport and stimulate reflection and interchange?

HOW TO PREP FOR QUALITATIVE PHONE INTERVIEWS!

one member of the evaluation team conducted most of the phone interviews—and this team member had not personally met all respondents before, we tried to do a variety of things to develop a relationship further before the interview

sent an advance letter to describe the purpose of the interview

advance phone call to set the appointment and to answer any questions, as well as to advance rapport and perhaps chat a little

these two advance contacts furthered rapport for respondents the interviewer had met before and helped establish some rapport for those the interviewer had not met.

Fitzpatrick: I liked some of the reaction items that you used to obtain participants’ feedback to the sessions on the survey. They tapped more than the typical reaction survey often does. (The survey made use of Likert-scale items, tapping particular issues such as “I wish we had spent more time during the institute engaging with each other on the natural resource issues we really care about,” etc.)

How did you decide which “sensitive” issues to tap with these items?

the issues we wanted to pursue came from our interviews and observations

We did share survey drafts with the program staff. They had a chance to object to a particular question

Did you observe all the sessions? What kind of guidelines did you use to observe the sessions?

We had a one-page observation guide that listed the areas we would like to have information on

the actual process of observing is often very overwhelming. So the guide helped focus us

We were also all very experienced observers. (J: OOOOH, EXPERIENCED! BUT DO YOU HAVE A METRIC ON WHICH TO RATE PERFORMANCE, AND THE VALUE OF EXPERIENCE IN THIS AREA? TAKING IT FOR GRANTED MAY BE A USEFUL HEURISTIC. . . CAN EVALUATION IN THIS AREA BE MORE EFFECTIVE THAN OUR CURRENT PRACTICE?)

The person with background in conflict resolution would note things that the other two didn’t

we didn’t see the same things, but there was reasonable consistency.

Greene: We didn’t have the resources to have two of us there very often. But in the first year, in most of those observations of the five sessions, there were at least two of us there. And there were times when one of us would say, “Boy, I thought that session was great!” And the other would say, “You did?” Then we could talk and sort out our different perceptions: “Was it you or the participants who thought it was great?” (J: DOES THIS RUN FOUL OF ANY OF THE RESEARCH INTO DELIBERATIVE GROUPS (OF EXPERTS) MENTIONED BY CASS SUNSTEIN?)

Fitzpatrick: What did you see as the major impacts of the institute?

we did find evidence of people actually resolving conflicts differently, but the major impacts were much more modest

Fitzpatrick: How did you measure the changes in attitudes and skills?

Mainly through self-report collected in the interviews and annual surveys—that is, through participants’ own views of their changes.

The practica also provided some evidence. There was consistency in these reports. (J: MEASURE OF CONSISTENCY?!?!)
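The marginal note asks for a measure of consistency; one conventional option is percent agreement, or Cohen's kappa, between two sources coding the same participants (e.g., interview self-reports vs. practicum evidence). The binary codes below are invented for illustration only:

```python
# Hypothetical binary codes ("did the participant show changed practice?")
# from two independent data sources; all values are invented.
interview_codes = [1, 1, 0, 1, 0, 1, 1, 0]
practicum_codes = [1, 1, 0, 1, 1, 1, 0, 0]

def percent_agreement(a, b):
    """Share of cases where the two sources assign the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance, for two binary raters."""
    po = percent_agreement(a, b)
    p_yes = (sum(a) / len(a)) * (sum(b) / len(b))
    p_no = (1 - sum(a) / len(a)) * (1 - sum(b) / len(b))
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)
```

Reporting a statistic like this alongside the qualitative claim of "consistency" is one way to answer the objection in the note above.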

Fitzpatrick: Did you think about using more traditional paper-and-pencil measures of skills or attitudes? Greene: I think only for about a second. We would have had to develop them and pilot test them. I don’t think there are appropriate instruments out there. There may be, but it just was not the appropriate assessment for this context. It would have felt like a test—very off-putting and not consistent with the climate that the workshop was trying to establish. (J: PROBABLY GOOD; BETTER TO FIND INDIRECT MEASURES....)

One thing we did think about was the possibility of identifying, in each state, a current environmental policy area of considerable debate and trying to almost track it backward. This would have to be an area where there had been some recent action.

But on reflection, we thought that was well beyond the reach of the workshop. It would have been hard to do for us

On self-knowledge, you write, We need to be able to reflect on how and why we react the way we do in our lives, what core beliefs and assumptions govern our actions, whether or not our processes are helpful to ourselves and others, and how they fit into what we know about the world we live in. The more fully we can trace the meaning we ascribe to situations and the improvements we prefer to our particular experience, feelings, emotions, attitudes, beliefs, morals, tastes, and talents, the more we will be able to legitimate the same in others.

These remarks are given in the context of the balance between teaching techniques, or the mechanics, of resolving conflict and the concomitant need for self-knowledge to resolve conflicts productively

On extremism and advocacy, you challenge the view presented in the institute that advocacy and extremism are to be avoided

“Making conflict go away is not always the goal” and note that “they [extremists] see things we don’t see. . . panning extremists puts a damper on free expression in any group. We are all extremist about something.”

how did these conclusions emerge from your evaluation?

Is it legitimate for an evaluator to raise these sorts of issues—that is, ones which are based more on our own values, judgments, and personal experience than on the data we have collected?

Greene: I want to first note that these particular insights and views came from the evaluation team member with expertise in conflict resolution, with unequivocal support from the rest of the team.

also important to note that these comments appeared in the final evaluation report.

We did not claim the same extent of evaluator voice in all the other reports.

made an intentional decision, after considerable discussion, to include such a section in this final report and to clearly label it our thoughts and ideas

Most of these issues discussed in this section—all of them really—had been brought up before in previous reports—for example, the meaning of leadership.

Fitzpatrick: But is it appropriate for you to do that? What are people hiring you for?

Greene: Our debate was not so much whether we had a legitimate right to do this. It was more over how it might be received. We wanted to present it in a way that was supportive.

But to answer your question, we claimed voice and legitimacy of voice because we had spent three years working in a collegial fashion with the institute and its developers.

Another reason we felt we had the legitimacy to make these comments was the expertise on the team in conflict resolution and leadership.

Greene: I don’t know how this final report was received. The report itself was reasonably short. Key people involved in the development of the institute retired just about the time we completed this report. Things got a little scattered. I never heard anything bad about it. By the time the final report came about, the opportune moments for engagement had passed. (J; SO OFTEN THE SAD STORY OF EVALUATIONS. . . RATING REPORTS SO IMPORTANT...)

The program included a trip to Washington and a trip to the state capitol. We wondered, given the time and expense involved, what the value of these trips was

they continued the D.C. trip because this was new for many of the participants. And they did these trips very well. They arranged visits with policy people in the participants' area of concern and with representatives from their state. It was perceived as valuable by participants.

What ethical dilemmas did you find most challenging in conducting this study?

There were no problems with field relationships, access, or betrayal of confidentiality.

finding the appropriate role for our own stances was a continuing challenge

Where is the place for the evaluators’ viewpoint?

You know you have a viewpoint and that your data are colored by your own lens.

So I think the ethical challenge was being mindful of our own particular way of doing things.

The evaluation team people were typical northeastern liberals who believe in environmental issues.

liberals (J: HA, IN THE AMERICAN SENSE OF THE WORD. MOST AMERICANS ARE LIBERALS IN THE ANALYTICAL POLITICS SENSE OF THE WORD . . .)

In what areas did you find your personal stances on environmental issues presented the greatest challenges?

particular environmental disputes, the woodpecker, the hog farm, the landfill. When the workshop people engaged with a particular situation, they brought these panels in, featuring people with different points of view. When they engaged with actual issues, my own views on these issues were often different, even extreme.

Someone would ask, for example, “How much pollution is OK?” I would think none, a view shared by only some participants. (J: DON'T YOU HAVE TO FIRST ASK YOURSELF WHAT YOUR VALUES ARE, AND THEN LOOK AT HOW POLLUTION AFFECTS THOSE VALUES, WHAT THE DEFINITIONS OF POLLUTION ARE, WHAT THE DIFFERENCE IS BETWEEN POLLUTION AND THE STATUS QUO (WHICH INCLUDES PROBABILITY OF 'REASONABLE' STATUS QUOS)?)

But there were no serious ethical dilemmas. Our evaluation team was interested in the public good. And we perceived this program as sharing that commitment

If you were to conduct this evaluation again, what would you do differently?

there is one thing that I might do differently, but it is only a “might.”

Perhaps during the evaluation, we should have tried to bring into the conversation other people who had some expertise in conflict resolution, others with expertise in leadership, and others with expertise in environmental policy. It might have broadened the conversation and made it more substantive all along.

But the hesitation here is turf. The people who developed and designed this program would never say they were the only experts in conflict resolution, but they probably felt they knew enough and especially knew what made the most sense for this context. (J: THIS SOUNDS LIKE DESIRE FOR FORMATIVE EVALUATION . . . WHICH MAY BE A VERY EFFECTIVE USE OF EVALUATION, IF INTEGRATION COSTS (AND 'TURF WARS') ARE AMENABLE)

Note how she defines the primary purpose of the evaluation as “to develop a good understanding of, and to make some judgments about, the quality and effectiveness of the leadership institute.”

Unlike many evaluators, she does not delineate specific questions that are paramount to program managers or other stakeholders during the initial planning stage. (J: BUT IT SEEMS VERY FOCUSED ON HER / THE TEAM. . . I'M NOT CONVINCED ENOUGH OF THE POWER OF A SMALL TEAM OF EXPERTS. . . THOUGH MORE RESEARCH NEEDS TO BE DONE COMPARING DISTRIBUTED EVALUATION VS. THAT DONE BY A COUPLE OF EXPERTS...)

Rather than answering particular questions that they have, she attempts to bring about change, not simply use, by establishing an “ongoing evaluation conversation about value-based issues important to the program.”

In attempting to instill an ongoing evaluative conversation, Greene and her fellow team members must, as she illustrates, become part of the team.

the role her values played in the evaluation. She does not take a hands-off posture, nor does she conceal her values regarding environmental issues, conflict resolution, and the means for best achieving

As a team member interested in inclusion, she attempts to hear and reflect in her reports many different voices, but her own is apparent as well, though she balances those values with her role as an evaluator.

She comments, “Because this program dealt with contentious environmental issues, the personal stances of the evaluation team members on these issues were ever present. Our job, however, was not to engage in the substance of the environmental disputes but rather to understand how this program was advancing a different process for resolving them.”

DISCUSSION QUESTIONS

EVALUATE THESE DISCUSSION QUESTIONS ON THE THINKING SKILLS RUBRIC: https://docs.google.com/spreadsheets/d/17wZzu4G3Kp4b6-rmLZEXHtLtp0T7gPoDan3A9ZV6IOY/edit#gid=0

be responsive to the issues and concerns of the people and the context of the evaluation; to be inclusive of multiple, diverse perspectives; and to be reflective. How does she demonstrate these qualities in conducting this evaluation?

Greene see as most important for judging the merit and worth of this program? How does she evaluate these outcomes? Do you agree with her choice of outcomes? Her choice of methods? (J: QUADRUPLE-BARRELLED QUESTION (THESE ACTUALLY ALLOW MORE FLEXIBILITY IN ESSAY-LENGTH ANSWERS, THROUGH DE-DUPLICATION, ETC. - BUT ARE HARDER TO CODE FOR THAT REASON. . . AND ACTUALLY, PROBABLY, HARDER TO PARSE TOO... BUT HOW MUCH HARDER? AND HOW DO WE MEASURE IT?!?!))

3. Greene refers to the “outcome mania” of today. What is she referring to? Do you see an “outcome mania” in your workplace?

What elements of Greene’s role do you find interesting?

5. How do Greene and her team communicate and share findings with stakeholders? What seem to be their main methods of communication?

Chapter 8: Evaluation of the Special Education Program at the Anoka-Hennepin School District: An Interview With Jean A. King

King describes the process she used in working with a large self-study team of administrators, teachers, classroom aides (paras), and parents to build evaluation capacity in the school district.

Jean A. King is a professor in the Department of Educational Policy and Administration at the University of Minnesota, where she teaches in the Evaluation Studies Program.

perhaps best known for her work on participatory evaluation

In 1999, she took a 2-year leave from the university to work as Coordinator of Research and Evaluation for a large school district to learn firsthand about the work of an internal evaluator.

interview focuses on an internal evaluation of the district’s special education program.

The team determined the focus of the study and data collection methods, developed surveys and interviews, interpreted results, drew conclusions, and made a report to the School Board.

facilitate the process by establishing the format of the meetings, guiding groups, analyzing and preparing data to present to the groups, and ensuring that groups based their conclusions on data.

interview, she describes the role and actions she took to help the district build capacity while simultaneously evaluating the special education program

experiences in capacity building in this evaluation dramatically changed her views of program evaluation and her subsequent writing concerning capacity building.

evaluation

self-study framed within the six-year cycle of a state-mandated curriculum review process

was both participatory and developmental

and a political setting in which parent advocates wanted a process that would force the Special Education Department to respond to their concerns

involving more than 100 people at one time or another on the Self-Study Team, had

dual intent: to provide a broad array of perceptual data collected from numerous stakeholders using diverse methods and, simultaneously, to create a process for continued evaluation activities.

CONTEXT

Citizen Involvement Handbook (1999–2000) gave the following demographic summary: “young community, hard working, [many who] work out of the community, poor, high school-educated, rising number of immigrants, and increasing minorities.”

funding remains an ongoing challenge

known for innovation and its commitment to the use of standardized data for accountability and improvement

Years before the state adopted graduation testing, Anoka-Hennepin routinely required students to pass such a test

in the 1990s, district personnel were active in the devel- opment of performance assessments and state graduation standards. (J: This is during the time King was their Coordinator of Research & Evaluation / internal evaluator)

infrastructure for the self-study included three components:

(1) a team of three evaluators, which met several times a month and often late into the night (the Evaluation Consulting Team);

(2) a team of the three evaluators and the district special education administrators, which met twice a month (the Data Collection Team);

(3) a large self-study team with representation from as many stakeholders as we could identify, which also met once a month (the Self-Study Team)

The self-study process included

(1) process planning by the Data Collection Team;

(2) data collection from multiple sources (e.g., teachers, administrators, paraprofessionals, parents, parent advocates, interagency service providers) using multiple methods (surveys, stakeholder dialogues, telephone interviews, focus groups, e-mail questionnaires, community forums, and observation);

(3) data preparation and review by the Evaluation Consulting Team;

(4) review and analysis by the Self-Study Team;

(5) development of commendations and recommendations by the Self-Study Team.

Each month, participants completed a simple evaluation form at the end of the session that asked three things: plusses and wishes (on a form) and questions (on index cards).
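A simple tally, sketched below with invented entries, is one way such monthly "plus / wish / question" feedback could be aggregated across sessions:

```python
from collections import Counter

# Hypothetical plus / wish / question feedback from monthly sessions;
# all entries are invented, not actual Self-Study Team data.
feedback = [
    ("plus", "liked working in small table teams"),
    ("plus", "data summaries were clear"),
    ("wish", "more time for discussion"),
    ("wish", "more time for discussion"),
    ("question", "when will parents see the survey results?"),
]

def tally(entries):
    """Count how often each (type, comment) pair recurs across sessions."""
    return Counter(entries)

def by_type(entries):
    """Count plusses, wishes, and questions overall."""
    return Counter(kind for kind, _ in entries)
```

Recurring "wish" items surfaced this way could feed directly into planning the next session.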

Collectively, over the 18 months, the Self-Study Team framed issues and concerns for the study, reviewed instrumentation, analyzed and interpreted data, and developed commendations and recommendations.

People sat in teams at tables and would either work on the tasks at hand while they ate or wait until after their meal. (J: MAYBE NOT SO RELAXING AS I THOUGHT...:))

Then, we would have dinner (J: EATING TOGETHER VERY IMPORTANT FOR TEAM BUILDING! GOOD TO DO IT AFTER WORK HAS BEEN DONE TOO: PEOPLE ARE NOT STRESSING ABOUT THE WORK, OR ARE TAKING A BREAK FROM STRESSING AND TRYING NOT TO STRESS... USUALLY :))

people reported satisfaction with their involvement and a sense of accomplishment with the results.

The evaluation's findings identified a lengthy list of concerns, the most important of which included

(1) special education referral and due process procedures, which parents and some staff found confusing; (J: CONFUSING PROCEDURES: BASIC PROBLEM; YOU WANT TO STREAMLINE, STREAMLINE, STREAMLINE. COUNT STEPS; COUNT BITS OF INFORMATION; COMPRESS. STREAMLINE.)

(2) the need for professional development in adapting and modifying curriculum and instruction for special-needs students;

17-page report, developed collaboratively, included

(1) description of the process

prioritized in 15 issue areas

(2) commendations

(3) recommendations

self-study data analysis and interpretations as an appendix.

Self-Study Team gave a formal presentation at the School Board meeting, including one minority report from a parent who had actively participated in the process but chose to express his thoughts separately from the Team’s.

How was it different from what you had done in the past?

the biggest participatory study I had ever engaged in— the study team had over 100 people on it


size of it made all the participation activities a challenge.

most overtly political environment I have ever worked in.

internal self-study was monitored by an external force.

we had a State Department representative [a “monitoring supervisor”] sitting in on every single meeting—sitting with a piece of paper and taking notes—so that was different, not a typical participatory process.

Fitzpatrick: What prompted you to change your practice in this setting?

To ensure that people didn’t think the evaluation was being controlled internally, we also hired two external evaluators who were truly “objective outsiders” with no role in the district.

We were very consistent about that “objectivity.” We had to be because of the public eye on the evaluation.

There was one father with two extremely handicapped children, and he was truly angry at the district. He came loaded for bear.

When the second meeting came, this father again stood up and raised his hand to complain. We thought we’d never get anything done since he did this at every meeting. So we talked with him by phone, met with him before each meeting and after if necessary, met with him at the district office—throughout the whole process. Then, guess who gives a minority report [at the final presentation to the School Board] and brings one of his children in her automated wheelchair? It wouldn’t have mattered what we did.

The politics of the situation were such that no matter how objective we were, he was having none of it.

I learned you can’t always win through inclusion.

He had not changed his opinion based on our objective data. It did not matter to him.

The real question he raised was to what extent a school district can serve every individual child regardless of the cost. (J: TO WHAT EXTENT SHOULD A DISTRICT SERVE EVERY INDIVIDUAL CHILD, REGARDLESS OF COST?... WHAT ARE THE VALUES AT WORK? HOW DO THEY PLAY OUT?)

Was his truth real? Perceptions are reality to people.

It’s a dilemma. Of course, you want to serve every child, but can you realistically provide a full-time aide for every special-needs child? Who gets the limited resources? (J: HOW DOES EVALUATION ADDRESS THIS DILEMMA, AND HOW IS EVALUATION UNABLE TO ADDRESS THIS DILEMMA?)

Did you hope your participatory process would change his views? Is achieving such a consensus part of the goal of your participatory process?

we didn’t expect consensus.

We were after statements that could be supported with more than one piece of data. Our hope was to get people to support their deeply held opinions with solid evidence. As we say on campus, “What is the warrant for that claim?”

can you tell us a bit about what it was like beginning there as an internal evaluator? What did you do to begin in this position?

It was an easy beginning—there was a list of evaluation projects waiting for me.

reason to take this leave?

I wanted to walk the walk of an evaluator inside a big organization.

I am a school person at heart. I love schools.

this study started at the end of my first year and was my first big participatory study. I had been doing other studies, but they were more with me as the evaluator working with people, showing them the evaluation process.

state high school graduation requirements had changed to standards based. I was the leader of that study.

Can you tell us a bit more about the context for starting the evaluation?

part of a routine, state-mandated curriculum review process. It was special ed’s turn.

Between meetings, we would type that up across tables [type up each table’s analysis so people could compare] and distribute the comparison at the next meeting, where it would be discussed.

The special education program was large, and they wanted a continuous improvement process.

Who were the significant stakeholders for the study?

I’ll never forget this. We sat in the Associate Superintendent’s office. We actually did a chart of who the stakeholders were—a 13 × 4 chart since there were 13 categories of disabilities and four levels (pre-, elementary, middle, and high school), and we wanted to be sure that we had representation in all 52 cells. On it were parents, regular education teachers, regular education paras, and special ed paras. Then, we added community stakeholders, the State Department, and the politicians.
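King's 13 × 4 representation chart is easy to mimic in code; a minimal sketch of the coverage check (the category names and recruitment records below are hypothetical placeholders, not from the study):

```python
from itertools import product

# 13 disability categories x 4 school levels = 52 cells to cover
categories = [f"category_{i}" for i in range(1, 14)]
levels = ["pre", "elementary", "middle", "high"]

# Hypothetical recruitment records: cells that already have a representative
recruited = {("category_1", "pre"), ("category_2", "elementary")}

# Any cell without a representative still needs recruiting
missing = [cell for cell in product(categories, levels) if cell not in recruited]
print(f"{len(missing)} of {len(categories) * len(levels)} cells still need a representative")
```

The same grid-as-set-of-cells idea works for any representation matrix, whatever the two dimensions are.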

The easier question is “Who wasn’t?”

We wrote this all down on a huge piece of paper. (J: WITH TODAY'S TECHNOLOGY THIS MIGHT BE A LARGE GOOGLE DRAWING OR A WIKI)

the district hired you and the other evaluators to be “objective technicians and skilled facilitators.” Tell us a bit about their expectations.

The “objective technician” part was representing the profession of evaluation, making sure people weren’t leaping to conclusions, were applying the standards of good evaluation practice, and so on.

The facilitator piece was helping to guide the self-study meetings

The perception was that the special education staff would influence and control the self-study too much. That’s why I became Dr. King, a university-based professional evaluator collaborating with two completely external colleagues.

In theory, the process didn’t allow for letting people’s past perceptions influence their conclusions.

The process we used comes right out of social psychology. At every meeting, we set up table teams, mixtures of folks— parents, aides, teachers, community members, and staff. Each table team would receive a notebook. When people came in, they could read the results of the last meeting. The new pages in the notebook would be the new data or analyses for the night. We structured activities where people would work at the table and study the information and each table would submit its analysis.

It’s like replicating by having separate teams all looking at the data, and at the same time, people enjoy the discussions they have about the data.

The groups were responsible for interpreting and synthesizing from month to month and ultimately for generating and rank-ordering recommendations. (J: HOW DID THIS PROCESS "NOT ALLOW FOR PEOPLE'S PAST PERCEPTIONS TO INFLUENCE THEIR CONCLUSIONS"? (191, p 3))

Our job was to correct any incorrect statements and keep the process on course.

For example, two different times the data were bimodal—there were strong feelings both for and against a particular item. Looking at the mean, people at a couple of tables noted, “People are neutral on this issue,” which was wrong.
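The bimodal-mean trap King describes is easy to demonstrate; a toy illustration with made-up ratings (the 1–5 scale and values are hypothetical, not the study's data):

```python
from statistics import mean
from collections import Counter

# Made-up 1-5 survey item with strongly polarized responses
responses = [1, 1, 1, 2, 4, 5, 5, 5]

print(mean(responses))     # the mean alone suggests "neutral"
print(Counter(responses))  # but hardly anyone actually chose the midpoint
```

A frequency count (or a simple histogram) alongside the mean is what catches the "people are neutral" misreading.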

We collected an absurd amount of data. We could not convince the study committee that a sample would be sufficient.

Sometimes we would pass out data, and someone would say, “These numbers don’t add up.” So we would take it back, correct it, and distribute it with next month’s analyses. Sometimes I just turned it around and said, “There are probably errors in this data, and it’s your job to find them.”

When we got near the end of the process, after well over a year, we got frustrated because teams would write a claim but not be clear about what data they thought supported it. The table teams had the data in front of them, and the rule was you couldn’t make a claim unless you had data to support it. A team would say, “We are making the claims based on these five points,” but what five points? They wouldn’t write them down accurately since it was boring to do that. We couldn’t figure it out, but they didn’t want to copy all the data over on their sheets. (J: A COMPUTERIZED PROCESS COULD MAKE THIS MORE EFFICIENT (COPY AND PASTE WITH AUTOMATIC CITATION AND LINK TO SOURCE), IF THE ADDITIONAL LOAD OF LEARNING THE SYSTEM DID NOT OUTWEIGH THE BENEFIT)

One of the secretaries said, “Let’s print the data on labels, then table teams will easily put clearly labeled data with their claims.” It also showed them, if they’re not using half their labels, they’re probably missing something. We made the decision that it was more important to have an accurate link between the data and the claims than to save the money it cost to print all those labels. (J: THEY HAD THEIR OWN TECHNICAL SOLUTION! HOW WOULD THIS COMPARE TO THE COMPUTERIZED SOLUTION I PRESENTED ABOVE?)

Teams had to “prove” their conclusions with the data. Sometimes we had 8, maybe 10, table teams working simultaneously. If they all came up with the same conclusions, and we could check where those conclusions came from, we felt confident that the claims were supported. This is a technique that I use all the time now—having many different groups look at the same data. (J: EXTREMELY IMPORTANT, RIGHT? BUT HOW IMPORTANT? HOW DO WE MEASURE THE DIFFERENCE BETWEEN HAVING MANY TEAMS LOOK AT THE DATA? WHAT PATTERNS DO WE SEE?)

Who did you involve in shaping the focus of the study? What did you do?

the Evaluation Consulting Team, which consisted of the three evaluators (me plus the two outside consultants)

Data Collection Team, which consisted of the Evaluation Consulting Team plus six special education staff, including the Director of Special Education

were the ones who shaped the focus of the study over time.

The State Department of Education person, who had a helpful attitude, sat in on this as well.

if we missed something she thought was important for the State, she would mention it. Otherwise she just smiled.

Why did you decide to put the focus in this study on building evaluation capacity?

In this politically charged environment, we couldn’t do a traditional for- mal evaluation and have it be of great value. It might address the concerns of the moment, but not future ones.

commendations are in the report. People in the special education programs in the schools were working really hard, and the data showed that. For every bad thing, the Self-Study Team found many good things. We really needed to congratulate people for the good work they do.

What had been controversial about the special education program?

King: As I recall—and I never did get to see or learn about the specific complaints—it was the number of the complaints. There were too many, in the State Department’s opinion. (J: WHAT DO YOU THINK ABOUT KING NOT BEING ABLE TO SEE OR LEARN OF THE SPECIFIC COMPLAINTS? IS THIS AN EXAMPLE OF SCRIVEN'S GOAL-FREE (<- check that term for accuracy) EVALUATION?)

What issues or challenges did administrators want to make others aware of?

Basically, that the special education department was doing a good job with extremely limited resources.

Since you did not complete a traditional report on this evaluation, I’m not really sure of the evaluation questions your study was trying to answer, and your data collection and results addressed many different issues. Were there specific evaluation questions you were trying to answer that guided you?

There were two overarching questions: (1) What was going right? (2) What was going wrong?

It was not primarily an outcome study because the concerns were more with process.

I’d say roughly two thirds were school employees. As I said before, there were special ed and regular ed teachers and paras, building principals, special ed administrators and other central office folks, then parents, people from advocacy groups, and community members.

The Evaluation Consulting Team did raise the question of outcomes. We wanted to look at the Individual Education Plans (IEPs) to see if kids were progressing, but the notion of outcomes in special education is very difficult, and given what we were looking at, the process questions were central

What kinds of things did you do to encourage involvement?

About how many people came each time, and who were they?

Many people attended consistently, which was helpful because explaining the process to newcomers was difficult.

the parents were the toughest group attendance-wise—only a few parents attended consistently, and we always hoped for more.

you were especially concerned with “bringing many people to the table during the study, especially those whose voices often went unheard in the district.” Whose voices were those? How did you try to involve them?

The parents were our first concern, and there were a sizable number of parents, several of whom came representing other groups. The group we never successfully involved were the parents of color—at that time they comprised about 8% of the parents in the district. The proportion of special-needs parents who were minorities was higher, about 15%. We did everything we knew to get them involved—food, child care, meeting off campus. We would even drive to pick people up. Even doing all of that, we could get people to come for a month or two, but getting that sustained commitment was difficult.

House and Howe (1999) in their book on deliberative democracy write about the role of the evaluator in helping those who are less frequently heard to be heard. Often, the problem is the unequal status of participants. Your teams consisted of a mix of teachers, administrators, paraprofessionals, and parents. How did you attempt to deal with the unequal status of these participants?

In the table teams, democracy lived. People got to know each other as people. People got to know each other and feel comfortable. We didn’t measure it; we didn’t study it. But there is a concept in social psychology called “leaning in,” meaning that groups engaged in meaningful conversation physically lean toward each other. Watching the table teams month after month, I can report that our groups leaned in.

LEANING-IN AS AN INDIRECT MEASURE OF MEANINGFUL CONVERSATION: IS THIS ALSO A MEASURE OF EQUALITY?

In social psychology, one way of dealing with power is group norms. We had group norms—everyone has a right to speak, process these data, conflicting ideas are welcome, speak your truth, and so on. We developed these at the first meeting and had them available in the team notebooks.

SOCIAL PSYCHOLOGY: DEFINING GROUP NORMS

Do you feel that House and Howe’s concern about unequal status is overdone?

No, it’s a critical concern. But the first real issue that comes before that is how to get folks to show up. Once they come, you can structure a satisfactory experience so that people can participate successfully. (J: COMPLEXITIES OF UNEQUAL STATUS? WHAT DOES IT TAKE TO GET PEOPLE TO HAVE EQUAL STATUS? WHAT IS THE ROLE OF CULTURE AND HISTORY IN THIS PROCESS? WHAT ARE THE CONSTRAINTS AROUND THE CONCEPT OF "EQUAL STATUS"?)

How did you decide on these methods?

It was only possible because I had an amazing staff, who would drop everything to process the data for the monthly meeting. For the qualitative data, they would do the typing.

How did you manage carrying all this out?

Having three evaluators was essential. We could check each other. I can’t imagine what it would have been like without the Evaluation Consulting Team.

"DATA DIALOGUE" FOCUS GROUP TECHNIQUE:

I noticed that you developed a new form of focus group, the data dialogue,

a few participants talked to each other, but without a focus group leader, and then they recorded the qualitative data. Can you tell us more about that? How did it work? What structure or guidance did you provide?

We called it the poor person’s focus group.

we were well into the study, and someone said, “You know, we’d really better get some qualitative data.” And we said, “There’s no money. There’s no way. We can’t do it. It’s too complicated.” So one of the external evaluators, the social psychologist, said, “Well, what if we adapt this technique, the three-step interview?

We’ll have people come and invite them to a public meeting, but rather than have one large group discussion, we’ll divide them into groups of three people.

We’ll have them sign in and give their consent to use the data—we can have a demographics sheet—then send people off with the questions.”

The questions were on different colored paper, so we could tell which groups were discussing which questions.

People would hand in the sheet when they were done

they usually wanted to hear what other groups said. So we would have refreshments and, after that, do a debriefing with the large group. Not only did they have wonderful conversations, but they would meet people and find out what other people thought and go home happy. (J: DO YOU SEE YOURSELF USING THIS FOCUS GROUP TECHNIQUE? WHAT ARE THE PRO'S OR CON'S THAT YOU IMAGINE WOULD COME UP?)

Was there a group leader?

Not an assigned one. Typically, someone would take a leadership role in the group, but we did not control that.

Three people is the perfect group size. It’s small enough that people really can talk but large enough that there is an audience of two. We would invite one person to be the recorder. The obvious limitation is you get what they write down, but we wanted to let people chat.

Do people vary in how much they write down?

Some write a lot. Others feel they should come to consensus and then write that down. Either way is fine.

Tell me about some of the data you got from this.

regular classroom teachers needed to work more on adapting and modifying the curriculum for special-needs students. This will always be true. There are actual skills in doing that, and many teachers hadn’t learned those skills in their training.

The dialogue data documented how they felt about teaching special-needs kids, provided stories and examples.

The surveys had open-ended questions, too. So we couldn’t easily summarize these for work groups. We had parents and teachers come in and do two days of qualitative analysis.

We used the same process of small groups doing the analysis and cross-checking each others’ analysis.

everyone was a volunteer. We just provided lunch. It gave the parents a chance to speak. This process created a place for them to speak. (J: WHAT DO YOU THINK ABOUT THE QUALITATIVE CODING PROCESS AS A TOOL FOR SELF-EXPRESSION, SELF-REFLECTION, RELATIONSHIP BUILDING, AND / OR EDUCATION?) (J: CAN YOU POINT TO ANY STUDIES ABOUT THE EDUCATIONAL EFFECTIVENESS OF QUALITATIVE CODING? (HOW WELL EDUCATED PEOPLE CAN BE JUST BY INVOLVING THEM IN A QUALITATIVE CODING PROCESS?) 2. WOULD YOU BE INTERESTED IN A CLASS WHERE YOU LEARNED BY DOING QUALITATIVE ANALYSIS, WHICH YIELDED DATA FOR RESEARCHERS AS WELL AS TEACHING YOU? 2.1 DO YOU THINK THERE MIGHT BE SOME TRADEOFFS IN THAT FOR YOUR QUALITY OF EDUCATION, AND IF SO, WHAT MIGHT THEY BE?)

Can you tell me about what you learned about the program from the data collection?

That some students were served incredibly well and others less well and some not very well at all. But, by and large, special-needs students in the district were attended to.

Communication was a problem—changing regula- tions, getting anything out to anyone clearly was a problem.

Not all parents understood their options.

Remember that we had multiple tables working with the same data each month. Also, each table had professional educators, community members, some of whom were highly educated, other staff—in other words, people who were able to make sense of the data in front of them.

we gave table teams the list of recommendations and had them individually rate each recommendation on a scale of 1 to 3, where 1 meant do it right now and 3 meant put it on the back burner. We added up the individual ratings, calculated a mean, and that’s how we got the list of recommendations for immediate action.
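The rate-and-average step King describes is simple arithmetic; a sketch under assumed data (the recommendation names and ratings are invented for illustration):

```python
from statistics import mean

# Individual ratings per recommendation: 1 = do it right now, 3 = back burner
ratings = {
    "Streamline referral paperwork": [1, 1, 2, 1],
    "Expand staff development": [2, 1, 2, 2],
    "Revise parent handbook": [3, 2, 3, 3],
}

# Lower mean rating = higher priority for immediate action
for rec in sorted(ratings, key=lambda r: mean(ratings[r])):
    print(f"{mean(ratings[rec]):.2f}  {rec}")
```

Sorting by mean gives the immediate-action list directly; a median or a count of 1-votes would be a reasonable alternative if a few extreme raters were a worry.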

Did the top issues or concerns that these work groups identified from the results differ from what you and the Evaluation Consulting Team saw as the central issues? What if they had? Whose issues should be given priorities?

That’s a false dichotomy. We supported the table teams. We took their work and used it, but the only correction was when there were technical errors. The lesson I learned is when you have huge amounts of data, some of which contradict each other, it is so important for people to have to say what data went with their claim.

it’s still very difficult when data contradict each other. We wanted everyone to understand that you’re going to have challenges about making general statements, but it was such a community process.

There must have been some judgment involved for you, the facilitators, in determining whether the claims or recommendations that the work teams made were based on the data. Can you describe how you facili- tated when that was a problem? Tell us a little bit about how you carried out your role then.

We made a link between their participation in the evaluation and how they felt about it. We fed these results back to them as well. In retrospect, it was a good idea.

They really learned over the course of the study. They got better at data analysis, so our role was more comparing and contrasting the results across tables. (J: WHAT DO YOU THINK ABOUT THE COMPARATIVE EFFECTIVENESS AND JUDGEMENT OF "PROFESSIONAL" EVALUATORS (NOTE THAT THAT TERM IS NEBULOUS) VS "LAYPEOPLE" (NON-PROFESSIONALS)?)

One of the really interesting things that you did was to survey participants about the process to learn what they thought about it and their involvement. You’re actually using evaluation to evaluate your own process! That’s great—practicing what we preach. What did you learn from this? Was there anything you learned that prompted you to change your process for capacity building in the future?

You will recall that the district wanted very much to involve people in this self-study process, so part of what we were charged with was to understand what people learned from participating. They really did learn about survey construction, data analysis, and the tentative nature of evaluative conclusions—that several positions could be supported with data.

what I learned was the value of having people think about what they’re learning.

being more purposeful about the instructional value of evaluation.

if they’re not aware of what they’re learning, they may not learn as much.

the key was helping them to be purposeful about the learning process

It validated for them the positive effects of all the time they had committed to the self-study.

reflection helps me as well. It was this study that altered my research agenda—shifted it from participatory evaluation to evaluation capacity building.

you developed 6 commendations and identified 15 issue areas where you developed specific recommendations

What prompted you to consider developing commendations?

There was a sense that the special education teachers were working as hard as they could be expected to work. And there was a sense in the group that it was important to give people credit for the hard work, for the many successes that the data showed.

How can you have 53 recommendations and say nothing positive at all?

The commendations were to acknowledge that there were some positive things—some strong commendations that could be made about the program.

One of your commendations was pretty broad: “All staff, throughout the district, are caring, dedicated, compassionate, and diligent when working with students to help them reach their maximum potential.” The others are more focused. For example, “Parents have expressed that their children are well served and supported by District 11 staff.” Tell us about your role in developing these commendations and your thoughts on them.

They were supposed to come out of the data, but just hearing that first one makes me cringe. You couldn’t have data that supported that claim. (J: WHO WROTE THIS COMMENDATION? IF THE EVALUATORS (INCLUDING THE 100 TO 50-ODD PEOPLE WHO PARTICIPATED IN THE SELF-EVALUATION TEAMS) AGREED TO THIS, WOULD THIS FEELING BE IMPORTANT?) (J: MY ANSWER IS "YES" - FEELINGS ARE IMPORTANT, PEOPLE ARE VERY WELL TUNED TO MAKE THEM, AND WHEN WE LACK OTHER DATA, PEOPLE'S FEELINGS ARE AN IMPORTANT DATA POINT)

We did use the same process for commendations and recommendations, but with far less time available. The table teams worked with the data and supposedly had to support all of these. (J: THERE, THE TABLE TEAMS WERE INVOLVED IN THIS PROCESS, IT WASN'T JUST THE EVALUATORS. WHAT CATEGORIES WOULD YOU USE FOR THESE MORE SPECIFIC, AND MORE GENERAL STATEMENTS? WHAT ARE THE DIFFERENCES BETWEEN THE FIRST ONE STATED ABOVE AND THE SECOND ONE, AND WHAT DO YOU THINK ARE THE USES AND DANGERS OF EACH?)

The purpose was partly use, but remember our ultimate intended users were the Board members, and they did not participate. So the other reason for having people participate was for them to learn and understand.

We have evidence that people learned from it. People became close because they worked together on the data for over a year. It was like summer camp.

The process itself was the outcome?

And the learning as well.

What is it that they’re learning?

Well, certainly the professional educators in the crowd learned about evaluation, and it helped them to make sense of evaluation. This is a fairly minor point, but they learned how we used the table teams.

Parents said we could use the data dialogue idea with other parents. So people took the processes and used them in other places. (J: WHAT OTHER THINGS DO YOU THINK PEOPLE MAY HAVE LEARNED DURING THIS PROCESS? HOW IMPORTANT WOULD YOU CONSIDER THEM AS OUTCOMES? (MAKE SURE YOU DESCRIBE THE CONTEXT AND OPPORTUNITY COST OF YOUR "IMPORTANCE" RATINGS, AND BACK UP WITH ARGUMENT))

The superintendent had expressed the desire to become a “data-driven” district in making decisions. Did you see this occurring?

Absolutely

The point I would want to make is this would not have been possible 10 years ago, before personal computers.

Think how technology has changed what we can do with small groups. How lucky we are that anyone can go manipulate data!

Look at USA Today putting numbers on the front page. People are used to looking at numbers, and these evaluation processes help them to do that.

Why did you decide not to write a report presenting results?

We did not write the formal traditional report that people often write. But we did compile all the information we collected, and members of the Self-Study Team made a presentation to the School Board. But the reporting was really designed to be what was needed for the situation.

Why didn’t we choose to present findings in a formal report? I don’t remember making that conscious decision. We wanted a report that was not too long. We wanted to include all the recommendations. This was going to be the historical document. The materials related to the self-study were already long—they went on and on. In retrospect, I believe we were putting in the key information.

It was the time pressure, too

resource pressures

Who would write a more elaborate report? (J: AND WHAT WOULD THE ADDED BENEFIT OF THAT REPORT BE?) (J: ADDITIONAL QUESTIONS: WHAT KIND OF REPORT ARE THEY TALKING ABOUT? WHAT ARE THE SIMILARITIES AND DIFFERENCES BETWEEN THAT AND WHAT JEAN KING'S TEAM DID REPORT ON THIS PROJECT?)

What might you change today if you had to do this again? King: I would not have had such a large study committee. One hundred, even 50 people, is too large and really has implications for participatory work. (J: WHAT IS KING'S POSSIBLE ARGUMENT ABOUT WHY THIS IS TOO LARGE? (PLEASE INCLUDE GROUNDS, VALUES, AND HYPOTHETICAL OR DATA-DRIVEN CONNECTIONS BETWEEN THOSE GROUNDS AND VALUES))

What’s a good number? King: Eighteen to twenty.

Do you have to go with 50 or 100 in a large organization?

It depends. In this study, we needed to. People

Laurie Stevahn, one of the external evaluators, was a key actor in this because of her knowledge of social psychology. These techniques that she helped put in the process revolutionized my work. (J: THIS IS AN EXAMPLE OF THE VIEW THAT SOCIAL PSYCHOLOGY IS IMPORTANT IN "EVALUATION." WHAT FORM OF EVALUATION IS IT IMPORTANT IN, AND WHY?) (J: MY THOUGHTS: IT'S IMPORTANT TO THIS KIND OF EVALUATION BECAUSE YOU'RE WORKING WITH STAKEHOLDERS AND STAKEHOLDER OPINIONS. IT COULD BE IMPORTANT IN OTHER EVALUATIONS WHEN YOU WANT TO USE DISTRIBUTED EVALUATORS AS A - CHEAPER / MORE EFFECTIVE / WHATEVER OTHER BENEFIT YOU SEE - WAY OF PROCESSING THE DATA)

The table teams, the three-step interview, the structuring activities— they’re the key. I’ve taught school for a long time. These strategies are surefire. They’re winners because they’re absolutely based on psychological principles.

Fitzpatrick’s comments

King’s focus in this evaluation is capacity building

King and her colleagues serve only two roles: as “objective technicians” and as facilitators

As “objective technicians” their role is to make sure groups can support their conclusions with at least two pieces of data.

For capacity building to occur, she and her colleagues strictly limit their roles so others can gain competency in evaluation.

King reports, “There was a sense that we wanted a lot of data—that to ensure coverage we really needed everyone’s opinion.” Was the team’s choice made with awareness of all the things evaluation can do or because surveys and perceptions are what people new to evaluation often consider?

As King reports the results, she does not seem to find them particularly surprising, nor do I.

that will always be true. The results are important, but less important than building competency to address evaluation issues in the future.

The interview also provides us with a second perspective, in addition to the interview with David Fetterman, on serving as an internal evaluator

DISCUSSION QUESTIONS (J: RATE THESE ON THE ANDERSON & KRATHWOHL (2001) "Cognitive Process Dimensions")

1. What are the purposes of King’s evaluation?

What do you see as the strengths and weaknesses of each approach?

Who are the key stakeholders for the capacity-building part of the study?

How does King choose to involve each? Discuss the strengths and weaknesses of her approach to stakeholder involvement for the different purposes of this evaluation.

For the evaluation of the special education department?

What are the strengths of King’s approach to capacity building? What are the disadvantages?

Do you think an external evaluator would have approached the evalu- ation of the special education department differently? If so, how?

Think about your own organization or a place where you have worked. Would building evaluation capacity be helpful to that organization?

What evaluation competencies would you hope to improve in the organization? How might you go about it?