Evaluation for education, learning and change – theory and practice


Evaluation is part and parcel of educating – yet it can be experienced as a burden and an unnecessary intrusion. We explore the theory and practice of evaluation and some of the key issues for informal and community educators, social pedagogues, youth workers and others. In particular, we examine educators as connoisseurs and critics, and the way in which they can deepen their theory base and become researchers in practice.

Contents: introduction · on evaluation · three key dimensions · thinking about indicators · on being connoisseurs and critics · educators as action researchers · some issues when evaluating informal education · conclusion · further reading and references · acknowledgements · how to cite this article

A lot is written about evaluation in education – a great deal of it misleading and confused. Many informal educators such as youth workers and social pedagogues are suspicious of evaluation because they see it as something imposed from outside – something we are asked to do, or that others require of us. As Gitlin and Smyth (1989) comment, from its Latin origin meaning ‘to strengthen’ or to empower, the term evaluation has taken a numerical turn – it is now largely about the measurement of things – and in the process can easily slip into becoming an end rather than a means. In this discussion of evaluation we will be focusing on how we can bring questions of value (rather than numerical worth) back into the centre of the process. Evaluation is part and parcel of educating. As informal educators we are constantly called upon to make judgements, to make theory, and to discern whether what is happening is for the good. We have, in Elliot W. Eisner’s words, to be connoisseurs and critics. In this piece we explore some important dimensions of this process: the theories involved; the significance of viewing ourselves as action researchers; and some issues and possibilities around evaluation in informal and community education, youth work and social pedagogy. First, however, we need to spend a little time on the notion of evaluation itself.

On evaluation

Much of the current interest in evaluation theory and practice can be directly linked to the expansion of government programmes (often described as the ‘New Deal’) during the 1930s in the United States, and to the implementation of various initiatives during the 1960s (such as Johnson’s ‘War on Poverty’) (see Shadish, Cook and Leviton 1991). From the 1960s on, ‘evaluation’ grew as an activity, as a specialist field of employment with its own professional bodies, and as a body of theory. With large sums of state money flowing into new agencies (with projects and programmes often controlled or influenced by people previously excluded from such political power), officials and politicians looked to increased monitoring and review both to curb what they saw as ‘abuses’ and to increase the effectiveness and efficiency of their programmes. A less charitable reading would be that they were increasingly concerned with micro-managing initiatives and with controlling the activities of new agencies and groups. Their efforts were aided in this by developments in social scientific research. Of special note here are the activities of Kurt Lewin and the interest in action research after the Second World War.

As a starter I want to offer an orienting definition:

Evaluation is the systematic exploration and judgement of working processes, experiences and outcomes. It pays special attention to aims, values, perceptions, needs and resources.

There are several things that need to be said about this.

First, evaluation entails gathering, ordering and making judgements about information in a methodical way. It is a research process.

Second, evaluation is something more than monitoring. Monitoring is largely about ‘watching’ or keeping track, and may well involve things like performance indicators. Evaluation involves making careful judgements about the worth, significance and meaning of phenomena.

Third, evaluation is a sophisticated undertaking. There is no simple way of making good judgements. It involves, for example, developing criteria or standards that are meaningful and that honour the work and those involved.

Fourth, evaluation operates at a number of levels. It is used to explore and judge practice and programmes and projects (see below).

Last, if it is to have any meaning, evaluation must look at the people involved, the processes, and any outcomes we can identify. Appreciating and getting a flavour of these involves dialogue. This makes the focus enquiry rather than measurement – although some measurement might be involved (Rowlands 1991). The result has to be an emphasis upon negotiation and consensus concerning the process of evaluation, and the conclusions reached.

Three key dimensions

Basically, evaluation is either about proving that something is working or needed, or about improving practice or a project (Rogers and Smith 2006). The first often arises out of our accountability to funders, managers and, crucially, the people we are working with. The second is born of a wish to do what we do better. We look to evaluation as an aid to strengthen our practice, organization and programmes (Chelimsky 1997: 97-118).

To help make sense of the development of evaluation I want to explore three key dimensions or distinctions and some of the theory associated.

Programme or practice evaluation? First, it is helpful to make a distinction between programme and project evaluation, and practice evaluation. Much of the growth in evaluation has been driven by the former.

Programme and project evaluation. This form of evaluation is typically concerned with making judgements about the effectiveness, efficiency and sustainability of pieces of work. Here evaluation is essentially a management tool. Judgements are made in order to reward the agency or the workers, and/or to provide feedback so that future work can be improved or altered. The former may well be related to some form of payment by results – such as the giving of bonuses for ‘successful’ activities or the invoking of penalty clauses for work deemed not to have met the objectives set for it – and to decisions about further funding. The latter is important and necessary for the development of the work.

Practice evaluation. This form of evaluation is directed at the enhancement of work undertaken with particular individuals and groups, and at the development of participants (including the informal educator). It tends to be an integral part of the working process. In order to respond to a situation, workers have to make sense of what is going on, and how they can best intervene (or not intervene). Similarly, other participants may also be encouraged, or take it upon themselves, to make judgements about the situation. In other words, they evaluate the situation and their part in it. Such evaluation is sometimes described as educative or pedagogical as it seeks to foster learning. But this is only part of the process. The learning involved is oriented to future or further action. It is also informed by certain values and commitments (informal educators need to have an appreciation of what might make for human flourishing and what is ‘good’). For this reason we can say the approach is concerned with praxis – action that is informed and committed.

These two forms of evaluation will tend to pull in different directions. Both are necessary – but just how they are experienced will depend on the next two dimensions.

Summative or formative evaluation? Evaluation can be primarily directed at one of two ends:

  • To enable people and agencies to make judgements about the work undertaken; to identify their knowledge, attitudes and skills, and to understand the changes that have occurred in these; and to increase their ability to assess their learning and performance (formative evaluation).
  • To enable people and agencies to demonstrate that they have fulfilled the objectives of the programme or project, or to demonstrate they have achieved the standard required (summative evaluation).

Either can be applied to a programme or to the work of an individual. Our experience of evaluation is likely to be different according to the underlying purpose. If it is to provide feedback so that programmes or practice can be developed we are less likely, for example, to be defensive about our activities. Such evaluation isn’t necessarily a comfortable exercise, and we may well experience it as punishing – especially if it is imposed on us (see below). Often a lot more is riding on a summative evaluation. It can mean the difference between having work and being unemployed!

Banking or dialogical evaluation? Last, it is necessary to explore the extent to which evaluation is dialogical. As we have already seen, much evaluation is imposed or required by people external to the situation. The nature of the relationship between those requiring evaluation and those being evaluated is, thus, of fundamental importance. Here we can usefully contrast the dominant or traditional model, which tends to see the people involved in a project as objects, with an alternative, dialogical approach that views all those involved as subjects. This division has many affinities with Freire’s (1972) split between banking and dialogical models of education.

Exhibit 1: Rowlands on traditional (banking) and alternative (dialogical) evaluation

Joanna Rowlands has provided us with a useful summary of these approaches to evaluation. She was particularly concerned with the evaluation of social development projects.

The characteristics of the traditional (banking) approach to evaluation:

1.     A search for objectivity and a ‘scientific approach’, through standardized procedures. The values used in this approach… often reflect the priorities of the evaluator.

2.     An over-reliance on quantitative measures. Qualitative aspects…, being difficult to measure, tend to be ignored.

3.     A high degree of managerial control, whereby managers can influence the questions being asked. Other people, who may be affected by the findings of an evaluation, may have little input, either in shaping the questions to be asked or in reflecting on the findings.

4.     Outsiders are usually contracted as evaluators in the belief that this will increase objectivity, and there may be a negative perception of them by those being evaluated.

The characteristics of the alternative (dialogical) approach to evaluation:

1.     Evaluation is viewed as an integral part of the development or change process and involves ‘reflection-action’. Subjectivity is recognized and appreciated.

2.     There is a focus on dialogue, enquiry rather than measurement, and a tendency to use less formal methods like unstructured interviews and participant observation.

3.     It is approached as an ‘empowering process’ rather than control by an external body. There is a recognition that different individuals and groups will have different perceptions. Negotiation and consensus are valued concerning the process of evaluation, the conclusions reached, and the recommendations made.

4.     The evaluator takes on the role of facilitator, rather than being an objective and neutral outsider. Such evaluation may well be undertaken by ‘insiders’ – people directly involved in the project or programme.

Adapted from Joanna Rowlands (1991) How do we know it is working? The evaluation of social development projects, and discussed in Rubin (1995: 17-23)

_________

We can see in these contrasting models important questions about power and control, and about the way in which those directly involved in programmes and projects are viewed. Dialogical evaluation places the responsibility for evaluation squarely on the educators and the other participants in the setting (Jeffs and Smith 2005: 85-92).

Thinking about indicators

The key part of evaluation, some may argue, is framing the questions we want to ask, and specifying the information we want to collect, such that the answers provide us with indicators of change. Unfortunately, as we have seen, much of the talk and practice around indicators in evaluation has been linked to rather crude measures of performance and the need to justify funding (Rogers and Smith 2006). We want to explore the sort of indicators that might be more fitting to the work we do.

In common usage an indicator points to something; it is a sign or symptom. The difficulty facing us is working out just what the things we see might be signs of. In informal education – and any authentic education – the results of our labours may only become apparent some time later, in the way that people live their lives. In addition, any changes in behaviour we see may be specific to the situation or relationship (see below). Further, it is often difficult to identify who or what was significant in bringing about change. Last, when we look at, or evaluate, the work, as E. Lesley Sewell (1966) put it, we tend to see what we are looking for. For these reasons, a lot of the outcomes claimed in evaluations and reports about work with particular groups or individuals have to be taken with a large pinch of salt.

Luckily, in trying to make sense of our work and the sorts of indicators that might be useful in evaluation, we can draw upon wisdom about practice, broader research findings, and our values.

Exhibit 2: Evaluation – what might we need indicators for?

We want to suggest four possible areas that we might want indicators for:

The number of people we are in contact with and working with. In general, as informal educators we should expect to make and maintain a lot of contacts. This is so people know about us, and the opportunities and support we can offer. We can also expect to involve smaller numbers of participants in groups and projects, and an even smaller number as ‘clients’ in intensive work. The numbers we might expect – and the balance between them – will differ from project to project (Jeffs and Smith 2005: 116-121). However, through dialogue it does seem possible to come to some agreement about these – and in the process we gain a useful tool for evaluation.

The nature of the opportunities we offer. We should expect to be asked questions about the nature and range of opportunities we offer. For example, do young people have a chance to talk freely and have fun, to expand and enlarge their experience, and to learn? As informal educators we should also expect to work with people to build varied programmes, groups and activities with different foci.

The quality of relationships available. Many of us talk about our work in terms of ‘building relationships’. By this we often mean that we work both through relationship, and for relationship (see Smith and Smith forthcoming). This has come under attack from those advocating targeted and more outcome-oriented work. However, the little sustained research that has been done confirms that it is the relationships that informal educators and social pedagogues form with people, and encourage them to develop with others, that really matter (see Hirsch 2005). Unfortunately, identifying sensible indicators of progress is not easy – and the job of evaluation becomes difficult as a result.

How well people work together and for others. Within many of the arenas where informal education flourishes there is a valuing of working so that people may organize things for themselves, and be of service to others. The respect in which this is held is also backed up by research. We know, for example, that people involved in running groups generally grow in self-confidence and develop a range of skills (Elsdon 1995). We also know that communities where a significant number of people are involved in organizing groups and activities are healthier, have more positive experiences of education, are more active economically, and have less crime (Putnam 2000). (Taken from Rogers and Smith 2006)

__________

For some of these areas it is fairly easy to work out indicators. However, when it comes to things like relationships, as Lesley Sewell noted many years ago, ‘Much of it is intangible and can be felt in atmosphere and spirit. Appraisal of this inevitably depends to some extent on the beholders themselves’ (1966: 6). There are some outward signs – like the way people talk to each other. In the end though, informal education is fundamentally an act of faith. However, our faith can be sustained and strengthened by reflection and exploration.
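To make the first of the areas in Exhibit 2 concrete, here is a minimal sketch in Python of how a team might record contact-level figures for a reporting period and express the balance between them. It is an illustration only: the class name, fields and numbers are our own assumptions, not something prescribed by the article or by Rogers and Smith (2006).

```python
from dataclasses import dataclass

@dataclass
class ContactIndicators:
    """Counts for one reporting period (Exhibit 2, first area):
    broad contacts, regular participants, intensive 'clients'."""
    contacts: int      # people who know of us and the support on offer
    participants: int  # people involved in groups and projects
    clients: int       # people worked with intensively

    def balance(self) -> dict:
        """Express the smaller circles as a share of all contacts,
        so the balance can be discussed and agreed through dialogue."""
        total = max(self.contacts, 1)  # guard against division by zero
        return {
            "participants_per_contact": self.participants / total,
            "clients_per_contact": self.clients / total,
        }

# Invented figures for illustration; real ones would be agreed with
# participants and funders rather than imposed from outside.
spring = ContactIndicators(contacts=240, participants=45, clients=8)
print(spring.balance())  # {'participants_per_contact': 0.1875, 'clients_per_contact': 0.0333...}
```

The point of such a sketch is not the arithmetic, which is trivial, but that the categories and the expected balance are settled through dialogue before any counting starts.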

On being connoisseurs and critics

Informal education involves more than gaining and exercising technical knowledge and skills. It depends on us also cultivating a kind of artistry. In this sense, educators are not engineers applying their skills to carry out a plan or drawing, they are artists who are able to improvise and devise new ways of looking at things. We have to work within a personal but shared idea of the ‘good’ – an appreciation of what might make for human flourishing and well-being (see Jeffs and Smith 1990). What is more, there is little that is routine or predictable in our work. As a result, central to what we do as educators is the ability to ‘think on our feet’. Informal education is driven by conversation and by certain values and commitments (Jeffs and Smith 2005).


Describing informal education as an art does sound a bit pretentious. It may also appear twee. But there is a serious point here. When we listen to other educators, for example in team meetings, or have the chance to observe them in action, we inevitably form judgements about their ability. At one level, for example, we might be impressed by someone’s knowledge of the income support system or of the effects of different drugs. However, such knowledge is of little use if it cannot be brought to bear in the right way. We may be informed and able to draw on a range of techniques, yet the thing that makes us special is the way in which we are able to combine these and improvise in response to the particular situation. It is this quality that we are describing as artistry.

For Donald Schön (1987: 13) artistry is an exercise of intelligence, a kind of knowing. Through engaging with our experiences we are able to develop maxims about, for example, group work or working with an individual. In other words, we learn to appreciate – to be aware and to understand – what we have experienced. We become what Eisner (1985; 1998) describes as ‘connoisseurs’. This involves very different qualities to those required by dominant models of evaluation.

Connoisseurship is the art of appreciation. It can be displayed in any realm in which the character, import, or value of objects, situations, and performances is distributed and variable, including educational practice. (Eisner 1998: 63)

The word connoisseurship comes from the Latin cognoscere, to know (Eisner 1998: 6). It involves the ability to see, not merely to look. To do this we have to develop the ability to name and appreciate the different dimensions of situations and experiences, and the way they relate one to another. We have to be able to draw upon, and make use of, a wide array of information. We also have to be able to place our experiences and understandings in a wider context, and connect them with our values and commitments. Connoisseurship is something that needs to be worked at – but it is not a technical exercise. The bringing together of the different elements into a whole involves artistry.

However, educators need to become something more than connoisseurs. We need to become critics.

If connoisseurship is the art of appreciation, criticism is the art of disclosure. Criticism, as Dewey pointed out in Art as Experience, has at its end the re-education of perception… The task of the critic is to help us to see.

Thus…  connoisseurship provides criticism with its subject matter. Connoisseurship is private, but criticism is public. Connoisseurs simply need to appreciate what they encounter. Critics, however, must render these qualities vivid by the artful use of critical disclosure. (Eisner 1985: 92-93)

Criticism can be approached as the process of enabling others to see the qualities of something. As Eisner (1998: 6) puts it, ‘effective criticism functions as the midwife to perception. It helps it come into being, then later refines it and helps it to become more acute’. The significance of this for those who want to be educators is, thus, clear. Educators also need to develop the ability to work with others so that they may discover the truth in situations, experiences and phenomena.

Educators as action researchers

Schön (1987) talks about professionals being ‘researchers in the practice context’. As Bogdan and Biklen (1992: 223) put it, ‘research is a frame of mind – a perspective people take towards objects and activities’. For them, and for us here, it is something that we can all undertake. It isn’t confined to people with long and specialist training. It involves (Stringer 1999: 5):

  • A problem to be investigated.
  • A process of enquiry.
  • Explanations that enable people to understand the nature of the problem.

Within the action research tradition there have been two basic orientations. The British tradition – especially that linked to education – tends to view action research as research oriented toward the enhancement of direct practice. For example, Carr and Kemmis provide a classic definition:

Action research is simply a form of self-reflective enquiry undertaken by participants in social situations in order to improve the rationality and justice of their own practices, their understanding of these practices, and the situations in which the practices are carried out (Carr and Kemmis 1986: 162).

The second tradition, perhaps more common within the social welfare field – and most certainly the broader understanding in the USA – is of action research as ‘the systematic collection of information that is designed to bring about social change’ (Bogdan and Biklen 1992: 223). Its practitioners, Bogdan and Biklen continue, marshal evidence or data to expose unjust practices or environmental dangers, and recommend actions for change. It has been linked to traditions of citizen action and community organizing, but in more recent years has been adopted by workers in very different fields.

In many respects, this distinction mirrors one we have already been using – between programme evaluation and practice evaluation. In the latter, we may well set out to explore a particular piece of work. We may think of it as a case study – a detailed examination of one setting, or a single subject, a single depository of documents, or one particular event (Merriam 1988). We can explore what we did as educators: what were our aims and concerns; how did we act; what were we thinking and feeling and so on? We can look at what may have been going on for other participants; the conversations and interactions that took place; and what people may have learnt and how this may have affected their behaviour. Through doing this we can develop our abilities as connoisseurs and critics. We can enhance what we are able to take into future encounters.

When evaluating a programme or project we may ask other participants to join with us to explore and judge the processes they have been involved in (especially if we are concerned with a more dialogical approach to evaluation). Our concern is to collect information, to reflect upon it, and to make some judgements as to the worth of the project or programme, and how it may be improved. This takes us into the realm of what a number of writers have called community-based action research. We have set out one example of this below.

Exhibit 3: Stringer on community-based action research

A fundamental premise of community-based action research is that it commences with an interest in the problems of a group, a community, or an organization. Its purpose is to assist people in extending their understanding of their situation and thus resolving problems that confront them….

Community-based action research is always enacted through an explicit set of social values. In modern, democratic social contexts, it is seen as a process of inquiry that has the following characteristics:

  • It is democratic, enabling the participation of all people.
  • It is equitable, acknowledging people’s equality of worth.
  • It is liberating, providing freedom from oppressive, debilitating conditions.
  • It is life enhancing, enabling the expression of people’s full human potential. (Stringer 1999: 9-10)

The action research process

Action research works through three basic phases:

Look – building a picture and gathering information. When evaluating we define and describe the problem to be investigated and the context in which it is set. We also describe what all the participants (educators, group members, managers etc.) have been doing.

Think – interpreting and explaining. When evaluating we analyse and interpret the situation. We reflect on what participants have been doing. We look at areas of success and any deficiencies, issues or problems.

Act – resolving issues and problems. In evaluation we judge the worth, effectiveness, appropriateness, and outcomes of those activities. We act to formulate solutions to any problems.

(Stringer 1999: 18, 43-44, 160)
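To show how the three phases hang together, here is a minimal Python sketch of one pass through a look–think–act cycle. It is a sketch under stated assumptions: the class, the function signatures and the sample accounts are ours for illustration, not Stringer’s.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EvaluationCycle:
    """One pass through look-think-act. Each phase is a plain
    function so that participants, not only an outside evaluator,
    can supply or amend the steps."""
    look: Callable[[], List[str]]      # gather accounts of what happened
    think: Callable[[List[str]], str]  # interpret and explain them
    act: Callable[[str], None]         # judge worth and agree next steps

    def run(self) -> None:
        accounts = self.look()            # Look: build a picture
        judgement = self.think(accounts)  # Think: interpret and explain
        self.act(judgement)               # Act: resolve issues and problems

# Illustrative use; the accounts would come from dialogue with
# participants (educators, group members, managers and so on).
cycle = EvaluationCycle(
    look=lambda: ["session well attended", "two members planned the trip"],
    think=lambda accounts: f"{len(accounts)} accounts point to growing member ownership",
    act=lambda judgement: print("Agreed next step, given that " + judgement),
)
cycle.run()
```

In practice the ‘act’ phase feeds a revised plan back into the next ‘look’, so the cycle is iterative rather than a one-off.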

We could contrast this with a more traditional, banking, style of research in which an outsider (or the educators working on their own) collects information, organizes it, and comes to some conclusions as to the success or otherwise of the work.

Some issues when evaluating informal education

In recent years informal educators have been put under great pressure to provide ‘output indicators’, ‘qualitative criteria’, ‘objective success measures’ and ‘adequate assessment criteria’. Those working with young people have been encouraged to show how young people have developed ‘personally and socially through participation’. We face a number of problems when asked to approach our work in such ways. As we have already seen, our way of working as informal educators places us within a more dialogical framework. Evaluating our work in a more bureaucratic and less inclusive fashion may well compromise or cut across our work.

There are also some basic practical problems. Here we explore four particular issues identified by Jeffs and Smith (2005) with respect to programme or project evaluations.

The problem of multiple influences. The different things that influence the way people behave cannot easily be disentangled. For example, an informal educator working with a project to reduce teen crime on two estates might notice that the one with a youth club open every weekday evening has less crime than the estate without such provision. But what will this variation, if it even exists, prove? It could be explained, as research has shown, by differences in the ethos of local schools, policing practices, housing, unemployment rates, and the willingness of people to report offences (a small sketch following these four issues illustrates the point).

The problem of indirect impact. Those who may have been affected by the work of informal educators are often not easily identified. It may be possible to list those who have been worked with directly over a period of time. However, much contact is sporadic and may even take the form of a single encounter. The indirect impact is just about impossible to quantify. Our efforts may result in significant changes in the lives of people we do not work with. This can happen as those we work with directly develop. Consider, for example, how we reflect on conversations that others recount to us, or ideas that we acquire second- or third-hand. Good informal education aims to achieve a ripple effect. We hope to encourage learning through conversation and example and can only have a limited idea of what the true impact might be.

The problem of evidence. Change can rarely be monitored even on an individual basis. For example, informal educators who focus on alcohol abuse within a particular group can face an insurmountable problem if challenged to provide evidence of success. They will not be able to measure use levels prior to intervention, during contact or subsequent to the completion of their work. In the end all the educator will be able to offer, at best, is vague evidence relating to contact or anecdotal material.

The problem of timescale. Change of the sort with which informal educators are concerned does not happen overnight. Changes in values, and the ways that people come to appreciate themselves and others, are notoriously hard to identify – especially as they are happening. What may seem ordinary at the time can, with hindsight, be recognized as special.
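To make the first of these issues – multiple influences – concrete, here is a toy Python simulation in which crime on each estate is driven entirely by a hidden factor (unemployment, in this invented model) and the youth club has no effect at all. Every number and name is an assumption made up for illustration.

```python
import random

random.seed(1)

def yearly_offences(unemployment_rate: float) -> int:
    """Offences driven only by a hidden factor plus chance variation;
    the youth club plays no part in this model."""
    base = 50 * unemployment_rate  # hidden driver
    noise = random.gauss(0, 2)     # chance variation
    return max(0, round(base + noise))

estate_with_club = yearly_offences(unemployment_rate=0.6)
estate_without_club = yearly_offences(unemployment_rate=0.9)
print(estate_with_club, estate_without_club)
# The estate with the club records fewer offences - yet the club did
# nothing in this model. The whole gap comes from unemployment.
```

An evaluator comparing the two raw figures would credit the club with a difference it did not cause; only by attending to the other influences can such a comparison be read sensibly.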

Workarounds

There are two classic routes around such practical problems. We can use both as informal educators.

The first is to undertake the sort of participatory action research we have been discussing here. When setting up and running programmes and projects we can build in participatory research and evaluation from the start. We make it part of our way of working. Participants are routinely invited and involved in evaluation. We encourage them to think about the processes they have been participating in, the way in which they have changed and so on. This can be done in ways that fit in with the general run of things that we do as informal educators.

The second route is to make linkages between our own activities as informal educators and the general research literature. An example here is group or club membership. We may find it very hard to identify the concrete benefits individuals gain from being a member of a particular group such as a football team or social club. What we can do, however, is look to the general research on such matters. We know, for example, that involvement in such groups builds social capital. We have evidence that:

In those countries where the state invested most in cultural and sporting facilities young people responded by investing more of their own time in such activities (Gauthier and Furstenberg 2001).

The more involved people are in structured leisure activities, good social contacts with friends, and participation in the arts, cultural activities and sport, the more likely they are to do well educationally, and the less likely they are to be involved even in low-level delinquency (Larson and Verma 1999).

There appears to be a strong relationship between the possession of social capital and better health: ‘As a rough rule of thumb, if you belong to no groups but decide to join one, you cut your risk of dying over the next year in half. If you smoke and belong to no groups, it’s a toss-up statistically whether you should stop smoking or start joining’ (Putnam 2000: 331). Regular club attendance, volunteering, entertaining, or church attendance is the happiness equivalent of getting a college degree or more than doubling your income; civic connections rival marriage and affluence as predictors of life happiness (Putnam 2000: 333).

This approach can work where there is some freedom in how we can respond to funders and others with regard to evaluation. Where we are forced to fill in forms that require answers to certain set questions, we can still use the evaluations that we have undertaken in a participatory manner – and there may even be room to bring in some references to the broader literature. The key here is to remember that we are educators – and that we have a responsibility to foster learning, not only among those we work with in a project or programme, but also among funders, managers and policymakers. We need to view their requests for information as opportunities to work at deepening their appreciation and understanding of informal education and the issues and questions with which we work.

Conclusion

The purpose of evaluation, as Everitt et al. (1992: 129) put it, is to reflect critically on the effectiveness of personal and professional practice. It is to contribute to the development of ‘good’ rather than ‘correct’ practice.

Missing from the instrumental and technicist ways of evaluating teaching are the kinds of educative relationships that permit the asking of moral, ethical and political questions about the ‘rightness’ of actions. When based upon educative (as distinct from managerial) relations, evaluative practices become concerned with breaking down structured silences and narrow prejudices. (Gitlin and Smyth 1989: 161)

Evaluation is not primarily about the counting and measuring of things. It entails valuing – and to do this we have to develop as connoisseurs and critics. We have also to ensure that this process of ‘looking, thinking and acting’ is participative.

Further reading and references

For the moment I have listed some guides to evaluation. At a later date I will be adding in some more contextual material concerning evaluation in informal education.

Berk, R. A. and Rossi, P. H. (1990) Thinking About Program Evaluation, Newbury Park: Sage. 128 pages. Clear introduction with chapters on key concepts in evaluation research; designing programmes; examining programmes (using a chronological perspective). Useful US annotated bibliography.

Eisner, E. W. (1985) The Art of Educational Evaluation. A personal view, Barcombe: Falmer. 272 + viii pages. Wonderful collection of material around scientific curriculum making and its alternatives. Good chapters on Eisner’s championship of educational connoisseurship and criticism. Not a cookbook, rather a way of orienting oneself.

Eisner, E. W. (1998) The Enlightened Eye. Qualitative inquiry and the enhancement of educational practice, Upper Saddle River, NJ: Prentice Hall. 264 + viii pages. Re-issue of a 1990 classic in which Eisner plays with the ideas of educational connoisseurship and educational criticism. Chapters explore these ideas, questions of validity, method and evaluation. An introductory chapter explores qualitative thought and human understanding and final chapters turn to ethical tensions, controversies and dilemmas; and the preparation of qualitative researchers.

Everitt, A. and Hardiker, P. (1996) Evaluating for Good Practice, London: Macmillan. 223 + x pages. Excellent introduction that takes care to avoid technicist solutions and approaches. Chapters examine purposes; facts, truth and values; measuring performance; a critical approach to evaluation; designing critical evaluation; generating evidence; and making judgements and effecting change.

Hirsch, B. J. (2005) A Place to Call Home. After-school programs for urban youth, New York: Teachers College Press. A rigorous and insightful evaluation of the work of six inner city boys and girls clubs that concludes that the most important thing they can and do offer is relationships (both with peers and with the workers) and a ‘second home’.

Patton, M. Q. (1997) Utilization-Focused Evaluation. The new century text 3e, Thousand Oaks, CA: Sage. 452 pages. Claimed to be the most comprehensive review and integration of the literature on evaluation. Sections focus on evaluation use; focusing evaluations; appropriate methods; and the realities and practicalities of utilization-focused evaluation.

Rossi, P. H., Freeman, H. and Lipsey, M. W. (2004) Evaluation. A systematic approach 7e, Newbury Park, CA: Sage. 488 pages. Practical guidance from diagnosing problems through to measuring and analysing programmes. Includes material on formative evaluation procedures, practical ethics, and cost-benefits.

Stringer, E. T. (1999) Action Research 2e, Thousand Oaks, CA: Sage. 229 + xxv pages. Useful discussion of community-based action research directed at practitioners.

References

Bogdan, R. and Biklen, S. K. (1992) Qualitative Research for Education, Boston: Allyn and Bacon.

Carr, W. and Kemmis, S. (1986) Becoming Critical. Education, knowledge and action research, Lewes: Falmer.

Chelimsky E. (1997) Thoughts for a new evaluation society. Evaluation 3(1): 97-118.

Elsdon, K. T. with Reynolds, J. and Stewart, S. (1995) Voluntary Organizations. Citizenship, learning and change, Leicester: NIACE.

Everitt, A., Hardiker, P., Littlewood, J. and Mullender, A. (1992) Applied Research for Better Practice, London: Macmillan.

Freire, P. (1972) Pedagogy of the Oppressed, London: Penguin.

Gauthier, A. H. and Furstenberg, F. F. (2001) ‘Inequalities in the use of time by teenagers and young adults’ in K. Vleminckx and T. M. Smeeding (eds.) Child Well-being, Child Poverty and Child Policy in Modern Nations, Bristol: Policy Press.

Gitlin, A. and Smyth, J. (1989) Teacher Evaluation. Critical education and transformative alternatives, Lewes: Falmer Press.

Jeffs, T. and Smith, M. (eds.) (1990) Using Informal Education, Buckingham: Open University Press.

Jeffs, T. and Smith, M. K. (2005) Informal Education. Conversation, democracy and learning 3e, Nottingham: Educational Heretics Press.

Larson, R. W. and Verma, S. (1999) ‘How children and adolescents spend time across the world: work, play and developmental opportunities’, Psychological Bulletin 125(6).

Merriam, S. B. (1988) Case Study Research in Education, San Francisco: Jossey-Bass.

Putnam, R. D. (2000) Bowling Alone: The collapse and revival of American community, New York: Simon and Schuster.

Rogers, A. and Smith, M. K. (2006) Evaluation: Learning what matters, London: Rank Foundation/YMCA George Williams College. Available as a pdf: www.ymca.org.uk/rank/conference/evaluation_learning_what_matters.pdf.

Rubin, F. (1995) A Basic Guide to Evaluation for Development Workers, Oxford: Oxfam.

Schön, D. A. (1983) The Reflective Practitioner. How professionals think in action, London: Temple Smith.

Schön, D. A. (1987) Educating the Reflective Practitioner. Toward a new design for teaching and learning in the professions, San Francisco: Jossey-Bass.

Sewell, L. (1966) Looking at Youth Clubs, London: National Association of Youth Clubs. Available in the informal education archives: http://www.infed.org/archives/nayc/sewell_looking.htm.

Shadish, W. R., Cook, T. D. and Leviton, L. C. (1991) Foundations of Program Evaluation, Newbury Park, CA: Sage.

Smith, H. and Smith, M. K. (forthcoming) The Art of Helping Others. Being around, being there, being wise. See www.infed.org/helping.

Acknowledgements and credits: Alan Rogers and Sarah Lloyd-Jones were a great help when updating this article – and some of the material in this piece first appeared in Rogers and Smith 2006.

The picture – Office of Mayhem Evaluation – is by xiaming and is reproduced here under a Creative Commons Attribution-Non-Commercial-Share Alike 2.0 Generic licence. Flickr: http://www.flickr.com/photos/xiaming/78385893/

How to cite this article: Smith, M. K. (2001, 2006). Evaluation for education, learning and change – theory and practice, The encyclopedia of pedagogy and informal education. [https://infed.org/mobi/evaluation-theory-and-practice/. Retrieved: insert date]

© Mark K. Smith 2001, 2006
