Playing Past is delighted to be publishing on Open Access – Sport and Coaching: Pasts and Futures [ISBN 978-1-905476-77-0] – a wide-ranging collection of papers which highlights the richness and diversity of studies into sport and coaching. The collection has its origins in a symposium hosted by Manchester Metropolitan University’s Institute for Performance Research in June 2011. The contributors come from different disciplines and include some of Britain’s leading scholars together with a number of early career researchers.

 

Please cite this article as:

Daniels, J. Community Sport Development: Is the Future Evidence? In Day, D. (ed.), Sport and Coaching: Pasts and Futures (Manchester: MMU Sport and Leisure History, 2012), 123-136.

 

7

______________________________________________________________

 

Community Sport Development: Is the Future Evidence?

John Daniels

______________________________________________________________

 

Introduction

Over the past ten years there has been a steady increase in the volume of research relevant to the field of sport development. This expanding research base is adding to the academic credibility of sport development by extending knowledge and improving the understanding of issues that determine the value and impact of interventions for sport’s sake and of those targeting broader social issues such as health (Lechner, 2009; Mota and Esculcas, 2002; Walters et al., 2009), crime (Nichols, 2004; Smith and Waddington, 2004; Crabbe, 2000) and regeneration (Gratton, Shibli and Coleman, 2005; Coaffee, 2008). This evolutionary change in the interpretation of the concept of sport development was signalled by the publication of Game Plan: a strategy for delivering the Government’s sport and physical activity objectives (Department for Culture, Media and Sport, 2002).

The linkage of sport to a much broader policy agenda, whilst not new, has added sophistication to its analysis and increased the number of strategies and policies that promote sport; consequently, the challenge of evaluating interventions has become increasingly intricate. There is very limited guidance for research and evaluation in sport and physical activity development, and the debate continues as to what constitutes good practice (Collins et al., 1999). The purpose of this chapter is to provide a critical appraisal of progress in the evaluation of sport development policy and interventions, and to identify current difficulties in the process, before discussing possible future directions and recommendations for research and evaluation in the field.

Aligning sport development with programme evaluation

The greatest challenge in assessing the state of sport and physical activity has been the lack of reliable data … although this does not invalidate the case for action; it weakens our ability to develop evidence-based policy intervention (Department for Culture, Media and Sport/Strategy Unit, 2002: 22).

When the New Labour government challenged sport to modernise in 2002, it was no surprise that central to the modernisation process was the development of an evidence-based culture. At the time, the Government was investing more money in sport and physical activity than any government preceding it, and more than any government is likely to in the next twenty years. Under the ‘liberal’ and ‘social reformism’ ideals of New Labour (Hylton and Bramham, 2008: 17), sport was seen as having many social benefits, including health, education and social order. Sport’s wider acclaim is not new, at least from a policy perspective. The development of contemporary sports policy began in the late 1960s as a response to a rapidly evolving social, economic and cultural climate giving rise to increased access to leisure. The government of the day responded by improving and increasing facilities and opportunities for leisure activities (Coalter, 2007). Policy progressed from the rhetoric of supply and demand and quickly focussed on the need for equity in the provision of sport and active recreation, as ‘participation patterns were dominated by advantaged sections of the population’ (Hylton and Bramham, 2008: 78). This gave rise to the notion of recreational welfare (Coalter, 2007), and sport’s broader potential was aligned with reducing boredom, frustration and delinquency among young people. Driven by such ideology, there were very few attempts to evaluate policies and strategy beyond participation and demographic measures. Even in the last decade there has been very little evidence to suggest that sport could help remedy society’s ills. A recent review by Collins (2010) acknowledged this lack of evidence as one of the key drivers for revisiting sports policy towards looking after sport itself rather than the broader social issues. Few, from an academic perspective, have risen to the challenge, particularly in producing evidence that might inform strategic agencies and local delivery agents on which interventions work and what best explains why they work. Collins and Kay (2003: 248) alerted academia to the ‘descriptive, atheoretical, short term, output related’ evaluations that lacked ‘context’, acknowledging that, where evaluations of sports projects did exist, few of them conveyed the principles of rigour or the systematic approach that characterise evaluation and underlined the demands of the government.

Policy makers under the same government seemingly lost momentum, as the only reference to reaffirm an evidence base in sport came some six years later in the Department for Culture, Media and Sport’s (DCMS) A Passion for Excellence policy, which simply stated that ‘the sector will now develop a better mechanism for improving the overall evidence base by better co-ordinating the collection of impact evidence’ (DCMS, 2008: 16). This suggests that efforts to create an evidence base were poor, both in terms of the methods (mechanisms) utilised and the outcomes of the interventions to which the methods were applied.

Community sport simply wasn’t ready, nor did it have the resources to embed a research culture into its everyday operations. This is hardly surprising. Compared to disciplines such as health and education, sport development was a new concept and, despite huge government investment, was, and remains, governed at ‘arms-length’ (Oakley and Green, 2001: 74) with quasi-autonomous non-governmental organisation (QUANGO) leadership. Coalter (2007) acknowledged the investment in evidence-based policy making in more centralised government departments, such as Health with its National Institute for Health and Clinical Excellence (NICE) and Education with the Evidence for Policy and Practice Information and Coordinating Centre (EPPI-Centre). By contrast, in sport, several reviews were actioned by key government organisations, including the Policy Action Team (DCMS, 1999) and the Strategy Unit (DCMS, 2002), to ascertain the status quo with regard to evidence-based practice in sport. Most of the reports agreed that little of the evidence was of any use and that it did not offer practitioners any explanation as to why sport was (or was not) achieving its goals or outcomes.

Sport’s interventionists were delivering programmes but only with the end in mind. Processes and programme function were ignored and, unlike in the aforementioned departments, no research authority was put in place to ensure the rigour and reliability of the methodological approaches and designs for gathering and making sense of any evidence collected. Instead, it was the funding bodies who stepped in and published the ‘how to’ guides. One of the first attempts was Sport England’s (2007) practitioners’ guide. Written by experts in evaluating programmes in physical activity and health, the guide promotes the use of theoretical frameworks with which several projects can be aligned (if necessary) to deal more efficiently with the numerous organisations that may have an interest in the results obtained. The guide was representative of the ‘joint working’ agenda of the New Labour Government and also reflected the health and physical activity context laid out by the DCMS (2002).

The document placed importance on ‘measuring progress towards meeting expressed aims and objectives’ (Dugdill and Stratton, 2007: 3), suggesting an outcome-driven evaluation philosophy that was less concerned with whether the outcomes might change during the course of a programme or whether any progress could be explained, and which therefore poorly reflects context. Later, the document does acknowledge that ‘outcome evaluation, on its own is not sufficient’ (Dugdill and Stratton, 2007: 5), but it stops short of explaining how process measures and the power of explanation might be addressed on a scientific level. There are only fleeting glances towards qualitative techniques, relative to a plethora of techniques that measure physical activity levels and health indicators, such as heart rate monitoring and GPS tracking, none of which would assist practitioners with the functioning of a programme or help establish associations beyond health indicators. This is despite a political backdrop in which health is only one of several broader social agendas for sport, and in which evaluation techniques should give equal standing to social forms of enquiry.

The guide also acknowledges ‘limited skills and resources’ (Dugdill and Stratton, 2007: 3) for evaluation, but later refers to careful choices in methods and data collection as important. While it was right to question the readiness (or not) of the sector to evaluate, it would have been wrong to undermine the requirements of a rigorous and systematic evaluation, and so, without experts in evaluation research, an independent body to scrutinise every decision, or funds to orchestrate an evaluation, the dichotomous relationship between evaluation needs and sport development’s inability to supply became very clear.

Further, the language of the guide does not align well with current evaluation and sport development philosophy. The guidance acknowledges the importance of interventions for participants but fails to involve the participants in the process of evaluation. The evaluation is done at them, not with them. While more participatory forms of evaluation research may be more resource intensive and rely on the skills of the researcher, their use is well founded by Weiss (1972), who places the impetus on stakeholder values inherent in the process of change, and by Long and Dart’s (2000) philosophy that stakeholder relationships are integral to the quality of the evaluation.

Sport England published a far more detailed ‘toolkit’ in 2007. Despite the increased detail, the information was strategic in nature. Consequently, practitioners were fed information about managing and monitoring a project in order that Sport England could ascertain which interventions give the greatest gain for a given investment. There was no evidence of the academic engagement so apparent in its previous publication (Sport England, 2007). Explicit information was given on how to capture hard indicators, even templates that offered exact measures of KPIs and various other quantitative outcomes. Support for ‘wider outcomes’, such as improvements in well-being and improved education, was a tenuous acknowledgement that they ‘would not be easy to measure’. Viewed in this way, the document resists any notion of programme theory (Weiss, 1998) as a valuable tool in explaining how and why a programme may or may not work. Further, it devalues the role of the practitioner, who may be the most qualified in explaining the mechanisms and context associated with such outcomes. While it may be difficult to measure such outcomes, this shouldn’t stop community sport from employing approaches to provide evidence that may best explain how changes in employment status or health happen, in the same vein that, while the Government does not have a good evidence base for sport, this shouldn’t ‘invalidate a case for action’ (DCMS, 2002: 22).

At best, the guide serves as a project monitoring template and, despite its name, has little usefulness from an evaluative perspective. Again, there are lessons to be learned in basic terminology. In this case, the terms evaluation and monitoring cannot be used interchangeably. We could assume that the strategic lead for sport simply doesn’t have the understanding of evaluation that is required, or that, despite having such an understanding, it acknowledges that community sport development simply can’t support rigorous evaluation and accepts that quick and dirty monitoring exercises are a more realistic and achievable means of determining the worth of sport development interventions. Long et al. (cited in Nichols, 2004) acknowledged the importance of resources, saying that evidence was lacking ‘because they [practitioners] do not have the funds or skills to conduct their own evaluation, and a higher priority is to assure next year’s funding to allow them to continue’.

Further, a significant amount of funding for community sports strategy comes from Sport England, and so, on a political level, a funding body is bound to be driven by accountability and value for money as opposed to changing behaviour or improving society. It is likely a combination of the outlined issues that best rationalises why sport is seemingly lagging behind its DCMS counterparts – such as the Arts Council – in providing a reliable evidence base for practice.

In a more positive light, such guidance is crucial. If, as Collins and Kay (1999) suggest, the most basic forms of evidence are not being gathered appropriately, then any guidance should be welcomed. The toolkits do provide a more strategic approach to the development and delivery of interventions. They offer illustrative frameworks on how to make sense of practice in order to best collect relevant information. As previously mentioned, at least now practitioners are beginning a programme with the end in mind, even though the end may never be realised. Community sport is a dynamic policy field, once described by scholars as an arm’s length policy area (Oakley and Green, 2001), where selective reinvestment has been influenced externally by, for example, winning an Olympic bid, which changed the strategic and political landscape for community sport by handing the lion’s share of sport development work to National Governing Bodies. Consequently, many pre-Olympic bid community sports programmes with more social agendas may never realise their longer term impacts or sustainability measures. If evaluation is indeed reliant on embedding a research culture within the sports services sectors, then it will inevitably take time (and investment) for sport to truly embrace evaluation research.

To better understand how we may apply the principles of evaluation and provide solutions to the problems identified above – within a community sport development context – we must first understand the concept of sport development and how it may best be aligned with the domain that is evaluation research. Sport development has been described by Collins (1995: 21) as ‘a process whereby effective opportunities, processes, systems and structures are set up to enable and encourage people in all or particular groups and areas to take part in sport for recreation or to improve their performance to whatever level they desire’. From an evaluation perspective these characteristics are significant as they acknowledge that, whatever activities or structures are put into place, they have an apparent effect on those groups encouraged to take part. Further, Collins suggests that sport development is a process, indicating that sport development is a means to an end and not an outcome in its own right. Activities are directed towards ‘enabling’ people to take action, indicating that sport development is not something done on or to people but with them. This, above all other characteristics, demonstrates, at least from Collins’s perspective, that sport development values its function and not just its intended outcomes. Collins also placed value on activity, which gives his interpretation of sport development its interventional context. In programme evaluation terms, we would refer to these as outputs.
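To illustrate where outputs sit, a minimal sketch of the standard programme-theory chain used in the evaluation literature (after Weiss, 1998) may help; the chain is a generic evaluation device rather than Collins’s own formulation:

\[
\text{Inputs} \rightarrow \text{Activities} \rightarrow \text{Outputs} \rightarrow \text{Outcomes} \rightarrow \text{Impacts}
\]

On this reading, a coaching session delivered is an output; any resulting change in participation, health or behaviour is an outcome; and broader social change is an impact. Much of the critique that follows turns on evaluations stopping at outputs while policy claims are made at the level of outcomes and impacts.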

More recent notions of sport development are, according to Houlihan and Green (2011: 4), more ‘normative and moralistic’; that is, the impetus is less on opportunity per se and more on targeting other social agendas. Once again we are alerted to programme evaluation being characterised around social programmes (Rossi, Lipsey and Freeman, 2004; Berk and Rossi, 1990). Hylton and Bramham (2008) describe sport development as providing positive sporting experiences, implying that it is not just the taking part that counts and that there is much more to be gained from participating in sport. They also describe the notion of Community Sport Development, recognising that it is a contested term but one which is characterised by addressing social and political concerns and not simply by placing sport in a community. Like Collins (1995), Hylton and Bramham also recognise process and practice, and so recognise that sport development is action oriented and applied as opposed to a theoretical notion.

Houlihan and Green (2011) noted the changes in our conceptualisation of sport development and attributed the changes to time and context. Time and context are implicit in more recent approaches to evaluation research (Pawson and Tilley, 1997). Time is as constant in sport as it is in any domain. However, few sectors beyond sport can boast such rapid changes in context and setting. In recent years sports policy has coped with an economic downturn, a change in Government and a successful Olympic Games bid. Sport development is constantly referred to as a place of shifting goalposts by those who work within the sector, and was referred to as a ‘crowded policy space’ by Houlihan (2000: 171). This presents a challenge both to the sports development officer, in terms of setting long term and realistic goals, and to the evaluator, who may be tasked with measuring the extent to which goals have been met, or who would have to be sensitive to shifts in policy and still provide valuable evidence upon which strategy or intervention decisions will be made. Coalter (2007: 36) reaffirms that where sport has a developmental context – sport as a tool for social good – then the evaluation should be developmental and should focus not just on what was achieved but also contribute to the functioning of the intervention or strategy. In his words, ‘It is not “sport” that is the key, but the way in which it is provided and experienced’, as sport has no ‘causal powers’. From this perspective, evaluation should avoid generalised notions that sport can reduce crime, as proof of this would be impossible to determine, particularly at the level of the community.

Long and Dart (2000: 72) advocated development-focussed evaluation in their work with ‘at risk’ youth. In their words, ‘we were keen to look beyond reoffending rates and tried to develop a more qualitative appreciation of what the project was achieving by tapping into the experiences of those at the heart of the scheme’. Long and Dart supported the notion that proximity to programme deliverers and participants was key to the strength of the evidence, as it may enrich the data through an improved relationship between the researchers and those involved with the scheme – yet participatory forms of evaluation research are ignored in the aforementioned toolkits.

Coalter (2007) still recognises the value of outcomes but places equal value on what is done in trying to achieve or change them. Both evaluation theorists and sports researchers agree that outcomes are often poorly constructed and understood. Policy makers once referred to sport as a cure-all for society’s ills: sports policy was an anti-drugs policy, an education policy and a crime prevention policy. These are bold statements, and if local strategies have to be aligned with such policy rhetoric in order that funding and support are accessible, it is easy to see why so many interventions are set up to fail. In truth we will never be able to establish causal links between sport and such outcomes; however, as the DCMS (2002) declared, ‘this should not invalidate our case for action’. So what, then, is the alternative? The examples of evaluation in sport-related projects range from small projects aimed at a few participants (Nichols, 2001) to national campaigns targeting large populations (Bell, 2004). Interestingly, the same authors criticising various attempts to evaluate in sport have sought alternative methods, such as theory-driven evaluations, for nearly a decade, so why hasn’t sport’s governance acknowledged their approaches and perspectives in its policy documents? One criticism may be that it is sport’s funders who drive the evaluation. At this level, the evaluation becomes accountability oriented, and value for money or satisfying long term targets could be the ultimate outcomes.

There are varied interpretations of what constitutes value in sport and physical activity programmes. Among the perspectives given in the literature (Chelimsky, 1997) is that of the population, who may place great value on the ways in which a programme is delivered and on whether it has focussed on issues which the community itself has identified (development). There are also the perspectives of community sport development officers, who need to be able to assess with reasonable confidence the success of a programme in relation to its objectives, as a form of feedback on which to base future developments, in order to make decisions regarding the allocation of resources, and to be accountable to programme funders (accountability). Finally, there are the perspectives of academics, who need to be able to analyse success to progress understanding (knowledge) of how the outcomes of a programme may be attained (Coalter, 2007; Nichols, 2004).

These conceptualisations have become apparent because sport’s wide appeal has brought a variety of stakeholders who have an interest in the purpose and quality of community sport development, but not everyone has the same idea about what constitutes that quality or purpose. From an academic perspective it would seem that there is a very narrow field of understanding of evaluation research in sport. Any guidance seemed to sidestep qualitative methods of enquiry. Either there are too few experts in the social sciences willing to work with sport, or policy is demanding truth and certainty over development and understanding, or, thirdly, policy does not understand (or accept as evidence) more social approaches to research. On the latter, the previous government set out to develop an understanding of qualitative enquiry (Spencer et al., 2003). According to Denzin and Giardina (2008) there was little progress, despite a 167-page report that simply read like a set of instructions and rules, with scant regard for context or setting. Perhaps Denzin and Giardina (2008: 66) put it best, stating simply that ‘defining what counts as science is not the state’s business’. Perhaps this was the thinking of academics such as Bell, Nichols, Green, Long and Coalter. Not playing by the rules meant adopting an approach that may not have fitted well with sport’s governing bodies but, for the first time, we could see the importance of intervention matters beyond truth and certainty without compromising development and quality.

And so to the issue of quality. Defining quality in community sport development is a political minefield. Stephenson (cited in McMillan and Parker, 2006) noted that many people know quality when they see it but find it almost impossible to define. Harvey and Newton (2004) attribute this to quality being a personal and social construct, with each construct based on attributes that will vary between stakeholders. The selection of attributes is based on personal (or organisational) values and judgements (Watty, 2003). Consequently, quality is a construct of values and judgement connected with what we take the purpose of community sport development to be.

The complex interplay of community organisations in sport (normally led by the public sector, delivered in combination with the voluntary sector and sometimes with the private sector) makes tensions inevitable, and the evaluator is tasked with making key decisions on whom the evidence will serve best. Dominant among the perspectives of what constitutes evidence and value in sport are those of the funders and policy makers. The strategic lead for sport in England implies that evidence is impact (broader social agenda) oriented and should be approached on three levels (Sport England, 2012):

 

  1. Value for money and benchmarking performance
  2. Focused evaluation
  3. Novelty and innovation in testing new ideas at an interventional level.

 

The notion of value for money is measurable and should be valued, especially as the advocating organisation also has a remit to provide funding for community sport. A good return on investment is a valued outcome in sport development. Benchmarking performance can be ascertained on many levels, but here it is implicit in the satisfaction of predetermined outcome measures, such as key performance indicators, and so performance against cost is easily calculated. Cost-benefit analysis is referred to at an academic level in the evaluation literature (Berk and Rossi, 1990). Here we see harmony between the values of academia and those of practitioners.
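To make the arithmetic concrete, consider a minimal sketch of the kind of benchmarking calculation implied here, using entirely hypothetical figures (the costs, participant numbers and monetised benefits below are illustrative assumptions, not data from any actual programme):

\[
\text{cost per output} = \frac{C}{N} = \frac{\pounds 50{,}000}{400\ \text{new participants}} = \pounds 125\ \text{per participant}
\]

\[
\text{benefit-cost ratio} = \frac{B}{C} = \frac{\pounds 75{,}000}{\pounds 50{,}000} = 1.5
\]

where \(C\) is the total programme cost, \(N\) the output recorded against a key performance indicator, and \(B\) the monetised benefits. Such figures are straightforward to calculate and compare across programmes, which is precisely why funders favour them; what they cannot do is explain how or why the participation occurred.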

The same cannot be said for the remaining two levels. Quite what is meant by focused evaluation we may never know. It could refer to the evaluation process itself being rigorous and systematic, or ‘focus’ may describe where the impetus of the evaluation should lie: for example, on the intervention (in terms of function and quality), on the intervention outcomes (improvements in well-being, education, employment), or on both.

In fact, the context relates to forming case studies which demonstrate good practice and ‘what works’. This is more a collection point for case study material than an approach to evaluation, and, on inspection, none of the case study material has any academic merit. Novelty and innovation could have been contextualised in terms of methodological design and approach to programme evaluation; instead, there is only reference to innovation at an interventional level. Again, any reference to evaluation approaches is absent.

Sport development does place value beyond the boundaries of accountability. There are references to placing value on ‘why’ interventions may work (Sport England, 2007: 3). The agency has collaborated with academia and produced a portal for peer-reviewed research papers that theoretically underpin the notion of sport in development (Coalter, 2009). This is significant, as community sport has acknowledged its anecdotal origins as limiting its evidence base – but not its action (DCMS, 2008). Now practitioners can better rationalise projects and evaluators can explain associations between a programme’s intended outcomes and its activities (Weiss, 1998). Knowledge and understanding are valued outcomes in sport development.

Despite such efforts, there is little use for this academic perspective if evaluation policy and guidance are replete with notions of accountability. There is a complete absence of data regarding participant experiences or quality of provision in the case study (‘Focused Evaluation’) evidence. This is a far cry from Coalter’s philosophy of ‘it’s not what you do it’s the way that you do it’ (Coalter, 2007: 36) … and that’s what gets results. The point being that participation key performance indicators will tell us nothing about associations with crime reduction, social cohesion or education, or about what constitutes good practice in the design and implementation of sport development interventions.

Further review of the case study material demonstrates a lack of methodology, challenging any notion of systematic rigour or the generation of a robust evidence base. How then can policy accept such cases as ones ‘that work’? Moreover, there seems to be a culture of demonstrating success, even though there is also a clear mandate for reporting what doesn’t work, even in the absence of being able to explain why.

From this we can assume that the lead agencies and policy makers for sport are now aware of the issue of evidence but are simply unwilling or unable to better understand what constitutes good evidence. While there now seems to be consultation between the academic research community and the policy makers, the relationship seems fractious and limited. In Donovan’s words, ‘the limited consultation between policy makers and the research evaluation community has led to a lack of policy learning’ (2011: 175). An obsession with impact will not establish the state of the art programmes required to improve our understanding of the mechanisms of change (Pawson and Tilley, 1997). This fragmented relationship was noted by Johnson et al. (2004) as constrained by ‘methodological weakness’ and a philistine attitude of key stakeholders towards academic research. Taut and Brauns (2003) explain this resistance to evaluation through psychosocial barriers, included in which is the need for control. A sport development officer is not immune from personalising the community sports projects they deliver. Their efforts to resist the evaluation exercise may be a bid to keep the status quo, and any deviation may result in reactance and uncooperative behaviour (towards the evaluation). Underpinning the strategies to reduce such barriers is the need to allow stakeholders to be a part of the evaluation process. Empowerment improves communication, commitment and self-esteem. The evaluator may still be faced with problems (too many cooks spoiling the broth), but the data will be rich and the knowledge accumulation rewarding and reciprocal.

The function of sport development evaluation is to assess activities in the light of intended goals and values. Consequently, decisions about intervention design and modification are better informed, and development work towards sport development policy can be more effective. With this in mind, it is timely that community sport development determines its own development agenda for the future through evidence of effectiveness and efficiency. Effective, planned evaluation offers tangible evidence of what has been achieved and thereby offers stakeholders confidence and satisfaction in relation to their sport development role. Practitioners and academics alike are encouraged to raise their profile by engaging in concerted evaluation research strategies and appropriately disseminating their findings. It is hoped that the development of more participatory and theory-driven approaches will aid this process.

 

 

References

 

Bell, B. (2004). An evaluation of the impacts of the Champion Coaching Scheme on youth sport and coaching. Unpublished PhD thesis, Loughborough University, Loughborough.

Berk, R. A., & Rossi, P. H. (1990). Thinking about program evaluation. Thousand Oaks: Sage Publications.

Burnett, C. (2001). Social impact assessment and sport development. International Review for the Sociology of Sport, 36(1), 41-57.

Chelimsky, E. (1997). Thoughts for a new evaluation society. Evaluation, 3(1), 97-118.

Coaffee, J. (2008). Sport culture and the modern state: Emerging themes in stimulating urban regeneration in the UK. International Journal of Cultural Policy, 14(4), 377-397.

Coalter, F. (2007). A wider social role for sport: who’s keeping the score? London: Taylor & Francis.

Collins, M. (2010). From ‘sport for good’ to ‘sport for sport’s sake’ – not a good move for sports development in England. International Journal of Sport Policy and Politics, 2(3), 367-379.

Collins, M. (1995). Sport development regionally and locally. Loughborough: Loughborough University.

Collins, M., Henry, I., Houlihan, B. and Buller, J. (1999). Sport and social inclusion: A report to the Department of Culture Media and Sport. Loughborough: Loughborough University.

Collins, M. F., & Kay, T. (1999). Sport and social exclusion. London: Routledge.

Crabbe, T. (2000). A sporting chance? Using sport to tackle drug use and crime. Drugs: Education, Prevention and Policy, 7(4), 381-391.

Denzin, N. K., & Giardina, M. D. (2008). Qualitative inquiry and the politics of evidence. Walnut Creek: Left Coast Press.

Department for Culture, Media and Sport. (2008). A passion for excellence: An improvement strategy for culture and sport. London: LGA.

Department for Culture, Media and Sport. (1999). Policy Action Team 10: Arts and sport. A report to the Social Exclusion Unit. London: Department for Culture, Media and Sport.

Department for Culture, Media and Sport. (2002). Game Plan: A strategy for delivering the Government’s sport and physical activity objectives. London: Strategy Unit.

Department of Health and Human Services. (2002). Physical activity evaluation handbook. Atlanta, GA: Centers for Disease Control and Prevention.

Donovan, C. (2011). State of the art in assessing research impact: introduction to a special issue. Research Evaluation, 20(3), 175-179.

Dugdill, L. & Stratton, G. (2007). Evaluating sport and physical activity interventions: A guide for practitioners. Salford: Sport England.

Gratton, C., Shibli, S., & Coleman, R. (2005). Sport and economic regeneration in cities. Urban Studies, 42(5-6), 985-999.

Harvey, L. & Newton, J. (2004). Transforming quality evaluation. Quality in Higher Education, 10(2), 149-65.

Houlihan, B. (2000). Sporting excellence, schools and sports development: The politics of crowded policy spaces. European Physical Education Review, 6(2), 171- 193.

Houlihan, B., & Green, M. (2011). Routledge handbook of sports development. London: Taylor & Francis.

Hylton, K., & Bramham, P. (2008). Sports development: Policy, process and practice (2nd ed.). Abingdon: Taylor & Francis.

Johnson, I. M., Williams, D. A., Wavell, C. & Baxter, G. (2004). Impact evaluation, professional practice, and policy making. New Library World, 105(1/2), 33-46.

Lechner, M. (2009). Long-run labour market and health effects of individual sports activities. Journal of Health Economics, 28(4), 839-854.

Long, J., & Dart, J. (2000). Opening-up: engaging people in evaluation. International Journal of Social Research Methodology, 4(1), 71-78.

Long, J., Welch, M., Bramham, P., Butterfield, J., Hylton, K. & Lloyd, E. (2002). Count Me In: The dimensions of social inclusion through culture and sport. Leeds: Leeds Metropolitan University.

McMillan, W. & Parker, M.E. (2006). ‘Quality is bound up with our values’: Evaluating the quality of mentoring programmes. Quality in Higher Education, 11(2), 151-160.

Mota, J., & Esculcas, C. (2002). Leisure-time physical activity behavior: structured and unstructured choices according to sex, age, and level of physical activity. International Journal of Behavioral Medicine, 9(2), 111-121.

Nichols, G. (2001). A realist approach to evaluating the impact of sports programmes on crime reduction. LSA Publication, 73, 71-80.

Nichols, G. (2004). Crime and punishment and sports development. Leisure Studies, 23(2), 177-194.

Oakley, B., & Green, M. (2001). Still playing the game at arm’s length? The selective re-investment in British sport, 1995 – 2000. Managing Leisure, 6(2), 74-94.

Pawson, R., & Tilley, N. (1997). Realistic evaluation. London: Sage Publications.

Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). London: Sage Publications, Inc.

Salmon, J., Ball, K., Crawford, D., Booth, M., Telford, A., Hume, C., Jolley, D. & Worsley, A. (2005). Reducing sedentary behavior and increasing physical activity among 10-year-old children: overview and process evaluation of the ‘Switch-Play’ intervention. Health Promotion International, 20(1), 7-17.

Smith, A. & Waddington, I. (2004). Using ‘sport in the community schemes’ to tackle crime and drugs use among young people: Some policy issues and problems. European Physical Education Review, 10(3), 279-298.

Spencer, L., Ritchie, J., Lewis, J. & Dillon, L. (2003). Quality in qualitative evaluation: A framework for assessing research evidence. London: Cabinet Office.

Sport England. (2012). Evaluating Impact. Retrieved March 15, 2012 from www.sportengland.org/research/evaluating_impact.aspx

Walters, S., Barr-Anderson, D., Wall, M., & Neumark-Sztainer, D. (2009). Does participation in organised sports predict future physical activity for adolescents from diverse economic backgrounds? Journal of Adolescent Health, 44(3), 268-274.

Weiss, C. H. (1972). Evaluation research. New Jersey: Prentice-Hall.

Weiss, C. H. (1998). Evaluation: Methods for studying programs and policies. New Jersey: Prentice Hall.