Evidence and evaluation

What is evidence and evaluation?

Evidence and evaluation play a key part in Family Hub development, informing the approach a locality chooses to adopt, and our ongoing learning about how we can improve services for the benefit of children and families.

As well as covering the evaluation of outcomes for children and families, this module includes guidance on evaluating implementation, through implementation and process evaluations, which can help assess how professionals are experiencing the change process and whether Family Hubs are driving improvements in interprofessional collaboration.

Family Hubs will be drawing on existing evidence, or creating evidence through local evaluation, to understand:

  • the type of Family Hub model and service offer that will address local need and deliver the desired outcomes
  • what works (and, equally as importantly, what doesn’t work) for children and families in a particular context
  • why something that has been put in place has or hasn’t worked.


Starting with good evidence

The Family Hub development process module is a key resource. It sets out relevant sources of evidence and good practice around gathering data (through local needs assessments, population needs assessments, and by drawing on existing evaluative data and research on effective practice) to inform the planning and development of a Family Hub.

Different evidence sources may be of variable quality. They may also reflect different perspectives on an issue – for example, you may consider a service in terms of its impact on outcomes, in terms of the experience of service users, or in terms of value for money. Assessing the quality of evidence, and reflecting with stakeholders on how it should be interpreted, are key to making the best possible use of the information available.

Learning by evaluating 

In April 2021, the Department for Education (DfE) awarded two research contracts to carry out a programme of research for the Family Hubs Evaluation Innovation Fund. The Fund has five core objectives:

  1. to support Family Hubs with evaluation capacity and resource via government funding
  2. to improve the quality and rigour of the evidence base on the effectiveness of existing Family Hub delivery models
  3. to generate knowledge and learning for local authorities and other commissioners on the factors driving the service implementation and performance, outcomes and impacts, and value for money of Family Hubs
  4. to create a step change in the standards of evaluation of Family Hubs, by showcasing good-quality evaluation, and generating learning and toolkits for future evaluations and service planning
  5. to aid national policymaking on Family Hubs by building an evidence base for any future government policy.

The two consortia will support the DfE and the National Centre for Family Hubs (NCFH) in achieving these objectives and will work with partner Family Hubs to deliver high-quality local evaluations.

Although not every local authority will be able to work with independent evaluators, it is useful to consider their role because independent evaluation can bring a level of objectivity and rigour that self-evaluation may lack.

The Ecorys Partnership

The Ecorys Partnership is a collaboration between researchers from Ecorys UK, Clarissa White Research and Starks Consulting Ltd, and five local authorities at different stages of Family Hub development, all transitioning towards an integrated 0–19 services offer. These are:

  1. Bristol City Council
  2. Essex County Council
  3. Leeds City Council
  4. Sefton Council
  5. Suffolk County Council.

The evaluation operates at two levels:

  • local authority level: five bespoke individual evaluations of Family Hubs, underpinned by theories of change, and tailored to meet local needs and circumstances
  • overall project level: a realist synthesis of evidence generated by the five local evaluations, to draw out learning and insights to inform policy and practice.

The evaluation adopts a mixed methods design, combining quantitative and qualitative data collection and analysis with a programme of action learning with professionals and families. Each local evaluation includes process, impact and economic components, which were codesigned with local authorities during an initial six-month scoping stage.

The evaluation runs over 24 months, with interim reporting in summer 2022 and a final report in summer 2023.

Sheffield Hallam University and Doncaster Metropolitan Borough Council (DMBC) 

This project focuses on the Family Hub model in Doncaster Metropolitan Borough Council (DMBC), chosen as one of the most well-developed locality-based Family Hub models in England. The evaluation study will be delivered through three broad work packages, outlined below, drawing on a set of theory of change and impact evaluation workshops:

Work package 1: implementation and performance evaluation, including:

  • strategic stakeholder-level data-gathering, to understand overall strategy
  • analysis of administrative and secondary data, to understand patterns of referral, engagement and service delivery
  • twelve case studies of Family Hub sites, to understand the experiences of professionals and families
  • service user survey.

Work package 2: outcomes and impact evaluation, establishing a monitoring and evaluation framework (MEF) and using it to assess change in outcomes and the contribution of Family Hubs to any changes.

Work package 3: value for money evaluation.

The evaluation runs over 24 months, with interim reporting in summer 2022 and a final report in summer 2023.

What happens when independent evaluation is not possible?

Where independent evaluation is not possible, local authorities can collect evaluative information themselves. Given that Family Hubs in each area will be tailored to the needs of the local population and shaped around different multiagency working arrangements and other local factors, we need to build knowledge about what works in different contexts and for different populations. The Family Hub development process module includes suggestions for roles and responsibilities, as allocating resource and budget will be key for implementation.

This learning can further contribute to the development of the evidence base for Family Hubs: NCFH will play a key role in ensuring that this cumulative insight into what works is shared, so that others can learn from successes and from things that didn’t go as hoped.

 

Why does evidence and evaluation matter to families?

It is important to know whether the services or interventions you provide are beneficial for the children and families who most need them. The Magenta Book, published by HM Treasury, is a helpful resource, explaining why evaluation is useful and how it can inform thinking before, during and after the implementation of an intervention.

The Early Intervention Foundation’s (EIF’s) 10 steps for evaluation success offers explainers, tips and links to additional ‘how-to’ resources on all the stages of evaluation maturity, including theory of change and logic models, feasibility studies, pilot evaluations, impact assessments, and the quality assurance systems that are essential if interventions are going to remain effective when offered at scale.

Why does evidence and evaluation matter to Family Hub implementation?

As set out above, Family Hub providers play a role as both evidence users (through collecting information and collating and making use of existing evidence) and evidence producers (through conducting evaluations). Information and data can help inform providers’ decisions, and justify their choices. Evaluation can provide a feedback mechanism to inform the further development of Family Hubs – based on assessments of whether an offer is working, and if not, why not, and what could be done differently. As such, they are part of an evidence and knowledge ecosystem about what works for families.

 

Good practice example

Doncaster started with the children’s centre inspection framework as a basis for evaluating the success of their Family Hubs. They now have a comprehensive set of evidence collection measures, using data and information from a range of sources. This reflects their multiagency approach to working, in which a number of 0–5 services have been integrated into a single management structure.

Collecting evidence through methods such as case studies and satisfaction ratings means that they know what is working well for families, so they are able to adapt their offer if necessary. From data collection, they are also able to identify which groups are engaging with Family Hubs – and, if necessary, extend their reach to other groups.

Finally, through collecting data on outcomes of interest, they can demonstrate that they have had a measurable impact on children’s speech and language, childhood obesity and breastfeeding rates.

Which Family Hub policy pillars are supported by evidence and evaluation?

  • Access: information and data around the need for Family Hubs can identify the general and specific needs of the local population – for example, identifying families who may be a particular priority to reach or to offer early intervention support.
  • Connection: information and data can help us understand how well services, professionals and sectors are working together to serve their communities.
  • Relationships: the evidence base can inform how Family Hubs can strengthen family relationships and facilitate services to work relationally to achieve lasting change.

 

Who needs to be involved in evidence and evaluation?

It is important that evidence and evaluation are inclusive and that evidence users are engaged in the process at an early stage.

Different stakeholders will have different skills and perspectives to offer when considering how information should be collected, analysed and interpreted. For example, some professionals may have strong technical understanding of the data, while others may bring a strategic or expert perspective. Families should be involved in interpreting and using evidence about Family Hubs, to capture the lived experience perspective.

Given that families are the intended beneficiaries of Family Hubs, and are likely to provide data for any evaluation, families should be involved throughout the evaluation process. This is both for ethical reasons, and to help ensure that the approach adopted is successful and is relevant and meaningful to them.

Good practice example

The Ecorys programme evaluating Family Hubs nationally plans to use participatory action research (PAR) with parents and carers in Bristol and Suffolk to include the voice of families. PAR is a qualitative approach involving cycles of data collection. The process starts by acknowledging that families are ‘experts in their own lives’. Families then document their own experiences of services and professionals, and are involved in collecting data and information – for example, by interviewing Family Hub staff.

How to collect evidence and conduct an evaluation

Conducting an evaluation and collecting evidence may seem daunting if you haven’t had any previous experience. However, there is a wealth of information to support you. This module provides a systematic, step-by-step guide.

  • Step 1

    The theory of change is the basis for your evaluation. It sets out the need or case for a Family Hub in your area, and the outcomes you intend it to have for families. It also details the assumptions and evidence base you are drawing on in theorising how your Family Hub will secure those outcomes: why your Family Hub is being introduced and why it should result in the changes you want to deliver.

  • Step 2

    The next step is to further expand on the theory of change by developing a logic model and a blueprint for your implementation. This step describes what the Family Hub model will look like.

    A logic model is a visual representation of the resources needed to deliver a service, the specific outputs the service will produce, and how those outputs will in turn contribute to the desired outcomes. (A purely illustrative sketch of this input-to-outcome chain appears at the end of this step.)

    Logic models are useful to inform your evaluation, as they help to identify what outcome measures and other data you need to collect to understand whether an intervention is having the impact intended. Logic models can also be useful in informing how you monitor service quality during implementation. Finally, they can be useful communication tools for sharing information about what you are doing with others.

    A blueprint is a visual representation which further specifies the logic model. It identifies specific objectives for each intervention activity and how these link to short-term outcomes.

    Blueprints encourage careful consideration of the intervention intensity and variety required to achieve desired outcomes. They also allow us to check the logic of any assumptions that we have made. Although you’ll want to develop an outcomes framework that reflects your local area, you may want to consider the routine outcome measurement used by children and young people’s mental health services and how to embed multiagency outcomes frameworks.

    In the Ecorys and Sheffield Hallam evaluations, some of the sources used in outcome mapping included the Public Health England fingertips tool, Office for National Statistics data and local administrative data on issues such as social care step downs.
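
    For readers who find it helpful to see the input-to-outcome chain written out explicitly, the sketch below expresses a logic model as a simple data structure. It is purely illustrative: the service names, outputs and outcomes are invented assumptions, not drawn from any of the evaluations described in this module, and a real logic model would be co-designed with local stakeholders.

    ```python
    # Purely illustrative logic model for a hypothetical Family Hub service.
    # All entries are invented for illustration; a real logic model would be
    # co-designed locally and tailored to your own theory of change.
    logic_model = {
        "inputs": ["staff time", "venues", "multiagency partnerships", "budget"],
        "activities": ["parenting group sessions", "infant feeding drop-ins"],
        "outputs": ["number of sessions delivered", "number of families attending"],
        "short_term_outcomes": ["improved parental confidence", "increased breastfeeding initiation"],
        "long_term_outcomes": ["improved early language development", "reduced childhood obesity"],
    }

    # The chain reads top to bottom: inputs enable activities, activities produce
    # measurable outputs, and outputs are expected to contribute to outcomes.
    for stage, items in logic_model.items():
        print(f"{stage}: {', '.join(items)}")
    ```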

     

  • Step 3

    As you move into planning your evaluative activity, it is helpful to consider a range of different evaluative methodologies, and to select the approach best suited to your context and to the questions you want to answer with the evaluation. EIF identifies a number of approaches (see below).

    Implementation and process evaluation

    An implementation and process evaluation helps you establish whether your service can work. It explores whether the key components of your intervention are feasible and achievable. It cannot tell you if your service works but it can tell you if anything needs to change. For example, it can help you understand whether your target population is going to attend the service.

    This step can also help you identify whether you have the necessary systems of data collection in place to monitor families over time as part of an evaluation.

    The focus at this stage is on:

    1. delivery: what is delivered and what factors impact on delivery
    2. participation: how many families participate, whether they remain engaged over time, and whether you are reaching target populations
    3. cost: measuring the costs involved will help you calculate the cost per participant and compare your service to alternative services (a simple worked sketch follows).
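
    As a simple illustration of the cost point above, the sketch below shows how a cost per participant might be calculated and compared across two services. All figures are invented for illustration; real costs would come from your own financial monitoring.

    ```python
    # Illustrative only: cost per participant for two hypothetical services.
    # The figures are invented; use your own financial monitoring data in practice.
    def cost_per_participant(total_cost: float, participants: int) -> float:
        """Total delivery cost divided by the number of participating families."""
        return total_cost / participants

    family_hub_programme = cost_per_participant(total_cost=45_000, participants=180)
    alternative_service = cost_per_participant(total_cost=60_000, participants=150)

    print(f"Family Hub programme: £{family_hub_programme:.0f} per family")  # £250
    print(f"Alternative service:  £{alternative_service:.0f} per family")  # £400
    ```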

    This type of evaluation can include both quantitative (numerical) data and qualitative (experience) data. The quantitative data can tell us what is happening (e.g., we are not reaching our target population) and the qualitative data can tell us why (e.g., the reasons why the target population is struggling to access the service). Workforce surveys are one example of how to collect this data.

    Pilot impact study

    A pilot impact study addresses whether a service has the potential to achieve its intended outcomes. It allows us to see whether the intervention has the promise to deliver the intended impacts and benefits for families.

    In your pilot impact study, you should:

    • measure at least one of your short-term outcomes identified in your theory of change, using valid and reliable measures
    • carefully consider sample size – you want enough participants to feel confident about the conclusions you draw, and to be sure your sample is representative of a wide range of families (see the sketch after this list)
    • report your results in a detailed and transparent manner and acknowledge any limitations.
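
    To make the sample size point more concrete, the sketch below uses the statsmodels library to estimate how many participants a simple two-group comparison would need. The effect size, significance level and statistical power shown are illustrative assumptions, not recommendations for any particular evaluation.

    ```python
    # Illustrative power calculation for a simple two-group comparison.
    # The effect size (0.3), significance level (0.05) and power (0.8) are
    # assumptions for illustration; choose values appropriate to your own study,
    # ideally with specialist statistical advice.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)

    # With these assumptions, roughly 175 participants are needed in each group.
    print(f"Approximate participants needed per group: {n_per_group:.0f}")
    ```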

     

    Impact study

    An impact study will allow you to answer the question of whether your Family Hub has had an impact on outcomes for children and families (cause and effect) and how much of an impact it has had.

    Good impact evaluations collect data pre- and post-intervention as a minimum; more robust assessments will include measures at other time points, including long-term follow-ups.

    High-quality impact evaluations will also collect data on two groups – an intervention group and a control group. The groups should be well matched in terms of their characteristics (e.g., level of deprivation); one of the best ways to ensure this is through randomisation.
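
    As a minimal illustration of the pre/post and comparison group design described above, the sketch below contrasts the change in an outcome score for a hypothetical intervention group with the change in a comparison group (a simple ‘difference in differences’). All numbers are invented, and a real impact evaluation would need expert statistical input.

    ```python
    # Illustrative only: pre/post change for a hypothetical intervention group
    # versus a comparison group. All scores are invented for illustration.
    from statistics import mean

    intervention_pre  = [42, 38, 45, 40, 44, 39]   # outcome scores before the offer
    intervention_post = [50, 47, 52, 49, 51, 46]   # the same families afterwards
    comparison_pre    = [41, 39, 43, 40, 42, 38]
    comparison_post   = [43, 41, 45, 42, 44, 40]

    intervention_change = mean(intervention_post) - mean(intervention_pre)
    comparison_change = mean(comparison_post) - mean(comparison_pre)

    # The difference between the two changes is a simple estimate of the effect
    # over and above the change that would have happened anyway.
    print(f"Intervention group change: {intervention_change:.1f}")
    print(f"Comparison group change:   {comparison_change:.1f}")
    print(f"Estimated effect:          {intervention_change - comparison_change:.1f}")
    ```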

    Impact evaluations can be quite resource-intensive and time-consuming and require expert input. You will need to give careful consideration to a number of issues before committing to this step. EIF suggests eight things to check to determine whether you are ready for an impact evaluation.


Good practice

What are the key ingredients of success?

  • Evaluations should be well designed and well conducted.
  • Evaluations should use valid and reliable measures of appropriate outcomes.
  • Evaluations should be transparent, so that others can understand what has been done and learn from it.
  • Evaluation should be embedded through upskilling the workforce.

What are the pitfalls to avoid?

  • lack of a robust comparison group
  • high drop-out rate of participants from an evaluation (attrition)
  • excluding participants from the analysis; any exclusion criteria must be carefully thought through
  • using inappropriate measures
  • insufficient sample size
  • lack of long-term follow-up.


Equity, diversity and inclusion

Evidence and evaluation can promote equity, diversity and inclusion if you ensure this is part of what you set out to understand and build it into the process. Sometimes interventions do not meet the needs of all groups, so it’s important to try to understand this through the evaluation.

Providers need to be aware that there are gaps in the current evidence base relating to equity, diversity and inclusion. For example, data and information may have been collected from some groups but not others, meaning that the existing evidence base may not represent the needs of all, and what has been shown to work for some families may not work for others. There are a number of ways to collect existing data: through a local system assessment (e.g., the early help system guide), a population needs assessment (e.g., through local transformation plans) and other existing evidence and research (e.g., auditing information on the interagency impact on infants, children, young people and families). You can read more about gathering data in our Family Hub development process module.

 

Key deliverables

As part of the evidence and evaluation process, you may develop the following documents:

  1. theory of change
  2. logic model and blueprint
  3. implementation and process evaluation plan
  4. pilot impact study plan
  5. measuring impact plan.
