Evaluating the services and resources you provide is a vital part of developing high quality information. It enables you to see whether the information is designed, written and disseminated in an effective way, helps you to review and improve what you offer, and helps to show that you are meeting need.
Evaluation can also help to show what effect your information is having; for example whether it changes behaviour, improves skills, confidence or understanding, or whether it prompts an individual to act.
There are two main types of evaluation, formative and summative:
- Formative evaluation is usually part of project development and tends to be about process, for example testing the draft of a resource to get feedback
- Summative evaluation is usually used to assess outcomes and is more concerned with the impact of a resource or service
On the whole, information producers and providers are not good at summative evaluation and assessing the impact of their work, though there are exceptions. There is very little evidence or research about the impact of information, how it benefits the individual and what it can achieve. Information producers and providers usually have to find their own evidence of impact when justifying an information service or product or when applying for funds.
However, evaluation is not an easy process, or one that many routinely undertake, though this is changing. This does not mean you should not do it. It is important to find out what reaches people, where your information ends up, how many people see or read it and what they do with it or use it for. You will need this type of information to continually improve your services and resources and to help you meet the objectives of your organisation.
There are challenges in measuring the impact of the information you produce or disseminate:
- There is very little evidence about the impact of information. What is available comes from research, not evaluation.
- Traditionally, organisations do not evaluate their information because of internal barriers and funding and resource issues.
- Some organisations do not take contact details from service users because they are worried about interfering with the interaction and dynamic of their relationship. This makes evaluation difficult.
- It is often difficult to get feedback from the full range of your service users.
- Evaluation needs to be long term and ongoing. It can be difficult to get feedback from users months after their contact with you or after they used your resources or services. This means you may not get accurate or meaningful results.
Successful evaluation begins when you first plan your resource or your service. If you know what the objectives for your work are, you will know what you want to achieve and can decide what to evaluate that will show this.
For example, if you are developing a resource that shows people with diabetes how to look after their feet, you may want to know whether users read the information, understood it, what they learned from it, whether they acted on the information, and whether they are still acting on it. You can use the same formula or method for each evaluation, but it is important to use what is appropriate to your organisation, resource and audience.
It is also important to know what you are going to do with the results once the evaluation is complete. Make sure you create tangible actions, not a report that is filed away and never acted upon. Will your evaluation need to demonstrate your worth, and therefore your need for funds, internally and externally? Or will the evaluation affect your organisation’s reputation? Make sure people are ‘primed’ to take action on the results of your evaluation. Demonstrate how you have acted on the results, and highlight improvements to the service for both evaluation participants and funders.
- Define outcomes at the start of projects, through identifying the actions you want an individual to be able to take as a result of using your information resources.
- People are usually happy to provide feedback. Ensure users understand what their contribution to the evaluation will be used for, and allow them to opt out at any time. Ask users of your service if you can go back to them to find out how useful the information was and how they used it. Let participants know the outcome of the evaluation if possible, as this will encourage future participation. People will take part if they know they can make a difference.
- Gather feedback about the impact of your information through a ‘constant conversation’ approach with patients and those around them (carers, health professionals). Use multiple evaluation methods, and record informal feedback as well, to build a wider understanding of your impact. Make sure the methods you use to gather your evaluation information are accessible and acceptable to your target audiences.
- Avoid using only your ‘warm contacts’ for the evaluation as this can give an unbalanced view of what you do. Try to involve people who do not know your information service well. Reach out to community groups to engage a wider range of service users.
- Send a questionnaire out by post, via Survey Monkey or in information packs. Gather some data and then use focus groups or interviews to drill down.
- Ask users where they would go for information if your service or information was not available. Ask what the information does for them, what it should do and what (if any) action was taken. Be as specific as possible about what you want to know.
- Collect evidence, not opinion – avoid putting your own views and passions into the outcomes you are exploring.
- Evaluate distribution channels too, to see what happens when your information reaches certain places and where it goes on to.
- Evaluation is not just about looking to see whether your information is changing behaviour; it is also about some of the softer outcomes, such as relieving anxiety, reassuring and aiding understanding. You can measure these as end points too.
- Evaluation takes time and resources, so it’s vital to factor this in when you’re planning. Consider an external evaluator (a freelancer or agency) as well as using the skills you have internally.
PIF’s work on measuring the impact of information
In 2017 we ran a survey and two events, with members of PIF and The Information Standard, to inform a set of recommendations for how best to develop the measurement of benefits of good quality information.
Over 70% of people who completed the survey stated that their organisation measured the impact of their information resources or services. However, respondents cited a wide range of ‘impact’ that was being measured, with a significant number including metrics relating to reach/distribution rather than actual impact.
Respondents described how they seek to measure the difference information resources make to knowledge, confidence and awareness. They recognised the value in seeking to measure the impact of information on behaviour change, decision making around treatment choices, and ability to self-manage, but expressed that this is not easy and indicated a lack of existing guidance and support to help achieve it.
The events identified a clear interest in and need for resources and support to help Information Professionals better understand, measure and evidence the impact of their health information resources. The following key actions were identified as priorities:
- Identifying a set of principles which could be used across a range of organisations, including developing shared impact measures.
- Developing a toolkit to support organisations in measuring impact, to include templates and examples of ‘what works’.
- Linking evaluation activities on the impact of information with established measures of healthcare experience and quality.
- Designing evaluations so they engage and resonate with the wider health system.
Page last updated: 2/1/2018