Building process evaluation activities into programme implementation and using the results to drive continuous quality improvement is one of the most important yet overlooked strategies in public health practice. This section explains how to design a process evaluation that will inform and improve programme implementation, with details on the types of tools and assessments that can be used and examples of indicators specific to NCD evaluations. CIH is used as an example to illustrate a broad range of process activities across a variety of settings, including schools, workplaces, health-care settings and the community at large. Drawing on field experience, the challenges of collecting process data in diverse settings are addressed and potential solutions offered.
Process evaluation may occur with or without outcome evaluation and may include a combination of qualitative and quantitative data collection strategies. However, if resources, time or feasibility are a roadblock to conducting a full evaluation study, it is highly recommended that a good process evaluation study be incorporated.
Process Evaluation Questions
The types of questions asked when designing a process evaluation are different from those asked in outcome evaluation. The questions underlying process evaluation focus on how well interventions are being implemented. Typical questions asked include, but are not limited to:
- What intervention activities are taking place?
- Who is conducting the intervention activities?
- Who is being reached through the intervention activities?
- What inputs or resources have been allocated or mobilized for programme implementation?
- What are possible programme strengths, weaknesses, and areas that need improvement?
Process Evaluation Strategies
Both qualitative and quantitative research methods (mixed methods) are used in process evaluation. It is often the richness of qualitative methods, with their detail, language, context and relationships between ideas, that best informs understanding of programme process. The following list presents possible strategies for collecting process-level information:
- Interviews using open-ended questions about feelings, knowledge, opinions, experiences and perceptions, with responses recorded
- Focus groups
- Forums and discussion groups
- In-depth interviews with key informants or other community members; semi-structured and structured
- Delphi method, using expert opinion and iterative rounds
- Observations from fieldwork descriptions of activities
- Case Studies
- Ethnographic studies involving long-term immersion in the community
- Document review of written materials from organizations, including:
  - clinical files
  - programme records
  - publications and reports
A good process evaluation plan will include a number of indicators that can be linked to programme and service inputs and outputs. Examples of service inputs include:
- Participants: number, health status
- Queuing: wait time, number waiting
- The locale where services are provided (e.g. rural, urban)
- Economic status and racial/ethnic background of those receiving services
- Quality of services
- Intervention delivery: quantity, fidelity to plan
Examples of service outputs include:
- Units of service: quantity, type
- Service completion: quantity, type
- Intervention: dosage received, satisfaction
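Indicators such as these are typically tracked in simple tabular records and tallied for monitoring reports. The following sketch illustrates one way to do this; the field names, settings and figures are hypothetical, not an actual programme schema:

```python
from dataclasses import dataclass

@dataclass
class ServiceRecord:
    """One delivered unit of service; field names are illustrative only."""
    setting: str       # e.g. "school", "workplace", "health centre"
    unit_type: str     # type of service delivered
    participants: int  # number of people reached
    completed: bool    # whether the unit was completed as planned

def summarize(records):
    """Tally basic output indicators: units delivered, total reach, completion rate."""
    total_units = len(records)
    reach = sum(r.participants for r in records)
    rate = round(sum(r.completed for r in records) / total_units, 2) if total_units else 0.0
    return {"units": total_units, "reach": reach, "completion_rate": rate}

records = [
    ServiceRecord("school", "education session", 30, True),
    ServiceRecord("workplace", "screening", 12, True),
    ServiceRecord("school", "education session", 25, False),
]
print(summarize(records))  # {'units': 3, 'reach': 67, 'completion_rate': 0.67}
```

Even a minimal tally like this lets an evaluator compare output indicators across settings or reporting periods.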
CIH Process Evaluation Methodology
The process evaluation for CIH is complex and includes a variety of methods, including those that tap into intervention/programme process specifics like the service inputs and outputs mentioned above, and a host of additional methods that address community context. CIH includes multiple interventions occurring across multiple settings. An overview of the CIH process evaluation methodology is presented in the case study below. The following figures provide select examples of indicators by type of setting and one of the three process logic models for CIH (the other risk factor logic models can be found in the Appendix). Each of the process indicators maps directly back onto the logic model and key process evaluation questions.
Case Study. CIH Process Evaluation Methodology
CIH is designed to broaden the research base by identifying not only what works to reduce and prevent chronic disease risk factors, but also the process by which it works and how this varies by setting, country, and culture. The process evaluation was designed to be both formative and summative; process measures help sites remain on target, ensure comprehensiveness of interventions, and allow for sharing of ideas and expertise across sites. The process assessment is integral to identifying cultural, social, and political facilitators of, and barriers to, the process of lifestyle behaviour change.
Intervention strategies and outputs are tracked and analyzed to better understand the implementation process. Process is documented in contact/collaboration, meeting activity, and material dissemination logs, standardized progress reports, community coalition questionnaires, and site portfolios. These documents provide information on an intervention's target, dose, and reach, as well as details about each activity, such as logistics, costs, focus, participation, collaboration, and "agents for change" in intervention development and implementation. Process information is also collected through questions in other CIH evaluation tools including individual surveys, key informant interviews, and environmental scans.
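Log data on target, dose, and reach can be reduced to two simple implementation metrics: how much of the planned dose was actually delivered, and what share of the target population was reached. A minimal sketch, using hypothetical figures rather than actual CIH data:

```python
def implementation_summary(planned_sessions, delivered_sessions, target_pop, reached):
    """Compare delivered dose against the plan and reach against the target population."""
    return {
        "dose_fidelity_pct": round(100 * delivered_sessions / planned_sessions, 1),
        "reach_pct": round(100 * reached / target_pop, 1),
    }

# Hypothetical figures for one intervention cycle:
print(implementation_summary(planned_sessions=12, delivered_sessions=9,
                             target_pop=400, reached=150))
# {'dose_fidelity_pct': 75.0, 'reach_pct': 37.5}
```

Reporting both percentages per intervention makes it easy to spot activities that are under-delivered or failing to reach their intended population.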
Figure 14-1. Examples of CIH process indicators by strategy
[Figure: one of the CIH process logic models]
The methods for capturing community context in CIH are diverse and include those for addressing:
- policy development and implementation at the local, state and national level
- policy development and implementation at the institutional level in health centres, work places and schools
- the community environment by use of environmental scans.
In the table below, a summary of the types of strategies and the tools/instruments used is presented, along with an explanation of their use.
CIH Measurement of Community Context
Component and type of instrument used:
- Current policies implemented, enforced and/or under consideration: policy review
- Current policy implementation and practices: key informant survey
- Degree of policy implementation, and practices implemented without policy enactment: key informant interview
- Representation of the community, to assess relationships with physical and geographic attributes: environmental scan
Evidence-based interventions cannot be expected to work exactly the same way in all contexts and cultures. The political, geospatial, socioeconomic, physical and cultural characteristics of each community are critical in determining what is needed, appropriate, and effective. It is strongly recommended that Community Profile assessments be conducted both pre- and post-intervention, so as to document changes in social and political constructs as well as community norms that may interact with efforts for change. In the CIH programme, data collection tools for the Community Profile included environmental scans, policy reviews, key informant surveys and interviews, and community readiness assessments.
The importance of environmental scans in creating a community profile: The purpose of environmental scans is to describe the physical and spatial aspects of communities (see Assessing the Built Environment).
The importance of policy reviews in creating a community profile: Policy documents can be systematically collected and reviewed to identify the policies and enforcement measures currently in place at the global, national, regional and local levels. Policy document review can also occur at the facility level when gathering information from schools or workplaces (see Assessing the policy environment).
The importance of key informant surveys and interviews in creating a community profile: Surveys can be used to assess implementation and enforcement of policies and other local practices, from a community perspective. Surveys can be administered to key leadership in schools, workplaces, and health centres. Interviews can occur after the surveys are reviewed to provide a more in-depth understanding of barriers and facilitators to community change. The interviews can include a community readiness assessment to assess the 'stage of change' of each community to better understand how community readiness affects intervention implementation and the rate of change.
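Community readiness assessments typically score interview responses on anchored scales and map the average to a named readiness stage. The sketch below illustrates the idea; the stage labels and cut-points are assumptions for illustration, not the official community readiness instrument:

```python
# Illustrative stage ladder, lowest readiness first; labels and cut-points
# are assumptions, not the official community readiness instrument.
STAGES = ["no awareness", "denial", "vague awareness", "preplanning",
          "preparation", "initiation", "stabilization", "expansion",
          "professionalization"]

def readiness_stage(dimension_scores):
    """Map the mean of 1-9 interview dimension scores to a stage label (floored)."""
    mean = sum(dimension_scores) / len(dimension_scores)
    index = min(max(int(mean) - 1, 0), len(STAGES) - 1)
    return STAGES[index]

# Example: five dimension scores from one community's key informant interviews
print(readiness_stage([4, 5, 4, 3, 4]))  # "preplanning" (mean = 4.0)
```

Charting the stage at baseline and follow-up shows whether a community is moving toward readiness for change, and at what rate.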
The types of process activities described in this chapter can be used to provide continuous quality improvement (CQI) for a programme/intervention. Typically, evaluators use this type of data as part of monitoring and feedback, allowing for midcourse adjustments in programming. For example, in CIH, following a site visit the Evaluation Team determined that an intervention was being delivered at too low a dose to be meaningful and that the site was struggling with insufficient staffing; the recommendation was therefore to eliminate the 'weak' intervention and refocus efforts on the site's core intervention activities.
In CIH, the development and use of community coalitions to drive intervention and implementation efforts was presented as a core strategy. Many of the coalition members were interviewed as part of the community readiness assessment. The case study below presents a strategy for evaluating the functioning of coalitions based on the work of the University of Wisconsin Comprehensive Cancer Center, which created a Coalition Reporting System that includes the Community Coalitions Activity Survey (CCAS).
The UW monitoring and evaluation system allows the charting of coalition progress over time, improves peer-to-peer learning among the different coalitions, and helps government agencies target technical assistance to coalitions' needs. Although created to collect and summarize work across multiple tobacco control coalitions, its comprehensive listing of coalition activities may generate ideas for any public health coalition. The goals section of the UW CCAS addresses progress made on each of the group's goals, including specific benchmarks (e.g. a 20% reduction) and sub-groups (e.g. middle and high school youth, adults).
The UW coalition activity report can be found at: http://electra.biostat.wisc.edu/mep/downloads/Coalitions/Activity%20Survey%20Report_j-j02_.pdf
The instrument begins on page 24 of the document.
Case Study. Evaluating Community Coalitions
On a regular basis, coalitions should be assessed both for their ability to collaborate and for their impact on their targets. Why regularly? Because evaluation is an iterative process: yearly goals and objectives are fine-tuned based on outcomes, and components of operation change as a result of process evaluation as well as outcome evaluation.
The following is a list of potential areas to consider when evaluating your coalition's functioning and efforts.
- Comprehensiveness – what types of individuals and organizations are represented? Has there been an increase in the number of referrals between coalition members? What types of databases or directories have been created to facilitate sharing of information?
- Training – what training has the group done for coalition members and other professionals?
- Continuity – how long has the group been together? How much turnover of people and/or organizations has there been? Is there still a desire to work together?
- Involvement – how active are the group’s members? How often do they meet? How much volunteer and professional time is being focused on a particular problem because of the collaboration?
- Access and equity – do all members of the targeted group have equal access to coalition efforts? How do other non-coalition members obtain access to the information and services of the coalition members?
- Information and advocacy – how has information been increased for your constituency and policy makers? To what extent do member organizations promote each other’s efforts?
- Cost effectiveness – could existing funds have been used more effectively? How? Has duplication of services been reduced or eliminated? How has the group been able to access new funds as a result of the coalition?
- Additional considerations – What is the coalition doing together that is really working well? What are the major problems facing the coalition? Are there unanticipated outcomes that have arisen because of working together?
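Responses to questions in areas like those above can be scored by domain and charted over time, in the spirit of the UW reporting system. A hedged sketch follows; the domains, items and 1-5 scale are illustrative, not the CCAS itself:

```python
def domain_means(responses):
    """Average Likert-style item scores within each coalition evaluation domain."""
    return {domain: round(sum(items) / len(items), 2)
            for domain, items in responses.items()}

# Hypothetical 1-5 ratings gathered from coalition members
survey = {
    "comprehensiveness": [4, 5, 3],
    "involvement": [2, 3, 3],
    "cost effectiveness": [4, 4],
}
print(domain_means(survey))
# {'comprehensiveness': 4.0, 'involvement': 2.67, 'cost effectiveness': 4.0}
```

Comparing domain means across annual survey rounds highlights where a coalition is strengthening and where technical assistance should be targeted.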