IV. Programmatic practices for community feedback


CARE is recognised as a practice leader on community-level accountability and feedback practices. The CARE UK Inclusive Governance Team has compiled an extensive set of current CARE resources for gathering and responding to community feedback and for ensuring programme accountability to impact groups. This section also includes selected external resources developed and tested by our peer organisations and practitioners.

This guide does not list all the different ways in which CARE collects and responds to feedback from our impact groups, nor does it focus on project-level feedback practices. Instead, CARE’s Community Score Card and Keystone’s Constituent Voice methodology (currently being piloted in four CARE country offices) are highlighted in this section as concrete ways in which CARE has gained experience with systematically institutionalising accountability through feedback processes at the country level.


1. Defining Effective Feedback Mechanisms

A beneficiary feedback mechanism, in particular, is a context-appropriate mechanism which:

a) solicits and listens to, collates and analyses feedback,

b) triggers a response/action at the required level in the organisation and/or refers feedback to other relevant stakeholders,

c) communicates the response/action taken where relevant back to the original feedback provider and if appropriate, the wider beneficiary community.

In this definition, (a), (b) and (c) must all be present/true; a feedback mechanism is not functional if just one of them is present/true.

“A feedback mechanism is seen as effective if, at minimum, it supports the collection, acknowledgment, analysis and response to the feedback received, thus forming a closed feedback loop. Where the feedback loop is left open, the mechanism is not fully effective.”

CDA-ALNAP Closing the Loop: Effective feedback in humanitarian contexts: Practitioner Guidance

closing-the-loop-effective-feedback-in-humanitarian-contexts.pdf
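To make the closed feedback loop described above more concrete, the sketch below models a single feedback record moving through stages (a) to (c), with the loop counted as closed only once the response has been communicated back. This is an illustrative example only, not a CARE tool; all names (FeedbackRecord, Status, etc.) are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum, auto

# Hypothetical statuses mirroring stages (a)-(c) of the definition above.
class Status(Enum):
    RECEIVED = auto()           # (a) feedback solicited/collected and logged
    ANALYSED = auto()           # (a) collated and analysed
    ACTION_TRIGGERED = auto()   # (b) response/action triggered or referred on
    COMMUNICATED_BACK = auto()  # (c) response communicated back to the provider

@dataclass
class FeedbackRecord:
    provider: str               # person or group giving feedback (may be anonymous)
    channel: str                # e.g. suggestion box, SMS, helpdesk, focus group
    content: str
    received_on: date
    status: Status = Status.RECEIVED
    history: list = field(default_factory=list)

    def advance(self, new_status: Status, note: str) -> None:
        """Record each step so the handling of the feedback can be audited later."""
        self.status = new_status
        self.history.append((new_status.name, note))

    def loop_closed(self) -> bool:
        """The loop is closed only once stage (c) has happened."""
        return self.status is Status.COMMUNICATED_BACK

# Example: a piece of feedback received by SMS moves through the full loop.
record = FeedbackRecord("community member", "SMS",
                        "Clinic opening hours are too short", date(2018, 5, 3))
record.advance(Status.ANALYSED, "Grouped with 12 similar reports")
record.advance(Status.ACTION_TRIGGERED, "Referred to the district health office")
record.advance(Status.COMMUNICATED_BACK, "Response shared at the community meeting")
assert record.loop_closed()
```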

CARE has established effective consultative processes with community members and partners. However, the commitment to regularly solicit and utilise feedback varies greatly between country programmes. Many country programmes carry out satisfaction surveys on a regular basis, asking for feedback on specific projects and programmes, such as health and education. The level and scope of feedback solicited differ based on the length of the project. CARE staff have reported different experiences as to how much of the feedback is analysed by country programme teams and used to influence programme decisions, noting that this often depends on leadership and a principled commitment to integrating feedback. (From “Beneficiary feedback mechanisms: a literature review”, conducted by DI in 2013.)

Feedback and complaints mechanisms need to be tailored to the context and designed in consultation with their key users: those whom we expect to provide feedback to CARE, and those who will use the feedback internally for programme quality improvement, course correction and accountability purposes.
CDA’s case studies on effective accountability and feedback practices point to a consistent preference among local stakeholders for multiple feedback channels. A diversity of channels increases the inclusion of different groups, who may have varying degrees of access (e.g. literacy, digital literacy, mobility).

Community feedback channels range from suggestion boxes, SMS platforms and surveys to helpdesks and focus group discussions designed to solicit and respond to feedback. CDA developed this Menu of Options for Selecting Feedback Channels, which considers the strengths, weaknesses and required institutional resources for each type of channel.
cda_menu_of_feedback_channel_options_fall_2015.docx
BFM key findings summary
bfm-key-findings-summary.pdf

http://feedbackmechanisms.org/public/files/BFM-key-findings-summary.pdf

2. CARE’s Experience: Constituent Voice and the Community Score Card

Constituent Voice Methodology (CV)
Constituent Voice™ is a methodology developed by Keystone Accountability to enable organisations to improve results by optimising their relationships with their constituents. Rather than focusing on evaluation, it focuses on managing performance. The methodology involves four stages (collecting feedback; analysing and reporting; dialogue and learning; course correcting and repeating) and has benefits at the project, programme, country and CARE International (CI) levels. It is currently being piloted in four CARE country offices.

The impetus to test this methodology at CARE came from our search for feasible methods for listening more systematically to partners and project participants. The CV method helps make feedback collection routine and comparable over time. It is based on the premise that an increased focus on knowledge management will help improve our decision-making.

The CI UK Governance Team is currently developing three guidance notes to be included in this section. They will be structured as follows:
1. Survey builder guideline – indicating how to design questions and collect data
2. Data analysis guideline – indicating how to analyse the data using the Net Promoter Score (NPS) and how to produce reports from the data (see the illustrative sketch after this list)
3. Feedback dialogues and course correction guideline – indicating how to hold dialogues with respondents and how to undertake course correction.
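As a rough illustration of the kind of NPS-style analysis referred to in guidance note 2, the sketch below computes a Net Promoter-style score (the percentage of promoters, scoring 9–10, minus the percentage of detractors, scoring 0–6) from 0–10 survey ratings and compares two survey rounds. It is a generic example based on those standard NPS conventions, not the forthcoming CARE guidance, and the sample data is invented.

```python
def net_promoter_score(ratings):
    """Net Promoter-style score from 0-10 ratings:
    % promoters (9-10) minus % detractors (0-6), giving a value from -100 to +100."""
    if not ratings:
        raise ValueError("no ratings provided")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Invented example data: the same question asked in two successive survey rounds,
# so the score can be tracked over time and discussed with respondents.
round_1 = [10, 9, 8, 7, 6, 9, 3, 10, 8, 5]
round_2 = [10, 9, 9, 8, 9, 9, 6, 10, 8, 9]
print(f"Round 1 NPS: {net_promoter_score(round_1):+.0f}")  # +10
print(f"Round 2 NPS: {net_promoter_score(round_2):+.0f}")  # +60
```

Tracking the score across rounds, rather than reading a single number in isolation, is what supports the dialogue and course-correction stages described above.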

Community Score Card (CSC)
Toolkit: The Community Score Card (CSC): A generic guide for implementing CARE’s CSC process to improve quality of services can be found here
Report: CARE’s experience with community scorecards: what works and why? can be found here
Video: you can watch this video to learn more

3. Examples and Resources from peer organisations

DFID Beneficiary Feedback Mechanisms Pilots
Between 2014 and 2016, DFID supported seven non-governmental organisations in six countries to pilot beneficiary feedback mechanisms as part of their maternal and child health projects. Key lessons and recommendations from the pilots were synthesized into practice notes to help other organizations improve their feedback systems.
DFID BFM Pilot Recommendations Summary:

* At the outset, ask whether sufficient time, resources and flexibility are available to implement a feedback mechanism and to respond to feedback once the mechanism is in place
* Conduct a thorough context analysis before deciding on a particular feedback mechanism, including whether literacy or cost are barriers in marginalised contexts
* Sensitise beneficiaries to the purpose and process of giving feedback, both at the start of the project and on an ongoing basis, and allow time to build trust in the mechanism
* Engage with external stakeholders (particularly local government agencies and community leaders) about the feedback mechanism and establish referral protocols
* Ensure that there is sufficient scope in the programme design to make changes and respond to requests to increase or reallocate resources; negotiate with the donor if necessary
* Ensure those with ‘first contact’ with beneficiaries (often project staff) understand the purpose of the feedback mechanism and the scope for responding to feedback
* If feedback is intended to integrate with monitoring systems, give careful consideration to how feedback will be analysed and aggregated and the capacity of staff and systems to do that.
* Consider the sustainability and exit strategy for a feedback mechanism as part of the initial design phase.

World Vision was one of the seven non-governmental organisations piloting beneficiary feedback mechanisms, and its pilot is a good example of how trust was built in order to collect stakeholder data and then respond effectively to the feedback. The case study below shows how organisational strategies and programmes were adjusted based on the feedback received.
Beneficiary Feedback Mechanisms Case-Study
mamta_india1.pdf

  • Participation: Project Design Processes and Community Involvement

CARE works on complex, deep-rooted and systemic issues which require holistic and inclusive processes and solutions. Our impact groups and partners at the community level are best placed to participate in the design of appropriate solutions. By involving them at the design stage and seeking their input, we share our decision-making power and enable local ownership.

The IRC’s Client-Responsive Programming Framework was commissioned by the IRC’s Client Voice and Choice (CVC) Initiative and developed through a partnership between CVC and a team at CDA Collaborative Learning Projects. The Framework articulates the IRC’s client-responsive programming approach: it provides an overarching direction, systematises practices and sets a quality benchmark for the IRC in delivering client-responsive programming. It offers organisation-wide, coherent standards alongside guidance which can be contextually interpreted, suggests roles and responsibilities with respect to the Actions and Enablers, and references existing IRC and external resources which can be used in support of client-responsive programming.

Client-Responsive Programming: Resource Considerations for Project Teams (2016)

Outlines the main resourcing decisions for staff when designing, setting up and implementing a simple feedback and response mechanism, or a more comprehensive approach to feedback.

Designing for a change in perspective: embracing client perspectives in humanitarian project design (2017)

This report builds on existing knowledge on levels of client engagement in humanitarian and development decision making. It provides an analysis of the responses that the IRC collected through an e-survey that was shared in July 2017 with hundreds of humanitarian actors in Africa, Asia and the Middle East, as well as staff working at headquarters.

  • Building stakeholder data into the organisation’s DNA

Accenture has used an original approach to leveraging the potential of stakeholder data, using a citizen-led approach to technology. See the Accenture West Midlands Police case study: accenture-west-midlands-police.pdf

  • UNICEF’s use of real-time data

UNICEF’s U-Report is a free social messaging tool that allows people anywhere in the world to speak out on development issues and human rights. It has 4.5 million members. The tool proved very useful during the Ebola crisis: in less than 24 hours it helped direct resources to where they were needed as part of the Ebola response, and it helped put child protection on the agenda in Liberia.

4. Transparency & Information Provision at the Community Level

CARE’s new Accountability Framework places transparency at its centre, alongside feedback and participation. This guide has already outlined transparency and reporting as they relate to external audiences. Some CARE offices already practise transparency at the community level by publicly disclosing information (see the CARE Peru example above) in order to build trust and respect and to share information that allows impact groups to make better decisions. Information provision improves the quality of feedback and overall engagement. However, information provision is not an information dump. In many communities where illiteracy and low education levels affect people’s ability to engage with information, peer organisations have found that budget literacy training combined with transparency boards is a powerful tool for enabling meaningful engagement and dialogue about programmes.

  • Data Transparency Boards: The Hunger Project uses data transparency boards to share information about project duration, the number of beneficiaries and the donors funding the project, as well as up-to-date data on a project’s or programme’s progress towards its stated milestones and goals.
  • Community Data Presentations: The Hunger Project also holds regular community data presentations to share M&E and feedback data with local communities. These sessions include interactive presentations of data (using visuals and participatory methods); comparison of results with previous surveys, national data or other epicentres; discussion of community goals and priorities; and the opportunity to provide feedback and ask questions about the “how” and “why”.

5. Adaptive Programming

Accountability requires effective mechanisms for using feedback in our adaptive programming and strategic decisions so that it leads to improved impact, ownership and sustainability. There are strong overlaps between CARE’s programming principles and the Doing Development Differently manifesto, especially in terms of empowerment, partnership, learning, accountability and sustainability. CARE is already part of internal and external discussions about adaptive management, which will link M&E and accountability more closely. The following are key aspects of adaptive management that are in line with CARE’s priorities, values and objectives. They have been drawn from Doing Development Differently resources, as well as from the Thinking and Working Politically (TWP) community of practice and USAID Learning Lab’s Collaborating, Learning and Adapting (CLA) tools.

Genuine collaboration and partnership with local actors: We need to let go of a CARE-centric approach, for example by asking local women’s groups where they think we should focus, or by including marginalised concerns such as the labour rights of domestic workers.

More politically smart and locally owned analysis and planning: Doing development differently requires a focus on power relations and a more grounded understanding of political context. CARE has increasingly learned the value of adapting political economy analyses to make them more problem-driven, participatory, iterative and gender-sensitive.

Adaptable programme design and implementation: We have shifted attention from a narrow focus on project-specific attribution towards contribution to systems change. Theory-based approaches such as outcome mapping and contribution tracking, together with the use of vignettes, have been particularly helpful in identifying the right data and capturing complex social change processes more effectively. We have defined core standards to help operationalise adaptive management, formulated complexity-aware M&E and learning guidance, and developed learning and reflection tools to support country teams and partners in adapting programming to achieve transformative and sustainable change.

Fast feedback and learning loops: We need to listen more systematically to partners and project participants. We have experimented with Keystone’s Constituent Voice (CV™) model to help make feedback from impact populations and partners routine and as easy to use as possible. Increasing focus on knowledge management will also help ensure decisions are more evidence-based.

Brokering multi-stakeholder engagement to shift power: We have seen an increase in CARE’s role as a broker for multi-stakeholder engagement. This includes bridging scientific and community knowledge through participatory scenario planning (PSP), supporting citizens to prioritise their own concerns and dialogue with power-holders through community scorecards (CSC), and using ICT platforms to scale these up.

Manage risk through small bets: Flexible central funding such as the Scale X Design has helped cultivate promising innovations from country teams, such as a smartphone application to digitise Village Savings and Loans Associations (VSLA) groups in Tanzania, or rapid prototyping for input supply shops which serve last-mile farmers.

