This reflection is part of the 20 lessons included in the paper "Lessons learned on promoting better links between research and policy in Latin America".
The next set of lessons shared here focuses on what some consider the centre stage of capacity building: participants (students, workshop attendees, mentees, etc.). As stated in the section above, some organisations involve them in establishing goals right from the beginning. The view of the participant's role throughout the entire initiative will clearly affect how they are selected (if, of course, a selection needs to be made, e.g. due to limited availability of spots) and the incentives that will enable active participation and engagement.
Throughout SFE, special emphasis was placed on identifying the right people to participate in each activity. Naturally, the concrete methodology for running this type of selection process will vary according to the scope of the training. For instance, INASP ran a project in Vietnam where the selection was substantially more involved – each ‘applicant’ was interviewed in person before joining the programme, which aimed at building trainers in health information. Julie Brittain cautioned that “This was quite an expensive way of doing things though, plus time-consuming, so is only worth it if you are expecting long-term engagement.”
Our lessons derive from CB activities in which the programme covered all expenses, so some additional considerations apply when thinking about selection and incentives for participants who pay for the CB themselves. Under SFE, participation in conferences, courses and peer assistance exercises was free, and fortunately we always had more individuals interested in joining than available spaces. Thus, we applied a diverse set of criteria without a very formal process (i.e. we did not score applications against these criteria, nor did an external panel make the decision).
Selection usually relates to the trainer’s expectations of what participants can achieve throughout the capacity building process. Possible criteria for selection (which we have used) are:
Professional experience and knowledge they could share with others, especially if we had had personal contact with candidates.
Personal and organizational commitment (for example, requiring a formal letter signed by the Executive Director to foster organizational buy-in, or asking for a personal essay to unravel motivations). However, even though we sought mechanisms to assess organizational commitment, letters from Executive Directors have not proven very effective: participants have sometimes left the course without reasonable grounds, even when there was an institutional endorsement. A better mechanism, shared by one member of the CB group, is to work with senior participants at the beginning of a project’s life to achieve more buy-in, strategy development and awareness building, and then move to individuals or more junior members as time goes on.
Diversity: especially in terms of gender, region/country/subnational/local level, type of experience (communicator, policy maker, researcher, M&E expert, etc.) and type of organization (CSO, university, think tank, etc.), due to the richness this gives to the exchanges among participants (including facilitators). Different experiences emerging from diverse contexts usually make participants think about other ways of doing what they are used to doing. Diversity also entails more interesting and balanced debates and a broader knowledge exchange.
Seniority and/or level of understanding of the topic, to ensure similar quality levels. In this sense, one decision that has proven effective is involving two or three senior profiles who can encourage discussions and exchanges, and also “start the game” by being more extroverted and prompting the others to participate with questions or controversial comments that “trigger” reflections.
Potential for future work: we were interested in individuals and institutions with which we shared goals, interests, etc.
Potential for organizational spillover: for example, if the CB is aligned with pre-existing projects of participants, so as to strengthen application of knowledge and sustainability. In this sense, timing has proven a very effective indicator of how a participant will engage throughout a course: when he/she is dealing with questions, challenges and needs that are directly related to the topics of the course, participation is higher and more focused, practical exercises are conducted thoroughly and are very down to earth, and participants usually apply some of the contents directly to their current work.
Another possible criterion, not applied in SFE, is to assess willingness to contribute in some way to the course, especially by paying for at least part of it. In this direction, Goran Buldioski has argued in his blog that “donors should charge a participation fee almost as a rule! The fee could be a percentage of the total cost (10% or more of the total costs to beneficiaries). (…) Deciding to invest in the capacity building from the scarce funds think tanks [or similar organisations/individuals] possess means they will not approach the possibilities as getting a ‘free lunch’. Instead, it is more likely that they will think through and decide if they really need it.” This is a very effective way of ensuring that the sole incentive to participate in a CB activity is not simply to make a donor happy.
Even when some individuals/organisations may find it difficult to contribute financially to the CB, there are other innovative ways to ensure their willingness to invest resources: for example, requiring those who have not paid to produce a case study, video or other training material with examples for future CB activities.
Finally, there are still some very important questions related to selection, raised by Antonio Capillo from INASP in this post as a reflection on and response to our paper:
What are the benefits of using competitive processes to select the participants compared with defining an ideal participant’s profile and asking the organisations to nominate based on well-defined criteria?
Is it better to select participants with different backgrounds, levels of understanding and perspectives, or to have a more homogeneous group?
Do alternative selection processes fit differently according to organisational cultures, participants’ expectations or type of training design?
Would well-selected participants have reached the same objectives if they had not participated in training activities (strictly related to impact evaluation design and counterfactual analysis)?
Which is more important: a good trainer or well-selected participants?
Another relevant criterion, raised by Ricardo Ramirez in his recent post, is readiness, which in our case would mean selecting those applicants who really expressed and demonstrated willingness to benefit from what we were offering, both in terms of interest in the topics and in how they could apply knowledge from the courses to their daily challenges. Related to the latter point, Alex Ademokun from INASP rightly highlighted in this post the value of identifying key individuals to act as champions: those who can produce changes not just in their own behaviours but in their working environments.
There is certainly space to continue the discussion and exchange on this topic, and I would add today that we should also ask ourselves: why and how would participants select us? Incentives are part of that, and I will reflect upon them in the next lesson.