Marjory WACHTEL is Director of Development at BENKEI. She is also a member of the scientific committee of Time for the Planet.
In 2019, when I heard about Time for the Planet (www.time-planet.com), I chose to follow the founders and appreciated the fresh communication style of these young, successful entrepreneurs. I took the plunge and contacted one of the founders to learn more about this initiative, which aims to save the planet through entrepreneurship.
I realized that my past experience as an expert evaluator for the European Commission (EC) on various R&I support programs could be useful, and I offered my help.
Indeed, the EC’s evaluation method, although not well known, is a real feat, not only in terms of the number of projects evaluated (270,000 during Horizon 2020) and the number of evaluators (20,000 experts take part in the process each year), but also in terms of its equality, objectivity and transparency.
- Objectivity: the experts are independent of the EC, and the EC does not interfere with their decisions at any point in the evaluation. Even in Brussels, during face-to-face meetings, an independent expert is in charge of checking for potential conflicts of interest.
- Equality: the EC recruits experts across all themes of the framework program, scientists and other specialists in their fields, from both the public and private sectors. Beyond the balance sought between experts from academic research and industry, gender equity is also respected.
- Finally, transparency: the evaluation mechanism and criteria are defined at the time of the call for proposals and provided to the evaluators, and an evaluation report written by the expert evaluators is sent to every project leader.
However, this mechanism has three main drawbacks: the cumbersome documentation that project applicants must submit, the limited number of evaluators per project (3 or 4), and the limited time allowed (and paid) for each expert's evaluation (on average, only 4 hours for a 70-page document describing a €5M project).
Time for the Planet took its inspiration from this EC model, while proposing collective intelligence in place of a hyper-formatted evaluation that reflects the opinions of at most 3 or 4 people. By giving innovators full freedom over the format of their project proposal, Time for the Planet avoids a heavy administrative burden that can deter an innovator and prevent a good project from ever reaching the evaluators.
By relying on collective intelligence, and therefore on a large number of evaluators to assess the potential of projects, the process becomes more fluid while preserving:
- Objectivity: the evaluators are not necessarily members of Time for the Planet. They all receive the same training and use the same evaluation indicators available on the website.
- Equality: each evaluator gives his or her opinion and every opinion counts, with no privileges. Even the scientific committee (the scientific evaluation body for the best innovations) scores by consensus. The diversity of the evaluators and the gender balance should perhaps be monitored more closely.
- Transparency: evaluators can ask the innovators questions, and everyone sees each question and its answer. Innovators have access to all the information about their own project, as well as about the other innovations.
The evaluation process of Time for the Planet aims to improve over time, but the first application round was a success: Time for the Planet received and assessed 150 projects in 12 weeks, representing 5,000 evaluations (each project was evaluated by an average of 25 evaluators), and produced a shortlist of 10 innovations presented to the Scientific Committee. We can only congratulate the initiative and the process, and encourage Time for the Planet to maintain this course: collective intelligence in the service of a fair and transparent process.
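As an illustration only, since the article does not describe Time for the Planet's actual scoring implementation, the aggregation step behind such a shortlist, averaging each project's evaluator scores and keeping the top entries, could be sketched as follows; the function name, data shape and toy scores are all assumptions:

```python
from statistics import mean

def shortlist(evaluations, top_n=10):
    """Average each project's evaluator scores and return the top projects.

    `evaluations` maps a project id to the list of scores given by its
    evaluators (a hypothetical data shape; the real platform's format
    is not public).
    """
    averages = {project: mean(scores) for project, scores in evaluations.items()}
    # Rank projects by their mean score, highest first.
    ranked = sorted(averages, key=averages.get, reverse=True)
    return ranked[:top_n]

# Toy example with three hypothetical projects and a shortlist of two.
scores = {
    "solar-capture": [4.2, 3.9, 4.5],
    "ocean-cleanup": [3.1, 3.4],
    "methane-filter": [4.8, 4.6, 4.7],
}
print(shortlist(scores, top_n=2))  # ['methane-filter', 'solar-capture']
```

A simple mean is only one possible aggregation; a real collective-intelligence process might weight evaluators, discard outliers, or require a minimum number of evaluations per project before ranking.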