Over the past nine months, Finland has been taking pioneering steps in experimenting with methods for participatory and direct democracy. For the first time, the Finnish Ministry of the Environment has crowdsourced a legislative process by asking citizens to contribute ideas for a new law on off-road traffic. The Off-Road Traffic Act regulates where and how fast snowmobiles and ATVs can be ridden, how to protect nature from off-road traffic, and how to compensate landowners for the use of their land.
The crowdsourcing project was initiated by the Ministry of the Environment and the Committee for the Future in the Finnish Parliament. The goal was to test whether and how citizens can meaningfully contribute to a law-making process. The authors of this blog post both contributed to the process and studied it in various ways. For us, as academics and researchers, the main goal is to identify ways in which collective intelligence can be tapped for the benefit of policy-making. Another hope, beyond that, is to effect social change.
The process – which we call the Finnish experiment for brevity – was divided into three stages. The first stage was problem mapping: citizens were asked to share their concerns, experiences, and problems with off-road traffic and the law regulating it. In the second stage, we asked for ideas on how to solve the previously identified problems. In the third stage, we asked both the crowd and a globally distributed expert panel to evaluate the generated ideas. (See more about the project here.)
The goal of the Finnish experiment is to gather information from a large number of people, and thus extend the pool of knowledge used in the law reform beyond the traditional stakeholders to a wider variety of people – in theory, anybody who is interested in participating.
Here are some lessons learned from this pioneering project so far.
1. People participate in a constructive way.
A substantial number of people are genuinely eager to participate when they are given a meaningful opportunity to do so. That opportunity needs to concern something they care about, and there needs to be a plausible promise: their participation must lead to something. In the Finnish experiment we received hundreds of ideas from hundreds of people. The interactions on the online platform were civil and constructive: of roughly 4,000 comments, only 20 had to be removed.
2. The crowd is not delusional about its potential impact on the law.
It is often said that participatory practices in policy-making are undesirable, and perhaps even dangerous, because they create false expectations among participants, leading them to believe that they will directly shape the law. Raw input, however, can rarely be taken into account directly: the ideas contributed by participants generally need to be debated, refined, recombined, and in some cases discarded. According to this argument, crowdsourcing is dangerous because it promises more than it can deliver and is bound to let the crowd down, demotivating them from participating in the future.
We were curious to see what the crowd's expectations really were. Based on our analysis of participant interviews and survey data, our conclusion is that the crowd is savvy and realistic; at least this particular Finnish crowd was. People participated because they wanted to affect the law, yet they were only cautiously optimistic about the likelihood of their impact.
The crowd is hopeful, but realistic. Participants understand that a single idea or opinion may not count for much in the end: there are hundreds of other opinions that also need to be heard, and the end result, the law, will be a compromise among many perspectives. This doesn't mean, of course, that participants can or should be let down, or that their opinions shouldn't be taken seriously. They do care; that is why they overcome their pessimism and volunteer their time on the crowdsourcing platform.
3. Crowdsourcing creates learning moments.
As the participants exchanged information and arguments on the crowdsourcing platform, they learned from each other. As one interviewee who participated in the crowdsourcing said:
"I'm somewhat surprised to see that the online process serves as a way to add to the participants' knowledge base and correct their mistaken perceptions. I had read the current law and the expired bill carefully, and I realized that quite a few participants didn't have a correct understanding of the terms of the law and its implementation. But in many conversation threads these misconceptions seemed to transform into correct ones, when somebody corrected the false information and told where to find the correct information."
Exposure to others' perceptions didn't lead to opinion changes, yet it helped participants understand others' positions and circumstances better, and thus led to a deeper understanding even of opposing opinions. A similar effect occurred in the evaluation stage: evaluators reported that being put in a situation where they had to evaluate ideas from an opposing perspective (e.g. an environmentalist evaluating a list of ideas for increasing off-road traffic, and vice versa) created cross-cutting exposure.
The educational aspect deserves further study. It is important to examine what triggers learning and how to strengthen this learning dimension in future crowdsourcing experiments of this kind.
4. Crowdsourcing as knowledge search
One main concern with participatory methods in policy-making, in which participants self-select, is the risk of misrepresenting the general population's preferences. In crowdsourcing we typically deal with self-selected individuals who, as a group, are not statistically representative of the population. Further, crowdsourcing platforms like ours allow people to participate anonymously, allowing for the possibility that the same person participates multiple times using multiple profiles.
In our case, we were crowdsourcing ideas to improve the law, not delegating ultimate decision-making power to the crowd, so the problem of legitimacy is not necessarily acute. Because the focus was on idea and information collection, an idea didn't gain more weight from being voted on multiple times, and duplicates were consolidated into a single idea during the later idea categorization. The profile of the idea presenters and their lack of representativeness didn't matter either, because it was their information and knowledge we were after, not their identities. That said, it is likely that some good ideas were never produced, due to the skewed nature of the sample of participants.
5. The crowd is smart.
Based on the idea evaluation results collected so far, we conclude that the crowd – at least this specific Finnish crowd – is smart. The evaluation took place on a new crowd evaluation tool built by David Lee at Stanford University. Each participant reviewed a random sample of ideas by comparing, ranking, and rating them. Based on the evaluation analysis, it seems that the crowd preferred commonsensical and nuanced ideas, while rejecting vague and extreme ones.
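The post does not describe the tool's aggregation method, but the general idea of turning many small pairwise judgments into an overall ranking can be sketched. The following is a minimal, hypothetical illustration (the idea names and the simple win-rate rule are our own assumptions, not the tool's actual algorithm):

```python
from collections import defaultdict

def rank_by_win_rate(comparisons):
    """Aggregate pairwise comparisons into a ranking.

    comparisons: list of (winner, loser) idea-id pairs, each produced
    when an evaluator preferred one idea over another in a head-to-head
    comparison. Returns ideas sorted by the share of comparisons won.
    """
    wins = defaultdict(int)
    total = defaultdict(int)
    for winner, loser in comparisons:
        wins[winner] += 1
        total[winner] += 1
        total[loser] += 1
    return sorted(total, key=lambda idea: wins[idea] / total[idea], reverse=True)

# Hypothetical judgments from several evaluators over three invented ideas.
judgments = [
    ("speed-limits", "total-ban"),
    ("speed-limits", "no-regulation"),
    ("total-ban", "no-regulation"),
    ("speed-limits", "total-ban"),
]
print(rank_by_win_rate(judgments))
# → ['speed-limits', 'total-ban', 'no-regulation']
```

In practice, tools of this kind typically use more robust models than a raw win rate (each evaluator sees only a random sample, so comparison counts are uneven), but the sketch shows why randomly sampled pairwise judgments can still yield a collective ordering.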
6. Minority voices were not lost.
Clustering proved a very interesting and successful method for analyzing the evaluation results. The clustering algorithm knew nothing about the demographics of the population, yet it identified a minority cluster that aligned strongly with certain demographic minorities, such as women and those whose preferences aligned with landowners' rights. The opinions of that group differed from those of the majority groups.
Being able to identify a minority cluster is important because it lets us analyze the crowd evaluation results at a more detailed level. With clustering, minority voices are separated out from the majority, so they can actually be heard. This technique can also motivate minorities to participate in online crowdsourcing efforts, because we can promise them that their voices won't simply be drowned out by whatever majority emerges.
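The post does not say which clustering algorithm was used. As a minimal, hypothetical sketch, even plain k-means over evaluators' rating vectors can surface a distinct opinion bloc without any demographic input (the rating data below is invented for illustration, and centroids are seeded deterministically to keep the sketch reproducible):

```python
def squared_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=10):
    """Minimal k-means: group rating vectors into k opinion clusters."""
    # Seed centroids with evenly spaced points (deterministic for the sketch).
    centroids = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: squared_dist(p, centroids[c]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# Invented rating vectors: each evaluator's 1–5 scores for four ideas.
# The first five evaluators lean one way; the last two form a minority bloc.
ratings = [
    [5, 4, 1, 1], [5, 5, 2, 1], [4, 4, 1, 2], [5, 4, 2, 1], [4, 5, 1, 1],
    [1, 1, 5, 5], [1, 2, 5, 4],
]
clusters = kmeans(ratings, k=2)
print([len(c) for c in clusters])  # → [5, 2]: the minority bloc separates out
```

The key point is the one from the text: the algorithm sees only opinions, never demographics, yet the minority cluster it finds can afterwards be checked against demographic data, as was done in the Finnish experiment.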
The question of representativeness still remains. Based on demographic data, the participants in the evaluation process skewed heavily male (more than 90% of participants), toward residents of Northern Finland, and toward those who identify as recreational snowmobile riders. The idea-generating crowd was more evenly spread in geographic location and issue preference.
Obviously, these dominant groups might not represent Finns' general opinions. Yet these are the groups who seem to care about the off-road traffic issue. If the whole population were asked (in fact, they were asked, because the process was open for anybody to participate in) to generate or evaluate ideas, would they bother to participate? And if they don't care, should those who do care not be heard at all? This is one of the core questions in participatory policy-making.
7. Next steps
The next important question is this: how should decision makers treat the crowdsourced input and the evaluation results? We recommend that decision makers consider the crowdsourced input just as they would consider input from other sources, such as interest groups and hired experts. The evaluation results should help them focus on the most promising ideas, since the crowd has already filtered out the vaguest and least promising ones. The politicians, of course, have to determine the most appropriate political line to follow in implementing the ideas. Further, the ideas should be treated as raw material that will most likely need to be refined.
Perhaps the main difference between the traditional law-making process and this new one will be that both the idea-generating and the evaluating crowds receive a reasoned justification from the law-makers as to why their ideas were integrated into the law or rejected. Public justification is a core ideal of deliberative democracy, and we trust that publicly shared reasoning will ensure transparency in the law-making process. If this part of the experiment is done well, we believe it will keep people motivated to participate in further crowdsourcing experiments.
The next step in the experiment is handing over a report on our results to the Ministry of the Environment, with accompanying policy recommendations for the next steps in the actual law-writing process.