In response to widespread utopianism about crowdsourcing’s ability to solve difficult problems in both the public and private spheres, Maggie Koerth-Baker warns in this week’s New York Times Magazine that treating crowds, real or virtual, like sentient beings is misguided. With Wikipedia’s success and inescapability playing a large role, Koerth-Baker notes that, “over the last decade, we’ve come to think of virtual crowds as sources of wisdom that can’t be found in individuals.” Similarly, in the non-digital world, crowds are often treated as singular entities that, if mishandled or left to their own devices, are prone to irrationality and panic. Beyond questions of how technological mediation could possibly shift the character of a crowd from thoughtlessness and irresponsibility to intelligence and innovation, the underlying conceit, that crowds are entities rather than groups of individual people, is “deeply flawed.”
Koerth-Baker believes that this misconception at least partially derives from the belief that a crowd behaves like a herd of animals, and that, “at some point, it reaches a critical mass and the will of the crowd overrides individual intelligence and individual decision making.” In reality, a crowd can be smart or dumb, helpful or dangerous, but, “a crowd’s behavior depends on what individuals are thinking and how they interact with one another—not some overpowering collective consciousness.”
While Koerth-Baker does not discount the unprecedented collaborative capabilities created by new information technologies, she highlights the importance not only of the individuals within groups, but also of the information those groups share, which necessarily determines the direction of any collaboration. Essentially, Koerth-Baker argues, while many put faith in crowdsourcing simply because they put faith in technologically mediated crowds, the success or failure of a given collaborative project actually depends on three things:
– who makes up the crowd,
– the information they share and
– how they interact.
Based on Koerth-Baker’s article, it would be easy to question whether crowdsourcing initiatives in the government arena can address the problems that inspired their creation. However, in an article about the many types of crowdsourcing in government and their potential, Justine Brown helps demonstrate why painting all public sector crowdsourcing projects with the same brush would be reductive.
In the article, Brown lists five central types of crowdsourcing in government: crowd competition, crowd collaboration, crowd voting, crowd funding and crowd labor. This list shows that not all types of crowdsourcing depend on some elusive mass knowledge. For some open government projects, engaging the crowd is done to widen the search for individuals with innovative ideas or insights. A crowd competition, for example, does not place excess faith in the abilities of an undefined crowd entity; rather, it provides incentive and opportunity for an individual or small group of individuals to solve a problem that has eluded more traditional government problem solvers.
The government website Challenge.gov provides a number of crowd competition opportunities for citizens. Challenges like “Non-invasive Measurement of Intra-cranial Pressure” from the National Aeronautics and Space Administration demonstrate that government crowdsourcing projects are often initiated in the hopes of finding a uniquely capable individual, not in the interest of obtaining insight from the crowd as a whole.
U.S. Chief Technology Officer Todd Park, whose Health Datapalooza is one of the more well-known examples of crowd competitions, highlights why they are effective tools for governments: “I think [prizes and competitions] are a very exciting new tool that government has in its toolkit to get better results at a lower cost. You can greatly broaden and deepen the range of players that can help solve the problem. You draw in unusual suspects along with the more usual suspects.”
Crowd labor likewise does not rely on any innate ability or intelligence in the crowd that does not exist in individuals. Instead, it again widens the net, but rather than searching for a uniquely capable or insightful individual, the government engages the crowd in the hope that a large enough group of people will be willing to take up a tedious task so that its completion is not left to paid government employees. One such example comes from the Library of Congress, which is asking the crowd to help tag photos with metadata. There is no reason to believe that an online crowd is uniquely capable of handling the task compared to paid employees or a real-life mass of people, and there is every opportunity for unreasonable individuals within the crowd to attempt to sabotage the project by providing incorrect information. Even so, engaging a willing mass of people to undertake necessary but tedious projects helps move more projects to completion while minimizing the time and resources expended by the government itself.
Crowd voting programs, on the other hand, do rely on the masses exhibiting some type of intelligence and reason, but, even if the outcomes could prove dubious, public opinion must at some point be drawn upon as part of a functional democracy. Similarly, crowd collaboration requires some amount of faith in the crowd, but the reason many believe these programs will succeed has less to do with an unreasonable faith in crowds than with the hope that the destruction of “sectorial boundaries” will allow previously separated but similarly capable individuals from different areas of interest to work together and engage problems in new and innovative ways.
Some recent high-profile examples of crowdsourcing within governments have come from Europe, and, for the most part, they do not fall into the trap that Koerth-Baker warns against. In Estonia, in response to high-profile cases of government corruption, citizens were called upon to provide policy suggestions to be debated by government officials and possibly implemented. In Iceland, citizen input from Facebook and Twitter was used to help guide officials in crafting a new constitution for Europe’s most sparsely populated state; the final document was put together by a 25-member Constitutional Council that drew upon citizens’ social media input. Finally, in Finland, any policy petition that obtains 50,000 citizen signatures automatically elicits a vote by the Eduskunta, the Finnish Parliament. Similar programs also exist in the U.K. and U.S., but the U.K. initiative requires 100,000 signatures before Parliament will consider debating the issue, and the U.S. We the People site guarantees only that the administration will “review” and “respond to” petitions that gain at least 25,000 signatures in one month.
Each of the above programs tempers the power of the people by ensuring that crowdsourcing serves only to set the agenda for the traditional powers within government. While some might argue that this entrenches the status quo and traditional power dynamics, it also keeps the government from placing excessive faith in the capabilities of a singular crowd, as Koerth-Baker warns against, and ensures that public funds will not be spent on the construction of a Death Star. Moreover, while many point to Wikipedia as the ultimate example of the wisdom of the unorganized crowd, it is actually the product of a similar system. Though much of the original content on Wikipedia comes from tens of thousands of outsiders, “the bulk of the changes to the original text…are made by a core group” of around 1,400 heavy editors who make thousands of small changes to increase the accuracy of postings. In other words, the masses are relied upon to do much of the grunt work, just as in government crowd labor projects, and to increase the visibility of relevant topics, but a smaller, more trusted group is responsible for shaping the crowd’s input into the final, though constantly evolving, product.
The German Pirate Party’s Liquid Feedback system, on the other hand, essentially sets the party’s platform through crowdsourcing. One of the party’s defining characteristics is its sliding scale of direct and representative democracy. In this system, party members can vote on any and every issue, if they so choose, or they can delegate their vote on any given issue to their elected representative. On the Liquid Feedback system, proposals are revised and voted upon, and, no matter the opinions of elected representatives, proposals accepted by the crowd become the party’s platform. This is certainly the purest example of direct democracy out of the recent crowdsourcing programs, but it also puts the most faith in the wisdom of a singular crowd.
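The delegation mechanic described above can be made concrete with a short sketch. This is an illustrative toy model of per-issue delegative (“liquid”) voting, not the actual Liquid Feedback implementation; all names and data are hypothetical.

```python
# Toy sketch of per-issue vote delegation in the style of delegative
# ("liquid") democracy. Hypothetical example, not Liquid Feedback's code.

def tally(direct_votes, delegations):
    """Count yes/no votes on a single issue.

    direct_votes: dict mapping member -> "yes" or "no"
    delegations:  dict mapping member -> the member they delegate to
    A member who neither votes nor delegates abstains. Delegation
    chains are followed until a direct vote is found; cycles and
    dangling delegations count as abstentions.
    """
    counts = {"yes": 0, "no": 0}
    for member in set(direct_votes) | set(delegations):
        current, seen = member, set()
        # Follow the delegation chain to a direct voter, if any.
        while current not in direct_votes:
            if current in seen or current not in delegations:
                current = None  # cycle or dead end: abstain
                break
            seen.add(current)
            current = delegations[current]
        if current is not None:
            counts[direct_votes[current]] += 1
    return counts

votes = {"alice": "yes", "bob": "no"}
proxies = {"carol": "alice", "dave": "carol"}  # dave -> carol -> alice
print(tally(votes, proxies))  # {'yes': 3, 'no': 1}
```

The key design point mirrored here is that delegation is per issue and transitive: a member’s vote flows along the chain until it reaches someone who actually cast a ballot, which is what lets the system slide between direct and representative democracy.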
Faith in the transformative power of crowdsourcing in government is not limited to the developed world, however. In parts of Africa, where mobile networks have bypassed all other forms of infrastructure development in terms of speed and usage, “crowdsourcing is increasingly viewed as a core mechanism of new systemic approaches to governance addressing the highly complex, global, and dynamic challenges of climate change, poverty, armed conflict, and other crises.” Whether or not the crowdsourcing programs put into place to address these crises correctly characterize what a crowd is and what it is not, of course, remains to be seen.