Proposal 3 for ICANN: Enhance Accountability by Crowdsourcing Oversight & Developing Metrics for Success

This is the third in a series of 16 draft proposals developed by the ICANN Strategy Panel on Multistakeholder Innovation, in conjunction with the Governance Lab @ NYU, for how to design an effective, legitimate and evolving 21st century Internet Corporation for Assigned Names & Numbers (ICANN).

Please share your comments/reactions/questions on this proposal in the comments section of this post or via the line-by-line annotation plug-in.


From Principle to Practice

For ICANN to be a legitimate global organization operating in the public interest, it must be accountable. This means identifying opportunities to engage a broader audience in overseeing the impact, effects and level of community compliance that result from ICANN decisions. To do so, ICANN should crowdsource oversight and develop standards to measure success.

What Does it Mean to Crowdsource Oversight and Develop Metrics for Success?

Crowdsourcing is the concept of an “institution taking a function once performed by employees or volunteers and outsourcing it to an undefined (and usually large) network of people in the form of an open call.” [1] In the context of tapping a diffuse crowd to perform oversight, we mean using the power of the crowd to evaluate the success of ICANN’s decisions, measured not only in light of ICANN’s core public interest values, [2] but also based on the impact, effect and level of compliance following ICANN’s policy development process.

Developing standards to measure success means the ICANN community should collectively develop the indicators that can be used internally or by a distributed crowd to evaluate old and new practices of problem solving within ICANN.

Why Does This Proposal Make Sense at ICANN?

ICANN is often criticized for being unaccountable [3] or for lacking a clear consensus as to whom or to what it is accountable. While the Affirmation of Commitments (AoC), which ICANN is contractually obligated to uphold, requires that ICANN operate in a manner that is accountable, transparent and in the interests of global Internet users, ICANN currently has no clear mechanism or metrics for reviewing whether it actually operates well [4] and in the global public interest.

As such, crowdsourcing oversight and developing success metrics will help ICANN enhance accountability and thus increase its legitimacy as a 21st century global organization. Specifically, these proposals will help ICANN:

  • Decentralize accountability by giving the responsibility for evaluating ICANN’s work to a globally distributed crowd;
  • Widen the pool of participation by creating new avenues for engagement in the evaluation and review stage of policy development;
  • Alleviate stress and human error by removing the brunt of oversight responsibility from over-burdened volunteers and staff;
  • Operate more directly in the public interest by involving the public in assessing whether ICANN’s practices are in line with its core values and mission;
  • Enable flexible but ongoing evaluation and assessment to help ICANN best allocate resources and change ineffectual practices over time;
  • Embrace experimentation as a means for measuring success.

Implementation Within ICANN

Crowdsourcing Oversight

Here are some initial crowdsourcing oversight pilot ideas that ICANN could test over the course of the next year:

  • Develop an Open Peer Review Platform
    • Drawing lessons from successful open peer review projects (e.g., LIBRE), ICANN could identify testbed groups, structures or topics whose work product (e.g., draft issue reports, draft final recommendations, etc.) could be posted to an open platform that offers editing, commenting, reviewing and revising functionality to users. ICANN could then invite the public to refine and give feedback directly, rather than only submitting formal public comments during specific stages or after the fact.
    • An open platform where those responsible for work product can vet their work while still in progress, or after submission to the Board, will promote the development of policy recommendations that can be implemented more easily and more quickly. Increasing oversight of potential impacts and compliance issues throughout the policy development process minimizes the chance that time, energy and resources will be wasted.
  • Pilot the Use of Online Ranking and Feedback Tools
    • Using annotation tools like ReadrBoard (or, in time, Hypothes.is), ICANN could enable real-time evaluation of text; poll community sentiment on specific policy development proposals; or help identify potential impacts not addressed during issue scoping (see the first sketch following this list).
  • Crowdsource Contractual Compliance Monitoring
    • One recent accountability challenge raised at ICANN relates to its role as a contracting authority with registries and registrars. [5] As a first step toward ensuring a level playing field within the contracting process, ICANN could, using open contracting principles, openly post all registry and registrar contracts online (along with other open data sets, such as financial data and existing compliance data) and ask the public to help monitor compliance by all contracting parties. This could be coupled with a challenge to crowdsource the creation of a “contractual compliance” guidebook for use by the public (see the second sketch following this list). [6]
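
To make the annotation-tool pilot concrete, here is a first, minimal sketch in Python of how community sentiment gathered through such a tool might be tallied per section of a draft proposal. The record format, reaction labels and 50% threshold are illustrative assumptions, not features of ReadrBoard or Hypothes.is.

```python
from collections import Counter, defaultdict

# Hypothetical annotation records exported from an annotation tool: each ties a
# reaction label to a section of a draft policy document.
annotations = [
    {"section": "1. Scope", "reaction": "agree"},
    {"section": "1. Scope", "reaction": "agree"},
    {"section": "1. Scope", "reaction": "unclear"},
    {"section": "2. Compliance", "reaction": "disagree"},
    {"section": "2. Compliance", "reaction": "disagree"},
    {"section": "2. Compliance", "reaction": "agree"},
]

def tally_sentiment(records):
    """Count reactions per document section."""
    tallies = defaultdict(Counter)
    for record in records:
        tallies[record["section"]][record["reaction"]] += 1
    return tallies

# Surface sections where fewer than half of the reactions are "agree", so a
# working group can see where a draft needs revision before formal comment.
for section, counts in sorted(tally_sentiment(annotations).items()):
    total = sum(counts.values())
    agreement = counts["agree"] / total
    status = "needs review" if agreement < 0.5 else "ok"
    print(f"{section}: {dict(counts)} -> {agreement:.0%} agreement ({status})")
```

A real pilot would pull these records from the annotation tool’s export or API; the point is simply that even basic aggregation turns scattered reactions into a reviewable signal for working groups.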
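
Second, a minimal sketch of how crowd-submitted compliance flags against openly posted registry and registrar contracts might be triaged. The field names, clause labels and escalation threshold are hypothetical; ICANN’s actual contract and compliance data structures may differ.

```python
from collections import defaultdict

# Hypothetical crowd-submitted flags against openly posted contracts: each flag
# names a contracted party and the contract clause alleged to be violated.
flags = [
    {"party": "Registrar A", "clause": "WHOIS accuracy"},
    {"party": "Registrar A", "clause": "WHOIS accuracy"},
    {"party": "Registrar A", "clause": "fee reporting"},
    {"party": "Registry B", "clause": "data escrow"},
]

ESCALATION_THRESHOLD = 2  # flags on one clause before staff review (illustrative)

def triage(reports):
    """Group crowd flags by (party, clause) and return items needing review."""
    counts = defaultdict(int)
    for flag in reports:
        counts[(flag["party"], flag["clause"])] += 1
    return [(party, clause, n) for (party, clause), n in sorted(counts.items())
            if n >= ESCALATION_THRESHOLD]

for party, clause, n in triage(flags):
    print(f"Escalate to Contractual Compliance: {party} / {clause} ({n} flags)")
```

Crowd flags here act as a filter, not a verdict: staff in the Contractual Compliance Program would still verify each escalated item against the published contract.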

Developing Standards to Measure Success

To continue existing initiatives aimed at developing success standards, ICANN should not only look to the output of the ICANN Strategy Panel on the Public Responsibility Framework, but also to interdisciplinary research being conducted on developing metrics to study the impact of new, collaborative and iterative decision-making models. [7]

“If we are going to accelerate the rate of experimentation in governance and create more agile institutions capable of piloting new techniques and getting rid of ineffectual programs, we need research that will move away from ‘faith-based’ engagement initiatives toward ‘evidence-based’ ones.” [8]

Notably, ICANN’s development of metrics should take into account the following factors:

  • The availability and potential use of real-time data, along with enhanced analytical capabilities (often called big data), to assess outcomes and impact and to predict which strategies are more likely to succeed [9];
  • The study of outcome and impact should be ongoing, especially considering the rapid rate at which the DNS and the Internet evolve. Metrics should therefore be developed with an eye toward enabling flexible and continual assessment [10];
  • Devising a conceptual framework, or logic model, may serve as a useful tool to help define success indicators (see the sketch following this list). “The logic model makes explicit the relationships among resources available to implement an intervention, activities planned, and sought-after results. It also theorizes how the results, or outputs, of the initiative will lead to both short-term beneficial outcomes and longer-term, fundamental impact” [11];
  • Metrics for success should be based on both quantitative and qualitative factors. Experimentation provides a medium for measuring and assessing success, and thus both quantitative and qualitative experimentation should be practiced at ICANN;
  • Because measuring success is inherently value-laden, engaging the global Internet public through online rating and feedback tools can help standards for success evolve as community values change;
  • Those with experiential know-how related to particular implementation challenges should be involved in developing success metrics. This makes it more likely that the indicators can be applied in practice without excess burden or cost.
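
As a rough illustration of the logic-model point above, the Python sketch below encodes a logic model for the open peer review pilot and derives a success indicator that mixes a quantitative measure with a qualitative one. The fields, indicators and equal weighting are assumptions made for illustration, not a framework ICANN has adopted.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Explicit map from resources to sought-after results (per the quote above)."""
    inputs: list = field(default_factory=list)      # resources available
    activities: list = field(default_factory=list)  # interventions planned
    outputs: list = field(default_factory=list)     # direct, countable results
    outcomes: list = field(default_factory=list)    # short-term benefits
    impact: str = ""                                # long-term, fundamental change

pilot = LogicModel(
    inputs=["staff time", "open peer review platform"],
    activities=["post draft reports for open review"],
    outputs=["number of reviewers", "comments incorporated"],
    outcomes=["faster, better-grounded recommendations"],
    impact="greater community trust in ICANN decisions",
)

# Illustrative indicator mixing a quantitative measure (participation against a
# target) with a qualitative one (mean community survey rating out of 5).
def success_score(reviewers: int, target: int, ratings: list) -> float:
    quantitative = min(reviewers / target, 1.0)
    qualitative = (sum(ratings) / len(ratings)) / 5.0
    return 0.5 * quantitative + 0.5 * qualitative  # equal weights: an assumption

print(f"Outputs tracked: {pilot.outputs}")
print(f"Success score: {success_score(80, 100, [4.2, 3.8, 4.5]):.2f}")
```

Consistent with the points above, the indicators, targets and weights would themselves be revised by the community over time rather than fixed once.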

Examples & Case Studies – What’s Worked in Practice?

There have been a number of crowdsourced projects around the world aimed at improving oversight and measuring success in a variety of contexts that ICANN could learn from. For instance:

  • The Alliance for Useful Evidence – An open-access network of more than 1,400 individuals from across government, universities, charities, business and local authorities in the UK and internationally. The organization’s aim is to become a hub for evidence initiatives in the UK, providing a forum for members to share best practices and avoid duplication of work.
  • Asign – In 2011, the United Nations Institute for Training and Research and the Asian Disaster Preparedness Center used an app called Asign that enabled accurate “geo-referencing of photographs taken by volunteers connected to the Internet” to help monitor crisis-level floods in Bangkok.
  • FCC’s Speed Test – In November 2013, the U.S. Federal Communications Commission released a free app that performs speed tests to measure mobile broadband network performance. The app collects this data, and the FCC plans to release interactive visualizations that allow consumers to see national mobile broadband network performance.
  • Libre – A free online platform that offers instant access to all research output, followed by dynamic and transparent evaluation through a formal open peer review process arranged and handled by the authors. It allows community-based organization and cross-referencing of global knowledge.
  • Liquid Feedback – The Public Software Group of Berlin, Germany and the Association for Interactive Democracy teamed up to create an open-source platform to aid in decision-making. The platform enables polling of the public (beyond yes/no questions) and even allows for rephrasing and submission of unforeseen input.
  • Louisiana Bucket Brigade – An environmental activist group used crowdsourcing “via a mapping platform developed from Ushahidi to collect data from people who witnessed the spread of oil and the damage to the environment” after the BP Gulf Coast oil spill. [12] The group used this input to record the “magnitude of the oil leak effect.”
  • Stimulus Watch – A platform created following passage of the Recovery Act and the creation of Recovery.gov in the United States to help track federal spending of stimulus funds. [13] Stimulus Watch harnesses the power of a distributed crowd to monitor federal stimulus spending, asking citizens to share their knowledge of local stimulus projects by finding, discussing and rating them.

There have also been initiatives attempting to test new metrics for success that could be informative for ICANN. For an overview of these initiatives, see the GovLab Working Paper: “Toward Metrics for Re(imagining) Governance: The Promise and Challenge of Evaluating Innovations in How We Govern.”[14]

Open Questions – Help Bring This Proposal Closer to Implementation

  • What institutional and cultural barriers – such as a current lack of data in accessible, open and machine-readable formats – could pose challenges to implementation?
  • ICANN has previously worked, and is currently working, on developing metrics for success. How can we work together to leverage that work in piloting this proposal?
  • What are specific compliance challenges that ICANN faces for which developing a crowdsourcing project may be useful?
  • What oversight responsibilities require the least specialized or nuanced knowledge (i.e., making them more ripe for crowdsourcing to the general global public)?
  • Which ICANN structures or groups (e.g., those working in the Contractual Compliance Program) would be the best testbeds for piloting this proposal?

Sources

1. Jeff Howe. “The rise of crowdsourcing.” Wired Magazine. Issue 14, no. 6 (June 2006): 1-4.
2. See “Primer on the Internet Corporation for Assigned Names and Numbers” at 6. The Governance Lab @ NYU. October 13, 2013.
3. Milton Mueller. “ICANN’s Accountability Meltdown: A Four-Part Series.” Internet Governance Project. August 31, 2013; Chuck Gomes. “Examples of Where ICANN Can Be More Accountable.” CircleID. September 4, 2013; Emily Wilsdon. “Regulating the Root: The Role of ICANN as Regulator, and Accountability.” May 1, 2010.
4. Note that ICANN does conduct annual IANA Functions Customer Satisfaction Surveys, see “2013 IANA Functions Customer Service Survey Results,” though of the 1,491 survey invitations sent in 2013, only 112 recipients responded. Furthermore, this survey relates only to ICANN’s role performing specific IANA Functions and does not request input on customer or user satisfaction in relation to other responsibilities within ICANN’s remit.
5. Chuck Gomes. “Examples of Where ICANN Can Be More Accountable.” CircleID. September 4, 2013 (Gomes argues that “these agreements from the start have been slanted to ICANN’s favor and burdensome for applicants, registrars, and registries. All risks have been flowed down to registries and registrars with requirements to indemnify ICANN while removing any chance for the contracted parties to take action against ICANN, if warranted. This was compounded further in 2013 when the ICANN staff, in a surprise move, decided to impose the unilateral right to amend clauses in the new gTLD registry agreements.”).
6. For more information on how ICANN currently handles contractual compliance, see “Contractual Compliance at ICANN.” ICANN.org. October 23, 2011.
7. See, e.g., Aleise Barnett, David Dembo and Stefaan G. Verhulst. “Toward Metrics for Re(imagining) Governance: The Promise and Challenge of Evaluating Innovations in How We Govern.” GovLab Working Paper, v.1. April 18, 2013.
8. Ibid. at 1.
9. Ibid. at 8 (“When designed well, big data may allow practitioners to track progress and understand where existing interventions require adjustment much faster.”).
10. Note that the annual IANA Functions Satisfaction Survey may be a good tool to use in conceptualizing metrics for success. See “IANA Functions Satisfaction Survey Yields Overwhelmingly Positive Results.” AG-IP News. January 16, 2014 (considering factors such as documentation quality, process quality, accuracy, courtesy and transparency).
11. Barnett, Dembo and Verhulst, supra note 7, at 5.
12. “Louisiana Bucket Brigade puts monitoring in the hands of citizens.” Daily Crowdsource.
13. Sanchez, Julian. “Stimulus stimulates crowdsourced oversight, activism.” Ars Technica. February 2, 2009.
14. Barnett, Dembo and Verhulst, supra note 7, at 10-11.


One Response to “Proposal 3 for ICANN: Enhance Accountability by Crowdsourcing Oversight & Developing Metrics for Success”

  1. Chuck Gomes, February 7, 2014 at 9:37 pm

    Some of the ideas in this proposal appear to have clear potential for adding value and hence would seem to warrant further investigation. Others seem to me to need more justification before spending very much on testing them. (See my inline comments.)
