Smarter Governance: A Research Agenda

The Governance Lab strives to improve people’s lives by changing how we govern. We endeavor to strengthen the ability of people and institutions to work together to solve problems, make decisions, resolve conflict and govern ourselves more effectively and legitimately. We design technology, policy and strategies for fostering more open and collaborative approaches to governance and we test what works.

The GovLab Hypothesis
When institutions open themselves to diverse participation and collaborative problem solving, they become more effective and the decisions they make are more legitimate.

Living Labs: Action Research

We call the GovLab a Living Lab because we are committed to an action-oriented research approach to testing this hypothesis. We combine agile solving of real-world problems for leaders of institutions and communities who want to implement more open and collaborative decision-making with rigorous, empirical research by an international network of researchers who want to study what works.

Smarter Governance: Getting Better Expertise In

Our Smarter Governance projects look at how institutions can tap the intelligence and expertise of a distributed network. Our goal is to understand whether and how targeting participation based on skills, experience and expertise (crowdsourcing wisely) might lead to more effective and more legitimate outcomes than broad citizen engagement (crowdsourcing widely).

This work on Smarter Governance is premised on the belief that people possess expertise that can translate into new insights and solutions to public problems. Advances in science and technology have provided new tools for identifying who knows what and targeting requests to participate to those most likely to contribute. We want to understand whether:

  • Targeting a request to participate to relevant audiences with a specific set of skills and real-world experience – just as an advertiser targets ad placement to consumers likely to purchase – translates into improved participation; and
  • Matching people to problems they are well suited to understand leads to solutions that work.

Experiments: Answering these questions requires undertaking a series of experiments. While each experiment focuses on deepening our answer to one core question, we anticipate that the experiments will inform one another. We are actively seeking additional partners to undertake projects related to the ones described below:

  1. Disclosing Expertise: We believe asking people to share what they know will improve how they participate. If we ask people to disclose their expertise, does that increase the likelihood of engagement?
    • ExpertNet – The GovLab is working with several cities, beginning with the City of Buenos Aires, Argentina, to build and test an open source, linked data system for capturing municipal government expertise. ExpertNet seeks to expand what city government decision-makers and problem-solvers know about their colleagues by going beyond titles to capture people’s expertise understood broadly.
    • What Do We Hope to Learn? We want to learn what people are willing to share about what they know, how that correlates to a willingness to participate, and whether government expert networking can increase collaboration and improve problem-solving.
    • How Will We Learn It? In this project, we are testing different ways of asking people to express what they know – including credentials, experience, know-how and passions – and assessing the impact when they are called upon to collaborate with colleagues to help solve problems. We will also test different incentives for participating in these municipal employee expert networks.
  2. Finding Expertise: We believe there are different forms of expertise relevant to different circumstances. How do we identify, validate and target expertise in society?
    • ICANN Expert Network – In the context of work with the ICANN Strategy Panel on Multi-Stakeholder Innovation, the GovLab is testing strategies for creating an expert network to make global experts in computer science and engineering more accessible to a range of organizations responsible for managing the Internet.
    • What Do We Hope to Learn? We want to learn more about the techniques available across disciplines for surfacing expertise at various stages of decision-making: mining openly available data, sentiment analysis in social networks, searching across existing open and closed networks, and other algorithmic techniques.
    • How Will We Learn It? Using questions ICANN and other organizations need help answering from a global, distributed audience, we will test different techniques for finding and sharing expertise ranging from use of popular existing networks such as Quora or Stack Exchange to custom-built solutions.
  3. Matching Expertise: We believe that targeting requests to participate improves outcomes. How do we motivate people to apply what they know to public problems when asked?
    • OPEN NYU – The Open Peer Engagement Network (OPEN) Project at New York University devises strategies for leveraging expertise among staff, faculty and students across the university community and its stakeholders to solve challenges presented in a variety of contexts, from campus policing to the future of technology-enhanced education.
    • What Do We Hope to Learn? In this setting, where the members of the community are already known to us and we know a great deal about who knows what, we hope to deepen our understanding of how to match people to problems. We will also consider whether a person with relevant credentials is more likely to participate and to do so usefully.
    • How Will We Learn It? By running parallel experiments where we crowdsource widely and wisely across the campus, we can test the impact of targeting.
  4. Asking for Help: We believe that questions that are clear and that articulate the form of participation sought create an incentive for relevant participation. How do we frame and define problems meaningfully and compellingly to attract useful participation?
    • National Health Service (NHS) – We are helping the NHS understand how to measure the impact on health and wellness outcomes of its investment in opening up healthcare data to patients, providers and the public. In particular, we are designing techniques for engaging broader participation in outcomes research than academics alone.
    • What Do We Hope to Learn? By designing a set of massive open online research projects for engaging patients, providers and administrators in fact and data gathering about their own health, we can learn more about how to create incentives for participation.
    • How Will We Learn It? Here we hope to test different articulations of the same question to see how they affect people’s willingness to engage, and to develop an understanding of what meaningful engagement looks like.

Across all four of these projects, we hope to deepen our understanding of:

  • Context: What absorptive capabilities must governing institutions develop to assimilate the diverse knowledge of the public?
  • Impediments: What are the legal and cultural impediments to being open to expertise? Do institutions use the input they receive, and if so, how?
  • Outcome and impact: What is the resulting impact on people’s lives? On perceived legitimacy and effectiveness?

We inform our work and that of our experimental project teams through the development of:

  • Open Governance Knowledge Base – In the interest of sharing our research in a widely accessible and collaborative fashion, the GovLab is building a wiki-style knowledge base for the field. The Open Governance Knowledge Base will act as a co-created repository of research and insights on the concepts, tools, experiments, people and organizations playing a role in reimagining modern governance.
  • Observatory on Innovations in Governance – Reporting on current events, such as the Vatican’s crowdsourcing of doctrine, as well as the latest academic research findings, such as findings that mechanisms providing direct access to environmental rulemaking processes can decrease regulatory compliance costs, the Observatory seeks to act as a neutral knowledge broker for the field of open governance.
  • GovLab Academy – Funded by the John S. and James L. Knight Foundation, the GovLab Academy is a free, online community for those interested in teaching and learning how to open up their institutions and work more collaboratively to solve public problems in ways that improve people’s lives. A partnership between The GovLab and the MIT Media Lab’s Online Learning Initiative, the Academy offers curated videos, podcasts, readings and activities designed to enable the purpose-driven learner to deepen her practical knowledge of the field at her own pace.

ExpertNet

Background

Professionals in municipal government have a hard time identifying who among their colleagues has the experience and expertise to improve how a city serves its citizens. The person sitting at the next desk with the ambiguous title of “manager” or “director” could have the know-how to make the difference between success and failure in an important public project. To make city government more effective, we want to be able to identify the talents, skills and capacity of the municipal workforce.

Working with partner cities, the ExpertNet project at the Governance Lab is building open source tools to enable municipal government employees to identify each other’s areas of expertise. We will also test the efficacy of knowing more about the skills and abilities of employees for improving collaboration and problem solving.

ExpertNet builds on earlier work in connection with the White House Open Government Initiative on designing a platform for asking people to share more about their expertise – their experience, skills and passions. Using a collaborative wiki, the United States General Services Administration (GSA) and the Open Government Initiative solicited feedback from the public on the development of a platform that 1) enables government officials to circulate notice of opportunities to participate in public consultations to members of the public with expertise on a topic; and 2) provides those volunteer experts with a mechanism to give useful, relevant and manageable feedback to government officials.

What Do We Hope to Learn?

As the centerpiece of the GovLab’s inquiry into Disclosing Expertise, ExpertNet seeks to determine if we can expand what we know about individuals by going beyond titles and capturing people’s expertise understood broadly. Expertise is more than credentials. It can include skills and past experience. It encompasses that which we know how to do and that which we know how to explain to others.

How Will We Learn It?

The central objective of ExpertNet is to build an open source, linked data system for capturing municipal government expertise. We want to understand: 1) what kinds of expertise are most helpful to identify; 2) what are the best ways to collect that information; 3) how expertise impacts people’s willingness to collaborate; 4) whether identifying employee expertise helps cities to be more effective at solving problems; and 5) the resulting impact on citizen and employee perceptions.
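
To make the idea of a linked data expertise record concrete, here is a minimal sketch in Python of what a single self-reported profile might look like and how a set of profiles could be indexed for lookup. The vocabulary (schema.org-style terms), field names and values are illustrative assumptions, not the actual ExpertNet schema.

```python
# A minimal sketch of one ExpertNet-style profile expressed as linked data
# (JSON-LD style). The vocabulary and values are illustrative assumptions,
# not the actual ExpertNet schema.
import json

profile = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",                                   # hypothetical employee
    "jobTitle": "Manager",                                # the ambiguous title on paper
    "worksFor": {"@type": "GovernmentOrganization",
                 "name": "City of Buenos Aires"},
    "knowsAbout": ["procurement reform", "open data licensing"],  # self-reported know-how
    "hasCredential": ["Master of Public Administration"],         # credentials
    "seeks": "collaboration on participatory budgeting",          # interests and passions
}

def skill_index(profiles):
    """Build a simple inverted index from skill to people, so a problem owner
    can ask 'who knows about X?' across one or many cities."""
    index = {}
    for person in profiles:
        for skill in person.get("knowsAbout", []):
            index.setdefault(skill.lower(), []).append(person["name"])
    return index

print(json.dumps(profile, indent=2))
print(skill_index([profile]))
```

Representing profiles as structured, linked data in this way is what would allow records from different pilot cities to be connected and searched together, as described in the experimental phases below.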

Experiments

The ExpertNet project will progress through four experimental stages:

  • Phase 1. In the first phase, we are focusing on employee self-reporting and testing different categories of information people might supply (credentials, skills, interests, past projects, partnerships, work experience and future plans) and how that correlates to a willingness to collaborate and to collaborate well. The goal is to run 3 pilot implementations of 300-500 people each in 2 cities.
  • Phase 2. In the second phase, we will explore strategies for creating a linked data infrastructure to connect and make searchable the skills and expertise of city workers across cities. The goal is to connect people across all the pilot implementations and to test using the resulting data to inform problem solving by matching people to problems (a minimal matching sketch appears after this list).
  • Phase 3. In the third phase, we will explore integration with LinkedIn, VIVO and other popular international, regional and local pre-existing sources of expertise, as well as open datasets on publications and grants. The goal is to test whether tapping into existing databases is effective in supplementing and vetting the self-reported data collected in Phase 1.
  • Phase 4. In the final phase, we will investigate integration with existing HR systems. The goal is to transform ExpertNet from a stand-alone expertise platform into the central employee database for partner cities, which will allow us to explore what happens when a city’s HR system is populated with robust data on skills and experience rather than exclusively simple biographical information.
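
As referenced in Phase 2, here is a minimal sketch of the matching idea, assuming hypothetical profiles pooled from two pilot cities and a simple Jaccard-overlap scoring rule; this is an illustration of the concept, not a planned ExpertNet algorithm.

```python
# A minimal sketch of matching people to problems by overlap between a
# problem's keywords and each person's self-reported skills. The scoring
# rule (Jaccard similarity) and the profiles are illustrative assumptions.
def match_people_to_problem(problem_keywords, profiles, top_n=3):
    """Rank people by Jaccard overlap between their skills and the problem keywords."""
    problem = {k.lower() for k in problem_keywords}
    scored = []
    for person in profiles:
        skills = {s.lower() for s in person.get("knowsAbout", [])}
        if skills:
            score = len(problem & skills) / len(problem | skills)
            scored.append((score, person["name"]))
    return sorted(scored, reverse=True)[:top_n]

# Hypothetical profiles pooled from two pilot cities.
profiles = [
    {"name": "Jane Doe",   "knowsAbout": ["open data licensing", "procurement reform"]},
    {"name": "Luis Perez", "knowsAbout": ["participatory budgeting"]},
]

# Which employees look best suited to advise on an open data licensing question?
print(match_people_to_problem(["open data licensing"], profiles))
```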

Tools & Technologies

In performing and designing these experiments, we will leverage and/or learn from a variety of innovative tools and technologies already in existence. For instance:

  • Expert networking platforms like VIVO and Stack Exchange will provide useful data to draw from as well as best practices to emulate for creating searchable platforms of individuals’ skills, interests and experiences.
  • Ratings and endorsement systems, as found on LinkedIn and eBay, will allow users to vet skills and gain demonstrable recognition for good work, and will enable badge and leaderboard features that incentivize participation.
  • Linked Data technologies as used in CKAN and Google Now will allow us to consolidate data on individuals’ skills, preferences and experience from a diversity of sources.
  • Predictive search functionality as deployed by Google and Bing will ensure consistency of self-reported data and make expert discovery easier.
  • High-performance databases like Oracle, IBM DB2 and Microsoft SQL Server will satisfy our data management and storage needs.

OPEN NYU

Background

New York University is a large, elite institution of higher learning with a strong tradition of faculty governance. As with most institutions, traditional methods of consultation are the primary means by which the NYU administration works with faculty, staff and students to link official decisions with the interests and perspectives of stakeholders. These traditional approaches, however, have been challenged in recent years as expectations for more meaningful engagement become the norm. As NYU adapts to new technological realities that enable more voices to influence the governance sphere, and as the problems of University governance grow more complex, the administration faces the challenge of responding to calls for greater legitimacy in decision making while remaining concerned about the effectiveness of its consultative processes.

The GovLab is working with NYU to undertake a series of pilot projects on crowdsourcing as a supplemental approach to its traditional consultative and market research processes. The goal of the Open Peer Engagement Network (OPEN) NYU Project is to explore strategies for NYU to innovate how it makes decisions by tapping the intelligence and knowledge of NYU’s diverse community of stakeholders, including staff, students, teachers, and researchers.

What Do We Hope to Learn?

The GovLab OPEN Project seeks to identify, through experimentation and analysis, the conditions that promote smarter governance – that is, how organizations can deploy targeted tools, platforms and engagement processes that tap broad-based expertise and information from their stakeholders as inputs into better decisions, where better means outcomes that are more effective and processes that are perceived as more legitimate.

 How Will We Learn It?

The OPEN NYU project will design, deploy and test different platforms for engaging the community and different incentive structures for encouraging participation, with the goal of improving the outcomes of decisions made with the benefit of these new inputs.

The OPEN Project is informed by a number of knowledge domains, supported by technical expertise, and grounded in institutional perspectives and real-world governance challenge cases, all combined to test how organizations can open themselves to more inclusive governance, with the objective of achieving smarter governance. As a series of Living Labs experiments, we will not only test what works where, and why; the test processes and findings will also be directly linked to the real-world governance settings of the clients. We will explore strategies for large institutions to innovate how they make decisions by tapping the intelligence and knowledge of the institutions’ diverse communities of stakeholders.

The research agenda focuses on a number of NYU governance cases, with experiments designed to intervene in and support the governance setting, combined with observation and analysis to determine the effect of differing process factors and tool/platform features on the outcome objectives of effectiveness and legitimacy. We will compare the differing settings among the cases, applying specialized engagement tools, platforms and varying process features as interventions in those real-world governance settings in order to control and account for the independent variables. These experiments will test the smart governance hypothesis generally and seek to decipher the conditions and factors that influence effective and legitimate governance.

Experiments

The OPEN Project will experiment in four case areas:

  • Bookstore: The NYU main bookstore on Broadway faces significant business competition from online booksellers such as Amazon.com, as well as important non-market expectations from the University’s stakeholders. Working with bookstore staff and stakeholders, we will design creative new ways to engage those outside the store’s management in rethinking the bookstore, whether by innovating, evolving or changing its business model.
  • Employee Engagement: Working with the University’s Human Resources and Operations unit, we will design and test crowdsourced decision-making processes for employee engagement, and mechanisms to link the expert knowledge of front-line service personnel to the adaptation of policies and procedures, with the aim of improving service outcomes and employee job satisfaction.
  • Alumni Engagement: We will work with NYU Stern School of Business student groups and alumni to better understand alumni and student needs regarding engagement, participation and communications through the use of crowdsourcing technology.
  • Technology-Enhanced Education: Working with the University’s Faculty Committee on the Future of Technology Enhanced Education, we will support the Committee in its exploration of issues involving the incorporation of new technology consistent with the University’s academic mission and in ways that further NYU’s commitment to innovation in teaching and learning, pedagogical practices, and research.

Tools & Technologies

In performing and designing these experiments, we will leverage and/or learn from a variety of innovative tools and technologies already in existence. For example:

  • Methods for testing web design interventions: e.g., A/B testing, indirect content experiments and multivariate webpage analysis (a minimal A/B testing sketch follows this list).
  • Interactive public engagement interfaces: to engage non-university stakeholders through the public face of the institutions.
  • Investigation of the features that promote civil discourse in online learning environments as compared to open media environments.  
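
As referenced in the first item above, here is a minimal sketch of the kind of A/B comparison we have in mind: did one version of an engagement page yield a higher participation rate than another? It uses a standard two-proportion z-test; the traffic and conversion numbers are invented purely for illustration.

```python
# A minimal sketch of an A/B comparison of participation rates between two
# versions of an engagement page. All numbers below are made up.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in participation rates between versions A and B."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# 1,000 visitors saw each version; version A converted 41 visitors, version B converted 58.
z, p = two_proportion_z_test(41, 1000, 58, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```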

NHS

Background

The United Kingdom National Health Service (NHS) is a publicly funded healthcare system that provides comprehensive health services, funded largely through central revenues, that are for the most part free of charge to legal residents. Additional health services, such as prescription drugs and dental services, may involve some patient expenditure depending on a number of patient characteristics (including income) and the type of treatment.

The NHS is facing significant budget constraints and political questions about the effectiveness of its basic model as a public healthcare system. In recent years, efforts to improve outcomes and efficiencies have realized some cost savings through the application of information technology (IT) and evidence-based decision-making. Ideally, such savings are achieved without negative effects on services or outcomes, and the scale of the savings can be orders of magnitude greater than the technology investment that produced them.

Despite this, the case for increased expenditure on information technology will be difficult to make as NHS England is facing a significant budget deficit in the coming year. In the context of likely reductions in direct patient services, increased expenditures on IT will have to demonstrate a clear link to short-term cost savings that do not further affect service levels or system outcomes.

A new vision of coordinated, patient-centered healthcare is emerging amongst care professionals, researchers and policymakers. This new vision leverages health data and technology to engage patients, integrate providers, and improve clinical outcomes. Supplementing this growth in data from within the health system is data from the quantified-self health movement, where patients monitor their daily behaviors and physiological conditions in response to interventions from their healthcare providers.

What Do We Hope to Learn?

How can a healthcare system facing a significant budget deficit justify increased expenditure on IT to support evidence-based decision-making aimed at achieving budget cuts without harming outcomes? What combination of big data, open data and open governance can be brought together to identify targets for expenditure reductions and efficiencies that have no negative impact on patient services or outcomes?

How Will We Learn It?

Working with the NHS, we aim to develop a research agenda that will support the case for greater investment in data gathering and analytical capacity as a means of identifying system inefficiencies, areas where cost savings can be realized and areas where system outcomes can be improved. To focus the analysis, the research will concentrate on:

  1. Accountability. How can open comparative data on service-provider decision-making and health outcomes ensure that physicians and hospitals are held accountable for unwarranted variation? How can such variation serve as a natural experiment for identifying positive outcomes at lower cost?
  2. Choice. Does open data on patient experience and health outcomes provide patients with the information and motivation to seek out the services that best meet their individual healthcare needs? Does increased access to health data allow patients to choose between providers and treatments, thus driving patient satisfaction and quality improvements? Do patients more readily and with greater agency engage in shared decision making when they have better access to information?
  3. Outcomes. Does publishing “outcomes data” drive competition between healthcare professionals, which in turn can drive quality improvements and innovation? Does the availability of open health data prompt analysts and researchers to undertake research into quality and outcomes, improve the consistency of evaluation and generate insights that can save money and improve outcomes?

This project targets the following outputs:

  • Curation of relevant examples from the literature that support the theories behind these concepts;
  • Analysis of what data sources are currently being used, with an outline of possible experiments or research projects that, when undertaken, could help support the claims with empirical evidence;
  • A particular focus on designing participatory, massive online research projects that would involve patients and providers;
  • A synthesis of the above into a concept paper to be prepared for the NHS Healthcare Innovation Expo in March 2014.

Experiments:

  • Accountability
    • Natural Experiment: comparative data analysis on service-provider decision making and health outcomes
    • Supplemental: service-provider SNS knowledge-base, where, e.g., physicians can ask their colleagues questions to crowdsource possible treatments (similar to, e.g., CrowdMed)
    • Related Research Questions: expertise matching; motivation and incentives to contribute.
  • Choice
    • Patient Info System 1: provide a test group with data on patient experience and health outcomes (a control group receives generic health information) – do test-group patients seek out services distinct from those chosen by the control group? (A minimal sketch of this comparison appears after this list.)
    • Patient Info System 2: does access to more health data lead patients to choose between providers and treatments?
    • Are patient outcomes improved when patients change providers and/or treatments?
    • Patient Info System 3: do patients with better access to information engage in shared decision making?
    • Are patient outcomes improved under shared decision making?
    • Supplemental: a patient SNS where patients can ask whether their service, diagnosis, etc. is proper and adequate (similar to, e.g., PatientsLikeMe). Related Research Questions: expertise matching; motivation and incentives to contribute; the relationship between information and decision making; the impact of patient input on outcomes.
  • Outcomes
    • Natural Experiment: does publishing “outcomes data” lead to social learning, a convergence in quality improvements and shared innovation?
    • Test Environment: develop a platform for open collaboration in an online data & analysis system (similar to Lakhani and Boudreau, 2012).
    • Survey: does open health data lead to more independent research and evaluation?
    • Related Research Questions: expertise identification; motivation and incentives to contribute.
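
As referenced in the Patient Info System 1 item above, here is a minimal sketch of how the test/control comparison could be analyzed once such a study runs. The random assignment, group sizes and switching rates are simulated placeholders; in the real study the outcome would come from NHS service records, not a simulation.

```python
# A minimal, simulated sketch of the Patient Info System 1 analysis: randomly
# assign patients to receive outcomes data (test) or generic information
# (control), then compare how often each group sought out a different provider.
import random
from math import sqrt

random.seed(42)
patients = list(range(2000))
random.shuffle(patients)
test, control = patients[:1000], patients[1000:]
test_set = set(test)

# Placeholder outcome: True if the patient switched providers, False otherwise.
# The 12% vs 8% switching rates are invented purely for illustration.
switched = {pid: random.random() < (0.12 if pid in test_set else 0.08)
            for pid in patients}

def switch_rate(group):
    return sum(switched[pid] for pid in group) / len(group)

p_test, p_ctrl = switch_rate(test), switch_rate(control)
diff = p_test - p_ctrl
se = sqrt(p_test * (1 - p_test) / len(test) + p_ctrl * (1 - p_ctrl) / len(control))
print(f"test {p_test:.3f} vs control {p_ctrl:.3f}; "
      f"difference {diff:.3f} ± {1.96 * se:.3f} (95% CI)")
```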

Tools & Technologies

In performing and designing these experiments, we will leverage and/or learn from a variety of innovative tools and technologies already in existence. For example:

  • Data analysis tools and platforms.
  • Social Networking / knowledge sharing web platforms, specifically patient-centered ones.
  • Expert collaboration spaces.
  • Online surveys.

ICANN

Background

The Internet Corporation for Assigned Names and Numbers (ICANN) has coordinated the Internet’s system of unique identifiers for more than a decade. ICANN has recently recognized, however, that its coordination role is too important to the stability, security and reliability of the Internet not to evolve and adapt along with newly available collaborative approaches to solving public problems. Therefore, in an effort to ensure the future of a free, open and secure Internet for the world, ICANN’s President recently established four strategic panels that unite experts from around the globe to re-imagine how ICANN can achieve its mission in the 21st century.

The GovLab is providing research and experimentation support to the ICANN Strategy Panel on Multi-stakeholder Innovation (MSI Panel), chaired by Professor Beth Simone Noveck.

What Do We Hope to Learn?

The MSI Panel is specifically charged with:

  • Proposing new models for international engagement, consensus-driven policymaking and institutional structures to support such enhanced functions; and
  • Designing processes, tools and platforms that enable the global ICANN community to engage in these new forms of participatory decision-making.

The MSI Panel and the GovLab aim to help define what 21st century shared management of a global public resource looks like by investigating how ICANN can connect global, cross-disciplinary technology experts and motivate them to engage in various stages of ICANN decision-making. We aim to learn whether this leads to more effective and legitimate decisions by proposing recommendations to the ICANN President, Board and community, which will take shape as designs for pilot experiments that ICANN could run and test immediately.

How Will We Learn It?

To develop these recommendations to ICANN, the MSI Panel and the GovLab will run a three-stage brainstorming campaign to canvass for ideas from the global netizen community and then develop and collaboratively refine these ideas into concrete and implementable proposals. We aim to recommend how ICANN can better position itself to leverage specialized yet global expertise in problem-solving across borders.

In doing so, we aim to test some of the innovative tools and techniques we may propose to ICANN for engaging wisely and widely in decision-making and coordinating global policy development.  The aim is to elicit:

  • New and innovative techniques and strategies ICANN could adopt – whether procedural or technological – to work more openly and effectively at each stage of decision-making;
  • Tools, technologies and platforms that ICANN could use to enable broad, diverse and expert participation throughout;
  • Ideas for constitutional, structural, and/or legal models ICANN should or should not adopt; and
  • Other projects, initiatives, models, and technologies being deployed around the world that the MSI Panel could study and learn from to help propose specific experiments ICANN could run to ensure institutional readiness for digesting and using new forms of global expertise.

 Experiments

Our distributed brainstorming campaign – an experiment in and of itself – will take place in three stages as follows:

  • Stage 1 – Idea Generation: Individuals can submit ideas for designing a 21st century ICANN to an online ideation/brainstorming platform. Users can rate and rank suggestions based on importance and practicality. The goal is to elicit a wide range of ideas for concrete approaches and tools ICANN could use to evolve and adapt how it works and engages with the global public in coordinating policy development in the unique identifier space.
  • Stage 2 – Proposal Development: Idea submissions will be grouped into topics to start development of concrete experiment proposals that ICANN could implement and test. Participants will be able to discuss these initial proposal ideas using a blog with line-by-line annotation features. This stage is designed to take ideas closer to implementation by fleshing out the specifics for what ICANN could/should or should not do in experimenting with new approaches for expert engagement in problem-solving.
  • Stage 3 – Collaborative Drafting: Using a wiki, we will invite collaborative drafting on specific experiment proposals that the Panel will then submit to ICANN.

Additionally, some initial experiment ideas we plan to further develop for ICANN to implement and test include:

  • Use of an open brainstorming tool to spot and define problems and projects to work on and to enable cross-disciplinary experts and affected stakeholders alike to raise issues from anywhere.
  • Use of expert networking tools to find experts for ICANN working groups with the know-how, skills and interests to decide what must be accomplished.
  • Submission of all ICANN policy-development projects to open peer review online as a way to get more people involved in understanding what works.
  • Writing and publishing all ICANN contracts using open and consistent data standards aligned with international open contracting norms to open and expand opportunities to provide analytical input and compliance review.

Tools & Technologies

In performing and designing these experiments, we will leverage and/or learn from a variety of innovative tools and technologies already in existence. For example:

  • Open brainstorming tools like the ideation platform Ideascale, to test how well we can elicit and surface a wide range of proposal ideas, rated and ranked for importance and practicality.
  • Blogging platforms like WordPress, to broadcast proposal topics and enable users to comment and give feedback, unrestricted by geography.
  • Annotation tools like the plug-in ReadrBoard, to enable line-by-line reactions from netizens.
  • A wiki like The GovLab Open Governance Knowledge Base, to enable collaborative drafting of action-plans.
  • Expert networking software like Vivo, Stack Exchange and Quora, to test ways to tap specific and specialized knowledge, skills and interests for problem-solving.
  • Open and consistent data standards like Strategy Markup Language (StratML), an XML vocabulary and schema for strategic plans, to enable increased participation in sharing, referencing, indexing, discovery, linking, reusing and analyzing elements of a strategic plan (see the sketch below).
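
To make the last item concrete, here is a minimal sketch of reading goals and objectives out of a structured strategic plan. The element names follow StratML’s general Goal/Objective structure but are a simplified, illustrative stand-in rather than the official schema, and the plan content is hypothetical.

```python
# A minimal sketch of why an open, consistent XML vocabulary matters: once a
# plan is published in structured form, anyone can index and link its parts.
# The element names below are a simplified, StratML-like stand-in, not the
# official StratML schema, and the plan content is hypothetical.
import xml.etree.ElementTree as ET

plan_xml = """
<StrategicPlan>
  <Name>Hypothetical ICANN engagement plan</Name>
  <Goal>
    <Name>Broaden expert participation</Name>
    <Objective><Name>Pilot an expert network for working groups</Name></Objective>
    <Objective><Name>Open policy drafts to line-by-line annotation</Name></Objective>
  </Goal>
</StrategicPlan>
"""

root = ET.fromstring(plan_xml)
for goal in root.findall("Goal"):
    print("Goal:", goal.findtext("Name"))
    for objective in goal.findall("Objective"):
        print("  Objective:", objective.findtext("Name"))
```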
