Society-in-the-Loop

From P2P Foundation


Discussion

Society-in-the-Loop of Governance Algorithms

HITL = Human-in-the-Loop

Iyad Rahwan:

"What happens when an AI system does not serve a narrow, well-defined function, but a broad function with wide societal implications? Consider an AI algorithm that controls billions a self-driving cars; or a set of news filtering algorithms that influence the political beliefs and preferences of billions of citizens; or algorithms that mediate the allocation of resources and labor in an entire economy. What is the HITL equivalent of these governance algorithms? This is where we make the qualitative shift from HITL to society in the loop (SITL).

While HITL AI is about embedding the judgment of individual humans or groups in the optimization of narrowly defined AI systems, SITL is about embedding the judgment of society, as a whole, in the algorithmic governance of societal outcomes. In other words, SITL is more akin to the interaction between a government and a governed citizenry. Modern government is the outcome of an implicit agreement, or social contract, between the ruled and their rulers, aimed at fulfilling the general will of citizens. Similarly, SITL can be conceived as an attempt to embed the general will into an algorithmic social contract.

In human-based government, citizens use various channels (e.g. democratic voting, opinion polls, civil society institutions, social media) to articulate their expectations to the government. Meanwhile, the government, through its bureaucracy and various branches, undertakes the function of governing, and is ultimately evaluated by the citizenry. Modern societies are (in theory) SITL human-based governance machines. And some of those machines are better programmed than others.

Similarly, as more and more governance functions get encoded into AI algorithms, we need to create channels between human values and governance algorithms.

The Algorithmic Social Contract: To implement SITL, we need to know what types of behaviors people expect from AI, and to enable policy-makers and the public to articulate these expectations (goals, ethics, norms, social contract) to machines. To close the loop, we also need new metrics and methods to evaluate AI behavior against quantifiable human values. In other words, we need to build new tools to enable society to program, debug, and monitor the algorithmic social contract between humans and governance algorithms.
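
As a minimal sketch of what "programming, debugging, and monitoring" such a contract could look like in practice, the Python fragment below audits an algorithm's logged decisions against quantified societal expectations. All names, metrics, and thresholds here (Expectation, approval_gap, the 0.05 bound) are hypothetical illustrations, not anything proposed in Rahwan's article:

# Hypothetical sketch: auditing logged decisions against quantified expectations.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Expectation:
    name: str                              # e.g. "approval_gap"
    metric: Callable[[List[dict]], float]  # maps a decision log to a score
    threshold: float                       # societally agreed acceptable bound

def audit(decisions: List[dict], contract: List[Expectation]) -> Dict[str, bool]:
    # "Close the loop": check every expectation in the contract against the log.
    return {e.name: e.metric(decisions) <= e.threshold for e in contract}

def approval_gap(decisions: List[dict]) -> float:
    # Absolute difference in approval rates between two (hypothetical) groups.
    def rate(group):
        members = [d for d in decisions if d["group"] == group]
        return sum(d["approved"] for d in members) / max(1, len(members))
    return abs(rate("A") - rate("B"))

contract = [Expectation("approval_gap", approval_gap, threshold=0.05)]
log = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 1},
]
print(audit(log, contract))  # {'approval_gap': False} -> expectation violated

The point of the sketch is that once expectations are expressed as explicit metrics and thresholds, society can monitor an algorithm's behavior the way a test suite monitors code.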

Implementing SITL control in governance algorithms poses a number of difficulties. First, some of these algorithms generate what economists refer to as negative externalities — costs incurred by third parties not involved in the decision. For example, if autonomous vehicle algorithms over-prioritize the safety of passengers — who own them or pay to use them — they may disproportionately increase the risk borne by pedestrians. Quantifying these kinds of externalities is not always straightforward, especially when they occur as a consequence of long, indirect causal chains.
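
To make the externality concrete, here is a small, purely illustrative calculation (the maneuvers and harm probabilities are invented, not data from the article) showing how over-weighting passenger safety in a weighted-cost objective shifts risk onto pedestrians who have no say in the decision:

# Hypothetical maneuvers and harm probabilities, for illustration only.
maneuvers = {
    "brake_in_lane": {"passenger_risk": 0.030, "pedestrian_risk": 0.010},
    "swerve_to_curb": {"passenger_risk": 0.005, "pedestrian_risk": 0.040},
}

def choose(passenger_weight):
    # Pick the maneuver minimizing a weighted sum of expected harms.
    def cost(m):
        return (passenger_weight * m["passenger_risk"]
                + (1 - passenger_weight) * m["pedestrian_risk"])
    return min(maneuvers, key=lambda name: cost(maneuvers[name]))

print(choose(0.5))  # brake_in_lane: balanced weights keep pedestrian risk low
print(choose(0.9))  # swerve_to_curb: over-weighting passengers externalizes risk

The weight itself is the externality lever: the pedestrian's risk never enters the objective except at whatever discount the designer (or owner) chooses.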

Another difficulty with implementing SITL is that governance algorithms often encode implicit tradeoffs. Human expert-based governance already makes such tradeoffs. For example, reducing the speed limit on a road reduces the utility of drivers who want to get home quickly, while increasing the overall safety of drivers and pedestrians. It is possible to eliminate accidents completely (by reducing the speed limit to zero and banning cars), but this would also eliminate the utility of driving, so regulators attempt to strike a balance that society is comfortable with through a constant learning process. Citizens need means to articulate their expectations to governance algorithms, just as they do with human regulators.
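
The speed-limit example can be made quantitative with a toy welfare calculation. The functional forms below (square-root travel utility, quadratic accident cost) are assumptions chosen only to expose the tradeoff, not an actual regulatory model:

# Hypothetical functional forms; illustrates the utility/safety tradeoff.
def utility(v):
    # Travel benefit grows with the speed limit, with diminishing returns.
    return 10 * v ** 0.5

def accident_cost(v):
    # Expected harm rises steeply with speed; zero at v = 0.
    return 0.01 * v ** 2

def welfare(v):
    return utility(v) - accident_cost(v)

best = max(range(0, 131), key=welfare)  # candidate limits 0..130 km/h
print(best, round(welfare(best), 1))    # 40 47.2
# A limit of 0 eliminates accidents but also all utility (welfare 0);
# the optimum sits where marginal utility equals marginal harm.

A SITL mechanism would let citizens contest the shape of these curves and the balance point, rather than leaving both implicit in the algorithm's objective function.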

Why are we not there yet? There has been a flurry of thoughtful treatises on the social and legal challenges posed by the opaque algorithms that permeate and govern our lives. The most prominent of these include Frank Pasquale’s The Black Box Society and Eli Pariser’s The Filter Bubble. While these writings help illuminate many of the challenges, they often fall short on solutions. This is because we still lack mechanisms for articulating societal expectations (e.g. ethics, norms, legal principles) in ways that machines can understand. We also lack a comprehensive set of mechanisms for scrutinizing the behavior of governing algorithms against precise expectations. This gap is illustrated in a figure in the original article. Putting society in the loop requires us to bridge the gap between the humanities and computing." (https://medium.com/mit-media-lab/society-in-the-loop-54ffd71cd802)