Alternative Institutions Rising Post-Corona
Revision as of 08:12, 25 March 2020

=Text=

Joe Edelman:
"New social systems, collective intelligences, and coordination mechanisms are forming to address COVID-19. This essay can be read as a guide to making them even better, or as an evolving index to the best things that are going on.
The WHO, the CDC, the media, the economy — many traditional institutions are performing poorly. But while these institutions fail, new social systems like endcoronavirus.org are kicking ass.
The virus is bringing new social systems to the fore. Innovations are popping up across the entire social stack (see fig 1).
What I want to ask here is: could some of these new social systems replace entrenched institutions they’re filling in for?
In this essay, I’ll explore how to judge these new systems, and where they might be superior to entrenched systems.
But first, a word about our social systems, in general.
In this essay, lots of things will be called social systems. When I use this term, I mean the kind of things in black text in figure 1: systems made of people, who have codified and mutually understood roles and responsibilities. If you mention that you’re part of one, and what your role is, people will understand.
When talking about the pre-crisis social systems we’re all used to, I’ll call them ‘entrenched systems’. Almost all entrenched systems are built on goals: NGOs are built on campaigns and fundraising targets; companies and product teams are built on metrics; individual jobs are all about clear responsibilities and deliverables; and so on.
When a new goal arises, such as “getting food to isolated elderly people during corona”, service providers sprout up to serve it. Volunteer corps arise. People transition from other jobs into jobs that serve the new goal. This goal just came into being, but many local groups are organizing around it, and even Jeff Bezos is helping out. Managers are figuring out how to measure progress on the goal.
This handling of goals is a great accomplishment!
But it is only a partial success: in these same systems, values² fall on deaf ears. Imagine being an employee at a large company with a new moral or aesthetic value. How likely is that value to influence your company’s processes? How likely is it to change things out in the market — like a new goal often does?
Because people’s life meaning is more about values than goals, this focus on goals creates a loss of meaning.³ We lose track of why we struggle to meet that business objective; the values that hold communities and democracies together get lost (even amidst a flood of goal-driven activism). We become personally isolated and politically fragmented.
This is why moments of breakdown, like the corona crisis, can be especially meaningful. We forget about our goal-oriented lives, and that makes room for our values.
But we shouldn’t need a crisis to make room for our values.
In this essay, I’ll use ‘viable systems’ for systems that might get us through, not just corona, but many other crises caused by entrenched systems: the climate crisis, the meaning crisis, etc.⁴ Viable systems, when we find them, probably won’t drown out values and meaning the way that entrenched systems do. Because of this, I expect viable systems will likely be more meaningful to participate in and more supportive of individual agency than entrenched systems.
What else can we say about viable systems?
In the rest of this post, I’ll be searching for viable systems among those emerging due to COVID-19. I’ll consider three ideas for what’s viable:
Viable systems will spread ideas differently.
Viable systems will evolve more rapidly, and more democratically.
The values of their members will guide a viable system’s evolution.
I’ll collect emerging systems that seem promising along these lines. Some of them are already mentioned here, and others we’ll be researching and documenting over the next weeks.
Angle 1: Turtles and the Spread of Ideas
To begin, note that — in entrenched systems — the information that “goes viral” or “sells” mostly isn’t what’s most helpful to people. This is part of why entrenched systems aren’t viable.
The medical information that spreads isn’t the most grounded. The self-help literature isn’t what helps people live better. The political ideas aren’t the civic innovations we need. The business trends aren’t those which make orgs meaningful places to work. The educational ideas aren’t those that help students blossom. Etc!
Current systems are designed to make weighing in on a topic (posting on social media, retweeting, voting, purchasing) as easy as possible. This is important! But there are some questions that could be asked, at the moment of contribution, which could make a big difference. Here are two questions which might be useful in the corona crisis, and beyond.
Question 1: Is this a value-driven or an ideologically-driven contribution?
A person might have different motivations for a post on social media (or a retweet, or a vote in an election). They might be expressing their values or their ideological commitments. Viable systems need to look below the post/vote/etc to figure out where it’s coming from.
By ideological commitment, I mean an idea you push for about how things should be. In other words, a norm you want to promote. Maybe you’re an environmentalist and you want people to recycle. Maybe you believe that honest relationships are best for everyone, and so you want to promote norms of honesty. At a small scale, you might express an ideological commitment by sharing something personal in a group, if you’re trying to set an example so everyone will be more personal. At a larger scale, you might express an ideological commitment by advocating for universal human rights, or for making America great again.
By personal values, I mean a motive that’s not about what should be done, but what’s good in your own life — something you believe in attending to for its own sake, because it feels meaningful. Maybe it feels meaningful to watch your daughter explore the world, or to deepen your relationships by sharing hardships, or to feel the wind on your skin while you ride your bike. These aren’t things you push other people to do, like with honesty or human rights above. Values are things we appreciate, whereas ideological commitments are more about what we expect or demand.⁵
Especially in academia and in the social sciences, a lot of research is driven by ideological commitments. In this case, the researcher has something to prove. This is different from when an author is genuinely curious. To be curious is to be open to being convinced otherwise, to see beyond what one already thinks needs to be said. To be curious is to be guided more by values than by ideology.
With these distinctions in mind, we can return to the question of what goes viral. Currently, what goes viral is often a “take” which frames and enforces an ideological commitment. This makes media into a battlefield, where ideological commitments like “universal human rights” battle with other ideological commitments, like “make America great again”.
I believe the future of media — and of markets, democracies, scientific research, and many other systems — will be guided by values, appreciations, and genuine curiosities, more than ideologies. And viable systems will be those that know whether a contribution comes from one or the other.⁶
Question 2: Is the contributor a rabbit or a turtle on this topic?
Many of the systems I listed above get overrun by people who are sure of themselves: who frame the situation, who see the way forward. These people know what’s wrong with the world, what’s wrong with your life, what the group should be doing together, and so on. Sometimes they are experts (birds) but more often they are just people with an unfounded faith in their current idea (rabbits).
At Human Systems, we contrast these rabbits and birds with turtles. Turtles are people without strong takes, who are curious about a question and running experiments.
Especially in situations like corona, where expertise is still evolving and experiments are still being run, fostering good collaboration means not letting too many rabbits drown out the turtles and birds.
I’ll call a system 🐢 turtley if it routes contributions differently depending on whether they’re value-driven or ideological, or if it fosters communication between turtles and birds on a topic without letting rabbits drown them out.
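As a minimal sketch (in Python, with entirely hypothetical names), such routing might look like the following. Note that the hard, unsolved part of a real turtley system would be detecting a contributor's stance and motivation in the first place; here those labels are simply assumed as input.

```python
from dataclasses import dataclass

# Hypothetical sketch of routing in a "turtley" system. The stance
# labels ("turtle", "bird", "rabbit") and the motivation labels
# ("value-driven", "ideological") come from the essay; how a real
# system would infer them is an open question, so here they are
# supplied as plain fields on each contribution.

@dataclass
class Contribution:
    author_stance: str   # "turtle", "bird", or "rabbit"
    motivation: str      # "value-driven" or "ideological"
    text: str

def route(c: Contribution) -> str:
    """Decide where a contribution surfaces.

    Turtles and birds reach the main discussion; a value-driven post
    from a rabbit goes to a side channel; an ideological take from a
    rabbit is held back so it doesn't drown out the ongoing inquiry.
    """
    if c.author_stance in ("turtle", "bird"):
        return "main-discussion"
    if c.motivation == "value-driven":
        return "side-channel"
    return "held-for-review"

# Example: a curious experimenter's report reaches the main discussion.
route(Contribution("turtle", "value-driven", "Ran a small experiment..."))  # → "main-discussion"
```

The point of the sketch is only the shape of the mechanism: contributions are not ranked by virality, but routed by where they come from.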
I think the corona crisis is a good moment for turtley systems to arise. More and more people are noticing which ideas are spreading for ideological reasons, and which because of values, and treating these two classes of ideas differently. The profusion of small, invite-only communities is already separating rabbits from turtles and birds.
One example is the facebook group Viral Science (and its attendant group Viral Exploration), which uses various filters to make sure genuine scientific discourse overpowers ideology.
We hope to see more of this — for instance, systems that suggest relationships building on members’ curiosities and values, rather than building solidarity around norms or goals.⁷ We also hope to see systems where content is only successful if it drives meaningful outcomes for people in the real world — that is, where content and experiment are linked.⁸
If you know about a new turtley system, please tell us!
Angle 2: Systems that Evolve Quickly and Well
There’s a story they tell about democracies: that anyone can change the rules. When “Joe Citizen” wants to make a change, he can vote! He can challenge a law in court! He can march to the legislature, and propose a new bill!⁹
The unfortunate reality is that these systems are mostly updated by a small elite, and with great effort. Because of this, they can’t adapt when a crisis hits, when turtley research delivers new results, or when new personal values arise in the non-elite population.
Part of the problem here is fundamental: redesigning systems is hard. To redesign an important system — say, the New York City public schools, or Twitter — requires skill and care, or the system will stop working for the people inside it.
It may seem impossible: How could a system as important as the NYC schools, or Twitter, ever be updated by participants on the fly? How could it ever keep up with local needs, changing values, new research, or a global crisis?
I think we’ll need systems where there are (a) quick, non-elitist ways to change the rules, and (b) where changes can be discussed and evaluated rapidly, reacting to new local needs, values, research, or to global crisis. I’ll call such a system 🗣 responsive.
Things are looking good regarding (a). Consider the trend in online communities towards explicit rules and frameworks for operating. Nowadays, if you join a Facebook group, it’s common to see some rules at the top. In more structured spaces like Slack, it’s not just written rules but workflow automation processes, bots, etc, to guide people through and enforce those rules. These get copied from community to community and can be criticized, reconfigured, and updated based on needs.
Such rules aren’t in the unspoken realm of social convention — they are written down — but they also aren’t in the elites-only, slow-moving realm of law. In other words, many of our social systems can now be revised “in the cloud”.
endcoronavirus.org is a cloud-based emerging system that’s fairly responsive. The community has been automated via bots and written rules, and there are open teams which revise those flows.
The trend towards cloud-based rules is just beginning, and I expect it to blossom. What else will we see?
Pattern languages. Instead of having each group make its own rules, rules could be built from plug-and-play parts, using a dictionary of available patterns. Furthermore, these patterns could interlock at different scales: several facebook groups could be federated together into a parliamentary system, each with a representative. Each group inside could be run differently—via citizens’ assemblies, mayors, or whatever its members prefer. Each local group can use the structure that serves it best.
CommunityRule is a great example of doing this with online governance.
Experimental policy changes. In a hierarchical structure like the above, subgroups could start out with the same laws, but when their members find a better alternative, they modify their local version. The success of these modifications could be monitored — if they work better, the modifications can be actively suggested to neighboring groups for adoption, based on data about values and outcomes. Otherwise, they might automatically revert.¹⁰
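A minimal sketch of this monitor-and-revert loop, assuming a hypothetical numeric success score for each policy variant (how such a score would actually be derived from data about values and outcomes is left open):

```python
# Hypothetical sketch of the "experimental policy changes" loop:
# a subgroup forks a shared rule, outcomes of the fork are monitored,
# and the modification is either suggested to neighboring groups or
# automatically reverted. The scores and threshold are placeholders.

def evaluate_policy_fork(baseline_score: float,
                         fork_score: float,
                         threshold: float = 0.1) -> str:
    """Compare a modified local rule against the shared baseline.

    Returns "suggest-to-neighbors" if the fork clearly outperforms,
    "revert" if it clearly underperforms, and "keep-testing" while
    the difference is still within the noise threshold.
    """
    delta = fork_score - baseline_score
    if delta > threshold:
        return "suggest-to-neighbors"
    if delta < -threshold:
        return "revert"
    return "keep-testing"
```

The design choice worth noting is the middle state: a fork is neither adopted nor reverted until the evidence is clear, which is what lets many small groups experiment in parallel without constant churn.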
Hopefully, this kind of innovation will only become easier, as programming languages and no-code tools for making social software spread¹¹, and as plug-and-play systems for governance, licensing, codes of conduct, etc, continue to go mainstream.¹²
Let’s turn, then, to criterion (b) for responsive social systems: sophisticated ways to discuss and evaluate potential changes. Here, I think we’re in trouble.
An advantage of the slow-moving and elite nature of law is that it exposes laws to pressure from public debate, from disciplines like political theory, from meta-law documents like the US Bill of Rights, and from systems for reconsidering laws, like courts. This pressure sometimes keeps law from suppressing personal values like creativity, individuality, etc.
This kind of thing seems even more important for cloud-based systems than for law, because cloud-based systems affect our lives more intimately. As I wrote in Can Software Be Good for Us:
With software, acting in a way the designers didn’t intend is often impossible: a user can’t sing “Thrift Shop” to a stranger on Tinder and can’t wear their Facebook cover photo on the bottom of the screen. The software has structured the sequence and style with which users interact completely.
Imagine if Twitter were implemented through government regulation: there’d be a law about how many letters you used when you spoke, and an ordinance deciding who wore a checkmark on their face. Imagine bureaucrats deciding who’s visible to the public, and who gets ignored. Could a law make you carry around and display everything you’d recently said? How would you comply?
When I imagine millions of online and offline groups, each with explicit cloud-based rules, without any sources of pressure like the Bill of Rights, and all structuring our lives more intimately than law… I don’t see a democratic utopia. Rather, I see a fragmented world, where each group is oppressive in its own way, and no group is very good.
Let’s return to the problem we started with: making NYC schooling, or Twitter, responsively updatable by participants, at the speed of changing needs, values, research, or global crisis.
If we are to solve that problem for real, we’ll need better ways to discuss and evaluate changes to cloud-based systems. Compared to how we evaluate laws, they’ll need to be quicker, more intuitive, and suitable for structuring everyday social life.
Do you know a good example of a responsive emerging system? Tell us.
Angle 3: Systems that Use Values to Evolve
I have a hunch about the above problem. We want to understand which rules are oppressive in cloud-based systems. This requires, I think, knowing what’s meaningful to the people who use them. Here’s what I mean:
It may be harder to live by the value of honesty on Instagram, if honest posts get fewer likes. Similarly, a courageous statement on Twitter could lead to harassing replies. On every platform, a person who wants to be attentive to their friends can find themselves in a state of frazzled distraction.
The coded structure of push notifications makes it harder to prioritize a value of personal focus; the coded structure of likes makes it harder to prioritize not relying on others’ opinions; and similar structures interfere with other values, like being honest or kind to people, being thoughtful, etc.
In this aspect, cloud-based social systems are less like law, and more like the kinds of conversation games we play at dinner. A dinner with family or friends is full of structured social games, like “Wait Your Turn, Then Say Something Relevant and Interesting”, or “Indignant Pile-On”, or “Clarifying That Point”.¹³ Each of these games has different dynamics that support some sources of meaning and undermine others. If someone wants to be present or kind, and you’re playing “Indignant Pile-On”, it won’t work for them, but it might work for someone who wants to be loyal.¹⁴
The games at dinner are a lot like the rules in facebook groups, slack workflows, and other cloud-based systems.
And whether at the dinner table or in a self-updating slack, the problem is figuring out the values of those around you, and what games will work for them. Depending on the game, participants will have more or less time to reflect; they’ll have different status relationships; etc. Some games will make it easier to be honest; in others, it will be easier to be discerning. Different values thrive or suffocate with different rules.
This brings me to one last criterion for viable systems: in such systems you’ll know other people’s values, and they’ll make a difference in the rules you pick. I’ll call this an 🙇♂️ honoring system.
At the small scale, this means changing the rules of the facebook group to make it easier for people to be honest, if the people in the group need a place to be honest. At larger scales, it means building things like contracts and companies and social networks around an idea of what’s meaningful for a group of people, and building the structures of those institutions around whatever specific values those people have.¹⁵
Even just knowing others’ values is hard. People can more often name their company values¹⁶ than their own personal values, and when they try to name their own, they often name ideological commitments instead.¹⁷
But learning to share our values and write them down is less than half the battle. We’ll need to adapt all of our systems to make room for them. This may mean processes to flag conflicts between personal values, on one hand, and current rules or goals, on the other. Once a conflict is recognized, a redesign might be necessary. If a workplace needs to support workers better in being embodied, or honest, or courageous, there needs to be a process for figuring out what to change. And the same holds true for any other social system in need of redesign: family structures, social networks, group practices, democratic mechanisms, schools. All honoring systems, all the time. 🙇♂️
We don’t yet have a good example of an emergent honoring system. If you do, please let us know.
Ok, I mentioned three potential criteria for viable systems:
🐢 turtley — Systems that know whether contributions are value-driven or ideological, and whether contributors are turtles about a topic.
🗣 responsive — systems where there are (a) quick, non-elitist ways to change the rules, and (b) where changes can be discussed and evaluated rapidly, reacting to new local needs, values, research, or global crisis.
🙇♂️ honoring — systems where you know other people’s values, and they make a difference in how the system is designed.
Which do you think are most important? Do you have good examples I missed? Go ahead and tweet at me, or add them to our list, and I’ll edit this article with updates.
Please send us promising innovations in these areas. Especially if they are turtley, responsive, or honoring and are about: local volunteer & workforce coordination, scientific collaboration, trustworthy media, online spirituality & play, or talent discovery.