Reliability vs Efficiency


A critique of efficiency as the main criterion for economic decision-making


* Paper: What could be more important than efficiency? By Roberto Verzola

This piece was first distributed by the author in 2001. It appears as Chapter 23 in Towards a Political Economy of Information (2004).  

Text

Part One: A critique of Efficiency

Roberto Verzola:


Definitions

Efficiency is a measure of how well a transformation of matter or energy occurs. To be efficient means to get the most from the least. The higher the efficiency, the better the transformation is occurring. Efficiency is usually computed from the ratio of useful output to input. To be accurate, the computation must take into account all inputs to a process; otherwise, the computed efficiency may exceed 100%, which would imply that the transformation process itself is creating new matter or energy, contradicting fundamental laws of physics. Since energy transformation always produces waste heat, the energy efficiency of any process is always less than 100%. If some of the material outputs are not usable (e.g., wastes), then the sum of the useful material outputs will likewise be less than the sum of the material inputs, and the material efficiency of the process will also be less than 100%.

Economists often express the inputs and outputs of a process in monetary terms, because their interest is in processes where the monetary outputs exceed the monetary inputs. Furthermore, economists often compute the difference between outputs and inputs instead of their ratio, because their interest is in absolute monetary amounts rather than ratios. In such cases, where the focus is on absolute amounts, this paper uses the term “gain” instead of “efficiency.” An example of gain is the producer’s profit, which is revenues minus costs. Another example is the total utility to the consumer of a set of goods minus the total price of these goods.
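In symbols (a compact restatement of the definitions above, not notation from the original paper):

$$
\text{efficiency} = \frac{\text{useful output}}{\text{input}}, \qquad
\text{gain} = \text{output} - \text{input}
$$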

Because both are measures of output relative to input, gain is closely related to efficiency and is used whenever absolute magnitudes are more important than relative magnitudes.

Among business firms, gain is really of more interest than efficiency, the best firms being those that manage to squeeze the last marginal bit of gain (i.e., profit) from their business operations.

Among natural persons, the output of interest is not necessarily matter, energy, or money but a vaguer concept like welfare, utility, or happiness, which makes measuring efficiency or maximizing it harder.

Like firms, economies today also tend to maximize gain (i.e., efficiency and inputs), not only efficiency. To maximize gain, one can increase the inputs to a process, or the efficiency by which the inputs are transformed into outputs, or both. Expanding one’s global reach is one way of increasing inputs. The economies-of-scale argument (higher efficiency through larger scale of operations) also supports a global strategy. Thus, gain-maximization strategies directly lead to globalization.

Because economies include all firms and natural persons, macro-efficiency is very difficult in practice to maximize or even simply to measure. To cope with this problem, economists have settled on a curious rule for improving the efficiency of economies step by step: improve somebody’s welfare without reducing anybody else’s, and keep doing this until nobody’s welfare can be further improved without reducing somebody else’s. This is the economist’s Pareto efficiency, which is obviously lower than full theoretical efficiency, but is itself a theoretical construct that is hardly ever seen – not even approximated – in reality.


Efficiency and economic theory

Despite these theoretical problems, efficiency is probably the most common criterion for economic decision-making in modern society. Nearly all modern economic policies cite efficiency as their ultimate goal, even if measuring it can be quite difficult.

Efficiency is the rationale for the idea of competition in a free market. It is also the reason cited for dismantling the welfare policies of the State and the welfare state itself. It is cited as the reason for privatization programs. Advocates for the international division of labor and economies of scale cite efficiency as their goal. Globalization, which extends the economies-of-scale idea to its utmost, also invokes efficiency as reason. When policy-makers select between alternative options, efficiency is often at the top of the list of criteria for selection.


Critiques of efficiency

The efficiency criterion has been criticized from at least three vantage points: 1) from efficiency advocates themselves; 2) from the social justice viewpoint; and 3) from the ecological viewpoint.

The first critique comes from within the ranks of efficiency advocates themselves. It retains efficiency as the main criterion for policy formulation, but points out flaws in the way efficiency is computed, which distort efficiency estimates, usually through the incomplete accounting of inputs and outputs. Incomplete accounting occurs when non-market transactions are ignored or when costs are externalized.

An example of non-market transactions is subsistence production, where a considerable portion of the output is for direct consumption. Unless such production is accounted for, a subsistence economy may appear to be an inefficient, low-output economy. In fact, production for consumption is quite efficient because it saves marketing, storage and distribution costs. An important subset of production for direct consumption is household work, the non-accounting of which is a major criticism raised by women’s movements against current economic systems.

Still another example of incomplete accounting occurs in U.S. agriculture, which prides itself on its increasing “efficiency,” with less than 10% of its population producing food for twice its population size. Yet the energy efficiency of U.S. agriculture has actually gone down over the decades: at the start of the twentieth century, it required less than one calorie of input to produce a calorie of food; today, it needs more than 10 calories to produce the same amount.

Costs are externalized by passing them on to politically-weak social sectors, to the environment, or to future generations. This can lead to false impressions of high efficiency and mask gross inefficiencies within the system.

All such incomplete accounting distorts efficiency comparisons.


The social justice critique

The social justice critique of the efficiency criterion suggests as a higher criterion the concept of equity. According to this critique, efficiency does not ensure equitable sharing of the output and often results in a reduction in equity (i.e., increasing gap between rich and poor).

This critique often presents efficiency as a problem of production (how to allocate input resources to maximize output), and equity as a problem of distribution (how to allocate the output to minimize the gap between rich and poor). Thus, from the vantage point of many equity critics of efficiency, maximizing efficiency and ensuring equitability are parallel objectives which may or may not conflict.


The ecological sustainability critique


The third critique of efficiency comes from the vantage point of ecology. According to this critique, efficiency only looks at a linear process that transforms input A into output B. This critique points out the problem with a linear process: the continuous transformation of input A into output B will gradually use up A and accumulate B. How will A be replaced? Where will B go? The more efficient such a linear process becomes, the faster A is used up and the faster B accumulates in the ecosystem. In the real world, a linear process is eventually an unsustainable process.

Just as the social justice critique insists that the output B must be equitably distributed, the ecological sustainability critique insists that the linear process must be turned into a cyclical one, so that the final output of the process eventually goes back to become fresh input into another – or even the same – process. This is what Barry Commoner called “closing the circle.”


A new critique of the efficiency criterion

This paper proposes a fourth critique of the efficiency criterion, from the vantage point of engineering and systems design. This vantage point is becoming increasingly useful, since economic systems today are as much a product of social engineering and conscious design as they are a product of unplanned evolutionary development. This new critique also complements the social justice and ecological sustainability critiques of efficiency.

In engineering and systems design, another criterion for design optimization is often deemed more important than efficiency. This is the criterion of reliability.

While efficiency and reliability are related, they are not the same. Efficiency is a measure of how well a system transforms its inputs into useful output. It is usually expressed in terms of the ratio of useful output to input. Reliability is a measure of how long a system performs without failing. It is usually expressed in terms of a mean time between failures (MTBF). It may also be expressed in terms of the probability of non-failure. Reliability is closely related to risk, which is usually defined as the probability of failure multiplied by the estimated cost of the failure.
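In symbols (again a restatement of these definitions rather than notation from the paper), if failures occur at successive times t_1 through t_{n+1}:

$$
\text{MTBF} = \frac{1}{n}\sum_{i=1}^{n}\bigl(t_{i+1} - t_i\bigr), \qquad
\text{risk} = P(\text{failure}) \times \text{cost of failure}
$$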


Reliability and failure

There are many ways of defining socio-economic failure. Even an extremely affluent society like the U.S. shows many signs of failure. Homelessness, unemployment, imprisonment, broken families, and poverty are examples of the failure of the U.S. system. For those who want a single measure of economic failure, below-subsistence income is one possible candidate.

Given a system’s output over time, one would average the output, and divide it by the average input over a period of time to get the system’s average efficiency. System failure can be defined as an instance when output goes below a minimum threshold value. To determine reliability, one would then note all instances of failure and take the mean (average) time between failures (MTBF).[3]

Note that efficiency highlights the gain in output, while reliability highlights the risk of failure. While the two are related, they are not the same. High efficiency can be achieved under unreliable conditions, and high reliability can be achieved under inefficient conditions.

For instance, a system that experiences frequent failures of extremely short duration can have low reliability without significantly reducing its efficiency. As the duration of each failure approaches zero, the reduction in efficiency becomes negligible. Such a system is highly efficient but very unreliable. Another system can have a much lower output than the first example but if it seldom fails, then it is a highly reliable but very inefficient system. In this paper, a strategy that improves on efficiency as well as the amount of input will be called a gain-improving strategy, while one that improves on reliability and the cost of failure will be called a risk-reducing strategy. Where the computational capabilities of economic agents allow it, these strategies may evolve into gain-maximizing and risk-minimizing strategies, respectively.
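A minimal numerical sketch in Python may make the contrast concrete; the function names, the numbers and the failure threshold are invented for illustration and are not from the paper:

```python
# A sketch of the two measures described above: average efficiency as
# output over input, and MTBF relative to a minimum output threshold.

def average_efficiency(outputs, inputs):
    """Average useful output divided by average input over the period."""
    return (sum(outputs) / len(outputs)) / (sum(inputs) / len(inputs))

def mean_time_between_failures(outputs, threshold):
    """Mean number of periods between instances where output falls below the threshold."""
    failure_times = [t for t, out in enumerate(outputs) if out < threshold]
    if len(failure_times) < 2:
        return float("inf")          # one failure or none: no interval to average
    gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
    return sum(gaps) / len(gaps)

# A system with brief but frequent dips below a hypothetical threshold of 5:
outputs = [12, 11, 2, 13, 12, 1, 12]
inputs = [10] * len(outputs)
print(average_efficiency(outputs, inputs))       # 0.9 -> looks quite efficient
print(mean_time_between_failures(outputs, 5))    # 3.0 -> but it fails every few periods
```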


Part Two: Reliability as an alternative criterion

When efficiency and reliability conflict

In the engineering and design sciences, efficiency and reliability are two design considerations which often conflict, because reliability can usually be improved (e.g., through modularization or through redundancy) at the expense of efficiency. Reliability is often seen as equally important, and in many cases the more important of the two, so that efficiency often takes second priority until the desired level of reliability is reached. In many designs, higher output is important, but preventing failure is even more important.

In software, for example, while efficient programs are desirable, designers warn that efficiency should never be sought at the expense of reliability.

In the design of bridges, buildings, dams, integrated circuits, spacecraft, communication systems and so on, reliability is right there at the top of the list of design criteria, above or beside efficiency.


Is this debate applicable to economics?

Economies today are as much a product of social engineering and conscious design as they are a result of unplanned evolutionary development. Thus, it makes sense to review the lessons of engineering and systems design and ask whether some of the theories and methods of these disciplines may give useful insights into economic policy and decision-making.

For instance, economies are systems which contain feedback and will therefore benefit from the insights of feedback theory. Economies are complex systems which occasionally fail and will therefore benefit not only from the insights of systems designers who have successfully created extremely complex but highly reliable hardware as well as software systems, but also from the lessons of systems which have failed miserably. It is as much from these failures as from the successes in minimizing the risk of failure that designers have extracted their heuristics for successful systems design.

It is now acknowledged, for instance, that many pre-industrial communities tend to minimize risk when optimizing their resources. It is interesting to observe how this clashes with the approach of modern corporations, which would optimize these same resources by maximizing gain. We can expect that the optimum level of resource-use from the gain-maximizing firm’s viewpoint will tend to be higher than the optimum level from the risk-minimizing communities’ viewpoint. Thus, to firms and other gain-maximizers, the local resources would seem under-utilized, while the communities themselves would believe their resources are already optimally-used.

This insight helps clarify the source of many corporate-versus-community resource conflicts that are so common in the countryside.


Improving reliability: the modular approach

The standard approach in designing a complex system for reliability is called modularization: break up the system into subsystems which are relatively independent of each other and which interact with each other only through well-defined, carefully designed interfaces. Modularization is used in both hardware and software design. The logic behind modularization is simple. In a system of many components, the number of possible pair interactions rises faster than the number of components, as the following table shows:

Table: Increasing complexity

  No. of components    No. of possible pair interactions
  10                   45
  100                  4,950
  1,000                499,500
  10,000               49,995,000
  N                    N(N-1)/2

The last line is actually the equation for the number of combinations possible from N items taken two at a time.

The table shows that a system with ten times the number of components can be a hundred times more complex than the smaller system. As the number of possible interactions increases, it becomes increasingly difficult for the designer to anticipate, trace or control the consequences of these interactions. Mutually-dampening interactions (negative feedback) will tend to stabilize the system. But mutually-reinforcing interactions (positive feedback) can result in instabilities like oscillatory behavior or exponential growth. In physical systems, such instabilities can lead to breakdown.

In short, the risk of failure rises quickly as the number of components in a system increases.

The purpose of modularization, therefore, is to keep the number of possible interactions to a manageable level, so that their consequences can be anticipated, monitored and controlled.


Modularizing a complex system

A system with 10,000 components, as the preceding table shows, will have 49,995,000 possible interactions between pairs of components. The challenge of design is how to reduce this number of interactions; fewer interactions make the system easier to design, evaluate and test, and minimize the possibility of errors.

Applying the modular approach, this system may, for instance, be decomposed into a hypothetical two-level 100x100 system of 100 subsystems of 100 components each. (This is obviously an idealized solution, for illustrative purposes only.) Each subsystem will have 4,950 possible interactions. There are 101 modules: the main system of 100 interacting subsystems, plus the 100 subsystems of 100 interacting components each. So the total number of possible interactions will be (1+100) x 4,950 or 499,950, down from the original 49,995,000. By using modular design, we have reduced potential system complexity, and the risk of failure, by a factor of 100.

If the level of reliability thus attained is still not enough, we can apply the modular approach further, and make a four-level 10x10x10x10 system (again an idealized solution, for illustrative purposes only). That is, every subsystem of 100 components can again be decomposed into ten modules of ten components each, while the 100 subsystems themselves can also be broken up into ten modules of ten subsystems each. Now, we have a four-level hierarchy of modular units of ten subunits each. At the top level, we have the overall system broken up into ten subsystems. Each subsystem is again broken up into ten subsubsystems, giving a total of 100 subsubsystems. Each subsubsystem is further broken up into ten subsubsubsystems, for a total of 1,000 subsubsubsystems. Finally, each subsubsubsystem is composed of ten components, giving us the original 10,000 components. All in all, there are 1000 + 100 + 10 + 1 or 1,111 modular units, with a total of 1,111 x 45 or 49,995 possible interactions, down from the two-level total of 499,950. Thus, we have further reduced potential system complexity and improved reliability by an additional factor of 10, or a full improvement by a factor of 1,000 compared with the original humongous 10,000-component system.
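The arithmetic above can be checked with a short Python sketch (the helper functions are written purely for this illustration and are not part of the original text); it also computes the worst-case number of interface crossings discussed next:

```python
from math import comb

def modular_interactions(levels):
    """Total possible pair interactions in an idealized modular hierarchy.
    `levels` lists the branching factor at each level, top to bottom:
    [10_000] is the flat design, [100, 100] the two-level design,
    and [10, 10, 10, 10] the four-level design from the text."""
    total = 0
    modules = 1                                  # start from the single top-level system
    for branching in levels:
        total += modules * comb(branching, 2)    # interactions inside each module at this level
        modules *= branching                     # its sub-units become the next level's modules
    return total

def worst_case_interfaces(levels):
    """Interfaces crossed by the longest path between two components:
    one module boundary per level going up, and the same coming back down."""
    return 2 * (len(levels) - 1)

print(modular_interactions([10_000]), worst_case_interfaces([10_000]))        # 49995000  0
print(modular_interactions([100, 100]), worst_case_interfaces([100, 100]))    # 499950    2
print(modular_interactions([10, 10, 10, 10]),
      worst_case_interfaces([10, 10, 10, 10]))                                # 49995     6
```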

Note, however, that the worst-case path between two components has also become longer. In the two-level modular approach above, the worst case is an interaction between two components in two different subsystems. Their interaction will now have to go through the boundary of the first component’s subsystem, across the space between the two subsystems, and through the boundary of the second component’s subsystem. The efficiency of the system has decreased. The worst-case path between two components is even longer in the four-level modular approach, with the interaction having to pass through several levels of modular boundaries. The more reliable system is potentially also the less efficient one. This creation of a modular hierarchy of subsystems composed of fewer component units is also called decomposition. In the tension between the loss of efficiency that comes from emphasizing modularity and the gain in reliability, the common rule is to err in favor of the latter; that is, reliability over efficiency. This is true for software as well as hardware design.

Economists commonly respond to the suggestion that reliability is more important than efficiency by asserting that, since the frequency of failures affects efficiency too, it can be included in efficiency equations and therefore be taken into account by efficiency-based economic theory. Such a response, however, assumes what is being questioned: that efficiency is more important than reliability. If the suggestion that reliability is more important is accepted, efficiency improvements will instead have to be expressed in terms of their effects on reliability.


Modular systems: improving efficiency


It is also instructive to look at the process from the opposite end: given an existing multi-level modular design, how does one improve the efficiency of the system? Imagine the same hypothetical four-level system discussed above. The worst-case interaction, efficiency-wise, is between two components whose modules belong to different subsubsubsystems, which in turn belong to different subsubsystems, which themselves belong to different subsystems. This interaction passes through six interfaces all in all: three on the way up the module hierarchy and another three on the way down. If the interaction between two such components occurs much more frequently than anticipated by the original design, the efficiency of the whole system may suffer.

To improve efficiency, one can modify the original design by adding a direct path between the two components, bypassing all the modular interfaces. However, if the existing modular design was already working reliably, the implications of this new direct path must be very carefully studied, lest replacing the long path with a direct one affect the rest of the design. For instance, one or both of the subsystems bypassed might be relying for their own proper functioning on the signals from either component. If these signals disappear, having taken the direct path instead, the affected subsystem may not function as designed. If a thorough review of the design shows that a direct path can be added between the two components without problems, then such a change may indeed improve efficiency without causing a decrease in reliability. Often, though, due to the sheer number of possibilities, a fully thorough review is simply not possible.

What about another pair of components? A direct connection between them will likewise result in a shorter path and greater efficiency. Again, the whole design must be thoroughly reviewed, in case such a change will affect other parts of the system.

As one efficiency improvement after another is done, the possibility of overlooking a negative consequence of the change increases, and so does the risk of introducing a problem (i.e., “bug”) into what used to be a finely-working design. Or we can go back to a two-level instead of a four-level module hierarchy, reducing the worst-case path from 6 to 2 and improving efficiency by a factor of 3. However, a two-level 100x100 design, as we saw above, will degrade reliability by a bigger factor of 10.

As we make efficiency improvements, the number of new potential interactions increases dramatically. In a complex system with thousands or even millions of components, it will be impossible to anticipate, study, much less manage, the consequences of every new potential interaction. The more such efficiency improvements are made, the greater the possibility of introducing unintended problems into the system, some of them obvious but others subtle and perhaps showing up only under conditions that rarely occur, degrading the system’s reliability. The system becomes more failure-prone.


Dynamic systems are more complex

In software and hardware design, the potential interactions reflect the choices available to the designers at the start of the design process. When the design is done and implemented, only the interactions allowed in the design actually occur during system operation.

However, when a system is modified or repaired, a technician may implement changes which create new interactions between components that are not provided for in the original design. This is especially true for software systems, whose flexibility easily allows modifications to the original design to be tried and implemented.

There are many examples of reliable, modular systems which have been modified over time, gradually becoming less modular and acquiring more direct interactions among components that had been isolated from each other. Very often, these modifications introduce system “bugs,” which may show themselves immediately, or only under certain rare conditions. A single bug, or an accumulation of minor bugs, can eventually cause system failure. As the barriers between modules become porous and more direct interactions between components occur, reliability goes down, the mean time between failures gets shorter, and the probability of the next failure occurring goes up.

Design means choosing a permanent set of component interactions to realize some desired functions. The selected set of interactions is then implemented in the design’s medium. This may mean mechanical linkages between moving parts, pipes between containers, conductive connections between electronic parts, software instructions to maintain a data structure, and so on. Once done, this permanently excludes the rest of the possible interactions. The design can then be tested for problems and improved.

In the case of very dynamic systems like economies, the potential interactions between system components (economic agents, in this case) can occur anytime. The design is never done, so to speak, but is in continuous flux and change. The market may be seen as a huge switching mechanism which establishes brief as well as long-term connections (i.e., transactions) between economic agents. For such extremely fluid systems, the possibility of positive feedback and instabilities is therefore always present, making approaches which enhance reliability and minimize the risk of failure even more important.


Barriers create modules

Modularization in systems design provides a solid theoretical argument for barriers as part of economies. Such barriers are the equivalent of a module’s boundaries, meant to confine direct component interaction within the module and to route interactions with outside components through the module’s interface with other modules. Tariffs, immigration and capital controls, trade barriers, import controls, etc. form boundaries that in effect minimize interactions between economic agents in different countries and enhance the internal cohesion of each country. Advocates of free trade and open economies argue that all these result in inefficiencies. They may be right. But systems designers will reply that these inefficiencies are the necessary cost of the modular approaches needed to enhance reliability and reduce the risk of internal failure. The debate boils down to a conflict of priority: efficiency or reliability.

Unfortunately, the blind pursuit of economic efficiency leads gain-maximizers to break down barriers which separate the world into many economic modules. Previously, these modules were loosely coupled, in the interest of greater reliability. As these barriers break down, new direct interactions become possible among the different components of different modules. Some of these interactions are bound to be mutually-reinforcing (positive feedback), creating instabilities and increasing the probability of system errors and failures.

Today, globalization is breaking down more and more barriers to economic transactions, making possible an increasing number of new direct interactions among components of modules which were previously isolated from each other. Some of these new interactions involve mutually-reinforcing events, reflecting positive feedback. Herd behavior among speculative investors is one example. Positive feedback leads to oscillatory behavior or exponential growth, both of which are indications of instability.

A system with a lot of positive feedback is an unstable system. Like software which has been modified to rely increasingly on global variables, the global economic system gradually becomes problem-ridden, unreliable, and crash-prone. Instability is exactly what the current globalized economy is showing. Anybody steeped in the theories of systems design will say, “what else can you expect?” Increasing reliance on global variables (international institutions, global corporations, global infrastructures, etc.), breaking down barriers and creating greater interdependence between modules, overemphasizing efficiency at the expense of maintaining modular boundaries – these are common design mistakes which in the past have invariably led to unmaintainable systems and early failures. They are leading to systemic instabilities and threatening failure in the global economy now.

The principles of modularization provide solid theoretical backing for many of the current arguments against globalization and open economies, drawing on scientific disciplines which have shown remarkable success in designing highly complex but very reliable systems.


Complex systems: improving reliability

In addition to modularization, systems design has developed other approaches for improving the reliability of complex systems:

Information hiding: As little as possible of the internal information within a module should be visible to other modules, which should interact with it only through its external interfaces. This is a theoretical argument against the economic “transparency” global institutions are demanding from countries.

High internal cohesion: Every module should have a high level of interaction among its members. This argues for placing greater emphasis on local rather than international transactions, and on internal rather than external markets. From the systems point of view, this is a theoretical argument for nationalism and other cultural mechanisms for maintaining high internal cohesion.

Weak coupling between modules: Modules should be designed for low levels of interaction, compared to the high levels desired within each module. This prevents a problem within a module from quickly propagating to other modules. It helps isolate problems when they crop up, making it easier to solve the problem. This is the direct opposite of the “global interdependence” argument in economic debates. It is precisely this interdependence which tends to make the present global system unstable and crash-prone, because a problem in one module can very easily propagate to other modules.

Minimize global variables: Global variables are system components which are “visible” to every module. Their impact is system-wide. In software, they are highly undesirable. Much more preferable are local variables, i.e., system components whose effects are felt only within the module they belong to. This is a strong theoretical argument against powerful global institutions and players.

These guidelines clearly provide theoretical support for economic protectionism, internal markets, regulation, and so on.
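As a deliberately simplified illustration of what these guidelines mean in their home discipline, here is a small Python sketch; the class, names and numbers are invented for the example and are not from the paper:

```python
# Each "economy" is a module: its internal state is hidden, its components
# interact mostly with each other (high internal cohesion), and other modules
# can reach it only through one narrow interface (weak coupling, no globals).

class Economy:
    def __init__(self, name, producers):
        self._name = name
        self._producers = list(producers)   # internal detail, invisible to other modules

    def _internal_trade(self):
        # high internal cohesion: most interactions stay inside the module
        return sum(self._producers)

    def export_offer(self):
        # the single external interface other modules are allowed to use
        return 0.1 * self._internal_trade()

def trade(a, b):
    # weak coupling: modules interact only through their public interfaces,
    # never by reaching into each other's internal state, and no global
    # variables carry state between them
    return a.export_offer() + b.export_offer()

north = Economy("North", [5, 7, 3])
south = Economy("South", [4, 4, 6])
print(trade(north, south))   # 2.9
```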

The argument may even be extended to the biological field. Species barriers are currently being broken down through recombinant DNA technology, allowing new biochemical and genetic interactions which had not existed in the past. Again, reliability is giving way to efficiency. The insights raised in this paper easily suggest the concern that as biological, genetic and biochemical barriers are broken down, new interactions will occur, some with positive but others with negative impact. Subtle problems will arise and the number of these problems will increase at a rate faster than the number of newly-interacting components. The reliability of the entire genetic system goes down, and the risk of failure – a genetic crash of some kind – increases.


The emergence of gain-maximizers

It is reasonable to assume that people tend to pursue a mix of strategies, ranging from predominantly risk-reducing to predominantly gain-improving, depending on their own personal inclinations as well as the specific situation. It was probably Adam Smith who first provided the theoretical foundations for the pure gain-maximizing strategy, when he claimed that self-interested individuals freely competing in the market and maximizing gain only for themselves are – though they may not be intending it – also maximizing gain for society as a whole. In short, free-market competition makes the entire economy run efficiently. Since then, efficiency and gain-maximization have become the mantra of economics. The unabashed pursuit of self-interest has even become a moral imperative. Later economists mathematically modeled Adam Smith’s hypothesis and proved it, although only under highly restrictive and unrealistic assumptions. This concept is known today as the First Fundamental Theorem of Welfare Economics.

Reality, however, kept hounding the theory. Human beings were not pure gain-maximizers, it was observed. Other aspects of humanity intruded; people were unpredictable, error-prone, ignorant, emotional, and so on. They had neither perfect information nor infinite computational powers. Many were clearly shown to be risk-averse.

It thus turns out that the “ideal” economic agent had to be invented: a pure gain-maximizer who, competing freely in the market, would also make the economy run efficiently. This ideal agent is the business firm, also known as the for-profit corporation. This ideal agent is even recognized as a legal person, with its own bundle of legal rights and obligations, separate from its shareholders, board of directors, or managers. This legal person has one and only one motivation: to maximize profits.

Today, therefore, there are two kinds of players in the economic arena: 1) the natural person, who pursues a time-varying mix of gain-improving and risk-reducing strategies, and 2) the business firm, which maximizes its gain in the perfect image of neo-classical economic theory. From an evolutionary ecological perspective, one might also study them as if they were two different species competing for the same ecological niche.


Evolutionary perspectives

Business firms have become the dominant player in most economies and natural persons now take a secondary and often minor role. This suggests that today’s dominant economic system has been selecting for the pure gain-maximizers at the expense of risk-minimizers and others who pursue mixed strategies. This system presumably rewards the pure gain-maximizers better than the rest, leading to an increase in the population of gain-maximizers, and forcing even natural persons to become pure gain-maximizers themselves. Those who don’t are considered inefficient and therefore economically unfit, and the system makes it difficult for them to survive. A theoretical construct by economists of efficient economic agents creating an efficient economy has, in a way, created these very agents and a system that selects for them.

Evolutionary development needs a population to work on. In the past, we had a respectable population of economic systems, each going through its own evolutionary process. A failure in one system left the others essentially untouched. Today, all the economic systems of the world – save for a handful – have become so interdependent that they basically belong to one humongous global system. The pursuit of efficiency through economies-of-scale and global expansion has reduced an evolutionary process to an all-or-nothing proposition. A system failure is a global failure. Unfortunately, its design is being guided by gain-maximizing, efficiency-enhancing strategies which are making the system less reliable and more error-prone.


Shifting our priorities

This paper suggests restoring the criterion of reliability to its rightful place above efficiency in the list of important criteria for socio-economic decision-making. This would be the first step in rescuing society and economics from the dead-end and possible catastrophe created by pure gain-maximizers. Reliability and modularization sit on the very solid theoretical foundations of the engineering sciences and system design. Hopefully, they can provide better theoretical guidance for the difficult socio-economic decisions that we must make today.

Reliability and risk-minimization are really not such novel ideas. It has been known for a long time that farmers are generally risk-averse. Environmentalists have long advocated their version of risk-reduction, which they call the precautionary principle.

The proposed shift in priority among governments and social planners from gain-maximization to risk-minimization is not an either/or proposition but a return to a more dynamic balance between the two, with the higher-priority strategy taking precedence more often than the other strategy. Such a shift, this paper suggests, will move societies towards more cooperation, sharing, equality and stability.


Frugality, cooperation and resource-pooling

Risk-reduction also encourages other ways of coping with risks, many of them recalling traditional values which are disappearing due to globalization.

These include:

  • More frugality, less profligacy: One way of preparing for an uncertain future is saving, whenever there is a surplus. Another is conservation – to use available resources sparingly so that they will last longer. Risk-averse persons tend to be more frugal; a risk-averse society tends to encourage resource conservation, rather than profligate exploitation. The environmental ethic “reduce, reuse, recycle” is another expression of these ideas. Businesses also save whenever they create a sinking fund to provide, for instance, for bad debts or for future capital expenditures. It is a way of distributing risk over time.
  • More cooperation and sharing, less competition: Another way of distributing risk is to share it with others who are similarly exposed. Insurance is a good example. People exposed to the same risk can respond better when they cooperate rather than compete.

Risk-reduction mechanisms are replete with the language of welfare and cooperation. Risk-reduction seems to encourage people towards cooperation, sharing, pooling of resources and collective ownership. It automatically implies a welfare society, which takes care of the weak, the underprivileged and the inefficient and helps them lift themselves up beyond what society sees as levels of failure. In contrast, gain-maximization very often relies on competitive approaches. In fact, according to economic theory, a competitive free-market is a necessary condition for attaining the maximum economic efficiency.

This contrast suggests an alternative formulation of Adam Smith’s hypothesis: that a society in which individuals, who may have nothing but their self-interest in mind, try to lessen risks to themselves will settle on a state of least risk to society as a whole.

  • More commonly-owned resources, less privatization: Responding to risks often requires resources which are beyond the reach of individuals, forcing threatened individuals to pool their resources together to respond to such risks. Social security is one good example. Thus a society working towards lower risk will probably expand the public commons instead of privatizing it.

  • More attention to poverty, less to average incomes, much less to exceptionally high incomes: A family that goes to bed hungry every night represents a failure of society that is masked if incomes are averaged. Such averages will hide the daily occurrences of such failures, and society will be unable to respond properly. A risk-reducing strategy focuses society’s attention on those lowest-income situations where society has failed and continues to fail. The immediate implication of a risk-reducing strategy, therefore, is the need to look at and to remedy the plight of those at the bottom rungs of society.

One bonus of a risk-reducing strategy is a heightened awareness of limits. As perceived risk nears zero, risk-reducers can more easily come to a realization that enough has been done. Gain-maximizers, in contrast, face no such limit, and will try to grow without end. One can say that their limit is infinite gain. Zero, however, is so much easier to recognize as a physical limit than infinity.

We know the damage caused by gain-maximization. We have seen how reliability and risk-reduction can lead to more desirable societal outcomes. Isn’t it time to consider shifting priorities and changing our criteria for decision-making?