# The Entropy Law and the Economic Process

///////

///////

Books:

The Entropy Law and the Economic Process, by Nicholas Georgescu-Roegen,
Harvard University Press, 1977 (1971).

///////

Other relevant sources of information:

Entropy at Wikipedia.
Entropy in thermodynamics and information theory, at Wikipedia.
Nicholas Georgescu-Roegen at Wikipedia.
The Entropy Law and the Economic Process, at Wikipedia.
Ecological Economics at Wikipedia.

///////

Georgescu-Roegen’s Entropy Hourglass:

Georgescu-Roegen, The Entropy Law and the Economic Process (1977 (1971), p. 11):

One such issue is the myth that science is measurement, that beyond the limits of theory there is no knowing at all. “Theory” is here taken in its discriminating meaning: a filing of all descriptive propositions within a domain in such a way that every proposition is derived by Logic (in the narrow, Aristotelian sense) from a few propositions which form the logical foundation of that science. Such a separation of all propositions into “postulates” and “theorems” obviously requires that they should be amenable to logical sifting. And the rub is that Logic can handle only a very restricted class of concepts, to which I shall refer as arithmomorphic for the good reason that every one of them is as discretely distinct as a single number in relation to the infinity of all others. Most of our thoughts, however, are concerned with forms and qualities. And practically every form (say, a leaf) and every quality (say, being reasonable) are dialectical concepts, i.e., such that every concept and its opposite overlap over a contourless penumbra of varying breadth.

The book of the universe simply is not written as Galileo claimed – only “in the language of mathematics, and its characters are triangles, circles, and other geometrical figures.” In the book of physics itself we find the most edifying dialectical concept of all: probability. And no book about the phenomena of life can dispense with such basic yet dialectical concepts as species, want, industry, workable competition, democracy, and so on. It would be, I maintain, the acme of absurdity to decree that no such book be written at all or, if it is written, that it simply disseminates nonsense.

Lest this position be misinterpreted again by some casual reader, let me repeat that my point is not that arithmetization of science is undesirable. Whenever arithmetization can be worked out, its merits are above all words of praise. My point is that wholesale arithmetization is impossible, that there is valid knowledge even without arithmetization, and that mock arithmetization is dangerous if peddled as genuine.

Let us also note that arithmetization alone does not warrant that a theoretical edifice is apt and suitable. As evidenced by chemistry – a science in which most attributes are quantifiable, hence, arithmomorphic – novelty by combination constitutes an even greater blow to the creed “no science without a theory.” A theoretical edifice of chemistry would have to consist of an enormous foundation supporting a small superstructure and would thus be utterly futile. For the only raison d’être of theory is economy of thought, and this economy requires, on the contrary, an immense superstructure resting on a minute foundation.

///////

/////// (ibid, p. 18):

Since the economic process materially consists of a transformation of low entropy into high entropy, i.e., into waste, and since this transformation is irrevocable, natural resources must necessarily represent one part of the notion of economic value. And because the economic process is not automatic, but willed, the services of all agents, human or material, also belong to the same facet of that notion. For the other facet, we should note that it would be utterly absurd to think that the economic process exists only for producing waste. The irrefutable conclusion is that the true product of that process is an immaterial flux, the enjoyment of life. This flux constitutes the second facet of economic value. Labor, through its drudgery, only tends to diminish the intensity of this flux, just as a higher rate of consumption tends to increase it.

And paradoxical though it may seem, it is the Entropy Law, a law of elementary matter, that leaves us no choice but to recognize the role of the cultural tradition in the economic process. The dissipation of energy, as the law proclaims, goes on automatically everywhere. This is precisely why the entropy reversal as seen in every line of production bears the indelible hallmark of purposive activity. And the way this activity is planned and performed certainly depends upon the cultural matrix of the society in question. There is no other way to account for the intriguing differences between some developed nations endowed with a poor environment on the one hand, and some underdeveloped ones surrounded by an abundance of natural riches. The exosomatic evolution works its way through the cultural tradition, not only through technological knowledge.

/////// (ibid, p. 20):

It is fashionable nowadays to indulge in estimating how large a population our earth can support. Some estimates are as low as five billions, others as high as forty-five billions. However, given the entropic nature of the economic process by which the human species maintains itself, this is not the proper way to look at the problem of population. Perhaps the earth can support even forty-five billion people, but certainly not ad infinitum. We should therefore ask “how long can the earth maintain a population of forty-five billion people?” And if the answer is, say, one thousand years, we still have to ask “what will happen thereafter?” […]

Moreover, to have a maximum population at all times is definitely not in the interest of our species. The population problem, stripped of all value considerations, concerns not the parochial maximum, but the maximum of life quantity that can be supported by man’s natural dowry until its complete exhaustion. For the occasion, life quantity may be simply defined as the sum of the years lived by all individuals, present and future. Man’s natural dowry, as we all know, consists of two essentially distinct elements: (1) the stock of low entropy on or within the globe, and (2) the flow of solar energy, which slowly but steadily diminishes in intensity with the entropic degradation of the sun. But the crucial point for the population problem as well as for any reasonable speculations about the future exosomatic evolution of mankind is the relative importance of these two elements. For, as surprising as it may seem, the entire stock of natural resources is not worth more than a few days of sunlight!
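Georgescu-Roegen's stock-versus-flow arithmetic can be made concrete with a toy calculation (all numbers invented for illustration). For a finite stock of low entropy consumed at a fixed per-capita rate, the total life quantity (person-years lived before exhaustion) is the same whatever the size of the population at any instant; a larger population merely exhausts the stock sooner:

```python
import math

# Toy model (all figures invented): a finite stock of low entropy is
# consumed at a fixed per-capita rate by a constant population.

def years_supported(stock, rate_per_person_year, population):
    """How long the stock can maintain this population."""
    return stock / (rate_per_person_year * population)

def life_quantity(stock, rate_per_person_year, population):
    """Person-years lived before the stock is exhausted."""
    return years_supported(stock, rate_per_person_year, population) * population

# Forty-five billion people exhaust the stock nine times sooner than
# five billion...
small = years_supported(1.0e12, 1.0, 5_000_000_000)
large = years_supported(1.0e12, 1.0, 45_000_000_000)
assert math.isclose(small, 9 * large)

# ...but the total life quantity supported is identical: maximizing
# population at every instant does not serve the species.
assert math.isclose(
    life_quantity(1.0e12, 1.0, 5_000_000_000),
    life_quantity(1.0e12, 1.0, 45_000_000_000),
)
```

The flow of solar energy, by contrast, would enter such a model as a rate with no exhaustion date, which is exactly why the two elements of the natural dowry carry such different weight in the passage above.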

/////// (ibid, p. 36):

From the preceding analysis it follows that the immense difference between East and West in the progress of factual knowledge constitutes no evidence in support of a rational reality. It does prove, however, that theoretical science is thus far the most successful device for learning reality given the scarcity pattern of the basic faculties of the human mind.

Theoretical Science: A Living Organism:

The main thesis developed in this chapter is that theoretical science is a living organism precisely because it emerged from an amorphous structure – the taxonomic science – just as life emerged from inert matter. Further, as life did not appear everywhere there was matter, so theoretical science did not grow wherever taxonomic science existed: its genesis was a historical accident. The analogy extends still further.

Recalling that “science is what scientists do,” we can regard theoretical science as a purposive mechanism that reproduces, grows and preserves itself. It reproduces itself because any “forgotten” proposition can be rediscovered by ratiocination from the logical foundation. It grows because from the same foundation new propositions are continuously derived, many of which are found factually true. It also preserves its essence because when destructive contradiction invades its body a series of factors is automatically set in motion to get rid of the intruder.

To sum up: Anatomically, theoretical science is logically ordered knowledge. A mere catalog of facts, as we say nowadays, is no more science than the materials in a lumber yard are a home. Physiologically, it is a continuous secretion of experimental suggestions which are tested and organically integrated into the science’s anatomy.

In other words, theoretical science continuously creates new facts from old facts, but its growth is organic, not accretionary. Its anabolism is an extremely complex process which at times may even alter its anatomic structure. We call this process “explanation” even when we cry out “science does not explain anything.” Teleologically, theoretical science is an organism in search of new knowledge.

/////// (ibid, p. 43):

Numbers and Arithmomorphic Concepts:

The boundaries of every science of fact are moving penumbras. Physics mingles with chemistry, chemistry with biology, economics with political science and sociology, and so on. There exists a physical chemistry, a biochemistry, and even a political economy in spite of our unwillingness to speak of it. Only the domain of Logic – conceived as Principia Mathematica – is limited by rigidly set and sharply drawn boundaries. The reason for this is that discrete distinction constitutes the very essence of Logic: perforce, discrete distinction must apply to Logic’s own boundaries. […]

There is one and only one reason why we use symbols: to represent concepts visually or audibly so that these may be communicated from one mind to another. Whether in general reasoning or in Logic (i.e., formal logic), we deal with symbols qua representatives of extant concepts. Even in mathematics, where numbers and all other concepts are as distinct from one another as the symbols used to represent them, the position that numbers are nothing but “signs” has met with tremendous opposition. Yet we do not go, it seems, so far as to realize (or to admit that we realize) that the fundamental principle on which Logic rests is that the property of discrete distinction should cover not only symbols but concepts as well.

As long as this principle is regarded as normative no one could possibly quarrel over it. On the contrary, no one could deny the immense advantages derived from following the norm whenever possible. But it is often presented as a general law of thought. A more glaring example of Whitehead’s “fallacy of misplaced concreteness” than such a position would be hard to find. To support it some have gone so far as to maintain that we can think but in words. If this were true, then thoughts would become a “symbol” of the words, a most fantastic reversal of the relationship between means and ends.

Although the absurdity has been repeatedly exposed, it still survives under the skin of logical positivism. Pareto did not first coin the word “ophelimity” and then think of the concept. Besides, thought is so fluid that even the weaker claim, namely that we can coin a word for every thought, is absurd. “The Fallacy of the Perfect Dictionary” is plain: even a perfect dictionary is molecular while thought is continuous in the most absolute sense. Plain also are the reasons for and the meaning of the remark that “in symbols truth is darkened and veiled by the sensuous element.”

Since any particular real number constitutes the most elementary example of a discretely distinct concept, I propose to call any such concept arithmomorphic. Indeed, despite the use of the word “continuum” for the set of all real numbers, within the continuum every real number retains a distinct individuality in all respects identical to that of an integer within the sequence of natural numbers. […] In Logic “is” and “is not,” “belongs” and “does not belong,” “some” and “all,” too, are discretely distinct.

Every arithmomorphic concept stands by itself in the same specific manner in which every “Ego” stands by itself perfectly conscious of its absolute differentiation from all others. This is, no doubt, why our minds crave arithmomorphic concepts, which are as translucent as the feeling of one’s own existence. Arithmomorphic concepts, to put it more directly, do not overlap. It is this peculiar (and restrictive) property of the material with which Logic can work that accounts for its tremendous efficiency: without this property we could neither compute, nor syllogize, nor construct a theoretical science. But, as happens with all powers, that of Logic too is limited by its own ground.

Dialectical Concepts:

The antinomy between One and Many with which Plato, in particular, struggled is well known. One of its roots resides in the fact that the quality of discrete distinction does not necessarily pass from the arithmomorphic concept to its concrete denotations. There are, however, cases where the transfer operates. Four pencils are an “even number” of pencils; a concrete triangle is not a “square.” Nor is there any great difficulty in deciding that Louis XIV constitutes a denotation of “king.” But we can never be absolutely sure whether a concrete quadrangle is a “square.” In the world of ideas “square” is One, but in the world of the senses it is Many.

On the other hand, if we are apt to debate endlessly whether a particular country is a “democracy” it is above all because the concept itself appears as Many, that is, it is not discretely distinct. If this is true, all the more the concrete cannot be One. A vast number of concepts belong to this very category, like “good,” “justice,” “likelihood,” “want,” etc. They have no arithmomorphic boundaries; instead, they are surrounded by a penumbra within which they overlap with their opposites. […]

It goes without saying that to the category of concepts just illustrated we cannot apply the fundamental law of Logic: the Principle of Contradiction: “B cannot be both A and non-A.” On the contrary, we must accept that, in certain instances at least, “B is both A and non-A” is the case. Since the latter principle is one cornerstone of Hegel’s Dialectics, I propose to refer to the concepts that may violate the Principle of Contradiction as dialectical.

In order to make it clear what we understand by dialectical concept, two points need special emphasis.

First, the impossibility mentioned earlier of deciding whether a concrete quadrangle is “square” has its roots in the imperfection of our senses and of their extensions, the measuring instruments. A perfect instrument would remove it. On the other hand, the difficulty of deciding whether a particular country is a democracy has nothing to do – as I shall explain in detail presently – with the imperfection of our sensory organs. It arises from another “imperfection,” namely, that of our thought, which cannot always reduce an apprehended notion to an arithmomorphic concept. […]

The second point is that a dialectical concept – in my sense – does not overlap with its opposite throughout the entire range of denotations. To wit, in most cases we can decide whether a thing, or a particular concept, represents a living organism or lifeless matter. If this were not so, then certainly dialectical concepts would be not only useless but also harmful. Though they are not discretely distinct, dialectical concepts are nevertheless distinct. The difference is this: A penumbra separates a dialectical concept from its opposite. In the case of an arithmomorphic concept the separation consists of a void: tertium non datur – there is no third case. The extremely important point is that the separating penumbra itself is a dialectical concept. […]

Undoubtedly, a penumbra surrounded by another penumbra confronts us with an infinite regress. But there is no point in condemning dialectical concepts because of this aspect: in the end the dialectical infinite regress resolves itself just as the infinite regress of Achilles running after the tortoise comes to an actual end. […] Far from being a deadly sin, the infinite regress of the dialectical penumbra constitutes the salient merit of the dialectical concepts: as we shall see, it reflects the most essential aspect of Change.

/////// (ibid, p. 50):

Dialectical Concepts and Science:

No philosophical school, I think, would nowadays deny the existence of dialectical concepts as they have been defined above. But opinions as to their relationship to science and to knowledge in general vary between two extremes.

At the one end we find every form of positivism proclaiming that whatever the purpose and uses of dialectical concepts, these concepts are antagonistic to science: knowledge proper exists only to the extent to which it is expressed in arithmomorphic concepts. The position recalls that of the Catholic Church: holy thought can only be expressed in Latin.

At the other end there are the Hegelians of all strains maintaining that knowledge is attained only with the aid of dialectical notions in the strict Hegelian sense, i.e., notions to which the principle “A is non-A” applies always.

There is, though, some definite asymmetry between the two opposing schools: no Hegelian – Hegel included – has ever denied either the unique ease with which thought handles arithmomorphic concepts or their tremendous usefulness. For these concepts possess a built-in device against most kinds of errors of thought that dialectical concepts do not have. Because of this difference we are apt to associate dialectical concepts with loose thinking, even if we do not profess logical positivism. The by now famous expression “the muddled waters of Hegelian dialectics” speaks for itself.

Moreover, the use of the antidialectical weapon has come to be the easiest way for disposing of someone else’s argument. Yet the highly significant fact is that no one has been able to present an argument against dialectical concepts without incessant recourse to them!

/////// (ibid, p. 52):

The position that dialectical concepts should be barred from science because they would infest it with muddled thinking is, therefore, a flight of fancy – unfortunately, not an innocuous one. For it has bred another kind of muddle that now plagues large sectors of social science: arithmomania. To cite a few cases from economics alone. The complex notion of economic development has been reduced to a number, the income per capita. The dialectical spectrum of human wants (perhaps the most important element of the economic process) has long since been covered under the colorless numerical concept of “utility” for which, moreover, nobody has yet been able to provide an actual procedure of measurement.

/////// (ibid, p. 79):

To argue that preference is discontinuous because its arithmomorphic simile is so, is tantamount to denying the three-dimensionality of material objects on the ground that their photographs have only two dimensions. The point is that an arithmomorphic simile of a qualitative continuum displays spurious seams that are due to a peculiar property of the medium chosen for representing the continuum. The more complex the qualitative range thus formalized, the greater the number of such artificial seams. For the variety of quality is continuous in a sense that cannot be faithfully mirrored by a mathematical multiplicity.

A Critique of Arithmomorphism:

Like all inventions, that of the arithmomorphic concept too has its good and its bad features. On the one hand, it has speeded the advance of knowledge in the domain of inert matter; it has also helped us detect numerous errors in our thinking, even in our mathematical thinking. Thanks to Logic and mathematics in the ultimate instance, man has been able to free himself of most animistic superstitions in interpreting the wonders of nature. On the other hand, because an arithmomorphic concept has absolutely no relation to life, to anima, we have been led to regard it as the only sound expression of knowledge. As a result, for the last two hundred years we have bent all our efforts to enthrone a superstition as dangerous as the animism of old: that of the Almighty Arithmomorphic Concept.

Nowadays, one would risk being quietly excommunicated from the modern Akademia if he denounced this modern superstition too strongly. The temper of our century has thus come to conform to one of Plato’s adages: “He who never looks for numbers in anything, will not himself be looked for in the number of famous men.” That this attitude has also some unfortunate consequences becomes obvious to anyone willing to drop the arithmomorphic superstition for a while: today there is little, if any, inducement to study Change unless it concerns a measurable attribute. Very plausibly, evolution would still be a largely mysterious affair had Darwin been born a hundred years later. The same applies to Marx and, at least, to his analysis of society. With his creative mind, the twentieth-century Marx would have probably turned out to be the greatest econometrician of all times.

Denunciations of the arithmomorphic superstition, rare though they are, have come not only from old-fashioned or modern Hegelians, but recently also from some of the highest priests of science, occasionally even from exegetes of logical positivism. Among the Nobel laureates, at least P. W. Bridgman, Erwin Schrödinger, and Werner Heisenberg have cautioned us that it is the arithmomorphic concept (indirectly, Logic and mathematics), not our knowledge of natural phenomena, that is deficient. Ludwig Wittgenstein, a most glaring example in this respect, recognizes “the bewitchment of our understanding by the means of our [rigidly interpreted] language.” The arithmomorphic rigidity of logical terms and symbols ends by giving us mental cramps. We can almost hear Hegel speaking of “the dead bones of Logic” and of “the battle of Reason … to break up the rigidity to which the Understanding has reduced everything.”

But even Hegel had his predecessors: long before him Pascal had pointed out that “reasoning is not made of barbara and baralipton.” The temper of an age, however, is a peculiarly solid phenomenon which advertises only what it likes and marches on undisturbed by the self-criticism voiced by a minority. In a way, this is only natural: as long as there is plenty of gold dust in rivers why should one waste time in felling timber for gold-mine galleries?

There can be no doubt that all arguments against the sufficiency of arithmomorphic concepts have their roots in that “mass of unanalysed prejudice which Kantians call ‘intuition,’” and hence would not exist without it. Yet, even those who, like Russell, scorn intuition for the sake of justifying a philosophical flight of fancy, could not possibly apprehend or think – or even argue against the Kantian prejudice – without this unanalysed function of the intellect. The tragedy of any strain of positivism is that in order to argue out its case it must lean heavily on something which according to its own teaching is only a shadow.

/////// (ibid, p. 95):

A social scientist seeking counsel and inspiration for his own activity from the modern philosophy of science is apt to be greatly disappointed, perhaps also confused. For some reason or other, most of the philosophy has come to be essentially a praise of theoretical science and nothing more. And since of all sciences professed today only some chapters of physics fit the concept of theoretical science, it is natural that almost every modern treatise of critical philosophy should avoid any reference to fields other than theoretical physics. To the extent to which these other fields are mentioned (rarely), it is solely for the purpose of proving how unscientific they are.

Modern philosophy of science fights no battle at all. For no one, I think, would deny that the spectacular advances in some branches of physics are due entirely to the possibility of organizing the description of the corresponding phenomenal domain into a theory. But one would rightly expect more from the critical philosophy, namely, a nonprejudiced and constructive analysis of scientific methodology in all fields of knowledge. And the brutal fact is that modern works on philosophy of science do not even cover fully the whole domain of physics.

Conceptualization as a mapping: (phenomenal domain) → (conceptual codomain):

/////// (ibid, p. 96):

I begin by recalling an unquestionable fact: the progress of physics has been dictated by the rhythm with which attributes of physical phenomena have been brought under the rule of measure, especially of instrumental measure.

As we may find natural ex post, the beginning was made on those variables whose measure, having been practiced since time immemorial, raised no problem. Geometry, understood as a science of the timeless properties of bodily objects, has only one basic attribute: length, the prototype of a quality-free attribute. Mechanics was the next chapter of physics to become a complete theoretical system. Again, measures for the variables involved had been in practical use for millennia.

It is very important to observe that what mechanics understands by “space” and “time” is not location and chronological time, but indifferent distance and indifferent time interval. Or, as the same idea is often expressed, mechanical phenomena are independent of Place and Time. The salient fact is that even the spectacular progress achieved through theoretical mechanics is confined to a phenomenal domain where the most transparent types of measure suffice. The space, the time, and the mass of mechanics all have, in modern terminology, a cardinal measure.

The situation changed fundamentally with the advent of thermodynamics, the next branch of physics after mechanics to acquire a theoretical edifice. For the first time non-cardinal variables – temperature and chronological time, to mention only the most familiar ones – were included in a theoretical texture. This novelty was not a neutral, insignificant event. I need only mention the various scales for measuring temperature, i.e., the level of heat, and, especially, the fact that not all problems raised by such a measure have yet been solved to the satisfaction of all.

/////// (ibid, p. 97):

We usually stop the survey of physics at this point [geometry, theoretical mechanics, thermodynamics, and electricity] and thus miss a very important object lesson from such fields as structural mechanics or metallurgy. The complete story reveals that these fields – which are as integral a part of the science of matter as is atomic theory – are still struggling with patchy knowledge not unified into a single theoretical body. The only possible explanation for this backwardness in development is the fact that most variables in material structure – hardness, deformation, flexure, etc. – are in essence quantified qualities.

Quantification in this case – as I shall argue presently – cannot do away completely with the peculiar nature of quality: it always leaves a qualitative residual, which is hidden somehow inside the metric structure. Physics, therefore, is not as free from metaphysics as current critical philosophy proclaims, that is, if the issues raised by the opposition between number and quality are considered – as they generally are – metaphysical.

Measure, Quantity, and Quality:

As one would expect, man used first the most direct and transparent type of measure, i.e., he first measured quantity. But we should resist the temptation to regard this step as a simple affair. Quantity presupposes the abstraction of any qualitative variation: consequently, only after this abstraction is reached does the measure of quantity become a simple matter, in most instances. Undoubtedly it did not take man very long to realize that often no qualitative difference can be seen between two instances of “wheat,” or “water,” or “cloth.” But an immense time elapsed until weight, for instance, emerged as a general measurable attribute of any palpable substance. It is this type of measure that is generally referred to as cardinal.

In view of the rather common tendency in recent times to deny the necessity for distinguishing cardinal from other types of measure, one point needs emphasis: cardinal measurability is the result of a series of specific physical operations without which the paper-and-pencil operations with the measure-numbers would have no relevance. Cardinal measure, therefore, is not a measure just like any other, but it reflects a particular physical property of a category of things. Any variable of this category always exists as a quantum in the strict sense of the word (which should not be confused with that in “quantum mechanics”).

[…] cardinal measure always implies indifferent subsumption and subtraction in a definite physical sense. To take a most elementary example: by a physical operation independent of any measure we can subsume a glass of water and a cup of water or take out a cup of water from a pitcher of water. In both cases the result is an instance of the same entity, “water.”

Of these two conditions (which are necessary but not sufficient for cardinality), subtraction is the more severe. We can subsume baskets of apples and pears, for instance, and by some definite device even colors. But the condition of subsumption suffices to disprove the cardinality of a great number of variables that are treated by economists as cardinal – if not continuously, at least significantly often. Even Bentham, in a moment of soul searching, invoked the absence of subsumption against his own notion of cardinal utility for the entire community: “Tis in vain to talk of adding quantities which after the addition will continue distinct as they were before, … you might as well pretend to add twenty apples to twenty pears.”

Actually, the same argument provides the simplest way of exploding the thesis of cardinal utility even for the individual. For where, may I ask, is that reservoir where the utilities and disutilities of a person accumulate? Utility and disutility viewed as a relation between an external object and the individual’s state of mind, not as a property intrinsic to the object, are psychic fluxes. By the time we feel exhausted at the end of one day’s work, no one can tell where the pleasure felt during one phase of that work is. Like the past itself, it is gone forever.

But the example that should suffice to settle the issue of the necessity of discriminating cardinality from pure ordinality, because it is so crystal clear and also familiar to everybody, is chronological time, or “historical date,” if you wish. Clearly, there is absolutely no sense in which we can subsume two historical dates meaningfully, not even by paper-and-pencil operations after some number has been attributed to each of them. “Historical date” is not a cardinal variable. And no rational convention can make it so.

/////// (ibid, p. 100):

On the other hand, we must recognize that cardinal and purely ordinal measurability represent two extreme poles and that between these there is room for some types of measure in which quality and quantity are interwoven in, perhaps, limitless variations. Some variables, ordinally but not cardinally measurable, are such that what appears to us as their “difference” has an indirect cardinal measure. Chronological time and temperature are instances of this. There is only one rule for constructing a measuring scale for such variables that would reflect their special property. Because of its frequency among physical variables, I proposed to distinguish this property by the term weak cardinality. For self-evident reasons, a weak cardinal measure, like a cardinal one, is readily transformed into an instrumental one.
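As it happens, Python's standard datetime types model this hierarchy of measures quite faithfully and can serve as a sketch of the distinction: dates admit ordinal comparison but no addition, while their differences behave as genuine cardinal quanta:

```python
from datetime import date

d1, d2, d3 = date(1971, 1, 1), date(1977, 1, 1), date(1983, 1, 1)

# Chronological dates are ordinally comparable...
assert d1 < d2 < d3

# ...but "adding" two dates is meaningless, and the type system agrees:
# no rational convention can make "historical date" a cardinal variable.
try:
    d1 + d2
    raise AssertionError("dates should not be addable")
except TypeError:
    pass

# The DIFFERENCES of dates, however, have a cardinal measure: timedeltas
# can be added, subtracted, and compared as genuine quanta (weak cardinality).
assert (d2 - d1) + (d3 - d2) == d3 - d1
assert (d3 - d1) - (d2 - d1) == d3 - d2
```

Temperature on the Celsius scale behaves the same way: 20°C + 30°C is meaningless, but the difference between two readings is an additive quantity.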

/////// (ibid, p. 107):

Sameness and Process:

In continuation of the preceding remarks, one point can hardly be overemphasized at this stage: the problem of size is indissolubly connected with the problem of sameness, specifically with the notion of “same phenomenon” or “same process.” In economics we prefer the term “unit of production” to “same process,” in order to emphasize the abstract criterion by which sameness is established. Whatever term we use, sameness remains basically a primary notion, which is not susceptible to complete formalization. “The same process” is a class of analogous events, and it raises even greater difficulties than “the same object.” But we must not let our analysis – in this case any more than in others – run aground on this sort of difficulty. There are many points that can be clarified to great advantage once we admit that “sameness,” though unanalyzable, is in most cases an operational concept.

Let $\, P_1 \,$ and $\, P_2 \,$ be any two distinct instances of a process. The problem of size arises only in those cases where it is possible to subsume $\, P_1 \,$ and $\, P_2 \,$ in vivo into another instance $\, P_3 \,$ of the same process. If this is possible we shall say that $\, P_1 \,$ and $\, P_2 \,$ are added internally and write

$\, P_1 \, + \, P_2 \, = \, P_3$.

We may also say that $\, P_3 \,$ is divided into $\, P_1 \,$ and $\, P_2 \,$ or that the corresponding process $\, (P) \,$ is divisible. For an illustration, if the masses $\, m_1 \,$ and $\, m_2 \,$ are transformed into the energies $\, E_1 \,$ and $\, E_2 \,$ respectively, by two distinct instances $\, P_1 \,$ and $\, P_2 \,$, these individual instances can be added internally because there exists an instance of the same process which transforms $\, m_1 + m_2 \,$ into $\, E_1 + E_2$. We can also divide $\, P_3 \,$ into $\, P_1 \,$ and $\, P_2 \,$ or even into two half $\, P_3$s – provided that $\, P \,$ does not possess a natural indivisible unit. Needless to say, we cannot divide (in the same sense of the word) processes such as “elephant” or even “Harvard University.”

It is obvious that it is internal addition in vivo that accounts for the linearity of the corresponding paper-and-pencil operation. For even if the subsumption of $\, P_1 \,$ and $\, P_2 \,$ is possible but represents an instance of a different process, our paper-and-pencil operations will reveal a nonlinear term.
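The mass-energy illustration, and the nonlinear term that appears when the subsumption yields a different process, can both be sketched in a few lines (the masses are arbitrary illustrative numbers):

```python
C = 299_792_458.0           # speed of light, m/s

def energy(m):
    """Mass-energy conversion E = m * c**2: a linear (homogeneous) law."""
    return m * C**2

m1, m2 = 1.0, 2.0           # kg, illustrative masses

# Internal addition: one instance transforming m1 + m2 yields E1 + E2.
assert energy(m1 + m2) == energy(m1) + energy(m2)

# But if subsuming two instances is an instance of a *different* process,
# the paper-and-pencil operation reveals a nonlinear term: for f(m) = m**2,
# f(m1 + m2) differs from f(m1) + f(m2) by the cross term 2*m1*m2.
f = lambda m: m**2
assert f(m1 + m2) != f(m1) + f(m2)
```

Linearity on paper is thus a consequence of additivity in vivo, not of cardinal measurability alone.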

Another point that deserves emphasis is that processes can also be added externally.
In this case, $\, P' \,$ and $\, P'' \,$ need not even be instances of the same process. In the external addition

$\, P' \, \oplus \, P'' \, = \, P'''$,

$\, P' \,$ and $\, P'' \,$ preserve their individuality (separation) in vivo and are lumped together only in thought or on paper. External and internal addition, therefore, are two entirely distinct notions.

When an accountant consolidates several balance sheets into one balance sheet, or when we compute the net national product of an economy, we merely add all production processes externally. These paper-and-pencil operations do not necessarily imply any real amalgamation of the processes involved. In bookkeeping all processes are additive. This is why we should clearly distinguish the notion of unit of production (plant or firm) from that of industry. The point is that an industry may expand by the accretion of unconnected production processes, but the growth of a unit of production is the result of an internal morphological change.
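External addition is exactly what an accountant's consolidation does: the figures are summed on paper while the plants remain separate in vivo. A minimal sketch, with hypothetical plant accounts and input names:

```python
from collections import Counter

# Two hypothetical plant accounts; the names and figures are illustrative.
plant_a = Counter(labor_hours=120, steel_tons=5, output_units=300)
plant_b = Counter(labor_hours=80,  steel_tons=3, output_units=200)

# External addition: a bookkeeping sum; nothing is amalgamated in vivo.
consolidated = plant_a + plant_b
print(consolidated)   # labor_hours=200, steel_tons=8, output_units=500
```

The operation is always available, for any pair of processes, which is why "in bookkeeping all processes are additive."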

It follows that if the units that are externally added in the bookkeeping process of industry are identical, then proportionality will govern the variation of the variables involved – inputs and outputs. The constancy of returns to scale is therefore a tautological property of a granular industry. To the extent that an actual industry represents an accretion of practically identical firms, no valid objection can be raised against the assumption of constant coefficients of production in Wassily Leontief’s system.
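The tautology is easy to exhibit: if an industry is an accretion of $n$ identical units, every input and every output scales by $n$, so the input-output coefficients cannot vary with scale (the unit flows below are illustrative):

```python
# Illustrative input and output flows of one unit of production.
unit_input, unit_output = 4.0, 10.0

# An industry of n identical units: every flow scales by n, so the
# coefficient of production is constant by construction.
for n in (1, 3, 10):
    coeff = (n * unit_input) / (n * unit_output)
    assert coeff == unit_input / unit_output    # constant returns, tautologically
```

This is the sense in which constant coefficients in Leontief's system are unobjectionable for an industry built by accretion.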

One point in connection with the preceding argument is apt to cause misunderstanding. Since I have argued that phenomena involving only cardinally measurable variables necessarily are indifferent to scale, one may maintain that I thereby offered the best argument against the existence of the optimum scale of the plant, at least. Indeed, a critic may ask: by and large, are not plant inputs and outputs cardinally measurable?

Such an interpretation would ignore the very core of my argument, which is that only if the cardinally measurable variables are immediately connected – as cause and effect in the strictest sense of the terms are – can we expect the law to be expressed by a homogeneous linear formula. To return to one of the examples used earlier, we can expect acceleration to be proportional to force because force affects acceleration directly: to our knowledge there is no intermediary link between the two. I have not even hinted that cardinality by itself suffices to justify homogeneous and linear law formulae.

I visualize cardinality as a physical property allowing certain definite operations connected with measuring, and, hence, as a property established prior to the description of a phenomenon involving cardinal variables. Precisely for this reason, I would not concur with Schrödinger’s view that energy may be in some cases “a ‘quantity-concept’ (Quantitätsgrösse),” and in others “a ‘quality-concept’ or ‘intensity-concept’ (Intensitätsgrösse).” As may be obvious by now, in my opinion the distinction should be made between internally additive and nonadditive processes instead of saying that the cardinality of a variable changes with the process into which it enters.

As to the case of a unit of production, it should be clear to any economist willing to abandon the flow-complex that inputs and outputs are not directly connected and, hence, there is no a priori reason for expecting the production function to be homogeneous of the first degree. The familiar plant-production function is the expression of an external addition of a series of physical processes, each pertaining to one of the partial operations necessary to transform the input materials into product(s). It is because most of these intermediary processes are quality-related that no plant process can be indifferent to scale. We know that the productive value of many inputs that are cardinally measurable does not reside in their material quantum. Although materials are bought by weight or volume, what we really purchase is often resistance to strain, to heat, etc., that is, quality, not quantity. This is true whether such materials are perfectly divisible or not.
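One way to see why quality-related intermediary stages break first-degree homogeneity is to sketch a hypothetical plant function with a quality threshold (the function, names, and numbers are invented for illustration, not taken from the book):

```python
def plant_output(material, heat):
    """Hypothetical plant production function: a chain of stage processes
    with a quality threshold, hence not homogeneous of the first degree."""
    if heat < 100.0:          # below this threshold the furnace stage fails
        return 0.0
    return 5.0 * material     # illustrative transformation stage

# Doubling all inputs does not simply double output near the threshold:
assert plant_output(10.0, 60.0) == 0.0
assert plant_output(20.0, 120.0) == 100.0     # not 2 * 0.0
```

What the purchased heat contributes is a quality (reaching the working temperature), not a quantum to be summed, so proportionality fails.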

Consequently, the so-called tautological thesis – that perfect divisibility of factors entails constant returns to scale – is completely unavailing. If, nevertheless, it may have some appeal it is only because in the course of the argument “divisibility of factors” is confused with “divisibility of processes.” Whenever this is the case the argument no longer applies to the unit of production: with unnecessary complications it only proves a tautological feature of a molecular industry.

Cardinality and the Qualitative Residual:

Perhaps the greatest revolution in modern mathematics was caused by Évariste Galois’ notion of group. Thanks to Galois’ contribution, mathematics came to realize that a series of properties, which until that time were considered as completely distinct, fit the same abstract pattern. The economy of thought achieved by discovering and studying other abstract patterns in which a host of situations could be reflected is so obvious that mathematicians have turned their attention more and more in this direction, i.e., towards formalism. An instructive example of the power of formalism is the abstract pattern that fits the point-line relations in Euclidean geometry and at the same time the organization of four persons into two-member clubs.
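The four-persons example can be made explicit: read each person as a "point" and each two-member club as a "line", and the familiar incidence axiom holds in both interpretations. A small enumeration (person labels are arbitrary):

```python
from itertools import combinations

persons = ["P1", "P2", "P3", "P4"]                         # the "points"
clubs = [set(pair) for pair in combinations(persons, 2)]   # the "lines"

assert len(clubs) == 6          # C(4, 2) distinct two-member clubs

# The shared abstract pattern: any two distinct "points" lie on
# exactly one common "line".
for x, y in combinations(persons, 2):
    common = [c for c in clubs if x in c and y in c]
    assert len(common) == 1
```

The formal pattern is indifferent to whether its terms are points and lines or persons and clubs, which is exactly the formalist moral the next paragraph questions.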

Time and again, the success of formalism in mathematics led to the epistemological position that the basis of knowledge consists of formal patterns alone: the fact that in the case just mentioned the pattern applies to points and lines in one instance and to persons and clubs in the other, is an entirely secondary matter. By a similar token – that any measuring scale can be changed into another by a strictly monotonic transformation, and hence the strictly monotonic function is the formal pattern of measure – cardinality has come to be denied any epistemological significance. According to this view, there is no reason whatever why a carpenter should not count one, two, …, n, as he lays down his yardstick once, twice, …, n-times or, as de Broglie suggests, use a ruler graduated logarithmically.
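The logarithmic ruler shows what the purely formal view discards. A strictly monotonic relabeling of the scale preserves order, but the carpenter's physical operation of laying lengths end to end no longer corresponds to adding the readings (the lengths below are illustrative):

```python
import math

lengths = [1.0, 2.0, 4.0, 8.0]

# A logarithmically graduated ruler: a strictly monotonic relabeling.
log_marks = [math.log(x) for x in lengths]

# Order survives any strictly monotonic transformation...
assert log_marks == sorted(log_marks)

# ...but additivity of lengths does not: laying 2 after 4 yields 6,
# while the log readings add up to the reading of 8, not of 6.
assert math.isclose(math.log(2.0) + math.log(4.0), math.log(8.0))
assert not math.isclose(math.log(2.0) + math.log(4.0), math.log(6.0))
```

Cardinality, on this showing, records a physical property (concatenation matches addition on the standard scale) that the monotonic-transformation pattern alone cannot see.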

Such being the typical objections against the distinction between various kinds of measurability, they are tantamount to arguing that there are no objective reasons for distinguishing, say, a fish from an insect. Why, both are animals!

//////////