The General Case of the Theory of Contropy
The Copenhagen Interpretation of quantum physics has led to widespread acceptance that the physical universe is fundamentally probabilistic. Yet fifty years of concerted effort based on this assumption have not produced an acceptable unified theory, even though partial unification has been claimed (see Georgi 1981). Further work has instead led toward ever more esoteric theories, based on more complex properties, and testable only by more subtle inference from ever more complicated experimentation. The probabilistic theories which seem most promising for completion are "Supergravity" (Freedman and van Nieuwenhuizen 1985) and "Superstrings" (Green 1986), requiring eleven and ten physical dimensions, respectively. Recently developed "Theories of Everything", such as that of Nanopoulos (1991), have claimed to link string theories with the standard model of physics.
Nor has the philosophical perspective become clearer due to acceptance of the Copenhagen Interpretation. Rather than finding in the scientific experience philosophical truths that might be used in shaping our individual and communal aims, the philosophy of science has encountered indeterminism and epistemological difficulties which as yet show few signs of satisfactory resolution. "Strange" realities are being accorded physical as well as mathematical existence, and "psi" phenomena advocates are beginning to use the same mathematics. Causality has been all but abandoned.
Harari (1983) observed that proliferation in the numbers of quarks and leptons was beginning to stir interest in the possibility of some simpler scheme of constructional representations. He proposed that "postulating a still deeper level of organization is perhaps the most straightforward way to reduce the roster".
The problem is that a deeper level of organization requires the existence of another organizing principle--one which has so far not been theoretically required--at the most fundamental level of the physical Universe. Considering the state of modern scientific sophistication, if such a principle does exist (is operative), then its continued non-discovery suggests that its physical manifestations must be extremely subtle from both the Classical and the Quantum Mechanical perspectives. At such extremes, definition and philosophical interpretation become controlling. Rather than discovery, more likely what is needed is a different way of looking at what is already known.
Philosophical debate is well-developed concerning the implications of Relativity and the Quantum Mechanical Theory. It involves difficult epistemological and ontological questions pertaining to absolute motion, substantival Spacetime, causal order, temporal order, etc., as well as such purely philosophical questions as an acceptable basis for reality, and so on.
Sklar (1976) reviewed this debate up to that time and observed that attempts in reduction from one kind of spatiotemporal language to another are plausible, but very problematic, while programs attempting to reduce a branch of spatiotemporal talk to language that is not prima facie spatiotemporal at all rest on extremely implausible epistemological and metaphysical assumptions. He concluded that "If there is going to be a reductionist 'purification' of all spatiotemporal discourse, it will be a reduction of all spatiotemporal talk to some spatiotemporal fragment of the original language, and not a reduction of spatiotemporal discourse to some other kind of discourse entirely." Discussions in each new issue of Philosophy of Science indicate that these problems have yet to be resolved.
With appreciation for Dr. Sklar's analyses and explications of these issues, this paper nevertheless proposes that the discussion can be simplified, not by reduction of spatiotemporal discourse, but by consideration of an even more fundamental level of reality, where an entirely different kind of discourse applies. This is the reality of particles which neither emit nor absorb simpler particles, so that they are affected by no "Field" forces whatsoever. Therefore, they interact only through collision; that is, physical coincidence in space and time. Note that three dimensions of rectilinear space and one of linear, unidirectional time are sufficient to describe the domain of such a particle using non-spatiotemporal language.
Newtonian mechanics, corrected by the understandings of Special Relativity, is adequate to describe the actions of such particles within that domain. It is argued that inference of such an existence is justified by postulation of an entirely plausible form of causality as well as by the retrograde extrapolation of a proposed additional physical parameter. Such a parameter would quantify irreversible, time-noninvariant aspects of the Evolutionary Process and take into account extrathermodynamic properties of that process.
Prigogine (1980) examined philosophical implications of non-equilibrium thermodynamics and the theory of dissipative structure, and he showed how order might develop due to real-world deviation from idealized randomness. He concluded that "the irreversibility which we observe is a feature of theories that take proper account of the nature and limitation of observation" (p. 215). He also pointed out that "perhaps there is a more subtle form of reality that involves laws and games, time and eternity" (same page).
LaViolette (1985) has developed a coherent theory called "sub-quantum kinetics" in which physical form in the observable Universe arises from ordered reaction-diffusion processes taking place among seven different kinds of subquantum species, the concentrations of three of these becoming inhomogeneously distributed to form observable wave patterns (fields). The theory offers a unified explanation for gravity and electromagnetism by associating their potential fields with sloping concentration patterns generated by radial subquantum unit fluxes. The theory extends causality to an even more basic level.
Young (1976, 1984) maintains that since the quantum of action, the Photon, comes in wholes which cannot be further reduced, Science has reduced everything to just the three parameters of action--mass, length, and time--as expressed by the measure formula ml²/t.
Action = Energy x Time
He considers energy to be derivative, rather than fundamental, since it "does not come in whole quanta, whereas action does". He then cites Shapley (1966) and Szent-Gyorgyi (1974) in support of his arguments on the need for a fourth parameter. After noting that both used the word "drive", he suggests that "purpose" might better characterize that extra dimension. With Wheeler (1982), he holds that the quantum of action is devoid of any internal structure whatsoever, and adds that it "has the potential to become that which has properties, not by adding to itself, but by dividing itself". In essence, he argues that, since the Photon exists outside space and time, the cycle of its directionless rotations "generates" space and time.
But this paper is postulated on a naive, realistic view of causality in which no physical action occurs until after its potential exists within some physical entity or entities, and that no physical action occurs except through physical means. In such a causal universe, something must rotate; something must act. (Equations cannot transmit or apply forces, much less create real Universes).
The position taken here is that even when internal structure cannot be detected due to being beyond Heisenberg's limit, it may be inferred that something exists within. This interpretation is philosophically contrary to the Copenhagen Interpretation, since it allows the conjecture that there are simpler existences than the Photon.
It is the aim here to show that, based on such inference of an actor behind the action, an extra parameter can be rationally detected which is similar in concept but opposite in operation to the "purpose" of Mr. Young. Instead of production through self-division, it is claimed that potential is being accumulated through a process of self-enhancement. At every level of physical existence, from the sub-quantum through Man, this process is driven by a capacity to combine and complexify in ways that are sometimes productive of stability and/or new properties. In those cases, available time arises from stability, and available energy arises in the potential for external interactions inherent in useful new properties. The most productive interactions would come to predominate at any level of existence over a sufficient period of time, as counterproductive behavior is self-destructive. In this view, action can be directed with or without purpose, and it is derived from energy and time, so that Young's Equation should be restated as:
Energy x Time = Action     (1a.)
Of course, there must be some constant which dimensionally relates the kind of action to the kind of energy, but it would be unity at the most basic level of "electromagnetic" energy where e (erg) x t (sec) = h (erg-sec). The concepts are mathematically and dimensionally equivalent, no matter in which direction the equality is taken to proceed. Thus, no contradiction to either classical or quantum theory is implied by choice of one direction as being controlling. It is a matter of philosophical preference.
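The dimensional bookkeeping above can be checked with a few lines of arithmetic. The sketch below is an illustration added here (the constant is the CODATA value of Planck's constant expressed in CGS units; the function name is invented for the example): a photon's energy multiplied by the period of one of its cycles returns h at every frequency, which is the sense in which e (erg) x t (sec) = h (erg-sec) with a unity constant.

```python
# Illustrative check (not from the original paper) of the identity
# e (erg) x t (sec) = h (erg-sec) at the level of a single photon cycle.
H = 6.62607015e-27  # Planck's constant in erg-seconds (CGS units)

def action_over_one_cycle(nu_hz: float) -> float:
    """Return E * t for one full cycle of a photon of frequency nu_hz."""
    energy_erg = H * nu_hz          # photon energy E = h * nu, in erg
    period_sec = 1.0 / nu_hz        # duration t of one cycle, in sec
    return energy_erg * period_sec  # erg-sec

# The product recovers h regardless of the frequency chosen:
for nu in (1e9, 5e14, 1e20):  # radio, visible light, gamma
    assert abs(action_over_one_cycle(nu) - H) < 1e-40
```

Whichever direction the equality is read, the product is the same, consistent with the point that the choice of direction is philosophical rather than mathematical.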
However, the consequences of choice are not equivalent. If it is held that time and energy are derived from purely random action, then no real "purpose" can attach to such directionless rotations of nothing. Such a parameter could not be universal. But if it is seen that action proceeds from energy and available time, then the actor becomes part of the system, and action might be directed to some purpose, even at the fundamental level of physical existence.
To the degree which such purposeful behavior can be discerned, confirmed, defined, and quantized, it might be useful as a parameter of physical existence.
If Equation (1a.) is held to be complete, then it can be seen that any such purpose is doomed to frustration. Since there is a time function divisor, both sides (either perspective) vanish as time approaches infinity. Even if fundamental entities can be entirely closed systems (no losses) and be thus assured of eternal existence, purpose could extend no further. Internal actions would be unchanging, and possibilities for external interaction would continually decrease since convergent trajectories of such entities might be reversed by interaction, but not their divergent trajectories. Such unchanging and isolated endurance must be purposeless in the end.
But if the purpose is to grow, rather than simply to endure, then there might be a chance. Aren't the very facts of our own existences evidence that physical means exist whereby endurance can be enhanced? If this is so (as will be argued), then Equation (1a.) is incomplete, and there must be at least one added parameter which reflects the physical effects of such purposeful actions.
It is proposed that such means lie in the fact that, in some situations, action can be so directed as to add to stability and/or increase potential for further interaction (options). Spatial properties --Geometry-- would be controlling in directed interactions on at least the fundamental level, but the central idea is that growth can occur not only in such physical dimensions as available space, available time, energy, mass, etc., but also (with physical effects) in such non-physical directions as maturity, knowledge, and wisdom.
Newton showed action to be an incomplete concept. In addition to actor and action, there is also an acted-upon and its reaction to be taken into account. No action occurs in isolation; a system must be defined in order to determine the net results of action, or the net potential for action. It is contended here that since a single action may simultaneously have more than a single result, the question of increased value--or relative potential--or philosophy--is at the heart of the missing parameter.
Whether energy/action is modelled and mathematically treated as a probability wave, vibrational wave, or a set of particles acting only by elastic collision and rebound, and so on, without some containing force the system will expand into space. If it is held that actions of fields maintain the concentration of energy, then causal explanation demands specification of the agencies and means whereby such forces might be transmitted and applied. And then those agents need be contained and focused, etc. At some level of causal explanation, it must become clear that divergence in the trajectories of elemental entities becomes controlling over convergence in open systems. Thus, energy would disperse without physical acts to overcome divergence.
But when actions can be sustained or repeated so that some degree of stability occurs (growth in available time), then actor and acted-upon together form a system which might itself have capabilities for altogether different kinds of actions. Such a capability amounts to a different kind of potential energy, albeit one which depends on the availability of outside systems with which to interact. If such systems are available, and if at least part of that potential can be translated into interactions which can be used to sustain stability within the overall system, then interaction can also be sustained and stabilized. Thus, a super-system would be formed that might then itself have capabilities for some entirely new kinds of purposeful actions. And so on.
Such a capacity for self-stabilization arising from openness to further interaction has some similarities to the principle of self-organization ("autopoiesis") of Jantsch (1980), as well as to Bohm's (1983) theory of super-implicate order, at least insofar as those ideas were interpreted by Young. There are also some parallels to Prigogine's description of the operations of dissipative structures.
But this paper is based on the concept that the physical means of enhanced endurance derives from a capability to create potential by directed interaction to form systems which possess entirely new and useful properties. The idea is that, in a dynamic, open-system view of physical process (as opposed to the "snapshot", closed-system view required for statistical and thermodynamic analyses) it is apparent that the full results of physical actions necessarily include effects on the actors, and their continuing potentials for further actions. While usually equal to or less than that which existed prior to the interaction, such potentials can sometimes be greater and, especially when deriving from new and useful properties, even thermodynamically discontinuous and orders-of-magnitude greater. Created potential becomes created available energy when it can be converted into useful work through action. Such non-thermodynamic and purposeful behavior is the basis for the proposed added parameter.
Attempts to define such a parameter by modification and shading of existing meanings can be useful only up to a point, beyond which lie epistemological and semantic difficulties that might well be unresolvable without excessive postulation. Here, an attempt is made to avoid such difficulties by defining a new term, represented by C (not c, Einstein's Constant). As an added parameter to Equation (1a.), C takes into account the value of created potential and available time for further interactions. It is a measure of an entity's capacity for positive-sum interaction. In this view, each new kind of real and useful potential derives from a new level of structured interaction, so C would increase in magnitudes associated with levels of purposeful complexification. Being dimensionless yet directly relatable to action, C (like space) has the form but not the substance of a physical quantity. Equation (1b.) shows the revised perspective²:
where C = growth parameter, E = resulting potential for further interactions, and t = life of the potential. Mathematical implications of the theory are further considered in the companion Special Case paper.
In the operation of such an evolutionary parameter, the elemental quantum of energy can be seen to have the potential to become that which has properties, not by dividing itself, but by adding to itself. Actions can be destructive as well as constructive of potential (time and energy), but in a causal universe, C is limited only in the negative: Structure can be broken down only so far, but it may be added to in ways not yet even imaginable.
Since C is by definition related to interactions productive of structure, it might seem to be equivalent to (or at least derivable from) the thermodynamic concept of State. However, the point is that the states of C are not predictable from thermodynamics.
"There is no doubt that quantum mechanics has seized hold of a beautiful element of truth and that it will be a touchstone for a future theoretical basis in that it must be deducible as a limiting case from that basis, just as electrostatics is deducible from the Maxwell equations of the electromagnetic field or as thermodynamics is deducible from statistical mechanics. I do not believe that quantum mechanics will be the starting point in the search for this basis, just as one cannot arrive at the foundations of mechanics from thermodynamics or statistical mechanics." (A. Einstein, 1936)
The Evolutionary Process of increasing complexification must be driven by some expenditure of potential against the Universal tendency for energy to dissipate. In addition to achievement of the more widely cited goal of field unification, the origins and nature of such complexification must be coherently explained before any physical theory can be considered complete. A complete theory must also accommodate the present existence of Man and his works at the (known) extreme of Evolutionary complexification.
However, the philosophy of fundamental probabilism which now prevails in Science seems inadequate to relate the unprecedented capabilities of Man to the Cosmic scale of physical existence. (Perhaps this difficulty is rooted in the fact that Man strives so that chance might have as little influence on events as he is able to manage?) It is clear that the structure and organization which produce increased availability of energy to Man derive from conscious application. It is not so clear how Consciousness derives from the statistical implications of the Quantum.
The problem of total physical explication is complicated by increasing difficulty in application of the probabilistic concept of Entropy to systems more complex than, say, Molecular Solids. Even modern concepts which consider information and organization to be a sort of reciprocal to Entropy, such as "Negentropy" and other ideas (e.g., Shannon 1949), seem unable to explain the ever increasing levels of energy which Modern Man is making available to himself.
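The Shannon measure referred to above can be made concrete. The following minimal sketch is an illustration added here (not an argument from the paper): it computes the entropy of a symbol sequence in bits per symbol, and an organized sequence scores lower than a mixed one, which is the sense in which information and organization act as a reciprocal to Entropy.

```python
import math
from collections import Counter

def shannon_entropy(seq: str) -> float:
    """Bits per symbol: sum of -p_i * log2(p_i) over observed symbol frequencies."""
    counts = Counter(seq)
    n = len(seq)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

# A fully organized sequence carries 0 bits/symbol of entropy;
# an even two-symbol mixture carries 1 bit/symbol.
print(shannon_entropy("AAAAAAAA"))  # 0.0
print(shannon_entropy("ABABABAB"))  # 1.0
```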
Nor has Probabilism yet been able to coherently complete the perspective on the Microcosmic scale. Despite the successes of the quantum mechanical view, after more than a half century there are still such indications of incompleteness in that description as the lack of a coherent unified field theory.
The experimentally confirmed wave/particle duality of existences on the Subatomic scale pointed to the presence of ambiguities in accepted definitions and interpretations. When Heisenberg showed with his indeterminacy relation that such ambiguities are absolutely irreconcilable (from our point of view), he established a theoretical limit to the precision attainable with direct observation.
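For a sense of scale, Heisenberg's relation delta_x * delta_p >= hbar/2 can be evaluated numerically. The sketch below is an added illustration (CGS values; the function name is invented for the example) of the smallest momentum spread compatible with confining a particle to a given region.

```python
HBAR = 1.054571817e-27  # reduced Planck constant, erg-sec (CGS units)

def min_momentum_spread(delta_x_cm: float) -> float:
    """Smallest momentum spread (g cm/s) allowed for a given position spread,
    from the relation delta_x * delta_p >= hbar / 2."""
    return HBAR / (2.0 * delta_x_cm)

# Confining a particle to one Angstrom (1e-8 cm, roughly atomic scale):
dp = min_momentum_spread(1e-8)
print(dp)  # about 5.3e-20 g cm/s; the product dx * dp sits exactly at hbar/2
```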
One interpretation of the meaning and effect of this limit is that the Quantum Theory is Complete. This is to say that possible subdivisions of reality beneath the quantum need not be considered since theories referring to existences beyond that threshold cannot be subject to verification by direct observation, and are therefore useless to Science.
Nor, in this view, may such theories appeal to causality, since that property disappears into the same ambiguities, as pointed out by Niels Bohr (1928, quoted by Folse 1985):
"The very nature of quantum theory thus forces us to regard the space-time coordination and the claim of causality, the union of which characterizes the classical theories, as complementary but exclusive features of the description, symbolizing the idealization of observation and definition respectively."
Within this interpretation where the statistically based quantum relationships define and limit allowed realities, the indeterminacy relationship found by Heisenberg is appropriately called "The Principle of Uncertainty".
However, this paper is based on an understanding in which "The Principle of Tolerance" (after Bronowski, 1973) is more apt. Here, it is taken that the indeterminacy relationships indicate a limit to knowledge of the reality, rather than to reality itself. In that case, there might be any number of physical subdivisions beneath the quantum--which is to say that the quantum theory is not complete. Non-detection of particles proposed as existing beneath the quantum can be explained by their being individually so weak and/or short-lived as to lie beyond detectability, as decreed by the Principle.
But if a multitude of such particles together can form stable or even semistable structures, then the collective structures might themselves be detectable. Without complicated assumptions about the properties of the non-detectable particles, the idea of future stability based on their cooperative interaction might seem to imply that a degree of foresight (on their part) would be needed. However, Axelrod (1984, with Hamilton) has shown that cooperation can arise and be maintained even in statistical systems. Thus, it is necessary only to assume that some mechanism for containment of the particles must exist.
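The Axelrod (1984) result invoked here can be sketched concretely with the iterated Prisoner's Dilemma he studied. The toy simulation below is an illustration added for this discussion (the payoff values T=5, R=3, P=1, S=0 and the strategy names are the standard ones from that literature, not from this paper): the reciprocating strategy TIT FOR TAT sustains cooperation with a like-minded partner using only the opponent's previous move, with no foresight at all.

```python
# Standard Prisoner's Dilemma payoffs: (my score, opponent's score) per round.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    """Return cumulative scores for two strategies over repeated rounds."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation is sustained
print(play(tit_for_tat, always_defect))  # (9, 14): one exploited round, then mutual defection
```

No strategy looks ahead; stable cooperation emerges from repetition alone, which is the point of citing Axelrod: containment (repeated interaction) substitutes for foresight.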
Of course, such an approach is similar to "Hidden Variable" arguments (see below) which, if not refuted, have at least been brought to a standstill for the past fifty years. But the proposed variable is defined in this case, and the utility of the resulting theoretical structure is demonstrated in that it suggests coherent explanations for the sources of such diverse phenomena as action-at-a-distance (Field) forces and the Evolutionary Process. While the adequacy of such explanations is partially explored in a companion Special Case paper, the discussion here is primarily concerned with theoretical development based on this viewpoint.
Within our existing "Standard Model", considerations of the bases of reality have led to ambiguities and epistemological difficulties which so far have been only partially resolved. Even that has required modification of accepted concepts about the nature of objective reality. If it is held that there is no reality beyond the quantum, then, as shown by Bohr, the mathematical formalism of the theory becomes the simplest reality. Unfortunately, that formalism describes no single, unambiguous reality, but instead, it gives a picture of which only one half may be viewed at a time.
Such problems might be minimized by the philosophical expedient of postulating causality. In a fully causal Universe, it is assured that objects have a physical existence independent of observation, so that the Classical, unambiguous view of objective reality (as corrected by the Special Theory of Relativity) applies. Development of theory can thus make use of accepted concepts with limited need for special modification to fit special reality.
It is clear that some practical limit to experimentation must be reached even before indeterminacy becomes controlling. Regardless of the degree of energy and sophistication which might be brought to bear in any investigation, the implicit requirements for greater and greater energies to detect lesser and lesser effects will probably set that limit somewhat less than Fermi's globe-encircling accelerator would require. Even then, however, it would not be conclusive that the last particle couldn't be smashed. If, indeed, there are subdivisions of reality beneath the Quantum, they cannot be directly observed from our perspective.
Even though fusion energy--or even full development of fission energy--seems most likely in practice to meet future energy needs, we have been devoting a considerable amount of resources to research at energies far beyond in hopes of completing the scientific model with a unified theory. Of course, knowledge always has value, but it is hard to see what philosophical guidance might derive from statistical unification. No practical usage other than theory-testing has been claimed for the proposed huge accelerators such as the "Supercollider". Of course, there is a possibility that incidental discovery might find some practical value beyond the foreseen, but the realities being tested are so far removed from present and probable future experience as to make this unlikely.
Possibly the Copenhagen Interpretation overextends the Empiricist requirement for reproducible detectability, when it is applied to realms beyond direct observation. Perhaps it might be better to seek a causal metaphysical foundation which is both philosophically rational and consistent with direct observation, rather than try to construct a theory based entirely on experimental results, regardless (or in spite) of philosophical implications. Admitting the possibility of non-detectable physical existences does not lead to an equality of theories. Instead, it allows their differentiation at more basic levels according to Ockham's Razor¹.
A fundamental concept of organized thought is that of the difference between Something and Nothing. In elaboration of that concept, Science depends upon the unique ability of Man to define a system by the conceptual separation of a set of interacting entities from its surroundings. It has been an aim of Science that the entire physical Universe might be completely defined as a system which is subject to fully causal, mechanistic explanation. Then the differences between something and nothing could be fully specified in all cases. However, Heisenberg showed that there are lower limits to physical detectability, and it is now accepted that this limits physical specification, as well.
The Quantum Mechanical Theory is based on a probabilistic view of nature. What is now called the "Copenhagen Interpretation" of quantum mechanics "assumes that the physical world has just those properties that are revealed by experiments, including the wave and particle aspects, and that the theory can only deal with the results of observation, not with a hypothetical 'underlying reality' that may or may not lie beneath appearances." (Holton and Brush 1973.) The successes of Quantum Mechanics, including those of Quantum Electrodynamics, Quantum Chromodynamics, Quark Theory, etc., in accounting for experimental observations have stimulated philosophical interpretations which impugn traditional notions of objective reality. For instance, Bernard d'Espagnat (1979) first lists three premises which form the basis for what he calls "local realistic theories of nature":
1. Regularities in observed phenomena are caused by some physical reality whose existence is independent of human observers.
2. Inductive inference is a valid mode of reasoning and can be freely applied, so that legitimate conclusions can be drawn from consistent observation.
3. No influence of any kind can propagate faster than the speed of light.
He then goes on to argue that at least one of these premises is in conflict with not only Quantum Mechanical Theory, but also facts that have been established by experimental observation.
Some influential philosophers flatly accept the Copenhagen Interpretation as final, and acknowledge few realistic limitations in their physical theories. P. W. Atkins (1981) for instance specifies that "The only assumption we shall make is that things happen unless they are expressly forbidden; and nothing is forbidden".
Objections have been raised from the start over such assumptions of absolute finality in the quantum mechanical view. In what is frequently cited as being the most incisive of such criticisms, Einstein, Podolsky, and Rosen (1935) based their arguments on what has become known as the "Hidden Parameter Theory". Such arguments are refuted implicitly by the premises of the Copenhagen Interpretation as quoted above, and they have been rejected essentially on grounds of the philosophical acceptance of such assumptions. For example, d'Espagnat bases his counter-arguments on the "criterion of utility", which seems equivalent to the Copenhagen premise.
The Quark Theory that was originally developed as an explication of Proton structure has been useful not only in that regard, but also it has led to the discovery and elaboration of a sort of Periodic Table of sub-atomic particles. However, it has been necessary to increasingly complicate the theory by definition of new properties and forces such as "Color", "Charm", "Flavor", etc. to help account for experimental results.
Ockham's Razor--the philosophical tool of inductive reasoning--provides that, consistent with observation, the simplest of competing theories is to be preferred. It is therefore significant that probabilistic theories become more complicated as they attempt to describe what should be simpler existences. For instance, it seems that the Quark must have more properties than does the Proton, etc.
Probabilism also has difficulty in accounting for the Evolutionary Process, because the successive appearances of ever more complicated structures become increasingly improbable. This is especially apparent in light of an Evolutionary perspective that was pointed out by Teilhard de Chardin in The Phenomenon of Man (1959). In addition to more arcane and widely cited arguments, Teilhard proposed that Evolution has a primary and privileged axis--a direction--that is marked by increasing complexity and organization.
Holding that consciousness of an entity is proportional to the complexity and organization which comprise that entity, he called this phenomenon "The Rise of Consciousness". Even though an intangible such as Consciousness might be indefinite and mathematically unmanageable, Teilhard's theory has serious cosmological implications. Physical evidence of events in the first few instants of Universal existence is not directly available, so it may well be that probabilistic arguments provide not only the most facile but also the simplest explanations for those events. However, note that confirmation of any such consistent line of Evolutionary progression in opposition to general expansion impairs those views which hold that physical existence is strictly a matter of statistical happenstance.
There are some ways in which consciousness can be described in physical terms. Maxwell showed with his much-discussed Demon that rational consciousness can be perceived to perform Work (or increase potential) in excess of accountable input, depending upon the way that the system is structured for analysis. In that sense, consciousness is an energy which is inversely related to randomness and disorder. Of course, the boundaries of the system can be extended to include the Demon and his own general intake of energy so that contradictions with the Second Law of Thermodynamics are avoided. But it is not easy to determine, for instance, how much of which crust of bread should be charged to which thought. (And, are not some thoughts infinitely more productive than others?)
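The Demon's bookkeeping can be given a toy quantitative form. The sketch below is an illustration added here (not part of the original argument): molecules carry only a fast/slow label, the Demon sorts them into two chambers, and the per-chamber Shannon entropy of the labels falls from about one bit to zero, which is the sorting "work" that appears without an accountable energy input.

```python
import math
import random

def label_entropy(chamber):
    """Shannon entropy (bits) of the fast/slow labels in one chamber."""
    if not chamber:
        return 0.0
    p_fast = sum(1 for m in chamber if m == "fast") / len(chamber)
    h = 0.0
    for p in (p_fast, 1.0 - p_fast):
        if p > 0:
            h -= p * math.log2(p)
    return h

random.seed(0)                         # reproducible toy run
molecules = ["fast", "slow"] * 50      # an even mixture of 100 molecules
random.shuffle(molecules)

left, right = molecules[:50], molecules[50:]       # unsorted chambers
print(label_entropy(left), label_entropy(right))   # near 1 bit each

sorted_left = [m for m in molecules if m == "fast"]   # the Demon's sorted chambers
sorted_right = [m for m in molecules if m == "slow"]
print(label_entropy(sorted_left), label_entropy(sorted_right))  # 0.0 each
```

As the text notes, enlarging the system to include the Demon restores the Second Law; the sketch only quantifies what the sorting accomplishes inside the chambers.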
Even such basic means of mechanical advantage as the lever and the pulley owe at least part of that advantage to systematic application. A potential lever is only a rigid object unless the system is organized in the particular way which directs and transmits the available forces via a fulcrum to produce the output.
Another example where an organizing principle intervenes in a system to increase potential beyond apparent input can be seen in the use of a converging lens as a burning glass. Presumably, equivalent work must be done upon the beams in order to deflect them from their diverging paths; however, detection of fully corresponding changes within the glass is unlikely due to nearly negligible mass-equivalence of the Photon, as well as due to the Second Law and other considerations.
The increment of potential resulting from the structure of application in these examples might be attributed to some general increase in the efficiency of some larger system, such as one that includes the user of the burning glass, his energy output (setting up, maintaining focus, etc.) and intakes, what his mother ate, and so on. In that case, the overall rate of Entropy increase might be slowed, but neither stopped nor reversed. Such a view is consistent with the Second Law, but not very illuminating. It remains a fact that the energy of the Sun is made more available to the system by knowledge of and access to a converging lens, and these factors are not directly relatable to energy expenditures.
In fact, the concept of Available Energy must depend on some assumptions of directional preference, whether explicit or implied. That is, the questions of "available when, where, to whom, and in what form?" must be addressed prerequisite to thermodynamic analyses, and this requirement marks availability as a nonabsolute, relative quantity. Energy which becomes more available to one system may become less available to other systems. However, this is not necessarily the case. Energy may become more available to more than one system at the same time, or its availability to one system may not be affected by its availability to other systems, or its availability in one form may not affect its availability in some other form, and so on. When wealth is seen as an available energy, this principle stands out most clearly in that tenet of economic capitalism which recognizes that one man's profit is not necessarily another man's loss3.
The earlier examples aimed to show that the imposition of order on a system can yield increases in potential for interaction in apparent excess of the potentials absorbed in effecting that order. This arises when a system develops potentials for external interactions which are not available to its subsystems, and/or potentials for interactions between the system itself and its subsystems, and so on. For instance, in addition to all of the potentials (except those which depend on freedom of movement) that they would possess as free entities, Atoms that are associated into Molecules collectively have additional structural possibilities. They have the sum total of their individual potentials and more.
Such examples may suffer from oversimplification, but they illustrate that, under certain conditions, order begets more order. As a system is organized, each structural feature which can lend itself to interaction with other systems is a kind of potential. When such potentials can be realized by repeated or continued interaction with external systems, some of the actors might recover energy in forms which enhance not only their own internal existences, but also the external conditions for further interactions. The duration and/or repetition of those interactions could be increased in this way, adding a degree of stability to the overall interacting structure. The resulting Supersystem might then itself be possessed of properties which lend themselves to further interactions. That is, another kind of potential might arise, and so on. Thus might the system grow ever more complexified through additions to itself.
Notice in this description that growth is constrained on one hand by too little order (in which case potential can vanish before it might be realized), and on the other by too much order (in which case openness to further interaction is impaired). While a certain degree of internal order is needed to create and maintain potentials for external action, chances for growth are maximized if that order is kept to the necessary minimum.
Since physical interactions involve movement of mass/energy, each different kind of capability is equivalent to a different kind of energy. A system for classification of Universal existences (or systems) according to the complexity of their structural features should therefore reveal a parallel order in the kinds of energy associated with each level of complexity. Moreover, such a system wherein a capacity for interaction on one level is seen to depend on synergistic structural features within can be constructed from the bottom upwards--from the simple to the complex. The reality and possible usefulness of such a theoretical system can be tested at each stage of construction by observing the degree of order with which it relates different kinds of energies as a by-product of the process of construction.
At the human level, synergism results from conscious cooperation for mutual benefit, and in other directed situations. With other animals, it is associated with instinct and/or symbiotic behavior. It arises spontaneously at lower levels of complexity, such as Molecules and Atoms, due to probabilistic interactions between properties of Time, Space, and Energy/Mass. This synergistic increase in potential derives from actions (or potential for actions) between open systems, while Entropy is generally defined within closed systems. While such a distinction may be difficult to make when applied in the case of highly complex systems, it seems clear when simpler systems (such as those governed by statistical laws or random actions) are considered. However, such semantic difficulties seem best abated by defining a new property, rather than by seeking to modify existing meanings.
On the mechanical level of complexity, such a property is related to increased efficiency, while at the human end of the scale increased structural leverage descends from conscious application. It derives from the increased options or potential available to an entity due to its capacity for positive-sum interactions, and it can be measured in those terms. The word "Contropy" seems to be appropriately descriptive for such a dimension, and it is here proposed for inclusion as one more independent variable in the "Standard Cosmological Model" of the physical Universe. The proposed new dimension is discontinuously embedded within Space and Time. It is a positive, discrete quantity which varies according to the number of kinds of potentials (capabilities for interaction) operating in, or available to, the considered system. The number of possibilities for interaction multiplies so rapidly with systematic complexity as to make formulation impractical beyond a few of the simplest cases. For instance, a system of molecules has possibilities for interactions not only on the molecular scale (collision and rebound, exchange of atoms, etc.), but also on the atomic scale (electron exchange, photon emissions, etc.) and sub-atomic scale (quantum effects, electron/proton interactions, proton/neutron interactions, gamma radiation, etc.), not to mention the pervasive effects of gravity, magnetism, and so on.
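The rapid multiplication of interaction possibilities can be sketched numerically. The sketch below is purely illustrative (the capability names are hypothetical stand-ins taken loosely from the examples above, not quantities defined by the theory): even two named capabilities per structural level yield a roster of pairwise interaction channels that grows with every level a system contains.

```python
from itertools import combinations_with_replacement

# Purely illustrative: count the distinct pairwise interaction channels
# available among the kinds of capability present at each structural
# level of a system of molecules.
def pairwise_channels(kinds):
    """All unordered pairs of capability kinds, self-pairings included."""
    return list(combinations_with_replacement(kinds, 2))

molecular = ["collision/rebound", "exchange of atoms"]
atomic = ["electron exchange", "photon emission"]
subatomic = ["proton/neutron interaction", "gamma radiation"]

# A system of molecules carries the channels of every level at once, and
# each level with k capability kinds contributes k*(k+1)//2 pairings.
total = sum(len(pairwise_channels(level))
            for level in [molecular, atomic, subatomic])
print(total)  # 9 channels already, from only two named capabilities per level
```

Each added level multiplies the roster again, which is the point of the impracticality claim above.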
However, it is not necessary to achieve a precise formulation for Contropy, since the scale is relative rather than absolute, and its philosophy is more useful than are its mathematics. (The mathematics cannot be more than roughly predictive, since mechanisms of future Contropic states cannot be more than roughly anticipated.) Recognition that Contropy is tending toward a maximum in the Universe at the same time Entropy is also tending toward a maximum in the same system confirms the independence of those parameters and establishes a broader point of view. In this view, physical existence is not necessarily a "Zero Sum" game, since entirely new kinds of potential interaction can derive from the organization of forces at lower levels. At the lowest levels, space-ordering interactions of subphysical particles which are individually nondetectable might, in their mass effects, become detectable. In that case, energy might be seen to arise from nothing. Conversely, detectable energy might degrade to beneath the threshold of detectability, and come to be seen as empty Space.
Sufficient mathematical perspective can be gained by estimation of Contropy as a relative quantity that is proportional to organized complexity. It might seem that such a scale would depend more on philosophical rather than physical gradations, and thus be difficult to mathematically represent and use. However, gradations naturally coincide with accepted classifications and there need be only a few of them in the entire range from Absolute Zero to Modern Man.
Contropy has the advantage of being independent of the dimensions of space and time and it is not constrained by the concepts of heat and temperature. Yet, it can be related to a clearly definable hierarchy of structure and organization. The "Absolute Zero" of such a scale of systems corresponds to the state of empty Space, which is taken to have only extent, and no structure or organization whatsoever. It is the Nothing relative to which Something can be defined. At the other extreme of the scale, the present known maximum of Contropy must be associated with the state of Modern Man. His primary organizing principle--rational consciousness--is being used to make energy available to himself on an unprecedented scale. The central theory4 of this paper can now be stated:
A Contropic system is one in which some interactions can produce potentials for further interactions in excess of the potentials expended by the interactors.
The development and construction of a degree-wise Contropic progression can be guided by Ockham's Razor, proceeding from the simple to the more complex. Such a perspective might be useful in better understanding the phenomenon of Evolution. For instance, if it is assumed that conditions in the "Cosmic Egg" precluded the existence of any built-up structures, then the Contropy of the Universe is defined as being nearly zero at the instant before the "Big Bang"5. The increase of Contropy as the complex is evolved from the simple coincides with the steps of Evolution.
The evolution of matter (atoms to pre-cells) has been widely considered and is summarized in reasonable detail by Shklovski and Sagan (1966). The evolution of life forms has also been extensively studied and explained in detail which is exemplified by the achievements of Watson and Crick (1953) in their explication of the basic process of replication.
However, the evolution of energy has been dealt with mostly in general terms of cosmology. In one sense, Boltzmann and Einstein showed that matter is an evolved state of Energy. Teilhard de Chardin recognized that life, itself, is an extension of this line of evolution which aims at duration. From that perspective, the evolution of Energy is a continuing process. As claimed previously, any consistent line of Universal progression implies the action of some directing force. In a causal Universe that force must arise from within (by definition). Since the Contropic theory proposes that available potential for interactions arises from efficient structural order, a hypothesis is suggested: Evolution is driven by Contropic energy. In that case, each stage of further complexification must derive from and be dependent upon ordered interactions of simpler components, and so on down to the simplest level of physical existence. These ideas can be more generally stated as a Theoretical Hypothesis:
The physical Universe is a Contropic System.
With establishment of the Initial Conditions and Auxiliary Assumptions (or Postulates), this hypothesis provides a basis for the derivation of a testable Prediction: The Contropy of the Universe is tending towards a maximum; that is, the Universe is undergoing a process of derandomization. Since this prediction is at odds not only with the Laws of Thermodynamics, but also with those prevailing philosophies which hold physical existence to be fundamentally a random process, it would seem to offer opportunities for testing of the hypothesis.
As postulates to development of the theory, those premises listed by d'Espagnat as the bases for local realistic theories of nature are adopted here as quoted above, except that the third premise (known as Einstein Separability) will be restated. Such modification begins with the observation that the problem of "action at a distance" has been only partially resolved, and that at the expense of the basic idea of direct connection between cause and effect (or Causality). Field theories are noncausal to the extent that they assume the fields to be composed of imponderable matter or pure (non-associated) Energy. The beautiful equations of Newton and Maxwell, while perfectly predictive of force and vector, are neither based on nor derived from an underlying physical mechanism for the transmission and application of those forces.
Nor do General Relativistic interpretations--including the "standard model"--which hold that a Space/Time continuum is distorted by the presence of Mass specify how such a deforming force might itself be transmitted and applied. The Quantum Mechanical Theory, which includes probability functions in addition to Field functions, has even less causal basis.
Although the fertile usefulness of these concepts is obvious, the need for exception to the rule of causality is a strong signal of possible incompleteness in those theories. The hypothesis which Newton would not "feign" is still needed.
It seems almost redundant to claim that physics must rest on a physical, rather than a mathematical, basis. In this view, continued non-discovery of the underlying mechanics of field forces implies that the involved physical agencies are beneath the Heisenberg limit to direct observation, and thus, that "feigned" (or at least purely theoretical) hypotheses are all that will be available to explain the actions of such forces.
The idea that force might be transmitted through an exchange of particles was proposed by Hideki Yukawa in 1935 as a part of his theory concerning the Strong Nuclear Force (Asimov 1962b). The subsequent detection of such a particle--the Meson--not only gave substance to that theory, but has also led to conjecture that particles must be involved in the transmission of such other "action at a distance" forces as Gravity and Magnetism.
If such particles exist, they are apparently so weak and/or short-lived as to be individually undetectable. Therefore, they would be held to be beyond the purview of Science according to the Copenhagen Interpretation. While that position would not deny the existence of such particles, it would seem to philosophically preclude theories based on them as acceptable explanations for physical actions. In the absence of confirmed detections of such particles, this impasse can be broken only by some modification of the philosophical perspective.
The most direct way to deal with action-at-a-distance is to postulate that there is no such thing. After all, the "Principle of Tolerance" can be used to argue for the reality of particles which happen to exist beneath the threshold of direct physical detection. Instead of the third premise as given by d'Espagnat, these arguments take as a third postulate:
3. No action occurs without previous and direct physical cause.
It will be developed that Einstein Separability is implicit in this assumption of causality. Since it also seems to be generally consistent with the other two postulates, it might appear that any modification of perspective is not very drastic. For instance, the Fields can be understood as deriving from the loci and flow of particles which define them, with no requirement for modification of the mathematics which already fully describe them.
However, observe that as a consequence of this postulate, it follows that particles which neither emit nor absorb simpler particles must themselves make actual physical contact in space and time as a requisite for any interactions to occur. They are subject to no remote influence whatsoever. Neither concept nor reality of Field forces applies to them--only local realities would be applicable. They would travel in straight lines except for cause (collision). Their reality--the fundamental physical Universe--would consist of three dimensions of Euclidean space and one of unidirectional time in terms of duration and/or changing positions within that space. Such a particle would comprise a special case of Quantum Mechanics.
Such a nonemitting particle would be definitive of a theoretical "simplest possible" particle, since a prohibition against emission amounts to a prohibition against decay. As an isolated system, the Entropy as well as the Energy of such a particle would remain constant, and it would have an infinite lifetime. It would also be assumed to have a constant (if minimal) Contropy. Then all other physical entities in the Universe would be mechanical constructions based on it, and as such, subject to wear, tear, and/or loss. Thus, such a particle would be the only entirely closed system within the Universe.
The Photon of Electromagnetic Energy is the weakest particle that is now detectable. Its apparent ability to endure across vast reaches of Space and Time is evidence that its structure is highly efficient for the containment of energy. Still, it is the nature of emission and absorption that these structures seem to be formed or dissolved instantly under certain conditions. A particle that can be assembled and disassembled in this way must be a composite structure: it must derive from forces which are internally cohesive, and there must be at least one constituent particle simpler than the Photon.
Lacking direct evidence of either, it seems simplest to suppose that the presumed constituent of the Photon and the "Simplest Possible" particle are one and the same. Its existence is here postulated, and the name "SPP" is proposed for that elemental pointlike concentration of Space/Time distortion called Electromagnetic Energy/Mass. It is defined as being the first Contropic magnitude above the absolute zero of Space; that is, the first differentiation between Nothing and Something.
But the structural possibilities available to such a particle as it has been so far defined would be very limited. Without properties that could be used to generate either externally compressive or internally cohesive forces such as a mutual attraction between the particles, a concentration of them would be mathematically expected to behave as a gas, and so diffuse by expansion into space. There must be some "convergence principle" operative even in the realm of the SPP.
In theory, such a particle might have any number and any type of properties, so long as they are beneath the threshold of detection and do not depend on emission/absorption of simpler systems. Observe that the latter prohibition arises by definition, and, as discussed above, it implies that the SPP is not subject to any remote influence, including the Basic forces of Nature. (Indeed, it is later proposed that SPP's are the carriers of two of them.)
Simplicity requires that there should be assumed only a minimum number of properties for the particle. Minimally, there must be some property which differentiates SPP's from the smoothly continuous emptiness of Space. This might be pictured in various ways, such as compression, expansion, vibration or other cyclic disturbances in an elastic medium (Space), curling up of Spacelines, constructive and/or destructive interferences between two or more vibrating mediums, etc. Regardless of the analogy, minimum differentiation requires two extensive properties--one of which can be represented as energy/mass equivalence, and the other descriptive of the volume within which that energy/mass equivalence is contained--plus two intensive properties, namely: location and velocity.
Here, the simplifying assumption is made that, in the isolated (or free) state, all SPP's are identical in energy/mass equivalence, speed, and spatial extent. Thus, they may be differentiated only by location, vector, and (as will be later developed) condition of rotation with respect to axis of translation.
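As a minimal sketch of this definition (the field names and rotation labels are mine, not the paper's), the free-state SPP can be modeled as a record whose shared constants are factored out, leaving only the distinguishing intensive properties:

```python
from dataclasses import dataclass

# Minimal sketch: energy/mass equivalence, speed, and spatial extent are
# shared constants of all free SPP's, so only the properties below can
# distinguish one SPP from another.
@dataclass(frozen=True)
class SPP:
    location: tuple   # position in Euclidean 3-space
    direction: tuple  # unit vector of translation
    rotation: str     # "+", "-", or "none" with respect to that axis

# Shared constant: the speed of all SPP's, fixed at c by Equation (2.)
SPP_SPEED = 2.9979e10  # cm/sec

a = SPP(location=(0.0, 0.0, 0.0), direction=(1.0, 0.0, 0.0), rotation="+")
b = SPP(location=(1.0, 0.0, 0.0), direction=(1.0, 0.0, 0.0), rotation="+")
print(a == b)  # False: identical in every respect except location
```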
The definition of such a particle provides a point of departure for further development of the Contropic dimension. It also facilitates definition of initial conditions from which to derive predictions of Universal behavior that might be used to test the theory. The Initial Condition is:
The Universe contained a minimum of Space and no structures more complicated than SPP's before time began.
Such an assumption seems reasonable on grounds that structure and organization must depend upon the geometry of Space and Time. The Initial Condition defines the state at a place and time which all physical existences in the Universe have in common.
In the absence of intervention by any outside force, the present physical reality must be the realization of possibilities which were inherent in the properties of Space ("Geometry") and in the nature of the compressed SPP's. Contropic theory suggests that the Universe has realized those possibilities by a series of discontinuous increases in potentials for different kinds of interaction, and further, that those increases are derived from viable (semi-stable) structural features. Evolution can be seen in that perspective to be a stepwise progression, from energy in its simplest, most chaotic form (the SPP) through energy in its most complex, structured form (currently Man). In other words, the Contropic Theory predicts that there is underway in the Universe a spontaneous process of derandomization, a continuing tendency for the appearance of more and more complex structures, and that this tendency should become more apparent as time goes by. To improve falsifiability, such a Prediction can be stated more clearly:
(1). There is a clear line of progression in the Universe.
(2). Its impetus is provided by a continuing process consisting of a sequence of time-consuming transformations in which structural features produce increments in potential for interaction, which are then realized as different kinds of available energy, and which then can be used to establish further structural order, and so on.
(3). This continuing process yields an ever greater potential for interaction.
That prediction is falsifiable to the extent that evidence to the contrary may be adduced, and useful to the extent that it accommodates the facts of existence within a simpler philosophical scheme.
Note that the Contropic concept of the evolution of energy structures is compatible with the "Big Bang" theory in that both require the pre-existence of an extreme concentration of Energy before the beginning. In that case the frequency of collision (the only means of interaction) could be great enough to allow discovery of viable structural features in the limited time available, before dispersal would make interactions between SPP's unlikely, if not impossible.
Such a concentration would imply the existence of some tremendous force of compression. The extreme frequency of collision, and the assumption that SPP's are internally identical, suggests that the "Cosmic Egg" must have been isothermal to an almost absolute degree. This is consistent with the assumption of identical energy/mass equivalence among SPP's as well as with the kinetic-molecular depiction of Temperature as a function of the speed of constituent elemental parts. It would also assure that the directions of translation of the SPP's, as well as the orientations of their rotations with respect to those directions, would have been entirely random before the expansion began.
When the force of compression was released or overcome, the SPP's near the outside whose vectors happened to take them away from the center continued in that direction, and must have done so from that time to this. Time began in the Universe at the start of expansion, and it expands proportionately with Space. The radius of the theoretical Universe in light-years is equal to its age in years. In this view, the symmetry of Space and Time is due to the constant and limiting speed of SPP's. Furthermore, since there is no particle simpler than the SPP, and because no compound structure can attain more velocity than do its parts, its constant velocity, taken together with the third premise as restated, defines a limit to the velocity with which physical information can be transmitted (thus is "Einstein Separability" implied). The symmetry is shown by Equation (2.):

vs = c          (2.)

where vs represents the velocity of SPP's and c is Einstein's Constant, 2.9979 x 10^10 cm/sec.
The initially high concentration of SPP's and the randomness in their vectors of translation assure that there was a high probability of collision, which decreased as Time expanded. In the absence of any other force, the process of collision and interaction offered the only retardant to diffusion. This consideration supports the idea that at least the basic mechanism of combination must be relatively simple, because the evolution of complicated structures requires time.
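The quenching of collisions by expansion can be sketched with a toy scaling argument (the bookkeeping is mine, not the paper's): at fixed speed, the expected collision rate per particle tracks number density, which falls as the cube of the radius of expansion.

```python
# Toy scaling argument: for N identical particles at fixed speed inside
# a sphere of radius R, number density scales as R**-3, and the expected
# collision rate per particle tracks that density.  Expansion therefore
# quenches collisions quickly, which is why any mechanism of combination
# must be simple enough to be discovered early.
def relative_collision_rate(R, R0=1.0):
    """Collision rate relative to its value at the initial radius R0."""
    return (R0 / R) ** 3

for R in [1, 2, 10, 100]:
    print(R, relative_collision_rate(R))
```

Doubling the radius already cuts the rate to one eighth; by a hundredfold expansion, interactions are a million times rarer.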
If SPP's are subject only to the forces of collision and rebound, their kinetic energies are also constant and they may not be brought to rest. However, some form of oscillation can produce the relative rest necessary to confinement. This suggests that some form of recurrent simple harmonic motion must be involved in the internal structure of the Photon. Perhaps a sequence of collisions and rebounds occurred among some SPP's--each geometrically correct as to position, vector, and timing--so that the initial and subsequent collisions were repeated in the same order. This circularity might be visualized as the closing of a wave upon itself. Some of the waves would be stable (or standing), and those SPP's would be confined with respect to each other.
Or perhaps SPP's have the property of being able to occupy the same space at the same time. In that case, rebound might not follow collision, or at least it might be delayed. Then one or more SPP's could shrink about a common center and exist as a compound particle--a point concentration of energy--so long as the SPP's can be confined. (The "Cosmic Egg" might then have consisted of all the SPP's in the Universe centered and compressed about the same point!) Even if SPP's cannot be stably contained, at least their temporary alignment might allow for some degree of focus in their emission. Then, random collision would be transformed to focused rebound, and this in itself would offer some new structural possibilities.
Whatever the mechanism of combination, once the pattern is established the interacting SPP's would move through space together as a unit. Any outside observer, aware that the Photon contains a number of SPP's, would infer that they have been compressed within the Photon. He would also note that all the complexities of individual SPP motion can be simplified into a single resultant identity, with a single location and vector of translation (in the same sense that the future location of any of the multitude of air molecules contained within a balloon can be predicted within some degree of accuracy, based upon the balloon's present position and the wind vector). Any structure based on such precise geometry and timing must be subject to some losses; that is, there should be some tendency for SPP's to escape (or be emitted) from the Photon. If so, then the Photon can be seen to comprise a third degree of Contropy in that it is a particle which emits/absorbs (only) SPP's.
Such a property makes it possible that Photons may interact not only by direct contact, but also through transfer of forces mediated by the exchange of SPP's. Thus, the Photon has available a dimension of potential remote interaction which is not available to the SPP, and such increased potential is not definable in the statistical terms of Entropy. There is an order-of-magnitude increase in that, unlike the SPP, which is restricted to possible interactions only with those particles which converge along its path in Space and Time, the Photon has possibilities for interactions with other SPP-emitting particles anywhere in the Universe.
Furthermore, the very act of emission can afford other forces which, if properly directed, might help to stabilize the Photon's structure against expansive tendencies. It is significant that such increased possibilities of duration--growth of the available time dimension--in themselves increase the possibilities for further interactions, and so on. (A model of physical means for such beneficial effects of emission on Photon structure is considered separately in the companion special case of this theory.)
The next higher magnitude of Contropy would consist of those particles which emit/absorb particles which themselves emit/absorb Photons and/or SPP's. These are Sub-Atomic Particles. Naturally following is the level of Atoms; that is, particles which emit/absorb Sub-Atomic Particles, Photons, and/or SPP's.
However, it can be seen that the series is not infinite on this basis. As the systems become more complicated, the concepts "particle", "emission", etc., have decreasing relevance. Thus, the more generalized concept of Contropy (as expressed by the theory) seems better for definition at higher magnitudes. Divisions arise where systems gain new dimensions of potential, and this essentially parallels established taxonomy.
Next above Atoms come Molecules, of course, and then Cells. Animals, with their mobility and reproductive capability, would constitute yet another order of magnitude of Contropy. It is clear that Man operates on an even more complex plane of existence, so that whatever basis is used to differentiate him from "Animals" ("Rational Consciousness"?) at the same time demarks an even higher Contropic state. At least locally, that division seems to be the most recent (and therefore, according to the theory, the greatest) discontinuous increase in Universal Contropy up until this time.
The theory implies that even now new potentials for higher cooperative interactions among Men are being developed. Extrapolation of the trend in the Contropic dimension indicates a clear prediction that other order-of-magnitude increases in Contropy will occur as Men find new ways to mutually benefit by cooperation with each other and/or other intelligent beings in the Universe. Whether that development will involve such logical stages as "Earth as a Conscious Cell", "The Organized Solar System", "Interstellar Operations", "Intergalactic Operations", etc. is obviously an open question. Such a prediction can only be tested by time, and even though the next new Contropic state is to be expected cosmically soon, in our terms that might yet be on the order of thousands of years in the future.
The more immediately falsifiable contention of consecutive, discontinuous increases in Universal complexity (or decreases in randomness) can be considered from "afar" in cases of the lowest magnitudes of Contropy. This affords a Contropic perspective of overview in which deviations from random behavior can be neglected in those cases--as they are in the entropic perspective. The Special Case of the theory is based on this effect, and it is developed in the companion paper.
Further subdivision and refinement of the Contropic Scale can proceed from consideration of possible differences among SPP's. Although they have been defined to have identical speeds and energy contents, SPP's may be differentiated by their positions in space and time and by their directions of translation. Possible differences in condition of rotation ("spin") with respect to the axis of translation also may be used to distinguish among SPP's. Without assumptions as to internal structure of the SPP's, only three possible cases of rotation with respect to the axis of translation need be considered: clockwise, counter-clockwise, and none (or "tangential" rotation, which is equivalent). It is the nature of two-endedness that the direction of rotation as seen from one end is the opposite of that seen from the other end. Therefore, the definition of the "positive pole" as being that from above which the rotation appears to be clockwise at the same time defines the opposite, or "negative" pole, as having the opposite rotation. Then, the axis of rotation of the SPP can be aligned with its axis of translation in three different ways, defining three types of SPP's according to their different kinds of angular momentum, as shown in Sketch #1. That sketch illustrates that there are actually six possible orientations of the axis of rotation with the axis of translation, but (c), (d), (e), and (f) are equivalent, so the simplification is justified. The three kinds of SPP's are:

1. "+ magnetons"--SPP's whose rotation is clockwise with respect to the axis of translation.
2. "- magnetons"--SPP's whose rotation is counter-clockwise with respect to the axis of translation.
3. "gravitons"--SPP's with no (or tangential) rotation with respect to the axis of translation.
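The collapse of six orientation cases into three kinds can be checked with a small enumeration. This is an illustrative model only (representing each case by the sign of the angular momentum component along the axis of translation is my assumption, following the pole definition above): flipping a particle end-for-end reverses both the viewing direction and the apparent sense of rotation, so the six raw cases of Sketch #1 fall into three classes.

```python
from itertools import product

# Illustrative bookkeeping: characterize each orientation case by the
# sign of the angular momentum component along the axis of translation.
# "forward"/"backward" give the spin axis relative to translation; a
# perpendicular ("tangential") axis contributes nothing along it.
def angular_momentum_sign(axis, sense):
    if axis == "perpendicular":
        return 0
    sign = +1 if sense == "clockwise" else -1
    # Viewing the same rotation from the other end reverses its sense.
    return sign if axis == "forward" else -sign

cases = list(product(["forward", "backward", "perpendicular"],
                     ["clockwise", "counter-clockwise"]))
classes = {angular_momentum_sign(a, s) for a, s in cases}
print(len(cases), sorted(classes))  # 6 cases reduce to 3 classes: [-1, 0, 1]
```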
The individual SPP is Contropically equivalent to systems of SPP's which interact only randomly, and thus it defines the base or ground state of the first magnitude of Contropy. The next higher increment would consist of systems of SPP's in which some degree of order is imposed and so on. A subclassification system which is a Contropic analogue to the Entropic concept of "Degrees of Freedom" is defined below as "Degrees of Confinement":
1. "Free" (Zeroth Degree)--Systems of SPP's whose trajectories are completely random and unfocused.
2. "Focused" (First Degree)--Systems of SPP's in which some degree of order (or focus) is imposed.
3. "Confined" (Second Degree)--Systems of SPP's which are confined with respect to each other.
Higher magnitudes of Contropy can be subdivided according to the same principle. In fact, the Confined state of one magnitude is seen to define the Free state of the next higher magnitude [6]. Based on Yukawa's concept that a force can arise between two bodies as a result of the exchange of particles, it has been conjectured that each of the discontinuous increases in the possibilities for interaction resulting from applied structure can be associated with a different kind of energy. This should be most easily seen in the simplest (or SPP) case.
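As a purely illustrative sketch (the names and the (magnitude, degree) representation are hypothetical conveniences, not part of the theory), the Degrees of Confinement and the rule that the Confined state of one magnitude defines the Free state of the next higher magnitude can be modeled as:

```python
# Hypothetical model of Contropic magnitudes and Degrees of Confinement.
# The rule: the Confined (Second Degree) state of one magnitude is the
# Free (Zeroth Degree) ground state of the next higher magnitude.

DEGREES = {0: "Free", 1: "Focused", 2: "Confined"}

def normalize(magnitude, degree):
    """Canonicalize a (magnitude, degree) pair: Confined at magnitude m
    is identified with Free at magnitude m + 1."""
    if degree == 2:
        return (magnitude + 1, 0)
    return (magnitude, degree)

def describe(magnitude, degree):
    m, d = normalize(magnitude, degree)
    return f"Magnitude {m}: {DEGREES[d]}"

print(describe(1, 2))  # → "Magnitude 2: Free"
```

The normalization step encodes the continuity of the scale: each magnitude's highest degree of internal order is exactly the starting point of the next.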
Since it is not necessary to assume that reactions between gravitons and magnetons occur, there are four possible interactions between SPP's [7]:
1. "+ magneton" with "+ magneton"
2. "+ magneton" with "- magneton"
3. "- magneton" with "- magneton"
4. " graviton" with " graviton"
Relatively few auxiliary assumptions are required to relate these elemental potentials to the Basic Forces: (a.) Reactions between gravitons produce a force of attraction between the bodies which emitted those gravitons; (b.) Reactions between "opposite" magnetons produce a force of attraction between the emitting bodies; and, (c.) Reactions between "like" magnetons produce a force of repulsion between the emitting bodies. It follows that the (unfocused) force associated with graviton interactions is "Gravity", and the (focused) force associated with magneton interactions is "Magnetism".
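The four interactions and the auxiliary assumptions (a.) through (c.) can be sketched as a small lookup (the type labels "+m", "-m", and "g" are hypothetical shorthand, not notation from the theory):

```python
# Sketch of the four SPP interactions and the forces conjectured for them,
# per assumptions (a)-(c): graviton-graviton and opposite-magneton exchanges
# attract; like-magneton exchanges repel; graviton-magneton reactions are
# not assumed to occur.

def interaction_force(a, b):
    """Return the conjectured force for an exchange between two SPP types,
    where "+m" / "-m" are the two magnetons and "g" is the graviton."""
    if a == "g" and b == "g":
        return "attraction (Gravity)"       # assumption (a.)
    if "g" in (a, b):
        return None                          # cross reactions not assumed
    if a == b:
        return "repulsion (Magnetism)"      # assumption (c.)
    return "attraction (Magnetism)"         # assumption (b.)

for pair in [("+m", "+m"), ("+m", "-m"), ("-m", "-m"), ("g", "g")]:
    print(pair, "->", interaction_force(*pair))
```

Note how the unfocused graviton channel yields only attraction, while the magneton channels yield both attraction and repulsion, matching the identification of the former with Gravity and the latter with Magnetism.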
It is not necessary to this theory to develop further hypotheses as to the mechanisms of these forces, but it is interesting to speculate that they might amount to the interconvertibility of Energy and Space/Time at the SPP level (perhaps through actions of constructive and/or destructive interference between waves of SPP's?). If so, there would be a nice symmetry with the interconvertibility of Energy and Mass that was pointed out by Einstein. Extensions of the previous reasoning result in the Contropic Scale shown in Table 2. Its apparent clarity and coherence argue for the usefulness of the concept of Contropy. For instance, the forces of Gravity and Magnetism are seen to arise from a common cause--interactions of SPP's--and they are therefore co-equal Basic Forces. In this view, the Strong and the Weak Nuclear Forces arise from more complicated causes, including the actions of those Basic Forces.
Observe that the progressions of the Contropic scale correspond to the order of appearance in the Universe of defining entities according to current Evolutionary theory. It is interesting that barren aggregations of material such as Planets, Solar Systems, Galaxies, etc., can be classified as increases within the 12th order of Contropic magnitude (complexity), but their living counterparts will define altogether new magnitudes as they are organized and stabilized.
It is submitted that the Contropic Scale as here depicted delineates a process of Derandomization as predicted by the theory, and that the descriptions of the Evolutionary Process by Darwin (1859), de Chardin (1959), Shklovski and Sagan (1966), and others document the fact that such a process is under way and continuing.
Perhaps the most significant philosophical implications in the theory of Contropy derive from the fact that the Universal future projected from the Contropic progression depicted in Table 2 is very different from the dead end predicted by probabilism. Instead of the hopeless alternatives of continued expansion or else eventual collapse, the possibility is raised that some degree of stabilization may be achieved through cooperative interactions between rational beings. Then, even more stability might be derived from higher level interactions, and so on.