On Thermodynamics Misconceptions, Part I

Edgardo V. Gerck∗
Pasadena City College
∗ https://www.researchgate.net/profile/Edgardo Gerck

(Dated: 6/23/2018. This version supersedes the other versions, first dated 12/08/2017.)

We set a stage of two limiting cases, called Blue for external appearance and Red for underlying reality, critically studying a popular interpretation of thermodynamics. We use the lenses of classical thermodynamics, statistical thermodynamics, information theory, game theory, mathematical finance, biology, psychology, and literature, trying to obtain a more diverse view. We report a set of macro-world processes (not quantum mechanical) that "always win" in the limit, which would not seem possible at first sight in thermodynamics, but is provable using Shannon's Information Theory. The behavior of their dynamical laws, governing the micro and leading to the macro, links Shannon's entropy H with the thermodynamical entropy S. We conclude, through various cases, that Nature may superficially look chaotic but does not seem to act in a random way.

I. INTRODUCTION

Almost 85 years ago, Breit and Wheeler [1] were the first to suggest that it should be possible to turn light into matter by smashing together only two photons to create an electron and a positron, the simplest method of turning light into matter ever predicted. To an observer before that time, it would seem as if matter were coming out of nothing.

Bias also often inserts itself as an unwanted guest of observations, a distortion of reality, and maybe the ultimate challenge facing science as a neutral description of facts. Recognizing bias, and ensuring no conflicts of interest, are important in science, in order to double-blind or at least minimize its influence. The lesson is that we cannot avoid bias; its multifarious manifestations can be pervasive and often undetectable, so one must always be on watch.

These two considerations add importance to this study: (1) if science is to steadily come closer to a true depiction of reality; and (2) if bias is to be reduced as closely as possible to zero. To better understand this two-pronged test, we consider two limiting cases, Blue for external appearance and Red for underlying reality. With Blue, one stays in the known but superficial reality; with Red, one goes deeper into the real world. The names for the two cases come from a scene in the 1999 film "The Matrix." In the "safe" but fake world (Blue), life seems to follow a collective conception of thermodynamics.




There, it seems that one is already victimized by the laws of that nature: one can never win, "nature is unwinnable," and the most one could do is break even, but even that is forbidden by that nature. The film's Blue viewpoint seems to be that, with the Blue Pill, the world (viewed as that different nature, slightly tinted blue to visually mark it as different) presents a "nature that is unwinnable," because its thermodynamics says that efficiency cannot be 100% and that a gain must be less than the loss. Do nothing, and one loses less. But let us here also take the Red Pill, and go deeper.

This work is linked to a current and old discussion in physics: why does one observe an "arrow of time"? As recently reviewed by Sean Carroll at Caltech, "Irreversible processes are at the heart of the arrow of time." [2] Yet all physical processes are reversible in the micro-world [2], in quantum mechanics, but not for systems in the macro-world above, which are thought to be irreversible in any case. Accordingly, one cannot undo diffusion, omelets, or any process governed by a positive variation of entropy. As Carroll explains, the principle underlying irreversible processes is summed up in the second law of thermodynamics: the entropy of an isolated system either remains constant or increases with time.

We present a counterexample, a set of macro-world processes (not quantum mechanical, not micro-world) that "always win" in the limit (affording 100% efficiency in setting entropy close to zero, as closely as desired), which is, mutatis mutandis, the result in quantum mechanics, where efficiency is 100% and there are no irreversible processes [2]. We plan to return to this topic outside of this exposition.

This work is divided into two Parts, with further

material to be considered in a forthcoming Part II. This Part I will study different areas, not just physics, using a diversity of approaches in STEM and humanities. In the following, we will examine reality using the lenses of classical thermodynamics, statistical thermodynamics, information theory, game theory, mathematical finance, biology, topology, psychology, and literature, trying to obtain a more diverse view and taking into account that physics alone may not be capable of describing reality per se [3].

Cautionary Note: It is not our purpose to advocate or to criticize the popular interpretation or other approaches: we adopt a philosophically neutral stance. Rather, our aim is conceptual unification in terms of physics, thermodynamical entropy, game theory, and Shannon's information theory, by generalizing the concept of entropy beyond the thermodynamic framework, with potential applications to understanding the "arrow of time" and other irreducibility questions.

II. CLASSICAL THERMODYNAMICS

Classical thermodynamics [4] is an area of physics involving the transformation of energy from one type to another. It is out of scope in physics to talk about, e.g., "The perversity of the universe tends to a maximum" [5]; there can be no such law in classical thermodynamics.

Because entropy is an extensive property [6], it scales with the amount of material in a system; it is an additive, scalar quantity, and it follows that the principle of maximum entropy must hold for each individual, however infinitesimal, segment of the world-line [6]:

∆S = ∮ ∂Q / T     (1)

where S is entropy, Q is heat, and T is temperature, in the Clausius integral above.

This section suffers from the limitation of the materials discovered so far, so one cannot approach, e.g., zero loss "as closely as desired" in a limiting process. One does not know whether this is due to a temporary absence of suitable materials, or due to a physical principle limiting their existence permanently. This will be resolved in the next sections, where material limitations are removed. However, classical thermodynamics (macroscopically-based) does say that efficiency can approach 100%, albeit as closely as materials permit. With materials existing today, efficiency can already be close to 99.99% [4]. The macroscopic thermodynamic loss can thus approach zero, theoretically as close to zero as current materials permit.

Therefore, the Blue hypothesis is not predictive of observations, and is denied. Research on materials and processes may increase macroscopic thermodynamic efficiency, already close to 100%.
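As a minimal numerical sketch of eq. (1) and of efficiency approaching 100%, consider an illustrative reversible Carnot-like cycle between two reservoirs; the choice of cycle, the reservoir temperatures, and the heat input below are assumed values for illustration only, and the function name is ours.

```python
# Illustrative sketch: a reversible Carnot-like cycle between two reservoirs,
# checking eq. (1) as a discrete sum of Q/T terms around the cycle.

def carnot_cycle(T_h, T_c, Q_h):
    """Return (Q_c, efficiency, clausius_sum) for a reversible two-reservoir cycle."""
    Q_c = Q_h * T_c / T_h                   # heat rejected reversibly at T_c
    efficiency = 1.0 - T_c / T_h            # ideal efficiency, < 100% whenever T_c > 0
    clausius_sum = Q_h / T_h - Q_c / T_c    # discrete analogue of the integral in eq. (1)
    return Q_c, efficiency, clausius_sum

if __name__ == "__main__":
    Q_h = 1000.0  # J, assumed heat input per cycle
    for T_h, T_c in [(600.0, 300.0), (1200.0, 300.0), (6000.0, 300.0), (6000.0, 3.0)]:
        Q_c, eta, dS = carnot_cycle(T_h, T_c, Q_h)
        print(f"T_h={T_h:7.1f} K  T_c={T_c:6.1f} K  "
              f"efficiency={eta:7.2%}  sum(Q/T)={dS:+.2e} J/K")
    # Efficiency approaches (but never reaches) 100% as T_c/T_h -> 0,
    # while the Clausius sum stays at zero for the reversible cycle.
```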

III. STATISTICAL THERMODYNAMICS

In statistical thermodynamics, entropy is usually defined as [4]:

S = kB · ln(P) + c,     (2)

where S is entropy, kB is the Boltzmann constant, P is the probability of the microstate, and c is a non-negative constant. The conjecture is that the same conclusion holds here, since the macroscopically-based and the microscopically-based branches of classical thermodynamics must lead to the same results [6]. Thus, the Blue hypothesis is not predictive of observations, and is further denied.

We could stop here, but we will prove, in the next sections, a stronger and broader result, approaching in the macro-world a result that has been reported only for a quantum process: achieving zero loss as closely as desired. This could apply to the current discussions on the "arrow of time" [2], and to reversible thermodynamic processes.
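As a small formal check of eq. (2), assuming c = 0 and two independent subsystems with arbitrarily assumed microstate probabilities, the kB · ln(P) term is additive over the joint system, consistent with entropy being an additive, scalar quantity as noted in Section II; the probability values and the function name are illustrative assumptions.

```python
import math

K_B = 1.380649e-23  # J/K, Boltzmann constant

def entropy(P, c=0.0):
    """Eq. (2): S = k_B * ln(P) + c, for a microstate probability P (c assumed 0 here)."""
    return K_B * math.log(P) + c

# Two independent subsystems: the joint microstate probability is the product P1 * P2,
# so the logarithm turns the product into a sum and the k_B * ln(P) part is additive.
P1, P2 = 1e-6, 1e-9        # arbitrary assumed microstate probabilities
joint = entropy(P1 * P2)
parts = entropy(P1) + entropy(P2)
print(f"S(P1*P2)    = {joint:.6e} J/K")
print(f"S(P1)+S(P2) = {parts:.6e} J/K")
print("additive:", math.isclose(joint, parts))
```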

IV. INFORMATION THEORY

Claude Shannon [7] introduced the mathematical concept of the "entropy" of a random variable or sequence, called H. He defined information theory in a narrow way [7]: "In Information Theory, information has nothing to do with knowledge or meaning. In the context of Information Theory, information is simply that which is transferred from a source to a destination, using a communication channel. If, before transmission, the information is available at the destination then the transfer is zero. Information received by a party is that what the party does not expect – as measured by the uncertainty of the party as to what the message will be."

Preceded by the efforts of Szilard [8], who in 1929 identified the unit or "bit" of information when dealing with entropy and the Maxwell's Demon problem in physics, by Hartley in 1928 [9], and by Nyquist in 1924 [10], Shannon took a different approach than just positing a behavior for information, as commented in [11]. Shannon's zeroth contribution [11] was to recognize that unless he arrived at a simplified but

real-world (valid in nature) model of information to be used in the electronic world, no logically useful information model could be set forth.

Shannon entropy H is given by the formula:

H = − Σ_i p_i log2 p_i     (3)

where p_i is the probability of character number i appearing in the stream of characters of the message. The formal equivalence with eq. (2), the entropy of statistical thermodynamics, should strike the reader as a common pattern to be further investigated, as done in the following.
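As a minimal sketch of eq. (3), assuming an arbitrary example message, H can be evaluated per character and compared with the uniform-distribution maximum over the same alphabet; the message string and the function name are illustrative assumptions.

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Eq. (3): H = -sum_i p_i * log2(p_i), in bits per character."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

if __name__ == "__main__":
    msg = "information is that which is transferred from a source to a destination"
    alphabet = set(msg)
    print(f"H(message) = {shannon_entropy(msg):.3f} bits/char")
    # For comparison: a uniform distribution over the same alphabet maximizes H.
    print(f"H(uniform) = {math.log2(len(alphabet)):.3f} bits/char over {len(alphabet)} symbols")
```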

This defined a way to correlate information theory (i.e., electric signal statistical distribution) with the statistical entropy from thermodynamics, with the intuition that it measures how well behavior can be predicted.

This intuition by Shannon, together with his remarkable proof of existence for universal encoders, was followed by his Tenth Theorem, that there should be an algorithm for predicting the next symbol in a sequence, which can be made as accurate as desired: close to 100%, on an open interval. Users, e.g., of a cell phone, may not see this visibly [12], but every bit is thus predicted, and corrected, achieving error-free communication. In a hard drive, every read/write operation has a probability of errors, where the symbols are predicted and corrected, also achieving error-free communication. As the reader can verify, the same version of this paper is read from the same file each time, although there are random electron events in between in the hard drive, wires, and screen, which could garble the output. In day-to-day life, we experience a "clear" picture, not garbled documents, mumbled voice, or fuzzy images, which would certainly result if such prediction/correction did not exist. The reality we experience is "perfect enough" not because there are no errors to correct, but because all relevant errors have been corrected: we achieve precise results through imprecise elements, corrected statically or dynamically, using Shannon's Information Theory.

Shannon's Tenth Theorem also provides evidence of breaking the "arrow of time," even as to what symbol the author will type or mistype next. The future is, under very general rules, knowable to some extent. The same is true for a Taylor series in mathematics, where higher topology homotopy, a measure of predictive power, is conjectured [13] to correlate with higher-order derivatives.

Thus, the results using Shannon's Tenth Theorem in information theory [7] afford error-free transmission of signals, where the signal loss can approach zero, as low as desired. The Blue hypothesis is not predictive of these observations, and is further denied.

V. CHANCE AND GAME THEORY

The information theory definition of entropy H has the further advantage of being usable in chance considerations, on maximum Shannon entropy versus minimum risk and applications to some classical discrete distributions [14], and in game theory, in games where one can win or lose.

Mass or energy are not relevant here, only chance. This eliminates the concern expressed in a previous section, from a possible, yet theoretically unknown, limitation of materials. In game theory, one can theoretically win any betting game through the martingale strategy [15].

"This betting strategy has you doubling your bet every time you lose a hand. When you finally win a hand, your bet returns to your minimum and you recover your losses with one win." [15]

The martingale strategy has been applied to card games and roulette, where the probability of hitting either red or black is close to 50%. It works on limited runs, but seems to fail if the gambler does not have some "infinite" time and "infinite" wealth, which is a mathematically valid argument, but not in physics, in the "Red Level" real world, where money itself must be finite, like any resource. Rather, a gambler should be spending money securely (without worry, as he is sure to win, leaving the game after just a single win), not boisterously playing yet another hand and saying, "I'm lucky!"

To make matters more precise, let us consider a bet of $1 on one drawing of event A, with a 1:1 payoff and P(A) = 40%, where P(A) is the probability of winning on event A. It would seem that one most likely loses. However, consider two events in a row, where the probability of an event is independent of the draw order (no-history assumption: ergodic, the same chance in every event), and where player Z follows a martingale strategy and ends the game unilaterally (e.g., walks away) after a single win. The probability of player Z winning the first time is 40%, ending the game with $1 profit; losing first and winning second has a chance of 24%, where player Z ends the game with $1 profit; losing both has a probability of 36%, and is deferred to a next round, as many rounds as needed until a win occurs, when winning once recoups all the intermediary losses,

with player Z ending the game with $1 profit.

Using the martingale strategy, player Z ends with a losing probability in the open neighborhood of zero, meaning both the thermodynamic entropy S and the Shannon entropy H approach as close to zero as desired. One can even lose on all but one bet, and still end with $1 profit in all cases. The game can be repeated in parallel and in series, and $1 could be $1M, or any amount desired. With sufficient capital (always possible; money divergence is not a physical occurrence in reality, although considered in mathematics), usually using other people's money as leverage, player Z can defer a loss until its probability is small (being just the compound of the previous results), allowing enough time for player Z to win, with increasing probability, as close to 100% as needed.

It is not that each successive failure has to be reduced in probability by some "magic," relaxing to a result; there is a "no history" assumption, and one can have a string of failures at any time. This is often an argument against using a martingale strategy [15]. But, as one proceeds into the strategy, under the conditions noted above, parallel success channels start to open up, at the rate of 1, 3, 7, 15, ... (see the previous paragraphs), while there is no competing growth in the number of failure channels, which remains at a single value. This "path engineering," chosen by player Z, adds up to the success probability tending to 1, while the total probability (success and failure) must remain at 1, squeezing the only failure channel's probability to zero, as closely as desired. For example, one already reduces the probability of losing from 60% to 36% in two steps, to 13% in four steps, and to 1.7% in eight steps, at the simple investment of more capital, while the capital is secured by playing again if needed, all but assuring a return of 100% with a single win.

This strategy is not openly allowed in casinos [15]. But it can be used covertly, and even used openly in capital markets with instruments such as financial derivatives, with the additional tax and "merit" advantages of betting being legally disguisable as hedging [16], attracting investors. Thus, the Blue hypothesis is not predictive of observations here as well, and is further denied.

In thermodynamic terms, the microscopic uncertainty, viewed as entropy, stays the same, but the path can change, leading to a desired outcome. In physics, we can support the following conjecture (e.g., by Arieh Ben-Naim [17]):

∆S = entropy := uncertainty     (4)

to link the behavior of macroscopic systems to the dynamical laws governing their microscopic parts, linking the thermodynamical entropy S with Shannon's entropy H, as done in this work. This could apply to the current discussions on the "arrow of time" [2], and to reversible thermodynamic processes, to be pursued elsewhere. There is a further strategy, leading to even higher probable gains in a shorter time, automatically, to be discussed next, which can also be of interest in physics.
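As a minimal sketch of the arithmetic above, assuming the same win probability p = 0.4, a 1:1 payoff, and the doubling rule, the quoted loss probabilities (60%, 36%, 13%, 1.7%) and the fixed $1 profit on the first win can be checked directly; the function names and the bankroll bookkeeping are illustrative assumptions.

```python
# Sketch of the example above: each round is won with p = 0.4 (1:1 payoff),
# the stake doubles after every loss, and player Z walks away after the first win.

def probability_still_losing(p_win: float, rounds: int) -> float:
    """Probability that no win has occurred after `rounds` independent rounds."""
    return (1.0 - p_win) ** rounds

def profit_if_first_win_at(k: int, base_bet: float = 1.0) -> float:
    """Net profit when the first win happens on round k (stakes 1, 2, 4, ...)."""
    losses = sum(base_bet * 2 ** i for i in range(k - 1))  # total lost on rounds 1..k-1
    win = base_bet * 2 ** (k - 1)                          # stake (and payoff) on round k
    return win - losses                                    # always +base_bet

if __name__ == "__main__":
    p = 0.40
    for n in (1, 2, 4, 8):
        print(f"P(no win after {n} rounds) = {probability_still_losing(p, n):.1%}")
    for k in (1, 2, 5, 10):
        print(f"first win on round {k:2d}: profit = ${profit_if_first_win_at(k):.0f}, "
              f"total staked so far = ${2 ** k - 1}")
    # The probability of still losing shrinks as 0.6**n, while the capital at risk
    # grows as 2**k - 1, which is the finite-money caveat noted in the text.
```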

VI. BLACK-SCHOLES FORMULA

It is possible to reduce risk to near zero by dynamic hedging (e.g., the Black-Scholes formula [18, 19], recognized by the 1997 Nobel Prize), where one bets on both sides, pro and con, and one may break even if all fails. Presumably, one can collect a fee that offsets expenses, so one has a profit even in the case where all bets are losses. As Jürgen Franke et al. [20] explain, "Due to a dynamic hedge-strategy the portfolio bears no risk at any time." This means that losses due to stocks do occur, but are neutralized by profits due to the calls.

The formula is derived under the assumption that the time interval between observations is very small, and that the log prices follow a Brownian walk with normally distributed innovations. The formula is not affected by any linear drift in the random walk. Usually, the formula [18] is expressed as:

C = N(d1) · S − N(d2) · K · e^(−rt),

d1 = [ln(S/K) + (r + s²/2) · t] / (s · √t),

d2 = d1 − s · √t     (5)

where C is the call premium, while the other variables and conditions are as in [18], off-topic to quote here and of no immediate importance.

Therefore, one can always win when dynamically hedging properly, which can be seen as "disguised betting." That disguise, which would be factually illegal in the US, is where a covert deceit can be added, leading to further monetary wins: the US tax code could see it as hedging, but the investors see it as "legalized" betting. On 05/14/2018, however, the US Supreme Court ruled PASPA unconstitutional, allowing sports betting to be legalized. The intent to deceive can be hidden not only in betting versus hedging, but also in passing along the financial derivative as something credible, which the buyer has no way to verify, although the buyer would

be the legal relying party [19]. One could ask, "Doesn't this require that one has different odds depending on which outcome one is betting on? If the odds are the same both ways, one seems to break even, except for the overhead." [5] The overhead is payable as expenses, so it is never a loss for the agent. In general, if there is no conflict, there is no interest, and one can simply not hedge when there is no conflict. So one should pick cases where the odds are somewhat unbalanced, even unfavorable, to maximize winning. This is a financial oversimplification, as one also has to take into account leveraging and other factors. If one is using OPM (other people's money, with its cash relief), then losing is not exactly losing: one can write it off and get a tax break over the years, although many people may become losers of large sums.

"However, with finite runs the house wins on the average." [5] That is a popular myth, but it is incorrect. With dynamic hedging, anyone (the house, or anyone else with enough capital) can win every time against other players, and never actually lose. One can walk out with a large profit, even after a single win or, if one loses, try another place, and do so recursively. The "second" place (and/or stock) does not know where the person came from, and the strategy can continue without detection, assuring fault-free operation and deniability. Thus, the strategy can remain covert, at will.

This technique is used today in the stock market because it gives results, notwithstanding the loss it causes to others. It can be seen as a savage idea, even perverse in human terms, yet it seems to fit within the Blue Level reality; it needs clueless payers, fools who mostly lose. And those who mostly lose will tend to confirm qualitatively the Blue option, their very presence and anecdotes confirming the real existence of those others who mostly win. This is the reality, however. It is faced by all, hence it is objective [21–23]. Even the savvy players can be "gamed," equalizing the playing field. There is no final watcher, in "who watches the watcher?"; everyone may be outplayed, outfoxed, beaten by someone else. And the world, as a whole, is not in one game, nor in a particular zero-sum game. New resources are found all the time, new technologies developed, and so on; this expansion creates new markets, new consumers, and new opportunities.

Thus, the Blue hypothesis is not predictive of observations in this case as well. Anyone can win as

close to 100% as desired; one just has to follow the rules, and the hedging techniques disclosed here, to eventually win a profit, even in a short time. The Red hypothesis provides for physically significant winning outcomes, if not today, then tomorrow.
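As a minimal sketch of eq. (5) above, assuming illustrative values for the spot S, strike K, rate r, volatility s, and time t, the call premium C can be evaluated with the standard normal CDF N(·) computed from the error function; the variable names match eq. (5), everything else is an assumption for illustration.

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF N(x), via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S: float, K: float, r: float, s: float, t: float) -> float:
    """Call premium C of eq. (5): C = N(d1)*S - N(d2)*K*exp(-r*t)."""
    d1 = (math.log(S / K) + (r + 0.5 * s ** 2) * t) / (s * math.sqrt(t))
    d2 = d1 - s * math.sqrt(t)
    return norm_cdf(d1) * S - norm_cdf(d2) * K * math.exp(-r * t)

if __name__ == "__main__":
    # Illustrative inputs: spot S, strike K, risk-free rate r, volatility s, time t in years.
    C = black_scholes_call(S=100.0, K=105.0, r=0.02, s=0.20, t=0.5)
    print(f"call premium C = {C:.2f}")
```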

VII. BIOLOGY, PSYCHOLOGY, AND LITERATURE

Qualitative reasoning, although not explicitly part of physics, may be useful in the theoretical part, adding uncertainty of two quite different, albeit unknown, components: what we know we ignore, and what we ignore we ignore [23]. This will also help reduce bias. In this vision, science is not a collection of immutable facts, but evolving sets of NOT YET FALSE ("true"), MAYBE FALSE ("false"), and WHO KNOWS? [23]

Regarding biology, something (more) comes out of a lesser set, as a family of ten living farmers comes out of their two parents; more life seems to come out of less, to multiply. There are more chickens, dogs, and cats too. We can go into reported cases, such as in the medical literature of East and West, against the "arrow of time," of remembering the future, as in the cases of foreknowledge or déjà vu, if we accept the evidence.

In terms of psychology, thinking that "life's choices are always against you" can be a psychosis, a mental illness characterized by loss of contact with reality. One could become psychotic and believe that life is hopeless, perverse, that there is some sort of worldwide conspiracy against oneself. No one is that important! Thus, the negation of the antecedent can be inferred. Also, observing Nature, the Sun does not hide itself in the sunset in order to allow for crimes, nor does it rise to reveal crimes! Aristotle remarked the same in relation to rain and seeds: rain does not fall in order to make seeds grow, or to spoil seeds when laid outdoors to dry. On balance, one has a fair shot: if not now, then later, or for someone else.

Literature may be viewed as a recursive, multivalued, group computation of a final state, given an initial state and sufficient time. Speakers of the same natural language communicate with one another, in recursive group expressions. They trade contents, not uninterpreted strings of symbols. In other words, one expects that there must be communicable content which is conveyed in discourse, especially of a literary nature. Here, we find a positive use for the Blue option, irrespective of its truth value. As a literary expression, it could be a valid vehicle for expressing angst, of some action-reaction to what seems unavoidable, of

consequents that must remain a permanent disaster, of unsolvable situations. However, notwithstanding the value of its literary exploration, the world does not end in an abyss, as feared by early sea explorers and reported in novels. Further, as Shakespeare wrote in Julius Caesar (in modern paraphrase), "Men can control their destinies. The reason that we are oppressed, dear Brutus, is not a matter of fate, but because we don't do anything about it."

VIII. CONCLUSIONS

Physics does not consider Nature to be without direction [3], willy-nilly, nor does it consider metaphysical planning. But is Nature truly random? Can one choose the end-points and/or the intermediate steps, to reach 100% efficiency, even though each step must be less efficient than 100%? In this work we attempt to answer these and other questions related to objectivity and efficiency in physics, and to further clarify the role of physics in understanding Nature.

In two limiting cases, called Blue for external appearance and Red for underlying reality, this work reports critically, but philosophically, on a collective interpretation of thermodynamics. Contrary to the Blue option, this work shows, using a diversity of macro and micro methods, that the more likely result, as observed, is that one can win with proper encoding (e.g., a martingale strategy, proper dynamic hedging, or a correction channel with enough capacity), and win as close to 100% as desired. This is the Red option, the observer-independent reality.

Current material limitations notwithstanding, this work finds no basic thermodynamic limitation in opposition to objectivity in science: there is an absolute rule of right versus wrong, and of mathematical ordering in between, in physics [23, 24]; right is what mostly works, and wrong is what mostly does not. Both, as shown here, can be used, objectively, to "always win," as closely as desired, even taking into account local rules, such as tax-code differences on hedging versus betting.

Thus, using the various lenses of classical thermodynamics, statistical thermodynamics, information theory, game theory, mathematical finance, biology, psychology, and literature, in the words of human wisdom, e.g., Shakespeare, we find in all cases evidence for the Red option as predictive of experiments, and evidence against the Blue option in anything but literary fiction. This arches back to Bernard d'Espagnat's [3] question, "Is reality something meaningful and is science steadily coming closer to a true depiction of it?", with a positive answer, in both qualitative and quantitative physics terms, that reality is not "fuzzy" or willy-nilly.

Notwithstanding that we may never know all the relevant causes, our conclusion, through the various cases, is that: (a) Nature supports 100% efficiency, recursively and as closely as desired, with no basic material limitation, which we demonstrated with macro-systems, leading to questions on the "arrow of time," and may look superficially like chaos, chance; rather than (b) that Nature acts in a truly random way. In summary, Nature may superficially look chaotic but does not seem to act in a random way.

In the words of Richard P. Feynman, "We do not know what the rules of the game are; all we are allowed to do is to watch the playing. Of course, if we watch long enough, we may eventually catch on to a few of the rules. The rules of the game are what we mean by fundamental physics."

ACKNOWLEDGMENTS: The author is thankful for comments by Kenneth B. Cheney and Miriam K. Hartman of Pasadena City College, for motivation and academic guidance by Ed Gerck, and for anonymous reviewers and comments online, directly or at ResearchGate. Any remaining errors are mine, and suggestions are welcome.

[1] G. Breit and J. A. Wheeler. Collision of two light quanta. Phys. Rev., 46:1087–1091, 1934.
[2] Sean Carroll. The Arrow of Time. Engineering and Science, 73(1):20–25, 2010.
[3] B. d'Espagnat. Quantum Physics and Reality. Foundations of Physics, 41:1703–1716, November 2011.
[4] J. W. Jewett and R. A. Serway. Physics for Scientists and Engineers with Modern Physics. Thomson Brooks/Cole, 2008.

[5] Kenneth B. Cheney. Dept. of Natural Sciences, Pasadena City College, California, USA. Private communications, 2017.
[6] J. Kestin. A Course in Statistical Thermodynamics. Elsevier Science, 2012.
[7] C. E. Shannon. A mathematical theory of communication. Bell System Technical Journal, 27:623–656, 1948.
[8] L. Szilard. On the Decrease of Entropy in a Thermodynamic System by the Intervention of Intelligent Beings. Zeitschrift für Physik, 53:840–856, 1929.
[9] R. V. L. Hartley. Transmission of Information. Bell System Technical Journal, page 535, July 1928.
[10] H. Nyquist. Certain Factors Affecting Telegraph Speed. Bell System Technical Journal, page 324, April 1924.
[11] E. Gerck. Certification: Intrinsic, Extrinsic and Combined. ResearchGate, https://www.researchgate.net/publication/286459966_Certification_Extrinsic_Intrinsic_and_Combined, 1997.
[12] Miriam K. Hartman. Dept. of Natural Sciences, Pasadena City College, California, USA. Private communications, 2018.
[13] Edgardo V. Gerck. Homotopy Topology Conjecture Regarding the Higher-Order Derivative Terms in the Taylor Series. ResearchGate, https://www.researchgate.net/publication/323257859_Homotopy_Topology_Conjecture_Regarding_the_Higher-Order_Derivative_Terms_in_the_Taylor_Series, 2017.
[14] F. Topsøe. Maximum entropy versus minimum risk and applications to some classical discrete distributions. IEEE Trans. Inform. Theory, 48(8):2368–2376, Aug. 2002.
[15] Nikki Katz. The Book of Card Games: The Complete Rules to the Classics, Family Favorites, and Forgotten Games. Adams Media, 2013.
[16] Stefan Andreev. Quanto Credit Hedging, Lecture 23, MIT OpenCourseWare, 2013.

[17] Arieh Ben-Naim. A Farewell to Entropy. World Scientific Publishing Co. Pte. Ltd., 2008.
[18] F. Black and M. Scholes. The Pricing of Options and Corporate Liabilities. J. of Political Economy, 81:637–654, 1973.
[19] R. C. Merton. Theory of Rational Option Pricing. Bell J. of Econ. and Mgt. Sci., 4:141–183, 1973.
[20] J. Franke, W. Härdle, and C. M. Hafner. Black-Scholes Option Pricing Model. In Statistics of Financial Markets. Universitext. Springer, Berlin, Heidelberg, 2004.
[21] E. Gerck. On ABSTRACT, OBJECTIVE, SUBJECTIVE and INTERSUBJECTIVE Modes. https://www.researchgate.net/publication/318661666_On_ABSTRACT_OBJECTIVE_SUBJECTIVE_and_INTERSUBJECTIVE_Modes, 1999.
[22] E. Gerck. Classification of membership as SUBJECTIVE: individual, one observer; INTERSUBJECTIVE: group agreement, two or more independent observers in agreement; OBJECTIVE: cannot be changed or set, many independent observers in agreement; and ABSTRACT: with no observers or entities, only relationships. Other modes are possible, more than these four. 2018.
[23] E. Gerck. Science and the Search for Truth: The Scientific Method. http://olli.ucsd.edu/videos/videoPlayer.cfm?vid=200267324, 2017.
[24] Not to be confused with local rules of right vs. wrong, e.g., in politics, societal rules, language, ethics, law, or tax codes.