Nonequilibrium Thermodynamics of Chemical Reaction Networks: Wisdom from Stochastic Thermodynamics Riccardo Rao and Massimiliano Esposito

arXiv:1602.07257v1 [cond-mat.stat-mech] 23 Feb 2016

Complex Systems and Statistical Mechanics, Physics and Materials Science Research Unit, University of Luxembourg, L-1511 Luxembourg, Luxembourg
(Dated: February 24, 2016)

We build a rigorous nonequilibrium thermodynamic description for open chemical reaction networks of elementary reactions. Their dynamics is described by deterministic rate equations satisfying mass action law. Our most general framework considers open networks driven by time-dependent chemostats. The energy and entropy balances are established and a nonequilibrium Gibbs free energy is introduced. The difference between the latter and its equilibrium form represents the minimal work done by the chemostats to bring the network to its nonequilibrium state. It is minimized in nondriven detailed-balanced networks (i.e. networks which relax to equilibrium states) and has an interesting information-theoretic interpretation. We further show that the entropy production of complex-balanced networks (i.e. networks which relax to special kinds of nonequilibrium steady states) splits into two non-negative contributions: one characterizing the dissipation of the nonequilibrium steady state and the other the transients due to relaxation and driving. Our theory lays the path to the study of time-dependent energy and information transduction in biochemical networks.

PACS numbers: 05.70.Ln, 87.16.Yc

I. INTRODUCTION

Thermodynamics of chemical reactions has a long history. The second half of the XIX century witnessed the dawn of the modern studies on thermodynamics of chemical mixtures. It is indeed at that time that J. W. Gibbs introduced the concept of chemical potential and used it to define the thermodynamical potentials of non-interacting mixtures [1]. Several decades later, this enabled T. de Donder to approach the study of chemically reacting mixtures from a thermodynamical standpoint. He proposed the concept of affinity to characterize the chemical force irreversibly driving chemical reactions and related it to the thermodynamical properties of mixtures established by Gibbs [2]. I. R. Prigogine, who perpetuated the Brussels School founded by de Donder, introduced the assumption of local equilibrium to describe irreversible processes in terms of equilibrium quantities [3, 4]. In doing so, he pioneered the connections between thermodynamics and kinetics of chemically reacting mixtures [5]. During the second half of the XX century, part of the attention moved to systems with small particle numbers, which are ill-described by "deterministic" rate equations. The Brussels School, as well as other groups, produced various studies on the nonequilibrium thermodynamics of chemical systems [6–12] using a stochastic description based on the (Chemical) Master Equation [13, 14]. These studies played an important role during the first decade of the 21st century for the development of Stochastic Thermodynamics, a theory that systematically establishes a nonequilibrium thermodynamical description for systems obeying stochastic dynamics [15–18], including chemical networks [19–23]. Another significant part of the attention moved to the thermodynamical description of biochemical reactions in terms of deterministic rate equations [24, 25]. This is not

so surprising, since living systems are the paramount example of nonequilibrium processes and they are powered by chemical reactions. The fact that metabolic processes involve thousands of coupled reactions also emphasized the importance of a network description [26–28]. While complex dynamical behaviors such as oscillations were studied in small chemical networks [29, 30], most studies on large biochemical networks focused on the steady-state dynamics. Very few studies considered the thermodynamical properties of chemical networks [31–33]. The first nonequilibrium thermodynamical description of open biochemical networks was proposed in Ref. [34]. However, it did not take advantage of Chemical Reaction Network Theory, which connects the network's topology to its dynamical behavior and which was extensively studied by mathematicians during the seventies [35–37] (this theory was also later extended to stochastic dynamics [38–41]). The first and single study that related the nonequilibrium thermodynamics of chemical networks to their topology is Ref. [23], which still restricts itself to steady states. In this paper, we consider the most general setting for the study of chemical networks, namely open networks driven by chemostatted concentrations which may change over time. In this way, steady-state properties as well as transient ones are captured. Hence, in the same way that Stochastic Thermodynamics is built on top of stochastic dynamics, we systematically build a nonequilibrium thermodynamical description of chemical reaction networks on top of deterministic chemical rate equations. In doing so, we establish the energy and entropy balance and introduce the nonequilibrium entropy of the chemical network as well as its nonequilibrium Gibbs free energy. We show this latter to bear an information-theoretical interpretation similar to that of stochastic thermodynamics [42–44] and to be related to the dynamical potentials derived by mathematicians. We also show

the relation between the minimal chemical work necessary to manipulate the chemical networks far from equilibrium and the nonequilibrium Gibbs free energy. Our theory embeds both the Prigoginian approach to thermodynamics of irreversible processes [5] and the thermodynamics of biochemical reactions [24]. Making full use of the mathematical Chemical Reaction Network Theory, we further analyze the thermodynamical behavior of two important classes of chemical networks: detailed-balanced networks and complex-balanced networks. In the absence of time-dependent driving, the former converge to thermodynamical equilibrium by minimizing their nonequilibrium Gibbs free energy. Instead, the latter converge to a specific class of nonequilibrium steady states and always allow for an adiabatic–nonadiabatic separation of their entropy production, which is analogous to that found in stochastic thermodynamics [45–47]. We note that while finalizing this paper, a result similar to the latter was independently found in Ref. [48].

Plan and Notation

The plan of this paper is as follows. After introducing the necessary concepts of chemical kinetics and Chemical Reaction Network Theory in sec. II, the nonequilibrium thermodynamical description is established in sec. III. As in Stochastic Thermodynamics, we build it on top of the dynamics and formulate the entropy and energy balance, (§ III D) and (§ III E). Chemical work and nonequilibrium Gibbs free energy are also defined, and the information-theoretic content of the latter is discussed. The special properties of detailed-balanced and of complex-balanced networks are considered in sec. V and IV, respectively. Conclusions and perspectives are drawn in sec. VI, while some technical derivations are detailed in the appendices. We now proceed by fixing the notation. We consider a system made of reacting chemical species. Each chemical species S_σ is identified by an index σ ∈ S, where S is the set of all indices/species.
Species populations change according to reactions. For thermodynamical consistency, we impose that for each reaction the backward reaction also exists. We refer to a pair of forward–backward reactions as a reaction pathway. We label the reaction pathways by ρ ∈ R, where R represents the set of reaction pathways. Given an arbitrary orientation on R, we refer to the forward (backward) reaction as +ρ (−ρ). Hence, a generic chemical reaction network is represented as

$$ \sum_\sigma \nabla^\sigma_{+\rho}\, \mathrm{S}_\sigma \;\overset{k^{+\rho}}{\underset{k^{-\rho}}{\rightleftharpoons}}\; \sum_\sigma \nabla^\sigma_{-\rho}\, \mathrm{S}_\sigma \,, \qquad (1) $$
where the integer coefficients ∇^σ_{+ρ} (∇^σ_{−ρ}), the so-called stoichiometric coefficients, take into account the stoichiometric number of species S_σ involved in the forward (backward) reaction. These coefficients form two matrices: ∇_+ = {∇^σ_{+ρ}} and ∇_− = {∇^σ_{−ρ}}. The reason for the choice of the symbol "∇" will become clear later. An example of a chemical network is given in fig. 1. Physical quantities associated with species and reactions are represented in contravariant–covariant vectorial notation. Covariant and contravariant quantities have the

[Figure: the closed network S1 ⇌ 2 S2 + S3 (rate constants k^{+1}, k^{−1}) and S3 + S4 ⇌ S5 (rate constants k^{+2}, k^{−2}), enclosed in a box labeled "system" within the "environment".]

FIG. 1: Representation of a closed chemical reaction network. The chemical (reacting) species are {S1, …, S5}. The two reaction pathways are labeled 1 and 2. The nonzero stoichiometric coefficients are ∇^1_{+1} = 1, ∇^2_{−1} = 2 and ∇^3_{−1} = 1 for the first reaction, and ∇^3_{+2} = 1, ∇^4_{+2} = 1 and ∇^5_{−2} = 1 for the second one. Since the network is closed, no chemical species is exchanged with the environment.

same physical values. We use Einstein summation notation: repeated contravariant–covariant indices imply a summation over all the allowed values of those indices—e.g. σ ∈ S for species and ρ ∈ R for reactions. Given two arbitrary vectorial quantities X = {X^i} and Y = {Y^i}, the following product notation will also be used:

$$ X^{Y} \equiv \prod_i \left( X^i \right)^{Y_i} . $$

Finally, given a matrix A with elements {A^i_j}, the elements of the transposed matrix A^T are {A^j_i}. The generic steady-state value of any physical quantity X is denoted by an overbar, X̄, while its equilibrium value is denoted by X_eq or X^eq.

II. DYNAMICS OF CHEMICAL NETWORKS

In this section we formulate the mathematical description of chemical networks [49, 50] from a more physical and thermodynamical perspective. We start by considering closed chemical networks, namely systems that do not exchange matter with the environment, and we then move to open ones. We add time-dependent driving to the standard description of these networks. This means that the action of the environment changes over time according to some externally controlled time-dependent protocol. We then define conservation laws in chemical networks and study the dynamics of two important classes of chemical networks: detailed-balanced networks and complex-balanced networks. The thermodynamics of these networks is particularly rich and will be discussed in sec. V and IV, respectively. We consider a chemical system in which the reacting species S_σ are part of a homogeneous and ideal solution: The solvent is much more abundant than the reacting

species. The temperature and the pressure are kept constant. The species abundances are high enough that the discreteness of the molecules can be neglected. The reactions proceed slowly compared to diffusion, so that homogeneity is maintained over time. The system state at time t is thus well described by the molar concentration distribution {Z^σ}, where Z^σ ≡ N^σ/V, N^σ being the number of moles of S_σ and V the reaction volume. We assume that the reaction mechanisms are elementary. Hence, the rate functions J^{±ρ}({Z^σ}), which measure the rates of occurrence of the reactions, satisfy the mass action law [49, 51, 52], i.e.

$$ J^{\pm\rho} \equiv J^{\pm\rho}(\{Z^\sigma\}) = k^{\pm\rho} \prod_\sigma \left( Z^\sigma \right)^{\nabla^\sigma_{\pm\rho}} \,, \qquad (2) $$

where k^{±ρ} are positive coefficients called rate constants. In simple terms, each rate function is proportional to the product of the concentrations of its reactant species, while the rate constants k^{±ρ} quantify the probability of the underlying microscopic scattering processes and the quantum-mechanical reaction mechanism. The concentration fluxes along the reaction pathway ρ are measured by the currents, whose expression is

$$ J^{\rho} \equiv J^{+\rho} - J^{-\rho} = k^{+\rho} \prod_\sigma \left( Z^\sigma \right)^{\nabla^\sigma_{+\rho}} - k^{-\rho} \prod_\sigma \left( Z^\sigma \right)^{\nabla^\sigma_{-\rho}} \,. \qquad (3) $$
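As a quick numerical illustration (a sketch, not part of the paper; the concentrations and rate constants are arbitrary dimensionless values), the mass action rate functions (2) and the currents (3) for the network of fig. 1 can be evaluated with a few lines of NumPy:

```python
import numpy as np

# Stoichiometric coefficients of the fig. 1 network,
#   S1 <-> 2 S2 + S3   and   S3 + S4 <-> S5.
# Rows: species S1..S5; columns: reaction pathways 1, 2.
nabla_plus  = np.array([[1, 0], [0, 0], [0, 1], [0, 1], [0, 0]])  # forward reactants
nabla_minus = np.array([[0, 0], [2, 0], [1, 0], [0, 0], [0, 1]])  # forward products

def rates(Z, k_fwd, k_bwd):
    """Mass action rate functions, eq. (2): J^{+-rho} = k^{+-rho} prod_s (Z^s)^{coeff}."""
    J_fwd = k_fwd * np.prod(Z[:, None] ** nabla_plus,  axis=0)
    J_bwd = k_bwd * np.prod(Z[:, None] ** nabla_minus, axis=0)
    return J_fwd, J_bwd

# Arbitrary (illustrative) concentrations and rate constants.
Z = np.array([1.0, 0.5, 2.0, 1.0, 0.2])
k_fwd, k_bwd = np.array([1.0, 2.0]), np.array([0.5, 1.0])

J_fwd, J_bwd = rates(Z, k_fwd, k_bwd)
J = J_fwd - J_bwd        # currents, eq. (3)
```

For these numbers, J^{+1} = k^{+1} Z^1 and J^{−1} = k^{−1} (Z^2)^2 Z^3, so the first current is 1.0 − 0.25 = 0.75.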

We are now in a position to relate the currents to the changes of the species' concentrations.

A. Closed Chemical Networks

Closed chemical networks do not exchange chemical species with the environment. Hence, the species' concentrations can vary only because of the reaction mechanisms. To obtain a measure of the species formation rates—i.e. the rates of appearance of the species in the system due to the reactions—we need the information about the species' stoichiometric changes in each reaction. This information is given by the stoichiometric matrix ∇ ≡ ∇_− − ∇_+, whose elements are

$$ \nabla^\sigma_\rho \equiv \nabla^\sigma_{-\rho} - \nabla^\sigma_{+\rho} \,. \qquad (4) $$

We can thus write the differential equations governing the species' concentrations, i.e. the rate equations [49], as

$$ \dot{Z}^\sigma = \nabla^\sigma_\rho\, J^\rho \,, \qquad \forall\,\sigma\in S \,, \qquad (5) $$

where the right-hand side is the formation rate of species σ. Due to the mass action law (2), the rate equations are highly nonlinear, and complex dynamical behaviors can emerge [30]. The rate equations (5) can also be thought of as a continuity equation for the mass, which explains the use of the symbol "∇" for the stoichiometric matrix [53].

[Figure: the network of fig. 1 with S1 and S5 coupled to reservoirs in the environment.]

FIG. 2: Representation of an open chemical network. The green boxes represent the reservoirs of chemostatted species.

Example—The stoichiometric matrix of the chemical network in fig. 1 is

$$ \nabla = \begin{pmatrix} -1 & 0 \\ 2 & 0 \\ 1 & -1 \\ 0 & -1 \\ 0 & 1 \end{pmatrix} , \qquad (6) $$

and the currents are

$$ J^1 = k^{+1} Z^1 - k^{-1} (Z^2)^2 Z^3 \,, \qquad J^2 = k^{+2} Z^3 Z^4 - k^{-2} Z^5 \,. \qquad (7) $$
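To make the closed-network dynamics concrete, here is a minimal sketch (arbitrary rate constants and initial concentrations, and a crude Euler scheme rather than a production ODE solver) that integrates the rate equations (5) for the fig. 1 network:

```python
import numpy as np

# Stoichiometric matrix of eq. (6) for the fig. 1 network.
nabla = np.array([[-1,  0],
                  [ 2,  0],
                  [ 1, -1],
                  [ 0, -1],
                  [ 0,  1]], dtype=float)

def currents(Z, k_fwd, k_bwd):
    """Reaction currents of eq. (7)."""
    J1 = k_fwd[0] * Z[0]        - k_bwd[0] * Z[1]**2 * Z[2]
    J2 = k_fwd[1] * Z[2] * Z[3] - k_bwd[1] * Z[4]
    return np.array([J1, J2])

# Rate equations (5), dZ^s/dt = nabla @ J, forward-Euler integration.
k_fwd = np.array([1.0, 1.0])
k_bwd = np.array([1.0, 1.0])
Z = np.array([2.0, 0.5, 1.0, 1.5, 0.5])   # arbitrary initial concentrations
dt = 2e-3
for _ in range(100_000):
    Z = Z + dt * nabla @ currents(Z, k_fwd, k_bwd)
```

Being closed and built from elementary reactions, the network relaxes to equilibrium: the currents decay to zero, while mass-like combinations such as Z^1 + Z^2/2 and Z^4 + Z^5 (conserved by the reactions, see § II C) keep their initial values, here 2.25 and 2.0.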

B. Driven Open Chemical Networks

In open chemical networks, matter exchange with the environment is allowed. The environment is coupled to the system via reservoirs which control the concentrations of some specific species, fig. 2. We refer to these externally controlled species as chemostatted. Among the species, the chemostatted ones are denoted by the indices σ_y ∈ S_y, and the internal ones by σ_x ∈ S_x (S ≡ S_x ∪ S_y). Also, the part of the stoichiometric matrix related to the internal (resp. chemostatted) species is denoted by ∇_X = {∇^{σ_x}_ρ} (resp. ∇_Y = {∇^{σ_y}_ρ}). We add time-dependent driving to this dynamical framework. In driven chemical networks, the chemostatted concentrations change over time according to some time-dependent protocol τ_t: {Z^{σ_y} = Z^{σ_y}(τ_t)}. In nondriven open chemical networks, the chemostatted species have constant concentrations, i.e. Ż^{σ_y} = 0. The dynamical equation for the chemostatted concentrations must take into account both the changes due to the internal reactions and the changes due to the exchange of species with the environment. The system–environment matter exchange is measured by the external currents {I^{σ_y}}, which represent the rates at which chemostatted species enter the system (positive if chemostatted species enter the system). Therefore, the rate equations for the chemostatted species read

$$ \dot{Z}^{\sigma_y} = \nabla^{\sigma_y}_\rho\, J^\rho + I^{\sigma_y} \,, \qquad \forall\,\sigma_y\in S_y \,, \qquad (8) $$

where ∇^{σ_y}_ρ J^ρ are the species formation rates described earlier. We note that the above generalized rate equations (8) are the dynamical expression of the decomposition of species population changes into internal and external contributions introduced by de Donder (see [54, §§ 4.1 and 15.2]). We finally recall that the steady-state distribution is defined by

$$ \frac{\mathrm{d}\bar{Z}^\sigma}{\mathrm{d}t} = 0 \,, \qquad \forall\,\sigma\in S \,. \qquad (9) $$

Using the rate equations (5) and (8), the steady state can be characterized by

$$ \nabla^{\sigma_x}_\rho\, \bar{J}^\rho = 0 \,, \qquad \forall\,\sigma_x\in S_x \,, \qquad (10a) $$
$$ \nabla^{\sigma_y}_\rho\, \bar{J}^\rho = -\bar{I}^{\sigma_y} \,, \qquad \forall\,\sigma_y\in S_y \,. \qquad (10b) $$

Eq. (10a) implies that the steady-state formation rates of the internal species vanish. Instead, the steady-state formation rates of the chemostatted species are in general non-vanishing, eq. (10b), and are compensated by the external currents.
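A sketch of the open, nondriven case (the fig. 1 network with S1 and S5 chemostatted, as in fig. 2; all values are illustrative): holding Z^1 and Z^5 fixed while evolving the internal species according to eq. (5) drives the system to a steady state satisfying eqs. (10a) and (10b):

```python
import numpy as np

# Fig. 2 network: S1 and S5 chemostatted, S2, S3, S4 internal.
nabla = np.array([[-1,  0],
                  [ 2,  0],
                  [ 1, -1],
                  [ 0, -1],
                  [ 0,  1]], dtype=float)
internal     = [1, 2, 3]
chemostatted = [0, 4]

def currents(Z, k_fwd, k_bwd):
    """Reaction currents of eq. (7)."""
    J1 = k_fwd[0] * Z[0]        - k_bwd[0] * Z[1]**2 * Z[2]
    J2 = k_fwd[1] * Z[2] * Z[3] - k_bwd[1] * Z[4]
    return np.array([J1, J2])

k_fwd, k_bwd = np.array([1.0, 1.0]), np.array([1.0, 1.0])
Z = np.array([2.0, 1.0, 1.0, 1.0, 0.5])   # Z^1, Z^5 held fixed by the chemostats
dt = 2e-3
for _ in range(150_000):
    Z[internal] += dt * (nabla[internal] @ currents(Z, k_fwd, k_bwd))

J_ss = currents(Z, k_fwd, k_bwd)
formation_internal = nabla[internal] @ J_ss       # vanishes, eq. (10a)
I_ext = -nabla[chemostatted] @ J_ss               # external currents, eq. (10b)
```

With this particular chemostatting no emergent cycle arises (see § II D), so the steady state is actually an equilibrium and the external currents vanish as well.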

C. Conservation Laws

In a closed chemical network, a conservation law l = {l_σ} is defined as a left null eigenvector of the stoichiometric matrix [24, 26], namely

$$ l_\sigma\, \nabla^\sigma_\rho = 0 \,, \qquad \forall\,\rho\in R \,. \qquad (11) $$

Conservation laws identify conserved quantities L ≡ l_σ Z^σ, called components [24, 26], which satisfy

$$ \dot{L} = l_\sigma\, \dot{Z}^\sigma = 0 \,. \qquad (12) $$

We denote by {l^λ} a set of independent conservation laws of the closed network and by {L^λ ≡ l^λ_σ Z^σ} the corresponding components. The choice of this set is not unique, and different choices have different physical meanings. This set is never empty, since the total mass is always conserved. From a physical point of view, conservation laws are often related to moieties [55], i.e. to parts of molecules or species whose amount—the component—is conserved within the reaction network and hence does not change over time. In an open chemical network, the conservation laws become the left null eigenvectors of the stoichiometric matrix of the internal species, ∇_X. Hence, the chemostatting procedure may break a subset of the closed network's conservation laws [53]. E.g. when the first chemostat is introduced, the total mass conservation law is always broken. Within the set of conservation laws {l^λ}, we label the broken ones by λ_b and the unbroken ones by λ_u. The broken conservation laws satisfy eq. (11) but

$$ l^{\lambda_b}_{\sigma_x}\, \nabla^{\sigma_x}_\rho \neq 0 \,, \qquad \text{for at least one } \rho\in R \,. \qquad (13) $$

On the other hand, the unbroken conservation laws satisfy eq. (11) and

$$ l^{\lambda_u}_{\sigma_x}\, \nabla^{\sigma_x}_\rho = 0 \,, \qquad \forall\,\rho\in R \,. \qquad (14) $$

Without loss of generality, we choose the set {l^λ} so that l^{λ_u}_{σ_y} = 0, ∀λ_u, σ_y. Example—For the chemical network in fig. 1, an independent set of conservation laws is

$$ l^1 = \begin{pmatrix} 1 & \tfrac12 & 0 & 0 & 0 \end{pmatrix} , \quad l^2 = \begin{pmatrix} 0 & 0 & 0 & 1 & 1 \end{pmatrix} \quad \text{and} \quad l^3 = \begin{pmatrix} 0 & \tfrac12 & -1 & 1 & 0 \end{pmatrix} . \qquad (15) $$

When chemostatting as in fig. 2, the first two break while the last one remains unbroken. We also note that this set is chosen so that the unbroken conservation law satisfies l^3_1 = l^3_5 = 0.
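The conservation laws of eq. (15) and the effect of chemostatting can be checked directly; the sketch below implements eqs. (11) and (14) for the example network:

```python
import numpy as np

# Stoichiometric matrix of eq. (6).
nabla = np.array([[-1,  0],
                  [ 2,  0],
                  [ 1, -1],
                  [ 0, -1],
                  [ 0,  1]], dtype=float)

# Independent conservation laws of eq. (15).
laws = [np.array([1.0, 0.5,  0.0, 0.0, 0.0]),   # l^1
        np.array([0.0, 0.0,  0.0, 1.0, 1.0]),   # l^2
        np.array([0.0, 0.5, -1.0, 1.0, 0.0])]   # l^3

# Each law is a left null eigenvector of nabla, eq. (11).
null = [np.allclose(l @ nabla, 0.0) for l in laws]

# Chemostatting S1 and S5 as in fig. 2: a law remains unbroken iff it is a
# left null eigenvector of the internal part nabla_X, eq. (14).
internal = [1, 2, 3]
unbroken = [np.allclose(l[internal] @ nabla[internal], 0.0) for l in laws]
```

As stated in the text, the first two laws break under this chemostatting while l^3 survives.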

D. Detailed-Balanced Networks

Mathematically, detailed-balanced networks are those whose rate equations admit an equilibrium distribution as steady state in the absence of time-dependent driving. By definition, an equilibrium distribution {Z^σ_eq} satisfies the detailed-balance property [54, § 9.4]

$$ J^\rho_{\mathrm{eq}} \equiv J^\rho(\{Z^\sigma_{\mathrm{eq}}\}) = 0 \,, \qquad \forall\,\rho\in R \,. \qquad (16) $$

As a result, for open networks, all the external currents, eq. (10b), must vanish at equilibrium: {I^{σ_y}_eq = 0}. By virtue of the mass action kinetics, eq. (2), the detailed-balance property (16) can be rewritten as

$$ \frac{k^{+\rho}}{k^{-\rho}} = \prod_\sigma \left( Z^\sigma_{\mathrm{eq}} \right)^{\nabla^\sigma_\rho} \,, \qquad \forall\,\rho\in R \,. \qquad (17) $$

Given the set of unbroken components and the chemostatted concentrations, the equilibrium distribution of mass action detailed-balanced networks is globally stable [56]. Hence, these networks always relax to equilibrium. Since we consider only elementary reactions, closed chemical networks must be detailed-balanced. This statement can be seen as the zeroth law for chemical networks. Consequently, rather than considering eq. (17) as a property of the equilibrium distribution, we impose it as a property that the rate constants must satisfy, and we call it the local detailed-balance property. It thereby becomes a universal property of elementary reactions which holds regardless of the network state. We emphasize that, despite the fact that the equilibrium distribution depends on the components, the product of powers on the r.h.s. of eq. (17) does not, as will become explicit once the thermodynamical structure is introduced in sec. V, eq. (68). From now on, this property of the rate constants will always be assumed. The local detailed-balance property will be rewritten in a thermodynamical form in § III B. In open nondriven chemical networks, the chemostatting procedure may prevent the system from reaching an equilibrium state. To express this scenario algebraically we need to introduce the concepts of emergent cycle and emergent affinity.
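The local detailed-balance property (17) can also be used constructively: pick any target equilibrium distribution, fix the rate-constant ratios through eq. (17), and detailed balance (16) holds at that distribution. A sketch with arbitrary numbers for the fig. 1 network:

```python
import numpy as np

# Stoichiometric matrix of eq. (6).
nabla = np.array([[-1,  0],
                  [ 2,  0],
                  [ 1, -1],
                  [ 0, -1],
                  [ 0,  1]], dtype=float)

# Arbitrary target equilibrium distribution.
Z_eq = np.array([2.0, 1.0, 0.5, 1.5, 0.6])

# Eq. (17): k^{+rho}/k^{-rho} = prod_s (Z_eq^s)^{nabla^s_rho}.
ratio = np.prod(Z_eq[:, None] ** nabla, axis=0)
k_bwd = np.array([1.0, 1.0])
k_fwd = ratio * k_bwd

# Detailed balance, eq. (16): both currents of eq. (7) vanish at Z_eq.
J1 = k_fwd[0] * Z_eq[0]           - k_bwd[0] * Z_eq[1]**2 * Z_eq[2]
J2 = k_fwd[1] * Z_eq[2] * Z_eq[3] - k_bwd[1] * Z_eq[4]
```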

When chemostatting species starting from a closed chemical network, for each chemostat either a conservation law breaks—as mentioned in § II C—or an independent emergent cycle arises [53]. A cycle c̃ = {c̃^ρ} is defined as a right null eigenvector of the stoichiometric matrix [53], namely

$$ \nabla^\sigma_\rho\, \tilde{c}^\rho = 0 \,, \qquad \forall\,\sigma\in S \,. \qquad (18) $$

Cycles represent sets of reactions which leave the populations of all species unchanged. Emergent cycles, instead, are sets of reactions which leave the populations of the internal species unchanged while changing those of the chemostatted species. However, since the chemostatted concentrations of nondriven networks are held constant, the environment restores their values as soon as they change. An emergent cycle c = {c^ρ} is defined algebraically by [53]

$$ \nabla^{\sigma_x}_\rho\, c^\rho = 0 \,, \quad \forall\,\sigma_x\in S_x \,, \qquad \nabla^{\sigma_y}_\rho\, c^\rho \neq 0 \,, \quad \text{for at least one } \sigma_y\in S_y \,. \qquad (19) $$

We denote by {c_α} a set of independent emergent cycles. Importantly, the rise of emergent cycles is a topological matter: it depends on which species are chemostatted, but not on the chemostatted concentrations [53]. We also note that emergent cycles are modeled as "flux modes" in the context of metabolic networks, where a great deal of effort is devoted to identifying biologically meaningful bases of flux modes (or, equivalently, of emergent cycles) [57–59]. Any emergent cycle c has a dynamical affinity [53]

$$ \mathcal{A} = c^\rho \ln\!\left[ \frac{k^{+\rho}}{k^{-\rho}} \prod_{\sigma_y} \left( Z^{\sigma_y} \right)^{-\nabla^{\sigma_y}_\rho} \right] \qquad (20) $$

associated to it. We denote by {A_α} the set of emergent affinities corresponding to the set of independent emergent cycles {c_α}. The thermodynamical expression of the affinity will be given later, eq. (28), after introducing the thermodynamical description. A network is detailed-balanced iff all the dynamical affinities vanish [56]. Hence, a detailed-balanced network with no emergent cycles remains detailed-balanced for any choice of the chemostatted concentrations. Consequently, when a time-dependent driving acts on such a chemical network and prevents it from reaching an equilibrium state, a well-defined equilibrium state exists at any time: the equilibrium state to which the network would relax if the time-dependent driving were stopped. We call these networks unconditionally detailed-balanced. We emphasize that a network is still detailed-balanced if emergent cycles arise but the corresponding emergent affinities (20) vanish. This happens e.g. when the chemostatted concentrations fit an equilibrium distribution. Finally, a tacit assumption in the above discussion is that the network involves a finite number of species and reactions, i.e. that the network is finite-dimensional. Infinite-dimensional networks can exhibit long-time behaviors different from equilibrium even in the absence of emergent cycles [60].
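Emergent cycles are easy to exhibit numerically. The chemostatting of fig. 2 produces none (both chemostats break conservation laws), so the sketch below uses a different, purely illustrative chemostatting of the fig. 1 network (S1, S2 and S3 controlled, S4 and S5 internal) for which reaction pathway 1 alone is an emergent cycle:

```python
import numpy as np

# Stoichiometric matrix of eq. (6).
nabla = np.array([[-1,  0],
                  [ 2,  0],
                  [ 1, -1],
                  [ 0, -1],
                  [ 0,  1]], dtype=float)

# Illustrative chemostatting (different from fig. 2): S1, S2, S3 controlled.
internal     = [3, 4]        # S4, S5
chemostatted = [0, 1, 2]     # S1, S2, S3

# Candidate emergent cycle: run reaction pathway 1 once, eq. (19).
c = np.array([1.0, 0.0])
leaves_internal_unchanged = np.allclose(nabla[internal] @ c, 0.0)
changes_chemostatted = not np.allclose(nabla[chemostatted] @ c, 0.0)

# Emergent affinity of eq. (20), with arbitrary rate constants and
# chemostatted concentrations: A = ln[k^{+1} Z1 / (k^{-1} (Z2)^2 Z3)].
k_fwd, k_bwd = np.array([2.0, 1.0]), np.array([1.0, 1.0])
Z1, Z2, Z3 = 1.0, 0.5, 1.0
A = c[0] * np.log(k_fwd[0] * Z1 / (k_bwd[0] * Z2**2 * Z3))
```

With three chemostats, two conservation laws break and one emergent cycle appears, consistent with the counting rule stated above.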

E. Complex-Balanced Networks

To discuss complex-balanced networks and complex-balanced distributions, we first need to introduce the notion of complex in open chemical networks. In a chemical network, the term complex refers to the different groups of reactants and products appearing in the reactions (see below for an example). We label each complex by an index γ ∈ C, where C denotes the set of all complexes. We distinguish the complexes composed solely of chemostatted species from those containing at least one internal species. We refer to the former class as external complexes and label them by γ_y; analogously, we refer to the latter as internal complexes and label them by γ_x. Example—In the chemical network depicted in fig. 2, 2 S2 + S3 and S3 + S4 are internal complexes, while S1 and S5 are external ones. Mathematically, complex-balanced networks are those whose rate equations admit a complex-balanced distribution as steady state in the absence of time-dependent driving [35, 36]. By definition, a complex-balanced distribution—or complex-balanced steady state—is such that the net total concentration flow into each internal complex γ_x vanishes. In other words, the concentration influx equals the concentration out-flux of each internal complex γ_x, whose "total amount" thus remains constant over time. In mass action complex-balanced networks, the complex-balanced distribution is globally stable [61], given the set of unbroken components and the chemostatted concentrations. We now express the complex-balanced steady state algebraically. In full generality, the stoichiometric matrix ∇ can be decomposed as

$$ \nabla^\sigma_\rho = \Gamma^\sigma_\gamma\, \partial^\gamma_\rho \,. \qquad (21) $$

The matrix Γ = {Γ^σ_γ} is the so-called complex matrix [35, 37], whose element Γ^σ_γ is the stoichiometric number of species S_σ in the complex γ. The complex matrix encodes the structure of each complex in terms of chemical species (see below for an example). The matrix ∂ = {∂^γ_ρ} is the incidence matrix of the chemical network, whose elements are given by

$$ \partial^\gamma_\rho = \begin{cases} 1 & \text{if } \gamma \text{ is the product of } +\rho \,, \\ -1 & \text{if } \gamma \text{ is the reactant of } +\rho \,, \\ 0 & \text{otherwise.} \end{cases} \qquad (22) $$

The incidence matrix encodes the network's structure at the level of complexes, i.e. how the complexes are connected by the reactions. Indeed, if we think of the complexes as network nodes, the incidence matrix provides the structure of the reactions connecting them, i.e. the edges. The graphical structure which emerges is called the reaction graph. It is worth mentioning that the network's structure at the level of chemical species is instead encoded in the stoichiometric matrix. If we now think of species as

the network nodes, the stoichiometric matrix provides the structure of the reactions connecting the species. However, since reactions connect sets of species to other sets of species, the network's representation at this level is hyper-graphical rather than graphical, as it is at the level of complexes [53, 62]. We refer to the emerging hyper-graphical structure as the reaction hyper-graph.

[Figure: the reaction hyper-graph of the network in fig. 1, with nodes S1, …, S5 and hyper-edges for the forward reactions k^{+1} and k^{+2}.]

FIG. 3: Hyper-graphical representation of the chemical network described in fig. 1. To simplify the representation, the backward reactions are not shown.

Example—The complex matrix and the incidence matrix of the reaction network in figs. 1 and 2 are

$$ \Gamma = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} , \qquad \partial = \begin{pmatrix} -1 & 0 \\ 1 & 0 \\ 0 & -1 \\ 0 & 1 \end{pmatrix} , \qquad (23) $$

where the complexes are ordered from the top-most left to the bottom-most right (S1, 2 S2 + S3, S3 + S4, S5). The hyper-graphical representation of the same chemical network is depicted in fig. 3. The incidence matrix allows us to characterize the complex-balanced steady state algebraically. The steady-state distribution {Z̄^σ} is complex-balanced if the related currents {J̄^ρ} satisfy

$$ \partial^{\gamma_x}_\rho\, \bar{J}^\rho = 0 \,, \qquad \forall\,\gamma_x\in C_x \,. \qquad (24) $$
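The decomposition (21) and the matrices of eq. (23) can be cross-checked numerically (a sketch):

```python
import numpy as np

# Complex matrix and incidence matrix of eq. (23);
# complexes ordered S1, 2S2+S3, S3+S4, S5.
Gamma = np.array([[1, 0, 0, 0],
                  [0, 2, 0, 0],
                  [0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
partial = np.array([[-1,  0],
                    [ 1,  0],
                    [ 0, -1],
                    [ 0,  1]], dtype=float)

# Eq. (21): the stoichiometric matrix factorizes as nabla = Gamma @ partial,
# recovering eq. (6).
nabla = Gamma @ partial
```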

The left-hand side of the above equation is the steady-state formation rate of the internal complexes. In the same way, the term ∇^{σ_x}_ρ J̄^ρ in the characterization of the steady state (10a) is the steady-state formation rate of the internal species. Therefore, eq. (24) requires that the formation rates of the internal complexes vanish, or equivalently that the concentration flowing into each internal complex equals the concentration flowing out of it. We note that detailed-balanced networks are a subclass of complex-balanced networks. We refer to networks characterized by a complex-balanced steady state for any physical value of the chemostatted concentrations as unconditionally complex-balanced [35]. Deficiency-zero networks are the most relevant networks which exhibit this property [36, 37]. Introducing the concept of deficiency goes beyond the scope of this paper: we briefly discuss it in appendix D and refer the reader to Refs. [49, 50] for more details. We stress, however, that while the characterization of deficiency-zero networks is purely topological, it strongly influences the network's dynamical behavior.

III. THERMODYNAMICS OF CHEMICAL NETWORKS

In this section, we endow the above-discussed dynamical framework with a nonequilibrium thermodynamical description. We first introduce the concept of local equilibrium, which provides the connection between dynamics and thermodynamics far from equilibrium. We then introduce the basic thermodynamical quantities characterizing chemical mixtures and biochemical reactions. Our thermodynamical description starts with the formulation of the entropy balance, in the spirit of Stochastic Thermodynamics. The energy balance and the characterization of the chemical work follow.

A. Local Equilibrium

The assumption of local equilibrium [54, § 15.1] [63] provides the connection between dynamics and thermodynamics. Together with the assumption of a well-stirred reaction mixture in an ideal solution, it implies that, if all reactions were instantaneously shut down, the state of the whole chemical network would be an equilibrium mixture of species. What is thus assumed is that the equilibration following any reactive event is much faster than any reaction time scale. The nonequilibrium nature of the thermodynamic description is due to the reaction mechanisms and solely involves the concentration distribution. All the intensive thermodynamical variables are well-defined and equal everywhere in the system. The temperature T is set by the solvent, which acts as a thermal bath, while the pressure p is set by the environment the solution is exposed to. The chemical potentials are written in terms of the molar concentrations [24, § 3.1],

$$ \mu_\sigma = \mu^\circ_\sigma + RT \ln \frac{Z_\sigma}{Z^\circ_\sigma} \,. \qquad (25) $$

The above expression is valid for ideal dilute solutions: R is the gas constant, while μ°_σ and Z°_σ are the standard-state chemical potential and the standard-state concentration of S_σ, respectively. From now on, Z°_σ is taken as 1 mol/l for any species [64]; because of this, many equations will appear with apparently non-matching dimensions. We note that the definition of the chemical potential in terms of molar fractions—widely used in chemical thermodynamics when dealing with chemical mixtures (see e.g. Refs. [54, 65])—leads to contradictions in the framework of chemical networks. We refer the reader to appendix A for more details.

Due to the assumptions of local equilibrium and of a well-stirred reaction mixture, the densities of all extensive thermodynamical quantities are well-defined and equal everywhere in the system. Therefore, by an abuse of notation and nomenclature, we denote the densities with the same symbols and names usually used for the corresponding extensive quantities. E.g. S represents the entropy per unit volume, but we refer to it as the entropy. We apply the same logic to rates of change; e.g. we call entropy production rate the entropy production rate per unit volume.

B. Affinities, Emergent Affinities and Local Detailed Balance

Using the local equilibrium assumption, we now relate chemical kinetics and thermodynamics. The kinetic forces acting along each reaction pathway are called affinities [54, § 4.1.3]. From a thermodynamical point of view, the affinities are expressed as differences of chemical potentials or, equivalently, as Gibbs free energies of reaction [54, § 9.3] [24, § 3.2]:

$$ A_\rho \equiv -\nabla^\sigma_\rho\, \mu_\sigma \equiv -\Delta_{\mathrm r} G_\rho \,. \qquad (26) $$

At equilibrium, the affinities must vanish along each reaction pathway: A^eq_ρ ≡ −∇^σ_ρ μ^eq_σ = 0, ∀ρ. Hence, using the detailed-balance property (17) (of mass action networks), we obtain the so-called reaction isotherm

$$ \Delta_{\mathrm r} G_\rho = RT\, \nabla^\sigma_\rho \ln \frac{Z_\sigma}{Z^{\mathrm{eq}}_\sigma} \qquad (27a) $$
$$ \hphantom{\Delta_{\mathrm r} G_\rho} = -RT \ln \frac{J^{+\rho}}{J^{-\rho}} \,, \qquad (27b) $$

which relates thermodynamics and chemical kinetics along each reaction pathway ρ. We refer the reader to Ref. [51] for an extensive discussion of the relationship between chemical kinetics and thermodynamics. The Gibbs free energies of reaction along the emergent cycles give the external thermodynamical forces the network is coupled to [53]:

$$ \mathcal{A} = -c^\rho\, \Delta_{\mathrm r} G_\rho = -c^\rho\, \nabla^{\sigma_y}_\rho\, \mu_{\sigma_y} \,. \qquad (28) $$

The above expression provides a thermodynamical meaning to the dynamical affinity defined in eq. (20). Combining the detailed-balance property (17) with the equilibrium condition on the affinities, A^eq_ρ = 0 (26), we also relate the Gibbs free energies of reaction to the rate constants:

$$ \frac{k^{+\rho}}{k^{-\rho}} = \exp\!\left( -\frac{\Delta_{\mathrm r} G^\circ_\rho}{RT} \right) . \qquad (29) $$

This relation is the thermodynamical counterpart of the local detailed balance (17). It plays the same role as in Stochastic Thermodynamics, namely connecting the thermodynamical description to the stochastic dynamics. We finally point out that the local detailed-balance property

is intimately related to the assumption of local equilibrium. Indeed, when we write the rate constants referring to equilibrium quantities (the standard Gibbs free energies of reaction), we assume that the thermodynamical structure of the system is an equilibrium one. However, due to reaction mechanisms, the actual thermodynamical state may be arbitrarily far from equilibrium.
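Numerically, eq. (29) fixes the rate-constant ratio from the standard Gibbs free energy of reaction, and the reaction isotherm (27) then follows identically. A sketch for the reaction S1 ⇌ 2 S2 + S3 (the ΔrG° value and concentrations are arbitrary illustrative numbers):

```python
import numpy as np

R, T = 8.314, 298.15            # gas constant (J/(mol K)) and temperature (K)

# Local detailed balance, eq. (29): k^{+}/k^{-} = exp(-DrG0/RT).
DrG0 = -5.0e3                   # standard Gibbs free energy of reaction (J/mol)
k_bwd = 1.0
k_fwd = k_bwd * np.exp(-DrG0 / (R * T))

# Mass action rates for S1 <-> 2 S2 + S3 (concentrations in mol/l,
# standard-state concentration 1 mol/l).
Z1, Z2, Z3 = 2.0, 0.5, 1.0
J_fwd = k_fwd * Z1
J_bwd = k_bwd * Z2**2 * Z3

# Reaction isotherm, eq. (27b).
DrG = -R * T * np.log(J_fwd / J_bwd)

# Consistency with eq. (26): DrG = DrG0 + RT (2 ln Z2 + ln Z3 - ln Z1).
DrG_check = DrG0 + R * T * (2 * np.log(Z2) + np.log(Z3) - np.log(Z1))
```

The two expressions for Δ_r G agree identically, since the k^{+}/k^{−} ratio drops out exactly through eq. (29).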

C.

Enthalpies and Entropies of Reaction

To construct a consistent thermodynamical description, and specifically to identify the heat exchanged, it is important to distinguish the enthalpic change produced by each reaction from the entropic one. We consider the following decomposition of the standard-state chemical potentials [24, § 3.2]:

  µ°_σ = h°_σ − T s°_σ .  (30)

The standard enthalpies of formation {h°_σ} take into account the enthalpic contributions carried by each species [24, § 3.2] [65, § 10.4.2]. The enthalpy changes caused by each reaction give the enthalpies of reaction [24, § 3.2] [54, § 2.4]

  ∆_r H_ρ = ∇^σ_ρ h°_σ .  (31)

Importantly, since our chemical system is kept at constant pressure, the enthalpies of reaction measure the heat of reaction. This is the content of Hess' law (see e.g. [65, § 10.4.1]). The standard entropies of formation {s°_σ} take into account the internal entropic contribution carried by each species under standard-state conditions [24, § 3.2]. Using the decomposition of the standard-state chemical potential (30), the chemical potentials read

  µ_σ = h°_σ − T ( s°_σ − R ln Z^σ ) ≡ h°_σ − T s_σ .  (32)

The entropies of formation {s_σ ≡ s°_σ − R ln Z^σ} account for the entropic contribution of each species in the chemical network [24, § 3.2]. The entropy changes along each reaction are given by the entropies of reaction [24, § 3.2]

  ∆_r S_ρ = ∇^σ_ρ s_σ .  (33)

As we will show, the decomposition discussed above is crucial for a meaningful expression of the entropy and energy balances.
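The decomposition (30)-(33) implies ∆_r G_ρ = ∆_r H_ρ − T ∆_r S_ρ along every reaction. A minimal numerical sketch with invented standard enthalpies and entropies of formation for a hypothetical reaction A ⇌ 2B confirms that the decomposition closes term by term:

```python
import math

R, T = 8.314, 298.15  # J/(mol K), K

# Invented standard enthalpies (J/mol) and entropies (J/(mol K)) of formation
h0 = {"A": -40e3, "B": -25e3}
s0 = {"A": 110.0, "B": 95.0}
nabla = {"A": -1, "B": 2}        # stoichiometry of A <=> 2B
Z = {"A": 0.3, "B": 0.05}        # concentrations

mu0 = {sp: h0[sp] - T * s0[sp] for sp in h0}                 # eq. (30)
mu = {sp: mu0[sp] + R * T * math.log(Z[sp]) for sp in h0}    # chemical potentials
s = {sp: s0[sp] - R * math.log(Z[sp]) for sp in h0}          # entropies of formation, eq. (32)

drG = sum(nabla[sp] * mu[sp] for sp in h0)
drH = sum(nabla[sp] * h0[sp] for sp in h0)                   # eq. (31)
drS = sum(nabla[sp] * s[sp] for sp in h0)                    # eq. (33)

# The decomposition must close exactly: Delta_r G = Delta_r H - T Delta_r S
assert abs(drG - (drH - T * drS)) < 1e-6
```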

D.

Entropy Balance

Using the local equilibrium assumption (sec. III A), the connection between dynamical and thermodynamical equilibrium (sec. III B) and the identification of enthalpic and entropic changes (sec. III C), we proceed to establish the entropy balance in chemical networks. Mimicking Stochastic Thermodynamics, we start with the dynamical expression for the entropy production rate and with the thermodynamical expression for the entropy flow rate. We then identify the system entropy and show that it can be obtained as the large-particle limit of the stochastic entropy of chemical networks.

1. Entropy Production Rate

The entropy production rate measures the rate of entropy change in the system plus environment [54] and must be non-negative by the second law of thermodynamics. The following expression [54, § 9.5] [8]

  T d_iS/dt ≡ J^ρ RT ln ( J_{+ρ} / J_{−ρ} ) ,  (34)

satisfies these requirements:

1. It is non-negative and vanishes only at equilibrium, i.e. when the detailed balance property (16) is satisfied;

2. It vanishes to first order around equilibrium, thus allowing for quasi-static reversible transformations. Indeed, defining

  ε_σ ≡ ( Z^σ − Z^σ_eq ) / Z^σ_eq ,  |ε_σ| ≪ 1 ,  ∀ σ ∈ S ,  (35)

we find that

  T d_iS/dt = E_{σσ'} ε^σ ε^{σ'} + O(ε³) ,  (36)

where E is a positive semi-definite symmetric matrix [66].

The entropy production rate in eq. (34) can be rewritten in a more thermodynamically appealing way using the reaction isotherm (27b):

  T d_iS/dt = −J^ρ ∆_r G_ρ .  (37)

It can be further expressed as the sum of two distinct contributions [53]:

  T d_iS/dt = −Ż^{σ_x} µ_{σ_x} − ( Ż^{σ_y} − I^{σ_y} ) µ_{σ_y} ,  (38)

where the first term on the right-hand side defines the internal entropy production rate T (d_iS/dt)_int and the second the external entropy production rate T (d_iS/dt)_ext. The internal entropy production rate is due to the internal species and takes into account the internal dissipation. The external entropy production rate is the contribution due to the chemostatting: it takes into account the dissipation due to the external exchange of species and to the time-dependent driving. The internal entropy production rate depends explicitly on a time derivative and thus vanishes at steady state. If the system reaches a nonequilibrium steady state, the external currents {Ī^{σ_y}} do not vanish, and the steady-state entropy production rate becomes

  T d̄_iS/dt = Ī^{σ_y} µ̄_{σ_y} .  (39)

Furthermore, it can be expressed as a bilinear form of the emergent affinities (28) and the emergent cycle currents {𝒥^α} [53]:

  T d̄_iS/dt = 𝒥^α 𝒜_α .  (40)

The emergent cycle currents measure the concentration currents along the emergent cycles {c_α} [53]. Eq. (40) emphasizes the role of the emergent affinities as the effective sources of thermodynamical force and dissipation in chemical networks at steady state.

2. Entropy Flow Rate

The entropy flow rate measures the entropy reversibly exchanged between system and environment [54]. Using the expressions for the enthalpy of reaction (31) and the entropy of formation (32), we express the entropy flow rate as

  T d_eS/dt ≡ J^ρ ∆_r H_ρ + I^{σ_y} T s_{σ_y} ,  (41)

where the first term defines the heat flow rate d̄Q/dt (positive if heat is absorbed by the system), which measures the reversible heat exchange rate between the chemical network and the thermal bath. The second term on the right-hand side accounts for the entropy of formation reversibly exchanged with the chemostats. Thus, the entropy flow rate (41) reads as the sum of the heat exchange rate and an entropy of formation exchange rate. A pictorial representation of the entropy flow rate (41) is given in fig. 4. We will show the connection between the above expression of the entropy flow and the one given in the literature in § III E 1.

3. System Entropy

Using eqs. (32), (37) and (41), the nonequilibrium system entropy change rate reads

  T Ṡ ≡ T d_iS/dt + T d_eS/dt = T Ż^σ s_σ .  (42)

Integrating Ṡ over time, we obtain the expression for the entropy of the chemical network:

  S = Z^σ s_σ + R Z + S_0 .  (43)

[Figure 4: system-environment diagram of the open network S1 ⇌ 2 S2 + S3 (rate constants k±1) and S3 + S4 ⇌ S5 (rate constants k±2), showing the heat flow and the entropy of formation flow crossing the system-environment boundary.]

FIG. 4: Pictorial representation of the entropy flow rate for the chemical network in fig. 1 with only S5 chemostatted.

The term Z = Σ_σ Z^σ is the total concentration, while S_0 is an integration constant which sets the reference. We point out that the above expression for the entropy differs from that given in the literature [67] by the total concentration term. To support eq. (43), we demonstrate that the entropy function (43) can be obtained as the large-particle limit of the well-established stochastic entropy function of chemical networks. In the stochastic setting, the network's state is described by the probability p_t(n) of finding the system in the state with a certain set of populations n ≡ {n^σ}. The stochastic entropy of a chemical network is given by [22, 68]

  S(n) = −R ln p_t(n) + s(n) ,  (44)

where the second term is the configurational entropy

  s(n) ≡ n^σ s̃°_σ − R Σ_σ ln ( n^σ! / V^{n^σ} ) ,  (45)

with s̃°_σ representing the configurational entropy of one single S_σ molecule. We assume that the probability mass function behaves as a discrete delta function, p_t(n) ≃ δ(n − n̂(t)), in the large-particle limit n^σ ≫ 1, ∀σ. The vector n̂ ≡ n̂(t) denotes the macroscopic amounts of chemical species, such that Z = n̂/(V N_A), N_A denoting the Avogadro constant. In the aforementioned limit, the average stochastic entropy becomes the entropy function (43), see app. B. Mindful of the information-theoretical interpretation of the entropy [69], we point out that the uncertainty due to the stochasticity of the state disappears. However, the uncertainty due to the indistinguishability of the species (which is the content of the configurational entropy) remains and contributes to the whole deterministic entropy function (43).
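The large-particle limit invoked here amounts to Stirling's approximation: per unit volume, the factorial term of eq. (45) tends to −R Z ln Z + R Z, which is where the total concentration term R Z of eq. (43) originates. A short numerical sketch (with N_A set to 1 and hypothetical numbers):

```python
import math

R = 8.314  # J/(mol K)

def config_entropy_density(n, V):
    """Exact factorial term of eq. (45) per unit volume: -(R/V) ln(n!/V**n)."""
    return -(R / V) * (math.lgamma(n + 1) - n * math.log(V))

def deterministic_limit(Z):
    """Stirling limit of the same term: -R Z ln Z + R Z."""
    return -R * Z * math.log(Z) + R * Z

Z = 0.5  # particles per unit volume (N_A = 1 for the sketch)
for V in (1e2, 1e4, 1e6):
    n = int(Z * V)
    print(V, config_entropy_density(n, V), deterministic_limit(n / V))
# The two values converge as V grows; the +RZ piece surviving the limit is
# exactly the total concentration term appearing in eq. (43).
```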

E.

Energy Balance

We are now in the position to establish the first law of thermodynamics in chemical networks. We first define the nonequilibrium Gibbs potential and show its relationship to the dynamical potentials used in Chemical Reaction Network Theory. We finally characterize the minimum amount of chemical work that the environment can perform on the network, given an arbitrary time-dependent driving. Exploiting the analogy with Stochastic Thermodynamics, we interpret the work inequalities that we derive in an information-theoretical way.

To introduce the energy balance, we combine the decomposition of the chemical potential (32) with the system entropy change rate (42):

  T Ṡ = Ż^σ h°_σ − Ż^σ µ_σ ≡ Ḣ − Ġ .  (46)

The first term defines the enthalpy change rate in the chemical network, while the second defines the nonequilibrium Gibbs free energy change rate.

1. First Law of Thermodynamics

To formulate the first law of thermodynamics, we first need to clarify the meaning of the enthalpy. The system's enthalpy is obtained by integrating the enthalpy change rate defined in eq. (46) over time:

  H = Z^σ h°_σ .  (47)

Since our description is based on local equilibrium (§ III A) and since the system is kept at constant pressure p, the system's enthalpy (47) is equal to the chemical network's internal energy, up to a constant. More precisely, the enthalpy H in (46) is a density and, when written in terms of the internal energy density U, it reads H = U + p. Hence, the enthalpy change rate encodes the internal energy exchange in the system. Using the rate equations (5) and (8) and the definition of the heat flow rate (41), the enthalpy change rate defined in eq. (46) becomes

  Ḣ = d̄Q/dt + I^{σ_y} h°_{σ_y} .  (48)

Thus, Ḣ is the sum of the heat flow rate and the enthalpy of formation exchange rate. We interpret the above equation as the First Law of Thermodynamics in the realm of chemical networks.

We conclude the subsection by mentioning the relation between the enthalpy change rate and the entropy flow rate:

  T d_eS/dt = Ḣ − I^{σ_y} µ_{σ_y} ,  (49)

which is obtained from eqs. (48) and (41). In this form, the entropy flow rate agrees with the definition given in the literature [54, § 4.1.2]. In the context of the stochastic description of open chemical networks, the analogue of eq. (49) was called the first law of thermodynamics [22].

2. Nonequilibrium Gibbs Free Energy

We are now in the position to write the thermodynamical potential regulating chemical networks. Integrating

the Gibbs free energy change rate (46) over time, we obtain

  G = Z^σ µ_σ − RT Z + G_0 ,  (50)

where G_0 is an integration constant which sets the reference energy. The potential G extends the Gibbs free energy out of equilibrium, and we refer to it as the nonequilibrium Gibbs free energy.

Closed Chemical Networks. To understand the meaning of G, we analyze its role in closed chemical networks and show that it takes its minimum value at equilibrium. Let {Z^σ_eq} be the equilibrium distribution defined by the detailed-balance property (17) and characterized by the set of components {L^λ}. At equilibrium, the nonequilibrium Gibbs free energy reduces to

  G_eq = Z^σ_eq µ^eq_σ − RT Z_eq + G_0 .  (51)

As discussed in § III B, the equilibrium chemical potentials must satisfy ∇^σ_ρ µ^eq_σ = 0. From the definition of conservation law (11), we infer that µ^eq_σ must be a linear combination of the closed-system conservation laws

  µ^eq_σ = f_λ l^λ_σ .  (52)

We can thus write the equilibrium Gibbs free energy as

  G_eq = f_λ L^λ − RT Z_eq + G_0 .  (53)

In this form, the first term of the Gibbs free energy appears as a bilinear form of the components {L^λ} and the conjugated generalized forces {f_λ} [24, § 3.3]. If {Z^σ} denotes the concentration distribution of a generic state with the same components {L^λ}, and G the related nonequilibrium Gibbs free energy, the latter is related to G_eq, eq. (53), by

  G = G_eq + RT L( {Z^σ} | {Z^σ_eq} ) ,  (54)

where we introduced the Shear function [35, 70, 71]

  L( {Z^σ} | {Z'^σ} ) ≡ Z^σ ln ( Z^σ / Z'^σ ) − ( Z − Z' ) .  (55)

The Shear function is the analogue of the Kullback–Leibler divergence for concentration distributions [72]: it is always non-negative and vanishes only if {Z^σ} = {Z'^σ}. Hence, from eq. (54), G ≥ G_eq, and the equality sign holds only at equilibrium. The nonequilibrium Gibbs free energy G, or equivalently the Shear function L({Z^σ}|{Z^σ_eq}) [56, 73], acts as a Lyapunov function in closed chemical networks. Indeed, by direct calculation we obtain

  Ġ = RT dL( {Z^σ} | {Z^σ_eq} )/dt = −T d_iS/dt ≤ 0 .  (56)

The above equation emphasizes the thermodynamical structure underlying the dynamical relaxation towards equilibrium in closed networks. Eq. (56) is a special case of eq. (57), introduced below for open networks. In sec. V we will discuss the generalization of the nonequilibrium Gibbs free energy to open detailed-balanced networks and demonstrate how this class of nonequilibrium potentials characterizes the relaxation towards equilibrium.

3. Chemical Work

We now investigate the minimum amount of chemical work necessary to transform the chemical network between two arbitrary nonequilibrium concentration distributions. Mindful of the definition of the Gibbs free energy change rate (46), the entropy production rate (38) is rewritten as

  T d_iS/dt = −Ġ + I^{σ_y} µ_{σ_y} .  (57)

We interpret the second term on the right-hand side of eq. (57) as the chemical work rate performed on the chemical network, d̄W/dt ≡ I^{σ_y} µ_{σ_y} (positive if work is performed on the system) [22]. The non-negativity of the entropy production rate (34) sets the intrinsic limit on the chemical work that the chemical network can perform per unit time:

  d̄W/dt ≥ Ġ .  (58)

The equality sign is achieved for quasi-static transformations (d_iS/dt ≃ 0).

We can further characterize the chemical work performed during a transformation induced by a time-dependent driving. Let us consider two concentration distributions {Z^σ_i} and {Z^σ_f}, and the protocol τ_t driving the chemical network from the former to the latter. Integrating eq. (57) between the two, we get

  W = ∆G + T ∆_iS ,  (59)

where ∆G = G_f − G_i is the difference of nonequilibrium Gibbs free energies between the final and the initial state, while ∆_iS is the non-negative entropy production. On the other hand, let us consider the equilibrium state {Z^σ_eq,i} (resp. {Z^σ_eq,f}) obtained from {Z^σ_i} (resp. {Z^σ_f}) by closing the network and letting it relax to equilibrium, as in fig. 5. The Gibbs free energy difference between these equilibrium distributions is denoted by ∆G_eq = G_eq,f − G_eq,i and is related to ∆G via the difference of Shear functions, eq. (54),

  ∆G = ∆G_eq + RT ∆L ,  (60)

where

  ∆L ≡ L( {Z^σ_f} | {Z^σ_eq,f} ) − L( {Z^σ_i} | {Z^σ_eq,i} ) .  (61)

Thus, the chemical work (59) can be rewritten as

  W − ∆G_eq = RT ∆L + T ∆_iS .  (62)

∆G_eq represents the reversible work needed to transform the system from {Z^σ_eq,i} to {Z^σ_eq,f}. This reversible transformation requires in principle controlling every species with a chemostat, and is not easy to achieve in practice. However, it allows us to interpret the difference

W − ∆G_eq in eq. (62) as the chemical work dissipated during the nonequilibrium transformation, i.e. the irreversible chemical work W_irr. Using the non-negativity of the entropy production and the above definition, we infer from eq. (62) that

  W_irr ≥ RT ∆L .  (63)

In other words, the irreversible chemical work characterizing the transformation between two nonequilibrium states is bounded from below by the difference of Shear functions (61). If the transformation connects two equilibrium distributions, the inequality (63) becomes W_irr ≥ 0, in agreement with equilibrium thermodynamics. Due to a similar result obtained in Stochastic Thermodynamics [42], we refer to the inequality (63) as the nonequilibrium Landauer principle for the chemical work. In that context, probability distributions replace the concentration distributions and the Kullback–Leibler divergence replaces the Shear function. However, both the reference state and the type of work differ from those defined in our framework. The analogy will be completed when analyzing detailed-balanced networks (sec. V), where the reference state becomes the equilibrium state to which the chemical network spontaneously evolves, and where the contribution to the chemical work due to the driving mechanism appears as the work.

[Figure 5: phase-space schematic. A nonequilibrium driving (blue line) connects {Z^σ_i} to {Z^σ_f}; an equilibrium driving (green line) connects {Z^σ_eq,i} to {Z^σ_eq,f}; relaxation arrows connect each nonequilibrium distribution to its equilibrium counterpart.]

FIG. 5: Pictorial representation of the transformation between two nonequilibrium states in phase space. The nonequilibrium transformation (blue line) is compared with the equilibrium one (green line). The equilibrium transformation depends on the equilibrium states corresponding to the initial and final concentration distributions. In § III E 3 the equilibrium states are obtained by closing the network and letting it relax towards equilibrium. In sec. V the equilibrium states are obtained by stopping the time-dependent driving and letting the system relax to its natural equilibrium state; this last case refers to detailed-balanced networks, which naturally evolve towards an equilibrium distribution in the absence of time-dependent driving.

IV. THERMODYNAMICS OF COMPLEX-BALANCED NETWORKS

In this section, we focus on unconditionally complex-balanced networks. We shall see that the thermodynamics of these networks bears remarkable similarities with stochastic thermodynamics. In particular, the dissipation related to the complex-balanced state and the one related to the transient dynamics can be separated. As discussed in § II E, these networks are characterized by a unique complex-balanced distribution {Z̄^σ ≡ Z̄^σ(τ_t)}, defined by eq. (24), for any value of the chemostatted concentrations {Z^{σ_y} = Z^{σ_y}(τ_t)} and for fixed unbroken components. Since the set {Z̄^σ} is well-defined at any time, the entropy production rate (34) can be decomposed algebraically as

  T d_iS/dt = J^ρ RT ln ( J̄_{+ρ} / J̄_{−ρ} ) − Ż^σ RT ln ( Z^σ / Z̄^σ ) ,  (64)

where the first term defines the adiabatic entropy production rate T (d_iS/dt)_a and the second the nonadiabatic entropy production rate T (d_iS/dt)_na [45, 74]. This decomposition is possible whenever a steady-state distribution {Z̄^σ} is defined. Importantly, for unconditionally complex-balanced networks, both terms are non-negative (the proof is given in the appendix) and can be interpreted as different contributions to the total dissipation.

The adiabatic entropy production rate encodes the dissipation of the steady state {Z̄^σ}. Introducing the Gibbs free energies of reaction at the complex-balanced state {∆_r Ḡ_ρ} and using their relation with the fluxes (27b), the adiabatic entropy production rate reads

  T (d_iS/dt)_a = −J^ρ ∆_r Ḡ_ρ ≥ 0 .  (65)

The inequality highlights the fact that the transient dynamics (resulting in the set of currents {J^ρ}) is constrained by the thermodynamics of the complex-balanced state, encoded in the set of Gibbs free energies of reaction {∆_r Ḡ_ρ}.

The nonadiabatic entropy production rate characterizes the dissipation of the transient dynamics. It takes into account both the relaxation towards the complex-balanced steady state and the time-dependent driving. Indeed, by direct calculation, this rate can be further decomposed as [45, 74]

  T (d_iS/dt)_na = −RT dL( {Z^σ} | {Z̄^σ} )/dt + ( RT dZ̄/dt − Z^σ dµ̄_σ/dt ) ≥ 0 ,  (66)

where the term in parentheses defines the driving entropy production rate T (d_iS/dt)_d.

The first term is proportional to the time derivative of the Shear function (55) between the concentration distribution at time t and the corresponding complex-balanced distribution. Hence, it describes the dissipation of the relaxation towards the steady state. The second term is related to the time-dependent driving performed via the chemostatted species, and we refer to it as the driving entropy production rate [45]. It vanishes in nondriven networks, where we obtain

  (d_iS/dt)_na = −R dL( {Z^σ} | {Z̄^σ} )/dt ≥ 0 ,  (67)

which clarifies the role of L({Z^σ}|{Z̄^σ}) as a Lyapunov function from a thermodynamical perspective. The adiabatic-nonadiabatic decomposition is a general property of stochastic thermodynamics [45, 74] (see also Refs. [46, 47]), since the steady-state probability distribution is generally well-defined. Mathematically, the Shear function was proved to be a Lyapunov function for nondriven mass-action-kinetics complex-balanced networks in Refs. [35, 75]. Let us mention that an alternative derivation of the adiabatic-nonadiabatic decomposition for nondriven mass-action-kinetics complex-balanced networks was found in Ref. [48] while we were finalizing our paper.

V. THERMODYNAMICS OF DETAILED-BALANCED NETWORKS

We conclude our work by focusing on detailed-balanced networks. A new nonequilibrium potential is introduced, the thermodynamical role of the driving is analyzed, and a new work inequality is derived. As a preliminary result, we derive a formal expression for the equilibrium distribution. As discussed in § III E 2, the equilibrium chemical potentials can be expressed as linear combinations of conservation laws, the generalized forces {f_λ} being the coefficients, eq. (52). Hence, using the expression for the chemical potentials (25) and inverting with respect to the concentration term, we get

  Z^σ_eq = exp( −( µ°_σ − f_λ l^λ_σ ) / RT ) .  (68)

The generalized forces are functions of the unbroken components and of the chemostatted concentrations. Finally, it is easy to recover the local detailed-balance property, (29) and (17), from eq. (68).

A.

Transformed nonequilibrium Gibbs free energy

In open chemical networks, the chemostatting procedure breaks some conservation laws (see § II C), which no longer constrain the dynamics, the equilibrium state and the thermodynamics. As a consequence, the nonequilibrium Gibbs free energy G (50) is no longer minimized at equilibrium. As in equilibrium thermodynamics [24], the proper nonequilibrium thermodynamical potential is obtained from G by removing the energetic contribution of the broken conservation laws:

  𝒢 ≡ G − f_{λ_b} L^{λ_b} .  (69)

We refer to 𝒢 as the transformed nonequilibrium Gibbs free energy. We proceed to show that 𝒢 is minimized at equilibrium. Let {Z^σ} be the concentration distribution at time t of a generic nondriven detailed-balanced network and 𝒢 the related transformed nonequilibrium Gibbs free energy (69). The distribution {Z^σ} is characterized by the sets {L^{λ_u}} and {Z^{σ_y}}. At the corresponding equilibrium state {Z^σ_eq}, the transformed (equilibrium) Gibbs free energy reads

  𝒢_eq = f_{λ_u} L^{λ_u} − RT Z_eq + G_0 .  (70)

By direct calculation, 𝒢_eq is related to 𝒢 by

  𝒢 = 𝒢_eq + RT L( {Z^σ} | {Z^σ_eq} ) .  (71)

The properties of the Shear function L({Z^σ}|{Z^σ_eq}) ensure that 𝒢 ≥ 𝒢_eq, where the equality sign holds only at equilibrium. Also, combining the entropy production rate of nondriven detailed-balanced networks, which can be written as the right-hand side of eq. (67), with the time derivative of 𝒢, eq. (71), we obtain

  d𝒢/dt = −T d_iS/dt ≤ 0 ,  (72)

which demonstrates the role of 𝒢 as a Lyapunov function. Thus, the following variational principle holds: in nondriven detailed-balanced networks, the equilibrium distribution minimizes the potential 𝒢 (69).

It is worth remarking that the theoretical structure we obtain differs slightly at equilibrium from usual equilibrium thermodynamics. The integration over time performed to obtain the nonequilibrium Gibbs free energy (50) gives rise to a total concentration term, and the potential expressed in terms of all possible intensive thermodynamical variables is negative and equal to −RT Z_eq. The equivalent form of the Gibbs–Duhem relation, i.e. the relation expressing the mutual dependence of all intensive variables, becomes

  𝒢_eq(T, p, {f_λ}) + RT Z_eq(T, p, {f_λ}) = 0 .  (73)

We note that a mathematical perspective on the Shear function as a thermodynamical potential is found in Refs. [73, 76].

B.

Unconditionally detailed-balanced networks

As discussed in § II D, unconditionally detailed-balanced networks are characterized by a unique equilibrium distribution {Z^σ_eq ≡ Z^σ_eq(τ_t)}, defined by eq. (17), for any value of the chemostatted concentrations {Z^{σ_y} = Z^{σ_y}(τ_t)}. In order to derive the relation between the transformed Gibbs free energy and the chemical work due to the driving mechanisms, we now show that the external fluxes {I^{σ_y}} can be expressed as the rates of change of the concentrations of specific moieties. As mentioned in § II C, the independent set of unbroken conservation laws can be chosen so that the components related to the chemostatted species vanish, l^{λ_u}_{σ_y} = 0, ∀ λ_u, σ_y. In this way, from eq. (52) we get

  µ^eq_{σ_y} = f_{λ_b} l^{λ_b}_{σ_y} ,  ∀ σ_y ∈ S_y .  (74)

Since a conservation law is broken for each chemostatted species, the matrix whose elements are {l^{λ_b}_{σ_y}} is square. Choosing the set of conservation laws properly, this matrix can be inverted, and the generalized forces f_{λ_b} can be written in terms of the chemical potentials of the chemostatted species

  f_{λ_b} = µ^eq_{σ_y} l̂^{σ_y}_{λ_b} ,  (75)

where l̂^{σ_y}_{λ_b} denotes the inverse of l^{λ_b}_{σ_y}. We realize that the generalized forces {f_{λ_b}} depend only on the chemostatted concentrations, via the chemostatted chemical potentials, whereas the {f_{λ_u}} depend both on the chemostatted concentrations and on the unbroken components {L^{λ_u}}. Eq. (75) allows us to rewrite the energetic contribution of the broken conservation laws (see eq. (69)) as

  f_{λ_b} L^{λ_b} = µ^eq_{σ_y} l̂^{σ_y}_{λ_b} l^{λ_b}_σ Z^σ ≡ µ^eq_{σ_y} M^{σ_y} ,  (76)

where M^{σ_y} ≡ l̂^{σ_y}_{λ_b} l^{λ_b}_σ Z^σ.

By direct calculation, we obtain that

  Ṁ^{σ_y} = I^{σ_y} ,  ∀ σ_y ∈ S_y .  (77)

Eqs. (76) and (77) lead us to interpret M^{σ_y} as the concentration of a moiety which is exchanged with the environment only through the chemostatted species S_{σ_y}. In view of this, eq. (76) shows that the energetic contribution of the broken components can be expressed as the Gibbs free energy carried by these specific moieties. Likewise, from eq. (77), the rate of exchange of each chemostatted species equals the rate of change of the concentration of the corresponding moiety. A specific implementation of this scenario is the thermodynamical description of biochemical reactions at constant pH [24, Ch. 4]. In that case, the chemostatted species becomes the ion H+, and M^{H+} is the total concentration of H+ ions in the system.

Example. We refer to the chemical network in fig. 2, whose conservation laws are described in § II C. It is straightforward to show that the concentrations of the exchanged moieties in this network are

  M^1 = Z^1 + ½ Z^2 ,  M^5 = Z^4 + Z^5 .  (78)
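Eq. (77) can be checked on the simplest possible case: a hypothetical network A ⇌ B with B chemostatted, where the single broken conservation law L = Z^A + Z^B yields the moiety M^B = Z^A + Z^B (all numbers below are invented):

```python
# Toy check of eq. (77): for A <=> B with B chemostatted, the broken
# conservation law is L = Z_A + Z_B, the exchanged moiety is M^B = Z_A + Z_B,
# and its rate of change equals the external current I^B.
kp, km = 2.0, 1.0          # rate constants of A <=> B (hypothetical)
ZA, ZB = 0.8, 0.3          # concentrations; Z_B is held fixed by the chemostat

J = kp * ZA - km * ZB      # net mass-action current of A -> B
dZA_dt = -J                # rate equation for the internal species A
dZB_dt = 0.0               # chemostatted species: J + I_B = 0
I_B = -J                   # external current supplied by the chemostat

dM_dt = dZA_dt + dZB_dt    # time derivative of the moiety M^B = Z_A + Z_B
assert abs(dM_dt - I_B) < 1e-12
```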

From the relation between the chemical work rate and the nonequilibrium Gibbs free energy (57), the definition of the transformed nonequilibrium Gibbs free energy (69), and the relation between the energetic contribution of the broken conservation laws and the moieties (76), we get

  T d_iS/dt = −d𝒢/dt − M^{σ_y} dµ^eq_{σ_y}/dt ,  (79)

where the last term defines the driving chemical work rate, d̄W_d/dt ≡ −M^{σ_y} dµ^eq_{σ_y}/dt. This term vanishes in nondriven networks, and we retrieve eq. (72). Hence, we interpret d̄W_d/dt as the chemical work rate due to the driving mechanism. Also, from the definition of the nonadiabatic entropy production rate (66) and from the relation between the transformed nonequilibrium Gibbs free energy and the Shear function (71), we get

  T (d_iS/dt)_d = d̄W_d/dt − d𝒢_eq/dt ,  (80)

which relates the driving chemical work rate to the driving entropy production rate. Having considered an unconditionally detailed-balanced network ensures that the transformed Gibbs free energy 𝒢_eq (70) is well-defined along any transformation. To summarize, just as the nonequilibrium Gibbs free energy is directly related to the chemical work performed on the network, eq. (57), the transformed nonequilibrium Gibbs free energy is directly related to the chemical work due to the driving mechanism.

Let us consider a time-dependent transformation driving the chemical network from {Z^σ_i} to {Z^σ_f}. The distribution {Z^σ_eq,i} (resp. {Z^σ_eq,f}) denotes the equilibrium distribution obtained from {Z^σ_i} (resp. {Z^σ_f}) by stopping the time-dependent driving and letting the system relax towards equilibrium, fig. 5. Using the relation between 𝒢 and the corresponding 𝒢_eq (71), and the relation between d𝒢/dt and d̄W_d/dt (79) integrated over time, we obtain the following equality

  W_d − ∆𝒢_eq = RT ∆L + T ∆_iS ,  (81)

where

  ∆L ≡ L( {Z^σ_f} | {Z^σ_eq,f} ) − L( {Z^σ_i} | {Z^σ_eq,i} ) .  (82)

The terms W_d and ∆_iS (≥ 0) denote the driving chemical work and the entropy production along the transformation, respectively. ∆𝒢_eq is the difference of transformed Gibbs free energies between the equilibrium states corresponding to the final and the initial distributions. Importantly, the difference ∆𝒢_eq is interpreted as the reversible driving chemical work characterizing the considered transformation. Hence, the irreversible driving chemical work W_d^irr ≡ W_d − ∆𝒢_eq is constrained by

  W_d^irr ≥ RT ∆L .  (83)

We point out that the above result has its precise stochastic equivalent in Ref. [42]. In particular, the same kind of reference state is used, and the work refers to the work performed by the driving mechanism.
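The work inequalities (63) and (83) rest entirely on the non-negativity of the Shear function (55). A quick numerical sanity check on randomly generated (hypothetical) concentration distributions:

```python
import math
import random

def shear(Z, Zp):
    """Shear function of eq. (55): sum_s Z_s ln(Z_s/Z'_s) - (Z_tot - Z'_tot)."""
    return (sum(z * math.log(z / zp) for z, zp in zip(Z, Zp))
            - (sum(Z) - sum(Zp)))

random.seed(1)
for _ in range(1000):
    Z  = [random.uniform(0.01, 10.0) for _ in range(4)]
    Zp = [random.uniform(0.01, 10.0) for _ in range(4)]
    assert shear(Z, Zp) > -1e-12   # non-negative, up to floating-point rounding

assert shear([1.0, 2.0], [1.0, 2.0]) == 0.0  # vanishes when the distributions coincide
```

Each summand Z_s ln(Z_s/Z'_s) − Z_s + Z'_s is separately non-negative (it is Z'_s (x ln x − x + 1) with x = Z_s/Z'_s), which is why the bound holds term by term, exactly as for the Kullback–Leibler divergence of unnormalized distributions.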

VI.

CONCLUSIONS AND PERSPECTIVES

Following a strategy reminiscent of Stochastic Thermodynamics, we systematically build a nonequilibrium thermodynamical description for open driven chemical networks whose dynamics is described by deterministic rate equations, eqs. (5) and (8), and whose kinetics satisfies mass action law. Our framework is very general and allows us to treat transients, as well as chemostatted concentrations which depend on time. Since local equilibrium is assumed (i.e. in the absence of reactions, the mixture is at thermodynamical equilibrium), our theory embeds the nonequilibrium thermodynamical framework of irreversible processes established by the Brussels School of Thermodynamics (§ III A and III B). We now summarize our specific results. Starting from the expression of the entropy production rate (34), we first established the entropy balance in the formalism of chemical networks (§ III D). Hence, we were able to derive an expression for the system entropy, which we supported by proving that it can be obtained as a limit of the stochastic entropy of chemical networks (§ III D 3). We then focused on the energy balance: the first law of thermodynamics was established (§ III E 1). The clear separation between chemostatted and internal species allowed us to introduce the chemical work and to relate it to the nonequilibrium Gibbs potential (§ III E 2). We were thus able to express the minimal work necessary to transform the chemical network as a difference of Shear functions (§ III E 3). These specific Shear functions measure the distance of the initial and final concentration distributions from their corresponding equilibrium ones. Hence, their difference can be thought of as the thermodynamical cost of manipulating the network far from equilibrium. This result is reminiscent of the nonequilibrium Landauer principle derived in Stochastic Thermodynamics [42], which proved very useful for the study of information processing [44].
Our network description enables us to capture the deep relationship between the topology of a chemical network and its thermodynamics, and we focused on two special classes of chemical networks. For detailed-balanced networks, we exhibited the tight connection between nonequilibrium thermodynamical potentials and the Shear function (§ V A). We also characterized the minimum amount of driving chemical work (i.e. the chemical work due to the time-dependent driving mechanism) that can be exerted on the network during an arbitrary time-dependent transformation (§ V B). For complex-balanced networks, we demonstrated that the entropy production rate can be meaningfully decomposed into its adiabatic and nonadiabatic contributions (sec. IV), as done in Stochastic Thermodynamics. The adiabatic term quantifies the dissipation of the steady state, while the nonadiabatic one quantifies the dissipation of the transient dynamics. Our framework could be used to shed new light on a broad range of problems. We mention only a few. Stochastic thermodynamics has been successfully used

to study the thermodynamical cost of information processing in various synthetic and biological systems [43, 77–81]. However, most of these are modeled as few-state systems or linear networks [8, 9]—e.g. quantum dots [82], molecular motors [83, 84] and single-enzyme mechanisms [85, 86]—while biochemical networks involve much more complex descriptions. The present work overcomes this limitation and paves the way towards an information thermodynamics of complex (i.e. nonlinear) biochemical dynamics. This becomes particularly relevant if one adopts the perspective that biological systems have developed by optimizing the gathering and representation of information [87] (see also Ref. [88, ch. 6]). In view of this, biological information-handling processes are good candidates for deeper analysis within our thermodynamical framework. Examples of such processes are kinetic proofreading [89–95] and enzyme-assisted copolymerization [96–101], which can be thought of as mechanisms converting chemical energy to improve the accuracy of enzyme-assisted assembly processes involving competing substrates [86]. These processes have currently only been studied as single-enzyme mechanisms. One may also consider using our work to study proofreading mechanisms acting in metabolic processes [102], which involve nonlinear dynamics on complex networks. Nevertheless, metabolic networks require some care, since complex enzymatic reaction mechanisms are involved [103]. But our framework provides a basis to build effective coarse-graining procedures for enzymatic reactions [68]. We also foresee an increasing use of thermodynamics to improve the modeling of metabolic networks, as recently shown in Refs. [31–33]. Since our framework accounts for time-dependent drivings and transient dynamics, it could be used to represent the transmission of signals through chemical networks or their response to external modulations in the environment.
These features become crucial when considering problems such as signal transduction and biochemical switches [25, 104, 105], biochemical oscillations [29, 106], growth and self-organization in evolving biosystems [107, 108], or sensory system mechanisms [79, 81, 109–112]. Also, since transient signals in chemical networks can be used for computation [113–118] and have been shown to display Turing universality [119–122], one should be able to study the thermodynamical cost of such computations [123]. Finally, one should be able to use our framework to study any process that can be described as nucleation or reversible polymerization [124–129] (see also Ref. [130, ch. 5–6]), since such processes can be described by chemical networks [60].

As closing words, we believe that our results constitute an important contribution to the theoretical study of chemical networks. Our framework does for nonlinear chemical kinetics what stochastic thermodynamics has done for stochastic dynamics, namely build a systematic nonequilibrium thermodynamics on top of the dynamics. It also opens many new perspectives and builds bridges between approaches from different communities studying chemical networks:

mathematicians, who study chemical networks as dynamical systems; physicists, who study them as nonequilibrium complex systems; and biochemists as well as bioengineers, who aim for accurate models of metabolic networks.

ACKNOWLEDGMENTS

The present project was supported by the National Research Fund, Luxembourg, in the frame of project FNR/A11/02 and of the AFR PhD Grant 2014-2, No. 9114110.

Appendix A: Chemical Potentials in Chemical Networks

When treating chemical networks, chemical potentials expressed in terms of molar fractions lead to contradictory results. Indeed, suppose

$\mu_\sigma = \mu^\circ_\sigma + RT \ln z_\sigma$ ,  (A1)

where the molar fractions are $z_\sigma \equiv Z_\sigma / Z$ and $Z = \sum_{\sigma \in S} Z_\sigma$ is the total molar concentration. At equilibrium,

$\nabla^\sigma_\rho \, \mu^{\mathrm{eq}}_\sigma = 0$ .  (A2)

It implies

$\prod_\sigma \big( z^{\mathrm{eq}}_\sigma \big)^{\nabla^\sigma_\rho} = \exp\Big\{ -\frac{\nabla^\sigma_\rho \, \mu^\circ_\sigma}{RT} \Big\}$ .  (A3)

On the other hand, the detailed balance property must be derived from dynamical principles as in § II D and reads

$\prod_\sigma \big( Z^{\mathrm{eq}}_\sigma \big)^{\nabla^\sigma_\rho} = \frac{k_{+\rho}}{k_{-\rho}}$ .  (A4)

Putting together (A3) and (A4) we obtain the following expression

$\frac{k_{+\rho}}{k_{-\rho}} = \exp\Big\{ -\frac{\nabla^\sigma_\rho \, \mu^\circ_\sigma}{RT} + \sum_\sigma \nabla^\sigma_\rho \ln Z^{\mathrm{eq}} \Big\}$ .  (A5)

This expression is not acceptable because it equates quantities that depend on the single reactions (the rate constants and the differences of standard chemical potentials) to a quantity that depends on the network as a whole (the equilibrium total concentration). In other words, the total concentration depends on the initial condition, or equivalently on the components and the chemostatted concentrations, whereas the rate constants and the differences of standard chemical potentials do not. Hence, we infer that chemical potentials expressed in terms of molar fractions are not consistent with the thermodynamic description of chemical networks. We show in the main text that the definition expressed in terms of concentrations is consistent.
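This inconsistency can be illustrated numerically. The sketch below (Python; the reaction A ⇌ 2B, the rate constants, and the conserved totals are all illustrative choices, not quantities from the main text) shows that the equilibrium ratio built from concentrations reproduces $k_{+\rho}/k_{-\rho}$ for any total concentration, while the ratio built from molar fractions depends on it:

```python
import math

def equilibrium(kp, km, M):
    """Equilibrium of A <=> 2B under mass action.

    Forward flux kp*Z_A, backward flux km*Z_B**2, with the
    conservation law Z_A + Z_B/2 = M (amount of A-units).
    The condition kp*Z_A = km*Z_B**2 gives a quadratic in Z_B:
    km*Z_B**2 + (kp/2)*Z_B - kp*M = 0.
    """
    a, b, c = km, kp / 2.0, -kp * M
    Z_B = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return M - Z_B / 2.0, Z_B  # (Z_A, Z_B)

kp, km = 2.0, 1.0          # illustrative rate constants
for M in (1.0, 10.0):      # two different total amounts of matter
    Z_A, Z_B = equilibrium(kp, km, M)
    Z = Z_A + Z_B                            # total concentration
    conc_ratio = Z_B ** 2 / Z_A              # from concentrations: always kp/km
    frac_ratio = (Z_B / Z) ** 2 / (Z_A / Z)  # from molar fractions: depends on Z
    print(M, conc_ratio, frac_ratio)
```

For the values above, `conc_ratio` equals $k_{+}/k_{-} = 2$ in both cases, while `frac_ratio` equals $2/Z$ and thus changes with the total concentration, which is exactly the network dependence appearing in (A5).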

Appendix B: Entropy of Chemical Network

We argue in this appendix that the entropy function obtained in § III D, eq. (43), can be obtained as a large particle limit of the stochastic entropy. In the stochastic description of a chemical network, the state is characterized by the population vector $n = \{n_\sigma\}$. The probability that the network is in the state $n$ at time $t$ is denoted by $p_t(n)$. The stochastic entropy of a chemical network is given by [22, 68]

$S(n) = -R \ln p_t(n) + s(n)$ ,  (B1)

where the second term is the configurational entropy

$s(n) \equiv \sum_\sigma \Big\{ \underbrace{(-R \ln \omega_\sigma)}_{\equiv\, \tilde{s}^\circ_\sigma}\, n_\sigma - R \ln \frac{n_\sigma!}{V^{n_\sigma}} \Big\}$ .  (B2)

In the last equation, $\omega_\sigma$ represents the so-called normalizing volume of one $S_\sigma$ molecule [22]. We interpret the underbraced term $\tilde{s}^\circ_\sigma$ as the configurational entropy of one single $S_\sigma$ molecule [68].

We now assume that the probability distribution behaves as a discrete delta function, $p_t(n) \simeq \delta(n - \hat{n}(t))$, in the large particle limit $n_\sigma \gg 1$, $\forall \sigma$. The vector $\hat{n} \equiv \hat{n}(t)$ denotes the macroscopic amount of chemical species, such that $Z = \hat{n}/(V N_A)$, $N_A$ denoting the Avogadro constant. Hence, the average entropy becomes almost equal to the configurational entropy evaluated at $\hat{n}$,

$\langle S \rangle = \sum_n p_t(n)\, S(n) \simeq s(\hat{n})$ .  (B3)

Mindful of the Stirling approximation ($\ln m! \simeq m \ln m - m$ for $m \gg 1$), the configurational entropy can be further approximated by

$s(\hat{n}) \simeq \sum_\sigma \Big\{ \hat{n}_\sigma (-R \ln \omega_\sigma) - \hat{n}_\sigma R \ln \frac{\hat{n}_\sigma}{V} + R\, \hat{n}_\sigma \Big\} = \sum_\sigma \Big\{ \underbrace{(-R \ln N_A \omega_\sigma)}_{\equiv\, s^\circ_\sigma}\, \hat{n}_\sigma - \hat{n}_\sigma R \ln \frac{\hat{n}_\sigma}{V N_A} \Big\} + R\, \hat{n}$ ,  (B4)

where $\hat{n} = \sum_\sigma \hat{n}_\sigma$. As for $\tilde{s}^\circ_\sigma$, we interpret the underbraced term $s^\circ_\sigma$ as the intrinsic entropy of one mole of $S_\sigma$, i.e. the standard entropy of formation of $S_\sigma$. Dividing the above expression by $V N_A$ we finally get the macroscopic entropy density function

$\langle S \rangle / (V N_A) \simeq \sum_\sigma Z_\sigma s^\circ_\sigma - \sum_\sigma Z_\sigma R \ln Z_\sigma + R Z$ ,  (B5)

which is the same expression as in eq. (43) up to a constant. Finally, from the interpretation of $\tilde{s}^\circ_\sigma \equiv -R \ln \omega_\sigma$ as the configurational entropy and of $s^\circ_\sigma \equiv -R \ln N_A \omega_\sigma$ as the standard entropy of formation, we are led to interpret the energy of chemical species in Ref. [22] (denoted by $E_j$ in their notation) as the standard enthalpy of formation per molecule, i.e. $E_\sigma = h^\circ_\sigma / N_A$.
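The accuracy of the Stirling approximation invoked in (B4) grows quickly with the particle number. A minimal check (Python; the particle numbers and the unit volume are illustrative):

```python
import math

R = 8.314  # gas constant in J/(mol K)

def s_exact(n, V):
    # Exact configurational term -R*ln(n!/V^n), via lgamma for stability.
    return -R * (math.lgamma(n + 1.0) - n * math.log(V))

def s_stirling(n, V):
    # Stirling: ln n! ~ n ln n - n, hence -R*ln(n!/V^n) ~ -n*R*ln(n/V) + R*n.
    return -n * R * math.log(n / V) + R * n

V = 1.0
for n in (10, 1000, 100000):
    rel_err = abs(s_stirling(n, V) - s_exact(n, V)) / abs(s_exact(n, V))
    print(n, rel_err)
```

The relative error already drops below $10^{-3}$ at $n = 1000$ and keeps shrinking, so for macroscopic particle numbers the replacement made in (B4) is essentially exact.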

Appendix C: Adiabatic–Nonadiabatic Entropy Production Rate Decomposition

We prove in this appendix the positivity of the adiabatic and nonadiabatic entropy production rates (64). It is worth mentioning that the positivity of the nonadiabatic term derives from the theory of complex-balanced networks [35, 75].

We recall here the definition of a complex-balanced steady state (24),

$\partial^{\gamma_{\mathrm{x}}}_{\rho}\, \bar{J}^{\rho} = 0 \quad \forall \gamma_{\mathrm{x}} \in C_{\mathrm{x}}$ ,  (C1)

where $\{\gamma_{\mathrm{x}}\}$ label the internal complexes and $\partial$ is the incidence matrix of the network (22). The external complexes are labeled by $\gamma_{\mathrm{y}}$. On the other hand, the mass action law currents (3) can be written as [50, 76]

$J^{\rho} = \big( K^{+\rho}_{\gamma} - K^{-\rho}_{\gamma} \big) \prod_{\sigma} Z_{\sigma}^{C^{\sigma}_{\gamma}} \equiv K^{\rho}_{\gamma}\, \psi^{\gamma}$ ,  (C2)

where $\psi^{\gamma} \equiv \prod_{\sigma} Z_{\sigma}^{C^{\sigma}_{\gamma}}$, summation over repeated complex indices is understood, and $K = \{K^{\rho}_{\gamma} \equiv K^{+\rho}_{\gamma} - K^{-\rho}_{\gamma}\}$ is the rate constants matrix, whose elements are defined by

$K^{+\rho}_{\gamma} = \begin{cases} k_{+\rho} & \text{if } \gamma \text{ is the reactant of } +\rho\,, \\ 0 & \text{otherwise}\,, \end{cases} \qquad K^{-\rho}_{\gamma} = \begin{cases} k_{-\rho} & \text{if } \gamma \text{ is the product of } +\rho\,, \\ 0 & \text{otherwise}\,. \end{cases}$  (C3)

Hence, the definition of a complex-balanced network (C1) reads

$W^{\gamma_{\mathrm{x}}}_{\gamma'}\, \bar{\psi}^{\gamma'} = 0 \quad \forall \gamma_{\mathrm{x}} \in C_{\mathrm{x}}$ ,  (C4)

where $W \equiv \partial K$ is the so-called kinetic matrix [35] and $\bar{\psi}^{\gamma} \equiv \prod_{\sigma} \bar{Z}_{\sigma}^{C^{\sigma}_{\gamma}}$. The kinetic matrix is a Laplacian matrix [73, 76]: any off-diagonal term $W^{\gamma}_{\gamma'}$ is equal to the rate constant of the reaction having $\gamma'$ as reactant and $\gamma$ as product, if that reaction exists, and is zero otherwise. Also, it satisfies

$\sum_{\gamma} W^{\gamma}_{\gamma'} = 0$ ,  (C5)

which means that each diagonal term is equal to minus the sum of the off-diagonal terms along its column. The detailed balance property implies that

$W^{\gamma}_{\gamma'}\, \psi^{\gamma'}_{\mathrm{eq}} = W^{\gamma'}_{\gamma}\, \psi^{\gamma}_{\mathrm{eq}} \quad \forall\, \gamma, \gamma'$ .  (C6)

Finally, we observe that from the definition of a complex-balanced steady state (C4) and from eq. (C5) we get

$\sum_{\gamma_{\mathrm{y}}} W^{\gamma_{\mathrm{y}}}_{\gamma'}\, \bar{\psi}^{\gamma'} = 0$ .  (C7)

In order to prove the non-negativity of the adiabatic term, we rewrite it as

$\Big( \frac{\mathrm{d_i} S}{\mathrm{d}t} \Big)_{\mathrm{a}} = W^{\gamma}_{\gamma'}\, \psi^{\gamma'} \ln \frac{\psi^{\gamma}_{\mathrm{eq}}}{\bar{\psi}^{\gamma}}$ .  (C8)

Because of (C5), it is equal to

$\Big( \frac{\mathrm{d_i} S}{\mathrm{d}t} \Big)_{\mathrm{a}} = W^{\gamma}_{\gamma'}\, \psi^{\gamma'} \ln \frac{\psi^{\gamma}_{\mathrm{eq}}\, \bar{\psi}^{\gamma'}}{\bar{\psi}^{\gamma}\, \psi^{\gamma'}_{\mathrm{eq}}}$ .  (C9)

From the log inequality $-\ln x \geq 1 - x$ and from (C6) we get

$\Big( \frac{\mathrm{d_i} S}{\mathrm{d}t} \Big)_{\mathrm{a}} \geq W^{\gamma}_{\gamma'}\, \psi^{\gamma'} \bigg( 1 - \frac{\bar{\psi}^{\gamma}\, \psi^{\gamma'}_{\mathrm{eq}}}{\psi^{\gamma}_{\mathrm{eq}}\, \bar{\psi}^{\gamma'}} \bigg) = -W^{\gamma}_{\gamma'}\, \bar{\psi}^{\gamma'}\, \frac{\psi^{\gamma}}{\bar{\psi}^{\gamma}} = 0$ .  (C10)

In the last equality we made use of the complex balance (C4), of (C7), and of the fact that $\psi^{\gamma_{\mathrm{y}}} = \bar{\psi}^{\gamma_{\mathrm{y}}}$, $\forall\, \gamma_{\mathrm{y}} \in C_{\mathrm{y}}$.

Concerning the nonadiabatic term, we rewrite it as

$\Big( \frac{\mathrm{d_i} S}{\mathrm{d}t} \Big)_{\mathrm{na}} = W^{\gamma}_{\gamma'}\, \psi^{\gamma'} \ln \frac{\bar{\psi}^{\gamma}}{\psi^{\gamma}}$ .  (C11)

Because of (C5), it is equal to

$\Big( \frac{\mathrm{d_i} S}{\mathrm{d}t} \Big)_{\mathrm{na}} = W^{\gamma}_{\gamma'}\, \psi^{\gamma'} \ln \frac{\bar{\psi}^{\gamma}\, \psi^{\gamma'}}{\psi^{\gamma}\, \bar{\psi}^{\gamma'}}$ .  (C12)

From the log inequality $-\ln x \geq 1 - x$ we get

$\Big( \frac{\mathrm{d_i} S}{\mathrm{d}t} \Big)_{\mathrm{na}} \geq W^{\gamma}_{\gamma'}\, \psi^{\gamma'} \bigg( 1 - \frac{\psi^{\gamma}\, \bar{\psi}^{\gamma'}}{\bar{\psi}^{\gamma}\, \psi^{\gamma'}} \bigg) = -W^{\gamma}_{\gamma'}\, \bar{\psi}^{\gamma'}\, \frac{\psi^{\gamma}}{\bar{\psi}^{\gamma}} = 0$ .  (C13)
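The objects used in this proof are easy to realize numerically. The sketch below (Python with NumPy; the three-state linear network and its rate constants are illustrative) builds the kinetic matrix $W$, checks the Laplacian property (C5), and evaluates the nonadiabatic expression (C12) for an arbitrary positive $\psi$. For a linear network every complex contains a single species, so $\psi$ reduces to a vector of concentrations, all complexes are internal, and the steady state is automatically complex-balanced:

```python
import numpy as np

# Linear network A <-> B <-> C <-> A: each complex holds one species.
# Illustrative rate constants for reactions src -> dst (hypothetical values).
k = {(0, 1): 2.0, (1, 0): 1.0, (1, 2): 3.0, (2, 1): 1.0, (2, 0): 0.5, (0, 2): 1.5}

W = np.zeros((3, 3))
for (src, dst), rate in k.items():
    W[dst, src] += rate   # off-diagonal: rate constant of reaction src -> dst
    W[src, src] -= rate   # diagonal: minus the column sum, cf. eq. (C5)

# Steady state psi_bar: right null vector of W, made positive and normalized.
w, v = np.linalg.eig(W)
psi_bar = np.real(v[:, np.argmin(np.abs(w))])
psi_bar = np.abs(psi_bar) / np.abs(psi_bar).sum()

# Nonadiabatic entropy production, eq. (C12), for an arbitrary positive psi
# (diagonal terms drop out since their logarithm vanishes).
rng = np.random.default_rng(0)
psi = rng.uniform(0.1, 2.0, size=3)
epr_na = sum(
    W[g, gp] * psi[gp] * np.log((psi_bar[g] * psi[gp]) / (psi[g] * psi_bar[gp]))
    for g in range(3) for gp in range(3) if g != gp
)
print(epr_na)  # non-negative; vanishes when psi is proportional to psi_bar
```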

In the last equality we made use of the complex balance (C4), of (C7), and of the fact that $\psi^{\gamma_{\mathrm{y}}} = \bar{\psi}^{\gamma_{\mathrm{y}}}$, $\forall\, \gamma_{\mathrm{y}} \in C_{\mathrm{y}}$. Thus, we proved the positivity of both the adiabatic and the nonadiabatic entropy production rates of complex-balanced networks.

Appendix D: Mathematical Description of Complex-Balanced Networks

In this appendix we discuss the mathematical description of nondriven complex-balanced networks and the deficiency of chemical networks. The definition of complex balance we gave in eq. (24) differs from the one given in the literature, though it is equivalent. In the mathematical literature it is customary to consider all the external complexes $\gamma_{\mathrm{y}}$ as one unique null complex $\varnothing$, which generically represents the environment. In the same way, the chemostatted concentrations are all included in the rate constants, which thus become pseudo-rate constants.

Example: the network shown in fig. 2 becomes that in fig. 6.

FIG. 6: Mathematical representation of the open chemical network depicted in fig. 2. The external complexes $S_1$ and $S_5$ are merged into a unique null complex: $\varnothing \rightleftharpoons 2 S_2 + S_3$ with pseudo-rate constants $k_{+1} Z_1$ and $k_{-1}$, and $S_3 + S_4 \rightleftharpoons \varnothing$ with $k_{+2}$ and $k_{-2} Z_5$. The null complex describes the exchange of matter with the environment; the information about the concentrations of the chemostatted species is included in the rate constants.

The incidence matrix of the chemical network obtained by the aforementioned merging procedure is denoted $\hat{\partial}$. It is straightforward to show that $\hat{\partial}$ is related to $\partial$ by

$\hat{\partial}^{\gamma_{\mathrm{x}}}_{\rho} = \partial^{\gamma_{\mathrm{x}}}_{\rho} \,, \qquad \hat{\partial}^{\varnothing}_{\rho} = \sum_{\gamma_{\mathrm{y}}} \partial^{\gamma_{\mathrm{y}}}_{\rho}$ .  (D1)

The mathematical definition of a complex-balanced steady state reads

$\hat{\partial}^{\gamma_{\mathrm{x}}}_{\rho}\, \bar{J}^{\rho} = 0 \quad \forall \gamma_{\mathrm{x}} \in C_{\mathrm{x}} \,, \qquad \hat{\partial}^{\varnothing}_{\rho}\, \bar{J}^{\rho} = 0$ .  (D2)

Since $\hat{\partial}^{\gamma_{\mathrm{x}}}_{\rho} = \partial^{\gamma_{\mathrm{x}}}_{\rho}$, the above definition implies the one given in the main text (24). The fact that eq. (24) implies eq. (D2) is derived in eq. (C7), since $\hat{\partial}^{\varnothing}_{\rho} = \sum_{\gamma_{\mathrm{y}}} \partial^{\gamma_{\mathrm{y}}}_{\rho}$.

A definition of deficiency is given by [23]

$\delta = \dim \ker \nabla - \dim \ker \hat{\partial}$ .  (D3)

The kernel of the incidence matrix $\hat{\partial}$ identifies the set of cycles of the reaction graph, while the kernel of $\nabla$ identifies the set of cycles, eqs. (18) and (19), of the chemical network. The deficiency hence measures the mismatch between the possible cyclic transformations of chemical species and the possible cyclic transformations of complexes in the network. Other equivalent definitions of deficiency can be found in Refs. [49] and [50].

Deficiency-zero networks are characterized by $\delta = 0$, i.e. they exhibit a one-to-one correspondence between cycles and hyper-cycles. This topological property has many dynamical consequences, the most important of which is that deficiency-zero networks are unconditionally complex-balanced [36, 37]. As shown in Ref. [23], deficiency also has consequences for the stochastic thermodynamic description of networks: the stochastic entropy production of a deficiency-zero network converges to the deterministic entropy production in the long-time limit.
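The deficiency (D3) can be computed directly from matrix ranks. A minimal sketch (Python with NumPy; the open network $\varnothing \rightleftharpoons A$, $\varnothing \rightleftharpoons B$, $A + B \rightleftharpoons \varnothing$ in the null-complex representation is a hypothetical example, chosen because it has deficiency one):

```python
import numpy as np

# Complexes: [null, A, B, A+B]; internal species: [A, B]; 3 forward reactions:
#   r1: null -> A,  r2: null -> B,  r3: A+B -> null.

# Incidence matrix \hat{d}: -1 for the reactant complex, +1 for the product.
d_hat = np.array([
    [-1, -1,  1],   # null
    [ 1,  0,  0],   # A
    [ 0,  1,  0],   # B
    [ 0,  0, -1],   # A+B
])

# Stoichiometric matrix for the internal species A and B.
grad = np.array([
    [ 1,  0, -1],   # A
    [ 0,  1, -1],   # B
])

def dim_ker(M):
    # Dimension of the kernel = number of columns minus the rank.
    return M.shape[1] - np.linalg.matrix_rank(M)

delta = dim_ker(grad) - dim_ker(d_hat)  # eq. (D3)
print(delta)  # -> 1
```

The result agrees with the classical formula $\delta = c - l - s$ (with $c$ complexes, $l$ linkage classes, and $s$ the rank of the stoichiometric matrix): here $4 - 1 - 2 = 1$.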

[1] J. Gibbs, The Scientific Papers of J. Willard Gibbs, Vol. 1: Thermodynamics (Dover, 1961).
[2] T. de Donder, L'affinité, Mémoires de la Classe des Sciences, Vol. 1 (Gauthier-Villars, 1927).
[3] I. Prigogine, Etude Thermodynamique des Phénomènes Irréversibles (Desoer, Liège, 1947).
[4] I. Prigogine, Introduction to Thermodynamics of Irreversible Processes (John Wiley & Sons, 1967).
[5] I. Prigogine and R. Defay, Chemical Thermodynamics (Longmans, Green & Co., 1954).
[6] G. F. Oster, A. S. Perelson, and A. Katchalsky, Q. Rev. Biophys. 6, 1 (1973).
[7] M. Malek-Mansour and G. Nicolis, J. Stat. Phys. 13, 197 (1975).
[8] J. Schnakenberg, Rev. Mod. Phys. 48, 571 (1976).
[9] T. Hill, Free Energy Transduction in Biology (Academic Press, New York, 1977).
[10] C. Y. Mou, J. Luo, and G. Nicolis, J. Chem. Phys. 84, 7011 (1986).
[11] J. Ross, Thermodynamics and Fluctuations far from Equilibrium (Springer, 2008).
[12] J. Ross and A. F. Villaverde, Entropy 12, 2199 (2010).
[13] D. A. McQuarrie, J. Appl. Probab. 4, 413 (1967).
[14] D. T. Gillespie, Physica A 188, 404 (1992).
[15] K. Sekimoto, Stochastic Energetics, 1st ed., Lecture Notes in Physics, Vol. 799 (Springer-Verlag Berlin Heidelberg, 2010).
[16] C. Jarzynski, Annu. Rev. Condens. Matter Phys. 2, 329 (2011).
[17] U. Seifert, Rep. Prog. Phys. 75, 126001 (2012).
[18] C. Van den Broeck and M. Esposito, Physica A 2, 11 (2014).
[19] P. Gaspard, J. Chem. Phys. 120, 8898 (2004).
[20] D. Andrieux and P. Gaspard, J. Chem. Phys. 121, 6167 (2004).
[21] T. Schmiedl, T. Speck, and U. Seifert, J. Stat. Phys. 128, 77 (2007).
[22] T. Schmiedl and U. Seifert, J. Chem. Phys. 126, 044101 (2007).
[23] M. Polettini, A. Wachtel, and M. Esposito, J. Chem. Phys. 143, 184103 (2015).
[24] R. A. Alberty, Thermodynamics of Biochemical Reactions (Wiley-Interscience, 2003).
[25] D. A. Beard and H. Qian, Chemical Biophysics: Quantitative Analysis of Cellular Systems, 1st ed., Cambridge Texts in Biomedical Engineering (Cambridge University Press, 2008).
[26] B. Ø. Palsson, Systems Biology: Properties of Reconstructed Networks (Cambridge University Press, 2006).
[27] B. Ø. Palsson, Systems Biology: Simulation of Dynamic Network States (Cambridge University Press, 2011).
[28] B. Ø. Palsson, Systems Biology: Constraint-based Network Reconstruction and Analysis (Cambridge University Press, 2015).
[29] A. Goldbeter, Biochemical Oscillations and Cellular Rhythms (Cambridge University Press, 1996).
[30] I. Epstein and J. Pojman, An Introduction to Nonlinear Chemical Dynamics: Oscillations, Waves, Patterns, and Chaos, Topics in Physical Chemistry (Oxford University Press, 1998).
[31] D. A. Beard, S.-d. Liang, and H. Qian, Biophys. J. 83, 79 (2002).
[32] D. A. Beard, E. Babson, E. Curtis, and H. Qian, J. Theor. Biol. 228, 327 (2004).
[33] A. Chakrabarti, L. Miskovic, K. C. Soh, and V. Hatzimanikatis, Biotechnol. J. 8, 1043 (2013).
[34] H. Qian, D. A. Beard, and S.-d. Liang, Eur. J. Biochem. 270, 415 (2003).
[35] F. Horn and R. Jackson, Arch. Ration. Mech. An. 47, 81 (1972).
[36] M. Feinberg, Arch. Ration. Mech. An. 49, 187 (1972).
[37] F. Horn, Arch. Ration. Mech. An. 49, 172 (1972).
[38] T. G. Kurtz, J. Chem. Phys. 57, 2976 (1972).
[39] D. F. Anderson, G. Craciun, and T. G. Kurtz, Bull. Math. Biol. 72, 1947 (2010).
[40] D. F. Anderson, G. Craciun, M. Gopalkrishnan, and C. Wiuf, Bull. Math. Biol. 77, 1744 (2015).
[41] D. Cappelletti and C. Wiuf, ArXiv e-prints (2015).
[42] M. Esposito and C. Van den Broeck, Europhys. Lett. 95, 40004 (2011).
[43] T. Sagawa, Thermodynamics of Information Processing in Small Systems (Springer Japan, 2013), Doctoral Thesis.
[44] J. M. R. Parrondo, J. M. Horowitz, and T. Sagawa, Nature Phys. 11, 131 (2015).
[45] M. Esposito, U. Harbola, and S. Mukamel, Phys. Rev. E 76, 031132 (2007).
[46] R. J. Harris and G. M. Schütz, J. Stat. Mech. Theor. Exp., P07020 (2007).
[47] H. Ge and H. Qian, Phys. Rev. E 81, 051133 (2010).
[48] H. Ge and H. Qian, ArXiv e-prints (2016).
[49] M. Feinberg, “Lectures on chemical reaction networks” (1979), Lecture Notes.
[50] J. Gunawardena, “Chemical reaction network theory for in-silico biologists” (2003), unpublished.
[51] M. Pekař, Prog. React. Kinet. Mech. 30, 3 (2005).
[52] M. Pekař, Reac. Kinet. Mech. Cat. 99, 29 (2009).
[53] M. Polettini and M. Esposito, J. Chem. Phys. 141, 024117 (2014).
[54] D. Kondepudi and I. Prigogine, Modern Thermodynamics: From Heat Engines to Dissipative Structures, 2nd ed. (Wiley, 2014).
[55] Moiety, in Compendium of Chemical Terminology (IUPAC Gold Book) (2014).
[56] S. Schuster and R. Schuster, J. Math. Chem. 3, 25 (1989).
[57] J. Leiser and J. J. Blum, Cell Biophys. 11, 123 (1987).
[58] S. Schuster and C. Hilgetag, J. Biol. Syst. 02, 165 (1994).
[59] S. Klamt and J. Stelling, Trends Biotechnol. 21, 64 (2003).
[60] R. Rao, D. Lacoste, and M. Esposito, J. Chem. Phys. 143, 244903 (2015).
[61] G. Craciun, ArXiv e-prints (2015).
[62] S. Klamt, U.-U. Haus, and F. Theis, PLoS Comput. Biol. 5, e1000385 (2009).
[63] I. Prigogine, Physica 15, 272 (1949).

[64] E. Cohen, T. Cvitas, J. Frey, B. Holmström, K. Kuchitsu, R. Marquardt, I. Mills, F. Pavese, M. Quack, J. Stohner, H. Strauss, M. Takami, and A. Thor, Quantities, Units and Symbols in Physical Chemistry (IUPAC Green Book), 3rd ed. (IUPAC & RSC Publishing, 2008), 2nd Printing, p. 61.
[65] R. Holyst and A. Poniewierski, Thermodynamics for Chemists, Physicists and Engineers (Springer, 2012).
[66] M. Polettini and M. Esposito, J. Chem. Phys. (unpublished).
[67] H. Qian and D. A. Beard, Biophys. Chem. 114, 213 (2005).
[68] M. Esposito, Phys. Rev. E 85, 041125 (2012).
[69] E. T. Jaynes, Phys. Rev. 106, 620 (1957).
[70] D. Shear, J. Theor. Biol. 16, 212 (1967).
[71] J. Higgins, J. Theor. Biol. 21, 293 (1968).
[72] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. (Wiley-Interscience, 2006).
[73] A. van der Schaft, S. Rao, and B. Jayawardhana, SIAM J. Appl. Math. 73, 953 (2013).
[74] M. Esposito and C. Van den Broeck, Phys. Rev. E 82, 011143 (2010).
[75] M. Gopalkrishnan, ArXiv e-prints (2013).
[76] A. van der Schaft, S. Rao, and B. Jayawardhana, J. Math. Chem. 53, 1445 (2015).
[77] M. Esposito and G. Schaller, Europhys. Lett. 99, 30003 (2012).
[78] G. Diana, G. B. Bagci, and M. Esposito, Phys. Rev. E 87, 012111 (2013).
[79] J. M. Horowitz and M. Esposito, Phys. Rev. X 4, 031015 (2014).
[80] A. C. Barato and U. Seifert, Phys. Rev. Lett. 112, 090601 (2014).
[81] S. Bo, M. Del Giudice, and A. Celani, J. Stat. Mech. Theor. Exp., P01014 (2015).
[82] P. Strasberg, G. Schaller, T. Brandes, and M. Esposito, Phys. Rev. Lett. 110, 040601 (2013).
[83] N. Golubeva and A. Imparato, Phys. Rev. Lett. 109, 190602 (2012).
[84] B. Altaner, A. Wachtel, and J. Vollmer, Phys. Rev. E 92, 042133 (2015).
[85] U. Seifert, Eur. Phys. J. E 34, 26 (2011).
[86] R. Rao and L. Peliti, J. Stat. Mech. Theor. Exp., P06001 (2015), arXiv:1504.02494.
[87] G. Tkačik and W. Bialek, Annu. Rev. Condens. Matter Phys. 7, 1 (2016).
[88] W. Bialek, Biophysics: Searching for Principles (Princeton University Press, 2012).
[89] J. J. Hopfield, Proc. Natl. Acad. Sci. U.S.A. 71, 4135 (1974).
[90] J. Ninio, Biochimie 57, 587 (1975).
[91] J. J. Hopfield, Proc. Natl. Acad. Sci. U.S.A. 77, 5248 (1980).
[92] E. Sontag, IEEE Transactions on Automatic Control 46, 1028 (2001).
[93] H. Ge, M. Qian, and H. Qian, Phys. Rep. 510, 87 (2012).
[94] A. Murugan, D. H. Huse, and S. Leibler, Proc. Natl. Acad. Sci. U.S.A. 109, 12034 (2012).
[95] A. Murugan, D. H. Huse, and S. Leibler, Phys. Rev. X 4, 021016 (2014).
[96] C. H. Bennett, Biosystems 11, 85 (1979).
[97] D. Andrieux and P. Gaspard, Proc. Natl. Acad. Sci. U.S.A. 105, 9516 (2008).
[98] D. Andrieux and P. Gaspard, J. Chem. Phys. 130, 014901 (2009).
[99] J. R. Arias-Gonzalez, PLoS ONE 7, e42272 (2012).
[100] P. Sartori and S. Pigolotti, Phys. Rev. Lett. 110, 188101 (2013).
[101] P. Sartori and S. Pigolotti, Phys. Rev. X 5, 041039 (2015).
[102] C. L. Linster, E. van Schaftingen, and A. D. Hanson, Nat. Chem. Biol. 9, 72 (2013).
[103] A. Cornish-Bowden, Fundamentals of Enzyme Kinetics, 4th ed. (Wiley-Blackwell, 2012).
[104] H. Qian and T. C. Reluga, Phys. Rev. Lett. 94, 028101 (2005).
[105] H. Qian and S. Roy, IEEE Trans. Nanobiosci. 11, 289 (2012).
[106] J. M. G. Vilar, H. Y. Kueh, N. Barkai, and S. Leibler, Proc. Natl. Acad. Sci. U.S.A. 99, 5988 (2002).
[107] G. Nicolis and I. Prigogine, Self-organization in Nonequilibrium Systems: From Dissipative Structures to Order Through Fluctuations (Wiley-Blackwell, 1977).
[108] R. Feistel and W. Ebeling, Physics of Self-Organization and Evolution (Wiley VCH, 2011).
[109] P. Mehta and D. J. Schwab, Proc. Natl. Acad. Sci. U.S.A. 109, 17978 (2012).
[110] P. Sartori, L. Granger, C. F. Lee, and J. M. Horowitz, PLoS Comput. Biol. 10, e1003974 (2014).
[111] D. Hartich, A. C. Barato, and U. Seifert, New J. Phys. 17, 055026 (2015).
[112] S.-W. Wang, Y. Lan, and L.-H. Tang, J. Stat. Mech. Theor. Exp., P07025 (2015).
[113] Y. Sakakibara and Y. Mi, eds., DNA Computing and Molecular Programming (Springer Berlin Heidelberg, 2011).
[114] L. Cardelli and W. Shih, eds., DNA Computing and Molecular Programming (Springer Berlin Heidelberg, 2011).
[115] D. Stefanovic and A. Turberfield, eds., DNA Computing and Molecular Programming (Springer Berlin Heidelberg, 2012).
[116] D. Soloveichik and B. Yurke, eds., DNA Computing and Molecular Programming (Springer International Publishing, 2013).
[117] S. Murata and S. Kobayashi, eds., DNA Computing and Molecular Programming (Springer International Publishing, 2014).
[118] A. Phillips and P. Yin, eds., DNA Computing and Molecular Programming (Springer International Publishing, 2015).
[119] A. Hjelmfelt, E. Weinberger, and J. Ross, Proc. Natl. Acad. Sci. U.S.A. 88, 10983 (1991).
[120] A. Hjelmfelt, E. Weinberger, and J. Ross, Proc. Natl. Acad. Sci. U.S.A. 89, 383 (1992).
[121] M. O. Magnasco, Phys. Rev. Lett. 78, 1190 (1997).
[122] D. Soloveichik, M. Cook, E. Winfree, and J. Bruck, Nat. Comp. 7, 615 (2008).
[123] C. H. Bennett, Int. J. Theor. Phys. 21, 905 (1982).
[124] T. P. J. Knowles, W. Shu, G. L. Devlin, S. Meehan, S. Auer, C. M. Dobson, and M. E. Welland, Proc. Natl. Acad. Sci. U.S.A. 104, 10016 (2007).
[125] T. P. J. Knowles, C. A. Waudby, G. L. Devlin, S. I. A. Cohen, A. Aguzzi, M. Vendruscolo, E. M. Terentjev, M. E. Welland, and C. M. Dobson, Science 326, 1533 (2009).
[126] S. I. A. Cohen, S. Linse, L. M. Luheshi, E. Hellstrand, D. A. White, L. Rajah, D. E. Otzen, M. Vendruscolo, C. M. Dobson, and T. P. J. Knowles, Proc. Natl. Acad. Sci. U.S.A. 110, 9758 (2013).
[127] Z. Budrikis, G. Costantini, C. A. M. La Porta, and S. Zapperi, Nat. Commun. 5, 3620 (2014).
[128] Ö. Kartal, S. Mahlow, A. Skupin, and O. Ebenhöh, Mol. Syst. Biol. 7, 542 (2011).
[129] S. Lahiri, Y. Wang, M. Esposito, and D. Lacoste, New J. Phys. 17, 085008 (2015).
[130] P. Krapivsky, S. Redner, and E. Ben-Naim, A Kinetic View of Statistical Physics (Cambridge University Press, 2010).