A dissipative system is a thermodynamically open system which is operating out of, and often far from, thermodynamic equilibrium in an environment with which it exchanges energy and matter. A tornado may be thought of as a dissipative system. Dissipative systems stand in contrast to conservative systems.
A dissipative structure is a dissipative system that has a dynamical regime that is in some sense in a reproducible steady state. This reproducible steady state may be reached by natural evolution of the system, by artifice, or by a combination of these two.
A dissipative structure is characterized by the spontaneous appearance of symmetry breaking (anisotropy) and the formation of complex, sometimes chaotic, structures where interacting particles exhibit long range correlations. Examples in everyday life include convection, turbulent flow, cyclones, hurricanes and living organisms. Less common examples include lasers, Bénard cells, droplet cluster, and the Belousov–Zhabotinsky reaction.[1]
One way of mathematically modeling a dissipative system is given in the article on wandering sets: it involves the action of a group on a measurable set.
Dissipative systems can also be used as a tool to study economic systems and complex systems.[2] For example, a dissipative system involving self-assembly of nanowires has been used as a model to understand the relationship between entropy generation and the robustness of biological systems.[3]
The Hopf decomposition states that dynamical systems can be decomposed into a conservative and a dissipative part; more precisely, it states that every measure space with a non-singular transformation can be decomposed into an invariant conservative set and an invariant dissipative set.
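As a hedged formal sketch of this statement (the notation below is chosen here, not taken from the article): for a σ-finite measure space with a non-singular transformation $T$, the space splits, up to null sets, as

```latex
% Hopf decomposition (sketch): X splits into two T-invariant measurable sets,
%   C -- the conservative part, on which Poincare-type recurrence holds
%        (almost every point returns to any positive-measure subset infinitely often),
%   D -- the dissipative part, a countable disjoint union of wandering sets W_n.
X = C \sqcup D, \qquad T^{-1}C = C, \quad T^{-1}D = D, \qquad
D = \bigsqcup_{n \in \mathbb{N}} W_n .
```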
Russian-Belgian physical chemist Ilya Prigogine, who coined the term dissipative structure, received the Nobel Prize in Chemistry in 1977 for his pioneering work on these structures, which have dynamical regimes that can be regarded as thermodynamic steady states and that, at least in some cases, can be described by suitable extremal principles in non-equilibrium thermodynamics.
In his Nobel lecture,[4] Prigogine explains how thermodynamic systems far from equilibrium can behave drastically differently from systems close to equilibrium. Near equilibrium, the local equilibrium hypothesis applies, and typical thermodynamic quantities such as free energy and entropy can be defined locally. One can then assume linear relations between the (generalized) fluxes and forces of the system. Two celebrated results of this linear thermodynamics are the Onsager reciprocal relations and the principle of minimum entropy production.[5] Efforts to extend these results to systems far from equilibrium showed that they do not hold in that regime, and opposite results were obtained.
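As a compact illustration of this linear regime (a standard textbook formulation, not quoted from the article): writing the thermodynamic fluxes as linear functions of the generalized forces,

```latex
% Near-equilibrium (linear) regime: fluxes J_i are linear in the forces X_j
J_i = \sum_j L_{ij} X_j ,
% Onsager reciprocal relations: the phenomenological coefficients are symmetric
L_{ij} = L_{ji} ,
% the entropy production rate is then a non-negative quadratic form of the forces,
% whose constrained stationary states express minimum entropy production
\sigma = \sum_i J_i X_i = \sum_{i,j} L_{ij} X_i X_j \;\ge\; 0 .
```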
One way to rigorously analyze such systems is to study their stability far from equilibrium. Close to equilibrium, one can show the existence of a Lyapunov function which ensures that the entropy tends to a stable maximum. Fluctuations are damped in the neighborhood of the fixed point, and a macroscopic description suffices. However, far from equilibrium, stability is no longer a universal property and can be broken. In chemical systems, this occurs in the presence of autocatalytic reactions, as in the example of the Brusselator. If the system is driven beyond a certain threshold, oscillations are no longer damped out but may be amplified. Mathematically, this corresponds to a Hopf bifurcation, where increasing one of the parameters beyond a certain value leads to limit cycle behavior. If spatial effects are taken into account through a reaction–diffusion equation, long-range correlations and spatially ordered patterns arise,[6] as in the case of the Belousov–Zhabotinsky reaction. Systems with such dynamic states of matter, arising as the result of irreversible processes, are dissipative structures.
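To make the threshold behaviour concrete, here is a minimal numerical sketch (parameter values, initial condition and the late-time "swing" diagnostic are illustrative choices, not taken from the article) of the Brusselator, whose fixed point at (A, B/A) loses stability through a Hopf bifurcation at B = 1 + A²:

```python
# Minimal sketch of the Brusselator reaction model:
#   dx/dt = A + x^2*y - (B + 1)*x
#   dy/dt = B*x - x^2*y
# Its fixed point (A, B/A) loses stability via a Hopf bifurcation at
# B = 1 + A^2; beyond that threshold a limit cycle appears.
import numpy as np
from scipy.integrate import solve_ivp

def brusselator(t, state, A, B):
    x, y = state
    return [A + x**2 * y - (B + 1.0) * x, B * x - x**2 * y]

A = 1.0
for B in (1.5, 2.5):  # below and above the Hopf threshold B = 1 + A^2 = 2
    sol = solve_ivp(brusselator, (0.0, 100.0), [1.2, 1.2], args=(A, B),
                    dense_output=True, rtol=1e-8, atol=1e-10)
    x_late = sol.sol(np.linspace(80.0, 100.0, 500))[0]  # discard transients
    swing = x_late.max() - x_late.min()
    regime = "sustained oscillation (limit cycle)" if swing > 0.01 else "damped to the fixed point"
    print(f"B = {B}: late-time swing in x = {swing:.3g} -> {regime}")
```

Below the threshold the trajectory spirals into the fixed point; above it, the late-time oscillation settles onto a limit cycle, the simplest temporally ordered dissipative structure described above.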
Recent research has seen reconsideration of Prigogine's ideas of dissipative structures in relation to biological systems.[7]
Willems first introduced the concept of dissipativity in systems theory[8] to describe dynamical systems by input–output properties. Considering a dynamical system described by its state $x(t)$, its input $u(t)$ and its output $y(t)$, the input–output correlation is given a supply rate $w(u(t), y(t))$. A system is said to be dissipative with respect to a supply rate if there exists a continuously differentiable storage function $V(x(t))$ such that $V(0) = 0$, $V(x(t)) \geq 0$, and

$$V(x(t_1)) - V(x(t_0)) \leq \int_{t_0}^{t_1} w(u(t), y(t))\, dt.$$

As a special case of dissipativity, a system is said to be passive if the above dissipativity inequality holds with respect to the passivity supply rate $w(u(t), y(t)) = u(t)^{\mathsf{T}} y(t)$.

The physical interpretation is that $V(x)$ is the energy stored in the system, whereas $w(u(t), y(t))$ is the energy that is supplied to the system.
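As a brief worked example (the particular system is an illustrative choice, not taken from the article): a mass $m$ driven by a force input $u$ through a viscous damper $c > 0$, with velocity output $y = v$, is passive when the stored kinetic energy is taken as the storage function:

```latex
% Mass-damper system: m \dot{v} = u - c v, output y = v,
% storage function V = (1/2) m v^2 (the stored kinetic energy).
\dot{V} = m v \dot{v} = v\,(u - c v) = u\,y - c\,y^{2} \;\le\; u\,y = w(u, y),
% integrating from t_0 to t_1 recovers the dissipation inequality above:
V(x(t_1)) - V(x(t_0)) \;\le\; \int_{t_0}^{t_1} u(t)\,y(t)\,dt .
```

The discarded term $c\,y^2$ is exactly the power lost in the damper, which is why the storage satisfies an inequality rather than an equality.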
This notion has a strong connection with Lyapunov stability, where the storage functions may play, under certain conditions of controllability and observability of the dynamical system, the role of Lyapunov functions.
Roughly speaking, dissipativity theory is useful for the design of feedback control laws for linear and nonlinear systems. Dissipative systems theory has been discussed by V.M. Popov, J.C. Willems, D.J. Hill, and P. Moylan. In the case of linear time-invariant systems, this property corresponds to positive real transfer functions, and a fundamental tool is the so-called Kalman–Yakubovich–Popov lemma, which relates the state-space and frequency-domain properties of positive real systems.[10] Dissipative systems remain an active field of research in systems and control, owing to their important applications.
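As an illustrative numerical sketch (the transfer function and the grid-based test are my own choices; this probes only the frequency-domain side of positive realness, not the state-space matrix inequality of the Kalman–Yakubovich–Popov lemma): one can sample Re G(jω) for a simple stable transfer function and check that it stays non-negative:

```python
# Frequency-domain check that the stable transfer function
# G(s) = (s + 2)/(s + 1) satisfies Re G(jw) >= 0 on a grid --
# the frequency-domain face of positive realness that the
# Kalman-Yakubovich-Popov lemma ties to a state-space condition.
import numpy as np

num = np.array([1.0, 2.0])   # numerator coefficients of s + 2
den = np.array([1.0, 1.0])   # denominator coefficients of s + 1

w = np.logspace(-3, 3, 2000)                       # frequency grid (rad/s)
G = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)

print("min Re G(jw) over grid:", G.real.min())     # analytically (2 + w^2)/(1 + w^2) >= 1
print("non-negative on grid:  ", bool(np.all(G.real >= 0.0)))
```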
Because quantum mechanics, like any classical dynamical system, relies heavily on Hamiltonian mechanics, for which time is reversible, these formulations are not intrinsically able to describe dissipative systems. It has been proposed that, in principle, one can weakly couple the system – say, an oscillator – to a bath, i.e., an assembly of many oscillators in thermal equilibrium with a broad band spectrum, and trace (average) over the bath. This yields a master equation, which is a special case of a more general setting called the Lindblad equation, the quantum equivalent of the classical Liouville equation. The well-known form of this equation and its quantum counterpart takes time as a reversible variable over which to integrate, but the very foundations of dissipative structures impose an irreversible and constructive role for time.
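For reference, the Lindblad (Gorini–Kossakowski–Sudarshan–Lindblad) master equation mentioned above is commonly written in the following standard form, where the jump operators and non-negative rates encode the coupling to the bath:

```latex
% Lindblad / GKSL master equation for the reduced density matrix \rho(t):
% unitary evolution under the Hamiltonian H plus irreversible dissipators
% built from jump operators L_k with non-negative rates \gamma_k.
\frac{d\rho}{dt} = -\frac{i}{\hbar}[H, \rho]
  + \sum_k \gamma_k \left( L_k \rho L_k^{\dagger}
  - \tfrac{1}{2}\left\{ L_k^{\dagger} L_k,\ \rho \right\} \right).
```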
Recent research has seen the quantum extension[11] of Jeremy England's theory of dissipative adaptation[7] (which generalizes Prigogine's ideas of dissipative structures to far-from-equilibrium statistical mechanics, as stated above).
The framework of dissipative structures, as a mechanism for understanding the behavior of systems in constant exchange of energy with their environment, has been successfully applied in different fields of science and in applications such as optics,[12][13] population dynamics and growth,[14][15][16] and chemomechanical structures.[17][18][19]