Portal:Complex Systems Digital Campus/E-Department on Modelling Multi-level Dynamics: from multi-level dynamics to multi-level models

Modelling multi-level dynamics: from multi-level dynamics to stochastic multi-level models, instabilities and robustness

Introduction

Hierarchical structures over a wide range of space-time scales are ubiquitous in the geosciences, the environment, physics, biology and socio-economic networks. They are a fundamental building block of our 4D world's complexity. Scale invariance, or scaling for short, is a powerful tool to investigate these structures and to infer properties across scales, instead of dealing with scale-dependent properties. Whereas scaling in time or in space has been investigated in many domains, 4D scaling analysis and modeling are still in their infancy, yet indispensable to describe, estimate, understand, simulate and predict the underlying dynamics. Complementary to this approach, random dynamical system theory is also a powerful framework for grasping multiscale dynamics. This theory is likely to provide interesting generalizations of what we have learned from deterministic dynamical systems, particularly in the case of bifurcations. Other important domains of investigation are phase transitions and the patterns and behaviors that emerge when we move up in scale in complex 4D fields.

Main challenges

1. The cascade paradigm

The idea of structures nested within larger structures, themselves nested within still larger structures and so on over a given range of space-time scales, has a long history in physics and can be traced back to Richardson's book (Weather Prediction by Numerical Process, 1922) and his humorous presentation of the cascade paradigm. This paradigm became widely used well beyond its original framework of atmospheric turbulence, in fields such as ecology, financial physics and high-energy physics. In a very generic manner, a cascade process can be understood as a space-time hierarchy of structures, where interactions with a mother structure are similar, in a given manner, to those with its daughter structures. This is a cornerstone of multiscale stochastic physics, as well as of complex systems: a system made of its own replicas at different scales.

Cascade models have gradually become well defined, especially in a scaling framework, i.e. when daughter interactions are a rescaled version of mother ones. A series of exact or rigorous results have been obtained in this framework. This provides a powerful multifractal toolbox to understand, analyse and simulate extremely variable fields over a wide range of scales, instead of at a single given scale. "Multifractal" refers to the fact that these fields can be understood as an embedded infinite hierarchy of fractals, e.g. the sets supporting field values that exceed a given threshold. These techniques have been applied in many disciplines with apparent success. However, a number of questions on cascade processes remain open, including: universality classes, generalized notions of scale, extreme values, predictability and, more generally, their connection with dynamical systems, either deterministic-like (e.g. Navier-Stokes equations) or random (those discussed in the next section). It is certainly important to look closely at their connections with the phase transitions, emerging patterns and behaviors discussed in the corresponding section. Particular emphasis should be placed on space-time analysis and/or simulations, as discussed in the last section on the general question of space-time scaling.
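To make the cascade paradigm concrete, here is a minimal sketch in Python (assuming NumPy) of a discrete multiplicative cascade and of the multifractal moment-scaling analysis it supports. The lognormal weights, the value sigma = 0.4 and the number of cascade steps are illustrative choices, not prescriptions from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def multiplicative_cascade(n_steps=12, sigma=0.4):
    """Discrete 1D multiplicative cascade.

    Start from a uniform unit field; at each step every cell is split in
    two and each half is multiplied by an independent lognormal weight of
    unit mean, so that daughter structures are rescaled replicas of their
    mother structure.
    """
    field = np.ones(1)
    for _ in range(n_steps):
        weights = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma,
                                size=2 * field.size)
        field = np.repeat(field, 2) * weights
    return field

def moment_scaling(field, q_values=(1, 2, 3)):
    """Estimate the moment-scaling exponents K(q) of a cascade field.

    The field is coarse-grained by successive pairwise averaging; for a
    multifractal field the q-th moment behaves as a power law of the
    resolution, and K(q) is the log-log slope (nonlinear in q).
    """
    exponents = {}
    for q in q_values:
        resolutions, moments = [], []
        coarse = field.copy()
        while True:
            resolutions.append(coarse.size)
            moments.append(np.mean(coarse ** q))
            if coarse.size == 1:
                break
            coarse = 0.5 * (coarse[0::2] + coarse[1::2])
        exponents[q] = np.polyfit(np.log(resolutions), np.log(moments), 1)[0]
    return exponents

print(moment_scaling(multiplicative_cascade()))
```

The nonlinearity of the estimated K(q) in q is what distinguishes a multifractal cascade from a simple monofractal field.
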
2. Random dynamical systems and stochastic bifurcations

Along with mathematicians' interest in the effects of noise on dynamical systems, physicists have also paid increasing attention to noise effects in the laboratory and in models. The influence of noise on long-term dynamics often has puzzling nonlocal effects, and no general theory exists at the present time. In this context, L. Arnold and his "Bremen group" have introduced a highly novel and promising approach. Starting in the late 1980s, this group developed new concepts and tools that deal with very general dynamical systems coupled with stochastic processes. The rapidly growing field of random dynamical systems (RDS) provides key geometrical concepts that are clearly appropriate and useful in the context of stochastic modeling. This geometrically oriented approach uses ergodic and measure theory in an ingenious manner. Instead of dealing with a phase space S alone, it extends this notion to a probability bundle, the product of S with a probability space, where each fiber represents a realization of the noise. This external noise is parametrized by time through the so-called measure-preserving driving system. The driving system "glues" the fibers together so that a genuine notion of flow (cocycle) can be defined.

One of the difficulties, even in the case of deterministic nonautonomous forcing, is that it is no longer possible to define unambiguously a time-independent forward attractor. This difficulty is overcome by the notion of pullback attractor. Pullback attraction corresponds to the idea that measurements are performed at the present time t in an experiment that was started at some time s < t in the remote past, so that we can look at the "attracting invariant state" at time t. These well-defined geometrical objects generalize to systems with added randomness and are then called random attractors. Such a random invariant object represents the frozen statistics at time t when "enough" of the previous history is taken into account, and it evolves with time. In particular, it encodes dynamical phenomena related to synchronization and intermittency of random trajectories.

This recent theory presents several great mathematical challenges, and a more complete theory of stochastic bifurcations and normal forms is still under development. Indeed, one can define two different notions of bifurcation. Firstly, there is the notion of P-bifurcation (P for phenomenological), which, roughly speaking, corresponds to topological changes in the probability density function (PDF). Secondly, there is the notion of D-bifurcation (D for dynamical), where one considers a bifurcation in the Lyapunov spectrum associated with an invariant Markov measure. In other words, we look at a bifurcation of an invariant measure in much the same way as we look at the stability of a fixed point in a deterministic autonomous dynamical system. D-bifurcations are used to define the concept of stochastic robustness through the notion of stochastic equivalence. The two types of bifurcation may sometimes, but not always, be related, and the link between the two is unclear at the present time. The theory of stochastic normal forms is considerably richer than its deterministic counterpart, but it is still incomplete and more difficult to establish. Needless to say, bifurcation theory may be applied to partial differential equations (PDEs), but even proving the existence of a random attractor may turn out to be very difficult.
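As an illustration of pullback attraction, here is a minimal sketch assuming NumPy: the stochastic pitchfork-type equation dx = (a*x - x^3) dt + sigma dW is integrated by the Euler-Maruyama scheme from many different initial conditions fixed in the remote past, all driven by the same noise realization. The bundle collapses onto a single noise-dependent point at the present time, a simple example of a random attractor. The equation and all parameter values are illustrative and not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def pullback_snapshot(a=-0.5, sigma=0.3, t_start=-50.0, t_end=0.0, dt=1e-2):
    """Euler-Maruyama integration of dx = (a*x - x**3) dt + sigma dW.

    All trajectories share one fixed noise path (the 'driving system');
    only the initial condition at the remote starting time differs.
    The returned values approximate the random attractor at time t_end.
    """
    n_steps = int((t_end - t_start) / dt)
    noise = rng.normal(0.0, np.sqrt(dt), size=n_steps)   # one noise realization
    x = np.linspace(-2.0, 2.0, 21)                        # bundle of initial states
    for dW in noise:
        x = x + (a * x - x**3) * dt + sigma * dW
    return x

snapshot = pullback_snapshot()
print("spread of the bundle at t = 0:", snapshot.max() - snapshot.min())
print("attractor position (depends on the noise path):", snapshot.mean())
```

Rerunning with a different seed moves the attractor position: the invariant object is random, yet the collapse of the bundle is robust.
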
3. Phase transitions, emerging patterns and behavior

Phase transition is usually associated with the emergence of patterns and collective behavior, for instance due to the divergence of the correlation length. Beyond the classical example of glassy systems, these features have recently been observed in shear flows, where the transition from laminar flow to turbulence occurs discontinuously as the Reynolds number is gradually increased. In such a case, the order parameter is the volume fraction occupied by turbulence, which slowly organizes into a band pattern with a wavelength that is large with respect to any characteristic size of the system. A similar transition seems to occur in cortical dynamics when experimenters increase the forcing of the sensory flow, using spectral or informational measures as an order parameter. When subjected to simple visual input, neuronal processing is almost linear and population activity exhibits localized blob patterns. When subjected to more informational and realistic stimuli, neuronal processing appears to be highly nonlinear, integrating input over large spatial scales (center-surround interactions), and population patterns become more complex and spatially distributed. The present challenge is to build a simple stochastic model that accounts for the emerging structures generated by the dynamics and for their dependence on the forcing. A more fundamental long-term aim is to capture both glassy and turbulent flow dynamics within such a formalism.

A novel approach consists in considering a population of agents that have their own time dynamics and characterizing their collective behavior at different observation scales through gradual aggregation. The simplest way to aggregate agents is to sum an increasing number of them. When they are independent and identically distributed random variables, the law of large numbers and the central limit theorem apply and the resulting collective evolution is analogous to the individual one. The result does not change when the dependence is short range; this would be the equivalent of the laminar phase. As the spatial dependence becomes long range, the nature of the collective behavior changes (lower rate of convergence, different limit process). The same differences are observed when estimating the density of the law of the variables. By playing with the interaction range, one is therefore able to induce a phase transition. Another kind of transition is observable if one allows for nonlinear effects in the aggregation process: in that case the resulting process may be short-range or long-range dependent even if the dynamics of the individuals are simple (autoregressive, short-range dependence in space and time). A first task is to develop such aggregation methods for simple individual models and to investigate the joint effect of dependence and of the aggregation process. Examples of applications include geophysical problems, hydrology and hydrography, integrative biology and cognition.
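A minimal numerical sketch, assuming NumPy, of the aggregation experiment just described: the variance of the sum of zero-mean Gaussian agents grows linearly with their number when the agents are independent (the central-limit, "laminar" case), but faster than linearly when they are coupled through a long-range spatial covariance. The particular covariance (1 + |i - j|)^(-alpha) and the value alpha = 0.4 are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def aggregate_variance(n_agents, alpha=None, n_samples=500):
    """Variance of the sum of n_agents zero-mean Gaussian agents.

    alpha=None : independent agents (short-range case); the variance of
                 the sum grows linearly with n_agents, as in the CLT.
    alpha < 1  : agents coupled through the long-range spatial covariance
                 (1 + |i - j|)**(-alpha); the variance grows faster than
                 linearly, signalling a different limit behavior.
    """
    if alpha is None:
        cov = np.eye(n_agents)
    else:
        idx = np.arange(n_agents)
        cov = (1.0 + np.abs(idx[:, None] - idx[None, :])) ** (-alpha)
    samples = rng.multivariate_normal(np.zeros(n_agents), cov, size=n_samples)
    return samples.sum(axis=1).var()

for n in (100, 400, 1600):
    print(n,
          "independent:", round(aggregate_variance(n), 1),
          "long-range (alpha=0.4):", round(aggregate_variance(n, alpha=0.4), 1))
```

Comparing how the two columns grow as n is quadrupled shows the change of convergence rate that the text associates with a phase transition induced by the interaction range.
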

4. Space-time scaling in physics and biology

1) Empirical background

Systems displaying a hierarchy of structures over a wide range of time and space scales are ubiquitous in physics and biology. In the geosciences, 'Stommel diagrams' displaying the lifetime vs. size of structures (usually in log-log plots) span several orders of magnitude, but a satisfactory explanation of this state of affairs is still missing. In biology, metagenomics has recently been developed to explore microbial biodiversity and evolution, for instance by mining urban waste to improve our knowledge of the "tree of life", but the time structure is far from being reliably estimated. In computer and social networks, the web is the best-known example, but scale-invariant and small-world networks are encountered everywhere; here the temporal aspects have started to be explored, but their connection with the spatial structure requires further attention.

2) State of the art

a) Taylor's hypothesis of frozen turbulence (1935), also used in hydrology, is presumably the simplest transformation of time scaling into space scaling. It is obtained by assuming that the system is advected with a characteristic velocity (see the relations written out after this list).

b) In other cases, the connection between space and time scaling is less evident. As already pointed out, this is the case for computer networks: (space) network topology and (time) computer traffic have so far been studied separately. Morphogenesis is another research domain that requires the development of space-time scaling analysis.

c) More recently, the comparison of scaling in time vs. scaling in space has been used to determine a space-time anisotropy scaling exponent, also often called a dynamical exponent.
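The relations behind items a) and c) can be written compactly as below; the notation (U for the advection velocity, ell for the spatial separation, tau for the time lag, S_q for a structure function of order q, z for the dynamical exponent) is illustrative and not taken from the text.

```latex
% Taylor's hypothesis (item a): a time lag tau at a fixed sensor is read as
% a spatial separation ell advected past it at the characteristic velocity U,
% so temporal scaling is inherited from spatial scaling:
\[
  \ell = U\,\tau \qquad \Longrightarrow \qquad S_q(\tau) \propto S_q(\ell = U\tau).
\]
% Space-time anisotropy (item c): when Taylor's hypothesis does not hold,
% time and space scales are related by a nontrivial dynamical exponent z:
\[
  \tau(\ell) \sim \ell^{\,z}, \qquad z \neq 1 .
\]
```
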

3) What is at stake

a) Why do we need to achieve space-time analysis and modeling? Basically, there is no way to understand dynamics without both space and time. For instance, whereas earlier studies of chromosomes were performed only along 1D DNA positions, 4D scaling analysis is required to understand the connection between chromosome structure and the transcription process.

b) Data analysis. We need to further develop methodologies (a minimal analysis sketch follows this list):
- to perform joint time-space multiscale analysis, either for exploratory analysis or for parameter and uncertainty estimation,
- to extract information from heterogeneous and scarce data,
- to carry out 4D data assimilation taking better account of the multiscale variability of the observed fields,
- for dynamical models in data mining.

c) Modeling and simulations. We also need to further develop methodologies:
- to select the appropriate representation space (e.g. wavelets),
- to define parsimonious and efficient generators,
- to implement stochastic subgrid-scale parametrizations.
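As a very small illustration of item b) (joint time-space multiscale analysis), the following sketch, assuming NumPy, builds a synthetic space-time field by advecting a scaling spatial profile at constant speed (the frozen-turbulence situation of Taylor's hypothesis), computes first-order structure functions separately along the spatial and temporal axes, and compares the two log-log slopes; their ratio gives an empirical space-time anisotropy (dynamical) exponent. The synthetic field, the scaling exponent H = 0.4 and the advection speed are placeholders for real observations.

```python
import numpy as np

rng = np.random.default_rng(3)

def fractional_profile(n=1024, H=0.4):
    """Random-phase 1D profile with a power-law spectrum whose increments
    scale on average as separation**H."""
    k = np.fft.rfftfreq(n)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-(H + 0.5))           # spectral slope matching exponent H
    phases = np.exp(2j * np.pi * rng.random(k.size))
    profile = np.fft.irfft(amp * phases, n)
    return profile / profile.std()

def structure_exponent(field, axis, lags=(1, 2, 4, 8, 16, 32)):
    """Log-log slope of the first-order structure function along one axis."""
    s1 = []
    for lag in lags:
        a = np.take(field, np.arange(field.shape[axis] - lag), axis=axis)
        b = np.take(field, np.arange(lag, field.shape[axis]), axis=axis)
        s1.append(np.mean(np.abs(b - a)))
    return np.polyfit(np.log(lags), np.log(s1), 1)[0]

# Synthetic space-time field: a scaling spatial profile frozen and advected
# at speed U cells per time step, i.e. field[t, x] = profile[(x - U*t) mod n].
n, U = 1024, 3
profile = fractional_profile(n)
field = np.stack([np.roll(profile, U * t) for t in range(256)])   # (time, space)

H_space = structure_exponent(field, axis=1)
H_time = structure_exponent(field, axis=0)
print("spatial scaling exponent :", round(H_space, 2))
print("temporal scaling exponent:", round(H_time, 2))
print("empirical dynamical exponent z = H_space/H_time:",
      round(H_space / H_time, 2), "(close to 1 for a frozen, advected field)")
```

On real observations the two exponents need not coincide; a ratio departing from 1 is precisely the space-time anisotropy that items a) and c) of the state of the art discuss.
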